\section{Introduction}
\subsection{Motivation}
The bounded derived category of coherent sheaves $D^b(\mathrm{coh} Z)$ of a smooth projective variety $Z$ is one of its most important invariants. It determines the variety up to isomorphism when the (anti-)canonical line bundle is ample \cite{BO}. It is expected to play a fundamental role in the minimal model program and in connection with the homological mirror symmetry conjecture \cite{Or2}. There are several methods available for describing $D^b(\mathrm{coh} Z)$, some of which can (or are expected to) work only for special classes of varieties. The first method uses full exceptional collections (conjectured by Orlov to exist only for rational varieties), or more generally non-trivial semi-orthogonal decompositions (which do not exist when the canonical line bundle is trivial \cite{KO}). There is homological projective duality, a very powerful method for constructing semi-orthogonal decompositions, although explicit examples are hard to find \cite{Ku}. Then we have variation of GIT quotients, which applies when $Z$ can be identified with a GIT quotient $U/\hskip-2pt/_{\chi_+}G$, but requires restrictive symmetry properties of the Kempf-Ness stratifications associated with two linearizations $\chi_\pm$ of the $G$-action on $U$ \cite{BFK}, \cite{HLS}. On the other hand, equivalences between derived categories of smooth projective varieties can -- at least in principle -- always be described by suitable Fourier-Mukai functors \cite{Or3}. Finally there is geometric tilting theory \cite{HVdB}, the method we will use in this paper.
Many interesting varieties are defined as the zero locus $Z(s)$ of a section $s\in H^0(B,E^\smvee)$, where $B$ is a smooth projective variety, $E^\smvee\to B$ is a globally generated vector bundle, and $s$ is general.
We will develop a graded geometric tilting theory for the gauged Landau-Ginzburg model $s^\smvee:E\to {\mathbb C}$, where $E$ and ${\mathbb C}$ are endowed with the natural ${\mathbb C}^*$-actions. Here $s^\smvee$ is the potential associated with the section $s$, defined by $s^\smvee(e)=\langle s(b), e\rangle$ for $e\in E_b$.
The starting point of this approach is the Isik-Shipman theorem, which gives equivalences
$$D^b(\mathrm{coh}(Z(s)))\simeq D^\mathrm{gr}_{\mathrm{sg}} (Z(s^\smvee))\simeq D(\mathrm{coh}_{{\mathbb C}^*}E,\theta,s^\smvee).
$$
Here $ D^\mathrm{gr}_{\mathrm{sg}} (Z(s^\smvee))$ stands for the graded singularity category of the zero locus $Z(s^\smvee)$ of the potential, $\theta$ is the character $ \mathrm{id}_{{\mathbb C}^*}$, and $D(\mathrm{coh}_{{\mathbb C}^*}E,\theta,s^\smvee)$ denotes the derived factorization category associated with the 4-tuple $({\mathbb C}^*,E,\theta,s^\smvee)$ \cite{Hi1}.
\subsection{Results} Let $B$ be a smooth projective variety, $H$ a finite dimensional complex vector space, and $E$ a subbundle of the trivial vector bundle $B\times H^\smvee$. Under these assumptions $E$ fits in a commutative diagram
\begin{equation}\label{DiagE}
\begin{tikzcd}
E \ar[d, two heads, "\rho" '] \ar[r, hook] \ar[dr, "\vartheta"] & B\times H^\smvee \ar[d, two heads]\\
C(E) \ar[r, hook] & H^\smvee
\end{tikzcd}
\end{equation}
where $\vartheta$ is projective, $C(E)\coloneqq\mathrm{im}(\vartheta)$ is a subvariety of the affine space $H^\smvee$, and $\rho$ is induced by $\vartheta$. $C(E)$ coincides with the affine cone of the projective variety $S(E)\coloneqq \mathrm{im}({\mathbb P}(\vartheta))\subset{\mathbb P}(H^\smvee).$ We will assume
\vspace{2mm}\\
(A1) ${\mathcal T}_0$ is a locally free sheaf on $B$ generating $D(\mathrm{Qcoh} (B))$, and ${\mathcal T}\coloneqq \pi^*({\mathcal T}_0)$ is a tilting sheaf on $E$, where $\pi:E\to B$ is the bundle projection.
\vspace{2mm}
Define
$$
\Lambda\coloneqq \mathrm {End}_E({\mathcal T}),\ R\coloneqq {\mathbb C}[C(E)].
$$
Then
\begin{enumerate}[(i)]
\item $\Lambda$ is a graded Noetherian ${\mathbb C}$-algebra of finite graded global dimension, and the graded tilting functor
$$R\mathrm{Hom}^\mathrm{gr}_{\mathrm{Qcoh}_{{\mathbb C}^*} E} ({\mathcal T},-): D(\mathrm{Qcoh}_{{\mathbb C}^*} E)\to D(\mathrm{Mod}_{\mathbb Z}\Lambda)
$$
is an equivalence, which restricts to an equivalence
$$D^b(\mathrm{coh}_{{\mathbb C}^*}E)\textmap{\simeq} D^b(\mathrm{mod}_{\mathbb Z}\Lambda).
$$
\item A regular section $s\in H^0(B,E^\smvee)$ defines a central element $s\in \Lambda$, and ${\mathcal T}$ defines an equivalence
$${\mathcal T}_*: D^\mathrm{gr}_\mathrm{sg}(Z(s^\smvee))\textmap{\simeq} D^\mathrm{gr}_\mathrm{sg}(\Lambda / s\Lambda).
$$
Combining this result with the Isik-Shipman theorem, one obtains an equivalence
$$D^b(\mathrm{coh}(Z(s)))\textmap{\simeq} D^\mathrm{gr}_\mathrm{sg}(\Lambda / s\Lambda)
$$
which gives a tilting description of the category $D^b(\mathrm{coh}(Z(s)))$.
\item If we also assume
\vspace{2mm}\\
\hspace*{-10mm}
(A2) $B$ is connected, the cone $C(E)\subset H^\smvee $ is normal, and $\rho:E\to C(E)$ is birational with $\mathrm{codim}_E\big(\rho^{-1}(\mathrm{Exc}(\rho))\big)\geq 2$,
\vspace{2mm}\\
then $R=\bigoplus_{m\geq 0} H^0(B,S^m {\mathcal E}^\smvee) $, and $\Lambda\supset R$ is a graded non-commutative resolution of $R$. In several cases this resolution is crepant.
\end{enumerate}
\vspace{2mm}
The proof of part (i) is non-trivial: it uses a generalized Beilinson lemma (Lemma \ref{Schwede}), and an infinite family of generators for $D^b(\mathrm{coh}_{{\mathbb C}^*} E)$ (Theorem \ref{prop-old-l2-2}). Part (ii) is stated in Theorem
\ref{FourthEq}, which can be understood as a Baranovsky-Pecharich-type result \cite{BaPe} in the context of tilting theory. The proof uses the Isik-Shipman theorem and ideas of \cite[Theorem 5.1]{BDFIK} to identify $D^b(\mathrm{coh}(Z(s)))$ with a homotopy category of graded matrix factorizations $K(\mathrm{proj}_{\mathbb Z}\Lambda,\langle 1\rangle,s)$. In a second step we prove a version of Orlov's comparison theorem \cite[Theorem 3.10]{Or} to identify $K(\mathrm{proj}_{\mathbb Z}\Lambda,\langle 1\rangle,s)$ with the triangulated graded singularity category $D^\mathrm{gr}_\mathrm{sg}(\Lambda/\langle s\rangle)$. Our version of the comparison theorem is necessary since we have to deal with algebras $\Lambda$ which are not connected.
\vspace{2mm}
We will construct and study a large class of diagrams (\ref{DiagE}) satisfying (A1)-(A2) using geometric objects which we call (inspired by the terminology of the physicists) homogeneous algebraic geometric GLSM presentations. In our formalism a GLSM presentation of $(E\stackrel{\pi}{\to}B,s)$ is a 4-tuple $(G,U,F,\chi)$ where $G$ is a reductive Lie group, $U$, $F$ are finite dimensional $G$-representations, and $\chi:G\to {\mathbb C}^*$ is a character, such that the following conditions are satisfied (see Definition \ref{GLSMDef}):
\begin{enumerate}
\item The stable locus $U_\mathrm {st}^\chi$ coincides with the semistable locus $U_{\mathrm {ss}}^\chi$, and $G$ acts freely on $U_{\mathrm {ss}}^\chi$.
\item The base $B$ coincides with the quotient $U_{\mathrm {ss}}^\chi/G$, and the vector bundle $E$ is the $F$-bundle associated with the principal $G$-bundle $p:U_{\mathrm {ss}}^\chi\to B$ and the representation space $F$.
\item The $G$-equivariant map $\hat s:U_{\mathrm {ss}}^\chi\to F^\smvee$ corresponding to $s$ extends to a $G$-equivariant polynomial map $\sigma:U\to F^\smvee$.
\item Any optimal destabilizing 1-parameter subgroup $\xi$ of an unstable point $u\in U^\chi_\mathrm {us}$ acts with non-negative weights on $F$.
\end{enumerate}
In this situation a section $s\in H^0(B,E^\smvee)$ is induced by a covariant $\sigma: U\to F^\smvee$. The fourth condition implies $ U^\chi_\mathrm {ss}\times F=(U\times F)^\chi_\mathrm {ss}$, so the bundle $E=(U^\chi_\mathrm {ss}\times F)/G$ is the GIT quotient associated with the $G$-representation $W\coloneqq U\oplus F$ and the character $\chi$.
A GLSM presentation is homogeneous if there is a right action $U\times {\mathcal G}\to U$ by linear isomorphisms of a reductive group ${\mathcal G}$ which commutes with the fixed $G$-action, such that the induced ${\mathcal G}$-action on $B$ is transitive and, for a point $u_0\in U^\chi_\mathrm {ss}$, the obvious morphism ${\mathcal G}_{[u_0]}\to G$ is surjective (see section \ref{HomGLSMSect}). If this holds, the map
$$\rho: E\to C(E)
$$
is a Kempf collapsing, hence $C(E)$ is a normal Cohen-Macaulay variety. The map $\rho$ is birational iff $\dim(E)=\dim(C(E))$, and if this is the case, $C(E)$ has rational singularities.
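For orientation we recall the classical example of a Kempf collapsing; the notation here (with hypothetical integers $m$, $n$ and $k\leq \min(m,n)$) is independent of the conventions of this paper. Taking for $E$ the total space of $\mathrm{Hom}({\mathbb C}^m,T)$ over $\mathrm{Gr}_k({\mathbb C}^n)$, with $T$ the tautological subbundle, one has
```latex
\begin{equation*}
E \;=\; \bigl\{(A,W)\in \mathrm{Hom}({\mathbb C}^m,{\mathbb C}^n)\times \mathrm{Gr}_k({\mathbb C}^n)\ :\ \mathrm{im}(A)\subseteq W\bigr\}
\ \xrightarrow{\ \rho\ }\
C(E)\;=\;\bigl\{A\ :\ \mathrm{rk}(A)\le k\bigr\},
\end{equation*}
```
where $\rho$ forgets the subspace $W$. The image is the determinantal variety of rank $\leq k$ matrices, which is normal and Cohen-Macaulay, and $\rho$ is birational since over a matrix of rank exactly $k$ the subspace $W=\mathrm{im}(A)$ is unique.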
\subsection{Applications}
Consider the $G$-representation space $W\coloneqq U\oplus F$ associated with a homogeneous GLSM presentation. Then
\begin{enumerate}
\item $S(E)$ is projectively normal and arithmetically Cohen-Macaulay. If $\rho$ is birational, its affine cone $C(E)$ has rational singularities (see Corollary \ref{PropertiesS(E)} (i)).
\item Suppose that $\mathrm{codim}(U_\mathrm {us}^\chi)\geq 2$ and $\rho$ is birational. Then the cone $C(E)$ is naturally identified with the GIT quotient $\mathrm{Spec}({\mathbb C}[W]^G)$ (see Corollary \ref{PropertiesC(E)} (i)).
\end{enumerate}
Moreover, we give explicit criteria (see Lemma \ref{GorCritLm}) for testing, for a given homogeneous GLSM presentation, if the cone $C(E)$ (the projective variety $S(E)$) is (arithmetically) Gorenstein, and we check these criteria in several situations.
Next we identify an important class of GLSM presentations to which our general results apply. This is the class of 4-tuples $(G,U,F,\chi)$ with $G=\mathrm {GL}(Z)$ for a $k$-dimensional complex vector space $Z$, $U=\mathrm{Hom}(V,Z)$ with $V$ a complex vector space of dimension $N$ and $F^\smvee$ a finite dimensional polynomial representation of $\mathrm {GL}(Z)$. The character $\chi$ is chosen such that $U^\chi_\mathrm {ss}/\mathrm {GL}(Z)$ becomes the Grassmannian $\mathrm{Gr}_k(V^\smvee)$. In many cases it is possible to go one step further, and to give a purely algebraic description of the quotient $\Lambda/ s\Lambda$ in terms of the initial data $(V,k,\lambda,\sigma)$. In order to do this, we use the results of section 1.5 to obtain the identification $\Lambda=\mathrm {End}_R(M)$ (see section \ref{sec:2-4-2}).
Then we identify $R$ with a quotient $S^\bullet(S^\lambda V)/I_k$ following Porras \cite{Po}, and we describe $M$ as the image of a morphism between free graded $R$-modules (see Proposition \ref{prop:2-13}). When certain conditions are satisfied, our final result takes the following form (Theorem \ref{th:2-14}):
$$D^b(\mathrm{coh} Z(s))\simeq D^{\mathrm{gr}}_{\mathrm{sg}}\big(\mathrm {End}_{S/I_k}\big(\bigoplus_{\alpha\in P(k,n-k)}\mathrm{Im}(S^\alpha(\varphi^\smvee)\otimes (S/I_k))\big)\big/\langle\bar\sigma\rangle\big).
$$
This gives a purely algebraic description of $D^b(\mathrm{coh} Z(s))$ in terms of the initial data $(V,k,\lambda,\sigma)$.
\subsection{Examples}
We apply our general formalism and our results to the gauged Landau-Ginzburg models associated with the following varieties (described as zero loci of regular sections):
\begin{enumerate}[1.]
\item Complete intersections. In this case $B={\mathbb P}(V^\smvee)$ (for a complex vector space $V$ of dimension $N>1$), $E$ is the total space of the direct sum $\bigoplus_{i=1}^r {\mathcal O}(-d_i)$ (where $d_i>0$) and $s$ is a general element in
$$H^0\big(\bigoplus_{i=1}^r {\mathcal O}_{{\mathbb P}(V^\smvee)}(d_i)\big)=\bigoplus_{i=1}^r S^{d_i}V.$$
\item Isotropic Grassmannians.
Let $V$ be a complex vector space of even dimension $N=2n$, and $k$ a positive even integer with $k\leq n$. Let $\omega\in {\extpw}\hspace{-2pt}^2V$ be a symplectic form on $V^\smvee$. The isotropic Grassmannian $\mathrm{Gr}_k^\omega(V^\smvee)\subset \mathrm{Gr}_k(V^\smvee)$ is the submanifold of $k$-dimensional isotropic subspaces of $(V^\smvee,\omega)$.
Denoting by $T$ the tautological $k$-bundle of $\mathrm{Gr}_k(V^\smvee)$, the form $\omega$ defines a section $s_\omega\in \Gamma(\mathrm{Gr}_k(V^\smvee),{\extpw}\hspace{-2pt}^2 T^\smvee)$ which is transversal to the zero section, and whose zero locus is $\mathrm{Gr}_k^\omega(V^\smvee)$. %
Similarly, let $q\in S^2V$ be a non-degenerate quadratic form on $V^\smvee$, and choose $k \leq N/2$. The isotropic Grassmannian $\mathrm{Gr}_k^q(V^\smvee)\subset \mathrm{Gr}_k(V^\smvee)$ is the submanifold of $k$-dimensional isotropic subspaces of $(V^\smvee,q)$.
The form $q$ defines a section $s_q\in \Gamma(\mathrm{Gr}_k(V^\smvee),S^2 T^\smvee)$ which is transversal to the zero section, and whose zero locus is $\mathrm{Gr}_k^q(V^\smvee)$.
\vspace{2mm}
\item Beauville-Donagi IHS 4-folds.
With the same notation as above we take $E=S^3 T$ on the Grassmannian $\mathrm{Gr}_2(V^\smvee)$. The Beauville-Donagi IHS 4-folds are obtained in the special case $\dim(V)=6$.
\end{enumerate}
\vspace{3mm}
In all these cases the tilting bundle ${\mathcal T}_0$ is the direct sum of the sheaves of a full strongly exceptional collection. In the first case we choose the standard Beilinson collection, and in the other cases (when the base is a Grassmannian) we use the Kapranov collection.
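Concretely, these are the standard collections (recalled here for the reader's convenience; the precise twist conventions are as in the body of the paper):
```latex
{\mathcal T}_0 \;=\; \bigoplus_{i=0}^{N-1} {\mathcal O}_{{\mathbb P}(V^\smvee)}(i)
\qquad\text{resp.}\qquad
{\mathcal T}_0 \;=\; \bigoplus_{\alpha \in P(k,N-k)} S^{\alpha} T^{\smvee},
```
where, in the Grassmannian case, $T$ is the tautological $k$-bundle, $S^\alpha$ denotes the Schur functor associated with the partition $\alpha$, and $P(k,N-k)$ is the set of partitions with at most $k$ parts, each part at most $N-k$.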
\subsection{Acknowledgments}
This article builds on and combines fundamental contributions of many mathematicians, notably of Ballard - Deliu - Favero - Isik - Katzarkov \cite{BDFIK}, Bondal - Orlov \cite{BO}, Buchweitz - Leuschke - Van den Bergh \cite{BLVdB2}, Kempf \cite{Ke}, and Orlov \cite{Or}.
We are very grateful to Alexei Bondal for his important remarks at the beginning of this project. We also thank Andrew Kresch for his interest and pointers to the literature, and Greg Stevenson for a useful e-mail exchange.
\newpage
\section{Graded tilting for gauged Landau-Ginzburg models}\label{AlgSection}
\subsection{The Landau-Ginzburg model of a section}\label{LGmodels}
Let $B$ be a smooth complex variety, $\pi:E\to B$ a rank $r$ vector bundle on $B$, and $s\in\Gamma(E^\smvee)$ a section in its dual. The zero locus $Z(s)$ is a local complete intersection of codimension $r$ when $s$ is regular.
If, moreover, $s$ is transversal to the zero section then $Z(s)$ is a smooth submanifold of codimension $r$.
\begin{dt} Let $s\in\Gamma(E^\smvee)$ be a section. The potential associated with $s$ is the map $s^\smvee:E\to{\mathbb C}$ defined by
$$s^\smvee(y)\coloneqq \langle s(x),y\rangle\ ,\ \forall x\in B, \forall y\in E_x.
$$
\end{dt}
\def\mathrm{Crit}{\mathrm{Crit}}
Let $x\in B$. For a suitable open neighborhood $U$ of $x$ identify the bundles $E_U$, $E^\smvee_U$ with $U\times{\mathbb C}^r$, $U\times{{\mathbb C}^r}^\smvee$ respectively, using mutually dual trivializations $\theta$, $\theta^\smvee$. Denote by $s_\theta:U\to{{\mathbb C}^r}^\smvee$ the map corresponding to $s$ via $\theta^\smvee$. For any $y\in E_x$ and any tangent vector $(\dot x,\dot z)\in T_y(E)=T_x(B)\times{{\mathbb C}^r}$ one has
$$d_y s^\smvee(\dot x,\dot z)=\langle s_\theta(x),\dot z\rangle+ \langle d_xs_{\theta}(\dot x),y\rangle.
$$
This formula shows that $\mathrm{Crit}(s^\smvee)\subset \pi^{-1}(Z(s))$. When $x\in Z(s)$, the differential of $s^\smvee$ at a point $y\in E_x$ can be written in an invariant way:
$$d_y s^\smvee (\dot y)=\langle D_x s(\pi_*(\dot y)), y\rangle \ \forall \dot y\in T_y(E),
$$
where $D_x s:T_x B\to E_x^\smvee$ stands for the intrinsic derivative of $s$ at $x$. This shows that the critical locus $\mathrm{Crit}(s^\smvee)$ of $s^\smvee$ is
\begin{equation}\label{crit}
\mathrm{Crit}(s^\smvee)=\mathop{\bigcup}_{x\in Z(s)} \bigg\{\qmod{E_x^\smvee}{\mathrm{im}(D_xs)}\bigg\}^\smvee.
\end{equation}
In particular
\begin{re}\label{transv}
If $s$ is transversal to the zero section, then $\mathrm{Crit}(s^\smvee)$ coincides as a variety with the image of $Z(s)$ via the zero section ${\scriptstyle{\cal O}}:B\to E$, so it can be identified with $Z(s)$ via $\pi$.
\end{re}
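This remark can be sanity-checked symbolically in a local chart. The snippet below is an illustrative aside, not part of the paper's formalism: the chart, the equation $f=x_1^2+x_2^2-1$ cutting out $Z(s)$, and the trivial line bundle are all hypothetical choices. In this trivialization the potential is $s^\smvee(x,y)=y\,f(x)$, and its gradient vanishes along the zero section over $Z(s)$ but not elsewhere over $Z(s)$.

```python
import sympy as sp

x1, x2, y = sp.symbols('x1 x2 y')
f = x1**2 + x2**2 - 1          # local equation of Z(s); df is nonzero along Z(s)
w = y * f                       # the potential s^vee in this trivialization

# gradient of w: (2*x1*y, 2*x2*y, f)
grad = [sp.diff(w, v) for v in (x1, x2, y)]

# Point on the zero section over Z(s): (x1, x2, y) = (1, 0, 0).
# Remark transv predicts this is a critical point.
at_crit = [g.subs({x1: 1, x2: 0, y: 0}) for g in grad]

# Point over Z(s) but off the zero section: (1, 0, 3).
# The fibre direction is critical only where df vanishes, so this is not critical.
off_crit = [g.subs({x1: 1, x2: 0, y: 3}) for g in grad]

print(at_crit, off_crit)
```

The first point annihilates all three partial derivatives, while at the second the term $2x_1 y$ survives, matching formula (\ref{crit}) in this chart.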
We refer to \cite{Or2} and \cite{Hi2} for the following fundamental definition:
\begin{dt} A gauged Landau-Ginzburg model is a 4-tuple $(G,X,\kappa,w)$, where $G$ is an algebraic group, $\kappa\in\mathrm{Hom}(G,{\mathbb C}^*)$ is a character, $X$ is a smooth $G$-variety, and $w:X\to {\mathbb C}$ is a $\kappa$-equivariant regular function on $X$, called the potential of the model.
Let $\pi:E\to B$ be a rank $r$ vector bundle on $B$, and $s\in\Gamma(E^\smvee)$.
The 4-tuple $({\mathbb C}^*, E, \mathrm{id}_{{\mathbb C}^*}, s^\smvee)$, where $E$ is endowed with the fibrewise scaling ${\mathbb C}^*$-action, will be called the gauged Landau-Ginzburg model associated with $(E\to B,s)$.
\end{dt}
This class of gauged Landau-Ginzburg models will play a fundamental role in this article.
\subsection{\texorpdfstring{${\mathbb C}^*$}{str1}-equivariant derived categories of vector bundles}
Let $B$ be a smooth projective scheme, $H$ a finite dimensional complex vector space, and let
$$
\begin{tikzcd}
E \ar[r, hook] \ar[dr, two heads, "\pi"'] & B\times H^\smvee\ar[d, two heads]\\
& B
\end{tikzcd}
$$
be a sub-vector bundle of the trivial vector bundle $B\times H^\smvee$. This implies that $E$ is projective over $H^\smvee$, in particular it belongs to the class of varieties to which geometric tilting applies (see \cite[Theorem 7.6]{HVdB}, \cite[sections 1.8, 1.9]{BH}).
\begin{pr} \label{prop-old-l2-1} Let ${\mathcal T}_0$ be a coherent sheaf on $B$ which generates $D(\mathrm{Qcoh} B)$. Then ${\mathcal T}\coloneqq \pi^*({\mathcal T}_0)$ generates $D(\mathrm{Qcoh} E)$.
\end{pr}
\begin{proof}
Since $\pi$ is affine, it follows that the functor
$$\pi_*:\mathrm{Qcoh} E\to \mathrm{Qcoh} B
$$
(\cite[Lemma 25.24.1]{Stack}) is exact. Its left adjoint functor is
$$\pi^*:\mathrm{Qcoh} B\to \mathrm{Qcoh} E
$$
(\cite[Lemmas 6.26.2, 17.10.4]{Stack}), and this functor is also exact because $\pi$ is flat (\cite[Lemma 28.11.6]{Stack}). Since the functors $\pi_*$, $\pi^*$ are exact, they induce well defined functors
$$D(\pi_*): D(\mathrm{Qcoh} E)\to D(\mathrm{Qcoh} B),\ D(\pi^*):D(\mathrm{Qcoh} B)\to D(\mathrm{Qcoh} E)
$$
which act on complexes componentwise, and are right and left derived functors of $\pi_*$, respectively $\pi^*$ \cite[p. 75]{BDG}. Using \cite[Lemma 13.28.5]{Stack} it follows that $D(\pi_*)$ is a right adjoint for $D(\pi^*)$. Therefore for any two objects ${\mathcal M}$, ${\mathcal N}$ in $D(\mathrm{Qcoh} B)$, $D(\mathrm{Qcoh} E)$ respectively one has an identification
\begin{equation}\label{adj}
\mathrm{Hom}_{D(\mathrm{Qcoh} E)}(D(\pi^*)({\mathcal M}),{\mathcal N})=\mathrm{Hom}_{D(\mathrm{Qcoh} B)}({\mathcal M},D(\pi_*)({\mathcal N})).
\end{equation}
Let now ${\mathcal N}$ be a complex of quasi-coherent sheaves on $E$ such that
$$\mathrm{Hom}_{D(\mathrm{Qcoh} E)}({\mathcal T}, {\mathcal N}[m])=0\ \forall m\in{\mathbb Z}.
$$
By (\ref{adj}) we obtain
$$\mathrm{Hom}_{D(\mathrm{Qcoh} B)}({\mathcal T}_0,D(\pi_*)({\mathcal N})[m])=0 \ \forall m\in{\mathbb Z},
$$
so that $D(\pi_*)({\mathcal N})=0$ in $D(\mathrm{Qcoh} B)$, because ${\mathcal T}_0$ generates $D(\mathrm{Qcoh} B)$. Therefore $D(\pi_*)({\mathcal N})$ is an acyclic complex. Using \cite[Lemma 28.11.6]{Stack} it follows that ${\mathcal N}$ is an acyclic complex, too, hence ${\mathcal N}=0$ in $D(\mathrm{Qcoh} E)$.
\end{proof}
\begin{co}\label{co3}
Let ${\mathcal T}_0$ be a locally free coherent sheaf on $B$ which classically generates $D^b(\mathrm{coh} B)$. Then
\begin{enumerate}
\item ${\mathcal T}_0$ generates $D(\mathrm{Qcoh} B)$.
\item $\pi^*({\mathcal T}_0)$ classically generates $D^b(\mathrm{coh} E)$.
\item The pull-back $\pi^*({\mathcal T}_0)$ is a tilting object of $D(\mathrm{Qcoh} E)$ if and only if
$$H^i\big(B,{\mathcal T}_0^\smvee\otimes {\mathcal T}_0\otimes (\bigoplus_{m\geq 0} S^m {\mathcal E}^\smvee)\big)=0 \ \forall i>0.
$$
\end{enumerate}
\end{co}
\begin{proof} (1) $D(\mathrm{Qcoh} B)$ is compactly generated, and $D(\mathrm{Qcoh} B)^c$ is equivalent to $D^b(\mathrm{coh} B)$ because $B$ is smooth. The claim follows from Ravenel-Neeman's Theorem (see \cite[Theorem 2.1.2]{BVdB}, \cite[section 1.4]{BH}).
\\ \\
(2)
The sheaf $\pi^*({\mathcal T}_0)$ generates $D(\mathrm{Qcoh} E)$. Since the composition
$$ E\hookrightarrow B\times H^\smvee \to H^\smvee
$$
is a projective morphism, $D(\mathrm{Qcoh} E)$ is compactly generated. But $\pi^*({\mathcal T}_0)$ is an object in $D_\mathrm{Perf}(\mathrm{Qcoh} E)=D(\mathrm{Qcoh} E)^c$ which generates $D(\mathrm{Qcoh} E)$, hence it classically generates $D(\mathrm{Qcoh} E)^c\simeq D^b(\mathrm{coh} E)$ because $E$ is smooth.
\\ \\
(3) ${\mathcal T}$ is a compact generator of $D(\mathrm{Qcoh} E)$, and
$$\mathrm{Ext}^i({\mathcal T},{\mathcal T})=H^i(E,\pi^*({\mathcal T}_0^\smvee\otimes {\mathcal T}_0))=H^i(B,{\mathcal T}_0^\smvee\otimes {\mathcal T}_0\otimes \pi_*({\mathcal O}_E))$$
$$=H^i\big(B,{\mathcal T}_0^\smvee\otimes {\mathcal T}_0\otimes (\bigoplus_{m\geq 0} S^m {\mathcal E}^\smvee)\big)=0\ \forall i>0.
$$
\end{proof}
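As a minimal illustration of criterion (3) (our own example, under stated assumptions): take $B={\mathbb P}^1$, ${\mathcal T}_0={\mathcal O}\oplus{\mathcal O}(1)$ (the Beilinson collection) and $E$ the total space of ${\mathcal O}(-d)$ with $d>0$, so that ${\mathcal E}^\smvee={\mathcal O}(d)$ and ${\mathcal T}_0^\smvee\otimes{\mathcal T}_0={\mathcal O}(-1)\oplus{\mathcal O}^{\oplus 2}\oplus{\mathcal O}(1)$. The criterion then reads
```latex
H^i\bigl({\mathbb P}^1,\,{\mathcal O}(j+md)\bigr)=0
\qquad \forall\, i>0,\ m\geq 0,\ j\in\{-1,0,1\},
```
which holds because $j+md\geq -1$ and $H^1({\mathbb P}^1,{\mathcal O}(a))=0$ whenever $a\geq -1$; hence ${\mathcal T}=\pi^*({\mathcal T}_0)$ is a tilting object of $D(\mathrm{Qcoh} E)$ in this case.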
\begin{lm}\label{newLm}
The canonical functor $D^b(\mathrm{coh}_{{\mathbb C}^*} E)\to D(\mathrm{Qcoh}_{{\mathbb C}^*} E)$ defines an equivalence between $D^b(\mathrm{coh}_{{\mathbb C}^*} E)$ and the full subcategory of $D(\mathrm{Qcoh}_{{\mathbb C}^*} E)$ of complexes with bounded, coherent cohomology.
\end{lm}
\def\mathrm{bd-coh}{\mathrm{bd-coh}}
\def\mathrm{bd}{\mathrm{bd}}
\begin{proof}
The category of ${\mathbb C}^*$-equivariant quasi-coherent sheaves on $E$ can be identified with the category of quasi-coherent sheaves on the quotient stack $[E/{\mathbb C}^*]$ \cite{Tho}. Combining this identification with \cite[Corollary 2.11 p. 10]{ArBe} we see that the canonical functor
$$D^b(\mathrm{coh}_{{\mathbb C}^*} E)\to D^b(\mathrm{Qcoh}_{{\mathbb C}^*} E)$$
gives an equivalence
$$D^b(\mathrm{coh}_{{\mathbb C}^*} E)\to D^b_\mathrm{coh}(\mathrm{Qcoh}_{{\mathbb C}^*} E)
$$
with the full subcategory $D^b_\mathrm{coh}(\mathrm{Qcoh}_{{\mathbb C}^*} E)$ of $ D^b(\mathrm{Qcoh}_{{\mathbb C}^*} E)$ consisting of objects with coherent cohomology.
On the other hand, \cite[Lemma 11.7, p. 15]{Kell} gives an equivalence
$$U:D^b(\mathrm{Qcoh}_{{\mathbb C}^*} E)\to D_{\mathrm{bd}}(\mathrm{Qcoh}_{{\mathbb C}^*} E)
$$
from $D^b(\mathrm{Qcoh}_{{\mathbb C}^*} E)$ to the full subcategory $D_{\mathrm{bd}}(\mathrm{Qcoh}_{{\mathbb C}^*} E)$ of $D(\mathrm{Qcoh}_{{\mathbb C}^*} E)$ consisting of objects with bounded cohomology.
Since cohomology is invariant under isomorphisms in the derived category, this equivalence restricts to an equivalence $D^b_\mathrm{coh}(\mathrm{Qcoh}_{{\mathbb C}^*} E)\to D_{\mathrm{bd-coh}}(\mathrm{Qcoh}_{{\mathbb C}^*} E)$ between $D^b_\mathrm{coh}(\mathrm{Qcoh}_{{\mathbb C}^*} E)$ and the full subcategory of $D_{\mathrm{bd}}(\mathrm{Qcoh}_{{\mathbb C}^*} E)$ consisting of objects with coherent cohomology:
$$
\begin{tikzcd}[column sep=6mm]
& D_\mathrm{bd-coh} (\mathrm{Qcoh}_{{\mathbb C}^*}E) \ar[r, hook] & D_\mathrm{bd} (\mathrm{Qcoh}_{{\mathbb C}^*}E)\ar[r,hook]&D (\mathrm{Qcoh}_{{\mathbb C}^*}E)\\
D^b(\mathrm{coh}_{{\mathbb C}^*}E)\ar[r, "\simeq"] &D_\mathrm{coh}^b (\mathrm{Qcoh}_{{\mathbb C}^*}E)\ar[r, hook] \ar[u, "U_\mathrm{coh}", "\simeq"' ] & D^b (\mathrm{Qcoh}_{{\mathbb C}^*}E) \ar[u, "\simeq", "U"']\,.&
\end{tikzcd}
$$
\end{proof}
For a locally free coherent sheaf ${\mathcal F}_0$ on $B$ we denote by ${\mathcal F}$ its pull-back to $E$, regarded as a ${\mathbb C}^*$-equivariant sheaf in the obvious way, and by ${\mathcal F}\langle k\rangle$ the ${\mathbb C}^*$-sheaf on $E$ obtained from ${\mathcal F}$ by twisting with the character $z\mapsto z^k$.
\begin{thry} \label{prop-old-l2-2} Let ${\mathcal T}_0$ be a coherent sheaf on $B$ which generates $D(\mathrm{Qcoh} B)$. Put ${\mathcal T}\coloneqq \pi^*({\mathcal T}_0)$. The family $({\mathcal T}\langle k\rangle)_{k\in{\mathbb Z}}$ generates $D(\mathrm{Qcoh}_{{\mathbb C}^*} E)$, and classically generates $D^b(\mathrm{coh}_{{\mathbb C}^*} E)$.
\end{thry}
\begin{proof} Let ${\mathcal H}$ be a ${\mathbb C}^*$-equivariant quasi-coherent sheaf on $B$, where $B$ is endowed with the trivial ${\mathbb C}^*$-action. For any open affine subscheme $U\subset B$ we obtain a ${\mathbb C}[U]$-group scheme ${\mathbb C}^*_U\coloneqq U\times {\mathbb C}^*$ in the sense of \cite[section I.2.1, p. 19]{Ja}. Using \cite[section 1.2, p. 241]{Tho} we see that the ${\mathbb C}[U]$-module ${\mathcal H}(U)$ becomes a ${\mathbb C}^*_U$-module in the sense of \cite[section I.2.7, p. 19]{Ja}. By \cite[section I.2.11, p. 30]{Ja} we obtain a decomposition
$${\mathcal H}(U)=\bigoplus_{\lambda \in{\mathbb Z}} {\mathcal H}(U)_\lambda
$$
of ${\mathcal H}(U)$ as direct sum of ${\mathbb C}[U]$-submodules, each ${\mathcal H}(U)_\lambda$ being the submodule of ${\mathcal H}(U)$ on which ${\mathbb C}^*$ acts with weight $\lambda$. Therefore we have a global direct sum decomposition
\begin{equation}
{\mathcal H}=\bigoplus_{\lambda \in{\mathbb Z}} {\mathcal H}_\lambda
\end{equation}
of ${\mathcal H}$ as direct sum of quasi-coherent subsheaves. Note that this weight decomposition holds for an arbitrary quasi-coherent sheaf on $B$ (coherence is not necessary).
Let now ${\mathcal F}$ be a ${\mathbb C}^*$-equivariant quasi-coherent sheaf on $E$. The corresponding decomposition
\begin{equation}\label{decweights}
\pi_*({\mathcal F})=\bigoplus_{\lambda \in{\mathbb Z}} \pi_*({\mathcal F})_\lambda
\end{equation}
combined with \cite[Lemma 28.11.6]{Stack} shows that the functor $\pi_*$ is an equivalence between the category $\mathrm{Qcoh}_{{\mathbb C}^*} E$ and the category of ${\mathbb Z}$-graded quasi-coherent $S^\bullet{\mathcal E}^\smvee$-modules on $B$.
Let
$$\pi^{*0}:\mathrm{Qcoh} B\to \mathrm{Qcoh}_{{\mathbb C}^*} E
$$
be the functor obtained by endowing the pull-back $\pi^*({\mathcal H})$ of a quasi-coherent sheaf on $B$ with its obvious ${\mathbb C}^*$-structure. Its right adjoint is the functor
$$\pi_{*0}:\mathrm{Qcoh}_{{\mathbb C}^*}E\to \mathrm{Qcoh} B
$$
given by ${\mathcal F}\mapsto \pi_*({\mathcal F})_0=\pi_*({\mathcal F})^{{\mathbb C}^*}$, so for any quasi-coherent sheaf ${\mathcal H}$ on $B$ and ${\mathbb C}^*$-equivariant quasi-coherent sheaf ${\mathcal F}$ on $E$ we have an identification
$$\mathrm{Hom}_{\mathrm{Qcoh}_{{\mathbb C}^*}E}(\pi^{*0}({\mathcal H}),{\mathcal F})=\mathrm{Hom}_{\mathrm{Qcoh} B}({\mathcal H},\pi_{*0}({\mathcal F})).
$$
For $k\in{\mathbb Z}$ we get an identification
$$\mathrm{Hom}_{\mathrm{Qcoh}_{{\mathbb C}^*}E}(\pi^{*0}({\mathcal H})\langle k\rangle,{\mathcal F})=\mathrm{Hom}_{\mathrm{Qcoh}_{{\mathbb C}^*}E}(\pi^{*0}({\mathcal H}),{\mathcal F}\langle -k\rangle)$$
$$=\mathrm{Hom}_{\mathrm{Qcoh} B}({\mathcal H},\pi_{*0}({\mathcal F}\langle -k\rangle))=\mathrm{Hom}_{\mathrm{Qcoh} B}({\mathcal H}, \pi_*({\mathcal F}\langle -k\rangle)^{{\mathbb C}^*})$$
$$=\mathrm{Hom}_{\mathrm{Qcoh} B}({\mathcal H}, \pi_*({\mathcal F})_k),
$$
which shows that the functor
$$\pi_{*k}:\mathrm{Qcoh}_{{\mathbb C}^*}E\to \mathrm{Qcoh} B
$$
given by ${\mathcal F}\mapsto \pi_*({\mathcal F})_k$ is the right adjoint of the composition
$$\pi^{*k}\coloneqq \langle k\rangle\circ \pi^{*0}:\mathrm{Qcoh} B\to \mathrm{Qcoh}_{{\mathbb C}^*} E.$$
The functors $\pi_{*k}$, $\pi^{*k}$ are exact, because $\pi^*$, $\pi_*$ are exact. As in the proof of (\ref{adj}) we obtain well-defined, mutually adjoint functors
$$D(\pi_{*k}):D(\mathrm{Qcoh}_{{\mathbb C}^*}E)\to D(\mathrm{Qcoh} B),\ D(\pi^{*k}):D(\mathrm{Qcoh} B)\to D(\mathrm{Qcoh}_{{\mathbb C}^*} E).
$$
Therefore, for any objects ${\mathcal M}$, ${\mathcal N}$ of $D(\mathrm{Qcoh} B)$, $D(\mathrm{Qcoh}_{{\mathbb C}^*} E)$ respectively we have an identification
\begin{equation}\label{adjk}
\mathrm{Hom}_{D(\mathrm{Qcoh}_{{\mathbb C}^*}E)}((D\pi^{*k})({\mathcal M}),{\mathcal N})=\mathrm{Hom}_{D(\mathrm{Qcoh} B)}({\mathcal M},D(\pi_{*k}){\mathcal N}).
\end{equation}
Let ${\mathcal N}$ be an object of $D(\mathrm{Qcoh}_{{\mathbb C}^*} E)$ such that
%
$$\mathrm{Hom}_{D(\mathrm{Qcoh}_{{\mathbb C}^*} E)}({\mathcal T}\langle k\rangle, {\mathcal N}[m])=0\ \forall (k,m)\in{\mathbb Z}\times{\mathbb Z}.
$$
Using (\ref{adjk}) we obtain
$$\mathrm{Hom}_{D(\mathrm{Qcoh} B)}({\mathcal T}_0, D(\pi_{*k})({\mathcal N}[m]))=0\ \forall (k,m)\in{\mathbb Z}\times{\mathbb Z}.
$$
Therefore
$$\mathrm{Hom}_{D(\mathrm{Qcoh} B)}\big({\mathcal T}_0, D(\pi_{*k})({\mathcal N})[m]\big)=0\ \forall (k,m)\in{\mathbb Z}\times{\mathbb Z}.
$$
Since ${\mathcal T}_0$ generates $D(\mathrm{Qcoh} B)$, it follows that $D(\pi_{*k})({\mathcal N})=0$ in $D(\mathrm{Qcoh} B)$ for any $k\in{\mathbb Z}$. Therefore the complex $D(\pi_{*k})({\mathcal N})$ is acyclic for any $k\in{\mathbb Z}$. Applying (\ref{decweights}) to the terms of the complex $D(\pi_*)({\mathcal N})$ and taking into account that cohomology commutes with direct sums, it follows that $D(\pi_*)({\mathcal N})$ is an acyclic complex. Thus the complex ${\mathcal N}$ is acyclic, so ${\mathcal N}=0$ in $D(\mathrm{Qcoh}_{{\mathbb C}^*} E)$.\\
In order to prove that $({\mathcal T}\langle k\rangle)_{k\in{\mathbb Z}}$ classically generates $D^b(\mathrm{coh}_{{\mathbb C}^*} E)$ note first that ${\mathcal T}\langle k\rangle$ is a compact object of $D(\mathrm{Qcoh}_{{\mathbb C}^*} E)$ for any $k\in{\mathbb Z}$. Therefore $D(\mathrm{Qcoh}_{{\mathbb C}^*} E)$ is compactly generated by the family $({\mathcal T}\langle k\rangle)_{k\in{\mathbb Z}}$. By Ravenel-Neeman's Theorem \cite[2.1.2]{BVdB} it follows that $({\mathcal T}\langle k\rangle)_{k\in{\mathbb Z}}$ classically generates $D(\mathrm{Qcoh}_{{\mathbb C}^*} E)^c$. We claim that $D(\mathrm{Qcoh}_{{\mathbb C}^*} E)^c$ coincides with the full subcategory %
$${\mathcal I}\coloneqq D_\mathrm{bd-coh} (\mathrm{Qcoh}_{{\mathbb C}^*}E)$$
of $D(\mathrm{Qcoh}_{{\mathbb C}^*} E)$ introduced in the proof of Lemma \ref{newLm}. According to this lemma, ${\mathcal I}$ consists of the complexes which are isomorphic (in $D(\mathrm{Qcoh}_{{\mathbb C}^*} E)$) to a bounded complex of coherent ${\mathbb C}^*$-equivariant sheaves. Since any object ${\mathcal F}$ in $\mathrm{coh}_{{\mathbb C}^*} E$ obviously defines a compact object of $D(\mathrm{Qcoh}_{{\mathbb C}^*} E)$, it follows that $D(\mathrm{Qcoh}_{{\mathbb C}^*} E)^c$ contains the full triangulated subcategory generated by $\mathrm{coh}_{{\mathbb C}^*} E$, so $D(\mathrm{Qcoh}_{{\mathbb C}^*} E)^c\supset{\mathcal I}$. On the other hand $D(\mathrm{Qcoh}_{{\mathbb C}^*} E)^c$ is classically generated by the family $({\mathcal T}\langle k\rangle)_{k\in{\mathbb Z}}$. Since ${\mathcal I}$ is a thick subcategory of $D(\mathrm{Qcoh}_{{\mathbb C}^*} E)$ which contains these objects, it follows that ${\mathcal I}\supset D(\mathrm{Qcoh}_{{\mathbb C}^*} E)^c$. Therefore ${\mathcal I}=D(\mathrm{Qcoh}_{{\mathbb C}^*} E)^c$ and the last assertion of the theorem follows from Lemma \ref{newLm}.
\end{proof}
\subsection{Graded tilting on vector bundles}\label{TiltingIH}
We will need the following generalized Beilinson lemma:
\begin{lm}\label{Schwede} Let ${\mathcal C}$, ${\mathcal D}$ be triangulated categories with arbitrary set-indexed coproducts, let $F:{\mathcal C}\to {\mathcal D}$ be an exact functor which commutes with set-indexed coproducts, and let $(T_j)_{j\in J}$ be a family of compact generators of ${\mathcal C}$ such that
\begin{enumerate}
\item $(F(T_j))_{j\in J}$ is a family of compact generators of ${\mathcal D}$.
\item For any pair $(j,n)\in J\times {\mathbb Z}$ the map
$$\mathrm{Hom}_{\mathcal C}(T_j[n],T_k)\to \mathrm{Hom}_{\mathcal D}(F(T_j)[n],F(T_k))
$$
induced by $F$ is bijective for any $k\in J$.
\end{enumerate}
Then $F$ is an equivalence.
\end{lm}
The case of a single generator is \cite[Proposition 3.10]{Sch}, and the proof in the general case follows the same method. We give this proof below for completeness.
\begin{proof}
Let ${\mathcal C}'$ be the full subcategory of ${\mathcal C}$ whose objects are the objects $Y$ of ${\mathcal C}$ for which the map
$$\mathrm{Hom}_{\mathcal C}(T_j[n],Y)\to \mathrm{Hom}_{\mathcal D}(F(T_j)[n],F(Y))
$$
is bijective for any $(j,n)\in J\times {\mathbb Z}$. Since $F$ commutes with coproducts, and $T_j[n]$, $F(T_j)[n]$ are compact objects, it follows that the subcategory ${\mathcal C}'$ is closed under set-indexed coproducts. Moreover, using the exactness of $F$ and \cite[Lemma 1.1.10]{Nee} it follows that ${\mathcal C}'$ is closed under extensions, i.e. if two objects of a distinguished triangle are in ${\mathcal C}'$, then so is the third object of the triangle. In particular ${\mathcal C}'$ is a triangulated subcategory. Since ${\mathcal C}'$ contains the family of generators $(T_j)_{j\in J}$ it follows by \cite[Lemma 2.2.1]{SchSh} that ${\mathcal C}'={\mathcal C}$.
Fix now an object $Y$ of ${\mathcal C}$, and let ${\mathcal C}_Y$ be the full subcategory of ${\mathcal C}$ whose objects are the objects $X$ of ${\mathcal C}$ for which the map
$$\mathrm{Hom}_{\mathcal C}(X,Y)\to \mathrm{Hom}_{\mathcal D}(F(X),F(Y))
$$
induced by $F$ is bijective. Since $F$ commutes with coproducts, and the functors $\mathrm{Hom}(-,Y)$, $\mathrm{Hom}(-,F(Y))$ send coproducts to products (by the universal property of the coproduct), it follows that ${\mathcal C}_Y$ is closed under coproducts. Using the exactness of $F$ and \cite[Remark 1.1.11]{Nee} it follows that ${\mathcal C}_Y$ is closed under extensions, in particular it is a triangulated subcategory of ${\mathcal C}$. By the first part of the proof we know that ${\mathcal C}_Y$ contains the family of generators $(T_j)_{j\in J}$, so ${\mathcal C}_Y={\mathcal C}$. This proves that $F$ is fully faithful.
To prove that $F$ is essentially surjective, let ${\mathcal D}'$ be the full subcategory of ${\mathcal D}$ whose objects are the objects of ${\mathcal D}$ which are isomorphic to an object of the form $F(X)$. ${\mathcal D}'$ is closed under the shift functor and coproducts, because $F$ commutes with these operations. We prove that ${\mathcal D}'$ is also closed under extensions. Let
$$U\textmap{\phi} V\to W\to U[1]
$$
be a distinguished triangle with $U$, $V$ objects of ${\mathcal D}'$. Therefore there exist objects $X$, $Y$ in ${\mathcal C}$ such that $U\simeq F(X)$, $V\simeq F(Y)$. Fix isomorphisms $u:F(X)\to U$, $v:F(Y)\to V$. We know that $F$ is full, so there exists $f\in \mathrm{Hom}(X,Y)$ such that $F(f)=v^{-1}\phi u$. We can embed $f$ in a distinguished triangle
$$X\textmap{f} Y\to Z\to X[1],
$$
which gives (since $F$ is exact) a distinguished triangle
$$F(X)\textmap{F(f)=v^{-1}\phi u} F(Y)\to F(Z)\to F(X)[1].
$$
In the following commutative diagram
$$\begin{tikzcd}
F(X)\ar[d, "u"', "\simeq"] \ar[r, "F(f)"] & F(Y)\ar[d, "v", "\simeq"'] \ar[r]&F(Z)\ar[r] \ar[d, dashed, "\simeq"]& F(X)[1] \\
U\ar[r,"\phi"] & V\ar[r]& W\ar[r]& U[1]
\end{tikzcd}
$$
the rows are distinguished triangles. Therefore $F(Z)$ and $W$ are isomorphic, so $W$ is also an object of ${\mathcal D}'$. Since ${\mathcal D}'$ is closed under shifts in both directions, it follows that ${\mathcal D}'$ is closed under extensions, in particular it is a triangulated subcategory. But we know that ${\mathcal D}'$ is also closed under coproducts and contains the family of generators $(F(T_j))_{j\in J}$. Therefore ${\mathcal D}'={\mathcal D}$, which shows that $F$ is essentially surjective.
\end{proof}
In this section we again let ${\mathcal T}_0$ be a locally free sheaf on $B$ classically generating $D^b(\mathrm{coh} B)$, which satisfies the hypothesis of Corollary \ref{co3} (3), so that ${\mathcal T}\coloneqq \pi^*({\mathcal T}_0)$ is a tilting sheaf on $E$.
Using Geometric Tilting Theory and Corollary \ref{co3} (3) we see that the associated graded ${\mathbb C}$-algebra
$$\Lambda=\mathrm {End}_E({\mathcal T})=\bigoplus_{m\geq 0} H^0(B, {\mathcal T}_0^\smvee\otimes {\mathcal T}_0\otimes S^m{\mathcal E}^\smvee )
$$
is a finite $R$-algebra, finitely generated over ${\mathbb C}$, and has finite global dimension. Let $\mathrm{Mod}_{\mathbb Z}\Lambda$ ($\mathrm{mod}_{\mathbb Z}\Lambda$) be the category of (respectively finitely generated) graded right $\Lambda$-modules. The functor
$$\mathrm{Hom}^{\mathrm{gr}}_{\mathrm{Qcoh}_{{\mathbb C}^*} E}({\mathcal T},-):\mathrm{Qcoh}_{{\mathbb C}^*} E\to \mathrm{Mod}_{\mathbb Z}\Lambda
$$
defined by
$${\mathcal F}\mapsto \bigoplus_{k\in{\mathbb Z}} \mathrm{Hom}_{\mathrm{Qcoh}_{{\mathbb C}^*} E}({\mathcal T},{\mathcal F}\langle k\rangle)
$$
is left exact. Its right derived functor
%
$$R\mathrm{Hom}^{\mathrm{gr}}_{\mathrm{Qcoh}_{{\mathbb C}^*} E}({\mathcal T},-):D(\mathrm{Qcoh}_{{\mathbb C}^*} E)\to D(\mathrm{Mod}_{\mathbb Z}\Lambda)
$$
will be called the graded tilting functor. For an object ${\mathcal F}$ in $\mathrm{Qcoh}_{{\mathbb C}^*} E$ we have
%
$$H^n(R\mathrm{Hom}^{\mathrm{gr}}_{\mathrm{Qcoh}_{{\mathbb C}^*} E}({\mathcal T},{\mathcal F}))=\bigoplus_{k\in {\mathbb Z}} \mathrm{Ext}^n_{\mathrm{Qcoh}_{{\mathbb C}^*} E}({\mathcal T},{\mathcal F}\langle k\rangle).
$$
In particular, for $m\in{\mathbb Z}$ we have
%
\begin{equation}\label{F1}
H^n(R\mathrm{Hom}^{\mathrm{gr}}_{\mathrm{Qcoh}_{{\mathbb C}^*} E}({\mathcal T},{\mathcal T}\langle m\rangle))=\bigoplus_{k\in {\mathbb Z}} \mathrm{Ext}^n_{\mathrm{Qcoh}_{{\mathbb C}^*} E}({\mathcal T},{\mathcal T}\langle m+k\rangle).
\end{equation}
%
Using \cite[Lemma 2.2.8]{BFK} we obtain the general formula
%
$$\mathrm{Ext}^n_{\mathrm{Qcoh}_{{\mathbb C}^*}E}({\mathcal F}',{\mathcal F}'')=\mathrm{Ext}^n_{\mathrm{Qcoh} E}({\mathcal F}',{\mathcal F}'')^{{\mathbb C}^*},
$$
which shows that
$$\mathrm{Ext}^n_{\mathrm{Qcoh}_{{\mathbb C}^*}E}({\mathcal T}\langle s\rangle, {\mathcal T}\langle t\rangle)=\mathrm{Ext}^n_{\mathrm{Qcoh} E}({\mathcal T},{\mathcal T})_{t-s}.
$$
Since ${\mathcal T}$ is a tilting sheaf on $E$ it follows that
\begin{equation}\label{F2}\mathrm{Ext}^n_{\mathrm{Qcoh}_{{\mathbb C}^*}E}({\mathcal T}\langle s\rangle, {\mathcal T}\langle t\rangle)=
\left\{
\begin{array}{ccc}
\Lambda_{t-s} & \rm for & n=0\\
0 & \rm for & n>0
\end{array}\right..
\end{equation}
Combining this formula with (\ref{F1}) we see that $R\mathrm{Hom}^{\mathrm{gr}}_{\mathrm{Qcoh}_{{\mathbb C}^*} E}({\mathcal T},{\mathcal T}\langle m\rangle)$ is acyclic in degree $n\ne 0$, and
\begin{equation}\label{F3}
R\mathrm{Hom}^{\mathrm{gr}}_{\mathrm{Qcoh}_{{\mathbb C}^*} E}({\mathcal T},{\mathcal T}\langle m\rangle)=\bigoplus_{k\in{\mathbb Z}} \Lambda_{m+k} =\Lambda\langle m\rangle.
\end{equation}
Here $\langle m\rangle:\mathrm{Mod}_{\mathbb Z}\Lambda\to \mathrm{Mod}_{\mathbb Z}\Lambda$ is the standard degree shift functor on the category of graded right $\Lambda$-modules, defined by $(M\langle m\rangle)_k=M_{m+k}$.
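Spelled out degree by degree, with the convention $(M\langle m\rangle)_k=M_{m+k}$ for the shift functor, formula (\ref{F3}) is obtained by combining (\ref{F1}) with (\ref{F2}) (the latter applied with $s=0$, $t=m+k$):
$$\big(R\mathrm{Hom}^{\mathrm{gr}}_{\mathrm{Qcoh}_{{\mathbb C}^*} E}({\mathcal T},{\mathcal T}\langle m\rangle)\big)_k=\mathrm{Hom}_{\mathrm{Qcoh}_{{\mathbb C}^*} E}({\mathcal T},{\mathcal T}\langle m+k\rangle)=\Lambda_{m+k}=(\Lambda\langle m\rangle)_k.
$$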
\begin{lm}\label{shifts}
Let $\Lambda$ be a graded ${\mathbb C}$-algebra of finite global dimension. The family $(\Lambda\langle j\rangle )_{j\in{\mathbb Z}}$ classically generates $D^b(\mathrm{mod}_{\mathbb Z}\Lambda)$ and generates $D(\mathrm{Mod}_{\mathbb Z}\Lambda)$.
\end{lm}
\begin{proof}
The thick envelope of the family $(\Lambda\langle j\rangle)_{j\in{\mathbb Z}}$ contains all direct sums of the form $\bigoplus_{i\in I} \Lambda\langle k_i\rangle^{\oplus n_i}$
with $I$ finite, $k_i\in{\mathbb Z}$, $n_i\in{\mathbb N}$. Therefore it contains all finitely generated graded projective $\Lambda $-modules. Since $\Lambda$ has finite global dimension, every finitely generated graded $\Lambda$-module has a finite resolution by finitely generated graded projective $\Lambda$-modules. This implies that $(\Lambda\langle j\rangle)_{j\in{\mathbb Z}}$ classically generates $D^b(\mathrm{mod}_{\mathbb Z}\Lambda)$.\\
For the second claim, let $M^\bullet=\bigoplus_{k\in{\mathbb Z}} M^\bullet_k $ be a complex of graded right $\Lambda$-modules such that
\begin{equation}\label{OrthGr}
\mathrm{Hom}_{D(\mathrm{Mod}_{\mathbb Z}\Lambda)}(\Lambda\langle j\rangle, M^\bullet[n])=0 \ \forall (j,n)\in{\mathbb Z}\times{\mathbb Z}.
\end{equation}
Note that, since $\Lambda\langle j\rangle$ is a projective object in the category of graded $\Lambda$-modules, we have
$$\mathrm{Hom}_{D(\mathrm{Mod}_{\mathbb Z}\Lambda)}(\Lambda\langle j\rangle, M^\bullet[n])=H^n(\mathrm{Hom}^\bullet(\Lambda\langle j\rangle, M^\bullet))=H^n(M^\bullet_{-j}).$$
Therefore (\ref{OrthGr}) implies that the complex $M^\bullet_k$ is acyclic for any $k\in{\mathbb Z}$, so $M^\bullet=0$ in $D(\mathrm{Mod}_{\mathbb Z}\Lambda)$.
\end{proof}
\begin{thry}\label{gap}
Let $B$ be a smooth projective variety over ${\mathbb C}$, $H$ a finite dimensional complex vector space, and
$$
\begin{tikzcd}[row sep=7ex]
E \ar[r, hook, "i"] \ar[rr, "\pi", bend left=28, two heads ] & B\times H^\smvee \ar[r, two heads, "p"] &B
\end{tikzcd}
$$
a sub-bundle of the trivial bundle $B\times H^\smvee$ over $B$.
Let ${\mathcal T}_0$ be a locally free sheaf on $B$ classically generating $D^b(\mathrm{coh} B)$, such that ${\mathcal T}\coloneqq \pi^*({\mathcal T}_0)$ is a tilting sheaf on $E$. Set $\Lambda\coloneqq \mathrm {End}_E({\mathcal T})$. Then $\Lambda$ is a graded Noetherian ${\mathbb C}$-algebra of finite graded global dimension, and the graded tilting functor
$$R\mathrm{Hom}^{\mathrm{gr}}_{\mathrm{Qcoh}_{{\mathbb C}^*} E}({\mathcal T},-): D(\mathrm{Qcoh}_{{\mathbb C}^*} E)\to D(\mathrm{Mod}_{\mathbb Z}\Lambda)
$$
is an equivalence which restricts to an equivalence
$$D^b(\mathrm{coh}_{{\mathbb C}^*} E)\to D^b(\mathrm{mod}_{\mathbb Z}\Lambda).
$$
\end{thry}
\begin{proof} Geometric tilting theory applies, and shows that $\Lambda$ (as ungraded ${\mathbb C}$-algebra) has finite global dimension. By \cite[Theorem II.8.2 p. 122]{NaOy} it follows that $\Lambda$ also has finite graded global dimension. By Theorem \ref{prop-old-l2-2} we know that $({\mathcal T}\langle k\rangle)_{k\in{\mathbb Z}}$ is a family of compact generators of the category $D(\mathrm{Qcoh}_{{\mathbb C}^*} E)$. We will show that the graded tilting functor and this family of compact generators satisfy the hypothesis of Lemma \ref{Schwede}. The graded tilting functor is exact and commutes with set-indexed coproducts. Formula (\ref{F3}) shows that
$$
R\mathrm{Hom}^{\mathrm{gr}}_{\mathrm{Qcoh}_{{\mathbb C}^*} E}({\mathcal T},{\mathcal T}\langle m\rangle)=\Lambda\langle m\rangle \ \forall m\in{\mathbb Z}.
$$
Since $(\Lambda\langle m\rangle)_{m\in{\mathbb Z}}$ is a family of compact generators of $D(\mathrm{Mod}_{\mathbb Z}\Lambda)$ by Lemma \ref{shifts}, we see that the first hypothesis of Lemma \ref{Schwede} is satisfied. To check the second hypothesis, it suffices to note that by (\ref{F2}) one has canonical identifications
$$\mathrm{Hom}_{D(\mathrm{Qcoh}_{{\mathbb C}^*} E)}({\mathcal T}\langle k\rangle, {\mathcal T}\langle l\rangle)=\mathrm{Hom}_{\mathrm{Qcoh}_{{\mathbb C}^*} E}({\mathcal T}\langle k\rangle, {\mathcal T}\langle l\rangle)=\Lambda_{l-k}=$$
$$=\mathrm{Hom}_{\mathrm{Mod}_{\mathbb Z}\Lambda}(\Lambda\langle k\rangle, \Lambda\langle l\rangle)=\mathrm{Hom}_{D(\mathrm{Mod}_{\mathbb Z}\Lambda)}(\Lambda\langle k\rangle, \Lambda\langle l\rangle).
$$
This shows that Lemma \ref{Schwede} applies, hence
$$R\mathrm{Hom}^{\mathrm{gr}}_{\mathrm{Qcoh}_{{\mathbb C}^*} E}({\mathcal T},-): D(\mathrm{Qcoh}_{{\mathbb C}^*} E)\to D(\mathrm{Mod}_{\mathbb Z}\Lambda)
$$
is an equivalence as claimed. To prove that this functor restricts to an equivalence
$$D^b(\mathrm{coh}_{{\mathbb C}^*} E)\to D^b(\mathrm{mod}_{\mathbb Z}\Lambda)
$$
it suffices to note that
\begin{enumerate}[(a)]
\item The thick closure of $(\Lambda\langle m\rangle)_{m\in{\mathbb Z}}$ is $D^b(\mathrm{mod}_{\mathbb Z}\Lambda)$. This was proved in Lemma \ref{shifts}.
\item The canonical functor $D^b(\mathrm{coh}_{{\mathbb C}^*} E)\to D(\mathrm{Qcoh}_{{\mathbb C}^*} E)$ defines an equivalence between $D^b(\mathrm{coh}_{{\mathbb C}^*} E)$ and the full subcategory ${\mathcal I}= D_\mathrm{bd-coh} (\mathrm{Qcoh}_{{\mathbb C}^*}E)$ of $D(\mathrm{Qcoh}_{{\mathbb C}^*} E)$ consisting of complexes with bounded, coherent cohomology. This is Lemma \ref{newLm}.
\item The thick closure of $({\mathcal T}\langle k\rangle)_{k\in{\mathbb Z}}$ coincides with ${\mathcal I}$. This was proved in Theorem \ref{prop-old-l2-2}.
\end{enumerate}
\end{proof}
\defs^{\hskip-0.2ex\scriptscriptstyle \vee}{s^{\hskip-0.2ex\scriptscriptstyle \vee}}
\subsection{A tilting version of the Isik-Shipman theorem }
Consider now an element $s\in H$ defining a regular section of $E^\smvee$ with zero locus $Z(s)\subset B$. The element $s$ also defines
a central non-zero-divisor of degree 1 (denoted by the same symbol) in the graded ${\mathbb C}$-algebra
$$\Lambda=\bigoplus_{m\geq 0} H^0(B, {\mathcal T}_0^\smvee\otimes {\mathcal T}_0\otimes S^m{\mathcal E}^\smvee ).
$$
Let $\langle 1\rangle$ be the shift functor on the category of graded right $\Lambda$-modules, and let
$${\mathfrak s}: \mathrm{id}_{\mathrm{Mod}_{\mathbb Z}\Lambda}\to \langle 1\rangle
$$
be the natural transformation given by multiplication by $s\in \Lambda_1$. Denote by $\theta$ the character $ \mathrm{id}_{{\mathbb C}^*}$, and by $D(\mathrm{coh}_{{\mathbb C}^*} E, \theta, s^{\hskip-0.2ex\scriptscriptstyle \vee})$ the derived factorization category associated with the 4-tuple $({\mathbb C}^*,E, \theta, s^{\hskip-0.2ex\scriptscriptstyle \vee})$ \cite{Hi1}.
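Concretely, for a graded right $\Lambda$-module $M$ the component of ${\mathfrak s}$ at $M$ is the map
$${\mathfrak s}_M: M\to M\langle 1\rangle,\qquad {\mathfrak s}_M(m)=m\, s,
$$
which is a morphism of graded right $\Lambda$-modules: it maps $M_k$ into $M_{k+1}=(M\langle 1\rangle)_k$ because $s\in\Lambda_1$, and it is right $\Lambda$-linear because $s$ is central. The naturality of ${\mathfrak s}$ is obvious.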
We will use the following result
\begin{thry} \label{FirstEq} With the notations and under the assumptions of Theorem \ref{gap}, let $s\in H^0(B,{\mathcal E}^\smvee)$ be a regular section. The tilting sheaf ${\mathcal T}=\pi^*({\mathcal T}_0)$ induces an equivalence:
$$\tilde {\mathcal T}_*: D(\mathrm{coh}_{{\mathbb C}^*} E, \theta, s^{\hskip-0.2ex\scriptscriptstyle \vee})\to D(\mathrm{mod}_{\mathbb Z}\Lambda,\langle 1\rangle, {\mathfrak s})
$$
\end{thry}
\begin{proof}
Since $\Lambda$ is Noetherian, the method of proof of \cite[Theorem 5.1]{BDFIK} applies. Note however that this proof needs Theorem \ref{gap}, which is our ${\mathbb C}^*$-equivariant version of Geometric Tilting Theory. \end{proof}
In the equivalence given by Theorem \ref{FirstEq} one can replace the derived factorization category $D(\mathrm{mod}_{\mathbb Z}\Lambda,\langle 1\rangle, {\mathfrak s})$ by the homotopy category $K(\mathrm{proj}_{\mathbb Z}\Lambda,\langle 1\rangle, {\mathfrak s})$ of factorizations whose components are finitely generated projective graded $\Lambda$-modules. This is a significant simplification, because the morphisms in the category $K(\mathrm{proj}_{\mathbb Z}\Lambda,\langle 1\rangle, {\mathfrak s})$ are just homotopy classes of morphisms of factorizations. The precise statement is
\begin{co} \label{SecondEq}
With the notations and under the assumptions of Theorem \ref{gap} suppose also that $s\in H^0(B,{\mathcal E}^\smvee)$ is a regular section. Then the sheaf ${\mathcal T}=\pi^*({\mathcal T}_0)$ induces an equivalence
$$D(\mathrm{coh}_{{\mathbb C}^*} E, \theta, s^{\hskip-0.2ex\scriptscriptstyle \vee})\to K(\mathrm{proj}_{\mathbb Z}\Lambda,\langle 1\rangle, {\mathfrak s}).
$$
\end{co}
\begin{proof}
As $\Lambda$ has finite graded global dimension, \cite[Corollary 2.25, p. 210]{BDFIK} applies and yields an equivalence
$$K(\mathrm{proj}_{\mathbb Z}\Lambda,\langle 1\rangle, {\mathfrak s})\simeq D(\mathrm{mod}_{\mathbb Z}\Lambda,\langle 1\rangle, {\mathfrak s}).
$$
\end{proof}
Finally we will identify the homotopy category $K(\mathrm{proj}_{\mathbb Z}\Lambda,\langle 1\rangle, {\mathfrak s})$ with the triangulated graded singularity category of the quotient algebra $\Lambda/s\Lambda$.
\begin{pr} \label{Orlov-newProp} Under the assumptions and with the notations of Theorem \ref{gap} suppose also that $s\in H^0(B,{\mathcal E}^\smvee)$ is a regular section. Then there exists a natural equivalence
%
\begin{equation}\label{Orlov-new}
\Phi: K(\mathrm{proj}_{\mathbb Z}\Lambda,\langle 1\rangle, {\mathfrak s})\textmap{\simeq} D^{\mathrm{gr}}_{\mathrm{sg}}(\Lambda/s\Lambda).
\end{equation}
\end{pr}
\begin{proof}
This equivalence is obtained using a version of Orlov's Theorem \cite[Theorem 3.10]{Or}. Orlov's result gives an equivalence
$$F:\mathrm{DGrB}(s)\to D^{\mathrm{gr}}_{\mathrm{sg}}(\Lambda/s\Lambda),$$
where:
\begin{enumerate}
\item $\Lambda=\bigoplus_{i\geq 0}\Lambda_i$ is a {\it connected}, Noetherian algebra of finite global dimension over a field $K$.
\item $s\in\Lambda_n$ is a homogeneous, central element of positive degree which is not a zero divisor.
\item $s\Lambda=\Lambda s$ is the two-sided ideal generated by $s$.
\item $\mathrm{DGrB}(s)$ is the triangulated category of graded D-branes of type $B$ associated with the pair $(\Lambda, s)$ \cite[section 3.1]{Or}.
\item $D^{\mathrm{gr}}_{\mathrm{sg}}(\Lambda/s\Lambda)$ denotes the graded singularity category of the graded quotient algebra $\Lambda/s\Lambda$.
\end{enumerate}
The construction of the equivalence $\Phi$ starts with the following remark: Orlov's category $\mathrm{DGrB}(s)$ coincides with the full subcategory of $K(\mathrm{proj}_{\mathbb Z}\Lambda,\langle 1\rangle, {\mathfrak s})$ whose objects are factorizations with (finitely generated) {\it free} graded right $\Lambda$-modules. Moreover, Orlov's functor $F:\mathrm{DGrB}(s)\to D^{\mathrm{gr}}_{\mathrm{sg}}(\Lambda/s\Lambda)$ defined in \cite[Proposition 3.5]{Or} extends in an obvious way to a functor $\Phi:K(\mathrm{proj}_{\mathbb Z}\Lambda,\langle 1\rangle, {\mathfrak s})\to D^{\mathrm{gr}}_{\mathrm{sg}}(\Lambda/s\Lambda)$.
The arguments of \cite[Proposition 3.9]{Or} also apply to the extension $\Phi$, proving that this functor is fully faithful as well.
The proof is completed by noting that, for an arbitrary (not necessarily connected) Noetherian algebra $\Lambda$, the extension $\Phi$ is always essentially surjective. We briefly indicate the necessary changes to Orlov's proof:
the proof of essential surjectivity in \cite[Proposition 3.10]{Or} proceeds in two steps. First, for an object $T$ in $D_\mathrm{sg}^{\mathrm{gr}}(\Lambda/s\Lambda)$, Orlov obtains a factorization
\begin{equation}\label{KK}
K^{-1}\textmap{k^{-1}} K^0\textmap{k^0} K^{-1}(n)
\end{equation}
with $K^0$ finitely generated free and $K^{-1}$ finitely generated projective, which is mapped to $T$ via $\Phi$. Second, using the connectedness of $\Lambda/s\Lambda$, Orlov proves that $K^{-1}$ is free as well.
For the essential surjectivity of $\Phi$ the second step is no longer necessary, so it suffices to check that the construction of (\ref{KK}) does not need the connectedness of $\Lambda$. The key ingredients used in this construction are:
\begin{enumerate}[({I}1)]
\item If $\Lambda$ has finite injective dimension, then $\Lambda/s\Lambda$ has finite injective dimension.
In non-commutative algebra the condition ``$A$ has finite injective dimension'' means: ``$A$ has finite injective dimension as both right and left module over itself''. A ring satisfying this condition is called Iwanaga-Gorenstein. \vspace{2mm}
\item The quotient algebra $A\coloneqq \Lambda/s\Lambda$ is a dualizing complex over itself, i.e. the functors
$$D\coloneqq R\mathrm{Hom}_{\mathrm{mod}_{\mathbb Z} A}(-,A):D^b(\mathrm{mod}_{\mathbb Z} A)\to D^b(\mathrm{mod}_{\mathbb Z} A^{\mathrm{op}})^{\mathrm{op}},$$
$$D\coloneqq R\mathrm{Hom}_{\mathrm{mod}_{\mathbb Z} A}(-,A):D^b(\mathrm{mod}_{\mathbb Z} A^{\mathrm{op}})\to D^b(\mathrm{mod}_{\mathbb Z} A)^{\mathrm{op}}
$$
are quasi-inverse equivalences.
\end{enumerate}
(I1) follows using the spectral sequences with second pages
%
$$\mathrm{Ext}_A^p(M,\mathrm{Ext}^q_\Lambda(A,\Lambda))\Rightarrow \mathrm{Ext}^n_\Lambda(M,\Lambda),\ \mathrm{Ext}_{A^{\mathrm{op}}}^p(M,\mathrm{Ext}^q_{\Lambda^{\mathrm{op}}}(A,\Lambda))\Rightarrow \mathrm{Ext}^n_{\Lambda^{\mathrm{op}}}(M,\Lambda)
$$
associated with an $A$-module (respectively $A^{\mathrm{op}}$-module) $M$ \cite[p. 349]{CaE}, and the short exact sequence
$$0\to \Lambda \textmap{s}\Lambda\to A\to 0
$$
to compute $\mathrm{Ext}^q_\Lambda(A,\Lambda)$, $\mathrm{Ext}^q_{\Lambda^{\mathrm{op}}}(A,\Lambda)$.
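We spell out this last computation (gradings are suppressed; the argument is a standard one, going back to Rees). Applying $\mathrm{Hom}_\Lambda(-,\Lambda)$ to the short exact sequence above, and using that $s$ is a central non-zero-divisor, one obtains
$$\mathrm{Hom}_\Lambda(A,\Lambda)=\ker(\Lambda\textmap{s}\Lambda)=0,\quad \mathrm{Ext}^1_\Lambda(A,\Lambda)=\mathrm{coker}(\Lambda\textmap{s}\Lambda)=A,\quad \mathrm{Ext}^q_\Lambda(A,\Lambda)=0\ \forall q\geq 2.
$$
Hence the first spectral sequence degenerates into isomorphisms $\mathrm{Ext}^p_A(M,A)\simeq \mathrm{Ext}^{p+1}_\Lambda(M,\Lambda)$, and similarly on the opposite side, which bounds the injective dimension of $A$ over itself by that of $\Lambda$ minus one.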
\\ \\
(I2) is stated in \cite[Lemma 5.3]{BuSt}. The proof uses only the assumption ``$A$ is Iwanaga-Gorenstein''.
\end{proof}
Combining Corollary \ref{SecondEq} and Proposition \ref{Orlov-newProp} with the Isik-Shipman theorem \cite{Is}, \cite{Sh}, one obtains the following result, which gives a tilting description of the derived category of the zero locus of a regular section $s$:
\begin{thry} \label{FourthEq} Let $B$ be a smooth projective variety over ${\mathbb C}$, $H$ a finite dimensional complex vector space, and
$$
\begin{tikzcd}[row sep=7ex]
E \ar[r, hook, "i"] \ar[rr, "\pi", bend left=28, two heads ] & B\times H^\smvee \ar[r, two heads, "p"] &B
\end{tikzcd}
$$
a sub-bundle of the trivial bundle $B\times H^\smvee$ over $B$.
Let ${\mathcal T}_0$ be a locally free sheaf on $B$ classically generating $D^b(\mathrm{coh} B)$, such that ${\mathcal T}\coloneqq \pi^*({\mathcal T}_0)$ is a tilting sheaf on $E$. Set $\Lambda\coloneqq \mathrm {End}_E({\mathcal T})$, and let $s\in H$ be an element defining a regular section. Then one has an equivalence of triangulated categories:
$$ D^b(\mathrm{coh} Z(s)) \to D^{\mathrm{gr}}_{\mathrm{sg}}(\Lambda/s\Lambda).
$$
\end{thry}
\subsection{\texorpdfstring{${\mathbb C}^*$-}{C*-}equivariant non-commutative resolutions}\label{Cequiv}
\def\mathrm{gldim}{\mathrm{gldim}}
Let $R$ be a normal, Noetherian domain. We recall (\cite {VdB1}, \cite[sect. 1.1.1]{SVdB}) that a non-commutative ({\rm nc}) resolution of $R$ is an $R$-algebra of the form $\Lambda=\mathrm {End}_R(M)$, where $M$ is a non-trivial finitely generated reflexive $R$-module, such that $\mathrm{gldim}(\Lambda)<\infty$. A non-commutative resolution $\Lambda$ is called crepant if $R$ is a Gorenstein ring, and $\Lambda$ is a maximal Cohen-Macaulay (MCM) $R$-module.
\begin{dt}
Let $R$ be a non-negatively graded, normal, Noetherian domain with $R_0={\mathbb C}$.
A non-negatively graded $R$-algebra $\Lambda$ will be called a graded (crepant) {\rm nc} resolution of $R$ if $\Lambda$ is a (crepant) {\rm nc} resolution in the classical (non-graded) sense.
\end{dt}
This definition is justified by \cite[Proposition 2.4]{SVdB}, which shows in particular that, denoting by ${\mathfrak m}=\bigoplus_{k>0} R_k\subset R$ the augmentation ideal of $R$, if
$\Lambda$ is a graded (crepant) {\rm nc} resolution of $R$, then the ${\mathfrak m}$-adic completion $\hat\Lambda_{\mathfrak m}$ is a (crepant) {\rm nc} resolution of $\hat R_{\mathfrak m}$. Moreover, as we will see in this section, this notion becomes natural in the framework of a Kempf collapsing map (see Lemma \ref{l6} below).
\begin{re} \label{newRe} Suppose that $\Lambda=\mathrm {End}_R(M)$, where $M$ is a non-trivial non-negatively graded finitely generated $R$-module which is reflexive in the non-graded sense, that the inclusion
$$\bigoplus_{k=0}^\infty \mathrm{Hom}^k_R(M,M)\subset \mathrm {End}_R(M)
$$
is an equality, and that $\mathrm{gldim}(\mathrm {End}_R(M))<\infty$. Then $\mathrm {End}_R(M)$ endowed with its natural non-negative grading is a graded {\rm nc} resolution of $R$, which is crepant if $R$ is a Gorenstein ring and $\mathrm {End}_R(M)$ is an MCM $R$-module.
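Here, for a graded $R$-module $M$, $\mathrm{Hom}^k_R(M,M)$ denotes the subspace of $R$-linear endomorphisms which are homogeneous of degree $k$:
$$\mathrm{Hom}^k_R(M,M)=\{\varphi\in \mathrm {End}_R(M)\ :\ \varphi(M_i)\subset M_{i+k}\ \forall i\},
$$
so the displayed condition asks that every $R$-linear endomorphism of $M$ be a finite sum of homogeneous endomorphisms of non-negative degrees.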
\end{re}
Let $B$ be a smooth projective variety over ${\mathbb C}$, $H$ a finite dimensional complex vector space, and
$$
\begin{tikzcd}[row sep=7ex]
E \ar[r, hook, "i"] \ar[rr, "\pi", bend left=28, two heads ] & B\times H^\smvee \ar[r, two heads, "p"] &B
\end{tikzcd}
$$
a vector subbundle of the trivial bundle $B\times H^\smvee$ over $B$. Let $p:B\times H^\smvee\to B$, $q:B\times H^\smvee\to H^\smvee $ be the projections on the two factors, $\pi=p\circ i$ the bundle projection of $E$, $\vartheta\coloneqq q\circ i$, and $C(E)\coloneqq \vartheta(E)$ the image of the {\it proper} morphism $\vartheta$, endowed with its reduced induced scheme structure. We obtain a surjective projective morphism $\rho:E\to C(E)$ fitting in the commutative diagram:
\begin{equation}\label{DiagramC(E)}
\begin{tikzcd}[row sep=7ex, column sep=7ex]
&[-22pt]E \ar[r, hook, "i"] \ar[dr, "\vartheta"] \ar[d, "\rho"', two heads] \ar[rr, "\pi", bend left=28, two heads ] & B\times H^\smvee \ar[d, two heads, "q"]\ar[r, "p"] & B\\
\mathrm{Spec}(R)\ar[r, equal]&C(E) \ar[r, hook, "j"] & H^\smvee &
\end{tikzcd}
\end{equation}
\begin{re}\label{AffineConeRem}
Let $E\hookrightarrow B\times H^\smvee$ be
a vector subbundle of the trivial bundle $B\times H^\smvee$ over $B$. Then $C(E)$ coincides with the affine cone over the projective variety
$$S(E)\coloneqq {\mathbb P}(\vartheta)({\mathbb P}(E))\subset {\mathbb P}(H^\smvee).
$$
\end{re}
The scaling ${\mathbb C}^*$-action on $H^\smvee$ induces ${\mathbb C}^*$-actions on $E$ and $C(E)$, and $\rho$ is a ${\mathbb C}^*$-equivariant projective morphism. Since $C(E)$ is ${\mathbb C}^*$-invariant, the associated ideal $I_{C(E)}\subset {\mathbb C}[H^\smvee]$ is homogeneous, so the ring ${\mathbb C}[C(E)]={\mathbb C}[H^\smvee]/I_{C(E)}$ comes with a natural grading induced by the standard grading of ${\mathbb C}[H^\smvee]=S^*H$. Put $R\coloneqq {\mathbb C}[C(E)]$.
\\
{\it In this section, from here on, we will make the following assumptions: we will suppose that $B$ is connected, $\rho$ is birational, and $C(E)$ is normal.}\\
We will denote by ${\mathcal E}$ the locally free sheaf on $B$ associated with $E$.
\begin{lm}\label{l5} One has an isomorphism of graded rings
$$R\simeq \bigoplus_{m\geq 0} H^0(B, S^m {\mathcal E}^\smvee).
$$
\end{lm}
\begin{proof}
Since $C(E)=\mathrm{Spec}(R)$ is normal and $\rho$ is proper and birational, we have $\rho_*({\mathcal O}_E)={\mathcal O}_{\mathrm{Spec}(R)}$. Therefore
$$R=H^0(\mathrm{Spec}(R),{\mathcal O}_{\mathrm{Spec}(R)})=H^0(\mathrm{Spec}(R),\rho_*({\mathcal O}_E))=H^0(E,{\mathcal O}_E)$$
$$=H^0(B,\pi_*({\mathcal O}_E))=H^0(B, \bigoplus_{m\geq 0}S^m{\mathcal E}^\smvee).
$$
The gradings of $R$ and $\bigoplus_{m\geq 0} H^0(B, S^m {\mathcal E}^\smvee)$ agree via this isomorphism, because ${\mathbb C}^*$ acts with weight $m$ on $H^0(B,S^m{\mathcal E}^\smvee)$.
\end{proof}
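In other words, the grading of $R$ is given weightwise by
$$R_m=H^0(B, S^m {\mathcal E}^\smvee),\qquad m\geq 0,
$$
the degree-$m$ piece consisting of the functions on $E$ whose restriction to each fibre is a homogeneous polynomial of degree $m$.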
\begin{re}\label{NewRemarkNew}
In section \ref{HomGLSMSect} we will study a large class of bundles $E$ which fit in a diagram of the form (\ref{DiagramC(E)}) with $H=H^0(B,E^\smvee)$, which satisfy the assumptions of Lemma \ref{l5}. These examples will be obtained using homogeneous GLSM presentations.
\end{re}
From now on, let ${\mathcal T}_0$ be a locally free sheaf on $B$ classically generating $D^b(\mathrm{coh} B)$, which satisfies the hypothesis of Corollary \ref{co3} (3). Therefore, by this corollary, we know that ${\mathcal T}\coloneqq \pi^*({\mathcal T}_0)$ is a tilting sheaf on $E$.
Put $\Lambda\coloneqq \mathrm {End}_E({\mathcal T})$. Geometric tilting theory (see \cite[sect. 7.6]{HVdB}, \cite[sect. 1.8, 1.9]{BH}, \cite[sect. D]{Le}) implies
\begin{enumerate}
\item $\Lambda$ is a finite $R$-algebra, finitely generated as a ${\mathbb C}$-algebra.
\item $\Lambda$ has finite global dimension.
\end{enumerate}
Note also that $\Lambda$ has a natural grading given by the direct sum decomposition:
\begin{equation}\label{Lambda-grading}
\Lambda=H^0(E,\pi^*({\mathcal T}_0^\smvee\otimes{\mathcal T}_0))=H^0\big(B, {\mathcal T}_0^\smvee\otimes{\mathcal T}_0\otimes (\bigoplus_{m\geq 0}S^m{\mathcal E}^\smvee)\big).
\end{equation}
We are interested in criteria which guarantee that $\Lambda$ is a graded (crepant) non-commutative resolution of $R$. Let $M$ be the graded $R$-module
$$M\coloneqq H^0(E,{\mathcal T})=H^0\big(B,{\mathcal T}_0\otimes (\bigoplus_{m\geq 0}S^m{\mathcal E}^\smvee)\big).
$$
\begin{lm}\label{l6} Suppose that $\Lambda\coloneqq \mathrm {End}_E({\mathcal T})$ is a reflexive $R$-module, and ${\mathcal O}_B$ is a direct summand of ${\mathcal T}_0$. If the exceptional locus $\mathrm{Exc}(\rho)\subset {C(E)}$ of $\rho$ has codimension $\geq 2$, then $\Lambda$ is a graded {\rm nc} resolution of $R$.
\end{lm}
\begin{proof} (see \cite[Proposition 3.4]{BLVdB2}) Using the known direct sum decompositions
\begin{align}
R&=H^0(B, \bigoplus_{m\geq 0}S^m{\mathcal E}^\smvee),\\
M&=H^0\big(B,{\mathcal T}_0\otimes (\bigoplus_{m\geq 0}S^m{\mathcal E}^\smvee)\big),\\
\label{LambdaDec}
\Lambda &=H^0\big(B, {\mathcal T}_0^\smvee\otimes{\mathcal T}_0\otimes (\bigoplus_{m\geq 0}S^m{\mathcal E}^\smvee)\big),
\end{align}
we see that, if ${\mathcal O}_B$ is a direct summand of ${\mathcal T}_0$, then $M$ can be identified (as a graded $R$-module) with a direct summand of $\Lambda$. Since we assumed that $\Lambda$ is a reflexive $R$-module, it follows that $M$ is reflexive.
The natural evaluation map
$$\rho_*({\mathcal E}nd_E({\mathcal T}))\times \rho_*({\mathcal T})\to \rho_*({\mathcal T})
$$
is ${\mathcal O}_{C(E)}$-bilinear, so it defines a morphism
$$\nu: \rho_*({\mathcal E}nd_E({\mathcal T}))\to {\mathcal E}nd_{C(E)}(\rho_*({\mathcal T})),
$$
which is obviously an isomorphism on ${C(E)}\setminus\mathrm{Exc}(\rho)$. On the other hand we have
$$H^0({C(E)},\rho_*({\mathcal E}nd_E({\mathcal T})))=H^0(E,{\mathcal E}nd_E({\mathcal T}))=\Lambda.
$$
Therefore $\Lambda$ is the $R$-module associated with the coherent sheaf $\rho_*({\mathcal E}nd_E({\mathcal T}))$ on the affine scheme $\mathrm{Spec}(R)$.
Since $\Lambda$ is a finitely generated reflexive $R$-module and $R$ is Noetherian, it follows that for any $p\in \mathrm{Spec}(R)$ the stalk $\Lambda_p$ is a reflexive $R_p$-module, so the coherent sheaf $\rho_*({\mathcal E}nd_E({\mathcal T}))$ is reflexive. The same argument shows that $\rho_*({\mathcal T})$ is reflexive, so ${\mathcal E}nd(\rho_*({\mathcal T}))$ is reflexive as well. Therefore $\nu$ induces an isomorphism
\begin{equation}\label{iso}
\begin{split}
\Lambda=& H^0({C(E)},\rho_*({\mathcal E}nd_E({\mathcal T})))= H^0({C(E)}\setminus\mathrm{Exc}(\rho),\rho_*({\mathcal E}nd_E({\mathcal T})))\textmap{\simeq}\\
&H^0({C(E)}\setminus\mathrm{Exc}(\rho),{\mathcal E}nd(\rho_*({\mathcal T}))) =H^0({C(E)},{\mathcal E}nd(\rho_*({\mathcal T})))=\mathrm {End}_{C(E)} (\rho_*({\mathcal T})).
\end{split}
\end{equation}
On the other hand $H^0({C(E)},\rho_*({\mathcal T}))=H^0(E,{\mathcal T})=M$, so $M$ is the $R$-module associated with the coherent sheaf $\rho_*({\mathcal T})$ on the affine scheme ${C(E)}$. Using \cite[Ex. 5.3, p. 124]{Ha} we obtain a natural isomorphism $\mathrm {End}_{C(E)} (\rho_*({\mathcal T}))=\mathrm {End}_R(M)$, so (\ref{iso}) induces an isomorphism $\Lambda\textmap{\simeq} \mathrm {End}_R(M)$. Moreover, via this isomorphism one has for any $m\geq 0$ the inclusion
$$H^0 (B, {\mathcal T}_0^\smvee\otimes{\mathcal T}_0\otimes S^m{\mathcal E}^\smvee )\subset \mathrm{Hom}^m_R(M,M).
$$
Therefore, taking into account (\ref{LambdaDec}) we obtain
$$\mathrm {End}_R(M)=\Lambda=\bigoplus_{m\geq 0}H^0 (B, {\mathcal T}_0^\smvee\otimes{\mathcal T}_0\otimes S^m{\mathcal E}^\smvee )\subset \bigoplus_{m\geq 0} \mathrm{Hom}^m_R(M,M),
$$
so the hypothesis of Remark \ref{newRe} is fulfilled, and the claim follows from this remark.
\end{proof}
The following lemma is similar to \cite[Proposition 2.7 (3)]{WZ}.
\begin{lm}\label{l7} Suppose that the pre-image $\rho^{-1}(\mathrm{Exc}(\rho))$ of the exceptional locus of $\rho$ has codimension $\geq 2$ in $E$. Then $\Lambda$ and $M$ are reflexive $R$-modules, and the natural morphism $\Lambda\to \mathrm {End}_R(M)$ is an isomorphism of graded rings and of reflexive $R$-modules.
\end{lm}
\begin{proof}
Since the pre-image $\rho^{-1}(\mathrm{Exc}(\rho))$ has codimension $\geq 2$ in $E$ and $\rho$ is birational, it follows that $\mathrm{Exc}(\rho)$ also has codimension $\geq 2$ in ${C(E)}$, so the functor $\rho_*$ maps reflexive sheaves to reflexive sheaves by \cite[4.2.1]{VdB2}.
The sheaves ${\mathcal T}$, ${\mathcal E}nd_E({\mathcal T})$ are locally free, hence reflexive. Therefore $\rho_*({\mathcal T})$, $\rho_*({\mathcal E}nd_E({\mathcal T}))$ are reflexive coherent sheaves on ${C(E)}$. The $R$-modules associated with these sheaves are $M$, respectively $\Lambda$. For $p\in\mathrm{Spec}(R)$ the localizations $M_p$, $\Lambda_p$ are identified with the stalks $\rho_*({\mathcal T})_p$, $\rho_*({\mathcal E}nd_E({\mathcal T}))_p$ respectively, so they are reflexive $R_p$-modules. Using \cite[Lemma 15.23.4]{Stack}, it follows that $M$, $\Lambda$ are reflexive $R$-modules. The isomorphism $\Lambda\to \mathrm {End}_R(M)$ is obtained as in the proof of Lemma \ref{l6}.
\end{proof}
\begin{lm}\label{l8} Suppose $R$ is a Gorenstein ring. Then $\Lambda$ is an MCM $R$-module if and only if
$$H^i\big(B,{\mathcal T}_0^\smvee\otimes{\mathcal T}_0\otimes S^\bullet{\mathcal E}^\smvee\otimes \omega_B\otimes\det{\mathcal E}^\smvee\big)=0\ \forall i>0.
$$
\end{lm}
\begin{proof}
We follow \cite[Lemma 3.2]{BLVdB2}. Since ${\mathcal T}$ is a tilting locally free sheaf on $E$ it follows that
$$H^i(E,{\mathcal E}nd({\mathcal T}))=0 \ \forall i>0. $$
Using the Leray spectral sequence associated with $\rho$, and taking into account that ${C(E)}=\mathrm{Spec}(R)$ is affine, we obtain
$$
H^0({C(E)},R^i\rho_*({\mathcal E}nd({\mathcal T})))=0 \ \forall i>0,
$$
so $R^i\rho_*({\mathcal E}nd({\mathcal T}))=0$ for any $i>0$. Thus the derived direct image $R\rho_*({\mathcal E}nd({\mathcal T}))$ reduces to the direct image $\rho_*({\mathcal E}nd({\mathcal T}))$. On the other hand we have
$$\mathrm{Ext}^i_R\big(\mathrm {End}_E({\mathcal T}),\omega_R\big)=\mathrm{Ext}^i_{C(E)}\big(\rho_* ({\mathcal E}nd({\mathcal T})) ,\omega_{C(E)}\big)=$$
$$H^0\big({C(E)},{\mathcal E}xt^i_{C(E)}\big(\rho_* ({\mathcal E}nd({\mathcal T})) ,\omega_{C(E)}\big)\big).$$
Using Weyman's Duality Theorem for proper morphisms \cite[Theorem 1.2.22]{We}, taking into account that ${\mathcal E}nd({\mathcal T})$ is locally free and $\rho^{!}\omega_{C(E)}=\omega_E$ \cite[Proposition 1.2.21 (f)]{We}, we obtain an isomorphism
$${\mathcal E}xt^i_{C(E)}\big(\rho_* ({\mathcal E}nd({\mathcal T})) ,\omega_{C(E)}\big)\simeq R^i\rho_* {\mathcal H}om\big({\mathcal E}nd({\mathcal T}),\rho^{!}\omega_{C(E)}\big) $$
$$=R^i\rho_* {\mathcal H}om\big({\mathcal E}nd({\mathcal T}), \omega_E\big) .
$$
The Leray spectral sequence associated with $\rho$ gives
$$H^0\big({C(E)},R^i\rho_* {\mathcal H}om({\mathcal E}nd({\mathcal T}), \omega_E)\big)=H^i\big(E,{\mathcal H}om({\mathcal E}nd({\mathcal T}),\omega_E)\big),
$$
so we get an isomorphism
$$\mathrm{Ext}^i_R\big(\mathrm {End}_E({\mathcal T}),\omega_R\big)\simeq H^i\big(E,{\mathcal H}om({\mathcal E}nd({\mathcal T}),\omega_E)\big).
$$
Now use the Leray spectral sequence associated with the affine map $\pi$ and the obvious identification $\omega_E=\pi^*(\omega_B\otimes\det({\mathcal E}^\smvee))$. We obtain an isomorphism
$$H^i\big(E,{\mathcal H}om({\mathcal E}nd({\mathcal T}),\omega_E)\big)=H^i\big(B,{\mathcal T}_0^\smvee\otimes{\mathcal T}_0\otimes(\bigoplus_{m\geq 0}S^m{\mathcal E}^\smvee)\otimes \omega_B\otimes\det{\mathcal E}^\smvee\big),
$$
which completes the proof.
\end{proof}
\begin{co}\label{co9} Suppose that
$R$ is a Gorenstein ring, the exceptional locus $\mathrm{Exc}(\rho)\subset {C(E)}$ has codimension $\geq 2$, ${\mathcal O}_B$ is a direct summand of ${\mathcal T}_0$, and $\det({\mathcal E})\simeq \omega_B$. Then $\Lambda$ is a graded crepant {\rm nc} resolution of $R$.
\end{co}
\begin{proof}
By Corollary \ref{co3} (3) and Lemma \ref{l8}, $\Lambda$ is an MCM $R$-module: since $\det({\mathcal E})\simeq \omega_B$ we have $\omega_B\otimes\det{\mathcal E}^\smvee\simeq {\mathcal O}_B$, so the vanishing condition of Lemma \ref{l8} reduces to the tilting vanishing provided by Corollary \ref{co3} (3). Taking into account that $R$ is Gorenstein, it follows by \cite[Lemma 4.2.2 (iii)]{Bu} that $\Lambda$ is reflexive. Since $M$ is a direct summand of $\Lambda$ (see the proof of Lemma \ref{l6}), $M$ is reflexive as well.
\end{proof}
Using Lemmas \ref{l7}, \ref{l8}, and Corollary \ref{co9} we obtain
\begin{thry}\label{co10} With the notations and under the assumptions above we have:
\begin{enumerate}
\item Suppose that
\begin{enumerate}[(i)]
\item $\mathrm{cod}_E\big(\rho^{-1}(\mathrm{Exc}(\rho)))\geq 2$, or
\item $\mathrm{cod}_{C(E)}\big(\mathrm{Exc}(\rho)\big)\geq 2$, ${\mathcal O}_B\subset {\mathcal T}_0$ is a direct summand, and $\Lambda$ is reflexive.
\end{enumerate}
Then $\Lambda$ is isomorphic to $\mathrm {End}_R(M)$ and is a graded {\rm nc} resolution of $R$.
%
\item Suppose $R$ is a Gorenstein ring, $\det({\mathcal E})\simeq \omega_B$, and that
\begin{enumerate}[(i)]
\item $\mathrm{cod}_E\big(\rho^{-1}(\mathrm{Exc}(\rho)))\geq 2$, or
\item $\mathrm{cod}_{C(E)}\big(\mathrm{Exc}(\rho)\big)\geq 2$, and ${\mathcal O}_B\subset {\mathcal T}_0$ is a direct summand.
\end{enumerate}
Then $\Lambda$ is isomorphic to $\mathrm {End}_R(M)$ and is a graded crepant {\rm nc} resolution of $R$.
\end{enumerate}
\end{thry}
All our examples will be obtained using homogeneous GLSM presentations as explained in Remark
\ref{NewRemarkNew}. For this class of examples we will prove an explicit Gorenstein criterion for the ring $R$.
\section{Homogeneous gauged linear sigma models}\label{GeomSect}
\subsection{GIT for representations of reductive groups}
Let $U$ be a finite dimensional complex vector space. For a morphism $\psi:{\mathbb C}^*\to \mathrm {GL}(U)$ and an integer $m\in{\mathbb Z}$ we denote by $U^{\psi}_m\subset U$ the linear subspace corresponding to the weight $\zeta\mapsto \zeta^m$, and by ${\mathfrak W}(\psi)\subset{\mathbb Z}$ the set of weights:
$${\mathfrak W}(\psi)=\{m\in {\mathbb Z}|\ U^{\psi}_m\ne\{0\}\}.
$$
The weight decomposition of $U$ reads
$$U=\bigoplus_{m\in{\mathbb Z}} U^{\psi}_m=\bigoplus_{m\in{\mathfrak W}(\psi)} U^{\psi}_m.
$$
For a vector $u\in U$ we denote by $u^\psi_m$ its $U^{\psi}_m$-component.
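To make the weight decomposition concrete, here is a small computational sketch (our own illustration, not part of the paper's formalism; encoding a diagonalized ${\mathbb C}^*$-representation $\psi$ by its list of integer weights is an assumption made only for this example):

```python
def weight_components(u, weights):
    """Decompose u into its weight components u_m for the C^*-representation
    psi(zeta) = diag(zeta^w_1, ..., zeta^w_n): group coordinates by weight."""
    comps = {}
    for i, (x, m) in enumerate(zip(u, weights)):
        comps.setdefault(m, [0] * len(u))[i] = x
    return comps

# psi(zeta) = diag(zeta^2, 1, 1, zeta^-1) on C^4
comps = weight_components([1, 3, 0, 5], [2, 0, 0, -1])
support = sorted(m for m, c in comps.items() if any(c))  # weights seen by u
```

Here `support` lists the weights $m$ with $u^\psi_m\ne 0$; the full weight set ${\mathfrak W}(\psi)$ is just the set of distinct entries of the weight list.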
Let $\alpha:G\to \mathrm {GL}(U)$ be a linear representation of a complex reductive group $G$ on $U$, and let $\chi:G\to{\mathbb C}^*$ be a character of $G$. A vector $u\in U$ is called
\begin{itemize}
\item $\chi$-semistable, if there exists $n>0$ and
$$f\in {\mathbb C}[U]^G_{\chi^n}\coloneqq \{f\in {\mathbb C}[U]| f(gu)=\chi(g)^n f(u)\ \forall g\in G\ \forall u\in U\}$$
such that $f(u)\ne 0$.
\item $\chi$-stable if it is $\chi$-semistable, the stabilizer $G_u$ is finite, and the orbit $Gu$ is closed in the Zariski open subset of $\chi$-semistable vectors.
\item $\chi$-unstable, if it is not $\chi$-semistable.
\end{itemize}
We will denote by $U_{\mathrm {st}}^\chi$, $U_{\mathrm {ss}}^\chi$, $U_{\mathrm {us}}^\chi$ the sets of $\chi$-stable, $\chi$-semistable and $\chi$-unstable points of $U$.
The $\chi$-(semi)stability condition coincides with the classical GIT (semi)stability condition associated with the linearization of $\alpha$ in the $G$-line bundle ${\mathcal O}_U\otimes{\chi^{-1}}$ \cite[Lemma 2.4]{Ho}.
For a morphism of algebraic groups $\xi:{\mathbb C}^*\to G$ let $\langle \chi,\xi\rangle\in{\mathbb Z}$ be the degree of the composition $\chi\circ\xi:{\mathbb C}^*\to{\mathbb C}^*$. The limit $\lim_{\zeta\to 0} (\alpha\circ\xi)(\zeta)(u)$ exists if and only if $u\in \bigoplus_{m\geq 0} U^{\alpha\circ\xi}_m$. Define
$$\mu^\chi(u,\xi)\coloneqq \left\{
\begin{array}{ccc}
\infty &\rm if & \exists m\in{\mathbb Z}_{<0} \hbox{ such that } u^{\alpha\circ\xi}_m\ne 0\\
\langle \chi,\xi\rangle & \rm if & u\in \bigoplus_{m\in{\mathbb Z}_{\geq 0}} U^{\alpha\circ\xi}_m.
\end{array}
\right.
$$
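In coordinates where $\alpha\circ\xi$ acts diagonally, $\mu^\chi(u,\xi)$ can be evaluated mechanically; a minimal sketch (our own toy encoding, with the one-parameter subgroup given by its weight list and the pairing $\langle\chi,\xi\rangle$ passed as an integer):

```python
import math

def mu_chi(u, xi_weights, chi_pairing):
    """mu^chi(u, xi) for a diagonalized one-parameter subgroup: +infinity if u
    has a nonzero component of negative weight, else the pairing <chi, xi>."""
    if any(x != 0 and m < 0 for x, m in zip(u, xi_weights)):
        return math.inf
    return chi_pairing

# alpha(xi(zeta)) = diag(zeta^-1, zeta^2), <chi, xi> = 5
assert mu_chi([1, 0], [-1, 2], 5) == math.inf   # u hits the negative weight
assert mu_chi([0, 7], [-1, 2], 5) == 5          # u lies in non-negative weights
```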
Denote by $e$ the unit element of $G$ and also the trivial morphism ${\mathbb C}^*\to G$. Using the Hilbert-Mumford stability criterion for linear actions (\cite{He}, \cite{Ho}, \cite{Te}) we obtain
\begin{pr}\label{HM}
A vector $u\in U$ is $\chi$-stable ($\chi$-semistable) if and only if for any $\xi\in \mathrm{Hom}({\mathbb C}^*,G)\setminus\{e\}$ for which $\langle \chi,\xi\rangle\leq 0$ (respectively $\langle\chi,\xi\rangle < 0$) there exists $m\in {\mathbb Z}_{<0}$ such that $u^{\alpha\circ\xi}_m\ne 0$.
\end{pr}
Equivalently,
\begin{re} A vector $u\in U$ is $\chi$-unstable if and only if there exists a morphism $\xi:{\mathbb C}^*\to G$ such that $\langle \chi,\xi\rangle < 0$ and
$u\in \bigoplus_{m\in {\mathbb Z}_{\geq 0}}U^{\alpha\circ\xi}_m$.
\end{re}
Let ${\mathfrak D}:\mathrm{Hom}({\mathbb C}^*,G)\to {\mathfrak g}$ be the injective map defined by the condition
${\mathfrak D}(\xi)=d_1\xi(1)$, where $d_1\xi:\mathrm{Lie}({\mathbb C}^*)={\mathbb C}\to{\mathfrak g}$ stands for the differential of $\xi$ at 1. Note that for any torus $T\subset G$ the image ${\mathfrak D}(\mathrm{Hom}({\mathbb C}^*,T))\subset {\mathfrak g}$ is a free ${\mathbb Z}$-submodule of rank $\dim(T)$.
Let $\llangle\cdot,\cdot\rrangle:{\mathfrak g}\times{\mathfrak g}\to{\mathbb C}$ be an $\mathrm {ad}_G$-invariant symmetric, bilinear form on the Lie algebra ${\mathfrak g}$ of $G$ with the property that for every torus $T\subset G$, its restriction
%
$$
{\mathfrak D}(\mathrm{Hom}({\mathbb C}^*,T))\times {\mathfrak D}(\mathrm{Hom}({\mathbb C}^*,T))\to{\mathbb C}
$$
is an inner product with rational coefficients (see \cite[section 2.1]{He}). This condition implies
%
$$\llangle {\mathfrak D}(\xi) ,{\mathfrak D}(\xi)\rrangle\in [0,\infty)\cap {\mathbb Q} \ \ \forall \xi\in\mathrm{Hom}({\mathbb C}^*,G).$$
%
We obtain an $\mathrm {ad}_G$-invariant norm on $\mathrm{Hom}({\mathbb C}^*,G)$ given by
%
$$\vertiii{\xi}\coloneqq \sqrt{\llangle {\mathfrak D}(\xi) ,{\mathfrak D}(\xi)\rrangle}.
$$
Let $u\in U_{\mathrm {us}}^\chi$. An indivisible morphism $\xi\in \mathrm{Hom}({\mathbb C}^*,G)\setminus \{e\}$ is called an optimal destabilizing morphism for $u$ if it realizes the negative minimum of the map $\beta\mapsto \frac{1}{\vertiii{\beta}}\mu^\chi(u,\beta)$, i.e. if
%
$$\frac{\langle \chi,\xi\rangle }{\vertiii{\xi}}=\inf \bigg\{\frac{\langle \chi,\beta\rangle }{\vertiii{\beta}}\ \vline\ \beta\in \mathrm{Hom}({\mathbb C}^*,G)\setminus \{e\},\ u\in \bigoplus_{m\in {\mathbb Z}_{\geq 0}}U^{\alpha\circ\beta}_m \bigg\}.
$$
For a vector $u\in U^\chi_\mathrm {us}$ let $\Xi^\chi(u)$ be the set of optimal destabilizing morphisms for $u$. Recall that any morphism $\xi\in \mathrm{Hom}({\mathbb C}^*,G)$ defines a parabolic subgroup $P(\xi)\subset G$ given by
$$P(\xi)\coloneqq \{g\in G|\ \lim_{\zeta\to 0} \xi(\zeta) g \xi(\zeta)^{-1}\hbox{ exists}\}.
$$
By a result of Kempf \cite{Ke1} the parabolic subgroup $P^\chi(u)$ associated with an element $\xi\in \Xi^\chi(u)$ is independent of $\xi$. Moreover, the set $\Xi^\chi(u)$ is an orbit with respect to the action of $P^\chi(u)$ by conjugation on $\mathrm{Hom}({\mathbb C}^*,G)\setminus \{e\}$.
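For $G=\mathrm{GL}_n$ and $\xi(\zeta)=\mathrm{diag}(\zeta^{w_1},\dots,\zeta^{w_n})$, conjugation scales the $(i,j)$ entry of $g$ by $\zeta^{w_i-w_j}$, so the limit defining $P(\xi)$ exists precisely when the entries with $w_i<w_j$ vanish, i.e. $P(\xi)$ is block upper triangular. A small sketch of this membership test (our own illustration):

```python
def in_parabolic(g, w):
    """g in P(xi) for xi(zeta) = diag(zeta^w_1, ..., zeta^w_n): conjugation
    scales g[i][j] by zeta^(w[i]-w[j]), so the limit at zeta -> 0 exists iff
    g[i][j] = 0 whenever w[i] < w[j]."""
    n = len(w)
    return all(g[i][j] == 0 for i in range(n) for j in range(n) if w[i] < w[j])

w = [1, 0]                                    # xi(zeta) = diag(zeta, 1)
assert in_parabolic([[1, 2], [0, 3]], w)      # upper triangular: limit exists
assert not in_parabolic([[1, 0], [5, 3]], w)  # entry of weight -1 survives
```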
\begin{ex}\label{ExGrass}
Let $V$, $Z$ be non-trivial, finite dimensional complex vector spaces, and let $\alpha$ be the standard representation of $\mathrm {GL}(Z)$ on $\mathrm{Hom}(V,Z)$. The set of characters of $\mathrm {GL}(Z)$ is $\{\det^t|\ t\in{\mathbb Z}\}$. Denoting by $\mathrm{Hom}(V,Z)^{\rm epi}\subset \mathrm{Hom}(V,Z)$ the open subspace of epimorphisms, one has
$$\mathrm{Hom}(V,Z)_{\mathrm {ss}}^{\det^t}=\left\{
\begin{array}{ccc}
\mathrm{Hom}(V,Z)^{\rm epi} &\rm if & t>0\\
\mathrm{Hom}(V,Z) &\rm if & t=0\\
\emptyset &\rm if & t<0
\end{array}
\right..
$$
The bilinear map
$$\mathrm {gl}(Z)\times\mathrm {gl}(Z)\ni (x,y)\mapsto \llangle x,y\rrangle\coloneqq \mathrm {Tr}(xy)
$$
is $\mathrm {ad}_{\mathrm {GL}(Z)}$-invariant, and satisfies the rationality condition mentioned above.
\begin{itemize}
\item Let $t>0$. A morphism $\xi\in \mathrm{Hom}({\mathbb C}^*,G)\setminus \{e\}$ is optimal destabilizing for
$$u\in \mathrm{Hom}(V,Z)\setminus \mathrm{Hom}(V,Z)^{\rm epi}=\mathrm{Hom}(V,Z)_{\mathrm {us}}^{\det^t}$$
if and only if there exists a complement $\Gamma$ of $I\coloneqq \mathrm{im}(u)$ in $Z$ such that, with respect to the direct sum decomposition $Z=I\oplus \Gamma$, one has
\begin{equation}\label{optimal+GR}\xi(\zeta)=\begin{pmatrix}
\mathrm{id}_I & 0\\
0 & \zeta^{-1} \mathrm{id}_\Gamma
\end{pmatrix} \ \forall \zeta\in{\mathbb C}^*.
\end{equation}
In this case one has $\mathrm{Hom}(V,Z)_{\mathrm {ss}}^{\det^t}=\mathrm{Hom}(V,Z)_{\mathrm {st}}^{\det^t}$, and the group $\mathrm {GL}(Z)$ acts freely on this space. Putting $N\coloneqq \dim(V)$, $k\coloneqq \dim(Z)$ the corresponding GIT quotient is
$$\qmod{\mathrm{Hom}(V,Z)^{\rm epi}}{\mathrm {GL}(Z)}=\mathrm{Gr}_{N-k}(V)=\mathrm{Gr}_{k}(V^\smvee).
$$
\item Let $t<0$. In this case any vector $u\in \mathrm{Hom}(V,Z)$ is $\det^t$-unstable, and has a unique optimal destabilizing morphism, namely $\zeta\mapsto\zeta\, \mathrm{id}_Z$.
\end{itemize}
\end{ex}
\begin{pr} \label{new} Let $\alpha:G\to \mathrm {GL}(U)$, $\beta:G\to \mathrm {GL}(F)$ be linear representations of $G$, and $\chi\in \mathrm{Hom}(G,{\mathbb C}^*)$ a character. Then
\begin{enumerate}
\item $U_{\mathrm {ss}}^\chi\times F\subset (U\times F)_{\mathrm {ss}}^\chi$,
\item Suppose that for every $u\in U_\mathrm {us}^\chi$ and $\xi\in\Xi^\chi(u)$ one has ${\mathfrak W}(\beta\circ \xi)\subset{\mathbb Z}_{\geq 0}$. Then
$$U_{\mathrm {ss}}^\chi\times F= (U\times F)_{\mathrm {ss}}^\chi\ .
$$
\end{enumerate}
\end{pr}
\begin{proof}
Let $\alpha\beta:G\to \mathrm {GL}(U\times F)$ be the morphism induced by the pair $(\alpha,\beta)$. \\
(1) The first claim follows from Proposition \ref{HM} using the equality
%
$$(U\times F)^{\alpha\beta\circ\xi}_m=U^{\alpha\circ\xi}_m\times F^{\beta\circ\xi}_m\ \forall m\in{\mathbb Z}.$$
(2) Taking into account (1) it suffices to prove the inclusion $(U\times F)_{\mathrm {ss}}^\chi\subset U_{\mathrm {ss}}^\chi\times F$ or, equivalently, $ U_{\mathrm {us}}^\chi\times F\subset (U\times F)_{\mathrm {us}}^\chi$. Let $(u,y)\in U^\chi_\mathrm {us}\times F$. We claim that any optimal destabilizing morphism $\xi\in \Xi^\chi(u)$ destabilizes the pair $(u,y)$. Indeed, since $\xi$ destabilizes $u$, it follows that $\langle \chi,\xi\rangle <0$, and $u\in \bigoplus_{m\in {\mathbb Z}_{\geq 0}}U^{\alpha\circ\xi}_m$. By assumption one has $F=\bigoplus_{m\in {\mathbb Z}_{\geq 0}} F^{\beta\circ \xi}_m$, so $(u,y)\in \bigoplus_{m\in {\mathbb Z}_{\geq 0}}(U\times F)^{\alpha\beta\circ\xi}_m$.
\end{proof}
\subsection{GLSM presentations}
Let $B$ be a complex projective manifold, $\pi:E\to B$ a rank $r$ vector bundle on $B$, and $s\in\Gamma(E^\smvee)$ a section in the dual bundle.
\begin{dt}\label{GLSMDef}
An algebraic geometric GLSM presentation of the pair $(E\stackrel{\pi}{\to}B,s)$ is the data of a 4-tuple $(G,U,F,\chi)$, where $G$ is a complex reductive group, $U$ and $F$ are finite dimensional $G$-representation spaces, and $\chi\in \mathrm{Hom}(G,{\mathbb C}^*)$ is a character such that the following conditions are satisfied:
\begin{enumerate}
\item $U_\mathrm {st}^\chi=U_{\mathrm {ss}}^\chi$ and $G$ acts freely on this set.
\item The base $B$ coincides with the quotient $U_{\mathrm {ss}}^\chi/G$, and the vector bundle $E$ is the $F$-bundle associated with the principal $G$-bundle $p:U_{\mathrm {ss}}^\chi\to B$ and the representation space $F$.
\item \label{signs} Any optimal destabilizing morphism of any unstable point $u\in U^\chi_\mathrm {us}$ acts with non-negative weights on $F$.
\item \label{extension-eq} The $G$-equivariant map $\hat s:U_{\mathrm {ss}}^\chi\to F^\smvee$ corresponding to $s$ extends to a $G$-equivariant polynomial map $\sigma:U\to F^\smvee$.
\end{enumerate}
\end{dt}
Note that the map $\sigma:U\to F^\smvee$ is a covariant extension of the $G$-equivariant map $\hat s:U_{\mathrm {ss}}^\chi\to F^\smvee$ corresponding to $s$.
Many interesting manifolds (e.g. Grassmann manifolds, Flag manifolds, projective toric manifolds) can be obtained as quotients of the form $B=U^\chi_\mathrm {ss}/G$ for a pair $(U,\chi)$ satisfying condition (1) in Definition \ref{GLSMDef}. For such a quotient manifold $B$ one obtains submanifolds $X\subset B$ defined as zero loci of regular sections $s$ in associated bundles of the form $E^\smvee= U^\chi_\mathrm {ss}\times_G F^\smvee$. This very general construction method yields a large class of algebraic manifolds with interesting properties (see section \ref{ExSect}).
To explain the role of the fourth condition in this definition, note first that giving a $G$-covariant map $\sigma:U\to F^\smvee$ is equivalent to giving a $G$-invariant polynomial map $\sigma^\smvee: U\times F\to {\mathbb C}$ which is linear with respect to the second argument. The map $\sigma^\smvee$ is given by
$$\sigma^\smvee (u,z)=\langle \sigma(u),z\rangle.
$$
The bundle $E$ is the associated bundle $U_\mathrm {ss}^\chi\times_G F$. Denote by $q:U_\mathrm {ss}^\chi\times F\to E$ the projection map. Then $\resto{\sigma^\smvee}{U_\mathrm {ss}^\chi\times F}=s^\smvee\circ q$, so $\sigma^\smvee$ is an extension of the pull back of the potential $s^\smvee:E\to{\mathbb C}$ intervening in the gauged LG model associated with $s$ (see section \ref{LGmodels}).
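As a toy instance of this construction (our own example, in the setting of Example \ref{ExGrass}): for $G=\mathrm {GL}(Z)$ acting on $U=\mathrm{Hom}(V,Z)$ with $k=\dim Z$, the Pl\"ucker covariant $\sigma(u)=\Lambda^k u$ yields a potential $\sigma^\smvee(u,z)=\langle\sigma(u),z\rangle$ which is linear in $z$ and homogeneous of degree $k$ in $u$; in coordinates it is a $z$-weighted sum of maximal minors. A sketch:

```python
import numpy as np
from itertools import combinations

def sigma_dual(u, z):
    """Toy potential sigma^vee(u, z) = <Lambda^k u, z> for the Plucker
    covariant sigma(u) = Lambda^k u: u is a k x N matrix, z assigns a
    coefficient to each k-element column subset, and the pairing is the
    z-weighted sum of the corresponding maximal minors."""
    k, N = u.shape
    return sum(z.get(S, 0) * np.linalg.det(u[:, list(S)])
               for S in combinations(range(N), k))

u = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])   # k = 2, N = 3
z = {(0, 1): 2.0}                                  # a scaled dual basis vector
assert abs(sigma_dual(u, z) - 2.0) < 1e-12
assert abs(sigma_dual(3 * u, z) - 9 * 2.0) < 1e-9  # degree k = 2 in u
```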
In many cases the complement of $U_\mathrm {st}^\chi$ in $U$ has codimension $\geq 2$, hence a regular $G$-equivariant extension $\sigma:U\to F^\smvee$ of $\hat s$ exists automatically. In this way we obtain a large class of 4-tuples satisfying Definition \ref{GLSMDef}.
Condition (\ref{signs}) in Definition \ref{GLSMDef} states precisely that, for every $u\in U_\mathrm {us}^\chi$ and any $\xi\in \Xi^\chi(u)$, the induced ${\mathbb C}^*$-representation on $F$ has only non-negative weights. Therefore, by Proposition \ref{new} we obtain
\begin{pr} \label{SSProd}
Let $(G,U,F,\chi)$ be an algebraic geometric GLSM presentation of $(E\textmap{\pi} B,s)$ with $s\in \Gamma(E^\smvee)$. Then
\begin{equation}\label{first-char}
U_{\mathrm {ss}}^\chi\times F=(U\times F)_{\mathrm {ss}}^\chi,
\end{equation}
so the bundle $E=U_{\mathrm {ss}}^\chi\times_G F$ coincides with the GIT quotient $(U\times F)_{\mathrm {ss}}^\chi/G$.
\end{pr}
We will see that this proposition has important consequences (see Proposition \ref{KempfProp} in the next section).
\subsection{Homogeneous GLSM presentations}
\label{HomGLSMSect}
In this section we introduce a homogeneity condition for algebraic geometric GLSM presentations which has interesting geometric consequences.
Let $B$ be a complex projective manifold, $\pi:E\to B$ a rank $r$ vector bundle on $B$, and $s\in\Gamma(E^\smvee)$ a section in the dual bundle.
%
\begin{dt}\label{HomGLSMDef}
An algebraic geometric GLSM presentation $(G,U,F,\chi)$
of the pair $(E\textmap{\pi} B,s)$ will be called homogeneous, if there exists a right action $\gamma:U\times{\mathcal G}\to U$ by linear automorphisms of a connected, complex reductive group ${\mathcal G}$ on $U$ such that:
\begin{enumerate}
\item $\gamma$ commutes with the fixed $G$-action on $U$.
%
\item The induced action $\gamma_B:B\times{\mathcal G}\to B$ on the quotient $B=U^\chi_\mathrm {ss}/G$ is transitive.
\item Let ${\mathcal P}_0\subset {\mathcal G}$ be the stabilizer of a fixed point $b_0=Gu_0\in B$. The group morphism $\lambda_0:{\mathcal P}_0\to G$ defined by
%
$$\lambda_0(y)u_0= u_0 y,\ \forall y\in {\mathcal P}_0$$
%
is surjective.
\end{enumerate}
\end{dt}
Note that the third condition is independent of the pair $(b_0,u_0)$. Since the quotient ${\mathcal P}_0\backslash{\mathcal G}\simeq B$ is projective, it follows that ${\mathcal P}_0$ is a parabolic subgroup of ${\mathcal G}$. On the other hand, the first condition implies that $U^\chi_\mathrm {ss}$ is ${\mathcal G}$-invariant, and the induced ${\mathcal G}$-action on $U^\chi_\mathrm {ss}$ induces a fibrewise linear action (which lifts $\gamma_B$) on any vector bundle on $B$ which is associated with the principal $G$-bundle $U^\chi_\mathrm {ss}\to B$. In other words, any such associated vector bundle is naturally a homogeneous vector bundle on the ${\mathcal G}$-manifold $B$. In particular $E$, $E^\smvee$ become homogeneous vector bundles on $B$, and $H^0(E^\smvee)^\smvee$ is naturally a representation space of ${\mathcal G}$.
\vspace{2mm}
Let $B$ be a projective variety, and $\pi:E\to B$ be a vector bundle on $B$ such that $E^\smvee$ is globally generated. This implies that the evaluation map $\vartheta:E\to H^0(E^\smvee)^\smvee$ is fibrewise injective, so it identifies $E$ with a subbundle of the trivial bundle $B\times H^0(E^\smvee)^\smvee$. Therefore, putting $C(E)\coloneqq \mathrm{im}(\vartheta)$ we obtain a commutative diagram
\begin{equation}\label{DiagEKempf}
\begin{tikzcd}
E \ar[d, two heads, "\rho" '] \ar[r, hook] \ar[dr, "\vartheta"] & B\times H^0(E^\smvee)^\smvee \ar[d, two heads]\\
C(E) \ar[r, hook] & H^0(E^\smvee)^\smvee
\end{tikzcd}
\end{equation}
where $\rho$ is induced by $\vartheta$. Recall (see Remark \ref{AffineConeRem}) that $C(E)$ coincides with the affine cone over the projective variety
$$S(E)\coloneqq {\mathbb P}(\vartheta)({\mathbb P}(E))\subset {\mathbb P}(H^0(E^\smvee)^\smvee).
$$
\begin{pr}\label{KempfProp}
Let $(G,U,F,\chi)$ be a homogeneous, algebraic geometric GLSM presentation of $(E\textmap{\pi} B,s)$. Suppose that $E^\smvee$ is globally generated.
Then
\begin{enumerate}
\item The cone $C(E)\subset H^0(E^\smvee)^\smvee$ is a Cohen-Macaulay normal variety.
\item Suppose that $\dim(C(E))=\dim(E)$. Then the morphism $\rho: E \to C(E)$ induced by the proper morphism $\vartheta: E\to H^0(E^\smvee)^\smvee$ is birational, and $C(E)$ has rational singularities.
\item Suppose that $\dim(C(E))=\dim(E)$ and $\mathrm{codim}(U_\mathrm {us}^\chi)\geq 2$. Then there exists an isomorphism $\eta:U\times F/\hskip-2pt/ G\textmap{\simeq} C(E)$ such that the diagram
$$
\begin{tikzcd}
U\times F/\hskip-2pt/_\chi G \ar[r, "\simeq"] \ar[d, "q^\chi"'] & E \ar[d, "\rho"] \\
U\times F/\hskip-2pt/ G \ar[r, "\simeq", "\eta"']&C(E)
\end{tikzcd}
$$
is commutative, in particular the cone $C(E)$ can be identified with the affine GIT quotient $\mathrm{Spec}({\mathbb C}[U\times F]^G)$.
\end{enumerate}
\end{pr}
\begin{proof}
(1) The linear subspace $\vartheta(E_{b_0})\subset H^0(E^\smvee)^\smvee$ is ${\mathcal P}_0$-invariant. Moreover, the left ${\mathcal P}_0$-action on $\vartheta(E_{b_0})$ is induced by the $G$-action on $\{u_0\}\times F\simeq F$ via the group morphism $\lambda_0$ intervening in Definition \ref{HomGLSMDef}. Since $\lambda_0$ is surjective, and $G$ is reductive, it follows that the ${\mathcal P}_0$ representation space $\vartheta(E_{b_0})$ is completely reducible. Since ${\mathcal G}$ acts transitively on $B$ one has $E={\mathcal G}E_{b_0}$, so
$$C(E)\coloneqq \vartheta(E)=\vartheta({\mathcal G}E_{b_0})={\mathcal G}\vartheta(E_{b_0}).
$$
The first claim follows now from \cite[Theorem 0]{Ke}.
\vspace{2mm}\\
(2) Since $\dim(C(E))=\dim(E)$ it follows by \cite[Proposition 2(c)]{Ke} that $\rho$ is birational, so the claim follows from the second statement of \cite[Theorem 0]{Ke}.
\vspace{2mm}\\
(3) Since $\mathrm{codim}(U\setminus U^\chi_\mathrm {ss})\geq 2$, we also have $\mathrm{codim}\big((U\times F)\setminus (U\times F)^\chi_\mathrm {ss}\big)\geq 2$ by Proposition \ref{SSProd}, so the composition
$$(U\times F)^\chi_\mathrm {ss}\to E\textmap{\vartheta} H^0(E^\smvee)^\smvee $$
extends to a $G$-invariant morphism $U\times F\to H^0(E^\smvee)^\smvee$. Therefore we obtain a $G$-invariant, surjective extension
$$\tilde \Sigma: U\times F\to C(E)
$$
of the composition $\Sigma:(U\times F)^\chi_\mathrm {ss}\to E\textmap{\rho} C(E)$. We claim that the morphism
$$\eta: U\times F/\hskip-2pt/ G\to C(E)$$
induced by $\tilde \Sigma$ is an isomorphism. Since $\eta$ is a morphism of affine schemes, it suffices to prove that the induced ring morphism
$$\eta^*:{\mathbb C}[C(E)]=H^0({\mathcal O}_{C(E)})\to {\mathbb C}[U\times F]^G$$
is an isomorphism. The inclusion morphism $j: (U\times F)^\chi_\mathrm {ss}\hookrightarrow U\times F$ induces a restriction monomorphism
$$ j^*:{\mathbb C}[U\times F]^G\to {\mathbb C}[(U\times F)^\chi_\mathrm {ss}]^G=H^0({\mathcal O}_{E}).$$
The composition $j^*\circ \eta^*: H^0({\mathcal O}_{C(E)})\to H^0({\mathcal O}_{E})$ coincides with the morphism $\rho^*$ induced by $\rho$. On the other hand, since $\rho$ is proper and birational, {\it and $C(E)$ is normal}, it follows that $\rho_*({\mathcal O}_{E})={\mathcal O}_{C(E)}$. Therefore $\rho^*=j^*\circ \eta^*$ is an isomorphism. Since $j^*$ is a monomorphism, it follows that $j^*$ and $\eta^*$ are both isomorphisms.
\end{proof}
Let $G$ be a connected reductive group, $W$ a finite dimensional $G$-representation space. The set of stable points with respect to the trivial character of $G$ is
$$W_\mathrm {st}=\{w\in W|\ Gw \hbox{ is closed},\ G_w\hbox { is finite}\}.
$$
This set is Zariski open in $W$.
\begin{lm}\label{GorCritLm} (Gorenstein criterion) Suppose that $\mathrm{codim}(W\setminus W_\mathrm {st})\geq 2$, and the $G$-representation $\det W$ is trivial. Then ${\mathbb C}[W]^G$ is a Gorenstein ring.
\end{lm}
\begin{proof}
The closed set
$$W_s\coloneqq \{w\in W|\ \dim(G_w)>0\}
$$
defined in \cite[p. 40]{Kn2} is contained in $W\setminus W_\mathrm {st}$, so the assumption $\mathrm{codim}(W\setminus W_\mathrm {st})\geq 2$ implies $\mathrm{codim}(W_s)\geq 2$. The same condition also implies that $W_\mathrm {st}\ne\emptyset$, in particular a general $G$-orbit in $W$ is closed. Since $G$ is connected, the determinant $\det(\mathrm {ad})$ of the adjoint representation is trivial (because any connected reductive group is unimodular), so $\det(W)=\det(\mathrm {ad})$. The claim follows from Knop's criterion \cite[Satz 2, p. 41]{Kn2}.
\end{proof}
See \cite[5.1]{SVdB} for further remarks. Using Proposition \ref{KempfProp}, a well-known Cohen-Macaulay criterion for affine GIT quotients \cite[Corollaire, p. 66]{Bout} and Lemma \ref{GorCritLm}, we obtain:
\begin{co} \label{PropertiesC(E)}
Let $(G,U,F,\chi)$ be a homogeneous, algebraic geometric GLSM presentation of $(E\textmap{\pi} B,s)$. Suppose that $E^\smvee$ is globally generated, $\dim(E)=\dim(C(E))$ and $\mathrm{codim}(U_\mathrm {us}^\chi)\geq 2$. Then
\begin{enumerate}
\item The cone $C(E)$ is normal, Cohen-Macaulay, has rational singularities, and is canonically isomorphic to the affine quotient $\mathrm{Spec}({\mathbb C}[U\times F]^G)$.
\item Suppose that $G$ is connected, the $G$-representation $\det(U\times F)$ is trivial, and $\mathrm{codim}\big((U\times F)\setminus(U\times F)_\mathrm {st}\big)\geq 2$. Then $C(E)$ is also Gorenstein.
\end{enumerate}
\end{co}
Taking into account Remark \ref{AffineConeRem} we obtain
\begin{co}\label{PropertiesS(E)}
Let $(G,U,F,\chi)$ be a homogeneous, algebraic geometric GLSM presentation of $(E\textmap{\pi} B,s)$. Suppose that $E^\smvee$ is globally generated, $\dim(E)=\dim(C(E))$ and $\mathrm{codim}(U_\mathrm {us}^\chi)\geq 2$. Then the projective variety $S(E)\coloneqq \mathrm{im}({\mathbb P}(\vartheta))\subset {\mathbb P}(H^0(E^\smvee)^\smvee)$ has the following properties:
\begin{enumerate}
\item $S(E)$ is projectively normal, arithmetically Cohen-Macaulay and its affine cone has rational singularities.
\item If $G$ is connected, the 1-dimensional $G$-representation $\det(U\times F)$ is trivial, and $\mathrm{codim}\big((U\times F)\setminus(U\times F)_\mathrm {st}\big)\geq 2$, then $S(E)$ is also arithmetically Gorenstein.
\end{enumerate}
\end{co}
\subsection{Geometric applications}\label{ExSect}
\subsubsection{A general set up}
In this section we identify an important class of examples to which Theorem \ref{FourthEq} can be applied. Let $V$, $Z$ be complex vector spaces of dimensions $N$, $k$ respectively with $1\leq k< N$. Choose $r\in {\mathbb N}_{>0}$, $(d_1,\dots,d_r)\in {\mathbb N}_{>0}^r$, and for any $1\leq i\leq r$, choose a $\mathrm {GL}(Z)$-invariant subspace $F_i$ of the tensor power $\otimes^{d_i}Z^\smvee$. Thus $F_i^\smvee$ is a polynomial $\mathrm {GL}(Z)$-representation of degree $d_i$ \cite[5.3, 5.8, 5.9]{KrPr}. Put
$$F\coloneqq \bigoplus_{i=1}^r F_i.$$
The class of these $\mathrm {GL}(Z)$-representations coincides with the class of duals of finite dimensional polynomial representations of $\mathrm {GL}(Z)$, i.e. each such $F^\smvee$ is isomorphic to a finite direct sum of Schur modules
$$F^\smvee=\bigoplus_{\lambda\in P(k)} N_\lambda\otimes S^\lambda Z.
$$
Here $P(k)$ denotes the set of partitions $\lambda=(\lambda_1,\cdots,\lambda_k)$ with $\lambda_1\geq\cdots\geq\lambda_k\geq 0$.
Now consider the $\mathrm {GL}(Z)$ representation $U=\mathrm{Hom}(V,Z)$. Choosing $t\in {\mathbb N}_{>0}$ we have
$$U^{\det^t}_\mathrm {ss}=U^{\det^t}_\mathrm {st}=\mathrm{Hom}(V,Z)^{\rm epi},$$
and the quotient $U^{\det^t}_\mathrm {ss}/\mathrm {GL}(Z)$ can be identified with the Grassmannian $\mathrm{Gr}_{k}(V^\smvee)$ (see Example \ref{ExGrass}). Denote by $E$ the vector bundle associated with the principal $\mathrm {GL}(Z)$-bundle $U^{\det^t}_\mathrm {ss}\to \mathrm{Gr}_{k}(V^\smvee)$ and the representation $F= \bigoplus_{\lambda\in P(k)} N_\lambda^\smvee\otimes S^\lambda Z^\smvee$, and by $E_i$ the associated bundle with fiber $F_i$. Then
%
$$E=\bigoplus_{\lambda\in P(k)} N_\lambda^\smvee\otimes S^\lambda T, $$
where $T$ is the tautological subbundle of $\mathrm{Gr}_k(V^\smvee)$. By the Borel-Weil theorem we get
$$H\coloneqq H^0(\mathrm{Gr}_k(V^\smvee), {\mathcal E}^\smvee)=\bigoplus_{\lambda\in P(k)} N_\lambda\otimes S^\lambda V.
$$
The condition $1\leq k<N$ implies $\mathrm{codim}(U^{\det^t}_\mathrm {us})\geq 2$, so
$$H^0(\mathrm{Gr}_{k}(V^\smvee),E_i^\smvee)={\rm Map}_{\mathrm {GL}(Z)}(U^{\det^t}_\mathrm {ss},F_i^\smvee)={\rm Map}_{\mathrm {GL}(Z)}(U,F_i^\smvee),$$
where ${\rm Map}_{\mathrm {GL}(Z)}$ stands for the set of $\mathrm {GL}(Z)$-equivariant regular maps. Any $\mathrm {GL}(Z)$-equivariant regular map $U\to F_i^\smvee$ is homogeneous of degree $d_i$.
Therefore the data of a section $s\in H^0(\mathrm{Gr}_{k}(V^\smvee),E^\smvee)$ is equivalent to the data of a system $\sigma=(\sigma_1,\dots,\sigma_r)$ of
homogeneous covariants $\sigma_i\in ({\mathbb C}[U]\otimes F_i^\smvee)^{\mathrm {GL}(Z)}$ on $U$ of type $F_i^\smvee$ \cite[p. 9]{KrPr}. This system can be regarded as a covariant of type $F^\smvee$ on $U$ which extends the $\mathrm {GL}(Z)$-equivariant map $\hat s:U^{\det^t}_\mathrm {ss}\to F^\smvee$ associated with $s$. On the other hand, using formula (\ref{optimal+GR}) it follows that any optimal destabilizing element $\xi\in \Xi^{\det^t}(u)$ of an unstable point $u\in U_{\mathrm {us}}^{\det^t}$ acts with non-negative weights on tensor powers $\otimes^{d_i}Z^\smvee$, so also on $F_i$. Therefore condition (\ref{signs}) in Definition \ref{GLSMDef} is satisfied, hence the 4-tuple
$$(\mathrm {GL}(Z),\mathrm{Hom}(V,Z),F=\oplus_{i=1}^rF_i, {\det}^t) $$
is a GLSM presentation of the pair $(E\to \mathrm{Gr}_k(V^\smvee),s)$.
The obvious right $\mathrm {GL}(V)$-action on $U=\mathrm{Hom}(V,Z)$ satisfies the conditions of Definition \ref{HomGLSMDef}, hence any such GLSM presentation is homogeneous. We will apply the general results proved in sections \ref{TiltingIH} and \ref{HomGLSMSect} to this class of GLSM's.
\begin{re}
The cones $C(S^\lambda T)\subset S^\lambda V^\smvee$ are the higher rank varieties first studied by O. Porras \cite{Po}, and later by J. Weyman \cite[Chapter 7]{We}. We refer to the more general cones
$$C(\bigoplus_{\lambda\in P(k)} N_\lambda^\smvee\otimes S^\lambda T)\subset \bigoplus_{\lambda\in P(k)} N_\lambda^\smvee \otimes S^\lambda V^\smvee$$
as generalized higher rank varieties.
\end{re}
Let $P(k,N-k)\subset P(k)$ be the set of partitions $(\alpha_1,\cdots,\alpha_k)$ with $\alpha_1\leq N-k$. Kapranov \cite{Ka} has shown that the collection of locally free sheaves $$(S^\alpha {\mathcal U}^\smvee)_{\alpha\in P(k,N-k)}$$ is a full strongly exceptional collection on $\mathrm{Gr}_k(V^\smvee)$. The Kapranov bundle
$$
{\mathcal T}_0\coloneqq \bigoplus_{\alpha\in P(k,N-k)} S^\alpha {\mathcal U}^\smvee
$$
is therefore a tilting bundle on $\mathrm{Gr}_k(V^\smvee)$. Let $E=\bigoplus_{\lambda\in P(k)} N_\lambda^\smvee\otimes S^\lambda T$ be the bundle above, and denote by $\pi: E\to \mathrm{Gr}_k(V^\smvee)$ its bundle projection. Recall (Corollary \ref{co3} (3)) that $\pi^*({\mathcal T}_0)$ is a tilting object in $D^b(\mathrm{Qcoh} E)$ provided the following cohomology vanishing holds true:
\begin{equation}\label{eq:*}
H^i\big(\mathrm{Gr}_k(V^\smvee), {\mathcal T}_0^\smvee\otimes {\mathcal T}_0\otimes (\bigoplus_{m\geq 0} S^m {\mathcal E}^\smvee)\big)=0 \ \forall i>0.
\end{equation}
\begin{thry}\label{th:2-11} Let $E=\bigoplus_{\lambda\in P(k)} N_\lambda^\smvee\otimes S^\lambda T$, and let ${\mathcal T}_0$ be the Kapranov bundle on $\mathrm{Gr}_k(V^\smvee)$. Suppose that the cohomology vanishing condition (\ref{eq:*}) holds true, and let $s\in H^0(\mathrm{Gr}_k(V^\smvee),{\mathcal E}^\smvee)$ be a regular section. Then there exists an exact equivalence
$$D^b(\mathrm{coh} Z(s))\simeq D^{\mathrm{gr}}_{\mathrm{sg}}(\Lambda/ s\Lambda).$$
\end{thry}
\begin{proof}
We verify the assumptions of Theorem \ref{FourthEq}. We know that ${\mathcal T}_0$ is a tilting bundle on $\mathrm{Gr}_k(V^\smvee)$, and that ${\mathcal T}=\pi^*({\mathcal T}_0)$ is a tilting object in $D^b(\mathrm{Qcoh} E)$ provided the vanishing condition (\ref{eq:*}) holds. Clearly $H=\bigoplus_{\lambda\in P(k)} N_\lambda\otimes S^\lambda V$ generates ${\mathcal E}^\smvee$, so $E$ is a subbundle of the trivial bundle $\mathrm{Gr}_k(V^\smvee)\times H^\smvee$.
\end{proof}
Note that the cone $C(E)$ is normal and Cohen-Macaulay by Proposition \ref{KempfProp} (1). Moreover, assuming $\dim(C(E))=\dim(E)$, the Kempf collapsing $\rho:E\to C(E)$ is birational and $C(E)$ has rational singularities by Proposition \ref{KempfProp} (2).
\begin{re} In our examples in subsection \ref{sec:2-4-3}, the conditions (i) and (ii) can be found in the literature, or can be checked ``by hand". It is possible to give general sufficient conditions which imply (i) or (ii). For (i) one needs a Borel-Bott-Weil type argument as e.g. in \cite[Proposition 1.4]{BLVdB3}. For (ii) general results can be found in \cite[3.3.5, 3.3.6]{Po} or \cite[7.1.4]{We}.
\end{re}
\subsubsection{An algebraic description of \texorpdfstring{$\Lambda$}{str3} for higher rank varieties}
\label{sec:2-4-2}
Throughout this section $(V,k,\lambda,\sigma)$ denotes a 4-tuple consisting of a complex vector space $V$ of dimension $N$, an integer $k$ with $1\leq k <N$, a partition $\lambda\in P(k)$, and a tensor $\sigma\in S^\lambda V$. Let $s_\sigma\in H^0(\mathrm{Gr}_k(V^\smvee), S^\lambda {\mathcal U}^\smvee)$ be the section in $S^\lambda {\mathcal U}^\smvee$ defined by $\sigma$.
Recall that $\Lambda$ is the graded ${\mathbb C}$-algebra
$$\Lambda= \bigoplus_{m\geq 0} H^0(\mathrm{Gr}_k(V^\smvee), {\mathcal T}_0^\smvee\otimes {\mathcal T}_0\otimes S^m {\mathcal E}^\smvee),
$$
and the section $s_\sigma$ is an element of the commutative ring $R\subset \Lambda$,
$$R=\bigoplus_{m\geq 0} H^0(\mathrm{Gr}_k(V^\smvee), S^m {\mathcal E}^\smvee).
$$
In this section we show that in the case of higher rank varieties, $\Lambda$ and the ideal $s\Lambda$ have a purely algebraic description in terms of the initial data $(V,k,\lambda,\sigma)$ provided Theorem \ref{co10} and Theorem \ref{th:2-11} apply.
We have shown in Section \ref{Cequiv} that $\Lambda$ can then be identified with the endomorphism algebra $\mathrm {End}_R(M)$ of the graded module
$$M\coloneqq \bigoplus_{m\geq 0} H^0(\mathrm{Gr}_k(V^\smvee), {\mathcal T}_0\otimes S^m {\mathcal E}^\smvee).
$$
Therefore we need an algebraic description of $R$ as a graded ring, an identification of the element $s\in R_1$, and a description of $M$ as a graded $R$-module.
Let $S^\lambda V$ be the Schur representation of $\mathrm {GL}(V)$ defined by $\lambda\in P(k)$, and denote by $S=S^\bullet (S^\lambda V)=\bigoplus_{m\geq 0} S^m(S^\lambda V)$ the symmetric algebra of $S^\lambda V$ endowed with its natural grading.
\begin{pr} (Porras) \label{prop:2-12} The ideal $I_k$ defining the higher rank variety $C(S^\lambda T)$ consists of all representations $S^\mu V\subset S$, $\mu=(\mu_1,\cdots,\mu_t)$, with $t>k$. The graded ring $R$ is isomorphic to the graded quotient ring $S/I_k$.
\end{pr}
\begin{proof}
The first assertion is \cite[3.3.2]{Po}, the second is \cite[3.3.3]{Po}.
\end{proof}
Recall that $s_\sigma\in H^0(\mathrm{Gr}_k(V^\smvee), S^\lambda {\mathcal U}^\smvee)=R_1$ is given by the tensor $\sigma\in S^\lambda V$. Then $\bar \sigma\in S/I_k$ corresponds to $s_\sigma\in R$, hence $s_\sigma\Lambda$ is the ideal generated by $\bar \sigma\in R\subset\Lambda$.
In order to describe $M$ as a graded $R$-module we use the map
$$\varphi: S^{\lambda/1}V\otimes S\to V^\smvee\otimes S\langle 1\rangle
$$
of free graded $S$-modules defined in \cite[3.2.1]{Po}.
Consider the Kapranov bundle
$${\mathcal T}_0=\bigoplus_{\alpha\in P(k,N-k)} S^\alpha {\mathcal U}^\smvee, $$
and the graded $R$-module
$$M_\alpha=H^0(S^\lambda T, \pi^* S^\alpha {\mathcal U}^\smvee)=H^0(\mathrm{Gr}_k(V^\smvee), S^\alpha {\mathcal U}^\smvee\otimes S^\bullet(S^\lambda {\mathcal U}^\smvee)).
$$
We have $M=\bigoplus_{\alpha\in P(k,N-k)} M_\alpha$, so that it suffices to describe each $M_\alpha$ as a graded $R$-module.
\begin{pr}\label{prop:2-13} $M_\alpha$ is isomorphic to the image of the following morphism of free graded $R$-modules:
$$S^\alpha(\varphi^\smvee)\otimes R: S^\alpha V\otimes R\langle-1\rangle\to S^\alpha(S^{\lambda/1} V^\smvee)\otimes R.
$$
\end{pr}
\begin{proof}
The map $\varphi: S^{\lambda/1} V\otimes S\to V^\smvee \otimes S\langle 1\rangle$ corresponds to a map of trivial vector bundles over $S^\lambda V^\smvee$:
$$
\begin{tikzcd}
S^{\lambda/1}\underline{V} \ar[dr] \ar[rr, "\tilde\varphi"] & &\underline{V}^\smvee \ar[dl]\\
&S^\lambda V^\smvee &
\end{tikzcd}.
$$
The pull-back of its dual $\tilde \varphi^\smvee$ via the Kempf collapsing $\rho:S^\lambda T\to C(S^\lambda T)\subset S^\lambda V^\smvee$ induces a map
$$\rho^* \tilde \varphi^\smvee: S^\lambda T\times V\to S^\lambda T\times S^{\lambda/1} V^\smvee
$$
which factorizes over $\pi^* {\mathcal U}^\smvee$:
$$
\begin{tikzcd}
S^{\lambda}T \times V \ar[dr, "\varepsilon"'] \ar[rr, "\rho^* \tilde \varphi^\smvee"] & &S^{\lambda}T\times S^{\lambda/1} V^\smvee \\
&\pi^* {\mathcal U}^\smvee \ar[ur, "\iota"'] &
\end{tikzcd}
$$
Here $\varepsilon$ is a vector bundle epimorphism, and $\iota$ is a sheaf monomorphism. Applying the Schur functor $S^\alpha$ yields an epi-mono factorization:
\begin{equation}\label{epi-mono}
\begin{tikzcd}
S^\alpha\underline{V}\ar[r, two heads] &\pi^* S^\alpha {\mathcal U}^\smvee \ar[r, hook] & S^\alpha(S^{\lambda/1}\underline{V}^\smvee).
\end{tikzcd}
\end{equation}
Now the argument of \cite[3.5]{BLVdB2} can be used: it suffices to show that the composition
\begin{equation}\label{eq:**}
\begin{tikzcd}
S^\alpha\underline{V}\otimes S^\bullet(S^\lambda\underline{V})\ar[r] & S^\alpha\underline{V}\otimes S^\bullet (S^\lambda {\mathcal U}^\smvee) \ar[r]& S^\alpha{\mathcal U}^\smvee\otimes S^\bullet (S^\lambda {\mathcal U}^\smvee)
\end{tikzcd}
\end{equation}
remains surjective after taking sections on $\mathrm{Gr}_k(V^\smvee)$. For this the filtration argument in \cite[3.5]{BLVdB2} applies verbatim.
\end{proof}
We can now state our final result:
\begin{thry}\label{th:2-14}
Let $(V,k,\lambda,\sigma)$ be the initial data as above, such that
$$s_\sigma\in H^0(\mathrm{Gr}_k(V^\smvee), S^\lambda {\mathcal U}^\smvee)$$
is a regular section with zero locus $Z(s_\sigma)\subset \mathrm{Gr}_k(V^\smvee)$. Let $I_k\subset S$ be the graded ideal consisting of all representations $S^{(\mu_1,\cdots,\mu_t)} V\subset S$ with $t>k$, and let $\bar \sigma\in S/I_k$ be the image of $\sigma$. Let $\varphi: S^{\lambda/1} V\otimes S\to V^\smvee\otimes S\langle 1\rangle $ be Porras' map. If one of the conditions in (1) or (2) of Theorem \ref{co10}, and both of the conditions (i) and (ii) of Theorem \ref{th:2-11} are satisfied, then the bounded derived category of $Z(s_\sigma)$ has the following purely algebraic description in terms of the initial data $(V,k,\lambda,\sigma)$:
$$D^b(\mathrm{coh} Z(s_\sigma))\simeq D^{\mathrm{gr}}_{\mathrm{sg}}\big(\mathrm {End}_{S/I_k}\big(\bigoplus_{\alpha\in P(k,N-k)}\mathrm{Im}(S^\alpha(\varphi^\smvee)\otimes (S/I_k))\big)\big/\langle\bar\sigma\rangle\big).
$$
\end{thry}
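As a quick combinatorial sanity check (a standard fact, not part of the paper's argument), the index set $P(k,N-k)$ of Kapranov summands appearing in the endomorphism algebra above can be enumerated directly; its cardinality is $\binom{N}{k}$. A minimal Python sketch:

```python
from math import comb

def box_partitions(k, w):
    """All partitions (a_1 >= ... >= a_k >= 0) with a_1 <= w, i.e. the
    set P(k, w) of partitions fitting inside a k x w box."""
    out = []
    def rec(prefix, bound):
        if len(prefix) == k:
            out.append(tuple(prefix))
            return
        for a in range(bound, -1, -1):
            rec(prefix + [a], a)
    rec([], w)
    return out

# One summand S^alpha U^vee of the Kapranov bundle T_0 for each
# alpha in P(k, N-k); their number is C(N, k).
N, k = 6, 2
print(len(box_partitions(k, N - k)), comb(N, k))  # 15 15
```

For $k=2$, $N=6$ (the Beauville-Donagi case below) this gives the 15 summands of ${\mathcal T}_0$.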
\subsubsection{Some concrete examples}
\label{sec:2-4-3}
\begin{enumerate}[1.]
\item {\it Complete intersections.} In this case the algebra $\Lambda$ can be described explicitly, and the conditions (i) and (ii) of Theorem \ref{th:2-11} can be easily checked. Choose $k\coloneqq \dim(Z)=1$, $F_i=S^{d_i} Z^\smvee$. In this case we have $\mathrm {GL}(Z)\simeq{\mathbb C}^*$, $\mathrm{Gr}_k(V^\smvee)={\mathbb P}(V^\smvee)$, and
$$E=|\bigoplus_{i=1}^r {\mathcal O}_{V^\smvee}(-d_i)|, \ H^0({\mathbb P}(V^\smvee),E^\smvee)=\bigoplus_{i=1}^r S^{d_i} V.
$$
The zero locus $Z(s)\subset {\mathbb P}(V^\smvee)$ of a section $s\in\Gamma({\mathbb P}(V^\smvee),E^\smvee)$ is the complete intersection
$$\bigcap_{i=1}^r Z_h(\sigma_i)\subset {\mathbb P}(V^\smvee),$$
where
$$\sigma=(\sigma_1,\dots,\sigma_r)\in \bigoplus_{i=1}^r S^{d_i}V$$
is the system of polynomials defining $s$. The corresponding GLSM presentation is the 4-tuple
%
$$(\mathrm {GL}(Z)\simeq{\mathbb C}^*,\,\mathrm{Hom}(V,Z),\,\bigoplus_{i=1}^r S^{d_i} Z^\smvee,\,{\det}^{t})$$ with $t\in{\mathbb N}_{>0}$.
%
The map
%
$$ \vartheta:E\to H^0({\mathbb P}(V^\smvee),E^\smvee)^\smvee= \bigoplus_i S^{d_i} V^\smvee$$
%
acts as follows: The fibre $E_{l}$ over a point $l \in {\mathbb P}(V^\smvee)$ is the product $\bigtimes_{i=1}^r l^{\otimes d_i}$, and its image $ \vartheta(E_l)$ in $\bigoplus_i S^{d_i} V^\smvee $ is the $r$-dimensional subspace $\bigoplus_{i=1}^r l^{\otimes d_i}\subset \bigoplus_i S^{d_i} V^\smvee$ spanned by the lines $l^{\otimes d_i}\subset S^{d_i} V^\smvee$. Therefore ${\mathbb P}(\vartheta)$ is an embedding, and the corresponding projective variety
%
$$S(E)\coloneqq {\mathbb P}(\vartheta)({\mathbb P}(E))=\mathop{\bigcup}_{l\in {\mathbb P}(V^\smvee)}{\mathbb P}(\bigoplus_{i=1}^r l^{\otimes d_i})\subset {\mathbb P}(\bigoplus_{i=1}^r S^{d_i} V^\smvee)$$
%
is smooth. According to Corollary \ref{PropertiesS(E)} the projective variety $S(E)$ is projectively normal, arithmetically Cohen-Macaulay, and the vertex of its affine cone $C(E)$ (its only singularity) is a rational singularity. Moreover, if $N=\sum_{i=1}^r d_i$ then $S(E)$ is also arithmetically Gorenstein.
The graded ring $R={\mathbb C}[C(E)]={\mathbb C}[\mathrm{Hom}(V,Z)\oplus(\oplus_{i=1}^r S^{d_i}Z^\smvee)]^{\mathrm {GL}(Z)} $ is
%
$$R=\bigoplus_{m\geq 0} H^0\big({\mathbb P}(V^\smvee), S^m(\bigoplus_{i=1}^r {\mathcal O}_{V^\smvee}(d_i))\big)=\bigoplus_{m\geq 0} H^0\big({\mathbb P}(V^\smvee), \bigoplus_{\substack{k\in {\mathbb N}^r\\|k|=m}} {\mathcal O}_{V^\smvee}(\sum_{i=1}^r k_id_i)\big)
$$
$$=\bigoplus_{m\geq 0}\big(\bigoplus_{\substack{k\in {\mathbb N}^r\\|k|=m}}S^{\sum_{i=1}^r k_id_i}(V)\big),
$$
and its multiplication is given by the obvious bilinear maps
%
$$S^{\sum_{i=1}^r k_id_i}(V)\times S^{\sum_{i=1}^r l_id_i}(V)\to S^{\sum_{i=1}^r (k_i+l_i)d_i}(V).
$$
%
The locally free sheaf
%
$${\mathcal T}_0\coloneqq \bigoplus_{i=0}^{N-1}{\mathcal O}_{V^\smvee}(i)
%
$$
generates $D^b(\mathrm{coh}{\mathbb P}(V^\smvee))$ classically, and satisfies the condition of Corollary \ref{co3} (3), so ${\mathcal T}\coloneqq \pi^*({\mathcal T}_0)$ is a tilting bundle on $E$ and Theorem \ref{FourthEq} applies, giving a purely algebraic interpretation of the derived category of the complete intersection $Z(s)=\cap_{i=1}^r Z_h(\sigma_i)$:
The graded ring $\Lambda=\mathrm {End}_E({\mathcal T})$ intervening in this algebraic interpretation is
%
$$\Lambda=\bigoplus_{m\geq 0}H^0({\mathbb P}(V^\smvee),{\mathcal T}_0^\smvee\otimes {\mathcal T}_0\otimes S^m({\mathcal E}^\smvee))=\bigoplus_{m\geq 0}\bigoplus_{\substack{0\leq a\leq N-1\\ 0\leq b\leq N-1}} \bigoplus_{\substack{k\in {\mathbb N}^r\\ |k|=m}}S^{\sum_{i=1}^r k_i d_i+b-a}(V)
$$
and its multiplication is induced by the obvious bilinear maps
%
$$S^{\sum_{i=1}^r k_i d_i+b-a}(V)\times S^{\sum_{i=1}^r l_i d_i+c-b}(V)\to S^{\sum_{i=1}^r (k_i+l_i) d_i+c-a}(V).
$$
Note also that, by Corollary \ref{co9}, in the case $N=\sum_{i=1}^r d_i$ the ring $\Lambda$ is a crepant resolution of $R$.
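The dimensions of the graded pieces of $R$ above can be tabulated directly from $\dim S^n(V)=\binom{n+N-1}{N-1}$. The short Python sketch below is an illustration (not used in the text), evaluated in the quintic setting:

```python
from itertools import product
from math import comb

def dim_sym(n, N):
    """dim S^n(V) for an N-dimensional vector space V."""
    return comb(n + N - 1, N - 1)

def dim_R_m(N, degrees, m):
    """Dimension of the degree-m piece of R in the complete-intersection
    example: one summand S^{sum_i k_i d_i}(V) for each k in N^r, |k| = m."""
    return sum(
        dim_sym(sum(k * d for k, d in zip(ks, degrees)), N)
        for ks in product(range(m + 1), repeat=len(degrees))
        if sum(ks) == m
    )

# Witten's quintic setting (N = 5, r = 1, d_1 = 5):
print([dim_R_m(5, [5], m) for m in range(3)])  # [1, 126, 1001]
```

The same routine with `degrees=[2, 2]` etc.\ covers the multi-degree cases listed below.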
\vspace{3mm}
Interesting special cases of this family of Abelian GLSM presentations associated with complete intersections have been studied by several authors. The case $N=5$, $r=1$, $d_1=5$ reproduces Witten's original GLSM \cite{Wi}. In \cite{CDHPS} the authors study the cases
$$(d_1,\dots,d_r)\in\big\{(2,2), (2,2,2),(2,2,2,2),(3),(3,3)\big\}$$
for different values of $N=\dim(V)$. \\
In the following two cases we use Theorem \ref{th:2-11}, so it suffices to verify conditions (i) and (ii) of Theorem \ref{th:2-11}.
\\
\item {\it Isotropic orthogonal Grassmannians.} Let $q\in S^2V$ be a non-degenerate quadratic form on $V^\smvee$. Let $1\leq k\leq\frac{N}{2}$. The isotropic Grassmannian $\mathrm{Gr}_k^q(V^\smvee)\subset \mathrm{Gr}_k(V^\smvee)$ is the submanifold of $k$-dimensional isotropic subspaces of $(V^\smvee,q)$:
$$\mathrm{Gr}_k^q(V^\smvee)\coloneqq \{K\subset V^\smvee|\ K\hbox{ linear subspace of dimension }k,\ \resto{q}{K}\equiv 0\}.$$
The form $q$ defines a section $s_q\in \Gamma(\mathrm{Gr}_k(V^\smvee),S^2 T^\smvee)$ which is transversal to the zero section, and whose zero locus is $\mathrm{Gr}_k^q(V^\smvee)$. Therefore
$$\dim(\mathrm{Gr}_k^q(V^\smvee))=k(N-k)-\binom{k+1}{2},$$
and the 4-tuple $(\mathrm {GL}(Z),\mathrm{Hom}(V,Z),S^2 Z^\smvee,\det^t)$ (with $t\in {\mathbb N}_{>0}$) is a GLSM presentation of $(S^2 T\to \mathrm{Gr}_k(V^\smvee),s_q)$.
Note that $\mathrm{Gr}_k^q(V^\smvee)$ comes with an obvious action of the group $\mathrm {SO}(V^\smvee,q)$. This action is transitive unless $N=2k$ \cite[section 4]{BKT}. In the latter case $\mathrm{Gr}_k^q(V^\smvee)$ has two connected components $\mathrm{Gr}_k^q(V^\smvee)_\pm$, and $\mathrm {SO}(V^\smvee,q)$ acts transitively on each component. One has isomorphisms $\mathrm{Gr}_k^q(V^\smvee)_\pm\simeq \mathrm{Gr}_{k-1}^{q_U}(U)$, given by intersecting with a general hyperplane $U\subset V^\smvee$, where $q_U$ denotes the restriction of $q$ to $U$. The conditions (i) and (ii) of Theorem \ref{th:2-11} are in \cite[Theorem B]{WZ}.
\\
\item {\it Isotropic symplectic Grassmannians.} Let $V$ be a complex vector space of even dimension $N=2n$, and let $\omega\in {\extpw}\hspace{-2pt}^2V$ be a symplectic form on $V^\smvee$. For even $k$ with $2\leq k\leq n$, the isotropic Grassmannian $\mathrm{Gr}_k^\omega(V^\smvee)\subset \mathrm{Gr}_k(V^\smvee)$ is the submanifold of $k$-dimensional isotropic subspaces of $(V^\smvee,\omega)$:
$$\mathrm{Gr}_k^\omega(V^\smvee)\coloneqq \{K\subset V^\smvee|\ K\hbox{ linear subspace of dimension }k,\ \resto{\omega}{K}\equiv 0\}.$$
Denoting by $T$ the tautological $k$-bundle of $\mathrm{Gr}_k(V^\smvee)$, the form $\omega$ defines a section $s_\omega\in \Gamma(\mathrm{Gr}_k(V^\smvee),{\extpw}\hspace{-2pt}^2 T^\smvee)$ which is transversal to the zero section, and whose zero locus is $\mathrm{Gr}_k^\omega(V^\smvee)$. Therefore
$$\dim(\mathrm{Gr}_k^\omega(V^\smvee))=k(N-k)-\binom{k}{2},$$
and the 4-tuple $(\mathrm {GL}(Z),\mathrm{Hom}(V,Z),{\extpw}\hspace{-2pt}^2 Z^\smvee,\det^t)$ (with $t\in {\mathbb N}_{>0}$) is a GLSM presentation of $({\extpw}\hspace{-2pt}^2 T \to \mathrm{Gr}_k(V^\smvee),s_\omega)$. Note that $\mathrm{Gr}_k^\omega(V^\smvee)$ comes with a transitive action of the group $\mathrm{Sp}(V^\smvee,\omega)$ \cite[section 4]{BKT}. The conditions (i) and (ii) of Theorem \ref{th:2-11} are in \cite[Theorem C]{WZ}.\\
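Both dimension formulas above just subtract the rank of the bundle cut out by the section: $\binom{k+1}{2}$ for $S^2T^\smvee$ and $\binom{k}{2}$ for ${\extpw}\hspace{-2pt}^2T^\smvee$. A short Python check (not part of the argument), cross-checked against the classical dimension $n(n+1)/2$ of the Lagrangian Grassmannian $\mathrm{LG}(n,2n)$:

```python
from math import comb

def dim_og(k, N):
    """dim Gr_k^q(V^vee): Gr_k(V^vee) cut down by a regular section of
    S^2 T^vee, a bundle of rank C(k+1, 2)."""
    return k * (N - k) - comb(k + 1, 2)

def dim_sg(k, N):
    """dim Gr_k^omega(V^vee): cut down by a regular section of
    Wedge^2 T^vee, a bundle of rank C(k, 2)."""
    return k * (N - k) - comb(k, 2)

# Cross-check against the Lagrangian Grassmannian LG(n, 2n):
assert all(dim_sg(n, 2 * n) == n * (n + 1) // 2 for n in range(1, 8))
print(dim_og(2, 7), dim_sg(2, 6))  # 7 7
```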
In our final example we can apply Theorem \ref{th:2-14}. We obtain a graded crepant nc resolution of the invariant ring $R$, and a purely algebraic description of $D^b(\mathrm{coh}(Z(s_\sigma)))$.\\
\item {\it Beauville-Donagi IHS 4-folds. } Let $V$ be a complex vector space of dimension $N$, and $\sigma\in S^3V$. Choosing $k=\dim(Z)=2$, $r=1$, and $F_1=S^3 Z^\smvee$, we get the bundle $S^3 T$ on $\mathrm{Gr}_2(V^\smvee)$, and a section $s_\sigma\in H^0(\mathrm{Gr}_2(V^\smvee), S^3{\mathcal U}^\smvee)$. The zero locus $Z(s_\sigma)\subset \mathrm{Gr}_2(V^\smvee)$ is the Fano variety of lines in the cubic hypersurface $Z_h(\sigma)\subset {\mathbb P}(V^\smvee)$ defined by $\sigma\in S^3 V=H^0({\mathbb P}(V^\smvee),{\mathcal O}_{V^\smvee}(3))$. For $N=6$ and $\sigma$ general, $Z(s_\sigma)$ is a Beauville-Donagi IHS 4-fold \cite{BD}. Note that the symmetric algebra $S=S^\bullet(S^3V)$ is not multiplicity free. Condition (i) of Theorem \ref{th:2-11} is in \cite[section 2]{Kan}, and we have verified the vanishing condition (ii) by a Borel-Bott-Weil computation.
In order to see that condition (2)(i) of Theorem \ref{co10} is satisfied, we prove a general lemma which computes the codimension of the pre-image of the exceptional locus of the Kempf collapsing in this case.
Let $V$ be a complex vector space of dimension $N$, and $d$, $k$ be positive integers with $d\geq 2$, $k<N$. Let $T_k$ be the tautological subbundle of $\mathrm{Gr}_k(V^\smvee)$, and $\rho: S^dT_k\to C(S^d T_k)\subset S^d V^\smvee$ the Kempf collapsing. The cone $C(S^d T_k)$ can be described as follows:
For an element $q\in S^dV^\smvee$ put
$$L_q\coloneqq \bigcap_{\substack{L\subset V^\smvee\\ q\in S^d L}} L\ ,\ \ \mathrm {rk}(q)\coloneqq \dim(L_q).
$$
It is easy to see that $L_q$ is the minimal subspace of $V^\smvee$ whose $d$-symmetric power contains $q$, and this definition of $\mathrm {rk}(q)$ agrees with the algebraic definition, i.e. with the rank of the linear map $V\to S^{d-1} V^\smvee$ associated with $q$.
One has
$$C(S^d T_k)=S^dV^\smvee_{\leq k}\coloneqq \{q\in S^d V^\smvee|\ \mathrm {rk}(q)\leq k\},
$$
so $C(S^d T_k)$ coincides with the catalecticant variety associated with the triple $(V,d,k)$. In \cite[Theorem 2.2]{Kan} Kanev shows that this variety is irreducible, normal, Cohen-Macaulay of dimension $\binom{k+d-1}{d}+k(N-k)$, with rational singularities along $\mathrm{Sing}(S^dV^\smvee_{\leq k})= S^dV^\smvee_{\leq k-1}$. For $0\leq r\leq N$ put
$$S^dV^\smvee_r\coloneqq S^dV^\smvee_{\leq r}\,\setminus\, S^dV^\smvee_{\leq r-1}.
$$
For a point $q\in S^dV^\smvee_r$, $L_q$ is the unique $r$-dimensional subspace of $V^\smvee$ whose $d$-symmetric power contains $q$. The map $\gamma_r: S^dV^\smvee_{r}\to \mathrm{Gr}_r(V^\smvee)$ given by $\gamma_r(q)=L_q$ is regular; its graph is the intersection
$$\Gamma_r=[S^dV^\smvee_{r}\times \mathrm{Gr}_r(V^\smvee)]\cap S^dT_r.
$$
Let now $r\leq k$. The restriction
$$\resto{\rho}{\rho^{-1}(S^dV^\smvee_{r})}: \rho^{-1}(S^dV^\smvee_{r})\to S^dV^\smvee_{r}
$$
is a fiber bundle. Its fiber over a point $q\in S^dV^\smvee_{r}$ can be identified with
$$\{ U\in \mathrm{Gr}_k(V^\smvee)|\ L_q\subset U\}\simeq\mathrm{Gr}_{k-r}(V^\smvee/L_q).
$$
This proves:
\begin{lm}
With the notations above the following holds:
\begin{enumerate}[(1)]
\item $\mathrm{Exc}(\rho)=S^dV^\smvee_{\leq k-1}$.
\item For $0\leq r\leq k$ one has
$$\dim(\rho^{-1}(S^dV^\smvee_{r}))=\binom{r+d-1}{d}+r(N-r)+ (k-r)(N-k),$$
$$\mathrm{codim}_{S^dT_k}(\rho^{-1}(S^dV^\smvee_{r}))=\binom{k+d-1}{d}-\binom{r+d-1}{d}-r(k-r).
$$
\item One has
$$\mathrm{codim}_{S^dT_k}(\rho^{-1}(\mathrm{Exc}(\rho)))=\underset{0\leq r < k}{\min}\bigg\{\binom{k+d-1}{d}-\binom{r+d-1}{d}-r(k-r)\bigg\}.
$$
%
\end{enumerate}
\end{lm}
\begin{re}
In the case $d=2$, the codimension of $\rho^{-1}(\mathrm{Exc}(\rho))$ in $S^dT_k$ is always 1.
\end{re}
%
\end{enumerate}
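The formulas in the lemma can be checked numerically. The sketch below (not part of the proof) verifies that the codimension in part (2) equals $\dim(S^dT_k)-\dim(\rho^{-1}(S^dV^\smvee_{r}))$, reproduces the $d=2$ remark, and evaluates part (3) in the Beauville-Donagi case $d=3$, $k=2$:

```python
from math import comb

def codim_preimage(N, d, k, r):
    """Codimension in S^d T_k of the preimage of the rank-r stratum
    (part (2) of the lemma); checks the two formulas against each other."""
    dim_total = comb(k + d - 1, d) + k * (N - k)
    dim_pre = comb(r + d - 1, d) + r * (N - r) + (k - r) * (N - k)
    codim = comb(k + d - 1, d) - comb(r + d - 1, d) - r * (k - r)
    assert dim_total - dim_pre == codim
    return codim

def codim_exc(N, d, k):
    """Part (3): codimension of the preimage of the exceptional locus."""
    return min(codim_preimage(N, d, k, r) for r in range(k))

# d = 2: the minimum sits at r = k - 1 and always equals 1, as in the remark.
print([codim_exc(10, 2, k) for k in range(1, 6)])  # [1, 1, 1, 1, 1]
# Beauville-Donagi case (d = 3, k = 2):
print(codim_exc(6, 3, 2))  # 2
```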
\section{Introduction}
Star formation is one of the key processes that drives the evolution of galaxies. Understanding when, where, and how stars are formed remains one of the major goals of extragalactic astronomy. Internal processes like supernovae and Active Galactic Nuclei (AGN) feedback \citep{Efstathiou2000,Croton2006,Bower2008,Somerville2008, Fabian2012} and dynamical instabilities \citep{Kormendy2013} regulate star formation in a galaxy. Likewise, external physical processes such as galaxy-galaxy interactions, ram pressure stripping \citep{Gunn1972, Cortese2010}, strangulation \citep{Mihos1996,Moore1998,Balogh2000}, gas accretion \citep{Kacprzak2017, Tumlinson2017} and tidal interactions also leave their imprint on the star formation history of each galaxy.
Galaxies falling into the cluster potential are subjected to interactions with other galaxies and the Intra-cluster Medium (ICM). Galaxy-galaxy mergers can trigger a star formation phase in the galaxy by funneling the flow of atomic gas into the core of the galaxies \citep{Barnes1996,Springel2005, Ellison2010}. Processes like ram pressure stripping, tidal interactions and harassment deplete the star formation fuel of a galaxy. These processes cumulatively lead to a gradual decline of star formation in galaxies in the cluster environment and can be observed as the higher fraction of quenched early type galaxies in the local universe ($z<0.1$) \citep{Balogh2000,Kauffmann2004,Presotto2012a, Wetzel2013, Paccagnella2016, Barsanti2018,Pasquali2019, Schaefer2019}.
While the effect of environment on star formation in galaxies is well studied at $z\sim0$, it remains ambiguous at higher redshift. Some studies find lower star formation rates in cluster members compared to isolated field galaxies at $z\sim 1$, similar to what we observe in the local universe \citep{Williams2009, Vulcani2010, Patel2011, Popesso2012, Old2020a}, whereas others find a reversal of this trend \citep{Elbaz2007, Peng2010,Tran2010,Elbaz2011,Muzzin2011, Wetzel2012, Allen2016}. It is still unclear when environmental effects lead to a significant suppression of the star formation rates of galaxies in dense regions.
The {\tt ZFIRE}\ survey \citep{Nanayakkara2016} targets proto-cluster galaxies at $z\sim$ 2 and 1.6 selected from the {\tt ZFOURGE}\ survey in the COSMOS and UDS fields to identify the onset of environmental effects on galaxy properties. Environmental effects on interstellar medium (ISM) properties, such as gas-phase metallicity and electron density, appear to be insignificant until $z\sim 1.5$ \citep{Alcorn2019, Kacprzak2015, Kewley2016a}, similar to the results from {\tt IllustrisTNG}\ \citep{Gupta2018}. However, at $z = 1.6$, there is tentative evidence of an effect of environment on the star formation rates of galaxies in the proto-cluster core \citep{Tran2015} and on the electron density \citep{Harshan2020}.
Observations at low redshift ($z<1$) find that the massive galaxies form their stars earlier and more rapidly compared to the low mass galaxies \citep{ Cowie1996a, Brinchmann2004, Thomas2005,Treu2005,Cimatti2006, Thomas2010, Carnall2018, Webb2020}. Massive galaxies form the majority of their stars within the first 1-2 Gyr of cosmic history and start to quench as early as $z\sim3$ \citep{Straatman2014, Glazebrook2017, Forrest2019}. Similarly, galaxies in high density environments form the majority of their stars earlier compared to the field galaxies and are on average 1-2 Gyr older than the field galaxies \citep{Thomas2005}. However, at higher redshift ($z\sim 1$), the age difference between cluster and field galaxies is less significant, $\lesssim 0.5$ Gyr \citep{Webb2020}.
Cosmological simulations and semi-analytic models \citep{DeLucia2012,Furlong2015, Bahe2017, Tremmel2019, Donnari2020a, Donnari2021} similarly show higher quenched fractions in the cluster members compared to the field sample at $z=0$ to $z = 2$. In the {\tt IllustrisTNG}\ simulations, \cite{Donnari2020a, Donnari2021} find a higher quenched fraction in the low mass satellite galaxies compared to low mass centrals indicating a role of environment in quenching of low mass galaxies. On the other hand, high mass galaxies, whether they are centrals or satellites, have high quenched fractions indicating effects of both secular and environmental quenching mechanisms.
One straightforward way of studying the evolution of galaxies is to study their star formation histories (SFH). The reconstruction of SFHs allows us to study the stellar mass assembly and gas accretion histories of galaxies over cosmic time. However, inferring the SFHs from observables is an extremely complex process. Star formation histories can be reconstructed by fitting spectral energy distribution (SED) models for different stellar populations to the observed photometry of galaxies. We have moved from simple exponentials to more complex functional forms \citep{Buat2008,Maraston2010, Papovich2011} such as lognormals \citep{Gladders2013,Abramson2015,Carnall2018}, and even to non-parametric SFHs \citep{CidFernandas2005, Ocvirk2006, Kelson2014, Leja2017, Chauke2018, Robotham2020a}.
In this paper we will study the effect of environment on the star formation histories of galaxies in a COSMOS proto-cluster at $z=2.095$ \citep{Spitler2012,yuan2014}. We present the first measurement of SFHs in the proto-cluster environment at $z=2$. We use the SED fitting code {\tt PROSPECTOR}\ \citep{Leja2017,Johnson2019} in conjunction with the extensive photometric data from the {\tt ZFOURGE}\ survey \citep{Straatman2016} and spectroscopic redshifts from the {\tt ZFIRE} survey \citep{Tran2015, Nanayakkara2016} to reconstruct the SFHs. We study the correlation of SFHs with the stellar mass and the environment of the galaxy. We then compare our results from observations to the SFHs retrieved from the cosmological hydrodynamical simulations {\tt IllustrisTNG}.
This paper is organised as follows. In Section \ref{sec:data} we describe the data used with {\tt PROSPECTOR}\ to create SFHs as described in Section \ref{sec:pros}. In Section \ref{sec:illustris} we describe the SFHs from {\tt IllustrisTNG}. In Sections \ref{sec:results} and \ref{sec:discussion} we present our results and discussion, and in Section \ref{sec:summary} we summarise the results. For this work, we assume a flat $\Lambda$CDM cosmology with $\Omega_{M}=0.3$, $\Omega_{\Lambda}=0.7$, and $H_0=69.6\, \rm{km\, s}^{-1}\, \rm{Mpc}^{-1}$.
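For reference, with these cosmological parameters the cosmic time available to the galaxies studied here follows from the closed-form age of a flat $\Lambda$CDM universe. The sketch below is illustrative (the conversion constant $977.79$ Gyr per $(\rm{km\,s^{-1}\,Mpc^{-1}})^{-1}$ is standard):

```python
from math import asinh, sqrt

def age_gyr(z, h0=69.6, om=0.3, ol=0.7):
    """Age of a flat LambdaCDM universe at redshift z (closed form);
    H0 in km/s/Mpc, converted via 1 / (1 km/s/Mpc) = 977.79 Gyr."""
    return (2.0 / 3.0) * (977.79 / h0) / sqrt(ol) * asinh(
        sqrt(ol / om) * (1.0 + z) ** -1.5)

# Cosmic time available to the z = 2.095 proto-cluster galaxies:
print(round(age_gyr(2.095), 2), "Gyr")  # ~3.1 Gyr
```

This age sets the upper limit of the lookback-time bins used for the SFHs in Section \ref{sec:pros}.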
\section{Methodology}
\subsection{{\tt ZFIRE}\ and {\tt ZFOURGE}\ Surveys}
\label{sec:data}
Our observational sample is taken from the {\tt ZFOURGE}\ - FourStar Galaxy Evolution Survey \citep{Straatman2016}, a deep, UV to FIR, medium-band survey, completed with the FourStar instrument \citep{persson2013} on the Magellan Telescope. It reaches depths of $\sim 26$ mag in $J_1, J_2, J_3$ and $\sim 25$ mag in $H_s, H_l, K_s$ bands. {\tt ZFOURGE}\ spans the Cosmic Evolution Survey field \citep[COSMOS;][]{Scoville2007}, which covers the spectroscopically confirmed proto-cluster at $z_{cl}=2.09 \pm 0.00578$ \citep{Spitler2012, yuan2014}. The {\tt ZFOURGE}\ survey reaches $80\%$ completeness down to 25.5 AB magnitude in the K$_s$ band, which corresponds to $\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$\ = 9 at $z=2$ \citep{Straatman2016}. The photometric redshift and stellar mass catalogs were created with the publicly available SED fitting codes EAZY \citep{Brammer2008} and FAST \citep{Kriek2009}. The UV-IR star formation rates were derived in \cite{Tomczak2016}.
The {\tt ZFIRE}\ survey \citep{Tran2015, Nanayakkara2016} is the spectroscopic follow-up of the {\tt ZFOURGE}\ survey using Keck-MOSFIRE \citep{Mclean2010,Mclean2012}. The spectroscopic sample was selected based on the {\tt ZFOURGE}\ photometric redshifts of the proto-cluster discovered at $z\sim2$ \citep{Spitler2012}. The estimated virial mass of the $z=2.09$ COSMOS proto-cluster, based on velocity dispersion measurements, is in the range $\rm{M}_{vir} = 10^{13.5\pm0.2}\, \rm{M}_\odot$. More than two emission lines among $H\alpha$, $H\beta$, $[NII]$ and $[OIII]$, observed in the H and K bands, are used to measure the spectroscopic redshifts.
We select 57 cluster members in the COSMOS proto-cluster between redshift $2.08 < z < 2.12$ \citep{Spitler2012,yuan2014, Tran2015} and 130 field galaxies in $1.8 < z < 2.5$. Figure \ref{fig:radec} shows the spatial distribution of the spectroscopically confirmed proto-cluster (red circles) and field (blue diamonds) galaxies across the COSMOS field. This distribution is not a depiction of the true distribution of galaxies at $1.8 < z < 2.5$ in the {\tt ZFOURGE}\ survey, but an effect of the observational strategy used in the spectroscopic follow-up for the {\tt ZFIRE}\ survey \citep{yuan2014, Nanayakkara2016}. Figure \ref{fig:sfrmass} shows the SFR-stellar mass relation of the selected cluster (red circles) and field (blue diamonds) galaxies. Due to the observational limit on the H$\alpha$ flux, {\tt ZFIRE}\ observations have a SFR lower limit of $0.8$ $\rm{M}_\odot\rm{yr}^{-1}$\ \citep{yuan2014}.
\begin{figure}
\noindent
\includegraphics[scale = 0.345]{RA_Dec.png}
\caption{Spatial distribution of spectroscopically confirmed {\tt ZFIRE}\ proto-cluster (red circles) and field (blue diamonds) galaxies in the COSMOS field at $z\sim2$. The dashed rings indicate the peaks in surface density as identified by \cite{yuan2014} and \cite{Spitler2012}.}
\label{fig:radec}
\end{figure}
\begin{figure}
\noindent
\includegraphics[scale=0.345]{sfr_mass.png}
\caption{Stellar mass - SFR relation of the selected cluster (red circles) and field (blue diamonds) galaxies from {\tt ZFIRE}. Stellar masses and UV-IR SFRs are taken from the {\tt ZFOURGE}\ survey \citep{Straatman2016, Tomczak2016}.}
\label{fig:sfrmass}
\end{figure}
\begin{figure*}
\includegraphics[scale=0.24]{2510.png}
\includegraphics[scale=0.24]{17639.png}\\
\includegraphics[scale=0.24]{5561.png}
\includegraphics[scale=0.24]{1217.png}
\caption{Examples of observed {\tt ZFOURGE}\ photometry (orange squares) fitted with {\tt PROSPECTOR}\ to get modelled photometry (purple diamonds) and modelled SED (purple line). The stamps show 1.5$\arcsec \times 1.5\arcsec$ RGB (F160W, F814W, F606W) images of galaxies from the 3D-HST survey.}
\label{fig:SED}
\end{figure*}
\subsection{Star Formation Histories using {\tt PROSPECTOR}\ SED Fitting}
\label{sec:pros}
{\tt PROSPECTOR}\ is an SED fitting tool used to derive physical properties of galaxies from photometry and spectra. It is built on the python Flexible Stellar Population Synthesis package \citep{Conroy2009}; we use the MESA Isochrones and Stellar Tracks (MIST; \cite{Dotter2016,Paxton2015,Paxton2013,Paxton2011,Calzetti1994}) and take into account nebular emission \citep{Byler2018}, dust attenuation and re-radiation. {\tt PROSPECTOR}\ uses a Bayesian inference framework to derive a non-parametric formulation of star formation histories using simple piecewise constant functions. It also allows adaptive time binning for the SFHs with a varying number of bins, and fits non-parametric SFHs by calculating the fraction of stellar mass formed in each time bin \citep{Leja2019}. We use the Calzetti dust attenuation model \citep{Calzetti1994}, a Chabrier IMF \citep{Chabrier2003} and WMAP9 \citep{Hinshaw2013} cosmology throughout the analysis.
We use {\tt PROSPECTOR}\ on the 5 NIR medium-band filters from the {\tt ZFOURGE}\ survey along with 32 other UV-MIR photometric bands from the legacy data sets, covering the wavelength regime $0.4448 - 7.9158\, \mu m$ \citep{Straatman2016}, for the cluster and field galaxies, and spectroscopic redshifts from {\tt ZFIRE}. We use a uniform prior across all stellar mass bins, allowing us to do a comparative analysis of the SFHs. We keep 9 free parameters: stellar mass, stellar and gas-phase metallicity, dust attenuation and five independent non-parametric SFH bins. We choose the age bins to roughly match the time resolution of the {\tt IllustrisTNG}\ cosmological simulation \citep{Nelson2019a} (described in Section \ref{sec:illustris}). We use the following 5 time bins: 0-200 Myr, 200-400 Myr, 400-600 Myr, 600-1000 Myr and 1000- $(t_{univ} - 1000)/2$ Myr. With the prescribed age bins, {\tt PROSPECTOR}\ fits for 6 SFH bins, but the additional constraint that the fractional stellar masses sum to one results in only five independent SFH bins. Our results do not depend significantly on the choice of age bins.
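Schematically, the fractional masses $f_i$ returned for the time bins convert to piecewise-constant SFRs as $\mathrm{SFR}_i=f_i\, M_{\rm formed}/\Delta t_i$. The sketch below illustrates this conversion; it is not {\tt PROSPECTOR}'s actual interface, and the last bin edge is a placeholder:

```python
def sfh_from_fractions(logm_formed, fractions, bin_edges_myr):
    """Turn fractional stellar masses formed per lookback-time bin
    (summing to one) into a piecewise-constant SFR in Msun/yr."""
    assert abs(sum(fractions) - 1.0) < 1e-8
    m_formed = 10.0 ** logm_formed  # total stellar mass formed [Msun]
    return [
        f * m_formed / ((t1 - t0) * 1e6)
        for f, t0, t1 in zip(fractions, bin_edges_myr[:-1], bin_edges_myr[1:])
    ]

# Edges approximating the bins in the text; the last edge (2100 Myr here)
# is a placeholder for the oldest bin at z ~ 2:
edges = [0, 200, 400, 600, 1000, 2100]
print(sfh_from_fractions(10.0, [0.2] * 5, edges))
```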
The stellar masses of galaxies calculated with {\tt PROSPECTOR}\ agree reasonably well with the stellar masses from FAST in the {\tt ZFOURGE}\ survey, with {\tt PROSPECTOR}\ returning systematically higher values, as seen and discussed in \cite{Leja2019}. This offset is speculated to be a result of the different assumptions and models for SFHs in FAST and {\tt PROSPECTOR}.
We fit SEDs of 57 cluster galaxies in the redshift regime $2.08 \leq z \leq 2.12$ and 130 field galaxies in the redshift regime $1.8 \leq z \leq 2.5$ using the {\tt ZFIRE}\ spectroscopic redshifts. We extract the posterior distributions of the stellar mass and the five SFH time bins and take the 16$^{\rm th}$, 50$^{\rm th}$ and 84$^{\rm th}$ percentiles of the distributions from {\tt PROSPECTOR}. Figure \ref{fig:SED} shows the SED fits of four galaxies. The model spectrum (solid purple line) and the model photometry (purple diamonds) are well fitted to the observed photometry (orange squares).
We divide the proto-cluster and field galaxies into four stellar mass bins: $ 9 - 9.5$ $\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$, $ 9.5 - 10$ $\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$, $ 10 - 10.5$ $\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$\ and $\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $\geq 10.5 $. We create bootstrapped samples of the individual SFHs in each stellar mass bin and present the medians and errors on the median in Figure \ref{fig:sfh_pros}.
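The bootstrapping step can be sketched as follows (an illustration with hypothetical input values, not the actual pipeline):

```python
import random
import statistics

def bootstrap_median(values, n_boot=1000, seed=0):
    """Median of `values` with 16th/84th-percentile bootstrap errors,
    mirroring the stacking of individual SFHs in each stellar-mass bin."""
    rng = random.Random(seed)
    meds = sorted(
        statistics.median(rng.choices(values, k=len(values)))
        for _ in range(n_boot)
    )
    def pick(p):
        return meds[min(int(p * n_boot), n_boot - 1)]
    return pick(0.16), pick(0.50), pick(0.84)

lo, mid, hi = bootstrap_median([1.0, 2.0, 3.0, 4.0, 100.0])
print(f"median = {mid} (+{hi - mid:.1f} / -{mid - lo:.1f})")
```

The 16th/84th percentiles of the resampled medians play the role of the $1\sigma$ error bars in the stacked SFHs.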
\subsection{Star Formation Histories from {\tt IllustrisTNG}}
\label{sec:illustris}
{\tt IllustrisTNG}\ is a suite of magneto-hydrodynamical cosmological simulations based on the $\Lambda$CDM cosmology \citep{Pillepich2018b, Nelson2018, Springel2017, Marinacci2017, Naiman2017}. {\tt IllustrisTNG}\ extends the Illustris framework with kinetic black hole feedback, magneto-hydrodynamics, and a revised scheme for galactic winds, among other changes \citep{Weinberger2017, Pillepich2018a}.
In this paper we use the TNG100 box with $\rm{L}_{box} = 110.7$ cMpc, and total volume $\sim10^6 ~\rm{Mpc}^3 $, to map the star formation rate histories of the cluster and field galaxies. The data of TNG100 have been made publicly available and are described by \cite{Nelson2019a}. The TNG100 simulation has a baryonic mass resolution of $\rm{m}_b = 1.4\times 10^6 ~\rm{M}_\odot$. This resolution provides about 1000 stellar particles per galaxy for a galaxy with stellar mass $\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $= 9$, and proportionally more stellar particles for more massive galaxies. For the entire analysis, we constrain our galaxies to have $\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $\geq 9$ at redshift $z=2$.
We define galaxy clusters as halos of total mass $\rm{M}_{200} \geq10^{13}\, \rm{M}_\odot$ at $z=2$; the galaxies residing in a cluster halo, except the most massive central galaxy, are satellites. We identify galaxies that reside in halos of total mass $\rm{M}_{200} < 10^{13}\, \rm{M}_\odot$ at $z=2$ as field galaxies. In this analysis, galaxies are subhalos from the {\sc subfind} catalog with stellar mass $\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $\geq 9 $ (within twice the half mass radius) at $z=2$. We select galaxies (centrals+satellites) associated with the cluster halos as cluster members, whereas the field sample consists of galaxies associated with the field halos. \cite{Donnari2019} show that the main sequence of star forming galaxies in {\tt IllustrisTNG}\ lies below the observationally-inferred star formation main sequence at $z\gtrsim0.75$. Hence, to match the observational SFR threshold to the simulations, we do the following. We calculate the difference between the star formation main sequence at $z=2$ from \cite{Tomczak2016} and the observational SFR threshold of $0.8\, \rm{M}_\odot\rm{yr}^{-1}$ (Section \ref{sec:data}) as a function of stellar mass. We subtract the calculated difference from the median SFR-stellar mass relation of the entire TNG100 sample and obtain a SFR threshold of $0.4$ $\rm{M}_\odot\rm{yr}^{-1}$. Following these selection criteria, we obtain a sample of 232 cluster galaxies from 24 cluster halos and 9577 field galaxies.
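This threshold matching can be sketched schematically (a simplified illustration; the main-sequence arrays below are placeholders, not the fitted \cite{Tomczak2016} relation or the actual TNG100 median):

```python
import numpy as np

def matched_sfr_threshold(log_ms_obs, log_ms_sim, obs_thresh=0.8):
    """Translate an observational SFR cut into the simulation frame.

    log_ms_obs : log SFR of the observed main sequence on a stellar mass grid
    log_ms_sim : log SFR of the simulated main sequence on the same grid
    obs_thresh : observational SFR threshold (Msun/yr)
    Returns the simulation-frame SFR threshold in Msun/yr.
    """
    # Offset of the observational threshold below the observed main
    # sequence, as a function of stellar mass
    delta = log_ms_obs - np.log10(obs_thresh)
    # Apply the same offset below the simulated main sequence
    log_thresh_sim = log_ms_sim - delta
    return 10 ** np.median(log_thresh_sim)
```

With a simulated main sequence uniformly 0.3 dex below the observed one, a 0.8 $\rm{M}_\odot\rm{yr}^{-1}$\ cut maps to roughly 0.4 $\rm{M}_\odot\rm{yr}^{-1}$, matching the value adopted above.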
We use the {\sc sublink} algorithm which tracks the merger histories of the galaxies to quantitatively follow the evolution of galaxies \citep{Rodriguez-Gomez2015}. We trace the SFRs of the cluster and field galaxies in the four defined stellar mass bins from $z=2$ to $z=6$. Figure \ref{fig:sfh_illustris} shows the distribution of the SFHs of the selected sample sets.
\section{Results}
\label{sec:results}
In this Section we show and discuss the SFHs of galaxies in a proto-cluster at $z\sim2$ derived from SED fitting of observations with {\tt PROSPECTOR}. We compare our results with predictions from {\tt IllustrisTNG}\ and discuss the dependence of SFH on stellar mass and environment of the galaxy. We also discuss possible physical motivators to describe our findings.
\subsection{Star Formation Histories and Environment}
\label{sec:SFHvsenv}
\begin{figure*}
\noindent
\includegraphics[scale = 0.32]{SFH_z=2_PROS_paper.png}
\caption{ Specific star formation rate ($\log(\rm{SFR}(t)/\rm{M}_*(t_0))$) histories from {\tt ZFIRE}\ - {\tt ZFOURGE}\ observations derived with {\tt PROSPECTOR}\ vs age of the Universe (bottom x-axis) and corresponding redshift (top x-axis) for field (blue) and proto-cluster (red) galaxies at $z\sim2$. The four panels correspond to 4 mass bins (from left to right): $9\leq$$\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$$<9.5$, $9.5\leq$$\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$$<10$, $10\leq$$\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$$<10.5$, $\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$$\geq 10.5$. The solid and dashed lines show the bootstrapped median sSFH and the shaded region shows the error in median. The number of galaxies in each bin is given in parenthesis in each label. The bottom right panel ($\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$$\geq 10.5$) shows evidence for the suppressed star formation activity in massive proto-cluster galaxies in the recent time bins. }
\label{fig:sfh_pros}
\end{figure*}
\begin{figure*}
\noindent
\includegraphics[scale = 0.32]{SFH_z=2_PROS_cummulative_stellarmass_paper.png}
\caption{ Normalised cumulative stellar mass in galaxies from {\tt ZFIRE}\ - {\tt ZFOURGE}\ observations derived with {\tt PROSPECTOR}\ vs age of the Universe (bottom x-axis) and corresponding redshift (top x-axis) for field (blue) and proto-cluster (red) galaxies at $z\sim2$. The four panels correspond to 4 mass bins (from left to right): $9\leq$$\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$$<9.5$, $9.5\leq$$\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$$<10$, $10\leq$$\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$$<10.5$, $\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$$\geq 10.5$. The solid and dashed lines show the bootstrapped median growth of stellar mass and the shaded region shows the $1 \sigma$ error in median. The number of galaxies in each bin is given in parenthesis in each label. The bottom right panel ($\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$$\geq 10.5$) shows evidence of early stellar mass buildup in massive proto-cluster galaxies compared to field galaxies. }
\label{fig:stellarmass_pros}
\end{figure*}
\begin{figure*}
\noindent
\includegraphics[scale = 0.32]{SFH_z=2_ILLUSTRIS_paper.png}
\caption{ Instantaneous specific star formation rate ($\log[\rm{SFR}(t)/\rm{M}_{*,z=2}]$) histories from {\tt IllustrisTNG}\ vs age of the Universe (bottom x-axis) and corresponding redshift (top x-axis) for field (teal) and cluster (orange) galaxies at $z\sim2$. The four panels correspond to the 4 mass bins (from left to right): $9\leq $$\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$$<9.5$, $9.5 \leq $$\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$$<10$, $10\leq $$\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$$<10.5$, $\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$$\geq10.5$. The solid and dashed lines show the bootstrapped median sSFH and the shaded region shows the error in the median. The black dashed lines correspond to the age bins used in the SFHs from observations (Figure \ref{fig:sfh_pros}). The number of galaxies in each bin is given in parenthesis in each label. Similar to the observations, galaxies in the stellar mass bin $10.5\leq $$\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$$\leq11$ (bottom right panel) show the effect of environment on the sSFH, in contrast to the lower mass bins where field and cluster galaxies have comparable sSFHs. }
\label{fig:sfh_illustris}
\end{figure*}
To study the effect of environment on the star formation of galaxies we compare the SFH of proto-cluster and field galaxies at $z\sim2$.
Figures \ref{fig:sfh_pros} and \ref{fig:sfh_illustris} show the star formation histories of galaxies in the proto-cluster and field environments from the {\tt ZFIRE}\ - {\tt ZFOURGE}\ observations and the {\tt IllustrisTNG}\ simulations.
\subsubsection{SFH Vs Environment: Observations}
We derive the star formation histories of galaxies at $z\sim2$ using the extensive {\tt ZFOURGE}\ photometry and the SED fitting tool {\tt PROSPECTOR}. Figure \ref{fig:sfh_pros} shows the median star formation rate in each age bin, normalised to the total stellar mass of the galaxy at $z\sim2$, for proto-cluster (red) and field (blue) galaxies. The effect of environment is most evident for the highest mass galaxies ($\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $ > 10.5$). The same trend is not observed in the other mass bins, although cluster galaxies in the $\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $=10-10.5$ bin also show slightly ($\sim1\sigma$) higher star formation in the most recent age bin compared to the field sample (log sSFR = $-0.81\pm0.09\ \rm{Gyr}^{-1}$ in the proto-cluster and $-0.92\pm0.06\ \rm{Gyr}^{-1}$ in the field).
Massive proto-cluster galaxies show higher star formation in the earliest age bin ($>1$ Gyr in look-back time; until $\sim 2.1$ Gyr of the age of the Universe) compared to the field sample (log sSFR = $-0.65\pm0.06\,\rm{Gyr}^{-1}$ in the proto-cluster and $-0.8\pm0.04\,\rm{Gyr}^{-1}$ in the field) and lower star formation (log sSFR = $-0.71\pm0.03\,\rm{Gyr}^{-1}$ in the proto-cluster and $-0.61\pm0.02\,\rm{Gyr}^{-1}$ in the field) in the most recent age bin ($0-200$ Myr in look-back time; $\sim 2.9 - 3.1 $ Gyr of the age of the Universe). This indicates an earlier formation and stellar mass build up of massive proto-cluster galaxies compared to field galaxies.
Figure \ref{fig:stellarmass_pros} shows the normalised cumulative median stellar mass build-up in proto-cluster (red) and field (blue) galaxies in the four stellar mass bins. The most massive ($\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $ > 10.5$) proto-cluster galaxies build up their stellar mass faster than the field galaxies, whereas proto-cluster and field galaxies with $\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $ < 10.5$ have consistent stellar mass build-up history.
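The cumulative build-up shown in Figure \ref{fig:stellarmass_pros} amounts to integrating the binned SFH and normalising by the final mass; a schematic version, neglecting stellar mass loss for simplicity, is:

```python
import numpy as np

def cumulative_mass_fraction(sfrs, bin_widths):
    """Fraction of the final stellar mass formed up to each bin edge.

    sfrs       : SFR in each age bin, ordered from earliest to latest (Msun/yr)
    bin_widths : duration of each bin (yr)
    Stellar mass loss from stellar evolution is neglected in this sketch.
    """
    mass_formed = np.asarray(sfrs) * np.asarray(bin_widths)
    cum = np.cumsum(mass_formed)
    return cum / cum[-1]
```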
Environmental effects arising from the interactions of galaxies with the ICM through processes like ram pressure stripping, starvation and harassment have been shown to be more effective in quenching lower mass galaxies, which are rarely quenched in the field environment \citep{Medling2018}. In our sample, we do not see any effect of environment on the low mass galaxies ($\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $ =9-9.5$). In the second stellar mass bin ($\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $ =9.5-10$), we find a higher SFR in the most recent age bin ($3.1-3.3$ Gyr of the age of the Universe) for proto-cluster galaxies than for the field sample. Figure \ref{fig:stellarmass_pros} shows that the stellar mass formed in the proto-cluster galaxies over the short time scale (200 Myr) of the most recent age bin is not significant enough to appear as a deviation from the otherwise closely matching SFH of the field galaxies. The star formation histories of star forming proto-cluster galaxies in the low mass bins closely follow those of the field galaxies in the same mass bins. Importantly, however, this could be due to the bias of the {\tt ZFIRE}\ sample towards star forming galaxies with SFR $>0.8$ $\rm{M}_\odot\rm{yr}^{-1}$\ \citep{yuan2014}.
\subsubsection{SFH Vs Environment: Simulations}
The star formation histories from {\tt IllustrisTNG}\ (Figure \ref{fig:sfh_illustris}) track the evolution of the instantaneous star formation of a galaxy within twice its stellar half-mass radius, in the cluster (orange) and field (teal) environments at $z = 2$. The instantaneous SFRs are different from the SFHs based on stellar ages from {\tt PROSPECTOR}, but can still be compared qualitatively. As the observational limit on H$\alpha$ flux biases our observational sample towards galaxies with SFR $>0.8 $ $\rm{M}_\odot\rm{yr}^{-1}$, we have imposed an analogous SFR threshold on the simulated galaxies to obtain a comparable sample. In practice, we have imposed a SFR cut of $0.4$ $\rm{M}_\odot\rm{yr}^{-1}$\ on the selection of galaxies from {\tt IllustrisTNG}\ to match the observations (refer to Section \ref{sec:illustris}).
Figure \ref{fig:sfh_illustris} shows the star formation rate normalised to the stellar mass of cluster and field galaxies at $z=2$ from {\tt IllustrisTNG}. Similar to observations, star formation in cluster galaxies declines with time in the high mass ($\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $> 10.5$) sample, whereas in the low mass sample, field and cluster galaxies have comparable SFHs. We also see a slight elevation ($<0.2$ dex) of star formation in cluster galaxies compared to field galaxies in mass bin $\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $= 10 - 10.5$. The average SFH of low mass galaxies ($\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $< 10.5$) is comparable in cluster and field environments.
Contrary to our result of rising star formation in low mass cluster galaxies, \cite{Donnari2021} find a quenched fraction of $\sim 0.4$ in the low mass ($\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $=9-9.5$) satellite galaxies in {\tt IllustrisTNG}\ at $z=2$. By selecting galaxies with SFR $> 0.8$ $\rm{M}_\odot\rm{yr}^{-1}$\ in the observations and $> 0.4$ $\rm{M}_\odot\rm{yr}^{-1}$\ in the simulations, we are probably removing galaxies whose star formation activity has already been suppressed. This is certainly the case for the simulated sample. For galaxies in the real Universe, we can only say that either the star formation in low mass galaxies is not affected by the environment, or that the environmental suppression of star formation happens rapidly in low mass cluster galaxies compared to high mass galaxies.
\subsection{Star Formation Histories and Stellar Mass}
\label{sec:SFHvssm}
\begin{figure*}
\noindent
\includegraphics[scale = 0.3]{massbins_jan230200Myr.png}
\includegraphics[scale = 0.3]{massbins_jan23200-400Myr.png}
\includegraphics[scale = 0.3]{massbins_jan23400-600Myr.png}
\end{figure*}
\begin{figure*}
\noindent
\centering
\includegraphics[scale = 0.3]{massbins_jan2306-1Gyr.png}
\includegraphics[scale = 0.3]{massbins_jan231-2Gyr.png}
\caption{Specific star formation rate ($\rm{SFR}(t)/\rm{M}_{*,z=2}$) from observations vs stellar mass in 5 look-back time age bins: $0-200$ Myr, $200 - 400$ Myr, $400 - 600$ Myr, $600- 1000$ Myr and $>1$ Gyr (top-left to bottom-right) for proto-cluster (red circles) and field (blue diamonds) galaxies. The sSFRs are plotted against the median stellar mass of each mass bin. The error bars represent the error in the median. In the earliest age bin (t = 1-2 Gyr) massive galaxies ($\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $= 10-11$) are forming more stars than lower stellar mass galaxies ($\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $= 9-10$), in contrast to the most recent age bin (t = 0-200 Myr), where lower mass galaxies are forming more stars than the massive galaxies across environments. }
\label{fig:sfh_bins}
\end{figure*}
The star formation activity in a galaxy is closely related to its stellar mass at $z= 0-4$ \citep{Noeske2007,Tomczak2016,Koyama2018}. Since the SFH tracks how a galaxy grows its stellar mass, the SFH of a galaxy is also related to its stellar mass \citep{Thomas2005}. We find comparable results from the star formation histories derived from {\tt ZFOURGE}\ photometry using {\tt PROSPECTOR}\ and from {\tt IllustrisTNG}.
\subsubsection{SFH Vs Stellar Mass: Observations}
Figure \ref{fig:sfh_bins} shows the dependence of the star formation rate in each age bin, normalised to the stellar mass of the galaxy at the time of observation, on stellar mass. From top left to bottom right, the age bins range from the most recent to the earliest. The most massive galaxies ($\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $> 10.5$) have higher sSFR ($\sim 0.6$ dex) compared to low mass galaxies in the earliest age bin, indicating that massive galaxies formed their stellar mass early on irrespective of the environment. Towards the most recent age bin this trend gradually reverses: in the most recent age bin ($0-200$ Myr), the most massive galaxies have lower sSFR ($\sim 0.8$ dex) compared to the lowest mass galaxies.
From the SED fitting of observations with {\tt PROSPECTOR}, Figure \ref{fig:stellarmass_pros} shows that the high mass cluster galaxies ($\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $> 10.5$) form $\approx 45 \pm 8\%$ of their stellar mass in the first $\sim2$ Gyr of the Universe, compared to $9\pm 1\%$ to $19\pm2\%$ of stars formed for lower stellar mass galaxies in the same environment. In the field sample, the high mass galaxies ($\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $ > 10.5$) form $\approx 31 \pm 2\%$ of their stellar mass in the first $\sim 2$ Gyr of the Universe, compared to $12\pm 1\%$ to $17\pm 1\%$ in the lower stellar mass galaxies. Figure \ref{fig:sfh_pros} also shows that the shape of the SFH depends on the stellar mass of the galaxies. Galaxies in the highest mass bin have a constant SFH, the median SFH of galaxies in the lowest mass bin is rising, and galaxies in the stellar mass range $\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $= 9.5-10.5$ have a bursty star formation history.
A rising SFH is measured only for the lowest stellar mass galaxies, where the observational limit of our sample excludes galaxies with low star formation rates. Low stellar mass galaxies whose star formation has been suppressed, either by environmental or secular processes, are thus not included in our sample, which would explain the rising SFH measured in the lowest stellar mass bin.
\subsubsection{SFH Vs Stellar Mass: Simulations}
Figure \ref{fig:sfh_illustris} shows the star formation rates of the cluster and field galaxies from {\tt IllustrisTNG}. The SFHs of TNG100 galaxies show similar trends to the observations. The SFH of high stellar mass galaxies ($\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $> 10$) either plateaus with time in the field environment or drops in clusters. On the other hand, low stellar mass galaxies ($\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $ < 10$) have rising SFHs irrespective of the environment. This result is comparable to our observations (Figure \ref{fig:sfh_pros}). The SFH of galaxies from {\tt IllustrisTNG}\ in the highest stellar mass bin ($\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $ > 10.5$) plateaus $\approx 500$ Myr earlier than for the galaxies in the $\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $= 10-10.5$ mass bin and $\approx 1$ Gyr earlier than for the galaxies in the $\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $= 9.5-10$ mass bin. Massive galaxies ($\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $> 10.5$) form a large fraction of their stars ($38 \pm 3 \%$ and $32 \pm 0.4 \%$ of their total stellar mass in the cluster and field environments, respectively) in the first $2$ Gyr.\\
Our analysis indicates an earlier formation and evolution of massive galaxies, irrespective of environment, compared to lower mass galaxies. This result is in agreement with many theoretical and observational studies \citep{Thomas2005,SanchezBlazquez2006, Renzini2016}. A recent observational study by \cite{Webb2020} finds comparable results for quenched galaxies at $z<1.5$. \cite{Thomas2005} show that the most massive early type galaxies at $z=0$ have their peak star formation activity $1-2$ Gyr before low stellar mass galaxies, comparable to our results.
\subsection{Stellar Age: Mass and Environment}
Our results from {\tt ZFOURGE}\ and {\tt IllustrisTNG}\ show a significant effect of environment on the SFH of high mass galaxies at $z=2$. In this Section, we explore in {\tt IllustrisTNG}\ whether earlier formation or merger histories could explain the measured decline in star formation activity of massive cluster galaxies.
To estimate if massive galaxies in the proto-cluster environment formed earlier than the field galaxies, we analyse the average age of their stellar populations in {\tt IllustrisTNG}. We extract the mass weighted stellar age for each galaxy in our field and cluster environment as follows:
\begin{equation}
t_g = \frac{\sum_{i=1}^{N} t_iM_i}{\sum_{i=1}^{N}M_i}
\end{equation}
where $t_g$ is the mass weighted stellar age of the galaxy, $t_i$ and $\rm{M}_i$ are the age and mass of each stellar particle in the galaxy and N is the total number of stellar particles in the galaxy. We use the snapshot particle data of {\tt IllustrisTNG}\ \citep{Nelson2019a} to get the stellar ages of galaxies at $z=2$. Figure \ref{fig:age} shows the mass weighted stellar ages of the cluster (orange) and field (teal) galaxies in the respective stellar mass bins. The error bars show the error in the median of the distribution.
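A minimal sketch of this mass weighting, applied directly to per-particle ages and masses (the variable names are illustrative, not those of the {\tt IllustrisTNG}\ snapshot files):

```python
import numpy as np

def mass_weighted_age(ages, masses):
    """Mass-weighted stellar age t_g = sum(t_i * M_i) / sum(M_i).

    ages   : stellar particle ages t_i (any consistent time unit)
    masses : stellar particle masses M_i
    """
    ages = np.asarray(ages, dtype=float)
    masses = np.asarray(masses, dtype=float)
    return np.sum(ages * masses) / np.sum(masses)
```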
The median mass weighted stellar ages increase with the stellar mass of the galaxy in the cluster. The mass weighted stellar ages of cluster galaxies are comparable to those of the field galaxies across the stellar mass range. Our results do not change significantly when comparing the ages of the oldest stellar particle in each galaxy instead.
The median stellar age of the lowest mass galaxies ($\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $ =9-9.5$) is $\approx 800\pm 20$ Myr and that of the high mass galaxies ($\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $>10.5$) is $\approx 980 \pm 10$ Myr. The highest mass galaxies are thus $\approx 190 \pm 30$ Myr older than the low stellar mass galaxies. The cluster galaxies are the same age as the field galaxies across the stellar mass bins. This result differs from the observational study by \cite{Webb2020}, who find that at $z=1$, cluster galaxies are $310$ Myr older than the field galaxies in the mass bin $\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $=10-11.8 $.
\subsection{Merger events}
Many theoretical and observational studies correlate galaxy mergers with stellar mass growth, increased gas fractions, enhanced AGN activity and enhanced SFR \citep{Kewley2006,Ellison2015,Dutta2019,Moreno2019, Hani2020}. \cite{Watson2019} show that galaxy mergers are twice as frequent in the proto-cluster environment compared to the field at $z\sim2$. Using {\tt IllustrisTNG}, \cite{Hani2020} show that the enhancement of the SFR due to mergers correlates with the stellar mass, mass ratio and gas fraction of the merging pair. The decay of the SFR enhancement in the post-merger phase happens over $500$ Myr, and the galaxies that underwent the strongest merger driven starburst events quench on a faster timescale.
We explore whether the differences in the SFH of massive galaxies between environments are driven by mergers. We use the merger catalogs by \cite{Rodriguez-Gomez2015} from {\tt IllustrisTNG}\ to explore the effect of mergers on the star formation histories. Figure \ref{fig:mergerdist} shows that the distributions of total mergers (mass ratio $> 0.1$) encountered in the cluster (orange) and field (teal) environments in the different stellar mass bins are comparable. However, massive galaxies ($\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $> 10.5$) on average have experienced $8\pm0.3$ and $10\pm1$ mergers (mass ratio $> 0.1$) in the field and cluster samples respectively, compared to $5\pm0.3$ mergers experienced by low mass cluster and field galaxies ($\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$$ = 9-9.5$) in their lifetimes.
While merger events often trigger further star formation, gas poor mergers do not, and in turn do not affect the average stellar age of the galaxy. To estimate whether massive galaxies have experienced gas poor mergers, we analyse the cold gas fraction and the total gas fraction of the mergers. We use the merger history catalogs by \cite{Rodriguez-Gomez2017} to get the mean cold gas fraction (weighted by the stellar mass) of all the progenitors until $z=2$ (Figure \ref{fig:coldgasfrac}). The mean cold gas fractions of cluster and field galaxies are comparable across stellar mass bins.
To assess whether the difference in SFHs could be driven by the total supply of gas, we calculate the total gas fraction of the progenitors for galaxies at each redshift snapshot as:
\begin{equation}
f_{gas,z} = \frac{\sum_{p=1}^{N} \rm{M_{gas}}_{p, z}}{\sum_{p=1}^{N} \rm{M_{gas}}_{p, z}+\sum_{p=1}^{N} \rm{M_{*}}_{p, z}}
\end{equation}
where $f_{gas,z}$ is the total gas fraction of all progenitors (p) at redshift $z$, $\rm{M_{gas}}_{p, z}$ and $\rm{M_{*}}_{p, z}$ are the total gas mass and stellar mass of each progenitor $p$ at the same redshift snapshot $z$.
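A minimal sketch of this calculation, with the progenitor masses given as plain lists (catalog access omitted):

```python
def progenitor_gas_fraction(gas_masses, stellar_masses):
    """Total gas fraction of all progenitors at one redshift snapshot.

    f_gas = sum(M_gas) / (sum(M_gas) + sum(M_*)),
    where the sums run over every progenitor p at that snapshot.
    """
    m_gas = sum(gas_masses)
    m_star = sum(stellar_masses)
    return m_gas / (m_gas + m_star)
```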
Figure \ref{fig:gasfrac} shows the total gas fraction of the progenitors for the cluster (orange) and field (teal) galaxies in the 4 stellar mass bins. The cluster galaxies in the highest stellar mass bin ($\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$$ >10.5 $) consistently encounter mergers with lower gas fractions from $\sim1$ Gyr after the Big Bang onwards, compared to field galaxies in the same stellar mass bin. On the other hand, the total gas fractions of progenitors in the lowest stellar mass bin ($\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$$ = 9-9.5 $) are comparable across environments throughout their merger histories.
\begin{figure}
\noindent
\centering
\includegraphics[scale = 0.28]{ages_binned_ill.png}
\caption{Mass weighted stellar age (Gyr) vs log stellar mass ($\rm{M}_\odot$) of the galaxy in cluster (orange) and field (teal) environments at $z=2$ in {\tt IllustrisTNG}. The massive galaxies ($\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $>10.5$) are $\approx$190 Myr older than the low mass galaxies ($\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $=9-9.5$). Stellar populations in cluster galaxies have comparable ages to those in field galaxies across the stellar mass range.}
\label{fig:age}
\end{figure}
\begin{figure*}
\noindent
\centering
\includegraphics[scale = 0.31]{merger_dist.png}
\caption{Distribution of total number of mergers (mass ratio $> 0.1$) in field (teal) and cluster (orange) galaxies in four stellar mass bins in {\tt IllustrisTNG}\ at $z=2$. (left to right: $\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $ = 9-9.5,\ 9.5-10,\ 10-10.5,\ \geq 10.5$ ). Massive galaxies ($\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $>10.5$) on average have experienced 3-5 more mergers (mass ratio $> 0.1$) compared to the low mass galaxies ($\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $= 9-9.5$) in both cluster and field environment. }
\label{fig:mergerdist}
\end{figure*}
\begin{figure}
\noindent
\centering
\includegraphics[scale = 0.27]{mean_cold_gasfrac_mergers.png}
\caption{Mean cold gas fraction in mergers vs stellar mass of galaxies in the field (teal diamonds) and cluster (orange circles) environments. The markers show the median and the error bars the error in the median for the sample. The mean cold gas fraction of mergers decreases with increasing stellar mass of the galaxies and is comparable across environments. }
\label{fig:coldgasfrac}
\end{figure}
\begin{figure}
\noindent
\centering
\includegraphics[scale = 0.27]{gas_fraction.png}
\caption{Total gas fraction of progenitors vs age of the Universe for four stellar mass bins in the field and cluster environments. Progenitors of the most massive cluster galaxies ($\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $\geq 10.5$, solid orange line) are consistently gas poorer in comparison to the progenitors of field galaxies in the same stellar mass bin (teal solid line) since $\sim1$ Gyr of the age of the Universe.}
\label{fig:gasfrac}
\end{figure}
\section{Discussion}
\label{sec:discussion}
The star formation rate of a galaxy depends strongly on its stellar mass and local environment. This dependence has been demonstrated to evolve since $z\sim2$ in both observations and cosmological simulations \citep{ Peng2010, Muzzin2011, Tran2015, Tomczak2016, Kawinwanichakij2017, Darvish2017, Donnari2019, Donnari2020a, Donnari2021, Webb2020}. In this work, we have compared the star formation histories of star-forming galaxies in the cluster and field environments at $z\sim2$ using observations and simulations.
\subsection{Star Formation History: Environment and Stellar Mass}
In Section \ref{sec:results} we have shown that the star formation histories of galaxies from {\tt ZFOURGE}\ at $z\sim2$ strongly depend on the stellar mass of the galaxy in both proto-cluster and field environments. Figure \ref{fig:stellarmass_pros} shows that galaxies in the highest mass bin have formed $45 \pm 8 \%$ and $31 \pm 2 \%$ (in the proto-cluster and field environments, respectively) of their present stellar mass in the first $\sim 2$ Gyr. Over the same epoch (first $\sim 2$ Gyr), the lowest mass galaxies form $<20\%$ of their present stellar mass in both proto-cluster and field environments. The star formation histories from {\tt PROSPECTOR}\ also show that the most massive galaxies have a constant or declining star formation history, as opposed to the rising star formation history of the lowest mass galaxies, in both high and low density environments.
This points to a faster and early stellar mass build up in the most massive galaxies and a delayed evolution of the lowest mass galaxies.
This result is consistent with observational and theoretical studies that show a mass dependence of star formation histories \citep{Thomas2005,Poggianti2006, Sanchez-Blazquez2009,Thomas2010, Webb2020}. These studies show that the evolution of more massive galaxies happens over shorter time scales compared to their lower mass counterparts. Stellar population studies predict a difference of $\sim 2 $ Gyr between the evolution of high mass cluster early type galaxies and high mass field early type galaxies at $z\sim0$ \citep{Thomas2005, Renzini2006}, which is larger than what we measure. However, the $2$ Gyr difference could be a result of redshift evolution.
In our sample, the effect of environment on the SFH of star forming galaxies in the $z\sim2$ proto-cluster is present only in the highest mass galaxies ($\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $ > 10.5$). The massive proto-cluster galaxies, which form $\approx 45 \pm 8\%$ of their stellar mass in the first 2 Gyr, have a declining star formation history compared to the massive field galaxies, which form $\approx 31 \pm 2 \%$ of their stellar mass in the first $\approx 2$ Gyr (Figure \ref{fig:stellarmass_pros}). The field galaxies take $\approx 2.8 $ Gyr to form $46\%$ of their total stellar mass. This shows a slower and delayed (by $0.8$ Gyr) evolution of high mass field galaxies compared to the proto-cluster galaxies. The lack of environmental effect on the SFH of low mass galaxies at $z\sim2 $ is comparable to the result of \cite{Papovich2018}, who find that the environmental quenching efficiency of galaxies decreases with stellar mass until $z\sim 0.5 $. Our conclusion for the observed low mass galaxies remains to be confirmed with an observational sample with SFR completeness below the SFR threshold of $0.8$ $\rm{M}_\odot\rm{yr}^{-1}$.
We find similar trends in the star formation histories of consistently-selected galaxies from {\tt IllustrisTNG}\ (Figure \ref{fig:sfh_illustris}). Going from the highest mass galaxies to the lowest mass galaxies, the plateauing of star formation occurs earlier for the high mass galaxies than for the low mass sample. For the highest mass galaxies ($\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $= 10.5 - 11 $) the star formation plateaus at $\sim 1.5$ Gyr, compared to $\sim 2$ Gyr and $2.7$ Gyr for galaxies in the $\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$$ = 10 - 10.5 $ and $\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$$ = 9.5 - 10 $ mass bins respectively. Galaxies in the lowest mass bin ($\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$$ = 9 - 9.5 $) have rising star formation until $z=2$. This also indicates an early evolution of massive galaxies, consistent with our results from {\tt PROSPECTOR}. We also find that the environment significantly affects the star formation histories of galaxies in the highest mass bin, with cluster galaxies forming more stars early on compared to the field galaxies, similar to our results from observations.\\
In {\tt IllustrisTNG}, the stellar mass formed by the most massive galaxies in the first 2 Gyr of the universe is $38\pm3\%$ in the cluster and $32\pm0.4\%$ in the field. The difference in fraction of stellar mass formed in the first 2 Gyr of the universe is within $1\sigma$ error ($6\pm3.4\%$) compared to the observations ($14\pm10\%$). Cluster and field galaxies in the lowest mass bin form $24 \pm 0.7\%$ and $23 \pm 0.5 \%$ of their total stellar mass at $z=2$ which is lower than the fraction of total stellar mass formed in high mass galaxies, similar to our results from observations.\\
In {\tt IllustrisTNG}, \cite{Donnari2020a} show that the fraction of quenched satellite galaxies at $z=2$ is $\sim 0.2-0.4$ in the stellar mass ranges $\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $= 9 - 9.5 $ and $\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $ = 10.5 - 11$. Due to the SFR cut imposed on the selection of galaxies from {\tt IllustrisTNG}\ to match the observations, we do not see this suppression of the SFH in low mass cluster galaxies: the low mass cluster galaxies experiencing the effect of environment and undergoing suppression of star formation are removed by the SFR threshold. We need spectroscopic redshift confirmation of faint ($K_{AB}>24$) galaxies at $z\sim2$ to measure the effect of environment on the SFH of the low mass quenched galaxies.
\subsection{Early Formation?}
Our results from the observations ({\tt ZFIRE}\ - {\tt ZFOURGE}) and the {\tt IllustrisTNG}\ simulations show signs of the onset of star formation quenching in the most massive galaxies in the cluster environment. Using the {\tt IllustrisTNG}\ simulations, we investigate whether an earlier formation of cluster galaxies drives the difference in SFHs (Sections \ref{sec:SFHvsenv} and \ref{sec:SFHvssm}). Galaxies in the most massive bin are $\approx 190 \pm 30$ Myr older than the lowest mass galaxies. Our result is unaffected if we consider the age of the oldest stellar particle in the galaxy as a proxy for formation time instead of the mass weighted stellar age.
The age difference of $\approx 190 \pm 30$ Myr at $z=2$ between high and low stellar mass galaxies is consistent with observational results at redshift $z\sim1$, where cluster galaxies are $\sim 300 - 400$ Myr older than the field galaxies \citep{VanDokkum2007,Webb2020}, and $\sim 1$ Gyr older at $z\sim0.1$ \citep{Thomas2005}. The increasing difference in stellar ages could be driven by the redshift evolution of the ages of cluster and field galaxies. Nevertheless, the mass weighted stellar ages are unable to explain the measured difference in the SFH of massive galaxies in the cluster and field in our sample. This indicates that the suppression of star formation in high stellar mass cluster galaxies at $z=2$ in {\tt IllustrisTNG}\ is not a result of earlier formation and evolution.
\subsection{Role of Mergers}
Studies have shown that massive galaxies grow their stellar mass through mergers \citep{Rodriguez-Gomez2015,Pillepich2018b,Gupta2020}. Recent work by \cite{Hani2020} uses {\tt IllustrisTNG}\ to find that mergers enhance the SFR of post merger galaxies, but the relative increase in SFR depends on the stellar mass, mass ratio of the progenitor pair and the gas fraction of the progenitors. We investigate the possible role of mergers in shaping the SFHs.
We track the merger histories of our sample from {\tt IllustrisTNG}\ and find that in the combined cluster+field sample, massive galaxies ($\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $\geq 10.5$) experience on average $8\pm0.26$ mergers compared to $5\pm0.03$ mergers encountered by the low mass galaxies ($\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ = $9-9.5$) over $2\leq z < 20$ (Figure \ref{fig:mergerdist}). We speculate that the higher number of mergers experienced by massive galaxies early on leads to the build-up of stellar mass and higher SFRs in the earlier time bins. The higher SFRs of massive galaxies in the early time bins (Figures \ref{fig:sfh_pros}, \ref{fig:sfh_illustris}) could deplete the gas reservoir faster than in the lower mass galaxies (Figure \ref{fig:gasfrac}). The decrease of the mean cold gas fraction of mergers with the stellar mass of the galaxy also indicates the depletion of star formation fuel in massive galaxies compared to low stellar mass galaxies, explaining the relatively flat SFHs of massive galaxies (Figure \ref{fig:sfh_bins}).
The effect of environment on the SFHs of galaxies is only evident in the highest mass bin ($\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$ $\geq 10.5$) in both observations (Figure \ref{fig:sfh_pros}) and simulations (Figure \ref{fig:sfh_illustris}). The high mass cluster galaxies experience more mergers ($10\pm1$) compared to the high mass field galaxies ($8\pm0.3$ mergers). Moreover, the progenitors of the massive cluster galaxies at $z=2$ have lower total gas fractions than the progenitors of high mass field galaxies even when the universe was 1 Gyr old (Figure \ref{fig:gasfrac}), although the mean cold gas fractions of the progenitors of the two populations are comparable at $z=2$ (Figure \ref{fig:coldgasfrac}).
We hypothesize that the observed suppression of sSFR in massive cluster galaxies in the recent time bins is a delayed effect of the lower gas fraction in their progenitors. In other words, massive galaxies in proto-clusters at $z\sim2$ show signatures of environmental effects not because of direct environmental processes due to their interaction with other cluster galaxies or the intra-halo medium, but rather because of the very nature of the environment they live in, which in turn affects their merger history and their opportunities to acquire gas.
Star formation rapidly progresses in the massive cluster galaxies with the available gas reservoirs. However, by $z=2$ massive cluster galaxies are starved because recent mergers have been systematically gas poorer in comparison to the massive field galaxies. The early onset of depletion of the gas reservoir in the massive cluster galaxies would cause the suppression of star formation by $z=2$ and could lead to an eventual quenching in the future via starvation. The environment-dependent depletion of gas fractions progresses from massive to low mass galaxies as we approach $z=2$ (Figure \ref{fig:gasfrac}). This suggests that the massive cluster galaxies would grow via dry mergers in the low redshift universe \citep{Tran2005, Webb2015}, and the environmental quenching would progress from massive galaxies to low mass galaxies in the cluster environment as is observed in the low redshift universe \citep{Donnari2021}.
In a recent work, \cite{Gupta2021} find that in {\tt IllustrisTNG}\ star formation quenching in massive galaxies depends on their stellar size and is driven by black hole feedback \citep{Davies2020,Zinger2020}. In future work, we will test whether the signs of the early onset of star formation suppression in massive cluster galaxies are imprinted on the sizes of their stellar disks. We will further investigate whether the growth and feedback of the central supermassive black hole is affected by the local environment of the galaxy.
\section{Summary}
In this paper, we have presented the first measurements of the SFHs of galaxies in the proto-cluster environment at $z=2$ using the {\tt ZFIRE}\ - {\tt ZFOURGE}\ surveys. We have compared our results with the SFHs of galaxies in different environments in the {\tt IllustrisTNG}\ simulations and used the latter to provide a possible physical interpretation of our findings. Our main results are summarised as follows:
1. {\tt ZFIRE}\ - {\tt ZFOURGE}: The SFHs of massive star forming galaxies ($10.5\leq $$\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$$ \leq 11$) in the proto-cluster are flat compared to those of field galaxies in the same mass bin (Figure \ref{fig:sfh_pros}). In the first two Gyr of the age of the Universe, massive proto-cluster galaxies form $45\pm 8 \%$ of their total stellar mass compared to $31\pm 2 \%$ formed in massive field galaxies (Figure \ref{fig:stellarmass_pros}). However, a similar dependence of SFHs on environment is not observed for galaxies in the lower mass bin ($\log[{\rm M}_{\ast}/{\rm M}_{\odot}]$$ \leq 10.5$).
2. {\tt ZFIRE}\ - {\tt ZFOURGE}: High mass galaxies form a larger fraction of their stellar mass ($45\pm 8\%$ and $31\pm 2\%$ in the cluster and field environments, respectively) in the earliest age bin (the first 2 Gyr of the age of the Universe) than low stellar mass galaxies ($17\pm 1\%$ and $19\pm 2 \%$ in the cluster and field environments, respectively) in the same age bin (Figure \ref{fig:stellarmass_pros}). This indicates a faster/earlier stellar mass build-up of massive galaxies.
3. {\tt IllustrisTNG}: SFHs from simulations are comparable to our results from observations. The effect of environment is most prominent in the most massive galaxies (Figure \ref{fig:sfh_illustris}). Star formation in most massive cluster galaxies is suppressed compared to the field galaxies in the same mass bin. However, the SFHs of low mass galaxies do not show any dependence on environment.
4. Stellar Ages: In {\tt IllustrisTNG}, low mass galaxies are on average $190$ Myr younger than the high mass galaxies. However, there is no difference in stellar ages across environments over the studied stellar mass range (Figure \ref{fig:age}). Hence, the observed differences in the SFHs between cluster and field massive galaxies cannot be a result of early formation or earlier evolution of massive cluster galaxies.
5. Mergers and Gas Fractions: Based on the outcome of {\tt IllustrisTNG}, we find that by $z=2$ massive galaxies have on average experienced more mergers ($8$ versus $5$ mergers) than low mass galaxies, irrespective of their environment (Figure \ref{fig:mergerdist}). The mean cold gas fractions in mergers decrease with increasing stellar mass but are comparable across environments (Figure \ref{fig:coldgasfrac}). On the other hand, the total gas fractions in the progenitors of massive cluster galaxies are consistently lower from $\sim1$ Gyr after the Big Bang onwards in comparison to massive field galaxies (Figure \ref{fig:gasfrac}).
We hence hypothesize that the reduced star formation in the massive cluster galaxies at $z\sim2$ is a delayed cumulative effect of the lower gas fractions in their progenitors due to the very environment they evolve in instead of direct interactions with other cluster galaxies or the intra-cluster medium.
\label{sec:summary}
\section{Acknowledgement}
The authors would like to thank Dr. Joel Leja for insightful feedback for the paper. The authors would also like to thank the anonymous referee for their careful reading of the manuscript, which has helped improve the clarity of the work. GGK acknowledges the support of the Australian Research Council through the Discovery Project DP170103470. AG acknowledges support of the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. T.N. and K.G. acknowledge support from Australian Research Council Laureate Fellowship FL180100060.
This paper includes data gathered with the 6.5 meter Magellan Telescopes located at Las Campanas Observatory, Chile, and the W. M. Keck Observatory, Hawaii. The authors also wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from the summit.
\software{
Prospector \citep{Leja2017, Johnson2019},
EAZY \citep{Brammer2008},
FAST \citep{Kriek2009},
FSPS \citep{Conroy2009},
MIST \citep{Dotter2016}}
\section*{Acknowledgements}
\input{acknowledgements/Acknowledgements}
\printbibliography
\clearpage
\input{atlas_authlist}
\end{document}
\section{Introduction}
\label{sec:intro}
Following the observation~\cite{HIGG-2012-27,CMS-HIG-12-028} of a
Higgs boson, $H$, with a mass of approximately
125~GeV~\cite{CMS-HIG-14-042} by the ATLAS and CMS collaborations at the Large Hadron Collider (LHC), the
properties of its interactions with the electroweak gauge bosons have
been measured
extensively~\cite{CMS-HIG-15-002,HIGG-2014-06,CMS-HIG-14-009}.
The coupling of the Higgs boson to leptons has been established
through the observation of the $H\to\tau^+\tau^-$
channel~\cite{CMS-HIG-15-002,HIGG-2013-32,CMS-HIG-13-004},
while in the quark sector indirect evidence is available for the
coupling of the Higgs boson to the
top-quark~\cite{CMS-HIG-15-002} and evidence for the Higgs
boson decays into $b\bar{b}$ has been recently presented~\cite{HIGG-2016-29,CMS-HIG-16-044}.
Despite this progress, the Higgs
boson interaction with the fermions of the first and second generations
is still to be confirmed experimentally.
In the Standard Model (SM), Higgs boson interactions with fermions are implemented through Yukawa
couplings, while a wealth of beyond-the-SM theories predict
substantial modifications. Such scenarios include the Minimal Flavour Violation
framework~\cite{DAmbrosio:2002ex}, the Froggatt--Nielsen
mechanism~\cite{Froggatt:1978nt}, the Higgs-dependent Yukawa couplings
model~\cite{Giudice:2008uua}, the Randall--Sundrum family of models
\cite{Randall:1999ee}, and the possibility of the Higgs boson being a
composite pseudo-Goldstone boson~\cite{Dugan:1984hq}. An overview of
relevant models of new physics is provided in
Ref.~\cite{deFlorian:2016spz}.
The rare decays of the Higgs boson into a heavy quarkonium state,
$\Jpsi$ or $\Upsilon(nS)$ with $n=1,2,3$, and a photon have been suggested for probing the charm- and bottom-quark couplings to the Higgs
boson~\cite{VatoHBranching,HBranching,Bodwin:2014bpa,Bodwin:2016edd}
and have already been searched for by the ATLAS
Collaboration~\cite{HIGG-2014-03}, resulting in 95\% confidence level (CL) upper limits of
$1.5\times 10^{-3}$ and $\left(1.3,1.9,1.3\right)\times 10^{-3}$ on the branching fractions,
respectively. The $H\to\Jpsi\gamma$ decay mode has also been searched for by the
CMS Collaboration~\cite{CMS-HIG-14-003}, yielding the same upper
limit. The corresponding SM predictions for these branching fractions~\cite{Koenig:2015pha} are
${\cal B}\left(H\to\Jpsi\gamma\right)=\left(2.95\pm 0.17\right)\times 10^{-6}$ and
${\cal B}\left(H\to\Upsilon(nS)\gamma\right)=\left(4.6^{+1.7}_{-1.2},2.3^{+0.8}_{-1.0},2.1^{+0.8}_{-1.1}\right)\times
10^{-9}$.
The prospects for observing and studying exclusive Higgs boson decays into a
meson and a photon with an upgraded High Luminosity
LHC~\cite{deFlorian:2016spz} or a future hadron
collider~\cite{Contino:2016spe} have also been studied.
Currently, the light ($u,d,s$) quark couplings to the Higgs boson are
loosely constrained by existing data on the total Higgs boson width, while the large multijet background
at the LHC inhibits the study of such
couplings with inclusive $H\to q\bar{q}$ decays.
Rare exclusive decays of the Higgs boson into a light meson, $M$, and a
photon, $\gamma$, have been suggested as a probe of the couplings of
the Higgs boson to
light quarks and would allow a search for
potential deviations from the SM prediction~\cite{Kagan:2014ila,Koenig:2015pha,Perez:2015lra}.
Specifically, the observation of the Higgs boson decay to a $\phi$ or $\rho(770)$ (denoted as $\rho$ in the following) meson and a photon
would provide sensitivity to its couplings to the strange quark, and to the up
and down quarks, respectively. The expected SM branching
fractions are
${\cal B}\left(H\to\phi\gamma\right)=(2.31\pm0.11)\times 10^{-6}$
and
${\cal B}\left(H\to\rho\gamma\right)=(1.68\pm0.08)\times10^{-5}$%
~\cite{Koenig:2015pha}.
The decay amplitude receives two main contributions that interfere
destructively. The first is referred to as ``direct'' and proceeds through
the $H\to q\bar{q}$ coupling, where subsequently a photon is emitted
before the $q\bar{q}$ hadronises exclusively to $M$.
The second is referred to as ``indirect'' and proceeds via the
$H\to\gamma\gamma$ coupling followed by the fragmentation $\gamma^*\to
M$. In the SM, owing to the smallness of the light-quark Yukawa
couplings, the latter amplitude dominates, despite being loop
induced. As a result, the expected branching fraction predominantly
arises from the ``indirect'' process, while the Higgs boson couplings to
the light quarks are probed by searching for modifications of this
branching fraction due to changes in the ``direct'' amplitude.
This paper describes a search for Higgs boson decays into the
exclusive final states $\phi\gamma$ and $\rho\gamma$. The decay
$\phi\to K^{+}K^{-}$ is used to reconstruct the $\phi$ meson, and
the decay $\rho\to\pi^+\pi^-$ is used to reconstruct the $\rho$
meson. The branching fractions of the respective meson decays are well known and are accounted for when
calculating the expected signal yields. The presented search uses approximately 13
times more
integrated luminosity than the first search for
$H\to\phi\gamma$ decays~\cite{HIGG-2016-05}, which led to a 95\%
CL upper limit of ${\cal
B}\left(H\to\phi\gamma\right)<1.4\times 10^{-3}$, assuming SM
production rates of the Higgs boson. Currently, no other experimental information about the
$H\to\rho\gamma$ decay mode exists.
The searches for the analogous decays of the $Z$ boson into a meson and
a photon are also presented in this paper. These have been
theoretically studied~\cite{Huang:2014cxa,Grossmann:2015lea} as
a unique precision test of the SM and the factorisation approach in
quantum chromodynamics (QCD), in an environment where the power corrections
in terms of the QCD energy scale over the vector boson's mass are
small~\cite{Grossmann:2015lea}.
The large $Z$ boson production cross section at the LHC means that rare
$Z$ boson decays can be probed at branching fractions much smaller
than for Higgs boson decays into the same final states. The SM branching fraction
predictions for the decays considered in this paper are ${\cal
B}\left(Z\to\phi\gamma\right)=(1.04\pm0.12)\times
10^{-8}$~\cite{Grossmann:2015lea,Huang:2014cxa} and ${\cal
B}\left(Z\to\rho\gamma\right)=(4.19\pm0.47)\times
10^{-8}$~\cite{Grossmann:2015lea}.
The first search for $Z\to\phi\gamma$ decays by the ATLAS Collaboration was presented in
Ref.~\cite{HIGG-2016-05} and a 95\% CL upper limit of
${\cal B}\left(Z\to\phi\gamma\right)<8.3\times 10^{-6}$ was
obtained. So far no direct experimental information about the decay
$Z\to\rho\gamma$ exists.
\section{ATLAS detector}
\label{sec:atlas}
ATLAS~\cite{PERF-2007-01} is a multi-purpose particle physics detector
with a forward-backward symmetric cylindrical geometry and near 4$\pi$
coverage in solid angle.\footnote{ATLAS uses a right-handed coordinate
system with its origin at the nominal interaction point (IP) in the
centre of the detector and the $z$-axis along the beam pipe. The
$x$-axis points from the IP to the centre of the LHC ring, and the
$y$-axis points upward. Cylindrical coordinates $(r,\phi)$ are used
in the transverse plane, $\phi$ being the azimuthal angle around the
$z$-axis. The pseudorapidity is defined in terms of the polar angle
$\theta$ as $\eta=-\ln\tan(\theta/2)$.} It consists of an inner
tracking detector surrounded by a thin superconducting solenoid,
electromagnetic and hadronic calorimeters, and a muon spectrometer.
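The footnote above defines the pseudorapidity as $\eta=-\ln\tan(\theta/2)$. As a quick illustration (a Python sketch, not part of the paper), the mapping and its inverse are:

```python
import math

def pseudorapidity(theta: float) -> float:
    """Pseudorapidity eta = -ln(tan(theta/2)) for a polar angle theta in radians."""
    return -math.log(math.tan(theta / 2.0))

def polar_angle(eta: float) -> float:
    """Inverse mapping: theta = 2 * atan(exp(-eta))."""
    return 2.0 * math.atan(math.exp(-eta))
```

For example, a track at the edge of the inner-detector acceptance, $|\eta|=2.5$, corresponds to a polar angle of roughly $9.4^\circ$ from the beam axis.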
The inner tracking detector (ID) covers the pseudorapidity range
$|\eta|<2.5$, and is surrounded by a thin superconducting solenoid
providing a 2~T magnetic field. At small radii, a high-granularity
silicon pixel detector covers the vertex region and typically provides
three measurements per track. A new innermost pixel-detector layer,
the insertable B-layer, was added before $13\,\TeV$\ data-taking began in 2015
and provides an additional measurement at a radius of about
\SI{33}{\milli\meter} around a new and thinner beam
pipe~\cite{ATLAS-TDR-19}. The pixel detectors are followed by a
silicon microstrip tracker, which typically provides four space-point measurements
per track. The silicon detectors are complemented by a gas-filled
straw-tube transition radiation tracker, which enables radially
extended track reconstruction up to $|\eta| = 2.0$, with typically 35
measurements per track.
The calorimeter system covers the pseudorapidity range $|\eta| < 4.9$.
A high-granularity lead/liquid-argon (LAr) sampling electromagnetic
calorimeter covers the region $|\eta|<3.2$, with an additional thin
LAr presampler covering $|\eta| < 1.8$ to correct for
energy losses upstream. The electromagnetic calorimeter is divided
into a barrel section covering $|\eta| < 1.475$ and two endcap
sections covering $1.375 < |\eta| < 3.2$. For $|\eta| < 2.5$ it is
divided into three layers in depth, which are finely segmented in
$\eta$ and $\phi$. A steel/scintillator-tile calorimeter provides
hadronic calorimetry in the range $|\eta|<1.7$. LAr technology, with
copper as absorber, is used for the hadronic calorimeters in the
endcap region, $1.5<|\eta|<3.2$. The solid-angle coverage is
completed with forward copper/LAr and tungsten/LAr calorimeter modules
in $3.1 < |\eta| < 4.9$, optimised for electromagnetic and hadronic
measurements, respectively.
The muon spectrometer surrounds the calorimeters and comprises
separate trigger and high-precision tracking chambers measuring the
deflection of muons in a magnetic field provided by three air-core
superconducting toroids.
A two-level trigger and data acquisition system is used to provide an online selection and record
events for offline analysis~\cite{TRIG-2016-01}. The level-1
trigger is implemented in hardware and uses a subset of detector
information to reduce the event rate to \SI{100}{\kilo\hertz} or less from the maximum LHC collision rate of \SI{40}{\mega\hertz}. It is
followed by a software-based high-level trigger which filters events using the
full detector information and records events for detailed offline analysis
at an average rate of \SI{1}{\kilo\hertz}.
\section{Data and Monte Carlo simulation}
\label{sec:datamc}
The search is performed with a sample of $pp$ collision data recorded at a
centre-of-mass energy $\rts=13\,\TeV$. Events are retained for further analysis
only if they were collected under stable LHC beam conditions and the detector
was operating normally. This results in an integrated luminosity of 35.6 and
32.3~\ifb\ for the $\phi\gamma$ and $\rho\gamma$ final states, respectively.
The integrated luminosity of the data sample has an uncertainty of $3.4\%$
derived using the method described in Ref.~\cite{DAPR-2013-01}.
The $\phi\gamma$ and $\rho\gamma$ data samples used in this analysis were each collected with a specifically designed trigger.
Both triggers require an isolated photon with a transverse momentum, $\pt$, greater
than $35\,\GeV$ and an isolated pair of ID tracks, one of which must
have a $\pt$ greater than $15\,\GeV$, associated with a topological
cluster of calorimeter cells~\cite{PERF-2014-07} with a transverse energy
greater than $25\,\GeV$. The photon part of the trigger follows the same process
as the inclusive photon trigger requiring an electromagnetic cluster in the
calorimeter consistent with a photon and is described with more detail in
Ref.~\cite{TRIG-2016-01}, while
requirements on the ID tracks are applied in the high-level trigger
through an appropriately modified version of the $\tau$-lepton trigger
algorithms which are described in more detail in Ref.~\cite{ATLAS-CONF-2017-061}.
The trigger for the $\phi\gamma$ final state was introduced in
September 2015. This trigger requires that the invariant mass of the
pair of tracks, under the charged-kaon hypothesis, is in the range 987--1060\,$\MeV$,
consistent with the $\phi$ meson mass. The trigger
efficiency for both the Higgs and
$Z$ boson signals is approximately 75\% with respect to the offline
selection, as described in Section~\ref{sec:selection}.
The corresponding trigger for the $\rho\gamma$ final state was introduced
in May 2016. This trigger requires the invariant mass of the pair of tracks,
under the charged-pion hypothesis, to be in the range 475--1075\,$\MeV$
to include the bulk of the broad $\rho$ meson mass distribution. The trigger efficiency
for both the Higgs and $Z$ boson signals is approximately 78\% with
respect to the offline selection.
Higgs~boson production through the gluon--gluon fusion ($ggH$) and vector-boson fusion
(VBF) processes was modelled up to next-to-leading order (NLO) in $\alphas$
using the \POWHEGBOX v2 Monte Carlo (MC) event
generator~\cite{powheg1,powheg2,powheg3,powheg4,powheg5} with CT10 parton
distribution functions~\cite{Lai:2010vv}.
\POWHEGBOX was interfaced with the
\PYTHIAV{8.186} MC event
generator~\cite{Sjostrand:2007gs,Sjostrand:2006za} to model the parton shower,
hadronisation and underlying event. The corresponding parameter values were set according to
the AZNLO tune~\cite{STDM-2012-23}.
Additional contributions from the associated production of a Higgs
boson and a $W$ or $Z$ boson (denoted by $WH$ and $ZH$, respectively) are
modelled by the \PYTHIAV{8.186} MC event
generator
with NNPDF23LO
parton distribution functions~\cite{Ball:2012cx} and the A14 tune for hadronisation and the underlying event~\cite{ATL-PHYS-PUB-2014-021}. The production
rates and kinematic distributions for the SM Higgs boson with $\mH =125\,\gev$ are assumed throughout. These were obtained from Ref.~\cite{deFlorian:2016spz} and are summarised below. The $ggH$
production rate is normalised such that it reproduces the total cross
section predicted by a next-to-next-to-next-to-leading-order
QCD calculation with NLO electroweak corrections
applied~\cite{Anastasiou:2015ema,Anastasiou:2016cez,Actis:2008ug,Anastasiou:2008tj}. The
VBF production rate is normalised to an approximate NNLO QCD cross section
with NLO electroweak corrections
applied~\cite{Ciccolini:2007jr,Ciccolini:2007ec,Bolzoni:2010xr}. The
$WH$ and $ZH$ production rates are normalised to cross sections
calculated at next-to-next-to-leading order (NNLO) in QCD with NLO electroweak
corrections~\cite{Brein:2003wg,Denner:2011id} including the NLO QCD
corrections~\cite{Altenkamp:2012sx} for $gg\to ZH$. The expected
signal yield is corrected to include the 2\% contribution from the
production of a Higgs boson in association with a $t\bar{t}$ or a
$b\bar{b}$ pair.
The \POWHEGBOX v2 MC event generator with CT10 parton
distribution functions was also used to model
inclusive $Z$ boson production.
\PYTHIAV{8.186} with
CTEQ6L1 parton distribution functions~\cite{Pumplin:2002vw} and the
AZNLO parameter tune was used to simulate parton showering and
hadronisation.
The prediction is normalised to the total cross section obtained from
the measurement in Ref.~\cite{STDM-2015-03}, which has an uncertainty
of 2.9\%.
The Higgs and $Z$ boson decays were simulated as a cascade of two-body
decays, respecting angular momentum conservation. The meson line shapes were simulated by
\PYTHIA. The branching fraction for the decay
$\phi\to K^{+}K^{-}$ is $(48.9\pm0.5)\%$ whereas the decay
$\rho\to\pi^+\pi^-$ has a branching fraction close to
$100\%$~\cite{PDG}.
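For orientation, the expected SM signal yield folds the integrated luminosity with the production cross section, the two branching fractions, and the total selection efficiency. A back-of-the-envelope Python sketch follows; the total Higgs production cross section ($\approx 55$ pb at 13 TeV) is an assumed round number not quoted in this paper, while the other inputs (35.6 fb$^{-1}$, ${\cal B}(H\to\phi\gamma)=2.31\times10^{-6}$, ${\cal B}(\phi\to K^+K^-)=48.9\%$, 17\% efficiency) are taken from the text:

```python
def expected_yield(lumi_fb, sigma_pb, br_higgs, br_meson, efficiency):
    """Expected events: N = L * sigma * B(H -> M gamma) * B(M -> h+ h-) * eff.

    lumi_fb is the integrated luminosity in fb^-1 and sigma_pb the production
    cross section in pb (1 pb = 1000 fb)."""
    return lumi_fb * (sigma_pb * 1000.0) * br_higgs * br_meson * efficiency

# sigma_pb is an assumed illustrative value; the rest are from the text.
n_sm = expected_yield(lumi_fb=35.6, sigma_pb=55.0, br_higgs=2.31e-6,
                      br_meson=0.489, efficiency=0.17)
# n_sm comes out at a few tenths of an event at the SM rate, which is why
# the search sets upper limits rather than expecting an observation.
```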
The simulated events were passed through the detailed \GEANT 4 simulation of the
ATLAS detector~\cite{GEANT4,SOFT-2010-01} and processed with the same software used
to reconstruct the data. Simulated pile-up events (additional $pp$ collisions
in the same or nearby bunch crossings) are also included and the distribution of
these is matched to the conditions observed in the data.
\section{Event selection for $\phi\gamma\to K^{+}K^{-}\gamma$ and $\rho\gamma\to\pi^+\pi^-\gamma$ final states}
\label{sec:selection}
The $\phi\gamma$ and $\rho\gamma$ exclusive final states are very
similar. Both final states consist of a pair of oppositely charged
reconstructed ID tracks. The difference is that for the former the
mass of the pair, under the charged-kaon hypothesis for the two
tracks, is consistent with the $\phi$ meson mass, while for the latter,
under the charged-pion hypothesis for the tracks, it is consistent
with the $\rho$ meson mass.
Events with a $pp$ interaction vertex reconstructed from at least two ID
tracks with $\pT > 400\,\mev$ are considered in the analysis. Within an event, the primary
vertex is defined as the reconstructed vertex with the largest
$\sum\pT^2$ of associated ID tracks.
Photons are reconstructed from clusters of energy in the
electromagnetic calorimeter. Clusters without matching ID tracks are
classified as unconverted photon candidates while clusters matched to
ID tracks consistent with the hypothesis of a photon conversion into
$\ee$ are classified as converted photon
candidates~\cite{PERF-2013-04}. Reconstructed photon candidates are
required to have $\ensuremath{p_{\textrm{T}}^{\gamma}}>35\,\GeV$, $|\ensuremath{\eta^{\gamma}}|<2.37$, excluding the
barrel/endcap calorimeter transition region $1.37<|\ensuremath{\eta^{\gamma}}|<1.52$,
and to satisfy ``tight'' photon identification
criteria~\cite{PERF-2013-04}. An isolation requirement is imposed to
further suppress contamination from jets. The sum of the transverse
momenta of all tracks within $\Delta R = \sqrt{(\Delta\phi)^{2} +
(\Delta\eta)^{2}} = 0.2$ of the photon direction, excluding those
associated with the reconstructed photon, is required to be less than
$5\%$ of $\ensuremath{p_{\textrm{T}}^{\gamma}}$. Moreover, the sum of the transverse momenta of
all calorimeter energy deposits within $\Delta R =
0.4$ of the photon direction, excluding those associated with the
reconstructed photon, is required to be less than $2.45\,\GeV +
0.022\times\ensuremath{p_{\textrm{T}}^{\gamma}}$. To mitigate the effects of multiple $pp$
interactions in the same or neighbouring bunch crossings, only ID
tracks which originate from the primary vertex are considered in the
photon track-based isolation. For the calorimeter-based isolation the
effects of the underlying event and multiple $pp$ interactions are also
accounted for on an event by event basis using an average underlying event
energy density determined from data, as described in Ref.~\cite{PERF-2013-04}.
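The two photon isolation requirements above can be written as simple predicates. A minimal Python sketch with hypothetical function and argument names (thresholds as in the text, all momenta and energies in GeV):

```python
def photon_passes_isolation(pt_gamma, track_sum_pt_dr02, calo_sum_et_dr04):
    """Track- and calorimeter-based photon isolation as described in the text.

    track_sum_pt_dr02: scalar sum of track pT within DeltaR = 0.2 of the photon,
    excluding tracks associated with the reconstructed photon.
    calo_sum_et_dr04: sum of calorimeter transverse energy within DeltaR = 0.4,
    excluding deposits associated with the reconstructed photon."""
    track_iso_ok = track_sum_pt_dr02 < 0.05 * pt_gamma
    calo_iso_ok = calo_sum_et_dr04 < 2.45 + 0.022 * pt_gamma
    return track_iso_ok and calo_iso_ok
```

For a photon at the 35 GeV trigger threshold, the cuts evaluate to 1.75 GeV (tracks) and 3.22 GeV (calorimeter).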
Charged particles satisfying the requirements detailed below are
assumed to be a $K^{\pm}$ meson in the $\phi\gamma$ analysis and a
$\pi^\pm$ meson in the $\rho\gamma$ analysis. No further particle
identification requirements are applied. In the following,
when referring to charged particles collectively the term
``charged-hadron candidates'' is used, while when referring to the
charged particles relevant to the $\phi\gamma$ and the
$\rho\gamma$ analyses the terms ``kaon candidates'' and ``pion candidates'' are
used, respectively, along with the corresponding masses. A pair of oppositely charged charged-hadron candidates is referred to collectively as $M$.
Charged-hadron candidates are reconstructed from ID tracks which are
required to have $|\eta|<2.5$, $\pt>15\,\GeV$ and to satisfy basic
quality criteria, including a requirement on the number of hits in the
silicon detectors~\cite{ATL-PHYS-PUB-2015-051}.
The $\ensuremath{\phi\to K^{+}K^{-}}$ and $\rho\to\PiPi$ decays are reconstructed from pairs of
oppositely charged charged-hadron candidates; the candidate with the
higher $\pt$, referred to as the leading charged-hadron candidate, is required to have $\pt>20\,\GeV$.
Pairs of charged-hadron candidates are selected based on their
invariant masses. Those with an invariant mass, under the charged-kaon
hypothesis, $m_{K^{+}K^{-}}$ between $1012\,\MeV$ and $1028\,\MeV$ are
selected as $\ensuremath{\phi\to K^{+}K^{-}}$ candidates. Pairs with an invariant mass,
under the charged-pion hypothesis, $m_{\pi^{+}\pi^{-}}$ between $635\,\MeV$
and $915\,\MeV$ are selected as $\rho\to\PiPi$ candidates. The
candidates where $m_{K^{+}K^{-}}$ is consistent with the $\phi$ meson
mass are rejected from the $\rho\gamma$ analysis. This requirement
rejects a negligible fraction of the signal in the $\rho\gamma$
analysis.
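The mass hypotheses enter only through the four-vectors assigned to the tracks. A minimal Python sketch of the mass-window classification (the helper names are ours and the analysis of course uses full event reconstruction):

```python
import math

def invariant_mass(track1, track2, m_hypothesis):
    """Invariant mass of two tracks, each given as (pt, eta, phi) with pt in GeV,
    assigning both the same mass hypothesis in GeV."""
    def four_vec(pt, eta, phi, m):
        px, py = pt * math.cos(phi), pt * math.sin(phi)
        pz = pt * math.sinh(eta)
        e = math.sqrt(px * px + py * py + pz * pz + m * m)
        return e, px, py, pz
    e1, x1, y1, z1 = four_vec(*track1, m_hypothesis)
    e2, x2, y2, z2 = four_vec(*track2, m_hypothesis)
    m2 = (e1 + e2) ** 2 - (x1 + x2) ** 2 - (y1 + y2) ** 2 - (z1 + z2) ** 2
    return math.sqrt(max(m2, 0.0))

M_K, M_PI = 0.493677, 0.139570  # charged kaon and pion masses in GeV

def is_phi_candidate(t1, t2):
    # 1012-1028 MeV window under the charged-kaon hypothesis
    return 1.012 < invariant_mass(t1, t2, M_K) < 1.028

def is_rho_candidate(t1, t2):
    # 635-915 MeV window under the charged-pion hypothesis,
    # vetoing pairs consistent with the phi meson mass
    return (0.635 < invariant_mass(t1, t2, M_PI) < 0.915
            and not is_phi_candidate(t1, t2))
```

Note that the heavier kaon hypothesis always yields a larger pair mass than the pion hypothesis for the same two tracks, which is why the two windows can be applied independently to the same track pair.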
Selected $M$ candidates are required to satisfy an isolation
requirement: the sum of the $\pt$ of the reconstructed ID tracks from
the primary vertex within $\Delta R= 0.2$ of the leading charged
hadron candidate (excluding the charged-hadron candidates defining
the pair) is required to be less than 10\% of the $\pt$ of the $M$
candidate.
The $M$ candidates are combined with the photon candidates to form
$M\gamma$ candidates. When multiple combinations are possible, a
situation that arises only in a few percent of the events, the
combination of the highest-\pt photon and the $M$ candidate with an
invariant mass closest to the respective meson mass is selected. The
event is retained for further analysis if the requirement
$\Delta\phi(M,\gamma)>\pi/2$ is satisfied.
The transverse momentum of the $M$ candidates is required to be
greater than a threshold that varies as a function of the invariant
mass of the three-body system, $m_{M\gamma}$.
Thresholds of $40\,\GeV$ and $47.2\,\GeV$ are imposed on $\pt^{M}$ for the regions $m_{M\gamma}<91\,\GeV$ and
$m_{M\gamma}\geq140\,\GeV$, respectively. The threshold is varied from
$40\,\GeV$ to $47.2\,\GeV$ as a linear function of $m_{M\gamma}$ in the
region $91\leq m_{M\gamma}<140\,\GeV$. This approach ensures good
sensitivity for both the Higgs and $Z$ boson searches, while keeping a
single kinematic selection.
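The mass-dependent threshold described above is a simple piecewise-linear function; a minimal sketch (Python, function name ours):

```python
def pt_threshold(m_mgamma):
    """pT(M) threshold in GeV as a function of the three-body mass m_Mgamma (GeV):
    40 GeV below 91 GeV, 47.2 GeV at or above 140 GeV, linear in between."""
    if m_mgamma < 91.0:
        return 40.0
    if m_mgamma >= 140.0:
        return 47.2
    return 40.0 + (47.2 - 40.0) * (m_mgamma - 91.0) / (140.0 - 91.0)
```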
For the $\phi(\to\ensuremath{K^{+}K^{-}})\,\gamma$ final state, the total signal efficiencies
(kinematic acceptance, trigger and reconstruction efficiencies) are
17\% and 8\% for the Higgs and $Z$~boson decays, respectively. The
corresponding efficiencies for the $\rho\gamma$ final state are
10\% and 0.4\%.
The difference in efficiency between the Higgs and $Z$ boson decays
arises primarily from the softer $\pt$ distributions of the photon
and charged-hadron candidates associated with the $Z\to M\,\gamma$
production, as can be seen for the $\phi\gamma$ case by comparing
Figures~\ref{fig:pt_ph_kaons_H} and \ref{fig:pt_ph_kaons_Z}.
The overall lower efficiency in the $\rho\gamma$ final state is
a result of the lower efficiency of the $m_{M}$ requirement due to the
large $\rho$-meson natural width and the different kinematics of the $\rho$
decay products%
, as presented in Figures~\ref{fig:pt_ph_pions_H_rho} and~\ref{fig:pt_ph_pions_Z_rho}.
Meson helicity effects have a relatively small impact for the
$\phi\to\ensuremath{K^{+}K^{-}}$ decays, where the kaons carry very little momentum in the
$\phi$ rest frame. Specifically, the expected Higgs ($Z$) boson signal
yield in the signal region is 6\% larger (9\% smaller) than in the hypothetical scenario where the meson is unpolarised. For the
$\rho\to\PiPi$ decays the yields are increased by 33\% (decreased
by 83\%).
\begin{figure}[!h]
\centering
\subfigure[\label{fig:pt_ph_kaons_H}]{\includegraphics[width=0.40\textwidth]{Signal_Pt_ph_kaons_H_phi}}
\subfigure[\label{fig:pt_ph_kaons_Z}]{\includegraphics[width=0.40\textwidth]{Signal_Pt_ph_kaons_Z_phi}}
\vspace{-0.4cm}
\subfigure[\label{fig:pt_ph_pions_H_rho}]{\includegraphics[width=0.40\textwidth]{Signal_Pt_ph_pions_H_rho}}
\subfigure[\label{fig:pt_ph_pions_Z_rho}]{\includegraphics[width=0.40\textwidth]{Signal_Pt_ph_pions_Z_rho}}
\vspace{-0.4cm}
\caption{Generator-level transverse momentum ($\pt$) distributions of the photon
and of the charged-hadrons, ordered in $\pt$, for \subref{fig:pt_ph_kaons_H}
$H\to\phi\gamma$, \subref{fig:pt_ph_kaons_Z} $Z\to\phi\gamma$,
\subref{fig:pt_ph_pions_H_rho} $H\to\rho\gamma$ and
\subref{fig:pt_ph_pions_Z_rho} $Z\to\rho\gamma$ simulated signal events, respectively. The hatched histograms denote the full event selection while the
dashed histograms show the events at generator level that fall within the analysis
geometric acceptance (both charged-hadrons are required to have $|\eta|<2.5$ while the
photon is required to have $|\eta|<2.37$, excluding the region
$1.37<|\eta|<1.52$). The dashed histograms are normalised to unity, and the
relative difference between the two sets of distributions corresponds to the
effects of reconstruction, trigger, and event selection efficiencies. The
leading charged-hadron candidate $h= K, \pi$ is denoted by $p_\textrm{T}^{h1}$
and the sub-leading candidate by $p_\textrm{T}^{h2}$.\label{fig:pt_ph_kaons}}
\end{figure}
The average $m_{M\gamma}$ resolution is 1.8\% for both the Higgs and $Z$~boson
decays. The Higgs boson signal $m_{M\gamma}$ distribution is modelled with a sum of two Gaussian
probability density functions (pdf) with a common mean value, while
the $Z$ boson signal $m_{M\gamma}$ distribution is modelled with a double Voigtian pdf (a convolution of
relativistic Breit--Wigner and Gaussian pdfs) corrected with a mass-dependent
efficiency factor.
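The Voigtian ingredient of the $Z$ boson signal model (a Breit--Wigner convolved with a Gaussian) is available in closed form through the Faddeeva function; a minimal stand-in using SciPy is sketched below. Note that the analysis uses a relativistic Breit--Wigner, two Voigtian components, and a mass-dependent efficiency correction, whereas `scipy.special.voigt_profile` implements the single non-relativistic (Lorentzian) case, so both the function and the parameter values here are purely illustrative:

```python
import numpy as np
from scipy.special import voigt_profile

def voigtian(m, mean, sigma, gamma):
    """Voigt profile in m: a Gaussian of width `sigma` convolved with
    a (non-relativistic) Lorentzian of half-width `gamma`, centred at
    `mean`.  Normalised to unit area over the real line."""
    return voigt_profile(np.asarray(m) - mean, sigma, gamma)
```

With, say, `sigma` set to the quoted 1.8\% relative resolution and `gamma` to half the $Z$ width, this gives a qualitatively similar line shape to the one used in the fit.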
The $m_{\ensuremath{K^{+}K^{-}}}$ distribution for the selected $\phi\gamma$ candidates,
with no $m_{\ensuremath{K^{+}K^{-}}}$ requirement applied, is shown in
Figure~\ref{fig:MassPhiFit} exhibiting a visible peak at the $\phi$
meson mass. The $\phi$ peak is fitted with a Voigtian pdf, while the
background is modelled with a function typically used to
describe kinematic thresholds~\cite{Lees:2013uxa}. The experimental
resolution in $m_{K^{+}K^{-}}$ is approximately $4\,\MeV$, comparable to the
$4.3\,\MeV$~\cite{PDG} width of the $\phi$ meson.
In Figure~\ref{fig:MassRhoFit}, the corresponding distribution for the
selected $\rho\gamma$ candidates is shown, where the $\rho$ meson
can also be observed. The $\rho$ peak is fitted with a single Breit--Wigner pdf,
modified by a mass-dependent width to match the distribution obtained from \PYTHIA~\cite{Sjostrand:2007gs}. The
background is fitted with the sum of a combinatorial background, estimated from events containing a
same-sign di-track pair, and other backgrounds determined in the fit
using a linear combination of Chebychev polynomials up to the second order.
Figure~\ref{fig:MassFit} only qualitatively illustrates
the meson selection in the studied final state, and is not used
any further in this analysis.
\begin{figure}[h]
\centering
\subfigure[\label{fig:MassPhiFit}]{\includegraphics[width=0.45\linewidth]{PhiMassFit}}
\subfigure[\label{fig:MassRhoFit}]{\includegraphics[width=0.45\linewidth]{RhoMassFit}}
\vspace{0.2cm}
\caption{The \subref{fig:MassPhiFit} $m_{\ensuremath{K^{+}K^{-}}}$ and \subref{fig:MassRhoFit} $m_{\PiPi}$
distributions for $\phi\gamma$ and $\rho\gamma$
candidates, respectively. The candidates fulfil the complete event selection
(see text), apart from requirements on $m_{\ensuremath{K^{+}K^{-}}}$ or $m_{\PiPi}$. These
requirements are marked on the figures with dashed lines topped with arrows
indicating the included area. The signal and background models are discussed in
the text.
\label{fig:MassFit}}
\end{figure}
\section{Background}
\label{sec:background}
For both the $\phi\gamma$ and $\rho\gamma$ final states, the main
sources of background in the searches are events involving inclusive
photon + jet or multijet processes where an $M$ candidate is
reconstructed from ID tracks originating from a jet.
From the selection criteria discussed earlier, the shape of this
background exhibits a turn-on structure in the $m_{M\gamma}$
distribution around $100\,\GeV$, in the region of the $Z$ boson signal,
and a smoothly falling background in the region of the Higgs
boson signal. Given the complex shape of this background,
these processes are modelled in an inclusive fashion with a
non-parametric data-driven approach using templates to describe the
relevant distributions. The background normalisation and
shape are simultaneously
extracted from a fit to the data. A similar procedure was used in the
earlier search for Higgs and $Z$ boson decays into
$\phi\gamma$~\cite{HIGG-2016-05} and the search for Higgs and $Z$
boson decays into $\Jpsi\,\gamma$ and $\Upsilon(nS)\,\gamma$ described
in Ref.~\cite{HIGG-2014-03}.
\subsection{Background modelling}
\label{sec:backgroundmodel}
The background modelling procedure for each final state exploits a
sample of approximately 54\,000 $\ensuremath{K^{+}K^{-}}\gamma$ and 220\,000 $\PiPi\gamma$
candidate events in data. These events pass all the
kinematic selection requirements described previously, except that the
photon and $M$ candidates are not required to satisfy the nominal
isolation requirements, and a looser $\pt^{M}>35\,\GeV$ requirement is
imposed. This selection defines the background-dominated ``generation
region'' (GR).
From these events, pdfs are constructed to describe the distributions
of the relevant kinematic and isolation variables and their most
important correlations. In this way, in the absence of appropriate simulations,
pseudocandidate events are generated, from which the
background shape in the discriminating variable is derived.
This ensemble of pseudocandidate events is produced by randomly
sampling the distributions of the relevant kinematic and isolation
variables, which are estimated from the data in the GR. Each
pseudocandidate event is described by $M$ and $\gamma$
four-momentum vectors and the associated $M$ and photon isolation
variables. The $M$ four-momentum vector is constructed from sampled $\eta_{M}$,
$\phi_{M}$, $m_{M}$ and $\pt^{M}$ values. For the $\gamma$
four-momentum vector, the $\eta_{\gamma}$ and $\phi_{\gamma}$ are determined from the
sampled $\Delta\phi(M,\gamma)$ and $\Delta\eta(M,\gamma)$ values whereas
$\ensuremath{p_{\textrm{T}}^{\gamma}}$ is sampled directly.
The most important correlations among these kinematic
and isolation variables in background events are retained in the
generation of the pseudocandidates through the following
sampling scheme, where the steps are performed sequentially:
\begin{enumerate}[i)]
\item Values for $\eta_{M}$, $\phi_{M}$, $m_{M}$ and $\pt^{M}$ are drawn randomly and independently according to the corresponding pdfs.
\item The distribution of $\ensuremath{p_{\textrm{T}}^{\gamma}}$ values is parameterised in bins of $\pt^{M}$, and values are drawn from the corresponding bins given the previously generated value of $\pt^{M}$.
The $M$ isolation variable is parameterised in bins of
$\pt^{M}(\ensuremath{p_{\textrm{T}}^{\gamma}})$\ for the $\phi\gamma$ ($\rho\gamma$) model and sampled accordingly. The difference between the two approaches for the $\phi\gamma$\ and
$\rho\gamma$ accounts for the difference in the observed correlations
arising in the different datasets.
\item The distributions of the values for $\Delta\eta(M,\gamma)$, photon calorimeter isolation, normalised to $\ensuremath{p_{\textrm{T}}^{\gamma}}$, and their correlations are parameterised in a
two-dimensional distribution. For the $\phi\gamma$ analysis,
several distributions are produced corresponding to
the $\pt^{M}$ bins used earlier to describe the $\ensuremath{p_{\textrm{T}}^{\gamma}}$ and $M$ isolation variables, whereas for the $\rho\gamma$ final state the
two-dimensional distribution is produced inclusively for all $\pt^{M}$ values.
\item The photon track isolation, normalised to $\ensuremath{p_{\textrm{T}}^{\gamma}}$, and the $\Delta\phi(M,\gamma)$ variables are sampled from pdfs generated in bins of relative photon calorimeter
isolation and $\Delta\eta(M,\gamma)$, respectively, using the values drawn in step iii).
\end{enumerate}
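In the absence of the actual analysis code, the first part of this sequential scheme (the independent draws of step i) and the conditional sampling of step ii)) can be sketched as follows; all variable names, binnings, and pdf contents are hypothetical toy inputs standing in for the distributions estimated in the GR:

```python
import numpy as np

rng = np.random.default_rng(12345)

def draw(counts, edges):
    """Draw one value from a binned pdf: choose a bin with probability
    proportional to its content, then draw uniformly within it."""
    p = np.asarray(counts, float)
    p /= p.sum()
    i = rng.choice(len(p), p=p)
    return rng.uniform(edges[i], edges[i + 1])

def make_pseudocandidate(pdfs):
    """Sketch of steps i)-ii): independent draws for the meson
    kinematics, then pt_gamma drawn from the pdf of the selected pt_M
    bin, which preserves the pt_M / pt_gamma correlation."""
    ev = {v: draw(*pdfs[v]) for v in ("eta_M", "phi_M", "m_M", "pt_M")}
    j = min(int(np.digitize(ev["pt_M"], pdfs["pt_M_edges"])) - 1,
            len(pdfs["pt_gamma_in_ptm_bin"]) - 1)
    ev["pt_gamma"] = draw(*pdfs["pt_gamma_in_ptm_bin"][j])
    return ev

# Toy, illustrative binned pdfs (counts, bin edges).
pdfs = {
    "eta_M": ([1, 2, 2, 1], [-2.5, -1.25, 0.0, 1.25, 2.5]),
    "phi_M": ([1, 1, 1, 1], [-np.pi, -np.pi / 2, 0.0, np.pi / 2, np.pi]),
    "m_M":   ([1, 4, 1],    [1.012, 1.016, 1.024, 1.028]),
    "pt_M":  ([5, 3, 1],    [35.0, 45.0, 60.0, 100.0]),
    "pt_M_edges": [35.0, 45.0, 60.0],
    "pt_gamma_in_ptm_bin": [          # one pt_gamma pdf per pt_M bin
        ([4, 1], [35.0, 45.0, 60.0]),
        ([2, 3], [35.0, 60.0, 100.0]),
        ([1, 4], [60.0, 100.0, 150.0]),
    ],
}
```

Steps iii) and iv) extend the same idea to two-dimensional pdfs and to pdfs binned in previously drawn variables.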
The nominal selection requirements are imposed on the ensemble, and the
surviving pseudocandidates are used to construct templates for the
$m_{M\gamma}$ distribution, which are then smoothed using Gaussian
kernel density estimation~\cite{Cranmer:2000du}.
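The smoothing step can be illustrated with a fixed-bandwidth Gaussian kernel density estimate; the analysis itself uses the adaptive kernel estimation of Ref.~\cite{Cranmer:2000du}, so the SciPy one-liner and the stand-in mass distribution below are only a minimal sketch:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)

# Stand-in for the m_{M gamma} values of the surviving
# pseudocandidates; the shape is purely illustrative.
masses = 90.0 + rng.gamma(shape=9.0, scale=14.0, size=5000)

kde = gaussian_kde(masses)            # fixed-bandwidth Gaussian KDE
grid = np.linspace(90.0, 300.0, 211)  # 1 GeV spacing
template = kde(grid)                  # smoothed background template
```

The resulting `template` is a smooth, non-negative estimate of the $m_{M\gamma}$ shape that can be evaluated at any mass point, which is what the fit requires.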
It was verified through signal
injection tests that the shape of the background model is not affected by potential signal contamination.
\subsection{Background validation}
\label{sec:backgroundvalid}
To validate the background model, the $m_{M\gamma}$
distributions in several validation regions, defined by kinematic and
isolation requirements looser than the nominal signal requirements,
are used to compare the prediction of the background model with the
data. Three validation regions are defined, each based on the GR
selection and adding one of the following: the $\pt^{M}$
requirement (VR1), the photon isolation requirements (VR2), or the
meson isolation requirement (VR3).
The $m_{M\gamma}$ distributions in these validation regions are shown
in Figure~\ref{fig:BkgdVal}.
The background model is found to describe the data in all
regions within uncertainties (see Section~\ref{sec:systematic}).
Potential background contributions from $Z\to\ell\ell\gamma$
decays and inclusive Higgs decays were studied and found to
be negligible for the selection requirements and
dataset used in this analysis.
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=0.32\textwidth]{h_Higgs_Mass_ZVR1_INC_Ratio}
\includegraphics[width=0.32\textwidth]{h_Higgs_Mass_ZVR2_INC_Ratio}
\includegraphics[width=0.32\textwidth]{h_Higgs_Mass_ZVR3_INC_Ratio}
\includegraphics[width=0.32\textwidth]{HRhoGamma_h_Higgs_Mass_VR1_INC_Ratio}
\includegraphics[width=0.32\textwidth]{HRhoGamma_h_Higgs_Mass_VR2_INC_Ratio}
\includegraphics[width=0.32\textwidth]{HRhoGamma_h_Higgs_Mass_VR3_INC_Ratio}
\end{center}
\vspace{-0.6cm}
\caption{The distribution of $m_{\ensuremath{K^{+}K^{-}}\gamma}$ (top) and $m_{\PiPi\gamma}$ (bottom) in data compared to the prediction of the background model for the VR1, VR2 and VR3 validation regions. The background model is normalised to the observed number of events within the region shown. The uncertainty band corresponds to the uncertainty envelope
derived from variations in the background modelling procedure. The ratio of the data to the background model is shown below the distributions.\label{fig:BkgdVal}
}
\end{figure}
A further validation of the background modelling is performed using
events within a sideband of the $M$ mass distribution. For the
$\phi\gamma$ analysis the sideband region is defined by $1.035\,\gev <
\mKK < 1.051\,\gev$. For the $\rho\gamma$ analysis the sideband region
is defined by $950\,\MeV < m_{\PiPi} < 1050\,\MeV$. All other selection
requirements and modelling procedures are identical to those used in the
signal region. Figures \ref{fig:BkgdSidebandsPhi} and
\ref{fig:BkgdSidebandsRho} show the $m_{M\gamma}$ distributions
for the sideband region. The background model is found to describe the
data within the systematic uncertainties described in
Section~\ref{sec:systematic}.
\begin{figure}[htb]
\begin{center}
\subfigure[\label{fig:BkgdSidebandsPhi}]{\includegraphics[width=0.45\textwidth]{h_Higgs_Mass_SB_ZTR2_INC_Ratio}}
\subfigure[\label{fig:BkgdSidebandsRho}]{\includegraphics[width=0.45\textwidth]{HRhoGamma_h_Higgs_Mass_SR_INC_RatioSideband}}
\end{center}
\vspace{-0.6cm}
\caption{The distribution of $m_{M\gamma}$ for the
\subref{fig:BkgdSidebandsPhi} $\phi\gamma$\ and \subref{fig:BkgdSidebandsRho}
$\rho\gamma$\ selections in the sideband control region. The background
model is normalised to the observed number of events within the region shown.
The uncertainty band corresponds to the uncertainty envelope derived from
variations in the background modelling procedure. The ratio of the data to the background model is shown below the distributions.\label{fig:BkgdSidebands}}
\end{figure}
\FloatBarrier
\section{Systematic uncertainties}
\label{sec:systematic}
Trigger and identification efficiencies for photons are determined
from samples enriched with $Z\to\ee$ events in
data~\cite{PERF-2013-04,TRIG-2016-01}. The systematic
uncertainty in the expected signal yield associated with the trigger
efficiency is estimated to be 2.0\%. The photon identification
and isolation uncertainties, for both the converted and unconverted photons,
are estimated to be 2.4\% and 2.6\% for the Higgs and $Z$~boson
signals, respectively. An uncertainty of 6.0\% per $M$ candidate is
assigned to the track reconstruction efficiency and accounts for
effects associated with the modelling of ID material and track reconstruction
algorithms if a nearby charged
particle is present. This uncertainty is derived conservatively by
assuming a 3\% uncertainty in the reconstruction efficiency of each
track~\cite{PERF-2015-08}, and further assuming the uncertainty to
be fully correlated between the two tracks of the $M$ candidate.
The systematic uncertainties in the Higgs production cross section are obtained
from Ref.~\cite{deFlorian:2016spz} as described in
Section~\ref{sec:datamc}. The $Z$ boson production cross-section uncertainty is
taken from the measurement in Ref.~\cite{STDM-2015-03}.
The photon energy scale uncertainty, determined from $Z\to\ee$ events
and validated using $Z\to\ell\ell \gamma$
events~\cite{PERF-2013-05}, is applied to the
simulated signal samples as a function of $\ensuremath{\eta^{\gamma}}$ and $\ensuremath{p_{\textrm{T}}^{\gamma}}$.
The impact of the photon energy scale uncertainty on the Higgs and $Z$
boson mass distributions does not exceed 0.2\%. The uncertainty associated with the
photon energy resolution is found to have a negligible
impact. Similarly, the systematic uncertainty associated with the ID
track momentum measurement is found to be negligible. The systematic
uncertainties in the expected signal yields
are summarised in Table~\ref{tab:systematics}.
The shape of the background model is allowed to vary around the
nominal shape, and the parameters controlling these systematic variations
are treated as nuisance parameters in the maximum-likelihood fit used to
extract the signal and background yields.
Three such shape variations are implemented: a variation of the
$\ensuremath{p_{\textrm{T}}^{\gamma}}$ spectrum, linear distortions of the shape of the
$\Delta\phi({M},\gamma)$ distribution, and a
global tilt of the three-body mass.
The first two variations alter the kinematics
of the pseudocandidates that are propagated to the
three-body mass.
\begin{table}[h]
\caption{Summary of the relative systematic uncertainties in the expected signal yields. The magnitudes of the effects are the same for both the $\phi\gamma$ and $\rho\gamma$ selections.\label{tab:systematics}}
\centering
\begin{tabular}{lc}
\hline
Source of systematic uncertainty & Yield uncertainty \\\hline
Total $H$ cross section & $6.3\%$ \\
Total $Z$ cross section & $2.9\%$ \\
Integrated luminosity & $3.4\%$ \\
Photon ID efficiency & $2.5\%$\\
Trigger efficiency & $2.0\%$ \\
Tracking efficiency & $6.0\%$ \\ \hline
\end{tabular}
\end{table}
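The table lists the sources individually and does not quote a combined value; in the fit they are combined properly through nuisance parameters. Purely for orientation, and under the added assumption that the sources are uncorrelated, a sum in quadrature gives a rough total per signal:

```python
import math

# Relative yield uncertainties from the table, in percent.  The Higgs
# (Z) total uses the corresponding cross-section entry; treating the
# sources as uncorrelated is an assumption made here, not in the paper.
common = [3.4, 2.5, 2.0, 6.0]   # luminosity, photon ID, trigger, tracking
total_h = math.sqrt(sum(u ** 2 for u in common + [6.3]))  # ~9.9%
total_z = math.sqrt(sum(u ** 2 for u in common + [2.9]))  # ~8.1%
```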
\section{Results}
\label{sec:results}
The data are compared to background and signal predictions using an
unbinned maximum-likelihood fit to the $m_{M\gamma}$ distribution.
The parameters of interest are the Higgs and $Z$ boson
signal normalisations. Systematic uncertainties are modelled using
additional nuisance parameters in the fit; in particular the
background normalisation is a free parameter in the model.
The fit uses the selected events with $m_{{M}\gamma}<300\,\gev$.
The expected and observed numbers of background events within the
$m_{M\gamma}$ ranges relevant to the Higgs and $Z$ boson signals are
shown in Table~\ref{tab:events}. The observed yields are consistent
with the number of events expected from the background-only prediction
within the systematic and statistical uncertainties.
The results of the background-only fits for the $\phi\gamma$ and
$\rho\gamma$ analyses are shown in Figures~\ref{fig:FitPhi} and
\ref{fig:FitRho}, respectively.
\begin{table}[h]
\caption{The number of observed events and the mean expected
background, estimated from the maximum-likelihood fit and shown with
the associated total uncertainty, for the $m_{M\gamma}$ ranges of
interest. The expected Higgs and $Z$~boson signal yields, along with
the total systematic uncertainty, for $\phi\gamma$ and $\rho\gamma$,
estimated using simulations, are also shown in parentheses.
\label{tab:events}}
\centering
\begin{tabular}{c|r|rr|rr|c|c}
\hline
&\multicolumn{5}{c}{Observed yields (Mean expected background)}& \multicolumn{2}{|c}{Expected signal yields} \\ \cline{2-8}
&\multicolumn{5}{c|}{Mass range [GeV]} & $H$ & $Z$\\ \cline{2-6}
& All & \multicolumn{2}{c|}{$81$--$101$} & \multicolumn{2}{c|}{$120$--$130$} & $[{\cal B} = 10^{-4}]$ & $[{\cal B} = 10^{-6}]$ \\ \hline
$\phi\gamma$ & 12051 & 3364 & (3500 $\pm$ 30) & 1076 & (1038 $\pm$ \hphantom{0}9) & 15.6 $\pm$ 1.5 & 83\hphantom{.} $\pm$ 7\hphantom{.0} \\
$\rho\gamma$ & 58702 & 12583 & (12660 $\pm$ 60) & 5473 & (5450 $\pm$ 30) & 17.0 $\pm$ 1.7 & 7.5 $\pm$ 0.6 \\
\hline
\end{tabular}
\end{table}
\begin{figure}[h]
\centering
\subfigure[\label{fig:FitPhi}]{\includegraphics[width=0.45\linewidth]{paperPlotPhi_Fit_BGONLY_ZoomSignal}}
\subfigure[\label{fig:FitRho}]{\includegraphics[width=0.45\linewidth]{paperPlotRho_Fit_BGONLY_ZoomSignal}}
\vspace{-0.4cm}
\caption{The \subref{fig:FitPhi} $m_{\ensuremath{K^{+}K^{-}}\gamma}$ and \subref{fig:FitRho} $m_{\PiPi\gamma}$ distributions of the
selected $\phi\gamma$ and $\rho\gamma$ candidates, respectively, along with
the results of the maximum-likelihood fits with a background-only model. The
Higgs and $Z$~boson contributions for the branching fraction values
corresponding to the observed 95\% CL upper limits are also shown. Below the figures the ratio of the data to the background-only fit is shown.
\label{fig:Fit}}
\end{figure}
Upper limits are set on the
branching fractions for the Higgs and $Z$~boson decays into $M\,\gamma$
using the CL$_{\text{s}}$ modified frequentist formalism~\cite{cls} with the
profile-likelihood-ratio test statistic~\cite{Cowan:2010js}. For the
upper limits on the branching fractions, the SM production
cross section is assumed for the Higgs boson~\cite{deFlorian:2016spz},
while the ATLAS measurement of the inclusive $Z$~boson cross section
is used for the $Z$~boson signal~\cite{STDM-2015-03}, as discussed in
Section~\ref{sec:datamc}. The results are summarised in
Table~\ref{fig:Limit}.
The observed 95\% CL upper limits on the branching
fractions for $H\to\phi\gamma$ and $Z\to\phi\,\gamma$ decays are
$208$ and $87$ times the expected SM branching fractions,
respectively. The corresponding values for the $\rho\gamma$ decays
are $52$ and $597$ times the expected SM branching fractions,
respectively.
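Since the observed limits are quoted both in absolute terms and as multiples of the SM expectation, the SM branching fractions assumed in the comparison can be recovered by simple division, to the precision of the rounded numbers quoted above:

```python
# (observed 95% CL upper limit, quoted multiple of the SM prediction),
# both taken from the text.
limits = {
    "H->phi gamma": (4.8e-4, 208),
    "Z->phi gamma": (0.9e-6, 87),
    "H->rho gamma": (8.8e-4, 52),
    "Z->rho gamma": (25e-6, 597),
}
# Implied SM branching fraction for each channel.
implied_sm_br = {k: lim / mult for k, (lim, mult) in limits.items()}
```

For instance, the $H\to\phi\gamma$ numbers imply an assumed SM branching fraction of roughly $2.3\times10^{-6}$.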
Upper limits at 95\% CL on the production cross section
times branching fraction are also estimated for the Higgs boson
decays, yielding \SI{25.3}{\femto\barn} for the $H\to\phi\gamma$ decay, and
\SI{45.5}{\femto\barn} for the $H\to\rho\gamma$ decay.
The systematic uncertainties described in Section~\ref{sec:systematic}
result in a 14\% deterioration of the post-fit expected 95\% CL upper
limit on the branching fraction in the $H\to\phi\gamma$ and $Z\to\phi\gamma$ analyses, compared to the result including only statistical uncertainties.
For the $\rho\gamma$ analysis the systematic uncertainties result in a 2.3\%
increase in the post-fit expected upper limit for the Higgs boson decay, while
for the $Z$ boson decay the upper limit deteriorates by 29\%.
\begin{table}[h!]
\caption{Expected and observed branching fraction upper limits at 95\%
CL for the $\phi\gamma$ and $\rho\gamma$ analyses. The $\pm
1\sigma$ intervals of the expected limits are also
given.\label{fig:Limit}} \centering
\begin{tabular}{c|c|c}
\hline
Branching Fraction Limit (95\% CL) & Expected & Observed \\ \hline
${\cal B}\left(H\to\phi\gamma\right)[\tabscript{10}{-4}{}]$ & $4.2_{-1.2}^{+1.8}$ & 4.8 \\
${\cal B}\left(Z\to\phi\gamma\right)[\tabscript{10}{-6}{}]$ & $1.3_{-0.4}^{+0.6}$ & 0.9 \\
${\cal B}\left(H\to\rho\gamma\right)[\tabscript{10}{-4}{}]$ & $8.4_{-2.4}^{+4.1}$ & 8.8 \\
${\cal B}\left(Z\to\rho\gamma\right)[\tabscript{10}{-6}{}]$ & $33_{-9}^{+13}$ & 25 \\
\hline
\end{tabular}
\end{table}
\FloatBarrier
\section{Summary}
\label{sec:summary}
A search for the decays of Higgs and $Z$ bosons into $\phi\gamma$ and
$\rho\gamma$ has been performed with $\rts=13\,\TeV$ $pp$ collision data
samples collected with the ATLAS detector at the LHC corresponding to
integrated luminosities of up to $35.6\,\ifb$. The $\phi$ and $\rho$ mesons are
reconstructed via their dominant decays into the $\ensuremath{K^{+}K^{-}}$ and $\PiPi$ final
states, respectively. The background model is derived using a fully data
driven approach and validated in a number of control regions including
sidebands in the $\ensuremath{K^{+}K^{-}}$ and $\PiPi$ mass distributions.
No significant excess of events above the background
expectations is observed, as expected from the SM. The obtained 95\% CL upper limits are ${\cal
B}\left(H\to\phi\gamma\right)<4.8\times10^{-4}$, ${\cal
B}\left(Z\to\phi\gamma\right) < 0.9\times10^{-6}$,
${\cal B}\left(H\to\rho\gamma\right)<8.8\times10^{-4}$ and ${\cal
B}\left(Z\to\rho\gamma\right) < 25\times10^{-6}$.
\section{Introduction}\label{intro}
Let $k$ be a global function field of characteristic a prime number $\rho$. Let $\mathbb{F}_q$, $q:=\rho^n$, be the field of constants of $k$. Let $\infty$ be a place of $k$ and let $\mathcal{O}_k$ be the Dedekind ring of functions $f\in k$ regular outside $\infty$. We denote by $k_\infty$ the completion of $k$ at $\infty$. Let us also fix $K\subset k_\infty$ a finite abelian extension of $k$, and write $\mathrm{G}$ for the Galois group of $K/k$, $\mathrm{G}:=\mathrm{Gal}(K/k)$. The inclusion $K\subset k_\infty$ simply means that the place $\infty$ splits completely in $K$. Let $\mathcal{O}_K$ and $\mathcal{O}_K^\times$ be respectively the integral closure of $\mathcal{O}_k$ in $K$ and the group of units of $\mathcal{O}_K$. One may use Stark units to define a subgroup $\mathcal{E}_K$ of $\mathcal{O}_K^\times$ such that the factor group $\mathcal{O}_K^\times/\mathcal{E}_K$ is finite (See the definition of $\mathcal{E}_K$ in the next section). Let $H\subset k_\infty$ be the maximal abelian unramified extension of $k$ contained in $k_\infty$. In \cite{ouk91} it is proved that in case $K\subset H$ we have
\begin{equation}\label{indice1}
[\mathcal{O}_K^\times:\mathcal{E}_K]=\frac{h(\mathcal{O}_K)}{[H:K]},
\end{equation}
where $h(\mathcal{O}_K)$ is the order of the ideal class group of $\mathcal{O}_K$. In the general case, one may obtain (a complicated formula for) the quotient $[\mathcal{O}_K^\times:\mathcal{E}_K]/h(\mathcal{O}_K)$ in terms of numerical invariants of $K/k$, exactly as the first author did in \cite[formula (3.3)]{ouk09}. Let $\mathrm{Cl}(\mathcal{O}_K)$ be the ideal class group of $\mathcal{O}_K$. Recently, in \cite{Vig}, the second author used his notion of index-module to prove the following remarkable result. Let $\mathrm{g}:=[K:k]$, then, for every nontrivial irreducible rational character $\Psi$ of $\mathrm{G}$ we have
\begin{equation}\label{indice2}
\bigl[e_{\Psi}\bigl(\mathbb{Z}[\mathrm{g}^{-1}]\otimes_{\mathbb{Z}}\mathcal{O}_K^\times\bigr) :e_{\Psi}\bigl(\mathbb{Z}[\mathrm{g}^{-1}]\otimes_{\mathbb{Z}}\mathcal{E}_K\bigr)\bigr]=\#[e_{\Psi}\bigl(\mathbb{Z}[\mathrm{g}^{-1}]\otimes_{\mathbb{Z}}\mathrm{Cl}(\mathcal{O}_K)\bigr)],
\end{equation}
where $e_\Psi$ is the idempotent of $\mathbb{Z}[\mathrm{g}^{-1}][\mathrm{G}]$ associated to $\Psi$. By $\# X$ we mean the cardinality of the finite set $X$. The formula (\ref{indice2}) may be considered as a weak form of the Gras conjecture for $\mathcal{E}_K$.\par
In this paper we use Euler systems to prove the Gras conjecture for $\mathcal{E}_K$, for every prime number $p$ such that $p\nmid\rho[K:k]$ and every irreducible $\mathbb{Z}_p$-character of $\mathrm{G}$ not in the set $\Xi_p$ defined as follows. If $p\neq\rho$ is a prime number then we denote by $\mu_p$ the group of $p$-th roots of unity. If $p\,\vert\,[H:k]$ and $\mu_p\subset K$ then we denote by $\omega$ the Teichm\"uller character giving the action of $\mathrm{G}$ on $\mu_p$. Let $f$ be the order of $q$ in $(\mathbb{Z}/p\,\mathbb{Z})^\times$. Then $\mathbb{F}_q(\mu_p)=\mathbb{F}_{q^f}$ and the order of $\omega$ is $f$. We define
\begin{equation*}
\Xi_p:=
\begin{cases}
\emptyset & \mathrm{if}\ \mu_p\not\subset K\ \mathrm{or}\ p\nmid[H:k],\\
\{\omega^i,\ i\in(\mathbb{Z}/f\mathbb{Z})^\times\} & \mathrm{if}\ p\,\vert\,[H:k]\ \mathrm{and}\ \mu_p\subset K.
\end{cases}
\end{equation*}
\begin{theorem}\label{tresgras} Let $p$ be a prime number such that $p\nmid \rho[K:k]$. Let $\chi$ be a nontrivial irreducible $\mathbb{Z}_p$-character of $\mathrm{G}$ such that $\chi\not\in\Xi_p$. Then we have
\begin{equation}\label{gras}
\bigl[e_{\chi}\bigl(\mathbb{Z}_p\otimes_{\mathbb{Z}}\mathcal{O}_K^\times\bigr) :e_{\chi}\bigl(\mathbb{Z}_p\otimes_{\mathbb{Z}}\mathcal{E}_K\bigr)\bigr]=\#[e_{\chi}\bigl(\mathbb{Z}_p\otimes_{\mathbb{Z}}\mathrm{Cl}(\mathcal{O}_K)\bigr)],
\end{equation}
where $e_\chi$ is the idempotent of $\mathbb{Z}_p[\mathrm{G}]$ associated to $\chi$.
\end{theorem}
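For reference, the idempotent attached to $\chi$ admits the usual explicit description, a standard fact not spelled out in the text:

```latex
\[
  e_\chi \;=\; \frac{1}{[K:k]}\sum_{\sigma\in\mathrm{G}}\chi(\sigma^{-1})\,\sigma ,
\]
```

which lies in $\mathbb{Z}_p[\mathrm{G}]$ because $p\nmid[K:k]$ and $\chi$, being a sum of Galois-conjugate $\overline{\mathbb{Q}}_p$-valued characters, takes values in $\mathbb{Z}_p$.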
The proof of this theorem is given at the end of the paper. The formula (\ref{gras}) was first proved by Keqin Feng and Fei Xu in \cite{Feng96} in the case where $k=\mathbb{F}_q(T)$ is a rational function field in one variable, $\infty$ is the place associated to the unique pole of $(1/T)$, $K=H_{\mathfrak{m}}$ for some ideal $\mathfrak{m}$ of $\mathcal{O}_k=\mathbb{F}_q[T]$ (the field $H_{\mathfrak{m}}$ is defined below) and $p\nmid q(q-1)[K:k]$. To obtain their result, they also used the method of Euler systems.
\section{The group $\mathcal{E}_K$}\label{the group}
For each nonzero ideal $\mathfrak{m}$ of $\mathcal{O}_k$, we denote by $H_{\mathfrak{m}}$ the maximal abelian extension of $k$ contained in $k_\infty$, such that the conductor of $H_{\mathfrak{m}}/k$ divides $\mathfrak{m}$. The function field version of the abelian conjectures of Stark, proved by P.\,Deligne in \cite{Tate84} by using \'etale cohomology or by D.\,Hayes in \cite[Theorem 1.1]{Hay85} by using Drinfel'd modules, asserts, for any $\mathfrak{m}\not\in\{(0),\mathcal{O}_k\}$, the existence of an element $\varepsilon=\varepsilon_{\mathfrak{m}}\in H_{\mathfrak{m}}$, unique up to a root of unity such that
\begin{itemize}
\item[(i)] If we set $w_\infty=q^{d_\infty}-1$, where $d_\infty$ is the degree of $\infty$,
then the extension $H_{\mathfrak{m}}(\varepsilon^{1/w_\infty})/k$ is abelian.
\item[(ii)] If $\mathfrak{m}$ is divisible by two prime ideals then
$\varepsilon$ is a unit of $\mathcal{O}_{H_{\mathfrak{m}}}$. If $\mathfrak{m}=\mathfrak{q}^e$, where $\mathfrak{q}$ is a prime ideal then
\begin{equation*}
\varepsilon\mathcal{O}_{H_{\mathfrak{m}}}=(\mathfrak{q})_{\mathfrak{m}}^{\frac{w_\infty}{w_k}}\end{equation*}
where $w_k:=q-1$ and $(\mathfrak{q})_{\mathfrak{m}}$ is the product of the prime ideals of $\mathcal{O}_{H_{\mathfrak{m}}}$ which divide $\mathfrak{q}$.
\item[(iii)] We have
\begin{equation}\label{kronecker}
L_{\mathfrak{m}}(0,\chi)=
\frac{1}{w_\infty}\sum_{\sigma\in\mathrm{Gal}(H_{\mathfrak{m}}/k)
}\chi(\sigma)v_\infty(\varepsilon^\sigma),
\end{equation}
for all complex irreducible characters $\chi$ of $\mathrm{Gal}(H_{\mathfrak{m}}/k)$.
\end{itemize}
Here $s\longmapsto L_{\mathfrak{m}}(s,\chi)$ is the
$L$-function associated to $\chi$, defined for the complex numbers
$s$ such that $\mathrm{Re}(s)>1$, by the Euler product
\begin{equation*}
L_{\mathfrak{m}}(s,\chi)=
\prod_{\mathfrak{v}\nmid\mathfrak{m}}\bigl(1-\chi(\sigma_{\mathfrak{v}})N(\mathfrak{v})^{-s}\bigr)^{-1},
\end{equation*}
where $\mathfrak{v}$ runs through all the places of $k$ not dividing $\mathfrak{m}$. For such a place, $\sigma_{\mathfrak{v}}$ and $N(\mathfrak{v})$ are the Frobenius automorphism of $H_{\mathfrak{m}}/k$ and the order of the residue field at $\mathfrak{v}$ respectively. Let us remark that $\sigma_{\infty}=1$ and $N(\infty)=q^{d_\infty}$.\par
For any finite abelian extension $L$ of $k$, we denote by $\mu_L$ the group of roots of unity (nonzero constants) in $L$, by $w_L$ the order of $\mu_L$ and by $\mathcal{F}_L\subset\mathbb{Z}[\mathrm{Gal}(L/k)]$ the annihilator of $\mu_L$. The description of $\mathcal{F}_L$ given in \cite[Lemma 2.5]{Hay85} and the property (i) of $\varepsilon_{\mathfrak{m}}$ imply, in particular, that for any $\eta\in\mathcal{F}_{H_{\mathfrak{m}}}$ there exists $\varepsilon_{\mathfrak{m}}(\eta)\in H_{\mathfrak{m}}$ such that
\begin{equation*}
\varepsilon_{\mathfrak{m}}(\eta)^{w_\infty}=\varepsilon_{\mathfrak{m}}^\eta.
\end{equation*}
\begin{definition}\label{ek} Let $\mathcal{P}_K$ be the subgroup of $K^\times$ generated by $\mu_K$ and by all the norms
\begin{equation*}
N_{H_{\mathfrak{m}}/H_{\mathfrak{m}}\cap K}(\varepsilon_{\mathfrak{m}}(\eta)),
\end{equation*}
where $\mathfrak{m}$ is any nonzero proper ideal of $\mathcal{O}_k$ and $\eta$ is any element of $\mathcal{F}_{H_{\mathfrak{m}}}$. We define
\begin{equation*}
\mathcal{E}_K:=\mathcal{P}_K\cap\mathcal{O}_K^\times.
\end{equation*}
\end{definition}
\section{The Euler system}
For any finite abelian extension $F$ of $k$, and any fractional ideal $\mathfrak{a}$ of $\mathcal{O}_k$ prime to the conductor of $F/k$, we denote by $(\mathfrak{a}, F/k)$ the automorphism of $F/k$ associated to $\mathfrak{a}$ by the Artin map. If $\mathfrak{a}\subset\mathcal{O}_k$ then we denote by $N(\mathfrak{a})$ the cardinality of $\mathcal{O}_k/\mathfrak{a}$. Let $\mathcal{I}(\mathcal{O}_k)$ be the group of fractional ideals of $\mathcal{O}_k$ and let us consider its subgroup
$\mathcal{P}(\mathcal{O}_k):=\{x\mathcal{O}_k,\ x\in k^\times\}$. Then, the Artin map gives an isomorphism from ${\bf Pic}(\mathcal{O}_k):=\mathcal{I}(\mathcal{O}_k)/\mathcal{P}(\mathcal{O}_k)$ into Gal$(H/k)$. Let $p$ be a prime number, and let $\mathbf{Pic}_p(\mathcal{O}_k)$ be the $p$-part of ${\bf Pic}(\mathcal{O}_k)$. Then, fix $\mathfrak{a}_1,\ldots,\mathfrak{a}_s$, a finite set of ideals of $\mathcal{O}_k$ such that
\begin{equation}\label{pic}
{\bf Pic}_p(\mathcal{O}_k)=<\bar{\mathfrak{a}}_1>\times\cdots\times<\bar{\mathfrak{a}}_s>,
\end{equation}
where $<\bar{\mathfrak{a}}_i>\neq1$ is the group generated by the class $\bar{\mathfrak{a}}_i$ of $\mathfrak{a}_i$ in ${\bf Pic}(\mathcal{O}_k)$. If $n_i$ is the order of $<\bar{\mathfrak{a}}_i>$, then $(\mathfrak{a}_i)^{n_i}=a_i\mathcal{O}_k$, with $a_i\in\mathcal{O}_k$. If ${\bf Pic}_p(\mathcal{O}_k)=1$ then we set $s=1$ and $\mathfrak{a}_1:=\mathcal{O}_k$ and $a_1=1$.\par
Let $p\neq\rho$ be a prime number, and let $M$ be a power of $p$. Let $\mu_M$ be the group of $M$-th roots of unity. Then we define
\begin{equation}\label{kaem}
K_M:=
\begin{cases}
K((\mathbb{F}_q^\times)^{1/M}) & \mathrm{if}\ \mu_p\subset k\\
K(\mu_M) & \mathrm{if}\ \mu_p\not\subset k.
\end{cases}
\end{equation}
Moreover, we denote by $\mathcal{L}$ the set of prime ideals $\ell$ of $\mathcal{O}_k$ such that $\ell$ splits completely in the Galois extension $K_M\bigl(a_1^{1/M},\ldots,a_s^{1/M}\bigr)/k$.
Exactly as in \cite[Lemma 3]{Ru94} we have
\begin{lemma}\label{extension} For each prime $\ell\in\mathcal{L}$ there exists a cyclic extension $K(\ell)$ of $K$ of degree $M$, contained in the compositum $K.H_{\ell}$, unramified outside $\ell$, and such that $K(\ell)/K$ is totally ramified at all primes above $\ell$.
\end{lemma}
\begin{proof} Let us remark that the group $C:=\mathrm{Gal}(H_\ell/H)$ is cyclic of order $(N(\ell)-1)/w_k$. Since $\ell$ splits completely in $K_M$ the integer $M$ divides $(N(\ell)-1)/w_k$. In particular the fixed field of $C^M$ is a cyclic extension of $H$ of degree $M$. Let us denote it by $H(\ell)$. Let $\sigma_{\mathfrak{a}_i}:=(\mathfrak{a}_i, H(\ell)/k)$, for $i=1,\ldots,s$ (remark that $\mathfrak{a}_i$ is prime to $\ell$). Let $D:=<\sigma_{\mathfrak{a}_1},\ldots,\sigma_{\mathfrak{a}_s}>$ be the subgroup of $\mathrm{Gal}(H(\ell)/k)$ generated by the automorphisms $\sigma_{\mathfrak{a}_i}$. Let $E$ be the fixed field of $D$. Let $P$ be the $p$-part of $\mathrm{Gal}(H/k)$ and $L$ be the fixed field of $P$. From (\ref{pic}) we deduce that $E\cap H=L$. Moreover, if $\sigma\in D\cap\mathrm{Gal}(H(\ell)/H)$ then $\sigma$ is the restriction to $H(\ell)$ of some automorphism $(x\mathcal{O}_k, H_\ell/k)$, where $x=\prod_{i=1}^s a_i^{e_i}$. But such elements are $M$-th powers modulo $\ell$ for $\ell\in\mathcal{L}$, which implies that $(x\mathcal{O}_k, H_\ell/k)\in C^M$ and hence $\sigma=1$. Therefore $H(\ell)=E.H$. It is obvious now that $E=E'L$, where $E'$ is a subfield of $E$ such that $E'\cap L=k$. The field $K(\ell):=E'.K$ satisfies the required properties stated in the lemma.
\end{proof}
Let $\mathcal{S}$ be the set of squarefree ideals of $\mathcal{O}_k$ divisible only by primes $\ell\in\mathcal{L}$. If $\mathfrak{a}=\ell_1\cdots\ell_n\in\mathcal{S}$ then we set $K(\mathfrak{a}):=K(\ell_1)\cdots K(\ell_n)$ and $K(\mathcal{O}_k):=K$. If $\mathfrak{g}$ is an ideal of $\mathcal{O}_k$ then we denote by $\mathcal{S}(\mathfrak{g})$ the set of ideals $\mathfrak{a}\in\mathcal{S}$ that are prime to $\mathfrak{g}$. Following Rubin we define an Euler system to be a function
\begin{equation*}
\alpha:\mathcal{S}(\mathfrak{g})\longrightarrow k_\infty^\times,
\end{equation*}
such that
\begin{description}
\item{E1.}\ $\alpha(\mathfrak{a})\in K(\mathfrak{a})^\times$.
\item{E2.}\ $\alpha(\mathfrak{a})\in\mathcal{O}_{K(\mathfrak{a})}^\times$, if $\mathfrak{a}\neq\mathcal{O}_k$.
\item{E3.}\ $N_{K(\mathfrak{a}\ell)/K(\mathfrak{a})}\bigl(\alpha(\mathfrak{a}\ell)\bigr)=\alpha(\mathfrak{a})^{1-\mathrm{Fr}(\ell)^{-1}}$, where Fr$(\ell)$ is the Frobenius of $\ell$ in Gal$(K(\mathfrak{a})/k)$.
\item{E4.}\ $\alpha(\mathfrak{a}\ell)\equiv\alpha(\mathfrak{a})^{\mathrm{Fr}(\ell)^{-1}(N(\ell)-1)/M}$ modulo all primes above $\ell$.
\end{description}\par
We use the theory of sign-normalized Drinfel'd modules, developed by D.\,Hayes in \cite{Hay85}, to produce Euler systems. Let $\Omega_k$ be the completion of the algebraic closure of $k_\infty$. Then $\Omega_k$ is algebraically closed. We briefly recall the definition of the Drinfel'd module $\Phi^\Gamma$ associated to any $\mathcal{O}_k$-lattice $\Gamma$ of $\Omega_k$, that is, any finitely generated $\mathcal{O}_k$-submodule of $\Omega_k$ of rank one. Let $\Omega_k[\mathbf{F}]$ be the left twisted polynomial ring in the Frobenius endomorphism $\mathbf{F}:x\longmapsto x^q$, with the rule $\mathbf{F}.w=w^q.\mathbf{F}$, for all $w\in\Omega_k$. Then $\Phi^\Gamma: \mathcal{O}_k\longrightarrow\Omega_k[\mathbf{F}]$ is the $\mathbb{F}_q$-algebra homomorphism such that the image of $x$ is the unique element $\Phi_x^\Gamma$ of $\Omega_k[\mathbf{F}]$ satisfying
\begin{equation}\label{exponent}
e_\Gamma(xz)=\Phi_x^\Gamma(e_\Gamma(z)),\ \mathrm{for\ all}\ z\in\Omega_k.
\end{equation}
Here, by $e_\Gamma(z)$ we mean the infinite product
\begin{equation*}
e_\Gamma(z):=z\prod_{\gamma\in\Gamma}(1-z/\gamma)\quad(\gamma\neq0).
\end{equation*}
\begin{theorem} The infinite product $e_\Gamma(z)$ converges uniformly on any bounded subset of $\Omega_k$. We thus obtain a surjective $\mathbb{F}_q$-linear entire function of $\Omega_k$, periodic with $\Gamma$ as a group of periods.
\end{theorem}
\begin{proof} See \cite[Theorem 8.5]{Hay92}.
\end{proof}
\begin{theorem} If $\Gamma\subset\Gamma'$ then the factor group $\Gamma'/\Gamma$ is finite. Moreover,
\begin{equation}\label{rela1}
e_{\Gamma'}(z)=P(\Gamma,\Gamma';e_\Gamma(z)),\ \mathrm{for\ all}\ z\in\Omega_k,
\end{equation}
where $P(\Gamma,\Gamma';t)$ is the polynomial
\begin{equation*}
P(\Gamma,\Gamma';t):=t\prod_{\gamma\neq0}(1-t/\gamma),\quad\quad(\gamma\in e_\Gamma(\Gamma')).
\end{equation*}
\end{theorem}
\begin{proof} Since $\Gamma$ and $\Gamma'$ are finitely generated $\mathcal{O}_k$-submodules of $\Omega_k$ of rank one, there exist $\alpha,\alpha'\in k^\times$ and fractional ideals $\mathfrak{a}$ and $\mathfrak{a}'$ of $k$ such that $\Gamma=\alpha\mathfrak{a}$ and $\Gamma'=\alpha'\mathfrak{a}'$. The hypothesis $\Gamma\subset\Gamma'$ means that $\mathfrak{c}:=\alpha\alpha'^{-1}\mathfrak{a}\mathfrak{a}'^{-1}$ is a nonzero ideal of $\mathcal{O}_k$. But $\Gamma'/\Gamma$ is isomorphic to $\mathcal{O}_k/\mathfrak{c}$, which is necessarily finite. The other assertion of the theorem corresponds to \cite[Theorem 8.7]{Hay92}.
\end{proof}
It is easy to check the following equality
\begin{equation}\label{lineaire}
P(\omega\Gamma,\omega\Gamma';\omega t)=\omega P(\Gamma,\Gamma';t),
\end{equation}
for all $\omega\in\Omega_k^\times$.
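Since (\ref{lineaire}) is used several times in the sequel, let us record its verification, which relies only on the product definitions of $e_\Gamma$ and $P$. From $e_{\omega\Gamma}(\omega z)=\omega z\prod_{\gamma\neq0}(1-\omega z/\omega\gamma)=\omega e_\Gamma(z)$ (with $\gamma\in\Gamma$) we get $e_{\omega\Gamma}(\omega\Gamma')=\omega e_\Gamma(\Gamma')$, and therefore
\begin{equation*}
P(\omega\Gamma,\omega\Gamma';\omega t)=\omega t\prod_{\gamma\neq0}\Bigl(1-\frac{\omega t}{\omega\gamma}\Bigr)=\omega\, t\prod_{\gamma\neq0}\Bigl(1-\frac{t}{\gamma}\Bigr)=\omega P(\Gamma,\Gamma';t),\quad\quad(\gamma\in e_\Gamma(\Gamma')).
\end{equation*}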
\begin{corollary} Let $\Gamma\subset\Gamma'\subset\Gamma''$ be three $\mathcal{O}_k$-lattices of $\Omega_k$. Then
\begin{equation}\label{composition}
P(\Gamma,\Gamma'';t)=P(\Gamma',\Gamma'';P(\Gamma,\Gamma';t)).
\end{equation}
\end{corollary}
\begin{proof} The identity (\ref{composition}) is an immediate consequence of (\ref{rela1}).
\end{proof}
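In more detail, applying (\ref{rela1}) successively to the inclusions $\Gamma\subset\Gamma''$, $\Gamma'\subset\Gamma''$ and $\Gamma\subset\Gamma'$ gives, for all $z\in\Omega_k$,
\begin{equation*}
P(\Gamma,\Gamma'';e_\Gamma(z))=e_{\Gamma''}(z)=P(\Gamma',\Gamma'';e_{\Gamma'}(z))=P(\Gamma',\Gamma'';P(\Gamma,\Gamma';e_\Gamma(z))).
\end{equation*}
Since $e_\Gamma$ is surjective, the two sides of (\ref{composition}) take the same value at every $t\in\Omega_k$, hence are equal as polynomials.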
Let us set
\begin{equation*}
\delta(\Gamma,\Gamma'):=\prod_{\gamma\neq0}\gamma^{-1},\quad\quad(\gamma\in e_\Gamma(\Gamma')).
\end{equation*}
\begin{lemma}\label{fix} We have
\begin{equation*}
\Phi_x^\Gamma(t)=xP(\Gamma,x^{-1}\Gamma;t),
\end{equation*}
for all $x\in\mathcal{O}_k-\{0\}$. In particular, the leading coefficient of $\Phi_x^\Gamma$ is $x\delta(\Gamma,x^{-1}\Gamma)$.
\end{lemma}
\begin{proof} We apply (\ref{rela1}) to the lattice $\Gamma':=x^{-1}\Gamma$. Since $e_\Gamma(xz)=xe_{\Gamma'}(z)$, we obtain $e_\Gamma(xz)=xP(\Gamma,x^{-1}\Gamma;e_\Gamma(z))$. Comparing with (\ref{exponent}) gives us the formula for $\Phi_x^\Gamma(t)$.
\end{proof}
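The scaling identity $e_\Gamma(xz)=xe_{\Gamma'}(z)$ used in the proof above, where $\Gamma'=x^{-1}\Gamma$, is immediate from the product formula:
\begin{equation*}
e_\Gamma(xz)=xz\prod_{\gamma\neq0}\Bigl(1-\frac{xz}{\gamma}\Bigr)=x\cdot z\prod_{\gamma\neq0}\Bigl(1-\frac{z}{x^{-1}\gamma}\Bigr)=xe_{x^{-1}\Gamma}(z),\quad\quad(\gamma\in\Gamma).
\end{equation*}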
\begin{proposition}\label{fia} Let $\mathfrak{a}$ be a nonzero ideal of $\mathcal{O}_k$. Then, the left ideal of $\Omega_k[\mathbf{F}]$ generated by $\Phi_a^\Gamma$, $a\in\mathfrak{a}$, is principal generated by the monic polynomial
\begin{equation*}
\Phi_{\mathfrak{a}}^\Gamma(t):=\delta(\Gamma,\mathfrak{a}^{-1}\Gamma)^{-1}P(\Gamma,\mathfrak{a}^{-1}\Gamma;t).
\end{equation*}
\end{proposition}
\begin{proof} If $a\in\mathfrak{a}$ then $\Phi_a^\Gamma(t)=aP(\mathfrak{a}^{-1}\Gamma,a^{-1}\Gamma;P(\Gamma,\mathfrak{a}^{-1}\Gamma;t))$, thanks to (\ref{composition}). This proves that our left ideal is generated by $P(\Gamma,\mathfrak{a}^{-1}\Gamma;t)$.
\end{proof}
\begin{corollary} Let $D:\Omega_k[\mathbf{F}]\longrightarrow\Omega_k$ be the map which associates to a polynomial in $\mathbf{F}$ its constant term. Then
\begin{equation}\label{identity}
D(\Phi_x^\Gamma)=x\quad\mathrm{and}\quad D(\Phi_{\mathfrak{a}}^\Gamma)=\delta(\Gamma,\mathfrak{a}^{-1}\Gamma)^{-1}.
\end{equation}
Moreover, if $s_{\Phi^\Gamma}(x)$ denotes the leading coefficient of $\Phi_x^\Gamma$, then
\begin{equation}\label{pieuvre}
D(\Phi_{x\mathcal{O}_k}^\Gamma)=s_{\Phi^\Gamma}(x)^{-1}x.
\end{equation}
\end{corollary}
\begin{proof} This is immediate from the explicit description of $\Phi_x^\Gamma$ and $\Phi_{\mathfrak{a}}^\Gamma$ given by Lemma \ref{fix} and Proposition \ref{fia}.
\end{proof}
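Let us make the computation giving (\ref{pieuvre}) explicit. By Lemma \ref{fix} the leading coefficient of $\Phi_x^\Gamma$ is $s_{\Phi^\Gamma}(x)=x\delta(\Gamma,x^{-1}\Gamma)$, so that Proposition \ref{fia}, applied to $\mathfrak{a}=x\mathcal{O}_k$, gives
\begin{equation*}
D(\Phi_{x\mathcal{O}_k}^\Gamma)=\delta(\Gamma,x^{-1}\Gamma)^{-1}=\frac{x}{s_{\Phi^\Gamma}(x)}=s_{\Phi^\Gamma}(x)^{-1}x.
\end{equation*}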
Let $k(\infty)=\mathbb{F}_{q^{d_\infty}}$ be the field of constants of $k_\infty$ and let $\mathbf{sgn}$ be a sign-function of $k_\infty$ (fixed throughout this article), that is a continuous group homomorphism $\mathbf{sgn}:k_\infty^\times\longrightarrow k(\infty)^\times$ such that $\mathbf{sgn}(x)=x$ for all $x\in k(\infty)^\times$.
\begin{definition}The Drinfel'd module $\Phi^\Gamma$ is called $\mathbf{sgn}$-normalized if the leading coefficient of $\Phi_x^\Gamma$ has the form $\mathbf{sgn}(x)^{\tau_\Gamma}$, where $\tau_\Gamma\in\mathrm{Gal}(k(\infty)/\mathbb{F}_q)$ is an automorphism of $k(\infty)/\mathbb{F}_q$ depending only on $\Gamma$.
\end{definition}
\begin{theorem}Let $\mathfrak{c}$ be a fractional ideal of $\mathcal{O}_k$. Then, there exists a nonzero element $\xi(\mathfrak{c})\in\Omega_k^\times$ such that the Drinfel'd module $\Phi^{\tilde{\mathfrak{c}}}$ associated to $\tilde{\mathfrak{c}}:=\xi(\mathfrak{c})\mathfrak{c}$ is $\mathbf{sgn}$-normalized. Moreover, $\xi(\mathfrak{c})$ is determined up to multiplication by elements of $k(\infty)^\times$.
\end{theorem}
\begin{proof} We refer the reader to \cite[Theorem 12.3 and Proposition 13.1]{Hay92}.
\end{proof}
\begin{proposition} If $\xi(\mathcal{O}_k)$ is fixed then for every fractional ideal $\mathfrak{c}$ of $\mathcal{O}_k$ there exists a unique choice of $\xi(\mathfrak{c})$ so that
\begin{equation}\label{cond}
\xi(\mathfrak{a}^{-1}\mathfrak{c})=D(\Phi_{\mathfrak{a}}^{\tilde{\mathfrak{c}}})\xi(\mathfrak{c}),
\end{equation}
for every nonzero ideal $\mathfrak{a}$ of $\mathcal{O}_k$.
\end{proposition}
\begin{proof} By \cite[Theorem 8.14 and Theorem 13.8]{Hay92} if $\xi(\mathfrak{c})$ is given then $\xi(\mathfrak{a}^{-1}\mathfrak{c})$ is determined by (\ref{cond}) up to an element of $k(\infty)^\times$. We leave it to the reader to check that if $\xi(\mathcal{O}_k)$ is fixed then (\ref{cond}) allows us to fix a value of $\xi(\mathfrak{c})$ for every $\mathfrak{c}$.
\end{proof}
Let us fix $\xi(\mathcal{O}_k)$ and let us denote by $H^*_{\mathfrak{e}}$ the normalizing field with respect to $\mathbf{sgn}$, cf. \cite[Definition 4.9]{Hay85}. We recall that if $\Phi$ is a $\mathbf{sgn}$-normalized Drinfel'd module, then $H^*_{\mathfrak{e}}$ is the subfield of $\Omega_k$ generated by the coefficients of the polynomials $\Phi_x$, $x\in\mathcal{O}_k$.
\begin{theorem}The extension $H^*_{\mathfrak{e}}/k$ is finite abelian and unramified except at $\infty$. The ramification index at $\infty$ is $w_\infty/w_k$. Also we have $H\subset H^*_{\mathfrak{e}}$ and $[H^*_{\mathfrak{e}}:H]=w_\infty/w_k$.
\end{theorem}
\begin{proof} See \cite[Theorem 4.10]{Hay85} or \cite[\S14]{Hay92}.
\end{proof}
Let $\mathfrak{m}$ be a nonzero proper ideal of $\mathcal{O}_k$, and let us consider the element
\begin{equation*}
\lambda_{\mathfrak{m}}:=\xi(\mathfrak{m})e_{\mathfrak{m}}(1).
\end{equation*}
Then we may deduce from the above the following
\begin{lemma} We have
\begin{equation}\label{rela3}
\xi(\mathfrak{a}^{-1}\mathfrak{m})e_{\mathfrak{a}^{-1}\mathfrak{m}}(1)=\Phi_{\mathfrak{a}}^{\tilde{\mathfrak{m}}}(\lambda_{\mathfrak{m}}),
\end{equation}
for any nonzero ideal $\mathfrak{a}$ of $\mathcal{O}_k$.
\end{lemma}
\begin{proof} By (\ref{cond}), (\ref{rela1}) and (\ref{lineaire}) we obtain $\xi(\mathfrak{a}^{-1}\mathfrak{m})e_{\mathfrak{a}^{-1}\mathfrak{m}}(1)=D(\Phi_{\mathfrak{a}}^{\tilde{\mathfrak{m}}})P(\tilde{\mathfrak{m}},\mathfrak{a}^{-1}\tilde{\mathfrak{m}};\lambda_{\mathfrak{m}})$. Now use Proposition \ref{fia} and (\ref{identity}) to conclude.
\end{proof}
In the sequel we shall use the field $k_{\mathfrak{m}}:=H^*_{\mathfrak{e}}(\lambda_{\mathfrak{m}})$. As proved in \cite[\S4]{Hay85} $k_{\mathfrak{m}}$ is a finite abelian extension of $k$. Moreover, $H_{\mathfrak{m}}\subset k_{\mathfrak{m}}$ and
\begin{theorem}
\begin{equation}\label{stark1}
N_{k_{\mathfrak{m}}/H_{\mathfrak{m}}}(\lambda_{\mathfrak{m}})=-\lambda_{\mathfrak{m}}^{w_\infty}\quad\mathrm{is\ a\ Stark\ unit}.
\end{equation}
\end{theorem}
\begin{proof} The equality is Theorem 4.17 of \cite{Hay85}. The fact that $-\lambda_{\mathfrak{m}}^{w_\infty}$ is a Stark unit is also proved in \cite[\S\S 4 and 6]{Hay85}.
\end{proof}
\begin{lemma}\label{unlemme} If $\mathfrak{q}$ is a prime ideal of $\mathcal{O}_k$ then
\begin{equation}\label{distribution}
N_{k_{\mathfrak{mq}}/k_{\mathfrak{m}}}(\lambda_{\mathfrak{mq}})=
\begin{cases}
\lambda_{\mathfrak{m}} & \mathrm{if}\ \mathfrak{q}\vert\mathfrak{m}\\
\lambda_{\mathfrak{m}}^{1-\mathrm{Fr}(\mathfrak{q})^{-1}} & \mathrm{if}\ \mathfrak{q}\nmid\mathfrak{m}
\end{cases}
\end{equation}
where $\mathrm{Fr}(\mathfrak{q})$ is the Frobenius of $\mathfrak{q}$ in $\mathrm{Gal}(k_{\mathfrak{m}}/k)$.
\end{lemma}
\begin{proof} Let $\Gamma:=\xi(\mathfrak{mq})\mathfrak{mq}$, $\Phi:=\Phi^\Gamma$ and $\xi:=\xi(\mathfrak{mq})$. Let $X$ be a complete set of representatives modulo $\mathfrak{mq}$ of the kernel of the natural map
\begin{equation*}
(\mathcal{O}_k/\mathfrak{mq})^\times\longrightarrow(\mathcal{O}_k/\mathfrak{m})^\times.
\end{equation*}
We may choose $X$ so that $\mathbf{sgn}(x)=1$ for all $x\in X$. By \cite[formula (4.8)]{Hay85} we see that $\mathrm{Gal}(k_{\mathfrak{mq}}/k_{\mathfrak{m}})$ is equal to the set $\{(x\mathcal{O}_k,k_{\mathfrak{mq}}/k)$, $x\in X\}$. Since $\lambda_{\mathfrak{mq}}=e_\Gamma(\xi)$ and $\Phi_x=\Phi_{x\mathcal{O}_k}$ if $\mathbf{sgn}(x)=1$, Theorem 4.12 of \cite{Hay85} and formula (\ref{exponent}) above give
\begin{equation*}
N_{k_{\mathfrak{mq}}/k_{\mathfrak{m}}}(\lambda_{\mathfrak{mq}})=\prod_{x\in X}e_\Gamma(x\xi).
\end{equation*}
Suppose for the moment that $\mathfrak{q}\vert\mathfrak{m}$. Then, the set $Y:=\{\xi t,\ 1-t\in X\}$ is a complete system of representatives of $\mathfrak{q}^{-1}\Gamma/\Gamma$. This allows us to deduce
\begin{equation*}
\prod_{x\in X}e_\Gamma(x\xi)=\prod_{y\in Y}e_\Gamma(\xi-y)=e_{\mathfrak{q}^{-1}\Gamma}(\xi)D(\Phi_{\mathfrak{q}}),
\end{equation*}
where the last equality is a direct application of (\ref{rela1}) and (\ref{identity}). But, on one hand, it is obvious that $e_{\mathfrak{q}^{-1}\Gamma}(\xi)=\xi e_{\mathfrak{m}}(1)$ and on the other hand $\xi D(\Phi_{\mathfrak{q}})=\xi(\mathfrak{m})$ by (\ref{cond}). Thus we proved the norm formula when $\mathfrak{q}\vert\mathfrak{m}$.\par
Let us now assume that $\mathfrak{q}\nmid\mathfrak{m}$.
Let $t_0\in\mathfrak{m}$ be such that $t_0\equiv1$ modulo $\mathfrak{q}$. Then the set $Z:=\{\xi t_0\}\cup\{\xi t,\ 1-t\in X\}$ gives a complete system of representatives of $\mathfrak{q}^{-1}\Gamma/\Gamma$. Therefore (\ref{rela1}) implies
\begin{equation*}
e_\Gamma(\xi-\xi t_0)\prod_{x\in X}e_\Gamma(x\xi)=\prod_{z\in Z}e_\Gamma(\xi-z)=D(\Phi_{\mathfrak{q}})e_{\mathfrak{q}^{-1}\Gamma}(\xi)=\lambda_{\mathfrak{m}}.
\end{equation*}
Thus we have
\begin{equation*}
N_{k_{\mathfrak{mq}}/k_{\mathfrak{m}}}(\lambda_{\mathfrak{mq}})=\frac{\lambda_{\mathfrak{m}}}{\xi(\mathfrak{mq})e_{\mathfrak{mq}}(x_0)},
\end{equation*}
where $x_0=1-t_0$. But, we may choose $x_0$ such that $\mathbf{sgn}(x_0)=1$. In particular, $D(\Phi_{x_0\mathcal{O}_k})=x_0$ and $\xi(\mathfrak{mq})e_{\mathfrak{mq}}(x_0)=\xi(x_0^{-1}\mathfrak{mq})e_{x_0^{-1}\mathfrak{mq}}(1)=\xi(\mathfrak{a}^{-1}\mathfrak{m})e_{\mathfrak{a}^{-1}\mathfrak{m}}(1)$, where $\mathfrak{a}:=x_0\mathfrak{q}^{-1}$. By (\ref{rela3}), \cite[Theorem 4.12]{Hay85} and the fact that $(x_0\mathcal{O}_k, k_{\mathfrak{m}}/k)=1$ we obtain
\begin{equation}\label{asavoir}
\xi(\mathfrak{mq})e_{\mathfrak{mq}}(x_0)=\lambda_{\mathfrak{m}}^{\mathrm{Fr}(\mathfrak{q})^{-1}}.
\end{equation}
This completes the proof of the lemma.
\end{proof}
\begin{lemma}\label{poisson} Assume $\mathfrak{q}\nmid\mathfrak{m}$. Then,
\begin{equation}\label{congruence}
\lambda_{\mathfrak{mq}}-\lambda_{\mathfrak{m}}^{\mathrm{Fr}(\mathfrak{q})^{-1}}=\lambda_{\mathfrak{q}}^{\sigma_{\mathfrak{m}}^{-1}}.
\end{equation}
where $\sigma_{\mathfrak{m}}:=(\mathfrak{m}, k_{\mathfrak{q}}/k)$ is the automorphism of $k_{\mathfrak{q}}/k$ associated to $\mathfrak{m}$ by the Artin map.
\end{lemma}
\begin{proof} Let us keep the notation of the proof of Lemma \ref{unlemme}. Then an easy computation based on (\ref{asavoir}) gives
\begin{equation}
\lambda_{\mathfrak{mq}}-\lambda_{\mathfrak{m}}^{\mathrm{Fr}(\mathfrak{q})^{-1}}=\xi(\mathfrak{mq})e_{\mathfrak{mq}}(t_0).
\end{equation}
On the other hand, (\ref{pieuvre}) and (\ref{cond}) give $\xi(t_0^{-1}\mathfrak{mq})=s_\Phi(t_0)^{-1}t_0\xi(\mathfrak{mq})$. Thus, if we set $\mathfrak{a}:=t_0\mathfrak{m}^{-1}\subset\mathcal{O}_k$, then we obtain
\begin{equation*}
\xi(\mathfrak{mq})e_{\mathfrak{mq}}(t_0)=s_\Phi(t_0)\xi(\mathfrak{a}^{-1}\mathfrak{q})e_{\mathfrak{a}^{-1}\mathfrak{q}}(1)=s_\Phi(t_0)\Phi_{\mathfrak{a}}^{\tilde{\mathfrak{q}}}(\lambda_{\mathfrak{q}}).
\end{equation*}
The last equality is a special case of (\ref{rela3}). By \cite[Theorem 4.12 and Corollary 4.14]{Hay85} we have $\Phi_{\mathfrak{a}}^{\tilde{\mathfrak{q}}}(\lambda_{\mathfrak{q}})=(s_{\Phi'}(t_0)^{-1}\lambda_{\mathfrak{q}})^{\sigma_{\mathfrak{m}}^{-1}}$, where $\Phi':=\Phi^{\tilde{\mathfrak{q}}}$. Let $*$ be the operation introduced in \cite[\S3]{Hay79}. Then, $\Phi'=\mathfrak{m}*\Phi$, as proved in \cite[Proposition 5.10]{Hay79} or \cite[Theorem 8.14]{Hay92} or the proof of \cite[Theorem 5.1]{Hay85}. By \cite[formula (4.5)]{Hay85} we have $s_{\Phi'}(t_0)=s_\Phi(t_0)^{\sigma_{\mathfrak{m}}}$. This proves the lemma.
\end{proof}
Let us keep the sign-function $\mathbf{sgn}$ fixed. Then, for any nonzero proper ideal $\mathfrak{m}$ of $\mathcal{O}_k$ and any $\eta\in\mathcal{F}_{k_{\mathfrak{m}}}$ the element $(\lambda_{\mathfrak{m}})^\eta$ belongs to $H_{\mathfrak{m}}$, thanks to \cite[Corollary 4.14]{Hay85}. Moreover, by (\ref{stark1}) and \cite[Lemma 2.5]{Hay85} we see that $\mathcal{P}_K$ is generated by $\mu_K$ and by all the norms
\begin{equation*}
\lambda_{\mathfrak{m}}(\mathfrak{g}):=N_{H_{\mathfrak{m}}/H_{\mathfrak{m}}\cap K}(\lambda_{\mathfrak{m}}^{N(\mathfrak{g})-(\mathfrak{g},\, k_{\mathfrak{m}}/k)}),
\end{equation*}
where $\mathfrak{m}$ and $\mathfrak{g}$ are any nonzero coprime ideals of $\mathcal{O}_k$ such that $\mathfrak{m}\neq\mathcal{O}_k$. Furthermore, the map $\alpha:\mathcal{S}(\mathfrak{mg})\longrightarrow k_\infty^\times$, defined by
\begin{equation*}
\alpha(\mathfrak{a}):=N_{KH_{\mathfrak{ma}}/K(\mathfrak{a})}(\lambda_{\mathfrak{ma}}^{N(\mathfrak{g})-(\mathfrak{g},\, k_{\mathfrak{ma}}/k)}),
\end{equation*}
is an Euler system such that $\alpha(1)=\lambda_{\mathfrak{m}}(\mathfrak{g})$. Indeed, the properties E1 and E2 are immediately seen to be satisfied. The property E3 is a consequence of Lemma \ref{unlemme}. The property E4 follows from Lemma \ref{poisson} and \cite[Lemma 4.19]{Hay85}.
\begin{corollary}\label{sardine} If $u\in\mathcal{E}_K$ then there exist an ideal $\mathfrak{g}$ of $\mathcal{O}_k$ and an Euler system $\alpha:\mathcal{S}(\mathfrak{g})\longrightarrow k_\infty^\times$ such that $\alpha(1)=u$.
\end{corollary}
\begin{proof} In view of the discussion above we only have to check the corollary for the roots of unity in $K$. But this is obvious.
\end{proof}
\section{The Gras conjecture}
If $\ell\in\mathcal{L}$ then we denote by $\sigma_\ell$ a generator of the cyclic group G$_{\ell}:=\mathrm{Gal}(K(\ell)/K)$. Further, we set
\begin{equation*}
N_\ell:=\sum_{\tau\in\mathrm{G}_{\ell}}\tau\quad\mathrm{and}\quad D_\ell:=\sum_{i=0}^{M-1}i\sigma_\ell^{i}.
\end{equation*}
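Let us also record the standard telescoping identity relating $N_\ell$ and $D_\ell$ (cf. \cite{Ru91}), which explains why expressions of the form $x^{D_\ell(\sigma_\ell-1)}$ become $M$-th powers below: writing $D_\ell=\sum_{i=0}^{M-1}i\sigma_\ell^{i}$ and using $\sigma_\ell^M=1$, a direct computation in $\mathbb{Z}[\mathrm{G}_\ell]$ gives
\begin{equation*}
(\sigma_\ell-1)D_\ell=\sum_{i=0}^{M-1}i\sigma_\ell^{i+1}-\sum_{i=0}^{M-1}i\sigma_\ell^{i}=(M-1)-\sum_{j=1}^{M-1}\sigma_\ell^{j}=M-N_\ell.
\end{equation*}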
Let $\mathfrak{a}\in\mathcal{S}$ and let G$_{\mathfrak{a}}:=\mathrm{Gal}(K(\mathfrak{a})/K)$. Then $\mathrm{G}_{\mathfrak{a}}\simeq\prod_{\ell\vert\mathfrak{a}}\mathrm{G}_{\ell}$. Moreover, the inertia group of $\ell$ in G$_{\mathfrak{a}}$ is $\mathrm{Gal}(K(\mathfrak{a})/K(\mathfrak{a}/\ell))$, which we shall identify with G$_{\ell}$.\par
Let us now define $Y$ to be the free multiplicative $\mathbb{Z}[\mathrm{G}_{\mathfrak{a}}]$-module generated by the symbols $x(\mathfrak{b})$, $\mathfrak{b}\vert\mathfrak{a}$, and we denote by $Z$ its submodule generated by the relations
\begin{description}
\item{1.} $x(\mathfrak{b})^{\sigma-1}=1,\ \mathrm{for\ all}\ \mathfrak{b}\vert\mathfrak{a}\ \mathrm{and\ all}\ \sigma\in\mathrm{Gal}(K(\mathfrak{a})/K(\mathfrak{b}))$
\item{2.} $x(\mathfrak{b}\ell)^{N_\ell}=x(\mathfrak{b})^{1-\mathrm{Fr}(\ell)^{-1}},\ \mathrm{for\ all}\ \mathfrak{b}\in\mathcal{S}\ \mathrm{and\ all}\ \ell\in\mathcal{L}\ \mathrm{such\ that}\ \mathfrak{b}\ell\vert\mathfrak{a}.$
\end{description}
\begin{lemma}\label{dist} The $\mathbb{Z}[\mathrm{G}_{\mathfrak{a}}]$-module $X:=Y/Z$ is a free $\mathbb{Z}$-module, with basis the set
\begin{equation*}
\lbrace x(\mathfrak{b})^\sigma,\ \mathfrak{b}\vert\mathfrak{a},\sigma\in\mathrm{G}_{\mathfrak{b}}-\cup_{\mathfrak{c}\vert\mathfrak{b},\mathfrak{c}\neq\mathfrak{b}}\mathrm{G}_{\mathfrak{c}}\rbrace.
\end{equation*}
Moreover, if we set $D_{\mathfrak{b}}:=\prod_{\ell\vert\mathfrak{b}} D_\ell$ then, for all $\mathfrak{b}\vert\mathfrak{a}$ and all $\sigma\in\mathrm{G}_{\mathfrak{a}}$, $x(\mathfrak{b})^{D_{\mathfrak{b}}(\sigma-1)}\in X^M$.
\end{lemma}
\begin{proof} The proof is exactly the same as for \cite[Lemma 1.1]{Ru91}.
\end{proof}
\begin{corollary}\label{kappaalpha} For any Euler system $\alpha:\mathcal{S}(\mathfrak{g})\longrightarrow k_\infty^\times$ there is a natural map
\begin{equation}\label{kappa}
\kappa_\alpha:\mathcal{S}(\mathfrak{g})\longrightarrow K^\times/(K^\times)^M,\quad\mathrm{such\ that}\quad \kappa_\alpha(\mathfrak{a})\equiv\alpha(\mathfrak{a})^{D_{\mathfrak{a}}}\ \mathrm{modulo}\ (K^\times)^M.
\end{equation}
\end{corollary}
\begin{proof} The existence of $\kappa_\alpha$ is deduced from Lemma \ref{dist} above by applying Hilbert's Theorem 90. We refer the reader to \cite[Proposition 2.2]{Ru91}.
\end{proof}
To go further we need to understand the prime factorization of $\kappa_\alpha(\mathfrak{a})$, for $\mathfrak{a}\in\mathcal{S}(\mathfrak{g})$. To this end we adopt the following notation of Rubin, cf.\ \cite[\S2]{Ru91}. Let $\mathcal{I}=\oplus_\lambda\mathbb{Z}\lambda$ be the group of fractional ideals of $\mathcal{O}_K$ written additively. If $\ell$ is a prime ideal of $\mathcal{O}_k$ then we write $\mathcal{I}_\ell=\oplus_{\lambda\vert\ell}\mathbb{Z}\lambda$. If $y\in K^\times$ then we denote by $(y)_\ell\in\mathcal{I}_\ell$, $[y]\in\mathcal{I}/M\mathcal{I}$ and $[y]_\ell\in\mathcal{I}_\ell/M\mathcal{I}_\ell$ the projections of the fractional ideal $(y):=y\mathcal{O}_K$.
\begin{proposition}\label{fielle1} Let $\ell\in\mathcal{L}$ and let us consider the map
\begin{equation}\label{requin}
\psi_\ell:K(\ell)^\times\longrightarrow(\mathcal{O}_K/\ell\mathcal{O}_K)^\times/((\mathcal{O}_K/\ell\mathcal{O}_K)^\times)^M,
\end{equation}
which associates to $z$ the sum $\oplus_{\lambda\vert\ell}z_\lambda$ such that the image of $z^{1-\sigma_\ell}$ in $(\mathcal{O}_K/\lambda)^\times$ is equal to $(z_\lambda)^d$, where $d:=(N(\ell)-1)/M$. Then there exists a unique $\mathrm{G}$-equivariant isomorphism
\begin{equation}\label{fielle}
\varphi_\ell:(\mathcal{O}_K/\ell\mathcal{O}_K)^\times/((\mathcal{O}_K/\ell\mathcal{O}_K)^\times)^M\longrightarrow\mathcal{I}_\ell/M\mathcal{I}_\ell,
\end{equation}
such that
\begin{equation}\label{baleine}
(\varphi_\ell\circ\psi_\ell)(x)=[N_{K(\ell)/K}(x)]_\ell.
\end{equation}
\end{proposition}
\begin{proof} We first prove the existence of $\varphi_\ell$. Let $\lambda'$ be a prime ideal of $\mathcal{O}_{K(\ell)}$ above $\ell$. Let $v_{\lambda'}$ be the normalized valuation of $K(\ell)$ defined by $\lambda'$, and let $\pi\in\lambda'-(\lambda')^2$. Then $\pi^{1-\sigma_\ell}$ has exact order $M$ in the cyclic group $(\mathcal{O}_{K(\ell)}/\lambda')^\times$, because $K(\ell)/K$ is cyclic, totally ramified at $\lambda:=\lambda'\cap\mathcal{O}_K$. In particular, using the isomorphism $\mathcal{O}_{K(\ell)}/\lambda'\simeq\mathcal{O}_K/\lambda$, there exists $x_\lambda\in(\mathcal{O}_K/\lambda)^\times$ such that the image of $\pi^{1-\sigma_\ell}$ in $(\mathcal{O}_K/\lambda)^\times$ is equal to $(x_\lambda)^d$. Let us remark that the projection of $x_\lambda$ in $(\mathcal{O}_K/\lambda)^\times/((\mathcal{O}_K/\lambda)^\times)^M$ is well defined, does not depend on $\pi$ and, in fact, has exact order $M$. Thus, the isomorphism $\mathcal{O}_K/\ell\mathcal{O}_K\simeq\oplus_{\lambda\vert\ell}\mathcal{O}_K/\lambda$ allows us to define a $\mathrm{G}$-equivariant isomorphism
\begin{equation*}
\hat{\varphi}_\ell:(\mathcal{O}_K/\ell\mathcal{O}_K)^\times/((\mathcal{O}_K/\ell\mathcal{O}_K)^\times)^M\longrightarrow\mathcal{I}_\ell/M\mathcal{I}_\ell,
\end{equation*}
such that the image of an element $x:=\oplus_{\lambda\vert\ell}(x_\lambda)^{e_\lambda}$ is $\hat{\varphi}_\ell(x):=\oplus_{\lambda\vert\ell}e_\lambda\lambda$.
It is clear that the map $\varphi_\ell:=-\hat{\varphi}_\ell$ satisfies (\ref{baleine}). The uniqueness of $\varphi_\ell$ follows from the fact that $\psi_\ell$ and the map $K(\ell)^\times\longrightarrow\mathcal{I}_\ell/M\mathcal{I}_\ell$, $x\longmapsto[N_{K(\ell)/K}(x)]_\ell$, have the same kernel, namely the set of elements $x\in K(\ell)^\times$ such that $v_{\lambda'}(x)\equiv0$ modulo $M$ for all prime ideals $\lambda'$ above $\ell$.
\end{proof}
The map $\varphi_\ell$ induces a homomorphism $\{y\in K^\times/(K^\times)^M,\ [y]_\ell=0\}\longrightarrow\mathcal{I}_\ell/M\mathcal{I}_\ell$ which we also denote by $\varphi_\ell$.
\begin{lemma} For any Euler system $\alpha:\mathcal{S}(\mathfrak{g})\longrightarrow k_\infty^\times$ and any $\mathfrak{a}\in\mathcal{S}(\mathfrak{g})$ such that $\mathfrak{a}\neq1$, we have
\begin{equation}\label{marrakech}
[\kappa_\alpha(\mathfrak{a})]_\ell=
\begin{cases}
0&\mathrm{if}\ \ell\nmid\mathfrak{a}\\
\varphi_\ell(\kappa_\alpha(\mathfrak{a}/\ell))&\mathrm{if}\ \ell\vert\mathfrak{a}.
\end{cases}
\end{equation}
\end{lemma}
\begin{proof} The proof is exactly the same as in \cite[Proposition 2.4]{Ru91}.
\end{proof}
In the sequel, if $p$ is a prime number such that $p\nmid[K:k]$, $\chi$ is a nontrivial irreducible $\mathbb{Z}_p$-character of $\mathrm{G}$, and $\Pi$ is a $\mathbb{Z}_p[\mathrm{G}]$-module then we define $\Pi_\chi:=e_\chi\Pi$. If $\Pi$ is a $\mathbb{Z}[\mathrm{G}]$-module then we define $\Pi_\chi:=e_\chi(\mathbb{Z}_p\otimes_{\mathbb{Z}}\Pi)$. Before proving Theorem \ref{tresgras} we first need to prove the analogues of \cite[Theorem 4]{Ru94} and \cite[Theorem 3.1]{Ru91}. For this we set
\begin{equation*}
K':=K_M(a_1^{1/M},\ldots,a_s^{1/M}).
\end{equation*}
\begin{lemma}\label{fund1} Let $p$ be a prime number such that $p\nmid\rho[K:k]$ and let $M$ be a power of $p$. Then
the natural map
\begin{equation*}
f:K^\times/(K^\times)^M\longrightarrow K_M^\times/(K_M^\times)^M
\end{equation*}
is injective if $\mu_p\not\subset k$. But if $\mu_p\subset k$, its kernel is contained in $\mathbb{F}_{q}^{\times}/(\mathbb{F}_{q}^{\times})^M$. In particular, it is annihilated by $[K:k]-s(\mathrm{G})$, where $s(\mathrm{G}):=\sum_{\sigma\in\mathrm{G}}\sigma$. Furthermore, the kernel of the natural map
\begin{equation*}
g:K^\times/(K^\times)^M\longrightarrow K'^\times/(K'^\times)^M
\end{equation*}
is also annihilated by $[K:k]-s(\mathrm{G})$.
\end{lemma}
\begin{proof} Let $\mathbb{F}_{q^r}$ (resp. $\mathbb{F}_{q^{r'}}$) be the field of constants of $K$ (resp. $K_M$). Then $K_M=K\mathbb{F}_{q^{r'}}$ and $K_M$ is a cyclic extension of $K$, with Galois group canonically isomorphic to $\mathrm{Gal}(\mathbb{F}_{q^{r'}}/\mathbb{F}_{q^r})$. Since $H^1(\mathbb{F}_{q^{r'}}/\mathbb{F}_{q^r},\mathbb{F}_{q^{r'}}^\times)=0$, the kernel of $f$ is equal to
\begin{equation*}
(\mathbb{F}_{q^r}^\times\cap(\mathbb{F}_{q^{r'}}^\times)^M)/(\mathbb{F}_{q^r}^\times)^M.
\end{equation*}
If $\mu_p\not\subset k$ then $\mathbb{F}_{q^{r'}}=\mathbb{F}_{q^r}(\mu_M)$. In this case it is easy to check that $\mathbb{F}_{q^r}^\times\cap(\mathbb{F}_{q^{r'}}^\times)^M=(\mathbb{F}_{q^r}^\times)^M$. If $\mu_p\subset k$ then $\mathbb{F}_{q^{r'}}=\mathbb{F}_{q^r}((\mathbb{F}_q^\times)^{1/M})$. In particular $(\mathbb{F}_{q^{r'}}^\times)^M=\mathbb{F}_{q^r}^\times$. Moreover, since $p\nmid[K:k]$, the integer $r$ is prime to $p$, and hence $\mathbb{F}_{q^r}^\times/(\mathbb{F}_{q^r}^\times)^M$ is naturally isomorphic to $\mathbb{F}_q^\times/(\mathbb{F}_q^\times)^M$. This proves the assertions of the lemma about the kernel of $f$. Further, since $K'/K_M$ is a Kummer extension and $a_1,\ldots,a_s$ are elements of $k$, an elementary use of Kummer theory shows that the kernel of $g$ is also annihilated by $[K:k]-s(\mathrm{G})$.
\end{proof}
\begin{lemma}\label{capital} Suppose $p$ is a prime number such that $p\nmid\rho[K:k]$ and let $M$ be a power of $p$. Let $\chi$ be a nontrivial irreducible $\mathbb{Z}_p$-character of $\mathrm{G}$. If $\mu_p\subset K$ and $p\,\vert[H:k]$, then we assume $\chi\neq\omega$. Let $\mathsf{H}^\chi$ be the abelian extension of $K$ corresponding to the $\chi$-part $\mathrm{Cl}(\mathcal{O}_K)_\chi$. Then $\mathsf{H}^\chi\cap K'=K$.
\end{lemma}
\begin{proof} The group $\mathrm{G}$ acts trivially on $\mathrm{Gal}(\mathsf{H}^\chi\cap K_M/K)$ because $K_M$ is abelian over $k$. On the other hand, $\mathrm{Gal}(\mathsf{H}^\chi\cap K_M/K)$ is a $\mathrm{G}$-quotient of $\mathrm{Gal}(\mathsf{H}^\chi/K)\simeq\mathrm{Cl}(\mathcal{O}_K)_\chi$. This implies that $\mathsf{H}^\chi\cap K_M=K$ since $\chi\neq1$. In addition, if $p\nmid[H:k]$ then $K'=K_M$. In particular we have proved that $\mathsf{H}^\chi\cap K'=K$ in case $p\nmid[H:k]$. Let $E:=K_M(\mathsf{H}^\chi\cap K')$. By Kummer theory we deduce from the inclusion $E\subset K'$ that $E=K_M(V^{1/M})$, where $V$ is a subgroup of the multiplicative group $\langle a_1,\ldots,a_s\rangle\subset k^\times$. But recall that $K_M$ and $\mathsf{H}^\chi$ are abelian over $K$. In particular, if $x\in V^{1/M}$ and $\tau\in\mathrm{Gal}(E/K_M)$ we have $\tau(x)/x\in\mu_M\cap K$. Thus, if $\mu_p\not\subset K$ then $E=K_M$ and $\mathsf{H}^\chi\cap K'=K$ because of the isomorphism $\mathrm{Gal}(E/K_M)\simeq\mathrm{Gal}(\mathsf{H}^\chi\cap K'/K)$. If $\mu_p\subset K$, $p\,\vert[H:k]$ then $\mathrm{G}$ acts on $\mathrm{Gal}(E/K_M)$ via $\omega$. This implies that $\mathrm{Gal}(E/K_M)=1$ because this group is isomorphic to $\mathrm{Gal}(\mathsf{H}^\chi\cap K'/K)$ on which $\mathrm{G}$ acts via $\chi\neq\omega$. The proof of the lemma is now complete.
\end{proof}
\begin{theorem}\label{Chebotarev} Suppose $p$ is a prime number such that $p\nmid\rho[K:k]$ and let $M$ be a power of $p$. Let $\chi$ be a nontrivial irreducible $\mathbb{Z}_p$-character of $\mathrm{G}$. If $\mu_p\subset K$ and $p\,\vert[H:k]$, then we assume $\chi\neq\omega$. Let $\beta\in (K^\times/(K^\times)^M)_\chi$ and $A$ be a $\mathbb{Z}_p[\mathrm{G}]$-quotient of $\mathrm{Cl}(\mathcal{O}_K)_\chi$. Let $m$ be the order of $\beta$ in $K^\times/(K^\times)^M$, $W$ the $\mathrm{G}$-submodule of $K^\times/(K^\times)^M$ generated by $\beta$, $\mathsf{H}$ the abelian extension of $K$ corresponding to $A$, and $L:=\mathsf{H}\cap K'(W^{1/M})$. Then, there is a $\mathbb{Z}[\mathrm{G}]$ generator $\mathfrak{c}'$ of $\mathrm{Gal}(L/K)$ such that for any $\mathfrak{c}\in A$ whose restriction to $L$ is $\mathfrak{c}'$, there are infinitely many prime ideals $\lambda$ of $\mathcal{O}_K$ such that:
\begin{description}
\item(i) the projection of the class of $\lambda$ in $A$ is $\mathfrak{c}$,
\item(ii) if $\ell:=\lambda\cap\mathcal{O}_k$ then $\ell\in\mathcal{L}$,
\item(iii) $[\beta]_\ell=0$ and there is $u\in(\mathbb{Z}/M\mathbb{Z}[\mathrm{G}])_\chi^\times$ such that $\varphi_\ell(\beta)=(M/m)u\lambda$.
\end{description}
\end{theorem}
\begin{proof} We follow \cite[Theorem 3.1]{Ru91}. Since $W\subset (K^\times/(K^\times)^M)_\chi$ and $\chi\neq1$, we deduce from Lemma \ref{fund1} that the Galois group of the Kummer extension $K'(W^{1/M})/K'$ is isomorphic as a $\mathbb{Z}[\mathrm{Gal}(K_M/k)]$-module to $\mathrm{Hom}(W,\mu_M)$. But $W\simeq(\mathbb{Z}/m\mathbb{Z}[\mathrm{G}])_\chi$, which is a direct factor of $(\mathbb{Z}/m\mathbb{Z})[\mathrm{G}]$.
On the other hand, $\mathrm{Hom}((\mathbb{Z}/m\mathbb{Z})[\mathrm{G}],\mu_M)$ is $\mathbb{Z}[\mathrm{Gal}(K_M/k)]$-cyclic, generated for instance by the group homomorphism $\Psi:(\mathbb{Z}/m\mathbb{Z})[\mathrm{G}]\longrightarrow\mu_M$ defined by $\Psi(1)=\zeta$ and $\Psi(g)=1$, for $g\neq1$, where $\zeta\in\mu_M$ is a primitive $m$-th root of unity. Therefore, we can find $\tau\in\mathrm{Gal}(K'(W^{1/M})/K')$ which generates $\mathrm{Gal}(K'(W^{1/M})/K')$ over
$\mathbb{Z}[\mathrm{Gal}(K_M/k)]$. The restriction $\mathfrak{c}'$ of $\tau$ to $L$ is a $\mathbb{Z}[\mathrm{G}]$ generator of $\mathrm{Gal}(L/K)\simeq\mathrm{Gal}(LK'/K')$ by Lemma \ref{capital}. Let $\mathfrak{c}\in\mathrm{Gal}(\mathsf{H}/K)=A$ be any extension of $\mathfrak{c}'$ to $\mathsf{H}$. Then one can find $\sigma\in\mathrm{Gal}(\mathsf{H}K'(W^{1/M})/K)$ such that
\begin{equation*}
\sigma_{\vert\mathsf{H}}=\mathfrak{c}\quad\mathrm{and}\quad\sigma_{\vert K'(W^{1/M})}=\tau.
\end{equation*}
By \cite[Theorem 12 of Chapter XIII, page 289]{Weil} there exist infinitely many primes $\lambda$ of $\mathcal{O}_K$ whose Frobenius in $\mathrm{Gal}(\mathsf{H}K'(W^{1/M})/K)$ is the conjugacy class of $\sigma$, and such that $\ell:=\lambda\cap\mathcal{O}_k$ is unramified in $K'(W^{1/M})/k$. Now it is immediate that $(i)$ and $(ii)$ are satisfied. The rest of the proof is exactly the same as the proof of \cite[Theorem 3.1]{Ru91}.
\end{proof}
\begin{theorem}\label{casfacile} Suppose $p$ is a prime number such that $p\nmid\rho[K:k]$. Let $\chi$ be a nontrivial irreducible $\mathbb{Z}_p$-character of $\mathrm{G}$. If $\mu_p\subset K$ and $p\,\vert[H:k]$, then we assume $\chi\neq\omega$. Then we have
\begin{equation}\label{division}
\#\mathrm{Cl}(\mathcal{O}_K)_\chi\,\vert\,\#(\mathcal{O}_K^\times/\mathcal{E}_K)_\chi.
\end{equation}
\end{theorem}
\begin{proof} We follow \cite[Theorem 3.2]{Ru91}. Let $\hat{\chi}$ be a $p$-adic irreducible character of $\mathrm{G}$ such that $\hat{\chi}\vert\chi$, and let $\hat{\chi}(\mathrm{G}):=\{\hat{\chi}(\sigma),\ \sigma\in\mathrm{G}\}$. Then, the ring $R:=\mathbb{Z}_p[\mathrm{G}]_\chi$ is isomorphic to $\mathbb{Z}_p[\hat{\chi}(\mathrm{G})]$, which is the ring of integers of the unramified extension $\mathbb{Q}_p[\hat{\chi}(\mathrm{G})]$ of $\mathbb{Q}_p$. Thus, $R$ is a discrete valuation ring. Moreover, the $R$-torsion of any $R$-module is equal to its $\mathbb{Z}_p$-torsion. It is well known that $\mathcal{O}_K^\times/\mu_K$ is a free $\mathbb{Z}$-module of rank $[K:k]-1$. More precisely there exists an isomorphism of $\mathbb{Z}[\mathrm{G}]$-modules $\log:\mathcal{O}_K^\times/\mu_K\longrightarrow\Delta$, where $\Delta$ is a submodule of the augmentation ideal of $\mathbb{Z}[\mathrm{G}]$. Moreover, $\Delta$ is a free $\mathbb{Z}$-module of rank $[K:k]-1$. Hence, since $\chi\neq1$, the quotient $(\mathcal{O}_K^\times)_\chi/(\mu_K)_\chi$ is a free $R$-module of rank $1$. Let us define
\begin{equation*}
M:=p\#(\mathcal{O}_K^\times/\mathcal{E}_K)_\chi\#\mathrm{Cl}(\mathcal{O}_K)_\chi.
\end{equation*}
Let $\mu$, $U$ and $V$ be the images of $\mu_K$, $\mathcal{O}_K^\times$ and $\mathcal{E}_K$ in $K^\times/(K^\times)^M$ respectively. We deduce from above that, $U_\chi/\mu_\chi$ is a free $R/MR$-module of rank $1$. But since
\begin{equation}\label{tachfine}
U_\chi/V_\chi\simeq(\mathcal{O}_K^\times)_\chi/(\mathcal{E}_K)_\chi\simeq R/tR,
\end{equation}
for some divisor $t$ of $M$, there exists $\xi\in U_\chi$ giving an $R$-basis of $U_\chi/\mu_\chi$ and such that $\xi^t\in(\mathcal{E}_K)_\chi$. In particular $\xi$ has order $M$ in $K^\times/(K^\times)^M$. By Corollary \ref{sardine} there exist an ideal $\mathfrak{g}$ of $\mathcal{O}_k$ and an Euler system $\alpha:\mathcal{S}(\mathfrak{g})\longrightarrow k_\infty^\times$, such that the map $\kappa:=\kappa_\alpha$ defined by (\ref{kappa}) satisfies $\kappa(1)=\xi^t$. We define inductively classes $\mathfrak{c}_0,\ldots,\mathfrak{c}_i\in\mathrm{Cl}(\mathcal{O}_K)_\chi$, prime ideals $\lambda_1,\ldots,\lambda_i$ of $\mathcal{O}_K$, coprime with $\mathfrak{g}$, and ideals $\mathfrak{a}_0,\ldots,\mathfrak{a}_i$ of $\mathcal{O}_k$ such that $\mathfrak{c}_0=1$ and $\mathfrak{a}_0=1$. Let $i\geq0$, and suppose that $\mathfrak{c}_0,\ldots,\mathfrak{c}_i$ and $\lambda_1,\ldots,\lambda_i$ (if $i\geq1$) are defined. Then we set $\mathfrak{a}_i=\prod_{1\leq n\leq i}\ell_n$ (if $i\geq1$), where $\ell_n:=\lambda_n\cap\mathcal{O}_k$. Moreover,
\begin{itemize}
\item If $\mathrm{Cl}(\mathcal{O}_K)_\chi\neq<\mathfrak{c}_0,\ldots,\mathfrak{c}_i>_{\mathrm{G}}$, where $<\mathfrak{c}_0,\ldots,\mathfrak{c}_i>_{\mathrm{G}}$ is the $\mathrm{G}$-module generated by $\mathfrak{c}_0,\ldots,\mathfrak{c}_i$, then we define $\mathfrak{c}_{i+1}$ to be any element of $\mathrm{Cl}(\mathcal{O}_K)_\chi$ whose image in $\mathrm{Cl}(\mathcal{O}_K)_\chi/<\mathfrak{c}_0,\ldots,\mathfrak{c}_i>_{\mathrm{G}}$ is nontrivial and is equal to a class $\mathfrak{c}$ which restricts to the generator $\mathfrak{c}'$ of $\mathrm{Gal}(L/K)$ in Theorem \ref{Chebotarev} applied to $\beta:=\kappa(\mathfrak{a}_i)_\chi$, the image of $\kappa(\mathfrak{a}_i)$ in $(K^\times/(K^\times)^M)_\chi$, and $A:=\mathrm{Cl}(\mathcal{O}_K)_\chi/<\mathfrak{c}_0,\ldots,\mathfrak{c}_i>_{\mathrm{G}}$. Also we let $\lambda_{i+1}$ be any prime ideal of $\mathcal{O}_K$ prime to $\mathfrak{g}$ and satisfying Theorem \ref{Chebotarev} with the same conditions.
\item If $\mathrm{Cl}(\mathcal{O}_K)_\chi=<\mathfrak{c}_0,\ldots,\mathfrak{c}_i>_{\mathrm{G}}$ then we stop.
\end{itemize}
This construction of our classes $\mathfrak{c}_j$ ensures that the ideals $\ell_j:=\lambda_j\cap\mathcal{O}_k$ lie in $\mathcal{S}(\mathfrak{g})$. Let $m_i$ be the order of $\kappa(\mathfrak{a}_i)_\chi$ in $K^\times/(K^\times)^M$, and let $t_i:=M/m_i$. By the assertion $(iii)$ of Theorem \ref{Chebotarev} we have $\varphi_{\ell_{i+1}}(\kappa(\mathfrak{a}_i)_\chi)=ut_i\lambda_{i+1}$, for some $u\in(\mathbb{Z}/M\mathbb{Z})[\mathrm{G}]_\chi^\times$. But $\mathfrak{a}_{i+1}=\mathfrak{a}_i\ell_{i+1}$. Thus
\begin{equation}\label{thonrouge}
[\kappa(\mathfrak{a}_{i+1})_\chi]_{\ell_{i+1}}=ut_i\lambda_{i+1},
\end{equation}
thanks to (\ref{marrakech}). Now, by the definition of $t_{i+1}$, the fractional ideal of $\mathcal{O}_K$ generated by $\kappa(\mathfrak{a}_{i+1})_\chi$ is a $t_{i+1}$-th power. Thus, we must have $t_{i+1}\vert t_i$. Actually, we can say more. Indeed, there exist $\zeta\in\mu_K$ and $z\in K^\times$ such that $\kappa(\mathfrak{a}_{i+1})_\chi=\zeta z^{t_{i+1}}$. Therefore, (\ref{marrakech}) and (\ref{thonrouge}) imply
\begin{equation}\label{toumarte}
z\mathcal{O}_K=(\lambda_{i+1})^{ut_i/t_{i+1}}(\prod_{j=1}^i\lambda_j^{u_j})\mathfrak{b}^{M/t_{i+1}},
\end{equation}
where $u_j\in\mathbb{Z}_p[\mathrm{G}]$ for all $j\in\{1,\ldots,i\}$ and $\mathfrak{b}$ is a fractional ideal of $\mathcal{O}_K$. But we see from (\ref{tachfine}) that $t_0\vert\#(\mathcal{O}_K^\times/\mathcal{E}_K)_\chi$, and since $t_{i+1}\vert t_0$ the integer $M/t_{i+1}$ annihilates $\mathrm{Cl}(\mathcal{O}_K)_\chi$. The identity (\ref{toumarte}) then implies
\begin{equation}\label{idriss}
(t_i/t_{i+1})\mathfrak{c}_{i+1}\in<\mathfrak{c}_0,\ldots,\mathfrak{c}_i>_{\mathrm{G}}.
\end{equation}
Let $\dim(\chi):=[\mathbb{Q}_p[\hat{\chi}(\mathrm{G})]:\mathbb{Q}_p]$. Then (\ref{idriss}) implies
\begin{equation*}
\#\mathrm{Cl}(\mathcal{O}_K)_\chi\,\vert\,\prod_{j=1}^i(t_{j-1}/t_j)^{\dim(\chi)}\,\vert\, t_0^{\dim(\chi)}=\#(\mathcal{O}_K^\times/\mathcal{E}_K)_\chi.
\end{equation*}
\end{proof}
\noindent{\bf Proof of Theorem \ref{tresgras}}. Let the hypotheses and notation be as in Theorem \ref{tresgras}. Let $\Psi$ be the irreducible rational character of $\mathrm{G}$ such that $\chi\vert\Psi$. The formula (\ref{indice2}) may be written as follows
\begin{equation*}
\prod_{\chi'\vert\Psi}\#\mathrm{Cl}(\mathcal{O}_K)_{\chi'}=\prod_{\chi'\vert\Psi}\#(\mathcal{O}_K^\times/\mathcal{E}_K)_{\chi'},
\end{equation*}
where $\chi'$ runs over the irreducible $\mathbb{Z}_p$-characters of $\mathrm{G}$ such that $\chi'\vert\Psi$. Moreover, the formula (\ref{division}) is satisfied for such characters $\chi'$ since $\chi\not\in\Xi_p$. This implies (\ref{gras}). \par
\vskip 5pt
\noindent{\bf Acknowledgement} The authors express their sincere thanks to the referee for suggestions that improved the content of the paper.
\def\cprime{$'$}
X-waves are a well-known example of a localized wave packet, and have
been central to many efforts to generate optical pulses that are able
to resist diffraction~\cite{hernandez_book07a}. They were originally
introduced as a superposition of Bessel beams, which are non-diffracting solutions of
the homogeneous wave equation~\cite{lu92a}
\begin{equation}
[\boldsymbol{\nabla}^2-(1/c^2)\partial^2_t]\Psi(\mathbf{r},t)=0,
\end{equation}
and can thus be encountered in a wide range of fields such as
acoustics, electromagnetism, quantum physics and potentially
seismology or gravitation.
Solitons and solitary waves are another famous type of non-spreading
wave packet which rely on a balance between dispersion and nonlinear
self-focussing to remain localized during
propagation~\cite{eiermann04a,amo09a,sich12a}. However, X-waves do
not require any nonlinearity in the wave equation, a feature which they share with Bessel beams and other remarkable solutions, such as Airy beams---non-spreading solutions
of the Schr\"{o}dinger equation discovered by Berry and Balazs
\cite{berry79a} which have peculiar self-accelerating and self-healing
properties.
X-waves, Bessel and Airy beams are non-physical
solutions since, like plane waves, they cannot be normalized and hence
would require an infinite energy to maintain their spectacular
properties through propagation.
These solutions were thus initially considered a mathematical curiosity, but it was later realized and experimentally demonstrated that square-integrable approximations retain their surprising features for a significant amount of time~\cite{durnin87a,lu92b}. For Airy beams, such a demonstration came several decades after the original prediction~\cite{siviloglou07a,voloch13a}. A later experiment confirmed
Airy beams' self-healing property, showing their ability to self-reconstruct
even after strong perturbations, and also demonstrated their
robustness in adverse environments, such as in scattering and
turbulent media~\cite{broky08a}. Similarly, approximations of X-wave
packets must also reproduce their characteristic features, including
X-shape preserving propagation, but only for a finite time.
While Airy beams are typically produced by pulse shaping and
can be made arbitrarily close to their ideal (unphysical) blueprint,
it has been found that X-waves can conveniently be spontaneously
generated in dispersive and interacting media that feature a
hyperbolic dispersion, \textit{i.e.}, where the effective mass takes
opposite signs in transverse dimensions. In this instance they are
called ``nonlinear X-waves'' or X-wave
solitons~\cite{conti03a,trapani03a}. We will adopt this X-wave
terminology to refer to any similar phenomenology that results from
the combined effects of hyperbolic dispersion and interactions. We
note that this is at best a finite-time approximation of an idealised
scenario which, as we shall discuss, opens new doors for alternative
interpretations in a realistic implementation.
X-waves were first discussed in a condensed matter context with a
theoretical proposal for their observation in an atomic Bose-Einstein
condensate (BEC)~\cite{conti04a}, where the hyperbolic dispersion can
be engineered by placing the BEC in a 1D optical lattice, ``bending''
the dispersion near the edge of the Brillouin zone.
Similar band engineering was proposed by Sedov
\textit{et~al.} with Bragg
exciton-polaritons~\cite{sedov15a}, using a periodical arrangement of
quantum wells to realize hyperbolic metamaterials that support
X-wave solutions.
However, a suitably hyperbolic
dispersion naturally occurs with exciton-polaritons, which are
bosonic quasiparticles that arise from the strong coupling between
photons and excitons in semiconductors
microcavities~\cite{kavokin_book17a}. As a result of their hybrid
nature, they possess a highly non-parabolic and tunable dispersion
relation that provides inflection points, and thus regions of negative
effective mass, without the need for externally imposed potentials or Bragg polaritons. In 2D, one can find hyperbolic
regions that sustain X-waves solutions, as was first pointed out by
Voronych~\textit{et~al.}~\cite{voronych16a}, who also studied these
solutions extensively. Another feature of polaritons is that their
interaction strength is tunable to some extent, either by changing the
excitonic (interacting) fraction or by altering the density of
particles, which allows the study of X-waves in both the weakly and
strongly interacting regimes. Recently, the experimental observation
of polaritonic X-waves was reported~\cite{gianfrate18}. In this
experiment polariton interactions were used to reshape an initial
Gaussian packet (easily created with a laser pulse) into an X-wave by
imparting it with a finite momentum above the inflection point of the
dispersion. While this yielded a beautiful proof of principle of the
underlying idea, important questions remain open. In particular,
although one cannot hope to create an ideal X-wave, how close can one
can get through this interaction-based mechanism? In a realistic
polariton system, how robust is the nonlinear instability that
converts a Gaussian wave packet into an X-wave~\cite{voronych16a}?
And for how long can an X-wave generated in this manner display its
expected characteristics?
To answer these questions, we examine the nonlinear X-wave formation
mechanism under the prism of the wavelet transform (WT), a spectral
decomposition that provides unique insights into the nontrivial
dynamics of wave packet propagation. Previously this technique has
been used to explain and fully characterize so-called self-interfering packets
(SIPs), another phenomenology observed with polaritons due to an
inflection point in the dispersion relation. This results in
negative-mass effects (counter propagation) coexisting with normal
(forward) propagation, producing a constant flow of propagating
fringes~\cite{colas16a}. While purely a linear wave phenomenon, the
SIP can also be triggered by a nonlinearity leading to the spread of the wave packet across the inflection point in momentum space. The
formation of a SIP, powered by nonlinear interactions, was recently
observed in an atomic spin-orbit coupled
BEC~\cite{khamehchi17a,colas18a}.
In this paper, we show how the wavelet transform provides a new
understanding of the nature and formation of a nonlinear X-wave. The
X-wave is indeed found to be a transient effect that occurs during the
reshaping of a Gaussian wave packet under the
combined effects of a non-parabolic dispersion and repulsive
interactions. The spatial interference of two resulting sub-packets
travelling at different speeds accounts for the X-wave pattern. The
polaritonic X-wave can thus be understood as another type of SIP
rather than a shape-preserving non-interacting ``soliton''.
This confirms that the self-interference mechanism is the key
understanding the general problem of wave packet propagation under
nontrivial dispersion relations that feature inflection points and
thus both negative and infinite effective masses, either with or
without nonlinearity.
This paper is organized as follows. In Sec.~\ref{sec:hyperbolic} we
introduce our method of analysis, and provide an idealized example of
X-wave formation in a complex wave equation with a purely hyperbolic
dispersion relation and a weak nonlinearity. In
Sec.~\ref{sec:polaritons} we demonstrate how the same phenomenon
arises in the formation of X-waves in an exciton-polariton system.
Section~\ref{sec:SOCBEC} proposes how X-waves can be formed in atomic
Bose-Einstein condensates with artificial spin-orbit coupling, instead
of an additional optical lattice potential~\cite{conti03a}. We
conclude in Sec.~\ref{sec:conclusions}.
\section{Hyperbolic dispersion}
\label{sec:hyperbolic}
We start with the simplest system allowing the generation of
nonlinear X-waves, a Gross-Pitaevskii equation for the field $\psi(x,y)$
\begin{equation}
i \hbar \partial_t \psi(x,y)=H_{\mathrm{hyp}}\psi(x,y).
\label{eq:GPE}
\end{equation}
The nonlinear operator
\begin{equation}
\label{eq:1}
H_\mathrm{hyp} = \frac{\hbar^2 k_x^2}{2 m_x} + \frac{\hbar^2 k_y^2}{2 m_y} + g|\psi(x,y)|^2\,,
\end{equation}
has masses of opposite signs in the $x$ and $y$ dimensions, $m_x=-m_y$, and thus the system combines a hyperbolic dispersion with repulsive
interactions. A 3D representation of the hyperbolic dispersion
is shown in Fig.~\ref{fig:1}(a). The dispersion is parabolic in both
directions but with an inverted curvature in the $x$ direction, as
seen in Fig.~\ref{fig:1}(b). The last term in Eq.~(\ref{eq:1}) accounts
for the nonlinear interaction, characterised by the constant $g$. An
example of a nonlinear X-wave formation out of an initial Gaussian
wave packet imparted with a momentum $k_x^0$ is shown in Fig.~\ref{fig:1}(c--f)~\cite{footnote3}. One
can see the typical X-shape appearing in the density as
it propagates. Phase singularities with opposite winding also appear
when the X-wave fully forms, here marked as blue and red dots. However,
the X-wave does not maintain its shape and breaks into larger packets at
long times, Fig.~\ref{fig:1}(f), much like square-integrable Airy beam approximations
lose their self-accelerating property during
propagation~\cite{siviloglou07a}.
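The reshaping dynamics of Eqs.~(\ref{eq:GPE}) and (\ref{eq:1}) can be reproduced with a standard split-step Fourier integrator. The following sketch (in Python; the grid size, time step and evolution time are illustrative choices, not those used for Fig.~\ref{fig:1}) evolves the initial Gaussian packet under the hyperbolic dispersion:

```python
import numpy as np

# Split-step Fourier sketch of Eq. (2) with the hyperbolic operator of Eq. (3).
# Grid size, time step and parameters are illustrative, not those of Fig. 1.
hbar, mx, my, g = 1.0, -1.0, 1.0, 0.003
N, L = 128, 200.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
k = 2*np.pi*np.fft.fftfreq(N, d=dx)
KX, KY = np.meshgrid(k, k, indexing="ij")
X, Y = np.meshgrid(x, x, indexing="ij")
Ek = hbar**2*KX**2/(2*mx) + hbar**2*KY**2/(2*my)      # hyperbolic kinetic term

sigma, kx0 = 20.0, 1.0
psi = np.exp(-(X**2 + Y**2)/(2*sigma**2))*np.exp(1j*kx0*X)
psi /= np.sqrt(np.sum(np.abs(psi)**2)*dx*dx)          # unit norm

dt = 0.5
for _ in range(100):   # half kinetic step / nonlinear step / half kinetic step
    psi = np.fft.ifft2(np.exp(-0.5j*Ek*dt/hbar)*np.fft.fft2(psi))
    psi *= np.exp(-1j*g*np.abs(psi)**2*dt/hbar)
    psi = np.fft.ifft2(np.exp(-0.5j*Ek*dt/hbar)*np.fft.fft2(psi))

norm = np.sum(np.abs(psi)**2)*dx*dx                    # split-step is unitary
```

Since each sub-step is unitary, the norm of the field is conserved to machine precision, which provides a convenient sanity check of the integrator.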
The X-wave formation mechanism can be better understood when
considering the field $\psi(\boldsymbol{r},t)$ in a different representation space. Various spectral representations of the wave function are accessible through the Fourier Transform, such as the space-energy $\psi(\boldsymbol{r},E)$ or the momentum-energy $\psi(\boldsymbol{k},E)$ (also called \textit{far-field}) representations. They can provide useful information on, \textit{e.g.}, relaxation processes yielding the Bose-Einstein condensation~\cite{estrecho18a}, or the characterization of topological effects with the presence of Dirac cones or flat-bands~\cite{jacqmin14a,baboux16a}.
\\
However, such representations are poorly adapted for the detection of a transient interference effect, as either the spatial or the temporal dynamics vanishes when integrating towards the momentum or energy domains. An alternative method of analysis is to make use of the Wavelet Transform (WT) --- a convenient manner in which to simultaneously represent the field in both position and momentum space at a given instant in time.
The WT was initially introduced in
signal processing to obtain a representation of the signal in both
time and frequency. It has proven to be particularly useful to analyse the interference
between different wave packets~\cite{baker12a} or more recently
the self-interference from a single wave
packet~\cite{colas16a,colas18a}.
Unlike the usual Fourier Transform, which decomposes the signal into a
sum of delocalized (and hence unphysical) sine and cosine functions, the
WT uses localized wavelets $\mathcal{G}$ as basis functions.
For a 1D wave packet $\psi(x)$, the general WT
reads~\cite{debnath_book15a}:
\begin{equation}
\mathbb{W}(x,k)=(1/\sqrt{|k|})\int_{-\infty}^{+\infty}\psi(x')\mathcal{G}^\ast [(x'-x)/k]\mathrm{d}x'\,.
\label{eq:5}
\end{equation}
A suitable representation when analysing Schr\"odinger wave
packets is the Gabor wavelet:
\begin{equation}
\mathcal{G}(x)=\pi^{-1/4}\exp(i \omega_\mathcal{G} x)\exp(-x^2/2)\,.
\label{eq:6}
\end{equation}
This wavelet family consists of a Gaussian envelope, which is an
elementary constituent of the Schr\"{o}dinger dynamics, with an
internal phase that oscillates at a defined wavelet-frequency $\omega_\mathcal{G}$. The physical momentum~$k$ can be retrieved from the WT parameters (wavelet-frequency, grid specifics, etc.) using a numerical procedure that is detailed in Ref.~\cite{colas18a}. The quantity $|\mathbb{W}(x,k)|^2$ thus measures the cross-correlation between
the wavelet $\mathcal{G}(x)$ and the wave function
$\psi(x)$. This allows us to show in a transparent
way the position~$x$ in real-space of the different $k$-components of
the wave packet.
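As a minimal illustration of Eqs.~(\ref{eq:5}) and (\ref{eq:6}), the following toy implementation (our own sketch in Python; the wavelet frequency, the scale grid and the simple momentum read-out $k\approx\omega_\mathcal{G}/a$, with $a$ the scale, are illustrative simplifications of the procedure of Ref.~\cite{colas18a}) recovers the carrier momentum of a Gaussian packet from the peak of $|\mathbb{W}|^2$:

```python
import numpy as np

# Toy 1D Gabor wavelet transform, Eqs. (5)-(6).  The scale a plays the role of
# k in Eq. (5); for a Gabor wavelet the physical momentum is roughly w_G/a.
# All parameters here are illustrative.
w_G = 5.0
def gabor(u):
    return np.pi**(-0.25)*np.exp(1j*w_G*u)*np.exp(-u**2/2)

x = np.linspace(-80.0, 80.0, 1024)
dx = x[1] - x[0]
k0 = 1.0
psi = np.exp(-x**2/(2*10.0**2))*np.exp(1j*k0*x)   # packet with carrier momentum k0

scales = np.linspace(2.0, 12.0, 60)
W = np.array([np.sum(psi[None, :]*np.conj(gabor((x[None, :]-x[:, None])/a)),
                     axis=1)*dx/np.sqrt(a)
              for a in scales])                   # W[scale index, position index]

i_a, i_x = np.unravel_index(np.argmax(np.abs(W)**2), W.shape)
k_est = w_G/scales[i_a]                           # momentum where |W|^2 peaks
```

For a single packet the wavelet energy density peaks at the packet centre and at the scale matching its carrier momentum; applied to a split packet, the same map resolves each sub-packet separately.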
\begin{figure}[t!]
\includegraphics[width=\linewidth]{figure1.pdf}
\caption{X-wave formation and propagation for a hyperbolic dispersion. (a) 2D
hyperbolic-dispersion. (b) Effective dispersion along $k_x$ and
$k_y$, with $v(k_x)$. (c--f) Evolution of the
density $|\psi(x,y)|^2$ at selected times, starting from a
Gaussian wave packet with $\sigma= 20$ and imparted with a momentum
$k_x^0=1$. Light-blue (red) dots indicate a $\pm 2\pi$ phase
winding. (g--j) Corresponding wavelet energy density $|\mathbb{W}|^2$
computed along the $x$ direction. (k--o) Idem but computed along the $y$
direction. The green dashed curve shows the displacement of the
$k_{x,y}$-components $d(k_{x,y})=v(k_{x,y})t$. (p) Evolution of the components of the energy: Total (brown), interaction (orange), kinetic
(purple) with its two components along $x$ (dashed-dark purple) and
$y$ (dashed-dark blue). Parameters: $\hbar=1$, $m_x=-m_y=-1$, $g=0.003$. Supplemental Movie 1 provides an animation of the nonlinear X-wave formation for this system~\cite{footnote2}.}
\label{fig:1}
\end{figure}
We apply the 1D-WT to the slice $\psi(x,y=0)$, \textit{i.e.}, along the
direction of propagation, and at different times of the X-wave
evolution, as shown in Fig.~\ref{fig:1}(g--j). The mechanism leading to
the X-wave formation appears clearly in this spectral
representation. At $t=0$, the wavelet energy density is tightly
distributed around the value $k_x^0$, Fig.~\ref{fig:1}(c,g), which is
the momentum initially imparted to the wave packet. Since the wave packet is
not spatially confined by any external potential, the initial interaction energy is converted into kinetic energy, leading to an
increase of the packet's spread in momentum space, as previously
observed in 1D systems~\cite{colas18a}. This first distortion can be
seen in the WT, Fig.~\ref{fig:1}(h), along with its consequence on the packet shape in real
space, which shrinks in the~$x$ direction,
Fig.~\ref{fig:1}(d). Indeed, in the direction of propagation, the
group velocity $v(k_x)=\partial_{k_x}E(k_x,0)$ decreases as the
momentum increases, see the dashed-green curve for $v(k_x)$ in
Fig~\ref{fig:1}(b). This means that a particle acquiring additional momentum
will travel more slowly. This feature is the key ingredient for the
X-wave formation. As the packet's spread in $k_x$ keeps increasing, the
latter effect leads to the break up of the initial packet into two
sub-packets, located at different $k_x$ and hence travelling at different
velocities. In Fig.~\ref{fig:1}(g--j), the green dashed line shows the
expected displacement of the $k_x$-components $d(k_x)=v(k_x)t$. In real space, the
sub-packet with the lowest momentum but with the highest group velocity
formed at the tail overtakes the other sub-packet
formed at a higher momentum but propagating at a lower velocity. The
spatial overlap of these two sub-packets creates the interference fringes that are at the heart of the peculiar X-shape of the wave packet.
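The fringe spacing follows from elementary interference: two overlapping components with momenta $k_a$ and $k_b$ modulate the density at wavenumber $k_b-k_a$, i.e., with spatial period $2\pi/(k_b-k_a)$. A minimal 1D check (all numbers illustrative):

```python
import numpy as np

# Two overlapping sub-packets with momenta ka, kb interfere with density
# fringes of wavenumber kb - ka (spacing 2*pi/(kb - ka)).
x = np.linspace(-100.0, 100.0, 4096)
ka, kb = 0.8, 1.4
env = np.exp(-x**2/(2*15.0**2))
psi = env*np.exp(1j*ka*x) + env*np.exp(1j*kb*x)
rho = np.abs(psi)**2                     # = 2 env^2 [1 + cos((kb - ka) x)]

# extract the fringe wavenumber from the spectrum of the density
kgrid = 2*np.pi*np.fft.fftfreq(x.size, d=x[1]-x[0])
spec = np.abs(np.fft.fft(rho))
mask = np.abs(kgrid) > 0.3               # ignore the slowly varying envelope
k_fringe = abs(kgrid[mask][np.argmax(spec[mask])])
```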
We also apply the 1D-WT to the transverse direction of the center of
the packet while following its drift in $x$, \textit{i.e.}, we consider the
$y$-WT of $\psi(x=v(k_x^0)t,y)$. The wavelet energy density
$|\mathbb{W}(y,k_y-k_x^0)|^2$~\cite{footnote4} is shown in
Fig.~\ref{fig:1}(k--o). The interactions also lead to an increase of
the packet's spread in $k_y$, followed by a breaking of the packet
into two distinct parts, but unlike for the $x$-direction, this
time the sub-packet with a higher momentum travels faster than the
one with a lower momentum, which prevents any interference from
occurring.
To complete the X-wave analysis, we take a closer look at the energy
exchanges occurring during the wave packet propagation. The Gaussian
wave packet set as an initial condition undergoes reshaping under the
joint action of the dispersion and repulsive interaction, under the
constraint of conservation of the total energy:
\begin{multline}
\label{eq:sabfeb23172742GMT2019}
E_\mathrm{Tot}=E_\mathrm{kin}+E_\mathrm{int} \\
=\int\big[E(\mathbf{k})-E(\mathbf{k}_0)\big]|\psi(\mathbf{k})|^2\,d\mathbf{k}+\int\frac{g}{2}|\psi(\mathbf{r})|^4\,d\mathbf{r}\,.
\end{multline}
The kinetic energy $E_\mathrm{kin}$ is here computed in momentum space
in order to remove the important energy shift $E(\mathbf{k}_0)$
induced by the imparted momentum set in the initial condition. The
interaction energy $E_\mathrm{int}$ is more conveniently computed in
real space. The evolution of these different energy components is
shown in Fig.~\ref{fig:1}(p), with the total, interaction and kinetic
energies plotted in brown, orange and purple, respectively. It is also
instructive to consider the components of the kinetic energy
$E_{\mathrm{kin},x}$ and $E_{\mathrm{kin},y}$ along the $x$ and $y$
directions. They are plotted as dark purple and blue dashed lines,
respectively. Note that at $t=0$, $E_\mathrm{kin}=0$ as
$E_{\mathrm{kin},x}=-E_{\mathrm{kin},y}$ since the initial packet is a
symmetrical Gaussian that spreads equally in both $x$ and $y$
directions of the hyperbolic dispersion with $E(k_x)=-E(k_y)$, which
cancels the overall kinetic energy. For the same reason, an increasing
spread in momentum along the $k_y$ direction leads to an increase of
$E_{\mathrm{kin},y}$ whereas an increasing spread in momentum along
the $k_x$ direction actually leads to a decrease of
$E_{\mathrm{kin},x}$. As the total energy has to be conserved, this
causes a momentary rise of the interaction energy as observed in
Fig.~\ref{fig:1}(p). The energy peak corresponds to the time of
maximum interference between the sub-packets, and also corresponds to
the time of the emergence of the phase singularities. At long times, when the new
packets spread out, all the interaction energy is converted into
kinetic energy, leaving the system behaving essentially as linear
waves. The above discussion illustrates neatly how the
WT analysis captures the key physics that rules the wave
packet reshaping, namely, the interplay between
the hyperbolic dispersion and its resulting negative energy, and the
interactions which peak to break the packet and create phase
singularities.
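The initial cancellation $E_{\mathrm{kin},x}=-E_{\mathrm{kin},y}$ can be checked directly from Eq.~(\ref{eq:sabfeb23172742GMT2019}). The following sketch (illustrative parameters; the carrier momentum is chosen commensurate with the numerical grid) evaluates the energy components of the initial symmetric Gaussian:

```python
import numpy as np

# Energy bookkeeping of Eq. (7) for the initial symmetric Gaussian: with
# E(k) = hbar^2 kx^2/2mx + hbar^2 ky^2/2my and mx = -my, the two kinetic
# contributions cancel at t = 0, leaving E_tot = E_int.
hbar, mx, my, g = 1.0, -1.0, 1.0, 0.003
N, L = 256, 400.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
k = 2*np.pi*np.fft.fftfreq(N, d=dx)
KX, KY = np.meshgrid(k, k, indexing="ij")
X, Y = np.meshgrid(x, x, indexing="ij")

sigma = 20.0
kx0 = 64*2*np.pi/L                       # carrier (~1) commensurate with the grid
psi = np.exp(-(X**2 + Y**2)/(2*sigma**2))*np.exp(1j*kx0*X)
psi /= np.sqrt(np.sum(np.abs(psi)**2)*dx*dx)

nk = np.abs(np.fft.fft2(psi))**2
nk /= nk.sum()                           # normalized momentum distribution
Ekin_x = np.sum(hbar**2*(KX**2 - kx0**2)/(2*mx)*nk)   # x part of E(k)-E(k0)
Ekin_y = np.sum(hbar**2*KY**2/(2*my)*nk)              # y part of E(k)-E(k0)
Eint = 0.5*g*np.sum(np.abs(psi)**4)*dx*dx
```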
\begin{figure}[t!]
\includegraphics[width=\linewidth]{figure2.pdf}
\caption{Exciton-polariton X-wave dynamics. (a) 2D lower polariton
dispersion. The dashed-blue line indicates the position of the
inflection points. (b) Effective dispersion in two transverse
directions from the point
$(k_x=\unit{2.5}{\mu\reciprocal\meter},k_y=\unit{0}{\mu\reciprocal\meter})$,
with $v(k_x)$. (c--f) Evolution of the polariton density $|\psi|^2$
at selected times. Light-blue (red) dots indicate a $\pm 2\pi$ phase
winding. (g--j) Corresponding wavelet energy density $|\mathbb{W}|^2$
computed along the $x$ direction. The packet is imparted with a momentum
$k_x^0=\unit{2.5}{\mu\reciprocal\meter}$, above the inflection
point $k_1$. The green dashed curve shows the displacement of
the $k_x$-components $d(k_x)=v(k_x)t$. The vertical blue line correspond to
displacement $d(k_1)$ delimiting the interference
area. Supplemental Movie 2 provides an animation of the nonlinear X-wave formation for this system~\cite{footnote2}.}
\label{fig:2}
\end{figure}
\section{Exciton-polaritons}
\label{sec:polaritons}
We now study a realistic and physical exciton-polariton system, whose
dynamics can be well-captured by the following
two-component Gross-Pitaevkii operator~\cite{carusotto04a,gianfrate18}:
\begin{equation}
\label{eq:2}
H_{\mathrm{pol}}
=
\begin{pmatrix}
\frac{\hbar^2 \boldsymbol{k}^2}{2 m_\mathrm{C}} + \Delta -i \frac{\gamma_\mathrm{C}}{2} & \frac{\Omega_\mathrm{R}}{2}\\
\frac{\Omega_\mathrm{R}}{2} & \frac{\hbar^2 \boldsymbol{k}^2}{2 m_\mathrm{X}} -i \frac{\gamma_\mathrm{X}}{2} +g_\mathrm{X} |\psi_\mathrm{X}|^2
\end{pmatrix}\,,
\end{equation}
which acts on the spinor field
$\boldsymbol{\psi}=(\psi_\mathrm{C},\psi_\mathrm{X})^T$. The parameter $m_{\mathrm{C},(\mathrm{X})}$ is the photon (exciton) mass, $\Delta$ the detuning between the photonic and excitonic modes and $\Omega_\mathrm{R}$ their coupling strength. Both fields have an independent decay rate $\gamma_\mathrm{C,(X)}$. The nonlinearity is here introduced through the exciton-exciton interaction with a strength $g_\mathrm{X}$. Diagonalising the non-interacting and dissipationless part of the operator leads to dressed upper and lower polariton modes:
\begin{multline}
E_\mathrm{U,L} = \frac{\hbar^2 \boldsymbol{k}^2}{2 m_{+}} +\frac{\Delta}{2}
\pm \sqrt{\left(\frac{\hbar^2 \boldsymbol{k}^2}{2 m_{-}}-\frac{\Delta}{2}\right)^2 +\left(\frac{\Omega_\mathrm{R}}{2}\right)^2 }\,,
\label{eq:3}
\end{multline}
where the effective masses are defined by
$m_\pm^{-1} = (m_\mathrm{C} \pm
m_\mathrm{X})/(2m_\mathrm{C}m_\mathrm{X})$. In the following, we use a
similar set of parameters to Gianfrate \textit{et
al.}~\cite{gianfrate18}. The lower branch $E_\mathrm{L}$ is plotted
in Fig.~\ref{fig:2}(a) and shows a circularly symmetric profile,
approximately parabolic at small $|\boldsymbol{k}|$, and possessing an inflection
point at $k_1=\unit{1.61}{\mu\reciprocal\meter}$ (dashed-blue
line). An X-wave can be generated by exciting the branch above the
inflection point in any given direction, where the effective
dispersion thus appears locally hyperbolic, as shown in
Fig.~\ref{fig:2}(b). The dynamical evolution of a polariton wave
packet can be obtained by solving the following equation:
\begin{equation}
i \hbar \partial_t \boldsymbol{\psi}=H_{\mathrm{pol}}\boldsymbol{\psi} + \mathbf{P}\,,
\label{eq:4}
\end{equation}
where
$\mathbf{P}= (\mathrm{LG}_{00}\mathrm{e}^{-(t-t_0)^2/2 \sigma_t}
\mathrm{e}^{-i \omega_\mathrm{L} t}\mathrm{e}^{-i k_x^0
x},0)^\mathrm{T}$ stands for the pulse excitation. The photonic
field is excited with a Gaussian pulse arriving at time $t_0$, with a
temporal spread $\sigma_t$, an energy $\omega_\mathrm{L}$ and with an
imparted momentum $k_x^0$. The pulse parameters are chosen so that
only the lower branch is populated
($\omega_\mathrm{L}=\unit{-3}{\milli\electronvolt}$,
$\sigma_t=\unit{0.5}{\pico\second}$), preventing Rabi oscillations
between the two modes~\cite{dominici14a}. The initial momentum of the
pulse is set to be above the inflection point of the branch, at
$k_x^0=\unit{2.5}{\mu\reciprocal\meter}$. Selected time frames of the
density evolution are presented in
Fig.~\ref{fig:2}(c--f). Approximately $\unit{10}{\pico\second}$ after
the pulse arrival, the wave packet starts to distort, Fig.~\ref{fig:2}(d), then shrinks,
Fig.~\ref{fig:2}(e), before forming a typical X-shape profile, Fig.~\ref{fig:2}(f), along with phase singularities. The formation of a vortex-antivortex pair is here again a consequence of the hyperbolic topology of the dispersion relation, which leads to an inward polariton flow along the propagation direction and an outward flow in the transverse one, as noted in Ref.~\cite{gianfrate18}.
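For completeness, the lower branch of Eq.~(\ref{eq:3}) and the position of its inflection point can be evaluated numerically, locating $k_1$ as the first sign change of the curvature $\partial_k^2 E_\mathrm{L}$. The parameter values below are illustrative placeholders, not those of Ref.~\cite{gianfrate18}, so the resulting $k_1$ only indicates the order of magnitude:

```python
import numpy as np

# Lower-polariton branch E_L(k) of Eq. (8) and its inflection point k1,
# located as the first sign change of the curvature.  Illustrative parameters.
hbar2_2me = 3.81e-5              # hbar^2/(2 m_e) in meV.um^2
mC, mX = 3e-5, 0.3               # photon / exciton masses in units of m_e
Omega, Delta = 5.4, 0.0          # Rabi splitting and detuning (meV)
inv_mp = 0.5*(1/mC + 1/mX)       # 1/m_+ (units of 1/m_e)
inv_mm = 0.5*(1/mX - 1/mC)       # 1/m_-

def E_L(k):
    ek_p = hbar2_2me*inv_mp*k**2
    ek_m = hbar2_2me*inv_mm*k**2
    return ek_p + Delta/2 - np.sqrt((ek_m - Delta/2)**2 + (Omega/2)**2)

k = np.linspace(0.0, 4.0, 4001)  # um^-1
E = E_L(k)
curv = np.gradient(np.gradient(E, k), k)
k1 = k[np.argmax(curv < 0)]      # first k with negative curvature (inflection)
```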
The WT analysis reveals that the exact same formation mechanism
as for the ideal hyperbolic dispersion occurs in the polariton
system. Shortly after the pulse arrival, the wavelet energy density is
distributed around $k_x^0$, Fig.~\ref{fig:2}(g). The packet then
spreads in $k_x$ due to the interaction, Fig.~\ref{fig:2}(h), and narrows
in the $x$-dimension in real space, Fig.~\ref{fig:2}(d). Above the inflection point
$k_1$, $v(k_x)=\partial_{k_x}E(k_x,0)$ decreases as the momentum
increases, which corresponds to the region where the effective mass
parameter $m_2=\hbar^2[\partial_k^2 E_\mathrm{L}(k)]^{-1}$ becomes
negative~\cite{colas16a}, see Fig.~\ref{fig:2}(b). The origin of the
subsequent X-wave formation is again identified as the result of an
interference between two sub-packets with different momenta and
travelling at different velocities, Fig.~\ref{fig:2}(g--j). The
observed X-wave profile slightly differs from the one obtained with
the symmetrically hyperbolic dispersion in Fig~\ref{fig:1}. This is
due to specifics of the polariton system, such as the asymmetry of
the branch above the inflection point, which translates into a different
effective mass (in absolute value) in the transverse direction. Because the polariton system does not conserve the total energy, the analysis of the different energy components is not as informative as it was for the hyperbolic case.
Regardless of these relatively minor departures, it is clear that the
mechanism is otherwise the same as that discussed in the
previous section, which clarifies the nature and underlying formation mechanism
for the polaritonic nonlinear X-waves.
As a final remark in this section, we comment on the ``superluminal'' propagation of X-waves observed and discussed in Ref.~\cite{gianfrate18}. Here, ``superluminal''
refers to the observed propagation of a density peak at a speed
exceeding the speed of the packet's center-of-mass by $\sim6$\%. The
latter speed is set by the initial imparted momentum $k_x^0$, \textit{i.e.}, the slope of the polariton dispersion at this point,
$$v(k_x^0)=\partial_{k_x} E_\mathrm{L} (k_x,0)\Bigr\rvert_{k_x=k_x^0}.$$
From the results presented in Fig.~\ref{fig:2}, we also
observe that the speed of the main peak exceeds the speed of the
center-of-mass by $\sim 5$--$6$\%. Note that the WT is here not a practical way to measure the peak velocity as it results in the decomposition of the two sub-packets at the origin of the interference peaks.
The simplest way to observe the superluminal propagation thus remains to track the position of the main peak in the real space density $|\psi(\mathbf{r},t)|^2$ and to find the corresponding velocity.
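Within the interference picture developed above, a few-percent excess is indeed natural. In a two-component toy model (our own sketch; the quartic dispersion and all numbers are purely illustrative, not an analysis taken from Ref.~\cite{gianfrate18}), the fringes travel at the chord slope $[E(k_b)-E(k_a)]/(k_b-k_a)$, which exceeds the mean group velocity whenever $v(k)$ is concave, as it is around an inflection point:

```python
# Toy model: two interfering components at ka, kb produce fringes moving at the
# chord slope [E(kb)-E(ka)]/(kb-ka), while the centre of mass moves at the mean
# group velocity.  With a concave v(k), as near an inflection point, the
# fringes outrun the centre of mass by a few percent.  Numbers are illustrative.
def E(k):                    # toy dispersion with an inflection point (hbar = 1)
    return k**2/2 - 0.05*k**4

def v(k, h=1e-6):            # group velocity dE/dk (numerical derivative)
    return (E(k + h) - E(k - h))/(2*h)

ka, kb = 0.9, 1.5
v_fringe = (E(kb) - E(ka))/(kb - ka)   # velocity of the interference fringes
v_com = 0.5*(v(ka) + v(kb))            # centre-of-mass velocity
excess = v_fringe/v_com - 1.0          # fractional excess, here a few percent
```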
\begin{figure}[t!]
\includegraphics[width=\linewidth]{figure3.pdf}
\caption{X-wave dynamics in a SOCBEC. (a) 2D-SOCBEC dispersion. The
green line on the bottom projection encloses the inflection points
region. (b) Effective dispersion in two transverse directions from
the point $(k_x=1.35 k_\mathrm{R},k_y=0)$, with $v_x(k)$. (c--f)
Evolution of the atomic density $|\psi|^2$ at selected
times. Light-blue (red) dots indicate a $\pm 2\pi$ phase
winding. We note that the vortices present in frame (e) have moved outside of the boundary of frame (f). (g--j) Corresponding wavelet energy density $|\mathbb{W}|^2$
computed along the $x$ direction. The packet is
imparted with a momentum $k_x^0$, between the inflection points
$k_1$ and $k_2$. The green dashed curve shows the displacement of
the $k_x$-components $d(k_x)=v(k_x)t$. The vertical blue lines correspond
to displacements $d(k_1)$ and $d(k_2)$ delimiting the interference
area. (k) Evolution of the different energies: total (brown),
interaction (orange), kinetic (purple) with its two components
along $x$ (dashed-dark purple) and $y$ (dashed-dark blue).
Supplemental Movie 3 provides an animation of the nonlinear X-wave formation for this system~\cite{footnote2}.}
\label{fig:3}
\end{figure}
\section{Spin-orbit coupled Bose-Einstein condensates}
\label{sec:SOCBEC}
\begin{figure*}[t!]
\includegraphics[width=\linewidth]{figure4.pdf}
\caption{Wave packet propagation in 2D-SOCBEC. The top row shows the
atomic density $|\psi(x,y)|^2$ at a given time of its
evolution and the bottom row shows the corresponding wavelet
energy density $|\mathbb{W}(x,k)|^2$, with a WT performed at
$k_y=0$. The green line delimits the self-interference
area. (a--d) Evolution from an initial Gaussian wave packet of width
$\sigma_{\boldsymbol{r}}=\unit{3.5}{\micro\meter}$ in the linear
regime ($g_\mathrm{2D}=0$). (e--h) Self-interfering regime,
obtained from an initial Gaussian wave packet of width
$\sigma_{\boldsymbol{r}}=\unit{0.35}{\micro\meter}$ in the linear
regime ($g_\mathrm{2D}=0$). (i--l) A more strongly interacting regime than in Fig.~\ref{fig:3}, with $g_\mathrm{2D}N=14\times10^{-4}E_\mathrm{R}$, from an initial Gaussian wave packet of
width $\sigma_{\boldsymbol{r}}=\unit{3.5}{\micro\meter}$.}
\label{fig:4}
\end{figure*}
We finally consider a third condensed-matter system in which SIPs have
been recently encountered in a one-dimensional setting --- a 1D-spin-orbit coupled
Bose-Einstein condensate (SOCBEC)~\cite{khamehchi17a,colas18a}. When
extended to two dimensions, this system also possesses the key elements to
generate nonlinear X-waves.
A non-interacting 2D-SOCBEC can be described by the
following Hamiltonian~\cite{stanescu08a,linyj11a}:
\begin{equation}
\label{eq:7}
H_\mathrm{SOC}
=
\begin{pmatrix}
\frac{\hbar^2 (k_x^2 + k_y^2)}{2 m} +\gamma k_x +\frac{\delta}{2} & \frac{\Omega}{2}\\
\frac{\Omega}{2} & \frac{\hbar^2 (k_x^2 + k_y^2)}{2 m} -\gamma k_x -\frac{\delta}{2}
\end{pmatrix}\,,
\end{equation}
which acts on the spinor field
$\boldsymbol{\psi}=(\psi_\uparrow,\psi_\downarrow)^T$. Two hyperfine
pseudo-spin states, up $\ket{\uparrow}=\ket{F=1,m_F=0}$ and down
$\ket{\downarrow}=\ket{F=1,m_F=-1}$, are coupled with the Raman
coupling strength $\Omega$ and detuned by $\delta$. We also
introduce $\gamma=\hbar^2 k_\mathrm{R}/m$. The energy and momentum
units are set by $E_\mathrm{R}=(\hbar k_\mathrm{R})^2/2m$ and
$k_\mathrm{R}$, the recoil energy and the Raman wavevector,
respectively.
Once diagonalised, the individual dispersion relations of the two spin
states are mixed, leading to the upper ($+$) and lower ($-$) energy
bands:
\begin{equation}
\label{eq:8}
E_\pm (\boldsymbol{k}) = \frac{\hbar^2 (k_x^2 + k_y^2)}{2m} \pm \sqrt{\left(\gamma k_x +\frac{\delta}{2}\right)^2 + \left(\frac{\Omega}{2}\right)^2}\, .
\end{equation}
The lower band $E_{-}(\boldsymbol{k})$ is plotted in
Fig.~\ref{fig:3}(a). Unlike the polariton dispersion, see
Fig.~\ref{fig:2}(a), the 2D-SOCBEC dispersion is not circularly
symmetric and inflection points are only present in a finite region of
momentum space~\cite{footnote5}. This region can be determined
analytically. To do so, we make a change of coordinates
${k_x=k\cos(\theta),\,k_y=k\sin(\theta)}$ in Eq.~(\ref{eq:8}) to obtain
the dispersion relation $E(k,\theta)$ in polar coordinates. We can
then find the inflection points of the dispersion for each specific
angle $\theta$ by solving $\partial_k^2 E(k,\theta)=0$. This yields
the following expression:
\begin{equation}
\label{eq:50}
k_{1,2}(\theta)=\frac{\delta}{4 k_\mathrm{R}}\pm \frac{\sec\theta}{4 k_\mathrm{R}}\sqrt{(2 k_\mathrm{R}\Omega \cos\theta)^{\frac{4}{3}}-\Omega^2}\,.
\end{equation}
These two solutions $k_{1,2}(\theta)$ are plotted as a light-green
line in Fig.~\ref{fig:3}(a) and delimit the region of momentum
space in which the dispersion is locally hyperbolic. From
Eq.~(\ref{eq:8}), one can also define the critical angle $\theta_c$ from
which the dispersion is no longer hyperbolic:
\begin{equation}
\label{eq:5151}
\theta_c=\tan^{-1}\left[\frac{\sqrt{\Omega}}{ k_\mathrm{R}}\bigg/\sqrt{4-\frac{\Omega}{k_\mathrm{R}^2}}\right]\,.
\end{equation}
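The boundary of the hyperbolic region can also be cross-checked numerically. The short sketch below is our own illustration, not part of the original analysis: it works in assumed units $\hbar=m=k_\mathrm{R}=1$, uses illustrative parameter values ($\delta=0$, $\Omega=0.5$), and locates the inflection points along $\theta=0$ as sign changes of $\partial_k^2 E$ on a fine grid.

```python
import numpy as np

def lower_band(k, delta=0.0, Omega=0.5):
    """Lower band E_-(k_x, k_y = 0) along theta = 0, in units hbar = m = kR = 1
    (an assumed convention; parameter values are illustrative)."""
    return 0.5 * k**2 - np.sqrt((k + delta / 2) ** 2 + (Omega / 2) ** 2)

def inflection_points(delta=0.0, Omega=0.5, kmax=4.0, n=200001):
    """Locate zeros of d^2E/dk^2 via sign changes of a finite-difference estimate."""
    k = np.linspace(-kmax, kmax, n)
    d2E = np.gradient(np.gradient(lower_band(k, delta, Omega), k), k)
    flips = np.where(np.diff(np.sign(d2E)) != 0)[0]
    return k[flips]
```

For these illustrative values the scan finds two symmetric inflection points near $k\approx\pm0.31$, while for sufficiently large $\Omega$ no sign change occurs and the hyperbolic region is empty, consistent with the square root in Eq.~(\ref{eq:50}) becoming imaginary.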
This hyperbolic region in momentum space defines a
corresponding velocity range in real space. For each
point $(k_{x,i},k_{y,i})$ of the hyperbolic region limit---see the
green curve in Fig.~\ref{fig:3}(a)---we can derive a corresponding
velocity $(v_{x,i},v_{y,i})$ given by:
\begin{subequations}
\begin{align}
v_{x,i}=\partial_{k_x} E(k_{x},k_{y,i})\Bigr\rvert_{k_x=k_{x,i}}\,,\\
v_{y,i}=\partial_{k_y} E(k_{x,i},k_y)\Bigr\rvert_{k_y=k_{y,i}}\,.
\end{align}
\label{eq:9898}
\end{subequations}
Finally from $(v_{x,i},v_{y,i})$, we can then obtain a set of
coordinates defining a propagating distance
$(d_{x,i},d_{y,i})=(v_{x,i}t,v_{y,i}t)$. This set $(d_{x,i},d_{y,i})$
traces out a closed curve in real space, enclosing an area that grows with time. This
area delimits the region of space in which self-interference can
occur. This is the 2D equivalent of the ``diffusion cone'' previously
derived in 1D~\cite{colas16a}. X-waves can thus be generated by
exciting $E_{-}(\boldsymbol{k})$ in this specific region, between two
inflection points $k_1$ and $k_2$, where the effective dispersion
appears locally hyperbolic.
The condensate dynamics can be obtained from a single-band
2D-Gross-Pitaevskii equation~\cite{khamehchi17a}:
\begin{equation}
\label{eq:9}
i\partial_t\psi(\boldsymbol{r})= \mathscr{F}^{-1}_{\boldsymbol{r}}[E_{-}(\boldsymbol{k})\psi(\boldsymbol{k})] +g_\mathrm{2D}|\psi(\boldsymbol{r})|^2\psi(\boldsymbol{r})\,,
\end{equation}
where $E_{-}(\boldsymbol{k})$ is the lower band defined in
Eq.~(\ref{eq:8}). $\mathscr{F}^{-1}_{\boldsymbol{r}}$ indicates the 2D
inverse Fourier transform, and $g_\mathrm{2D}$ the effective 2D
interaction strength.
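The text does not specify the numerical scheme used to integrate Eq.~(\ref{eq:9}); a common choice for equations of this form is the split-step Fourier method, sketched below as an illustration (in units $\hbar=1$, with an assumed uniform grid; function names and parameters are ours). The $k$-space sub-step applies the full dispersion $E_-(\boldsymbol{k})$ and the real-space sub-step applies the nonlinearity; both are pointwise phase rotations, so the norm is conserved.

```python
import numpy as np

def evolve_gpe(psi0, E_minus, g2d, dt, nsteps, dx=1.0):
    """Split-step Fourier integration of
    i d_t psi = F^{-1}[E_-(k) psi(k)] + g_2D |psi|^2 psi   (hbar = 1)."""
    ny, nx = psi0.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    linear_step = np.exp(-1j * E_minus(KX, KY) * dt)
    psi = psi0.astype(complex)
    for _ in range(nsteps):
        psi = np.fft.ifft2(linear_step * np.fft.fft2(psi))    # dispersion in k-space
        psi = psi * np.exp(-1j * g2d * np.abs(psi) ** 2 * dt) # nonlinearity in real space
    return psi
```

Any dispersion $E_-(\boldsymbol{k})$, such as the lower band of Eq.~(\ref{eq:8}), can be passed in as a function of the momentum grids.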
The experiment of Khamehchi~\textit{et al.}~explored effectively one-dimensional dynamics, where the SOCBEC was released from its initial cigar-shaped harmonic trap into a waveguide~\cite{khamehchi17a}. The SOCBEC interaction energy was transformed into kinetic energy, leading to a spread in momentum space across the inflection point of the dispersion, and the development of a SIP~\cite{khamehchi17a,colas18a}. Here we explore a similar scenario where a SOCBEC is released from a circularly symmetric harmonic trap into a two-dimensional waveguide, leading to the formation of a nonlinear X-wave.
As in the polariton case, only a weak nonlinearity is needed to
trigger the X-wave formation in a SOCBEC. We choose
$g_\mathrm{2D}N=7\times10^{-4}E_\mathrm{R}$, and an initial condensate size
of $\sigma_{\boldsymbol{r}}=\unit{3.5}{\micro\meter}$, assumed to be Gaussian in this
regime~\cite{footnote1,pethick_book01a}. We impart an initial momentum to the wave packet of $(k_x^0,k_y^0) = (1.35,0)\times k_\mathrm{R}$
which is within the inflection point region of the dispersion, as shown in
Fig.~\ref{fig:3}(a,b). In Fig.~\ref{fig:3}(c--f) we present
selected time frames of the density evolution obtained from
Eq.~(\ref{eq:9}), along with the corresponding 1D-WT performed in the
direction of propagation at $y=0$, Fig.~\ref{fig:3}(g--j). Once again, one can
observe the mechanism leading to the X-wave formation, that is, the
splitting of the wave packet into two sub-packets of different momenta
in a configuration where the faster packet is in a position to overlap
with the slower one and thus interfere with it. We
note again the formation of vortex-antivortex pairs in
Fig.~\ref{fig:3}(e), which have moved outside the boundary of Fig.~\ref{fig:3}(f).
We can analyse the energy of the system in the same way as we did for
the ideal hyperbolic case. The evolution of the different energy components is
presented in Fig.~\ref{fig:3}(k) and shows qualitatively the same features
as the hyperbolic case previously shown in
Fig.~\ref{fig:1}(k). One can, however, see that at $t=0$, the kinetic
energy is not zero, since the 2D-SOCBEC dispersion does not possess
the same $x$-$y$ symmetry.
For the parameters we have considered, the nonlinearity is strong enough to
form an X-wave, but remains weak enough to restrict the packet's
spread between the two inflection points $k_1$ and $k_2$. Increasing
the effective interaction strength would increase the packet's spread in
momentum and lead to the formation of more complex wave structures in
real space.
Without interactions ($g_\mathrm{2D} = 0$), the internal reshaping of the wave packet does not occur, and the condensate dynamics are simply those of a slowly diffusing wave packet, as shown in Fig.~\ref{fig:4}(a--d). However, in this case the SIP regime can still be reached by setting a tight Gaussian as the initial condition~\cite{colas16a}. Such dynamics are shown in Fig.~\ref{fig:4}(e--h). The real space density $|\psi(x,y)|^2$ displays self-interference fringes fully bounded by the delimiting area $(d_{x,i},d_{y,i})$ previously derived (green line). In the $x$-$k$ space representation, the wavelet energy density closely follows the displacement $d(k_x)=v(k_x)t$ associated with each wave vector. In the absence of interactions, the spread in momentum space is entirely set by the initial condition through the wave packet's width $\sigma_{\boldsymbol{r}}$.
Reaching the SIP regime requires a sufficiently broad wave packet in
momentum space that straddles the inflection points. If the initial
wave packet does not have this structure, it can be achieved by a
transformation of interaction energy to kinetic
energy~\cite{colas18a}. To demonstrate this, we again take the
configuration used to generate the X-wave as in Fig.~\ref{fig:3}, but
with an interaction strength twice as large, $g_\mathrm{2D}N=14\times10^{-4}E_\mathrm{R}$, shown in Fig.~\ref{fig:4}(i--l). At early times an X-wave
still forms thanks to the spread in momentum caused by the
nonlinearity, as shown in Fig.~\ref{fig:4}(i). The corresponding
wavelet transform shows the wave packet reshaping and the typical
feature of an X-wave self-interference, Fig.~\ref{fig:4}(k). However, at longer
times the X-wave shape in the density is no longer present and the
density exhibits a considerably more complex structure. The wavelet
analysis performed at this particular time of the evolution shows that
the packet's spread is now large enough to populate the dispersion
above the second inflection point, which is typical of the
SIP regime. This shows that X-waves generated in nonlinear systems only
exist and propagate for a finite time, and that more complicated effects can follow in their wake.
The internal reshaping of the wave packet due to a nonlinearity
leading to the X-wave formation is in many ways similar to the linear
self-interfering effect previously described for 1D
systems~\cite{colas16a, colas18a}. However, the two mechanisms should
not be confused, even if they can both occur during the same
experiment, as shown in Fig.~\ref{fig:4}(i--l). The X-wave formation
mechanism exploits the spread in momentum space provided by the
nonlinear interaction to generate two distinct sub-packets, far from
the inflection points (if any) in the negative effective mass region, overlapping and interfering in real space. On
the other hand, the linear self-interference mechanism occurs due to
the change of sign of the $k$-dependent group velocity at the
inflection points to create an effective superposition across a
broad and continuous range of momenta.\\[2mm]
\section{Conclusions}
\label{sec:conclusions}
In this paper we have shown that nonlinear X-waves, including those
recently observed in exciton-polariton systems, arise from an interference
mechanism triggered by the nonlinear interaction. The interaction
increases the packet's spread in momentum space, leading to the
formation of two effective sub-packets travelling at different
velocities, hence overlapping in space and interfering. The complex
wave packet dynamics can be revealed and understood by utilising the
wavelet transform. The key ingredient in the X-wave formation is the
presence of a locally hyperbolic dispersion relation, and we have
shown that similar X-waves can be obtained in other physical systems
with this feature. For example, X-waves can be formed in
SOCBECs in the weakly interacting regime without the need for an
optical lattice potential.
Overall, our analysis of the X-wave formation dynamics utilising the wavelet transform provides
physical insight into otherwise puzzling wave packet
dynamics, and has identified the central role of self-interference. This emphasizes the importance of the self-interfering packet effect for nonstandard
dispersion relations either with or without the influence of nonlinearities.
The Supplemental Material for this manuscript includes movies of the
full dynamics for the three different systems we have considered in each of Figs.~\ref{fig:1},~\ref{fig:2},~\ref{fig:3} which shed further light on the nonlinear X-wave dynamics~\cite{footnote2}.
\begin{acknowledgements}
This research was supported by the Australian Research Council
Centre of Excellence in Future Low-Energy Electronics Technologies
(project number CE170100039) and funded by the Australian
Government. It was also supported by the Ministry of Science and
Education of the Russian Federation through the Russian-Greek
project RFMEFI61617X0085 and the Spanish MINECO under contract
FIS2015-64951-R (CLAQUE).
\end{acknowledgements}
\section{Introduction}
Cartesian differential categories, introduced by Blute, Cockett, and Seely in \cite{blute2009cartesian}, formalize the theory of multivariable differential calculus by axiomatizing the (total) derivative, and also provide the categorical semantics of the differential $\lambda$-calculus, as introduced by Ehrhard and Regnier in \cite{ehrhard2003differential}. Briefly, a Cartesian differential category (Definition \ref{cartdiffdef}) is a category with finite products such that each homset is a commutative monoid, which allows for zero maps and sums of maps (Definition \ref{CLACdef}), and equipped with a differential combinator $\mathsf{D}$, which for every map ${f: A \to B}$ produces its derivative $\mathsf{D}[f]: A \times A \to B$. The differential combinator satisfies seven axioms, known as \textbf{[CD.1]} to \textbf{[CD.7]}, which formalize the basic identities of the (total) derivative from multivariable differential calculus, such as the chain rule, linearity in the vector argument, symmetry of partial derivatives, etc. Two main examples of Cartesian differential categories are the category of Euclidean spaces and real smooth functions between them (Example \ref{ex:smooth}), and the Lawvere theory of polynomials over a commutative (semi)ring (Example \ref{ex:CDCPOLY}). An important class of examples of Cartesian differential categories, especially for this paper, are the coKleisli categories of the comonads of differential categories \cite[Proposition 3.2.1]{blute2009cartesian}.
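As a down-to-earth illustration (our own, not taken from the paper), in the example of polynomial maps the differential combinator sends $f$ to $\mathsf{D}[f](x,v) = J_f(x)\,v$, the Jacobian of $f$ at $x$ applied to $v$; this can be computed with dual numbers, and the chain rule axiom then reads $\mathsf{D}[g\circ f](x,v) = \mathsf{D}[g](f(x), \mathsf{D}[f](x,v))$.

```python
class Dual:
    """Dual numbers a + b*eps with eps**2 = 0 (forward-mode differentiation)."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = self._lift(o)
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

def D(f):
    """Differential combinator on polynomial maps (lists of numbers in and out):
    D[f](x, v) = (Jacobian of f at x) applied to v."""
    return lambda x, v: [w.b for w in f([Dual(xi, vi) for xi, vi in zip(x, v)])]
```

One can check directly that $\mathsf{D}[f](x,v)$ is additive in the vector argument $v$ and satisfies the chain rule in the form above, mirroring the differential combinator axioms.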
Differential categories, introduced by Blute, Cockett, and Seely in \cite{blute2006differential}, provide the algebraic foundations of differentiation and the categorical semantics of differential linear logic \cite{ehrhard2017introduction}. Briefly, a differential category (Example \ref{ex:diffcat}) is a symmetric monoidal category with a comonad $\oc$, with comonad structure maps $\delta_A: \oc(A) \to \oc\oc(A)$ and ${\varepsilon_A: \oc(A) \to A}$, such that for each object $A$, $\oc(A)$ is a cocommutative comonoid with comultiplication ${\Delta_A: \oc(A) \to \oc(A) \otimes \oc(A)}$ and counit $e_A: \oc(A) \to I$, and equipped with a deriving transformation, which is a natural transformation ${\mathsf{d}_A: \oc(A) \otimes A \to \oc(A)}$. The deriving transformation satisfies five axioms, this time called \textbf{[d.1]} to \textbf{[d.5]}, which formalize basic identities of differentiation such as the chain rule and the product rule. In the opposite category of a differential category, called a codifferential category, the deriving transformation is a derivation in the classical algebra sense. Examples of differential categories include the opposite category of the category of vector spaces over a field, where $\oc$ is induced by the free symmetric algebra \cite{blute2006differential,Blute2019}, as well as the opposite category of the category of real vector spaces, where $\oc$ is instead induced by free $\mathcal{C}^\infty$-rings \cite{cruttwell2019integral}.
In a differential category, a smooth map from $A$ to $B$ is a map of type $\oc(A) \to B$. In other words, the (infinitely) differentiable maps are precisely the coKleisli maps. The interpretation of coKleisli maps as smooth can be made precise when the differential category has finite (bi)products, in which case one uses the deriving transformation to define a differential combinator on the coKleisli category. Briefly, for a coKleisli map $f: \oc(A) \to B$ (which is a map of type $A \to B$ in the coKleisli category), its derivative $\mathsf{D}[f]: \oc(A\times A) \to B$ (which is a map of type $A \times A \to B$ in the coKleisli category) is defined as the following composite (in the base category):
\[ \mathsf{D}[f] := \xymatrixcolsep{3pc}\xymatrix{\oc(A \times A) \ar[r]^-{\Delta_{A \times A}} & \oc(A \times A) \otimes \oc(A \times A) \ar[r]^-{\oc(\pi_0) \otimes \oc(\pi_1)} & \oc(A) \otimes \oc(A) \ar[r]^-{1_{\oc(A)} \otimes \varepsilon_A} & \oc(A) \otimes A \ar[r]^-{\mathsf{d}_A} & \oc(A) \ar[r]^-{ f } & B
} \]
where $\pi_i$ are the product projection maps. One then uses the five deriving transformation axioms \textbf{[d.1]} to \textbf{[d.5]} to prove that $\mathsf{D}$ satisfies the seven differential combinator axioms \textbf{[CD.1]} to \textbf{[CD.7]}. Thus, for a differential category with finite (bi)products, its coKleisli category is a Cartesian differential category. For the examples where $\oc$ is the free symmetric algebra or given by free $\mathcal{C}^\infty$-rings, the resulting coKleisli category can respectively be interpreted as the category of polynomials or real smooth functions in possibly infinitely many variables (but depending on only finitely many of them), of which the Lawvere theory of polynomials or category of real smooth functions is a sub-Cartesian differential category.
Taking another look at the construction of the differential combinator for the coKleisli category, if we define the natural transformation $\partial_A: \oc(A \times A) \to \oc(A)$ as the following composite:
\[ \partial_A := \xymatrixcolsep{3.75pc}\xymatrix{\oc(A \times A) \ar[r]^-{\Delta_{A \times A}} & \oc(A \times A) \otimes \oc(A \times A) \ar[r]^-{\oc(\pi_0) \otimes \oc(\pi_1)} & \oc(A) \otimes \oc(A) \ar[r]^-{1_{\oc(A)} \otimes \varepsilon_A} & \oc(A) \otimes A \ar[r]^-{\mathsf{d}_A} & \oc(A)
} \]
then the differential combinator is simply defined by precomposing a coKleisli map $f: \oc(A) \to B$ with $\partial$:
\[ \mathsf{D}[f] := \xymatrixcolsep{5pc}\xymatrix{\oc(A \times A) \ar[r]^-{\partial_A} & \oc(A) \ar[r]^-{f} & B} \]
It is important to stress that this is the composition in the base category and not the composition in the coKleisli category. Thus, the properties of the differential combinator $\mathsf{D}$ in the coKleisli category are fully captured by the properties of the natural transformation $\partial$ in the base category, which in turn are a result of the axioms of the deriving transformation $\mathsf{d}$. However, observe that the type of $\partial_A: \oc(A \times A) \to \oc(A)$ does not involve any monoidal structure. In fact, if one starts with a comonad whose coKleisli category is a Cartesian differential category, it is always possible to construct $\partial$, and to show that $\mathsf{D}[-] = - \circ \partial$, but it is not always possible to extract a monoidal structure on the base category. Thus, if one's goal is simply to build Cartesian differential categories from coKleisli categories, then a monoidal structure $\otimes$ or a deriving transformation $\mathsf{d}$, or even a comonoid structure $\Delta$ and $e$, are not always necessary. Therefore, the objective of this paper is to precisely characterize the comonads whose coKleisli categories are Cartesian differential categories. To this end, in this paper we introduce the novel notion of a Cartesian differential comonad.
Cartesian differential comonads are precisely the comonads whose coKleisli categories are Cartesian differential categories. Briefly, a Cartesian differential comonad is a comonad $\oc$ on a category with finite biproducts equipped with a differential combinator transformation, which is a natural transformation $\partial_A: \oc(A \times A) \to \oc(A)$ which satisfies six axioms called \textbf{[dc.1]} to \textbf{[dc.6]} (Definition \ref{def:cdcomonad}). The axioms of a differential combinator transformation are analogues of the axioms of a differential combinator. Thus, the coKleisli category of a Cartesian differential comonad is a Cartesian differential category where the differential combinator is defined by precomposition with the differential combinator transformation (Theorem \ref{thm1}). This is proven by reasonably straightforward calculations, but one must be careful when translating back and forth between the base category and the coKleisli category. Conversely, a comonad on a category with finite biproducts whose coKleisli category is a Cartesian differential category is in fact a Cartesian differential comonad, where the differential combinator transformation is the derivative of the identity map ${1_{\oc(A)}: \oc(A) \to \oc(A)}$ seen as a coKleisli map $A \to \oc(A)$ (Proposition \ref{prop1}). Using this, since we already know that the coKleisli category of a differential category is a Cartesian differential category, it immediately follows that the comonad of a differential category is a Cartesian differential comonad, where the differential combinator transformation is precisely the one defined above. Therefore, Cartesian differential comonads and differential combinator transformations are indeed generalizations of differential categories and deriving transformations.
However, Cartesian differential comonads are a strict generalization since, as mentioned, they can be defined without the need of a monoidal structure. A very simple separating example is the identity comonad on any category with finite biproducts, where the differential combinator transformation is simply the second projection map (Example \ref{ex:identity}). While this example is trivial, it recaptures the fact that any category with finite biproducts is a Cartesian differential category and this example clearly works without any extra monoidal structure, and thus is not a differential category example. Therefore, Cartesian differential comonads allow for a wider variety of examples of Cartesian differential categories. As such, in this paper we present three new interesting examples of Cartesian differential comonads, which are not differential categories, and their induced Cartesian differential categories. These three examples are respectively based on formal power series, divided power algebras, and Zinbiel algebras. It is worth mentioning that these new examples arise more naturally as Cartesian differential monads (Example \ref{ex:CDM}), the dual notion of Cartesian differential comonads, and thus it is the opposite of the Kleisli category which is a Cartesian differential category.
The first example (Section \ref{sec:PWex}) is based on reduced power series. Recall that a formal power series is said to be reduced if it has no constant/degree 0 term. While the composition of arbitrary multivariable formal power series is not always well defined, because of their non-zero constant terms, the composition of reduced multivariable power series is always well-defined \cite[Section 4.1]{brewer2014algebraic}, and so we may construct categories of reduced power series. Also, it is well known that power series are always and easily differentiable, similarly to polynomials, and that the derivative of a reduced multivariable power series is again reduced. Motivated by capturing power series differentiation, we show that the free reduced power series algebra monad \cite[Section 1.4.3]{Fresse98} is a Cartesian differential monad (Corollary \ref{cor:POW}) whose monad structure is based on reduced power series composition (Lemma \ref{lem:powmonad}) and whose differential combinator transformation is induced by standard power series differentiation (Proposition \ref{prop:powpartial}). Furthermore, the Lawvere theory of reduced power series (Example \ref{ex:CDCPOW}) is a sub-Cartesian differential category of the opposite category of the resulting Kleisli category.
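To see concretely why reduced power series compose and differentiate well, here is an illustrative sketch (ours, not the paper's construction) with truncated univariate series, a simplification of the multivariable setting: a series is a coefficient list $[a_0,\dots,a_N]$, and "reduced" means $a_0=0$, which makes composition well defined order by order.

```python
# Truncated reduced power series in one variable; the truncation order is
# an illustrative choice.
N = 8  # keep coefficients of x^0 .. x^N

def mul(a, b):
    c = [0] * (N + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if ai and bj and i + j <= N:
                c[i + j] += ai * bj
    return c

def compose(f, g):
    """f(g(x)) truncated at order N; requires the inner series g to be reduced."""
    assert g[0] == 0, "composition needs a reduced inner series"
    out, gk = [0] * (N + 1), [1] + [0] * N  # gk holds g^k, starting from g^0 = 1
    for k in range(N + 1):
        out = [o + f[k] * c for o, c in zip(out, gk)]
        gk = mul(gk, g)
    return out

def deriv(a):
    """Term-by-term derivative; note the result is no longer reduced."""
    return [k * a[k] for k in range(1, N + 1)] + [0]
```

The chain rule $(f\circ g)' = (f'\circ g)\cdot g'$ then holds coefficientwise, a univariate shadow of the identity that the differential combinator transformation encodes in this example.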
The second new example (Section \ref{secpuisdiv}) is based on divided power algebras. Divided power algebras, defined by Cartan \cite{cartan54}, are commutative non-unital associative algebras equipped with additional operations $(-)^{[n]}$ for all strictly positive integer $n$, satisfying some relations (Definition \ref{defipuisdiv}). In characteristic $0$, divided power algebras correspond precisely to commutative non-unital associative algebras. In positive characteristics, however, the two notions diverge. There exist free divided power algebras and we show that the free divided power algebra monad \cite[Section 10, Th{\'e}or{\`e}me 1 and 2]{roby65} is a Cartesian differential monad (Corollary \ref{cor:DIV}). Free divided power algebras correspond to the algebra of reduced divided power polynomials. Thus the differential combinator transformation of this example (Proposition \ref{Gammacomb}) captures differentiating divided power polynomials \cite{keigher2000}. In particular, the Lawvere theory of reduced divided power polynomials (Example \ref{ex:CDCdiv}) is a sub-Cartesian differential category of the opposite category of the Kleisli category of the free divided power algebra monad.
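To make the divided power operations concrete, here is a minimal one-variable sketch (our own illustration, not the paper's construction), assuming the standard relations $x^{[m]}\,x^{[n]} = \binom{m+n}{n}\,x^{[m+n]}$ and $\partial\big(x^{[n]}\big) = x^{[n-1]}$; the Leibniz rule for this derivation then reduces to Pascal's identity for binomial coefficients.

```python
from math import comb

def dp_mul(p, q):
    """Multiply divided power polynomials, encoded as dicts {n: coeff} in the
    basis x^[n], using x^[m] * x^[n] = C(m+n, n) x^[m+n]."""
    out = {}
    for m, a in p.items():
        for n, b in q.items():
            out[m + n] = out.get(m + n, 0) + comb(m + n, n) * a * b
    return out

def dp_add(p, q):
    out = dict(p)
    for n, b in q.items():
        out[n] = out.get(n, 0) + b
    return out

def dp_deriv(p):
    """Derivation determined by d(x^[n]) = x^[n-1]; constants map to zero."""
    return {n - 1: a for n, a in p.items() if n >= 1}
```

Everything is integral, so the same computations make sense over any commutative ring, including in positive characteristic where divided powers genuinely differ from ordinary powers.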
The third new example (Section \ref{sec:ZAex}), and perhaps the most exotic example in this paper, is based on Zinbiel algebras. The notion of Zinbiel algebra was introduced by Loday \cite{loday95} and further studied by Dokas \cite{dokas09}. A Zinbiel algebra is a vector space $A$ endowed with a non-associative and non-commutative bilinear operation $<$. Using the Zinbiel product, every Zinbiel algebra can be turned into a commutative non-unital associative algebra. The underlying vector space of a free Zinbiel algebra is the same as that of the non-unital tensor algebra. Therefore, free Zinbiel algebras are spanned by (non-empty) associative words and equipped with a product $<$ (sometimes referred to as the semi-shuffle product, see Proposition \ref{proploday}). The resulting commutative associative algebra is then precisely the non-unital shuffle algebra over $V$. We show that the free Zinbiel algebra monad is a Cartesian differential monad (Corollary \ref{cor:ZIN}) whose differential combinator transformation (Proposition \ref{difcombzin}) corresponds to differentiating non-commutative polynomials with respect to the Zinbiel product. The resulting Cartesian differential category can be understood as the category of reduced non-commutative polynomials where composition is defined using the Zinbiel product, which we simply call Zinbiel polynomials. As such, the Lawvere theory of Zinbiel polynomials is a new exotic example of a Cartesian differential category. It is worth mentioning that the shuffle algebra has previously been studied as an example of another generalization of differential categories in \cite{bagnol2016shuffle}, but not from the point of view of Zinbiel algebras.
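As an entirely illustrative encoding (the data structures and names below are ours), the free Zinbiel product on words can be computed directly from the standard half-shuffle recursion: for a word $x = x_1 x'$, one sets $x < y = x_1\cdot(x' \text{ shuffled with } y)$. Symmetrizing the half-shuffle recovers the shuffle product, and the Zinbiel identity $(x < y) < z = x < (y < z) + x < (z < y)$ can be checked on examples.

```python
def shuffle(u, v):
    """Shuffle product of two words (tuples), as a dict {word: coefficient}."""
    if not u:
        return {v: 1}
    if not v:
        return {u: 1}
    out = {}
    for w, c in shuffle(u[1:], v).items():
        out[(u[0],) + w] = out.get((u[0],) + w, 0) + c
    for w, c in shuffle(u, v[1:]).items():
        out[(v[0],) + w] = out.get((v[0],) + w, 0) + c
    return out

def half_shuffle(x, y):
    """Zinbiel (semi-shuffle) product of non-empty words:
    (x1 x') < y = x1 . (x' shuffled with y)."""
    out = {}
    for w, c in shuffle(x[1:], y).items():
        out[(x[0],) + w] = out.get((x[0],) + w, 0) + c
    return out

def hs_lin(P, Q):
    """Bilinear extension of the half-shuffle to linear combinations of words."""
    out = {}
    for x, a in P.items():
        for y, b in Q.items():
            for w, c in half_shuffle(x, y).items():
                out[w] = out.get(w, 0) + a * b * c
    return out
```

This makes the claim above tangible: the symmetrized product $x < y + y < x$ is the shuffle product, so the free Zinbiel algebra really does carry the non-unital shuffle algebra as its associated commutative algebra.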
An important class of maps in a Cartesian differential category are the $\mathsf{D}$-linear maps (Definition \ref{linmapdef}), also often simply called linear maps \cite{blute2009cartesian}. A map $f: A \to B$ is $\mathsf{D}$-linear if its derivative $\mathsf{D}[f]: A \times A \to B$ is equal to $f$ evaluated in its second argument, that is, $\mathsf{D}[f] = f \circ \pi_1$ (where $\pi_1$ is the projection map of the \emph{second} argument). A $\mathsf{D}$-linear map should be thought of as being of degree 1, and thus does not have any higher-order derivative. Thus, in many examples, $\mathsf{D}$-linearity often coincides with the classical notion of linearity. For example, in the Cartesian differential category of real smooth functions, a smooth function is $\mathsf{D}$-linear if and only if it is $\mathbb{R}$-linear. For a Cartesian differential comonad, every map of the base category provides a $\mathsf{D}$-linear map in the coKleisli category. However, it is not necessarily the case that the base category is isomorphic to the subcategory of $\mathsf{D}$-linear maps of the coKleisli category. Indeed, a simple example of such a case is the trivial Cartesian differential comonad which maps every object to the zero object, so that every coKleisli map is a zero map. Clearly, if the base category is non-trivial it will not be equivalent to the subcategory of $\mathsf{D}$-linear maps. Instead, it is possible to provide necessary and sufficient conditions for the base category to be isomorphic to the subcategory of $\mathsf{D}$-linear maps of the coKleisli category. It turns out that this is precisely the case when the Cartesian differential comonad comes equipped with a $\mathsf{D}$-linear unit, which is a natural transformation $\eta_A: A \to \oc(A)$ satisfying two axioms \textbf{[du.1]} and \textbf{[du.2]} (Definition \ref{def:Dunit}).
If it exists, a $\mathsf{D}$-linear unit is unique and it is equivalent to an isomorphism between the base category and the subcategory of $\mathsf{D}$-linear maps of the coKleisli category (Proposition \ref{etaFlem1}). In the context of differential categories, specifically in categorical models of differential linear logic, the $\mathsf{D}$-linear unit is precisely the codereliction \cite{blute2006differential,Blute2019,ehrhard2017introduction}. The power series, divided power algebras, and Zinbiel algebras Cartesian differential comonads all come equipped with $\mathsf{D}$-linear units.
In \cite{blute2015cartesian}, Blute, Cockett, and Seely give a characterization of the Cartesian differential categories which are the coKleisli categories of differential categories. Generalizing their approach, it is also possible to precisely characterize the Cartesian differential categories which are the coKleisli categories of Cartesian differential categories (Section \ref{sec:abstract}). To this end, we must work with abstract coKleisli categories (Definition \ref{def:abstract}), which give a description of coKleisli categories without starting from a comonad. Abstract coKleisli categories are the dual notion of F\"{u}hrmann's thunk-force-categories \cite{fuhrmann1999direct}, which instead do the same for Kleisli categories. Every abstract coKleisli category is canonically the coKleisli category of a comonad on a certain subcategory (Lemma \ref{lem:ep-com}), and conversely, the coKleisli category of any comonad is an abstract coKleisli category (Lemma \ref{cokleisliabstractlem}). In this paper, we introduce Cartesian differential abstract coKleisli categories (Definition \ref{def:abCDC}) which are abstract coKleisli categories that are also Cartesian differential categories such that the differential combinator and the abstract coKleisli structure are compatible. Every Cartesian differential abstract coKleisli category is canonically the coKleisli category of a Cartesian differential comonad over a certain subcategory of $\mathsf{D}$-linear maps (Proposition \ref{propab1}), and conversely, the coKleisli category of a Cartesian differential comonad is a Cartesian differential abstract coKleisli category (Proposition \ref{propabcok}).
In conclusion, Cartesian differential comonads give a minimal general construction for building coKleisli categories which are Cartesian differential categories. The theory of Cartesian differential comonads also highlights the interaction between the coKleisli structure and the differential combinator. While Cartesian differential comonads recapture some of the notions of differential categories, they are more general. Therefore, Cartesian differential comonads open the door to a variety of new, interesting, and exotic examples of Cartesian differential categories. New examples will be particularly important, especially as applications of Cartesian differential categories continue to be developed in fields such as machine learning and automatic differentiation.
\paragraph{Conventions:} In an arbitrary category, we use the classical notation for composition as opposed to diagrammatic order which was used in other papers on differential categories (such as in \cite{blute2009cartesian,lemay2018tangent} for example). The composite map ${g \circ f: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} C}$ is the map that first does $f: A\@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ then $g: B \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} C$. We denote identity maps as ${1_A: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} A}$.
\section{Cartesian Differential Categories}\label{sec:CDC-background}
In this background section we quickly review Cartesian differential categories \cite{blute2009cartesian}.
The underlying structure of a Cartesian differential category is that of a Cartesian left additive category, which in particular allows one to have zero maps and sums of maps, while also allowing for maps which do not preserve said sums or zeros. Maps which do preserve the additive structure are called \emph{additive} maps. Categorically speaking, a left additive category is a category which is \emph{skew}-enriched over the category of commutative monoids and monoid morphisms \cite{garner2020cartesian}. Then a Cartesian left additive category is a left additive category with finite products such that the product structure is compatible with the commutative monoid structure, that is, the projection maps are additive. Note that since we are working with commutative monoids, we do not assume that our Cartesian left additive categories necessarily come equipped with additive inverses, or in other words negatives. For a category with (chosen) finite products we denote the (chosen) terminal object as $\top$, the binary product of objects $A$ and $B$ by $A \times B$ with projection maps $\pi_0: A \times B \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} A$ and $\pi_1: A \times B \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ and pairing operation $\langle -, - \rangle$, so that for maps $f: C \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} A$ and $g: C \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$, $\langle f,g \rangle: C \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} A \times B$ is the unique map such that $\pi_0 \circ \langle f, g \rangle = f$ and $\pi_1 \circ \langle f, g \rangle = g$. As such, the product of maps $h: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ and $k: C \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} D$ is the map $h \times k: A \times C \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B \times D$ defined as $h \times k = \langle h \circ \pi_0, k \circ \pi_1 \rangle$.
\begin{definition}\label{CLACdef} A \textbf{left additive category} \cite[Definition 1.1.1]{blute2009cartesian} is a category $\mathbb{X}$ such that each hom-set $\mathbb{X}(A,B)$ is a commutative monoid, with binary addition $+: \mathbb{X}(A,B) \times \mathbb{X}(A,B) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \mathbb{X}(A,B)$, $(f,g) \mapsto f +g$ and zero $0 \in \mathbb{X}(A,B)$, and such that pre-composition preserves the additive structure, that is, for any maps $f: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$, $g: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$, and $x: A^\prime \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} A$, the following equality holds:
\begin{align*}
(f+g) \circ x = f \circ x + g \circ x && 0 \circ x = 0
\end{align*}
A map $f: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ is said to be \textbf{additive} \cite[Definition 1.1.1]{blute2009cartesian} if post-composition by $f$ preserves the additive structure, that is, for any maps $x: A^\prime \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} A$ and $y: A^\prime \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} A$, the following equality holds:
\begin{align*}
f \circ (x + y) = f \circ x + f \circ y && f \circ 0 = 0
\end{align*}
A \textbf{Cartesian left additive category} \cite[Definition 2.3]{lemay2018tangent} is a left additive category $\mathbb{X}$ which has finite products and such that all the projection maps $\pi_0: A \times B \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} A$ and $\pi_1: A \times B \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ are additive.
\end{definition}
We note that the definition of a Cartesian left additive category presented here is not precisely that given in \cite[Definition 1.2.1]{blute2009cartesian}, but was shown to be equivalent in \cite[Lemma 2.4]{lemay2018tangent}. Also note that in a Cartesian left additive category, the unique map to the terminal object $\top$ is the zero map ${0: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \top}$. Here are now some important maps for Cartesian differential categories that can be defined in any Cartesian left additive category:
\begin{definition}\label{CLACmapsdef} In a Cartesian left additive category $\mathbb{X}$:
\begin{enumerate}[{\em (i)}]
\item \label{injdef} For each pair of objects $A$ and $B$, define the \textbf{injection maps} $\iota_0: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} A \times B$ and $\iota_1: B \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} A \times B$ respectively as $\iota_0 := \langle 1_A, 0 \rangle$ and $\iota_1 := \langle 0, 1_B \rangle$.
\item \label{nabladef} For each object $A$, define the \textbf{sum map} $\nabla_A: A \times A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} A$ as $\nabla_A := \pi_0 + \pi_1$.
\item \label{elldef} For each object $A$, define the \textbf{lifting map} $\ell_A: A \times A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} (A \times A) \times (A \times A)$ as $\ell_A := \iota_0 \times \iota_1$.
\item \label{cdef} For each object $A$, define the \textbf{interchange map} $c_A: (A \times A) \times (A \times A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} (A \times A) \times (A \times A)$ as $c_A : = \left \langle \pi_0 \times \pi_0, \pi_1 \times \pi_1 \right \rangle$.
\end{enumerate}
\end{definition}
It is important to note that while $c$ is natural in the expected sense, the injection maps $\iota_j$, the sum map $\nabla$, and the lifting map $\ell$ are not natural transformations. Instead, they are natural only with respect to additive maps. In particular, since the injection maps are not natural for arbitrary maps, they do not make the product a coproduct, and therefore the product is not a biproduct. However, the biproduct identities still hold in a Cartesian left additive category, in the sense that the following equalities hold:
\begin{align*}
\pi_0 \circ \iota_0 = 1_A && \pi_0 \circ \iota_1 = 0 && \pi_1 \circ \iota_0 = 0 && \pi_1 \circ \iota_1 = 1_B && \iota_0 \circ \pi_0 + \iota_1 \circ \pi_1 = 1_{A \times B}
\end{align*}
With all this said, it turns out that a category with finite biproducts is precisely a Cartesian left additive category where every map is additive \cite[Example 2.3.(ii)]{garner2020cartesian}. In that case, note that the injection maps of Definition \ref{CLACmapsdef}.(\ref{injdef}) are precisely the injection maps of the coproduct, while the sum map of Definition \ref{CLACmapsdef}.(\ref{nabladef}) is the co-diagonal map of the coproduct.
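To make the biproduct identities concrete, here is a minimal Python sketch in the category of real vector spaces, with vectors as tuples. The concrete model and all function names are our illustration, not part of the formal development:

```python
# Vectors in R^n as tuples; A x B is concatenation, with A = R^2 and B = R^3.
def pi0(v, n): return v[:n]                  # projection onto the first factor
def pi1(v, n): return v[n:]                  # projection onto the second factor
def iota0(a, m): return a + (0.0,) * m       # <1_A, 0>: pad with zeros on the right
def iota1(b, n): return (0.0,) * n + b       # <0, 1_B>: pad with zeros on the left
def add(v, w): return tuple(x + y for x, y in zip(v, w))

a, b = (1.0, 2.0), (3.0, 4.0, 5.0)
v = a + b                                    # an element of A x B

assert pi0(iota0(a, 3), 2) == a              # pi0 . iota0 = 1_A
assert pi1(iota0(a, 3), 2) == (0.0,) * 3     # pi1 . iota0 = 0
assert pi0(iota1(b, 2), 2) == (0.0,) * 2     # pi0 . iota1 = 0
assert pi1(iota1(b, 2), 2) == b              # pi1 . iota1 = 1_B
assert add(iota0(pi0(v, 2), 3), iota1(pi1(v, 2), 2)) == v  # iota0.pi0 + iota1.pi1 = 1
```

Every map used here is additive, matching the fact that a category with finite biproducts is a Cartesian left additive category in which every map is additive.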
Cartesian differential categories are Cartesian left additive categories which come equipped with a differential combinator, which in turn is axiomatized by the basic properties of the directional derivative from multivariable differential calculus. There are various equivalent ways of expressing the axioms of a Cartesian differential category. Here we have chosen the one found in \cite[Definition 2.6]{lemay2018tangent} (using the notation for Cartesian left additive categories introduced above). It is important to notice that in the following definition, unlike in the original paper \cite{blute2009cartesian} and other early works on Cartesian differential categories, we use the convention used in the more recent works where the linear argument of $\mathsf{D}[f]$ is its second argument rather than its first argument.
\begin{definition}\label{cartdiffdef} A \textbf{Cartesian differential category} \cite[Definition 2.1.1]{blute2009cartesian} is a Cartesian left additive category $\mathbb{X}$ equipped with a \textbf{differential combinator} $\mathsf{D}$, which is a family of operators:
\begin{align*} \mathsf{D}: \mathbb{X}(A,B) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \mathbb{X}(A \times A,B) && \frac{f : A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B}{\mathsf{D}[f]: A \times A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B}
\end{align*}
and such that the following seven axioms hold:
\begin{enumerate}[{\bf [CD.1]}]
\item \label{CDCax1} $\mathsf{D}[f+g] = \mathsf{D}[f] + \mathsf{D}[g]$ and $\mathsf{D}[0] = 0$
\item \label{CDCax2} $\mathsf{D}[f] \circ \left(1_A \times \nabla_A \right) = \mathsf{D}[f] \circ (1_A \times \pi_0) + \mathsf{D}[f] \circ (1_A \times \pi_1)$ and $\mathsf{D}[f] \circ \iota_0 = 0$
\item \label{CDCax3} $\mathsf{D}[1_A]=\pi_1$, $\mathsf{D}[\pi_0] = \pi_0 \circ \pi_1$, and $\mathsf{D}[\pi_1] = \pi_1 \circ \pi_1$
\item \label{CDCax4} $\mathsf{D}[\left\langle f,g \right \rangle] = \left \langle \mathsf{D}[f], \mathsf{D}[g] \right \rangle$
\item \label{CDCax5} $\mathsf{D}[g \circ f] = \mathsf{D}[g] \circ \langle f \circ \pi_0, \mathsf{D}[f] \rangle$
\item \label{CDCax6} $\mathsf{D}\left[\mathsf{D}[f] \right] \circ \ell_A = \mathsf{D}[f]$
\item \label{CDCax7} $\mathsf{D}\left[\mathsf{D}[f] \right] \circ c_A = \mathsf{D}\left[\mathsf{D}[f] \right]$
\end{enumerate}
For a map $f: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$, $\mathsf{D}[f]: A \times A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ is called the derivative of $f$.
\end{definition}
A discussion on the intuition for the differential combinator axioms can be found in \cite[Remark 2.1.3]{blute2009cartesian}. It is also worth mentioning that there is a sound and complete term logic for Cartesian differential categories \cite[Section 4]{blute2009cartesian}.
An important class of maps in a Cartesian differential category is the class of linear maps. In this paper, however, we borrow the terminology from \cite{garner2020cartesian} and will instead call them $\mathsf{D}$-linear maps. This terminology will help distinguish between the classical notion of linearity from commutative algebra and the Cartesian differential category notion of linearity.
\begin{definition}\label{linmapdef} In a Cartesian differential category $\mathbb{X}$ with differential combinator $\mathsf{D}$, a map $f$ is said to be \textbf{$\mathsf{D}$-linear} \cite[Definition 2.2.1]{blute2009cartesian} if $\mathsf{D}[f]= f \circ \pi_1$. Define the subcategory of $\mathsf{D}$-linear maps $\mathsf{D}\text{-}\mathsf{lin}[\mathbb{X}]$ to be the category whose objects are the same as $\mathbb{X}$ and whose maps are $\mathsf{D}$-linear in $\mathbb{X}$, and let $\mathsf{U}: \mathsf{D}\text{-}\mathsf{lin}[\mathbb{X}] \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \mathbb{X}$ be the obvious forgetful functor.
\end{definition}
By \cite[Lemma 2.2.2]{blute2009cartesian}, every $\mathsf{D}$-linear map is additive, and therefore it follows that $\mathsf{D}\text{-}\mathsf{lin}[\mathbb{X}]$ has finite biproducts, and is thus also a Cartesian left additive category (where every map is additive) such that the forgetful functor ${\mathsf{U}: \mathsf{D}\text{-}\mathsf{lin}[\mathbb{X}] \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \mathbb{X}}$ preserves the Cartesian left additive structure strictly. It is important to note that although additive and $\mathsf{D}$-linear maps often coincide in many examples of Cartesian differential categories, in an arbitrary Cartesian differential category, not every additive map is necessarily $\mathsf{D}$-linear. Here are now some useful properties of $\mathsf{D}$-linear maps:
\begin{lemma}\label{linlem} \cite[Lemma 2.2.2, Corollary 2.2.3]{blute2009cartesian} In a Cartesian differential category $\mathbb{X}$ with differential combinator $\mathsf{D}$:
\begin{enumerate}[{\em (i)}]
\item \label{linlem.add} If $f: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ is $\mathsf{D}$-linear then $f$ is additive;
\item\label{linlemimportant1} For any map $f: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$, define $\mathsf{L}[f]: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ (called the linearization of $f$ \cite[Definition 3.1]{cockett2020linearizing}) as the following composite:
\[ \mathsf{L}[f] := \xymatrixcolsep{5pc}\xymatrix{A \ar[r]^-{\iota_1} & A \times A \ar[r]^-{\mathsf{D}[f]} & B
} \]
Then $\mathsf{L}[f]$ is $\mathsf{D}$-linear.
\item\label{linlemimportant2} A map $f: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ is $\mathsf{D}$-linear if and only if $f = \mathsf{L}[f]$.
\item \label{linlem.pre} If $f: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ is $\mathsf{D}$-linear then for every map $g: B \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} C$, $\mathsf{D}[g \circ f] = \mathsf{D}[g] \circ (f \times f)$;
\item \label{linlem.post} If $g: B \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} C$ is $\mathsf{D}$-linear then for every map $f: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$, $\mathsf{D}[g \circ f] = g \circ \mathsf{D}[f]$.
\end{enumerate}
\end{lemma}
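For intuition, in the smooth setting (Example \ref{ex:smooth} below) the linearization $\mathsf{L}[f] = \mathsf{D}[f] \circ \iota_1$ amounts to applying the Jacobian at the origin. The following Python sketch approximates $\mathsf{D}$ by central differences and checks that $\mathsf{L}[f]$ is additive and that $\mathsf{L}[g] = g$ for a linear $g$; the sample functions, step size, and tolerances are our assumptions:

```python
import math

def D(f, x, y, h=1e-6):
    """Central-difference approximation of D[f](x, y) = (f(x + h*y) - f(x - h*y)) / 2h."""
    xp = [xi + h * yi for xi, yi in zip(x, y)]
    xm = [xi - h * yi for xi, yi in zip(x, y)]
    return [(p - m) / (2 * h) for p, m in zip(f(xp), f(xm))]

def L(f, x):
    """Linearization L[f](x) = D[f](0, x): differentiate at the origin in direction x."""
    return D(f, [0.0] * len(x), x)

f = lambda x: [math.sin(x[0]) + x[0] * x[1], x[1] ** 2 + x[0]]
x, y = [0.3, -0.7], [1.0, 2.0]

# L[f] is additive up to numerical error (it is D-linear by item (ii) of the lemma):
lhs = L(f, [xi + yi for xi, yi in zip(x, y)])
rhs = [a + b for a, b in zip(L(f, x), L(f, y))]
assert all(abs(a - b) < 1e-4 for a, b in zip(lhs, rhs))

# For a linear g, L[g] = g, illustrating item (iii) of the lemma:
g = lambda x: [2 * x[0] - x[1], 3 * x[1]]
assert all(abs(a - b) < 1e-4 for a, b in zip(L(g, x), g(x)))
```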
We conclude this section with some examples of well-known Cartesian differential categories and their $\mathsf{D}$-linear maps. The first three examples are based on the standard notions of differentiating linear functions, polynomials, and smooth functions respectively.
\begin{example}\label{ex:CDCbiproduct} \normalfont Any category $\mathbb{X}$ with finite biproducts is a Cartesian differential category where the differential combinator is defined by precomposing with the second projection map: $ \mathsf{D}[f] = f \circ \pi_1$.
In this case, every map is $\mathsf{D}$-linear by definition and so $\mathsf{D}\text{-}\mathsf{lin}[\mathbb{X}] = \mathbb{X}$.
As a particular example, let $\mathbb{F}$ be a field and let $\mathbb{F}\text{-}\mathsf{VEC}$ be the category of $\mathbb{F}$-vector spaces and $\mathbb{F}$-linear maps between them. Then $\mathbb{F}\text{-}\mathsf{VEC}$ is a Cartesian differential category where for an $\mathbb{F}$-linear map ${f: V \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} W}$, its derivative $\mathsf{D}[f]: V \times V \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} W$ is defined as $\mathsf{D}[f](v,w) = w$.
\end{example}
\begin{example} \normalfont \label{ex:CDCPOLY} Let $\mathbb{F}$ be a field. Define the category $\mathbb{F}\text{-}\mathsf{POLY}$ whose objects are $n \in \mathbb{N}$, and where a map ${P: n \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} m}$ is an $m$-tuple of polynomials in $n$ variables, that is, $P = \langle p_1(\vec x), \hdots, p_m(\vec x) \rangle$ with $p_i(\vec x) \in \mathbb{F}[x_1, \hdots, x_n]$. $\mathbb{F}\text{-}\mathsf{POLY}$ is a Cartesian differential category where the differential combinator is given by the standard differentiation of polynomials, that is, for a map ${P: n \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} m}$, with $P = \langle p_1(\vec x), \hdots, p_m(\vec x) \rangle$, its derivative $\mathsf{D}[P]: n \times n \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} m$ is defined as the tuple of sums of partial derivatives of the polynomials $p_j(\vec x)$:
\begin{align*}
\mathsf{D}[P](\vec x, \vec y) := \left( \sum \limits^n_{i=1} \frac{\partial p_1(\vec x)}{\partial x_i} y_i, \hdots, \sum \limits^n_{i=1} \frac{\partial p_m(\vec x)}{\partial x_i} y_i \right) && \sum \limits^n_{i=1} \frac{\partial p_j (\vec x)}{\partial x_i} y_i \in \mathbb{F}[x_1, \hdots, x_n, y_1, \hdots, y_n]
\end{align*}
A map $P: n \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} m$ is $\mathsf{D}$-linear if it is of the form:
\begin{align*}
P = \left \langle \sum \limits^{n}_{i=1} r_{i,1}x_{i}, \hdots, \sum \limits^{n}_{i=1} r_{i,m}x_{i} \right \rangle && r_{i,j} \in \mathbb{F}
\end{align*}
In other words, $P = \langle p_1(\vec x), \hdots, p_m(\vec x) \rangle$ is $\mathsf{D}$-linear if and only if each $p_i(\vec x)$ induces an $\mathbb{F}$-linear map ${\mathbb{F}^n \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \mathbb{F}}$. As such, $\mathsf{D}\text{-}\mathsf{lin}[\mathbb{F}\text{-}\mathsf{POLY}]$ is equivalent to the category $\mathbb{F}\text{-}\mathsf{LIN}$ whose objects are the finite powers $\mathbb{F}^n$ for each $n \in \mathbb{N}$ (including the singleton $\mathbb{F}^0 = \lbrace 0 \rbrace$) and whose maps are $\mathbb{F}$-linear maps ${\mathbb{F}^n \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \mathbb{F}^m}$. We note that this example generalizes to the category of polynomials over an arbitrary commutative (semi)ring.
\end{example}
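As a hedged illustration of the differential combinator of $\mathbb{F}\text{-}\mathsf{POLY}$ (taking $\mathbb{F} = \mathbb{R}$ with floats standing in for field elements), one can represent a polynomial as a dictionary from exponent tuples to coefficients and compute $\mathsf{D}[p](\vec x, \vec y) = \sum_i \frac{\partial p}{\partial x_i}(\vec x)\, y_i$ directly. The representation and helper names are our choice, not the paper's:

```python
import math

# A polynomial in n variables: dict {exponent tuple: coefficient}.
# Example: x0^2 * x1 is {(2, 1): 1.0}.
def partial(p, i):
    """Formal partial derivative with respect to variable i."""
    out = {}
    for exps, c in p.items():
        if exps[i] > 0:
            e = list(exps); e[i] -= 1
            out[tuple(e)] = out.get(tuple(e), 0) + c * exps[i]
    return out

def evaluate(p, x):
    return sum(c * math.prod(xi ** e for xi, e in zip(x, exps))
               for exps, c in p.items())

def D(p, x, y):
    """D[p](x, y) = sum_i dp/dx_i(x) * y_i  (one component of the combinator)."""
    return sum(evaluate(partial(p, i), x) * y[i] for i in range(len(x)))

p = {(2, 1): 1.0}                     # p(x0, x1) = x0^2 * x1
# D[p](x, y) = 2*x0*x1*y0 + x0^2*y1:
assert D(p, (3.0, 4.0), (1.0, 2.0)) == 2*3*4*1 + 9*2   # 24 + 18 = 42
```

Note that `D(p, x, y)` is linear in `y`, as the $\mathsf{D}$-linear characterization above predicts.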
\begin{example}\label{ex:smooth} \normalfont Let $\mathbb{R}$ be the set of real numbers. Define $\mathsf{SMOOTH}$ as the category whose objects are the Euclidean real vector spaces $\mathbb{R}^n$ and whose maps are the real smooth functions ${F: \mathbb{R}^n \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \mathbb{R}^m}$ between them. $\mathsf{SMOOTH}$ is a Cartesian differential category, arguably the canonical example, where the differential combinator is defined as the directional derivative of a smooth function. So for a smooth function $F = \langle f_1, \hdots, f_m \rangle: \mathbb{R}^n \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \mathbb{R}^m$, its derivative ${\mathsf{D}[F]: \mathbb{R}^n \times \mathbb{R}^n \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \mathbb{R}^m}$ is then defined as:
\[\mathsf{D}[F](\vec x, \vec y) := \left \langle \sum \limits^n_{i=1} \frac{\partial f_1}{\partial x_i}(\vec x) y_i, \hdots, \sum \limits^n_{i=1} \frac{\partial f_m}{\partial x_i}(\vec x) y_i \right \rangle\]
Note that $\mathbb{R}\text{-}\mathsf{POLY}$ is a sub-Cartesian differential category of $\mathsf{SMOOTH}$. A smooth function $F: \mathbb{R}^n \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \mathbb{R}^m$ is $\mathsf{D}$-linear if and only if it is $\mathbb{R}$-linear in the classical sense. Therefore, $\mathsf{D}\text{-}\mathsf{lin}[\mathsf{SMOOTH}]= \mathbb{R}\text{-}\mathsf{LIN}$.
\end{example}
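The chain rule axiom \textbf{[CD.5]} can be checked numerically in $\mathsf{SMOOTH}$: $\mathsf{D}[g \circ f] = \mathsf{D}[g] \circ \langle f \circ \pi_0, \mathsf{D}[f] \rangle$ says pointwise that $\mathsf{D}[g \circ f](\vec x, \vec y) = \mathsf{D}[g](f(\vec x), \mathsf{D}[f](\vec x, \vec y))$. A small Python sketch with central differences follows; the sample functions, step size, and tolerance are our assumptions:

```python
import math

def D(f, x, y, h=1e-5):
    """Central-difference approximation of the directional derivative D[f](x, y)."""
    xp = [xi + h * yi for xi, yi in zip(x, y)]
    xm = [xi - h * yi for xi, yi in zip(x, y)]
    return [(p - m) / (2 * h) for p, m in zip(f(xp), f(xm))]

f = lambda x: [math.exp(x[0]) * x[1], x[0] + math.sin(x[1])]   # R^2 -> R^2
g = lambda x: [x[0] * x[1]]                                    # R^2 -> R^1

x, y = [0.2, 0.5], [1.0, -1.0]
lhs = D(lambda z: g(f(z)), x, y)        # D[g . f](x, y)
rhs = D(g, f(x), D(f, x, y))            # D[g](f(x), D[f](x, y)) -- [CD.5]
assert all(abs(a - b) < 1e-3 for a, b in zip(lhs, rhs))
```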
\begin{example} \normalfont An important source of examples of Cartesian differential categories, especially for this paper, are those which arise as the coKleisli category of a differential category \cite{blute2006differential,blute2015cartesian}. We will review this example in Example \ref{ex:diffcat}.
\end{example}
There are many other interesting (and sometimes very exotic) examples of Cartesian differential categories in the literature. See \cite{garner2020cartesian,cockett2020linearizing} for lists of more examples of Cartesian differential categories. Interesting generalizations of Cartesian differential categories include $R$-linear Cartesian differential categories \cite{garner2020cartesian} (which add scalar multiplication by a commutative ring $R$), generalized Cartesian differential categories \cite{cruttwell2017cartesian} (which generalize the notion of differential calculus of smooth functions between open subsets), differential restriction categories \cite{cockett2011differential} (which generalize the notion of differential calculus of partially defined smooth functions), and tangent categories \cite{cockett2014differential} (which generalize the notion of differential calculus over smooth manifolds).
\section{Cartesian Differential Comonads}\label{sec:CDComonad}
In this section, we introduce the main novel concept of study in this paper: Cartesian differential comonads, which are precisely the comonads whose coKleisli category is a Cartesian differential category. This is a generalization of \cite[Proposition 3.2.1]{blute2009cartesian}, which states that the coKleisli category of the comonad of a differential category is a Cartesian differential category. The generalization comes from the fact that a Cartesian differential comonad can be defined without the need for a monoidal product or cocommutative comonoid structure on the comonad's coalgebras. As such, this allows for a wider variety of examples of Cartesian differential categories. Briefly, a Cartesian differential comonad is a comonad on a category with finite biproducts, which comes equipped with a differential combinator transformation, which generalizes the notion of a deriving transformation in a differential category \cite{blute2006differential,Blute2019}. The induced differential combinator is defined by precomposing a coKleisli map with the differential combinator transformation (with respect to composition in the base category). Conversely, a comonad whose coKleisli category is a Cartesian differential category is a Cartesian differential comonad, where the differential combinator transformation is defined using the coKleisli category's differential combinator. We point out that this statement, regarding comonads whose coKleisli categories are Cartesian differential categories, is a novel observation and shows us that even if one cannot extract a monoidal product on the base category from the coKleisli category, it is possible to obtain a natural transformation which captures differentiation. Lastly, we will also study the case where the $\mathsf{D}$-linear maps of the coKleisli category correspond precisely to the maps of the base category. 
This situation arises precisely in the presence of what we call a $\mathsf{D}$-linear unit, which generalizes the notion of a codereliction from differential linear logic \cite{blute2006differential,Blute2019,fiore2007differential,ehrhard2017introduction}.
If only to introduce notation, recall that a comonad on a category $\mathbb{X}$ is a triple $(\oc, \delta, \varepsilon)$ consisting of a functor ${\oc: \mathbb{X} \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \mathbb{X}}$, and two natural transformations $\delta_A: \oc(A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \oc \oc (A)$, called the comonad comultiplication, and ${\varepsilon_A: \oc(A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} A}$, called the comonad counit, and such that the following diagrams commute:
\begin{equation}\label{comonadeq}\begin{gathered}
\delta_{\oc(A)} \circ \delta_A = \oc(\delta_A) \circ \delta_A \quad \quad \quad \varepsilon_{\oc(A)} \circ \delta_A = 1_{\oc(A)} = \oc(\varepsilon_A) \circ \delta_A
\end{gathered}\end{equation}
\begin{definition}\label{def:cdcomonad} For a comonad $(\oc, \delta, \varepsilon)$ on a category $\mathbb{X}$ with finite biproducts, a \textbf{differential combinator transformation} on $(\oc, \delta, \varepsilon)$ is a natural transformation $\partial_A: \oc(A \times A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \oc(A)$ such that the following diagrams commute:
\begin{enumerate}[{\bf [dc.1]}]
\item Zero Rule:
\[ \xymatrixcolsep{5pc}\xymatrix{ \oc(A) \ar[dr]_-{0} \ar[r]^-{\oc(\iota_0)} & \oc(A \times A) \ar[d]^-{\partial_A} \\
& \oc(A)
} \]
where $\iota_0: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} A \times A$ is defined as in Definition \ref{CLACmapsdef}.(\ref{injdef}).
\item Additive Rule:
\[ \xymatrixcolsep{7pc}\xymatrix{ \oc\!\left( A \times (A \times A) \right) \ar[d]_-{\oc(1_A \times \pi_0) + \oc(1_A \times \pi_1)} \ar[r]^-{\oc(1_A \times \nabla_A)} & \oc(A \times A) \ar[d]^-{\partial_A} \\
\oc(A \times A) \ar[r]_-{\partial_A} & \oc(A)
} \]
where $\nabla_A: A \times A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} A$ is defined as in Definition \ref{CLACmapsdef}.(\ref{nabladef}).
\item Linear Rule:
\[ \xymatrixcolsep{5pc}\xymatrix{ \oc(A \times A) \ar[r]^-{\partial_A} \ar[d]_-{\varepsilon_{A \times A}} & \oc(A) \ar[d]^-{\varepsilon_A} \\
A \times A \ar[r]_-{\pi_1} & A
} \]
\item Chain Rule:
\[ \xymatrixcolsep{5pc}\xymatrix{ \oc(A \times A) \ar[d]_-{\delta_{A \times A}} \ar[rr]^-{\partial_A} && \oc(A) \ar[d]^-{\delta_A} \\
\oc\oc(A \times A) \ar[r]_-{\oc\left(\langle \oc(\pi_0), \partial_{A} \rangle \right)} & \oc\!\left( \oc(A) \times \oc(A) \right) \ar[r]_-{\partial_{\oc(A)}} & \oc\oc(A)
} \]
\item Lift Rule:
\[ \xymatrixcolsep{5pc}\xymatrix{\oc\left( A \times A \right) \ar[ddr]_-{\partial_{A}} \ar[r]^-{\oc(\ell_A)} & \oc\!\left( (A \times A) \times (A \times A) \right) \ar[d]^-{\partial_{A \times A}} \\
& \oc(A \times A) \ar[d]^-{\partial_A} \\
& \oc(A)
} \]
where $\ell_A: A \times A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} (A \times A) \times (A \times A)$ is defined as in Definition \ref{CLACmapsdef}.(\ref{elldef}).
\item Symmetry Rule:
\[ \xymatrixcolsep{5pc}\xymatrix{\oc\left( (A \times A) \times (A \times A) \right) \ar[dd]_-{\partial_{A \times A}} \ar[r]^-{\oc(c_A)} & \oc\!\left( (A \times A) \times (A \times A) \right) \ar[d]^-{\partial_{A \times A}} \\
& \oc(A \times A) \ar[d]^-{\partial_A} \\
\oc(A \times A) \ar[r]_-{\partial_A} & \oc(A)
} \]
where $c_A: (A \times A) \times (A \times A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} (A \times A) \times (A \times A)$ is defined as in Definition \ref{CLACmapsdef}.(\ref{cdef}).
\end{enumerate}
A \textbf{Cartesian differential comonad} on a category $\mathbb{X}$ with finite biproducts is a quadruple $(\oc, \delta, \varepsilon, \partial)$ consisting of a comonad $(\oc, \delta, \varepsilon)$ and a differential combinator transformation $\partial$ on $(\oc, \delta, \varepsilon)$.
\end{definition}
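As a degenerate sanity check (our observation, matching Example \ref{ex:CDCbiproduct}): on a category with finite biproducts, the identity comonad with $\partial_A = \pi_1$ satisfies all six rules, and the induced differential combinator on the coKleisli category (which is the base category itself) is $\mathsf{D}[f] = f \circ \pi_1$. The nontrivial rules can be checked pointwise in $\mathbb{R}^2$, modelling elements of $A \times A$ as pairs of tuples:

```python
# Identity comonad on R^2: !(A) = A, delta = epsilon = id, partial = pi_1.
d = lambda p: p[1]                                     # partial_A = pi_1
add = lambda u, v: tuple(x + y for x, y in zip(u, v))
a, b, c = (1.0, 2.0), (3.0, -1.0), (0.5, 4.0)
zero = (0.0, 0.0)

assert d((a, zero)) == zero                            # [dc.1] zero rule: partial . !(iota_0) = 0
assert d((a, add(b, c))) == add(d((a, b)), d((a, c)))  # [dc.2] additive rule
assert d((a, b)) == b                                  # [dc.3] linear rule (epsilon = id)
# [dc.4] chain rule is trivial here since delta = id and <!(pi_0), partial> = <pi_0, pi_1> = id.
ell = ((a, zero), (zero, b))                           # ell(a, b) = (iota_0(a), iota_1(b))
assert d(d(ell)) == d((a, b))                          # [dc.5] lift rule
x = ((a, b), (c, a))
cx = ((x[0][0], x[1][0]), (x[0][1], x[1][1]))          # interchange map c_A
assert d(d(cx)) == d(d(x))                             # [dc.6] symmetry rule
```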
As the name suggests, the axioms of a differential combinator transformation correspond to some of the axioms of a differential combinator. The zero rule \textbf{[dc.1]} and the additive rule \textbf{[dc.2]} correspond to \textbf{[CD.2]}, the linear rule \textbf{[dc.3]} corresponds to \textbf{[CD.3]}, the chain rule \textbf{[dc.4]} corresponds to \textbf{[CD.5]}, the lift rule \textbf{[dc.5]} corresponds to \textbf{[CD.6]}, and lastly the symmetry rule \textbf{[dc.6]} corresponds to \textbf{[CD.7]}.
Our goal is now to show that the coKleisli category of a Cartesian differential comonad is a Cartesian differential category. As we will be working with coKleisli categories, we will use the notation found in \cite{blute2015cartesian} and use interpretation brackets $\llbracket - \rrbracket$ to help distinguish between composition in the base category and coKleisli composition. So for a comonad $(\oc, \delta, \varepsilon)$ on a category $\mathbb{X}$, let $\mathbb{X}_\oc$ denote its coKleisli category, which is the category whose objects are the same as $\mathbb{X}$ and where a map $A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ in the coKleisli category is a map of type $\oc(A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ in the base category, that is, $\mathbb{X}_\oc(A,B) = \mathbb{X}(\oc(A), B)$. Composition of coKleisli maps ${\llbracket f \rrbracket: \oc(A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B}$ and $\llbracket g \rrbracket: \oc(B) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} C$ is defined as follows:
\begin{align*}
\llbracket g \circ f \rrbracket := \xymatrixcolsep{3pc}\xymatrix{\oc (A) \ar[r]^-{\delta_A} & \oc\oc(A) \ar[r]^-{\oc \left( \llbracket f \rrbracket \right) } & \oc(B) \ar[r]^-{\llbracket g \rrbracket} & C } && \llbracket g \circ f \rrbracket = \llbracket g \rrbracket \circ \oc\left( \llbracket f \rrbracket \right) \circ \delta_A
\end{align*}
The identity maps in the coKleisli category are given by the comonad counit:
\begin{align*}
\llbracket 1_A \rrbracket := \xymatrixcolsep{5pc}\xymatrix{\oc (A) \ar[r]^-{\varepsilon_A} & A }
\end{align*}
Let $\mathsf{F}_\oc: \mathbb{X} \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \mathbb{X}_\oc$ be the standard inclusion functor which is defined on objects as $\mathsf{F}_\oc(A) = A$ and on maps ${f: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B}$ as follows:
\begin{align*}
\llbracket \mathsf{F}_\oc(f) \rrbracket := \xymatrixcolsep{3pc}\xymatrix{\oc (A) \ar[r]^-{\varepsilon_A} & A \ar[r]^-{f} & B } && \llbracket \mathsf{F}_\oc(f) \rrbracket = f \circ \varepsilon_A
\end{align*}
A key map in this story is the coKleisli map whose interpretation is the identity map in the base category. So for every object $A$, define the map $\varphi_A: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \oc(A)$ in the coKleisli category as follows:
\begin{equation}\label{varphidef}\begin{gathered}
\llbracket \varphi_A \rrbracket := \xymatrixcolsep{5pc}\xymatrix{\oc (A) \ar[r]^-{1_{\oc(A)}} & \oc(A) }
\end{gathered}\end{equation}
Here are now some useful identities in the coKleisli category:
\begin{lemma}\label{cokleislilem1} Let $(\oc, \delta, \varepsilon)$ be a comonad on a category $\mathbb{X}$. Then the following equalities hold:
\begin{enumerate}[{\em (i)}]
\item \label{cokleislilem1.right} $\llbracket g \circ \mathsf{F}_\oc(f) \rrbracket = \llbracket g \rrbracket \circ \oc(f)$
\item\label{cokleislilem1.left} $\llbracket \mathsf{F}_{\oc}(g) \circ f \rrbracket = g \circ \llbracket f \rrbracket$
\item \label{cokleislilem1.varphi} $\llbracket f \rrbracket = \llbracket \mathsf{F}_\oc\left( \llbracket f \rrbracket \right) \circ \varphi_A \rrbracket$
\item \label{cokleislilem1.varphi2} $\llbracket \varphi_B \circ \mathsf{F}_\oc(f) \rrbracket = \oc(f)$
\item \label{cokleislilem1.varphi3} $\llbracket \varphi_{\oc(A)} \circ \varphi_A \rrbracket = \delta_A$
\end{enumerate}
\end{lemma}
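The coKleisli calculus and the identities of the lemma above can be illustrated with the familiar ``environment'' comonad $\oc(A) = E \times A$ on sets, where $\varepsilon(e,a) = a$ and $\delta(e,a) = (e,(e,a))$. Since $\mathsf{Set}$ lacks biproducts, this is not a Cartesian differential comonad; the Python sketch below (our choice of example) only demonstrates the composition formulas:

```python
# The "environment" comonad on Set: !(A) = E x A, with
#   epsilon(e, a) = a   and   delta(e, a) = (e, (e, a)).
eps = lambda w: w[1]
delta = lambda w: (w[0], (w[0], w[1]))
bang = lambda k: (lambda w: (w[0], k(w[1])))     # the functor !: !(k)(e, x) = (e, k(x))

def cokleisli(g, f):
    """coKleisli composite [[g o f]] = [[g]] . !([[f]]) . delta."""
    return lambda w: g(bang(f)(delta(w)))

# Sample coKleisli maps !(A) = E x A -> B, with E = A = B = int:
f = lambda w: w[1] + w[0]        # [[f]](e, a) = a + e
g = lambda w: w[1] * w[0]        # [[g]](e, b) = b * e
w = (3, 10)

assert cokleisli(g, f)(w) == (10 + 3) * 3        # = 39
assert cokleisli(g, eps)(w) == g(w)              # [[g o 1]] = [[g]]
assert cokleisli(eps, f)(w) == f(w)              # [[1 o f]] = [[f]]
# Item (iv) of the lemma above: [[phi o F(h)]] = !(h) for a base map h: A -> B.
h = lambda a: 2 * a
Fh = lambda w: h(eps(w))                         # [[F(h)]] = h . epsilon
phi = lambda w: w                                # [[phi]] = 1_{!(A)}
assert cokleisli(phi, Fh)(w) == bang(h)(w) == (3, 20)
```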
It is a well-known result that if the base category has finite products, then so does the coKleisli category.
\begin{lemma} \label{cokleisliproduct} \cite[Dual of Proposition 2.2]{szigeti1983limits} Let $(\oc, \delta, \varepsilon)$ be a comonad on a category $\mathbb{X}$ with finite products. Then the coKleisli category $\mathbb{X}_\oc$ has finite products where:
\begin{enumerate}[{\em (i)}]
\item The product $\times$ on objects is defined as in $\mathbb{X}$;
\item The projection maps $\llbracket \pi_0 \rrbracket: \oc(A \times B) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} A$ and $\llbracket \pi_1 \rrbracket: \oc(A \times B) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ are defined respectively as follows:
\begin{align*}
\llbracket \pi_0 \rrbracket:= \xymatrixcolsep{3pc}\xymatrix{\oc (A \times B) \ar[r]^-{\varepsilon_{A \times B}} & A \times B \ar[r]^-{\pi_0} & A } &&\llbracket \pi_0 \rrbracket = \pi_0 \circ \varepsilon_{A \times B} \\
\llbracket \pi_1 \rrbracket:= \xymatrixcolsep{3pc}\xymatrix{\oc (A \times B) \ar[r]^-{\varepsilon_{A \times B}} & A \times B \ar[r]^-{\pi_1} & B } &&\llbracket \pi_1 \rrbracket = \pi_1 \circ \varepsilon_{A \times B}
\end{align*}
\item The pairing of coKleisli maps $\llbracket f \rrbracket: \oc(C) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} A$ and $\llbracket g \rrbracket: \oc(C) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ is defined as in $\mathbb{X}$, that is:
\begin{align*}
\llbracket \langle f, g \rangle \rrbracket := \xymatrixcolsep{5pc}\xymatrix{\oc (C) \ar[r]^-{\left \langle \llbracket f \rrbracket, \llbracket g \rrbracket \right \rangle} & A \times B }
\end{align*}
\item The terminal object $\top$ is the same as in $\mathbb{X}$.
\item For coKleisli maps $\llbracket f \rrbracket: \oc(A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} C$ and $\llbracket g \rrbracket: \oc(B) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} D$, their product is equal to the following composite:
\begin{align*}
\llbracket f \times g \rrbracket:= \xymatrixcolsep{3.5pc}\xymatrix{\oc (A \times B) \ar[r]^-{\langle \oc(\pi_0), \oc(\pi_1) \rangle} & \oc(A) \times \oc(B) \ar[r]^-{\llbracket f \rrbracket \times \llbracket g \rrbracket} & C \times D } &&\llbracket f \times g \rrbracket = \left( \llbracket f \rrbracket \times \llbracket g \rrbracket \right) \circ \langle \oc(\pi_0), \oc(\pi_1) \rangle
\end{align*}
\item \label{cokleisliproduct.F} $\mathsf{F}_\oc: \mathbb{X} \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \mathbb{X}_\oc$ preserves the finite product strictly, that is, the following equalities hold:
\begin{align*}
\mathsf{F}_\oc(A \times B) &= A \times B && &\mathsf{F}_\oc(\top) &= \top \\
\llbracket \mathsf{F}_\oc(\pi_0) \rrbracket &= \llbracket \pi_0 \rrbracket &&& \llbracket \mathsf{F}_\oc(\pi_1) \rrbracket&= \llbracket \pi_1 \rrbracket \\
\llbracket \mathsf{F}_\oc\left(\langle f, g \rangle \right) \rrbracket &= \llbracket \langle \mathsf{F}_\oc(f), \mathsf{F}_\oc(g) \rangle \rrbracket &&& \llbracket \mathsf{F}_\oc\left( f \times g \right) \rrbracket &= \llbracket \mathsf{F}_\oc(f) \times \mathsf{F}_\oc(g) \rrbracket
\end{align*}
\end{enumerate}
\end{lemma}
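The product structure can likewise be checked pointwise in a toy model of our own devising (the environment comonad $\oc(A) = E \times A$, which is not an example from the paper): the projections are $\pi_j \circ \varepsilon$ and the pairing is taken as in the base, and the universal property holds up to coKleisli composition.

```python
# Toy environment comonad !(A) = E x A (our own illustration).
def bang(f):   return lambda ea: (ea[0], f(ea[1]))
def eps(ea):   return ea[1]
def delta(ea): return (ea[0], ea)
def ckl(g, f): return lambda ea: g(bang(f)(delta(ea)))

# Product structure of the coKleisli category, as in the lemma:
proj0 = lambda eab: eps(eab)[0]                       # [[pi_0]] = pi_0 . eps
proj1 = lambda eab: eps(eab)[1]                       # [[pi_1]] = pi_1 . eps
def pairing(f, g): return lambda ec: (f(ec), g(ec))   # [[<f, g>]]

f = lambda ec: ec[0] + ec[1]   # [[f]]: !(C) -> A
g = lambda ec: ec[0] * ec[1]   # [[g]]: !(C) -> B
pt = (1.0, 4.0)                # a sample point of !(C)

# Universal property, checked at pt via coKleisli composition:
assert ckl(proj0, pairing(f, g))(pt) == f(pt)   # [[pi_0 o <f,g>]] = [[f]]
assert ckl(proj1, pairing(f, g))(pt) == g(pt)   # [[pi_1 o <f,g>]] = [[g]]
```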
If the base category is also Cartesian left additive, then so is the coKleisli category, in a canonical way: the additive structure is simply inherited from the base category.
\begin{lemma}\label{cokleisliCLAC} \cite[Proposition 1.3.3]{blute2009cartesian} Let $(\oc, \delta, \varepsilon)$ be a comonad on a Cartesian left additive category $\mathbb{X}$ with finite products. Then the coKleisli category $\mathbb{X}_\oc$ is a Cartesian left additive category where the finite product structure is given in Lemma \ref{cokleisliproduct} and where:
\begin{enumerate}[{\em (i)}]
\item The sum of coKleisli maps $\llbracket f \rrbracket: \oc(A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ and $\llbracket g \rrbracket: \oc(A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ is defined as in $\mathbb{X}$, that is:
\[\llbracket f+g \rrbracket = \llbracket f \rrbracket + \llbracket g \rrbracket\]
\item The zero $\llbracket 0 \rrbracket: \oc(A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ is the same as in $\mathbb{X}$, that is:
\[\llbracket 0 \rrbracket = 0\]
\item \label{cokleisliCLAC.F1} $\mathsf{F}_\oc: \mathbb{X} \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \mathbb{X}_\oc$ preserves the additive structure strictly, that is, the following equalities hold:
\begin{align*}
\llbracket \mathsf{F}_\oc(f + g) \rrbracket= \llbracket \mathsf{F}_\oc(f) + \mathsf{F}_\oc(g) \rrbracket && \llbracket \mathsf{F}_\oc(0) \rrbracket= 0
\end{align*}
\item \label{cokleisliCLAC.F2} The following equalities hold:
\begin{align*}
\llbracket \iota_0 \rrbracket = \iota_0 \circ \varepsilon_{A} = \llbracket \mathsf{F}_\oc(\iota_0) \rrbracket && \llbracket \iota_1 \rrbracket = \iota_1 \circ \varepsilon_{A} = \llbracket \mathsf{F}_\oc(\iota_1) \rrbracket\\
\llbracket \nabla_A \rrbracket = \nabla_A \circ \varepsilon_{A \times A} = \llbracket \mathsf{F}_\oc(\nabla_A) \rrbracket && \llbracket \ell_A \rrbracket = \ell_A \circ \varepsilon_{A \times A} = \llbracket \mathsf{F}_\oc(\ell_A) \rrbracket
\end{align*}
\[ \llbracket c_A \rrbracket = c_A \circ \varepsilon_{(A \times A) \times (A \times A)} = \llbracket \mathsf{F}_\oc(c_A) \rrbracket \]
where $\iota_j$, $\nabla$, $\ell$, and $c$ are defined as in Definition \ref{CLACmapsdef}.
\end{enumerate}
\end{lemma}
Now since every category $\mathbb{X}$ with finite biproducts is a Cartesian left additive category, it follows that for every comonad $(\oc, \delta, \varepsilon)$ on $\mathbb{X}$, the coKleisli category $\mathbb{X}_\oc$ is a Cartesian left additive category. It is important to point out that even if all maps in $\mathbb{X}$ are additive maps, the same is not true for $\mathbb{X}_\oc$. This is due to the fact that $\oc(f +g)$ and $\oc(0)$ do not necessarily equal $\oc(f) + \oc(g)$ and $0$ respectively.
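The failure of additivity in the coKleisli category is easy to witness concretely. In the toy environment comonad $\oc(A) = {\mathbb R} \times A$ on real vector spaces (our own illustration, not from the paper), all base maps are linear, yet $\oc(f+g) \neq \oc(f) + \oc(g)$ because $\oc$ copies the environment component while the sum of maps doubles it:

```python
# Toy check (our own, with E = R and A = B = R, all maps linear):
# !(f)(e, a) = (e, f(a)), and sums of maps are taken pointwise.

def bang(f): return lambda ea: (ea[0], f(ea[1]))

def add(h, k):   # pointwise sum of two maps into E x B
    return lambda ea: tuple(x + y for x, y in zip(h(ea), k(ea)))

f  = lambda a: 2.0 * a
g  = lambda a: 3.0 * a
fg = lambda a: f(a) + g(a)

pt = (1.0, 1.0)
assert bang(fg)(pt) == (1.0, 5.0)                 # !(f + g)
assert add(bang(f), bang(g))(pt) == (2.0, 5.0)    # !(f) + !(g): E-part doubled
assert bang(fg)(pt) != add(bang(f), bang(g))(pt)
```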
We now provide the first main result of this paper: that the coKleisli category of a Cartesian differential comonad is a Cartesian differential category.
\begin{theorem}\label{thm1} Let $(\oc, \delta, \varepsilon, \partial)$ be a Cartesian differential comonad on a category $\mathbb{X}$ with finite biproducts. Then the coKleisli category $\mathbb{X}_\oc$ is a Cartesian differential category where the Cartesian left additive structure is defined as in Lemma \ref{cokleisliCLAC} and the differential combinator $\mathsf{D}$ is defined as follows: for a map ${\llbracket f \rrbracket: \oc(A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B}$, its derivative $\llbracket \mathsf{D}[f] \rrbracket: \oc(A \times A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ is defined as the following composite:
\[ \llbracket \mathsf{D}[f] \rrbracket := \xymatrixcolsep{5pc}\xymatrix{\oc(A \times A) \ar[r]^-{\partial_A} & \oc(A) \ar[r]^-{\llbracket f \rrbracket} & B
} \]
Furthermore:
\begin{enumerate}[{\em (i)}]
\item \label{thm1.varphi} For every object $A$ in $\mathbb{X}$, $\llbracket \mathsf{D}[\varphi_A] \rrbracket = \partial_A$.
\item \label{thm1.lin}A coKleisli map $\llbracket f \rrbracket: \oc(A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ is $\mathsf{D}$-linear in the coKleisli category if and only if $\llbracket f \rrbracket \circ \partial_A \circ \oc(\iota_1) = \llbracket f \rrbracket$.
\item For every map $f: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ in $\mathbb{X}$, $\llbracket \mathsf{F}_\oc(f) \rrbracket$ is $\mathsf{D}$-linear in $\mathbb{X}_\oc$.
\item \label{Flindef} There is a functor $\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}: \mathbb{X} \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \mathsf{D}\text{-}\mathsf{lin}[\mathbb{X}_\oc]$ which is defined on objects as $\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}(A) = A$ and on maps $f: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ as $\llbracket \mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}(f) \rrbracket = f \circ \varepsilon_A = \llbracket \mathsf{F}_{\oc}(f) \rrbracket$, and such that the following diagram commutes:
\[ \xymatrixcolsep{5pc}\xymatrix{ \mathbb{X} \ar[dr]_-{\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}} \ar[rr]^-{\mathsf{F}_\oc} && \mathbb{X}_\oc \\
& \mathsf{D}\text{-}\mathsf{lin}[\mathbb{X}_\oc] \ar[ur]_-{\mathsf{U}}
} \]
\end{enumerate}
\end{theorem}
\begin{proof} We prove the seven axioms of a differential combinator. We make heavy use of Lemma \ref{cokleislilem1}.(\ref{cokleislilem1.right}).
\begin{enumerate}[{\bf [CD.1]}]
\item Here we use the additive enrichment of $\mathbb{X}$:
\begin{align*}
\llbracket \mathsf{D}[f +g] \rrbracket &= \llbracket f + g \rrbracket \circ \partial_A \\
&= \left( \llbracket f \rrbracket + \llbracket g \rrbracket \right) \circ \partial_A \\
&= \llbracket f \rrbracket \circ \partial_A + \llbracket g \rrbracket \circ \partial_A \\
&= \llbracket \mathsf{D}[f] \rrbracket + \llbracket \mathsf{D}[g] \rrbracket \\\\
\llbracket \mathsf{D}[0] \rrbracket &= \llbracket 0 \rrbracket \circ \partial_A \\
&= 0 \circ \partial_A \\
&= 0
\end{align*}
\item Here we use the fact that every map in $\mathbb{X}$ is additive, and both the zero rule \textbf{[dc.1]} and additive rule \textbf{[dc.2]}:
\begin{align*}
\llbracket \mathsf{D}[f] \circ (1_A \times \nabla_A) \rrbracket &=~\llbracket \mathsf{D}[f] \circ \mathsf{F}_{\oc}(1_A \times \nabla_A) \rrbracket \tag{Lem.\ref{cokleisliproduct}.(\ref{cokleisliproduct.F}) + Lem.\ref{cokleisliCLAC}.(\ref{cokleisliCLAC.F2})} \\
&=~\llbracket \mathsf{D}[f] \rrbracket \circ \oc(1_A \times \nabla_A) \tag{Lem.\ref{cokleislilem1}.(\ref{cokleislilem1.right})} \\
&=~\llbracket f \rrbracket \circ \partial_A \circ \oc(1_A \times \nabla_A) \\
&=~\llbracket f \rrbracket \circ \partial_A \circ \left( \oc(1_A \times \pi_0) + \oc(1_A \times \pi_1) \right) \tag{\textbf{[dc.2]}}\\
&=~\llbracket f \rrbracket \circ \partial_A \circ \oc(1_A \times \pi_0) + \llbracket f \rrbracket \circ \partial_A \circ \oc(1_A \times \pi_1) \\
&=~\llbracket \mathsf{D}[f] \rrbracket \circ \oc(1_A \times \pi_0) + \llbracket \mathsf{D}[f] \rrbracket \circ \oc(1_A \times \pi_1) \\
&=~ \llbracket \mathsf{D}[f] \circ \mathsf{F}_{\oc}(1_A \times \pi_0) \rrbracket + \llbracket \mathsf{D}[f] \circ \mathsf{F}_{\oc}(1_A \times \pi_1) \rrbracket \tag{Lem.\ref{cokleislilem1}.(\ref{cokleislilem1.right})} \\
&=~ \llbracket \mathsf{D}[f] \circ (1_A \times \pi_0) \rrbracket + \llbracket \mathsf{D}[f] \circ (1_A \times \pi_1) \rrbracket \tag{Lem.\ref{cokleisliproduct}.(\ref{cokleisliproduct.F}) + Lem.\ref{cokleisliCLAC}.(\ref{cokleisliCLAC.F2})} \\
&=~\llbracket \mathsf{D}[f] \circ (1_A \times \pi_0) + \mathsf{D}[f] \circ (1_A \times \pi_1) \rrbracket
\end{align*}
\begin{align*}
\llbracket \mathsf{D}[f] \circ \iota_0 \rrbracket &=~\llbracket \mathsf{D}[f] \circ \mathsf{F}_\oc(\iota_0) \rrbracket \tag{Lem.\ref{cokleisliCLAC}.(\ref{cokleisliCLAC.F2})} \\
&=~\llbracket \mathsf{D}[f] \rrbracket \circ \oc(\iota_0) \tag{Lem.\ref{cokleislilem1}.(\ref{cokleislilem1.right})} \\
&=~\llbracket f \rrbracket \circ \partial_A \circ \oc(\iota_0) \\
&=~\llbracket f \rrbracket \circ 0 \tag{\textbf{[dc.1]}}\\
&=~ 0
\end{align*}
\item Here we use the linear rule \textbf{[dc.3]} and Lemma \ref{cokleislilem1}.(\ref{cokleislilem1.left}):
\begin{align*}
\llbracket \mathsf{D}[1_A] \rrbracket &=~ \llbracket 1_A \rrbracket \circ \partial_A \\
&=~\varepsilon_A \circ \partial_A \\
&=~\pi_1 \circ \varepsilon_{A \times A} \tag{\textbf{[dc.3]}}\\
&=~\llbracket \pi_1 \rrbracket \\ \\
\llbracket \mathsf{D}[\pi_j] \rrbracket &=~ \llbracket \pi_j \rrbracket \circ \partial_{A \times B} \\
&=~\pi_j \circ \varepsilon_{A \times B} \circ \partial_{A \times B} \\
&=~\pi_j \circ \pi_1 \circ \varepsilon_{(A \times B) \times (A \times B)} \tag{\textbf{[dc.3]}}\\
&=~ \pi_j \circ \llbracket \pi_1 \rrbracket \\
&=~\llbracket \mathsf{F}_\oc(\pi_j) \circ \pi_1 \rrbracket \tag{Lem.\ref{cokleislilem1}.(\ref{cokleislilem1.left})} \\
&=~ \llbracket \pi_j \circ \pi_1 \rrbracket \tag{Lem.\ref{cokleisliproduct}.(\ref{cokleisliproduct.F})}
\end{align*}
\item This is mostly straightforward from the product structure:
\begin{align*}
\llbracket \mathsf{D}\left[ \langle f, g \rangle \right] \rrbracket &=~\llbracket \langle f, g \rangle \rrbracket \circ \partial_A \\
&=~\left \langle \llbracket f \rrbracket, \llbracket g \rrbracket \right \rangle \circ \partial_A \\
&=~\left \langle \llbracket f \rrbracket \circ \partial_A, \llbracket g \rrbracket \circ \partial_A \right\rangle \\
&=~ \left \langle \llbracket \mathsf{D}[f] \rrbracket, \llbracket \mathsf{D}[g] \rrbracket \right \rangle \\
&=~\llbracket \langle \mathsf{D}[f], \mathsf{D}[g] \rangle \rrbracket
\end{align*}
\item Here we use the chain rule \textbf{[dc.4]} and the naturality of $\partial$:
\begin{align*}
\llbracket \mathsf{D}[g \circ f] \rrbracket &=~\llbracket g \circ f \rrbracket \circ \partial_A \\
&=~ \llbracket g \rrbracket \circ \oc\left( \llbracket f \rrbracket \right) \circ \delta_A \circ \partial_A \\
&=~ \llbracket g \rrbracket \circ \oc\left( \llbracket f \rrbracket \right) \circ \partial_{\oc(A)} \circ \oc\left( \langle \oc(\pi_0), \partial_A \rangle \right) \circ \delta_{A \times A} \tag{\textbf{[dc.4]}} \\
&=~ \llbracket g \rrbracket \circ \partial_{B} \circ \oc\left( \llbracket f \rrbracket \times \llbracket f \rrbracket \right) \circ \oc\left( \langle \oc(\pi_0), \partial_A \rangle \right) \circ \delta_{A \times A} \tag{Naturality of $\partial$} \\
&=~ \llbracket \mathsf{D}[g] \rrbracket \circ \oc\left( \llbracket f \rrbracket \times \llbracket f \rrbracket \right) \circ \oc\left( \langle \oc(\pi_0), \partial_A \rangle \right) \circ \delta_{A \times A} \\
&=~ \llbracket \mathsf{D}[g] \rrbracket \circ \oc\left( \left(\llbracket f \rrbracket \times \llbracket f \rrbracket \right) \circ \langle \oc(\pi_0), \partial_A \rangle \right) \circ \delta_{A \times A} \tag{Functoriality of $\oc$} \\
&=~ \llbracket \mathsf{D}[g] \rrbracket \circ \oc\left( \left \langle \llbracket f \rrbracket \circ \oc(\pi_0), \llbracket f \rrbracket \circ \partial_A \right \rangle \right) \circ \delta_{A \times A} \\
&=~ \llbracket \mathsf{D}[g] \rrbracket \circ \oc\left( \left \langle \llbracket f \rrbracket \circ \oc(\pi_0), \llbracket \mathsf{D}[f] \rrbracket \right \rangle \right) \circ \delta_{A \times A} \\
&=~ \llbracket \mathsf{D}[g] \rrbracket \circ \oc\left( \left \langle \llbracket f \circ \mathsf{F}_\oc(\pi_0) \rrbracket, \llbracket \mathsf{D}[f] \rrbracket \right \rangle \right) \circ \delta_{A \times A} \tag{Lem.\ref{cokleislilem1}.(\ref{cokleislilem1.right})} \\
&=~ \llbracket \mathsf{D}[g] \rrbracket \circ \oc\left( \left \langle \llbracket f \circ \pi_0 \rrbracket, \llbracket \mathsf{D}[f] \rrbracket \right \rangle \right) \circ \delta_{A \times A} \tag{Lem.\ref{cokleisliproduct}.(\ref{cokleisliproduct.F})} \\
&=~ \llbracket \mathsf{D}[g] \rrbracket \circ \oc\left( \left \llbracket \left \langle f \circ \pi_0, \mathsf{D}[f] \right \rangle \right \rrbracket \right) \circ \delta_{A \times A} \\
&=~ \left \llbracket \mathsf{D}[g] \circ \left \langle f \circ \pi_0, \mathsf{D}[f] \right \rangle \right \rrbracket
\end{align*}
\item Here we use the lifting rule \textbf{[dc.5]}:
\begin{align*}
\llbracket \mathsf{D}\left[\mathsf{D}[f] \right] \circ \ell_A \rrbracket &=~\llbracket \mathsf{D}\left[\mathsf{D}[f] \right] \circ \mathsf{F}_\oc(\ell_A) \rrbracket \tag{Lem.\ref{cokleisliCLAC}.(\ref{cokleisliCLAC.F2})} \\
&=~\llbracket \mathsf{D}\left[\mathsf{D}[f] \right] \rrbracket \circ \oc(\ell_A) \tag{Lem.\ref{cokleislilem1}.(\ref{cokleislilem1.right})} \\
&=~ \llbracket \mathsf{D}[f] \rrbracket \circ \partial_{A \times A} \circ \oc(\ell_A) \\
&=~\llbracket f \rrbracket \circ \partial_A \circ \partial_{A \times A} \circ \oc(\ell_A) \\
&=~\llbracket f \rrbracket \circ \partial_A \tag{\textbf{[dc.5]}} \\
&=~ \llbracket \mathsf{D}[f] \rrbracket
\end{align*}
\item Here we use the symmetry rule \textbf{[dc.6]}:
\begin{align*}
\llbracket \mathsf{D}\left[\mathsf{D}[f] \right] \circ c_A \rrbracket &=~\llbracket \mathsf{D}\left[\mathsf{D}[f] \right] \circ \mathsf{F}_\oc(c_A) \rrbracket \tag{Lem.\ref{cokleisliCLAC}.(\ref{cokleisliCLAC.F2})} \\
&=~\llbracket \mathsf{D}\left[\mathsf{D}[f] \right] \rrbracket \circ \oc(c_A) \tag{Lem.\ref{cokleislilem1}.(\ref{cokleislilem1.right})} \\
&=~ \llbracket \mathsf{D}[f] \rrbracket \circ \partial_{A \times A} \circ \oc(c_A) \\
&=~\llbracket f \rrbracket \circ \partial_A \circ \partial_{A \times A} \circ \oc(c_A) \\
&=~\llbracket f \rrbracket \circ \partial_A \circ \partial_{A \times A} \tag{\textbf{[dc.6]}} \\
&=~ \llbracket \mathsf{D}[f] \rrbracket \circ \partial_{A \times A} \\
&=~\llbracket \mathsf{D}\left[\mathsf{D}[f] \right] \rrbracket
\end{align*}
\end{enumerate}
So we conclude that $\mathsf{D}$ is a differential combinator, and therefore that the coKleisli category $\mathbb{X}_\oc$ is a Cartesian differential category. Next we prove the remaining claims.
\begin{enumerate}[{\em (i)}]
\item This is automatic by definition since:
\begin{align*}
\llbracket \mathsf{D}[\varphi_A] \rrbracket &=~ \llbracket \varphi_A \rrbracket \circ \partial_A \\
&=~ 1_{\oc(A)} \circ \partial_A \\
&=~\partial_A
\end{align*}
\item By Lemma \ref{linlem}.(\ref{linlemimportant2}), $\llbracket f \rrbracket$ is $\mathsf{D}$-linear if and only if $\llbracket \mathsf{L}[f] \rrbracket = \llbracket f \rrbracket$. However, expanding the left hand side of the equality we have that:
\begin{align*}
\llbracket \mathsf{L}[f] \rrbracket &=~ \llbracket \mathsf{D}[f] \circ \iota_1 \rrbracket \\
&=~\llbracket \mathsf{D}[f] \circ \mathsf{F}_\oc(\iota_1) \rrbracket \tag{Lem.\ref{cokleisliCLAC}.(\ref{cokleisliCLAC.F2})} \\
&=~\llbracket \mathsf{D}[f] \rrbracket \circ \oc(\iota_1) \tag{Lem.\ref{cokleislilem1}.(\ref{cokleislilem1.right})} \\
&=~\llbracket f \rrbracket \circ \partial_A \circ \oc(\iota_1)
\end{align*}
Therefore, $\llbracket f \rrbracket$ is $\mathsf{D}$-linear if and only if $\llbracket f \rrbracket \circ \partial_A \circ \oc(\iota_1) = \llbracket f \rrbracket$.
\item For any map $f: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ in $\mathbb{X}$, using the linear rule \textbf{[dc.3]}, naturality of $\varepsilon$, and the biproduct identities, we compute:
\begin{align*}
\llbracket \mathsf{F}_\oc(f) \rrbracket \circ \partial_A \circ \oc(\iota_1) &=~f \circ \varepsilon_A \circ \partial_A \circ \oc(\iota_1) \\
&=~f \circ \pi_1 \circ \varepsilon_{A \times A} \circ \oc(\iota_1) \\
&=~ f \circ \pi_1 \circ \iota_1 \circ \varepsilon_{A} \tag{Naturality of $\varepsilon$} \\
&=~f \circ \varepsilon_A \tag{Biproduct Identity} \\
&=~\llbracket \mathsf{F}_\oc(f) \rrbracket
\end{align*}
Therefore, since $\llbracket \mathsf{F}_\oc(f) \rrbracket \circ \partial_A \circ \oc(\iota_1) = \llbracket \mathsf{F}_\oc(f) \rrbracket$, by the above, it follows that $\llbracket \mathsf{F}_\oc(f) \rrbracket $ is $\mathsf{D}$-linear.
\item By the above, $\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}$ is well-defined and is indeed a functor since $\mathsf{F}_\oc$ is a functor. Furthermore, it is automatic by definition that $\mathsf{U} \circ \mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}} = \mathsf{F}_\oc$. \end{enumerate}
\end{proof}
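The derivative formula $\llbracket \mathsf{D}[f] \rrbracket = \llbracket f \rrbracket \circ \partial_A$ can be illustrated on a small running example of our own (not one appearing in the paper): the environment comonad $\oc(A) = {\mathbb R} \times A$ on real vector spaces together with the candidate differential combinator transformation $\partial_A(e,(a,b)) = (0,b)$, whose coKleisli maps behave like maps with a ``constant part'' carried by the environment. The sketch below checks pointwise instances of the chain rule \textbf{[CD.5]} and of the $\mathsf{D}$-linearity criterion; it does not prove the axioms in general.

```python
# Toy Cartesian-differential-style comonad (our own assumption):
# !(A) = R x A, partial(e, (a, b)) = (0, b), and [[D[f]]] = [[f]] . partial.

def bang(f):   return lambda ea: (ea[0], f(ea[1]))
def eps(ea):   return ea[1]
def delta(ea): return (ea[0], ea)
def ckl(g, f): return lambda ea: g(bang(f)(delta(ea)))
def F(f):      return lambda ea: f(eps(ea))

def partial(eab):              # candidate differential combinator transformation
    (a, b) = eps(eab)
    return (0.0, b)

def D(f):                      # [[D[f]]] = [[f]] . partial
    return lambda eab: f(partial(eab))

pi0 = lambda ab: ab[0]
def pairing(f, g): return lambda ec: (f(ec), g(ec))
iota1 = lambda a: (0.0, a)     # second injection A -> A x A

f = lambda ea: 2.0 * ea[1] + ea[0]   # [[f]]: !(A) -> B, with a constant part
g = lambda eb: 5.0 * eb[1] - eb[0]   # [[g]]: !(B) -> C

pt = (1.0, (2.0, 3.0))               # a sample point of !(A x A)

# Chain rule [CD.5]:  [[D[g o f]]] = [[ D[g] o <f o pi_0, D[f]> ]]
lhs = D(ckl(g, f))(pt)
rhs = ckl(D(g), pairing(ckl(f, F(pi0)), D(f)))(pt)
assert lhs == rhs

# [[F(h)]] is D-linear:  [[F(h)]] . partial . !(iota_1) = [[F(h)]]
h = lambda a: 4.0 * a
q = (1.0, 2.0)                       # a sample point of !(A)
assert F(h)(partial(bang(iota1)(q))) == F(h)(q)

# ... but [[f]] above is NOT D-linear: its constant part is discarded.
assert f(partial(bang(iota1)(q))) != f(q)
```

In this toy model the $\mathsf{D}$-linear coKleisli maps are exactly those that ignore the environment component, matching the criterion of part (\ref{thm1.lin}).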
We will now prove the converse of Theorem \ref{thm1} by showing that a comonad whose coKleisli category is a Cartesian differential category, compatibly with the canonical Cartesian left additive structure, is indeed a Cartesian differential comonad.
\begin{proposition}\label{prop1} Let $\mathbb{X}$ be a category with finite biproducts and let $(\oc, \delta, \varepsilon)$ be a comonad on $\mathbb{X}$. Suppose that the coKleisli category $\mathbb{X}_\oc$ is a Cartesian differential category with differential combinator $\mathsf{D}$ such that:
\begin{enumerate}[{\em (i)}]
\item The underlying Cartesian left additive structure of $\mathbb{X}_\oc$ is the one from Lemma \ref{cokleisliCLAC};
\item For every map $f: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ in $\mathbb{X}$, $\llbracket \mathsf{F}_\oc(f) \rrbracket$ is a $\mathsf{D}$-linear map in $\mathbb{X}_\oc$.
\end{enumerate}
Define the natural transformation $\partial_A: \oc(A \times A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \oc(A)$ as follows:
\begin{equation}\label{partialdef}\begin{gathered}\partial_A := \xymatrixcolsep{5pc}\xymatrix{ \oc(A \times A) \ar[r]^-{\llbracket \mathsf{D}[\varphi_A] \rrbracket} & \oc(A)
} \end{gathered}\end{equation}
Then $(\oc, \delta, \varepsilon, \partial)$ is a Cartesian differential comonad and furthermore for every coKleisli map $\llbracket f \rrbracket: \oc(A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$, the following diagram commutes (in $\mathbb{X}$):
\begin{equation}\label{Dvarphi1}\begin{gathered}
\xymatrixcolsep{5pc}\xymatrix{ \oc(A \times A) \ar[dr]_-{\llbracket \mathsf{D}[f] \rrbracket} \ar[r]^-{\partial_A} & \oc(A) \ar[d]^-{\llbracket f \rrbracket} \\
& B }
\end{gathered}\end{equation}
\end{proposition}
\begin{proof} We begin by proving (\ref{Dvarphi1}) as it will be useful in other parts of the proof:
\begin{align*}
\llbracket \mathsf{D}[f] \rrbracket &=~\llbracket \mathsf{D}\left[ \mathsf{F}_\oc\left( \llbracket f \rrbracket \right) \circ \varphi_A \right] \rrbracket \tag{Lem.\ref{cokleislilem1}.(\ref{cokleislilem1.varphi})} \\
&=~\llbracket \mathsf{F}_\oc\left( \llbracket f \rrbracket \right) \circ \mathsf{D}\left[ \varphi_A \right] \rrbracket \tag{$\mathsf{F}_\oc\left( \llbracket f \rrbracket \right)$ is $\mathsf{D}$-linear and Lem \ref{linlem}.(\ref{linlem.post})} \\
&=~ \llbracket f \rrbracket \circ \llbracket \mathsf{D}\left[ \varphi_A \right] \rrbracket \tag{Lem.\ref{cokleislilem1}.(\ref{cokleislilem1.left})} \\
&=~ \llbracket f \rrbracket \circ \partial_A
\end{align*}
So $\llbracket \mathsf{D}[f] \rrbracket = \llbracket f \rrbracket \circ \partial_A$. Next we prove that $\partial$ is natural:
\begin{align*}
\partial_B \circ \oc(f \times f) &=~\llbracket \mathsf{D}[\varphi_B] \rrbracket \circ \oc(f \times f) \\
&=~ \llbracket \mathsf{D}[\varphi_B] \circ \mathsf{F}_\oc(f \times f) \rrbracket \tag{Lem.\ref{cokleislilem1}.(\ref{cokleislilem1.right})} \\
&=~\llbracket \mathsf{D}[\varphi_B] \circ \left( \mathsf{F}_\oc(f) \times \mathsf{F}_\oc(f) \right) \rrbracket \tag{Lem.\ref{cokleisliproduct}.(\ref{cokleisliproduct.F})} \\
&=~\llbracket \mathsf{D}[\varphi_B \circ \mathsf{F}_\oc(f) ] \rrbracket \tag{$\mathsf{F}_\oc(f)$ is $\mathsf{D}$-linear and Lem \ref{linlem}.(\ref{linlem.pre})} \\
&=~\llbracket \varphi_B \circ \mathsf{F}_\oc(f) \rrbracket \circ \partial_A \tag{\ref{Dvarphi1}} \\
&=~\oc(f) \circ \partial_A \tag{Lem.\ref{cokleislilem1}.(\ref{cokleislilem1.varphi2})}
\end{align*}
So $\partial$ is a natural transformation. Next we show the six axioms of a differential combinator transformation:
\begin{enumerate}[{\bf [dc.1]}]
\item Here we use \textbf{[CD.2]}:
\begin{align*}
\partial_A \circ \oc(\iota_0) &=~ \llbracket \mathsf{D}[\varphi_A] \rrbracket \circ \oc(\iota_0) \\
&=~\llbracket \mathsf{D}[\varphi_A] \circ \mathsf{F}_\oc(\iota_0) \rrbracket \tag{Lem.\ref{cokleislilem1}.(\ref{cokleislilem1.right})} \\
&=~\llbracket \mathsf{D}[\varphi_A] \circ \iota_0 \rrbracket \tag{Lem.\ref{cokleisliCLAC}.(\ref{cokleisliCLAC.F2})} \\
&=~\llbracket 0 \rrbracket \tag{\textbf{[CD.2]}} \\
&=~ 0
\end{align*}
\item Here we use \textbf{[CD.2]}:
\begin{align*}
\partial_A \circ \oc(1_A \times \nabla_A) &=~\llbracket \mathsf{D}[\varphi_A] \rrbracket \circ \oc(1_A \times \nabla_A) \\
&=~\llbracket \mathsf{D}[\varphi_A] \circ \mathsf{F}_\oc(1_A \times \nabla_A) \rrbracket \tag{Lem.\ref{cokleislilem1}.(\ref{cokleislilem1.right})} \\
&=~\llbracket \mathsf{D}[\varphi_A] \circ (1_A \times \nabla_A) \rrbracket \tag{Lem.\ref{cokleisliproduct}.(\ref{cokleisliproduct.F}) + Lem.\ref{cokleisliCLAC}.(\ref{cokleisliCLAC.F2})} \\
&=~\llbracket \mathsf{D}[\varphi_A] \circ (1_A \times\pi_0) + \mathsf{D}[\varphi_A] \circ (1_A \times\pi_1) \rrbracket \tag{\textbf{[CD.2]}} \\
&=~ \llbracket \mathsf{D}[\varphi_A] \circ (1_A \times\pi_0) \rrbracket + \llbracket \mathsf{D}[\varphi_A] \circ (1_A \times\pi_1) \rrbracket \\
&=~ \llbracket \mathsf{D}[\varphi_A] \circ \mathsf{F}_\oc(1_A \times\pi_0) \rrbracket + \llbracket \mathsf{D}[\varphi_A] \circ \mathsf{F}_\oc(1_A \times\pi_1) \rrbracket \tag{Lem.\ref{cokleisliproduct}.(\ref{cokleisliproduct.F}) + Lem.\ref{cokleisliCLAC}.(\ref{cokleisliCLAC.F2})} \\
&=~\llbracket \mathsf{D}[\varphi_A] \rrbracket \circ \oc(1_A \times \pi_0) + \llbracket \mathsf{D}[\varphi_A] \rrbracket \circ \oc(1_A \times \pi_1) \tag{Lem.\ref{cokleislilem1}.(\ref{cokleislilem1.right})} \\
&=~\partial_A \circ \oc(1_A \times \pi_0) + \partial_A \circ \oc(1_A \times \pi_1) \\
&=~ \partial_A \circ \left( \oc(1_A \times \pi_0) + \oc(1_A \times \pi_1) \right)
\end{align*}
\item Here we use \textbf{[CD.3]}:
\begin{align*}
\varepsilon_A \circ \partial_A &=~\llbracket 1_A \rrbracket \circ \partial_A \\
&=~\llbracket \mathsf{D}[1_A] \rrbracket \tag{\ref{Dvarphi1}} \\
&=~\llbracket \pi_1 \rrbracket \tag{\textbf{[CD.3]}} \\
&=~\pi_1 \circ \varepsilon_{A \times A}
\end{align*}
\item Here we use \textbf{[CD.5]}:
\begin{align*}
\delta_A \circ \partial_A &=~\llbracket \varphi_{\oc(A)} \circ \varphi_A \rrbracket \circ \partial_A \tag{Lem.\ref{cokleislilem1}.(\ref{cokleislilem1.varphi3})} \\
&=~\llbracket \mathsf{D}\left[ \varphi_{\oc(A)} \circ \varphi_A \right] \rrbracket \tag{\ref{Dvarphi1}} \\
&=~ \llbracket \mathsf{D}\left[ \varphi_{\oc(A)} \right] \circ \langle \varphi_A \circ \pi_0, \mathsf{D}[\varphi_A] \rangle \rrbracket \tag{\textbf{[CD.5]}} \\
&=~\llbracket \mathsf{D}\left[ \varphi_{\oc(A)} \right] \rrbracket \circ \oc\left( \llbracket \langle \varphi_A \circ \pi_0, \mathsf{D}[\varphi_A] \rangle \rrbracket \right) \circ \delta_{A \times A} \\
&=~\partial_{\oc(A)} \circ \oc\left( \llbracket \langle \varphi_A \circ \pi_0, \mathsf{D}[\varphi_A] \rangle \rrbracket \right) \circ \delta_{A \times A} \\
&=~\partial_{\oc(A)} \circ \oc\left( \left \langle \llbracket \varphi_A \circ \pi_0 \rrbracket , \llbracket \mathsf{D}[\varphi_A] \rrbracket \right \rangle \right) \circ \delta_{A \times A} \\
&=~\partial_{\oc(A)} \circ \oc\left( \left \langle \llbracket \varphi_A \circ \pi_0 \rrbracket , \partial_A \right \rangle \right) \circ \delta_{A \times A} \\
&=~\partial_{\oc(A)} \circ \oc\left( \left \langle \llbracket \varphi_A \circ \mathsf{F}_\oc(\pi_0) \rrbracket , \partial_A \right \rangle \right) \circ \delta_{A \times A} \tag{Lem.\ref{cokleisliproduct}.(\ref{cokleisliproduct.F})} \\
&=~\partial_{\oc(A)} \circ \oc\left( \left \langle \oc(\pi_0) , \partial_A \right \rangle \right) \circ \delta_{A \times A} \tag{Lem.\ref{cokleislilem1}.(\ref{cokleislilem1.varphi2})}
\end{align*}
\item Here we use \textbf{[CD.6]}:
\begin{align*}
\partial_{A} \circ \partial_{A \times A} \circ \oc(\ell_A) &=~\llbracket \mathsf{D}[\varphi_{A}] \rrbracket \circ \partial_{A \times A} \circ \oc(\ell_A) \\
&=~ \llbracket \mathsf{D}\left[ \mathsf{D}[\varphi_{A}] \right] \rrbracket \circ \oc(\ell_A) \tag{\ref{Dvarphi1}} \\
&=~ \llbracket \mathsf{D}\left[ \mathsf{D}[\varphi_{A}] \circ \mathsf{F}_\oc(\ell_A) \right] \rrbracket \tag{Lem.\ref{cokleislilem1}.(\ref{cokleislilem1.right})} \\
&=~ \llbracket \mathsf{D}\left[ \mathsf{D}[\varphi_{A}] \circ \ell_A \right] \rrbracket \tag{Lem.\ref{cokleisliCLAC}.(\ref{cokleisliCLAC.F2})} \\
&=~\llbracket \mathsf{D}[\varphi_{A}] \rrbracket \tag{\textbf{[CD.6]}} \\
&=~ \partial_A
\end{align*}
\item Here we use \textbf{[CD.7]}:
\begin{align*}
\partial_{A} \circ \partial_{A \times A} \circ \oc(c_A) &=~\llbracket \mathsf{D}[\varphi_{A}] \rrbracket \circ \partial_{A \times A} \circ \oc(c_A) \\
&=~ \llbracket \mathsf{D}\left[ \mathsf{D}[\varphi_{A}] \right] \rrbracket \circ \oc(c_A) \tag{\ref{Dvarphi1}} \\
&=~ \llbracket \mathsf{D}\left[ \mathsf{D}[\varphi_{A}] \circ \mathsf{F}_\oc(c_A) \right] \rrbracket \tag{Lem.\ref{cokleislilem1}.(\ref{cokleislilem1.right})} \\
&=~ \llbracket \mathsf{D}\left[ \mathsf{D}[\varphi_{A}] \circ c_A \right] \rrbracket \tag{Lem.\ref{cokleisliCLAC}.(\ref{cokleisliCLAC.F2})} \\
&=~\llbracket \mathsf{D}\left[ \mathsf{D}[\varphi_{A}] \right] \rrbracket \tag{\textbf{[CD.7]}} \\
&=~\llbracket \mathsf{D}[\varphi_{A}] \rrbracket \circ \partial_{A \times A} \tag{\ref{Dvarphi1}} \\
&=~\partial_{A} \circ \partial_{A \times A}
\end{align*}
\end{enumerate}
So we conclude that $\partial$ is a differential combinator transformation and therefore that $(\oc, \delta, \varepsilon, \partial)$ is a Cartesian differential comonad.
\end{proof}
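A small pointwise check of the naturality of the extracted transformation $\partial$ can again be run in the toy environment comonad $\oc(A) = {\mathbb R} \times A$ with $\partial_A(e,(a,b)) = (0,b)$ (our own illustration, not the paper's example); here $\partial_A$ is exactly the interpretation $\llbracket \mathsf{D}[\varphi_A] \rrbracket$ of the derivative of $\varphi_A$ in that model.

```python
# Naturality check in the toy comonad !(A) = R x A (our own assumption):
# partial . !(f x f) = !(f) . partial, verified at a sample point.

def bang(f): return lambda ea: (ea[0], f(ea[1]))
def eps(ea): return ea[1]

def partial(eab):              # plays the role of [[D[phi_A]]] here
    (a, b) = eps(eab)
    return (0.0, b)

f  = lambda a: 2.0 * a                 # a base map A -> B
ff = lambda ab: (f(ab[0]), f(ab[1]))   # f x f

pt = (1.0, (2.0, 3.0))                 # a sample point of !(A x A)
assert partial(bang(ff)(pt)) == bang(f)(partial(pt))
```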
As a result, we obtain a bijective correspondence between differential combinator transformations and differential combinators.
\begin{corollary} Let $\mathbb{X}$ be a category with finite biproducts and let $(\oc, \delta, \varepsilon)$ be a comonad on $\mathbb{X}$. Then the following are in bijective correspondence:
\begin{enumerate}[{\em (i)}]
\item Differential combinator transformations $\partial$ on $(\oc, \delta, \varepsilon)$
\item Differential combinators $\mathsf{D}$ on the coKleisli category $\mathbb{X}_\oc$ with respect to the Cartesian left additive structure from Lemma \ref{cokleisliCLAC} and such that for every map $f: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ in $\mathbb{X}$, $\llbracket \mathsf{F}_\oc(f) \rrbracket: \oc(A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ is a $\mathsf{D}$-linear map in $\mathbb{X}_\oc$.
\end{enumerate}
via the constructions of Theorem \ref{thm1} and Proposition \ref{prop1}.
\end{corollary}
\begin{proof} This follows immediately from Theorem \ref{thm1}.(\ref{thm1.varphi}) and (\ref{Dvarphi1}).
\end{proof}
We now turn our attention back to the $\mathsf{D}$-linear maps in the coKleisli category of a Cartesian differential comonad. Specifically, we wish to provide necessary and sufficient conditions for when the subcategory of $\mathsf{D}$-linear maps is isomorphic to the base category. Explicitly, we wish to study when $\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}: \mathbb{X} \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \mathsf{D}\text{-}\mathsf{lin}[\mathbb{X}_\oc]$ as defined in Theorem \ref{thm1}.(\ref{Flindef}) is an isomorphism. The answer, as it turns out, is that the comonad counit must admit a natural section compatible with the differential combinator transformation.
\begin{definition}\label{def:Dunit} Let $(\oc, \delta, \varepsilon, \partial)$ be a Cartesian differential comonad on a category $\mathbb{X}$ with finite biproducts. A \textbf{$\mathsf{D}$-linear unit} on $(\oc, \delta, \varepsilon, \partial)$ is a natural transformation $\eta_A: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \oc(A)$ such that the following diagrams commute:
\begin{enumerate}[{\bf [du.1]}]
\item Linear Rule:
\[ \xymatrixcolsep{5pc}\xymatrix{ A \ar@{=}[dr]_-{} \ar[r]^-{\eta_A} & \oc(A) \ar[d]^-{\varepsilon_A} \\
& A
} \]
\item Linearization Rule:
\[ \xymatrixcolsep{5pc}\xymatrix{ \oc(A) \ar[r]^-{\varepsilon_A} \ar[d]_-{\oc(\iota_1)} & A \ar[d]^-{\eta_A} \\
\oc(A \times A) \ar[r]_-{\partial_A} & \oc(A)
} \]
where $\iota_1: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} A \times A$ is defined as in Definition \ref{CLACmapsdef}.(\ref{injdef}).
\end{enumerate}
In other words, for every object $A$, $\partial_A \circ \oc(\iota_1)$ is a split idempotent via $\eta_A$ and $\varepsilon_A$.
\end{definition}
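In the toy environment comonad $\oc(A) = {\mathbb R} \times A$ with $\partial_A(e,(a,b)) = (0,b)$ (our own running illustration, not an example from the paper), a candidate $\mathsf{D}$-linear unit is $\eta_A(a) = (0,a)$; both rules can be checked pointwise:

```python
# Candidate D-linear unit eta(a) = (0, a) in the toy comonad !(A) = R x A
# with partial(e, (a, b)) = (0, b) (our own assumption).

def bang(f): return lambda ea: (ea[0], f(ea[1]))
def eps(ea): return ea[1]

def partial(eab):
    (a, b) = eps(eab)
    return (0.0, b)

eta   = lambda a: (0.0, a)     # candidate D-linear unit A -> !(A)
iota1 = lambda a: (0.0, a)     # second injection A -> A x A

a = 3.0
q = (1.0, 3.0)                 # a sample point of !(A)

# [du.1] Linear rule: eps . eta = id
assert eps(eta(a)) == a
# [du.2] Linearization rule: eta . eps = partial . !(iota_1)
assert eta(eps(q)) == partial(bang(iota1)(q))
```

Note how the two assertions together exhibit $\partial_A \circ \oc(\iota_1)$ as a split idempotent through $A$, as observed after the definition.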
Our first observation is that $\mathsf{D}$-linear units are unique.
\begin{lemma} For a Cartesian differential comonad, if a $\mathsf{D}$-linear unit exists, then it is unique.
\end{lemma}
\begin{proof} Let $(\oc, \delta, \varepsilon, \partial)$ be a Cartesian differential comonad on a category $\mathbb{X}$ with finite biproducts. Suppose that $\eta$ and $\eta^\prime$ are two $\mathsf{D}$-linear units on $(\oc, \delta, \varepsilon, \partial)$. Combining the linear rule \textbf{[du.1]} and the linearization rule \textbf{[du.2]}, we compute:
\begin{align*}
\eta^\prime_A &=~\eta^\prime_A \circ 1_A \\
&=~\eta^\prime_A \circ \varepsilon_A \circ \eta_A \tag{\textbf{[du.1]} for $\eta$} \\
&=~\partial_A \circ \oc(\iota_1) \circ \eta_A \tag{\textbf{[du.2]} for $\eta^\prime$} \\
&=~\eta_A \circ \varepsilon_A \circ \eta_A \tag{\textbf{[du.2]} for $\eta$} \\
&=~ \eta_A \circ 1_A \tag{\textbf{[du.1]} for $\eta$} \\
&=~\eta_A
\end{align*}
So $\eta= \eta^\prime$. Therefore we conclude that if it exists, a $\mathsf{D}$-linear unit must be unique. \end{proof}
We now prove that for a Cartesian differential comonad with a $\mathsf{D}$-linear unit, the $\mathsf{D}$-linear maps in the coKleisli category correspond precisely to the maps in the base category. To do so, we will first need the following useful identity:
\begin{lemma} \label{Lvarphi} Let $(\oc, \delta, \varepsilon, \partial)$ be a Cartesian differential comonad on a category $\mathbb{X}$ with finite biproducts. Then $\llbracket \mathsf{L}[\varphi_A] \rrbracket = \partial_A \circ \oc(\iota_1)$.
\end{lemma}
\begin{proof} We compute:
\begin{align*}
\llbracket \mathsf{L}[\varphi_A] \rrbracket &=~\llbracket \mathsf{D}[\varphi_A] \circ \iota_1 \rrbracket \\
&=~\llbracket \mathsf{D}[\varphi_A] \circ \mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}\left( \oc(\iota_1) \right) \rrbracket \tag{Lem.\ref{cokleisliCLAC}.(\ref{cokleisliCLAC.F2})} \\
&=~ \llbracket \mathsf{D}[\varphi_A] \rrbracket \circ \oc(\iota_1) \tag{Lem.\ref{cokleislilem1}.(\ref{cokleislilem1.right})} \\
&=~\partial_A \circ \oc(\iota_1) \tag{Theorem \ref{thm1}.(\ref{thm1.varphi})}
\end{align*}
So the desired equality holds.
\end{proof}
\begin{proposition}
\label{etaFlem1} Let $(\oc, \delta, \varepsilon, \partial)$ be a Cartesian differential comonad on a category $\mathbb{X}$ with finite biproducts. Then the following are equivalent:
\begin{enumerate}[{\em (i)}]
\item $\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}: \mathbb{X} \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \mathsf{D}\text{-}\mathsf{lin}[\mathbb{X}_\oc]$ is an isomorphism (where $\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}$ is defined as in Theorem \ref{thm1}.(\ref{Flindef}));
\item $(\oc, \delta, \varepsilon, \partial)$ has a $\mathsf{D}$-linear unit $\eta_A : A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \oc (A)$.
\end{enumerate}
\end{proposition}
\begin{proof} Suppose that $\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}: \mathbb{X} \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \mathsf{D}\text{-}\mathsf{lin}[\mathbb{X}_\oc]$ is an isomorphism. By Lemma \ref{linlem}.(\ref{linlemimportant1}), $\llbracket \mathsf{L}[\varphi_A] \rrbracket: \oc(A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \oc(A)$ is a $\mathsf{D}$-linear map from $A$ to $\oc(A)$ in the coKleisli category. Thus, we obtain a map of the desired type ${\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}\left( \llbracket \mathsf{L}[\varphi_A] \rrbracket \right): A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \oc(A)}$ in $\mathbb{X}$. So define $\eta_A: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \oc(A)$ as:
\begin{equation}\label{etavarphi}\begin{gathered}
\eta_A = \mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}\left( \llbracket \mathsf{L}[\varphi_A] \rrbracket \right)
\end{gathered}\end{equation}
We will use Lemma \ref{Lvarphi} to show that $\eta$ is indeed a $\mathsf{D}$-linear unit. Starting with the naturality of $\eta$, for any map $f: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ in $\mathbb{X}$ we compute:
\begin{align*}
\eta_B \circ f &=~\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}\left( \llbracket \mathsf{L}[\varphi_B] \rrbracket \right) \circ f \\
&=~\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}\left( \llbracket \mathsf{L}[\varphi_B] \rrbracket \right) \circ \mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}\left( \mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}\left( f \right) \right) \tag{$\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}$ is an isomorphism} \\
&=~\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}\left( \left \llbracket \mathsf{L}[\varphi_B] \circ \mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}\left( f \right) \right \rrbracket \right) \tag{$\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}$ is a functor} \\
&=~\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}\left( \left \llbracket \mathsf{L}[\varphi_B] \circ \mathsf{F}_\oc \left( f \right) \right \rrbracket \right) \\
&=~\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}\left( \llbracket \mathsf{L}[\varphi_B] \rrbracket \circ \oc(f) \right) \tag{Lem.\ref{cokleislilem1}.(\ref{cokleislilem1.right})} \\
&=~ \mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}\left( \partial_B \circ \oc(\iota_1) \circ \oc(f) \right) \tag{Lemma \ref{Lvarphi}} \\
&=~\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}\left( \partial_B \circ \oc\left( \iota_1 \circ f \right) \right) \tag{$\oc$ is a functor} \\
&=~\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}\left( \partial_B \circ \oc\left( (f \times f) \circ \iota_1 \right) \right) \tag{Naturality of $\iota_1$} \\
&=~\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}\left( \partial_B \circ \oc(f \times f) \circ \oc(\iota_1) \right) \tag{$\oc$ is a functor} \\
&=~\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}\left( \oc(f) \circ \partial_A \circ \oc(\iota_1) \right) \tag{Naturality of $\partial$} \\
&=~\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}\left( \oc(f) \circ \llbracket \mathsf{L}[\varphi_A] \rrbracket \right)\tag{Lemma \ref{Lvarphi}} \\
&=~\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}\left( \llbracket \mathsf{F}_\oc\left( \oc(f) \right) \circ \mathsf{L}[\varphi_A] \rrbracket \right) \tag{Lem.\ref{cokleislilem1}.(\ref{cokleislilem1.left})} \\
&=~\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}\left( \left \llbracket \mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}\left( \oc(f) \right) \circ \mathsf{L}[\varphi_A] \right \rrbracket \right) \\
&=~\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}\left( \mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}\left( \oc(f) \right) \right) \circ \mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}\left( \llbracket \mathsf{L}[\varphi_A] \rrbracket \right) \tag{$\mathsf{F}^{-1}_{\mathsf{D}\text{-}\mathsf{lin}}$ is a functor} \\
&=~\oc(f) \circ \mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}\left( \llbracket \mathsf{L}[\varphi_A] \rrbracket \right) \tag{$\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}$ is an isomorphism} \\
&=~ \oc(f) \circ \eta_A
\end{align*}
So $\eta$ is a natural transformation. Next we show the two axioms of a $\mathsf{D}$-linear unit:
\begin{enumerate}[{\bf [du.1]}]
\item We compute:
\begin{align*}
\varepsilon_A \circ \eta_A &=~ \mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}\left( \llbracket \mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}\left( \varepsilon_A \right) \rrbracket \right) \circ \eta_A \\
&=~\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}\left( \llbracket \mathsf{F}_\oc \left( \varepsilon_A \right) \rrbracket \right) \circ \eta_A \\
&=~\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}\left( \llbracket \mathsf{F}_\oc \left( \llbracket 1_A \rrbracket \right) \rrbracket \right) \circ \eta_A \\
&=~ \mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}\left( \llbracket \mathsf{F}_\oc \left( \llbracket 1_A \rrbracket \right) \rrbracket \right) \circ \mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}\left( \llbracket \mathsf{L}[\varphi_A] \rrbracket \right) \\
&=~ \mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}\left( \llbracket \mathsf{F}_\oc \left( \llbracket 1_A \rrbracket \right) \circ \mathsf{L}[\varphi_A] \rrbracket \right) \\
&=~ \mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}\left( \llbracket \mathsf{F}_\oc \left( \llbracket 1_A \rrbracket \right) \circ \mathsf{D}[\varphi_A] \circ \iota_1 \rrbracket \right) \\
&=~ \mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}\left( \llbracket \mathsf{D}\left[ \mathsf{F}_\oc \left( \llbracket 1_A \rrbracket \right) \circ \varphi_A \right] \circ \iota_1 \rrbracket \right) \tag{$\mathsf{F}_\oc\left( \llbracket \varepsilon \rrbracket \right)$ is $\mathsf{D}$-linear and Lem \ref{linlem}.(\ref{linlem.post})} \\
&=~\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}\left( \llbracket \mathsf{D}[1_A] \circ \iota_1 \rrbracket \right) \tag{Lem.\ref{cokleislilem1}.(\ref{cokleislilem1.varphi})} \\
&=~\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}\left( \llbracket \pi_1 \circ \iota_1 \rrbracket \right) \tag{\textbf{[CD.3]}} \\
&=~ \mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}\left( \llbracket 1_A \rrbracket \right) \tag{Biproduct identities} \\
&=~ 1_A \tag{$\mathsf{F}^{-1}_{\mathsf{D}\text{-}\mathsf{lin}}$ is a functor}
\end{align*}
\item We compute:
\begin{align*}
\eta_A \circ \varepsilon_A &=~\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}\left( \eta_A \right) \\
&=~\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}\left( \mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}\left( \llbracket \mathsf{L}[\varphi_A] \rrbracket \right) \right) \\
&=~ \llbracket \mathsf{L}[\varphi_A] \rrbracket \tag{$\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}$ is an isomorphism} \\
&=~\partial_A \circ \oc(\iota_1) \tag{Lemma \ref{Lvarphi}}
\end{align*}
\end{enumerate}
So we conclude that $\eta$ is a $\mathsf{D}$-linear unit. Conversely, suppose that $(\oc, \delta, \varepsilon, \partial)$ comes equipped with a $\mathsf{D}$-linear unit $\eta_A: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \oc(A)$. Define the functor $\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}: \mathsf{D}\text{-}\mathsf{lin}[\mathbb{X}_\oc] \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \mathbb{X}$ on objects as $\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}(A) = A$ and on $\mathsf{D}$-linear coKleisli maps $\llbracket f \rrbracket: \oc(A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ as the following composite:
\begin{align*}
\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}(\llbracket f \rrbracket ) := \xymatrixcolsep{5pc}\xymatrix{A \ar[r]^-{\eta_A} & \oc (A) \ar[r]^-{\llbracket f \rrbracket} & B
} && \mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}(\llbracket f \rrbracket ) = \llbracket f \rrbracket \circ \eta_A
\end{align*}
We must show that $\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}$ is indeed a functor. We first show that $\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}$ preserves identities using the linear rule \textbf{[du.1]}:
\begin{align*}
\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}\left( \llbracket 1_A \rrbracket \right) &=~ \llbracket 1_A \rrbracket \circ \eta_A \\
&=~ \varepsilon_A \circ \eta_A \\
&=~ 1_A \tag{\textbf{[du.1]}}
\end{align*}
Next we show that $\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}$ also preserves composition using the linearization rule \textbf{[du.2]}:
\begin{align*}
\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}\left( \llbracket g\circ f \rrbracket \right) &=~ \llbracket g\circ f \rrbracket \circ \eta_A \\
&=~ \llbracket g \rrbracket \circ \oc\left( \llbracket f \rrbracket \right) \circ \delta_A \circ \eta_A \\
&=~ \llbracket g \rrbracket \circ \oc\left( \llbracket f \rrbracket \circ \partial_A \circ \oc(\iota_1) \right) \circ \delta_A \circ \eta_A \tag{$ \llbracket f \rrbracket $ is $\mathsf{D}$-linear and Thm.\ref{thm1}.(\ref{thm1.lin})} \\
&=~ \llbracket g \rrbracket \circ \oc\left( \llbracket f \rrbracket \circ \eta_A \circ \varepsilon_A \right) \circ \delta_A \circ \eta_A \tag{\textbf{[du.2]}} \\
&=~ \llbracket g \rrbracket \circ \oc\left( \llbracket f \rrbracket \right) \circ \oc(\eta_A) \circ \oc(\varepsilon_A) \circ \delta_A \circ \eta_A \tag{$\oc$ is a functor} \\
&=~ \llbracket g \rrbracket \circ \oc\left( \llbracket f \rrbracket \right) \circ \oc(\eta_A) \circ 1_{\oc(A)} \circ \eta_A \tag{Comonad Identity} \\
&=~ \llbracket g \rrbracket \circ \oc\left( \llbracket f \rrbracket \right) \circ \oc(\eta_A) \circ \eta_A \\
&=~\llbracket g \rrbracket \circ \oc\left( \llbracket f \rrbracket \right) \circ \eta_{\oc(A)} \circ \eta_A \tag{Naturality of $\eta$} \\
&=~\llbracket g \rrbracket \circ \eta_{B} \circ \llbracket f \rrbracket \circ \eta_A \tag{Naturality of $\eta$} \\
&=~\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}\left( \llbracket g \rrbracket \right) \circ \mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}\left( \llbracket f \rrbracket \right)
\end{align*}
So $\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}$ is indeed a functor. Next we show that $\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}$ and $\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}$ are inverses. Starting with $\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1} \circ \mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}$, clearly on objects we have that $\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}\left( \mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}(A) \right) = A$, while for maps we compute:
\begin{align*}
\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1}\left( \llbracket \mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}(f) \rrbracket \right) &=~\llbracket \mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}(f) \rrbracket \circ \eta_A \\
&=~f \circ \varepsilon_A \circ \eta_A \\
&=~ f \circ 1_A \tag{\textbf{[du.1]}} \\
&=~ f
\end{align*}
Therefore, $\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}^{-1} \circ \mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}} = 1_\mathbb{X}$. Next for $\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}} \circ \mathsf{F}^{-1}_{\mathsf{D}\text{-}\mathsf{lin}}$, clearly on objects we have that $\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}\left( \mathsf{F}^{-1}_{\mathsf{D}\text{-}\mathsf{lin}}(A) \right) = A$, while for maps we compute:
\begin{align*}
\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}\left( \mathsf{F}^{-1}_{\mathsf{D}\text{-}\mathsf{lin}}\left( \llbracket f \rrbracket \right) \right) &=~ \mathsf{F}^{-1}_{\mathsf{D}\text{-}\mathsf{lin}}\left( \llbracket f \rrbracket \right) \circ \varepsilon_A \\
&=~\llbracket f \rrbracket \circ \eta_A \circ \varepsilon_A \\
&=~ \llbracket f \rrbracket \circ \partial_A \circ \oc(\iota_1) \tag{\textbf{[du.2]}} \\
&=~ \llbracket f \rrbracket \tag{$ \llbracket f \rrbracket $ is $\mathsf{D}$-linear and Thm.\ref{thm1}.(\ref{thm1.lin})}
\end{align*}
Therefore, $\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}} \circ \mathsf{F}^{-1}_{\mathsf{D}\text{-}\mathsf{lin}} = 1_{\mathsf{D}\text{-}\mathsf{lin}[\mathbb{X}_\oc]}$. So we conclude that $\mathsf{F}_{\mathsf{D}\text{-}\mathsf{lin}}$ is an isomorphism.
\end{proof}
As a result, in the presence of a $\mathsf{D}$-linear unit, we obtain the following characterizations of $\mathsf{D}$-linear maps.
\begin{corollary}\label{etacor1} Let $(\oc, \delta, \varepsilon, \partial)$ be a Cartesian differential comonad on a category $\mathbb{X}$ with finite biproducts. If $(\oc, \delta, \varepsilon, \partial)$ has a $\mathsf{D}$-linear unit $\eta$, then the following are equivalent for a coKleisli map $\llbracket f \rrbracket: \oc(A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$,
\begin{enumerate}[{\em (i)}]
\item $\llbracket f \rrbracket$ is $\mathsf{D}$-linear in $\mathbb{X}_\oc$
\item There exists a (necessarily unique) map $g: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ in $\mathbb{X}$ such that $\llbracket f \rrbracket = g \circ \varepsilon_A = \llbracket \mathsf{F}_\oc(g ) \rrbracket$.
\item $\llbracket f \rrbracket \circ \eta_A \circ \varepsilon_A = \llbracket f \rrbracket$
\end{enumerate}
\end{corollary}
\begin{proof} For $(i) \Rightarrow (ii)$, suppose that $\llbracket f \rrbracket$ is $\mathsf{D}$-linear. Then set $g = \llbracket f \rrbracket \circ \eta_A = \mathsf{F}^{-1}_{\mathsf{D}\text{-}\mathsf{lin}} (\llbracket f \rrbracket)$. By Proposition \ref{etaFlem1}, we clearly have that $\llbracket f \rrbracket = g \circ \varepsilon_A = \llbracket \mathsf{F}_\oc(g ) \rrbracket$, and also that $g$ is unique. For $(ii) \Rightarrow (iii)$, suppose that $\llbracket f \rrbracket = g \circ \varepsilon_A$. Then by \textbf{[du.1]}, we have that:
\begin{align*}
\llbracket f \rrbracket \circ \eta_A \circ \varepsilon_A &=~ g \circ \varepsilon_A \circ \eta_A \circ \varepsilon_A \\
&=~ g \circ 1_A \circ \varepsilon_A \tag{\textbf{[du.1]}} \\
&=~ g \circ \varepsilon_A \\
&=~ \llbracket f \rrbracket
\end{align*}
Lastly, for $(iii) \Rightarrow (i)$, suppose that $\llbracket f \rrbracket \circ \eta_A \circ \varepsilon_A = \llbracket f \rrbracket$. By \textbf{[du.2]}, this implies that $\llbracket f \rrbracket \circ \partial_A \circ \oc(\iota_1) = \llbracket f \rrbracket$. However by Theorem \ref{thm1}.(\ref{thm1.lin}), this implies that $\llbracket f \rrbracket$ is $\mathsf{D}$-linear.
\end{proof}
We conclude this section with some examples of Cartesian differential comonads and, when they exist, their $\mathsf{D}$-linear units.
\begin{example} \label{ex:diffcat} \normalfont The main example of a Cartesian differential comonad is the comonad of a differential category. Briefly, a differential category \cite[Definition 2.4]{blute2006differential} is an additive symmetric monoidal category $\mathbb{X}$ equipped with a comonad $(\oc, \delta, \varepsilon)$, two natural transformations $\Delta_A: \oc(A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \oc(A) \otimes \oc(A)$ and $e_A: \oc(A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} I$ such that $\oc(A)$ is a cocommutative comonoid, and a natural transformation called a deriving transformation $\mathsf{d}_A: \oc(A) \otimes A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \oc(A)$ satisfying certain coherences which capture the basic properties of differentiation \cite[Definition 7]{Blute2019}. By \cite[Proposition 3.2.1]{blute2009cartesian}, for a differential category $\mathbb{X}$ with finite products, its coKleisli category $\mathbb{X}_\oc$ is a Cartesian differential category where the differential combinator is defined using the deriving transformation. For a coKleisli map $\llbracket f \rrbracket: \oc A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$, its derivative $\llbracket \mathsf{D}[f] \rrbracket: \oc(A \times A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ is defined as:
\[ \llbracket \mathsf{D}[f] \rrbracket := \xymatrixcolsep{2.75pc}\xymatrix{\oc(A \times A) \ar[r]^-{\Delta_{A \times A}} & \oc(A \times A) \otimes \oc(A \times A) \ar[r]^-{\oc(\pi_0) \otimes \oc(\pi_1)} & \oc(A) \otimes \oc(A) \ar[r]^-{1_{\oc(A)} \otimes \varepsilon_A} & \oc(A) \otimes A \ar[r]^-{\mathsf{d}_A} & \oc(A) \ar[r]^-{ f } & B
} \]
Applying Proposition \ref{prop1}, we obtain a differential combinator transformation:
\[ \partial_A := \xymatrixcolsep{3.75pc}\xymatrix{\oc(A \times A) \ar[r]^-{\Delta_{A \times A}} & \oc(A \times A) \otimes \oc(A \times A) \ar[r]^-{\oc(\pi_0) \otimes \oc(\pi_1)} & \oc(A) \otimes \oc(A) \ar[r]^-{1_{\oc(A)} \otimes \varepsilon_A} & \oc(A) \otimes A \ar[r]^-{\mathsf{d}_A} & \oc(A)
} \]
Furthermore, if there exists a natural transformation $u_A: I \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \oc(A)$ such that $e_A \circ u_A = 1_I$ and $u_A \circ e_A = \oc(0)$, then we obtain a $\mathsf{D}$-linear unit defined as follows:
\[ \eta_A:= \xymatrixcolsep{5pc}\xymatrix{ A \ar[r]^-{\lambda^{-1}_A} & I \otimes A \ar[r]^-{u_A \otimes 1_A} & \oc(A) \otimes A \ar[r]^-{\mathsf{d}_A} & \oc(A)
} \]
Readers familiar with differential linear logic will note that any differential \emph{storage} category \cite[Definition 4.10]{blute2006differential} has such a map $u$ and that in this case the $\mathsf{D}$-linear unit is precisely the codereliction \cite[Section 5]{Blute2019}. However, we stress that it is possible to have a $\mathsf{D}$-linear unit for differential categories that are not differential storage categories. We invite the reader to see \cite[Section 9]{Blute2019} and \cite[Example 4.7]{garner2020cartesian} for lists of examples of differential categories.
\end{example}
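To make the coKleisli differential combinator of this example concrete, recall that in the classical model of smooth (or polynomial) real functions the resulting derivative is the directional derivative $\mathsf{D}[f](x,v) = f'(x) \cdot v$. The following Python sketch is an informal illustration only, not part of the categorical development, and all names in it are ours: it computes this derivative with dual numbers and checks the chain rule, additivity in the vector argument, and the $\mathsf{D}$-linearity criterion $\mathsf{D}[f](x,v) = f(v)$.

```python
# Forward-mode differentiation with dual numbers a + b*eps, eps^2 = 0.
# For polynomial maps this computes the differential combinator of the
# classical smooth model: D[f](x, v) = f'(x) * v.

class Dual:
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b          # value part, infinitesimal part

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b eps)(c + d eps) = ac + (ad + bc) eps
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
    __rmul__ = __mul__

def D(f):
    """Differential combinator: D[f](x, v) = f'(x) * v."""
    return lambda x, v: f(Dual(x, v)).b

f = lambda t: t * t + 3 * t            # f'(t) = 2t + 3
g = lambda t: t * t * t                # g'(t) = 3t^2
x, v = 2.0, 5.0

# Chain rule: D[g o f](x, v) = D[g](f(x), D[f](x, v)).
lhs = D(lambda t: g(f(t)))(x, v)
rhs = D(g)(f(x), D(f)(x, v))
assert lhs == rhs == 10500.0

# Additivity in the vector argument: D[f](x, v + w) = D[f](x, v) + D[f](x, w).
assert D(f)(x, 1.0) + D(f)(x, 2.0) == D(f)(x, 3.0)

# D-linearity: h is D-linear precisely when D[h](x, v) = h(v) for all x, v.
h = lambda t: 3 * t                    # linear, hence D-linear
assert D(h)(x, v) == h(v)
assert D(f)(x, v) != f(v)              # f is not D-linear
```

The dual-number multiplication rule encodes exactly the product rule, which is why the infinitesimal part of $f(x + v\varepsilon)$ recovers $f'(x)v$ for polynomials.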
\begin{example} \label{ex:CDM} \normalfont Our three main novel examples of Cartesian differential comonads, introduced in Sections \ref{sec:PWex}, \ref{secpuisdiv}, and \ref{sec:ZAex} below, arise more naturally as instances of the dual notion, which we simply call \textbf{Cartesian differential monads}. Following the convention in the differential category literature for the dual notion of differential categories, we have elected to keep the same terminology and notation for the dual notion of a differential combinator transformation. Briefly, a Cartesian differential monad on a category $\mathbb{X}$ with finite biproducts is a quadruple $(\mathsf{S}, \mu, \eta, \partial)$ consisting of a monad $(\mathsf{S}, \mu, \eta)$ (where ${\mu_A: \mathsf{S}\mathsf{S}(A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \mathsf{S}(A)}$ and $\eta_A: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \mathsf{S}(A)$) and a natural transformation $\partial_A: \mathsf{S}(A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \mathsf{S}(A \times A)$, again called a differential combinator transformation, such that the dual diagrams of Definition \ref{def:cdcomonad} commute. By the dual statement of Proposition \ref{prop1}, the opposite category of the Kleisli category of a Cartesian differential monad is a Cartesian differential category. The dual notion of a $\mathsf{D}$-linear unit is called a $\mathsf{D}$-linear counit, which would be a natural transformation $\varepsilon_A: \mathsf{S}(A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} A$ such that the dual diagrams of Definition \ref{def:Dunit} commute. By the dual statement of Proposition \ref{etaFlem1}, the existence of a $\mathsf{D}$-linear counit implies that the opposite of the base category is isomorphic to the subcategory of $\mathsf{D}$-linear maps of the opposite of the Kleisli category.
\end{example}
The following are two ``trivial'' examples of Cartesian differential comonads on any category with finite biproducts. While both are ``trivial'' in their own way, each provides a simple separating example. Indeed, the first is an example of a Cartesian differential comonad without a $\mathsf{D}$-linear unit, while the second is a Cartesian differential comonad which is not induced by a differential category.
\begin{example} \normalfont Let $\mathbb{X}$ be a category with finite biproducts, and let $\top$ be the chosen zero object. Then the constant comonad $\mathsf{C}$ which sends every object to the zero object $\mathsf{C}(A) = \top$ and every map to zero maps $\mathsf{C}(f) = 0$ is a Cartesian differential comonad whose differential combinator transformation is simply $0$. This Cartesian differential comonad has a $\mathsf{D}$-linear unit if and only if every object of $\mathbb{X}$ is a zero object.
\end{example}
\begin{example} \normalfont \label{ex:identity} Let $\mathbb{X}$ be a category with finite biproducts. Then the identity comonad $1_{\mathbb{X}}$ is a Cartesian differential comonad whose differential combinator transformation is the second projection $\pi_1: A \times A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} A$ and whose $\mathsf{D}$-linear unit is the identity map $1_A: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} A$. The resulting coKleisli category is simply the base category $\mathbb{X}$ itself, with the same differential combinator as in Example \ref{ex:CDCbiproduct}. As such, this example recaptures Example \ref{ex:CDCbiproduct}: every category with finite biproducts is a Cartesian differential category in which every map is $\mathsf{D}$-linear. \end{example}
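Instantiating this example with integer matrices (a category with finite biproducts), the differential combinator $\mathsf{D}[f] = f \circ \pi_1$ applies the map to the vector argument and ignores the basepoint, and every map is $\mathsf{D}$-linear. The following Python sketch is our own informal illustration, with hypothetical names:

```python
# The biproduct example: maps are matrices (linear maps), and the
# differential combinator is D[f] = f o pi_1, i.e. D[f](x, v) = f(v).

def apply(M, x):
    """Apply a matrix M (given as a list of rows) to a vector x."""
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

def D(M):
    """Differential combinator of this example: ignore the point x."""
    return lambda x, v: apply(M, v)

F = [[1, 2], [0, 1]]
G = [[2, 0], [1, 3]]
x, v = [5, 7], [1, -1]

# Every map is D-linear: D[F](x, v) = F(v), independently of the point x.
assert D(F)(x, v) == apply(F, v) == D(F)([0, 0], v)

# The chain rule degenerates: D[G o F](x, v) = G(F(v)) = D[G](F(x), D[F](x, v)).
assert apply(G, apply(F, v)) == D(G)(apply(F, x), D(F)(x, v))
```

That the basepoint argument is simply discarded is exactly what the projection $\pi_1$ expresses categorically.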
\section{Cartesian Differential Abstract coKleisli Categories}\label{sec:abstract}
The goal of this section is to give a precise characterization of the Cartesian differential categories which are the coKleisli categories of Cartesian differential comonads. This is a generalization of the work done by Blute, Cockett, and Seely in \cite{blute2015cartesian}, where they characterize which Cartesian differential categories are the coKleisli categories of the comonads of differential categories. This was achieved using the concept of abstract coKleisli categories \cite[Section 2.4]{blute2015cartesian}, which is the dual notion of thunk-force-categories as introduced by F\"{u}hrmann in \cite{fuhrmann1999direct}. Abstract coKleisli categories provide a direct description of the structure of coKleisli categories in such a way that the coKleisli category of a comonad is an abstract coKleisli category and, conversely, every abstract coKleisli category is canonically the coKleisli category of a comonad on a certain subcategory. As such, here we introduce Cartesian differential abstract coKleisli categories which, as the name suggests, are abstract coKleisli categories that are also Cartesian differential categories such that the differential combinator and abstract coKleisli structure are compatible. We show that the coKleisli category of a Cartesian differential comonad is a Cartesian differential abstract coKleisli category and that, conversely, every Cartesian differential abstract coKleisli category is canonically the coKleisli category of a Cartesian differential comonad on a certain subcategory. We will also study the $\mathsf{D}$-linear maps of abstract coKleisli categories.
We will start from the abstract coKleisli side of the story.
\begin{definition}\label{def:abstract} An \textbf{abstract coKleisli structure} on a category $\mathbb{X}$ is a triple $(\oc, \varphi, \epsilon)$ consisting of an endofunctor $\oc: \mathbb{X} \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \mathbb{X}$, a natural transformation $\varphi_A: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \oc(A)$, and a family of maps $\epsilon_A: \oc(A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} A$ (which are not necessarily natural), such that $\epsilon_{\oc(A)}: \oc\oc(A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \oc(A)$ is a natural transformation and the following diagrams commute:
\begin{equation}\label{abstracteq}\begin{gathered} \xymatrixcolsep{5pc}\xymatrix{ A \ar@{=}[dr]^-{} \ar[r]^-{\varphi_A} & \oc(A) \ar[d]^-{\epsilon_A} & \oc(A) \ar@{=}[dr]^-{} \ar[r]^-{\oc(\varphi_A)} & \oc\oc(A) \ar[d]^-{\epsilon_{\oc(A)}} & \oc\oc(A) \ar[r]^-{\epsilon_{\oc(A)}} \ar[d]_-{\oc(\epsilon_A)}& \oc(A) \ar[d]^-{\epsilon_A} \\
& A & & \oc(A) & \oc(A) \ar[r]_-{\epsilon_A} & A
}\end{gathered}\end{equation}
An \textbf{abstract coKleisli category} \cite[Definition 2.4.1]{blute2015cartesian} is a category $\mathbb{X}$ equipped with an abstract coKleisli structure $(\oc, \varphi, \epsilon)$.
\end{definition}
Below in Lemma \ref{cokleisliabstractlem}, we will review how every coKleisli category is an abstract coKleisli category. To obtain the converse, we must first construct, from an abstract coKleisli category, a category equipped with a comonad. In an abstract coKleisli category, there is an important class of maps called the $\epsilon$-natural maps (which are the dual of thunkable maps in thunk-force categories \cite[Definition 7]{fuhrmann1999direct}). The $\epsilon$-natural maps form a subcategory which comes equipped with a comonad, and the coKleisli category of this comonad is the starting abstract coKleisli category.
\begin{definition} In an abstract coKleisli category $\mathbb{X}$ with abstract coKleisli structure $(\oc, \varphi, \epsilon)$, a map ${f: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B}$ is said to be \textbf{$\epsilon$-natural} if the following diagram commutes:
\begin{equation}\label{}\begin{gathered} \xymatrixcolsep{5pc}\xymatrix{ \oc(A) \ar[d]_-{\epsilon_A} \ar[r]^-{\oc(f)} & \oc(B) \ar[d]^-{\epsilon_B} \\
A \ar[r]_-{f} & B
} \end{gathered}\end{equation}
Define the subcategory of $\epsilon$-natural maps $\epsilon\text{-}\mathsf{nat}[\mathbb{X}]$ to be the category whose objects are the same as $\mathbb{X}$ and whose maps are $\epsilon$-natural in $\mathbb{X}$, and let $\mathsf{U}_{\epsilon}:\epsilon\text{-}\mathsf{nat}[\mathbb{X}] \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \mathbb{X}$ be the obvious forgetful functor.
\end{definition}
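For intuition, consider the environment comonad $\oc(A) = E \times A$ on sets, with counit $\epsilon(e,a) = a$; this is a standard comonad example, not one treated in this paper. Its coKleisli maps are functions $E \times A \to B$, and the maps coming from the base category are exactly those that ignore the environment, which is the behaviour the $\epsilon$-natural maps abstract. The following Python sketch is our own informal illustration:

```python
# Environment comonad on Set: !(A) = E x A, with counit eps(e, a) = a
# and comultiplication delta(e, a) = (e, (e, a)).  A coKleisli map is a
# function taking (e, a); composition threads the same environment
# into both maps: g o !(f) o delta.

def cokleisli_compose(g, f):
    """(g o! f)(e, a) = g(e, f(e, a))."""
    return lambda e, a: g(e, f(e, a))

def from_base(h):
    """Embed a base map h: A -> B as the coKleisli map h o eps."""
    return lambda e, a: h(a)

# Environment-independent coKleisli maps are closed under composition
# and recover ordinary composition in the base category, mirroring how
# eps-natural maps single out the base-category maps.
h1, h2 = (lambda a: a + 1), (lambda a: 2 * a)
comp = cokleisli_compose(from_base(h2), from_base(h1))
assert comp("env", 10) == h2(h1(10)) == 22

# A genuinely environment-dependent coKleisli map is not of this form:
k = lambda e, a: a + len(e)
assert k("env", 10) != k("environment", 10)
```

The subcategory of environment-independent maps plays the role that $\epsilon\text{-}\mathsf{nat}[\mathbb{X}]$ plays abstractly.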
As we will discuss in Lemma \ref{lemexact}, in the context of the coKleisli category of a comonad, these $\epsilon$-natural maps should be thought of as the maps coming from the base category. We now record some basic properties of $\epsilon$-natural maps.
\begin{lemma}\label{eplem} \cite[Section 2.4]{blute2015cartesian} Let $\mathbb{X}$ be an abstract coKleisli category with abstract coKleisli structure $(\oc, \varphi, \epsilon)$. Then:
\begin{enumerate}[{\em (i)}]
\item Identity maps $1_A: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} A$ are $\epsilon$-natural;
\item \label{eplem.comp} If $f: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ and $g: B \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} C$ are $\epsilon$-natural, then their composite $g \circ f: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} C$ is $\epsilon$-natural;
\item \label{eplem.ep} For every object $A$, $\epsilon_A: \oc(A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} A$ is $\epsilon$-natural;
\item \label{eplem.oc} For every map $f: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$, $\oc(f): \oc(A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \oc(B)$ is $\epsilon$-natural.
\end{enumerate}
\end{lemma}
We now review in detail how every abstract coKleisli category is isomorphic to the coKleisli category of a canonical comonad on its subcategory of $\epsilon$-natural maps.
\begin{lemma} \label{lem:ep-com} \cite[Dual of Theorem 4]{fuhrmann1999direct} Let $\mathbb{X}$ be an abstract coKleisli category with abstract coKleisli structure $(\oc, \varphi, \epsilon)$. Define the natural transformation $\beta_A: \oc(A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \oc\oc(A)$ as follows:
\[ \beta_A := \xymatrixcolsep{5pc}\xymatrix{\oc(A) \ar[r]^-{\oc(\varphi_A)} & \oc\oc(A)
} \]
Then $(\oc, \beta, \epsilon)$ is a comonad on $\epsilon\text{-}\mathsf{nat}[\mathbb{X}]$ such that the functor $\mathsf{G}_\epsilon: \mathbb{X} \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \epsilon\text{-}\mathsf{nat}[\mathbb{X}]_\oc$ defined on objects as $\mathsf{G}_\epsilon(A) = A$ and on a map $f: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ as the following composite:
\begin{align*}
\llbracket \mathsf{G}_\epsilon(f) \rrbracket := \xymatrixcolsep{3pc}\xymatrix{\oc (A) \ar[r]^-{\oc(f)} & \oc(B) \ar[r]^-{\epsilon_B} & B } && \llbracket \mathsf{G}_\epsilon(f) \rrbracket = \epsilon_B \circ \oc(f)
\end{align*}
is an isomorphism with inverse $\mathsf{G}^{-1}_\epsilon: \epsilon\text{-}\mathsf{nat}[\mathbb{X}]_\oc \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \mathbb{X}$ defined on objects as $\mathsf{G}^{-1}_\epsilon(A) = A$ and on a coKleisli map ${\llbracket f \rrbracket: \oc(A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B}$ as the following composite:
\begin{align*}
\mathsf{G}^{-1}_\epsilon\left( \llbracket f \rrbracket \right) := \xymatrixcolsep{3pc}\xymatrix{A \ar[r]^-{\varphi_A} & \oc(A) \ar[r]^-{\llbracket f \rrbracket} & B } && \mathsf{G}^{-1}_\epsilon\left( \llbracket f \rrbracket \right) = \llbracket f \rrbracket \circ \varphi_A
\end{align*}
\end{lemma}
We now wish to equip abstract coKleisli categories with Cartesian differential structure. To do so, we must first discuss Cartesian left additive structure for abstract coKleisli categories. We start with the finite product structure:
\begin{definition} A \textbf{Cartesian abstract coKleisli category} \cite[Definition 2.4.1]{blute2015cartesian} is an abstract coKleisli category $\mathbb{X}$ with abstract coKleisli structure $(\oc, \varphi, \epsilon)$ such that $\mathbb{X}$ has finite products and all the projection maps $\pi_0: A \times B \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} A$ and $\pi_1: A \times B \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ are $\epsilon$-natural.
\end{definition}
For a Cartesian abstract coKleisli category, it follows that $\epsilon$-natural maps are closed under the finite product structure.
\begin{lemma} \cite[Section 2.4]{blute2015cartesian} Let $\mathbb{X}$ be a Cartesian abstract coKleisli category with abstract coKleisli structure $(\oc, \varphi, \epsilon)$. Then:
\begin{enumerate}[{\em (i)}]
\item If $f: C \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} A$ and $g: C \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ are $\epsilon$-natural, then their pairing $\langle f, g \rangle: C \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} A \times B$ is $\epsilon$-natural;
\item If $h: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} C$ and $k: B \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} D$ are $\epsilon$-natural, then their product $h \times k: A \times B \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} C \times D$ is $\epsilon$-natural.
\end{enumerate}
Therefore, $\epsilon\text{-}\mathsf{nat}[\mathbb{X}]$ has finite products (which is defined as in $\mathbb{X}$).
\end{lemma}
Next we discuss Cartesian left additive structure for abstract coKleisli categories, where we require that $\epsilon$-natural maps are closed under the additive structure.
\begin{definition} A \textbf{Cartesian left additive abstract coKleisli category} is a Cartesian abstract coKleisli category $\mathbb{X}$ with abstract coKleisli structure $(\oc, \varphi, \epsilon)$ such that $\mathbb{X}$ is also a Cartesian left additive category, zero maps $0: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ are $\epsilon$-natural, and if $f: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ and $g: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ are $\epsilon$-natural, then their sum ${f+g : A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B}$ is $\epsilon$-natural.
\end{definition}
For a Cartesian left additive abstract coKleisli category, the subcategory of $\epsilon$-natural maps also forms a Cartesian left additive category. It is important to stress, however, that $\epsilon$-natural maps are not assumed to be additive, and therefore the subcategory of $\epsilon$-natural maps does not necessarily have biproducts.
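To see concretely why left additivity does not force all maps to be additive, consider the Cartesian left additive category of arbitrary functions on $\mathbb{R}$ with pointwise sum. The following sketch (our own illustration, not taken from \cite{blute2015cartesian}) checks that $(f+g) \circ h = f \circ h + g \circ h$ holds for every $h$, while $h \circ (f+g) = h \circ f + h \circ g$ holds only when $h$ is additive:

```python
# Toy model: functions on floats with pointwise sum.
# "add" is the pointwise sum of maps, "comp(p, q)" is p after q.

def f(x): return x * x        # not additive
def g(x): return 3.0 * x      # additive (linear)
def h(x): return x + 1.0

def add(p, q):                # pointwise sum p + q
    return lambda x: p(x) + q(x)

def comp(p, q):               # composite p o q (apply q first)
    return lambda x: p(q(x))

x = 2.0
# Left additivity holds for every h: (f + g) o h == (f o h) + (g o h)
assert comp(add(f, g), h)(x) == add(comp(f, h), comp(g, h))(x)
# Pre-composing a sum into a map only distributes when that map is additive:
assert comp(g, add(f, g))(x) == add(comp(g, f), comp(g, g))(x)  # g is additive
assert comp(f, add(f, g))(x) != add(comp(f, f), comp(f, g))(x)  # f is not
```

The last two lines are exactly the point of the remark above: closure of $\epsilon$-natural maps under sums does not make them additive.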
\begin{lemma}\label{abstractCLAClem} Let $\mathbb{X}$ be a Cartesian left additive abstract coKleisli category with abstract coKleisli structure $(\oc, \varphi, \epsilon)$. Then $\epsilon\text{-}\mathsf{nat}[\mathbb{X}]$ is a Cartesian left additive category (where the necessary structure is defined as in $\mathbb{X}$). Furthermore,
\begin{enumerate}[{\em (i)}]
\item \label{ep.zero} $\epsilon_A \circ \oc(0) = 0$
\item \label{ep.sum} If $f: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ and $g: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ are $\epsilon$-natural, then $\epsilon_B \circ \oc(f + g) = \epsilon_B \circ \oc(f) + \epsilon_B \circ \oc(g)$
\end{enumerate}
\end{lemma}
\begin{proof} It is clear that $\epsilon\text{-}\mathsf{nat}[\mathbb{X}]$ is a Cartesian left additive category. For (\ref{ep.zero}) we use the fact that $0$ is $\epsilon$-natural:
\begin{align*}
\epsilon_A \circ \oc(0) &=~ 0 \circ \epsilon_A \tag{$0$ is $\epsilon$-natural} \\
&=~ 0
\end{align*}
For (\ref{ep.sum}), we use the fact that the sum of $\epsilon$-natural maps is $\epsilon$-natural:
\begin{align*}
\epsilon_B \circ \oc(f + g)&=~ (f+g) \circ \epsilon_A \tag{$f+g$ is $\epsilon$-natural} \\
&=~ f \circ \epsilon_A + g \circ \epsilon_A \\
&=~ \epsilon_B \circ \oc(f) + \epsilon_B \circ \oc(g) \tag{$f$ and $g$ are $\epsilon$-natural}
\end{align*}
\end{proof}
We are now in a position to define Cartesian differential abstract coKleisli categories.
\begin{definition}\label{def:abCDC} A \textbf{Cartesian differential abstract coKleisli category} is a Cartesian differential category $\mathbb{X}$, with differential combinator $\mathsf{D}$, such that $\mathbb{X}$ is also a Cartesian abstract coKleisli category with abstract coKleisli structure $(\oc, \varphi, \epsilon)$ and every $\epsilon$-natural map is $\mathsf{D}$-linear.
\end{definition}
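In the classical Cartesian differential category of smooth maps, with $\mathsf{D}[f](x,v) = f'(x) \cdot v$, the $\mathsf{D}$-linear maps are exactly the linear ones, for which $\mathsf{D}[f] = f \circ \pi_1$. The sketch below (our own illustration; derivatives are supplied by hand rather than computed) checks this for a linear map and exhibits its failure for a nonlinear one:

```python
# Sketch in the CDC of smooth maps R -> R with D[f](x, v) = f'(x) * v.
# A map f is D-linear when D[f](x, v) = f(v), i.e. D[f] = f o pi_1.

def D(f, df):
    """Differential combinator: D[f](x, v) = f'(x) * v, given f' = df."""
    return lambda x, v: df(x) * v

lin, dlin = (lambda x: 5.0 * x), (lambda x: 5.0)      # linear, hence D-linear
sq,  dsq  = (lambda x: x * x),   (lambda x: 2.0 * x)  # nonlinear

x, v = 3.0, 0.5
assert D(lin, dlin)(x, v) == lin(v)   # D[f] = f o pi_1 holds for linear f
assert D(sq, dsq)(x, v) != sq(v)      # and fails for the squaring map
```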
We will now show that for a Cartesian differential abstract coKleisli category, the canonical comonad on the subcategory of $\epsilon$-natural maps is a Cartesian differential comonad and that the coKleisli category is isomorphic to the starting Cartesian differential abstract coKleisli category.
\begin{proposition}\label{propab1} Let $\mathbb{X}$ be a Cartesian differential abstract coKleisli category with differential combinator $\mathsf{D}$ and abstract coKleisli structure $(\oc, \varphi, \epsilon)$. Then $\epsilon\text{-}\mathsf{nat}[\mathbb{X}]$ is a category with finite biproducts and $(\oc, \beta, \epsilon, \partial)$ (where $(\oc, \beta, \epsilon)$ is defined as in Lemma \ref{lem:ep-com}) is a Cartesian differential comonad on $\epsilon\text{-}\mathsf{nat}[\mathbb{X}]$ where the differential combinator transformation $\partial_A: \oc(A \times A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \oc(A)$ is defined as follows:
\begin{equation}\label{partialdef2}\begin{gathered}\partial_A := \xymatrixcolsep{5pc}\xymatrix{ \oc(A \times A) \ar[r]^-{\oc\left( \mathsf{D}[\varphi_A] \right)} & \oc\oc(A) \ar[r]^-{\epsilon_{\oc(A)}} & \oc(A)
} \end{gathered}\end{equation}
Furthermore, $\mathsf{G}_\epsilon: \mathbb{X} \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \epsilon\text{-}\mathsf{nat}[\mathbb{X}]_\oc$ is a Cartesian differential isomorphism, so in particular, the following equalities hold:
\begin{equation}\begin{gathered}
\llbracket \mathsf{G}_\epsilon(\mathsf{D}[f] ) \rrbracket = \llbracket \mathsf{D}\left[ \mathsf{G}_\epsilon(f) \right] \rrbracket \quad \quad \quad \mathsf{G}^{-1}_\epsilon\left( \llbracket \mathsf{D}[f] \rrbracket \right) = \mathsf{D}\left[ \mathsf{G}^{-1}_\epsilon\left( \llbracket f \rrbracket \right) \right]
\end{gathered}\end{equation}
where the differential combinator on the coKleisli category $\epsilon\text{-}\mathsf{nat}[\mathbb{X}]_\oc$ is defined as in Theorem \ref{thm1}.
\end{proposition}
\begin{proof} We must first explain why $\epsilon\text{-}\mathsf{nat}[\mathbb{X}]$ has finite biproducts. By Lemma \ref{abstractCLAClem}, $\epsilon\text{-}\mathsf{nat}[\mathbb{X}]$ is a Cartesian left additive category. However, by assumption, every $\epsilon$-natural map is $\mathsf{D}$-linear, so by Lemma \ref{linlem}.(\ref{linlem.add}), every $\epsilon$-natural map is also additive. Therefore, $\epsilon\text{-}\mathsf{nat}[\mathbb{X}]$ is a Cartesian left additive category such that every map is additive, and so we conclude that $\epsilon\text{-}\mathsf{nat}[\mathbb{X}]$ has finite biproducts.
Next we must explain why the proposed differential combinator transformation is well defined, that is, we must show that $\partial_A$ is indeed $\epsilon$-natural. However, by Lemma \ref{eplem}.(\ref{eplem.ep}) and (\ref{eplem.oc}), $\epsilon_{\oc(A)}$ and $\oc\left( \mathsf{D}[\varphi_A] \right)$ are both $\epsilon$-natural. Then by Lemma \ref{eplem}.(\ref{eplem.comp}), their composite $\partial_A = \epsilon_{\oc(A)} \circ \oc\left( \mathsf{D}[\varphi_A] \right)$ is $\epsilon$-natural. Next we show the naturality of $\partial$. So for an $\epsilon$-natural map $f: A\@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$, we compute:
\begin{align*}
\partial_B \circ \oc(f \times f) &=~ \epsilon_{\oc(B)} \circ \oc\left( \mathsf{D}[\varphi_B] \right) \circ \oc(f \times f) \\
&=~ \epsilon_{\oc(B)} \circ \oc\left( \mathsf{D}[\varphi_B] \circ (f \times f) \right) \tag{$\oc$ is a functor} \\
&=~\epsilon_{\oc(B)} \circ \oc\left( \mathsf{D}[\varphi_B \circ f] \right) \tag{$f$ is $\mathsf{D}$-linear and Lem \ref{linlem}.(\ref{linlem.pre})} \\
&=~\epsilon_{\oc(B)} \circ \oc\left( \mathsf{D}[\oc(f) \circ \varphi_A] \right) \tag{Naturality of $f$} \\
&=~\epsilon_{\oc(B)} \circ \oc\left( \oc(f) \circ \mathsf{D}[\varphi_A] \right) \tag{$\oc(f)$ is $\mathsf{D}$-linear and Lem \ref{linlem}.(\ref{linlem.post})} \\
&=~\epsilon_{\oc(B)} \circ \oc\oc(f) \circ \oc\left( \mathsf{D}[\varphi_A] \right) \tag{$\oc$ is a functor} \\
&=~ \oc(f) \circ \epsilon_{\oc(A)} \circ \oc\left( \mathsf{D}[\varphi_A] \right) \tag{Naturality of $\epsilon_{\oc(-)}$} \\
&=~\oc(f) \circ \partial_A
\end{align*}
So $\partial$ is a natural transformation. Now we must show that $\partial$ satisfies the six axioms of a differential combinator transformation. Note that the calculations below are similar to the ones in the proof of Proposition \ref{prop1}.
\begin{enumerate}[{\bf [dc.1]}]
\item Here we use \textbf{[CD.2]}:
\begin{align*}
\partial_A \circ \oc(\iota_0) &=~\epsilon_{\oc(A)} \circ \oc\left( \mathsf{D}[\varphi_A] \right) \circ \oc(\iota_0) \\
&=~ \epsilon_{\oc(A)} \circ \oc\left( \mathsf{D}[\varphi_A] \circ \iota_0 \right) \tag{$\oc$ is a functor} \\
&=~ \epsilon_{\oc(A)} \circ \oc(0) \tag{\textbf{[CD.2]}} \\
&=~ 0 \tag{Lem.\ref{abstractCLAClem}.(\ref{ep.zero})}
\end{align*}
\item Here we use \textbf{[CD.2]}:
\begin{align*}
\partial_A \circ \oc(1_A \times \nabla_A) &=~\epsilon_{\oc(A)} \circ \oc\left( \mathsf{D}[\varphi_A] \right) \circ \oc(1_A \times \nabla_A) \\
&=~\epsilon_{\oc(A)} \circ \oc\left( \mathsf{D}[\varphi_A] \circ (1_A \times \nabla_A) \right) \tag{$\oc$ is a functor} \\
&=~\epsilon_{\oc(A)} \circ \oc\left( \mathsf{D}[\varphi_A] \circ (1_A \times\pi_0) + \mathsf{D}[\varphi_A] \circ (1_A \times\pi_1) \right) \tag{\textbf{[CD.2]}} \\
&=~\epsilon_{\oc(A)} \circ \oc\left( \mathsf{D}[\varphi_A] \circ (1_A \times\pi_0) \right) + \epsilon_{\oc(A)} \circ \oc\left( \mathsf{D}[\varphi_A] \circ (1_A \times\pi_1) \right) \tag{Lem.\ref{abstractCLAClem}.(\ref{ep.sum})} \\
&=~\epsilon_{\oc(A)} \circ \oc\left( \mathsf{D}[\varphi_A]\right) \circ \oc\left (1_A \times\pi_0 \right) + \epsilon_{\oc(A)} \circ \oc\left( \mathsf{D}[\varphi_A] \right) \circ \oc\left(1_A \times\pi_1 \right) \tag{$\oc$ is a functor} \\
&=~\partial_A \circ \oc(1_A \times \pi_0) + \partial_A \circ \oc(1_A \times \pi_1) \\
&=~ \partial_A \circ \left( \oc(1_A \times \pi_0) + \oc(1_A \times \pi_1) \right) \tag{$\partial_A$ is additive}
\end{align*}
\item Here we use \textbf{[CD.3]} and the fact that $\epsilon_A$, being $\epsilon$-natural, is also $\mathsf{D}$-linear:
\begin{align*}
\epsilon_A \circ \partial_A &=~\epsilon_A \circ \epsilon_{\oc(A)} \circ \oc\left( \mathsf{D}[\varphi_A] \right) \\
&=~ \epsilon_A \circ \oc(\epsilon_A) \circ \oc\left( \mathsf{D}[\varphi_A] \right) \tag{Abstract coKleisli structure identity} \\
&=~ \epsilon_A \circ \oc\left(\epsilon_A \circ \mathsf{D}[\varphi_A] \right) \tag{$\oc$ is a functor} \\
&=~\epsilon_A \circ \oc\left( \mathsf{D}\left[ \epsilon_A \circ \varphi_A \right] \right) \tag{$\epsilon$ is $\mathsf{D}$-linear and Lem \ref{linlem}.(\ref{linlem.post})} \\
&=~\epsilon_A \circ \oc\left( \mathsf{D}\left[ 1_A \right] \right) \tag{Abstract coKleisli structure identity} \\
&=~\epsilon_A \circ \oc\left( \pi_1 \right) \tag{\textbf{[CD.3]}} \\
&=~\pi_1 \circ \epsilon_{A \times A} \tag{$\pi_1$ is $\epsilon$-natural}
\end{align*}
\item Here we use \textbf{[CD.5]}:
\begin{align*}
\beta_A \circ \partial_A &=~\oc(\varphi_A) \circ \partial_A \\
&=~\oc(\varphi_A) \circ \epsilon_{\oc(A)} \circ \oc\left( \mathsf{D}[\varphi_A] \right) \\
&=~ \epsilon_{\oc\oc(A)} \circ \oc\oc(\varphi_A) \circ \oc\left( \mathsf{D}[\varphi_A] \right) \tag{Naturality of $\epsilon_{\oc(-)}$} \\
&=~\epsilon_{\oc\oc(A)} \circ \oc\left( \oc(\varphi_A) \circ \mathsf{D}[\varphi_A] \right) \tag{$\oc$ is a functor} \\
&=~\epsilon_{\oc\oc(A)} \circ \oc\left( \mathsf{D}\left[ \oc(\varphi_A) \circ \varphi_A \right] \right) \tag{$\oc(\varphi_A)$ is $\mathsf{D}$-linear and Lem \ref{linlem}.(\ref{linlem.post})} \\
&=~\epsilon_{\oc\oc(A)} \circ \oc\left( \mathsf{D}\left[ \varphi_{\oc(A)} \circ \varphi_A \right] \right) \tag{Naturality of $\varphi$} \\
&=~\epsilon_{\oc\oc(A)} \circ \oc\left( \mathsf{D}\left[ \varphi_{\oc(A)} \right] \circ \left\langle \varphi_A \circ \pi_0, \mathsf{D}[\varphi_A] \right \rangle \right) \tag{\textbf{[CD.5]}} \\
&=~\epsilon_{\oc\oc(A)} \circ \oc\left( \mathsf{D}\left[ \varphi_{\oc(A)} \right] \right) \circ \oc\left( \left\langle \varphi_A \circ \pi_0, \mathsf{D}[\varphi_A] \right \rangle \right)\tag{$\oc$ is a functor} \\
&=~\partial_{\oc(A)} \circ \oc\left( \left\langle \varphi_A \circ \pi_0, \mathsf{D}[\varphi_A] \right \rangle \right) \\
&=~\partial_{\oc(A)} \circ \oc\left( \left\langle \oc(\pi_0) \circ \varphi_{A \times A}, \mathsf{D}[\varphi_A] \right \rangle \right) \tag{Naturality of $\varphi$} \\
&=~ \partial_{\oc(A)} \circ \oc\left( \left\langle \oc(\pi_0) \circ \varphi_{A \times A}, 1_{\oc(A)} \circ \mathsf{D}[\varphi_A] \right \rangle \right) \\
&=~ \partial_{\oc(A)} \circ \oc\left( \left\langle \oc(\pi_0) \circ \varphi_{A \times A}, \epsilon_{\oc(A)} \circ \varphi_{\oc(A)} \circ \mathsf{D}[\varphi_A] \right \rangle \right) \tag{Abstract coKleisli structure identity} \\
&=~ \partial_{\oc(A)} \circ \oc\left( \left\langle \oc(\pi_0) \circ \varphi_{A \times A}, \epsilon_{\oc(A)} \circ \oc\left( \mathsf{D}[\varphi_A] \right) \circ \varphi_{A \times A} \right \rangle \right)\tag{Naturality of $\varphi$} \\
&=~\partial_{\oc(A)} \circ \oc\left( \left\langle \oc(\pi_0), \epsilon_{\oc(A)} \circ \oc\left( \mathsf{D}[\varphi_A] \right) \right \rangle \circ \varphi_{A \times A} \right) \\
&=~\partial_{\oc(A)} \circ \oc\left( \left\langle \oc(\pi_0), \partial_A \right \rangle \circ \varphi_{A \times A} \right) \\
&=~\partial_{\oc(A)} \circ \oc\left( \left\langle \oc(\pi_0), \partial_A \right \rangle \right) \circ \oc\left( \varphi_{A \times A} \right) \tag{$\oc$ is a functor} \\
&=~\partial_{\oc(A)} \circ \oc\left( \left \langle \oc(\pi_0) , \partial_A \right \rangle \right) \circ \beta_{A \times A}
\end{align*}
\end{enumerate}
For the remaining two axioms, it will be useful to first compute $\partial_{A} \circ \partial_{A \times A}$:
\begin{align*}
\partial_{A} \circ \partial_{A \times A} &=~\epsilon_{\oc(A)} \circ \oc\left( \mathsf{D}[\varphi_A] \right) \circ \partial_{A \times A} \\
&=~\epsilon_{\oc(A)} \circ \oc\left( \mathsf{D}[\varphi_A] \right) \circ \epsilon_{\oc(A \times A)} \circ \oc\left( \mathsf{D}[\varphi_{A \times A}] \right) \\
&=~\epsilon_{\oc(A)} \circ \epsilon_{\oc\oc(A)} \circ \oc\oc\left( \mathsf{D}[\varphi_A] \right) \circ \oc\left( \mathsf{D}[\varphi_{A \times A}] \right) \tag{Naturality of $\epsilon_{\oc(-)}$} \\
&=~\epsilon_{\oc(A)} \circ \epsilon_{\oc\oc(A)} \circ \oc\left( \oc\left( \mathsf{D}[\varphi_A] \right) \circ \mathsf{D}[\varphi_{A \times A}] \right) \tag{$\oc$ is a functor} \\
&=~\epsilon_{\oc(A)} \circ \epsilon_{\oc\oc(A)} \circ \oc\left( \mathsf{D}\left[ \oc\left( \mathsf{D}[\varphi_A] \right) \circ \varphi_{A \times A} \right] \right) \tag{$\oc\left( \mathsf{D}[\varphi_A] \right)$ is $\mathsf{D}$-linear and Lem \ref{linlem}.(\ref{linlem.post})} \\
&=~\epsilon_{\oc(A)} \circ \epsilon_{\oc\oc(A)} \circ \oc\left( \mathsf{D}\left[ \varphi_{\oc(A)} \circ \mathsf{D}[\varphi_A] \right] \right) \tag{Naturality of $\varphi$} \\
&=~\epsilon_{\oc(A)} \circ \epsilon_{\oc\oc(A)} \circ \oc\left( \mathsf{D}\left[ \varphi_{\oc(A)} \right] \circ \left \langle \mathsf{D}[\varphi_A] \circ\pi_0, \mathsf{D}\left[ \mathsf{D}[\varphi_A] \right] \right \rangle \right) \tag{\textbf{[CD.5]}} \\
&=~\epsilon_{\oc(A)} \circ \epsilon_{\oc\oc(A)} \circ \oc\left( \mathsf{D}\left[ \varphi_{\oc(A)} \right] \right) \circ \oc\left(\left \langle \mathsf{D}[\varphi_A] \circ\pi_0, \mathsf{D}\left[ \mathsf{D}[\varphi_A] \right] \right \rangle \right) \tag{$\oc$ is a functor} \\
&=~\epsilon_{\oc(A)} \circ \oc(\epsilon_{\oc(A)}) \circ \oc\left( \mathsf{D}\left[ \varphi_{\oc(A)} \right] \right) \circ \oc\left(\left \langle \mathsf{D}[\varphi_A] \circ\pi_0, \mathsf{D}\left[ \mathsf{D}[\varphi_A] \right] \right \rangle \right) \tag{Abstract coKleisli structure identity} \\
&=~\epsilon_{\oc(A)} \circ \oc\left(\epsilon_{\oc(A)} \circ \mathsf{D}\left[ \varphi_{\oc(A)} \right] \right) \circ \oc\left(\left \langle \mathsf{D}[\varphi_A] \circ\pi_0, \mathsf{D}\left[ \mathsf{D}[\varphi_A] \right] \right \rangle \right) \tag{$\oc$ is a functor} \\
&=~\epsilon_{\oc(A)} \circ \oc\left( \mathsf{D}\left[ \epsilon_{\oc(A)} \circ \varphi_{\oc(A)} \right] \right) \circ \oc\left(\left \langle \mathsf{D}[\varphi_A] \circ\pi_0, \mathsf{D}\left[ \mathsf{D}[\varphi_A] \right] \right \rangle \right) \tag{$\epsilon$ is $\mathsf{D}$-linear and Lem \ref{linlem}.(\ref{linlem.post})} \\
&=~\epsilon_{\oc(A)} \circ \oc\left( \mathsf{D}\left[ 1_{\oc(A)} \right] \right) \circ \oc\left(\left \langle \mathsf{D}[\varphi_A] \circ\pi_0, \mathsf{D}\left[ \mathsf{D}[\varphi_A] \right] \right \rangle \right) \tag{Abstract coKleisli structure identity} \\
&=~\epsilon_{\oc(A)} \circ \oc\left( \pi_1 \right) \circ \oc\left(\left \langle \mathsf{D}[\varphi_A] \circ\pi_0, \mathsf{D}\left[ \mathsf{D}[\varphi_A] \right] \right \rangle \right) \tag{\textbf{[CD.3]}} \\
&=~\epsilon_{\oc(A)} \circ \oc\left( \pi_1 \circ \left \langle \mathsf{D}[\varphi_A] \circ\pi_0, \mathsf{D}\left[ \mathsf{D}[\varphi_A] \right] \right \rangle \right) \tag{$\oc$ is a functor} \\
&=~\epsilon_{\oc(A)} \circ \oc\left( \mathsf{D}\left[ \mathsf{D}[\varphi_A] \right] \right)
\end{align*}
So we have that:
\begin{equation}\label{partial2}\begin{gathered}
\partial_{A} \circ \partial_{A \times A} = \epsilon_{\oc(A)} \circ \oc\left( \mathsf{D}\left[ \mathsf{D}[\varphi_A] \right] \right)
\end{gathered}\end{equation}
We can now easily show \textbf{[dc.5]} and \textbf{[dc.6]}:
\begin{enumerate}[{\bf [dc.1]}]
\setcounter{enumi}{4}
\item Here we use \textbf{[CD.6]}, \textbf{[CD.2]}, and the above identity:
\begin{align*}
\partial_{A} \circ \partial_{A \times A} \circ \oc(\ell_A) &=~ \epsilon_{\oc(A)} \circ \oc\left( \mathsf{D}\left[ \mathsf{D}[\varphi_A] \right] \right) \circ \oc(\ell_A) \tag{\ref{partial2}} \\
&=~ \epsilon_{\oc(A)} \circ \oc\left( \mathsf{D}\left[ \mathsf{D}[\varphi_A] \right] \circ \ell_A \right) \tag{$\oc$ is a functor} \\
&=~ \epsilon_{\oc(A)} \circ \oc\left( \mathsf{D}[\varphi_A] \right) \tag{\textbf{[CD.6]}} \\
&=~ \partial_A
\end{align*}
\item Here we use \textbf{[CD.7]}:
\begin{align*}
\partial_{A} \circ \partial_{A \times A} \circ \oc(c_A) &=~ \epsilon_{\oc(A)} \circ \oc\left( \mathsf{D}\left[ \mathsf{D}[\varphi_A] \right] \right) \circ \oc(c_A) \tag{\ref{partial2}} \\
&=~ \epsilon_{\oc(A)} \circ \oc\left( \mathsf{D}\left[ \mathsf{D}[\varphi_A] \right] \circ c_A \right) \tag{$\oc$ is a functor} \\
&=~ \epsilon_{\oc(A)} \circ \oc\left( \mathsf{D}\left[ \mathsf{D}[\varphi_A] \right] \right) \tag{\textbf{[CD.7]}} \\
&=~\partial_{A} \circ \partial_{A \times A} \tag{\ref{partial2}}
\end{align*}
\end{enumerate}
So we conclude that $\partial$ is a differential combinator transformation, and therefore $(\oc, \beta, \epsilon, \partial)$ is a Cartesian differential comonad. It remains to show that $\mathsf{G}_\epsilon$ and $\mathsf{G}^{-1}_\epsilon$ commute with the differential combinator. First, for any map ${f: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B}$ in $\mathbb{X}$, we compute:
\begin{align*}
\llbracket \mathsf{D}\left[ \mathsf{G}_\epsilon(f) \right] \rrbracket &=~\llbracket \mathsf{G}_\epsilon(f) \rrbracket \circ \partial_A \\
&=~\epsilon_{B} \circ \oc(f) \circ \partial_A \\
&=~\epsilon_{B} \circ \oc(f) \circ \epsilon_{\oc(A)} \circ \oc\left( \mathsf{D}[\varphi_A] \right) \\
&=~\epsilon_{B} \circ \epsilon_{\oc(B)} \circ \oc\oc(f) \circ \oc\left(\mathsf{D}[\varphi_A] \right) \tag{Naturality of $\epsilon_{\oc(-)}$} \\
&=~\epsilon_{B} \circ \epsilon_{\oc(B)} \circ \oc\left( \oc(f) \circ\mathsf{D}[\varphi_A] \right) \tag{$\oc$ is a functor} \\
&=~\epsilon_{B} \circ \epsilon_{\oc(B)} \circ \oc\left( \mathsf{D}\left[ \oc(f) \circ \varphi_A \right] \right) \tag{$\oc(f)$ is $\mathsf{D}$-linear and Lem \ref{linlem}.(\ref{linlem.post})} \\
&=~\epsilon_{B} \circ \epsilon_{\oc(B)} \circ \oc\left( \mathsf{D}\left[ \varphi_B \circ f \right] \right) \tag{Naturality of $\varphi$} \\
&=~\epsilon_{B} \circ \epsilon_{\oc(B)} \circ \oc\left( \mathsf{D}\left[ \varphi_B \right] \circ \langle f \circ \pi_0, \mathsf{D}[f] \rangle \right) \tag{\textbf{[CD.5]}} \\
&=~\epsilon_{B} \circ \oc(\epsilon_B) \circ \oc\left( \mathsf{D}\left[ \varphi_B \right] \circ \langle f \circ \pi_0, \mathsf{D}[f] \rangle \right) \tag{Abstract coKleisli structure identity} \\
&=~\epsilon_{B} \circ \oc\left( \epsilon_B \circ \mathsf{D}\left[ \varphi_B \right] \circ \langle f \circ \pi_0, \mathsf{D}[f] \rangle \right) \tag{$\oc$ is a functor} \\
&=~\epsilon_{B} \circ \oc\left( \mathsf{D}\left[ \epsilon_B \circ \varphi_B \right] \circ \langle f \circ \pi_0, \mathsf{D}[f] \rangle \right) \tag{$\epsilon$ is $\mathsf{D}$-linear and Lem \ref{linlem}.(\ref{linlem.post})} \\
&=~\epsilon_{B} \circ \oc\left( \mathsf{D}\left[ 1_B \right] \circ \langle f \circ \pi_0, \mathsf{D}[f] \rangle \right) \tag{Abstract coKleisli structure identity} \\
&=~\epsilon_{B} \circ \oc\left( \pi_1 \circ \langle f \circ \pi_0, \mathsf{D}[f] \rangle \right)\tag{\textbf{[CD.3]}} \\
&=~\epsilon_B \circ \oc\left( \mathsf{D}[f] \right) \\
&=~\llbracket \mathsf{G}_\epsilon(\mathsf{D}[f] ) \rrbracket
\end{align*}
Next for any $\epsilon$-natural coKleisli map $\llbracket f \rrbracket: \oc(A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$, note that $\llbracket f \rrbracket$ is then $\mathsf{D}$-linear in $\mathbb{X}$, so we compute:
\begin{align*}
\mathsf{D}\left[ \mathsf{G}^{-1}_\epsilon\left( \llbracket f \rrbracket \right) \right] &=~ \mathsf{D}\left[ \llbracket f \rrbracket \circ \varphi_A \right] \\
&=~ \llbracket f \rrbracket \circ \mathsf{D}[\varphi_A] \tag{$\llbracket f \rrbracket$ is $\mathsf{D}$-linear and Lem \ref{linlem}.(\ref{linlem.post})} \\
&=~\llbracket f \rrbracket \circ 1_{\oc(A)} \circ \mathsf{D}[\varphi_A] \\
&=~\llbracket f \rrbracket \circ \epsilon_{\oc(A)} \circ \varphi_{\oc(A)} \circ \mathsf{D}[\varphi_A] \tag{Abstract coKleisli structure identity} \\
&=~\llbracket f \rrbracket \circ \epsilon_{\oc(A)} \circ \oc\left( \mathsf{D}[\varphi_A] \right) \circ \varphi_{A \times A} \tag{Naturality of $\varphi$} \\
&=~\llbracket f \rrbracket \circ \partial_A \circ \varphi_{A \times A} \\
&=~ \llbracket \mathsf{D}[f] \rrbracket \circ \varphi_{A \times A} \\
&=~\mathsf{G}^{-1}_\epsilon\left( \llbracket \mathsf{D}[f] \rrbracket \right)
\end{align*}
So we conclude that $\mathbb{X}$ and $\epsilon\text{-}\mathsf{nat}[\mathbb{X}]_\oc$ are isomorphic as Cartesian differential categories.
\end{proof}
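The chain rule \textbf{[CD.5]}, $\mathsf{D}[g \circ f] = \mathsf{D}[g] \circ \langle f \circ \pi_0, \mathsf{D}[f] \rangle$, does most of the work in the proof above. As a sanity check, here is a small numerical sketch (our own illustration) of \textbf{[CD.5]} in the Cartesian differential category of smooth maps on $\mathbb{R}$, again with derivatives supplied by hand:

```python
# [CD.5] in points: D[g o f](x, v) = D[g](f(x), D[f](x, v)).

def D(h, dh):
    """Differential combinator: D[h](x, v) = h'(x) * v, given h' = dh."""
    return lambda x, v: dh(x) * v

f,  df  = (lambda x: x * x),        (lambda x: 2.0 * x)       # f(x) = x^2
g,  dg  = (lambda x: x ** 3),       (lambda x: 3.0 * x * x)   # g(x) = x^3
gf, dgf = (lambda x: (x * x) ** 3), (lambda x: 6.0 * x ** 5)  # (g o f)(x) = x^6

x, v = 2.0, 1.5
lhs = D(gf, dgf)(x, v)                  # D[g o f](x, v)
rhs = D(g, dg)(f(x), D(f, df)(x, v))    # D[g](f(x), D[f](x, v))
assert lhs == rhs
```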
It is important to note that while $\epsilon$-natural maps are assumed to be $\mathsf{D}$-linear, the converse is not necessarily true. It turns out that all $\mathsf{D}$-linear maps are $\epsilon$-natural precisely when the Cartesian differential comonad has a $\mathsf{D}$-linear unit.
\begin{lemma} Let $\mathbb{X}$ be a Cartesian differential abstract coKleisli category with differential combinator $\mathsf{D}$ and abstract coKleisli structure $(\oc, \varphi, \epsilon)$. Define the natural transformation $\eta_A: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \oc(A)$ as follows:
\begin{equation}\label{etadef2}\begin{gathered}\eta_A := \xymatrixcolsep{5pc}\xymatrix{ A \ar[r]^-{\mathsf{L}[\varphi_A]} & \oc(A)
} \end{gathered}\end{equation}
where $\mathsf{L}$ is defined as in Lemma \ref{linlem}.(\ref{linlemimportant1}). Then the following are equivalent:
\begin{enumerate}[{\em (i)}]
\item $\epsilon\text{-}\mathsf{nat}[\mathbb{X}] = \mathsf{D}\text{-}\mathsf{lin}[\mathbb{X}]$, that is, every $\mathsf{D}$-linear map is $\epsilon$-natural;
\item For every object $A$, $\eta_A$ is $\epsilon$-natural;
\item $\eta$ is a $\mathsf{D}$-linear unit for $(\oc, \beta, \epsilon, \partial)$.
\end{enumerate}
\end{lemma}
\begin{proof} We first show that $\eta$ is indeed a natural transformation. So for an $\epsilon$-natural map $f: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$, we compute:
\begin{align*}
\oc(f) \circ \eta_A &=~ \oc(f) \circ \mathsf{L}[\varphi_A] \\
&=~ \oc(f) \circ \mathsf{D}[\varphi_A] \circ \iota_1 \\
&=~ \mathsf{D}[\oc(f) \circ \varphi_A] \circ \iota_1 \tag{$\oc(f)$ is $\mathsf{D}$-linear and Lem \ref{linlem}.(\ref{linlem.post})} \\
&=~ \mathsf{D}[\varphi_B \circ f] \circ \iota_1 \tag{Naturality of $\varphi$} \\
&=~ \mathsf{D}[\varphi_B] \circ (f \times f) \circ \iota_1 \tag{$f$ is $\mathsf{D}$-linear and Lem \ref{linlem}.(\ref{linlem.pre})} \\
&=~ \mathsf{D}[\varphi_B]\circ \iota_1 \circ f \\
&=~ \eta_B \circ f
\end{align*}
So $\eta$ is indeed a natural transformation.
For $(i) \Rightarrow (ii)$, first note that, by definition, every $\epsilon$-natural map is $\mathsf{D}$-linear, so the equality $\epsilon\text{-}\mathsf{nat}[\mathbb{X}] = \mathsf{D}\text{-}\mathsf{lin}[\mathbb{X}]$ amounts precisely to the assumption that every $\mathsf{D}$-linear map is $\epsilon$-natural. Now suppose that this is the case. By Lemma \ref{linlem}.(\ref{linlemimportant1}), $\mathsf{L}[\varphi_A]$ is $\mathsf{D}$-linear, and so, by assumption, $\eta_A = \mathsf{L}[\varphi_A]$ is also $\epsilon$-natural. Next, for $(ii) \Rightarrow (iii)$, we must show that $\eta$ satisfies both $\mathsf{D}$-linear unit axioms:
\begin{enumerate}[{\bf [du.1]}]
\item Here we use \textbf{[CD.3]}:
\begin{align*}
\epsilon_A \circ \eta_A &=~ \epsilon_A \circ \mathsf{L}[\varphi_A] \\
&=~ \epsilon_A \circ \mathsf{D}[\varphi_A] \circ \iota_1 \\
&=~ \mathsf{D}[\epsilon_A \circ \varphi_A] \circ \iota_1\tag{$\epsilon$ is $\mathsf{D}$-linear and Lem \ref{linlem}.(\ref{linlem.post})} \\
&=~ \mathsf{D}[1_A] \circ \iota_1 \tag{Abstract coKleisli structure identity} \\
&=~ \pi_1 \circ \iota_1 \tag{\textbf{[CD.3]}} \\
&=~ 1_A
\end{align*}
\item Here we use that $\eta_A$ is assumed to be $\epsilon$-natural:
\begin{align*}
\eta_A \circ \epsilon_A &=~ \epsilon_{\oc(A)} \circ \oc(\eta_A) \tag{$\eta_A$ is $\epsilon$-natural} \\
&=~ \epsilon_{\oc(A)} \circ \oc\left( \mathsf{L}\left[ \varphi_A \right] \right) \\
&=~ \epsilon_{\oc(A)} \circ \oc\left( \mathsf{D}\left[ \varphi_A \right] \circ \iota_1 \right) \\
&=~ \epsilon_{\oc(A)} \circ \oc\left( \mathsf{D}\left[ \varphi_A \right] \right) \circ \oc(\iota_1) \tag{$\oc$ is a functor} \\
&=~ \partial_A \circ \oc(\iota_1)
\end{align*}
\end{enumerate}
So we conclude that $\eta$ is a $\mathsf{D}$-linear unit. Lastly, for $(iii) \Rightarrow (i)$, by Proposition \ref{etaFlem1}, we have that $\epsilon\text{-}\mathsf{nat}[\mathbb{X}] \cong \mathsf{D}\text{-}\mathsf{lin}\left[\epsilon\text{-}\mathsf{nat}[\mathbb{X}]_\oc \right]$. By Proposition \ref{propab1}, $\mathbb{X}$ and $\epsilon\text{-}\mathsf{nat}[\mathbb{X}]_\oc$ are isomorphic as Cartesian differential categories, which implies that we also have that $\mathsf{D}\text{-}\mathsf{lin}[\mathbb{X}] \cong \mathsf{D}\text{-}\mathsf{lin}\left[\epsilon\text{-}\mathsf{nat}[\mathbb{X}]_\oc \right]$. Therefore, $\epsilon\text{-}\mathsf{nat}[\mathbb{X}] \cong \mathsf{D}\text{-}\mathsf{lin}[\mathbb{X}]$. However, it is straightforward to work out that this isomorphism is in fact an equality and so $\epsilon\text{-}\mathsf{nat}[\mathbb{X}] = \mathsf{D}\text{-}\mathsf{lin}[\mathbb{X}]$.
\end{proof}
We turn our attention to the converse of Proposition \ref{propab1}. We will now explain how every coKleisli category of a Cartesian differential comonad is a Cartesian differential abstract coKleisli category. To do so, let us first quickly review how every coKleisli category is an abstract coKleisli category.
\begin{lemma}\label{cokleisliabstractlem} \cite[Proposition 2.6.3]{blute2015cartesian} Let $(\oc, \delta, \varepsilon)$ be a comonad on a category $\mathbb{X}$. Then define:
\begin{enumerate}[{\em (i)}]
\item The endofunctor $\oc_\oc: \mathbb{X}_\oc \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \mathbb{X}_\oc$ on objects as $\oc_\oc(A) = \oc(A)$ and on a coKleisli map $\llbracket f \rrbracket: \oc(A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ as the following composite:
\begin{align*}
\llbracket \oc_\oc(f) \rrbracket := \xymatrixcolsep{3pc}\xymatrix{\oc\oc (A) \ar[r]^-{\varepsilon_{\oc(A)}} & \oc(A) \ar[r]^-{\delta_A} & \oc\oc(A) \ar[r]^-{\oc\left( \llbracket f \rrbracket \right)} & \oc(B) } && \llbracket \oc_\oc(f) \rrbracket = \oc\left( \llbracket f \rrbracket \right) \circ \delta_A \circ \varepsilon_{\oc(A)}
\end{align*}
\item The family of coKleisli maps $\llbracket \epsilon_A \rrbracket: \oc\oc(A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} A$ as the following composite:
\begin{align*}
\llbracket \epsilon_A \rrbracket := \xymatrixcolsep{3pc}\xymatrix{\oc\oc (A) \ar[r]^-{\varepsilon_{\oc(A)}} & \oc(A) \ar[r]^-{\varepsilon_A} & A } && \llbracket \epsilon_A \rrbracket = \varepsilon_A \circ \varepsilon_{\oc(A)}
\end{align*}
\end{enumerate}
Then the coKleisli category $\mathbb{X}_\oc$ is an abstract coKleisli category with abstract coKleisli structure $(\oc_\oc, \varphi, \epsilon)$, where $\varphi$ is defined as in (\ref{varphidef}). Furthermore,
\begin{enumerate}[{\em (i)}]
\item \label{lemepnatcok} A coKleisli map $\llbracket f \rrbracket: \oc(A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ is $\epsilon$-natural if and only if $\llbracket f \rrbracket \circ \varepsilon_{\oc(A)} = \llbracket f \rrbracket \circ \oc(\varepsilon_A)$;
\item \label{lemepvarep} For every map $f: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ in $\mathbb{X}$, $\llbracket \mathsf{F}_\oc(f) \rrbracket: \oc(A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ is $\epsilon$-natural;
\item There is a functor $\mathsf{F}_\epsilon: \mathbb{X} \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \epsilon\text{-}\mathsf{nat}[\mathbb{X}_\oc]$ which is defined on objects as $\mathsf{F}_\epsilon(A) = A$ and on maps ${f: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B}$ as $\llbracket \mathsf{F}_\epsilon(f) \rrbracket = f \circ \varepsilon_A = \llbracket \mathsf{F}_{\oc}(f) \rrbracket$, and such that the following diagram commutes:
\[ \xymatrixcolsep{5pc}\xymatrix{ \mathbb{X} \ar[dr]_-{\mathsf{F}_\epsilon} \ar[rr]^-{\mathsf{F}_\oc} && \mathbb{X}_\oc \\
& \epsilon\text{-}\mathsf{nat}[\mathbb{X}_\oc] \ar[ur]_-{\mathsf{U}}
} \]
\end{enumerate}
\end{lemma}
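Lemma \ref{cokleisliabstractlem} can be instantiated in a small toy model: the product comonad $\oc(A) = S \times A$ on sets, whose coKleisli maps are functions that may consult an environment in $S$. The encoding below is our own illustration (the choice of comonad and all names are assumptions, not from \cite{blute2015cartesian}); it checks the counit laws and the criterion of Lemma \ref{cokleisliabstractlem}.(\ref{lemepnatcok}), which here says that a coKleisli map is $\epsilon$-natural exactly when it ignores the environment:

```python
# Product comonad !(A) = S x A, encoded as pairs (s, a).

def eps(sa):                 # counit  eps_A : S x A -> A ... here as (s, a) |-> a,
    s, a = sa                # but note eps_{!(A)} : S x (S x A) -> S x A is the
    return a                 # same formula, (s, (s', a)) |-> (s', a).

def delta(sa):               # comultiplication  delta_A : S x A -> S x (S x A)
    s, a = sa
    return (s, (s, a))

def bang(f):                 # !(f) : S x A -> S x B for f : A -> B
    return lambda sa: (sa[0], f(sa[1]))

# Counit laws:  eps o delta = id  and  !(eps) o delta = id.
assert eps(delta((3, 4))) == (3, 4)
assert bang(eps)(delta((3, 4))) == (3, 4)

# epsilon-naturality criterion:  f o eps_{!(A)} = f o !(eps_A).
f_dep   = lambda sa: sa[0] + sa[1]   # coKleisli map that uses the environment
f_indep = lambda sa: 2 * sa[1]       # coKleisli map that ignores it

elem = (1, (10, 7))                  # an element (s, (s', a)) with s != s'
assert f_indep(eps(elem)) == f_indep(bang(eps)(elem))   # epsilon-natural
assert f_dep(eps(elem))   != f_dep(bang(eps)(elem))     # not epsilon-natural
```

Since every environment-independent coKleisli map factors through $\varepsilon_A$, this matches the intuition that, for an exact comonad, the $\epsilon$-natural coKleisli maps are precisely the maps of the base category.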
A natural question to ask is when the subcategory of $\epsilon$-natural maps of a coKleisli category is isomorphic to the base category. This is precisely the case when the comonad is exact (for monads, the dual condition is called the equalizer requirement \cite[Definition 8]{fuhrmann1999direct}).
\begin{lemma}\label{lemexact} \cite[Dual of Theorem 9]{fuhrmann1999direct} Let $(\oc, \delta, \varepsilon)$ be a comonad on a category $\mathbb{X}$. Then the following are equivalent:
\begin{enumerate}[{\em (i)}]
\item $\mathsf{F}_\epsilon: \mathbb{X} \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \epsilon\text{-}\mathsf{nat}[\mathbb{X}_\oc]$ is an isomorphism;
\item The comonad $(\oc, \delta, \varepsilon)$ is exact \cite[Section 2.6]{blute2015cartesian}, that is, for every object $A$, the following is a coequalizer diagram:
\[ \xymatrixcolsep{5pc}\xymatrix{\oc\oc(A) \ar@<1ex>[r]^{\varepsilon_{\oc(A)}} \ar@<-1ex>[r]_{\oc(\varepsilon_A)} & \oc(A) \ar[r]^-{\varepsilon_A} & A } \]
\end{enumerate}
\end{lemma}
In the case of an exact comonad, the base category can thus be recovered from the coKleisli category as the subcategory of $\epsilon$-natural maps. For abstract coKleisli categories, note that the comonad from Lemma \ref{lem:ep-com} is always exact.
For a comonad on a category with finite products, the coKleisli category is a Cartesian abstract coKleisli category.
\begin{lemma} \cite[Section 2.6]{blute2015cartesian} Let $(\oc, \delta, \varepsilon)$ be a comonad on a category $\mathbb{X}$ with finite products. Then the coKleisli category $\mathbb{X}_\oc$ is a Cartesian abstract coKleisli category with abstract coKleisli structure as defined in Lemma \ref{cokleisliabstractlem}.
\end{lemma}
For a comonad on a Cartesian left additive category, the coKleisli category is a Cartesian left additive abstract coKleisli category.
\begin{lemma} Let $(\oc, \delta, \varepsilon)$ be a comonad on a Cartesian left additive category $\mathbb{X}$. Then the coKleisli category $\mathbb{X}_\oc$ is a Cartesian left additive abstract coKleisli category with abstract coKleisli structure as defined in Lemma \ref{cokleisliabstractlem} and Cartesian left additive structure as defined in Lemma \ref{cokleisliCLAC}.
\end{lemma}
\begin{proof} We must show that zero maps are $\epsilon$-natural, and that the sum of $\epsilon$-natural maps is again $\epsilon$-natural. We will make use of Lemma \ref{cokleisliabstractlem}.(\ref{lemepnatcok}). Starting with zero maps, we compute:
\begin{align*}
\llbracket 0 \rrbracket \circ \varepsilon_{\oc(A)} &=~ 0 \circ \varepsilon_{\oc(A)} \\
&=~ 0 \\
&=~ 0 \circ \oc(\varepsilon_A) \\
&=~ \llbracket 0 \rrbracket \circ \oc(\varepsilon_A)
\end{align*}
So $\llbracket 0 \rrbracket \circ \varepsilon_{\oc(A)} = \llbracket 0 \rrbracket \circ \oc(\varepsilon_A)$, and so by Lemma \ref{cokleisliabstractlem}.(\ref{lemepnatcok}), we conclude that $\llbracket 0 \rrbracket$ is $\epsilon$-natural. Next, suppose that ${\llbracket f \rrbracket: \oc(A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B}$ and $\llbracket g \rrbracket: \oc(A) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} B$ are both $\epsilon$-natural. Then we compute:
\begin{align*}
\llbracket f+g \rrbracket \circ \varepsilon_{\oc(A)} &=~ \left( \llbracket f \rrbracket + \llbracket g \rrbracket \right) \circ \varepsilon_{\oc(A)} \\
&=~ \llbracket f \rrbracket \circ \varepsilon_{\oc(A)} + \llbracket g \rrbracket \circ \varepsilon_{\oc(A)} \\
&=~ \llbracket f \rrbracket \circ \oc(\varepsilon_A) + \llbracket g \rrbracket \circ \oc(\varepsilon_A) \tag{$\llbracket f \rrbracket$ and $\llbracket g \rrbracket$ are $\epsilon$-nat. + Lem.\ref{cokleisliabstractlem}.(\ref{lemepnatcok}) } \\
&=~ \left( \llbracket f \rrbracket + \llbracket g \rrbracket \right) \circ \oc(\varepsilon_A) \\
&=~ \llbracket f+g \rrbracket \circ \oc(\varepsilon_A)
\end{align*}
So $\llbracket f+g \rrbracket \circ \varepsilon_{\oc(A)} = \llbracket f+g \rrbracket \circ \oc(\varepsilon_A)$, and so by Lemma \ref{cokleisliabstractlem}.(\ref{lemepnatcok}) it follows that $\llbracket f+g \rrbracket$ is $\epsilon$-natural. Therefore, we conclude that $\mathbb{X}_\oc$ is a Cartesian left additive abstract coKleisli category.
\end{proof}
We will now show that for a Cartesian differential comonad, its coKleisli category is a Cartesian differential abstract coKleisli category.
\begin{proposition}\label{propabcok} Let $(\oc, \delta, \varepsilon)$ be a Cartesian differential comonad on a category $\mathbb{X}$ with finite biproducts. Then $\mathbb{X}_\oc$ is a Cartesian differential abstract coKleisli category with Cartesian differential structure defined in Theorem \ref{thm1} and abstract coKleisli structure $(\oc_\oc, \varphi, \epsilon)$ as defined in Lemma \ref{cokleisliabstractlem}.
\end{proposition}
\begin{proof} We must show that every $\epsilon$-natural map is $\mathsf{D}$-linear. To do so, we make use of both Theorem \ref{thm1}.(\ref{thm1.lin}) and Lemma \ref{cokleisliabstractlem}.(\ref{lemepnatcok}). So suppose that $\llbracket f \rrbracket: \oc(A) \to B$ is $\epsilon$-natural. Then we compute:
\begin{align*}
\llbracket f \rrbracket \circ \partial_A \circ \oc(\iota_1) &=~\llbracket f \rrbracket \circ \partial_A \circ \oc(\iota_1) \circ 1_{\oc(A)} \\
&=~ \llbracket f \rrbracket \circ \partial_A \circ \oc(\iota_1) \circ \oc(\varepsilon_A) \circ \delta_A \tag{Comonad Identity} \\
&=~ \llbracket f \rrbracket \circ \partial_A \circ \oc(\iota_1 \circ \varepsilon_A) \circ \delta_A \tag{$\oc$ is a functor} \\
&=~ \llbracket f \rrbracket \circ \partial_A \circ \oc\left( (\varepsilon_A \times \varepsilon_A) \circ \iota_1 \right) \circ \delta_A \tag{Naturality of $\iota_1$} \\
&=~ \llbracket f \rrbracket \circ \partial_A \circ \oc(\varepsilon_A \times \varepsilon_A) \circ \oc(\iota_1) \circ \delta_A \tag{$\oc$ is a functor} \\
&=~ \llbracket f \rrbracket \circ \oc(\varepsilon_A) \circ \partial_{\oc(A)} \circ \oc(\iota_1) \circ \delta_A \tag{Naturality of $\partial$} \\
&=~ \llbracket f \rrbracket \circ \varepsilon_{\oc(A)} \circ \partial_{\oc(A)} \circ \oc(\iota_1) \circ \delta_A \tag{$\llbracket f \rrbracket$ is $\epsilon$-nat. + Lem.\ref{cokleisliabstractlem}.(\ref{lemepnatcok}) } \\
&=~ \llbracket f \rrbracket \circ \pi_1 \circ \varepsilon_{\oc(A) \times \oc(A)} \circ \oc(\iota_1) \circ \delta_A \tag{\textbf{[dc.3]}} \\
&=~ \llbracket f \rrbracket \circ \pi_1 \circ \iota_1 \circ \varepsilon_{\oc(A)} \circ \delta_A \tag{Naturality of $\varepsilon$} \\
&=~ \llbracket f \rrbracket \circ 1_{\oc(A)} \circ 1_{\oc(A)} \tag{Biproduct Identity + Comonad Identity} \\
&=~ \llbracket f \rrbracket
\end{align*}
So $\llbracket f \rrbracket \circ \partial_A \circ \oc(\iota_1) = \llbracket f \rrbracket$, and so by Theorem \ref{thm1}.(\ref{thm1.lin}), it follows that $\llbracket f \rrbracket$ is $\mathsf{D}$-linear. Therefore, we conclude that $\mathbb{X}_\oc$ is a Cartesian differential abstract coKleisli category.
\end{proof}
We conclude this section by showing that for a Cartesian differential comonad with a $\mathsf{D}$-linear unit, the underlying comonad is exact and that a coKleisli map is $\mathsf{D}$-linear if and only if it is $\epsilon$-natural.
\begin{lemma}\label{etaexact} Let $(\oc, \delta, \varepsilon, \partial)$ be a Cartesian differential comonad on a category $\mathbb{X}$ with finite biproducts. Then the following are equivalent:
\begin{enumerate}[{\em (i)}]
\item $(\oc, \delta, \varepsilon, \partial)$ has a $\mathsf{D}$-linear unit $\eta_A : A \to \oc (A)$;
\item The comonad $(\oc, \delta, \varepsilon)$ is exact and for each object $A$, the $\mathsf{D}$-linear map $\llbracket \mathsf{L}[\varphi_A] \rrbracket: \oc(A) \to \oc(A)$ is $\epsilon$-natural.
\end{enumerate}
\end{lemma}
\begin{proof} For $(i) \Rightarrow (ii)$, suppose that $(\oc, \delta, \varepsilon, \partial)$ has a $\mathsf{D}$-linear unit $\eta_A : A \to \oc (A)$. We first show that we have a coequalizer. So suppose that $\llbracket f \rrbracket: \oc(A) \to B$ coequalizes $\varepsilon_{\oc(A)}$ and $\oc(\varepsilon_A)$, that is, $\llbracket f \rrbracket \circ \varepsilon_{\oc(A)} = \llbracket f \rrbracket \circ \oc(\varepsilon_A)$. Note that by Lemma \ref{cokleisliabstractlem}.(\ref{lemepnatcok}), this implies that $\llbracket f \rrbracket$ is $\epsilon$-natural, and so by Proposition \ref{propabcok}, $\llbracket f \rrbracket$ is also $\mathsf{D}$-linear. Now consider the composite map $\llbracket f \rrbracket \circ \eta_A : A \to B$. Then by Corollary \ref{etacor1}, since $\llbracket f \rrbracket$ is $\mathsf{D}$-linear, $\llbracket f \rrbracket \circ \eta_A \circ \varepsilon_A = \llbracket f \rrbracket$. For uniqueness, suppose there were another map $h: A \to B$ such that $h \circ \varepsilon_A = \llbracket f \rrbracket$. Precomposing both sides by $\eta_A$, by \textbf{[du.1]}, we have that $h = \llbracket f \rrbracket \circ \eta_A$. So we conclude that we have a coequalizer and therefore that the comonad is exact. Next we must show that $\llbracket \mathsf{L}[\varphi_A] \rrbracket$ is $\epsilon$-natural. Note that as was shown in the proof of Proposition \ref{etaFlem1}, $\llbracket \mathsf{L}[\varphi_A] \rrbracket = \eta_A \circ \varepsilon_A = \llbracket \mathsf{F}_\oc(\eta_A) \rrbracket$. Then by Lemma \ref{cokleisliabstractlem}.(\ref{lemepvarep}), it follows that $\llbracket \mathsf{L}[\varphi_A] \rrbracket$ is $\epsilon$-natural.
For $(ii) \Rightarrow (i)$, suppose that $(\oc, \delta, \varepsilon)$ is exact and $\llbracket \mathsf{L}[\varphi_A] \rrbracket$ is $\epsilon$-natural. By Lemma \ref{cokleisliabstractlem}.(\ref{lemepnatcok}), the latter implies that $\llbracket \mathsf{L}[\varphi_A] \rrbracket \circ \varepsilon_{\oc(A)} = \llbracket \mathsf{L}[\varphi_A] \rrbracket \circ \oc(\varepsilon_A)$. Therefore, by the couniversal property of the coequalizer, there exists a unique map $\eta_A: A \to \oc(A)$ such that the following diagram commutes:
\[ \xymatrixcolsep{5pc}\xymatrix{\oc\oc(A) \ar@<1ex>[r]^{\varepsilon_{\oc(A)}} \ar@<-1ex>[r]_{\oc(\varepsilon_A)} & \oc(A) \ar[dr]_-{\llbracket \mathsf{L}[\varphi_A] \rrbracket} \ar[r]^-{\varepsilon_A} & A \ar@{-->}[d]^-{\exists \oc ~ \eta_A} \\
&& \oc(A)} \]
To show that $\eta$ is a $\mathsf{D}$-linear unit, we will make use of Lemma \ref{Lvarphi} and of the fact that $\varepsilon_A$ is epic (which is a consequence of the coequalizer assumption). Starting with naturality, for any map $f: A \to B$, we compute:
\begin{align*}
\oc(f) \circ \eta_A \circ \varepsilon_A &=~ \oc(f) \circ \llbracket \mathsf{L}[\varphi_A] \rrbracket \\
&=~ \oc(f) \circ \partial_A \circ \oc(\iota_1) \tag{Lemma \ref{Lvarphi}} \\
&=~ \partial_B \circ \oc(f \times f) \circ \oc(\iota_1) \tag{Naturality of $\partial$} \\
&=~ \partial_B \circ \oc\left( (f \times f) \circ \iota_1 \right) \tag{$\oc$ is a functor} \\
&=~ \partial_B \circ \oc( \iota_1 \circ f) \tag{Naturality of $\iota_1$} \\
&=~ \partial_B \circ \oc(\iota_1) \circ \oc(f) \tag{$\oc$ is a functor} \\
&=~ \llbracket \mathsf{L}[\varphi_B] \rrbracket \circ \oc(f) \\
&=~ \eta_B \circ \varepsilon_B \circ \oc(f) \\
&=~ \eta_B \circ f \circ \varepsilon_A \tag{Naturality of $\varepsilon$}
\end{align*}
So $\oc(f) \circ \eta_A \circ \varepsilon_A = \eta_B \circ f \circ \varepsilon_A$. Since $\varepsilon_A$ is epic, it follows that $\oc(f) \circ \eta_A = \eta_B \circ f$. Therefore, $\eta$ is a natural transformation. Next we show that $\eta$ satisfies the two axioms of a $\mathsf{D}$-linear unit:
\begin{enumerate}[{\bf [du.1]}]
\item Here we use \textbf{[dc.3]}:
\begin{align*}
\varepsilon_A \circ \eta_A \circ \varepsilon_A &=~ \varepsilon_A \circ \llbracket \mathsf{L}[\varphi_A] \rrbracket \\
&=~ \varepsilon_A \circ \partial_A \circ \oc(\iota_1) \tag{Lemma \ref{Lvarphi}} \\
&=~ \pi_1 \circ \varepsilon_{A \times A} \circ \oc(\iota_1) \tag{\textbf{[dc.3]}} \\
&=~ \pi_1 \circ \iota_1 \circ \varepsilon_A \tag{Naturality of $\varepsilon$} \\
&=~ 1_A \circ \varepsilon_A \tag{Biproduct Identity}\\
&=~ \varepsilon_A
\end{align*}
So $\varepsilon_A \circ \eta_A \circ \varepsilon_A = \varepsilon_A$. Since $\varepsilon_A$ is epic, it follows that $\varepsilon_A \circ \eta_A = 1_A$.
\item Automatic by Lemma \ref{Lvarphi} since:
\begin{align*}
\eta_A \circ \varepsilon_A &=~\llbracket \mathsf{L}[\varphi_A] \rrbracket \\
&=~\partial_A \circ \oc(\iota_1) \tag{Lemma \ref{Lvarphi}}
\end{align*}
\end{enumerate}
So we conclude that $\eta$ is a $\mathsf{D}$-linear unit.
\end{proof}
\begin{corollary}\label{etacor2} Let $(\oc, \delta, \varepsilon, \partial)$ be a Cartesian differential comonad with a $\mathsf{D}$-linear unit $\eta$ on a category $\mathbb{X}$ with finite biproducts. Then for a coKleisli map $\llbracket f \rrbracket: \oc(A) \to B$, $\llbracket f \rrbracket$ is $\mathsf{D}$-linear in $\mathbb{X}_\oc$ if and only if $\llbracket f \rrbracket$ is $\epsilon$-natural in $\mathbb{X}_\oc$. As such, we have the following chain of isomorphisms: $\mathbb{X} \cong \epsilon\text{-}\mathsf{nat}[\mathbb{X}_\oc] \cong \mathsf{D}\text{-}\mathsf{lin}[\mathbb{X}_\oc]$.
\end{corollary}
\begin{proof} By Proposition \ref{propabcok}, we already have that every $\epsilon$-natural map is $\mathsf{D}$-linear. So suppose that $\llbracket f \rrbracket$ is $\mathsf{D}$-linear. By Corollary \ref{etacor1}, since $\llbracket f \rrbracket$ is $\mathsf{D}$-linear, we have that $\llbracket f \rrbracket \circ \eta_A \circ \varepsilon_A = \llbracket f \rrbracket$. However, this also implies that $\llbracket f \rrbracket = \llbracket \mathsf{F}_\oc\left( \llbracket f \rrbracket \circ \eta_A \right) \rrbracket$. Then by Lemma \ref{cokleisliabstractlem}.(\ref{lemepvarep}), it follows that $\llbracket f \rrbracket$ is $\epsilon$-natural. By Proposition \ref{etaFlem1}, we have that $\mathbb{X} \cong \mathsf{D}\text{-}\mathsf{lin}[\mathbb{X}_\oc]$, while by Lemma \ref{etaexact} and Lemma \ref{lemexact}, we have that $\mathbb{X} \cong \epsilon\text{-}\mathsf{nat}[\mathbb{X}_\oc]$. \hfill \end{proof}
\section{Example: Reduced Power Series}\label{sec:PWex}
In this section we construct a Cartesian differential comonad (in the opposite category) based on \emph{reduced} formal power series, which therefore induces a Cartesian differential category of \emph{reduced} formal power series. To the best of the authors' knowledge, this is a new observation. This is an interesting and important non-trivial example of a Cartesian differential comonad which does not arise from a differential category. Unsurprisingly, the differential combinator will reflect the standard differentiation of arbitrary multivariable power series. However, the problem with arbitrary power series lies with composition. Indeed, famously, power series with nonzero degree 0 coefficients, also called constant terms, cannot in general be composed, since substitution results in an infinite non-converging sum in the base field. Thus, multivariable formal power series do not form a category, since their composition may be undefined. \emph{Reduced} formal power series are power series with no constant term. These can be composed \cite[Section 4.1]{brewer2014algebraic} and thus, we obtain a Lawvere theory of reduced power series. The total derivative of a reduced power series is again reduced, and therefore, we obtain a Cartesian differential category of reduced power series. Furthermore, this Cartesian differential category of reduced power series is in fact a subcategory of the opposite category of the Kleisli category of the Cartesian differential monad $\mathsf{P}$, the free reduced power series algebra monad, which can be seen as the free complete algebra functor induced by the operad of commutative algebras \cite[Section 1.4.4]{Fresse98}. Lastly, it is worth mentioning that, while in this section we will work with vector spaces over a field, all the constructions and examples in this section easily generalize to the category of modules over a commutative (semi)ring.
Let $\mathbb{F}$ be a field. For an $\mathbb{F}$-vector space $V$, define $\mathsf{P}(V)$ as follows:
\[ \mathsf{P}(V)= \prod^{\infty}_{n=1} (V^{\otimes n})_{S(n)}, \]
where we denote by $(V^{\otimes n})_{S(n)}$ the vector space of symmetrized $n$-tensors, that is, classes of tensors of length $n$ under the action of the symmetric group which permutes the factors in $V^{\otimes n}$. An arbitrary element $\mathfrak{t} \in \mathsf{P}(V)$ is then an infinite ordered list $\mathfrak{t} = \left( \mathfrak{t}(n) \right)^\infty_{n=1}$ where $\mathfrak{t}(n) \in (V^{\otimes n})_{S(n)}$. Therefore, an arbitrary element of $ \mathsf{P}(V)$ can be written in the following form:
\[ \mathfrak{t} = \left( \mathfrak{t}(n) \right)^\infty_{n=1} = \left( \sum \limits^m_{i=1} v_{(n,i,1)} \hdots v_{(n,i,n)} \right)^\infty_{n=1} , \]
where $v_{(n,k,1)} \hdots v_{(n,k,n)}$ denotes the class of $v_{(n,k,1)} \otimes \hdots \otimes v_{(n,k,n)}\in V^{\otimes n}$ under the action of the symmetric group. If $X$ is a basis of $V$, then $\mathsf{P}(V) \cong \mathbb{F}\llbracket X \rrbracket_+$ \cite[Section 1.4.4]{Fresse98}, where $\mathbb{F}\llbracket X \rrbracket_+$ is the non-unital associative ring of reduced power series over $X$, that is, power series over $X$ with no constant/degree $0$ term. Therefore, $\mathsf{P}(V)$ is a non-unital associative $\mathbb{F}$-algebra. The algebra structure is induced by concatenation of classes of tensors:
\[*:v_{1} \dots v_{n}\otimes w_1\dots w_{k}\mapsto v_{1} \dots v_{n}w_1\dots w_{k}\]
which provides a commutative, associative multiplication $*:(V^{\otimes n})_{S(n)}\otimes (V^{\otimes k})_{S(k)}\to(V^{\otimes n+k})_{S(n+k)}$. We will sometimes abbreviate a juxtaposition of a finite family $(v_i)_{i =1}^n$ of elements of $V$ (resp. a concatenation of a finite family of symmetrized tensors $(\underline v_i)_{i=1}^n$ with $\underline v_i\in (V^{\otimes n_i})_{S(n_i)}$) by:
\begin{align*}
\prod_{i=1}^{n}v_i=v_1\hdots v_n, &&\left(\mbox{resp. }\prod_{i=1}^{n}\underline v_i=\underline v_1*\hdots*\underline v_{n}\right).
\end{align*}
It is worth pointing out that $\mathsf{P}(V)$ does not have a unit element. More specifically, $\mathsf{P}(V)$ will not come equipped with a natural map of type $\mathbb{F} \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \mathsf{P}(V)$. So $\mathsf{P}(V)$ will not induce an algebra modality, and therefore will not induce a differential category structure on $\mathbb{F}\text{-}\mathsf{VEC}^{op}$.
The monad structure of $\mathsf{P}$ corresponds to the composition of power series, which is closely related to the composition of polynomials. Define the functor $\mathsf{P}: \mathbb{F}\text{-}\mathsf{VEC} \to \mathbb{F}\text{-}\mathsf{VEC}$ as mapping an $\mathbb{F}$-vector space $V$ to $\mathsf{P}(V)$, as defined above, and mapping an $\mathbb{F}$-linear map $f: V \to W$ to the $\mathbb{F}$-linear map ${\mathsf{P}(f): \mathsf{P}(V) \to \mathsf{P}(W)}$ defined on elements $\mathfrak t$ as above by:
\begin{align*}
\mathsf{P}(f)(\mathfrak{t}) = \left( \sum \limits^m_{i=1} f(v_{(n,i,1)}) \hdots f(v_{(n,i,n)}) \right)^\infty_{n=1}.
\end{align*}
Define the monad unit $\eta_V: V \to \mathsf{P}(V)$ by:
\begin{align*}
\eta_V(v) = (v, 0, 0, \hdots)
\end{align*}
From a power series point of view, if $X$ is a basis of $V$, $\eta_V$ maps a basis element $x \in X$ to its associated monomial of degree $1$. For the monad multiplication, let us first consider an element $\mathfrak{s} \in \mathsf{P}\mathsf{P}(V)$, which is a list of symmetrized tensor products of lists of symmetrized tensor products, $\mathfrak{s} = \left( \mathfrak{s}(n) \right)^\infty_{n=1}$, $\mathfrak{s}(n) \in \left((\mathsf{P}(V))^{\otimes n}\right)_{S(n)}$ and thus, $\mathfrak{s}(n)$ is of the form:
\[\mathfrak{s}(n) = \sum \limits^m_{i=1} \mathfrak{s}(n)_{(i,1)}\hdots \mathfrak{s}(n)_{(i,n)} \]
for some $\mathfrak{s}(n)_{(i,j)} \in \mathsf{P}(V)$. Now for every partition of $n$ not involving $0$, that is, for every $n_1+ \hdots + n_k = n$ with $n_j \geq 1$, define $\mathfrak{s}(n_1, \hdots, n_k) \in (V^{\otimes n})_{S(n)}$ as follows:
\[\mathfrak{s}(n_1, \hdots, n_k) = \sum \limits^m_{i=1}\mathfrak{s}(k)_{(i,1)}(n_1)*\hdots* \mathfrak{s}(k)_{(i,k)}(n_k), \]
where $*$ is the concatenation multiplication defined above. Define the monad multiplication $\mu_V: \mathsf{P}\mathsf{P}(V) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \mathsf{P}(V)$ as follows:
\begin{align*}
\mu_V(\mathfrak{s}) &= \left(\sum\limits_{k=1}^n \sum \limits_{n_1 + \hdots + n_k=n} \mathfrak{s}(n_1, \hdots, n_k) \right)^\infty_{n=1} &&
\mu_V(\mathfrak{s})(n) &= \sum\limits_{k=1}^n \sum \limits_{n_1 + \hdots + n_k=n} \mathfrak{s}(n_1, \hdots, n_k)
\end{align*}
This monad multiplication corresponds to the composition of multivariable reduced power series, as defined explicitly in \cite[Section 4.1]{brewer2014algebraic}.
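To make the link with substitution concrete, here is a minimal computational sketch (our own illustration, not part of the formal development, and in one variable only for simplicity): reduced truncated power series are represented as dictionaries mapping degrees $\geq 1$ to coefficients, and composition is performed by substitution. The truncation order and all names are our own choices. Reducedness of $g$ guarantees that $g^k$ only contributes in degrees $\geq k$, so each output coefficient is a finite sum.

```python
# A minimal illustrative sketch: one-variable reduced power series,
# truncated at a fixed order N, represented as dictionaries
# {degree: coefficient} with all degrees >= 1.

N = 6  # truncation order, an arbitrary choice for this illustration

def mul(f, g):
    """Product of two truncated series."""
    h = {}
    for i, a in f.items():
        for j, b in g.items():
            if i + j <= N:
                h[i + j] = h.get(i + j, 0) + a * b
    return h

def compose(f, g):
    """f(g(x)) for reduced series: since g has no degree-0 term, g**k only
    contributes in degrees >= k, so each output coefficient is a finite sum."""
    assert all(d >= 1 for d in g), "g must be reduced"
    result, power = {}, {0: 1}  # power holds g**k, starting from g**0 = 1
    for k in range(1, N + 1):
        power = mul(power, g)
        c = f.get(k, 0)
        if c:
            for d, b in power.items():
                result[d] = result.get(d, 0) + c * b
    return {d: c for d, c in result.items() if c}

# f(x) = x + x^2 composed with g(x) = 2x gives 2x + 4x^2:
print(compose({1: 1, 2: 1}, {1: 2}))  # {1: 2, 2: 4}
```

Attempting the same substitution with a non-reduced $g$ (a nonzero degree $0$ term) would require summing infinitely many scalars for each coefficient, which is exactly the obstruction discussed above.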
\begin{lemma}\label{lem:powmonad}\cite[Section 1.4.3]{Fresse98}
$(\mathsf{P}, \mu, \eta)$ is a monad on $\mathbb{F}\text{-}\mathsf{VEC}$.
\end{lemma}
We now introduce the differential combinator transformation for $\mathsf{P}$, which will correspond to differentiating power series.
Define $\partial_V: \mathsf{P}(V) \to \mathsf{P}(V \times V)$ by setting:
\begin{align*}
\partial_V(\mathfrak{t})(n) = \sum \limits^m_{i=1}\sum_{j=1}^n \left((v_{(n,i,1)},0) \hdots \widehat{(v_{(n,i,j)},0)}\hdots (v_{(n,i,n)},0)\right) (0,v_{(n,i,j)}),
\end{align*}
where $\mathfrak t$ is an arbitrary element of $\mathsf P(V)$ and $\widehat{(v_{(n,i,j)},0)}$ indicates the omission of the factor $(v_{(n,i,j)},0)$ in the product. With our conventions, we can abbreviate:
\begin{align*}
\partial_V(\mathfrak{t})(n) = \sum \limits^m_{i=1}\sum_{j=1}^n \left(\prod_{k\neq j}(v_{(n,i,k)},0) \right) (0,v_{(n,i,j)}).
\end{align*}
If $X$ is a basis of $V$, the differential combinator transformation can be described as a map ${\partial_V: \mathbb{F}\llbracket X \rrbracket_+ \to \mathbb{F}\llbracket X \sqcup X \rrbracket_+}$ which maps a reduced power series $\mathfrak{t}(\vec x)$ to the sum of its partial derivatives:
\[ \partial_V(\mathfrak t(\vec x)) = \sum \limits_{x_i \in \vec x} \frac{\partial \mathfrak{t}(\vec x)}{\partial x_i} x^\ast_i \]
where we recall that $x^\ast_i$ denotes the element $x_i$ in the second copy of $X$ in the disjoint union $X \sqcup X$. Note that even if $\mathfrak{t}(\vec x)$ depends on an infinite list of variables, $\partial_V(\mathfrak{t}(\vec x))$ is well-defined as a formal power series.
It is worth insisting on the fact that $\partial$ cannot be induced by a deriving transformation in the sense of Example \ref{ex:diffcat}. Indeed, as a map, $\partial$ does not factor through a map $\mathsf P(V)\to\mathsf P(V)\otimes V$. Note that a power series may have infinitely many nonzero partial derivatives and, since infinite sums and $\otimes$ are generally incompatible, the derivative of a power series cannot in general be described as an element of $\mathsf{P}(V) \otimes V$. Moreover, we already noted the lack of a unit: a differential operator of type $\mathsf{P}(V) \to \mathsf{P}(V) \otimes V$ would not be able to properly differentiate degree 1 monomials without a unit argument to put in the $\mathsf{P}(V)$ component.
\begin{proposition}\label{prop:powpartial} $\partial$ is a differential combinator transformation on $(\mathsf{P}, \mu, \eta)$.
\end{proposition}
\begin{proof}
First, it is straightforward to see that $\partial$ is indeed a natural transformation.
We must show that $\partial$ satisfies the dual of the six axioms \textbf{[dc.1]} to \textbf{[dc.6]} from Definition \ref{def:cdcomonad}. Throughout this proof, it is sufficient to do the calculations on an arbitrary element $\mathfrak t \in \mathsf P(V)$. We can further assume that for all $n\in\mathbb{N}$, $\mathfrak t(n)=v_{(n,1)}\hdots v_{(n,n)}$ (that is, the integer $m$ as above is equal to $1$), and then extend by linearity.
\begin{enumerate}[{\bf [dc.1]}]
\item One has, for all $n>0$:
\begin{align*}
\mathsf P(\pi_0)\circ \partial_V(\mathfrak t)(n)&= \mathsf P(\pi_0)\left(\sum_{j=1}^n \left(\prod_{k\neq j}(v_{(n,k)},0) \right) (0,v_{(n,j)})\right) =\sum_{j=1}^n\left( \prod_{k\neq j}v_{(n,k)}\right)0=0.
\end{align*}
So $\mathsf{P}(\pi_0) \circ \partial_V = 0$.
\item Let $\Delta: V \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} V \times V$ be the diagonal map. One computes that:
\begin{align*}
\mathsf P(V\times \Delta)\circ\partial_V(\mathfrak t)(n)&=\mathsf P(V\times \Delta)\left(\sum_{j=1}^n \left(\prod_{k\neq j}(v_{(n,k)},0) \right) (0,v_{(n,j)})\right),\\
&=\sum_{j=1}^n\left(\prod_{k\neq j}(v_{(n,k)},0,0)\right)(0,v_{(n,j)},v_{(n,j)}),\\
&=\left(\sum_{j=1}^n\left(\prod_{k\neq j}(v_{(n,k)},0,0)\right)(0,v_{(n,j)},0)\right)+\left(\sum_{j=1}^n\left(\prod_{k\neq j}(v_{(n,k)},0,0)\right)(0,0,v_{(n,j)})\right),\\
&=\left(\mathsf P(V\times\iota_0)+\mathsf P(V\times\iota_1)\right)\left(\sum_{j=1}^n \left(\prod_{k\neq j}(v_{(n,k)},0) \right) (0,v_{(n,j)})\right),\\
&=\left(\mathsf P(V\times\iota_0)+\mathsf P(V\times\iota_1)\right)\circ\partial_V(\mathfrak t)(n).
\end{align*}
So $\mathsf{P}(1_V \times \Delta_V) \circ \partial_V = \left(\mathsf{P}(1_V \times \iota_0) + \mathsf{P}(1_V \times \iota_1) \right) \circ \partial_V$.
\item This is a straightforward verification. So $\partial_V \circ \eta_V = \eta_{V \times V} \circ \iota_1$.
\item Let $\mathfrak{s}$ be an element in $\mathsf P(\mathsf P(V))$ such that
\[
\mathfrak{s}(k)=\mathfrak{s}(k)_{1}\dots\mathfrak{s}(k)_{k},
\]
with, for all $j\in\{1,\dots,k\}, n>0$,
\[
\mathfrak{s}(k)_{j}(n)=v_{(k,j,n,1)}\dots v_{(k,j,n,n)},
\]
where $v_{(k,j,n,l)}\in V$ for all $k,j,n,l$. Note that elements of this type span $\mathsf P(\mathsf P(V))$. We can then make the computation on $\mathfrak s$ and extend by linearity.
On one hand, for all $n>0$,
\begin{align*}
\mu_{V}(\mathfrak{s})(n)&=\sum\limits_{k=1}^n \sum \limits_{n_1 + \hdots + n_k=n} \mathfrak{s}(n_1, \hdots, n_k),\\
&=\sum\limits_{k=1}^n \sum \limits_{n_1 + \hdots + n_k=n}\mathfrak s(k)_1(n_1)*\hdots* \mathfrak s(k)_k(n_k),\\
&=\sum\limits_{k=1}^n \sum \limits_{n_1 + \hdots + n_k=n}v_{(k,1,n_1,1)}\hdots v_{(k,1,n_1,n_1)}\hdots v_{(k,k,n_k,1)}\hdots v_{(k,k,n_k,n_k)},
\end{align*}
and so,
\begin{align*}
\partial_V\circ\mu_{V}(\mathfrak{s})(n)=\sum\limits_{k=1}^n \sum \limits_{n_1 + \hdots + n_k=n}\sum_{j=1}^{k}\sum_{l=1}^{n_j}(v_{(k,1,n_1,1)},0)\hdots (v_{(k,1,n_1,n_1)},0)\hdots \widehat{(v_{(k,j,n_j,l)},0)}\hdots (v_{(k,k,n_k,n_k)},0)\ (0,v_{(k,j,n_j,l)})
\end{align*}
On the other hand, for all $k>0$,
\[
\partial_{\mathsf P(V)}(\mathfrak{s})(k)=\sum_{j=1}^k(\mathfrak{s}(k)_1,0)\hdots \widehat{(\mathfrak{s}(k)_j,0)}\hdots (\mathfrak{s}(k)_k,0)\ (0,\mathfrak{s}(k)_j),
\]
So if $[-,-]$ is the pairing operator for the coproduct, we have that:
\[
\mathsf P\left(\left[\mathsf P(\iota_0),\partial_V\right]\right)\circ\partial_{\mathsf P(V)}(\mathfrak{s})(k)=\sum_{j=1}^k\mathsf P(\iota_0)(\mathfrak{s}(k)_{1})\hdots \widehat{\mathsf P(\iota_0)(\mathfrak{s}(k)_{j})}\hdots \mathsf P(\iota_0)(\mathfrak{s}(k)_{k})\ \partial_{V}(\mathfrak{s}(k)_j),
\]
and so, for all $n>0$,
\begin{align*}
& \mu_{V\times V}\circ\mathsf P\left(\left[\mathsf P(\iota_0),\partial_V\right]\right)\circ\partial_{\mathsf P(V)}(\mathfrak{s})(n)=\sum_{k=1}^n\sum_{n_1+\dots+n_k=n}\mathsf P\left(\left[\mathsf P(\iota_0),\partial_V\right]\right)\circ\partial_{\mathsf P(V)}(\mathfrak{s})(n_1,\dots,n_k)\\
&=\sum_{k=1}^n\sum_{n_1+\dots+n_k=n}\sum_{j=1}^k\mathsf P(\iota_0)(\mathfrak{s}(k)_{1})*\hdots* \widehat{\mathsf P(\iota_0)(\mathfrak{s}(k)_{j})}*\hdots* \mathsf P(\iota_0)(\mathfrak{s}(k)_{k})\ *\partial_{V}(\mathfrak{s}(k)_j),\\
&=\sum\limits_{k=1}^n \sum \limits_{n_1 + \hdots + n_k=n}\sum_{j=1}^{k}\sum_{l=1}^{n_j}(v_{(k,1,n_1,1)},0)\hdots(v_{(k,1,n_1,n_1)},0)\hdots \widehat{(v_{(k,j,n_j,l)},0)}\hdots (v_{(k,k,n_k,n_k)},0)\ (0,v_{(k,j,n_j,l)})
\end{align*}
So $\partial_V \circ \mu_V = \mu_{V \times V} \circ \mathsf{P}\left( [\mathsf{P}(\iota_0), \partial_V ] \right) \circ \partial_{\mathsf{P}(V)}$.
\end{enumerate}
In order to prove the last two identities, let us compute $\partial_{V\times V}(\partial_V(\mathfrak t))$. For all $n>0$,
\begin{align*}
&\partial_{V\times V}(\partial_V(\mathfrak t))(n)=\\
&\sum_{j_0\neq j_1\in\{1,\dots,n\}}\left(\prod_{k\neq j_0,j_1}(v_{(n,k)},0,0,0) \right) (0,v_{(n,j_0)},0,0)(0,0,v_{(n,j_1)},0)+\sum_{j=1}^n\left(\prod_{k\neq j}(v_{(n,k)},0,0,0) \right) (0,0,0,v_{(n,j)})
\end{align*}
\begin{enumerate}[{\bf [dc.1]}]
\setcounter{enumi}{4}
\item Using our computation for $\partial_{V\times V}(\partial_V(\mathfrak t))$, we get, for all $n>0$:
\begin{align*}
\mathsf P(\pi_0\times\pi_1)\circ\partial_{V\times V}(\partial_V(\mathfrak t))(n)&=\sum_{j_0\neq j_1\in\{1,\dots,n\}}\left(\prod_{k\neq j_0,j_1}(v_{(n,k)},0) \right) (0,0)(0,0)+\sum_{j=1}^n\left(\prod_{k\neq j}(v_{(n,k)},0) \right) (0,v_{(n,j)})=\partial_V(\mathfrak t)(n)
\end{align*}
so $\mathsf P(\pi_0\times\pi_1)\circ\partial_{V\times V}(\partial_V(\mathfrak t))=\partial_V(\mathfrak t)$.
\item Our computation of $\partial_{V\times V}(\partial_V(\mathfrak t))$ clearly shows that this element is fixed under the action of $\mathsf P(c_V)$. So $\mathsf{P}(c_V) \circ \partial_{V \times V} \circ \partial_V = \partial_{V \times V} \circ \partial_V$.
\end{enumerate}
So we conclude that $\partial$ is a differential combinator transformation. \end{proof}
Thus $(\mathsf{P}, \mu, \eta, \partial)$ is a Cartesian differential monad, and so the opposite of its Kleisli category is a Cartesian differential category which, as we will explain, captures power series differentiation; we summarize this in Corollary \ref{cor:POW} below. This Cartesian differential monad also comes equipped with a $\mathsf{D}$-linear counit. Define $\varepsilon_V: \mathsf{P}(V) \to V$ as simply the projection onto $V$:
\[ \varepsilon_V\left( \mathfrak{t} \right) = \mathfrak{t}(1) \]
From a power series point of view, $\varepsilon$ projects out the degree 1 coefficients of a reduced power series.
\begin{lemma}\label{lem:powepsilon} $\varepsilon$ is a $\mathsf{D}$-linear counit of $(\mathsf{P}, \mu, \eta, \partial)$.
\end{lemma}
\begin{proof} The proof, which is a straightforward verification, is left to the reader as an exercise.
\end{proof}
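Part of this verification can be illustrated concretely. The following sketch (our own illustration, with hypothetical names) encodes the one-variable case $V = \mathbb{F}$, with reduced series again represented as dictionaries from degrees to coefficients: the monad unit $\eta$ embeds a scalar as a degree $1$ monomial, the counit $\varepsilon$ projects out the degree $1$ coefficient, and $\varepsilon \circ \eta = 1$, the identity dual to \textbf{[du.1]}.

```python
# Sketch (our own illustration) of the counit in one variable: a reduced
# series over V = F is a dict {degree: coefficient}.

def eta(v):
    """eta_V(v) = (v, 0, 0, ...): the degree-1 monomial with coefficient v."""
    return {1: v} if v != 0 else {}

def eps(t):
    """eps_V(t) = t(1): the degree-1 coefficient of t."""
    return t.get(1, 0)

assert eps(eta(5)) == 5        # eps . eta = id, dual to [du.1]
assert eps({1: 2, 3: 7}) == 2  # eps discards everything above degree 1
```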
Therefore, the subcategory of $\mathsf{D}$-linear maps of the opposite category of the Kleisli category of $\mathsf{P}$ is isomorphic to the opposite category of $\mathbb{F}\text{-}\mathsf{VEC}$. We summarize these results as follows:
\begin{corollary}\label{cor:POW} $(\mathsf{P}, \mu, \eta, \partial)$ is a Cartesian differential comonad on $\mathbb{F}\text{-}\mathsf{VEC}^{op}$ with $\mathsf{D}$-linear unit $\varepsilon$. Therefore $\mathbb{F}\text{-}\mathsf{VEC}^{op}_\mathsf{P}$ is a Cartesian differential category and $\mathsf{D}\text{-}\mathsf{lin}\left[ \mathbb{F}\text{-}\mathsf{VEC}^{op}_{\mathsf{P}} \right] \cong \mathbb{F}\text{-}\mathsf{VEC}^{op}$.
\end{corollary}
The Cartesian differential category $\mathbb{F}\text{-}\mathsf{VEC}^{op}_\mathsf{P}$ can be interpreted as the category whose objects are $\mathbb{F}$-vector spaces and whose maps are reduced power series between them. As a result, focusing on the finite-dimensional vector spaces, specifically $\mathbb{F}^n$, one obtains a Cartesian differential category of reduced power series in finitely many variables. We describe this category in detail.
\begin{example} \normalfont \label{ex:CDCPOW} Let $\mathbb{F}$ be a field. Define the category $\mathbb{F}\text{-}\mathsf{POW}_{red}$ whose objects are $n \in \mathbb{N}$, where a map ${\mathfrak{P}: n \to m}$ is an $m$-tuple of reduced power series (i.e. power series with no degree $0$ coefficients) in $n$ variables, that is, $\mathfrak{P} = \langle \mathfrak{p}_1(\vec x), \hdots, \mathfrak{p}_m(\vec x) \rangle$ with $\mathfrak{p}_i(\vec x) \in \mathbb{F}\llbracket x_1, \hdots, x_n\rrbracket_+$. The identity maps $1_n: n \to n$ are the tuples $1_n = \langle x_1, \hdots, x_n \rangle$, and composition is given by multivariable power series substitution \cite[Section 4.1]{brewer2014algebraic}. $\mathbb{F}\text{-}\mathsf{POW}_{red}$ is a Cartesian left additive category where the finite product structure is given by $n \times m = n +m$ with projection maps ${\pi_0: n \times m \to n}$ and ${\pi_1: n \times m \to m}$ defined as the tuples $\pi_0 = \langle x_1, \hdots, x_n \rangle$ and $\pi_1 = \langle x_{n+1}, \hdots, x_{n+m} \rangle$, and where the additive structure is defined coordinate-wise via the standard sum of power series. $\mathbb{F}\text{-}\mathsf{POW}_{red}$ is also a Cartesian differential category where the differential combinator is given by the standard differentiation of power series, that is, for a map ${\mathfrak{P}: n \to m}$, with $\mathfrak{P} = \langle \mathfrak{p}_1(\vec x), \hdots, \mathfrak{p}_m(\vec x) \rangle$, its derivative $\mathsf{D}[\mathfrak{P}]: n \times n \to m$ is defined as the tuple of sums of the partial derivatives of the power series $\mathfrak{p}_i(\vec x)$:
\begin{align*}
\mathsf{D}[\mathfrak{P}](\vec x, \vec y) := \left\langle \sum \limits^n_{i=1} \frac{\partial \mathfrak{p}_1(\vec x)}{\partial x_i} y_i, \hdots, \sum \limits^n_{i=1} \frac{\partial \mathfrak{p}_m(\vec x)}{\partial x_i} y_i \right\rangle && \sum \limits^n_{i=1} \frac{\partial \mathfrak{p}_j (\vec x)}{\partial x_i} y_i \in \mathbb{F}\llbracket x_1, \hdots, x_n, y_1, \hdots, y_n \rrbracket_+
\end{align*}
It is important to note that even if $\mathfrak{p}_j(\vec x)$ has terms of degree 1, every partial derivative $\frac{\partial \mathfrak p_j(\vec x)}{\partial x_i} y_i$ will still be reduced (even if $\frac{\partial \mathfrak p_j(\vec x)}{\partial x_i}$ has a degree 0 term), and thus the differential combinator $\mathsf{D}$ is indeed well-defined. A map ${\mathfrak{P}: n \to m}$ is $\mathsf{D}$-linear if it is of the form:
\begin{align*}
\mathfrak{P} = \left \langle \sum \limits^{n}_{i=1} r_{i,1}x_{i}, \hdots, \sum \limits^{n}_{i=1} r_{i,m}x_{i} \right \rangle && r_{i,j} \in \mathbb{F}
\end{align*}
Thus $\mathsf{D}\text{-}\mathsf{lin}[\mathbb{F}\text{-}\mathsf{POW}_{red}]$ is equivalent to $\mathbb{F}\text{-}\mathsf{LIN}$ (as defined in Example \ref{ex:CDCPOLY}). We note that this example generalizes to the category of reduced formal power series over an arbitrary commutative (semi)ring.
\end{example}
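The differential combinator of this example can be illustrated concretely. The following sketch (our own illustration, with hypothetical names; polynomials stand in for truncated power series) computes $\mathsf{D}[\mathfrak{p}](\vec x, \vec y) = \sum_i \frac{\partial \mathfrak p(\vec x)}{\partial x_i} y_i$ for a single coordinate $\mathfrak p$ in $n$ variables, represented as a dictionary from exponent tuples to coefficients; the output lives in $2n$ variables, encoded as exponent tuples of length $2n$ (the $\vec x$ exponents followed by the $\vec y$ exponents).

```python
# Our own illustration of the differential combinator on one reduced
# polynomial p in n variables, represented as {exponent-tuple: coefficient}.

def D(p, n):
    """D[p](x, y) = sum_i (dp/dx_i) * y_i."""
    out = {}
    for exps, c in p.items():
        for i, e in enumerate(exps):
            if e == 0:
                continue
            dx = list(exps)   # differentiate in the i-th variable ...
            dx[i] -= 1
            y = [0] * n       # ... and multiply by y_i
            y[i] = 1
            key = tuple(dx) + tuple(y)
            out[key] = out.get(key, 0) + c * e
    return out

# p(x1, x2) = x1*x2 + x1^3:
p = {(1, 1): 1, (3, 0): 1}
print(D(p, 2))
# {(0, 1, 1, 0): 1, (1, 0, 0, 1): 1, (2, 0, 1, 0): 3},
# i.e. x2*y1 + x1*y2 + 3*x1^2*y1
```

Since each output key has exactly one nonzero $y$-exponent, every monomial of $\mathsf{D}[\mathfrak{p}]$ has degree exactly $1$ in $\vec y$, so the result is reduced even when a partial derivative has a constant term, matching the remark above.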
Observe that we also have the following chain of isomorphisms:
\begin{align*}
\mathbb{F}\text{-}\mathsf{POW}_{red}(n,1) = \mathbb{F}\llbracket x_1, \hdots, x_n\rrbracket_+ \cong \mathsf{P}(\mathbb{F}^n) \cong \mathbb{F}\text{-}\mathsf{VEC} \left(\mathbb{F}, \mathsf{P}(\mathbb{F}^n) \right) = \mathbb{F}\text{-}\mathsf{VEC}_{\mathsf{P}}\left(\mathbb{F}, \mathbb{F}^n \right) = \mathbb{F}\text{-}\mathsf{VEC}^{op}_{\mathsf{P}}\left(\mathbb{F}^n, \mathbb{F} \right)
\end{align*}
which then implies that $\mathbb{F}\text{-}\mathsf{POW}_{red}(n,m) \cong \mathbb{F}\text{-}\mathsf{VEC}^{op}_{\mathsf{P}}\left(\mathbb{F}^n, \mathbb{F}^m \right)$. Thus $\mathbb{F}\text{-}\mathsf{POW}_{red}$ is isomorphic to the full subcategory of $\mathbb{F}\text{-}\mathsf{VEC}^{op}_{\mathsf{P}}$ whose objects are the finite dimensional $\mathbb{F}$-vector spaces. In the finite dimensional case, the differential combinator transformation corresponds precisely to the differential combinator on $\mathbb{F}\text{-}\mathsf{POW}_{red}$:
\[\partial_{\mathbb{F}^n}(\mathfrak{p}(\vec x)) = \mathsf{D}[\mathfrak{p}](\vec x, \vec y)\]
Therefore, $\mathbb{F}\text{-}\mathsf{POW}_{red}$ is a sub-Cartesian differential category of $\mathbb{F}\text{-}\mathsf{VEC}^{op}_{\mathsf{P}}$, where the latter allows for power series over infinitely many variables.
\section{Example: Divided Power Algebras}\label{secpuisdiv}
In this section, we show that the free divided power algebra monad is a Cartesian differential monad, and therefore, we obtain a Cartesian differential category of divided power polynomials \cite[Section 12]{roby68}. Divided power algebras were introduced by Cartan \cite{cartan54} to study the homology of Eilenberg-MacLane spaces with coefficients in a prime field of positive characteristic. Such structures appear notably on the homotopy of simplicial algebras \cite{cartan54, fresse2000}, and in the study of $D$-modules and crystalline cohomology \cite{berthelot74}. The free divided power algebra monad $\mathsf{\Gamma}$ was first introduced by Norbert Roby \cite{roby65} and generalized in the context of operads by Fresse in \cite{fresse2000}. Much as for reduced power series, the composition of divided power polynomials is only well-defined when they are reduced, that is, have no constant term. More generally, the study of divided power algebras has been widely developed in the non-unital setting \cite{fresse2000}. Since the monad we study encodes a structure of non-unital algebras, this provides another example of a Cartesian differential comonad which is not induced by a differential category. We begin by reviewing the definition of a divided power algebra.
\begin{definition}\label{defipuisdiv} Let $\mathbb{F}$ be a field.
\textbf{A divided power algebra} \cite[Expos{\'e} 7, Section 2]{cartan54} over $\mathbb{F}$ is a commutative associative (non-unital) $\mathbb{F}$-algebra $(A,*)$, where $A$ is the underlying $\mathbb{F}$-vector space and $\ast$ is the $\mathbb{F}$-bilinear multiplication, which comes equipped with a divided power structure, that is, a family of functions $(-)^{[n]}: A \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} A$, $a \mapsto a^{[n]}$, indexed by strictly positive integers $n$, such that the following identities hold:
\begin{enumerate}[{\bf [dp.1]}]
\item\label{relComlambda} $(\lambda a)^{[n]}=\lambda^na^{[n]}$ for all $a\in A$ and $\lambda\in\mathbb{F}$.
\item\label{relComrepet} $a^{[m]}*a^{[n]}=\binom{m+n}{m}a^{[m+n]}$ for all $a\in A$.
\item\label{relComsomme} $(a+b)^{[n]}=a^{[n]}+\big(\sum_{l=1}^{n-1}a^{[l]}*b^{[n-l]}\big)+b^{[n]}$ for all $a\in A$, $b\in A$.
\item\label{relComunit} $a^{[1]}=a$ for all $a\in A$.
\item\label{relComcomp1} $(a*b)^{[n]}=n!a^{[n]}*b^{[n]}=a^{*n}*b^{[n]}=a^{[n]}*b^{*n}$ for all $a\in A$, $b\in A$.
\item\label{relComcomp2} $(a^{[n]})^{[m]}=\frac{(mn)!}{m!(n!)^m}a^{[mn]}$ for all $a\in A$.
\end{enumerate}
The function $(-)^{[n]}$ is called the $n$-th divided power operation.
\end{definition}
When the base field $\mathbb{F}$ is of characteristic $0$, the only divided power structure on a commutative associative algebra $(A,*)$ is given by $a^{[n]}=\frac{a^{*n}}{n!}$, which justifies the name ``divided powers''. Therefore, in the characteristic $0$ case, a divided power algebra is simply a commutative associative (non-unital) algebra. However, in positive characteristic, the two notions diverge. Examples of divided power algebras include the homology of Eilenberg-MacLane spaces \cite[Expos{\'e} 7, Section 5 and 8]{cartan54}, the homotopy of simplicial commutative algebras \cite[Expos{\'e} 7, Th{\'e}or{\`e}me 1]{cartan54}, and all Zinbiel algebras (which we review in the next section) \cite[Theorem 3.4]{dokas09}. Furthermore, there exists a notion of free divided power algebras, which we review now.
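As an informal sanity check (not part of the formal development), the characteristic-$0$ formula $a^{[n]}=a^{*n}/n!$ can be verified against axioms {\bf[dp.2]} and {\bf[dp.6]} by tracking only the rational coefficient $1/n!$ in front of $a^{*n}$; a minimal Python sketch:

```python
from fractions import Fraction
from math import comb, factorial

# In characteristic 0 the divided powers are a^[n] = a^(*n)/n!.  We represent
# a^[n] by the rational coefficient 1/n! in front of the power a^(*n).
def dp(n):
    return Fraction(1, factorial(n))

# [dp.2]: a^[m] * a^[n] = C(m+n, m) a^[m+n]
for m in range(1, 7):
    for n in range(1, 7):
        assert dp(m) * dp(n) == comb(m + n, m) * dp(m + n)

# [dp.6]: (a^[n])^[m] = (mn)!/(m!(n!)^m) a^[mn]
for m in range(1, 6):
    for n in range(1, 6):
        lhs = dp(n) ** m * dp(m)  # ((a^(*n)/n!)^(*m))/m!
        rhs = Fraction(factorial(m * n), factorial(m) * factorial(n) ** m) * dp(m * n)
        assert lhs == rhs
```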
Let $\mathbb{F}$ be a field. For an $\mathbb{F}$-vector space $V$, define $\mathsf{\Gamma}_n(V)=(V^{\otimes n})^{S(n)} \subseteq V^{\otimes n}$ as the subspace of tensors of length $n$ of $V$ which are fixed under the action of the symmetric group $S(n)$, that is, invariant under all $n$-permutations $\sigma \in S(n)$. Categorically speaking, $\mathsf{\Gamma}_n(V)$ is the joint equalizer of the $n$-permutations.
Define $\mathsf{\Gamma}(V)$ as follows:
\[\mathsf{\Gamma}(V)=\bigoplus_{n=1}^{\infty}\mathsf{\Gamma}_n(V)=V\oplus \mathsf{\Gamma}_2(V)\oplus \mathsf{\Gamma}_3(V) \oplus \hdots
\]
The vector space $\mathsf{\Gamma}(V)$ is endowed with a divided power algebra structure, and is the free divided power algebra over $V$ \cite[Expos{\'e} 7, Section 2]{cartan54}. We will not review the divided power algebra structure in full here. For the purposes of this section, it is sufficient to know that an arbitrary element of $\mathsf{\Gamma}(V)$ can be expressed as a finite sum of divided power monomials \cite[Expos{\'e} 8, Section 4]{cartan54}, which are elements of the form:
\[ v_1^{[r_1]}*\hdots* v_n^{[r_n]} \]
for $v_1,\hdots,v_n\in V$, where $\ast$ is the multiplication of $\mathsf{\Gamma}(V)$, and $(-)^{[r_j]}$ are the divided power operations. Explicitly, a divided power monomial can be worked out to be:
\[ v_1^{[r_1]}*\hdots* v_n^{[r_n]} = \sum_{\sigma\in S(r_1+\hdots+r_n)/S(r_1,\hdots,r_n)}\sigma (v_1^{\otimes r_1}\otimes\hdots\otimes v_n^{\otimes r_n}),\]
where $S(r_1,\hdots,r_n) = S(r_1)\times\hdots\times S(r_n)$ is the Young subgroup of the symmetric group $S(r_1+\hdots+r_n)$.
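Concretely, the coset sum says that a divided power monomial is the sum of all distinct rearrangements of the word consisting of $r_i$ copies of each $v_i$. A small Python sketch (the name \texttt{dp\_monomial} and the tuple encoding are ours, purely illustrative) makes the number of summands explicit:

```python
from itertools import permutations
from math import factorial

# A divided power monomial v1^[r1] * ... * vn^[rn] expands as the sum of all
# DISTINCT rearrangements of the word with ri copies of vi -- one term per
# coset of the Young subgroup S(r1,...,rn).
def dp_monomial(letters, exponents):
    word = tuple(l for l, r in zip(letters, exponents) for _ in range(r))
    return set(permutations(word))  # distinct rearrangements = coset representatives

# v^[2] * w^[1] = v.v.w + v.w.v + w.v.v
assert dp_monomial(("v", "w"), (2, 1)) == {("v", "v", "w"), ("v", "w", "v"), ("w", "v", "v")}

# The number of summands is the multinomial coefficient (r1+...+rn)!/(r1!...rn!):
assert len(dp_monomial(("v", "w"), (3, 2))) == factorial(5) // (factorial(3) * factorial(2))  # 10
```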
Free divided power algebras induce a monad $\mathsf{\Gamma}$ on $\mathbb{F}\text{-}\mathsf{VEC}$. Note that it is sufficient to define the monad structure maps on divided power monomials and then extend by linearity. Define the endofunctor $\mathsf{\Gamma}: \mathbb{F}\text{-}\mathsf{VEC} \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \mathbb{F}\text{-}\mathsf{VEC}$ which sends an $\mathbb{F}$-vector space $V$ to its free divided power algebra $\mathsf{\Gamma}(V)$, and which sends an $\mathbb{F}$-linear map $f:V\@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} W$ to the $\mathbb{F}$-linear map $\mathsf{\Gamma}(f):\mathsf{\Gamma}(V)\@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \mathsf{\Gamma}(W)$ defined on divided power monomials as follows:
\[\mathsf{\Gamma}(f)(v_1^{[r_1]}*\hdots* v_n^{[r_n]})=(f(v_1))^{[r_1]}*\hdots* (f(v_n))^{[r_n]}\]
which we then extend by linearity. The monad unit $\eta_V:V\@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \mathsf{\Gamma}(V)$ is the injection map of $V$ into $\mathsf{\Gamma}(V)$:
\[\eta_V(v)=v^{[1]}.\]
Note that, with this notation, the zero element of $\Gamma(V)$ will here be denoted by $0^{[1]}$. The monad multiplication $\mu_V:\mathsf{\Gamma}(\mathsf{\Gamma}(V))\@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}}\mathsf{\Gamma}(V)$ is defined as follows on divided power monomials of divided power monomials, using \textbf{[dp.5]} and \textbf{[dp.6]}:
\begin{align*}
&\mu_V\left((v_{1,1}^{[q_{1,1}]}*\hdots*v_{1,k_1}^{[q_{1,k_1}]})^{[r_1]}*\hdots*(v_{p,1}^{[q_{p,1}]}*\hdots*v_{p,k_p}^{[q_{p,k_p}]})^{[r_p]}\right)\\
&=~\left(\prod_{i=1}^p\frac{1}{r_i!}\prod_{j=1}^{k_i}\frac{(r_iq_{i,j})!}{q_{i,j}!^{r_i}}\right)v_{1,1}^{[r_1q_{1,1}]}*\hdots*v_{1,k_1}^{[r_1q_{1,k_1}]}*\hdots*v_{p,k_p}^{[r_pq_{p,k_p}]}
\end{align*}
which we then extend by linearity.
Note that the functor $\mathsf{\Gamma}$, and the monad structure we described, can be constructed from the operad of commutative (non-unital) algebras \cite[Proposition 1.2.3]{fresse2000}. Furthermore, note that the algebras of the monad $\mathsf{\Gamma}$ are precisely the divided power algebras \cite[Section 10, Th{\'e}or{\`e}me 1 and 2]{roby65}.
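For a single nested monomial $(v^{[q]})^{[r]}$ (the case $p=1$, $k_1=1$ of the formula above), the coefficient appearing in $\mu_V$ reduces to $\frac{1}{r!}\frac{(rq)!}{q!^{r}}$, which is exactly the coefficient of {\bf[dp.6]} and is always an integer. A quick, purely illustrative Python check:

```python
from fractions import Fraction
from math import factorial

# Coefficient in mu_V for a single nested monomial (v^[q])^[r]:
# (1/r!) * (rq)!/(q!)^r -- exactly the coefficient of [dp.6].
def mu_coeff(r, q):
    return Fraction(factorial(r * q), factorial(r) * factorial(q) ** r)

for r in range(1, 6):
    for q in range(1, 6):
        # integrality: the coefficient counts partitions of rq elements into
        # r unordered blocks of size q, so mu_V lands in Gamma(V) over any field
        assert mu_coeff(r, q).denominator == 1

print(mu_coeff(2, 3))  # (2*3)!/(2! * (3!)^2) -> prints 10
```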
\begin{lemma}\cite[Proposition 1.2.3]{fresse2000}\label{defGamma}
$(\mathsf{\Gamma},\mu,\eta)$ is a monad.
\end{lemma}
Observe that $\mathsf{\Gamma}$ will not be an algebra modality since $\mathsf{\Gamma}(V)$ is non-unital. Therefore, $\mathsf{\Gamma}$ will provide an example of a Cartesian differential comonad which is not induced from a differential category structure. We now define the differential combinator transformation for $\mathsf{\Gamma}$. Define $\partial_V:\mathsf{\Gamma}(V)\@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}}\mathsf{\Gamma}(V\times V)$ as follows on divided power monomials:
\[ \partial_V(v_1^{[r_1]}*\hdots* v_n^{[r_n]})=\sum_{i=1}^n(v_1,0)^{[r_1]}*\hdots* (v_i,0)^{[r_i-1]} *\hdots* (v_n,0)^{[r_n]} * (0,v_i)^{[1]} \]
which we then extend by linearity. If $r_i=1$, we use the following convention:
\[(v_1,0)^{[r_1]}*\hdots* (v_i,0)^{[r_i-1]} *\hdots* (v_n,0)^{[r_n]} * (0,v_i)^{[1]}\!=\!(v_1,0)^{[r_1]}*\hdots* (v_{i-1},0)^{[r_{i-1}]}* (v_{i+1},0)^{[r_{i+1}]} *\hdots* (v_n,0)^{[r_n]} * (0,v_i)^{[1]}\]
We will see below that $\partial$ corresponds to taking the sum of the partial derivatives of divided power polynomials. Note that a consequence of the lack of a unit in $\mathsf{\Gamma}(V)$ is that $\partial_V$ does not factor through a map $\mathsf{\Gamma}(V) \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \mathsf{\Gamma}(V) \otimes V$ since such a map would be undefined on divided power monomials of degree 1, $v^{[1]}$.
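For instance, on the divided power monomials $v^{[3]}$ and $v_1^{[1]} * v_2^{[2]}$, the definition (together with the above convention for $r_i=1$) gives:
\begin{align*}
\partial_V\left(v^{[3]}\right) &=~ (v,0)^{[2]} * (0,v)^{[1]} \\
\partial_V\left(v_1^{[1]} * v_2^{[2]}\right) &=~ (v_2,0)^{[2]} * (0,v_1)^{[1]} + (v_1,0)^{[1]} * (v_2,0)^{[1]} * (0,v_2)^{[1]}
\end{align*}
where the first summand of the second computation uses the convention, since $r_1=1$.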
\begin{proposition}\label{Gammacomb} $\partial$ is a differential combinator transformation for $(\mathsf{\Gamma},\mu,\eta)$.
\end{proposition}
\begin{proof} Throughout this proof, it is sufficient to do the calculations on divided power monomials and then extend by linearity. First, clearly $\partial$ is a natural transformation. We now need to show that $\partial$ satisfies the dual of the six axioms {\bf[dc.1]} to {\bf[dc.6]} from Definition \ref{def:cdcomonad}.
\begin{enumerate}[{\bf [dc.1]}]
\item Here we use the fact that multiplication by zero gives zero:
\begin{align*}
\mathsf{\Gamma}(\pi_0) \left( \partial_V(v_1^{[r_1]}*\hdots* v_n^{[r_n]}) \right) &=~ \mathsf{\Gamma}(\pi_0) \left( \sum_{i=1}^n (v_1,0)^{[r_1]}*\hdots* (v_i,0)^{[r_i-1]} *\hdots* (v_n,0)^{[r_n]} * (0,v_i)^{[1]} \right) \\
&=~ \sum_{i=1}^n \left(\pi_0(v_1,0) \right)^{[r_1]}*\hdots* \left(\pi_0(v_i,0)\right)^{[r_i-1]} *\hdots* \left(\pi_0(v_n,0)\right)^{[r_n]} * \left(\pi_0(0,v_i)\right)^{[1]} \\
&=~ \sum_{i=1}^n v_1^{[r_1]}*\hdots* v_i^{[r_i-1]} *\hdots* v_n^{[r_n]} * 0^{[1]} \\
&=~ 0^{[1]}
\end{align*}
so $\mathsf{\Gamma}(\pi_0)\circ \partial_V=0$.
\item Here we use the fact that, by \textbf{[dp.3]}, $(v+w)^{[1]} = v^{[1]} + w^{[1]}$, and the multilinearity of the multiplication:
{\footnotesize\begin{align*}
&\mathsf{\Gamma}(1_V \times \Delta_V) \left( \partial_V(v_1^{[r_1]}*\hdots* v_n^{[r_n]}) \right)= \mathsf{\Gamma}(1_V \times \Delta_V) \left( \sum_{i=1}^n (v_1,0)^{[r_1]}*\hdots* (v_i,0)^{[r_i-1]} *\hdots* (v_n,0)^{[r_n]} * (0,v_i)^{[1]} \right) \\
&=~ \sum_{i=1}^n (v_1,\Delta_V(0))^{[r_1]}*\hdots* (v_i,\Delta_V(0))^{[r_i-1]} *\hdots* (v_n,\Delta_V(0))^{[r_n]} * (0,\Delta_V(v_i))^{[1]} \\
&=~ \sum_{i=1}^n (v_1,0,0)^{[r_1]}*\hdots* (v_i,0,0)^{[r_i-1]} *\hdots* (v_n,0,0)^{[r_n]} * (0,v_i,v_i)^{[1]} \\
&=~ \sum_{i=1}^n (v_1,0,0)^{[r_1]}*\hdots* (v_i,0,0)^{[r_i-1]} *\hdots* (v_n,0,0)^{[r_n]} * \left( (0,v_i,0) + (0,0,v_i) \right)^{[1]} \\
&=~ \sum_{i=1}^n (v_1,0,0)^{[r_1]}*\hdots* (v_i,0,0)^{[r_i-1]} *\hdots* (v_n,0,0)^{[r_n]} * (0,v_i,0)^{[1]} \\
&+~ \sum_{i=1}^n (v_1,0,0)^{[r_1]}*\hdots* (v_i,0,0)^{[r_i-1]} *\hdots* (v_n,0,0)^{[r_n]} * (0,0,v_i)^{[1]} \\
&=~ \sum_{i=1}^n (v_1,\iota_0(0))^{[r_1]}*\hdots* (v_i,\iota_0(0))^{[r_i-1]} *\hdots* (v_n,\iota_0(0))^{[r_n]} * (0,\iota_0(v_i))^{[1]} \\
&+~ \sum_{i=1}^n (v_1,\iota_1(0))^{[r_1]}*\hdots* (v_i,\iota_1(0))^{[r_i-1]} *\hdots* (v_n,\iota_1(0))^{[r_n]} * (0,\iota_1(v_i))^{[1]} \\
&=~ \sum_{i=1}^n \mathsf{\Gamma}(1_V \times \iota_0)\left( (v_1,0)^{[r_1]}*\hdots* (v_i,0)^{[r_i-1]} *\hdots* (v_n,0)^{[r_n]} * (0,v_i)^{[1]} \right) \\
&+~ \sum_{i=1}^n \mathsf{\Gamma}(1_V \times \iota_1)\left( (v_1,0)^{[r_1]}*\hdots* (v_i,0)^{[r_i-1]} *\hdots* (v_n,0)^{[r_n]} * (0,v_i)^{[1]} \right) \\
&=~ \mathsf{\Gamma}(1_V\times\iota_0)\left( \partial_V(v_1^{[r_1]}*\hdots* v_n^{[r_n]}) \right) + \mathsf{\Gamma}(1_V\times\iota_1)\left( \partial_V(v_1^{[r_1]}*\hdots* v_n^{[r_n]}) \right) \\
&=~ \left(\mathsf{\Gamma}(1_V\times\iota_0)+\mathsf{\Gamma}(1_V\times\iota_1)\right) \left( \partial_V(v_1^{[r_1]}*\hdots* v_n^{[r_n]}) \right)
\end{align*}}%
So $\mathsf{\Gamma}(1_V\times\Delta_V)\circ \partial_V =\left(\mathsf{\Gamma}(1_V \times\iota_0)+\mathsf{\Gamma}(1_V \times\iota_1)\right)\circ\partial_V$.
\item This is straightforward. So $\partial_V\circ\eta_V=\eta_{V\times V}\circ\iota_1$.
\item Carefully using the divided power structure axiom {\bf [dp.2]} when expanding out divided power monomials of divided power monomials, we compute:
{\scriptsize \begin{align*}
& \mu_{V \times V}\left( \mathsf{\Gamma}\left( \left[\mathsf{\Gamma}(\iota_0), \partial_V\right] \right) \left( \partial_{\mathsf{\Gamma}(V)}\left((v_{1,1}^{[q_{1,1}]}*\hdots*v_{1,k_1}^{[q_{1,k_1}]})^{[r_1]}*\hdots*(v_{p,1}^{[q_{p,1}]}*\hdots*v_{p,k_p}^{[q_{p,k_p}]})^{[r_p]}\right) \right) \right) \\
&=~ \mu_{V \times V} \left( \mathsf{\Gamma} \left[\mathsf{\Gamma}(\iota_0), \partial_V\right] \left( \sum^{p}_{i=1} \left( (v_{1,1}^{[q_{1,1}]}*\hdots*v_{1,k_1}^{[q_{1,k_1}]}), 0 \right)^{[r_1]}*\hdots*\left( (v_{i,1}^{[q_{i,1}]}*\hdots*v_{i,k_{i}}^{[q_{i,k_{i}}]}), 0 \right)^{[r_{i} - 1]}* \hdots \right. \right. \\
&~\left. \left. *\left((v_{p,1}^{[q_{p,1}]}*\hdots*v_{p,k_p}^{[q_{p,k_p}]}), 0 \right)^{[r_p]} * \left(0, (v_{i,1}^{[q_{i,1}]}*\hdots*v_{i,k_{i}}^{[q_{i,k_{i}}]}) \right)^{[1]} \right) \right) \\
&=~ \mu_{V \times V} \left( \sum^{p}_{i=1} \left( \iota_0(v_{1,1})^{[q_{1,1}]}*\hdots*\iota_0(v_{1,k_1})^{[q_{1,k_1}]} \right)^{[r_1]}*\hdots* \left( \iota_0(v_{i,1})^{[q_{i,1}]}*\hdots*\iota_0(v_{i,k_{i}})^{[q_{i,k_{i}}]} \right)^{[r_{i} - 1]}* \hdots \right. \\
&~ \left. *\left( \iota_0(v_{p,1})^{[q_{p,1}]}*\hdots*\iota_0(v_{p,k_p})^{[q_{p,k_p}]} \right)^{[r_p]} * \left( \partial_V\left( v_{i,1}^{[q_{i,1}]}*\hdots*v_{i,k_{i}}^{[q_{i,k_{i}}]} \right)\right)^{[1]} \right) \\
&=~ \mu_{V \times V} \left( \sum^{p}_{i=1} \left( (v_{1,1},0)^{[q_{1,1}]}*\hdots*(v_{1,k_1},0)^{[q_{1,k_1}]} \right)^{[r_1]}*\hdots* \left( (v_{i,1},0)^{[q_{i,1}]}*\hdots*(v_{i,k_{i}},0)^{[q_{i,k_{i}}]} \right)^{[r_{i} - 1]}* \hdots \right. \\
&~ \left. *\left( (v_{p,1},0)^{[q_{p,1}]}*\hdots*(v_{p,k_p},0)^{[q_{p,k_p}]} \right)^{[r_p]} * \right.\\
&~ \left. \left( \sum^{k_{i}}_{j=1} (v_{i,1},0)^{[q_{i,1}]}*\hdots *(v_{i,j},0)^{[q_{i,j} -1]} * \hdots*(v_{i,k_{i}},0)^{[q_{i,k_{i}}]} * (0, v_{i,j})^{[1]} \right)^{[1]} \right) \\
&=~ \mu_{V \times V} \left( \sum^{p}_{i=1} \sum^{k_{i}}_{j=1} \left( (v_{1,1},0)^{[q_{1,1}]}*\hdots*(v_{1,k_1},0)^{[q_{1,k_1}]} \right)^{[r_1]}*\hdots* \left( (v_{i,1},0)^{[q_{i,1}]}*\hdots*(v_{i,k_{i}},0)^{[q_{i,k_{i}}]} \right)^{[r_{i} - 1]}* \hdots \right. \\
&~ \left. *\left( (v_{p,1},0)^{[q_{p,1}]}*\hdots*(v_{p,k_p},0)^{[q_{p,k_p}]} \right)^{[r_p]} * \left( (v_{i,1},0)^{[q_{i,1}]}*\hdots *(v_{i,j},0)^{[q_{i,j} -1]} * \hdots*(v_{i,k_{i}},0)^{[q_{i,k_{i}}]} * (0, v_{i,j})^{[1]} \right)^{[1]} \right) \\
&=~ \sum^{p}_{i=1} \sum^{k_{i}}_{j=1} \left(\prod_{i_0=1}^p\frac{1}{r_{i_0}!}\prod_{j_0=1}^{k_{i_0}}\frac{(r_{i_0}q_{i_0,j_0})!}{q_{i_0,j_0}!^{r_{i_0}}}\right) (v_{1,1},0)^{[r_1q_{1,1}]}*\hdots*(v_{i,j},0)^{[r_{i}q_{i,j} - 1]}*\hdots*(v_{p,k_p},0)^{[r_pq_{p,k_p}]} * (0, v_{i,j})^{[1]} \\
&=~ \partial_V\left( \left(\prod_{i_0=1}^p\frac{1}{r_{i_0}!}\prod_{j_0=1}^{k_{i_0}}\frac{(r_{i_0}q_{i_0,j_0})!}{q_{i_0,j_0}!^{r_{i_0}}}\right) v_{1,1}^{[r_1q_{1,1}]}*\hdots*v_{1,k_1}^{[r_1q_{1,k_1}]}*\hdots*v_{p,k_p}^{[r_pq_{p,k_p}]} \right) \\
&=~ \partial_V\left( \mu_V\left((v_{1,1}^{[q_{1,1}]}*\hdots*v_{1,k_1}^{[q_{1,k_1}]})^{[r_1]}*\hdots*(v_{p,1}^{[q_{p,1}]}*\hdots*v_{p,k_p}^{[q_{p,k_p}]})^{[r_p]}\right) \right)
\end{align*}}%
So $\partial_V\circ\mu_V=\mu_{V \times V}\circ\mathsf{\Gamma}\left[\mathsf{\Gamma}(\iota_0), \partial_V\right]\circ \partial_{\mathsf{\Gamma}(V)}$.
\end{enumerate}
For the last two axioms, it will be useful to first compute $\partial_{V \times V} \circ \partial_V$ separately. So we compute:
{\footnotesize \begin{align*}
& \partial_{V \times V} \left( \partial_V(v_1^{[r_1]}*\hdots* v_n^{[r_n]}) \right) =~ \partial_{V \times V} \left( \sum_{i=1}^n (v_1,0)^{[r_1]}*\hdots* (v_i,0)^{[r_i-1]} *\hdots* (v_n,0)^{[r_n]} * (0,v_i)^{[1]} \right) \\
&=~ \sum_{i=1}^n \sum_{j=1}^n (v_1,0,0,0)^{[r_1]}*\hdots* (v_j,0,0,0)^{[r_j-1]} *\hdots* (v_i,0,0,0)^{[r_i-1]} *\hdots* (v_n,0,0,0)^{[r_n]} * (0,v_i,0,0)^{[1]} * (0,0,v_j,0)^{[1]} \\
&+~ \sum_{i=1}^n (v_1,0,0,0)^{[r_1]}*\hdots* (v_i,0,0,0)^{[r_i-1]} *\hdots* (v_n,0,0,0)^{[r_n]} * (0,0,0,v_i)^{[1]}
\end{align*}}%
Thus we have that:
\begin{align*}
\partial_{V \times V} \left( \partial_V(v_1^{[r_1]}*\hdots* v_n^{[r_n]}) \right) &=~ \sum_{i=1}^n \sum_{j=1}^n \left( \prod \limits^n_{k=1} (v_k, 0, 0, 0)^{[r_k - \delta_{k,i} - \delta_{k,j}]}\right) * (0,v_i,0,0)^{[1]} * (0,0,v_j,0)^{[1]} \\&+ \sum_{i=1}^n (v_1,0,0,0)^{[r_1]}*\hdots* (v_i,0,0,0)^{[r_i-1]} *\hdots* (v_n,0,0,0)^{[r_n]} * (0,0,0,v_i)^{[1]}
\end{align*}
where $\delta_{x,y} = 0$ if $x \neq y$ and $\delta_{x,y} = 1$ if $x = y$, and we used $\prod$ to denote a $*$-product of a family of elements. Note that there is a slight abuse of notation here in the case where $r_i=1$ and $i=j$, since the exponent $r_i-\delta_{i,i}-\delta_{i,j}=-1$ then appears. However, when $r_i=1$, the term $(v_i, 0,0,0)^{[r_i-1]}$ vanishes, and so this is not a problem.
\begin{enumerate}[{\bf [dc.1]}]
\setcounter{enumi}{4}
\item Here we use the fact that the first part of $\partial_{V \times V} \circ \partial_V$ vanishes under $\mathsf{\Gamma}(\pi_0\times\pi_1)$ (since we are multiplying by zero), while the second part becomes $\partial_V$ under $\mathsf{\Gamma}(\pi_0\times\pi_1)$:
\begin{align*}
&\mathsf{\Gamma}(\pi_0\times\pi_1) \left( \partial_{V \times V} \left( \partial_V(v_1^{[r_1]}*\hdots* v_n^{[r_n]}) \right)\right) \\
&=~ \mathsf{\Gamma}(\pi_0\times\pi_1) \left( \sum_{i=1}^n \sum_{j=1}^n \left( \prod \limits^n_{k=1} (v_k, 0, 0, 0)^{[r_k - \delta_{k,i} - \delta_{k,j}]}\right) * (0,v_i,0,0)^{[1]} * (0,0,v_j,0)^{[1]}\right) \\
&+~\mathsf{\Gamma}(\pi_0\times\pi_1) \left( \sum_{i=1}^n (v_1,0,0,0)^{[r_1]}*\hdots* (v_i,0,0,0)^{[r_i-1]} *\hdots* (v_n,0,0,0)^{[r_n]} * (0,0,0,v_i)^{[1]} \right) \\
&=~ \sum_{i=1}^n \sum_{j=1}^n \left( \prod \limits^n_{k=1} \left( \pi_0(v_k, 0), \pi_1(0, 0)\right)^{[r_k - \delta_{k,i} - \delta_{k,j}]}\right) * \left( \pi_0(0,v_i), \pi_1(0,0)\right)^{[1]} * \left( \pi_0(0,0), \pi_1(v_j,0)\right)^{[1]} \\
&+~ \sum_{i=1}^n \left( \pi_0(v_1,0), \pi_1(0,0)\right)^{[r_1]}*\hdots* \left( \pi_0(v_i,0), \pi_1(0,0)\right)^{[r_i-1]} *\\
&~~~~~~\hdots* \left( \pi_0(v_n,0), \pi_1(0,0)\right)^{[r_n]} * \left( \pi_0(0,0), \pi_1(0,v_i)\right)^{[1]} \\
&=~ \sum_{i=1}^n \sum_{j=1}^n \left( \prod \limits^n_{k=1} \left( v_k,0 \right)^{[r_k - \delta_{k,i} - \delta_{k,j}]}\right) * \left( 0,0\right)^{[1]} * \left( 0,0\right)^{[1]} \\
&+~ \sum_{i=1}^n \left( v_1,0\right)^{[r_1]}*\hdots* \left( v_i,0\right)^{[r_i-1]} *\hdots* \left( v_n,0\right)^{[r_n]} * \left( 0,v_i\right)^{[1]} \\
&=~ 0^{[1]} + \partial_V(v_1^{[r_1]}*\hdots* v_n^{[r_n]})\\
&=~ \partial_V(v_1^{[r_1]}*\hdots* v_n^{[r_n]})
\end{align*}
So $\mathsf{\Gamma}(\pi_0\times\pi_1)\circ \partial_{V\times V}\circ \partial_V=\partial_V$.
\item Here we use commutativity of the multiplication, which will essentially swap the order of the $i$ and $j$ index in the first part of $\partial_{V \times V} \circ \partial_V$, while the second part is unchanged:
\begin{align*}
&\mathsf{\Gamma}(c_V) \left( \partial_{V \times V} \left( \partial_V(v_1^{[r_1]}*\hdots* v_n^{[r_n]}) \right)\right) \\
&=~ \mathsf{\Gamma}(c_V) \left( \sum_{i=1}^n \sum_{j=1}^n \left( \prod \limits^n_{k=1} (v_k, 0, 0, 0)^{[r_k - \delta_{k,i} - \delta_{k,j}]}\right) * (0,v_i,0,0)^{[1]} * (0,0,v_j,0)^{[1]}\right) \\
&+~\mathsf{\Gamma}(c_V) \left( \sum_{i=1}^n (v_1,0,0,0)^{[r_1]}*\hdots* (v_i,0,0,0)^{[r_i-1]} *\hdots* (v_n,0,0,0)^{[r_n]} * (0,0,0,v_i)^{[1]} \right) \\
&=~ \sum_{i=1}^n \sum_{j=1}^n \left( \prod \limits^n_{k=1} \left( c_V(v_k, 0, 0, 0)\right)^{[r_k - \delta_{k,i} - \delta_{k,j}]}\right) * \left( c_V(0,v_i,0,0)\right)^{[1]} * \left( c_V(0,0,v_j,0)\right)^{[1]} \\
&+~ \sum_{i=1}^n \left( c_V(v_1,0,0,0)\right)^{[r_1]}*\hdots* \left( c_V(v_i,0,0,0)\right)^{[r_i-1]} *\hdots* \left( c_V(v_n,0,0,0)\right)^{[r_n]} * \left( c_V(0,0,0,v_i)\right)^{[1]} \\
&=~ \sum_{i=1}^n \sum_{j=1}^n \left( \prod \limits^n_{k=1} (v_k, 0, 0, 0)^{[r_k - \delta_{k,i} - \delta_{k,j}]}\right) * (0,0,v_i,0)^{[1]} * (0,v_j,0,0)^{[1]}
\\&+ \sum_{i=1}^n (v_1,0,0,0)^{[r_1]}*\hdots* (v_i,0,0,0)^{[r_i-1]} *\hdots* (v_n,0,0,0)^{[r_n]} * (0,0,0,v_i)^{[1]} \\
&=~ \partial_{V \times V} \left( \partial_V(v_1^{[r_1]}*\hdots* v_n^{[r_n]}) \right)
\end{align*}
So $\mathsf{\Gamma}(c_V)\circ \partial_{V\times V}\circ \partial_V=\partial_{V\times V}\circ \partial_V$.
\end{enumerate}
So we conclude that $\partial$ is a differential combinator transformation. \end{proof}
Thus, $(\mathsf{\Gamma},\mu,\eta,\partial)$ is a Cartesian differential monad and so the opposite of its Kleisli category is a Cartesian differential category (which we summarize in Corollary \ref{cor:DIV} below) which, as we will explain below, captures differentiation of divided power polynomials. The Cartesian differential monad $(\mathsf{\Gamma},\mu,\eta,\partial)$ also comes equipped with a $\mathsf D$-linear counit. Define $\varepsilon_V:\mathsf{\Gamma}(V)\@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} V$ as follows on divided power monomials:
\[\varepsilon_V(v_1^{[r_1]}*\hdots* v_n^{[r_n]})=\begin{cases}
v_1,&\mbox{if }n=1, r_1=1,\\
0&\mbox{otherwise.}
\end{cases}\]
which we extend by linearity. Thus $\varepsilon_V$ picks out the divided power monomials of degree 1, $v^{[1]}$ for all $v \in V$, while mapping the rest to zero.
\begin{lemma}
$\varepsilon$ is a $\mathsf D$-linear counit of $(\mathsf{\Gamma},\mu,\eta,\partial)$.
\end{lemma}
\begin{proof} The proof is straightforward, and so we leave it as an exercise for the reader.
\end{proof}
Thus the subcategory of $\mathsf{D}$-linear maps of the opposite category of the Kleisli category of $\mathsf{\Gamma}$ is isomorphic to the opposite category of $\mathbb{F}\text{-}\mathsf{VEC}$. Summarizing, we obtain the following statement:
\begin{corollary}\label{cor:DIV}$(\mathsf{\Gamma}, \mu, \eta, \partial)$ is a Cartesian differential comonad on $\mathbb{F}\text{-}\mathsf{VEC}^{op}$ with $\mathsf{D}$-linear unit $\varepsilon$. Therefore $\mathbb{F}\text{-}\mathsf{VEC}^{op}_\mathsf{\Gamma}$ is a Cartesian differential category and $\mathsf{D}\text{-}\mathsf{lin}\left[ \mathbb{F}\text{-}\mathsf{VEC}^{op}_{\mathsf{\Gamma}} \right] \cong \mathbb{F}\text{-}\mathsf{VEC}^{op}$.
\end{corollary}
The Kleisli category $\mathbb{F}\text{-}\mathsf{VEC}_{\mathsf{\Gamma}}$ is closely related to the notion of (reduced) divided power polynomials. For a set $X$, we denote by $\mathbb F\lceil X\rceil$ the ring of reduced divided power polynomials over the set $X$, which is by definition the free divided power algebra over the $\mathbb{F}$-vector space with basis $X$ \cite[Section 12]{roby68}. In other words, a reduced divided power polynomial with variables in $X$ is an $\mathbb{F}$-linear combination of commutative monomials of the type $x_1^{[k_1]}\hdots x_n^{[k_n]}$ where $x_1,\hdots,x_n$ is a tuple of $n$ different elements of $X$ and $k_1,\hdots,k_n$ are strictly positive integers. By reduced, we mean that these polynomials do not have degree 0 terms. Multiplication is given by concatenation, multilinearity, and the relation {\bf[dp.2]} of Definition \ref{defipuisdiv}. Composition of divided power polynomials can be deduced from the relations {\bf[dp.1]}, {\bf[dp.3]}, {\bf[dp.5]} and {\bf[dp.6]} of Definition \ref{defipuisdiv}. For example, if $p(x)=x^{[n]}$, and $q(y,z)=y^{[m]}z$, then:
\[p(q(y,z))=(y^{[m]}z)^{[n]}=n!(y^{[m]})^{[n]}z^{[n]}=\frac{(mn)!}{(m!)^n}y^{[mn]}z^{[n]}.\]
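A characteristic-$0$ sanity check of this computation (illustrative only): writing $a^{[k]}=a^{k}/k!$, the coefficient of $y^{[mn]}z^{[n]}$ in $p(q(y,z))$ can be recomputed in Python:

```python
from fractions import Fraction
from math import factorial

# p(x) = x^[n] and q(y,z) = y^[m] z: with a^[k] = a^k/k!, the coefficient of
# y^[mn] z^[n] in p(q) should come out to (mn)!/(m!)^n.
def compose_coeff(m, n):
    plain = Fraction(1, factorial(m)) ** n * Fraction(1, factorial(n))  # coeff of y^(mn) z^n
    # rewrite in divided power monomials: y^(mn) z^n = (mn)! n! y^[mn] z^[n]
    return plain * factorial(m * n) * factorial(n)

for m in range(1, 5):
    for n in range(1, 5):
        assert compose_coeff(m, n) == Fraction(factorial(m * n), factorial(m) ** n)
```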
We can define a notion of formal partial derivation for divided polynomials. For $x\in X$, we define a linear map $\frac{d}{dx}:\mathbb F\lceil X\rceil\@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} \mathbb F\lceil X\rceil\oplus \mathbb{F}$ on monomials (which we then extend by linearity).
For every monomial $m=x_1^{[k_1]}\hdots x_n^{[k_n]}$, we set: $\frac{d}{dx}(m)=0$ if $x\neq x_i$ for all $i\in\{1,\hdots,n\}$; $\frac{d}{dx}(m)=x_1^{[k_1]}\hdots x_{j-1}^{[k_{j-1}]}x_{j}^{[k_{j}-1]}x_{j+1}^{[k_{j+1}]}\hdots x_n^{[k_n]}$ if $x=x_j$ and $k_j>1$; $\frac{d}{dx}(m)=x_1^{[k_1]}\hdots x_{j-1}^{[k_{j-1}]}x_{j+1}^{[k_{j+1}]}\hdots x_n^{[k_n]}$ if $x=x_j$, $k_j=1$, and $n>1$; and finally, $\frac{d}{dx}(x)=1_{\mathbb{F}}$, where $1_{\mathbb{F}}\in\mathbb{F}$ is the generator of the second term of the direct sum $\mathbb F\lceil X\rceil\oplus \mathbb{F}$ given by the unit of $\mathbb{F}$. We note that, in the case where $X$ is a singleton, these definitions correspond to the notion of derivation for formal divided power series, also called Hurwitz series, as defined by Keigher and Pritchard in \cite{keigher2000}. We can restrict to the finite dimensional case and obtain a sub-Cartesian differential category of $\mathbb{F}\text{-}\mathsf{VEC}^{op}_{\mathsf{\Gamma}}$ which is isomorphic to the Lawvere theory of reduced divided power polynomials.
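This case analysis is straightforward to mechanize. Here is a minimal Python sketch (our own encoding, purely illustrative: a monomial is a dictionary from variables to exponents, and an element of $\mathbb F\lceil X\rceil\oplus\mathbb F$ is a pair):

```python
# Formal partial derivative d/dx on divided power monomials: a monomial
# x1^[k1]...xn^[kn] is a dict {variable: exponent}, and an element of
# F<X> (+) F is a pair (monomial_part, scalar_part), with monomial_part = None
# standing for 0.
def d(monomial, x):
    if x not in monomial:
        return (None, 0)            # x does not occur: derivative is 0
    m = dict(monomial)
    if m == {x: 1}:
        return (None, 1)            # d/dx(x) = 1_F, in the second summand
    if m[x] == 1:
        del m[x]                    # drop the factor x^[1]
    else:
        m[x] -= 1                   # x^[k] -> x^[k-1]; note: no factor of k
    return (m, 0)

assert d({"x": 3}, "x") == ({"x": 2}, 0)          # d/dx(x^[3]) = x^[2]
assert d({"x": 1, "y": 2}, "x") == ({"y": 2}, 0)  # d/dx(x^[1] y^[2]) = y^[2]
assert d({"x": 1}, "x") == (None, 1)              # d/dx(x) = 1_F
assert d({"y": 2}, "x") == (None, 0)
```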
\begin{example} \normalfont \label{ex:CDCdiv} Let $\mathbb{F}$ be a field. Define the category $\mathbb{F}\text{-}\mathsf{DPOLY}$ whose objects are $n \in \mathbb{N}$, and where a map ${P: n \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} m}$ is an $m$-tuple of reduced divided power polynomials in $n$ variables, that is, $P=\langle p_1(\vec x),\hdots,p_m(\vec x)\rangle$ with $p_i(\vec x)\in \mathbb F\lceil x_1,\hdots,x_n\rceil$.
The identity maps $1_n: n \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} n$ are the tuples $1_n = \langle x_1^{[1]}, \hdots, x_n^{[1]} \rangle$ and composition is given by divided power polynomial substitution as explained above. $\mathbb{F}\text{-}\mathsf{DPOLY}$ is a Cartesian left additive category where the finite product structure is given by $n \times m = n +m$ with projection maps ${\pi_0: n \times m \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} n}$ and ${\pi_1: n \times m \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} m}$ defined as the tuples $\pi_0 = \langle x_1^{[1]}, \hdots, x_n^{[1]} \rangle$ and $\pi_1 = \langle x_{n+1}^{[1]}, \hdots, x_{n+m}^{[1]} \rangle$, and where the additive structure is defined coordinate-wise via the standard sum of divided power polynomials. $\mathbb{F}\text{-}\mathsf{DPOLY}$ is also a Cartesian differential category where for a map ${P: n \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} m}$, with $P = \langle p_1(\vec x), \hdots, p_m(\vec x) \rangle$, its derivative $\mathsf{D}[P]: n \times n \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} m$ is defined as the tuple of the sum of the partial derivatives of the divided power polynomials $p_i(\vec x)$:
\begin{align*}
\mathsf{D}[P](\vec x, \vec y) := \left( \sum \limits^n_{i=1} \frac{dp_1(\vec x)}{d x_i} y_i^{[1]}, \hdots, \sum \limits^n_{i=1} \frac{dp_m(\vec x)}{d x_i} y_i^{[1]} \right) \\
\sum \limits^n_{i=1} \frac{dp_j(\vec x)}{d x_i} y_i^{[1]} \in \mathbb F\lceil x_1,\hdots,x_n,y_1,\hdots,y_n\rceil
\end{align*}
It is important to note that even if $p_j(\vec x)$ has terms of degree 1, every partial derivative $\frac{dp_j(\vec x)}{d x_i} y_i^{[1]}$ will still be reduced (even if $\frac{dp_j(\vec x)}{d x_i}$ may have a degree 0 term), and thus, the differential combinator $\mathsf{D}$ is indeed well-defined. A map ${P: n \@ifnextchar^ {\mathfrak t@@}{\mathfrak t@@^{}} m}$ is $\mathsf{D}$-linear if it is of the form:
\begin{align*}
P = \left \langle \sum \limits^{n}_{i=1} \lambda_{1,i}x_{i}^{[1]}, \hdots, \sum \limits^{n}_{i=1} \lambda_{m,i}x_{i}^{[1]} \right \rangle && \lambda_{j,i} \in \mathbb{F}
\end{align*}
Thus, $\mathsf{D}\text{-}\mathsf{lin}[\mathbb{F}\text{-}\mathsf{DPOLY}]$ is equivalent to $\mathbb{F}\text{-}\mathsf{LIN}$ (as defined in Example \ref{ex:CDCPOLY}).
\end{example}
We have the following chain of isomorphisms:
\begin{align*}
\mathbb{F}\text{-}\mathsf{DPOLY}(n,1) = \mathbb F\lceil x_1,\hdots,x_n\rceil \cong \mathsf{\Gamma}(\mathbb{F}^n) \cong \mathbb{F}\text{-}\mathsf{VEC} \left(\mathbb{F}, \mathsf{\Gamma}(\mathbb{F}^n) \right) = \mathbb{F}\text{-}\mathsf{VEC}_{\mathsf{\Gamma}}\left(\mathbb{F}, \mathbb{F}^n \right) = \mathbb{F}\text{-}\mathsf{VEC}^{op}_{\mathsf{\Gamma}}\left(\mathbb{F}^n, \mathbb{F} \right)
\end{align*}
which then implies that $\mathbb{F}\text{-}\mathsf{DPOLY}(n,m) \cong \mathbb{F}\text{-}\mathsf{VEC}^{op}_{\mathsf{\Gamma}}\left(\mathbb{F}^n, \mathbb{F}^m \right)$. Therefore, $\mathbb{F}\text{-}\mathsf{DPOLY}$ is indeed isomorphic to the full subcategory of $\mathbb{F}\text{-}\mathsf{VEC}^{op}_{\mathsf{\Gamma}}$ whose objects are the finite dimensional $\mathbb{F}$-vector spaces. In the finite dimensional case, the differential combinator transformation corresponds precisely to the differential combinator on $\mathbb{F}\text{-}\mathsf{DPOLY}$:
\[\partial_{\mathbb{F}^n}(p(\vec x)) = \mathsf{D}[p](\vec x, \vec y)\]
Thus, $\mathbb{F}\text{-}\mathsf{DPOLY}$ is a sub-Cartesian differential category of $\mathbb{F}\text{-}\mathsf{VEC}^{op}_{\mathsf{\Gamma}}$, where the latter allows for divided power polynomials over infinitely many variables (though each will still only depend on a finite number of them).
\section{Example: Zinbiel Algebras}\label{sec:ZAex}
In this section, we show that the free Zinbiel algebra monad is a Cartesian differential monad, and therefore we construct a Cartesian differential category based on non-commutative polynomials equipped with the half-shuffle product. Zinbiel algebras were introduced by Loday in \cite{loday95}, as Koszul dual to the classical notion of Leibniz algebra. Zinbiel algebras were further studied by Dokas \cite{dokas09}, who showed that they are closely related to divided power algebras. The free Zinbiel algebra is given by the non-unital shuffle algebra. Therefore, this example corresponds to differentiating non-commutative polynomials with a type of polynomial composition defined using the Zinbiel product. Due to the unusual nature of this composition, the differential combinator transformation is very different from those of the previous examples. Nevertheless, this is yet another interesting Cartesian differential comonad which does not arise from a differential category. Furthermore, it is worth mentioning that while the (unital) shuffle algebra has been previously studied in a generalization of differential categories in \cite{bagnol2016shuffle}, the Zinbiel algebra perspective was not considered. In future work, it would be interesting to study the link between these two notions.
\begin{definition} Let $\mathbb{F}$ be a field. A \textbf{Zinbiel algebra} \cite[Definition 1.2]{loday95} over $\mathbb{F}$, also called a \textbf{dual Leibniz algebra}, is an $\mathbb{F}$-vector space $A$ equipped with a bilinear operation $<$ satisfying:
\[((a<b)<c)=(a<(b<c))+(a<(c<b)).\]
for all $a,b,c\in A$.
\end{definition}
It is important to insist on the fact that the bilinear product $<$ is not assumed to be associative, commutative, or have a distinguished unit element. That said, it is interesting to point out that any Zinbiel algebra is equipped with an associative and commutative bilinear product $*$ defined as $a*b=a<b+b<a$. Thus, a Zinbiel algebra is also a non-unital commutative, associative algebra \cite[Proposition 1.5]{loday95}. The underlying vector space of free Zinbiel algebras is the same as for free non-unital tensor algebras. Readers familiar with the latter will note that the tensor algebra is non-commutative when the multiplication is given by concatenation. However, there is another possible multiplication which is commutative, called the shuffle product. The tensor algebra equipped with the shuffle product is called the shuffle algebra. Furthermore, it turns out that the shuffle product is the commutative associative multiplication $*$ one obtains from the free Zinbiel algebra. Thus, the free Zinbiel algebra and the shuffle algebra are the same object. For the purposes of this paper, we only need to work with the Zinbiel product $<$.
Let $\mathbb F$ be a field. For an $\mathbb F$-vector space $V$, define $\mathsf{Zin}(V)$ as follows:
\[\mathsf{Zin}(V)=\bigoplus_{n=1}^{\infty}V^{\otimes n}=V\oplus (V \otimes V) \oplus (V \otimes V \otimes V) \oplus \hdots
\]
$\mathsf{Zin}(V)$ is the free Zinbiel algebra over $V$ \cite[Proposition 1.8]{loday95} with Zinbiel product $<$ defined on pure tensors by:
\[(v_1\otimes\hdots\otimes v_n) < (w_1\otimes\hdots\otimes w_m)=\sum_{\sigma\in S(n-1+m)/S(n-1)\times S(m)}v_1\otimes \sigma\cdot(v_2\otimes \hdots\otimes v_n\otimes w_1\otimes \hdots\otimes w_m)\]
which we then extend by linearity. Thus, $\mathsf{Zin}(V)$ is spanned by words of elements of $V$. Free Zinbiel algebras induce a monad $\mathsf{Zin}$ on $\mathbb{F}\text{-}\mathsf{VEC}$.
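Since $\mathsf{Zin}(V)$ is spanned by words, the half-shuffle product and the defining Zinbiel identity can be checked mechanically on small examples. The following sketch is our own illustrative encoding (words as tuples of letters, linear combinations as Counter objects with integer coefficients) and is not part of the formal development; it implements $<$ and verifies the identity on a sample.

```python
from collections import Counter
from itertools import combinations

def shuffles(a, b):
    """Yield all order-preserving interleavings of the words a and b."""
    n, m = len(a), len(b)
    for pos in combinations(range(n + m), n):
        pos, ai, bi = set(pos), iter(a), iter(b)
        yield tuple(next(ai) if i in pos else next(bi) for i in range(n + m))

def zin(p, q):
    """Half-shuffle p < q on linear combinations of (non-empty) words.

    The first letter of the left factor stays first; its remaining
    letters are shuffled with the right factor.
    """
    out = Counter()
    for v, cv in p.items():
        for w, cw in q.items():
            for s in shuffles(v[1:], w):
                out[(v[0],) + s] += cv * cw
    return out

# Verify ((a < b) < c) == (a < (b < c)) + (a < (c < b)) on sample words.
a, b, c = Counter({('x',): 1}), Counter({('y', 'z'): 1}), Counter({('w',): 1})
lhs = zin(zin(a, b), c)
rhs = zin(a, zin(b, c)) + zin(a, zin(c, b))
assert lhs == rhs
```

Counter addition plays the role of the sum in $\mathsf{Zin}(V)$, so the final assertion is exactly the Zinbiel identity evaluated on these words.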
Similar to previous examples, note that it is sufficient to define the monad structure maps on pure tensors and then extend by linearity. Define the endofunctor $\mathsf{Zin}: \mathbb{F}\text{-}\mathsf{VEC} \to \mathbb{F}\text{-}\mathsf{VEC}$ which sends an $\mathbb{F}$-vector space $V$ to its free Zinbiel algebra $\mathsf{Zin}(V)$, and which sends an $\mathbb{F}$-linear map $f:V\to W$ to the $\mathbb{F}$-linear map ${\mathsf{Zin}(f):\mathsf{Zin}(V)\to \mathsf{Zin}(W)}$ defined on pure tensors as follows:
\[\mathsf{Zin}(f)\left( v_1 \otimes \hdots \otimes v_n \right) = f(v_1) \otimes \hdots \otimes f(v_n) \]
which we then extend by linearity. Note that $\mathsf{Zin}(f)$ is a Zinbiel algebra morphism, so we have that:
\[\mathsf{Zin}(f)(\mathfrak{v} < \mathfrak{w}) = \mathsf{Zin}(f)(\mathfrak{v}) < \mathsf{Zin}(f)(\mathfrak{w})\]
for all $\mathfrak{v}, \mathfrak{w} \in \mathsf{Zin}(V)$. The monad unit $\eta_V:V\to \mathsf{Zin}(V)$ is the injection of $V$ into $\mathsf{Zin}(V)$,
\[\eta_V(v)=v,\]
and the monad multiplication $\mu_V:\mathsf{Zin}\mathsf{Zin}(V) \to \mathsf{Zin}(V)$ is defined on pure tensors by taking their Zinbiel product starting from the right. That is, $\mu_V$ is defined on a pure tensor $\mathfrak{v}_1 \otimes \hdots \otimes \mathfrak{v}_n \in \mathsf{Zin}\mathsf{Zin}(V)$, where ${\mathfrak{v}_1, \hdots, \mathfrak{v}_n \in \mathsf{Zin}(V)}$, by:
\begin{align*}
\mu_V\left( \mathfrak{v}_1 \otimes \hdots \otimes \mathfrak{v}_n \right)= \mathfrak{v}_1 < \left( \hdots \left( \mathfrak{v}_{n-1} < \mathfrak{v}_n \right) \hdots \right)
\end{align*}
which we then extend by linearity. Unsurprisingly, the algebras of the monad $\mathsf{Zin}$ are precisely the Zinbiel algebras.
\begin{lemma}\cite[Proposition 1.8]{loday95}\label{proploday}
$(\mathsf{Zin},\mu,\eta)$ is a monad.
\end{lemma}
It is worth noting the link between divided power algebras and Zinbiel algebras. Any Zinbiel algebra is endowed with a divided power algebra structure \cite[Theorem 3.4]{dokas09}, and this results in an inclusion of the free divided power algebra into the free Zinbiel algebra, $\mathsf{\Gamma}(V)\to \mathsf{Zin}(V)$ \cite[Section 5]{dokas09}. As such, this inclusion can be extended to a monic monad morphism $\mathsf{\Gamma} \Rightarrow \mathsf{Zin}$. Similar to the other examples, due to a lack of unit, $\mathsf{Zin}$ will not be an algebra modality and therefore this will result in another example of a Cartesian differential comonad which does not come from a differential category.
We can now define the differential combinator transformation for $\mathsf{Zin}$. Define $\partial_V: \mathsf{Zin}(V) \to \mathsf{Zin}(V \times V)$ on pure tensors as follows:
\[ \partial_V(v_1\otimes v_2 \otimes\hdots\otimes v_n)=(0,v_1)\otimes(v_2,0)\otimes\hdots\otimes(v_n,0) \]
which we then extend by linearity. Note that this differential combinator transformation is quite different in appearance from the other examples. Below, we will explain how this differential combinator transformation corresponds to a sum of partial derivatives for non-commutative polynomials with the multiplication given by the Zinbiel product. Before proving that $\partial$ is indeed a differential combinator transformation, we prove the following useful identity:
\begin{lemma}\label{lemmaZin} For all $\mathfrak{v}_1,\mathfrak{v}_2\in\mathsf{Zin}(V)$, the following equality holds:
\[
\partial_V(\mathfrak{v}_1<\mathfrak{v}_2)=\partial_V(\mathfrak{v}_1)<\mathsf{Zin}(\iota_0)(\mathfrak{v}_2).
\]
\end{lemma}
\begin{proof} It suffices to prove this identity on pure tensors. Assume that $\mathfrak{v}_1=v_0\otimes\hdots\otimes v_n$ and $\mathfrak{v}_2=w_1\otimes\hdots\otimes w_m$. Then:
\begin{align*}
&\partial_V \left( (v_0\otimes\hdots\otimes v_n) < (w_1\otimes\hdots\otimes w_m) \right)=~\partial_V\left(v_0\otimes\left(\sum_{\sigma\in S(n+m)/S(n)\times S(m)}\sigma(v_1\otimes\hdots\otimes v_n\otimes w_1\otimes\hdots\otimes w_m)\right)\right)\\
&=~\sum_{\sigma\in S(n+m)/S(n)\times S(m)}(0,v_0)\otimes\sigma\left((v_1,0)\otimes\hdots\otimes(v_n,0)\otimes(w_1,0)\otimes\hdots\otimes(w_m,0)\right)\\
&=~ \left((0,v_0)\otimes (v_1, 0) \otimes\hdots\otimes (v_n,0) \right) < \left( (w_1,0) \otimes\hdots\otimes (w_m,0) \right) \\
&=~ \partial_V(v_0\otimes\hdots\otimes v_n) < \left( \iota_0(w_1) \otimes\hdots\otimes \iota_0(w_m) \right) \\
&=~ \partial_V(v_0\otimes\hdots\otimes v_n) < \mathsf{Zin}(\iota_0)(w_1\otimes\hdots\otimes w_m)
\end{align*}
Thus the desired identity holds.
\end{proof}
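Lemma \ref{lemmaZin} can also be spot-checked computationally. In the sketch below (our own illustrative encoding, not part of the formal development: a letter of $V \times V$ is a tagged pair, with tag L for the image of $\iota_0$ and tag R for the second component), $\partial_V$ tags the first letter of each word R and the remaining letters L, and the identity is verified on sample words.

```python
from collections import Counter
from itertools import combinations

def shuffles(a, b):
    """Yield all order-preserving interleavings of the words a and b."""
    n, m = len(a), len(b)
    for pos in combinations(range(n + m), n):
        pos, ai, bi = set(pos), iter(a), iter(b)
        yield tuple(next(ai) if i in pos else next(bi) for i in range(n + m))

def zin(p, q):
    """Half-shuffle p < q on linear combinations of words (Counters)."""
    out = Counter()
    for v, cv in p.items():
        for w, cw in q.items():
            for s in shuffles(v[1:], w):
                out[(v[0],) + s] += cv * cw
    return out

def d(p):
    """partial_V: v1 x ... x vn  ->  (0,v1) x (v2,0) x ... x (vn,0)."""
    out = Counter()
    for w, c in p.items():
        out[(('R', w[0]),) + tuple(('L', x) for x in w[1:])] += c
    return out

def inc0(p):
    """Zin(iota_0): embed every letter v as (v, 0), i.e. tag it L."""
    out = Counter()
    for w, c in p.items():
        out[tuple(('L', x) for x in w)] += c
    return out

v1 = Counter({('x', 'y'): 1})
v2 = Counter({('u', 'w'): 1})
assert d(zin(v1, v2)) == zin(d(v1), inc0(v2))
```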
\begin{proposition}\label{difcombzin}
$\partial$ is a differential combinator transformation for $(\mathsf{Zin},\mu,\eta)$.
\end{proposition}
\begin{proof} It suffices to prove the necessary identities on pure tensors. First, $\partial$ is clearly a natural transformation. We will now show that $\partial$ satisfies the dual of the six axioms {\bf[dc.1]} to {\bf[dc.6]} from Definition \ref{def:cdcomonad}.
\begin{enumerate}[{\bf [dc.1]}]
\item Here we use the fact that tensoring with zero gives zero:
\begin{align*}
\mathsf{Zin}(\pi_0)\left( \partial_V(v_1 \otimes \hdots \otimes v_n) \right) &=~ \mathsf{Zin}(\pi_0)((0,v_1) \otimes (v_2,0) \otimes \hdots \otimes (v_n,0)) \\
&=~ \pi_0(0,v_1) \otimes \pi_0(v_2,0) \otimes \hdots \otimes \pi_0(v_n,0) \\
&=~ 0 \otimes v_2 \otimes \hdots \otimes v_n \\
&=~ 0
\end{align*}
so $\mathsf{Zin}(\pi_0)\circ\partial_V=0$.
\item Here we use the multilinearity of the tensor product:
\begin{align*}
\mathsf{Zin}(1_V \times \Delta_V)\left( \partial_V(v_1 \otimes \hdots \otimes v_n) \right) &=~ \mathsf{Zin}(1_V \times \Delta_V)((0,v_1) \otimes (v_2,0) \otimes \hdots \otimes (v_n,0)) \\
&=~ (0,\Delta_V(v_1)) \otimes (v_2,\Delta_V(0)) \otimes \hdots \otimes (v_n,\Delta_V(0)) \\
&=~ (0,v_1,v_1) \otimes (v_2,0,0) \otimes \hdots \otimes (v_n,0,0) \\
&=~ \left( (0,v_1,0) + (0,0,v_1) \right) \otimes (v_2,0,0) \otimes \hdots \otimes (v_n,0,0) \\
&=~ (0,v_1,0) \otimes (v_2,0,0) \otimes \hdots \otimes (v_n,0,0) \\
&+~ (0,0,v_1) \otimes (v_2,0,0) \otimes \hdots \otimes (v_n,0,0) \\
&=~ (0,\iota_0(v_1)) \otimes (v_2,\iota_0(0)) \otimes \hdots \otimes (v_n,\iota_0(0)) \\
&+~ (0,\iota_1(v_1)) \otimes (v_2,\iota_1(0)) \otimes \hdots \otimes (v_n,\iota_1(0)) \\
&=~ \mathsf{Zin}(1_V \times \iota_0)((0,v_1) \otimes (v_2,0) \otimes \hdots \otimes (v_n,0)) \\
&+~ \mathsf{Zin}(1_V \times \iota_1)((0,v_1) \otimes (v_2,0) \otimes \hdots \otimes (v_n,0)) \\
&=~ \mathsf{Zin}(1_V \times \iota_0)\left( \partial_V(v_1 \otimes \hdots \otimes v_n) \right) \\
&+~ \mathsf{Zin}(1_V \times \iota_1)\left( \partial_V(v_1 \otimes \hdots \otimes v_n) \right) \\
&=~ \left( \mathsf{Zin}(1_V \times \iota_0) + \mathsf{Zin}(1_V \times \iota_1) \right)\left( \partial_V(v_1 \otimes \hdots \otimes v_n) \right)
\end{align*}
So $\mathsf{Zin}(1_V\times\Delta_V)\circ\partial_V=(\mathsf{Zin}(1_V\times\iota_0)+\mathsf{Zin}(1_V\times\iota_1))\circ\partial_V$.
\item This is a straightforward verification. So $\partial_V\circ\eta_V=\eta_{V\times V}\circ\iota_1$.
\item Let $\mathfrak{v}_1\otimes\hdots\otimes \mathfrak{v}_k$ be a pure tensor in $\mathsf{Zin}(\mathsf{Zin}(V))$, with $\mathfrak{v}_1,\hdots,\mathfrak{v}_k\in\mathsf{Zin}(V)$. Using Lemma \ref{lemmaZin}, and the fact that $\mathsf{Zin}(\iota_0)$ is a Zinbiel algebra morphism, we compute:
\begin{align*}
\partial_V\left( \mu_V(\mathfrak{v}_1\otimes\hdots\otimes\mathfrak{v}_k) \right)&=~\partial_V\left(\mathfrak{v}_1<\left(\hdots\left(\mathfrak{v}_{k-1}<\mathfrak{v}_k\right)\hdots\right)\right)\\
&=~\partial_V(\mathfrak{v}_1)<\mathsf{Zin}(\iota_0)\left(\mathfrak{v}_2<\left(\hdots\left(\mathfrak{v}_{k-1}<\mathfrak{v}_k\right)\hdots\right)\right)\\
&=~ \mu_{V \times V}\left( \partial_V(\mathfrak{v}_1) \otimes \mathsf{Zin}(\iota_0)(\mathfrak{v}_2) \otimes \hdots \otimes \mathsf{Zin}(\iota_0)(\mathfrak{v}_k) \right) \\
&=~ \mu_{V \times V}\left( \left[\mathsf{Zin}(\iota_0), \partial_V\right](0,\mathfrak{v}_1) \otimes \left[\mathsf{Zin}(\iota_0), \partial_V\right](\mathfrak{v}_2,0) \otimes \hdots \otimes \left[\mathsf{Zin}(\iota_0), \partial_V\right](\mathfrak{v}_k,0) \right) \\
&=~ \mu_{V \times V}\left( \mathsf{Zin}\left( \left[\mathsf{Zin}(\iota_0), \partial_V\right] \right) \left( (0,\mathfrak{v}_1) \otimes (\mathfrak{v}_2,0) \otimes \hdots \otimes (\mathfrak{v}_k,0)
\right) \right) \\
&=~ \mu_{V \times V}\left( \mathsf{Zin}\left( \left[\mathsf{Zin}(\iota_0), \partial_V\right] \right) \left( \partial_{\mathsf{Zin}(V)} (\mathfrak{v}_1 \otimes \mathfrak{v}_2 \otimes \hdots \otimes \mathfrak{v}_k)
\right) \right)
\end{align*}
So $\partial_V \circ \mu_V = \mu_{V \times V} \circ \mathsf{Zin}\left( \left[\mathsf{Zin}(\iota_0), \partial_V\right] \right)\circ \partial_{\mathsf{Zin}(V)}$.
\item This is straightforward:
\begin{align*}
&\mathsf{Zin}(\pi_0 \times \pi_1)\left( \partial_{V \times V} \left( \partial_V(v_1 \otimes \hdots \otimes v_n) \right) \right)=~ \mathsf{Zin}(\pi_0 \times \pi_1)\left( \partial_{V \times V}((0,v_1) \otimes (v_2,0) \otimes \hdots \otimes (v_n,0)) \right) \\
&=~ \mathsf{Zin}(\pi_0 \times \pi_1)\left( (0,0,0,v_1) \otimes (v_2,0,0,0) \otimes \hdots \otimes (v_n,0,0,0) \right) \\
&=~ (\pi_0(0,0),\pi_1(0,v_1)) \otimes (\pi_0(v_2,0),\pi_1(0,0)) \otimes \hdots \otimes (\pi_0(v_n,0),\pi_1(0,0)) \\
&=~ (0,v_1) \otimes (v_2,0) \otimes \hdots \otimes (v_n,0) \\
&=~ \partial_V(v_1 \otimes \hdots \otimes v_n)
\end{align*}
So $\mathsf{Zin}(\pi_0\times\pi_1)\circ\partial_{V\times V}\circ\partial_V=\partial_V$.
\item This is again straightforward:
\begin{align*}
\mathsf{Zin}(c_V)\left( \partial_{V \times V} \left( \partial_V(v_1 \otimes \hdots \otimes v_n) \right) \right) &=~ \mathsf{Zin}(c_V)\left( \partial_{V \times V}((0,v_1) \otimes (v_2,0) \otimes \hdots \otimes (v_n,0)) \right) \\
&=~ \mathsf{Zin}(c_V)\left( (0,0,0,v_1) \otimes (v_2,0,0,0) \otimes \hdots \otimes (v_n,0,0,0) \right) \\
&=~ c_V(0,0,0,v_1) \otimes c_V(v_2,0,0,0) \otimes \hdots \otimes c_V(v_n,0,0,0) \\
&=~ (0,0,0,v_1) \otimes (v_2,0,0,0) \otimes \hdots \otimes (v_n,0,0,0) \\
&=~ \partial_{V \times V}((0,v_1) \otimes (v_2,0) \otimes \hdots \otimes (v_n,0)) \\
&=~ \partial_{V \times V} \left( \partial_V(v_1 \otimes \hdots \otimes v_n) \right)
\end{align*}
So $\mathsf{Zin}(c_V)\circ\partial_{V\times V}\circ\partial_V=\partial_{V\times V}\circ\partial_V$.
\end{enumerate}
So we conclude that $\partial$ is a differential combinator transformation.
\end{proof}
Thus, $(\mathsf{Zin},\mu,\eta,\partial)$ is a Cartesian differential monad, and so the opposite of its Kleisli category is a Cartesian differential category (which we summarize in Corollary \ref{cor:ZIN} below). As we will explain below, this Cartesian differential category corresponds to differentiating non-commutative polynomials. The Cartesian differential monad $(\mathsf{Zin},\mu,\eta,\partial)$ also comes equipped with a $\mathsf D$-linear counit. Define $\varepsilon_V:\mathsf{Zin}(V)\to V$ on pure tensors as follows:
\[\varepsilon_V(v_1\otimes\hdots\otimes v_n)=\begin{cases}
v_1&\mbox{if }n=1,\\
0&\mbox{otherwise.}
\end{cases}\]
which we extend by linearity. In other words, $\varepsilon_V$ projects out the $V$ component of $\mathsf{Zin}(V)$.
\begin{lemma}
$\varepsilon$ is a $\mathsf D$-linear counit of $(\mathsf{Zin},\mu,\eta,\partial)$.
\end{lemma}
\begin{proof}The proof is straightforward, and so we leave this as an exercise for the reader.
\end{proof}
Thus the subcategory of $\mathsf{D}$-linear maps of the opposite category of the Kleisli category of $\mathsf{Zin}$ is isomorphic to the opposite category of $\mathbb{F}\text{-}\mathsf{VEC}$. Summarizing, we obtain the following statement:
\begin{corollary}\label{cor:ZIN} $(\mathsf{Zin}, \mu, \eta, \partial)$ is a Cartesian differential comonad on $\mathbb{F}\text{-}\mathsf{VEC}^{op}$ with $\mathsf{D}$-linear unit $\varepsilon$. Therefore, $\mathbb{F}\text{-}\mathsf{VEC}^{op}_\mathsf{Zin}$ is a Cartesian differential category and $\mathsf{D}\text{-}\mathsf{lin}\left[ \mathbb{F}\text{-}\mathsf{VEC}^{op}_{\mathsf{Zin}} \right] \cong \mathbb{F}\text{-}\mathsf{VEC}^{op}$.
\end{corollary}
The Kleisli category $\mathbb{F}\text{-}\mathsf{VEC}_{\mathsf{Zin}}$ is closely related to non-commutative polynomials. For a set $X$, let $\mathbb F\langle X\rangle$ denote the set of non-commutative polynomials and $\mathbb F\langle X\rangle_+$ be the set of reduced non-commutative polynomials, that is, those without any constant terms. As a vector space, $\mathbb F\langle X\rangle_+$ is isomorphic to the underlying vector space of the free Zinbiel algebra over the free vector space generated by $X$. To distinguish between commutative and non-commutative polynomials, we will write the latter using the tensor product $\otimes$. For example, $xy=yx$ as commutative polynomials, while $x \otimes y\neq y\otimes x$ as non-commutative polynomials. Composition in the Kleisli category corresponds to using the Zinbiel product $<$ to define a new kind of substitution of non-commutative polynomials. For example, if $p(x,y)=x\otimes y$ and $q(u,v)=u\otimes v\otimes u$, then
\[q(p(x,y),v)=(x\otimes y)<(v<(x\otimes y))=(x\otimes y)<(v\otimes x\otimes y)=(x\otimes y\otimes v\otimes x\otimes y)+(x\otimes v\otimes y\otimes x\otimes y)+2(x\otimes v\otimes x\otimes y\otimes y).\]
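The coefficient $2$ in this expansion is easy to drop by hand. As a cross-check, the following sketch (our own illustrative encoding of words as tuples, with Counter coefficients; not part of the formal development) recomputes $q(p(x,y),v)$ by iterating the half-shuffle.

```python
from collections import Counter
from itertools import combinations

def shuffles(a, b):
    """Yield all order-preserving interleavings of the words a and b."""
    n, m = len(a), len(b)
    for pos in combinations(range(n + m), n):
        pos, ai, bi = set(pos), iter(a), iter(b)
        yield tuple(next(ai) if i in pos else next(bi) for i in range(n + m))

def zin(p, q):
    """Half-shuffle p < q on linear combinations of words (Counters)."""
    out = Counter()
    for v, cv in p.items():
        for w, cw in q.items():
            for s in shuffles(v[1:], w):
                out[(v[0],) + s] += cv * cw
    return out

p = Counter({('x', 'y'): 1})   # p(x, y) = x (x) y
vv = Counter({('v',): 1})
inner = zin(vv, p)             # v < (x (x) y) = v (x) x (x) y
result = zin(p, inner)         # (x (x) y) < (v (x) x (x) y)
expected = Counter({('x', 'y', 'v', 'x', 'y'): 1,
                    ('x', 'v', 'y', 'x', 'y'): 1,
                    ('x', 'v', 'x', 'y', 'y'): 2})
assert result == expected
```

The three terms and the multiplicity $2$ match the displayed expansion.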
We will use the term Zinbiel polynomials to refer to reduced non-commutative polynomials with the Zinbiel product and the Zinbiel substitution. We are now in a position to define partial derivatives on non-commutative polynomials. For $x \in X$, define $\frac{d}{dx}:\mathbb F\langle X\rangle\to \mathbb F\langle X\rangle$ as follows on Zinbiel monomials (which we then extend by linearity):
\[\frac{d(x_1\otimes x_2\otimes\hdots\otimes x_n)}{dx}=\begin{cases}x_2\otimes\hdots\otimes x_n,&\mbox{if }x_1=x,\\
0,&\mbox{otherwise}.
\end{cases}
\]
and with the convention that $\frac{d(x)}{dx} =1$.
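On the tuple encoding of Zinbiel monomials this partial derivative is a one-line operation; the sketch below (our own illustrative encoding, with the empty word standing in for the constant $1$ as per the convention above) illustrates it.

```python
from collections import Counter

def ddx(p, x):
    """d/dx on linear combinations of Zinbiel monomials: strip a leading x."""
    out = Counter()
    for w, c in p.items():
        if w and w[0] == x:
            out[w[1:]] += c   # the empty word () plays the role of 1
    return out

p = Counter({('x', 'y', 'x'): 1, ('y', 'x'): 2, ('x',): 3})
assert ddx(p, 'x') == Counter({('y', 'x'): 1, (): 3})  # d(x)/dx = 1 convention
assert ddx(p, 'y') == Counter({('x',): 2})
```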
We can restrict to the finite-dimensional case and obtain a sub-Cartesian differential category of $\mathbb{F}\text{-}\mathsf{VEC}^{op}_{\mathsf{Zin}}$ which is isomorphic to the Lawvere theory of Zinbiel polynomials, and where the differential combinator is defined using their partial derivatives.
\begin{example} \normalfont \label{ex:CDCzin} Let $\mathbb{F}$ be a field. Define the category $\mathbb{F}\text{-}\mathsf{ZIN}$ whose objects are natural numbers $n \in \mathbb{N}$, and where a map ${P: n \to m}$ is an $m$-tuple of reduced non-commutative polynomials in $n$ variables, that is, $P = \langle {p}_1(\vec x), \hdots, {p}_m(\vec x) \rangle$ with ${p}_i(\vec x) \in \mathbb{F}\langle x_1, \hdots, x_n\rangle_+$. The identity maps $1_n: n \to n$ are the tuples $1_n = \langle x_1, \hdots, x_n \rangle$, and composition is given by Zinbiel substitution, as defined above. $\mathbb{F}\text{-}\mathsf{ZIN}$ is a Cartesian left additive category where the finite product structure is given by $n \times m = n +m$ with projection maps ${\pi_0: n \times m \to n}$ and ${\pi_1: n \times m \to m}$ defined as the tuples $\pi_0 = \langle x_1, \hdots, x_n \rangle$ and $\pi_1 = \langle x_{n+1}, \hdots, x_{n+m} \rangle$, and where the additive structure is defined coordinate-wise via the standard sum of non-commutative polynomials. $\mathbb{F}\text{-}\mathsf{ZIN}$ is also a Cartesian differential category where the differential combinator is given by the differentiation of Zinbiel polynomials given above, that is, for a map ${P: n \to m}$, with $P = \langle p_1(\vec x), \hdots, p_m(\vec x) \rangle$, its derivative $\mathsf{D}[P]: n \times n \to m$ is defined as the tuple of the sums of the partial derivatives of the Zinbiel polynomials $p_i(\vec x)$:
\begin{align*}
\mathsf{D}[P](\vec x, \vec y) := \left( \sum \limits^n_{i=1} y_i\otimes\frac{d{p}_1(\vec x)}{d x_i}, \hdots,\sum \limits^n_{i=1} y_i\otimes\frac{d {p}_m(\vec x)}{d x_i} \right) && \sum \limits^n_{i=1} y_i\otimes\frac{d {p}_j (\vec x)}{d x_i} \in \mathbb{F}\langle x_1, \hdots, x_n, y_1, \hdots, y_n \rangle_+
\end{align*}
It is important to note that even if $p_i(\vec x)$ has terms of degree 1, every partial derivative $y_i\otimes\frac{d p_j(\vec x)}{d x_i} $ will still be reduced. Indeed, the polynomial $y_i\otimes 1\in \mathbb{F}\langle x_1, \hdots, x_n, y_1, \hdots, y_n \rangle$ is identified with the reduced polynomial $y_i\in \mathbb{F}\langle x_1, \hdots, x_n, y_1, \hdots, y_n \rangle_+$, and so, for example, $y_i\otimes\frac{d(x)}{dx}=y_i$.
Thus, the differential combinator $\mathsf{D}$ is indeed well-defined. A map ${P: n \to m}$ is $\mathsf{D}$-linear if it is of the form:
\begin{align*}
P = \left \langle \sum \limits^{n}_{i=1} r_{i,1}x_{i}, \hdots, \sum \limits^{n}_{i=1} r_{i,m}x_{i} \right \rangle && r_{i,j} \in \mathbb{F}
\end{align*}
Thus $\mathsf{D}\text{-}\mathsf{lin}[\mathbb{F}\text{-}\mathsf{ZIN}]$ is equivalent to $\mathbb{F}\text{-}\mathsf{LIN}$ (as defined in Example \ref{ex:CDCPOLY}). We note that this example generalizes to the category of Zinbiel polynomials over an arbitrary commutative (semi)ring.
\end{example}
We also have the following chain of isomorphisms:
\begin{align*}
\mathbb{F}\text{-}\mathsf{ZIN}(n,1) = \mathbb{F}\langle x_1, \hdots, x_n \rangle_+ \cong \mathsf{Zin}(\mathbb{F}^n) \cong \mathbb{F}\text{-}\mathsf{VEC} \left(\mathbb{F}, \mathsf{Zin}(\mathbb{F}^n) \right) = \mathbb{F}\text{-}\mathsf{VEC}_{\mathsf{Zin}}\left(\mathbb{F}, \mathbb{F}^n \right) = \mathbb{F}\text{-}\mathsf{VEC}^{op}_{\mathsf{Zin}}\left(\mathbb{F}^n, \mathbb{F} \right)
\end{align*}
which then implies that $\mathbb{F}\text{-}\mathsf{ZIN}(n,m) \cong \mathbb{F}\text{-}\mathsf{VEC}^{op}_{\mathsf{Zin}}\left(\mathbb{F}^n, \mathbb{F}^m \right)$. Therefore, $\mathbb{F}\text{-}\mathsf{ZIN}$ is isomorphic to the full subcategory of $\mathbb{F}\text{-}\mathsf{VEC}^{op}_{\mathsf{Zin}}$ whose objects are the finite-dimensional $\mathbb{F}$-vector spaces. In the finite-dimensional case, the differential combinator transformation corresponds precisely to the differential combinator on $\mathbb{F}\text{-}\mathsf{ZIN}$:
\[\partial_{\mathbb{F}^n}(p(\vec x)) = \mathsf{D}[p](\vec x, \vec y)\]
Thus, $\mathbb{F}\text{-}\mathsf{ZIN}$ is a sub-Cartesian differential category of $\mathbb{F}\text{-}\mathsf{VEC}^{op}_{\mathsf{Zin}}$, where the latter allows for Zinbiel polynomials in infinitely many variables (though each still only depends on a finite number of them).
We conclude this section by noting that the inclusion $\mathsf{\Gamma} \Rightarrow \mathsf{Zin}$ is not compatible with the differential combinators. For instance, let $V$ be the vector space spanned by $x$ and $y$, and let $\partial^{\mathsf{\Gamma}}$ and $\partial^{\mathsf{Zin}}$ denote the differential combinator transformations for the respective monads. Let $p(x,y)=x^{[1]}*y^{[1]}\in \mathsf{\Gamma}(V)$. On one hand, the injection ${\mathsf{\Gamma}(V)\to \mathsf{Zin}(V)}$ identifies $p(x,y)$ with the non-commutative polynomial $p(x,y)=x\otimes y+y\otimes x$, and so $\partial^{\mathsf{Zin}}_V(p)(x,y,x^*,y^*)=x^*\otimes y+y^*\otimes x$. On the other hand, $\partial^{\mathsf{\Gamma}}_V(p)(x,y,x^*,y^*)=(x^*)^{[1]}*y^{[1]}+(y^*)^{[1]}*x^{[1]}$, which the injection $\mathsf{\Gamma}(V\times V)\to \mathsf{Zin}(V\times V)$ identifies with the non-commutative polynomial $\partial^{\mathsf{\Gamma}}_V(p)(x,y,x^*,y^*)=x^*\otimes y+y\otimes x^*+y^*\otimes x+x\otimes y^*$.
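This incompatibility is easy to exhibit concretely. In the sketch below (our own illustrative encoding, writing the starred variables $x^*,y^*$ as uppercase letters), $\partial^{\mathsf{Zin}}$ applied to the image of $p$ produces two terms, whereas the image of $\partial^{\mathsf{\Gamma}}(p)$ has four, so the two sides indeed disagree.

```python
from collections import Counter

def dzin(p):
    """partial^Zin on words: star (uppercase) the first letter of each word."""
    out = Counter()
    for w, c in p.items():
        out[(w[0].upper(),) + w[1:]] += c
    return out

p = Counter({('x', 'y'): 1, ('y', 'x'): 1})          # image of x^[1] * y^[1]
zin_side = dzin(p)                                    # x* (x) y  +  y* (x) x
gamma_side = Counter({('X', 'y'): 1, ('y', 'X'): 1,   # image of partial^Gamma(p)
                      ('Y', 'x'): 1, ('x', 'Y'): 1})
assert zin_side == Counter({('X', 'y'): 1, ('Y', 'x'): 1})
assert zin_side != gamma_side
```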
\section{Future Work}\label{conclusion}
Beyond finding and constructing new interesting examples of Cartesian differential comonads, and therefore also new examples of Cartesian differential categories, there are many other interesting possibilities for future work with Cartesian differential comonads. We conclude this paper by listing some of these potential ideas:
\begin{enumerate}[{\em (i)}]
\item In \cite{garner2020cartesian}, it was shown that every Cartesian differential category embeds into the coKleisli category of a differential (storage) category \cite[Theorem 8.7]{garner2020cartesian}. In principle, this already implies that every Cartesian differential category embeds into the coKleisli category of a Cartesian differential comonad. However, Cartesian differential comonads can be defined without the need for a symmetric monoidal structure. Thus, it is reasonable to expect that there is a finer (and possibly simpler) embedding of a Cartesian differential category into the coKleisli category of a Cartesian differential comonad.
\item In this paper, we studied the (co)Kleisli categories of Cartesian differential (co)monads. A natural follow-up question to ask is: what can we say about the (co)Eilenberg-Moore categories of Cartesian differential (co)monads? As discussed in \cite{cockett_et_al:LIPIcs:2020:11660}, for differential categories the answer is tangent categories \cite{cockett2014differential}. Indeed, the Eilenberg-Moore category of any codifferential category is always a tangent category \cite[Theorem 22]{cockett_et_al:LIPIcs:2020:11660}, while the coEilenberg-Moore category of a differential (storage) category with sufficient limits is a (representable) tangent category \cite[Theorem 27]{cockett_et_al:LIPIcs:2020:11660}. As such, it is reasonable to expect the same to be true for Cartesian differential (co)monads, that is, that the (co)Eilenberg-Moore category of a Cartesian differential (co)monad is a tangent category by generalizing the constructions found in \cite{cockett_et_al:LIPIcs:2020:11660}.
\item An important part of the theory of calculus is integration, specifically its relationship to differentiation given by antiderivatives and the Fundamental Theorems of Calculus. Integration and antiderivatives have found their way into the theory of differential categories \cite{cockett-lemay2019, ehrhard2017introduction} and Cartesian differential categories \cite{COCKETT201845}. In future work, it would therefore be of interest to define integration and antiderivatives for Cartesian differential (co)monads. We conjecture that integration in this setting would be captured by an \emph{integral combinator transformation}, which should be a natural transformation of the opposite type of the differential combinator transformation, that is, of type $\int_A: \oc(A) \to \oc(A \times A)$. The axioms of an integral combinator transformation should be analogous to the axioms of an integral combinator \cite[Section 5]{COCKETT201845} in the coKleisli category. Some of the examples presented in this paper seem to come equipped with an integral combinator transformation. For example, there is a well-established notion of integration for power series which should induce integral combinator transformations in an obvious way. In the case of divided power polynomials, there is a notion of integration in the one-variable case (see \cite{keigher2000} for the integration of formal divided power series in one variable). However, it is unclear to us how integration for multivariable divided power polynomials would be defined, which is necessary if we wish to construct an integral combinator transformation. In the case of Zinbiel algebras, we conjecture that the natural transformation $\int_V: \mathsf{Zin}(V\times V)\to \mathsf{Zin}(V)$ defined by
\[\int_V\left( (a_{1,0},a_{1,1}) \otimes \hdots \otimes (a_{n,0},a_{n,1}) \right)=\sum_{f:\lbrace 1, \hdots, n \rbrace \to \{0,1\}}a_{1,f(1)} \otimes \hdots \otimes a_{n,f(n)}\]
is a candidate for an integral combinator transformation (in the dual sense). In a differential category, one way to build an integration operator is via the notion of antiderivatives \cite[Definition 6.1]{cockett-lemay2019}, which is the assumption that a canonical natural transformation $\mathsf{K}_A: \oc(A) \to \oc(A)$ be a natural isomorphism. Another goal for future work would be to generalize antiderivatives (in the differential category sense) for Cartesian differential comonads.
\end{enumerate}
In conclusion, there are many potential interesting paths to take for future work with Cartesian differential comonads.
\bibliographystyle{plain}
\section{Introduction\label{intro}}
The possibility of the Standard-Model (SM) Higgs field serving as the
portal to dark matter \cite{Patt:2006fw} has been extensively
phenomenologically studied in the past two decades. A viable scenario
involves a gauge singlet Higgs field which mixes with the SM Higgs field
through appropriate terms in the Higgs potential, resulting in a
dominantly SU(2)-doublet Higgs boson $h_1$ with mass 125 GeV and an
additional Higgs boson $h_2$ with a priori arbitrary mass
\cite{Schabinger:2005ei,Greljo:2013wja,Krnjaic:2015mbs}. If the mixing angle is sufficiently
small, the couplings of the 125-GeV Higgs $h_1$ comply with their SM
values within the experimental error bars. The other Higgs boson $h_2$,
which is mostly gauge singlet, serves as a mediator to the Dark
Sector. In the simplest models the mediator couples to pairs of
dark-matter (DM) particles. In this paper we are interested in the
imprints of the described Higgs portal scenario on rare B meson decays
which can be studied in the new Belle II experiment. If the $h_2$ mass
is in the desired range below the $B$ mass, the decay of $h_2$ into a
pair of DM particles must necessarily be kinematically forbidden to
comply with the observed relic DM abundance
\cite{Greljo:2013wja,Krnjaic:2015mbs}. Phenomenological studies of the scenario were
recently performed in Refs.~\cite{Krnjaic:2015mbs,Winkler:2018qyg,Matsumoto:2018acr,Boiarska:2019jym,Filimonova:2019tuy}.
In this article we first revisit the calculation of the loop-induced
amplitude $b\to s h_2$. The literature on the topic employs a result
derived from the SM $\bar s b$-Higgs vertex with off-shell Higgs
\cite{Batell:2009jf}. However, it is known that this vertex is
gauge-dependent \cite{Botella:1986gf}. This observation calls for a
novel calculation of the $\bar s b h_2$ vertex in an arbitrary $R_\xi$
gauge in order to investigate the correctness of the standard approach
and to understand how the gauge parameter $\xi$ cancels in physical
observables. After briefly reviewing the model in Sec.~\ref{sec:m} we
present our calculation of the $\bar s b h_2$ vertex in
Sec.~\ref{sec:xi} and demonstrate the cancellation of the gauge
dependence for the two cases with on-shell $h_2$ and an off-shell $h_2$
coupling to a fermion pair, respectively. In Sec.~\ref{sec:p} we present
a phenomenological analysis with several novel aspects, such as a study
of the decay $B\to K^* h_2$ and a discussion of the lifetime information
inferred from data on $B\to K^{(*)} h_2[\to f \bar f]$ with a displaced
vertex of the $h_2$ decay into the fermion pair $f\bar f $. In
Sec.~\ref{sec:c} we conclude.
\section{Model\label{sec:m}}
A minimal extension of the SM with a real scalar singlet boson serving
as mediator to the Dark Sector involves the Higgs potential:
\begin{eqnarray}
V &=& V_H+V_{H\phi}+V_\phi+\text{h.c.} \label{eq:v} \\
\text{with}\qquad V_H &=& - \mu^{2} H^{\dagger} H +
\frac{\bar{\lambda}_0}{4}(H^{\dagger} H)^2,\nonumber\\
V_{H\phi}&=&\frac{\alpha}{2} \phi (H^{\dagger} H),\nonumber\\
V_\phi &=&\frac{m^{2}}{2} \phi^{2} + \frac{1}{4} \lambda_{\phi} \phi^{4},\nonumber
\end{eqnarray}
where $\phi$ denotes the scalar singlet field in the interaction basis, while
$H = \left( G^{+}, (v+h + i G^{0} )/\sqrt{2} \right)^T$ is the SM Higgs doublet. We
minimize the scalar potential $V$ with respect to $\phi$ and $h$ and then choose
to express the mass parameters $\mu$ and $m$ in terms of corresponding vacuum
expectation values (vevs) $v_\phi$ and $v$, respectively:
\begin{eqnarray}
\qquad \mu_h^2&\equiv& \frac{\partial^2V}{\partial h^2}
\, =\,\frac{\bar{\lambda}_0 v^2}{2},\nonumber\\
\mu_{h\phi}^2&\equiv& \frac{\partial^2V}{\partial h \partial\phi}
\,=\, \frac{\alpha v}{2},\nonumber\\
\mu_\phi^2 &\equiv& \frac{\partial^2V}{\partial\phi^2}
\,=\, 2\lambda_\phi v_\phi^2-\frac{\alpha v^2}{4 v_\phi}\,.
\end{eqnarray}
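These second derivatives follow from eliminating $\mu^2$ and $m^2$ through the minimization conditions, and can be cross-checked with a short computer-algebra script. The sketch below (using sympy; our own check, restricted to the neutral CP-even directions $h$ and $\phi$ with the Goldstone fields set to zero) reproduces all three expressions.

```python
import sympy as sp

v, vphi = sp.symbols('v v_phi', positive=True)
h, phi = sp.symbols('h phi', real=True)
mu2, m2, lam0, alpha, lamphi = sp.symbols('mu2 m2 lambda0 alpha lambda_phi',
                                          real=True)

# Potential along the neutral CP-even directions: H^dagger H -> (v + h)^2 / 2.
HH = (v + h)**2 / 2
V = -mu2*HH + lam0/4*HH**2 + alpha/2*phi*HH + m2/2*phi**2 + lamphi/4*phi**4

# Minimization at (h, phi) = (0, v_phi) fixes mu2 and m2 in terms of the vevs.
vac = {h: 0, phi: vphi}
sol = sp.solve([sp.diff(V, h).subs(vac), sp.diff(V, phi).subs(vac)],
               [mu2, m2], dict=True)[0]
Vmin = V.subs(sol)

# Second derivatives at the vacuum give the quoted mass-matrix entries.
muh2 = sp.simplify(sp.diff(Vmin, h, 2).subs(vac))      # lambda0 * v^2 / 2
muhphi = sp.simplify(sp.diff(Vmin, h, phi).subs(vac))  # alpha * v / 2
muphi2 = sp.simplify(sp.diff(Vmin, phi, 2).subs(vac))  # 2*lambda_phi*v_phi^2
                                                       #   - alpha*v^2/(4*v_phi)
```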
The corresponding off-diagonal mass matrix is diagonalized with the introduction
of the mixing angle $\theta$
\begin{eqnarray}
\quad h = \cos \theta\,h_{1} - \sin \theta\, h_{2},\quad
\phi = \sin \theta\, h_{1} + \cos \theta\, h_{2}\,.
\end{eqnarray}
As mentioned in the introduction, we choose $h_2$ as the light mass eigenstate,
whose signatures we are primarily interested in, while $h_1$ corresponds to
the observed Higgs boson with mass $125\,\text{GeV}$.
An important Feynman rule for the calculation of the scalar penguin in
$R_\xi$ gauge is the one for the $G^+ G^- h_2$ vertex. After diagonalizing the
mass matrix we find\footnote{We express the Feynman rules using the conventions of
the SM file in the \emph{FeynArts}\ \cite{Hahn:2000kx} package.}
\begin{eqnarray}
\qquad G^+ G^- h_1:\enskip\quad
-i\frac{e m_{h_{1}}^2 \cos\theta}{2m_W \sin\theta_W}, \nonumber\\
\qquad G^+ G^- h_2:\enskip\quad \phantom{-} i\frac{e m_{h_{2}}^2 \sin\theta}{2m_W \sin\theta_W} .
\label{eq:ggh2}
\end{eqnarray}
One easily verifies that the rest of the vertices that are required for the studies
of low energy phenomenology are simple rescalings of the corresponding SM Higgs
vertices by the factor $(-\sin\theta)$. Note that the $G^+ G^- h_2$ vertex is not
found in the same way from the corresponding SM vertex, but in addition involves
the proper replacement of the SM Higgs mass by $m_{h_2}$.
One could have included more terms in the scalar potential in \eq{eq:v} such as
$\phi^2 H^{\dagger} H$, however, such terms would not change the low-energy
phenomenology related to the process of our interest but would merely influence
the scalar self-interactions that we are currently not concerned with.
\boldmath
\section{The $\bar{s} bh_2$ vertex in the $R_\xi$
gauge\label{sec:xi}}
\unboldmath%
We employ a general $R_\xi$ gauge for the calculation of the Feynman
diagrams contributing to the $\bar{s}\text{-}b\text{-}h_2$ vertex. We
further use the \emph{FeynArts}\ package \cite{Hahn:2000kx} for
generating the amplitudes and the \emph{FeynCalc}\
\cite{Mertig:1990an,Shtabovenko:2016sxi,Shtabovenko:2020gxv},
\emph{Package-X}\ \cite{Patel:2015tea}, and \emph{FeynHelpers}\
\cite{Shtabovenko:2016whf} packages to evaluate the analytic expressions
for the Feynman diagrams. Neglecting the mass of the external $s$
quark, we encounter the diagrams shown in \fig{Fig:diags}. In our final
result we will also neglect the masses of the internal up and
charm quarks. While the expressions for individual diagrams contain
ultraviolet poles, the final result is UV convergent due to the
Glashow-Iliopoulos-Maiani mechanism.
\begin{figure*}[tb]
\begin{center}
\subfigure[t][]{\includegraphics[width=0.25\textwidth]{diagram_1.pdf}}
\hspace{.6cm}
\subfigure[t][]{\includegraphics[width=0.25\textwidth]{diagram_2.pdf}}
\hspace{.6cm}
\subfigure[t][]{\includegraphics[width=0.25\textwidth]{diagram_3.pdf}}\\
\hspace{.6cm}
\subfigure[t][]{\includegraphics[width=0.25\textwidth]{diagram_4.pdf}}
\hspace{.6cm}
\subfigure[t][]{\includegraphics[width=0.25\textwidth]{diagram_5.pdf}}
\hspace{.6cm}
\subfigure[t][]{\includegraphics[width=0.25\textwidth]{diagram_6.pdf}}\\
\hspace{.6cm}
\subfigure[t][]{\includegraphics[width=0.25\textwidth]{diagram_7.pdf}}
\hspace{.6cm}
\subfigure[t][]{\includegraphics[width=0.25\textwidth]{diagram_8.pdf}}
\end{center}
\caption{One-loop
diagrams contributing to $b\to s h_2$
in $R_\xi$ gauge.}
\label{Fig:diags}
~\\[-3mm]\hrule
\end{figure*}
In order to elucidate the gauge independence of the physical quantities,
we take the $h_2$ boson off the mass shell. In a first step we
present the results in terms of the scalar loop functions $B_0,C_0$ of the
Passarino-Veltman (PV) basis, keeping exact dependences on all momenta
and masses. For the final goal of calculating the low-energy Wilson
coefficient governing the decay process $b\to s\ h_2$ this appears
unnecessary, but it turns out that the expression in terms
of the PV basis is compact and most suitable for studying
the gauge-independence of the physical quantities.
We decompose each diagram $\mathcal{A}_i$ as
$\mathcal{A}_i= \tilde{\mathcal{A}}_{i} + \mathcal{A}_{i}^{(\xi)}$, with
the second term $\mathcal{A}_{i}^{(\xi)}$ comprising all terms which
depend on the $W$ gauge parameter $\xi$. The expressions for
$\tilde{\mathcal{A}}_i$ are collected in \ref{Results}.
The results for the gauge-dependent pieces of the individual diagrams are
rather lengthy, so we only provide the total sum
\begin{eqnarray}
\;\sum_i \mathcal{A}_i^{(\xi)} & = & \sin\theta\,\lambda_t\,
\dfrac{ m_b m_t^2 }{ 8\pi^2 v^3 (m_b^2-p_{h_2}^2) }
(p_{h_2}^2-m_{h_2}^2)
\cdot \nonumber\\
&&\!\!\!\bigg[B_0(p_{h_2}^2, m_W^2 \xi, m_W^2 \xi)-B_0(m_b^2, m_t^2, m_W^2
\xi) \nonumber \\
&&\quad +\,(p_{h_2}^2-m_b^2+m_t^2-m_W^2\xi)\cdot \nonumber \\
&&\qquad C_0(0,m_b^2,p_{h_2}^2, m_W^2\xi\,, m_t^2,m_W^2\xi )\bigg]\,,
\label{Gauge-dependent}
\end{eqnarray}
with $\lambda_t = V_{tb}V^\ast_{ts}$. Here and in the following we
suppress the Dirac spinors for the $b$ and $s$ quarks.
It follows from the expression
above that the gauge-dependent contribution $\mathcal{A}^{(\xi)}$
vanishes for the case of an on-shell scalar boson, which confirms the gauge
independence of the corresponding physical on-shell amplitude. We write
the total $\bar{s} bh_2$ vertex
$\mathcal{A} = \sum_i (\tilde{\mathcal{A}}_i + \mathcal{A}_i^{(\xi)}) $ (with on-shell quarks and
off-shell $h_2$) as
\begin{eqnarray}
\quad && \mathcal{A} = G(p_{h_2}^2, m_{h_2}^2) +
(p_{h_{2}}^{2} - m_{h_{2}}^{2}) F(\xi, p_{h_2}^2) ,
\end{eqnarray}
with the second term equal to the expression in \eq{Gauge-dependent}.
We note that $ F(\xi, p_{h_2}^2) $ does not depend on $m_{h_2}$. While
the cancellation of $\xi$ from $\mathcal{A} $ is obvious for an on-shell
$h_2$, i.e.\ for the decay $b\to s\ h_2$, this feature is not
immediately transparent for the case in which an off-shell $h_2$ decays
into a pair of other particles. In such scenarios the gauge dependence
is cancelled by other diagrams. Here we exemplify the cancellation of
the gauge parameter for a model in which our mediator $h_2$ couples to
a pair of invisible final state fermions:
\begin{equation}
\qquad \mathcal{L}_{\phi \chi \chi} \;=\; \lambda_\chi \phi \overline{\chi} \chi\,,
\end{equation}
meaning that $h_2$ in $b\to s\ h_2 [\to \overline{\chi} \chi]$ is
necessarily off-shell \cite{Krnjaic:2015mbs}. In order to find the
cancellation of the gauge parameter we must also consider the
diagrams corresponding to $b\to s\ h_1 [\to \overline{\chi} \chi]$
involving the heavy SM-like state $h_{1}$.
The amplitudes involving the $h_2$ and $h_1$ propagators are
proportional to $- \sin \theta$ and to $\cos \theta$, respectively:
\begin{equation}
\begin{split}
\qquad \mathcal{A}_{b\text{-}s\text{-}h_2} \sim - \sin \theta,\qquad\qquad
\mathcal{A}_{b\text{-}s\text{-}h_1} \sim \cos \theta,
\end{split}
\end{equation}
while the vertices $\mathcal{V}_{h_{1,2} \chi \chi } $
involving the coupling of the dark-matter fermion to
the scalar bosons depend on $\theta$ as
$\mathcal{V}_{h_{1} \chi \chi } \sim \sin \theta$ and
$\mathcal{V}_{h_{2} \chi \chi } \sim \cos \theta$. The
$b\to s\ h_{1,2} [\to \overline{\chi} \chi]$ amplitudes $\mathcal{A}_{h_{1,2}}$ can be
schematically written as
\begin{eqnarray}
\quad\mathcal{A}_{h_{2}} &=& - \lambda_\chi
\sin \theta \cos \theta \left(F(\xi, p^2) +
\frac{G(p^2, m_{h_{2}}^{2})}{p^{2} - m_{h_{2}}^{2}}\right), \\
\quad\mathcal{A}_{h_{1}} &=& \phantom{-}\lambda_\chi
\sin \theta \cos \theta \left( F(\xi, p^2) +
\frac{G(p^2, m_{h_{1}}^{2})}{p^{2} - m_{h_{1}}^{2}}\right),
\end{eqnarray}
where $p^2$ denotes the square of the momentum transferred to the
fermion pair. By adding the two amplitudes one verifies the cancellation of the
gauge-depend\-ent part $ F(\xi, p^2)$. If one considers processes with
off-shell $h_{1,2}$ exchange to SM fermions, such as in $b\to s\tau^+
\tau^-$ with e.g.\ $m_{h_2}>m_b$, also box diagrams are needed for the
proper gauge cancellation as found in Ref.~\cite{Botella:1986gf} for the
SM case.
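The cancellation between the two schematic amplitudes above can be checked symbolically. The following sketch (using \texttt{sympy}) treats $F$ and $G$ as abstract placeholder functions with the argument structure quoted in the text; it verifies only that the gauge-dependent piece drops out of the sum:

```python
import sympy as sp

lam, st, ct, p2, xi = sp.symbols('lambda_chi s_theta c_theta p2 xi')
m1sq, m2sq = sp.symbols('m1sq m2sq')   # m_{h_1}^2 and m_{h_2}^2
F = sp.Function('F')(xi, p2)           # gauge-dependent form factor
G = sp.Function('G')                   # gauge-independent form factor

# schematic amplitudes for b -> s h_{1,2} [-> chi chi]
A_h2 = -lam*st*ct*(F + G(p2, m2sq)/(p2 - m2sq))
A_h1 =  lam*st*ct*(F + G(p2, m1sq)/(p2 - m1sq))

total = sp.expand(A_h1 + A_h2)
assert not total.has(xi)   # F(xi, p2), and with it xi, cancels in the sum
```

Only the difference of the two pole terms survives, as required for a gauge-independent physical amplitude.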
We now proceed to integrate out the top quark and W boson within the
gauge independent contribution
$\tilde{\mathcal{A}}\equiv \sum_i \tilde{\mathcal{A}}_i$ to obtain the
Wilson coefficient:
\begin{eqnarray}
\mathcal{L}_{\text{eff}}&=&C_{h_2 s b}\, h_2\, \overline{s} P_R
b+\text{h.c.}, \label{Leff}\\
\qquad\qquad
C_{h_2 s b} &=& -\frac{3\,\sin\theta\,\lambda_t \,m_b\,m_t^2}{16\,\pi^2\,v^3}\,,
\end{eqnarray}
where $v\simeq 246\,\text{GeV}$ is the vacuum expectation value of the
Higgs doublet. This result agrees with Ref.~\cite{Winkler:2018qyg}, while it agrees with Refs.~\cite{Krnjaic:2015mbs} and \cite{Batell:2009jf} only up to a sign.\footnote{The result in Ref.~\cite{Krnjaic:2015mbs} has the sign opposite to ours, while we cannot conclude which sign convention is used in Ref.~\cite{Batell:2009jf}.}
The procedure to multiply the SM result for the $\bar s b$-Higgs vertex
by $-\sin\theta $ to find the $\bar{s} bh_2$ vertex is not correct in an
$R_\xi$ gauge (nor for the special cases $\xi=0$ or $\xi=1$ of the Landau and 't
Hooft-Feynman gauges) because of the subtlety with the $G^\pm$ vertices
in \eq{eq:ggh2}. However, the missing terms are suppressed by higher
powers of $m_{h_2}^2/m_W^2$ and do not contribute to the effective
dimension-4 Lagrangian in \eq{Leff}.
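For orientation, the size of the Wilson coefficient in \eq{Leff} can be evaluated numerically; the input values below (quark masses, $|\lambda_t|$, and $\sin\theta$) are illustrative assumptions, not values taken from the text:

```python
import math

# Illustrative inputs (assumptions): masses in GeV
mb, mt, v = 4.18, 163.0, 246.0
lam_t     = 0.04        # rough magnitude of |V_tb V_ts*|
sin_theta = 1e-4

# |C_{h2 sb}| = 3 sin(theta) |lambda_t| m_b m_t^2 / (16 pi^2 v^3)
C = 3*sin_theta*lam_t*mb*mt**2/(16*math.pi**2*v**3)
print(f"|C_h2sb| ~ {C:.1e}")   # dimensionless; ~ 5.7e-10 for these inputs
```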
\begin{figure*}[tb]
\begin{center}
\subfigure{\includegraphics[width=0.55\textwidth]{BranchingFractions.pdf}}
\end{center}
\caption{Comparison of the branching fractions of
$B^+\to K^+ h_{2}$ (thick orange curve) and $B^+\to K^{\ast +} h_{2}$ (dashed purple curve) for $\sin\theta=10^{-4}$.}
\label{Fig:BranFrac}
~\\[-3mm]\hrule
\end{figure*}
\section{Phenomenology\label{sec:p}}
The experimental signature $B\to K\, h_{2}$ permits the
determination of $m_{h_2}$ from the decay kinematics, while the other
relevant parameter of the model, $\sin\theta$, can be determined from
the measured branching ratio $B(B\to K\, h_{2})$. With increasing
$m_{h_2}$ more $h_2$ decay channels open and the $h_2$ lifetime may be
in a favourable range allowing the $h_2$ to decay within the Belle II
detector. This scenario has a characteristic displaced-vertex signature
which is highly beneficial for the experimental analysis. Higgs-portal
signatures at $B$ factories have been widely studied
\cite{Krnjaic:2015mbs,Winkler:2018qyg,Filimonova:2019tuy,Kamenik:2011vy,Schmidt-Hoberg:2013hba,Clarke:2013aya,Sierra:2015fma}.
In this paper we briefly revisit
the recent analyses of Refs.~\cite{Winkler:2018qyg,Filimonova:2019tuy}
and complement them with several new elements: Firstly, we pre\-sent a novel analysis of the decay mode $B\to K^{*}(892) h_2$ in comparison to $B\to K h_2$. Secondly, we highlight the benefits of the lifetime information
which can be obtained from the displaced-vertex data. Thirdly, we
present a new result for the number of $B\to K h_2[\to f]$ events
(with $f$ representing a pair of light particles) expected at Belle II as
a function of the relevant $B\to K h_2$ and $h_2\to f$ branching ratios.
In our study of $B\to K h_2$ and $B\to K^\ast h_2$ with subsequent
decay of $h_2$ into visible final states with displaced vertices we
restrict ourselves to the case $m_{h_2}>2m_{\mu}$.
While the leptonic decay rate is given by the simple formula
\begin{equation}
\quad
\Gamma(h_2\to \ell\ell) =
\sin^2\theta \frac{G_F m_{h_{2}} m_{\ell}^2}{4\sqrt{2}\pi}
\bigg(1-\frac{4m_\ell^2}{m_{h_{2}}^2}\bigg)^{3/2}\,,
\label{eq:s}
\end{equation}
the calculation of the decay rate into an exclusive hadronic final state
is challenging. Different calculations of $ \Gamma(h_2\to \pi\pi)$ and
$ \Gamma(h_2\to KK)$
\cite{Voloshin:1985tc,Truong:1989my,Donoghue:1990xh,Monin:2018lee} employing chiral
perturbation theory have been clarified, updated and refined in
Ref.~\cite{Winkler:2018qyg} and we use the results of this reference.
In the region with $m_{h_{2}}> 2\,\text{GeV}$ the inclusive hadronic
decay rate can be reliably calculated in perturbation theory
\cite{Grinstein:1988yu}.
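Below the hadronic thresholds the total width is dominated by \eq{eq:s}, so the proper lifetime can be estimated directly. The following sketch assumes an illustrative parameter point below the $\pi\pi$ threshold (and standard values of $G_F$, $m_\mu$, $\hbar$):

```python
import math

GF, mmu, hbar = 1.1664e-5, 0.10566, 6.582e-25   # GeV^-2, GeV, GeV*s

def gamma_mumu(mh2, sin_theta):
    """Width of h2 -> mu+ mu- from eq. (eq:s), in GeV."""
    if mh2 <= 2*mmu:
        return 0.0
    return (sin_theta**2*GF*mh2*mmu**2/(4*math.sqrt(2)*math.pi)
            * (1 - 4*mmu**2/mh2**2)**1.5)

# illustrative point (assumption): m_h2 = 0.25 GeV, sin(theta) = 1e-4,
# below the pi pi threshold, where mu mu dominates the total width
Gam = gamma_mumu(0.25, 1e-4)
tau = hbar/Gam                      # proper lifetime in seconds
print(f"Gamma ~ {Gam:.1e} GeV  ->  c*tau ~ {3e8*tau:.0f} m")
```

The resulting macroscopic decay length illustrates why displaced vertices (or missing energy) are the typical signature in this corner of parameter space.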
Analyses with fully visible final states $K^{(\ast)}f$ can also be done at LHCb~\cite{Aaij:2016qsm,Aaij:2015tna}.
\subsection{$B \to K h_2$}
The branching ratio of $B\to K h_2$ is
\begin{eqnarray}
\; B(B\to K h_2) = \frac{\tau_B}{32\pi m_B^2}\vert C_{h_2 s
b}\vert^2\bigg(\frac{m_B^2-m_K^2}{m_b-m_s}\bigg)^2\cdot \nonumber\\
f_0(m_{h_{2}}^2)^2
\frac{\lambda(m_B^2, m_K^2,m_{h_{2}}^2)^{1/2}}{2 m_B}\,,\label{BrtoK}
\end{eqnarray}
where $\lambda(a,b,c) = a^2+b^2+c^2 - 2(ab+ac+bc)$, and the scalar form
factor $f_0(q^2)$ is related to the desired scalar hadronic matrix element as
\begin{equation}
\qquad \langle K\vert\bar{s}b\vert B\rangle =
\frac{m_B^2-m_K^2}{m_b-m_s}\,f_0(q^2)\,,
\end{equation}
where $q=p_B-p_K$. For this form factor we use the QCD lattice
result of Ref.~\cite{Bailey:2015dka} (see also
\cite{Bouchard:2013pna}).
The reach of the Belle II experiment for the process $B\to K h_2$ was
recently studied in Ref.~\cite{Filimonova:2019tuy}. This investigation
involves a study of the detector geometry and we present a novel study
in \ref{App:Events}. For the evaluation of the number of events we use
the formula~\eqref{Events2}. Our evaluation of the sensitivities
corresponds to $5\cdot 10^{10}$ produced $B\bar{B}$ meson pairs,
where $B$ represents both $B^+$ and $B^0$, at
$50\,\text{ab}^{-1}$ of data at the Belle II experiment \cite{Kou:2018nap}.
The parameter regions that correspond to three or more displaced vertex
events of any of the final state signatures in $B\to K (h_{2}\to f)$,
$f = (\pi\pi + K K), \mu\mu, \tau\tau$ within the Belle II detector are
displayed by the dashed red contours in \fig{Fig:PlotA}. The
number of events involves the summation over the decays of $B^+$,
$B^0$ and the corresponding charge-conjugate mesons. Following
Ref.~\cite{Filimonova:2019tuy}, we display the regions in which the
$\pi\pi, KK$ final states occur as well as the region above the $\tau$
lepton threshold within the same plot. We show the contours of the
proper lifetime of the scalar mediator within the same parameter space
and encourage our experimental colleagues to include the lifetime
information in the following ways: In a first step one may assume the
minimal model adopted in this paper and use the lifetime measurements as
additional information on $m_{h_2}$ and $\sin \theta$.
E.g.\ if $h_2$ is light enough so that the only relevant decay channel is
$h_2\to \mu^+\mu^-$, the lifetime is the inverse of the width in
\eq{eq:s}. Thanks to the strong dependence on $m_{h_2}$ the lifetime
information will improve the determination of $m_{h_2}$ inferred from
the $B\to K h_2$ decay kinematics once $\sin \theta$ is fixed from
branching ratios. With more statistics one can go a step further and use
the lifetime information to verify or falsify the model. Even if all
$h_2$ couplings to SM particles originate from the SM Higgs field through
mixing, a richer singlet scalar sector can change the $h_2$
lifetime. Consider an extra gauge singlet scalar field $\tilde\phi$
coupling to $\phi$ in the potential in \eq{eq:v} giving rise to a third
physical Higgs state $h_3$. If $h_3$ is sufficiently light,
$h_2\to h_3h_3$ is possible. Through $\tilde\phi$--$H$ mixing the new
particle $h_3$ will decay back into SM particles, but
the lifetime can be so large that $h_2\to h_3h_3$ is just
a missing-energy signature. Then the only detectable effect of the extra
$h_2\to h_3h_3$ mode is a shorter $h_2$ lifetime. If
measured precisely enough, the lifetime will permit the determination of the
decay rate of $h_2\to h_3h_3$ and thereby of the associated
coupling constant. Alternatively, one may envisage a model in which
$h_2$ decays into a pair of sterile neutrinos which decay back to SM fermions.
\begin{figure*}[tb]
\hrule\medskip
\begin{center}
\subfigure{\includegraphics[width=0.55\textwidth]{PlotA.pdf}}
\end{center}
\caption{Parameter regions that correspond to three or more
events of $B\to K h_{2}\,(\to f)$,
$f = (\pi\pi + K K), \mu^+\mu^-, \tau^+\tau^-$ are shaded in
red and bounded by the dashed red contours. Analogous regions
for $B\to K^\ast h_2$ are indicated by the dark green
contour. We summed over the number of events in the
decays of $B^{+}$, $B^-$, $B^0$, and $\bar{B}^0$. The dotted
lines are contours of constant $h_2$ proper lifetime.}
\label{Fig:PlotA}
~\\[-3mm]\hrule
\end{figure*}
\subsection{$B \to K^{\ast} h_2$}
We include in our analysis the decay of the $B$ meson that involves the
final-state vector meson $K^\ast$ and has the branching fraction
\begin{eqnarray}
\quad
B(B\to K^\ast h_2) &=& \frac{\tau_B}{32\pi m_B^2}\vert C_{h_2 s b}\vert^2
\frac{A_0(m_{h_2}^2)^2}{(m_b+m_s)^2} \cdot \nonumber\\
&&\quad \frac{\lambda(m_B^2, m_{K^\ast}^2, m_{h_2}^2)^{3/2}}{2 m_B}\,.\label{BrtoKst}
\end{eqnarray}
The form factor $A_0(q^2)$ is related to the desired pseudoscalar
hadronic matrix element as
\begin{equation}
\langle K^\ast (k, \epsilon)\vert \bar{s}\gamma_5 b\vert B(p_B)\rangle = \frac{2\,m_{K^\ast}\,\epsilon^\ast\cdot q}{m_b+m_s} A_0(q^2)\,,
\end{equation}
where $\epsilon$ is a polarization vector of $K^\ast$ and $q=p_B-k$. For this form factor we use the combination of results from lattice QCD \cite{Horgan:2013hoa} and QCD sum rules \cite{Straub:2015ica} as provided in Ref.~\cite{Straub:2015ica}.
$ B(B\to K^\ast h_2)$ is comparable in size to $ B(B\to K h_2)$ for
masses up to $\sim 2\,\text{GeV}$ (see \fig{Fig:BranFrac}), and is
suppressed as the mass $m_{h_{2}}$ approaches the kinematic
endpoint. This is the result of the additional power of the kinematic
function $\lambda$ in \eq{BrtoKst} that comes from the
contribution of the longitudinal $K^\ast$ polarization. It follows from
angular momentum conservation that this is the only contributing
polarization. The combination of the experimental data from both
processes will be required in order to discriminate the spin-0 vs.\
spin-1 hypotheses
in case of a discovery. E.g.\ the mediator with spin 1 involves
a different dependence of the rate on the mediator's mass
and comes with a dramatic suppression of the decay rate with $K$
in the final state if the mediator is light. The decay $B \to K^\ast h_2$ has been studied before in Ref.~\cite{Boiarska:2019jym}, in which a plot similar to our Fig.~\ref{Fig:BranFrac} is presented
for the sum of several vector resonances. Our analysis of Belle II opportunities is new compared to Ref.~\cite{Boiarska:2019jym}, which focuses on the LHC, SHiP, and DUNE. Refs.~\cite{Boiarska:2019jym,Filimonova:2019tuy} further study the fully inclusive decay $B \to X_s h_2$.
The kinematic suppression close to the endpoint implies that the number
of $B\to K^\ast h_2(\tau\tau)$ events will be much smaller than in
the case of the final state with $K$. We display the parameter
region corresponding to $K^\ast$ events with the dark green contour
in \fig{Fig:PlotA}.
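The relative endpoint suppression coming from the extra power of $\lambda$ in \eq{BrtoKst} compared to \eq{BrtoK} can be illustrated numerically. In the sketch below the meson masses are assumed PDG-like values, and form factors and all common prefactors are deliberately left out:

```python
import numpy as np

mB, mK, mKst = 5.279, 0.4937, 0.8917   # GeV (assumed PDG-like masses)

def lam(a, b, c):
    """Kaellen function lambda(a, b, c)."""
    return a**2 + b**2 + c**2 - 2*(a*b + a*c + b*c)

def kin_K(mh):    # kinematic factor of B -> K  h2, eq. (BrtoK):  lambda^{1/2}
    return lam(mB**2, mK**2, mh**2)**0.5

def kin_Kst(mh):  # kinematic factor of B -> K* h2, eq. (BrtoKst): lambda^{3/2}
    return lam(mB**2, mKst**2, mh**2)**1.5

def r(mh):        # K* over K, normalised at a light reference mass
    return (kin_Kst(mh)/kin_Kst(0.5))/(kin_K(mh)/kin_K(0.5))

print(r(2.0), r(4.0))   # the K* mode falls off much faster near the endpoint
```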
\begin{figure*}[tb]
\begin{center}
\subfigure{\includegraphics[width=0.55\textwidth]{PlotB.pdf}}
\end{center}
\caption{The combined sensitivity of the Belle II experiment to
displaced vertices of $h_2$, including both $B\to K h_{2}$ and
$B\to K^\ast h_{2}$ and decays of $h_2$ to
$(\pi\pi + K K), \mu^+\mu^-, \tau^+\tau^-$, is shown as the
filled red region, and compared to the search limit of LHCb
\cite{Aaij:2016qsm} (shaded blue) and projected sensitivities
by other proposed experiments, Mathusla~\cite{Evans:2017lvd}
(pink), SHiP~\cite{Alekhin:2015byh}, CODEX
b~\cite{Gligorov:2017nwh} (gray) and FASER
2~\cite{Ariga:2018uku} (brown).}
\label{Fig:combination}
~\\[-3mm]\hrule
\end{figure*}
In \fig{Fig:combination} we compare the reach of the Belle II experiment
to displaced vertices of $h_2$ including both $B\to K h_{2}$ and
$B\to K^\ast h_{2}$ processes and decays of $h_2$ to
$(\pi\pi + K K), \mu^+\mu^-, \tau^+\tau^-$ with the existing search
limit of the LHCb experiment~\cite{Aaij:2016qsm}.\footnote{We use
the result of Ref.~\cite{Winkler:2018qyg} for the LHCb search limit on
$B(B\to K h_2[\to\mu^+\mu^-])$.} We also compare to projected
sensitivities of other proposed experiments,
Mathusla~\cite{Evans:2017lvd}, SHiP~\cite{Alekhin:2015byh}, CODEX
b~\cite{Gligorov:2017nwh} and FASER 2~\cite{Ariga:2018uku}.
\section{Conclusions}\label{sec:c}
We have clarified the cancellation of gauge-dependent terms appearing in
the $\bar{s} bh_2$ vertex in the standard Higgs portal model with a
singlet mediator to the Dark Sector. We have further updated the
$b\to s h_2$ phenomenology to be studied at the Belle II detector, with
a novel consideration of $B\to K^* h_2$ complementing the previously
studied decay $B\to K h_2$. Decays like
$B\to K^{(*)} h_2[\to \mu^+\mu^-]$ with a displaced vertex permit the
measurement of the $h_2$ lifetime. We have shown how this measurement will
further constrain the two relevant parameters $m_{h_2}$ and $\sin\theta$
of the model. Both the lifetime information and the combined study of
$B\to K^* h_2$ and $B\to K h_2$ permit the discrimination of the studied
Higgs portal from other Dark-Sector models. Another result
of this paper is a new calculation of the expected number of
$B\to K h_2[\to f]$ events as a function of the $B\to K h_2$ and
$h_2\to f$ branching ratios for the Belle II detector.
\begin{acknowledgements}
We are grateful for helpful discussions with Teppei Kitahara, Felix
Metzner, Vladyslav Shtabovenko and Susanne
Westhoff and thank the authors of Ref.~\cite{Filimonova:2019tuy}
for confirming our result in \ref{App:Events}.
We further thank Ulises J.\ Salda\~na-Salazar
for participation in the early stages of the project. This work is
supported by BMBF under grant \emph{Verbundprojekt 05H2018 (ErUM-FSP
T09) - BELLE II: Theoretische Studien zur Flavourphysik}. A.K.\
acknowledges the support from the doctoral school \emph{KSETA}\ and
the \emph{Graduate School Scholarship Programme}\ of the \emph{German
Academic Exchange Service (DAAD)}.
\end{acknowledgements}
\section*{Abstract}
{\bf
We study the possibility of charge order at quarter filling and antiferromagnetism at half-filling in a tight-binding model of magic angle twisted bilayer graphene.
We build on the model proposed by Kang and Vafek~\cite{kang2018}, relevant to a twist angle of $1.30^\circ$, and add on-site and extended density-density interactions.
Applying the variational cluster approximation with an exact-diagonalization impurity solver, we find that the system is indeed a correlated (Mott) insulator at fillings $\frac14$, $\frac12$ and $\frac34$.
At quarter filling, we check that the most probable charge orders do not arise for any of the interaction strengths tested. At half-filling, antiferromagnetism only arises if the local repulsion $U$ is sufficiently large compared to the extended interactions, beyond what is expected from the simplest model of extended interactions.
}
\vspace{10pt}
\noindent\rule{\textwidth}{1pt}
\tableofcontents\thispagestyle{fancy}
\noindent\rule{\textwidth}{1pt}
\vspace{10pt}
\section{Introduction}
The observation of correlated insulators and superconductivity in twisted bilayer graphene (TBG) \cite{cao2018, cao2018a} has
inaugurated the new field of twistronics.
This discovery was motivated by the prediction that, for a few small ``magic'' twist angles, the band structure of a twisted graphene bilayer would contain a low-energy manifold of flat bands, well separated from the other bands and forming a strongly correlated electronic subsystem.~\cite{bistritzer2011,suarezmorell2010,tramblydelaissardiere2016}.
So far the superconducting order parameter symmetry of TBG is not known, although there are numerous predictions.
The nature of the insulating state (pure Mott insulator or broken-symmetry phase) is not known either.
The goal of this paper is to analyse the insulating state of TBG at quarter- and half-filling and to ascertain whether it is a pure Mott state or a broken symmetry state, either a charge-density wave (quarter filling) or an antiferromagnet (half-filling).
We will conclude that it is indeed a pure Mott state.
This paper is an extension of our previous work \cite{pahlevanzadeh2021} on the superconducting state of TBG.
We will use the same premise: We will start from the tight-binding model proposed by Kang and Vafek~\cite{kang2018}, based on the microscopic analysis of Moon and Koshino~\cite{moon2012}.
However, instead of applying cluster dynamical mean field theory (CDMFT) as in Ref.~\cite{pahlevanzadeh2021}, we will apply another cluster method, the variational cluster approximation (VCA), based on a 12-site cluster.
In addition, we will include extended interactions, which were neglected in Ref.~\cite{pahlevanzadeh2021} and will extend the VCA by a mean-field treatment of inter-cluster interactions.
Since the model studied is nearly particle-hole symmetric, the conclusions reached at quarter filling also apply at three-quarter filling.
\section{The low-energy model}\label{sec:model}
\begin{figure}[ht]\centering
\includegraphics[width=0.5\hsize]{Fig/wannier.pdf}
\caption{Schematic representation of the Wannier functions $w_1=w_2^*$ (orange) and $w_3=w_4^*$ (green) on which our model Hamiltonian is built.
The charge is maximal at the AA superposition points (blue circles) forming a triangular lattice.
The Wannier functions are centered on the triangular plaquettes that form a graphene-like lattice (black dots), whose unit cell is shaded in red.
The basis vectors $\Ev_{1,2}$ of the moir\'e lattice are shown (they are also basis vectors of the graphene-like lattice of Wannier functions), as well as the elementary nearest-neighbor vectors $\av_{1,2,3}$. This figure is borrowed from Ref.~\cite{pahlevanzadeh2021}.
}
\label{fig:Wannier}
\end{figure}
Among the various tight-binding Hamiltonian proposed for the low-energy bands of TBG~\cite{angeli2018, moon2012, kang2018, yuan2018a}, we adopt the one described in Ref.~\cite{kang2018}.
This model features four Wannier orbitals per unit cell (labeled $w_{1,2,3,4}$), with maximal symmetry, on an effective honeycomb lattice, appropriate for a twist angle $\theta=1.30^\circ$.
Each site of the honeycomb lattice is associated with two Wannier orbitals, which it is convenient to imagine located on two different layers, containing respectively the orbitals $w_{1,4}$ and the orbitals $w_{2,3}$.
The Wannier orbitals of one layer are schematically illustrated on Fig.~\ref{fig:Wannier}, borrowed from Ref.~\cite{pahlevanzadeh2021}.
We will only retain the largest hopping integrals among those computed in Ref.~\cite{kang2018}; see Table~\ref{table:hopping} (the notation used is that of Ref.~\cite{kang2018}).
The most important hopping terms are between Wannier orbitals $w_1$ and $w_4$ and between $w_2$ and $w_3$, i.e., between graphene sublattices, within a given layer. The inter-layer hopping terms are much smaller, the largest of which being $t_{13}[0,0]$.
\definecolor{Green}{rgb}{0,0.7,0}
\begin{table}[ht]\centering
$\vcenter{\hbox{\begin{tabular}{LL}
\hline\hline
\mathrm{symbol} & \mbox{value (meV)} \\ \hline
{\color{white}\bullet}~t_{13}[0,0] = \omega t_{13}[1,-1] = \omega^* t_{13}[1,0] & -0.011 \\
{\color{red}\bullet}~t_{14}[0,0] = t_{14}[1,0] = t_{14}[1,-1] & \phantom{-}0.0177 + 0.291i \\
{\color{blue}\bullet}~t_{14}[2,-1] = t_{14}[0,1] = t_{14}[0,-1] & -0.1141 - 0.3479i \\
{\color{Green}\bullet}~t_{14}[-1,0] = t_{14}[-1,1] = t_{14}[1,-2] & \\
~~~= t_{14}[1,1] = t_{14}[2,-2] = t_{14}[2,0] & \phantom{-}0.0464 - 0.0831i \\ \hline\hline
\end{tabular}}}$~~
$\vcenter{\hbox{\includegraphics[width=4cm]{Fig/hopping.pdf}}}$
\caption{Hopping amplitudes used in this work. They are the most important amplitudes computed in Ref.~\cite{kang2018}.
Here $\om=e^{2\pi i/3}$ and the vector $[a,b]$ following the symbol represents the bond vectors in the $(\Ev_1,\Ev_2)$ basis shown on Fig.~\ref{fig:Wannier}.
Note that $t_{23}=t_{14}^*$ and $t_{24}=t_{13}^*$.
On the right: schematic view of the hopping terms $t_{14}$ within a given layer (the unit cell is the blue shaded area). Lines 2, 3, and 4 of the table correspond to the red, blue and green links, respectively. Dashed and full lines are for $t_{14}$ and $t_{23}$, respectively.
\label{table:hopping}}
\end{table}
We now proceed to describe a simple model for interactions, derived from an on-site Coulomb repulsion at the AA sites~\cite{dodaro2018, xu2018c}:
\begin{equation}\label{eq:Hint}
H_{\rm int} = u\sum_{\Rv\in \mathrm{AA}} n_\Rv^2~~,
\end{equation}
where the sum is carried over AA sites and $n_\Rv$ is the total charge located at that site, to which contribute 12 Wannier orbitals (6 per layer).
Specifically, we could write
\begin{equation}\label{eq:nR}
n_\Rv = \frac13\sum_{i=1}^3 \left(n^{(1)}_{\Rv+\av_i}+n^{(1)}_{\Rv-\av_i}+n^{(2)}_{\Rv+\av_i}+n^{(2)}_{\Rv-\av_i}\right)
\end{equation}
where $n^{(\ell)}_\rv$ is the electron number associated with the Wannier orbital centered at the (honeycomb) lattice site $\rv$ on layer $\ell$.
The vectors $\pm\av_i$, indicated on Fig.~\ref{fig:Wannier}, go from each AA site to the six neighboring honeycomb lattice sites.
The factor of $\frac13$ above comes from the fact that each Wannier orbital has three lobes, i.e., is split across three AA sites.
Expressed in terms of the Wannier electron densities $n_\rv^{\ell}$, the interaction takes the form
\begin{equation}
H_{\rm int} =
\frac12\sum_{\rv,\rv',\ell,\ell'} V_{\rv,\rv'}^{\ell,\ell'} n_{\rv}^\ell n_{\rv'}^{\ell'}
\end{equation}
where the factor of $\frac12$ avoids double counting when performing independent sums over sites and orbitals.
The Hubbard on-site, intra-orbital interaction $U$ is equal to $V_{\rv,\rv}^{\ell,\ell}$, since
\begin{equation}
V_{\rv,\rv}^{\ell,\ell} n_{\rv\uparrow}^\ell n_{\rv\downarrow}^\ell =
\frac12 V_{\rv,\rv}^{\ell,\ell} (n_{\rv\uparrow}^\ell+n_{\rv\downarrow}^\ell)(n_{\rv\uparrow}^\ell+n_{\rv\downarrow}^\ell) - \frac12 V_{\rv,\rv}^{\ell,\ell} n_\rv^\ell \qquad(n_{\rv\sigma}^2=n_{\rv\sigma})
\end{equation}
Including on-site interactions in this form entails a compensation term $U/2$ to the chemical potential.
Careful counting from Eqs~(\ref{eq:Hint},\ref{eq:nR}) shows that
\begin{equation}\begin{aligned}\label{eq:values}
U &= \frac23 u &&\qquad \mbox{(on-site)}\\
V_{\rv\rv}^{(1,2)} &\equiv V_0 = \frac23 u = U &&\qquad \mbox{(same site, different layers)}\\
V_{\rv\rv'}^{(\ell,\ell')} &\equiv V_1 = \frac49 u = \frac23U &&\qquad \mbox{(1st neighbors)} \\
V_{\rv\rv'}^{(\ell,\ell')} &\equiv V_2 = \frac29 u = \frac13U &&\qquad \mbox{(2nd neighbors)} \\
V_{\rv\rv'}^{(\ell,\ell')} &\equiv V_3 = \frac29 u = \frac13U &&\qquad \mbox{(3rd neighbors)}
\end{aligned}
\end{equation}
There are no interactions beyond third neighbors coming from a single AA site.
We will study this model by assuming the above relations between extended interactions $V_{0,1,2,3}$ and the on-site interaction $U$.
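The coefficients in Eq.~\eqref{eq:values} can be cross-checked by brute-force counting: every Wannier orbital contributes weight $\frac13$ to each of its three adjacent AA sites, and each AA site shared by two orbitals contributes $2u\cdot\frac19$ to their density-density coupling. The explicit choice of the vectors $\av_i$ below is an assumption consistent with Fig.~\ref{fig:Wannier}:

```python
import numpy as np

u = 1.0
a1 = np.array([1.0, 0.0])
a2 = np.array([-0.5, np.sqrt(3)/2])
a3 = -a1 - a2    # a1 + a2 + a3 = 0

def lobes(pos, sub):
    """AA sites carrying the three lobes of the orbital at honeycomb site pos."""
    sign = -1 if sub == 'A' else +1   # A sites sit at AA+a_i, B sites at AA-a_i
    return {tuple(np.round(pos + sign*ai, 6)) for ai in (a1, a2, a3)}

def V(p1, s1, p2, s2):
    """Density-density coupling: 2u*(1/3)^2 per shared AA site."""
    return 2*u*len(lobes(p1, s1) & lobes(p2, s2))/9

A, B1, A2, B3 = (a1, 'A'), (-a2, 'B'), (a2, 'A'), (-a1, 'B')
assert np.isclose(V(*A, *A),  2*u/3)   # V0: same site (other layer), equal to U
assert np.isclose(V(*A, *B1), 4*u/9)   # V1: first neighbours (2 shared AA sites)
assert np.isclose(V(*A, *A2), 2*u/9)   # V2: second neighbours (1 shared AA site)
assert np.isclose(V(*A, *B3), 2*u/9)   # V3: third neighbours (1 shared AA site)
```

Orbitals separated by more than a third-neighbour distance share no AA site, reproducing the statement that the interactions terminate at third neighbours.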
\subsection{The strong-coupling limit}\label{subsec:strong}
Given the large number of extended interactions in the model, it is instructive to look at the strong-coupling limit (neglecting all hopping terms) to detect possible charge order instabilities stemming solely from the interactions.
The reader will forgive us if we use a slightly different notation, writing the interaction Hamiltonian as
\begin{equation}
H_{\rm int} =
\frac12\sum_{\Rv,\Rv',a,b} V_{\Rv,\Rv'}^{a,b} n_{\Rv}^a n_{\Rv'}^b
\end{equation}
where now $\Rv$, $\Rv'$ denote Bravais lattice sites and $a,b$ orbital indices from 1 to 4.
In essence, for each $\Rv$, the site index $\rv$ takes two values (the two sublattices $A$ and $B$), as does the layer index $\ell$, leading to four possible values of the orbital index $a$.
This shift in notation allows us to express the interaction in Fourier space:
\begin{equation}
H_{\rm int} = \frac12 \sum_{\qv, a,b} \tilde V^{ab}_\qv \tilde n_\qv^{a\dagger} \tilde n_\qv^b
\end{equation}
where
\begin{equation}
V_{\Rv\Rv'}^{ab} = \frac1L \sum_\qv \tilde V_\qv^{ab} e^{i\qv\cdot(\Rv-\Rv')} \qquad\qquad
\tilde n_\qv^a = \frac1{\sqrt{L}}\sum_\Rv e^{-i\qv\cdot\Rv}n_\Rv^a
\end{equation}
Interactions up to third neighbor are then encoded in the following $\qv$-dependent matrix:
\begin{equation}\label{eq:Vq}
[\tilde V_\qv^{ab}] =
\begin{pmatrix}
U+ V_2\beta_\qv & V_1\gamma_\qv + V_3 \gamma^*_{2\qv} & V_0 + V_2\beta_\qv & V_1\gamma_\qv + V_3 \gamma^*_{2\qv} \\
V_1\gamma^*_\qv + V_3 \gamma_{2\qv} & U+ V_2\beta_\qv & V_1\gamma^*_\qv + V_3 \gamma_{2\qv} & V_0 + V_2\beta_\qv \\
V_0 + V_2\beta_\qv & V_1\gamma_\qv + V_3 \gamma^*_{2\qv} & U+ V_2\beta_\qv & V_1\gamma_\qv + V_3 \gamma^*_{2\qv} \\
V_1\gamma^*_\qv + V_3 \gamma_{2\qv} & V_0 + V_2\beta_\qv & V_1\gamma^*_\qv + V_3 \gamma_{2\qv} & U+ V_2\beta_\qv
\end{pmatrix}
\end{equation}
with
\begin{equation}
\beta_\qv = 2\left(\cos\qv\cdot\bv_1+ \cos\qv\cdot\bv_2+\cos\qv\cdot\bv_3 \right) \quad\mbox{and}\quad
\gamma_\qv = e^{i\qv\cdot\av_1}+ e^{i\qv\cdot\av_2}+ e^{i\qv\cdot\av_3}
\end{equation}
where the vectors $\bv_i$ are the second-neighbor vectors on the honeycomb lattice (hence first neighbors on the Bravais lattice):
\begin{equation}
\bv_1 = 2\av_1 + \av_2 \qquad \bv_2 = \av_1 + 2\av_2 \qquad \bv_3 = \av_2-\av_1
\end{equation}
The order of orbitals adopted in this matrix notation is $(w_1,w_4,w_2,w_3)$: the first two orbitals belong to the ``first layer'', the last two to the ``second layer''.
The local density $n_{\Rv\sigma}^a$ can only take the values 0 or 1, but the Fourier transforms $\tilde n^a_\qv$ are continuous variables in the thermodynamic limit, and they all commute with each other.
Hence, for the sake of detecting charge order in the strong-coupling limit, we can treat the variables
$\tilde n^a_\qv$ as classical.
The matrix \eqref{eq:Vq} can be diagonalized by a unitary matrix:
\begin{equation}
\tilde V_\qv^{ab} = \sum_{r=1}^4 U^{ar}_\qv \lambda_\qv^{(r)} U^{br*}_\qv
\end{equation}
and thus the interaction energy can take the form
\begin{equation}
H_{\rm int} = \frac12 \sum_\qv \sum_{r=1}^4 \lambda_\qv^{(r)} |m^{(r)}_\qv|^2 \qquad\qquad
\left( m^{(r)}_\qv = U^{ar*}_\qv \tilde n_\qv^a \right)
\end{equation}
with the eigenvalues
\begin{align}
&\lambda_\qv^{(1)} = U + V_0 + 2V_2\beta_\qv + 2|V_1\gamma_\qv+V_3\gamma^*_{2\qv}| \\
&\lambda_\qv^{(2)} = U + V_0 + 2V_2\beta_\qv - 2|V_1\gamma_\qv+V_3\gamma^*_{2\qv}| \\
&\lambda_\qv^{(3)} = \lambda_\qv^{(4)} = U -V_0
\end{align}
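These closed-form eigenvalues, together with the values $\lambda^{(1)}_{\mathbf 0}=12U$ and $\lambda^{(1)}_{\Kv}=0$ quoted in the text, can be verified numerically. The sketch below builds the Hermitian $4\times4$ coupling matrix with rows $1,3$ (sublattice $A$) carrying $\gamma_\qv$ and rows $2,4$ (sublattice $B$) carrying $\gamma^*_\qv$; the explicit geometry ($\av_i$, $\bv_i$ and the Dirac point $\Kv$) is an assumption:

```python
import numpy as np

U = 1.0
V0, V1, V2, V3 = U, 2*U/3, U/3, U/3   # relations of eq. (eq:values)

a1 = np.array([1.0, 0.0]); a2 = np.array([-0.5, np.sqrt(3)/2]); a3 = -a1 - a2
b = [2*a1 + a2, a1 + 2*a2, a2 - a1]   # second-neighbour vectors b_i

beta  = lambda q: 2*sum(np.cos(q @ bi) for bi in b)
gamma = lambda q: sum(np.exp(1j*(q @ ai)) for ai in (a1, a2, a3))

def Vmat(q):
    d = U  + V2*beta(q)                        # diagonal entries
    o = V0 + V2*beta(q)                        # same sublattice, other layer
    g = V1*gamma(q) + V3*np.conj(gamma(2*q))   # inter-sublattice coupling
    return np.array([[d,          g, o,          g],
                     [np.conj(g), d, np.conj(g), o],
                     [o,          g, d,          g],
                     [np.conj(g), o, np.conj(g), d]])

def closed_form(q):
    d, o = U + V2*beta(q), V0 + V2*beta(q)
    g = abs(V1*gamma(q) + V3*np.conj(gamma(2*q)))
    return sorted([d + o + 2*g, d + o - 2*g, d - o, d - o])

q = np.array([0.37, -1.21])                    # generic wavevector
assert np.allclose(sorted(np.linalg.eigvalsh(Vmat(q))), closed_form(q))

K = np.array([2*np.pi/3, 10*np.pi/(3*np.sqrt(3))])   # Dirac point: gamma(K)=0
assert np.allclose(sorted(np.linalg.eigvalsh(Vmat(np.zeros(2)))), [0, 0, 0, 12*U])
assert np.allclose(np.linalg.eigvalsh(Vmat(K)), 0, atol=1e-9)
```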
The uniform solution $\tilde n_{\mathbf0}^a = (1,1,1,1)$ corresponds to $\lambda^{(1)}_{\mathbf0}$, which is the largest possible eigenvalue, and is favored by the (neglected) kinetic energy.
Charge order instabilities in the strong-coupling limit occur for negative eigenvalues, since they can lower the interaction energy.
When substituting the values given in Eq.~\eqref{eq:values}, one finds that the maximum eigenvalue is $\lambda_{\mathbf 0}^{(1)}=12U$ and the minimum eigenvalue is zero, the latter at the Dirac points $\qv=\Kv$ and $\qv=\Kv'$ for $\lambda^{(1)}_\qv$, and at all wavevectors for $\lambda^{(2,3,4)}_\qv$.
This means that the system has no instabilities in the strong-coupling limit, only indifferent states (zero eigenvalue), especially at wavevectors $\Kv$ and $\Kv'$.
When probing such instabilities with a cluster method, we should therefore make sure that these two wavevectors belong to the reciprocal cluster.
The 12-site (hexagonal) cluster used in this work satisfies this requirement.
\section{The variational cluster approximation}
In order to detect spectral gaps in the normal state and to probe the possible existence of antiferromagnetic or charge-ordered states in this model, we use the variational cluster approximation (VCA)~\cite{potthoff2003,potthoff2003b,potthoff2014a} with an exact diagonalization solver at zero temperature.
This method takes into account short-range correlations exactly, while allowing long-range order through the introduction of broken-symmetry fields determined by a variational principle.
Let us summarize this method, starting with a Hamiltonian containing local interactions only.
We write the lattice Hamiltonian as $H=H_0(\tv)+H_1(U)$, the sum of a noninteracting term $H_0(\tv)$ with one-body Hamiltonian matrix $\tv$, and of an interaction term $H_1(U)$ with a local Hubbard interaction $U$.
If the lattice contains $L$ unit cells and the model has $B$ orbitals per unit cell, then this matrix $\tv$ is $N\times N$, with $N=LB$.
One then defines a functional $\Omega_\tv[\Sigmav]$ of the self-energy $\Sigmav$ as
\begin{equation}\label{Potthoff1}
\Omega_\tv[\Sigmav]=\Tr\ln\left(-\left(\Gv_0^{-1}-\Sigmav\right)^{-1}\right)
+F[\Sigmav]~~.
\end{equation}
In this expression the trace and the logarithm are functional in nature, $\Gv_0(\omega)=(\omega+\mu-\tv)^{-1}$ is the one-particle Green function of the noninteracting system, and $F[\Sigmav]=\Phi[\Gv[\Sigmav]]-\Tr(\Sigmav \Gv[\Sigmav])$ is the Legendre transform of the Luttinger-Ward functional $\Phi[\Gv]$~\cite{luttinger1960}, $\Gv$ being viewed as a functional of $\Sigmav$.
The Potthoff variational principle states that $\Omega_\tv[\Sigmav]$ is stationary at the exact, physical self-energy, and its value at that point is the exact thermodynamic grand potential $\Omega$ of the system.
One cannot directly optimize $\Omega$ in Eq.~(\ref{Potthoff1}) since the precise form of $F[\Sigmav]$ is unknown.
But the functional form of $F[\Sigmav]$ depends only on the interaction term $H_1(U)$, not on the one-body term $H_0(\tv)$.
This motivates us to define a family of simpler, reference Hamiltonians $H'=H_0(\tv')+H_1(U)$ that differ from $H$ in their one-body Hamiltonian matrix $\tv'$ only, for which the Green function $\Gv'(\omega)$, the self-energy $\Sigmav'(\omega)$ and the grand potential $\Omega'$ can be computed numerically. Specifically, $H'$ can be restricted to a small cluster of sites and a numerical method like exact diagonalization can be applied.
Applying Eq.~(\ref{Potthoff1}) to $H'$, we obtain
\begin{equation}\label{Potthoff2}
\Omega_{\tv'}[\Sigmav']=\Omega'=\Tr\ln\left(-\left(\Gv_0^{\prime -1}-\Sigmav'\right)^{-1}\right)+F[\Sigmav'],
\end{equation}
where $\Gv_0'=(\omega+\mu-\tv')^{-1}$ is the noninteracting Green's function for $H'$ and $F$ has the same functional form for both $H$ and $H'$ since they have the same interaction part.
Equation~(\ref{Potthoff2}) then provides an explicit expression for $F$ evaluated at $\Sigma'$:
\begin{equation}
F[\Sigmav']=\Omega'-\Tr\ln\left(-\Gv'\right),
\end{equation}
with $\Gv^{\prime -1}=\Gv_0^{\prime -1}-\Sigmav'$.
So far no approximation was made. The basic approximation of the VCA method is to restrict the space of self-energies $\Sigmav'$
to the physical self-energies of the reference Hamiltonian $H'$ for a suitable set of $\tv'$'s. In other words, we are not making an approximation on the form of the functional $F$, but we restrict the variational space of self-energies:
We will search for a stationary point of $\Omega_\tv[\Sigmav']$ on a subset of one-body terms $\tv'$ in a class of solvable reference Hamiltonians.
Using Eqs.~(\ref{Potthoff1}) and (\ref{Potthoff2}), the functional to be optimized is
\begin{equation}\label{Potthoff3}
\Omega_\tv[\Sigmav'] = \Omega'+\Tr\ln\left(-\left(\Gv_0^{-1}-\Sigmav'\right)^{-1}\right)-\Tr\ln(-\Gv')~~,
\end{equation}
where everything on the r.h.s. can be explicitly computed.
In quantum cluster methods, such as the VCA or cluster dynamical mean field theory, the reference Hamiltonian $H'$ is defined on a set of decoupled (but otherwise identical) \textit{clusters} that tile the lattice exactly.
In other words, $H' = \sum_c H_c$, where $H_c$ is the Hamiltonian for a single cluster containing $N_c$ orbitals, and the sum contains $N/N_c$ terms.
Each cluster must be small enough for $H_c$ to be exactly solvable numerically, say by the Lanczos method or variants thereof.
If the cluster Hamiltonian $H_c$ is simply the restriction of the lattice Hamiltonian to the cluster, i.e., if the variational method described above is not applied, one gets the so-called cluster perturbation theory (CPT)~\cite{senechal2000a, gros1993}.
This directly leads to the following approximate Green function
\begin{equation}\label{GCPT}
\Gv^{-1}(\omega)=\Gv_0^{-1}(\omega)-\Sigmav'(\omega)=\Gv^{\prime -1}(\omega)-\Vv,
\end{equation}
where $\Vv=\tv-\tv'$ contains inter-cluster hopping terms that were severed in the reference Hamiltonian and the $N\times N$ matrix $\Sigmav'$ is block diagonal, each block being equal to the self-energy $\Sigmav_c$ of the cluster Hamiltonian $H_c$.
Instead of dealing with $N\times N$ matrices $\Gv$, $\tv$, etc., one can make use of the translation invariance on the superlattice of clusters and express the above relations in terms of $N_c\times N_c$ matrices that depend on a wave vector $\kvt$ belonging to the Brillouin zone associated with this superlattice (referred to as the \textit{reduced} Brillouin zone).
The above equation can then be recast as
\begin{equation}\label{GCPTk}
\Gv^{-1}(\kvt,\omega)=\Gv_0^{-1}(\kvt,\omega)-\Sigmav_c(\omega)=\Gv_c^{-1}(\omega)-\Vv(\kvt).
\end{equation}
The wave vector $\kvt$ takes $N/N_c$ different values and all quantities of interest are diagonal in this wave vector.
In particular, $\Sigmav_c$ and $\Gv_c$ do not depend on $\kvt$ since all clusters are identical.
If, in the spirit of the Potthoff variational principle, the reference Hamiltonian is not simply the restriction to the cluster of the lattice Hamiltonian but contains additional one-body terms, these will be included in $\Vv(\kvt)$.
Using Eq.~(\ref{GCPT}), the Potthoff functional (\ref{Potthoff3}) will then be written as
\begin{equation}\label{Potthoff4}
\Omega_\tv[\Sigmav']=\Omega'-\Tr\ln\left(1-\Vv\Gv'\right)
\end{equation}
or, in terms of a sum over frequencies and reduced wavevectors,
\begin{equation}\label{Potthoff4b}
\Omega_\tv[\Sigmav'] = \Omega' - \int\frac{d\omega}{2\pi}\sum_\kvt \ln\det\left[\unit-\Vv(\kvt)\Gv_c(\omega)\right]~~,
\end{equation}
where the frequency integral can be taken along the imaginary axis after proper regularization.
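To make the last expression concrete, here is a minimal numerical sketch of how the frequency-and-wavevector sum can be evaluated, for a hypothetical one-orbital ``cluster'' with a single inter-cluster hopping; all parameter values are illustrative, and a finite temperature is used to regularize the frequency integral as a Matsubara sum:

```python
import numpy as np

# Toy evaluation of the Tr-ln correction in the Potthoff functional:
# a hypothetical 1-orbital "cluster" with G_c(iw) = 1/(iw + mu - eps)
# and inter-cluster term V(k) = -2*t*cos(k). Illustrative parameters only.
T = 0.05                     # temperature regularizing the frequency integral
eps, mu, t = 0.3, 0.0, 0.5
nw, nk = 4000, 64
wn = np.pi * T * (2 * np.arange(-nw, nw) + 1)    # fermionic Matsubara freqs
k = 2 * np.pi * np.arange(nk) / nk               # reduced Brillouin zone

Gc = 1.0 / (1j * wn + mu - eps)                  # cluster Green function
Vk = -2.0 * t * np.cos(k)

# Omega - Omega' per cluster: -T * sum_{n,k} ln det[1 - V(k) G_c(iw_n)] / nk
logs = np.log(1.0 - Vk[:, None] * Gc[None, :])   # det is trivial for a 1x1 block
omega_corr = (-T * logs.sum() / nk).real         # +/- w_n terms are conjugates
```

Contributions from $\pm\omega_n$ are complex conjugates, so the sum is real. In an actual VCA calculation $\Gv_c(\omega)$ is an $N_c\times N_c$ matrix obtained from exact diagonalization and the determinant is nontrivial.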
In VCA, one searches for stationary points of the functional (\ref{Potthoff4}), i.e., solutions of the Euler equation $\partial\Omega_\tv[\Sigmav']/\partial\tv'=0$. This is achieved in practice by using the cluster one-body terms $\tv'$ as variational parameters. In particular, one can search for spontaneously broken symmetries by including in $\tv'$ symmetry-breaking terms, i.e., Weiss fields. By contrast with conventional mean-field theory, the full dynamical effect of correlations is taken into account via the frequency dependence of the cluster Green's function $\Gv'$ in Eq.~(\ref{Potthoff4}).
In other words, short-range correlations (within the cluster) are treated exactly.
\section{The dynamical Hartree approximation}\label{sec:hartree}
The VCA approximation as summarized above only applies to systems with on-site interactions, since the Hamiltonians $H$ and $H'$ must differ by one-body terms only, i.e., they must have the same interaction part.
This is not true if extended interactions are present, as they are partially truncated when the lattice is tiled into clusters.
To treat the extended Hubbard model, one must apply further approximations.
For instance, we can apply a Hartree (or mean-field) decomposition on the extended interactions that straddle different clusters, while interactions (local or extended) within each cluster are treated exactly.
This is called the dynamical Hartree approximation (DHA) and has been used in Refs~\cite{senechal_resilience_2013, faye2015} in order to assess the effect of extended interactions on strongly-correlated superconductivity.
We will explain this approach in this section.
Let us consider a Hamiltonian of the form
\begin{equation}\label{eq:Hubbard}
H=H_0(\tv) + \frac12\sum_{i,j}V_{ij} n_i n_j
\end{equation}
where $i,j$ are compound indices for lattice site and orbital, $n_{i\s}$ is the number of electrons of spin $\s$ on site/orbital $i$, and $n_i=n_{i\up}+n_{i\dn}$ (the index $i$ is a composite of honeycomb site $\rv$ and layer $\ell$ indices as used in Sect.~\ref{sec:model}, or of Bravais lattice site $\Rv$ and orbital index $a$ used in Sect.~\ref{subsec:strong}).
The factor $\frac12$ in the last term comes from the independent sums on $i$ and $j$ rather than a sum over pairs $(i,j)$.
In the dynamical Hartree approximation, the extended interactions in the model Hamiltonian \eqref{eq:Hubbard} are replaced by
\begin{equation}\label{eq:hartree}
\frac12\sum_{i,j} V_{ij}^\mathrm{c} n_i n_j +
\frac12\sum_{i,j} V_{ij}^\mathrm{ic} (\bar n_i n_j + n_i\bar n_j - \bar n_i \bar n_j)
\end{equation}
where $V_{ij}^\mathrm{c}$ denotes the extended interactions between orbitals belonging to the same cluster, whereas
$V_{ij}^\mathrm{ic}$ denotes those between orbitals of different clusters.
Here $\bar n_i$ is a mean-field, presumably the average of $n_i$, but not necessarily, as we will see below.
Both the first term ($\hat V^\mathrm{c}$) and the second term ($\hat V^\mathrm{ic}$), which is a one-body operator, are part of the lattice Hamiltonian $H$ and of the VCA reference Hamiltonian $H'$.
Let us express the index $i$ as a cluster index $c$ and a site-within-cluster index $\a$.
Then Eq.~\eqref{eq:hartree} can be expressed as
\begin{equation}
\frac12\sum_{c,\a,\b} \tilde V_{\a\b}^\mathrm{c} n_{c,\a} n_{c,\b} +
\frac12\sum_{c, \a,\b} \tilde V_{\a\b}^\mathrm{ic} (\bar n_\a n_{c,\b} + n_{c,\a}\bar n_\b - \bar n_\a \bar n_\b)
\end{equation}
where we have assumed that the mean fields $\bar n_i$ are the same on all clusters, i.e., they have minimally the periodicity of the superlattice, hence $\bar n_i=\bar n_\a$.
We have consequently replaced the large, $N\times N$ and block-diagonal matrix $V_{ij}^\mathrm{c}$
by a small, $N_c\times N_c$ matrix $\tilde V_{\a\b}^\mathrm{c}$, and we have likewise ``folded'' the large $N\times N$ matrix $V_{ij}^\mathrm{ic}$ into the $N_c\times N_c$ matrix $\tilde V_{\a\b}^\mathrm{ic}$.
In order to make this last point clearer, let us consider the simple example of a one-dimensional lattice with nearest-neighbor interaction $v$, tiled with 3-site clusters.
The interaction Hamiltonian
\begin{equation}
H_{\rm int} = v\sum_{i=0}^N n_i n_{i+1}
\end{equation}
would lead to the following $3\times3$ interaction matrices:
\begin{equation}
\tilde V^\mathrm{c} = v\begin{pmatrix}0 & 1 & 0\\ 1 & 0 & 1\\ 0 & 1 & 0 \end{pmatrix}
\qquad
\tilde V^\mathrm{ic} = v\begin{pmatrix}0 & 0 & 1\\ 0 & 0 & 0\\ 1 & 0 & 0 \end{pmatrix}
\end{equation}
In practice, the symmetric matrix $\tilde V^\mathrm{ic}_{\a\b}$ is diagonalized and the mean-field inter-cluster interaction is expressed in terms of eigenoperators $m_\mu$:
\begin{equation}
\hat V^\mathrm{ic} = \sum_\mu D_\mu \left[ \bar m_\mu m_\mu - \frac12\bar m_\mu^2 \right]
\end{equation}
For instance, in the above simple one-dimensional problem, these eigenoperators $m_\mu$ and their corresponding eigenvalues $D_\mu$ are
\begin{align}
D_1 &= -v & m_1 &= (n_1 - n_3)/\sqrt2 \\
D_2 &= \phantom{-}0 & m_2 &= n_2 \\
D_3 &= \phantom{-}v & m_3 &= (n_1 + n_3)/\sqrt2
\end{align}
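These eigenvalues and eigenoperators can be verified directly by diagonalizing the $3\times3$ matrix $\tilde V^\mathrm{ic}$; a short check (with $v=1$) might read:

```python
import numpy as np

v = 1.0
Vic = v * np.array([[0.0, 0.0, 1.0],
                    [0.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0]])
D, M = np.linalg.eigh(Vic)
# Eigenvalues in ascending order: D = [-v, 0, +v].
# Columns of M (up to a sign): (n1 - n3)/sqrt(2), n2, (n1 + n3)/sqrt(2).
```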
The mean fields $\bar n_i$ are determined either by applying (i) self-consistency or (ii) a variational method.
In the case of ordinary mean-field theory, in which the mean-field Hamiltonian is entirely free of interactions, these two approaches
are identical.
In the present case, where the mean-field Hamiltonian also contains interactions treated exactly within a cluster, self-consistency does not necessarily yield the same solution as energy minimization.
In the first case, the assignment $\bar n_i \gets \langle n_i\rangle$ would be used to iteratively improve on the value of $\bar n_i$ until convergence. In the second case, one could treat $\bar n_i$ like any other Weiss field in the VCA approach, except that $\bar n_i$ is not defined only on the cluster, but on the whole lattice. We will follow the latter approach below.
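A minimal sketch of the self-consistent route (i), for a hypothetical single orbital whose level is shifted by the inter-cluster mean field and whose average occupation is taken to be a Fermi function, could be:

```python
import numpy as np

# Toy self-consistency loop: eps_eff = eps + v_ic * n_bar, and <n> given by
# a Fermi function. All numerical values are illustrative, not from the model.
def occupation(n_bar, eps=-0.2, v_ic=0.5, mu=0.0, T=0.1):
    return 1.0 / (1.0 + np.exp((eps + v_ic * n_bar - mu) / T))

n_bar = 0.5
for _ in range(200):
    n_new = occupation(n_bar)
    if abs(n_new - n_bar) < 1e-10:
        break                          # self-consistency reached
    n_bar = 0.5 * (n_bar + n_new)      # damped update for stability
```

In a real cluster calculation $\langle n_i\rangle$ would come from the interacting cluster ground state rather than a Fermi function, which is precisely why routes (i) and (ii) can differ.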
\begin{figure}[htb]\centering
\includegraphics[width=0.5\hsize]{Fig/cluster.pdf}
\caption{12-site cluster used in this work. The extended interactions $V_0$ to $V_3$ are shown.
Different Wannier orbitals are shown as spheres of different colors. Orbitals $w_1$ and $w_4$ are located, say, on the bottom layer, whereas orbitals $w_2$ and $w_3$ are located on the top layer.
}
\label{fig:cluster}
\end{figure}
\setcounter{MaxMatrixCols}{12}
\begin{table}
\[\tilde V^\mathrm{ic} =
\begin{pmatrix}
0 & V_3 & 2V_2 & V_1 & 2V_2 & V_3 & 0 & V_3 & 2V_2 & V_1 & 2V_2 & V_3 \\
V_3 & 0 & V_3 & 2V_2 & V_1 & 2V_2 & V_3 & 0 & V_3 & 2V_2 & V_1 & 2V_2 \\
2V_2 & V_3 & 0 & V_3 & 2V_2 & V_1 & 2V_2 & V_3 & 0 & V_3 & 2V_2 & V_1 \\
V_1 & 2V_2 & V_3 & 0 & V_3 & 2V_2 & V_1 & 2V_2 & V_3 & 0 & V_3 & 2V_2 \\
2V_2 & V_1 & 2V_2 & V_3 & 0 & V_3 & 2V_2 & V_1 & 2V_2 & V_3 & 0 & V_3 \\
V_3 & 2V_2 & V_1 & 2V_2 & V_3 & 0 & V_3 & 2V_2 & V_1 & 2V_2 & V_3 & 0 \\
0 & V_3 & 2V_2 & V_1 & 2V_2 & V_3 & 0 & V_3 & 2V_2 & V_1 & 2V_2 & V_3 \\
V_3 & 0 & V_3 & 2V_2 & V_1 & 2V_2 & V_3 & 0 & V_3 & 2V_2 & V_1 & 2V_2 \\
2V_2 & V_3 & 0 & V_3 & 2V_2 & V_1 & 2V_2 & V_3 & 0 & V_3 & 2V_2 & V_1 \\
V_1 & 2V_2 & V_3 & 0 & V_3 & 2V_2 & V_1 & 2V_2 & V_3 & 0 & V_3 & 2V_2 \\
2V_2 & V_1 & 2V_2 & V_3 & 0 & V_3 & 2V_2 & V_1 & 2V_2 & V_3 & 0 & V_3 \\
V_3 & 2V_2 & V_1 & 2V_2 & V_3 & 0 & V_3 & 2V_2 & V_1 & 2V_2 & V_3 & 0
\end{pmatrix}\]
\begin{center}
\includegraphics[width=1.0\hsize]{Fig/eigen.pdf}
\end{center}
\caption{Inter-cluster coupling matrix for the 12-site cluster used in this work.
The numbering of sites is illustrated on Fig.~\ref{fig:cluster}. Bottom: eigenvalues $D_\mu$ and corresponding eigenvectors (or eigenoperators) $m_\mu$ of this matrix. The eigenoperators are shown graphically as a function of site on the 12-site cluster: blue means 1 and red $-1$.
The eigenvalues are also shown as a function of the on-site repulsion $U$ when the constraints~\eqref{eq:values} are applied.
\label{table:Vic}}
\end{table}
\begin{figure}[htbp]\centering
\includegraphics[width=\hsize]{Fig/sec6_density.pdf}
\caption{Electronic density vs chemical potential $\mu$ for different interaction strengths at quarter filling.
The presence of a plateau (in red) is the signature of an insulating state, and the width of the plateau is the magnitude of the gap.
The insulator-to-metal transition occurs between $U=1.5$~meV and $U=2$~meV.
}
\label{fig:sec6_density}
\end{figure}
\begin{figure}[htbp]\centering
\includegraphics[width=\hsize]{Fig/omega_cdw.pdf}
\caption{Left panel : The Potthoff functional $\Omega$ as a function of the charge-density-wave Weiss field $\bar m_1$ at quarter-filling. Right panel: the same, for the charge-density-wave Weiss field $\bar m_3$.
See Table~\ref{table:Vic} for an illustration of the density-waves $m_1$ and $m_3$.
The symmetric state (no charge density wave) $\bar m_{1,3}=0$ is stable.
}
\label{fig:omega_cdw3}
\end{figure}
\section{The normal state at quarter filling}
In this work we use a 12-site cluster containing 3 unit cells of the low-energy model.
It is made of two superimposed hexagonal clusters, as illustrated on Fig.~\ref{fig:cluster}.
On that figure the various extended interactions $V_0$ to $V_3$ are indicated.
The three wavevectors of the reciprocal cluster are $\mathbf\Gamma=\mathbf0$, $\Kv$ and $\Kv'$.
The $12\times12$ matrix of inter-cluster interactions is given in Table~\ref{table:Vic} and the eigenoperators $m_\mu$ used in the dynamical Hartree approximation are illustrated in the lower part of the same table.
We begin by investigating the normal state at quarter filling, for several values of the interaction $U$, all the extended interactions following from $U$ according to Eq.~\eqref{eq:values}.
We will start by applying VCA to detect the insulating state, assuming that no charge order is present.
To do this, we treat the cluster chemical potential $\mu_c$ as the sole variational parameter in the VCA procedure.
We do not take into account inter-cluster interactions, i.e., we do not apply the Hartree approximation described in Sect.~\ref{sec:hartree}.
Indeed, all the sites of the 12-site cluster are equivalent in the absence of charge order, meaning that the relevant (normalized) eigenvector of the inter-cluster interaction matrix $V^\mathrm{ic}$ is
\begin{equation}
m_0 = \frac1{2\sqrt3}\sum_{i=1}^{12}n_i
\end{equation}
Therefore, adding the corresponding mean-field $\bar m_0 m_0$ to the lattice Hamiltonian would simply shift the chemical potential by $-\bar m_0$, and leave the variational space used in VCA unchanged.
This would therefore not help us in determining whether there is a gap or not.
The signature of the Mott gap will be a plateau in the relation between $\mu$ and the density $n$.
This is shown in Fig.~\ref{fig:sec6_density} for a few values of the interaction $U$.
Using the cluster chemical potential $\mu_c$ as a variational parameter makes the plateaux very sharp, whereas not using VCA, i.e., using simple cluster perturbation theory (CPT), would make the plateaux softer, thereby making the transition to the metallic state more difficult to detect.
In the case shown, the metal-insulator transition clearly occurs between $U=1.5$~meV and $U=2$~meV.
This Mott transition is essentially caused by extended interactions.
The question then arises as to the nature of the insulating state at quarter filling: is there a charge density wave or not?
As shown in Sect.~\ref{subsec:strong}, the charge fluctuations are expected to be large, because a whole array of charge configurations does not affect the energy in the strong-coupling limit when the extended interactions follow Eq.~\eqref{eq:values}.
We do expect, on intuitive grounds, that the kinetic energy terms would be unfavorable to charge order.
Nevertheless, in order to probe the possible existence of charge order, we will apply Hartree inter-cluster mean-field theory, as described in Sect.~\ref{sec:hartree}. To maximize the chances of detecting an instability, we will probe one of the eigenoperators with the lowest (negative) eigenvalues in Table~\ref{table:Vic}, namely one of those with $D=-2$:
\begin{equation}
m_{3} = \frac1{2\sqrt2}\left(n_1 + n_2 - n_4 - n_5 + n_7 + n_9 - n_{10} - n_{11} \right)
\end{equation}
We must then optimize the Potthoff functional as a function of the mean field $\bar m_3$, in addition to using $\mu_c$ as a variational parameter. On the right panel of Fig.~\ref{fig:omega_cdw3} we show the Potthoff functional $\Omega$ as a function of $\bar m_3$ for a value of $\mu_c$ that optimizes $\Omega$ at a value of $\mu$ associated with quarter filling, for a few values of the interaction $U$. This illustrates the absence of a nontrivial solution for $\bar m_3$: the value of the mean-field parameter $\bar m_3$ that minimizes the energy is indeed zero.
This shows that, within this inter-cluster mean-field approximation and for these values of $U$, there is no charge order of this type ($m_3$ or, equivalently, $m_4$) at quarter-filling.
We perform the same computation for the $m_{1}$ eigenoperator:
\begin{equation}
m_{1} = \frac1{2\sqrt2}\left(n_1 - n_2 + n_4 - n_5 + n_7 - n_9 + n_{10} - n_{11} \right)
\end{equation}
and find similar results, as shown on the left panel of Fig.~\ref{fig:omega_cdw3}.
Therefore, for the values of $U$ probed, the quarter-filled state appears to be a pure, uniform Mott insulator, driven by extended interactions.
\begin{figure}[htbp]\centering
\includegraphics[width=1.0\hsize]{Fig/sec12_density.pdf}
\caption{Electronic density vs chemical potential $\mu$ for different interaction strengths at half filling, similar
to Fig.~\ref{fig:sec6_density}.
The presence of a plateau (in red) is the signature of an insulating state, and the width of the plateau is the magnitude of the gap.
The insulator-to-metal transition occurs between $U=0.1$~meV and $U=0.25$~meV.
}
\label{fig:sec12_density}
\end{figure}
\begin{figure}[htbp]\centering
\includegraphics[scale=0.9]{Fig/omega_M.pdf}
\caption{Potthoff functional vs the antiferromagnetic Weiss field $M$ for several values of $a=3V_1/2U$ and $U=3$~meV, at half-filling.
The case $a=1$ corresponds to the constraints~\eqref{eq:values}, and smaller values of $a$ just weaken the extended interactions compared to the on-site interaction. The value of $\Omega$ at $M=0$ is subtracted for clarity.
Antiferromagnetism appears only below $a=0.7$, i.e., not for the extended interactions constrained by Eq.~\eqref{eq:values}.
}
\label{fig:omega_M}
\end{figure}
\section{The normal state at half filling and antiferromagnetism}
The insulating state at half-filling is revealed in the same way as at quarter-filling, by applying the VCA with $\mu_c$ as a variational parameter.
The results are shown in Fig.~\ref{fig:sec12_density}, where it appears that the Mott transition occurs between $U=0.1$~meV and $U=0.25$~meV, i.e., at a much lower value of the interaction than at quarter filling.
We will not probe charge order at half-filling, as an antiferromagnetic state is more likely to occur.
The Weiss field used to probe antiferromagnetism is
\begin{equation}
\hat M = M\sum_{i=1}^{12} (-1)^i (n_{i\uparrow}-n_{i\downarrow})
\end{equation}
Fig.~\ref{fig:omega_M} shows the Potthoff functional as a function of $M$ for different values of the extended interactions compared to the on-site repulsion $U=3$~meV.
These different values are characterized by the ratio $a=3V_1/2U$, which is unity when the extended interactions obey the constraints \eqref{eq:values}.
Otherwise, the extended interactions $V_{0-3}$ have the same ratios between them as in Eq.~\eqref{eq:values}.
Lower values of $a$ correspond to weaker extended interactions (compared to $U$).
From that figure we see that, even at a relatively strong $U$ (the Mott transition occurs at a much lower value of $U$), antiferromagnetism is not present at half-filling for the nominal values of the extended interactions defined in Eq.~\eqref{eq:values}. Upon lowering these interactions, antiferromagnetism appears.
Hence the half-filled state should be a true Mott insulator, not an antiferromagnetic insulator.
This is relatively easy to understand in the strong-coupling limit, when Eq.~\eqref{eq:values} holds. The low-energy manifold at half-filling in the absence of hopping terms is degenerate not only because of spin, but also because of charge motion: if there is exactly one electron on each site, hopping an electron to a neighboring site does not change the interaction energy, and thus the usual strong-coupling perturbation theory argument leading to an effective Heisenberg model at half-filling and large $U$ no longer holds.
\section{Conclusion}
We have probed the insulating states at quarter- and half-filling in a tight-binding model for magic angle twisted bilayer graphene, augmented with local and extended density-density interactions.
For a wide range of interactions obeying the constraints \eqref{eq:values}, we have detected the Mott gap using the variational cluster approximation (VCA) with a 12-site cluster and located the Mott transition between $U=1.5$~meV and $U=2$~meV at quarter filling, and between $U=0.1$~meV and $0.25$~meV at half-filling.
In addition, we have investigated the possibility of charge order at quarter-filling using the VCA and an inter-cluster Hartree approximation for the extended interactions, and concluded that it does not arise.
Lastly, we have probed antiferromagnetism at half-filling and concluded likewise that it does not arise when the extended interactions obey the relations \eqref{eq:values}.
It looks therefore plausible that the correlated insulating states observed at these filling ratios are genuine Mott insulators and not gapped ordered states.
\paragraph{Funding information}
DS acknowledges support by the Natural Sciences and Engineering Research Council of Canada (NSERC) under grant RGPIN-2020-05060. Computational resources were provided by Compute Canada and Calcul Québec.
\section{Introduction}
Black holes (BHs) inhabit the center of almost all massive galaxies and span a wide range of masses, from $10^6~M_\odot$ to several $10^9~M_\odot$, depending strongly on the properties of the host galaxy \citep{Kormendy1993,Magorrian1998,Gebhardt2000,Ferrarese2000,Tremaine2002,Kormendy2004,Greene2005,Greene2006,McConnell2011}.\\ A small fraction of galaxies ($\sim$1\%) show nuclear activity: they are called active galactic nuclei (AGN). AGN produce an enormous amount of energy in a tiny volume ($\ll$pc) via gravitational accretion of matter onto the central BH.
About 10\% of AGN produce strong relativistic jets which emit non-thermal radiation over the (almost) whole electromagnetic spectrum, thus they are defined radio-loud AGN or {\it jetted} AGN (see \citealt{Padovani2016}).
Radio-loud AGN with strongly boosted emission associated to jets and pointing towards the observer's line-of-sight are called blazars. Conversely, radio-loud AGN whose jets are oriented close to the plane of the sky are called Radio Galaxies (RGs).\\
RGs were classified for the first time by \cite{Fanaroff1974} following radio morphological criteria. Indeed, Fanaroff-Riley type one objects (FRI) show the highest surface brightness near the core along the jets (i.e. edge-darkened sources), while type two (FRII) objects are characterized by higher surface brightness at the lobe extremities, far from the nucleus (i.e. edge-brightened). There are some cases for which the morphological classification is ambiguous, e.g. they show FRII-like jets on one side and FRI-like on the other: the so-called hybrid double sources \citep{Gopal-Krishna2000}. \\
The FRI-FRII morphological dichotomy quite neatly translates into a separation in terms of extended radio power: sources with radio luminosity at 178~MHz below $10^{26}$ W~Hz$^{-1}$ \citep{Fanaroff1974,Tadhunter2016} are generally FRI, while sources with luminosity above this threshold typically belong to the FRII class.\\
RGs can be also classified on the basis of their optical spectra, accordingly to the emission lines produced in the Narrow Line Regions (hereafter NLR; \citealt{Laing1994,Buttiglione2009,Buttiglione2010,Buttiglione2011}), in High-Excitation or Low-Excitation Radio Galaxies (HERGs and LERGs, respectively). The main diagnostic of high excitation is the [OIII]$\lambda$5007 line luminosity. Conversely, the standard spectroscopic indicators of low excitation are [NII]$\lambda$6584, [SII]$\lambda$6716 and [OI]$\lambda$6364 (\citealt{Buttiglione2010}). To take into account simultaneously both high- and low-excitation diagnostics, \cite{Buttiglione2010} introduced the excitation index (EI).\footnote{The excitation index can be defined as:\\ {\tiny $EI=log~[OIII]/H_{\beta}~-~1/3~(log~[NII]/H_{\alpha}~+~log~[SII]/H_{\alpha}~+~log~[OI]/H_{\alpha})$ .}}
Considering that different excitation modes of the NLR are associated to different accretion rates \citep{Gendre2013,Heckman2014}, this spectroscopic classification reflects the accretion regime at work in the central regions of the AGN. In particular, HERGs accrete efficiently (quasar-mode, L/L$_{\rm Edd}>$0.1), i.e. the potential energy of the gas accreted by the super-massive black hole (SMBH) is efficiently converted into radiation \citep{adaf}. Conversely, LERGs are characterized by low accretion rates typical of radiatively inefficient hot accretion flows (jet-mode, L/L$_{\rm Edd}$ $\leq$0.01-0.1), and the jet carries the bulk of the AGN energy output \citep{adaf, Heckman2014}.\\
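Numerically, the excitation index defined above is a simple combination of the line ratios; a small helper (with purely hypothetical ratio values, and assuming base-10 logarithms) illustrates the computation:

```python
import numpy as np

def excitation_index(oiii_hb, nii_ha, sii_ha, oi_ha):
    """EI from the NLR line ratios, following the definition quoted above."""
    return np.log10(oiii_hb) - (np.log10(nii_ha)
                                + np.log10(sii_ha)
                                + np.log10(oi_ha)) / 3.0

# Hypothetical ratios for a high-excitation source:
# [OIII]/Hbeta = 10 and all low-excitation ratios equal to 1 give EI = 1.0
ei = excitation_index(10.0, 1.0, 1.0, 1.0)
```

Larger EI thus corresponds to higher excitation, so a single threshold on EI can separate HERGs from LERGs.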
Over the years, studies on radio-loud AGN were fundamental to investigate the jet-launching mechanism and the eventual connection between ejection of relativistic jets and accretion of material onto the central BH.
Indeed, jets are produced close to the BH and their power is predicted to depend on the BH properties (mass and spin) and on the magnetic field strength: in \cite{Blandford1977} model, the jet power ($P_{jet}$) is proportional to $(aMB)^2$, where $a$ is the BH spin, $M$ its mass and $B$ is the magnetic field value at the BH horizon \citep{Ghisellini2014}.\\
In any case, the magnetic field plays in general a major role in channeling power from the BH \citep{Blandford1977}, or from the disk \citep{Blandford1982}, into the jet \citep{Maraschi2003}.
In both scenarios, if the magnetic field strength depends, as generally assumed, on the accretion rate (see e.g. \citealt{Ghisellini2014}), a relation between the accretion rate and the jet power is expected. Various observational studies seem to confirm the link between accretion and ejection (e.g.: \citealt{Rawlings1991,Celotti1997,Willott1999,Maraschi2003,Ghisellini2010,Ghisellini2014}).\\
In the general picture, FRII radio galaxies host an efficient accretion disk \citep{Shakura1973}, while FRIs are characterized by hot inefficient accretion flows (ADAF-like; \citealt{Narayan1994,Abramowicz1995}).
However, there is a group of FRII sources that does not fit into this framework. They exhibit powerful extended radio structures but inefficient accretion, as attested by their optical spectra typical of LERGs. This kind of object is not so exotic: indeed, about 25\% of the sources belonging to the 3CR catalog at $z<0.3$ and having both radio and optical classifications are FRII-LERGs \citep{Buttiglione2009,Buttiglione2010,Buttiglione2011}. \\
Therefore, given their peculiar nature and not negligible number, FRII-LERGs constitute a particularly relevant population for the comprehension of the role of the central engine in powering RG jets and in shaping the extended radio morphology.\\
Within this context, the X-ray band is a fundamental tool to probe the processes at work on different scales from sub-pc up to several hundreds of kpc.
To this aim, we performed a systematic and homogeneous X-ray analysis of all FRIIs belonging to the 3CR sample \citep{Bennett1962} at $z<0.3$, one of the best studied radio catalogs in all energy bands.
For the first time FRII-LERGs are explored as a separate class. Their X-ray and multi-frequency properties are compared to those of FRII-HERGs, in order to understand their nature. For example, they could be FRII-HERGs seen through a thicker obscuring screen or they could have central engines in a ``switching-off" phase, in which the standard accretion flow becomes inefficient (ADAF-like).
Finally, also the role played by the environment in triggering the AGN and the link between environment and jet power are explored. \\
This work is organized as follows: in \S 2 we describe the sample of sources. In \S 3 we report on the X-ray data reduction and analysis, and discuss our results in \S 4. In \S 5 we summarize our conclusions.
Throughout the paper we adopt: $H_0=71$~km~s$^{-1}$~Mpc$^{-1}$, $\Omega_\Lambda=0.73$, $\Omega_m=0.27$ \citep{Komatsu2009}.
\section{The analyzed sample}
The 3CR sources at $z<0.3$ classified both in the optical (HERGs vs LERGs) and radio bands (FRIs vs FRIIs) are 79. Following \cite{Buttiglione2009,Buttiglione2010,Buttiglione2011}, who provided the classification in both bands, radio galaxies are divided into:
\begin{itemize}
\item 30 FRII-HERGs and 17 FRII-BLRGs\footnote{Broad-Line Radio Galaxies (BLRGs) are classified as HERGs according to their NLR emission. They differ from HERGs by the presence of broad permitted lines in the optical spectrum, coming from the broad line regions (BLR). Therefore, HERGs are classified as type 2 AGN, i.e. edge-on, while BLRGs are type 1s, i.e. face-on.};
\item 19 FRII-LERGs;
\item 13 FRIs.
\end{itemize}
\noindent
\begin{figure}
\centering
\includegraphics[width=7cm] {LEGACY/img/l178.png}
\caption{Distribution of total radio luminosity at 178~MHz in units of erg~s$^{-1}$~Hz$^{-1}$ from \protect\cite{Spinrad1985}: FRII-HERGs/BLRGs are in blue, FRII-LERGs are in green and FRIs are in red. While a separation between FRIs and FRIIs is clear, FRII-HERGs/BLRGs and FRII-LERGs completely overlap.}\label{Pradio}
\end{figure}
From the radio point of view, the majority of the sources are powerful FRIIs (66), distributed over the entire redshift range, while FRIs (13) represent a small fraction of the total ($\sim16\%$) and cluster below $z<0.05$.
This is not surprising since low-power RGs are known to be mainly observed at lower redshifts \citep{Laing1983,Spinrad1985,Wall1985,Morganti1993,Burgess2006A,Burgess2006B}.
From the optical point of view, the numbers of HERGs/BLRGs (47) and LERGs (32) are not very different, implying that about half of the local radio sources are characterized by a low accretion regime, independently of the radio classification. Indeed, FRII-LERGs and FRII-HERGs/BLRGs span similar values of radio luminosity at 178 MHz, as shown in Figure \ref{Pradio}. Following the prescriptions of \cite{Willott1999}, if we assume $L_{\rm 178~MHz}$ as a crude signature of the average jet power over time, FRII-LERGs do not fit in the standard picture linking powerful jets to efficient accretion flows. Other ingredients have to be considered to explain the observations.
\section{X-ray Data Analysis}
\label{data-reduction}
We performed a systematic X-ray analysis of all the FRIIs of the 3CR sample at $z<0.3$, exploiting archival data from the \textit{Chandra} satellite. This is the best-suited telescope for this kind of analysis, given its good angular resolution ($\approx 0.5''$ in the observational energy band) and the availability of all the sources of the analyzed sample in the public archive\footnote{https://cda.harvard.edu/chaser/ .}.
Several of them belong to the 3C \textit{Chandra} legacy survey \citep{Massaro2010,Massaro2012}. However, \citeauthor{Massaro2010} adopted a flux sky map method to present the data, while
here we follow a different approach based on the direct fit of the data.
We consider FRII-LERGs as key targets of our work (main sample). We also analyze X-ray data of FRII-HERGs/BLRGs as a ``control" sample.
The goal of our analysis is to investigate the nuclear activity of sources down to the innermost regions (sub-pc scales) both in terms of gas presence ($N_{\rm HX}$) and X-ray luminosity.
In Table \ref{tab-log} the observation log is reported.
All the sources were pointed by \textit{Chandra}.
When more than one observation was available, the data were combined in order to achieve better statistics.
We also analyzed all the FRII-LERGs with data in the XMM-\textit{Newton} archive. In three cases, i.e. 3C~349, 3C~353 and 3C~460, we used XMM-\textit{Newton} data, since the larger effective area guaranteed better constraints on the spectral parameters.
\textit{Chandra} data were reprocessed using the software {\sc CIAO} (Chandra Interactive Analysis of Observations), version 4.10 with calibration database CALDB version 4.8.1 and following standard procedures.\\
A preliminary check of the images was necessary to investigate the presence of extended emission. When a source was extended, two images were produced: a soft one (0.3-4~keV) and a hard one (4-7~keV). This approach helps to better isolate the point-like emission of the core and model its spectrum.
Subsequently, VLA radio contours at 1.4~GHz or 5~GHz (with a spatial resolution of a few arcsec) were superimposed on the hard X-ray image to properly identify the peak of the nuclear emission and verify the presence of other features such as lobes, jets, and knots along the jet. Given the coincidence between the radio core and the X-ray peak in all the analyzed images, no astrometric correction was applied.
The core spectrum was usually extracted from circular regions with a radius between 1.5'' and 2.5'' (depending on the presence or absence of extended emission), in order to collect more than 90$\%$ of the photons. The background was extracted from a clean circular region on the same CCD as the source, avoiding any contamination from field sources or from the source itself. \\
Spectra were grouped to a minimum of 20 counts per bin in order to adopt the $\chi^2$ statistics. When this was not possible, the C-statistics was applied \citep{Cash1979} and spectra were grouped to at least one count per bin.
For the sources 3C~136.1, 3C~153 and 3C~430 the small number of counts (about 10 counts over the entire spectrum) prevented any modeling. Fluxes and luminosities were estimated using the \textit{Chandra} Proposal Planning Toolkit (PIMMS)\footnote{\url{http://cxc.harvard.edu/toolkit/pimms.jsp} .} and assuming a simple power-law model with $\Gamma=1.7$ \citep{Grandi2006}.
In 3C~196.1, 3C~288, 3C~310 and 3C~388 the AGN emission is completely overwhelmed by the cluster, thus precluding any nuclear study.
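The PIMMS estimates above amount to converting a power-law normalization into a band-limited flux. The sketch below illustrates that conversion analytically for a photon power law $N(E)=K\,E^{-\Gamma}$; the function name and the example normalization are ours, chosen for illustration only.

```python
import math

KEV_TO_ERG = 1.602e-9  # 1 keV in erg

def powerlaw_flux(norm, gamma, e_min=2.0, e_max=10.0):
    """Energy flux (erg/cm^2/s) of a photon power law
    N(E) = norm * E**(-gamma)  [photons/cm^2/s/keV]
    integrated between e_min and e_max (keV):
    F = norm * int E^(1-gamma) dE."""
    if abs(gamma - 2.0) < 1e-9:
        # gamma = 2 is the logarithmic limit of the integral.
        integral = norm * math.log(e_max / e_min)
    else:
        p = 2.0 - gamma
        integral = norm * (e_max ** p - e_min ** p) / p
    return integral * KEV_TO_ERG
```

For instance, with the canonical $\Gamma=1.7$ a normalization of $10^{-4}$~photons~cm$^{-2}$~s$^{-1}$~keV$^{-1}$ at 1~keV corresponds to a 2-10~keV flux of a few $10^{-13}$~erg~cm$^{-2}$~s$^{-1}$.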
\begin{table}
\footnotesize
\setlength{\arrayrulewidth}{0.3mm}
\setlength{\tabcolsep}{6pt}
\renewcommand{\arraystretch}{0.8}
\begin{center}
\begin{tabular}{p{0.8cm}p{0.06cm}p{1.2cm}p{1.4cm}p{1cm}p{0.5cm}}
\hline
\multicolumn{1}{c}{\textbf{3CR}} & \multicolumn{1}{c}{\textbf{Teles.}} & \multicolumn{1}{c}{\textbf{obsID}} & \multicolumn{1}{c}{\textbf{date}} & \multicolumn{1}{c}{\textbf{CCD}} & \multicolumn{1}{c}{\bm{$t_{exp}$} \textbf{(s)}}\\
\multicolumn{1}{c}{(1)}&\multicolumn{1}{c}{(2)}&\multicolumn{1}{c}{(3)}&\multicolumn{1}{c}{(4)}&\multicolumn{1}{c}{(5)}&\multicolumn{1}{c}{(6)}\\\hline
\multicolumn{6}{c}{\textbf{FRII-LERGs - Main sample}}\\\hline
3C88$^{}$&C&9391&2008-06-30&ACIS-I&11270\\
$^{}$&C&11751&2009-10-14&ACIS-S&20180\\
$^{}$&C&11977&2009-10-06&ACIS-S&50280\\
$^{}$&C&12007&2009-10-15&ACIS-S&35080\\
3C132$^{}$&C&9329&2008-03-26&ACIS-S&7790\\
3C153$^{}$&C&9302&2007-12-07&ACIS-S&8170\\
3C165$^{}$&C&9303&2008-02-02&ACIS-S&7770\\
3C166$^{}$&C&12727&2010-11-29&ACIS-S&8050\\
3C173.1$^{}$&C&3053&2002-11-06&ACIS-S&24310\\
3C196.1$^{}$&C&12729&2011-02-11&ACIS-S&8050\\
3C213.1$^{}$&C&9307&2008-04-14&ACIS-S&8170\\
3C236$^{}$&C&10249&2009-01-14&ACIS-I&41040\\
3C288$^{}$&C&9275&2008-04-13&ACIS-S&40150\\
3C310$^{}$&C&11845&2010-04-09&ACIS-S&58320\\
3C326$^{}$&C&10908&2009-05-10&ACIS-I&27880\\
3C349$^{}$&X&0501620301&2007-08-07& EPIC/pn &14863\\
$^{}$&X&0501621601&2007-10-03& EPIC/pn &15113\\
3C353$^{}$&X&0400930101&2006-08-25& EPIC/pn &44264\\
$^{}$&X&0400930201&2007-02-17& EPIC/pn &10916\\
3C357$^{}$&C&12738&2010-10-31&ACIS-S&8050\\
3C388$^{}$&C&5295&2004-01-29&ACIS-I&31120\\
3C401$^{}$&C&4370&2002-09-21&ACIS-S&25170\\
&C&3083&2002-09-20&ACIS-S&22960\\
3C430$^{}$&C&12744&2011-11-14&ACIS-S&8050\\
3C460$^{}$&X&0675400101&2011-12-24& EPIC/pn&48744\\\hline
\multicolumn{6}{c}{\textbf{FRII-HERGs/BLRGs - Control sample}}\\\hline
3C20&C&9294&2007-12-31&ACIS-S&8040\\
3C33&C&7200&2005-11-12&ACIS-S&20180\\
3C61.1&C&9297&2008-12-05&ACIS-S&8160\\
3C79&C&12723&2010-11-01&ACIS-S&7790\\
3C98&C&10234&2008-12-24&ACIS-I&32130\\
3C105&C&9299&2007-12-17&ACIS-S&8180\\
3C133&C&9300&2008-04-07&ACIS-S&8140\\
3C135&C&9301&2008-01-10&ACIS-S&8040\\
3C136.1&C&9326&2008-01-10&ACIS-S&10040\\
3C171&C&10303&2009-01-08&ACIS-S&60220\\
&C&9304&2007-12-22&ACIS-S&8040\\
3C180&C&12728&2010-12-24&ACIS-S&8060\\
3C184.1 &C&9305 & 2008-03-27 & ACIS-S & 8130\\
3C192 &C& 9270& 2007-12-18& ACIS-S&10150\\
&C&19496&2017-12-18& ACIS-S&70110\\
&C&20888&2017-12-21& ACIS-S&10070\\
&C&20889&2017-12-21& ACIS-S&33110\\
&C&20890&2017-12-24& ACIS-S&21410\\
&C&20891&2017-12-22& ACIS-S&35760\\
3C223&C&12731&2012-01-07&ACIS-S&8050\\
3C223.1&C&9308&2008-01-16&ACIS-S&8030\\
3C234&C&12732&2011-01-19&ACIS-S&8050\\
3C277.3&C&11391& 2010-03-03&ACIS-S&25120\\
&C&15023& 2014-03-15&ACIS-I&44080\\
&C&15024& 2014-03-16&ACIS-I&20090\\
&C&16600& 2014-03-11&ACIS-I&98080\\
&C&16599& 2014-03-13&ACIS-I&29090\\
3C284&C&12735&2010-11-17&ACIS-S&8050\\
3C285&C&6911&2006-03-18&ACIS-S&40150\\
3C300&C&9311&2008-03-21&ACIS-S&8040\\
3C303.1&C&9312&2008-02-21&ACIS-S&7770\\
3C305&C&9330&2008-04-07&ACIS-S&8330\\
&C&12797&2011-01-03&ACIS-S&29040\\
&C&13211&2011-01-06&ACIS-S&29040\\
3C321&C&3138&2002-04-30&ACIS-S&47730\\
3C327&C&6841&2006-04-26&ACIS-S&40180\\
3C379.1&C&12739&2011-04-04&ACIS-S&8050\\
3C381&C&9317&2008-02-21&ACIS-S&8170\\
3C403&C&2968&2002-12-07&ACIS-S&50130\\
3C436&C&9318&2008-01-08&ACIS-S&8140\\
&C&12745&2011-05-27&ACIS-S&8060\\
3C452&C&2195&2001-08-21&ACIS-S&80950\\
3C456&C&12746&2011-01-17&ACIS-S&8050\\
3C458&C&12747&2010-10-10&ACIS-S&8050\\
3C459 &C&12734 & 2011-10-13 & ACIS-S & 8050\\
&C&16044 & 2014-10-12 & ACIS-S & 59960\\\hline
\end{tabular}
\normalsize
\caption{Observation log of FRII-LERGs (main sample) and FRII-HERGs/BLRGs (control sample). Column description: (1) 3CR name; (2) Telescope: C=\textit{Chandra} and X=XMM-\textit{Newton}; (3) Observation ID; (4) Start date of the observation; (5) Instrument used in the observation; (6) Total exposure time in seconds. All the sources are the target of the observation.}\label{tab-log}
\end{center}
\end{table}
XMM-\textit{Newton} data were reduced using the Scientific Analysis Software (SAS) version 16.1 together with the latest calibration files and following standard procedures. Throughout the paper results refer to EPIC/pn data, but all the EPIC instruments were checked.\\
Source and background spectra were extracted in 0.5-10 keV band from circular regions with radius varying between 20'' and 30'', depending on the source extension, in order to maximize the S/N ratio. In all cases, at least 80$\%$ of photons fell within the extraction region. The background was chosen in a circular region in the same CCD of the source, avoiding any contamination from field sources or from the source itself.\\
Spectra were grouped to a minimum of 20 counts per bin and the $\chi^{2}$ statistics was applied.
We checked for the presence of pile-up effects in each source (using the PIMMS software for \textit{Chandra} data and the {\sc EPATPLOT} task in the SAS for \textit{XMM-Newton} data). The pile-up was generally negligible ($<10\%$) or absent in FRII-LERGs, but turned out to be important in FRII-HERGs/BLRGs seen face-on (i.e. BLRGs). Indeed, we could perform a \textit{Chandra} spectral analysis for only two BLRGs, i.e. 3C~184.1 and 3C~459, for which the estimated pile-up was less than 10\%.
The spectral analysis was performed using XSPEC version 12.9.1 \citep{Arnaud1996}.
The energy range considered in the spectral fitting was 0.3-7 keV for \textit{Chandra} and 0.5-10 keV for XMM-\textit{Newton}. Errors reported are quoted at 90\% confidence for one parameter of interest \citep{Avni1976}.
\subsection{Spectral analysis}
\label{spectral-analysis}
An inspection of the X-ray images indicates that 6 out of 19 FRII-LERGs show strong extended emission (from several tens to hundreds of kpc) due to hot cluster gas (Figure \ref{fig-cluster}).
On the contrary, no FRII-HERG/BLRG shows cluster emission in the \textit{Chandra} images, although some of them show resolved emission on kpc scales.
At first, we considered as baseline model a single power-law absorbed by the Galactic column density (\cite{Kalberla2005}; {\sc PHABS}). When the power-law spectral slope was flatter than 1, an intrinsic absorption component ({\sc ZPHABS}) was added to the fit. Because of the poor statistics and/or the complexity of the emission, we were forced to fix the hard photon index ($\Gamma=1.7$) in 7 out of 19 FRII-LERGs and in 27 out of 32 FRII-HERGs/BLRGs (see Table \ref{tabellone}). When no XMM-\textit{Newton}/\textit{Chandra} information on the power-law spectral slope was available in the literature, the canonical value $\Gamma=1.7$ was adopted.
Nonetheless, we checked whether different values of the photon index could produce significant changes in the estimate of the column density. Even assuming different photon indices ($\Gamma=1.4$ and $\Gamma=2.0$), the column densities and the intrinsic luminosity remain consistent within the errors\footnote{For all the sources with fixed spectral slope, the N$_H$ and L$_X$ values do not change by more than $\approx$15\% and 40\%, respectively, when the power-law spectral slope varies from 1.4 to 2.0. As the uncertainties on the same quantities in Table \ref{tabellone} are above 70\%, we are confident that the $\Gamma=1.7$ assumption does not significantly affect our results.}.
If residuals were still present at soft energies, a second power-law or a thermal component ({\sc MEKAL}) was added to the fit. A second power-law is expected if the primary component is scattered by clouds of electrons above the torus. A thermal component is expected if the source is embedded in a gaseous environment, i.e. the hot corona of early-type galaxies \citep{Fabbiano1992} or the intergalactic medium. The MEKAL model could also roughly mimic features related to photoionized gas, given the limited energy resolution of CCD detectors. Therefore, after testing for the presence of collisional gas, if prominent photoionized features were still present in the soft X-ray spectrum, a fit with multiple narrow emission lines (Gaussian profiles) was tested.\\
In the hard spectrum, a Gaussian component ({\sc ZGAUSS}) was included if positive residuals were observed in the region of the iron K$\alpha$ line (5-7~keV).
Once the Fe~K$\alpha$ line was detected, the presence of a reflection component ({\sc PEXRAV}) was verified: in fact, this is expected when cold matter surrounding the nuclear engine reprocesses the primary X-ray radiation \citep{Lightman88}.
In this case, the cut-off energy and the angle between the normal to the disk and the observer were fixed to 100~keV and 30$^\circ$, respectively. The reflection component is modeled by the parameter $R=\Omega/2\pi$, corresponding to the
solid angle fraction of a neutral, plane parallel slab
illuminated by the continuum power-law ({\sc PEXRAV}). Given the low statistics and the limited energy range covered by \textit{Chandra} and \textit{XMM-Newton}, small variations in these parameters do not impact the fit.
\subsection{Results}
The results of the X-ray analysis, listed in Table \ref{tabellone}, are in substantial agreement with those reported in the literature using different satellites and/or different approaches (e.g. \citealt{Grandi2006,Evans2006,Massaro2010,Massaro2012}).
Details on the soft X-ray component and reprocessed features are listed in Tables \ref{soft_table} and \ref{reprocessed_table}, respectively.\\
The photon index of FRII-LERGs was tightly constrained for 8 out of 19 sources: the mean $\Gamma$ value is $1.7$ and the standard deviation is 0.3 (see Table \ref{tabellone}). Intrinsic cold gas obscuration was required in about 50\% of the sources. They are generally moderately absorbed, with a $N_{\rm HX}$ of the order of a few $10^{22}$~cm$^{-2}$.
Only in two radio galaxies, i.e. 3C~173.1 and 3C~460, the column density reaches values of few $10^{23}$~cm$^{-2}$.
An iron K$\alpha$ line was detected in 3C~353 (see Table \ref{reprocessed_table}) with an intensity, within the large uncertainties, compatible with being produced by the same matter obscuring the nuclear region \citep{Ghisellini1994}. For the other objects with intrinsic absorption, the feature could not be revealed because of the low statistics and the abrupt drop of the \textit{Chandra} effective area above 6-7~keV. \\
When present, the soft X-ray excess is well described by a power-law, that is probably scattered nuclear emission. Indeed, the normalization values of the scattered component at 1~keV are always a few percent of the absorbed one: the mean value is 6\%, in agreement with those measured for type 2 Seyferts (e.g. \citealt{Bianchi2007}).\\
The cluster emission, when present in the X-ray images, is generally dominant. In four cases (3C~196.1, 3C~288, 3C~310, 3C~388), the AGN is overwhelmed by the thermal gas and any nuclear study is precluded. Therefore, for these sources the estimated 2-10~keV luminosity should be considered as an upper limit on the nuclear AGN emission. Only in 3C~88 could the AGN spectrum be disentangled from the thermal emission; it is analogous to that of the other absorbed radio galaxies. Instead, in 3C~401 the AGN emission dominates over the cluster one and the nuclear spectrum is well reproduced by a single power-law. The intrinsic absorption is negligible, and indeed only an upper limit is provided (see Table \ref{tabellone}).\\
\noindent
The spectra of the control sample (FRII-HERGs/BLRGs) are generally more complex than those of FRII-LERGs (see Table \ref{tabellone}). About 90\% of them show strong obscuration, with typical values one order of magnitude higher than in FRII-LERGs ($N_{\rm H}\sim10^{23}$~cm$^{-2}$). The photon index could be well constrained in only 15\% (5 out of 32) of the sources ($<\Gamma>=1.8$ and $\sigma_{rms}=0.5$).
Intense iron lines with Equivalent Width (EW) spanning from 140~eV to more than 1~keV are detected in 11 sources and, in at least two cases, a Compton reflection model was also required (Table \ref{reprocessed_table}).
These reprocessed features, signature of a complex and inhomogeneous circumnuclear absorber, are commonly observed in Seyfert-like spectra \citep{Risaliti2002}.
The soft X-ray excess of $>$50\% of FRII-HERGs/BLRGs is generally well reproduced by a second power-law, which can be interpreted as the scattered component of the primary one.
The mean unabsorbed normalization at 1~keV is 8\% of the absorbed one.
In addition to the second power-law, a {\sc MEKAL} model is required in a few objects: in some cases, this component is directly related to collisional gas emission (cluster or shocked gas), while in the others it may mimic photoionized features \citep{Balmaverde2012}. Indeed, in two radio galaxies, i.e. 3C~403 and 3C~321, single soft X-ray emission lines associated with Ne~IX, O~VII and Mg~XI were revealed in the spectrum.\\
\noindent
In summary, the control sample shows more complex and feature-rich spectra than the main sample. FRII-HERGs/BLRGs are characterized by mean values of intrinsic absorption and X-ray luminosity one order of magnitude larger than FRII-LERGs, implying a substantially higher activity of the central engine and a more variegated circumnuclear environment.
\begin{figure*}
\centering
\includegraphics[width=7cm,height=6cm]{LEGACY/img/3c88.png}
\hspace{0.1cm}
\includegraphics[width=7cm,height=6cm]{LEGACY/img/3c196_1.png}
\hspace{0.1cm}
\includegraphics[width=7cm,height=6cm]{LEGACY/img/3c288.png}
\hspace{0.1cm}
\includegraphics[width=7cm,height=6cm]{LEGACY/img/3c310.png}
\hspace{0.1cm}
\includegraphics[width=7cm,height=6cm]{LEGACY/img/3c388.png}
\hspace{0.1cm}
\includegraphics[width=7cm,height=6cm]{LEGACY/img/3c401.png}
\caption{\textit{Chandra} 0.3-7~keV images of extended FRII-LERGs. Radio VLA contours at 5~GHz and 1.4~GHz (\textit{white}) are superimposed on the X-ray images. Panel (a): {\bf 3C~88} inhabits the center of a galaxy group and produces the largest X-ray cavities ever found in such an environment \protect\citep{Liu2019}. Panel (b): {\bf 3C~196.1} is the Brightest Cluster Galaxy (BCG) of a Cool Core Cluster (CCC). \protect\cite{Ricci2018} measured a core cluster temperature kT$\sim$ 3~keV, in agreement with our analysis (see Table \ref{soft_table}). Panel (c): {\bf 3C~288} resides at the center of a poor non-CC cluster, as shown by \protect\cite{Lal2010}. They found an ICM gas temperature kT$\sim$3~keV extending up to 400~kpc. Panel (d): {\bf 3C~310} is the central galaxy of a poor cluster with a temperature of about 3 keV at a distance between 100 and 180 kpc (at which a shock occurs) \protect\citep{Kraft2012}. Panel (e): {\bf 3C~388} resides in the center of a small cluster environment with an ICM temperature of about 3.5~keV and a cool core probably heated by a nuclear outburst \protect\citep{Kraft2006}.
Panel (f): {\bf 3C~401} is in the center of a cluster with an ICM mean temperature of 2.9~keV. \protect\cite{Reynolds2005} proposed both a thermal hot core ($T\approx4.9$~keV) and a simple power-law model (the one assumed in this work), since the two are statistically indistinguishable.}\label{fig-cluster}
\end{figure*}
\begin{table*}
\setlength{\arrayrulewidth}{0.3mm}
\setlength{\tabcolsep}{12pt}
\begin{center}
\caption{Spectral parameters of the X-ray continuum.}
\begin{tabular}{llllllll}
\hline
Name & z & $\rm N_{H,Gal}$ & Fitted & $\rm N_{H,intr}$ & $\Gamma_{\rm H}$ & $L_{2-10~\rm keV}$ & Statistics$^b$ \\
&&($10^{20}~\rm cm^{-2}$) & Model$^a$ &($10^{22}$~cm$^{-2}$) & &(10$^{42}$~erg~s$^{-1}$) &\\
\hline
\multicolumn{8}{c}{\textbf{FRII-LERGs}}\\
3C~88 & 0.0302 & 8.3 & [iii] & $2.0\pm0.5$ & $1.6\pm0.3$ & $0.4\pm0.2$ & 36.9/50 \\
3C~132 & 0.214 & 21.3 & [ii] & $5_{-3}^{+4}$ & 1.7$^{\dag}$ & $22_{-8}^{+12}$ & 4.1/5 C \\
3C~153$^{\star}$ & 0.2769 & 16.2 & [i] & - & 1.7$^{\dag}$ & $<1.5$ & - \\
3C~165 & 0.2957 & 19.4 & [ii] & $3\pm 2$ & 1.7$^{\dag}$ & $23_{-8}^{+11}$ & 8.2/14 C \\
3C~166 & 0.2449 & 17.1 & [i] & $<0.16$ & $1.6\pm 0.2$ & $80\pm 10$ & 20.1/19 \\
3C~173.1 & 0.292 & 4.5 & [vi] & $30_{-20}^{+200}$ & 1.7$^{\dag}$ & $27_{-16}^{+180}$ & 35.3/25 C \\
3C~196.1 & 0.198 & 6.0 & [vii] & - & - & $<9.6$ & 20.5/13 C \\
3C~213.1 & 0.1939 & 2.4 & [i] & $<0.43$ & $1.9_{-0.4}^{+0.5}$ & $4\pm 1$ & 15.4/9 C \\
3C~236 & 0.1005 & 1.0 & [ii] & $1.9_{-0.5}^{+0.6}$ & $1.4\pm 0.3$ & $12_{-4}^{+7}$ & 27.0/29 \\
3C~288 & 0.246 & 0.8 & [vii] & - & - & $<4.7$ & 6.4/6 \\
3C~310 & 0.0535 & 3.7 & [vii] & - & - & $<0.02$ & 10.5/17 C \\
3C~326 & 0.0895 & 9.0 & [ii] & $2.2_{-1.7}^{+2.8}$ & 1.7$^{\dag}$ & $0.2_{-0.1}^{+0.2}$ & 3.4/3 C \\
3C~349 & 0.205 & 1.9 & [ii] & $0.9\pm 0.2$ & $1.4\pm 0.2$ & $60\pm10$ & 46.2/47 \\
3C~353 & 0.0304 & 9.3 & [vi] & $6.7_{-0.8}^{+0.9}$ & $1.7\pm 0.2$ & $3\pm1$ & 58.0/61 \\
3C~357 & 0.1662 & 3.1 & [iii] & $3\pm2$ & 2$\pm1$ & $22_{-12}^{+6}$ & 17/18 C \\
3C~388 & 0.0917 & 5.5 & [vii] & - & - & $<0.9$ & 62.5/59 C \\
3C~401 & 0.2011 & 5.9 & [i] & $<0.16$ & $1.7\pm 0.1$ & $5.0\pm 0.5$ & 15.9/18 \\
3C~430$^{\star}$ & 0.0541 & 33.1 & [i] & - & 1.7$^{\dag}$ & $<0.05$ & - \\
3C~460 & 0.268 & 4.72 & [iii] & $25_{-11}^{+23}$ & 1.7$^{\dag}$ & $20\pm 10$ & 5.7/6 \\
\hline
\multicolumn{8}{c}{\textbf{FRII-HERGs/BLRGs}}\\
3C~20 & 0.174 & 18.0 & [ii] & $15_{-3}^{+4}$ & 1.7$^{\dag}$ & $110_{-20}^{+30}$ & 24.1/25 C \\
3C~33 & 0.0596 & 3.4 & [v] & $53_{-7}^{+8}$ & 1.7$^{\dag}$ & $100_{-20}^{+30}$ & 32.5/40 \\
3C~61.1 & 0.184 & 7.9 & [iii] & $29_{-12}^{+23}$ & 1.7$^{\dag}$ & $40_{-20}^{+40}$ & 30/18 C \\
3C~79 & 0.2559 & 8.7 & [iii] & $33_{-10}^{+12}$ & 1.7$^{\dag}$ & $270_{-90}^{+130}$ & 14.4/16 C \\
3C~98 & 0.0304 & 10.0 & [iv] & $9.4_{-0.9}^{+1.0}$ & 1.7$^{\dag}$ & $5.3_{-0.3}^{+0.5}$ & 57.4/48 \\
3C~105 & 0.089 & 12.0 & [iv] & $43_{-6}^{+7}$ & 1.7$^{\dag}$ & $220_{-50}^{+90}$ & 13.6/12 \\
3C~133 & 0.2775 & 25.0 & [ii] & $0.8_{-0.3}^{+0.4}$ & $2.0\pm 0.3$ & $190_{-50}^{+80}$ & 39.4/26 \\
3C~135 & 0.1253 & 8.7 & [vi] & $34_{-19}^{+32}$ & 1.7$^{\dag}$ & $14_{-8}^{+23}$ & 15.8/13 C \\
3C~136.1$^{\star}$ & 0.064 & 32.0 & [i] & - & 1.7$^{\dag}$ & $<0.06$ & - \\
3C~171 & 0.2384 & 5.7 & [ii] & $7\pm1$ & $1.5\pm0.3$ & $130_{-50}^{+80}$ & 33.8/26 \\
3C~180 & 0.22 & 14.0 & [iii] & $70_{-50}^{+160}$ & 1.7$^{\dag}$ & $90_{-70}^{+1800}$ & 7.1/7 C \\
3C~184.1 & 0.1182 & 3.2 & [iii] & $8\pm1$ & 1.7$^{\dag}$ & $110\pm 10$ & 19.8/24 \\
3C~192 & 0.0598 & 3.9 & [iii] & $34_{-7}^{+8}$ & $1.7\pm0.5$ & $2_{-1}^{+4}$ & 14.3/18 \\
3C~223 & 0.1368 & 1.0 & [iii] & $13_{-7}^{+13}$ & 1.7$^{\dag}$ & $20_{-8}^{+16}$ & 10/10 C \\
3C~223.1 & 0.107 & 1.3 & [ii] & $28\pm6$ & 1.7$^{\dag}$ & $90_{-20}^{+30}$ & 9.3/12 C \\
3C~234 & 0.1848 & 1.8 & [vi] & $17_{-6}^{+9}$ & 1.7$^{\dag}$ & $150_{-50}^{+70}$ & 6.6/8 \\
3C~277.3 & 0.0857 & 0.9 & [xiii] & $27_{-5}^{+6}$ & 1.7$^{\dag}$ & $9_{-1}^{+2}$ & 23.1/21 \\
3C~284 & 0.2394 & 0.9 & [i] & $<0.91$ & $2.3\pm 1.0$ & $1.1_{-0.5}^{+0.4}$ & 1.4/5 C \\
3C~285 & 0.0794 & 1.3 & [vi] & $38_{-6}^{+8}$ & 1.7$^{\dag}$ & $35_{-7}^{+10}$ & 7.7/11 \\
3C~300 & 0.27 & 2.5 & [i] & $<0.19$ & $1.4\pm 0.3$ & $13\pm 2$ & 12.7/10 C \\
3C~303.1 & 0.267 & 3.0 & [ii] & $18_{-16}^{+132}$ & 1.7$^{\dag}$ & $15_{-11}^{+400}$ & 0.1/2 C \\
3C~305 & 0.0416 & 1.3 & [viii] & $<0.72$ & 1.7$^{\dag}$ & $0.04\pm0.01$ & 36.3/24 C \\
3C~321 & 0.096 & 3.8 & [ix] & $26_{-13}^{+20}$ & 1.7$^{\dag}$ & $4_{-2}^{+4}$ & 61.2/40 C \\
3C~327 & 0.1041 & 5.9 & [x] & $30_{-18}^{+63}$ & 1.7$^{\dag}$ & $8_{-4}^{+31}$ & 46.6/25 \\
3C~379.1 & 0.256 & 5.4 & [vi] & $60_{-30}^{+70}$ & 1.7$^{\dag}$ & $110_{-70}^{+400}$ & 7.3/8 C \\
3C~381 & 0.1605 & 9.9 & [iii] & $30_{-6}^{+7}$ & 1.7$^{\dag}$ & $240_{-50}^{+70}$ & 18.9/21 C \\
3C~403 & 0.059 & 12.1 & [xi] & $46\pm 3$ & 1.7$^{\dag}$ & $78_{-9}^{+10}$ & 51.5/57 \\
3C~436 & 0.2145 & 6.7 & [iii] & $48_{-15}^{+22}$ & 1.7$^{\dag}$ & $100_{-40}^{+80}$ & 14.4/15 C \\
3C~452 & 0.0811 & 9.8 & [v] & $53_{-7}^{+8}$ & 1.7$^{\dag}$ & $100\pm20$ & 77.9/78 \\
3C~456 & 0.233 & 3.7 & [ii] & $7\pm1$ & 1.7$^{\dag}$ & $160\pm 20$ & 63.6/59 C \\
3C~458 & 0.289 & 5.9 & [ii] & $35_{-16}^{+20}$ & 1.7$^{\dag}$ & $150_{-70}^{+140}$ & 16.8/14 C \\
3C~459 & 0.2199 & 5.2 & [xii] & $4_{-2}^{+3}$ & 1.7$^{\dag}$ & $12_{-2}^{+3}$ & 31.2/25 \\ \hline
\multicolumn{8}{l}{\scriptsize{$^{a}-$ All the adopted models are absorbed by the Galactic column density: [i] po; [ii] zphabs*po; [iii] zphabs*po+po; [iv] zphabs*(po+zgauss);}}\\
\multicolumn{8}{l}{\scriptsize{[v] zphabs*(po+zgauss)+po+pexrav; [vi] zphabs*(po+zgauss)+po; [vii] mekal; [viii] mekal+po; [ix] zphabs*(po+zgauss)+po+2zgauss;}}\\
\multicolumn{8}{l}{\scriptsize{[x] zphabs*(po+zgauss)+po+mekal; [xi] zphabs*(po+zgauss)+po+2zgauss+mekal; [xii] zphabs*(po)+po+mekal; [xiii] zphabs*(po+zgauss)+zphabs*po}}\\
\multicolumn{8}{l}{\scriptsize{$^{b}-$ Statistics refers to the entire energy band assuming the model listed in column (4). ``C" indicates that the C-statistics was adopted.}} \\
\multicolumn{8}{l}{\scriptsize{$^{\dag}-$ fixed photon index}}\\
\multicolumn{8}{l}{\scriptsize{$^{\star}-$ luminosities estimated with PIMMS assuming a simple power law model with $\Gamma=1.7$}.}\\
\label{tabellone}
\end{tabular}
\end{center}
\end{table*}
\normalsize
\LTcapwidth=3.0\textwidth
\section{Discussion}
The aim of the present study is to explore the jet-accretion connection and the role of the environment in shaping the radio morphology in sources of different FR type.
Our X-ray results can be summarized as follows:
\begin{itemize}
\item nearly 30\% of FRII-LERGs are in a dense/extended gaseous environment, as attested by the \textit{Chandra} images.
Thermal gas is also detected in several images of the FRII-HERGs/BLRGs control sample. The extension of the emission seems to
suggest a galactic rather than an intergalactic origin;
\item FRII-LERGs' spectra are generally well modeled by a power-law absorbed by a moderate intrinsic column density ($N_{\rm H} \sim$10$^{22}$~cm$^{-2}$). Conversely, FRII-HERGs/BLRGs have spectra rich in features and engines obscured by high column densities ($N_{\rm H}\geq$10$^{23}$~cm$^{-2}$);
\item FRII-LERGs are intrinsically less luminous than FRII-HERGs/BLRGs by a factor of ten in the 2$-$10~keV band.
\end{itemize}
\subsection{Are FRII-LERGs obscured FRII-HERGs/BLRGs?}
\label{sect_obscured}
The first scenario that we explore supposes that FRII-LERGs could be obscured FRII-HERGs/BLRGs.\\
Our analysis does not support this hypothesis.
The gas column densities, estimated from the X-ray spectra, carry information on the environment down to sub-parsec scales.
Looking at Table \ref{tabellone}, the difference between the FRII classes in terms of $N_{\rm H}$ is evident: FRII-HERGs/BLRGs are more obscured than FRII-LERGs.
A two-sample univariate test program, TWOST \citep{Feigelson1985,Isobe1986}, which takes into account upper limits, confirms that the two samples are different, with P$_{\rm TWOST}=7\times10^{-4}$.
We assume P$_{\rm TWOST}=0.05$ as the probability threshold to rule out the hypothesis that two samples are drawn from the same population.
To take a step forward, 3CR/FRI radio galaxies with X-ray information available in the literature \citep{Balmaverde2006} were also compared to the FRII samples. The peak of the $N_{\rm H}$ distribution in FRIs is clearly shifted to lower values, as shown in Figure \ref{img-nh}. There is a partial overlap with FRII-LERGs. However, a TWOST test applied to FRII-LERGs and FRIs provides a probability of P$_{\rm TWOST}=2\times10^{-2}$, showing that these two samples are intrinsically different. The same test also confirms that FRII-HERGs/BLRGs and FRIs are drawn from different populations (P$_{\rm TWOST}<10^{-4}$).
It is interesting to note that larger cold-gas column densities are associated with radio galaxies hosting efficient accretion disks.
We conclude that there is an indication that the amount of obscuring matter (in the form of cold gas) decreases from FRII-HERGs/BLRGs to FRIs, with FRII-LERGs lying in between.\\
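The censored two-sample comparisons above rely on TWOST; setting aside the treatment of upper limits, the underlying idea of comparing two empirical distributions can be illustrated with a plain two-sample Kolmogorov-Smirnov statistic. The sketch below uses toy numbers, not the actual $N_{\rm H}$ values, and the function name is ours.

```python
def ks_statistic(sample_a, sample_b):
    """Two-sample KS statistic: the maximum vertical distance between
    the empirical cumulative distribution functions of the two samples.
    Values near 1 indicate well-separated distributions."""
    a = sorted(sample_a)
    b = sorted(sample_b)
    points = sorted(set(a) | set(b))
    d_max = 0.0
    for x in points:
        cdf_a = sum(1 for v in a if v <= x) / len(a)
        cdf_b = sum(1 for v in b if v <= x) / len(b)
        d_max = max(d_max, abs(cdf_a - cdf_b))
    return d_max
```

Two fully separated samples give $D=1$, while identical samples give $D=0$; tests such as TWOST additionally propagate the information carried by upper limits, which a plain KS comparison discards.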
Another source of obscuration is the dust spread throughout the galaxy, which could affect our optical measurements. Note that the optical classification of radio galaxies provided by \cite{Buttiglione2009} is based on lines produced in the NLR.
The dust content can be estimated using the Balmer decrement. Adopting an extinction curve $\kappa(\lambda)$, the intrinsic color excess can be expressed as:
$$E(B-V)_i=\frac{2.5}{\kappa(H_{\beta})-\kappa(H_{\alpha})} \times \log\bigg[\frac{(H_{\alpha}/H_{\beta})_o}{3.1}\bigg].$$
Details on the derivation of the above formula can be found in the Appendix of \cite{momcheva}.
The theoretical ($H_{\alpha}/H_{\beta}$) ratio is 2.86, as expected if the temperature and the electron density of the NLR are T=10$^4$~K and $N_e$=10$^3$~cm$^{-3}$, respectively \citep{Osterbrock1989}. Actually, a value of 3.1 is considered the best prescription for AGN \citep{Gaskell1982,Gaskell1984,Wysota1988,Tsvetanov1989,Heard2016}.
Several functional forms for the attenuation curve are available in the literature. The most widely used are: the Milky Way extinction curve \citep{Cardelli1989}, the Large and Small Magellanic Cloud extinction curves from \cite{Gordon2003}, and a general extragalactic extinction curve from \cite{Calzetti1997}. The reddening study was performed considering all these extinction curves.
As the results are similar, hereafter the discussion is based on the Milky Way extinction curve.
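As a worked example of the formula above, the sketch below computes $E(B-V)_i$ from an observed Balmer decrement. The $\kappa$ values are approximate Milky Way (Cardelli-type, $R_V=3.1$) extinction-curve values at H$_\beta$ and H$_\alpha$ and should be taken as illustrative, not as the exact curve used in our analysis.

```python
import math

# Approximate Milky-Way extinction-curve values (R_V = 3.1);
# illustrative numbers, not the exact curve adopted in the paper.
KAPPA_HBETA = 3.61
KAPPA_HALPHA = 2.53
R_INTRINSIC = 3.1   # intrinsic H_alpha/H_beta adopted for AGN

def color_excess(balmer_obs):
    """Intrinsic color excess E(B-V) from the observed H_alpha/H_beta
    ratio; ratios below the intrinsic value are treated as unabsorbed."""
    if balmer_obs <= R_INTRINSIC:
        return 0.0
    return 2.5 / (KAPPA_HBETA - KAPPA_HALPHA) * math.log10(balmer_obs / R_INTRINSIC)
```

For instance, an observed decrement twice the intrinsic value, $(H_{\alpha}/H_{\beta})_o=6.2$, corresponds to $E(B-V)\approx0.7$~mag with these coefficients.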
\begin{figure}
\centering
\includegraphics[width=8cm] {LEGACY/img/nh.png}
\caption{Distribution of the intrinsic gas column density $N_{\rm HX}$ in units of cm$^{-2}$, as measured in the X-ray band. The FRII-LERGs population (\textit{green}) is on average less obscured than FRII-HERGs/BLRGs (\textit{blue}), but more than FRIs (\textit{red}). Arrows indicate upper limits with the number of sources per bin specified. FRIs data are taken from \protect\cite{Balmaverde2006}.}\label{img-nh}
\end{figure}
\cite{Buttiglione2010} provided the narrow H$_{\alpha}$ and $H_{\beta}$ fluxes for the majority of 3C radio galaxies up to z=0.3. We could then investigate the amount of dust in FRIs, FRII-LERGs and FRII-HERGs/BLRGs by simply comparing the $(H_{\alpha}/H_{\beta})_o$ ratio (assuming the same extinction curves for all the galaxies).
In Figure \ref{img-ha_hb}, a histogram of the Balmer decrement for all the sources with detected lines (74 out of 79) is presented. When the flux ratio was less than the theoretical value, the source was considered unabsorbed (i.e. ($H_{\alpha}/H_{\beta})_o=3.1$).
\begin{figure}
\centering
\includegraphics[width=8cm] {LEGACY/img/hb.png}
\caption{Distribution of the observed Balmer decrement (H$_{\alpha}$/H$_{\beta}$)$_{\rm o}$ used to estimate the dust extinction for the three classes of radio galaxies considered in this work, i.e. FRII-LERGs, FRII-HERGs/BLRGs and FRIs. H$_{\alpha}$ and H$_{\beta}$ flux measurements are taken from \protect\cite{Buttiglione2009}. We assume the theoretical (H$_{\alpha}$/H$_{\beta}$)$_{\rm o}$=3.1 as a reliable value for AGN (see Section 4.1). When (H$_{\alpha}$/H$_{\beta}$)$_{\rm o}<$3.1, the source is considered unabsorbed.}\label{img-ha_hb}
\end{figure}
It is immediately apparent that the FRIs' distribution peaks at higher values of $(H_{\alpha}/H_{\beta})_{o}$. Indeed, a Kolmogorov-Smirnov (KS) test confirms that FRIs are richer in dust than both FRII-LERGs (P$_{\rm KS}=0.01$) and FRII-HERGs/BLRGs (P$_{\rm KS}=0.008$). The current optical data do not allow us to exclude that the two FRII classes are drawn from the same population (P$_{\rm KS}=0.32$).
Therefore, the difference between FRII-LERGs and FRII-HERGs/BLRGs is intrinsic and not an artefact due to different absorbing screens.\\
In Figure \ref{img-ebmv-vs-nh} the column density ($N_{\rm HX}$) measured in the X-ray band is plotted vs ($H_{\alpha}/H_{\beta})_o$.
This plot traces the obscuring matter at different scales: $N_{\rm HX}$ maps the gas down to sub-pc scales, while the optical line ratio ($H_{\alpha}/H_{\beta})_o$ (i.e., E(B-V)) carries information from the NLR.
Different classes appear to populate distinct regions of the plot: FRII-HERGs/BLRGs, having higher $N_{\rm HX}$ values, mainly cluster in the upper part of the plot; FRII-LERGs occupy a similar region but are shifted to lower $N_{\rm HX}$; FRIs lie at the bottom in $N_{\rm HX}$ but extend in ($H_{\alpha}/H_{\beta})_o$ up to 10.
Moreover, all FRIs are at the edge or below the N$_{\rm H}$ line that traces the expected amount of gas according to a standard Galactic gas to dust ratio N$_{\rm H}$=$5.8\times 10^{21}$ E(B-V)~atoms~cm$^{-2}$~mag$^{-1}$ \citep{bohlin}.
Conversely, all the FRIIs are above the N$_{\rm H}$ line, suggesting a large amount of gas (although with different column densities) near the BH and paucity of dust in the NLR and/or along the galaxy.
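The expected gas column for a given reddening follows directly from the quoted Galactic dust-to-gas ratio; a sketch:

```python
def nh_expected(ebv, ratio=5.8e21):
    """Gas column density (atoms cm^-2) expected from a colour excess
    E(B-V) (mag), for the Galactic dust-to-gas ratio of Bohlin et al."""
    return ratio * ebv

# A source with E(B-V) = 0.5 mag is expected to have N_H ~ 2.9e21 cm^-2
# if its dust-to-gas ratio is Galactic; a measured N_HX well above this
# line points to excess gas (or dust deficiency), as found for the FRIIs.
print(f"{nh_expected(0.5):.1e}")  # 2.9e+21
```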
\begin{figure}
\centering
\includegraphics[width=8.3cm] {LEGACY/img/hb-nh.png}
\caption{Column density ($N_{\rm HX}$), as obtained by the X-ray analysis, plotted versus the intrinsic reddening, as measured by the optical Balmer decrement $(H_{\alpha}$/$H_{\beta})_o$. $N_{\rm HX}$ decreases from FRII-HERGs/BLRGs to FRI. FRII-LERGs occupy the middle region of the plot. FRIs show the highest excess of colour and the lowest gas content. The black curve represents the expected $N_{\rm H}$ value assuming a dust-to-gas ratio $N_{\rm H}$/E(B-V)=$5.8\times 10^{21}$~atoms~cm$^{-2}$~mag$^{-1}$.}\label{img-ebmv-vs-nh}
\end{figure}
The dichotomy between FRII-LERGs and FRII-HERGs/BLRGs is reinforced by the X-ray analysis. The
unabsorbed X-ray luminosity divided by the Eddington luminosity (L$_{\rm Edd}$=1.3$\times10^{38}$~M/M$_{\odot}$~erg~s$^{-1}$) is a direct proxy of the accretion rate (L$_{\rm 2-10~keV}$/L$_{\rm Edd}$; \citealt{Merloni2003}). The BH masses for the sources in our sample were calculated by exploiting the relation between the H-band host-galaxy magnitude (taken from \citealt{Buttiglione2009}) and M$_{\rm BH}$, provided by \cite{Marconi2003} (with a dispersion of $\sim$ 0.3~dex in the BH mass).
As expected, no significant difference in masses is observed among FRIs and FRIIs. The M$_{\rm BH}$ range is narrow: $10^{8.5}-10^{9.5}$ M$_\odot$.
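The accretion-rate proxy used here reduces to a simple ratio once the black-hole mass is fixed. A sketch using the Eddington normalisation quoted in the text (the H-band mass calibration itself is not reproduced):

```python
L_EDD_PER_MSUN = 1.3e38  # erg s^-1 per solar mass, as quoted in the text

def eddington_ratio(l_x, log_mbh):
    """L_{2-10 keV}/L_Edd for an unabsorbed X-ray luminosity l_x (erg/s)
    and a black-hole mass given as log10(M_BH / M_sun)."""
    l_edd = L_EDD_PER_MSUN * 10.0**log_mbh
    return l_x / l_edd

# A 10^9 M_sun black hole with L_X = 1e43 erg/s:
print(eddington_ratio(1e43, 9.0))  # ~7.7e-05
```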
The upper panel of Figure \ref{accretion} shows the L$_{\rm 2-10~keV}$/L$_{\rm Edd}$ distribution for the three classes.
The distributions of FRIs and FRII-HERGs/BLRGs are clearly separated ($P_{TWOST}<10^{-4}$), while FRII-LERGs are in between. The displacement of the FRII-LERGs' peak towards lower accretion rates is confirmed by a TWOST test, which associates probabilities of $10^{-4}$ and 5.8$\times10^{-3}$ to the hypothesis that FRII-LERGs are drawn from the same parent population as FRII-HERGs/BLRGs and FRIs, respectively.
{\it Therefore, the nuclear activity is inherently different in FRIs, FRII-LERGs and FRII-HERGs/BLRGs.}
Note that for RGs with an ADAF-like engine the estimated X-ray luminosity could provide only an upper limit on the accretion luminosity, as there could be a significant contribution from the jet emission. If this were the case, the separation between FRII-LERGs and FRII-HERGs/BLRGs would be even more pronounced.
Finally, we note that the same trend is observed when
the ionizing radiation L$_{\rm ion}$ in terms of the Eddington luminosity is considered (lower panel of Figure \ref{accretion}).
This quantity, defined as
Log~L$_{\rm ion}\sim$ Log~$L_{[\rm OIII]}$ + 2.83 \citep{Buttiglione2009} is directly related to the accretion efficiency, being responsible for the excitation of the NLR gas.\\
The agreement between the upper and lower panels of Figure \ref{accretion} corroborates our previous conclusion based on the $(H_{\alpha}/H_{\beta})_o$ study.
As no dust correction was applied to L$_{\rm ion}$, the different optical classification of FRIIs {\bf cannot} be ascribed to optical NLR obscuration.
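The ionizing-luminosity estimate used in the lower panel is a fixed logarithmic offset from L$_{[\rm OIII]}$; a sketch:

```python
def l_ion(l_oiii):
    """Ionizing luminosity (erg/s) from the [OIII] line luminosity,
    via Log L_ion ~ Log L_[OIII] + 2.83 (Buttiglione et al.)."""
    return l_oiii * 10.0**2.83

# L_[OIII] = 1e41 erg/s -> L_ion ~ 6.8e43 erg/s
print(f"{l_ion(1e41):.1e}")
```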
\begin{figure}
\centering
\includegraphics[width=6.9cm] {LEGACY/img/lx.png}
\includegraphics[width=6.9cm] {LEGACY/img/oiii.png}\qquad
\caption{{\it Upper panel:} X-ray luminosity normalized for the Eddington luminosity. FRIs are in red, FRII-LERGs are in green and FRII-HERGs/BLRGs are in blue. Leftwards arrows indicate upper limits, with specified number per bin and population. {\it Lower panel:} estimated L$_{\rm ion}$/L$_{\rm Edd}$ ratio. Both accretion rate estimates show the same trend: FRIs at the lowest values, classical FRIIs at the highest ones, and FRII-LERGs in between.} \label{accretion}
\end{figure}
\subsection{Were FRII-LERGs powerful FRII-HERGs/BLRGs in the past?}
The comparison of the X-ray and L$_{[OIII]}$ luminosities, accretion rates, and intrinsic nuclear absorption among the examined classes of sources has solidly established that FRII-LERGs have intermediate properties, lying between FRIs and FRII-HERGs/BLRGs. It is plausible that FRII-LERGs represent an evolutionary stage of FRII-HERGs/BLRGs.
A link between the accretion properties and the power of the produced jets is certainly expected, based on both theoretical arguments \citep[e.g.,][]{1977MNRAS.179..433B, 2007ApJ...658..815S} and observational works \citep[e.g.,][]{2006MNRAS.372...21A, Ghisellini2014}.
Since the FRII-LERGs in this sample accrete at lower rates than classic FRII-HERGs/BLRGs, we would then expect their jets to be correspondingly less powerful.
Contrary to this expectation, their extended radio luminosities, generally assumed as predictors of the total jet power \citep[e.g.,][]{2010ApJ...720.1066C}, are very similar (see Figure \ref{Pradio}).
This conflict can be bypassed considering the large-scale radio structures of FRII-LERGs as the heritage of a past AGN activity at higher efficiency.
If the nuclear activity has recently decreased due, for instance, to the depletion of the cold gas reservoir, it is reasonable to think that this information may not have reached the large-scale radio structures yet, which are formed at kilo-parsec distances from the central engine.
The evolutionary scenario is also supported by a recent analysis of a large sample of low-redshift and low-luminosity FRII objects \citep{2017A&A...601A..81C}, which showed that the roughly one-to-one correspondence between FRII morphologies and powerful nuclei does not hold for this large population in the local universe ($z<0.15$). On the contrary, most of the FRIIs in the catalog compiled by \cite{2017A&A...601A..81C} are classified as LERGs. This could suggest that the local FRIIs are ``starved'', i.e. they now miss the fueling cold material that made them shine in the past.
\subsection{Does the environment play a role?}
Another possibility is that the nuclei of FRII-LERGs, while not as powerful as in classic FRII-HERGs/BLRGs, can still form FRII morphologies due, for instance, to favorable environmental conditions.
Several studies in the literature identify the environment as the fundamental ingredient for the origin of the FRI/FRII dichotomy.
\cite{Gendre2013}, studying the cluster richness for a large sample of radio galaxies, suggest that the relation between radio morphology and accretion mode is quite complex and attribute an important role to the environment. They calculated the cluster richness (CR) following the method of \cite{Wing2011}, which is based on the count of galaxies within a disk of 1~Mpc radius around each analyzed target.
They concluded that FRII-HERGs/BLRGs and FRIs live in different environments, being characterized by poor and rich surroundings, respectively. On the contrary, they found that the 29 FRII-LERGs of their sample can live both in clusters ($\approx 40\%$) and in scarcely populated regions (in terms of richness, $\approx 60\%$).
Our X-ray data confirm that FRII-LERGs can indeed reside in a dense and hot medium: 6 out of 19 are found in clusters (see Figure \ref{fig-cluster}). Unfortunately, our X-ray analysis, exploiting data from public archives, suffers from an inhomogeneity of the exposure times that prevents a comparative study of the environments among the different classes.
Taking advantage of the richness study of \cite{Gendre2013}, we could explore the relation between accretion, in terms of L$_{\rm 2-10~keV}$/L$_{\rm Edd}$, radio morphology and environment
for 18 FRII-HERGs/BLRGs, 8 FRIs and 9 FRII-LERGs.
In Figure \ref{gendre} the X-ray luminosity scaled for the Eddington luminosity is plotted as a function of CR for the three classes. The vertical line is the limit between poor (CR$<$30) and rich (CR$>$30) environments proposed by \cite{Gendre2013}.
As expected, FRII-HERGs/BLRGs occupy the left upper corner (i.e. they are in less dense environments and have more efficient engines), while FRIs are segregated in the right lower region (i.e. they have high CR and low L$_{\rm 2-10~keV}$/L$_{\rm Edd}$ values).
The intermediate accretion rates of FRII-LERGs put them in the middle part of the diagram, but, unlike the other classes, they fall into both sides of the threshold fixed at CR=30.
More impressive is the clear link in the whole sample between the richness of the environment and the accretion rate in terms of L$_{\rm 2-10~keV}$/L$_{\rm Edd}$. A Kendall~$\tau$ test in ASURV provides a very high probability that these quantities are correlated (P$_{{Kendall-\tau}}>99.9\%$).
With the appropriate caution required by the limited number of sources, this result suggests that the environment would have a strong impact on the accretion regime. \\
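For detected sources only, the rank correlation behind this test can be sketched with the plain (uncensored) Kendall $\tau$; the generalized ASURV version, which also accounts for the upper limits, is what the quoted probability is based on. The values below are illustrative, not the sample's:

```python
from itertools import combinations

def kendall_tau(x, y):
    """Plain Kendall tau-a (ties ignored in the numerator); ASURV's
    generalized version, used in the text, additionally handles
    censored data (upper limits)."""
    c = d = 0
    for (x1, y1), (x2, y2) in combinations(zip(x, y), 2):
        s = (x1 - x2) * (y1 - y2)
        if s > 0:
            c += 1      # concordant pair
        elif s < 0:
            d += 1      # discordant pair
    n_pairs = len(x) * (len(x) - 1) // 2
    return (c - d) / n_pairs

# Monotonically decreasing toy data (CR up, L_X/L_Edd down) -> tau = -1
print(kendall_tau([5, 12, 33, 70], [-3.9, -4.2, -4.8, -5.6]))
```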
The problem with this interpretation is that the radio luminosity of the extended components, related to the jet kinetic power, is similar in FRIIs, at odds with what is observed in X-rays, where FRII-HERGs/BLRGs are brighter (more efficient accretors) than FRII-LERGs. \\
However, the relation P$_{\rm jet}\propto L_{\rm 151~MHz}^{6/7}$, proposed by \cite{Willott1999} and widely used in the literature, is calibrated on FRII radio galaxies and suffers from large uncertainties represented by a factor $f^{3/2}$, which mainly depends on the ratio between the energy in protons and electrons in the lobes. \cite{Willott1999} deduced that $f$ can span from 1 to 20, implying a $P_{\rm jet}$ uncertainty of about two orders of magnitude.
Other authors revisited this relation measuring the jet power using the X-ray cavities produced by the interaction of FRI jets \citep{Birzan2008,Cavagnolo2010} or the hot-spot size and an equipartition magnetic field in FRIIs \citep{Godfrey2013}.
In particular, \cite{Cavagnolo2010} concluded that a $k$-value larger than 100 is necessary to match the relation proposed by Willott.
Therefore, different classes of sources can require different values of $f$. We note that increasing $f$-values from FRIs to FRII-LERGs and FRII-HERGs/BLRGs could indeed maintain the proportionality between jet power and accretion luminosity.
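To make the sensitivity to $f$ concrete, here is a sketch using one commonly quoted normalisation of the Willott et al. relation (the prefactor is an assumption of this illustration; only the $L^{6/7}$ scaling and the $f^{3/2}$ factor matter for the argument):

```python
import math

def jet_power(l_151, f=10.0):
    """Jet power (W) from the 151 MHz luminosity (W Hz^-1 sr^-1):
    Q ~ 3e38 * f^(3/2) * (L_151 / 1e28)^(6/7)  [normalisation assumed]."""
    return 3e38 * f**1.5 * (l_151 / 1e28)**(6.0 / 7.0)

# The f^(3/2) factor alone spans ~2 dex between f=1 and f=20,
# matching the "two orders of magnitude" uncertainty quoted above:
print(math.log10(jet_power(1e27, 20.0) / jet_power(1e27, 1.0)))  # ~1.95
```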
Indeed, we note that the radio luminosity of a radio source can be amplified by radiative losses if the jet propagates through a dense environment. As claimed by \cite{Barthel1996} on the basis of a Cygnus A study, this effect would amplify the radio luminosity and, in turn, weaken its reliability as an estimator of the AGN power.
\begin{figure}
\centering
\includegraphics[width=7.5cm] {LEGACY/img/gendre.png}
\caption{Environmental richness (CR) plotted versus the intrinsic Eddington-scaled X-ray luminosity. Color-coding is the same of previous figures. Downwards arrows indicate upper limits. The solid black line is the threshold between poor and rich environments (CR=30), as indicated by \protect\cite{Gendre2013}. The dashed line represents the linear regression of data computed with ASURV parametric EM algorithm, including upper limits. The equation is: Log(L$_X$/L$_{\rm Edd}$)=-$(0.022\pm 0.005)\times CR - (3.8\pm0.3)$.}\label{gendre}
\end{figure}
\begin{table}
\setlength{\arrayrulewidth}{0.3mm}
\caption{Spectral parameters of the soft X-ray component.}
\begin{center}
\begin{tabular}{p{0.08\textwidth}p{.1\textwidth}p{.08\textwidth}p{.1\textwidth}}
\hline
Name & $\Gamma_S$ & kT & $L_{0.5-2}$ \\
&& $(\rm keV)$ &(10$^{42}$~erg~s$^{-1}$)\\\hline
\multicolumn{4}{c}{\textbf{FRII-LERGs}}\\
3C88 & 1.6=$\Gamma_{H}$ & - & $0.03_{-0.005}^{+0.007}$ \\
3C173.1 & 1.7=$\Gamma_{H}$ & - & $0.3_{-0.2}^{+0.3}$ \\
3C196.1 & - & $3.2_{-0.8}^{+1.3}$ & $9\pm 1$ \\
3C288 & - & $3.7_{-1.2}^{+2.9}$ & $3.6_{-0.5}^{+0.6}$ \\
3C310 & - & $1.0\pm0.2$ & $0.04_{-0.009}^{+0.01}$ \\
3C353 & 1.7=$\Gamma_{H}$ & - & $0.04\pm0.01$ \\
3C357 & 2=$\Gamma_{H}$ & - & $0.9_{-0.3}^{+0.7}$ \\
3C388 & - & $2.1_{-0.2}^{+0.3}$ & $1.5_{-0.1}^{+0.2} $ \\
3C460 & $1.7=\Gamma_{H}$ & - & $0.7\pm 0.3$ \\ \hline
\multicolumn{4}{c}{\textbf{FRII-HERGs/BLRGs}}\\
3C33 & 1.7=$\Gamma_{H}$ & - & $1.0_{-0.2}^{+0.3}$ \\
3C61.1 & 1.2$\pm$1.0 & - & $0.7\pm 0.4$ \\
3C79 & 1.7=$\Gamma_{H}$ & - & $1.8_{-0.7}^{+1.0}$ \\
3C135 & $2.2\pm 1.0$ & - & $0.5\pm 0.2$ \\
3C180 & 1.7=$\Gamma_{H}$ & - & $0.9_{-0.4}^{+0.6}$ \\
3C184.1 & 1.7=$\Gamma_{H}$ & - & $0.7\pm 0.3$ \\
3C192 & 1.7=$\Gamma_{H}$ & - & $0.04\pm 0.01$ \\
3C223 & 1.7=$\Gamma_{H}$ & - & $1.4\pm 0.4$ \\
3C234 & $2.3\pm 0.5$ & - & $7\pm 1$ \\
3C277.3 & 1.7=$\Gamma_{H}$ & - & $0.31_{-0.08}^{+0.1}$ \\
& \multicolumn{2}{l}{N$_{H,2}=1.1_{-0.4}^{+0.5~\ddag}$}&\\
3C285 & 1.7=$\Gamma_{H}$ & - & $0.14\pm 0.1$ \\
3C305 & - & $0.8\pm0.2$ & $0.013_{-0.005}^{+0.006}$ \\
3C321 & $2.8\pm 0.2$ & - & $0.8_{-0.06}^{+0.07}$ \\
3C327 & 1.7=$\Gamma_{H}$ & $0.20_{-0.02}^{+0.04}$ & $1.9\pm 0.1$ \\
3C379.1 & 1.7=$\Gamma_{H}$ & - & $1.2_{-0.7}^{+0.9}$ \\
3C381 & 1.7=$\Gamma_{H}$ & - & $2.0\pm 0.5$ \\
3C403 & 1.7=$\Gamma_{H}$ & $0.20\pm 0.03$ & $0.4\pm 0.04$ \\
3C436 & 1.7=$\Gamma_{H}$ & - & $0.4\pm 0.2$ \\
3C452 & 1.7=$\Gamma_{H}$ & - & $0.1\pm0.04$ \\
3C459 & 1.7=$\Gamma_{H}$ & $0.7_{-0.1}^{+0.2}$ & $3.9_{-0.6}^{+0.7}$ \\\hline
\multicolumn{4}{l}{\scriptsize{ $^{\ddag}-$ a secondary absorption component is also required for this }}\\
\multicolumn{4}{l}{\scriptsize{source (see \citealt{Worrall2016}).}}\\
\end{tabular}\label{soft_table}
\end{center}
\end{table}
\begin{table}
\setlength{\arrayrulewidth}{0.3mm}
\caption{Reprocessed Features.}
\begin{center}
\begin{tabular}{llc}
\hline
Name & Fe$K_{\alpha}$ line$^{\rm (a)}$ &Reflection (R)\\
\hline
\multicolumn{3}{c}{\textbf{FRII-LERGs}}\\\\
3C88 & unconstrained &- \\
3C132 & EW$<949$ &- \\
3C165 & EW$<776$ &- \\
3C166 & EW$<388$ &- \\
3C173.1 & EW$\geq886$ &- \\
3C213.1 & unconstrained &- \\
3C236 & EW$<572$ &- \\
3C326 & unconstrained &- \\
3C349 & EW$<339$ &- \\
3C353 & EW=$100\pm78$ &- \\
3C357 & EW$<610$ &- \\
3C401 & unconstrained &- \\
3C460 & EW$<359$ &- \\ \hline
\multicolumn{3}{c}{\textbf{FRII-HERGs/BLRGs}}\\\\
3C20 & EW$<281$ &- \\
3C33 & EW=$139\pm89$ & R=$1.5_{-0.6}^{+0.4}$ \\
3C61.1 & EW$<359$ & - \\
3C79 & EW$<157$ & - \\
3C98 & EW=$277\pm135$ & - \\
3C105 & EW=$178\pm132$ & - \\
3C133 & EW$<453$ & - \\
3C135 & EW$=916_{-719}^{+1474}$ & - \\
3C171 & EW$<117$ & - \\
3C180 & EW$<744$ & - \\
3C184.1 & EW$<278$ & - \\
3C192 & EW$<260$ & - \\
3C223 & EW$<836$ & - \\
3C223.1 & EW$<374$ & - \\
3C234 & EW$=900\pm400$ & - \\
3C277.3 & EW$=200\pm100$ & - \\
3C284 & unconstrained & - \\
3C285 & EW=$367_{-47}^{+144}$ & - \\
3C300 & unconstrained & - \\
3C303.1 & unconstrained & - \\
3C305 & unconstrained & - \\
3C321 & EW=$988_{-474}^{+751}$ & - \\
3C327 & EW=$2000_{-742}^{+3000}$& - \\
3C379.1 & EW=$557_{-464}^{+1900}$ & - \\
3C381 & EW$<2304$ & - \\
3C403 & EW=$153_{-15}^{+60}$ & - \\
3C436 & EW$<591$ &- \\
3C452 & EW=$172_{-65}^{+65}$ &R=2$_{-0.5}^{+0.4}$ \\
3C456 & EW$<156$ & - \\
3C458 & EW$<238$ & - \\
3C459 & EW$<649$ &- \\
\hline
\multicolumn{3}{l}{\scriptsize{$^{\rm (a)}$ -- Observed iron line equivalent width in eV.}}\\
\label{reprocessed_table}
\end{tabular}
\end{center}
\end{table}
\normalsize
\section{Conclusions}
The comparison of the optical/X-ray luminosities, accretion rates, and intrinsic nuclear absorption among the three examined classes of sources (FRIs, FRII-LERGs, FRII-HERGs/BLRGs) has solidly established that FRII-LERGs have intermediate properties.
The measurement of moderate gas column densities in FRII-LERGs, combined with a modest dust reddening, enables us to directly reject the first discussed scenario (see Section \ref{sect_obscured}), in which FRII-LERGs are a highly obscured version of classical powerful FRIIs.
Instead, the moderate N$_{\rm H}$ column densities and X-ray/[OIII] luminosities are indicative of a weak nuclear activity with respect to the more obscured FRII-HERGs/BLRGs.
This leads to at least two different interpretations: i) FRII-LERGs are ``starved" classical FRII-HERGs/BLRGs, or ii) they are a separate class of radio galaxies with their own properties.
In both cases, assuming a P$_{\rm jet}-L_{\rm radio}$ relation, FRII jets appear to carry similar amounts of energy independently of their optical classification (and different X-ray luminosities). This is difficult to explain within the paradigm that assumes powerful jets are produced by efficient accretors.
In the former case, this apparent disagreement could be explained if FRII-LERGs are switching from an efficient to an inefficient regime and this information may not have reached the lobe-scales yet. In the latter case, the different trend of radio (jet power) and X-ray luminosities (accretion power) can be reconciled if the usually adopted P$_{\rm jet}-L_{\rm radio}$ relation does not properly take into account the jet interaction with the surrounding medium.
\section*{Acknowledgements}
The authors wish to thank the anonymous referee for constructive comments which helped to improve the paper.
DM and ET acknowledge financial contribution from the agreement ASI-INAF n. 2017-14-H.O. This work is based on data from the \textit{Chandra} and XMM-\textit{Newton} Data Archive.
\label{lastpage}
\section{Coordinate-free Linear Algebra}\label{sec:linear_algebra}
In this section we answer the question: What is a coordinate-free approach and why do we care? With this objective in mind, we define abstract vector spaces, bases, and linear maps.
A secondary objective of this section is to show that thinking of a linear map as something that preserves the operations of a vector space is easier than thinking about it as a matrix.
\subsection{Vector spaces}
\begin{definition}\label{def:vector_space}
A \textbf{vector space} over a field $\KK$ (think $\KK = \RR$ or $\KK = \CC$) is a set $V$ with two operations $\deffun{+ : V \times V -> V;}$ and $\deffun{\cdot : \KK \times V -> V;}$ such that for every $u, v, w \in V$ and $a, b \in \KK$
\vspace{\baselineskip}
\noindent
\begin{minipage}[t]{.5\textwidth}
\begin{itemize}
\item Associativity: $(u + v) + w = u + (v + w)$
\item Identity: There exists a $0 \in V$ such that \\\phantom{Identity: }$\forall v \in V$, $v + 0 = v$
\item Inverse: For every $v \in V$ there is a $-v \in V$\\\phantom{Inverse: }such that $v + (-v) = 0$
\item Commutativity: $u + v = v + u$
\end{itemize}
\end{minipage}%
\begin{minipage}[t]{.5\textwidth}
\begin{itemize}
\item Compatibility: $a\cdot (b \cdot v) = (ab)\cdot v$
\item Scalar identity: $1\cdot v = v$ where $1\in\KK$\\\phantom{Scalar identity: }is the multiplicative identity
\item Distributivity $1$: $a \cdot (u+v) = a \cdot u + a \cdot v$
\item Distributivity $2$: $(a+b) \cdot u = a \cdot u + b \cdot u$
\end{itemize}
\end{minipage}
\\[8pt]
As with any mathematical multiplication, we often omit the symbol and simply write $av = a \cdot v$.
The elements of $V$ are called \textbf{vectors} and the elements of $\KK$ are called \textbf{scalars}.
Vector spaces over the real (resp.\ complex) numbers are called \textbf{real} (resp.\ \textbf{complex}) vector spaces.
\end{definition}
The first time one encounters these axioms, they can be quite overwhelming. They become a bit easier to digest once one realises that all they are doing is modelling $\RR^n$ abstractly. Once one sits down and checks that $\RR^n$ fulfils all these axioms, we find the first example of a vector space.
\begin{example}
$\RR^n$ is a vector space over $\RR$.
\end{example}
$\RR^n$ is the most important example of a vector space. In fact, virtually all the vector spaces over the real numbers that we will work with will have an $\RR^n$ lurking behind one way or another. Whenever we think about an abstract vector space, it is good to keep in mind a picture of something that looks like $\RR^n$ to fix ideas. That being said, there are other vector spaces.
\begin{example}\label{ex:complex}
$\CC^n$ is a vector space over $\CC$.
$\CC^n$ is also a real vector space, as addition of vectors is performed on the real and complex part separately.
In fact, $\CC^n$ is pretty much the same as $\RR^{2n} = \RR^n \times \RR^n$ as a real vector space.
\end{example}
We finish with some slightly less conventional examples of vector spaces.
\begin{example}\label{ex:3}
The set of polynomials in one variable of degree less or equal to $n$ is a real vector space.
The set of polynomials in one variable of degree exactly equal to $n$ is \textbf{not} a real vector space.
The set of matrices $\M{m, n}$ is a real vector space.
The set of tensors with real entries of a fixed shape is a real vector space.
The set of infinite sequences of real numbers $(a_i)_{i=1}^\infty = (a_1, a_2, \dots)$ is a real vector space.
\end{example}
It is a good exercise to convince oneself that all these examples are what they claim to be. In general, to check that an object is indeed a vector space, it tends to be enough to check that multiplying a vector by a scalar gives a vector in the set and adding two vectors in the set gives a vector in the set (the second example above is a counterexample of this).
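The failed example above, polynomials of degree exactly $n$, can be checked concretely. A small sketch, representing a polynomial by its list of coefficients:

```python
# Represent a polynomial by its coefficient list [a_0, a_1, ..., a_n].
def add(p, q):
    return [a + b for a, b in zip(p, q)]

def degree(p):
    nz = [i for i, a in enumerate(p) if a != 0]
    return nz[-1] if nz else -1  # convention: deg(0) = -1

p = [0, 0, 1]    # x^2       (degree exactly 2)
q = [1, 0, -1]   # 1 - x^2   (degree exactly 2)
# The sum has degree 0, so it left the set: not closed under addition.
print(degree(add(p, q)))  # 0
```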
\subsection{Vector subspaces and bases}
\begin{definition}
A subset $U$ of a vector space $V$ over $\KK$ is a \textbf{vector subspace} or \textbf{linear subspace} if it is closed under addition and multiplication by scalars. In symbols,
\[
u+v \in U \quad \text{and} \quad av \in U \mathrlap{\qquad \forall u,v \in U, a \in \KK}
\]
\end{definition}
\begin{example}
The vectors of the form $(0, a_2, \ldots, a_n)$ with $a_i \in \RR$ form a linear subspace of $\RR^n$.
The polynomials in one real variable of degree less than $n$ form a linear subspace of the vector space of all polynomials in one real variable.
\end{example}
A simple way to define subspaces is to pick a number of vectors and consider all the possible combinations of additions and multiplication by scalars that one can form---\ie, consider their closure.
\begin{definition}
The \textbf{vector space spanned by $v_1, \dots, v_n \in V$} is defined as
\[
\sspan(v_1, \dots, v_n) = \set[\Big]{\sum_{i=1}^n a_iv_i | a_i \in \KK} \subset V.
\]
\end{definition}
The vector space spanned by a number of vectors is always a vector subspace of $V$ by definition, as it is closed with respect to sums and product by scalars.
\begin{example}\label{ex:span} We compute the span of some sets of vectors in $V = \RR^3$:
\begin{itemize}
\item Let $v_1 = \pa{0, 1, 0}$. $\sspan\pa{v_1} = \set{\pa{0, a, 0} | a \in \RR}$.
\item Let $v_2 = \pa{0, 1, 1}$. $\sspan\pa{v_1, v_2} = \set{\pa{0, a + b, b} | a, b \in \RR}$. Setting $a = c-b$ for a new variable $c \in \RR$, we have that $\sspan\pa{v_1, v_2} = \set{\pa{0, c, b} | c, b \in \RR}$. In plain words, $v_1$ and $v_2$ generate the subspace of all the vectors in $\RR^3$ with first coordinate equal to zero.
\item Let $v_3 = \pa{0, 1, 2}$. $\sspan\pa{v_1, v_2, v_3} = \sspan\pa{v_1, v_2} = \set{\pa{0, a, b} | a, b \in \RR}$. What causes this is that $v_3 = 2v_2 - v_1$. As such, any vector that we can form using $v_3$, we could already form as a linear combination of $v_1$ and $v_2$. Intuitively, the third vector is ``redundant''.
\end{itemize}
\end{example}
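The redundancy in the example can be detected numerically: the dimension of the span is the rank of the matrix whose rows are the given vectors. A self-contained sketch via Gaussian elimination:

```python
def rank(vectors):
    """Rank via Gaussian elimination: the dimension of span(vectors)."""
    rows = [list(map(float, v)) for v in vectors]
    r = 0  # number of pivots found so far
    for col in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows))
                    if abs(rows[i][col]) > 1e-12), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and abs(rows[i][col]) > 1e-12:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

v1, v2, v3 = (0, 1, 0), (0, 1, 1), (0, 1, 2)
print(rank([v1, v2]))      # 2
print(rank([v1, v2, v3]))  # 2 -- v3 = 2*v2 - v1 adds nothing
```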
If we have vectors $v_1, \dots, v_n$, and we express another vector as a linear combination of these, $v = \sum_{i=1}^n a_i v_i$, we would like to be able to represent $v$ as $(a_1, \dots, a_n)$, as we do when we are in $\RR^n$. This identification may not be possible because of two reasons.
\begin{enumerate}
\item \textbf{Our set might not span all the vectors in $V$}. In the examples in~\Cref{ex:span}, none of the sets of vectors that we considered was able to span the vector $v = (1, 0, 0)$. Formally, we write $(1, 0, 0) \not\in \sspan\pa{v_1, v_2, v_3}$. More generally, it might be the case that $\sspan\pa{v_1, \dots, v_n} \subsetneq V$.
\item \textbf{The representation $v = \sum_{i=1}^n a_iv_i$ might not be unique}. This is what happened in the last example in~\Cref{ex:span}. Since $v_3 = 2v_2 - v_1$, if we choose a vector in $\sspan(v_1, v_2, v_3)$, for example $v = (0, 2, 2)$, we have that we can write
\[
v = 2v_2 = v_1 + v_3 = \tfrac{2}{3}v_1 + \tfrac{2}{3}v_2 + \tfrac{2}{3}v_3.
\]
Thus, we cannot identify $v = \sum_{i=1}^3 a_i v_i$ with $(a_1, a_2, a_3)$ because the choice of $a_i$ is not unique.
\end{enumerate}
These problems motivate the following definition:
\begin{definition}
A set of vectors $B = \set{v_1, \dots, v_n}$ is called a \textbf{basis} of a vector space $V$ if $\sspan(B) = V$ and any element $v \in V$ may be represented uniquely in terms of the vectors in $B$. If there is a basis of $n$ vectors for $V$, we say that \textbf{$V$ has dimension $n$}, or $\dim V = n$.
\end{definition}
In plain words, a basis of a vector space is a set of vectors such that
\begin{itemize}
\item Spans the whole $V$.
\item Has no redundancies: No vector in $B$ can be expressed as a linear combination of others.
\end{itemize}
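For three vectors in $\RR^3$ both conditions can be checked at once: they form a basis exactly when the determinant of the matrix they form is nonzero. A sketch:

```python
def det3(a, b, c):
    """Determinant of the 3x3 matrix with rows a, b, c."""
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
          - a[1] * (b[0] * c[2] - b[2] * c[0])
          + a[2] * (b[0] * c[1] - b[1] * c[0]))

# Nonzero determinant <=> the three vectors form a basis of R^3.
print(det3((0, 1, 0), (0, 1, 1), (1, 0, 0)))  # 1 -> basis
print(det3((0, 1, 0), (0, 1, 1), (0, 1, 2)))  # 0 -> not a basis (redundant)
```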
\subsection{Linear maps}
The other objects that linear algebra studies are linear maps. These are maps between vector spaces that respect the vector space structure.
\begin{definition}\label{def:linear_map}
A map $\deffun{T : V -> W;}$ between vector spaces over a field $\KK$ is said to be $\KK$-\textbf{linear}---or simply \textbf{linear}---if
\[
T(u+v) = T(u) + T(v) \qquad T(av) = aT(v) \mathrlap{\qquad \forall u,v \in V, a \in \KK.}
\]
\end{definition}
The following lemma shows that there is a very close connection between linear maps and bases.
\begin{lemma}[A linear map is defined by its values on a basis]\label{lemma:linear_map_basis}
Let $V$ be a vector space with a basis $B_V = \set{v_1, \dots, v_n}$. If we choose $w_1, \dots, w_n \in W$, there exists a unique linear map $\deffun{T : V -> W;}$ such that $T(v_i) = w_i$.
\end{lemma}
\begin{proof}
Since $B_V$ is a basis, there exists a unique representation of an arbitrary element $v\in V$ as $v = a_1v_1 +\dots+a_nv_n$.
We can define $T$ on an arbitrary vector $v$ as
\[
T(v) = T(a_1 v_1 + \dots + a_n v_n) = a_1T(v_1) + \dots + a_nT(v_n).
\]
so $T$ exists. To show that it is unique, suppose that there is another linear map $T'$ that takes the same values on $B_V$. We then have that they are equal
\[
T(v) = a_1T(v_1) + \dots + a_nT(v_n) = a_1T'(v_1) + \dots + a_nT'(v_n) = T'(a_1 v_1 + \dots + a_n v_n) = T'(v)
\]
where we have used that $T(v_i) = T'(v_i)$ and the fact that both $T$ and $T'$ are linear.
\end{proof}
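The construction in the proof is exactly how one implements a linear map in practice: store the images of the basis vectors and extend by linearity. A sketch on the standard basis of $\RR^n$:

```python
def extend_linearly(images):
    """Given images[i] = T(e_i) on the standard basis of R^n, return
    the unique linear map T guaranteed by the lemma."""
    def T(v):
        # T(v) = sum_i v_i * T(e_i), computed coordinate by coordinate
        return [sum(a * w[j] for a, w in zip(v, images))
                for j in range(len(images[0]))]
    return T

T = extend_linearly([(1, 0, 1), (0, 2, 0)])  # T(e1), T(e2) in R^3
print(T((3, -1)))  # [3, -2, 3] = 3*T(e1) - 1*T(e2)
```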
The best linear maps are those that map bases to bases.
\begin{definition}\label{def:linear_iso}
A linear map $\deffun{T : V -> W;}$ is called a \textbf{linear isomorphism} if for any basis $B$ of $V$, $T(B) = \set{T(v_1), \dots, T(v_n)}$ is a basis of $W$.
\end{definition}
This definition packs quite a bit of information. First, it says that $T$ is surjective. This is because, since $T(B)$ is a basis of $W$, $\sspan(T(B)) = W$ and any $w \in W$ is in the image of $T$:
\[
w = a_1T(v_1) + \dots + a_nT(v_n) = T(a_1v_1 + \dots + a_n v_n).
\]
It also says that this representation of a vector in terms of $T(B)$ is unique, as that is the other property that bases have. In summary, we can represent any vector in $W$ uniquely in terms of $B$ and $T$. Reciprocally, if we denote the basis vectors $w_i = T(v_i)$, by~\Cref{lemma:linear_map_basis} there exists a unique linear map $\deffun{T' : W -> V;}$ such that $T'(w_i) = v_i$. Using the linearity of $T$ and $T'$, it is easy to prove that
\[
T'(T(v)) = v \quad \text{and}\quad T(T'(w)) = w \mathrlap{\qquad\forall v \in V, w \in W.}
\]
It should be clear now that a better name for this linear map would be $T^{-1}$. In fact, we have proved the following:
\begin{proposition}
A linear isomorphism as per~\Cref{def:linear_iso} has a linear inverse.\footnote{The reciprocal also holds, making this another possible definition of a linear isomorphism.}
\end{proposition}
This is another characterisation of linear isomorphisms which is very useful in practice:
\begin{proposition}[Characterisation of Linear Isomorphisms]\label{prop:characterisation_linear}
Let $\deffun{T : V -> W;}$ be a linear map. If $T(B)$ is a basis of $W$ for one basis $B$ of $V$, then $T(B')$ is a basis of $W$ for any basis $B'$ of $V$, that is, $T$ is a linear isomorphism.
\end{proposition}
Linear isomorphisms are the nicest maps in linear algebra, as they allow us to translate computations on one space to computations on another and back.
\begin{example}\label{example:polynomials}
A basis of the set of polynomials $\mathcal{P}_{<n}$ with real coefficients of degree less than $n$ is given by $B = \set{1, x, x^2, \dots, x^{n-1}}$, as any polynomial $p \in \mathcal{P}_{<n}$ is represented as a linear combination of these as $p(x) = \sum a_i x^i$. Denoting $p_i(x) = x^i$, we may map this basis of $\mathcal{P}_{<n}$ into the canonical basis of $\RR^n$ via
\[
T(p_0) = (1, 0, \dots, 0) \quad
T(p_1) = (0, 1, \dots, 0) \quad
\dots \quad
T(p_{n-1}) = (0, 0, \dots, 1)
\]
This extends to a linear map $\deffun{T: \mathcal{P}_{<n} -> \RR^n;}$ by~\Cref{lemma:linear_map_basis} and since it maps a basis to a basis, by~\Cref{prop:characterisation_linear}, it is a linear isomorphism. In other words, $\mathcal{P}_{<n}$ and $\RR^n$ are \emph{isomorphic} as real vector spaces.
\end{example}
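The isomorphism of~\Cref{example:polynomials} in action: computations with polynomials become computations with coordinate vectors. A sketch, storing a polynomial as a dict from exponent to coefficient:

```python
# The isomorphism T sends p(x) = a_0 + a_1 x + ... + a_{n-1} x^{n-1}
# to its coefficient vector (a_0, ..., a_{n-1}) in R^n.
def T(p, n):          # polynomial (dict exponent -> coeff) to coordinates
    return [p.get(i, 0) for i in range(n)]

def T_inv(coords):    # coordinates back to a polynomial
    return {i: a for i, a in enumerate(coords) if a != 0}

p = {0: 1, 2: 3}      # 1 + 3x^2
q = {1: 2, 2: -3}     # 2x - 3x^2
# Add the polynomials by adding their coordinate vectors in R^3:
s = T_inv([a + b for a, b in zip(T(p, 3), T(q, 3))])
print(s)  # {0: 1, 1: 2}, i.e. 1 + 2x
```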
\subsection{Differences between a real vector space and $\RR^n$}\label{sec:rn}
So far, we have set up the basic ideas of linear algebra: We have a vector space, which is an abstract space in which we can add vectors and multiply vectors by scalars, we can encode most of the information of a vector space into a basis, and we can map vector spaces to other vector spaces via linear maps. In particular, if we have a linear isomorphism between vector spaces, this allows us to translate operations on one to operations on the other and back, rendering them somewhat equivalent.
On the other hand, while doing all this, we have barely talked about $\RR^n$. This is a bit odd given that it is the main space that we want to study. The only thing that we have mentioned is that ``real vector spaces model $\RR^n$'' and that, as one could expect, $\RR^n$ is a real vector space. But why would we care about real vector spaces in the abstract? Why not simply work on $\RR^n$?
There are two main differences between real vector spaces and $\RR^n$. One huge and one subtler.
The huge one is that a general vector space may have infinite dimensions. We put an example of such a vector space in~\Cref{ex:3}, when we talked about infinite sequences of real numbers. Infinite-dimensional vector spaces are whole different beasts, and we will not talk about them here.
A real vector space $V$ of dimension $n$, on the other hand, looks much more similar to $\RR^n$:
\begin{theorem}\label{thm:RnisoV}
Let $V$ be a real vector space of dimension $n$, there exists a linear isomorphism $\deffun{T : V -> \RR^n;}$.\footnote{The real numbers are not important here. A vector space of dimension $n$ over $\KK$ is isomorphic to $\KK^n$ using the same argument.}
\end{theorem}
\begin{proof}
Choose a basis $B = \set{v_1, \dots, v_n}$ of $V$ and perform the same construction as in~\Cref{example:polynomials}, defining $T$ by sending $T(v_i) = e_i$ and extending linearly.
\end{proof}
This isomorphism says that we can identify every element of $V$ with a vector in $\RR^n$. As such, if we need to do computations in $V$, we can map $V$ into $\RR^n$ via $T$, do the computations in $\RR^n$, and map the result back to $V$ via $T^{-1}$, reducing computations in the abstract space $V$ to computations in $\RR^n$.
In the same way, if we have a linear map between vector spaces together with bases for these vector spaces, we can write this map in coordinates, giving rise to the following well-known concept.
\begin{definition}\label{def:matrix}
Given a linear map between real vector spaces $\deffun{T : V -> W;}$ with bases $B_V = \set{v_1, \dots, v_n}, B_W = \set{w_1, \dots, w_m}$, we define the \textbf{matrix associated to $T$} as the element $A \in \M{m, n}$ where the $i$-th column is given by the coordinates of $T(v_i)\in W$ in the basis $B_W$. In symbols,
\[
T(v_i) = A_{1, i}w_1 + \dots + A_{m,i}w_m\mathrlap{\qquad \text{for }i = 1, \dots, n.}
\]
\end{definition}
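This recipe is easy to try numerically. As an aside not in the formal development, here is a minimal Python sketch using NumPy that builds the matrix of a toy linear map $T(x_1, x_2) = (x_1 + x_2, 2x_1)$ column by column, by evaluating $T$ on the canonical basis; the particular map is our own illustrative choice.

```python
import numpy as np

# A linear map T: R^2 -> R^2, given only as a function.
def T(x):
    return np.array([x[0] + x[1], 2 * x[0]])

# Build its matrix column by column: the i-th column is T(e_i),
# written in coordinates with respect to the canonical basis.
n = 2
A = np.column_stack([T(e) for e in np.eye(n)])

# The matrix reproduces the map: A @ x == T(x) for every x.
x = np.array([3.0, -1.0])
assert np.allclose(A @ x, T(x))
```
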
Matrices are representations of linear maps, and any $n$-dimensional real vector space looks exactly like $\RR^n$, so it may seem that we are saying that all these things are the same. Well, not quite, as there is a subtle yet fundamental difference:
\begin{center}
$\RR^n$ is an abstract vector space $V$ of dimension $n$ \textbf{together with a choice of an ordered basis}.
\end{center}
The point here is that, given a finite-dimensional abstract vector space, there is no canonical choice of a basis.
It is for this reason that the approach to linear algebra that simply talks about vector spaces and linear maps and not about matrices and $\RR^n$ is often referred to as \textbf{coordinate-free}.
There are times, as happened in~\Cref{example:polynomials}, when there is a clear choice, but other times there is no distinguished basis.
We will spend the rest of these notes showing how thinking of a map $T$ as linear, meaning
\[
T(u+v) = T(u) + T(v)\qquad \text{and}\qquad T(au) = aT(u)\mathrlap{\qquad\forall u,v\in V, a\in\RR.}
\]
makes computations easier than thinking of $T$ as a matrix. We show an example as a taster:
\begin{example}[Linear maps are simpler than matrices]
If we think of a linear map as a matrix, it might not be clear at first sight that the trace of a matrix $\deffun{\tr : \M{n} -> \RR;}$ is a linear map. However, when we look at it using its abstract definition (\Cref{def:linear_map}), this becomes obvious, as
\[
\tr(A + B) = \tr(A) + \tr(B) \qquad \tr(cA) = c\tr(A)\mathrlap{\qquad \forall A,B \in \M{n}, c\in \RR.}
\]
Transposing a matrix $A \mapsto \trans{A}$ or taking the first two columns of a matrix are other examples of linear maps on matrices. We will see many more in~\Cref{sec:forward_mode}.
\end{example}
\section{Multivariable Calculus}\label{sec:multivariable_calculus}
In this section, we go over the standard definition of a differentiable map from multivariable calculus. We will avoid the language of coordinates---\eg, Jacobians and partial derivatives---whenever possible.
\begin{definition}\label{def:differential}
A map $\deffun{f : \RR^m -> \RR^n;}$ is \textbf{differentiable} at a point $x \in \RR^m$ if there exists a linear map $\deffun{L_x : \RR^m -> \RR^n;}$ such that
\[
\lim_{h \to 0_{\RR^m}}\frac{f(x+h) - f(x) - L_x(h)}{\norm{h}_{\RR^m}} = 0_{\RR^n}.
\]
In this case, we denote the linear map $\pa{df}_x = L_x$, and we call it the \textbf{differential of $f$ at $x$}. We say that $\pa{df}_x(v)$ is the \textbf{directional derivative of $f$ at $x$ in the direction $v$}.
\end{definition}
\begin{remark}[The differential is a first order linear approximation]
All this definition is saying is that there exists a linear first order approximation of $f$ at $x$. In other words, if we subtract the approximation from $f$, what we have left vanishes at zero faster than linearly---\ie, we have removed all the ``linear terms'' of $f$. Another way to look at this is by thinking that $\pa{\dif f}_x$ is the first term of the Taylor expansion of $f$
\begin{equation}\label{eq:taylor_expansion}
f(x+h) = f(x) + \pa{\dif f}_x(h) + o(h)
\end{equation}
where $f(x)$ is the $0^{\text{th}}$ order approximation to $f$ at $x$, $\pa{\dif f}_x(h)$ is the $1^{\text{st}}$ order approximation, and the $o(h)$ denotes that the difference vanishes at zero faster than linearly.\footnote{Formally, $o(h)$ means that there exists a function $E$ such that
$f(x+h) = f(x) + \pa{\dif f}_x(h) + E(h)\norm{h}$ with $\lim_{h \to 0}E(h) = 0$. This is clear choosing $E(h) = \frac{f(x+h) -f(x) - \pa{\dif f}_x(h)}{\norm{h}}$.}
\end{remark}
Even though it is possible to generalise some of these concepts to deal with functions that are not differentiable, such as $\relu(x) = \max(x, 0)$,
\begin{center}
We will always assume that the maps we consider are \textbf{differentiable}.
\end{center}
\begin{remark}[Directional derivatives]
We defined the directional derivative of a differentiable function $f$ at $x$ in the direction $v$ as $\pa{\dif f}_x(v)$. Now, since the function is differentiable, the limit as $h$ approaches zero always exists, so we can give a more concrete description of the directional derivative. We let $h$ tend to zero in the direction of $v \in \RR^m$ with $\norm{v} = 1$, by setting $h = \epsilon v$ for $\epsilon \in \RR$, getting that
\[
\pa{\dif f}_x(v) = \deriv{\epsilon}\Big\vert_{\epsilon=0} f(x+\epsilon v)
\]
where $\deriv{\epsilon}\Big\vert_{\epsilon=0}$ differentiates $f(x+\epsilon v)\in\RR^n$ coordinate-wise at $\epsilon=0$. This shows how $\pa{\dif f}_x(v) \in \RR^n$ represents how $f$ varies when $x$ is approached in the direction $v$.
\end{remark}
\begin{remark}[The matrix associated to the differential is the Jacobian]
The differential of $f$ is a linear map $\deffun{\pa{\dif f}_x : \RR^m -> \RR^n;}$ and as such, it has a matrix representation (\Cref{def:matrix}). The columns of this matrix representation of $\pa{\dif f}_x$ are given by evaluating $\pa{\dif f}_x$ on the vectors $\set{e_i}$ of the canonical basis of $\RR^m$
\[
\pa{\dif f}_x(e_i) = \deriv{\epsilon}\Big\vert_{\epsilon=0} f(x+\epsilon e_i) = \frac{\partial f}{\partial x_i}(x).
\]
In other words, the matrix associated to the differential is the usual Jacobian matrix.
\end{remark}
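To connect the abstract picture with computation, here is a small Python sketch (using NumPy; the example function is our own choice) that assembles the Jacobian of a map column by column from central finite-difference approximations of directional derivatives, and compares it with the analytic Jacobian.

```python
import numpy as np

# f : R^2 -> R^2, f(x, y) = (x*y, sin(x)).
def f(v):
    x, y = v
    return np.array([x * y, np.sin(x)])

def directional_derivative(f, x, v, eps=1e-6):
    # Central finite-difference approximation of (df)_x(v).
    return (f(x + eps * v) - f(x - eps * v)) / (2 * eps)

x = np.array([0.5, 2.0])
# Columns of the Jacobian are the directional derivatives along e_i.
J = np.column_stack([directional_derivative(f, x, e) for e in np.eye(2)])

# Analytic Jacobian: [[y, x], [cos(x), 0]].
J_exact = np.array([[x[1], x[0]], [np.cos(x[0]), 0.0]])
assert np.allclose(J, J_exact, atol=1e-6)
```
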
It will come as no surprise that we will never talk about partial derivatives nor use the Jacobian. We will just use the definition of the differential together with two results. The first one is the chain rule:
\begin{theorem}[Chain rule]\label{thm:chain_rule}
Let $\deffun{f : \RR^m -> \RR^n;}$ and $\deffun{g : \RR^n -> \RR^p;}$ be two differentiable maps. We have that
\[
\dif \pa{g \circ f}_x = \pa{\dif g}_{f(x)} \circ \pa{\dif f}_x\mathrlap{\qquad\forall x \in \RR^m}
\]
or equivalently
\[
\dif \pa{g \circ f}_x\pa{v} = \pa{\dif g}_{f(x)}\cor{\pa{\dif f}_x\pa{v}}\mathrlap{\qquad \forall x,v\in \RR^m.}
\]
\end{theorem}
This formula is the crux of all \ad{} engines, as we will see in the sequel. Note that this formula states an equality between a linear map and the composition of two other linear maps. It is also worth noting that the domains and codomains of these maps are compatible:
\begin{center}
\begin{tikzpicture}
\node[text width=4cm, anchor=west, right] at (-0.7,1) {Differentiable Maps};
\node (A) {$\RR^m$};
\node (B) [right of=A] {$\RR^n$};
\node (C) [below of=B] {$\RR^p$};
\draw[->] (A) to node {$f$} (B);
\draw[->] (B) to node {$g$} (C);
\draw[->] (A) to node [swap] {$g\circ f$} (C);
\node[text width=4cm, anchor=west, right] at (4,1) {Linear Maps};
\node (A1) [right of=B] {$\RR^m$};
\node (B1) [right of=A1] {$\RR^n$};
\node (C1) [below of=B1] {$\RR^p$};
\draw[->] (A1) to node {$\pa{\dif f}_x$} (B1);
\draw[->] (B1) to node {$\pa{\dif g}_{f(x)}$} (C1);
\draw[->] (A1) to node [swap] {$\dif \pa{g\circ f}_x$} (C1);
\end{tikzpicture}
\end{center}
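The chain rule can also be sanity-checked numerically. A minimal Python sketch, assuming NumPy and approximating both sides with central finite differences (the maps $f$ and $g$ below are illustrative choices):

```python
import numpy as np

# g : R^2 -> R^3 and f : R^3 -> R, both smooth.
def g(x):
    return np.array([x[0] * x[1], np.exp(x[0]), x[1] ** 2])

def f(y):
    return np.sin(y[0]) + y[1] * y[2]

def diff(h, x, v, eps=1e-6):
    # Central finite-difference approximation of (dh)_x(v).
    return (h(x + eps * v) - h(x - eps * v)) / (2 * eps)

x = np.array([0.3, -1.2])
v = np.array([1.0, 0.5])

# Chain rule: d(f∘g)_x(v) = (df)_{g(x)}[(dg)_x(v)].
lhs = diff(lambda t: f(g(t)), x, v)
rhs = diff(f, g(x), diff(g, x, v))
assert np.isclose(lhs, rhs, atol=1e-4)
```
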
The other rule that we will use repeatedly is a more abstract version of the derivative of a product. This roughly says that ``differentiating a function of two variables accounts for differentiating the first variable fixing the second one, plus differentiating the second variable fixing the first one''.
\begin{proposition}[Leibniz rule]\label{prop:leibnitz}
Let $\deffun{f : \RR^n \times \RR^m -> \RR^p;}$ be a differentiable function. Define $\deffun{f_{1,x} : \RR^m -> \RR^p;}$ for $x \in \RR^n$ as $f$ partially evaluated in its first variable at $x \in \RR^n$, that is, $f_{1,x}(y) \defi f(x, y)$. Define also $f_{2, y}(x) = f(x,y)$. We have that
\[
\pa{\dif f}_{(x,y)}\pa{e_1, e_2} = \pa{\dif f_{2,y}}_x(e_1) + \pa{\dif f_{1,x}}_y(e_2)
\]
\end{proposition}
The formula in this proposition may look quite difficult to parse, but it will be much easier to understand when we use it in the next section to compute the differential of some matrix functions.
\begin{remark}[On gradients]
Note that we have not defined the gradient of a function yet, but just its differential. The definition of the gradient of a function will have to wait until~\Cref{sec:backward_mode}.
\end{remark}
\section{Forward Mode \ad}\label{sec:forward_mode}
\subsection{The model and definition}
After introducing the necessary concepts from linear algebra and multivariate calculus, we are ready to put them to good use in the context of \ad. Let us consider a model with two differentiable maps
\begin{figure}[H]
\centering
\begin{tikzpicture}
\node (A) {$\RR^m$};
\node (B) [right of=A] {$\RR^n$};
\node (C) [right of=B] {$\RR^p$};
\draw[->] (A) to node {$g$} (B);
\draw[->] (B) to node {$f$} (C);
\end{tikzpicture}
\caption{The automatic differentiation model.}\label{fig:model}
\end{figure}
\begin{remark}[Several inputs and outputs]
The first simplification that we have performed in~\Cref{fig:model} is that, for a map $\deffun{h : \RR^n \times \RR^m -> \RR^p;}$, we can always see it as a map from $\RR^{n + m}$ into $\RR^p$. As such, it is enough to talk about maps with just one input.\footnote{Sometimes, it may be beneficial to look at maps of several inputs though, as some properties such as linearity are not preserved by this transformation. We will see examples of this later in this section.} The same happens with maps with several outputs, only that in this case we can see them as separate maps of one output, and we can differentiate them separately.
\end{remark}
\begin{remark}[Reduction from general \ad]
When we do automatic differentiation, we first form a directed acyclic graph of dependencies between maps---the model---ending in a map that outputs a real number---the loss function. Without entering into the details of how to define this graph of dependencies, we can think of $f$ as the last step of the graph (in topological order) that produces the real number---\ie, set $p = 1$ above---and let $g$ be the rest of the graph.
Then, if we want to split $g$ further, we can consider its last step $g_2$ and the rest of the steps $g_1$, write $g = g_2 \circ g_1$, and proceed inductively, as we are in the same situation as above.
\end{remark}
\begin{remark}[Neural networks]
In the case of neural networks, we have a function $F(\theta, x)$ where $\theta$ are its parameters concatenated and $x$ is an example from our dataset. We then want to differentiate with respect to the parameters. This is exactly the same idea as above, where we have the function $\theta \mapsto F(\theta, x)$ for a fixed $x$.
Of course, this example does not encode \emph{all} possible neural network architectures, as we have not mentioned what to do when the functions involved are not differentiable, or when we use integers or have \code{if}-\code{else} constructions. That being said, extending the theory presented here to all these ideas is not formally challenging; the complexity lies in the implementation of the resulting algorithm.
\end{remark}
Now that we have everything set, we are ready to define forward mode automatic differentiation.
\begin{definition}
\textbf{Forward mode \ad} for the model represented in~\Cref{fig:model} amounts to computing
\[
\dif\pa{f \circ g}_x\mathrlap{\qquad \text{for } x \in \RR^m.}
\]
\end{definition}
In plain words, forward mode \ad{} computes the differentials of the model with respect to the parameters. This may be achieved incrementally via the chain rule (\Cref{thm:chain_rule}), as it tells us how to put together the differential of two functions to compute the differential of the composition. These ideas are often presented in the literature as ``forward mode \ad{} accumulates the product of the Jacobians'', which is the same idea but in coordinates.
\begin{remark}[Dual numbers]
Forward mode \ad{} is frequently defined in terms of \textbf{dual numbers}. Dual numbers are defined through an abstract quantity called $\epsilon$ with the property that $\epsilon^2 = 0$. Points are then described as $v+\dot{v}\epsilon$ and the following expansion is stated
\begin{equation}\label{eq:dual_numbers}
f\pa{v+\dot{v}\epsilon} \stackrel{?}{=} f(v) + f'(v)\dot{v}\epsilon.
\end{equation}
Now, this second equality is often not justified, and left to the reader to interpret.\footnote{This is not really true. These identities can be formalised via \emph{perturbation theory}~\parencite{kato1995perturbation}.} It is typically shown by means of the computation of the derivative of the product of real numbers.
\[
\pa{v+\dot{v}\epsilon}\pa{u+\dot{u}\epsilon} = uv + (\dot{u}v+u\dot{v})\epsilon\mathrlap{\qquad u,v,\dot{u},\dot{v}\in \RR.}
\]
A moment's reflection shows that all this approach is encoding is the idea that the differential is a first order approximation to the function! Having an $\epsilon$ such that $\epsilon^2 = 0$ simply says that we just care about the terms that are linear in epsilon, and we discard any term of order two or higher.
This is exactly what the idea of the differential of a map formalises. As such, a formal justification of~\eqref{eq:dual_numbers} is then given by the order $1$ Taylor expansion in~\eqref{eq:taylor_expansion}.
The example of the multiplication of real numbers can then be described in the language of calculus as letting $\deffun{f : \RR^2 -> \RR;}$ defined by $f(x,y) = xy$ and computing $\pa{\dif f}_{\pa{u, v}}\pa{\dot{u}, \dot{v}} = \dot{u}v+u\dot{v}$.\footnote{We will prove this formula and generalise it to vectors and matrices later in this section.}
\end{remark}
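The discussion above can be turned into a toy implementation. Below is a minimal dual-number class in Python, where each value carries the coefficient of $\epsilon$ alongside it and multiplication drops the $\epsilon^2$ term; it is a sketch for illustration, not a full forward mode engine.

```python
# Minimal dual-number sketch of forward mode AD: a pair (val, dot)
# represents val + dot*ε with ε² = 0, so products keep only the
# terms that are linear in ε.
class Dual:
    def __init__(self, val, dot):
        self.val, self.dot = val, dot

    def __add__(self, other):
        return Dual(self.val + other.val, self.dot + other.dot)

    def __mul__(self, other):
        # (v + v̇ε)(u + u̇ε) = vu + (v̇u + vu̇)ε, dropping the ε² term.
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

# Differentiate h(x) = x*x + x at x = 3 by seeding the tangent ẋ = 1.
x = Dual(3.0, 1.0)
y = x * x + x
assert y.val == 12.0   # h(3) = 12
assert y.dot == 7.0    # h'(3) = 2*3 + 1 = 7
```

Seeding `dot = 1.0` on the input and reading off `y.dot` is exactly the directional derivative $\pa{\dif h}_x(1)$ computed by the chain rule, one elementary operation at a time.
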
\subsection{Computing forward mode \ad}
We will spend the rest of this section showing how the abstract definition of the differential (\Cref{def:differential}) makes computations surprisingly easy.
\begin{proposition}[Differential of a linear map]\label{prop:dif_linear_map}
Let $\deffun{T: \RR^m -> \RR^n;}$ be a linear map, we have that
\[
\pa{\dif T}_x(v) = T(v)\mathrlap{\qquad \forall x,v \in \RR^m.}
\]
\end{proposition}
\begin{proof}
Plugging $\pa{\dif T}_x(v) = T(v)$ into the definition of the differential we get
\[
\lim_{h \to 0_{\RR^m}}\frac{T(x+h) - T(x) - T(h)}{\norm{h}_{\RR^m}} =
\lim_{h \to 0_{\RR^m}}\frac{T(x) + T(h) - T(x) - T(h)}{\norm{h}_{\RR^m}} =
\lim_{h \to 0_{\RR^m}}\frac{0_{\RR^n}}{\norm{h}_{\RR^m}} =
0_{\RR^n}.\qedhere
\]
\end{proof}
We can use this to compute the differential of a number of functions widely used in machine learning.\vspace{-0.5em}
\begin{example}[Differential of a linear layer. Trailing batch dimension]
Fix $X \in \M{m, b}$ a batch of $b$ vectors of size $m$ (trailing batch dimension), and let $A \in \M{n,m}$. We can define
\[
\deffun{f : \M{n, m} -> \M{n, b}; A -> AX}
\mathrlap{\qquad\qquad X \in \M{m, b}.}
\]
This is just the usual definition of a linear layer depending on the parameter and with fixed inputs. We do this because we want to differentiate with respect to the parameters. It is clear that
\[
f(A+B) = f(A) + f(B) \qquad f(cA) = cf(A) \mathrlap{\qquad \forall A,B \in \M{n,m}, c \in \RR.}
\]
so $f$ is linear and by~\Cref{prop:dif_linear_map}
\[
\pa{\dif f}_A(E) = EX\mathrlap{\qquad{\text{for }E \in \M{n, m}}.}
\]
\end{example}
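Since $f$ is linear, the formula $\pa{\dif f}_A(E) = EX$ can be checked directly against finite differences. A short NumPy sketch (shapes and random seed chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, b = 3, 4, 5
X = rng.standard_normal((m, b))   # fixed batch, trailing batch dimension
A = rng.standard_normal((n, m))   # parameter matrix
E = rng.standard_normal((n, m))   # direction in which we differentiate

def f(A):
    return A @ X

# Since f is linear, (df)_A(E) = f(E) = E @ X; compare with central differences.
eps = 1e-6
fd = (f(A + eps * E) - f(A - eps * E)) / (2 * eps)
assert np.allclose(fd, E @ X, atol=1e-6)
```
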
\begin{example}[Differential of a linear layer. Front batch dimension]
The batch dimension in machine learning is often the first dimension of the tensor due to the layout of matrices in memory. For this reason, it is common to write a batch $b$ of $m$-dimensional vectors as $X \in \M{b, m}$. In this case, we may write a linear layer as
\[
\deffun{f : \M{n, m} -> \M{b, n}; A -> X\trans{A}}
\mathrlap{\qquad\qquad X \in \M{b, m}}
\]
This function has a transpose and a matrix multiplication. In particular, if we write $g(A) = \trans{A}$ and $h(A) = XA$ we have that $f = h \circ g$.
Luckily, both the transpose and the matrix multiplication are linear functions, so we can use the chain rule (\Cref{thm:chain_rule}) and the formula for the differential of a linear function twice to compute the differential of $f$:
\[
\pa{\dif f}_A(E) =
\dif\pa{h \circ g}_A(E) =
\cor{\pa{\dif h}_{g(A)} \circ \pa{\dif g}_A}(E) =
\pa{\dif h}_{\trans{A}}(g(E)) =
h(\trans{E}) =
X\trans{E}.
\]
A simpler way of performing this computation is by noting that $f$ is linear itself so
\[
\pa{\dif f}_A(E) = f(E) = X\trans{E}.
\]
\end{example}
\begin{example}[More linear maps]
Linear maps come in different shapes and forms
\begin{itemize}
\item \textbf{Inner product of vectors}.
Let $\deffun{f_{2,y} : \RR^n -> \RR;}$ with $f_{2,y}(x) = \trans{x}y = \sum_{i=1}^n x_i y_i$ for a fixed $y \in \RR^n$, then $\pa{\dif f_{2,y}}_x(v) = \trans{v}y$. Fixing the first variable, $f_{1,x}(y) = \trans{x}y$, $\pa{\dif f_{1,x}}_y(v) = \trans{x}v$.
\item \textbf{Trace of a matrix}.
Let $\deffun{f : \M{n} -> \RR;}$ with $f(A) = \tr\pa{A}$, then $\pa{\dif f}_A(E) = \tr\pa{E}$.
\item \textbf{Inner product of matrices}.
Let $\deffun{f_{1,X} : \M{m,n} -> \RR;}$ with $f_{1,X}(A) = \tr\pa{\trans{A}X} = \sum_{i=1}^m\sum_{j=1}^n A_{ij} X_{ij}$ for a fixed $X \in \M{m,n}$.\footnote{Note that $\tr\pa{\trans{A}X}$ is just a convenient way to represent the inner product of matrices seen as vectors in $\RR^N$ with $N = mn$.}
We then have $\pa{\dif f_{1,X}}_A(E) = \tr\pa{\trans{E}X}$ and an analogous formula for the second variable.
\end{itemize}
\end{example}
\begin{example}[Several inputs]
Consider the inner product of vectors as a function of two arguments
\[
\deffun{f : \RR^n \times \RR^n -> \RR; x, y -> \trans{x}y;}
\]
and define the function partially evaluated in its first and second argument as $\deffun{f_{i,x} : \RR^n -> \RR;}$ for $i = 1,2$ so that $f_{1,x}(y) = f_{2,y}(x) = f(x,y)$. We have that $f_{i,x}$ are linear, as for $i=1,2$
\[
f_{i,u}\pa{v+w} = f_{i,u}\pa{v} + f_{i,u}\pa{w} \qquad f_{i,u}\pa{cv} = cf_{i,u}\pa{v}\qquad\qquad \forall u,v,w \in \RR^n, c \in \RR.
\]
For this reason, we can compute its differential using the Leibniz rule (\Cref{prop:leibnitz})
\[
\pa{\dif f}_{(x,y)}\pa{e_1, e_2} = \pa{\dif f_{2,y}}_x(e_1) + \pa{\dif f_{1,x}}_y(e_2) = f_{2,y}(e_1) + f_{1,x}(e_2) = \trans{e_1}y + \trans{x}e_2\qquad x, y, e_1, e_2 \in \RR^n.
\]
In contrast, note that $f$ itself is not linear as a function from $\RR^{2n}$ to $\RR$ as $f(ax,ay) = a^2f(x,y)$.
\end{example}
\begin{example}[Powers of a matrix]\label{ex:powers}
Consider the map that multiplies a matrix with itself $k$ times
\[
\deffun{f : \M{n} -> \M{n}; A -> A^k = A \stackrel{k)}{\cdots} A}
\]
This is the same as evaluating the map $g(A_1, \dots, A_k) = A_1 \cdots A_k$ for $A_i \in \M{n}$ at $(A, A, \dots, A)$. We can compute the differential of $g$ using~\Cref{prop:leibnitz} since $g$ is linear in every entry:
\[
\pa{\dif g}_{\pa{A_1, \dots, A_k}}\pa{E_1, \dots, E_k} = E_1A_2 \cdots A_k + A_1E_2 \cdots A_k + \dots + A_1A_2\cdots E_k.
\]
So the differential of $f$ is given by
\[
\pa{\dif f}_A(E)
= \pa{\dif g}_{\pa{A, \dots, A}}\pa{E, \dots, E}
= EA^{k-1} + AEA^{k-2} + \dots + A^{k-1}E
= \sum_{i=0}^{k-1} A^iEA^{k-i-1}.
\]
\end{example}
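The formula in \Cref{ex:powers} can be sanity-checked numerically. A NumPy sketch (matrix size, power, and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 4, 5
A = rng.standard_normal((n, n))
E = rng.standard_normal((n, n))

def f(A):
    return np.linalg.matrix_power(A, k)

# Formula from the example: (df)_A(E) = Σ_{i=0}^{k-1} A^i E A^{k-i-1}.
dfE = sum(np.linalg.matrix_power(A, i) @ E @ np.linalg.matrix_power(A, k - i - 1)
          for i in range(k))

# Compare with a central finite-difference approximation.
eps = 1e-6
fd = (f(A + eps * E) - f(A - eps * E)) / (2 * eps)
assert np.allclose(fd, dfE, atol=1e-3)
```
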
This example shows that, morally, if we can write a map $f$ as a map $g$ on more variables such that $g$ is linear in each of its variables, all we need to do to compute the differential of $f$ is to substitute each appearance of $A$ by $E$ in $f$, one at a time, and add the resulting terms together. More generally, if the function is not linear in some of the variables, we substitute each appearance of a subexpression $g(A)$ by $\pa{\dif g}_A(E)$, as described in~\Cref{prop:leibnitz}. We show this idea in the following example.
\begin{example}[Matrix inverse]\label{ex:inverse}
Let $\GL{n} \subset \M{n}$ be the set of invertible matrices. Define
\[
\deffun{f : \GL{n} -> \GL{n}; A -> A^{-1}}
\]
We have that, by definition of the matrix inverse
\[
Af(A) = \I_n\mathrlap{\qquad \forall A \in \GL{n}.}
\]
Defining $g(A) = Af(A)$ this identity can be rewritten as $g(A) = \I_n$ for $A \in \GL{n}$.
This is an equality between functions---one of them constant---so we may differentiate both sides.\footnote{Formally, we would first need to define what it means to differentiate over $\GL{n}$. Luckily, $\GL{n}$ is an open subset of $\M{n}$, and since the definition of differential is local, we can always define the differential at any matrix $A \in\GL{n}$ by restricting the limit in~\Cref{def:differential} to a neighbourhood of $A$.}
It is immediate from~\Cref{def:differential} that the differential of a constant map is the map that sends any $E$ to the zero matrix. On the left-hand side we apply~\Cref{prop:leibnitz} to get
\[
EA^{-1} + A\pa{\dif f}_A(E) = 0_{n\times n}
\]
and solving for $\pa{\dif f}_A(E)$ we get
\[
\pa{\dif f}_A(E) = -A^{-1}EA^{-1}.
\]
Note that this is a far-reaching generalisation of the result $(1/x)' = -1/x^2$ for $x \in \RR\backslash\set{0} = \GL{1}$.
\end{example}
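As with the powers, the formula $\pa{\dif f}_A(E) = -A^{-1}EA^{-1}$ can be verified against finite differences. A NumPy sketch, where we shift $A$ by a multiple of the identity to keep it comfortably invertible:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well away from singular
E = rng.standard_normal((n, n))

Ainv = np.linalg.inv(A)
dfE = -Ainv @ E @ Ainv                  # formula from the example

# Central finite-difference approximation of the differential of A -> A^{-1}.
eps = 1e-6
fd = (np.linalg.inv(A + eps * E) - np.linalg.inv(A - eps * E)) / (2 * eps)
assert np.allclose(fd, dfE, atol=1e-6)
```
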
Before giving the last result, we show how to extend functions on the real numbers to matrix functions.
\begin{definition}\label{def:matrix_function}
Let $f_{\RR}(x) = \sum_{k=0}^\infty c_kx^k$ be an analytic function---\ie, a function equal to its Taylor series. We define its associated \textbf{matrix function} as\footnote{A matrix function is defined at $A \in \M{n}$ if and only if all the eigenvalues of $A$ lie in the domain of definition of $f$ when seen as a function from $\CC$ to $\CC$.}
\[
\deffun{f : \M{n} -> \M{n} ; A -> \sum_{k=0}^\infty c_k A^k.}
\]
\end{definition}
\begin{example} Any function with a Taylor series can be turned into a matrix function:
\vspace{\baselineskip}
\noindent
\begin{minipage}[t]{.5\textwidth}
\begin{itemize}
\item \textbf{Exponential}: $\exp(A) = \sum_{k=0}^\infty \frac{1}{k!}A^k$
\item \textbf{Logarithm}: $\log\pa{I_n + A} = \sum_{k=0}^\infty \frac{(-1)^k}{k+1}A^{k+1}$
\end{itemize}
\end{minipage}%
\begin{minipage}[t]{.5\textwidth}
\begin{itemize}
\item \textbf{Sine}: $\sin(A) = \sum_{k=0}^\infty \frac{(-1)^k}{(2k+1)!}A^{2k+1}$
\item \textbf{Cosine}: $\cos(A) = \sum_{k=0}^\infty \frac{(-1)^k}{(2k)!}A^{2k}$
\end{itemize}
\end{minipage}
\\[3pt]
\end{example}
We present the last and most general result of this section, which can be roughly summarised as:
\begin{center}
If we know how to approximate a matrix function, we know how to approximate its differential.
\end{center}
\begin{theorem}[Differential of a Matrix Function {\parencite{mathias1996chain}}]\label{thm:mathias}
Let $\deffun{f : \M{n} -> \M{n};}$ be a matrix function (\Cref{def:matrix_function}). Applying $f$ to a matrix of size $2n \times 2n$, we get the following result by blocks:
\[
f\begin{pmatrix}
A & E \\
0 & A
\end{pmatrix} =
\begin{pmatrix}
f(A) & \pa{\dif f}_A(E) \\
0 & f(A)
\end{pmatrix}\mathrlap{\qquad \forall A,E\in\M{n}.}
\]
\end{theorem}
\begin{proof}
Differentiating the series term by term and using~\Cref{ex:powers} we have that
\begin{equation}\label{eq:differential_analytic}
\pa{\dif f}_A(E) = \sum_{k=0}^\infty c_k\sum_{i=0}^{k-1}A^iEA^{k-i-1}.
\end{equation}
We can also compute the powers of the block matrix
\[
\begin{pmatrix}
A & E \\
0 & A
\end{pmatrix}^k =
\begin{pmatrix}
A^k & \sum_{i=0}^{k-1}A^iEA^{k-i-1} \\
0 & A^k
\end{pmatrix}
\]
so
\[
f\begin{pmatrix}
A & E \\
0 & A
\end{pmatrix} =
\sum_{k=0}^\infty c_k
\begin{pmatrix}
A & E \\
0 & A
\end{pmatrix}^k =
\sum_{k=0}^\infty
\begin{pmatrix}
c_k A^k & c_k \sum_{i=0}^{k-1}A^iEA^{k-i-1} \\
0 & c_k A^k
\end{pmatrix} =
\begin{pmatrix}
f(A) & \pa{\dif f}_A(E) \\
0 & f(A)
\end{pmatrix}.\qedhere
\]
\end{proof}
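\Cref{thm:mathias} can be checked numerically, for instance for $f = \exp$. The sketch below approximates the matrix exponential by a truncated Taylor series (our own helper, for illustration; a library routine such as SciPy's \code{expm} would also do) and compares the upper-right block with a finite-difference differential:

```python
import numpy as np

def mexp(M, terms=30):
    # Truncated Taylor series of the matrix exponential (illustration only;
    # production code would use a scaling-and-squaring algorithm instead).
    S, P = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        P = P @ M / k
        S = S + P
    return S

rng = np.random.default_rng(3)
n = 3
A = rng.standard_normal((n, n))
E = rng.standard_normal((n, n))

# Apply f = exp to the block matrix [[A, E], [0, A]].
M = np.block([[A, E], [np.zeros((n, n)), A]])
F = mexp(M)

# Diagonal blocks are f(A); the upper-right block is (df)_A(E).
assert np.allclose(F[:n, :n], mexp(A))
eps = 1e-6
fd = (mexp(A + eps * E) - mexp(A - eps * E)) / (2 * eps)
assert np.allclose(F[:n, n:], fd, atol=1e-5)
```
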
\section{Backward Mode \ad}\label{sec:backward_mode}
In this section, we go over the most popular method of automatic differentiation: backward mode \ad. This method has the advantage that, in order to compute the backward pass for a model, one does not have to deal with matrices---the Jacobians associated to the differential of the model---but just with vectors of the same size as the parameters.
\subsection{Inner products, gradients, and adjoints}
Before introducing the definition of backward mode \ad, we need to define one more mathematical concept from linear algebra.
\begin{definition}\label{def:inner_product}
Let $V$ be a real vector space. A (real) \textbf{inner product} is a map
\[
\deffun{\scalar{-,-} : V \times V -> \RR; x, y -> \scalar{x,y}}
\]
such that it is
\begin{itemize}
\item Bilinear: It is linear in each variable.
\item Symmetric: $\scalar{x,y} = \scalar{y,x}$ for every $x,y \in V$.
\item Positive definite: $\scalar{x,x} > 0$ for every $x \in V$, $x \neq 0$.
\end{itemize}
\end{definition}
We will write \spd{} as short for symmetric positive definite matrix, as we will use them in examples.
\begin{example}\label{ex:inner_prod}
The following are examples of inner products
\begin{itemize}
\item \textbf{Canonical inner product on $\RR^n$:} $\scalar{x,y} = \trans{x}y$ for $x,y \in \RR^n$.
\item \textbf{Other inner products on $\RR^n$:} $\scalar{x,y} = \trans{x}Hy$ for $x,y \in \RR^n$ and a fixed $H \in \M{n}$ \spd.\footnote{To prove that this is positive definite, consider the Cholesky decomposition of $H = \trans{U}U$ with $U$ upper-triangular.}
\item \textbf{Canonical inner product on $\M{m,n}$:} $\scalar{A,B} = \tr\pa{\trans{A}B}$ for $A,B \in \M{m,n}$.
\item \textbf{Other inner products on $\M{m,n}$:} $\scalar{A,B} = \tr\pa{\trans{A}HB}$ for $A,B \in \M{m,n}$ and $H\in\M{m}$ \spd.
\end{itemize}
\end{example}
Inner products allow us to measure norms of vectors $\norm{x} = \sqrt{\scalar{x, x}}$, angles between vectors $\angle\pa{x,y} = \arccos\frac{\scalar{x,y}}{\norm{x}\norm{y}}$, distances $d(x,y) = \norm{x-y}$, and many other metric properties. As such, it will come as no surprise that inner products are very important in machine learning and optimisation. For one, we need them to talk about the distance from a point to the optimum and rates of convergence. Perhaps less known is the fact that we also require them to talk about gradients.
\begin{remark}[Motivating the concept of gradient]
For a function $\deffun{f : \RR^n -> \RR;}$ and an $x \in \RR^n$, the map $v \mapsto \pa{\dif f}_x(v)$ is a linear function from $\RR^n$ to $\RR$. Now, if we have an inner product $\scalar{-,-}$ on $\RR^n$, for a fixed $g \in \RR^n$, the function $v \mapsto \scalar{g, v}$ is also a linear function from $\RR^n$ to $\RR$. The question now is, given an inner product on $\RR^n$ and a function $f$, can we always represent the differential of $f$ as a vector $g_x \in \RR^n$ such that $\pa{\dif f}_x(v) = \scalar{g_x, v}$? This is, in fact, the case, and it is the definition of a well-known concept.
\end{remark}
\begin{definition}
Let $\deffun{f : \RR^n -> \RR;}$, and let $\scalar{-, -}$ be an inner product on $\RR^n$. We define the \textbf{gradient of $f$ at $x\in \RR^n$} as the vector $\grad f(x) \in \RR^n$ such that
\[
\pa{\dif f}_x(v) = \scalar{\grad f(x), v}\mathrlap{\qquad \forall v \in\RR^n.}
\]
\end{definition}
\begin{remark} A number of remarks are in order.
\begin{itemize}
\item As $x \mapsto \trans{v}x$ for $x,v \in \RR^n$ is a linear function, some people like to think informally of vectors as ``column vectors'' and linear functions as ``row vectors''. This way, the operation of going from a differential $\trans{v} \cdot -$ to a gradient $v$ for the canonical inner product looks like ``transposing'' $\trans{v}$.
\item \textbf{Important}. The gradient and the differential of a function are not the same thing. The first one is a \textbf{vector}, while the latter one is a linear \textbf{map} into the real numbers.
\item The gradient of a function depends on the choice of inner product; the differential does not, since on finite-dimensional spaces all norms are equivalent.
\end{itemize}
\end{remark}
\begin{example}
We compute the gradient of some functions building on results from~\Cref{sec:forward_mode}.
\begin{itemize}
\item Consider $\RR^n$ with the canonical inner product, and let $f(x) = \scalar{g, x} = \trans{g}x$ for a fixed $g\in \RR^n$. Since $f$ is linear $\pa{\dif f}_x(v) = \scalar{g, v}$, and by definition of a gradient, $\grad f(x) = g$ for every $x\in\RR^n$.
\item Consider $\RR^n$ with an arbitrary inner product, and let $f(x) = \scalar{g,x}$ then $\grad f(x) = g$ for every $x\in\RR^n$.
\item Consider $\RR^n$ with the inner product $\scalar{u,v} = \trans{u}Hv$ for $H$ \spd{} (see \Cref{ex:inner_prod}) and let $f(x) = \trans{x}y$ for a fixed $y \in \RR^n$. As in the first example, $\pa{\dif f}_x(v) = \trans{v}y$ since $f$ is linear. On the other hand
\[
\pa{\dif f}_x(v)
= \trans{v}y
= \trans{v}HH^{-1}y
= \scalar{v, H^{-1}y}
= \scalar{H^{-1}y, v}
\]
and so $\grad f(x) = H^{-1}y$ for every $x \in \RR^n$.
\item Consider $\M{n}$ with the canonical inner product $\scalar{A,B} = \tr\pa{\trans{A}B}$, and let $f(A) = \tr(A)$. Since $f$ is linear $\pa{\dif f}_A(E) = \tr(E) = \tr\pa{\trans{\pa{\I_n}}E} = \scalar{\I_n, E}$. Thus, $\grad f(A) = \I_n$ for every $A \in \M{n}$.
\item Let $\deffun{f : \RR^n -> \RR;}$, and consider the canonical inner product on $\RR^n$. The $i$-th coordinate of $\grad f(x)$ is equal to $\frac{\partial f}{\partial x_i}(x)$.
\item Let $\deffun{f : \RR^n -> \RR;}$, and consider the inner product on $\RR^n$ given by $\scalar{x,y} = \trans{x}Hy$ for $H \in \M{n}$ \spd. Denote by $g_x \in \RR^n$ the vector with $i$-th coordinate equal to $\frac{\partial f}{\partial x_i}(x)$---\ie, the gradient of $f$ with respect to the canonical inner product. Then $\grad f(x) = H^{-1}g_x$, while $\pa{\dif f}_x(v) = \trans{g_x}v$ regardless of the inner product.
\end{itemize}
\end{example}
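The third example above (the gradient $H^{-1}y$ under the $H$-inner product) can be verified numerically. A NumPy sketch with an arbitrarily chosen \spd{} matrix $H$:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
# An SPD matrix H defines the inner product <u, v>_H = uᵀ H v.
B = rng.standard_normal((n, n))
H = B.T @ B + n * np.eye(n)

y = rng.standard_normal(n)
f = lambda x: x @ y            # f(x) = xᵀy, a linear functional

# Under <·,·>_H the gradient is H⁻¹y: <H⁻¹y, v>_H = yᵀv = (df)_x(v).
g = np.linalg.solve(H, y)
v = rng.standard_normal(n)
assert np.isclose(g @ H @ v, y @ v)
```
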
\begin{remark}[Gradient of a composition]
Consider a function $\deffun{h : \RR^m -> \RR;}$ defined as a composition $h = f \circ g$ with $\deffun{g : \RR^m -> \RR^n;}$ and $\deffun{f : \RR^n -> \RR;}$ and fix inner products on $\RR^m$ and $\RR^n$. How do we compute the gradient of $h$ in terms of $g$ and $f$? By the chain rule (\Cref{thm:chain_rule}) and the definition of the gradient of $f$ we have that for $x \in \RR^m$, denoting $y = g(x) \in \RR^n$
\[
\pa{\dif h}_x(v) = \pa{\dif f}_y\cor{\pa{\dif g}_x(v)} = \scalar{\grad f(y), \pa{\dif g}_x(v)}.
\]
To be able to compute the gradient of $h$ at $x$, we would have to solve for $v$ on the last equality, sending the linear map $\pa{\dif g}_x$ to the left-hand side of the inner product. This is exactly what the adjoint of a linear map achieves.
\end{remark}
\begin{definition}
Let $\deffun{T : V -> W;}$ be a linear map between real finite-dimensional vector spaces with inner products $\scalar{-, -}_V, \scalar{-,-}_W$. We define its \textbf{adjoint} as the linear map $\deffun{T^\ast : W -> V;}$ such that
\[
\scalar{w, T(v)}_W = \scalar{T^\ast(w), v}_V\mathrlap{\qquad \forall v\in V, w \in W.}
\]
\end{definition}
Before giving examples of the adjoint of some linear maps, we formalise the motivation that led to the definition of the adjoint.
\begin{proposition}[Gradient of a composition]\label{prop:grad_composition}
Let $\deffun{g : \RR^m -> \RR^n;}$ and $\deffun{f : \RR^n -> \RR;}$ and fix inner products on $\RR^m$ and $\RR^n$. We have that for every $x \in \RR^m$, denoting $y = g(x) \in \RR^n$,
\[
\grad\pa{f \circ g}(x) = \pa{\dif g}^\ast_x\pa{\grad f(y)}
\]
\end{proposition}
\begin{proof}
We finish the computation that we started before
\[
\dif\pa{f \circ g}_x(v) = \pa{\dif f}_{g(x)}\cor{\pa{\dif g}_x(v)} = \scalar{\grad f(y), \pa{\dif g}_x(v)}_{\RR^n} = \scalar{\pa{\dif g}^\ast_x\pa{\grad f(y)}, v}_{\RR^m}.\qedhere
\]
\end{proof}
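As a sanity check, the proposition is easy to verify numerically. The sketch below uses two toy maps of our own choosing (none of the names or values come from the text): it computes $\grad\pa{f \circ g}$ through the transposed Jacobian of $g$, which represents the adjoint $\pa{\dif g}^\ast_x$ for the canonical inner products, and compares the result against central finite differences.

```python
import numpy as np

# Hypothetical toy maps g : R^2 -> R^2 and f : R^2 -> R (our own choices).
def g(x):
    return np.array([x[0] * x[1], x[0] + x[1]])

def J_g(x):
    # matrix of (dg)_x in the canonical bases
    return np.array([[x[1], x[0]],
                     [1.0, 1.0]])

def grad_f(y):
    # f(y) = y_0^2 + 3*y_1, so grad f(y) = (2*y_0, 3)
    return np.array([2.0 * y[0], 3.0])

def grad_fg(x):
    # grad(f o g)(x) = (dg)_x^*(grad f(g(x))); for the canonical inner
    # products the adjoint is the transpose of the Jacobian matrix
    return J_g(x).T @ grad_f(g(x))

fg = lambda x: g(x)[0] ** 2 + 3.0 * g(x)[1]
x = np.array([1.5, -0.5])
eps = 1e-6
num = np.array([(fg(x + eps * e) - fg(x - eps * e)) / (2 * eps)
                for e in np.eye(2)])
assert np.allclose(grad_fg(x), num, atol=1e-5)
```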
\begin{example}[Adjoint of the matrix multiplication]\label{ex:adjoint_mm}
Consider the linear map of multiplying on the right by a matrix $\deffun{R_X : \M{m,n} -> \M{m,p};}$, $R_X(A) = AX$ for a fixed matrix $X \in \M{n,p}$. For the canonical inner products on $\M{m,n}$ and $\M{m,p}$:
\[
\scalar{B, R_X(A)}_{\M{m,p}}
= \tr\pa{\trans{B}AX}
= \tr\pa{X\trans{B}A}
= \tr\pa{\trans{\pa{B\trans{X}}}A}
= \scalar{B\trans{X}, A}_{\M{m,n}}
\]
In other words $\pa{R_X}^\ast(B) = B\trans{X} = R_{\trans{X}}(B)$, or simply $\pa{R_X}^\ast = R_{\trans{X}}$.
An analogous computation gives that, for the left multiplication $L_X(A) = XA$ with respect to the canonical inner products, $\pa{L_X}^\ast = L_{\trans{X}}$.
\end{example}
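The identity $\pa{R_X}^\ast = R_{\trans{X}}$ is easy to check numerically; a minimal sketch with randomly generated matrices (the sizes are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))   # A in M_{3,4}
X = rng.standard_normal((4, 5))   # X in M_{4,5}
B = rng.standard_normal((3, 5))   # B in M_{3,5}
lhs = np.trace(B.T @ (A @ X))     # <B, R_X(A)>
rhs = np.trace((B @ X.T).T @ A)   # <R_{X^T}(B), A>
assert np.isclose(lhs, rhs)
```

The analogous check for $L_X$ compares $\tr\pa{\trans{B}XA}$ with $\tr\pa{\trans{\pa{\trans{X}B}}A}$.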
\begin{example}[The adjoint depends on the choice of inner product]
Consider $R_X(A) = AX$ for $A, X \in \M{n}$ as defined in~\Cref{ex:adjoint_mm} and consider the inner product $\scalar{A,B} = \tr\pa{\trans{A}HB}$ for a fixed $H \in \M{n}$ \spd{} (\cf, \Cref{ex:inner_prod}). We have
\[
\scalar{B, R_X(A)}
= \tr\pa{\trans{B}HAX}
= \tr\pa{\trans{\pa{B\trans{X}}}HA}
= \scalar{R_{\trans{X}}(B), A}
\]
so $R^\ast_X = R_{\trans{X}}$, as before. On the other hand, for the left multiplication $L_X(A) = XA$,
\[
\scalar{B, L_X(A)}
= \tr\pa{\trans{B}HXA}
= \tr\pa{\trans{B}HXH^{-1}HA}
= \scalar{L_{H^{-1}\trans{X}H}(B), A}
\]
so $L_X^\ast = L_{H^{-1}\trans{X}H}$, where we have used that the inverse of an \spd{} matrix is symmetric.
\end{example}
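The dependence of $L_X^\ast$ on $H$ can be checked the same way; a minimal sketch with a randomly generated \spd{} matrix (the construction of $H$ is ours):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
M = rng.standard_normal((n, n))
H = M @ M.T + n * np.eye(n)          # symmetric positive definite by construction
A, B, X = (rng.standard_normal((n, n)) for _ in range(3))
inner = lambda P, Q: np.trace(P.T @ H @ Q)        # <P, Q> = tr(P^T H Q)
lhs = inner(B, X @ A)                             # <B, L_X(A)>
rhs = inner(np.linalg.inv(H) @ X.T @ H @ B, A)    # <L_{H^{-1} X^T H}(B), A>
assert np.isclose(lhs, rhs)
```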
We finish this section by enumerating two properties that will be particularly useful in~\Cref{sec:computing_backward_ad}.
\begin{proposition}\label{prop:properties_adjoint}
Let $\deffun{S : U -> V;}$ and $\deffun{T, T_1, T_2 : V -> W;}$ be linear maps between (finite-dimensional real) vector spaces with inner products, then
\begin{itemize}
\item The adjoint is linear. Defining $(aT)(v) = aT(v)$ and $(T_1 + T_2)(v) = T_1(v) + T_2(v)$ for $v\in V$, $a\in\RR$, then $(aT)^\ast = aT^\ast$ and $\pa{T_1 + T_2}^\ast = T_1^\ast + T_2^\ast$.
\item The adjoint reverses the order of the composition: $(T \circ S)^\ast = S^\ast \circ T^\ast$.
\end{itemize}
\end{proposition}
\subsection{The model and definition}\label{sec:backward_ad}
As in the case of forward \ad, we have a model described by a composition of functions. The difference is that, in this case, the last map will be a function mapping the result into the real numbers. In machine learning this is called the \emph{loss function}.
\begin{figure}[H]
\centering
\begin{tikzpicture}
\node (A) {$\RR^k$};
\node (B) [right of=A]{$\RR^m$};
\node (C) [right of=B] {$\RR^n$};
\node (D) [right of=C] {$\RR$};
\draw[->] (A) to node {$h$} (B);
\draw[->] (B) to node {$g$} (C);
\draw[->] (C) to node {$f$} (D);
\end{tikzpicture}
\caption{The automatic differentiation model.}\label{fig:model_bwd}
\end{figure}
\begin{definition}
\textbf{Backward mode \ad} for the model represented in~\Cref{fig:model_bwd} with respect to the canonical inner product on $\RR^k$ amounts to computing the gradient
\[
\grad \pa{f \circ g \circ h}(x) \mathrlap{\qquad \text{for } x \in \RR^k.}
\]
\end{definition}
All the theory explained before about linear maps, differentials, inner products, gradients, and adjoints comes together to give this particularly simple definition. Even better, we have laid out the theory in such a way that we have all the tools to compute this quantity. In~\Cref{sec:forward_mode}, we saw how to compute the differential of different maps and how to compose them together. Using~\Cref{prop:grad_composition} we can compute the gradient of the model in terms of the gradient of $f$ and the adjoints of the differentials of $g$ and $h$---for example by choosing the canonical inner product on $\RR^n$ and $\RR^m$.\footnote{It is possible to prove that the result does not depend on the choice of inner product in the intermediate spaces, the only thing that changes is the matrix representation of the functions.} Finally, we use \Cref{prop:properties_adjoint} to compute the adjoint of the composition as the reversed composition of the adjoints.\footnote{It should be clear why in some fields in applied mathematics backpropagation is called \textbf{the adjoint method}.} All this together gives, denoting $y = h(x)$ and $z = g(y) = g(h(x))$,
\[
\grad \pa{f \circ g \circ h}(x) =
\pa{\dif h}^\ast_x\cor{\pa{\dif g}^\ast_y\pa{\grad f(z)}}\mathrlap{\qquad \forall x \in \RR^k.}
\]
Note that the last function being applied, $f$, is the first one that we differentiate. We then pass its gradient ``backwards'' to $g$ and then $h$, hence the name of the method.
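The whole pipeline fits in a few lines. The sketch below (hypothetical maps and dimensions, chosen by us) propagates $\grad f$ backwards through the transposed Jacobians, which represent the adjoints for the canonical inner products, and checks the result against finite differences.

```python
import numpy as np

# Hypothetical maps h : R^2 -> R^3, g : R^3 -> R^2, f : R^2 -> R (ours).
def h(x):  return np.array([x[0] ** 2, x[0] * x[1], np.sin(x[1])])
def J_h(x): return np.array([[2 * x[0], 0.0],
                             [x[1], x[0]],
                             [0.0, np.cos(x[1])]])
def g(y):  return np.array([y[0] + y[2], y[1] * y[2]])
def J_g(y): return np.array([[1.0, 0.0, 1.0],
                             [0.0, y[2], y[1]]])
def f(z):  return z[0] ** 2 + z[1]
def grad_f(z): return np.array([2 * z[0], 1.0])

x = np.array([0.7, -1.2])
y, z = h(x), g(h(x))
# adjoints (transposed Jacobians) applied in reverse order, as in the formula
grad = J_h(x).T @ (J_g(y).T @ grad_f(z))

F = lambda x: f(g(h(x)))
eps = 1e-6
num = np.array([(F(x + eps * e) - F(x - eps * e)) / (2 * eps)
                for e in np.eye(2)])
assert np.allclose(grad, num, atol=1e-4)
```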
\subsection{Computing backward mode \ad}\label{sec:computing_backward_ad}
We already computed the adjoint of some linear functions in~\Cref{ex:adjoint_mm}. We now show how these formulae together with the properties from~\Cref{prop:properties_adjoint} are enough to compute the adjoint of the differentials of the maps we considered in~\Cref{sec:forward_mode}.
\begin{example}[Adjoint of the powers of a matrix]
In~\Cref{ex:powers} we showed that for $f(A) = A^k$
\[
\pa{\dif f}_A(E) = \sum_{i=0}^{k-1} A^iEA^{k-i-1} = \sum_{i=0}^{k-1} L_{A^i}\pa{R_{A^{k-i-1}}\pa{E}}\mathrlap{\qquad A,E\in\M{n}}
\]
with $L_X(A) = XA$ and $R_X(A)=AX$ being the left and right multiplication.
For the canonical inner product on $\M{n}$, using that the adjoint is linear (\Cref{prop:properties_adjoint}), and the formulae for the adjoint of $L$ and $R$ (\Cref{ex:adjoint_mm}) we get
\[
\pa{\dif f}^\ast_A
= \pa[\Big]{\sum_{i=0}^{k-1} L_{A^i} \circ R_{A^{k-i-1}}}^\ast
= \sum_{i=0}^{k-1} \pa{L_{A^i} \circ R_{A^{k-i-1}}}^\ast
= \sum_{i=0}^{k-1} R_{\pa{\trans{A}}^{k-i-1}} \circ L_{\pa{\trans{A}}^i}.
\]
or more explicitly
\[
\pa{\dif f}^\ast_A(E) = \sum_{i=0}^{k-1} \pa{\trans{A}}^iE\pa{\trans{A}}^{k-i-1}.
\]
\end{example}
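A quick numerical check of the adjoint identity $\scalar{E, \pa{\dif f}_A(F)} = \scalar{\pa{\dif f}^\ast_A(E), F}$ for $f(A) = A^k$ (the sizes are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 3, 4
A, E, F = (rng.standard_normal((n, n)) for _ in range(3))
mp = np.linalg.matrix_power
df   = lambda V: sum(mp(A, i) @ V @ mp(A, k - i - 1) for i in range(k))
df_T = lambda V: sum(mp(A.T, i) @ V @ mp(A.T, k - i - 1) for i in range(k))
# <E, (df)_A(F)> = <(df)^*_A(E), F> for the canonical inner product
assert np.isclose(np.trace(E.T @ df(F)), np.trace(df_T(E).T @ F))
```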
\begin{example}[Adjoint of matrix function]\label{ex:adjoint_analytic}
For a matrix function $\deffun{f : \M{n} -> \M{n};}$, $f(A) = \sum_{k=0}^\infty c_k A^k$ (\cf, \Cref{def:matrix_function})
and the canonical inner product on $\M{n}$ we have that
\[
\pa{\dif f}^\ast_A = \pa{\dif f}_{\trans{A}}\mathrlap{\qquad \forall A \in \M{n}.}
\]
This follows from the formula for the differential of $f$ computed in~\Cref{eq:differential_analytic} and the properties of the adjoint (\Cref{prop:properties_adjoint}). As a corollary, we get that the adjoint of the differential of an analytic function on matrices can be computed by applying $f$ to a larger matrix, using the formula in~\Cref{thm:mathias}.
\end{example}
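This identity can be checked for $f = \exp$ using the block-matrix formula of \Cref{thm:mathias}. The sketch below implements the matrix exponential by a truncated Taylor series (our own simplification, adequate for the small matrices used here, not a production-quality algorithm):

```python
import numpy as np

def expm(M, terms=60):
    # truncated Taylor series of the matrix exponential (fine for small ||M||)
    out, term = np.eye(len(M)), np.eye(len(M))
    for j in range(1, terms):
        term = term @ M / j
        out = out + term
    return out

def dexp(A, V):
    # block-matrix formula: the top-right block of exp([[A, V], [0, A]])
    # is the differential of exp at A applied to V
    n = len(A)
    blk = np.block([[A, V], [np.zeros((n, n)), A]])
    return expm(blk)[:n, n:]

rng = np.random.default_rng(3)
A, E, F = (0.3 * rng.standard_normal((3, 3)) for _ in range(3))
lhs = np.trace(E.T @ dexp(A, F))     # <E, (d exp)_A(F)>
rhs = np.trace(dexp(A.T, E).T @ F)   # <(d exp)_{A^T}(E), F>
assert np.isclose(lhs, rhs)
```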
\subsection{Exercises}
We leave here a number of exercises to help the reader wrap their head around the material. If you want to try just two, have a look at~\Cref{ex:rank,ex:implement}.
\begin{exercise}\label{ex:rank}
Consider a feed-forward network $F_{A, b}(x) = \ell\pa{\sigma\pa{Ax+b}}$ for a function $\deffun{\ell : \RR^m -> \RR;}$, fixed $x \in \RR^n, A \in \M{m, n}, b \in \RR^m$ and an element-wise function $\sigma$. Show that the gradient with respect to $A$ has rank $1$. In other words, show that if $\tilde{F}(A) = F_{A, b}(x) = \ell\pa{\sigma\pa{Ax+b}}$ for fixed $x, b$, then $\grad \tilde{F}(A) = u\trans{v}$ for two vectors $u \in \RR^m$ and $v \in \RR^n$.
\textbf{Hint}. What is the adjoint of the map $A \mapsto Ax+b$ for fixed $x, b$ evaluated on a vector $g \in \RR^m$?
\end{exercise}
The next two exercises look more difficult, but they follow via the same argument as the one above.
\begin{exercise}
Same as above, but with a feed-forward network of depth $d$.
\end{exercise}
\begin{exercise}
Same as above, but with a feed-forward network of depth $d$ and in the stochastic setting, where we define the total loss as $\widehat{F}(A) = \frac{1}{r}\sum_{i=1}^r F_{A, b}(x_i)$ for input vectors $\set{x_i}_{i=1}^r$.
\end{exercise}
\begin{exercise}\label{ex:implement}
Implement the gradient for the two layer feedforward network in~\Cref{lst:ffn}.
\textbf{Hint}. Name more intermediate variables in \code{FFN.forward} to be able to store them.
\textbf{Hint}. Reverse the order of the arguments in \code{FFN.forward} for it to be easier to debug. The gradient with respect to \code{b1} is not going to be correct if the gradient with respect to \code{b2} is not correct.
\end{exercise}
\begin{lstlisting}[language=python,escapechar=|,caption={Modify this PyTorch 1.9 code so that autograd passes.},label={lst:ffn}]
import torch
class FFN(torch.autograd.Function):
@staticmethod
def forward(ctx, x, y, A1, b1, A2, b2):
x = (A1 @ x + b1).sigmoid()
x = (A2 @ x + b2).sigmoid()
loss = (x - y).pow(2).sum()
ctx.save_for_backward(...)
return loss
@staticmethod
def backward(ctx, g_l):
t1, t2, ... = ctx.saved_tensors
...
return None, None, g_A1, g_b1, g_A2, g_b2
class Model(torch.nn.Module):
def __init__(self, in_features, hidden_features, out_features):
super().__init__()
def make_param(*size):
return torch.nn.Parameter(torch.empty(*size, dtype=torch.double))
self.register_parameter("A1", make_param(hidden_features, in_features))
self.register_parameter("b1", make_param(hidden_features))
self.register_parameter("A2", make_param(out_features, hidden_features))
self.register_parameter("b2", make_param(out_features))
torch.nn.init.xavier_normal_(self.A1)
torch.nn.init.xavier_normal_(self.A2)
def forward(self, x, y):
return FFN.apply(x, y, self.A1, self.b1, self.A2, self.b2)
x = torch.rand(32, dtype=torch.double) # Batch size 1
y = torch.rand(8, dtype=torch.double)
model = Model(32, 16, 8)
args = (x, y, model.A1, model.b1, model.A2, model.b2)
torch.autograd.gradcheck(FFN.apply, args, atol=0.01)
\end{lstlisting}
\begin{exercise}
Generalise your code in \code{FFN.backward} to handle batches of arbitrary size. Then, use the code you have implemented to fit \mnist{} and feel good about yourself.
\end{exercise}
\begin{exercise}
Compute the gradient for a recurrent neural network (\rnn) with respect to the recurrent kernel on PyTorch or just on paper.
\end{exercise}
\begin{exercise}
Find where the adjoint for \code{matrix\_exp} is implemented in PyTorch and make sure you understand its code. \textbf{Hint}. Look for the function \code{matrix\_exp\_backward}.
\end{exercise}
\section{Complex Maps}\label{sec:complex}
\subsection{Forward mode \ad}
When we derived the formulae for forward mode \ad, they all followed from the definition of the differential (\Cref{def:differential}) and the formula for the differential of a linear map (\Cref{prop:dif_linear_map}). As such, if we can generalise these two to complex maps, we should be able to generalise all of forward mode \ad{} to complex numbers.
In order to do this, we recall the point that we made in~\Cref{ex:complex}: $\CC^n$ is a \textbf{real} vector space of dimension $2n$. This means that for $a \in \RR$,
\[
u+v \in \CC^n \qquad au \in \CC^n\mathrlap{\qquad\qquad \forall u,v \in \CC^n.}
\]
Furthermore, it means that these operations---again, with $a\in \RR$, \textbf{not} $a\in\CC$---satisfy all the axioms of a real vector space in~\Cref{def:vector_space}.
This real vector space structure treats the $n$ real components and $n$ imaginary components as independent, as if they were two parts of a vector of size $2n$ in $\RR^{2n} = \RR^n \times \RR^n$. As such, the norm of a vector in $\CC^n$ as a real vector space is given by
\[
\norm{v}^2_{\CC^n} = \sum_{k=1}^n a_k^2 + b_k^2\mathrlap{\qquad \text{for }v = \pa{a_1+ib_1, \dots, a_n+ib_n}.}
\]
Using this norm, we can extend the definition of a differential of a real map to complex maps.
\begin{definition}
A map $\deffun{f : \CC^m -> \CC^n;}$ is \textbf{real differentiable} at a point $x \in \CC^m$ if there exists a map $\deffun{\pa{\dif f}_x : \CC^m -> \CC^n;}$ which is linear over the real numbers (\cf, \Cref{def:linear_map}) such that
\[
\lim_{h \to 0_{\CC^m}}\frac{f(x+h) - f(x) - \pa{\dif f}_x(h)}{\norm{h}_{\CC^m}} = 0_{\CC^n}.
\]
\end{definition}
\begin{remark}[Real differentiable vs.\ complex differentiable]
Here we have defined the real differential as an $\RR$-linear map, that is, a map such that $\pa{\dif f}_x(av) = a\pa{\dif f}_x(v)$ for $a \in \RR$. If we require the differential to be $\CC$-linear---that is, $\pa{\dif f}_x(av) = a\pa{\dif f}_x(v)$ for $a \in \CC$---we get the definition of a complex differentiable map, often called \textbf{holomorphic map}.
It should be clear that, if a complex map is complex differentiable, it is also real differentiable, but the opposite is not true. Consider for example $f(z) = \overline{z}$ for $z \in \CC$. We have that $f(az) = \overline{a}f(z)$ for $a \in \CC$, so it is not $\CC$-linear, but it is $\RR$-linear as $f(az) = af(z)$ for $a \in \RR$. Luckily, we will not need to use holomorphic maps, as real differentiable maps will be enough to compute differentials and gradients.
\end{remark}
The chain rule (\Cref{thm:chain_rule}) and the Leibniz rule (\Cref{prop:leibnitz}) also hold verbatim for real differentiable maps. We also have the following equivalent of~\Cref{prop:dif_linear_map}.
\begin{proposition}[Differential of a linear map]\label{prop:dif_complex_linear_map}
Let $\deffun{T: \CC^m -> \CC^n;}$ be an $\RR$-linear map, we have that
\[
\pa{\dif T}_x(v) = T(v)\mathrlap{\qquad \forall x, v \in \CC^m.}
\]
\end{proposition}
\begin{proof}
The proof is the same as in the real case.
\end{proof}
Having this, we can compute the differential of many maps, as we did in the real case.
\begin{example}
We compute the differential of some linear maps from $\CC^m$ to $\CC^n$ or to $\RR^n \subset \CC^n$.
\begin{itemize}
\item Let $\deffun{f : \CC^n -> \RR^n;}$, $f(x) = \Im\pa{x}$ be the imaginary part of a vector. Since
\[
\Im(x+y) = \Im(x) + \Im(y) \qquad \Im\pa{ax} = a\Im\pa{x} \mathrlap{\qquad \forall x,y \in \CC^n, a \in \RR,}
\]
$f$ is $\RR$-linear and $\pa{\dif f}_x(v) = \Im\pa{v}$ for $v \in \CC^n$. Note that $f$ is \textbf{not} $\CC$-linear.
\item Analogously, if $\deffun{f : \CC^n -> \RR^n;}$, $f(x) = \Re\pa{x}$, $f$ is $\RR$-linear and $\pa{\dif f}_x(v) = \Re\pa{v}$ for $v \in \CC^n$.
\item Let $L_X(A) = XA$ for $X \in \MC{m,n}$, $A \in \MC{n, k}$. Since $L_X$ is $\CC$-linear, it is in particular $\RR$-linear, so $\pa{\dif L_X}_A(E) = XE$ for $E \in \MC{n,k}$.
\item Let $f_X(A) = \transc{A}X$ for $X \in \MC{m,n}$, $A \in \MC{k, n}$ where $\transc{A} = \trans{\overline{A}}$. Note that this is \textbf{not} a $\CC$-linear map as $f_X(cA) = \overline{c}f_X(A)$ for $c \in \CC$, but it is an $\RR$-linear map, and as such, $\pa{\dif f_X}_A(E) = \transc{E}X$.
\item The formulae for the differential of the powers of a matrix (\Cref{ex:powers}), inverse of a matrix (\Cref{ex:inverse}) and differential of a matrix function (\Cref{thm:mathias}) are also valid for complex matrices.
\end{itemize}
\end{example}
These examples show that \textbf{formulae for forward \ad{} for complex maps are the same as their real counterparts}, as the basic formulae (\Cref{prop:dif_complex_linear_map}, chain rule, and Leibniz rule) are the same.
\subsection{Backward mode \ad}
For backward mode \ad, all we need is a real inner product (\cf, \Cref{def:inner_product}), which we obtain by considering $\CC^n$ as a real vector space, as in the previous section.
\begin{proposition}
The \textbf{canonical real inner product on $\CC^n$} as a real vector space for $x,y \in \CC^n$ can be written as
\[
\scalar{x,y}_{\CC^n} =
\sum_{k=1}^n a_kc_k + b_kd_k =
\Re\transc{x}y
\mathrlap{\qquad \text{for}\quad\begin{matrix}x = \pa{a_1+ib_1, \dots, a_n+ib_n} \\ y=\pa{c_1+id_1, \dots, c_n+id_n}\end{matrix}.}
\]
The \textbf{canonical real inner product on $\MC{m, n}$} as a real vector space for $X,Y \in \MC{m,n}$ can be written as
\[
\scalar{X,Y}_{\MC{m,n}} =
\sum_{j=1}^{m}\sum_{k=1}^{n} A_{jk}C_{jk} + B_{jk}D_{jk} =
\Re\tr\pa{\transc{X}Y}
\mathrlap{\qquad \text{for}\quad \begin{matrix}X = A+iB \\ Y=C+iD\end{matrix}.}
\]
\end{proposition}
\begin{proof}
Note that the first equality in the vector case comes from the definition of the canonical real inner product on $\CC^n$, which is just the inner product on $\RR^{2n}$ (\cf, \Cref{ex:inner_prod}). The same happens for the $\MC{m,n}$ case.
We prove this proposition for the matrix case, as vectors can be seen as the case $\MC{n,1} = \CC^n$.
We start by rewriting the left-hand side in a coordinate-free way
\[
\sum_{j,k} A_{jk}C_{jk} + B_{jk}D_{jk} =
\scalar{A,C}_{\M{m,n}} + \scalar{B,D}_{\M{m,n}}.
\]
Thus, we just need to prove that $\Re\tr\pa{\transc{X}Y} = \scalar{A,C}_{\M{m,n}} + \scalar{B,D}_{\M{m,n}}$, but this is direct as
\[
\tr\pa{\transc{X}Y}
= \tr\pa{\pa{\trans{A}-i\trans{B}}\pa{C+iD}}
= \underbrace{\tr\pa{\trans{A}C} + \tr\pa{\trans{B}D}}_{\text{real part}} + i\underbrace{\cor{\tr\pa{\trans{A}D} - \tr\pa{\trans{C}B}}}_{\text{imaginary part}}.
\]
Note that since $\scalar{A,C}_{\M{m,n}} + \scalar{B,D}_{\M{m,n}}$ is a (real) inner product,\footnote{Note that in mathematics we also find complex inner products. These are sesquilinear maps rather than bilinear. These products are more general than the real inner products, as their real part is always a real inner product, while their imaginary part is a non-degenerate symplectic (\ie, skew-symmetric) bilinear form. Luckily, we do not need these to compute gradients.}
so is $\Re\tr\pa{\transc{X}Y}$. In other words, it is a symmetric positive definite (real) bilinear map (\Cref{def:inner_product}).
\end{proof}
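Numerically, the coordinate-wise and the trace expressions of the inner product agree; a quick check with random complex matrices (the sizes are ours):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))
Y = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))
# sum of real*real + imag*imag entries, as in the proposition
coordwise = np.sum(X.real * Y.real) + np.sum(X.imag * Y.imag)
# Re tr(X^* Y), the coordinate-free expression
tracewise = np.real(np.trace(X.conj().T @ Y))
assert np.isclose(coordwise, tracewise)
```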
All this proposition says is that $\Re\tr\pa{\transc{X}Y}$ is a convenient way to write the canonical real inner product on $\MC{m,n}$. Now, since $\MC{m,n}$ is a real vector space, and we have a real inner product on it, all the definitions and general results in~\Cref{sec:backward_mode} translate to this setting. Note that the gradients are just defined for functions with values in $\RR$, not $\CC$, while the adjoints are defined for arbitrary maps.
\begin{example}
Consider the canonical real inner products in each of the spaces
\begin{itemize}
\item Let $\deffun{f : \CC^n -> \RR;}$, $f(x) = \Re\trans{g}x$ for a fixed $g\in \CC^n$. Since $f$ is $\RR$-linear, $\pa{\dif f}_x(v) = \Re\trans{g}v = \scalar{\overline{g}, v}_{\CC^n}$, and by definition of a gradient, $\grad f(x) = \overline{g}$ for every $x\in\CC^n$.
\item For $L_X(A) = XA$, $X \in \MC{m,n}, A \in \MC{n,k}$, we have that $\deffun{L_X^\ast : \MC{n,k} -> \MC{m,n};}$ is given by $L_X^\ast(E) = \transc{X}E$, so that $L_X^\ast = L_{\transc{X}}$.\footnote{This shows why some areas of mathematics abuse the notation and write $\transc{X}$ as $X^\ast$.}
\item Similarly, if $R_X(A) = AX$ for complex $A,X$, $R_X^\ast = R_{\transc{X}}$.
\item \Cref{ex:adjoint_analytic} translates to $\pa{\dif f}^\ast_A = \pa{\dif f}_{\transc{A}}$ for an analytic function $\deffun{f : \MC{n} ->\MC{n};}$ and $A \in \MC{n}$.
\end{itemize}
\end{example}
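The identity $L_X^\ast = L_{\transc{X}}$ can be verified against the real inner product $\Re\tr\pa{\transc{X}Y}$; a minimal sketch (the sizes are ours):

```python
import numpy as np

rng = np.random.default_rng(5)
cmat = lambda *s: rng.standard_normal(s) + 1j * rng.standard_normal(s)
X, A, B = cmat(3, 4), cmat(4, 2), cmat(3, 2)
# the canonical real inner product Re tr(P^* Q) on complex matrices
inner = lambda P, Q: np.real(np.trace(P.conj().T @ Q))
lhs = inner(B, X @ A)            # <B, L_X(A)>
rhs = inner(X.conj().T @ B, A)   # <L_{X^*}(B), A>
assert np.isclose(lhs, rhs)
```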
In this case, the formulae are almost the same: wherever a matrix or a vector is transposed in the real case, it is transposed and conjugated in the complex case.
\printbibliography
\end{document}
\section{Experiments}
\label{sec:exp}
We perform several experiments to validate different design choices in NAT.
We then evaluate the quality of our features by comparing them to state-of-the-art unsupervised approaches on several auxiliary supervised tasks, namely object classification on ImageNet and object classification and detection of \textsc{Pascal} VOC 2007~\citep{EVWWZ10}.
\paragraph{Transfering the features.}
In order to measure the quality of our features, we measure their performance on transfer learning.
We freeze the parameters of all the convolutional layers and overwrite the parameters of the MLP classifier with random Gaussian weights.
We precisely follow the training and testing procedure that is specific to each of the datasets following~\citet{DKD16}.
\paragraph{Datasets and baselines.}
We use the training set of ImageNet to learn our convolutional network~\citep{deng2009imagenet}.
This dataset is composed of $1,281,167$ images that belong to $1,000$ object categories.
For the transfer learning experiments, we also consider \textsc{Pascal} VOC 2007.
In addition to fully supervised approaches~\citep{KSH12},
we compare our method to several unsupervised approaches, \emph{i.e.}, autoencoder, GAN and BiGAN as reported in~\citet{DKD16}.
We also compare to self-supervised approaches, \emph{i.e.}, \citet{ACM15,DGE15,pathak2016context,WG15} and \citet{zhang2016colorful}.
Finally we compare to state-of-the-art hand-made features, \emph{i.e.}, SIFT with Fisher Vectors (SIFT+FV)~\citep{akata2014good}. They reduce the Fisher Vectors to a $4,096$ dimensional vector with PCA, and apply an $8,192$ unit $3$-layer MLP on top.
\subsection{Detailed analysis}
In this section, we validate some of our design choices, like the loss function, representations and the influences of some parameters on the quality of our features.
All the experiments are run on ImageNet.
\paragraph{Softmax \emph{versus} square loss.}
Table~\ref{tab:l2loss} compares the performance of an AlexNet trained with a softmax and a square loss.
We report the accuracy on the validation set.
The square loss requires the features to be unit normalized to avoid exploding gradients.
As previously observed by~\citet{CSTTW17}, the performances are similar, hence validating our choice of loss function.
\paragraph{Effect of image preprocessing.}
In supervised classification, image pre-processing is not frequently used, and transformations that remove information are usually avoided.
In the unsupervised case, however, we observe that it is preferable to work with simpler inputs as it avoids learning trivial features.
In particular, we observe that using grayscale image gradients greatly helps our method, as mentioned in Sec.~\ref{sec:method}.
In order to verify that this preprocessing does not destroy crucial information, we propose to evaluate its effect on supervised classification.
We also compare with high-pass filtering.
Table~\ref{tab:degradation} shows the impact of this preprocessing methods on the accuracy of an AlexNet on the validation set of ImageNet.
None of these pre-processings degrades the performance significantly, meaning that the information related to gradients is sufficient for
object classification. This experiment confirms that such pre-processing does not lead to a significant drop in the
upper-bound performance for our model.
\begin{table}[t]
\centering
\begin{tabular}{@{}rrrrr@{}}
\toprule
& clean & high-pass & sobel \\
\midrule
acc@1 & 59.7 & 58.5 & 57.4 \\
\bottomrule
\end{tabular}
\caption{
Performance of supervised models with various image pre-processings applied.
We train an AlexNet on ImageNet, and report classification accuracy.
}
\label{tab:degradation}
\end{table}
\paragraph{Continuous \emph{versus} discrete representations.}
We compare our choice for the target vectors to those commonly used for clustering, \emph{i.e.}, elements of the canonical basis of a $k$ dimensional space.
Such a discrete representation makes a strong assumption on the underlying structure of the problem, namely that it can be linearly separated into $k$ different classes.
This assumption holds for ImageNet, giving a fair advantage to this discrete representation. We test this representation with $k$ in $\{10^3, 10^4, 10^5\}$, which
is a range well-suited for the $1,000$ classes of ImageNet.
The matrix $C$ contains $n/k$ replications of $k$ elements of the canonical basis. This assumes that the clusters are balanced, which is verified on ImageNet.
We compare these cluster-like representations to our continuous target vectors on the transfer task on ImageNet.
Using discrete targets achieves an accuracy of $19\%$, which is significantly worse than our best performance, \emph{i.e.}, $33.5\%$.
A possible explanation is that binary vectors induce sharp discontinuous distances between representations.
Such distances are hard to optimize over and may result in early convergence to poorer local minima.
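To make the two families of targets concrete, here is a minimal sketch of their construction (the toy sizes and the uniform-on-the-sphere sampling are our illustration, not the exact setup used in the experiments):

```python
import numpy as np

n, k = 12, 3          # toy sizes: n images, k clusters (targets live in R^k)
# Discrete targets: n/k replications of the k canonical basis vectors.
C_discrete = np.tile(np.eye(k), (n // k, 1))
# Continuous targets: fixed random vectors on the l2 unit sphere.
rng = np.random.default_rng(0)
C_cont = rng.standard_normal((n, k))
C_cont /= np.linalg.norm(C_cont, axis=1, keepdims=True)
assert C_discrete.shape == (n, k)
assert np.allclose(C_discrete.sum(axis=0), n / k)          # balanced clusters
assert np.allclose(np.linalg.norm(C_cont, axis=1), 1.0)    # unit-norm targets
```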
\begin{figure}[t]
\centering
\includegraphics[height=10em]{epochs.pdf}
\includegraphics[height=10em]{periods.pdf}
\caption{
On the left, we measure the accuracy on ImageNet after training the features with different permutation rates.
There is a clear trade-off with an optimum at permutations performed every $3$ epochs.
On the right, we measure the accuracy on ImageNet after training the features with our unsupervised approach as a function of the number of epochs.
The performance improves with longer unsupervised training.
}
\label{fig:epochs}
\end{figure}
\paragraph{Evolution of the features.}
In this experiment, we are interested in understanding how the quality of our features evolves with the optimization of our cost function.
During the unsupervised training, we freeze the network every 20 epochs and learn an MLP classifier on top. We report the accuracy on the validation set
of ImageNet.
Figure~\ref{fig:epochs} shows the evolution of the performance on this transfer task as we optimize for our unsupervised approach.
The training performance improves monotonically with the epochs of the unsupervised training.
This suggests that optimizing our objective function correlates with learning transferable features, \emph{i.e.}, our features do not destroy useful
class-level information.
On the other hand, the test accuracy seems to saturate after a hundred epochs. This suggests that the MLP is overfitting rapidly on pre-trained features.
\paragraph{Effect of permutations.}
Assigning images to their target representations is a crucial feature of our approach.
In this experiment, we are interested in understanding how frequently we should update this assignment.
Indeed, updating the assignment, even partially, is relatively costly and may not be required to achieve
good performance.
Figure~\ref{fig:epochs} shows the transfer accuracies on ImageNet as a function of the frequency of these updates.
The model is quite robust to choice of frequency, with a test accuracy always above $30\%$.
Interestingly, the accuracy actually degrades slightly with high frequency.
A possible explanation is that the network overfits rapidly to its own output, leading to relatively worse features.
In practice, we observe that updating the assignment matrix every $3$ epochs offers a good trade-off between
computational cost and accuracy.
\begin{figure*}[t!]
\centering
\begin{tabular}{ccccccc}
\includegraphics[width=.11\linewidth]{13_1.jpg} &
\includegraphics[width=.11\linewidth]{7_1.jpg} &
\includegraphics[width=.11\linewidth]{510300_1.jpg} &
\includegraphics[width=.11\linewidth]{133334_1.jpg} &
\includegraphics[width=.11\linewidth]{110_1.jpg} &
\includegraphics[width=.11\linewidth]{34_1.jpg} &
\includegraphics[width=.11\linewidth]{4_1.jpg} \\
\includegraphics[width=.11\linewidth]{13_2.jpg}&
\includegraphics[width=.11\linewidth]{7_2.jpg} &
\includegraphics[width=.11\linewidth]{510300_2.jpg} &
\includegraphics[width=.11\linewidth]{133334_2.jpg} &
\includegraphics[width=.11\linewidth]{110_2.jpg} &
\includegraphics[width=.11\linewidth]{34_2.jpg} &
\includegraphics[width=.11\linewidth]{4_2.jpg} \\
\includegraphics[width=.11\linewidth]{13_3.jpg} &
\includegraphics[width=.11\linewidth]{7_3.jpg} &
\includegraphics[width=.11\linewidth]{510300_3.jpg} &
\includegraphics[width=.11\linewidth]{133334_3.jpg} &
\includegraphics[width=.11\linewidth]{110_3.jpg} &
\includegraphics[width=.11\linewidth]{34_3.jpg} &
\includegraphics[width=.11\linewidth]{4_3.jpg}\\
\includegraphics[width=.11\linewidth]{13_4.jpg}&
\includegraphics[width=.11\linewidth]{7_4.jpg} &
\includegraphics[width=.11\linewidth]{510300_4.jpg} &
\includegraphics[width=.11\linewidth]{133334_4.jpg} &
\includegraphics[width=.11\linewidth]{110_4.jpg} &
\includegraphics[width=.11\linewidth]{34_4.jpg} &
\includegraphics[width=.11\linewidth]{4_4.jpg}
\end{tabular}
\caption{
Images and their $3$ nearest neighbors in ImageNet according to our model using an $\ell_2$ distance.
The query images are shown on the top row, and the nearest neighbors are sorted from the closest to
the furthest.
Our features seem to capture global distinctive structures.
}
\label{fig:nn}
\end{figure*}
\begin{figure}[t]
\centering
\includegraphics[width=.49\linewidth]{bnw-filters-sup-new.jpg}
\includegraphics[width=.49\linewidth]{bnw-filters-new.jpg}
\caption{
Filters from the first layer of an AlexNet trained on ImageNet with supervision (left) or with NAT (right).
The filters are in grayscale, since we use grayscale gradient images as input.
This visualization shows the composition of the gradients with the first layer.
}
\label{fig:filters}
\end{figure}
\paragraph{Visualizing the filters.}
Figure~\ref{fig:filters} shows a comparison between the first convolutional layer of an AlexNet trained with and without supervision.
Both take grayscale gradient images as input.
The visualizations are obtained by composing the Sobel filtering with the filters of the first layer of the AlexNet.
Unsupervised filters are slightly less sharp than their supervised counterpart, but still maintain edge and orientation information.
\paragraph{Nearest neighbor queries.}
Our loss optimizes a distance between features and fixed vectors. This means that looking at the distance between
features should provide some information about the type of structure that our model captures.
Given a query image $x$, we compute its feature $f_\theta(x)$ and search for its nearest neighbors according to the $\ell_2$ distance.
Figure~\ref{fig:nn} shows images and their nearest neighbors.
The features capture relatively complex structures in images.
Objects with distinctive structures, like trunks or fruits, are well captured by our approach.
However, this information is not always related to true labels.
For example, the image of a bird over the sea is matched to images capturing information about the sea or the sky rather than the bird.
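The retrieval itself is a plain $\ell_2$ nearest-neighbour search over the feature vectors; a minimal sketch with synthetic features (all names and values are ours):

```python
import numpy as np

# Hypothetical precomputed features: one 16-dim vector per image.
rng = np.random.default_rng(6)
feats = rng.standard_normal((100, 16))
q = feats[0] + 0.01 * rng.standard_normal(16)   # query feature near image 0
d = np.linalg.norm(feats - q, axis=1)           # l2 distances to all images
nn = np.argsort(d)[:3]                          # indices of the 3 nearest
assert nn[0] == 0                               # image 0 is its own neighbour
```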
\subsection{Comparison with the state of the art}
We report results on the transfer task both on ImageNet and \textsc{Pascal} VOC 2007.
In both cases, the model is trained on ImageNet.
\paragraph{ImageNet classification.}
In this experiment, we evaluate the quality of our features for the object classification task of ImageNet.
Note that in this setup, we build the unsupervised features on images that correspond to predefined image categories.
Even though we do not have access to category labels, the data itself is biased towards these classes.
In order to evaluate the features, we freeze the layers up to the last convolutional layer and train the classifier with supervision.
This experimental setting follows~\citet{NF16}.
We compare our model with several self-supervised approaches~\cite{WG15, DGE15, zhang2016colorful} and an unsupervised
approach, \emph{i.e.},~\citet{DKD16}.
Note that self-supervised approaches use losses specifically designed for visual features.
Like BiGANs~\citep{DKD16}, NAT does not make any assumption about the domain, only about the structure of its features.
Table~\ref{tab:in2in} compares NAT with these approaches.
\begin{table}[t]
\centering
\begin{tabular}{@{}lr@{}}
\toprule
Method& Acc@1 \\
\midrule
Random~\citep{NF16} & 12.0 \\
\midrule
SIFT+FV~\cite{akata2014good} & 55.6 \\
\midrule
\citet{WG15} & 29.8 \\
\citet{DGE15} & 30.4 \\
\citet{zhang2016colorful} & 35.2 \\
$^1$\citet{NF16} & 38.1 \\
\midrule
BiGAN~\citep{DKD16} & 32.2 \\
\midrule
NAT & 36.0 \\
\bottomrule
\end{tabular}
\caption{
Comparison of the proposed approach to state-of-the-art unsupervised feature learning on ImageNet.
A full multi-layer perceptron is retrained on top of the features.
We compare to several self-supervised approaches and an unsupervised approach, \emph{i.e.}, BiGAN~\cite{DKD16}.
$^1$\citet{NF16} uses a significantly larger amount of features than the original AlexNet.
We report classification accuracy.
}
\label{tab:in2in}
\end{table}
Among unsupervised approaches, NAT compares favorably to BiGAN~\citep{DKD16}.
Interestingly, the performance of NAT is slightly better than that of \emph{self-supervised} methods, even though we do not
explicitly use domain-specific clues in images or videos to guide the learning.
While all the models achieve performance in the $30$--$36\%$ range,
it is not clear if they all learn the same features.
Finally, all the unsupervised deep features are outperformed by hand-made features,
in particular Fisher Vectors with SIFT descriptors.
This baseline uses a slightly bigger MLP for the classifier, and its performance can be improved by $2.2\%$
by bagging $8$ of these models.
This difference of about $20\%$ in accuracy shows that unsupervised deep features are still quite far from
the state of the art among \emph{all} unsupervised features.
\paragraph{Transferring to \textsc{Pascal} VOC 2007.}
We carry out a second transfer experiment on the \textsc{Pascal} VOC dataset, on the classification and detection tasks.
The model is trained on ImageNet.
Depending on the task, we \emph{finetune} all layers in the network, or solely the classifier, following~\citet{DKD16}.
In all experiments, the parameters of the convolutional layers are initialized with the ones obtained with our unsupervised approach.
The parameters of the classification layers are initialized with Gaussian weights.
We get rid of batch normalization layers and use a data-dependent rescaling of the parameters~\cite{krahenbuhl2015data}.
Table~\ref{tab:voc} shows the comparison between our model and other unsupervised approaches.
The results for other methods are taken from~\citet{DKD16} except for~\citet{zhang2016colorful}.
\begin{table}[t]
\centering
\begin{tabular}{@{}lccc@{}}
\toprule
& \multicolumn{2}{c}{Classification} & Detection \\
\midrule
Trained layers & fc6-8 & all & all \\
\midrule
ImageNet labels & 78.9 & 79.9 & 56.8 \\
\midrule
\citet{ACM15} & 31.0 & 54.2 & 43.9 \\
\citet{pathak2016context} & 34.6 & 56.5 & 44.5\\
\citet{WG15} & 55.6 & 63.1 & 47.4 \\
\citet{DGE15} & 55.1 & 65.3 & 51.1 \\
\citet{zhang2016colorful} & 61.5 & 65.6 & 46.9 \\
\midrule
Autoencoder & 16.0 & 53.8 & 41.9 \\
GAN & 40.5 & 56.4 & - \\
BiGAN~\citep{DKD16} & 52.3 & 60.1 & 46.9 \\
\midrule
NAT & 56.7 & 65.3 & 49.4 \\
\bottomrule
\end{tabular}
\caption{
Comparison of the proposed approach to state-of-the-art unsupervised feature learning on VOC 2007 classification and detection.
We either fix the features after conv5 or we fine-tune the whole model.
We compare to several self-supervised approaches and an unsupervised one.
The GAN and autoencoder baselines are from~\citet{DKD16}.
We report mean average precision, as customary on \textsc{Pascal} VOC.
}
\label{tab:voc}
\end{table}
As with the ImageNet classification task, our performance is on par with self-supervised approaches, for both detection and classification.
Among purely unsupervised approaches, we outperform standard approaches like autoencoders or GANs by a large margin.
Our model also performs slightly better than the best performing BiGAN model~\citep{DKD16}.
These experiments confirm our findings from the ImageNet experiments.
Despite its simplicity, NAT learns features that are as good as those obtained with more sophisticated and data-specific models.
\section{Introduction}
In recent years, convolutional neural networks, or \emph{convnets}~\citep{fukushima1982neocognitron,LBDHHHJ89} have
pushed the limits of computer vision~\citep{KSH12, he16}, leading to important progress
in a variety of tasks, like object detection~\citep{girshick2015fast} or image
segmentation~\citep{pinheiro2015learning}.
Key to this success is their ability to produce features that easily transfer
to new domains when trained on massive databases of labeled
images~\citep{razavian14,oquab2014learning} or weakly-supervised data~\cite{joulin16}.
However, human annotations may introduce unforeseen
bias that could limit the potential of learned features to capture subtle information
hidden in a vast collection of images.
Several strategies exist to learn deep convolutional
features with no annotation~\citep{DKD16}. They either try to
capture a signal from the source as a form of \emph{self-supervision}~\citep{DGE15,WG15}
or learn the underlying distribution of
images~\citep{vincent2010stacked,goodfellow2014generative}.
While some of these approaches obtain promising performance in
transfer learning~\citep{DKD16,WG15}, they do not explicitly aim to learn
discriminative features.
Some attempts were made with retrieval based approaches~\citep{DSRB14} and clustering~\citep{YPB16,LSZU16},
but they are hard to scale and have only been tested on small datasets.
Unfortunately, as in the supervised case, a lot of data is required to learn good representations.
In this work, we propose a novel discriminative framework designed to learn deep architectures on massive amounts of data.
Our approach is general, but we focus on convnets since they require millions of images to produce good features.
Similar to self-organizing maps~\cite{kohonen1982self, martinetz1991neural},
we map deep features to a set of predefined representations in a low dimensional space.
As opposed to these approaches, we aim to learn the features in an end-to-end fashion,
which traditionally suffers from a feature collapsing problem. Our approach
deals with this issue by fixing the target representations and aligning them to our features.
These representations are sampled from an uninformative distribution; we call this approach \emph{Noise As Targets} (NAT).
Our approach also shares some similarities with standard clustering approaches like $k$-means~\citep{lloyd1982least}
or discriminative clustering~\cite{Bach07}.
In addition, we propose an online algorithm able to scale to massive image databases like ImageNet~\citep{deng2009imagenet}.
Importantly, our approach is only marginally less efficient to train than
standard supervised approaches and can re-use any
optimization procedure designed for them.
This is achieved by using a quadratic loss as in~\cite{CSTTW17} and a fast approximation of the Hungarian algorithm.
We show the potential of our approach by training end-to-end on ImageNet a standard architecture, namely AlexNet~\citep{KSH12} with no supervision.
We test the quality of our features on
several image classification problems, following the setting of~\citet{DKD16}.
We are on par with state-of-the-art unsupervised and self-supervised learning approaches while being much simpler to train
and to scale.
The paper is organized as follows: after a brief review of
the related work in Section~\ref{sec:related}, we present
our approach in Section~\ref{sec:method}. We then
validate our solution with several experiments and
comparisons with standard unsupervised and self-supervised approaches in Section~\ref{sec:exp}.
\section{Method}
\label{sec:method}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{pullfig_5.pdf}
\caption{
Our approach takes a set of images, computes their deep features with a convolutional network and matches them to
a set of predefined targets from a low dimensional space. The parameters of the network are learned
by aligning the features to the targets.
}
\label{fig:pullfig}
\end{figure}
In this section, we present our model and discuss its relations with several clustering approaches including $k$-means.
Figure~\ref{fig:pullfig} shows an overview of our approach.
We also show that it can be trained on massive datasets using an online procedure.
Finally, we provide all the implementation details.
\subsection{Unsupervised learning}
\label{sec:unsup}
We are interested in learning visual features with no supervision.
These features are produced by applying a parametrized mapping $f_\theta$ to the images.
In the presence of supervision, the parameters $\theta$ are learned by minimizing a loss function between the
features produced by this mapping and some given targets, e.g., labels.
In the absence of supervision, there are no clear target representations, and we thus need to learn them as well.
More precisely, given a set of $n$ images $x_i$, we jointly learn the parameters $\theta$ of the mapping $f_\theta$,
and some target vectors $y_i$:
\begin{equation}
\label{eq:pb}
\min_{\theta} \ \frac{1}{n} \sum_{i=1}^n \ \min_{y_i \in \mathbb{R}^d} \ \ell(f_\theta(x_i), y_i),
\end{equation}
where $d$ is the dimension of target vectors.
In the rest of the paper, we use matrix notations, \emph{i.e.}, we denote by $Y$
the matrix whose rows are the target representations $y_i$,
and by $X$ the matrix whose rows are the images $x_i$.
With a slight abuse of notation, we denote by $f_\theta(X)$ the $n \times d$ matrix of features whose rows are obtained by applying the function $f_\theta$ to each image independently.
\paragraph{Choosing the loss function.}
In the supervised setting, a popular choice for the loss $\ell$ is the softmax function.
However, computing this loss is linear in the number of targets, making it impractical
for large output spaces~\citep{goodman2001classes}.
While there are workarounds to scale these losses to large output spaces,
\citet{CSTTW17} has recently shown that using a squared $\ell_2$ distance works well in many supervised settings, as long as the final activations are unit normalized.
This loss only requires access to a single target per sample, making its computation independent of the number of targets.
This leads to the following problem:
\begin{equation}\label{eq:mat}
\min_{\theta} \ \min_{Y\in\mathbb{R}^{n\times d}} \ \frac{1}{2n} \|f_\theta(X) - Y\|_F^2,
\end{equation}
where we still denote by $f_\theta(X)$ the unit normalized features.
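For concreteness, the squared loss above with unit-normalized features can be sketched in a few lines of numpy (our own illustration, not code from the paper):

```python
import numpy as np

def nat_loss(features, targets):
    # Squared Frobenius loss: rows of `features` are projected onto the
    # l2 unit sphere before being compared with the target rows.
    n = features.shape[0]
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    return 0.5 / n * np.sum((f - targets) ** 2)
```

The loss vanishes exactly when every normalized feature coincides with its target.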
\paragraph{Using fixed target representations.}
Directly solving the problem defined in Eq.~(\ref{eq:mat}) would lead to a
representation collapsing problem: all the images would be assigned to the same representation~\citep{xu2004maximum}.
We avoid this issue by fixing a set of $k$ predefined target representations and matching them
to the visual features.
More precisely, the matrix $Y$ is defined as the product of a matrix $C$ containing these $k$ representations and an assignment matrix $P$ in $\{0,1\}^{n \times k}$, \emph{i.e.},
\begin{equation}
Y = P C.
\end{equation}
Note that we can assume that $k$ is greater than $n$ with no loss of generality (by duplicating representations otherwise).
Each image is assigned to a different target and each target can only be assigned once.
This leads to a set $\mathcal{P}$ of constraints for the assignment matrices:
\begin{equation}\label{eq:p}
\mathcal{P} = \{ P \in \{0,1\}^{n\times k} ~ | ~ P1_k = 1_n, ~ P^\top 1_n \le 1_k\}.
\end{equation}
This formulation forces the visual features to be diversified, avoiding the collapsing issue at the cost of fixing the target representations.
Predefining these targets is an issue if their number $k$ is small, which is why we are interested in the case
where $k$ is at least as large as the number $n$ of images.
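The constraints stated above — each image is assigned to exactly one target, and each target is used at most once — are easy to check programmatically; the following snippet is our own illustration:

```python
import numpy as np

def is_valid_assignment(P):
    # P is an n-by-k 0/1 matrix: every row must sum to 1 (one target per
    # image) and every column must sum to at most 1 (no target reused).
    P = np.asarray(P)
    return (np.all((P == 0) | (P == 1))
            and np.all(P.sum(axis=1) == 1)
            and np.all(P.sum(axis=0) <= 1))
```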
\paragraph{Choosing the target representations.}
Until now, we have not discussed the set of target representations stored in $C$.
A simple choice for the targets would be to take $k$ elements of the canonical basis of $\mathbb{R}^d$.
If $d$ is larger than $n$, this formulation would be similar to the framework of~\citet{DSRB14}, and is
impractical for large $n$.
On the other hand, if $d$ is smaller than $n$, this formulation is equivalent to the discriminative
clustering approach of~\citet{Bach07}.
Choosing such targets makes very strong assumptions on the nature of the underlying problem.
Indeed, it assumes that each image belongs to a unique class and that all classes are orthogonal.
While this assumption might be true for some classification datasets, it does not generalize to huge image collections
nor capture subtle similarities between images belonging to different classes.
Since our features are unit normalized,
another natural choice is to uniformly sample target vectors on the $\ell_2$ unit sphere.
Note that the dimension $d$ will then directly influence the level of correlation
between representations, \emph{i.e.}, the correlation is inversely proportional to the square root of $d$.
Using this \emph{Noise As Targets} (NAT), Eq.~(\ref{eq:mat}) is now equivalent to:
\begin{equation}\label{eq:lin}
\max_\theta \max_{P\in\mathcal{P}} \text{Tr} \left ( P C f_\theta(X)^\top \right ).
\end{equation}
This problem can be interpreted as mapping deep features to a uniform distribution over a manifold,
namely the $d$-dimensional $\ell_2$ sphere.
Using $k$ predefined representations is a discrete approximation of this manifold that justifies
the restriction of the mapping matrices to the set $\mathcal{P}$ of $1$-to-$1$ assignment matrices.
In some sense, we are optimizing a crude approximation of the earth mover's distance between the distribution
of deep features and a given target distribution~\cite{rubner1998metric}.
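Sampling the targets themselves is straightforward: normalizing i.i.d. Gaussian vectors yields a uniform distribution on the $\ell_2$ sphere. A short sketch of this construction (ours):

```python
import numpy as np

def sample_nat_targets(k, d, seed=0):
    # Draw k i.i.d. Gaussian vectors in R^d and project them onto the
    # l2 unit sphere; the result is uniformly distributed on the sphere.
    rng = np.random.default_rng(seed)
    c = rng.standard_normal((k, d))
    return c / np.linalg.norm(c, axis=1, keepdims=True)
```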
\paragraph{Relation to clustering approaches.}
Using the same notations as in Eq.~(\ref{eq:lin}),
several clustering approaches share similarities with our method.
In the linear case, spherical $k$-means minimizes the same loss
function w.r.t. $P$ and $C$, \emph{i.e.},
\begin{equation*}
\max_C\max_{P\in\mathcal{Q}} \text{Tr}\left(PCX^\top\right).
\end{equation*}
A key difference is the set $\mathcal{Q}$ of assignment matrices:
\begin{equation*}
\mathcal{Q} = \{P\in\{0,1\}^{n\times k}~|~P 1_k = 1_n\}.
\end{equation*}
This set only guarantees that each data point is assigned to a single target representation.
Once we jointly learn the features and the assignment, this set does not prevent
the collapsing of the data points to a single target representation.
Another similar clustering approach is Diffrac~\citep{Bach07}. Their loss is equivalent to ours
in the case of unit normalized features.
Their set $\mathcal{R}$ of assignment matrices, however, is different:
\begin{equation*}
\mathcal{R} = \{P\in\{0,1\}^{n\times k}~|~P^\top 1_n \ge c 1_k\},
\end{equation*}
where $c>0$ is some fixed parameter.
While restricting the assignment matrices to this set prevents the collapsing issue,
it introduces global constraints that are not suited for online optimization.
This makes their approach hard to scale to large datasets.
\begin{algorithm}[t]
\caption{
Stochastic optimization of Eq.~(\ref{eq:lin}).
}
\label{alg1}
\begin{algorithmic}
\REQUIRE $T$ batches of images, $\lambda_0>0$
\FOR{$t = \{1,\dots, T\}$}
\STATE Obtain batch $b$ and representations $r$
\STATE Compute $f_\theta(X_b)$
\STATE Compute $P^*$ by minimizing Eq.~(\ref{eq:mat}) w.r.t. $P$
\STATE Compute $\nabla_\theta L(\theta)$ from Eq.~(\ref{eq:mat}) with $P^*$
\STATE Update $\theta\leftarrow \theta - \lambda_t \nabla_\theta L(\theta)$
\ENDFOR
\end{algorithmic}
\end{algorithm}
\subsection{Optimization}
\label{sec:opt}
In this section, we describe how to efficiently optimize the cost function described in Eq.~(\ref{eq:lin}).
In particular, we explore approximated updates of the assignment matrix that are compatible
with online optimization schemes, like stochastic gradient descent (SGD).
\paragraph{Updating the assignment matrix $P$.}
Directly solving for the optimal assignment requires evaluating the distances between all the $n$ features and the $k$ representations.
In order to efficiently solve this problem, we first reduce the number $k$ of representations to $n$.
This limits the set $\mathcal{P}$ to the set of permutation matrices, \emph{i.e.},
\begin{equation}\label{eq:newp}
\mathcal{P} = \{ P \in \{0,1\}^{n\times n} ~ | ~ P1_n = 1_n, ~ P^\top 1_n = 1_n\}.
\end{equation}
Restricting the problem defined in Eq.~(\ref{eq:lin}) to this set, the linear assignment problem in $P$ can be solved exactly
with the Hungarian algorithm~\citep{kuhn1955hungarian}, but at the prohibitive cost of $O(n^3)$.
Instead, we perform stochastic updates of the matrix.
Given a batch of samples, we optimize the assignment matrix $P$ on its restriction to this batch.
Given a subset $\mathcal{B}$ of $b$ distinct images,
we only update the $b\times b$ square submatrix $P_\mathcal{B}$ obtained by restricting $P$
to these $b$ images and their corresponding targets.
In other words, each image can only be re-assigned to a target that was previously assigned to another image in the batch.
This procedure has a complexity of $O(b^3)$ per batch, leading to an overall complexity of $O(n b^2)$,
which is linear in the number of data points.
We perform this update before updating the parameters $\theta$ of our features, in an online manner.
Note that this simple procedure would not have been possible if $k > n$;
we would have had to also consider the $k-n$ unassigned representations.
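The batch-restricted update can be sketched as follows. This is our own illustration: it replaces the Hungarian algorithm with brute-force enumeration over permutations, which is only feasible for very small batches.

```python
import itertools
import numpy as np

def update_batch_assignment(features, targets, assign, batch):
    # Re-assign, within `batch`, images to the targets currently held by
    # images of that batch, maximizing the total dot product.
    # features, targets: (n, d) arrays; assign[i] = target index of image i.
    held = [assign[i] for i in batch]           # targets owned by the batch
    scores = features[batch] @ targets[held].T  # b x b similarity matrix
    best, best_perm = -np.inf, None
    for perm in itertools.permutations(range(len(batch))):
        s = sum(scores[i, perm[i]] for i in range(len(batch)))
        if s > best:
            best, best_perm = s, perm
    for i, p in enumerate(best_perm):
        assign[batch[i]] = held[p]
    return assign
```

With unit-normalized rows, maximizing the dot products is equivalent to minimizing the squared distances, so this is the same sub-problem that the Hungarian algorithm solves on the batch.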
\paragraph{Stochastic gradient descent.}
Apart from the update of the assignment matrix $P$,
we use the same optimization scheme as standard supervised approaches, \emph{i.e.},
SGD with batch normalization~\citep{ioffe2015}.
As noted by~\citet{CSTTW17}, batch normalization plays a crucial role
when optimizing the $\ell_2$ loss, as it avoids exploding gradients.
For each batch $b$ of images, we first perform a forward pass to compute the distance between the images and the corresponding subset of target representations $r$.
The Hungarian algorithm is then used on these distances to obtain the optimal reassignments within the batch.
Once the assignments are updated, we use the chain rule in order to compute the gradients of all our parameters.
Our optimization algorithm is summarized in Algorithm~\ref{alg1}.
\subsection{Implementation details}
Our experiments solely focus on learning visual features with convnets.
All the details required to train these architectures with our approach are described below.
Most of them are standard tricks used in the usual supervised setting.
\paragraph{Deep features.}
To ensure a fair empirical comparison with previous work, we follow~\citet{WG15} and use an AlexNet architecture.
We train it end-to-end using our unsupervised loss function.
We subsequently test the quality of the learned visual features by re-training a classifier on top.
During transfer learning, we consider the output of the last convolutional layer as our features as in~\citet{razavian14}.
We use the same multi-layer perceptron (MLP) as in~\citet{KSH12} for the classifier.
\paragraph{Pre-processing.}
We observe in practice that pre-processing the images greatly helps the quality of our learned features.
As in~\citet{huang2007unsupervised}, we use image gradients instead of the images to avoid trivial solutions like clustering according to colors.
Using this preprocessing is not surprising since most hand-made features like SIFT or HoG are based on image gradients~\citep{L99,dalal2005histograms}.
In addition to this pre-processing, we also perform all the standard image transformations that are commonly applied in the supervised setting~\cite{KSH12},
such as random cropping and flipping of images.
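A crude stand-in for this gradient pre-processing (our sketch; the exact operator used in practice may differ) replaces each image by its finite-difference gradient magnitude:

```python
import numpy as np

def gradient_image(img):
    # img: (h, w) grayscale array. Horizontal and vertical forward
    # differences; the magnitude discards absolute intensity and color.
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, 1:] = img[:, 1:] - img[:, :-1]
    gy[1:, :] = img[1:, :] - img[:-1, :]
    return np.sqrt(gx ** 2 + gy ** 2)
```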
\begin{table}[t]
\centering
\begin{tabular}{@{}cccc@{}}
\toprule
&& Softmax & Square loss \\
\midrule
ImageNet && 59.2 & 58.4 \\
\bottomrule
\end{tabular}
\caption{
Comparison between the softmax and the square loss for supervised object classification on ImageNet.
The architecture is an AlexNet. The features are unit normalized for the square loss~\cite{CSTTW17}. We report the accuracy on the validation set.
}
\label{tab:l2loss}
\end{table}
\paragraph{Optimization details.}
We project the output of the network on the $\ell_2$ sphere as in~\citet{CSTTW17}.
The network is trained with SGD with a batch size of $256$.
During the first $t_0$ batches, we use a constant step size.
After $t_0$ batches, we use a linear decay of the step size, \emph{i.e.}, $l_t = \frac{l_0}{1 + \gamma [t - t_0]_+}$.
Unless mentioned otherwise, we permute the assignments within batches every $3$ epochs.
For the transfer learning experiments, we follow the guideline described in~\citet{DKD16}.
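The step-size schedule can be written compactly as follows (our sketch; the numerical values below are placeholders, not the settings used in the experiments):

```python
def step_size(t, l0=0.1, t0=1000, gamma=1e-4):
    # Constant step size l0 for the first t0 batches, then linear decay
    # l_t = l0 / (1 + gamma * max(t - t0, 0)).
    return l0 / (1.0 + gamma * max(t - t0, 0))
```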
\section{Conclusion}
This paper presents a simple unsupervised framework to learn discriminative features.
By aligning the output of a neural network to low-dimensional noise, we obtain
features on par with state-of-the-art unsupervised learning approaches.
Our approach explicitly aims at learning discriminative features, while most unsupervised approaches
target surrogate problems, like image denoising or image generation.
As opposed to self-supervised approaches, we make very few assumptions about the input space.
This makes our approach very simple and fast to train.
Interestingly, it also shares some similarities with traditional clustering approaches as well as retrieval methods.
While we show the potential of our approach on visual data, it will be interesting to try other domains.
Finally, this work only considers simple noise distributions and alignment methods. A possible direction of research is to
explore target distributions and alignments that are more informative. This would also strengthen the relation
between NAT and methods based on distribution matching like the earth mover's distance.
\paragraph{Acknowledgement.}
We greatly thank Herv\'e J\'egou for his help throughout the development of this project.
We also thank Allan Jabri, Edouard Grave, Iasonas Kokkinos, L\'eon Bottou, Matthijs Douze and the rest of FAIR
for their support and helpful discussion.
Finally, we thank Richard Zhang, Jeff Donahue and Florent Perronnin for their help.
\bibliographystyle{icml2016}
\section{Related work}
\label{sec:related}
Several approaches have been recently proposed to tackle the problem of deep unsupervised learning~\cite{CN12, MKHS14, DSRB14}.
Some of them are based on a clustering loss~\cite{XGF16,YPB16,LSZU16}, but they are not tested at a scale comparable to that of supervised convnet training.
\citet{CN12} uses $k$-means to pre-train convnets, by learning each layer sequentially in a bottom-up fashion.
In our work, we train the convnet end-to-end with a loss that shares similarities with $k$-means.
Closer to our work, \citet{DSRB14} proposes to train convnets by solving a retrieval problem.
They assign a class per image and its transformation.
In contrast to our work, this approach can hardly scale to more than a few hundred thousand images, and requires a custom-tailored architecture, while we use a standard AlexNet.
Another traditional approach for learning visual representations in an unsupervised manner is to define a parametrized mapping between a predefined random variable and a set of images.
Traditional examples of this approach are variational autoencoders~\cite{kingma2013auto}, generative adversarial networks~\citep{goodfellow2014generative}, and to a lesser extent, noisy autoencoders~\citep{vincent2010stacked}.
In our work, we are doing the opposite; that is, we map images to a predefined random variable.
This allows us to re-use standard convolutional networks and greatly simplifies the training.
\paragraph{Generative adversarial networks.} Among those approaches, generative adversarial
networks (GANs)~\cite{goodfellow2014generative,denton2015deep,DKD16} share another
similarity with our approach, namely
they are explicitly minimizing a discriminative loss
to learn their features.
While these models cannot learn an inverse mapping, \citet{DKD16} recently
proposed to add an encoder to extract visual features
from GANs.
Like ours, their encoder can be any standard convolutional network.
However, their loss aims at differentiating real and generated images, while
we are aiming directly at differentiating between images. This makes our
approach much simpler and faster to train, since we do not need to learn
the generator nor the discriminator.
\paragraph{Self-supervision.}
Recently, a lot of work has explored \emph{self-supervision}:
leveraging supervision contained in the input signal~\cite{DGE15,NF16, pathak2016context}.
In the same vein as word2vec~\cite{mikolov2013efficient}, \citet{DGE15} show
that spatial context is a strong signal to learn visual features. \citet{NF16}
have further extended this work. Others have shown that temporal coherence in videos also provides a
signal that can be used to learn powerful visual
features~\cite{ACM15,JG15,WG15}. In particular,~\citet{WG15} show that such
features provide promising performance on ImageNet.
In contrast to our work, these approaches are domain dependent since they
require explicit derivation of weak supervision directly from the input.
\paragraph{Autoencoders.}
Many have also used autoencoders with a reconstruction
loss~\citep{bengio2007greedy,huang2007unsupervised,masci2011stacked}. The idea
is to encode and decode an image, while minimizing the loss between the decoded
and original images. Once trained, the encoder produces image features and
the decoder can be used to generate images from codes. The decoder is often a
fully connected network~\cite{huang2007unsupervised} or a deconvolutional
network~\citep{masci2011stacked, whatwhere} but can be more sophisticated, like
a PixelCNN network~\citep{van2016conditional}.
\paragraph{Self-organizing map.} This family of unsupervised methods aims at
learning a low dimensional representation of the data that preserves certain topological properties~\citep{kohonen1982self, vesanto2000clustering}. In
particular, Neural Gas~\citep{martinetz1991neural} aligns feature vectors to
the input data. Each input datum is then assigned to one of these vectors in a
winner-takes-all manner. These feature vectors are in spirit similar to our
target representations, and we use a similar assignment strategy. In contrast to our work,
the target vectors are not fixed but are instead aligned to the input vectors. Since
we primarily aim at learning the input features, we do the opposite.
\paragraph{Discriminative clustering.}
Many methods have been proposed to use discriminative losses for
clustering~\cite{xu2004maximum,Bach07,krause2010discriminative,JouBacICML12}.
In particular,
\citet{Bach07} shows that the ridge regression loss can be used to
learn discriminative clusters. It has been successfully
applied to several computer vision applications, like
object discovery~\citep{Joulin10,TJLF14} or
video/text alignment~\cite{Bojanowski_ICCV13,bojanowski2014weakly,ramanathan2014linking}.
In this work, we show that a similar framework can be
designed for neural networks. As opposed to \citet{xu2004maximum},
we address the empty assignment problem by restricting the
set of possible reassignments to permutations, rather than
imposing global linear constraints on the assignments.
Our assignments can be updated online, allowing our approach to scale to very large datasets.
\section{Introduction}
In high energy collisions of heavy ions in RHIC or LHC, the most important
ingredients for producing quark gluon plasma ( QGP ) are small x gluons in the nuclei~\cite{1,2,3,4,5}.
The small x gluons with also small transverse momenta are sufficiently dense in the nuclei and so
they may be treated as classical fields produced by glassy large x gluons
even after the collisions of the nuclei. It has been shown that
longitudinal color magnetic and electric fields of the small x gluons are generated
initially at the collisions. They are classical fields and evolve classically
according to a color glass condensate (CGC) model\cite{cgc}.
These fields, which are called glasma, are expected to give rise to QGP through their rapid decay\cite{hirano, iwa}.
In an extremely high energy collision, the thickness of nuclei is nearly zero due to the Lorentz contraction.
Thus, the initial gauge fields have only transverse momentum perpendicular to the collision axis, and
no longitudinal momentum ( rapidity dependence ). Such classical gauge fields cannot
acquire any longitudinal momentum in their classical evolution, since the equations of motion of the gauge fields
are invariant under Lorentz boosts along the collision axis\cite{2,3,4}.
Thus, the naive application of
the CGC model, e.g. McLerran-Venugopalan model ( MV model )\cite{cgc}, does not give rise to thermalized QGP.
It has recently been shown\cite{ve}, however, that the addition of small
rapidity-dependent fluctuations, e.g. quantum fluctuations\cite{fuku,review}, to the initial gauge field
induces exponentially growing modes with longitudinal momentum.
The production of these exponentially growing modes implies that a process toward
thermalization has started: the decay of the gauge field and the isotropization
of momenta. Although the decay of the classical fields has been demonstrated in numerical calculations,
the physical mechanism of the decay is still unclear.
In this paper we show, using a simple model of an initial gauge field,
that the decay of the glasma is
caused by Nielsen-Olesen unstable modes\cite{no,savvidy,iwa,itakura}.
These modes arise under the initial gauge field
when small fluctuations around the gauge fields are taken into account.
In particular,
we compare the time evolutions of the unstable modes with those of the exponentially growing
modes of longitudinal pressure found in the previous calculation\cite{ve}.
We find that the main results of the previous work
can be reproduced.
The initial gauge field in our model is spatially homogeneous and much weaker than
the fields of the glasma\cite{ve}. The glasma fields are strong ( e.g. the color magnetic field is $B\sim O(Q_s^2/g)$ ) and
inhomogeneous ( their coherence length being of the order of $Q_s^{-1}$; $Q_s$ is the saturation momentum,
written as $Q_s=g^2\mu$ in ref.~\cite{ve} ).
The initial gauge field is chosen to reproduce the time evolutions of the
instabilities observed in ref.~\cite{ve}. It cannot reproduce the
local behavior of the instabilities in the transverse space, since
it is homogeneous in that space.
Nielsen-Olesen instability is associated with color magnetic fields,
not color electric fields. In this paper we are only concerned with the
instability of the color magnetic fields. The color electric fields
produced in heavy ion collisions are
unstable against quark pair creation\cite{miklos}, so they may decay
sufficiently fast. We do not discuss the instability of the color electric fields.
In the next section, we explain the instabilities observed in the previous simulation\cite{ve}.
In Section 3, we briefly review the Nielsen-Olesen instability.
In Section 4, we explain the assumptions
used in our model, especially the relevance of using a homogeneous color magnetic field
instead of the inhomogeneous glasma.
In Section 5, we introduce our simple model for analyzing the decay
of the glasma in $\tau$ and $\eta$ coordinates.
In Section 6, we compare our results with those obtained in the
previous simulation\cite{ve}. In the final section,
we summarize our conclusions.
\section{Instabilities in initial gauge fields}
In high energy heavy ion collisions, classical color gauge fields
are generated, pointing along the collision axis ( the longitudinal direction ).
They are color electric and magnetic fields.
They arise at $\tau=0$, just when the collision occurs.
( Here we assume collisions in the high energy limit, where the heavy ions are Lorentz contracted to
zero width. Thus, the collision occurs at the instant $\tau=0$; $\tau=\sqrt{x_0^2-x_3^2}$. )
They have sufficiently large energy densities to produce thermalized quark gluon plasma
in their subsequent decay. The color gauge fields are coherent
states of small x gluons with transverse momenta less than a saturation momentum, $Q_s$.
These gluons are described by a model of color glass condensate, e.g. MV model.
More explicitly, the color gauge fields are given initially at $\tau=0$ as functions of
color charge density of large x gluons inside
of heavy ions; the color charge
density is determined with a Gaussian distribution in the MV model.
Such classical gauge fields are uniform in the longitudinal direction.
On the other hand, they are not uniform in the transverse directions;
the scales of their variations in these directions
are typically determined by the saturation momentum $Q_s$. This is because
they are made of the gluons possessing transverse momenta typically
given by $Q_s$.
This implies that the fields keep their directions ( parallel or anti-parallel to the
collision axis ) inside of
transverse regions whose widths are given typically by $Q_s^{-1}$. But, they change
their sign outside of the region.
In this way
their directions are never uniform in the
transverse directions, although they are uniform in the longitudinal
direction.
Anyway, such color gauge fields are given initially in the collisions.
After their production they evolve according to the gauge field equations.
Since the gauge field equations are invariant under Lorentz boosts along the
collision axis $x_3$ ( $\eta=\log(\frac{x_0+x_3}{x_0-x_3}) \to \eta+\mbox{const.}$ ),
the color gauge fields acquire no dependence on $\eta$ in their development. That is, the gauge fields
remain uniform in the longitudinal direction after the collision.
In such a circumstance, a numerical calculation\cite{5}
has been performed, showing that they evolve smoothly in time $\tau$ and
become weak owing to their expansion.
No unstable behaviors of the fields were found in the calculation.
Non-linearity in the gauge field equations does not seem to play
a dramatic role in the evolution of the gauge fields. Actually,
it has been shown\cite{itakura} that their time developments
can be reproduced qualitatively in a linear analysis of
the field equations; in particular,
the fields become weak with time as $1/\tau$.
( A kind of "Abelian dominance"\cite{abelian}
seems to hold for large $\tau$, since self-interactions of the gauge fields
become ineffective owing to the smallness of the fields at large $\tau$. )
We use this feature in order to construct our
simple analytical model of the glasma decay.
Subsequently, a numerical simulation was performed\cite{ve}
with a slightly modified initial condition. That is, much smaller
gauge fields depending on $\eta$ were added by hand to the original
initial gauge fields. Such gauge fields may arise because
the uniformity in $\eta$ of the initial condition is broken
in real situations:
heavy ion collisions occur at finite energies,
and the initial conditions derived in the MV model receive
higher order quantum corrections. Both of these give rise to
small $\eta$-dependent corrections to the initial gauge fields,
which themselves have no $\eta$ dependence.
The simulation clarified the existence of unstable modes in
the evolution of the gauge fields; Fourier components in $\eta$ of the longitudinal pressure
increase exponentially in
$\tau$. This implies that a component of the gauge fields added in the simulation
increases exponentially.
The existence of these modes implies that the initial color electric and magnetic fields
uniform in the longitudinal direction are unstable under small
fluctuations depending on $\eta$.
We should note that, since the $\eta$-dependent gauge field fluctuations are sufficiently smaller
than the initial gauge fields,
they can be treated perturbatively. That is, the fluctuations
evolve under the background initial gauge fields
without self-interactions. Hence, the analysis of their evolution can be
performed in the linear approximation.
Here, we review the characteristic properties of the unstable modes found in the
numerical simulation. We denote the longitudinal pressure by $P_{\eta}$.
When we define the Fourier components
as $P_{\eta}(k_{\eta},\tau)=\int d\eta \,\,P_{\eta}(\eta,\tau)\exp(ik_{\eta}\eta)$ ( $k_{\eta}$ denotes the longitudinal momentum ),
$P_{\eta}(k_{\eta},\tau)$ evolves smoothly in $\tau$
when the initial gauge fields have no dependence on the rapidity $\eta$.
Obviously, $P_{\eta}(k_{\eta},\tau) \propto \delta(k_{\eta})$ in that case.
Once $\eta$-dependent gauge fields are added to the initial gauge fields,
$P_{\eta}(k_{\eta},\tau)$ shows exponential increase.
Namely, some unstable modes are excited
when $\eta$-dependent gauge field fluctuations are added.
$P_{\eta}(k_{\eta},\tau)$ shows the exponential increase only after
a time $\tau(k_{\eta}) >0$ has passed. In other words,
$P_{\eta}(k_{\eta},\tau)$ does not increase exponentially until the time $\tau = \tau(k_{\eta})$. The simulation
shows that $\tau(k_{\eta})\propto k_{\eta}$.
That is, the component $P_{\eta}(k_{\eta},\tau)$
begins to increase exponentially later than the components $P_{\eta}(k'_{\eta},\tau)$
( $k'_{\eta}<k_{\eta}$ ).
Hence, we can define a maximum momentum $k_{\eta}(\mbox{max})$ at the instant $\tau$
such that
the component $P_{\eta}(k_{\eta}(\mbox{max}),\tau)$
begins to increase exponentially at that instant;
the components $P_{\eta}(k_{\eta}<k_{\eta}(\mbox{max}),\tau)$
have already been increasing exponentially.
The simulation shows that $k_{\eta}(\mbox{max})$ increases
linearly with time $\tau$.
Another interesting property found in the simulation concerns the longitudinal momentum
distribution at a time $\tau$, namely $P_{\eta}(k_{\eta},\tau)$.
The distribution has a peak at a value $k_{\eta}(p)$ which is almost independent
of $\tau$. This implies that the unstable modes with the characteristic longitudinal
momentum $k_{\eta}(p)$ are induced dominantly.
The peak of the distribution, $P_{\eta}(k_{\eta}(p),\tau)$, increases
as $P_{\eta}(k_{\eta}(p),\tau)\propto \exp(\mbox{const.}\tau^{1/2})$.
We should mention that the growth rate of the peak is
much smaller than the inverse of the typical time scale $Q_s^{-1}$ of the system.
Thus, it takes too long a time for the small gauge field fluctuations
to become comparable in magnitude to the initial gauge fields.
This is bad news for the realization of a thermalized QGP through the decay of the gauge fields.
We will show that these characteristic properties can be understood
using the Nielsen-Olesen instability.
The point of our analysis is that the background initial gauge fields are approximately
described by Abelian gauge fields and that the added gauge field fluctuations
can be treated perturbatively.
Based on these approximations, we show that
the unstable modes found in the numerical simulation
are just the Nielsen-Olesen unstable modes.
\section{Nielsen-Olesen Instability}
We briefly review the Nielsen-Olesen instability using SU(2) gauge theory.
The instability means that a homogeneous color magnetic field
is unstable in the gauge theory.
In order to explain it, we decompose the gluon
Lagrangian
using the variables "electromagnetic field"
$A_{\mu}=A_{\mu}^3$ and "charged vector field"
$\Phi_{\mu}=(A_{\mu}^1+iA_{\mu}^2)/\sqrt{2}$,
where the indices $1\sim 3$ denote color components,
\begin{eqnarray}
\label{L}
L&=&-\frac{1}{4}\vec{F}_{\mu
\nu}^2=-\frac{1}{4}(\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu})^2-
\frac{1}{2}|D_{\mu}\Phi_{\nu}-D_{\nu}\Phi_{\mu}|^2 \nonumber \\
&+&ig(\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu})\Phi_{\mu}^{\dagger}\Phi_{\nu}+\frac{g^2}{4}(\Phi_{\mu}\Phi_{\nu}^{\dagger}-
\Phi_{\nu}\Phi_{\mu}^{\dagger})^2
\end{eqnarray}
with $D_{\mu}=\partial_{\mu}+igA_{\mu}$,
where we have imposed the gauge condition $D_{\mu}\Phi_{\mu}=0$ and omitted the corresponding gauge fixing term.
We can see that
the charged vector field $\Phi_{\mu}$ couples with the electromagnetic field $A_{\mu}$
minimally through the covariant derivative $D_{\mu}$ and non-minimally
through the interaction term $ig(\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu})\Phi_{\mu}^{\dagger}\Phi_{\nu}$.
When a homogeneous color magnetic field $B>0$ described by $A_{\mu}=A_{\mu}^B$ is present,
we analyze the fluctuations $\Phi_{\mu}$ in this color magnetic field. To do so,
we solve the following equation for $\Phi$ in the background field $B$,
\begin{equation}
(\partial_t^2-\vec{D}^2\mp 2gB)\phi_{\pm}=0
\end{equation}
with $A_j^B=(-Bx_2,Bx_1,0)/2$,
where $\phi_{\pm}=(\Phi_1\pm i\Phi_2)/\sqrt{2}$ and $\vec{D}=\vec{\partial}+ig\vec{A}^B$.
The index $\pm$ denotes the spin component parallel ($+$) or anti-parallel ($-$) to $\vec{B}=(0,0,B)$;
we have assumed that the magnetic field points in the $x_3$ direction.
In this equation, higher order interactions have been neglected.
The energy spectra of the fields $\phi_{\pm}$ are easily obtained.
The energy $\omega$ of
the charged vector field $\phi_{\pm}\propto e^{i\omega t}$
is given by
$\omega^2=k_3^2+2gB(n+1/2)\mp 2gB$.
The integer $n\geq 0$ denotes the Landau level
and $k_3$ the
momentum parallel to the magnetic field.
The term $\mp 2gB$ in $\omega^2$ comes from the non-minimal interaction,
which represents the anomalous magnetic moment of the charged vector fields.
It is obvious that the modes of $\phi_{+}$ have imaginary frequencies, $\omega^2=k_3^2-gB<0$, when $n=0$ and $k_3^2<gB$.
These modes occupy the lowest Landau level ( $n=0$ ).
This implies that
the field $\phi_{+}$ increases exponentially in time.
The modes with $\omega^2<0$ are called Nielsen-Olesen unstable modes.
Therefore,
when a homogeneous color magnetic field is present,
the state is unstable; the Nielsen-Olesen unstable modes are generated spontaneously
and the state decays into more stable states.
This is the Nielsen-Olesen instability in
the gauge theory.
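The spectrum above is simple enough to check mechanically. The following Python sketch ( our own illustration; the function name \verb|omega_sq| and the units $gB=1$ are not part of the model ) evaluates $\omega^2=k_3^2+2gB(n+1/2)\mp 2gB$ and confirms that only the lowest Landau level of $\phi_{+}$ becomes tachyonic for $k_3^2<gB$:

```python
def omega_sq(k3, n, moment_sign, gB):
    # omega^2 = k3^2 + 2 gB (n + 1/2) -+ 2 gB for phi_{+-};
    # moment_sign = -1 for phi_+ (spin parallel to B), +1 for phi_-
    return k3**2 + 2.0 * gB * (n + 0.5) + moment_sign * 2.0 * gB

gB = 1.0  # arbitrary units
# lowest Landau level of phi_+: omega^2 = k3^2 - gB < 0 for k3^2 < gB
assert omega_sq(0.0, 0, -1, gB) == -gB
# every other branch is stable for any k3
assert all(omega_sq(0.0, n, s, gB) > 0.0
           for n in range(10) for s in (-1, +1) if not (n == 0 and s == -1))
```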
For the convenience of later discussions, we write down the Hamiltonian\cite{iwazaki}
of the field $\phi_{+}$, neglecting higher order interactions,
\begin{equation}
\label{H}
H=|\partial_t\phi_{+}|^2+|\vec{D}\phi_{+}|^2-2gB|\phi_{+}|^2,
\end{equation}
where the term $|\vec{D}\phi_{+}|^2$ becomes $(k_3^2+gB)|\phi_{+}|^2$
when $\phi_{+}$ occupies the lowest Landau level in the homogeneous color magnetic field $B$.
The Hamiltonian holds even for an inhomogeneous magnetic field $B=\epsilon_{i,j}\partial_iA_j$.
We can see that, even when an inhomogeneous magnetic field $B$ is present,
the presence of the field $\phi_{+}$
can make the energy lower than the energy ( $=0$ ) of the state with $\phi_{+}=0$.
In this sense, inhomogeneous magnetic fields are also unstable in general.
( In order to demonstrate the instability, we need to show
the presence of bound state solutions by solving eq(\ref{4}) in the next section. )
We call the field $\phi_{+}$ a Nielsen-Olesen field.
Similarly, the field $\phi_{-}$ is a Nielsen-Olesen field for
inhomogeneous $B$, since it also describes unstable modes;
the Hamiltonian of $\phi_{-}$ is given by
$H(\phi_{-})=|\partial_t\phi_{-}|^2+|\vec{D}\phi_{-}|^2+2gB|\phi_{-}|^2$
and the sign of $gB$ can be negative in parts of the transverse space.
\section{Our assumption for analysis of instability in gauge fields}
First of all,
we explain the assumptions used in our simple model of gauge field evolution in heavy ion collisions.
Here we use Cartesian coordinates for the explanation.
The initial gauge fields independent of the rapidity used in our model are only longitudinal color electric and magnetic fields.
They are maximal Abelian components of the gauge fields;
for example,
$A_i^a=(0,0,A_i^3)$ in the SU(2) gauge theory. Then, the non-Abelian interactions vanish and only
linear equations remain.
It has been discussed\cite{itakura} that, even with such a simplification of the initial gauge fields of the glasma,
we can reproduce quite well the numerical results\cite{5} for their evolution, at least for $\tau>Q_s^{-1}$.
Thus, the approximate use of such Abelian initial gauge fields instead of the non-Abelian glasma is
appropriate. This is our first assumption.
Such Abelian gauge fields are inhomogeneous in the transverse space, just as
the glasma is.
Then, as we discussed with eq(\ref{H}), the color magnetic fields $B$
are energetically unstable
owing to the presence of the Nielsen-Olesen fields.
This instability represents the instability of the glasma in our model.
It is described by the equation,
\begin{equation}
\label{4}
\omega^2\phi=(-D_T^2+k_3^2-2gB)\phi
\end{equation}
with $D_T^2=(\vec{\partial}_T+ig\vec{A}_T)^2$ and $B=\mbox{rot}A_T$,
where we assume that $\phi\propto \exp(i\omega t-ik_3x_3)$.
The equation can be derived from the Hamiltonian in eq(\ref{H}).
It is a "Schr\"{o}dinger equation" for charged particles in the magnetic field $B$ with
an additional potential term $-2gB$.
There are "bound state" solutions.
The binding energy is given by $-\omega^2>0$; thus,
the growth rate of the unstable modes is given by the imaginary part of the frequency $\omega$.
Our second approximation is to
replace the inhomogeneous $B$ in the operator
$-\vec{D}_T^2-2gB$ in eq(\ref{4}) with
an effective homogeneous one, $\bar{B}$, keeping
the eigenvalues of the operator $-\vec{D}_T^2-2gB$ unchanged.
We should note that the dependence of the growth rate on the longitudinal momentum $k_3$
and the momentum distribution $\phi(k_3)$ do not change under this
replacement, as long as the eigenvalues of the operator $-D_T^2-2gB$
are the same.
Thus, after the replacement
we can still analyze what we are concerned with, that is,
the time evolution of the unstable modes and
the characteristic features, e.g. growth rates, associated with it.
However, the local behaviors of the fields in the transverse space are lost
by the replacement.
The replacement is possible in principle, but difficult in practice.
In our simple model, the value of $\bar{B}$ is determined by
comparing our results with the previous simulation\cite{ve}.
\section{Our simple model of Nielsen-Olesen instability in glasma}
Now we explain the details of our simple model of the glasma decay.
We use the proper time $\tau=\sqrt{x_0^2-x_3^2}$ and the
rapidity $\eta=\log(\frac{x_0+x_3}{x_0-x_3})$ as the longitudinal coordinate.
These coordinates are convenient for describing the expanding "glasma" generated in the early stage
of heavy ion collisions. The collisions occur at $x_3=0$ ( $\eta=0$ ) and $x_0=0$ ( $\tau=0$ )
at extremely high energies. Thus,
the heavy ions are Lorentz contracted to vanishing width in the longitudinal direction and extend only in the transverse directions
with coordinates $x_i=(x_1,x_2)$.
We analyze SU(2) gauge fields $\vec{A}_{\mu}$.
The corresponding gauge fields in these coordinates are $\vec{A}_{\tau}$, $\vec{A}_{\eta}$ and $\vec{A}_i=(\vec{A}_1,\vec{A}_2)$.
Taking the gauge condition $\vec{A}_{\tau}=0$, the Lagrangian of the fields is given by
\begin{equation}
\tau L=\tau \Biggl( \frac{1}{2\tau^2}(\partial_{\tau}\vec{A}_{\eta})^2+\frac{1}{2}(\partial_{\tau}\vec{A}_i)^2
-\frac{1}{2\tau^2}\vec{F}_{\eta,i}^2-\frac{1}{4}\vec{F}_{i,j}^2 \Biggr) ,
\end{equation}
with $\vec{F}_{\eta,i}^2=(\partial_{\eta}\vec{A}_i-\partial_i\vec{A}_{\eta}+g\vec{A}_{\eta}\times \vec{A}_i)^2$
and $\vec{F}_{i,j}^2=(\partial_i\vec{A}_j-\partial_j\vec{A}_i+g\vec{A}_i\times \vec{A}_j)^2$.
We define the complex fields $\phi_{\eta}$, $\phi_i$ and the real fields $A_{\eta}$, $A_i$ by rearranging
the color components of the gauge fields,
\begin{equation}
\vec{A}_{\eta}=(A_{\eta}^1,A_{\eta}^2,A_{\eta}^3)=
(\frac{\phi_{\eta}+\phi_{\eta}^{\dagger}}{\sqrt{2}},\frac{\phi_{\eta}-\phi_{\eta}^{\dagger}}{i\sqrt{2}},A_{\eta})
\quad \mbox{and} \quad \vec{A}_i=(\frac{\phi_i+\phi_i^{\dagger}}{\sqrt{2}},\frac{\phi_i-\phi_i^{\dagger}}{i\sqrt{2}},A_i),
\end{equation}
and define the following "charged fields" with spin parallel, $\phi_{+}$, and anti-parallel, $\phi_{-}$, to the longitudinal direction:
$\phi_{\pm}\equiv\frac{\phi_1\pm i\phi_2}{\sqrt{2}}$.
The initial gauge fields are introduced as the maximal Abelian components $A_{\eta}, A_i$ of the gauge fields, as mentioned above.
It is easy to see that the fields $\phi_{\eta}$ and $\phi_{\pm}$ transform under the gauge transformation
$A_{\mu}\to U^{\dagger}A_{\mu}U+g^{-1}U^{\dagger}\partial_{\mu}U$ with $U=\exp(i\theta\sigma_3)$ as
$\phi_{\eta,\pm}\to \exp( -i\theta)\phi_{\eta,\pm} $, where $\sigma_i$ are the Pauli matrices and $\theta$ is a constant
independent of $\tau$.
Therefore, we may regard the complex fields $\phi_{\eta,\pm}$ as U(1) charged fields corresponding to this symmetry.
We introduce a longitudinal color magnetic field $B$ and a color electric field $E$ as initial background gauge fields, both of which are
assumed to point in the third direction of the SU(2) gauge group; $B=\epsilon_{i,j}\partial_iA_j$ and $E=\frac{1}{\tau}\partial_{\tau}A_{\eta}$.
The fields point in the direction parallel to the collision axis in real space; $\vec{B}=(0,0,B)$ and $\vec{E}=(0,0,E)$.
We assume that the background fields are generated at $\tau=0$ in the heavy ion collisions.
With the use of the fields $A_{i,\eta}$ and $\phi_{\pm,\eta}$, the Lagrangian leads to
\begin{eqnarray}
\label{eq1}
\tau L&=&\frac{\tau}{2}(\partial_{\tau}A_i)^2+\frac{1}{2\tau}(\partial_{\tau}A_{\eta})^2+\tau(|\partial_{\tau}\phi_{+}|^2+|\partial_{\tau}\phi_{-}|^2)
+\frac{1}{\tau}|\partial_{\tau}\phi_{\eta}|^2-\frac{\tau}{4}f_{i,j}^2-\frac{1}{2\tau}f_{\eta,i}^2 \nonumber \\
&-&\tau(|D_i\phi_{+}|^2+|D_i\phi_{-}|^2)-\frac{1}{\tau}(|D_{\eta}\phi_{+}|^2+|D_{\eta}\phi_{-}|^2+|D_i\phi_{\eta}|^2)+2\tau gB(|\phi_{+}|^2-|\phi_{-}|^2) \nonumber \\
&+&\frac{\tau}{2}|D_{-}\phi_{+}+D_{+}\phi_{-}|^2+\frac{1}{\sqrt{2}\tau}((D_{-}\phi_{+}+D_{+}\phi_{-})D_{\eta}\phi_{\eta}+c.c.)
+\nonumber \\
&&\biggl(\frac{2gi}{\sqrt{2}\tau}(f_{\eta}\phi_{+}+f_{\eta}^{\dagger}\phi_{-})\phi_{\eta}^{\dagger}+c.c.\biggr)-
\frac{g^2}{\tau}|\phi_{\eta}\phi_{+}^{\dagger}-\phi_{\eta}^{\dagger}\phi_{-}|^2-\frac{\tau g^2}{2}(|\phi_{+}|^2-|\phi_{-}|^2)^2,
\end{eqnarray}
with $f_{i,j}\equiv\partial_iA_j-\partial_jA_i$, $f_{\eta,i}\equiv\partial_{\eta}A_i-\partial_iA_{\eta}$, $f_{\eta}\equiv f_{\eta,1}-if_{\eta,2}$,
$D_i\equiv\partial_i+igA_i$, $D_{\eta}\equiv\partial_{\eta}+igA_{\eta}$,
and $D_{\pm}\equiv D_1\pm iD_2$, where we have neglected total derivative terms such as $\partial_iJ_i$.
Obviously, the Lagrangian is invariant under the U(1) gauge transformation $\phi_{\pm,\eta}\to \phi_{\pm,\eta}\exp(-i\theta)$
along with $A_{i,\eta}\to A_{i,\eta}+g^{-1}\partial_{i,\eta}\theta$.
The kinetic energies of the fields $A_i$, $A_{\eta}$, $\phi_{\pm}$ and $\phi_{\eta}$
are presented in the first line of the Lagrangian.
In the second line, the minimal interactions between $\phi_{\pm,\eta}$ and the Abelian gauge fields
$A_i$ and $A_{\eta}$ are presented.
We can also see in the second line that the charged fields $\phi_{\pm}$ acquire anomalous magnetic moment terms, $2\tau gB(|\phi_{+}|^2-|\phi_{-}|^2)$.
This term plays an important role in making the field $\phi_{+}$ unstable, that is, in generating the Nielsen-Olesen unstable mode.
The quartic interactions of the fields appear in the fourth line;
they describe repulsive forces among the fields $\phi_{\eta,\pm}$.
The repulsive force leads to the saturation of the exponential increase observed in the simulation\cite{ve}.
The terms in the third line are irrelevant to our discussion below ( they can be gauged away ).
When an initial gauge field configuration of $B=\epsilon_{i,j}\partial_iA_j$ and $E=\frac{1}{\tau}\partial_{\tau}A_{\eta}$ is given at $\tau=0$,
the subsequent evolution of the fields $A_{\eta}$ and $A_i$
is governed by the equations,
\begin{eqnarray}
&&\partial_{\tau}(\frac{1}{\tau}\partial_{\tau}A_{\eta})-\frac{1}{\tau}(\partial_i^2A_{\eta}-\partial_{\eta}\partial_iA_i)=0, \nonumber \\
\mbox{and} \quad &&\partial_{\tau}(\tau\partial_{\tau}A_i)-\tau(\partial_j^2A_i-\partial_i\partial_jA_j)
+(\partial_{\eta}^2A_i-\partial_i\partial_{\eta}A_{\eta})=0.
\end{eqnarray}
The equations are obtained from the Lagrangian by neglecting the charged fields.
The approximation is valid when those fields are sufficiently small that the interactions between
$E$, $B$ and the charged fields can be neglected.
A solution for $B$ and $A_{\eta}$ independent of the rapidity $\eta$ is given by
$B=B_0J_0(Q_0\tau)\cos(\vec{Q}_0\vec{x})$ and $A_{\eta}=c\tau J_1(Q_0\tau)\cos(\vec{Q}_0\vec{x})$,
where we suppose that the fields carry a transverse momentum $\vec{Q}_0$ ( $Q_0=|\vec{Q}_0|$ )
as a typical transverse momentum of the background gauge fields.
$B_0$ is a constant and $J_{0,1}(Q_0\tau)$ are Bessel functions.
( General solutions are given by averaging over a momentum distribution in $Q_0$. )
The constant $c$ may be determined by the
requirement that $B=E$ as $\tau\to 0$, which leads to $c=B_0/Q_0$. The requirement arises from the initial condition
$\langle\mbox{Tr}(B^2)\rangle=\langle\mbox{Tr}(E^2)\rangle$
for $\tau\to 0$ given in the MV model of the CGC \cite{fu}.
Here, the expectation value $\langle\sim \rangle $ is taken over the distribution of
large-$x$ gluons according to the MV model.
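As a consistency check, the claimed solution can be verified numerically. The Python sketch below ( our own illustration; we set $c=1$, $Q_0=1$, drop the transverse factor $\cos(\vec{Q}_0\vec{x})$, and evaluate $J_1$ through its integral representation since it is not in the standard library ) confirms that $A_{\eta}=\tau J_1(Q_0\tau)$ satisfies $\partial_{\tau}(\tau^{-1}\partial_{\tau}A_{\eta})+(Q_0^2/\tau)A_{\eta}=0$, which is the $\eta$-independent field equation with $\partial_i^2\to -Q_0^2$:

```python
import math

def bessel_j(n, x, steps=400):
    # integral representation J_n(x) = (1/pi) * int_0^pi cos(n t - x sin t) dt
    h = math.pi / steps
    s = 0.5 * (math.cos(0.0) + math.cos(n * math.pi - x * math.sin(math.pi)))
    for i in range(1, steps):
        t = i * h
        s += math.cos(n * t - x * math.sin(t))
    return s * h / math.pi

def a_eta(tau, q0=1.0):
    # A_eta = tau J_1(Q_0 tau), dropping the overall constant c = B_0/Q_0
    return tau * bessel_j(1, q0 * tau)

def residual(tau, q0=1.0, h=1e-3):
    # d/dtau[(1/tau) dA_eta/dtau] + (Q_0^2/tau) A_eta, by central differences
    def g(t):
        return (a_eta(t + h, q0) - a_eta(t - h, q0)) / (2.0 * h) / t
    return (g(tau + h) - g(tau - h)) / (2.0 * h) + q0**2 * a_eta(tau, q0) / tau

# the residual vanishes up to discretization error
assert abs(residual(2.0)) < 1e-4 and abs(residual(5.0)) < 1e-4
```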
Here we neglect the spatial dependence of the gauge fields according to our second
assumption. Additionally, we simplify the factor $J_0(Q_0\tau)$
by its large-$\tau$ form, $J_0(Q_0\tau)\to\sin(Q_0\tau)\sqrt{2/\pi Q_0\tau}\sim \sqrt{2/\pi Q_0\tau}$, neglecting the oscillating factor
$\sin(Q_0\tau)$. This smooth decay of the fields roughly coincides with
the numerical evolution of the glasma\cite{5}. Hence,
the simplification is appropriate for discussing small fluctuations
around the slowly decaying background gauge fields.
Therefore, we assume the following background initial gauge fields,
\begin{equation}
\label{sim}
B=B_0\sqrt{2/\pi Q_0\tau} \quad \mbox{and} \quad
A_{\eta}=B_0(\tau/Q_0)\,\sqrt{2/\pi Q_0\tau},
\end{equation}
which reproduce the smooth decay of $\langle\mbox{Tr}(B^2)\rangle$ and $\langle\mbox{Tr}(E^2)\rangle$
for large $\tau$.
Under the background gauge fields,
we analyze the development of the small fluctuations $\phi_{\eta,\pm}$.
These correspond to the small fluctuations added to the initial background gauge fields
in the previous simulation\cite{ve}.
Since they are supposed to be very small,
we take into account only the terms quadratic in these fields in the Lagrangian.
These fluctuations in general oscillate with small amplitudes,
but one of them increases exponentially.
In our model the field $\phi_{+}$ ( or $\phi_{-}$ when $gB<0$ ) is the one increasing exponentially with $\tau$; the other fields
simply oscillate and their amplitudes remain small.
We write down the equation of motion of the field $\phi_{+}$,
\begin{equation}
\label{5}
\partial_{\tau}^2\phi_{+}+\frac{1}{\tau}\partial_{\tau}\phi_{+}+
\biggl(\frac{(k_{\eta}-gA_{\eta}(\tau))^2}{\tau^2}-gB(\tau) \biggr)\phi_{+}=0,
\end{equation}
where $k_{\eta}$ denotes the longitudinal momentum; $\phi_{+}\propto\exp(-ik_{\eta}\eta)$.
We have kept
only the component in the lowest Landau level.
It is easy to see that,
due to the last term $-gB(\tau)$, $\phi_{+}$ increases exponentially, $\phi_{+}\propto\exp(\sqrt{gB}\tau)$ for $\tau \to \infty$,
when $gB$ is independent of $\tau$, i.e. as a solution of the equation $\partial_{\tau}^2\phi_{+}-gB\phi_{+}\simeq 0$.
The wave functions are given by $\phi_{+}=g_m(\tau) z^m\exp(-|\vec{x}|^2/4l^2_B) $ with $z=x_1+ix_2$ and integers $m\ge 0$,
where $l_B=1/\sqrt{gB}$ denotes the cyclotron radius and $g_m(\tau)$ is governed by eq(\ref{5});
we have neglected the smooth expansion of the cyclotron radius.
\section{Our results}
In order to solve eq(\ref{5}) with $A_{\eta}(\tau)$ and $B(\tau)$ given in eq(\ref{sim}),
we rewrite the equation as follows,
\begin{equation}
\label{8}
\partial_{\tau'}^2\phi_{+}+\frac{1}{\tau'}\partial_{\tau'}\phi_{+}+
\biggl(\frac{(k_{\eta}-b\sqrt{\tau'})^2}{\tau'^2}-\frac{a}{\sqrt{\tau'}}\biggr)\phi_{+}=0,
\end{equation}
where dimensionless parameters are defined as $\tau'\equiv Q_s\tau$, $a\equiv \sqrt{2/\pi}(gB_0/Q_0^2)\times (Q_0/Q_s)^{3/2}$
and $b\equiv \sqrt{2/\pi}(gB_0/Q_0^2)\times (Q_0/Q_s)^{1/2}$.
In the subsequent calculations we treat the overall scale of the field $\phi_{+}$ as arbitrary,
although it is much smaller
than the background field. This is allowed in the approximation of
keeping only the terms quadratic in the field $\phi_{+}$ in the Lagrangian.
The coefficients $a$ and $b$ are determined so as to reproduce the results of the previous
simulation\cite{ve}; actually, we have used $a=(0.05)^2$ and $b=0.38$
in order to obtain our curve in Fig.5.
Before solving equation (\ref{8}) numerically, we briefly explain how the solutions behave with $\tau'$.
The term $\biggl(\frac{(k_{\eta}-b\sqrt{\tau'})^2}{\tau'^2}-\frac{a}{\sqrt{\tau'}}\biggr)$ ( $\equiv \omega_s^2$ ) is
just a spring constant. The term $\frac{1}{\tau'}\partial_{\tau'}\phi_{+}$ represents a friction, which
becomes weaker as $\tau'$ becomes larger.
Thus, the field oscillates as long as $\omega_s^2 >0$, that is, in the early stage ( $\tau'\sim O(1)$ )
after the production of the background fields. The spring constant $\omega_s^2$ decreases with $\tau'$.
Once $\omega_s^2$ becomes negative, the field stops oscillating
and begins to increase exponentially.
In Fig.1 we show the typical behavior of the field $\phi_{+}(k_{\eta}=16.5,\tau')$ with
the initial conditions $\phi_{+}(\tau'=0.01)=1$ and $\partial_{\tau}\phi_{+}(\tau'=0.01)=0$.
( Obviously, taking different initial conditions does not change the global behavior of $\phi_{+}$ for large $\tau'$,
since the field simply oscillates at small $\tau'$. )
The field increases exponentially after the oscillation in the early stage.
When the longitudinal momentum $k_{\eta}$ becomes larger,
the time $\tau'$ at which the field $\phi_{+}(k_{\eta},\tau')$
begins to increase exponentially
becomes larger. This implies that, as $\tau'$ becomes larger,
modes with larger longitudinal momenta
are excited.
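The qualitative behavior just described can be reproduced with a few lines of code. The following pure-Python sketch ( our own minimal fourth-order Runge-Kutta integrator, not the code of ref.\cite{ve}; for numerical convenience the integration starts at $\tau'=1$ rather than $\tau'=0.01$, which only affects the rapid early oscillations ) integrates eq(\ref{8}) with our fitted values $a=(0.05)^2$ and $b=0.38$:

```python
import math

def omega_s_sq(tau, k_eta, a, b):
    # the "spring constant" of eq.(8)
    return (k_eta - b * math.sqrt(tau))**2 / tau**2 - a / math.sqrt(tau)

def rk4_step(f, t, y, h):
    # one classical fourth-order Runge-Kutta step for y' = f(t, y)
    k1 = f(t, y)
    k2 = f(t + h / 2, [u + h / 2 * v for u, v in zip(y, k1)])
    k3 = f(t + h / 2, [u + h / 2 * v for u, v in zip(y, k2)])
    k4 = f(t + h, [u + h * v for u, v in zip(y, k3)])
    return [u + h / 6 * (p + 2 * q + 2 * r + s)
            for u, p, q, r, s in zip(y, k1, k2, k3, k4)]

def solve_phi(k_eta, a=0.05**2, b=0.38, tau0=1.0, tau1=1500.0, n=60000):
    # integrate phi'' + phi'/tau + omega_s^2 phi = 0, phi(tau0)=1, phi'(tau0)=0
    def f(t, y):
        return [y[1], -y[1] / t - omega_s_sq(t, k_eta, a, b) * y[0]]
    h = (tau1 - tau0) / n
    t, y = tau0, [1.0, 0.0]
    for _ in range(n):
        y = rk4_step(f, t, y, h)
        t += h
    return y[0]

# for k_eta = 16.5 the spring constant is positive (oscillation) early on
# and negative (exponential growth) at late times
assert omega_s_sq(10.0, 16.5, 0.05**2, 0.38) > 0.0
assert omega_s_sq(1500.0, 16.5, 0.05**2, 0.38) < 0.0
```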
In Fig.2 we show the time dependence of the maximal momentum $k_{\eta}(\mbox{max})$.
The maximal momentum $k_{\eta}(\mbox{max})$ at the time $\tau'$ is defined as the momentum with which
the mode $\phi_{+}(k_{\eta}(\mbox{max}),\tau')$ starts to increase
exponentially at that time; the modes with $k_{\eta}<k_{\eta}(\mbox{max})$
have already been increasing exponentially at the time $\tau'$.
The maximal momentum $k_{\eta}(\mbox{max})$ may be obtained by solving the condition
$\omega_s^2=0$ for the spring constant, but in our calculation
$k_{\eta}(\mbox{max})$ is defined by $|\phi_{+}(k_{\eta}(\mbox{max}),\tau')|=2$.
We have shown both results in Fig.2, and they almost coincide with each other.
$k_{\eta}(\mbox{max})$ increases almost linearly in $\tau'$, while
the solution of $\omega_s^2=0$ shows that $k_{\eta}(\mbox{max})\propto \tau'\,^{3/4}$.
The result agrees with the previous one\cite{ve},
although our rate of increase is approximately four times smaller than the previous one,
$k_{\eta}(\mbox{max})\simeq 0.015\,Q_s\tau+5$.
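The condition $\omega_s^2=0$ can in fact be solved in closed form: the larger root is $k_{\eta}=b\sqrt{\tau'}+\sqrt{a}\,\tau'\,^{3/4}$, which makes the $\tau'\,^{3/4}$ behavior explicit. A short Python check ( our own illustration, with our fitted $a=(0.05)^2$ and $b=0.38$ ):

```python
import math

def k_eta_max(tau, a=0.05**2, b=0.38):
    # larger root of omega_s^2 = (k - b sqrt(tau))^2 / tau^2 - a / sqrt(tau) = 0
    return b * math.sqrt(tau) + math.sqrt(a) * tau**0.75

# the root satisfies omega_s^2 = 0 up to round-off
tau = 1000.0
k = k_eta_max(tau)
assert abs((k - 0.38 * math.sqrt(tau))**2 / tau**2
           - 0.05**2 / math.sqrt(tau)) < 1e-12
# monotone growth, dominated by the tau^{3/4} term at large tau
assert k_eta_max(400.0) < k_eta_max(800.0) < k_eta_max(1600.0)
```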
It has been shown\cite{ve} that
$k_{\eta}(\mbox{max})$ deviates from the linear dependence around the time when
the exponential increase is saturated. After the deviation, $k_{\eta}(\mbox{max})$ increases
very rapidly. These phenomena could be understood in our model
as resulting from the onset of the quartic interactions of the field $\phi_{+}$:
the quartic interactions transfer the energy of the mode with the largest amplitude
to the other modes with higher longitudinal momenta.
\begin{figure}[htb]
\begin{minipage}{.47\textwidth}
\includegraphics[width=6cm,clip]{t-dependencenew.eps}
\label{fig:t-dependence}
\caption{After the oscillation, $\phi_{+}(k_{\eta}=16.5)$ increases exponentially with $\tau'=Q_s\tau$.}
\end{minipage}
\hfill
\begin{minipage}{.47\textwidth}
\includegraphics[width=6cm,clip]{kmaxnew1.eps}
\caption{Solution of $\omega_s^2=0$ ( solid ) and
$k_{\eta}(\mbox{max})$ ( dashed ), which increases almost linearly with $\tau'=Q_s\tau$.}
\label{fig:kmax}
\end{minipage}
\end{figure}
Before proceeding to the comparison, we should mention
why we compare the evolution of the field $\phi_{+}$ with
the evolution of the longitudinal pressure discussed in the previous simulation\cite{ve};
their behaviors are quite similar to each other.
Roughly speaking, there are two components of
fields involved in ref.\cite{ve}, large ones and much smaller ones.
The large ones $A(\mbox{large})$ are just the background
fields independent of the rapidity, which are produced according to
the MV model.
On the other hand, the small ones $a(\mbox{small})$ are the rapidity-dependent fluctuations
which are added by hand to the background fields.
In this circumstance,
the longitudinal pressure $P_{\eta}$
is composed of two parts, $P_{\eta,0}$ and $\delta P_{\eta}$:
\begin{equation}
P_{\eta}=\tau^{-2}\biggl(\vec{F}_{\eta,i}^2+(\tau\partial_{\tau}\vec{A}_i)^2\biggr)-
\vec{F}_{1,2}^2-(\frac{1}{\tau}\partial_{\tau}\vec{A}_{\eta})^2\simeq P_{\eta,0}+\delta P_{\eta},
\end{equation}
where $P_{\eta,0}$ is rapidity independent
and is formed only of the large components, while $\delta P_{\eta}$ is formed of
the small components as well as the large ones;
$\delta P_{\eta}=P(A(\mbox{large}))\times a(\mbox{small})$.
The linear dependence on $a(\mbox{small})$ comes from
the approximation
$a(\mbox{small}) \ll A(\mbox{large})$.
$P_{\eta,0}$ decreases smoothly with $\tau$.
On the other hand, $\delta P_{\eta}$
increases exponentially, although it is still
much smaller than $P_{\eta,0}$.
Therefore, some of the small rapidity-dependent components $a(\mbox{small})$
increase exponentially, as has been shown\cite{ve}. Our simple model of gauge field evolution indicates that such a small component
is just the Nielsen-Olesen unstable mode $\phi_{+}$.
That is the reason why we compare the evolution of the field $\phi_{+}$ with
the evolution of the longitudinal pressure.
It should be noted that the Fourier component of $\delta P_{\eta}$ in the rapidity
is determined only by the factor $a(\mbox{small})$, which corresponds to the small fluctuations $\phi_{\eta,\pm}$
in our model.
We now proceed with the further comparison.
In Fig.3, we show the typical momentum distribution $|\phi_{+}(k_{\eta},\tau')|$ at $\tau'=1500$.
The distribution in $k_{\eta}$ has been obtained by
solving eq(\ref{8}) for $\phi_+(k_{\eta},\tau')$ for each $k_{\eta}$ chosen within the range
$0.24\le k_{\eta}\le 18$
in steps of $\delta k_{\eta}=0.03$.
The distribution is not smooth but oscillates rapidly in $k_{\eta}$.
( When we magnify a small region of Fig.3, e.g. $\delta k_{\eta}\sim 0.2$,
we can see the oscillation in $k_{\eta}$ explicitly. Roughly speaking, this oscillation comes from
the oscillation of a modified Bessel function
$I_{k_{\eta}}(Q_s\tau)$. )
But we can
find a smooth distribution by averaging the original one over
a $\delta k_{\eta}$ that is small, but sufficiently large
compared with the wavelength of the oscillation. The smooth distribution then almost coincides with
the mountain-like form of the distribution in Fig.3.
( The average corresponds to an average over the initial conditions for solving
eq(\ref{8}), because slightly different initial conditions lead to slightly different
curves. )
We find that the distribution has a peak at a momentum $k_{\eta}(\rm{p})$.
The mode with this momentum is generated most efficiently.
In Fig.4, we show how $k_{\eta}(\rm{p})$
depends on the time $\tau'$;
$k_{\eta}(\rm{p})$ increases very slowly with $\tau'$, as $0.18 \sqrt{\tau'}$.
Furthermore, we can see that these momenta
are smaller than $k_{\eta}(\mbox{max})$. These results agree with
the previous ones\cite{ve}.
The presence of the longitudinal momentum $k_{\eta}(\rm{p})\simeq 6\sim 8$,
almost independent of time, implies that
in the decay of the gauge fields uniform in $\eta$,
specific modes with the momentum $k_{\eta}(\rm{p})$ are generated most efficiently,
which breaks the homogeneity in $\eta$.
We do not understand why such specific momenta are present.
\noindent
\begin{figure}[t]
\begin{minipage}{.47\textwidth}
\includegraphics[width=6cm,clip]{distributionnew.eps}
\label{fig:distribution}
\caption{Distribution of the longitudinal momentum $k_{\eta}$ at $\tau'=Q_s\tau=1500$.}
\end{minipage}
\hfill
\begin{minipage}{.47\textwidth}
\includegraphics[width=6cm,clip]{k-pnew1.eps}
\caption{ $\tau'$ dependence of $k_{\eta}(p)$ ( dots ) and
$k_{\eta}=0.18\sqrt{\tau'}$ ( solid ).}
\label{fig:k(p)}
\end{minipage}
\end{figure}
Finally, we show in Fig.5 how $|\phi_{+}(k_{\eta}(\rm{p}),\tau')|$ increases
with $\tau'$, that is, the time dependence of the peak $|\phi_{+}(k_{\eta}(\rm{p}),\tau')|$.
It increases as $\exp(\tau'\,^{3/4})$ for
$\tau'\to \infty $,
which can be read off
from equation (\ref{8}). The growth rate is defined as $(\log|\phi|)/\tau$.
Thus, the growth rate of the field $|\phi_{+}(k_{\eta}(\rm{p}),\tau)|$
decreases slowly with $\tau$.
Since $k_{\eta}(\rm{p})$ is almost constant in time,
$|\phi_{+}(k_{\eta},\tau)|$ also increases
in a way similar to $|\phi_{+}(k_{\eta}(\rm{p}),\tau)|$ for any $k_{\eta}$.
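The $\exp(\tau'\,^{3/4})$ behavior can be made explicit as follows. At large $\tau'$, where the term $-a/\sqrt{\tau'}$ dominates the spring constant, eq(\ref{8}) reduces to $\partial_{\tau'}^2\phi_{+}+\tau'^{-1}\partial_{\tau'}\phi_{+}-(a/\sqrt{\tau'})\phi_{+}\simeq 0$, whose WKB-type solution grows as $\exp(\frac{4}{3}\sqrt{a}\,\tau'\,^{3/4})$, since $\int^{\tau'}\sqrt{a}\,s^{-1/4}ds=\frac{4}{3}\sqrt{a}\,\tau'\,^{3/4}$. A small self-contained Python check of this truncated equation ( our own sketch; the integrator and the threshold in the assertion are illustrative, and the prefactor of the growth is not fixed by this argument ):

```python
import math

def rk4_step(f, t, y, h):
    # one classical fourth-order Runge-Kutta step for y' = f(t, y)
    k1 = f(t, y)
    k2 = f(t + h / 2, [u + h / 2 * v for u, v in zip(y, k1)])
    k3 = f(t + h / 2, [u + h / 2 * v for u, v in zip(y, k2)])
    k4 = f(t + h, [u + h * v for u, v in zip(y, k3)])
    return [u + h / 6 * (p + 2 * q + 2 * r + s)
            for u, p, q, r, s in zip(y, k1, k2, k3, k4)]

def grow(a=0.05**2, tau0=1.0, tau1=1500.0, n=15000):
    # truncated eq.(8): phi'' + phi'/tau - (a/sqrt(tau)) phi = 0,
    # with phi(tau0) = 1, phi'(tau0) = 0
    def f(t, y):
        return [y[1], -y[1] / t + (a / math.sqrt(t)) * y[0]]
    h = (tau1 - tau0) / n
    t, y = tau0, [1.0, 0.0]
    for _ in range(n):
        y = rk4_step(f, t, y, h)
        t += h
    return y[0]

# with a negative "spring constant" throughout, the mode grows monotonically;
# the WKB exponent (4/3) sqrt(a) tau'^{3/4} is about 16 at tau' = 1500
assert grow() > 100.0
```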
For comparison, we have depicted the Fourier component of the longitudinal pressure,
$P_{\eta}(k_{\eta}(p))=d_0+d_1\exp(0.427\sqrt{\tau'})$, given in ref.\cite{ve},
as well as the function $\exp(0.00544\tau')$ also used in ref.\cite{ve}.
Obviously,
the field $|\phi_{+}(k_{\eta}(\rm{p}),\tau')|$ in our calculation agrees with
the longitudinal pressure
better than
the function $\exp(0.00544\tau')$ does.
( In Fig.5, the scale of the vertical coordinate is arbitrary.
In order for $|\phi_{+}(k_{\eta}(\rm{p}),\tau')|$ to coincide precisely with the
pressure shown in ref.\cite{ve}, we only have to choose the scale of the field
$|\phi_{+}(k_{\eta}(\rm{p}),\tau')|$ appropriately. )
Our simulation does not strictly reproduce the behavior of the pressure such as $\exp(\sqrt{\tau'})$.
We expect that a more elaborate treatment of the background magnetic field may
give rise to the behavior $\exp(\sqrt{\tau'})$. ( When $B$ decreases as $\sim 1/\tau'$ instead of $1/\sqrt{\tau'}$,
this behavior can be obtained. We will discuss
the validity of the behavior $B\sim 1/\tau'$ in the near future. )
\noindent
\begin{figure}[t]
\begin{minipage}{.47\textwidth}
\includegraphics[width=6cm,clip]{comparisonnew.eps}
\label{fig:comparison2}
\caption{ longitudinal pressure $P_{\eta}(k_{\eta}(p))$ ( solid ) in ref.\cite{ve}, $|\phi_{+}|$ ( short dashing )
and $\exp(0.00544\tau')$ ( dashing ).}
\end{minipage}
\end{figure}
As it has been shown, the behaviors of the longitudinal
pressure calculated in the MV model with the small
fluctuations added can be roughly reproduced in our simple model:
1) $k_{\eta}(\mbox{max})$ increases linearly with $\tau$, 2) $k_{\eta}(\rm{p})$
depends on $\tau$ very weakly and $k_{\eta}(\rm{p})$ is much smaller than $k_{\eta}(\mbox{max})$,
and 3) the pressure $P_{\eta}(k(p))$
increases with $\tau$ as $\exp(\tau^{3/4})$, although the pressure obtained in the simulation
increases as $\exp(\sqrt{\tau})$.
Furthermore, we may argue that the saturation of the exponential increase arises due to
the repulsive self-interaction of $\phi_{+}$ in our model.
The repulsive interaction requires more energy for the field to grow further.
Thus, it would stop the field from increasing.
Although our results differ in detail from those in the simulation,
the rough agreement shows that
our simple model of instabilities
is a valid approximation for the instabilities
observed in the glasma.
Thus,
the instabilities of the glasma observed in the
previous simulation are caused by
the Nielsen-Olesen unstable mode.
In order to obtain these results,
we have used the parameters $a=(0.05)^2$ and $b=0.38$.
Roughly speaking, the parameter $a$ gives the growth rate of the
field $\phi_+$. Thus, the parameter can be determined by fitting it to the growth rate
obtained in the simulation\cite{ve}. But the growth rate shown in the simulation
is that of the field $\phi_+(k_{\eta}(p))$, not of $\phi_+(k_{\eta})$ itself.
Here $k_{\eta}(p)$ depends on
$\tau$, although the dependence is very weak. This requires
a careful adjustment of the other parameter $b$.
The detailed determination of the parameters is not
important. What is important is that these parameters lead to
a weak homogeneous color magnetic field $B_0$.
Actually,
the above values of $a$ and $b$ correspond to the physical parameters $Q_0\simeq Q_s/152$ and $gB_0\simeq 4.7Q_0^2$.
Thus, the growth rate $\sim \sqrt{gB_0}$ discussed in section 4
is much smaller than $\sqrt{g|B|}\sim Q_s$. Indeed, the growth rate is about
$0.00544Q_s$, derived from the reference function $\exp(0.00544\tau')$ depicted in Fig.5.
We should mention that the smallness of the growth rate comes from the inhomogeneity of the glasma.
The potential $-2gB\sim Q_s^2$ for the field $\phi_{+}$ in eq(\ref{4})
varies rapidly in transverse space; it has many attractive regions ( $-2gB<0$ ) and
repulsive regions ( $-2gB>0$ ) whose widths are of the order $Q_s^{-1}$.
Wave functions of the bound states extend over these regions,
involving both attractive and repulsive potentials; they can never be trapped
within a region with an attractive potential.
Therefore, the binding energies $-\omega^2$
become much smaller than $Q_s^2$.
Thus, the growth rate becomes much smaller than $Q_s$.
\section{Conclusion}
To summarize,
we have discussed how inhomogeneous color magnetic fields $gB\sim Q_s^2$ produced in
high energy heavy ion collisions decay with the production of
the Nielsen-Olesen fields $\phi_{+}$.
Instead of analyzing the evolution of the field $\phi_{+}$ under the inhomogeneous
color magnetic fields, we have analyzed it
using an effective homogeneous color magnetic field.
Then,
we have compared the time evolution of
the Nielsen-Olesen field $\phi_{+}$
with the evolution of
the longitudinal pressure shown in the simulation\cite{ve}.
We have found that our simple model with the effective weak homogeneous color magnetic field $gB_0$
reproduces the important features clarified in the simulation;
growth rates, longitudinal momentum distributions, etc. of the unstable modes.
The coincidence is not accidental.
These are properties associated with the time and longitudinal directions.
Even if we use the homogeneous magnetic field, the properties of the unstable modes can
in principle be reproduced. On the other hand,
properties associated with the transverse directions
cannot be reproduced with the use of such fields.
Therefore, our analysis shows that the decay of the glasma
generated initially at $\tau=0$ in heavy ion collisions
is caused by Nielsen-Olesen instability.
On the other hand, it has been argued\cite{ve,arnold} that
the Weibel instability known in plasma physics
is the cause of the glasma instability:
an inhomogeneous electromagnetic plasma whose momentum distribution depends only on the transverse momentum
shows the Weibel instability when a small magnetic field is applied.
The instability is discussed by using
the Boltzmann equation of charged ( color charged ) particles
coupled with electromagnetic ( color gauge ) fields,
while the glasma instability in the simulation has been
shown in pure gauge theory.
The Nielsen-Olesen instability is an instability
arising in pure gauge theory. In this sense, the relevance of the Weibel instability
to the glasma instability is not obvious.
Thus, it is reasonable to think
that the glasma instability shown in the simulation
is just Nielsen-Olesen instability.
We will discuss in future publications
why the glasma instability is just the Nielsen-Olesen instability,
not the Weibel one.
\vspace*{2em}
We would like to express our thanks
to Dr. K. Itakura at KEK and Dr. H. Fujii at the University of Tokyo for useful comments.
\section{Introduction}
\begin{figure}[t]
\vskip -0.15in
\centering
\includegraphics[width=\linewidth]{supporting_curve.pdf}
\vskip -0.1in
\caption{Performance improvement of Soft Truncation.}
\label{fig:st_thumbnail}
\vskip -0.2in
\end{figure}
Recent advances in generative models enable the creation of highly realistic images. One direction of such modeling is \textit{likelihood-free models} \citep{karras2019style} based on minimax training. The other direction is \textit{likelihood-based models}, including VAEs \cite{vahdat2020nvae}, autoregressive models \citep{parmar2018image}, and flow models \citep{grcic2021densely}. Diffusion models \citep{ho2020denoising} are among the most successful \textit{likelihood-based models}, where the generative process is modeled by the reverse diffusion process. The success of diffusion models has led to state-of-the-art performance in image generation \citep{dhariwal2021diffusion, song2020score}.
Despite such success, score-based diffusion models still struggle to generate realistic images with a robust model density. Rather than improving the network architecture, this paper focuses mostly on the optimization viewpoint of diffusion models to improve the sample quality without hurting the model density. We first observe that the diffusion time acts in distinctive ways on model performances: the Evidence Lower Bound (ELBO) is mostly contributed by the score accuracy at small diffusion time, while the Fr\'echet Inception Distance (FID) is dominated by the score estimation at large diffusion time. Since small diffusion time contributes to the ELBO the most, the score estimation at large diffusion time would remain immature if we insist on training the score network with MLE. Therefore, this paper introduces a training technique that enables a well-trained score network at both small and large diffusion times.
In practice, a diffusion model trains its score network with a truncated ELBO with small $\epsilon>0$ due to a diverging-loss issue. This means that the score network does not estimate the data score perturbed by less than $\epsilon$. Soft Truncation, the proposed training technique, softens the truncation time to $\tau>0$ by sampling $\tau$ from a truncation prior for every mini-batch, so that the score network estimates the data score perturbed on $[\tau,T]$, where $T$ is the maximum diffusion time. With the $\tau$-truncated ELBO, the score network does not update its parameters to fit the data score perturbed by less than $\tau$; but as $\tau$ is softened, the score network estimates the data score at small diffusion time from mini-batches with sufficiently small $\tau$ truncations. It turns out that Soft Truncation is a natural way to optimize the general weighted diffusion loss in view of the variational bound introduced in Theorem \ref{thm:1}. Theorem \ref{thm:1} additionally implies that Soft Truncation could be interpreted as Maximum Perturbed Likelihood Estimation (MPLE), because Soft Truncation updates the network by maximizing the perturbed log-likelihood with a sampled perturbation level ($\tau$) for every mini-batch.
\section{Preliminary}\label{sec:preliminary}
Throughout this paper, we focus on continuous-time diffusion models. A continuous diffusion model slowly and systematically perturbs a data random variable, $\mathbf{x}_{0}$, into a noise variable, $\mathbf{x}_{T}$, as time flows \cite{song2020score}. The diffusion mechanism is represented as a Stochastic Differential Equation (SDE), written by
\begin{align}\label{eq:forward_sde}
\diff\mathbf{x}_{t}=\mathbf{f}(\mathbf{x}_{t},t)\diff t+g(t)\diff\bm{\omega}_{t},
\end{align}
where $\bm{\omega}_{t}$ is a standard Brownian motion. The drift ($\mathbf{f}$) and the diffusion ($g$) terms are fixed, so the data variable is diffused in a fixed manner. We denote $\{\mathbf{x}_{t}\}_{t=0}^{T}$ as the solution of the given SDE of Eq. \eqref{eq:forward_sde}.
The objective of the diffusion model is \textit{learning} the stochastic process, $\{\mathbf{x}_{t}\}_{t=0}^{T}$, as a parametrized stochastic process, $\{\mathbf{x}_{t}^{\bm{\theta}}\}_{t=0}^{T}$. A diffusion model builds the parametrized stochastic process as a solution of a generative SDE,
\begin{align}\label{eq:generative_sde}
\diff\mathbf{x}_{t}^{\bm{\theta}}=\big[\mathbf{f}(\mathbf{x}_{t}^{\bm{\theta}},t)-g^{2}(t)\mathbf{s}_{\bm{\theta}}(\mathbf{x}_{t}^{\bm{\theta}},t)\big]\diff\bar{t}+g(t)\diff\bm{\bar{\omega}}_{t},
\end{align}
where $\diff\bar{t}$ is the backward time differential, and $\bm{\bar{\omega}}_{t}$ is a standard Brownian motion of backward time \cite{anderson1982reverse}. We construct the parametrized stochastic process by solving the SDE of Eq. \eqref{eq:generative_sde} backwards in time with a starting variable of $\mathbf{x}_{T}^{\bm{\theta}}\sim \pi$, where $\pi$ is an initial noise distribution.
The model distribution, or the generative distribution, of a diffusion model is the probability distribution of $\mathbf{x}_{0}^{\bm{\theta}}$. We denote $p_{t}$ and $p_{t}^{\bm{\theta}}$ as the marginal distributions of the groundtruth stochastic process ($\{\mathbf{x}_{t}\}_{t=0}^{T}$) and the generative stochastic process ($\{\mathbf{x}_{t}^{\bm{\theta}}\}_{t=0}^{T}$), respectively.
A diffusion model learns the generative stochastic process by minimizing the score loss \cite{huang2021variational} of
\begin{align*}
\mathcal{L}(\bm{\theta};\lambda)=\frac{1}{2}\int_{0}^{T}\lambda(t) \mathbb{E}_{\mathbf{x}_{t}}\big[\Vert\mathbf{s}_{\bm{\theta}}(\mathbf{x}_{t},t)-\nabla\log{p_{t}(\mathbf{x}_{t})}\Vert_{2}^{2}\big] \diff t,
\end{align*}
where $\lambda(t)$ is a weighting function that counts the contribution of each diffusion time on the overall loss scale. This score loss is infeasible to optimize because the data score, $\nabla_{\mathbf{x}_{t}}\log{p_{t}(\mathbf{x}_{t})}$, is intractable in general. Fortunately, $\mathcal{L}(\bm{\theta};\lambda)$ is known to be equivalent to the (continuous) denoising NCSN loss \cite{song2020score, song2019generative},
\begin{align*}
\frac{1}{2}\int_{0}^{T}\lambda(t) \mathbb{E}_{\mathbf{x}_{0},\mathbf{x}_{t}}\big[\Vert\mathbf{s}_{\bm{\theta}}(\mathbf{x}_{t},t)-\nabla\log{p_{0t}(\mathbf{x}_{t}\vert\mathbf{x}_{0})}\Vert_{2}^{2}\big] \diff t,
\end{align*}
up to a constant that is irrelevant to $\bm{\theta}$-optimization.
Two important SDEs are known to attain analytic transition probabilities $p_{0t}(\mathbf{x}_{t}\vert\mathbf{x}_{0})$: VESDE and VPSDE \cite{song2020score}. First, VESDE assumes $\mathbf{f}(\mathbf{x}_{t},t)=0$ and $g(t)=\sigma_{min}(\frac{\sigma_{max}}{\sigma_{min}})^{t}\sqrt{2\log{\frac{\sigma_{max}}{\sigma_{min}}}}$. With such specific forms of $\mathbf{f}$ and $g$, the transition probability of VESDE turns out to follow a Gaussian distribution, $p_{0t}(\mathbf{x}_{t}\vert\mathbf{x}_{0})=\mathcal{N}(\mathbf{x}_{t};\mu_{VE}(t)\mathbf{x}_{0},\sigma_{VE}^{2}(t)\mathbf{I})$, with $\mu_{VE}(t)\equiv 1$ and $\sigma_{VE}^{2}(t)=\sigma_{min}^{2}[(\frac{\sigma_{max}}{\sigma_{min}})^{2t}-1]$. Similarly, VPSDE takes $\mathbf{f}(\mathbf{x}_{t},t)=-\frac{1}{2}\beta(t)\mathbf{x}_{t}$ and $g(t)=\sqrt{\beta(t)}$, where $\beta(t)=\beta_{min}+t(\beta_{max}-\beta_{min})$; its transition probability falls into a Gaussian distribution, $p_{0t}(\mathbf{x}_{t}\vert\mathbf{x}_{0})=\mathcal{N}(\mathbf{x}_{t};\mu_{VP}(t)\mathbf{x}_{0},\sigma_{VP}^{2}(t)\mathbf{I})$, with $\mu_{VP}(t)=e^{-\frac{1}{2}\int_{0}^{t}\beta(s)\diff s}$ and $\sigma_{VP}^{2}(t)=1-e^{-\int_{0}^{t}\beta(s)\diff s}$.
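As a concrete sketch, both perturbation kernels can be evaluated in closed form; the hyperparameter values below ($\sigma_{min}=0.01$, $\sigma_{max}=50$, $\beta_{min}=0.1$, $\beta_{max}=20$) are the common defaults and are an assumption of ours, not prescribed by the text:

```python
import numpy as np

# Closed-form perturbation kernels p_{0t}(x_t | x_0) = N(mu(t) x_0, sigma^2(t) I).
SIGMA_MIN, SIGMA_MAX = 0.01, 50.0   # VESDE (assumed default values)
BETA_MIN, BETA_MAX = 0.1, 20.0      # VPSDE (assumed default values)

def ve_mu_sigma(t):
    mu = np.ones_like(t)                                       # mu_VE(t) = 1
    sigma2 = SIGMA_MIN ** 2 * ((SIGMA_MAX / SIGMA_MIN) ** (2 * t) - 1.0)
    return mu, np.sqrt(sigma2)

def vp_mu_sigma(t):
    # int_0^t beta(s) ds for the linear schedule beta(s) = beta_min + s (beta_max - beta_min)
    int_beta = BETA_MIN * t + 0.5 * (BETA_MAX - BETA_MIN) * t ** 2
    mu = np.exp(-0.5 * int_beta)                               # mu_VP(t)
    sigma2 = 1.0 - np.exp(-int_beta)                           # sigma_VP^2(t)
    return mu, np.sqrt(sigma2)

t = np.array([1e-5, 0.5, 1.0])
mu_vp, sig_vp = vp_mu_sigma(t)
mu_ve, sig_ve = ve_mu_sigma(t)
# As t -> 0, both kernels collapse onto the data (mu -> 1, sigma -> 0);
# at t = 1, VPSDE is close to N(0, I) and VESDE reaches sigma ~ sigma_max.
```

The limiting behavior in the final comment is exactly why the noise prior $\pi$ is taken as a standard Gaussian for VPSDE and as $\mathcal{N}(0,\sigma_{max}^{2}\mathbf{I})$ for VESDE.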
\begin{figure*}[t]
\centering
\begin{subfigure}{0.32\linewidth}
\centering
\includegraphics[width=\linewidth]{differential_of_nelbo_cifar10_ver2.pdf}
\subcaption{Integrand by Time}
\end{subfigure}
\begin{subfigure}{0.32\linewidth}
\centering
\includegraphics[width=\linewidth]{nelbo_cifar10_ver3.pdf}
\subcaption{Variational Bound Truncated at $\tau$}
\end{subfigure}
\begin{subfigure}{0.32\linewidth}
\includegraphics[width=\linewidth]{nll_nelbo_log_cifar10_ver3.pdf}
\subcaption{Test Performance by Log-Time}
\end{subfigure}
\vskip -0.05in
\caption{The contribution of diffusion time on the variational bound experimented on CIFAR10 with DDPM++ (VP) \cite{song2020score}. (a) The scale of the integrand, $\frac{\diff\mathcal{L}(\bm{\theta};g^{2},\tau)}{\diff\tau}$, is extremely imbalanced on the overall time horizon on $[\epsilon,T]$. (b) The truncated variational bound, $\mathcal{L}(\bm{\theta};g^{2},\tau)$, remains at the same level except near $\tau\approx 0$. (c) The test performance with log-time. The truncation time ($\epsilon$) is a significant factor on the density evaluation performance.}
\label{fig:nelbo}
\vskip -0.1in
\end{figure*}
From the above examples of analytic Gaussian transition probabilities, we assume throughout this paper that the transition probability follows a Gaussian distribution $p_{0t}(\mathbf{x}_{t}\vert\mathbf{x}_{0})=\mathcal{N}(\mathbf{x}_{t};\mu(t)\mathbf{x}_{0},\sigma^{2}(t)\mathbf{I})$ with generic $\mu$ and $\sigma$ (see Appendix \ref{sec:transition_probability}), to emphasize that the suggested method is applicable to any form of linear SDE, including VE/VP SDEs. With such a Gaussian transition probability of a generic linear SDE, the denoising NCSN loss reduces to
\begin{align*}
\frac{1}{2}\int_{0}^{T}\frac{\lambda(t)}{\sigma^{2}(t)}\mathbb{E}_{\mathbf{x}_{0},\bm{\epsilon}}\big[\Vert\sigma(t)\mathbf{s}_{\bm{\theta}}(\mu(t)\mathbf{x}_{0}+\sigma(t)\bm{\epsilon},t)+\bm{\epsilon}\Vert_{2}^{2}\big]\diff t,
\end{align*}
where $\bm{\epsilon}$ follows a standard Gaussian distribution. This is the (continuous) DDPM loss \cite{song2020score} with an $\bm{\epsilon}$-parametrization \cite{ho2020denoising} by $\bm{\epsilon}_{\bm{\theta}}(\mu(t)\mathbf{x}_{0}+\sigma(t)\bm{\epsilon},t):=-\sigma(t)\mathbf{s}_{\bm{\theta}}(\mu(t)\mathbf{x}_{0}+\sigma(t)\bm{\epsilon},t)$. This provides a unified view of NCSN and DDPM that the NCSN loss and the DDPM loss are equivalent. Hence, we take the NCSN loss as a default representation throughout the paper.
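A single Monte-Carlo term of this loss is straightforward to write down in the $\bm{\epsilon}$-parametrization, where it takes the standard DDPM form $\Vert\bm{\epsilon}_{\bm{\theta}}-\bm{\epsilon}\Vert_{2}^{2}$. In the sketch below, the VP-like coefficients and the point-mass toy data are illustrative assumptions of ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def ddpm_loss_mc(x0, t, mu, sigma, eps_model, lam):
    """One Monte-Carlo term of the weighted denoising loss:
    (1/2) * lam(t)/sigma^2(t) * ||eps_theta(mu(t) x0 + sigma(t) eps, t) - eps||^2,
    using the eps-parametrization eps_theta = -sigma * s_theta."""
    eps = rng.standard_normal(x0.shape)
    xt = mu(t) * x0 + sigma(t) * eps
    resid = eps_model(xt, t) - eps
    return 0.5 * lam(t) / sigma(t) ** 2 * np.sum(resid ** 2)

# Toy ingredients (our own assumptions): VP-like coefficients and data
# concentrated at x0 = 0, for which the exact eps-model is eps(xt) = xt / sigma(t),
# since the true score of N(0, sigma^2 I) is -x / sigma^2.
mu = lambda t: np.exp(-0.5 * t)
sigma = lambda t: np.sqrt(1.0 - np.exp(-t))
perfect_eps = lambda xt, t: xt / sigma(t)
loss = ddpm_loss_mc(np.zeros(4), 0.5, mu, sigma, perfect_eps,
                    lam=lambda t: sigma(t) ** 2)
# A perfect model drives the residual, and hence the loss, to zero.
```

In practice `eps_model` is the trained network, and the choice of `lam` reproduces the likelihood weighting ($\lambda=g^{2}$) or the variance weighting ($\lambda=\sigma^{2}$) discussed later.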
Recently, \citet{song2021maximum} connects the diffusion loss with the log-likelihood by proving the variational bound of
\begin{align*}
\mathbb{E}_{\mathbf{x}_{0}}[-\log{p_{0}^{\bm{\theta}}(\mathbf{x}_{0})}]\le\mathcal{L}(\bm{\theta};g^{2}),
\end{align*}
when the weighting function is the square of the diffusion term, which is called the likelihood weighting. We obtain the \textit{truncated} variational bound in Lemma \ref{lemma:1} by applying the data processing inequality \cite{gerchinovitz2020fano} and the Girsanov theorem \cite{sarkka2019applied}, as proposed in \citet{pavon1991free,vargas2021solving,song2021maximum}.
\begin{lemma}\label{lemma:1}
For any $\tau\in[0,T]$, up to a constant,
\begin{eqnarray}
\lefteqn{\mathbb{E}_{\mathbf{x}_{\tau}}\big[-\log{p_{\tau}^{\bm{\theta}}(\mathbf{x}_{\tau})}\big]\le\mathcal{L}(\bm{\theta};g^{2},\tau)}&\label{eq:perturbed_nelbo}\\
&&:=\frac{1}{2}\int_{\tau}^{T}\mathbb{E}_{\mathbf{x}_{0},\mathbf{x}_{t}}\big[g^{2}(t)\Vert\mathbf{s}_{\bm{\theta}}(\mathbf{x}_{t},t)-\nabla\log{p_{0t}(\mathbf{x}_{t}\vert\mathbf{x}_{0})}\Vert_{2}^{2}\notag\\
&&\quad\quad-g^{2}(t)\Vert\nabla_{\mathbf{x}_{t}}\log{p_{0t}(\mathbf{x}_{t}\vert\mathbf{x}_{0})}\Vert_{2}^{2}-2\textup{div}\big(\mathbf{f}(\mathbf{x}_{t},t)\big)\big]\diff t\notag\\
&&\quad\quad-\mathbb{E}_{\mathbf{x}_{T}}\big[\log{\pi(\mathbf{x}_{T})}\big].\notag
\end{eqnarray}
\end{lemma}
By Lemma \ref{lemma:1}, the negative log-likelihood is decomposed into the truncated log-likelihood and the reconstruction term,
\begin{align}
&\mathbb{E}_{\mathbf{x}_{0}}\big[-\log{p_{0}^{\bm{\theta}}(\mathbf{x}_{0})}\big]=\mathbb{E}_{\mathbf{x}_{0},\mathbf{x}_{\tau}}\big[-\log{p(\mathbf{x}_{0},\mathbf{x}_{\tau})}\big]\nonumber\\
&\quad\quad=\mathbb{E}_{\mathbf{x}_{\tau}}\big[\underbrace{-\log{p_{\tau}^{\bm{\theta}}(\mathbf{x}_{\tau})}}_{\text{continuous diffusion}}\big]+\mathbb{E}_{\mathbf{x}_{0},\mathbf{x}_{\tau}}\big[\underbrace{-\log{p(\mathbf{x}_{0}\vert\mathbf{x}_{\tau})}}_{\text{reconstruction}}\big]\nonumber\\
&\quad\quad\le\underbrace{\mathcal{L}(\bm{\theta};g^{2},\tau)}_{\text{truncated bound}}+\mathbb{E}_{\mathbf{x}_{0},\mathbf{x}_{\tau}}\big[-\log{p(\mathbf{x}_{0}\vert\mathbf{x}_{\tau})}\big].\label{eq:diffusion_loss}
\end{align}
\section{Training and Evaluation of Diffusion Models in Practice}\label{sec:practice}
\subsection{A Universal Phenomenon in Diffusion Training: Extremely Imbalanced Loss}
In practice, previous works train a diffusion model with a truncated loss to stabilize training and evaluation. Before we explain the truncated loss, observe that $\nabla\log{p_{0t}(\mathbf{x}_{t}\vert\mathbf{x}_{0})}=-\frac{\mathbf{x}_{t}-\mu(t)\mathbf{x}_{0}}{\sigma^{2}(t)}=-\frac{\mathbf{z}}{\sigma(t)}$, where $\mathbf{x}_{t}=\mu(t)\mathbf{x}_{0}+\sigma(t)\mathbf{z}$ with $\mathbf{z}\sim\mathcal{N}(0,\mathbf{I})$. The denominator $\sigma(t)$ converges to zero as $t\rightarrow 0$ for both VE/VP SDEs, and thus $\Vert\nabla\log{p_{0t}(\mathbf{x}_{t}\vert\mathbf{x}_{0})}\Vert_{2}$ diverges as $t\rightarrow 0$. Therefore, in both VE/VP SDEs, the integrand of the variational bound,
\begin{align*}
&\frac{\diff\mathcal{L}(\bm{\theta};g^{2},\tau)}{\diff\tau}=\frac{1}{2}\mathbb{E}_{\mathbf{x}_{0},\mathbf{x}_{t}}\big[g^{2}(t)\Vert\nabla\log{p_{0t}(\mathbf{x}_{t}\vert\mathbf{x}_{0})}\Vert_{2}^{2}\\
&-g^{2}(t)\Vert\mathbf{s}_{\bm{\theta}}(\mathbf{x}_{t},t)-\nabla\log{p_{0t}(\mathbf{x}_{t}\vert\mathbf{x}_{0})}\Vert_{2}^{2}+2\text{div}\big(\mathbf{f}(\mathbf{x}_{t},t)\big)\big],
\end{align*}
diverges as $t\rightarrow 0$, which leads to training instability.
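The blow-up is easy to quantify numerically. For VPSDE, $\sigma^{2}(t)\approx\beta_{min}t$ as $t\rightarrow 0$, so the expected squared score norm $\mathbb{E}\Vert\nabla\log{p_{0t}(\mathbf{x}_{t}\vert\mathbf{x}_{0})}\Vert_{2}^{2}=d/\sigma^{2}(t)$ diverges like $1/t$; the default $\beta$ schedule below is an assumption of ours:

```python
import numpy as np

# Numeric illustration for VPSDE: sigma^2(t) ~ beta_min * t as t -> 0, so the
# expected squared score norm E||grad log p_{0t}(x_t|x_0)||^2 = d / sigma^2(t)
# blows up like 1/t.
BETA_MIN, BETA_MAX = 0.1, 20.0       # assumed default schedule
d = 3 * 32 * 32                      # e.g. CIFAR-10 dimensionality

def sigma2_vp(t):
    int_beta = BETA_MIN * t + 0.5 * (BETA_MAX - BETA_MIN) * t ** 2
    return 1.0 - np.exp(-int_beta)

norms = [d / sigma2_vp(t) for t in (1e-1, 1e-3, 1e-5)]
# Each factor-100 decrease of t inflates the expected squared norm by ~100x,
# which is exactly the diverging contribution near t = 0.
```

This $1/t$ growth is what forces the truncation at $\epsilon$ described next.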
\begin{figure}[t]
\vskip -0.15in
\centering
\includegraphics[width=\linewidth]{thumbnail_ver3.pdf}
\vskip -0.05in
\caption{The truncation time is key to enhance the microscopic sample quality.}
\label{fig:truncation_variance}
\vskip -0.2in
\end{figure}
To avoid this diverging issue, previous works on VPSDE modify the loss by truncating the integration to $[\epsilon,T]$ with a hyperparameter $\epsilon>0$, so that the diffusion model does not optimize the score network at diffusion times in $[0,\epsilon)$. Analogously, previous works on VESDE approximate $\sigma_{VE}^{2}(t)\approx\sigma_{min}^{2}(\frac{\sigma_{max}}{\sigma_{min}})^{2t}$, which truncates the minimum variance of the transition probability to $\sigma_{min}^{2}$. Truncating the diffusion time at $\epsilon$ in VPSDE is equivalent to truncating the diffusion variance ($\sigma_{min}^{2}$) in VESDE, so these two truncations have an identical effect of bounding the diffusion loss. Henceforth, this paper discusses the argument from the viewpoint of truncating the diffusion time (VPSDE) or the diffusion variance (VESDE), interchangeably.
Figure \ref{fig:nelbo} illustrates the significance of truncation for the training of diffusion models. With the truncation of $\epsilon=10^{-5}$, Figure \ref{fig:nelbo}-(a) shows that the integrand of the variational bound, $\frac{\diff\mathcal{L}(\bm{\theta};g^{2},\tau)}{\diff\tau}$, in the Bits-Per-Dimension (BPD) scale is still extremely imbalanced. It turns out that such extreme imbalance is a universal phenomenon in training a diffusion model, and this phenomenon lasts from the beginning to the end of training.
Figure \ref{fig:nelbo}-(b) presents the \textit{truncated} variational bound $\mathcal{L}(\bm{\theta};g^{2},\tau)$ on the $y$-axis in the BPD scale, and it indicates that most of the variational bound is contributed by small diffusion time. Therefore, if $\epsilon$ is not sufficiently small, then Eq. \eqref{eq:diffusion_loss} with the green line in Figure \ref{fig:nelbo}-(b) is not tight to $\mathbb{E}_{\mathbf{x}_{0}}\big[-\log{p_{0}^{\bm{\theta}}(\mathbf{x}_{0})}\big]$, and a diffusion model fails at MLE training. In addition, Figure \ref{fig:truncation_variance} clearly indicates that an insufficiently small $\epsilon$ (or $\sigma_{min}$) also harms the microscopic sample quality. From these observations, $\epsilon$ becomes a significant hyperparameter that needs to be selected carefully.
\subsection{Contribution of Truncation on Model Evaluation}
Figure \ref{fig:nelbo}-(c) reports test performances on density estimation. It illustrates that both the Negative ELBO (NELBO) and the Negative Log-Likelihood (NLL) monotonically decrease as $\epsilon$ is lowered, because the NELBO is mostly contributed by small diffusion time at test time as well as at training time. Therefore, a common strategy would be to reduce $\epsilon$ as much as possible in order to reduce the test NELBO/NLL.
\begin{wraptable}{r}{0.22\textwidth}
\vskip -0.15in
\centering
\caption{Ablation on $\sigma_{min}$.}
\label{tab:cifar10_ablation_ncsn}
\vskip -0.1in
\tiny
\begin{tabular}{ccc}
\toprule
\multirow{2}{*}{$\sigma_{min}$} & \multicolumn{2}{c}{CIFAR-10}\\
& NLL ($\downarrow$) & FID-10k ($\downarrow$) \\\midrule
$10^{-2}$ & 4.95 & 6.95 \\
$10^{-3}$ & 3.04 & 7.04 \\
$10^{-4}$ & 2.99 & 8.17 \\
$10^{-5}$ & 2.97 & 8.29 \\
\bottomrule
\end{tabular}
\vskip -0.15in
\end{wraptable}
On the contrary, there is a counter effect on sample generation performance with respect to $\epsilon$. Table \ref{tab:cifar10_ablation_ncsn}, trained on CIFAR10 \cite{krizhevsky2009learning} with NCSN++ \cite{song2020score}, shows that FID worsens as we take $\sigma_{min}\rightarrow 0$ (equivalently, $\epsilon\rightarrow 0$ in VPSDE). Figures \ref{fig:large_diffusion_time} and \ref{fig:regenerated} illustrate the reason for the inverse correlation between density estimation and sample quality. When a diffusion model creates a fake sample by reverting the diffusion process, the precision of the generative process, $\diff\mathbf{x}_{t}^{\bm{\theta}}=\big[\mathbf{f}(\mathbf{x}_{t}^{\bm{\theta}},t)-g^{2}(t)\mathbf{s}_{\bm{\theta}}(\mathbf{x}_{t}^{\bm{\theta}},t)\big]\diff\bar{t}+g(t)\diff\bm{\bar{\omega}}_{t}$, fully depends on the score estimation ($\mathbf{s}_{\bm{\theta}}$) because the drift ($\mathbf{f}$) and diffusion ($g$) terms are fixed a priori. As Figure \ref{fig:monte_carlo_norm} illustrates, sample generation is dominated by large diffusion time. However, if the truncation ($\epsilon$) is lowered, then both the scale and the variance of the variational bound rely more on small diffusion time, as in Figure \ref{fig:nelbo}-(b), so the score network at large diffusion time remains unoptimized.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{denoising_two_ver2.pdf}
\vskip -0.05in
\caption{Illustration of the generative process trained on CelebA-HQ $256\times 256$ with NCSN++ (VE) \cite{song2020score}. The score precision on large diffusion time is key to construct the realistic overall sample quality.}
\label{fig:large_diffusion_time}
\vskip -0.1in
\end{figure}
\begin{figure}[t]
\vskip -0.05in
\centering
\includegraphics[width=0.65\linewidth]{monte_carlo_norm.pdf}
\vskip -0.1in
\caption{The contribution of diffusion time on sample generation. Large diffusion time dominates sample generation.}
\label{fig:monte_carlo_norm}
\vskip -0.2in
\end{figure}
Figure \ref{fig:large_diffusion_time} illustrates a mechanism of how the overall sample fidelity is damaged: the man synthesized in the second row has unrealistic curly hair on his forehead, which is constructed at the initial steps of the generation process corresponding to large diffusion time. Analogously, Figure \ref{fig:regenerated} shows samples regenerated by solving the generative process reversely in time, starting from the perturbed data $\mathbf{x}_{\tau}$ \cite{meng2021sdedit}; the samples regenerated with small $\tau$ differ from the original picture only in fine details. Figures \ref{fig:large_diffusion_time} and \ref{fig:regenerated} imply that the overall shape of a sample is determined by the accuracy of the score estimation at large diffusion time. Taken together, \textit{density estimation favors accurate score estimation at small diffusion time, whereas sample generation needs accurate score estimation at large diffusion time}. This explains the mechanism behind the inverse correlation of density estimation and sample generation performances observed earlier in \citet{dockhorn2021score, nichol2021improved, vahdat2021score}.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{regenerated.pdf}
\vskip -0.05in
\caption{Illustration of the regenerated samples that are synthesized by solving the probability flow on $[\epsilon,\tau]$ with the initial point of $\mathbf{x}_{\tau}=\mu(\tau)\mathbf{x}_{0}+\sigma(\tau)\mathbf{z}$ for $\mathbf{z}\sim\mathcal{N}(0,\mathbf{I})$ trained on CelebA with DDPM++ (VP) \cite{song2020score}.}
\label{fig:regenerated}
\vskip -0.2in
\end{figure}
\section{Soft Truncation: A Training Technique for a Diffusion Model}
As discussed in Section \ref{sec:practice}, the choice of $\epsilon$ is crucial for both training and evaluation, but it is computationally infeasible to search for the optimal $\epsilon$. Therefore, we introduce a training technique that largely mitigates the need for an $\epsilon$ search by softening the truncation time in every optimization step. Our approach achieves successful training of the score network at large diffusion time without sacrificing NLL. We explain the Monte-Carlo estimation of the variational bound in Section \ref{sec:monte_carlo}, and we provide the details of Soft Truncation in Section \ref{sec:softrunc}.
\begin{figure*}[t]
\centering
\begin{subfigure}{0.32\linewidth}
\includegraphics[width=\linewidth]{monte_carlo_loss.pdf}
\subcaption{Monte-Carlo Loss}
\end{subfigure}
\begin{subfigure}{0.32\linewidth}
\includegraphics[width=\linewidth]{monte_carlo_loss_st.pdf}
\subcaption{Soft Truncation}
\end{subfigure}
\begin{subfigure}{0.32\linewidth}
\includegraphics[width=\linewidth]{importance_distribution.pdf}
\subcaption{Importance Distribution}
\end{subfigure}
\caption{Experimental results trained on CIFAR10 with DDPM++ (VP) \cite{song2020score}. (a) The Monte-Carlo loss for each diffusion time, $\sigma^{2}(t)\Vert\mathbf{s}_{\bm{\theta}}(\mathbf{x}_{t},t)-\nabla\log{p_{0t}(\mathbf{x}_{t}\vert\mathbf{x}_{0})}\Vert_{2}^{2}$. (b) The Monte-Carlo loss for each diffusion time at various truncation times. (c) The importance distribution for various truncation times.}
\label{fig:monte_carlo_loss}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{importance_sampling_ver2.pdf}
\caption{Quartiles of the importance weighted Monte-Carlo time of VPSDE. Red dots represent the Q1/Q2/Q3/Q4 quantiles when truncated at $\tau=\epsilon=10^{-5}$. About $25\%$ and $50\%$ of the Monte-Carlo times are located in $[\epsilon,5\times 10^{-3}]$ and $[\epsilon,0.106]$, respectively. Green dots represent the Q0-Q5 quantiles when truncated at $\tau=0.1$. The importance weighted Monte-Carlo time with $\tau=0.1$ is distributed in a much more balanced way than under the truncation at $\tau=\epsilon$.}
\label{fig:importance_sampling}
\end{figure*}
\subsection{Monte-Carlo Estimation of Truncated Variational Bound with Importance Sampling}\label{sec:monte_carlo}
For every batch $\{\mathbf{x}_{0}^{(b)}\}_{b=1}^{B}$, the Monte-Carlo estimation of the variational bound in Inequality \eqref{eq:perturbed_nelbo} is $\mathcal{L}(\bm{\theta};g^{2},\epsilon)\approx\mathcal{\hat{L}}(\bm{\theta};g^{2},\epsilon)=\frac{1}{2B}\sum_{b=1}^{B}g^{2}(t^{(b)})\Vert\mathbf{s}_{\bm{\theta}}(\mathbf{x}_{t^{(b)}},t^{(b)})-\nabla\log{p_{0t^{(b)}}(\mathbf{x}_{t^{(b)}}\vert\mathbf{x}_{0})}\Vert_{2}^{2}$, up to a constant irrelevant to $\bm{\theta}$, where $\mathbf{x}_{t^{(b)}}=\mu(t^{(b)})\mathbf{x}_{0}+\sigma(t^{(b)})\bm{\epsilon}^{(b)}$, with $\{t^{(b)}\}_{b=1}^{B}$ drawn uniformly from $[\epsilon,T]$ and $\{\bm{\epsilon}^{(b)}\}_{b=1}^{B}$ being the corresponding Monte-Carlo samples of $\bm{\epsilon}^{(b)}\sim\mathcal{N}(0,\mathbf{I})$. Note that this Monte-Carlo estimation is computed from the analytic transition probability, as $\nabla\log{p_{0t^{(b)}}(\mathbf{x}_{t^{(b)}}\vert\mathbf{x}_{0})}=-\frac{\bm{\epsilon}^{(b)}}{\sigma(t^{(b)})}$.
In practice, \citet{song2021maximum,huang2021variational} apply importance sampling that minimizes the estimation variance to avoid suboptimal score training. From the theoretical observation that $\sigma^{2}(t)\mathbb{E}_{\mathbf{x}_{0},\mathbf{x}_{t}}\big[\Vert\nabla\log{p_{0t}(\mathbf{x}_{t}\vert\mathbf{x}_{0})}\Vert_{2}^{2}\big]=d$, where $d$ is the data dimension, we take the importance distribution to be $p_{iw}(t)=\frac{g^{2}(t)/\sigma^{2}(t)}{Z_{\epsilon}}1_{[\epsilon,T]}(t)$, where $Z_{\epsilon}=\int_{\epsilon}^{T}\frac{g^{2}(t)}{\sigma^{2}(t)}\diff t$. Then, the Monte-Carlo estimation of $\mathcal{L}(\bm{\theta};g^{2},\epsilon)$ becomes
\begin{align}
&\mathcal{L}(\bm{\theta};g^{2},\epsilon)\nonumber\\
&=\frac{Z_{\epsilon}}{2}\int_{\epsilon}^{T}p_{iw}(t)\sigma^{2}(t)\mathbb{E}\big[ \Vert \mathbf{s}_{\bm{\theta}}(\mathbf{x}_{t},t)-\nabla\log{p_{0t}(\mathbf{x}_{t}\vert\mathbf{x}_{0})} \Vert_{2}^{2} \big]\diff t\nonumber\\
&\approx\frac{Z_{\epsilon}}{2B}\sum_{b=1}^{B}\sigma^{2}(t_{iw}^{(b)})\left\Vert\mathbf{s}_{\bm{\theta}}\Big(\mathbf{x}_{t_{iw}^{(b)}},t_{iw}^{(b)}\Big)+\frac{\bm{\epsilon}^{(b)}}{\sigma(t_{iw}^{(b)})}\right\Vert_{2}^{2}\nonumber\\
&:=\mathcal{\hat{L}}_{iw}(\bm{\theta};g^{2},\epsilon),\label{eq:iw}
\end{align}
where $\{t_{iw}^{(b)}\}_{b=1}^{B}$ is the Monte-Carlo sample from the importance distribution, i.e., $t_{iw}^{(b)}\sim p_{iw}(t)\propto\frac{g^{2}(t)}{\sigma^{2}(t)}$.
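Sampling from $p_{iw}(t)\propto g^{2}(t)/\sigma^{2}(t)$ can be done generically by a numerical inverse CDF; the sketch below (the log-spaced grid, the trapezoid quadrature, and the default VPSDE schedule are implementation assumptions of ours — closed-form samplers exist for specific SDEs) illustrates how strongly the resulting times concentrate near $\epsilon$:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_importance_times(eps, T, g2_over_sigma2, n, grid=20_000):
    """Draw Monte-Carlo times from p_iw(t) ~ g^2(t)/sigma^2(t) on [eps, T]
    via a numerical inverse CDF built on a log-spaced grid."""
    t = np.geomspace(eps, T, grid)
    w = g2_over_sigma2(t)
    masses = 0.5 * (w[1:] + w[:-1]) * np.diff(t)   # trapezoid cell masses
    cdf = np.concatenate([[0.0], np.cumsum(masses)])
    cdf /= cdf[-1]
    return np.interp(rng.random(n), cdf, t)

def g2_over_sigma2_vp(t):
    # VPSDE with the assumed default beta schedule
    beta = 0.1 + t * (20.0 - 0.1)
    int_beta = 0.1 * t + 0.5 * (20.0 - 0.1) * t ** 2
    return beta / (1.0 - np.exp(-int_beta))

ts = sample_importance_times(1e-5, 1.0, g2_over_sigma2_vp, n=100_000)
# Because g^2/sigma^2 ~ 1/t near zero, the sampled times pile up near eps:
# roughly half of them land below t ~ 0.1, consistent with the quantiles
# quoted in the text.
```

Running this reproduces, in order of magnitude, the Q1/Q2 quantiles described for Figure \ref{fig:importance_sampling}.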
Despite the importance sampling, each Monte-Carlo sample loss is severely imbalanced, as in Figure \ref{fig:monte_carlo_loss}-(a). Also, since $p_{iw}(t)\rightarrow\infty$ as $t\rightarrow 0$ in the blue line of Figure \ref{fig:monte_carlo_loss}-(c), most of the importance weighted Monte-Carlo times are concentrated at $t\approx \epsilon$ in Figure \ref{fig:importance_sampling}. Therefore, two factors prohibit score estimation at large diffusion time: 1) the magnitude of each Monte-Carlo loss is imbalanced even after the importance sampling (Figure \ref{fig:monte_carlo_loss}-(a)); 2) most of the importance weighted Monte-Carlo times are around $t\approx \epsilon$ (Figure \ref{fig:importance_sampling}). Combining these two factors, the gradient signal of the score network is dominated by small diffusion time.
Previous works \cite{ho2020denoising} train the denoising score loss with the variance weighting, $\lambda(t)=\sigma^{2}(t)$, in order to alleviate the second factor. The importance distribution becomes $p_{iw}(t)=\frac{\lambda(t)}{\sigma^{2}(t)}\equiv 1$, so the importance-weighted Monte-Carlo times are sampled from the uniform distribution on $[\epsilon,T]$. However, training a diffusion model with the variance weighting leads to poor NLL because the optimization loss is no longer a variational bound of the log-likelihood.
\subsection{Soft Truncation}\label{sec:softrunc}
Soft Truncation proposes a training trick, other than the likelihood/variance weightings, that relieves the second factor of inaccurate score estimation. Let $\mathbb{P}(\tau)$ be a prior distribution for the truncation time. Then, in every mini-batch update, we optimize the diffusion model with $\mathcal{\hat{L}}_{iw}(\bm{\theta};g^{2},\tau)$ in Eq. \eqref{eq:iw} for a sampled $\tau\sim\mathbb{P}(\tau)$. In other words, for every batch $\{\mathbf{x}_{0}^{(b)}\}_{b=1}^{B}$, Soft Truncation optimizes the Monte-Carlo loss
\begin{align*}
\mathcal{\hat{L}}_{iw}(\bm{\theta};\lambda,\tau)=\frac{Z_{\tau}}{2B}\sum_{b=1}^{B}\sigma^{2}(t_{iw}^{(b)})\left\Vert\mathbf{s}_{\bm{\theta}}\Big(\mathbf{x}_{t_{iw}^{(b)}},t_{iw}^{(b)}\Big)-\frac{\bm{\epsilon}^{(b)}}{\sigma(t_{iw}^{(b)})}\right\Vert_{2}^{2}
\end{align*}
with $\{t_{iw}^{(b)}\}_{b=1}^{B}$ sampled from the importance distribution of $p_{iw,\tau}(t)=\frac{g^{2}(t)/\sigma^{2}(t)}{Z_{\tau}}1_{[\tau,T]}(t)$, where $Z_{\tau}:=\int_{\tau}^{T}\frac{g^{2}(t)}{\sigma^{2}(t)}\diff t$.
Soft Truncation resolves the second factor of immature training because the Monte-Carlo times are no longer concentrated at $\epsilon$. Figure \ref{fig:importance_sampling} illustrates the quantiles of the importance-weighted Monte-Carlo times with Soft Truncation under $\tau=\epsilon$ and $\tau=0.1$. The score network focuses more on large diffusion time when $\tau=0.1$, and as a consequence, the loss imbalance issue of Figure \ref{fig:monte_carlo_loss}-(a) is mitigated, as shown by the purple dots in Figure \ref{fig:monte_carlo_loss}-(b). This limited range of $\tau$ provides a chance to learn an accurate score network on large diffusion time with balanced Monte-Carlo losses. As $\tau$ is softened, the truncation level varies across mini-batch updates: see how the loss scales change by the blue, green, red, and purple dots according to the sampled $\tau$s in Figure \ref{fig:monte_carlo_loss}-(b). Eventually, the softened $\tau$ gives the score network a fair chance to learn from small as well as large diffusion time.
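The per-batch procedure can be sketched as follows. For illustration only, the prior over $\tau$ is taken uniform on $[\epsilon, T]$ here (the paper's power-law prior is introduced later), and `per_sample_loss` is a stand-in for the squared score-matching error $\Vert\mathbf{s}_{\bm{\theta}}(\mathbf{x}_{t},t)-\bm{\epsilon}/\sigma(t)\Vert_{2}^{2}$.

```python
import numpy as np

# Sketch of one Soft Truncation mini-batch update: draw tau ~ P(tau) first,
# then draw the importance-weighted times on [tau, T] instead of [eps, T].
# P(tau) is uniform here purely for illustration, and `per_sample_loss`
# stands in for the per-sample denoising score-matching error.
rng = np.random.default_rng(0)

def soft_truncation_step(x0_batch, per_sample_loss, g2, sigma2,
                         eps=1e-5, T=1.0, n_grid=4096):
    tau = rng.uniform(eps, T)                 # tau ~ P(tau) (illustrative)
    t_grid = np.linspace(tau, T, n_grid)
    w = g2(t_grid) / sigma2(t_grid)           # p_{iw,tau}(t), up to Z_tau
    dt = np.diff(t_grid)
    Z_tau = float(np.sum(0.5 * (w[1:] + w[:-1]) * dt))
    cdf = np.cumsum(w)
    cdf /= cdf[-1]
    t = t_grid[np.searchsorted(cdf, rng.uniform(size=len(x0_batch)))]
    losses = sigma2(t) * per_sample_loss(x0_batch, t)
    return tau, Z_tau / (2 * len(x0_batch)) * losses.sum()
```

The only change from the standard importance-sampled update is the lower limit of the time interval, which is resampled at every mini-batch.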
\subsection{Diffusion Model with General Weight}\label{sec:soft_truncation_general_weight}
In the original diffusion model, the Monte-Carlo loss estimation, $\mathcal{\hat{L}}(\bm{\theta};g^{2},\epsilon)$, is just a mini-batch approximation of the population loss, $\mathcal{L}(\bm{\theta};g^{2},\epsilon)$. In Soft Truncation, however, the target population loss, $\mathcal{L}(\bm{\theta};g^{2},\tau)$, depends on $\tau$, so the target loss itself becomes a random variable in $\tau$. Therefore, we derive the expected Soft Truncation loss to reveal the connection to the original diffusion model:
\begin{align*}
&\mathcal{L}_{ST}(\bm{\theta};g^{2},\mathbb{P}):=\mathbb{E}_{\mathbb{P}(\tau)}\big[\mathcal{L}(\bm{\theta};g^{2},\tau)\big]\\
&\quad=\frac{1}{2}\int_{\epsilon}^{T}\mathbb{P}(\tau)\int_{\tau}^{T}g^{2}(t)\mathbb{E}\big[\Vert\mathbf{s}_{\bm{\theta}}-\nabla\log{p_{0t}}\Vert_{2}^{2}\big]\diff t\diff\tau\\
&\quad=\frac{1}{2}\int_{\epsilon}^{T}g^{2}_{\mathbb{P}}(t)\mathbb{E}\big[\Vert\mathbf{s}_{\bm{\theta}}-\nabla\log{p_{0t}}\Vert_{2}^{2}\big]\diff t,
\end{align*}
up to a constant, where $g^{2}_{\mathbb{P}}(t)=\big(\int_{0}^{t}\mathbb{P}(\tau)\diff\tau\big)g^{2}(t)$, by exchanging the orders of the integrations. Therefore, we conclude that Soft Truncation reduces to a diffusion model with a general weight of $g_{\mathbb{P}}^{2}(t)$, see Appendix \ref{sec:general_weight}:
\begin{align}\label{eq:general_weight}
\mathcal{L}_{ST}(\bm{\theta};g^{2},\mathbb{P})=\mathcal{L}(\bm{\theta};g^{2}_{\mathbb{P}},\epsilon).
\end{align}
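The exchange of integration orders behind this identity can be checked numerically. In the sketch below, $f(t)$ is an arbitrary placeholder for $\tfrac{1}{2}g^{2}(t)\,\mathbb{E}\big[\Vert\mathbf{s}_{\bm{\theta}}-\nabla\log p_{0t}\Vert_{2}^{2}\big]$ and $P$ is an arbitrary density on $[\epsilon, T]$; neither is tied to the paper's actual quantities.

```python
import numpy as np

# Check: E_{P(tau)}[ int_tau^T f(t) dt ] == int_eps^T (int_eps^t P(r) dr) f(t) dt,
# i.e. the Fubini swap that turns L_ST(theta; g^2, P) into L(theta; g_P^2, eps).
eps, T, n = 1e-3, 1.0, 20_000
t = np.linspace(eps, T, n)
dt = np.diff(t)

def trapz(y):                       # trapezoidal integral over the grid
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * dt))

def cumtrapz(y):                    # cumulative integral, starting at 0
    return np.concatenate([[0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * dt)])

P = 1.0 / t
P /= trapz(P)                       # normalized prior density on [eps, T]
f = np.sin(3 * t) ** 2 + 0.1        # placeholder integrand

tail = trapz(f) - cumtrapz(f)       # tail(tau) = int_tau^T f(t) dt
lhs = trapz(P * tail)               # E_P[ int_tau^T f ]
rhs = trapz(cumtrapz(P) * f)        # int (int_eps^t P) f dt
```

The two sides agree up to quadrature error, mirroring the derivation above.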
\subsection{Diffusion Model with General Weight in View of Soft Truncation}
As explained in Section \ref{sec:soft_truncation_general_weight}, Soft Truncation is, in expectation, a diffusion model with a general weight. Conversely, this section analyzes a diffusion model with a general weight in view of Soft Truncation. Suppose we have a general weight $\lambda$. Theorem \ref{thm:1} proves the variational bound for the general weight.
\begin{theorem}\label{thm:1}
Suppose $\frac{\lambda(t)}{g^{2}(t)}$ is a nondecreasing and nonnegative absolutely continuous function on $[\epsilon,T]$ and zero on $[0,\epsilon)$. For the probability defined by
\begin{align*}
\mathbb{P}_{\lambda}([a,b])=\bigg[\int_{\text{max}(a,\epsilon)}^{b}\Big(\frac{\lambda(s)}{g^{2}(s)}\Big)'\diff s+\frac{\lambda(\epsilon)}{g^{2}(\epsilon)}1_{[a,b]}(\epsilon)\bigg]\bigg/Z,
\end{align*}
where $Z=\frac{\lambda(T)}{g^{2}(T)}$ is the normalizing constant, the variational bound of the general weighted diffusion loss becomes
\begin{align*}
&\mathbb{E}_{\mathbb{P}_{\lambda}(\tau)}\big[D_{KL}(p_{\tau}\Vert p_{\tau}^{\bm{\theta}})\big]\le D_{KL}(p_{T}\Vert\pi)\\
&\quad+\frac{1}{2Z}\int_{\epsilon}^{T}\lambda(t)\mathbb{E}_{\mathbf{x}_{t}}\big[\Vert\mathbf{s}_{\bm{\theta}}(\mathbf{x}_{t},t)-\nabla_{\mathbf{x}_{t}}\log{p_{t}(\mathbf{x}_{t})}\Vert_{2}^{2}\big]\diff t\\
&=\mathbb{E}_{\mathbb{P}_{\lambda}(\tau)}\big[D_{KL}(p_{T}\Vert\pi)+\mathcal{L}(\bm{\theta};g^{2},\tau)\big].
\end{align*}
\end{theorem}
See Appendix \ref{sec:proof} for the detailed statement and proof. Theorem \ref{thm:1} generalizes the original variational bound of $D_{KL}(p_{\epsilon}\Vert p_{\epsilon}^{\bm{\theta}})\le\frac{1}{2}\int_{\epsilon}^{T}g^{2}(t)\mathbb{E}_{\mathbf{x}_{t}}\big[\Vert\mathbf{s}_{\bm{\theta}}(\mathbf{x}_{t},t)-\nabla\log{p_{t}(\mathbf{x}_{t})}\Vert_{2}^{2}\big]\diff t+D_{KL}(p_{T}\Vert\pi)$ introduced in \citet{song2021maximum}, because the inequality collapses to the original variational bound when $\lambda(t)=cg^{2}(t)$ for any $c>0$\footnote{When $\lambda(t)=cg^{2}(t)$, the prior is $\mathbb{P}([a,b])=1_{[a,b]}(\epsilon)$, which means the prior distribution puts probability one at $\epsilon$.}.
The meaning of Soft Truncation becomes clearer in view of this generalized variational bound. Instead of training the general weighted diffusion loss directly, we optimize the \textit{truncated} variational bound, $\mathcal{L}(\bm{\theta};g^{2},\tau)$, which upper bounds the \textit{perturbed} KL divergence, $D_{KL}(p_{\tau}\Vert p_{\tau}^{\bm{\theta}})$. Therefore, Soft Truncation corresponds to Maximum Perturbed Likelihood Estimation (MPLE), where the perturbation level is a random variable. As Figure \ref{fig:nelbo}-(b) illustrates, the inequality $\mathbb{E}_{\mathbf{x}_{0}}\big[-\log{p_{0}^{\bm{\theta}}(\mathbf{x}_{0})}\big]\le\mathcal{L}(\bm{\theta};g^{2},\tau)+\mathbb{E}_{\mathbf{x}_{0},\mathbf{x}_{\tau}}\big[-\log{p(\mathbf{x}_{0}\vert\mathbf{x}_{\tau})}\big]$ is not tight if $\tau$ is insufficiently small, which leads to inaccurate score estimation on large time (Appendix \ref{sec:reconstruction_training}). As a consequence, we drop the reconstruction term during training, and the \textit{truncated} variational bound becomes tight to the \textit{perturbed} KL divergence regardless of $\tau$, as in Figure \ref{fig:nelbo}-(c). To sum up, Soft Truncation, or MPLE, is a natural training framework inspired by the equivalence of Eq. \eqref{eq:general_weight}. Note that MPLE reduces to MLE if $\lambda(t)=cg^{2}(t)$.
\subsection{Choice of Soft Truncation Prior}
We parametrize the prior distribution by
\begin{align}\label{eq:prior_example}
\mathbb{P}_{k}(\tau)=\frac{1/\tau^{k}}{Z_{k}}1_{[\epsilon,T]}(\tau)\propto\frac{1}{\tau^{k}},
\end{align}
where $Z_{k}=\int_{\epsilon}^{T}\frac{1}{\tau^{k}}\diff \tau$. Figure \ref{fig:monte_carlo_loss}-(c) illustrates the importance distribution of $\lambda_{\mathbb{P}_{k}}$ for varying $k$. From the definition in Eq. \eqref{eq:prior_example}, $\mathbb{P}_{k}(\tau)\rightarrow\delta_{\epsilon}(\tau)$ as $k\rightarrow \infty$, and this limiting delta prior corresponds to the original diffusion model with the likelihood weighting. Figure \ref{fig:monte_carlo_loss}-(c) shows that the importance distribution of $\mathbb{P}_{k}$ with finite $k$ interpolates between the likelihood weighting and the variance weighting.
With this simple prior form, we experimentally find that the sweet spot is $k=1.0$ in VPSDE and $k=2.0$ in VESDE when the emphasis is on sample quality. The choice of $k$ is empirically robust as long as we keep $k$ near the sweet spot. For VPSDE, when $k\approx 1.0$, the importance distribution in Figure \ref{fig:monte_carlo_loss}-(c) is nearly equal to that of the variance weighting, so Soft Truncation with $k\approx 1.0$ provides high sample fidelity. On the other hand, if $k$ is too small, no $\tau$ will be sampled near $\epsilon$, which hurts both sample generation and density estimation. We leave optimal prior selection as future work.
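Under this prior, the cumulative factor $F_{k}(t)=\int_{\epsilon}^{t}\mathbb{P}_{k}(\tau)\diff\tau$ that multiplies $g^{2}(t)$ in the effective weight $g^{2}_{\mathbb{P}_{k}}$ has a closed form, sketched below; the values of $\epsilon$ and $T$ are illustrative.

```python
import numpy as np

# Closed form of F_k(t) = int_eps^t P_k(tau) dtau for P_k(tau) ~ 1/tau^k,
# so Soft Truncation's effective weight is g_{P_k}^2(t) = F_k(t) g^2(t).
# eps and T below are illustrative choices.
def F_k(t, k, eps=1e-5, T=1.0):
    t = np.asarray(t, dtype=float)
    if k == 1.0:                          # log-uniform special case
        return np.log(t / eps) / np.log(T / eps)
    a, b = eps ** (1.0 - k), T ** (1.0 - k)
    return (a - t ** (1.0 - k)) / (a - b)

# As k grows, P_k piles up at eps, F_k(t) -> 1 away from eps, and the
# effective weight tends back to the pure likelihood weighting g^2(t).
ts = np.linspace(1e-3, 1.0, 100)
```

Small $k$ suppresses the weight at small $t$, while large $k$ recovers the likelihood weighting, matching the interpolation seen in Figure \ref{fig:monte_carlo_loss}-(c).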
\begin{table}[t]
\begin{minipage}[c]{0.5\textwidth}
\centering
\begin{subfigure}{0.7\linewidth}
\includegraphics[width=\linewidth]{FID_by_iteration.pdf}
\end{subfigure}
\vskip -0.1in
\captionof{figure}{Soft Truncation improves FID on CelebA trained with UNCSN++ (RVE).}
\label{fig:st_training}
\end{minipage}
\begin{minipage}[c]{0.5\textwidth}
\centering
\vskip 0.1in
\caption{Ablation study of Soft Truncation for various weightings on CIFAR-10 and IMAGENET32 with DDPM++ (VP).}
\label{tab:ablation_weighting_function}
\vskip -0.05in
\tiny
\begin{tabular}{l@{\hskip 0.3cm}l@{\hskip -0.1cm}c@{\hskip 0.1cm}c@{\hskip 0.2cm}c@{\hskip 0.15cm}c@{\hskip 0.15cm}c@{\hskip 0.2cm}c}
\toprule
& \multirow{2}{*}{Loss} & \multirow{2}{*}{\shortstack{Soft\\Truncation}} & \multicolumn{2}{c@{\hskip 0.4cm}}{NLL} & \multicolumn{2}{c@{\hskip 0.4cm}}{NELBO} & FID \\
&&& lossless & uniform & lossless & uniform & ODE \\\midrule
\multirow{4}{*}[-2pt]{CIFAR-10} & $\mathcal{L}(\bm{\theta};g^{2},\epsilon)$ & \xmark & \textbf{2.77} & 3.03 & \textbf{2.82} & 3.13 & 6.70 \\
& $\mathcal{L}(\bm{\theta};\sigma^{2},\epsilon)$ & \xmark & 2.95 & 3.21 & 2.97 & 3.34 & \textbf{3.90} \\
& $\mathcal{L}(\bm{\theta};g_{\mathbb{P}_{1}}^{2},\epsilon)$ & \xmark & 2.87 & 3.06 & 2.90 & 3.18 & 6.11 \\
& $\mathcal{L}_{ST}(\bm{\theta};g^{2},\mathbb{P}_{1})$ & \cmark & 2.82 & \textbf{3.01} & 2.83 & \textbf{3.08} & 3.96 \\\midrule
\multirow{4}{*}[-2pt]{IMAGENET32} & $\mathcal{L}(\bm{\theta};g^{2},\epsilon)$ & \xmark & 3.77 & 3.92 & 3.81 & 3.94 & 12.68 \\
& $\mathcal{L}(\bm{\theta};\sigma^{2},\epsilon)$ & \xmark & 3.82 & 3.95 & 3.86 & 4.00 & 9.22 \\
& $\mathcal{L}(\bm{\theta};g_{\mathbb{P}_{1}}^{2},\epsilon)$ & \xmark & 3.80 & 3.93 & 3.85 & 3.97 & 11.89 \\
& $\mathcal{L}_{ST}(\bm{\theta};g^{2},\mathbb{P}_{0.9})$ & \cmark & \textbf{3.75} & \textbf{3.90} & \textbf{3.78} & \textbf{3.91} & \textbf{8.42} \\
\bottomrule
\end{tabular}
\end{minipage}
\begin{minipage}[c]{0.5\textwidth}
\vskip 0.1in
\centering
\caption{Ablation study of Soft Truncation for various model architectures and diffusion SDEs on CelebA.}
\label{tab:ablation_architecture_sde}
\vskip -0.05in
\tiny
\begin{tabular}{l@{\hskip 0.2cm}l@{\hskip 0.2cm}l@{\hskip 0.1cm}c@{\hskip 0.2cm}c@{\hskip 0.2cm}c@{\hskip 0.1cm}c@{\hskip 0.2cm}c@{\hskip 0.2cm}c}
\toprule
\multirow{2}{*}{SDE} & \multirow{2}{*}{Model} & \multirow{2}{*}{Loss} & \multicolumn{2}{c@{\hskip 0.5cm}}{NLL} & \multicolumn{2}{c@{\hskip 0.4cm}}{NELBO} & \multicolumn{2}{c@{\hskip 0.3cm}}{FID} \\
& & & lossless & uniform & lossless & uniform & PC & ODE \\\midrule
\multirow{2}{*}{VE} & \multirow{2}{*}{NCSN++} & $\mathcal{L}(\bm{\theta};\sigma^{2},\epsilon)$ & - & 3.41 & - & 3.42 & 3.95 & -\\
& & $\mathcal{L}_{ST}(\bm{\theta};\sigma^{2},\mathbb{P}_{2})$ & - & 3.44 & - & 3.44 & 2.68 & -\\\midrule
\multirow{2}{*}{RVE} & \multirow{2}{*}{UNCSN++} & $\mathcal{L}(\bm{\theta};g^{2},\epsilon)$ & - & 2.01 & - & \textbf{2.01} & 3.36 & -\\
& & $\mathcal{L}_{ST}(\bm{\theta};g^{2},\mathbb{P}_{2})$ & - & \textbf{1.97} & - & 2.02 & \textbf{1.92} & -\\\midrule
\multirow{8}{*}[-9pt]{VP} & \multirow{2}{*}{DDPM++} & $\mathcal{L}(\bm{\theta};\sigma^{2},\epsilon)$ & 1.75 & 2.14 & 1.93 & 2.21 & 3.03 & 2.32 \\
& & $\mathcal{L}_{ST}(\bm{\theta};\sigma^{2},\mathbb{P}_{1})$ & 1.73 & 2.17 & 1.77 & 2.29 & 2.88 & \textbf{1.90}\\\cmidrule(lr){2-3}
& \multirow{2}{*}{UDDPM++} & $\mathcal{L}(\bm{\theta};\sigma^{2},\epsilon)$ & 1.76 & 2.11 & 1.87 & 2.20 & 3.23 & 4.72\\
& & $\mathcal{L}_{ST}(\bm{\theta};\sigma^{2},\mathbb{P}_{1})$ & 1.72 & 2.16 & 1.75 & 2.28 & 2.22 & 1.94\\\cmidrule(lr){2-3}
& \multirow{2}{*}{DDPM++} & $\mathcal{L}(\bm{\theta};g^{2},\epsilon)$ & \textbf{1.50} & 2.00 & \textbf{1.51} & 2.09 & 5.31 & 3.95\\
& & $\mathcal{L}_{ST}(\bm{\theta};g^{2},\mathbb{P}_{1})$ & 1.51 & 2.00 & \textbf{1.51} & 2.11 & 4.50 & 2.90\\\cmidrule(lr){2-3}
& \multirow{2}{*}{UDDPM++} & $\mathcal{L}(\bm{\theta};g^{2},\epsilon)$ & 1.61 & 1.98 & 1.75 & 2.12 & 4.65 & 3.98\\
& & $\mathcal{L}_{ST}(\bm{\theta};g^{2},\mathbb{P}_{1})$ & \textbf{1.50} & 2.00 & 1.53 & 2.10 & 4.45 & 2.97\\
\bottomrule
\end{tabular}
\end{minipage}
\vskip -0.1in
\end{table}
\begin{table}[t]
\begin{minipage}[c]{0.5\textwidth}
\centering
\caption{Ablation study of Soft Truncation for various $\epsilon$ on CIFAR-10 with DDPM++ (VP).}
\label{tab:ablation_epsilon}
\vskip -0.05in
\tiny
\begin{tabular}{lcc@{\hskip 0.3cm}c@{\hskip 0.3cm}c@{\hskip 0.3cm}c@{\hskip 0.3cm}c@{\hskip 0.3cm}c}
\toprule
\multirow{2}{*}{Loss} & \multirow{2}{*}{$\epsilon$} & \multicolumn{2}{c@{\hskip 0.3cm}}{NLL} & \multicolumn{2}{c@{\hskip 0.5cm}}{NELBO} & FID \\
&& lossless & uniform & lossless & uniform & ODE \\\midrule
\multirow{4}{*}{$\mathcal{L}(\bm{\theta};g^{2},\epsilon)$} & $10^{-2}$ & 9.17 & 4.64 & 9.21 & 4.69 & 38.82 \\
& $10^{-3}$ & 5.94 & 3.51 & 5.96 & 3.52 & 6.21 \\
& $10^{-4}$ & 3.78 & 3.05 & 3.78 & 3.08 & 6.33 \\
& $10^{-5}$ & \textbf{2.77} & 3.03 & \textbf{2.82} & 3.13 & 6.70 \\\midrule
\multirow{4}{*}{$\mathcal{L}_{ST}(\bm{\theta};g^{2},\mathbb{P}_{1})$} & $10^{-2}$ & 9.17 & 4.65 & 9.22 & 4.69 & 39.83 \\
& $10^{-3}$ & 5.94 & 3.51 & 5.97 & 3.52 & 5.14 \\
& $10^{-4}$ & 3.77 & 3.05 & 3.78 & 3.08 & 4.16 \\
& $10^{-5}$ & 2.82 & \textbf{3.01} & \textbf{2.82} & \textbf{3.08} & \textbf{3.96} \\
\bottomrule
\end{tabular}
\end{minipage}
\begin{minipage}[c]{0.5\textwidth}
\vskip 0.1in
\centering
\caption{Ablation study of Soft Truncation for various $\mathbb{P}_{k}$ on CIFAR-10 trained with DDPM++ (VP).}
\label{tab:ablation_prior}
\vskip -0.05in
\tiny
\begin{tabular}{lcccccc}
\toprule
\multirow{2}{*}{Loss} & \multicolumn{2}{c@{\hskip 0.2cm}}{NLL} & \multicolumn{2}{c@{\hskip 0.1cm}}{NELBO} & FID \\
& lossless & uniform & lossless & uniform & ODE \\\midrule
$\mathcal{L}_{ST}(\bm{\theta};g^{2},\mathbb{P}_{0})$ & 3.07 & 3.24 & 3.11 & 3.39 & 6.27 \\
$\mathcal{L}_{ST}(\bm{\theta};g^{2},\mathbb{P}_{0.8})$ & 2.84 & 3.03 & 2.84 & \textbf{3.05} & 3.61 \\
$\mathcal{L}_{ST}(\bm{\theta};g^{2},\mathbb{P}_{0.9})$ & 2.83 & 3.03 & 2.83 & 3.13 & \textbf{3.45} \\
$\mathcal{L}_{ST}(\bm{\theta};g^{2},\mathbb{P}_{1})$ & 2.82 & \textbf{3.01} & 2.83 & 3.08 & 3.96 \\
$\mathcal{L}_{ST}(\bm{\theta};g^{2},\mathbb{P}_{1.1})$ & 2.81 & 3.02 & 2.82 & 3.09 & 3.98 \\
$\mathcal{L}_{ST}(\bm{\theta};g^{2},\mathbb{P}_{1.2})$ & 2.83 & 3.03 & 2.85 & 3.09 & 3.98 \\
$\mathcal{L}_{ST}(\bm{\theta};g^{2},\mathbb{P}_{2})$ & 2.79 & \textbf{3.01} & 2.82 & 3.10 & 6.31 \\
$\mathcal{L}_{ST}(\bm{\theta};g^{2},\mathbb{P}_{3})$ & 2.77 & 3.02 & \textbf{2.81} & 3.09 & 6.54 \\\midrule
$\mathcal{L}_{ST}(\bm{\theta};g^{2},\mathbb{P}_{\infty})$ & \multirow{2}{*}{\textbf{2.77}} & \multirow{2}{*}{\textbf{3.01}} & \multirow{2}{*}{2.82} & \multirow{2}{*}{3.09} & \multirow{2}{*}{6.70} \\
$=\mathcal{L}(\bm{\theta};g^{2},\epsilon)$ &&&&&&\\
\bottomrule
\end{tabular}
\end{minipage}
\begin{minipage}[c]{0.5\textwidth}
\vskip 0.1in
\centering
\caption{Ablation study of Soft Truncation for CIFAR-10 trained with DDPM++ when a diffusion is combined with a normalizing flow in INDM \cite{kim2022maximum}.}
\label{tab:ablation_indm}
\vskip -0.05in
\tiny
\begin{tabular}{lccc}
\toprule
\multirow{2}{*}{Loss} & NLL & NELBO & FID \\
& \multicolumn{2}{c}{uniform} & ODE \\\midrule
INDM (VP, NLL) & \textbf{2.98} & \textbf{2.98} & 6.01 \\
INDM (VP, FID) & 3.17 & 3.23 & \textbf{3.61} \\
INDM (VP, NLL) + ST & 3.01 & 3.02 & 3.88 \\
\bottomrule
\end{tabular}
\end{minipage}
\end{table}
\begin{table*}
\centering
\caption{Performance comparisons on benchmark datasets. The boldfaced numbers present the best performance, and the underlined numbers present the second-best performance.}
\label{tab:performances}
\begin{adjustbox}{max width=\textwidth}
\begin{tabular}{lccccccccccc}
\toprule
\multirow{3}{*}{Model} & \multicolumn{3}{c}{CIFAR10} & \multicolumn{3}{c}{ImageNet32} & \multicolumn{2}{c}{CelebA} & CelebA-HQ & \multicolumn{2}{c}{STL-10} \\
& \multicolumn{3}{c}{$32\times 32$} & \multicolumn{3}{c}{$32\times 32$} & \multicolumn{2}{c}{$64\times 64$} & $256\times 256$ & \multicolumn{2}{c}{$48\times 48$} \\
& NLL ($\downarrow$) & FID ($\downarrow$) & IS ($\uparrow$) & NLL & FID & IS & NLL & FID & FID & FID & IS \\\midrule
\multicolumn{12}{l}{\textbf{Likelihood-free Models}}\\
StyleGAN2-ADA+Tuning \citep{karras2020training} & - & 2.92 & \underline{10.02} & - & - & - & - & - & - & - & - \\
Styleformer \citep{park2021styleformer} & - & 2.82 & 9.94 & - & - & - & - & 3.66 & - & \underline{15.17} & \underline{11.01} \\
\multicolumn{12}{l}{\textbf{Likelihood-based Models}}\\
ARDM-Upscale 4 \citep{hoogeboom2021autoregressive} & \underline{2.64} & - & - & - & - & - & - & - & - & - & - \\
VDM \citep{kingma2021variational} & 2.65 & 7.41 & - & 3.72 & - & - & - & - & - & - & - \\
LSGM (FID) \citep{vahdat2021score} & 3.43 & \textbf{2.10} & - & - & - & - & - & - & - & - & - \\
NCSN++ cont. (deep, VE) \citep{song2020score} & 3.45 & \underline{2.20} & 9.89 & - & - & - & 2.39 & 3.95 & \underline{7.23} & - & - \\
DDPM++ cont. (deep, sub-VP) \citep{song2020score} & 2.99 & 2.41 & 9.57 & - & - & - & - & - & - & - & - \\
Efficient-VDVAE \citep{https://doi.org/10.48550/arxiv.2203.13751} & 2.87 & - & - & \textbf{3.58} & - & - & 1.83 & - & - & - & - \\
DenseFlow-74-10 \citep{grcic2021densely} & 2.98 & 34.90 & - & \underline{3.63} & - & - & 1.99 & - & - & - & - \\
PNDM \citep{liu2022pseudo} & - & 3.26 & - & - & - & - & - & 2.71 & - & - & - \\
ScoreFlow (deep, sub-VP, NLL) \citep{song2021maximum} & 2.81 & 5.40 & - & 3.76 & 10.18 & - & - & - & - & - & - \\
ScoreFlow (VP, FID) \citep{song2021maximum} & 3.04 & 3.98 & - & 3.84 & \textbf{8.34} & - & - & - & - & - & - \\
Improved DDPM ($L_{simple}$) \citep{nichol2021improved} & 3.37 & 2.90 & - & - & - & - & - & - & - & - & - \\\midrule
UNCSN++ (RVE) + ST & 3.04 & 2.33 & \textbf{10.11} & - & - & - & 1.97 & \underline{1.92} & \textbf{7.16} & \textbf{7.71} & \textbf{13.43} \\
DDPM++ (VP, FID) + ST & 2.85 & 2.47 & 9.78 & - & - & - & \underline{1.73} & \textbf{1.90} & - & - & - \\
DDPM++ (VP, NLL) + ST & \textbf{2.63} & 4.02 & 9.17 & 3.75 & \underline{8.42} & \textbf{11.82} & \textbf{1.51} & 2.90 & - & - & - \\
\bottomrule
\end{tabular}
\end{adjustbox}
\end{table*}
\section{Experiments}
This section empirically studies our suggestions on benchmark datasets, including CIFAR-10 \citep{krizhevsky2009learning}, IMAGENET $32\times 32$ \cite{van2016pixel}, STL-10 \citep{coates2011analysis}\footnote{We downsize the dataset from $96\times 96$ to $48\times 48$ following \citet{jiang2021transgan, park2021styleformer}.}, CelebA \citep{liu2015deep} $64\times 64$, and CelebA-HQ \citep{karras2017progressive} $256\times 256$.
Soft Truncation is a universal training technique independent of model architectures and diffusion strategies. In the experiments, we test Soft Truncation on various architectures, including vanilla NCSN++, DDPM++, Unbounded NCSN++ (UNCSN++), and Unbounded DDPM++ (UDDPM++). Also, Soft Truncation is applied to various diffusion SDEs, such as VESDE, VPSDE, and Reverse VESDE (RVESDE). Appendix \ref{sec:implementation_details} enumerates the specifications of the score architectures and SDEs.
When computing NLL/NELBO, the negative log-likelihood includes the reconstruction term by $\mathbb{E}_{\mathbf{x}_{0}}[-\log{p_{0}^{\bm{\theta}}(\mathbf{x}_{0})}]=\mathbb{E}_{\mathbf{x}_{\epsilon}}[-\log{p_{\epsilon}^{\bm{\theta}}(\mathbf{x}_{\epsilon})}]+\mathbb{E}_{\mathbf{x}_{0},\mathbf{x}_{\epsilon}}[-\log{p(\mathbf{x}_{0}\vert\mathbf{x}_{\epsilon})}]$, but previous continuous diffusion models \cite{song2020score, song2021maximum} reported NLL/NELBO \textit{without} the reconstruction term. This reconstruction term becomes significant as $\epsilon$ increases, so we report NLL and NELBO \textit{with} the reconstruction term. We compute the reconstruction term following either \citet{ho2020denoising} or \citet{kim2022maximum}. Specifically, \citet{ho2020denoising} computes $\log{p(\mathbf{x}_{0}\vert\mathbf{x}_{\epsilon})}$ using lossless compression (\textit{lossless} in tables), and \citet{kim2022maximum} computes it using variational inference with uniformly dequantized data (\textit{uniform} in tables). For NLL, we compute the first term by the instantaneous change-of-variables formula \cite{song2020score}. However, unlike \citet{song2020score, song2021maximum}, who misleadingly compute the first term as $\log{p_{\epsilon}^{\bm{\theta}}(\mathbf{x}_{0})}$, we compute it as $\log{p_{\epsilon}^{\bm{\theta}}(\mathbf{x}_{\epsilon})}$, where the initial data is perturbed up to time $\epsilon$; the two differ significantly when $\epsilon$ (or $\sigma_{min}$) is not small enough. For sample generation, we use either the Predictor-Corrector (PC) sampler or the Ordinary Differential Equation (ODE) sampler \cite{song2020score}.
\textbf{FID by Iteration} Figure \ref{fig:st_training} illustrates the FID score \cite{heusel2017gans} ($y$-axis) by training steps ($x$-axis). It shows that $\mathcal{L}_{ST}(\bm{\theta};g^{2},\mathbb{P}_{2})$ reaches the FID of $\mathcal{L}(\bm{\theta};g^{2},\epsilon)$ after 150k iterations and surpasses the original diffusion loss afterwards.
\textbf{Ablation Studies} Tables \ref{tab:ablation_weighting_function}, \ref{tab:ablation_architecture_sde}, \ref{tab:ablation_epsilon}, and \ref{tab:ablation_prior} show ablation studies on various weighting functions, model architectures, SDEs, $\epsilon$s, and priors, respectively. See Appendix \ref{sec:full_tables} for the full tables. Table \ref{tab:ablation_weighting_function} shows that Soft Truncation performs as well as the likelihood weighting with respect to NLL/NELBO, and as well as the variance weighting with respect to FID. Additionally, Table \ref{tab:ablation_weighting_function} clarifies the effect of Soft Truncation by comparing $\mathcal{L}(\bm{\theta};g_{\mathbb{P}_{1}}^{2},\epsilon)$ with $\mathcal{L}_{ST}(\bm{\theta};g^{2},\mathbb{P}_{1})$. From Eq. \eqref{eq:general_weight}, the Soft Truncation loss, $\mathcal{L}_{ST}(\bm{\theta};g^{2},\mathbb{P}_{1})$, equals the expected loss of $\mathcal{L}(\bm{\theta};g_{\mathbb{P}_{1}}^{2},\epsilon)$ with $g_{\mathbb{P}_{1}}^{2}(t)=(\int_{0}^{t}\mathbb{P}_{1}(\tau)\diff\tau)g^{2}(t)$, but the results in Table \ref{tab:ablation_weighting_function} show that optimizing this general weighted loss with Soft Truncation is advantageous for both density estimation and sample generation.
Table \ref{tab:ablation_architecture_sde} provides two implications. First, Soft Truncation particularly boosts FID while maintaining density estimation performance in every setting. Second, Soft Truncation also applies to the variance-weighted loss: $\mathcal{L}_{ST}(\bm{\theta};\sigma^{2},\mathbb{P})$ updates the network by the variance weighting loss with a softened sample range, $[\tau,T]$. Table \ref{tab:ablation_architecture_sde} shows that combining the variance weighting with Soft Truncation further improves FID.
Table \ref{tab:ablation_epsilon} shows contrasting trends for the likelihood weight and Soft Truncation. The experiment with the likelihood weight shows an inverse correlation between NLL and FID, whereas Soft Truncation monotonically reduces both NLL and FID as $\epsilon$ decreases. This implies that Soft Truncation largely removes the effort of $\epsilon$ search. Table \ref{tab:ablation_prior} studies the effect of $k$ in VPSDE. It shows that Soft Truncation significantly improves FID over the experiment of $\mathcal{L}(\bm{\theta};g^{2},\epsilon)$ on the range $0.8\le k\le 1.2$. Finally, Table \ref{tab:ablation_indm} shows that Soft Truncation is indeed beneficial for sample quality in a combined model \cite{kim2022maximum}, where a diffusion model and a normalizing flow are merged to construct a data-adaptive nonlinear diffusion.
\textbf{Quantitative Comparison to SOTA} Table \ref{tab:performances} compares Soft Truncation (ST) against the current best generative models. It shows that Soft Truncation achieves state-of-the-art performance on CIFAR-10, CelebA, CelebA-HQ, and STL-10. In particular, we have experimented thoroughly on the CelebA dataset, and we find that Soft Truncation surpasses the previous best NLL/FID scores by a large margin. In FID, Soft Truncation with DDPM++ achieves 1.90, improving on the previous best FID of 2.92 by DDGM. In NLL, Soft Truncation with DDPM++ achieves 1.51, improving on the previous best NLL of 1.86 by CR-NVAE. Also, Soft Truncation nearly halves the previous best FID on STL-10.
\section{Conclusion}
This paper proposes a generally applicable training method for continuous diffusion models. The suggested method, Soft Truncation, is motivated by the observation that density estimation mostly relies on small diffusion time, while sample generation is mostly shaped by large diffusion time. However, small diffusion time dominates the Monte-Carlo estimation of the loss function, and this imbalanced contribution prevents accurate score learning on large diffusion time. Soft Truncation softens the truncation level at each mini-batch update, and this simple modification is connected to the general weighted diffusion loss and to Maximum Perturbed Likelihood Estimation.
\nocite{langley00}
\section{Introduction}
\label{sec:intro}
The study of entanglement entropy has been deeply fascinating and instructive in both high energy \cite{Ryu:2006bv} and condensed matter physics \cite{Laflorencie:2015eck}. In particular, it appears to have deep connections with geometric and topological properties in both disciplines \cite{Ryu:2006bv, 2007JSMTE..08...24H, Kitaev:2005dm}. Here we are interested in extending this study in the specific realm of Chern-Simons gauge theory using topological techniques.
Chern-Simons gauge theory is a three-dimensional TQFT with non-local gauge invariant observables, the Wilson loop operators \cite{Witten89}. Performing a path integral on a 3-manifold with boundary, one obtains a vector in the space of conformal blocks of the associated WZW model on the boundary 2-manifold(s). In particular, a path integral on the link complement determines a ``link state'':
$$| \mathcal{L} \rangle = \sum_{\alpha_{1}, \cdots, \alpha_{m}} C(\alpha_{1}, \cdots, \alpha_{m}) | \alpha_{1} \rangle \otimes \cdots \otimes | \alpha_{m} \rangle,$$
where $\mathcal{L}$ is a $m$-component link, and $\alpha_{i}$'s are integrable representations of the gauge group. Each vector $| \alpha_{i} \rangle $ belongs to the 2d Hilbert space associated to a torus $H_{T^{2}}$, which is fixed by a path integral on a solid torus with a Wilson loop colored in $\alpha_{i}$, as in Figure \ref{fig:solidtorus}. The inner product of the Hilbert space $H_{T^{2}}$ is simply $\langle \alpha_{i} | \alpha_{j} \rangle = \delta_{\alpha_{i},\alpha_{j}}$, and thus, $(\langle \alpha_{1} | \otimes \cdots \otimes \langle \alpha_{m}| ) | \mathcal{L} \rangle = C(\alpha_{1},\cdots,\alpha_{m})$. Topologically, $(\langle \alpha_{1} | \otimes \cdots \otimes \langle \alpha_{m}| ) | \mathcal{L} \rangle$ stands for gluing $m$ solid tori with Wilson loops colored in $\alpha_{i}$ back into the link complement, so $C(\alpha_{1},\cdots,\alpha_{m})$ is nothing but a ``colored'' link invariant of $\mathcal{L}$. Once these colored link invariants of $\mathcal{L}$ are known for all possible colorings, one can compute the density matrix and the entanglement entropy of $\mathcal{L}$ and check whether the link state is an entangled state or not \cite{BFLP,Salton:2016qpp}.
\begin{figure} [htb]
\centering
\includegraphics{solidtorus}
\caption{A solid torus (colored in yellow) contains a Wilson loop (colored in red), which wraps its non-contractible cycle. When the Wilson loop carries a representation $\alpha_{i}$ of the gauge group, the path integral fixes a vector $|\alpha_{i} \rangle$ in $H_{T^{2}}$.}
\label{fig:solidtorus}
\end{figure}
The aim of our paper is to reinforce \cite{BFLP,Salton:2016qpp} in the case where the gauge group is $SU(2)$, with a technique to compute the colored link invariants $C(\alpha_{1}, \cdots, \alpha_{m})$ for any link $\mathcal{L}$ and any given coloring. To clarify, \cite{BFLP,Salton:2016qpp} compute the link states and study their entanglement properties when the given links are torus links (so that their colored link invariants can be written as a product of $S$ and $T$ matrices of the associated WZW model) or twist links (in which case Habiro's formula works nicely). In this paper, we use junctions of Wilson lines and the ``symmetric web'' relations \cite{RoseTub}, by which one can systematically compute the colored invariants for more general classes of knots/links. This allows for the construction of topologically interesting states and sets of entanglement entropies via topological techniques. Going one step further, we propose a conjecture in which the entanglement data is translated into topological data: namely, if the link state factorizes for all colorings $\alpha_{i}$, then the corresponding link itself is reducible, \textit{i.e.}, a union of two unlinked sub-links. In terms of colored Jones polynomials, we can phrase the conjecture as follows:
\begin{conj}
Given an $m$-component link $\mathcal{L}$, suppose there exist two sub-links $\mathcal{L}_{1}$ and $\mathcal{L}_{2}$, with $i$ and $(m-i)$ components respectively. If the two sub-links satisfy
$$J_{\alpha_{1}, \cdots, \alpha_{m}}(\mathcal{L}) = J_{\alpha_{1}, \cdots, \alpha_{i}}(\mathcal{L}_{1})J_{\alpha_{i+1}, \cdots, \alpha_{m}}(\mathcal{L}_{2})$$
for all colorings $\alpha_{1}, \cdots, \alpha_{m}$, then $\mathcal{L}_{1}$ and $\mathcal{L}_{2}$ are unlinked.
\end{conj}
\section{Review: the link states and entanglement entropy}
First, let's recall the definition of entanglement entropy. Recall that the entanglement entropy $S$ of a system $A$ is given by
$$S_A=-\mathrm{Tr} \rho_A \log \rho_A,$$
where $\rho_A$ is the reduced density matrix corresponding to system $A$. The entanglement entropy quantifies the amount of correlation, whether classical or genuinely quantum, that $A$ shares with all other systems. Note that $S_A$ is zero when $\rho_A$ is pure and nonzero when $\rho_A$ is mixed. In general, the calculation of the entanglement entropy is somewhat involved, particularly for large Hilbert spaces, for which taking the logarithm of a high-dimensional matrix is computationally expensive.
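For a pure state of a bipartite system with coefficient matrix $C$ (for a link state, the tensor $C(\alpha_{1},\cdots,\alpha_{m})$ reshaped into a matrix along a chosen bipartition of the components), the computation can be sketched as follows; this is a generic numerical illustration, not tied to a particular link.

```python
import numpy as np

# Sketch: entanglement entropy of subsystem A for a pure bipartite state
# |psi> = sum_{a,b} C[a,b] |a>|b>, from rho_A = C C^dagger / Tr(C C^dagger).
# For a link state, C is the tensor of colored invariants reshaped into a
# matrix along the chosen bipartition of the link components.
def entanglement_entropy(C):
    C = np.asarray(C, dtype=complex)
    rho = C @ C.conj().T                # unnormalized reduced density matrix
    rho /= np.trace(rho).real
    p = np.linalg.eigvalsh(rho)         # eigenvalues of rho_A (real, Hermitian)
    p = p[p > 1e-12]                    # drop numerical zeros
    return float(-(p * np.log(p)).sum())
```

A factorized coefficient matrix $C = u v^{T}$ gives $S_A = 0$, while a coefficient matrix proportional to a unitary gives the maximal value $\log(\dim)$.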
Next, let us briefly review \cite{Witten89}. We start with $SU(2)$ Chern-Simons theory at level $k$ on a 3-manifold $M_{3}$:
$$S_{CS} = \frac{k}{4\pi} \mathrm{Tr} \int_{M_{3}} A \wedge dA + \frac{2}{3}A \wedge A \wedge A,$$
where $A$ is a $su(2)$-valued one form on $M_{3}$. When $M_{3}$ is a closed 3-manifold, the partition function $Z(M_{3}) = \int [\mathcal{D}A] e^{i S_{CS}}$ defines a topological 3-manifold invariant. We can also introduce gauge invariant observables, the famous Wilson loops defined on a closed path $C$ in $M_{3}$:
$$W_{R}(C) = \mathrm{Tr}_{R} \bigg[ \mathcal{P} \oint_{C} e^{i A} \bigg],$$
where $R$ is an irreducible representation of $SU(2)$. When $M_{3}$ is a 3-sphere and $R = \square$ (the fundamental representation of $SU(2)$), the expectation value of Wilson loops coincides with the Jones polynomials \cite{Witten89}:
\begin{equation}
\langle W_{\square}(C) \rangle = \dfrac{\int [\mathcal{D}A] W_{\square}(C) e^{i S_{CS}}}{\int [\mathcal{D}A] e^{i S_{CS}}} = J_{\square}(C).
\label{eqn:Jones}
\end{equation}
Equation \ref{eqn:Jones} holds because $\langle W_{\square}(unknot) \rangle$ equals $J_{\square}(unknot)$ and the skein relation holds. Indeed, gluing two solid tori via a modular S-transform on the boundary, one obtains a 3-sphere. When only one of the tori has a Wilson loop colored in $\square$, the path integral on these solid tori fixes vectors $|\square \rangle$ and $|0 \rangle$ in the associated Hilbert space $H_{T^{2}}$. Then, the expectation value of the Wilson loop can be written as:
$$\langle W_{\square}(unknot) \rangle = \frac{\langle 0 | S | \square \rangle}{\langle 0 | S | 0 \rangle } = \frac{S_{0\square}}{S_{00}} = J_{\square}(unknot),$$
where $S_{ij}$ is the S-matrix element of the $\widehat{su(2)}_{k}$ WZW model on the torus. Here, $i, j$ stand for the integrable representations of the affine Lie algebra $\widehat{su(2)}_{k}$, which are in one-to-one correspondence with a basis of the space of conformal blocks $H_{T^{2}}$. When the gauge group is $SU(2)$ at level $k$, the integrable representations are the spin-$j$ representations with $j = 0, 1/2, \cdots, k/2$. Thus, $H_{T^{2}}$ is spanned by the vectors $|0 \rangle, |\tfrac{1}{2}\rangle, \cdots, |\tfrac{k}{2} \rangle$, and $|\square\rangle = |\tfrac{1}{2}\rangle$ in our notation.
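For concreteness, the modular data used here is easy to generate numerically. The sketch below (our own illustration) builds the $\widehat{su(2)}_{k}$ S-matrix from the standard expression $S_{j_{1}j_{2}} = \sqrt{2/(k+2)}\,\sin\big((2j_{1}+1)(2j_{2}+1)\pi/(k+2)\big)$, also quoted later for the Hopf link state, and recovers the unknot expectation value $S_{0\square}/S_{00}$:

```python
import numpy as np

def s_matrix(k):
    """Modular S-matrix of su(2)_k; rows/columns indexed by j = 0, 1/2, ..., k/2."""
    spins = np.arange(k + 1) / 2
    return np.sqrt(2 / (k + 2)) * np.sin(
        np.outer(2 * spins + 1, 2 * spins + 1) * np.pi / (k + 2))

k = 2
S = s_matrix(k)
# <W_box(unknot)> = S_{0,1/2} / S_{0,0}, the quantum dimension of the spin-1/2 rep
unknot_box = S[0, 1] / S[0, 0]
```

At $k=2$ this gives $\sqrt{2}$, the quantum dimension $[2]$ at $q=i$; the matrix is real, symmetric and squares to the identity, as expected for the self-dual representations of $su(2)$.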
\begin{figure} [htb]
\centering
\includegraphics{skein}
\caption{The skein relation among three Wilson lines in $D^{3}$. $N$ is the rank of the gauge group, and all three Wilson lines are in the fundamental representation ($\square$) and canonically framed.}
\label{fig:skein}
\end{figure}
Next, consider the local relation among three Wilson lines shown in Figure \ref{fig:skein}. The three Wilson lines are lying in a closed three-ball $D^{3}$, and they are related to each other by half-twist(s) along the vertical direction. Performing a path integral on $D^{3}$ containing any one of the above three Wilson lines would fix a vector in the Hilbert space $H_{S^{2};\square,\square,\bar{\square},\bar{\square}}$ associated to the punctured boundary 2-sphere. The Hilbert space is 2-dimensional by the charge conservation argument, so the three Wilson lines shown in Figure \ref{fig:skein} must satisfy a linear relation in this 2d Hilbert space. The coefficients are fixed by studying the action of the half-twist on $S^{2}$. Since the Wilson lines colored in $\square$ satisfy $\langle W_{\square}(unknot) \rangle = J_{\square}(unknot)$ and the skein relation, we may repeatedly apply the skein relation until no crossing remains. Then, $\langle W_{\square}(unknot) \rangle = J_{\square}(unknot)$ determines the expectation value of the original Wilson loop $C$, and Equation \ref{eqn:Jones} holds.
The space $H_{T^{2}}$ is also equipped with a metric and fusion coefficients:
$$\langle i | j \rangle = \delta_{ij}, \quad \langle i | j ,k \rangle = N_{ijk}.$$
Topologically, the first equation corresponds to gluing two solid tori, each containing a Wilson line in representation $i$ and $j$, respectively. As a result, one gets the 3-manifold $S^{1} \times S^{2}$ containing two Wilson lines in $i,j$ wrapping the $S^{1}$ direction. The LHS is the partition function of this configuration. Now, this partition function is nothing but the trace on the Hilbert space associated to $S^{2}$ with two punctures decorated by $i$ and $j$. The charge conservation argument immediately tells us that $i$ and $j$ must be dual to each other. For us, $i$ and $j$ are spin representations, and thus it is enough to write this condition as $\delta_{ij}$. The second equation also corresponds to gluing two solid tori, but now one solid torus contains one Wilson line in the representation $i$, while the other contains two Wilson lines in $j$ and $k$. Again, the resultant 3-manifold is $S^{1} \times S^{2}$, and the partition function is nothing but the trace on the Hilbert space associated to $S^{2}$ with three punctures decorated by $i,j,k$. From the charge conservation argument, we see that the RHS must be the fusion coefficient among the spin representations $i,j$ and $k$.
Next, let us briefly recall the central ideas of \cite{BFLP}. Consider Wilson loop operators supported on an $m$-component link $\mathcal{L} = \bigcup_{i=1}^{m} C_{i}$. Each component $C_{i}$ is colored by an irreducible representation $R_{i}$ of $SU(2)$, and the link lies in $S^{3}$. The complement of $\mathcal{L}$, $\mathcal{L}^{c} = S^{3}\setminus (D \times \mathcal{L})$, is obtained by removing a small tubular neighborhood $D \times \mathcal{L}$ of the link in $S^{3}$. Since $\mathcal{L}^{c}$ is a 3-manifold whose boundary is a disjoint union of $m$ tori, a path integral on $\mathcal{L}^{c}$ fixes a vector in $(H_{T^{2}})^{\otimes m}$:
$$|\mathcal{L} \rangle = \sum_{\alpha_{1}, \cdots, \alpha_{m}} C(\alpha_{1},\cdots,\alpha_{m}) | \alpha_{1} \rangle \otimes \cdots \otimes |\alpha_{m} \rangle,$$
where $\alpha_{i}$'s are the spin representations which span $H_{T^{2}}$. Taking an inner product with a fixed vector $\langle \alpha_{1} | \otimes \cdots \otimes \langle \alpha_{m} |$, we are effectively gluing $m$ solid tori back into $S^{3}$, but this time each solid torus $D \times C_{i}$ contains a Wilson line colored in $\alpha_{i}$. As a result, on LHS we get the expectation value $\langle W_{\alpha_{1}, \cdots, \alpha_{m}}(\mathcal{L}) \rangle$. Since the metric on $H_{T^{2}}$ is nothing but a Kronecker delta symbol, we get on the RHS the coefficient $C(\alpha_{1},\cdots,\alpha_{m})$. Thus, we see that the link state is nothing but the sum over the basis vectors with the expectation value of Wilson loops as the coefficients:
$$|\mathcal{L} \rangle = \sum_{\alpha_{1}, \cdots, \alpha_{m}} \langle W_{\alpha_{1},\cdots,\alpha_{m}}(\mathcal{L})\rangle |\alpha_{1} \rangle \otimes \cdots \otimes |\alpha_{m} \rangle.$$
Once the colored link invariants are known, we can explicitly write down the $m$-partite entangled state corresponding to $\mathcal{L}$. Then, the entanglement structure of $|\mathcal{L}\rangle$ can be studied by computing its (reduced) density matrix, entanglement entropy, or entanglement negativity. In \cite{BFLP}, several examples were provided, including the triple Hopf link $2^{2}_{1} + 2^{2}_{1}$ and the Borromean ring $6^{3}_{2}$ (both in Rolfsen notation). The corresponding link states can be explicitly written in terms of modular $S$ and $T$ matrices and quantum integers as follows, the expectation values in this case being colored Jones polynomials:
\begin{gather*}
| 2^{2}_{1} + 2^{2}_{1} \rangle = \sum_{j_{1},j_{2},j_{3}} \frac{S_{j_{1}j_{2}}S_{j_{2}j_{3}}}{S_{0j_{2}}} | j_{1}\rangle \otimes |j_{2} \rangle \otimes |j_{3} \rangle, \quad S_{j_{1}j_{2}} = \sqrt{\frac{2}{k+2}}\sin \big( \frac{(2j_{1}+1)(2j_{2}+1) \pi}{k+2} \big), \\
| 6^{3}_{2} \rangle = \sum_{j_{1},j_{2},j_{3}} \bigg[ \sum_{i=0}^{\min(j_{1},j_{2},j_{3})} (-1)^{i}(q^{1/2}-q^{-1/2})^{4i} \frac{[2j_{1}+i+1]![2j_{2}+i+1]![2j_{3}+i+1]![i]![i]!}{[2j_{1}-i]![2j_{2}-i]![2j_{3}-i]![2i+1]![2i+1]!} \bigg] | j_{1}\rangle \otimes |j_{2} \rangle \otimes |j_{3} \rangle, \\
\text{where} \quad q = e^{\frac{2 \pi i}{k+2}}, \quad [n] = \frac{q^{n/2}-q^{-n/2}}{q^{1/2}-q^{-1/2}}, \quad \text{and} \quad [n]! = [n][n-1] \cdots [1].
\end{gather*}
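The quantum integers and factorials above are straightforward to implement; the following sketch (ours, restricted to integer spins so that all factorial arguments are integers) evaluates the Borromean coefficient and performs a consistency check: when one color is trivial ($j_{3}=0$), only the $i=0$ term survives and the coefficient reduces to the product of unknot invariants $[2j_{1}+1][2j_{2}+1]$:

```python
import cmath
import math

def qn(n, q):
    """Quantum integer [n] = (q^{n/2} - q^{-n/2}) / (q^{1/2} - q^{-1/2})."""
    s = cmath.sqrt(q)  # principal branch of q^{1/2}
    return (s**n - s**(-n)) / (s - 1 / s)

def qfact(n, q):
    """Quantum factorial [n]! = [n][n-1]...[1]; empty product for n = 0."""
    out = 1
    for m in range(1, n + 1):
        out *= qn(m, q)
    return out

def borromean(j1, j2, j3, k):
    """Coefficient of |j1, j2, j3> in the Borromean-rings state (integer spins)."""
    q = cmath.exp(2 * math.pi * 1j / (k + 2))
    s = cmath.sqrt(q)
    total = 0
    for i in range(min(j1, j2, j3) + 1):
        num = (qfact(2 * j1 + i + 1, q) * qfact(2 * j2 + i + 1, q)
               * qfact(2 * j3 + i + 1, q) * qfact(i, q)**2)
        den = (qfact(2 * j1 - i, q) * qfact(2 * j2 - i, q)
               * qfact(2 * j3 - i, q) * qfact(2 * i + 1, q)**2)
        total += (-1)**i * (s - 1 / s)**(4 * i) * num / den
    return total
```

Since the Borromean rings are amphichiral, the resulting coefficients are real, which the numerics confirm.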
The density matrix for $| \mathcal{L} \rangle$ is, as usual, $\rho_{\mathcal{L}} = \frac{1}{\langle \mathcal{L} | \mathcal{L} \rangle} |\mathcal{L} \rangle \langle \mathcal{L} |$. Tracing over components of $\mathcal{L}$, we get the reduced density matrices. In the above two examples, tracing out any one component of the triple Hopf link yields a separable reduced density matrix, indicating that the state $|2^{2}_{1}+2^{2}_{1} \rangle$ is ``GHZ-like''. On the other hand, tracing out any one component of the Borromean ring yields a non-separable density matrix, showing that the state $| 6^{3}_{2} \rangle$ is ``W-like''.
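To make this concrete, one can assemble the triple Hopf link state $|2^{2}_{1}+2^{2}_{1}\rangle$ directly from the $S$-matrix formula above and examine its reduced density matrices numerically. The sketch below (ours; it only computes entanglement entropies, since testing separability itself is a harder problem) does this at $k=2$:

```python
import numpy as np

k = 2
spins = np.arange(k + 1) / 2                       # j = 0, 1/2, 1
S = np.sqrt(2 / (k + 2)) * np.sin(
    np.outer(2 * spins + 1, 2 * spins + 1) * np.pi / (k + 2))

# Coefficients C(j1, j2, j3) = S_{j1 j2} S_{j2 j3} / S_{0 j2}
psi = np.einsum('ab,bc,b->abc', S, S, 1.0 / S[0])
psi = psi / np.linalg.norm(psi)

def reduced(psi, keep):
    """Reduced density matrix of one component of a pure multipartite state."""
    m = np.moveaxis(psi, keep, 0).reshape(psi.shape[keep], -1)
    return m @ m.conj().T

def entropy(rho):
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-(p * np.log(p)).sum())

S_end = entropy(reduced(psi, 0))   # trace out the other two components
S_mid = entropy(reduced(psi, 1))
```

Both entropies are nonzero, so every single component is entangled with the rest; the ``GHZ-like'' statement is the finer claim, made in \cite{BFLP}, that the reduced two-component density matrices are nevertheless separable.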
\section{Review: Colored link invariants via symmetric webs}
The triple Hopf link is a torus link, so its colored link invariants can be written in terms of modular $S$ and $T$ matrices of the $\widehat{su(2)}_{k}$ WZW model. The Borromean ring is a twist link, and Habiro's formula works nicely to compute its colored link invariants. To compute colored link invariants for more general classes of knots/links, one may introduce junctions of Wilson lines and apply the techniques of quantum spin networks \cite{Masbaum, MasbaumVogel, CGV}. Alternatively, we may resolve the crossings by the ``symmetric webs'', which we will soon discuss.
In subsection \ref{subsec:local}, we review networks of Wilson lines and the local relations among them \cite{Witten89, Witten89wf, Witten89rw, MOY, CKM, CGR}. In the following subsection, we discuss their symmetric analogues \cite{RoseTub, ChunRefinedCS}. We provide examples of the figure-eight knot and the triple Hopf link at $k=2$. These examples will allow us to explicitly write down the entangled link states.
\subsection{Local relations among Wilson lines and networks of Wilson lines}
\label{subsec:local}
As was discussed before, the path integral over a 3-manifold $M_{3}$ with boundary fixes a vector in the associated Hilbert space, $H_{\partial M_{3}}$. The Hilbert space is isomorphic to the space of conformal blocks of the $\widehat{su(2)}_{k}$ WZW model on $\partial M_{3}$. When $M_{3}$ contains Wilson lines which end on the boundary, the Hilbert space is isomorphic to the space of conformal blocks on $\partial M_{3}$ with punctures decorated by $R_{1}, \cdots, R_{m}$, the representations the Wilson lines carry.
In particular, consider the case when $M_{3} = D^{3}$, \textit{i.e.}, the closed 3-ball. The boundary $\partial M_{3}$ is simply a 2-sphere, and thus the charge conservation argument shows that the dimension of the corresponding Hilbert space is equal to the dimension of the invariant subspace:
\begin{equation}
\dim H_{S^{2} ; R_{1}, \cdots, R_{m}} = \dim \mathrm{Inv}_{G}(\otimes_{i=1}^{m} R_{i}).
\label{eqn:cc}
\end{equation}
Now, let $\dim H_{S^{2} ; R_{1}, \cdots, R_{m}} = d$, and consider $(d+1)$ distinct Wilson line configurations in $D^{3}$ ending on the boundary 2-sphere at the $m$ punctures $R_{1}, \cdots, R_{m}$. Then, the $(d+1)$ vectors obtained by the path integral satisfy a linear relation in the $d$-dimensional Hilbert space.
One canonical example is the famous skein relation, Figure \ref{fig:skein}. There, the associated Hilbert space $H_{S^{2} ; \square, \square, \bar{\square}, \bar{\square}}$ is 2-dimensional (by Equation \ref{eqn:cc}). The three distinct braided/unbraided Wilson lines in $D^{3}$ then fix three vectors in the two-dimensional Hilbert space, and they clearly satisfy a linear relation. The coefficients of the linear relation are determined from the eigenvalues of the modular $T$ matrix, as was explained in \cite{Witten89}. Using the skein relation, we can simplify any given $\square$-colored link until there is no crossing left and write its expectation value in terms of those of unknots.
For links colored in higher spin representations, however, the Hilbert space $H_{S^{2} ; j_{1}, j_{2}, \bar{j_{1}}, \bar{j_{2}}}$ is $\min (j_{1},j_{2})+1$ dimensional. We would need $\min (j_{1},j_{2})+2$ braided Wilson lines to set up a linear relation, but such a linear relation may not simplify the given knot, as we are adding more crossings. Thus, we need an alternative way to simplify the given link into a form that we can evaluate systematically.
\subsection{Symmetric webs in $SU(2)$ Chern-Simons theory}
\label{subsec:symweb}
We can do so by introducing junctions of Wilson lines and resolving the crossings with trivalent graphs of Wilson lines \cite{Witten89wf, Witten89rw}. The junctions of interest are trivalent, and on each of them we place a gauge invariant tensor so that a closed trivalent graph defines a gauge invariant observable.
\begin{figure} [htb]
\centering
\includegraphics{MOYjunction}
\caption{LHS: a junction of three Wilson lines colored in $R_{1}, R_{2}, R_{3}$ such that $0 \in R_{1} \otimes R_{2} \otimes R_{3}$. At the junction, we place a gauge invariant tensor in $Hom_{G}(R_{1} \otimes R_{2} \otimes R_{3}, \mathbb{C})$. RHS: an equivalent junction, with the $R_{1}$-strand reversed and recolored by its conjugate representation.}
\label{fig:MOYjunction}
\end{figure}
When Wilson lines are colored by antisymmetric powers of the fundamental representation of $SU(N)$, such trivalent graphs coincide with ``MOY graphs'' \cite{CGR}. The MOY graphs can be simplified systematically by local relations, until they are written as a linear sum of MOY graphs whose ``MOY graph polynomials'' are known \cite{MOY}. The networks of Wilson lines in antisymmetric representations satisfy the same set of local relations, which are also called $N$Web relations in the context of the representation theory of quantum groups \cite{CKM}.
Before proceeding further, it is important to note that the Wilson lines with junctions must be vertically framed. This is because we cannot canonically frame the Wilson lines near the junctions in such a way that the configuration's self-linking number vanishes upon braiding. Although the colored link invariants of vertically framed Wilson lines differ from those of canonically framed ones, the entanglement structure is framing-independent \cite{BFLP}. For this reason, we fix the framing of Wilson lines to be vertical in the rest of this paper.
Now let us consider the Wilson lines colored in spin representations. For $SU(2)$, the spin-$i/2$ representation is simply the $i$-th symmetric power of the fundamental representation, denoted $Sym^{i}\square$. Just like their antisymmetric counterparts, they constitute ``symmetric webs'' which enable us to compute the colored link invariants by replacing the crossings with planar trivalent graphs of Wilson lines. The key trick is to use the level-rank duality of 2d WZW models, under which we can swap $\wedge^{i}\square$ with $Sym^{i}\square$ and the rank of the gauge group $N$ with the level $k$. To see how this works, consider the expectation value of a Wilson loop colored in $Sym^{i}\square$:
$$\langle W_{Sym^{i}\square}(\text{unknot}) \rangle = S^{(N,k)}_{0 Sym^{i}\square}/S^{(N,k)}_{00} = \quantumbinomial{N+i-1}{i},$$
where the superscript $(N,k)$ indicates that the S-matrix is from $\widehat{su(N)}_{k}$ WZW model. One can in fact write the RHS in terms of the S-matrix elements from the $\widehat{su(k)}_{N}$ WZW model:
\begin{gather}
q^{N} = e^{\pi i N/(N+k)} = -e^{-\pi i k / (N+k)} = -q^{-k} \\[1.5ex]
\Rightarrow \quad [N+a] = \frac{q^{N+a}-q^{-(N+a)}}{q-q^{-1}} = \frac{q^{k-a}-q^{-(k-a)}}{q-q^{-1}} = [k-a] \\[1.5ex]
\Rightarrow \quad \quantumbinomial{N+i-1}{i} = \quantumbinomial{k}{i} = S^{(k,N)}_{0 \wedge^{i}\square}/S^{(k,N)}_{00},
\end{gather}
where in the last line we have introduced quantum binomials, $\quantumbinomial{a}{b} = \frac{[a][a-1]\cdots[a-b+1]}{[b][b-1]\cdots[1]}$.
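The level-rank identity $\quantumbinomial{N+i-1}{i} = \quantumbinomial{k}{i}$ is easy to check numerically at the root of unity $q = e^{\pi i/(N+k)}$. The following sketch (our own, with function names of our choosing) uses the $[n] = (q^{n}-q^{-n})/(q-q^{-1})$ convention of this subsection:

```python
import cmath
import math

def qint(n, q):
    """[n] = (q^n - q^{-n}) / (q - q^{-1}) in the convention of this subsection."""
    return (q**n - q**(-n)) / (q - 1 / q)

def qbinom(a, b, q):
    """Quantum binomial [a][a-1]...[a-b+1] / [b]!."""
    num = den = 1
    for m in range(b):
        num *= qint(a - m, q)
        den *= qint(m + 1, q)
    return num / den

N, k = 2, 4
q = cmath.exp(1j * math.pi / (N + k))   # root of unity where [N + a] = [k - a]

lhs = [qbinom(N + i - 1, i, q) for i in range(3)]   # qdim of Sym^i box of su(N)
rhs = [qbinom(k, i, q) for i in range(3)]           # qdim of wedge^i box of su(k)
```

Looping over $i$ confirms the equality of the quantum dimensions term by term, exactly as in the derivation above.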
From the dimension of the Hilbert space associated to $S^{2}$ with punctures, it is immediate that the kinematics of the Wilson lines in symmetric representations are the same as those of their antisymmetric counterparts. Therefore, we can obtain the symmetric web relations starting from the $k$Web relations, replacing $q^{k}$ by $-q^{-N}$ and antisymmetric representations with symmetric representations (Figure \ref{fig:SymWebGenerators}).
\begin{figure} [htb]
\centering
(circle removal) \quad \raisebox{-0.5\height}{\includegraphics{sym_circ_removal}} \\[1.5ex]
(digon removal) \quad \raisebox{-0.5\height}{\includegraphics{CS_digon}} \\[1.5ex]
(associativity) \quad \raisebox{-0.5\height}{\includegraphics{CS_associativity}} \\[1.5ex]
($[E,F]$ relation) \quad \raisebox{-0.5\height}{\includegraphics{EF}}
\caption{The local relations of Wilson lines in symmetric representations, which determine the expectation values of all closed trivalent graphs colored in symmetric representations. Above, the indices $i,j,k$ stand for $Sym^{i}\square, Sym^{j}\square, Sym^{k}\square$, respectively.}
\label{fig:SymWebGenerators}
\end{figure}
The symmetric web relations provided in Figure \ref{fig:SymWebGenerators} are coherent and allow us to determine the expectation values of all closed trivalent graphs of Wilson lines colored in symmetric representations. Applying the $[E,F]$ relation repeatedly, we can derive the famous ``square switch'' relation, which is particularly useful when simplifying complicated Wilson line networks (Figure \ref{fig:ss}).
\begin{figure} [htb]
\centering
\includegraphics{square_switch}
\caption{The ``square switch'' relation.}
\label{fig:ss}
\end{figure}
\begin{figure} [htb]
\centering
\includegraphics{mn_over}
\includegraphics{mn_under}
\caption{Resolution of crossings.}
\label{fig:crossingResolutions}
\end{figure}
Now it remains to write the crossings of symmetric-colored Wilson lines as a linear sum of trivalent graphs. In Figure \ref{fig:crossingResolutions}, the Wilson lines on the LHS and RHS satisfy a linear relation because $H_{S^{2};Sym^{m}\square, Sym^{n}\square, \overline{Sym^{m}\square}, \overline{Sym^{n}\square}}$ is $(\min(m,n)+1)$-dimensional. It is rather tedious to fix the coefficients, so we refer the interested reader to Appendix \ref{appendix:crossing} for the derivation of the above relations.
So far, we have restricted ourselves to ``MOY''-type junctions. Alternatively, one may generalize to include more general types of junctions, namely those which appear in quantum spin networks. When three incoming Wilson lines are colored by spins $i$, $j$ and $k$, we insert a vertex if and only if the fusion coefficient $N_{ijk}$ is nonzero. These are precisely the types of junctions considered in \cite{Witten89wf, Witten89rw}, and one obtains quantum spin networks \cite{Masbaum,MasbaumVogel,CGV} by choosing a different normalization for the gauge invariant tensors from that of \cite{Witten89wf, Witten89rw}.
\subsection{Example: Triple Hopf link}
Now let us provide some examples in which the link states are computed via the symmetric webs. The first example is the simplest of all four-component links, the triple Hopf link.
For simplicity of calculation, we restrict ourselves to $k=2$. This is because when $k=1$ the Wilson lines can only be colored by the trivial and fundamental representations, in which case we can compute the link invariants simply via skein relations. For higher levels $k$, the complexity grows accordingly, but the idea is the same: resolve all crossings and use the symmetric web relations to simplify the resultant trivalent graphs.
\begin{figure} [htb]
\centering
\includegraphics{22_double}\\[1.5ex]
\includegraphics{12_double}\\[1.5ex]
\includegraphics{21_double}\\[1.5ex]
\includegraphics{11_double}
\caption{The relations among Wilson lines which are used to evaluate the ``linkings'' of $1$- and $2$-colored triple Hopf links.}
\label{fig:doubleCrossings}
\end{figure}
In Figure \ref{fig:doubleCrossings}, the Wilson lines on the LHS are obtained by vertically stacking the resolutions of crossings, Figure \ref{fig:crossingResolutions}. Closing a Wilson line, we get the relations in Figure \ref{fig:doubleCrossingsClosed}. Given a colored triple Hopf link, we can apply them from right to left, until we are left with an unknot. For instance, consider a triple Hopf link whose components are all colored by $Sym^{2}\square$. Applying the topmost relation of Figure \ref{fig:doubleCrossingsClosed} from right to left twice, we can compute the colored link invariant (see Figure \ref{fig:eval222tripleHopf}). Notice that we have omitted the orientation of Wilson lines, as the spin representations are self-dual.
\begin{figure} [htb]
\centering
\includegraphics{22_double_closed} \quad \quad
\includegraphics{12_double_closed} \\[1.5ex]
\includegraphics{21_double_closed}\quad \quad
\includegraphics{11_double_closed}
\caption{The relations obtained by closing one of the Wilson lines from the previous figure.}
\label{fig:doubleCrossingsClosed}
\end{figure}
\begin{figure} [htb]
\centering
\includegraphics{222_tripleHopf}
\caption{Evaluation of $J_{2,2,2}(2^{2}_{1}+2^{2}_{1}+2^{2}_{1})$ via symmetric web relations.}
\label{fig:eval222tripleHopf}
\end{figure}
Likewise, we can compute the colored link invariants of the triple Hopf link for all colorings up to $Sym^{2}\square$ and determine the link state of the triple Hopf link at level $k=2$. Here, we use the fact that $q = e^{2 \pi i /(2+2)} = i$ to simplify this 81-dimensional vector. The full $q$-dependent expression is provided in Appendix \ref{appen:qTripleHopfState}.
\begin{align*}
| 2^{2}_{1}+2^{2}_{1}+2^{2}_{1} \rangle \quad = \quad &|0000\rangle + \sqrt{2} \big( |1000\rangle + |0100\rangle + |0010\rangle + |0001\rangle \big) \\
&+ \big( |2000\rangle + |0200\rangle + |0020\rangle + |0002\rangle \big) \\
&+ 2 \big( |1010\rangle + |0101\rangle + |1001\rangle \big) \\
&+ \sqrt{2} \big( |2010\rangle + |1020\rangle + |0201\rangle + |0102\rangle + |2001\rangle + |1002\rangle \big) \\
&+ \big( |2200\rangle + |0220\rangle + |0022\rangle + |2020\rangle + |2002\rangle + |0202\rangle \big) \\
&+ (-2) \big( |2101\rangle + |1201\rangle + |1021\rangle + |1012\rangle \big) \\
&+ \sqrt{2} \big( |2201\rangle + |1022\rangle \big) -\sqrt{2} \big( |1202\rangle + |2102\rangle + |2021\rangle + |2012\rangle \big) \\
&+ \big(|2202\rangle + |2022\rangle \big) + 2 \big( |0121\rangle + |1210\rangle \big) \\
&+ \sqrt{2} \big( |0212\rangle + |2120\rangle \big) -\sqrt{2} \big( |0122\rangle + |1220 \rangle + |0221\rangle + |2210 \rangle \big) \\
&+ \big( |0222\rangle + |2220\rangle \big) -\sqrt{2} \big( |2121\rangle + |1212\rangle \big) + 2 |1221\rangle \\
&+ \sqrt{2} \big( |2122\rangle + |2212\rangle \big) -\sqrt{2} \big( |1222\rangle + |2221\rangle \big) + |2222\rangle.
\end{align*}
Here, each tensor factor corresponds to a component of the triple Hopf link, and the number assigned to it denotes its coloring $Sym^{i}\square$.
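A few of these coefficients can be cross-checked against modular data. The sketch below (our own illustration) assumes the chain-of-three-Hopf-links expression $S_{ab}S_{bc}S_{cd}/(S_{0b}S_{0c}S_{00})$ in canonical framing, normalized so that the all-trivial coefficient is $1$; entries sensitive to the choice of framing may differ from the vertically framed web computation above:

```python
import numpy as np

k = 2
spins = np.arange(k + 1) / 2
S = np.sqrt(2 / (k + 2)) * np.sin(
    np.outer(2 * spins + 1, 2 * spins + 1) * np.pi / (k + 2))

def chain_coeff(a, b, c, d):
    """S_{ab} S_{bc} S_{cd} / (S_{0b} S_{0c} S_{00}); labels 0,1,2 <-> j = 0,1/2,1."""
    return S[a, b] * S[b, c] * S[c, d] / (S[0, b] * S[0, c] * S[0, 0])

c0000 = chain_coeff(0, 0, 0, 0)   # normalization: equals 1
c1000 = chain_coeff(1, 0, 0, 0)   # sqrt(2), matching the |1000> coefficient
c1100 = chain_coeff(1, 1, 0, 0)   # vanishes, since S_{1/2,1/2} = 0 at k = 2
c2101 = chain_coeff(2, 1, 0, 1)   # -2, matching the |2101> coefficient
```

In particular the vanishing of $S_{\frac12\frac12}$ at $k=2$ explains why basis vectors such as $|1100\rangle$ are absent from the state above.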
\subsection{Example: Figure-eight knot}
Next, we consider our first example of a non-torus knot/link. The simplest non-torus knot is the figure-eight knot, in the sense that it has the minimal number of crossings. Since there is only one component, the link state is 3-dimensional at $k=2$, and the coefficients of $|0\rangle$ and $|1\rangle$ can be easily computed by applying the skein relation in vertical framing. Omitting the detailed procedure, we write them down as follows:
$$ 1 | 0 \rangle, \quad \text{and} \quad (q^{2}+1-q)(q^{\frac{1}{2}}+q^{-\frac{1}{2}}) | 1\rangle.$$
Now, it remains to compute the colored link invariant of a $Sym^{2}\square$-colored figure-eight knot. We can compute the link invariant most easily by using a linear relation among the four 2-colored Wilson lines shown in Figure \ref{fig:22_lines} (such a linear relation is well-defined, for $H_{S^{2};2,2,\bar{2},\bar{2}}$ is 3-dimensional).
\begin{figure} [htb]
\centering
\includegraphics{22_lines}
\caption{Four 2-colored Wilson lines which satisfy a linear relation inside a 3-ball.}
\label{fig:22_lines}
\end{figure}
The symmetric web relations then allow us to determine the coefficients of the linear relation of interest, by considering the four Wilson lines in Figure \ref{fig:22_lines} as parts of the Wilson lines shown in Figure \ref{fig:22_more_lines}.
\begin{figure} [htb]
\centering
\includegraphics{22_more_lines} \\[1.5ex]
\includegraphics{22_more_lines_2}\\[1.5ex]
\includegraphics{22_more_lines_3}\\[1.5ex]
\includegraphics{22_more_lines_4}
\caption{Four different ways to ``close'' the 2-colored Wilson lines satisfying a linear relation.}
\label{fig:22_more_lines}
\end{figure}
The ``tadpole'' diagrams in the first two closures vanish, as they violate the charge conservation condition. Then, the braid relations in Figure \ref{fig:braid} allow us to determine the coefficients as shown in Figure \ref{fig:22_lines_coeff}. Applying the linear relation to any one crossing in a figure-eight knot, we can write its colored link invariant as a linear sum over an unknot, a trefoil knot and a Hopf link. The $|2\rangle$ component is therefore:
\begin{figure} [htb]
\centering
\includegraphics{22_lines_coeff}
\caption{A linear relation among four 2-colored Wilson lines.}
\label{fig:22_lines_coeff}
\end{figure}
$$ (q^{7}-q^{5}+q+1+\frac{1}{q}-\frac{1}{q^{5}} +\frac{1}{q^{7}}) | 2 \rangle.$$
When $k$ equals 2, $q = e^{2 \pi i /(2+2)} = i$, and the link state of the figure-eight knot can be explicitly written:
$$|4_1 \rangle = |0 \rangle - \sqrt{2}i |1 \rangle + |2\rangle.$$
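The specialization to $q=i$ can be verified directly by evaluating the two coefficients above (a short numeric check of our own):

```python
import cmath
import math

k = 2
q = cmath.exp(2 * math.pi * 1j / (k + 2))   # q = i at level k = 2
s = cmath.sqrt(q)                           # q^{1/2} = e^{i pi / 4}

coeff_1 = (q**2 + 1 - q) * (s + 1 / s)                      # |1> coefficient
coeff_2 = q**7 - q**5 + q + 1 + q**-1 - q**-5 + q**-7       # |2> coefficient
```

At $q=i$ these evaluate to $-\sqrt{2}\,i$ and $1$, reproducing the state $|4_1\rangle$ above.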
\section{Conjecture: entanglement structure and topological entanglement}
\label{sec:entangled}
Consider a link which is a union of two unlinked sub-links. TQFT axioms state that the corresponding link state must be a product state. But is the converse true? That is, given a link state which is a product state, can we expect the link itself to be a union of unlinked components?
If a link state $|\mathcal{L} \rangle$ is a product of two component link states, $|\mathcal{L} \rangle = |\mathcal{L}_{1} \rangle \otimes |\mathcal{L}_{2} \rangle$, any operator that acts on either $|\mathcal{L}_{1}\rangle$ or $|\mathcal{L}_{2}\rangle$ (as $1 \otimes \hat{O}$ or $\hat{O} \otimes 1$) would not change the other. In particular, a surgery on one component (say, $\mathcal{L}_{1}$) would not change the state $|\mathcal{L}_{2}\rangle$. The topological implication of the claim $|\mathcal{L}\rangle = |\mathcal{L}_{1}\rangle \otimes |\mathcal{L}_{2}\rangle$ is best illustrated when the gauge group is $U(1)$. In this case, the (colored) link invariants are nothing but (colored) Gaussian linking numbers, and the above claim implies that the ``mutual linking number'' (analogous to the mutual inductance) between $\mathcal{L}_{1}$ and $\mathcal{L}_{2}$ vanishes and remains unaltered after arbitrary surgery on either of the component links. In terms of the (colored) link invariants, this implies that the colored link invariants of $\mathcal{L}$ factorize into those of $\mathcal{L}_{1}$ and $\mathcal{L}_{2}$.
Obviously, the claim would be true if $\mathcal{L}_{1}$ and $\mathcal{L}_{2}$ are unlinked. However, the linking number is not a complete detector of linking (for instance, the Whitehead link has vanishing linking number, but its components are nontrivially linked), so the condition does not immediately force $\mathcal{L}$ to be a union of unlinked sub-links. Currently, we do not have a proof or a counterexample, so we leave it as a conjecture. When the gauge group is $SU(2)$:
\begin{conj}
Given an $m$-component link $\mathcal{L}$, suppose there exist two sub-links $\mathcal{L}_{1}$ and $\mathcal{L}_{2}$, with $i$ and $(m-i)$ components, respectively. Suppose the two sub-links satisfy the following:
$$J_{\alpha_{1}, \cdots, \alpha_{m}}(\mathcal{L}) = J_{\alpha_{1}, \cdots, \alpha_{i}}(\mathcal{L}_{1})J_{\alpha_{i+1}, \cdots, \alpha_{m}}(\mathcal{L}_{2})$$
for all colorings $\alpha_{1}, \cdots, \alpha_{m}$. Then $\mathcal{L}_{1}$ and $\mathcal{L}_{2}$ are unlinked.
\end{conj}
An information theoretic plausibility argument for the above goes as follows. If the link state $|\mathcal{L} \rangle$ is of the form $|\mathcal{L} \rangle = |\mathcal{L}_{1} \rangle \otimes |\mathcal{L}_{2} \rangle$, then a partial trace over either the $|\mathcal{L}_{1} \rangle$ or $|\mathcal{L}_{2} \rangle$ subsystem does not affect the other: tracing out one leaves the other invariant. At this point, one can simply construct the links corresponding to $|\mathcal{L}_{1} \rangle$ and $|\mathcal{L}_{2} \rangle$ separately, and place them next to each other at the end of the construction to create an unlinked manifestation of $|\mathcal{L} \rangle$. This lends plausibility to the claim that the two sub-links are genuinely not linked.
\section{Categorification of the entanglement entropy and the density matrix}
A ``categorification'' is an algebraic procedure in which algebraic objects are upgraded to higher-level ones, possibly with further structures: \textit{e.g.}, numbers to vector spaces, vector spaces to categories, and $n$-categories to higher $(n+1)$-categories. When topological invariants admit a categorification, they often produce strictly stronger invariants. For instance, the $\mathfrak{sl}_{N}$ polynomials are categorified to Khovanov or Khovanov-Rozansky homologies \cite{Kh, KhR04, KhR05}, which distinguish knots and links better than the former.
Since the symmetric webs are originally defined as a 1-category \cite{RoseTub}, a map (\textit{e.g.}, a density matrix) between symmetric webs would necessarily be a 2-morphism in a categorification of symmetric webs. In fact, we have already seen such a 2-morphism between symmetric webs: the density matrix. Recall that when computing the colored link invariants, we replaced each crossing by certain symmetric webs belonging to the same Hilbert space. One quickly notices that the resolution of a crossing is nothing but a basis change inside the Hilbert space. Then, the density matrix can be interpreted as a map between two symmetric webs, or in the language of higher representation theory, a 2-morphism between symmetric webs.
Unfortunately, the diagrammatic presentation of such 2-category (called the symmetric $\mathfrak{sl}_{2}$-foam 2-category in \cite{RoseTub}) is unknown at present, but we can still take a glimpse of what it would look like from its antisymmetric counterpart. The networks of Wilson lines colored in antisymmetric representations correspond to morphisms in the category of $\mathfrak{sl}_N$-webs \cite{CKM, CGR}. When categorified, the morphisms between webs are represented by singular cobordisms connecting them \cite{BN, Blanchet, CMW, Kh03, KhR04, Kup, MV, SN, LQR, QR}. These cobordisms satisfy certain relations, which encode homological information which is not fully captured in the webs.
Thus, once the symmetric $\mathfrak{sl}_{2}$-foam 2-category is successfully constructed, the density matrix would correspond to a linear sum of singular cobordisms between symmetric webs. Then, the kinematics of the symmetric $\mathfrak{sl}_{2}$-foams would allow us to study the entanglement structure of the link states not only at the level of knot polynomials, but also in terms of the homological invariants.
\section{Conclusions}
In this paper we have married the techniques developed in \cite{BFLP,Salton:2016qpp} with symmetric web techniques to obtain a novel way of constructing topologically interesting states. Further, we make and motivate a conjecture that product link states are represented by unlinked sub-links.
It would be interesting to attempt to prove this conjecture in future work. Also, now that this connection between the entanglement and knot-theoretic aspects of topological theories has been established, it would be interesting to see whether some other, more refined entanglement property could be useful in defining the cohomologies in the categorification of topological quantum field theories.
\acknowledgments{
We would like to thank Fran\c{c}ois Costantino, Sergei Gukov, Peter Kravchuck, Greg Kuperberg, and Onkar Parrikar for discussions. N.B. is funded as a Burke Fellow at the Walter Burke Institute for Theoretical Physics. The work is funded in part by the DOE Grant DE-SC0011632 and the Walter Burke Institute for Theoretical Physics, and also by the Samsung Scholarship.}
\section{Applications}
\label{sec:applications}
In this section we look into two applications in maritime safety in which information about both $H_s$ and $T$ is used. One is an extension of the fatigue damage application considered in \citet{lit:hildeman}. The other is a method of estimating the risk of capsizing due to a specific capsizing mode known as \textit{broaching-to}.
\begin{figure}[t]
\centering
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width= 0.8\textwidth]{./figs/data/globeRoute.png}
\caption{Transatlantic route of ship.}
\label{fig:route}
\end{subfigure}
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width= 0.8\textwidth]{./figs/data/waveDirection.png}
\caption{Mean wave directions during April.}
\label{fig:waveDirection}
\end{subfigure}
\caption{Route of ship and the mean wave directions during month of April. }
\end{figure}
\subsection{Accumulated fatigue damage}
\label{sec:fatigue}
A ship traversing the ocean is subjected to wear due to collisions with waves. These collisions will create microscopic cracks in the hull of the ship. With time and further exposure to the wave environment such cracks will grow while new ones form. This type of wear damage is called fatigue. A ship will accumulate a certain amount of fatigue damage on any journey. However, the accumulated fatigue damage will vary in severity depending on the sea states encountered en route.
\citet{lit:mao} proposed the following formula based on $H_s$ and $T_z$ for which the expected rate, $d(t)$, of accumulated fatigue damage could be computed,
\begin{align}
d(t) \approx
\frac{0.47 C^{\beta} H_s^\beta(\psp(t))}{\gamma} \left( \frac{1}{T_z(\psp(t))} - \frac{2\pi V(t)\cos \alpha(t)}{gT_z^2(\psp(t))} \right).
\label{eq:fatigueDamage}
\end{align}
Here, $g$ is the gravitational acceleration ($\approx 9.81$ [m/s$^2$]), $V$ is the speed of the ship,
and $\alpha$ is the angle between the heading of the ship and the direction of the traveling waves. Further, $\gamma$ and $\beta$ are constants dependent on the material of the ship and $C$ is a constant depending on the ship's design \citep{lit:mao}. This formula can be used in combination with Monte Carlo simulations of $H_s$ and $T_z$ from our proposed model to evaluate the distribution of accumulated fatigue damage on a planned route.
We consider the transatlantic route of Figure \ref{fig:route}. The continuous route is approximated by line segments between $100$ point locations (evenly spaced in geodesic distance). We set the ship speed to a fixed value of $10$ [m/s], which yields a sailing duration of $149.69$ hours, or equivalently $6.23$ days. The heading of the ship at each of the $100$ locations on the route is approximated as the mean of the directions of the two connecting line segments. We consider the journey to take place in April, since we have estimated the parameters of the model for this month. A ship traversing the considered route can be modeled by a curve in space and time, $\psp_{\gamma}(t)$. Since we have neither a spatio-temporal model nor data with sufficient temporal resolution, we consider the sea states to remain constant in time during the traversal of the route, i.e., $\psp_{\gamma}(t) \in \gspace$ and not in space-time.
We denote the accumulated fatigue damage during the trip up until time $t$ as $D(t)$, where $t = 0$ corresponds to the start of the trip, with no accumulated damage, and $t = t_{end}$ corresponds to the end of the trip, with maximal accumulated damage.
We set the constants specific to the ship as in \citep{lit:podgorski, lit:mao, lit:hildeman}, i.e., $C = 20, \beta = 3,$ and $\gamma = 10^{12.73}$.
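The damage-rate formula above is straightforward to evaluate numerically. The following Python sketch uses the ship constants just given, while the sea state, speed, and wave-angle values in the example call are illustrative placeholders rather than quantities from the dataset:

```python
import math

def damage_rate(Hs, Tz, V, alpha, C=20.0, beta=3.0, gamma=10**12.73, g=9.81):
    """Expected fatigue damage rate d(t) for sea state (Hs, Tz) [m, s],
    ship speed V [m/s] and heading-to-wave angle alpha [rad]."""
    return (0.47 * C**beta * Hs**beta / gamma) * (
        1.0 / Tz - 2.0 * math.pi * V * math.cos(alpha) / (g * Tz**2)
    )

# illustrative sea state, not taken from the data
d = damage_rate(Hs=4.0, Tz=8.0, V=10.0, alpha=math.pi / 4)
```

Note that running with the waves (small $\alpha$, larger $V\cos\alpha$) lowers the encounter frequency and hence the damage rate, in line with the second term of the formula.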
In order to compute the fatigue, we also need the angle between the propagating waves and the ship's heading. This is a random quantity that is not modeled in this work. Instead we assume that the mean direction of wave propagation is the direction in which the contour lines of $H_s$ move, i.e., the direction of the gradient of the $H_s$ field (this is the same wave direction as used in \citep{lit:podgorski, lit:mao, lit:hildeman}, where it has shown good results).
This direction was estimated in \citet{lit:baxevani} and can be seen in Figure \ref{fig:waveDirection}.
Furthermore, assuming that the sea states can be characterized by Bretschneider spectra, $T_z = \frac{1.2965}{1.408} T_1 = 0.9208 \cdot T_1$.
With these assumptions, and given values of $H_s$ and $T_1$, we use \eqref{eq:fatigueDamage} to compute the corresponding values of $d(t)$, and approximate the accumulated fatigue damage as
\begin{align}
	D(t_{end}) = \int_0^{t_{end}} d(t)dt \approx \sum_{i=1}^{100} d(t_i) \Delta t,
\end{align}
where $\Delta t$ is the time difference between consecutive point locations on the considered route, $\Delta t = 1.4969$ hours.
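The Riemann sum can be sketched as follows; the per-waypoint rates are placeholders standing in for the damage-rate formula evaluated along one realization of the sea state, and $\Delta t$ is taken in seconds under the assumption that the rate is per second:

```python
# Riemann-sum approximation of D(t_end) over the 100 waypoints.
d_rates = [5.0e-10] * 100        # hypothetical damage rates at the waypoints
dt_seconds = 1.4969 * 3600.0     # waypoint spacing Delta t, in seconds
D = sum(d * dt_seconds for d in d_rates)
```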
The accumulated fatigue damage is computed for each of the $600$ days available in the test set of the data. Hence, we acquire a sample of $600$ values of accumulated fatigue damage. Figure \ref{fig:fatigueMaximumRho} shows the empirical CDF computed from this sample (blue line). The accumulated damage is computed for a ship traversing the route in both directions, since the accumulated damage will depend on the angle between the heading of the ship and the propagation direction of the waves.
In order to assess whether the estimated CDF from data behaves as if estimated from the model, we also estimate $200$ CDFs from independent sets of data generated from the model. That is, we generate $200$ independent sets of $600$ realizations of the bivariate $H_s, T$ surface and compute a CDF from each set. In the figure, these $200$ estimated CDFs are plotted (green lines) together with the pointwise upper and lower envelopes of the values (red lines).
As can be seen, the estimated CDF from data is within the envelopes, suggesting that the model can be used for fatigue damage predictions.
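The envelope construction can be sketched as follows. Here both the ``data'' and the model simulations are stand-in uniform samples, whereas in the application they are accumulated-damage values from observations and from the fitted model (and $200$ simulated sets are used rather than the $50$ below):

```python
import random

random.seed(1)

def ecdf(sample, grid):
    """Empirical CDF of a sample, evaluated on a grid of points."""
    return [sum(v <= x for v in sample) / len(sample) for x in grid]

grid = [i / 20 for i in range(21)]
data_cdf = ecdf([random.random() for _ in range(600)], grid)

# repeated model simulations and their pointwise envelopes
sim_cdfs = [ecdf([random.random() for _ in range(600)], grid) for _ in range(50)]
lower = [min(c[j] for c in sim_cdfs) for j in range(len(grid))]
upper = [max(c[j] for c in sim_cdfs) for j in range(len(grid))]

# the model is plausible if the data CDF stays within the envelopes
inside = all(lo <= d <= up for lo, d, up in zip(lower, data_cdf, upper))
```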
\begin{figure}[t]
\centering
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width = \textwidth]{./figs/validate/fatigue/fourier44_maxRho/fatigueDamageCDFToEUR.eps}
\caption{From USA to Europe.}
\end{subfigure}
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width = \textwidth]{./figs/validate/fatigue/fourier44_maxRho/fatigueDamageCDFToUSA.eps}
\caption{From Europe to USA.}
\end{subfigure}
\caption{Empirical CDFs of accumulated fatigue damage for the transatlantic route. Empirical CDF from data (blue line), $200$ different empirical CDFs from simulations (green), and pointwise upper and lower envelopes of the simulated CDFs (red). }
\label{fig:fatigueMaximumRho}
\end{figure}
In \citet{lit:hildeman}, a similar comparison was performed where the accumulated fatigue damage was computed using only $H_s$. Instead of $T_z$, the proxy $T_z = 3.75 \sqrt{H_s}$ was used, as proposed in~\citep{lit:mao, lit:podgorski}.
\citet{lit:hildeman} showed that the accumulated fatigue damage of the model agreed well with observed data. However, in that work only data of $H_s$ was available. Hence, the data that the model was compared to also used the proxy $T_z = 3.75 \sqrt{H_s}$. Since we have data of both $H_s$ and $T$, we can compare this proxy with data from the real bivariate random field. Figure \ref{fig:fatiguePureHs} shows the corresponding CDFs, and one can note that the use of the proxy does not provide accurate estimates of the true distribution of fatigue damage. In the direction from America to Europe, the model underestimates the damage, while in the other direction it overestimates it. This suggests that it is necessary to use a bivariate model in order to model accumulated fatigue damage correctly.
\begin{figure}[t]
\centering
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width = \textwidth]{./figs/validate/fatigue/fourier44_pureHs/fatigueDamageCDFToEUR.eps}
\caption{From USA to Europe.}
\end{subfigure}
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width = \textwidth]{./figs/validate/fatigue/fourier44_pureHs/fatigueDamageCDFToUSA.eps}
\caption{From Europe to USA.}
\end{subfigure}
\caption{Results for accumulated fatigue damage as in Figure \ref{fig:fatigueMaximumRho} but where the univariate spatial model of $H_s$ was used together with the proxy $T_z(\psp) = 3.75\sqrt{H_s(\psp)}$.}
\label{fig:fatiguePureHs}
\end{figure}
However, instead of using the full bivariate model, a possible simpler alternative is to model $T$ as
the pointwise conditional mean given $H_s$. In such a model, only $H_s$ has to be modeled spatially. Compared to the proxy model of \citet{lit:hildeman}, the pointwise cross-correlation between $H_s$ and $T$ would still need to be estimated.
Using this conditional-mean model for $T_z$ given $H_s$ yields the estimated CDFs in Figure \ref{fig:fatigueMaxRhoPureHsConditionalMean}.
This simpler model also seems sufficient to explain the distribution of fatigue damage accumulated on the transatlantic route.
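A sketch of the pointwise conditional-mean construction, assuming $(\log H_s, \log T)$ is bivariate Gaussian at each location; the parameter values in the example call are illustrative, not estimates from the data:

```python
import math

def cond_mean_T(Hs, mu_H, mu_T, sig_H, sig_T, rho):
    """exp of E[log T | log Hs] under a bivariate Gaussian for the logs.
    (One could additionally apply the lognormal mean correction
    exp(sig_T**2 * (1 - rho**2) / 2); it is omitted in this sketch.)"""
    m = mu_T + rho * (sig_T / sig_H) * (math.log(Hs) - mu_H)
    return math.exp(m)

T_hat = cond_mean_T(Hs=3.0, mu_H=1.0, mu_T=2.0, sig_H=0.4, sig_T=0.2, rho=0.7)
```

Only the univariate spatial model of $H_s$ is needed; the pointwise parameters $\mu$, $\sigma$, and $\rho$ are estimated marginally at each location.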
\begin{figure}[t]
\centering
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width = \textwidth]{./figs/validate/fatigue/fourier44_maxRhoPureHsConditionalMean/fatigueDamageCDFToEUR.eps}
\caption{From USA to Europe.}
\end{subfigure}
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width = \textwidth]{./figs/validate/fatigue/fourier44_maxRhoPureHsConditionalMean/fatigueDamageCDFToUSA.eps}
\caption{From Europe to USA.}
\end{subfigure}
\caption{Results for accumulated fatigue damage as in Figure \ref{fig:fatigueMaximumRho} but where the univariate spatial model of $H_s$ is used and $T_z$ is the conditional mean given $H_s$ (when using the pointwise ML estimate).}
\label{fig:fatigueMaxRhoPureHsConditionalMean}
\end{figure}
\subsection{Safety of operation in a following sea}
\label{sec:broaching}
Although capsizing of ships is rare, it is an important issue in naval architecture of hull designs of new vessels as well as for operational recommendations. A natural approach to capsize modeling is to view it as an extremal problem to be handled by the machinery of extreme value theory. However, efforts to do this by fitting specific extreme value distributions, e.g., to maximum roll angle values, have not been overly successful.
The variety of capsize modes suggests that a variety of modeling approaches may be required. In this section, the so-called broaching-to capsize mode will be analyzed using the method proposed in~\citep{lit:leadbetter2}. The goal is to see if the proposed bivariate model can be used for modeling of broaching-to risks.
For a vessel sailing in a following sea, a large overtaking wave may trigger a response which may end in capsizing. There are several ways the capsize event may develop; one of these, referred to as broaching-to, results in a sudden change of heading~\citep{lit:spyrou}. In moderate sea states, a vessel is likely to broach-to if it runs with high speed and is slowly overtaken by steep and relatively long waves. However, it may also occur at lower speeds if the waves are steep enough.
In order to assure safe operation of vessels, recommendations are needed for their heading and speed in terms of sea conditions $H_s$ and $T$. These recommendations should be given such that the risk of capsizing is small.
It is reasonable to assume an exponentially distributed time until capsizing for time scales of hours or larger, since the apparent waves have correlation ranges on much shorter time scales.
Hence, the risk will be measured by the capsize intensity, which will depend on the type of ship and operating conditions such as sea state, heading, and speed. We summarize the operating conditions in a vector of parameters, $\theta = (H_s, T_z, \alpha, v)$, where $\alpha$ is the angle between the heading of the ship and the direction of the traveling waves, and $v$ is the speed of the ship.
The angle, $\alpha$, is estimated in the same way as in the fatigue example.
Let $\lambda(\theta)$ denote the Poisson intensity, i.e., the expected number of capsizes per unit time under the operational conditions $\theta$.
In order to estimate capsize probability, a detailed understanding of what constitutes a ``dangerous wave'' is necessary, i.e., what geometrical properties make it more likely to cause capsize when it overtakes a vessel from behind. Further, it seems likely that the probability of such a wave causing a capsize will depend on factors such as the position and motion of the vessel relative to the overtaking wave when an encounter is initiated.
Simulations of the performance of a Coast Guard cutter in severe sea conditions, run by the U.S. Coast Guard, were studied in~\citep{lit:leadbetter2}. For capsizes due to broaching-to, the vessel track of the simulated ship, along with the shape of the last wave preceding the capsize event, which we refer to as the ``triggering wave'', were recorded. A common denominator of the triggering waves is their similar (steep) slope between peak and trough.
It is therefore reasonable to define a wave as dangerous if its downward slope lies within some range of steep slopes as the wave passes the center of gravity of the vessel. We then want to calculate the rate $\mu_D(\theta)$ at which dangerous waves are expected to overtake the vessel, and further adjust this by the estimated probability that a dangerous wave will cause a capsize.
\subsubsection{Intensities of potentially dangerous overtaking waves}
A monochromatic plane wave has wavelength $L=2\pi\,g/\omega^2$, period $2\pi \omega^{-1}$, and velocity $V = g/\omega$. For a ship traveling with speed $v > 0$ at an angle $-\pi/2 < \alpha< \pi/2$ to the propagating direction of the wave, the intensity of overtaking waves is $\mu(\theta)=(V-v_x)^+/L$.
Here, $v_x = v\cos(\alpha)$ and $a^+ = \max(a,0)$. Note that a wider angle between the heading of the vessel and the wave direction yields a higher intensity. Likewise, a smaller ship speed also yields a higher intensity. At the same time, too large values of $\alpha$ will not cause dangerous broaching-to events since the heading of the ship will not change dramatically; although encountering big waves perpendicular to the heading of a ship can be dangerous for other reasons.
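For a single monochromatic wave this intensity is elementary to compute, as in the following Python sketch (the frequency, speed, and angle in the example are arbitrary illustrative values):

```python
import math

g = 9.81

def overtake_intensity(omega, v, alpha):
    """Intensity (V - v_x)^+ / L of a monochromatic wave with angular
    frequency omega overtaking a ship with speed v at angle alpha."""
    L = 2.0 * math.pi * g / omega**2     # wavelength
    V = g / omega                        # phase velocity
    vx = v * math.cos(alpha)
    return max(V - vx, 0.0) / L

mu = overtake_intensity(omega=0.7, v=6.0, alpha=0.0)
```

A slower ship or a wider angle gives a higher intensity, and a ship faster than the wave is never overtaken.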
Similarly to the monochromatic wave, the intensity of an apparent wave overtaking the center of gravity of the ship in a non-degenerate Gaussian sea has been shown to be~\citep{lit:rychlik3}
\begin{equation}
\mu(\theta)=\expect{\frac{(V-v_x)^+}{L}}=\frac{1}{4\pi}\sqrt{{\frac{m_{20}}{m_{00}}}}
\left( -\frac{m_{11}}{m_{20}} - v_x + \sqrt{v_x^2+2\frac{v_xm_{11}}{m_{20}}+\frac{m_{02}}{m_{20}} } \right),
\label{eq:mu}
\end{equation}
where
\begin{align}
m_{ij} := \int_0^{\infty} \left( \frac{\omega^2}{g} \right)^i \omega^j S(\omega)d\omega
\end{align}
are the spectral moments of the Gaussian process.
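Assuming a Bretschneider spectrum (as later in this section), the spectral moments can be obtained by numerical quadrature and plugged into the intensity formula above. The following Python sketch uses a simple rectangle rule; the upper integration limit acts as the bandwidth limitation mentioned below, and the sea-state values in the tests are illustrative:

```python
import math

g = 9.81

def bretschneider(w, Hs, Tp):
    """Bretschneider spectrum S(w) with significant wave height Hs and
    peak period Tp (so that the zeroth moment equals Hs**2 / 16)."""
    wp = 2.0 * math.pi / Tp
    return 0.3125 * Hs**2 * wp**4 / w**5 * math.exp(-1.25 * (wp / w)**4)

def spectral_moment(i, j, Hs, Tp, n=4000, w_max=6.0):
    """m_ij = integral of (w^2/g)^i * w^j * S(w) dw, rectangle rule on
    (0, w_max]; w_max acts as the bandwidth cut-off."""
    dw = w_max / n
    return sum((k * dw)**(2 * i) / g**i * (k * dw)**j
               * bretschneider(k * dw, Hs, Tp) * dw for k in range(1, n + 1))

def mu_overtake(Hs, Tp, v, alpha):
    """Expected overtaking intensity in a Gaussian sea, eq. (mu)."""
    m00 = spectral_moment(0, 0, Hs, Tp)
    m20 = spectral_moment(2, 0, Hs, Tp)
    m11 = spectral_moment(1, 1, Hs, Tp)
    m02 = spectral_moment(0, 2, Hs, Tp)
    vx = v * math.cos(alpha)
    return (math.sqrt(m20 / m00) / (4.0 * math.pi)
            * (-m11 / m20 - vx
               + math.sqrt(vx**2 + 2.0 * vx * m11 / m20 + m02 / m20)))
```

By the Cauchy-Schwarz inequality the bracket is nonnegative, so the intensity is always well defined, and it is decreasing in $v_x$.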
An apparent wave overtaking a ship is only dangerous if the wave is high and has a steep slope.
Analytic derivations~\citep[Theorem 6.2]{lit:aberg} give an explicit formula for the CDF of $W_x(x_0, t_0)$, where $(x_0, t_0)$ is the point in space-time at which the center of gravity of the ship is overtaken by the zero-level down-crossing of an apparent wave, and $W_x$ is the partial derivative of $W$ with respect to the spatial direction of the propagating wave.
The formula for the CDF is
\begin{equation}
F_{W_x}(r)=
\begin{cases}
\frac{2}{1-\rho}\left(\Phi(r/\sigma)-\rho\,e^{-\frac{r^2}{2m_{20}}}\Phi(r\rho/\sigma)\right)&,\quad r \le 0 \\
1&, \quad r > 0.
\end{cases}
\label{eq:slope}
\end{equation}
Here, $\Phi(x)$ is the CDF of the standard normal distribution, $\sigma^2=m_{20}(1-\rho^2)$, and
\begin{align}
\rho=\frac{v_xm_{20}+m_{11}}{\sqrt{m_{20}(v_x^2m_{20}+2v_xm_{11}+m_{02})}}.
\end{align}
The intensity of a broaching-prone wave scenario is the intensity of overtaking waves thinned by the probability that the overtaking wave has a dangerously steep slope, i.e.,
\begin{equation}
\mu_D(\theta)=\mu(\theta)\prob{ W_x(x_0, t_0) \in A },
\label{eq:muD}
\end{equation}
where $A$ is an interval of slopes considered dangerous.
Inspired by~\citet{lit:leadbetter2}, we choose $A = [-0.4, -0.2]$.
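The slope distribution and the thinning can be sketched as follows; the spectral-moment values and ship speed in the example are arbitrary illustrative numbers (chosen to satisfy $m_{11}^2 < m_{20} m_{02}$), not values derived from the data:

```python
import math

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def slope_cdf(r, m20, m11, m02, vx):
    """CDF of the slope W_x at an overtaking zero down-crossing."""
    if r > 0:
        return 1.0
    rho = (vx * m20 + m11) / math.sqrt(m20 * (vx**2 * m20 + 2.0 * vx * m11 + m02))
    sigma = math.sqrt(m20 * (1.0 - rho**2))
    return (2.0 / (1.0 - rho)) * (
        Phi(r / sigma)
        - rho * math.exp(-r**2 / (2.0 * m20)) * Phi(r * rho / sigma))

# probability that an overtaking wave has a slope in A = [-0.4, -0.2];
# mu_D(theta) = mu(theta) * p_danger then gives the thinned intensity
m20, m11, m02, vx = 0.01, 0.05, 1.0, 2.0   # illustrative values
p_danger = slope_cdf(-0.2, m20, m11, m02, vx) - slope_cdf(-0.4, m20, m11, m02, vx)
```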
Since the spectral moments are known functions of $H_s$ and $T$, assuming a Bretschneider spectrum, we can compute them for each point on the route for a given realization of $H_s$ and $T$.
In the following example we compute the spectral moments, assuming a limited bandwidth, by numerical integration using the Matlab toolbox WAFO~\citep{lit:brodtkorb}.
Using the route of Figure~\ref{fig:route} and wave directions of Figure \ref{fig:waveDirection}, $\mu_D(\psp_{\gamma}(t))$ can be estimated conditioned on a given sea state scenario.
\subsubsection{Estimation of $\lambda(\theta)$ response surfaces}
Conditioned on the ship being overtaken by a ``dangerous'' wave, the capsizing phenomenon is a result of complicated nonlinear interactions between the wave and the vessel. Direct computations of the risk of capsizing, based on random models for sea motion and vessel response, are not feasible.
In addition, there are limited data of capsizing available.
Consequently, one must study the problem using tank experiments with model ships or by means of computer simulations of the responses.
Since a capsize due to broaching-to occurs with a small probability, tank experiments would require too much time to get stable estimates of capsize probability for all but the most severe sea states. Instead, appropriate computer simulations are the best methods for estimating the probability of capsize and related events under moderately high sea conditions.
\citet{lit:leadbetter2}~derived a method for modeling the capsizing intensity due to broaching-to, $\lambda$, based on Poisson regression on the covariates $\mu_D$, $H_s$, and $T_1$, i.e.,
\begin{align}
\lambda(\theta) = \mu_D(\theta) \exp\left(\beta_0 + \beta_H \log H_s + \beta_T \log T\right).
\end{align}
The values of $\beta_0, \beta_H$, and $\beta_T$ depend on the ship type in consideration; a heavier and larger ship can withstand taller waves without broaching-to, as compared to a small ship.
The parameters of the regression for a U.S. coast guard cutter were estimated in~\citep{lit:leadbetter2}.
It turned out that this standard linear Poisson regression satisfactorily explained $\lambda(\theta)$ with the parameters $\beta_0, \beta_H,$ and $\beta_T$ estimated from capsize data in the computer simulations.
The values were $\beta_0 \approx \log(0.05), \beta_H \approx 7.5,$ and $\beta_T \approx -7.5$.
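Since $\beta_H = -\beta_T$, the regression reduces to $\lambda = 0.05\, \mu_D (H_s/T)^{7.5}$. The following sketch implements it with the reported coefficients; the $\mu_D$, $H_s$, and $T$ values in the example call are placeholders:

```python
import math

b0, bH, bT = math.log(0.05), 7.5, -7.5   # coefficients for the cutter

def capsize_intensity(mu_D, Hs, T):
    """Capsize intensity lambda(theta) from the Poisson regression."""
    return mu_D * math.exp(b0 + bH * math.log(Hs) + bT * math.log(T))

lam = capsize_intensity(mu_D=1e-2, Hs=6.0, T=8.0)
```

Higher waves at a fixed period increase the risk, while longer periods (less steep seas) decrease it.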
The model was shown to predict intensities of order $10^{-3}$ adequately.
It is still not known if the model can be extrapolated to even safer operating conditions. However, the predicted sea states that should be avoided are in line with the ones found using significant roll threshold, see~\citep[Fig. 22.2]{lit:leadbetter2}.
For a ship traversing the route of Figure~\ref{fig:route}, $\lambda(\theta(\psp_{\gamma}(t)))$ is the conditional capsize intensity of an inhomogeneous Poisson process over the space-time curve of the ship's path, given the sea states, $\theta$.
The number of capsizes, assuming that a ship could continue after a capsize, would then be Poisson distributed with intensity,
\begin{align}
\lambda(\theta) := \int_{0}^{t_{end}}\lambda(\theta_{\gamma}(t)) dt \approx \sum_{i=1}^{100}\lambda(\theta_{\gamma}(t_i))\Delta t,
\end{align}
where $\theta_{\gamma}(t) := \theta(\psp_{\gamma}(t))$.
The capsize events can hence be considered as a Cox process where the latent random intensity is given by the sea states, $\theta$.
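Numerically, the total intensity is again a Riemann sum over the waypoints, and, conditionally on the sea states, the probability of at least one capsize follows from the Poisson distribution; the per-waypoint intensities below are placeholders:

```python
import math

lam_i = [1e-6] * 100   # hypothetical capsize intensities at the waypoints
dt = 1.4969            # hours between consecutive waypoints
total = sum(l * dt for l in lam_i)
p_at_least_one = 1.0 - math.exp(-total)  # conditional on the sea states
```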
In our example we compute the distribution of $\lambda$ as a function of the bivariate random field $H_s, T$. We use the same coefficients as in~\citet{lit:leadbetter2}, i.e., $\beta_0 = \log(0.05), \beta_H = 7.5, \beta_T = -7.5$.
When computing $\lambda$ we consider traversing the route from America to Europe, with the wave directions as in Figure \ref{fig:waveDirection}.
Furthermore, we choose the cutoff angle $\alpha_0 = 75^{\circ}$, meaning that we only consider waves as potentially dangerous if the angle between the ship's heading and the propagation direction of the waves is less than $\alpha_0$.
The scenario of traversing the route from Europe to America was not considered since the wave direction angle was always more than $\alpha_0$, i.e., negligible risk of a dangerous apparent wave overtaking the ship from behind.
The distribution of capsize intensities, $\lambda$, as well as corresponding total intensities of overtaking waves and dangerous overtaking waves can be seen in Figure~\ref{fig:broaching}.
The figure shows the estimated CDF of the total intensities, $\mu$, $\mu_D$, and $\lambda$, for a ship traversing the transatlantic route of Figure~\ref{fig:route} from America to Europe. The left column corresponds to computations using the proposed bivariate spatial random model of sea states. The right column corresponds to the simpler model of the univariate spatial $H_s$ model together with the pointwise conditional mean of $T$, which was found to be sufficient for the fatigue application in Section~\ref{sec:fatigue}. The CDF computed from the data is compared with $20$ simulations of equal size, $600$ days.
As is seen in Figure~\ref{fig:broaching}, the simpler model now clearly deviates from the empirical CDF of the data. The proposed bivariate spatial model shows a better fit, although it seems to overestimate the risks slightly for medium-sized intensities. Thus, for this application the bivariate model clearly outperforms the simpler alternative.
\begin{figure}[tp]
\centering
\begin{subfigure}{0.45\textwidth}
\centering
Bivariate spatial model
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\centering
Univariate spatial model
\end{subfigure}
\vspace{ 0.5 cm}
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width = \textwidth]{./figs/validate/broaching/fourier44_maxRho/broachingMuRouteToEur.eps}
\caption{$\mu$ bivariate}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width = \textwidth]{./figs/validate/broaching/fourier44_maxRhoPureHsConditionalMean/broachingMuRouteToEur.eps}
\caption{$\mu$ univariate}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width = \textwidth]{./figs/validate/broaching/fourier44_maxRho/broachingMuDRouteToEur.eps}
\caption{$\mu_D$ bivariate}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width = \textwidth]{./figs/validate/broaching/fourier44_maxRhoPureHsConditionalMean/broachingMuDRouteToEur.eps}
\caption{$\mu_D$ univariate}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width = \textwidth]{./figs/validate/broaching/fourier44_maxRho/broachingLambdaRouteToEur.eps}
\caption{$\lambda$ bivariate}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width = \textwidth]{./figs/validate/broaching/fourier44_maxRhoPureHsConditionalMean/broachingLambdaRouteToEur.eps}
\caption{$\lambda$ univariate}
\end{subfigure}
\caption{Empirical CDFs of total $\mu$, $\mu_D$, and $\lambda$. The broaching-to risks were computed using two different models of sea states, the proposed bivariate random field model (left) and the simpler model of spatial $H_s$ with the marginal conditional mean of $T$ (right). Empirical CDF from data (blue), corresponding empirical CDFs from 20 simulations (green) and the pointwise upper and lower envelopes of the empirical CDFs (red). }
\label{fig:broaching}
\end{figure}
\section{Data}
\label{sec:data}
In order to test the proposed model, we will fit it to data from the ERA-Interim global atmospheric reanalysis \citep{lit:dee} acquired by the European Centre for Medium-Range Weather Forecasts (ECMWF).
The reanalysis data is based on measurements and interpolated to a lattice grid in a longitude-latitude projection using ECMWF's weather forecasting model IFS, cycle 31r2 \citep{lit:berrisford}.
The spatial resolution of the data is $0.75^{\circ}$ and it is available from 1979 to present.
We will use the variables \textit{significant wave height of wind and ground swells} and \textit{mean wave period} from the dataset as $H_s$ and $T_1$ in our analysis. Both variables are available at a temporal resolution of 6 hours. However, since we will not model the temporal evolution of the data, and therefore want to approximate data from different points in time as independent, we thin the data to a temporal resolution of $24$ hours. Data from different months are distributed differently due to the effects of the annual cycle. Because of this, we restrict the analysis to the data from the month of April for the available years 1979 to 2018.
We also restrict the analysis spatially to the north Atlantic, since this region contains several important trading routes and is known to produce data that is approximately log-Gaussian distributed \citep{lit:ochi}.
An example of two simultaneous observations of $H_s$ and $T_1$ from the data can be seen in Figure \ref{fig:realizations}.
A bivariate histogram as well as marginal normal distribution plots for $\log H_s$ and $\log T_1$ for one specific point in space ($-32.25^{\circ}$ longitude and $48.75^{\circ}$ latitude) can be seen in Figure \ref{fig:normplots}. The data at this point agrees well with the assumption of a bivariate log-normal distribution, and similar results are obtained for other locations in the domain.
\begin{figure}[t]
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{./figs/realizations/data/realization1/realizationHsData.eps}
\caption{$H_s$ data $1$}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{./figs/realizations/data/realization1/realizationTpData.eps}
\caption{$T_1$ data $1$}
\end{subfigure} \\
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{./figs/realizations/data/realization2/realizationHsData.eps}
\caption{$H_s$ data $2$}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{./figs/realizations/data/realization2/realizationTpData.eps}
\caption{$T_1$ data $2$}
\end{subfigure}
\caption{Two observations of $H_s$ and $T_1$ chosen randomly from the dataset of April month during the years 1979-2018.}
\label{fig:realizations}
\end{figure}
Figure \ref{fig:dataNormplots} shows the normal probability plot of $\log H_s$ and $\log T_1$ over all points in the region. The data were first standardized, pointwise, before computing the plot. Hence, the points should lie on a line if the assumption of log-normality holds, which can be seen to be true in the figure.
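The pointwise standardization used for this figure can be sketched as follows; synthetic lognormal samples stand in for the reanalysis fields, and the numbers of locations and days are kept small for illustration:

```python
import math
import random

random.seed(0)
n_loc, n_day = 5, 400
# synthetic stand-ins for lognormal Hs series at n_loc locations
fields = [[math.exp(random.gauss(1.0 + 0.1 * i, 0.3)) for _ in range(n_day)]
          for i in range(n_loc)]

pooled = []
for series in fields:
    logs = [math.log(v) for v in series]
    m = sum(logs) / n_day
    s = math.sqrt(sum((x - m)**2 for x in logs) / (n_day - 1))
    pooled.extend((x - m) / s for x in logs)
# 'pooled' is what the normal probability plot is computed from
```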
The sample mean and sample variance of the log-transformed data for the April months can be seen in Figure \ref{fig:meanVarFields}.
Clearly, the mean wave height decreases close to the coasts, while the wave height variance increases slightly there. The mean wave period is larger in the east than in the west; this is due to the mean wind direction blowing eastward. The variance of the wave period shows a similar behavior.
The left columns of Figures \ref{fig:correlationHs} and \ref{fig:correlationTp} show the empirical correlation between three reference points in space and every other point in the spatial domain.
The point close to the coast of the USA shows an anisotropic pattern with the principal axis on the diagonal. In contrast, the spatial correlations at the point in the mid Atlantic and at the point close to the coast of northern Europe have their principal axes in the east-west direction.
It should be noted that the data is portrayed in the longitude-latitude coordinate system in Figures \ref{fig:correlationHs} and \ref{fig:correlationTp}. Other projections would yield different shapes of anisotropy---however, it is clear that no stationary model (on the sphere or in the plane) can explain the observed behaviour.
The considered dataset consists of $1200$ days of data. We divide these into two equally sized subsets of training data and test data. The training set consists of every second day, starting from the first day available. The test set consists of the remaining days. Hence, the test and training sets form a partition of all available days; each set consists of $600$ days, with consecutive days within a set $2$ days apart. In the next section we will use the training set to estimate model parameters. The test set is used to compare the fitted model with data for model validation.
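The alternating-day partition can be written down directly (indices only; the actual fields are not needed to illustrate the split):

```python
days = list(range(1200))   # indices of the available days
train = days[0::2]         # every second day, starting from the first
test = days[1::2]          # the remaining days
```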
\begin{figure}[t]
\centering
\begin{subfigure}{0.32\textwidth}
\includegraphics[width = \textwidth, keepaspectratio]{./figs/data/dataNormplotHs.eps}
\caption{$\log H_s$}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\includegraphics[width = \textwidth, keepaspectratio]{./figs/data/dataNormplotTp.eps}
\caption{$\log T_1$}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\includegraphics[width = \textwidth, keepaspectratio]{./figs/data/dataBivariateHist.eps}
\caption{Bivariate histogram}
\end{subfigure}
\caption{Normal probability plots of the marginal distribution of $\log H_s$ and $\log T_1$ as well as their corresponding two dimensional histogram. The data is taken from a point at latitude $48.75^\circ$ and longitude $-35.25^\circ$ from the ERA-Interim dataset.}
\label{fig:normplots}
\end{figure}
\begin{figure}[t]
\centering
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{./figs/data/normplotHs.png}
\caption{$\log H_s$}
\end{subfigure}
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{./figs/data/normplotTp.png}
\caption{$\log T_1$}
\end{subfigure}
\caption{Normal probability plot of all data (standardized for each spatial location separately prior to computing the normal probability plot). Left: plot for $\log H_s$. Right: Plot of $\log T_1$.}
\label{fig:dataNormplots}
\end{figure}
\begin{figure}[t]
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{./figs/estimation/marginal/meanFieldHsData.eps}
\caption{mean $H_s$}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{./figs/estimation/marginal/meanFieldTpData.eps}
\caption{mean $T_1$}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width = \textwidth]{./figs/estimation/marginal/varFieldHsData.eps}
\caption{variance $H_s$}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width = \textwidth]{./figs/estimation/marginal/varFieldTpData.eps}
\caption{variance $T_1$}
\end{subfigure}
\caption{Sample mean and sample variance for both $H_s$ and $T_1$ in the north Atlantic. }
\label{fig:meanVarFields}
\end{figure}
\section{Discussion}
\label{sec:discussion}
A joint spatial model of significant wave height and wave period has been introduced. The model is a bivariate extension of the model of~\citet{lit:hildeman} using the multivariate random field approach of~\citet{lit:bolin, lit:hu}. Furthermore, the model also incorporates the rational approximation to Mat\'ern fields of arbitrary smoothness~\citep{lit:bolin3}.
This means that the spatial model allows for non-stationary, anisotropic models of bivariate Gaussian random fields, each with its own arbitrary smoothness. The model is parametrized with a relatively small number of easily interpretable parameters.
The model was fitted using data from the month of April from the ERA-Interim global atmospheric reanalysis \citep{lit:dee}.
A stepwise maximum likelihood approach together with numerical optimization by a quasi-Newton method was used to estimate the parameters of the model.
The univariate models for $H_s$ and $T$ separately agree well with data.
However, problems were encountered when fitting the cross-correlation structure between $H_s$ and $T$.
The problem is that the cross-correlation is not at its maximum between the same spatial points in $H_s$ and $T$, as assumed by the model. This led to ML~estimates of the cross-correlation structure that did not agree at all with the observed data. Instead, estimating the cross-correlation structure using a pointwise maximum likelihood method yielded better results, although the cross-correlation range was clearly underestimated for small values.
The shift of locations of maximum cross-correlation, as seen in Figure~\ref{fig:crosscorrelationTranslation}, is likely an effect of the dynamic nature of ocean waves and their interaction with wind.
The proposed model assumes a symmetric cross-correlation structure with maximum cross-correlation between the same point in the two fields.
Due to the shifts, the real cross-correlation is not symmetric.
Because of this, it would make sense to incorporate these shifts into the bivariate model using the model of~\citet{lit:li}. This is an interesting extension of the multivariate modeling approach using systems of SPDEs and was proposed in~\citep{lit:hu2}.
Such shifts could be considered as a diffeomorphism between $\gspace$ and some overlapping region $\mathcal{H}$.
This diffeomorphism would ensure that, when $\log T$ is mapped to $\mathcal{H}$, the two fields, $\log H_s$ and $\log T$, align, i.e., the maximum cross-correlation is attained between the same point in the two fields. The proposed model of this paper could then be applied to the transformed data.
The spatial model was evaluated in two applications in naval logistics. Both applications considered risks of undertaking a journey between the European and American continents through the north Atlantic. The first application considered computing the probability distribution of accumulated fatigue damage acquired during the journey. It was shown that the spatial model agreed with data. In particular, it worked better than the approach where $T_z$ is replaced by the proxy $T_z = 3.75\sqrt{H_s}$, which was used in~\citet{lit:hildeman}.
However, a simpler model using only the univariate spatial random field model of $H_s$ together with pointwise conditional means of $T$ given $H_s$ yielded an adequate fit as well.
The second application concerned the risk of capsizing due to broaching-to. An inhomogeneous Poisson process was derived given the bivariate sea state surface of $H_s$ and $T$. The Poisson intensity depended on the intensity of the ship being overtaken by a wave from behind, the probability that the overtaking wave is steep, and a Poisson regression of the probability of capsizing given a dangerous wave.
The distribution of capsizing intensity (corresponding to the risk of capsizing) was compared between the proposed bivariate spatial model and the data. The spatial model showed a reasonable fit but seemed to overestimate the risk slightly.
The simpler model, using the univariate random field of $H_s$ from~\citep{lit:hildeman} together with the pointwise conditional mean of $T$, was on the other hand clearly deviating from the distribution of the data. This shows that the bivariate model is indeed important for certain applications, and cannot simply be substituted by simpler univariate models.
\section{Parameter estimation and model fit}
\label{sec:estimation}
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{./figs/data/globeMesh.png}
\caption{The north Atlantic with the FEM mesh overlaid. Blue triangles are part of the spatial domain, $\gspace$. Pink triangles are part of the mesh extension. }
\label{fig:globeMesh}
\end{figure}
Just as in \citet{lit:hildeman}, we first log-transform the data and then standardize it marginally, pointwise, using the sample means and sample variances from the training set. The standardized data is then modeled by the proposed mean-zero bivariate Gaussian random field where we fix the marginal variances to one. As is common in geostatistical models, we allow for a nugget effect in each dimension while estimating the model. That is, for a location $\mv{s}_i$, we assume that the observed values, $X_{obs,i}, Y_{obs,i}$, are $X_{obs,i} = X(\mv{s}_i) + \varepsilon_{X,i}$ and $Y_{obs,i} = Y(\mv{s}_i) + \varepsilon_{Y,i}$, where $\varepsilon_{X,i} \sim \mathbb{N}(0,\sigma_{X,e}^2)$ and $\varepsilon_{Y,i} \sim \mathbb{N}(0,\sigma_{Y,e}^2)$ are independent variables representing measurement noise.
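The preprocessing step can be sketched as follows. The lognormal toy data, grid sizes, and nugget standard deviation below are hypothetical stand-ins for the actual ERA-Interim data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: rows = time points, columns = spatial locations.
hs = rng.lognormal(mean=1.0, sigma=0.4, size=(500, 300))   # significant wave height
t1 = rng.lognormal(mean=2.0, sigma=0.2, size=(500, 300))   # mean wave period

def log_standardize(field):
    """Log-transform, then standardize marginally (pointwise in space)."""
    z = np.log(field)
    mu = z.mean(axis=0)           # pointwise sample mean over time
    sd = z.std(axis=0, ddof=1)    # pointwise sample standard deviation
    return (z - mu) / sd, mu, sd

x, mu_x, sd_x = log_standardize(hs)
y, mu_y, sd_y = log_standardize(t1)

# Observation model with nugget effect: X_obs = X(s_i) + eps, eps ~ N(0, sigma_e^2).
sigma_e = 0.1   # hypothetical nugget standard deviation
x_obs = x + sigma_e * rng.standard_normal(x.shape)
```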
In order to use the proposed FEM model, a triangular mesh has to be created over the spatial domain, $\gspace$. Since the spatial domain is in reality a subset of the surface of the globe, we create a mesh approximating $\gspace$ by a polyhedron, i.e., as a piecewise planar manifold. Hence, the region inside each triangle is planar.
Figure \ref{fig:globeMesh} shows the mesh created for the north Atlantic. The blue triangles correspond to triangles within $\gspace$ and the pink triangles make up the mesh extension used to remove boundary effects. As in \citet{lit:hildeman}, the barrier method \citep{lit:bakka} is used to reduce the required size of the mesh extension.
Since the parameters of the proposed model are not known a priori, they have to be estimated from data.
The proposed bivariate model is defined by the marginal random fields through $K_X$ and $K_Y$, and the cross-correlation function $\rho(\psp)$.
The likelihood function of the joint model can be computed explicitly with a computational cost of $\mathcal{O}(N^{3/2})$, where $N$ is the number of nodes in the triangular mesh.
The maximum likelihood (ML) estimates of the parameters cannot be computed explicitly; instead, numerical optimization with a quasi-Newton algorithm is used to acquire the parameter estimates. Furthermore, the initial values of the optimization algorithm are chosen using local parameter estimates, as proposed in \citet{lit:hildeman}.
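The estimation strategy (an explicit Gaussian likelihood optimized by a quasi-Newton method from crude initial values) can be illustrated on a toy problem. The exponential covariance, 1-D grid, and replicate count below are illustrative stand-ins, not the model of this paper:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Toy stand-in for the random field: 40 locations on a line, exponential covariance.
s = np.linspace(0.0, 10.0, 40)
D = np.abs(s[:, None] - s[None, :])            # distance matrix

def cov(params):
    sigma2, rho = np.exp(params)               # log-parametrization keeps both positive
    return sigma2 * np.exp(-D / rho)

# Simulate 200 independent replicates from the "true" model.
true_params = np.log([1.5, 2.0])
L = np.linalg.cholesky(cov(true_params) + 1e-8 * np.eye(len(s)))
z = L @ rng.standard_normal((len(s), 200))

def neg_loglik(params):
    C = cov(params) + 1e-8 * np.eye(len(s))
    Lc = np.linalg.cholesky(C)
    alpha = np.linalg.solve(Lc, z)             # Lc^{-1} z
    logdet = 2.0 * np.sum(np.log(np.diag(Lc)))
    return 0.5 * (z.shape[1] * logdet + np.sum(alpha**2))

# Quasi-Newton (BFGS-type) optimization from a crude initial guess.
res = minimize(neg_loglik, x0=np.log([1.0, 1.0]), method="L-BFGS-B")
sigma2_hat, rho_hat = np.exp(res.x)
```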
Although the joint likelihood can be optimized numerically, we propose a stepwise parameter estimation procedure, motivated as follows:
One of the strengths of the proposed model is that all parameters have intuitive interpretations.
The parameters of $K_X$ and $K_Y$ respectively explain the spatial distribution of the random fields $X$ and $Y$ independently of each other.
Since the real spatial cross-correlation structure between $X$ and $Y$ is likely too complex to be explained completely by $\rho(\psp)$ alone, some degree of model misspecification will be present.
Maximizing the full likelihood function corresponds, asymptotically, to minimizing the Kullback-Leibler divergence between the true data distribution and the assumed model. However, under model misspecification, full ML estimates of the bivariate fields do not necessarily estimate the parameters of the original interpretation; instead, the estimates will correspond to the values that minimize the distance between the true model and the proposed one.
In many applications there is a point in keeping the original interpretation rather than minimizing the distributional distance---especially if conclusions should be drawn based on the estimated values of the parameters themselves.
Therefore, we fit $X$ and $Y$ independently in a first step. Then, conditioned on the estimates of the univariate random field parameters, a ML estimate of the cross-correlation structure, $\rho(\psp)$, is computed.
Estimating the parameters of $K_X$ and $K_Y$ independently has the additional advantage of a lower-dimensional quasi-Newton optimization, which reduces the computational cost of estimation and decreases the risk of ending up in bad local optima. Moreover, the parameters of $K_X$ and $K_Y$ can be estimated in parallel, further reducing the wall-clock time.
\subsection{Estimation of the univariate random fields}
The models for $X$ and $Y$ independently are parametrized by the smoothness $\alpha$, the nugget effect, as well as the functions $H(\psp)$ and $\kappa(\psp)$. As in \citet{lit:hildeman}, we define
\begin{align}
\tilde{H}(\psp) = \begin{bmatrix}
\exp\left(h_1(\psp)\right) & \left(2S(h_3(\psp)) - 1\right)\exp \left( \frac{h_1(\psp) + h_2(\psp)}{2} \right) \\
\left(2S(h_3(\psp)) - 1\right)\exp \left( \frac{h_1(\psp) + h_2(\psp)}{2} \right) & \exp \left(h_2(\psp)\right)
\end{bmatrix},
\end{align}
and let $\kappa(\psp) = \determinant{\tilde{H}(\psp)}^{-1/2}$ and $H(\psp) = \kappa(\psp)^2 \tilde{H}(\psp)$. The functions $h_1,h_2,h_3$ are defined as low-dimensional regressions on cosine functions over the domain of interest,
\begin{align}
h_i(\psp) = \sum_{p=0}^k \sum_{n=0}^k \beta_{np}^i \cos \left( n\frac{\pi s_1}{S_1} \right) \cos \left( p\frac{\pi s_2}{S_2} \right),\quad i=1,2,3,
\label{eq:cosineParams}
\end{align}
where $\psp = (s_1,s_2)$ and $S_1, S_2$ denote the width and height of the bounding box of the observation locations. The advantage of this parameterization is that no restrictions on the coefficients $\beta_{np}^i$ are needed in order to obtain a valid model.
We use $k = 4$ in Equation \eqref{eq:cosineParams}, meaning that $25\cdot 3 + 2 = 77$ parameters were estimated simultaneously using the quasi-Newton method for each field.
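A minimal sketch of evaluating $\tilde{H}(\psp)$ from the cosine-basis functions is given below. The logistic choice for $S(\cdot)$, the coefficient values, and the bounding box are hypothetical, but the construction illustrates why unconstrained coefficients $\beta_{np}^i$ always yield a symmetric positive-definite matrix (the off-diagonal factor $2S(h_3)-1$ lies strictly in $(-1,1)$):

```python
import numpy as np

def S_sigmoid(x):
    # Assumed sigmoid S(.) mapping R -> (0, 1); then 2*S - 1 maps R -> (-1, 1).
    return 1.0 / (1.0 + np.exp(-x))

def h_func(beta, s1, s2, S1, S2):
    """Cosine-basis regression: sum_{n,p} beta[n, p] cos(n pi s1/S1) cos(p pi s2/S2)."""
    k = beta.shape[0] - 1
    n = np.arange(k + 1)
    c1 = np.cos(n * np.pi * s1 / S1)
    c2 = np.cos(n * np.pi * s2 / S2)
    return c1 @ beta @ c2

def H_tilde(beta1, beta2, beta3, s1, s2, S1=360.0, S2=180.0):
    h1 = h_func(beta1, s1, s2, S1, S2)
    h2 = h_func(beta2, s1, s2, S1, S2)
    h3 = h_func(beta3, s1, s2, S1, S2)
    off = (2.0 * S_sigmoid(h3) - 1.0) * np.exp(0.5 * (h1 + h2))
    return np.array([[np.exp(h1), off], [off, np.exp(h2)]])

rng = np.random.default_rng(2)
k = 4
betas = [0.1 * rng.standard_normal((k + 1, k + 1)) for _ in range(3)]  # hypothetical
H = H_tilde(*betas, s1=320.0, s2=44.0)
```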
The estimated correlation functions for three reference points are visualized in Figures \ref{fig:correlationHs} and \ref{fig:correlationTp}. That is, the figures show the correlation between each reference point and all other points in the domain.
These three reference points have the coordinates $296^{\circ}$ longitude, $37^{\circ}$ latitude (close to the east coast of USA), $320^{\circ}$ longitude, $44^{\circ}$ latitude (in the middle of the north Atlantic), and $342^{\circ}$ longitude, $51^{\circ}$ latitude (close to the west coast of Ireland).
The figures suggest that the correlation structures of $\log H_s$ and $\log T_1$ are quite similar, which makes sense since the two fields are positively correlated.
\begin{figure}[t]
\centering
\begin{subfigure}{0.9\textwidth}
\centering
\begin{subfigure}{0.49\textwidth}
\centering
Data $\log H_s$
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
Model $\log H_s$
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{./figs/estimation/correlation/data/ireland/corPointHsData.eps}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width = \textwidth]{./figs/estimation/correlation/fourier44/ireland/corPointHsModel.eps}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{./figs/estimation/correlation/data/atlantic/corPointHsData.eps}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width = \textwidth]{./figs/estimation/correlation/fourier44/atlantic/corPointHsModel.eps}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{./figs/estimation/correlation/data/virginia/corPointHsData.eps}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width = \textwidth]{./figs/estimation/correlation/fourier44/virginia/corPointHsModel.eps}
\end{subfigure}
\end{subfigure}
\begin{subfigure}{0.07\textwidth}
\centering
\includegraphics[width=\textwidth]{./figs/examples/colorbar01.png}
\end{subfigure}
\caption{Correlation between three different reference points and all other points in $\log H_s$. Left column: empirical correlation function from data. Right column: correlation function from fitted model. }
\label{fig:correlationHs}
\end{figure}
\begin{figure}[t]
\centering
\begin{subfigure}{0.9\textwidth}
\centering
\begin{subfigure}{0.49\textwidth}
\centering
Data $\log T_1$
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
Model $\log T_1$
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{./figs/estimation/correlation/data/ireland/corPointTpData.eps}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width = \textwidth]{./figs/estimation/correlation/fourier44/ireland/corPointTpModel.eps}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{./figs/estimation/correlation/data/atlantic/corPointTpData.eps}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width = \textwidth]{./figs/estimation/correlation/fourier44/atlantic/corPointTpModel.eps}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{./figs/estimation/correlation/data/virginia/corPointTpData.eps}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width = \textwidth]{./figs/estimation/correlation/fourier44/virginia/corPointTpModel.eps}
\end{subfigure}
\end{subfigure}
\begin{subfigure}{0.07\textwidth}
\centering
\includegraphics[width=\textwidth]{./figs/examples/colorbar01.png}
\end{subfigure}
\caption{Correlation between three different reference points and all other points in $\log T_1$. Left column: empirical correlation function from data. Right column: correlation function from fitted model. }
\label{fig:correlationTp}
\end{figure}
The estimated smoothness parameter of $\log H_s$ was $\alpha = 3.66$, corresponding to a random field which is almost surely H{\"o}lder continuous with H{\"o}lder exponent $2.66$.
In \citet{lit:hildeman} the same model was fitted to $\log H_s$, with the differences that it was defined in the longitude-latitude projection instead of on the sphere and that the smoothness parameter could only take integer values. In that work, the smoothness was found to be $\alpha = 3$. With arbitrary smoothness we are now able to obtain a more exact estimate of the smoothness parameter. Likewise, the estimated smoothness of $\log T_1$ was $\alpha = 3.16$, corresponding to H{\"o}lder exponent $2.16$. Hence, the wave period is spatially slightly rougher than the significant wave height.
\subsection{Estimation of the cross-correlation structure by ML}
Given the marginal parameters of $X$ and $Y$, we now want to estimate their cross-correlation structure, i.e., $\rho(\psp)$. We parametrize this function as a regression on cosines as in \eqref{eq:cosineParams}.
Estimating $\rho(\psp)$ using ML conditioned on the already estimated parameters for $X$ and $Y$, we acquired parameters for our bivariate model of $H_s$ and $T_1$ jointly. Figure \ref{fig:crosscorrelationProperRho} compares the estimated cross-correlation structure with the empirical one estimated from data. The reference point used in this figure was at $320^{\circ}$ longitude and $44^{\circ}$ latitude.
\begin{figure}[t]
\centering
\begin{subfigure}{0.9\textwidth}
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{./figs/estimation/crosscorrelation/data/crosscorFieldData.eps}
\caption{Pointwise cross-correlation (data).}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{./figs/estimation/crosscorrelation/fourier44_properRho/crosscorFieldModel.eps}
\caption{Pointwise cross-correlation (model).}
\end{subfigure}\\
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{./figs/estimation/crosscorrelation/data/atlantic/crosscorPointHsTpData.eps}
\caption{Cross-correlation $T_1 \to H_s$ (data).}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{./figs/estimation/crosscorrelation/fourier44_properRho/atlantic/crosscorPointHsTpModel.eps}
\caption{Cross-correlation $T_1 \to H_s$ (model).}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{./figs/estimation/crosscorrelation/data/atlantic/crosscorPointTpHsData.eps}
\caption{Cross-correlation $H_s \to T_1$ (data).}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{./figs/estimation/crosscorrelation/fourier44_properRho/atlantic/crosscorPointTpHsModel.eps}
\caption{Cross-correlation $H_s \to T_1$ (model).}
\end{subfigure}
\end{subfigure}
\begin{subfigure}{0.07\textwidth}
\centering
\includegraphics[width= \textwidth]{./figs/examples/colorbarm11.png}
\end{subfigure}
\caption{Cross-correlation from the joint maximum likelihood estimation of the cross-correlation structure. Top row: comparison of the pointwise cross-correlation between the data and the estimated model. Middle row: comparison of cross-correlation between $\log T_1$ at a reference point and $\log H_s$ for all points in $\gspace$.
Bottom row: comparison of cross-correlation between $\log H_s$ at a reference point and $\log T_1$ for all points in $\gspace$.}
\label{fig:crosscorrelationProperRho}
\end{figure}
Surprisingly, even though the data is strongly positively correlated, the fitted model yielded a strong negative correlation.
It turns out that the proposed model of the cross-correlation structure is a bit too simplistic to explain the true dependency between $\log H_s$ and $\log T_1$. The reason is that the point $\psp_2$ where $\log T_1$ has the strongest cross-correlation with $\log H_s(\psp_1)$ is not $\psp_1$, i.e., $\psp_2 \neq \psp_1$, contrary to what is assumed in the proposed model of Section \ref{sec:model}.
For the reference point at $320^{\circ}$ longitude and $44^{\circ}$ latitude, the translation between the reference point and the point of maximum cross-correlation can be seen in Figure \ref{fig:crosscorrelationTranslationRef}. For $\log H_s$ in the reference point, corresponding $\log T_1$ is generally further west while the opposite relationship holds for $\log T_1$ in the reference point.
Corresponding vectors between reference points in $\log H_s$ and maximum points of correlation with $\log T_1$ can also be seen in the figure.
Figure \ref{fig:crosscorrelationTranslation} shows the ratio between the maximum cross-correlation and the pointwise cross-correlation.
For most regions, the pointwise cross-correlation is not that much smaller than the maximum cross-correlation. However, since there is a clear consistent increase in cross-correlation when moving away from the reference point, the maximum likelihood estimate of $\rho$ is negative. This obvious model-misspecification is another reason for using the proposed stepwise estimation procedure.
\begin{figure}[t]
\begin{center}
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width = \textwidth]{./figs/estimation/crosscorrelation/data/atlantic/arrows/crosscorPointHsTpData.eps}
\caption{$T_1 \to H_s$}
\end{subfigure}
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width = \textwidth]{./figs/estimation/crosscorrelation/data/atlantic/arrows/crosscorPointTpHsData.eps}
\caption{$H_s \to T_1$}
\end{subfigure}\\
\begin{subfigure}{0.6\textwidth}
\centering
\includegraphics[width = \textwidth]{./figs/estimation/crosscorrelation/data/vectorsMaxCX.eps}
\caption{Shift of maximum cross-correlation.}
\end{subfigure}
\end{center}
\vspace{-0.5cm}
\caption{Top row: translation vectors between a reference point $\psp_r$ at $(320^{\circ}, 44^{\circ})$ and the point of maximum cross-correlation (estimated from data). The left panel shows the cross-correlation between $\log T_1(\psp_r)$ and $\log H_s(\psp)$ for $\psp$ close to $\psp_r$; the right panel shows the cross-correlation between $\log H_s(\psp_r)$ and $\log T_1(\psp)$.
Bottom row: translation vectors between $H_s$ and corresponding maximum cross-correlation with $T_1$ for several points in the domain.}
\label{fig:crosscorrelationTranslationRef}
\end{figure}
\begin{figure}[t]
\centering
\begin{subfigure}{0.6\textwidth}
\includegraphics[width = \textwidth]{./figs/estimation/crosscorrelation/data/ratioMaxCX.eps}
\end{subfigure}
\begin{subfigure}{0.07\textwidth}
\includegraphics[width = 0.8 \textwidth]{./figs/examples/colorbar051.png}
\end{subfigure}
\caption{Ratio between maximum cross-correlation and pointwise cross-correlation for all points in the spatial domain. }
\label{fig:crosscorrelationTranslation}
\end{figure}
\subsection{Estimation of the cross-correlation structure by pointwise ML}
The results of the previous subsection suggest that the bivariate model will not explain the joint distribution perfectly. However, it can still be useful if one could obtain a better method of estimating $\rho$.
Instead of estimating $\rho$ by ML as before, a possible solution is to fit the model to explain the pointwise cross-correlation, instead of the full cross-correlation structure.
This corresponds to maximizing a product likelihood of the bivariate Gaussian random variables for each spatial location, i.e., the log-likelihood function
\begin{align}
l(\rho; \bs{x}, \bs{y}) &= l(\rho; \bs{\hat{\gamma}}) = \sum_{j = 1}^{M} O_j \left[ -\log\left(2\pi\right) - \frac{1}{2}\log\left( 1-\gamma_j^2 \right)
- \frac{O_j-1}{O_j} \, \frac{ 1 - \gamma_j \hat{\gamma}_{j} }{ 1-\gamma_j^2 } \right].
\label{eq:pointwiseML}
\end{align}
Here, $M$ is the number of locations where there have been observations in the data, $O_j$ are the number of observations for location $j$, and $\gamma_j$ is the pointwise cross-correlation between the two fields at location $\psp_j$ from the model. The observations, $\bs{x} := \{x_{jk}\}_{j,k}$ and $\bs{y} := \{y_{jk}\}_{j,k}$ are not needed explicitly since the sample pointwise cross-correlations, $\bs{\hat{\gamma}}$, are sufficient statistics for evaluating the log-likelihood.
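A minimal sketch of such a pointwise product likelihood, written directly in terms of the sufficient statistics $\hat{\gamma}_j$ and the observation counts $O_j$: the per-location term below is the summed standardized bivariate Gaussian log-density, so its maximizer in $\gamma_j$ lies close to $\hat{\gamma}_j$. The numbers used are hypothetical:

```python
import numpy as np

def pointwise_loglik(gamma, gamma_hat, n_obs):
    """Pointwise product log-likelihood over M locations.

    gamma     : model pointwise cross-correlations, shape (M,)
    gamma_hat : sample cross-correlations (sufficient statistics), shape (M,)
    n_obs     : number of observations O_j per location, shape (M,)
    """
    gamma = np.asarray(gamma, dtype=float)
    c = (n_obs - 1.0) / n_obs
    return np.sum(
        n_obs * (-np.log(2.0 * np.pi)
                 - 0.5 * np.log(1.0 - gamma**2)
                 - c * (1.0 - gamma * gamma_hat) / (1.0 - gamma**2))
    )

# Sanity check at a single location: the likelihood is maximized near gamma_hat.
gamma_hat = np.array([0.6])
n_obs = np.array([240.0])
grid = np.linspace(-0.95, 0.95, 381)
vals = [pointwise_loglik(np.array([g]), gamma_hat, n_obs) for g in grid]
g_best = grid[int(np.argmax(vals))]
```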
The pointwise cross-correlation of the model at location $\psp_j$ is
\begin{align}
\gamma_j = A_{j\cdot} P_r \tilde{\Sigma}_{XY} Q_r^T A_{j\cdot}^T,
\end{align}
where $A$ is the $M\times N$ observational matrix, i.e., mapping nodal values to values at the locations of observations \citep{lit:lindgren}. The matrices $P_r$ and $Q_r$ are defined in Section \ref{sec:model} and are sparse $N\times N$ matrices. The matrices $\tilde{\Sigma}_{\star\star}$ are $N\times N$ block matrices of $\tilde{\Sigma}$ which is the covariance matrix of $[\tilde{U}_X, \tilde{U}_Y]$, as defined in Section \ref{sec:model}.
To reduce the computational cost, we use the Takahashi equations \citep{lit:takahashi, lit:rue2} to compute the needed elements of $\tilde{\Sigma}$ from the corresponding precision matrix, without computing the full inverse, which is dense.
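The Takahashi recursions themselves are more involved; as a simpler sketch of the underlying idea (computing only the required entries of the inverse of a sparse precision matrix), one sparse solve per needed column suffices. The tridiagonal toy precision matrix below is a hypothetical stand-in for the bivariate field's precision:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Toy sparse SPD precision matrix Q; we only need a few covariance entries.
n = 200
Q = sp.diags([-np.ones(n - 1), 2.5 * np.ones(n), -np.ones(n - 1)],
             offsets=[-1, 0, 1], format="csc")
lu = splu(Q)   # sparse LU factorization, reused for every solve

def cov_entry(i, j):
    """Sigma[i, j] = (Q^{-1})[i, j], via one sparse solve for column j."""
    e = np.zeros(n)
    e[j] = 1.0
    return lu.solve(e)[i]

needed = [(0, 0), (10, 10), (10, 12)]
sigma = {ij: cov_entry(*ij) for ij in needed}
```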
When we estimated the parameters, the pointwise sample cross-correlations, $\{\hat{\gamma}_j\}_j$, were replaced with the sample cross-correlations between $H_s$ at location $\psp_j$ and $T$ at the location that maximized the pointwise cross-correlation. In this way, the fitted model has a pointwise cross-correlation corresponding to the maximum cross-correlation at that point, instead of a perfect pointwise fit that would somewhat underestimate the maximum cross-correlation.
The pointwise cross-correlation as compared to data can be seen in Figure \ref{fig:crosscorrelationDataField}. As seen, the model has a larger pointwise cross-correlation, as designed.
To get an understanding of the true cross-correlation structure of the estimated parameters, Figures \ref{fig:crosscorrelationHsTpMaxRho} and \ref{fig:crosscorrelationTpHsMaxRho} show the cross-correlation between the three reference points in one of the fields and all points in the other field.
Finally, Figure~\ref{fig:realizations2} shows realizations from the final model, which look similar to the observed data in Figure \ref{fig:realizations}.
\begin{figure}[t]
\centering
\begin{subfigure}{0.44\textwidth}
\centering
\includegraphics[width = \textwidth]{./figs/estimation/crosscorrelation/data/crosscorFieldData.eps}
\caption{Data.}
\end{subfigure}
\begin{subfigure}{0.44\textwidth}
\includegraphics[width = \textwidth]{./figs/estimation/crosscorrelation/fourier44_maxRho/crosscorFieldModel.eps}
\caption{Model with pointwise ML estimates.}
\end{subfigure}
\begin{subfigure}{0.07\textwidth}
\centering
\raisebox{0.2cm}{\includegraphics[ width= 0.8 \textwidth]{./figs/examples/colorbarm11.png}}
\end{subfigure}
\vspace{-0.4cm}
\caption{Comparison between the pointwise cross-correlation of data and the model using the pointwise ML estimates.}
\label{fig:crosscorrelationDataField}
\end{figure}
\begin{figure}[t]
\centering
\begin{subfigure}{0.9\textwidth}
\centering
\begin{subfigure}{0.49\textwidth}
\centering
Data $T_1 \to H_s$
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
Model $T_1 \to H_s$
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{./figs/estimation/crosscorrelation/data/ireland/crosscorPointHsTpData.eps}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width = \textwidth]{./figs/estimation/crosscorrelation/fourier44_maxRho/ireland/crosscorPointHsTpModel.eps}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{./figs/estimation/crosscorrelation/data/atlantic/crosscorPointHsTpData.eps}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width = \textwidth]{./figs/estimation/crosscorrelation/fourier44_maxRho/atlantic/crosscorPointHsTpModel.eps}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{./figs/estimation/crosscorrelation/data/virginia/crosscorPointHsTpData.eps}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width = \textwidth]{./figs/estimation/crosscorrelation/fourier44_maxRho/virginia/crosscorPointHsTpModel.eps}
\end{subfigure}
\end{subfigure}
\begin{subfigure}{0.08\textwidth}
\centering
\includegraphics[width= 0.8 \textwidth]{./figs/examples/colorbarm11.png}
\end{subfigure}
\caption{Cross-correlation between three reference points in $\log T_1$ and all other points in $\log H_s$. Left column: empirical cross-correlation function from data. Right column: cross-correlation function from fitted model using pointwise ML estimates.}
\label{fig:crosscorrelationHsTpMaxRho}
\end{figure}
\begin{figure}[t]
\centering
\begin{subfigure}{0.9\textwidth}
\centering
\begin{subfigure}{0.49\textwidth}
\centering
Data $H_s \to T_1$
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
Model $H_s \to T_1$
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{./figs/estimation/crosscorrelation/data/ireland/crosscorPointTpHsData.eps}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width = \textwidth]{./figs/estimation/crosscorrelation/fourier44_maxRho/ireland/crosscorPointTpHsModel.eps}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{./figs/estimation/crosscorrelation/data/atlantic/crosscorPointTpHsData.eps}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width = \textwidth]{./figs/estimation/crosscorrelation/fourier44_maxRho/atlantic/crosscorPointTpHsModel.eps}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width = \textwidth]{./figs/estimation/crosscorrelation/data/virginia/crosscorPointTpHsData.eps}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width = \textwidth]{./figs/estimation/crosscorrelation/fourier44_maxRho/virginia/crosscorPointTpHsModel.eps}
\end{subfigure}
\end{subfigure}
\begin{subfigure}{0.08\textwidth}
\centering
\includegraphics[width= 0.8 \textwidth]{./figs/examples/colorbarm11.png}
\end{subfigure}
\caption{Cross-correlation between three reference points in $\log H_s$ and all other points in $\log T_1$. Left column: empirical cross-correlation function from data. Right column: cross-correlation function from fitted model using pointwise ML estimates.}
\label{fig:crosscorrelationTpHsMaxRho}
\end{figure}
\begin{figure}[t]
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{./figs/realizations/fourier44_maxRho/realization1/realizationHsSim.eps}
\caption{$H_s$ simulation $1$}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{./figs/realizations/fourier44_maxRho/realization1/realizationTpSim.eps}
\caption{$T_1$ simulation $1$}
\end{subfigure} \\
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{./figs/realizations/fourier44_maxRho/realization2/realizationHsSim.eps}
\caption{$H_s$ simulation $2$}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{./figs/realizations/fourier44_maxRho/realization2/realizationTpSim.eps}
\caption{$T_1$ simulation $2$}
\end{subfigure}
\caption{Two simulations of $H_s$ and $T_1$ from the joint spatial model.}
\label{fig:realizations2}
\end{figure}
\section{Introduction}
\label{sec:introduction}
The sea state characterizes the stochastic behavior of ocean waves in a region in space and time.
Explicit knowledge of the sea state allows for quantitative assessments of profits, costs, and risks associated with naval logistics, fishing, marine operations, and other applications affected by the sea surface conditions.
Let us denote the spatio-temporal stochastic process of sea surface elevation as $W(\psp, t)$, where $\psp \in \gspace$, $t \in [0,\mathcal{T}]$. Here, $\gspace$ is a small region in space and $[0,\mathcal{T}]$ is a small interval in time, typically from 20 minutes up to about 3 hours.
The distribution of $W$ is equivalent to the sea state at $\gspace \times [0,\mathcal{T}]$.
In general, a spatio-temporal stochastic process can be very complex to model.
However, for waves in deep water, the sea surface elevation is often approximated by means of Gaussian fields. Furthermore, if $\gspace$ and $\mathcal{T}$ are small enough, $W$ will be a stationary Gaussian process. For most applications, the quantities of interest are the deviations from the sea level, hence the mean value is of no interest. Then, $W$ could be modeled as a centered stationary Gaussian process and is completely characterized by the directional spectrum $S(\omega, \theta)$. Here $\omega \ge 0$ is the angular frequency of the waves and $\theta \in [0, 2\pi]$ is the direction \citep{lit:aberg}.
In this paper we are concerned with applications related to ship safety. For such applications we are mainly interested in sea states where a dominant part of the wave energy is propagating in a narrow band of directions. Hence, we will make the approximation $S(\omega, \theta) = S(\omega)\delta(\theta-\theta_0)$, where $S(\omega) = \int_0^{2\pi}S(\omega, \theta)d\theta$ is the temporal spectrum,
$\theta_0$ is the direction the waves are, approximately, propagating from and $\delta$ is the Dirac delta function. This approximation is known as a \textit{long crested sea}, for which the sea state is completely characterized by its temporal spectrum and a wave direction.
For most applications, a few scalar valued quantities are enough to characterize $S$. For example, the popular parametric Bretschneider spectrum \citep{lit:bretschneider}, which has been shown to explain the important characteristics of sea states for a wide range of applications and spatial regions, is fully characterized by the \textit{significant wave height} $H_s$ and the \textit{peak wave period} $T_p$. The Bretschneider spectrum is defined as
\begin{equation}\label{PM}
S(\omega) = c\omega^{-5}\exp \left( -1.25 \frac{\omega_p^4}{\omega^4} \right), \qquad
c= \frac{1.25}{4} \,H_s^2\omega_p^4, \qquad \omega_p=2\pi\,/T_p.
\end{equation}
Here, $H_s = 4\sqrt{\Var[W(\psp, t)]}$ is four times the standard deviation of the sea surface elevation. It is a quantity summarizing the distribution of wave heights of apparent waves and is measured in units of length, in this paper in meters $[m]$. The significant wave height is in general the most important single quantity when assessing risks to ships in a given sea state.
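As a numerical sanity check (with arbitrary $H_s$ and $T_p$ values), integrating the Bretschneider spectrum recovers $H_s$ as four times the standard deviation, since $\Var[W(\psp,t)] = \int_0^\infty S(\omega)\,d\omega$:

```python
import numpy as np

def bretschneider(omega, hs, tp):
    """Bretschneider spectrum S(omega) for given H_s [m] and T_p [s]."""
    wp = 2.0 * np.pi / tp
    c = (1.25 / 4.0) * hs**2 * wp**4
    return c * omega**-5 * np.exp(-1.25 * (wp / omega)**4)

hs, tp = 4.0, 9.0                               # arbitrary sea-state parameters
omega = np.linspace(1e-3, 20.0, 400_000)
S = bretschneider(omega, hs, tp)
m0 = np.sum(0.5 * (S[1:] + S[:-1]) * np.diff(omega))   # trapezoid rule: variance
hs_check = 4.0 * np.sqrt(m0)                    # should recover the input H_s
```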
The peak wave period is defined as the wave period with the highest energy,
\begin{align}
T_p = \frac{2\pi}{\arg \max_{\omega > 0} S(\omega)},
\end{align}
and summarizes the distribution of wave periods of apparent waves and is measured in units of time, in our paper in seconds $[s]$.
Two other popular quantities summarizing the distribution of wave periods are the \textit{mean wave period}, $T_1$, and \textit{mean zero-crossing period}, $T_z$, defined as
\begin{align}
T_1 = 2\pi \frac{\int_{0}^{\infty}S(\omega)d\omega }{\int_{0}^{\infty}\omega\,S(\omega)d\omega },\qquad
T_z = 2\pi \sqrt{\frac{\int_{0}^{\infty}S(\omega)d\omega }{\int_{0}^{\infty}\omega^2\,S(\omega)d\omega }}.
\end{align}
In words, $T_1$ is the mean period of the spectrum, while $T_z$ is the mean time between a zero upcrossing and the consecutive one, at a fixed point in space.
Under the assumption of a Bretschneider spectrum, these three quantities are related as $T_p = 1.408 \cdot T_z = 1.2965 \cdot T_1$.
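These proportionality constants can be checked numerically from the spectral moments $m_n = \int_0^\infty \omega^n S(\omega)\,d\omega$ of a Bretschneider spectrum, using $T_1 = 2\pi m_0/m_1$ and $T_z = 2\pi\sqrt{m_0/m_2}$; the chosen $H_s$ and $T_p$ below are arbitrary:

```python
import numpy as np

def bretschneider(omega, hs, tp):
    wp = 2.0 * np.pi / tp
    c = (1.25 / 4.0) * hs**2 * wp**4
    return c * omega**-5 * np.exp(-1.25 * (wp / omega)**4)

tp = 10.0                                      # arbitrary peak period [s]
omega = np.linspace(1e-3, 40.0, 800_000)
S = bretschneider(omega, hs=4.0, tp=tp)

def moment(n):
    """Spectral moment m_n by the trapezoid rule."""
    f = omega**n * S
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(omega))

T1 = 2.0 * np.pi * moment(0) / moment(1)
Tz = 2.0 * np.pi * np.sqrt(moment(0) / moment(2))
r1, rz = tp / T1, tp / Tz                      # ratios T_p/T_1 and T_p/T_z
```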
Since all three quantities are proportional to each other under the assumption of a Bretschneider spectrum, we will in this paper use the notation $T$ to denote a wave period quantity without explicitly stating which.
Hence, as long as the Bretschneider spectrum is a reasonable approximation, all information about the sea state is encoded in the two quantities $H_s$ and $T$.
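To make these relations concrete, the following short script (our own illustration; all variable names are ours) evaluates the Bretschneider spectrum numerically and recovers the quoted proportionality constants from the spectral moments $m_k = \int_0^\infty \omega^k S(\omega)\,d\omega$, using $H_s = 4\sqrt{m_0}$, $T_1 = 2\pi m_0/m_1$, and $T_z = 2\pi\sqrt{m_0/m_2}$:

```python
# Numerical sanity check of the Bretschneider relations quoted above.
import numpy as np

def bretschneider(w, Hs, Tp):
    """Bretschneider spectrum S(omega) for significant wave height Hs [m]
    and peak wave period Tp [s]."""
    wp = 2.0 * np.pi / Tp
    c = (1.25 / 4.0) * Hs**2 * wp**4
    return c * w**-5.0 * np.exp(-1.25 * (wp / w)**4)

Hs, Tp = 4.0, 10.0
w = np.linspace(0.01, 40.0, 400_000)     # [rad/s]; spectrum decays like w^-5
S = bretschneider(w, Hs, Tp)

def moment(k):
    """Spectral moment m_k via the trapezoidal rule."""
    y = w**k * S
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(w)))

Hs_rec = 4.0 * np.sqrt(moment(0))                  # significant wave height
T1 = 2.0 * np.pi * moment(0) / moment(1)           # mean wave period
Tz = 2.0 * np.pi * np.sqrt(moment(0) / moment(2))  # mean zero-crossing period
print(round(Hs_rec, 3), round(Tp / Tz, 3), round(Tp / T1, 3))
```

The printed ratios agree with the stated constants $1.408$ and $1.2965$ to within $10^{-3}$.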
The problem with using the Bretschneider spectrum to model the sea state is that it assumes stationarity, which is not valid for large spatial regions. This is often solved by assuming that the parameters $H_s$ and $T$ are spatially varying. The main contribution of this work is to propose a joint spatial model for $H_s$ and $T$, which can be used to describe the sea states for large regions.
Probabilistic models of $H_s$ and $T$ jointly for a fixed point in space and time have been studied extensively. \citet{lit:ochi} showed that a bivariate log-normal distribution fits the bulk of the marginal probability distributions of $H_s$ and $T$ for data from the north Atlantic. Other approaches are to use Plackett models \citep{lit:placket,lit:athanassoulis}, or more general Box-Cox transformations \citep{lit:box}, and then model the transformed values with a bivariate Gaussian distribution. Conditional modeling approaches have also been proposed where $H_s$ is first modeled and $T$ is modeled conditional on $H_s$ \citep{lit:soares, lit:lucas, lit:vanem}.
Prior work has also studied temporal models for $H_s$ and/or $T$ for fixed points in space. These models are often based on transformations of the marginal data to Gaussianity such that the temporal correlation can be modeled by ARMA processes \citep[and the references therein]{lit:monbet2}.
As stated above, we are instead interested in spatial models for $H_s$ and $T$, which for example are important when considering moving ships where the wave state at points visited on the ships route will be highly dependent. An important property of a spatial model for any larger region is that it allows for spatial non-stationarity \citep{lit:baxevani3, lit:ailliot}, i.e., different distributional behavior depending on the spatial location.
Some prior work on modeling $H_s$ spatially, or spatio-temporally, using transformed Gaussian random fields exist. Such spatial models are usually based on a chosen parametric stationary covariance function for which parameters are estimated using maximum likelihood and/or minimum contrast methods.
\citet{lit:baxevani2} considered regions small enough to assume stationarity in order to work with a stationary Gaussian model. To handle non-stationarity, this model was later extended in~\citet{lit:baxevani3} to a spatial moving average process with a non-stationary Gaussian kernel.
\citet{lit:ailliot} instead considered mutually exclusive subregions of the spatial domain for which they assumed stationarity within. The mean and variance were estimated for each subregion and the measured values were standardized based on these parameters. The standardized data were then treated as stationary.
In \citet{lit:hildeman} a non-stationary and anisotropic model was proposed based on the SPDE approach \citep{lit:lindgren} and the deformation method \citep{lit:sampson}.
Compared to the covariance-based models of \citep{lit:baxevani2, lit:baxevani3, lit:ailliot} this model is based on a description of the random field through a stochastic partial differential equation (SPDE).
By approaching the characterization of the random field from an SPDE perspective, the model gains some distinct benefits.
It allows modeling on complex spatial domains (even arbitrary Riemannian manifolds), provides a finite-dimensional representation of a continuously indexed Gaussian random field, and has computationally beneficial properties (especially when modeling large regions).
The model we propose is an extension of the $H_s$ model by \citet{lit:hildeman}. Specifically, we will assume that the distribution of $H_s$ and $T$ are Gaussian after logarithmic transformation, as proposed by \citet{lit:ochi}. We will then model $\log(H_s)$ and $\log(T)$ using a bivariate extension of the model by \citet{lit:hildeman} where we also allow for arbitrary smoothness of the two random fields as well as a spatially varying cross-correlation of the two quantities.
The proposed model is not temporal and hence it cannot model the vast variability in sea state behavior over the whole year. Instead, we restrict ourselves to modeling the sea state variability during only one of the months of the year, the idea being that during a fixed month, the spatial sea state distribution does not change.
Thirty-nine years of data from the north Atlantic during the month of April will be used to estimate the model as well as to validate it.
To illustrate the flexibility of the proposed model, we will consider two safety issues in naval logistics which require spatial modeling of the sea state parameters, namely fatigue damage modeling of ships as well as estimation of the risk of capsizing due to broaching-to.
The structure of the paper is as follows.
In Section~\ref{sec:model}, the proposed model is introduced. Section~\ref{sec:modelDiscretization} describes the finite-dimensional discretization of the proposed model.
In Section~\ref{sec:data}, the data used for parameter estimation and validation of the model is described.
Section~\ref{sec:estimation} goes through the method of estimating the parameters of the model from the available data.
It also assesses the fit of the model.
Section~\ref{sec:applications} introduces two applications where such a spatial model can be used to estimate risks and wear associated with a planned ship journey.
Finally, Section~\ref{sec:discussion} concludes with a discussion of the results and future extensions.
\section{Model formulation}
\label{sec:model}
In \citet{lit:hildeman} a random field model was developed for the significant wave height, $H_s$. The model was defined by interpreting $X(\psp) = \log(H_s)$ as a weak solution to the \textit{stochastic partial differential equation} (SPDE)
\begin{align}
\mathcal{L}^{\alpha/2} \left( \tau(\psp)\rv(\psp) \right) := \left[ \kappa(\psp)^{\frac{2}{\alpha}-2} \left( \kappa(\psp)^2 - \nabla \cdot H(\psp) \nabla \right) \right]^{\alpha/2} \left(\tau(\psp)\rv(\psp)\right) &= \noise(\psp).
\label{eq:SPDEG}
\end{align}
Here, $H$ is a symmetric and positive definite matrix-valued function, $\damp$ and $\tau$ are strictly positive real-valued functions, and $\alpha\ge 1$ is a constant.
The SPDE is defined over a spatial domain, $\gspace$, and $\noise$ is Gaussian white noise.
When $\gspace := \mathbb{R}^d$, $\kappa(\psp) := \kappa>0$, $\tau(\psp) := \tau>0$, and $H(\psp) := I$ (the identity matrix), the solution to \eqref{eq:SPDEG} is a mean-zero Gaussian random field with a Mat\'ern covariance function \citep{lit:whittle}. The parameters $\tau$ and $\kappa$ respectively control the variance and correlation range of the field, and $\alpha = \nu + d/2$ where $\nu$ determines the smoothness. However, to obtain a model that is flexible enough to describe a wide range of non-stationary and anisotropic Gaussian random fields, the parameters $\kappa(\psp)$ and $H(\psp)$ of the model were obtained using the deformation method of \citet{lit:sampson} and the SPDE description of Gaussian random fields with Mat\'ern correlation structure \citep{lit:whittle, lit:lindgren}.
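For reference, the stationary Mat\'ern covariance implied in this special case can be evaluated as below (a sketch in our own notation, assuming unit variance). With $\nu = 1/2$, corresponding to $\alpha = 1.5$ when $d = 2$, it reduces to the exponential covariance $\exp(-\kappa h)$:

```python
# Stationary Matérn covariance with smoothness nu = alpha - d/2.
import numpy as np
from scipy.special import kv, gamma as Gamma

def matern_cov(h, nu, kappa, sigma2=1.0):
    """Matérn covariance r(h) for distances h > 0."""
    x = kappa * np.asarray(h, dtype=float)
    return sigma2 * 2.0**(1.0 - nu) / Gamma(nu) * x**nu * kv(nu, x)

# nu = 1/2 recovers the exponential covariance exp(-kappa * h):
h = np.array([0.1, 1.0, 3.0])
print(np.allclose(matern_cov(h, nu=0.5, kappa=2.0), np.exp(-2.0 * h)))  # True
```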
In short, this means that we consider a differentiable and bijective mapping, $\warp^{-1}(\psp)$, that maps points on the observational domain, $\gspace$, to points on a subset of some manifold, $\dspace$. When $X(\psp)$ is mapped to $\dspace$ it will be distributed as a Gaussian Mat\'ern field. Specifically, $\tilde{X}(\tilde{\psp}) := X(\warp(\tilde{\psp}))$ is a unit-variance Gaussian random field with a Mat\'ern covariance function with the same smoothness parameter $\alpha$ as in \eqref{eq:SPDEG}. Because of this, the function $\warp$ explains the anisotropy, non-stationarity, and correlation range of $X$, whereas $\tau(\psp)$ determines the marginal variances and $\alpha$ the smoothness.
The connection between the parameters $H(\psp)$ and $\damp(\psp)$ of the SPDE in Equation \eqref{eq:SPDEG} and the mapping $\warp: \dspace \mapsto \gspace$ is
\begin{align}
\damp^2(\psp) = \determinant{J[\warp^{-1}](\psp)}, \quad H(\psp) = \damp^2(\psp) J[\warp^{-1}]^{-1}(\psp) J[\warp^{-1}]^{-T}(\psp),
\end{align}
where $J[F^{-1}]$ denotes the Jacobian matrix of $F^{-1}$.
This means that the SPDE is completely characterized by the Jacobian matrix of $F$.
In fact, the model is well-defined for a broader class than those which are diffeomorphic to a Mat\'ern Gaussian random field---it is enough that they are locally diffeomorphic to a Mat\'ern Gaussian random field. That is, any $d\times d$ matrix-valued function which is Lipschitz continuous and uniformly positive definite (or uniformly negative definite) can be used in place of $J[F^{-1}]$.
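As an illustration (our own code; `J` stands for $J[F^{-1}](\psp)$ evaluated at a single point), the SPDE parameters follow directly from the Jacobian via the relations above. For an isotropic scaling $J = cI$ in $d = 2$, one gets $\kappa^2 = c^2$ and $H = I$: the deformation shortens the correlation range but introduces no anisotropy.

```python
# Computing the SPDE parameters (kappa^2, H) from the Jacobian of F^{-1}.
import numpy as np

def spde_params_from_jacobian(J):
    """Return (kappa^2, H) at one point, given J = J[F^{-1}] there.
    Assumes det(J) > 0, as required by the model."""
    kappa2 = np.linalg.det(J)
    Jinv = np.linalg.inv(J)
    H = kappa2 * Jinv @ Jinv.T
    return kappa2, H

# Isotropic shrinking by a factor 2 in d = 2:
k2, H = spde_params_from_jacobian(2.0 * np.eye(2))
print(k2, H)   # kappa^2 = 4.0 and H = identity: shorter range, no anisotropy
```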
In \citet{lit:hildeman} it was shown that this SPDE model agreed well with data of significant wave height in the north Atlantic ocean.
We now extend the model to a bivariate random field model for significant wave height and wave period.
We construct a bivariate model for which the marginal distributions over $H_s$ and $T$ are identical to the model of Equation \eqref{eq:SPDEG}.
Let us denote $X(\psp) := \log H_s(\psp)$ and $Y(\psp) := \log T(\psp)$, and consider $X$ and $Y$ as dependent Gaussian random fields.
\citet{lit:bolin, lit:hu, lit:hu2} developed multivariate models of Gaussian random fields based on a triangular system of SPDEs. Inspired by those models, we extend \eqref{eq:SPDEG} to a bivariate model
\begin{align}
\begin{bmatrix}
g_{11} & g_{12} \\
0 & g_{22}
\end{bmatrix}
\begin{bmatrix}
\mathcal{L}_X^{\smooth/2} &0 \\
0 &\mathcal{L}_Y^{\beta/2}
\end{bmatrix}
\begin{bmatrix}
X \\
Y
\end{bmatrix}
&=:
D
\begin{bmatrix}
\mathcal{L}_X^{\smooth/2} &0 \\
0 &\mathcal{L}_Y^{\beta/2}
\end{bmatrix}
\begin{bmatrix}
X \\
Y
\end{bmatrix}
=
\begin{bmatrix}
\mathcal{W} \\
\mathcal{V}
\end{bmatrix}.
\label{eq:multidimHu}
\end{align}
Here $\mathcal{W}$ and $\mathcal{V}$ are independent copies of Gaussian white noise on $\gspace$ and $g_{11}, g_{12}$, and $g_{22}$ are scalar-valued functions in $L^{\infty}(\gspace)$, where $g_{11}$ and $g_{22}$ are bounded away from $0$ such that $D$ is invertible.
The pseudo-differential operators $\mathcal{L}_X$ and $\mathcal{L}_Y$ are defined as in Equation \eqref{eq:SPDEG} and control the marginal distributions of $X$ and $Y$ independently.
The term $g_{12}$ will introduce dependencies between $X$ and $Y$.
The inverse, $R = D^{-1}$ can be used to rewrite the system of SPDEs as
\begin{align}
\begin{bmatrix}
\mathcal{L}_X^{\smooth/2} &0 \\
0 &\mathcal{L}_Y^{\beta/2}
\end{bmatrix}
\begin{bmatrix}
X \\
Y
\end{bmatrix}
&=
R
\begin{bmatrix}
\mathcal{W} \\
\mathcal{V}
\end{bmatrix}
:=
\begin{bmatrix}
h_{11} & h_{12} \\
0 & h_{22}
\end{bmatrix}
\begin{bmatrix}
\mathcal{W} \\
\mathcal{V}
\end{bmatrix},
\label{eq:multidim}
\end{align}
which corresponds to a linear model of coregionalization \citep{lit:bolin}. The parameters $h_{11}, h_{12}$ and $h_{22}$ are here functions of the spatial location, fully defined by the parameters in the elements of $D$. In particular, $h_{12}$ solely defines the dependency between the two fields. Moreover, considering only one random field at a time, they will have the same distribution as in the univariate case if $h_{11}(\psp)^2+h_{12}(\psp)^2 = h_{22}(\psp)^2 = 1$ for all $\psp \in \domsp$.
In the case of a constant $D$, \citet{lit:bolin} gives a parametrization of $R$ using only one parameter, $\rho$, due to the sum-to-one constraint. The parameter $\rho \in \R$ controls the correlation between the fields $X$ and $Y$ but is in general not equal to the correlation.
Using $\rho$, the parameters of $D$ and $R$ are fully identified as
\begin{align}
R &=
\begin{aligned}
\begin{bmatrix}
h_{11} & h_{12} \\
0&h_{22}
\end{bmatrix}
=
\frac{1}{\sqrt{1+\rho^2}}\begin{bmatrix}
1 & \rho \\
0 & \sqrt{1+\rho^2}
\end{bmatrix}
\end{aligned},
\quad
&D = R^{-1} =
\begin{aligned}
\begin{bmatrix}
g_{11} &
g_{12} \\
& g_{22}
\end{bmatrix}
=
\begin{bmatrix}
\sqrt{1+\rho^2} & - \rho \\
0 & 1
\end{bmatrix}
\end{aligned}.
\end{align}
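As a quick numerical sanity check (our own code), the matrices above satisfy $D = R^{-1}$ and the sum-to-one constraint $h_{11}^2 + h_{12}^2 = h_{22}^2 = 1$; moreover, $\rho = -0.98$ corresponds to a pointwise correlation of $\rho/\sqrt{1+\rho^2} \approx -0.70$:

```python
# Verifying the rho-parametrization of R and D = R^{-1}.
import numpy as np

def R_of_rho(rho):
    return np.array([[1.0, rho],
                     [0.0, np.sqrt(1.0 + rho**2)]]) / np.sqrt(1.0 + rho**2)

def D_of_rho(rho):
    return np.array([[np.sqrt(1.0 + rho**2), -rho],
                     [0.0, 1.0]])

rho = -0.98
R, D = R_of_rho(rho), D_of_rho(rho)
print(np.allclose(D @ R, np.eye(2)))                  # True: D = R^{-1}
print(np.isclose(R[0, 0]**2 + R[0, 1]**2, 1.0),
      np.isclose(R[1, 1]**2, 1.0))                    # sum-to-one constraint
print(round(rho / np.sqrt(1.0 + rho**2), 3))          # pointwise correlation
```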
We use this parameterisation, but extend the model by allowing $\rho$ to be a spatially varying function. Hence, the model we consider is
\begin{equation}\label{eq:final_model}
\begin{split}
\sqrt{1+\rho^2} \mathcal{L}_{X}^{\alpha/2} X - \rho \mathcal{L}_{Y}^{\beta/2} Y &= \mathcal{W} \\
\mathcal{L}_{Y}^{\beta/2} Y &= \mathcal{V}.
\end{split}
\end{equation}
With this parameterization, the covariance operators for $X$ and $Y$ are $\mathcal{L}_X^{-\alpha}$ and $\mathcal{L}_Y^{-\beta}$ respectively, and the cross-covariance is
$\rho(1+\rho^2)^{-1/2}\mathcal{L}_X^{-\alpha/2}\mathcal{L}_Y^{-\beta/2}.
$
In the case when the covariance operators for $X$ and $Y$ are the same and $\rho$ is constant, the correlation coefficient between the two fields is equal to $\frac{\rho}{\sqrt{1+\rho^2}}$ in the sense that it corresponds to the Pearson correlation coefficient between the two fields at any fixed point in $\gspace$.
In the general case, the interpretation of $\rho$ as controlling the correlation still holds:
values of $\rho$ near zero give a negligible dependency between the fields, while large positive values give a strong positive correlation and large negative values give a strong negative correlation. However, a simple relationship with the pointwise correlation coefficient does not exist.
This effect is highlighted in Figure~\ref{fig:exAnisotropy} showing a realization of such a bivariate Gaussian random field model. Here, both fields are stationary and anisotropic but with different directions of the main principal axes and different smoothness parameters.
Even though $\rho = -0.98$, which would correspond to a correlation of $-0.7$ if the marginal random fields were equal in distribution, the true correlation between the fields differs from this value. It is, however, visible that peaks in the left field tend to correspond to valleys in the right field, indicating a negative correlation.
\begin{figure}[t]
\centering
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=\textwidth]{./figs/examples/ex_anisotrop_left.png}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=\textwidth]{./figs/examples/ex_anisotrop_right.png}
\end{subfigure}
\caption{Realization of a bivariate, anisotropic and stationary Gaussian random field. The left field has a correlation range of $25$ in the direction of the principal axis at $45^\circ$ and a correlation range of $14$ in the perpendicular direction. The right field has the principal direction at an angle of $-45^\circ$ with the correlation range $30$, while the perpendicular direction has a range of $15$. The correlation between the fields is controlled by $\rho = -0.98$. Furthermore, the left field has a smoothness constant of $\alpha = 1.6$ while the right field has $\alpha = 3$.}
\label{fig:exAnisotropy}
\end{figure}
\section{Model discretization}
\label{sec:modelDiscretization}
To be able to use the model of the previous section in applications, we first must discretize it. This is done using a finite element approximation of the system of SPDEs. In this section we provide the details of this procedure. We first show the details in the univariate case with $\alpha=2$, then generalize to arbitrary $\alpha>1$, and finally combine the methods for the multivariate setting.
\subsection{The univariate case}
\label{sec:weak}
In the case when $\alpha=2$ in \eqref{eq:SPDEG}, the model can be discretized using a standard Galerkin finite element method as suggested by \citet{lit:lindgren}. The aim is to approximate the solution $X$ by a basis expansion $X_h(\mv{s} ) = \sum_{j=1}^N U_j \phi_j(\mv{s})$. Here $\{\phi_j \}_{j=1}^N$ is a set of piecewise linear functions induced by a triangular mesh of the spatial domain. Let $V_h$ be the space spanned by these basis functions. Augmenting the operator with homogeneous Dirichlet boundary conditions and considering the weak formulation of the SPDE on $V_h$ yields the following system of equations for the coefficients in the basis expansion
\begin{align}
\sum_{j=1}^N \left( \langle \damp \tau \phi_j , \phi_i \rangle
+ \langle H \nabla \tau \phi_j, \nabla \left(\damp^{-1} \phi_i\right) \rangle \right) U_j \overset{d}{=} \langle \noise, \phi_i \rangle, \quad i=1,\ldots, N,
\end{align}
where $\langle \cdot, \cdot \rangle$ denotes the inner product on $\mathcal{G}$. This system of equations can be written in matrix form as $KU := (B + G)U \overset{d}{=} W$, where
$B_{ij} := \langle \damp \tau \phi_j, \phi_i \rangle$, $G_{ij} := \langle H \nabla \left(\tau \phi_j\right), \nabla \left( \damp^{-1} \phi_i \right) \rangle$, and $W \sim \mathbb{N}(0, C)$ with $C_{ij} := \langle \phi_j, \phi_i \rangle$. Hence, the stochastic weights of the basis expansion are $U \sim \mathbb{N}(0, K^{-1}CK^{-T})$.
The important property of using a basis of $V_h$ with compact support is that $K$ and $C$ will be sparse matrices. \citet{lit:lindgren} showed that $C$ can be approximated by a diagonal matrix with diagonal elements $\langle \phi_i, 1 \rangle$. With this approximation, the precision matrix $K^TC^{-1}K$ is also sparse and $U$ is a Gaussian Markov random field (GMRF). This greatly reduces the computational cost for inference and simulation \citep{lit:rue}. We refer to \citet{lit:hildeman} for further details in the univariate case.
\subsection{Rational approximation for arbitrary smoothness}
\label{sec:rational}
The procedure from the previous subsection can be extended to integer values of $\alpha$ by noting that the solution to $\mathcal{L}^2 X = \noise$ can be obtained by first solving $\mathcal{L} X_1 = \noise$ and then $\mathcal{L} X = X_1$. One can therefore use the discretization from the previous subsection iteratively to obtain a discretization for even integer values of $\alpha$. \citet{lit:lindgren} also stated the solution to $\mathcal{L}^{1/2}X = \noise$ as a least square solution, which can be combined with the iterative procedure to obtain discretizations also for odd integer values of $\alpha$. This was utilized in \citep{lit:hildeman} where only integer values of $\smooth$ were considered.
For large values of $\alpha$, the correlation function does not change much for a small change in $\alpha$. However, for small values of $\alpha$, restricting it to integer values constrains the flexibility of the model. For instance, the exponential correlation function corresponds to $\alpha = 1.5$ in two dimensions and cannot be modeled by an integer-valued $\alpha$.
Therefore, in this work we want to model any positive value of $\alpha \ge 1$ and not only integer values.
Until recently, it was not clear how to formulate a FEM approximation for non-integer valued $\alpha$. However, \citet{lit:bolin3} solved this problem by combining the FEM approximation with a rational approximation of the power function, i.e., $x^{\smooth} \approx \frac{p_l(x)}{p_r(x)}$, where $p_l$ and $p_r$ are polynomials of some chosen orders.
By using such a decomposition, it was possible to approximate the non-integer power of a pseudo-differential operator $\mathcal{L}^{\smooth}$ as a product of two polynomial pseudo-differential operators, $P_l$ and $P_r$. Here, $P_l = \sum_{j=0}^{N_r} a_j \mathcal{L}^{j}$ and similarly for $P_r$.
That is,
\begin{align}
\mathcal{L}^{\smooth} \rv^R_m \approx \left( P_r^{-1}P_l \right)\rv^R_m = \noise \Leftrightarrow P_{l} \rv^{R}_m = P_r \noise,
\label{eq:rationalApprox}
\end{align}
where $\rv^R_m$ is the rational approximation of $\rv$.
Since the polynomial operators $P_l$ and $P_r$ are commutative, the solution can be written as a system of equations
\begin{align}
&P_l Z = \noise \\
&X^R_m = P_r Z.
\end{align}
This is important since a FEM approximation of $P_l Z = \noise$ can be used in order to get a GMRF approximation of $Z$.
More specifically, the discretized FEM operators $P_l$ and $P_r$ can be written as
\begin{align}
P_{l,h} := b_{m+1}K^{m_{\alpha}-1} \prod_{j=1}^{m+1} \left( I - r_{2,j} K \right), &\quad
P_{r,h} := c_m \prod_{i=1}^{m} \left( I - r_{1,i} K \right),
\end{align}
where $K$ is the FEM matrix of Section \ref{sec:weak}, $m$ is an integer controlling the quality of the approximation, and $m_{\smooth} = \max\{1, \lfloor \smooth \rfloor \}$ an integer associated with the smoothness parameter, $\smooth$. The coefficients $b_{m+1}, c_m, r_{1,i}, r_{2,j}$ are obtained from the rational approximation of the function $x^{\smooth}$ (see \citet{lit:bolin3}). A larger $m$ yields a better approximation, $X^R_m$, but also more terms in the polynomial operators, which will increase the computational cost by making $P_{l,h}$ and $P_{r,h}$ less sparse.
The distribution of the stochastic weights is $U\sim \mathbb{N}(0, P_{r,h} P_{l,h}^{-1} C P_{l,h}^{-T} P_{r, h}^T)$. Even though both $P_{r,h}$ and $P_{l,h}$ are sparse, their inverses are not. Therefore, the precision matrix of $U$ will not be sparse. However, because of the two-step procedure of the model formulation, all computational benefits of the GMRF case can be maintained when using the model. The trick is to use the nested SPDE approach \citep{lit:bolin4} and write $U = P_{r,h}\tilde{U}$, since $P_{r,h}$ is sparse and $\tilde{U}$ has a sparse precision matrix $P_{l,h}^{T} C^{-1} P_{l,h}$.
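The identity underlying this trick is easy to verify numerically on small dense stand-ins for the sparse matrices (illustrative code of ours): the covariance of $U = P_{r,h}\tilde{U}$, with $\tilde{U}$ having precision $P_{l,h}^T C^{-1} P_{l,h}$, equals the stated covariance $P_{r,h} P_{l,h}^{-1} C P_{l,h}^{-T} P_{r,h}^T$:

```python
# Toy check that the nested formulation reproduces the covariance of U.
import numpy as np

rng = np.random.default_rng(0)
n = 6
Pl = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # stands in for P_{l,h}
Pr = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # stands in for P_{r,h}
C = np.diag(rng.uniform(0.5, 1.5, n))                # lumped mass matrix

Q_tilde = Pl.T @ np.linalg.inv(C) @ Pl               # precision of U_tilde
Cov_U_nested = Pr @ np.linalg.inv(Q_tilde) @ Pr.T
Cov_U_direct = (Pr @ np.linalg.inv(Pl) @ C
                @ np.linalg.inv(Pl).T @ Pr.T)
print(np.allclose(Cov_U_nested, Cov_U_direct))       # True
```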
\subsection{FEM for the bivariate model}
We are now ready to discretize the model of Equation \eqref{eq:final_model}. In the previous section we saw that we can write a FEM approximation of the operator $\sqrt{1+\rho^2} \mathcal{L}_{X}^{\alpha/2}$ as $K_X = P_lP_r^{-1}$. Likewise, denote the FEM approximation of the operator $\mathcal{L}_{Y}^{\beta/2}$ as $K_Y = Q_l Q_r^{-1}$. Moreover, we can consider $-\rho \mathcal{L}_{Y}^{\beta/2}$ to be a composition of the two operators $-\rho \mathcal{I}$ and $\mathcal{L}_{Y}^{\beta/2}$. By considering an iterative FEM approximation with respect to these two operators, we obtain the system of linear equations
\begin{align}
K_X U_X + K_{\rho} U_Y &= W \\
K_Y U_{Y} &= V,
\end{align}
where $W$ and $V$ are i.i.d.~$\mathbb{N}(0, C)$ random vectors and $U_X$ and $U_Y$ are the stochastic weights for the FEM approximation of $X$ and $Y$ respectively. Furthermore, $K_{\rho} = -C^{-1}C_{\rho}K_Y$ where $C_{\rho} = \{\langle \rho(\psp) \phi_i(\psp), \phi_j(\psp) \rangle\}_{ij}$.
The block covariance matrix for $U_X$ and $U_Y$ is
\begin{align}
&\begin{bmatrix}
\sigma_X & \sigma_{XY} \\
\sigma_{YX} & \sigma_{Y}
\end{bmatrix} =
\begin{bmatrix}
K_X^{-1} C K_X^{-T} + K_X^{-1} K_{\rho} K_Y^{-1} C K_Y^{-T} K_{\rho}^{T}K_X^{-T}
& -K_X^{-1}K_{\rho} K_Y^{-1} C K_Y^{-T} \\
- K_Y^{-1} C K_Y^{-T} K_{\rho}^T K_X^{-T}
& K_Y^{-1} C K_Y^{-T}
\end{bmatrix}.
\end{align}
The corresponding block precision matrix is
\begin{align}
&\begin{bmatrix}
q_X & q_{XY} \\
q_{YX} & q_{Y}
\end{bmatrix} =
\begin{bmatrix}
K_X^T C^{-1} K_X &
K_X^T C^{-1} K_{\rho} \\
K_{\rho}^T C^{-1} K_X
& K_Y^T C^{-1} K_Y + K_{\rho}^T C^{-1} K_{\rho}
\end{bmatrix}.
\end{align}
Note that this is not a sparse matrix, which is needed to acquire the important computational advantages of the SPDE approach. However, by using the idea introduced in the previous section, we can formulate the model as a latent GMRF to keep the computational benefits. This is done by
considering $U_X = P_r \tilde{U}_X$ and $U_Y = Q_r \tilde{U}_Y$, where $\left[ \tilde{U}_X, \tilde{U}_Y \right]$ is a mean-zero GMRF with precision matrix
\begin{align}
\tilde{Q} &=
\begin{bmatrix}
P_l^T C^{-1} P_l &
-P_l^T C^{-2}C_{\rho}Q_l \\
-Q_l^T C_{\rho}^T C^{-2} P_l
& Q_l^T \left( C^{-1} + C_{\rho}^T C^{-3} C_{\rho} \right) Q_l
\end{bmatrix}.
\end{align}
With this formulation of our model, we can use the methods of \citet{lit:bolin3} for computationally efficient inference and simulation.
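As a consistency check of the expressions in this section (toy code of ours, with small random dense matrices standing in for the sparse FEM operators), the latent formulation with precision matrix $\tilde{Q}$ reproduces the block covariance of $[U_X, U_Y]$ given earlier:

```python
# Verifying that the latent GMRF formulation matches the block covariance.
import numpy as np

rng = np.random.default_rng(2)
n = 5
def near_id():
    return np.eye(n) + 0.1 * rng.standard_normal((n, n))

Pl, Pr, Ql, Qr = near_id(), near_id(), near_id(), near_id()
C = np.diag(rng.uniform(0.5, 1.5, n))                # lumped mass matrix
Crho = 0.3 * rng.standard_normal((n, n))             # <rho phi_i, phi_j>
Ci = np.linalg.inv(C)

KX = Pl @ np.linalg.inv(Pr)
KY = Ql @ np.linalg.inv(Qr)
Krho = -Ci @ Crho @ KY

# Direct block covariance of (U_X, U_Y) from the expressions in the text:
KXi, KYi = np.linalg.inv(KX), np.linalg.inv(KY)
SY = KYi @ C @ KYi.T
Sigma = np.block([
    [KXi @ C @ KXi.T + KXi @ Krho @ SY @ Krho.T @ KXi.T, -KXi @ Krho @ SY],
    [-SY @ Krho.T @ KXi.T, SY]])

# Latent formulation: U = blockdiag(P_r, Q_r) @ U_tilde, precision Q_tilde:
Qt = np.block([
    [Pl.T @ Ci @ Pl, -Pl.T @ Ci @ Ci @ Crho @ Ql],
    [-Ql.T @ Crho.T @ Ci @ Ci @ Pl,
     Ql.T @ (Ci + Crho.T @ Ci @ Ci @ Ci @ Crho) @ Ql]])
PQ = np.block([[Pr, np.zeros((n, n))], [np.zeros((n, n)), Qr]])
print(np.allclose(PQ @ np.linalg.inv(Qt) @ PQ.T, Sigma))   # True
```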
\section{Acknowledgements}
We would like to thank the European Centre for Medium-range Weather Forecast (ECMWF) for the development of the ERA-Interim data set and for making it publicly available.
The data used was the ERA-Interim reanalysis dataset, Copernicus Climate Change Service (C3S) (accessed September 2018), available from ``https://www.ecmwf.int/en/forecasts/datasets/archive-datasets/reanalysis-datasets/era-interim''.
\section{Introduction}
The Web has gained so much importance in the market economy during the last two decades because of the development of new Internet-based business models. Among those, \emph{online advertising} is one of the most successful and profitable. Generally speaking, online advertising -- also referred to as Internet advertising -- leverages the Internet to deliver promotional contents to end users.
Already in 2011, revenues coming from online advertising in the United States alone surpassed those of cable television, and nearly exceeded those of broadcast television~\cite{IAB2012}.
Moreover, worldwide investment in Internet advertising has reached around 200 billion dollars in 2016~\cite{eMarketer2017} and is expected to reach 335 billion by 2020~\cite{eMarketer2016}.\\
\indent Online advertising allows web content creators and service providers -- broadly referred to as \emph{publishers} -- to monetize while still providing their services for free to end users. For example, news websites or search engines can operate without charging users, as they get paid by advertisers who compete for buying dedicated slots on those web pages to display ads~\cite{2008-ijeb-jansen, 2009-sigir-broder, 2012-cikm-azimi}.\\
\indent {\color{black}The global spread of mobile devices has also been changing the original target of online advertising \cite{mobileAd, ADVSurvey}. Advertising is indeed moving from traditional display advertisements (i.e., \emph{banners}) shown on desktop computers to the so-called \emph{native} advertisements impressed within app streams of smartphones and tablets~\cite{lalmas2015kdd}.} More generally, the Internet advertising business will eventually extend to emerging pervasive and ubiquitous inter-connected \emph{smart devices}, which are collectively known as the \emph{Internet of Things} (IoT).
\\
\indent Enabling computational advertising in the IoT world is an under-investigated research area; nonetheless, it possibly includes many interesting opportunities and challenges. Indeed, IoT advertising would enhance traditional Internet advertising by taking advantage of three key IoT features~\cite{ADVSurvey}: \emph{device diversity}, \emph{high connectivity}, and \emph{scalability}.
IoT device diversity will enable more complex advertising strategies that truly consider \emph{context awareness}. For example, a car driver could receive customized ads from roadside digital advertisement panels based on his habits (e.g., preferred stopping locations, hotels, and restaurants). Furthermore, IoT high connectivity and scalability will allow advertising to be performed in a really dynamic environment as new smart devices are constantly joining or leaving the IoT network. Finally, different from the traditional web browser-based advertising where a limited number of user interactions occur during the day, IoT advertising might count on users interacting with the IoT environment almost 24 hours a day.\\
\indent The rest of this paper is organized as follows: Section~\ref{sec:usecase} motivates the idea of IoT advertising with a use case scenario.
Section~\ref{sec:onlineadvertising} and~\ref{sec:iotoverview} articulate key background concepts.
In Section~\ref{sec:iotadvertising}, we propose our vision of an IoT advertising landscape; in particular, we characterize the main entities involved as well as the interactions between them. Section~\ref{sec:challenges} outlines the key challenges to be addressed for successfully enabling IoT advertising. Finally, we conclude in Section~\ref{sec:conclusions}.
\section{An Example of an IoT Advertising Scenario: In-Car Advertising}
\label{sec:usecase}
Connected smart vehicles are one of the most dominant trends of the IoT industry: automakers are indeed putting a lot of effort into equipping their vehicles with an increasing set of computational sensors and devices.\\
\indent With millions of smart vehicles going around -- each one carrying possibly multiple passengers -- automobiles are no longer just mechanical machines used by people to move from point A to point B; rather, they are mobile, interconnected, and complex nodes constituting a dynamic and distributed computing system. This opens up new opportunities for developers who can leverage such an environment to build novel application and services. In particular, smart vehicles -- in fact, passengers traveling on board of those -- may become interesting ``targets'' for advertisers who want to sponsor their businesses.\\
\indent Assume a family of three is traveling in their smart car; their plan is to drive to a seaside destination a few hours away from their home and spend the weekend there. To do so, they rely on the GPS navigation system embedded in their car.
Bob is actually driving the car; he is a forty-five years old medical doctor and he likes Cuban food. Alice -- Bob's wife -- is forty and an architect. She is really passionate about fashion design and shopping.
Sitting in the back of the car, Charlie -- their son -- is a technology-enthusiast teenager who is listening to his favorite indie rock music from his smartphone. Suppose there exists a mechanism for \emph{profiling} passengers traveling on the same smart vehicle, either explicitly or implicitly. In other words, we assume the smart car can keep track of each passenger's profile. {\color{black} Such a profile needs to be built \emph{only} from data which the user agrees to share with the surrounding IoT environment.}
\\
\indent Suppose these travelers are about to cross a city where an iconic summer music festival takes place. Interestingly, an emerging rock band is going to perform on stage the same evening. Festival promoters have already advertised that event through \emph{analog} (e.g., newspapers and small billboards) and \emph{digital} (e.g., the city's website) channels.
However, they would also like to take advantage of an \emph{IoT ad network} to send more targeted and dynamic sponsored messages, namely to reach out to possibly interested people who happen to be around, such as Charlie.\\
\indent Assume Charlie gets an advertisement on the music app installed on his smartphone, and he convinces his parents to stop to attend the concert. Other similar advertising messages might be delivered to Alice and Bob as well. For example, Alice could be suggested to visit the city's shopping mall on her dedicated portion of the car's head-up display.
Furthermore, the eye-tracking sensors installed in the car could detect that Bob is getting tired, as he has been driving for too long. Therefore, Bob might be prompted with the coordinates of the best local Cuban cafe on the GPS along with a voice message suggesting to have a coffee there.\\
\indent We propose an IoT advertising platform that behaves as an intermediary (i.e., a \emph{broker}) between \emph{advertisers} (the festival promoters), \emph{end-users} (Alice, Bob, and Charlie), and possibly \emph{publishers}, the same way well-known ad networks do in the context of Internet advertising. Note though that in IoT, several entities can play the role of ``publisher'', which is not limited to a single web resource provider but may be a composite entity with several IoT devices. As such, the automaker, as well as any other device embedded in the car or dynamically linked to it, may act as publisher. Provided that the IoT ad network can gather information from smart vehicles and passengers traveling around a specific geographic area, that information can be further matched against a set of candidate advertisements, which in turn are conveyed to the right target. {\color{black} Note that the triggering of ad requests is somewhat transparent to the end user, i.e., we do not conjure any explicit publisher-subscriber mechanism between end users and advertisers. On the other hand, users must have control over their data, which in turn may be used by the IoT ad network for targeting.}\\
\indent Figure~\ref{fig:usecase} depicts the scenario above, where Alice, Bob, and Charlie all receive their targeted advertising messages. The IoT ad network is responsible for choosing the most relevant advertisements and it delivers them through one or more IoT devices that are either embedded in the car (e.g., the head-up display and the GPS) or temporarily joined to the car (e.g., the passengers' smartphones).
\begin{figure}[!htb]
\centering{\includegraphics[width=0.5\textwidth]{IoTAdvertisingUseCase.png}}
\caption{Targeted ads triggered by the IoT environment (e.g., a smart car traveling close by a smart city) are delivered to end users on IoT devices via an intermediate IoT ad network.}
\label{fig:usecase}
\end{figure}
{\color{black}We claim that IoT represents a huge opportunity for marketers who may want to leverage the IoT ecosystem to increase their targeted audience. Indeed, although online advertising is already a multibillion-dollar market, we believe one of its limitations is that it is essentially based on the activities users perform on the web. Instead, IoT advertising will overcome this limitation by bringing advertisement messages to users interacting with the IoT environment (which is potentially much larger than the web).}
\section{How Internet Advertising Works Today}
\label{sec:onlineadvertising}
The general idea behind Internet advertising is to allow web content \emph{publishers} to monetize by reserving some predefined slots on their web pages to display ads.
On the other hand, \emph{advertisers} compete for taking those slots and are keen on paying publishers in exchange for that.
Actually, publishers often rely on third-party entities -- called \emph{ad networks} -- which free them from running their own ad servers; ad networks decide on behalf of publishers which ads should be placed in which slots, when, and to whom. Furthermore, advertisers partner with several ad networks to optimize their return on investment for their ad campaigns.
Finally, ad networks charge advertisers for serving their ads according to a specific \emph{ad pricing model}, e.g., \emph{cost per mille impressions} (CPM) or \emph{cost per click} (CPC), and share a fraction of this revenue with the publishers where those ads are impressed \cite{ADVSurvey}.\\
\indent At the heart of online advertising, there is a real-time auction process. This runs within an \emph{ad exchange} to populate an ad slot with an \emph{ad creative}\footnote{\small{An \emph{ad creative} is the actual advertisement message (e.g., text and image) impressed on the slot.}}.
For each ad request, there are multiple competing advertisers bidding for that ad slot.
Before any ad is served, publishers and advertisers outline a number of ad-serving requirements, such as budget, when the ad should be displayed, and targeting information.
{\color{black}In particular, \emph{targeted advertising} makes it possible to deliver sponsored contents that are more likely tailored to each user's \emph{profile}, which is either explicitly collected (e.g., through the set of user queries submitted to the search engine in the case of sponsored search) or implicitly derived (e.g., from the user's browsing history in the case of native advertising)~\cite{yan2009www,li2012dss}.}
The auction process uses all those requirements to match up each ad request with the ``best'' ad creative so as to maximize profit for the publisher.\\
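To make the matching step concrete, the following is a minimal, hypothetical sketch of such an auction in Python. The bid fields, the targeting rule, and the second-price pricing rule are illustrative assumptions for the description above, not an actual ad-exchange API.

```python
# Hypothetical sketch of the real-time auction described above: pick the
# highest eligible bid and charge a second price. All field names and
# rules are illustrative assumptions, not an actual ad-exchange API.

def run_auction(ad_request, bids):
    """Return the winning creative and its price, or None if no bid
    matches the request's targeting and price-floor constraints."""
    eligible = [b for b in bids
                if b["target_location"] == ad_request["location"]
                and b["max_cpm"] >= ad_request["floor_cpm"]]
    if not eligible:
        return None
    eligible.sort(key=lambda b: b["max_cpm"], reverse=True)
    winner = eligible[0]
    # Second-price rule: the winner pays the runner-up's bid,
    # or the floor when it is the only eligible bidder.
    price = eligible[1]["max_cpm"] if len(eligible) > 1 else ad_request["floor_cpm"]
    return {"creative": winner["creative"], "price": price}

request = {"location": "downtown", "floor_cpm": 1.0}
bids = [
    {"creative": "festival-banner", "max_cpm": 4.0, "target_location": "downtown"},
    {"creative": "mall-coupon",     "max_cpm": 2.5, "target_location": "downtown"},
    {"creative": "ski-resort",      "max_cpm": 9.0, "target_location": "mountains"},
]
print(run_auction(request, bids))  # → {'creative': 'festival-banner', 'price': 2.5}
```

The ``ski-resort'' bid is excluded by targeting even though it is the highest, illustrating how targeting constraints interact with revenue maximization.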
\indent Figure~\ref{fig:onlineadvertising} shows the high-level architecture of current online advertising systems. Although the actual architecture can be more complex than the figure, the main entities involved are:
the \emph{user} who typically sits behind a web browser or a mobile app; the \emph{publisher} (i.e., a service provider) who exposes some ``service'' to the user (e.g., a web content provider like {\sf \small{cnn.com}} or a web search engine like {\sf \small {Google}} or {\sf \small {Yahoo}}); the \emph{advertiser} who wants to promote its products and possibly attract new customers by leveraging the user base of the publisher; the \emph{ad network} that participates in the \emph{ad exchange} and acts as intermediary between the publisher and the advertiser.
\begin{figure}[!htb]
\centering{\includegraphics[width=0.46\textwidth]{OnlineAdvertising.png}}
\caption{High-level architecture of traditional online advertising.}
\label{fig:onlineadvertising}
\end{figure}
\indent The workflow is as follows:
\begin{itemize}
\item The user accesses a service exposed by the publisher, e.g., using {\tt HTTP GET} (1).
\item The publisher responds with the ``core'' content/service originally requested (2).
\item The publisher also asks its partner ad network to fetch ads that best match the user's profile, which are eventually shown to the user within the same content delivered before (3).
\item The ad network uses profile information during the real-time auction which takes place on the ad exchange to select advertisements that are expected to generate the highest revenue (4).
\item The ad network tells the publisher how to instruct the user to fetch the selected ad (5--6).
\item Finally, the user requests (7) and retrieves (8) the actual ad to be displayed.
\end{itemize}
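The eight-step flow above can be sketched as a toy Python model. All class names, method names, and the profile-to-ad mapping are illustrative assumptions, not part of any real ad-serving stack.

```python
# Toy model of the eight-step workflow above. All names are
# illustrative assumptions; real ad-serving stacks differ.

class AdNetwork:
    def __init__(self, inventory):
        self.inventory = inventory  # profile -> ad URL (auction simplified away)

    def select_ad(self, user_profile):
        # Steps 4-5: pick the ad expected to generate the highest revenue.
        return self.inventory.get(user_profile, "ads.example/default")

class Publisher:
    def __init__(self, ad_network):
        self.ad_network = ad_network

    def serve(self, user_profile):
        content = "core-page-content"                     # step 2
        ad_url = self.ad_network.select_ad(user_profile)  # steps 3-5
        return {"content": content, "ad_url": ad_url}     # step 6

class User:
    def __init__(self, profile):
        self.profile = profile

    def browse(self, publisher):
        page = publisher.serve(self.profile)  # step 1
        ad = f"fetched:{page['ad_url']}"      # steps 7-8
        return page["content"], ad

network = AdNetwork({"music-fan": "ads.example/festival"})
user = User("music-fan")
print(user.browse(Publisher(network)))
# → ('core-page-content', 'fetched:ads.example/festival')
```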
As the description above shows, there is a clear event which activates an ad request, i.e., the user accessing a resource exposed by a web publisher.
Conversely, in the IoT world that triggering event might be less explicit (i.e., the user \emph{interacting} with IoT devices). Nevertheless, in Section~\ref{sec:iotadvertising}, we discuss how the scheme described above can be adapted to the context of future IoT advertising.
\section{IoT Key Features}
\label{sec:iotoverview}
The IoT stack is normally described as a four-layer infrastructure. The first layer defines how smart objects (e.g., network-enabled devices, devices embedded with sensors) interact with the physical world.
The second layer is in charge of providing the necessary connectivity between devices and the Internet. Further, a third layer incorporates data aggregation and other preliminary data processing.
Finally, the fourth layer is in charge of feeding the control centers and providing IoT cloud-based services \cite{IoTArchitecture}. In general, IoT binds computing systems, devices, and users into a cooperative relationship across these layers.\\
\indent \textbf{Connectivity:} A crucial element in IoT is the high connectivity required among devices, servers, and/or service control centers.
Indeed, high-speed connectivity is necessary in order to cope with real-time applications and the level of cooperation expected from IoT devices. Currently, IoT connectivity is guaranteed by traditional network protocols and technologies like WiFi, Bluetooth Smart, and Device-to-Device (D2D) communications. IEEE and the IETF are designing new communications protocols specifically devised for IoT \cite{IOTSurvey}. These protocols (i.e., IEEE 802.15.4e, 6LoWPAN, LoRa) are intended to homogenize the low-energy IoT communication environment across the huge diversity of IoT devices.\\
\indent \textbf{Resource availability:} This defines the amount of computing resources available to implement IoT services. In general, IoT devices can be categorized into two groups: \textit{resource-rich}, with faster CPUs and higher memory availability, and \textit{resource-limited} devices, with limited memory and low-performance CPUs.
Note that the way IoT devices interact with users (e.g., display availability, user-input enabled devices, etc.) depends on the available resources~\cite{icc17}.\\
\indent \textbf{Power consumption:} The nature of IoT applications imposes several power constraints on the devices. In general, IoT devices are meant to be remotely monitored, autonomous, wearable, and/or with high mobility. These characteristics define the specific power restrictions for every application.
\\
\indent \textbf{Complexity and Scalability:} Today, IoT devices can be found in several user-oriented (e.g., smart home, wearables devices) and industrial (e.g., smart grid, healthcare IoT) applications.
The different IoT architectures need to be scalable to handle the constant flow of new devices and the always-increasing set of new services and applications.
\section{A Vision for an IoT Advertising Landscape}
\label{sec:iotadvertising}
The ultimate aim of IoT is to provide new applications and services by taking advantage of the IoT features discussed above.
Unlike the simplistic approach of combining traditional legacy sensors with decision entities, the high connectivity and intelligence present in IoT, along with the possibility of continuous scalability, allow building a wide pool of applications based on users' generated IoT data. Among those, expanding the traditional Internet advertising marketplace is one of the most promising.\\
\indent To enable the IoT advertisement vision, we introduce our model of an IoT advertising architecture (Figure \ref{fig:iotadvmodel}).
Although this is clearly inspired by the Internet advertising architecture (Figure~\ref{fig:onlineadvertising}),
IoT advertising has its own peculiarities, and therefore, deserves a dedicated infrastructure to be successful.\\
\indent Our IoT advertising model consists of three layers, each one composed of several entities:
the bottom layer (\emph{IoT Physical Layer}) contains physical IoT devices; the middle layer (\emph{IoT Advertising Middleware}) coincides with the \emph{IoT Advertising Coordinator}, which allows physical IoT devices to interface with the upper layer (\emph{IoT Advertising Ecosystem}), and in particular with the \emph{IoT Publisher}.\\
\begin{figure}[!htb]
\centering{\includegraphics[width=0.48\textwidth]{IoTAdvertisingLandscape}}
\caption{The proposed IoT advertising model consists of three layers: \emph{IoT Physical Layer}, \emph{IoT Advertising Middleware}, and \emph{IoT Advertising Ecosystem}.}
\label{fig:iotadvmodel}
\end{figure}
\indent In the remainder of this section, we discuss the role and characteristics of each entity separately.
\subsection{IoT Advertiser}
This represents an entity which would like to take advantage of IoT to advertise its own products/services, such as the music festival promoters in the use case discussed above. It is expected to interact with other actors of the advertising ecosystem in the same way web advertisers do on traditional Internet advertising. Due to the high diversity of devices involved, the IoT advertiser needs to conceive and design its campaigns for heterogeneous targets, which call for newer ad formats that are not necessarily visual (e.g., acoustic messages), as opposed to traditional banners displayed on web browsers or mobile apps.
Moreover, targeting criteria may go beyond just user's demographics and/or geolocation; in fact, the contextual environment will play a crucial role in the ad matching phase.
\subsection{IoT Ad Network and IoT Ad Exchange}
The IoT ad network, in combination with the IoT ad exchange, will be responsible for matching the most profitable ads with target IoT publishers on behalf of both the publisher and the advertiser. This can be achieved in the same way as traditional ad networks interact with ad exchanges for Internet advertising, i.e., through real-time auctions. However, unlike Internet advertising, where those auctions are triggered by the user requesting a resource from a web publisher, in IoT such events can be extremely blurry, as the user keeps constantly interacting with her surrounding IoT environment. That means IoT ad networks and ad exchanges may need to operate at an even larger scale and higher rate.
\subsection{IoT Publisher}
The role of IoT publisher is not limited to a web resource provider anymore. An IoT publisher can rather be thought of as an ensemble of IoT devices, which collectively cooperate to implement and expose to the user multiple functionalities, as well as to deliver advertisements. For instance, the smart vehicle introduced in our use case is a possible example of an IoT publisher. The smart vehicle is indeed composed of several embedded IoT \emph{atomic} devices (e.g., the GPS, the tire controller, the sound system), each one implementing its own communication standard and exposing a specific functionality through its own user interface. In addition, many other IoT devices can dynamically and temporarily join the smart vehicle (e.g., the smartphones of car passengers).
\subsection{IoT Advertising Coordinator}
The role of IoT advertising coordinator is twofold: On the one hand, it allows bottom-layer IoT devices to expose themselves as a single IoT publisher entity to the upper-layer advertising ecosystem. On the other hand, it is responsible for dispatching and delivering advertisements coming from advertising ecosystem down to physical IoT devices, and in turn, to the end user. To achieve both those capabilities, the IoT advertising coordinator makes use of several sub-components. Among those, we focus on three of them: \emph{(i)} \emph{IoT Aggregator}, \emph{(ii)} \emph{IoT Profiler}, and \emph{(iii)} \emph{IoT Ad Dispatcher}.
Those are responsible for:
\begin{itemize}
\item Unifying different communication standards utilized in a vast variety of IoT devices, so they can all respond to the specific advertisement needs.
\item Providing a cross-platform that will translate IoT-customer interaction into usable data for real-time effective advertisement (i.e., collecting meaningful metadata or ``\emph{profiles}'', which can, in turn, be exploited during ad matching at the layer above).
\item Managing the actual delivery of advertisements to the target IoT device, and therefore to the user, according to specific supported ad formats.
\end{itemize}
\indent More specifically, the ability of the IoT advertising coordinator to take advantage of IoT devices and user identification via digital fingerprinting will open the door to new advertising strategies. These might consider the following key aspects:
\begin{itemize}
\item \textit{User profile}: IoT advertising will vary based on the actual recipient (age range, gender, known user behavior) so we can have ads anticipating the user's needs not based on what he/she browses, but based on what he/she is and what he/she does.
\item \textit{Context awareness}: IoT advertising will adapt to new contexts, that is, the advertisement strategy will also focus on the location, {\color{black}time}, and the type of activity the user is performing (e.g., a regular traveler can receive ads based on the most visited restaurants and hotels {\color{black}during lunchtime}).
\item \textit{Services/Features}: IoT advertising ecosystem can make use of an unlimited number of features to know more about the user (e.g., most visited locations, driving mode, behavioral characteristics). That will translate into a new set of services from the IoT advertising landscape (e.g., announcing upcoming events with better price deals, lower car insurance due to the driver record directly derived from the smart car, etc.).
\item \textit{Security/Privacy}: User security and privacy protection will impact the new IoT advertising model in two different ways. First, the coordinator needs to be transparent to the implementation of traditional (or any new) IoT security mechanisms. Second, these security mechanisms will inevitably limit the amount and type of data that can be extracted from IoT devices and will reduce the quality of the user's digital fingerprint.
\item \textit{Device capabilities}: The coordinator may have to deal with devices supporting a broader spectrum of advertising formats by themselves (e.g., smartwatches have full display capabilities and adequate computing resources).
Conversely, other devices would either accept only custom, resource-friendly ad formats (e.g., acoustic messages sent to smart speakers) or rely on other devices with more capabilities (e.g., the smart lighting system may use the client application running on the smartphone to interact with the user). In this regard, the ad dispatcher will have a crucial role in deciding what specific types of ads to generate/integrate from/to the different devices and how those ads can be delivered to the user.
\end{itemize}
\indent Furthermore, the sensors present in smart devices and interacting with users will play a major role in profiling what the user does (e.g., presence sensor can report when the user leaves the house) and the specific context of such activities (e.g., Saturday night). These constitute key elements for more effective advertising (e.g., restaurants and nightclubs). Eventually, to be fully effective in a fast-changing and power-constrained IoT world, the amount of data required to characterize users needs to be minimized while coping with the demand imposed by the proposed IoT advertising model.
In this context, the IoT advertising coordinator will ``translate'' data flow from/to IoT devices into a common language and, more importantly, it will adapt IoT requirements to the well-known Internet advertising model to enable the new IoT advertising ecosystem. Finally, timing and geographical distribution of sensors will influence the effectiveness of the IoT Advertising Coordinator by (1) effectively using user location and IoT device availability to deliver the most appropriate ad (e.g., take advantage of the presence of electronic road signals to show ads to drivers) and (2) timely delivering the right ads (e.g., nearby preferred restaurants at lunchtime).
\section{Challenges of IoT Advertising}
\label{sec:challenges}
In this section, we analyze the possible key challenges of IoT advertising.
\subsection{Architectural Challenges}
From the IoT advertising perspective, the current IoT architecture (see Section~\ref{sec:iotoverview}) has several challenges that need to be addressed. IoT device heterogeneity will add an extra burden to the IoT advertising coordinator.
The coordinator would need to deal with different memory, CPU, energy, and sensor availability and capabilities, so the right advertising strategy is chosen for every device and user while keeping the required efficiency and reliability of services. Moreover, IoT can be configured in several different network topologies, which require the use of different network metrics to characterize the IoT traffic and to successfully identify devices and users.
\subsection{Ad Content Delivery Challenges}
Content delivery in IoT advertising involves three different scopes: user profile, user location-activity, and device capabilities. Content delivery challenges will defy the capacity of the IoT devices to cope with the requirements of the proposed IoT advertising scheme in two main aspects:
\begin{enumerate}
\item \textit{Quality and quantity of available user data:} Different levels of data obtained from the user will create user-based digital signatures (i.e., user profile) with different quality levels. Also, different permission policies can negatively impact the quality of users' activity/location tracking processes.
\item \textit{Device capabilities:} In cases where IoT device cooperation is not possible, the delivery of the advertisement content to the user will be exclusively defined by the device capacity. For instance, the amount of advertisement content that the user can get from devices with visual capabilities is expected to be higher.
\end{enumerate}
\subsection{Security and Privacy Challenges}
Integrating IoT into the traditional advertising model poses security challenges for customers, advertisers, and publishers.
Some of the security challenges that need to be overcome are the following:
\begin{itemize}
\item Due to the high diversity of devices and communication protocols in IoT, there exists a perpetual need for monitoring and detecting new vulnerabilities and attacks in a constantly changing environment.
\item Sensitive user data needs to be protected not only from outsiders, but also from malicious corporations that can misuse it.
\item Users are not always aware of security risks and a lot of effort needs to be done on the educational side.
\item Current and new communication protocols incorporate state-of-the-art protection mechanisms, but, in most cases, security is optional and these protocols are insecure in default mode.
\item The high level of interconnection in the IoT creates more opportunities for malware and worms to spread over the network.
\item Advertisements should not become intrusive for user privacy nor disrupt the user experience of the surrounding IoT environment.
\end{itemize}
Traditionally, Internet advertising has compromised user privacy by tracking people's browsing habits.
IoT advertising would go further by tracking user behavior based on day-to-day activities. Here, \emph{dataveillance} becomes more valuable considering that IoT user data is much more diverse if compared with regular web browsing data.
\subsection{Fragmentation of IoT}
Currently, there is not a single inter-operable framework that integrates all IoT devices and services. In fact, despite the efforts to design dedicated protocols for IoT \cite{IOTSurvey}, the current IoT ecosystem offers several options for developers to write smart apps using a variety of different programming architectures (e.g., SmartThings, OpenHAB, and Apple HomeKit). Also, multiple combinations of standards and protocols are possible (e.g., Communications: IPv4/IPv6, RPL, 6LoWPAN; Data: MQTT, CoAP, AMQP, WebSocket; Device Management: TR-069, OMA-DM; Transport: WiFi, Bluetooth, LPWAN, NFC; Device Discovery: Physical Web, mDNS, DNS-SD; Device Identification: EPC, uCode, URIs). The proposed IoT advertisement middleware should be able to adapt and convert the current fragmentation of the IoT world into a common language to enable IoT advertising.
\subsection{IoT Data Flow}
Data flow in IoT highly depends on the programming architecture. There are few cases where smart apps run on specific IoT devices or hubs; however, most of the IoT apps are cloud-based \cite{smartthingspaper}. Smart apps obtain information from the smart devices (sensors) and send data to the cloud to execute the app logic. External web programming tools like IFTTT and Node-RED can also be integrated into the IoT architecture to connect, control, and request information from different devices. The integration of these third-party applications can also represent a challenge to the proposed IoT advertising model. On the other hand, such integration would simplify the overhead caused by the current IoT fragmentation.
\section{Conclusions}
\label{sec:conclusions}
The Internet advertising market is worth hundreds of billions of dollars and is one of the fastest growing online businesses. Nevertheless, it is still restricted to web browser-based and, more recently, mobile in-app contexts.\\
\indent The Internet of Things (IoT) will open up a novel, large-scale, pervasive digital advertising landscape; in other words, a new IoT advertising marketplace that takes advantage of a huge collection of smart devices, such as wearables, home appliances, vehicles, and many other connected digital instruments, which end users constantly interact with in their daily lives.\\
\indent In this paper, we introduce the architecture of an IoT advertising platform and its enabling components. We also discuss possible key challenges to implement such a platform with a special focus on issues related to advertisement delivery, security, and privacy of the user.\\
\indent To the best of our knowledge, this is the first work defining IoT advertising and discussing possible enabling solutions for it. We expect our work will impact both upcoming research on this topic and the development of new products at scale in the industry.
\section*{Acknowledgment}
This work is partially supported by the US National Science Foundation (Awards: NSF-CAREER-CNS-1453647, 1663051). Mauro Conti is supported by a Marie Curie Fellowship funded by the European Commission (agreement PCIG11-GA-2012-321980). This work is also
partially supported by the EU TagItSmart! Project (agreement
H2020-ICT30-2015-688061), the EU-India REACH Project (agreement
ICI+/2014/342-896), by the project CNR-MOST/Taiwan 2016-17 ``Verifiable Data Structure Streaming'', the grant n. 2017-166478 (3696) from Cisco University Research Program Fund and Silicon Valley Community Foundation, and by the grant ``Scalable IoT Management and Key security aspects in 5G systems'' from Intel. The views in this document are those of the authors, not of the funding agencies.
\section*{Additional Note}
\noindent Authors are listed in alphabetical order and each one of them equally contributed to this work.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\section{Introduction -- Crucial Distinctions Among LBVs}
Luminous Blue Variables (LBVs) or S Doradus variables have attracted attention in recent
years for two mutually independent reasons. Normal hot-star winds cannot
account for sufficient mass loss \citep{Fullerton06}, thus another form of
mass loss, perhaps in LBV events,
is required for them to become WR stars, see also Humphreys \& Davidson (1994). Second, the non-terminal supernova
impostors
resemble extreme LBV outbursts \citep{VanDyk}, and certain types of supernovae were observed to have experienced prior high-mass-loss events that have been likened to LBVs.
Unfortunately, serious confusion has arisen because disparate
objects are often mixed together and collectively called ``LBVs.''
In a recent paper, \citet{Smith15} argue
that LBVs are not generically associated with young massive stars.
They suggest that LBVs gained mass from more massive companions in interacting binary systems and subsequently moved away from their birth sites
when the companions became supernovae.
This would contradict traditional
views in which evolved
massive hot stars experience periods of enhanced mass loss,
in transition to a WR star or a supernova.
Their argument is
based on their failure to find LBVs closely associated with young O-type stars
in the Milky Way and the Magellanic Clouds.
As we explain in Section 4.2 however, Smith and Tombleson's
statistical sample is a mixed population with at least three physically distinct
types of objects. When they are separated, the statistics agree
with standard expectations.
We mention this example because it illustrates
the need to distinguish between giant eruptions, classical LBVs,
less-luminous LBVs, LBV candidates, and others such as
B[e] stars that occupy the same part of the HR diagram.
In this paper we explore those distinctions.
Since there are numerous misunderstandings in this subject, a careful
summary of the background is useful.
LBV/S Dor variables are {\it evolved massive stars, close to the
Eddington limit, with a distinctive
spectroscopic and photometric variability.}
There are two classes with different initial masses and evolutionary histories as
we explain later.
In its quiescent or normal state, an LBV spectrum resembles
a B-type supergiant or Of-type/WN star. During the ``eruption'' or maximum
visible light stage, increased mass loss causes the wind to become optically thick,
sometimes called a pseudo-photosphere, at $T \sim$ 7000-9000 K with an absorption line spectrum resembling an
F-type supergiant. Since this alters the bolometric correction,
the visual brightness increases by 1--2 magnitudes while the total luminosity
remains approximately constant (Wolf 1989, Humphreys \& Davidson 1994 and numerous
early references therein) or may decrease \citep{Groh}. Such an event can
last for several years or even decades.
There are two recognized classes of LBV/S Dor variables based on
their position on the HR Diagram \citep{HD94}. The {\it classical LBVs,\/} with bolometric
magnitudes between $-9.7$ and $-11.5$ (log L/L$_{\odot}$ $\gtrsim$ 5.8)
have clearly evolved from very
massive stars with $M_\mathrm{ZAMS} \; \gtrsim \; 50 \; M_\odot$.
Their high mass loss prevents them from becoming red supergiants
\citep{HD79}.
The {\it less luminous} LBVs with M$_{Bol} \simeq -8$ to $-9.5$ mag
had initial masses in the range $\sim$ 25 to 40 M$_{\odot}$ or so,
and can become red supergiants.
Some factor must distinguish LBVs from the
far more numerous ordinary stars with similar $T_\mathrm{eff}$ and $L$,
and $L/M$ is the most evident parameter. LBVs have larger $L/M$ ratios
than other stars in the same part of the HR Diagram. Their Eddington
factors $\Gamma \, = \, L/L_\mathrm{Edd}$ are around
0.5 or possibly higher.\footnote{
For example, P Cyg and AG Car have $\Gamma \approx 0.5$ \citep{Vink02,Vink2012}. }
This is not surprising for the very massive classical LBVs. But how
can the less luminous LBVs have such large $L/M$ ratios?
The simplest explanation is that they have passed through a red supergiant
stage and moved back to the left in the HR diagram \citep{HD94}.
\citet{MM} showed that stars in the 22 -- 45 M$_{\odot}$ initial mass range,
evolving back to warmer temperatures, will pass through the LBV stage. Recent
evolutionary tracks, with mass loss and rotation \citep{Ekstrom}, show that these stars
will have shed about half of their initial mass. Having lost
much of their mass, they are now
fairly close to $(L/M)_\mathrm{Edd}$. Although empirical mass estimates are uncertain, the low luminosity LBVs are close to their Eddington limit with $\Gamma \, \approx 0.6$ \citep{Vink2012}.
Hence {\it the evolutionary state of
less-luminous LBVs is fundamentally different from the classical LBVs.\/}
To illustrate this, in Figure 1 we show the evolution of $L/M$ for two different masses
representing the two LBV/S Dor classes.
It shows a simplified Eddington factor,
${\Gamma}_\mathrm{s} = (L/L_\odot)/(43000 M/M_\odot)$,
as a function of time in four evolution models reported by
\citet{Ekstrom} for rotating and non-rotating
stars with $M_\mathrm{ZAMS} =$ 32 and 60 $M_\odot$. The 32 and
60 $M_\odot$ models have $\log L \sim$ 5.6 and 6.0, respectively,
when they are in the LBV instability strip (see Figs.\ 2 and 3).
Three critical evolution points are marked in the figure:
(1) The end of central H burning, (2) the farthest major excursion
to the red side of the HR Diagram, and (3) the end of central He
burning. For the 60 $M_\odot$ star,
${\Gamma}_\mathrm{s} \gtrsim 0.5$ as it leaves the main sequence,
but the 32 $M_\odot$ star must pass through yellow and/or red
supergiant stages before its ${\Gamma}_\mathrm{s}$ reaches 0.5.
Of course, these examples depend on the input parameters and
assumptions of the models. Other recent evolution models with mass loss
and rotation generically agree on the main point stated here, see for example,
\citet{Chen}.
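As an illustrative numerical check of the simplified Eddington factor defined above, the following short Python sketch evaluates ${\Gamma}_\mathrm{s}$ for two assumed $(L, M)$ pairs. The adopted masses (60 and 16 $M_\odot$) are rough illustrative values, the latter standing in for a 32 $M_\odot$ star that has shed about half its initial mass; they are assumptions for this sketch, not values taken from the evolution models.

```python
# Simplified Eddington factor from the text:
#   Gamma_s = (L/Lsun) / (43000 * M/Msun).
# The (L, M) pairs below are illustrative assumptions: log L ~ 6.0 for a
# classical LBV of ~60 Msun, and log L ~ 5.6 for a ~32 Msun star assumed
# to have shed roughly half its mass (M ~ 16 Msun) in a red-supergiant phase.

def gamma_s(L_over_Lsun, M_over_Msun):
    return L_over_Lsun / (43000.0 * M_over_Msun)

classical = gamma_s(10**6.0, 60.0)  # near the main sequence
post_rsg = gamma_s(10**5.6, 16.0)   # after heavy mass loss

print(f"classical LBV: Gamma_s = {classical:.2f}")  # prints 0.39
print(f"post-RSG LBV:  Gamma_s = {post_rsg:.2f}")   # prints 0.58
```

With these assumed masses, the post-red-supergiant star reaches a larger ${\Gamma}_\mathrm{s}$ than the more luminous classical LBV, illustrating why heavy prior mass loss is needed for the less luminous class to approach the Eddington limit.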
Therefore we should expect that only the classical LBVs should be associated
with young objects such as O-type stars. The less luminous LBVs, more than 5 million years old, will have moved 50--100 pc and are also old enough for their neighbors to have evolved out of the O-star class. Among other results in this
paper, we find that the data agree with this expectation.
\begin{figure}[!h]
\figurenum{1}
\epsscale{0.5}
\plotone{Figure1.eps}
\caption{The Eddington parameter $\Gamma_\mathrm{s}$ (see text) as a function of
time for two initial masses, 32 and 60 $M_\odot$, with (r) and without (nr) rotation. The initial rotation velocities in the models, respectively, are 306 and 346 km s$^{-1}$. Marked evolution points are (1) end of central H-burning, (2) coolest value of $T_{eff}$, and (3) end of central He-burning.
Blue curves represent stages where $T_{eff} > 10000$ K,
red indicates $T_{eff} < 4500$ K, and intermediate-temperature
stages are shown in green. The dashed segments indicate yellow and red
supergiant stages where low ionization and other factors make ${\Gamma}_s$
ineffective. }
\end{figure}
One of the distinguishing characteristics of LBV/S Dor variability is that
during quiescence
or minimum light, the stars lie on the S Dor instability strip first introduced by
\citet{Wolf89} and illustrated here in Figures 2 and 3.
The more luminous, classical LBVs
above the upper luminosity boundary have not been red supergiants,
while those below are post-red supergiant candidates. Thus LBVs with
very different initial masses and different evolutionary histories occupy the
same locus in the HR Diagram.
No single cause is generally
accepted as the origin for the LBV/S Dor enhanced mass loss/optically
thick wind events. Most proposed explanations invoke the star's
proximity to its Eddington limit due to previous high mass loss. Proposed
models include an opacity-modified Eddington limit,
subphotospheric gravity-mode instabilities, super-Eddington winds, and
envelope inflation close to the Eddington limit (see \S {5} in Humphreys \& Davidson 1994, Glatzel 2005, Owocki \& Shaviv 2012, and Vink 2012).
In a {\it giant eruption}, represented by objects like $\eta$ Car
in the 1840's and P Cygni in the 1600's, the star greatly increases its
total luminosity with an increase in its visual magnitude typically by
three magnitudes or more. Giant eruptions should not be confused with
the normal LBV/S Dor variability described above.
The energetics of the outburst
and what we observe are definitely different. Unfortunately, many authors
do not make the distinction, especially with respect to the SN impostors and
the progenitors of Type IIn supernovae.
Few confirmed LBVs are known in our galaxy due to their rarity,
uncertainties in distance, and the infrequency of the LBV ``eruption''.
There are only six confirmed LBVs in the LMC and one in the SMC. Thus, even
the Magellanic Clouds do not provide a large enough sample to confidently
determine
their relative numbers and group properties relative to other massive star
populations. In this paper we examine the spatial distribution of
the LBVs and candidate LBVs in M31 and M33 based on our
recent discussion of the luminous stars in those galaxies which
included the discovery of a new LBV in M31 \citep{RMH13,RMH14,RMH15}.
In the next section we show that the majority of these LBVs are found in
or near associations of young stars, and although spectra are not available
for nearby neighbors, their magnitudes and colors support
the classification of most of them as hot supergiants.
It must be emphasized that LBVs are {\it evolved} massive stars. We should not necessarily expect
to find them closely associated with young O stars. This is especially true
for the less luminous LBVs. The appropriate
comparison population should be those supergiants found near
the quiescent temperature and luminosity of the LBVs on the HR Diagram;
that is, the evolved massive stars presumably of similar initial mass.
In the Milky Way (Section 3), it is equally important that they be
at the same distance.
The Smith and Tombleson Magellanic Cloud sample was a mixed population; they combined the classical LBVs, with log L/L$_{\odot}$ $\gtrsim$ 5.8, and
the less luminous LBVs. In Section 4 we reassess their analysis
separating the LBVs into the two groups based on their positions
on the HR Diagram. Their spatial distribution and their kinematics lead
to significantly different conclusions
which are summarized in the last section.
\section{The LBVs in M31 and M33}
The LBVs in M31 and M33 were
originally known as the Hubble-Sandage variables. In their paper on the
``brightest variables'', \citet{HS} identified one star in M31 (V19 = AF And)
and four in M33 (Vars. A, B, C, and 2) based on their long term light
curves from the $\approx$ 1920s to 1950\footnote{Var A is now considered a
post-red supergiant, a warm hypergiant \citep{RMH87,RMH06,RMH13}.}.
There are now five confirmed LBVs in M31: AF And \citep{HS,RMH75},
AE And \citep{RMH75},
Var 15 \citep{Hubble,RMH78}, Var A-1 \citep{Ros,RMH78}, and the newly
recognized J004526.62+415006.3 \citep{Shol,RMH15}. The four in M33 are
Variables B, C, and 2 \citep{HS,RMH75} and Var 83 \citep{RMH78}.
These stars plus a few candidate LBVs are listed in Table 1 with their
luminosities and temperature estimates for their quiescent state with the
corresponding references. LBVs have
one important advantage over other supergiants. During the LBV
enhanced mass loss state, with the cool, dense wind, the bolometric
correction is near zero, allowing us to determine the star's bolometric luminosity
once corrected for interstellar extinction. This is the case for Var B and
Var C, and for the new LBV, J004526.62+415006.3. Other
luminosities and temperatures in Table 1 are primarily from \citet{Szeif} or \cite{RMH14}.
The adopted corrections for interstellar extinction are from \cite{RMH14}.
The LBVs and candidates are shown on an HR Diagram in Figure 2.
\input{Table1.tex}
\begin{figure}[!h]
\figurenum{2}
\epsscale{0.7}
\plotone{Figure2.eps}
\caption{A schematic HR Diagram for the LBVs and candidate LBVs in M31 and M33
discussed in the text. The apparent transits in the HR Diagram during the LBV
maximum light or cool, dense wind stage are shown as straight blue lines. The
LBV/S Dor instability strip is outlined in pink and the empirical upper luminosity
boundary is shown in orange.}
\end{figure}
Most of the known and candidate LBVs in these galaxies are in known stellar
associations mapped previously by \citet{Hodge} for M31 and by \citet{HS80} in M33 and a few are also in or near H II regions. The association designation (A) and number from these references are given in the comments column in Table 1. To compare their space distribution with
the luminous star populations in these galaxies, we show images in the Appendix of their environments made from the Local Group Galaxy Survey \citep{Massey06}. The LBVs and representative nearby stars listed in Table 2 are identified in each image. Although spectra
are not available for these neighboring stars, their magnitudes and colors included
in Table 2 indicate that most are hot stars or other supergiants. We use the
Q--method {\citep{Hiltner,Johnson}} to estimate the intrinsic B-V colors, color excess, and visual extinction (A$_{v}$) with $R = 3.2$ for the candidate OB-type stars in Table 2. Their absolute
visual luminosities, M$_{v}$ are determined using distance moduli of 24.4 mag and 24.5 mag for M31 and M33, respectively \citep{M31Ceph,M33Ceph}.
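The dereddening steps described above can be sketched as follows. The linear relation between the reddening-free index $Q$ and the intrinsic color is a common approximation for O and early B stars and is an assumption here, not necessarily the exact calibration used for Table 2; the input magnitudes are likewise hypothetical.

```python
# Sketch of the Q-method dereddening applied to the OB-type neighbors in Table 2.
# Q = (U-B) - 0.72*(B-V) is reddening-free for the standard extinction law;
# (B-V)_0 = 0.332*Q is a common approximation for OB stars (an assumption).

def q_method_abs_mag(V, B_V, U_B, R=3.2, dist_mod=24.4):
    Q = U_B - 0.72 * B_V              # reddening-free index
    BV0 = 0.332 * Q                   # intrinsic color (approximation)
    EBV = B_V - BV0                   # color excess E(B-V)
    A_V = R * EBV                     # visual extinction, R = 3.2 as in the text
    M_v = V - A_V - dist_mod          # absolute visual magnitude
    return Q, EBV, A_V, M_v

# Hypothetical late O-type supergiant in M31 (distance modulus 24.4 mag):
Q, EBV, A_V, M_v = q_method_abs_mag(V=18.0, B_V=0.10, U_B=-0.80)
print(round(EBV, 2), round(A_V, 2), round(M_v, 2))
```

The M33 stars would use a distance modulus of 24.5 mag instead.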
\subsection{Comments on individual stars}
{\it AE And} had an LBV eruption that lasted for twenty
years when it was the visually brightest star in M31 \citep{Luyten}. Recent spectra show
significant variability in the strengths of the absorption and emission lines
indicating an unstable wind. It is in the outer parts of M31 approximately 10{\arcsec} north of A170. Although there
is no associated H~II region, an arc of emission nebulosity passes through the star (Figure A1).
Nearby stars include three hot stars and one possible red supergiant. AE And's luminosity
places it just at the upper luminosity boundary, so it could be a post-red supergiant
(RSG).
{\it AF And} is in an association of young stars with a neighboring
H II region (Figure A1), although this stellar grouping was not included in the
Hodge catalog.
{\it Var A-1} is in A42 with an associated H II region (Figure A2). The closest star is a luminous hot star, possibly a late O star, based on its colors.
{\it Var 15} is a ``less luminous'' LBV and therefore less massive than the other LBVs in M31. It is in A38 and just to the east of a major dust lane (Figure A2). The closest star is red and may be a red supergiant. V15's luminosity and temperature in Table 1 are estimated
from its current SED \citep{RMH14}. From 1992 to 2001 it got both fainter, by about one magnitude in V, and bluer (Fig.\ 9 of \citealt{RMH14}). The color change suggests that it has gotten hotter.
The new LBV, {\it J004526.62 +415006.3 (M31-004526.62)}, is in A45 with an associated H II region (Figure A4). The three closest stars are all hot stars, probable late O-type supergiants based on their colors and luminosities.
{\it J004051.59 +403303.0} is a candidate LBV \citep{Massey07,Shol}. It is in the large association A82
in the outer SE spiral arm (Figure A3). Three of the nearest stars are luminous hot stars.
Our spectrum from 2013 \citep{GH16} shows prominent asymmetric Balmer emission lines with
P Cyg profiles and Fe II emission lines with strong P Cyg absorption features.
The strengths of the Ca II H and K absorption lines and the Mg II 4481 and He I 4471 lines suggest an early A-type supergiant. Although, it has been called an LBV or LBV candidate it could also be a warm hypergiant \citep{RMH13,GH16}.
{\it J004425.18 +413452.2} is a potentially interesting star. It shows spectral
variability reminiscent of LBVs, but its luminosity is well below the S Dor instability
strip \citep{RMH14}. It is in a spiral arm between the associations A4 and A9 (Figure A3). The nearest stars
all have intermediate colors typical of yellow supergiants consistent with
this star's lower mass and probable post-RSG status.
{\it Var B} in M33 has been observed in a recent eruption \citep{Szeif,Massey96}, and consequently has a well-determined luminosity. It is in A142 near the center of M33 (Figure A5). One of the closest objects, marked {\it d},
may be a compact H II region based on its point-source appearance and very bright H$\alpha$ magnitude. Three additional nearby hot
stars have the colors of late O and early B-type supergiants.
{\it Var C} has been observed in several maximum light episodes \citep{Burg15} since its initial discovery.
It entered another maximum light phase in 2013 \citep{RMHVarC}. A period analysis of its light curve suggests possible semi-periodic behavior of 42.4 years \citep{Burg15}.
Nearby stars include three hot supergiants and a possible RSG (Figure A5).
{\it Var 83} is a luminous LBV in the prominent southern spiral arm in M33 situated
between two large associations, A101 and A103. It is surrounded by several prominent nebulous arcs, and three nearby neighbors have the colors of O stars and early B-type stars (Figure A6).
{\it Var 2} is near A100, 10$\arcsec$ above its northern boundary, and was classified Ofpe/WN9 by \citet{Neugent}. It is surrounded by nebulosity (Figure A6) and the brightest nearby star (a)
has the colors of an A-type supergiant while three nearby faint stars have the colors
of O-type stars and are likely main sequence stars based on their luminosities.
{\it M33-V532 = GR 290} also known as Romano's star, has been considered an LBV or
LBV candidate by several authors. Its spectrum exhibits variability
from WN8 at minimum to WN11 at visual maximum \citep{Shol2011,Pol2011} with a corresponding apparent temperature range from about 42000 K to 22000 K \citep{Shol2011}. But it is
not known to show the spectroscopic transition from a hot star
to the optically thick cool wind
at visual maximum that is characteristic of the LBV/S Dor phenomenon.
It may be in transition to the LBV stage, or it may
be in a post-LBV state \citep{Pol2011,RMH14}. It is included here as a candidate LBV.
It is in the outer parts of M33 about 30$\arcsec$ east of A89 (Figure A7). The nearest star (a) has the colors of an O-type main sequence star.
{\it UIT 008} is another candidate LBV \citep{RMH14}. It has the spectrum of an Of/WN star
and an unusually slow wind like the LBVs. Its high temperature and luminosity place it
near the top of the S Dor instability strip. It is located in A27 and its associated H II
region NGC 588 which contains several luminous O stars and WR stars.
{\it B526 = UIT 341 = M33C-7292} has been considered an LBV candidate \citep{Clark12}, but it
is actually two stars
separated by less than 1$\arcsec$. Its published spectra are all likely composite.
A recent long slit spectrum observed with the LBT/MODS \citep{RMH16}
clearly separates the two stars
and shows that the northeast component has an emission line spectrum often
associated with LBVs with
strong H emission with P Cygni profiles plus Fe II and [Fe II] emission. We therefore
consider this object's northeast component a candidate LBV (B526NE). It is in the large
A101 association with numerous nearby luminous stars (Figure A7). We classify its close
neighbor to the southwest as a B5 supergiant \citep{RMH16}.
\input{Table2.tex}
\clearpage
\subsection{Spatial Distribution and Kinematics}
As we have emphasized, the spatial distribution of the LBVs should
be compared with a stellar population of similar evolutionary state, with
comparable luminosities and initial masses based on their positions on the HR Diagram.
All of the above luminous LBVs and candidates with M$_{Bol}$ $\lesssim$ -9.8 mag are in associations of luminous stars, some with associated H II
regions and emission nebulosity. Their nearby stars in Table 2, for the most
part, have the colors of early-type stars. One might argue that by comparison
AE And looks relatively isolated. With its luminosity
range, it could be a post-RSG candidate and its association with less
luminous stars would be expected.
The WR variable M33-V532 is likewise in the outer parts of M33,
but near an association with several early type stars.
In the Smith and Tombleson model not only have the LBVs gained mass from a companion,
but they have been ``kicked'' or moved away from their place of origin after
the
supernova explosion of their more massive companion. In that case, we would expect some of the LBVs to have high velocities compared to their expected velocity
in the parent galaxy. They would be runaway stars.
Their radial velocities are in Table 1 together with the expected
velocity, in parentheses, at their positions in M31 and M33. The absorption lines in LBVs show
considerable variability due to their variable winds. We use the velocities measured from the centroid of [Fe II] and [N II] lines to estimate their systemic velocities. These lines are formed in the low density outer parts of the wind and are not affected
by P Cygni absorption. The expected velocities were calculated following the prescription and fit to the rotation curves from \citet{Massey09}, \citet{Drout09}, and \citet{Drout12}, for the yellow and red supergiants in M31 and M33.
Most of the LBVs have velocities within 40 km s$^{-1}$ of the expected
velocity based on the rotation curve. This is consistent with and
within the velocity range
used by \citet{Drout09,Drout12} to decide on membership of the yellow
supergiants in M31 and M33. The velocities for runaway stars depend on the orbital parameters. Based on some of the models, the velocities are expected to be 100 to 200 km s$^{-1}$ and higher \citep{Tauris}. Thus, none of the LBVs or candidates can be described as a runaway star.
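The membership criterion described above reduces to a simple velocity-difference cut. The sketch below uses hypothetical velocities, not values from Table 1.

```python
# Membership check used in the text: a star is consistent with the local
# rotation curve if |v_measured - v_expected| <= 40 km/s.
# The velocities below are hypothetical illustrations.

def is_member(v_measured, v_expected, tol=40.0):
    """True if the measured systemic velocity lies within tol of the
    rotation-curve prediction at the star's position."""
    return abs(v_measured - v_expected) <= tol

print(is_member(-150.0, -120.0))   # within 40 km/s of the expected velocity
print(is_member(-150.0, -250.0))   # 100 km/s offset: a runaway candidate
```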
\section{The Milky Way LBVs}
There are very few confirmed LBVs in the Milky Way due to the infrequency of
the LBV eruption and of course to the uncertainty in the distances. Smith and
Tombleson
list 10 LBVs and candidate LBVs and discuss some of the better studied examples.
A few deserve some comments and cautionary remarks. As has been
acknowledged for some time, the giant eruption $\eta$ Car is clearly
associated with a population of luminous massive stars including several
O3-type stars \citep{DH97}. At its presumed initial mass of 150 - 200 M$_{\odot}$, it is at
most about 2 -- 3 million years old and has not moved far from its place of
birth.
Smith and Tombleson emphasize that most of the other Galactic LBVs
are relatively isolated from O stars. But most are also much
less massive and will be older. For example, the well-studied,
classic stellar wind star, and survivor of a giant eruption, P Cygni is in the Cyg OB1
association. At its distance, the two nearest B-type supergiants of similar
temperature and luminosity are about 30 pc away. The same is true of the nearest O star.
At its position
on the HR Diagram, P Cyg had an initial mass of $\approx$ 50 - 60 M$_{\odot}$ and is past core H-burning with a current mass of 23 to 30 M$_{\odot}$ \citep{Lamers83,PP}. It is therefore at least
4 - 5 $\times$ 10$^{6}$ years old based on models by e.g., \citet{Ekstrom}, with and
without rotation. With a velocity dispersion of 10 km s$^{-1}$ for the extreme
Population I, P Cyg will have moved 35 - 44 pc or more.
So its spatial separation from similar stars is what would be expected.
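The drift estimate for P Cyg is simple kinematics: displacement $\approx$ (velocity dispersion) $\times$ (age). A minimal sketch with the same 10 km s$^{-1}$ dispersion gives displacements of the same order as the tens of parsecs quoted above; the exact figures depend on the adopted age and constants.

```python
# Drift distance for a star moving at the extreme Population I
# velocity dispersion for a given age.
KM_PER_PC = 3.086e13     # kilometers per parsec
SEC_PER_MYR = 3.156e13   # seconds per million years

def drift_pc(v_kms, age_myr):
    """Projected displacement in parsecs for speed v (km/s) over age (Myr)."""
    return v_kms * SEC_PER_MYR * age_myr / KM_PER_PC

# 10 km/s over 4 and 5 Myr, the age range quoted for P Cyg:
print(round(drift_pc(10.0, 4.0)), round(drift_pc(10.0, 5.0)))
```

The same function applied to the less luminous Galactic LBVs, at ages of roughly 6 Myr or more, gives the 30--100 pc displacements mentioned below.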
AG Car is a well-studied classical LBV. At its high luminosity and presumed
initial mass of $\approx$ 80 M$_{\odot}$, one would expect to find an associated population
of massive stars. Smith and Tombleson argue that there are no nearby O stars.
However, at its large kinematic distance of $\approx$ 6 kpc \citep{RMH89,Groh}, AG Car
must be compared with the stellar population at the same distance.
Since our line of sight to AG Car, at {\it l} = 289\arcdeg, is directly down
the Carina spiral arm, the cataloged O stars are most likely in its foreground.
HR Car is in a similar situation at $\sim$ 5 kpc in the Carina arm \citep{vanG91}.
HR Car, HD 160529, and HD 168607 are confirmed LBVs, but belong to the less
luminous, less massive group of LBVs with ages of roughly 6 million years or more;
enough time for them to move 30 to 100 pc away from their original
locations even at low random speeds.
We would not expect them to be closely associated with O stars.
In all cases these stars need to be compared with stars at the same distance and
in the same part of the HR diagram.
That is of course the advantage of studying these stars in nearby galaxies
and especially in the Magellanic Clouds.
\section{The LBVs in the Magellanic Clouds}
Smith and Tombleson included nineteen stars in the
Magellanic Clouds in their analysis.
However, six of the
LMC stars in their list are neither LBVs nor candidates.
This does not mean that these stars, described below, are not interesting, or that they may not someday be
shown to have LBV characteristics, but they should not be included in
a survey to set limits on the spatial distribution of the known LBVs.
The six confirmed LMC LBVs (S Dor, R71, R127, R143, R110, R85) and four
candidates (S61, S119, Sk -69\arcdeg 142a, Sk -69\arcdeg 279) are listed in Table 3
with their observed parameters. Smith and Tombleson also list three stars in the SMC, but R40 is the only confirmed LBV/S Dor variable \citep{Szeifert93}.
The very luminous HD 5980 is a complex triple system that
may be an example of a giant eruption as we describe below. R4 is a B[e] star.
The confirmed LBVs are shown on an HR Diagram
in Figure 3. The three most luminous LBVs, R127, S Dor and R143, are
the classical LBVs \citep{HD94}, and the remaining four belong to the less luminous group.
\begin{figure}[!h]
\figurenum{3}
\epsscale{0.7}
\plotone{Figure3.eps}
\caption{A schematic HR Diagram for the LBVs and candidate LBVs in the LMC and
SMC discussed in the text. The symbols and colors are the same as in Figure 2.}
\end{figure}
\input{Table3.tex}
\subsection{The LBV Candidates in the LMC and SMC}
The properties of the four candidates and the six excluded stars in the LMC
plus HD 5980 and the candidate R4 in the SMC are described here with our reasoning for their inclusion or not in our analysis.
S61 (= Sk -67\arcdeg 266) was classified O8Iafpe by \citet{Bohannan} but later revised to WN11h by \citet{Crowther97}. No S Dor
type variability has been observed and its wind speed of 900 km s$^{-1}$
\citep{Wolf87} is
much higher than observed for confirmed LBVs in quiescence \citep{RMH14}, although
\citet{Crowther97} derive a terminal velocity of 250 km s$^{-1}$. S61
has an expanding circumstellar nebula \citep{Weis2003b} similar to the
nebulae associated with some LBVs. Primarily for that
reason it is included as an LBV candidate.
S119 (= HDE 269687) is an Ofpe/WN9 star \citep{Bohannan} or WN11h
\citep{Crowther97} with the low wind velocity, 230 km s$^{-1}$
typical of LBVs in quiescence, and an expanding circumstellar nebula
\citep{Weis2003a}. But \citet{Weis2003a} also find that the center of the
expansion has a velocity of 156 km s$^{-1}$, rather low for membership in
the LMC. The possibility that S119 may be a runaway is discussed later.
It has not been observed to show S Dor type
variability, and is included here as a candidate LBV.
Sk -69\arcdeg 142a (= HDE 269582) is another late WN star (WN10h, \citet{Crowther97}), a class
of stars associated with some LBVs in quiescence. It is a known variable with small light variations on the order of $\pm$ 0.2 mag \citep{vanG}, but
does not have a circumstellar nebula.
Sk -69\arcdeg 279 is classified as an O9f star \citep{Conti86} with a
large associated circumstellar nebula \citep{Weis95,Weis97,Weis02}.
It is on that basis that we consider it a candidate LBV.
{\it LMC Non-LBVs:}
R81 (= HDE 269128 = Hen S86) was initially considered an example of
S Dor variability \citep{Wolf81}, but it was soon shown to be an
eclipsing binary \citep{Stahl87}. This analysis and conclusion are
confirmed in the more recent paper by \citet{Tubbesing}. For this reason it was
not included in the review by \citet{HD94}. R81's spectroscopic and photometric
variability are tied to its orbital motion. It is not an LBV or a candidate.
MWC 112 (= Sk -69\arcdeg 147) is an F5 Ia supergiant \citep{Rou}. It has not been considered an
LBV candidate, although it has been confused with HDE 269582 discussed above,
see \citet{vanG,HD94}.
R126 (= HD 37974) is a B[e]sg star \citep{Lamers,Zickgraf} similar to the
classic S18, and has not been considered an LBV or candidate LBV in the
literature.
R84 (= HDE 269227) has a composite spectrum with an early-type supergiant
plus an M supergiant \citep{Munari}. It has been suggested to be a
dormant LBV \citep{Crowther1995}, but the various conflicting descriptions
of this star which have included reports of TiO bands, are now
attributed to its composite nature (B0 Ia + M4 Ia; \citealt{Stahl84}).
If this is a physical pair it is an important star since so few M supergiants
are known binaries.
Sk -69\arcdeg 271 (= CPD -69\arcdeg 5001) has been classified as a B4 I/III star
\citep{Neugent11}. It is associated with an arc of H$\alpha$ emission \citep{Weis97} and is considered to be a possible X-ray binary \citep{Sasaki}. It has
not been suggested to be an LBV or candidate in the literature. Its
luminosity, based on its spectral type and extinction \citep{Neugent11}, of
M$_{Bol}$ $\approx$ -7.90 mag, places it below the LBV/S Dor instability
strip.
R99 (= HDE 269445) can best be described as peculiar. It is classified
as an Ofpe/WN9 star \citep{Bohannan}. Although it shows spectral and
photometric variability, the amplitude is very small. \citet{Crowther97}
emphasize its peculiar spectrum and variability. Its high wind velocity of
1000 km s$^{-1}$ argues against an LBV in quiescence. Although it is associated
with emission from a nearby H II region, \citet{Weis2003b} concludes that there is
no circumstellar nebula like those associated with some LBVs and candidates.
{\it A Giant Eruption Candidate and a Non-LBV in the SMC}
HD 5980 is a complex triple system with two luminous hot stars in a short
period eclipsing binary (P$_{AB}$ = 19.3$^{d}$). Star B is considered to
be an early-type WN star, and the third star (C) is an O-type supergiant which may also be a binary \citep{Koenig14}.
In 1994, HD 5980 experienced a brief 3.5 mag brightening that lasted about 5 months.
At maximum, the absorption line spectrum was described as B1.5Ia$^{+}$
\citep{Koenig}, but \citet{Moffat} later classified it as WN11. It did not produce
the LBV/S Dor cool wind at maximum. In that respect it is reminiscent of M33-V532.
Given its brightening and short duration however,
HD 5980 can be better described as a giant eruption. Nevertheless numerous
authors describe it as an LBV. For that reason we include it in Table 3, but
with the above caveat.
R4 is a spectroscopic binary \citep{Zickgraf96} with an early B-type supergiant plus an early A-type supergiant. The hot star is described as a B[e] star with T$_\mathrm{eff}$ of 27,000 K and M$_{Bol}$ = -7.7 mag which places it below and well outside
the S Dor instability strip. Its LBV nature has not been established,
and subsequent papers treat it as a B[e] star.
\subsection{The Spatial Distribution and Kinematics of the Magellanic Cloud LBVs}
Here we reassess the Smith and Tombleson analysis of spatial
correlations in the LMC and SMC. For each star in a set, e.g. WN stars, red supergiants, or LBVs, they measured the projected
distance $D1$ to the nearest O-type star and plotted the cumulative
fraction of the stars in that set that have $D1$ less than a given value.
This is illustrated in Figure 4 where we reproduce their
Magellanic sample from their Table 1 together with the late O-type stars,
WN stars, and red supergiants. The O-type stars, for example, tend to exist in groups, so naturally they have small values of $D1$. We would conclude from
this figure that the red supergiants have a spatial distribution uncorrelated
with O stars. The stars in the Smith and Tombleson sample fall in between.
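The $D1$ statistic and its cumulative fraction can be computed as below; the coordinates are hypothetical projected positions in parsecs, not the actual catalog data.

```python
# Sketch of the D1 statistic: for each target star, the projected distance to
# the nearest O-type star, and the empirical cumulative fraction of D1.
# All (x, y) positions below are hypothetical, in parsecs.
import math

def nearest_o_distance(targets, o_stars):
    """D1 for each target: projected distance to the nearest O-type star."""
    return [min(math.hypot(tx - ox, ty - oy) for ox, oy in o_stars)
            for tx, ty in targets]

def cumulative_fraction(d1_values, d):
    """Fraction of the sample with D1 <= d."""
    return sum(1 for v in d1_values if v <= d) / len(d1_values)

o_stars = [(0, 0), (50, 10), (200, 0)]
lbvs = [(5, 0), (60, 10), (400, 0)]
d1 = nearest_o_distance(lbvs, o_stars)
print([round(v) for v in d1], round(cumulative_fraction(d1, 20), 2))
```

A set whose members cluster with the O stars rises steeply at small $D1$; an uncorrelated set, like the red supergiants in Figure 4, rises slowly.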
\begin{figure}[!h]
\figurenum{4}
\epsscale{0.6}
\plotone{Figure4.eps}
\caption{The cumulative fraction versus the projected distance to the nearest O
-type star for the Magellanic Cloud sample from \citet{Smith15} based on their
figure. The distributions for the late O-type stars, WN stars, and red supergiants are shown for reference.}
\end{figure}
We repeat this analysis for the seven confirmed LBVs in Table 3. The
seven stars discussed in \S {4.1} that have no clear relation to
LBVs are removed, and the four ``candidates'' are discussed below.
We show the cumulative distributions
for the {\it classical} and {\it less luminous} LBVs in Figure 5, plotted respectively, as LBV1 and LBV2\footnote{Set LBV1 consists of R127, R143, and S Dor;
no additional classical LBVs have been found in the LMC and SMC since 1994.
Set LBV2 includes R71, R110, R85, and R40.}.
The three most luminous stars (LBV1) in the sample were
originally identified
as classical LBVs by \citet{HD94}, and were not merely chosen in relation to Smith and Tombleson's arguments. They have an average $D1$ of only 7 pc, and, as
Figure 5 shows, their distribution is statistically indistinguishable from the late O-type
stars. The four stars in LBV2 are all substantially fainter than
the $M_\mathrm{bol} \approx -9.7$ criterion. As expected
for these less massive, highly evolved objects, they parallel
the RSG distribution in Figure 5. Formal statistical tests
are doubtful for such a small sample; but if one
attempts to apply any such test (e.g., Kolmogorov-Smirnov),
the result is excellent consistency between set LBV2 and the
RSG distribution. This is obvious in the figure.
Moreover, elementary statistics support the distinction between
classical and less luminous LBVs. Among the 7 confirmed LBVs,
the three classical LBVs have the three smallest values of $D1$.
The ab initio probability of this outcome would be 2.9\% if
all seven represent the same population; so, in this sense,
the difference between the two classes is significant with
a 97\% confidence. Two aspects of this statement are worth noting.
(1) Given only seven objects, 97\% is the highest confidence level
that can be attained without additional information.\footnote{
The Kolmogorov-Smirnov test is inappropriate here for two
reasons: The $D1$ values of sets LBV1 and LBV2 do not overlap,
and the sample size is too small. The $t$ test for two finite
samples with unequal variance formally gives a confidence
level of about 99\% for the distinction between LBV1 and LBV2,
but this entails extra assumptions that are too complicated
to review here. A detail in our reasoning is that
$D1$ is expected to be smaller in set LBV1, as observed, and
not larger. The main point is that elaborate
statistical analyses are not worthwhile with such a small sample
size. For comments about statistical testing in general, see \\https://asaip.psu.edu/Articles/beware-the-kolmogorov-smirnov-test/. }
(2) In terms of standards for judging scientific evidence, the above
result is useful {\it because it was not known at the time when the
two classes of LBVs were defined.\/}
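The 2.9\% figure follows from elementary counting: under the null hypothesis that all seven confirmed LBVs are drawn from one population, each of the $\binom{7}{3} = 35$ possible assignments of the three smallest $D1$ values is equally likely, and only one of them gives all three to the classical LBVs.

```python
# Probability that the three classical LBVs hold the three smallest D1 values
# by chance, if all seven confirmed LBVs were a single population.
from math import comb

p = 1 / comb(7, 3)            # 1 / 35
print(round(100 * p, 1))      # percent; the 2.9% quoted in the text
```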
\begin{figure}[!h]
\figurenum{5}
\epsscale{0.6}
\plotone{Figure5.eps}
\caption{The cumulative fraction versus the projected distance to the nearest O
-type star for the {\it classical} LBVs (LBV1) and {\it less luminous} LBVs
(LBV2) in the Magellanic Clouds.}
\end{figure}
Just as important, their velocities do not suggest that any
of the LBVs are runaways or have exceptionally high velocities.
Their heliocentric velocities are included in Table 3 with the references.
The emission line velocities are given for the RAVE measurements \citep{Munari} except for R143\footnote{The absorption line velocities are also given in \citet{Munari}.}. The published average velocities for
the Magellanic Clouds range from 262 - 278 km s$^{-1}$ for the LMC
and from 146 - 160 km s$^{-1}$ for the SMC \citep{Richter87,McC12}.
The LBV velocities are all consistent with their membership,
and for the LMC stars, with their position based on the H I
rotation curve available at NED (from \citet{Kim}). The only exception is the candidate LBV, S119
with a possible systemic velocity based on the center of its expanding nebula
\citep{Weis2003a} that is much lower than expected at its position in the LMC.
Thus S119 is a candidate for a runaway star.
In addition, a runaway star with a high velocity would be expected to alter the
morphology of its circumstellar ejecta, the LBV nebula. Runaway stars are
commonly known to have an arc-like bow shock nebula due to interaction with the
interstellar medium \citep{Bomans}. The stars in M31 and M33 are too
distant to search for bow shocks, but several confirmed LBVs and candidates in the
Milky Way and LMC have associated circumstellar nebulae that are either spherical
or bipolar in shape \citep{Weis2011}. They are not bow shocks. The only nebula
with a feature that could be attributed to a bow shock, an embedded arc in the
surrounding nebulosity, is that of S119, which may be a runaway.
The distribution of the four LBV candidates in the LMC (Figure 6) interestingly
follows that of the less luminous LBVs. Three of the four are considered
candidates because of their circumstellar nebulae. The origin of their very
extended ejecta may be a prior giant eruption or numerous earlier S Dor-like
events, but their low expansion velocities (14--27 km s$^{-1}$) are a puzzle
compared with the outflow velocities of LBV winds and giant eruptions\footnote{The nebulae associated with Milky Way LBVs have expansion velocities of 50 - 75 km s$^{-1}$.}. They may be
due to deceleration with the ISM or the product of mass loss in a previous state
with lower ejection velocities. Their published luminosities are very
similar and except for Sk -69\arcdeg 279, they lie in or near the LBV
instability strip.
Based on their positions on the HR Diagram, they
may be post-RSGs or, given their rather high luminosities, former warm
hypergiants like IRC~+10420 or Var A in M33.
\begin{figure}[!h]
\figurenum{6}
\epsscale{0.6}
\plotone{Figure6.eps}
\caption{The cumulative fraction versus the projected distance to the nearest O
-type star for the four candidate LBVs in the LMC.}
\end{figure}
\section{Concluding Remarks}
In M31 and M33 we conclude that except possibly for
AE And which may be a post-RSG, the LBVs and candidates are associated with luminous young stars and
supergiants appropriate to their luminosities and positions on the HR diagram.
Their measured velocities are also consistent with their positions in their
respective galaxies. There is no evidence that they are runaway stars that have moved away from their place of origin.
In the Magellanic Clouds, separating the LBVs by luminosity removes the apparent
isolation of the LBVs claimed by Smith and Tombleson. The more luminous and more massive, classical LBVs
have a distribution similar to the late-type O stars and the
WN stars, while the less luminous ones have a distribution like the RSGs, an evolved lower mass population of supergiants.
In both the Magellanic Clouds and in M31 and M33, none of the sample of
16 confirmed LBVs have high velocities or are candidate runaway stars.
One positive result is especially notable: since the LBV distribution
in Figure 5 is indistinguishable from the red supergiants, evidently
the less luminous LBVs are not young. This fact strongly supports
the interpretation outlined in Section 1, i.e., those objects
have evolved back to the blue in the HR Diagram.
It is not necessary to invoke an exotic explanation such as ``kicked mass
gainers'' from binary systems to explain the Luminous Blue Variables.
Some LBVs and candidates are indeed binaries \citep{Rivinius,Lobel,Martayan}, but this
binarity contradicts the Smith and Tombleson scenario, in which the LBV, the primary in
these systems, is the mass gainer. Thus the results of our discussion and
analysis support the accepted description of LBVs of all luminosities as evolved massive stars that have shed a lot of mass, whether as hot stars or
as red supergiants, and are now close to their Eddington limit.
\acknowledgements
Research by R. Humphreys, K. Davidson, and M. Gordon on massive stars is supported by
the National Science Foundation through grant AST-1109394.
\section{Introduction}
Conventional quantum field theories in flat space require the Minkowski metric $\eta_{\mu\nu}$ as
an indispensable background structure in order to introduce a notion of time and causality,
and to construct actions with interesting ``nontopological'' and covariant terms. Many concepts
known for the flat case can be generalized and transferred to curved spacetimes, where the metric
$g_{\mu\nu}$ assumes the role of the crucial background arena in which all invariants of the theory can be
constructed. In quantum gravity, however, there is a fundamental conceptual difficulty since
the metric itself is a dynamical field now, having the consequence that, a priori, the arena does
not exist.
An elegant way out of this problem is provided by the introduction of a nondynamical
background metric $\bar{g}_{\mu\nu}$ that is kept arbitrary and that serves as the basis of nontopological
covariant constructions. The dynamical metric $g_{\mu\nu}$ is then parametrized by a combination of this
background and some dynamical field(s). The crucial idea is that -- due to the arbitrariness of
the background -- in the end all physical quantities like scattering amplitudes must not depend on
$\bar{g}_{\mu\nu}$ any more. This bootstrap argument is one implementation of \emph{background independence},
a property that must be satisfied by any meaningful theory of quantum gravity. Its application has
led to great successes in many different physical situations.
As a consequence of background arbitrariness, also the way in which the dynamical metric is
parametrized is not determined without assuming or knowing the fundamental degrees of
freedom and without having defined a fluctuating field. The most famous choice of parametrization
is the standard background field method, where the dynamical field undergoes a \emph{linear split}
into background plus fluctuation \cite{DW03}. This has turned out to be a powerful technique in
many quantum field theory calculations, in particular in non-Abelian gauge theories. In the context
of gravity it reads
\begin{equation}
g_{\mu\nu} = \bar{g}_{\mu\nu} + h_{\mu\nu} \, ,
\label{eq:stdParam}
\end{equation}
where the fluctuating field $h_{\mu\nu}$ is a symmetric tensor. In the following we refer to this
split as \emph{standard parametrization}.
As an example for a different choice of parametrization we can consider the nonlinear relation
\cite{FP94}
\begin{equation}
g_{\mu\nu} = e^a{}_\mu e^b{}_\nu\, \bar{g}_{ab} \, ,
\end{equation}
with invertible matrix-valued fields $e^a{}_\mu$. In the vielbein formalism for instance
\cite{HR12,DP13}, the dynamical metric $g_{\mu\nu}$ is parametrized by $g_{\mu\nu} = e^a{}_\mu e^b{}_\nu\,
\eta_{ab}$, together with $e^a{}_\mu = \bar{e}^a{}_\mu + \varepsilon^a{}_\mu$, so the fluctuations
$\varepsilon^a{}_\mu$ around the background field $\bar{e}^a{}_\mu$ contribute nonlinearly to
$g_{\mu\nu}$.
Recently yet another parametrization has attracted increasing interest \cite{KKN93}. Although it
is a nonlinear relation, too, it does not introduce more independent components than contained in
$g_{\mu\nu}$. Here, the metric is determined by the \emph{exponential} of a fluctuating field,
\begin{equation}
g_{\mu\nu}=\bar{g}_{\mu\rho}(e^h)^\rho{}_\nu \, .
\label{eq:newParam}
\end{equation}
Again, $\bar{g}_{\mu\rho}$ denotes the background metric, and $h$ is a symmetric matrix-valued field,
$h_{\mu\nu} = h_{\nu\mu}$ (or $h^\mu{}_\nu = h_\nu{}^\mu$ with the shifted index position). As usual,
indices are raised and lowered by means of the background metric. In this work we refer to the
exponential relation \eqref{eq:newParam} as \emph{new parametrization}.
A priori, there seems to be no reason to prefer one parametrization over another one. It is well
known that field redefinitions in the path integral for the partition function do not change
S-matrix elements \cite{CWZ69}. While this equivalence theorem is based on the use of the equations
of motion, the (off shell) effective action $\Gamma$ in the usual formulation does still depend on
the choice of the parametrization.
In a geometric setting, by regarding the configuration space as a manifold, Vilkovisky and DeWitt
constructed an effective action in a covariant way such that it is independent of parametrizations
and gauge conditions for the quantized fields both off and on shell \cite{V84}. In this approach,
however, $\Gamma$ can have a remaining dependence on the chosen configuration space metric
\cite{O91}. Furthermore, unlike the conventional effective action, the Vilkovisky-DeWitt effective
action does not generate the 1PI correlation functions; instead, it is governed by modified Ward
identities \cite{BK87} which, in the present context, would relate $\delta\Gamma/\delta g_{\mu\nu}$ to
$\delta\Gamma/\delta \bar{g}_{\mu\nu}$. This is a first hint that specific parametrizations can
be of interest as they have an effect on off shell quantities, and appropriate choices may
simplify calculations. An example is the frame dependence in cosmology which has been investigated
in reference \cite{KS14} at one-loop level.
Studies (without the Vilkovisky-DeWitt approach) of the renormalization group (RG) show that
$\beta$-functions and fixed points can vary when the parametrization is changed \cite{W74,M98}.
In addition, parametrization invariance is violated even on shell when truncations, e.g.\
derivative expansions, are considered \cite{M98}. Combining RG techniques with the ideas of
Vilkovisky and DeWitt leads to the geometrical effective average action, which is constrained
by generalized modified Ward identities \cite{P03}. Therefore, again, parametrization and gauge
invariance can be obtained only at the expense of nontrivial dependencies on the background.
In summary, off shell quantities in both the conventional and the Vilkovisky-DeWitt approaches can depend
on the underlying parametrization and/or the background.
Thus, it is usually safer to consider physical observables as they should not exhibit any
parametrization or gauge dependence. In quantum gravity, however, it is not even clear what
physically meaningful observable quantities are, and so far there is no experiment for a direct
measurement of quantum gravity effects \cite{W09}. Based on effective field theory arguments
it is possible to compute the leading quantum corrections to the Newtonian potential \cite{D94},
but the effect is unobservably small and the description is valid only in the low energy regime,
so it cannot be considered a fundamental theory of the gravitational field. Due to the problem
of finding observable quantities, the best one can do with a candidate theory of quantum gravity
is to test it for self-consistency, check the classical limit, and compare it with other
approaches. In this regard it is of substantial interest to study off shell quantities like
$\beta$-functions. Their parametrization dependence might then be exploited to simplify the
comparison between different theories. For instance, if correlation functions of vertex
operators of the type $e^{ikX}$ in string theory \cite{GSW87} are supposed to be compared
with another approach, it may be natural to use the exponentials of some fields there as well.
In the present work we use a fully nonperturbative framework to show within the
Einstein--Hilbert truncation that $\beta$-functions do indeed depend on the parametrization,
and the exponential relation \eqref{eq:newParam} turns out to be more appropriate for a
comparison with conformal field theories. Particular attention is paid to RG fixed points
in the context of Asymptotic Safety. It is assumed that the reader is familiar with the
concept; for introductions and reviews of Asymptotic Safety see \cite{ASReviews}.
At this point we want to make an important remark. Apart from the fact that a
reparametrization can change objects whose direct physical meaning is obscured, it could
also give rise to a fundamental change: In principle, there is the possibility that in
parametrization A the defining path integral has a suitable continuum limit according to
the Asymptotic Safety scenario, i.e.\ coupling constants approach a fixed point in the UV,
while there may not be such a well defined limit for parametrization B. However, when
resorting to truncations, it would be hard to decide whether such a change due to
reparametrization is actually fundamental or just a truncation artifact. One could find
that a suitable fixed point is absent in one parametrization, while it exists in another
one, but after enlarging the truncated theory space the resulting differences between the
two parametrizations might diminish eventually. Clearly, in that case higher order truncations
would have to be considered to obtain more reliable results.
In the following, we focus more concretely on the properties of the ``new'' exponential
parametrization \eqref{eq:newParam}. There are several independent reasons that strongly motivate
its use.
\textbf{i)} The first argument is a geometric one. We answer the question whether the right hand side
of relation \eqref{eq:newParam} represents a metric. As we prove in appendix \ref{app:Logs}, there
is a \emph{one-to-one correspondence} between dynamical metrics $g_{\mu\nu}$ and symmetric matrices $h$.
(Note that we consider Euclidean signature spacetimes throughout this paper, so metrics are positive
definite.) That means that, given a dynamical and a background metric, $g_{\mu\nu}$ and $\bar{g}_{\mu\nu}$,
respectively, there exists a unique symmetric matrix $h$ satisfying equation \eqref{eq:newParam}.
If, on the other hand, $\bar{g}_{\mu\nu}$ and a symmetric $h_{\mu\nu}$ are given, then $g_{\mu\nu}$ defined by
$g_{\mu\nu}=\bar{g}_{\mu\rho}(e^h)^\rho{}_\nu$ is symmetric and positive definite, so it is again an admissible
metric. As a consequence, a path integral over $h_{\mu\nu}$ captures all possible $g_{\mu\nu}$, and no $g_{\mu\nu}$
is counted more than once. The matrices $h$ can be seen as tangent vectors corresponding
to the space of metrics and equation \eqref{eq:newParam} as the exponential map (even though not
using the Vilkovisky-DeWitt connection). Due to the positive definiteness of $g_{\mu\nu}$ guaranteed by
construction, the new parametrization seems to be preferable to the standard one given by
equation \eqref{eq:stdParam}.
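The one-to-one correspondence can also be illustrated numerically. The following minimal sketch (illustrative Python, not part of the proof in appendix \ref{app:Logs}; the random background and fluctuation are chosen for the example only) verifies that $g=\bar{g}\,e^{\bar{g}^{-1}h}$ is symmetric and positive definite and that $h$ is recovered uniquely from $(g,\bar{g})$ via the principal matrix logarithm:

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(0)
d = 4

# Background metric: a random symmetric positive definite matrix.
A = rng.normal(size=(d, d))
gbar = A @ A.T + d * np.eye(d)

# Symmetric fluctuation h_{mu nu}; raising one index with the background
# metric gives the matrix H = gbar^{-1} h that is exponentiated.
h = rng.normal(size=(d, d))
h = 0.5 * (h + h.T)
H = np.linalg.solve(gbar, h)

# Dynamical metric of the exponential parametrization: g = gbar e^H.
g = gbar @ expm(H)

# g is again an admissible (Euclidean) metric ...
assert np.allclose(g, g.T)                  # symmetric
assert np.all(np.linalg.eigvalsh(g) > 0)    # positive definite

# ... and the map is invertible: h is recovered from (g, gbar)
# through the principal matrix logarithm.
h_back = gbar @ np.real(logm(np.linalg.solve(gbar, g)))
assert np.allclose(h_back, h)
```

Since $\bar{g}^{-1}h$ is similar to the symmetric matrix $\bar{g}^{-1/2}h\,\bar{g}^{-1/2}$, its eigenvalues are real, which is why the principal logarithm returns the original $h$.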
\textbf{ii)} Our main motivation comes from an apparent connection to conformal field theories (CFT).
To see this, we examine a path integral for two dimensional gravity coupled to conformal matter
(i.e.\ to a matter theory that is conformally invariant when the metric is flat) with central
charge $c$. Here it is sufficient to consider matter actions constructed from scalar fields. Then
$c$ is just the number of these scalar fields. As shown by Polyakov \cite{P81}, the path integral
decomposes into a path integral over the conformal mode $\phi$ with a Liouville-type action times a
$\phi$-independent part. Owing to the integral over Faddeev-Popov ghosts, the kinetic term for $\phi$
comes with a factor of $(c-26)$, reflecting the famous critical dimension of string theory. If,
finally, the implicit $\phi$-dependence of the path integral measure is shifted into the action, the
kinetic term for $\phi$ gets proportional to $(c-25)$ \cite{DDK88}. For this reason we call
\begin{equation}
c_\text{crit}=25
\end{equation}
the \emph{critical central charge} at which $\phi$ decouples.
How is that related to Asymptotic Safety? Let us consider the RG running of the dimensionless
version of Newton's constant, $g$, now slightly away from two dimensions, $d=2+\epsilon$. Already
a perturbative treatment shows that the $\beta$-function has the general form
\begin{equation}
\beta_g = \epsilon g - b g^2 ,
\label{eq:betaeps}
\end{equation}
up to order $\mathcal{O}(g^3)$ \cite{W80}, leading to the non-Gaussian fixed point
\begin{equation}
g_* = \epsilon/b .
\end{equation}
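As a quick symbolic cross-check (a sketch that assumes nothing beyond the quoted form of $\beta_g$), the only nontrivial zero of \eqref{eq:betaeps} is $g_*=\epsilon/b$, and the linearized flow there has slope $-\epsilon$, so the fixed point is UV-attractive in the $g$-direction for $\epsilon>0$:

```python
import sympy as sp

g = sp.symbols('g', real=True)
eps, b = sp.symbols('epsilon b', positive=True)

beta_g = eps * g - b * g**2        # valid up to O(g^3)

# Fixed points: the Gaussian one at g = 0 and the non-Gaussian one at eps/b.
fixed_points = sp.solve(beta_g, g)
g_star = eps / b
assert g_star in fixed_points

# Critical exponent: beta'(g_*) = -epsilon < 0, hence the NGFP attracts
# trajectories for increasing RG scale k (UV-attractive).
slope = sp.diff(beta_g, g).subs(g, g_star)
assert sp.simplify(slope + eps) == 0
```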
It turns out that the coefficient $b$ depends on the underlying parametrization of the metric.
Perturbative calculations based on the standard parametrization \eqref{eq:stdParam} yield
$b=\frac{2}{3}(19-c)$, where $c$ denotes again the number of scalar fields \cite{W80,T77}.
This gives rise to the critical central charge
\begin{equation}
c_\text{crit}=19
\label{eq:ccritStdPert}
\end{equation}
in the standard parametrization. If, on the other hand, the new parametrization \eqref{eq:newParam}
underlies the computation, the critical central charge amounts to \cite{KKN93}
\begin{equation}
c_\text{crit}=25.
\label{eq:ccritNewPert}
\end{equation}
Since many independent derivations yield $c_\text{crit}=25$, too, it appears ``correct'' in a certain sense.
This result seems to be another advantage of the exponential parametrization. In the present
work we investigate whether it can be reproduced in a nonperturbative setup.
\textbf{iii)} Let us come back to an arbitrary dimension $d$. Parametrizing the metric with an
exponential allows for an easy treatment of the conformal mode which can be separated as the
trace part of $h$ in equation \eqref{eq:newParam}: We split $h_{\mu\nu}$ into trace and traceless
part, $h_{\mu\nu}=\hat{h}_{\mu\nu}+\frac{1}{d}\bar{g}_{\mu\nu} \phi$, where $\phi=\bar{g}^{\mu\nu} h_{\mu\nu}$ and
$\bar{g}^{\mu\nu}\hat{h}_{\mu\nu}=0$. In this case, equation \eqref{eq:newParam} becomes
\begin{equation}
g_{\mu\nu}=\bar{g}_{\mu\rho}\big(e^{\hat{h}}\big)^\rho{}_\nu \, e^{\frac{1}{d}\phi} ,
\end{equation}
so the trace part of $h$ gives a conformal factor. Using the matrix relation $\det(\exp M)=\exp(\Tr M)$
we obtain
\begin{equation}
\sqrt{g} = \sqrt{\bg} \, e^{\frac{1}{2}\phi} \, ,
\label{eq:sqrtg}
\end{equation}
where $g$ ($\bar{g}$) denotes the determinant of $g_{\mu\nu}$ ($\bar{g}_{\mu\nu}$). The traceless part of $h$ has
completely dropped out of equation \eqref{eq:sqrtg}. Hence, unlike for the standard parametrization,
the cosmological constant appears as a coupling in the conformal mode sector only. This will become
explicit in our calculations.
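Relation \eqref{eq:sqrtg} is straightforward to confirm numerically. The sketch below (illustrative Python with a randomly chosen background and fluctuation) also checks explicitly that the traceless part of $h$ drops out of the volume element:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
d = 4

A = rng.normal(size=(d, d))
gbar = A @ A.T + d * np.eye(d)     # background metric (Euclidean, SPD)

h = rng.normal(size=(d, d))
h = 0.5 * (h + h.T)                # symmetric fluctuation h_{mu nu}

H = np.linalg.solve(gbar, h)       # mixed-index field h^rho_nu
phi = np.trace(H)                  # trace part phi = gbar^{mu nu} h_{mu nu}

g = gbar @ expm(H)                 # exponential parametrization

# det(exp M) = exp(Tr M)  ==>  sqrt(g) = sqrt(gbar) * exp(phi/2):
lhs = np.sqrt(np.linalg.det(g))
rhs = np.sqrt(np.linalg.det(gbar)) * np.exp(phi / 2)
assert np.isclose(lhs, rhs)

# The traceless part alone does not change the volume element:
H_traceless = H - (phi / d) * np.eye(d)
assert np.isclose(np.linalg.det(gbar @ expm(H_traceless)),
                  np.linalg.det(gbar))
```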
\textbf{iv)} The new parametrization might simplify computations and cure singularities that are
possibly encountered with the standard parametrization. Here we briefly mention three examples.
(a) The RG flow of nonlocal form factors appearing in a curvature expansion of the effective
average action in $2+\epsilon$ dimensions is divergent in the limit $\epsilon\rightarrow 0$ for small
$k$ when based on \eqref{eq:stdParam} but has a meaningful limit with equation \eqref{eq:newParam}
\cite{SCM10}. (b) Similarly, when trying to solve the flow equations in scalar-tensor theories
\cite{NP09} in $d=3$ and $d=4$, singularities occurring in a standard calculation can be avoided by
using the new parametrization.\footnote{The author would like to thank R.~Percacci for pointing
this out.} (c) Related to argument iii) the exponential parametrization provides an easy access
to unimodular gravity \cite{E13}.
In this work we present a nonperturbative derivation of the $\beta$-functions of Newton's constant
and the cosmological constant. For that purpose, we study the effective average action within the
Einstein--Hilbert truncation (without using the Vilkovisky-DeWitt method), where the metric is
replaced according to equation \eqref{eq:newParam}. We will show that the results for the exponential
parametrization are significantly different from the ones obtained with the standard split
\eqref{eq:stdParam}. Although we encounter a stronger scheme dependence that has to be handled with
care, the favorable properties of the new parametrization seem to prevail, as discussed in the
final section.
\section{Framework}
\label{sec:framework}
We employ functional RG techniques to evaluate $\beta$-functions in a nonperturbative way. The method
is based upon the effective average action $\Gamma_k$, a scale dependent version of the usual
effective action $\Gamma$. By definition, its underlying path integral contains a mass-like regulator
function $\mathcal{R}_k(p^2)$ such that quantum fluctuations with momenta below the infrared cutoff scale $k$
are suppressed while only the modes with $p^2>k^2$ are integrated out. Thus, $\Gamma_k$ interpolates
between the microscopic action at $k\rightarrow\infty$ and $\Gamma$ at $k=0$. Its scale dependence
is governed by an exact functional RG equation (FRGE) \cite{W93,R98,M94},
\begin{equation}
k \partial_k \Gamma_k = \frac{1}{2}\, \Tr \bigg[ \Big(\Gamma_k^{(2)}+ \mathcal{R}_k\Big)^{-1}\,
k \partial_k \mathcal{R}_k\,\bigg] ,
\label{eq:FRGE}
\end{equation}
where $\Gamma_k^{(2)}$ denotes the second functional derivative with respect to the fluctuating
field ($h_{\mu\nu}$ in our case). In the terminology of reference \cite{CPR09} we choose
a type Ia cutoff, i.e.\ $\mathcal{R}_k$ is a function of the covariant Laplacian.
As outlined above, any field theoretic description of quantum gravity requires the introduction of
a background metric. Consequently, $\Gamma_k$ is a functional of both $g_{\mu\nu}$ and $\bar{g}_{\mu\nu}$ in
general, i.e.\ $\Gamma_k\equiv\Gamma_k[g,\bar{g}]$. In terms of $h$ the two parametrizations give rise
to the functionals
\begin{equation}
\Gamma_k^\text{standard}[h;\bar{g}] \equiv \Gamma_k[\bar{g}+h,\bar{g}] ,
\label{eq:GammaStandard}
\end{equation}
as opposed to
\begin{equation}
\Gamma_k^\text{new}[h;\bar{g}] \equiv \Gamma_k\big[\bar{g} \, e^h,\bar{g}\big] .
\label{eq:GammaNew}
\end{equation}
(We adopt the comma notation for $\Gamma_k[g,\bar{g}]$ and the semicolon notation for
$\Gamma_k[h;\bar{g}]$.)
The difference between \eqref{eq:GammaStandard} and \eqref{eq:GammaNew} is crucial: Since
the second derivative $\Gamma_k^{(2)}$ in \eqref{eq:FRGE} is with respect to $h$, the two
parametrizations give rise to different terms according to the chain rule,
\begin{equation}
\begin{split}
&\Gamma_k^{(2)}(x,y) \equiv \frac{1}{\sqrt{\bg(x)}\sqrt{\bg(y)}}\;\frac{\delta^2\Gamma_k}{\delta h(x) \,
\delta h(y)} \\
&= \frac{1}{\sqrt{\bg(x)}\sqrt{\bg(y)}}\int_u\int_v\, \frac{\delta^2\Gamma_k}{\delta g(u)\,\delta g(v)} \,
\frac{\delta g(v)}{\delta h(x)}\,\frac{\delta g(u)}{\delta h(y)} \\
& \qquad +\frac{1}{\sqrt{\bg(x)}\sqrt{\bg(y)}}\int_u\, \frac{\delta \Gamma_k}{\delta g(u)} \,
\frac{\delta^2 g(u)}{\delta h(x)\,\delta h(y)} \; ,
\end{split}
\label{eq:2ndVar}
\end{equation}
where we suppressed all spacetime indices and used the shorthand $\int_u \equiv \int \td^d u$.
The first term on the right hand side of equation \eqref{eq:2ndVar} is the same for both
parametrizations, at least at lowest order, because $\delta g_{\mu\nu}(x)/\delta h_{\rho\sigma}(y)
= \delta^\rho_\mu \, \delta^\sigma_\nu \, \delta(x-y)$ in the standard case, and
$\delta g_{\mu\nu}(x)/\delta h_{\rho\sigma}(y)\allowbreak = \delta^\rho_\mu \, \delta^\sigma_\nu \,
\delta(x-y) +\mathcal{O}(h)$ with the new parametrization. The last term in \eqref{eq:2ndVar}, however,
vanishes identically for parametrization \eqref{eq:stdParam} since $\delta^2 g/\delta h^2 = 0$,
whereas the exponential relation \eqref{eq:newParam} entails
\begin{align}
&\frac{\delta^2 g_{\mu\nu}(u)}{\delta h_{\rho\sigma}(x) \, \delta h_{\lambda\gamma}(y)} \\
&= {\textstyle \frac{1}{2}}
\left(\bar{g}^{\sigma\lambda} \delta^\rho_\mu \, \delta^\gamma_\nu + \bar{g}^{\rho\gamma} \delta^\lambda_\mu
\, \delta^\sigma_\nu \right) \delta(u-x) \delta(u-y) + \mathcal{O}(h), \nonumber
\end{align}
leading to additional contributions to the FRGE \eqref{eq:FRGE}. Note that these new contributions
are proportional to the first variation of $\Gamma_k$ in \eqref{eq:2ndVar}, so the exponential
parametrization gives the same result as the standard one when going on shell. But, due to the
inherent off shell character of the FRGE, we expect differences in $\beta$-functions and the
corresponding RG flow.
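The lowest-order second variation quoted above can be checked by finite differences. The following sketch (an illustrative numerical test, not part of the actual FRGE computation) contracts the analytic expression with two symmetric directions $a_{\mu\nu}$, $b_{\mu\nu}$, for which it reduces to $\frac{1}{2}\big(a\,\bar{g}^{-1}b + b\,\bar{g}^{-1}a\big)$, and compares with a numerical mixed derivative of $g_{\mu\nu}(h)$ at $h=0$:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
d, eps = 4, 1e-3

A = rng.normal(size=(d, d))
gbar = A @ A.T + d * np.eye(d)
gbar_inv = np.linalg.inv(gbar)

def g_of(h):
    """Dynamical metric g = gbar exp(gbar^{-1} h) for symmetric h."""
    return gbar @ expm(gbar_inv @ h)

def sym(m):
    return 0.5 * (m + m.T)

a = sym(rng.normal(size=(d, d)))   # two symmetric variation directions
b = sym(rng.normal(size=(d, d)))

# Central-difference approximation of the mixed second derivative at h = 0.
num = (g_of(eps * (a + b)) - g_of(eps * (a - b))
       - g_of(-eps * (a - b)) + g_of(-eps * (a + b))) / (4 * eps**2)

# Analytic second variation, contracted with the directions a and b.
ana = 0.5 * (a @ gbar_inv @ b + b @ gbar_inv @ a)
assert np.allclose(num, ana, atol=1e-4)
```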
Finally, let us comment on gauge invariance and fixing. Starting from relation \eqref{eq:newParam},
we observe that $(e^h)^\rho{}_\nu$ must transform as a tensor under general coordinate
transformations, if $g_{\mu\nu}$ and $\bar{g}_{\mu\rho}$ transform as tensors. It is possible to show then
that $h_{\mu\nu}$ transforms in the same way, i.e.\
\begin{equation}
\delta \mkern1mu h_{\mu\nu} = \mathcal{L}_\xi h_{\mu\nu}
\end{equation}
under diffeomorphisms generated by the vector field $\xi$ via the Lie derivative $\mathcal{L}_\xi$.
In order to be as close to the standard calculations based on \eqref{eq:stdParam} as possible
\cite{R98}, we shall employ an analogous gauge fixing procedure. This can most easily be done by
observing that $h_{\mu\nu}$ in the standard gauge fixing condition $\mathcal{F}_\alpha^{\mu\nu}[\bar{g}]
\, h_{\mu\nu}=0$ can be replaced by $g_{\mu\nu}$: We use the most convenient class of $\mathcal{F}$'s
where $\mathcal{F}_\alpha^{\mu\nu}[\bar{g}]$ is proportional to the covariant derivative $\bar{D}_\mu$
corresponding to the background metric, and therefore, $\mathcal{F}_\alpha^{\mu\nu}[\bar{g}] \, g_{\mu\nu}
= \mathcal{F}_\alpha^{\mu\nu}[\bar{g}](\bar{g}_{\mu\nu}+h_{\mu\nu}) = \mathcal{F}_\alpha^{\mu\nu}[\bar{g}] \, h_{\mu\nu} = 0$ for
the standard parametrization.
Passing on to the new parametrization, we can choose the $g_{\mu\nu}$-version of the gauge condition,
too, $\mathcal{F}_\alpha^{\mu\nu}[\bar{g}] \, g_{\mu\nu} = 0$. This version is preferred to the one acting on
$h_{\mu\nu}$ because, (a) it is hard to solve the true or ``quantum'' gauge transformation law for
$\delta h_{\mu\nu}$ (by solving $\delta g_{\mu\nu} = \mathcal{L}_\xi g_{\mu\nu}$ while $\delta \bar{g}_{\mu\nu}=0$),
and (b) the $g_{\mu\nu}$-choice leads to the same Faddeev-Popov operator as in the standard case
\cite{R98}. As a consequence, all contributions to the FRGE coming from gauge fixing and ghost
terms are the same for both parametrizations. By virtue of the one-to-one correspondence between
$g_{\mu\nu}$ and $h_{\mu\nu}$ (see appendix \ref{app:Logs}) this gauge fixing method is perfectly admissible.
We present a single-metric computation in section \ref{sec:single} and a bi-metric
\cite{bimetric,BR14} analysis in section \ref{sec:bi}. In the single-metric case, we employ
the harmonic gauge condition, $\mathcal{F}_\alpha^{\mu\nu}[\bar{g}] \, g_{\mu\nu} = 0$ with
$\mathcal{F}_\alpha^{\mu\nu}[\bar{g}] =\delta^\nu_\alpha \,\bar{g}^{\mu\rho}\bar{D}_\rho - \frac{1}{2} \,
\bar{g}^{\mu\nu} \bar{D}_\alpha$ (corresponding to $\rho=\frac{d}{2}-1$ in \cite{CPR09}), together with
a Feynman-type gauge parameter, $\alpha=1$. The bi-metric results are obtained by using the
$\Omega$ deformed $\alpha=1$ gauge \cite{BR14}. To summarize, we repeat the calculations of
\cite{R98} and \cite{BR14} with the new parametrization, where the modifications originate
from the gravitational part of $\Gamma_k$, while gauge fixing, ghost and cutoff contributions
remain the same.
\section{Results: Single-metric}
\label{sec:single}
As usual, we resort to evaluations of the RG flow within subspaces of reduced dimensionality, i.e.\
we truncate the full theory space. Our single-metric results are based on the Einstein--Hilbert
truncation \cite{R98},
\begin{equation}
\begin{split}
\Gamma_k\big[g,\bar{g},\xi,\bar{\xi}\, \big] = \;&\frac{1}{16\pi G_k} \int \! \td^d x \sqrt{g} \,
\big( -R + 2\Lambda_k \big) \\
& + \Gamma_k^\text{gf}\big[g,\bar{g}\big]
+\Gamma_k^\text{gh}\big[g,\bar{g},\xi,\bar{\xi}\, \big].
\end{split}
\label{eq:EHtrunc}
\end{equation}
Here $G_k$ and $\Lambda_k$ are the dimensionful Newton constant and cosmological constant,
respectively, $\Gamma_k^\text{gf}=\frac{1}{2\alpha}\frac{1}{16\pi G_k}\int\td^d x \sqrt{\bg} \,
\bar{g}^{\alpha\beta}\big(\mathcal{F}_\alpha^{\mu\nu} g_{\mu\nu}\big)\big(\mathcal{F}_\beta^{\rho\sigma}
g_{\rho\sigma}\big)$ is the gauge fixing action with $\mathcal{F}_\alpha^{\mu\nu}$ and $\alpha$ as
given in the previous section, and $\Gamma_k^\text{gh}$ denotes the corresponding ghost action
with ghost fields $\xi$ and $\bar{\xi}$. After having inserted the respective metric parametrization
into the ansatz \eqref{eq:EHtrunc}, the FRGE \eqref{eq:FRGE} can be used to extract
$\beta$-functions.
\subsection{Known results for the standard parametrization}
For comparison, we begin by quoting known results for the standard parametrization.
In $4$ dimensions the resulting $\beta$-functions for the dimensionless couplings $g_k=k^{d-2}G_k$
and $\lambda_k=k^{-2}\Lambda_k$ \cite{R98} give rise to the flow diagram shown in figure
\ref{fig:StdSingle}. In addition to the Gaussian fixed point at the origin there exists a
non-Gaussian fixed point (NGFP) with positive Newton constant, suitable for the Asymptotic Safety
scenario \cite{W80}. It is also crucial that there are trajectories emanating from the NGFP and
passing the classical regime close to the Gaussian fixed point \cite{RW04}. (In figure
\ref{fig:StdSingle} one can see the separatrix, a trajectory connecting the non-Gaussian to the
Gaussian fixed point.) It has turned out that the qualitative picture (existence of NGFP, number
of relevant directions, connection to classical regime) is extremely stable under many kinds of
modifications of the setup (truncation ansatz, gauge, cutoff, inclusion of matter, etc.);
for reviews see \cite{ASReviews}. In particular, changes in the cutoff shape function do not
alter the picture, except for insignificantly shifting numerical values like fixed point
coordinates.
\begin{figure}[tp]
\centering
\includegraphics[width=.78\columnwidth]{StdParamSingleMetric}
\caption{Flow diagram for the Einstein--Hilbert truncation in $d=4$ based on the \emph{standard
parametrization}. (First obtained in \cite{RS02} for
a sharp cutoff, here for the optimized cutoff \cite{L01}.)}
\label{fig:StdSingle}
\end{figure}
In $d=2+\epsilon$ dimensions the $\beta$-function of $g_k$ has the same structure as in the
perturbative analysis, see equation \eqref{eq:betaeps}, $\beta_g = \epsilon g - b g^2$. It is
possible to show that the coefficient $b$ is a universal number, i.e.\ it is independent of the
shape function, and its value is given by $b=\frac{38}{3}$ \cite{R98}. If, additionally, scalar
fields are included, then it reads $b=\frac{2}{3}(19-c)$, where $c$ denotes the number of scalar
fields. Thus, the standard parametrization gives rise to the \emph{universal} number for the
\emph{critical central charge}
\begin{equation}
c_\text{crit}=19 ,
\end{equation}
in agreement with the perturbative result \eqref{eq:ccritStdPert}.
\subsection{Results for the new parametrization}
We refrain from presenting details of the calculation and specify some intermediate results and
$\beta$-functions in appendix \ref{app:Details} instead.
\subsubsection*{Results in \texorpdfstring{$2+\epsilon$}{2+epsilon} dimensions}
Considering equations \eqref{eq:beta_g_FRG} and \eqref{eq:beta_lambda_FRG} for $d=2+\epsilon$ and
expanding in orders of $\epsilon$ yields the $\beta$-functions
\begin{equation}
\beta_g = \epsilon g - b g^2 ,
\label{eq:betag2d}
\end{equation}
with $b=\frac{2}{3}\Big[2 \Phi_0^1(0)+24\Phi_1^2(0)
-\Phi_0^1\big(-\frac{4}{\epsilon}\lambda\big)\Big]$, and
\begin{equation}
\beta_\lambda = -2\lambda + 2 g\Big[-2\Phi_1^1(0)
+\Phi_1^1\big(-{\textstyle\frac{4}{\epsilon}}\lambda\big)\Big],
\label{eq:betalambda2d}
\end{equation}
where we have dropped higher orders in $\lambda$, $g$ and $\epsilon$, since it is possible to prove
that the fixed point values of both $g$ and $\lambda$ must be of order $\mathcal{O}(\epsilon)$. Some of the
threshold functions $\Phi$ (cf.\ \cite{R98}) appearing in \eqref{eq:betag2d} and
\eqref{eq:betalambda2d} are independent of the underlying cutoff shape function $R^{(0)}(z)$.
Here we have $\Phi_0^1(0)=1$ and $\Phi_1^2(0)=1$ for any cutoff. Furthermore, for all standard
shape functions satisfying $R^{(0)}(z=0)=1$ we find
$\Phi_0^1\big(-\frac{4}{\epsilon}\lambda\big)=\big(1-\frac{4}{\epsilon}\lambda\big)^{-1}$.
Due to the occurrence of $\epsilon^{-1}$ in the argument of $\Phi_0^1$, the $\lambda$-dependence
does not drop out of $\beta_g$ at lowest order, but rather the combination $\lambda/\epsilon$
results in a finite correction. As an exception, the sharp cutoff \cite{RS02} implicates
$R^{(0)}(z=0)\rightarrow\infty$, leading to $\Phi_0^1\big(-\frac{4}{\epsilon}\lambda\big)
=1$.\footnote{For the sharp cutoff, $\Phi^1_n(w)=-\frac{1}{\Gamma(n)}\ln(1+w)+C$ is determined
up to a constant, which, for consistency, is chosen such that $\Phi^1_n(w=0)$ agrees with
$\Phi^1_n(0)$ for some other cutoff \cite{RS02}. In the limit $n\rightarrow 0$, however, the
$w$-dependence drops out completely, and $\Phi^1_0(w)^\text{sharp}=\Phi^1_0(0)^\text{other}$.
Since $\Phi^1_0(0)=1$ for any cutoff, we find $\Phi^1_0(w)^\text{sharp}=1 \;\, \forall w$.}
Thus, we find
\begin{equation}
\textstyle
b=\frac{2}{3}\Big[26 - \big(1-\frac{4}{\epsilon}\lambda\big)^{-1} \Big]
\label{eq:bcoeff}
\end{equation}
for all standard cutoffs, and $b=\frac{2}{3} \cdot 25$ for the sharp cutoff. The threshold
function $\Phi_1^1(w)$ is cutoff dependent, also at $w=0$, so $\beta_\lambda$ is nonuniversal, too.
Hence, both $\lambda_*$ and $g_*$ depend on the cutoff.
In order to calculate the critical central charge $c_\text{crit}$, we include minimally coupled scalar
fields $\varphi_i$ ($i=1,\ldots,c$) in our analysis, for instance by adding to $\Gamma_k$ in equation
\eqref{eq:EHtrunc} the action $\frac{1}{2} \sum_{i=1}^c\int\td^d x\sqrt{\bg} \, \varphi_i\big(-\bar{D}^2\big)
\varphi_i$. In this case, the coefficient $b$ is changed into
$b=\frac{2}{3}\big[26 - (1-\frac{4}{\epsilon}\lambda )^{-1}-c \big]$ for standard shape functions,
and $b=\frac{2}{3}(25-c)$ for the sharp cutoff. The critical value for $c$, determined by the
zero of $b$ at the NGFP, is computed for different shape functions: the optimized cutoff \cite{L01},
the ``$s$-class exponential cutoff'' \cite{LR02} and the sharp cutoff \cite{RS02}. In addition, we
might set $\lambda=0$ by hand in \eqref{eq:bcoeff} such that the result can be compared to the
perturbative studies \cite{KKN93} where the cosmological constant is not taken into account. If we
do so, we find indeed $c_\text{crit}=25$, reproducing the perturbative value given in equation
\eqref{eq:ccritNewPert}. For nonvanishing $\lambda$, however, \emph{the critical central charge is
cutoff dependent}, as can be seen in table \ref{tab:ccrit}. In conclusion, the nice (and expected)
result of a \emph{universal} value $c_\text{crit}$ found for the standard parametrization cannot be
transferred to the new parametrization. Nevertheless, we obtain a number close to $25$ for all
cutoffs considered, making contact with the CFT result.
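The critical central charges discussed above are simply the zeros of the coefficient $b$; a minimal symbolic check (using the $b$ coefficients exactly as quoted in the text) reads:

```python
import sympy as sp

c, lam, eps = sp.symbols('c lambda epsilon', real=True)

# b coefficients in beta_g = epsilon g - b g^2 with c minimally coupled scalars:
b_standard = sp.Rational(2, 3) * (19 - c)                       # standard parametrization
b_new = sp.Rational(2, 3) * (26 - 1 / (1 - 4 * lam / eps) - c)  # new param., standard cutoffs
b_sharp = sp.Rational(2, 3) * (25 - c)                          # new param., sharp cutoff

assert sp.solve(b_standard, c) == [19]
assert sp.solve(b_new.subs(lam, 0), c) == [25]  # cosmological constant set to zero
assert sp.solve(b_sharp, c) == [25]
```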
{\renewcommand{\arraystretch}{1.2}
\begin{table}[tp]
\begin{tabular}{cc}
\hline
Cutoff & $c_\text{crit}$ \\
\hline
$\quad$Any cutoff, but setting $\lambda=0 \quad$ & $25$ \\
Optimized cutoff & $\quad 25.226 \quad$ \\
Exponential cutoff ($s=1$) & $25.322$ \\
Exponential cutoff ($s=5$) & $25.190$ \\
Sharp cutoff & $25$ \\
\hline
\end{tabular}
\caption{Cutoff dependence of the critical central charge}
\label{tab:ccrit}
\end{table}%
}%
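Since the quoted coefficients are linear in $c$, the zero determining $c_\text{crit}$ can be
checked directly. The following Python sketch (an illustration added here; the function names are
ours) evaluates only the two cases for which $b$ is quoted in closed form above, namely the sharp
cutoff and the $\lambda=0$ limit of \eqref{eq:bcoeff}:

```python
from fractions import Fraction

def b_sharp(c):
    # Sharp cutoff: b = (2/3)(25 - c), as quoted in the text
    return Fraction(2, 3) * (25 - c)

def b_standard_lambda0(c):
    # Standard shape functions with lambda set to zero by hand:
    # b = (2/3)[26 - (1 - 4*lam/eps)**(-1) - c] reduces to (2/3)(26 - 1 - c)
    return Fraction(2, 3) * (26 - 1 - c)

# c_crit is the zero of b at the NGFP; both cases give the perturbative value
assert b_sharp(25) == 0
assert b_standard_lambda0(25) == 0
```

For the other shape functions the cutoff-dependent fixed-point value $\lambda_*$ enters through
$(1-\frac{4}{\epsilon}\lambda)^{-1}$, which is why the entries of table \ref{tab:ccrit} deviate
from $25$.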
\subsubsection*{Results in \texorpdfstring{$4$}{4} dimensions}
Our analysis in $2+\epsilon$ dimensions suggests that results in the new parametrization might
depend to a larger extent on the cutoff shape. In the following we confirm this conjecture by
considering global properties of the RG flow for different shape functions.
\textbf{i)} \emph{Optimized cutoff.} An evaluation of the $\beta$-functions in $d=4$ gives rise
to the flow diagram shown in figure \ref{fig:NewSingleOpt}.
\begin{figure}[htp]
\centering
\includegraphics[width=.87\columnwidth]{NewParamSingleMetricOpt}
\caption{Flow diagram in $d=4$ based on the \emph{new
parametrization} and the optimized cutoff.}
\label{fig:NewSingleOpt}
\end{figure}
The result is fundamentally different from what is known for the standard parametrization (cf.\
figure \ref{fig:StdSingle}). Although we find again a Gaussian and a non-Gaussian fixed point,
we encounter new properties of the latter. The NGFP is \emph{UV-repulsive} in both directions now.
Furthermore, it is surrounded by a closed UV-attractive \emph{limit cycle}. The singularity line
(dashed), where the $\beta$-functions diverge and beyond which the truncation ansatz is no longer
reliable, prevents the existence of globally defined trajectories emanating from the limit cycle
and passing the classical regime, i.e.\ there is no connection between the limit cycle and the
Gaussian fixed point. Trajectories inside the limit cycle are asymptotically safe in a generalized
sense since they approach the cycle in the UV, and they hit the NGFP in the infrared, but they can
never reach a classical region. Note that the limit cycle is similar to those found in references
\cite{HR12,DP13} which are based on nonlinear metric parametrizations, too.
\textbf{ii)} \emph{Sharp cutoff.} The flow diagram based on the sharp cutoff -- see figure
\ref{fig:NewSingleSharp} -- is similar to the ones found with the standard parametrization.
\begin{figure}[htp]
\centering
\includegraphics[width=.87\columnwidth]{NewParamSingleMetricSharp}
\caption{Flow diagram in $d=4$ based on the \emph{new
parametrization} and the sharp cutoff.}
\label{fig:NewSingleSharp}
\end{figure}
There is a Gaussian and a non-Gaussian fixed point. The NGFP is \emph{UV-attractive} in both $g$-
and $\lambda$-direction. There is \emph{no limit cycle}. Due to the singularity line, there is no
asymptotically safe trajectory that has a sufficiently extended classical regime close to the
Gaussian fixed point.
\textbf{iii)} \emph{Exponential cutoff.} The exponential cutoff with generic values for the
parameter $s$ gives rise to a flow diagram (not depicted here) that is somewhere in between
figure \ref{fig:NewSingleOpt} and figure \ref{fig:NewSingleSharp}. The NGFP is \emph{UV-repulsive}
as it is for the optimized cutoff. However, there is no closed limit cycle. Although a relic of
the cycle is still present, it does not form a closed line, but rather runs into the singularity
line. Again, there is no separatrix connecting the fixed points. Varying $s$ shifts the
coordinates of the NGFP. For $s\leq 0.93$ \emph{the fixed point even vanishes}, or, more precisely,
it is shifted beyond the singularity, leaving it inaccessible. Thus, the NGFP that seemed to be
indestructible for the standard parametrization can be made to disappear with the new parametrization!
In summary, fundamental qualitative features of the RG flow like the signs of critical exponents,
the existence of limit cycles, or the existence of suitable non-Gaussian fixed points are strongly
cutoff dependent.
\section{Results: Bi-metric}
\label{sec:bi}
For the bi-metric analysis \cite{bimetric,BR14} we consider the truncation ansatz
\begin{equation}
\begin{split}
\Gamma_k\big[g,\bar{g},\xi,\bar{\xi}\, \big] =\; &\frac{1}{16\pi G_k^\text{Dyn}} \int\! \td^d x \sqrt{g}
\big(\! -R + 2\Lambda_k^\text{Dyn} \big)\\
&+\frac{1}{16\pi G_k^\text{B}} \int\! \td^d x \sqrt{\bg} \big(\! -\bar{R} + 2\Lambda_k^\text{B}
\big)\\[0.1em]
&+ \Gamma_k^\text{gf}\big[g,\bar{g} \big] + \Gamma_k^\text{gh}\big[g,\bar{g},\xi,\bar{\xi}\, \big].
\end{split}
\label{eq:doubleEHtrunc}
\end{equation}
It consists of two separate Einstein--Hilbert terms belonging to the dynamical ('Dyn') and the
background ('B') metric with their corresponding couplings. To evaluate $\beta$-functions we
employ the conformal projection technique together with the $\Omega$-deformed $\alpha=1$ gauge
\cite{BR14}.
\subsection{Known results for the standard parametrization}
\label{sec:biKnown}
We quote the most important results obtained for the standard parametrization \cite{BR14}.
Since the background couplings $G_k^\text{B}$ and $\Lambda_k^\text{B}$ in the truncation ansatz
\eqref{eq:doubleEHtrunc} occur in terms containing the background metric only, they drop out when
calculating the second derivative of $\Gamma_k$ with respect to $h_{\mu\nu}$, and hence, they cannot
enter the RHS of the FRGE \eqref{eq:FRGE}. As a consequence, the RG flow of the dynamical coupling
sector is decoupled: $\beta_\lambda^\text{Dyn}\equiv \beta_\lambda^\text{Dyn}(\lambda^\text{Dyn},
g^\text{Dyn})$ and $\beta_g^\text{Dyn}\equiv \beta_g^\text{Dyn}(\lambda^\text{Dyn},g^\text{Dyn})$
form a closed system, so one can solve the RG equations of the 'Dyn' couplings independently
at first.
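The triangular structure of the equations can be mimicked by a toy system in which a closed 'Dyn'
equation drives the 'B' sector; the $\beta$-functions below are invented placeholders chosen only
to exhibit this hierarchy, not the actual gravitational ones:

```python
def beta_dyn(g):
    # Closed subsystem: depends on the 'Dyn' coupling only (placeholder)
    return g * (1.0 - g)

def beta_bg(gB, g_dyn):
    # Background sector: driven by the 'Dyn' solution (placeholder)
    return -gB + g_dyn

dt, g_dyn, gB = 0.01, 0.5, 0.0
for _ in range(2000):
    g_dyn += dt * beta_dyn(g_dyn)    # solve the 'Dyn' sector first ...
    gB += dt * beta_bg(gB, g_dyn)    # ... then insert it into the 'B' flow

assert abs(g_dyn - 1.0) < 1e-3       # 'Dyn' trajectory reaches its fixed point
assert abs(gB - 1.0) < 1e-2          # 'B' coupling tracks the moving zero
```

In this toy the zero of the 'B' equation sits at $g^\text{B}=g^\text{Dyn}(k)$ and therefore moves
along the flow, a caricature of the $k$-dependent background ``fixed points'' discussed next.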
On the other hand, the background $\beta$-functions depend on both dynamical and background
couplings. Therefore, the RG running of $g_k^\text{B}$ and $\lambda_k^\text{B}$ can be determined
only if a solution of the 'Dyn' sector is picked. With regard to the Asymptotic Safety program
we choose a 'Dyn' trajectory which emanates from the NGFP and passes the classical regime near the
Gaussian fixed point. This trajectory is inserted into the $\beta$-functions of the background
sector, making them explicitly $k$-dependent. The vector field these $\beta$-functions give rise to
depends on $k$, too, and possible ``fixed points'', i.e.\ simultaneous zeros of
$\beta_\lambda^\text{B}$ and $\beta_g^\text{B}$, become moving points. The UV-attractive ``moving
NGFP'' is called running attractor in \cite{BR14}. In figure \ref{fig:StdBiOpt} we show the vector
field of the background sector at $k\rightarrow\infty$ and the RG trajectory (thick) that starts
at the $k=0$ position of the running attractor and ends at its $k\rightarrow\infty$ position
(w.r.t.\ the inverse RG flow).
\begin{figure}[htp]
\centering
\includegraphics[width=\columnwidth]{StdParamBimetric}
\caption{Vector field for the background couplings at $k\rightarrow\infty$ and RG trajectory
that is asymptotically safe in the UV and restores split symmetry in the IR (left figure),
and underlying trajectory in the 'Dyn' sector (right figure), based on the \emph{standard
parametrization} and the optimized cutoff in $d=4$.}
\label{fig:StdBiOpt}
\end{figure}
As argued in \cite{BR14}, this specific trajectory is the only acceptable possibility once a 'Dyn'
trajectory is chosen: It combines the requirements of Asymptotic Safety (it approaches an NGFP in
the UV) and \emph{split symmetry restoration} in the infrared.
Recovering split symmetry at $k=0$ is of great importance. We already mentioned in the introduction
that the background metric is an auxiliary construction, and physical observables must not depend
on it. Thus, physical quantities derived from the full quantum action $\Gamma=\Gamma_{k=0}$ are
required not to have an extra $\bar{g}$-dependence, but to depend only on $g$. In the standard
parametrization, where $g=\bar{g}+h$, this means that $\bar{g}$ and $h$ can make their appearance only
via their sum (which is split symmetric). Hence, the demand for split symmetry originates from
requiring \emph{background independence}. Within the approximations of \cite{BR14}, this amounts,
at the level of the effective average action, to the conditions $1/G_k^\text{B}\rightarrow 0$ and
$\Lambda_k^\text{B}/G_k^\text{B}\rightarrow 0$. For any appropriate choice of initial conditions in
the 'Dyn' sector \emph{there exists a unique trajectory} in the 'B' sector complying with this
requirement in the infrared. This general result is independent of the chosen cutoff shape function.
\subsection{Results for the new parametrization}
We aim at finding an asymptotically safe trajectory that restores background independence at
$k=0$. (In the context of the new parametrization we no longer call it ``split symmetry'' in order
to keep the nonlinearity in mind.) We present some intermediate results of the calculation in appendix
\ref{app:bi}, where we mention the differences to the standard parametrization on the technical level.
In the following, we discuss the differences of the resulting RG flow and its dependence on the cutoff
shape.
\textbf{i)} \textit{Optimized cutoff.} An evaluation of the $\beta$-functions in the 'Dyn' sector
gives rise to the flow diagram displayed in figure \ref{fig:NewBiOptDyn}.
We discover a non-Gaussian fixed point, but it is rather close to the singularity line. As a
consequence, all trajectories emanating from this fixed point will hit the singularity after a
short RG time. It is \emph{impossible to find suitably extended trajectories}: they do not pass
the classical regime, and they never come close to an acceptable infrared limit. For this reason
it is pointless to investigate the possibility of restoration of background independence.
The background sector exhibits a UV-attractive NGFP, too, but due to the lack of an appropriate
infrared regime we do not show a vector field for the background couplings here.
\begin{figure}[tp]
\centering
\includegraphics[width=.8\columnwidth]{NewParamBimetricDynOpt}
\caption{Flow diagram of the 'Dyn' couplings in $d=4$ based on the \emph{new
parametrization} and the optimized cutoff.}
\label{fig:NewBiOptDyn}
\end{figure}
\textbf{ii)} \textit{Exponential cutoff.} We find the same qualitative picture as in figure
\ref{fig:NewBiOptDyn} which was based on the optimized cutoff. The exponential cutoff brings about
a UV-attractive non-Gaussian fixed point, but there are no trajectories that extend to a suitable
infrared region. Thus, there is \emph{no restoration of background independence}.
\textbf{iii)} \textit{Sharp cutoff.} The $\beta$-functions of the 'Dyn' coup\-lings lead to a
Gaussian and a non-Gaussian fixed point. We observe that $\beta_\lambda^\text{Dyn}$ is
proportional to $\lambda^\text{Dyn}$, so trajectories cannot cross the line at
$\lambda^\text{Dyn}=0$. However, there are trajectories that connect the NGFP to the classical
regime, comparable with the ones found for the standard parametrization. Once such a trajectory
is chosen, it serves as a basis for further analyses since it can be inserted into the
$\beta$-functions of the background sector to study the corresponding RG flow. In this way, we
find the same running attractor mechanism as in section \ref{sec:biKnown} for the standard
parametrization. As can be seen in figure \ref{fig:NewBiSharp}, there is an NGFP present at
$k\rightarrow\infty$, suitable for the Asymptotic Safety scenario.
\begin{figure}[htp]
\centering
\includegraphics[width=\columnwidth]{NewParamBimetricSharp}
\caption{Vector field for the background couplings at $k\rightarrow\infty$ and RG trajectory
that is asymptotically safe in the UV and restores background independence in the IR (left
figure), and underlying trajectory in the 'Dyn' sector (right figure), based on the
\emph{new parametrization} and the sharp cutoff in $d=4$.}
\label{fig:NewBiSharp}
\end{figure}
The thick trajectory is again the unique one that starts at the $k=0$ position of the running
attractor and ends at its $k\rightarrow\infty$ position. Even though the curve has a different form
compared with figure \ref{fig:StdBiOpt}, it has the same essential properties. In particular, it
\emph{restores background independence in the infrared}. Since, in addition, it is asymptotically
safe, it is an eligible candidate for defining a fundamental theory.
To summarize, the possibility of finding a suitable RG trajectory combining the requirements of
Asymptotic Safety and restoration of background independence at $k=0$ depends in a crucial way
on the cutoff shape for the new parametrization.
\section{Conclusion}
We have investigated the properties of the ``new'' exponential metric parametrization
$g_{\mu\nu}=\bar{g}_{\mu\rho}(e^h)^\rho{}_\nu$, which has been contrasted with the standard background
field split, $g_{\mu\nu}=\bar{g}_{\mu\nu}+h_{\mu\nu}$. When inserting the exponential relation into the classical
Einstein--Hilbert action and expanding in orders of $h_{\mu\nu}$ we obtain
\begin{equation}
\begin{split}
S^\text{EH}[g] &= S^\text{EH}\big[\bar{g} e^h\big]=S^\text{EH}\big[\bar{g}+h+\mathcal{O}(h^2)\big] \\
&= S^\text{EH}[\bar{g}] + \int\td^d x\frac{\delta S^\text{EH}}{\delta g_{\mu\nu}(x)}h_{\mu\nu}(x) + \mathcal{O}(h^2).
\end{split}
\end{equation}
Thus, the equations of motion are given by the standard ones,
$\frac{\delta S^\text{EH}}{\delta h_{\mu\nu}}\big|_{g=\bar{g}}
= \frac{\delta S^\text{EH}}{\delta g_{\mu\nu}}\big|_{g=\bar{g}}
= \frac{1}{16\pi G}\big(\bar{G}^{\mu\nu}+\bar{g}^{\mu\nu}\Lambda\big) = 0$, i.e.\ the two parametrizations
give rise to equivalent theories at the classical level. Since the quantum character of
gravity is not known, there is no reason, a priori, to prefer one parametrization
to another. So far, almost all Asymptotic Safety related studies considered the standard
parametrization. In this work we focused on the new one instead.
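The first-order agreement derived above is easy to verify numerically. In the following sketch the
background metric is taken to be the identity for simplicity, and the matrix exponential is summed
from its power series:

```python
import numpy as np

def expm_series(h, terms=20):
    # Matrix exponential via its (rapidly converging) power series
    out = np.eye(h.shape[0])
    acc = np.eye(h.shape[0])
    for n in range(1, terms):
        acc = acc @ h / n
        out = out + acc
    return out

rng = np.random.default_rng(0)
h = 1e-4 * rng.standard_normal((3, 3))
h = (h + h.T) / 2                   # symmetric fluctuation field
gbar = np.eye(3)                    # background metric (identity for simplicity)

g_exp = gbar @ expm_series(h)       # new parametrization  g = gbar e^h
g_lin = gbar + h                    # standard split       g = gbar + h

# The difference starts at order h^2, as in the expansion above
assert np.max(np.abs(g_exp - g_lin)) < 10 * np.max(np.abs(h)) ** 2
```

Both parametrizations thus share the same linearized physics; the differences start at second
order in $h_{\mu\nu}$.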
We have shown that there is a one-to-one correspondence between dynamical metrics
$g_{\mu\nu}=\bar{g}_{\mu\rho}(e^h)^\rho{}_\nu$ and symmetric fields $h_{\mu\nu}$ in the exponent, which
is particularly important from a path integral perspective. It remains an open question, however,
if this correspondence can be transferred to Lorentzian spacetimes in the sense that $g_{\mu\nu}$ and
$\bar{g}_{\mu\nu}$ have the same signature when related by the new parametrization.
Do we expect different results for the two parametrizations at all? According to the
equivalence theorem, a field redefinition in the path integral does not change S-matrix
elements, but this is an on shell argument, where internal quantum fluctuations are integrated
out completely. In contrast, in the effective average action $\Gamma_k$ fluctuations with
momenta below $k$ are suppressed, and nowhere in the FRGE do we go on shell. Therefore, we
do indeed expect differences in the $\beta$-functions and in the structure of the corresponding RG flow.
Due to the lack of directly measurable physical observables exhibiting quantum gravity effects,
these off shell quantities are of considerable interest.
Clearly, even the role of Newton's constant is changed for the new parametrization.
This can be understood as follows. In order to identify Newton's constant with the strength
of the gravitational interaction in the standard parametrization, one usually rescales
the fluctuations $h_{\mu\nu}$ such that
\begin{equation}
g_{\mu\nu} = \bar{g}_{\mu\nu} + \sqrt{32\pi G_k}\, h_{\mu\nu} .
\end{equation}
In this way, each gravitational vertex with $n$ legs is associated with a factor
$(\sqrt{32\pi G_k}\,)^{n-2}$. For the new parametrization we can consider a similar rescaling
of $h_{\mu\nu}$, leading to the same factor appearing in the $n$-point functions. The difference
resides in the fact that there are new terms and structures in $\Gamma_k^{(n)}$ when using
the new parametrization. As already indicated in equation \eqref{eq:2ndVar}, these
additional contributions to each vertex are due to the chain rule. Hence, Newton's constant
is accompanied by different terms in the $n$-point functions.
In fact, these general considerations are reflected in our findings. To summarize them: 1.) We
find quite different $\beta$-functions and new structures in the RG flow. 2.) The calculations
based on a type I cutoff result in a strong dependence on the cutoff shape function.
This can be seen most clearly in $d=4$ dimensions. In the single-metric computation we encountered
a limit cycle and a UV-repulsive NGFP for the optimized cutoff, whereas the sharp cutoff gives
rise to a UV-attractive NGFP without limit cycle. Furthermore, in the bi-metric setting with a
sharp cutoff there exists an asymptotically safe trajectory that restores background independence
in the infrared, while it is not possible to find such trajectories when using the optimized
cutoff. It is remarkable and somewhat unexpected that the sharp cutoff leads to the most convincing
results.
Our observations seem to suggest that results based on the new parametrization are less
reliable or even unphysical. On the other hand, the strong cutoff dependence compared to the
linear parametrization could be seen from a different perspective as well: If Quantum Einstein
Gravity with $g_{\mu\nu}=\bar{g}_{\mu\rho}(e^h)^\rho{}_\nu$ is asymptotically safe, probably more
invariants in the truncation ansatz are needed to get a clear picture. The nonlinear relation
for the metric might give more importance to the truncated higher order terms.
Moreover, it can be speculated that the strong dependence on the cutoff shape might be a
peculiarity of the type I cutoff. As has been argued in \cite{DP13}, in some situations the type
II cutoff leads to correct physical results, whereas the type I cutoff does not. Future calculations
may show if a similar reasoning applies here as well, i.e.\ if the essential properties of the RG
flow obtained with a type II cutoff do not depend on the shape function to such a great extent.
We conjecture that the limit cycle is a consequence of a nonlinear parametrization in
combination with a type I cutoff. Instead, a type II cutoff might lead to physical and stable
results so that the advantages of the new parametrization become more apparent.
In $d=2+\epsilon$ dimensions we can reproduce the critical value of the central charge
obtained with a perturbative calculation, $c_\text{crit}=25$, when the cosmological constant $\lambda$ is
set to zero. For nonvanishing $\lambda$ we find a slight cutoff dependence of $c_\text{crit}$, but it
remains still close to $25$. Since it is this number that makes contact to vertex operator
calculations and other established CFT arguments, the new parametrization seems to be appropriate
for comparisons and further applications after all.
\begin{acknowledgments}
The author would like to thank M.~Reuter and S.~Lippoldt for many extremely useful suggestions.
He is also grateful to H.~Gies, R.~Percacci, A.~Codello, K.~Falls, D.~Benedetti and A.~Eichhorn
for stimulating discussions at the ERG conference 2014.
\end{acknowledgments}
\section{Introduction}
Many galaxy clusters are sources of radio emission. One such class of
sources is that of radio mini-halos. Mini-halos are diffuse, steep-spectrum synchrotron sources found in the cores of so-called
``cool-core'' clusters. These sources typically are associated with a
central AGN and extend out to approximately the cooling radius of the
cluster gas \citep[for a review see][]{fer08}.
A number of mini-halos have emission that is correlated on the sky
with spiral-shaped ``cold fronts'' seen in the X-ray emission, believed to be the signature of
sloshing of the cluster's cool core gas \citep{MVF03,asc06}. \citet{maz08} discovered this correlation in two clusters, and
suggested that the correlation resulted from a population of
relativistic electrons that was reaccelerated by turbulence generated by the sloshing
motions. In order to determine whether or not the reacceleration efficiency
resulting from this turbulence is sufficient to reaccelerate
electrons and produce the corresponding radio emission, we have
performed MHD simulations of gas sloshing in a galaxy cluster with
tracer particles acting as the relativistic electrons.
\begin{figure*}[t!]
\resizebox{\hsize}{!}{\includegraphics[clip=true]{temp.eps}\includegraphics[clip=true]{bmag.eps}}
\caption{\footnotesize Slices through the center of the simulation
domain of gas sloshing in the center of a galaxy cluster. Left:
Temperature (keV), showing prominent spiral-shaped ``cold fronts''.
Right: Magnetic field strength ($\mu$G), showing the field
amplification near the front surfaces. Vectors show the magnetic
field direction. Each panel is 400~kpc on a side.
}
\label{fig:sloshing}
\end{figure*}
\section{Simulations}
Our simulations have been performed using FLASH 3, an adaptive mesh refinement hydrodynamics code with support for
simulations of magnetized fluids. Our simulations are set up in the manner of \citet{asc06} and
\citet{zuh10}. In this scenario, a large, relaxed cool-core cluster and a small, gasless
subcluster are set on a trajectory in which the subcluster will pass by the core. The gravitational force from
this subcluster acts on both the gas and dark matter cores of the main
cluster, but due to ram pressure the gas core becomes separated from
the center of the potential well and begins to ``slosh'' back and forth in the gravitational
potential, forming spiral-shaped cold fronts (see Figure
\ref{fig:sloshing}, left panel). The initial magnetic field in our simulations
is set up as a tangled field with an average field strength
proportional to the gas pressure ($\beta = p/p_B \approx$~100).
For the relativistic electrons, we employ a simple model, assuming
they are passive tracer particles advected along
with the fluid motions. Each particle is given an initial energy and carries with it the properties of the
local fluid. With this information, we derive the evolution of the
particle energy along its trajectories by taking into account the
relevant physical properties.
\begin{figure*}[t!]
\begin{center}
\includegraphics[width=0.5\textwidth]{pow_spec.eps}
\includegraphics[width=0.44\textwidth]{vels.eps}
\caption{\footnotesize The turbulent velocity structure of the
sloshing gas. Left: The power spectrum of the velocity field. Solid
line indicates the unfiltered power spectrum, dashed line the
filtered power spectrum. The dotted line shows what would be
expected for a Kolmogorov spectrum for comparison. Right:
Mass-weighted, projected turbulent velocity of the gas in units of $\mathrm {km~s}^{-1}$. The most
significant turbulence is contained within the envelope of the cold
fronts. The panel is 400~kpc on a side.
}
\label{fig:vels}
\end{center}
\end{figure*}
We assume that electrons are reaccelerated via transit-time damping
(TTD) of magnetosonic turbulent modes \citep{cas05,bru07,bru10}. We determine the turbulent velocity field in our simulation by a
process of filtering \citep{dol05,vaz06,vaz09}. The velocity field is
assumed to be a sum of a ``bulk'' component and a ``turbulent''
component. For each position the local turbulent velocity is calculated by quadratically interpolating the mean
velocity from surrounding boxes of width $\sim$20~kpc and then
subtracting this mean value from the total velocity. The turbulent reacceleration coefficient for
the electrons then may be determined using the TTD formalism. Figure
\ref{fig:vels} shows the resulting turbulent velocity has a power spectrum
that is close to Kolmogorov (left panel) and is strongest in the
region of the sloshing motions (right panel). We take into account radiative (synchrotron and inverse Compton) and
Coulomb losses to calculate the evolution of the particle energy, with
the physical parameters determined from the properties of the
magnetized fluid. With the updated particle energies, we can then
compute the associated synchrotron emissivities that may be projected
along a line of sight to produce a map of radio emission.
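The decomposition into bulk and turbulent components can be illustrated in one dimension with a
simple box-averaged mean, a simplified stand-in for the quadratic interpolation over $\sim$20~kpc
boxes used in the actual analysis (the field and amplitudes below are invented test data):

```python
import numpy as np

def turbulent_component(v, box=21):
    # Subtract a locally box-averaged "bulk" flow (1D stand-in for the 3D
    # scheme described above, which interpolates means from ~20 kpc boxes)
    kernel = np.ones(box) / box
    return v - np.convolve(v, kernel, mode="same")

x = np.linspace(0.0, 2.0 * np.pi, 1024)
bulk = np.sin(x)                              # smooth large-scale sloshing
rng = np.random.default_rng(1)
turb = 0.1 * rng.standard_normal(x.size)      # small-scale fluctuations

v_turb = turbulent_component(bulk + turb)

# Away from the edges the filter removes the bulk flow and keeps the
# small-scale part, so v_turb correlates strongly with the injected `turb`
inner = slice(50, -50)
corr = np.corrcoef(v_turb[inner], turb[inner])[0, 1]
assert corr > 0.9
```

The subtraction removes the smooth sloshing motion while retaining the small-scale fluctuations
that enter the turbulent reacceleration coefficient.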
\section{Results}
Two effects contribute to the association of the radio emission with
the sloshing cold fronts. Strong shear
flows are associated with the cold front surfaces, and these flows
stretch and amplify magnetic field lines parallel to these
surfaces \citep{kes10}. Our simulations show that the degree of amplification of the
magnetic energy along a cold front can be an order of magnitude or
more above the initial field energy (ZuHone et al. 2011, in
preparation; also see Figure \ref{fig:sloshing}, right panel).
\begin{figure*}[t!]
\resizebox{\hsize}{!}{\includegraphics[clip=true]{mini_halo_z.eps}}
\caption{\footnotesize Mock X-ray images of a sloshing gas core with
mock 300 MHz radio contours overlaid, projected along the z-axis of
the simulation. Left: X-ray surface brightness in the 0.5-7.0
keV band. Right: ``Spectroscopic-like'' projected temperature. Contours begin at 0.5 mJy/beam and are spaced
by a factor of $\sqrt{2}$. Each frame is 400~kpc on a side.
}
\label{fig:mini_halo}
\end{figure*}
Our most intriguing results come from the integration of electron
energies along the trajectories of our tracer particles. Beginning
with a spherical distribution of particles with radius $r$ = 200~kpc,
and with the number density proportional to the local gas density, we assign electron energies to
each tracer particle from an initial power-law distribution, set up
shortly after the sloshing period begins. After evolving the particle
energies along the trajectories of the tracer particles for
approximately 400~Myr, we find that most of the electrons have cooled
below the threshold for synchrotron emission, with the exception of
those associated with the envelope of the sloshing motions. Figure
\ref{fig:mini_halo} shows mock observations of X-ray surface
brightness and projected ``spectroscopic-like'' temperature with radio
contours overlaid. There is a clear correspondence with the region
associated with the cold fronts and the radio emission from the
mini-halo. This is also the region where the turbulent motions are
strongest (see Figure \ref{fig:vels}, right panel). If the electrons
are assumed to simply cool without reacceleration, we do not see such emission.
\section{Conclusions}
We have performed MHD simulations of gas sloshing in clusters of
galaxies, with the aim of determining whether or not the correlation
between radio mini-halo emission and sloshing cold fronts in some
clusters can be explained by the reacceleration of relativistic
electrons by turbulence associated with the sloshing motions. Our
initial results are very promising, indicating that the combined
effects of the amplified magnetic field and the turbulence associated
with the sloshing motions reaccelerates a population of relativistic
electrons within the envelope of the cold fronts which emit
synchrotron radiation from these regions. In our procedure we assume a
population of seed electrons distributed throughout the sloshing region. Although we do not model the injection process of these
particles, several mechanisms may provide an efficient source of fresh
electrons to reaccelerate in cluster cool-cores \citep[see discussion in][]{cas08}. Further work will
also detail differences in prediction with other models for mini-halos.
\begin{acknowledgements}
J.A.Z. is supported under {\it Chandra} grant GO8-9128X. The software used in this work was in part developed by the DOE-supported ASC / Alliance Center for Astrophysical Thermonuclear Flashes at the University of Chicago.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}\label{sectionintroduction}
\subsection{Statement of the Theorem}\label{subsectionstatement}
Goodwillie's isomorphism \cite{GoodWillieRelativeK86}:
\begin{equation}\label{equgoodwillie}
K_{n+1}(R,I)\otimes\mathbb{Q}\cong HC_{n}(R,I)\otimes\mathbb{Q},
\end{equation}
relating the relative algebraic $K$-theory and relative cyclic homology of a ring $R$ with respect to a $2$-sided nilpotent ideal $I$, together with the $\lambda$-decomposition of cyclic homology, which takes the form \cite{LodayCyclicHomology98}:
\begin{equation}\label{equlambda}
HC_n(R;k)\cong \frac{\Omega_{R/k} ^n}{d\Omega_{R/k} ^{n-1}}\oplus H_{dR} ^{n-2}(R;k)\oplus H_{dR} ^{n-4}(R;k)\oplus...,
\end{equation}
for a smooth algebra $R$ over a commutative ring $k$ containing $\mathbb{Q}$,
highlight the relationships among algebraic $K$-theory, cyclic homology, and differential forms.\footnotemark\footnotetext{I have chosen notation and context similar to that of Loday \cite{LodayCyclicHomology98} and Weibel \cite{WeibelKBook}. Goodwillie \cite{GoodWillieRelativeK86} works in the more general context of {\it simplicial rings,} and uses $K_n(f)$ and $HC_n(f)$ to denote the relative groups $K_{n-1}(R,I)$ and $HC_{n-1}(R,I)$, where $f$ is the canonical surjection $R\rightarrow R/I$. Note the difference of index conventions: Goodwillie (page 359) defines $K_n(f)$ to be the $(n-1)$st homotopy group of the homotopy fiber of the morphism $\mbf{K}(R)\rightarrow \mbf{K}(R/I)$ of pointed simplicial sets or spectra, while Loday (also page 359) and Weibel (Chapter IV, page 8) define $K_{n}(R,I)$ to be the $n$th homotopy group of the analogous fiber. This leads to different numberings in the long exact sequences of Goodwillie (remark 3, page 359) and Loday (11.2.19.2, page 359).} Goodwillie's isomorphism is a {\it relative} example of a rational isomorphism between an algebraic $K$-theory and a cohomology theory (compare \cite{FriedlanderRational03}); i.e., an isomorphism after tensoring both objects with $\mathbb{Q}$. The first summand $\Omega_{R/k} ^n/d\Omega_{R/k} ^{n-1}$ appearing in equation \hyperref[equlambda]{\ref{equlambda}} is the $n$th module of K\"{a}hler differentials of $R$ relative to $k$, modulo exact differentials. It is roughly analogous to the $(n+1)$st Milnor $K$-theory group $K_{n+1}^{\tn{\fsz{M}}}(R)$, which maps canonically into the first summand of the corresponding $\lambda$-decomposition of the algebraic $K$-theory group $K_{n+1}(R)$.\footnotemark\footnotetext{Here, $K_{n+1}(R)$ may be taken to be Quillen $K$-theory. 
The map $K_{n+1}^{\tn{\fsz{M}}}(R)\rightarrow K_{n+1}(R)$ sends the Steinberg symbol $\{r_0,r_1,...,r_n\}$ to the product $r_0\times r_1\times...\times r_{n}$, where each factor $r_j$ in the latter product is regarded as an element of $K_1(R)\cong R^*$, and where the multiplication $\times$ is in the ring $K(R)$. See Weibel \cite{WeibelKBook} Chapter IV, pages 7-8, for details. This map is not injective in general, even for fields. See, for example, Weibel \cite{WeibelKBook} Chaper IV, exercise 1.12.} The remaining summands $H_{\tn{\fsz{dR}}} ^{n-2j}(R;k)$ are de Rham cohomology modules; i.e., the cohomology modules of the algebraic de Rham complex $(\Omega_{R/k}^\bullet,d)$.
In this paper I prove the following Goodwillie-type theorem for relative Milnor $K$-theory in the context of commutative rings:
\vspace*{.2cm}
\begin{theorem*}Suppose that $R$ is a split nilpotent extension of a $5$-fold stable ring $S$, with extension ideal $I$,
whose index of nilpotency is $N$. Suppose further that every positive integer less than or equal to $N$
is invertible in $S$. Then for every positive integer $n$,
\begin{equation}\label{equmaintheorem}K_{n+1} ^{\tn{\fsz{M}}}(R,I)\cong \frac{\Omega_{R,I} ^n}{d\Omega_{R,I} ^{n-1}}.\end{equation}
\end{theorem*}
Here, $R$ and $S$ are commutative rings with identity, and $K_{n+1} ^{\tn{\fsz{M}}}(R,I)$ is the $(n+1)$st Milnor $K$-group of $R$ relative to $I$. This group is what Kerz \cite{KerzMilnorLocal} calls the ``na\"{i}ve Milnor $K$-group;" see section \hyperref[subsectionsymbolic]{\ref{subsectionsymbolic}} below for more details. The differentials are {\it absolute} K\"{a}hler differentials, in the sense that they are differentials with respect to $\mathbb{Z}$. They are {\it relative} to $I$ in the same sense that the $K$-groups are relative to $I$. Because $R$ is a split extension of $S$, the group $K_{n+1} ^{\tn{\fsz{M}}}(R,I)$ may be identified with the kernel $\tn{Ker}[K_{n+1} ^{\tn{\fsz{M}}}(R)\rightarrow K_{n+1} ^{\tn{\fsz{M}}}(S)]$, and the group $\Omega_{R,I} ^n$ may be identified with the kernel $\tn{Ker}[\Omega_{R/\mathbb{Z}} ^n\rightarrow \Omega_{S/\mathbb{Z}} ^n]$, where both maps are induced by the split surjection $R\rightarrow S$. The class of $m$-fold stable rings, defined in section \hyperref[subsectionstability]{\ref{subsectionstability}} below, includes any local ring with at least $m+2$ elements in its residue field.
The isomorphism \hyperref[equmaintheorem]{\ref{equmaintheorem}} is the map
\[\phi_{n+1}:K_{n+1} ^{\tn{\fsz{M}}}(R,I)\longrightarrow\frac{\Omega_{R,I} ^n}{d\Omega_{R,I} ^{n-1}}\]
\begin{equation}\label{equationphi}
\{r_0,r_1,...,r_n\}\mapsto\log(r_0)\frac{dr_1}{r_1}\wedge...\wedge\frac{dr_n}{r_n},
\end{equation}
where $r_0$ belongs to the subgroup $(1+I)^*$ of the multiplicative group $R^*$ of $R$, and $r_1,...,r_n$ belong to $R^*$. The symbol $\{r_0,r_1,...,r_n\}$ is called a Steinberg symbol; such symbols generate the Milnor $K$-group $K_{n+1} ^{\tn{\fsz{M}}}(R,I)$, as explained in section \hyperref[subsectionsymbolic]{\ref{subsectionsymbolic}} below. The logarithm is understood in the sense of power series. The inverse isomorphism is the map
\[\psi_{n+1}:\frac{\Omega_{R,I} ^n}{d\Omega_{R,I} ^{n-1}}\longrightarrow K_{n+1} ^{\tn{\fsz{M}}}(R,I)\]
\begin{equation}\label{equationpsi}
r_0dr_1\wedge...\wedge dr_m\wedge dr_{m+1}\wedge...\wedge dr_n\mapsto\{e^{r_0r_{m+1}...r_{n}},e^{r_1},...,e^{r_m},r_{m+1},...,r_{n}\},
\end{equation}
where the elements $r_0,r_1,...,r_m$ belong to the ideal $I$, and the elements $r_{m+1},...,r_{n}$ belong to the multiplicative group $R^*$ of $R$. Part of the proof of the theorem involves verifying that the formulae \hyperref[equationphi]{\ref{equationphi}} and \hyperref[equationpsi]{\ref{equationpsi}}, defined in terms of special group elements, extend to homomorphisms.
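As a quick sanity check (not a substitute for the proof in section \hyperref[sectionproof]{\ref{sectionproof}}), one can verify on a single generator with $n=1$ that the two formulae above are mutually inverse. Take $r_0\in I$ and $r_1\in R^*$; then $r_0r_1$ is nilpotent, so $e^{r_0r_1}\in(1+I)^*$ is a finite sum, and

```latex
\psi_{2}(r_0\,dr_1)=\{e^{r_0r_1},\,r_1\},
\qquad
\phi_{2}\big(\{e^{r_0r_1},\,r_1\}\big)
   =\log\big(e^{r_0r_1}\big)\,\frac{dr_1}{r_1}
   =r_0r_1\,\frac{dr_1}{r_1}
   =r_0\,dr_1.
```

The formal identity $\log(e^t)=t$ applies here because $t=r_0r_1$ is nilpotent and every positive integer up to the index of nilpotency is invertible.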
\subsection{Structure of the Paper}\label{subsectionstructure}
{\bf Section \hyperref[sectionprelim]{\ref{sectionprelim}}} provides context and background involving split nilpotent extensions, Van der Kallen stability, symbolic $K$-theory, and K\"{a}hler differentials.
\begin{itemize}
\item Section \hyperref[subsectioncontextual]{\ref{subsectioncontextual}} places the theorem in its proper historical and mathematical context.
\item Section \hyperref[subsectionnilpotent]{\ref{subsectionnilpotent}} introduces split nilpotent extensions of rings.
\item Section \hyperref[subsectionstability]{\ref{subsectionstability}} discusses Van der Kallen's stability criterion for rings, and includes an easy lemma on the behavior of stability under split nilpotent extensions.
\item Section \hyperref[subsectionsymbolic]{\ref{subsectionsymbolic}} introduces Milnor $K$-theory and Dennis-Stein $K$-theory, two different symbolic $K$-theories. The definition of Milnor $K$-theory used is the na\"{i}ve one in terms of tensor algebras. Dennis-Stein $K$-theory is defined here only for $K_2$. Brief historical context is provided, and different definitions appearing in the literature are mentioned. In particular, Kerz's ``improved Milnor $K$-theory," and Thomason's nonexistence proof for ``ideal global Milnor $K$-theory," are cited.
\item Section \hyperref[subsectionsymbolicstability]{\ref{subsectionsymbolicstability}} presents two useful existing results relating symbolic $K$-theories under stability hypotheses. The first, theorem \hyperref[theoremvanderkallen]{\ref{theoremvanderkallen}}, is Van der Kallen's isomorphism between the second Milnor $K$-theory group and the second Dennis-Stein $K$-theory group in the $5$-fold stable case. The second, theorem \hyperref[theoremmaazen]{\ref{theoremmaazen}}, is Maazen and Stienstra's isomorphism between relative $K_2$ and the corresponding second relative Dennis-Stein group of a split radical extension.
\item Section \hyperref[subsectionkahlerbloch]{\ref{subsectionkahlerbloch}} introduces absolute K\"{a}hler differentials, and cites a famous result of Bloch, theorem \hyperref[theorembloch]{\ref{theorembloch}}, which expresses relative $K_2$ of a split nilpotent extension in terms of absolute K\"{a}hler differentials. This theorem provides the base case of the main theorem in section \hyperref[subsectionbasecase]{\ref{subsectionbasecase}}.
\end{itemize}
{\bf Section \hyperref[sectionSteinbergKahler]{\ref{sectionSteinbergKahler}}} supplies computational tools for working with Milnor $K$-theory and K\"{a}hler differentials.
\begin{itemize}
\item Section \hyperref[subsectionnotation]{\ref{subsectionnotation}} fixes notation and conventions designed to streamline the proof in section \hyperref[sectionproof]{\ref{sectionproof}}. These devices are specific to this paper, although they could be used to advantage in any similar context.
\item Section \hyperref[generatorsMilnor]{\ref{generatorsMilnor}} discusses Milnor $K$-theory and relative Milnor $K$-theory in terms of generators and relations. In particular, lemmas \hyperref[lemrelationsstable]{\ref{lemrelationsstable}} and \hyperref[lemrelativegenerators]{\ref{lemrelativegenerators}} establish the computational convenience of split nilpotent extensions of $5$-fold stable rings in this context.
\item Section \hyperref[subsectiongeneratorsKahler]{\ref{subsectiongeneratorsKahler}} provides similar results for K\"{a}hler differentials.
\item Section \hyperref[subsectiondlogdeRhamWitt]{\ref{subsectiondlogdeRhamWitt}} introduces the canonical $d\log$ map, which is a homomorphism of graded rings from Milnor $K$-theory to the absolute K\"{a}hler differentials. The $d\log$ map plays a specific role in the proof of lemma \hyperref[lempatchingphi]{\ref{lempatchingphi}}. This section also mentions the de Rham-Witt viewpoint, as suggested by Van der Kallen and Hesselholt.
\end{itemize}
{\bf Section \hyperref[sectionproof]{\ref{sectionproof}}} presents the proof of the theorem.
\begin{itemize}
\item Section \hyperref[subsectionstrategy]{\ref{subsectionstrategy}} outlines the strategy of proof: ``induction and patching." This section also explains why na\"{i}ve ``simpler" approaches seem to run into trouble.
\item Section \hyperref[subsectionbasecase]{\ref{subsectionbasecase}} presents the base case of the theorem: $K_{2} ^{\tn{\fsz{M}}}(R,I)\cong \Omega_{R,I} ^1/dI$, proved by combining theorems \hyperref[theoremvanderkallen]{\ref{theoremvanderkallen}}, \hyperref[theoremmaazen]{\ref{theoremmaazen}}, and \hyperref[theorembloch]{\ref{theorembloch}}. The isomorphisms in both directions are described explicitly.
\item Section \hyperref[subsectionexplicitinduction]{\ref{subsectionexplicitinduction}} states the induction hypothesis in detail.
\item Section \hyperref[subsectionanalysisphi]{\ref{subsectionanalysisphi}} gives the construction of the map $\displaystyle\phi_{n+1}: K_{n+1} ^{\tn{\fsz{M}}}(R,I)\rightarrow \Omega_{R,I} ^n/d\Omega_{R,I} ^{n-1}$, and the proof that it is a surjective homomorphism. The strategy is to ``patch together" maps $\Phi_{n+1,j}$ for $0\le j\le n-1$.
\item Section \hyperref[subsectionanalysispsi]{\ref{subsectionanalysispsi}} completes the proof of the theorem by giving the construction of the map $\displaystyle\psi_{n+1}:\Omega_{R,I} ^n/d\Omega_{R,I} ^{n-1}\rightarrow K_{n+1} ^{\tn{\fsz{M}}}(R,I)$, and the proof that $\phi_{n+1}$ and $\psi_{n+1}$ are inverse isomorphisms.
\end{itemize}
{\bf Section \hyperref[sectiondiscussion]{\ref{sectiondiscussion}}} includes discussion and applications of the theorem.
\begin{itemize}
\item Section \hyperref[subsectionGG]{\ref{subsectionGG}} discusses the initial motivation for the paper, which is the recent work by Green and Griffiths on the infinitesimal structure of cycle groups and Chow groups.
\item Section \hyperref[subsectionsimilarresults]{\ref{subsectionsimilarresults}} examines existing results concerning relative $K$-theory which are similar to the theorem in this paper. First, an early special case of the theorem, due to Van der Kallen, leads to an expression for the tangent group to the second Chow group $\tn{Ch}^2(X)$ of a smooth projective surface $X$ defined over a field containing the rational numbers. This result, spelled out in equation \hyperref[linearblochstheorem]{\ref{linearblochstheorem}}, plays a prominent role in the work of Green and Griffiths. Second, Stienstra carries this line of reasoning further to define the formal completion of $\tn{Ch}^2(X)$, essentially by sheafifying Bloch's theorem \hyperref[theorembloch]{\ref{theorembloch}} for relative $K_2$. The resulting expression appears in equation \hyperref[equstienstra]{\ref{equstienstra}}. Third, Hesselholt has proven an analogous result for the relative $K$-theory (not just Milnor $K$-theory) of a truncated polynomial algebra. This result appears in equation \hyperref[equhesselholt]{\ref{equhesselholt}}. The first summand on the right-hand side gives a special case of the theorem in this paper, under appropriate assumptions on the underlying ring.
\item Section \hyperref[subsectiontangentfunctors]{\ref{subsectiontangentfunctors}} discusses various ways of generalizing Green and Griffiths' tangent functors in a geometric context.
\end{itemize}
\section{Preliminaries}\label{sectionprelim}
\subsection{Contextual Remarks}\label{subsectioncontextual}
From an abstract viewpoint, the mathematical problem addressed by this paper is a {\it group isomorphism problem,} in which one attempts to determine whether or not two groups, expressed in terms of generators and relations, are isomorphic.\footnotemark\footnotetext{Such problems were first studied systematically in the context of finitely presented groups by Max Dehn more than a century ago. In the present context, the groups are generally infinite, although some finite examples are included; for instance, those involving nilpotent extensions of finite fields. The general group isomorphism problem is known to be undecidable, in the sense that no algorithm exists that will solve every case of the problem.} The early algebraic $K$-theory of the 1960's and 1970's provides many examples of such problems, often accompanied by forbidding symbolic computations. The papers of Maazen and Stienstra \cite{MaazenStienstra77} and Van der Kallen \cite{VanderKallenRingswithManyUnits77} are representative. The arguments in this paper follow this tradition; in particular, they will not stagger anyone with their beauty. Perhaps the best justification for inflicting such material on the reader forty years after the papers mentioned above is that some people still wish, with justification, to carry out explicit elementary calculations in algebraic $K$-theory. Here I have in mind particularly the recent work of Green and Griffiths \cite{GreenGriffithsTangentSpaces05} on the infinitesimal structure of cycle groups and Chow groups, in which the authors employ ``low-tech" wrangling with Steinberg symbols and K\"{a}hler differentials to achieve surprising geometric insights. Such a viewpoint would be impossible without early results such as Matsumoto's theorem, which support, in special cases of particular interest, a na\"{i}ve symbolic treatment of $K$-theoretic structure possessing much greater intrinsic subtlety in the general case. 
It is also true that relatively old and utilitarian methods can sometimes pick up crumbs left behind by the great machines of modern $K$-theory. For example, the stability criterion of Van der Kallen, used in the theorem in this paper, provides a sharper result than the hypotheses that appear in many similar but more sophisticated theorems.
\subsection{Split Nilpotent Extensions}\label{subsectionnilpotent}
A split nilpotent extension of a ring $S$ provides an algebraic notion of ``infinitesimal thickening of $S$." Heuristically, one may think of augmenting $S$ by the addition of elements ``sufficiently small" that products of sufficiently many such elements vanish.\footnotemark\footnotetext{More precisely, one thinks of ``thickening" the affine scheme $\tn{Spec}(S)$ corresponding to $S$. The motivation for mentioning this viewpoint comes from the geometric applications discussed in section \hyperref[sectiondiscussion]{\ref{sectiondiscussion}}.}\\
\begin{defi} Let $S$ be a commutative ring with identity. A {\bf split nilpotent extension} of $S$ is a split surjection $R\rightarrow S$ whose kernel $I$, called the extension ideal, is nilpotent.
\end{defi}
The {\it index of nilpotency} of $I$ is the smallest integer $N$ such that $I^N=0$. A nilpotent extension ideal $I$ is contained in any maximal ideal $J$ of $R$, since $R/J$ is a field; hence $I$ belongs to the Jacobson radical of $R$. A split nilpotent extension is therefore a special case of what Maazen and Stienstra \cite{MaazenStienstra77} call a {\it split radical extension.}\\
\begin{example}\tn{The simplest nontrivial split nilpotent extension of $S$ is the extension $R=S[\varepsilon]/\varepsilon^2\rightarrow S$. The ring $S[\varepsilon]/\varepsilon^2$ is called the ring of dual numbers over $S$. This is the extension involved in Van der Kallen's early computation \cite{VanderKallenEarlyTK271} of relative $K_2$, which plays a prominent role in the work of Green and Griffiths \cite{GreenGriffithsTangentSpaces05} on the infinitesimal structure of cycle groups and Chow groups. More generally, if $k$ is a field and $S$ is a $k$-algebra, then tensoring $S$ with any local artinian $k$-algebra $A$ induces a split nilpotent extension $S\otimes_k A\rightarrow S$. These are the extensions considered by Stienstra \cite{StienstraFormalCompletion83} in his study of the formal completion of the second Chow group of a smooth projective surface over a field containing the rational numbers.}
\end{example}
\subsection{Van der Kallen Stability}\label{subsectionstability}
Certain convenient properties of algebraic $K$-theory, including those enabling some of the steps of the proof in section \hyperref[sectionproof]{\ref{sectionproof}} below, rely on an assumption that the ring under consideration has ``enough units," or that its units are ``organized in a convenient way." One way to make this idea precise is via Van der Kallen's \cite{VanderKallenRingswithManyUnits77} notion of stability.\footnotemark\footnotetext{This terminology seems to have first appeared in Van der Kallen, Maazen, and Stienstra's 1975 paper {\it A Presentation for Some $K_2(R,n)$} \cite{VanMaazenStienstra}.} This notion is closely related to the stable range conditions of Hyman Bass, introduced in the early 1960's.\\
\begin{defi}Let $S$ be a commutative ring with identity, and let $m$ be a positive integer.
\begin{enumerate}
\item A pair $(s,s')$ of elements of $S$ is called {\bf unimodular} if $sS+s'S=S$.
\item $S$ is called $\mbf{m}$-{\bf fold stable} if, given any family $\{(s_j,s_j')\}_{j=1}^m$ of unimodular pairs in $S$, there exists an element $s\in S$ such that $s_j+s_j's$ is a unit in $S$ for each $j$.
\end{enumerate}
\end{defi}
\vspace*{.3cm}
\begin{example}\label{examplesemilocalstable}\tn{A semilocal ring\footnotemark\footnotetext{A {\it commutative} ring $R$ is semilocal if and only if it has a finite number of maximal ideals. The general definition is that $R/J(R)$ is semisimple, where $J(R)$ is the Jacobson radical of $R$.} is $m$-fold stable if and only if all its residue fields contain at least $m+1$ elements. See Van der Kallen, Maazen and Stienstra \cite{VanMaazenStienstra}, page 935, or Van der Kallen \cite{VanderKallenRingswithManyUnits77}, page 489. In particular, for any $m$, the class of $m$-fold stable rings is much larger than the class of local rings of smooth algebraic varieties over a field containing the rational numbers, which are the rings of principal interest in the context of Green and Griffiths' work on the infinitesimal structure of cycle groups and Chow groups \cite{GreenGriffithsTangentSpaces05}. Due to the relationship between stability and the size of residue fields, the theorem in this paper allows some of the computations of Green and Griffiths to be repeated in positive characteristic.}
\end{example}
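Since $\mathbb{Z}/n\mathbb{Z}$ is semilocal with residue fields $\mathbb{F}_p$ for the primes $p$ dividing $n$, the criterion in the example predicts that $\mathbb{Z}/n\mathbb{Z}$ is $m$-fold stable exactly when every such $p$ satisfies $p\ge m+1$. The following brute-force check is an illustrative sketch only (the function names are ad hoc, not from the literature), but it confirms the prediction in a few small cases:

```python
from itertools import product
from math import gcd

def units(n):
    """Invertible elements of Z/nZ."""
    return {a for a in range(n) if gcd(a, n) == 1}

def unimodular_pairs(n):
    """Pairs (s, s') with sR + s'R = R for R = Z/nZ."""
    return [(s, t) for s in range(n) for t in range(n)
            if gcd(gcd(s, t), n) == 1]

def is_m_fold_stable(n, m):
    """Test Van der Kallen's m-fold stability for Z/nZ by exhaustion."""
    U = units(n)
    pairs = unimodular_pairs(n)
    return all(any(all((s + t * x) % n in U for (s, t) in fam)
                   for x in range(n))
               for fam in product(pairs, repeat=m))

# Residue-field criterion: stable iff every prime p dividing n has p >= m + 1
assert is_m_fold_stable(3, 2)       # F_3 has 3 >= 3 elements
assert is_m_fold_stable(5, 2)       # F_5 has 5 >= 3 elements
assert not is_m_fold_stable(2, 2)   # F_2 has only 2 < 3 elements
assert not is_m_fold_stable(4, 2)   # residue field is again F_2
assert not is_m_fold_stable(3, 3)   # F_3 has only 3 < 4 elements
```

In a field $\mathbb{F}_q$, each unimodular pair rules out at most one candidate for $s$, which is why $q\ge m+1$ suffices; the failing cases above exhibit families that rule out every candidate.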
The following easy lemma establishes two consequences of stability necessary for the proof of the theorem in section \hyperref[sectionproof]{\ref{sectionproof}} below.\\
\begin{lem}\label{lemstability} Suppose $R$ is a split nilpotent extension of a commutative ring $S$ with identity. Let $I$ be the extension ideal.
\begin{enumerate}
\item If $S$ is $m$-fold stable, then $R$ is also $m$-fold stable.
\item If $S$ is $2$-fold stable and $2$ is invertible in $S$, then every element of $R$ is the sum of
two units.
\end{enumerate}
\end{lem}
\begin{proof} For part 1 of the lemma, let $\{(r_j,r_j')\}_{j=1}^m$ be a family of unimodular pairs in $R$. Since $R$ is a split extension of $S$, $r_j$ and $r_j'$ may be written uniquely as sums
\[r_j=s_j+i_j,\hspace*{.5cm}\tn{and}\hspace*{.5cm}r_j'=s_j'+i_j',\hspace*{.5cm}\tn{where}\hspace*{.5cm}s_j,s_j'\in S\hspace*{.5cm}\tn{and}\hspace*{.5cm}i_j,i_j'\in I.\]
It follows immediately that $\{(s_j,s_j')\}_{j=1}^m$ is a family of unimodular pairs in $S$. Since $S$ is $m$-fold stable, there exists an element $s$ of $S$ such that $s_j+s_j's$ is a unit in $S$ for each $j$. Then $r_j+r_j's$ may be expressed as a sum of a unit and a nilpotent element as follows:
\[r_j+r_j's=(s_j+s_j's)+(i_j+i_j's).\]
Therefore, $r_j+r_j's$ is a unit in $R$, since the sum of a unit and a nilpotent element is a unit. This is true for every $j$, so $R$ is $m$-fold stable.
For part 2 of the lemma, first note that if $S$ is $2$-fold stable and $2$ is invertible in $S$, the same hypotheses hold for $R$ by part 1 of the lemma. Let $r$ be any element of $R$. Any pair of elements including a unit is automatically unimodular, so the pairs $(r,2)$ and $(r,-2)$ are unimodular (these pairs need not be distinct). Since $R$ is $2$-fold stable, there exists an element $r'$ in $R$, and units $u$ and $v$ in $R$, such that
\[r+2r'=u\hspace*{.5cm}\tn{and}\hspace*{.5cm} r-2r'=v.\]
Adding these formulas gives $2r=u+v$. Since $2$ is invertible in $R$, this implies that $r=u/2+v/2$, a sum of two units.
\end{proof}
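Part 2 of the lemma can be checked concretely in a small split nilpotent extension. In the sketch below (illustrative only), $S=\mathbb{Z}/7$, which is $2$-fold stable with $2$ invertible, and $R=S[\varepsilon]/\varepsilon^2$ is modeled as pairs $(a,b)\leftrightarrow a+b\varepsilon$:

```python
from itertools import product

p = 7  # S = Z/7: 2-fold stable (residue field has 7 >= 3 elements), 2 invertible
ring = list(product(range(p), repeat=2))       # (a, b) <-> a + b*eps, eps^2 = 0
is_unit = lambda r: r[0] % p != 0              # a + b*eps is a unit iff a != 0 mod 7
add = lambda x, y: ((x[0] + y[0]) % p, (x[1] + y[1]) % p)

units = [r for r in ring if is_unit(r)]
# Every element of R is a sum of two units, as part 2 of the lemma predicts
assert all(any(add(u, v) == r for u in units for v in units) for r in ring)
```

The check exhausts all $49$ elements of $R$; the unit criterion reflects the fact that a unit plus a nilpotent element is a unit.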
\subsection{Symbolic $K$-Theories: Milnor $K$-Theory and Dennis-Stein $K$-Theory}\label{subsectionsymbolic}
Milnor $K$-theory and Dennis-Stein $K$-theory are {\it symbolic $K$-theories}, an informal term which means, in this context, $K$-theories whose $K$-groups admit simple presentations via generators, called {\it symbols,} and relations. More sophisticated ``modern" $K$-theories, such as Quillen's $K$-theory, Waldhausen's $K$-theory, and the amplifications of Bass and Thomason, have homotopy-theoretic definitions. Symbolic $K$-theories have the advantage of being relatively elementary, but tend to lack certain desirable formal properties. In this sense, they represent one extreme of the seemingly unavoidable tradeoff between accessibility and formal integrity in algebraic $K$-theory.
The following definition introduces the ``na\"{i}vest" version of Milnor $K$-theory:\\
\begin{defi}\label{defiMilnorK} Let $R$ be a commutative ring with identity, and let $R^*$ be its multiplicative group of invertible elements, viewed as a $\mathbb{Z}$-module.
\begin{enumerate}
\item The {\bf Milnor $K$-ring} $K_\bullet^{\tn{\fsz{M}}}(R)$\footnotemark\footnotetext{The ``dot notation" $K_\bullet^{\tn{\fsz{M}}}(R)$, rather than the simpler $K^{\tn{\fsz{M}}}(R)$, is used here for the purposes of comparing the Milnor $K$-ring to the graded ring $\Omega_{R/\mathbb{Z}}^\bullet$ of absolute K\"{a}hler differentials in lemma \hyperref[lemdlog]{\ref{lemdlog}} below, since $\Omega_{R/\mathbb{Z}}$ always means $\Omega_{R/\mathbb{Z}}^1$.} of $R$ is the quotient
\[K_\bullet^{\tn{\fsz{M}}}(R):=\frac{T_{R^*/\mathbb{Z}}}{I_{\tn{\fsz{St}}}}\]
of the tensor algebra $T_{R^*/\mathbb{Z}}$ by the ideal $I_{\tn{\fsz{St}}}$ generated by elements of the form $r\otimes(1-r)$.
\item The $n$th {\bf Milnor $K$-group} $K_{n} ^{\tn{\fsz{M}}}(R)$ of $R$, defined for $n\ge0$, is the $n$th graded piece of $K_\bullet^{\tn{\fsz{M}}}(R)$.
\end{enumerate}
\end{defi}
The tensor algebra $T_{R^*/\mathbb{Z}}$ of the $\mathbb{Z}$-module $R^*$ is by definition the graded $\mathbb{Z}$-algebra whose zeroth graded piece is $\mathbb{Z}$, whose $n$th graded piece is the $n$-fold tensor product $R^*\otimes_\mathbb{Z}...\otimes_\mathbb{Z}R^*$ for $n\ge1$, and whose multiplicative operation is induced by the tensor product. The subscript ``$\tn{St}$" assigned to the ideal $I_{\tn{\fsz{St}}}$ stands for ``Steinberg," since the defining relations $r\otimes(1-r)\sim0$ of $K^{\tn{\fsz{M}}}(R)$ are called {\it Steinberg relations}. The ring $K^{\tn{\fsz{M}}}(R)$ is noncommutative, since concatenation of tensor products is noncommutative; more specifically, it is {\it anticommutative} if $R$ has ``enough units," in a sense made precise below. The $n$th Milnor $K$-group $K_{n} ^{\tn{\fsz{M}}}(R)$ is generated, under {\it addition} in $K^{\tn{\fsz{M}}}(R)$, by equivalence classes of $n$-fold tensors $r_1\otimes...\otimes r_n$. Such equivalence classes are denoted by symbols $\{r_1,...,r_n\}$, called {\it Steinberg symbols}. When working with individual Milnor $K$-groups, the operation is usually viewed {\it multiplicatively,} and the identity element is usually denoted by $1$. For example, expressions such as $\prod_l\{r_l,e^{u_jr_li_k\Pi_l},\bar{r}_l,u_j\}$, appearing in sections \hyperref[subsectionanalysisphi]{\ref{subsectionanalysisphi}} and \hyperref[subsectionanalysispsi]{\ref{subsectionanalysispsi}} below, are viewed as products in $K_{n} ^{\tn{\fsz{M}}}(R)$, although they represent {\it sums} in $K^{\tn{\fsz{M}}}(R)$.
Milnor $K$-theory first appeared in John Milnor's 1970 paper {\it Algebraic $K$-Theory and Quadratic Forms} \cite{MilnorAlgebraicKTheoryQforms70}, in the context of fields. Around the same time, Milnor, Steinberg, Matsumoto, Dennis, Stein, and others were studying the second $K$-group $K_2(R)$ of a general ring $R$, defined by Milnor in 1967 as the center of the Steinberg group of $R$. $K_2(R)$ is often called ``Milnor's $K_2$" in honor of its discoverer, but is in fact the ``full $K_2$-group." In particular, it is much more complicated in general than the second Milnor $K$-group $K_2^{\tn{\footnotesize{M}}}(R)$ according to definition \hyperref[defiMilnorK]{\ref{defiMilnorK}}. Adding further to the confusion of terminology, the two groups $K_2(R)$ and $K_2^{\tn{\footnotesize{M}}}(R)$ {\it are} equal in many important special cases; in particular, when $R$ is a field, a division ring, a local ring, or even a semilocal ring.\footnotemark\footnotetext{See \cite{WeibelKBook}, Chapter III, Theorem 5.10.5, page 43, for details.} This result is usually called Matsumoto's theorem, since its original version was proved by Matsumoto, for fields, in an arithmetic setting. Matsumoto's theorem was subsequently extended to division rings by Milnor, and finally to semilocal rings by Dennis and Stein.
There is no consensus in the literature about how the Milnor $K$-groups $K_n^{\tn{\fsz{M}}}(R)$ should be defined for general $n$ and $R$. The definition I use here is the most na\"{i}ve one. Its claim to relevance relies on foundational work by Steinberg, Milnor, Matsumoto, and others. Historically, the Steinberg symbol arose as a map $R^*\times R^*\rightarrow K_2(R)$, defined in terms of special matrices. The properties of this map, including the relations satisfied by its images, may be analyzed concretely in terms of matrix properties.\footnotemark\footnotetext{See Weibel \cite{WeibelKBook} Chapter III, or Rosenberg \cite{RosenbergK94} Chapter 4 for details.} In the case where $R$ is a field, Matsumoto's theorem states that the image of the Steinberg symbol map generates $K_2(R)$, and that all the relations satisfied by elements of the image follow from the relations of the tensor product and the Steinberg relations. This allows a simple re-definition of $K_2(R)$ in terms of a tensor algebra when $R$ is a field, with the generators {\it renamed} Steinberg symbols. Abstracting this result to general $n$ and $R$ leads to definition \hyperref[defiMilnorK]{\ref{defiMilnorK}} above. However, it has been understood from the beginning that the resulting ``Milnor $K$-theory" is seriously deficient in many respects. Quillen, Waldhausen, Bass, Thomason, and others have since addressed many of these deficiencies by defining more elaborate versions of $K$-theory, but there still remain many reasons why symbolic $K$-theories are of interest. For example, they are closely connected to motivic cohomology, provide interesting approaches to the study of Chow groups and higher Chow groups, and arise in physical settings in superstring theory and elsewhere. The viewpoint of the present paper, involving $\lambda$-decompositions, cyclic homology, and differential forms, is partly motivated by these considerations, particularly the theory of Chow groups.
It is instructive to briefly examine a few different treatments of Milnor $K$-theory in the literature. Weibel \cite{WeibelKBook} chooses to confine his definition of Milnor $K$-theory to the original context of fields (Chapter III, section 7), while defining Steinberg symbols more generally (Chapter IV, example 1.10.1, page 8), and also discussing many other types of symbols, including Dennis-Stein symbols (Chapter III, definition 5.11, page 43), and Loday symbols (Chapter IV, exercise 1.22, page 122).\footnotemark\footnotetext{Interestingly, the Loday symbols project nontrivially into a range of different pieces of the $\lambda$-decomposition of Quillen $K$-theory. See Weibel \cite{WeibelKBook} Chapter IV, example 5.11.1, page 52, for details.} Elbaz-Vincent and M\"{u}ller-Stach \cite{ElbazVincentMilnor02} define Milnor $K$-theory for general rings (Definition 1.1, page 180) in terms of generators and relations, but take the additive inverse relation of lemma \hyperref[lemrelationsstable]{\ref{lemrelationsstable}} as part of the definition. The result is a generally nontrivial quotient of the Milnor $K$-theory of definition \hyperref[defiMilnorK]{\ref{defiMilnorK}} above.\\
\begin{example}\tn{Let $R=\mathbb{Z}_2[x]/x^2$. The multiplicative group $R^*$ is isomorphic to $\mathbb{Z}_2$, generated by the element $r:=1+x$. The Steinberg ideal is zero, since $1-r$ is not a unit. Hence, $K_2^{\tn{\fsz{M}}}(R)$ is just $R^*\otimes_\mathbb{Z} R^*\cong\mathbb{Z}_2$, generated by the symbol $\{r,r\}=\{r,-r\}$, while Elbaz-Vincent and M\"{u}ller-Stach's corresponding group is trivial. By contrast, the additive inverse relation $\{r,-r\}=1$ always holds if one uses the original definition of Steinberg symbols in terms of matrices; see Weibel \cite{WeibelKBook} Chapter III, remark 5.10.4, page 43. This may be interpreted as an indication that this relation is a desirable property for ``enhanced" versions of Milnor $K$-theory.}
\end{example}
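The claims in this example are small enough to confirm by direct enumeration. The sketch below models $R=\mathbb{Z}_2[x]/x^2$ as pairs $(a,b)\leftrightarrow a+bx$ with coefficients mod $2$:

```python
p = 2
ring = [(a, b) for a in range(p) for b in range(p)]   # a + b*x, with x^2 = 0
is_unit = lambda r: r[0] % p == 1                     # invertible iff constant term is 1

# R* = {1, 1 + x}, a group of order 2
assert [r for r in ring if is_unit(r)] == [(1, 0), (1, 1)]

# No r with both r and 1 - r invertible, so there are no Steinberg generators
one_minus = lambda r: ((1 - r[0]) % p, (-r[1]) % p)
assert not any(is_unit(r) and is_unit(one_minus(r)) for r in ring)
```

The second assertion verifies that the Steinberg ideal has no generators at all, so that $K_2^{\tn{\fsz{M}}}(R)$ is the full tensor square $R^*\otimes_\mathbb{Z}R^*$.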
Moritz Kerz \cite{KerzMilnorLocal} has suggested an ``improved version of Milnor $K$-theory," motivated by a desire to correct certain formal shortcomings of the ``na\"{i}ve" version defined in terms of the tensor product. For example, the na\"{i}ve version fails to satisfy the Gersten conjecture. Thomason \cite{ThomasonNoMilnor92} has shown that Milnor $K$-theory does not extend to a theory of smooth algebraic varieties with desirable properties such as $\mbb{A}^1$-homotopy invariance and functorial homomorphisms to more complete versions of $K$-theory. Hence, the proper choice of definition depends on what properties and applications one wishes to study.
Dennis-Stein $K$-theory plays only a small part in this paper. It therefore suffices to define only the ``second Dennis-Stein $K$-group."\footnotemark\footnotetext{Dennis and Stein initially considered a symbolic version of $K_2$ in their 1973 paper {\it $K_2$ of radical ideals and semi-local rings revisited} \cite{DennisStein}. Their approach generalizes to higher $K$-theory in a variety of different ways; see, for instance, sections 11.1 and 11.2 of Loday \cite{LodayCyclicHomology98}. What I mean by ``Dennis-Stein $K$-theory" is essentially the part of $K$-theory generated by the Loday symbols, but the distinction is immaterial for $K_2$.}\\
\begin{defi}\label{defidennisstein} Let $R$ be a commutative ring with identity, and let $R^*$ be its multiplicative group of invertible elements. The second {\bf Dennis-Stein $K$-group} $D_2(R)$ of $R$ is the multiplicative abelian group whose generators are symbols $\langle a,b\rangle$ for each pair of elements $a$ and $b$ in $R$ such that $1+ab\in R^*$, subject to the additional relations
\begin{enumerate}
\item $\langle a,b\rangle\langle -b,-a\rangle=1.$
\item $\langle a,b\rangle\langle a,c\rangle=\langle a,b+c+abc\rangle.$
\item $\langle a,bc\rangle=\langle ab,c\rangle\langle ac,b\rangle.$
\end{enumerate}
\end{defi}
This definition may be found in both Maazen and Stienstra \cite{MaazenStienstra77} definition 2.2, page 275, and Van der Kallen \cite{VanderKallenRingswithManyUnits77}, page 488.
\subsection{Symbolic $K$-Theory and Stability}\label{subsectionsymbolicstability}
For rings possessing a sufficient degree of stability in the sense of Van der Kallen, different versions of symbolic $K$-theory tend to produce isomorphic $K$-groups. An important example of such an isomorphism involves the second Dennis-Stein $K$-group and the second Milnor $K$-group.\\
\begin{theorem}\label{theoremvanderkallen} (Van der Kallen) Let $S$ be a commutative ring with identity, and suppose that $S$ is $5$-fold stable. Then
\begin{equation}K_{2} ^{\tn{\fsz{M}}}(S)\cong D_2(S).\end{equation}
\end{theorem}
\begin{proof}Van der Kallen \cite{VanderKallenRingswithManyUnits77}, theorem 8.4, page 509. Note that Van der Kallen denotes the Dennis-Stein group $D_2$ by $\tn{D}$, and the Milnor $K$-group $K_{2} ^{\tn{\fsz{M}}}$ by $\tn{US}$. Definitions of the groups $\tn{D}(S)=D_2(S)$ and $\tn{US}(S)=K_{2} ^{\tn{\fsz{M}}}(S)$ in terms of generators and relations appear on pages 488 and 509 of the same paper, respectively.
\end{proof}
Whether or not theorem \hyperref[theoremvanderkallen]{\ref{theoremvanderkallen}} remains true if one weakens the stability hypothesis to $4$-fold stability apparently remains unknown.\footnotemark\footnotetext{Van der Kallen \cite{VanderKallenRingswithManyUnits77} writes {\it ``We do not know if $4$-fold stability suffices for theorem 8.4."} More recently, Van der Kallen tells me \cite{VanderKallenprivate14} that the answer to this question is still apparently unknown.}
The following result, involving {\it relative} $K$-groups, does not require any stability hypothesis. Note that the group $K_{2}(R,I)$ is {\it a priori} the ``total relative $K$-group," not just the part generated by Steinberg symbols.\\
\begin{theorem}\label{theoremmaazen} (Maazen and Stienstra) If $R$ is a split radical extension of $S$ with extension ideal $I$, then
\begin{equation}K_{2}(R,I)\cong D_{2}(R,I)\end{equation}
\end{theorem}
\begin{proof}Maazen and Stienstra \cite{MaazenStienstra77}, theorem 3.1, page 279.
\end{proof}
In general, the relative $K$-groups $K_n(R,I)$ are defined so as to possess convenient functorial properties, and this does not always lead to a simple description in terms of $K_n(R)$ and $K_n(S)$.\footnotemark\footnotetext{In particular, these groups are often defined via homotopy fibers, as mentioned in the first footnote of section \hyperref[subsectionstatement]{\ref{subsectionstatement}}. This guarantees the existence of a long exact sequence relating absolute and relative groups. See Weibel \cite{WeibelKBook} Chapter IV, page 8, for details.} For the purposes of this paper, however, the relative groups $K_n (R,I)$ may be identified with the kernels $\tn{Ker}[K_n (R)\rightarrow K_n (R/I)]$, and similarly for the Milnor and Dennis-Stein $K$-groups. This is because the extension of $S$ by $I$ to obtain $R$ is assumed to be a {\it split} extension.
\subsection{Absolute K\"{a}hler Differentials; a Result of Bloch}\label{subsectionkahlerbloch}
K\"{a}hler differentials provide a purely algebraic notion of differential forms in the context of commutative rings. In the noncommutative context, differential forms are superseded by algebra cohomology theories. Historically, expanding the role of differential forms was one of the primary motivations for the development of cyclic homology and cohomology. This renders natural the appearance of cyclic homology in Goodwillie's isomorphism.\\
\begin{defi}\label{defikahler}Let $R$ be a commutative $k$-algebra over a commutative ring $k$ with identity. The $k$-module of {\bf K\"{a}hler differentials} $\Omega^1_{R/k}$ of $R$ with respect to $k$ is the module generated over $k$ by symbols of the form $rdr'$, subject to the relations
\begin{enumerate}
\item $rd(\alpha r'+\beta r'')=\alpha rdr'+\beta rdr''$ for $\alpha,\beta\in k$ and $r,r',r''\in R$ ($k$-linearity).
\item $(r+r'')dr'=rdr'+r''dr'$ (additivity in the coefficient).
\item $rd(r'r'')=rr'dr''+rr''dr'$ (Leibniz rule).
\end{enumerate}
The ring $\Omega_{R/k}^\bullet$ of K\"{a}hler differentials of $R$ with respect to $k$ is the exterior algebra over $\Omega^1_{R/k}$; i.e., the graded ring whose zeroth graded piece is $R$, whose $n$th graded piece is $\bigwedge^n\Omega^1_{R/k}:=\Omega^n_{R/k}$, and whose multiplication is wedge product.
\end{defi}
The differential graded ring $(\Omega_{R/k}^\bullet,d)$, where the map $d$ takes the differential $r_0dr_1\wedge...\wedge dr_n$ to the differential $dr_0\wedge dr_1\wedge...\wedge dr_n$, may be viewed as a complex, called the {\it algebraic de Rham complex.} If the ground ring $k$ is the ring of integers $\mathbb{Z}$, then the modules $\Omega^n_{R/\mathbb{Z}}$ are abelian groups (i.e., $\mathbb{Z}$-modules), called the groups of {\bf absolute K\"{a}hler differentials.} Groups $\Omega^n_{R,I}$ of absolute K\"{a}hler differentials {\it relative to an ideal} $I\subset R$ may be defined, for the purposes of this paper, to be the kernels $\tn{Ker}\big[\Omega_{R/\mathbb{Z}}^n\rightarrow \Omega_{(R/I)/\mathbb{Z}}^n\big]$.
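These relations can be exercised in the simplest concrete model: for $R=\mathbb{Q}[x,y]$, the module $\Omega^1_{R/\mathbb{Q}}$ is free on $dx,dy$ and $d$ acts by partial derivatives. The following sketch checks the Leibniz rule and the exactness $d\circ d=0$ of the algebraic de Rham complex in this model; the helper names \texttt{d0}, \texttt{d1} are ad hoc for this illustration, and availability of sympy is assumed.

```python
# Sanity check of the algebraic de Rham complex for R = Q[x, y]:
# Omega^1 is free on dx, dy; a 1-form P dx + Q dy is stored as the pair (P, Q).
import sympy as sp

x, y = sp.symbols('x y')

def d0(f):
    """d: Omega^0 -> Omega^1, f |-> (f_x, f_y), representing f_x dx + f_y dy."""
    return (sp.diff(f, x), sp.diff(f, y))

def d1(P, Q):
    """d: Omega^1 -> Omega^2, P dx + Q dy |-> (Q_x - P_y) dx ^ dy."""
    return sp.simplify(sp.diff(Q, x) - sp.diff(P, y))

f = x**3 * y + 2 * y**2
g = x * y - 1

# Leibniz rule: d(fg) = f dg + g df, componentwise in Omega^1.
dfg = tuple(sp.expand(c) for c in d0(f * g))
leibniz = tuple(sp.expand(f * cg + g * cf) for cf, cg in zip(d0(f), d0(g)))
assert dfg == leibniz

# d o d = 0 at Omega^0: equality of mixed partial derivatives.
assert d1(*d0(f)) == 0
```

In this model the relation $d\circ d=0$ is just the symmetry of mixed partials; the general algebraic statement is what the complex structure above encodes.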
The following relationship between the second relative $K$-group and the first group of absolute K\"{a}hler differentials was first pointed out by Bloch \cite{BlochK2Artinian75}. This result helps establish the base case of the theorem in lemma \hyperref[lembasecase]{\ref{lembasecase}} below.\\
\begin{theorem}\label{theorembloch} Suppose $R$ is a split nilpotent extension of a ring $S$, with extension ideal $I$
whose index of nilpotency is $N$. Suppose further that every positive integer less than or equal to $N$
is invertible in $S$. Then
\begin{equation}K_{2}(R,I)\cong \frac{\Omega_{R,I} ^1}{dI}.\end{equation}
\end{theorem}
\begin{proof}Maazen and Stienstra \cite{MaazenStienstra77}, Example 3.12 page 287.
\end{proof}
\section{Calculus of Steinberg Symbols and K\"{a}hler Differentials}\label{sectionSteinbergKahler}
\subsection{Notation and Conventions for Symbols and Differentials}\label{subsectionnotation}
The proof in section \hyperref[sectionproof]{\ref{sectionproof}} involves a significant amount of symbolic manipulation. To streamline this, I use the following notation and conventions:
\begin{enumerate}
\item $R$ is a split nilpotent extension of a $5$-fold stable ring $S$, with extension ideal $I$,
whose index of nilpotency is $N$. The multiplicative group of invertible elements of $R$ is denoted by $R^*$. The subset of elements of the form $1+i$, where $i\in I$, is a subgroup of $R^*$. It is denoted by $(1+I)^*$ to emphasize its multiplicative structure.
\item Individual letters, such as $r$ and $r'$, are used to denote elements of $R$.
\item Ordered tuples of elements of $R$ are usually numbered beginning with zero: $(r_0,r_1,...,r_n)$.
\item ``Bar notation" is often used to abbreviate ordered tuples of elements of $R$. For example, the expression $(r_0,\bar{r})$ might be used to denote the $(n+1)$-tuple $(r_0,r_1,...,r_n)$, where $\bar{r}$ stands for the last $n$ entries $r_1,...,r_n$. Similarly, the expression $(\bar{r},r_j,\bar{r}')$ might be used to denote the $n$-tuple $(r_0,...,r_{j-1},r_j,r_{j+1},...,r_n)$, where $\bar{r}$ stands for $r_0,...,r_{j-1}$, and $\bar{r}'$ stands for $r_{j+1},...,r_{n}$. The number of elements represented by a barred letter is either stated explicitly, determined by context, or immaterial. In particular, $\bar{r}$ may be empty. For example, the multiplicativity relation $\{\bar{r},rr',\bar{r}'\}=\{\bar{r},r,\bar{r}'\}\{\bar{r},r',\bar{r}'\}$ in lemma \hyperref[lemMilnorrelations]{\ref{lemMilnorrelations}} below includes the case $\{rr',\bar{r}'\}=\{r,\bar{r}'\}\{r',\bar{r}'\}$, where $\bar{r}$ is empty.
\item Let $(\bar{r})=(r_0,r_1,...,r_n)$ be an ordered $(n+1)$-tuple of elements of $R$. Then the expression $\bar{r}\in R^*$ means $(r_0,...,r_n)\in (R^*)^{n+1}$. Similarly, $\{\bar{r}\}$ means the Steinberg symbol corresponding to $\{\bar{r}\}$, if it exists; $d\bar{r}$ means $dr_0\wedge dr_1\wedge...$, and $e^{\bar{r}}$ means $(e^{r_0},e^{r_1},...)$.
\item Instances of ``capital pi," such as $\Pi$ and $\Pi'$, stand for the products of the entries of tuples such as $(\bar{r})$ and $(\bar{r}')$.(
\item The ``hat notation" $(r_0,...,\hat{r_j},...,r_n)$ denotes the $n$-tuple given by omitting the $j$th entry $r_j$ from the $(n+1)$-tuple $r_0,...,r_j,...,r_n$. The hat notation may be used to omit multiple entries of an ordered tuple.
\item The group operation in the Milnor $K$-group $K_n ^{\tn{\fsz{M}}}(R)$ is expressed as multiplication (juxtaposition of Steinberg symbols), although it is actually addition in the Milnor $K$-ring $K^{\tn{\fsz{M}}}(R)$.
\item The ring multiplication $K_m ^{\tn{\fsz{M}}}(R)\times K_n ^{\tn{\fsz{M}}}(R)\rightarrow K_{m+n} ^{\tn{\fsz{M}}}(R)$ in the Milnor $K$-ring $K^{\tn{\fsz{M}}}(R)$ is expressed abstractly by the symbol $\times$, or concretely by concatenation of the entries of Steinberg symbols. For example, the distributive law is expressed as
\[\{\bar{r}\}\times(\{\bar{r}'\}\{\bar{r}''\})=(\{\bar{r}\}\times\{\bar{r}'\})(\{\bar{r}\}\times\{\bar{r}''\})
=\{\bar{r},\bar{r}'\}\{\bar{r},\bar{r}''\},\]
for $\{\bar{r}\}\in K_l ^{\tn{\fsz{M}}}(R), \{\bar{r}'\}\in K_m ^{\tn{\fsz{M}}}(R),$ and $\{\bar{r}''\}\in K_n ^{\tn{\fsz{M}}}(R)$.
\end{enumerate}
\subsection{Generators and Relations for Milnor $K$-Theory}\label{generatorsMilnor}
In this section, I gather together some elementary results about Steinberg symbols that are useful for the computations in sections \hyperref[subsectionanalysisphi]{\ref{subsectionanalysisphi}} and \hyperref[subsectionanalysispsi]{\ref{subsectionanalysispsi}}. For numbering consistency, I work with $K_{n+1} ^{\tn{\fsz{M}}}(R)$, rather than $K_{n} ^{\tn{\fsz{M}}}(R)$, since the former group is the one appearing in the main theorem. Throughout this section, $n$ is a nonnegative integer.\\
\begin{lem}\label{lemMilnorrelations}As an abstract multiplicative group, $K_{n+1} ^{\tn{\fsz{M}}}(R)$ is generated by the Steinberg symbols $\{r_0,...,r_n\}$, where $r_j\in R^*$ for all $j$, subject to the relations
\begin{enumerate}
\addtocounter{enumi}{-1}
\item $K_{n+1} ^{\tn{\fsz{M}}}(R)$ is abelian.
\item Multiplicative relation: $\{\bar{r},rr',\bar{r}'\}\{\bar{r},r,\bar{r}'\}^{-1}\{\bar{r},r',\bar{r}'\}^{-1}=1$.
\item Steinberg relation: $\{\bar{r},r,1-r,\bar{r}'\}=1$.
\end{enumerate}
\end{lem}
\begin{proof} This follows directly from definition \hyperref[defiMilnorK]{\ref{defiMilnorK}} and the properties of the tensor algebra. For example, in the case of $K_{2} ^{\tn{\fsz{M}}}(R)$, the tensor product relations in $R^*\otimes_\mathbb{Z} R^*$ are
\[rr'\otimes r''=(r\otimes r'')(r'\otimes r''),\hspace{.3cm} r\otimes r'r''=(r\otimes r')(r\otimes r''),\hspace{.2cm}\tn{and}\hspace{.2cm}r^n\otimes r'=r\otimes(r')^n =(r\otimes r')^n \hspace{.2cm}\tn{for}\hspace{.2cm} n\in\mathbb{Z},\]
since the operations in $R^*$ and $R^*\otimes_\mathbb{Z} R^*$ are expressed multiplicatively. These three relations are equivalent to the multiplicativity relation in the statement of the lemma, while imposing the Steinberg relation is equivalent to quotienting out the Steinberg ideal $I_{\tn{\fsz{St}}}$.
\end{proof}
Lemma \hyperref[lemMilnorrelations]{\ref{lemMilnorrelations}} translates the na\"{i}ve tensor algebra definition of Milnor $K$-theory into a definition in terms of Steinberg symbols and relations. The following lemma gathers together additional relations satisfied by Steinberg symbols in the $5$-fold stable case. The first of these, the idempotent relation, actually requires no stability assumption, but is included here, rather than in lemma \hyperref[lemMilnorrelations]{\ref{lemMilnorrelations}}, because it is redundant, following from the multiplicative relation alone. The other two relations, however, require the ring to have ``enough units." \\
\begin{lem}\label{lemrelationsstable} Let $R$ be a $5$-fold stable ring. Then the Steinberg symbols $\{r_0,...,r_n\}$ generating $K_{n+1} ^{\tn{\fsz{M}}}(R)$ satisfy the following additional relations:
\begin{enumerate}
\addtocounter{enumi}{2}
\item Idempotent relation: if $e\in R^*$ is idempotent, then $\{\bar{r},e\}=1$ in $K_{n+1}^{\tn{\fsz{M}}}(R)$.
\item Additive inverse relation: $\{\bar{r},r,-r,\bar{r}'\}=1$ in $K_{n+1}^{\tn{\fsz{M}}}(R)$.
\item Anticommutativity: $\{\bar{r}, r',r, \bar{r}'\}=\{\bar{r},r,r',\bar{r}'\}^{-1}.$
\end{enumerate}
\end{lem}
\begin{proof} The idempotent relation requires no stability assumption. Indeed, multiplicativity implies that $\{\bar{r},e\}=\{\bar{r},e\}\{\bar{r},e\}$. Multiplying both sides by $\{\bar{r},e\}^{-1}$ yields $\{\bar{r},e\}=1$. The additive inverse relation and anticommutativity are established in the $5$-fold stable case by Van der Kallen \cite{VanderKallenRingswithManyUnits77} Theorem 8.4, page 509.
\end{proof}
A few remarks concerning the interdependence of these relations may be helpful. The additive inverse relation implies anticommutativity, as demonstrated by the following sequence of manipulations, copied from Rosenberg's proof\footnotemark\footnotetext{See Rosenberg \cite{RosenbergK94} Theorem 4.3.15, page 214. Rosenberg follows Hutchinson's proof \cite{HutchinsonMatsumoto90}, but Hutchinson credits this particular argument to Milnor.} of Matsumoto's theorem:
\begin{equation}\label{equmatsumotoarg}\begin{array}{lcl} \{r,r'\}&=&\{r,r'\}\{r,-r\}=\{r,-rr'\}\\&=&\{rr'(r')^{-1},-rr'\}=\{rr',-rr'\}\{(r')^{-1},-rr'\}\\&=&\{r',-rr'\}^{-1}=\{r',r\}^{-1}\{r',-r'\}^{-1}\\&=&\{r',r\}^{-1}. \end{array}\end{equation}
However, the additive inverse relation itself depends on expressing the symbol $\{r,-r\}$ as a product of symbols involving the multiplicative and Steinberg relations. If $R$ is a field, this is easy, since in this case $1-r$ is a unit whenever $r$ is a unit not equal to $1$. Indeed, from the identity $(1-r)r^{-1}=-(1-r^{-1})$, it follows that $(1-r^{-1})$ is invertible. Rearranging yields the expression $-r=(1-r)(1-r^{-1})^{-1}.$ Thus,
\begin{equation}\label{equadditiveinversearg}\begin{array}{lcl} \{r,-r\}&=&\{r,(1-r)(1-r^{-1})^{-1}\}=\{r,1-r\}\{r,(1-r^{-1})^{-1}\}\\
&=&\{r,(1-r^{-1})^{-1}\}=\{r^{-1},1-r^{-1}\}\\
&=&1,\end{array}\end{equation}
by repeated application of multiplicativity and the Steinberg relations. If $R$ is a local ring in which $2$ is invertible, then either $1+r$ or $1-r$ is invertible, and again the result is easy.\footnotemark\footnotetext{Suppose neither $1+r$ nor $1-r$ is invertible. Then both elements belong to the maximal ideal $M$ of $R$, so their difference $2r$ belongs to $M$. Since $2$ is invertible, $r$ belongs to $M$, a contradiction, since $r$ is invertible. An argument analogous to the one appearing in equation \hyperref[equadditiveinversearg]{\ref{equadditiveinversearg}} now applies.} Van der Kallen stability is a much more general criterion permitting the same conclusion. As mentioned above, some authors, such as Elbaz-Vincent and M\"{u}ller-Stach \cite{ElbazVincentMilnor02}, take the additive inverse relation to be part of the {\it definition} of Milnor $K$-theory. This obviates the need for stability hypotheses in this context, at the expense of the tensor algebra definition \hyperref[defiMilnorK]{\ref{defiMilnorK}}, and consequently, at the expense of the description in terms of differentials given by the main theorem in equation \hyperref[equmaintheorem]{\ref{equmaintheorem}}.
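The local-ring argument in the footnote is easy to confirm by brute force. The following sketch (a toy illustration, not part of the argument above) checks it in $\mathbb{Z}/9$, a local ring with maximal ideal $(3)$ in which $2$ is invertible.

```python
# For every unit r of Z/9, at least one of 1 + r and 1 - r is again a unit.
from math import gcd

n = 9
units = [r for r in range(n) if gcd(r, n) == 1]
assert gcd(2, n) == 1  # 2 is invertible in Z/9

for r in units:
    assert gcd((1 + r) % n, n) == 1 or gcd((1 - r) % n, n) == 1
```

For example, $r=8$ gives $1+r\equiv 0$, which is not a unit, but $1-r\equiv 2$, which is; the footnote's dichotomy never fails.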
The following lemma is useful for the factorization of Steinberg symbols used in the proof of lemma \hyperref[lempatchingphi]{\ref{lempatchingphi}} below:\\
\begin{lem}\label{lemrelativegenerators} Let $R$ be a split nilpotent extension of a ring $S$, with extension ideal $I$. Then the relative Milnor $K$-group $K_{n+1} ^{\tn{\fsz{M}}}(R,I)$ is generated by Steinberg symbols $\{\bar{r}\}=\{r_0,...,r_n\}$ with at least one entry $r_j$ belonging to $(1+I)^*$.
\end{lem}
\begin{proof} By the splitting $R=S\oplus I$, any element $r$ of $R^*$ may be written uniquely as a product $s(1+i)$, where $s$ belongs to $S^*$ and $i$ belongs to $I$. Hence, the Steinberg symbol $\{\bar{r}\}$ may be factored into a product
\begin{equation}\label{factorizationbarr}\{\bar{r}\}=\{\bar{s}\}\prod_l\{\bar{r}_l'\},\end{equation}
where each entry of $\{\bar{s}\}$ belongs to $S^*$, and where each factor $\{\bar{r}_l'\}$ has at least one entry in $(1+I)^*$. For example, the Steinberg symbol $\{r_0,r_1\}$ factors as follows:
\[\{r_0,r_1\}=\{s_0,s_1\}\{s_0,1+i_1\}\{1+i_0,s_1\}\{1+i_0,1+i_1\}.\]
In general, there are $2^{n+1}-1$ factors of the form $\{\bar{r}_l'\}$ in equation \hyperref[factorizationbarr]{\ref{factorizationbarr}}. Under the map $K_{n+1} ^{\tn{\fsz{M}}}(R)\rightarrow K_{n+1} ^{\tn{\fsz{M}}}(S)$ induced by the canonical surjection $R\rightarrow S$, the factors $\{\bar{r}_l'\}$ all map to the identity by the idempotent relation. Therefore, $\{\bar{r}\}$ maps to $\{\bar{s}\}$. Hence, $\{\bar{r}\}$ belongs to the kernel $\tn{Ker}[K_{n+1} ^{\tn{\fsz{M}}}(R)\rightarrow K_{n+1} ^{\tn{\fsz{M}}}(S)]=K_{n+1} ^{\tn{\fsz{M}}}(R,I)$ if and only if $\{\bar{s}\}=1$. Thus, each element $\{\bar{r}\}$ of $K_{n+1} ^{\tn{\fsz{M}}}(R,I)$ admits a factorization $\prod_l\{\bar{r}_l'\}$, where each factor $\{\bar{r}_l'\}$ has at least one entry in $(1+I)^*$.
\end{proof}
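The unique factorization $r=s(1+i)$ underlying this proof can be confirmed by brute force in a toy example. The sketch below verifies it for the split nilpotent extension $R=\mathbb{F}_5[e]/(e^2)$ of $S=\mathbb{F}_5$ with extension ideal $I=(e)$; the pair encoding and the helper \texttt{mul} are ad hoc for this illustration.

```python
# Elements of R = F_5[e]/(e^2) are modeled as pairs (a, b), meaning a + b*e.
p = 5

def mul(u, v):
    """(a + b e)(c + d e) = ac + (ad + bc) e, since e^2 = 0."""
    (a, b), (c, d) = u, v
    return (a * c % p, (a * d + b * c) % p)

ring = [(a, b) for a in range(p) for b in range(p)]
units = [(a, b) for (a, b) in ring if a != 0]  # a + b*e is a unit iff a != 0

for r in units:
    factorizations = [(s, i) for s in range(1, p) for i in range(p)
                      if mul((s, 0), (1, i)) == r]
    assert len(factorizations) == 1  # existence and uniqueness of r = s(1 + i e)
```

Here $s$ is just the image of $r$ under $R\rightarrow S$, and $i=s^{-1}b$ is forced, matching the uniqueness claim in the proof.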
\subsection{Generators and Relations for K\"{a}hler Differentials}\label{subsectiongeneratorsKahler}
In this section, I describe the groups $\Omega_{R}^{n}/d\Omega_{R}^{n-1}$ and $\Omega_{R,I}^{n}/d\Omega_{R,I}^{n-1}$ in more detail in terms of generators and relations.
\begin{lem}\label{lemKahlerrelations}As an abstract additive group, $\Omega_{R/\mathbb{Z}}^{n}/d\Omega_{R/\mathbb{Z}}^{n-1}$ is generated by differentials $r_0dr_1\wedge...\wedge dr_n$, where $r_j\in R$ for all $j$, subject to the relations
\begin{enumerate}
\addtocounter{enumi}{-1}
\item $\Omega_{R/\mathbb{Z}}^{n}/d\Omega_{R/\mathbb{Z}}^{n-1}$ is abelian.
\item Additive relation: $(r+r')d\bar{r}=rd\bar{r}+r'd\bar{r}$.
\item Leibniz rule: $rd(r'r'')\wedge d\bar{r}=rr'dr''\wedge d\bar{r}+rr''dr'\wedge d\bar{r}$.
\item Alternating relation: $r_0dr_1\wedge...\wedge dr_j\wedge dr_{j+1}\wedge...\wedge dr_n=-r_0dr_1\wedge...\wedge dr_{j+1}\wedge dr_j\wedge...\wedge dr_n$.
\item Exactness: $d\bar{r}=0$.
\end{enumerate}
\end{lem}
\begin{proof} This follows directly from definition \hyperref[defikahler]{\ref{defikahler}} and the properties of the exterior algebra.
\end{proof}
These relations, of course, imply other familiar relations. For example, the Leibniz rule and exactness together imply that
\[r_1dr_0\wedge dr_2\wedge...\wedge dr_n= -r_0dr_1\wedge dr_2\wedge...\wedge dr_n,\]
so the alternating property ``extends to coefficients." This, in turn, implies that additivity is not ``confined to coefficients:"
\[r_0dr_1\wedge...\wedge d(r_j+r_j')\wedge...\wedge dr_n=r_0dr_1\wedge...\wedge dr_j\wedge...\wedge dr_n+r_0dr_1\wedge...\wedge dr_j'\wedge...\wedge dr_n.\]
Similarly, repeated use of the alternating relation implies that applying a permutation to the elements $r_0,...,r_n$ appearing in the differential $r_0dr_1\wedge...\wedge dr_n$ yields the same differential, multiplied by the sign of the permutation.
The following lemma establishes properties of K\"{a}hler differentials analogous to the properties of Milnor $K$-groups established in lemma \hyperref[lemrelativegenerators]{\ref{lemrelativegenerators}}. Minor stability and invertibility assumptions are necessary to yield the desired results.\\
\begin{lem}\label{lemrelativekahlergenerators} Let $R$ be a split nilpotent extension of a $2$-fold stable ring $S$, in which $2$ is invertible. Let $I$ be the extension ideal.
\begin{enumerate}
\item The group $\Omega_{R,I}^n$ of absolute K\"{a}hler differentials of degree $n$ relative to $I$ is generated by differentials of the form $rd\bar{r}\wedge d\bar{r}'$, where $r$ is either $1$ or belongs to $I$, where $\bar{r}\in I$, and where $\bar{r}'\in R^*$.
\item The group $\Omega_{R,I}^{n}/d\Omega_{R,I}^{n-1}$ is a subgroup of the group $\Omega_{R/\mathbb{Z}}^{n}/d\Omega_{R/\mathbb{Z}}^{n-1}$. In particular, $d\Omega_{R,I}^{n-1}=d\Omega_{R}^{n-1}\cap \Omega_{R,I}^{n}$.
\end{enumerate}
\end{lem}
\begin{proof} For the first part of the lemma, note that the elements $r_0,...,r_n$ contributing to a differential $r_0dr_1\wedge...\wedge dr_n$ may be permuted up to sign, using exactness and the alternating property of the exterior product. Also note that by lemma \hyperref[lemstability]{\ref{lemstability}}, any element of $R$ may be written as a sum of two units. It therefore suffices to show that $\Omega_{R,I}^n$ is generated by differentials of the form $r'd\bar{r}'$, where either $r'$ or at least one entry of $\bar{r}'$ belongs to $I$. By the splitting $R=S\oplus I$, any element $r$ of $R$ may be written uniquely as a sum $s+i$, where $s$ belongs to $S$ and $i$ belongs to $I$. Hence, the differential $rd\bar{r}$ may be decomposed into a sum
\begin{equation}\label{decompositionrdbarr}rd\bar{r}=sd\bar{s}+\sum_lr'_ld\bar{r}_l',\end{equation}
where $s$ and each entry of $\bar{s}$ belong to $S$, and where for each $l$, either $r'_l$ or at least one entry of $\bar{r}_l'$ belongs to $I$. For example, the differential $r_0dr_1\wedge dr_2$ decomposes as follows:
\begin{equation*}\begin{array}{lcl} r_0dr_1\wedge dr_2&=&s_0ds_1\wedge ds_2+s_0ds_1\wedge di_2+s_0di_1\wedge ds_2+s_0di_1\wedge di_2\\&+&i_0ds_1\wedge ds_2+i_0ds_1\wedge di_2+i_0di_1\wedge ds_2+i_0di_1\wedge di_2.\end{array}\end{equation*}
In general, there are $2^{n+1}-1$ summands of the form $r'_ld\bar{r}_l'$ in equation \hyperref[decompositionrdbarr]{\ref{decompositionrdbarr}}. Under the map $\Omega_{R}^n\rightarrow\Omega_{S}^n$ induced by the canonical surjection $R\rightarrow S$, the summands $r'_ld\bar{r}_l'$ all map to zero, so $rd\bar{r}$ maps to $sd\bar{s}$. Hence, $rd\bar{r}$ belongs to the kernel $\tn{Ker}[\Omega_{R}^n\rightarrow\Omega_{S}^n]=\Omega_{R,I}^n$ if and only if $sd\bar{s}=0$. Thus, each element $rd\bar{r}$ of $\Omega_{R,I}^n$ admits a decomposition $rd\bar{r}=\sum_lr'_ld\bar{r}_l'$, where for each $l$, either $r'_l$ or at least one entry of $\bar{r}_l'$ belongs to $I$.
For the second part of the lemma, the inclusion $d\Omega_{R,I}^{n-1}\subset d\Omega_{R}^{n-1}\cap \Omega_{R,I}^{n}$ is obvious, independent of the fact that $R$ is a split nilpotent extension of $S$. Conversely, suppose that $\omega$ belongs to the intersection $d\Omega_{R}^{n-1}\cap \Omega_{R,I}^{n}$. Since $\omega\in \Omega_{R,I}^{n}$, the first part of the lemma implies that $\omega$ may be expressed as a sum of differentials of the form $rd\bar{r}=r_0dr_1\wedge...\wedge dr_n$, where either $r_0$ or at least one of the $r_j$ belongs to $I$. Since $\omega\in d\Omega_{R}^{n-1}$, the ``coefficient" $r_0$ in each summand may be taken to be $1$. Hence, $\omega$ is a sum of terms of the form $dr_1\wedge...\wedge dr_n=d\big(r_1dr_2\wedge...\wedge dr_n\big)$, where one of the elements $r_1,...,r_n$ belongs to $I$, so $\omega\in d\Omega_{R,I}^{n-1}$ by the first part of the lemma. It follows that the map sending $\omega+d\Omega_{R,I}^{n-1}$ to $\omega+d\Omega_{R}^{n-1}$ is an injective group homomorphism from $\Omega_{R,I}^{n}/d\Omega_{R,I}^{n-1}$ to $\Omega_{R}^{n}/d\Omega_{R}^{n-1}$, which may be regarded as an inclusion map.
\end{proof}
\subsection{The $d\log$ Map; the de Rham-Witt Viewpoint}\label{subsectiondlogdeRhamWitt}
The following lemma establishes the existence of the ``canonical $d\log$ map" from Milnor $K$-theory to the absolute K\"{a}hler differentials, used in the proof of lemma \hyperref[lempatchingphi]{\ref{lempatchingphi}} below.\\
\begin{lem}\label{lemdlog} Let $R$ be a commutative ring. The map $R^*\rightarrow\Omega_{R/\mathbb{Z}}^1$ sending $r$ to $d\log(r)=dr/r$ extends to a homomorphism $d\log:T_{R^*/\mathbb{Z}}\rightarrow \Omega_{R/\mathbb{Z}}^\bullet$ of graded rings, by sending sums to sums and tensor products to exterior products. This homomorphism induces a homomorphism of graded rings:
\[d\log:K_\bullet^{\tn{\fsz{M}}}(R)\longrightarrow \Omega_{R/\mathbb{Z}}^\bullet\]
\begin{equation}\label{equdlog}\{r_0,...,r_n\}\mapsto\displaystyle\frac{dr_0}{r_0}\wedge...\wedge\frac{dr_n}{r_n}.\end{equation}
\end{lem}
\begin{proof} The map $d\log:T_{R^*/\mathbb{Z}}\rightarrow \Omega_{R/\mathbb{Z}}^\bullet$ is a graded ring homomorphism by construction, since its definition stipulates that sums are sent to sums and tensor products to exterior products. Elements of the form $r\otimes(1-r)$ in $R^*\otimes_\mathbb{Z} R^*$ map to zero in $\Omega_{R/\mathbb{Z}}^2$ by the alternating property of the exterior product:
\[d\log\big(r\otimes(1-r)\big)=\frac{dr}{r}\wedge\frac{d(1-r)}{1-r}=-\frac{1}{r(1-r)}dr\wedge dr=0,\]
so $d\log$ descends to a homomorphism $K_\bullet^{\tn{\fsz{M}}}(R)\longrightarrow \Omega_{R/\mathbb{Z}}^\bullet$.
\end{proof}
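For intuition, the defining properties of $d\log$ can be checked in a concrete model where the units are rational functions in two variables, a 1-form $P\,dx+Q\,dy$ is stored as the pair $(P,Q)$, and a wedge of two 1-forms is recorded by its coefficient on $dx\wedge dy$. The helper names \texttt{dlog1} and \texttt{wedge} below are ad hoc for this illustration, and sympy is assumed available.

```python
# Checks on dlog: multiplicativity, the Steinberg relation, anticommutativity.
import sympy as sp

x, y = sp.symbols('x y')

def dlog1(r):
    """dlog of a single unit r: the 1-form dr/r = (r_x/r) dx + (r_y/r) dy."""
    return (sp.diff(r, x) / r, sp.diff(r, y) / r)

def wedge(w1, w2):
    """(P dx + Q dy) ^ (P' dx + Q' dy) = (P Q' - Q P') dx ^ dy."""
    (P, Q), (Pp, Qp) = w1, w2
    return sp.simplify(P * Qp - Q * Pp)

# Multiplicativity: dlog(r r') = dlog(r) + dlog(r'), componentwise.
r, rp = x**2 * y, x - y
for c, cr, crp in zip(dlog1(r * rp), dlog1(r), dlog1(rp)):
    assert sp.simplify(c - cr - crp) == 0

# Steinberg relation: dlog{x, 1 - x} = (dx/x) ^ (-dx/(1 - x)) = 0.
assert wedge(dlog1(x), dlog1(1 - x)) == 0

# Anticommutativity in the image: dlog{x, y} = -dlog{y, x}.
assert sp.simplify(wedge(dlog1(x), dlog1(y)) + wedge(dlog1(y), dlog1(x))) == 0
```

The second assertion is exactly the computation in the proof: $d(1-x)=-dx$, so the wedge is a multiple of $dx\wedge dx=0$.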
Lars Hesselholt \cite{HesselholtBigdeRhamWitt} provides a more sophisticated viewpoint regarding the $d\log$ map and the closely related map $\phi_{n+1}$ in the main theorem, expressed in equation \hyperref[equationphi]{\ref{equationphi}} above. This viewpoint is expressed in terms of pro-abelian groups, de Rham-Witt complexes, and Frobenius endomorphisms. In describing it, I closely paraphrase a private communication \cite{Hesselholtprivate14} from Hesselholt. For every commutative ring $R$, there exists a map of pro-abelian groups
\[d\log: K_n^{\tn{\fsz{M}}}(R)\rightarrow W\Omega_{R/\mathbb{Z}}^n\]
from Milnor $K$-theory to an appropriate de Rham-Witt theory $W\Omega_{R/\mathbb{Z}}^n$, taking the Steinberg symbol $\{r_1,...,r_n\}$ to the element $d\log[r_1]...d\log[r_n]$, where $[r]$ is the Teichm\"{u}ller representative of $r$ in an appropriate ring of Witt vectors of $R$. Here, $W\Omega_{R/\mathbb{Z}}^n$ may represent either the $p$-typical de Rham-Witt groups or the big de Rham-Witt groups. On the $p$-typical de Rham-Witt complex, there is a (divided) Frobenius endomorphism $F=F_p$; on the big de Rham-Witt complex, there is a (divided) Frobenius endomorphism $F_n$ for every positive integer $n$. The map $d\log$ maps into the sub-pro-abelian group
\[(W\Omega_{R/\mathbb{Z}}^n)^{F=\tn{Id}}\subset W\Omega_{R/\mathbb{Z}}^n,\]
fixed by the appropriate Frobenius endomorphism or endomorphisms. Using the big de Rham-Witt complex, one may conjecture\footnotemark\footnotetext{This is Hesselholt's idea.} that for every commutative ring $R$ and every nilpotent ideal $I\subset R$, the induced map of relative groups
\[K_n^{\tn{\fsz{M}}}(R,I)\rightarrow(W\Omega_{R,I}^n)^{F=\tn{Id}},\]
is an isomorphism of pro-abelian groups. Expressing the right-hand-side in terms of differentials, as in the main theorem \hyperref[equmaintheorem]{\ref{equmaintheorem}}, likely requires some additional hypotheses.\footnotemark\footnotetext{Hesselholt \cite{Hesselholtprivate14} writes, {\it ``If every prime number $l$ different from a fixed prime number $p$ is invertible in $R$, then one should be able to use the $p$-typical de Rham-Witt groups instead of the big de Rham-Witt groups. In this context, [the main theorem] can be seen as a calculation of this Frobenius fixed set... ... In order to be able to express the Frobenius fixed set in terms of differentials (as opposed to de Rham-Witt differentials), I would think that it is necessary to invert $N$... ...I do not think that inverting 2 is enough.}}
\section{Proof of the Theorem}\label{sectionproof}
\subsection{Strategy of Proof}\label{subsectionstrategy}
The proof of the theorem is by induction on $n$ in the statement $\displaystyle K_{n+1} ^{\tn{\fsz{M}}}(R,I)\cong \Omega_{R,I} ^n/d\Omega_{R,I} ^{n-1}$. The base case of the theorem ($n=1$) is provided by combining the theorems \hyperref[theoremvanderkallen]{\ref{theoremvanderkallen}} (Van der Kallen), \hyperref[theoremmaazen]{\ref{theoremmaazen}} (Maazen and Stienstra), and \hyperref[theorembloch]{\ref{theorembloch}} (Bloch), introduced in section \hyperref[sectionprelim]{\ref{sectionprelim}} above. This synthesis is discussed in detail in section \hyperref[subsectionbasecase]{\ref{subsectionbasecase}} below. The induction hypothesis assumes the existence of isomorphisms $\phi_{m+1}:\displaystyle K_{m+1} ^{\tn{\fsz{M}}}(R,I)\rightarrow\Omega_{R,I} ^m/d\Omega_{R,I} ^{m-1}$ and $\psi_{m+1}:\displaystyle \Omega_{R,I} ^m/d\Omega_{R,I} ^{m-1}\rightarrow K_{m+1} ^{\tn{\fsz{M}}}(R,I)$ for $1\le m<n$, satisfying conditions made explicit in section \hyperref[subsectionexplicitinduction]{\ref{subsectionexplicitinduction}}. These isomorphisms are then used to construct corresponding isomorphisms $\phi_{n+1}$ and $\psi_{n+1}$ in sections \hyperref[subsectionanalysisphi]{\ref{subsectionanalysisphi}} and \hyperref[subsectionanalysispsi]{\ref{subsectionanalysispsi}}.
As illustrated by equations \hyperref[equationphi]{\ref{equationphi}} and \hyperref[equationpsi]{\ref{equationpsi}} in section \hyperref[sectionintroduction]{\ref{sectionintroduction}} above, it is easy to specify the images of certain special generators of $K_{n+1} ^{\tn{\fsz{M}}}(R,I)$ and $\Omega_{R,I} ^n/d\Omega_{R,I} ^{n-1}$ under $\phi_{n+1}$ and $\psi_{n+1}$. The whole ``difficulty" of the proof is in verifying that these formulae actually extend to well-defined isomorphisms. Induction allows this problem to be split into two parts: first, to show that the maps $\phi_{m+1}$ and $\psi_{m+1}$ for $1\le m<n$ are well-defined isomorphisms; second, that these maps give rise to well-defined isomorphisms $\phi_{n+1}$ and $\psi_{n+1}$. The first part ``comes for free," via the base case of the theorem and the induction hypothesis. The second part consists of ``patching together" $(n+1)$ maps $\Phi_{n+1,j}$ and $\Psi_{n+1,j}$, defined, roughly speaking, by applying $\phi_n$ and $\psi_n$ to Steinberg symbols and differentials ``of size $n$," given by omitting individual entries of corresponding symbols and differentials ``of size $n+1$." The maps $\Phi_{n+1,j}$ and $\Psi_{n+1,j}$ are introduced in definitions \hyperref[defiPhinj]{\ref{defiPhinj}} and \hyperref[defiPsinplusonej]{\ref{defiPsinplusonej}}, respectively. The ``patching lemmas" \hyperref[lempatchingphi]{\ref{lempatchingphi}} and \hyperref[lempatchingpsi]{\ref{lempatchingpsi}} are the most computationally involved parts of the proof.
The obvious question raised by this approach is, ``why not just define the images of sets of generators of $\displaystyle K_{n+1} ^{\tn{\fsz{M}}}(R,I)$ and $\Omega_{R,I} ^n/d\Omega_{R,I} ^{n-1}$, respectively, show that they satisfy the proper relations in the target, etc, instead of pursuing an elaborate induction and patching scheme?" There may well be some clever way to do this, and to convince oneself that all the necessary conditions have been checked, but it is not a straightforward procedure. The reason why is that relations among ``convenient" generators for $\displaystyle K_{n+1} ^{\tn{\fsz{M}}}(R,I)$ generally do not translate easily into relations among ``convenient" generators for $\Omega_{R,I} ^n/d\Omega_{R,I} ^{n-1}$, and vice versa. For example, consider the Leibniz rule in lemma \hyperref[lemKahlerrelations]{\ref{lemKahlerrelations}}:
\[rd(r'r'')\wedge d\bar{r}=rr'dr''\wedge d\bar{r}+rr''dr'\wedge d\bar{r},\]
and suppose that we want to verify that
\begin{equation}\label{equleibnizcomplication}\psi_{n+1}\big(rd(r'r'')\wedge d\bar{r}\big)=\psi_{n+1}\big(rr'dr''\wedge d\bar{r}\big)\psi_{n+1}\big(rr''dr'\wedge d\bar{r}\big).\end{equation}
Assume for simplicity that the element $r$ and the product $r'r''$ both belong to $I$, and that every entry of the $(n-1)$-tuple $(\bar{r})$ is an element of $R^*$; the example will raise sufficient subtleties for illustrative purposes even in this special case. Equation \hyperref[equationpsi]{\ref{equationpsi}} specifies the image in $\displaystyle K_{n+1} ^{\tn{\fsz{M}}}(R,I)$ of the left-hand side of equation \hyperref[equleibnizcomplication]{\ref{equleibnizcomplication}}:
\[\psi_{n+1}\big(rd(r'r'')\wedge d\bar{r}\big)=\{e^{r\Pi},e^{r'r''},\bar{r}\},\]
where $\Pi$ is the product of the entries of $\bar{r}$. However, it is not straightforward to write out the right-hand side of equation \hyperref[equleibnizcomplication]{\ref{equleibnizcomplication}} explicitly. In particular, the factors $r'$ and $r''$ of the product $r'r''$ may both belong to $I$, or only one may belong to $I$, or neither may belong to $I$, if $I$ is not prime. Suppose for simplicity that $r'$ belongs to $I$ but $r''$ does not. Then the second factor on the right-hand side of equation \hyperref[equleibnizcomplication]{\ref{equleibnizcomplication}} is
\[\psi_{n+1}\big(rr''dr'\wedge d\bar{r}\big)=\{e^{rr''\Pi},e^{r'},\bar{r}\},\]
but the explicit form of the first factor cannot be read off from equation \hyperref[equationpsi]{\ref{equationpsi}}, since $r''$ is neither a unit nor an element of $I$. Instead, one must use lemma \hyperref[lemstability]{\ref{lemstability}} to write $r''$ as a sum of two units $r''=u+v$. Then equation \hyperref[equleibnizcomplication]{\ref{equleibnizcomplication}} may be written out explicitly:
\[\psi_{n+1}\big(rd(r'r'')\wedge d\bar{r}\big)=\{e^{rr'u\Pi},u,\bar{r}\}\{e^{rr'v\Pi},v,\bar{r}\}\{e^{rr''\Pi},e^{r'},\bar{r}\}.\]
However, one still must show that the right-hand side does not depend on the choice of $u$ and $v$, since $r''$ can generally be written as a sum of two units in many different ways.\footnotemark\footnotetext{See lemma \hyperref[lemuvUV]{\ref{lemuvUV}} below.} This example should serve to convince the reader that a na\"{i}ve, straightforward approach to the proof involves many cases and loose ends. The induction approach I use instead has the advantage of being more systematic. Most of the work involved in checking relations is shunted off on the induction hypothesis, with the tradeoff that one must endure a bit of computation to show that the maps $\Phi_{n+1,j}$ and $\Psi_{n+1,j}$ really patch together as claimed.
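The decomposition $r''=u+v$ invoked above rests on the stability lemma's guarantee that every element of the rings considered here is a sum of two units, and the final complication is precisely that such decompositions are far from unique. The brute-force sketch below (a toy case, not tied to the rings of the proof) illustrates both points in $\mathbb{Z}/9$.

```python
# In Z/9, every element is a sum of two units, always in more than one way.
from math import gcd

n = 9
units = [u for u in range(n) if gcd(u, n) == 1]

for r in range(n):
    decompositions = [(u, v) for u in units for v in units if (u + v) % n == r]
    assert len(decompositions) >= 2  # existence, and never uniqueness
```

For instance, $3 = 1 + 2 = 4 + 8$ in $\mathbb{Z}/9$, so any construction built on a choice of $u$ and $v$ must be shown to be independent of that choice, exactly as in lemma \hyperref[lemuvUV]{\ref{lemuvUV}}.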
\subsection{Base Case of the Theorem}\label{subsectionbasecase}
Combining several of the preliminary results in section \hyperref[sectionprelim]{\ref{sectionprelim}} yields the following lemma, which serves as the base case of the theorem:\\
\begin{lem}\label{lembasecase} Suppose $R$ is a split nilpotent extension of a $5$-fold stable ring $S$, with extension ideal $I$
whose index of nilpotency is $N$. Suppose further that every positive integer less than or equal to $N$
is invertible in $S$. Then
\begin{equation}K_{2} ^{\tn{\fsz{M}}}(R,I)\cong \frac{\Omega_{R,I} ^1}{dI}.\end{equation}
\end{lem}
\begin{proof}$R$ is $5$-fold stable by lemma \hyperref[lemstability]{\ref{lemstability}}, so $K_2 ^{\tn{\fsz{M}}}(R)\cong D_2(R)$ by theorem \hyperref[theoremvanderkallen]{\ref{theoremvanderkallen}}. $R$ is a split radical extension of $S$ since $I$ is nilpotent, so $K_2 ^{\tn{\fsz{M}}}(R,I)\cong D_2(R,I)\cong K_2(R,I)$ by theorem \hyperref[theoremmaazen]{\ref{theoremmaazen}}. Finally, since every positive integer less than or equal to $N$ is invertible in $S$, $K_2(R,I)\cong \Omega_{R,I} ^1/dI$ by theorem \hyperref[theorembloch]{\ref{theorembloch}}.
\end{proof}
In terms of Steinberg symbols and K\"{a}hler differentials, the isomorphisms of lemma \hyperref[lembasecase]{\ref{lembasecase}} are the maps
\[\phi_{2}:K_{2} ^{\tn{\fsz{M}}}(R,I)\longrightarrow\frac{\Omega_{R,I} ^1}{dI}\]
\begin{equation}\label{equationphi1}
\{r_0,r_1\}\mapsto\log(r_0)\frac{dr_1}{r_1},
\end{equation}
where $r_0\in (1+I)^*$ and $r_1\in R^*$, and
\[\psi_{2}:\frac{\Omega_{R,I} ^1}{dI}\longrightarrow K_{2} ^{\tn{\fsz{M}}}(R,I)\]
\begin{equation}\label{equationpsi1}
r_0dr_1\mapsto\{e^{r_0r_1},r_{1}\},
\end{equation}
where $r_0\in I$ and $r_{1}\in R^*$. These maps are given by setting $n=1$ in equations \hyperref[equationphi]{\ref{equationphi}} and \hyperref[equationpsi]{\ref{equationpsi}} of section \hyperref[sectionintroduction]{\ref{sectionintroduction}}. I will now describe in more detail how they arise. As described in the proof of lemma \hyperref[lembasecase]{\ref{lembasecase}}, $\phi_2$ may be viewed as a composition of isomorphisms
\[K_{2} ^{\tn{\fsz{M}}}(R,I)\rightarrow D_2(R,I)\rightarrow \frac{\Omega_{R,I} ^1}{dI}.\]
The first isomorphism is given, in the $5$-fold stable case, by restricting the isomorphism $K_{2} ^{\tn{\fsz{M}}}(R)\cong D_2(R)$ of section \hyperref[subsectionsymbolicstability]{\ref{subsectionsymbolicstability}} to the subgroup $K_{2} ^{\tn{\fsz{M}}}(R,I)$. This isomorphism is described explicitly by Van der Kallen \cite{VanderKallenRingswithManyUnits77} theorem 8.4, page 509, as the map taking the Steinberg symbol $\{r_0,r_1\}$ to the Dennis-Stein symbol $\langle(r_0-1)/r_1,r_1\rangle$. Restricting to $K_{2} ^{\tn{\fsz{M}}}(R,I)$, one may assume that at least one of the entries $r_0$ and $r_1$ of $\{r_0,r_1\}$ belongs to $(1+I)^*$. By anticommutativity, one may assume that $r_0\in(1+I)^*$. The second isomorphism is described explicitly by Maazen and Stienstra \cite{MaazenStienstra77} section 3.12, pages 287-289, as the map taking the Dennis-Stein symbol $\langle a,b\rangle$ to the differential $\log(1+ab)(db/b)$.\footnotemark\footnotetext{Actually, Maazen and Stienstra give the image as $\log(1+ab)(da/a)$, but the Dennis-Stein relation $\langle a,b\rangle\langle -b,-a\rangle=1$ in definition \hyperref[defidennisstein]{\ref{defidennisstein}} implies that the definition I use here is equivalent.} This definition makes sense whether or not $b$ is invertible, since every term in the power series expansion of $\log(1+ab)$ is divisible by $b$. Putting the two maps together,
\[\phi_2\big(\{r_0,r_1\}\big)= \log\Big(1+\frac{r_0-1}{r_1}r_1\Big)\frac{dr_1}{r_1}=\log(r_0)\frac{dr_1}{r_1},\]
as stated in equation \hyperref[equationphi1]{\ref{equationphi1}}.
Similarly, $\psi_2$ may be viewed as a composition of isomorphisms in the opposite direction:
\[\frac{\Omega_{R,I} ^1}{dI}\rightarrow D_2(R,I)\rightarrow K_{2} ^{\tn{\fsz{M}}}(R,I).\]
The first isomorphism is described explicitly by Maazen and Stienstra \cite{MaazenStienstra77} section 3.12, pages 287-289, as the map taking the differential $r_0dr_1$ to the Dennis-Stein symbol $\langle (e^{r_0r_1}-1)/r_1,r_1\rangle$.\footnotemark\footnotetext{Maazen and Stienstra give the image as $\langle (e^{r_0r_1}-1)/r_0,r_0\rangle$, but the Leibniz rule $d(r_0r_1)=r_0dr_1+r_1dr_0$, together with exactness, proves that the definition I use here is equivalent.} Restricting to $D_2(R,I)$, one may assume that at least one of the elements $r_0$ and $r_1$ belongs to $I$. By exactness and the Leibniz rule, it suffices to describe the images of differentials of the form $idr$. Such a differential maps to $\langle (e^{ir}-1)/r,r\rangle$. The second isomorphism is given, in the $5$-fold stable case, by restricting the isomorphism $D_2(R)\rightarrow K_{2} ^{\tn{\fsz{M}}}(R)$ to the subgroup $D_2(R,I)$. This isomorphism is described explicitly by Van der Kallen \cite{VanderKallenRingswithManyUnits77} theorem 8.4, page 509, as the map taking the Dennis-Stein symbol $\langle a,b\rangle$ to the Steinberg symbol $\{1+ab,b\}$. Putting the two maps together,
\[\psi_2(r_0dr_1)=\Big\{1+\frac{e^{r_0r_1}-1}{r_1}r_1,r_1\Big\} =\{e^{r_0r_1},r_1\},\]
as stated in equation \hyperref[equationpsi1]{\ref{equationpsi1}}.
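Because $I$ is nilpotent, every $\log$ and $\exp$ series appearing in these formulas is a finite sum, and the fact that $\phi_2$ and $\psi_2$ invert one another ultimately rests on the truncated series being mutually inverse on $(1+I)^*$, together with $\log$ taking products in $(1+I)^*$ to sums in $I$. The following sketch checks both facts numerically. The model ring $R=\mathbb{Q}[t]/(t^N)$ with $I=(t)$, and its representation by coefficient lists, are illustrative assumptions of the sketch, not part of the argument:

```python
from fractions import Fraction

N = 5  # index of nilpotency: model ring R = Q[t]/(t^N), ideal I = (t)

def mul(a, b):
    # product of truncated polynomials, i.e., multiplication in R
    c = [Fraction(0)] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

def log1p(x):
    # log(1 + x) for x in I: the series terminates, since x^N = 0
    out, p = [Fraction(0)] * N, x
    for k in range(1, N):
        out = [o + Fraction((-1) ** (k + 1), k) * c for o, c in zip(out, p)]
        p = mul(p, x)
    return out

def exp(x):
    # exp(x) for x in I: a finite sum using 1/k! for k < N only, which is
    # why the hypotheses require the integers up to N to be invertible
    out = [Fraction(0)] * N
    out[0] = Fraction(1)
    p, fact = out[:], 1
    for k in range(1, N):
        p, fact = mul(p, x), fact * k
        out = [o + c / fact for o, c in zip(out, p)]
    return out
```

On $i=t+2t^2\in I$, for instance, one checks that $\exp(\log(1+i))=1+i$ exactly, and that $\log\big((1+a)(1+b)\big)=\log(1+a)+\log(1+b)$ for $a,b\in I$; the latter identity is the mechanism by which $\phi_2$ respects the multiplicative relations among Steinberg symbols.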
\subsection{Induction Hypothesis}\label{subsectionexplicitinduction}
Let $n$ be a positive integer. The induction hypothesis states that for any positive integer $m$ less than $n$, there exists an isomorphism
\[\phi_{m+1}:K_{m+1} ^{\tn{\fsz{M}}}(R,I)\rightarrow \frac{\Omega_{R,I} ^m}{d\Omega_{R,I} ^{m-1}}\]
\begin{equation}\label{equinductionphi}\{r,\bar{r}\}\mapsto\log(r)\frac{d\bar{r}}{\Pi},\end{equation}
for $r\in (1+I)^*$ and $\bar{r}\in R^*$, where $\Pi$ denotes the product of the entries of $\bar{r}$, with inverse
\[\psi_{m+1}:\frac{\Omega_{R,I} ^m}{d\Omega_{R,I} ^{m-1}}\rightarrow K_{m+1} ^{\tn{\fsz{M}}}(R,I)\]
\begin{equation}\label{equinductionpsi}rd\bar{r}\wedge d\bar{r}'\mapsto\{e^{r\Pi'},e^{\bar{r}},\bar{r}'\},\end{equation}
for $r,\bar{r}\in I$ and $\bar{r}'\in R^*$, where $\Pi'$ denotes the product of the entries of $\bar{r}'$, and $e^{\bar{r}}$ denotes the tuple of exponentials of the entries of $\bar{r}$. Remaining is the inductive step: to show that the induction hypothesis implies the existence of such isomorphisms for $m=n$.
\subsection{Definition and Analysis of the Map $\displaystyle\phi_{n+1}: K_{n+1} ^{\tn{\fsz{M}}}(R,I)\rightarrow \Omega_{R,I} ^n/d\Omega_{R,I} ^{n-1}$}\label{subsectionanalysisphi}
I will define $\phi_{n+1}$ in several steps, building up its properties in the process.\\
\begin{defi}\label{defprelimmapsphi} Let $m$ be a positive integer.
\begin{enumerate}
\item Let $F_{m+1}(R^*)$ be the free abelian group generated by ordered $(m+1)$-tuples $(r_0,...,r_m)$ of elements of $R^*$.
\item Let $q_{m+1}$ be the quotient homomorphism
\[q_{m+1}: F_{m+1}(R^*)\rightarrow K_{m+1} ^{\tn{\fsz{M}}}(R)\]
\[(r_0,...,r_m)\mapsto\{r_0,...,r_m\},\]
sending each $(m+1)$-tuple to the corresponding Steinberg symbol.
\item Let $F_{m+1}\big(R^*,(1+I)^*\big)$ be the preimage in $F_{m+1}(R^*)$ of the relative Milnor $K$-group $K_{m+1}^{\tn{\fsz{M}}}(R,I)\subset K_{m+1}^{\tn{\fsz{M}}}(R)$ under $q_{m+1}$.
\item For $1\le m<n$, let $\Phi_{m+1}$ be the composition
\[\phi_{m+1}\circ q_{m+1}:F_{m+1}\big(R^*,(1+I)^*\big)\rightarrow\frac{\Omega_{R,I} ^{m}}{d\Omega_{R,I} ^{m-1}}.\]
\end{enumerate}
\end{defi}
An $(m+1)$-tuple $(r_0,...,r_m)$ of elements of $R^*$ satisfying the condition that at least one of its entries belongs to $(1+I)^*$ is automatically an element of $F_{m+1}\big(R^*,(1+I)^*\big)$. If $1\le m<n$, then specifying such an element $r_j$ allows the image of $(r_0,...,r_m)$ under $\Phi_{m+1}$ to be expressed explicitly in terms of logarithms and their differentials, via equation \hyperref[equinductionphi]{\ref{equinductionphi}} above. In particular, if $r_0\in(1+I)^*$, then
\[\Phi_{m+1}(r_0,...,r_m)=\log(r_0)\frac{dr_1}{r_1}\wedge...\wedge\frac{dr_{m}}{r_{m}}.\]
However, there are generally $(m+1)$-tuples belonging to $F_{m+1}\big(R^*,(1+I)^*\big)$ that do not satisfy this property. For example, any $(m+1)$-tuple including an idempotent element as one of its entries maps to the trivial element of $K_{m+1}^{\tn{\fsz{M}}}(R)$ under $q_{m+1}$, and therefore belongs to $F_{m+1}\big(R^*,(1+I)^*\big)$, whether or not it includes an element of $(1+I)^*$.
It will be useful to consider diagrams of the form
\[\begin{CD}
K_{n+1} ^{\tn{\fsz{M}}}(R,I) @<q_{n+1}<< F_{n+1}\big(R^*,(1+I)^*\big)
@<A_j<<F_{n}\big(R^*,(1+I)^*\big)\times R^*
@>\beta>>\displaystyle\frac{\Omega_{R,I} ^{n-1}}{d\Omega_{R,I} ^{n-2}}\times R^*
@>{\gamma}>>\displaystyle\frac{\Omega_{R,I}^{n}}{d\Omega_{R,I}^{n-1}},
\end{CD}\]
where the maps $A_j$, $\beta$, and $\gamma$ are defined as follows:\\
\begin{defi}\label{defiAjbetagamma} Let $j$ be a nonnegative integer less than or equal to $n$.
\begin{enumerate}
\item Let $A_j$ be the map converting each $n$-tuple $(\bar{r})$ in $F_{n}\big(R^*,(1+I)^*\big)$ to an $(n+1)$-tuple in $F_{n+1}\big(R^*,(1+I)^*\big)$ by inserting an element of $R^*$ between the $(j-1)$th and $j$th entries of $(\bar{r})$. Extend $A_j$ to inverses and products of $n$-tuples to produce products of $(n+1)$-tuples and their inverses sharing the same $j$th entries. For example, if $n=3$ and $j=1$, then
\[A_1 \big((r_0,r_1,r_2)(r_0',r_1',r_2')^{-1}, r\big)=(r_0,r,r_1,r_2)(r_0',r,r_1',r_2')^{-1}.\]
\item Let $\beta$ be the Cartesian product $\Phi_{n}\times \tn{Id}_{R^*}$, where $\tn{Id}_{R^*}$ is the identity map on $R^*$.
\item Let $\gamma$ be defined by wedging on the right with $\displaystyle\frac{dr}{r}$ for $r\in R^*$, sending $\big(\omega+d\Omega_{R,I} ^{n-2},r\big)$ to $\displaystyle\omega\wedge\frac{dr}{r}+d\Omega_{R,I} ^{n-1}$.
\end{enumerate}
\end{defi}
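The bookkeeping performed by $A_j$ and its partial inverse can be prototyped directly, treating ring elements as opaque symbols and a formal product in $F_{n+1}\big(R^*,(1+I)^*\big)$ as a list of (tuple, exponent) pairs. This representation is an assumption of the sketch, not notation from the text; the sketch illustrates both the insertion and the extraction of the common $j$th entry, which is the reason the maps $A_j$ are injective:

```python
def A(j, factors, r):
    # A_j: insert r at position j in every factor of a formal product;
    # a factor is a pair (tuple of entries, exponent +1 or -1)
    return [(t[:j] + (r,) + t[j:], e) for t, e in factors]

def A_inverse(j, factors):
    # defined only on Im(A_j): all factors must share the same jth entry,
    # which is extracted together with the shortened product
    entries = {t[j] for t, _ in factors}
    if len(entries) != 1:
        raise ValueError("not in the image of A_j")
    r = entries.pop()
    return [(t[:j] + t[j + 1:], e) for t, e in factors], r
```

For the example following the definition ($n=3$, $j=1$), inserting $r$ into $(r_0,r_1,r_2)(r_0',r_1',r_2')^{-1}$ and then applying $A_1^{-1}$ round-trips exactly, while applying $A_0^{-1}$ to the result fails, since the factors do not share a common $0$th entry.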
It is prudent to verify that these maps are well-defined. For $q_{n+1}$ and $\beta$, this is obvious. For $A_j$, the definition certainly produces an element of $F_{n+1}(R^*)$, and it remains to show that this element belongs to the subgroup $F_{n+1}\big(R^*,(1+I)^*\big)$. To see this, note that inserting an element corresponds, after applying $q_{n+1}$ and anticommutativity, to the product $K_n^{\tn{\fsz{M}}}(R)\times K_1^{\tn{\fsz{M}}}(R)\rightarrow K_{n+1}^{\tn{\fsz{M}}}(R)$, which fits into the commutative square
\[\begin{CD}
K_n^{\tn{\fsz{M}}}(R)\times K_1^{\tn{\fsz{M}}}(R)@>>>K_{n+1}^{\tn{\fsz{M}}}(R)\\
@VVV@VVV\\
K_n^{\tn{\fsz{M}}}(S)\times K_1^{\tn{\fsz{M}}}(S)@>>>K_{n+1}^{\tn{\fsz{M}}}(S).
\end{CD}\]
This implies that a generator of $F_{n+1}(R^*)$ given by inserting an element of $R^*$ between the $(j-1)$th and $j$th entries of a generator of $F_n(R^*)$ maps to the identity in $K_{n+1}^{\tn{\fsz{M}}}(S)$ under $q_{n+1}$ if the original $n$-tuple maps to the identity in $K_{n}^{\tn{\fsz{M}}}(S)$ under $q_{n}$.\footnotemark\footnotetext{The converse is obviously false; for example, inserting $1$ into any generator of $F_n(R^*)$ produces an element of $F_{n+1}\big(R^*,(1+I)^*\big)$ by the idempotent lemma.}
Turning to $\gamma$, it is necessary to verify that the image $\omega\wedge(dr/r)+d\Omega_{R,I} ^{n-1}$ of the pair $\big(\omega+d\Omega_{R,I} ^{n-2},r\big)$ does not depend on the choice of $\omega$; i.e., that adding an exact differential to $\omega$ does not alter the image. By lemma \hyperref[lemrelativekahlergenerators]{\ref{lemrelativekahlergenerators}}, this reduces to showing that $d\bar{r}\wedge(dr/r)$ belongs to $d\Omega_{R,I} ^{n-1}$ for any exact differential of the form $d\bar{r}=dr_0\wedge...\wedge dr_{n-2}$ with at least one $r_l$ belonging to $I$. But $d\bar{r}\wedge(dr/r)$ is, up to sign, $d\big(r_l\,dr_0\wedge...\wedge\widehat{dr_l}\wedge...\wedge dr_{n-2}\wedge(dr/r)\big)$, since the form inside the parentheses is $r_l$ times a wedge of closed $1$-forms; its coefficient $r_l$ belongs to $I$, so this form belongs to $\Omega_{R,I} ^{n-1}$, and its differential belongs to $d\Omega_{R,I} ^{n-1}$, as required.
A few other properties of the maps $A_j$ are noteworthy. First, they respect the group structure of $F_n \big(R^*,(1+I)^*\big)$, but do not respect the group structure of $R^*$, since the target $F_{n+1}\big(R^*,(1+I)^*\big)$ has no relations except commutativity. However, the composite maps $q_{n+1}\circ A_j$ respect both group structures, due to the multiplicative relations in $K_{n+1} ^{\tn{\fsz{M}}}(R,I)$. Second, each map $A_j$ is injective. Indeed, an element of $F_{n+1}\big(R^*,(1+I)^*\big)$ can belong to the image of $A_j$ only if the $j$th entries of its factors coincide, in which case its inverse image in $F_{n}\big(R^*,(1+I)^*\big)\times R^*$, if it exists, is uniquely defined by extracting the common $j$th entry. The inverse maps $A_j^{-1}$ are therefore well-defined on the images $\tn{Im}(A_j)$. It is important to note that the maps $A_j^{-1}$ are maps on {\it products of $(n+1)$-tuples and their inverses}, rather than merely maps on $(n+1)$-tuples. For this reason, the image of a single $(n+1)$-tuple $(\bar{r})$ under $A_j^{-1}$ is written as $A_j^{-1}\big((\bar{r})\big)$, rather than $A_j^{-1}(\bar{r})$.\\
\begin{defi}\label{defiPhinj}For an $(n+1)$-tuple $(\bar{r})=(r_0,...,r_{n})$ belonging to $\tn{Im}(A_j)\subset F_{n+1}\big(R^*,(1+I)^*\big)$, define
\[\Phi_{n+1,j}\big((\bar{r})\big):= \gamma\circ\beta\circ {A_j ^{-1}}\big((\bar{r})\big),\]
and extend $\Phi_{n+1,j}$ to products of $(n+1)$-tuples sharing the same $j$th entry by sending products in $F_{n+1}\big(R^*,(1+I)^*\big)$ to sums in $\Omega_{R,I}^{n}/d\Omega_{R,I}^{n-1}$.
\end{defi}
The following ``patching lemma" enables the definition of the ``global map" $\Phi_{n+1}$ in definition \hyperref[defipatching]{\ref{defipatching}} below:\\
\begin{lem}\label{lempatchingphi} $\Phi_{n+1,j}=(-1)^{k-j}\Phi_{n+1,k}$ on the intersection $\tn{Im}(A_j)\cap \tn{Im}(A_k)\subset F_{n+1}\big(R^*,(1+I)^*\big)$ for any $0\le j<k\le n$.
\end{lem}
\begin{proof} Since both $\Phi_{n+1,j}$ and $\Phi_{n+1,k}$ send products in $\tn{Im}(A_j)\cap \tn{Im}(A_k)$ to sums in $\Omega_{R,I}^{n}/d\Omega_{R,I}^{n-1}$, it suffices to prove the statement of the lemma for a single generic $(n+1)$-tuple $(\bar{r})=(r_0,...,r_{n})$ in $\tn{Im}(A_j)\cap \tn{Im}(A_k)$. Such an $(n+1)$-tuple satisfies the condition that the $n$-tuples $(r_0,...,\hat{r_j},...,r_{n})$ and $(r_0,...,\hat{r_k},...,r_{n})$, given by deleting its $j$th and $k$th entries, respectively, belong to $F_{n}\big(R^*,(1+I)^*\big)$. Define an $(n-1)$-tuple $(\bar{r}')$ by deleting {\it both} entries $r_j$ and $r_k$ from $(\bar{r})$, as follows:\footnotemark\footnotetext{The purpose of isolating and renaming the $(n-1)$-tuple $(\bar{r}')$, even though all its entries come from $(\bar{r})$, is to avoid numbering issues later in the proof.}
\[(\bar{r}'):=(r_0,...,\hat{r_j},...,\hat{r_k},...,r_{n})\in F_{n-1}(R^*).\]
By anticommutativity, the Steinberg symbols $\{\bar{r}',r_j\}$ and $\{\bar{r}',r_k\}$, defined by appending the deleted entries $r_j$ and $r_k$ onto $\bar{r}'$, respectively, belong to $K_n ^{\tn{\fsz{M}}}(R,I)$. Using the splitting $R=S\oplus I$, as in lemma \hyperref[lemrelativegenerators]{\ref{lemrelativegenerators}} above, each entry $r_l$ of $(\bar{r})$ may be factored into a product of the form $r_l=s_l(1+i_l)$, where $s_l$ belongs to $S^*$, and $i_l$ belongs to $I$. There then exist factorizations in Milnor $K$-theory:
\begin{equation}\label{equmilnorfactorizations}\begin{array}{lcl}\{\bar{r}',r_j\}&=&\{\bar{s}',1+i_j\}\displaystyle\prod_{l=0} ^{n-2}\{\bar{r_l}',r_j\} \hspace{.5cm}\tn{and}\\
\{\bar{r}',r_k\}&=&\{\bar{s}',1+i_k\}\displaystyle\prod_{l=0} ^{n-2}\{\bar{r_l}',r_k\},\end{array}\end{equation}
where $(\bar{s}'):=(s_0,...,\hat{s_j},...,\hat{s_k},...,s_{n})$, and where $(\bar{r_l}')=(s_0',...s_{l-1}',1+i_l',r_{l+1}',...,r_{n-2}')$ has its $l$th entry in $(1+I)^*$.\footnotemark\footnotetext{{\it A priori,} the first product in equation \hyperref[equmilnorfactorizations]{\ref{equmilnorfactorizations}} has an additional factor $\{\bar{s}',s_j\}$, and the second product has an additional factor $\{\bar{s}',s_k\}$, but both factors are trivial since $(\bar{r})\in F_{n+1}\big(R^*,(1+I)^*\big)$. For example, for $n=3$, the Steinberg symbol $\{r_1,r_2,r_3\}$ factors as $\{s_1,s_2,s_3\}\{s_1,s_2,1+i_3\}\{s_1,1+i_2,r_3\}\{1+i_1,r_2,r_3\}$, where the first factor is trivial.} By anticommutativity and the definition of $\Phi_{n+1,j}$, it follows that
\begin{equation}\label{equPhij}\begin{array}{lcl}\Phi_{n+1,j}\big((\bar{r})\big)&=&\displaystyle(-1)^{n-k}\phi_{n}\big(\{\bar{r}',r_k\}\big)\wedge\frac{dr_j}{r_j}\\&=&\displaystyle(-1)^{k+1}\log(1+i_k)\frac{d\bar{s}'}{\Pi_{\bar{s}'}}\wedge\frac{dr_j}{r_j}+(-1)^{n-k}\omega\wedge\frac{dr_k}{r_k}\wedge\frac{dr_j}{r_j},\end{array}\end{equation}
where
\[\omega:=\sum_{l=0} ^{n-2}(-1)^{l}\log(1+i_l')\frac{ds_0'}{s_0'}\wedge...\wedge\frac{ds_{l-1}'}{s_{l-1}'}\wedge
\frac{dr_{l+1}'}{r_{l+1}'}\wedge...\wedge\frac{dr_{n-2}'}{r_{n-2}'},\]
and where $\Pi_{\bar{s}'}$ is the product of the entries in $\bar{s}'$.
\comment{
\color{red}
For the first equality in equation \hyperref[equPhij]{\ref{equPhij}}, the full computation is:
\[\Phi_{n+1,j}\big((\bar{r})\big)=\gamma\circ\beta\circ A_j^{-1}\big((\bar{r})\big)=\gamma\circ\beta\big((r_0,...,r_{j-1},r_{j+1},...,r_{n}),r_j\big)\]
\[=\gamma\Big(\Phi_{n-1}\big((r_0,...,r_{j-1},r_{j+1},...,r_{n})\big),r_j\Big)\]
\[=\Phi_{n}\big((r_0,...,r_{j-1},r_{j+1},...,r_{n})\big)\wedge\frac{dr_j}{r_j}\]
\[=\phi_{n}\big(\{r_0,...,r_{j-1},r_{j+1},...,r_{n}\}\big)\wedge\frac{dr_j}{r_j}\]
\[=\phi_{n}\big(\{r_0,...,r_{j-1},r_{j+1},...,r_k,r_{k+1},...,r_{n}\}\big)\wedge\frac{dr_j}{r_j}\]
\[=(-1)^{n-k}\phi_{n-1}\big(\{\bar{r}',r_k\}\big)\wedge\frac{dr_j}{r_j},\]
where the factor of $(-1)^{n-k}$ comes from moving the element $r_k$ across $n-k$ elements to its right. For the second equality, the full computation is as follows. First, by the factorization in equation \hyperref[equmilnorfactorizations]{\ref{equmilnorfactorizations}},
\[(-1)^{n-k}\phi_{n}\big(\{\bar{r}',r_k\}\big)\wedge\frac{dr_j}{r_j}=(-1)^{n-k}\phi_{n}\Big(\{\bar{s}',1+i_j\}\prod_{l=0} ^{n-2}\{\bar{r_l}',r_j\} \Big)\wedge\frac{dr_j}{r_j}.\]
\[=(-1)^{n-k}\phi_{n}\big(\{\bar{s}',1+i_j\}\big)\wedge\frac{dr_j}{r_j}+(-1)^{n-k}\sum_{l=0} ^{n-2}\phi_{n}\big(\{\bar{r_l}',r_j\} \big)\wedge\frac{dr_j}{r_j}.\]
Moving the element $1+i_j$ to the left across the $n-1$ entries of $\bar{s}'$ in the first term gives a total exponent of $(n-k)+(n-1)=2n-k-1=k+1$ factors of $-1$ in the first term, which is the same as $k+1$ factors modulo $2$. Applying $\phi_{n}$ then gives the first term in the second equality of equation \hyperref[equPhij]{\ref{equPhij}}. Similarly, moving the element $1+i_l$ across $l$ entries $s_0',...s_{l-1}'$ of $\bar{r}_l'$ in the $l$th term of the sum gives a total exponent of $(n-k)+(l)$ factors of $-1$ in the $l$th term. After applying $\phi_{n}$, the original $(n-k)$ factors of $-1$ are left factored out equation \hyperref[equPhij]{\ref{equPhij}}, while the remaining $l$ factors for each term are absorbed into the definition of $\omega$.
\color{black}
}
Similarly,
\begin{equation}\label{equPhik}\begin{array}{lcl}\Phi_{n+1,k}\big((\bar{r})\big)&=&\displaystyle(-1)^{n-j-1}\phi_{n}(\{\bar{r}',r_j\})\wedge\frac{dr_k}{r_k},\\&=&\displaystyle(-1)^{j}\log(1+i_j)\frac{d\bar{s}'}{\Pi_{\bar{s}'}}\wedge\frac{dr_k}{r_k}+(-1)^{n-j-1}\omega\wedge\frac{dr_j}{r_j}\wedge\frac{dr_k}{r_k}.\end{array}\end{equation}
\comment{
\color{red}
The difference between equations \hyperref[equPhij]{\ref{equPhij}} and \hyperref[equPhik]{\ref{equPhik}} is that $j<k$. The first equality in equation \hyperref[equPhik]{\ref{equPhik}} is analogous to the corresponding computation for equation \hyperref[equPhik]{\ref{equPhik}} up the step
\[\Phi_{n+1,k}\big((\bar{r})\big)=...=\phi_{n}\big(\{r_0,...,r_{k-1},r_{k+1},...,r_{n+1}\}\big)\wedge\frac{dr_k}{r_k}.\]
The subsequent steps are
\[...=\phi_{n}\big(\{r_0,...,r_j,...,r_{k-1},r_{k+1},...,r_{n+1}\}\big)\wedge\frac{dr_k}{r_k}\]
\[=(-1)^{n-j-1}\phi_{n}\big(\{\bar{r}',r_j\}\big)\wedge\frac{dr_k}{r_k},\]
since there are only $n-j-1$, rather than $n-j$, entries to move the element $r_j$ across, due to the extraction of $r_k$.
For the second equality,
\[(-1)^{n-j-1}\phi_{n}\big(\{\bar{r}',r_j\}\big)\wedge\frac{dr_k}{r_k}=(-1)^{n-j-1}\phi_{n}\Big(\{\bar{s}',1+i_k\}\prod_{l=0} ^{n-2}\{\bar{r_l}',r_k\} \Big)\wedge\frac{dr_k}{r_k}.\]
\[=(-1)^{n-j-1}\phi_{n}\big(\{\bar{s}',1+i_k\}\big)\wedge\frac{dr_k}{r_k}+(-1)^{n-j-1}\sum_{l=0} ^{n-2}\phi_{n}\big(\{\bar{r_l}',r_k\} \big)\wedge\frac{dr_k}{r_k}.\]
Moving the element $1+i_k$ to the left across the $n-1$ entries of $\bar{s}'$ in the first term gives a total exponent of $(n-j-1)+(n-1)=-j$ factors of $-1$ in the first term, which is the same as $j$ factors modulo $2$. Applying $\phi_{n}$ then gives the first term in equation \hyperref[equPhik]{\ref{equPhik}}. Similarly, moving the element $1+i_l$ across $l$ entries $s_0',...s_{l-1}'$ of $\bar{r}_l'$ in the $l$th term of the sum gives a total exponent of $(n-j-1)+l$ factors of $-1$ in the $l$th term. After applying $\phi_{n}$, the original $n-j-1$ factors of $-1$ are left factored out equation \hyperref[equPhik]{\ref{equPhik}}, while the remaining $l$ factors for each term are absorbed into the definition of $\omega$.
\color{black}
}
The terms involving $\omega$ in equations \hyperref[equPhij]{\ref{equPhij}} and \hyperref[equPhik]{\ref{equPhik}} differ by the desired factor of $(-1)^{k-j}$ (note that $dr_j/r_j$ and $dr_k/r_k$ appear in the opposite order in the two equations). It remains to show that the differential
\[\sigma:=\log(1+i_k)\frac{d\bar{s}'}{\Pi_{\bar{s}'}}\wedge\frac{dr_j}{r_j}+\log(1+i_j)\frac{d\bar{s}'}{\Pi_{\bar{s}'}}\wedge\frac{dr_k}{r_k},\]
is exact.
\comment{
\color{red}
The point here is that multiplying the second differential by $(-1)^{k-j}$ and subtracting the two, as per the statement of the lemma, gives the differential
\[(-1)^{k+1}\log(1+i_k)\frac{d\bar{s}'}{\Pi_{\bar{s}'}}\wedge\frac{dr_j}{r_j}-(-1)^{k-j}(-1)^{j}\log(1+i_j)\frac{d\bar{s}'}{\Pi_{\bar{s}'}}\wedge\frac{dr_k}{r_k},\]
which simplifies to
\[(-1)^{k+1}\Bigg(\log(1+i_k)\frac{d\bar{s}'}{\Pi_{\bar{s}'}}\wedge\frac{dr_j}{r_j}+\log(1+i_j)\frac{d\bar{s}'}{\Pi_{\bar{s}'}}\wedge\frac{dr_k}{r_k}\Bigg),\]
and it is the differential in parentheses that we want to be exact.
\color{black}
}
Since $\{\bar{r}',r_j\}$ and $\{\bar{r}',r_k\}$ belong to $K_n ^{\tn{\fsz{M}}}(R,I)$, their projections $\{\bar{s}',s_j\}$ and $\{\bar{s}',s_k\}$ in $K_n ^{\tn{\fsz{M}}}(S)$ are trivial. Applying the canonical $d\log$ map from lemma \hyperref[lemdlog]{\ref{lemdlog}} to these projections yields
\begin{equation}\label{equdlog}\frac{d\bar{s}'}{\Pi_{\bar{s}'}}\wedge\frac{ds_j}{s_j}=\frac{d\bar{s}'}{\Pi_{\bar{s}'}}\wedge\frac{ds_k}{s_k}=0.\end{equation}
Hence, the differentials
\begin{equation}\begin{array}{lcl}\tau_1&:=&\displaystyle\log(1+i_k)\frac{d\bar{s}'}{\Pi_{\bar{s}'}}\wedge\frac{ds_j}{s_j}\hspace*{.5cm}\tn{and}\\\tau_2&:=&\displaystyle\log(1+i_j)\frac{d\bar{s}'}{\Pi_{\bar{s}'}}\wedge\frac{ds_k}{s_k},\end{array}\end{equation}
given by multiplying the differentials in equation \hyperref[equdlog]{\ref{equdlog}} by the appropriate logarithms, vanish. Therefore, the differential
\begin{equation}\begin{array}{lcl}\sigma&=&\sigma-\tau_1-\tau_2\\
&=&\displaystyle\log(1+i_k)\frac{d\bar{s}'}{\Pi_{\bar{s}'}}\wedge d\log(1+i_j)+\log(1+i_j)\frac{d\bar{s}'}{\Pi_{\bar{s}'}}\wedge d\log(1+i_k)\\&=&\displaystyle(-1)^{n-1}d\Big(\log(1+i_j)\log(1+i_k)\frac{d\bar{s}'}{\Pi_{\bar{s}'}}\Big),\end{array}\end{equation}
is exact.
\end{proof}
A ``global map" $\Phi_{n+1}$ from the subgroup of $F_{n+1}\big(R^*,(1+I)^*\big)$ generated by the union $\bigcup_{j=0}^{n}\tn{Im}(A_j)$ to $\Omega_{R,I}^{n}/d\Omega_{R,I}^{n-1}$ may now be defined by patching the maps $\Phi_{n+1,j}$ together, using lemma \hyperref[lempatchingphi]{\ref{lempatchingphi}}. This procedure is analogous to the familiar procedure of patching together maps defined locally on open subsets of manifolds or schemes, to obtain a global map.\\
\begin{defi}\label{defipatching} For an $(n+1)$-tuple $(\bar{r})$ belonging to the union $\bigcup_{j=0}^{n}\tn{Im}(A_j)$, define
\[\Phi_{n+1}\big((\bar{r})\big):=(-1)^{n+1-j}\Phi_{n+1,j}\big((\bar{r})\big),\]
whenever the right-hand-side is defined, and extend $\Phi_{n+1}$ to the subgroup of $F_{n+1}\big(R^*,(1+I)^*\big)$ generated by $\bigcup_{j=0}^{n}\tn{Im}(A_j)$ by taking inverses to negatives and multiplication to addition in $\Omega_{R,I}^{n}/d\Omega_{R,I}^{n-1}$.
\end{defi}
The map $\Phi_{n+1}$ is a well-defined group homomorphism from the subgroup of $F_{n+1}\big(R^*,(1+I)^*\big)$ generated by $\bigcup_{j=0}^{n} \tn{Im}(A_j)$ to $\Omega_{R,I} ^n/d\Omega_{R,I} ^{n-1}$, by lemma \hyperref[lempatchingphi]{\ref{lempatchingphi}}. The choice of notation for $\Phi_{n+1}$ is a deliberate reflection of the fact that this map plays the same role as the maps $\Phi_{m+1}$ for $1\le m\le n-1$, introduced in definition \hyperref[defprelimmapsphi]{\ref{defprelimmapsphi}} above. However, whereas the maps $\Phi_{m+1}$ are defined in terms of the maps $\phi_{m+1}$, whose existence was assumed by induction, the situation here is the reverse; $\Phi_{n+1}$ is used to define $\phi_{n+1}$ below. The image of $\bigcup_{j=0}^{n} \tn{Im}(A_j)$ under the quotient homomorphism $q_{n+1}$ generates the relative Milnor $K$-group $K_{n+1} ^{\tn{\fsz{M}}}(R,I)$, since any $(n+1)$-tuple $(\bar{r})=(r_0,...,r_{n})$ in $F_{n+1}(R^*)$ with at least one entry in $(1+I)^*$ belongs to $\bigcup_{j=0}^{n} \tn{Im}(A_j)$, and since the images of these elements under the quotient map generate $K_{n+1} ^{\tn{\fsz{M}}}(R,I)$.
It is now possible to define the desired map $\phi_{n+1}: K_{n+1} ^{\tn{\fsz{M}}}(R,I)\rightarrow \Omega_{R,I}^n/d\Omega_{R,I} ^{n-1}.$\\
\begin{defi}\label{defiphin} Let $\prod_{l\in L}\{\bar{r_l}\}$ be an element of $K_{n+1} ^{\tn{\fsz{M}}}(R,I)$, where each factor $\{\bar{r_l}\}$ is the image under $q_{n+1}$ of an $(n+1)$-tuple belonging to the union $\bigcup_{j=0}^{n} \tn{Im}(A_j)$, and where $L$ is a finite index set. For each $l\in L$, let $(\bar{r}_l)$ be the element of $F_{n+1}\big(R^*,(1+I)^*\big)$ corresponding to the Steinberg symbol $\{\bar{r_l}\}$. Define
\[\phi_{n+1}\Big(\prod_{l\in L}\{\bar{r_l}\}\Big):=\Phi_{n+1}\Big(\prod_{l\in L}(\bar{r_l})\Big).\]
\end{defi}
The final step regarding $\phi_{n+1}$ is to show that it is a well-defined, surjective group homomorphism.\\
\begin{lem}\label{lemelementaryphi} The map $\phi_{n+1}$ is a well-defined, surjective group homomorphism $K_{n+1}^{\tn{\fsz{M}}}(R,I)\rightarrow\Omega_{R,I}^{n}/d\Omega_{R,I}^{n-1}$.
\end{lem}
\begin{proof} To show that $\phi_{n+1}$ is well-defined, it suffices to show that $\phi_{n+1}$ maps each multiplicative relation $\{\bar{r},rr',\bar{r}'\}\{\bar{r},r,\bar{r}'\}^{-1}\{\bar{r},r',\bar{r}'\}^{-1}$ and each Steinberg relation $\{\bar{r},r,1-r,\bar{r}'\}$ to zero in $\Omega_{R,I}^{n}/d\Omega_{R,I}^{n-1}$, where each of the Steinberg symbols appearing in these relations is assumed to have at least one entry in $(1+I)^*$. By definition \hyperref[defiphin]{\ref{defiphin}}, this is equivalent to showing that $\Phi_{n+1}$ maps the corresponding elements $(\bar{r},rr',\bar{r}')(\bar{r},r,\bar{r}')^{-1}(\bar{r},r',\bar{r}')^{-1}$ and $(\bar{r},r,1-r,\bar{r}')$ in $F_{n+1}\big(R^*,(1+I)^*\big)$ to zero. By lemma \hyperref[lempatchingphi]{\ref{lempatchingphi}}, it suffices to show that these elements map to zero under any map $\Phi_{n+1,j}$ whose domain contains them. By definition \hyperref[defiPhinj]{\ref{defiPhinj}}, these elements belong to the domain of $\Phi_{n+1,j}$ if and only if the elements of $F_n(R^*)$ given by deleting their $j$th entries belong to the subgroup $F_n\big(R^*,(1+I)^*\big)$. By definition \hyperref[defprelimmapsphi]{\ref{defprelimmapsphi}}, this is true if and only if the corresponding images in Milnor $K$-theory under the quotient map $q_n$ belong to $K_{n}^{\tn{\fsz{M}}}(R,I)$. But choosing $j$ to be the position of any of the barred entries produces the identity element $1$ in $K_{n}^{\tn{\fsz{M}}}(R,I)$, since the resulting products of symbols are automatically relations. Furthermore, the corresponding map $\Phi_{n+1,j}$ sends the required elements to zero by definition, since $\phi_{n}(1)=0$.
The map $\phi_{n+1}$ is a group homomorphism by construction, since $\Phi_{n+1}$ is defined to respect the group structure in definition \hyperref[defipatching]{\ref{defipatching}}. To prove that $\phi_{n+1}$ is surjective, it suffices to show that any element of the form
$rd\bar{r}\wedge d\bar{r}'$ in $\Omega_{R,I}^{n}/d\Omega_{R,I}^{n-1}$ belongs to $\tn{Im}(\phi_{n+1})$, where $r,\bar{r}\in I$ and $\bar{r}'\in R^*$. But for such an element,
\[rd\bar{r}\wedge d\bar{r}'=\phi_{n+1}\big(\{e^{r\Pi'},e^{\bar{r}},\bar{r}'\}\big),\]
where as usual $\Pi'$ is the product of the entries of $\bar{r}'$.
\end{proof}
\begin{example} \tn{The following example illustrates the reasoning involved in the first part of the proof of lemma \hyperref[lemelementaryphi]{\ref{lemelementaryphi}}. Let $n=2$, and consider the multiplicative relation
\[\{r_0,r_1r_1',r_2\}\{r_0,r_1,r_2\}^{-1}\{r_0,r_1',r_2\}^{-1}\hspace*{.5cm}\tn{in}\hspace*{.5cm}K_{3}^{\tn{\fsz{M}}}(R,I).\]
Choose $j=0$; $j=2$ would work just as well. One then needs to show that the element
\[(r_0,r_1r_1',r_2)(r_0,r_1,r_2)^{-1}(r_0,r_1',r_2)^{-1},\]
belongs to the domain of the map $\Phi_{3,0}$, and that
\[\Phi_{3,0}\big((r_0,r_1r_1',r_2)(r_0,r_1,r_2)^{-1}(r_0,r_1',r_2)^{-1}\big)=0\hspace*{.5cm}\tn{in}\hspace*{.5cm}\Omega_{R,I}^{2}/d\Omega_{R,I}^{1}.\]
This relation, incidentally, is easy to compute directly by treating the definition of $\phi_3$ as a {\it fait accompli} and using the Leibniz rule, but this is irrelevant at the moment. The condition on the domain follows from the obvious fact that
\[(r_0,r_1r_1',r_2)(r_0,r_1,r_2)^{-1}(r_0,r_1',r_2)^{-1}=A_0\big((r_1r_1',r_2)(r_1,r_2)^{-1}(r_1',r_2)^{-1}, r_0\big).\]
By the definition of $\Phi_{3,0}$, it follows that:
\[\Phi_{3,0}\big((r_0,r_1r_1',r_2)(r_0,r_1,r_2)^{-1}(r_0,r_1',r_2)^{-1}\big)=\gamma\circ\beta\circ A_0^{-1}\big((r_0,r_1r_1',r_2)(r_0,r_1,r_2)^{-1}(r_0,r_1',r_2)^{-1}\big)\]
\[=\phi_2\big(\{r_1r_1',r_2\}\{r_1,r_2\}^{-1}\{r_1',r_2\}^{-1}\big)\wedge\frac{dr_0}{r_0}\]
\[=\phi_2\big(1\big)\wedge\frac{dr_0}{r_0}=0.\]}
\end{example}
\subsection{Definition and Analysis of the Map $\displaystyle\psi_{n+1}:\Omega_{R,I} ^n/d\Omega_{R,I} ^{n-1}\rightarrow K_{n+1} ^{\tn{\fsz{M}}}(R,I)$}\label{subsectionanalysispsi}
As in the case of $\phi_{n+1}$, I will define $\psi_{n+1}$ in several steps. Recall that the induction hypothesis assumes the existence of isomorphisms
\[\psi_{m+1}:\frac{\Omega_{R,I} ^m}{d\Omega_{R,I} ^{m-1}}\rightarrow K_{m+1} ^{\tn{\fsz{M}}}(R,I)\]
\[rd\bar{r}\wedge d\bar{r}'\mapsto\{e^{r\Pi'},e^{\bar{r}},\bar{r}'\},\]
for $r,\bar{r}\in I$ and $\bar{r}'\in R^*$, for all $1\le m<n$, appearing in equation \hyperref[equinductionpsi]{\ref{equinductionpsi}} above.\\
\begin{defi}\label{defprelimmapspsi} Let $m$ be a positive integer.
\begin{enumerate}
\item Let $F_{m+1}(R)$ be the free abelian group generated by ordered $(m+1)$-tuples $(r_0,...,r_m)$ of elements of $R$.
\item Let $Q_{m+1}$ be the quotient homomorphism
\[Q_{m+1}:F_{m+1}(R)\rightarrow\frac{\Omega_R ^{m}}{d\Omega_R ^{m-1}}\]
\begin{equation}\label{equQuotient}(r_0,...,r_{m})\mapsto r_0dr_1\wedge...\wedge dr_{m}.\end{equation}
\item Let $F_{m+1}(R,I)$ be the preimage in $F_{m+1}(R)$ of the group $\Omega_{R,I} ^{m}/d\Omega_{R,I} ^{m-1}\subset\Omega_R ^{m}/d\Omega_R ^{m-1}$.
\item For $1\le m< n$, let $\Psi_{m+1}$ be the composition
\begin{equation}\label{equPsimplusone}\Psi_{m+1}:=\psi_{m+1}\circ Q_{m+1}:F_{m+1}(R,I)\rightarrow K_{m+1}^{\tn{\fsz{M}}}(R,I).\end{equation}
\end{enumerate}
\end{defi}
An $(m+1)$-tuple $(r_0,...,r_m)$ satisfying the condition that at least one of its entries belongs to $I$ is automatically an element of $F_{m+1}(R,I)$. If $1\le m<n$, then specifying such an element $r_j$ allows the image of $(r_0,...,r_m)$ under $\Psi_{m+1}$ to be expressed explicitly in terms of Steinberg symbols. However, this is more complicated than the analogous case of $\Phi_{m+1}$, discussed in section \hyperref[subsectionanalysisphi]{\ref{subsectionanalysisphi}} above. This is because the remaining elements $r_k$ for $k\ne j$ are generally not units, while equation \hyperref[equinductionpsi]{\ref{equinductionpsi}} specifies the images $\psi_{m+1}(rd\bar{r}\wedge d\bar{r}')$ only for $r,\bar{r}\in I$ and $\bar{r}'\in R^*$. Hence, it is generally necessary to use lemma \hyperref[lemstability]{\ref{lemstability}} to write entries of $(r_0,...,r_m)$ which are neither units nor elements of $I$ as sums of units, then express the differential $Q_{m+1}\big((r_0,...,r_m)\big)=r_0dr_1\wedge...\wedge dr_m$ as a sum whose individual terms involve only units and elements of $I$. Then $\psi_{m+1}$ may be applied to obtain $\Psi_{m+1}\big((r_0,...,r_m)\big)$.\\
\begin{example} \tn{Suppose that $m=2$, and consider the triple $(r_0,r_1,r_2)$, where $r_0$ belongs to $I$, $r_1$ belongs to $R^*$, and $r_2$ belongs to neither. Writing $r_2$ as a sum of two units $r_2'+r_2''$ permits the computation
\[\begin{array}{lcl}\vspace*{.2cm}\Psi_{3}\big((r_0,r_1,r_2)\big)&=&\psi_3(r_0dr_1\wedge dr_2)\\
&=&\vspace*{.2cm}\psi_3\big(r_0dr_1\wedge d(r_2'+r_2'')\big)\\
&=&\vspace*{.2cm}\psi_3(r_0dr_1\wedge dr_2'+r_0dr_1\wedge dr_2'')\\
&=&\vspace*{.2cm}\psi_3(r_0dr_1\wedge dr_2')\psi_3(r_0dr_1\wedge dr_2'')\\
&=&\{e^{r_0r_1r_2'},r_1,r_2'\}\{e^{r_0r_1r_2''},r_1,r_2''\}.\end{array}\]}
\end{example}
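The decomposition step in this example, writing a non-unit entry as a sum of two units, can be made concrete in the model ring $R=\mathbb{Q}[t]/(t^N)$, where an element is a unit exactly when its constant term is nonzero. The model ring and the particular two-unit decomposition below are assumptions of this sketch; in the text, the existence of such decompositions is guaranteed in general by $5$-fold stability:

```python
from fractions import Fraction

N = 4  # model ring R = Q[t]/(t^N), a split nilpotent extension of S = Q

def is_unit(r):
    # in Q[t]/(t^N), r is a unit iff its constant term is nonzero
    return r[0] != 0

def as_sum_of_units(r):
    # write r = u + v with u, v units: subtract a nonzero rational
    # constant c chosen so that r - c still has nonzero constant term
    for c in (1, 2):  # at most one of r - 1, r - 2 kills the constant term
        u = [r[0] - c] + list(r[1:])
        v = [Fraction(c)] + [Fraction(0)] * (N - 1)
        if is_unit(u):
            return u, v
    raise AssertionError("unreachable over Q")
```

The decomposition applies, in particular, to elements of $I$ itself, and the linearity of $d$ then splits $r_0dr_1\wedge dr_2$ into terms to which $\psi_3$ applies, exactly as in the example above.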
It will be useful to consider diagrams of the form
\[\begin{CD}
\displaystyle\frac{\Omega_{R,I} ^{n}}{d\Omega_{R,I} ^{n-1}}
@<Q_{n+1}<<F_{n+1}(R,I)
@<\Gamma_j<<F_n(R,I)\times R
@<\sigma<<F_n(R,I)\times (R^*)^2
@>\varepsilon>>K_{n+1} ^{\tn{\fsz{M}}}(R,I),
\end{CD}\]
where the maps $\Gamma_j$, $\sigma$, and $\varepsilon$ are defined as follows:\\
\begin{defi}\label{defiGammajsigmaepsilon} Let $j$ be a nonnegative integer less than or equal to $n$.
\begin{enumerate}
\item Let $\Gamma_j$ be the map converting each $n$-tuple $(\bar{r})$ in $F_n(R,I)$ to an $(n+1)$-tuple in $F_{n+1}(R,I)$, by inserting an element of $R$ between the $(j-1)$th and $j$th entries of $(\bar{r})$. Extend $\Gamma_j$ to inverses and products of $n$-tuples to produce products of $(n+1)$-tuples and their inverses sharing the same $j$th entries. $\Gamma_j$ plays a role directly analogous to the map $A_j$ defined in definition \hyperref[defiAjbetagamma]{\ref{defiAjbetagamma}} above.
\item Let $\sigma$ be the map
\begin{equation}\label{equsigma}\big((\bar{r}),(u,v)\big)\mapsto\big((\bar{r}),u+v\big).\end{equation}
\item Let $\varepsilon$ be the map
\begin{equation}\label{equvarepsilon}\big((r,\bar{r}),(u,v)\big)\mapsto\big(\Psi_{n}(ur,\bar{r})\times \{u\}\big)\big(\Psi_{n}(vr,\bar{r})\times \{v\}\big).\end{equation}
\end{enumerate}
\end{defi}
Recall that $\times$ denotes multiplication in the Milnor $K$-ring $K_{*}^{\tn{\fsz{M}}}(R)$. Since the multiplicative group $R^*$ is the first Milnor $K$-group $K_{1} ^{\tn{\fsz{M}}}(R)$, which is the first graded piece of the Milnor $K$-ring $K_{*} ^{\tn{\fsz{M}}}(R)$, the elements $u$ and $v$ may be viewed either as elements of $R^*$ or as elements of $K_{*} ^{\tn{\fsz{M}}}(R)$. Writing $u$ and $v$ as Steinberg symbols $\{u\}$ and $\{v\}$ on the right-hand side of equation \hyperref[equvarepsilon]{\ref{equvarepsilon}} emphasizes the latter view, since these elements are to be multiplied on the left in $K_{*} ^{\tn{\fsz{M}}}(R)$ by $\Psi_{n}(ur,\bar{r})$ and $\Psi_{n}(vr,\bar{r})$.
It is straightforward to verify that these maps are well-defined, and that the maps $\Gamma_j$ are injective.
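Note that $\sigma$ is far from injective: by equation \hyperref[equsigma]{\ref{equsigma}},
\[\sigma\big((\bar{r}),(u,v)\big)=\sigma\big((\bar{r}),(U,V)\big)\hspace*{.5cm}\tn{whenever}\hspace*{.5cm}u+v=U+V.\]
Lemma \hyperref[lemuvUV]{\ref{lemuvUV}} below shows that $\varepsilon$ takes the same value on all such preimages, which is what makes the composition $\varepsilon\circ\sigma^{-1}$ meaningful.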
\comment{
\color{red}
Full argument: $Q_{n+1}$ and $\sigma$ are obviously well-defined. For $\Gamma_j$, the definition certainly produces an element of $F_{n+1}(R)$, and it remains to show that this element belongs to the subgroup $F_{n+1}(R,I)$. To see this, it suffices to show that the image $Q_{n+1}\circ\Gamma_j\big((r_0,...,r_{n-1}),r\big)$ of a single pair $\big((r_0,...,r_{n-1}),r\big)$ in $F_n(R,I)\times R$ belongs to $\Omega_{R,I} ^{n}/d\Omega_{R,I} ^{n-1}$. If $j\ne 0$, then the image is
\[Q_{n+1}\circ\Gamma_j\big((r_0,...,r_{n-1}),r\big)=r_0dr_1\wedge...\wedge dr_{j-1}\wedge dr\wedge dr_j\wedge...\wedge dr_{n-1}\]
\[=(-1)^{n-j}r_0dr_1\wedge...\wedge dr_{j-1}\wedge dr_j\wedge...\wedge dr_{n-1}\wedge dr=(-1)^{n-j}Q_n\big((r_0,...,r_{n-1})\big)\wedge dr.\]
The factor $Q_n\big((r_0,...,r_{n-1})\big)$ belongs to $\Omega_{R,I} ^{n-1}/d\Omega_{R,I} ^{n-2}$ by hypothesis, and therefore may be represented by a sum of differentials of degree $n-1$ involving elements of $I$. Hence, $Q_{n+1}\circ\Gamma_j$ may be represented by a sum of differentials of degree $n$ involving elements of $I$, which therefore maps to zero under the canonical projection $\Omega_{R} ^{n}\rightarrow \Omega_{S} ^{n}$, and is thus an element of $\Omega_{R,I} ^{n}$ by definition. A similar argument applies when $j=0$. The maps $\Gamma_j$ are injective for essentially the same reason that the maps $A_j$ in section \hyperref[subsectionanalysisphi]{\ref{subsectionanalysisphi}} are injective.
Turning to $\varepsilon$, it suffices to verify that the elements $\Psi_{n}(ur,\bar{r})\times\{u\}$ and $\Psi_{n}(vr,\bar{r})\times\{v\}$ belong to $K_{n+1} ^{\tn{\fsz{M}}}(R,I)$ if the element $(r,\bar{r})$ belongs to $F_n(R,I)$. The first element may be written as
\[\Psi_{n}(ur,\bar{r})\times\{u\}=\{ur,\bar{r}\}\times\{u\}=\{ur,\bar{r},u\}=\big(\{u,\bar{r}\}\{r,\bar{r}\}\big)\times\{u\}=\{u,\bar{r}\}\times\{u\}=\{u,\bar{r},u\}=1,\]
since $(r,\bar{r})\in F_n(R,I)$, where the last step makes repeated use of the anticommutative property of lemma \hyperref[lemrelationsstable]{\ref{lemrelationsstable}}. The argument for $\Psi_{n}(vr,\bar{r})\times\{v\}$ is identical.
\color{black}
}
Elements of $S$, and hence of $R$, may be decomposed into sums of units under appropriate stability and invertibility assumptions, as shown in lemma \hyperref[lemstability]{\ref{lemstability}} above. The following lemma facilitates the use of this result in lemma \hyperref[lempatchingpsi]{\ref{lempatchingpsi}} below. \\
\begin{lem}\label{lemuvUV}Let $u$, $v$, $U$, and $V$ belong to $R^*$, and suppose that $u+v=U+V$. Then for any $(\bar{r})\in F_n(R,I)$,
\begin{equation}\label{equuvUV}\varepsilon\big((\bar{r}),(u,v)\big)=\varepsilon\big((\bar{r}),(U,V)\big).\end{equation}
\end{lem}
\begin{proof}Writing $(\bar{r})=(r,\bar{r}')$ to distinguish the first element,
\[\begin{array}{lcl} \Psi_{n}(ur,\bar{r}')&=&\displaystyle\prod_l\psi_{n}(ur_ld\bar{r}_l\wedge d\bar{r}_l')\\
&=&\displaystyle\prod_l\{e^{ur_l\Pi_l'},e^{\bar{r}_l},\bar{r}_l'\},\end{array}\]
where $\sum_lr_ld\bar{r}_l\wedge d\bar{r}_l'$ is a decomposition of $rd\bar{r}'$ such that
$r_l,\bar{r}_l\in I$ and $\bar{r}_l'\in R^*$. Such a decomposition exists by lemma \hyperref[lemstability]{\ref{lemstability}} because $R$ is $2$-fold stable and $2$ is invertible in $R$. Similar formulas apply for $v$, $U$, and $V$. Thus
\[\begin{array}{lcl}\vspace*{.2cm}\varepsilon\big((\bar{r}),(u,v)\big)&=&\big(\Psi_{n}(ur,\bar{r}')\times \{u\}\big)\big(\Psi_{n}(vr,\bar{r}')\times \{v\}\big)\\
&=&\displaystyle\prod_{l}\{e^{ur_l\Pi_l'},e^{\bar{r}_l},\bar{r}_l',u\}\prod_{l}\{e^{vr_l\Pi_l'},e^{\bar{r}_l},\bar{r}_l',v\}\\
&=&\displaystyle\Bigg(\prod_l\{e^{\bar{r}_l},\bar{r}_l'\}\times\big(\{e^{ur_l\Pi_l'},u\}\{e^{vr_l\Pi_l'},v\}\big)\Bigg)^{(-1)^{n-1}},\end{array}\]
where the exponent comes from moving the elements $e^{ur_l\Pi_l'}$ and $e^{vr_l\Pi_l'}$ to the right across the $n-1$ elements $e^{\bar{r}_l}$ and $\bar{r}_l'$ before factoring out the symbols $\{e^{ur_l\Pi_l'},u\}$ and $\{e^{vr_l\Pi_l'},v\}$, which belong to $K_{2} ^{\tn{\fsz{M}}}(R)$. Similarly,
\[\varepsilon\big((\bar{r}),(U,V)\big)=\Bigg(\prod_l\{e^{\bar{r}_l},\bar{r}_l'\}\times(\{e^{Ur_l\Pi_l'},U\}\{e^{Vr_l\Pi_l'},V\}\big)\Bigg)^{(-1)^{n-1}}.\]
But for each $l$,
\[\begin{array}{lcl}\vspace*{.2cm}\{e^{ur_l\Pi_l'},u\}\{e^{vr_l\Pi_l'},v\}&=&\displaystyle\psi_2\circ\phi_2\big(\{e^{ur_l\Pi_l'},u\}\{e^{vr_l\Pi_l'},v\}\big)\\
&=&\vspace*{.2cm}\displaystyle\psi_2\Big(ur_l\Pi_l'\frac{du}{u}+vr_l\Pi_l'\frac{dv}{v}\Big)\\
&=&\vspace*{.2cm}\displaystyle\psi_2\big(r_l\Pi_l'd(u+v)\big)=\psi_2\big(r_l\Pi_l'd(U+V)\big)\\
&=&\displaystyle\{e^{Ur_l\Pi_l'},U\}\{e^{Vr_l\Pi_l'},V\}.\end{array}\]
Therefore, $\varepsilon\big((\bar{r}),(u,v)\big)=\varepsilon\big((\bar{r}),(U,V)\big)$, as claimed.
\end{proof}
The next step is to define maps $\Psi_{n+1,j}$ analogous to the maps $\Phi_{n+1,j}$ appearing in section \hyperref[subsectionanalysisphi]{\ref{subsectionanalysisphi}} above.\\
\begin{defi}\label{defiPsinplusonej}For a generator $(r_0,...,r_n)$ of $F_{n+1}(R,I)$, satisfying the condition that
$(r_0,...,\hat{r_j},...,r_n)\in F_{n}(R,I)$, define $\Psi_{n+1,j}\big((r_0,...,r_n)\big)$ to be the image
$\varepsilon\circ\sigma^{-1}\circ\Gamma_j ^{-1}\big((r_0,...,r_n)\big)$, and extend multiplicatively to products of $(n+1)$-tuples
sharing the same $j$th entry.
\end{defi}
To see that $\Psi_{n+1,j}$ is well defined, note that $\Gamma_j$ is injective, and although $\sigma$ is not injective,
different preimages under $\sigma$ map to the same element of $K_{n+1} ^{\tn{\fsz{M}}}(R,I)$ under $\varepsilon$ by
lemma \hyperref[lemuvUV]{\ref{lemuvUV}}.\\
\begin{example} \tn{Let $n=2$ and $j=1$, and consider the $3$-tuple $(r_0,r_1,r_2)$, where for simplicity I will assume that $r_0\in I$ and $r_2\in R^*$. Then
\[\begin{array}{lcl}\vspace*{.2cm}\Psi_{3,1}\big((r_0,r_1,r_2)\big)&=&\varepsilon\circ\sigma^{-1}\circ\Gamma_1^{-1}\big((r_0,r_1,r_2)\big)\\
&=&\vspace*{.2cm}\varepsilon\circ\sigma^{-1}\big((r_0,r_2),r_1\big)\\
&=&\vspace*{.2cm}\varepsilon\big((r_0,r_2),(u_1,v_1)\big)\\
&=&\vspace*{.2cm}\big(\Psi_{2}(u_1r_0,r_2)\times \{u_1\}\big)\big(\Psi_{2}(v_1r_0,r_2)\times \{v_1\}\big)\\
&=&\{e^{u_1r_0r_2},r_2,u_1\}\{e^{v_1r_0r_2},r_2,v_1\},\end{array}\]
where $u_1+v_1$ is any decomposition of $r_1$ into a sum of units. }
\end{example}
The following lemma is, from a computational perspective, the most onerous part of the proof.\\
\begin{lem}\label{lempatchingpsi} $\Psi_{n+1,j}=\Psi_{n+1,k}^{(-1)^{k-j}}$ on the intersection $\tn{Im}(\Gamma_j)\cap \tn{Im}(\Gamma_k)\subset F_{n+1}(R,I)$ for any $0\le j<k\le n$.
\end{lem}
\begin{proof} I will work out the case $0<j<k\le n$; the other cases are similar. Since both $\Psi_{n+1,j}$ and $\Psi_{n+1,k}$ send products in $\tn{Im}(\Gamma_j)\cap \tn{Im}(\Gamma_k)$ to products in $K_{n+1} ^{\tn{\fsz{M}}}(R,I)$, it suffices to prove the statement of the lemma for a single generic $(n+1)$-tuple $(\bar{r})=(r_0,...,r_n)$ of $F_{n+1}(R,I)$. Such an $(n+1)$-tuple satisfies the condition that $(r_0,...,\hat{r_j},...,r_n)$ and $(r_0,...,\hat{r_k},...,r_n)$ both belong to $F_{n}(R,I)$. Since $R$ is $2$-fold stable and $2$ is invertible in $R$, the omitted elements $r_j$ and $r_k$ may be decomposed into sums
\begin{equation}\label{equrjrk}r_j=u_j+v_j\hspace*{.5cm}\tn{and}\hspace*{.5cm} r_k=u_k+v_k,\end{equation}
where $u_j$, $v_j$, $u_k$, and $v_k$ belong to $R^*$. Using the splitting $R=S\oplus I$, these summands may be further decomposed as
\begin{equation}\label{equujvjukvk}u_j=a_j+m_j,\hspace*{.5cm}v_j=b_j+n_j,\hspace*{.5cm}u_k=a_k+m_k,\hspace*{.5cm}\tn{and}\hspace*{.5cm}v_k=b_k+n_k,\end{equation}
where $a_j$, $b_j$, $a_k$, and $b_k$ belong to $S^*$, and where $m_j$, $n_j$, $m_k$, and $n_k$ belong to $I$. Let $i_j=m_j+n_j$, and $i_k=m_k+n_k$ be the $I$-parts of $r_j$ and $r_k$. Also define $\bar{r}':=(r_1,...,\hat{r_j},...,\hat{r_k},...,r_{n})$ to be the $(n-2)$-tuple given by omitting the entries $r_0$, $r_j$, and $r_k$ from $(\bar{r})$.
Since $j<k$,
\begin{equation}\label{equkminus2}\begin{array}{lcl} \Psi_{n}\big((r_0,...,\hat{r_j},...,r_{n})\big)&=&\psi_{n}\circ Q_{n}\big((r_0,...,\hat{r_j},...,r_{n})\big)\\
&=&\psi_{n}(r_0dr_1\wedge...\wedge\hat{dr_j}\wedge...\wedge dr_n)\\
&=&\psi_{n}(r_0dr_k\wedge d\bar{r}')^{(-1)^{k-2}}.\end{array}\end{equation}
\comment{
\color{red}
This is because $dr_k$ must be moved to the left across $k-2$ differentials $dr_1,...,\hat{dr_j},...,dr_{k-1}$.
\color{black}
}
Since $(r_0,...,\hat{r_j},...,r_n)\in F_{n}(R,I)$, the differential $r_0dr_k\wedge d\bar{r}'$ may be decomposed as a sum
\begin{equation}\label{r0drkdecomp}r_0dr_k\wedge d\bar{r}'=\sum_lr_ldi_k\wedge d\bar{r}_l+\sum_\alpha r_\alpha du_k\wedge d\bar{r}_\alpha \wedge d\bar{r}_\alpha '+\sum_\alpha r_\alpha dv_k\wedge d\bar{r}_\alpha \wedge d\bar{r}_\alpha ',\end{equation}
where $r_l$ and $\bar{r}_l$ belong to $S^*$, $r_\alpha $ and $\bar{r}_\alpha $ belong to $I$, and $\bar{r}_\alpha '$ belongs to $R^*$.
\comment{
\color{red}
To see this, first use the splitting $R=S\oplus I$ on $r_0$ and $\bar{r}'$:
\[r_0dr_k\wedge d\bar{r}'=(s_0+i_0)dr_k\wedge d(s_1+i_1)\wedge...\wedge \hat{dr_j}\wedge...\wedge\hat{dr_k}\wedge...\wedge d(s_n+i_n).\]
All terms except one involve at least one element of $I$, and the elements not in $I$ may be broken down into sums of units. The resulting terms give the last two sums in equation \hyperref[r0drkdecomp]{\ref{r0drkdecomp}}. The remaining term is $s_0dr_k\wedge d\bar{s}'$, where $s_0$ is the $S$-part of $r_0$, and similarly for $\bar{s}'$. Applying the splitting to $r_k$ in this term, the summand $s_0ds_k\wedge d\bar{s}'$ vanishes, and the remaining summand involving $i_k$ may be written as the sum over $l$ after expressing elements of $S$ as sums of units.
\color{black}
}
Thus, after manipulating some differentials, one obtains the expression
\begin{equation}\label{equbiggerkminus2}\begin{array}{lcl} \Psi_{n}\big((u_jr_0,r_1,...,\hat{r_j},...,r_{n})\big)=\psi_{n}(u_jr_0dr_k\wedge d\bar{r}')^{(-1)^{k-2}}\\
=\Big(\displaystyle\prod_l\{u_jr_l,e^{u_jr_li_k\Pi_l},\bar{r}_l\}
\prod_\alpha \{e^{u_jr_\alpha u_k\Pi_\alpha '},u_k,e^{\bar{r}_\alpha },\bar{r}_\alpha '\}
\prod_\alpha \{e^{u_jr_\alpha v_k\Pi_\alpha '},v_k,e^{\bar{r}_\alpha },\bar{r}_\alpha '\}\Big)^{(-1)^{k-2}}.\end{array}\end{equation}
\comment{
\color{red}
This requires some explanation. The full computation is
\[\Psi_{n}\big((u_jr_0,r_1,...,\hat{r_j},...,r_{n})\big)=\psi_{n}(u_jr_0dr_k\wedge d\bar{r}')^{(-1)^{k-2}}\]
\[=\psi_{n}\Big(u_j\sum_lr_ldi_k\wedge d\bar{r}_l+u_j\sum_\alpha r_\alpha du_k\wedge d\bar{r}_\alpha \wedge d\bar{r}_\alpha '+u_j\sum_\alpha r_\alpha dv_k\wedge d\bar{r}_\alpha \wedge d\bar{r}_\alpha '\Big)^{(-1)^{k-2}}\]
\[=\Big(\prod_l\psi_{n}(u_jr_ldi_k\wedge d\bar{r}_l)\prod_\alpha\psi_{n}(u_jr_\alpha du_k\wedge d\bar{r}_\alpha\wedge d\bar{r}_\alpha')\prod_\alpha\psi_{n}(u_jr_\alpha dv_k\wedge d\bar{r}_\alpha\wedge d\bar{r}_\alpha')\Big)^{(-1)^{k-2}}\]
\[=\Big(\prod_l\{u_jr_l,e^{u_jr_li_k\Pi_l},\bar{r}_l\}
\prod_\alpha \{e^{u_jr_\alpha u_k\Pi_\alpha '},u_k,e^{\bar{r}_\alpha },\bar{r}_\alpha '\}
\prod_\alpha \{e^{u_jr_\alpha v_k\Pi_\alpha '},v_k,e^{\bar{r}_\alpha },\bar{r}_\alpha '\}\Big)^{(-1)^{k-2}}.\]
For the first product, the coefficient $u_jr_l$ is a unit, so the expression must be manipulated before equation \hyperref[equinductionpsi]{\ref{equinductionpsi}} may be applied. Write $u_jr_ldi_k\wedge d\bar{r}_l=-i_kd(u_jr_l)\wedge d\bar{r}_l$ by exactness and the Leibniz rule. Under $\psi_n$, this maps to $\{e^{u_jr_li_k\Pi_l},u_jr_l,\bar{r}_l\}^{-1}=\{u_jr_l,e^{u_jr_li_k\Pi_l},\bar{r}_l\}$ by anticommutativity, as desired. For the second product, the coefficient $u_jr_\alpha$ is an element of $I$, but the first differential $du_k$ is the differential of a unit, so this must be moved to the right across the differentials $d\bar{r}_\alpha$ before applying equation \hyperref[equinductionpsi]{\ref{equinductionpsi}}. However, one may then move $u_k$ back to the left across the elements $e^{\bar{r}_\alpha}$ to obtain the second product of Steinberg symbols, and the signs cancel. A similar argument applies to the third product.
\color{black}
}
Arguing in a similar manner for $v_j$ leads to the following expression for $\Psi_{n+1,j}\big((r_0,...,r_n)\big)$:
\begin{equation}\label{equbiggestkminus2}\begin{array}{lcl} \vspace*{.2cm}\Psi_{n+1,j}\big((r_0,...,r_n)\big)=\varepsilon\circ\sigma^{-1}\circ\Gamma_j ^{-1}\big((r_0,...,r_n)\big)\\
\hspace*{0cm}\vspace*{.5cm}=\Big(\Psi_{n}\big((u_jr_0,r_1,...,\hat{r_j},...,r_{n})\big)\times \{u_j\}\Big)\Big(\Psi_{n}\big((v_jr_0,r_1,...,\hat{r_j},...,r_{n})\big)\times \{v_j\}\Big)\\
=\Big(\displaystyle\prod_l\{u_jr_l,e^{u_jr_li_k\Pi_l},\bar{r}_l,u_j\}\prod_\alpha \{e^{u_jr_\alpha u_k\Pi_\alpha '},u_k,e^{\bar{r}_\alpha },\bar{r}_\alpha ',u_j\}
\prod_\alpha \{e^{u_jr_\alpha v_k\Pi_\alpha '},v_k,e^{\bar{r}_\alpha },\bar{r}_\alpha ',u_j\}\\
\hspace*{.3cm}\displaystyle\prod_l\{v_jr_l,e^{v_jr_li_k\Pi_l},\bar{r}_l,v_j\}\prod_\alpha \{e^{v_jr_\alpha u_k\Pi_\alpha '},u_k,e^{\bar{r}_\alpha },\bar{r}_\alpha ',v_j\}
\prod_\alpha \{e^{v_jr_\alpha v_k\Pi_\alpha '},v_k,e^{\bar{r}_\alpha },\bar{r}_\alpha ',v_j\}\Big)^{(-1)^{k-2}}\end{array}\end{equation}
\[=(abcABC)^{(-1)^{k-2}},\]
where the letters $a$, $b$, $c$, $A$, $B$, and $C$, stand for the six products over $l$ or $\alpha$, in the order shown. By similar reasoning, and recalling that $j<k$,
\begin{equation}\label{equbiggestjminus1}\begin{array}{lcl} \vspace*{.5cm}\Psi_{n+1,k}\big((r_0,...,r_n)\big)=\varepsilon\circ\sigma^{-1}\circ\Gamma_k ^{-1}\big((r_0,...,r_n)\big)\\
=\Big(\displaystyle\prod_l\{u_kr_l,e^{u_kr_li_j\Pi_l},\bar{r}_l,u_k\}\prod_\alpha \{e^{u_kr_\alpha u_j\Pi_\alpha '},u_j,e^{\bar{r}_\alpha },\bar{r}_\alpha ',u_k\}\prod_\alpha \{e^{u_kr_\alpha v_j\Pi_\alpha '},v_j,e^{\bar{r}_\alpha },\bar{r}_\alpha ',u_k\}\\
\hspace*{.3cm}\displaystyle\prod_l\{v_kr_l,e^{v_kr_li_j\Pi_l},\bar{r}_l,v_k\}\prod_\alpha \{e^{v_kr_\alpha u_j\Pi_\alpha '},u_j,e^{\bar{r}_\alpha },\bar{r}_\alpha ',v_k\}\prod_\alpha \{e^{v_kr_\alpha v_j\Pi_\alpha '},v_j,e^{\bar{r}_\alpha },\bar{r}_\alpha ',v_k\}\Big)^{(-1)^{j-1}}\end{array}\end{equation}
\[=(a'b'c'A'B'C')^{(-1)^{j-1}},\]
where the letters $a'$, $b'$, $c'$, $A'$, $B'$, and $C'$, stand for the six products over $l$ or $\alpha$, in the order shown.
It will suffice to show that $a'b'c'A'B'C'=(abcABC)^{-1}$, since this implies that
\[\Psi_{n+1,j}\big((r_0,...,r_n)\big)=(abcABC)^{(-1)^{k-2}}=\Big((a'b'c'A'B'C')^{(-1)^{j-1}}\Big)^{(-1)^{k-j}}=\Big(\Psi_{n+1,k}\big((r_0,...,r_n)\big)\Big)^{(-1)^{k-j}}.\]
\comment{
\color{red}
The full computation is
\[(abcABC)^{(-1)^{k-2}}=(a'b'c'A'B'C')^{(-1)^{k-1}}=(a'b'c'A'B'C')^{(-1)^{(j-1)+(k-j)}}=\Big((a'b'c'A'B'C')^{(-1)^{j-1}}\Big)^{(-1)^{k-j}}.\]
\color{black}
}
Now by anticommutativity,
\begin{equation}\label{bprimeequalsbinverse}b'=b^{-1},\hspace*{.5cm}c'=B^{-1},\hspace*{.5cm}B'=c^{-1},\hspace*{.5cm}C'=C^{-1}.\end{equation}
\comment{
\color{red}
Indeed,
\[b=\prod_\alpha \{e^{u_jr_\alpha u_k\Pi_\alpha '},u_k,e^{\bar{r}_\alpha },\bar{r}_\alpha ',u_j\}\hspace*{.5cm}\tn{and}\hspace*{.5cm} b'=\prod_\alpha \{e^{u_kr_\alpha u_j\Pi_\alpha '},u_j,e^{\bar{r}_\alpha },\bar{r}_\alpha ',u_k\};\]
The first entries of $b$ and $b'$ are the same. Working with $b'$, moving $u_k$ to the left across the entries $u_j,e^{\bar{r}_\alpha },\bar{r}_\alpha '$, then moving $u_j$ to the right across the entries $e^{\bar{r}_\alpha },\bar{r}_\alpha '$, leaves one extra factor of $-1$, so $b'=b^{-1}$. Analogous arguments apply to the pairs $(c',B)$, $(B',c)$, and $(C',C)$. It follows that \[(bcBC)^{(-1)^{k-2}}=(b'c'B'C')^{(-1)^{k-1}}=(b'c'B'C')^{(-1)^{(j-1)+(k-j)}}=\Big((b'c'B'C')^{(-1)^{j-1}}\Big)^{(-1)^{k-j}}.\]
\color{black}
}
It remains to show that $aA=(a'A')^{-1}$; i.e., that $aAa'A'=1$. Factoring out terms with repeated entries, it suffices to show that
\[\prod_l\{r_l,e^{u_jr_li_k\Pi_l},\bar{r}_l,u_j\}\prod_l\{r_l,e^{v_jr_li_k\Pi_l},\bar{r}_l,v_j\}
\prod_l\{r_l,e^{u_kr_li_j\Pi_l},\bar{r}_l,u_k\}\prod_l\{r_l,e^{v_kr_li_j\Pi_l},\bar{r}_l,v_k\}=1.\]
\comment{
\color{red}
Indeed,
\[aAa'A'=\prod_l\{u_jr_l,e^{u_jr_li_k\Pi_l},\bar{r}_l,u_j\}\prod_l\{v_jr_l,e^{v_jr_li_k\Pi_l},\bar{r}_l,v_j\}\prod_l\{u_kr_l,e^{u_kr_li_j\Pi_l},\bar{r}_l,u_k\}\prod_l\{v_kr_l,e^{v_kr_li_j\Pi_l},\bar{r}_l,v_k\}.\]
Factoring the first entry of $a$ gives two factors: the first has first and last entries $u_j$ and $u_j$, respectively, and therefore vanishes. The second has first and last entries $r_l$ and $u_j$, and is retained. Similar arguments apply to $A$, $a'$, and $A'$.
\color{black}
}
Applying anticommutativity and factoring in $K_{*}^{\tn{\fsz{M}}}(R)$, it suffices to show that
\[\prod_l\big(\{e^{u_jr_li_k\Pi_l},u_j\}\{e^{v_jr_li_k\Pi_l},v_j\}
\{e^{u_kr_li_j\Pi_l},u_k\}\{e^{v_kr_li_j\Pi_l},v_k\}\big)\times\{r_l,\bar{r}_l\}=1.\]
\comment{
\color{red}
Here, $\{r_l,\bar{r}_l\}$ is factored out of each term after moving its entries to the far right of each Steinberg symbol.
\color{black}
}
Using the isomorphisms $\phi_2$ and $\psi_2$, applied to each left-hand factor:
\begin{equation}\label{usingphi2psi2}\begin{array}{lcl}\vspace*{.2cm}\psi_2\circ\phi_2\big(\{e^{u_jr_li_k\Pi_l},u_j\}\{e^{v_jr_li_k\Pi_l},v_j\}\{e^{u_kr_li_j\Pi_l},u_k\}\{e^{v_kr_li_j\Pi_l},v_k\}\big)\\
\vspace*{.2cm}=\psi_2(r_li_k\Pi_ldu_j+r_li_k\Pi_ldv_j+r_li_j\Pi_ldu_k+r_li_j\Pi_ldv_k)\\
\vspace*{.2cm}=\psi_2(r_li_k\Pi_ldr_j+r_li_j\Pi_ldr_k)\\
\vspace*{.2cm}=\psi_2\big(r_l\Pi_ld(i_ji_k)+r_li_k\Pi_lds_j+r_li_j\Pi_lds_k\big)\\
\vspace*{.2cm}=\psi_2(-i_ji_kd(r_l\Pi_l)+r_li_k\Pi_lda_j+r_li_k\Pi_ldb_j+r_li_j\Pi_lda_k+r_li_j\Pi_ldb_k)\\
=\{e^{i_ji_kr_l\Pi_l},r_l\Pi_l\}^{-1}\{e^{r_li_k\Pi_la_j},a_j\}\{e^{r_li_k\Pi_lb_j},b_j\}
\{e^{r_li_j\Pi_la_k},a_k\}\{e^{r_li_j\Pi_lb_k},b_k\}.\end{array}\end{equation}
\comment{
\color{red}
For the second equality, combine $r_j=u_j+v_j$ and $r_k=u_k+v_k$. For the third, writing out the expression gives
\[\psi_2\big(r_l\Pi_ld(i_ji_k)+r_li_k\Pi_lds_j+r_li_j\Pi_lds_k\big)=\psi_2\big(r_li_k\Pi_ldi_j+r_li_j\Pi_ldi_k+r_li_k\Pi_lds_j+r_li_j\Pi_lds_k\big),\]
and the first and third, and second and fourth, terms combine to give the previous expression $\psi_2(r_li_k\Pi_ldr_j+r_li_j\Pi_ldr_k)$. For the fourth equality, the first term uses exactness and the Leibniz rule, while the other terms break up $s_j$ and $s_k$ into sums of units.
\color{black}
}
``Re-multiplying" in the Milnor $K$-ring $K_{*}^{\tn{\fsz{M}}}(R)$, it suffices to show that
\[\prod_l\{e^{i_ji_kr_l\Pi_l},r_l\Pi_l,r_l,\bar{r}_l\}^{-1}\{e^{r_li_k\Pi_la_j},a_j,r_l,\bar{r}_l\}
\{e^{r_li_k\Pi_lb_j},b_j,r_l,\bar{r}_l\}
\{e^{r_li_j\Pi_la_k},a_k,r_l,\bar{r}_l\}\{e^{r_li_j\Pi_lb_k},b_k,r_l,\bar{r}_l\}=1.\]
The first factor is trivial, because it can be factored in $K_{n+1} ^{\tn{\fsz{M}}}(R,I)$ into a product of terms with repeated
entries. For the next two factors, note that since $r_0dr_j\wedge d\bar{r}'\in\Omega_{R,I} ^{n-1}/d\Omega_{R,I} ^{n-2}$, it follows that
\[s_0ds_j\wedge d\bar{s}'=\sum_lr_lda_j\wedge d\bar{r}_l+\sum_lr_ldb_j\wedge d\bar{r}_l=0.\]
Applying $i_kd$,
\[\sum_li_kdr_l\wedge da_j\wedge d\bar{r}_l+\sum_li_kdr_l\wedge db_j\wedge d\bar{r}_l=0.\]
Under the isomorphism $\psi_{n}$, this maps to
\begin{equation}\label{secondthirdtrivial}\prod_l\{e^{r_li_k\Pi_la_j},a_j,r_l,\bar{r}_l\}\{e^{r_li_k\Pi_lb_j},b_j,r_l,\bar{r}_l\}=1.\end{equation}
For the final two factors, apply $i_jd$ to $s_0ds_k\wedge d\bar{s}'=0$, then apply $\psi_{n}$ to show that
\begin{equation}\label{fourthfifthtrivial}\prod_l\{e^{r_li_j\Pi_la_k},a_k,r_l,\bar{r}_l\}\{e^{r_li_j\Pi_lb_k},b_k,r_l,\bar{r}_l\}=1.\end{equation}
This completes the proof of the case $0<j<k\le n$. The remaining cases, in which $j=0$, are nearly identical, though slightly easier.
\comment{
\color{red}
Now consider the case $j=0,k=1$. For a generator $(r_0,...,r_n)$ of $F_{n+1}(R,I)$ satisfying the condition that
$(r_0,r_2,...,r_n)$ and $(r_1,r_2,...,r_n)$ belong to $F_{n}(R,I)$, it is necessary to show that
\[\Psi_{n+1,0}\big((r_0,...,r_n)\big)=\Psi_{n+1,1}\big((r_0,r_1,...,r_n)\big)^{-1}.\]
By arguments analogous to those above,
\[\Psi_{n+1,0}(r_0,...,r_n)=\big(\psi_{n}(u_0r_1d\bar{r}')\times \{u_0\}\big)\big(\psi_{n}(v_0r_1d\bar{r}')\times \{v_0\}\big)\]
\[=\prod_l\{e^{u_0i_1\Pi_l},\bar{r}_l,u_0\}
\prod_\alpha \{u_0u_1, e^{r_\alpha u_0u_1\Pi_\alpha '},e^{r_\alpha },e^{\bar{r}_\alpha },\bar{r}_\alpha ',u_0\}
\prod_\alpha \{u_0v_1, e^{r_\alpha u_0v_1\Pi_\alpha '},e^{r_\alpha },e^{\bar{r}_\alpha },\bar{r}_\alpha ',u_0\}\]
\[\prod_l\{e^{v_0i_1\Pi_l},\bar{r}_l,v_0\}\prod_\alpha \{v_0u_1, e^{r_\alpha v_0u_1\Pi_\alpha '},e^{r_\alpha },e^{\bar{r}_\alpha },\bar{r}_\alpha ',v_0\}
\prod_\alpha \{v_0v_1, e^{r_\alpha v_0v_1\Pi_\alpha '},e^{r_\alpha },e^{\bar{r}_\alpha },\bar{r}_\alpha ',v_0\},\]
where $r_0=u_0+v_0$ with $u_0,v_0\in R^*$, where $\bar{r}'=(r_2,...,r_n)$, and where I have used the decomposition
\[u_0r_1d\bar{r}'=\sum_lu_0i_1d\bar{r}_l+\sum_\alpha u_0u_1dr_\alpha d\bar{r}_\alpha d\bar{r}_\alpha '+\sum_\alpha u_0v_1dr_\alpha d\bar{r}_\alpha d\bar{r}_\alpha '.\]
Similarly,
\[\Psi_{n+1,1}\big((r_0,...,r_n)\big)=\prod_l\{e^{u_1i_0\Pi_l},\bar{r}_l,u_1\}
\prod_\alpha \{u_1u_0, e^{r_\alpha u_1u_0\Pi_\alpha '},e^{r_\alpha },e^{\bar{r}_\alpha },\bar{r}_\alpha ',u_1\}
\prod_\alpha \{u_1v_0, e^{r_\alpha u_1v_0\Pi_\alpha '},e^{r_\alpha },e^{\bar{r}_\alpha },\bar{r}_\alpha ',u_1\}\]
\[\prod_l\{e^{v_1i_0\Pi_l},\bar{r}_l,v_1\}\prod_\alpha \{v_1u_0, e^{r_\alpha v_1u_0\Pi_\alpha '},e^{r_\alpha },e^{\bar{r}_\alpha },\bar{r}_\alpha ',v_1\}
\prod_\alpha \{v_1v_0, e^{r_\alpha v_1v_0\Pi_\alpha '},e^{r_\alpha },e^{\bar{r}_\alpha },\bar{r}_\alpha ',v_1\}.\]
As in the case $0<j<k$, after factoring and canceling terms with repeated entries, the products indexed by $\alpha$ may be recognized as inverses by anticommutativity. It remains to show that
\[\prod_l\{e^{u_0i_1\Pi_l},\bar{r}_l,u_0\}\prod_l\{e^{v_0i_1\Pi_l},\bar{r}_l,v_0\}
\prod_l\{e^{u_1i_0\Pi_l},\bar{r}_l,u_1\}\prod_l\{e^{v_1i_0\Pi_l},\bar{r}_l,v_1\}=1.\]
Applying anticommutativity, and factoring in $K_{*}^{\tn{\fsz{M}}}(R)$, it suffices to show that
\[\Big(\prod_l\{e^{u_0i_1\Pi_l},u_0\}\prod_l\{e^{v_0i_1\Pi_l},v_0\}
\prod_l\{e^{u_1i_0\Pi_l},u_1\}\prod_l\{e^{v_1i_0\Pi_l},v_1\}\Big)\times\{\bar{r}_l\}=1.\]
Using the isomorphisms $\phi_2$ and $\psi_2$,
\[\psi_2\phi_2\big(\{e^{u_0i_1\Pi_l},u_0\}\{e^{v_0i_1\Pi_l},v_0\}\{e^{u_1i_0\Pi_l},u_1\}\{e^{v_1i_0\Pi_l},v_1\}\big)\]
\[=\psi_2\big(i_1\Pi_ldu_0+i_1\Pi_ldv_0+i_0\Pi_ldu_1+i_0\Pi_ldv_1\big)\]
\[=\psi_2(i_1\Pi_ldr_0+i_0\Pi_ldr_1)\]
\[=\psi_2(\Pi_ld(i_0i_1)+i_1\Pi_lds_0+i_0\Pi_lds_1)\]
\[=\psi_2(-i_0i_1d(\Pi_l)+i_1\Pi_lda_0+i_1\Pi_ldb_0+i_0\Pi_lda_1+i_0\Pi_ldb_1)\]
\[=\{e^{i_0i_1\Pi_l},\Pi_l\}^{-1}\{e^{i_1a_0\Pi_l},a_0\}
\{e^{i_1b_0\Pi_l},b_0\}\{e^{i_0a_1\Pi_l},a_1\}\{e^{i_0b_1\Pi_l},b_1\}\]
``Re-multiplying" in $K_{*}^{\tn{\fsz{M}}}(R)$, it suffices to show that
\[\prod_l\{e^{i_0i_1\Pi_l},\Pi_l,\bar{r}_l\}^{-1}\{e^{i_1a_0\Pi_l},a_0,\bar{r}_l\}\{e^{i_1b_0\Pi_l},b_0,\bar{r}_l\}
\{e^{i_0a_1\Pi_l},a_1,\bar{r}_l\}\{e^{i_0b_1\Pi_l},b_1,\bar{r}_l\}=1.\]
The first factor is trivial because $\{e^{i_0i_1\Pi_l},\Pi_l,\bar{r}_l\}$ can be factored in $K_{n+1} ^{\tn{\fsz{M}}}(R,I)$ into a product of terms with repeated entries. For the next two factors, apply $i_1d$ to $s_0d\bar{s}'=0$, then apply $\psi_2$ to obtain
\[\prod_l\{e^{i_1a_0\Pi_l},a_0,\bar{r}_l\}\{e^{i_1b_0\Pi_l},b_0,\bar{r}_l\}=1.\]
For the final two factors, apply $i_0d$ to the equation $s_1d\bar{s}'=0$, then apply $\psi_2$ to obtain
\[\prod_l\{e^{i_0a_1\Pi_l},a_1,\bar{r}_l\}\{e^{i_0b_1\Pi_l},b_1,\bar{r}_l\}=1.\]
This completes the case $j=0,k=1$. The case involving $j=0$ and general $k$ follows by slight modifications of
this case.
\color{black}
}
\end{proof}
A ``global map" $\Psi_{n+1}$ from the subgroup of $F_{n+1}(R,I)$ generated by the union $\bigcup_{j=0}^{n}\tn{Im}(\Gamma_j)$ to $K_{n+1} ^{\tn{\fsz{M}}}(R,I)$ may now be defined by patching the maps $\Psi_{n+1,j}$ together, using lemma \hyperref[lempatchingpsi]{\ref{lempatchingpsi}}.\\
\begin{defi}\label{defipatchingpsi} For an $(n+1)$-tuple $(\bar{r})$ belonging to the union $\bigcup_{j=0}^{n}\tn{Im}(\Gamma_j)$, define
\begin{equation}\label{equdefiPsinplusone}\Psi_{n+1}\big((\bar{r})\big):=\Psi_{n+1,j}\big((\bar{r})\big)^{(-1)^{n+1-j}},\end{equation}
whenever the right-hand side is defined, and extend $\Psi_{n+1}$ to the subgroup of $F_{n+1}(R,I)$ generated by $\bigcup_{j=0}^{n}\tn{Im}(\Gamma_j)$ by sending inverses to inverses and products to products in $K_{n+1} ^{\tn{\fsz{M}}}(R,I)$.
\end{defi}
The map $\Psi_{n+1}$ is a well-defined group homomorphism from the subgroup of $F_{n+1}(R,I)$ generated by $\bigcup_{j=0}^{n} \tn{Im}(\Gamma_j)$ to $K_{n+1} ^{\tn{\fsz{M}}}(R,I)$, by lemma \hyperref[lempatchingpsi]{\ref{lempatchingpsi}}. The choice of notation for $\Psi_{n+1}$ is a deliberate reflection of the fact that this map plays the same role as the maps $\Psi_{m+1}$ for $1\le m\le n-1$, introduced in definition \hyperref[defprelimmapspsi]{\ref{defprelimmapspsi}} above. The image of $\bigcup_{j=0}^{n} \tn{Im}(\Gamma_j)$ under the quotient homomorphism $Q_{n+1}$ generates the group $\Omega_{R,I} ^n/d\Omega_{R,I} ^{n-1}$, since any $(n+1)$-tuple $(\bar{r})=(r_0,...,r_{n})$ in $F_{n+1}(R)$ with at least one entry in $I$ belongs to $\bigcup_{j=0}^{n} \tn{Im}(\Gamma_j)$, and since the images of these elements under the quotient map generate $\Omega_{R,I} ^n/d\Omega_{R,I} ^{n-1}$.
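To make the sign bookkeeping in equation \hyperref[equdefiPsinplusone]{\ref{equdefiPsinplusone}} explicit: if $(\bar{r})$ belongs to $\tn{Im}(\Gamma_j)\cap\tn{Im}(\Gamma_k)$ with $j<k$, then lemma \hyperref[lempatchingpsi]{\ref{lempatchingpsi}} gives
\[\Psi_{n+1,j}\big((\bar{r})\big)^{(-1)^{n+1-j}}=\Big(\Psi_{n+1,k}\big((\bar{r})\big)^{(-1)^{k-j}}\Big)^{(-1)^{n+1-j}}=\Psi_{n+1,k}\big((\bar{r})\big)^{(-1)^{n+1+k-2j}}=\Psi_{n+1,k}\big((\bar{r})\big)^{(-1)^{n+1-k}},\]
since $(-1)^{-2j}=1$ and $(-1)^{k}=(-1)^{-k}$, so the two candidate values of $\Psi_{n+1}\big((\bar{r})\big)$ agree.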
It is now possible to define the desired map $\psi_{n+1}: \Omega_{R,I}^n/d\Omega_{R,I} ^{n-1}\rightarrow K_{n+1} ^{\tn{\fsz{M}}}(R,I)$.\\
\begin{defi}\label{defipsinplusone} For any element $\omega\in\Omega_{R,I} ^n/d\Omega_{R,I} ^{n-1}$, write
$\omega=\sum_ir_id\bar{r}_i$, where $r_id\bar{r_i}\in Q_{n+1}\big(\bigcup_j \tn{Im}(\Gamma_j)\big)$. Now define
\begin{equation}\label{equdefipsinplusone}\psi_{n+1}(\omega)=\Psi_{n+1}\big(\prod_i(r_i,\bar{r_i})\big).\end{equation}
\end{defi}
The next step is to show that $\psi_{n+1}$ is a well-defined group homomorphism.\\
\begin{lem}\label{lemelementarypsi} The map $\psi_{n+1}$ is a well-defined group homomorphism $\Omega_{R,I}^{n}/d\Omega_{R,I}^{n-1}\rightarrow K_{n+1}^{\tn{\fsz{M}}}(R,I)$.
\end{lem}
\begin{proof} To show that $\psi_{n+1}$ is well-defined, it suffices to show that $\psi_{n+1}$ maps the relations in lemma \hyperref[lemKahlerrelations]{\ref{lemKahlerrelations}} to the identity in $K_{n+1}^{\tn{\fsz{M}}}(R,I)$. To streamline the notation, let $\ms{R}$ be such a relation. Following similar reasoning to that used in the proof of lemma \hyperref[lemelementaryphi]{\ref{lemelementaryphi}}, it suffices to show that $\Psi_{n+1,j}(\ms{R})$ is defined, and equal to $1$, for some $j$. Now $\Psi_{n+1,j}(\ms{R})$ is defined whenever $j\ne l$ for the additivity relation, whenever $j\ne 0,l$ for the Leibniz rule, and whenever $j\ne l, l+1$ for anticommutativity. In all cases, $\Psi_{n+1,j}(\ms{R})=1$, since omitting the $j$th entry yields a relation in $F_{n}(R,I)$. The map $\psi_{n+1}$ is a group homomorphism by construction, since $\Psi_{n+1}$ is defined to respect the group structure in definition \hyperref[defipatchingpsi]{\ref{defipatchingpsi}}.
\end{proof}
The following lemma is the final step in the proof of the theorem:\\
\begin{lem} The maps $\phi_{n+1}$ and $\psi_{n+1}$ are inverse isomorphisms.
\end{lem}
\begin{proof}It suffices to show this on sets of generators. The group $K_{n+1} ^{\tn{\fsz{M}}}(R,I)$ is generated by
elements of the form $\{r_0,...,r_{n}\}$ with $r_i\in S^*\cup(1+I)^*$ and at least one $r_i$ in $(1+I)^*$.
By anticommutativity, such an element can be written as $\{r,\bar{r}\}$, where $r\in(1+I)^*$ and $\bar{r}\in R^*$.
For such an element,
\begin{equation}\label{inversepsiphi}\psi_{n+1}\circ\phi_{n+1}\big(\{r,\bar{r}\}\big)=\psi_{n+1}\Big(\log(r)\frac{d\bar{r}}{\Pi}\Big)=\{e^{\frac{\log(r)}{\Pi}\Pi},\bar{r}\}=\{r,\bar{r}\}.\end{equation}
Finally, $\Omega_{R,I} ^n/d\Omega_{R,I} ^{n-1}$ is generated by elements of the form $rdr_1\wedge...\wedge dr_n$
with $r_i\in S^*\cup I$ and at least one $r_i$ in $I$. By anticommutativity and exactness, such an element can
be written as $rd\bar{r}\wedge d\bar{r}'$, where $r,\bar{r}\in I$ and $\bar{r}'\in R^*$. For such an element,
\begin{equation}\label{inversephipsi}\phi_{n+1}\big(\psi_{n+1}(rd\bar{r}\wedge d\bar{r}')\big)=\phi_{n+1}\big(\{e^{r\Pi'},e^{\bar{r}},\bar{r}'\}\big)= \log\big(e^{r\Pi'}\big)d\log(e^{\bar{r}})\wedge \frac{d\bar{r}'}{\Pi'}=rd\bar{r}\wedge d\bar{r}'.\end{equation}
\end{proof}
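As a quick sanity check in the simplest nontrivial case, take $R=S[\varepsilon]/\varepsilon^2$, $I=(\varepsilon)$, and $n+1=2$, and let $a\in S$ and $u\in S^*$. Since $\varepsilon^2=0$ implies $\log(1+\varepsilon a)=\varepsilon a$ and $e^{\varepsilon a}=1+\varepsilon a$,
\[\phi_2\big(\{1+\varepsilon a,u\}\big)=\log(1+\varepsilon a)\frac{du}{u}=\varepsilon a\frac{du}{u},\hspace*{.5cm}\tn{and}\hspace*{.5cm}\psi_2\Big(\varepsilon a\frac{du}{u}\Big)=\{e^{\varepsilon a},u\}=\{1+\varepsilon a,u\},\]
illustrating the two computations above.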
\section{Discussion and Applications}\label{sectiondiscussion}
\subsection{Green and Griffiths: Infinitesimal Structure of Chow Groups}\label{subsectionGG}
The original motivation for this paper arose from an attempt to understand Green and Griffiths' suggestive yet incomplete study \cite{GreenGriffithsTangentSpaces05} of the infinitesimal structure of cycle groups and Chow groups over smooth algebraic varieties. Suppose $X$ is a smooth algebraic variety over a field $k$ containing the rational numbers. Then Bloch's theorem \cite{BlochK2Cycles74}, extended by Quillen \cite{QuillenHigherKTheoryI72}, expresses the Chow groups of $X$ as Zariski sheaf cohomology groups of the Quillen $K$-theory sheaves\footnotemark\footnotetext{These are the sheaves associated to the presheaves $U\mapsto K_p(U)$ for open $U\subset X$. I use $p$ here as a generic superscript; $p$ is usually $n+1$ in the context of the main theorem.} $\ms{K}_p$ on $X$:
\begin{equation}\label{blochstheorem}
\tn{Ch}^p(X)=H_{\tn{\fsz{Zar}}}^p(X,\ms{K}_p).
\end{equation}
The general intractability of the Chow groups $\tn{Ch}^p(X)$ for $p\ge 2$ makes the {\it linearization} of equation \hyperref[blochstheorem]{\ref{blochstheorem}} a problem of obvious interest, somewhat in the same spirit as the linearization of Lie groups via much simpler Lie algebras.\footnotemark\footnotetext{In view of Bloch's theorem \hyperref[blochstheorem]{\ref{blochstheorem}}, this is much more than an analogy. In particular, the relationship between algebraic $K$-theory and cyclic homology shares many nearly-identical structural features with the relationship between Lie groups and Lie algebras. See Loday \cite{LodayCyclicHomology98}, Chapters 10 and 11 for details.} Following this reasoning, and skipping some details, leads to the expression
\begin{equation}\label{linearblochstheorem}
T\tn{Ch}^p(X)=H_{\tn{\fsz{Zar}}}^p(X,T\ms{K}_p),
\end{equation}
where $T\tn{Ch}^p(X)$ is the {\it tangent group at the origin} of the Chow group $\tn{Ch}^p(X)$, and where $T\ms{K}_p$ is the {\it tangent sheaf at the origin} of the $K$-theory sheaf $\ms{K}_p$. In this context, $T\ms{K}_p$ is the relative sheaf defined via the simplest nontrivial split nilpotent extension of the structure sheaf $\ms{O}_X$ of $X$, given by tensoring $\ms{O}_X$ with the ring of dual numbers $k[\varepsilon]/\varepsilon^2$ over $k$. At a given point $x\in X$, this involves extending the local ring $S=\ms{O}_{X,x}$ to the ring $R=S\otimes_kk[\varepsilon]/\varepsilon^2=S[\varepsilon]/\varepsilon^2$, with nilpotent extension ideal $I=(\varepsilon)$. The terminology {\it at the origin} may be understood by noting that elements of the relative $K$-group $K_p(R,I)$ may be viewed as ``infinitesimal deformations" of the identity element in $K_p(S)$, since the canonical map $K_p(R)\rightarrow K_p(S)$ sends every element of the subgroup $K_p(R,I)\subset K_p(R)$ to the origin in $K_p(S)$. These considerations bring the study of {\it relative} $K$-theory, and hence of Goodwillie-type theorems, squarely into the picture.
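In purely local terms, the tangent sheaf at the origin may therefore be sketched as follows (stated informally, suppressing sheafification): for $S=\ms{O}_{X,x}$,
\[TK_p(S):=K_p\big(S[\varepsilon]/\varepsilon^2,(\varepsilon)\big)=\tn{Ker}\Big(K_p\big(S[\varepsilon]/\varepsilon^2\big)\longrightarrow K_p(S)\Big),\]
where the map is induced by setting $\varepsilon=0$, and the second equality reflects the splitting of the extension.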
Green and Griffiths focus on the case of $\tn{Ch}^2(X)$, where $X$ is a smooth algebraic surface over a field $k$ containing the rational numbers. Historically, this case has provided some of the most important and surprising results in the theory of Chow groups. The $K$-theory sheaf involved in this context is $\ms{K}_2$, and one may substitute the corresponding {\it Milnor} sheaf $\ms{K}_{2} ^{\tn{\fsz{M}}}$, since the functor $K_{2} ^{\tn{\fsz{M}}}$, defined using the na\"{i}ve tensor product definition \hyperref[defiMilnorK]{\ref{defiMilnorK}}, coincides with $K_2$ on the local rings of $X$. The object of interest is then
\begin{equation}\label{linearblochstheorem}
T\tn{Ch}^2(X)=H_{\tn{\fsz{Zar}}}^2(X,T\ms{K}_{2} ^{\tn{\fsz{M}}}).
\end{equation}
Equation \hyperref[linearblochstheorem]{\ref{linearblochstheorem}} expresses information about an object viewed as totally intractable, namely $\tn{Ch}^2(X)$, in terms of objects viewed as elementary, namely the relative Milnor $K$-groups $K_{2} ^{\tn{\fsz{M}}}(R,I)$, which may be described in terms of K\"{a}hler differentials. This expression provides hope for acquiring useful geometric understanding in a somewhat broader context by means of symbolic $K$-theory, avoiding as much as possible an otherwise forbidding morass of modern homotopy-theoretic constructions.
\subsection{Similar Results involving Relative $K$-Theory and Infinitesimal Geometry}\label{subsectionsimilarresults}
{\bf Van der Kallen: an Early Computation of $K_{2} ^{\tn{\fsz{M}}}(R,I)$.} The isomorphism $K_{2}(R,I)\cong \Omega_{R,I} ^1/dI$ of Bloch \cite{BlochK2Artinian75}, stated in theorem \hyperref[theorembloch]{\ref{theorembloch}} above under appropriate hypotheses, clearly applies in the case discussed in section \hyperref[subsectionGG]{\ref{subsectionGG}}, in which $R=S[\varepsilon]/\varepsilon^2$ and $I=(\varepsilon)$, $S=R/I$ is assumed to be local, $I$ is nilpotent, and the underlying field $k$ contains $\mathbb{Q}$. Under these conditions, it is easy to show that the group $\Omega_{R,I} ^1/dI$ is isomorphic to the group $\Omega_S^1=\Omega_{S/\mathbb{Z}}^1$ of absolute K\"{a}hler differentials over $S$. Indeed, by lemma \hyperref[lemrelativekahlergenerators]{\ref{lemrelativekahlergenerators}}, the relative group $\Omega_{R,I} ^1$ is generated by differentials of the form $\varepsilon adb+c d\varepsilon$ for some $a,b,c\in S$. Hence, in the quotient $\Omega_{R,I} ^1/dI$,
\begin{equation}\label{equvanderkallencomputation}d(c\varepsilon)=cd\varepsilon+\varepsilon dc=0,\hspace*{.5cm}\tn{so}\hspace*{.5cm}cd\varepsilon=-\varepsilon dc,\end{equation}
by the Leibniz rule and exactness. This shows that $\Omega_{R,I} ^1/dI$ is generated by differentials of the form $\varepsilon adb$, and it is easy to see that all the remaining relations come from $\Omega_{S/\mathbb{Z}}^1$. Identifying $\varepsilon adb$ with $adb$ then gives the isomorphism $\Omega_{R,I}^1/dI\cong\Omega_{S/\mathbb{Z}}^1$.\footnotemark\footnotetext{In many instances, it is better to ``carry along the $\varepsilon$," and to think of $\Omega_{R,I}^1$ as $\Omega_{S/\mathbb{Z}}^1\otimes_k(\varepsilon)$, since the latter form generalizes in important ways. For example, $\Omega_{S/\mathbb{Z}}^1\otimes_k(\varepsilon)$ is replaced by $\Omega_{S/\mathbb{Z}}^1\otimes_k\mfr{m}$ for an appropriate local artinian $k$-algebra with maximal ideal $\mfr{m}$ and residue field $k$ in the context of Stienstra's paper \cite{StienstraFormalCompletion83} on the formal completion of $\tn{Ch}^2(X)$.}
The specific result
\begin{equation}\label{equvanderkallenearly}K_{2}\big(S[\varepsilon]/\varepsilon^2,(\varepsilon)\big)\cong \Omega_{S/\mathbb{Z}}^1,\end{equation}
is due to Van der Kallen \cite{VanderKallenEarlyTK271}, predating the more general result of Bloch by several years. Van der Kallen's result features prominently in the work of Green and Griffiths.\footnotemark\footnotetext{In fact, Green and Griffiths give a messy but elementary symbolic proof of Van der Kallen's result, without using Bloch's theorem; see \cite{GreenGriffithsTangentSpaces05}, appendix 6.3.1, pages 77-81.}
Sheafifying equation \hyperref[equvanderkallenearly]{\ref{equvanderkallenearly}} and substituting it into equation \hyperref[linearblochstheorem]{\ref{linearblochstheorem}} yields the expression
\begin{equation}\label{linearblochstheoremkahler}
T\tn{Ch}^2(X)=H_{\tn{\fsz{Zar}}}^2(X,\varOmega_{X/\mathbb{Z}}^1),
\end{equation}
where $\varOmega_{X/\mathbb{Z}}^1$ is the sheaf of absolute K\"{a}hler differentials on $X$. Working primarily from the viewpoint of complex algebraic geometry, Green and Griffiths were struck by the ``mysterious" appearance of absolute differentials in this context, and much of their study \cite{GreenGriffithsTangentSpaces05} is an effort to explain the ``geometric origins" of such differentials. The right-hand side of equation \hyperref[linearblochstheoremkahler]{\ref{linearblochstheoremkahler}} is what Green and Griffiths call the ``formal tangent space to $\tn{Ch}^2(X)$."
Green and Griffiths generalize equation \hyperref[linearblochstheoremkahler]{\ref{linearblochstheoremkahler}} to give a definition (\cite{GreenGriffithsTangentSpaces05} equation 8.53, page 145) of the tangent space $T\tn{Ch}^p(X)$ of the $p$th Chow group of a $p$-dimensional smooth projective variety:
\begin{equation}\label{linearblochstheoremkahlerhigher}
T\tn{Ch}^p(X)=H_{\tn{\fsz{Zar}}}^p(X, \varOmega_{X/\mathbb{Z}}^{p-1}).
\end{equation}
Equation \hyperref[linearblochstheoremkahlerhigher]{\ref{linearblochstheoremkahlerhigher}} is a linearization of the corresponding case of Bloch's theorem in equation \hyperref[blochstheorem]{\ref{blochstheorem}} above. From the viewpoint of the present paper, the sheaf $\varOmega_{X/\mathbb{Z}}^{p-1}$ in equation \hyperref[linearblochstheoremkahlerhigher]{\ref{linearblochstheoremkahlerhigher}} may be derived by sheafifying a special case of the main theorem in equation \hyperref[equmaintheorem]{\ref{equmaintheorem}}, given by setting $R=S[\varepsilon]/\varepsilon^2$, $I=(\varepsilon)$, and $n=p-1$, where $S$ is taken to be the local ring at a point on $X$:
\begin{equation}\label{equspecialcasedual}K_{p} ^{\tn{\fsz{M}}}\big(S[\varepsilon]/\varepsilon^2, (\varepsilon)\big)\cong\Omega_{S[\varepsilon]/\varepsilon^2, (\varepsilon)} ^{p-1}/d\Omega_{S[\varepsilon]/\varepsilon^2, (\varepsilon)} ^{p-2}\cong \Omega_{S/\mathbb{Z}} ^{p-1}.\end{equation}
The second isomorphism in equation \hyperref[equspecialcasedual]{\ref{equspecialcasedual}} follows easily from the Leibniz rule and exactness, as in equation \hyperref[equvanderkallencomputation]{\ref{equvanderkallencomputation}} in the case $n=1$. In view of equation \hyperref[equspecialcasedual]{\ref{equspecialcasedual}}, the group $\Omega_{S/\mathbb{Z}} ^{p-1}$ may be identified as the tangent group at the origin of the Milnor $K$-group $K_{p} ^{\tn{\fsz{M}}}(S)$. The problem of finding meaningful generalizations of the expression for $T\tn{Ch}^{p}(X)$ in equation \hyperref[linearblochstheoremkahlerhigher]{\ref{linearblochstheoremkahlerhigher}} in cases in which the extension ideal $I$ is more complicated than $(\varepsilon)$ is one of the principal motivations for this paper. I return to this point in section \hyperref[subsectiontangentfunctors]{\ref{subsectiontangentfunctors}} below.
{\bf Stienstra: the Formal Completion of $\tn{Ch}^2(X)$.} In the special case of algebraic surfaces, such a generalization was carried out twenty years ago by Jan Stienstra in his study \cite{StienstraFormalCompletion83} of the {\it formal completion at the origin} $\widehat{\tn{Ch}}_X^2(A,\mfr{m})$ of the Chow group $\tn{Ch}^2(X)$ of a smooth projective surface defined over a field $k$ containing the rational numbers. As discussed above, extension of a $k$-algebra $S$ to the ring $S[\varepsilon]/(\varepsilon^2)$ of dual numbers over $S$ is the simplest nontrivial type of split nilpotent extension, and Bloch's theorem \hyperref[theorembloch]{\ref{theorembloch}} immediately gives much more. Exploring this thread, Stienstra identifies $\widehat{\tn{Ch}}_X^2(A,\mfr{m})$ as the Zariski sheaf cohomology group
\begin{equation}\label{equstienstra}\widehat{\tn{Ch}}_X^2(A,\mfr{m})=H_{\tn{\fsz{Zar}}}^2\Bigg(X,\frac{\varOmega_{X\otimes_k A,X\otimes_k\mfr{m}}^1}{d(\ms{O}_X\otimes_k\mfr{m})}\Bigg),\end{equation}
where $A$ is a local artinian $k$-algebra with maximal ideal $\mfr{m}$ and residue field $k$, and where $\widehat{\tn{Ch}}_X^2$ is viewed as a functor from the category of local artinian $k$-algebras with residue field $k$ to the category of abelian groups.\footnotemark\footnotetext{See Stienstra \cite{StienstraFormalCompletion83} page 366. Stienstra writes, {\it ``We may forget about $K$-theory. Our problem has become analyzing $H^n\big(X,\varOmega_{X\otimes A,X\otimes m}/d(\ms{O}_X\otimes_k\mfr{m})\big)$, as a functor of $(A,\mfr{m})$."} Much of Stienstra's paper consists of expressing these cohomology groups in terms of the simpler objects $H^n(X,\ms{O}_X)$, $\Omega_{A,\mfr{m}}^1$, and $H^n(X,\varOmega_{X/\mathbb{Z}}^1)$. Similar analysis of the cohomology groups in equation \hyperref[equgeneralizedtangent]{\ref{equgeneralizedtangent}} below is a problem of obvious interest.} This formidable-looking expression is merely a sheafification of Bloch's isomorphism $K_{2}(R,I)\cong \Omega_{R,I} ^1/dI$ in theorem \hyperref[theorembloch]{\ref{theorembloch}}, substituted into Bloch's expression for the Chow groups $\tn{Ch}^p(X)=H_{\tn{\fsz{Zar}}}^p(X,\ms{K}_p)$ in equation \hyperref[blochstheorem]{\ref{blochstheorem}}. For example, if $A$ is the ring of dual numbers $k[\varepsilon]/\varepsilon^2$ over $k$, and $\mfr{m}$ is the ideal $(\varepsilon)$, then one recovers the case studied by Green and Griffiths:
\[\widehat{\tn{Ch}}_X^2\big(k[\varepsilon]/\varepsilon^2, (\varepsilon)\big)=T\tn{Ch}^2(X)=H_{\tn{\fsz{Zar}}}^2(X,\varOmega_{X/\mathbb{Z}}^1).\]
{\bf Hesselholt: Relative $K$-Theory of Truncated Polynomial Algebras.} More recently, Lars Hesselholt has done very substantial work on the relative $K$-theory of rings with respect to nilpotent ideals and Goodwillie-type theorems.\footnotemark\footnotetext{In a formal sense, this story may be considered complete: as Hesselholt points out (\cite{HesselholtTruncated05}, page 72) {\it ``If the ideal... ... is nilpotent, the relative $K$-theory can be expressed completely in terms of the cyclic homology of Connes and the topological cyclic homology of B\"{o}kstedt-Hsiang-Madsen."} However, one is still concerned with unwinding the latter theories in cases of particular interest. This is the object of Hesselholt's paper \cite{HesselholtTruncated05}, and also, in a smaller way, of mine.} Here, I cite only one of his many results that is of particular relevance to the subject of this paper. In his paper \cite{HesselholtTruncated05}, Hesselholt computes the relative $K$-theory (not just Milnor $K$-theory) of truncated polynomial algebras; i.e., polynomial algebras of the form $S[\varepsilon]/\varepsilon^N$ for some integer $N\ge1$, where $S$ is a commutative regular noetherian algebra over a field. The case $N=1$ returns $S$, and the case $N=2$ gives the now-familiar extension of $S$ by the dual numbers. Hesselholt's result may be expressed as follows:
\begin{equation}\label{equhesselholt}K_{n+1}\big(S[\varepsilon]/\varepsilon^N,(\varepsilon)\big)\cong\bigoplus_{m\ge0}\big(\Omega_{S/\mathbb{Z}} ^{n-2m}\big)^{N-1}.\end{equation}
The case $N=2$ gives the expression
\[K_{n+1}\big(S[\varepsilon]/\varepsilon^2,(\varepsilon)\big)\cong\Omega_{S/\mathbb{Z}} ^{n}\oplus \Omega_{S/\mathbb{Z}} ^{n-2}\oplus \Omega_{S/\mathbb{Z}} ^{n-4}\oplus\cdots,\]
where the first summand on the right-hand side is immediately recognizable as the tangent group at the origin of Milnor $K$-theory, identified in equation \hyperref[equspecialcasedual]{\ref{equspecialcasedual}}. The remaining summands may be viewed, roughly speaking, as representing ``tangents to the non-symbolic part of $K$-theory."
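For bookkeeping purposes, the right-hand side of equation \hyperref[equhesselholt]{\ref{equhesselholt}} can be tabulated mechanically: taking $\Omega_{S/\mathbb{Z}}^{j}=0$ for $j<0$, only finitely many summands survive. The sketch below (my own illustration, recording each surviving degree together with its multiplicity $N-1$) makes the $N=2$ pattern above explicit:

```python
def hesselholt_summands(n, N):
    """Degrees of the Kahler-differential summands Omega^{n-2m} in
    K_{n+1}(S[eps]/eps^N, (eps)), each listed with multiplicity N-1.
    Degrees below 0 contribute nothing, so the sum is finite."""
    return [(n - 2 * m, N - 1) for m in range(n // 2 + 1)]

# N = 2 (dual numbers): one copy each of Omega^4, Omega^2, Omega^0.
assert hesselholt_summands(4, 2) == [(4, 1), (2, 1), (0, 1)]
# The leading degree n is the Milnor-K tangent group; the lower
# degrees are the "tangents to the non-symbolic part of K-theory".
```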
\subsection{Generalized Tangent Functors}\label{subsectiontangentfunctors}
There exist a number of obvious ways in which one may attempt to generalize Green and Griffiths' study of the tangent group at the origin $T\tn{Ch}^2(X)$ of the Chow group $\tn{Ch}^2(X)$ of a smooth algebraic surface:
\begin{enumerate}
\item\label{higherdim} Higher-dimensional varieties may be considered, as in equation \hyperref[linearblochstheoremkahlerhigher]{\ref{linearblochstheoremkahlerhigher}}. As Green and Griffiths point out in \cite{GreenGriffithsTangentSpaces05}, much of their work concerning $\tn{Ch}^2(X)$ applies immediately to $\tn{Ch}^n(X)$ for an $n$-dimensional variety.
\item\label{othercodim} Codimensions different from the dimension of the variety may be studied; for example, one may examine $T\tn{Ch}^2(X)$ for a $3$-fold.
\item\label{otherinfinitesimal} Infinitesimal information more complicated than the dual numbers may be added to the picture, as in Stienstra's paper \cite{StienstraFormalCompletion83}. In other words, one may choose to study functors such as the formal completion $\widehat{\tn{Ch}}_X^2$, rather than merely the tangent space.\footnotemark\footnotetext{This may be viewed as an exalted analogue of studying Taylor polynomials rather than merely tangent lines.}
\item\label{otherK} More sophisticated $K$-theory may be employed, as suggested by Hesselholt's theorem, which shows that ``there is more to relative $K$-theory than relative Milnor $K$-theory," even in the simplest cases. For deep structural reasons, the nonconnective $K$-theory of Bass and Thomason gives good formal results, but Quillen $K$-theory is inadequate.
\item\label{finitechar} The case of positive characteristic may be considered.
\item\label{nonsmooth} Smooth algebraic varieties may be exchanged for a more general category of schemes.
\item\label{higherchow} Analogous objects such as higher Chow groups may be examined.
\end{enumerate}
The main theorem \hyperref[equmaintheorem]{\ref{equmaintheorem}} in this paper contributes to items \hyperref[higherdim]{1}, \hyperref[othercodim]{2}, \hyperref[otherinfinitesimal]{3}, \hyperref[finitechar]{5}, and \hyperref[nonsmooth]{6} above. It contributes to item \hyperref[higherdim]{1} because it applies to $K_{p} ^{\tn{\fsz{M}}}(R,I)$ for all $p$. It contributes to item \hyperref[othercodim]{2} because Bloch's theorem \hyperref[blochstheorem]{\ref{blochstheorem}}, for a fixed $p$, applies to varieties of all dimensions. It contributes to item \hyperref[otherinfinitesimal]{3} because it applies to a broad class of split nilpotent extensions, not merely extensions by the dual numbers. It contributes to item \hyperref[finitechar]{5} because many rings of positive characteristic are $5$-fold stable, as noted in example \hyperref[examplesemilocalstable]{\ref{examplesemilocalstable}}. Finally, it contributes to item \hyperref[nonsmooth]{6} because the right-hand side of Bloch's theorem \hyperref[blochstheorem]{\ref{blochstheorem}} provides one way of generalizing the Chow functors to apply to more general schemes, since the sheaves $\ms{K}_p$ are defined under very general conditions.
The main theorem \hyperref[equmaintheorem]{\ref{equmaintheorem}} permits interpretation of a particular class of functors on the category of smooth algebraic varieties over a field containing the rational numbers, or another appropriate category of schemes, as {\it generalized tangent functors.} These functors are given by sheafifying the isomorphism
\[K_{n+1} ^{\tn{\fsz{M}}}(R,I)\longrightarrow\frac{\Omega_{R,I} ^n}{d\Omega_{R,I} ^{n-1}},\]
of equation \hyperref[equmaintheorem]{\ref{equmaintheorem}}, and taking Zariski sheaf cohomology. In particular, Stienstra's formal completion functor \hyperref[equstienstra]{\ref{equstienstra}} generalizes in the obvious way:
\begin{equation}\label{equgeneralizedtangent}\widehat{\tn{Ch}}_X^n(A,\mfr{m})=H_{\tn{\fsz{Zar}}}^n\Bigg(X,\frac{\varOmega_{X\otimes_k A,X\otimes_k\mfr{m}}^n}{d\big(\varOmega_{X\otimes_k A,X\otimes_k\mfr{m}}^{n-1}\big)}\Bigg).\end{equation}
While these functors considerably broaden the picture examined by Green and Griffiths, they are almost certainly not the ``best" tangent functors available, either in the sense of information-theoretic completeness or in the sense of good formal behavior. Their advantage lies in the relative tractability of the groups $\Omega_{R,I} ^n/d\Omega_{R,I} ^{n-1}$ compared to higher $K$-groups. However, the ``best" generalized tangent functors can likely only be accessed by exiting the world of symbolic $K$-theory.
\subsection*{Acknowledgements}\label{subsectionacknowledgements}
I am grateful to Wilberd Van der Kallen and Lars Hesselholt for their kind answers to several inquiries. I would also like to thank J. W. Hoffman for making me aware of this topic.
\newpage
\section{Proof of \cref{algbb}}
In order to prove \cref{algbb}, we define the relevant notions in detail.
Let $A$ be a (not necessarily finite) set of symbols and $R\subseteq A^*\times
A^*$. The pair $(A,R)$ is called a \emph{(monoid) presentation}. The smallest
congruence of $A^*$ containing $R$ is denoted by $\equiv_R$ and we will
write $[w]_R$ for the congruence class of $w\in A^*$. The \emph{monoid
presented by $(A,R)$} is defined as $A^*/\mathord{\equiv_R}$. For the
monoid presented by $(A,R)$, we also write $\langle A \mid R\rangle$, where $R$
is denoted by equations instead of pairs.
Note that since we did not impose a finiteness restriction on $A$, every monoid
has a presentation. Furthermore, for monoids $M_1$, $M_2$ we can find
presentations $(A_1,R_1)$ and $(A_2,R_2)$ such that $A_1\cap A_2=\emptyset$.
We define the \emph{free product} $M_1*M_2$ to be presented by $(A_1\cup A_2,
R_1\cup R_2)$. Note that $M_1*M_2$ is well-defined up to isomorphism. By way of
the injective morphisms $[w]_{R_i}\mapsto [w]_{R_1\cup R_2}$, $w\in A_i^*$ for
$i=1,2$, we will regard $M_1$ and $M_2$ as subsets of $M_1*M_2$. It is a
well-known property of free products that if $\varphi_i\colon M_i\to N$ is a
morphism for $i=1,2$, then there is a unique morphism $\varphi\colon M_1*M_2\to
N$ with $\varphi|_{M_i}=\varphi_i$ for $i=1,2$. Furthermore, if
$u_0v_1u_1\cdots v_nu_n=1$ for $u_0,\ldots,u_n\in M_1$ and $v_1,\ldots,v_n\in
M_2$ (or vice versa), then $u_j=1$ or $v_j=1$ for some $0\le j\le n$.
Moreover, we write $M^{(n)}$ for the $n$-fold free product $M*\cdots*M$.
One of the directions of the equality $\VA{\mathbb{B}*\mathbb{B}*M}=\Alg{\VA{M}}$ follows from
previous work. In \cite{Zetzsche2013a} (and, for a more general
product construction, in \cite{BuckheisterZetzsche2013a}), the following was
shown.
\begin{ctheorem}[\cite{Zetzsche2013a,BuckheisterZetzsche2013a}]\label{basics:freeproduct}
Let $M_0$ and $M_1$ be monoids. Then $\VA{M_0 * M_1}\subseteq\Alg{\VA{M_0}\cup\VA{M_1}}$.
\end{ctheorem}
Let $M$ and $N$ be monoids. In the following, we write $M\hookrightarrow N$ if
there is a morphism $\varphi\colon M\to N$ such that $\varphi^{-1}(1)=\{1\}$.
Clearly, if $M\hookrightarrow N$, then $\VA{M}\subseteq\VA{N}$: Replacing in a
valence automaton over $M$ all elements $m\in M$ with $\varphi(m)$ yields a
valence automaton over $N$ that accepts the same language.
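For concreteness (a toy sketch of my own, not a construction from this paper): a valence automaton over $M=(\mathbb{Z},+)$ is a blind one-counter automaton, and a run accepts if it consumes the input and ends in a final state with storage value $0$, the identity of $M$. The following naive breadth-first simulation recognizes $\{a^nb^n\mid n\ge 0\}$:

```python
from collections import deque

def valence_accepts(word, edges, q0, finals):
    """Naive simulation of a valence automaton over M = (Z, +): a run
    accepts iff it consumes the input and ends in a final state with
    storage value 0, the identity of M.  edges: (p, letter-or-'', m, q)."""
    seen, todo = set(), deque([(q0, 0, 0)])  # (state, input position, storage)
    while todo:
        p, i, s = todo.popleft()
        if (p, i, s) in seen or abs(s) > len(word):  # crude termination bound
            continue
        seen.add((p, i, s))
        if i == len(word) and p in finals and s == 0:
            return True
        for (p2, a, m, q) in edges:
            if p2 != p:
                continue
            if a == '':
                todo.append((q, i, s + m))
            elif i < len(word) and word[i] == a:
                todo.append((q, i + 1, s + m))
    return False

# A two-state automaton for { a^n b^n : n >= 0 }: count a's up, b's down.
E = [('x', 'a', 1, 'x'), ('x', '', 0, 'y'), ('y', 'b', -1, 'y')]
assert valence_accepts('aabb', E, 'x', {'y'})
assert not valence_accepts('aab', E, 'x', {'y'})
```

Replacing each stored integer $m$ by $\varphi(m)$ for a morphism $\varphi$ with $\varphi^{-1}(1)=\{1\}$ changes neither which runs are valid nor which are accepting, which is the observation behind $\VA{M}\subseteq\VA{N}$ above.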
\begin{clemma}
If $M\hookrightarrow M'$ and $N\hookrightarrow N'$, then $M*N\hookrightarrow M'*N'$.
\end{clemma}
\begin{proof}
Let $\varphi\colon M\to M'$ and $\psi\colon N\to N'$. Then the morphism
$\kappa\colon M*N\to M'*N'$ with $\kappa|_M=\varphi$ and $\kappa|_N=\psi$
clearly satisfies $\kappa^{-1}(1)=\{1\}$.
\end{proof}
We will use the notation $R_1(M)=\{a\in M\mid \exists b\in M\colon ab=1\}$.
\begin{clemma}\label{bpowers}
Let $M$ be a monoid with $R_1(M)\ne\{1\}$. Then
$\mathbb{B}^{(n)}*M\hookrightarrow \mathbb{B}*M$ for every $n\ge
1$. In particular, $\VA{\mathbb{B}*M}=\VA{\mathbb{B}^{(n)}*M}$ for every $n\ge 1$.
\end{clemma}
\begin{proof}
If $\mathbb{B}^{(n)}*M\hookrightarrow \mathbb{B}*M$ and $\mathbb{B}*\mathbb{B}*M\hookrightarrow \mathbb{B}*M$,
then
\[ \mathbb{B}^{(n+1)}*M\cong \mathbb{B}*(\mathbb{B}^{(n)}*M)\hookrightarrow \mathbb{B}*(\mathbb{B}*M)\hookrightarrow \mathbb{B}*M. \]
Therefore, it suffices to prove $\mathbb{B}*\mathbb{B}*M\hookrightarrow \mathbb{B}*M$.
Let $\mathbb{B}_s=\langle s,\bar{s}\mid s\bar{s}=1\rangle$ for $s\in\{p,q,r\}$. We show
$\mathbb{B}_p*\mathbb{B}_q*M\hookrightarrow \mathbb{B}_r*M$. Suppose $M$ is presented by $(X,R)$. We
regard the monoids $\mathbb{B}_p*\mathbb{B}_q*M$ and $\mathbb{B}_r*M$ as embedded into
$\mathbb{B}_p*\mathbb{B}_q*\mathbb{B}_r*M$, which by definition of the free product, has a presentation
$(Y,S)$, where $Y=\{p,\bar{p},q,\bar{q},r,\bar{r}\}\cup X$ and $S$ consists of
$R$ and the equations $s\bar{s}=1$ for $s\in \{p,q,r\}$. For $w\in Y^*$, we
write $[w]$ for the congruence class generated by $S$. Since
$R_1(M)\ne\{1\}$, we find $u,v\in X^*$ with $[uv]=1$ and $[u]\ne 1$; note that then
$[v]\ne 1$ as well. Let
$\varphi\colon (\{p,\bar{p},q,\bar{q}\}\cup X)^*\to(\{r,\bar{r}\}\cup X)^*$
be the morphism with $\varphi(x)=x$ for $x\in X$ and
\begin{align*}
p&\mapsto rr, & \bar{p}&\mapsto \bar{r}\bar{r}, \\
q&\mapsto rur, & \bar{q}&\mapsto \bar{r}v\bar{r}.
\end{align*}
We show by induction on $|w|$ that $[\varphi(w)]=1$ implies $[w]=1$. Since this
is trivial for $w=\varepsilon$, we assume $|w|\ge 1$. Now suppose
$[\varphi(w)]=[\varepsilon]$ for some $w\in(\{p,\bar{p},q,\bar{q}\}\cup X)^*$. If $w\in X^*$,
then $[\varphi(w)]=[w]$ and hence $[w]=1$. Otherwise, we have
$\varphi(w)=xry\bar{r}z$ for some $y\in X^*$ with $[y]=1$ and $[xz]=1$. This
means $w=fsy\overline{s'}g$ for $s,s'\in\{p,q\}$ with $\varphi(fs)=xr$ and
$\varphi(\overline{s'}g)=\bar{r}z$. If $s\ne s'$, then $s=p$ and $s'=q$; or
$s=q$ and $s'=p$. In the former case
\[ [\varphi(w)]=[\varphi(f)~rr~y~\bar{r}v\bar{r}~\varphi(g)]=[\varphi(f)rv\bar{r}\varphi(g)]\ne 1 \]
since $[v]\ne 1$ and in the latter
\[ [\varphi(w)]=[\varphi(f)~rur~y~\bar{r}\bar{r}~\varphi(g)]=[\varphi(f)ru\bar{r}\varphi(g)]\ne 1 \]
since $[u]\ne 1$. Hence $s=s'$. This means $[w]=[fsy\bar{s}g]=[fg]$ and
$1=[\varphi(w)]=[\varphi(fg)]$, and since $|fg|<|w|$, induction yields
$[w]=[fg]=1$.
Hence, we have shown that $[\varphi(w)]=1$ implies $[w]=1$. Since, on
the other hand, $[w_1]=[w_2]$ implies $[\varphi(w_1)]=[\varphi(w_2)]$ for all $w_1,w_2\in
(\{p,\bar{p},q,\bar{q}\}\cup X)^*$, we can lift $\varphi$ to a morphism
witnessing $\mathbb{B}_p*\mathbb{B}_q*M\hookrightarrow \mathbb{B}_r*M$.
\end{proof}
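The substitution $\varphi$ in the proof above can be checked mechanically on small words. The sketch below (my own; it does not prove the lemma, but illustrates the mechanism) takes $M=\mathbb{Z}$ written additively with $u=1$, $v=-1$, writes $\bar r$ as `R`, and reduces words by the length-reducing rules $r\bar r\to 1$ (but not $\bar r r\to 1$, since $\mathbb{B}$ is bicyclic), $m\,m'\to m+m'$, and $0\to\varepsilon$:

```python
def reduce_word(w):
    """Reduce a word in B_r * Z: cancel r R, add adjacent integers,
    drop the integer 0.  Tokens are 'r', 'R', or ints."""
    changed = True
    while changed:
        changed = False
        out, i = [], 0
        while i < len(w):
            t = w[i]
            if t == 0:
                changed = True
            elif out and isinstance(out[-1], int) and isinstance(t, int):
                out[-1] += t
                changed = True
            elif out and out[-1] == 'r' and t == 'R':
                out.pop()
                changed = True
            else:
                out.append(t)
            i += 1
        w = out
    return w

def phi(w):
    """The substitution from the proof, with M = Z, u = 1, v = -1:
    p -> rr, pbar -> RR, q -> r u r, qbar -> R v R."""
    images = {'p': ['r', 'r'], 'P': ['R', 'R'],
              'q': ['r', 1, 'r'], 'Q': ['R', -1, 'R']}
    return [t for s in w for t in images[s]]

# Matched pairs p pbar and q qbar vanish under phi:
assert reduce_word(phi(['p', 'P'])) == []
assert reduce_word(phi(['q', 'Q'])) == []
# A mismatched pair does not: phi(p qbar) reduces to r v R, not 1.
assert reduce_word(phi(['p', 'Q'])) != []
```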
\begin{proof}[Proof of \cref{algbb}]
It suffices to prove the first statement: If $R_1(M)\ne\{1\}$, then by
\cref{bpowers}, $\VA{\mathbb{B}*M}=\VA{\mathbb{B}*\mathbb{B}*M}$.
Since $\VA{\mathbb{B}}\subseteq\mathsf{CF}$, \cref{basics:freeproduct} yields
\[ \VA{\mathbb{B}*N}\subseteq\Alg{\VA{\mathbb{B}}\cup\VA{N}}\subseteq\Alg{\VA{N}} \]
for every monoid $N$. Therefore,
\[ \VA{\mathbb{B}*\mathbb{B}*M}\subseteq\Alg{\VA{\mathbb{B}*M}}\subseteq\Alg{\Alg{\VA{M}}}=\Alg{\VA{M}}. \]
It remains to be shown that $\Alg{\VA{M}}\subseteq\VA{\mathbb{B}*\mathbb{B}*M}$.
Let $G=(N,T,P,S)$ be a reduced $\VA{M}$-grammar and let $X=N\cup T$. Since
$\VA{M}$ is closed under union, we may assume that for each $B\in N$, there is
exactly one production $B\to L_B$ in $P$. For each $B\in N$, let $A_B=(Q_B,X,M,E_B,q^B_0,F_B)$ be a
valence automaton over $M$ with $\Lang{A_B}=L_B$. We may clearly assume that
$Q_B\cap Q_C=\emptyset$ for $B\ne C$ and that for each $(p,w,m,q)\in E_B$,
we have $|w|\le 1$.
\newcommand{\lfloor}{\lfloor}
\newcommand{\rfloor}{\rfloor}
In order to simplify the correctness proof, we modify $G$. Let $\lfloor$ and
$\rfloor$ be new symbols and let $G'$ be the grammar
$G'=(N,T\cup\{\lfloor,\rfloor\},P',S)$, where $P'$ consists of the productions
$B\to \lfloor L\rfloor$ for $B\to L\in P$.
Moreover, let
\[ K=\{v\in (N\cup T\cup\{\lfloor,\rfloor\})^* \mid u \grammarsteps[G'] v,~u\in L_S\}. \]
Then $\Lang{G}=\pi_T(K\cap (T\cup\{\lfloor,\rfloor\})^*)$ and it suffices to
show $K\in\VA{\mathbb{B}*\mathbb{B}*M}$.
Let $Q=\bigcup_{B\in N} Q_B$. For each $q\in Q$, let $\mathbb{B}_q=\langle
q,\bar{q}\mid q\bar{q}=1\rangle$ be an isomorphic copy of $\mathbb{B}$. Let
$M'=\mathbb{B}_{q_1}*\cdots*\mathbb{B}_{q_n}*M$, where $Q=\{q_1,\ldots,q_n\}$. We shall prove
$K\in\VA{M'}$, which implies $K\in\VA{\mathbb{B}*\mathbb{B}*M}$ by \cref{bpowers} since $R_1(\mathbb{B}*M)\ne\{1\}$.
Let $E=\bigcup_{B\in N} E_B$, $F=\bigcup_{B\in N} F_B$. The new set $E'$
consists of the following transitions:
\begin{align}
&(p,x,m,q) && \text{for $(p,x,m,q)\in E$,} \label{basics:algbb:t:old} \\
&(p,\lfloor,mq,q^B_0) && \text{for $(p,B,m,q)\in E$, $B\in N$,} \label{basics:algbb:t:open} \\
&(p,\rfloor,\bar{q},q) && \text{for $p\in F$, $q\in Q$.} \label{basics:algbb:t:close}
\end{align}
We claim that with $A'=(Q, N\cup T\cup\{\lfloor,\rfloor\},M',E',q_0^S,F)$, we have
$\Lang{A'}=K$.
Let $v\in K$, where $u\grammarstepsn[G']{n} v$ for some $u\in L_S$. We show
$v\in\Lang{A'}$ by induction on $n$. For $n=0$, we have $v\in L_S$ and can use
transitions of type \labelcref{basics:algbb:t:old} inherited from $A_S$ to
accept $v$. If $n\ge 1$, let $u\grammarstepsn[G']{n-1} v'\grammarstep[G'] v$.
Then $v'\in \Lang{A'}$ and $v'=xBy$, $v=x\lfloor w\rfloor y$ for some $B\in N$,
$w\in L_B$. The run for $v'$ uses a transition $(p,B,m,q)\in E$. Instead of
using this transition, we can use $(p,\lfloor,mq,q_0^B)$, then execute the
\labelcref{basics:algbb:t:old}-type transitions for $w\in L_B$, and finally use
$(f,\rfloor,\bar{q},q)$, where $f$ is the final state in the run for $w$. This
has the effect of reading $\lfloor w\rfloor$ from the input and multiplying
$mq1\bar{q}=m$ to the storage monoid. Hence, the new run is valid and accepts
$v$, so $v\in \Lang{A'}$. This proves $K\subseteq\Lang{A'}$.
In order to show $\Lang{A'}\subseteq K$, consider the morphisms
$\varphi\colon (T\cup\{\lfloor,\rfloor\})^*\to \mathbb{B}$, $\psi\colon M'\to\mathbb{B}$ with
$\varphi(x)=1$ for $x\in T$, $\varphi(\lfloor)=a$, $\varphi(\rfloor)=\bar{a}$,
$\psi(q)=a$ for $q\in Q$, $\psi(\bar{q})=\bar{a}$, and $\psi(m)=1$ for $m\in
M$. The transitions of $A'$ are constructed such that
$(p,\varepsilon,1)\autsteps[A'] (q,w,m)$ implies $\varphi(w)=\psi(m)$. In
particular, if $v\in \Lang{A'}$, then $\pi_{\{\lfloor,\rfloor\}}(v)$ is a semi-Dyck
word with respect to $\lfloor$ and $\rfloor$.
Let $v\in\Lang{A'}$ and let $n=|v|_{\lfloor}$. We show $v\in K$ by induction on
$n$. If $n=0$, then the run for $v$ only used transitions of type
\labelcref{basics:algbb:t:old} and hence $v\in L_S$. If $n\ge 1$, since
$\pi_{\{\lfloor,\rfloor\}}(v)$ is a semi-Dyck word, we can write $v=x\lfloor
w\rfloor y$ for some $w\in (N\cup T)^*$. Since $\lfloor$ and $\rfloor$ can only
be produced by transitions of the form \labelcref{basics:algbb:t:open} and
\labelcref{basics:algbb:t:close}, respectively, the run for $v$ has to be of
the form
\begin{align*}
(q_0^S,\varepsilon,1)&\autsteps[A'](p,x,r) \\
&\autstep[A'](q_0^B,x\lfloor,rmq) \\
&\autsteps[A'](f,x\lfloor w,rmqs) \\
&\autstep[A'](q',x\lfloor w\rfloor,rmqs\overline{q'}) \\
&\autsteps[A'] (f',x\lfloor w\rfloor y,rmqs\overline{q'}t)
\end{align*}
for some $p,q,q'\in Q$, $B\in N$, $(p,B,m,q)\in E$, $f,f'\in F$, $r,t\in M'$,
and $s\in M$ and with $rmqs\overline{q'}t=1$. This last condition implies $s=1$
and $q=q'$, which in turn entails $rmt=1$. This also means
$(p,B,m,q')=(p,B,m,q)\in E$ and $(q_0^B,\varepsilon,1)\autsteps[A']
(f,w,s)=(f,w,1)$ and hence $w\in L_B$. Using the transition $(p,B,m,q')\in E$, we
have
\begin{align*}
(q_0^S,\varepsilon,1) &\autsteps[A'] (p,x,r) \\
&\autstep[A'] (q',xB,rm) \\
&\autsteps[A'] (f',xBy,rmt).
\end{align*}
Hence $xBy\in\Lang{A'}$ and $|xBy|_{\lfloor}<|v|_{\lfloor}$. Thus, induction
yields $xBy\in K$ and since $xBy\grammarstep[G'] x\lfloor w\rfloor y$, we have $v=x\lfloor w\rfloor y\in K$. This
establishes $\Lang{A'}=K$.
\end{proof}
\section{Proof of \cref{basics:sli:zpowers}}
\begin{proof}
We start with the inclusion ``$\subseteq$''. Since the right-hand side is
closed under morphisms and union, it suffices to show that for each
$L\in\VA{M}$, $L\subseteq X^*$, and semilinear $S\subseteq X^\oplus$, we have
$L\cap\ParikhInv{S}\in\VA{M\times\mathbb{Z}^n}$ for some $n\ge 0$. Let $n=|X|$ and pick a
linear order on $X$. This induces an embedding $X^\oplus\to\mathbb{Z}^n$,
by way of which we consider $X^\oplus$ as a subset of $\mathbb{Z}^n$.
Suppose $L=\Lang{A}$ for a valence automaton $A$ over $M$. The new valence
automaton $A'$ over $M\times\mathbb{Z}^n$ simulates $A$ and, if $w$ is the input read
by $A$, adds $\Parikh{w}$ to the $\mathbb{Z}^n$ component of the storage monoid. When
$A$ reaches a final state, $A'$ nondeterministically changes to a new state
$q_1$, in which it nondeterministically subtracts an element of $S$ from the
$\mathbb{Z}^n$ component. Afterwards, $A'$ switches to another new state $q_2$, which
is the only accepting state in $A'$. Clearly, $A'$ accepts a word $w$ if and
only if $w\in \Lang{A}$ and $\Parikh{w}\in S$, hence
$\Lang{A'}=\Lang{A}\cap\ParikhInv{S}$. This proves ``$\subseteq$''.
Suppose $L=\Lang{A}$ for some valence automaton $A=(Q,X,M\times\mathbb{Z}^n,E,q_0,F)$.
We construct a valence automaton $A'$ over $M$ as follows. The input alphabet
$X'$ of $A'$ consists of all those $(w,\mu)\in X^*\times\mathbb{Z}^n$ for which there
is an edge $(p,w,(m,\mu),q)\in E$ for some $p,q\in Q$, $m\in M$. $A'$ has edges
\[ E'=\{ (p, (w,\mu), m, q) \mid (p, w, (m,\mu), q)\in E \}. \] In other words,
whenever $A$ reads $w$ and adds $(m,\mu)\in M\times\mathbb{Z}^n$ to its storage monoid,
$A'$ adds $m$ and reads $(w,\mu)$ from the input. Let $\psi\colon
X'^\oplus\to\mathbb{Z}^n$ be the morphism that projects the symbols in $X'$ to the
right component and let $h\colon X'^*\to X^*$ be the morphism that projects the
symbols in $X'$ to the left component. Note that the set
$S=\psi^{-1}(0)\subseteq X'^\oplus$ is Presburger definable and hence
effectively semilinear. We clearly have
$\Lang{A}=h(\Lang{A'}\cap\ParikhInv{S})\in\HomSLI{\VA{M}}$. This proves
``$\supseteq$''. Clearly, all constructions in the proof can be
carried out effectively.
\end{proof}
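The semilinear constraint mechanism in the proof above can be illustrated with a naive membership test for a linear set (my own sketch; actual decision procedures would use Presburger arithmetic rather than brute-force search over period coefficients):

```python
from collections import Counter
from itertools import product

def parikh(w, alphabet):
    """Parikh image of w with respect to a fixed ordering of the alphabet."""
    c = Counter(w)
    return tuple(c[a] for a in alphabet)

def in_linear_set(v, c, periods, bound=20):
    """Naive membership test for the linear set c + N*p1 + ... + N*pk,
    searching period coefficients only up to `bound`."""
    for coeffs in product(range(bound + 1), repeat=len(periods)):
        w = tuple(c[i] + sum(k * p[i] for k, p in zip(coeffs, periods))
                  for i in range(len(v)))
        if w == v:
            return True
    return False

# Psi(aabb) lies in the linear set (0,0) + N*(1,1), i.e. in the Parikh
# image of { w : |w|_a = |w|_b }; Psi(aab) does not.
assert in_linear_set(parikh('aabb', 'ab'), (0, 0), [(1, 1)])
assert not in_linear_set(parikh('aab', 'ab'), (0, 0), [(1, 1)])
```

In the construction above, the $\mathbb{Z}^n$ component of the storage monoid plays exactly this role: it accumulates $\Parikh{w}$ and then nondeterministically subtracts an element of $S$, so the identity is reached iff $\Parikh{w}\in S$.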
\section{Proof of \cref{hierarchy:closure}}
\begin{cproposition}\label{basics:alg:semiafl}
Let $\mathcal{C}$ be an effective full semi-trio. Then $\Alg{\mathcal{C}}$ is an effective full semi-AFL.
\end{cproposition}
\begin{proof}
Since $\Alg{\mathcal{C}}$ is clearly effectively closed under union, we only prove
effective closure under rational transductions.
Let $G=(N,T,P,S)$ be a $\mathcal{C}$-grammar and let $U\subseteq X^*\times T^*$ be a
rational transduction. Since we can easily construct a $\mathcal{C}$-grammar for
$a\Lang{G}$ (just add a production $S'\to \{aS\}$) and the rational
transduction $(\varepsilon, a)U=\{ (v,au) \mid (v,u)\in U\}$, we may assume that
$\Lang{G}\subseteq T^+$.
Let $U$ be given by the automaton $A=(Q,X^*\times T^*,E,q_0,F)$. We may assume that
\[ E \subseteq Q\times ((X\times\{\varepsilon\})\cup (\{\varepsilon\}\times T))\times Q \]
and $F=\{f\}$.
We regard $Z=Q\times T\times Q$ and $N'=Q\times N\times Q$ as alphabets. For
each $p,q\in Q$, let $U_{p,q}\subseteq N'\times (N\cup T)^*$ be the transduction
such that for $w=w_1\cdots
w_n$, $w_1,\ldots,w_n\in N\cup T$, $n\ge 1$, the set $U_{p,q}(w)$ consists of all words
\[ (p,w_1,q_1)(q_1,w_2,q_2)\cdots (q_{n-1},w_n,q) \]
with $q_1,\ldots,q_{n-1}\in Q$. Moreover, let
$U_{p,q}(\varepsilon)=\{\varepsilon\}$ if $p=q$ and $U_{p,q}(\varepsilon)=\emptyset$ if $p\ne q$.
Observe that $U_{p,q}$ is locally finite.
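The transduction $U_{p,q}$ is the usual state-annotation (``triple") construction; the following sketch (my own rendering, with states as strings) enumerates the finitely many annotations of a given word, which makes the local finiteness concrete:

```python
from itertools import product

def U_pq(w, states, p, q):
    """All state-annotated versions (p,w1,q1)(q1,w2,q2)...(q_{n-1},wn,q)
    of the word w = w1...wn, as in the triple construction; the empty
    word maps to the empty word iff p == q."""
    n = len(w)
    if n == 0:
        return [()] if p == q else []
    out = []
    for mids in product(states, repeat=n - 1):
        qs = (p,) + mids + (q,)
        out.append(tuple((qs[i], w[i], qs[i + 1]) for i in range(n)))
    return out

# A word of length 2 over two states has exactly |Q| = 2 annotations:
assert len(U_pq('ab', ['0', '1'], '0', '1')) == 2
assert U_pq('', ['0', '1'], '0', '0') == [()]
assert U_pq('', ['0', '1'], '0', '1') == []
```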
The new grammar $G'=(N',Z,P',(q_0,S,f))$ has productions $(p,B,q)\to
U_{p,q}(L)$ for each $p,q\in Q$ and $B\to L\in P$.
Let $\sigma\colon Z^*\to\Powerset{X^*}$ be the regular substitution defined by
\[ \sigma((p,x,q)) = \{ w\in X^* \mid (p,(\varepsilon,\varepsilon))\autsteps[A] (q, (w, x)) \}. \]
We claim that $U(\Lang{G})=\sigma(\Lang{G'})$. First, it can be shown by
induction on the number of derivation steps that
$\SententialForms{G'}=U_{q_0,f}(\SententialForms{G})$. This implies
$\Lang{G'}=U_{q_0,f}(\Lang{G})$. Since for every language $K\subseteq T^+$, we
have $\sigma(U_{q_0,f}(K))=UK$, we may conclude
$\sigma(\Lang{G'})=U(\Lang{G})$.
$\Alg{\mathcal{C}}$ is clearly effectively closed under $\Alg{\mathcal{C}}$-substitutions. Since
$\mathcal{C}$ contains the finite languages, this means $\Alg{\mathcal{C}}$ is effectively closed under
$\mathsf{REG}$-substitutions. Hence, we can construct a $\mathcal{C}$-grammar for
$U(\Lang{G})=\sigma(\Lang{G'})$.
\end{proof}
\begin{cproposition}\label{basics:sli:presburger:trio}
Let $\mathcal{C}$ be an effective full semi-AFL. Then $\HomSLI{\mathcal{C}}$ is an effective
Presburger closed full trio. In particular,
$\HomSLI{\HomSLI{\mathcal{C}}}=\HomSLI{\mathcal{C}}$.
\end{cproposition}
\begin{proof}
Let $L\in\mathcal{C}$, $L\subseteq X^*$, $S\subseteq X^\oplus$ semilinear, and $h\colon
X^*\to Y^*$ be a morphism. If $T\subseteq Z^*\times Y^*$ is a rational
transduction, then $Th(L\cap \ParikhInv{S})=U(L\cap\ParikhInv{S})$, where
$U\subseteq Z^*\times X^*$ is the rational transduction $U=\{ (v,u)\in
Z^*\times X^* \mid (v,h(u))\in T \}$. We may assume that $X\cap Z=\emptyset$.
Construct a regular language $R\subseteq (X\cup Z)^*$ with
$U=\{(\pi_Z(w),\pi_X(w)) \mid w\in R\}$. With this, we have
\begin{align*}
U(L\cap \ParikhInv{S}) &= \pi_Z\left((R\cap (L\shuf Z^*))\cap\ParikhInv{S+Z^\oplus}\right).
\end{align*}
Since $\mathcal{C}$ is an effective full semi-AFL, and thus $R\cap (L\shuf Z^*)$ is
effectively in $\mathcal{C}$, the right hand side is effectively contained in
$\HomSLI{\mathcal{C}}$. This proves that $\HomSLI{\mathcal{C}}$ is an effective full trio.
Next, we prove effective closure under union. Suppose $L_i\subseteq X_i^*$,
$S_i\subseteq X_i^\oplus$, and $h_i\colon X_i^*\to Y^*$ for $i=1,2$. If
$\bar{X}_2$ is a disjoint copy of $X_2$ with bijection $\varphi\colon X_2\to
\bar{X}_2$, then
\[ h_1(L_1\cap\ParikhInv{S_1})\cup h_2(L_2\cap\ParikhInv{S_2})=h((L_1\cup \varphi(L_2))\cap \ParikhInv{S_1\cup \varphi(S_2)}), \]
where $h\colon (X_1\cup \bar{X}_2)^*\to Y^*$ is the morphism with $h(x)=h_1(x)$ for $x\in
X_1$ and $h(x)=h_2(\varphi(x))$ for $x\in \bar{X}_2$. This proves that
$\HomSLI{\mathcal{C}}$ is effectively closed under union.
It remains to be shown that $\HomSLI{\mathcal{C}}$ is Presburger closed. Suppose
$L\in\mathcal{C}$, $L\subseteq X^*$, $S\subseteq X^\oplus$ is semilinear, $h\colon
X^*\to Y^*$ is a morphism, and $T\subseteq Y^\oplus$ is another semilinear set.
Let $\varphi\colon X^\oplus\to Y^\oplus$ be the morphism with
$\varphi(\Parikh{w})=\Parikh{h(w)}$ for every $w\in X^*$. Moreover, consider
the set
\[ T'=\{\mu\in X^\oplus \mid \varphi(\mu)\in T \}=\{\Parikh{w}\mid w\in X^*,~\Parikh{h(w)}\in T\}. \]
It is clearly Presburger definable in terms of $T$ and hence effectively semilinear.
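To illustrate the definitions of $\varphi$ and $T'$, the following Python sketch (illustrative only; Parikh vectors are represented as `Counter`s, and all names are ours) computes $\varphi(\mu)$ and tests membership in $T'$ relative to an oracle for $T$:

```python
from collections import Counter

def phi(mu, h):
    """Additive extension of the morphism h to Parikh vectors:
    phi(Psi(w)) = Psi(h(w)) for every word w."""
    out = Counter()
    for x, k in mu.items():
        for y, m in Counter(h[x]).items():
            out[y] += k * m
    return out

def in_T_prime(mu, h, in_T):
    """Membership in T' = {mu | phi(mu) in T}; in_T is an oracle for T."""
    return in_T(phi(mu, h))
```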
Furthermore, we have
\begin{align*}
h(L\cap\ParikhInv{S})\cap\ParikhInv{T}=h(L\cap \ParikhInv{S\cap T'}).
\end{align*}
This proves that $\HomSLI{\mathcal{C}}$ is effectively Presburger closed.
\end{proof}
\begin{proof}[Proof of \cref{hierarchy:closure}]
\Cref{hierarchy:closure} follows from
\cref{basics:alg:semiafl,basics:sli:presburger:trio}. The uniform algorithm
recursively applies the transformations described therein.
\end{proof}
\section{Proof of \cref{basics:fsemilinear}}
\begin{cproposition}\label{basics:sli:semilinear}
If $\mathcal{C}$ is semilinear, then so is $\HomSLI{\mathcal{C}}$.
Moreover, if $\mathcal{C}$ is effectively semilinear, then so is $\HomSLI{\mathcal{C}}$.
\end{cproposition}
\begin{proof}
Since morphisms effectively preserve semilinearity, it suffices to show that
$\Parikh{L\cap\ParikhInv{S}}$ is (effectively) semilinear for each $L\in\mathcal{C}$,
$L\subseteq X^*$, and semilinear $S\subseteq X^\oplus$. This, however, is easy
to see since $\Parikh{L\cap\ParikhInv{S}}=\Parikh{L}\cap S$ and the semilinear
subsets of $X^\oplus$ are closed under intersection (they coincide with the
Presburger definable sets). Furthermore, if a semilinear representation of
$\Parikh{L}$ can be computed, this is also the case for $\Parikh{L}\cap S$.
\end{proof}
\begin{proof}[Proof of \cref{basics:fsemilinear}]
The semilinearity follows from \cref{basics:sli:semilinear} and a result by
van~Leeuwen~\cite{vanLeeuwen1974}, stating that if $\mathcal{C}$ is semilinear, then so
is $\Alg{\mathcal{C}}$.
The computation of (semilinear representations of) Parikh images can be done
recursively. The procedure in \cref{basics:sli:semilinear} describes the
computation for languages in $\mathsf{F}_i$. In order to compute the Parikh image of
a language in $\mathsf{G}_i=\Alg{\mathsf{F}_i}$, consider an $\mathsf{F}_i$-grammar $G$. Replacing
each right-hand side by a Parikh equivalent regular language yields a
$\mathsf{REG}$-grammar $G'$ that is Parikh equivalent to $G$. Since $G'$ is effectively
context-free, one can compute the Parikh image for $G'$.
\end{proof}
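The replacement of right-hand sides by Parikh-equivalent regular languages is straightforward once a semilinear representation is available; the following Python sketch (our own illustration, not part of the proof) writes down a Parikh-equivalent regular expression for a semilinear set given as a list of (constant, periods) pairs:

```python
from collections import Counter

def parikh_equiv_regex(semilinear):
    """Regular expression Parikh-equivalent to the union of the linear
    sets mu + F^oplus, each given as (mu, F) with Counter vectors."""
    def word(vec):
        # any fixed word with the given Parikh vector
        return "".join(x * k for x, k in sorted(vec.items()))
    alts = [word(mu) + "".join("(" + word(p) + ")*" for p in F)
            for mu, F in semilinear]
    return "|".join(alts)
```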
\section{Simple constructions of PAIM}
This section contains simple lemmas for the construction of PAIM.
\begin{clemma}[Unions]\label{pa:union}
Given $i\in\mathbb{N}$ and languages $L_0,L_1\in\mathsf{G}_i$, along with a PAIM in $\mathsf{G}_i$
for each of them, one can construct a PAIM for $L_0\cup L_1$ in $\mathsf{G}_i$.
\end{clemma}
\begin{proof}
One can find a PAIM $(K^{(j)}, C^{(j)}, P^{(j)}, (P^{(j)}_c)_{c\in C^{(j)}},
\varphi^{(j)}, \diamond)$ for $L_j$ in $\mathsf{G}_i$ for $j=0,1$ such that $C^{(0)}\cap
C^{(1)}=P^{(0)}\cap P^{(1)}=\emptyset$. Then $K=K^{(0)}\cup K^{(1)}$ is
effectively contained in $\mathsf{G}_i$ and can be turned into a PAIM $(K,C,P,(P_c)_{c\in
C},\varphi,\diamond)$ for $L_0\cup L_1$.
\end{proof}
\begin{clemma}[Homomorphic images]\label{pa:morphism}
Let $h\colon X^*\to Y^*$ be a morphism. Given $i\in\mathbb{N}$ and a PAIM for
$L\in\mathsf{G}_i$ in $\mathsf{G}_i$, one can construct a PAIM for $h(L)$ in $\mathsf{G}_i$.
\end{clemma}
\begin{proof}
Let $(K,C,P,(P_c)_{c\in C},\varphi,\diamond)$ be a PAIM for $L$ and let
$\bar{h}\colon X^\oplus\to Y^\oplus$ be the morphism with
$\bar{h}(x)=\Parikh{h(x)}$ for $x\in X$. Define the new morphism
$\varphi'\colon (C\cup P)^\oplus\to Y^\oplus$ by
$\varphi'(\mu)=\bar{h}(\varphi(\mu))$. Moreover, let $g\colon (C\cup X\cup
P\cup \{\diamond\})^*\to (C\cup Y\cup P\cup \{\diamond\})^*$ be the extension of
$h$ that fixes $C\cup P\cup \{\diamond\}$. Then $(g(K),C,P,(P_c)_{c\in
C},\varphi',\diamond)$ is clearly a PAIM for $h(L)$ in $\mathsf{G}_i$.
\end{proof}
\begin{clemma}[Linear decomposition]\label{pa:decomposelinear}
Given $i\in\mathbb{N}$ and $L\in\mathsf{G}_i$ along with a PAIM in $\mathsf{G}_i$, one can construct
$L_1,\ldots,L_n\in\mathsf{G}_i$, each together with a linear PAIM in $\mathsf{G}_i$, such
that $L=L_1\cup\cdots\cup L_n$.
\end{clemma}
\begin{proof}
Let $(K,C,P,(P_c)_{c\in C},\varphi,\diamond)$ be a PAIM for $L\subseteq X^*$.
For each $c\in C$, let $K_c=K\cap c(X\cup P\cup\{\diamond\})^*$. Then $(K_c,
\{c\}, P_c, P_c, \varphi_c,\diamond)$, where $\varphi_c$ is the restriction of
$\varphi$ to $(\{c\}\cup P_c)^\oplus$, is a PAIM for $\pi_X(K_c)$ in
$\mathsf{G}_i$. Furthermore, $L=\bigcup_{c\in C}\pi_X(K_c)$.
\end{proof}
\begin{clemma}[Presence check]\label{pa:checkif}
Let $X$ be an alphabet and $x\in X$. Given $i\in\mathbb{N}$ and a PAIM for $L\subseteq
X^*$ in $\mathsf{G}_i$, one can construct a PAIM for $L\cap X^*xX^*$ in $\mathsf{G}_i$.
\end{clemma}
\begin{proof}
Since
\[ (L_1\cup\cdots\cup L_n)\cap X^*xX^* = (L_1\cap X^*xX^*)\cup\cdots\cup (L_n\cap X^*xX^*),\]
\cref{pa:decomposelinear} and \cref{pa:union} imply that we may assume that the
PAIM $(K,C,P,(P_c)_{c\in C},\varphi,\diamond)$ for $L$ is linear, say $C=\{c\}$
and $P=P_c$. If $\varphi(c)(x)\ge 1$, then $L\cap X^*xX^*=L$
and there is nothing to do; hence we assume $\varphi(c)(x)=0$.
Let $C'=\{(c,p)\mid p\in P, \varphi(p)(x)\ge 1\}$ be a new alphabet and let
\[ K'=\{(c,p)uv \mid (c,p)\in C',~u,v\in (X\cup P\cup \{\diamond\})^*,~cupv\in K\}. \]
Note that $K'$ can clearly be obtained from $K$ by way of a rational
transduction and is therefore contained in $\mathsf{G}_i$.
Furthermore, we let
$P'=P'_{(c,p)}=P$ and $\varphi'((c,p))=\varphi(c)+\varphi(p)$ for $(c,p)\in
C'$ and $\varphi'(p)=\varphi(p)$ for $p\in P$. Then we have
\begin{align*}
\pi_X(K')&=\{\pi_X(w) \mid w\in K,~\exists p\in P: |w|_p\ge 1,~\varphi(p)(x)\ge 1\} \\
&=\{\pi_X(w) \mid w\in K,~|\pi_X(w)|_x\ge 1\} = L\cap X^*xX^*.
\end{align*}
This proves the projection property. For each $(c,p)uv\in K'$ with $cupv\in K$, we have
\[ \varphi'(\pi_{C'\cup P'}((c,p)uv))=\varphi(\pi_{C\cup P}(cupv))=\Parikh{\pi_X(cupv)}=\Parikh{\pi_X((c,p)uv)}. \]
Thus, $\varphi'(\pi_{C'\cup P'}(w))=\Parikh{\pi_X(w)}$ for every $w\in K'$.
Hence, we have established the counting property. Moreover,
\begin{align*}
\Parikh{\pi_{C'\cup P'}(K')} &= \bigcup_{(c,p)\in C'} (c,p)+P'^\oplus,
\end{align*}
meaning the commutative projection property is satisfied as well.
This proves that the tuple $(\pi_{C'\cup X\cup P'}(K'),C',P',(P'_d)_{d\in C'},\varphi')$ is a
Parikh annotation for $L\cap X^*xX^*$ in $\mathsf{G}_i$. Since
$(K,C,P,(P_c)_{c\in C},\varphi,\diamond)$ is a PAIM for $L$, it follows that
$(K',C',P',(P'_d)_{d\in C'},\varphi',\diamond)$ is a PAIM for $L\cap X^*xX^*$.
\end{proof}
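The rearrangement at the heart of the preceding proof, namely choosing an occurrence of a period symbol $p$ with $\varphi(p)(x)\ge 1$ and merging it with the constant symbol into $(c,p)$, can be sketched in Python for finite toy inputs (all names are ours):

```python
from collections import Counter

def presence_check(K, phi, x):
    """From each w = c u p v in K with phi(p)(x) >= 1, produce (c,p) u v,
    for every choice of such an occurrence of p. Words are tuples of
    symbols; phi maps period symbols to Counter vectors."""
    K_prime = set()
    for w in K:
        c, rest = w[0], w[1:]
        for i, sym in enumerate(rest):
            if phi.get(sym, Counter())[x] >= 1:
                K_prime.add(((c, sym),) + rest[:i] + rest[i + 1:])
    return K_prime
```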
\begin{clemma}[Absence check]\label{pa:checkifnot}
Let $X$ be an alphabet and $x\in X$. Given $i\in\mathbb{N}$ and a PAIM for $L\subseteq X^*$ in
$\mathsf{G}_i$, one can construct a PAIM for $L\setminus X^*xX^*$ in
$\mathsf{G}_i$.
\end{clemma}
\begin{proof}
Since
\[ (L_1\cup\cdots\cup L_n)\setminus X^*xX^* = (L_1\setminus X^*xX^*)\cup\cdots\cup (L_n\setminus X^*xX^*),\]
\cref{pa:decomposelinear} and \cref{pa:union} imply that we may assume that the
PAIM $(K,C,P,(P_c)_{c\in C},\varphi,\diamond)$ for $L$ is linear, say $C=\{c\}$
and $P=P_c$. If $\varphi(c)(x)\ge 1$, then $L\setminus
X^*xX^*=\emptyset$ and there is nothing to do; hence we assume $\varphi(c)(x)=0$.
Let $C'=C$, $P'=P'_c=\{p\in P \mid \varphi(p)(x)=0\}$, and let
\[ K' = \{w\in K \mid |w|_p = 0~\text{for each $p\in P\setminus P'$} \}. \]
Furthermore, we let $\varphi'$ be the restriction of $\varphi$ to $(C'\cup
P')^\oplus$. Then clearly $(K',C',P',(P'_c)_{c\in C'},\varphi',\diamond)$ is a PAIM for
$L\setminus X^*xX^*$ in $\mathsf{G}_i$.
\end{proof}
\section{Proof of \cref{nonterminal:extension}}
\begin{proof}
First, observe that there is at most one $G$-compatible extension: For each
$A\in N$, there is a $u\in T^*$ with $A\grammarsteps[G] u$ and hence
$\hat{\psi}(A)=\psi(u)$.
In order to prove existence, we claim that for each $A\in N$ and
$A\grammarsteps[G] u$ and $A\grammarsteps[G] v$ for $u,v\in T^*$, we have
$\psi(u)=\psi(v)$. Indeed, since $G$ is reduced, there are $x,y\in T^*$ with
$S\grammarsteps[G] xAy$. Then $xuy$ and $xvy$ are both in $\Lang{G}$ and hence
$\psi(xuy)=\psi(xvy)=h$. In the group $H$, this implies
\[ \psi(u)=\psi(x)^{-1}h\psi(y)^{-1}=\psi(v). \]
This means a $G$-compatible extension exists: Setting $\hat{\psi}(A)=\psi(w)$
for some $w\in T^*$ with $A\grammarsteps[G] w$ does not depend on the chosen
$w$. This definition implies that whenever $u\grammarsteps[G] v$ for $u\in
(N\cup T)^*$, $v\in T^*$, we have $\hat{\psi}(u)=\hat{\psi}(v)$. Therefore, if
$u\grammarsteps[G] v$ for $u,v\in (N\cup T)^*$, picking a $w\in T^*$ with
$v\grammarsteps[G] w$ yields $\hat{\psi}(u)=\hat{\psi}(w)=\hat{\psi}(v)$.
Hence, $\hat{\psi}$ is $G$-compatible.
Now suppose $H=\mathbb{Z}$ and $\mathcal{C}=\mathsf{F}_i$. Since $\mathbb{Z}$ is commutative, $\psi$ is
well-defined on $T^\oplus$, meaning there is a morphism $\bar{\psi}\colon
T^\oplus\to\mathbb{Z}$ with $\bar{\psi}(\Parikh{w})=\psi(w)$ for $w\in T^*$. We can
therefore determine $\hat{\psi}(A)$ by computing a semilinear representation of
the Parikh image of $K=\{w\in T^* \mid A\grammarsteps[G] w\}\in\Alg{\mathsf{F}_i}$
(see \cref{basics:fsemilinear}), picking an element $\mu\in\Parikh{K}$, and
computing $\hat{\psi}(A)=\bar{\psi}(\mu)$.
\end{proof}
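For $H=\mathbb{Z}$, the final step, evaluating $\bar{\psi}$ on a chosen $\mu\in\Parikh{K}$, is just a weighted sum; a minimal Python sketch (illustrative only, names ours):

```python
from collections import Counter

def psi_bar(mu, psi):
    """Evaluate the additive extension of psi: T* -> Z on a Parikh
    vector mu, so that psi_bar(Psi(w)) = psi(w) for every word w."""
    return sum(k * psi[x] for x, k in mu.items())
```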
\section{Proof of \cref{pa:matchings}}
\begin{proof}
Let $G=(N,X,P,S)$ and let $\delta\colon X^*\to\mathbb{Z}$ be the morphism with
$\delta(w)=\gamma_0(\pi_{X_0}(w))-\gamma_1(\pi_{X_1}(w))$ for $w\in X^*$.
Since then $\delta(w)=0$ for every $w\in \Lang{G}$, by
\cref{nonterminal:extension}, $\delta$ extends uniquely to a $G$-compatible
$\hat{\delta}\colon (N\cup X)^*\to\mathbb{Z}$. We claim that with
$k=\max\{|\hat{\delta}(A)| \mid A\in N\}$, each derivation tree of $G$ admits a
$k$-matching.
Consider an
$(N\cup X)$-tree $t$ and let $L_i$ be the set of $X_i$-labeled leaves.
Let $A$ be an arrow collection for $t$ and let
$d_A(\ell)$ be the number of arrows incident to $\ell\in L_0\cup L_1$. Moreover,
let $\lambda(\ell)$ be the label of the leaf $\ell$ and let
\[ \beta(t)=\sum_{\ell\in L_0}\gamma_0(\lambda(\ell))-\sum_{\ell\in L_1}\gamma_1(\lambda(\ell)). \]
$A$ is a \emph{partial $k$-matching} if the following holds:
\begin{enumerate}
\item if $\beta(t)\ge 0$, then $d_A(\ell)\le\gamma_0(\lambda(\ell))$ for each $\ell\in L_0$ and $d_A(\ell)=\gamma_1(\lambda(\ell))$ for each $\ell\in L_1$.
\item if $\beta(t)\le 0$, then $d_A(\ell)\le\gamma_1(\lambda(\ell))$ for each $\ell\in L_1$ and $d_A(\ell)=\gamma_0(\lambda(\ell))$ for each $\ell\in L_0$.
\item $d_A(s)\le k$ for every subtree $s$ of $t$.
\end{enumerate}
Hence, while in a $k$-matching the number $\gamma_i(\lambda(\ell))$ is the
degree of $\ell$ (with respect to the matching), it is merely a capacity in a partial $k$-matching. The
first two conditions express that either all leaves in $L_0$ or all in $L_1$
(or both) are filled up to capacity, depending on which of the two sets of
leaves has the smaller total capacity.
If $t$ is a derivation tree of $G$, then $\beta(t)=0$ and hence a partial
$k$-matching is already a $k$-matching. Therefore, we show by induction on $n$
that every derivation subtree of height $n$ admits a partial $k$-matching. This
is trivial for $n=0$. For $n>0$, consider a derivation subtree $t$ with
direct subtrees $s_1,\ldots,s_r$. Let $B$ be the label of $t$'s root and
$B_j\in N\cup X$ be the label of $s_j$'s root. Then $\hat{\delta}(B)=\beta(t)$,
$\hat{\delta}(B_j)=\beta(s_j)$ and $\beta(t)=\sum_{j=1}^r \beta(s_j)$. By
induction, each $s_j$ admits a partial $k$-matching $A_j$. Let $A$ be the union
of the $A_j$. Observe that since $\sum_{\ell\in L_0} d_A(\ell)=\sum_{\ell\in
L_1} d_A(\ell)$ in every arrow collection (each side equals the number of
arrows), we have
\begin{equation}\beta(t)=\underbrace{\sum_{\ell\in L_0}(\gamma_0(\lambda(\ell))-d_A(\ell))}_{=:p\ge 0}-\underbrace{\sum_{\ell\in L_1}(\gamma_1(\lambda(\ell))-d_A(\ell))}_{=:q\ge 0}. \label{eq:beta:capacity}\end{equation}
If $\beta(t)\ge 0$ and hence $p\ge q$, this equation allows us to obtain $A'$
from $A$ by adding $q$ arrows, such that each $\ell\in L_1$ has
$\gamma_1(\lambda(\ell))-d_A(\ell)$ new incident arrows. They are connected to
$X_0$-leaves so as to maintain $\gamma_0(\lambda(\ell))-d_{A'}(\ell)\ge 0$.
Symmetrically, if $\beta(t)\le 0$ and hence $p\le q$, we add $p$ arrows such
that each $\ell\in L_0$ has $\gamma_0(\lambda(\ell))-d_A(\ell)$ new incident
arrows. They also are connected to $X_1$-leaves so as to maintain
$\gamma_1(\lambda(\ell))-d_{A'}(\ell)\ge 0$. Then by construction, $A'$
satisfies the first two conditions of a partial $k$-matching. Hence, it remains
to be shown that the third is fulfilled as well.
Since for each $j$, we have either $d_A(\ell)=\gamma_0(\lambda(\ell))$ for all
$\ell\in L_0\cap s_j$ or we have $d_A(\ell)=\gamma_1(\lambda(\ell))$ for all
$\ell\in L_1\cap s_j$, none of the new arrows can connect two leaves inside of
$s_j$. This means the $s_j$ are the only subtrees for which we have to verify
the third condition, which amounts to checking that $d_{A'}(s_j)\le k$ for $1\le j\le
r$. As in \cref{eq:beta:capacity}, we have
\[ \beta(s_j)=\underbrace{\sum_{\ell\in L_0\cap s_j} (\gamma_0(\lambda(\ell))-d_A(\ell))}_{=:u\ge 0}-\underbrace{\sum_{\ell\in L_1\cap s_j} (\gamma_1(\lambda(\ell))-d_A(\ell))}_{=:v\ge 0}. \]
Since the arrows added in $A'$ have respected the capacity of each leaf, we
have $d_{A'}(s_j)\le u+v$. Moreover, since $A_j$ is a
partial $k$-matching, we have $u=0$ or $v=0$. In any case, we have
$d_{A'}(s_j)\le u+v=|u-v|=|\beta(s_j)|=|\hat{\delta}(B_j)|\le k$, proving the third condition.
\end{proof}
\section{Proof of \cref{pa:lettersubstitution}}
\begin{clemma}\label{computereduced}
Given an $\mathsf{F}_i$-grammar, one can compute an equivalent reduced
$\mathsf{F}_i$-grammar.
\end{clemma}
\begin{proof}
Since $\mathsf{F}_i$ is a Presburger closed semi-trio and has a decidable emptiness
problem, we can proceed as follows. First, we compute the set of productive
nonterminals. We initialize $N_0=\emptyset$ and then successively compute
\[ N_{j+1}=\{A\in N \mid L\cap (N_j\cup T)^*\ne\emptyset~\text{for some $A\to L$ in $P$}\}. \]
Then at some point, $N_{j+1}=N_j$, and this $N_j$ contains precisely the productive nonterminals.
Using a similar method, one can compute the set of reachable nonterminals.
Hence, one can compute the set $N'\subseteq N$ of nonterminals that are
reachable and productive. The new grammar is then obtained by replacing each
production $A\to L$ with $A\to (L\cap (N'\cup T)^*)$ and removing all
productions $A\to L$ where $A\notin N'$.
\end{proof}
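The fixed-point iteration for the productive nonterminals can be sketched as follows (a toy Python version in which right-hand sides are finite sets of words; the real algorithm instead relies on the effective emptiness test in $\mathsf{F}_i$, and all names below are ours):

```python
def productive_nonterminals(T, productions):
    """Fixed-point iteration: a nonterminal A is productive as soon as some
    production A -> L contains a word over the productive nonterminals
    found so far and the terminals T. Toy version: right-hand sides are
    finite sets of words over one-character symbols, so the emptiness
    test is a direct check."""
    def has_word_over(L, allowed):
        return any(set(w) <= allowed for w in L)
    prod = set()
    while True:
        new = {A for A, L in productions if has_word_over(L, prod | T)}
        if new == prod:
            return prod
        prod = new
```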
\begin{proof}[Proof of \cref{pa:lettersubstitution}]
In light of \cref{pa:morphism}, it clearly suffices to prove the statement in
the case that there are $a\in Z$ and $b\in Z'$ with $Z'=Z\cup \{b\}$, $b\notin
Z$ and $\sigma(x)=\{x\}$ for $x\in Z\setminus\{a\}$ and $\sigma(a)=\{a,b\}$.
Let $(K,C,P,(P_c)_{c\in C},\varphi,\diamond)$ be a PAIM for $L$ in $\mathsf{G}_i$.
According to \cref{computereduced}, we can assume $K$ to be given by a reduced
$\mathsf{F}_i$-grammar.
We want to use \cref{consistent:substitution} to construct a PAIM for
$\sigma(L)$. Let $X_0=Z\cup\{\diamond\}$, $X_1=C\cup P$, and $\gamma_i\colon
X_i^*\to\mathbb{N}$ for $i=0,1$ be the morphisms with
\[ \gamma_0(w)=|w|_a,~~~~~\gamma_1(w)=\varphi(w)(a). \]
Then, by the counting property of PAIM, we have $\gamma_0(w)=\gamma_1(w)$ for each
$w\in K$. Let $Y,h$ and $Y_i,h_i,\eta_i$ be defined as in
\cref{consistent:substitution:alphabet} and
\cref{consistent:substitution:morphisms}. \Cref{consistent:substitution} allows us to construct
$\hat{K}\in\mathsf{G}_i$, $\hat{K}\subseteq Y^*$, with $\hat{K}\subseteq h^{-1}(K)$,
$\pi_{X_i}(\hat{K})=\pi_{X_i}(h^{-1}(K))$ for $i=0,1$, and
$\eta_0(\pi_{X_0}(w))=\eta_1(\pi_{X_1}(w))$ for each $w\in \hat{K}$.
For each $f\in C\cup P$, let $D_f=\{(f',m)\in Y_1 \mid f'=f\}$. With this, let
$C'=\bigcup_{c\in C} D_c$, $P'=\bigcup_{p\in P} D_p$, and
$P'_{(c,m)}=\bigcup_{p\in P_c} D_p$ for $(c,m)\in C'$. The new morphism $\varphi'\colon
(C'\cup P')^\oplus\to Z'^\oplus$ is defined by
\begin{align*}
\varphi'((f,m))(z)&=\varphi(f)(z) & \text{for $z\in Z\setminus\{a\}$}, \\
\varphi'((f,m))(b)&=m, \\
\varphi'((f,m))(a)&=\varphi(f)(a)-m.
\end{align*}
Let $g\colon Y^*\to (C'\cup Z'\cup P'\cup \{\diamond\})^*$ be the morphism
with $g((z,0))=z$ for $z\in Z$, $g((a,1))=b$, $g(x)=x$ for $x\in C'\cup
P'\cup\{\diamond\}$. We claim that with $K'=g(\hat{K})$, the tuple
$(K',C',P',(P'_c)_{c\in C'},\varphi',\diamond)$ is a PAIM for $\sigma(L)$.
First, note that $K'\in\mathsf{G}_i$ and
\[ K'=g(\hat{K})\subseteq g(h^{-1}(K))\subseteq g(h^{-1}(C(Z\cup P)^*))\subseteq C'(Z'\cup P')^*. \]
Note that $g$ is bijective. This allows us to define $f\colon (C'\cup Z'\cup
P'\cup \{\diamond\})^*\to(C\cup Z\cup P\cup\{\diamond\})^*$ as the morphism
with $f(w)=h(g^{-1}(w))$ for all $w$. Observe that then $f(a)=f(b)=a$ and
$f(z)=z$ for $z\in Z\setminus\{a,b\}$ and by the definition of $K'$, we have
$f(K')\subseteq K$ and $\sigma(L)=f^{-1}(L)$.
\begin{itemize}
\item \emph{Projection property.}
Note that $\pi_{Y_0}(u)=\pi_{Y_0}(v)$ implies $\pi_{Z'}(g(u))=\pi_{Z'}(g(v))$
for $u,v\in Y^*$. Thus, from $\pi_{Y_0}(\hat{K})=\pi_{Y_0}(h^{-1}(K))$, we
deduce
\begin{align*}
\pi_{Z'}(K')&=\pi_{Z'}(g(\hat{K}))=\pi_{Z'}(g(h^{-1}(K))) \\
&=\pi_{Z'}(f^{-1}(K))=f^{-1}(L)=\sigma(L).
\end{align*}
\item\emph{Counting property.}
Note that by the definition of $\varphi'$ and $g$, we have
\begin{align}
\varphi'(\pi_{C'\cup P'}(x))(b)=\eta_1(x)=\eta_1(g^{-1}(x)) \label{pa:lettersubst:counting:eta}
\end{align}
for every $x\in C'\cup P'$.
For $w\in K'$, we have $f(w)\in K$ and hence $\varphi(\pi_{C\cup
P}(f(w)))=\Parikh{\pi_{Z}(f(w))}$. Since for $z\in Z\setminus\{a\}$, we have
$\varphi'(x)(z)=\varphi(f(x))(z)$ for every $x\in C'\cup P'$, it follows that
\begin{align}
\varphi'(\pi_{C'\cup P'}(w))(z)&=\varphi(\pi_{C\cup P}(f(w)))(z) \nonumber\\
&=\Parikh{\pi_Z(f(w))}(z)=\Parikh{\pi_{Z'}(w)}(z).\label{pa:lettersubst:counting:z}
\end{align}
Moreover, by \labelcref{pa:lettersubst:counting:eta} and since $g^{-1}(w)\in\hat{K}$, we have
\begin{align}
\varphi'(\pi_{C'\cup P'}(w))(b) & =\eta_1(g^{-1}(w))=\eta_0(g^{-1}(w))=|w|_b \nonumber\\
& =\Parikh{\pi_{Z'}(w)}(b).\label{pa:lettersubst:counting:b}
\end{align}
Furthermore, $f(w)\in K$ yields
\begin{align*}
\varphi'(\pi_{C'\cup P'}(w))(a)+\varphi'(\pi_{C'\cup P'}(w))(b)& =\varphi(\pi_{C\cup P}(f(w)))(a) \\
& =\Parikh{\pi_{Z}(f(w))}(a) \\
& =\Parikh{\pi_{Z'}(w)}(a)+\Parikh{\pi_{Z'}(w)}(b).
\end{align*}
Together with \labelcref{pa:lettersubst:counting:b}, this implies
$\varphi'(\pi_{C'\cup P'}(w))(a)=\Parikh{\pi_{Z'}(w)}(a)$. Combining this with
\labelcref{pa:lettersubst:counting:z,pa:lettersubst:counting:b}, we
obtain $\varphi'(\pi_{C'\cup P'}(w))=\Parikh{\pi_{Z'}(w)}$. This proves the
counting property.
\item\emph{Commutative projection property.} Observe that
\begin{align*}
\Parikh{\pi_{C'\cup P'}(K')} &= \Parikh{\pi_{Y_1}(\hat{K})}=\Parikh{\pi_{Y_1}(h^{-1}(K))} \\
&=\Parikh{h^{-1}(\pi_{C\cup P}(K))}=\bigcup_{c\in C'} c+P'^\oplus_c.
\end{align*}
\item\emph{Boundedness.} Since $|w|_{\diamond}=|h(v)|_{\diamond}$ for each $w\in
K'$ with $w=g(v)$, there is a constant bounding $|w|_{\diamond}$ for $w\in K'$.
\item\emph{Insertion property.}
Let $cw\in K'$ with $c\in C'$ and $\mu\in P'^\oplus_c$. Then $f(\mu)\in
P_{f(c)}^\oplus$ and $f(cw)\in K$. Write
\[ \pi_{Z'\cup\{\diamond\}}(cw)=w_0\diamond w_1 \diamond \cdots \diamond w_n \]
with $w_0,\ldots,w_n\in Z'^*$. Then
\[ \pi_{Z\cup\{\diamond\}}(f(cw))=f(\pi_{Z'\cup\{\diamond\}}(cw))=f(w_0)\diamond\cdots\diamond f(w_n). \]
By the insertion property of $K$ and since $f(cw)\in K$, there is a $v\in K$
with
\[ \pi_Z(v)=f(w_0)v_1 f(w_1)\cdots v_nf(w_n), \]
$v_1,\ldots,v_n\in Z^*$, and
$\Parikh{\pi_Z(v)}=\Parikh{\pi_Z(f(cw))}+\varphi(f(\mu))$. In particular, we
have $\Parikh{v_1\cdots v_n}=\varphi(f(\mu))$. Note that $\varphi'(\mu)\in
Z'^\oplus$ is obtained from $\varphi(f(\mu))\in Z^\oplus$ by replacing some
occurrences of $a$ by $b$. Thus, by the definition of $f$, we can find words
$v'_1,\ldots,v'_n\in Z'^*$ with $f(v'_i)=v_i$ and $\Parikh{v'_1\cdots
v'_n}=\varphi'(\mu)$. Then the word
\[ w' = w_0v'_1w_1\cdots v'_nw_n\in Z'^* \]
satisfies $\pi_{Z'\cup\{\diamond\}}(cw)\preceq_{\diamond} w'$,
$\Parikh{w'}=\Parikh{\pi_{Z'}(cw)}+\varphi'(\mu)$ and
\[ f(w')=f(w_0)v_1f(w_1)\cdots v_nf(w_n)=\pi_Z(v)\in \pi_Z(K)=L. \]
Since $f^{-1}(L)=\sigma(L)$, this means $w'\in\sigma(L)$. We have thus
established the insertion property.
\end{itemize}
We conclude that the tuple $(K',C',P',(P'_c)_{c\in C'},\varphi',\diamond)$ is a
PAIM in $\mathsf{G}_i$ for $\sigma(L)$.
\end{proof}
\section{Proof of \cref{pa:substitution}}
\begin{proof}
Let $\sigma\colon X^*\to\Powerset{Y^*}$.
Assuming that for some $a\in X$, we have $\sigma(x)=\{x\}$ for all $x\in
X\setminus\{a\}$ means no loss of generality. According to
\cref{pa:morphism}, we may also assume that $\sigma(a)\subseteq Z^*$ for some alphabet $Z$ with
$Y=X\uplus Z$. If $\sigma(a)=L_1\cup \cdots\cup L_n$, then first substituting
$a$ by $\{a_1,\ldots,a_n\}$ and then each $a_i$ by $L_i$ has the same effect as
applying $\sigma$. Hence, \cref{pa:lettersubstitution} allows us to assume
further that the PAIM given for $\sigma(a)$ is linear. Finally, since
$\sigma(L)=(L\setminus X^*aX^*)\cup \sigma(L\cap X^*aX^*)$,
\cref{pa:checkif,pa:checkifnot,pa:union} imply that we may also assume
$L\subseteq X^*aX^*$.
Let $(K,C,P,(P_c)_{c\in C},\varphi,\diamond)$ be a PAIM for $L$ and
$(\hat{K},\hat{c},\hat{P},\hat{\varphi},\diamond)$ be a linear PAIM for
$\sigma(a)$. The idea of the construction is to replace each occurrence of $a$
in $K$ by words from $\hat{K}$ after removing $\hat{c}$. However, in order to
guarantee a finite bound for the number of occurrences of $\diamond$ in the
resulting words, we also remove $\diamond$ from all but one of the inserted words from
$\hat{K}$. The new map $\varphi'$ is then set up so that if $f\in C\cup P$
represented $m$ occurrences of $a$, then $\varphi'(f)$ will represent $m$ times
$\hat{\varphi}(\hat{c})$.
Let $C'=C$, $P'_c=P_c\cup\hat{P}$, $P'=\bigcup_{c\in C'}P'_c$, and
$\varphi'\colon (C'\cup P')^\oplus\to Y^\oplus$ be the morphism with
\begin{align*}
\varphi'(f)&=\varphi(f)~-~\varphi(f)(a)\cdot a ~+~ \varphi(f)(a)\cdot\hat{\varphi}(\hat{c}) & & \text{for $f\in C\cup P$}, \\
\varphi'(f)&=\hat{\varphi}(f) & & \text{for $f\in \hat{P}$}.
\end{align*}
Let $a_{\diamond}$ be a new symbol and
\[ \bar{K}=\{ua_{\diamond}v \mid uav\in K,~|u|_a=0 \}. \]
In other words, $\bar{K}$ is obtained by replacing in each word from $K$
the first occurrence of $a$ with $a_{\diamond}$. The occurrence of
$a_\diamond$ will be the one that is replaced by $\pi_{Z\cup\hat{P}\cup\{\diamond\}}(\hat{K})$; the remaining
occurrences of $a$ are replaced by
$\pi_{Z\cup\hat{P}}(\hat{K})$. Let $\tau$ be the substitution
\begin{align*}
\tau\colon (C\cup X\cup P\cup \{\diamond,a_{\diamond}\})^* &\longrightarrow\Powerset{(C'\cup Z\cup P'\cup\{\diamond\})^*} & \\
x &\longmapsto\{x\},~~\text{for $x\in C\cup X\cup P\cup\{\diamond\}$, $x\ne a$}, \\
a_{\diamond} &\longmapsto\pi_{Z\cup \hat{P}\cup\{\diamond\}}(\hat{K}), \\
a&\longmapsto\pi_{Z\cup \hat{P}}(\hat{K}). &
\end{align*}
We claim that with $K'=\tau(\bar{K})$, the tuple $(K',C',P',(P'_c)_{c\in
C'},\varphi',\diamond)$ is a PAIM in $\mathsf{G}_i$ for $\sigma(L)$. First,
since $\mathsf{G}_i$ is closed under rational transductions and substitutions,
$K'$ is in $\mathsf{G}_i$.
\begin{itemize}
\item\emph{Projection property.}
Since $L=\pi_X(K)$ and $\sigma(a)=\pi_Z(\hat{K})$,
we have $\sigma(L)=\pi_Z(K')$.
\item\emph{Counting property.}
Let $w\in K'$. Then there is a $u=cu_0au_1\cdots au_n\in K$, $u_i\in (C\cup
X\cup P\cup \{\diamond\})^*$, $c\in C$, and $|u_i|_a=0$ for $i=0,\ldots,n$ and
$w=cu_0w_1u_1\cdots w_nu_n$ with $w_1\in\pi_{Z\cup
\hat{P}\cup\{\diamond\}}(\hat{K})$, $w_i\in \pi_{Z\cup \hat{P}}(\hat{K})$ for
$i=2,\ldots,n$. This means
\begin{equation}\Parikh{\pi_Z(w_i)}=\hat{\varphi}(\hat{c})+\hat{\varphi}(\pi_{\hat{P}}(w_i)). \label{pa:substitution:a}\end{equation}
Since $\varphi(\pi_{C\cup P}(u))(a)=\Parikh{\pi_X(u)}(a)=n$, we have
\begin{align}
\varphi'(\pi_{C'\cup P'}(u)) &= \varphi(\pi_{C\cup P}(u)) - n\cdot a + n\cdot\hat{\varphi}(\hat{c}) \label{pa:substitution:b}\\
&=\Parikh{\pi_X(u)}-n\cdot a+n\cdot\hat{\varphi}(\hat{c}). \nonumber
\end{align}
\Cref{pa:substitution:a,pa:substitution:b} together imply
\begin{align*}
\varphi'(\pi_{C'\cup P'}(w)) &=\varphi'(\pi_{C'\cup P'}(u))+\sum_{i=1}^n\varphi'(\pi_{P'}(w_i)) \\
&=\Parikh{\pi_X(u)}-n\cdot a + n\cdot \hat{\varphi}(\hat{c})+\sum_{i=1}^n \left(\Parikh{\pi_Z(w_i)}-\hat{\varphi}(\hat{c})\right) \\
&=\Parikh{\pi_X(u)}-n\cdot a + \sum_{i=1}^n \Parikh{\pi_Z(w_i)}=\Parikh{\pi_Z(w)}.
\end{align*}
\item\emph{Commutative projection property.}
Let $c\in C'$ and $\mu\in P'^\oplus_c$ and write $\mu=\nu+\hat{\nu}$ with
$\nu\in P_c^\oplus$ and $\hat{\nu}\in \hat{P}^\oplus$. Then there is a $cw\in
K$ with $\Parikh{\pi_{C\cup P}(cw)}=c+\nu$. Since $L\subseteq X^*aX^*$, we can
write $w=u_0au_1\cdots au_n$ with $|u_i|_a=0$ for $0\le i\le n$ and $n\ge
1$. Moreover, there are $\hat{c}\hat{w}\in \hat{K}$ and
$\hat{c}\hat{w}'\in\hat{K}$ with
$\Parikh{\pi_{\{\hat{c}\}\cup\hat{P}}(\hat{c}\hat{w})}=\hat{c}+\hat{\nu}$ and
$\Parikh{\pi_{\{\hat{c}\}\cup\hat{P}}(\hat{c}\hat{w}')}=\hat{c}$. By definition of $K'$, the word
\[ w'=cu_0\hat{w}u_1\hat{w}'u_2\cdots \hat{w}'u_n \]
is in $K'$ and satisfies $\Parikh{\pi_{C'\cup P'}(w')}=c+\nu+\hat{\nu}=c+\mu$.
This proves
\[ \bigcup_{c\in C'} c+P'^\oplus_c\subseteq\Parikh{\pi_{C'\cup P'}(K')}. \]
The other inclusion is clear by definition. We have thus established that
the tuple $(\pi_{C'\cup Z\cup P'}(K'), C', P', (P'_c)_{c\in
C'},\varphi')$ is a Parikh annotation in $\mathsf{G}_i$ for $\sigma(L)$.
\item\emph{Boundedness.}
Note that if $|w|_{\diamond}\le k$ for all $w\in K$ and $|\hat{w}|_{\diamond}\le
\ell$ for all $\hat{w}\in \hat{K}$, then $|w'|_{\diamond}\le k+\ell$ for all
$w'\in K'$ by construction of $K'$, implying boundedness.
\item\emph{Insertion property.} The insertion property follows from the
insertion property of $K$ and $\hat{K}$.
\end{itemize}
\end{proof}
\section{Proof of \cref{pa:onenonterminal}}
\begin{clemma}[Sentential forms]\label{pa:onenonterminal:sf}
Let $G=(N,T,P,S)$ be a $\mathsf{G}_i$-grammar with $N=\{S\}$, $P=\{S\to L\}$,
and $L\subseteq (N\cup T)^*S(N\cup T)^*$. Furthermore, suppose a PAIM in
$\mathsf{G}_i$ is given for $L$. Then one can construct a PAIM in $\mathsf{G}_i$
for $\SententialForms{G}$.
\end{clemma}
\begin{proof}
Observe that applying the production $S\to L$ with $w\in L$ contributes
$\Parikh{w}-S$ to the Parikh image of the sentential form. Therefore, we have
$\Parikh{\SententialForms{G}}=S+(\Parikh{L}-S)^\oplus$ and we can construct a
PAIM for $\SententialForms{G}$ using an idea to obtain a semilinear
representation of $U^\oplus$ for semilinear sets $U$. If $U=\bigcup_{j=1}^n
\mu_j+F_j^\oplus$ for $\mu_j\in X^\oplus$ and finite $F_j\subseteq X^\oplus$,
then
\[ U^\oplus = \bigcup_{D\subseteq\{1,\ldots,n\}} \sum_{j\in D} \mu_j + \left(\bigcup_{j\in D} \{\mu_j\}\cup F_j\right)^\oplus.\]
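The subset formula above translates directly into code; the following Python sketch (illustrative only, names ours) computes the linear sets of $U^\oplus$ from those of $U$, with vectors represented as `Counter`s:

```python
from collections import Counter
from itertools import combinations

def kleene_oplus(lin_sets):
    """Given U as a list of (mu_j, F_j) with Counter vectors, return the
    linear sets of U^oplus: one per subset D of indices, with constant
    sum_{j in D} mu_j and periods union_{j in D} ({mu_j} together with F_j)."""
    n = len(lin_sets)
    result = []
    for r in range(n + 1):
        for D in combinations(range(n), r):
            const = Counter()
            periods = []
            for j in D:
                mu, F = lin_sets[j]
                const.update(mu)
                periods.append(mu)
                periods.extend(F)
            result.append((const, periods))
    return result
```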
The symbols representing constant and period vectors for $\SententialForms{G}$
are therefore set up as follows. Let $(K,C,P,(P_c)_{c\in C},\varphi,\diamond)$
be a PAIM for $L$ in $\mathsf{G}_i$, let $S'$ be a new symbol, and let $S_D$ and $d_D$ be new symbols
for each $D\subseteq C$. Moreover, let $C'=\{d_D \mid D\subseteq C\}$ and
$P'=C\cup P$ with $P'_{d_D}=D\cup\bigcup_{c\in D}P_c$.
We will use the shorthand $X=N\cup T$. Observe that since $L\subseteq
X^*SX^*$, we have $\varphi(c)(S)\ge 1$ for each $c\in C$.
We can therefore define the
morphism $\varphi'\colon (C'\cup P')^\oplus\to X^\oplus$ as
\begin{align}
\varphi'(p) &= \varphi(p) & &\text{for $p\in P$,} \nonumber\\
\varphi'(c) &= \varphi(c)-S & &\text{for $c\in C$,}\label{pa:onenon:constantrep} \\
\varphi'(d_D) &= S+\sum_{c\in D}\varphi'(c). & &\label{pa:onenon:newconstantrep}
\end{align}
The essential idea in our construction is to use modified versions of $K$
as right-hand-sides of a grammar. These modified versions are obtained as follows.
For each $D\subseteq C$, we define the rational transduction $\delta_D$ which
maps each word $w_0Sw_1\cdots Sw_n\in (C\cup X\cup P\cup\{\diamond\})^*$,
$|w_i|_S=0$ for $0\le i\le n$, to all words $w_0 S_{D_1}w_1\cdots S_{D_n}w_n$
for which
\begin{align*} &D_1\cup\cdots\cup D_n=D, & &D_i\cap D_j=\emptyset~\text{for $i\ne j$.} \end{align*}
Thus, $\delta_D$ can be thought of as distributing the elements of $D$ among
the occurrences of $S$ in the input word. The modified versions of $K$ are
then given by
\begin{align*}
&K_D = \delta_D(\pi_{C\cup X\cup P}(K)), & &K_D^{c} = \delta_{D\setminus \{c\}}(c^{-1}K).
\end{align*}
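The distribution performed by $\delta_D$ can be sketched as follows (a finite toy version in Python; the pair $(S,D_i)$ stands for the annotated nonterminal $S_{D_i}$, and all names are ours):

```python
from itertools import product

def delta(w, D, S="S"):
    """All ways of replacing the occurrences of S in w by S_{D_1},...,S_{D_n}
    with the D_i pairwise disjoint and their union equal to D. Each element
    of D is assigned to exactly one occurrence of S."""
    positions = [i for i, sym in enumerate(w) if sym == S]
    elems = sorted(D)
    if not positions:
        return {tuple(w)} if not elems else set()
    results = set()
    for assignment in product(range(len(positions)), repeat=len(elems)):
        new = list(w)
        for i, pos in enumerate(positions):
            part = frozenset(d for d, k in zip(elems, assignment) if k == i)
            new[pos] = (S, part)
        results.add(tuple(new))
    return results
```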
In the new annotation, the symbol $d_D$ represents $S+\sum_{c\in
D}(\varphi(c)-S)$. Since each symbol $c\in C$ still represents $\varphi(c)-S$,
we cannot insert a whole word from $K$ for each inserted word from $L$: This would
insert a $c\in C$ in each step and we would count $\sum_{c\in D}(\varphi(c)-S)$ twice. Hence, in order to compensate for the new
constant symbol $d_D$, when generating a word starting with $d_D$,
we have to prevent exactly one occurrence of $c$ for each $c\in D$ from appearing.
To this end, we use the nonterminal $S_D$, which only allows derivation
subtrees in which, for each $c\in D$, precisely one occurrence has been left out, i.e.\
a production $S_E\to K_E^c$ (for some $E\subseteq C$ with $c\in E$) has been applied. In the productions $S_D\to K_D$,
the symbol from $C$ on the right-hand side is allowed to appear.
In order to have only a bounded number of occurrences of $\diamond$, one of our
modified versions of $K$ (namely $K_D^{c}$) introduces $\diamond$ and the other
one ($K_D$) does not. When generating a word starting with $d_D$, our grammar
makes sure that for each $c\in D$, a production of the form $S_E\to K_E^{c}$ is
used precisely once (and otherwise $S_E\to K_E$); this is why the sets
$K_E^{c}$, and not the sets $K_E$, are set up to contain $\diamond$. This
guarantees that during the insertion process simulating $S\to L$, we insert at
most $|C|\cdot \ell$ occurrences of $\diamond$, where $\ell$ is an upper bound
for $|w|_{\diamond}$ for $w\in K$.
Let $N'=\{S'\}\cup \{S_D\mid D\subseteq C\}$ and let $\hat{P}$ consist of the
following productions:
\begin{align}
S' &\to \{d_D\diamond S_D\diamond\mid D\subseteq C\} & & \label{pa:one:production:first} \\
S_\emptyset &\to \{S\} & & \label{pa:one:production:last} \\
S_D &\to K_D & & \text{for each $D\subseteq C$} \label{pa:one:production:withoutmarker} \\
S_D&\to K_D^{c} & & \text{for each $D\subseteq C$ and $c\in D$.} \label{pa:one:production:withmarker}
\end{align}
Finally, let $M$ be the regular language
\[ M=\bigcup_{D\subseteq C} \{ w\in (C'\cup X\cup P'\cup \{\diamond\})^* \mid \pi_{C'\cup P'}(w)\in d_D P'^*_{d_D} \}. \]
By intersecting with $M$, we make sure that the commutative projection property is satisfied.
We shall prove that with the grammar $G'=(N',C'\cup X\cup P'\cup
\{\diamond\},\hat{P},S')$ and $K'=\Lang{G'}\cap M$, the tuple
$(K',C',P',(P'_c)_{c\in C'},\varphi',\diamond)$ is a PAIM for
$\SententialForms{G}$ in $\mathsf{G}_i$. By definition, $\Lang{G'}$ is contained in
$\Alg{\mathsf{G}_i}=\mathsf{G}_i$, and hence so is $K'$, since $\mathsf{G}_i$ is a full semi-AFL.
\newcommand{\constantset}[1]{\rho(#1)}
Let $h\colon (N'\cup C'\cup X\cup P'\cup\{\diamond\})^*\to (C'\cup X\cup P'\cup \{\diamond\})^*$ be the morphism that fixes $C'\cup X\cup P'\cup \{\diamond\}$ and
satisfies $h(S')=h(S_D)=S$ for $D\subseteq C$. Moreover, regard $\Powerset{C}$
as a monoid with $\cup$ as its operation. Let $\rho\colon (N'\cup
X)^*\to\Powerset{C}$ be the morphism with $\constantset{S_D}=D$ and
$\constantset{S'}=\constantset{x}=\emptyset$ for $x\in X$. Furthermore, let
$|w|_\diamond \le \ell$ for all $w\in K$. We claim that for each $n\ge 0$,
$d_D\diamond S_D\diamond\grammarstepsn[G']{n} w$ implies
\begin{enumerate}
\item\label{pa:one:disjoint} if $w=u_0S_{D_1}u_1\cdots S_{D_n}u_n$ with $u_i\in X^*$ for $0\le i\le n$, then $D_i\cap D_j=\emptyset$ for $i\ne j$,
\item\label{pa:one:projection} $h(\pi_{N'\cup X}(w))\in \SententialForms{G}$,
\item\label{pa:one:counting} $\varphi'(\pi_{C'\cup P'}(w))=\Parikh{h(\pi_{N'\cup X}(w))}+\sum_{c\in\constantset{w}} \varphi'(c)$,
\item\label{pa:one:boundedness} $|w|_\diamond \le 2+|D\setminus\constantset{w}|\cdot \ell$, and
\item\label{pa:one:insertion} for each $\mu\in \left(D\cup \bigcup_{c\in
D\setminus\constantset{w}} P_c\right)^\oplus$, there is a $w'\in
\SententialForms{G}$ such that $h(\pi_{N'\cup X\cup \{\diamond\}}(w))
\preceq_{\diamond} w'$ and $\Parikh{w'}=\Parikh{h(\pi_{N'\cup X}(w))}+\varphi'(\mu)$.
\end{enumerate}
We establish this claim using induction on $n$. Observe that all these
conditions are satisfied in the case $n=0$, i.e. $w=d_D\diamond S_D \diamond$.
In the induction step,
\cref{pa:one:disjoint,pa:one:projection,pa:one:counting,pa:one:boundedness}
follow directly by distinguishing among the productions in $G'$. Therefore, we
only prove \cref{pa:one:insertion} in the induction step.
Suppose $n>0$ and $d_D\diamond
S_D\diamond\grammarstepsn[G']{n-1}\bar{w}\grammarstep[G'] w$. If the production
applied in $\bar{w}\grammarstep[G'] w$ is $S_\emptyset\to\{S\}$, then
$\constantset{w}=\constantset{\bar{w}}$ and $h(\pi_{N'\cup X\cup
\{\diamond\}}(w))=h(\pi_{N'\cup X\cup\{\diamond\}}(\bar{w}))$, so that
\cref{pa:one:insertion} follows immediately from the same condition for
$\bar{w}$. If the applied production is of the form
\labelcref{pa:one:production:withoutmarker} or \labelcref{pa:one:production:withmarker}, then we have
$\constantset{w}\subseteq\constantset{\bar{w}}$ and hence $\constantset{\bar{w}}=\constantset{w}\cup E$ for some $E\subseteq D$, $|E|\le 1$.
Then
\[ \bigcup_{c\in D\setminus\constantset{w}} P_c=\bigcup_{c\in D\setminus\constantset{\bar{w}}} P_c \cup \bigcup_{c\in E} P_c. \]
We can therefore decompose $\mu\in \left(D\cup \bigcup_{c\in
D\setminus\constantset{w}} P_c\right)^\oplus$ into $\mu=\bar{\mu}+\nu$ with
$\bar{\mu}\in \left(D\cup \bigcup_{c\in D\setminus\constantset{\bar{w}}}
P_c\right)^\oplus$ and $\nu\in \left(\bigcup_{c\in E} P_c\right)^\oplus$. By
induction, we find a $\bar{w}'\in\SententialForms{G}$ such that
$h(\pi_{N'\cup X\cup\{\diamond\}}(\bar{w}))\preceq_{\diamond}\bar{w}'$
and $\Parikh{\bar{w}'}=\Parikh{h(\pi_{N'\cup X}(\bar{w}))}+\varphi'(\bar{\mu})$.
Let $\bar{w}=xS_Fy$, for some $F\subseteq C$, be the decomposition facilitating the step $\bar{w}\grammarstep[G'] w$ and let $w=xzy$.
\begin{itemize}
\item Suppose the production applied in $\bar{w}\grammarstep[G'] w$ is of the form
\labelcref{pa:one:production:withoutmarker}. Then
$\constantset{w}=\constantset{\bar{w}}$ and hence $E=\emptyset$ and $\nu=0$.
Furthermore, $z\in K_F$.
We define $z'=h(\pi_{N'\cup X}(z))$. Note that then $z'\in \pi_{X}(K)=L$ and, since $\nu=0$, $\Parikh{z'}=\Parikh{h(\pi_{N'\cup X}(z))}+\varphi'(\nu)$.
\item Suppose the production applied in $\bar{w}\grammarstep[G'] w$ is of the form
\labelcref{pa:one:production:withmarker}. Then $z\in K_F^{c}$ for some $c\in
F$ and thus $h(z)\in c^{-1}K$. This implies
$\constantset{\bar{w}}=\constantset{w}\cup \{c\}$, $E=\{c\}$, and hence $\nu\in
P_c^\oplus$. The insertion property of $K$ provides a $z'\in L$ such that
$\pi_{X\cup \{\diamond\}}(h(z)) \preceq_{\diamond} z'$ and
$\Parikh{z'}=\Parikh{\pi_{X}(h(z))}+\varphi(\nu)=\Parikh{\pi_X(h(z))}+\varphi'(\nu)$.
\end{itemize}
In any case, we have
\begin{align*}&z'\in L, & &h(\pi_{N'\cup X\cup\{\diamond\}}(z))\preceq_{\diamond} z', & &\Parikh{z'}=\Parikh{h(\pi_{N'\cup X}(z))}+\varphi'(\nu). \end{align*}
Recall that $\bar{w}=xS_Fy$ and $w=xzy$. Since $h(\pi_{N'\cup X\cup\{\diamond\}}(\bar{w}))\preceq_{\diamond} \bar{w}'$, we can find $x',y'$ with
\begin{align*}&\bar{w}'=x'Sy', & & h(\pi_{N'\cup X\cup\{\diamond\}}(x))\preceq_{\diamond} x', & & h(\pi_{N'\cup X\cup\{\diamond\}}(y))\preceq_{\diamond} y'.\end{align*}
Choose $w' = x'z'y'$. Then $\SententialForms{G}\ni\bar{w}'\grammarstep[G] w'$ and thus $w'\in\SententialForms{G}$. Moreover,
\begin{align*}
h(\pi_{N'\cup X\cup\{\diamond\}}(w)) &= h(\pi_{N'\cup X\cup \{\diamond\}}(x))h(\pi_{N'\cup X\cup \{\diamond\}}(z))h(\pi_{N'\cup X\cup \{\diamond\}}(y)) \\
&\preceq_{\diamond} x'z'y'=w'.
\end{align*}
Finally, $w'$ has the desired Parikh image:
\begin{align*}
\Parikh{w'}&=\Parikh{\bar{w}'}-S+\Parikh{z'} \\
&=\Parikh{h(\pi_{N'\cup X}(\bar{w}))}+\varphi'(\bar{\mu})-S+\Parikh{z'} \\
&=\Parikh{h(\pi_{N'\cup X}(\bar{w}))}+\varphi'(\bar{\mu})-S+\Parikh{h(\pi_{N'\cup X}(z))}+\varphi'(\nu) \\
&=\Parikh{h(\pi_{N'\cup X}(w))}+\varphi'(\bar{\mu})+\varphi'(\nu) \\
&=\Parikh{h(\pi_{N'\cup X}(w))}+\varphi'(\mu).
\end{align*}
This completes the induction step for \cref{pa:one:insertion}.
We now use our claim to prove that we have indeed constructed a PAIM.
\begin{itemize}
\item \emph{Projection property}. Our claim already entails $\pi_X(K')\subseteq\SententialForms{G}$:
For $w\in (C'\cup X\cup P'\cup \{\diamond\})^*$ with $d_D\diamond S_D\diamond\grammarsteps[G'] w$,
we have $\pi_X(w)=h(\pi_{N'\cup X}(w))\in\SententialForms{G}$ by \cref{pa:one:projection}.
In order to prove $\SententialForms{G}\subseteq\pi_X(K')$, suppose
$w\in\SententialForms{G}$ and let $t$ be a partial derivation tree for $G$ with
root label $S$ and $\yield{t}=w$.
Since $\children{x}\in L$ for each inner node $x$ of $t$, we can find a
$c_xw_x\in K$ with $\pi_X(c_xw_x)=\children{x}$. Then in particular
$\children{x}\preceq c_xw_x$, meaning we can obtain a tree $t'$ from $t$ as
follows: For each inner node $x$ of $t$, add new leaves directly below $x$ so
as to have $c_xw_x$ as the new sequence of child labels of $x$. Note that the
set of inner nodes of $t'$ is identical to the one of $t$. Moreover, we have
$\pi_X(\yield{t'})=w$.
Let $D=\{c_x \mid \text{$x$ is an inner node in $t'$}\}$. We pick for each
$c\in D$ exactly one inner node $x$ in $t'$ such that $c_x=c$; we denote the
resulting set of nodes by $R$. We now obtain $t''$ from $t'$ as follows: For
each $x\in R$, we remove its $c_x$-labeled child; for each $x\notin R$, we
remove all $\diamond$-labeled children. Note that again, the inner nodes of $t''$
are the same as in $t$ and $t'$. Moreover, we still have $\pi_X(\yield{t''})=w$.
For each inner node $x$ in $t''$, let $D_x=\{c_y \mid \text{$y\in R$ is below
$x$ in $t''$}\}$. Note that in $t,t',t''$, every inner node has the label $S$.
We obtain the tree $t'''$ from $t''$ as follows. For each inner node $x$ in
$t''$, we replace its label $S$ by $S_{D_x}$. Then we have
$\pi_X(h(\yield{t'''}))=w$. Clearly, the root node of $t'''$ is labeled $S_D$.
Furthermore, the definition of $K_E$ and $K_E^c$ yields that $t'''$ is a
partial derivation tree for $G'$. Hence
\[ S'~~\grammarstep[G']~~d_D\diamond S_D\diamond~~\grammarsteps[G']~~d_D\diamond\yield{t'''}\diamond. \]
Since in $t'''$, every leaf has a label in $T\cup\{S_\emptyset\}$, we have
$S'\grammarsteps[G'] d_D \diamond h(\yield{t'''})\diamond$. This means
$d_D\diamond h(\yield{t'''})\diamond\in \Lang{G'}$. Furthermore, we clearly have
$d_D\diamond h(\yield{t'''})\diamond\in M$ and since $\pi_X(d_D\diamond
h(\yield{t'''})\diamond)=w$, this implies $w\in \pi_X(K')$.
\item \emph{Counting property}. Apply \cref{pa:one:counting} in our claim to a
word $w\in (C'\cup X\cup P'\cup \{\diamond\})^*$ with $d_D\diamond S_D\diamond\grammarsteps[G'] w$. Since
$\constantset{w}=\emptyset$ and $h(\pi_{N'\cup X}(w))=\pi_X(w)$, this yields
$\varphi'(\pi_{C'\cup P'}(w))=\Parikh{\pi_X(w)}$.
\item \emph{Commutative projection property}.
Since $K'\subseteq M$, we clearly have $\Parikh{\pi_{C'\cup P'}(K')}\subseteq\bigcup_{c\in C'}
c+P'^\oplus_c$.
For the other inclusion, let $D\subseteq C$ with $D=\{c_1,\ldots,c_n\}$.
Suppose $\mu\in \bigcup_{c\in C'} c+P'^\oplus_c$,
$\mu=d_D+\nu+\sum_{i=1}^n\xi_i$ with $\nu\in D^\oplus$ and $\xi_i\in
P_{c_i}^\oplus$ for $1\le i\le n$.
The commutative projection property of $K$ allows us to choose for $1\le i\le n$ words $u_i,v_i\in
K$ such that
\begin{align*} && \Parikh{\pi_{C\cup P}(u_i)}=c_i, && \Parikh{\pi_{C\cup P}(v_i)}=c_i+\xi_i.\end{align*}
The words $v'_0,\ldots,v'_n$ are constructed as follows. Let $v'_0=d_D \diamond
S\diamond$ and let $v'_{i+1}$ be obtained from $v'_i$ by replacing the first
occurrence of $S$ by $c_{i+1}^{-1}v_{i+1}$. Furthermore, let $v''_i$ be
obtained from $v'_i$ by replacing the first occurrence of $S$ by
$S_{\{c_{i+1},\ldots,c_n\}}$ and all other occurrences by $S_\emptyset$. Then
clearly $d_D\diamond S_D\diamond=v''_0\grammarstep[G']\cdots \grammarstep[G']
v''_n$ and $v''_n\in (T\cup \{S_\emptyset\})^*$. Moreover, we have
$\Parikh{\pi_{C'\cup P'}(v''_n)}=d_D+\sum_{i=1}^n \xi_i$.
Let $g\colon X^*\to(T\cup \{S_\emptyset\})^*$ be the morphism that fixes the
elements of $T$ and satisfies $g(S)=S_\emptyset$. For a word $w\in
(N'\cup X)^*$ that contains $S_\emptyset$ and $1\le i\le n$, let $U_i(w)$ be
the word obtained from $w$ by replacing the first occurrence of $S_\emptyset$
by $g(u_i)$. Then $w\grammarstep[G']U_i(w)$ and $\Parikh{\pi_{C'\cup
P'}(U_i(w))}=\Parikh{\pi_{C'\cup P'}(w)}+c_i$. Thus, with
\[ u=U_n^{\nu(c_n)}\cdots U_1^{\nu(c_1)}(v''_n), \]
we have $v''_n\grammarsteps[G'] u\grammarsteps[G'] h(u)$ and hence $h(u)\in
\Lang{G'}$. By construction, $h(u)$ is in $M$ and thus $h(u)\in K'$. Moreover,
we have
\begin{align*}
\Parikh{\pi_{C'\cup P'}(h(u))}&=\Parikh{\pi_{C'\cup P'}(u)}=\Parikh{\pi_{C'\cup P'}(v''_n)}+\nu \\
&=d_D+\sum_{i=1}^n \xi_i+\nu=\mu.
\end{align*}
This proves $\bigcup_{c\in C'}c+P'^\oplus_c\subseteq \Parikh{\pi_{C'\cup P'}(K')}$.
\item \emph{Boundedness}. Let $w\in (C'\cup X\cup P'\cup \{\diamond\})^*$ and $d_D\diamond
S_D\diamond\grammarsteps[G'] w$. By \cref{pa:one:boundedness} of our claim, we
have $|w|_{\diamond}\le 2+|C|\cdot \ell$.
\item \emph{Insertion property}. Let $w\in (C'\cup X\cup P'\cup\{\diamond\})^*$ and $d_D\diamond
S_D\diamond\grammarsteps[G'] w$. Then $\constantset{w}=\emptyset$ and
$h(\pi_{N'\cup X\cup\{\diamond\}}(w))=\pi_{X\cup\{\diamond\}}(w)$. Hence
\cref{pa:one:insertion} states that for each $\mu\in P'^\oplus_{d_D}$, there is
a $w'\in\SententialForms{G}$ with $\pi_{X\cup\{\diamond\}}(w)\preceq_{\diamond}
w'$ and $\Parikh{w'}=\Parikh{\pi_{X}(w)}+\varphi'(\mu)$.
\end{itemize}
\end{proof}
\begin{proof}[Proof of \cref{pa:onenonterminal}]
Let $G=(N,T,P,S)$. By \cref{pa:union}, we may assume that there is only one
production $S\to L$ in $P$. By \cref{pa:checkif,pa:checkifnot}, one can
construct PAIM for $L_0=L\setminus (N\cup T)^*S(N\cup T)^*$ and for $L_1=L\cap
(N\cup T)^*S(N\cup T)^*$.
If $G'$ is the grammar $(N,T,P',S)$, where $P'=\{S\to L_1\}$ and $\sigma\colon
(N\cup T)^*\to\Powerset{(N\cup T)^*}$ is the substitution with $\sigma(S)=L_0$
and $\sigma(t)=\{t\}$ for $t\in T$, then
$\Lang{G}=\sigma(\SententialForms{G'})$. Hence, one can construct a PAIM for
$\Lang{G}$ using \cref{pa:onenonterminal:sf,pa:substitution}.
\end{proof}
\section{Proof of \cref{pa:alg}}
\begin{proof}
Our algorithm works recursively with respect to the number of non-terminals.
In order to make the recursion work, we need the algorithm to work with
right-hand sides in $\mathsf{G}_i$. We show that, given $i\in\mathbb{N}$ and a
$\mathsf{G}_i$-grammar $G$, along with a PAIM in $\mathsf{G}_i$ for each
right-hand side in $G$, we can construct a PAIM for $L(G)$ in $\mathsf{G}_i$.
for a language $L$ in $\mathsf{F}_i$ can easily be turned into a PAIM for $L$ in
$\mathsf{G}_i$. Therefore, this statement implies the \lcnamecref{pa:alg}.
Let $G=(N,T,P,S)$ be a $\mathsf{G}_i$-grammar and $n=|N|$.
For each $A\in N\setminus\{S\}$, let $G_A=(N\setminus\{S\},
T\cup\{S\}, P_A, A)$, where $P_A = \{ B\to L\in P \mid B\ne S\}$. Since $G_A$
has $n-1$ nonterminals, we can construct a PAIM for $\Lang{G_A}$ in
$\mathsf{G}_i$ for each $A\in N\setminus\{S\}$.
Consider the substitution $\sigma\colon (N\cup T)^*\to\Powerset{(N\cup T)^*}$
with $\sigma(A)=\Lang{G_A}$ for $A\in N\setminus\{S\}$ and $\sigma(x)=\{x\}$
for $x\in T\cup\{S\}$. Let $G'=(\{S\},T,P',S)$ be the $\mathsf{G}_i$-grammar
with $P'=\{S\to\sigma(L) \mid S\to L\in P\}$. By \cref{pa:substitution}, we
can construct a PAIM in $\mathsf{G}_i$ for each right-hand side of $G'$.
Therefore, \cref{pa:onenonterminal} provides a PAIM in $\mathsf{G}_i$ for
$\Lang{G'}$. We claim that $\Lang{G'}=\Lang{G}$.
The inclusion $\Lang{G'}\subseteq\Lang{G}$ is easy to see: Each
$w\in\Lang{G_A}$ satisfies $A\grammarsteps[G] w$. Hence, for $S\to L\in P$ and
$w\in\sigma(L)$, we have $S\grammarsteps[G] w$. This means
$\SententialForms{G'}\subseteq\SententialForms{G}$ and thus
$\Lang{G'}\subseteq\Lang{G}$.
Consider a derivation tree $t$ for $G$. We show by induction on the height of
$t$ that $\yield{t}\in\Lang{G'}$. We regard $t$ as a partial order. A
\emph{cut} in $t$ is a maximal antichain. We call a cut $C$ in $t$
\emph{special} if it does not contain the root, every node in $C$ has a label
in $T\cup \{S\}$, and if $x\in C$ and $y<x$, then $y$ is the root or has a
label in $N\setminus\{S\}$.
There is a special cut in $t$: Start with the cut $C$ of all leaves. If there is a
node $x\in C$ and a non-root $y\le x$ with label $S$, then remove all nodes
$\ge y$ in $C$ and add $y$ instead. Repeat this process until it terminates.
Then $C$ is a special cut.
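The fixed point of this procedure can equivalently be computed top-down. The following Python sketch is purely illustrative (trees are encoded as nested `(label, children)` pairs, with the nonterminal $S$ as the string `"S"`; these encoding choices are ours):

```python
def special_cut(tree, is_root=True):
    # Compute the special cut of a derivation tree: descending from the
    # root, take every topmost non-root node that is labeled S or is a
    # leaf; recurse below nodes with other (nonterminal) labels.
    label, children = tree
    if not is_root and (label == "S" or not children):
        return [tree]
    cut = []
    for child in children:
        cut.extend(special_cut(child, is_root=False))
    return cut
```

On a tree whose root expands to an inner $A$-node and a leaf, the cut consists of the leaves and topmost non-root $S$-nodes, spelling the word $u$ used in the proof.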
Let $u$ be the word spelled by the cut $C$. Since all non-root nodes $y<x$ for
some $x\in C$ have a label in $N\setminus\{S\}$, $u$ can be derived using a
production $S\to L$ once and then only productions $A\to M$ with $A\ne S$.
This means, however, that $u\in\sigma(L)$ and hence $S\grammarsteps[G'] u$.
The subtrees below the nodes in $C$ all have height strictly smaller than that of $t$.
Moreover, since all inner nodes in $C$ are labeled $S$, these subtrees are
derivation trees for $G$. Therefore, by induction we have
$u\grammarsteps[G']\yield{t}$ and thus $S\grammarsteps[G']\yield{t}$.
\end{proof}
\section{Proof of \cref{pa:sli}}
\begin{proof}
According to \cref{pa:morphism}, it suffices to show that we can construct
a PAIM for $L\cap\ParikhInv{S}$. Moreover, if $L=L_1\cup\cdots\cup L_n$, then
\[L\cap\ParikhInv{S}=(L_1\cap\ParikhInv{S})\cup\cdots\cup (L_n\cap\ParikhInv{S}). \]
Thus, by \cref{pa:decomposelinear,pa:union}, we may assume that the PAIM for
$L$ is linear. Let $(K,c,P,\varphi,\diamond)$ be a linear PAIM for $L$ in
$\mathsf{G}_i$.
The set $T = \{ \mu\in P^\oplus \mid \varphi(c+\mu) \in S\}$ is semilinear as
well, hence $T=\bigcup_{i=1}^n T_i$ for linear $T_i\subseteq P^\oplus$.
Write $T_i=\mu_i+F_i^\oplus$ with $\mu_i\in P^\oplus$ and a finite set
$F_i\subseteq P^\oplus$. Let $P'_i$ be an alphabet of new symbols in bijection with the
set $F_i$ and let $\psi_i\colon P'^\oplus_i\to P^\oplus$ be the morphism
extending this bijection. Moreover, let $U_i$ be the linear set
\[ U_i=\mu_i+\{p+\psi_i(p)\mid p\in P'_i\}^\oplus+(X\cup \{\diamond\})^\oplus \]
and let $R_i=p_1^*\cdots p_m^*$, where $P'_i=\{p_1,\ldots,p_m\}$. We claim
that with new symbols $c'_i$ for $1\le i\le n$, $C'=\{c'_i\mid 1\le i\le
n\}$, $P'=\bigcup_{i=1}^n P'_i$ and
\begin{align*}
\varphi'(c'_i)&=\varphi(c)+\varphi(\mu_i), && \\
\varphi'(p)&=\varphi(\psi_i(p)) & & \text{for $p\in P'_i$}, \\
K'&=\bigcup_{i=1}^n c'_i\pi_{C'\cup X\cup P'\cup\{\diamond\}}\left(c^{-1}KR_i\cap \ParikhInv{U_i}\right),
\end{align*}
the tuple $(K',C',P',(P'_i)_{c'_i\in C'},\varphi',\diamond)$ is a PAIM for
$L\cap\ParikhInv{S}$.
\begin{itemize}
\item\emph{Projection property}. For $w\in L\cap\ParikhInv{S}$, we find a $cv\in
K$ with $\pi_X(cv)=w$. Then $\varphi(\pi_{\{c\}\cup P}(cv))=\Parikh{w}\in S$
and hence $\Parikh{\pi_{P}(v)}\in T$. Let $\Parikh{\pi_P(v)}=\mu_i+\nu$ with
$\nu\in F_i^\oplus$, let $P'_i=\{p_1,\ldots,p_m\}$, and choose $\kappa\in P'^\oplus_i$ with $\psi_i(\kappa)=\nu$.
Then the word
\[ v'=vp_1^{\kappa(p_1)}\cdots p_m^{\kappa(p_m)} \]
is in $c^{-1}KR_i\cap\ParikhInv{U_i}$ and satisfies $\pi_X(v')=\pi_X(v)=w$.
Moreover, $v''=c'_i\pi_{C'\cup X\cup P'\cup\{\diamond\}}(v')\in K'$ and hence $w=\pi_X(v'')\in
\pi_X(K')$. This proves $L\cap\ParikhInv{S}\subseteq\pi_X(K')$.
We clearly have $\pi_X(K')\subseteq\pi_X(K)=L$. Thus, it suffices to show
$\Parikh{\pi_X(K')}\subseteq S$. Let $w=c'_iv\in K'$. Then $v=\pi_{C'\cup
X\cup P'\cup \{\diamond\}}(v')$ for some $v'\in c^{-1}KR_i\cap\ParikhInv{U_i}$.
Let $P'_i=\{p_1,\ldots,p_m\}$ and write $v'=v''p_1^{\kappa(p_1)}\cdots
p_m^{\kappa(p_m)}$ for $\kappa\in P'^\oplus_i$. This means $cv''\in K$ and
thus $\Parikh{\pi_X(cv'')}=\varphi(\pi_{\{c\}\cup P}(cv''))$ by the counting
property of $K$. Since $v'\in\ParikhInv{U_i}$, we have
$\Parikh{\pi_P(cv'')}=\Parikh{\pi_{P}(v')}=\mu_i+\psi_i(\kappa)\in T_i$. Thus
\begin{align*}
\Parikh{\pi_X(w)}&=\Parikh{\pi_X(c'_iv)}=\Parikh{\pi_X(v')}=\Parikh{\pi_X(cv'')} \\
&=\varphi(\pi_{\{c\}\cup P}(cv''))\in \varphi(c+T_i)\subseteq S.
\end{align*}
\item\emph{Counting property}.
Let $w=c'_iv\in K'$ with $v=\pi_{C'\cup X\cup P'\cup \{\diamond\}}(v')$ for some
$v'\in c^{-1}KR_i\cap\ParikhInv{U_i}$. By definition of $U_i$, this implies
\[ \pi_{P}(v')=\mu_i + \psi_i(\pi_{P'}(v')) \]
and hence
\[ \varphi(\pi_P(v'))=\varphi(\mu_i)+\varphi(\psi_i(\pi_{P'}(v')))=\varphi(\mu_i)+\varphi'(\pi_{P'}(v')). \]
Moreover, if we write $v'=v''r$ with $cv''\in K$ and $r\in R_i$, then
\begin{align*}
\varphi'(\pi_{C'\cup P'}(w))&=\varphi'(c'_i)+\varphi'(\pi_{P'}(v')) \\
&=\varphi(c)+\varphi(\mu_i)+\varphi'(\pi_{P'}(v')) \\
&=\varphi(c)+\varphi(\pi_P(v'))=\varphi(\pi_{C\cup P}(cv'')) \\
&=\Parikh{\pi_X(cv'')}=\Parikh{\pi_X(w)}.
\end{align*}
This proves the counting property.
\item\emph{Commutative projection property}.
Let $\mu\in c'_i+P'^\oplus_i$, $\mu=c'_i+\kappa$ with $\kappa\in P'^\oplus_i$.
Let $P'_i=\{p_1,\ldots,p_m\}$. Then $\nu=\psi_i(\kappa)\in P^\oplus$ and the
commutative projection property of $K$ yields a $cv\in K$ with $\Parikh{\pi_{C\cup
P}(cv)}=c+\mu_i+\nu$. This means that the word
\[ v'=vp_1^{\kappa(p_1)}\cdots p_m^{\kappa(p_m)} \]
is in $c^{-1}KR_i\cap\ParikhInv{U_i}$. Furthermore, $\Parikh{\pi_{P'}(v')}=\kappa$ and hence
\begin{align*}
\Parikh{\pi_{C'\cup P'}(c'_i\pi_{C'\cup X\cup P'\cup\{\diamond\}}(v'))}=c'_i+\kappa=\mu.
\end{align*}
This proves $\bigcup_{i=1}^n c'_i+P'^\oplus_i\subseteq\Parikh{\pi_{C'\cup P'}(K')}$.
The other inclusion follows directly from the definition of $K'$.
\item\emph{Boundedness}. Since
$\pi_{\{\diamond\}}(K')\subseteq\pi_{\{\diamond\}}(K)$, $K'$ inherits boundedness
from $K$.
\item\emph{Insertion property}. Let $c'_iw\in K'$ and $\mu\in P'^\oplus_i$. Write $w=\pi_{C'\cup X\cup P'\cup \{\diamond\}}(v)$
for some $v\in c^{-1}KR_i\cap\ParikhInv{U_i}$, and $v=v'r$ for some $r\in R_i$. Then $cv'\in K$ and applying the insertion property of $K$ to
$cv'$ and $\psi_i(\mu)\in P^\oplus$ yields a $v''\in L$ with
$\pi_{X\cup\{\diamond\}}(cv')\preceq_{\diamond} v''$ and
$\Parikh{v''}=\Parikh{\pi_X(cv')}+\varphi(\psi_i(\mu))$.
This word satisfies
\begin{align*}
\pi_{X\cup\{\diamond\}}(c'_iw)&=\pi_{X\cup\{\diamond\}}(v)=\pi_{X\cup\{\diamond\}}(cv')\preceq_{\diamond} v'', \\
\Parikh{\pi_X(v'')}&=\Parikh{\pi_X(cv')}+\varphi(\psi_i(\mu)) \\
&=\Parikh{\pi_X(c'_iw)}+\varphi(\psi_i(\mu))=\Parikh{\pi_X(c'_iw)}+\varphi'(\mu).
\end{align*}
It remains to be shown that $v''\in L\cap\ParikhInv{S}$. Since $v''\in L$,
this amounts to showing $\Parikh{v''}\in S$.
Since $\Parikh{v'}\in U_i$, we have $\Parikh{\pi_P(v')}\in \mu_i+F_i^\oplus$
and $\psi_i(\mu)\in F_i^\oplus$ and hence also
$\Parikh{\pi_P(v')}+\psi_i(\mu)\in \mu_i+F_i^\oplus=T_i$. Therefore,
\begin{align*}
\Parikh{v''}&=\Parikh{\pi_X(cv')}+\varphi(\psi_i(\mu)) \\
&=\varphi(\pi_{C\cup P}(cv'))+\varphi(\psi_i(\mu)) \\
&=\varphi(\pi_{C\cup P}(cv')+\psi_i(\mu))\in\varphi(c+T_i)\subseteq S.
\end{align*}
\end{itemize}
\end{proof}
\section{Proof of \cref{dc:overapprox}}
First, we need a simple auxiliary lemma. For $\alpha,\beta\in X^\oplus$, we
write $\alpha\le \beta$ if $\alpha(x)\le\beta(x)$ for all $x\in X$. For a set
$S\subseteq X^\oplus$, we write $\Dclosure{S}=\{\mu\in X^\oplus \mid \exists
\nu\in S\colon \mu\le \nu\}$ and $\Uclosure{S}=\{\mu\in X^\oplus \mid \exists
\nu\in S\colon \nu\le\mu\}$. The set $S$ is called \emph{upward closed} if
$\Uclosure{S}=S$.
\begin{clemma}\label{basics:dclosure:recognizable}
For a given semilinear set $S\subseteq X^\oplus$, the set $\ParikhInv{\Dclosure{S}}$
is an effectively computable regular language.
\end{clemma}
\begin{proof}
The set $S'=X^\oplus\setminus(\Dclosure{S})$ is Presburger-definable in terms
of $S$ and hence effectively semilinear. Moreover, since $\le$ is a
well-quasi-ordering on $X^\oplus$, $S'$ has a finite set $F$ of minimal
elements. Again $F$ is Presburger-definable in terms of $S'$ and hence
computable. Since $S'$ is upward closed, we have $S'=\Uclosure{F}$. Clearly,
given $\mu\in X^\oplus$, the language $R_\mu=\{w\in X^* \mid \mu\le\Parikh{w}
\}$ is an effectively computable regular language. Since
$w\in\ParikhInv{\Dclosure{S}}$ if and only if $w\notin
\ParikhInv{\Uclosure{F}}$, we have
$X^*\setminus\ParikhInv{\Dclosure{S}}=\bigcup_{\mu\in F}R_\mu$. Thus, we can
compute a finite automaton for the complement, $\ParikhInv{\Dclosure{S}}$.
\end{proof}
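Once the finite set $F$ of minimal elements of the complement is in hand, membership in $\ParikhInv{\Dclosure{S}}$ reduces to finitely many componentwise comparisons. A Python sketch of this final step (illustrative only; it assumes $F$ has already been computed, and represents multisets as dictionaries mapping symbols to multiplicities):

```python
from collections import Counter

def in_parikh_inv_dclosure(word, F):
    # w belongs to the Parikh-inverse of the downward closure of S
    # iff no minimal element mu of the (upward closed) complement
    # satisfies mu <= Parikh(w), i.e. each mu in F exceeds Parikh(w)
    # in at least one coordinate.
    psi = Counter(word)
    return all(any(psi[x] < mu[x] for x in mu) for mu in F)
```

For example, for $S=\{n\cdot a\mid n\ge 0\}$ over $\{a,b\}$, the complement of $\Dclosure{S}$ is $\Uclosure{\{b\}}$, so $F=\{b\}$ and exactly the words in $a^*$ are accepted.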
\begin{proof}[Proof of \cref{dc:overapprox}]
We use \cref{pa:parikhannotations} to construct a PAIM $(K,C,P,(P_c)_{c\in
C},\varphi,\diamond)$ for $L$ in $\mathsf{G}_i$.
For each $c\in C$, we construct the semilinear sets $S_c=\{\mu\in P_c^\oplus
\mid \varphi(c+\mu)\in S\}$. By \cref{basics:dclosure:recognizable}, we can
effectively construct a finite automaton for the language
\[R=\bigcup_{c\in C} c\left(\ParikhInv{\Dclosure{S_c}}\shuf (X\cup \{\diamond\})^*\right).\]
We claim that $L'=\pi_X\left(K\cap R\right)$ is in $\mathsf{G}_i$ and satisfies
$L\cap\ParikhInv{S}\subseteq L'\subseteq \Dclosure{(L\cap\ParikhInv{S})}$. The
latter clearly implies $\Dclosure{L'}=\Dclosure{(L\cap\ParikhInv{S})}$. Since
$K\in\mathsf{G}_i$ and $\mathsf{G}_i$ is an effective full semi-AFL, we clearly have
$L'\in\mathsf{G}_i$.
We begin with the inclusion $L\cap\ParikhInv{S}\subseteq L'$. Let $w\in
L\cap\ParikhInv{S}$. Then there is a word $cv\in K$, $c\in C$ with
$\pi_X(v)=w$. Since $\Parikh{w}\in S$, we have $\varphi(\Parikh{\pi_{C\cup
P}(cv)})=\Parikh{\pi_X(v)}=\Parikh{w}\in S$ and hence $\Parikh{\pi_{P}(v)}\in
S_c\subseteq \Dclosure{S_c}$. In particular, $cv\in R$ and thus $w=\pi_X(cv)\in
L'$. This proves $L\cap\ParikhInv{S}\subseteq L'$.
In order to show $L'\subseteq\Dclosure{(L\cap\ParikhInv{S})}$, suppose $w\in L'$.
Then there is a $cv\in K\cap R$ with $w=\pi_X(cv)$. The fact that $cv\in R$ means that
$\Parikh{\pi_{P_c}(v)}\in \Dclosure{S_c}$ and hence there is a $\nu\in
P_c^\oplus$ with $\Parikh{\pi_{P_c}(v)}+\nu\in S_c$. This means in particular
\begin{equation}\Parikh{\pi_X(cv)}+\varphi(\nu)=\varphi(\pi_{C\cup P}(cv))+\varphi(\nu)\in S.\label{dc:insertion:satisfies}\end{equation}
The insertion property of $(K,C,P,(P_c)_{c\in C},\varphi,\diamond)$ allows us
to find a word $v'\in L$ such that
\begin{align}
& \Parikh{v'}=\Parikh{\pi_X(cv)}+\varphi(\nu), && \pi_{X\cup\{\diamond\}}(cv)\preceq_{\diamond} v'.\label{dc:insertionword}
\end{align}
Together with \cref{dc:insertion:satisfies}, the first part of \cref{dc:insertionword} implies that $\Parikh{v'}\in S$.
The second part of \cref{dc:insertionword} means in particular that
$w=\pi_X(cv)\preceq v'$. Thus, we have $w\preceq v'\in L\cap\ParikhInv{S}$ and
hence $w\in \Dclosure{(L\cap\ParikhInv{S})}$.
\end{proof}
\section{Proof of \cref{pa:parikhannotations}}
\begin{clemma}[Finite languages]\label{pa:finite}
Given $L$ in $\mathsf{F}_0$, one can construct a PAIM for $L$ in $\mathsf{F}_0$.
\end{clemma}
\begin{proof}
Let $L=\{w_1,\ldots,w_n\}\subseteq X^*$ and define $C=\{c_1,\ldots,c_n\}$ and
$P=P_c=\emptyset$, where the $c_i$ are new symbols. Let $\varphi\colon (C\cup
P)^\oplus\to X^\oplus$ be the morphism with $\varphi(c_i)=\Parikh{w_i}$. It
is easily verified that with $K=\{c_1w_1,\ldots,c_nw_n\}$, the tuple
$(K,C,P,(P_c)_{c\in C},\varphi,\diamond)$ is a PAIM for $L$ in $\mathsf{F}_0$.
\end{proof}
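The construction in this proof is entirely explicit, and $P=\emptyset$ makes all conditions except the counting property vacuous. A minimal Python sketch (illustrative encoding of our own: constant symbols as fresh strings, $\varphi$ as a dictionary of Parikh images, words of $K$ as tuples):

```python
from collections import Counter

def paim_finite(words):
    # PAIM for a finite language {w_1, ..., w_n}: one fresh constant
    # symbol c_i per word, no periodic symbols (P empty), phi(c_i) the
    # Parikh image of w_i, and K = {c_1 w_1, ..., c_n w_n}.
    C = [f"c{i}" for i in range(len(words))]
    phi = {c: Counter(w) for c, w in zip(C, words)}
    K = {(c,) + tuple(w) for c, w in zip(C, words)}
    return K, C, phi
```

The counting property holds by construction: for each $c_iw_i\in K$, the image $\varphi(c_i)$ equals the Parikh image of the remainder $w_i$.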
\begin{proof}[Proof of \cref{pa:parikhannotations}]
We compute the PAIM for $L$ recursively:
\begin{itemize}
\item If $L\in\mathsf{F}_0$, we can construct a PAIM for $L$ in $\mathsf{F}_0$ using \cref{pa:finite}.
\item If $L\in\mathsf{F}_i$ and $i\ge 1$, then $L=h(L'\cap\ParikhInv{S})$ for some
$L'\subseteq X^*$ in $\mathsf{G}_{i-1}$, a semilinear $S\subseteq X^\oplus$, and
a morphism $h\colon X^*\to Y^*$. We compute a PAIM for
$L'$ in $\mathsf{G}_{i-1}$ and then use \cref{pa:sli} to construct a PAIM for $L$.
\item If $L\in\mathsf{G}_i$, then $L=\Lang{G}$ for an $\mathsf{F}_{i}$-grammar $G$. We
construct PAIMs for the right-hand sides of $G$ and then, using \cref{pa:alg}, we
construct a PAIM for $L$ in $\mathsf{G}_i$.
\end{itemize}
\end{proof}
\section{Proof of \cref{strictness:sli}}
\begin{proof}[Proof of \cref{strictness:sli}]
We write $Y=X\cup \{\#\}$. Suppose $(L\#)^*\in\HomSLI{\mathcal{C}}$. Then
$(L\#)^*=h(L'\cap\ParikhInv{S})$ for some $L'\subseteq Z^*$, a semilinear
$S\subseteq Z^\oplus$, and a morphism $h\colon Z^*\to Y^*$. Since $\mathcal{C}$ has
PAIMs, there is a PAIM $(K,C,P,(P_c)_{c\in C},\varphi,\diamond)$ for $L'$ in
$\mathcal{C}$. Let $S_c=\{\mu\in P_c^\oplus \mid \varphi(c+\mu)\in S\}$. Moreover, let
$g$ be the morphism with
\begin{align*}
g\colon (C\cup Z\cup P\cup \{\diamond\})^*&\longrightarrow (Y\cup\{\diamond\})^* && \\
z&\longmapsto h(z) & &\text{for $z\in Z$}, \\
x&\longmapsto \varepsilon & &\text{for $x\in C\cup P$}, \\
\diamond&\longmapsto \diamond. & &
\end{align*}
Finally, we need the rational transduction $T\subseteq X^*\times (Y\cup
\{\diamond\})^*$ with
\[ T(M)=\{s\in X^* \mid r\#s\#t\in M~\text{for some $r,t\in (Y\cup\{\diamond\})^*$} \}. \]
We claim that
\begin{align*}
L=T(\hat{L}), && \text{where} && \hat{L}=\{ g(cw)\mid c\in C,~cw\in K,~\pi_P(w)\in \ParikhInv{\Dclosure{S_c}}\}.
\end{align*}
According to \cref{basics:dclosure:recognizable}, the language
$\ParikhInv{\Dclosure{S_c}}$ is regular, meaning $\hat{L}\in\mathcal{C}$ and hence
$T(\hat{L})\in\mathcal{C}$. Thus, proving $L=T(\hat{L})$ establishes the
\lcnamecref{strictness:sli}.
We begin with the inclusion $T(\hat{L})\subseteq L$. Let $s\in T(\hat{L})$ and
hence $r\#s\#t=g(cw)$ for $r,t\in (Y\cup\{\diamond\})^*$, $c\in C$, $cw\in K$ and
$\pi_P(w)\in\ParikhInv{\Dclosure{S_c}}$. The latter means there is a $\mu\in
P_c^\oplus$ such that $\Parikh{\pi_P(w)}+\mu\in S_c$ and hence
\[\Parikh{\pi_{Z}(cw)}+\varphi(\mu)=\varphi(c+\Parikh{\pi_P(w)}+\mu)\in S.\]
By the insertion property of $K$, there is a $v\in L'$ with
$\pi_{Z\cup\{\diamond\}}(cw)\preceq_{\diamond} v$ and
$\Parikh{v}=\Parikh{\pi_Z(cw)}+\varphi(\mu)$. This means $\Parikh{v}\in S$ and
thus $v\in L'\cap\ParikhInv{S}$ and hence $g(v)=h(v)\in (L\#)^*$. Since
$g(\diamond)=\diamond$, the relation $\pi_{Z\cup\{\diamond\}}(cw)\preceq_{\diamond}
v$ implies
\[ r\#s\#t=g(cw)=g(\pi_{Z\cup\{\diamond\}}(cw))\preceq_{\diamond} g(v)\in (L\#)^*. \]
However, $\diamond$ does not occur in $s$, meaning $\#s\#\in \#X^*\#$ is a factor of
$g(v)\in (L\#)^*$ and hence $s\in L$. This proves $T(\hat{L})\subseteq L$.
In order to show $L\subseteq T(\hat{L})$, suppose $s\in L$. The boundedness
property of $K$ means there is a bound $k\in\mathbb{N}$ with $|w|_{\diamond}\le k$ for
every $w\in K$. Consider the word $v=(s\#)^{k+2}$. Since $v\in (L\#)^*$, we
find a $v'\in L'\cap\ParikhInv{S}$ with $v=h(v')$. This, in turn, means there
is a $cw\in K$ with $c\in C$ and $\pi_Z(cw)=v'$. Then
\[ \varphi(c+\Parikh{\pi_P(w)})=\varphi(\pi_{C\cup P}(cw))=\Parikh{\pi_Z(cw)}=\Parikh{v'}\in S \]
and hence $\Parikh{\pi_P(w)}\in S_c\subseteq\Dclosure{S_c}$. Therefore,
$g(cw)\in\hat{L}\subseteq (Y\cup\{\diamond\})^*$. Note that $g$ agrees with
$h(\pi_Z(\cdot))$ on all symbols but $\diamond$, which is fixed by the former
and erased by the latter. Since $h(\pi_Z(cw))=h(v')=v=(s\#)^{k+2}$, the word
$g(cw)$ is obtained from $(s\#)^{k+2}$ by inserting occurrences of $\diamond$.
In fact, it is obtained by inserting at most $k$ of them since
$|g(cw)|_{\diamond}=|cw|_{\diamond}\le k$. This means $g(cw)$ has at least one
factor $\#s\#\in \#X^*\#$ and hence $s\in T(g(cw))\subseteq T(\hat{L})$. This
completes the proof of $L=T(\hat{L})$ and thus of the
\lcnamecref{strictness:sli}.
\end{proof}
\section{Proof of \cref{strictness:bursting}}
\begin{proof}
Suppose $G=(N,T,P,S)$ is $k$-bursting. Let $\sigma\colon (N\cup T)^*\to
\Powerset{T^*}$ be the substitution with $\sigma(x)=\{w\in T^{\le k} \mid
x\grammarsteps[G] w\}$ for $x\in N\cup T$. Since $\sigma(x)$ is finite for each
$x\in N\cup T$, there is clearly a locally finite rational transduction $\tau$
with $\tau(M)=\sigma(M)$ for every language $M\subseteq (N\cup T)^*$. In
particular, $\sigma(M)\in\mathcal{C}$ whenever $M\in\mathcal{C}$. Let $R\subseteq N$ be the set
of reachable nonterminals. We claim that
\begin{align} \Lang{G}\cap T^{>k}=\bigcup_{A\in R}\bigcup_{A\to L\in P} \sigma(L)\cap T^{>k}.\label{strictness:bursting:eq}\end{align}
This clearly implies $\Lang{G}\cap T^{>k}\in\mathcal{C}$. Furthermore, since $\mathcal{C}$ is a
union closed full semi-trio and thus closed under adding finite sets of words,
it even implies $\Lang{G}\in\mathcal{C}$ and hence the \lcnamecref{strictness:bursting}.
We start with the inclusion ``$\subseteq$''. Suppose $w\in \Lang{G}\cap T^{>k}$
and let $t$ be a derivation tree for $G$ with $\yield{t}=w$. Since $|w|>k$, $t$
clearly has at least one node $x$ with $|\yield{x}|>k$. Let $y$ be maximal
among these nodes (i.e. such that no descendent of $y$ has a yield of length
$>k$). Since $G$ is $k$-bursting, this means $\yield{y}=w$. Furthermore, each
child $c$ of $y$ has $|\yield{c}|\le k$. Thus, if $A$ is the label of $y$, then
$A$ is reachable and there is a production $A\to L$ with $w\in \sigma(L)$.
Hence, $w$ is contained in the right-hand side of
\labelcref{strictness:bursting:eq}.
In order to show ``$\supseteq$'' of \labelcref{strictness:bursting:eq}, suppose
$w\in\sigma(L)\cap T^{>k}$ for some $A\to L\in P$ and a reachable $A\in N$. By
the definition of $\sigma$, we have $A\grammarsteps[G] w$. Since $A$ is
reachable, there is a derivation tree $t$ for $G$ with an $A$-labeled node $x$
such that $\yield{x}=w$. Since $G$ is $k$-bursting and $|w|>k$, this implies
$w=\yield{x}=\yield{t}\in \Lang{G}$ and thus $w\in\Lang{G}\cap T^{>k}$.
\end{proof}
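To illustrate the notion of a $k$-bursting grammar used above (the property that every node of a derivation tree whose yield has length $>k$ already yields the whole derived word), consider the grammar with the single production $S\to\{aSb,\varepsilon\}$, which generates $\{a^nb^n\mid n\ge 0\}$. It is not $k$-bursting for any $k\ge 1$: in a derivation tree for $a^{k+1}b^{k+1}$, the $S$-labeled child of the root yields $a^kb^k$, which has length $2k>k$ but differs from the whole yield. On the other hand, any grammar all of whose derivable terminal words have length at most $k$ is trivially $k$-bursting.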
\section{Proof of \cref{strictness:shuffle}}
\begin{proof}[Proof of \cref{strictness:shuffle}]
Let $K=L\shuf \{a^nb^nc^n\mid n\ge 0\}$. If $K\in\Alg{\mathcal{C}}$, then also $M=K\cap
a^*(bX)^*c^*\in \Alg{\mathcal{C}}$. Hence, let $M=\Lang{G}$ for a reduced
$\mathcal{C}$-grammar $G=(N,T,P,S)$. This means $T=X\cup \{a,b,c\}$. Let
$\alpha,\beta\colon T^*\to\mathbb{Z}$ be the morphisms with
\begin{align*} \alpha(w)=|w|_a-|w|_b, && \beta(w)=|w|_b-|w|_c.\end{align*}
Then $\alpha(w)=\beta(w)=0$ for each $w\in M\subseteq K$. Thus,
\cref{nonterminal:extension} provides $G$-compatible extensions
$\hat{\alpha},\hat{\beta}\colon (N\cup T)^*\to\mathbb{Z}$ of $\alpha$ and $\beta$,
respectively.
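Concretely, the vanishing of $\alpha$ and $\beta$ on $M$ can be seen directly: every $w\in M=K\cap a^*(bX)^*c^*$ has the form $w=a^nbx_1bx_2\cdots bx_nc^n$ with $x_1,\ldots,x_n\in X$, and hence
\begin{align*} \alpha(w)=|w|_a-|w|_b=n-n=0, && \beta(w)=|w|_b-|w|_c=n-n=0.\end{align*}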
Let $k=\max\{|\hat{\alpha}(A)|, |\hat{\beta}(A)| \mid A\in N\}+1$ and consider the
$\mathcal{C}$-grammar $G'=(N,X,P',S)$, where $P'=\{ A\to\pi_{N\cup X}(L) \mid A\to L\in
P\}$. Then clearly $\Lang{G'}=\pi_X(M)=L$. We claim that $G'$ is $k$-bursting.
By \cref{strictness:bursting}, this implies $L=\Lang{G'}\in\mathcal{C}$ and hence the
\lcnamecref{strictness:shuffle}.
Let $t$ be a derivation tree for $G'$ and $x$ a node in $t$ with
$|\yield{x}|>k$. Then, by the definition of $G'$, there is a derivation tree
$\bar{t}$ for $G$ such that $t$ is obtained from $\bar{t}$ by deleting or
replacing by an $\varepsilon$-leaf each $\{a,b,c\}$-labeled leaf.
Since $x$ has to be an inner node, it has a corresponding node $\bar{x}$ in
$\bar{t}$. Since $G$ generates $M$, we have
\[ \yield{\bar{t}}=a^nbx_1bx_2\cdots bx_nc^n \]
for some $n\ge 0$ and $x_1,\ldots,x_n\in X$ with $x_1\cdots x_n\in L$. Moreover,
$\yield{\bar{x}}$ is a factor of $\yield{\bar{t}}$ and
$\pi_X(\yield{\bar{x}})=\yield{x}$. This means $|\pi_X(\yield{\bar{x}})|>k$ and
since in $\yield{\bar{t}}$, between any two consecutive $X$-symbols, there is a
$b$, this implies $|\yield{\bar{x}}|_{b} > k-1$. Let $A$ be the label of $x$
and $\bar{x}$. By the choice of $k$, we have
$|\hat{\alpha}(\yield{\bar{x}})|=|\hat{\alpha}(A)|\le k-1$ and
$|\hat{\beta}(\yield{\bar{x}})|=|\hat{\beta}(A)|\le k-1$. Hence,
$|\yield{\bar{x}}|_b>k-1$ implies $|\yield{\bar{x}}|_a \ge 1$ and
$|\yield{\bar{x}}|_c\ge 1$. However, a factor of $\yield{\bar{t}}$ that
contains an $a$ and a $c$ has to comprise all of $bx_1\cdots bx_n$. Hence
\[ \yield{x}=\pi_X(\yield{\bar{x}})=x_1\cdots x_n=\pi_X(\yield{\bar{t}})=\yield{t}. \]
This proves that $G'$ is $k$-bursting.
\end{proof}
\section{Proof of \cref{strictness:hierarchy}}
\begin{proof}[Proof of \cref{strictness:hierarchy}]
First, note that if $V_i\in\mathsf{G}_i\setminus\mathsf{F}_i$, then
$U_{i+1}\in\mathsf{F}_{i+1}\setminus\mathsf{G}_i$: By construction of $U_{i+1}$, the fact
that $V_i\in\mathsf{G}_i$ implies $U_{i+1}\in\HomSLI{\mathsf{G}_i}=\mathsf{F}_{i+1}$. By
\cref{hierarchy:closure}, $\mathsf{F}_i$ is a union closed full semi-trio. Thus, if we
had $U_{i+1}\in \mathsf{G}_i=\Alg{\mathsf{F}_i}$, then \cref{strictness:shuffle} would imply
$V_i\in\mathsf{F}_i$, which is not the case.
Second, observe that $U_{i+1}\in\mathsf{F}_{i+1}\setminus\mathsf{G}_i$ implies
$V_{i+1}\in\mathsf{G}_{i+1}\setminus\mathsf{F}_{i+1}$: By construction of $V_{i+1}$, the fact
that $U_{i+1}\in\mathsf{F}_{i+1}$ implies $V_{i+1}\in\Alg{\mathsf{F}_{i+1}}=\mathsf{G}_{i+1}$. By
\cref{hierarchy:closure}, $\mathsf{G}_i$ is a full semi-AFL and by
\cref{pa:parikhannotations}, every language in $\mathsf{G}_i$ has a PAIM in $\mathsf{G}_i$.
Hence, if we had $V_{i+1}\in\mathsf{F}_{i+1}=\HomSLI{\mathsf{G}_i}$, then
\cref{strictness:sli} would imply $U_{i+1}\in\mathsf{G}_i$, which is not the case.
Hence, it remains to be shown that $V_0\in\mathsf{G}_0\setminus\mathsf{F}_0$. That, however,
is clear because $V_0=\#_0^*$ is context-free but infinite.
\end{proof}
\section{Introduction}
\input{intro.tex}
\section{Preliminaries}
\label{sec:preliminaries}
\input{preliminaries.tex}
\section{A hierarchy of language classes}
\label{sec:hierarchy}
This section introduces a hierarchy of language classes that divides the class
of languages accepted by stacked counter automata into levels. This will allow
us to apply recursion with respect to these levels.
\subimport{basics/}{algebraic.tex}
\subimport{basics/}{semilinear.tex}
\subimport{basics/}{hierarchy.tex}
\section{Parikh annotations}
\label{sec:pa}
\subimport{annotations/}{parikh.tex}
\section{Computing downward closures}
\label{sec:dclosure}
\subimport{annotations/}{dclosure.tex}
\section{Strictness of the hierarchy}
\label{sec:strictness}
\subimport{annotations/}{strictness.tex}
\printbibliography
\section{Introduction}
A contact structure $\xi$ on an oriented $3$-manifold $M$ is an oriented tangent plane distribution such that there is a
$1$-form $\alpha$ on $M$ satisfying $\xi=\ker\alpha$,
$d\alpha|_{\xi}>0$, and $\alpha\wedge d\alpha>0$. Such a $1$-form is
called a contact form for $\xi$. A curve in $M$ is said to be
Legendrian if it is tangent to $\xi$ everywhere. $\xi$ is said to be
overtwisted if there is an embedded disk $D$ in $M$ such that
$\partial D$ is Legendrian, but $D$ is transversal to $\xi$ along
$\partial D$. A contact structure that is not overtwisted is called
tight.
There are three types of symplectic fillability for contact structures.
\begin{enumerate}
\item $\xi$ is called Stein fillable if there is a Stein surface $(W,J)$ such that $M=\partial W$ and $\xi=TM\cap J(TM)$.
\item $\xi$ is called strongly fillable if there is a symplectic $4$-manifold $(W,\omega)$ such that $M=\partial W$, $\omega$ is exact near $M$, and there exists a primitive $\alpha$ of $\omega$ near $M$ satisfying $\xi=\ker(\alpha|_M)$ and $\omega|_\xi>0$.
\item $\xi$ is called weakly fillable if there is a symplectic $4$-manifold $(W,\omega)$ such that $M=\partial W$ and $\omega|_\xi>0$.
\end{enumerate}
From the works of Eliashberg \cite{E5}, Etnyre and Honda \cite{EH2}, Gromov \cite{Gromov} and Ghiggini \cite{Ghstrongfill,Gh1}, we know
\begin{eqnarray*}
& & \{\text{Stein fillable contact structures}\} \\
& \subsetneq & \{\text{strongly fillable contact structures}\} \\
& \subsetneq & \{\text{weakly fillable contact structures}\} \\
& \subsetneq & \{\text{tight contact structures}\}.
\end{eqnarray*}
The classification problem of overtwisted contact structures was solved by Eliashberg \cite{E1}. The classification of tight contact structures up to isotopy is much more complex, and is only known for limited classes of $3$-manifolds.
Eliashberg \cite{E} and, independently, Weinstein \cite{We} defined Legendrian surgery, which has turned out to be a very useful method of constructing tight contact structures. We will recall Weinstein's construction in detail in Section \ref{ssh}. From \cite{E4,EH2,We}, Legendrian surgery is known to preserve the above three types of symplectic fillability. It has been used to produce many interesting examples of tight contact structures.
In many cases, in order to classify tight contact structures, we need to distinguish between tight contact structures constructed by different
Legendrian surgeries. If the Legendrian surgeries are done on the standard contact $S^3$, which is Stein filled by the standard complex $B^4$, then the next two theorems provide an easy criterion.
\begin{theorem}\cite[Theorem 1.2]{LM}\label{stein-dis}
Let $X$ be a smooth $4$-manifold with boundary. Suppose $J_1$,
$J_2$ are two Stein structures with boundary on $X$ with
associated $Spin^{\mathbb{C}}$-structures $\mathfrak{s}_1$ and
$\mathfrak{s}_2$. If the induced contact structures $\xi_1$ and
$\xi_2$ on $\partial X$ are isotopic, then $\mathfrak{s}_1$ and
$\mathfrak{s}_2$ are isomorphic (and, in particular, have the same
first Chern class).
\end{theorem}
\begin{theorem}\cite[Proposition 2.3]{Go}\label{go-chern}
If $(W,J)$ is obtained from the standard complex $B^4$ by Legendrian surgery on a Legendrian link $L$ in the standard contact $S^3$, then the first Chern class $c_1(J)$ of the induced Stein structure $J$ is represented by a cocycle whose value on the $2$-dimensional homology class corresponding to a component of $L$ equals the rotation number of that component.
\end{theorem}
In particular, we have:
\begin{corollary}\label{stein-surgery-dis}
Let $L_1$, $L_2$ be two smoothly isotopic Legendrian links in the standard contact $S^3$ (which is Stein fillable). Suppose that the Thurston-Bennequin numbers of corresponding components of $L_1$ and $L_2$ are equal. Then the Legendrian surgeries on $L_1$, $L_2$ give two tight contact structures $\xi_1$ and $\xi_2$ on the same ambient $3$-manifold. If, moreover, $\xi_1$ and $\xi_2$ are isotopic, then the rotation numbers of corresponding components of $L_1$ and $L_2$ are equal.
\end{corollary}
In practice, we can attain different rotation numbers by stabilizing a Legendrian link in different ways. Then Corollary \ref{stein-surgery-dis} implies that Legendrian surgeries on these stabilized Legendrian links give non-isotopic contact structures. This method can be modified to apply to other Stein fillable contact $3$-manifolds. See, e.g., \cite{GLS,H1,Wu} for applications. The goal of the present paper is to generalize Corollary \ref{stein-surgery-dis} to distinguish between tight contact structures obtained by Legendrian surgeries on stabilized Legendrian links in larger classes of tight contact $3$-manifolds, including all weakly fillable ones. Our main technical tool is the Ozsv\'ath-Szab\'o contact invariant.
\begin{theorem}\label{surgery-main}
Let $(M,\xi)$ be a tight contact $3$-manifold, and
\[
L=K^1\coprod K^2\coprod \cdots \coprod K^m
\]
a Legendrian link in it. For
$j=1,2,\cdots,m$, $i=1,2$, fix integers $s^j$, $p^j_i$, so that
$0\leq p^j_i\leq s^j$. Let $K^j_i$ be the Legendrian knot
constructed from $K^j$ by $p^j_i$ positive stabilizations and
$s^j-p^j_i$ negative stabilizations. Then the Legendrian surgeries
on $L_i=K^1_i\coprod K^2_i\coprod \cdots \coprod K^m_i$ give two
contact structures $\xi_1$ and $\xi_2$ on the same ambient
$3$-manifold $M'$. Assume that $\xi_1$ and $\xi_2$ are isotopic. We
have:
\begin{enumerate}
\item If $(M,\xi)$ is weakly filled by a symplectic
$4$-manifold $(W,\omega)$, then, for each $j=1,\cdots,m$,
\[
2(p^j_1-p^j_2)\left\{
\begin{array}{ll}
=0, & \hbox{if $K^j$ represents a torsion element in
$H_1(W)$;} \\
\equiv 0 \mod{d^j}, & \hbox{otherwise, where
$d^j=\gcd\{\langle\zeta,[K^j]\rangle|\zeta\in H^1(W)\}$.}
\end{array}
\right.
\]
\item If $(M,\xi)$ has non-vanishing Ozsv\'ath-Szab\'o $c^+$-invariant,
then, for each $j=1,\cdots,m$,
\[
2(p^j_1-p^j_2)\left\{
\begin{array}{ll}
=0, & \hbox{if $K^j$ represents a torsion element in
$H_1(M)$;} \\
\equiv 0 \mod{d^j}, & \hbox{otherwise, where
$d^j=\gcd\{\langle\zeta,[K^j]\rangle|\zeta\in H^1(M)\}$.}
\end{array}
\right.
\]
\end{enumerate}
\end{theorem}
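For instance, if $K^j$ represents a non-torsion class in $H_1(W)$ and $H^1(W)\cong\mathbb{Z}$ is generated by a class $\zeta_0$ with $\langle\zeta_0,[K^j]\rangle=d>0$, then $d^j=d$ and conclusion (1) reads $2(p^j_1-p^j_2)\equiv 0 \pmod{d}$; when $d$ is odd, this forces $p^j_1\equiv p^j_2 \pmod{d}$.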
The above theorem was proved in the author's attempt to classify
tight contact structures on the Brieskorn homology spheres
$-\Sigma(2,3,6n-1)$. In Section \ref{236}, we will discuss the tight
contact structures on these homology spheres using Theorem
\ref{surgery-main}. It is known to many contact topologists that
there are at most $\frac{n(n-1)}{2}$ tight contact structures on
$-\Sigma(2,3,6n-1)$. Using the tight contact structures on
$M(-\frac{1}{2},\frac{1}{3},\frac{1}{6})$, which are all weakly
fillable, we can give $\frac{n(n-1)}{2}$ different Legendrian
surgery constructions of tight contact structures on
$-\Sigma(2,3,6n-1)$. But it is not known whether these surgeries
give non-isotopic tight contact structures. We will use Theorem
\ref{surgery-main} to show that, among these surgeries, any two
different Legendrian surgeries on the same tight contact structure
on $M(-\frac{1}{2},\frac{1}{3},\frac{1}{6})$ give non-isotopic tight
contact structures on $-\Sigma(2,3,6n-1)$, which implies the following theorem.
\begin{theorem}\label{2n-3}
There are at least $2n-3$ pairwise non-isotopic tight contact
structures on $-\Sigma(2,3,6n-1)$.
\end{theorem}
It is still an open problem whether surgeries on different tight contact structures
on $M(-\frac{1}{2},\frac{1}{3},\frac{1}{6})$ give non-isotopic tight
contact structures on $-\Sigma(2,3,6n-1)$. The author believes that
the answer is yes, and the proof will likely require a better understanding of the Heegaard-Floer homology and the Ozsv\'ath-Szab\'o contact invariants.
\section{Standard symplectic $2$-handle and Legendrian
surgery}\label{ssh}
In this section, we recall Weinstein's construction of the standard
symplectic $2$-handle and the Legendrian surgery in \cite{We}.
Let $(x_1,y_1,x_2,y_2)$ be the standard Cartesian coordinates of
$\mathbb{R}^4$, and
\[
\omega_{st}=dx_1\wedge dy_1 + dx_2\wedge dy_2
\]
the standard symplectic form on $\mathbb{R}^4$. Define
\[
f_2=x_1^2-\frac{y_1^2}{2}+x_2^2-\frac{y_2^2}{2},
\]
\[v_2=\nabla f_2=2x_1\frac{\partial}{\partial
x_1}-y_1\frac{\partial}{\partial y_1}
+2x_2\frac{\partial}{\partial x_2}-y_2\frac{\partial}{\partial
y_2},
\]
and
\[
\alpha_2=\iota_{v_2}\omega_{st}=y_1dx_1+2x_1dy_1+y_2dx_2+2x_2dy_2.
\]
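A direct computation from these formulas gives
\[
d\alpha_2=dy_1\wedge dx_1+2\,dx_1\wedge dy_1+dy_2\wedge dx_2+2\,dx_2\wedge dy_2=dx_1\wedge dy_1+dx_2\wedge dy_2=\omega_{st}.
\]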
Then $v_2$ is a symplectic vector field, in the sense that
$d(\iota_{v_2}\omega_{st})=\omega_{st}$. Let
\[
X_-=\{(x_1,y_1,x_2,y_2)\in\mathbb{R}^4~|~f_2(x_1,y_1,x_2,y_2)=-1\}.
\]
$X_-$ is positively transverse to $v_2$, and, hence,
$\alpha_2|_{X_-}$ is a contact form. Let
\[
S^1_- = \{(0,y_1,0,y_2)~|~f_2(0,y_1,0,y_2)=-1\}.
\]
This is a Legendrian knot in $(X_-,\alpha_2|_{X_-})$.
\begin{lemma}\cite[Lemma 3.1]{We}\label{handle-construction}
For $A>1$, let
\[
F(x_1,y_1,x_2,y_2) = A(x_1^2+x_2^2)-\frac{y_1^2+y_2^2}{2}-1.
\]
Then the hypersurface
\[
\Sigma = F^{-1}(0)
\]
is positively transverse to $v_2$, and the region
\[
\mathcal{H}_2 = \{(x_1,y_1,x_2,y_2)\in\mathbb{R}^4 ~|~
f_2(x_1,y_1,x_2,y_2)\geq-1, ~F(x_1,y_1,x_2,y_2)\leq0\}
\]
is diffeomorphic to $D^2\times D^2$. Moreover, by choosing
$A\gg1$, we can make $\mathcal{H}_2\cap X_-$ an arbitrarily small
neighborhood of $S^1_-$ in $X_-$.
\end{lemma}
\begin{definition}\label{2-handle}
$(\mathcal{H}_2,\omega_{st}|_{\mathcal{H}_2})$ is called a
standard symplectic $2$-handle.
\end{definition}
\begin{proposition}\cite[Proposition 4.2]{We}\label{isotropic}
Suppose, for $i=1,2$, $(W_i,\omega_i)$ is a symplectic
$4$-manifold, $u_i$ is a symplectic vector field in
$(W_i,\omega_i)$, $M_i$ is a $3$-dimension submanifold of $W$
transverse to $u_i$, and $K_i$ is a Legendrian knot in $M_i$ with
respect to the contact form $\iota_{u_i}\omega_i|_{M_i}$. Then
there is an open neighborhood $U_i$ of $K_i$ in $W_i$, for
$i=1,2$, and a diffeomorphism $\varphi:U_1\rightarrow U_2$, s.t.,
$\varphi^{\ast}(\omega_2|_{U_2})=\omega_1|_{U_1}$,
$\varphi_{\ast}(u_1|_{U_1})=u_2|_{U_2}$, $\varphi(U_1\cap
M_1)=U_2\cap M_2$, $\varphi(K_1)=K_2$.
\end{proposition}
Let $(W,\omega)$ be a symplectic $4$-manifold with boundary, $M$ a
component of $\partial W$, and $\xi$ a contact structure on $M$ so
that $\omega|_{\xi}>0$. Let $K$ be a Legendrian knot in $(M,\xi)$.
By \cite[Lemma 2.4]{EH2}, we isotope $\xi$ near $K$ so that
there exist a neighborhood $U$ of $K$ in $W$ and a non-vanishing
symplectic vector field $v$ defined in $U$, s.t., $v$
transversally points out of $W$ along $U\cap M$, and $\xi|_{U\cap
M} = \ker(\iota_v\omega|_{U\cap M})$. Let $\{\psi_t\}$ be the flow
of $v$. Without loss of generality, we assume there exists
$\tau>0$ such that
\[
U= \bigcup_{0\leq t< \tau}\psi_{-t}(U\cap M).
\]
Choose a small $\varepsilon\in(0,\tau)$. By Proposition
\ref{isotropic}, there is an open neighborhood $V$ of $S^1_-$ in
$\mathbb{R}^4$, and an embedding $\varphi:V\rightarrow U$, s.t.,
$\varphi^{\ast}(\omega)=\omega_{st}$, $\varphi_{\ast}(v_2)=v$,
$\varphi(V\cap X_-)\subset \psi_{-\varepsilon}(U\cap M)$, and
$\varphi(S^1_-)=\psi_{-\varepsilon}(K)$. Choosing $A\gg1$ in Lemma
\ref{handle-construction}, we get a standard symplectic $2$-handle
$\mathcal{H}_2$, such that $\mathcal{H}_2 \cap X_- \subset V$. We
extend the map $\varphi:V \rightarrow U$ by mapping the flow of
$v_2$ to the flow of $v$. Then $\varphi$ becomes a symplectic
diffeomorphism from a neighborhood of $\mathcal{H}_2 \cap X_-$ to
a neighborhood of $K$ in $W$. Now, let
\[
W'=W\cup_\varphi\mathcal{H}_2, ~
\omega' =\left\{%
\begin{array}{ll}
\omega, & \hbox{on $W$;} \\
\omega_{st}, & \hbox{on $\mathcal{H}_2$,} \\
\end{array}%
\right., ~\text{and}~
v' =\left\{%
\begin{array}{ll}
v, & \hbox{on $U$;} \\
v_2, & \hbox{on $\mathcal{H}_2$.} \\
\end{array}%
\right.
\]
Then $(W',\omega')$ is a symplectic $4$-manifold, and $v'$ is a
symplectic vector field defined in $U\cup_{\varphi}\mathcal{H}_2$,
transversally pointing out of the boundary of $W'$. Let
\[
M'=(M\setminus\mathcal{H}_2)\cup(\mathcal{H}_2\cap\Sigma),
~\text{and}~ \xi'=\left\{%
\begin{array}{ll}
\xi, & \hbox{on $M\setminus\mathcal{H}_2$;} \\
\ker\alpha_2, & \hbox{on $\mathcal{H}_2\cap\Sigma$.} \\
\end{array}%
\right.
\]
Then $(M',\xi')$ is the contact $3$-manifold obtained from
$(M,\xi)$ by Legendrian surgery on $K$, and $\omega'|_{\xi'}>0$.
\begin{remark}
If $(M,\xi)$ is weakly fillable, then the above construction gives
$(M',\xi')$ a weak symplectic filling. For a general contact
$3$-manifold $(M,\xi)$, consider the symplectic $4$-manifold
$(M\times I,d(e^t\alpha))$, where $\alpha$ is a contact form for
$\xi$, and $t$ is the variable of $I$. We can carry out the above
construction near $M\times\{1\}$, and get a symplectic cobordism
from $(M,\xi)$ to $(M',\xi')$.
\end{remark}
\section{Ozsv\'ath-Szab\'o invariants and proof of Theorem
\ref{surgery-main}}
Ozsv\'ath and Szab\'o \cite{OS} introduced the
Ozsv\'ath-Szab\'o invariant $c(\xi)$ of a contact structure $\xi$ on
a $3$-manifold $M$. $c(\xi)$ is an element of the quotient
$\widehat{HF}(-M)/\{\pm1\}$ of the Heegaard-Floer homology group of
$-M$, and is invariant under isotopy of $\xi$. $c(\xi)$ vanishes
when $\xi$ is overtwisted. For our purpose, it is more convenient to
use the following variant of the Ozsv\'ath-Szab\'o invariant.
\begin{definition}\cite{Gh1,Pl}\label{c+}
Let $M$ be a closed, oriented $3$-manifold, and
\[
\iota:\widehat{HF}(-M)\rightarrow HF^+(-M)
\]
the canonical map. Define $c^+(\xi)=\iota(c(\xi))$ for any contact structure $\xi$ on
$M$.
\end{definition}
Clearly, $c^+(\xi)$ is also invariant under isotopy of $\xi$, and
vanishes when $\xi$ is overtwisted.
The behavior of Ozsv\'ath-Szab\'o invariants under Legendrian
surgeries is described in the following theorem of Ozsv\'ath and Szab\'o.
\begin{theorem}\cite{OS}\label{OS-surgery}
Let $(M',\xi')$ be the contact $3$-manifold obtained from $(M,\xi)$
by Legendrian surgery on a Legendrian link, then
$F^+_W(c^+(\xi'))=c^+(\xi)$, where $W$ is the cobordism induced by
the surgery.
In particular, this implies that $\xi'$ is tight if $c(\xi)\neq0$.
\end{theorem}
Ghiggini \cite{Gh1} refined Theorem \ref{OS-surgery} to
the following.
\begin{proposition}\cite[Lemma 2.11]{Gh1}\label{Gh1-3.3}
Suppose that $(M',\xi')$ is obtained from $(M,\xi)$ by Legendrian
surgery on a Legendrian link. Then we have
$F^+_{W,\mathfrak{t}}(c^+(\xi'))=c^+(\xi)$, where $W$ is the
cobordism induced by the surgery and $\mathfrak{t}$ is the canonical
$Spin^\mathbb{C}$-structure associated to the symplectic structure
on $W$. Moreover, $F^+_{W,\mathfrak{s}}(c^+(\xi'))=0$ for any
$Spin^\mathbb{C}$-structure $\mathfrak{s}$ on $W$ with
$\mathfrak{s}\neq\mathfrak{t}$.
\end{proposition}
In order to prove Theorem \ref{surgery-main} in the weakly fillable
case, we need to use the Ozsv\'ath-Szab\'o contact invariant twisted by
a $2$-form as defined in \cite{OS1}. Let $(M,\xi)$ be a contact
$3$-manifold with weak symplectic filling $(W,\omega)$, and $B$ an
embedded $4$-ball in the interior of $W$. Consider the element
$\underline{F}^+_{W\setminus B, \mathfrak{s}|_{W\setminus B};
[\omega|_{W\setminus B}]}(c^+(\xi;[\omega|_M]))$ of the group
$\underline{HF}^+(S^3;[\omega|_{S^3}])$, where $S^3=-\partial B$,
$\mathfrak{s}$ is a $Spin^{\mathbb{C}}$-structure on $W$,
$c^+(\xi;[\omega|_M])\in \underline{HF}^+(-M;[\omega|_{M}])$ is the
Ozsv\'ath-Szab\'o contact invariant of $\xi$ twisted by
$[\omega|_M]$, and $\underline{F}^+_{W\setminus B,
\mathfrak{s}|_{W\setminus B}; [\omega|_{W\setminus B}]}$ is the
homomorphism between the two twisted Heegaard-Floer homology groups
induced by the cobordism $W\setminus B$. Note that both
$c^+(\xi;[\omega|_M])$ and $\underline{F}^+_{W\setminus B,
\mathfrak{s}|_{W\setminus B}; [\omega|_{W\setminus B}]}$ are defined
up to an overall multiplication by a factor of the form $\pm T^c$
for some $c\in\mathbb{R}$. To make them absolute, we fix the
auxiliary choices in the constructions of them, including a triple
Heegaard diagram, a base Whitney triangle to define the
homomorphisms, and a representative of $c^+(\xi;[\omega|_M])$. We
also fix a minimal grading generator $\Theta^+$ of $HF^+(S^3)$. Note
that
$\underline{HF}^+(S^3;[\omega|_{S^3}])=HF^+(S^3)\otimes\mathbb{Z}[\mathbb{R}]$.
Let $P_{\xi,\mathfrak{s};[\omega]}\in\mathbb{Z}[\mathbb{R}]$ be the
coefficient of $\Theta^+\otimes1$ in
\[
\underline{F}^+_{W\setminus B,
\mathfrak{s}|_{W\setminus B}; [\omega|_{W\setminus
B}]}(c^+(\xi;[\omega|_M])).
\]
Define a degree on $\mathbb{Z}[\mathbb{R}]$ by setting
$\deg{0}=+\infty$ and $\deg{P}=c_1$ for
\[
P=\sum_{i=1}^m a_i T^{c_i} ~\in~ \mathbb{Z}[\mathbb{R}],
\]
where $a_i\neq0$ and $c_1<\cdots<c_m$. Denote by $\mathfrak{s}_{\omega}$ the canonical $Spin^{\mathbb{C}}$-structure of $(W,\omega)$.
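For example, $\deg(3T^{-2}+T^{5})=-2$. Note that this degree is additive: $\deg(PQ)=\deg P+\deg Q$ for non-zero $P,Q\in\mathbb{Z}[\mathbb{R}]$, since the lowest-order terms multiply to a non-zero term; this additivity is implicitly used when multiplying by the factor $P$ in the proof of Lemma \ref{min-deg}.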
\begin{lemma}\cite[Theorem 4.2]{OS1}\label{min-deg}
\[
\deg P_{\xi,\mathfrak{s}_{\omega};[\omega]}<\deg
P_{\xi,\mathfrak{s};[\omega]}
\]
for any $Spin^{\mathbb{C}}$-structure $\mathfrak{s}$ on $W$ with
$\mathfrak{s}\neq\mathfrak{s}_{\omega}$.
\end{lemma}
\begin{proof}
(Following the proof of \cite[Theorem 4.2]{OS1}.) Fix an open book of $M$ adapted to $\xi$ with connected binding and genus greater than $1$. Eliashberg \cite[Theorem 1.1]{E6} showed that $\omega$ extends over the Giroux $2$-handle $M\xrightarrow{W_0} M_0$ corresponding to the $0$-surgery on the binding of the open book, where $M_0$ is the surface bundle over $S^1$ resulting from this surgery. Moreover, \cite[Theorem 1.3]{E6} implies that there is a $4$-manifold $V$ with $\partial V = -M_0$, $b_2^+(V)>1$, such that the extension of $\omega$ over $W\cup_M W_0$ further extends to a symplectic structure $\widetilde{\omega}$ on $X=W\cup_M W_0 \cup_{M_0} V$. Let $\widetilde{\mathfrak{s}}_{\widetilde{\omega}}$ be the canonical $Spin^{\mathbb{C}}$-structure of $(X,\widetilde{\omega})$.
Let $\mathfrak{s}$ be any $Spin^{\mathbb{C}}$-structure on $W$ such that $\mathfrak{s}|_M$ is the canonical $Spin^{\mathbb{C}}$-structure of $(M,\xi)$. Using the Composition Law \cite[Theorem 3.9]{OS4} and the arguments in the proof of \cite[Theorem 4.2]{OS1}, one can show that there exists a non-zero element $P\in \mathbb{Z}[\mathbb{R}]$ independent of $\mathfrak{s}$ such that
\begin{eqnarray*}
& & P \cdot \underline{F}^+_{W\setminus B, \mathfrak{s}|_{W\setminus B}; [\omega|_{W\setminus B}]}(c^+(\xi;[\omega|_M])) \\
& = & \sum_{\widetilde{\mathfrak{s}}\in Spin^{\mathbb{C}}(X), ~\widetilde{\mathfrak{s}}|_{W}=\mathfrak{s}, ~\widetilde{\mathfrak{s}}|_{W_0}=\widetilde{\mathfrak{s}}_{\widetilde{\omega}}|_{W_0}, ~\widetilde{\mathfrak{s}}|_{V} = \widetilde{\mathfrak{s}}_{\widetilde{\omega}}|_V} \Phi_{X,\widetilde{\mathfrak{s}}} \cdot T^{\left\langle \omega \cup c_1(\widetilde{\mathfrak{s}}),[X]\right\rangle},
\end{eqnarray*}
where $\Phi_{X,\widetilde{\mathfrak{s}}}$ is the closed $4$-manifold invariant defined in \cite{OS4}. By \cite[Theorem 1.1]{OS5}, the degree of the right hand side of the above equation is equal to $\left\langle \omega\cup c_1(\widetilde{\mathfrak{s}}_{\widetilde{\omega}}),[X]\right\rangle$ if $\mathfrak{s}=\mathfrak{s}_{\omega}$, and is strictly greater than $\left\langle \omega\cup c_1(\widetilde{\mathfrak{s}}_{\widetilde{\omega}}),[X]\right\rangle$ otherwise. This implies the lemma.
\end{proof}
The next two lemmas are technical results needed to prove Theorem
\ref{surgery-main}.
\begin{lemma}\label{bundles}
Let $X$ be a compact manifold with boundary, and $Y$ a closed
submanifold of $X$. Suppose that $L_1$ and $L_2$ are two complex
line bundles over $X$, and there is an isomorphism $\Psi:L_1|_Y
\rightarrow L_2|_Y$. Let $j:(X,\emptyset)\rightarrow(X,Y)$ be the
natural inclusion. Then there exists $\beta~\in~H^2(X,Y)$, such that
$j^{\ast}(\beta)=c_1(L_1)-c_1(L_2)$, and, for any embedded
$2$-manifold $\Sigma$ in $X$ with $\partial\Sigma\subset Y$, and any
non-vanishing section $v$ of $L_1|_{\partial\Sigma}$, we have
$\langle \beta,[\Sigma] \rangle = \langle c_1(L_1,v),[\Sigma]
\rangle - \langle c_1(L_2,\Psi(v)),[\Sigma] \rangle$, where
$[\Sigma]$ is the relative homology class in $H_2(X,Y)$ represented
by $\Sigma$.
\end{lemma}
\begin{proof}
Denote by $J_i$ the complex structure on $L_i$. Choose a metric
$g_2$ on $L_2|_Y$ compatible with $J_2$, and let
$g_1=\Psi^{\ast}(g_2)$. Consider the complex line bundle
$L=L_1\otimes \overline{L}_2$, where $\overline{L}_2$ is $L_2$ with
the complex structure $-J_2$. Let
$\mathcal{I}:L_2\rightarrow\overline{L}_2$ be the identity map, and
$\overline{\Psi}=\mathcal{I}\circ\Psi$. We define a smooth
non-vanishing section $\eta$ of $L|_Y$ as follows: at any point
$p$ on $Y$, pick a unit vector $u_p\in L_1|_p$, and define
$\eta_p=u_p\otimes\overline{\Psi}(u_p)$. It is clear that $\eta_p$
does not depend on the choice of $u_p$ since $\overline{\Psi}$ is
conjugate linear. This gives a smooth non-vanishing section $\eta$
of $L|_Y$. Now, let $\beta=c_1(L,\eta)$. Then
$j^{\ast}(\beta)=c_1(L)=c_1(L_1)-c_1(L_2)$.
Without loss of generality, we assume that $v$ is of unit length.
Choose a section $V_1$ of $L_1|_{\Sigma}$ with only isolated
singularities that extends $v$, and a section $V_2$ of
$L_2|_{\Sigma}$ with only isolated singularities that extends
$\Psi(v)$. Then it is easy to see that
\begin{eqnarray*}
\langle \beta,[\Sigma] \rangle
& = & \text{Sum of indices of
singularities of } ~(V_1 \otimes
\mathcal{I}(V_2)) \\
& = & (\text{Sum of indices of singularities of } ~V_1) \\
& & - (\text{Sum of indices of singularities of } ~V_2) \\
& = & \langle c_1(L_1,v),[\Sigma] \rangle - \langle
c_1(L_2,\Psi(v)),[\Sigma] \rangle.
\end{eqnarray*}
\end{proof}
\begin{figure}[ht]
\setlength{\unitlength}{1pt}
\begin{picture}(360,200)(-180,-110)
\linethickness{.5pt}
\put(-175,-80){\line(0,1){160}}
\put(-85,-80){\line(0,1){160}}
\qbezier(-125,-80)(-125,60)(-110,0)
\qbezier(-110,0)(-95,-60)(-95,80)
\put(-130,80){\vdots}
\put(-130,-90){\vdots}
\put(-180,-100){{$K$}}
\put(-140,-100){{$S_+(K)$}}
\put(-150,-30){\Large{+}}
\put(-150,30){\Large{--}}
\linethickness{2pt}
\put(-175,60){\line(1,0){90}}
\put(-175,0){\line(1,0){90}}
\put(-175,-60){\line(1,0){90}}
\linethickness{.5pt}
\put(85,-80){\line(0,1){160}}
\put(175,-80){\line(0,1){160}}
\qbezier(125,80)(125,-60)(140,0)
\qbezier(140,0)(155,60)(155,-80)
\put(130,80){\vdots}
\put(130,-90){\vdots}
\put(80,-100){{$K$}}
\put(140,-100){{$S_-(K)$}}
\put(110,-30){\Large{+}}
\put(110,30){\Large{--}}
\linethickness{2pt}
\put(85,60){\line(1,0){90}}
\put(85,0){\line(1,0){90}}
\put(85,-60){\line(1,0){90}}
\linethickness{.5pt}
\put(-35,-3){dividing curves}
\put(-45,0){\vector(-1,0){30}}
\put(-45,5){\vector(-3,4){30}}
\put(-45,-5){\vector(-3,-4){30}}
\put(45,0){\vector(1,0){30}}
\put(45,5){\vector(3,4){30}}
\put(45,-5){\vector(3,-4){30}}
\end{picture}
\caption{Positive and Negative
Stabilizations.}\label{stablization-figure}
\end{figure}
Let $K$ be a Legendrian knot in a contact $3$-manifold $(M,\xi)$.
Choose an oriented embedded annulus $\widetilde{A}$ which has $-K$
as one of its boundary components, and such that the index of the
contact framing of $K$ relative to the framing given by
$\widetilde{A}$ is negative. We can isotope $\widetilde{A}$ relative
to $K$ to make it convex, and such that $K$ has a standard annular
collar $A$ in $\widetilde{A}$. (See, e.g., \cite{H1} for the
definition of standard annular collars.) Then, by the Legendrian
Realization Principle \cite[Theorem 3.7]{H1}, we can isotope $A$ relative to $K$ to make
the curved lines in Figure \ref{stablization-figure} Legendrian
without changing the dividing curves. Then these Legendrian curves
are (Legendrianly isotopic to) the positive and negative
stabilizations of $K$. By Giroux's Flexibility, we can again assume
the stabilization has a standard annular collar neighborhood in $A$,
and repeat the above process to obtain repeated stabilizations of
$K$. This observation and \cite[Proposition 4.5]{H1} give:
\begin{lemma}\label{c1-number}
Let $K$ be a Legendrian knot in a contact $3$-manifold $(M,\xi)$.
Then there is an embedded convex annulus $A$ in $M$, such that
$\partial A=(-K)\cup K'$, and $K'$ is (Legendrianly isotopic to) the
repeated stabilization of $K$ obtained by $p$ positive
stabilizations and $s-p$ negative stabilizations. Moreover, if $u$
and $u'$ are the unit tangent vector fields of $K$ and $K'$, then
$\langle c_1(\xi,(-u)\sqcup u'),[A,\partial A]\rangle = 2p-s$.
\end{lemma}
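\begin{remark}
With the usual conventions, a positive stabilization increases the rotation number of an oriented Legendrian knot by $1$, and a negative one decreases it by $1$. Thus, when the rotation numbers of $K$ and $K'$ are defined, Lemma \ref{c1-number} is consistent with the count $r(K')-r(K)=p-(s-p)=2p-s$.
\end{remark}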
\vspace{.4cm}
\begin{proof}[Proof of Theorem \ref{surgery-main}] For
notational simplicity, we assume $L=K$ is a Legendrian knot, and
$K_i$, $i=1,2$, is a Legendrian knot obtained from $K$ by $p_i$
positive stabilizations and $s-p_i$ negative stabilizations. The
generalization to Legendrian links is straightforward.
\vspace{.4 cm}
\textbf{Part (1).} We assume that $(M,\xi)$ is weakly filled by
$(W,\omega)$.
First, by \cite[Lemma 2.4]{EH2}, we isotope $\xi$ near $K$ so
that there is an open neighborhood $U$ of $K$ in $W$ and a
non-vanishing symplectic vector field $v$ defined in $U$, s.t.,
$\xi|_{U\cap M}=\ker(\iota_v\omega|_{U\cap M})$, and $v$
transversally points out of $W$ along $U\cap M$. Let $\{\psi_t\}$ be
the flow of $v$. Without loss of generality, we assume that
$K_i\subset U\cap M$, and
\[
U= \bigcup_{0\leq t< \tau}\psi_{-t}(U\cap M).
\]
Let $W'$ be the smooth $4$-manifold obtained from $W$ by attaching a
$2$-handle to $W$ along $K$ with the framing given by the contact
framing of $K$ plus $s+1$ left twists, and $M'=\partial W'$. Then
the Legendrian surgeries along $K_1$ and $K_2$ give two contact
structures $\xi_1$ and $\xi_2$ on $M'$, and two corresponding
symplectic structures $\omega_1$ and $\omega_2$ on $W'$, such that
$(W',\omega_i)$ is a weak symplectic filling of $(M',\xi_i)$.
\begin{lemma}\label{cohomologous}
We can arrange that $[\omega_1]=[\omega_2]\in H^2(W';\mathbb{R})$.
\end{lemma}
\begin{proof}
Choose a small $\varepsilon\in(0,\tau)$. Let
$N=\psi_{-\varepsilon}(U\cap M)$, and
$\hat{K}_i=\psi_{-\varepsilon}(K_i)$. Then, for $i=1,2$, we find a standard
$2$-handle $\mathcal{H}_2$, a neighborhood $V$ of $\mathcal{H}_2\cap
X_-$ in $\mathbb{R}^4$, and an embedding $\varphi_i:V\rightarrow U$,
s.t., $\varphi_i^{\ast}(\omega)=\omega_{st}$, $(\varphi_i)_{\ast}(v_2)=v$,
$\varphi_i(V\cap X_-)\subset N$, and $\varphi_i(S^1_-)=\hat{K}_i$. Since
$K_1$ and $K_2$ are isotopic as framed knots,
$\varphi_1|_{\mathcal{H}_2\cap X_-}$ and
$\varphi_2|_{\mathcal{H}_2\cap X_-}$ are isotopic as smooth
embeddings. So there is a smooth isotopy
$\hat{\varphi}_s:\mathcal{H}_2\cap X_-\rightarrow N$, $1\leq
s\leq2$, s.t., $\hat{\varphi}_i=\varphi_i|_{\mathcal{H}_2\cap X_-}$.
After a change of variable in $s$, we assume that
\[
\hat{\varphi}_s=\left\{%
\begin{array}{ll}
\varphi_1|_{\mathcal{H}_2\cap X_-}, & \hbox{if $1\leq s\leq1.1$;}
\\
\varphi_2|_{\mathcal{H}_2\cap X_-}, & \hbox{if $1.9\leq s\leq2$.}
\\
\end{array}%
\right.
\]
Let $\widetilde{W}=W\times[1,2]$, and
$\widetilde{\mathcal{H}}_2=\mathcal{H}_2\times[1,2]$. Define
$\widetilde{\omega}$ and $\widetilde{\omega}_{st}$ to be the
pullbacks of $\omega$ and $\omega_{st}$ to $\widetilde{W}$ and
$\widetilde{\mathcal{H}}_2$, and define $\widetilde{v}$ and
$\widetilde{v}_{2}$ to be the lifts of $v$ and $v_2$ to
$U\times[1,2]$ and $\widetilde{\mathcal{H}}_2$ that are tangent to
the horizontal slices $U\times\{s\}$ and $\mathcal{H}_2\times\{s\}$,
$1\leq s\leq2$. Then $\iota_{\widetilde{v}}\widetilde{\omega}$ and
$\iota_{\widetilde{v}_{2}}\widetilde{\omega}_{st}$ are the
pullbacks of $\iota_v\omega$ and $\iota_{v_2}\omega_{st}$.
Define $\hat{\Phi}:(\mathcal{H}_2\cap X_-)\times[1,2]\rightarrow
N\times[1,2]$ by $\hat{\Phi}(p,s)=(\hat{\varphi}_s(p),s)$. By
mapping the flow of $\widetilde{v}_{2}$ to the flow of
$\widetilde{v}$, we extend $\hat{\Phi}$ to a diffeomorphism $\Phi$
from a neighborhood of $(\mathcal{H}_2\cap X_-)\times[1,2]$ in
$\widetilde{\mathcal{H}}_2$ to a neighborhood of
$\{\psi_{\varepsilon}\circ\hat{\varphi}_s(S^1_-)~|~1\leq s\leq2\}$
($\subset M\times[1,2]$) in $\widetilde{W}$. Clearly, we have
$\Phi_{\ast}(\widetilde{v}_{2})=\widetilde{v}$, and, near
$(\mathcal{H}_2\cap X_-)\times\{1,2\}$, we have
$\Phi^{\ast}(\widetilde{\omega})=\widetilde{\omega}_{st}$. Consider
the $1$-form $\Phi^{\ast}(\iota_{\widetilde{v}}\widetilde{\omega})$
defined in a neighborhood of $(\mathcal{H}_2\cap X_-)\times[1,2]$ in
$\widetilde{\mathcal{H}}_2$. It equals
$\iota_{\widetilde{v}_{2}}\widetilde{\omega}_{st}$ near
$(\mathcal{H}_2\cap X_-)\times\{1,2\}$. So, there is a $1$-form
$\widetilde{\alpha}$ on $\widetilde{\mathcal{H}}_2$, s.t.,
\[
\widetilde{\alpha}=
\left\{%
\begin{array}{ll}
\Phi^{\ast}(\iota_{\widetilde{v}}\widetilde{\omega}), & \hbox{near
$(\mathcal{H}_2\cap X_-)\times[1,2]$;} \\
\iota_{\widetilde{v}_{2}}\widetilde{\omega}_{st}, & \hbox{near
$\mathcal{H}_2\times\{1,2\}$.} \\
\end{array}%
\right.
\]
Define
\[
\widetilde{W}'=\widetilde{W}\cup_{\Phi}\widetilde{\mathcal{H}}_2,
\text{ and }
\widetilde{\omega}'=\left\{%
\begin{array}{ll}
\widetilde{\omega}, & \hbox{on $\widetilde{W}$;} \\
d\widetilde{\alpha}, & \hbox{on $\widetilde{\mathcal{H}}_2$.} \\
\end{array}%
\right.
\]
Then $\widetilde{\omega}'$ is a well-defined closed $2$-form on
$\widetilde{W}'$.
For $s\in[1,2]$, let
\[
g_s: W' \rightarrow \widetilde{W}'
\]
be the embedding given by $g_s(p)=(p,s)$ for any point $p$ in $W$
or $\mathcal{H}_2$. Then $\{g_s\}$ is an isotopy of embeddings of
$W'$ into $\widetilde{W}'$. For $i=1,2$, let
$\omega_i=g_i^{\ast}(\widetilde{\omega}')$. Then $(W',\omega_i)$ is
a weak symplectic filling of $(M',\xi_i)$, and
$[\omega_1]=[\omega_2]\in H^2(W';\mathbb{R})$.
\end{proof}
Now we are in the situation where the contact $3$-manifold
$(M',\xi_i)$ is weakly symplectically filled by $(W',\omega_i)$ for
$i=1,2$, $\xi_1$ and $\xi_2$ are isotopic, and
$[\omega_1]=[\omega_2]$. Let $\mathfrak{s}_i$ be the canonical
$Spin^{\mathbb{C}}$-structure of $(W',\omega_i)$. Suppose that
$\mathfrak{s}_1\neq\mathfrak{s}_2$. Then, by Lemma \ref{min-deg}, we
have
\[
\deg P_{\xi_1,\mathfrak{s}_1;[\omega_1]} = \deg
P_{\xi_2,\mathfrak{s}_1;[\omega_2]} > \deg
P_{\xi_2,\mathfrak{s}_2;[\omega_2]},
\]
and, similarly,
\[
\deg P_{\xi_2,\mathfrak{s}_2;[\omega_2]} = \deg
P_{\xi_1,\mathfrak{s}_2;[\omega_1]} > \deg
P_{\xi_1,\mathfrak{s}_1;[\omega_1]}.
\]
This is a contradiction. Thus, $\mathfrak{s}_1=\mathfrak{s}_2$.
Next we construct a symplectic decomposition of $(TW',\omega_i)$ in
a neighborhood of the $2$-handle $\mathcal{H}_2$. First, define a
$2$-plane distribution $\widetilde{\xi}$ on $U$ by
$\widetilde{\xi}|_{\psi_t(p)}=\psi_{t\ast}(\xi_p)$ for $p\in M$. And
let $\widetilde{\eta}=\widetilde{\xi}^{\bot_\omega}$, the
$\omega$-normal bundle of $\widetilde{\xi}$. Clearly, $v$ is a
non-vanishing section of $\widetilde{\eta}$.
Define $\Theta:\mathbb{R}^4\setminus\{0\}\rightarrow Sp(4)$ by
\[
\Theta(x_1,y_1,x_2,y_2) = \frac{1}{\sqrt{4x_1^2+y_1^2+4x_2^2+y_2^2}}
\left(%
\begin{array}{cccc}
2x_1 & y_1 & -2x_2 & y_2 \\
-y_1 & 2x_1 & -y_2 & -2x_2 \\
2x_2 & y_2 & 2x_1 & -y_1 \\
-y_2 & 2x_2 & y_1 & 2x_1 \\
\end{array}%
\right).
\]
Note that $\Theta$ factors through the natural inclusion of $SU(2)$
into $Sp(4)$ induced by
\[
a+bi \mapsto \left(%
\begin{array}{cc}
a & -b \\
b & a \\
\end{array}%
\right).
\]
Since $SU(2)$ is simply connected, we can modify
$\Theta|_{\mathcal{H}_2}$ in a small neighborhood of the
intersection $\mathcal{H}_2\cap\{y_1=y_2=0\}$, and then extend it
to a smooth map $\hat{\Theta}:\mathcal{H}_2\rightarrow Sp(4)$
(cf. \cite[Proposition 2.3]{Go}). Now let $\{e_1,e_2,e_3,e_4\}$
be the symplectic frame of $T\mathbb{R}^4|_{\mathcal{H}_2}$ defined
by
\[
(e_1,e_2,e_3,e_4) = (\frac{\partial}{\partial x_1},
\frac{\partial}{\partial y_1}, \frac{\partial}{\partial x_2},
\frac{\partial}{\partial y_2})\cdot\hat{\Theta}.
\]
Let $\varphi_i$ be the symplectic attaching map used above to
construct $(W',\omega_i)$, which is a symplectic diffeomorphism from
a neighborhood of $\mathcal{H}_2\cap X_-$ to a neighborhood of $K_i$
in $U$. Note that $\varphi_i$ maps $v_2$
($=\sqrt{4x_1^2+y_1^2+4x_2^2+y_2^2}\cdot e_1$ in the attaching
region) to $v$. So, in the attaching region, $\varphi_i$ identifies
$\widetilde{\xi}$ with the $2$-plane distribution on $\mathcal{H}_2$
spanned by $\{e_3,e_4\}$, and identifies $\widetilde{\eta}$ with the
$2$-plane distribution on $\mathcal{H}_2$ spanned by $\{e_1,e_2\}$.
Let
\[
\widetilde{\xi}_i=\left\{%
\begin{array}{ll}
\widetilde{\xi}, & \hbox{on $U$;} \\
\langle e_3,e_4 \rangle, & \hbox{on $\mathcal{H}_2$.} \\
\end{array}%
\right. \text{ and }~
\widetilde{\eta}_i=\left\{%
\begin{array}{ll}
\widetilde{\eta}, & \hbox{on $U$;} \\
\langle e_1,e_2 \rangle, & \hbox{on $\mathcal{H}_2$.} \\
\end{array}%
\right.
\]
Then
\[
TW'|_{U\cup_{\varphi_i}\mathcal{H}_2} = \widetilde{\xi}_i \oplus
\widetilde{\eta}_i.
\]
Moreover, $\widetilde{\xi}_i$ and $\widetilde{\eta}_i$ are
$\omega_i$-orthogonal to each other. Also, it is easy to see that
$\widetilde{\eta}_i$ has a non-vanishing section, since we can modify
$v_2$ near the intersection $\mathcal{H}_2\cap\{y_1=y_2=0\}$, and
then extend it to a non-vanishing multiple of $e_1$.
Choose an almost complex structure $J_i$ on
$U\cup_{\varphi_i}\mathcal{H}_2$ compatible with
$\omega_i|_{U\cup_{\varphi_i}\mathcal{H}_2}$ so that
$\widetilde{\xi}_i$ and $\widetilde{\eta}_i$ are complex sub-bundles
of $(TW'|_{U\cup_{\varphi_i}\mathcal{H}_2},J_i)$. Then
$\widetilde{\eta}_i$ becomes a trivial complex line bundle. Note
that $\mathfrak{s}_i|_{U\cup_{\varphi_i}\mathcal{H}_2}$ is the
$Spin^{\mathbb{C}}$-structure associated to $J_i$. There are natural
isomorphisms of complex line bundles
\[
\det (\mathfrak{s}_i)|_{U\cup_{\varphi_i}\mathcal{H}_2} ~\cong~ \det
(TW'|_{U\cup_{\varphi_i}\mathcal{H}_2},J_i) ~\cong~
\widetilde{\xi}_i.
\]
Moreover, there is a natural isomorphism
\[
\det (\mathfrak{s}_i)|_W \cong \det (\mathfrak{s}),
\]
where $\mathfrak{s}$ is the $Spin^{\mathbb{C}}$-structure on $W$
associated to $\omega$.
Let $A_i\subset M$ be the annulus bounded by $(-K)\cup K_i$ given in
Lemma \ref{c1-number}, and
\[
\Sigma_i=A_i\cup(\text{the core of the $2$-handle attached to }K_i),
\]
oriented so that $\partial\Sigma_i=-K$. Then
$[\Sigma_1]=[\Sigma_2]\in H_2(W',W)$. And, by Lemma \ref{bundles},
there exists $\beta~\in~H^2(W',W)$, such that
$j^{\ast}(\beta)=c_1(\det (\mathfrak{s}_1))-c_1(\det
(\mathfrak{s}_2))=0$, and
\begin{eqnarray*}
\langle \beta,[\Sigma_1] \rangle & = & \langle c_1(\det
(\mathfrak{s}_1),-\mu_1),[\Sigma_1] \rangle - \langle c_1(\det
(\mathfrak{s}_2),-\mu_2),[\Sigma_1] \rangle \\
& = & \langle c_1(\widetilde{\xi}_1,-u),[\Sigma_1] \rangle - \langle
c_1(\widetilde{\xi}_2,-u),[\Sigma_1] \rangle \\
& = & \langle c_1(\widetilde{\xi}_1,-u),[\Sigma_1] \rangle - \langle
c_1(\widetilde{\xi}_2,-u),[\Sigma_2] \rangle,
\end{eqnarray*}
where $u$ is the unit tangent vector field of $K$, and $\mu_i$ is
the section of $\det(\mathfrak{s}_i)|_K$ identified with $u$ through
the above isomorphisms.
Denote by $u_i$ the unit tangent vector field of $K_i$. Then $u_i$
extends over the core of the $2$-handle as a non-vanishing multiple
of $e_3$. So, by Lemma \ref{c1-number}, we have $\langle
c_1(\widetilde{\xi}_i,-u),[\Sigma_i]\rangle=\langle
c_1(\xi,(-u)\sqcup u_i),[A_i,\partial A_i]\rangle=2p_i-s$. Thus,
$\langle \beta,[\Sigma_1] \rangle=2(p_1-p_2)$. But, since
$j^{\ast}(\beta)=0$, there exists $\varsigma\in H^1(W)$, s.t.,
$\delta(\varsigma)=\beta$, where $\delta$ is the connecting map in
the long exact sequence of the pair $(W',W)$. So $2(p_1-p_2)=
\langle \delta(\varsigma),[\Sigma_1] \rangle = \langle
\varsigma,-[K] \rangle$. This implies $p_1=p_2$ when $[K]$ is
torsion, and $2p_1\equiv 2p_2 \pmod{d}$ when $[K]$ is non-torsion,
where $d=\gcd\{\langle\zeta,[K]\rangle \mid \zeta\in H^1(W)\}$.
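To spell out the final divisibility step (this is an expansion of the argument just given, not new input): the pairing $\zeta\mapsto\langle\zeta,[K]\rangle$ is a homomorphism $H^1(W)\rightarrow\mathbb{Z}$, so its image is a subgroup of $\mathbb{Z}$, and
\[
2(p_1-p_2)=\langle \varsigma,-[K] \rangle \in
\{\langle\zeta,[K]\rangle \mid \zeta\in H^1(W)\}=
\begin{cases}
\{0\}, & \text{if $[K]$ is torsion,}\\
d\,\mathbb{Z}, & \text{if $[K]$ is non-torsion,}
\end{cases}
\]
which gives $p_1=p_2$ in the torsion case and $2p_1\equiv 2p_2\pmod{d}$ otherwise.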
\vspace{.4 cm}
\textbf{Part (2).} We assume that $c^+(\xi)\neq0$.
Consider the symplectic $4$-manifold $(M\times I,d(e^t\alpha))$,
where $\alpha$ is a contact form for $\xi$, and $t$ is the variable
of $I$. Note that $\frac{\partial}{\partial t}$ is a symplectic
vector field in this setting, and it transversally points out of
$M\times I$ along $M\times \{1\}$. The flow of
$\frac{\partial}{\partial t}$ is the translation in the
$I$-direction. Let $\widetilde{\xi}$ be the $2$-plane distribution
on $M\times I$ generated by translating $\xi$ in the $I$-direction,
and $\widetilde{\eta}=\widetilde{\xi}^{\bot_{d(e^t\alpha)}}$, the
$d(e^t\alpha)$-normal bundle of $\widetilde{\xi}$. Note that
$\frac{\partial}{\partial t}$ is a section of $\widetilde{\eta}$.
We perform Legendrian surgery along $K_i\times \{1\}$. Let
$\varphi_i$ be the symplectic attaching map, which is a symplectic
diffeomorphism from a neighborhood of $S^1_-$ in $\mathcal{H}_2$ to
a neighborhood of $K_i\times \{1\}$ in $M\times I$. Let
\[
W = (M\times I) \cup_{\varphi_1} \mathcal{H}_2 \cong (M\times I)
\cup_{\varphi_2} \mathcal{H}_2.
\]
Then the two Legendrian surgeries give two symplectic structures
$\omega_1$ and $\omega_2$ on $W$, so that $(W,\omega_i)$ is a
symplectic cobordism from $(M,\xi)$ to $(M',\xi_i)$. Similar to the
construction used in Part (1), we construct an $\omega_i$-orthogonal
decomposition
\[
TW = \widetilde{\xi}_i \oplus \widetilde{\eta}_i,
\]
where $\widetilde{\xi}_i|_{M\times I}=\widetilde{\xi}$,
$\widetilde{\eta}_i|_{M\times I}=\widetilde{\eta}$, and, moreover,
$\frac{\partial}{\partial t}$ extends to a non-vanishing section of
$\widetilde{\eta}_i$. Let $J_i$ be an almost complex structure on
$W$ compatible with $\omega_i$ such that both $\widetilde{\xi}_i$
and $\widetilde{\eta}_i$ are complex sub-bundles of $(TW,J_i)$. Then
$\widetilde{\eta}_i$ becomes a trivial complex line bundle over $W$,
and, hence, $c_1(J_i)=c_1(\widetilde{\xi}_i)$.
Let $\mathfrak{s}_i$ be the canonical $Spin^\mathbb{C}$-structure
associated to $J_i$. Then it is also the canonical
$Spin^\mathbb{C}$-structure associated to $\omega_i$. If
$\mathfrak{s}_1$ and $\mathfrak{s}_2$ are non-isomorphic, according
to Proposition \ref{Gh1-3.3}, we have
\begin{eqnarray*}
F^+_{W,\mathfrak{s}_1}(c^+(\xi_1)) & = &
F^+_{W,\mathfrak{s}_2}(c^+(\xi_2)) ~=~ c^+(\xi) \neq 0 \\
F^+_{W,\mathfrak{s}_1}(c^+(\xi_2)) & = &
F^+_{W,\mathfrak{s}_2}(c^+(\xi_1)) ~=~ 0.
\end{eqnarray*}
But $\xi_1$ and $\xi_2$ are isotopic, so this is impossible. Hence
$\mathfrak{s}_1$ and $\mathfrak{s}_2$ are isomorphic, and
$c_1(\widetilde{\xi}_1)=c_1(\widetilde{\xi}_2)$.
Let $A_i$ be the annulus in $M\times\{0\}$ bounded by $(-K)\times
\{0\}\cup K_i\times \{0\}$ given by Lemma \ref{c1-number}, and
\[
\Sigma_i=A_i\cup (K_i\times I) \cup(\text{the core of the $2$-handle
attached to }K_i\times \{1\}),
\]
oriented so that $\partial\Sigma_i=-K\times\{0\}$. Then $\Sigma_1$ and
$\Sigma_2$ are isotopic relative to boundary. And, by Lemma
\ref{bundles}, there exists $\beta~\in~H^2(W,M)$, such that
$j^{\ast}(\beta)=c_1(\widetilde{\xi}_1)-c_1(\widetilde{\xi}_2)=0$,
and
\begin{eqnarray*}
\langle \beta,[\Sigma_1] \rangle & = & \langle
c_1(\widetilde{\xi}_1,-u),[\Sigma_1] \rangle - \langle
c_1(\widetilde{\xi}_2,-u),[\Sigma_1] \rangle \\
& = & \langle c_1(\widetilde{\xi}_1,-u),[\Sigma_1] \rangle - \langle
c_1(\widetilde{\xi}_2,-u),[\Sigma_2] \rangle,
\end{eqnarray*}
where $u$ is the unit tangent vector field of $K\times\{0\}$. Denote
by $u_i$ the unit tangent vector field of $K_i\times \{0\}$. Then,
as in Part (1), $u_i$ extends over $K_i\times I$ and the core of the
$2$-handle without singularities. So, by Lemma \ref{c1-number}, we
have $\langle c_1(\widetilde{\xi}_i,-u),[\Sigma_i]\rangle=\langle
c_1(\xi,(-u)\sqcup u_i),[A_i,\partial A_i]\rangle=2p_i-s$. Thus,
$\langle \beta,[\Sigma_1] \rangle=2(p_1-p_2)$. But, since
$j^{\ast}(\beta)=0$, there exists $\varsigma\in H^1(M)$, s.t.,
$\delta(\varsigma)=\beta$, where $\delta$ is the connecting map in
the long exact sequence of the pair $(W,M\times\{0\})$. So
$2(p_1-p_2)= \langle \delta(\varsigma),[\Sigma_1] \rangle = \langle
\varsigma,-[K] \rangle$. This implies $p_1=p_2$ when $[K]$ is
torsion, and $2p_1\equiv 2p_2 \pmod{d}$ when $[K]$ is non-torsion,
where $d=\gcd\{\langle\zeta,[K]\rangle \mid \zeta\in H^1(M)\}$.
\end{proof}
\begin{remark}
The weakly fillable case of Theorem \ref{surgery-main} can also be
proved using the monopole invariant defined by Kronheimer and
Mrowka \cite{KM}. Indeed, in Part (1) of the proof, after
proving Lemma \ref{cohomologous}, we are in the situation where
$\xi_1$ and $\xi_2$ are isotopic, and $[\omega_1]=[\omega_2]\in
H^2(W';\mathbb{R})$. After a possible isotopy supported near $M'$,
we assume that $\xi_1=\xi_2=\xi'$. Let $\mathfrak{s}_i\in
Spin^\mathbb{C}(W',\xi')$ be the element associated to $\omega_i$.
Then, by \cite[Theorems 1.1 and 1.2]{KM}, we have
$[\omega_1]\cup(\mathfrak{s}_2-\mathfrak{s}_1)\geq0$, and
$[\omega_2]\cup(\mathfrak{s}_1-\mathfrak{s}_2)\geq0.$ But
$[\omega_1]=[\omega_2]$. Thus,
$[\omega_1]\cup(\mathfrak{s}_2-\mathfrak{s}_1)=0$. And, according
to \cite[Theorem 1.2]{KM}, this implies that
$\mathfrak{s}_1=\mathfrak{s}_2$ as elements of
$Spin^\mathbb{C}(W',\xi')$, and, in particular, that
$c_1(\mathfrak{s}_1) = c_1(\mathfrak{s}_2)$. Then we can repeat
the rest of Part (1) of the proof, and prove the weakly fillable
case of the theorem.
\end{remark}
\section{Tight contact structures on Brieskorn homology spheres
$-\Sigma(2,3,6n-1)$}\label{236}
A small Seifert fibered manifold is a $3$-manifold Seifert fibered
over $S^2$ with $3$ singular fibers. We denote by $M(r_1,r_2,r_3)$
the small Seifert fibered manifold whose three singular fibers have
coefficients $r_1$, $r_2$ and $r_3$.
The classification of tight contact structures on a small Seifert
fibered manifold is a hard problem. When the
Euler number of the small Seifert fibered manifold is not $-1$ or
$-2$, these tight contact structures are all Stein
fillable, and are classified in \cite{GLS,Wu}. Note that all these
manifolds are $L$-spaces, i.e., their Heegaard Floer homology is
that of a lens space. There are also partial results when the Euler
number is $-1$ or $-2$ and the manifold is an $L$-space (see e.g.
\cite{GLS2}). In solving these examples, the use of the untwisted
Ozsv\'ath-Szab\'o contact invariant is essential. It appears that
the classification is much harder to achieve when the small
Seifert fibered manifold is not an $L$-space.
The Brieskorn homology sphere $-\Sigma(2,3,6n-1)$ is the small Seifert
fibered manifold $M(-\frac{1}{2},\frac{1}{3},\frac{n}{6n-1})$, which is not an $L$-space when
$n\geq2$. These appear to be good examples
of non-$L$-space small Seifert fibered manifolds to start with. In \cite{GS},
Ghiggini and Sch\"{o}nenberger showed that there is a
unique tight contact structure on $-\Sigma(2,3,11)$. Their method was
extended to classify tight contact structures on $-\Sigma(2,3,17)$ in
\cite{Gh1}. Next we discuss the generalization of their method.
Let $\Sigma$ be an oriented three-holed sphere with boundary components $C_1$, $C_2$ and $C_3$.
Then $-\partial\Sigma\times S^1=T_1+T_2+T_3$, where the ``$-$'' sign
means reversing the orientation, and $T_i=-C_i\times S^1$. We identify $T_i$ with
$\mathbb{R}^2/\mathbb{Z}^2$ by identifying $-C_i\times\{\text{pt}\}$ with $(1,0)^T$,
and $\{\text{pt}\}\times S^1$ with $(0,1)^T$. Also, for $i=1,2,3$,
let $V_i=D^2\times S^1$, and identify $\partial V_i$ with
$\mathbb{R}^2/\mathbb{Z}^2$ by identifying a meridian $\partial
D^2 \times \{\text{pt}\}$ with $(1,0)^T$ and a longitude
$\{\text{pt}\}\times S^1$ with $(0,1)^T$.
Define diffeomorphisms $\varphi_i: \partial V_i \rightarrow T_i$ by the following matrices.
\[
\varphi_1 =
\left(%
\begin{array}{cc}
2 & -1 \\
1 & 0 \\
\end{array}%
\right),
\hspace{.5cm}
\varphi_2 =
\left(%
\begin{array}{cc}
3 & 1 \\
-1 & 0 \\
\end{array}%
\right),
\hspace{.5cm}
\varphi_3 =
\left(%
\begin{array}{cc}
6n-1 & 6 \\
-n & -1 \\
\end{array}%
\right).
\]
Then
\[
-\Sigma(2,3,6n-1) \cong M(-\frac{1}{2},\frac{1}{3},\frac{n}{6n-1}) \cong (\Sigma
\times S^1)\cup_{(\varphi_1\cup\varphi_2\cup\varphi_3)}(V_1\cup V_2\cup V_3).
\]
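As a quick consistency check (our addition, using the sign convention that the Seifert invariant of $V_i$ is $r_i=-\beta_i/\alpha_i$ when $\varphi_i$ sends the meridian $(1,0)^T$ to $(\alpha_i,\beta_i)^T$), each $\varphi_i$ lies in $SL(2,\mathbb{Z})$ and reproduces the stated Seifert invariants:
\[
\varphi_1\begin{pmatrix}1\\0\end{pmatrix}=\begin{pmatrix}2\\1\end{pmatrix},\qquad
\varphi_2\begin{pmatrix}1\\0\end{pmatrix}=\begin{pmatrix}3\\-1\end{pmatrix},\qquad
\varphi_3\begin{pmatrix}1\\0\end{pmatrix}=\begin{pmatrix}6n-1\\-n\end{pmatrix},
\]
so $r_1=-\frac{1}{2}$, $r_2=\frac{1}{3}$, $r_3=\frac{n}{6n-1}$, and $\det\varphi_i=1$ for each $i$.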
Note that each $S^1$-fiber in the product $\Sigma\times S^1$ becomes a regular fiber of the Seifert fibration, and the framing of the $S^1$-fiber from the product is the same as the standard framing of a regular fiber of the Seifert fibration. Also, the core curve of each $V_i$ becomes a singular fiber of the Seifert fibration, and our choice of the longitude of $\partial V_i$ gives each singular fiber a framing. If $\xi$ is a contact structure on $-\Sigma(2,3,6n-1)$, and $K$ is a Legendrian regular fiber (resp. Legendrian singular fiber) of the Seifert fibration, then the twisting number $t(K)$ of $K$ is defined to be the index of the contact framing of $K$ with respect to the standard framing (resp. the framing we chose). We define
\[
t(\xi)=\max\{t(K) \mid K \text{ is a Legendrian regular fiber}\}.
\]
Etnyre and Honda \cite{EH} showed that $-\Sigma(2,3,5)$ does not admit a tight contact structure. So we assume that $n\geq2$ in the discussions below. The next two lemmas are proved following the arguments in \cite[Subsection 4.2]{GS}. Similar methods were also used in e.g. \cite{Wu}.
\begin{lemma}\label{neg-twist}
If $\xi$ is a tight contact structure on $-\Sigma(2,3,6n-1)$, then $t(\xi)\leq -2$.
\end{lemma}
\begin{proof}
We prove the lemma in two steps: first we prove that $t(\xi)<0$, and then that $t(\xi)\neq-1$.
Assume $t(\xi)\geq0$. Then we can find a Legendrian regular fiber $F$ with twisting number $0$. After an isotopy if necessary, assume $F$ is contained in the piece $\Sigma\times S^1$. Let $F_i$ be a Legendrian knot $C^0$-close to the core curve of $V_i$. After repeatedly stabilizing $F_i$, we assume that $t(F_i)=n_i\ll0$. After a further isotopy, assume that $V_i$ is a standard neighborhood of $F_i$. Then $\partial V_i$ is convex and has two parallel dividing curves of slope $\frac{1}{n_i}$. Measured in the coordinates of $T_i$, the slopes of the dividing curves of $T_1$, $T_2$ and $T_3$ are $s_1=\frac{n_1}{2n_1-1}$, $s_2=-\frac{n_2}{3n_2+1}$ and $s_3=-\frac{nn_3+1}{(6n-1)n_3+6}$, respectively. Since $n_i\ll0$, we have $s_1>0$, $s_2>-\frac{1}{2}$ and $s_3>-\frac{1}{5}$. Then we can isotope $T_i$ as in \cite[Subsection 4.2.2]{GS} and get a decomposition
\[
\Sigma \times S^1 = (\Sigma' \times S^1) \cup (T_1 \times [0,1]) \cup (T_2 \times [0,1]) \cup (T_3 \times [0,1]),
\]
such that
\begin{itemize}
\item $\Sigma'$ is a three-holed sphere in $\Sigma$ with \[ \partial \Sigma' \times S^1 = (-T_1\times\{1\}) \cup (-T_2\times\{1\}) \cup (-T_3\times\{1\});\]
\item $\xi|_{T_i \times [0,1]}$ is a minimally twisting tight contact structure with minimal convex boundary;
\item The slopes of dividing curves on $T_1\times \{0\},T_2\times \{0\},T_3\times \{0\}$ are $0,-\frac{1}{2},-\frac{1}{5}$, respectively, and the slopes of dividing curves on $T_1\times \{1\},T_2\times \{1\},T_3\times \{1\}$ are $\infty$.
\end{itemize}
Then we can follow the arguments in the proof of \cite[Theorem 4.14]{GS} to show that $\xi$ must be overtwisted. This contradiction shows that $t(\xi)<0$.
Now assume that $t(\xi)=-1$. Let $F\subset \Sigma\times S^1$ be a Legendrian regular fiber with $t(F)=-1$, and $V_i$ a standard neighborhood of a Legendrian singular fiber $F_i$ with $t(F_i)=n_i\ll0$. For $i=1,2$, connect $F$ to $\partial V_i$ by a vertical convex annulus $A_i$ that intersects the dividing curves of $\partial V_i$ efficiently. By the Imbalance Principle \cite[Proposition 3.17]{H1}, there is a $\partial$-parallel dividing curve on $A_i$ along $A_i \cap (\partial V_i)$. Using the bypass from this $\partial$-parallel dividing curve, by the Twisting Number Lemma \cite[Lemma 4.4]{H1}, we can increase $n_i$ by $1$. Repeating this procedure, we can increase $n_1,n_2$ up to $n_1=0$, $n_2=-1$. Measured in the coordinates of $T_i$, the dividing curves on $T_1$ and $T_2$ then have slopes $0$ and $-\frac{1}{2}$. Connect $T_1$ to $T_2$ by a vertical convex annulus $A$ in $\Sigma\times S^1$ with $\partial A$ intersecting the dividing curves of $T_1,T_2$ efficiently. Then, by the Imbalance Principle, there is a $\partial$-parallel dividing curve on $A$ along $A\cap T_2$. Attaching the bypass from this dividing curve to $T_2$, we change the slope of the dividing curves of $T_2$ to $-1$. Connect $T_1$ to this new $T_2$ by a vertical convex annulus $A'$ in $\Sigma\times S^1$ with $\partial A'$ intersecting the dividing curves of $T_1,T_2$ efficiently. If there are $\partial$-parallel dividing curves on $A'$, then, by the Legendrian Realization Principle \cite[Theorem 3.7]{H1}, we can find a Legendrian regular fiber with twisting number $0$, which is a contradiction. If there are no $\partial$-parallel dividing curves on $A'$, cut $\Sigma\times S^1$ along $A'$ and smooth the edges. This gives us a torus $T$ isotopic to $T_3$ with dividing curves of slope $0$. Note that the slope of the dividing curves of $T_3$ is negative since $n_3\ll0$. By \cite[Proposition 4.16]{H1}, there is then a torus isotopic to $T_3$ with vertical dividing curves (isotopic to a regular fiber). By the Legendrian Realization Principle, we can again find a Legendrian regular fiber with twisting number $0$, which is a contradiction. This implies that $t(\xi)\neq-1$.
\end{proof}
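For the reader's convenience (this computation is used but not displayed in the proof above), the slopes $s_i$ are obtained by pushing the dividing curves of $\partial V_i$ forward through the gluing maps; writing a curve of slope $\frac{q}{p}$ on a torus as the vector $(p,q)^T$, we get
\[
\varphi_1\begin{pmatrix}n_1\\1\end{pmatrix}=\begin{pmatrix}2n_1-1\\n_1\end{pmatrix},\quad
\varphi_2\begin{pmatrix}n_2\\1\end{pmatrix}=\begin{pmatrix}3n_2+1\\-n_2\end{pmatrix},\quad
\varphi_3\begin{pmatrix}n_3\\1\end{pmatrix}=\begin{pmatrix}(6n-1)n_3+6\\-nn_3-1\end{pmatrix},
\]
which give $s_1=\frac{n_1}{2n_1-1}$, $s_2=-\frac{n_2}{3n_2+1}$ and $s_3=-\frac{nn_3+1}{(6n-1)n_3+6}$.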
\begin{lemma}\label{upperbound}
There are at most $\frac{n(n-1)}{2}$ pairwise non-isotopic tight
contact structures on the Brieskorn homology sphere
$-\Sigma(2,3,6n-1)$.
\end{lemma}
\begin{proof}
Let $\xi$ be a tight contact structure on $-\Sigma(2,3,6n-1)$ with $t(\xi)=t$, where $t\leq -2$ by Lemma \ref{neg-twist}. Let $F\subset \Sigma\times S^1$ be a Legendrian regular fiber with $t(F)=t$. Isotope $V_i$ into a standard neighborhood of a Legendrian singular fiber $F_i$ with $t(F_i)=n_i\ll0$. For $i=1,2$, connect $F$ to $\partial V_i$ by a vertical convex annulus $A_i$ that intersects the dividing curves of $\partial V_i$ efficiently.
First consider the annulus $A_1$. Using the Imbalance Principle and the Twisting Number Lemma, we can increase $n_1$ by $1$, and repeat this process until either $n_1=0$ or $|2n_1-1|\leq |t|$, whichever comes first. If $n_1=0$ comes first, then we have $t(F)=-1$, which is a contradiction. This means that the procedure stops at an integer $n_1\leq-1$ with $|2n_1-1|\leq |t|$. If $|2n_1-1|<|t|$, then we can use the Imbalance Principle to increase the twisting number of $F$, which contradicts our choice of $F$. So $|2n_1-1|=|t|$, which implies that $t=2n_1-1\leq-3$.
Next consider the annulus $A_2$. Using the Imbalance Principle and the Twisting Number Lemma, we can increase $n_2$ by $1$, and repeat this process until either $n_2=-1$ or $|3n_2+1|\leq |t|$, whichever comes first. If $n_2=-1$ comes first, then we have $t(F)\geq -2$, which is a contradiction. This means that the procedure stops at an integer $n_2\leq-2$ with $|3n_2+1|\leq |t|$. If $|3n_2+1|<|t|$, then we can use the Imbalance Principle to increase the twisting number of $F$, which contradicts our choice of $F$. So $|3n_2+1|=|t|$, which implies that $t=3n_2+1\leq-5$.
Clearly, there is a positive integer $m$ satisfying $t=1-6m$, $n_1=1-3m$ and $n_2=-2m$. Now connect $T_1$ and $T_2$ by a vertical convex annulus $A$ with Legendrian boundary intersecting the dividing curves of $T_1,T_2$ efficiently. If $A$ has $\partial$-parallel dividing curves, then we can use the Legendrian Realization Principle to find a Legendrian regular fiber with twisting number greater than $t$, which contradicts our choice of $t$. So every dividing curve of $A$ connects one boundary component of $A$ to the other. Cut $\Sigma\times S^1$ along $A$ and smooth the edges. We get a torus $T$ isotopic to $T_3$ with dividing curves of slope $-\frac{m}{6m-1}$. If $m\geq n$, then
\[
-\frac{m}{6m-1} \geq -\frac{n}{6n-1} > s_3=-\frac{nn_3+1}{(6n-1)n_3+6},
\]
where $s_3=-\frac{nn_3+1}{(6n-1)n_3+6}$ is the slope of the dividing curves of $T_3$. By \cite[Proposition 4.16]{H1}, there is a torus isotopic to $T_3$ with vertical dividing curves (isotopic to a regular fiber). By the Legendrian Realization Principle, we can again find a Legendrian regular fiber with twisting number $0$, which is a contradiction. This shows that $m<n$.
The torus $T$ separates $-\Sigma(2,3,6n-1)$ into two sides. One side is a solid torus $V$ isotopic to $V_3$. The other side $(-\Sigma(2,3,6n-1))\setminus V$ is the union of $V_1$, $V_2$ and a neighborhood of the annulus $A$. The dividing curves of $A$ are unique up to an isotopy of $A$ fixing one boundary component, since none of the dividing curves is $\partial$-parallel. Fix the dividing curves on $A$. Since $V_1$ and $V_2$ are standard neighborhoods of Legendrian knots, it is easy to see that $\xi|_{(-\Sigma(2,3,6n-1))\setminus V}$ is uniquely determined up to isotopy relative to $T$. Measured in the coordinates of $V_3$, the slope of the dividing curves of $T$ is $m-n$. So, by \cite[Theorem 2.3]{H1}, up to isotopy relative to $T$, there are $n-m$ tight contact structures on $V$ satisfying the given boundary condition. Note that, for each pair of possible dividing sets of $A$, there is an isotopy of $-\Sigma(2,3,6n-1)$ that maps one to the other. Thus, up to isotopy of $-\Sigma(2,3,6n-1)$, there are at most $n-m$ tight contact structures on $-\Sigma(2,3,6n-1)$ with twisting number $1-6m$. So the number of tight contact structures up to isotopy on $-\Sigma(2,3,6n-1)$ is at most
\[
\frac{n(n-1)}{2}=\sum_{m=1}^{n-1}(n-m).
\]
\end{proof}
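The coordinate change at the end of the proof can be made explicit (our expansion of the computation, with the same convention of writing a slope-$\frac{q}{p}$ curve as the vector $(p,q)^T$): since $\det\varphi_3=1$,
\[
\varphi_3^{-1}=\begin{pmatrix}-1 & -6\\ n & 6n-1\end{pmatrix},
\qquad
\varphi_3^{-1}\begin{pmatrix}6m-1\\-m\end{pmatrix}=\begin{pmatrix}1\\m-n\end{pmatrix},
\]
so a dividing curve of slope $-\frac{m}{6m-1}$ on $T$ has slope $m-n$ when measured in the coordinates of $\partial V_3$, consistent with the count of $n-m$ tight contact structures on $V$ in the proof.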
It seems that the number of tight contact structures on $-\Sigma(2,3,6n-1)$ is exactly $\frac{n(n-1)}{2}$ since there are actually
$\frac{n(n-1)}{2}$ different Legendrian surgery constructions of
tight contact structures on $-\Sigma(2,3,6n-1)$. Before constructing
these surgeries, we need some preliminaries about tight contact structures on
the small Seifert fibered manifold
$M(-\frac{1}{2},\frac{1}{3},\frac{1}{6})$, which is also the torus
bundle over $S^1$ given by the monodromy map $\psi:T^2\rightarrow
T^2$ induced by
\[
\Psi = \left(
\begin{array}{cc}
1 & 1 \\
-1 & 0 \\
\end{array}
\right):
\mathbb{R}^2\rightarrow\mathbb{R}^2.
\]
\begin{proposition}\cite[Theorem 0.1]{H2}\label{class-236}
There is a sequence of pairwise non-isotopic tight contact
structures $\{\xi_m\}_{m=1}^\infty$ on
$M(-\frac{1}{2},\frac{1}{3},\frac{1}{6})$. Any tight contact
structure on $M(-\frac{1}{2},\frac{1}{3},\frac{1}{6})$ is isotopic
to one of the $\xi_m$'s.
\end{proposition}
\begin{proposition}\cite[Propositions 15 and 16]{DG}\label{filling-236}
There is a simply connected symplectic manifold $(W,\omega)$ that
weakly fills $(M(-\frac{1}{2},\frac{1}{3},\frac{1}{6}),\xi_m)$ for
all $m\geq1$.
\end{proposition}
\begin{proof}
Such a symplectic manifold $(W,\omega)$ is constructed in \cite[Propositions 15 and 16]{DG}. We only need to show that $W$
is simply connected. Note that
\[
\left(
\begin{array}{cc}
1 & 1 \\
-1 & 0 \\
\end{array}
\right)
=
\left(
\begin{array}{cc}
1 & 0 \\
-1 & 1 \\
\end{array}
\right)
\left(
\begin{array}{cc}
1 & 1 \\
0 & 1 \\
\end{array}
\right).
\]
By the construction of $W$, there is a Lefschetz fibration
$W\rightarrow D^2$ which has exactly two singular fibers. The
vanishing cycles of these two singular fibers induce a
$\mathbb{Z}$-basis for $\pi_1(T^2) \cong H_1(T^2) \cong \mathbb{Z}^2$. By \cite[Proposition 8.1.9]{GoSt}, there is an exact sequence
\[
\pi_1(T^2)\rightarrow\pi_1(W)\rightarrow\pi_1(D^2)(=0),
\]
where the first map is induced by the inclusion of $T^2$ into $W$ as
a regular fiber, and the second is induced by the projection. By
exactness, the first map is surjective; and since the vanishing
cycles generate $\pi_1(T^2)$ and bound vanishing disks in $W$, the
first map is also trivial. It follows that $\pi_1(W)=0$.
\end{proof}
The point $(0,0)^T\in\mathbb{R}^2$ induces the unique fixed point of
$\psi$, and gives a closed orbit $K_0$ in
$M(-\frac{1}{2},\frac{1}{3},\frac{1}{6})$, which is isotopic to the
$\frac{1}{6}$-singular fiber of the Seifert fibration. The torus
bundle structure gives $K_0$ a standard framing (cf. \cite{Gh1}).
For any Legendrian knot $K$ in a tight contact manifold
$(M(-\frac{1}{2},\frac{1}{3},\frac{1}{6}),\xi)$ that is smoothly
isotopic to $K_0$, define its twisting number $t(K)$ to be the index
of its contact framing relative to this standard framing. Denote by
$t(\xi)$ the maximum of all such twisting numbers.
\begin{proposition}\cite[Lemma 3.5]{Gh1}\label{maxtwist-236}
$t(\xi_m)=-m$.
\end{proposition}
Performing a $(-n)$-surgery along the $\frac{1}{6}$-singular fiber
of $M(-\frac{1}{2},\frac{1}{3},\frac{1}{6})$ with respect to the
standard framing, we get $-\Sigma(2,3,6n-1)\cong
M(-\frac{1}{2},\frac{1}{3},\frac{n}{6n-1})$. For each
$m\in\{1,\cdots,n-1\}$, let $K_0^{(m)}$ be a Legendrian knot in
$(M(-\frac{1}{2},\frac{1}{3},\frac{1}{6}),\xi_m)$ smoothly
isotopic to $K_0$ with
$t(K_0^{(m)})=-m$. If we stabilize $K_0^{(m)}$ $n-m-1$ times, and
then perform a Legendrian surgery on the resulting Legendrian knot,
we get a weakly fillable tight contact structure on
$-\Sigma(2,3,6n-1)$. (It is actually strongly fillable since
$-\Sigma(2,3,6n-1)$ is an integral homology sphere, cf. \cite{E6}.)
There are $n-m$ ways to perform such an iterated stabilization of
$K_0^{(m)}$ depending on the number of positive stabilizations used
in the process. Denote by $\xi_{m,p}$ the tight contact structure on
$-\Sigma(2,3,6n-1)$ from the iterated stabilization of $K_0^{(m)}$
with $p$ positive stabilizations and $n-m-p-1$ negative
stabilizations. This gives us a tight contact structure on
$-\Sigma(2,3,6n-1)$ for each pair $(m,p)$, where $1\leq m \leq n-1$
and $0\leq p \leq n-m-1$. Altogether, we get $\frac{n(n-1)}{2}$
tight contact structures on $-\Sigma(2,3,6n-1)$. The hard part is to
show that these tight contact structures are pairwise non-isotopic.
Using Theorem \ref{surgery-main}, we have the following partial result.
\begin{proposition}\label{fix-m}
If $0\leq p_1,p_2 \leq n-m-1$ and $p_1\neq p_2$, then $\xi_{m,p_1}$
and $\xi_{m,p_2}$ are not isotopic.
\end{proposition}
\begin{proof}
This is a straightforward consequence of part (1) of Theorem
\ref{surgery-main} and the fact that the symplectic filling
$(W,\omega)$ of $\xi_m$ is simply connected.
\end{proof}
\begin{proof}[Proof of Theorem \ref{2n-3}]
Let $V$ be the cobordism from
$M(-\frac{1}{2},\frac{1}{3},\frac{1}{6})$ to $-\Sigma(2,3,6n-1)$
induced by the $(-n)$-surgery along the $\frac{1}{6}$-singular
fiber. Then, from Theorem \ref{OS-surgery}, we know that
$F^+_V(c^+(\xi_{m,p}))=c^+(\xi_m)$. Ghiggini \cite{Gh1} showed
that $c^+(\xi_1)\neq c^+(\xi_2)$. So $\xi_{1,p_1}$ and $\xi_{2,p_2}$
are non-isotopic for $0\leq p_1\leq n-2$ and $0\leq p_2\leq n-3$.
Combining this with Proposition \ref{fix-m}, we see that
$\xi_{1,0},\cdots,~\xi_{1,n-2},~\xi_{2,0},\cdots,~\xi_{2,n-3}$ are
$2n-3$ pairwise non-isotopic tight contact structures on
$-\Sigma(2,3,6n-1)$.
\end{proof}
The author hopes that, by a more careful computation of the
Ozsv\'ath-Szab\'o contact invariants, we can strengthen Theorem \ref{2n-3} and show that $\xi_{m_1,p_1}$ and $\xi_{m_2,p_2}$ are not
isotopic when $m_1\neq m_2$, which would complete the classification
of tight contact structures on $-\Sigma(2,3,6n-1)$.
\section{An example where our method does not apply}
The author was informed of Example \ref{counter} by Ghiggini,
which was proposed by Stipsicz.
\begin{example}\label{counter}
Consider the Stein fillable contact structure on $S^2\times S^1$.
Let $K$ be any Legendrian knot that is smoothly isotopic to an
$S^1$-fiber. Performing a Legendrian surgery on $K$, we get a Stein
fillable contact $3$-manifold, where the underlying smooth
$3$-manifold is $S^3$. To see this, note that $S^2\times S^1$ can
be constructed by performing a $0$-surgery on an unknot in $S^3$,
and an $S^1$-fiber comes from another unknot that links once with
the surgery unknot. So, topologically, the result of performing a
Legendrian surgery along $K$ is the same as performing a surgery
along a Hopf link in $S^3$, where one of its components has
coefficient $0$, and the other has an integer coefficient. This
clearly gives $S^3$. But there is only one tight contact structure
on $S^3$. This means the result of the Legendrian surgery here
does not depend on the choice of the Legendrian knot.
\end{example}
Ghiggini further remarked that, in the setting of Theorem
\ref{surgery-main}, if $[K]$ is a primitive element of $H_1(M)$,
then $H^2((M\times I) \cup_{\varphi_i} \mathcal{H}_2)=H^2(M)$, and
there is a unique $Spin^{\mathbb{C}}$-structure on $(M\times I)
\cup_{\varphi_i} \mathcal{H}_2$ that extends the
$Spin^{\mathbb{C}}$-structure on $M$ given by the contact structure.
So it is not possible to use $Spin^\mathbb{C}$-structures on the
cobordism to distinguish between contact structures resulting from the
Legendrian surgeries on stabilizations of $K$. Clearly, in the
weakly fillable case of Theorem \ref{surgery-main}, if $[K]$ is a
primitive element of $H_1(W)$, a similar remark applies. (These
examples correspond to the situation when $d=1$ in Theorem
\ref{surgery-main}. And Theorem \ref{surgery-main} does not give any
information about the resulting contact structures when $d=1,2$.)
\section*{Acknowledgments}
The author would like to thank Tomasz Mrowka for motivation and
helpful discussions, Paolo Ghiggini for pointing out a mistake in
an earlier version of this paper, and Andr\'as Stipsicz for
providing Example \ref{counter} to illustrate the mistake. The
author would also like to thank Peter Ozsv\'ath for helpful
discussions about \cite{OS1}.
\section{Introduction}
With the increase in travel demand, congestion has become an escalating issue impairing the efficiency of transportation systems. To mitigate this effect, a substantial amount of effort has been devoted to developing novel technologies and policies for traffic management in the last few decades. Central to traffic management is the understanding of which factors affect travel time and/or traffic flow, to what extent, and how to effectively predict travel time in real time. Moreover, the accuracy and reliability of the prediction usually play an essential role in the deployment of those technologies and policies. Recent emerging sensing technologies bring in massive data from multiple sources, which enable us to examine travel time more closely. This research proposes a data-driven approach to holistically understand and predict highway travel time using massive data of traffic speeds, traffic counts, incidents, weather and events, all of which are acquired in real time from multiple data sources collected over the years. The prediction model selects the most related features from a high-dimensional feature space to better interpret travel times that vary substantially both by time of day and from day to day.
Because of the rising demand for reliable prediction of traffic state/travel time, a number of methods have been proposed in the last two decades. Among them, machine-learning based methods, coupled with basic traffic flow mechanisms, are gaining popularity and becoming the mainstream in the literature. To name a few representative studies, linear time series analysis has been widely recognized and used for traffic state forecasting, such as linear regression \cite{zhang2003short}, Auto-Regressive Integrated Moving Average (ARIMA) \cite{pace1998spatiotemporal,kamarianakis2005space,kamarianakis2003forecasting} and its extensions, including KARIMA \cite{van1996combining}, which uses a Kohonen self-organizing map; seasonal ARIMA \cite{williams1998urban}; and Vector-ARMA and STARIMA \cite{kamarianakis2003forecasting}. Other examples include Kalman filtering \cite{guo2014adaptive}, non-parametric regression models \cite{smith2002comparison,rahmani2015non}, and support vector machines \cite{cong2016traffic,wu2004travel} for predicting travel time and flow. \cite{mitrovic2015low} exploited compressed sensing to reduce the complexity of road networks; support vector regression (SVR) is then used for predicting travel speed on links. A trajectory reconstruction model is used by \cite{ni2008trajectory} for travel time estimation. \cite{qi2014hidden} applied a hidden Markov model that incorporates traffic volume, lane occupancy, and traffic speed data. \cite{ramezani2012estimation} and \cite{yeon2008travel} also used Markov chains to predict travel time on arterial routes. A fuzzy logic model was adopted by \cite{vlahogianni2008temporal} to model the temporal evolution of traffic flow. A hybrid empirical mode decomposition and ARIMA (EMD-ARIMA) model is developed by \cite{wang2016novel} for short-term freeway traffic speed prediction. In recent years, studies that use deep neural networks for traffic estimation have started to emerge.
For example, a restricted Boltzmann machine (RBM)-based RNN model is used to forecast highway congestion \cite{ma2015large}. A stacked auto-encoder deep architecture is used by \cite{lv2015traffic} to predict traffic flows on major highways of California. In addition, a Time Delay Neural Network (TDNN) model synthesized by a Genetic Algorithm (GA) is proposed and used for short-term traffic flow prediction \cite{abdulhai2002short}. A stacked RBM in combination with sigmoid regression is used for predicting short-term traffic flow \cite{siripanpornchana2016travel}. Last but not least, detailed reviews of short-term travel time prediction can be found in \cite{oh2015short} and \cite{wang2016novel}.
Despite tremendous research on travel time/flow prediction, many studies focus on exploring temporal correlation at a single location, or spatio-temporal correlation on a small-scale network (such as a corridor network). In fact, traffic states of two distant road segments can be strongly correlated temporally. However, only a few studies have taken such spatio-temporal correlations into consideration when building prediction models for large-scale networks. For example, \cite{kamarianakis2003forecasting} considered spatial correlations as a function of distance and degree of neighbors when applying a multivariate autoregressive moving-average (ARIMA) model to the forecasting of traffic speed. Furthermore, \cite{kamarianakis2012real} discussed extensions of the time series prediction model by considering correlations among neighbors and the use of LASSO for model selection. \cite{zou2014space} introduced a space–time diurnal (ST-D) method in which link-wise travel time correlation with a time lag is incorporated. \cite{cai2016spatiotemporal} proposed a k-nearest neighbors (k-NN) model to forecast travel time up to one hour in advance. This model uses redefined inter-segment distances by incorporating the grade of connectivity between road segments, and considers spatio-temporal correlations and state matrices to identify traffic states. \cite{min2011real} proposed a modified multivariate spatial-temporal autoregressive (MSTAR) model by leveraging the distance and average speed of road networks to reduce the number of parameters.
In the literature, travel time/speed and traffic counts are the two most commonly used metrics to be predicted, oftentimes based only on features constructed from themselves in the prediction model. Information on other features relevant to traffic states, such as weather, road incidents and local events, is rarely explored, although these may also exhibit potential correlation and causality relations with congestion on road segments. Previous studies have shown that adverse weather conditions have detrimental effects on traffic congestion \cite{goodwin2002weather,sridhar2006relationship}. Moreover, different weather features can bring various levels of impacts on traffic delays \cite{maze2006whether}. In this study, we incorporate a complete set of weather features in the prediction model: temperature, dew point, visibility, weather type (rain/snow/fog etc.), wind speed, wind gusts, pressure, precipitation intensity and pavement condition (wet/dry). It has also been shown that travel time is sensitive to traffic incidents of various kinds \cite{cohen1999measurement,kwon2011decomposition}, including crashes, planned work zones and disabled vehicles. The actual impacts of incidents on the travel time of particular road segments depend on a number of features of the incidents, such as time, location, type, severity and the number of lanes closed. Thus, we also take into consideration all those incident features in our prediction model.
In addition, explorations of spatio-temporal correlations of traffic states in the literature are limited to simple metrics, such as the distance between road segments, the degree of connections and the number of time lags. These are usually determined exogenously and do not necessarily reflect actual observations. In our approach, spatio-temporal features of the network and travel demand are more extensively explored from a variety of data sets in a data-driven fashion, and thus the prediction model can adapt to real-world traffic conditions in response to diverse roadway/demand disruptions. For example, in urban areas with a daily commuting pattern, time-dependent origin-destination (O-D) travel demand in the morning peak is adopted as a feature when predicting travel time of the afternoon peak, since demand patterns in the morning and afternoon peaks are oftentimes correlated, as will be shown later. Based on the O-D demand, alternative routes of the target road segments of prediction interest can be derived and incorporated into the prediction model to explore their spatio-temporal correlations.
The goal of this paper is two-fold: 1) analyze and interpret the spatio-temporal relation among highway congestion and various features such as weather, incidents, demand and travel speed in the context of dynamic networks, and 2) establish a reliable travel time prediction model for an arbitrary part of a large-scale network. The method incorporates features of both supply and demand including roadway network, travel demand, traffic speed, incidents, weather and local events, all of which are collected over several years.
Comparing to existing methods in literature, this paper makes the following contributions,
\begin{itemize}
\item We consider a comprehensive list of data sets in the context of large-scale networks to extract features and explore their spatio-temporal relations with travel time. Those data sets include physical roadway networks, travel demand approximated by traffic counts, traffic speed, incidents, weather and local events, all of which are in high spatio-temporal resolutions and collected by time of day over the years. Existing studies usually focus on one single data set or a subset of them, on a small-scale network, and the spatial or temporal resolution is relatively coarse \cite{sridhar2006relationship,kwon2011decomposition,cohen1999measurement}.
\item The proposed prediction model is able to provide reliable results 30 minutes in advance. This is more advantageous than most short-term data-driven traffic prediction of 5-15 minutes ahead, such as ARIMA, Kalman filtering and decision tree \cite{chien2003dynamic,zhang2003short,kwon2000day,guin2006travel}, just to name a few.
\item The two case studies are conducted for afternoon hours (2-6pm) on busy and unreliable corridors, when travel time varies the most significantly, both from day to day and within day. Many existing methods are applied to the entire day on mildly congested roads, which may partially alleviate the prediction challenge. In this sense, our model attempts to most effectively capture factors impacting traffic throughput and congestion evolution by analyzing the time of day period with the highest travel time variability.
\item Performance of all prediction models, including a time-series model as a benchmark, are estimated through multi-fold cross validations in this paper, rather than separating training data set and testing data set in an ad-hoc manner. Cross validation results in a more robust model selection process and reliable estimators for model errors comparing to the conventional train/test validation in many studies.
\item By exploring the spatio-temporal correlation among multi-source features, the travel time prediction model can be interpreted with findings and insights from real-world traffic operation.
\end{itemize}
The rest of this paper is organized as follows. The proposed method for data analytics and travel time prediction is introduced in Section 2. The method is then applied to two case studies: (1) a 6-mile highway corridor of I-270 Northbound near Washington, DC; and (2) a 2.3-mile highway segment of I-376 around downtown Pittsburgh. Results and findings are presented in Section 3. Conclusions and future work are discussed in Section 4.
\section{Methodology}
The proposed method has two main parts: data analytics and prediction model selection. The former aims to improve our understanding of the correlations among various features from multiple data sources and possible interpretations of congestion. The following methods are adopted: clustering, correlation analysis and principal component analysis. The latter part picks out a subset of features that are the most critical and robust in predicting travel time by incorporating the results from the analytics, prior knowledge of the network characteristics, and estimated recurrent travel demand. Finally, the best prediction model is selected out of several candidates, including LASSO (least absolute shrinkage and selection operator) regression, stepwise regression, support vector regression and random forest. The overall procedure of the approach is shown in Fig. \ref{data_proc}.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{Data_proc.png}
\caption{Procedure of data analytics and model selection}
\label{data_proc}
\end{figure}
\subsection{Multiple data sources related to travel time}
\label{datasource}
For short-term travel time prediction, the two most widely used data sources in the literature are speed/travel time and traffic counts, which are direct indicators of real-time traffic states. To understand spatio-temporal correlations of speed and counts among road segments, this research incorporates features from all road segments in the region of interest when predicting travel time for each road segment, as compared to only the road segment itself or a few in its vicinity in most existing studies. The intuition is that many road segments, though distant from the road segment of interest, can be on routes parallel to the route containing the segment of interest. Or, more generally, the segment of interest may be impacted as a result of the ripple effect of perturbations to some distant segments. Thus, examining only the segment itself and adjacent segments may overlook critical spatio-temporal correlations among distant road segments. The speeds and counts of all road segments, measured for each time-of-day interval (such as 5 minutes), are used as features.
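As a sketch, lagged speed features of this kind can be assembled from a matrix of 5-min observations; the matrix shape, lag choice and data below are hypothetical:

```python
import numpy as np

def lag_features(speed, lags):
    """Build lagged features from a (T, n_segments) matrix of 5-min speeds.
    Row t of the output stacks the observations at t - l for each lag l,
    so a prediction made for 30 minutes ahead can use earlier intervals."""
    T = speed.shape[0]
    m = max(lags)
    # for lag l, rows m-l .. T-l-1 align with prediction rows m .. T-1
    return np.hstack([speed[m - l:T - l] for l in lags])

speed = np.arange(20.0).reshape(10, 2)   # 10 intervals, 2 segments
F = lag_features(speed, lags=[1, 2])
# F has 10 - 2 = 8 rows and 2 segments * 2 lags = 4 columns
```

In the case studies, the same construction would simply be applied with more segments and deeper lags (e.g., 6 lags spanning 30--60 minutes).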
Apart from speeds and counts, other data sources have also been shown to influence traffic congestion. Previous studies have shown that adverse weather conditions have detrimental effects on traffic congestion \cite{goodwin2002weather,sridhar2006relationship}. Moreover, different weather features may bring various levels of impacts on traffic delay \cite{maze2006whether}. In this study, we incorporate the following weather features in the model: temperature, dew point, visibility, weather type (rain/snow/fog etc.), wind speed, wind gusts, pressure, precipitation intensity and pavement condition (wet/dry). Travel time is also sensitive to traffic incidents of various kinds \cite{cohen1999measurement,kwon2011decomposition}, including crashes, planned work zones and disabled vehicles. The actual impacts of an incident on travel time depend on a number of factors, such as time of day, location, type, severity and the number of lanes closed. In this study, multiple incident features are extracted for each road segment of study to encapsulate the location information of incidents relative to the segment, including several binary variables indicating whether the incident is upstream or downstream, whether or not the incident is on one of its alternative routes, and whether or not the incident is along its opposite direction. Furthermore, the severity of incidents is incorporated into the set of incident features. The severity is described by the following features: number of lanes closed, number of motorists injured, and number of vehicles involved, all of which may be available from crash data. All those incident features are carefully examined in the correlation analysis and ultimately selected for the prediction model.
Last but not least, local events, such as sports games and festivals, can alter the daily travel demand and may influence congestion on relevant road segments. The actual impacts of an event on traffic are related to its date/time, location and scale. Due to the sparsity and heterogeneity of events, it may be infeasible to incorporate all those factors for each event without overfitting. In principle, events with potential correlations are manually selected. The event type, location and time are included in the set of features. For instance, one event feature can be a binary variable indicating whether a particular type of event takes place in the evening.
\subsection{Network characteristics and recurrent demand level}
The characteristics of a road network as well as the daily travel demand are considered when forming the initial set of features to be selected for relating traffic speed and counts.
First, for any road segment of study, we look into those road segments that can potentially carry major recurrent traffic flow upstream or downstream, since their traffic flows are most likely to be correlated with that of the segment of study. These segments can be extracted and selected by examining possible routes of the time-dependent origin-destination (O-D) travel demand.
Second, correlations between segments on alternative routes for each OD pair are considered in our approach. The travel demand, and thus congestion level, on alternative routes are usually correlated, even if some segments are distant from each other. Correlations among such segments would otherwise not be learned when the degree of connections in a graph is used to capture correlations. Alternative routes are extracted from possible routes for each origin-destination pair.
Third, the day-to-day variations of daily commuting traffic are considered in this study. Travel demand during the morning peak can be highly correlated with demand during the afternoon peak in the opposite direction of travel on the same day. When analyzing and predicting afternoon-peak travel time, morning-peak travel time in the other direction can be used as an indicator to approximate the demand level of daily commuting traffic.
To sum up, for the road segment of study, speed/counts features extracted from the following segments are included in the initial speed/counts feature set prior to model selection and prediction:
\begin{itemize}
\item All upstream and downstream road segments that carry the same traffic flow along main routes with the segment of study, during the time period of prediction.
\item All road segments on the alternative routes which are derived for all origin-destination pairs.
\item When analyzing/predicting travel time of afternoon peaks, traffic counts in the opposite direction during the morning peak are included to approximate the daily commute demand level.
\end{itemize}
We further illustrate how we can approximate the daily demand level. Cumulative traffic counts, from early in the morning until immediately before morning congestion starts, may reveal the demand level. Therefore, the demand level can be approximated by traffic counts at locations surrounding the segment of study, observed over the same time-of-day period from day to day. The time period can start as early as 4AM when commuting starts, and end at the latest time of day, throughout the entire year, before the morning traffic breaks down (e.g., 6:30AM). A traffic breakdown can be defined as the time when travel speed drops below a certain threshold, such as 40 miles/hour. Traffic counts during congestion are limited by the flow capacity of the furthest downstream link, and thus cannot be directly used to estimate the daily demand level.
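A minimal sketch of this demand-level proxy, using the 40 mph breakdown threshold mentioned above; the count and speed series are hypothetical:

```python
import numpy as np

def demand_level(counts, speeds, threshold=40.0):
    """Cumulative early-morning counts up to the first traffic breakdown
    (speed below `threshold` mph), a proxy for the daily commute demand.
    counts, speeds: 1-D arrays of 5-min observations starting at 4AM."""
    below = np.where(speeds < threshold)[0]
    t_break = below[0] if len(below) else len(speeds)
    return counts[:t_break].sum()

# hypothetical day: speed first drops below 40 mph at the 4th interval
speeds = np.array([66.0, 64.0, 63.0, 35.0, 30.0, 55.0])
counts = np.array([80, 120, 150, 200, 210, 160])
print(demand_level(counts, speeds))  # → 350  (80 + 120 + 150)
```

In practice the per-day breakdown times would be compared and the earliest one over the year used as a common cut-off, as described above.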
\subsection{Data analytics}
We apply several data analytics techniques to the multi-source traffic data to gain a better understanding of how features are related to congestion, and provide insights for building a reliable prediction model.
\subsubsection{Clustering}
\label{cluster_section}
In general, daily traffic patterns exhibit day-of-week and/or seasonal effects. For instance, days within each of the categories winter or summer, Monday, Tuesday through Thursday, and Friday may exhibit similar patterns. Thus, we cluster days of the year into several categories, then conduct data analytics and travel time prediction for each cluster independently.
We can first apply K-means clustering to separate days into several clusters. Then, depending on the distribution of days of the week in each cluster, we determine whether or not it is suitable to aggregate data by day of week. For example, if most Mondays fall into one cluster and rarely appear in another, we can infer that Mondays exhibit a traffic pattern distinct from other days and should be analyzed separately. The objective of K-means clustering on observations $(x_1, x_2, \dots, x_n)$ is to find a partition $S = \{S_1, \dots, S_k\}$ into $k$ clusters that satisfies
\begin{equation}
\arg \min_S \sum_{i=1}^{k}\sum_{x \in S_i} || x-\mu_i||^2 = \arg \min_S \sum_{i=1}^{k}|S_i|\text{Var} S_i
\label{kmeans}
\end{equation}
where $\mu_i$ is the mean of all observations in set $S_i$.
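A minimal NumPy sketch of the objective in Eq. (\ref{kmeans}), solved by Lloyd's algorithm; the daily feature vectors and cluster count below are hypothetical:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Lloyd's algorithm for the K-means objective: alternately assign
    each observation to the nearest center and recompute cluster means."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # squared distances of every point to every center
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        new_centers = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# toy "days": two well-separated groups of daily feature vectors
X = np.vstack([np.zeros((5, 2)), 10.0 + np.zeros((5, 2))])
labels, centers = kmeans(X, k=2)
```

In our setting each row of $X$ would hold one day's aggregated speed/count profile, and the resulting labels define the per-cluster prediction models.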
In another example, we may want days from the same month/season to be grouped together. In this case, we can apply hierarchical agglomerative clustering (HAC) \cite{zepeda2013hierarchical}, which aims to minimize the within-cluster sum of squared errors with the additional constraint that observations in the same cluster must form a connected subgraph. In other words, we can enforce that all days within the same cluster form a continuous range of dates.
In terms of the number of clusters, our method explores a range of options and selects one based on the goodness of fit as well as the size of the training set. In the case that the training set is large enough, e.g., daily observations over more than three years are available, we can split the data into more clusters. For hierarchical agglomerative clustering, it is essential that each cluster contains a sufficient number of days to avoid over-fitting.
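A sketch of the date-contiguity idea: only adjacent clusters may merge, so every cluster is a continuous range of dates. The greedy merge criterion (smallest increase in within-cluster sum of squared errors) and the toy congestion indices are illustrative assumptions, not the exact HAC variant of \cite{zepeda2013hierarchical}:

```python
import numpy as np

def contiguous_hac(x, k):
    """Greedy agglomerative clustering of a date-ordered 1-D series:
    only adjacent clusters may merge, so every resulting cluster is a
    contiguous range of dates."""
    def sse(members):
        v = x[np.array(members)]
        return ((v - v.mean()) ** 2).sum()

    clusters = [[i] for i in range(len(x))]
    while len(clusters) > k:
        # merge the adjacent pair whose union increases within-cluster SSE least
        costs = [sse(a + b) - sse(a) - sse(b)
                 for a, b in zip(clusters, clusters[1:])]
        j = int(np.argmin(costs))
        clusters[j:j + 2] = [clusters[j] + clusters[j + 1]]
    return clusters

# toy daily congestion indices: two "seasons" of days
x = np.array([1.0, 1.1, 0.9, 5.0, 5.2, 4.8])
print(contiguous_hac(x, k=2))  # → [[0, 1, 2], [3, 4, 5]]
```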
\subsubsection{Correlation analysis}
To identify the relationships among different traffic features, correlation analysis is conducted first. The Pearson product-moment correlation coefficient is defined by,
\begin{equation}
\rho_{X,Y} = \text{corr}(X,Y) = \frac{\text{cov}(X,Y)}{\sigma_X \sigma_Y}=\frac{E[(X-\mu_X)(Y-\mu_Y)]}{\sigma_X \sigma_Y}
\label{corr}
\end{equation}
By calculating the correlation matrix and conducting hypothesis tests on whether certain pairs of features are correlated, we analyze the data in the following ways:
\begin{itemize}
\item Calculate and evaluate the correlation among speed/count features of all road segments. We also explore the relationship of congestion among different road segments under various time lags. This helps us analyze how congestion propagates spatially and temporally.
\item Test and analyze the correlation between incidents, weather features and the travel time on the segment of study. This helps us determine if these factors are correlated with congestion, and to what extent hazardous conditions, such as crash and wet surface, impact the segment of study.
\item Correlation analysis results can also be used for feature selection. High correlation between features indicates redundancy in the feature set. In particular, if two road segments exhibiting highly correlated speeds/counts are adjacent to each other, we can either remove one of them or replace both with their average in the feature set.
\item As we approximate daily travel demand level using morning traffic counts, correlation analysis helps determine which road segments and time periods are the most critical in predicting afternoon-peak travel time. Apart from comparing the correlation coefficients and conducting hypothesis tests, we also plot the day-to-day morning traffic counts against afternoon-peak counts, in order to infer if the morning counts are useful.
\end{itemize}
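As a sketch of the analyses above, the coefficient in Eq. (\ref{corr}), including time-lagged variants, can be computed directly; the speed and precipitation series below are hypothetical:

```python
import numpy as np

# hypothetical 5-min series: speeds on two nearby segments and rain intensity
speed_a = np.array([62.0, 60.0, 45.0, 30.0, 28.0, 50.0])
speed_b = np.array([65.0, 61.0, 48.0, 33.0, 30.0, 52.0])
rain    = np.array([0.0, 0.1, 0.4, 0.8, 0.9, 0.2])

R = np.corrcoef(np.vstack([speed_a, speed_b, rain]))
# R[0, 1]: strong positive correlation between the two segments' speeds
# R[0, 2]: negative correlation between speed and rain intensity

# a lagged coefficient probes how congestion propagates between segments
lag1 = np.corrcoef(speed_a[:-1], speed_b[1:])[0, 1]
```

Hypothesis tests on each coefficient (e.g., a $t$-test on $\rho$) would then decide which features are retained.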
\subsubsection{Principal component analysis}
The selected features can be further explored by principal component analysis (PCA). PCA can break the entire set of features into several uncorrelated components via an orthogonal transformation. By conducting a PCA, the most important sources of variations from all the features can be found. PCA also allows us to compress the high-dimensional data by aggregating features into several critical dimensions.
In our method, PCA is conducted by first gathering all initial speed/counts features as well as the travel time of the segment of study, combining them with other features of incidents, weather and local events, and applying singular value decomposition \cite{abdi2010principal} to all features. We then sort all principal components by their importance and analyze the composition of the top few principal components to understand the main sources of variation in the feature space. Also, by comparing the top principal components between different clusters of days, we are able to discover which features are the keys to distinguishing clusters.
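A minimal SVD-based PCA sketch; the three synthetic feature columns (two nearly identical speed series plus an independent weather series) are hypothetical stand-ins for the feature matrix described above:

```python
import numpy as np

def pca(X):
    """PCA via singular value decomposition of the centered data matrix
    (rows = days, columns = features)."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = s ** 2 / (s ** 2).sum()   # explained-variance ratios
    return Vt, explained

# the first component should capture the shared speed variation
rng = np.random.default_rng(1)
base = rng.normal(size=100)
X = np.column_stack([base,
                     base + 0.01 * rng.normal(size=100),
                     rng.normal(size=100)])
components, ratios = pca(X)
```

Inspecting the loadings in `components[0]` reveals which original features dominate the leading source of variation.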
\subsection{Prediction model}
Based on the results of data analytics, a travel time prediction model can be established in the following steps.
\subsubsection{Dimension reduction (feature selection) from all the speed/counts features}
\label{221}
The number of features available from the aforementioned data sources is excessive compared to the number of available data points. For instance, a segment of interest on I-270 Northbound (shown later in the case study) has over 500 road segments of speed measurements in the regional network. When using five 5-min time lags, the prediction model has over 2,500 features from speed data alone, all of which may have some degree of correlation with the travel time on I-270. There are around 260 workdays within a year that have afternoon peaks no longer than 5 hours. Predicting travel time with all those features, let alone features on weather/events/incidents, can be computationally inefficient and, more importantly, runs the risk of over-fitting. Hence, it is essential to reduce the dimension of the feature space before applying a prediction model.
The number of speed/counts features in the initial feature set depends on the complexity of the road network and the number of time lags considered. For a particular segment of interest, a significant portion of those features are uncorrelated or redundant. Various regression models can be used to remove those redundant features, such as ridge regression and LASSO. For LASSO, by tuning the $\lambda$ value in the formula below, we trade off between the resulting dimension of features and the bias of the mean estimator.
\begin{equation}
\min_{\beta \in \mathbb{R}^p}\{\frac{1}{N}||y-X\beta||^2_2 + \lambda||\beta||_1\}
\label{lassoequ}
\end{equation}
where $X$, $y$ and $\beta$ are the features, the travel time of the segment of study, and the coefficients of features, respectively. For ridge regression, we can set a threshold on the coefficients, below which the corresponding features are removed. We can achieve similar flexibility by adjusting this threshold value. Although there is no strict rule on the number of features to be retained in the final prediction model, the following factors can serve as metrics in determining a proper number: the size of the training data set, the complexity of the road network, and the expected minimum percentage of variation to be explained by the selected features (R-squared value). Notice that it is safe to leave slightly more features than necessary, since subsequent steps will further reduce the dimension of the entire feature space when weather/incidents/events are added.
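An illustrative sketch of Eq. (\ref{lassoequ}): here LASSO is solved by iterative soft-thresholding (ISTA), an assumed solver choice (any standard LASSO implementation would do), on a synthetic feature matrix and travel-time vector in which only two of ten features matter:

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=2000):
    """Iterative soft-thresholding for min (1/N)||y - Xb||_2^2 + lam*||b||_1."""
    n, p = X.shape
    step = n / (2.0 * np.linalg.norm(X, 2) ** 2)   # 1 / Lipschitz constant
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = -2.0 / n * (X.T @ (y - X @ b))      # gradient of the LS term
        z = b - step * grad
        b = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return b

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
beta_true = np.array([3.0, -2.0] + [0.0] * 8)
y = X @ beta_true + 0.1 * rng.normal(size=200)
b = lasso_ista(X, y, lam=0.5)
# coefficients of the eight irrelevant features are driven to (near) zero
```

Increasing $\lambda$ drives more coefficients exactly to zero, which is how the dimension of the speed/counts feature set is reduced.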
While regression-based methods can pick out features that are linearly correlated with the travel time on the segment of study, those models may not reveal the actual, possibly non-linear, relationships. Moreover, since data may be noisy and relatively sparse, it may not be wise to rely solely on those data-driven methods. Thus, besides regression models, we also carefully review the selected features and make modifications if necessary. For example, if no segment is selected along the downstream/upstream of the segment of study, we should add some of them into the feature set manually, as they may exhibit non-linear relations with the target segment, or their relations may be hidden by other highly correlated features chosen by the regression. After this, all selected features are used to build a non-linear regression model, such as a random forest.
\subsubsection{Model selection}
After pre-selecting the speed/counts features, we add weather/incidents/events features to construct a comprehensive prediction model. Correlation analysis is first conducted to help select features that are highly correlated with the travel time of the target segment. Next, we apply a regression model to those selected features only. In this research, three models are adopted: LASSO linear regression, stepwise regression and random forest. Finally, we choose the one with the best prediction performance, evaluated through cross validation, as the final prediction model.
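The model-selection step can be sketched as a k-fold cross-validation loop; the two candidates below (ordinary least squares vs. a mean-only baseline) are stand-ins for the actual candidate regressors, and the data are synthetic:

```python
import numpy as np

def cv_mse(fit, predict, X, y, k=5, seed=0):
    """k-fold cross-validated mean squared error of one candidate model."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        params = fit(X[train], y[train])
        errs.append(np.mean((y[test] - predict(X[test], params)) ** 2))
    return float(np.mean(errs))

ols_fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
ols_pred = lambda X, b: X @ b
mean_fit = lambda X, y: y.mean()
mean_pred = lambda X, m: np.full(len(X), m)

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 3))
y = X @ np.array([1.0, 0.5, -0.5]) + 0.1 * rng.normal(size=120)
scores = {"ols": cv_mse(ols_fit, ols_pred, X, y),
          "baseline": cv_mse(mean_fit, mean_pred, X, y)}
best = min(scores, key=scores.get)   # candidate with the smallest CV error
```

Cross validation of this kind yields a more robust error estimate than a single ad-hoc train/test split, as noted in the contributions.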
\section{Case studies}
To evaluate the performance of our method and explore in-depth insights from data analytics, we conduct two case studies. Details and results of the two case studies are presented in the next two subsections.
\subsection{I-270 Northbound}
The first case study is conducted on a 6-mile-long corridor of I-270 Northbound, between Montgomery Ave. and Quince Orchard Rd., in the D.C. metropolitan region. The stretch of highway of interest is marked in purple in the left figure of Fig. \ref{DC_map}. The time period of study is 2PM-6PM on weekdays.
I-270 is a major corridor connecting D.C. metropolitan area with municipalities northwest of DC, such as Frederick and Gaithersburg. This segment is frequently congested during afternoon peak hours, and has substantial travel time variations from day to day and within days.
\subsubsection{Data sources and features}
In the two case studies, travel time on road segments is measured as a travel rate, namely the travel time over the entire stretch of study divided by its total length, i.e., the reciprocal of the space mean speed. An initial set of features is constructed as follows:
\begin{enumerate}
\item Speed: TMC (Traffic Message Channel) based speed data from INRIX are used, covering 369 TMC segments in total, which span all major highways and arterials of the area of study. Historical data are available in 5-minute intervals. Those TMCs are shown in the right map of Fig. \ref{DC_map} and listed below:
\begin{itemize}
\item Upstream/downstream of I-270, both Northbound and Southbound. (Orange on the map)
\item Roadway network of the northern D.C. area. (Blue on the map)
\item MD 335 Northbound, an alternative route to I-270 Northbound during afternoon peaks. (Red on the map)
\item I-495 North, a major eastbound and westbound highway of the northern D.C. area. (Green on the map)
\end{itemize}
We predict the travel time of the stretch of study 30 minutes in advance in real time. Six time lags are considered, from 60 minutes to 30 minutes in advance, in 5-min intervals.
\begin{figure*}[!t]
\centering
{\includegraphics[width=2.5in]{DC_only_new}}
\hfil
{\includegraphics[width=2.5in]{DC_allroad_new}}
\caption{Left: A 6-mile corridor of I-270 Northbound; Right: Speed data: 369 TMC segments}
\label{DC_map}
\end{figure*}
\item Traffic counts: 5-min traffic flow counts from fixed-location sensors at multiple locations on I-270 Northbound and Southbound are used. We use data from the four sensors with the best data quality, two from each direction of this corridor. Morning and afternoon demand levels are estimated using the approximation method discussed in Section \ref{221}: the demand level is approximated by the cumulative counts in the early morning, before congestion has formed. In this case study, it is the aggregated counts on I-270 Southbound from 4AM to the earliest time over all days in 2017 when speed drops below 90\% of the free-flow speed. Data from multiple sensors during this time period are summed up as the final value. The afternoon demand level is calculated analogously as the cumulative counts from 12PM to 3PM.
\label{inci_feature}
\item Incidents: we use incidents data collected by the Departments of Transportation of Washington D.C., Maryland, and Virginia. Each incident is classified into one of the following categories: crash, emergency road works, planned work zone and disabled vehicle. Each incident entry comes with a start and end time, either provided in the data or imputed. Note that we predict travel time/rate 30 min ahead; thus, incidents reported within the 30-minute prediction horizon cannot be used as features. Based on the severity and geographical information of each incident, the following binary features are created:
\begin{itemize}
\item An incident on the upstream of the segments of study;
\item A severe incident on the downstream of the segments of study;
\item A non-severe incident on the downstream of the segments of study;
\item An incident on the opposite direction (I-270 S).
\item An incident on the downstream of the segments of study that is at least 3 miles away.
\item An incident on the alternative route (MD 335 N);
\item An incident in northern D.C. (the far upstream of the segments of study).
\end{itemize}
In particular, severe incidents are defined as those with personal injuries reported. Upstream I-270 includes all road segments within 3 miles of the south end of the segments of study, while downstream I-270 includes those within 3 miles of the north end as well as the stretch of study itself. The reason for separating downstream from upstream is that downstream/on-site incidents usually reduce traffic speed on this stretch as a result of queues, while upstream incidents can reduce the incoming flow rate to the stretch, which may, in turn, result in an increase of traffic speed. Incidents in the opposite direction are defined on segments of I-270 Southbound within the same range of latitudes as the stretch of study (I-270 N). Finally, alternative routes are defined as those segments of MD 335 Northbound in the Rockville and Gaithersburg area.
\item Weather: hourly weather reports from Weather Underground\footnote{https://www.wunderground.com/} are used in this case study. The following weather features are incorporated in the initial feature set: temperature (degrees Fahrenheit), wind chill temperature, precipitation intensity (inch/hour), precipitation type (snow/rain/none), visibility (miles), wind speed (miles/hour), wind gust (miles/hour), pressure (millibar), and pavement condition type (wet or dry). Also, we add categorical features of wind, visibility and precipitation intensity to the feature set: binary indicators of whether the current condition is among the top 20\% extreme cases of the entire year of 2014.
\item Local events: we incorporate the schedules of the following events: MLB (Washington Nationals); NFL (Washington Redskins); NBA (Washington Wizards); NHL (Washington Capitals); and the D.C. Cherry Blossom Festival. The event feature is binary and constant for the entire PM peak, indicating whether there is an ongoing/incoming event on that day.
\end{enumerate}
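The morning demand-level approximation described in the traffic counts item above (cumulative counts from the early morning until speed first drops below 90\% of free flow) can be sketched as follows. This is only an illustration with toy numbers, not the actual sensor data; the function name and array layout are our own.

```python
import numpy as np

def morning_demand_level(counts, speeds, free_flow_speed, start_idx=0):
    """Approximate the morning demand level as the cumulative flow
    counts from `start_idx` (e.g. 4AM) up to the first interval whose
    speed drops below 90% of free flow; if speed never drops, use all
    intervals. `counts` and `speeds` are per-5-min arrays."""
    below = np.flatnonzero(speeds < 0.9 * free_flow_speed)
    end_idx = below[0] if below.size else len(counts)
    return counts[start_idx:end_idx].sum()

# Toy morning: 12 five-minute intervals, congestion begins at index 8.
counts = np.array([50, 55, 60, 70, 80, 90, 100, 110, 120, 60, 40, 30])
speeds = np.array([65, 65, 64, 63, 62, 61, 60, 59, 50, 45, 44, 46], dtype=float)

print(morning_demand_level(counts, speeds, free_flow_speed=65.0))  # → 615
```

The same function applies unchanged to the afternoon window (12PM-3PM) by summing all intervals when no speed drop occurs.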
\subsubsection{Clustering}
We select 30 TMC segments from the speed feature set based on the following criteria: 1) the set of selected TMCs should be representative of the network, so every major highway/arterial has at least one TMC selected; and 2) TMCs with a higher correlation with the travel time on the corridor of study are selected with higher priority. We use speed measurements at three time points, 2:00PM, 4:00PM and 6:00PM, for the 30 TMCs, yielding 90 features for each weekday in 2013.
We test both K-means clustering and hierarchical agglomerative clustering (HAC). From the results of K-means, we find that days of week cannot be properly separated by clustering, as every cluster is a mixture of different days of week. This is probably due to the highly variable nature of congestion in this corridor. With HAC, we find that it is feasible to separate the whole year into two seasons: (1) 2013-02-21 to 2013-11-04; and (2) 2013-01-01 to 2013-02-20 and 2013-11-05 to 2013-12-31. The two clusters can be interpreted as the non-winter pattern and the winter pattern, respectively. As a result, we conduct data analytics and regression for each of the two clusters independently.
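As a sketch of this clustering step, the following snippet applies Ward-linkage HAC to split weekdays into two season-like clusters. It assumes scikit-learn is available; the data are synthetic stand-ins for the 90 speed features per weekday (30 TMCs at three time points), not the INRIX measurements.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)

# Synthetic stand-in: 260 weekdays x 90 features
# (30 TMC segments x speed at 2:00PM, 4:00PM, 6:00PM).
winter = rng.normal(loc=35.0, scale=5.0, size=(60, 90))      # slower winter speeds
non_winter = rng.normal(loc=55.0, scale=5.0, size=(200, 90))
X = np.vstack([winter, non_winter])

# Ward-linkage hierarchical agglomerative clustering into two seasons,
# mirroring the winter / non-winter split found in the case study.
hac = AgglomerativeClustering(n_clusters=2, linkage="ward")
labels = hac.fit_predict(X)
print(len(labels), len(set(labels.tolist())))
```

With real data, the resulting labels would then be mapped back to calendar dates to read off the season boundaries.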
\subsubsection{Correlation analysis}
The correlation matrix of the top five TMC features with the lowest p-values and a subset of non-TMC features is visualized in Fig. \ref{Corre_map}; due to space limits, not all features are shown. Hypothesis tests (significance level 0.05) are also conducted to determine whether selected features are correlated with the travel rate on the corridor of interest. Findings are listed below:
\begin{itemize}
\item Most TMC features are significant under the hypothesis tests, and the correlations between some TMC features and the travel rate (travel time) on the target segments reach 0.7 in absolute value, much higher than all non-TMC features.
\item The travel demand level in the morning is positively correlated with the targeted travel rate, implying that morning travel demand can reveal afternoon congestion to some degree and can be used to predict travel time in afternoon peaks.
\item Incidents on the upstream I-270 have a negative correlation with the targeted travel rate. In other words, when there is an incident upstream of I-270, downstream segments will experience less congestion than on a regular day.
\item Presence of incidents on alternative route (MD 335) is also negatively correlated with the travel rate on the target corridor.
\item Both severe and non-severe incidents on downstream I-270 are positively correlated with the targeted travel rate. The correlation coefficient of severe incidents is approximately four times that of non-severe incidents. Overall, we see that the time, location and severity of incidents impact congestion in very different ways.
\item Among all weather features, wind speed, wind gust, visibility, precipitation intensity, rain, snow and pavement condition are significant under their hypothesis tests. The test on visibility yields the lowest p-value, 2.34e-09. This may reflect a causal effect of hazardous weather conditions on congestion.
\item Speed features are positively correlated with each other, including the speed on the corridor of study (inverse of the travel rate).
\item Rain and snow weather conditions have relatively high correlation with pavement condition, precipitation intensity and visibility, as expected.
\item Travel rate has a positive correlation with the hour of day. More specifically, the probability of congestion increases as time progresses from 2:00PM to 6:00PM.
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=1.05\linewidth]{corr_winter.png}
\caption{Correlation plot of selected features for winter season}
\label{Corre_map}
\end{figure}
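The significance tests above can be illustrated with a small NumPy sketch. The statistic $t = r\sqrt{n-2}/\sqrt{1-r^2}$ is the standard test for a Pearson correlation; the synthetic data and the 1.96 normal critical value (a large-sample stand-in for the exact t quantile at the 0.05 level) are our own illustrative choices, not the study's actual values.

```python
import numpy as np

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    x = x - x.mean()
    y = y - y.mean()
    return float((x @ y) / np.sqrt((x @ x) * (y @ y)))

def is_significant(r, n, t_crit=1.96):
    """Two-sided test of H0: rho = 0 via t = r*sqrt(n-2)/sqrt(1-r^2);
    for sample sizes of hundreds of days the normal critical value
    ~1.96 approximates the exact t quantile at the 0.05 level."""
    t = abs(r) * np.sqrt(n - 2) / np.sqrt(1.0 - r * r)
    return t > t_crit

rng = np.random.default_rng(1)
n = 250
upstream_speed = rng.normal(55, 8, n)
# Target travel rate partly driven by upstream speed, plus noise.
travel_rate = 2.0 - 0.015 * upstream_speed + rng.normal(0, 0.05, n)

r = pearson_r(upstream_speed, travel_rate)
print(round(r, 2), is_significant(r, n))
```

A negative $r$ here matches the intuition that higher upstream speeds accompany lower travel rates on the target corridor.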
\subsubsection{Principal component analysis}
To find the sources of variation in the feature set, we conduct PCA. The PCA is conducted for the two seasons separately with the same set of features, including the five most correlated TMCs as well as features of counts, weather, incidents and events. The first two principal components (PC), which can be interpreted as the two most important dimensions of the data, are plotted in Fig. \ref{PCA}. Each black dot in the plot is one data entry mapped to the orthogonal space of the two PCs. From the plots we see a clear distinction between the two seasons, which indicates the existence of seasonal effects and further justifies the necessity of clustering.
In terms of the composition of the principal components, the first PC of the two clusters contains all five TMC-based speed features, and both account for around 35\% of the total variance. The second PC consists of morning and afternoon demand features, downstream incidents, and several weather features including precipitation type, visibility and pavement condition, accounting for another 10\% of the variance. The third PC is a mixture of incidents, weather and TMC-based speed, accounting for 7\% of the total variance. To conclude, TMC-based speed features are the most essential source of variance in the data, followed by demand level approximated by counts, downstream incidents and weather conditions.
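A minimal sketch of the PCA computation on a feature block like the one above, using the SVD of the centred data matrix. The data are synthetic: one latent congestion factor drives five correlated speed features, mirroring the dominance of speed features in the first PC; the sizes and scales are illustrative assumptions.

```python
import numpy as np

def pca_explained_variance(X, k=2):
    """Explained-variance ratios of the first k principal components,
    computed from the SVD of the column-centred data matrix."""
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)   # singular values, descending
    var = s ** 2
    return var[:k] / var.sum()

rng = np.random.default_rng(2)
n = 200
# Synthetic feature block: 5 correlated speed features driven by one
# latent congestion factor, plus 6 weaker weather/incident features.
latent = rng.normal(size=n)
speeds = latent[:, None] * np.ones(5) + 0.2 * rng.normal(size=(n, 5))
others = 0.5 * rng.normal(size=(n, 6))
X = np.hstack([speeds, others])

ratios = pca_explained_variance(X, k=2)
print(np.round(ratios, 2))
```

Because the five speed columns share a single latent factor, the first component absorbs most of the variance, just as the TMC-based speed features dominate the first PC in the case study.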
\begin{figure*}[!t]
\centering
\includegraphics[width=5 in]{PCA}
\caption{The first two principal components. Left: non-winter cluster; Right: winter cluster. Each black dot stands for a data entry. Red arrows are the loadings of all features}
\label{PCA}
\end{figure*}
\subsubsection{Dimension reduction (feature selection) in the TMC-based speed data set}
\label{select_DC}
We utilize LASSO to select a subset of TMC-based speed features for predicting travel rate/time. As discussed in Section \ref{221}, by adjusting $\lambda$ in Equation \ref{lassoequ}, we obtain selected features and prediction results with different degrees of freedom. The degrees of freedom and corresponding R-squared values from LASSO, based on the winter cluster, are shown in Table \ref{T1}. Here, we also test the influence of the prediction time lag on model performance, and calculate the R-squared values under the same degree of freedom with a 15-min or 30-min time lag, respectively. To predict travel time 30 min ahead, we use speed features that are 30, 35, ..., 55 min ahead. Likewise, to predict travel time 15 min ahead, we use more speed features, from 15, 20, ..., up to 55 min ahead.
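The trade-off in Table \ref{T1} between degrees of freedom and R-squared can be reproduced in miniature with scikit-learn's \texttt{Lasso} over a grid of penalties. The data below are synthetic, with 8 truly relevant features out of 60; the penalty grid and problem sizes are illustrative assumptions, not the values used in the study.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n, p = 300, 60                      # days x candidate lagged-speed features
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:8] = rng.uniform(0.5, 1.5, 8)  # only 8 features truly matter
y = X @ beta + rng.normal(scale=0.5, size=n)

rows = []
for alpha in [1.0, 0.3, 0.1, 0.03, 0.01]:
    model = Lasso(alpha=alpha).fit(X, y)
    dof = int(np.count_nonzero(model.coef_))   # degrees of freedom
    r2 = model.score(X, y)                     # in-sample R-squared
    rows.append((alpha, dof, round(r2, 3)))
    print(alpha, dof, round(r2, 3))
```

As the penalty shrinks, more features enter the model and the in-sample R-squared rises, with diminishing returns once the truly relevant features are all included, which is the pattern Table \ref{T1} shows.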
\begin{table}[t!]
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|l|l|l|}
\hline
& \multicolumn{2}{l|}{R-squared values} \\ \hline
Degree of freedom & 30 min lag & 15 min lag \\ \hline
0 & 0.0000 & 0.0000 \\ \hline
1 & 0.2165 & 0.2165 \\ \hline
2 & 0.3532 & 0.3543 \\ \hline
5 & 0.4621 & 0.5358 \\ \hline
7 & 0.5422 & 0.6138 \\ \hline
10 & 0.5971 & 0.5824 \\ \hline
13 & 0.6328 & 0.7480 \\ \hline
14 & 0.6561 & 0.6770 \\ \hline
16 & 0.6709 & 0.7577 \\ \hline
18 & 0.6808 & 0.7269 \\ \hline
21 & 0.6873 & 0.7642 \\ \hline
24 & 0.6920 & 0.7582 \\ \hline
30 & 0.6951 & 0.7727 \\ \hline
33 & 0.6973 & 0.7744 \\ \hline
45 & 0.7011 & 0.7801 \\ \hline
61 & 0.7050 & 0.7823 \\ \hline
104 & 0.7116 & 0.7893 \\ \hline
148 & 0.7186 & 0.7937 \\ \hline
190 & 0.7246 & 0.7979 \\ \hline
244 & 0.7301 & 0.8019 \\ \hline
\end{tabular}
\caption{Speed feature selection for winter season using LASSO }
\label{T1}
\end{table}
Generally, R-squared values increase as more speed features are selected. When predicting travel time 30 min in advance, the marginal improvement in R-squared value starts to decline once the degree of freedom exceeds 18. Under the same degree of freedom, predicting travel time 15 min ahead is significantly more accurate than 30 min ahead. Similar results are observed for the non-winter cluster.
To balance the model's reliability (namely, to avoid overfitting) against goodness of fit, we choose 18 speed features from 16 different TMC segments as a result of LASSO for the winter season. In Fig. \ref{speed_DC}, the corridor of study is marked in blue, and the selected TMC segments are marked in red with their time lags in minutes listed. For instance, 35 means the speed on this TMC segment 35 min in advance is selected to predict the travel rate of the corridor of study.
\begin{figure}
\centering
\includegraphics[width=4.5 in]{DC_with_time_new.png}
\caption{TMC segments selected for the winter season to predict travel time on I-270 northbound. Time lags in minutes are listed for each selected TMC.}
\label{speed_DC}
\end{figure}
The results are compelling. Those 18 speed features selected by LASSO can be categorized into the following groups:
\begin{itemize}
\item Segments on I-270 northbound, upstream and downstream of the corridor.
\item Segments on the alternative route (MD 335 North).
\item Segments on I-495 North that merge into I-270 northbound.
\item One segment on East West Highway with three different time lags is selected, and one segment of I-495 North Eastbound.
\item Segments on several interchanges to the upstream of I-270 northbound.
\end{itemize}
Overall, the first three groups of features are expected given our feature selection criteria described in Section \ref{221}. The first group is fairly close to the targeted corridor in terms of degree of connection; its correlations with the corridor can be explained by the propagation and spill-back of congestion. For the two segments on MD 335 North in the second group, their correlations with the corridor originate from the overwhelming travel demand from MD 335 to I-270 North. For the third group, since I-495 North is also upstream of the corridor and serves one of the destinations with significant demand during the afternoon peak, the traffic states on I-495 North can reveal the travel demand on I-270 30 min in advance. In addition, the road segments in the fourth group are all eastbound. As their correlations with the northbound corridor are positive, we infer that travel demand may peak at the same time for northbound and eastbound traffic during the afternoon peak. Last but not least, the segments on interchanges in the fifth group are all bottlenecks, as they are usually where congestion starts before it spills over to their upstream links. In other words, those segments on interchanges are more sensitive to incoming travel demand and can serve as an early alert for the corridor of study. As a result, they are effective in predicting upcoming congestion.
\subsubsection{Prediction model}
\label{prediction}
We test and compare the performance of four prediction models: LASSO linear regression, stepwise regression, random forest and support vector regression. Model performance is evaluated by a 10-fold cross validation on each season (cluster). We adopt a univariate autoregressive moving average (ARMA) model as the baseline. It utilizes the speed data of the corridor only without considering spatio-temporal features of any kind. Prediction accuracy is measured by Normalized Root Mean Squared Error (NRMSE):
\begin{equation}
NRMSE = \frac{\sqrt{\frac{1}{N_t}\sum_{t \in all} \left(\hat{y}_t - y_t\right)^2}}{\frac{1}{N_t}\sum_{t \in all} y_t}
\label{RMSE}
\end{equation}
in which $N_t$ is the total number of data points in a cluster, and $\hat{y}_t$ and $y_t$ are the predicted and observed travel rates, respectively.
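Equation \ref{RMSE} translates directly into code; a small self-check with toy travel rates (the numbers are our own illustration):

```python
import numpy as np

def nrmse(y_hat, y):
    """Normalized root mean squared error: RMSE of the predictions
    divided by the mean observed travel rate."""
    y_hat = np.asarray(y_hat, dtype=float)
    y = np.asarray(y, dtype=float)
    rmse = np.sqrt(np.mean((y_hat - y) ** 2))
    return rmse / y.mean()

# Toy check: observed travel rates (min/mile) and predictions.
y = [2.0, 2.5, 3.0, 2.5]
y_hat = [2.1, 2.4, 3.2, 2.6]
print(round(nrmse(y_hat, y), 4))  # → 0.0529
```

Dividing by the mean observed rate makes errors comparable across corridors with different typical travel rates, which is why NRMSE rather than raw RMSE is used to compare the two case studies.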
First, a random set of days is selected and used to find the best fit for the ARMA model based on AIC values. The remaining days are used for a 10-fold cross validation to compute the NRMSE. The best-fitting ARMA in this case has autoregressive and moving-average orders of 3 and 3, respectively. When predicting travel time 5 min in advance, ARMA reaches 7.35\% in NRMSE. For 30-min-ahead prediction, ARMA averages 23.9\% in NRMSE. This shows that predicting travel time 30 min in advance is much more challenging than 5 min ahead.
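As a simplified stand-in for the ARMA baseline (autoregressive part only, no moving-average term), an AR(p) model can be fit by ordinary least squares on lagged values. The series below is synthetic and deterministic, so the coefficients are recovered exactly; the actual baseline uses a full ARMA(3,3) fit selected by AIC.

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of an AR(p) model
    x_t = a_1 x_{t-1} + ... + a_p x_{t-p}."""
    X = np.column_stack([x[p - j: len(x) - j] for j in range(1, p + 1)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Deterministic AR(2) series with known coefficients 0.5 and 0.3.
x = np.empty(40)
x[0], x[1] = 1.0, 1.0
for t in range(2, 40):
    x[t] = 0.5 * x[t - 1] + 0.3 * x[t - 2]

coef = fit_ar(x, p=2)
next_val = coef[0] * x[-1] + coef[1] * x[-2]   # one-step-ahead forecast
print(np.round(coef, 3))
```

Such a purely univariate model uses only the corridor's own speed history, which is exactly why it degrades so quickly as the prediction horizon grows from 5 to 30 minutes.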
The final selected features for prediction contain all 18 TMC features for each cluster, and all other features that are significant under their hypothesis tests in correlation analysis. Some common features for the two clusters are incidents on the downstream segments and on the alternative routes, morning and afternoon travel demand level, and essential weather features including visibility, precipitation type, precipitation intensity, wind speed, and pavement conditions.
For stepwise regression, we use AIC values as the criterion. In LASSO, the norm-1 penalty $\lambda$ is set to maximize the percentage of deviance explained. The results of model fitting are shown in Table \ref{fitting}, in which CV training and CV test stand for the average training and testing errors in cross validation, respectively. Num.F stands for the average number of features used in each model, including the intercept (constant). Ave. CV test is the weighted average cross-validation testing error over the two seasons, serving as an indicator of the final model performance. Results from random forest are based on a setting of 20 trees for each cluster. Note that by adjusting the number of trees in the model, its performance changes accordingly. In this case study, we test multiple values for the number of trees, ranging from 5 to 80, and find that as the number of trees increases from 5 to 20, the testing error improves from 17.8\% to 16.6\%, and levels off once the number of trees goes beyond 21.
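The cross-validation loop behind Table \ref{fitting} can be sketched as follows, with a plain OLS model standing in for the candidate models. The data are synthetic and the fold layout is our own; the study's evaluation uses the same 10-fold structure and NRMSE metric.

```python
import numpy as np

def kfold_nrmse(X, y, k=10, seed=0):
    """Average train/test NRMSE of an ordinary-least-squares model
    under k-fold cross validation; a minimal stand-in for the loop
    used to compare LASSO, stepwise regression and random forest."""
    def nrmse_on(ix, w):
        A = np.column_stack([np.ones(len(ix)), X[ix]])
        err = A @ w - y[ix]
        return np.sqrt(np.mean(err ** 2)) / y[ix].mean()

    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), k)
    tr, te = [], []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        A = np.column_stack([np.ones(len(train)), X[train]])
        w, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        tr.append(nrmse_on(train, w))
        te.append(nrmse_on(test, w))
    return float(np.mean(tr)), float(np.mean(te))

rng = np.random.default_rng(4)
X = rng.normal(size=(250, 5))
y = 2.5 + X @ np.array([0.3, -0.2, 0.1, 0.0, 0.05]) + rng.normal(0.0, 0.2, 250)

train_err, test_err = kfold_nrmse(X, y, k=10)
print(round(train_err, 3), round(test_err, 3))
```

Reporting both training and testing NRMSE, as Table \ref{fitting} does, exposes overfitting: a large gap between the two (as with random forest's 0.067 vs. 0.186) signals a flexible model whose test error is the meaningful figure.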
By comparing the results of all models, we see that OLS regression on multivariate speed features with clustering only marginally improves over the benchmark model, ARMA(3,3). Furthermore, by incorporating non-TMC-based features, LASSO outperforms OLS regression while having much lower complexity. It is effective to incorporate non-TMC features in travel time prediction, and LASSO makes the prediction more robust (less prone to overfitting) than OLS, where a large number of features are used. In addition, the two linear regression models, LASSO and AIC-based stepwise regression, share similar prediction performance with an average testing error of around 20.4\%. However, the number of finally selected features in the stepwise regression model is lower than in LASSO, as it uses 31 and 29 features for the two clusters, compared to 47 used by LASSO. Finally, random forest outperforms the other models considerably, with an average error rate of 16.6\%. Compared to the benchmark ARMA model, our model effectively reduces the prediction error by a margin of 7.2\%.
\begin{table*}[t!]
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|l|l|l|l|l|l|l|l|}
\hline
\multirow{2}{*}{Model} & \multicolumn{3}{l|}{Winter} & \multicolumn{3}{l|}{Non-winter} & \multirow{2}{*}{Ave. CV test} \\ \cline{2-7}
& Num.F & CV train & CV test & Num.F & CV train & CV test & \\ \hline
Baseline--ARMA & \multicolumn{6}{l|}{NA} & 0.238 \\ \hline
OLS on all TMCs & 1231 & 0.162 & 0.230 & 1281 & 0.180 & 0.220 & 0.224 \\ \hline
LASSO & 37 & 0.199 & 0.203 & 36 & 0.213 & 0.210 & 0.207 \\ \hline
Stepwise AIC & 31 & 0.196 & 0.196 & 29 & 0.208 & 0.210 & 0.204 \\ \hline
Random forest & 47 & 0.067 & 0.186 & 47 & 0.070 & 0.153 & 0.166 \\ \hline
SVR & 47 & 0.160 & 0.182 & 47 & 0.169 & 0.182 & 0.182 \\ \hline
\end{tabular}
\caption{I-270 Case study: Model performance evaluations. Cross validation (CV) errors of predicting travel time 30-min in advance, errors are measured in NRMSE.}
\label{fitting}
\end{table*}
\subsection{I-376 Eastbound}
The second case study is conducted on a road segment of I-376 Eastbound, between the Forbes Ave exit and the Squirrel Hill exit in the Pittsburgh metropolitan area. This 2.8-mile-long highway corridor is one of the main roadways connecting Pittsburgh downtown to the east region of the city. Due to heavy traffic load and limited roadway capacity, congestion on this corridor is sensitive to demand and very frequent during afternoon peaks. The time period of study is the same as in the I-270 case study, 2PM-6PM on weekdays during the year of 2014, while feature selection is based on all the data in 2013. The stretch is marked in red in the left map of Fig. \ref{Pit_map}. We predict its travel time 30 minutes in advance.
\subsubsection{Data sources and feature engineering}
The following datasets were collected and used in this case study:
\begin{enumerate}
\item Speed: TMC-based speed data from INRIX are used, covering 259 TMC segments in total over all major roads near this corridor, in the neighborhoods of Oakland and Southside. Historical data are available at 5-minute granularity. Those TMCs are listed below and shown in the right map of Fig. \ref{Pit_map}:
\begin{itemize}
\item Upstream/downstream of the I-376 corridor of study, both Northbound and Southbound. (Blue on the map)
\item Roadway network of the region, including Pittsburgh downtown, Oakland and Southside. (Red on the map)
\item Three main eastbound arterial streets from Pittsburgh downtown: Forbes Ave, Penn Ave and Center Ave, as alternative routes for I-376 during afternoon peaks. (Purple on the map)
\end{itemize}
\begin{figure*}[t!]
\centering
\includegraphics[width=3.5 in]{TMC_pit.png}
\includegraphics[width=3.5 in]{TMC_area_pit2.png}
\caption{Left: I-376 Eastbound Segment of study; Right: Speed dataset used in the case study: 259 TMC segments in total.}
\label{Pit_map}
\end{figure*}
\item Incidents: incidents data are obtained from the PennDOT Road Condition Reporting System (RCRS), where each incident entry is categorized into the following binary features based on its location:
\begin{itemize}
\item An incident on the upstream of the segments of study.
\item An incident on the downstream of the segments of study, both severe and non-severe.
\item An incident on the opposite direction (i.e., I-376 W).
\item An incident on alternative routes (Penn Ave and Baum Blvd).
\item An incident in Downtown Pittsburgh, far upstream of the segments of study.
\end{itemize}
The definition of incident features in this study follows a similar fashion to the I-270 case study: all features are binary and calculated based on the timestamp and geographic information of each RCRS entry. Due to the limited number of incident records on I-376, severe and non-severe incidents are aggregated into one feature in the model.
\item Weather: the set of weather features is identical to the I-270 case study, including: temperature (degrees Fahrenheit), wind chill temperature, precipitation intensity (inch/hour), precipitation type (Snow/Rain/None), Visibility (miles), wind speed (miles/hour), wind gust (miles/hour), pressure (millibar), pavement condition type (wet or dry), as well as categorical features of wind, visibility and precipitation intensity.
\item Local events: we incorporate the schedule of home games of NFL (Pittsburgh Steelers) and NHL (Pittsburgh Penguins) to analyze their impacts. Similar to the I-270 case, the event feature is binary and constant for the entire PM peak, indicating whether there is an ongoing/incoming event on that day.
\end{enumerate}
\subsubsection{Clustering}
Clustering is conducted on a set of 35 TMC segments covering all major highways of the network, selected in a similar way to the I-270 case study. Speed measurements of three time points, 2:00PM, 4:00PM and 6:00PM are used to represent the traffic states of the network during afternoon peaks.
As a result of HAC, all weekdays of 2013 are split into the following two clusters: (1) 2013-01-01 to 2013-05-29 and 2013-12-10 to 2013-12-31; and (2) 2013-05-30 to 2013-12-09. Unlike the case study of I-270, the separation of the two clusters can be interpreted as winter/spring and summer/fall seasons, respectively.
\subsubsection{Correlation analysis}
We calculate the correlation matrix of the feature set for each cluster to explore the relationships among features, and conduct hypothesis tests to check whether certain features are linearly correlated. The correlation matrix for the winter/spring cluster is visualized in Fig. \ref{Corre_pit}. Similar to the I-270 case study, the correlations between the top TMC features and the travel rate (travel time) of the corridor of study are significantly higher than those of non-TMC features. Thus, those TMC speed data are the most important factors in explaining the variation of the travel rate. Most non-TMC factors are correlated with the travel rate, such as incidents, visibility, dew point, pavement condition and the hour of day, and they should be included in the feature set for the prediction model. Features that are not significantly correlated with the travel rate (travel time) of the targeted corridor are removed from the feature set.
\begin{figure}[t!]
\centering
\includegraphics[width=5.5 in]{corr_pit.jpg}
\caption{Correlation matrix of selected features for the winter/spring season.}
\label{Corre_pit}
\end{figure}
\subsubsection{Principal component analysis}
Similar to the I-270 case study, PCA is conducted on the set of selected features for the two clusters, including the five most correlated TMCs as well as features of traffic demand level, weather, incidents and events. The first two principal components (PC) are plotted in Fig. \ref{PCA_pit}. The first PC of each cluster contains several weather-related features, including temperature, humidity, wind speed, visibility and pavement condition, accounting for around 19\% of the total variance of each cluster. The second PC consists of most TMC-based speed features, namely 6 TMCs for the first cluster and 9 for the second, accounting for about 10\% of the total variance.
Comparing this result to the I-270 case study, it can be seen that the importance of non-TMC features differs from case to case, but the most correlated TMC-based speed features are always critical for prediction.
\begin{figure*}
\centering
\includegraphics[width=5.5 in]{PCA_pit.jpg}
\caption{Plot of principal components. Left: first half-year cluster; Right: second half-year cluster. Each black dot stands for one data point mapped to the orthogonal space of the two PCs.}
\label{PCA_pit}
\end{figure*}
\subsubsection{Dimension reduction (feature selection) in the TMC-based speed data set}
We use LASSO to first select a subset of TMC-based speed features in predicting travel time/rate. A total of 15 TMC features for the first cluster and 17 for the second cluster are selected following a similar method discussed in Section \ref{select_DC}. These selected TMCs are then combined with other non-TMC features to create the feature set. LASSO is further used to select final features to be included in the prediction model.
In Fig. \ref{speed_PIt}, the I-376 corridor of study is marked in blue, and the 15 selected TMC-based speed features for the first cluster are marked in red, with their time lags in minutes listed. Those 15 speed features can be categorized as follows:
\begin{itemize}
\item A segment within the corridor of I-376 Eastbound.
\item A segment on the ramp merging into I-376 Eastbound.
\item Segments on the alternative routes: Penn Ave and Center Ave.
\item Segments on the opposite direction (I-376 Westbound).
\item Segments in Pittsburgh downtown area.
\item A segment in one of the adjacent neighborhoods, Southside.
\end{itemize}
The first two segments can be seen as direct indicators of the overall congestion level for the entire corridor; they are likely two of its bottlenecks. The selection of the third group shows that highly correlated alternative routes can effectively help predict congestion on the targeted corridor, similar to the other case study. The last two groups of segments imply that corridor congestion may be related to those critical roadways in the neighborhoods that feed travel demand to it. Though this does not necessarily constitute a causal relation, those segments can send signals 40 min ahead to alert of upcoming congestion on the corridor.
To sum up, the features selected for the final models of the I-376 case study include: 15 and 17 TMC features for the two clusters, respectively; essential weather features including visibility, precipitation type, precipitation intensity, wind speed and pavement conditions; local events; incidents on the upstream/downstream of I-376 E as well as incidents on I-376 W; and the hour of day.
\begin{figure}
\centering
\includegraphics[width= 5 in]{TMC_selected_Pit.png}
\caption{TMC segments selected for the winter/spring season. Time lags in minutes are listed for each selected TMC}
\label{speed_PIt}
\end{figure}
\subsubsection{Prediction model}
To find the best prediction model for this case, we train and evaluate the following candidate models: ARMA as the baseline, OLS regression on speed, LASSO, stepwise regression, random forest and support vector regression. The steps to run those models are the same as in the I-270 case study, described in Section \ref{prediction}. For ARMA, the best fit in this case has two autoregressive terms and one moving-average term. As the baseline, ARMA reaches an error rate of 14.22\% for 5-min-ahead prediction and 38.4\% for 30-min-ahead prediction.
All model results are compared in Table \ref{fitting_pit}. The ranking of the candidate models is quite similar to the I-270 case. The two linear regression models, LASSO and AIC-based stepwise regression, share similar performance with an average testing error of 25.2\%, a significant improvement over ARMA but still far from ideal. Random forest again achieves the lowest average error, at 17\% in NRMSE, a 21\% improvement compared to the baseline ARMA. Combining the results of the two case studies, we conclude that adding spatio-temporal information from all segments in the network, weather, incidents and events can greatly improve real-time prediction. On the other hand, we notice that the accuracy of each model is quite consistent across the two case studies. LASSO, SVR and random forest are all good methods, showing considerable improvement over ARMA and naive models that use speed data only.
\begin{table*}[t!]
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|l|l|l|l|l|l|l|l|}
\hline
\multirow{2}{*}{Model} & \multicolumn{3}{l|}{Cluster 1(First half year)} & \multicolumn{3}{l|}{Cluster 2(Second half year)} & \multirow{2}{*}{Ave. CV test} \\ \cline{2-7}
& Num.F & CV train & CV test & Num.F & CV train & CV test & \\ \hline
Baseline--ARMA & \multicolumn{6}{l|}{NA} & 0.384 \\ \hline
OLS on all TMCs & 1063 & 0.219 & 0.270 & 1063 & 0.212 & 0.292 & 0.282 \\ \hline
LASSO & 31 & 0.247 & 0.260 & 33 & 0.233 & 0.245 & 0.252 \\ \hline
Stepwise AIC & 29 & 0.225 & 0.261 & 31 & 0.226 & 0.243 & 0.252 \\ \hline
Random forest & 36 & 0.080 & 0.164 & 38 & 0.082 & 0.175 & 0.170 \\ \hline
SVR & 36 & 0.162 & 0.193 & 38 & 0.190 & 0.208 & 0.200\\ \hline
\end{tabular}
\caption{I-376 Case study: Model performance evaluations. Cross validation (CV) errors of predicting travel time 30-min in advance, errors are measured in NRMSE.}
\label{fitting_pit}
\end{table*}
\section{Conclusions}
We propose a data-driven method for analyzing highway congestion and predicting travel time based on spatio-temporal network characteristics and multiple data sources including travel speed, counts, incidents, weather and events, all in the context of dynamic networks. The proposed method can be used to analyze the spatio-temporal correlations among various features related to travel time, explore possible causal relations to congestion, and identify the most critical and reliable features for real-time travel time prediction.
The proposed method is applied to two regional highway corridors, I-270 in the D.C. region and I-376 in the Pittsburgh region. The results validate the effectiveness of the data-driven approach in understanding the correlations between highway congestion and various spatio-temporal features. We are able to predict travel time on those corridors 30 min in advance, and the prediction results are satisfactory. In particular, we find that: 1) the days of the year can be clustered into seasons, each of which shows different traffic patterns; 2) TMC-based speed features are the most critical components of travel time variability in the multi-source data set; they include road segments on the alternative routes to the corridor of study, downstream and upstream bottlenecks and major demand sources, all of which can be machine-selected by the data-driven approach; 3) other features that are useful in predicting travel time include time/location of incidents, morning and afternoon travel demand level, visibility, precipitation intensity, weather type (rain, snow), wind speed/gust, and pavement conditions; and 4) random forest shows the most promise of all candidate models, reaching an NRMSE of 16.6\% and 17.0\% respectively in afternoon peak hours for the entire year of 2014.
\section*{Acknowledgements}
This research is funded in part by Traffic 21 Institute and Carnegie Mellon University’s Mobility21, a National University Transportation Center for Mobility sponsored by the US Department of Transportation. The data acquisition and pre-processing for the I-270 corridor are funded by a FHWA research project ``Data Guide for Travel Time Reliability''. The authors wish to thank John Halkias, Douglas Laird, James Sturrock and David Hale for their valuable comments. The contents of this report reflect the views of the authors only. The U.S. Government assumes no liability for the contents or use thereof.
\bibliographystyle{unsrt}
\section{Introduction}
Stellar bars are common features in disc galaxies on a broad range of stellar masses and local environments \citep[e.g.][]{jo04,sh08, bara08, nairbfrac, masters12,
skibba12, abreu12, pg15}. Because of their elongated shape,
bars can exert a significant gravitational torque onto the host galaxy stellar and gaseous components, making these features
one of the main drivers of galactic evolution \citep[see e.g.][for recent reviews]{KK04,K13,sell14}.
In particular, the interaction between the bar and the ISM
within the bar extent results in fast inflows of gas toward the galactic center \citep{sanders76, roberts79, atha92, sakam99}. Such inflows can
trigger nuclear bursts of star formation \citep[SF, as observationally confirmed by][]{Ho97, Martinet97, Hunt99, jo05, lauri10}, and, if the gas infall proceeds
unimpeded, accretion episodes onto the central massive black hole \citep[if present, e.g.][]{shlo89, beren98}.
Only recently have \cite{verley07}, \cite{cheung13}, \cite{pg15} and \cite{fanali15} suggested that the promptly removed gas, if converted into stars on the short dynamical
time-scale of the galaxy nucleus, quenches any SF in the central few-kpc region of the galaxy. This scenario has strong implications
for the evolution of the SF rate (SFR) observed in field disc galaxies as a function of their stellar mass, with the decline of the specific SFR
(sSFR) observed in massive galaxies possibly linked to the formation of a stellar bar \citep{pg15, C16}. On the other hand,
even if bars do not remove all the gas within their extent, they are expected to perturb the gas kinematics by pumping turbulence
in the ISM, preventing the gas from fragmenting and decreasing the central SFR \citep[e.g.][]{reynaud98,haywood16}.
In order to test the two above-mentioned scenarios, in this study we aim at mapping the distribution of gas in barred and unbarred galaxies.
The most direct probe would require the direct imaging of molecular and neutral atomic gas. Unfortunately, such information is available only for
a very limited sample of galaxies, and is often affected by either a too low angular resolution or a very limited field of view.
However these problems can be overcome because the molecular gas distribution correlates strongly with the distribution of the cold dust component \citep{boselli02}.
We take full advantage of such leverage by using the far--infrared (FIR) images from the \textit{Herschel} Reference Survey \citep[HRS, ][]{boselli10}.
We compare the {\it Herschel} data with the optical images from the Sloan Digital Sky Survey \citep[SDSS,][]{sdss}.
We study the correlation between the occurrence of bars in optical images and of either bar-like structures
or central zones of no emission in the HRS. We further measure the extent of such optical and infrared structures and check
whether they are correlated. For galaxies showing both an optical bar and an infrared bar-related structure we link their morphology to
the star formation distribution as traced by H$\alpha$ images \citep{kc98}.
Finally a qualitative comparison to the few available HI-maps tracing the atomic gas distribution is accomplished owing to the high resolution maps from the VIVA survey \citep{VIVA}.
\section{The \textit{Herschel} Reference Sample}
\begin{figure*}
\begin{centering}
\includegraphics[width=12.cm]{FIG_HRSbarre.eps}
\caption{Examples for the categories classified in this work. From left to right: NGC 4548 (HRS-208), NGC 4579 (HRS-220), NGC 4689 (HRS-256).
The top row shows the SDSS RGB image of the galaxy while the second row shows the corresponding PACS images. Green circles illustrate
qualitatively the circular region used to measure the extensions of structures.
In the third row is reported the H$\alpha$ image and in the fourth the HI map from the VIVA survey.
In each frame a 1 arcminute scale is given.
}
\label{examp}
\end{centering}
\end{figure*}
The galaxies analyzed in this work have been extracted from the
\textit{Herschel} Reference Survey, a volume-limited (15$\leq$ $D$ $\leq$ 25
Mpc), $K$-band-selected sample of nearby galaxies spanning a wide range
of morphological types, from ellipticals to dwarf irregulars, and stellar masses
(10$^8$ $\lesssim$ M$_{*} \lesssim$ 10$^{11}$ M$_{\odot}$) that has been observed in guaranteed
time with \textit{Herschel} \citep{boselli10}.
Since the present work aims at performing a visual comparison of the ISM and of the
stellar morphology in the HRS galaxies, we need a sufficient spatial resolution in both the
IR and the optical images as well as a good sensitivity and little dust obscuration in
the optical band.
For this purpose we characterize the morphological properties of the
stellar component and of the ISM using the SDSS images in the $i$-band (Cortese et
al. 2012) and the 160 $\mu$m maps obtained
with the PACS instrument (Cortese et al. 2014), respectively. At 160 $\mu$m the resolution is FWHM = 11.4 arcsec,
while the pixel size of the reduced maps is 2.85 arcsec pixel$^{-1}$ (Cortese et al. 2014).
This photometric band has been chosen
among those available for the whole sample galaxies (22$\mu$m from WISE,
Ciesla et al. 2014; 100-500 $\mu$m, Ciesla et al. 2012; Cortese et al. 2014)
as the best compromise in terms of sensitivity, angular resolution and dust temperature.
At this frequency the FIR emission gives the distribution of the cold dust component, which is
a direct tracer of the molecular gas phase of the ISM (e.g. Boselli et al. 2002) from galactic to sub-kpc scales \citep{corbelli12,smith12,bolatto13,sand13}.
On the optical side, the $i$-band is only slightly affected by dust, is the best SDSS tracer
of the stellar mass of a galaxy, and is preferred to the $z$-band for its higher sensitivity, while
the H$\alpha$ data are taken from \citet{boselli15}.
The SDSS, PACS and H$\alpha$ images are available on the HeDaM database (http://hedam.lam.fr/).
Further on, we limited the analysis to the 261 late-type galaxies of the sample in order to avoid contamination from
slow rotators \citep[namely ellipticals, which do not develop bars,][]{sell14} and from early-type disks (including S0s)
that have too little cold gas to test the bar-related quenching process \citep{boselli14b}.
Finally we exclude galaxies with an axis ratio lower
than $0.4$ to avoid a major inclination bias in our morphology classification and measures, leaving a final subsample of 165 late-type face-on galaxies.
\section{Results}
For each galaxy we visually inspect the $i$-band SDSS images and look for the presence of an evident stellar bar.
Separately we also visually inspect the PACS images looking for a central carved region with little to no emission that, if present, is distributed
along a bar-like component (see Fig.\ref{examp}, HRS208) or in a small nuclear region surrounded by a ring-like structure (see Fig.\ref{examp}, HRS220).
In Fig. \ref{examp}, from left to right, we show three illustrative cases representing the infrared morphologies
possibly associated with optical bars (column one and two, HRS 208 and 220) and a normal spiral galaxy (last column, HRS 254).
For each galaxy we give from top to bottom the
SDSS RGB, the 160$\mu\rm m$, the continuum subtracted H$\alpha$ images and the HI map from the VIVA survey \citep{VIVA}, which unfortunately
overlaps with our sample for only a few galaxies.\\
We find 51 barred galaxies ($\approx30\%$ of the sample) in the $i$-band, out of which
$75\%$ show in the corresponding 160$\mu\rm m$ images an elliptical/circular area where the only emission is distributed on
a bar- or ring-like structure.
On the other hand, we find 63 galaxies ($\approx38\%$ of the sample) hosting the described morphologies in the 160$\mu\rm m$ images out of which 38 ($\approx65\%$) galaxies are found barred in the
corresponding optical image.
The frequency of galaxies hosting an infrared feature that also show a corresponding optical bar, and the occurrence of optical bars showing
an infrared feature are $\approx 65 \%$ and $\approx 75\%$, respectively.
These percentages rise to $\approx 85 \%$ and $\approx 96\%$ if we include 16 galaxies classified as barred by other literature classifications found in the
NASA Extragalactic Database (NED). These
are mostly weak bars that are difficult to recognize visually and whose extent is difficult to quantify. For this reason we exclude these objects from our further analysis.
In order to quantitatively relate the region of star formation avoidance to the presence of an optical bar, we measure the size of these structures visually in the optical and
in the 160$\mu\rm m$ and then do the same using ellipse fits to isophotes. The two approaches are useful because the eye can effectively recognize features and their extent even if somewhat
subjective, while ellipse fits are objective measures that nevertheless can be strongly affected by other structures in the galaxies.
Four of the authors (GC, MD, FG, GG) manually evaluated the extent of optical bars by measuring the radius of
the circular region circumscribing the bar, avoiding possible HII regions at the end of it.
On the other hand, in the 160$\mu\rm m$ images, when an infrared bar is present we measure the radius of
the circle circumscribing the bar while, when no clear bar is discernible, we measure the inner semi-major axis of the ring-like
structure surrounding the depleted region (as depicted in Fig. \ref{examp}).
For the optical bars showing the region of avoidance in the 160$\mu\rm m$ images we also visually inspect the continuum subtracted H$\alpha$ images
finding similar morphologies and repeat the same measure.
Using the IRAF\footnote{IRAF (Image Reduction and Analysis Facility) is a software system for the reduction and analysis of astronomical data.} task $ellipse$, we derive ellipticity ($\epsilon$) and position angle (P.A.) radial profiles of the isophotes of each sample galaxy
in each considered band.
In optical broad-bands, it is well tested that the radius at which there is a peak in the ellipticity profile
and a related plateau in the P.A. profile is a good proxy for the extension of the bar \citep{jo04,lauri10,C16b}.
Following this procedure we extract a radius of the bar in the $i$-band for each galaxy, and
we deduce the bar strength following \citet{lauri07} from the peak of the $\epsilon$ profile in the $i$-band. We find that $\approx 95\%$ of the galaxies that we classified as barred
harbor strong bars ($\epsilon>0.4$).
Although this quantitative method has not been applied to far-IR data previously, ellipse fits can nevertheless be derived for the 160$\mu\rm m$ images and the $\epsilon$ and P.A. profiles
examined for a bar signature. Since we are trying to measure a region of decreased emission possibly surrounded by a ring-like
emitting structure, we also extract a radial surface brightness profile from concentric elliptical apertures centered on the galaxy and ellipticity fixed to the outer infrared isophotes.
The derived surface brightness profile therefore has a relative maximum at the location of the ring-like structure.
In the cases where no infrared bar is discernible, the radius at which this maximum occurs is a good proxy for the extent of the non-emitting region.\\
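For illustration, this profile-based measurement can be sketched as follows. This is a simplified stand-in for the actual procedure: the apertures here are circular rather than matched to the outer infrared isophotes, and a toy Gaussian ring replaces a real 160$\mu\rm m$ map.

```python
import numpy as np

def radial_profile(image, center, r_max, dr=1.0):
    """Mean surface brightness in concentric circular annuli.

    Circular apertures are used for simplicity; the actual procedure
    fixes the ellipticity to that of the outer infrared isophotes.
    """
    y, x = np.indices(image.shape)
    r = np.hypot(x - center[0], y - center[1])
    edges = np.arange(0.0, r_max + dr, dr)
    radii, profile = [], []
    for r_in, r_out in zip(edges[:-1], edges[1:]):
        mask = (r >= r_in) & (r < r_out)
        if mask.any():
            radii.append(0.5 * (r_in + r_out))
            profile.append(image[mask].mean())
    return np.array(radii), np.array(profile)

# Toy far-IR-like map: emission concentrated in a ring of radius ~15 px,
# surrounding a depleted central region.
yy, xx = np.indices((101, 101))
rr = np.hypot(xx - 50, yy - 50)
img = np.exp(-0.5 * ((rr - 15.0) / 3.0) ** 2)

radii, prof = radial_profile(img, (50, 50), r_max=40)
ring_radius = radii[np.argmax(prof)]  # proxy for the non-emitting region's extent
print(ring_radius)
```

On the real maps the annuli keep the ellipticity and P.A. of the outer infrared isophotes, and the radius of the profile maximum marks the ring-like structure surrounding the depleted region.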
In the 160$\mu\rm m$ images, this method succeeds at extracting the radius of the non emitting region or of the bar in $75\%$ of the barred galaxies.
Because of the irregular and clumpy distribution of light at 160$\mu\rm m$ the fit of the isophotes does not converge in $25\%$ of the galaxies.
Therefore, in order to preserve the statistics of our already limited sample, we plot in Fig. \ref{rrr} the radius obtained by averaging the authors' measurements of the optical bars
versus those from the 160$\mu\rm m$ data (black empty dots) and those from the continuum subtracted H$\alpha$ images (red empty dots).
All radii are normalized to the $i$-band $25^{th}$ mag isophote radius of the galaxy, taken from \citet{cortese12}, and errors are evaluated from the standard deviation of our measurements.
The black and red dashed lines indicate the bisector fit \citep{bisfit} to the data respectively for the 160$\mu\rm m$ (slope $\sim0.89\pm0.11$) and the H$\alpha$ data (slope $\sim1.35\pm0.08$).
The slope of the fit of the optical versus H$\alpha$ data is strongly influenced by the extremely deviant point (associated with HRS 322) visible in Fig. \ref{rrr}. This outlier is characterized
by a very small error, as all the authors consistently measured the same radius with very little scatter. We stress, however, that
the semi-major axis of this galaxy in the H$\alpha$
distribution is perpendicular to the optical bar (see the optical and H$\alpha$ image in the online material), thus the large discrepancy is mostly due to projection effects. If we exclude this point
from the fit, the slope becomes $0.79\pm0.11$.
Finally, in green, we plot the best linear fit of the comparison of optical versus 160$\mu\rm m$ radii measured with IRAF (slope $\sim0.87\pm0.10$).
All fits show a strong consistency between them even when evaluated with independent methods.
To further check a possible bias due to inclination, we derived the same fits for a subsample of galaxies with axis ratio greater than 0.7 ($\approx40\%$ of the sample),
finding fully consistent results.
\begin{figure}
\begin{centering}
\includegraphics[width=9.cm]{allradii_werr.eps}
\caption{ Comparison between the radii of bars in the $i$-band and the radii of the
central zone of avoidance of the 160$\mu\rm m$ (black dots) and continuum subtracted H$\alpha$ (red dots) images.
The black and red dashed lines represent the bisector regression to the $i$-band versus 160$\mu\rm m$ and $i$-band versus H$\alpha$ data, respectively.
The green dashed line is the linear fit to the $i$-band versus 160$\mu\rm m$ radii measured using $ellipse$ in IRAF for $75\%$ of the sample.
All radii are normalised to the optical diameter of the galaxy taken from \citet{cortese12}. A comparison between the visual and automatic optical and 160$\mu\rm m$ radii is available in the
online material.
}
\label{rrr}
\end{centering}
\end{figure}
\section{Discussion and conclusions}
The comparison of the frequencies of occurrence of bar-related features in the optical and in the FIR, as traced by the stellar continuum and
by the cold dust emission, respectively, yields a fraction of galaxies hosting an optical bar of $\sim30\%$, while
a zone of avoidance with or without an infrared bar is found in $38\%$ of the 160$\mu\rm m$ images. The percentages
of common occurrence suggest that FIR images are an effective way of identifying the presence of a bar in a galaxy.
For the galaxies hosting both an optical bar and a central zone of avoidance in the 160$\mu\rm m$ images,
we measured the angular size of both structures with independent methods, finding a good correspondence.
First, we measured the extent of bars in optical images,
while in the FIR images we measured the extent of the bar-like structure, if present, or of the inner semi-major axis of the
ring-like structure. In the latter case,
we stress that the projected angular size of the optical bar and the radius of the non-emitting zone
may differ significantly\footnote{Up to a factor of $\approx 2.5$
for the maximum inclination of $B/A=0.4$ allowed in our sample.} depending on the bar orientation.
In $75\%$ of the barred galaxies we successfully ran the IRAF task $ellipse$ to objectively measure the extent of these structures
in both the $i$-band and 160$\mu\rm m$ images, using the derived ellipticity, P.A. and surface brightness profiles.
The goodness of the correlation strongly hints at a physical connection between
the presence of a strong optical bar and a gas-depleted/quenched region where little SF is still possible. Only
in the very center (where the bar conveys the gas originally within its reach) or along the bar is SF found.
Such an effect is consistent with what we see in the continuum subtracted H$\alpha$ images of the sample.
SF is indeed distributed mainly in the nuclear region of galaxies
and/or along the bar (consistent with \cite{verley07}; see Fig. \ref{examp}) and shows a morphology similar to the one observed in the FIR.
We conclude that the FIR morphologies are similar to the H$\alpha$ morphologies \citep[consistent with][]{verley07} and that both are
consistent with bar-driven inflows of gas inside the corotation radius as predicted by simulations \citep{sanders76,atha92}. Fig. \ref{examp} qualitatively
shows that the HI emission, when available, also displays similar morphologies.
The impact on the cold gas component, as derived from the FIR, is consistent with what has been observed in a few galaxies \citep{sakam99} and
affects the star formation of barred galaxies \citep{verley07,pg15}:
as soon as a bar starts growing, the gas is initially perturbed and compressed along the bar, where it forms stars while gradually losing its angular momentum; as time goes by, the gas is swept by the bar down to sub-kpc
scales, leaving a gas-depleted and SF-quenched region of the size of the bar itself, with or without a central knot of SF depending on the consumption timescale of the originally infalling gas.
\begin{acknowledgements}
We thank the anonymous Referee and the Editor, Francoise Combes, for their constructive criticism.
This research has been partly financed by the French national program PNCG
, and it has made use of the GOLDmine database (Gavazzi et
al. 2003, 2014b) and SDSS Web site \emph{http://www.sdss.org/}. \\
\end{acknowledgements}
\section{Introduction}
\label{sec:introduction}
\input{sections/introduction}
\section{Related Work}
\label{sec:related-work}
\input{sections/related}
\section{Preliminaries}
\label{sec:problem}
\input{sections/problem}
\section{Overview of Tunable-LSH}
\label{sec:naive}
\input{sections/naive}
\section{Details of Tunable-LSH and Optimizations}
\label{sec:details}
\input{sections/hashing}
\section{Experimental Evaluation}
\label{sec:evaluation}
\input{sections/evaluation}
\section{Conclusions and Future Work}
\label{sec:conclusions}
In this paper, we introduce \textsc{Tu\-na\-ble-LSH}\xspace, which is a locality-sensitive hashing scheme,
and demonstrate its use in clustering records in an RDF data management system.
In particular, we keep track of the fragmented records in the database and use \textsc{Tu\-na\-ble-LSH}\xspace to decide,
in constant time, where a record needs to be placed in the storage system.
\textsc{Tu\-na\-ble-LSH}\xspace takes into account the most recent query access patterns over the database,
and uses this information to auto-tune such that records that are accessed across similar sets of queries
are hashed as much as possible to the same or nearby pages in the storage system.
This property distinguishes \textsc{Tu\-na\-ble-LSH}\xspace from existing locality-sensitive hash functions, which are static.
Our experiments with
(i) a version of our prototype RDF data management system, \emph{chame\-leon-db}, that uses \textsc{Tu\-na\-ble-LSH}\xspace,
(ii) a hashtable that relies on \textsc{Tu\-na\-ble-LSH}\xspace to dynamically cluster its records, and
(iii) workloads that rigorously test the sensitivity of \textsc{Tu\-na\-ble-LSH}\xspace verify the potential benefits of \textsc{Tu\-na\-ble-LSH}\xspace.
As future work, it would be beneficial to answer the following questions.
First, the assumption that the last $k$ queries are representative of the future queries in the workload can be relaxed.
As outlined in~\cite{AlucPVLDB2014}, the issue of deciding ``when and based on what information to tune the physical design'' of our system still remains an open problem.
Second, as our experiments indicate, query optimization in \emph{chame\-leon-db} has significant room for improvement.
We need techniques that can handle more approximate group-by-query clusters such as those generated by \textsc{Tu\-na\-ble-LSH}\xspace.
Third, we believe that \textsc{Tu\-na\-ble-LSH}\xspace can be used in a more general setting than just RDF systems.
In fact, it should be possible to extend the idea of the self-clustering in-memory hashtable that we have implemented to a more general, distributed key-value store.
{
\bibliographystyle{abbrv}
\subsection{Properties of Tunable-LSH}
\label{sec:details:properties}
In this part, we discuss the locality-sensitive properties of $h_{1}$ and $h_{2}$, and
demonstrate that $h_{2} \circ h_{1}$ can be used for clustering the records.
First, we show the relationship between record utilization vectors and
the record utilization counters that are obtained by applying $h_{1}$.
\begin{theorem}[Distance Bounds]
\label{thm:distance-bounds}
Given a pair of record utilization vectors $\vec{r}_{1}$ and $\vec{r}_{2}$ with size $k$,
let $\vec{c}_{1}$ and $\vec{c}_{2}$ denote two record utilization counters with size $b$
such that $\vec{c}_{1} = h_{1}(\vec{r}_{1})$ and $\vec{c}_{2} = h_{1}(\vec{r}_{2})$ (cf., Def.~\ref{def:tunable-lsh}).
Furthermore, let $c_{1}[i]$ and $c_{2}[i]$ denote the $i^{\text{th}}$ entry in $\vec{c}_{1}$ and $\vec{c}_{2}$, respectively.
Then,
\begin{align}
\delta( \vec{r_{1}}, \vec{r_{2}} ) & \geq \sum\limits_{i=0}^{b-1} \: \bigl\lvert c_{1}[i] - c_{2}[i] \bigr\rvert \label{eqn:lower-bound}
\end{align}
where $\delta(\vec{r_{1}}, \vec{r_{2}})$ represents the Hamming distance between $\vec{r}_{1}$ and $\vec{r}_{2}$.
\end{theorem}
\begin{proofSketch}
We prove Thm.~\ref{thm:distance-bounds} by induction on $b$.
\noindent \textbf{Base case}:
Thm.~\ref{thm:distance-bounds} holds when $b=1$.
According to Def.~\ref{def:tunable-lsh}, when $b=1$, $c_{1}[0]$ and $c_{2}[0]$ correspond to
the total number of $1$-bits in $\vec{r}_{1}$ and $\vec{r}_{2}$, respectively.
Note that the Hamming distance between $\vec{r}_{1}$ and $\vec{r}_{2}$ will be smallest
if and only if these two record utilization vectors are aligned on as many $1$-bits as possible.
In that case, they will differ in only $\bigl\lvert c_{1}[0] - c_{2}[0] \bigr\rvert$ bits,
which corresponds to their Hamming distance.
Consequently, Eqn.~\ref{eqn:lower-bound} holds for $b=1$.
\noindent \textbf{Inductive step}:
We show that if Eqn.~\ref{eqn:lower-bound} holds for $b \leq \alpha$,
where $\alpha$ is a natural number greater than or equal to $1$,
then it must also hold for $b = \alpha + 1$.
Let $\Pi_{f}(\vec{r}, g)$ denote a record utilization vector
$r' = (r'[0], \ldots, r'[k-1])$ such that
for all $i \in \{0, \ldots, k-1 \}$,
$r'[i] = r[i]$ holds
if $f(i) = g$, and
$r'[i] = 0$ otherwise.
Then,
\begin{align}
\delta (\vec{r}_{1}, \vec{r}_{2}) = \sum \limits_{g=0}^{b-1} \delta (\Pi_{f}(\vec{r}_{1}, g), \Pi_{f}(\vec{r}_{2}, g)). \label{eqn:additive-property}
\end{align}
That is, the Hamming distance between any two record utilization vectors is
the summation of their individual Hamming distances within each group of bits
that share the same hash value with respect to $f$.
This property holds because $f$ is a (total) function, and $\Pi_{f}$ masks all the irrelevant bits.
As an abbreviation, let $\delta_{g} = \delta (\Pi_{f}(\vec{r}_{1}, g), \Pi_{f}(\vec{r}_{2}, g))$.
Then, due to the same reasoning as in the base case,
for $g=\alpha$, the following equation holds:
\begin{align}
\delta_{\alpha}( \vec{r_{1}}, \vec{r_{2}} ) & \geq \bigl\lvert c_{1}[\alpha] - c_{2}[\alpha] \bigr\rvert \label{eqn:inductive:lower-bound}
\end{align}
Consequently, using the additive property in Eqn.~\ref{eqn:additive-property},
it can be shown that Eqn~\ref{eqn:lower-bound} holds also for $b = \alpha+1$.
Thus, by induction, Thm.~\ref{thm:distance-bounds} holds. \hfill $\blacksquare$
\end{proofSketch}
Thm.~\ref{thm:distance-bounds} suggests that the Hamming distance between
any two record utilization vectors $\vec{r_{1}}$ and $\vec{r_{2}}$ can be approximated
using record utilization counters $\vec{c_{1}} = h_{1} (\vec{r_{1}})$ and $\vec{c_{2}} = h_{1} (\vec{r_{2}})$ because
Eqn.~\ref{eqn:lower-bound} provides a lower bound on $\delta (\vec{r_{1}}, \vec{r_{2}})$.
In fact, the right-hand side of Eqn.~\ref{eqn:lower-bound} is equal to
the Manhattan distance~\cite{Krause1986} between $\vec{c_{1}}$ and $\vec{c_{2}}$ in $\intRangeCart{0}{\lceil \frac{k}{b} \rceil}{b}$,
and since $\delta(\vec{r_{1}}, \vec{r_{2}})$ is equal to
the Manhattan distance between $\vec{r_{1}}$ and $\vec{r_{2}}$ in $\intRangeCart{0}{1}{k}$,
it is easy to see that $h_{1}$ is a transformation that approximates Manhattan distances.
The following corollary captures this property.
\begin{corollary}[Distance Approximation]
Given a pair of record utilization vectors $\vec{r}_{1}$ and $\vec{r}_{2}$ with size $k$,
let $\vec{c_{1}}$ and $\vec{c_{2}}$ denote two points
in the coordinate system $\intRangeCart{0}{\lceil \frac{k}{b} \rceil}{b}$ such that
$\vec{c_{1}} = h_{1} (\vec{r}_{1})$ and $\vec{c_{2}} = h_{1} (\vec{r}_{2})$ (cf., Def.~\ref{def:tunable-lsh}).
Let $\delta^{M} (\vec{r_{1}}, \vec{r_{2}})$ denote the Manhattan distance
between $\vec{r_{1}}$ and $\vec{r_{2}}$, and
let $\delta^{M} (\vec{c_{1}}, \vec{c_{2}})$ denote the Manhattan distance
between $\vec{c_{1}}$ and $\vec{c_{2}}$.
Then, the following holds:
\begin{align}
\delta (\vec{r}_{1}, \vec{r}_{2}) = \delta^{M} (\vec{r_{1}}, \vec{r_{2}}) &\geq \delta^{M} (\vec{c_{1}}, \vec{c_{2}}) \label{eqn:manhattan:lower}
\end{align}
\end{corollary}
\begin{proofSketch}
Hamming distance in $\intRangeCart{0}{1}{k}$ is a special case of Manhattan distance.
Furthermore, by definition~\cite{Krause1986}, the right hand side of Eqn.~\ref{eqn:lower-bound}
equals the Manhattan distance $\delta^{M} (\vec{c_{1}}, \vec{c_{2}})$; therefore,
Eqn.~\ref{eqn:manhattan:lower} holds. \hfill $\blacksquare$
\end{proofSketch}
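A minimal sketch of $h_{1}$ makes Thm.~\ref{thm:distance-bounds} and the corollary concrete. The hash $f(i) = i \bmod b$ below is only an illustrative choice, since any total function from bit positions to the $b$ groups satisfies the definition:

```python
import random

def h1(r, f, b):
    """Fold a k-bit record utilization vector into b counters (h1).

    c[g] counts the 1-bits of r whose position hashes to group g.
    """
    c = [0] * b
    for i, bit in enumerate(r):
        if bit:
            c[f(i)] += 1
    return c

def hamming(r1, r2):
    return sum(x != y for x, y in zip(r1, r2))

def manhattan(c1, c2):
    return sum(abs(x - y) for x, y in zip(c1, c2))

random.seed(0)
k, b = 24, 6
f = lambda i: i % b  # illustrative total hash from {0..k-1} to {0..b-1}
for _ in range(1000):
    r1 = [random.randint(0, 1) for _ in range(k)]
    r2 = [random.randint(0, 1) for _ in range(k)]
    # The Manhattan distance between the counters lower-bounds the
    # Hamming distance between the vectors (Thm. on distance bounds).
    assert manhattan(h1(r1, f, b), h1(r2, f, b)) <= hamming(r1, r2)
print("lower bound verified on 1000 random pairs")
```

Because the relation is only a lower bound, distinct utilization vectors can collapse to identical counters; the locality-sensitivity analysis below quantifies how well the counters nevertheless preserve distances.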
Next, we demonstrate that $h_{1}$ is a locality-sensitive transformation~\cite{IndykSTOC98,GionisVLDB1999}.
In particular, we use the definition of locality-sensitiveness by Tao et al.~\cite{TaoTODS2010}, and show that the probability that
two record utilization vectors $\vec{r_{1}}$ and $\vec{r_{2}}$ are transformed into
``near\-by" record utilization counters $\vec{c_{1}}$ and $\vec{c_{2}}$
increases as the (Manhattan) distance between $r_{1}$ and $r_{2}$ decreases.
\begin{theorem}[Good Approximation]
\label{thm:probabilities}
Gi\-ven a pair of re\-cord utilization vectors $\vec{r_{1}}$ and $\vec{r_{2}}$ with size $k$,
let $\vec{c_{1}}$ and $\vec{c_{2}}$ denote two points in the coordinate system
$\intRangeCart{0}{\lceil \frac{k}{b} \rceil}{b}$ such that
$\vec{c_{1}} = h_{1} (\vec{r_{1}})$,
$\vec{c_{2}} = h_{1} (\vec{r_{2}})$ and
$b=1$ (cf., Def.~\ref{def:tunable-lsh}).
Let $\delta^{M} (\vec{r_{1}}, \vec{r_{2}})$ denote the Manhattan distance between $\vec{r_{1}}$ and $\vec{r_{2}}$, and
let $\delta^{M} (\vec{c_{1}}, \vec{c_{2}})$ denote the Manhattan distance between
$\vec{c_{1}}$ and $\vec{c_{2}}$.
Furthermore, let $\textsc{Pr}_{\delta^{M} \leq \Theta}(x)$ be a shorthand for
\begin{align*}
\textsc{Pr} \Big( \delta^{M}(\vec{c_{1}}, \vec{c_{2}}) \leq \Theta \; \Big| \; \delta^{M}(\vec{r_{1}}, \vec{r_{2}})=x \Big).
\end{align*}
Then,
\begin{align}
\textsc{Pr}_{\delta^{M} \leq \Theta} (x) =
\frac {\mathlarger{\sum \limits_{i = \lceil \frac{x - \Theta}{2} \rceil}^{\lfloor \frac{x + \Theta}{2} \rfloor}} \binom{x}{i}}{2^{x}}
\label{eqn:probabilities}
\end{align}
where $\Theta, x \in \intRange{0}{\lceil \frac{k}{b} \rceil}$ such that $\Theta < x$.
\end{theorem}
\begin{proofSketch}
If the Hamming/Manhattan distance between $\vec{r_{1}}$ and $\vec{r_{2}}$ is $x$,
then it means that these two vectors will differ in exactly $x$ bits,
as shown below.
\begin{align*}
\vec{\mathbf{r_{1}}} &: \; \Box\Box\Box \, \overbrace{\mathbf{111 \ldots 1}}^{a} \, \mathbf{0 \cdots 000} \, \Box\Box\Box \\
\vec{\mathbf{r_{2}}} &: \; \Box\Box\Box \, \mathbf{000 \cdots 0} \, \underbrace{\mathbf{1 \ldots 111}}_{x-a} \, \Box\Box\Box
\end{align*}
Furthermore, if $\vec{r_{1}}$ has $\Delta + a$ bits set to $1$, then
$\vec{r_{2}}$ must have $\Delta + (x-a)$ bits set to $1$,
where $\Delta$ denotes the number of matching $1$-bits between $\vec{r_{1}}$ and $\vec{r_{2}}$ and
$a \in \{ 0, \ldots, x \}$.
Note that when $b=1$, the Manhattan distance between $\vec{c_{1}}$ and $\vec{c_{2}}$ is
equal to the difference in the number of $1$-bits that $\vec{r_{1}}$ and $\vec{r_{2}}$ have.
Hence,
\begin{align*}
\delta^{M} (\vec{c_{1}}, \vec{c_{2}}) &= \big| \Delta + x - a - (\Delta + a) \big| \\
&= \big| x - 2a \big|.
\end{align*}
It is easy to see that there are $(x+1)$ different configurations:
\begin{align*}
a &= 0 & \Rightarrow & & &\delta^{M}(\vec{c_{1}}, \vec{c_{2}}) = x \\
a &= 1 & \Rightarrow & & &\delta^{M}(\vec{c_{1}}, \vec{c_{2}}) = x\!-\!2 \\
& \; \; \vdots & & & & \\
a &= x\!-\!1 & \Rightarrow & & &\delta^{M}(\vec{c_{1}}, \vec{c_{2}}) = x\!-\!2 \\
a &= x & \Rightarrow & & &\delta^{M}(\vec{c_{1}}, \vec{c_{2}}) = x.
\end{align*}
Only when $a = \big\{ \lceil \frac{x-\Theta}{2} \rceil, \ldots, \lfloor \frac{x+\Theta}{2} \rfloor \big\}$,
will $\delta^{M} (\vec{c_{1}}, \vec{c_{2}}) \leq \Theta$ be satisfied.
For each satisfying value of $a$,
the non-matching bits in $\vec{r_{1}}$ and $\vec{r_{2}}$
can be combined in $\binom{x}{a}$ possible ways.
Therefore, there are a total of
\begin{align*}
\sum \limits_{i = \lceil \frac{x-\Theta}{2} \rceil}^{\lfloor \frac{x+\Theta}{2} \rfloor} \binom{x}{i}
\end{align*}
combinations such that $\delta^{M} (\vec{c_{1}}, \vec{c_{2}}) \leq \Theta$.
Since there are $2^{x}$ possible combinations in total, the posterior probability in Thm.~\ref{thm:probabilities} holds.
\hfill $\blacksquare$
\end{proofSketch}
Using Thm.~\ref{thm:probabilities}, it is possible to show that when $b=1$, for all $\Theta < x$ where $\Theta, \: x \in \intRange{0}{(k-2)}$, the following holds:
\begin{align}
Pr_{\delta^{M} \leq \Theta}(x) > Pr_{\delta^{M} \leq \Theta}(x\!+\!2). \label{eqn:lsh-property}
\end{align}
Therefore, $h_{1}$ is locality-sensitive for $b=1$.
Due to space limitations, we omit the proof of Eqn.~\ref{eqn:lsh-property}, but in a nutshell,
the proof follows from the fact that going from $x$ to $x+2$,
the denominator in Eqn.~\ref{eqn:probabilities} always increases by a factor of $4$,
whereas the numerator increases by a factor that is strictly less than $4$.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.95\linewidth]{fig/multi_b_probability.pdf}
\end{center}
\caption{$\textsc{Pr}_{\delta^{M} \leq \Theta}$ for $k=24$ and $b=6$}
\label{fig:multi_b_probability}
\end{figure}
Generalizing Thm.~\ref{thm:probabilities} and Eqn.~\ref{eqn:lsh-property} to cases where $b \geq 2$ is more complicated.
However, our empirical analyses across multiple values of $k$ and $b$ demonstrate that
\begin{align*}
Pr_{\delta^{M} \leq \Theta} (x) \gg Pr_{\delta^{M} \leq \Theta} (y)
\end{align*}
holds when $y \gg x$.
For example, Fig.~\ref{fig:multi_b_probability} shows $Pr_{\delta^{M} \leq \Theta} (x)$ when $k=24$ and $b=6$.
Fig.~\ref{fig:multi_b_probability}, along with our empirical evaluations, verifies that $h_{1}$ is locality-sensitive.
Thus, combined with a space-filling curve,
it can be used to approximate the clustering problem.
\subsection{Achieving and Maintaining Tighter Bounds on Tunable-LSH}
\label{sec:details:adaptive}
Next, we demonstrate how it is possible to reduce the approximation error of $h_{1}$.
We first define the \emph{load factor} of an entry in a record utilization counter.
\begin{definition}[Load Factor]
Given a record utilization counter $\vec{c} = (c[0], \ldots, c[b\!-\!1])$ with size $b$,
the \emph{load factor} of the $i^{th}$ entry is $c[i]$.
\end{definition}
\begin{theorem}[Effects of Grouping]
\label{thm:error-wrt-load}
Given two record utilization vectors $\vec{r_{1}}$ and $\vec{r_{2}}$ with size $k$,
let $\vec{c_{1}}$ and $\vec{c_{2}}$ denote two record utilization counters with size $b=1$
such that $\vec{c_{1}} = h_{1} (\vec{r_{1}})$ and $\vec{c_{2}} = h_{1} (\vec{r_{2}})$.
Then,
\begin{align}
\textsc{Pr} \left(
\begin{array}{l}
\delta^{M}(\vec{c_{1}}, \vec{c_{2}}) \\
\; \; = \delta^{M}(\vec{r_{1}}, \vec{r_{2}})
\end{array}
\: \Bigg| \:
\begin{array}{l}
c_{1}[0] = l_{1} \; \textsc{and} \\
c_{2}[0] = l_{2}
\end{array}
\right)
= \gamma \label{eqn:error-wrt-load:part1}
\end{align}
where
\begin{align}
\gamma = \frac { \binom{l_{\text{max}}}{l_{\text{min}}} \binom{k}{l_{\text{max}}} } { \binom{k}{l_{\text{max}}} \binom{k}{l_{\text{min}}} } \label{eqn:error-wrt-load:part2}
\end{align}
and
\begin{align*}
l_{\text{max}} &= max (l_{1}, l_{2}) \\
l_{\text{min}} &= min (l_{1}, l_{2}).
\end{align*}
\end{theorem}
\begin{proofSketch}
Let $\vec{r}_{\text{max}}$ denote the record utilization vector with the most number of $1$-bits among $\vec{r_{1}}$ and $\vec{r_{2}}$, and
let $\vec{r}_{\text{min}}$ denote the vector with the least number of $1$-bits.
When $b=1$, $\delta^{M} (\vec{c_{1}}, \vec{c_{2}}) = \delta (\vec{r_{1}}, \vec{r_{2}})$ holds if and only if
the number of $1$-bits on which $\vec{r_{1}}$ and $\vec{r_{2}}$ are aligned is $l_{\text{min}}$
because in that case, both $\delta^{M} (\vec{c_{1}}, \vec{c_{2}})$ and $\delta (\vec{r_{1}}, \vec{r_{2}})$ are equal to
$l_{\text{max}} - l_{\text{min}}$ (note that $\delta^{M} (\vec{c_{1}}, \vec{c_{2}})$ is always equal to $l_{\text{max}} - l_{\text{min}}$).
Assuming that the positions of $1$-bits in $\vec{r}_{\text{max}}$ are fixed,
there are $\binom{l_{\text{max}}}{l_{\text{min}}}$ possible ways of arranging the $1$-bits of $\vec{r}_{\text{min}}$ such that
$\delta (\vec{r}_{1}, \vec{r}_{2}) = l_{\text{max}} - l_{\text{min}}$.
Since the $1$-bits of $\vec{r}_{\text{max}}$ can be arranged in $\binom{k}{l_{\text{max}}}$ different ways, there are
$\binom{l_{\text{max}}}{l_{\text{min}}} \binom{k}{l_{\text{max}}}$ combinations such that
$\delta^{M} (\vec{c}_{1}, \vec{c}_{2}) = \delta (\vec{r}_{1} , \vec{r}_{2})$.
Note that in total, the bits of $\vec{r_{1}}$ and $\vec{r_{2}}$ can be arranged in
$\binom{k}{l_{\text{max}}} \binom{k}{l_{\text{min}}}$ possible ways;
therefore, Eqns.~\ref{eqn:error-wrt-load:part1} and~\ref{eqn:error-wrt-load:part2} describe the posterior probability that
$\delta^{M} (\vec{c_{1}}, \vec{c_{2}}) = \delta (\vec{r_{1}}, \vec{r_{2}})$,
given $c_{1}[0] = l_{1}$ and $c_{2}[0] = l_{2}$. \hfill $\blacksquare$
\end{proofSketch}
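The counting argument can be cross-checked by brute-force enumeration for small $k$ (a sketch; it enumerates every pair of record utilization vectors with the given load factors and compares the result against $\gamma$):

```python
from itertools import combinations
from math import comb

def exact_match_prob(k, l1, l2):
    """Fraction of pairs (r1, r2) with l1 and l2 one-bits whose Hamming
    distance equals the counter distance |l1 - l2| (the case b = 1)."""
    target = abs(l1 - l2)
    total = match = 0
    for ones1 in combinations(range(k), l1):
        for ones2 in combinations(range(k), l2):
            total += 1
            # Hamming distance = size of the symmetric difference
            if len(set(ones1) ^ set(ones2)) == target:
                match += 1
    return match / total

def gamma(k, l1, l2):
    l_max, l_min = max(l1, l2), min(l1, l2)
    return comb(l_max, l_min) * comb(k, l_max) / (comb(k, l_max) * comb(k, l_min))

for l1 in range(5):
    for l2 in range(5):
        assert abs(exact_match_prob(4, l1, l2) - gamma(4, l1, l2)) < 1e-12
```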
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.95\linewidth]{fig/error_wrt_load.pdf}
\end{center}
\caption{$\textsc{Pr}(\delta^{M} \neq \delta)$ for $k=12$, $b=1$ and across varying load factors}
\label{fig:error_wrt_load}
\end{figure}
According to Eqns.~\ref{eqn:error-wrt-load:part1} and~\ref{eqn:error-wrt-load:part2} in Thm.~\ref{thm:error-wrt-load},
the probability that $\delta^{M} (\vec{c_{1}}, \vec{c_{2}})$ merely approximates $\delta (\vec{r_{1}}, \vec{r_{2}})$
without being exactly equal to it is lower
for load factors that are close or equal to zero and
likewise for load factors that are close or equal to $\lceil \frac{k}{b} \rceil$ (cf., Fig.~\ref{fig:error_wrt_load}).
This property suggests that by carefully choosing $f$,
it is possible to achieve even tighter error bounds for $h_{1}$.
Contrast the matrices in Fig.~\ref{fig:matrix-rep:rows} and Fig.~\ref{fig:matrix-rep:columns},
which contain the same query access vectors, but whose columns are grouped in two different ways\footnote{Groups are separated by vertical dashed lines.}:
\begin{inparaenum}[(i)]
\item in Fig.~\ref{fig:matrix-rep:rows}, the grouping is based on the original sequence of execution, and
\item in Fig.~\ref{fig:matrix-rep:columns}, queries with similar access patterns are grouped together.
\end{inparaenum}
Fig.~\ref{fig:rows:transformed} and Fig.~\ref{fig:rc:transformed} represent
the corresponding record utilization counters for
the record utilization vectors in the matrices in
Fig.~\ref{fig:matrix-rep:rows} and Fig.~\ref{fig:matrix-rep:columns}, respectively.
Take $\record{r_{3}}$ and $\record{r_{5}}$, for instance.
Their actual Hamming distance with respect to $q_{0}$--$q_{7}$ is $8$.
Now consider the transformed matrices.
According to Fig.~\ref{fig:rows:transformed}, the Hamming distance lower bound is $0$,
whereas according to Fig.~\ref{fig:rc:transformed}, it is $8$.
Clearly, the bounds in the second representation are closer to the original.
The reason is as follows.
Even though $\record{r_{3}}$ and $\record{r_{5}}$ differ on all the bits for $q_{0}$--$q_{7}$,
when the bits are grouped as in Fig.~\ref{fig:matrix-rep:rows},
the counts alone cannot distinguish the two bit vectors.
In contrast, if the counts are computed based on the grouping in Fig.~\ref{fig:matrix-rep:columns}
(which clearly places the $1$-bits in separate groups),
the counts indicate that the two bit vectors are indeed different.
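The effect of the grouping can be reproduced with a small sketch (the bit vectors below are hypothetical stand-ins for $\record{r_{3}}$ and $\record{r_{5}}$; the actual vectors appear in the figures):

```python
r3 = [1, 0, 1, 0, 1, 0, 1, 0]  # hypothetical record utilization vectors
r5 = [0, 1, 0, 1, 0, 1, 0, 1]  # they differ on all 8 bits

def counters(r, groups):
    """Record utilization counter: one count per group of query columns."""
    return [sum(r[i] for i in g) for g in groups]

def manhattan(c1, c2):
    return sum(abs(a - b) for a, b in zip(c1, c2))

# Grouping by execution order: the counts cannot tell the vectors apart
by_order = [[0, 1, 2, 3], [4, 5, 6, 7]]
print(manhattan(counters(r3, by_order), counters(r5, by_order)))        # 0

# Grouping similar queries places the 1-bits in separate groups
by_similarity = [[0, 2, 4, 6], [1, 3, 5, 7]]
print(manhattan(counters(r3, by_similarity), counters(r5, by_similarity)))  # 8
```

Under the first grouping the Hamming distance lower bound collapses to $0$; under the second it is exact.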
The observations above are in accordance with Thm.~\ref{thm:error-wrt-load}.
Consequently, we make the following optimization.
Instead of randomly choosing a hash function,
we construct $f$ such that
it maps queries with similar access vectors (i.e., columns in the matrix)
to the same hash value.
This way, it is possible to obtain record utilization counters with entries that have
either very high or very low load factors (cf., Def.~\ref{def:tunable-lsh}),
thus, decreasing the probability of error (cf., Thm.~\ref{thm:error-wrt-load}).
We develop a technique to efficiently determine
groups of queries with similar access patterns and
to adaptively maintain these groups as the access patterns change.
Our approach consists of two parts:
\begin{inparaenum}[(i)]
\item to approximate the similarity between any two queries, we rely on the \textsc{Min-Hash} scheme~\cite{BroderSEQUENCES1997}, and
\item to adaptively group similar queries, we develop an incremental version of a multidimensional scaling (MDS) algorithm~\cite{MorrisonINFVIS2003}.
\end{inparaenum}
\textsc{Min-Hash} offers a quick and efficient way of approximating the similarity
(more specifically, the Jaccard similarity~\cite{JaccardNP1912}) between two sets of integers.
Therefore, to use it,
the query access vectors in our conceptualization need to be translated into a set of positional identifiers
that correspond to the records for which
the bits in the vector are set to $1$.\footnote{In practice, this translation never takes place because the system maintains positional vectors to begin with.}
For example, according to Fig.~\ref{fig:matrix-rep:original},
$\vec{q_{1}}$ should be represented with the set $\{0, 5, 6\}$
because $r_{0}$, $r_{5}$ and $r_{6}$ are the only records for which the bits are set to $1$.
Note that, we do not need to store the original query access vectors at all.
In fact, after the access patterns over a query are determined,
we compute and store only its \textsc{Min-Hash} value.
This is important for keeping the memory overhead of our algorithm low.
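A minimal sketch of the scheme follows (the linear hash family and the prime modulus are illustrative assumptions, not the system's actual hash function):

```python
import random

P = 2_147_483_647  # a large prime (illustrative choice)
rng = random.Random(7)
A, C = rng.randrange(1, P), rng.randrange(P)

def min_hash(ids):
    """Single-hash Min-Hash value of a set of positional identifiers."""
    return min((A * s + C) % P for s in ids)

def jaccard(s1, s2):
    return len(s1 & s2) / len(s1 | s2)

def collision_rate(s1, s2, trials=20_000, seed=0):
    """Fraction of random hash functions whose Min-Hash values collide;
    in expectation this equals the Jaccard similarity of the two sets."""
    r = random.Random(seed)
    hits = 0
    for _ in range(trials):
        h = {s: r.random() for s in s1 | s2}  # one random hash function
        hits += min(h[s] for s in s1) == min(h[s] for s in s2)
    return hits / trials

# q1 accesses records r0, r5 and r6, i.e., the set {0, 5, 6}
q1, q2 = {0, 5, 6}, {0, 5, 7}
sig = min_hash(q1)  # the only value the system needs to store per query
# collision_rate(q1, q2) is close to jaccard(q1, q2) == 0.5
```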
\begin{table}[t]
\dataStructuresTable
\caption{Data structures referenced in algorithms}
\label{tab:data-structures}
\end{table}
Queries with similar access patterns are grouped together
using a multidimensional scaling (MDS) algorithm~\cite{KruskalPsychometrika1964} that was
originally developed for data visualization, and
has recently been used for clustering~\cite{BislimovskaEDBT2015}.
Given a set of points and a distance function,
MDS assigns coordinates to points such that their original distances are preserved as much as possible.
In one efficient implementation~\cite{MorrisonINFVIS2003},
each point is initially assigned a random set of coordinates,
but these coordinates are adjusted iteratively based on a spring-force analogy.
That is, it is assumed that points exert a force on each other that is proportional
to the difference between their actual and observed distances,
where the latter refers to the distance that is computed from
the algorithm-assigned coordinates.
These forces are used for computing the current velocity ($V$ in Table~\ref{tab:data-structures})
and the approximated coordinates of a point ($X$ in Table~\ref{tab:data-structures}).
The intuition is that, after successive iterations,
the system will reach equilibrium, at which point,
the approximated coordinates can be reported.
Since computing all pairwise distances can be prohibitively expensive,
the algorithm relies on a combination of sampling ($S[]$ in Table~\ref{tab:data-structures})
and maintaining, for each point, a list of its nearest neighbours ($N[]$ in Table~\ref{tab:data-structures})---only these
distances are used in computing the net force acting on a point.
Then, the nearest neighbours are updated in each iteration
by removing the most distant neighbour of a point and replacing it with a new
point from the random sample if the distance between the point and the random sample
is smaller than the distance between the point and its most distant neighbour.
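The velocity update at the heart of this scheme can be sketched in Python (a simplified one-dimensional version mirroring Algorithm~\ref{alg:original} in the appendix; \texttt{dist} stands in for the original pairwise distance function and is an assumed input):

```python
def spring_force(x, neighbours, X, dist):
    """Net 1-D force on point x from its sampled and nearest neighbours:
    proportional to (distance on the line) - (actual distance)."""
    f = 0.0
    for y in neighbours:
        observed = abs(X[x] - X[y])
        direction = 1.0 if X[y] > X[x] else -1.0
        # pull x towards y if the line overstates their distance
        f += direction * (observed - dist(x, y))
    return f / len(neighbours)

def step(x, neighbours, X, V, dist):
    V[x] = V[x] / 2 + spring_force(x, neighbours, X, dist)  # damped velocity
    X[x] += V[x]

# Two points placed 2.0 apart whose actual distance is 1.0
X, V = {0: 0.0, 1: 2.0}, {0: 0.0, 1: 0.0}
step(0, [1], X, V, lambda a, b: 1.0)
# X[0] is now 1.0: the distance on the line matches the actual distance
```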
\begin{algorithm}[t]
\caption{Reconfigure-F} \label{alg:online}
{ \scriptsize
\begin{algorithmic}[1]
\Require
\Statex $\vec{\mathbf{q_{t}}}$: query access vector produced at time $t$
\Ensure
\Statex Coordinates of MDS points are updated, which are used in determining the outcome of $f$
\Procedure{Reconfigure-F}{$\vec{q_{t}}$}
\State{$\text{pos} \leftarrow$ $( \text{begin} + \text{size} ) \: \% \: k$} \label{alg:online:l1}
\State{$S[\text{pos}]\text{.clear()}$} \label{alg:online:l2}
\State{$N[\text{pos}]\text{.clear()}$} \label{alg:online:l3}
\State{$X[\text{pos}] \leftarrow -0.5 + rand() \: / \: \textsc{rand-max}$} \label{alg:online:l4}
\State{$V[\text{pos}] \leftarrow 0$} \label{alg:online:l5}
\State{$H[\text{pos}] \leftarrow \Call{Min-Hash}{\vec{q_{t}}}$} \label{alg:online:l6}
\If{$\text{size} < k$}
\State{size \verb!+=! $1$}
\Else
\State{$\text{begin} = (\text{begin} + 1) \: \% \: k$}
\EndIf
\For{$i \leftarrow 0, i < \text{size}, i\texttt{++}$} \label{alg:online:l7}
\State{$x \leftarrow (\text{begin}+i) \: \% \: k$}
\State \Call{Update-S-and-N}{$x$} \label{alg:online:l8}
\State \Call{Update-Velocity}{$x$} \label{alg:online:l9}
\EndFor
\For{$i \leftarrow 0, i < \text{size}, i\texttt{++}$}
\State{$x \leftarrow (\text{begin}+i) \: \% \: k$}
\State \Call{Update-Coordinates}{$x$} \label{alg:online:l10}
\EndFor \label{alg:online:l11}
\EndProcedure
\end{algorithmic}
}
\end{algorithm}
This algorithm cannot be used directly for our purposes because it is not incremental.
Therefore, we propose a revised MDS algorithm that incorporates the following modifications:
\begin{enumerate}
\item In our case, each point in the algorithm represents a query access vector.
However, since we are not interested in visualizing these points, but rather clustering them,
we configure the algorithm to place these points along a single dimension.
Then, by dividing the coordinate space into consecutive regions,
we are able to determine similar query access vectors.
\item Instead of computing the coordinates of all of the points at once,
our version makes incremental adjustments to the coordinates
every time reconfiguration is needed.
\end{enumerate}
The revised algorithm is given in Algorithm~\ref{alg:online}.
First, the algorithm decides which MDS point to assign
to the new query access vector $\vec{q_{t}}$ (line~\ref{alg:online:l1}).
It clears the array and the heap data structures containing, respectively,
\begin{inparaenum}[(i)]
\item the randomly sampled, and
\item the neighbouring set of points (lines~\ref{alg:online:l2}--\ref{alg:online:l3}).
\end{inparaenum}
Furthermore, it assigns a random coordinate to the point within the interval $[-0.5, 0.5]$ (line~\ref{alg:online:l4}), and
resets its velocity to $0$ (line~\ref{alg:online:l5}).
Next, it computes the \textsc{Min-Hash} value of $\vec{q_{t}}$ and stores it in $H[\text{pos}]$ (line~\ref{alg:online:l6}).
Then, it makes two passes over all the points in the system (lines~\ref{alg:online:l7}--\ref{alg:online:l11}), while
first updating their sample and neighbouring lists (line~\ref{alg:online:l8}),
computing the net forces acting on them based on the \textsc{Min-Hash} distances and
updating their velocities (line~\ref{alg:online:l9});
and then updating their coordinates (line~\ref{alg:online:l10}).
The procedures used in the last part are implemented in a similar way as the original algorithm~\cite{MorrisonINFVIS2003}; that is,
in line~\ref{alg:online:l8}, the sampled points are updated,
in line~\ref{alg:online:l9}, the velocities assigned to the MDS points are updated, and
in line~\ref{alg:online:l10}, the coordinates of the MDS points are updated based on these updated velocities.
However, our implementation of the \textsc{Update-Velocity} procedure (line~\ref{alg:online:l9}) is slightly different from the original.
In particular, in updating the velocities, we use a decay function so that
the algorithm forgets ``old'' forces that might have originated from
the elements in $S[]$ and $N[]$ that have been assigned to new
query access vectors in the meantime.
Note that unless one keeps track of the history of all the forces
that have acted on every point in the system,
there is no other way of ``undoing'' or ``forgetting'' these ``old'' forces.
\begin{algorithm}[t]
\caption{Hash Function $f$} \label{alg:hash-function-f}
{ \scriptsize
\begin{algorithmic}[1]
\Require
\Statex $\mathbf{t}$: sequence number of a query access vector
\Ensure
\Statex $f(t)$ is computed and returned
\Procedure{f}{$t$}
\State pos $\gets$ $t \: \% \: k$
\State $(\text{lo}, \text{hi}) \leftarrow$ \Call {group-bounds}{$X[$pos$]$}
\State $\text{coid} \leftarrow$ \Call {centroid} {$\text{lo}, \text{hi}$}
\State \Return \Call {hash} {$\text{coid}$} $\% \, b$
\EndProcedure
\end{algorithmic}
}
\end{algorithm}
Given the sequence number of a query access vector ($t$),
the outcome of the hash function $f$ is determined
based on the coordinates of the MDS point that had previously been
assigned to the query access vector by
the \textsc{Reconfigure} procedure.
To this end, the coordinate space is divided into $b$ groups
containing points with consecutive coordinates such that
there are at most $\lceil \frac{k}{b} \rceil$ points in each group.
Then, one option is to use the group identifier, which is a number in $\intRange{0}{b-1}$,
as the outcome of $f$, but
there is a problem with this na\"ive implementation.
Specifically, we observed that
even though the \emph{relative} coordinates of MDS points
within the ``same'' group may not change significantly
across successive calls to the \textsc{Reconfigure} procedure,
points within a group, as a whole, may shift.
This is an inherent (and in fact, a desirable) property of the incremental algorithm.
However, the problem is that there may be far too many cases
where the group identifier of a point changes
just because the absolute coordinates of the group have changed,
even though the point continues to be part of the ``same'' group.
To solve this problem,
we compute a centroid for each group
by taking the \textsc{Min-Hash} of the identifiers of the points within that group;
these centroids rarely change across successive iterations.
We then rely on the identifier of the centroid, as opposed to its coordinates,
to compute the group number and, hence, the outcome of $f$.
The pseudocode of this procedure is given in Algorithm~\ref{alg:hash-function-f}.
We make one last observation.
Internally, \textsc{Min-Hash} uses multiple hash functions to approximate the degree to which two sets are similar~\cite{BroderSEQUENCES1997}.
It is also known that increasing the number of internal hash functions used (within \textsc{Min-Hash})
should increase the overall accuracy of the \textsc{Min-Hash} scheme.
However, as unintuitive as it may seem, in our approach, we use only a single hash function within \textsc{Min-Hash},
yet, we are still able to achieve sufficiently high accuracy.
The reason is as follows.
Recall that Algorithm~\ref{alg:online} relies on
multiple pairwise distances to position every point.
Consequently, even though individual pairwise distances may be inaccurate (because we are just using a single hash function within \textsc{Min-Hash}),
collectively the errors are cancelled out, and points can be positioned accurately on the MDS coordinate space.
\subsection{Resetting Old Entries in Record Utilization Counters}
\label{sec:details:reset}
\begin{figure}[t]
\centering
\counterShifting
\caption{Assuming $b=3$, $\Box$ indicates the allowed locations at each time tick, and $\emptyset$ indicates the counter to be reset.}
\label{fig:counter-shifting}
\end{figure}
Once the group identifier is computed (cf., Algorithm~\ref{alg:hash-function-f}),
it should be straightforward to update
the record utilization counters (cf., line~\ref{alg:overview:tune:l1} in Algorithm~\ref{alg:overview:tune}).
However, maintaining the original query access vectors is prohibitively expensive,
and without them, we have no way of knowing which counters to decrement
when a query access vector becomes stale.
Therefore, we develop a more efficient scheme in which
old values can also be removed from the record utilization counters.
Instead of maintaining $b$ entries in every record utilization counter,
we maintain twice as many entries ($2b$).
Then, whenever the \textsc{Tune} procedure is called,
instead of directly using the outcome of $f(t)$
to locate the counters to be incremented,
we map $f(t)$ to a location within an ``allowed"
region of consecutive entries in the record utilization counter
(cf., line~\ref{alg:overview:tune:l1} in Algorithm~\ref{alg:overview:tune}).
At every $\lceil \frac{k}{b} \rceil^{\text{th}}$ iteration,
this allowed region is shifted by one to the right,
wrapping back to the beginning if necessary.
Consider Fig.~\ref{fig:counter-shifting}.
Assuming that $b=3$ and that at time $t=0$
the allowed region spans entries from $0$ to $(b-1)$,
at time $t=\lceil \frac{k}{b} \rceil$, the region will span entries from $1$ to $b$;
at time $t=k$, the region will span entries from $b$ to $2b-1$; and
at time $t=\lceil \frac{4k}{b} \rceil$, the region will span entries $0$ and those from $b+1$ to $2b-1$.
Since $f(t)$ produces a value between $0$ and $b-1$ (inclusive), whereas the entries are numbered from $0$ to $2b-1$ (inclusive),
the \textsc{Reconfigure} procedure in Algorithm~\ref{alg:overview:tune} uses $f(t)$ as follows.
If the outcome of $f(t)$ directly corresponds to a location in the allowed region, then it is used.
Otherwise, the output is incremented by $b$ (cf., line~\ref{alg:overview:tune:l1} in Algorithm~\ref{alg:overview:tune}).
Whenever the allowed region is shifted to the right, it may land on an already incremented entry.
If that is the case, that entry is reset, thereby allowing ``old'' values to be forgotten (cf., line~\ref{alg:overview:tune:l2} in Algorithm~\ref{alg:overview:tune}).
These resets are shown by $\emptyset$ in Fig.~\ref{fig:counter-shifting}.
This scheme guarantees that any query access pattern that is less than $k$ steps old is remembered,
while any query access pattern that is more than $2k$ steps old is forgotten.
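The region bookkeeping can be sketched as follows (variable names are hypothetical; the actual bookkeeping is part of Algorithm~\ref{alg:overview:tune}):

```python
from math import ceil

def allowed_region(t, k, b):
    """Entries of the 2b-entry counter that may be incremented at time t.
    The region of b consecutive entries shifts right every ceil(k/b) steps,
    wrapping back to the beginning if necessary."""
    start = (t // ceil(k / b)) % (2 * b)
    return [(start + i) % (2 * b) for i in range(b)]

def map_to_region(ft, t, k, b):
    """Map f(t) in [0, b) onto the allowed region: use it directly if it
    falls inside the region, otherwise increment it by b."""
    region = set(allowed_region(t, k, b))
    return ft if ft in region else ft + b

# b = 3, k = 12, as in the example above
print(allowed_region(0, 12, 3))    # [0, 1, 2]
print(allowed_region(4, 12, 3))    # [1, 2, 3]   (t = ceil(k/b))
print(allowed_region(12, 12, 3))   # [3, 4, 5]   (t = k)
print(allowed_region(16, 12, 3))   # [4, 5, 0]   (t = ceil(4k/b))
```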
\section{Additional Proofs}
Given a set of record utilization vectors $R = \{ \record{r_{1}}, \ldots, \record{r_{l}} \}$, where each vector has size $k$,
let $R^{M} = \{ \record{P_{1}}, \ldots, \record{P_{l}} \}$ denote the corresponding set of points in the $k$-dimensional space with coordinates $\{ 0, 1 \}$ on each axis.
Furthermore, let $\delta( \cdot, \cdot )$ denote the edit distance between two record utilization vectors, and
let $\delta^{M}( \cdot, \cdot )$ denote the Manhattan distance between two points.
Then, for every $\record{r_{a}}, \record{r_{b}} \in R$,
$\delta( \record{r_{a}}, \record{r_{b}} ) = \delta^{M}( \record{P_{a}}, \record{P_{b}} )$,
where $ \record{P_{a}}, \record{P_{b}} \in R^{M} $ are the corresponding points for $\record{r_{a}}$ and $\record{r_{b}}$, respectively.
We prove Theorem ... by induction on the size of record utilization vectors ($k$).
\noindent \textbf{Base case:}
We prove Theorem ... when $k=1$.
$\record{r_{i}}$ can be either $(0)$ or $(1)$, and so can $\record{r_{j}}$, hence, there are four cases to consider.
As shown in Fig ..., in all four cases the edit distance is equal to the Manhattan distance.
\noindent \textbf{Inductive step:}
Assuming edit distances equal Manhattan distances for any pair of record utilization vectors with size $k \leq C$, we prove that the same statement holds for $k = C+1$.
First, note that for $k=C+1$, the Manhattan distance between (any) two points $P_{a}$ and $P_{b}$ is defined as:
\begin{align}
\delta^{M}( P_{a}, P_{b} ) = \sum\limits_{i=1}^{C+1} \abs{ P_{a}[i] - P_{b}[i] } \text{.}
\end{align}
Therefore, going from $k=C$ to $k=C+1$, the Manhattan distance increases by $\abs{P_{a}[C+1] - P_{b}[C+1]}$.
Second, note that edit distances are also additive.
That is, by inserting a single bit into the same position in each of the two vectors,
the edit distance increases by the edit distance between the two inserted bits.
Therefore, there are four cases to consider (just like in the base case), and for each case,
edit distance increases by the same amount as the increase in Manhattan distance, thus, proving the induction.
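The induction can also be confirmed exhaustively for small $k$ (a sketch; as in the proof, the edit distance between two equal-length bit vectors is the substitution-only, i.e., per-position, distance):

```python
from itertools import product

def manhattan(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

def edit(r1, r2):
    """Substitution-only edit distance between equal-length bit vectors,
    built up bit by bit as in the inductive argument."""
    d = 0
    for a, b in zip(r1, r2):
        d += (a != b)  # each appended bit pair adds its own edit distance
    return d

# Base case (k = 1) and all larger cases up to k = 8, exhaustively
for k in range(1, 9):
    for r1 in product((0, 1), repeat=k):
        for r2 in product((0, 1), repeat=k):
            assert edit(r1, r2) == manhattan(r1, r2)
```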
\section{Original MDS Algorithm}
\begin{algorithm}[t]
{\footnotesize
\begin{algorithmic}
\Procedure{update-sample}{$x$}
\State{$S[x]$.clear()}
\For{$i \leftarrow 0, \: i < S[x].\text{capacity}() + N[x].\text{capacity}()$}
\State{$y \leftarrow$ rand()$\%k$}
\If{$N[x]$.isFull()}
\If{$\delta^{H}(x, y) < \delta^{H}(x, N[x]$.peek()$)$}
\State{$S[x]$.push($N[x]$.pop())}
\State{$N[x]$.push($y$)}
\Else
\State{$S[x]$.push($y$)}
\EndIf
\Else
\State{$N[x]$.push($y$)}
\EndIf
\State{$i\texttt{++}$}
\EndFor
\EndProcedure
\Procedure{update-velocity}{$x$}
\State{$f \leftarrow 0$}
\ForAll{$y \in S[x] \cup N[x]$}
\If{$X[x] < X[y]$}
\State $f \mathrel{+}= \bigl\lvert X[x] - X[y] \bigr\rvert - \delta^{H}(x, y)$
\Else
\State $f \mathrel{+}= \delta^{H}(x, y) - \bigl\lvert X[x] - X[y] \bigr\rvert$
\EndIf
\EndFor
\State{$f \leftarrow f / \bigl\lvert S[x] \cup N[x] \bigr\rvert$}
\State{$V[x] \leftarrow V[x] / 2 + f$}
\EndProcedure
\Procedure{update-coordinates}{$x$}
\State{$X[x] \mathrel{+}= V[x]$}
\EndProcedure
\end{algorithmic}
}
\caption{}
\label{alg:original}
\end{algorithm}
For completeness, we first summarize the steps in the original MDS algorithm~\cite{MorrisonINFVIS2003} and subsequently describe our specific adaptations.
The major building blocks of the algorithm are depicted in Algorithm~\ref{alg:original}.
The algorithm is based on a spring-force analogy.
Initially, MDS points are (uniformly) randomly scattered in the coordinate space, but in successive iterations the algorithm re-positions these points.
For every pair of points, it is assumed that the points exert a (positive/negative) force on each other
proportional to the difference between their original distances and their distances in the coordinate space.
In every iteration, using laws of classical physics,
for any given pair of points, their acceleration can be computed/updated, which, in turn, can be used for
updating the velocities (cf., \textsc{update-velocity} in Algorithm~\ref{alg:original}) and
coordinates (cf., \textsc{update-coordinates} in Algorithm~\ref{alg:original}) of the points.
Since computing the force between every pair of points is not scalable,
the authors propose an approximation in which forces are computed with respect to only a subset of points.
In each iteration, the algorithm takes additional steps to ensure that this sample becomes increasingly more representative of the true population (cf., \textsc{update-sample} in Algorithm~\ref{alg:original}).
Consequently, the points are iteratively re-positioned until the forces converge or, in the worst case, for a predefined number of iterations.
\subsection{Tunable-LSH in chameleon-db}
\begin{table}[t]
{
\scriptsize
\begin{tabular}{r | r | r | r | r | r | r | r}
& \rot{\textbf{CDB [ICDE'15]}}
& \rot{\textbf{CDB [\textsc{Tu\-na\-ble-LSH}\xspace]}}
& \rot{\textbf{RDF-3x}}
& \rot{\textbf{VOS [6.1]}}
& \rot{\textbf{VOS [7.1]}}
& \rot{\textbf{MonetDB}}
& \rot{\textbf{4Store}} \\ \hline
WatDiv $10$M & $\mathbf{4.7}$ & $19.4$ & $18.8$ & $44.0$ & $24.5$ & $17.0$ & $93.0$ \\ \hline
WatDiv $100$M & $\mathbf{40.4}$ & $42.0$ & $71.4$ & $210.3$ & $96.4$ & $62.7$ & $767.2$ \\ \hline
\end{tabular}
}
\caption{Query execution time, geometric mean (milliseconds)~\cite{AlucICDE2015}}
\label{tab:evaluation:watdiv:overview}
\end{table}
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.235\textwidth}
\includegraphics[width=\textwidth]{fig/cdb-results-watdiv10M-tlsh.png}
\caption{WatDiv $10$M triples}
\label{fig:evaluation:cdb:detailed:small}
\end{subfigure}
\begin{subfigure}[b]{0.235\textwidth}
\includegraphics[width=\textwidth]{fig/cdb-results-watdiv100M-tlsh.png}
\caption{WatDiv $100$M triples}
\label{fig:evaluation:cdb:detailed:large}
\end{subfigure}
\caption{Comparison of chameleon-db implemented using a hierarchical clustering algorithm and with \textsc{Tu\-na\-ble-LSH}\xspace}
\label{fig:evaluation:cdb:detailed}
\end{figure}
The first experiment evaluates \textsc{Tu\-na\-ble-LSH}\xspace with\-in \emph{chameleon-db}, which is our prototype RDF data management system~\cite{AlucUW2013}.
In particular, in earlier work, we had introduced a hierarchical clustering algorithm for grouping RDF triples into what we call \emph{group-by-query clusters}~\cite{AlucICDE2015}.
In this evaluation, we replace that hierarchical clustering algorithm with \textsc{Tu\-na\-ble-LSH}\xspace, and study its implications on the end-to-end query performance,
keeping the same experimental configuration.
For completeness, we quote our description of the experimental setup from our previous paper:
``For our evaluations, we [primarily] use the Waterloo SPARQL Diversity Test Suite (WatDiv)
because it facilitates the generation of test cases that are far more diverse than any of the existing benchmarks~\cite{AlucISWC2014}.
In this regard, we use the WatDiv \emph{data generator} to create two datasets:
one with $10$ million RDF triples and another with $100$ million RDF triples
(we observe that systems under test (SUT) load data into main memory on the smaller dataset whereas at $100$M triples, SUTs perform disk I/O).
Then, using the WatDiv \emph{query template generator}, we create $125$ query templates
and instantiate each query template with $100$ queries, thus, obtaining $12500$ queries.\footnote{\url{http://db.uwaterloo.ca/watdiv/stress-workloads.tar.gz}}''~\cite{AlucICDE2015}
We compare our approach with chameleon-db implemented with the hierarchical clustering algorithm (abbreviated CDB [ICDE'15]) and
``five popular systems, namely, RDF-3x~\cite{NeumannVLDBJ2010}, MonetDB~\cite{IdreosIEEEDEB2012}, 4Store~\cite{HarrisSSWS2009} and
Virtuoso Open Source (VOS) versions $6.1$~\cite{ErlingNKNM2009} and $7.1$~\cite{ErlingIEEEDEB2012}.
RDF-3x follows the single-table approach and creates multiple indexes;
MonetDB is a column-store, where RDF data are represented using vertical partitioning~\cite{AbadiVLDBJ2009};
and the last three systems are industrial systems.
Both 4Store and VOS group and index data primarily based on RDF predicates, but
VOS 6.1 is a row-store whereas VOS 7.1 is a column-store.
We configure these systems so that they make as much use of the available main memory as possible.''~\cite{AlucICDE2015}
``We evaluate each system independently on each query template.
Specifically, for each query template, we first warm up the system by executing the workload for that query template once (i.e., $100$ queries).
Then, we execute the same workload five more times (i.e., $500$ queries).
We report average query execution time over the last five workloads.''~\cite{AlucICDE2015}
``Our prototype starts with a completely segmented clustering, where each cluster consists of a single triple.''~\cite{AlucICDE2015}
However, ``after the execution of the $100^{th}$ query, we allow the storage advisor to compute a better group-by-query clustering''~\cite{AlucICDE2015}
using either the hierarchical clustering algorithm in~\cite{AlucICDE2015} or \textsc{Tu\-na\-ble-LSH}\xspace.
Our experiments indicate that on average, the time to compute the group-by-query clusters has decreased by an order of magnitude with the introduction of \textsc{Tu\-na\-ble-LSH}\xspace.
For example, for the $100$M triples dataset, it took $317.6$ milliseconds on (geometric) average to compute the group-by-query clusters using the hierarchical clustering algorithm in~\cite{AlucICDE2015},
whereas with \textsc{Tu\-na\-ble-LSH}\xspace, it takes only about $26.1$ milliseconds.
This is due to the approximate nature of \textsc{Tu\-na\-ble-LSH}\xspace.
As shown in Table~\ref{tab:evaluation:watdiv:overview} and in Fig.~\ref{fig:evaluation:cdb:detailed}, this approximation has a slight impact on query performance,
but for the $100$M triples dataset, CDB is still significantly faster than the other RDF data management systems.
There is one apparent reason for this:
\textsc{Tu\-na\-ble-LSH}\xspace is an approximate method, and therefore, the generated group-by-query clusters are not perfect.
To verify this hypothesis, we studied the logs generated during our experiments, which revealed the following:
using the group-by-query clustering in~\cite{AlucICDE2015},
chameleon-db's query engine was able to execute $64.8\%$ of the queries without any decomposition (a property that chameleon-db's query optimizer is trying to achieve~\cite{AlucICDE2015}),
whereas the group-by-query clustering computed using \textsc{Tu\-na\-ble-LSH}\xspace resulted in only $27.1\%$ of the queries being executed without decomposition.
Of course, it is possible to improve chameleon-db's query optimizer, but that is a topic for future research.
This trade-off between the clustering overhead and the query execution time suggests that
for RDF workloads that are too dynamic to be predicted and sampled upfront, it might be desirable to have frequent clustering steps, in which case, using \textsc{Tu\-na\-ble-LSH}\xspace is a much better option because of its lower overhead.
\subsection{Self-Clustering Hashtable}
\begin{figure*}
\centering
\begin{subfigure}[b]{0.31\textwidth}
\includegraphics[width=\textwidth]{fig/evaluation-hashtable-compare-all.pdf}
\caption{Random Access (All Data Structures) -- Control how loaded the workloads are}
\label{fig:evaluation:hashtable:compare-all}
\end{subfigure}
\begin{subfigure}[b]{0.31\textwidth}
\includegraphics[width=\textwidth]{fig/evaluation-hashtable-increasing-coverage.pdf}
\caption{Random Access -- Control how loaded the workloads are}
\label{fig:evaluation:hashtable:increasing-coverage}
\end{subfigure}
\begin{subfigure}[b]{0.31\textwidth}
\includegraphics[width=\textwidth]{fig/evaluation-hashtable-increasing-record-count.pdf}
\caption{Random Access -- Control record count, keep record size constant at $128$ bytes}
\label{fig:evaluation:hashtable:increasing-record-count}
\end{subfigure}
\begin{subfigure}[t]{0.31\textwidth}
\includegraphics[width=\textwidth]{fig/evaluation-hashtable-increasing-uniqueness.pdf}
\caption{Random Access -- Control workload dynamism}
\label{fig:evaluation:hashtable:increasing-uniqueness}
\end{subfigure}
\begin{subfigure}[t]{0.31\textwidth}
\includegraphics[width=\textwidth]{fig/evaluation-lsh-increasing-uniqueness.pdf}
\caption{Sensitivity analysis of \textsc{Tu\-na\-ble-LSH}\xspace -- Control workload dynamism}
\label{fig:evaluation:tlsh:increasing-uniqueness}
\end{subfigure}
\begin{subfigure}[t]{0.31\textwidth}
\includegraphics[width=\textwidth]{fig/evaluation-lsh-increasing-b.pdf}
\caption{Sensitivity analysis of \textsc{Tu\-na\-ble-LSH}\xspace -- Control \textsc{Tu\-na\-ble-LSH}\xspace parameter $2b$}
\label{fig:evaluation:tlsh:increasing-b}
\end{subfigure}
\caption{Experimental evaluation of \textsc{Tu\-na\-ble-LSH}\xspace in a self-clustering hashtable and the sensitivity analysis of \textsc{Tu\-na\-ble-LSH}\xspace}
\label{fig:evaluation:hashtable}
\end{figure*}
The second experiment evaluates an in-memory hash\-table that we developed,
which uses \textsc{Tu\-na\-ble-LSH}\xspace to dynamically cluster the re\-cords it stores.
Hashtables are commonly used in RDF data management systems.
For example, the dictionary in an RDF data management system,
which maps integer identifiers to URIs or literals (and vice versa),
is often implemented as a hashtable~\cite{WilkinsonHPL2006,AbadiVLDBJ2009,ErlingIEEEDEB2012}.
Secondary indexes can also be implemented as hashtables,
whereby the hashtable acts as a key-value store and
maps tuple identifiers to the content of the tuples.
In fact, in our own prototype RDF system, \emph{chameleon-db},
all the indexes are secondary (dense) indexes
because instead of relying on any sort order inherent in the data,
we rely on the notion of group-by-query clusters,
in which RDF triples are ordered purely based on the workload~\cite{AlucUW2013,AlucICDE2015}.
The hashtable interface is very similar
to that of a standard hash\-table,
except that users are given the option to mark the beginning and end of queries.
This information is used to dynamically cluster records such that
those that are co-accessed across similar sets of queries also become physically co-located.
All of the clustering and re-clustering is transparent to the user,
hence, we call this the \emph{self-clustering hashtable}.
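A minimal sketch of this interface in Python (the method names and the toy re-clustering policy below are ours for illustration only; the actual structure uses \textsc{Tu\-na\-ble-LSH}\xspace, and its physical layout lives in main-memory pages rather than a Python list):

```python
# Illustrative sketch only: method names and the re-clustering policy
# below are hypothetical stand-ins for the actual Tunable-LSH machinery.
class SelfClusteringHashtable:
    def __init__(self):
        self._store = {}     # key -> record
        self._layout = []    # simulated physical order of the records
        self._history = {}   # key -> tuple of query ids that accessed it
        self._next_qid = 0
        self._qid = None     # id of the currently running query

    def put(self, key, record):
        if key not in self._store:
            self._layout.append(key)
        self._store[key] = record

    def get(self, key):
        if self._qid is not None:
            # remember which query touched this record
            self._history[key] = self._history.get(key, ()) + (self._qid,)
        return self._store[key]

    def begin_query(self):
        self._next_qid += 1
        self._qid = self._next_qid

    def end_query(self):
        self._qid = None
        # Toy stand-in for Tunable-LSH: records that were co-accessed
        # across the same queries end up physically adjacent.
        self._layout.sort(key=lambda k: self._history.get(k, ()))

t = SelfClusteringHashtable()
for k in "abcd":
    t.put(k, k.upper())
t.begin_query(); t.get("b"); t.get("d"); t.end_query()
print(t._layout)  # ['a', 'c', 'b', 'd'] -- "b" and "d" are now adjacent
```

The sort is stable, so records never accessed within a query keep their original relative order while co-accessed records are pulled together.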
The self-clustering hashtable has the following advantages and disadvantages:
Compared to a standard hashtable that tries to avoid hash-collisions,
it deliberately co-locates records that are accessed together.
If the workloads favour a scenario in which many records are frequently accessed together, then
we can expect the self-clus\-tering hashtable to have improved fetch times
due to better CPU cache utilization, prefetching, etc.~\cite{AilamakiVLDB1999}.
On the other hand, these optimizations come with three types of overhead.
First, every time a query is executed, \textsc{Tu\-na\-ble-LSH}\xspace needs to be updated
(cf., Algorithms~\ref{alg:overview:tune} and~\ref{alg:online}).
Second, compared to a standard hashtable in which the physical address
of a record is determined solely by the underlying hash function (which remains fixed throughout the entire workload),
in our case, the physical address of a record needs to be maintained dynamically
because the underlying hash function is not fixed (i.e., it changes dynamically throughout the workload).
Consequently, there is the overhead of going to a lookup table and retrieving the physical address of a record.
Third, physically moving records around in the storage system takes time---in fact, this is often an expensive operation.
Therefore, the objective of this set of experiments is twofold:
\begin{inparaenum}[(i)]
\item to evaluate the circumstances under which the self-clustering hashtable outperforms other popular data structures, and
\item to understand when the tuning overhead may become a bottleneck.
\end{inparaenum}
Consequently, we report the end-to-end query execution times, and if necessary,
break them down into the time to
\begin{inparaenum}[(i)]
\item \emph{fetch} the records, and
\item \emph{tune} the data structures
(which includes all types of overhead listed above).
\end{inparaenum}
In our experiments, we compare the self-clustering hashtable to popular implementations of three data structures.
Specifically, we use:
\begin{inparaenum}[(i)]
\item \emph{std\texttt{::}unordered\_map}\footnote{\url{http://www.cplusplus.com/reference/unordered\_map/unordered\_map/}},
which is the C\texttt{++} standard library implementation of a hashtable,
\item \emph{std\texttt{::}map}\footnote{\url{http://www.cplusplus.com/reference/map/map/}},
which is the C\texttt{++} standard library implementation of a red-black tree, and
\item \emph{stx\texttt{::}\-btree}\footnote{\url{https://panthema.net/2007/stx-btree/}},
which is an open source in-memory B\texttt{+} tree implementation.
\end{inparaenum}
As a baseline, we also include a static version of our hashtable, i.e., one that does not rely on \textsc{Tu\-na\-ble-LSH}\xspace.
We consider two types of workloads:
one in which records are accessed \emph{sequentially} and
the other in which records are accessed \emph{randomly}.
Each workload consists of $3000$ queries that are synthetically generated using WatDiv~\cite{AlucISWC2014}.
For each data structure, we measure the end-to-end workload execution time and compute the mean query execution time by dividing the total workload execution time by the number of queries in the workload.
Queries in these workloads consist of changing query access patterns, and in different experiments, we control different parameters such as the number of records that are accessed by queries on average, the rate at which the query access patterns change in the workload, etc.
We repeat each experiment $20$ times over workloads that are randomly generated with the same characteristics (e.g., average number of records accessed by each query, how fast the workload changes, etc.) and report averages across these $20$ runs.
We do not report standard errors because they are negligibly small and they do not add significant value to our results.
For the sequential case, \emph{stx\texttt{::}btree} and \emph{std\texttt{::}map} outperform the hashtables,
which is expected because once the first few records are fetched from main-memory,
the remaining ones can already be prefetched into the CPU cache (due to the predictability of the sequential access pattern).
Therefore, for the remaining part, we focus on the random access scenario,
which is more common in RDF data management systems,
and which can be a bottleneck even in systems like RDF-3x~\cite{NeumannVLDBJ2010}
that have clustered indexes over all permutations of attributes.
For more examples and a thorough explanation, we refer the reader to~\cite{AlucPVLDB2014}.
In this experiment, we control the number of records that a query needs to access (on average),
where each record is $128$ bytes.
Fig.~\ref{fig:evaluation:hashtable:compare-all} compares all the data structures with respect to their end-to-end (mean) query execution times.
Three observations stand out:
First, in the random access case, the self-clustering hashtable as well as the standard hashtable perform much better than the other data structures,
which is what would be expected.
This observation also holds for the subsequent experiments; therefore, for presentation purposes,
we do not include these data structures in Figs.~\ref{fig:evaluation:hashtable:increasing-coverage}--\ref{fig:evaluation:hashtable:increasing-uniqueness}.
Second, the baseline static version of our hashtable (i.e., without \textsc{Tu\-na\-ble-LSH}\xspace) performs
much worse than the standard hashtable, even worse than a B\texttt{+} tree.
This suggests that our implementation can be optimized further,
which might improve the performance of the self-clustering hashtable as well (this is left as future work).
Third, as the number of records that a query needs to access increases,
the self-clustering hashtable outperforms all the other data structures,
which verifies our initial hypothesis.
For the same experiment above,
Fig.~\ref{fig:evaluation:hashtable:increasing-coverage} focuses on the self-clus\-tering hashtable versus the standard hashtable, and
illustrates why the performance improvement is higher (for the self-clustering hash\-table)
for workloads in which queries access more records.
Note that while the \emph{fetch} time of the self-clustering hashtable scales proportionally
with that of std\texttt{::}unordered\_map,
the \emph{tune} overhead is proportionally much lower for workloads
in which queries access more records.
This is because with increasing ``records per query count'',
records can be re-located in batches across the pages in main-memory
as opposed to moving individual records around.
Next, we keep the average number of records that a query needs to access constant at $2000$,
but control the number of records in the database.
As in the previous experiment, each record is $128$ bytes.
As illustrated in Fig.~\ref{fig:evaluation:hashtable:increasing-record-count}, increasing the number of records in the database
(i.e., scaling-up) favours the self-clustering hashtable.
The reason is that, when there are only a few records in the database,
the records are likely clustered to begin with.
We repeat the same experiment, but this time, by controlling the record size and keeping the database size constant at $640$ megabytes.
Surprisingly, the relative improvement with respect to the standard hashtable remains more or less constant,
which indicates that the improvement is largely dominated by the size of the database,
and increasing it is to the advantage of the self-clustering hashtable.
Finally, we evaluate how sensitive the self-clustering hashtable is to the dynamism in the workloads.
Note that for the self-clustering hashtable to be useful at all, the workloads need to be predictable---at least to a certain extent.
That is, if records are physically clustered but are never accessed in the future, then all those clustering efforts are wasted.
To verify this hypothesis, we control the expected number of query clusters
(i.e., queries with similar but not exactly the same access vectors)
in any $100$ consecutive queries in the workloads that we generate.
Let us call this property of the workload its \emph{$100$-Uniqueness}.
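To make this knob concrete, the following toy generator (our illustration; WatDiv's actual generator works differently) draws every window of $100$ queries from a controlled number of base access patterns:

```python
import random

def workload(num_queries, uniqueness_per_100, num_records, recs_per_query,
             seed=0):
    """Toy workload generator: each window of 100 queries draws its access
    vectors from `uniqueness_per_100` base patterns ("100-Uniqueness")."""
    rng = random.Random(seed)
    queries = []
    for start in range(0, num_queries, 100):
        # base access patterns for this window
        bases = [frozenset(rng.sample(range(num_records), recs_per_query))
                 for _ in range(uniqueness_per_100)]
        for _ in range(min(100, num_queries - start)):
            queries.append(rng.choice(bases))
    return queries

w = workload(200, 5, 1000, 20)
print(len(w), len(set(w[:100])), len(set(w[100:])))  # 200, at most 5, at most 5
```

Raising `uniqueness_per_100` toward $100$ makes every query access a distinct set of records, which is the fully dynamic extreme discussed below.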
Fig.~\ref{fig:evaluation:hashtable:increasing-uniqueness} illustrates how the \emph{tuning} overhead starts to become a bottleneck
as the workloads become more and more dynamic, to the extent of being completely unique, i.e., each query accesses a distinct set of records.
\subsection{Sensitivity Analysis of Tunable-LSH}
In the final set of experiments, we evaluate the sensitivity of \textsc{Tu\-na\-ble-LSH}\xspace in isolation, that is,
without worrying about how it affects physical clustering, and compare it to three other hash functions:
\begin{inparaenum}[(i)]
\item a standard non-locality sensitive hash function\footnote{\url{http://en.cppreference.com/w/cpp/utility/hash}},
\item bit-sampling, which is known to be locality-sensitive for Hamming distances~\cite{IndykSTOC98}, and
\item \textsc{Tu\-na\-ble-LSH}\xspace without the optimizations discussed in Section~\ref{sec:details}.
\end{inparaenum}
These comparisons are made across workloads with different characteristics (i.e., dense vs.~sparse, dynamic vs.~stable, etc.)
where parameters such as
the average number of records accessed per query and
the expected number of query clusters within any $100$-consecutive sequence of queries in the workload are controlled.
Our evaluations indicate that \textsc{Tu\-na\-ble-LSH}\xspace generally outperforms its alternatives.
Due to space considerations, we cannot present all of our results in detail.
Therefore, we will summarize our most important observations.
Fig.~\ref{fig:evaluation:tlsh:increasing-uniqueness} shows how
the probability that \emph{the evaluated hash functions map records with similar utilization vectors to nearby hash values}
changes as the workloads become more and more dynamic.
In computing these probabilities, both the original distances (i.e., $\delta$) and the distances over the hashed values (i.e., $\delta^{*}$)
are normalized with respect to the maximum distance in each geometry.
As illustrated in Fig.~\ref{fig:evaluation:tlsh:increasing-uniqueness}, \textsc{Tu\-na\-ble-LSH}\xspace achieves higher probability even when the workloads are dynamic.
The unoptimized version of \textsc{Tu\-na\-ble-LSH}\xspace behaves more or less like a static locality-sensitive hash function, such as bit sampling, which is an expected result because
\textsc{Tu\-na\-ble-LSH}\xspace cannot achieve high accuracy
without the workload-sensitive arrangement introduced in Section~\ref{sec:details}.
It is also important to emphasize that even in that case \textsc{Tu\-na\-ble-LSH}\xspace is no worse than a standard LSH scheme, which is aligned with the theorems in Section~\ref{sec:details:properties}.
We have not included the results for the standard non-locality sensitive hash function
because, as one might guess, its probability distribution is completely uncorrelated with our clustering objectives.
Fig.~\ref{fig:evaluation:tlsh:increasing-b} demonstrates how the choice of $b$ (or $2b$ as described in Section~\ref{sec:details:reset}) affects the accuracy of \textsc{Tu\-na\-ble-LSH}\xspace.
Having a higher $b$ implies fewer and fewer undesirable collisions of query access vectors and, hence, higher accuracy.
On the other hand, for bit sampling, the ideal number of samples is equal to the number of query clusters in the workload; thus, increasing $b$,
which corresponds to the number of bits that are sampled,
might result in oversampling and, therefore, lower accuracy.
For example, consider two record utilization vectors $1001$ and $0001$ with Hamming distance $1$.
If only $1$ bit is sampled, there is $\frac{3}{4}$ probability that these two vectors will be hashed to the same value.
On the other hand, if $2$ bits are sampled, the probability drops to $\frac{1}{2}$.
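This worked example can be checked by exact enumeration (a quick verification sketch, not code from any of the evaluated systems): bit sampling hashes two vectors to the same value exactly when all sampled positions agree.

```python
from itertools import combinations
from fractions import Fraction

def collision_prob(u, v, k):
    """Exact probability that sampling k of the n bit positions (uniformly,
    without replacement) hashes bit vectors u and v to the same value."""
    n = len(u)
    agree = {i for i in range(n) if u[i] == v[i]}
    subsets = list(combinations(range(n), k))
    hits = sum(1 for pos in subsets if set(pos) <= agree)
    return Fraction(hits, len(subsets))

u, v = (1, 0, 0, 1), (0, 0, 0, 1)   # Hamming distance 1
print(collision_prob(u, v, 1))      # 3/4
print(collision_prob(u, v, 2))      # 1/2
```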
\section{Introduction}
Denote by $\mathbb{B}^n$ the unit ball in $\mathbb{C}^n$. For a domain $D\subset\mathbb{C}^n$, the Carath\'eodory and Kobayashi-Eisenman volume elements on $D$ at a point $p \in D$ are defined respectively by
\begin{align*}
c_D(p) & = \sup \Big\{ \big\vert \det \psi^{\prime}(p) \big\vert^2 : \psi \in \mathcal{O}(D,\mathbb{B}^n), \psi(p) = 0 \Big\},\\
k_D(p) &= \inf \Big\{\big\vert \det \psi^{\prime}(0)\big\vert^{-2 } : \psi \in \mathcal{O}(\mathbb{B}^n, D), \psi(0)=p \Big\}.
\end{align*}
By Montel's theorem $c_D(p)$ is always attained and if $D$ is taut then $k_D(p)$ is also attained. Under a holomorphic map $F:D \to \Omega$, they satisfy the rule
\[
v_D(p) \geq \big\vert \det F^{\prime}(p)\big\vert^2 v_{\Omega}\big(F(p)\big)
\]
where $v=c,k$. In particular, equality holds if $F$ is a biholomorphism. Accordingly, if $k_D$ is nonvanishing (which is the case if $D$ is bounded or taut), then
\[
q_D(p)=\frac{c_D(p)}{k_D(p)}
\]
is a biholomorphic invariant and is called the \textit{quotient invariant}. If $D=\mathbb{B}^n$, then
\[
c_{\mathbb{B}^n}(p)=k_{\mathbb{B}^n}(p)=\big(1-\vert p \vert^2\big)^{-n-1},
\]
and thus $q_{\mathbb{B}^n}$ is identically equal to $1$. In general, an application of the Schwarz lemma shows that $q_D \leq 1$. It is a remarkable fact that if $D$ is any domain in $\mathbb{C}^n$ and $q_D(p)=1$ for some point $p \in D$, then $q_D(z)=1$ for all $z \in D$ and $D$ is biholomorphic to $\mathbb{B}^n$. This was first proved by Wong \cite{Wong} with the hypothesis that $D$ is bounded and complete hyperbolic, which was relaxed by Rosay \cite{Ro} to $D$ being any bounded domain. Dektyarev \cite{Dek} further relaxed this condition to $D$ being only hyperbolic and later Graham and Wu \cite{Gr-Wu} showed that no assumption on $D$ is required for the result to be true, in fact, it is true for any complex manifold. Thus $q_D$ measures the extent to which the Riemann mapping theorem fails for $D$. This fact is a fundamental step in the proof of the Wong-Rosay theorem and several other applications can be found in \cites{Gr-Kr1, Gr-Kr2, Kr-vol}.
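For the reader's convenience, here is a sketch of the standard Schwarz lemma argument behind the inequality $q_D \leq 1$:

```latex
Let $\psi \in \mathcal{O}(D,\mathbb{B}^n)$ with $\psi(p)=0$ and
$\phi \in \mathcal{O}(\mathbb{B}^n,D)$ with $\phi(0)=p$. Then
$g = \psi \circ \phi$ maps $\mathbb{B}^n$ into itself with $g(0)=0$, so by
the Schwarz lemma $g^{\prime}(0)$ has operator norm at most $1$ and hence
$\big\vert \det g^{\prime}(0) \big\vert \leq 1$, i.e.,
\[
\big\vert \det \psi^{\prime}(p) \big\vert^2 \leq
\big\vert \det \phi^{\prime}(0) \big\vert^{-2}.
\]
Taking the supremum over all such $\psi$ and the infimum over all such
$\phi$ gives $c_D(p) \leq k_D(p)$, that is, $q_D(p) \leq 1$.
```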
The purpose of this note is to study the boundary asymptotics of the Kobayashi volume element on smoothly bounded convex finite type domains and Levi corank one domains in $\mathbb{C}^n$. Boundary behaviour of the quotient invariant on strongly pseudoconvex domains has been studied by several authors, see for example \cites{Cheung-Wong, Gr-Kr1, Ma}, and in particular it is known that $q_D(z) \to 1$ if $z \to \partial D$ for a strongly pseudoconvex domain $D$. Recently in \cite{Nik-sq}, \textit{nontangential} boundary asymptotics of the volume elements near $h$-extendible boundary points were obtained. Finally, we also note that in \cite{Nik-Pas}, a relation between the Carath\'eodory volume element and the Bergman kernel was observed in light of the multidimensional Suita conjecture. Our goal is to compute the boundary asymptotics of the Kobayashi volume element in terms of the distinguished polydiscs of McNeal and Catlin devised to capture the geometry of a domain near a convex finite type and Levi corank one boundary point respectively. In order to state our results, let us briefly recall this terminology. First, let $D=\{\rho<0\}$ be a smoothly bounded convex finite type domain and $p^0 \in \partial D$. For each point $p \in D$ sufficiently close to $p^0$, and $\epsilon>0$ sufficiently small, McNeal's orthogonal coordinate system $z_1^{p,\epsilon}, \ldots, z_n^{p,\epsilon}$ centred at $p$ is constructed as follows (see \cite{McN-adv}). Denote by $D_{p, \epsilon}$ the domain
\[
D_{p,\epsilon}=\Big\{ z \in \mathbb{C}^n :\rho(z)< \rho(p)+\epsilon\Big\}.
\]
Let $\tau_n(p,\epsilon)$ be the distance of $p$ to $\partial D_{p,\epsilon}$ and $\zeta_n(p,\epsilon)$ be a point on $\partial D_{p, \epsilon}$ realising this distance. Let $H_n$ be the complex hyperplane through $p$ and orthogonal to the vector $\zeta_n(p,\epsilon)-p$. Compute the distance from $p$ to $\partial D_{p,\epsilon}$ along each complex line in $H_n$. Let $\tau_{n-1}(p,\epsilon)$ be the largest such distance and let $\zeta_{n-1}(p,\epsilon)$ be a point on $\partial D_{p,\epsilon}$ such that $\vert \zeta_{n-1}(p,\epsilon)-p\vert=\tau_{n-1}(p,\epsilon)$. For the next step, define $H_{n-1}$ as the complex hyperplane through $p$ and orthogonal to the span of the vectors $\zeta_n(p,\epsilon)-p, \zeta_{n-1}(p,\epsilon)-p$ and repeat the above construction. Continuing in this way, we define the numbers $\tau_n(p, \epsilon), \tau_{n-1}(p, \epsilon), \ldots, \tau_1(p, \epsilon)$, and the points $\zeta_{n}(p, \epsilon)$, $\zeta_{n-1}(p, \epsilon)$, $\ldots, \zeta_1(p, \epsilon)$ on $\partial D_{p, \epsilon}$. Let $T^{p, \epsilon}$ be the translation sending $p$ to the origin and $U^{p,\epsilon}$ be a unitary mapping aligning $\zeta_k(p,\epsilon)-p$ along the $z_k$-axis and $\zeta_k(p, \epsilon)$ to a point on the positive $\Re z_k$ axis. Set
\[
z^{p,\epsilon}= U^{p, \epsilon} \circ T^{p, \epsilon}(z).
\]
The polydisc
\[
P(p,\epsilon)=\Big\{z^{p,\epsilon} : \vert z^{p,\epsilon}_1 \vert < \tau_1(p, \epsilon), \ldots, \vert z^{p,\epsilon}_n\vert < \tau_{n}(p, \epsilon)\Big\}
\]
is known as McNeal's polydisc. Write $z = (z_1, z_2, \ldots, z_n) = ({}'z, z_n) \in \mathbb C^n$. The scaling method (which will be briefly explained later) shows that every sequence in $D$ that converges to $p^0 \in \partial D$ furnishes limiting domains
\begin{equation}\label{cvx-mod}
D_{\infty} = \left\{ z \in \mathbb C^n : -1 + \Re \sum_{\alpha=1}^n b_{\alpha} z_{\alpha} + P_{2m}({}'z) < 0 \right\},
\end{equation}
where $b_\alpha$ are complex numbers and $P_{2m}$ is a real convex polynomial of degree at most $2m$ ($m \ge 1$), $2m$ being the $1$-type of $\partial D$ at $p^0$. The polynomial $P_{2m}$ is not unique in general and depends on how the given sequence approaches $p^0$. The limiting domains $D_{\infty}$ are usually called local models associated with $D$ at $p^0$. It is known that $D_{\infty}$ possesses a local holomorphic peak function at every boundary point including the point at infinity and hence is complete hyperbolic (see \cite{G2}).
\begin{thm} \label{cvx}
Let $D=\{\rho<0\}$ be a smoothly bounded convex finite type domain in $\mathbb{C}^n$ and $p^j \in D$ be a sequence converging to $p^0 \in \partial D$. Let $\epsilon_j=-\rho(p^j)$. Then up to a subsequence,
\[
k_D(p^j) \prod_{\alpha=1}^n \tau_{\alpha}(p^j,\epsilon_j)^2 \to k_{D_\infty}(0)
\]
as $j \to \infty$, where $D_{\infty}$ is a local model associated with $D$ at $p^0$.
\end{thm}
Now we consider the Levi corank one case. Recall that a boundary point $p^0$ of a domain $D \subset \mathbb{C}^n$ is said to have Levi corank one if there exists a neighbourhood of $p^0$ where $\partial D$ is smooth, pseudoconvex, of finite type, and the Levi form has at least $(n-2)$ positive eigenvalues. If every boundary point of $D$ has Levi corank one, then $D$ is called a Levi corank one domain. This includes the class of {\it all} smoothly bounded pseudoconvex finite type domains in $\mathbb{C}^2$. A basic example in higher dimension is the egg
\[
E_{2m} = \Big\{ z \in \mathbb C^n : \vert z_1 \vert^{2m} + \vert z_2 \vert^2 + \ldots + \vert z_n \vert^2 < 1\Big\}
\]
where $m\ge 2$ is an integer. In general, if $\rho$ is a local defining function for $D$ at a Levi-corank one boundary point $p^0$, then it was proved in \cite{Cho2} that for each point $p$ in a sufficiently small neighbourhood $U$ of $p^0$, there are holomorphic coordinates $\zeta=\Phi^p(z)$ such that
\begin{multline}\label{nrmlfrm}
\rho \circ (\Phi^{p})^{-1}(\zeta) = \rho(p) + 2 \Re \zeta_n + \sum_{ \substack{j + k \le 2m\\
j, k > 0}} a_{jk}(p) \zeta_1^j \overline \zeta_1^k + \sum_{\alpha=2}^{n-1} \vert \zeta_{\alpha}\vert^2 \\
+ \sum_{\alpha = 2}^{n - 1} \sum_{ \substack{j + k \le m\\
j, k > 0}} \Re \Big( \big(b_{jk}^{\alpha}(p) \zeta_1^j \overline \zeta_1^k \big) \zeta_{\alpha} \Big)+
O\big(\vert \zeta_n\vert \vert \zeta \vert+\vert \zeta_{*}\vert^2 \vert \zeta \vert+\vert \zeta_{*}\vert \vert \zeta_1\vert^{m+1}+\vert \zeta_1\vert^{2m+1}\big)
\end{multline}
where $\zeta_*=(0,\zeta_2,\ldots, \zeta_{n-1},0)$.
To construct the distinguished polydiscs around $p$, set
\begin{equation}\label{defn-A-B}
\begin{aligned}
A_l(p) &= \max\Big\{\big\vert a_{jk}(p)\big\vert : j + k = l\Big\}, \quad 2 \leq l \leq 2m,\\
B_{l'}(p) & = {\rm max} \Big\{ \big\vert b_{jk}^\alpha (p) \big\vert \; : \; j+k=l', \; 2 \leq \alpha \leq n-1\Big \}, \; 2 \leq l' \leq m.
\end{aligned}
\end{equation}
Now define, for each $\delta > 0$, the special radius
\begin{alignat}{3} \label{E46}
\tau(p, \delta) = \min \Big\{ \Big( \delta/ A_l(p) \Big)^{1/l}, \; \Big(\delta^{1/2}/B_{l'}(p) \Big)^{1/l'} \; : \; 2 \le l \le 2m, \; 2 \leq l' \leq m \Big \}.
\end{alignat}
It was shown in \cite{Cho2} that the coefficients $b_{jk}^\alpha$ in the above definition of $\tau(p,\delta)$ are insignificant and may be ignored, so that
\begin{equation}\label{tau-defn2}
\tau(p, \delta) = \min \Big\{ \Big( \delta/ A_l(p) \Big)^{1/l} \;: \; 2 \le l \le 2m \Big\}.
\end{equation}
Set
\[
\tau_1(p, \delta) = \tau(p, \delta) = \tau, \quad \tau_2(p, \delta) = \ldots = \tau_{n-1}(p, \delta) = \delta^{1/2}, \quad \tau_n(p, \delta) = \delta.
\]
The distinguished polydiscs $Q(p, \delta)$ of Catlin are defined by
\[
Q(p,\delta)=\Big\{(\Phi^p)^{-1}(\zeta) : \vert \zeta_1 \vert < \tau_1(p,\delta), \ldots, \vert \zeta_n\vert < \tau_n(p,\delta)\Big\}.
\]
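As a quick illustration of these definitions (our own sanity check, not taken from \cite{Cho2}), consider the egg $E_{2m}$ at points $p$ on the $z_n$-axis near $p^0=(0,\ldots,0,1)$:

```latex
Here the $\zeta_1$-dependence of the normal form \eqref{nrmlfrm} reduces to
the single term $\vert \zeta_1 \vert^{2m} = \zeta_1^m \overline{\zeta}_1^m$,
so among the coefficients $a_{jk}(p)$ only $a_{mm}(p)$ is nonvanishing.
Consequently $A_{2m}(p) \approx 1$ while $A_l(p)=0$ for $l < 2m$, and
\eqref{tau-defn2} gives
\[
\tau(p,\delta) \approx \delta^{1/2m},
\]
so $Q(p,\delta)$ has radii comparable to
$\big(\delta^{1/2m}, \delta^{1/2}, \ldots, \delta^{1/2}, \delta\big)$,
reflecting the different rates at which $\partial E_{2m}$ flattens in the
various directions.
```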
The scaling method (which is well known in this case and will be briefly explained later) shows that every sequence in $D$ that converges to $p^0 \in \partial D$ furnishes limiting domains
\begin{equation}\label{Lcoran1-mod}
D_{\infty} = \left\{ z \in \mathbb C^n : 2 \Re z_n + P_{2m}(z_1, \overline z_1) + \sum_{\alpha=2}^{n-1} \vert z_{\alpha} \vert^2<0\right\}
\end{equation}
where $P_{2m}(z_1, \overline z_1)$ is a subharmonic polynomial of degree at most $2m$ ($m \ge 1$) without harmonic terms, $2m$ being the $1$-type of $\partial D$ at $p^0$. Such a limiting domain $D_{\infty}$ is called a local model associated with $D$ at $p^0$. By Proposition~4.5 of \cite{Yu2} and the remark at the bottom of page~605 of the same article, $D_{\infty}$ possesses a local holomorphic peak function at every boundary point. By Lemma~1 of \cite{BP}, there is a local holomorphic peak function for $D_{\infty}$ at the point at infinity also. It follows that $D_{\infty}$ is complete hyperbolic (see \cite{G2}). Observe that the point $b=({}'0,-1)$ lies in every such $D_{\infty}$.
\begin{thm} \label{Lcr1}
Let $D=\{\rho<0\}$ be a smoothly bounded Levi corank one domain in $\mathbb{C}^n$ and $p^j \in D$ be a sequence converging to $p^0 \in \partial D$. Let $\delta_j>0$ be such that $\tilde p^j=(p^j_{1},\cdots, p^j_{n}+\delta_j)$ is a point on $\partial D$. Then up to a subsequence,
\[
k_D(p^j) \prod_{\alpha=1}^n \tau_{\alpha}(\tilde p^j,\delta_j)^2 \to c(\rho,p^0)k_{D_\infty}(b)
\]
as $j \to \infty$, where $c(\rho,p^0)$ is a positive constant that depends only on $\rho$ and $p^0$, and $D_{\infty}$ is a local model associated with $D$ at $p^0$.
\end{thm}
We conclude the article by showing the efficacy of the quotient invariant in determining strong pseudoconvexity if its boundary behaviour is a priori known---a property enjoyed by the squeezing function and its dual the Fridman invariant as well. We refer the reader to the recent articles \cites{MV, NV} and the references therein for the definition and other relevant materials related to these two invariants. Let us denote the squeezing function for a domain $D$ by $s_D$ and the Fridman invariant by $h_D$. It was proved in \cite{Zim} that if $D$ is a bounded convex domain with $C^{2, \alpha}$ boundary for some $\alpha\in(0,1)$, then $D$ is strongly pseudoconvex if $s_D(z) \to 1$ as $z \to \partial D$. Mahajan and Verma \cite{MV} showed that if $D$ is a smoothly bounded convex domain or if $D$ is a smoothly bounded $h$-extendible domain (i.e., $D$ is a smoothly bounded pseudoconvex finite type domain for which the Catlin and D'Angelo multitypes coincide at every boundary point), then $D$ is strongly pseudoconvex if either $h_D(z) \to 0$ or $s_D(z) \to 1$ as $z \to \partial D$. We have the following analog for the quotient invariant:
\begin{thm}\label{q-appln}
For any positive integer $n$ and $\alpha \in (0,1)$, there exists some $\epsilon=\epsilon(n, \alpha)>0$ with the following property: If $D \subset \mathbb{C}^n$ is a bounded convex domain with $C^{2, \alpha}$ boundary and if
\[
q_D(p) \geq 1-\epsilon
\]
outside a compact subset of $D$, then $D$ is strongly pseudoconvex.
\end{thm}
\medskip
{\it Acknowledgements}: The authors would like to thank K. Verma for his support and encouragement. Some of the material presented here has benefited from conversations that the first author had with G.P. Balakumar, S. Gorai, and P. Mahajan. We would like to thank them for their valuable comments and suggestions. We thank the anonymous referee for useful suggestions for improving the exposition herein; in particular, Theorem~1.3 and its proof are based on ideas given by the referee.
\section{Regularity of the volume elements}
In this section we prove continuity of the volume elements that is required for the proofs of Theorems~\ref{cvx} and \ref{Lcr1}. The arguments are similar to the case of the Carath\'eodory-Reiffen and Kobayashi-Royden pseudometrics and we present them only for convenience. First, a few remarks. If $D \subset \mathbb{C}^n$ is any domain and $p \in D$, then $c_D(p)$ is attained. Indeed, choose a sequence $\psi^j \in \mathcal{O}(D, \mathbb{B}^n)$ such that $\psi^j(p)=0$ and $\vert \det (\psi^j)^{\prime}(p)\vert^2 \to c_D(p)$. By Montel's theorem, passing to a subsequence if necessary, $\psi^j$ converges uniformly on compact subsets of $D$ to a map $\psi \in \mathcal{O}(D, \overline{\mathbb{B}^n})$. Since $\psi(p)=0$, by the maximum principle $\psi\in \mathcal{O}(D, \mathbb{B}^n)$, and it follows that $c_D(p)=\vert \det \psi^{\prime}(p)\vert^2$. In particular, this implies that $c_D(p)$ is always finite. Note that $c_D(p)$ can vanish (for example if $D=\mathbb{C}$), but is strictly positive if $D$ is not a Liouville domain. Likewise, if $D$ is taut then similar arguments as above show that $k_D(p)$ is attained. Observe that $k_D(p)$ is finite for any domain $D$ because we can put a ball $B(p,r)$ inside $D$ and consequently $\phi(t)=rt+p$ is a competitor for $k_D(p)$, giving us $k_D(p) \leq r^{-2n}$. It is possible that $k_D(p)$ can also vanish but if $D$ is bounded, then by invoking Cauchy's estimates we see that $k_D(p)>0$. Similarly, if $D$ is taut, then also $k_D(p)>0$ as it is attained. We will call a map $\psi \in \mathcal{O}(D, \mathbb{B}^n)$ satisfying $\psi(p)=0$ and $\vert \det \psi^{\prime}(p)\vert^2=c_D(p)$ a Carath\'eodory extremal map for $D$ at $p$. Similarly, a Kobayashi extremal map for $D$ at $p$ is a map $\psi \in \mathcal{O}(\mathbb{B}^n,D)$ with $\psi(0)=p$ and $\vert \det \psi^{\prime}(0)\vert^{-2}=k_D(p)$.
\begin{prop}\label{cont-vol}
Let $D \subset \mathbb{C}^n$ be a domain. Then $c_D$ is continuous. If $D$ is taut, then $k_D$ is also continuous.
\end{prop}
\begin{proof}
We will show that $c_D$ is locally Lipschitz which of course implies that $c_D$ is continuous. Let $B(a,2r) \subset\subset D$ and fix $p, q \in B(a,r)$. Choose a Carath\'eodory extremal map $\psi$ for $D$ at $p$. Then
\begin{multline*}
c_D(p)-c_D(q) \leq \big\vert \det \psi^{\prime}(p)\big\vert^2 - \big\vert\det \psi^{\prime}(q) \big\vert^2 c_{\mathbb{B}^n}\big(\psi(q)\big) \\
= \big\vert \det \psi^{\prime}(p)\big\vert^2- \frac{\big\vert\det \psi^{\prime}(q) \big\vert^2}{\big(1-\vert \psi(q) \vert^2\big)^{n+1}} \leq \big\vert \det \psi^{\prime}(p)\big\vert^2 - \big\vert\det \psi^{\prime}(q)\big \vert^2.
\end{multline*}
Since the distances of $p$ and $q$ to $\partial D$ are at least $r$, by Cauchy's estimates the right hand side is bounded above by $C_r \vert p-q\vert$ where $C_r$ is a constant that depends only on $r$. Thus we can interchange the roles of $p$ and $q$ to obtain $\vert c_D(p)-c_D(q) \vert \leq C_r\vert p-q\vert$, which establishes the local Lipschitz property of $c_D$.
For $k_D$, first we show that it is upper semicontinuous for any domain $D$. Let $p\in D$ and $\epsilon>0$. Then there exists $\phi\in\mathcal{O}(\mathbb{B}^n,D)$ with $\phi(0)=p$ such that
\begin{equation}\label{usc-1}
\vert \det \phi^{\prime}(0) \vert^{-2} < k_D(p)+\epsilon.
\end{equation}
Let $0<r<1$ and set for $z\in D$,
\[
f^z(t)=\phi\big((1-r) t\big)+ (z-p), \quad t \in \mathbb{B}^n.
\]
Since $\phi\big(B(0,1-r)\big)$ is a relatively compact subset of $D$, there exists $\delta>0$ such that if $z \in B(p,\delta)$, then $f^z \in \mathcal{O}(\mathbb{B}^n, D)$. Also $f^z(0)=z$ and so $f^z$ is a competitor for $k_{D}(z)$. Therefore,
\[
k_D(z) \leq \big\vert \det (f^z)^{\prime}(0)\big\vert^{-2} =(1-r)^{-2n}\big\vert \det \phi^{\prime}(0)\big\vert^{-2}.
\]
Letting $r\to 0^+$ and using \eqref{usc-1}, we obtain that
\[
k_D(z)< k_D(p)+\epsilon
\]
for all $z \in B(p,\delta)$ which proves the upper semicontinuity of $k_D$.
Next we assume that $D$ is taut and show that $k_D$ is lower semicontinuous. Let $p \in D$. If possible, assume that $k_D$ is not lower semicontinuous at $p$. Then $k_D(p)>0$ and there exist $\epsilon>0$ and a sequence $p^j \to p$ in $D$ such that
\[
k_D(p^j)< k_D(p)-\epsilon.
\]
Since $D$ is taut, there are Kobayashi extremal maps $g^j$ for $D$ at $p^j$. Again by tautness and the fact that $g^j(0)=p^j\to p \in D$, passing to a subsequence, $g^j$ converges uniformly on compact subsets of $\mathbb{B}^n$ to a map $g \in \mathcal{O}(\mathbb{B}^n,D)$. Therefore,
\[
k_D(p^j) = \big\vert \det (g^j)^{\prime}(0)\big\vert^{-2} \to \big\vert\det g^{\prime}(0)\big\vert^{-2}.
\]
But $g$ is a competitor for $k_D(p)$ and so $k_D(p) \leq \vert\det g^{\prime}(0)\vert^{-2}$. Thus we have
\[
k_D(p) \leq k_D(p)-\epsilon
\]
which is a contradiction. This proves the lower semicontinuity of $k_D$ and thus $k_D$ is continuous if $D$ is taut.
\end{proof}
\section{Proof of Theorem \ref{cvx}}
Let us recall the hypothesis of Theorem~\ref{cvx}: We are given a smoothly bounded convex finite type domain $D=\{\rho<0\}$ and a sequence $p^j \in D$ converging to $p^0 \in \partial D$. Without loss of generality assume that $p^0=0$. The numbers $\epsilon_j$ are defined by $\epsilon_j=-\rho(p^j)$. The maps $U^{p^j,\epsilon_j}\circ T^{p^j,\epsilon_j}$ satisfy
\[
U^{p^j,\epsilon_j}\circ T^{p^j,\epsilon_j}(p^j)=0.
\]
\subsection{Scaling}
Consider the dilations
\[
\Lambda^{p^j,\epsilon_j}(z)=\left(\frac{z_1}{\tau_1(p^j,\epsilon_j)},\ldots, \frac{z_n}{\tau_n(p^j,\epsilon_j)}\right).
\]
The scaling maps are the compositions $S^j=\Lambda^{p^j,\epsilon_j} \circ U^{p^j,\epsilon_j}\circ T^{p^j,\epsilon_j}$ and the scaled domains are $D^j=S^j(D)$. Note that $D^j$ is convex and $S^j(p^j)=0 \in D^j$, for each $j$. It was shown in \cite{G} that the defining functions $\rho^j=\frac{1}{\epsilon_j} \rho \circ (S^j)^{-1}$ for $D^j$, after possibly passing to a subsequence, converge uniformly on compact subsets of $\mathbb{C}^n$ to
\[
\rho_{\infty}(z)=-1 + \Re\sum_{\alpha=1}^n b_{\alpha} z_{\alpha} + P_{2m}({}'z),
\]
where $b_{\alpha}$ are complex numbers and $P_{2m}$ is a real convex polynomial of degree less than or equal to $2m$. This implies that after passing to a subsequence if necessary, the domains $D^j$ converge in the local Hausdorff sense to $D_{\infty}=\{\rho_{\infty}<0\}$.
\subsection{Stability of the volume elements}
\begin{lem}\label{normality-cvx}
Let $\phi^j \in \mathcal{O}(\mathbb{B}^n, D^j)$ and $\phi^j(0)=a^j \to a \in D_{\infty}$. Then $\phi^j$ admits a subsequence that converges uniformly on compact subsets of $\mathbb{B}^n$ to a map $\phi \in \mathcal{O}(\mathbb{B}^n, D_{\infty})$.
\end{lem}
\begin{proof}
By the arguments in the proof of Lemma~3.1 in \cite{G}, observe that the family $\phi^j$ is normal. Also, $\phi^j(0)=a^j \to a$. Hence, the sequence $\phi^j$ admits a subsequence, which we denote by $\phi^j$ itself, and which converges uniformly on compact subsets of $\mathbb{B}^n$ to a holomorphic map $\phi:\mathbb{B}^n\rightarrow \mathbb{C}^n$. We will now show that $\phi\in \mathcal{O}(\mathbb{B}^n, D_{\infty})$.
Let $0<r<1$. Then $\phi^j$ converges uniformly on $B(0,r)$ to $\phi$, and so the sets $\phi^j(B(0,r)) \subset K$ for some fixed compact set $K$ and for all large $j$. Since $\rho^j(\phi^j(t))<0$ for $t\in B(0,r)$ and for all $j$, we have $\rho_{\infty}(\phi(t)) \leq 0$, or equivalently $\phi(B(0,r))\subset \overline{D}_{\infty}$. Since $r\in(0,1)$ is arbitrary, we have $\phi(\mathbb{B}^n)\subset \overline{D}_\infty$. Since $\phi(0)=a \in D_\infty$, and $D_{\infty}$ possesses a local holomorphic peak function at every boundary point (see \cite{G2}), the maximum principle implies that $\phi(\mathbb{B}^n)\subset D_\infty$.
\end{proof}
\begin{prop}\label{stability-cvx}
For any $a\in D_\infty$,
\[
\lim_{j\rightarrow \infty} k_{D^j}(a)= k_{D_\infty}(a).
\]
Moreover, this convergence is uniform on compact subsets of $D_\infty$.
\end{prop}
\begin{proof}
Assume that $k_{D^j}$ does not converge to $k_{D_\infty}$ uniformly on some compact subset $S\subset D_\infty$. Then there exist $\epsilon_0>0$, a subsequence of $k_{D^j}$ which we denote by $k_{D^j}$ itself, and a sequence $a^j\in S$ satisfying
\[
\big\vert k_{D^j}(a^j)-k_{D_\infty}(a^j)\big\vert>\epsilon_0
\]
for all large $j$. Since $S$ is compact, after passing to a subsequence if necessary, $a^j \to a \in S$. Since $D_\infty$ is complete hyperbolic, and hence taut, $k_{D_\infty}$ is continuous by Proposition~\ref{cont-vol}. Hence for all large $j$, we have
\[
\big\vert k_{D_\infty}(a^j)-k_{D_\infty}(a) \big\vert \leq \frac{\epsilon_0}{2}.
\]
Combining the above two inequalities we have
\begin{equation}\label{stability-cvx-1}
\big\vert k_{D^j}(a^j)-k_{D_\infty}(a)\big\vert>\frac{\epsilon_0}{2}
\end{equation}
for all large $j$. We will deduce a contradiction in the following two steps:
\textit{Step 1}. $\limsup_{j\rightarrow \infty}k_{D^j}(a^j)\leq k_{D_\infty}(a)$. Since $D_{\infty}$ is taut, we have $0<k_{D_\infty}(a)<\infty$ and there exists a Kobayashi extremal map $\psi$ for $D_{\infty}$ at $a$. Fix $0<r<1$ and define the holomorphic maps $\psi^j:\mathbb{B}^n\rightarrow \mathbb{C}^n$ by
\[
\psi^j(t)=\psi\big((1-r)t\big)+(a^j-a).
\]
Since the image $\psi\big(B(0,1-r)\big)$ is compactly contained in $D_\infty$ and $a^j\rightarrow a$ as $j\rightarrow \infty$, it follows that $\psi^j \in \mathcal{O}(\mathbb{B}^n, D^j)$ for all large $j$. Also, $\psi^j(0)=\psi(0)+a^j-a= a^j$ and thus $\psi^j$ is a competitor for $k_{D^j}(a^j)$. Therefore,
\[
k_{D^j}(a^j)\leq \big\vert \det (\psi^j)'(0)\big\vert^{-2} =(1-r)^{-2n} \big\vert \det \psi'(0)\big\vert^{-2}.
\]
Letting $r \to 0^+$, we get
\[
\limsup_{j\rightarrow \infty}k_{D^j}(a^j)\leq k_{D_\infty}(a).
\]
\textit{Step 2}. $k_{D_\infty}(a)\leq \liminf_{j\rightarrow \infty}k_{D^j}(a^j)$. Fix $\epsilon>0$ arbitrarily small. Then there exist $\phi^j \in \mathcal{O}(\mathbb{B}^n, D^j)$ such that $\phi^j(0)=a^j$ and
\begin{equation}\label{stability-cvx-2}
\big\vert \det (\phi^j)'(0)\big\vert^{-2} < k_{D^j}(a^j)+\epsilon.
\end{equation}
By Lemma~\ref{normality-cvx}, $\phi^j$ admits a subsequence which we denote by $\phi^j$ itself, and which converges uniformly on compact subsets of $\mathbb{B}^n$ to a map $\phi\in \mathcal{O} (\mathbb{B}^n, D_{\infty})$. Then from \eqref{stability-cvx-2}
\[
\big\vert \det \phi'(0)\big\vert^{-2} \leq \liminf_{j \to \infty} k_{D^j}(a^j)+\epsilon.
\]
But $\phi$ is a competitor for $k_{D_\infty}(a)$ and $\epsilon$ is arbitrary. So we obtain
\[
k_{D_{\infty}}(a) \leq \liminf_{j \to \infty} k_{D^j}(a^j)
\]
as required.
By Step 1 and Step 2, we have $\lim_{j\rightarrow \infty}k_{D^j}(a^j)=k_{D_\infty}(a)$ which contradicts \eqref{stability-cvx-1} and thus the proposition is proved.
\end{proof}
We believe that the analogue of the above stability result holds for the Carath\'eodory volume element as well, but we do not have a proof. However, we do have the following:
\begin{prop}\label{c-stability-cvx}
For $a^j \in D^j$ converging to $a\in D_\infty$,
\[
\limsup_{j\rightarrow \infty} c_{D^j}(a^j) \leq c_{D_\infty}(a).
\]
\end{prop}
\begin{proof}
If possible, assume that this is not true. Then there exists a subsequence of $c_{D^j}(a^j)$ which we denote by $c_{D^j}(a^j)$ itself, and an $\epsilon>0$, such that
\[
c_{D^j}(a^j) > c_{D_{\infty}}(a)+\epsilon, \quad \text{for all } j \geq 1.
\]
Let $\psi^j$ be a Carath\'eodory extremal map for $D^j$ at $a^j$. Since the domains $D^j$ converge to $D_{\infty}$ in the local Hausdorff sense, every compact subset of $D_{\infty}$ is contained in $D^j$ for all large $j$, and the maps $\psi^j$ take values in the bounded set $\mathbb{B}^n$. Hence, by Montel's theorem, passing to a subsequence if necessary, $\psi^j$ converges uniformly on compact subsets of $D_{\infty}$ to a holomorphic map $\psi: D_{\infty} \to \overline{\mathbb{B}^n}$, and since $\psi(a)=0$, the maximum principle gives $\psi\in \mathcal{O}(D_{\infty}, \mathbb{B}^n)$. Now, the above inequality implies that this limit map satisfies
\[
\big\vert \det \psi'(a)\big\vert^2 \geq c_{D_{\infty}}(a)+\epsilon.
\]
On the other hand as $\psi$ is a candidate for $c_{D_{\infty}}(a)$, we also have
\[
c_{D_{\infty}}(a)\geq \big\vert \det \psi'(a)\big\vert^2.
\]
Combining the last two inequalities, we obtain
\[
c_{D_{\infty}}(a)\geq c_{D_{\infty}}(a)+\epsilon
\]
which is a contradiction.
\end{proof}
\subsection{Proof of Theorem~\ref{cvx}}
By the transformation rule
\[
k_D(p^j)=\big\vert \det (\Lambda^{p^j,\epsilon_j}U^{p^j,\epsilon_j}T^{p^j,\epsilon_j})'(p^j) \big\vert^2 k_{D^j}(0).
\]
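Here we have used the transformation rule for the Kobayashi volume element: for a biholomorphism $F$ from $D$ onto $F(D)$,
\[
k_D(p) = \big\vert \det F'(p)\big\vert^2\, k_{F(D)}\big(F(p)\big),
\]
applied to $F=S^j=\Lambda^{p^j,\epsilon_j}\circ U^{p^j,\epsilon_j}\circ T^{p^j,\epsilon_j}$, which satisfies $S^j(p^j)=0$ and $S^j(D)=D^j$.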
Since $\vert \det (\Lambda^{p^j,\epsilon_j})'(0)\vert^2=\prod_{\alpha=1}^n \tau_{\alpha}(p^j,\epsilon_j)^{-2}$ and $U^{p^j,\epsilon_j}\circ T^{p^j,\epsilon_j}$ has Jacobian determinant of modulus one, we get
\[
k_D(p^j) \prod_{\alpha=1}^n \tau_{\alpha}(p^j,\epsilon_j)^{2}= k_{D^j}(0).
\]
Recall that, after passing to a subsequence, the domains $D^j$ converge in the local Hausdorff sense to $D_{\infty}$; hence, in view of Proposition~\ref{stability-cvx}, the right-hand side converges to $k_{D_\infty}(0)$ along this subsequence. This completes the proof of the theorem.
\section{Proof of Theorem \ref{Lcr1}}
\subsection{Change of coordinates}
Let $D=\{\rho<0\}$ be a smoothly bounded Levi corank one domain and $p^0 \in \partial D$. We may assume that the Levi form of $\rho$ at $p^0$ has exactly $n-2$ positive eigenvalues. We recall the definition of the change of coordinates $\Phi^p$ that transforms $\rho$ into the normal form \eqref{nrmlfrm}. The maps $\Phi^p$ are actually holomorphic polynomial automorphisms defined as $\Phi^p=\phi_5 \circ \phi_4 \circ \phi_3 \circ \phi_2\circ \phi_1$ where the $\phi_i$ are described below. Since the volume elements are invariant under unitary rotations, we assume without loss of generality that $\partial \rho/\partial z_n(p^0) \neq 0$. Then there is a neighbourhood $U$ of $p^0$ such that $(\partial \rho/\partial z_n)(p)\neq 0$ for all $p \in U$. Thus
\[
\nu=\left(\frac{\partial \rho}{\partial z_1}, \ldots, \frac{\partial \rho}{\partial z_n}\right)
\]
is a nonvanishing vector field on $U$. Note that the vector fields
\[
L_n=\frac{\partial}{\partial z_n}, \quad L_{\alpha}=\frac{\partial}{\partial z_{\alpha}}-b_{\alpha}\frac{\partial}{\partial z_n}, \quad 1 \leq \alpha \leq n-1,
\]
where $b_{\alpha}=\frac{\partial \rho}{\partial z_{\alpha}}/\frac{\partial \rho}{\partial z_n}$, form a basis of $T^{1,0}(U)$. Moreover, for $1 \leq \alpha \leq n-1$, $L_{\alpha}\rho\equiv 0$ and so $L_{\alpha}$ is a complex tangent vector field to $\partial D \cap U$. Shrinking $U$ if necessary, we also assume that
\[
\begin{bmatrix} \partial \overline \partial \rho (L_{\alpha}, \overline L_{\beta})\end{bmatrix}_{2 \leq \alpha,\beta \leq n-1}
\]
has all its eigenvalues positive at each $p \in U$.
\begin{enumerate}[(i)]
\item The map $\phi_1$ is defined by
\[
\phi_1(z)=\big(z_1-p_1, \ldots, z_{n-1}-p_{n-1}, \langle z-p, \nu(p)\rangle\big)
\]
and it normalises the linear part of the Taylor series expansion of $\rho$ at $p$. In the new coordinates which we denote by $z$ itself, $\rho$ takes the form
\[
\rho\circ \phi_1^{-1}(z)=\rho(p)+ 2 \Re z_n+ O(\vert z\vert^2).
\]
\item Now
\[
A=\begin{bmatrix}\frac{\partial^2 \rho}{\partial z_{\alpha}\partial \overline z_{\beta}}(p)\end{bmatrix}_{2 \leq \alpha,\beta\leq n-1}
\]
is a Hermitian matrix and there is a unitary matrix $P=\begin{bmatrix}P_{jk}\end{bmatrix}_{2 \leq j,k\leq n-1}$ such that $P^*AP=D$, where $D$ is a diagonal matrix whose entries are the positive eigenvalues of $A$. Writing $\tilde z=(z_2, \ldots, z_{n-1})$, the map $w=\phi_2(z)$ is defined by
\begin{align*}
w_1=z_1, \quad w_n=z_n, \quad \tilde w = P^T\tilde z.
\end{align*}
Then
\[
\sum_{\alpha,\beta=2}^{n-1} \frac{\partial^2 \rho}{\partial z_{\alpha}\partial \overline z_{\beta}}(p) z_{\alpha} \overline z_{\beta} = \tilde z^T A \overline{\tilde z}=(\overline P \tilde w)^T A \overline{(\overline P \tilde w)}= \tilde w ^T D \overline{\tilde w} = \sum_{\alpha=2}^{n-1} \lambda_{\alpha} \vert w_{\alpha} \vert^2,
\]
where $\lambda_{\alpha}>0$ is the $\alpha$-th entry of $D$. Thus, denoting the new coordinates $w$ by $z$ again,
\[
\rho\circ \phi_1^{-1}\circ\phi_2^{-1}(z)=\rho(p)+2\Re z_n + \sum_{\alpha=2}^{n-1} \lambda_{\alpha}\vert z_{\alpha}\vert^2+O(\vert z\vert^2)
\]
where $O(\vert z\vert^2)$ consists of only the non-Hermitian quadratic terms and all other higher order terms.
\item The map $w=\phi_3(z)$ is defined by $w_1=z_1, w_n=z_n$, and $w_j=\lambda_j^{1/2}z_j$ for $2 \leq j \leq n-1$. In the new coordinates, still denoted by $z$,
\begin{multline}\label{rho-exp3}
\rho \circ \phi_1^{-1}\circ\phi_2^{-1} \circ \phi_3^{-1}(z)= \rho(p)+2\Re z_n +\sum_{\alpha=2}^{n-1}\sum_{j=1}^m 2 \Re\big((a^{\alpha}_jz_1^j+b^{\alpha}_j\overline z_1^j)z_{\alpha}\big)\\
+ 2 \Re \sum_{\alpha=2}^{n-1} c_{\alpha}z_{\alpha}^2
+ \sum_{2\leq j+k \leq 2m}a_{jk}z_1^j\overline z_1^k
+\sum_{\alpha=2}^{n-1} \vert z_{\alpha}\vert^2
+\sum_{\alpha=2}^{n-1} \sum_{\substack{j+k\leq m\\j,k>0}} 2 \Re \big(b^{\alpha}_{jk}z_1^j \overline z_1^k z_{\alpha}\big)\\
+O\big(\vert z_n\vert \vert z \vert+\vert z_{*}\vert^2 \vert z \vert+\vert z_{*}\vert \vert z_1\vert^{m+1}+\vert z_1\vert^{2m+1}\big)
\end{multline}
where $z_*=(0,z_2,\ldots, z_{n-1},0)$.
\item Next, the pure terms in \eqref{rho-exp3}, i.e., the $z_{\alpha}^2$, $z_1^k$, $\overline z_1^k$ terms, as well as the $z_1^kz_{\alpha}$, $\overline z_1^k\overline z_{\alpha}$ terms, are removed by absorbing them into the normal variable $z_n$ by means of the change of coordinates $t=\phi_4(z)$ which is defined by
\begin{align*}
z_j & =t_j, \quad 1 \leq j \leq n-1,\\
z_n & =t_n -\hat{Q_1}(t_1, \ldots, t_{n-1}),
\end{align*}
where
\[
\hat{Q_1}(t_1, \ldots, t_{n-1}) = \sum_{k=2}^{2m}a_{k0}t_1^k-\sum_{\alpha=2}^{n-1}\sum_{k=1}^{m}a_k^{\alpha}t_{\alpha}t_1^k-\sum_{\alpha=2}^{n-1}c_{\alpha}t_{\alpha}^2.
\]
\item In the final step, the terms of the form $\overline t_1^j t_{\alpha}$ are removed by applying the transformation $\zeta=\phi_5(t)$ given by
\begin{align*}
t_1 & =\zeta_1, t_n=\zeta_n,\\
t_{\alpha} & = \zeta_{\alpha}-Q_2^{\alpha}(\zeta_1), \quad 2 \leq \alpha \leq n-1,
\end{align*}
where $Q_2^{\alpha}(\zeta_1)=\sum_{k=1}^m \overline b_k^{\alpha}\zeta_1^k$. In these coordinates, $\rho$ takes the normal form \eqref{nrmlfrm}.
\end{enumerate}
It is evident from the definition of $\Phi^p$ that $\Phi^p(p)=0$,
\[
\Phi^{p}(p_1, \ldots, p_{n-1}, p_n - \epsilon) = \left(0, \ldots, 0, -\epsilon \;\frac{\partial \rho}{\partial \overline z_n}(p)\right),
\]
and
\begin{equation}\label{der-phi-p}
\det (\Phi^p)^{\prime}(p)=\frac{\partial \rho}{\partial \overline z_n}(p) (\lambda_2 \cdots \lambda_{n-1})^{1/2},
\end{equation}
where $\lambda_2, \ldots, \lambda_{n-1}$ are the positive eigenvalues of
\[
\begin{bmatrix} \frac{\partial^2 \rho}{\partial z_{\alpha} \partial \overline z_{\beta}}(p) \end{bmatrix}_{2 \leq \alpha,\beta \leq n-1}.
\]
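Formula \eqref{der-phi-p} can be verified by chasing the Jacobians through the five steps: $\phi_1$ and $\phi_3$ have constant Jacobian determinants $\partial \rho/\partial \overline z_n(p)$ and $(\lambda_2 \cdots \lambda_{n-1})^{1/2}$ respectively, $\phi_4$ and $\phi_5$ are triangular with Jacobian determinant $1$, and $\phi_2$ contributes the unimodular factor $\det P^T$, so that
\[
\det (\Phi^p)'(p) = \det \phi_5' \cdot \det \phi_4' \cdot \det \phi_3' \cdot \det \phi_2' \cdot \det \phi_1'(p) = \det P^T \, \frac{\partial \rho}{\partial \overline z_n}(p)\,(\lambda_2 \cdots \lambda_{n-1})^{1/2}.
\]
Since the unitary matrix $P$ may be normalised so that $\det P=1$, this yields \eqref{der-phi-p}; in any case, only the modulus of this determinant is used in the sequel.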
\subsection{Scaling}
Suppose $p^0=0$ and $\rho$ is in the normal form \eqref{nrmlfrm} for $p=p^0$; in particular, $\nu(p^0) = ('0,1)$. Let $p^j \in D$ be a sequence converging to $p^0$. The points $\tilde p^j \in \partial D$ are chosen so that $\tilde p^j = p^j + ('0, \delta_j)$ for some
$ \delta_j > 0 $. Then $ \delta_j \approx \delta_D(p^j) $, where $\delta_D(p)=d(p,\partial D)$ is the distance of $p$ to the boundary of $D$. Here and henceforth by the notation $a \approx b$ for positive functions $a,b$ depending on several parameters, we mean that the ratio $a/b$ is bounded above and below by some uniform positive constants independent of the parameters. The polynomial automorphisms $ \Phi^{\tilde p^j} $ of $ \mathbb{C}^n $ as described above satisfy $ \Phi^{\tilde p^j}(\tilde p^j) = ('0, 0) $ and
\[
\Phi^{\tilde p^j} (p^j) =
\big('0, - \delta_j d_0(\tilde p^j) \big) ,
\]
where $ d_0(\tilde p^j ) = \partial \rho/\partial \overline{z}_n (\tilde p^j) \rightarrow 1 $ as $ j \rightarrow \infty $.
Define a
dilation of coordinates by
\[
\Delta^{\tilde p^j, \delta_j} (z_1, z_2, \ldots, z_n ) =
\left( \frac{z_1}{\tau(\tilde p^j, \delta_j)}, \frac{z_2}{\delta_j^{1/2}}, \ldots, \frac{z_{n-1}}{\delta_j^{1/2}}, \frac{z_n} {\delta_j} \right).
\]
The scaling maps are $S^j= \Delta^{\tilde p^j, \delta_j} \circ \Phi^{\tilde p^j}$ and the scaled domains are $D^j=S^j(D)$. Note that $D^j$ contains $S^j(p^j) = \left('0, - d_0(\tilde p^j) \right)$ which we will denote by $b^j$ and which converges to $b=('0,-1)$. From \eqref{nrmlfrm}, the defining function $\rho^j=\frac{1}{\delta_j} \rho \circ (S^j)^{-1}$ for $D^j$ has the form
\begin{align*}
\rho^j(z)=2\Re z_n+P^j(z_1, \overline z_1)+\sum_{\alpha=2}^{n-1} \vert z_{\alpha}\vert^2+\sum_{\alpha=2}^{n-1} \Re\big(Q^{j}_{\alpha}(z_1, \overline z_1)z_{\alpha}\big)+O(\tau_1^{j}),
\end{align*}
where $\tau_1^j=\tau_1(\tilde p^j, \delta_j)$,
\[
P^j(z_1, \overline z_1)=\sum_{\substack{\mu + \nu \le 2m\\
\mu, \nu> 0}}a_{\mu\nu}(\tilde p^j)\delta_j^{-1}(\tau_1^j)^{\mu+\nu}z_1^{\mu}\overline z_1^{\nu},
\]
and
\[
Q^j_{\alpha}(z_1, \overline z_1)=\sum_{\substack{\mu + \nu \le m\\
\mu, \nu > 0}}b^{\alpha}_{\mu\nu}(\tilde p^j)\delta_j^{-1/2}(\tau_1^j)^{\mu+\nu}z_1^{\mu}\overline z_1^{\nu}.
\]
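Here the relevant point, implicit in \eqref{defn-A-B} and the definition of $\tau_1$, is that
\[
\big\vert a_{\mu\nu}(\tilde p^j)\big\vert (\tau_1^j)^{\mu+\nu} \leq \delta_j \quad \text{and} \quad \big\vert b^{\alpha}_{\mu\nu}(\tilde p^j)\big\vert (\tau_1^j)^{\mu+\nu} \leq \delta_j^{1/2}
\]
for the indices occurring above.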
By \eqref{defn-A-B} and the definition of $\tau_1$, the coefficients of $P^j$ and $Q^j_{\alpha}$ are bounded by $1$. By Lemma~3.7 in \cite{TT}, it follows that the defining functions $\rho^j$, after possibly passing to a subsequence, converge together with all derivatives uniformly on compact subsets of $\mathbb{C}^n$ to
\[
\rho_{\infty}(z)= 2 \Re z_n + P_{2m}(z_1, \overline z_1) + \sum_{\alpha=2}^{n-1} \vert z_{\alpha} \vert^2,
\]
where $P_{2m}(z_1, \overline z_1)$ is a polynomial of degree at most $2m$ without harmonic terms. This implies that the corresponding domains $D^j$ converge in the local Hausdorff sense to $D_{\infty} = \{\rho_{\infty}<0\}$. Note that since $D_{\infty}$ is a smooth limit of pseudoconvex domains, it is pseudoconvex and hence $P_{2m}$ is subharmonic.
\subsection{Stability of the volume elements}
\begin{lem}\label{normality2}
Let $\phi^j \in \mathcal{O}(\mathbb{B}^n, D^j)$ and $\phi^j(0)=a^j \to a \in D_{\infty}$. Then $\phi^j$ admits a subsequence that converges uniformly on compact subsets of $\mathbb{B}^n$ to a map $\phi\in \mathcal{O}(\mathbb{B}^n, D_{\infty})$.
\end{lem}
\begin{proof}
We first claim that the sequence $q^j:=(S^j)^{-1}(a^j) \in D$ converges to $p^0 \in \partial D$, where $p^0=0$ is the base point for scaling. Choose a relatively compact neighbourhood $K$ of $a$ in $D_{\infty}$. Since $a^j \to a \in D_{\infty}$, $a^j \in K$ for all large $j$. Now choose a constant $C>1$ large enough, so that $K$ is compactly contained in the polydisc
\[
\Delta(0, C^{1/2m}) \times \Delta(0,C^{1/2}) \times \cdots \times \Delta(0, C^{1/2})\times \Delta(0, C).
\]
From \eqref{tau-defn2}, we have $\tau_1(\tilde p^j, C\delta_j) \geq C^{1/2m} \tau_1(\tilde p^j, \delta_j)$. Moreover, by definition,
\[
\tau_{\alpha}(\tilde p^j, C\delta_j)=(C\delta_j)^{1/2}=C^{1/2} \tau_{\alpha}(\tilde p^j, \delta_j)
\]
for $\alpha=2,\ldots, n-1$, and
\[
\tau_n(\tilde p^j, C\delta_j)=C\delta_j=C\tau_n(\tilde p^j, \delta_j).
\]
As a consequence, the above polydisc is contained in
\[
\prod_{\alpha=1}^n \Delta\left(0, \frac{\tau_{\alpha}(\tilde p^j, C\delta_j)}{\tau_{\alpha}(\tilde p^j, \delta_j)}\right).
\]
The pull back of this polydisc by $S^j=\Delta^{\tilde p^j,\delta_j}\circ \Phi^{\tilde p^j}$ is precisely $Q(\tilde p^j, C\delta_j)$. Thus
\[
q^j \in Q(\tilde p^j, C\delta_j)
\]
for all large $j$. Since $\tilde p^j \to p^0$ and $\delta_j \to 0$ as $j \to \infty$, it follows that $q^j \to p^0$ establishing our claim.
Now we prove that the family $\phi^j$ is normal. Consider the sequence of maps
\[
f^j=(S^j)^{-1}\circ \phi^j : \mathbb{B}^n \to D.
\]
Note that $f^j(0)=q^j \to p^0$. By arguments similar to those in the proof of Theorem~3.11 in \cite{TT} (see also Proposition~1 of \cite{Be}), for every $0<r<1$, there exists a constant $C_r$ depending only on $r$ such that
\[
f^j\big(B(0,r)\big) \subset Q(\tilde p^j, C_r\delta_j)
\]
for all large $j$. This implies that
\[
\phi^j\big(B(0,r)\big) \subset \prod_{\alpha=1}^n \Delta\left(0, \frac{\tau_{\alpha}(\tilde p^j, C_r\delta_j)}{\tau_{\alpha}(\tilde p^j, \delta_j)}\right)
\]
for all large $j$. Again from \eqref{tau-defn2}, $\tau_1(\tilde p^j, C_r\delta_j) \leq C_r^{1/2} \tau_1(\tilde p^j, \delta_j)$. Together with this, using the definition of $\tau_{\alpha}$ for $\alpha=2, \ldots, n$, we see that the above polydisc is contained in
\[
\Delta\big(0, C_r^{1/2}\big) \times \cdots \times \Delta\big(0, C_r^{1/2}\big) \times \Delta\big(0, C_r\big).
\]
Using a diagonal argument, it now follows that the family $\phi^j$ is normal.
Now, since $\phi^j(0)=a^j \to a \in D_{\infty}$, $\phi^j$ admits a subsequence which we denote by $\phi^j$ itself and which converges uniformly on compact subsets of $\mathbb{B}^n$ to a holomorphic mapping $\phi:\mathbb{B}^n\rightarrow \mathbb{C}^n$. Since $D_{\infty}$ possesses a local holomorphic peak function at every boundary point \cite[Proposition~4.5 and the remark on page 605]{Yu2}, an argument similar to that in the proof of Lemma~\ref{normality-cvx} now implies that $\phi(\mathbb{B}^n)\subset D_\infty$.
\end{proof}
With this lemma, the proof of the following proposition is exactly the same as that of Proposition~\ref{stability-cvx} and so we do not repeat the arguments.
\begin{prop}\label{stability-levicorank1}
For any $a\in D_\infty$,
\[
\lim_{j\rightarrow \infty} k_{D^j}(a)= k_{D_\infty}(a).
\]
Moreover, this convergence is uniform on compact subsets of $D_\infty$.
\end{prop}
Similarly, the proof of Proposition~\ref{c-stability-cvx} also gives
\begin{prop}\label{c-stability-levicorank1}
For $a^j \in D^j$ converging to $a\in D_\infty$,
\[
\limsup_{j\rightarrow \infty} c_{D^j}(a^j) \leq c_{D_\infty}(a).
\]
\end{prop}
\subsection{Proof of Theorem~\ref{Lcr1}}
Recall that we are in the case when $p^0=0$ and $\rho$ is in the normal form for $p=p^0$. Therefore, $\Phi^{p^0}=I$, the identity map. Observe that by the transformation rule
\[
k_D(p^j)=\big\vert \det\big(S^j\big)'(p^j)\big\vert^2 k_{D^j}(b^j),
\]
where $S^j= \Delta^{\tilde p^j, \delta_j} \circ \Phi^{\tilde p^j}$ are the scaling maps. Since
\[
\Big\vert \det (\Delta^{\tilde p^j,\delta_j})' \big(\Phi^{\tilde p^j}(p^j)\big)\Big\vert^2=\prod_{\alpha=1}^n \tau_{\alpha}(\tilde p^j,\delta_j)^{-2},
\]
we get
\begin{equation}\label{v-D-Dj}
k_{D}(p^j)\prod_{\alpha=1}^n \tau_{\alpha}(\tilde p^j,\delta_j)^{2}=\big\vert \det (\Phi^{\tilde p^j})'(p^j)\big\vert^2 k_{D^j}(b^j).
\end{equation}
Now $\vert \det (\Phi^{\tilde p^j})'(p^j)\vert \to \vert \det \big(\Phi^{p^0}\big)^{\prime}(p^0)\vert = 1$, and recall that after possibly passing to a subsequence, the domains $D^j$ converge in the local Hausdorff sense to $D_{\infty}$. Hence by Proposition~\ref{stability-levicorank1}, the right-hand side of \eqref{v-D-Dj} converges to $k_{D_{\infty}}(b)$ along this subsequence, proving the theorem in the current situation.
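Let us indicate why $\vert \det (\Phi^{\tilde p^j})'(p^j)\vert \to 1$. By \eqref{der-phi-p},
\[
\big\vert \det (\Phi^{\tilde p^j})'(\tilde p^j)\big\vert = \left\vert \frac{\partial \rho}{\partial \overline z_n}(\tilde p^j)\right\vert \big(\lambda_2(\tilde p^j) \cdots \lambda_{n-1}(\tilde p^j)\big)^{1/2},
\]
and since $\rho$ is in the normal form at $p^0=0$, we have $\partial \rho/\partial \overline z_n(p^0)=1$ and $\lambda_{\alpha}(p^0)=1$ for $2 \leq \alpha \leq n-1$, so the right-hand side tends to $1$. Finally, $\vert p^j - \tilde p^j\vert = \delta_j \to 0$ while the coefficients of the polynomial maps $\Phi^{\tilde p^j}$ remain bounded, so replacing $\tilde p^j$ by $p^j$ does not affect the limit.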
For the general case, assume that $(\partial \rho/\partial z_n)(p^0)\neq 0$ and make an initial change of coordinates $w=T(z)=\Phi^{p^0}(z)$. Let $\Omega =T(D)$, $q^0=T(p^0)=0$, and $q^j=T(p^j)$. Then
\begin{equation}\label{vD-vOm}
k_{D}(p^j)=\big\vert \det T'(p^j)\big\vert^2 k_{\Omega}(q^j).
\end{equation}
To emphasise the dependence of $\Phi^p$, $\tau$, and $\tau_{\alpha}$ on $D=\{\rho<0\}$, we will write them now as $\Phi^p_\rho$, $\tau_\rho$ and $\tau_{\alpha,\rho}$ respectively.
Note that the defining function $r=\rho\circ T^{-1}$ for $\Omega$ is in the normal form at $q^0=0$. Choose $\eta_j$ such that $\tilde q^j=(q^j_1, \ldots, q^j_{n-1},q^j_n+\eta_j) \in \partial \Omega$. Then by the previous case
\begin{equation}\label{v-Om}
k_{\Omega} (q^j)\prod_{\alpha=1}^n \tau_{\alpha,r}(\tilde q^j, \eta_j)^2 \to k_{D_{\infty}}(b)
\end{equation}
up to a subsequence. Since $\delta_{\Omega} \circ T$ is a defining function for $D$, we have $\delta_{\Omega}\circ T \approx \delta_D$ and hence $\delta_j \approx \delta_D(p^j) \approx \delta_{\Omega}(q^j) \approx \eta_j$. Also, by (3.3) of \cite{TT},
\[
\rho \circ (\Phi_\rho^{p^j})^{-1} = r\circ (\Phi_r^{q^j})^{-1}.
\]
It follows from (2.9) of \cite{Cho2} that $\tau_{\rho}(\tilde p^j, \delta_j) \approx \tau_r(\tilde q^j, \eta_j)$. Hence, after passing to a subsequence if necessary,
\begin{equation}\label{tauD-tauOm}
\prod_{\alpha=1}^n \frac{\tau_{\alpha,\rho}(\tilde p^j,\delta_j)}{\tau_{\alpha,r}(\tilde q^j,\eta_j)} \to c_0
\end{equation}
for some $c_0>0$ that depends only on $\rho$. Also,
\begin{equation}\label{limit-der-T}
\big\vert \det T'(p^j)\big\vert \to \big\vert \det T'(p^0)\big\vert = \left \vert \frac{\partial \rho}{\partial \overline z_n}(p^0)\right\vert\prod_{\alpha=2}^{n-1} \lambda_{\alpha}^{1/2},
\end{equation}
by \eqref{der-phi-p}, where $\lambda_{\alpha}$'s are the positive eigenvalues of
\[
\begin{bmatrix}
\frac{\partial^2 \rho}{\partial z_{\alpha}\partial \overline z_{\beta}}(p^0)
\end{bmatrix}_{2 \leq \alpha,\beta\leq n-1}.
\]
It follows from \cref{vD-vOm,v-Om,tauD-tauOm,limit-der-T} that
\[
k_{D}(p^j) \prod_{\alpha=1}^n \tau_{\alpha,\rho}(\tilde p^j,\delta_j)^2 \to c_0^2\left \vert \frac{ \partial \rho}{\partial \overline z_n}(p^0)\right\vert^2 \left(\prod_{\alpha=2}^{n-1} \lambda_{\alpha} \right)\, k_{D_{\infty}}(b)
\]
up to a subsequence. This completes the proof of the theorem.
\section{Proof of Theorem~1.3}
A convex domain $D \subset \mathbb{C}^n$ is called $\mathbb{C}$-properly convex if it does not contain any affine complex lines. Let $\mathbb{X}_n$ denote the set of all $\mathbb{C}$-properly convex domains in $\mathbb{C}^n$ endowed with the local Hausdorff topology. Consider the space
\[
\mathbb{X}_{n,0}=\big\{(D, p) : D \in \mathbb{X}_n, p \in D\big\} \subset \mathbb{X}_n \times \mathbb{C}^n
\]
endowed with the subspace topology. It was shown in \cite{Barth} that a convex domain in $\mathbb{C}^n$ is complete hyperbolic if and only if it is $\mathbb{C}$-properly convex. In particular, $\mathbb{C}$-properly convex domains are taut and hence the quotient invariant on such domains is well-defined. Thus we have a function $q: \mathbb{X}_{n,0} \to \mathbb{R}$ defined by
\[
q(D, p)=q_D(p).
\]
Recall that a function $f: \mathbb{X}_{n,0} \to \mathbb{R}$ is called intrinsic (see \cite{Zim16}) if $f(D,p)=f(D',p')$ whenever there exists a biholomorphism $F: D \to D'$ with $F(p)=p'$. Thus the function $q$ is intrinsic. The following theorem was proved by Zimmer:
\begin{thm}[\cite{Zim19}]\label{Zim-norm}
Let $f: \mathbb{X}_{n,0} \to \mathbb{R}$ be an upper semicontinuous intrinsic function with the following property: if $D\in \mathbb{X}_{n}$ and $f(D,p) \geq f(\mathbb{B}^n, 0)$ for all $p \in D$, then $D$ is biholomorphic to $\mathbb{B}^n$. Then for any $\alpha>0$, there exists some $\epsilon=\epsilon(n,f, \alpha)>0$ such that: if $D \subset \mathbb{C}^n$ is a bounded convex domain with $C^{2, \alpha}$ boundary and
\[
f(D,p) \geq f(\mathbb{B}^n, 0) -\epsilon
\]
outside some compact subset of $D$, then $D$ is strongly pseudoconvex.
\end{thm}
Observe that if $D \subset \mathbb{C}^n$ is any domain and if $q_D(p) \geq 1$ for some point $p \in D$, then $q_D(p)=1$ and so $D$ must be biholomorphic to $\mathbb{B}^n$. Thus, to prove Theorem~1.3, we only need to show that the function $q: \mathbb{X}_{n,0} \to \mathbb{R}$ is upper semicontinuous.
\begin{lem}\label{normality-hc}
Suppose $(D^j,p^j) \to (D_{\infty},p)$ in $\mathbb{X}_{n,0}$. If $f^j \in \mathcal{O}(\mathbb{B}^n, D^j)$ with $f^j(0)=p^j$, then, passing to a subsequence, $f^j$ converges uniformly on compact subsets of $\mathbb{B}^n$ to a map $f \in \mathcal{O}(\mathbb{B}^n, D_{\infty})$.
\end{lem}
This is precisely Lemma~3.1 of \cite{Zim16} with $\Delta$ replaced by $\mathbb{B}^n$, and since the proof is exactly the same we do not repeat it.
\begin{prop}\label{usc}
The function $q: \mathbb{X}_{n,0} \to \mathbb{R}$ is upper semicontinuous.
\end{prop}
\begin{proof}
The proof is very similar to that of Proposition~\ref{stability-cvx} and so we only outline it. Let $(D^j, a^j) \to (D_{\infty},a)$. Step 1 of the proof of Proposition~\ref{stability-cvx} holds without any change, and in view of Lemma~\ref{normality-hc}, Step 2 also holds. This implies that
\[
\lim_{j \to \infty}k_{D^j}(a^j)=k_{D_{\infty}}(a).
\]
The proof of Proposition~\ref{c-stability-cvx} goes through without any change and thus we have
\[
\limsup_{j \to \infty} c_{D^j}(a^j) \leq c_{D_{\infty}}(a).
\]
Since $k_{D^j}(a^j) \to k_{D_{\infty}}(a)>0$, it follows that
\[
\limsup_{j \to \infty} q_{D^j}(a^j) \leq q_{D_\infty}(a)
\]
establishing the upper semicontinuity of $q$.
\end{proof}
Thus we have shown that $q$ satisfies the hypothesis of Theorem~\ref{Zim-norm} and this completes the proof of Theorem~1.3.
\begin{bibdiv}
\begin{biblist}
\bib{BMV}{article}{
author={Balakumar, G. P.},
author={Mahajan, Prachi},
author={Verma, Kaushal},
title={Bounds for invariant distances on pseudoconvex Levi corank one
domains and applications},
journal={Ann. Fac. Sci. Toulouse Math. (6)},
volume={24},
date={2015},
number={2},
pages={281--388},
issn={0240-2963},
}
\bib{Barth}{article}{
author={Barth, Theodore J.},
title={Convex domains and Kobayashi hyperbolicity},
journal={Proc. Amer. Math. Soc.},
volume={79},
date={1980},
number={4},
pages={556--558},
}
\bib{BP}{article}{
author={Bedford, Eric},
author={Pinchuk, Sergey},
title={Domains in ${\bf C}^{n+1}$ with noncompact automorphism group},
journal={J. Geom. Anal.},
volume={1},
date={1991},
number={3},
pages={165--191},
}
\bib{Be}{article}{
author={Berteloot, F.},
author={C\oe ur\'{e}, G.},
title={Domaines de ${\bf C}^2$, pseudoconvexes et de type fini ayant un
groupe non compact d'automorphismes},
language={French, with English summary},
journal={Ann. Inst. Fourier (Grenoble)},
volume={41},
date={1991},
number={1},
pages={77--86},
}
\bib{Car}{article}{
author={Carath\'{e}odory, C.},
title={\"{U}ber die Abbildungen, die durch Systeme von analytischen
Funktionen von mehreren Ver\"{a}nderlichen erzeugt werden},
language={German},
journal={Math. Z.},
volume={34},
date={1932},
number={1},
pages={758--792},
issn={0025-5874},
}
\bib{Catlin}{article}{
author={Catlin, David W.},
title={Estimates of invariant metrics on pseudoconvex domains of
dimension two},
journal={Math. Z.},
volume={200},
date={1989},
number={3},
pages={429--466},
issn={0025-5874},
}
\bib{Cheung-Wong}{article}{
author={Cheung, Wing Sum},
author={Wong, B.},
title={An integral inequality of an intrinsic measure on bounded domains
in ${\bf C}^n$},
journal={Rocky Mountain J. Math.},
volume={22},
date={1992},
number={3},
pages={825--836},
issn={0035-7596},
}
\bib{Cho2}{article}{
author={Cho, Sanghyun},
title={Boundary behavior of the {B}ergman kernel function on some
pseudoconvex domains in {${\bf C}^n$}},
journal={Trans. Amer. Math. Soc.},
volume={345},
date={1994},
number={2},
pages={803--817},
issn={0002-9947},
}
\bib{Dek}{article}{
author={Dektyarev, I. M.},
title={Criterion for the equivalence of hyperbolic manifolds},
language={Russian},
journal={Funktsional. Anal. i Prilozhen.},
volume={15},
date={1981},
number={4},
pages={73--74},
issn={0374-1990},
}
\bib{G}{article}{
author={Gaussier, Herv\'{e}},
title={Characterization of convex domains with noncompact automorphism
group},
journal={Michigan Math. J.},
volume={44},
date={1997},
number={2},
pages={375--388},
issn={0026-2285},
}
\bib{G2}{article}{
author={Gaussier, Herv\'{e}},
title={Tautness and complete hyperbolicity of domains in ${\bf C}^n$},
journal={Proc. Amer. Math. Soc.},
volume={127},
date={1999},
number={1},
pages={105--116},
issn={0002-9939},
}
\bib{Gr-Wu}{article}{
author={Graham, Ian},
author={Wu, H.},
title={Characterizations of the unit ball $B^n$ in complex Euclidean
space},
journal={Math. Z.},
volume={189},
date={1985},
number={4},
pages={449--456},
issn={0025-5874},
}
\bib{Gr-Kr1}{article}{
author={Greene, Robert E.},
author={Krantz, Steven G.},
title={Characterizations of certain weakly pseudoconvex domains with
noncompact automorphism groups},
conference={
title={Complex analysis},
address={University Park, Pa.},
date={1986},
},
book={
series={Lecture Notes in Math.},
volume={1268},
publisher={Springer, Berlin},
},
date={1987},
pages={121--157},
}
\bib{Gr-Kr2}{article}{
author={Greene, Robert E.},
author={Krantz, Steven G.},
title={Biholomorphic self-maps of domains},
conference={
title={Complex analysis, II},
address={College Park, Md.},
date={1985--86},
},
book={
series={Lecture Notes in Math.},
volume={1276},
publisher={Springer, Berlin},
},
date={1987},
pages={136--207},
}
\bib{Kr-vol}{article}{
author={Krantz, Steven G.},
title={The Kobayashi metric, extremal discs, and biholomorphic mappings},
journal={Complex Var. Elliptic Equ.},
volume={57},
date={2012},
number={1},
pages={1--14},
issn={1747-6933},
}
\bib{Ma}{article}{
author={Ma, Daowei},
title={Boundary behavior of invariant metrics and volume forms on
strongly pseudoconvex domains},
journal={Duke Math. J.},
volume={63},
date={1991},
number={3},
pages={673--697},
issn={0012-7094},
}
\bib{Ma-ext}{article}{
author={Ma, Daowei},
title={Carath\'{e}odory extremal maps of ellipsoids},
journal={J. Math. Soc. Japan},
volume={49},
date={1997},
number={4},
pages={723--739},
issn={0025-5645},
}
\bib{MV0}{article}{
AUTHOR = {Mahajan, Prachi},
AUTHOR={Verma, Kaushal},
TITLE = {Some aspects of the {K}obayashi and {C}arath\'{e}odory metrics on
pseudoconvex domains},
JOURNAL = {J. Geom. Anal.},
FJOURNAL = {Journal of Geometric Analysis},
VOLUME = {22},
YEAR = {2012},
NUMBER = {2},
PAGES = {491--560},
}
\bib{MV}{article}{
author={Mahajan, Prachi},
author={Verma, Kaushal},
title={A comparison of two biholomorphic invariants},
journal={Internat. J. Math.},
volume={30},
date={2019},
number={1},
pages={1950012, 16},
}
\bib{McN-adv}{article}{
author={McNeal, Jeffery D.},
title={Estimates on the Bergman kernels of convex domains},
journal={Adv. Math.},
volume={109},
date={1994},
number={1},
pages={108--139},
issn={0001-8708},
}
\bib{Nik-sq}{article}{
author={Nikolov, Nikolai},
title={Behavior of the squeezing function near h-extendible boundary
points},
journal={Proc. Amer. Math. Soc.},
volume={146},
date={2018},
number={8},
pages={3455--3457},
issn={0002-9939},
}
\bib{Nik-Pas}{article}{
author={Nikolov, Nikolai},
author={Thomas, Pascal J.},
title={Comparison of the Bergman kernel and the Carath\'eodory-Eisenman volume},
journal={To appear in Proc. Amer. Math. Soc.},
}
\bib{NV}{article}{
author={Nikolov, Nikolai}
author={Verma, Kaushal},
title={On the squeezing function and Fridman invariants},
journal={J. Geom. Anal.},
}
\bib{Ro}{article}{
author={Rosay, Jean-Pierre},
title={Sur une caract\'{e}risation de la boule parmi les domaines de ${\bf
C}^{n}$ par son groupe d'automorphismes},
journal={Ann. Inst. Fourier (Grenoble)},
volume={29},
date={1979},
number={4},
pages={ix, 91--97},
issn={0373-0956},
}
\bib{TT}{article}{
author={Thai, Do Duc},
author={Thu, Ninh Van},
title={Characterization of domains in $\Bbb C^n$ by their noncompact
automorphism groups},
journal={Nagoya Math. J.},
volume={196},
date={2009},
pages={135--160},
issn={0027-7630},
}
\bib{Wong}{article}{
author={Wong, B.},
title={Characterization of the unit ball in ${\bf C}^{n}$ by its
automorphism group},
journal={Invent. Math.},
volume={41},
date={1977},
number={3},
pages={253--257},
issn={0020-9910},
}
\bib{Yu2}{article}{
author={Yu, Ji Ye},
title={Weighted boundary limits of the generalized Kobayashi-Royden
metrics on weakly pseudoconvex domains},
journal={Trans. Amer. Math. Soc.},
volume={347},
date={1995},
number={2},
pages={587--614},
issn={0002-9947},
}
\bib{Zim16}{article}{
author={Zimmer, Andrew M.},
title={Gromov hyperbolicity and the Kobayashi metric on convex domains of
finite type},
journal={Math. Ann.},
volume={365},
date={2016},
number={3-4},
pages={1425--1498},
}
\bib{Zim}{article}{
author={Zimmer, Andrew},
title={A gap theorem for the complex geometry of convex domains},
journal={Trans. Amer. Math. Soc.},
volume={370},
date={2018},
number={10},
pages={7489--7509},
}
\bib{Zim19}{article}{
author={Zimmer, Andrew},
title={Characterizing strong pseudoconvexity, obstructions to
biholomorphisms, and Lyapunov exponents},
journal={Math. Ann.},
volume={374},
date={2019},
number={3-4},
pages={1811--1844},
}
\end{biblist}
\end{bibdiv}
\end{document} |
\section{Introduction}
\label{sec:intro}
Despite decades of tremendous experimental and theoretical efforts, the nature of dark matter (DM) is still elusive. Among the different possibilities, the idea that small collapsed structures, or primordial black holes (PBH), could account for all or part of the missing mass is half a century old \cite{1966AZh....43..758Z,1971MNRAS.152...75H,Chapline:1975ojl}. While constraints on this hypothesis are strengthening for PBH with large masses, at relatively low PBH masses an interesting window, $[10^{-16},10^{-10}]M_\odot$, is still presently unconstrained by lensing observations~\cite{2018JCAP...12..005K, 2019NatAs...3..524N, 2019arXiv190605950M}.
It is widely thought that interactions of PBH with compact stars are a promising avenue to shed light on that mass range, possibly via high-energy signatures. For example, it was proposed that a PBH crossing a white dwarf would trigger thermonuclear reactions of heavy elements and cause a runaway explosion leading to a Type Ia supernova \cite{2015PhRvD..92f3007G}. In the case of a NS, the dense neutron medium is favourable for capture. Once trapped, the PBH grows and eventually swallows its host, transmuting it into a black hole (BH). Observations of old NS in DM-rich environments have already been used to set constraints on the PBH content of DM~\cite{2013PhRvD..87l3524C}. Furthermore, the transmutation process could lead to signatures in various messengers and wavelengths, such as radio bursts \cite{2015MNRAS.450L..71F,2018ApJ...868...17A}, kilonovae \cite{2018PhRvD..97e5016B}, positrons~\cite{Takhistov:2017nmt}, gamma-ray bursts \cite{Takhistov:2017nmt,Chirenti:2019sxw}, and gravitational waves \cite{Takhistov:2017bpt,2018ApJ...868...17A,2016PhRvD..93b3508K,2007CQGra..24S.187B}. These different scenarios crucially depend on the dynamics of the seed BH growth, which ultimately impacts the amount of matter and energy expelled in this cataclysmic event. It was argued in \cite{2014PhRvD..90d3512K} that at an early stage the trapped PBH would smoothly accrete the NS material, evacuating the angular momentum through viscous dissipation. The final stages of the collapse, on which the observational signatures critically depend, are however strongly uncertain. While realistic simulations coupling magneto-hydrodynamics and general relativity are probably the unique way to investigate the final signatures (see e.g. \cite{2019arXiv190907968E}), the details of the interactions of the PBH with the dense NS medium are of prime importance to assess the capture rate and set the initial conditions of the simulations.
In this paper we revisit and refine the different interaction mechanisms, with a main focus on the capture and the post-capture dynamics and their observational consequences, as well as the role of gravitational wave (GW) emissions.
The paper is organized as follows: In Sec.~\ref{sec:int_mech} we review the different energy-loss processes and discuss their velocity dependence. This includes, in Sec.~\ref{sec:int_gw}, the process due to gravitational wave (GW) losses, considered for the first time in this context.
In Sec.~\ref{sec:capture}, we discuss the relative relevance of the different mechanisms for the capture, also assessing the role played by GW emission. In Sec.~\ref{eq:postcapture} we discuss the post-capture dynamics, which is remarkably simple and amenable to a description in terms of an adiabatic invariant (Eq.~(\ref{eq:conservation})) which leads us to establish a prescription for future numerical simulations, see Eq.~(\ref{initcond}). In Sec.~\ref{sec:signatures} we discuss a number of phenomenological consequences, with particular emphasis on the GW signatures of both the encounter and the post-capture dynamics, for individual events as well as the stochastic background.
Finally, in Sec.~\ref{concl} we discuss our results and conclude.
In what follows we assume PBH masses $10^{-17}\,M_\odot \ll m\ll M_\odot$, so that: i) we can neglect Hawking radiation (and the associated mass evaporation) in the whole evolution of the PBH, which thus behaves, in the absence of accretion, as a stable object. ii) The mass and size of the PBH are negligible with respect to the NS mass and size. This is anyway a very interesting mass range, where current upper limits~\cite{Carr:2020gox} on the fraction of DM in the form of PBH, $f_{\rm PBH}$, are typically not better than 1\%, and often closer to the 10\% level. In the range $[10^{-16},10^{-10}]M_\odot$, bounds are absent or dependent on questionable assumptions, and PBH may also constitute the totality of the DM.
\section{Interaction mechanisms}\label{sec:int_mech}
By passing through (even close to) a NS, a PBH experiences several drag forces. While most of them have already been discussed in the literature, hereafter we review them, with a main focus on their velocity dependence. However, the content of Sec.~\ref{sec:int_coll} and especially the treatment of GW energy-losses (Sec.~\ref{sec:int_gw}) are novel considerations in the context of the problem at hand.
Note that, if the drag force ${\bold F}$ is known, the energy-loss can be promptly computed as the work of the force along the trajectory ${\cal C}$:
\begin{equation}
|\Delta E|= \int_{\cal C} {\bold F}\cdot {\rm d}{\bold l}\;.
\end{equation}
\subsection{Dynamical friction in a collisionless medium}\label{sec:int_dynfric}
As a PBH of mass $m$ passes through a collisionless medium, the gravitational pull from the wake of the PBH slows it down.
This force is called dynamical friction, and can be accounted for by the following formula \cite{RevModPhys.21.383,1987gady.book.....B}:
\begin{equation}
{\bf F}_{\rm dyn}=-4\pi G^2 m^2\rho \ln\Lambda_{\rm dyn}(v)\, \frac{\boldsymbol{v} }{v^3}\;,\label{eq:df}
\end{equation}
where $G$ is Newton's gravitational constant, $\rho$ the density of the medium and $\ln{\Lambda}_{\rm dyn}$ the so-called Coulomb logarithm, which depends on the ratio of extreme impact parameters. Notice that in our case, following \cite{2013PhRvD..87l3524C}, the Coulomb logarithm does depend on the velocity, to account for the degenerate nature of the neutron fluid: the neutrons contributing to the drag force are those for which the momentum transferred in the gravitational scattering is sufficient to extract them from the Fermi sea. The Coulomb logarithm reads~\cite{2013PhRvD..87l3524C}
\begin{equation}
\ln \Lambda_{\rm dyn} (v)= v^4 \gamma^2 \frac{2}{R_g^2}\int_{d_{\rm crit }}^{d_{\rm max}}{\rm d} x \,x (1-\cos \varphi(x))\;,\label{eq:ln_dyn}
\end{equation}
where $R_g=2\,G\,m$ is the Schwarzschild radius of the PBH, $\varphi$ is the deviation angle of the neutron scattered by the PBH, $v$ its speed in the PBH reference frame and $\gamma$ is the Lorentz factor. Below the critical PBH-neutron impact parameter $d_{\rm crit }$, the neutrons are accreted onto the PBH, while $d_{\rm max}$ is set by the requirement that the scattered neutron must find an energy level not already occupied by another neutron, or simply be ejected from the Fermi sea. Since the typical chemical potential of neutrons in a NS is $\mu_F \approx 0.3$~GeV, the effect of the degenerate matter reduces $\ln{ \Lambda}_{\rm dyn}$ by a factor $\approx 10$ at a speed $v = 0.8$ (in units of $c$). In summary, the typical energy-loss scales as:
\begin{equation}\label{Edyn}
|\Delta E|_{\rm dyn} \sim \frac{R_g^2 M_\star}{R_\star^2} \frac{\ln \Lambda}{ v_\star^2}\;,
\end{equation}
with $R_\star$ and $M_\star$ the stellar radius and mass, respectively, $v_\star^2=GM_\star/R_\star$ the typical PBH velocity, and the numerical pre-factor in Eq.~(\ref{Edyn}) is determined by the actual integration along the trajectory.
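As a numerical illustration of the scaling in Eq.~(\ref{Edyn}), the following sketch compares $|\Delta E|_{\rm dyn}$ with the asymptotic kinetic energy for the BSK-20-1 parameters of Tab.~\ref{tab:numbers}. Both the prefactor of 3 (from a homogeneous-density, constant-speed estimate) and the value $\ln\Lambda\simeq 3$ (mimicking the degeneracy-suppressed Coulomb logarithm) are assumptions, so the result is indicative only:

```python
import math

# Physical constants (cgs)
G = 6.674e-8        # cm^3 g^-1 s^-2
c = 2.998e10        # cm/s

# BSK-20-1 star and a 10^25 g PBH
M_star = 1.52 * 1.989e33   # g
R_star = 11.6e5            # cm
m_pbh  = 1e25              # g
v_i    = 1e-3 * c          # asymptotic PBH speed
lnL    = 3.0               # assumed degeneracy-suppressed Coulomb log

# |dE|_dyn ~ F_dyn * R_star with rho ~ 3 M_star / (4 pi R_star^3),
# evaluated at the typical internal speed v_star = sqrt(G M_star / R_star)
v_star2 = G * M_star / R_star
dE_dyn = 3.0 * G**2 * m_pbh**2 * M_star * lnL / (R_star**2 * v_star2)

E_i = 0.5 * m_pbh * v_i**2
ratio = dE_dyn / E_i   # ~10^-2: a single transit does not capture this PBH
```

For $v_i=10^{-3}$ the ratio is of order $10^{-2}$, i.e. a single transit does not capture a $10^{25}\,$g PBH; since the ratio scales as $m/v_i^2$, only the low-velocity tail of the distribution can be captured.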
\subsection{Dynamical friction in a collisional medium}\label{sec:int_coll}
Eqs.~(\ref{eq:df},\ref{eq:ln_dyn}) strictly apply to a collisionless medium. This clearly may not be the case for the strongly interacting neutron fluid. However, the results must still be correct if the gravitational interaction timescale is much shorter than the causal time for the neutron-neutron interaction, set by the sound speed of the medium, $c_s$. We thus expect Eqs.~(\ref{eq:df},\ref{eq:ln_dyn}) to be valid for a PBH moving at {\it supersonic speed} in the neutron fluid. This is confirmed by the study of friction in a collisional medium developed in \cite{1999ApJ...513..252O}. Its results can be summarized as follows: i) At ${\cal M}\equiv v/c_s\gtrsim 2$, the collisionless result is reproduced. ii) At $1\lesssim{\cal M}\lesssim 2$, the friction force is resonantly enhanced. iii) For speeds smaller than the sound speed, for a transient time of
the order of $R_\star/c_s$, the PBH feels a force given by Eq.~(\ref{eq:df}) with $\ln \Lambda_{\rm dyn}$ replaced by
\begin{equation}
\ln{\Lambda}_{\rm coll}=\frac{1}{2}\ln\left( \frac{1+{\cal M}}{1- {\cal M}}\right)-{\cal M}\simeq \frac{{\cal M}^3}{3} \:{\rm for}\: {\cal M}\ll 1\;.\label{eq:ln_dyn_sub}
\end{equation}
Eventually, however, the friction force tends to zero when the system settles closer and closer to the steady state limit.
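The small-${\cal M}$ behaviour quoted in Eq.~(\ref{eq:ln_dyn_sub}) is easily checked numerically; the sketch below compares the exact expression with its leading term:

```python
import math

def ln_lambda_coll(mach):
    """Subsonic collisional 'Coulomb logarithm', Eq. (eq:ln_dyn_sub)."""
    return 0.5 * math.log((1.0 + mach) / (1.0 - mach)) - mach

# Leading-order expansion M^3/3 for M << 1; the next term is M^5/5,
# so the relative error at M = 0.1 should be ~3 M^2/5 ~ 0.6%
mach = 0.1
exact = ln_lambda_coll(mach)
approx = mach**3 / 3.0
rel_err = abs(exact / approx - 1.0)
```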
\subsection{Accretion}\label{sec:int_accr}
From Newton's laws, the accretion of matter with a rate $\dot m$ and zero momentum causes a drag force in the opposite direction of the motion:
\begin{equation}
{\bf F}_{\rm acc}= -\;\dot m \;\boldsymbol{v}\;.\label{eq:facc}
\end{equation}
In the supersonic regime, as argued in \cite{2013PhRvD..87b3507C}, this force can be written as Eq.~(\ref{eq:df}), with $\ln \Lambda_{\rm dyn}$ replaced by
\begin{equation}
\ln \Lambda_{\rm acc}(v)= v^4 \gamma^2 \frac{d_{\rm crit}^2}{R_g^2}\;, \label{eq:ln_acc_p}
\end{equation}
with the same notation as used in Eq.~(\ref{eq:ln_dyn}).
In the subsonic regime, an analytical theory only exists rigorously for a body accreting at rest, assumed to apply for very small speeds $v \ll c$. In this case,
the accretion rate $\dot m$ tends to the spherical Bondi accretion rate \cite{1952MNRAS.112..195B}
\begin{equation}
\dot m=\frac{{\rm d}m}{{\rm d}t} =\frac{4\pi\,\lambda\, \rho \,G^2 m^2}{c_s^3}\;,\label{eq:bondi}
\end{equation}
with $\lambda$ depending on the medium properties, equal to 0.707 for a polytropic equation of state with index $\Gamma=4/3$~\cite{Kouvaris:2013kra}. Although some expressions valid for finite $v$, such as
\begin{equation}
\dot m=\frac{{\rm d}m}{{\rm d}t} =\frac{4\pi\,\lambda\, \rho \,G^2 m^2}{(v^2 + c_s^2)^{3/2}}\;,\label{eq:bondi_v}
\end{equation}
have been proposed already in~\cite{1952MNRAS.112..195B} and roughly confirmed by simulations \cite{1985MNRAS.217..367S}, the correction to Eq.~(\ref{eq:bondi}) is expected to be small in the deeply sub-sonic regime of major interest in our paper, and we will neglect it in the following.
It is worth noting that the accretion force can be written as Eq.~(\ref{eq:df}), with $\ln \Lambda_{\rm dyn}$ replaced by
\begin{equation}
\ln \Lambda_{\rm sub}(v)=\lambda \,{\cal M}^3\;.\label{eq:ln_acc_m}
\end{equation}
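To give a sense of scale of Eq.~(\ref{eq:bondi}): for a PBH at rest at the center of the star, the $\dot m\propto m^2$ growth implies a finite swallowing time $t_{\rm acc}=c_s^3/(4\pi\lambda\rho\,G^2 m_0)$, obtained by integrating ${\rm d}m/m^2$ from the seed mass $m_0$ to infinity. The sketch below uses the BSK-20-1 core sound speed together with an {\it assumed} illustrative core density $\rho\simeq10^{15}\,{\rm g\,cm^{-3}}$ (a round number, not taken from the quoted NS models):

```python
import math

G = 6.674e-8          # cm^3 g^-1 s^-2
c = 2.998e10          # cm/s

lam   = 0.707         # lambda for a Gamma = 4/3 polytrope
rho_c = 1e15          # g/cm^3, assumed core density (illustrative)
c_s   = 0.68 * c      # core sound speed, BSK-20-1
m0    = 1e25          # g, seed PBH mass

mdot = 4.0 * math.pi * lam * rho_c * G**2 * m0**2 / c_s**3   # g/s

# Integrating dm/m^2 from m0 to infinity gives a finite swallowing time:
t_acc = c_s**3 / (4.0 * math.pi * lam * rho_c * G**2 * m0)   # seconds
```

For $m_0=10^{25}\,$g this gives $t_{\rm acc}\sim{\rm few}\times10^4\,$s (hours), confirming that once the PBH is settled the transmutation is fast compared to all other timescales, while for $m_0\sim10^{20}\,$g it stretches to decades.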
\subsection{Surface waves}\label{sec:int_surf}
A PBH crossing the neutron star excites waves on its surface. These surface waves --- essentially, tidal deformations of the NS --- are different from the sound waves; in particular, they have a different dispersion relation. For this reason we discuss them separately.
The total energy dissipated by the BH into production of surface waves has been estimated in Ref.~\cite{2014PhRvD..90j3522D} by making use of a simple analytical model, an incompressible fluid in a uniform gravitational field. The result, Eq.~(13) of Ref.~\cite{2014PhRvD..90j3522D}, up to a numerical coefficient leads to the following estimate for the energy-loss in a NS
\begin{equation}
|\Delta E|_{\rm surf} \sim \frac{G\,m^2}{R_\star}\;.\label{eq:swel}
\end{equation}
Keeping in mind that $G\,M_\star/R_\star\gtrsim 0.2$ for a NS, this is parametrically similar to the energy-loss due to the dynamical friction, Eq.~(\ref{Edyn}), however {\em without the enhancement associated to the Coulomb logarithm}.
It is easy to understand this result intuitively in terms of the dynamical friction calculation. In the case of an infinite medium, the energy-loss from dynamical friction gets contributions from all distances, hence the logarithmic divergence of the sum. When applied to the star, the star radius $R_\star$ imposes a cutoff, which essentially means retaining only leading-log contributions. Changing the shape of the star would change the subleading constant term. Similar features are shared by surface waves.
Such arguments suggest that the contribution to the energy-loss due to the {\it tidal deformations} induced by a PBH passing near the NS {\em (without actually crossing its surface)} is also subleading. One may view this process as dynamical friction in an infinite medium, in which the contributions of all volumes are switched off except the one actually occupied by the star. The Coulomb log now becomes $\ln{[r_{\rm min} / (r_{\rm min} - R_\star )]}$, $r_{\rm min}$ being the periastron of the PBH orbit. Clearly this logarithm becomes subleading to the original $\ln{(R_\star/R_g)}$ well before $r_{\rm min} - R_\star$ becomes comparable to $R_\star$. The fact that only the part of the volume actually occupied by the star is filled with matter reduces this contribution further, resulting in a suppression by some power of the ratio $R_\star/r_{\rm min}$. The actual calculation yields a series of the type
\begin{equation}
|\Delta E|_{\rm tidal} \sim \frac{G m^2}{R_\star} \sum_{\ell=2}^\infty
\left( \frac{R_\star}{r_{\rm min}} \right)^{2\ell+2} T_\ell
\;,\label{eq:tidal}
\end{equation}
where each term is suppressed by $[R_\star/r_{\rm min}]^{2\ell+2}$ \cite{1977ApJ...213..183P}, with $\ell$ the multipole number of the tidal deformation, and $T_\ell$ are dimensionless coefficients $\lesssim {\cal O}(1)$ that depend on the star properties and $R_\star/ r_{\rm min}$. Clearly, this expression is greatly suppressed for
$R_\star/ r_{\rm min}\ll 1$, while for $R_\star/ r_{\rm min} \to 1 $ it must turn into the energy-loss due to surface wave emission.
\subsection{Gravitational waves}\label{sec:int_gw}
The encounter of the relativistically moving PBH with the compact star produces gravitational waves (GW); this is the only energy-loss mechanism we take into account in some detail which does not strictly require contact with the star to be operational. For hyperbolic encounters, analytical calculations for generic configurations have been performed in~\cite{Capozziello:2008ra} and~\cite{DeVittori:2012da}. In the context of pairs of PBH of stellar masses, this process has been considered in~\cite{2017PDU....18..123G,2018PDU....21...61G}. Here, we illustrate the results of these calculations, applying them to the case of interest, but we also {\it generalize the calculation to the case of a PBH crossing the NS}, with the latter reported in Sec.~\ref{sec:gw}.
The power in GW is
\begin{equation}
\frac{{\rm d}E}{{\rm d}t}=\frac{G}{5\,c^5}\langle \dddot{Q}_{ij} \dddot{Q}_{ij}\rangle\,,\label{PowGW}
\end{equation}
where we introduced the quadrupole
\begin{equation}
Q_{ij}\equiv\int \rho(\boldsymbol{r}) (r_ir_j-\frac{1}{3} r^2 \delta_{ij})\,d^3 r\;.
\end{equation}
We actually re-express the energy-loss in terms of the distance $d$ to the source and the gravitational strain $h_{ij}$, defined as:
\begin{equation}
h_{ij}=\frac{2}{c^4}\frac{G}{d}\ddot{Q}_{ij}\,.
\end{equation}
We compute the typical gravitational strain in cartesian coordinates, expressing it as
\begin{equation}
h_{0}=(h_{xx}^2+h_{yy}^2+2h_{xy}^2)^{1/2}=\frac{R_g v_i^2}{c^4\,d}\, g(\phi,e)\;,\label{eq:gw_general}
\end{equation}
with $g(\phi,e)$ a complicated function that depends on the eccentricity $e$ and the phase angle $\phi=\phi(t)$. Its expression can be found in Appendix~\ref{app:pbh_motion}, where we give the detailed description of the orbit of the PBH depending on the initial PBH speed $v_i$ and impact parameter $b$.
The contribution to the energy radiated in GW can be split in two pieces, respectively accounting for the motion inside and outside the NS,
\begin{equation}
|\Delta E|_{\rm gw}= \Delta E_{\rm gw}^{\rm in} + \Delta E_{\rm gw}^{\rm out} \; \label{eq:gwel}.
\end{equation}
For the purpose of the capture, in the case when the PBH crosses the NS surface $\Delta E_{\rm gw}$ (and hence $\Delta E^{\rm in}_{\rm gw}$) is never important compared to the other contributions previously described. On the other hand, for larger impact parameters $\Delta E_{\rm gw}^{\rm out}$ may be relevant. One can borrow directly Eq.~(3.13) from \cite{DeVittori:2012da}. Rewriting it in terms of the eccentricity $e$ and the periastron distance $p(e)$, and denoting $M\equiv m+M_\star$, we have
\begin{equation}
\Delta E_{\rm gw}=\frac{8}{15}\frac{m^2M_\star^2}{M^3}v_i^7\frac{p(e)}{(e-1)^{7/2}}\,. \label{DeltaEGW}
\end{equation}
Here the eccentricity $e$ of the orbit is related to the relevant independent parameters $b,v_i, m, M_\star$ via
\begin{equation}
e=\sqrt{1+\frac{b^2}{a^2}}=\sqrt{1+\frac{b^2v_i^4}{G^2M^2}}
\,,\label{eccentricity}
\end{equation}
where we also introduced the semi-major axis $a=GM/v_i^2$. For completeness, the function $p(e)$ is given by
\begin{align}
p(e) = (e+1)^{-7/2} \Biggl\{&
\arccos\left(-\frac{1}{e}\right)\left(24+73\,e^2+\frac{37}{4}e^4\right)\nonumber\\
&+\frac{\sqrt{e^2-1}}{12}(602+673\,e^2)\Biggr\}\;.\nonumber
\end{align}
It is straightforward to derive the scaling $\Delta E_{\rm gw}\propto v_i^{-7}$ in the regime of physical interest here ($e\approx 1$),
which suggests a growing relative importance of this energy-loss channel for low velocity dispersion systems.
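This scaling can be verified directly from Eqs.~(\ref{DeltaEGW},\ref{eccentricity}). The sketch below works in units $G=c=1$ with $M_\star=1$ (an arbitrary normalization, since only the scaling is tested) and checks that halving $v_i$ at fixed impact parameter increases the radiated energy by $2^7=128$ in the nearly parabolic regime $e\approx1$:

```python
import math

def p_of_e(e):
    """Enhancement function p(e) for hyperbolic encounters."""
    return (e + 1.0)**-3.5 * (
        math.acos(-1.0 / e) * (24.0 + 73.0 * e**2 + 9.25 * e**4)
        + math.sqrt(e**2 - 1.0) / 12.0 * (602.0 + 673.0 * e**2)
    )

def delta_E_gw(b, v_i, m, M_star):
    """Radiated GW energy, Eq. (DeltaEGW), in units G = c = 1."""
    M = m + M_star
    e = math.sqrt(1.0 + b**2 * v_i**4 / M**2)   # Eq. (eccentricity)
    return (8.0 / 15.0) * m**2 * M_star**2 / M**3 \
        * v_i**7 * p_of_e(e) / (e - 1.0)**3.5

# Nearly parabolic regime (e ~ 1): Delta E scales as v_i^-7
m, Ms, b = 1e-8, 1.0, 1e4
ratio = delta_E_gw(b, 1e-3, m, Ms) / delta_E_gw(b, 2e-3, m, Ms)  # ~2^7
```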
\section{Capture}
\label{sec:capture}
A PBH gets captured by a NS, i.e. becomes gravitationally bound to it, if it loses enough kinetic energy so that its total energy becomes negative. The capture condition for a PBH of mass $m$ thus reads:
\begin{equation}
|\Delta E|>E_i=\frac{1}{2}m v_i^2 \;, \label{eq:cap_condition}
\end{equation}
with $v_i$ the PBH velocity at infinity and $\Delta E$ the energy-losses coming from the different interaction mechanisms reviewed. Based on the previous discussion, it is important to assess whether the PBH is moving supersonically or subsonically when it interacts. We anticipate that, for a broad range of NS models, the first interaction (hence the possible capture) of the PBH with the NS material happens at supersonic or transonic velocities, i.e. ${\cal M}\gtrsim 1$. To reach this conclusion, we took several benchmark models from the literature \cite{2013A&A...560A..48P} (BSK-20 and BSK-21), spanning equations of state of varying stiffness, with a low- and a high-mass model in agreement with the LIGO/VIRGO data from a binary neutron star merger~\cite{Abbott:2018exr}. For the sake of clarity we define the typical velocity $v_\star$, the associated angular velocity $\omega_\star$, frequency $f_\star$, and period $T_\star$,
\begin{equation}
v_\star=\sqrt{\frac{GM_\star}{R_\star}}, \: \: \omega_\star=\frac{v_\star}{R_\star}, \: \: f_\star=\frac{1}{T_\star}=\frac{\omega_\star}{2\pi}\:
\end{equation}
that we will use throughout our calculations.
These scales are summarized in Tab.~\ref{tab:numbers} for the profiles considered; we also report the sound speed ($c_s$) and the chemical potential of neutrons ($\mu_n$) in the NS core. Unless stated otherwise, the numerical estimates given in the paper are based on the values of the BSK-20-1 NS model, and assume that the NS is a homogeneous sphere of matter.
\begin{table}[h!]
\begin{center}
\begin{tabular}{l c c c c}
\hline\hline
Model & BSK-20-1 & BSK-20-2 & BSK-21-1 & BSK-21-2 \\
\hline
Radius $R_\star$ [km] & 11.6 & 10.7 & 12.5 & 12.0\\
Mass $M_\star$ [$\rm M_{\odot}$] & 1.52 & 2.12 & 1.54 & 2.11\\
$v_\star$ [$c$] & 0.44 & 0.54 & 0.43 & 0.50 \\
$f_\star=1/T_\star$ [kHz] & 1.8 & 2.4 & 1.6 & 2.0\\
$c_s$ (core) [$c$] & 0.68 & 0.97 & 0.64 & 0.81 \\
$\mu_n$ (core) [GeV] & 0.27 & 0.81 & 0.24 & 0.51 \\
\hline
\end{tabular}
\caption{Relevant parameters for the benchmark NS models considered.}
\label{tab:numbers}
\end{center}
\end{table}
To discuss whether capture happens in the subsonic or supersonic regime, one should compare the speed of the PBH travelling through the NS with the sound speed of the NS medium along its trajectory. The trajectory of the PBH in the NS is presented in Appendix~\ref{app:pbh_motion}.
In the limit $v_i \ll v_\star$, the arrival speed of a PBH as a function of $r$ is given by the following expression in terms of the gravitational potential $\Phi$,
\begin{equation}
v(r)=\sqrt{1-e^{2(\Phi(\infty)-\Phi(r))}}\simeq v_\star\,\sqrt{3-r^2/R_\star^2}\;, \label{eq:vesc}
\end{equation}
where the first expression on the RHS takes into account GR effects, and the second approximate equality holds in the Newtonian limit, being accurate to within 5\%. In Fig.~\ref{fig:sound_speed} we show the sound speed of the chosen NS benchmark models as a function of $r$ (thick lines), as well as the velocity of Eq.~(\ref{eq:vesc}) (thin lines of corresponding style and color). For all models except BSK-20-2, the PBH speed is always larger than the sound speed at any given $r$. For BSK-20-2, the velocity can drop slightly (by a few percent) below the speed of sound if the PBH enters within the inner third of the star.
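As a quick consistency check of the Newtonian limit of Eq.~(\ref{eq:vesc}) for a homogeneous sphere: the arrival speed equals the escape speed $\sqrt2\,v_\star$ at the surface and $\sqrt3\,v_\star$ at the center, and with the BSK-20-1 {\it core} sound speed the PBH is supersonic even there (near the surface $c_s$ is much lower than the core value, so the surface Mach number is underestimated by this sketch):

```python
import math

v_star = 0.44     # BSK-20-1, units of c
c_s_core = 0.68   # core sound speed, units of c

def v_arrival(r_over_R):
    """Newtonian arrival speed inside a homogeneous sphere, Eq. (eq:vesc)."""
    return v_star * math.sqrt(3.0 - r_over_R**2)

# Core c_s is the maximum, so this is the most pessimistic Mach number
mach_center = v_arrival(0.0) / c_s_core   # ~1.12, marginally supersonic
```

The PBH is thus (marginally) supersonic even at the center for this model, in line with Fig.~\ref{fig:sound_speed}.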
Using the expressions valid in the {\it supersonic regime} for the mechanisms summarized in Sec.~\ref{sec:int_mech}, we compute the different energy-losses as a function of the impact parameter. To this goal, we define $b_c$ as the critical impact parameter, such that a PBH having $b=b_c$ will eventually graze the NS of radius $R_\star$, reaching in its orbit a minimal distance from the center $r_{\rm min}=R_\star$. In terms of the initial velocity $v_i$, one has:
\begin{equation}
b_c=R_\star \sqrt{1+2\frac{v_\star^2}{v_i^2}}
\;.\label{eq:bc}
\end{equation}
As an example, in the model BSK-20-1 we obtain $\tilde{b}_c\equiv b_c/R_\star\approx 624$. Our results are reported in Fig.~\ref{fig:energy_losses} for the initial velocity value $v_i=10^{-3}$.
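The quoted value of $\tilde{b}_c$ follows directly from Eq.~(\ref{eq:bc}); a minimal sketch, using the rounded $v_\star=0.44$ of Tab.~\ref{tab:numbers}, gives $\tilde{b}_c\approx622$, consistent with the $\approx624$ obtained from the unrounded stellar parameters:

```python
import math

def b_crit_over_R(v_star, v_i):
    """Critical impact parameter in units of R_star, Eq. (eq:bc)."""
    return math.sqrt(1.0 + 2.0 * v_star**2 / v_i**2)

b_tilde = b_crit_over_R(0.44, 1e-3)   # ~622 for BSK-20-1
```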
It is clear that the dominant process for capture is dynamical friction, for any impact parameter $b<b_c$. For all but the GW term, the vertical axis scales roughly as $1/E_i\propto (10^{-3}/v_i)^2$. For initial velocities $v_i<2\times 10^{-4}$, GW capture at $b>b_c$ becomes important. Note that, had we used expressions for the transonic regime, the impact of the dynamical friction would have been enhanced thanks to the resonant effect mentioned in Sec.~\ref{sec:int_coll}, while the accretion mechanism would have been comparatively suppressed. We conclude, consistently with the common lore, that dynamical friction is the dominant mechanism for energy loss by a PBH passing through the NS. However, it is not always the dominant {\it capture} mechanism, as we will argue in Sec.~\ref{sec:signatures}.
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{sound_speed_vesc.pdf}
\caption{Sound speed (thick line) and PBH speed (thin line) as a function of the radius for the NS reference profiles considered.\label{fig:sound_speed}}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{Energy_losses_capture.pdf}
\caption{Comparison of the contribution of the different processes of energy-losses to capture as a function of the impact parameter ratio $b/b_c$, for the benchmark $v_i=10^{-3}$.
\label{fig:energy_losses}}
\end{figure}
A naive look at Fig.~\ref{fig:energy_losses} would suggest no capture for the typical velocity dispersion in the Milky Way halo. However, this would be incorrect, given the broad
distribution of velocities. To be more quantitative, we assume that the PBHs follow a Maxwellian distribution in velocities with the dispersion $\bar{v}$,
\begin{equation}
{\rm d}^3n=n_\text{PBH}\left(\frac{3}{2\pi \bar{v}^2}\right)^{3/2}
\exp\left\{\frac{-3v^2}{2\bar{v}^2}\right\} {\rm d}^3v,
\end{equation}
where $n_\text{PBH}=\rho_\text{PBH}/m$, with $\rho_\text{PBH}$ the density of PBHs at the star location and $m$ their mass. It can be expressed in terms of the local DM density $\rho_\text{DM}$ as follows,
\begin{equation}
\rho_\text{PBH} = f_{\rm PBH} \rho_\text{DM},
\label{eq:rhoBH}
\end{equation}
with $f_{\rm PBH}$ observationally allowed to attain values as large as 1 for $10^{-16}\,M_\odot \lesssim m\lesssim 10^{-10}\,M_\odot$, while being limited to $f_{\rm PBH}\lesssim{\cal O}$(0.01-0.1) for $10^{-10}\,M_\odot\lesssim m\lesssim 0.1\,M_\odot$. In the following we always assume $\bar{v}\ll v_\star$.
The rate of NS-PBH encounters leading to capture is:
\begin{equation}
{\cal G}_{\star}=\int \frac{{\rm d}^3 n}{{\rm d}v^3}\; {\cal S}(v) \; v\; {\rm d}^3v\;,
\end{equation}
where ${\cal S}(v)=\pi\; b_{\cal G}^2$ is the effective cross-section of the star which leads to capture\footnote{Note that a more precise GR treatment accounting for the Schwarzschild metric of the NS would lead to an enhanced capture rate by a factor $1/(1-R_\star^s/R_\star)\approx 1.6$~\cite{2008PhRvD..77b3006K}, where $R_\star^s$ is the Schwarzschild radius of the NS.}. This is defined by the condition in Eq.~(\ref{eq:cap_condition}): in practice, $b_{\cal G}(v)$ is the {\it largest} $b$ solving the implicit equation $\Delta E(b,v,m,M_\star)=m v^2/2$, where $\Delta E$ includes all energy-losses.
Considering all the processes discussed above, and using the typical PBH velocity dispersion $\bar{v}=10^{-3}$, we find numerically:
\begin{equation}
{\cal G}_{\star}\simeq 2.1\times 10^{-17} \; \left( \frac{\rho_\text{PBH}}{\rm GeV\,cm^{-3}}\right) \left( \frac{10^{-3}}{\bar v}\right)^3 {\cal C}\left[X \right] \rm yr^{-1}\;,\label{eq:cap_rate}
\end{equation}
with,
\begin{equation}
X=X(m,{\bar v})\equiv \left( \frac{m}{10^{25}\rm g}\right)\left( \frac{10^{-3}}{\bar v}\right)^2\;\label{eq:defX}\,,
\end{equation}
and
the function ${\cal C} [X]$ is displayed with a dashed-black line in Fig.~\ref{fig:CX_scaling}. Because of the form of ${\cal S}(v)$, the dependence on ${\bar{v}}$ and $m$ is not trivial. The contribution of GW to ${\cal C} [X]$ is shown with a blue line in Fig.~\ref{fig:CX_scaling}, whereas the dashed-gray curve displays the behaviour of $\cal C$ without accounting for GW capture. For $X<10$, ${\cal C} [X]$ is constant; for $10<X<10^3$ it declines as $X^{-1}$; when $X>10^3$, the decline follows the milder behaviour $\propto X^{-5/7}$, because capture by GW emission kicks in and dominates at large impact parameters. Note that, although at large $X$ the capture is suppressed, the GW capture becomes comparatively more important. Fixing the mass at $10^{25}\,$g, at $\bar{ v}=10^{-3}$, $10^{-4}$ and $10^{-5}$ the GW capture is responsible for 1.1\%, 6.0\% and 99\% of the captures, respectively.
In obtaining the above results, we have considered only interactions between the PBH and an {\it isolated} NS. While a detailed account of {\it multi-body effects} goes beyond our goals, let us mention the current understanding of these processes. If the NS is in a tight binary, it has been shown in~\cite{2012PhRvL.109f1301B} that the capture can be enhanced by a factor up to 3-4, due to the energy loss of the PBH (or of any ``test particle'', for that matter) resulting from its gravitational scattering off the moving NS companion. More frequently, the PBH falling onto the NS will also experience tidal effects from stellar clusters or even from the Galactic disk in which the NS is embedded. For the Milky Way disk, this effect (which in general may either enhance or deplete the capture probability) has been estimated to become important in the capture process for $m\lesssim {\rm few}\times 10^{-13}\,M_\odot$~\cite{2019arXiv190605950M}.
For comparison, we also define the rate of encounters which involve an interaction with matter, but do not always lead to capture:
\begin{align}
\Gamma_{\star}&=\int \frac{{\rm d}^3 n}{{\rm d}v^3}\; \pi b_c^2(v) \; v\; {\rm d}^3v\nonumber\\
&\simeq 3.8\times 10^{-16} \; \left( \frac{\rho_\text{PBH}}{\rm GeV\, cm^{-3}}\right)\left( \frac{10^{25}\rm g}{m}\right) \left( \frac{10^{-3}}{\bar v}\right)\rm yr^{-1}\;,\label{encontTOT}
\end{align}
where $b_c$ is defined in Eq.~(\ref{eq:bc}).
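The normalization of this rate can be cross-checked at the order-of-magnitude level. A minimal sketch, assuming a geometric cross section $\pi b_c^2$ with $b_c^2 \simeq 2 a R_\star$ from gravitational focusing (valid for $\bar v \ll v_\star$) and evaluating at the mean speed instead of performing the full velocity average:

```python
import math

# Order-of-magnitude cross-check of Gamma_star. The b_c ~ sqrt(2 a R_star)
# grazing-capture estimate is an assumption here, not the paper's exact Eq. (bc).
c      = 3.0e10            # cm/s
R_star = 1.0e6             # cm, NS radius
v_star = 0.44              # escape-related NS speed, units of c
v_bar  = 1.0e-3            # PBH velocity dispersion, units of c
rho    = 1.78e-24          # g/cm^3, i.e. 1 GeV/cm^3
m      = 1.0e25            # g, PBH mass

a = R_star * (v_star / v_bar)**2         # hyperbola semi-major axis, cm
b_c2 = 2.0 * a * R_star                  # grazing impact parameter squared
sigma = math.pi * b_c2                   # cross section, cm^2
n = rho / m                              # PBH number density, cm^-3
Gamma = n * sigma * v_bar * c            # encounters per second
Gamma_yr = Gamma * 3.156e7               # encounters per year, ~2e-16
```

The result lands within a factor of two of the quoted $3.8\times 10^{-16}\,{\rm yr}^{-1}$; the residual difference is consistent with the Maxwellian velocity average performed in the full calculation.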
Further comments on these capture rates and potentially observable consequences are reported in Sec.~\ref{sec:signatures}.
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{CX_scaling.pdf}
\caption{Evolution of the function ${\cal C}$ (black-dashed line) of Eq.~(\ref{eq:cap_rate}) as a function of $X$ defined in Eq.~(\ref{eq:defX}). The sole contribution of GW capture is displayed in blue and the difference with the total is shown with a dashed gray line.\label{fig:CX_scaling}}
\end{figure}
\section{Post-capture} \label{eq:postcapture}
If the first PBH interaction leads to its capture, it starts orbiting on a bound trajectory, typically with large eccentricity. We can distinguish two cases, according to whether the capture interaction takes place outside or inside the NS. In the former case, GW emission makes the PBH settle on a meta-stable elliptical orbit around the NS, for a timescale
\begin{equation}
t_{\rm settle}^{\rm GW}\simeq 16\; \left(\frac{m}{10^{22}\,{\rm g}}\right)^{-3/2} \left(\frac{b}{b_c} \right)^{21/2}\left(\frac{v_\star}{0.44}\right)^{-19} \rm Myr\;.\label{tgwsettl}
\end{equation}
This time is estimated from the coalescence time of high-eccentricity binaries from Ref.~\cite{PhysRev.136.B1224}, taking the same periastron for the elliptical trajectory as the hyperbolic one along which the PBH is captured, and choosing a binding energy of order $\Delta E_{\rm gw}$. This timescale is quite sensitive to the value of $v_\star$; for the reference values chosen, it becomes shorter than the age of the Universe $t_U$ for $m>1.4\times 10^{20}\,{\rm g}\simeq 7\times10^{-14}\,M_\odot$.
In the latter case, when the capture happens via an interaction inside the NS, or once the PBH drops inside the NS via GW losses, the energy-loss timescale is much shorter. The PBH mostly loses energy each time it passes through the NS, eventually settling on a fully contained orbit around the star center in a timescale~\cite{2013PhRvD..87l3524C}
\begin{equation}
t_{\rm settle}\lesssim 4\times 10^4\,\left(\frac{m}{10^{22}\,{\rm g}}\right)^{-3/2}\,{\rm yr}.\label{tsettl}
\end{equation}
This is shorter than $t_U$ for $m>2\times 10^{18}\,{\rm g}\simeq 10^{-15}\,M_\odot$.
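The two critical masses quoted above follow directly from setting the settling times equal to the age of the Universe, taken here as $t_U = 10^{10}$ yr (an assumption of this check):

```python
# Cross-check of the two settling-time mass thresholds.
t_U_Myr = 1.0e4     # 1e10 yr expressed in Myr
t_U_yr  = 1.0e10    # yr

# t_GW = 16 Myr * (m / 1e22 g)^(-3/2), at b = b_c and v_star = 0.44
m_crit_GW = 1.0e22 * (16.0 / t_U_Myr)**(2.0/3.0)     # ~1.4e20 g

# t_settle = 4e4 yr * (m / 1e22 g)^(-3/2), for capture inside the NS
m_crit_in = 1.0e22 * (4.0e4 / t_U_yr)**(2.0/3.0)     # ~2.5e18 g
```

Both values reproduce the thresholds quoted in the text to the stated precision.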
While during the first passage the PBH crosses the NS with a supersonic velocity, at later stages, when the orbit size shrinks below $r\sim R_\star c_s/v_\star$, the PBH motion becomes subsonic.
From this moment onward one may neglect all the contributions to the drag force except the one due to accretion of ambient matter, so that
\begin{equation}
{\bf F}_{\rm drag} = - \dot m \boldsymbol{v} =
-4\pi G^2 m^2\rho \frac{\boldsymbol{v} }{c_s^3}\;\label{pcdrag}
\end{equation}
where $\boldsymbol{v}$ is the {\em relative} velocity of the PBH and the ambient matter.
The equation of motion of the PBH in this case takes the following form:
\begin{equation}
\ddot{\boldsymbol{r}} +{\cal D}(t)
\left[
\dot {\boldsymbol{r}} - \boldsymbol{\Omega}\times \boldsymbol{r} \right] + \omega_\star^2 {\boldsymbol{r}}=0\; \label{eq:motion},
\end{equation}
where the PBH position $\boldsymbol{r}$ is defined with respect to the star center, ${\cal D}(t) = \dot m/m$ (cf. Eq.~(\ref{pcdrag})),
$\omega_\star = \sqrt{4\pi G\rho/3} \sim 1.1\times 10^4\,$s$^{-1}$ is the angular frequency of the oscillation around the NS center, and we have included the possibility that the NS rotates with angular velocity $\boldsymbol{\Omega}$.
This equation factorizes into three independent damped harmonic oscillator equations: one for the motion $r_3(t)$ parallel to
$\boldsymbol{\Omega}$, and two for the co-rotating and counter-rotating modes $r_\pm (t)= r_1(t)\pm i r_2(t) $ in the plane orthogonal to $\boldsymbol{\Omega}$. The damping term in these equations is small,
\begin{equation}
{{\cal D}\over \omega_\star} \sim 2.8\times 10^{-12} \left( { m \over 10^{22} {\rm g}}\right) \ll 1,
\label{eq:oscillation_cond}
\end{equation}
and slowly varying with time. The approximate solution is then written in the form
\begin{equation}
r_\pm \propto \exp \left\{ - {1\over 2} \left( 1\mp {\Omega\over\omega_\star}\right)
\ln m + i\omega_\star t\right\}.
\label{eq:oscillations-solution}
\end{equation}
The same solution with $\Omega=0$ is valid for $r_3(t)$.
For most observed NS the ratio $\Omega/\omega_\star$ is much smaller than 1, reaching about $1/4$ for the fastest millisecond pulsar. Thus, the correction due to NS rotation in Eq.~(\ref{eq:oscillations-solution}) can be neglected in most cases. Then all three solutions have the same behavior, which implies
\begin{equation}
m\;r^2\;=\; {\rm const.}\label{eq:conservation}
\end{equation}
Note that this (approximate) conservation law does not depend on the accretion regime, as long as ${\cal D}\ll \omega_\star$.
Making use of this relation, one may readily estimate a typical displacement of the oscillating PBH from the star center at a time when its mass has grown to a fraction $f\ll 1$ of the star mass, $m=f M_\star$. Assuming initial mass $m_i$ and initial orbital radius $r_i\sim R_\star$, the final orbital radius is
\begin{equation}
R_f=R_\star \sqrt{\frac{m_i}{f\,M_\star}}\,.\label{initcond}
\end{equation}
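As an illustration of this relation, one can evaluate the displacement for an assumed seed mass of $10^{-3}\,M_\odot$ grown to $f=10\%$ of a $1.4\,M_\odot$ NS (the parameter choices here are illustrative, not unique):

```python
import math

# Final orbital radius from m r^2 = const, Eq. (initcond), for an
# illustrative seed: m_i = 1e-3 Msun growing to 10% of a 1.4 Msun NS.
M_sun  = 1.989e33             # g
m_i    = 1.0e-3 * M_sun       # assumed PBH seed mass
M_star = 1.4 * M_sun          # assumed NS mass
f      = 0.1                  # final mass fraction of the star

Rf_over_Rstar = math.sqrt(m_i / (f * M_star))   # ~0.085
```

The PBH thus ends up displaced by roughly 10\% of the stellar radius at the onset of the final mass growth, a point used again in the discussion of transmutation signatures.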
To conclude this section, let us estimate the time it would take a PBH of mass $m$ settled within the NS to accrete the whole star. For the rough estimate we approximate the star as a medium of constant density $\rho_\star=3M_\star/(4\pi R_\star^3)$. This is a reasonable approximation in the center of a NS, whose typical profile goes as $\rho_\star \propto (1-(r/R_\star)^2)^{1/2}$ \cite{2013A&A...560A..48P}. Assuming the Bondi accretion rate, Eq.~(\ref{eq:bondi}),
we obtain
\begin{equation}
m(t)=\frac{m}{1-t/t_B}\,,
\end{equation}
where
\begin{equation}
t_{B}= \frac{ c_s^3\, R_\star^3}{3 \,G^2 \,M_\star\, m}\simeq 1\left( \frac{10^{22}\rm g}{m}\right) \rm yr
\end{equation}
is the typical time needed for the PBH to consume the whole NS. Actually, as discussed in Ref.~\cite{2014PhRvD..90d3512K}, the Bondi regime may fail before the whole star is consumed and $m(t)\simeq M_\star$, the reason being angular momentum conservation. In this case the Bondi regime is probably replaced by Eddington-like accretion during the last stages, slightly prolonging the life of the star.
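The closed-form growth law can be verified numerically: in dimensionless units ($m_0 = t_B = 1$) the Bondi rate reduces to $\dot m = m^2$, whose solution is $m(t) = 1/(1-t)$. A quick RK4 integration confirms this:

```python
# Numerical check that m(t) = m0/(1 - t/tB) solves dm/dt = m^2/(m0 tB),
# in dimensionless units m0 = tB = 1 (so dm/dt = m^2).
def rhs(m):
    return m * m

m, t, dt = 1.0, 0.0, 1.0e-5
while t < 0.9 - 1e-12:            # integrate up to t = 0.9 tB
    k1 = rhs(m)
    k2 = rhs(m + 0.5 * dt * k1)
    k3 = rhs(m + 0.5 * dt * k2)
    k4 = rhs(m + dt * k3)
    m += dt * (k1 + 2*k2 + 2*k3 + k4) / 6.0
    t += dt

exact = 1.0 / (1.0 - 0.9)          # closed form: m(0.9) = 10
```

The numerical and analytical solutions agree to high accuracy, with the characteristic runaway as $t \to t_B$.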
\section{Signatures}\label{sec:signatures}
The dynamics outlined in the previous sections has a number of phenomenological consequences, which we discuss in this section.
\subsection{GW bursts from typical PBH-NS encounters}\label{sec:gw}
In a hyperbolic encounter between two massive objects, a characteristic ``tear drop'' burst signal is emitted, according to the LIGO nomenclature~\cite{2017CQGra..34c4002P}. A similar signature for PBH has been considered in encounters between pairs of PBH in~\cite{2018PDU....21...61G,2017PDU....18..123G}.
Apart from the masses of the two bodies, the motion depends on the impact parameter $b$ and the initial speed $v_i$ or, equivalently, the eccentricity $e$ given by Eq.~(\ref{eccentricity}).
The GW signal can then be computed as explained in Sec.~\ref{sec:int_gw}, provided that the orbital function $g(e,\phi(t))$ is known.
In the limit of monochromatic emission and for $m\ll M_\star$ one can describe the strain due to a hyperbolic encounter as producing a typical GW burst of amplitude $h_c(b,v_i,d)$ and characteristic frequency $f_c(b,v_i)$ as in Refs.~\cite{2017PDU....18..123G, 2018PDU....21...61G} \footnote{Note that a factor $1/3$ is missing in their definition of $h$, given the expression they take for the quadrupole. }:
\begin{align}
h_c(b,v_i,d)&=\frac{2Gm}{3\; d\; c^2}\beta_i^2 \frac{2}{e-1}\sqrt{18(e+1)+5e^2}\\
f_c(b,v_i)&=\frac{1}{2\pi}\frac{v_i}{b}\frac{e+1}{e-1}\,,
\end{align}
where $d$ denotes the distance of the observer from the encounter. For $b=b_c$, $v_i=10^{-3}$, $d=1\,$kpc and $m=10^{25}$g, typical values are $h_c\approx 4\times 10^{-25}$ and $f\approx 3\;$kHz.
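The quoted numbers can be reproduced directly from these expressions. A sketch for a grazing encounter, where the estimate $b_c \simeq \sqrt{2 a R_\star}$ (strong gravitational focusing, $v_i \ll v_\star$) is an assumption of this check:

```python
import math

# Reproduces h_c ~ 4e-25 and f_c ~ 3 kHz for b = b_c, v_i = 1e-3,
# d = 1 kpc, m = 1e25 g.  b_c ~ sqrt(2 a R_star) is assumed here.
G, c = 6.674e-8, 3.0e10           # cgs
R_star, v_star = 1.0e6, 0.44      # cm; units of c
m   = 1.0e25                      # g
v_i = 1.0e-3                      # units of c
d   = 3.086e21                    # 1 kpc in cm

a = R_star * (v_star / v_i)**2    # hyperbola semi-major axis
b = math.sqrt(2.0 * a * R_star)   # grazing impact parameter b_c
e = math.sqrt(1.0 + b**2 / a**2)  # eccentricity, ~1 + 5e-6

h_c = (2.0*G*m/(3.0*d*c**2)) * v_i**2 * (2.0/(e - 1.0)) \
      * math.sqrt(18.0*(e + 1.0) + 5.0*e**2)
f_c = (v_i*c/(2.0*math.pi*b)) * (e + 1.0)/(e - 1.0)      # Hz
```

Evaluating gives $h_c \approx 4\times 10^{-25}$ and $f_c \approx 3$ kHz, matching the values in the text.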
Note that these functions diverge for small impact parameter, i.e. $b \to 0$. Hence in the following,
we generalize the calculation to the case where the PBH passes within the NS. We consider a perturbative approach in which the GW emission is computed along the unperturbed trajectory. Outside the NS, both before entering and after exiting the star, the motion is hyperbolic with parameters determined from Eq.~(\ref{eccentricity}). Inside the NS,
in the approximation of constant density, the gravitational potential is a harmonic potential. Hence, within the star, the PBH follows an elliptical orbit centered on the NS center, with semi-minor axis $\alpha_-$ and semi-major axis $\alpha_+$ given by
\begin{equation}
\frac{\alpha_\pm}{R_\star}=
\sqrt{{\cal V}} \left(1 \pm
\sqrt{1 -\left(\frac{v_i \tilde{b}}{v_\star {\cal V}}\right)^2}\right)
\end{equation}
where ${\cal V}=3/2+v_i^2/(2 v_\star^2)\simeq 3/2$.
These expressions are obtained from the turning points of the radial motion, i.e. by setting the radial velocity in the effective potential (including the angular momentum term) to zero. The eccentricity $\varepsilon$ is defined as
\begin{equation}
\varepsilon=\sqrt{1-\left(\frac{\alpha_-}{\alpha_+}\right)^2}\;.
\end{equation}
A representation of the trajectory can be found in Fig.~\ref{fig:motion} for different impact parameters. For $b=b_c/2$ the two hyperbolas followed by the PBH outside the star (whose border is the red circle) are drawn in blue and green, while the arc of ellipse followed inside the star is shown in dashed orange. Further details on the parameterization of the trajectory with respect to time are given in Appendix~\ref{app:pbh_motion}. The typical GW strain as a function of time for the same trajectories is described by Eq.~(\ref{eq:gwel}) and Appendix~\ref{app:GW} and plotted in Fig.~\ref{fig:gwte}. One can see that the typical gravitational strain and frequency saturate to $h_c = 4\sqrt{5}v_\star^4 R_\star m /(M_\star\,d)$ and $f_c = f_\star$ corresponding to taking the limit $\varepsilon\to 0$ in Eq.~(\ref{eq:gwin}).
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{Hybrid_motion.png}
\caption{Examples of PBH trajectories (black lines) for impact parameters $\leq b_c$. The star is displayed in red, the construction of this trajectory (black) from two hyperbolas (green and blue) and one ellipse (dashed orange) is shown with dashed lines. The radial scale is in units of $R_{\star}$.}\label{fig:motion}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{Hybrid_GW_strain.png}
\caption{Evolution of the gravitational strain for the trajectories shown in Fig.~\ref{fig:motion}. $h_c$ is in units of $10^{-25}$, assuming $v_i=10^{-3}$, $d=1\,$kpc and $m=10^{25}\,$g.}\label{fig:gwte}
\end{figure}
Assuming $N_\star=10^{9}$ NS in the Galaxy, Eq.~(\ref{encontTOT}) yields a total event rate of
\begin{align}
\Gamma_\star\,N_{\star}\simeq 0.38 \;\left( \frac{\rho_\text{BH}}{{\rm GeV\, cm}^{-3}}\right) \left( \frac{10^{25}\rm g}{m}\right) \left( \frac{10^{-3}}{\bar v}\right){\rm Myr}^{-1}\;,
\end{align}
which, for $m\lesssim 10^{25}\rm g $, is not dissimilar from the estimated GRB rate in the Galaxy. Not surprisingly, this rate of encounters is large for very low $m$: at constant mass density, lighter PBH are more numerous and thus lead to more frequent encounters. On the other hand, the amplitude is proportional to $m$, hence louder encounters require heavier PBH and are correspondingly more rare.
\subsection{GW background from PBH-NS encounters}\label{sec:gwback}
In the previous section, we focused on the single GW emission from a possibly ``loud'' but rare encounter event. However, if PBH constitute a sizable fraction of the DM, for sub-stellar mass PBH there are many PBH traveling near NS at distances below the typical inter-stellar distances, $b_{\rm max}$. It may therefore be interesting to compute the overall GW signal due to these frequent but soft events. We will start by considering the signal for a single NS, and then generalize the calculation to a population of $N_\star$ NS, spread out in the Galactic disk of radius $R_{G}$. In order to speak of a stochastic background, the rate of hyperbolic encounters must be larger than the typical GW frequency of a single encounter. This sets a minimum distance, $b_{\rm min}$ (of order 1 AU for $v=10^{-3}$), for an encounter to contribute to the background.
To set the relevant scales, let us estimate an order of magnitude of the number of encounters contributing to the extremely low GW frequency of $f=10^{-10}$Hz, corresponding to a typical impact parameter of $b=0.1\,$pc. Thus, considering a PBH density $\rho_{\rm PBH}=1\,$GeV/cm$^3$, there are $N_\star\times\pi\,b^2\, v_i \,\rho_{\rm PBH}/(m\,f)\approx10^4$ events at the same time in the Galaxy, for $N_\star=10^9$, $v_i=10^{-3}$, and $m={10^{25}\rm g}$. This number scales roughly as $1/f^3$, and becomes ${\cal O}(1)$ for frequencies higher than $\sim 10^{-7}\,$Hz, so that computing the GW background above this limit becomes irrelevant.
In the monochromatic approximation~\footnote{More correctly, the emission should be determined via an integral over the trajectory. Given the rather pessimistic conclusions on the detectability of this signal, we deem the monochromatic approximation sufficient.}, the energy released in a single encounter is given by $E_{\rm enc}(b,v_i,d)=P_{\rm gw}/f_c=\kappa \;h_c^2(b,v_i,d)/f_c(b,v_i)$ with $\kappa$ a proportionality constant.
In differential terms in frequency space,
\begin{equation}
\frac{{\rm d}E_{\rm enc}}{{\rm d}f}=\kappa\frac{h_c^2(d,b,v)}{f_c(b,v)}\delta\left(f-f_c(b,v_i)\right)\,.
\end{equation}
The total signal in the limit of incoherent sum can be written as:
\begin{eqnarray}
&& \left\langle \frac{{\rm d}E_{\rm diff}}{{\rm d} f}\right\rangle= \kappa\int_{V_{\rm MW}}{\rm d}V_{\rm MW}\,n_\star\times \nonumber \\
&& \int {\rm d}^3v\int_{b_{\rm min}}^{b_{\rm max}}{\rm d}b\;2\pi b\,v\frac{{\rm d}^3 n_{\rm BH}}{{\rm d}v^3}\; \frac{h_c^2(d,b,v)}{f_c(b,v)}\delta(f-f_c(b,v))\nonumber
\end{eqnarray}
where $n_\star$ is the density of stars as a function of the coordinates, and $V_{\rm MW}$ the Milky Way volume considered. Once the integration over $b$ is performed, the delta function fixes the function $b(f,v)$.
Computing the integral over $v$ then leads to the following value for the effective strain:
\begin{eqnarray}
\sqrt{\left\langle h_c^2 \right\rangle}&&\simeq 3\times 10^{-20}\left(\frac{10^{-10}\,\rm Hz}{f}\right)^2\times\nonumber\\
&& \sqrt{\frac{N_\star}{10^9}\,\frac{m}{10^{25}\rm g}\,\frac{\rho_{\rm PBH}}{\rm GeV\,cm^{-3}}\,\ln\left(\frac{R_G}{20\rm \;kpc}\cdot\frac{\rm pc}{r_p}\right)}
\end{eqnarray}
where $r_p$ is the distance to the closest pulsar. This number is far below the SKA sensitivity~\cite{2015aska.confE..37J} expected to reach $10^{-16}$ for the effective strain measured at around $10^{-8}\,$Hz. Note that this estimate can be extended to the population of ordinary stars in the Galaxy, since the encounters considered here occur at distances larger than $b_{\rm min}\approx 1\,$AU. However, even taking $N_\star$ two orders of magnitude larger is not sufficient to reach the sensitivity of forthcoming low-frequency GW detectors.
\subsection{GW signature of a trapped PBH}\label{trapped}
In the relatively rare cases where the encounter leads to a capture, the PBH motion is also associated with GW emission. If captured via GW emission in a highly eccentric orbit outside the NS, the GW signal consists of a few bursts at each periastron passage (the period being a fraction of Eq.~(\ref{tgwsettl})), with strain and frequency similar to those computed in Sec.~\ref{sec:gw}.
Once the PBH orbits inside the NS, the GW signal is characterized by Eq.~(\ref{eq:conservation}). Interestingly, the expected emission is {\it monochromatic} with frequency $f_\star\sim$kHz and with a {\it constant amplitude} estimated as
\begin{equation}
h_0= \frac{4\sqrt{2}G}{d c^4}m r^2 \omega_\star^2 \approx 2.5\times 10^{-25} \left( \frac{m}{10^{25}\rm g}\right) \left( \frac{1\; \rm kpc}{d}\right)\;.
\end{equation}
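This estimate can be evaluated numerically, assuming $r\simeq R_\star$ for the initial contained orbit and the value of $\omega_\star$ quoted in the post-capture section (the precise prefactor depends on the exact $r$ and $\omega_\star$ adopted):

```python
import math

# Monochromatic strain h0 = 4 sqrt(2) G m r^2 omega^2 / (d c^4),
# assuming r ~ R_star = 10 km and omega_star ~ 1.1e4 s^-1.
G, c  = 6.674e-8, 3.0e10    # cgs
m     = 1.0e25              # g
r     = 1.0e6               # cm, ~R_star (assumption)
omega = 1.1e4               # s^-1
d     = 3.086e21            # 1 kpc in cm

h0 = 4.0 * math.sqrt(2.0) * G * m * r**2 * omega**2 / (d * c**4)  # ~2e-25
```

The result is of the same order as the quoted $2.5\times 10^{-25}$, the small residual factor reflecting the precise $r$ and $\omega_\star$ used.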
This GW strain is sustained during the whole accretion process, which lasts
\begin{equation}
t_{B}= \frac{ c_s^3\, R_\star^3}{3 \,G^2 \,M_\star\, m}\approx 9 \left( \frac{10^{25}\rm g}{m}\right) \rm hours\;.
\end{equation}
Accounting for rotation (see Sec.~\ref{eq:postcapture}), the GW strain is enhanced or reduced (depending on the sign of $\Omega$) in the last stages of accretion, according to:
\begin{equation}
h_0^R(t) = h_0 \,\left( \frac{m(t)}{m} \right)^{\Omega/\omega_\star} = h_0 \, \left( \frac{1}{1-t/t_B} \right)^{\Omega/\omega_\star} \,.
\end{equation}
Assuming $N_\star=10^{9}$ neutron stars in the Galaxy, Eq.~(\ref{eq:cap_rate}) yields an event rate of
\begin{equation}
{\cal G}_{\star}N_\star\simeq 2.1\times 10^{-8} \; \left( \frac{\rho_\text{PBH}}{ \rm GeV\,cm^{-3}}\right) \left( \frac{10^{-3}}{\bar v}\right)^3 {\cal C}\left[X \right] \rm yr^{-1}\,.\label{eq:cap_rate_MW}
\end{equation}
For typical Milky Way values of ($\rho_{\rm PBH},m,\bar{v}$), within the age of the Galaxy ($\approx 10^{10}\,$\rm yr) one would expect up to a few hundred cases of NS transmuted into BH.
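The ``few hundred'' figure follows from a simple multiplication, taking ${\cal C}[X]=1$ (plateau regime, an assumption of this check) and the reference Milky Way parameters:

```python
# Expected transmutations over the Galaxy lifetime, with C[X] = 1 assumed.
rate_per_yr = 2.1e-8       # prefactor of Eq. (cap_rate_MW), yr^-1
t_gal = 1.0e10             # yr, age of the Galaxy
N_events = rate_per_yr * t_gal     # ~210
```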
Note, however, that provided that $X$ is in the range where ${\cal C}\left[X \right] $ is constant, the capture rate is maximized in environments with large $\rho_{\rm PBH}$ and low velocity dispersion, singling out DM dominated dwarf spheroidals as comparatively more promising targets. Typical such objects (see for instance~\cite{Read:2018fxs}) have a velocity dispersion one order of magnitude or more below the Milky Way value and DM densities one order of magnitude higher than in the solar neighborhood, hence we expect that ${\cal G}_{\star}$ can be enhanced by 10$^4$ or more compared to the Milky Way value. Since each of these objects contains $\sim 10^{-4}$ of the stars of the Milky Way, the overall number of NS transmuted into BH may thus be comparable.
\subsection{Final stages}
If a PBH is trapped inside the NS, eventually it will swallow the entire NS, causing a so-called {\it transmutation} of the NS into a BH. This phenomenon is expected to be associated with both electromagnetic (EM) and gravitational wave signals. The reason why some EM burst is expected boils down to the
no-hair theorem and the fact that NS are magnetized objects: The newly formed BH must expel its magnetic field energy, liberating at least an energy~\cite{Chirenti:2019sxw}
\begin{equation}
E_B=\frac{B^2}{8\pi}\frac{4\pi}{3}R_\star^3 \simeq 2\times 10^{41}\left(\frac{B}{10^{12} {\rm G}}\right)^2 \left(\frac{R_\star}{10\, {\rm km}}\right)^3{\rm erg}\;
\end{equation}
into EM form. For some more details, see \cite{2015MNRAS.450L..71F,2018ApJ...868...17A,Chirenti:2019sxw}. It is unclear if further signatures are associated with the ejecta, if present in non-negligible amounts. Also, a fast change of the quadrupole will lead to some GW signature. These signals have been estimated to be rather unpromising for detection~\cite{2019arXiv190907968E} (see also~\cite{2011PhRvD..83h3512K,2019JCAP...05..035G}). However, current simulations have set the PBH exactly at the center, forcing a symmetry which artificially suppresses both the GW emission and other signatures (e.g. ejecta), and realistic magnetic fields are not accounted for. Our study (and notably Eq.~(\ref{initcond})) suggests some degree of asymmetry in the final phase of the PBH mass growth, which is increasingly pronounced for heavier PBH. For instance, a PBH of initial mass $\simeq 10^{-3}\,M_\odot$ will have reached a mass of 10\% of the NS (which one may consider at the onset of the final transmutation) at a distance from the center of about 10\% of the NS radius (about 3-4 times larger than its Schwarzschild radius). Although we cannot reliably compute the signatures associated with the final stages, a relation like Eq.~(\ref{initcond}) can be used to provide a more realistic initial condition in future simulations.
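The magnetic-energy estimate above can be checked directly: in Gaussian units $E_B = (B^2/8\pi)(4\pi/3)R_\star^3 = B^2 R_\star^3/6$, which reproduces the quoted number for the fiducial parameters:

```python
# Minimum EM energy released at transmutation: E_B = B^2 R^3 / 6 (Gaussian units).
B = 1.0e12        # G, fiducial NS magnetic field
R = 1.0e6         # cm, i.e. 10 km
E_B = B**2 * R**3 / 6.0      # erg, ~1.7e41, matching the quoted 2e41
```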
\section{Conclusion} \label{concl}
In this article, we have revisited the interaction processes between primordial black holes (PBH) and neutron stars (NS) and discussed their consequences for the dynamical evolution of the system.
In particular, we have
argued that dynamical friction, the major player in the PBH capture (which happens typically with the PBH hitting the NS at supersonic velocities), is negligible in the post-capture dynamics, when the PBH is orbiting within the star at subsonic speed.
Also, we have shown that (Bondi-like) accretion dominates the post-capture phase, and is responsible for an approximate conservation law, valid until the final stages, when the transmutation of the NS into a BH takes place. This also implies that the onset of the final catastrophic event is expected with the BH seed in slight off-center position: While the actual consequences of this fact must be investigated via numerical simulations, one can expect enhanced electromagnetic and gravitational wave signatures compared to current estimates.
For the first time, we also assessed the importance of GW losses in this context, notably for captures at large impact parameters in low velocity dispersion systems.
Finally, we discussed GW signals associated with different phases of the PBH-stellar interaction. In particular, we extended the hyperbolic encounter ``tear drop'' signal calculation to the case where the PBH enters the NS in its trajectory, and estimated the (small) GW background from frequent soft encounters. Unfortunately, for the single encounter case the signal rate and strength are anticorrelated: We expect sufficiently loud events (associated with massive PBH) to be rare, while frequent events (for light PBH) are below current or foreseen GW sensitivity.
Barring some luck, the still uncertain emission associated with the transmutation event appears to be the most promising opportunity for a discovery of these exotics.
It is interesting, however, to point out that as the result of cumulative transmutation events over the cosmic history, a population of low-mass BH (with mass of order $M_\odot$) will build up. It has been speculated that up to a few percent of the NS-NS coalescence events may in fact involve such a transmuted low-mass BH~\cite{2018ApJ...868...17A}. This promising alternative diagnostic will however require high GW event statistics and a good measurement of the merger/ringdown part of the waveform, for which one will have to wait for third-generation GW detectors~\cite{Yang:2017gfb}.
\begin{acknowledgments}
Y.G. warmly thanks Nicolas Chamel for providing the equations of state for old neutron stars and for discussions. The work of Y.G. is supported by Villum Fonden under project no.~18994. PDS acknowledges support from IDEX Univ. Grenoble Alpes, under the program {\it Initiatives de Recherche Strat{\'e}giques}, project ``Multimessenger avenues in gravitational waves'' (PI: PDS). The work of P.T. is supported in part by the IISN grant 4.4503.15.
\end{acknowledgments}
\begin{appendix}
\section{Details on PBH trajectory}\label{app:pbh_motion}
\subsection{Parameterization of the PBH motion}
In this appendix we give the parameterization of the PBH trajectory for impact parameter $b<b_c$ (see Eq.~(\ref{eq:bc})) for which the PBH crosses the NS. The case $b>b_c$ can readily be deduced for example from Refs.~\cite{2018PDU....21...61G,2017PDU....18..123G}. In the following we consider the classical trajectory of a PBH of mass $m\ll M_\star$ crossing a NS of constant density $\rho_\star=3M_\star/(4\pi R_\star^3)$. Using the polar coordinates ($r,\phi$), with $r=0$ corresponding to the NS center, the trajectory is parameterized by a hyperbola (I), an ellipse (II) and a hyperbola (III):
\begin{equation}
r(\phi)=
\begin{cases}
r_{\rm \tiny I}(\phi)=\displaystyle\frac{a (e^2-1)}{1+e\cos(\phi-\psi_0)}, &\phi\leqslant\phi_0\\[10pt]
r_{\rm \tiny II}(\phi)=\displaystyle\frac{\alpha_-}{\sqrt{1-\varepsilon^2\cos^2(\phi-\psi_1)}}, &\phi_0<\phi<\phi_1\\[10pt]
r_{\rm \tiny III}(\phi)=\displaystyle\frac{a (e^2-1)}{1+e\cos(\phi-\psi_2)}, &\phi_1\leqslant\phi\;,\\
\end{cases}
\end{equation}
For the hyperbolic motion ($r_{\rm I}$ and $r_{\rm III}$), the eccentricity is given by:
\begin{equation}
e=\sqrt{1+\frac{b^2}{a^2}}=\sqrt{1+\tilde{b}^2\left(\frac{v_i}{v_\star}\right)^4}\;.
\end{equation}
with semi-major axis $a$ such that:
\begin{equation}
\left(\frac{a}{R_\star}\right)^2 = \left(\frac{v_\star}{v_i}\right)^4\;.
\end{equation}
For the elliptical motion ($r_{\rm II}$), as recalled in the main text, the eccentricity is given by:
\begin{equation}
\varepsilon=\sqrt{1-\left(\frac{\alpha_-}{\alpha_+}\right)^2}\;,
\end{equation}
with the semi-major and semi-minor axes $\alpha_\pm$ such that,
\begin{equation}
\tilde{\alpha}_\pm\equiv \frac{\alpha_\pm}{R_\star}=
\sqrt{{\cal V}} \left(1 \pm
\sqrt{1 -\left(\frac{v_i \tilde{b}}{v_\star {\cal V}}\right)^2}\right)\,,
\end{equation}
where ${\cal V}=3/2+v_i^2/(2 v_\star^2)\simeq 3/2$.
Concerning the angles of the problem, while $\psi_0$ is commonly defined as:
\begin{equation}
\psi_0 = \arccos[-1/e]\;,
\end{equation}
the other angles are obtained requiring the continuity of the trajectory:
\begin{align}
[r_{\rm I}(\phi_0)=R_\star] \Rightarrow \phi_0 &= \psi_0 - \arccos\left[\frac{1}{e} \left(\tilde{b}\sqrt{e^2 - 1} - 1\right)\right]
\\
[r_{\rm II}(\phi_0)=R_\star] \Rightarrow \psi_1 &= \phi_0 - \arccos\left[\frac{1}{\varepsilon} \sqrt{1 - \tilde{\alpha}_-^2}\right]\;,
and by symmetries,
\begin{align}
\phi_1 &= \pi - \phi_0 + 2\psi_1 \, \\
\psi_2 &= 2\psi_1 - \psi_0 + \pi\;.
\end{align}
The time evolution $\tau=t(\phi)/T_\star$ can also be split in the same $\phi$ intervals such that,
\begin{equation}
\tau(\phi)=
\begin{cases}
\tau_{\rm \tiny I}(\phi), &\phi\leqslant\phi_0\\[10pt]
\tau_{\rm \tiny II}(\phi), &\phi_0<\phi<\phi_1\\[10pt]
\tau_{\rm \tiny III}(\phi), &\phi_1\leqslant\phi\;,\\
\end{cases}
\end{equation}
with the recursive definitions,
\begin{align}
\tau_{\rm \tiny I}(\phi) &= {\cal T}_{\rm \tiny out}(\phi - \psi_0)-{\cal T}_{\rm \tiny out}(\phi_0 - \psi_0)\\
\tau_{\rm \tiny II}(\phi) &= {\cal T}_{\rm \tiny in}(\phi-\psi_1)-{\cal T}_{\rm \tiny in}(\phi_0-\psi_1)+\tau_{\rm \tiny I}(\phi_0)\\
\tau_{\rm \tiny III}(\phi) &= {\cal T}_{\rm \tiny out}(\phi - \psi_2)-{\cal T}_{\rm \tiny out}(\phi_1 - \psi_2) + \tau_{\rm \tiny II}(\phi_1)\;,
\end{align}
where we have introduced the functions:
\begin{align}
{\cal T}_{\rm \tiny out}(u)= \frac{\tilde{b}}{2 \pi}\frac{v_\star}{v_i}\, &\Bigg(\;\frac{e\,\sin u}{1 + e\,\cos u} \\ &- \frac{2}{\sqrt{e^2 -1}} \tanh^{-1}\left[\sqrt{\frac{e - 1}{e + 1}} \tan{\frac{u}{2}}\right]\Bigg)\;,
\end{align}
and,
\begin{equation}
{\cal T}_{\rm \tiny in}(u)= \; \frac{\alpha_-^2}{2\pi\,b\,R_\star}\frac{v_\star}{v_i}\frac{1}{\sqrt{1-\varepsilon^2}} \tan^{-1}\left[\frac{1}{\sqrt{1-\varepsilon^2}} \tan{u}\right]\;.\\
\end{equation}
\subsection{Gravitational wave emission}\label{app:GW}
The function used to compute the GW strain $h_0$ Eq.~(\ref{eq:gw_general}) is a piecewise function depending on the regime of the motion:
\begin{equation}
g(\phi)=
\begin{cases}
g_{\rm \tiny out}(\phi-\psi_0), &\phi\leqslant\phi_0\\[10pt]
g_{\rm \tiny in}(\phi-\psi_1), &\phi_0<\phi<\phi_1\\[10pt]
g_{\rm \tiny out}(\phi-\psi_2), &\phi_1\leqslant\phi\;,\\
\end{cases}
\end{equation}
with,
\begin{equation}
\begin{aligned}
g_{\rm \tiny out}(\phi)&= \frac{\sqrt{2}}{3}\frac{1}{e^2-1}\left[36+59 \,e^2+10 \,e^4 \right.\\ & \left. +(108 + 47 \,e^2)\,e\cos{\phi}+59\,e^2\cos{2\phi}+9\,e^3\cos{3\phi}\right]^{1/2}
\end{aligned}
\end{equation}
and,
\begin{equation}
\begin{aligned}
g_{\rm \tiny in}(\phi)= & \frac{{2}}{3}\left(\frac{b}{\alpha_-}\right)^2 \big[ 38\,(1-\varepsilon^2) +5\,\varepsilon^4 \\ & +80 (1-\varepsilon^2)^2 \left(2-\varepsilon^2(1+\cos{2\phi}) \right)^{-2} \\ & +40 (2-3\,\varepsilon^2 + \varepsilon^4)\left(2-\varepsilon^2(1+\cos{2\phi})\right)^{-1} \big]^{1/2}\;. \label{eq:gwin}
\end{aligned}
\end{equation}
The power radiated in gravitational waves can be computed from \cite{1918SPAW.......154E}, as:
\begin{equation}
P_{\rm gw}=\frac{d E_{\rm gw}}{dt}=\frac{G}{5 c^5}\langle \dddot Q_{ij} \dddot Q^{ij} \rangle\;.
\end{equation}
We define the differential energy radiated by unit angle $p^\phi_{\rm gw}$ as:
\begin{equation}
p^\phi_{\rm gw}=\;\frac{d E_{\rm gw}}{d\phi} \;=\frac{1}{\dot \phi}\;P_{\rm gw}\;.
\end{equation}
We compute this quantity for the PBH travelling in or out of the star:
\begin{equation}
p^\phi_{\rm gw}(\phi,\tilde b)= \frac{2}{45} E_i \frac{v_\star^2 v_i^3}{c^5 } \frac{m}{M_\star}
\begin{cases}
f_{\rm \tiny out}(\phi-\psi_0), &\phi\leqslant\phi_0\\[10pt]
f_{\rm \tiny in}(\phi-\psi_1), &\phi_0<\phi<\phi_1\\[10pt]
f_{\rm \tiny out}(\phi-\psi_2), &\phi_1\leqslant\phi\;,\\
\end{cases}
\end{equation}
with,
\begin{align}
f_{\rm out} (\phi, \tilde b)= \frac{2}{\tilde b}\frac{(1 + e \cos\phi )^2}{(e^2 -1)^3} &(\,144 + 288 \,e\, \cos\phi \nonumber \\ & + 77 \,e^2 + 67\, e^2 \cos 2 \phi\, ) \;,
\end{align}
and,
\begin{align}
f_{\rm in} (\phi, \tilde b)=
4 \frac{\tilde b^5}{\tilde{\alpha}_-^6}& \frac{(1 - \varepsilon^2)^2}{(1 - \varepsilon^2\cos^2 \phi)^3} (\,72\,(1- \varepsilon^2) + 37 \,\varepsilon^4 \nonumber \\ & + 36\, \varepsilon^2\, (\varepsilon^2-2 ) \,\cos 2\phi - \varepsilon^4 \cos 4 \phi \,)\;.
\end{align}
From these expressions one can compute the gravitational energy radiated outside the NS,
\begin{equation}
|\Delta E|_{\rm gw}^{\rm out} (\tilde b) = \int^{\phi_0}_0 p^\phi_{\rm gw}(\phi,\tilde b) \, d\phi +\int_{\phi_1}^{\phi_{dev}} p^\phi_{\rm gw}(\phi,\tilde b) \, d\phi\;,
\end{equation}
with $\phi_{dev}=2\psi_1$ the total deflection angle, and inside the NS,
\begin{equation}
|\Delta E|_{\rm gw}^{\rm in} (\tilde b) =\int^{\phi_1}_{\phi_0} p^\phi_{\rm gw}(\phi,\tilde b) \, d\phi\;.
\end{equation}
\end{appendix}
\bibliographystyle{apsrev4-1}
\section{\label{sec:intro}Introduction}
Cellular control systems are inherently spatial, as reactions in a network involve macromolecules confined to certain locations or sub-compartments. For example, MAPK pathways involve sensing a stimulus by receptors localized to the cell membrane, propagating the signal in the cytoplasm via a phosphorylation cascade that modifies transcription factors, which are eventually imported into the nucleus where they bind to promoter sites on DNA and affect the expression of downstream genes. On an even finer level, spatial correlations on very short length scales impact the dynamics of various biochemical systems \cite{Mahmutovic2012}. The low copy numbers of key molecules introduce stochasticity in gene regulatory networks (GRNs). This is an important factor to account for when studying the regulatory properties of GRNs. Examples where both spatial and stochastic effects are predicted to be important \cite{Mahmutovic2012} include spatial gene regulation of Hes1 \cite{Sturrock2:2013}, polarization in budding yeast \cite{Lawson:2013}, and the MinD-system in \emph{E. Coli} \cite{FaEl}.
As a consequence, spatio-temporal simulation of reaction-diffusion systems is an important tool to analyze GRNs. In particular, two modeling frameworks have attracted considerable attention in the systems biology community: the mesoscopic, discrete stochastic reaction-diffusion master equation (RDME) in which point-like molecules are tracked on a grid, and Brownian Dynamics (BD) in which hard-sphere particles are tracked in continuous space.
Many capable software packages have been created to support such spatial modeling, including MCell \cite{mcell}, Smoldyn \cite{smoldyn}, E-Cell \cite{ecell}, MesoRD \cite{mesord}, VCell \cite{vcell}, STEPS \cite{steps}, NeuroRD \cite{neurord}, ReaDDy \cite{readdy}, URDME \cite{URDME_BMC}, PyURDME \cite{molns} and StochSS \cite{StochSS}, the latter which integrates spatial capabilities via PyURDME.
Mesoscopic simulators are efficient if a reasonably coarse mesh can be used. However, for some diffusion-limited systems, it is critical to capture short-range, short-timescale interactions between the molecules \cite{TaTNWo10,FangeSRDME,Mahmutovic2012}. In this case a microscopic, particle-scale resolution is needed for accurate simulation. This raises the question of how well the mesoscopic model can capture the microscale dynamics as the mesh size approaches the molecule size. Unless measures are taken, the RDME approximation quality degrades as the mesh size becomes increasingly fine \cite{doi:10.1137/070705039,HHP2,HP1}.
The point-particle mesoscopic model can be made to approximate a microscopic model by deriving scale-dependent reaction rates \cite{FangeSRDME,HHP,HHP2,ErCha09} down to a critical mesh size after which no one-neighbor stencil can provide increased accuracy \cite{HHP}. After this critical limit, it is possible to improve the approximation even further by considering a wider stencil \cite{HP1,FangeSRDME}. This approach results in a lattice method with a lattice spacing on the order of the size of the molecules. By considering the Doi model \cite{Doi1} rather than the Smoluchowski model on the microscopic scale, it is also possible to arrive at a non-local, convergent mesoscopic model by directly discretizing the microscopic model \cite{IsaacsonCRDME}. But even if these solutions allow for accurate simulation of diffusion-limited systems down to a mesh size close to individual particles, the simulation cost increases dramatically as the mesh becomes finer: the number of diffusion events (and so the simulation time) grows proportionally to $h^{-2}$ where $h$ is a measure of the mesh size.
A promising approach to efficiently simulating systems with multiscale properties is hybrid methods, in which the reaction network is partitioned and its parts are simulated on different modeling levels with different solvers. Examples include mesoscopic-macroscopic (PDE) methods \cite{FERM2010343,tworegime1,SPILL2015429,Lo160485}, macroscopic-mesoscopic methods \cite{Yates20150141}, macroscopic-microscopic methods \cite{doi:10.1137/120882469}, and mesoscopic-microscopic methods, in which parts of a system are simulated with the RDME and parts of it are simulated with a particle-tracking algorithm \cite{hybrid1,hybrid2,doi:10.1093/bioinformatics/btv149}. If a good partitioning can be found, the cost savings of hybridization can be substantial by keeping the number of particles handled on the microscale small, yet maintaining a reasonable mesh size for the RDME simulator. However, one key problem with hybrid methods is that prior knowledge about the system is needed in order to partition it correctly. Also, the system dynamics may change over the course of the simulation or in different regions in the spatial domain, making the initial system partitioning invalid or suboptimal. These issues make hybrid solvers hard to use without expert knowledge, which limits their usefulness as black-box simulation tools. Another problem is that they are complex and challenging to implement, thus there is a lack of software capable of hybrid simulation.
In this paper we develop theory and a practical method that, given a user-supplied error tolerance, is capable of automatic selection of the appropriate modeling level for each species in a spatial stochastic model. We show that the hybrid method converges with decreasing time step, and demonstrate numerically that it accurately reproduces the fine-grained reaction dynamics of microscopic methods. Finally, we show that the hybrid method can simulate systems that are intractable with pure mesoscopic methods, without having to simulate the whole system microscopically.
The rest of the paper is organized as follows. In Sect. \ref{sec:background} we briefly introduce the mesoscopic and microscopic models, and review how they are related. In Sect. \ref{sec:method} we describe the proposed hybrid algorithm, we show how to split a general system, and we develop the condition to be used for adaptive system partitioning.
In Sect. \ref{sec:implementation} we discuss the practical implementation of the method, and demonstrate the accuracy and efficiency of the developed method on two challenging test problems in Sect. \ref{sec:results}. Sect \ref{sec:conclusions} concludes the paper.
\section{Background}
\label{sec:background}
Two modeling frameworks that are popular for simulating reaction-diffusion kinetics in systems biology are the microscopic Smoluchowski model and the mesoscopic reaction-diffusion master equation (RDME). In the former, particles are modeled as hard spheres and their positions are tracked continuously in space, whereas in the latter, particles are point-particles and their positions are tracked up to the resolution of a structured or unstructured grid approximating the domain.
\subsection{Mesoscopic model}
On the mesoscopic scale, we model molecules as point particles, and treat diffusion as jumps between adjacent voxels on a mesh. The state of the system is the discrete number of molecules of each chemical species $S_i$, $i=1\ldots M$, in each voxel of the grid, where the voxels are denoted by $\mathcal{V}_j,~j=1\ldots N$.
A diffusive jump is a linear event
\begin{align}
S_{1i} \xrightarrow{D_1} S_{1j},
\end{align}
taking a molecule of species $S_1$ from voxel $\mathcal{V}_i$ to an adjacent voxel $\mathcal{V}_j$, where $D_1$ is a rate constant that depends on the diffusion constant of species $S_1$ and the shape and size of the voxels \cite{efhl2009}.
Chemical reactions occur between molecules residing in the same voxel. For example, a bimolecular reaction between species $S_1$ and $S_2$ producing $S_3$ in voxel $\mathcal{V}_j$ can be written
\begin{align}
S_{1j} + S_{2j} \xrightarrow{k_a} S_{3j},
\end{align}
where $k_a$ is a mesoscopic reaction rate. The mathematical formalism is the continuous-time discrete-space Markov process. In this framework the reaction propensity $a(s_{1j},s_{2j}) = k_a s_{1j}s_{2j}/V_{j}$ is defined such that $a(s_{1j},s_{2j})\delta t$ gives the probability of the reaction occurring in an infinitesimal interval $(t,t+\delta t)$, where $V_j$ is the volume of voxel $\mathcal{V}_j$. With this assumption, realizations of the process can be efficiently simulated using versions of the SSA optimized for reaction-diffusion systems, such as the Next Subvolume Method (NSM) \cite{ElEh04}.
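As a concrete illustration, the propensity and the corresponding exponential waiting time used by SSA-type solvers can be sketched as follows (a minimal Python sketch with illustrative names, not code from any of the packages cited above):

```python
import math
import random

def bimolecular_propensity(s1, s2, k_a, volume):
    """Propensity a(s1, s2) = k_a * s1 * s2 / V for S1 + S2 -> S3 in one voxel."""
    return k_a * s1 * s2 / volume

def sample_event_time(total_propensity, rng=random.random):
    """Exponential waiting time until the next event of the Markov process."""
    return -math.log(rng()) / total_propensity
```

In the NSM, the next event time of each voxel is computed in this way from the sum of its reaction and diffusion propensities, and the voxels are kept in a priority queue.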
\subsection{Microscopic model}
\label{micro-background}
Consider two molecules $M_1$ and $M_2$, of species $S_1$ and $S_2$ respectively. The molecules can react irreversibly according to $\ce{S_1 + S_2 -> S_3}$. In the microscopic Smoluchowski model, molecules are modeled as hard spheres with a finite reaction radius, diffusing according to Brownian motion. We denote the molecules' reaction radii by $\sigma_1$ and $\sigma_2$, and their diffusion constants by $D_1$ and $D_2$. Their positions in $\mathbb{R}^3$ at time $t_0$ are denoted by $\mathbf{r}_{10}$ and $\mathbf{r}_{20}$. The probability that the molecules are at positions $\mathbf{r}_1$ and $\mathbf{r}_2$ at time $t$ is described by the probability density function (pdf) $p(\mathbf{r}_1,\mathbf{r}_2,t|\mathbf{r}_{10},\mathbf{r}_{20},t_0)$. Let $\mathbf{r} = \mathbf{r}_1-\mathbf{r}_2$, and $\mathbf{R} = \sqrt{\frac{D_2}{D_1}}\mathbf{r}_1+\sqrt{\frac{D_1}{D_2}}\mathbf{r}_2$. We can now rewrite the pdf as
\begin{align}
p(\mathbf{r}_1,\mathbf{r}_2,t|\mathbf{r}_{10},\mathbf{r}_{20},t_0) = p_{\mathbf{R}}(\mathbf{R},t|\mathbf{r}_0,t_0)p_{\mathbf{r}}(\mathbf{r},t|\mathbf{r}_0,t_0),
\end{align}
and it can be shown that the dynamics of the two molecules in $\mathbb{R}^3$ is governed by the following system of equations \cite{ZoWo5a}:
\begin{align}
\label{smolu-eq1}
\frac{\partial p_{\mathbf{R}}}{\partial t} = D\Delta_{\mathbf{R}} p_{\mathbf{R}}\\
\frac{\partial p_{\mathbf{r}}}{\partial t} = D\Delta_{\mathbf{r}} p_{\mathbf{r}},
\end{align}
where $D=D_1+D_2$. The initial and boundary conditions are given by
\begin{align}
4\pi\sigma^2D\frac{\partial p_{\mathbf{r}}}{\partial n}\bigg|_{\| \mathbf{r}\|=\sigma} &= k_a p_{\mathbf{r}}(\| \mathbf{r}\| = \sigma, t|\mathbf{r}_0,t_0)\label{smolu-eq1-bicond1}\\
p_{\mathbf{r}}(\| \mathbf{r}\| \to \infty, t|\mathbf{r}_0,t_0) &= 0\label{smolu-eq1-bicond2}\\
p_{\mathbf{r}}(\mathbf{r}, t_0|\mathbf{r}_0,t_0) &= \delta(\mathbf{r}-\mathbf{r}_0).\label{smolu-eq1-bicond3}
\end{align}
This system of equations can be solved exactly \cite{ZoWo5a,CarJae}, or using an operator split approach \cite{SHeLo11}. We will call $k_a$ the microscopic reaction rate. The probability that the two molecules react between times $t_0$ and $t$ is given by
\begin{align}
\label{micro_time}
p_{\mathbf{r}}(\ast,t|\mathbf{r}_0,t_0) = 1-\int_{\| \mathbf{r}\| \geq \sigma}p_{\mathbf{r}}(\mathbf{r},t|\mathbf{r}_0,t_0)\,d\mathbf{r}.
\end{align}
The time $t_d$ until a molecule undergoes a unimolecular reaction event with reaction rate $k_d$ is obtained by sampling $t_d$ from an exponential distribution with mean $1/k_d$. If the dissociation event produces two molecules, then they are placed at a distance of $\sigma$ apart. We will outline in Sect. \ref{micro-implementation} how to use $p_{\mathbf{r}}(\ast,t|\mathbf{r}_0,t_0)$ and $p(\mathbf{r}_1,\mathbf{r}_2,t|\mathbf{r}_{10},\mathbf{r}_{20},t_0)$ to simulate a more complex system within a bounded domain.
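The two primitives just described, sampling a unimolecular event time and placing dissociation products at contact, can be sketched as follows (a hedged Python sketch; the helper names are ours, not from any particular package):

```python
import math
import random

def sample_dissociation_time(k_d, rng=random.random):
    """Exponential waiting time with rate k_d (mean 1/k_d)."""
    return -math.log(rng()) / k_d

def place_at_contact(center, sigma, rng=random.random):
    """Place two product molecules sigma apart along a uniformly random 3D direction."""
    z = 2.0 * rng() - 1.0                 # uniform point on the unit sphere
    phi = 2.0 * math.pi * rng()
    s = math.sqrt(max(0.0, 1.0 - z * z))
    n = (s * math.cos(phi), s * math.sin(phi), z)
    r1 = tuple(c + 0.5 * sigma * u for c, u in zip(center, n))
    r2 = tuple(c - 0.5 * sigma * u for c, u in zip(center, n))
    return r1, r2
```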
\subsection{Mesoscopic parameters}
\label{mesoparams}
We now ask how the mesoscopic and microscopic models are related. Specifically we need to relate the mesoscopic parameters to the microscopic parameters. The diffusion jump rates on the mesoscopic scale are obtained by discretizing the diffusion equation. On a structured mesh this is straightforward; for details on how to do it on an unstructured mesh, see \cite{URDME_BMC}. Below we outline an approach to relating the mesoscopic reaction rates to the microscopic reaction rates.
It is well known that if the reaction volume $V$ in 3D is much larger than the molecules, i.e. $V^{1/3}\gg\sigma$, then the mesoscopic reaction rate, $k_{\rm CK}$, relates to the microscopic reaction rate as
\begin{align}
\label{ck-equation}
k_{\rm CK} = \frac{1}{V}\frac{4\pi\sigma D k_a}{4\pi\sigma D + k_a},
\end{align}
where $D$ is the sum of the molecules' diffusion constants, and $\sigma$ is the sum of the reaction radii. This expression was first derived by Collins and Kimball \cite{CollinsKimball}, and later re-derived by Gillespie \cite{gillespierates}. It is easy to see that for a spatially discretized well-mixed system, or if the voxels are large enough ($h\gg\sigma$, where $h^3$ is the volume of a voxel), the mesoscopic reaction rate, $\krmi^{\rm meso}$, will be given by
\begin{align}
\krmi^{\rm meso} = \frac{1}{h^3}\frac{4\pi\sigma Dk_a}{4\pi\sigma D+k_a}.
\end{align}
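For later reference, this well-mixed rate can be computed as follows (a small sketch; the function name is ours):

```python
import math

def k_meso_ck(k_a, D, sigma, h):
    """Well-mixed (h >> sigma) mesoscopic rate: Collins-Kimball rate over voxel volume."""
    k_ck = 4.0 * math.pi * sigma * D * k_a / (4.0 * math.pi * sigma * D + k_a)
    return k_ck / h**3
```

In the reaction-limited regime $k_a \ll 4\pi\sigma D$ the scaled rate reduces to $k_a/h^3$, while in the diffusion-limited regime it approaches $4\pi\sigma D/h^3$.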
In models with multiscale properties we often need to resolve part of the system to very high accuracy, which requires a highly resolved mesh. This implies that the condition $h\gg\sigma$ might not be satisfied. We thus need to derive reaction rates for this case.
Following the analysis in \cite{HHP,HHP2}, we start by considering a single irreversible reaction
\begin{align}
\label{r-basic}
\ce{S_{1} + S_{2} ->[$k_a$] S_3}
\end{align}
in a cube discretized by a Cartesian mesh. Additionally, we will assume that the $S_{1}$ molecule is fixed inside a voxel close to the center of the domain, while the $S_{2}$ molecule diffuses freely with diffusion rate $D$.
To study the relationship between the RDME and Smoluchowski models for small $h$, we compare the expected time until the molecules first react on the microscopic scale to the time on the mesoscopic scale.
First, consider the microscopic scale. Let the $S_{2}$ molecule have an initial position sampled from a uniform distribution and denote the mean binding time, i.e. the average time until the molecules react given that the $S_{2}$ molecule is uniformly distributed, by $\tau_{\rm{micro}}$. Following the approach in \cite{HHP}, we split $\tau_{\rm{micro}}$ into two parts:
\begin{align}
\tau_{\rm{micro}} = \tau_{\rm{diff}}^{\rm{micro}}+\tau_{\rm{react}}^{\rm{micro}},
\end{align}
where $\tau_{\rm{diff}}^{\rm{micro}}$ is the average time until the $S_{1}$ molecule is in contact with the $S_{2}$ molecule for the first time, and $\tau_{\rm{react}}^{\rm{micro}}$ is the average time until the molecules react given that they are in contact.
\noindent
We know that \cite{FBSE10,HHP2}:
\begin{align}
\tau_{\rm{diff}}^{\rm{micro}} \approx \begin{cases}
\frac{V}{4\pi\sigma D}, \quad (3D)\\
\frac{V\log\left(\pi^{-1}\frac{V^{1/2}}{\sigma}\right)}{2\pi D}, \quad (2D)
\end{cases}
\label{eq:microdiff}
\end{align}
and that
\begin{align}
\tau_{\rm{react}}^{\rm{micro}} = \frac{V}{k_a}\quad \text{(1D, 2D, 3D)}.
\label{eq:microreact}
\end{align}
We now consider the system \eqref{r-basic} on the mesoscopic scale. The $S_{1}$ molecule is fixed in a voxel close to the origin, so that it is far from the boundaries. The $S_{2}$ molecule is sampled uniformly on the mesh, and $\tau_{\rm{meso}}$ denotes the average time until the two molecules react for the first time. Let $\tau_{\rm{diff}}^{\rm{meso}}$ be the average time until the molecules are in the same voxel for the first time, and let $\tau_{\rm{react}}^{\rm{meso}}$ denote the average time until they react given that the molecules start in the same voxel.
Again we split the average binding time into two parts
\begin{align}
\tau_{\rm{meso}} = \tau_{\rm{diff}}^{\rm{meso}}+\tau_{\rm{react}}^{\rm{meso}},
\end{align}
where (with $C_2=0.1951$ and $C_3=1.5164$) \cite{MoWe65,Montroll68,HHP,HHP2}
\begin{align}
\label{eq:mesodiff}
\tau_{\rm{diff}}^{\rm{meso}} = \begin{cases}
\frac{C_3V}{6Dh}+O\left(N^{\frac{1}{2}}\right)\quad (3D)\\
\frac{V}{4\pi D}\log(N)+\frac{C_2 V}{4D}+O\left(N^{-1}\right)\quad (2D)
\end{cases}
\end{align}
and
\begin{align}
\label{eq:mesoreact}
\tau_{\rm{react}}^{\rm{meso}} = \frac{N}{\krmi^{\rm meso}}.
\end{align}
\noindent
We make the ansatz that the mean reaction times on the mesoscopic and microscopic scales are equal, to obtain
\begin{align}
\tau_{\rm{meso}} = \tau_{\rm{micro}} \\
\iff \tau_{\rm{diff}}^{\rm{meso}}+\tau_{\rm{react}}^{\rm{meso}} = \tau_{\rm{micro}} \\
\iff \krmi^{\rm meso} = \frac{N}{\tau_{\rm{micro}}-\tau_{\rm{diff}}^{\rm{meso}}},\label{eq:mesorate}
\end{align}
where the last equality follows from \eqref{eq:mesoreact}. Note that $\tau_{\rm{micro}}$ and $\tau_{\rm{diff}}^{\rm{meso}}$ are both known, so that we can compute $\krmi^{\rm meso}$ using \eqref{eq:mesorate}. We showed in \cite{HHP2} that $\krmi^{\rm meso}$ can be rewritten as
\begin{align}
\label{rate-eqs}
\krmi^{\rm meso} = \frac{k_a}{h^3}\left( 1+\frac{k_a}{D}G(h,\sigma) \right)^{-1},
\end{align}
where $G$ in 3D is given by
\begin{align}
G(h,\sigma) = \frac{1}{4\pi\sigma}-\frac{C_3}{6h}.
\end{align}
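Eq. \eqref{rate-eqs} and $G$ are easily evaluated; one can also check that $G(h,\sigma)$ vanishes at the optimal 3D mesh size $h^*=2C_3\pi\sigma/3$ derived below, where $\krmi^{\rm meso}$ reduces to $k_a/h^3$. A Python sketch of these 3D formulas:

```python
import math

C3 = 1.5164  # 3D lattice constant

def G3(h, sigma):
    """G(h, sigma) in 3D."""
    return 1.0 / (4.0 * math.pi * sigma) - C3 / (6.0 * h)

def k_meso(k_a, D, sigma, h):
    """Mesh-dependent mesoscopic rate, Eq. (rate-eqs), valid down to the critical mesh size."""
    return (k_a / h**3) / (1.0 + (k_a / D) * G3(h, sigma))
```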
The analysis above implies the existence of a mesh size that is optimal for accuracy. To see this, consider that:
\begin{align}
\tau_{\rm{diff}}^{\rm{meso}} < \tau_{\rm{diff}}^{\rm{micro}} &\implies \tau_{\rm{react}}^{\rm{meso}} > \tau_{\rm{react}}^{\rm{micro}}\label{dyn1}\\
\tau_{\rm{diff}}^{\rm{meso}} = \tau_{\rm{diff}}^{\rm{micro}} &\implies \tau_{\rm{react}}^{\rm{meso}} = \tau_{\rm{react}}^{\rm{micro}}\label{dyn2}\\
\tau_{\rm{diff}}^{\rm{meso}} > \tau_{\rm{diff}}^{\rm{micro}} &\implies \tau_{\rm{react}}^{\rm{meso}} < \tau_{\rm{react}}^{\rm{micro}}\label{dyn3}.
\end{align}
The reaction dynamics is better resolved on the microscopic scale than on the mesoscopic scale, and so we expect the mesoscopic accuracy to increase as $\tau_{\rm{react}}^{\rm{meso}}$ approaches $\tau_{\rm{react}}^{\rm{micro}}$ from above. This was shown to be true in \cite{HHP2}. However, as $\tau_{\rm{react}}^{\rm{meso}}$ decreases further, the accuracy also worsens. This was also shown in \cite{HHP2}.
Thus, since $\tau_{\rm{diff}}^{\rm{meso}}$ increases with decreasing $h$, we note that in general we expect the most accurate mesoscopic simulations by selecting $h$ such that \eqref{dyn2} holds. Solving $\tau_{\rm{diff}}^{\rm{meso}}=\tau_{\rm{diff}}^{\rm{micro}}$ for $h$ yields the optimal mesh size $h^*$ \cite{HHP2}:
\begin{align}
h^* = \begin{cases}
\frac{2C_3}{3}\pi\sigma\approx 3.2\sigma, \quad (3D)\\
\sqrt{\pi}e^{\frac{3+2C_2\pi}{4}}\sigma\approx 5.1\sigma, \quad (2D)
\end{cases}
\end{align}
where
\begin{align}
C_3 \approx 1.5164\\
C_2 \approx 0.1951.
\end{align}
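The optimal mesh sizes above are straightforward to evaluate (a sketch, with the constants as given):

```python
import math

C2, C3 = 0.1951, 1.5164

def h_star(sigma, dim=3):
    """Mesh size at which tau_diff^meso = tau_diff^micro."""
    if dim == 3:
        return (2.0 * C3 / 3.0) * math.pi * sigma
    if dim == 2:
        return math.sqrt(math.pi) * math.exp((3.0 + 2.0 * C2 * math.pi) / 4.0) * sigma
    raise ValueError("dim must be 2 or 3")
```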
\subsection{Hybrid methods}
\label{sec:hybridbackground}
We previously developed a hybrid method \cite{hybrid1} that allowed a given system to be split into two parts: a mesoscopic part and a microscopic part. Species are divided into one subsystem simulated on the microscopic scale, and one subsystem simulated on the mesoscopic scale. The division could depend on spatial constraints, so that a species would be simulated as part of the microscopic subset only in certain parts of space, but not in others. With the system split into two subsets, we would proceed to simulate the system in sequence:
\begin{enumerate}
\item Initialize and set $t=0$. Let the final time be $T$. Select a splitting time step $\Delta t$.
\item Simulate the mesoscopic subset for $\Delta t$ seconds, while keeping the microscopic subset fixed.
\item Simulate the microscopic subset for $\Delta t$ seconds, while keeping the mesoscopic subset fixed. However, we allow microscopic molecules to react bimolecularly with mesoscopic molecules.
\item Synchronize and assign all newly created molecules to their respective scale.
\item Add $\Delta t$ to $t$. Repeat 2-4 until $t=T$.
\end{enumerate}
A crucial and counter-intuitive aspect of this algorithm is that it is necessary to select a time step $\Delta t$ that is neither too small nor too large. The method does not, in general, converge with $\Delta t\to 0$. A rule of thumb is that $\Delta t$ should be selected such that molecules diffuse on the length scale of individual voxels in between synchronization. However, it is straightforward to design a system that would require a smaller $\Delta t$ for accurate simulation, and for which the above hybrid method does not work.
To see why the simple scheme outlined above leads to the existence of an optimal timestep $\Delta t$, consider the following model system:
\begin{align}
\label{system-main}
\ce{S_1 ->[$k_1$] S_{11} + S_{12} ->[$k_2$] S_2},
\end{align}
with $S_1$ microscopic and $S_{11}$, $S_{12}$, and $S_2$ mesoscopic. When $S_1$ dissociates, $S_{11}$ and $S_{12}$ are placed at contact and they might therefore rebind quickly to form $S_2$. If $\Delta t$ is large, then it is likely that this fast interaction will be captured on the microscopic scale during that time step, and the accuracy will consequently be high. If $\Delta t$ is small on the other hand, then $S_{11}$ and $S_{12}$ will become mesoscopic quickly after $S_1$ dissociates, and information about the spatial correlation of $S_{11}$ and $S_{12}$ will be lost.
Below we propose a way to improve the algorithm to address this problem.
\section{A Convergent Hybrid Method}
\label{sec:method}
Here we propose a hybrid method which builds on the algorithm \cite{hybrid1} outlined in Sect. \ref{sec:hybridbackground} but improves it in two critical ways: First, we propose a new scheme to make the simulation convergent as the splitting time step $\Delta t \to 0$, and second, we use the theory in Sect. \ref{sec:theory} below to enable automatic system partitioning.
\subsection{Algorithm}
\label{sec:hybrid}
To make the method converge monotonically as its time step decreases, we here generalize the splitting over species to allow for \emph{dynamic splitting}, in which a \emph{time-dependent} function maps molecules to either scale. Let $t_j$ denote the time elapsed since the molecule with index $j$ was created, and let $F(S_j,t_j)$ denote the function mapping a molecule of species $S$ of age $t_j$ to either the mesoscopic subset or the microscopic subset. Then, for the system \eqref{system-main} we let
\begin{align}
F(S_1,t) &= \text{microscale}, \quad \text{for all } t\\
F(S_{11},t) &= F(S_{12},t) = \begin{cases}
\text{microscale}, \quad t\leq t_m,\\
\text{mesoscale}, \quad t> t_m,
\end{cases}\label{eq:tm}\\
F(S_2,t) &= \text{mesoscale}, \quad \text{for all } t
\end{align}
where $t_m$ is chosen sufficiently large (see Sect. \ref{selecttm} for how to choose $t_m$), and where $t$ is the time since the molecule was created.
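The mapping $F$ above can be written down directly (a sketch; the species labels are those of system \eqref{system-main}):

```python
def F(species, age, t_m):
    """Dynamic splitting for S1 -> S11 + S12 -> S2: newly created
    dissociation products stay microscopic until they are t_m old."""
    if species == "S1":
        return "microscale"
    if species in ("S11", "S12"):
        return "microscale" if age <= t_m else "mesoscale"
    return "mesoscale"  # S2
```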
The algorithm in \cite{hybrid1} now becomes a special case ($t_m = 0$) of the algorithm proposed here:
\begin{algorithm}[H]
\caption{\label{newhybrid}\, Hybrid method.}
\begin{enumerate}
\item Initialize the system. Set the time $t=0$. Let $T$ be the length of the simulation.
\item Assign molecules to the mesoscopic and microscopic subsets according to $F(S,t)$.
\item Simulate the mesoscopic molecules for $\Delta t$ seconds. Mesoscopic molecules cannot interact with microscopic molecules during the time step. Any molecules produced will, for the remainder of the time step, be simulated on the mesoscopic scale.
\item Simulate the microscopic molecules for $\Delta t$ seconds, while freezing the mesoscopic molecules. Microscopic molecules can react mesoscopically with mesoscopic molecules. Any molecules produced will, for the remainder of the time step, be simulated on the microscopic scale.
\item Add $\Delta t$ to $t$.
\item Repeat 2-5 until $t=T$.
\end{enumerate}
\end{algorithm}
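The outer loop of Alg. \ref{newhybrid} amounts to operator splitting between the two solvers. A minimal sketch (the solver callbacks are hypothetical placeholders for a mesoscopic and a microscopic step):

```python
def hybrid_simulate(state, meso_step, micro_step, dt, T):
    """Alternate mesoscopic and microscopic sub-steps of length dt up to time T.

    meso_step / micro_step advance their subset of `state` while the other
    subset is held fixed; reassignment via F(S, t) is assumed to happen
    inside the callbacks when molecules are created or age past t_m.
    """
    t = 0.0
    while t < T:
        state = meso_step(state, dt)   # step 3 of Alg. 1
        state = micro_step(state, dt)  # step 4 of Alg. 1
        t += dt                        # step 5 of Alg. 1
    return state
```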
\subsection{How to split a system}
\label{sec:split_system}
For a given system we will need to determine a suitable splitting in order to achieve high accuracy as well as efficient simulations. Again, consider the system in Eq. \eqref{system-main}. Taking symmetry into account, and by observing that for this particular system the species $S_2$ can be safely simulated on the mesoscopic scale as it only diffuses, we can split the system in five different ways:
\begin{align}
{\cal{X}}_1: \,\, S_1 \text{ micro};\, S_{11}, S_{12}, S_2\, \text{ meso} \\
{\cal{X}}_2: \,\, S_1, S_{11} \text{ micro};\, S_{12}, S_2\, \text{ meso} \\
{\cal{X}}_3: \,\, S_1, S_{11}, S_{12} \text{ micro};\, S_2\, \text{ meso} \\
{\cal{X}}_4: \,\, S_{11}, S_{12} \text{ micro};\, S_{1}, S_2\, \text{ meso} \\
{\cal{X}}_5: \,\, S_{11} \text{ micro};\, S_{1}, S_{12}, S_2\, \text{ meso}
\end{align}
We will now consider the accuracy and convergence of each of the splittings ${\cal{X}}_1-{\cal{X}}_5$.
\underline{${\cal{X}}_1$}:
In Alg. \ref{newhybrid} the $S_{11}$ and $S_{12}$ molecules will be simulated on the microscale for $t_m$ seconds, and therefore, if $t_m$ is chosen large enough such that the molecules either rebind or can be considered well-mixed inside their respective voxels at the end of $t_m$, the system will be accurately simulated. Importantly, this is also true for $\Delta t\to 0$.
\underline{${\cal{X}}_2$}: The accuracy is the same as for the splitting in ${\cal{X}}_1$, since $S_{12}$ is mesoscopic. Association events between microscale and mesoscale molecules have the same spatial resolution as a pure mesoscopic association event.
\underline{${\cal{X}}_3$}: All molecules of interest are simulated on the microscopic scale; the accuracy is therefore the same as for a pure microscale simulation, but with no efficiency gained.
\underline{${\cal{X}}_4$}: The $S_1$ molecule dissociates on the mesoscopic scale, so all spatial correlation is lost (up to the size of the voxel) upon dissociation. Even though we proceed to simulate the $S_{11}$ and $S_{12}$ molecules on the microscopic scale, we still get the accuracy of a mesoscopic simulation.
\underline{${\cal{X}}_5$}: The argument from ${\cal{X}}_4$ holds here as well. The accuracy will be the same as for a mesoscopic simulation.
In conclusion, for this model problem the \emph{only} viable splitting of the system \eqref{system-main} (apart from the trivial pure microscopic simulation) is ${\cal{X}}_1$, assuming that we want to simulate as few species as possible on the microscopic scale.
Note that we may, in some cases, be able to simulate the system described by Eq. \eqref{system-main} accurately with the method in \cite{hybrid1}. However, with that algorithm, we cannot take the splitting time step arbitrarily small. The reason for this is described in detail in \cite{hybrid1}. While this is acceptable for some systems, it will lead to inaccuracies for many others.
As an example, consider the following system:
\begin{align}
\ce{S_1 ->[$k_1^1$] S_{11} + S_{12} ->[$k_2^1$] S_2}\label{reac1}\\
\ce{S_2 ->[$k_1^2$] S_{21} + S_{22} ->[$k_2^2$] S_3}\label{reac2},
\end{align}
where $k_2^1$ and $k_2^2$ are large, so that both association reactions are diffusion limited.
First consider the case of $t_m^1 = 0$ and $t_m^2 = 0$. The method now reduces to the method in \cite{hybrid1}. We know that $\Delta t$ has to be large enough. $S_1$ dissociates on the microscale, so both $S_{11}$ and $S_{12}$ will be microscale until the end of the time step. However, if they are created near the end of the time step, they are likely to survive until the end of the time step, and then turn mesoscopic. If they react on the mesoscale, then the product $S_2$ will be mesoscopic until the end of the time step. If $1/k_1^2\ll\Delta t$ the $S_2$ molecule is likely to dissociate before the end of the time step, in which case $S_{21}$ and $S_{22}$ are initially mesoscopic. Information about the spatial correlation between $S_{21}$ and $S_{22}$ is lost, up to the size of the voxel, and the rest of the simulation may therefore be inaccurate.
With Alg. \ref{newhybrid}, we can guarantee high accuracy by fixing $t_m^1$ and $t_m^2$ large enough and choosing $\Delta t$ small enough. We then ensure that $S_{11}$ and $S_{12}$ exist on the microscale long enough to either react quickly or become well-mixed inside their respective voxels, while also ensuring that $S_2$ is mesoscopic only on a time scale much shorter than $1/k_1^2$ (by selecting $\Delta t \ll 1/k_1^2$).
\subsection{Criteria for selecting modeling scale}
\label{sec:theory}
Based on the work outlined in Sect. \ref{mesoparams}, it is clear that choosing a mesh size $h=h^*$ in general leads to the most accurate mesoscopic simulations. In fact, it is possible to push $h^*$ almost to the size of the molecules if the model is extended to allow for reactions between molecules in adjacent voxels \cite{HP1,FangeSRDME}. The problem is that $h^*$ is small and this makes the mesoscopic simulations expensive, sometimes significantly more expensive than microscopic simulations \cite{HP1}. Another problem is that $h^*$ is a function of the reaction radius, and thus different reactions may require different mesh resolutions to be resolved. This can make it impossible to simulate a system accurately with the RDME \cite{HP1}. We therefore want to perform mesoscopic simulations for the majority of the system with $h\gg h^*$, and handle reactions that require a very fine mesh with a microscopic solver.
For any given $h \ge h^*$, the RDME solver will match the mean binding time of the microscopic model if the mesoscale propensity functions from \cite{HHP2} are used, but if $h>h^*$ it will not perfectly match the fine-grained reaction dynamics. Here, we use the relative error in $\tau_{\rm{react}}^{\rm{meso}}$ to estimate this error. We let
\begin{align}
\label{eq:W}
W(h) &= \frac{\left| \tau_{\rm{react}}^{\rm{meso}}-\tau_{\rm{react}}^{\rm{micro}} \right|}{\tau_{\rm{react}}^{\rm{micro}}}\\
&= \frac{\frac{N}{\krmi^{\rm meso}}-\frac{V}{k_a}}{\frac{V}{k_a}}\\
&= \frac{k_a}{h^d\krmi^{\rm meso}}-1,
\end{align}
where we have used that $\tau_{\rm{react}}^{\rm{meso}}>\tau_{\rm{react}}^{\rm{micro}}$ for $h>h^*$. Now using \eqref{rate-eqs} in place of $\krmi^{\rm meso}$, we obtain
\begin{align}
W &= \frac{k_a}{h^d\frac{k_a}{h^d}\left( 1+\frac{k_a}{D}G(h,\sigma) \right)^{-1}}-1\\
\label{W-expression}
&= \frac{k_a}{D}G(h,\sigma).
\end{align}
We assume that for $W(h)<\epsilon$, for some small enough $\epsilon$, a reaction is sufficiently resolved on the mesoscopic scale. We can hence use $W(h)$ to decide which species need to be handled on the microscopic scale in order to resolve the reaction dynamics to high enough accuracy. By \eqref{rate-eqs} and \eqref{W-expression}, the condition $W<\epsilon$ holds if and only if
\begin{align}
\frac{k_a}{h^3}(1+\epsilon)^{-1} < \krmi^{\rm meso} \leq \frac{k_a}{h^3}.
\end{align}
In other words, a reaction is well resolved mesoscopically when the mesoscopic reaction rate is sufficiently close to the microscopic reaction rate (scaled by the volume of the voxel).
In Sect. \ref{sec:example2} we suggest a reasonable default value for $\epsilon$ based on numerical experiments.
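The resulting partitioning rule is a one-line test per bimolecular reaction channel; a sketch in 3D (the default tolerance below is merely illustrative, not the value recommended later):

```python
import math

C3 = 1.5164

def W(k_a, D, sigma, h):
    """Relative error in the mesoscopic rebinding time, Eq. (W-expression), 3D."""
    G = 1.0 / (4.0 * math.pi * sigma) - C3 / (6.0 * h)
    return (k_a / D) * G

def needs_microscale(k_a, D, sigma, h, eps=0.05):
    """Flag a reaction for microscopic treatment when W(h) exceeds eps."""
    return W(k_a, D, sigma, h) >= eps
```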
\subsection{How to select $t_m$}
\label{selecttm}
We need to select $t_m$ in Eq. \eqref{eq:tm} so that the hybrid method accurately simulates the system in Eq. \eqref{system-main}. In particular we want the relative error
\begin{align}
\label{hybriderror}
E_{hybrid} = \frac{|\tau_{\rm{react}}^{\rm{hybrid}}-\tau_{\rm{react}}^{\rm{micro}} |}{\tau_{\rm{react}}^{\rm{micro}}}
\end{align}
to be small for $t_m$ large enough.
\noindent
Following a dissociation of $S_1$, the average time until the molecules react is given by
\begin{align}
\label{hr1}
\tau_{\rm{react}}^{\rm{hybrid}} = \tau_{\rm{react}}^{\rm{micro}}|_{t\leq t_m}+\tau_{\rm{react}}^{\rm{meso}}|_{t>t_m},
\end{align}
where $\tau_{\rm{react}}^{\rm{micro}}|_{t\leq t_m}$ is the average time until the molecules react on the microscopic scale, given that they react before $t_m$, and $\tau_{\rm{react}}^{\rm{meso}}|_{t>t_m}$ is the average time until the molecules react on the mesoscopic scale, given that they react after $t_m$.
Let $S(t)$ denote the probability that the molecules do not react before time $t$. Then
\begin{align}
\label{hr2}
\tau_{\rm{react}}^{\rm{meso}}|_{t>t_m} = S(t_m)(\tau_{\rm{diff}}^{\rm{meso}}|_{t>t_m}+\tau_{\rm{react}}^{\rm{meso}}),
\end{align}
where $\tau_{\rm{diff}}^{\rm{meso}}|_{t>t_m}$ is the average time that it takes for the molecule to diffuse back to the origin voxel, given that the molecules did not react before time $t_m$. By a similar argument, we can write
\begin{align}
\tau_{\rm{react}}^{\rm{micro}} &= \tau_{\rm{react}}^{\rm{micro}}|_{t\leq t_m} + \tau_{\rm{react}}^{\rm{micro}}|_{t>t_m}\label{mr1} \\
&= \tau_{\rm{react}}^{\rm{micro}}|_{t\leq t_m}+S(t_m)(\tau_{\rm{diff}}^{\rm{micro}}|_{t>t_m}+\tau_{\rm{react}}^{\rm{micro}}).\label{mr2}
\end{align}
Now, by \eqref{hr1}, \eqref{hr2}, \eqref{mr1}, and \eqref{mr2}
\begin{align}
\label{eheq}
E_{hybrid} &= \frac{\tau_{\rm{react}}^{\rm{hybrid}}-\tau_{\rm{react}}^{\rm{micro}}}{\tau_{\rm{react}}^{\rm{micro}}} \\
&= S(t_m)\frac{\tau_{\rm{diff}}^{\rm{meso}}|_{t>t_m}-\tau_{\rm{diff}}^{\rm{micro}}|_{t>t_m}+\tau_{\rm{react}}^{\rm{meso}}-\tau_{\rm{react}}^{\rm{micro}}}{\tau_{\rm{react}}^{\rm{micro}}} \\
&= S(t_m)\left(Q + \frac{k_a}{D}G(h,\sigma) \right)\\
&\le S(t_m)(Q+\epsilon),
\end{align}
with
\begin{align}
Q = \frac{\tau_{\rm{diff}}^{\rm{meso}}|_{t>t_m}-\tau_{\rm{diff}}^{\rm{micro}}|_{t>t_m}}{\tau_{\rm{react}}^{\rm{micro}}}.
\end{align}
Since we are considering a bounded domain, $\tau_{\rm{diff}}^{\rm{meso}}|_{t>t_m}\to \tau_{\rm{diff}}^{\rm{meso}}$ and $\tau_{\rm{diff}}^{\rm{micro}}|_{t>t_m}\to\tau_{\rm{diff}}^{\rm{micro}}$, and it is now easy to see that
\begin{align}
S(t_m)\to 0,\,\, Q \to 0, \,\, \text{as } t_m\to\infty.
\end{align}
The method thus converges as $t_m\to\infty$, which is also easy to see intuitively, as the simulation in practice becomes purely microscopic for $t_m$ large enough.
We have seen numerically that the mesoscopic method incurs an error in average rebind time that is on the order of the time that it takes for a molecule to become well-mixed inside a voxel \cite{HHP2}. We therefore propose to select $t_m=K^2V_{\rm{vox}}^{2/3}/(6D)$, that is, a time proportional to the time that it takes for a molecule to diffuse a distance proportional to the length scale of a voxel. Here $K$ is a constant, and we have found $K=6$ to be sufficiently large.
We note that $S(t)$ is not known in general since we are considering a bounded domain. However, assuming that $t_m$ will be small enough, and the molecules are some distance from the boundary, we can approximate $S(t)$ by the survival probability for an unbounded domain. This quantity is known analytically, and in 3D is given by \cite{kimshin}:
\begin{align}
\label{survprob}
S(t) = 1-\frac{k_a}{4\pi\sigma D+k_a}\left( 1-\exp(\alpha^2t)\,\mathrm{erfc}\left(\alpha\sqrt{t}\right)\right),
\end{align}
where
\begin{align}
\alpha = \left(1+\frac{k_a}{4\pi\sigma D}\right)\frac{\sqrt{D}}{\sigma}.
\end{align}
We thus know all terms of $E_{hybrid}$ except for $Q$.
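Both the threshold $t_m$ and the survival probability \eqref{survprob} are cheap to evaluate. A sketch (the guarded evaluation of $e^{\alpha^2 t}\mathrm{erfc}(\alpha\sqrt{t})$ avoids overflow for large arguments; function names are ours):

```python
import math

def t_m(V_vox, D, K=6.0):
    """Microscale residence time K^2 * V_vox^(2/3) / (6 D) for newly created molecules."""
    return K * K * V_vox ** (2.0 / 3.0) / (6.0 * D)

def survival_probability(t, k_a, D, sigma):
    """S(t): probability that a pair starting in contact has not reacted by t (3D, unbounded)."""
    alpha = (1.0 + k_a / (4.0 * math.pi * sigma * D)) * math.sqrt(D) / sigma
    x = alpha * math.sqrt(t)
    if x < 25.0:
        w = math.exp(x * x) * math.erfc(x)
    else:
        w = 1.0 / (x * math.sqrt(math.pi))  # leading term of the asymptotic expansion
    return 1.0 - k_a / (4.0 * math.pi * sigma * D + k_a) * (1.0 - w)
```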
By selecting $t_m$ large enough, we are ensuring that either $Q\approx -k_a/D G(h,\sigma)$, or that $S(t_m)\approx 0$, and hence $E_{hybrid} \approx 0 $. We show numerically in Fig. \ref{rebindfig} that our choice of $t_m$ is sufficiently large to accurately reproduce the microscopic distribution of rebind times for two different diffusion-limited reaction rates.
\begin{figure}
\subfigure{\includegraphics[width=0.49\linewidth]{rebind1.pdf}}
\subfigure{\includegraphics[width=0.49\linewidth]{rebind1_ndl.pdf}}
\caption{\label{rebindfig} We consider two molecules, one fixed and one diffusing at diffusion rate $1.0$, in a cube of volume $1.0$. The molecules react irreversibly with reaction rate $k_a$. Let $\tau$ denote the time until the molecules react, given that they start in contact (or in the same voxel on the mesoscopic scale), with a total reaction radius of $0.005$. Above we plot the distribution of the logarithm of the rebind time $\tau$. In (a), $k_a=1.0$, and in (b), $k_a=0.1$. We can see that the hybrid method reproduces the microscopic distribution closely (note that green and red overlap in the figures above), while the mesoscopic RDME does not reproduce the same distribution for diffusion-limited reactions. In particular, the RDME is unable to accurately resolve reaction events occurring on a spatial scale of one voxel or less.}
\end{figure}
\subsection{A more complex example}
To further illuminate how to practically implement the automatic splitting of a system, we will consider some more complex cases. The simple sequence
\begin{align}
\label{simple-system}
\ce{S_1 -> S_{11} + S_{12} -> S_2},
\end{align}
serves as the base case. A single molecule produces two new molecules, spatially correlated, that can then react to form a new molecule. However, we can consider more complex variants of this simple case.
For example, consider the following system:
\begin{align}
\label{splitting-example-1}
\ce{$S_1$ -> $S_{11}$ + $S_{12}$}\\
\ce{$S_{11}$ -> $S_{11}^\ast$}\\
\ce{$S_{11}^\ast$ + $S_{12}$ -> $S_2$}.
\end{align}
In this case $S_1$ produces two molecules, $S_{11}$ and $S_{12}$, that do not directly react. However, if the reaction $\ce{S_{11} -> S_{11}^\ast}$ is fast enough, the above system behaves similarly to the simple system in \eqref{simple-system}. We might therefore need to simulate $S_1$ on the microscopic scale.
We employ the following recursive strategy to identify molecules that should be simulated on the microscopic scale:
\begin{enumerate}
\item Identify all bimolecular reactions for which $W>\epsilon$, for some sufficiently small $\epsilon$, where $W$ is the relative error in the mesoscopic mean binding time, as defined in \eqref{eq:W}.
\item If the two reactants are produced by a dissociating molecule, the dissociating molecule is simulated on the microscopic scale.
\item If they are not, find all reactions producing either of the two molecules, or both.
\item For each sequence of reactions that produces the two reactants, determine whether it starts with a molecule dissociating. If so, that molecule is a candidate to be simulated on the microscopic scale.
\item If not, repeat the process, until we find no new sequences.
\end{enumerate}
For the system \eqref{splitting-example-1} we would therefore first identify that $S_2$ is produced by the molecules $S_{11}^\ast$ and $S_{12}$. These molecules are not produced through any dissociation reaction. We therefore proceed to look for reactions producing one of the two molecules, or both. We find that $S_{11}$ produces $S_{11}^\ast$. We now look for dissociating molecules producing $S_{12}$ and $S_{11}$. We find that $S_1$ produces $S_{12}$ and $S_{11}$, and therefore $S_1$ is a candidate for the microscopic subset.
Another example is the system
\begin{align}
\label{splitting-example2}
\ce{$S_1$ -> $P$ + $S_{12}$}\\
\ce{$P$ -> $D$ + $S_{11}$}\\
\ce{$S_{12}$ -> $S_{12}^\ast$}\\
\ce{$S_{11}$ + $S_{12}^\ast$ -> $S_2$ }
\end{align}
We first find that the only bimolecular reaction is $\ce{$S_{11}$ + $S_{12}^\ast$ -> $S_{2}$}$. Assume that we have $W>\epsilon$. We find no dissociation reaction producing both $S_{11}$ and $S_{12}^\ast$. We now look for reactions producing either of the molecules, or both. No reactions produce both, but we find that $S_{12}$ produces $S_{12}^\ast$, and that $P$ produces $S_{11}$ and $D$. We proceed to look for any reactions producing either $S_{12}$ or $P$, or both. We now find only one such reaction, $\ce{$S_1$ -> $P$ + $S_{12}$}$. Since this is a dissociation reaction, $S_1$ will be simulated on the microscopic scale.
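The recursive strategy can be sketched as a backward reachability computation over the reaction network. The code below is our own Python formulation (species as strings, each reaction a `(reactants, products)` pair of tuples), not the paper's implementation: a dissociating species is marked as a candidate when its products feed, directly or through chains of reactions, both reactants of the poorly resolved bimolecular reaction.

```python
def backward_closure(species, reactions):
    """All species from which `species` can be produced through some
    chain of reactions, including `species` itself."""
    closure, frontier = set(), {species}
    while frontier:
        s = frontier.pop()
        if s in closure:
            continue
        closure.add(s)
        for reactants, products in reactions:
            if s in products:
                frontier.update(reactants)
    return closure

def microscopic_candidates(reactions, pair):
    """Dissociating species whose products feed both reactants of a
    poorly resolved bimolecular reaction (steps 1-5 above)."""
    anc_a = backward_closure(pair[0], reactions)
    anc_b = backward_closure(pair[1], reactions)
    candidates = set()
    for reactants, products in reactions:
        # A dissociation has one reactant and at least two products.
        if len(reactants) == 1 and len(products) >= 2:
            prods = set(products)
            if prods & anc_a and prods & anc_b:
                candidates.add(reactants[0])
    return candidates
```

Applied to the two example systems above, this sketch returns $S_1$ in both cases, matching the walk-throughs in the text.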
\section{Implementation}
\label{sec:implementation}
We have implemented the hybrid method as an extension to the high-level PyURDME \cite{molns} Python API. This allows for specification of microscopic/hybrid systems and execution via a simple, object-oriented Python modeling interface. In the following sections, we describe the different components of the solver and discuss computational complexity and performance aspects of hybrid simulation.
\subsection{Mesoscopic solver: Next Particle Method}
In the NSM, reaction and diffusion events in each voxel are grouped, and the heap is organized so that each leaf corresponds to a voxel. In each iteration, the next reaction or diffusion event is executed and the next event time is updated along with the heap for each affected voxel. For a fine mesh, the vast majority of events are diffusion events and the simulation cost is dominated by the time to execute diffusion events. Ignoring reactions, the simulation cost, $C_{NSM}$, on a uniform grid with $N$ voxels and $M$ molecules of a single diffusing species can be written as
\begin{align}
C_{NSM}(N,M) = C_1 N^{2/3} M\log{N},
\end{align}
where $C_1$ is an implementation- and architecture-dependent constant. Here we instead propose a particle-centric mesoscopic algorithm, the Next-Particle Method (NPM), that tracks individual particles on the grid. We simulate a mesoscopic system as follows:
\begin{enumerate}
\item Particles are stored in a list, with information about species type and which voxel they currently occupy.
\item For each particle we generate the time to and the destination voxel of its next diffusion event. Add each diffusion event to the heap. For $M$ particles, the size of the heap will be $M$.
\item For two reactive particles occupying the same voxel, we generate the time until the next tentative event, and add that event to the heap.
\item Execute the next event.
\item Update all dependent events. If a molecule diffused, add its next diffusion- and reaction events to the heap. If molecules reacted, add new diffusion and reaction events to the heap for all molecules that were affected.
\item Repeat 4-5 until the end of the simulation.
\end{enumerate}
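A minimal, diffusion-only sketch of this event loop is given below (plain Python with a binary heap; reactions and the full dependency updates of step 5 are omitted, and the 1D periodic grid is an illustrative assumption, not the paper's implementation):

```python
import heapq
import random

def npm_diffusion(num_voxels, voxels, d_jump, t_end, rng):
    """Next-Particle Method, diffusion only: one pending event per
    particle, kept in a heap keyed by event time. `voxels` holds the
    initial voxel index of each particle; `d_jump` is the total jump
    propensity per particle on a 1D periodic grid."""
    pos = list(voxels)
    heap = [(rng.expovariate(d_jump), i) for i in range(len(pos))]
    heapq.heapify(heap)
    while heap and heap[0][0] < t_end:
        t_now, i = heapq.heappop(heap)              # next event (step 4)
        step = 1 if rng.random() < 0.5 else -1
        pos[i] = (pos[i] + step) % num_voxels       # execute diffusion
        # Reschedule this particle's next diffusion event (step 5).
        heapq.heappush(heap, (t_now + rng.expovariate(d_jump), i))
    return pos
```

Note that the heap holds one entry per particle, so its size is the number of particles rather than the number of voxels.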
The main advantage of the NPM in the context of the hybrid method is that it minimizes the overhead of switching between the mesoscopic and the microscopic solvers, since the two solvers can share one data structure for the particle list. The microscopic solver needs to know the position of individual molecules, so maintaining one particle list simplifies the mapping between discrete positions on a grid and continuous positions in space.
The cost for the NPM for the example above can be written
\begin{align}
C_{NPM}(N,M) = C_2 N^{2/3} M\log{M},
\end{align}
\noindent
where $C_2$ is a constant. This highlights another potential advantage of the NPM in the context of hybrid simulation: it can be more efficient than the NSM for highly resolved meshes, where the number of voxels is larger than the number of particles. This will often be the case for highly resolved geometries.
Note however, that the hybrid framework proposed here does not depend on the particular implementation of the mesoscopic method, and that it would be possible to alternate algorithms depending on the particular values of $N$ and $M$.
\subsection{Microscopic solver: GFRD}
\label{micro-implementation}
On the microscopic scale, the system is simulated with the Smoluchowski model, as described in Sect. \ref{micro-background}. For a system of more than one or two molecules, we have an intractable many-body problem. To deal with this, we employ a strategy conceptually similar to the GFRD algorithm \cite{ZoWo5a}.
The first step of the algorithm is to divide the system into subsets of one or two molecules, and to select a time step $\Delta t$, such that to high accuracy we can update the subsets independently during $\Delta t$. Molecules that are each other's nearest neighbors are updated in pairs, while all other molecules are updated as single molecules. The time step $\Delta t$ is chosen as large as possible, with the constraint that the probability of interactions between the separate subsets is small. We then simulate each subset for $\Delta t$ seconds.
For each subset we look for the next reaction: if two molecules react bimolecularly, we can sample the time until they react from $p_{\mathbf{r}}(\ast,t|\mathbf{r}_0,t_0)$ (defined in \eqref{micro_time}), if either or both molecules can react unimolecularly, we can sample the next reaction time from exponential distributions, and finally we will look for possible interactions with the boundary. The reaction that occurs first is executed. We then repeat the procedure until the subsystem has been advanced to time $t_0+\Delta t$.
Instead of solving Eq. \eqref{smolu-eq1} with boundary conditions given by Eqs. \eqref{smolu-eq1-bicond1}, \eqref{smolu-eq1-bicond2}, and \eqref{smolu-eq1-bicond3} exactly, we solve it using the operator split approach described in \cite{SHeLo11}. Furthermore, we only sample from $p_{\mathbf{r}}(\ast,t|\mathbf{r}_0,t_0)$ and $p_{\mathbf{r}}(\mathbf{r},t|\mathbf{r}_0,t_0)$ if the distance between the molecules is small and the probability of a reaction is fairly large. If the probability of reaction during the time step is small, it can be more efficient to simulate the pair using brute-force Brownian Dynamics until the molecules are close. The cut-off is typically at a distance of around a few reaction radii.
\subsection{Hybrid solver}
\label{hybrid-complexity}
We can now couple the mesoscopic NPM with the microscopic solver in a simple loop. Since the NPM keeps track of individual molecules, it is straightforward to map each molecule to either scale according to the splitting function $F$. Both solvers can keep track of how long a molecule has existed, thus making it easy to determine whether a microscopic molecule can be mapped to the mesoscopic scale.
When a molecule switches from the mesoscopic scale to the microscopic scale, we need to know its position in continuous space. We sample its position from a uniform distribution on its voxel. Similarly, when a molecule switches from the microscopic scale to the mesoscopic scale we need to know which voxel the molecule occupies. This is straightforward, since we track which voxel a molecule occupies to accurately simulate its interaction with the boundary. This process is described in detail in \cite{hybrid1}. The overhead from this switching is inversely proportional to the splitting timestep, $C_{coupling} = C_3 (\Delta t_s)^{-1}$, where we have assumed that the number of particles that switch in each timestep is small, compared to the total number of particles on both scales.
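Under the Cartesian-voxel assumption the two mappings are simple; a sketch with our own helper names, for cubic voxels of width $h$ (floating-point edge cases exactly at voxel boundaries are ignored here):

```python
import random

def meso_to_micro(voxel_index, h, rng):
    """Mesoscopic -> microscopic: draw a continuous position uniformly
    within the cubic voxel with integer index tuple `voxel_index`."""
    return tuple((k + rng.random()) * h for k in voxel_index)

def micro_to_meso(position, h):
    """Microscopic -> mesoscopic: recover the voxel that a continuous
    position lies in."""
    return tuple(int(x // h) for x in position)
```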
With $M_1$ the average number of mesoscopic particles and $M_2$ the average number of microscopic particles during the course of a simulation, the complexity of the overall hybrid method can be described by:
\begin{align}
C_{hybrid}(N,M_1,M_2) = C_2 N^{2/3} M_1\log{M_1} + C_{gfrd}(M_2) + C_3 (\Delta t_s)^{-1} .
\end{align}
\noindent
Since the number of particles handled on the different scales depends on the mesh resolution, i.e. $M_1$ and $M_2$ are functions of $N$, the cost of the solver is complicated to estimate a priori, and it implies the existence of an optimal choice of $N$ for performance. This will be illustrated in Sect.~\ref{seceff}.
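This interplay can be illustrated with a toy evaluation of the cost model. In the sketch below, all constants and the mapping from mesh size to the number of microscopic particles are invented for illustration (not measured values); it merely shows how an interior optimum in $N$ can arise:

```python
import math

def hybrid_cost(n_voxels, m_meso, m_micro,
                c2=1.0, c_micro=1.0e6, c3=1.0, dt_split=0.01):
    # Mesoscopic term ~ N^(2/3) * M1 * log M1, a stand-in linear
    # microscopic (GFRD) term, and the coupling overhead C3 / dt_split.
    meso = c2 * n_voxels ** (2.0 / 3.0) * m_meso * math.log(max(m_meso, 2))
    return meso + c_micro * m_micro + c3 / dt_split

# Finer meshes push species from the microscopic to the mesoscopic scale.
meshes = [1000, 8000, 27000, 64000, 125000]
micro_load = {1000: 3, 8000: 2, 27000: 1, 64000: 0, 125000: 0}
costs = [hybrid_cost(n, 200, micro_load[n]) for n in meshes]
best = meshes[costs.index(min(costs))]
```

With these invented numbers the cheapest mesh is an intermediate one, mirroring the behavior observed numerically in Sect.~\ref{seceff}.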
Note that each solver could be optimized depending on the system. If we were mainly interested in simulating systems in a cube, the microscopic solver could be significantly optimized by simplifying the process of keeping track of the boundary. For a system with many more particles than voxels, we could choose to simulate the mesoscopic part of the system with the NSM rather than with the NPM.
In Sect. \ref{seceff} we show how the contribution to the total execution time of each solver depends on the mesh size and the system. The total cost of a simulation depends non-linearly on the mesh size: a coarse mesh gives fast mesoscopic simulations but expensive microscopic simulations, while on a fine mesh the mesoscopic simulations are more expensive but the microscopic simulations are faster (since fewer molecules are simulated on the microscopic scale).
\section{Numerical examples}
\label{sec:results}
While the theory above is derived under the assumption of a Cartesian mesh, we have shown that in most cases it can be applied also to the case of unstructured meshes \cite{rdmeunstruc}, by substituting $V_{vox}^{1/3}$ for the voxel width $h$, where $V_{vox}$ is the volume of a voxel in the mesh. In particular, we show in Sect. \ref{sec:example2} below that we can accurately split and simulate a system on an unstructured mesh.
Furthermore, we demonstrate that we can accurately simulate a problem previously shown to be intractable with the standard RDME model \cite{rdmeunstruc}, and finally we show the existence of an optimal mesh size, from an efficiency perspective, in between the coarsest and finest possible mesh sizes.
\subsection{Splitting Species: Accuracy and Efficiency}
\label{sec:example2}
In this example we demonstrate that for a given system, we can split the species into a microscopic subset and a mesoscopic subset using $W(h)$ defined in Eq. \eqref{eq:W}. The resulting splitting of species should yield accurate and efficient simulations on a given unstructured mesh.
First we want to determine a suitable $\epsilon$ such that $W<\epsilon$ indicates that the reaction is sufficiently resolved on the mesoscopic scale. We again consider the simple system
\begin{align}
\ce{S_1 ->[$k_1$] S_{11} + S_{12} ->[$k_2$] S_2}.
\label{example-simple-system}
\end{align}
In Sect. \ref{sec:theory} we found that
\begin{align}
W = \frac{k_2}{D}G\left(V_{\rm{vox}}^{\frac{1}{3}},\sigma\right)
\end{align}
is a measure of how well the rebinding time of a pair of molecules is resolved (where $k_2^{\rm{meso}}$ is given by \eqref{rate-eqs}).
While we have no theory relating $W$ directly to the error in the mesoscopic simulation of the system, we can use it as an indirect measure of the error. Via numerical simulations we can find an $\epsilon$ such that $W<\epsilon$ implies that the simulations will be accurate.
For simplicity we consider the system \eqref{example-simple-system} in a cube. We let the microscopic parameters be given by
\begin{align}
\begin{cases}
\sigma_1 = \sigma_{11} = \sigma_{12} = \sigma_2 = 0.0025\\
D_1 = D_{11} = D_{12} = D_2 = 1.0\\
k_1 = 10.0\\
V = (50h^*\sigma)^3,
\end{cases}
\end{align}
and we sample $k_2$ from $[0.001,1.0]$.
By design we expect the best agreement between mesoscale and microscale simulations for a mesh of $50^3$ voxels. Note that these parameters are chosen arbitrarily, but we will proceed to show that the results can be applied successfully to a numerical example with different parameters.
First we compare pure mesoscopic and microscopic simulations. Let $\mathbf{y_{\rm{me}}} = (y_{\rm{me}}^1,\ldots,y_{\rm{me}}^L)$ be the average number of $S_2$ molecules, computed from $M_{\rm{me}}$ mesoscale trajectories sampled at the time points $t_1,\ldots,t_L$, and let $\mathbf{y_{\rm{mi}}} = (y_{\rm{mi}}^1,\ldots,y_{\rm{mi}}^L)$ be the average number of $S_2$ molecules computed as the average of $M_{\rm{mi}}$ microscale trajectories sampled at the time points $t_1,\ldots,t_L$. We consider the max-norm error $E$, defined as
\begin{align}
\label{Edef}
E = \max_{1\leq i \leq L}\left| y_{\rm{me}}^i-y_{\rm{mi}}^i \right|.
\end{align}
For small values of $k_2$ we expect the mesoscopic simulations to be accurate also for coarse meshes, while for large $k_2$, we expect the error to be large unless the spatial resolution is near the maximum resolution of $50^3$ voxels. In Fig. \ref{example2_fig1} we show how $W$ correlates with the error $E$ for different $k_2$, and that $\epsilon=0.025$, although arbitrary, is a reasonable choice yielding an error of the order of 1.
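Computing $E$ from sampled trajectory means is straightforward; a minimal sketch:

```python
def max_norm_error(y_meso, y_micro):
    """Max-norm error E of Eq. (Edef) between two equally long
    sequences of time-sampled trajectory averages."""
    return max(abs(a - b) for a, b in zip(y_meso, y_micro))
```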
\begin{figure}
\subfigure{\includegraphics[width=0.45\linewidth]{Whm.png}}
\subfigure{\includegraphics[width=0.45\linewidth]{Ehm.png}}
\caption{\label{example2_fig1}In (a) we plot $W$ as a function of $N^{\frac{1}{3}}$ and $k_a$. For points above the white solid line we have $W<0.025$. In (b) we plot the error $E$ as a function of $N^{\frac{1}{3}}$ and $k_a$. Again, for points above the white solid line, $W<0.025$. For small $k_a$, the error is small for all mesh sizes, while as $k_a$ increases, we need a large $N$ to keep the error small. We can see that for $W<0.025$, the error is roughly on the order of 1.}
\end{figure}
We now apply the choice of $\epsilon=0.025$ to an expanded system of three bimolecular reactions:
\begin{align}
\label{example2-full-system}
\ce{S_1 ->[$k^1_1$] S_{11} + S_{12} ->[$k^1_2$] S_2}\\
\ce{S_2 ->[$k^2_1$] S_{21} + S_{22} ->[$k^2_2$] S_3}\\
\ce{S_3 ->[$k^3_1$] S_{31} + S_{32} ->[$k^3_2$] S_4},
\end{align}
with parameters different from the simple system above. The system is simulated inside a sphere of radius $0.5$, discretized with an unstructured mesh consisting of 6395 voxels.
Depending on the values of $k^i_2$, $i=1,2,3$, we will simulate some combination of $S_1$, $S_2$, and $S_3$ on the microscopic scale. For $W_i>\epsilon$, $S_i$ is simulated on the microscopic scale. The minimum time that a molecule has to exist on the microscopic scale before it becomes mesoscopic is given by $t_m = \frac{C^2V_{\rm{vox}}^{2/3}}{6D_i}$, with $C=6$, cf. Sect. \ref{selecttm}.
We consider six different combinations of reaction rates, see Table \ref{ex1-params-tab}. In each case we will have a different combination of $S_1$, $S_2$, and $S_3$ on the microscopic scale. For $k_2^i\geq 0.1$ we have $W>\epsilon$, while for $k_2^i = 0.001$ we have $W<\epsilon$. Thus, for case 6 the hybrid method will simulate all molecules on the mesoscopic scale, and we therefore expect the mesoscopic simulation to agree well with the microscopic simulation.
\begin{table}
\begin{tabular}{ r | c | c | c | c | c | c |}
& Case 1 & Case 2 & Case 3 & Case 4 & Case 5 & Case 6\\ \cline{2-7}
$k_2^1$ & 0.1 & 0.001 & 0.1 & 0.1 & 0.001 & 0.001\\ \cline{2-7}
$k_2^2$ & 0.1 & 0.3 & 0.3 & 0.001 & 0.001 & 0.001\\ \cline{2-7}
$k_2^3$ & 0.1 & 0.001 & 0.001 & 0.2 & 0.2 & 0.001\\ \cline{2-7}
\end{tabular}
\caption{\label{ex1-params-tab}Association rates for the six different cases.}
\end{table}
In Fig.~\ref{example2_fig2} we show that the hybrid method agrees well with the microscopic simulations. In case 6, the RDME agrees well with the microscopic model, as expected. In addition, we show that the accuracy increases with a decreasing splitting time step. For a relatively large splitting time step of $0.1$, the method produces results with a fairly large error, but as we refine the splitting time step, the results approach those of a pure microscopic simulation. We have tabulated the errors of the hybrid method in Table \ref{tab1-error}, along with the errors of pure mesoscopic simulations. Even with a fairly large splitting time step, the error in the hybrid method is smaller. For case 6, in which the reactions are slow compared to diffusion, both the hybrid method and the RDME produce accurate results.
\begin{figure}
\subfigure{\includegraphics[width=0.49\linewidth]{example1_conv.pdf}}
\subfigure{\includegraphics[width=0.49\linewidth]{example1_avgtraj.pdf}}
\caption{\label{example2_fig2}Left (a): We plot the error $E$, as defined in Eq. \eqref{Edef} as a function of the splitting time step $\Delta t_{\rm{split}}$. For the smallest time step, $\Delta t_{\rm{split}}=0.001$, the error remains small for Case 6, but is larger for Cases 1 and 4, in which some or all of the reactions are diffusion limited. Right (b): The average number of $S_4$ molecules over time in Case 1. We see that the hybrid method with $\Delta t_{\rm{split}}=0.1$ underestimates the average number of $S_4$ molecules, but still produces better results than with a pure mesoscopic simulation. The hybrid method matches the microscopic results closely for $\Delta t_{\rm{split}}\leq 0.01$.}
\end{figure}
While one reason to use a hybrid method is to gain efficiency over a very fine-grained RDME simulation, another is that some systems cannot be simulated accurately with a standard RDME model. In \cite{rdmeunstruc} we considered the following system:
\begin{align}
\label{systemnonlocal}
\ce{S_1 ->[$k_d$] S_{11} + S_{12} ->[$k_r$] S_2}\\
\ce{S_2 ->[$k_d$] S_{21} + S_{22} ->[$k_r$] S_3},
\end{align}
where $k_d = 10.0$ and $k_r = 0.1$. If $\sigma_i$ is the reaction radius of species $S_i$, and $\sigma_{ij}$ the reaction radius of species $S_{ij}$, then $\sigma_1 = 10^{-3}$, $\sigma_{11} = 0.8\times 10^{-3}$, $\sigma_{12} = 0.8\times 10^{-3}$, $\sigma_{2} = 2.0\times 10^{-3}$, $\sigma_{21} = 1.8\times 10^{-3}$, $\sigma_{22} = 1.8\times 10^{-3}$, and $\sigma_{3} = 2.5\times 10^{-3}$. For simplicity, all molecules diffuse with diffusion rate $1.0$. The domain is a cube of volume $1.0$.
To resolve the first association we need a mesh size of around $h^*_1 = \frac{2}{3}C_3\pi(\sigma_{11}+\sigma_{12})\approx 5.0\times 10^{-3}$, and to resolve the second association we need a mesh size of around $h^*_2 = \frac{2}{3}C_3\pi(\sigma_{21}+\sigma_{22})\approx 1.14\times 10^{-2}$. We showed in \cite{rdmeunstruc} that we cannot resolve both reactions simultaneously with the standard local RDME; we could simulate the system by allowing reactions between neighboring voxels. However, these simulations become expensive as the mesh needs to be highly refined, and they cannot be trivially extended to unstructured meshes.
We show here that another viable approach is to simulate the system with a hybrid method. The system is simulated for 2 seconds, with 201 uniform time samples including $t=0$. In Fig. \ref{example1nonlocal} we plot the error $E$ as a function of the mesh size, where $E$ is defined as in \cite{rdmeunstruc}. Let ${\cal{S}}=\{S_1,S_{11},S_{12},S_{2},S_{21},S_{22},S_3\}$. Then
\begin{align}
\label{meanerror}
E(h) = \frac{1}{201}\sum_{i=1}^{201} \sum_{S\in{\cal{S}}}| S^{\ast}_{h,i}-S_i^{micro}|,
\end{align}
where $S_i^{micro}$ is the average population of species $S$ at time $t_i$, obtained with the microscopic algorithm, and where $S_{h,i}^{\ast}$ is the average population of species $S$ at time $t_i$ obtained with either the hybrid algorithm or with the NPM, simulated on a mesh with a voxel width of $h$.
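For reference, Eq. \eqref{meanerror} can be computed directly from the averaged trajectories; in the sketch below each trajectory is stored as a mapping from species name to its list of time-sampled averages (our own representation):

```python
def mean_error(traj_approx, traj_micro):
    """Time-averaged summed absolute population difference of Eq.
    (meanerror); traj_* map species -> list of averaged populations
    at the common sample times."""
    n_samples = len(next(iter(traj_micro.values())))
    total = 0.0
    for i in range(n_samples):
        total += sum(abs(traj_approx[s][i] - traj_micro[s][i])
                     for s in traj_micro)
    return total / n_samples
```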
\begin{figure}
\subfigure{\includegraphics[width=0.49\linewidth]{example1nonlocal.pdf}}
\subfigure{\includegraphics[width=0.49\linewidth]{example1nonlocal_traj.pdf}}
\caption{\label{example1nonlocal}Left (a): The error $E$, as defined by Eq. \eqref{meanerror}. The RDME does not match the microscopic dynamics for any mesh size. The hybrid method is able to capture the dynamics of the system by simulating the $S_1$ and $S_2$ species on the microscopic scale, and all other species on the mesoscopic scale. Right (b): Example trajectory of the average population of $S_3$ molecules. The hybrid method agrees well with the microscopic results, while the NPM on a mesh of $80^3$ voxels does not agree with the microscopic simulations.}
\end{figure}
\begin{table}
\begin{tabular}{ r | c | c | c | c | c | c |}
& Case 1 & Case 2 & Case 3 & Case 4 & Case 5 & Case 6\\ \cline{2-7}
RDME & 27.08 & 3.78 & 7.19 & 9.87 & 4.597 & 0.81\\ \cline{2-7}
Hybrid, $\Delta t_{\rm{split}}=0.1$ & 4.82 & 2.79 & 2.26 & 4.52 & 2.437 & 0.75\\ \cline{2-7}
Hybrid, $\Delta t_{\rm{split}}=0.01$ & 0.36 & 1.90 & 2.25 & 0.97 & 1.157 & 0.85 \\ \cline{2-7}
Hybrid, $\Delta t_{\rm{split}}=0.001$ & 0.38 & 1.22 & 1.15 & 0.77 & 0.937 & 1.09 \\ \cline{2-7}
\end{tabular}
\caption{\label{tab1-error} Max-norm error. We see that the hybrid method, even for a large splitting time step, produces results that are more accurate than pure mesoscopic simulations. In the case where all reactions are slow compared to diffusion, the RDME produces accurate results, as does the hybrid method.}
\end{table}
\subsection{Efficiency: Non-linear dependence on the mesh size}
\label{seceff}
As already discussed in Sect. \ref{hybrid-complexity}, the total execution time is the sum of the time spent on the mesoscopic scale, $T_{meso}$, the microscopic scale, $T_{micro}$, and the overhead incurred from the coupling of the scales. The time spent on the microscopic scale depends on how many of the species are microscopic, which in turn depends on the resolution of the mesh. On a fine mesh, we will simulate fewer molecules on the microscopic scale, and for a shorter time, but we pay the price of a more costly mesoscopic simulation.
In this numerical example we show that the total execution time $T$ is a non-linear function of $T_{micro}$ and $T_{meso}$, and that to optimize $T$ we need to balance $T_{micro}$ and $T_{meso}$ in a non-trivial way.
We consider the system
\begin{align*}
\label{meshsweep-system}
\ce{S_1 ->[$k^1_1$] S_{11} + S_{12} ->[$k^1_2$] S_2}\\
\ce{S_2 ->[$k^2_1$] S_{21} + S_{22} ->[$k^2_2$] S_3}\\
\ce{S_3 ->[$k^3_1$] S_{31} + S_{32} ->[$k^3_2$] S_4}
\end{align*}
where $k^i_1 = 20.0$, $i=1,2,3$, and $k^1_2=0.0016$, $k^2_2=0.00145$, and $k^3_2=0.0014$. We initialize the system with 200 $S_1$ molecules and 200 $S_2$ molecules, all with uniformly sampled positions in the domain. The domain is a sphere with radius 0.5, and we consider a sequence of meshes, ranging from coarse to fine. In Fig. \ref{meshsweep-fig} we show that there exists an optimum, with respect to total execution time, between the coarsest and the most resolved mesh. Note that this particular system can be accurately simulated at any mesh resolution with the hybrid method, and therefore the error remains small for all mesh sizes; the only things changing are the number of molecules simulated on either scale, and for how long the microscopic molecules remain microscopic.
\begin{figure}
\subfigure{\includegraphics[width=0.49\linewidth]{exectimearea.pdf}}
\subfigure{\includegraphics[width=0.49\linewidth]{vox_vs_num_particles.pdf}}
\caption{\label{meshsweep-fig}Left (a): The total execution time is a nonlinear function of the mesh size. For a coarse mesh, most of the simulation time will be spent on the microscopic scale. As the mesh is successively refined, the time spent on the mesoscopic scale starts to dominate. The shortest total simulation time is obtained for a mesh of around 30,000 voxels. Right (b): Number of particles on the microscopic scale as a function of the mesh resolution. Red dots indicate the sample points used to generate the plot in (a). As we can see, for the two left-most points, we simulate $S_1$, $S_2$ and $S_3$ on the microscopic scale. As we move to the right, we will simulate two, one, and finally, for the right-most point, no species on the microscopic scale. We can see in (a) that a pure mesoscopic simulation is slower than a simulation with one microscopic particle, but with a much coarser mesh.}
\end{figure}
In general there is no way to \emph{a priori} determine the optimal mesh size, as it will depend on the system under study as well as the initial condition. It will also depend on the size of the simulation; if we are planning on running many or very long trajectories, making the total simulation time substantial, we can afford an expensive preprocessing step. On the other hand, if the total simulation time is short to moderate, an expensive preprocessing step will not be worthwhile. We therefore propose a heuristic approach to selecting the mesh size.
If we can afford an expensive preprocessing step, we can simulate either full trajectories or shortened trajectories on a sequence of mesh resolutions to find a mesh resolution that appears to minimize the total execution time (there is of course no guarantee that we have found an actual minimum). To speed this process up we could, if the system allows it, perform the simulations on a structured Cartesian grid on a regular domain. That way we avoid the costly process of generating a sequence of unstructured meshes. While there is no guarantee that the system behaves the same way on a structured Cartesian mesh as on the actual domain of interest, we can still expect to get an approximation of the relative cost of simulations on different mesh sizes.
Also note that in many cases the mesh size will be constrained by the geometry of the problem. Internal structures could require a certain minimum mesh resolution, meaning that we cannot select the mesh size that optimizes the execution time, but that we instead are constrained to a certain mesh, and have to choose the best splitting given the mesh resolution.
\section{Discussion}
\label{sec:conclusions}
We have developed a hybrid method coupling simulation of the mesoscopic and microscopic modeling scales. The method can, for a certain class of systems, automatically propose a splitting of species based on how diffusion-limited the reactions are. Furthermore, we show that the new method converges with decreasing splitting time step for a larger class of systems than a previously developed method \cite{hybrid1}.
We apply the method to a numerical example, showing that it accurately, and with increased efficiency compared to microscopic simulations, splits the system. We also show that the mesh size optimizing the total execution time lies between the coarsest and the finest possible resolutions; it is therefore necessary to balance how many molecules are simulated on the microscopic scale against how fine the mesh should be.
The approach described in this paper can, in general, be applied to systems where molecules are created in spatial proximity through some sequence of unimolecular and bimolecular reactions.
Another possibility is that molecules are created in spatial proximity due to more complex interactions with internal membranes or fibers; processes not necessarily captured by the scheme outlined above. It is also plausible that microscale resolution could be needed for other reasons, such as for processes where molecules in 3D react with complex membranes or move due to active transport. Automatic splitting of such systems would require a different analysis.
\section{Acknowledgement}
This work has been funded by the National Institutes of Health (NIH) NIBIB Award No. R01-EB014877, Department of Energy (DOE) Award No. DE-SC0008975, the Swedish Research Council Award No. 2015-03964, and the eSSENCE strategic collaboration on eScience.
\section{Introduction}
In 1998, the concept of the continuous-time quantum walk on graphs was first proposed by Farhi and Gutmann in \cite{Farhi}. Suppose that $M$ is a symmetric matrix associated with a connected graph $G$, such as the adjacency matrix, the Laplacian matrix, or the signless Laplacian matrix of $G$. Then the \emph{transition matrix} of the continuous-time quantum walk on $G$ is defined by the unitary matrix
\begin{equation}
H_{M}(t)=\text{exp}(-itM),
\end{equation}
where $t\in \mathbb{R}$ and $i^2=-1$.
Bose \cite{Bose} in $2003$ studied the problem of information transmission in a quantum spin system. Subsequently,
Christandl et al. \cite{Christandl} proved that this problem can be reduced to perfect state transfer in a quantum walk. Let $e_{u}^n$ denote the characteristic vector of order $n$ corresponding to the vertex $u$ of $G$. When no confusion can arise, $e_{u}^n$ is abbreviated as $e_{u}$. For two vertices $u$ and $v$ of $G$, if
\begin{equation}
\text{exp}(-itM)e_{u}=\lambda e_{v}
\end{equation}
with $ |\lambda| =1$, then $G$ is said to have \textit{perfect state transfer} relative to the matrix $M$ between vertices $u$ and $v$ at time $t$. This concept plays a crucial role in quantum information and quantum algorithms. Characterizing graphs having perfect state transfer has enjoyed world-wide attention in the physics and mathematics communities. However, it is well known that graphs having perfect state transfer are rare. Godsil in \cite{Godsil2012} proposed a new concept called pretty good state transfer, whose condition is more relaxed than that of perfect state transfer. A graph $G$ is said to have \emph{pretty good state transfer} between vertices $u$ and $v$ at some time $t$ if
\begin{equation}
|\text{exp}(-itM)_{uv}|>1-\epsilon,
\end{equation}
for any $\epsilon>0$.
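For a concrete instance of perfect state transfer relative to the adjacency matrix, consider $K_2$: since $\exp(-itA)=\cos(t)I-i\sin(t)A$, at time $t=\pi/2$ the walk sends $e_u$ to $-i\,e_v$. A short numerical check (illustrative only):

```python
import numpy as np

def transition_matrix(M, t):
    w, V = np.linalg.eigh(M)
    return (V * np.exp(-1j * t * w)) @ V.T

# Adjacency matrix of K2: exp(-itA) = cos(t) I - i sin(t) A,
# so at t = pi/2 the walk sends e_0 to -i e_1.
A = np.array([[0.0, 1.0], [1.0, 0.0]])
H = transition_matrix(A, np.pi / 2)
assert np.isclose(abs(H[1, 0]), 1.0)   # perfect state transfer 0 -> 1
assert np.isclose(H[1, 0], -1j)        # with phase lambda = -i
```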
Many papers have focused on quantum walks relative to the adjacency matrix and the Laplacian matrix, and obtained some excellent results. For example, in the adjacency matrix case, Godsil in \cite{equation} proved that, for any $k\in\mathbb{Z}^+$, there exist at most finitely many graphs with maximum degree $k$ admitting perfect state transfer. Bose et al. in \cite{S.Bose} showed that the complete graph $K_{n}$ has no perfect state transfer, but $K_{n}$ with a missing edge has perfect state transfer between the two endpoints of this missing edge for every positive integer $n$. Ge et al. in \cite{Ge Yang} described some new constructions of perfect state transfer graphs using variants of the double cones and graph products, such as weak and lexicographic products, irregular and glued double cones and so on. Coutinho and Godsil in \cite{Coutinho2016} also constructed many new graphs admitting perfect state transfer using graph products and double covers of graphs. Ackelsberg et al. in \cite{Ackelsberg2017} showed that the corona of two graphs has no perfect state transfer under suitable conditions, but it has pretty good state transfer in some special cases. For more details on this area, we refer the reader to \cite{Coutinho2014,Godsil2012} and the references cited therein.
In the Laplacian matrix case, Alvir et al. in \cite{Alvir2016} studied
perfect state transfer in Laplacian quantum walks. It was indicated that quantum walks based on the adjacency matrix, the Laplacian matrix and the signless Laplacian matrix are all equivalent for a regular graph. They also investigated Laplacian perfect state transfer on graph joins and provided a characterization of Laplacian perfect state transfer on the double cones. Coutinho and Liu in \cite{Coutinho2015} proved that a tree of order $n\geq 3$ has no Laplacian perfect state transfer. However, Banchi et al. in \cite{Banchi2017} showed that a path of order $n$ has Laplacian pretty good state transfer if and only if $n$ is a power of 2. Recently, Ackelsberg et al. in \cite{Ackelsberg2016} showed that the corona of two graphs has no Laplacian perfect state transfer, but it has Laplacian pretty good state transfer under some mild conditions. Li and Liu in \cite{Yipeng Li} showed that the $\mathcal{Q}$-graph of an $r$-regular graph $G$ has no Laplacian perfect state transfer when $r+1$ is a prime number, but that it has Laplacian pretty good state transfer. Liu and Wang in \cite{Xiaogang Liu} proved that the total graph of an $r$-regular graph $G$ has no Laplacian perfect state transfer when $r+1$ is not a Laplacian eigenvalue of $G$. They also gave a sufficient condition for Laplacian pretty good state transfer in the total graph of a regular graph. For more information about Laplacian state transfer of graphs, readers may refer to \cite{Alvir2016,Coutinho2014,Wang2021} and the references cited therein.
Compared with the research on quantum state transfer relative to the adjacency matrix and the Laplacian matrix of graphs, there have been far fewer studies on signless Laplacian state transfer of graphs. For example, Alvir et al. in \cite{Alvir2016} proved that, if $H$ is a $(\frac{n}{2}-1)$-regular graph of order $n\geq2$, then the double cone $\overline{K}_2+H$ has signless Laplacian perfect state transfer. It is proved in \cite{Alvir2016} that there exists an interesting connection between
quantum walks relative to the signless Laplacian matrix of a graph $G$ and the adjacency matrix of its line graph $l(G)$. With the help of this, Alvir et al. \cite{Alvir2016} proved that the path $P_n$ has no signless Laplacian perfect state transfer for $n\geq5$. Recently, Kempton et al. in \cite{potential,involution} mainly investigated perfect state transfer and pretty good state transfer with potential. Note that, if the potential on each vertex of a graph equals its degree, then this results in signless Laplacian quantum state transfer of the graph. In $2021$, Tian et al. \cite{Tian2021} showed that there is no signless Laplacian perfect state transfer on the corona graph $G\circ K_{m}$ if $m$ equals one or a prime number. They also showed that, if $m$ is an even number, then $K_{2}\circ \overline{K}_{m}$ has no signless Laplacian perfect state transfer between the two vertices of $K_{2}$. However, $G\circ \overline{K}_{m}$ has signless Laplacian pretty good state transfer under some special conditions. For some properties of signless Laplacian matrices and applications in computer science, see \cite{Cvetkovic2010,Cvetkovic2009,Cvetkovic2011,Dam2003} and the references cited therein.
Motivated by the results above, we study perfect state transfer and pretty good state transfer relative to the signless Laplacian matrix in $\mathcal {Q}$-graphs. The $\mathcal {Q}$-\textit{graph} \cite{Cvetkovic2010} of a graph $G$, denoted by $\mathcal {Q}(G)$, is the graph obtained from $G$ by inserting one new vertex into each edge of $G$ and joining two new vertices by an edge whenever the corresponding edges of $G$ are adjacent. In our work, we show that there is no signless Laplacian perfect state transfer in the $\mathcal {Q}$-graph of a regular graph under a mild condition. Furthermore, we also present a sufficient condition for the $\mathcal {Q}$-graph to admit signless Laplacian pretty good state transfer.
\section{Preliminaries}
For a graph $G$ of order $n$, let $A(G)$ and $D(G)$ denote its adjacency matrix and degree diagonal matrix, respectively. Then $Q_{G}=A(G)+D(G)$ is the signless Laplacian matrix of $G$. The eigenvalues of $Q_{G}$ are called the \textit{signless Laplacian eigenvalues} of $G$. Let $q_{0}>q_{1}>\cdots>q_{d}$ be all the distinct eigenvalues of $Q_{G}$ and $x_{1}^{(i)}, \ldots, x_{l_{i}}^{(i)}$ be the unit orthogonal eigenvectors corresponding to the eigenvalue $q_{i}$ of multiplicity $l_{i}$, $i=0,1, \ldots, d$. Let $x^T$ denote the transpose of a column vector $x$. Then the eigenprojector corresponding to the eigenvalue $q_{i}$ can be written as the following matrix:
\begin{equation}
{f_{{q_i}}} = \sum\limits_{j = 1}^{{l_i}} {x_j^{(i)}(x_j^{(i)}} {)^T},
\end{equation}
and $\sum_{i=0}^{d} f_{{q_i}} =I_n$, where $I_n$ denotes the identity matrix of order $n$. It is easy to see that each $f_{{q_i}}$ is an idempotent matrix ($f_{{q_i}}^{2}=f_{{q_i}}$) and that $f_{{q_i}}f_{{q_k}}=0$ for $i\neq k$. Using the eigenprojectors, we obtain the following alternative expression for the signless Laplacian matrix $Q_{G}$:
\begin{equation}
Q_{G}=Q_{G}\sum_{i=0}^{d} f_{{q_i}}=\sum_{i=0}^{d}\sum_{j=1}^{l_{i}} Q_{G} x_{j}^{(i)} (x_{j}^{(i)})^T=\sum_{i=0}^{d}\sum_{j=1}^{l_{i}} q_{i} x_{j}^{(i)} (x_{j}^{(i)})^T=\sum_{i=0}^{d}q_{i} f_{{q_i}},
\end{equation}
which is called the \textit{spectral decomposition} of $Q_{G}$. The continuous-time quantum walk on the graph $G$ relative to the signless Laplacian matrix $Q_{G}$ is the unitary matrix
\begin{equation}
H_{Q_{G}}(t)=\text{exp}(-itQ_{G}),
\end{equation}
and hence
\begin{equation}\label{7}
H_{Q_{G}}(t)=\text{exp}(-itQ_G)=\sum_{k\geq 0}\frac{(-i)^{k}Q_{G}^{k}t^{k}}{k!}=\sum_{k\geq 0}\frac{(-i)^{k}\sum_{i=0}^{d}q_{i}^{k} f_{{q_i}}t^{k}}{k!}=\sum_{i=0}^{d} \text{exp}(-it q_{i})f_{{q_i}}.
\end{equation}
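The identity above can be verified numerically by grouping eigenvectors into eigenprojectors. The sketch below (the helper name eigenprojectors is ours) does this for the signless Laplacian of the triangle $C_3$:

```python
import numpy as np
from itertools import groupby

def eigenprojectors(M, decimals=6):
    """Eigenprojectors f_q of a real symmetric M, one per distinct eigenvalue."""
    w, V = np.linalg.eigh(M)
    projs = {}
    for q, idx in groupby(range(len(w)), key=lambda j: round(w[j], decimals)):
        cols = list(idx)
        projs[q] = V[:, cols] @ V[:, cols].T
    return projs

# Signless Laplacian of the triangle C3: eigenvalues 4 (simple) and 1 (twice).
A = np.ones((3, 3)) - np.eye(3)
Q = A + 2 * np.eye(3)
fs = eigenprojectors(Q)

t = 1.3
H_spec = sum(np.exp(-1j * t * q) * f for q, f in fs.items())
w, V = np.linalg.eigh(Q)
H_direct = (V * np.exp(-1j * t * w)) @ V.T
assert np.allclose(H_spec, H_direct)               # spectral expansion of exp(-itQ)
assert np.allclose(sum(fs.values()), np.eye(3))    # resolution of identity
```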
For a vertex $u$ of a graph $G$, denote its \textit{signless Laplacian eigenvalue support} in $G$ by $\text{supp}_{Q_{G}}(u)$, which is the set of all eigenvalues $q_i$ of $Q_{G}$ satisfying $f_{{q_i}}e_{u}\neq 0$, where $e_{u}$ is the \textit{characteristic vector} of $u$ (the $u$-th entry of $e_{u}$ is one and all other entries are zero). The vertices $u$ and $v$ are said to be \textit{strongly signless Laplacian cospectral} if $f_{q}e_{u}=\pm f_{q}e_{v}$ for all eigenvalues $q$ of $Q_{G}$. Denote by $S^{+}$ the set of all eigenvalues such that $f_{q}e_{u}=f_{q}e_{v}$, and by $S^{-}$ the set of all eigenvalues such that $f_{q}e_{u}=-f_{q}e_{v}$.
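These notions are straightforward to compute. As an illustration, for the path $P_3$ (signless Laplacian eigenvalues $0,1,3$, all simple, so each projector is a single outer product) the two end vertices are strongly signless Laplacian cospectral, with $S^+=\{0,3\}$ and $S^-=\{1\}$; a numerical sketch:

```python
import numpy as np

# Signless Laplacian of the path P3; eigenvalues 0, 1, 3 are all simple.
Q = np.array([[1.0, 1, 0],
              [1, 2, 1],
              [0, 1, 1]])
w, V = np.linalg.eigh(Q)

u, v = 0, 2                      # the two end vertices of P3
S_plus, S_minus = [], []
for j in range(3):
    f = np.outer(V[:, j], V[:, j])       # eigenprojector f_q
    if np.allclose(f[:, u], f[:, v]):
        S_plus.append(round(w[j]))
    elif np.allclose(f[:, u], -f[:, v]):
        S_minus.append(round(w[j]))

# The end vertices are strongly signless Laplacian cospectral.
assert sorted(S_plus) == [0, 3] and S_minus == [1]
```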
The following are some theorems and lemmas which will help us study signless Laplacian perfect state transfer and signless Laplacian pretty good state transfer of $\mathcal {Q}$-graphs.
\paragraph{Theorem 2.1.}(Coutinho \cite{Coutinho2014}) Let $G$ be a graph of order $n\geq2$ with vertex set $V(G)$, and let $u, v\in V(G)$.
Suppose that $q_{0}$ is the maximum signless Laplacian eigenvalue of $G$.
Then $G$ admits signless Laplacian perfect state transfer between the vertices $u$ and $v$ if and only if the following conditions hold.
\begin{enumerate}[(i)]
\item Two vertices $u$ and $v$ are strongly cospectral relative to signless Laplacian.
\item Non-zero elements in $\text{supp}_{Q_G}(u)$ are either all integers or all quadratic integers. Moreover, there exists a square-free integer $\Delta$ and integers $a$, $b_q$ such that, for each signless Laplacian eigenvalue $q\in \text{supp}_{Q_G}(u)$,
\begin{equation*}
q=\frac{1}{2}(a+b_q\sqrt{\Delta}).
\end{equation*}
Here we allow $\Delta=1$ if all signless Laplacian eigenvalues in $\text{supp}_{Q_G}(u)$ are integers, and $a=0$ if all signless Laplacian eigenvalues in $\text{supp}_{Q_G}(u)$ are all multiples of $\sqrt{\Delta}$.
\item $q\in{S^+}$ if and only if $\dfrac{q_0-q}{g\sqrt{\Delta}}$ is even, where
$$g=\gcd\left(\left\{\dfrac{q_0-q}{\sqrt{\Delta}}:q\in \text{supp}_{Q_G}(u)\right\}\right).$$
\end{enumerate}
Moreover, if the above conditions hold, then the following also hold.
\begin{enumerate}[(1)]
\item There exists a minimum time $\tau_0>0$ at which signless Laplacian perfect state transfer occurs between $u$ and $v$, and
\begin{equation*}
\tau_0=\frac{1}{g}\dfrac{\pi}{\sqrt{\Delta}}.
\end{equation*}
\item The time of signless Laplacian perfect state transfer $\tau$ is an odd multiple of $\tau_0$.
\item The phase of signless Laplacian perfect state transfer is given by $\lambda=e^{-itq_0}$.
\end{enumerate}
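As a sanity check of Theorem 2.1, consider $G=K_2$, whose signless Laplacian $Q_{K_2}$ has eigenvalues $q_0=2$ and $0$, both in the support of either vertex. Here $\Delta=1$ and $g=2$, so the predicted minimum time is $\tau_0=\pi/2$ with phase $e^{-i\tau_0 q_0}=-1$; both are confirmed numerically below (an illustrative sketch only).

```python
import numpy as np

# Q(K2) = [[1,1],[1,1]]: supp(u) = {2, 0}, all integers (Delta = 1),
# g = gcd((2-2)/1, (2-0)/1) = 2, so tau_0 = pi/(g*sqrt(Delta)) = pi/2.
Q = np.ones((2, 2))
w, V = np.linalg.eigh(Q)
tau0 = np.pi / 2
H = (V * np.exp(-1j * tau0 * w)) @ V.T
assert np.isclose(abs(H[0, 1]), 1.0)                  # PST at tau_0
assert np.isclose(H[0, 1], np.exp(-1j * tau0 * 2))    # phase e^{-i tau_0 q_0}
```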
\paragraph{Theorem 2.2.}(Hardy and Wright \cite{numbertheory}) Assume that $1, q_1, \ldots, q_m$ are linearly independent over the rationals $\mathbb{Q}$. Then, for any real numbers $\alpha_1, \ldots, \alpha_m$ and positive real numbers $N$, $\epsilon$, there exist integers $l>N$ and $\gamma_1, \ldots, \gamma_m$ such that
\begin{equation}
\vert{lq_{k}-\gamma_k-\alpha_k}\vert<\epsilon,
\end{equation}
for each $k=1,\ldots,m$.
\paragraph{Lemma 2.3.}(Richards \cite{galoistheory}) The set $\{\sqrt{\Delta}:\Delta$ is a square-free integer$\}$ is linearly independent over
$\mathbb{Q}$.
\paragraph{Lemma 2.4.}(Coutinho \cite{Coutinho2015}) A real number
$\lambda$ is a quadratic integer if and only if there exist integers $a$, $b$ and $\Delta$ such that $\Delta$ is square-free and one of the following cases holds.
\begin{enumerate}[(i)]
\item $\lambda=a+b\sqrt{\Delta}$ and $\Delta\equiv2,3\;(\text{mod}\;4)$.
\item $\lambda=\frac{1}{2}(a+b\sqrt{\Delta}),\; \Delta\equiv1\;(\text{mod}\;4)$, and either $a$ and $b$ are both even or both odd.
\end{enumerate}
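Both cases of Lemma 2.4 are easy to test on small examples, since each such $\lambda$ is a root of a monic quadratic with integer coefficients (the particular values $a$, $b$, $\Delta$ below are ours, chosen for illustration):

```python
from math import sqrt, isclose

# Case (ii): a = b = 1 (both odd), Delta = 5 = 1 (mod 4);
# lam = (1 + sqrt(5))/2 is a root of the monic quadratic x^2 - x - 1.
lam = (1 + sqrt(5)) / 2
assert isclose(lam**2 - lam - 1, 0.0, abs_tol=1e-12)

# Case (i): a = 3, b = 1, Delta = 2 = 2 (mod 4);
# mu = 3 + sqrt(2) is a root of x^2 - 2a x + (a^2 - b^2 Delta) = x^2 - 6x + 7.
mu = 3 + sqrt(2)
assert isclose(mu**2 - 6 * mu + 7, 0.0, abs_tol=1e-12)
```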
\section{Signless Laplacian eigenvalues and eigenprojectors of $\mathcal {Q}$-graph}
Let $G$ be a graph of order $n$ with vertex set $V(G)=\{v_{1}, v_{2}, \ldots, v_{n}\}$ and edge set $E(G)=\{e_{1},e_{2}, \ldots, e_{m}\}$. The \textit{vertex-edge incidence matrix} of $G$ is the $n\times m$ matrix $R_{G}=(r_{ij})_{n\times m}$ in which $r_{ij}=1$ if the vertex $v_{i}$ is incident to the edge $e_{j}$, and $r_{ij}=0$ otherwise. Let $y_{1},y_{2},\ldots,y_{l}$ be all unit orthogonal vectors such that $R_Gy_k=0$ for $k=1,\ldots,l$. It is well known \cite{Cvetkovic1995} that $l=m-n$ if $G$ is non-bipartite and $l=m-n+1$ if $G$ is bipartite. Denote by $j_n$ the column vector with all entries equal to one.
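The two null-space dimensions can be confirmed by a rank computation on the incidence matrices of the smallest cycles (non-bipartite $C_3$ versus bipartite $C_4$); a quick NumPy sketch:

```python
import numpy as np

def incidence(n, edges):
    """Vertex-edge incidence matrix R_G."""
    R = np.zeros((n, len(edges)))
    for j, (u, v) in enumerate(edges):
        R[u, j] = R[v, j] = 1.0
    return R

R3 = incidence(3, [(0, 1), (1, 2), (0, 2)])          # C3, non-bipartite
R4 = incidence(4, [(0, 1), (1, 2), (2, 3), (3, 0)])  # C4, bipartite

nullity = lambda R: R.shape[1] - np.linalg.matrix_rank(R)
assert nullity(R3) == 0   # l = m - n     = 3 - 3
assert nullity(R4) == 1   # l = m - n + 1 = 4 - 4 + 1
```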
Next, we first give the signless Laplacian eigenvalues and corresponding signless Laplacian eigenvectors of $\mathcal {Q}(G)$.
\paragraph{Theorem 3.1.} Let $G$ be an $r$-regular non-bipartite connected graph of order $n$, with $m$ edges and $r\geq2$. Also let $2r=q_{0}>q_{1}>\cdots>q_{d}$ be all distinct signless Laplacian eigenvalues of $G$ and $x_{1}^{(i)},x_{2}^{(i)}, \ldots, x_{l_{i}}^{(i)}$ be the unit orthogonal eigenvectors corresponding to eigenvalue $q_{i}$ with multiplicity $l_{i}$, $i=0,1,\ldots,d$. Assume that $y_{1}, y_{2}, \ldots, y_{m-n}$ are all unit orthogonal vectors such that $R_{G}y_k=0$. Then the signless Laplacian spectrum of $\mathcal{Q}$-graph of $G$ consists precisely of the following:
\begin{enumerate}[(i)]
\item $2r-2$ is the signless Laplacian eigenvalue of $\mathcal{Q}$-graph, with multiplicity $m-n$ and the corresponding orthogonal eigenvectors are
\[
Y_k=\frac{1}{||{y_k}||}\left({\begin{array}{*{20}{c}}
0\\
{{y_k}}
\end{array}}\right),
\]
for $k=1,2,\ldots,m-n$.
\item $ q_{i\pm}=\frac{3r+q_{i}-2 \pm \sqrt{(q_{i}+r-2)^{2}+4q_{i}}}{2}$ are the signless Laplacian eigenvalues of $\mathcal {Q}$-graph and the corresponding orthogonal eigenvectors are
\[
{X_{i \pm }^{j}} = \frac{1}{{\sqrt {{{({q_{i\pm}}+2-2r- {q_i})}^2+{q_i}}} }}\left( {\begin{array}{*{20}{c}}
{({q_{i \pm }} + 2 - 2r - {q_i}){x_j^{(i)}}}\\
{R_G^T{x_j^{(i)}}}
\end{array}} \right)
\]
for $j=1,2,\ldots,l_i$ and $i=0,1,\ldots,d$.
\end{enumerate}
\begin{proof}
According to the definition of $\mathcal {Q}$-graph, then the signless Laplacian matrix of $\mathcal {Q}$-graph of $G$ is given by
\begin{equation*}
{Q_{\mathcal {Q}(G)}} = \left( {\begin{array}{*{20}{c}}
{r{I_n}}&{{R_G}}\\
{{R_G}^T}&{2r{I_m} + A(\ell (G))},
\end{array}} \right)
\end{equation*}
where $\ell(G)$ denotes the line graph of $G$. The proof of this theorem is divided into three claims.
\paragraph{Claim 1.} $2r-2$ and $q_{i\pm}$ are the signless Laplacian eigenvalues of $\mathcal {Q}$-graph of $G$ with corresponding respective multiplicities $m-n$ and $l_{i}$ for $i=0, 1, 2, \ldots, d$.
\\\\
\textit{Proof of Claim $1$.} Claim 1 can be obtained from Theorem 5.6 in \cite{J.-P. Li}; here we give a detailed proof for the convenience of the reader. Since $A(\ell(G))=R_{G}^{T}R_{G}-2I_{m}$ and $Q_{G}=R_{G}R_{G}^{T}$, we compute the signless Laplacian characteristic polynomial of $\mathcal {Q}(G)$:
\begin{equation*}
\begin{split}
{P_{{Q_{\mathcal {Q}(G)}}}}(t) &=det \left( {\begin{array}{*{20}{c}}
{(t - r){I_n}}&{ - {R_G}}\\
{ - R_G^T}&{(t - 2r){I_m} - A(\ell (G))}
\end{array}} \right)\\
& = det\left( {\begin{array}{*{20}{c}}
{(t - r){I_n}}&{ - {R_G}}\\
{ - R_G^T}&{(t + 2 - 2r){I_m} - R_G^T{R_G}}
\end{array}} \right)\\
& =det \left( {\begin{array}{*{20}{c}}
{(t - r){I_n}}&{ - {R_G}}\\
{(r - t - 1)R_G^T}&{(t + 2 - 2r){I_m}}
\end{array}} \right)\\
&=det \left( {\begin{array}{*{20}{c}}
{(t - r){I_n} + \frac{{r - t - 1}}{{t - 2r + 2}}{R_G}R_G^T}&0\\
{(r - t - 1)R_G^T}&{(t + 2 - 2r){I_m}}
\end{array}} \right)\\
& = {(t + 2 - 2r)^m}\det ((t - r){I_n} + \frac{{r - t - 1}}{{t - 2r + 2}}{R_G}R_G^T)\\
& = {(t + 2 - 2r)^{m - n}}\det ((t + 2 - 2r)(t - r){I_n} + (r - t - 1){R_G}{R_G}^T)\\
&= {(t + 2 - 2r)^{m - n}}\det ((t + 2 - 2r)(t - r){I_n} + (r - t - 1){Q_G}).
\end{split}
\end{equation*}
Recall that the distinct signless Laplacian eigenvalues of $G$ are $q_{0}>q_{1}>\cdots>q_{d}$ with respective multiplicities $l_{0}, l_{1}, \ldots, l_{d}$. Then we have
\[
{P_{{Q_{\mathcal {Q}(G)}}}}(t) = {(t + 2 - 2r)^{m - n}} \prod \limits_{i = 0}^d {[(t + 2 - 2r)(t - r) + (r - t - 1){q_i}]^{{l_i}}},
\]
which implies that $2r-2$ and
\[
{q_{i \pm }} = \frac{{3r + {q_i} - 2 \pm \sqrt {{{(q_{i} + r - 2)}^2} + 4{q_i}} }}{2}
\]
are the signless Laplacian eigenvalues of $\mathcal {Q}(G)$ for $i=0,1,\ldots,d$. Since $G$ is non-bipartite, we have $q_d\neq 0$, which implies that $q_{i \pm }\neq2r-2$ for $i=0,1,\ldots,d$. Hence, $2r-2$ and $q_{i \pm }$ are the signless Laplacian eigenvalues of $\mathcal {Q}(G)$ with corresponding respective multiplicities $m-n$ and $l_{i}$ for $i=0, 1, 2, \ldots, d$.
\paragraph{Claim 2.} The $Y_k$ and $X_{i \pm }^j$, as described in the theorem, are the signless Laplacian eigenvectors corresponding to the signless Laplacian eigenvalues $2r-2$ and $q_{i \pm }$, respectively.
\\\\
\textit{Proof of Claim $2$.} By a simple calculation, we have
\[
Q_{\mathcal {Q}(G)}Y_k=
\left( {\begin{array}{*{20}{c}}
{r{I_n}}&{{R_G}}\\
{{R_G}^T}&{2r{I_m} + A(\ell (G))}
\end{array}} \right)\frac{1}{||{y_k}||}\left( {\begin{array}{*{20}{c}}
0\\
y_k
\end{array}} \right) = (2r - 2)\frac{1}{||{y_k}||}\left( {\begin{array}{*{20}{c}}
0\\
y_k
\end{array}} \right)=(2r - 2)Y_k,
\]
which implies that, for every $k = 1, 2, \ldots, m - n$,
\[
Y_k = \frac{1}{||{y_k}||}\left( {\begin{array}{*{20}{c}}
0\\
y_k
\end{array}} \right)
\]
is the signless Laplacian eigenvector corresponding to the signless Laplacian eigenvalue $2r-2$. Since $\{x_1^{(i)}, x_2^{(i)}, \ldots, x_{{l_i}}^{(i)}\}$ is an orthonormal basis of the eigenspace $V_{q_{i}}$ corresponding to the signless Laplacian eigenvalue $q_{i}$, we have $Q_{G}x_{j}^{(i)}=q_{i}x_{j}^{(i)}$ for $j=1, 2, \ldots, l_{i}$ and $i=0, 1, 2, \ldots, d$. It is easy to see that
\[
{q_{i \pm }}= r + \frac{{{q_i}}}{{{q_{i \pm }} + 2 - 2r - {q_i}}}.
\]
Now we obtain
\begin{equation*}
\begin{split}
{Q_{\mathcal {Q}(G)}}X_{i \pm }^j&=\frac{1}{{\sqrt {{{({q_{i \pm }} + 2 - 2r - {q_i})}^2} + {q_i}} }}\left( {\begin{array}{*{20}{c}}
{r{I_n}}&{{R_G}}\\
{{R_G}^T}&{2r{I_m} + A(\ell (G))}
\end{array}} \right)\left( {\begin{array}{*{20}{c}}
{({q_{i \pm }} + 2 - 2r - {q_i})x_j^{(i)}}\\
{R_G^Tx_j^{(i)}}
\end{array}} \right)\\
&= \frac{1}{{\sqrt {{{({q_{i \pm }} + 2 - 2r - {q_i})}^2} + {q_i}} }}\left( {\begin{array}{*{20}{c}}
{r{I_n}}&{{R_G}}\\
{{R_G}^T}&{(2r - 2){I_m} + R_G^T{R_G}}
\end{array}} \right)\left( {\begin{array}{*{20}{c}}
{({q_{i \pm }} + 2 - 2r - {q_i})x_j^{(i)}}\\
{R_G^Tx_j^{(i)}}
\end{array}} \right)\\
&= \frac{1}{{\sqrt {{{({q_{i \pm }} + 2 - 2r - {q_i})}^2} + {q_i}} }}\left( {\begin{array}{*{20}{c}}
{[r({q_{i \pm }} + 2 - 2r - {q_i}) + {q_i}]x_j^{(i)}}\\
{({q_{i \pm }} + 2 - 2r - {q_i} + 2r - 2 + {q_i})R_G^Tx_j^{(i)}}
\end{array}} \right)\\
&= \frac{1}{{\sqrt {{{({q_{i \pm }} + 2 - 2r - {q_i})}^2} + {q_i}} }}{q_{i \pm }}\left( {\begin{array}{*{20}{c}}
{({q_{i \pm }} + 2 - 2r - {q_i})x_j^{(i)}}\\
{R_G^Tx_j^{(i)}}
\end{array}} \right)\\
&= {q_{i \pm }}X_{i \pm }^j.
\end{split}
\end{equation*}
Hence, $X_{i\pm}^{j}$ are the signless Laplacian eigenvector of $\mathcal {Q}(G)$ corresponding to the signless Laplacian eigenvalues $q_{i\pm}$.
\paragraph{Claim 3.} All $X_{i\pm}^{j}$'s and $Y_k$'s are orthogonal signless Laplacian eigenvectors of $\mathcal {Q}(G)$.
\\\\
\textit{Proof of Claim $3$.} According to Claim $2$, we have $(X_{i\pm}^{j})^TX_{i\pm}^{k}=0$ and $Y_j^{T}Y_k=0$ for $j\neq k$. It is easy to see that $(X_{i\pm}^{j})^TY_{k}=0$ for $j=1, 2, \ldots, l_{i}$, $i=0, 1, 2, \ldots, d$ and $k=1,2,\ldots, m-n$.
In what follows, we will only prove that $(X_{i+ }^j)^TX_{i- }^j=0$. For the sake of convenience, set
$$\alpha=\frac{1}{{\sqrt {{{({q_{i + }} + 2 - 2r - {q_i})}^2+{q_i}}} }}\frac{1}{{\sqrt {{{({q_{i- }} + 2 - 2r - {q_i})}^2+{q_i}}} }}.$$
Since $({q_{i + }} + 2 - 2r - {q_i})({q_{i - }} + 2 - 2r - {q_i}) = - {q_i}$, then
\begin{equation*}
\begin{split}
{(X_{i + }^j)^T}X_{i - }^j &= \alpha{\left( {\begin{array}{*{20}{c}}
{({q_{i _{+} }} + 2 - 2r - {q_i})x_j^{(i)}}\\
{R_G^Tx_j^{(i)}}
\end{array}} \right)^T}\left( {\begin{array}{*{20}{c}}
{({q_{i _{-} }} + 2 - 2r - {q_i})x_j^{(i)}}\\
{R_G^Tx_j^{(i)}}
\end{array}} \right)\\
&= \alpha(({q_{i + }} + 2 - 2r - {q_i})({q_{i - }} + 2 - 2r - {q_i}){(x_j^{(i)})^T}x_j^{(i)} + {(x_j^{(i)})^T}{R_G}R_G^Tx_j^{(i)})\\
&= \alpha(- {q_i}{(x_j^{(i)})^T}x_j^{(i)} + {q_i}{(x_j^{(i)})^T}x_j^{(i)})\\
&= 0.
\end{split}
\end{equation*}
Therefore, $X_{i+}^j$ and $X_{i - }^j$ are orthogonal eigenvectors for any $j=1, 2, \ldots, l_{i}$ and $i=0, 1, 2, \ldots, d$.
Above all, all $X_{i\pm}^j$'s and $Y_k$'s are orthogonal eigenvectors for $j=1, 2, \ldots, l_{i}$, $i=0, 1, \ldots, d$ and $k=1,2,\ldots, m-n$.
\end{proof}
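Theorem 3.1 can be verified numerically on the smallest non-bipartite regular graph $G=C_3$ (here $r=2$, $m=n=3$, $\ell(C_3)=C_3$, and $m-n=0$, so the eigenvalue $2r-2$ does not occur): the spectrum of $Q_{\mathcal{Q}(G)}$ is exactly $\{4\pm2\sqrt{2}\}\cup\{(5\pm\sqrt{5})/2\}$, the latter pair with multiplicity two. A sketch:

```python
import numpy as np

r = 2
A = np.ones((3, 3)) - np.eye(3)          # C3; its line graph is again C3
R = np.array([[1.0, 0, 1],               # vertex-edge incidence of C3
              [1, 1, 0],
              [0, 1, 1]])
Q_Q = np.block([[r * np.eye(3), R],
                [R.T, 2 * r * np.eye(3) + A]])

def q_pm(q, sign):
    return (3 * r + q - 2 + sign * np.sqrt((q + r - 2) ** 2 + 4 * q)) / 2

# Signless Laplacian eigenvalues of C3 are 4 (simple) and 1 (twice).
predicted = sorted([q_pm(4, 1), q_pm(4, -1),
                    q_pm(1, 1), q_pm(1, -1), q_pm(1, 1), q_pm(1, -1)])
assert np.allclose(np.linalg.eigvalsh(Q_Q), predicted)
```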
\paragraph{Theorem 3.2.}
Let $G$ be an $r$-regular $(r\geq2)$ bipartite connected graph of order $n$ with $m$ edges, and let $V_{1}\cup V_{2}$ be the bipartition of the vertex set of $G$. Also let $ 2r=q_{0} > q_{1}> \cdots > q_{d}=0$ be all distinct signless Laplacian eigenvalues of $G$ with corresponding multiplicities $l_{0}, l_{1}, \ldots, l_{d}$, and $x_{1}^{(i)},x_{2}^{(i)}, \ldots, x_{l_{i}}^{(i)}$ be the unit orthogonal eigenvectors corresponding to eigenvalue $q_{i}$ for $i=0,1,\ldots,d$. Suppose that $y_{1}, y_{2}, \ldots, y_{m-n+1}$ are all unit orthogonal vectors such that $R_{G}y_k=0$ for $k=1,2,\ldots,m-n+1$. Then $2r-2$, $r$ and
$$ q_{i\pm}=\frac{3r+q_{i}-2 \pm \sqrt{(q_{i}+r-2)^{2}+4q_{i}}}{2}$$
are the signless Laplacian eigenvalues of $\mathcal {Q}$-graph and the corresponding signless Laplacian eigenvectors are, for $k=1,2,\ldots, m-n+1$,
\[
Y_k = \frac{1}{||{y_k}||}\left( {\begin{array}{*{20}{c}}
0\\
{{y_k}}
\end{array}} \right),\;\;
\frac{1}{{\sqrt n }}\left( {\begin{array}{*{20}{c}}
{{j_{|{V_1}|}}}\\
{ - {j_{|{V_2}|}}}\\
{{0_m}}
\end{array}} \right)
\]
and
\[
{X_{i \pm }^{j}} = \frac{1}{{\sqrt {{{({q_{i \pm }} + 2 - 2r - {q_i})}^2+{q_i}}} }}\left( {\begin{array}{*{20}{c}}
{({q_{i \pm }} + 2 - 2r - {q_i}){x_j^{(i)}}}\\
{R_G^T{x_j^{(i)}}}
\end{array}} \right)
\]
for $j=1, 2, \ldots, l_{i}$, $i=0, 1, 2, \ldots, d-1$.
\begin{proof}
Since $G$ is bipartite, the smallest signless Laplacian eigenvalue is $q_{d}=0$. In the light of
\[
{q_{i \pm }} = \frac{{3r + {q_i} - 2 \pm \sqrt {{{(q_{i} + r - 2)}^2} + 4{q_i}} }}{2},
\]
we have ${q_{d + }}= 2r - 2$ and ${q_{d - }} = r$.
It is easy to verify that
\[
{Q_{\mathcal {Q}(G)}}\frac{1}{{\sqrt n }}\left( {\begin{array}{*{20}{c}}
{{j_{|{V_1}|}}}\\
{ - {j_{|{V_2}|}}}\\
{{0_m}}
\end{array}} \right)\\
= \left( {\begin{array}{*{20}{c}}
{r{I_n}}&{{R_G}}\\
{{R_G}^T}&{(2r - 2){I_m} + R_G^T{R_G}}
\end{array}} \right)\frac{1}{{\sqrt n }}\left( {\begin{array}{*{20}{c}}
{{j_{|{V_1}|}}}\\
{ - {j_{|{V_2}|}}}\\
{{0_m}}
\end{array}} \right)\\
= r\frac{1}{{\sqrt n }}\left( {\begin{array}{*{20}{c}}
{{j_{|{V_1}|}}}\\
{ - {j_{|{V_2}|}}}\\
{{0_m}}
\end{array}} \right).
\]
Thus
\[\frac{1}{{\sqrt n }}\left( {\begin{array}{*{20}{c}}
{{j_{|{V_1}|}}}\\
{ - {j_{|{V_2}|}}}\\
{{0_m}}
\end{array}} \right)
\]
is the signless Laplacian eigenvector corresponding to the signless Laplacian eigenvalue $r$ of $\mathcal {Q}(G)$. The rest of the proof is similar to that of Theorem 3.1 and is omitted.
\end{proof}
According to Theorems 3.1 and 3.2, we obtain immediately the signless Laplacian eigenprojectors and spectral decomposition of the signless Laplacian matrix of the $\mathcal {Q}$-graph of $G$.
\paragraph{Theorem 3.3.} Assume that $G$ is an $r$-regular $(r\geq2)$ connected graph of order $n$ with $m$ edges. Then
\begin{enumerate}[(a)]
\item If $G$ is non-bipartite, then $F_{q_{i\pm}}$ and $F_{2r-2}$ are the eigenprojectors corresponding to the respective signless Laplacian eigenvalues $q_{i\pm}$ and $2r-2$ of $\mathcal {Q}(G)$, where
\begin{equation}\small\label{9}
{F_{{q_{i \pm }}}} = \frac{1}{{{{({q_{i \pm }} + 2 - 2r - {q_i})}^2} + {q_i}}}\left( {\begin{array}{*{20}{c}}
{{{({q_{i \pm }} + 2 - 2r - {q_i})}^2}{f_{{q_i}}}}&{({q_{i \pm }} + 2 - 2r - {q_i}){f_{{q_i}}}{R_G}}\\
{({q_{i \pm }} + 2 - 2r - {q_i}){{({f_{{q_i}}}{R_G})}^T}}&{R_G^T{f_{{q_i}}}{R_G}}
\end{array}} \right)
\end{equation}
and
\begin{equation}\label{10}
{F_{2r - 2}} = \sum\limits_{k = 1}^{m - n} {\frac{1}{{||{y_k}||^{2}}}} \left( {\begin{array}{*{20}{c}}
0&0\\
0&{{y_k}y_k^T}
\end{array}} \right).
\end{equation}
Thus, we get the spectral decomposition of $Q_{\mathcal {Q}(G)}$ as follows:
\begin{equation}
{Q_{\mathcal {Q}(G)}} = \sum\limits_{i = 0}^{d} {\sum\limits_ \pm {{q_{i \pm }}{F_{{q_{i \pm }}}} + (2r - 2)} } {F_{2r - 2}}.
\end{equation}
\item If $G$ is bipartite, then $F_{2r-2}$, $F_{r}$ and $F_{q_{i\pm}}$ are the eigenprojectors corresponding to the respective signless Laplacian eigenvalues $2r-2$, $r$ and $q_{i\pm}$ of $\mathcal {Q}(G)$, where
\begin{equation}
{F_{2r - 2}} = \sum\limits_{k = 1}^{m - n+1} {\frac{1}{{||{y_k}||^{2}}}} \left( {\begin{array}{*{20}{c}}
0&0\\
0&{{y_k}y_k^T}
\end{array}} \right),
\end{equation}
\begin{equation}
{F_r} = \frac{1}{n}\left( {\begin{array}{*{20}{c}}
{{J_{|{V_1}|}}}&{ - {J_{|{V_1}| \times |{V_2}|}}}&0\\
{ - {J_{|{V_2}| \times |{V_1}|}}}&{{J_{|{V_2}|}}}&0\\
0&0&0
\end{array}} \right) = \left( {\begin{array}{*{20}{c}}
{{f_0}}&0\\
0&0
\end{array}} \right),
\end{equation}
and for $i\neq d$
\begin{equation}\small
{F_{{q_{i \pm }}}} = \frac{1}{{{{({q_{i \pm }} + 2 - 2r - {q_i})}^2} + {q_i}}}\left( {\begin{array}{*{20}{c}}
{{{({q_{i \pm }} + 2 - 2r - {q_i})}^2}{f_{{q_i}}}}&{({q_{i \pm }} + 2 - 2r - {q_i}){f_{{q_i}}}{R_G}}\\
{({q_{i \pm }} + 2 - 2r - {q_i}){{({f_{{q_i}}}{R_G})}^T}}&{R_G^T{f_{{q_i}}}{R_G}}
\end{array}} \right).
\end{equation}
Thus, we get the spectral decomposition of $Q_{\mathcal {Q}(G)}$ as follows:
\begin{equation}
{Q_{\mathcal {Q}(G)}} =\sum\limits_{i = 0}^{d - 1} {\sum\limits_ \pm {{q_{i \pm }}{F_{{q_{i \pm }}}} + (2r - 2)} } {F_{2r - 2}} + r{F_r}.
\end{equation}
\end{enumerate}
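The eigenprojector formulas of Theorem 3.3 can be checked numerically for $G=C_3$: the matrices $F_{q_{i\pm}}$ built from $f_{q_i}$ and $R_G$ sum to the identity and reconstruct $Q_{\mathcal{Q}(G)}$ (no $F_{2r-2}$ term occurs since $m-n=0$). A sketch:

```python
import numpy as np

r, n, m = 2, 3, 3
A = np.ones((3, 3)) - np.eye(3)
R = np.array([[1.0, 0, 1], [1, 1, 0], [0, 1, 1]])
Q_Q = np.block([[r * np.eye(n), R], [R.T, 2 * r * np.eye(m) + A]])

# Eigenprojectors of Q_{C3}: f_4 = J/3 and f_1 = I - J/3.
f = {4: np.ones((3, 3)) / 3, 1: np.eye(3) - np.ones((3, 3)) / 3}

recon = np.zeros((n + m, n + m))
ident = np.zeros((n + m, n + m))
for q, fq in f.items():
    for s in (1, -1):
        qpm = (3 * r + q - 2 + s * np.sqrt((q + r - 2) ** 2 + 4 * q)) / 2
        a = qpm + 2 - 2 * r - q
        F = np.block([[a ** 2 * fq,      a * fq @ R],
                      [a * (fq @ R).T,   R.T @ fq @ R]]) / (a ** 2 + q)
        recon += qpm * F
        ident += F

# m - n = 0 here, so there is no F_{2r-2} contribution.
assert np.allclose(ident, np.eye(n + m))   # resolution of identity
assert np.allclose(recon, Q_Q)             # spectral decomposition
```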
In accordance with Theorems 3.1, 3.2 and 3.3, we obtain easily the following proposition, which will be applied to analyse the signless Laplacian state transfer of $\mathcal {Q}(G)$.
\paragraph{Proposition 3.4.} Let $G$ be an $r$-regular $(r\geq2)$ connected graph of order $n$ with $m$ edges. Then, for two vertices $u$ and $v$ of $G$, we have
\begin{enumerate}[(i)]
\item If $G$ is non-bipartite, then
\begin{equation}\small
{(e_v^{m + n})^T}\text{exp}( - it{Q_{\mathcal {Q}(G)}})e_u^{m + n}
= {e^{ - it\frac{{3r - 2}}{2}}}\sum\limits_{i = 0}^{d} {{e^{ - it\frac{{{q_i}}}{2}}}} e_v^T{f_{{q_i}}}{e_u}(\cos\frac{{{\Delta _{{q_i}}}t}}{2} + i\frac{{{q_i} + r - 2}}{{{\Delta _{{q_i}}}}}\sin\frac{{{\Delta _{{q_i}}}t}}{2}),
\end{equation}
where $\Delta_{q_i}=\sqrt{(q_i+r-2)^2+4q_i}$ for $i=0,1,\ldots,d$.
\item If $G$ is bipartite, then
\begin{equation}\small
\begin{split}
(e_v^{m + n})&^T\text{exp}( - it{Q_{\mathcal {Q}(G)}})e_u^{m + n}\\
&= {e^{ - it\frac{{3r - 2}}{2}}}\sum\limits_{i = 0}^{d - 1} {{e^{ - it\frac{{{q_i}}}{2}}}} e_v^T{f_{{q_i}}}{e_u}(\cos\frac{{{\Delta _{{q_i}}}t}}{2} + i\frac{{{q_i} + r - 2}}{{{\Delta _{{q_i}}}}}\sin\frac{{{\Delta _{{q_i}}}t}}{2}) + {e^{ - itr}}e_v^T{f_0}{e_u}.
\end{split}
\end{equation}
\end{enumerate}
\begin{proof} (i) Since $G$ is non-bipartite, part (a) of Theorem 3.3 and (\ref{7}) imply that
\[
\text{exp}( - it{Q_{\mathcal {Q}(G)}}) = \sum\limits_{i = 0}^{d} {\sum\limits_ \pm {{e^{ - it{q_{i \pm }}}}{F_{{q_{i \pm }}}} + {e^{ - it(2r - 2)}}} } {F_{2r - 2}}.
\]
By a simple calculation, we get
$$({q_{i + }} + 2 - 2r - {q_i})({q_{i - }} + 2 - 2r - {q_i}) = - {q_i},$$
$${({q_{i + }} + 2 - 2r - {q_i})^2} + {({q_{i - }} + 2 - 2r - {q_i})^2} = {\Delta _{{q_i}}}^2 - 2{q_i}$$
and
$${({q_{i - }} + 2 - 2r - {q_i})^2} - {({q_{i + }} + 2 - 2r - {q_i})^2} = ({q_i} + r - 2){\Delta _{{q_i}}}.$$
Thus, from the formulas (\ref{9}) and (\ref{10}), we obtain
\begin{equation*}
\begin{split}
{(e_v^{m + n})^T}\text{exp}( - it{Q_{\mathcal {Q}(G)}})e_u^{m + n}
&= {(e_v^{m + n})^T}(\sum\limits_{i = 0}^{d} {\sum\limits_ \pm {{e^{ - it{q_{i \pm }}}}{F_{{q_{i \pm }}}} + {e^{ - it(2r - 2)}}} } {F_{2r - 2}})e_u^{m + n}\\
&= \sum\limits_{i = 0}^{d} {\sum\limits_ \pm {{e^{ - it{q_{i \pm }}}}{{(e_v^{m + n})}^T}{F_{{q_{i \pm }}}}} } e_u^{m + n} \quad (\text{since } {F_{2r - 2}}e_u^{m + n} = 0 \text{ for } u \in V(G))\\
&= {e^{ - it\frac{{3r - 2}}{2}}}\sum\limits_{i = 0}^{d} {{e^{ - it\frac{{{q_i}}}{2}}}} e_v^T{f_{{q_i}}}{e_u}(\cos\frac{{{\Delta _{{q_i}}}t}}{2} + i\frac{{{q_i} + r - 2}}{{{\Delta _{{q_i}}}}}\sin\frac{{{\Delta _{{q_i}}}t}}{2})
\end{split}
\end{equation*}
for two vertices $u$ and $v$ of $G$.
(ii) Assume that $G$ is bipartite. It follows from part (b) of Theorem 3.3 and (\ref{7}) that
\[
\text{exp}( - it{Q_{\mathcal {Q}(G)}}) = \sum\limits_{i = 0}^{d - 1} {\sum\limits_ \pm {{e^{ - it{q_{i \pm }}}}{F_{{q_{i \pm }}}} + {e^{ - it(2r - 2)}}} } {F_{2r - 2}} + {e^{ - itr}}{F_r}.
\]
The rest of the proof is similar to that of (i) and is omitted.
\end{proof}
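The amplitude formula of Proposition 3.4(i) can be tested numerically for $G=C_3$; note that in the check below the sine term carries a factor $i$, as required for the two exponentials $e^{-itq_{i\pm}}$ to recombine exactly (the factor disappears once the magnitude is taken in state-transfer conditions). A sketch:

```python
import numpy as np

r = 2
A = np.ones((3, 3)) - np.eye(3)
R = np.array([[1.0, 0, 1], [1, 1, 0], [0, 1, 1]])
Q_Q = np.block([[r * np.eye(3), R], [R.T, 2 * r * np.eye(3) + A]])
f = {4: np.ones((3, 3)) / 3, 1: np.eye(3) - np.ones((3, 3)) / 3}

t, u, v = 0.9, 0, 1
w, V = np.linalg.eigh(Q_Q)
amp = ((V * np.exp(-1j * t * w)) @ V.T)[v, u]    # exact amplitude

rhs = 0
for q, fq in f.items():
    D = np.sqrt((q + r - 2) ** 2 + 4 * q)        # Delta_q
    rhs += (np.exp(-1j * t * q / 2) * fq[v, u]
            * (np.cos(D * t / 2) + 1j * (q + r - 2) / D * np.sin(D * t / 2)))
rhs *= np.exp(-1j * t * (3 * r - 2) / 2)
assert np.isclose(amp, rhs)
```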
\section{Signless Laplacian perfect state transfer of $\mathcal {Q}$-graph}
In this section, we mainly study the existence of signless Laplacian perfect state transfer in $\mathcal {Q}$-graphs. First we give a key lemma, which is similar to Lemma $4.1$ in \cite{Yipeng Li}.
\paragraph{Lemma 4.1.} Let $G$ be an $r$-regular connected graph of order $n$ with $m$ edges and $r\geq2$. Also let $2r=q_{0}>q_{1}>\cdots>q_{d}$ be all distinct signless Laplacian eigenvalues of $G$, with the signless Laplacian eigenprojectors $f_{q_{0}}, f_{q_{1}},\ldots,f_{q_{d}}$, respectively.
\begin{enumerate}[(i)]
\item If $G$ is non-bipartite, then for any $k\in\{1, 2,\ldots, m\}$ there is some index $i_0\in\{1,2,\ldots,d\}$ such that $(f_{q_{i_0}}R_{G})e_{k}^{m}\neq0$.
\item If $G$ is bipartite, then for any $k\in\{1, 2,\ldots, m\}$ there is some index $i_0\in\{1,2,\ldots, d-1\}$ such that $(f_{q_{i_0}}R_{G})e_{k}^{m}\neq0$.
\end{enumerate}
\begin{proof} (i) Since $G$ is an $r$-regular non-bipartite graph, then the signless Laplacian eigenprojector $f_{q_{0}}=f_{2r}=\frac{1}{n}J_{n}$. In the light of $\sum\nolimits_{i = 0}^d{{f_{{q_i}}}}=I_{n}$, we have
$\sum\nolimits_{i = 1}^d {{f_{{q_i}}}} = {I_n} - {f_{2r}} = {I_n} - \frac{1}{n}{J_n}$.
Thus,
\begin{equation*}
\sum\limits_{i = 1}^d {{f_{{q_i}}}}R_{G}e_{k}^{m}=({I_n} - \frac{1}{n}{J_n})R_{G}e_{k}^{m}\neq0,
\end{equation*}
which implies that, for any $k\in\{1, 2,\ldots, m\}$, there exists some index $i_0\in\{1,2,\ldots,d\}$ such that $(f_{q_{i_0}}R_{G})e_{k}^{m}\neq0$.
(ii) In this case, assume that $V_{1}\cup V_{2}$ is the bipartition of the vertex set of $G$. Since $G$ is a regular graph, then $|V_{1}|=| V_{2}|=\frac{n}{2}$. According to $\sum\nolimits_{i = 0}^d {{f_{{q_i}}}} = I_{n}$, $f_{q_{0}}=f_{2r}=\frac{1}{n}J_{n}$ and
\[
{f_{{q_d}}} = {f_0} = \frac{1}{n}\left( {\begin{array}{*{20}{c}}
{{J_{\frac{n}{2}}}}&{ - {J_{\frac{n}{2}}}}\\
{ - {J_{\frac{n}{2}}}}&{{J_{\frac{n}{2}}}}
\end{array}} \right),
\]
we obtain
\begin{equation*}
\sum\limits_{i = 1}^{d-1}{{f_{{q_i}}}} = {I_n} - {f_{2r}}-{f_{0}} = {I_n} - \frac{1}{n}{J_n}-\frac{1}{n}\left( {\begin{array}{*{20}{c}}
{{J_{\frac{n}{2}}}}&{ - {J_{\frac{n}{2}}}}\\
{ - {J_{\frac{n}{2}}}}&{{J_{\frac{n}{2}}}}
\end{array}} \right).
\end{equation*}
Hence, for any $k\in\{1, 2,\ldots, m\}$,
\begin{equation*}
\sum\limits_{i = 1}^{d-1} {{f_{{q_i}}}}R_{G}e_{k}^{m}=\left({I_n} - \frac{1}{n}{J_n}-\frac{1}{n}\left( {\begin{array}{*{20}{c}}
{{J_{\frac{n}{2}}}}&{ - {J_{\frac{n}{2}}}}\\
{ - {J_{\frac{n}{2}}}}&{{J_{\frac{n}{2}}}}
\end{array}} \right)\right)R_{G}e_{k}^{m}\neq0,
\end{equation*}
which implies that the required result follows.
\end{proof}
\paragraph{Theorem 4.2.} Let $G$ be an $r$-regular connected graph of order $n\geq2$ with $m$ edges.
If all the signless Laplacian eigenvalues of $G$ are integers, then there is no signless Laplacian perfect state transfer in $\mathcal {Q}(G)$.
\begin{proof} Here we only prove the case where $G$ is non-bipartite; for a bipartite graph, the proof is similar.
Let $V(G)\cup I(G)$ be the vertex set of $\mathcal {Q}(G)$, where $I(G)$ is the set of new vertices in $\mathcal {Q}(G)$. Also let $z$ be a vertex of $\mathcal {Q}(G)$. Suppose toward a contradiction that $\mathcal {Q}(G)$ has signless Laplacian perfect state transfer between the vertex $z$ and another vertex. According to Theorem $2.1$, we only need to discuss the following two cases.\\
\emph{Case 1.} All the elements of $\text{supp}_{Q_{\mathcal {Q}(G)}}(z)$ are integers. In this case, we consider the following two subcases: the vertex $z\in V(G)$ and $z\in I(G)$.\\
\emph{Subcase 1.1.} The vertex $z\in V(G)$. Since $G$ is a connected graph, there is a positive eigenvalue $q\in \text{supp}_{Q_{G}}(z)$. Thus $f_{q}e_{z}\neq 0$. It follows from Lemma 4.1 and part (a) of Theorem 3.3 that $F_{q\pm}e_{z}^{m+n}\neq 0$. This implies that $q_{\pm}\in\text{supp}_{Q_{\mathcal {Q}(G)}}(z)$. Recall that all of $q$, $q_{+}$ and $q_{-}$ are integers, where
\[
{q_ + } = \frac{{3r + q - 2 + \sqrt {{{(r + q - 2)}^2} + 4q} }}{2},\;\;
{q_ - } = \frac{{3r + q - 2 - \sqrt {{{(r + q - 2)}^2} + 4q} }}{2}.
\]
Hence $q_{+}+q_{-}=3r+q-2$ and $q_{+}-q_{-}=\sqrt{(r+q-2)^{2}+4q}$. It follows that $\sqrt{(r+q-2)^{2}+4q}$ is an integer, or equivalently, $(r+q-2)^{2}+4q$ is a perfect square. Clearly, $4q$ is even and
\[
{(q + r - 2)^2} < {(q + r - 2)^2} + 4q < {(q + r - 2)^2} + 4(q + r) < {(q + r + 2)^2}.
\]
If $(q + r - 2)^2 + 4q \neq (r+q)^{2}$, then at most one of $q_{+}$ and $q_{-}$ is an integer, which contradicts that all the elements of $\text{supp}_{Q_{\mathcal {Q}(G)}}(z)$ are integers. Otherwise, $(q + r - 2)^2 + 4q = (r+q)^{2}$. Thus, we have $r=1$, which implies that $G=K_2$ and
$\mathcal {Q}(G)=P_3$. It is well known \cite{Coutinho2015} that $P_3$ has no signless Laplacian
perfect state transfer, which contradicts our hypothesis.\\
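The perfect-square argument of Subcase 1.1 can be checked numerically. The following sketch (an illustration, not part of the original proof) verifies for small positive integers $q$ and $r$ that $m=(q+r-2)^2+4q$ lies strictly between $(q+r-2)^2$ and $(q+r+2)^2$, has the same parity as $(q+r)^2$, and is a perfect square exactly when $r=1$:

```python
import math

def is_square(n):
    s = math.isqrt(n)
    return s * s == n

# For positive integers q, r the quantity m = (q+r-2)^2 + 4q satisfies
# (q+r-2)^2 < m < (q+r+2)^2, has the same parity as (q+r)^2, and is a
# perfect square exactly when m = (q+r)^2, i.e. exactly when r = 1.
for q in range(1, 200):
    for r in range(1, 200):
        m = (q + r - 2) ** 2 + 4 * q
        assert (q + r - 2) ** 2 < m < (q + r + 2) ** 2
        assert (m - (q + r) ** 2) % 2 == 0
        assert is_square(m) == (r == 1)
```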
\emph{Subcase 1.2.} The vertex $z\in I(G)$. According to Lemma 4.1, there exists some signless Laplacian eigenvalue $q$ such that $(f_{q}R_{G})e_{z}^{m}\neq0$. Then, according to part (a) of Theorem 3.3, we obtain that
\[
{F_{{q_{\pm} }}}e_z^{m + n} = \frac{1}{{{{({q_{\pm} } + 2 - 2r - q)}^2} + q}}\left( {\begin{array}{*{20}{c}}
{({q_{\pm} } + 2 - 2r - q)({f_q}{R_G})e_z^m}\\
{(R_G^T{f_q}{R_G})e_z^m}
\end{array}} \right)\neq0.
\]
Thus, $q_\pm\in \text{supp}_{Q_{\mathcal {Q}(G)}}(z)$ and $q_{\pm}$ are integers. Arguing as in Subcase $1.1$ above, we get that not all the elements of $\text{supp}_{Q_{\mathcal {Q}(G)}}(z)$ are integers, a contradiction.\\
\emph{Case 2.} All the elements of $\text{supp}_{Q_{\mathcal {Q}(G)}}(z)$ are quadratic integers. Again we distinguish two subcases.\\
\emph{Subcase 2.1.} The vertex $z \in V(G)$. As in Subcase 1.1, we have $ q_\pm\in \text{supp}_{Q_{\mathcal {Q}(G)}}(z)$ whenever $q\in \text{supp}_{Q_{G}}(z)$. Since all the elements of $\text{supp}_{Q_{\mathcal {Q}(G)}}(z)$ are quadratic integers and $(q + r - 2)^2 + 4q$ cannot be a perfect square for an integer $q$, there exists a square-free integer $\Delta>1$, along with integers $a$ (we allow $a=0$), $b_{+}$ and $b_{-}$, such that
\[
{q_\pm } = \frac{{a + {b_ \pm }\sqrt \Delta }}{2}.
\]
Note that $(q_{+} + 2 - 2r - q)({q_{-}} + 2 - 2r - q) = - q$. Thus,
\begin{equation*}
\frac{1}{4}{b_ + }{b_ - }\Delta + \frac{1}{4}{(a + 4 - 4r - 2q)^2} + \frac{1}{4}\sqrt \Delta ({b_ + } + {b_ - })(a + 4 - 4r - 2q) = - q.
\end{equation*}
Since $\sqrt{\Delta}$ is irrational, either $b_ + + b_ - =0$ or $a+4-4r-2q=0$. If $b_ + + b_ - =0$, then $q_{+}+q_{-}=a=3r+q-2$. In this case, $|\text{supp}_{Q_{G}}(z)|=1$, which contradicts that $G$ is a connected graph with $n\geq2$ vertices. Otherwise, $a+4-4r-2q=0$, so $a=4r+2q-4$, which also implies that $|\text{supp}_{Q_{G}}(z)|=1$, a contradiction. In both cases we reach a contradiction, so not all the elements of $\text{supp}_{Q_{\mathcal {Q}(G)}}(z)$ can be quadratic integers.\\
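The identity $(q_{+}+2-2r-q)(q_{-}+2-2r-q)=-q$ used above follows directly from the formulas for $q_\pm$; the following quick numerical sanity check (an illustration, not part of the proof) confirms it for small parameter values:

```python
import math

# Numerical check of (q_+ + 2 - 2r - q)(q_- + 2 - 2r - q) = -q, where
# q_pm = (3r + q - 2 +- sqrt((r + q - 2)^2 + 4q)) / 2.
for r in range(1, 30):
    for q in range(0, 30):
        disc = math.sqrt((r + q - 2) ** 2 + 4 * q)
        qp = (3 * r + q - 2 + disc) / 2
        qm = (3 * r + q - 2 - disc) / 2
        prod = (qp + 2 - 2 * r - q) * (qm + 2 - 2 * r - q)
        assert abs(prod - (-q)) < 1e-7
```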
\emph{Subcase 2.2.} The vertex $z\in I(G)$. According to Lemma $4.1$, there exists some signless Laplacian eigenvalue $q$ such that $(f_{q}R_{G})e_{z}^{m}\neq0$. Then, from Theorem $3.3$, we obtain that
\[
{F_{{q_\pm }}}e_z^{m + n} = \frac{1}{{{{({q_ \pm } + 2 - 2r - q)}^2} + q}}\left( {\begin{array}{*{20}{c}}
{({q_ \pm } + 2 - 2r - q)({f_q}{R_G})e_z^m}\\
{(R_G^T{f_q}{R_G})e_z^m}
\end{array}} \right)\neq0.
\]
So, $q_\pm \in \text{supp}_{Q_{\mathcal {Q}(G)}}(z)$ and $q_{\pm}$ are quadratic integers. Arguing as in Subcase 2.1 above, not all the elements of $\text{supp}_{Q_{\mathcal {Q}(G)}}(z)$ are quadratic integers, a contradiction.
Neither of the two cases above can occur. Hence, it follows from Theorem $2.1$ that there is no signless Laplacian perfect state transfer in $\mathcal {Q}(G)$.
\end{proof}
Remark that the path $P_{2}$ is signless Laplacian integral, as the signless Laplacian spectrum of $P_{2}$ is $\{0,2\}$. Theorem 4.2 shows that its $\mathcal {Q}$-graph $P_{3}$ has no signless Laplacian perfect state transfer. This result has been proved by Coutinho and Liu in \cite{Coutinho2015}. In addition, in accordance with the proof of Theorem 4.2, we easily obtain the following corollary.
\paragraph{Corollary 4.3.} Let $G$ be an $r$-regular connected graph of order $n\geq2$ with $m$ edges. Assume that $u$ is any vertex in $G$.
If all the elements of $\text{supp}_{Q_G}(u)$ are integers, then $\mathcal {Q}(G)$ has no signless Laplacian perfect state transfer between $u$ and any other vertex of $\mathcal {Q}(G)$.
\paragraph{Example 1.} Assume that $G$ is one of the following graphs:
\begin{enumerate}[(i)]
\item $d$-dimensional hypercubes $Q_{d}$ for $d\geq2$;
\item Cocktail party graphs $\overline{mK_{2}}$ for $m\geq 2$;
\item Halved $2d$-dimensional hypercubes $\frac{1}{2}Q_{2d}$ for $d\geq2$, where the vertex set $V(\frac{1}{2}Q_{2d})$ consists of elements of $\mathbb{Z}_{2}^{2d}$ of even Hamming weight and two vertices are adjacent if and only if their Hamming distance is exactly two.
\end{enumerate}
Then $\mathcal {Q}(G)$ has no signless Laplacian perfect state transfer.
\begin{proof}
It has been shown in \cite{Tian2021} that the signless Laplacian spectra of these graphs are given by
\begin{enumerate}[(i)]
\item $\text{Sp}(Q_{d})=\{2d-2l:0\leq l\leq d\}$;
\item $\text{Sp}(\overline{mK_{2}})=\{4m-4, 2m-2, 2m-4\}$;
\item $\text{Sp}(\frac{1}{2}Q_{2d})=\left\{2\binom{2d}{2}-2l(2d-l): 0\leq l\leq d\right\}$.
\end{enumerate}
Obviously, these graphs are regular connected graphs and all the signless Laplacian eigenvalues of these graphs are integers. Hence, if $G$ is any one of these graphs, then $\mathcal {Q}(G)$ has no signless Laplacian perfect state transfer by Theorem 4.2.
\end{proof}
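As an illustration (not part of the original text), the integrality of the signless Laplacian spectrum can be verified numerically for a small case, here the hypercube $Q_3$:

```python
import itertools
import numpy as np

def signless_laplacian_eigs(adj):
    """Signless Laplacian Q = D + A; return sorted eigenvalues."""
    deg = np.diag(adj.sum(axis=1))
    return sorted(np.linalg.eigvalsh(deg + adj))

# d-dimensional hypercube Q_d: vertices are bit strings, edges at Hamming distance 1.
d = 3
n = 2 ** d
adj = np.zeros((n, n))
for u, v in itertools.combinations(range(n), 2):
    if bin(u ^ v).count("1") == 1:
        adj[u, v] = adj[v, u] = 1
eigs = signless_laplacian_eigs(adj)
# Claimed spectrum: {2d - 2l : 0 <= l <= d}.
assert all(abs(e - round(e)) < 1e-9 for e in eigs)
assert set(round(e) for e in eigs) == {2 * d - 2 * l for l in range(d + 1)}
```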
\section{Signless Laplacian pretty good state transfer of $\mathcal{Q}$-graph}
Recall that graphs possessing signless Laplacian perfect state transfer are rare. In this section, we discuss the existence of signless Laplacian pretty good state transfer in $\mathcal {Q}$-graphs and give some examples admitting signless Laplacian pretty good state transfer.
\paragraph{Theorem 5.1.} Let $G$ be an $r$-regular connected graph of order $n$ with $m$ edges and $r\geq2$. Also let $ q_{0}>q_{1}>\cdots>q_{d}$ be all the distinct signless Laplacian eigenvalues of $G$, and let $g$ be as described in Theorem 2.1. Assume that $G$ has signless Laplacian perfect state transfer between vertices $u$ and $v$. If $r$ is divisible by $g$, then $\mathcal {Q}(G)$ has signless Laplacian pretty good state transfer between vertices $u$ and $v$.
\begin{proof} The proof of this theorem is divided into the following two cases. In what follows, for the sake of convenience, set $S=\text{supp}_{Q_{G}}(u)$ and $\Delta_{q_i}=\sqrt{(q_i+r-2)^2+4q_i}$ for $i=0,1,\ldots,d$.\\
\emph{Case 1.} $G$ is a connected non-bipartite graph. In this case, from part (i) of Proposition 3.4, we have
\begin{equation*}
\begin{split}
{{{(e_v^{m + n})}^T}\exp( - it{Q_{\mathcal {Q}(G)}})e_u^{m + n}}
&= {e^{ - it\frac{{3r - 2}}{2}}}\sum\limits_{i = 0}^{d} {{e^{ - it\frac{{{q_i}}}{2}}}} e_v^T{f_{{q_i}}}{e_u}\left(\cos\frac{{{\Delta _{{q_i}}}t}}{2} + \frac{{{q_i} + r - 2}}{{{\Delta _{{q_i}}}}}\sin\frac{{{\Delta _{{q_i}}}t}}{2}\right)\\
&={e^{ - it\frac{{3r - 2}}{2}}}\sum\limits_{{q_i} \in S} {{e^{ - it\frac{{{q_i}}}{2}}}} e_v^T{f_{{q_i}}}{e_u}\left(\cos\frac{{{\Delta _{{q_i}}}t}}{2} + \frac{{{q_i} + r - 2}}{{{\Delta _{{q_i}}}}}\sin\frac{{{\Delta _{{q_i}}}t}}{2}\right),
\end{split}
\end{equation*}
where the last equality holds because $e_v^T{f_{{q_i}}}{e_u}\neq0$ if and only if $q_{i} \in \text{supp}_{Q_{G}}(u)$.
Clearly, $|e^{-it\frac{3r-2}{2}}|=1$ for any $t$. In order to prove that $\mathcal {Q}(G)$ has signless Laplacian pretty good state transfer between the vertices $u$ and $v$, we only need to find a time $t_0$ such that
\begin{equation}
\left|\sum\limits_{{q_i} \in S} {{e^{ - it_0\frac{{{q_i}}}{2}}}} e_v^T{f_{{q_i}}}{e_u}\left(\cos\frac{{{\Delta _{{q_i}}}t_0}}{2} + \frac{{{q_i} + r - 2}}{{{\Delta _{{q_i}}}}}\sin\frac{{{\Delta _{{q_i}}}t_0}}{2}\right)\right| \approx 1.
\end{equation}
Observe that $f_{2r}e_{u}\neq 0$ implies that $2r\in S$. Since $G$ has signless Laplacian perfect state transfer between $u$ and $v$, it follows from Theorem 2.1 and Lemma 2.4 that all the elements of $S$ are integers. This implies that $\mathcal {Q}(G)$ has no signless Laplacian perfect state transfer between $u$ and $v$ by Corollary 4.3. Bearing in mind that $G$ is connected, we have $|S|\geq 2$. Furthermore, assume that $2r\neq q_{j}\in S$ for some $j\in\{1,2,\ldots, d\}$. Then $q_{j}$ is an integer and $f_{q_{j}}e_{u}\neq 0$, which implies that $q_{j\pm} \in \text{supp}_{Q_{\mathcal {Q}(G)}}(u)$. In what follows, we let $\Delta _{q_{j}}=a_{j}\sqrt{b_{j}}$ for each $q_{j}\in S$, where $a_{j}, b_{j}\in \mathbb{Z}^{+}$ and $b_{j}$ is the square-free part of $\Delta _{q_{j}}^{2}$. Since $(q_{j}+r-2)^{2}<(q_{j}+r-2)^{2}+4q_{j}<(q_{j}+r)^{2}$ for $r>1$, the set
\[
\{ \sqrt {{b_j}} :{q_j} \in S,\,{q_j} > 0\}
\]
is linearly independent over $\mathbb{Q}$ by Lemma 2.3. By Theorem $2.2$, there exist integers $\alpha$ and $c_{j}$ for each $q_{j}\in S$, such that
\begin{equation}\label{19}
\alpha \sqrt {{b_j}} - {c_j} \approx - \frac{1}{{2g}}\sqrt {{b_j}}.
\end{equation}
If $b_{l}=b_{j}$ for two distinct eigenvalues $q_{l}, q_{j}\in S$, then $c_{l}=c_{j}$. Multiplying both sides of (\ref{19}) by $4a_{j}$, we obtain
\[
\Delta_{q_{j}}\approx \frac{{4{a_j}{c_j}}}{{4\alpha + \frac{2}{g}}}.
\]
Take $t_0=(4\alpha+\frac{2}{g})\pi$, then
\[\cos \frac{{{\Delta _{{q_j}}}t_0 }}{2} \approx \cos \frac{{\frac{{4{a_j}{c_j}}}{{4\alpha + \frac{2}{g}}}(4\alpha + \frac{2}{g})\pi}}{2} = \cos 2{a_j}{c_j}\pi = 1.
\]
Hence, we have
\begin{equation*}
\begin{split}
\left|\sum\limits_{{q_j} \in S} {{e^{ - it_0 \frac{{{q_j}}}{2}}}} e_v^T{f_{{q_j}}}{e_u}\left(\cos\frac{{{\Delta _{{q_j}}}t_0 }}{2} + \frac{{{q_j} + r - 2}}{{{\Delta _{{q_j}}}}}\sin\frac{{{\Delta _{{q_j}}}t_0 }}{2}\right)\right|
&\approx \left|\sum\limits_{{q_j} \in S} {{e^{ - it_0 \frac{{{q_j}}}{2}}}} e_v^T{f_{{q_j}}}{e_u}\right|\\
& \approx \left|\sum\limits_{{q_j} \in S} {{e^{ - i\frac{\pi }{g}{q_j}}}} e_v^T{f_{{q_j}}}{e_u}\right|=1,
\end{split}
\end{equation*}
where the last equality holds by Theorem 2.1.\\
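The key approximation step of Case 1 — choosing an integer $\alpha$ so that $t_0=(4\alpha+\frac{2}{g})\pi$ makes each $\cos\frac{\Delta_{q_j}t_0}{2}$ close to $1$ — can be illustrated by a brute-force search. The values $g=2$ and the square-free parts $b_j$ below are sample data, not taken from a specific graph:

```python
import math

# Find an integer alpha with (alpha + 1/(2g)) * sqrt(b_j) close to an integer
# for every square-free part b_j (this is inequality (19) rearranged), and
# verify that t_0 = (4*alpha + 2/g)*pi then makes cos(sqrt(b_j)*t_0/2) ~ 1.
g = 2
bs = [5, 13]          # illustrative square-free parts b_j
eps = 0.02
alpha = None
for a in range(1, 200000):
    if all(abs((a + 1 / (2 * g)) * math.sqrt(b)
               - round((a + 1 / (2 * g)) * math.sqrt(b))) < eps for b in bs):
        alpha = a
        break
assert alpha is not None
t0 = (4 * alpha + 2 / g) * math.pi
for b in bs:
    # sqrt(b)*t0/2 = 2*pi*(alpha + 1/(2g))*sqrt(b) is close to a multiple of 2*pi
    assert math.cos(math.sqrt(b) * t0 / 2) > 0.9
```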
\emph{Case 2.} $G$ is a connected bipartite graph. According to part (ii) of Proposition 3.4 and Case 1, we only need to prove that
\begin{equation}
\left|\sum\limits_{{q_j} \in S\backslash \{0\}} {{e^{ - it\frac{{{q_j}}}{2}}}} e_v^T{f_{{q_j}}}{e_u}\left(\cos\frac{{{\Delta _{{q_j}}}t}}{2} + \frac{{{q_j} + r - 2}}{{{\Delta _{{q_j}}}}}\sin\frac{{{\Delta _{{q_j}}}t}}{2}\right) + {e^{ - itr}}e_v^T{f_0}{e_u}\right| \approx 1
\end{equation}
for some $t$. Take $t_0=(4\alpha+\frac{2}{g})\pi$ again. Similar to the discussion in Case 1, we obtain
\begin{equation}\label{21}
\begin{split}
&\left|\sum \limits_{{q_j} \in S\backslash \{0\}} {{e^{ - it_0 \frac{{{q_j}}}{2}}}} e_v^T{f_{{q_j}}}{e_u}\left(\cos\frac{{{\Delta _{{q_j}}}t_0 }}{2} + \frac{{{q_j} + r - 2}}{{{\Delta _{{q_j}}}}}\sin\frac{{{\Delta _{{q_j}}}t_0 }}{2}\right)+ {e^{ - it_0 r}}e_v^T{f_0}{e_u}\right|\\
&\approx\left|\sum\limits_{{q_j} \in S\backslash \{0\}} {{e^{ - it_0 \frac{{{q_j}}}{2}}}} e_v^T{f_{{q_j}}}{e_u} + {e^{ - i\frac{r}{g}2\pi }}e_v^T{f_0}{e_u}\right| .
\end{split}
\end{equation}
Since $r$ is divisible by $g$, we have ${e^{ - i\frac{r}{g}2\pi }}=e^{-it_0\frac{0}{2}}$. It follows from (\ref{21}) that
\begin{equation*}
\begin{split}
\left|\sum\limits_{{q_j} \in S\backslash \{0\}} {{e^{ - it_0 \frac{{{q_j}}}{2}}}} e_v^T{f_{{q_j}}}{e_u} + {e^{ - i\frac{r}{g}2\pi }}e_v^T{f_0}{e_u}\right|
&= \left|\sum\limits_{{q_j} \in S\backslash \{0\}} {{e^{ - it_0 \frac{{{q_j}}}{2}}}} e_v^T{f_{{q_j}}}{e_u} + {e^{ - it_0 \frac{0}{2}}}e_v^T{f_0}{e_u}\right|\\
&= \left|\sum\limits_{{q_j} \in S} {{e^{ - it_0 \frac{{{q_j}}}{2}}}} e_v^T{f_{{q_j}}}{e_u}\right|\\
& = \left|\sum\limits_{{q_j} \in S} {{e^{ - i\frac{\pi }{g}{q_j}}}} e_v^T{f_{{q_j}}}{e_u}\right|
= 1.
\end{split}
\end{equation*}
Summing up the two cases above, we obtain the required result.
\end{proof}
In the following, we present some families of $\mathcal {Q}$-graphs of distance regular graphs admitting signless Laplacian pretty good state transfer. Distance regular graphs have many beautiful combinatorial properties that play an important role in graph theory and combinatorial mathematics. In quantum walks of graphs, an interesting result is the following: \emph{the eigenvalue support of each vertex in a distance regular graph $G$ equals the set of all distinct eigenvalues of $G$} \cite{Coutinho2015}. Since, for a regular graph, the signless Laplacian matrix shares the same eigenprojectors with the adjacency matrix, this property also holds for the signless Laplacian matrices of regular graphs. On the other hand, for a regular graph $G$, there exists perfect state transfer between vertices $u$ and $v$ if and only if $G$ has signless Laplacian perfect state transfer between vertices $u$ and $v$
\cite{Alvir2016}. Thus, applying Theorem 5.1, we may give some examples having signless Laplacian pretty good state transfer.
\paragraph{Example 2.} The $d$-dimensional hypercubes $Q_{d}$, the cocktail party graphs $\overline{mK_{2}}$ and the halved $2d$-dimensional hypercubes $\frac{1}{2}Q_{2d}$ admit signless Laplacian perfect state transfer between antipodal vertices $u$ and $v$. However, Example 1 showed that the $\mathcal {Q}$-graphs of these graphs have no signless Laplacian perfect state transfer. In contrast to these results, we find that
\begin{enumerate}[(i)]
\item The $\mathcal {Q}$-graph of $Q_{d}$ admits signless Laplacian pretty good state transfer between $u$ and $v$ whenever $d$ is even. Indeed, it is easy to check that $g=\gcd\left(\left\{\dfrac{q_0-q_k}{\sqrt{\Delta}}\right\}_{k=0}^d\right)=2$ and $Q_{d}$ is $d$-regular. The required result follows by Theorem 5.1;
\item Note that $\overline{mK_{2}}$ is $(2m-2)$-regular and $g=2$ (see Example 1). Hence, $\mathcal {Q}(\overline{mK_{2}})$ has signless Laplacian pretty good state transfer between $u$ and $v$;
\item Similarly, if $\binom{2d}{2}$ is divisible by 2, then $\mathcal {Q}(\frac{1}{2}Q_{2d})$ admits signless Laplacian pretty good state transfer between $u$ and $v$.
\end{enumerate}
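For instance, the value $g=2$ for $Q_d$ can be read off from the spectrum $\{2d-2l\}$: the differences $q_0-q_l=2l$ have greatest common divisor $2$. A short illustrative check:

```python
from functools import reduce
from math import gcd

# The distinct signless Laplacian eigenvalues of Q_d are q_l = 2d - 2l
# (0 <= l <= d), so the differences q_0 - q_l = 2l have gcd 2, giving g = 2.
for d in range(2, 20):
    diffs = [2 * l for l in range(1, d + 1)]
    assert reduce(gcd, diffs) == 2
```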
\section{Concluding remarks}
The paper focuses on perfect state transfer and pretty good state transfer in $\mathcal {Q}$-graphs of regular graphs relative to signless Laplacian. We have obtained the spectral decomposition of signless Laplacian matrix of $\mathcal {Q}$-graph of a regular graph. Furthermore, some sufficient conditions have been given about $\mathcal {Q}$-graphs of regular graphs admitting perfect state transfer and pretty good state transfer relative to signless Laplacian. We end this paper with the following problem: \emph{For a non-regular graph, determine whether or not the corresponding $\mathcal {Q}$-graph has perfect state transfer or pretty good state transfer relative to signless Laplacian.}\\
\\
\textbf{Acknowledgements} This work was in part supported by the National Natural Science Foundation of China (No. 11801521).
K\"uhne in \cite{Kuehne:An_effective_result_of_Andre-Oort_type} and independently Bilu, Masser and Zannier in \cite{Bilu-Masser-Zannier} have studied the Andr\'e-Oort conjecture in the case of the Shimura variety $\mathbb{P}^1(\mathbb{C}) \times \mathbb{P}^1(\mathbb{C})$, where $\mathbb{P}^1(\mathbb{C})$ is the modular curve $\operatorname{SL}_2(\mathbb{Z}) \backslash \mathcal{H}^*$.
They obtain the first nontrivial, unconditional, effective results in the area.
In \cite{Paulin:Andre-Oort-P1xGm} the author investigates the variant $\mathbb{P}^1(\mathbb{C}) \times \mathbb{G}_m(\mathbb{C})$, where the special points are of the form $(j(\tau), \lambda)$, with $\tau$ an imaginary quadratic number and $\lambda \in \mathbb{C}$ a root of unity.
In this article we attack the same problem as in \cite{Paulin:Andre-Oort-P1xGm}, using a different method.
We prove a weaker version of the main explicit result of \cite{Paulin:Andre-Oort-P1xGm}, therefore we also reprove the main non-effective result.
The better bounds in \cite{Paulin:Andre-Oort-P1xGm} are achieved using more sophisticated class field theory.
We use less class field theory here, and instead are forced to apply linear forms in logarithms.
Even though the bounds are worse here, the methods presented could still be useful for other similar problems.
Let $\mathcal{H}$ denote the complex upper half-plane.
We call $(\alpha, \lambda) \in \mathbb{P}^1(\mathbb{C}) \times \mathbb{G}_m(\mathbb{C})$ a special point, if $\alpha = j(\tau)$ for some imaginary quadratic $\tau \in \mathcal{H}$ and $\lambda \in \mathbb{C}$ is a root of unity.
We work with the same assumptions as in Theorem 2 of \cite{Paulin:Andre-Oort-P1xGm}.
So $K$ is a number field of degree $d$ over $\mathbb{Q}$ with a fixed embedding into $\mathbb{C}$, and $F \in K[X,Y]$ is a nonconstant polynomial with $\delta_1 = \deg_X F$ and $\delta_2 = \deg_Y F$.
We assume that the zero set of $F(X,Y)$ contains no vertical or horizontal line, i.e.\ $F(X,Y)$ does not have a nonconstant divisor $f \in K[X]$ or $g \in K[Y]$.
Then clearly $\delta_1, \delta_2 > 0$.
Let $h(F)$ denote the height of the polynomial $F$ (so $h(F)$ is the absolute logarithmic Weil height of the point defined by the nonzero coefficients of $F$ in projective space, see the definition in section \ref{sec:Prelim}).
Let $(\alpha, \lambda)$ be a special point of the curve $\mathcal{C}$ defined by $F(X,Y)=0$, where $\alpha = j(\tau)$ for some $\tau \in \mathcal{H}$.
Let $\Delta$ denote the discriminant of the endomorphism ring of the complex elliptic curve $\mathbb{C}/(\mathbb{Z} + \mathbb{Z} \tau)$, and let $N$ be the smallest positive integer such that $\lambda^N = 1$.
\begin{theorem} \label{thm:main}
In the above situation
\begin{equation} \label{eq:main-Q-Delta}
|\Delta| < \left(\frac{1}{d \delta_2} C \log C\right)^2
\end{equation}
and
\begin{equation} \label{eq:main-Q-N}
N < C (\log C)^2 \log \log C
\end{equation}
with
\[
C = 2^{36} d^3 \delta_2^3 (\log(4 d \delta_2))^2 \max \left(d h(F) + (d-1) (\delta_1 + \delta_2) \log 2, 1\right).
\]
\end{theorem}
This result implies Theorem 1 of \cite{Paulin:Andre-Oort-P1xGm}, because there are only finitely many special points $(\alpha, \lambda)$ with $\Delta$ and $N$ bounded.
Similarly to \cite{Paulin:Andre-Oort-P1xGm}, we can reduce the proof to the case when $K = \mathbb{Q}$ and $\mathbb{Z} + \mathbb{Z}\tau$ is an order.
In the following section we collect some preliminary definitions and statements, and also prove some auxiliary results.
In section \ref{sec:proof} we prove Theorem \ref{thm:main}.
\section{Preliminaries} \label{sec:Prelim}
For the reader's convenience we recall some definitions.
The (absolute logarithmic Weil) height of a point $P = (a_0: \dotsc: a_n) \in \mathbb{P}^n_{\overline{\mathbb{Q}}}$ is defined by
\[
h(P) = \sum_{v \in M_K} \frac{[K_v:\mathbb{Q}_v]}{[K:\mathbb{Q}]} \log(\max_i |a_i|_v),
\]
where $K$ is any number field containing all $a_i$, $M_K$ is the set of places of $K$, and for any place $v$, $|\cdot|_v$ is the absolute value on $K$ extending a standard absolute value of $\mathbb{Q}$.
Similarly, the (absolute logarithmic Weil) height of a polynomial $F \in \overline{\mathbb{Q}}[X_1, \dotsc, X_n]$ with nonzero coefficients $c_i$ is defined by
\[
h(F) = \sum_{v \in M_K} \frac{[K_v:\mathbb{Q}_v]}{[K:\mathbb{Q}]} \log(\max_i |c_i|_v),
\]
where $K$ is a number field containing the coefficients of $F$.
We use the notation $H(F) = e^{h(F)}$.
If $F \in \mathbb{Z}[X_1, \dotsc, X_n]$, and the gcd of the coefficients of $F$ is $1$, then $H(F)$ is equal to the maximum of the euclidean absolute values of the coefficients of $F$.
If $K$ is a number field and $\alpha \in K$, then the (absolute logarithmic Weil) height of $\alpha$ is
\[
h(\alpha) = h(\alpha:1) = \sum_{v \in M_K} \frac{[K_v:\mathbb{Q}_v]}{[K:\mathbb{Q}]} \log \max(1, |\alpha|_v).
\]
We use the notation $H(\alpha) = e^{h(\alpha)}$.
Let $f = a_d X^d + \dotsm + a_0 = a_d (X-\alpha_1) \dotsm (X-\alpha_d) \in \mathbb{C}[X]$, where $a_d \neq 0$.
The Mahler measure of $f$ (see e.g.\ \cite{Bombieri-Gubler}) is defined by
\[
M(f) = \exp\left(\int_0^1 \log|f(e^{2\pi i t})|dt\right) = |a_d| \prod_{j=1}^d \max(1, |\alpha_j|).
\]
If $\alpha \in \overline{\mathbb{Q}}$ and $f \in \mathbb{Z}[X]$ is the minimal polynomial of $\alpha$, then $h(\alpha) = \frac{\log M(f)}{\deg f}$ (see Proposition 1.6.6 in \cite{Bombieri-Gubler}).
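The equality of the integral and product expressions for the Mahler measure (Jensen's formula) can be checked numerically; the polynomial below is an arbitrary sample, not taken from the text:

```python
import numpy as np

# f(X) = 2X^2 - 7X + 3 = 2(X - 1/2)(X - 3); the product formula gives
# M(f) = 2 * max(1, 1/2) * max(1, 3) = 6.
coeffs = [2, -7, 3]
ts = np.arange(4096) / 4096
vals = np.log(np.abs(np.polyval(coeffs, np.exp(2j * np.pi * ts))))
integral = vals.mean()          # Riemann sum approximating the Jensen integral
assert abs(np.exp(integral) - 6.0) < 1e-6
```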
If $\mathcal{O}$ is an order in an imaginary quadratic number field $L$, then the class number $h(\mathcal{O})$ denotes the number of equivalence classes of proper fractional ideals of $\mathcal{O}$ (see e.g.\ \cite{Cox:Primes_of_the_form_x2+ny2}).
Since $\mathcal{O}$ is an order in $L$, we can write it in the form $\mathbb{Z} + \mathbb{Z} \tau_0$ for some $\tau_0$ in $L \cap \mathcal{H}$.
Then the discriminant of the order $\mathcal{O}$ is $D(\mathcal{O}) = -4 (\Impart \tau_0)^2$ (see e.g.\ \S 7, Ch.\ 2 in \cite{Cox:Primes_of_the_form_x2+ny2}).
This is a negative integer congruent to $0$ or $1$ modulo $4$.
If $D<0$ is an integer congruent to $0$ or $1$ modulo $4$, then the class number $h(D)$ denotes the number of proper equivalence classes of primitive quadratic forms of discriminant $D$ (see \cite{Cox:Primes_of_the_form_x2+ny2} or \cite{Hua:Intro_to_Number_Theory}).
Theorem 7.7 in \cite{Cox:Primes_of_the_form_x2+ny2} says that if $\mathcal{O}$ is an order with discriminant $D$ in an imaginary quadratic number field, then $h(\mathcal{O}) = h(D)$.
The following theorem is the main result of \cite{BaW:Log_forms_and_group_varieties}.
\begin{theorem}[Baker, W\"ustholz] \label{thm:BaWu}
Let $\alpha_1, \dotsc, \alpha_n \in \mathbb{C} \setminus \{0,1\}$ be algebraic numbers, with fixed determinations of logarithms $\log \alpha_1, \dotsc, \log \alpha_n$.
The degree of the field extension $\mathbb{Q}(\alpha_1, \dotsc, \alpha_n)/\mathbb{Q}$ is denoted by $d$.
Let $L(z_1, \dotsc, z_n) = b_1 z_1 + \dotsm + b_n z_n$ be a linear form, where $b_1, \dotsc, b_n$ are integers such that at least one $b_i$ is nonzero.
We use the notation $h'(\alpha_i) = \max(h(\alpha_i), \frac{1}{d} |\log \alpha_i|, \frac{1}{d})$ (which depends on the choice of $\log \alpha_i$) and $h'(L) = \max(h(L),\frac{1}{d})$, where $h(L)$ is the absolute logarithmic Weil height of $(b_1:\dotsc:b_n)$ in $\mathbb{P}^{n-1}_{\mathbb{Q}}$.
If $\Lambda = L(\log \alpha_1, \dotsc, \log \alpha_n) \neq 0$, then
\[
\log |\Lambda| > -C(n,d) h'(\alpha_1) \dotsm h'(\alpha_n) h'(L),
\]
where
\[
C(n,d) = 18 (n+1)! n^{n+1} (32d)^{n+2} \log(2nd).
\]
\end{theorem}
The following lemma gives us a lower bound on the distance between an algebraic number and a root of unity.
\begin{lemma} \label{lemma:distance-root-of-unity}
Let $\lambda \in \mathbb{C}$ be an $N^{\textrm{th}}$ root of unity, where $N$ is a positive integer.
If $\gamma \in \mathbb{C}$ is algebraic of degree $d$ over $\mathbb{Q}$, and $\lambda \neq \gamma$, then
\[
\log |\lambda - \gamma| > -c d^3 \log(4d) \max\left(h(\gamma), \frac{4}{d} \right) \log \max(N,2)
\]
with $c = 2^{25} 3^3 \pi + 1$.
\end{lemma}
\begin{proof}
The right hand side is the same for $N=1$ and for $N=2$, so we may assume that $N \ge 2$.
Suppose first that $d=1$.
Then $\gamma \in \mathbb{Q}$, so there are coprime integers $a,b$ such that $b > 0$ and $\gamma = \frac{a}{b}$.
If $\lambda \in \{1, -1\}$, then $|\lambda - \gamma| = \left|\frac{a \pm b}{b}\right| \ge \frac{1}{b} \ge \frac{1}{H(\gamma)}$, hence $\log |\lambda-\gamma| \ge -h(\gamma)$.
Now let $\lambda \notin \{1, -1\}$, then $N \ge 3$, and $|\lambda - \gamma| \ge |\Impart(\lambda)| \ge \sin(\frac{2 \pi}{N})$.
The sine function is concave in the interval $[0, \pi]$, so $\sin x \ge \frac{\sin(2\pi/3)}{2\pi/3} x = \frac{3\sqrt{3}}{4 \pi} x$ for every $x \in [0, \frac{2\pi}{3}]$.
Thus $|\lambda - \gamma| \ge \frac{3\sqrt{3}}{4 \pi} \cdot \frac{2 \pi}{N} = \frac{3\sqrt{3}}{2N} \ge \frac{1}{N}$, therefore $\log |\lambda - \gamma| \ge - \log N$.
So the statement is true for $d=1$.
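The elementary estimates used in the case $d=1$ can be verified numerically (a sanity check, not part of the proof):

```python
import math

# Check the concavity bound sin(x) >= (3*sqrt(3)/(4*pi)) * x on [0, 2*pi/3]
# and the resulting estimate sin(2*pi/N) >= 3*sqrt(3)/(2*N) >= 1/N for N >= 3.
slope = 3 * math.sqrt(3) / (4 * math.pi)
for i in range(1001):
    x = (2 * math.pi / 3) * i / 1000
    assert math.sin(x) >= slope * x - 1e-12
for N in range(3, 10000):
    assert math.sin(2 * math.pi / N) >= 3 * math.sqrt(3) / (2 * N) - 1e-12
    assert 3 * math.sqrt(3) / (2 * N) >= 1 / N
```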
From now on we assume that $N, d \ge 2$.
If $|\lambda-\gamma| \ge \frac{1}{4}$, then $\log|\lambda-\gamma| \ge -\log 4$, which is clearly greater than the bound needed.
So we may assume that $|\lambda-\gamma|<\frac{1}{4}$.
Let us define the logarithm function in the unit disc around $1$ such that $\log(1) = 0$, and let $s = \log(\frac{\gamma}{\lambda})$.
Here $s$ is well defined, because $|\frac{\gamma}{\lambda}-1| = |\gamma - \lambda| < \frac{1}{2}$.
It is a basic fact from analysis that if $z \in \mathbb{C}$ and $|z|<\frac{1}{2}$, then $|\log(1+z)| \le 2 |z|$.
Using this for $z = \frac{\gamma}{\lambda}-1$, we get that $|s| \le 2|\frac{\gamma}{\lambda}-1| = 2|\lambda-\gamma|$.
In particular $|s| < \frac{1}{2}$.
We can find an integer $u$ such that $|u| \le \frac{N}{2}$ and $\lambda = e^{\frac{u}{N} 2\pi i}$.
Note that $e^{s + \frac{u}{N} 2\pi i} = \gamma$ and $e^{\pi i} = -1$, so we may choose the logarithms $\log \gamma$ and $\log(-1)$ to be $s + \frac{u}{N} 2\pi i$ and $\pi i$.
Define the linear form $L(z_1,z_2) = N z_1 - 2u z_2$.
We will apply the Baker-W\"ustholz estimate (Theorem \ref{thm:BaWu}) for $\Lambda = L(\log \gamma, \log(-1))$.
Here $\Lambda = N \log \gamma - 2u \pi i = N(s+\frac{u}{N} 2\pi i) - 2u \pi i = N s \neq 0$, because otherwise $\log \gamma = \frac{u}{N} 2\pi i$, so $\gamma = e^{\frac{u}{N} 2\pi i} = \lambda$, contradicting $\lambda \neq \gamma$.
Note that $-1, \gamma \notin \{0,1\}$, because $d \ge 2$.
Theorem \ref{thm:BaWu} tells us that
\[
\log |\Lambda| > -C(2,d) h'(\gamma) h'(-1) h'(L),
\]
where
\begin{align*}
C(2,d) &= 18 \cdot 6 \cdot 8 (32d)^4 \log(4d), \\
h'(\gamma) &= \max\left(h(\gamma), \frac{1}{d} |\log \gamma|, \frac{1}{d}\right), \\
h'(-1) &= \max\left(h(-1), \frac{1}{d}|\log(-1)|, \frac{1}{d}\right) = \frac{\pi}{d}, \\
h'(L) &= \max\left(h\left(\frac{2u}{N}\right), \frac{1}{d}\right) \le \max\left(\log N, \frac{1}{d}\right) \le \max\left(\log N, \frac{1}{2}\right) = \log N.
\end{align*}
Note that $|\log \gamma| = |s+\frac{2u}{N} \pi i| \le |s| + \pi \le \pi + \frac{1}{2} < 4$, so $h'(\gamma) \le \max(h(\gamma), \frac{4}{d})$.
Collecting these inequalities together, we get
\[
\log |\Lambda| > - c' d^3 \log(4d) \max\left(h(\gamma), \frac{4}{d}\right) \log N,
\]
where $c' = 18 \cdot 6 \cdot 8 \cdot 32^4 \cdot \pi = 2^{25} 3^3 \pi = c-1$.
From $|s| \le 2|\lambda-\gamma|$ and $\Lambda = Ns$ we obtain that $|\lambda-\gamma| \ge \frac{|\Lambda|}{2N}$.
So
\begin{align*}
\log |\lambda-\gamma| &\ge \log|\Lambda| - \log(2N) \ge \log|\Lambda| - 2 \log N \\
&> -(c'+1) d^3 \log(4d) \max\left(h(\gamma), \frac{4}{d}\right) \log N \\
&= - c d^3 \log(4d) \max\left(h(\gamma), \frac{4}{d}\right) \log N.
\end{align*}
\end{proof}
In the following lemma, we get a bound for the value of a polynomial at a root of unity.
\begin{lemma} \label{lemma:lower-bound-poly-root-of-unity}
Let $N$ and $d$ be positive integers.
Let $\lambda \in \mathbb{C}$ be an $N^{\textrm{th}}$ root of unity, and let $g \in \mathbb{Z}[X]$ be a polynomial of degree at most $\delta$ such that $g(\lambda) \neq 0$.
Then
\[
\log |g(\lambda)| > - 2^{35} \delta^2 (\log(4\delta))^2 \max(h(g),1) \log \max(N,2).
\]
\end{lemma}
\begin{proof}
We will prove the inequality
\begin{equation} \label{eq:g(lambda)-Mahler-bound}
\log |g(\lambda)| > - c_2 \delta^2 \log(4\delta) \log(2 M(g)) \log \max(N,2),
\end{equation}
where $c_0 = 2^{25} 3^3 \pi + 1$, $c_1 = \frac{4}{\log 2} c_0$ and $c_2 = c_1 + 6$.
We argue by induction on $\delta$.
The right hand side of \eqref{eq:g(lambda)-Mahler-bound} is the same for $N=1$ and for $N=2$, so we may assume that $N \ge 2$.
If $\deg g = 0$, then $g(\lambda) \in \mathbb{Z} \setminus \{0\}$, so $\log |g(\lambda)| \ge 0$.
Now let $\deg g \ge 1$.
The right hand side of \eqref{eq:g(lambda)-Mahler-bound} is a monotone decreasing function of $\delta$, so we may assume that $\deg g = \delta$, and that \eqref{eq:g(lambda)-Mahler-bound} is true for smaller values of $\delta$.
We may also assume that the gcd of the coefficients of $g$ is $1$, because multiplying $g$ by a positive integer increases the left hand side and decreases the right hand side of \eqref{eq:g(lambda)-Mahler-bound}.
Suppose $g$ is not irreducible, then $g = g_1 g_2$ for some polynomials $g_1, g_2 \in \mathbb{Z}[X]$ of positive degrees $d_1$ and $d_2$.
Note that $\delta = d_1 + d_2$ and $M(g) = M(g_1) M(g_2) \ge M(g_1), M(g_2)$.
Using the induction hypothesis for $g_1$ and $g_2$, we get
\begin{align*}
\log |g(\lambda)| &= \log |g_1(\lambda)| + \log |g_2(\lambda)| \\
&\ge - c_2 \log(2 M(g)) (\log N) \left( d_1^2 \log(4d_1) + d_2^2 \log(4d_2) \right) \\
&\ge - c_2 \log(2 M(g)) (\log N) \delta^2 \log(4\delta),
\end{align*}
because
\[
\delta^2 \log(4\delta) = (d_1^2 + d_2^2 + 2d_1 d_2) \log(4\delta) \ge d_1^2 \log(4d_1) + d_2^2 \log(4d_2).
\]
Finally, let $g$ be irreducible.
Let $g = a (X-\gamma_1) \dotsm (X-\gamma_{\delta})$, where $a \in \mathbb{Z} \setminus \{0\}$ and $\gamma_1, \dotsc, \gamma_{\delta} \in \mathbb{C}$.
Choose a $k \in \{1, \dotsc, \delta\}$ such that $|\lambda - \gamma_k|$ is minimal.
During the proof of Theorem A.3 in \cite{Bugeaud} it is shown that if $P \in \mathbb{Z}[X] \setminus \{0\}$ is a separable polynomial of degree $n$, and $\alpha, \beta \in \mathbb{C}$ are such that $P(\alpha) = P(\beta) = 0$ and $\alpha \neq \beta$, then
\[
|\alpha - \beta|^2 \frac{n^3}{3} \max(1,|\alpha|, |\beta|)^{-2} n^{n-1} M(P)^{2n-2} > 1.
\]
So in fact
\[
|\alpha-\beta| > \sqrt{3} n^{-(n+2)/2} \max(1, |\alpha|, |\beta|) M(P)^{-(n-1)} \ge \sqrt{3} n^{-(n+2)/2} M(P)^{-(n-1)}.
\]
Applying this result for $P=g$ and $\alpha = \gamma_k$, $\beta = \gamma_i$ with $i \neq k$, we get that $|\gamma_k - \gamma_i| > R$ with $R = \sqrt{3} \delta^{-(\delta+2)/2} M(g)^{-(\delta-1)}$.
Then
\[
R < |\gamma_k - \gamma_i| \le |\gamma_k - \lambda| + |\lambda - \gamma_i| \le 2 |\lambda - \gamma_i|,
\]
so
\begin{equation} \label{eq:lambda-gammai-bound}
|\lambda - \gamma_i| > \frac{R}{2}
\end{equation}
for every $i \neq k$.
Applying Lemma \ref{lemma:distance-root-of-unity} and using $\log M(g) = \delta h(\gamma_k) \ge 0$, we obtain
\begin{equation} \label{eq:lambda-gammak-bound}
\begin{split}
\log |\lambda - \gamma_k| &> -c_0 \delta^2 \log(4\delta) \max\left(\log M(g), 4 \right) \log N \\
&\ge -c_1 \delta^2 \log(4\delta) \log(2M(g)) \log N.
\end{split}
\end{equation}
Let $A = \delta^2 \log(4\delta) \log(2M(g)) \log N$.
The bounds \eqref{eq:lambda-gammai-bound} and \eqref{eq:lambda-gammak-bound} together imply that
\begin{align*}
\log |g(\lambda)| &> (\delta-1) \log\left(\frac{R}{2}\right) - c_1 A \\
&= -(\delta-1) \log\left(\frac{2}{\sqrt{3}}\right) - \frac{(\delta-1)(\delta+2)}{2} \log \delta - (\delta-1)^2 \log M(g) - c_1 A.
\end{align*}
It is easy to check that the terms
\[
(\delta-1) \log\left(\frac{2}{\sqrt{3}}\right), \quad \frac{(\delta-1)(\delta+2)}{2} \log \delta, \quad (\delta-1)^2 \log M(g)
\]
are all smaller than $2A$, so $\log |g(\lambda)| > -(c_1 + 6) A = - c_2 A$.
This finishes the proof of \eqref{eq:g(lambda)-Mahler-bound}.
To prove the statement of the lemma, first note that we may assume that the gcd of the coefficients of $g$ is $1$.
Then every coefficient of $g$ has euclidean absolute value at most $H(g)$, so $M(g) \le \sqrt{\delta+1} H(g)$ (see Lemma 1.6.7 in \cite{Bombieri-Gubler}), hence
\begin{align*}
\log |g(\lambda)| &> - c_2 \delta^2 \log(4\delta) \log(2 \sqrt{\delta+1} H(g)) \log \max(N,2) \\
&\ge - 2 c_2 \delta^2 \log(4\delta)^2 \max(h(g),1) \log \max(N,2),
\end{align*}
because
\[
\log 2 + \frac{\log(\delta+1)}{2} + h(g) \le 2 \log(4\delta) \max(h(g),1).
\]
The statement of the lemma follows from $2 c_2 < 2^{35}$.
\end{proof}
If $\lambda$ is a primitive $N^{\textrm{th}}$ root of unity, then the degree of $\lambda$ over $\mathbb{Q}$ is $\varphi(N)$, where $\varphi$ denotes Euler's totient function.
During the proof of Theorem \ref{thm:main} we will need a lower bound for the degree of $\lambda$.
A more or less trivial bound would be $\varphi(N) \ge c(\varepsilon) N^{1-\varepsilon}$ for every positive $\varepsilon$.
We can actually do better.
\begin{proposition} \label{prop:lower-bound-phi}
If $N > 30$ is an integer, then $\varphi(N) > \frac{N}{3 \log \log N}$.
\end{proposition}
\begin{proof}
The statement can be easily verified case by case for $31 \le N \le 66$, so we may assume that $N \ge 67$.
Theorem 15 in \cite{Rosser-Schoenfeld:Approx_formulas} implies that
\[
\varphi(N) > \frac{N}{e^{\gamma} \log \log N + \frac{2.51}{\log \log N}}
\]
for $N \ge 3$, with $\gamma$ denoting the Euler constant.
Now $\log \log N \ge \log \log 67 > \sqrt{\frac{2.51}{3-e^{\gamma}}}$, hence $(3-e^{\gamma}) \log \log N > \frac{2.51}{\log \log N}$, therefore
\[
\varphi(N) > \frac{N}{e^{\gamma} \log \log N + \frac{2.51}{\log \log N}} > \frac{N}{3 \log \log N}.
\]
\end{proof}
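Both the case check for $31 \le N \le 66$ and the numerical inequality $\log \log 67 > \sqrt{2.51/(3-e^{\gamma})}$ used above can be verified directly; the following Python sketch (the totient routine is our own illustration, not from the source) carries out both checks.

```python
import math

def totient(n):
    # Euler's phi via trial-division factorization
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

# Case-by-case check for 31 <= N <= 66
for N in range(31, 67):
    assert totient(N) > N / (3 * math.log(math.log(N)))

# The bound fails at N = 30 (phi(30) = 8), which is why N > 30 is required
assert not (totient(30) > 30 / (3 * math.log(math.log(30))))

# Key inequality for N >= 67: log log 67 > sqrt(2.51 / (3 - e^gamma))
gamma = 0.57721566490153286  # Euler--Mascheroni constant
assert math.log(math.log(67)) > math.sqrt(2.51 / (3 - math.exp(gamma)))
```

The last assertion is tight: $\log\log 67 \approx 1.4364$ while $\sqrt{2.51/(3-e^{\gamma})} \approx 1.4350$.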
We will use the following upper bound for the class number $h(D)$.
\begin{proposition} \label{prop:class-number-bound}
If $D<0$ is an integer congruent to $0$ or $1$ modulo $4$, then
\[
h(D) < \frac{1}{\pi} \sqrt{|D|} (2 + \log|D|).
\]
\end{proposition}
\begin{proof}
The class number formula (see Theorem 10.1, Ch.\ 12 in \cite{Hua:Intro_to_Number_Theory}) says that
\[
h(D) = \frac{w \sqrt{|D|}}{2\pi} L_D(1),
\]
where $w$ is equal to $6$, $4$ and $2$ for $D=-3$, $D=-4$ and $D<-4$ respectively, and $L_D(1) = \sum_{n=1}^{\infty} \frac{1}{n} (\frac{D}{n})$.
If $D=-3$ or $-4$, then $h(D)=1$, thus the statement of the proposition is true.
So we may assume that $D<-4$.
Then $w=2$, so $h(D) = \frac{\sqrt{|D|}}{\pi} L_D(1)$.
Theorem 14.3, Ch.\ 12 in \cite{Hua:Intro_to_Number_Theory} says that $0 < L_D(1) < 2 + \log |D|$, which gives $h(D) < \frac{\sqrt{|D|}}{\pi}(2 + \log |D|)$.
\end{proof}
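For small discriminants the bound of Proposition \ref{prop:class-number-bound} can be checked against class numbers computed by counting reduced forms. A short Python sketch (the form-counting routine is our own illustration, using the standard reduction conditions):

```python
import math

def class_number(D):
    # Class number h(D) for D < 0, D = 0 or 1 mod 4, computed by counting
    # reduced primitive positive definite forms (a, b, c) with b^2 - 4ac = D,
    # -a < b <= a <= c, gcd(a, b, c) = 1, and b >= 0 whenever a = c.
    h, a = 0, 1
    while a * a <= -D / 3.0:
        for b in range(-a + 1, a + 1):
            num = b * b - D
            if num % (4 * a) == 0:
                c = num // (4 * a)
                if c >= a and math.gcd(math.gcd(a, abs(b)), c) == 1:
                    if not (a == c and b < 0):
                        h += 1
        a += 1
    return h

# Known values: h(-23) = 3, h(-163) = 1
assert class_number(-23) == 3 and class_number(-163) == 1

# The bound of the proposition, for all valid discriminants down to -400
for D in range(-3, -401, -1):
    if D % 4 in (0, 1):
        assert class_number(D) < math.sqrt(-D) / math.pi * (2 + math.log(-D))
```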
We will use the following auxiliary lemma in the proof of Theorem \ref{thm:main}.
\begin{lemma} \label{lemma:auxiliary}
Let $p > 0$, $q > e$ and $A > 10^4$ be real numbers such that
\[
p \le A \log q \qquad \textrm{and} \qquad \frac{q}{\log\log q} \le p \log p.
\]
Then $p < 2 A \log A$ and $q < 3 A (\log A)^2 \log \log A$.
\end{lemma}
\begin{proof}
Note that $0 < \frac{q}{\log \log q} \le p \log p$, so $p > 1$.
Since $p \log p$ is an increasing function of $p \in (1, \infty)$, we may assume that $p = A \log q$.
Then $\frac{q}{\log \log q} \le A (\log q) \log(A \log q)$, so $q \le G(q)$, where
\[
G(x) = A (\log x) (\log \log x) \log(A \log x).
\]
We claim that $G(x)/x$ is a strictly decreasing function of $x$ in the interval $(e^4, \infty)$.
Indeed, if $x > e^4$, then $\log(A \log x) \ge \log \log x > 1$, so
\[
G'(x) = \frac{A}{x} ((1+\log \log x)(1+\log(A\log x))-1) < \frac{4A}{x} (\log \log x) \log(A \log x),
\]
hence
\[
(G(x)/x)' = \frac{1}{x^2}(G'(x)x-G(x)) < \frac{A}{x^2} (4 - \log x) (\log \log x) \log(A \log x) < 0.
\]
We claim that $G(Q) < Q$, where $Q = 3 A (\log A)^2 \log \log A$.
This will prove the upper bound on $q$, because $G(Q)/Q < 1 \le G(q)/q$ implies $q < Q$, since $G(x)/x$ is a decreasing function in $(e^4, \infty)$, and $Q > e^4$.
So we need to show
\[
A (\log Q) (\log \log Q) \log(A \log Q) < 3 A (\log A)^2 \log \log A,
\]
or equivalently
\[
\frac{\log Q}{\log A} \cdot \frac{\log(A \log Q)}{\log A} \cdot \frac{\log \log Q}{\log \log A} < 3.
\]
One can easily check that $3 < A^{1/8}$, $\log A < A^{1/4}$ and $\log \log A < A^{1/8}$ for $A > 10^4$.
Thus $3 (\log A)^2 (\log \log A) < A^{3/4}$, hence $Q < A^{7/4}$ and $\frac{\log Q}{\log A} < \frac{7}{4}$.
Then
\[
\log Q < \frac{7}{4} \log A < A^{1/3},
\]
because $\frac{7}{4} < A^{1/12}$ and $\log A < A^{1/4}$ for $A>10^4$.
So $A \log Q < A^{4/3}$ and $\frac{\log(A \log Q)}{\log A} < \frac{4}{3}$.
We have
\[
\log \log Q < \log\left(\frac{7}{4} \log A\right) < \log\left((\log A)^{9/7}\right),
\]
because $\frac{7}{4} < (\log A)^{2/7}$ for $A>10^4$.
So $\frac{\log \log Q}{\log \log A} < \frac{9}{7}$.
This proves $q < Q$, because $\frac{7}{4} \cdot \frac{4}{3} \cdot \frac{9}{7} = 3$.
Finally, we have seen that $q < Q < A^{7/4} < A^2$, so $p = A \log q < 2 A \log A$.
\end{proof}
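The conclusion can be sanity-checked numerically by iterating to the fixed point $q = G(q)$ used in the proof: in the extremal case $p = A \log q$, the second hypothesis becomes $q = G(q)$, and since $G(x)/x$ is decreasing on $(e^4, \infty)$, monotone iteration from below converges to that fixed point. A small Python sketch (our own illustration, not from the source):

```python
import math

def extremal_q(A, iters=200):
    # Monotone iteration from below toward the fixed point q = G(q);
    # every iterate is <= the fixed point, so the asserted bounds apply.
    G = lambda x: (A * math.log(x) * math.log(math.log(x))
                   * math.log(A * math.log(x)))
    q = 100.0
    for _ in range(iters):
        q = G(q)
    return q

for A in (1.1e4, 1e6, 1e12):  # the lemma assumes A > 10^4
    q = extremal_q(A)
    p = A * math.log(q)
    # Conclusions of the lemma
    assert p < 2 * A * math.log(A)
    assert q < 3 * A * math.log(A) ** 2 * math.log(math.log(A))
```

For $A = 1.1 \cdot 10^4$, for example, the fixed point is about $5.7 \cdot 10^6$, against the bound $Q \approx 6.4 \cdot 10^6$, so the constant $3$ is not far from optimal for this argument.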
\section{Proof of Theorem \ref{thm:main}} \label{sec:proof}
One can show in the same way as in the proof of Theorem 2 in \cite{Paulin:Andre-Oort-P1xGm} that it is enough to prove the statement in the case when $K = \mathbb{Q}$ and $\mathbb{Z}+\mathbb{Z} \tau$ is an order.
So let $K = \mathbb{Q}$ and let $\mathcal{O} = \mathbb{Z} + \mathbb{Z} \tau$ be an order.
Then $d=1$ and
\[
C = 2^{36} \delta_2^3 (\log(4 \delta_2))^2 \max(h(F),1).
\]
The polynomial $g(Y) = F(\alpha,Y) \in \mathbb{Q}(\alpha)[Y]$ is nonzero, because the zero set of $F$ contains no vertical line.
Moreover $g(\lambda) = 0$, so $[\mathbb{Q}(\alpha,\lambda):\mathbb{Q}(\alpha)] \le \deg g \le \delta_2$.
This gives
\[
\varphi(N) = [\mathbb{Q}(\lambda):\mathbb{Q}] \le [\mathbb{Q}(\alpha,\lambda):\mathbb{Q}] = [\mathbb{Q}(\alpha,\lambda):\mathbb{Q}(\alpha)] \cdot [\mathbb{Q}(\alpha):\mathbb{Q}] \le \delta_2 \cdot h(\mathcal{O}).
\]
The discriminant of the order $\mathcal{O}$ is $\Delta = -4 (\Impart \tau)^2$.
We know that $\Delta<0$ and $\Delta$ is congruent to $0$ or $1$ modulo $4$.
Moreover $h(\mathcal{O}) = h(\Delta)$ (see e.g.\ Theorem 7.7 in \cite{Cox:Primes_of_the_form_x2+ny2}).
Using Proposition \ref{prop:class-number-bound}, we obtain that
\begin{equation} \label{eq:phi(N)-bound}
\varphi(N) \le \delta_2 h(\Delta) \le \frac{\delta_2}{\pi} \sqrt{|\Delta|}(2 + \log|\Delta|).
\end{equation}
Suppose $|\Delta| < 25$.
Then \eqref{eq:main-Q-Delta} is true.
If $N \le 30$, then \eqref{eq:main-Q-N} is also true.
If $N > 30$, then using Proposition \ref{prop:lower-bound-phi}, we obtain
\[
\sqrt{N} < \frac{N}{3\log\log N} < \varphi(N) < \frac{\delta_2}{\pi} 5 (2 + \log 25) < 9 \delta_2,
\]
which implies $N < 81 \delta_2^2$.
This proves \eqref{eq:main-Q-N} if $|\Delta| < 25$.
From now on we assume that $|\Delta| \ge 25$.
Since $|\Delta| \ge 25$, we have $\Impart \tau = \frac{\sqrt{|\Delta|}}{2} \ge \frac{5}{2} > \frac{1}{2\pi} \log 6912$, and from \cite[Prop.\ 3.1]{Paulin:Andre-Oort-P1xGm}, we deduce as in \cite{Paulin:Andre-Oort-P1xGm} that $\frac{|\alpha|}{e^{2\pi \Impart \tau}} \in [\frac{1}{2},2]$.
Taking logarithms leads to $\log|\alpha| \ge 2\pi \Impart \tau - \log 2 > 6 \Impart \tau$, because $\Impart \tau \ge \frac{5}{2} > \frac{\log 2}{2\pi - 6}$.
Substituting $\Impart \tau = \frac{\sqrt{|\Delta|}}{2}$ we obtain
\begin{equation} \label{eq:Delta-log-alpha-bound}
3 \sqrt{|\Delta|} < \log|\alpha|.
\end{equation}
We multiply $F$ by a nonzero rational number, so that $F$ will have integer coefficients with gcd equal to $1$.
Then the maximum of the euclidean absolute values of the coefficients of $F$ is $H(F) = e^{h(F)}$.
Let $F = \sum_{i=0}^{\delta_1} g_i(Y) X^i$, where $g_i(Y) \in \mathbb{Z}[Y]$.
Here each $g_i$ has degree at most $\delta_2$.
Since $F(X,\lambda) \in \mathbb{C}[X]$ is a nonzero polynomial, $g_i(\lambda) \neq 0$ for some $i$.
Let $m$ be the maximal such $i$.
It is proved in \cite{Paulin:Andre-Oort-P1xGm} that
\[
|g_m(\lambda)| < \frac{(\delta_2+1) H(F)}{|\alpha|-1}.
\]
Since $|\Delta| \ge 25$, inequality \eqref{eq:Delta-log-alpha-bound} implies that $\log|\alpha|>15$, hence $|\alpha|>e^{15}>2$.
Thus
\[
\frac{(\delta_2+1) H(F)}{|\alpha|-1} \le \frac{4\delta_2 H(F)}{|\alpha|},
\]
therefore
\begin{equation} \label{eq:gm-lambda-upper-bound}
\log |g_m(\lambda)| < \log(4\delta_2) + h(F) - \log|\alpha|.
\end{equation}
On the other hand, we can apply Lemma \ref{lemma:lower-bound-poly-root-of-unity} for $g_m$ and $\lambda$.
Since $\deg g_m \le \delta_2$, and the coefficients of $g_m$ have euclidean absolute values at most $H(F) = e^{h(F)}$, we have $h(g_m) \le h(F)$, and Lemma \ref{lemma:lower-bound-poly-root-of-unity} says
\begin{equation} \label{eq:gm-lambda-lower-bound}
\log |g_m(\lambda)| > -2^{35} \delta_2^2 (\log(4\delta_2))^2 \max(h(F),1) \log \max(N,2).
\end{equation}
The inequalities \eqref{eq:Delta-log-alpha-bound}, \eqref{eq:gm-lambda-upper-bound} and \eqref{eq:gm-lambda-lower-bound} together imply
\begin{equation} \label{eq:final1}
\begin{split}
3 \sqrt{|\Delta|} &< 2^{35} \delta_2^2 (\log(4\delta_2))^2 \max(h(F),1) \log \max(N,2) + \log(4\delta_2) + h(F) \\
&< (2^{35}+2) \delta_2^2 (\log(4\delta_2))^2 \max(h(F),1) \log \max(N,2).
\end{split}
\end{equation}
If $N \le 30$, then we get
\[
\sqrt{|\Delta|} < 2^{36} \delta_2^2 (\log(4\delta_2))^2 \max(h(F),1) = \frac{1}{\delta_2} C < \frac{1}{\delta_2} C \log C,
\]
hence both \eqref{eq:main-Q-Delta} and \eqref{eq:main-Q-N} are true.
From now on we assume that $N > 30$.
Applying \eqref{eq:phi(N)-bound} and Proposition \ref{prop:lower-bound-phi}, we get
\begin{equation} \label{eq:final2}
\frac{N}{3\log\log N} < \frac{\delta_2}{\pi} \sqrt{|\Delta|}(2 + \log|\Delta|).
\end{equation}
Let $p = \frac{12}{\pi} \delta_2 \sqrt{|\Delta|}$ and $q = 2 N$, then \eqref{eq:final1} and \eqref{eq:final2} imply
\[
p < A \log q \qquad \textrm{and} \qquad
\frac{q}{\log\log q} < p \log p
\]
with $A = \frac{4}{\pi} (2^{35}+2) \delta_2^3 (\log(4\delta_2))^2 \max(h(F),1)$.
Applying Lemma \ref{lemma:auxiliary}, we obtain
\[
p < 2 A \log A \qquad \textrm{and} \qquad q < 3 A (\log A)^2 \log \log A.
\]
Then $\sqrt{|\Delta|} < \frac{\pi}{6 \delta_2} A \log A$ and $N < \frac{3}{2} A (\log A)^2 \log \log A$.
The inequalities \eqref{eq:main-Q-Delta} and \eqref{eq:main-Q-N} follow from these, because $\frac{\pi}{6} A < A < \frac{3}{2} A < C$.
\subsection*{Acknowledgements}
This paper has its origins in the author's Ph.D.\ studies under the supervision of Gisbert W\"ustholz at ETH Z\"urich.
Therefore the author thanks Gisbert W\"ustholz for introducing him to this field, and for all the helpful discussions.
\bibliographystyle{hplain}
\section{Introduction}
In much of machine learning, the central computational challenge is optimization: we try to minimize some training-set loss with respect to a set of model parameters.
If we treat the training loss as a negative log-posterior, this amounts to searching for a maximum \emph{a posteriori} (MAP) solution.
Paradoxically, over-zealous optimization can yield worse test-set results than incomplete optimization due to the phenomenon of \emph{over-training}.
A popular remedy to over-training is to invoke ``early stopping'' in which optimization is halted based on the continually monitored performance of the parameters on a separate validation set.
However, early stopping is both theoretically unsatisfying and incoherent from a research perspective: how can one rationally design better optimization methods if the goal is to achieve something ``powerful but not \emph{too} powerful''?
A related trick is to ensemble the results from multiple optimization runs from different starting positions.
Similarly, this must rely on imperfect optimization, since otherwise all optimization runs would reach the same optimum.
\begin{figure}[t]
\vskip 0.434in
\begin{center}
\includegraphics[width=\columnwidth]{fig_1.pdf}
\vskip -0.043in
\caption{A series of variational distributions implicitly defined by gradient descent on the log-likelihood of the posterior.
Intermediate distributions (green and blue) are implicitly defined by mapping each possible random initial parameters through many iterations of optimization.
These distributions don't have a fixed parametric shape, and will eventually concentrate around the mode.}
\label{fig:cartoon}
\end{center}
\end{figure}
We propose an interpretation of incomplete optimization in terms of variational Bayesian inference, and provide a simple method for estimating the marginal likelihood of the approximate posterior.
Our starting point is a Bayesian posterior distribution for a potentially complicated model, in which there is an empirical loss that can be interpreted as a negative log likelihood and regularizers that have interpretations as priors.
One might proceed with MAP inference, and perform an optimization to find the best parameters.
The main idea of this paper is that such an optimization procedure, initialized according to some distribution that can be chosen freely, generates a sequence of distributions that are implicitly defined by the action of the optimization update rule on the previous distribution.
We can treat these distributions as variational approximations to the true posterior distribution.
A single optimization run for $N$ iterations represents a draw from the $N$th such distribution in the sequence.
Figure \ref{fig:cartoon} shows contours of these approximate distributions on an example posterior.
With this interpretation, the number of optimization iterations can be seen as a variational parameter, one that trades off fitting the data well against maintaining a broad (high entropy) distribution.
Early stopping amounts to optimizing the variational lower bound (or an approximation based on a validation set) with respect to this variational parameter.
Ensembling different random restarts can be viewed as taking independent samples from the variational posterior.
To establish whether this viewpoint is helpful in practice, we ask: can we efficiently estimate the marginal likelihood implied by unconverged optimization?
We tackle this question in section \ref{sec:techintro}.
Specifically, for stochastic gradient descent (SGD), we show how to compute an unbiased estimate of a lower bound on the log marginal likelihood of each iteration's implicit variational distribution.
We also introduce an `entropy-friendly' variant of SGD that maintains better-behaved implicit distributions.
We also ask whether model selection based on these marginal likelihood estimates picks models with good test-time performance.
We give some experimental evidence in both directions in section \ref{sec:experiments}.
A related question is how close the variational distributions implied by various optimization rules approximate the true posterior.
We briefly address this question in section \ref{sec:limitations}.
\subsection{Contributions}
\begin{itemize}
\item We introduce a new interpretation of optimization algorithms as samplers from a variational distribution that adapts to the true posterior, eventually collapsing around its modes.
\item We provide a scalable estimator for the entropy of these implicit variational distributions, allowing us to estimate a lower bound on the marginal likelihood of any model whose posterior is twice-differentiable, even on problems with millions of parameters and data points.
\item In principle, this marginal likelihood estimator can be used for hyperparameter selection and early stopping without the need for a validation set.
We investigate the performance of these estimators empirically on neural network models, and show that they have reasonable properties.
However, further refinements are likely to be necessary before this marginal likelihood estimator is more practical than using a validation set.
\end{itemize}
\section{Incomplete optimization as variational inference}
\label{sec:techintro}
Variational inference \citep{wainwright2008graphical}
aims to approximate an intractable posterior distribution, $p(\params | \data)$, with another more tractable distribution, $q(\mathbf{\theta})$.
The usual measure of the quality of the approximation is the Kullback-Leibler (KL) divergence from $q(\mathbf{\theta})$ to $p(\params | \data)$.
This measure provides a lower bound on the marginal likelihood of the original model;
applying Bayes' rule to the definition of $\KL{q(\mathbf{\theta})}{p(\params | \data)}$ gives the familiar inequality:
\begin{align}
\log p(\vx)
& \geq - \underbrace{\expectargs{q(\mathbf{\theta})}{ -\log p(\params , \data) }}_{\textnormal{\normalsize Energy $E[q]$}}
\underbrace{- \expectargs{q(\mathbf{\theta})}{\log q(\mathbf{\theta})}}_{\textnormal{\normalsize Entropy $S[q]$}} \nonumber \\
& := \mathcal{L}[q] \label{eq:varbound}
\end{align}
Maximizing $\mathcal{L}[q]$, the variational lower bound on the marginal likelihood, with respect to $q$ minimizes $\KL{q(\mathbf{\theta})}{p(\params | \data)}$, the KL divergence from $q$ to the true posterior, giving the closest approximation available within the variational family.
A convenient side effect is that we also get a lower bound on $p(\vx)$, which can be used for model selection.
To perform variational inference, we require a family of distributions over which to maximize $\mathcal{L}[q]$.
Consider a general procedure to minimize the energy~$(-\logp(\params , \data))$ with respect to~${\mathbf{\theta} \in \mathbb{R}^D}$.
The parameters~$\mathbf{\theta}$ are initialized according to some distribution~$q_0(\mathbf{\theta})$ and updated at each iteration according to a transition operation~${T : \mathbb{R}^D \rightarrow \mathbb{R}^D}$:
\begin{align}
\mathbf{\theta}_0 &\sim q_0(\mathbf{\theta}) \nonumber \\
\mathbf{\theta}_{t + 1} &= T(\mathbf{\theta}_t). \nonumber
\end{align}
Our variational family consists of the sequence of distributions~$q_0, q_1, q_2, \ldots$,
where~$q_t(\mathbf{\theta})$ is the distribution over~$\mathbf{\theta}_t$ generated by the above procedure.
These distributions don't have a closed form, but we can exactly sample from $q_t$ by simply running the optimizer for $t$ steps starting from a random initialization.
As shown in (\ref{eq:varbound}), $\mathcal{L}$ consists of an energy term and an entropy term.
The energy term measures how well~$q$ fits the data and the entropy term encourages the probability mass of~$q$ to spread out, preventing overfitting.
As optimization of~$\mathbf{\theta}$ proceeds from its~$q_0$-distributed starting point, we can examine how~$\mathcal{L}$ changes.
The negative energy term grows, since the goal of the optimization is to reduce the energy.
The entropy term shrinks because the optimization converges over time.
Optimization thus generates a sequence of distributions that range from underfitting to overfitting, and the variational lower bound captures this tradeoff.
We cannot evaluate $\mathcal{L}[q_t]$ exactly, but we can obtain an unbiased estimator.
Sampling~$\mathbf{\theta}_0$ from~$q_0$ and then applying the transition operator~$t$ times produces an exact sample~$\mathbf{\theta}_t$ from~$q_t$, by definition.
Since $\mathbf{\theta}_t$ is an exact sample from $q_t(\theta)$, $\log\subjointdist{}{t}$ is an unbiased estimator of the energy term of (\ref{eq:varbound}).
The entropy term is trickier, since we do not have access to the density $q(\mathbf{\theta})$ directly.
However, if we know the entropy of the initial distribution, $S[q_0(\mathbf{\theta})]$, then we can estimate $S[q_t(\mathbf{\theta})]$ by tracking the change in entropy at each iteration, calculated by the change of variables formula.
To compute how the volume shrinks or expands due to an iteration of the optimizer, we require access to the Jacobian of the optimizer's transition operator, $J(\mathbf{\theta})$:
\begin{align}
S[q_{t+1}] - S[q_t] =
\mathbb{E}_{q_t(\mathbf{\theta}_t)} \big[ \log
\left| J(\mathbf{\theta}_t) \right| \big] \,.
\end{align}
Note that this analysis assumes that the mapping $T$ is bijective.
Combining these terms, we have an unbiased estimator of $\mathcal{L}$ at iteration $T$,
based on the sequence of parameters, $\mathbf{\theta}_0, \ldots, \mathbf{\theta}_T$, from a single training run:
\begin{align}
\mathcal{L}[q_T] \approx
\underbrace{\log \subjointdist{}{T}}_{\textnormal{\normalsize Energy}} +
\underbrace{\sum_{t=0}^{T-1} \log \left| J(\mathbf{\theta}_t) \right| + S[q_0]}_{\textnormal{\normalsize Entropy}} \,.
\label{eq:entropy-bound}
\end{align}
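As a sanity check on this bookkeeping: on a one-dimensional quadratic loss, gradient descent is a linear map, so $q_t$ stays Gaussian and the entropy tracked through the per-step Jacobian terms can be compared with the exact entropy of $q_T$ in closed form. A minimal Python sketch (our own illustration; the loss, step size and horizon are arbitrary choices):

```python
import math

# Quadratic loss L(theta) = 0.5 * lam * theta^2; one gradient-descent step
# is the linear map theta -> (1 - alpha*lam) * theta, so a Gaussian q_0
# with std sigma0 stays Gaussian with std sigma0 * |1 - alpha*lam|^T.
lam, alpha, sigma0, T = 2.0, 0.1, 3.0, 50   # alpha*lam = 0.2 < 1

S0 = 0.5 * (1 + math.log(2 * math.pi)) + math.log(sigma0)  # entropy of q_0

# Change-of-variables tracking: the Jacobian of one step is 1 - alpha*lam,
# so each step adds log|1 - alpha*lam| to the entropy.
S_tracked = S0 + T * math.log(abs(1 - alpha * lam))

# Exact entropy of q_T, a Gaussian with the shrunken standard deviation
S_exact = (0.5 * (1 + math.log(2 * math.pi))
           + math.log(sigma0 * abs(1 - alpha * lam) ** T))

assert abs(S_tracked - S_exact) < 1e-9
```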
\section{The entropy of stochastic gradient descent}
In this section, we give an unbiased estimate for the change in entropy caused by SGD updates.
We'll start with a na\"ive method, then in section \ref{sec:scalable-estimator}, we give an approximation that scales linearly with the number of parameters in the model.
Stochastic gradient descent is a popular and effective optimization procedure with the following update rule:
\begin{align}
\mathbf{\theta}_{t+1} &=
\mathbf{\theta}_t - \alpha \nabla L(\params_t),
\end{align}
where $L(\params)$ is the objective loss (or an unbiased estimate of it, e.g.\ computed on minibatches), for example~$-\logp(\params , \data)$, and $\alpha$ is a `step size' hyperparameter.
Taking the Jacobian of this update rule gives the following unbiased estimator
for the change in entropy at each iteration:
\begin{align}
S[q_{t+1}] - S[q_t] \approx \log \left| I - \alpha H_t(\mathbf{\theta}_t)
\right| \label{eq:exact hessian}
\end{align}
where $H_t$ is the Hessian of $-\log\subjointdist{t}{}$ with respect to~$\mathbf{\theta}$.
Note that the Hessian does not need to be positive definite or even non-singular.
If some directions in $\mathbf{\theta}$ have negative curvature, as on the crest of a hill, it just means that optimization near there spreads out probability mass, increasing the entropy.
There are, however, restrictions on $\alpha$.
If ${\alpha\lambda_i = 1}$, for any $i$, where $\lambda_i$ are the eigenvalues of $H_t$, then the change in entropy will be undefined (infinitely negative).
This corresponds to a Newton-like update where multiple points collapse to the optimum in a single step giving a distribution with zero variance in a particular direction.
However, gradient descent is unstable anyway if ${\alpha\lambda_{\text{max}} > 2}$, where~$\lambda_{\text{max}}$ is the largest eigenvalue of~$H_t$.
So if we choose a sufficiently conservative step size, such that $\alpha\lambda_{\text{max}} < 1$,
this situation should not arise.
Algorithm~\ref{alg:sgd-with-estimate} combines these steps into an algorithm that tracks the approximate entropy during optimization.
\begin{algorithm}[t]
\caption{stochastic gradient descent with entropy estimate}
\label{alg:sgd-with-estimate}
\begin{algorithmic}[1]
\State {\bfseries input:}
Weight initialization scale $\sigma_0$, step size $\alpha$,
twice-differentiable negative log-likelihood $L(\mathbf{\theta}, t)$
\State {\bfseries initialize} $\mathbf{\theta}_0 \sim \N{0}{\sigma_0 \mathbf{I}_D}$
\State {\bfseries initialize} $S_{0} = \frac{D}{2} (1 + \log 2 \pi) + D \log\sigma_0$
\For{$t=1$ {\bfseries to} $T$}
\State $S_{t} = S_{t-1} + \log \left| \mathbf{I} - \alpha H_{t-1} \right|$\Comment{Update entropy} \label{step:entropy-update}
\State $\mathbf{\theta}_{t} = \mathbf{\theta}_{t-1} - \alpha \nabla L(\mathbf{\theta}_{t-1}, t)$ \Comment{Update parameters}
\EndFor
\State \textbf{output} sample $\mathbf{\theta}_T$, entropy estimate $S_T$
\end{algorithmic}
\end{algorithm}
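A minimal NumPy rendering of Algorithm~\ref{alg:sgd-with-estimate} on a toy quadratic loss may make the bookkeeping concrete (the dimension, step size and Hessian below are our own illustrative choices; the exact log-determinant is affordable here only because $D$ is small):

```python
import numpy as np

rng = np.random.default_rng(0)
D, sigma0, alpha, T = 5, 2.0, 0.05, 100

# Toy negative log-likelihood L(theta) = 0.5 * theta^T H theta with a fixed
# positive definite Hessian, rescaled so lambda_max = 1 and alpha*lambda_max < 1.
A = rng.standard_normal((D, D))
H = A @ A.T + np.eye(D)
H = H / np.linalg.eigvalsh(H).max()

theta = sigma0 * rng.standard_normal(D)                   # theta_0 ~ q_0
S = D / 2 * (1 + np.log(2 * np.pi)) + D * np.log(sigma0)  # entropy of q_0
for t in range(T):
    # Hessian is constant for a quadratic, but in general it would be
    # re-evaluated at theta_{t-1} on every iteration.
    _, logdet = np.linalg.slogdet(np.eye(D) - alpha * H)  # log|I - alpha H|
    S += logdet                                           # entropy update
    theta = theta - alpha * (H @ theta)                   # parameter update

# Every eigenvalue of I - alpha*H lies in (0, 1) here, so the tracked
# entropy must have decreased from its initial value.
assert S < D / 2 * (1 + np.log(2 * np.pi)) + D * np.log(sigma0)
```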
So far, we have treated SGD as a deterministic procedure even though, as the name suggests,
the gradient of the loss at each iteration may be replaced by a stochastic
version. Our analysis of the entropy is technically valid if we fix the sequence of stochastic gradients to be the same for each optimization run, so that the only randomness comes from the parameter initialization.
This is a tenuous argument, similar to arguing that a pseudorandom sequence of numbers has only as much entropy as its seed.
However, if we do choose to randomize the gradient estimator differently for each training run
(e.g. choosing different minibatches) then the expression for the change in entropy, Equation \ref{eq:exact hessian}, remains valid as a \emph{lower bound} on the change in entropy and the
subsequent calculation of $\mathcal{L}$ remains a true lower bound on the log marginal likelihood.
\subsection{Estimating the Jacobian in high dimensions}
\label{sec:scalable-estimator}
The expression for the change in entropy given by (\ref{eq:exact hessian}) is impractical for large-scale problems since it requires an~$\bigo{D^3}$ determinant computation.
Fortunately, we can make a good approximation using just one or two Hessian-vector products, which can usually be performed in~$\bigo{D}$ time using reverse-mode differentiation \citep{pearlmutter1994fast}.
The idea is that since~$\alpha\lambda_{\text{max}}$ is small, the Jacobian is actually just a small perturbation to the identity and we can approximate its determinant using traces as follows:
\begin{align}
\log \left| I - \alpha H \right|
& = \sum_{i=1}^D \log\left(1 - \alpha\lambda_i\right) \nonumber\\
& \geq \sum_{i=1}^D \left[- \alpha\lambda_i
- (\alpha\lambda_i)^2 \right] \label{eq:logbound} \\
& = - \alpha \trace{H} - \alpha^2 \trace{HH}\,.
\end{align}
The bound in (\ref{eq:logbound}) is just a second order Taylor expansion of~$\log(1 - x)$ about~${x = 0}$ and is valid if ${\alpha\lambda_i < 0.68}$.
As we argue above, the regime in which SGD is stable requires that $\alpha\lambda_{\text{max}} < 1$, so again choosing a conservative learning rate keeps this bound in the correct direction.
For sufficiently small learning rates, this bound becomes tight.
The trace of the Hessian can be estimated using inner products of random vectors
\citep{bai1996some}:
\begin{align}
\trace{H} = \expectargs{}{\mathbf{r}^TH\mathbf{r}}, \qquad \mathbf{r} \sim \N{0}{I}\,.
\label{eq:approx-log-det}
\end{align}
We use this identity to derive algorithm~\ref{alg:fast-logdet-estimate}.
In high dimensions, the exact evaluation of the determinant in step~\ref{step:entropy-update} should be replaced with the approximation given by algorithm~\ref{alg:fast-logdet-estimate}.
\begin{algorithm}[t]
\caption{linear-time estimate of log-determinant of Jacobian of one iteration of stochastic gradient descent}
\label{alg:fast-logdet-estimate}
\begin{algorithmic}[1]
\State {\bfseries input:}
step size $\alpha$, current parameter vector $\mathbf{\theta}$,
twice-differentiable negative log-likelihood $L(\mathbf{\theta})$
\State {\bfseries initialize} $\mathbf{r}_0 \sim \N{0}{\sigma_0 \mathbf{I}_D}$
\State $\mathbf{r}_1 = \mathbf{r}_0 - \alpha \mathbf{r}_0^{\mathsf{T}} \nabla \nabla L(\mathbf{\theta}, t)$
\State $\mathbf{r}_2 = \mathbf{r}_1 - \alpha \mathbf{r}_1^{\mathsf{T}} \nabla \nabla L(\mathbf{\theta}, t)$
\State $\hat{\mathcal{L}} = \mathbf{r}_0^{\mathsf{T}} \left( -2 \mathbf{r}_0 + 3 \mathbf{r}_1 - \mathbf{r}_2 \right)$
\State \textbf{output} $\hat{\mathcal{L}}$, an unbiased estimate of a parabolic lower bound on the change in entropy.
\end{algorithmic}
\end{algorithm}
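The combination on the last line of Algorithm~\ref{alg:fast-logdet-estimate} can be checked directly: writing $\mathbf{r}_1 = (I - \alpha H)\mathbf{r}_0$ and $\mathbf{r}_2 = (I - \alpha H)^2\mathbf{r}_0$ gives $\mathbf{r}_0^{\mathsf{T}}(-2\mathbf{r}_0 + 3\mathbf{r}_1 - \mathbf{r}_2) = -\alpha\,\mathbf{r}_0^{\mathsf{T}} H \mathbf{r}_0 - \alpha^2\,\mathbf{r}_0^{\mathsf{T}} H^2 \mathbf{r}_0$, whose expectation over $\mathbf{r}_0 \sim \N{0}{I}$ is exactly the trace bound above. A quick numerical sketch (our own check, with an arbitrary small symmetric Hessian):

```python
import numpy as np

rng = np.random.default_rng(1)
D, alpha = 8, 0.05

M = rng.standard_normal((D, D))
H = (M + M.T) / 2                              # a symmetric "Hessian"
H = H / np.abs(np.linalg.eigvalsh(H)).max()    # so alpha*|lambda_i| <= 0.05

r0 = rng.standard_normal(D)
r1 = r0 - alpha * (H @ r0)                     # one Hessian-vector product
r2 = r1 - alpha * (H @ r1)                     # a second one
est = r0 @ (-2 * r0 + 3 * r1 - r2)

# Deterministic identity behind the estimator
exact = -alpha * (r0 @ H @ r0) - alpha**2 * (r0 @ (H @ (H @ r0)))
assert np.isclose(est, exact)

# Its expectation, -alpha*tr(H) - alpha^2*tr(H^2), lower-bounds log|I - alpha H|
# whenever every alpha*lambda_i < 0.68, which holds by construction here.
mean = -alpha * np.trace(H) - alpha**2 * np.trace(H @ H)
_, logdet = np.linalg.slogdet(np.eye(D) - alpha * H)
assert mean <= logdet
```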
Note that the quantity we are estimating \eqref{eq:exact hessian} is well-conditioned, in contrast to the related problem of computing the log of the determinant of the Hessian itself.
This arises, for example, in making the Laplace approximation to the posterior \citep{mackay1992practical}.
This is a much harder problem since the Hessian can be arbitrarily ill-conditioned, unlike our small Hessian-based perturbation to the identity.
\subsection{Parameter initialization, priors, and objective functions}
\label{sec:priors}
What initial parameter distribution should we use for SGD?
The marginal likelihood estimate given by \eqref{eq:entropy-bound} is valid no matter which initial distribution we choose.
We could conceivably optimize this distribution in an outer loop using the marginal likelihood estimate itself.
However, using the prior distribution has several advantages.
First, it is usually designed to have broader support than the likelihood.
Since SGD usually decreases entropy, starting with a high-entropy distribution
is a good heuristic.
The second advantage has to do with our choice of objective function.
The obvious choice is the (unnormalized, negative) log-posterior, but we can actually use any function we like.
A more sensible choice is the negative log-likelihood: if we initialize according to the prior and optimize the likelihood alone, the variational distributions only differ from the
initial distribution to the extent that the posterior differs from the prior.
One nice implication is that the entropy estimate will be exactly correct for parameters that don't affect the likelihood.
Because of these favorable properties, we use these choices for the initial distribution and objective in our experiments.
\section{Designing entropy-friendly optimization methods}
\label{sec:entropy friendly}
SGD optimizes the training loss, not the variational lower bound.
In some sense, if this optimization happens to create a good variational distribution, it's only by accident.
Why not design a new optimization method that produces good variational lower bounds?
In place of SGD, we can use any optimization method for which we can approximate the change in entropy, which in practice means any optimization for which we can compute Jacobian-vector products.
An obvious place to start is with stochastic update rules inspired by Markov Chain Monte Carlo (MCMC).
Procedures like Hamiltonian Monte Carlo \citep{neal2011mcmc} and Langevin dynamics MCMC \citep{welling2011bayesian} look very much like optimization procedures but actually have the posterior as their stationary distribution.
This is exactly the approach taken by \citet{Bridging14}.
One difficulty with using stochastic updates, however, is that calculating the change in
entropy at each iteration requires access to the current distribution over parameters.
As an example, consider that convolving a delta function with a Gaussian yields an
infinite entropy increase, whereas convolving a broad uniform distribution with a Gaussian
yields only a small increase in entropy. \citet{welling2011bayesian} handle this
by learning a highly parameterized ``inverse model'' which implicitly models the distribution
over parameters. The downside of this approach is that the parameters of this model must be learned in an outer loop.
\begin{figure}
\begin{center}
\includegraphics[width=0.9\columnwidth]{fig_2.pdf}
\caption{The variational distribution implied by the modified, ``entropy-friendly'', SGD algorithm.
Compared to Figure \ref{fig:cartoon}, the variational distributions are slower to collapse into low-entropy filaments, causing the marginal likelihood to remain higher.}
\label{fig:cartoon-fatter}
\end{center}
\end{figure}
Another approach is to try to develop deterministic update rules
that avoid some of the pathologies of update rules like SGD.
This could be a research agenda in itself, but we give one example here of a modification to
SGD which can improve the variational lower bound.
One problem with SGD in the context of posterior approximation is that
SGD can collapse the variational distribution into low-entropy filaments, shrinking in some directions to be orders of magnitude smaller than the width of the true posterior.
A simple trick to prevent this is to apply a nonlinear, parameter-wise warping
to the gradient, such that directions of very small gradient do not get optimized all the way
to the optimum.
For example, the modified gradient (and resulting modified Jacobian) could be
\begin{align}
g' & = g - g_0 \tanh \left(g / g_0 \right) \\
J' & = \left(1 - \cosh^{-2} (g / g_0) \right) J
\end{align}
where $g_0$ is a ``gradient threshold'' parameter that sets the scale of this shrinkage.
The effect is that entropy is not removed from parameters which are close to their optimum.
An example showing the effect of this entropy-friendly modification is shown in Figure~\ref{fig:cartoon-fatter}.
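A parameter-wise sketch of this warping (our own illustration; $g_0$ below is the gradient threshold from the text):

```python
import numpy as np

def warp(g, g0):
    # Entropy-friendly shrinkage: g' = g - g0*tanh(g/g0), applied
    # element-wise, with the diagonal Jacobian factor 1 - sech^2(g/g0).
    gp = g - g0 * np.tanh(g / g0)
    jac = 1 - 1 / np.cosh(g / g0) ** 2
    return gp, jac

g0 = 1.0
g = np.array([1e-3, 0.1, 1.0, 10.0])
gp, jac = warp(g, g0)

# Small gradients are suppressed cubically (g' ~ g^3 / (3*g0^2)), so nearly
# converged directions stop moving and keep their entropy...
assert gp[0] < 1e-8
# ...while large gradients are only shifted by roughly g0.
assert abs(gp[3] - (g[3] - g0)) < 1e-3
# The Jacobian factor stays in (0, 1), so log|J'| remains finite.
assert np.all((0 < jac) & (jac < 1))
```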
\section{Experiments}
\label{sec:experiments}
In this section we show that the marginal likelihood estimate can be used to choose when to stop training, to choose model capacity, and to optimize training hyperparameters without the need for a validation set.
We are not attempting to motivate SGD variational inference as a superior alternative to other procedures;
we simply wish to give a proof of concept that the marginal likelihood estimator has reasonable properties.
Further refinements are likely to be necessary before this marginal likelihood estimator is more practical than simply using a validation set.
\subsection{Choosing when to stop optimization}
As a simple demonstration of the usefulness of our marginal likelihood estimate, we show that it can be used to estimate the optimal number of training iterations before overfitting begins.
We performed regression on the Boston housing dataset
using a neural network with one hidden layer having 100 hidden units, sigmoidal activation functions, and no regularization.
Figure \ref{fig:housing} shows overfitting, and that the marginal likelihood peaks at a similar place to the held-out log-likelihood, which is where early stopping would occur when using a large validation set.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=\columnwidth]{fig_3.pdf}
\vskip -0.1in
\caption{\emph{Top}: Training and test-set error on the Boston housing dataset.
\emph{Bottom}: Stochastic gradient descent marginal likelihood estimates.
The dashed line indicates the iteration with highest marginal likelihood.
The marginal likelihood, estimated online using only the training set, and the
test error peak at a similar number of iterations.}
\label{fig:housing}
\end{center}
\end{figure}
\subsection{Choosing the number of hidden units}
The marginal likelihood estimate is also comparable between training runs, allowing us to use it to select model hyperparameters, such as the number of hidden units.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=\columnwidth]{fig_4.pdf}
\vskip -0.1in
\caption{\emph{Top}: Training and test-set likelihood as a function of the number of hidden units in the first layer of a neural network.
\emph{Bottom}: Stochastic gradient descent marginal likelihood estimates.
In this case, the marginal likelihood over-penalizes high numbers of hidden units.
}
\label{fig:num hiddens}
\end{center}
\end{figure}
Figure \ref{fig:num hiddens} shows marginal likelihood estimates as a function of the number of hidden units in the hidden layer of a neural network trained on 50,000 MNIST handwritten digits.
The largest network trained in this experiment contains 2 million parameters.
The marginal likelihood estimate begins to decrease beyond 30 hidden units, even though the test-set likelihood is maximized at 300 hidden units.
We conjecture that this is due to the marginal likelihood estimate penalizing the loss of entropy in parameters whose contribution to the likelihood was initially large but which were made irrelevant later in the optimization.
\subsection{Optimizing training hyperparameters}
We can also use marginal likelihoods to optimize training parameters such as learning rates, initial distributions, or any other optimization parameters.
As an example, Figure \ref{fig:threshold} shows the marginal likelihood estimate as a function of the gradient threshold in the entropy-friendly SGD algorithm from section \ref{sec:entropy friendly} trained on 50,000 MNIST handwritten digits.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=\columnwidth]{fig_5.pdf}
\vskip -0.1in
\caption{\emph{Top}: Training and test-set likelihood as a function of the gradient threshold.
\emph{Bottom}: Marginal likelihood as a function of the gradient threshold.
A gradient threshold of zero corresponds to standard SGD.
The increased lower bound for non-zero thresholds indicates that the entropy-friendly variant of SGD is producing a better implicit variational distribution.}
\label{fig:threshold}
\end{center}
\end{figure}
As the level of thresholding increases, the training and test error get worse due to under-fitting.
However, for intermediate thresholds, the lower bound increases.
Because it is a lower bound, its increase means that the estimate of the marginal likelihood is becoming \emph{more accurate}, even though the actual model happens to be getting worse at the same time.
\subsection{Implementation details}
To allow easy computation of Hessian-vector products in arbitrary models, we implemented
a reverse-mode automatic differentiation package for Python, available at \url{github.com/HIPS/autograd}.
This package operates on standard Numpy~\citep{oliphant2007python} code, and can differentiate code containing loops, branches, and even its own gradient evaluations.
Code for all experiments in this paper is available at \url{github.com/HIPS/maxwells-daemon}.
\section{Limitations}
\label{sec:limitations}
In practice, the marginal likelihood estimate we present might not be useful for several reasons.
First, using only a single sample to estimate both the expected likelihood as well as the entropy of an entire distribution will necessarily have high variance under some circumstances.
These problems could conceivably be addressed by ensembling, which has an interpretation as taking multiple exact independent samples from the implicit variational posterior.
Second, as parameters converge, their entropy estimate (and true entropy) will continue to decrease indefinitely, making the marginal likelihood arbitrarily small.
However, in practice there is usually a limit to the degree of overfitting possible.
This raises the question: when are marginal likelihoods a good guide to predictive accuracy?
Presumably the marginal likelihood is more likely to be correlated with predictive performance when the
implicit distribution has moderate amounts of entropy.
In section \ref{sec:entropy friendly} we modified SGD to be less prone to produce regions of pathologically low entropy, but a more satisfactory solution is probably possible.
Third, if the model includes a large number of parameters that do not affect the predictive likelihood, but which are still affected by a regularizer, their convergence will penalize the marginal likelihood estimate even though these parameters do not affect test set performance.
This is why in section \ref{sec:priors} we recommend optimizing only the log-likelihood, and incorporating the regularizer directly into the initialization procedure.
More generally however, entropy could be underestimated if a large group of parameters are initially constrained by the data, but are later ``turned off'' by some other parameters in the model.
Finally, how viable is optimization as an inference method?
Standard variational methods find the best approximation in some class, but SGD doesn't even try to produce a good approximate posterior, other than by seeking the modes.
Indeed, Figure \ref{fig:cartoon} shows that the distribution implied by SGD collapses to a small portion of the true posterior early on, and mainly continues to shrink as optimization proceeds.
However, the point of early stopping is not that the intermediate distributions are particularly good approximations, but simply that they are better than the point masses that occur when optimization has converged.
\section{Related work}
\paragraph{Estimators for early stopping}
Stein's unbiased risk estimator (SURE) \citep{stein1981estimation} provides an unbiased estimate of generalization performance under very broad conditions, and can be used to construct a stopping rule.
\citet{raskutti2014early} derived a SURE estimate for SGD in a regression setting.
Interestingly, this estimator depends on the `shrinkage matrix' $\prod_{t=0}^{T} \left( \mathbf{I} - \alpha_t H_T \right)$, which is just the Jacobian of the entire SGD procedure along a particular path.
However, this estimator depends on an estimate of the noise variance, and is restricted to the i.i.d.\ regression setting.
It is also unclear whether these stopping rules could be used to select other training parameters or model hyperparameters.
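To make the connection concrete, consider the full-batch quadratic case (a hypothetical illustration, not taken from the cited work): each step $\theta \leftarrow \theta - \alpha_t H (\theta - \theta^*)$ has Jacobian $\mathbf{I} - \alpha_t H$, so the Jacobian of the whole run is exactly the shrinkage matrix, and its log-determinant is the cumulative entropy change of the implied distribution:

```python
import numpy as np

def shrinkage_logdet(H, alphas):
    """Log-determinant of prod_t (I - alpha_t H): the Jacobian of an
    entire gradient-descent run on a quadratic loss with Hessian H,
    and hence the total entropy change of the implied distribution."""
    d = H.shape[0]
    J = np.eye(d)
    for a in alphas:
        J = (np.eye(d) - a * H) @ J
    sign, logdet = np.linalg.slogdet(J)
    assert sign > 0, "a step size exceeded 2/lambda_max"
    return logdet
```

For a diagonal Hessian this reduces to $\sum_t \sum_i \log(1 - \alpha_t h_i)$, the per-eigendirection shrinkage.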
\paragraph{Reversible learning}
Optimization is an intrinsically information-destroying process, since a (good) optimization procedure maps any initial starting point to one or a few final optima.
We can quantify this loss of information by asking how many bits must be stored in order to reverse the optimization, as in \citet{MacDuvAda2015hyper}.
We can think of the number of bits needed to exactly reverse the optimization procedure as the average number of bits `learned' during the optimization.
From this perspective, stopping before optimization converges can be seen as a way to limit the number of bits we try to learn about the parameters from the data.
This is a reasonable strategy, since we don't expect to be able to learn more than a finite number of bits from a finite dataset.
This is also an example of reducing the hypothesis space to improve generalization.
\paragraph{MCMC for variational inference}
Our method can be seen as a special case of \citet{Bridging14}, who showed that any set of stochastic dynamics, even those not satisfying detailed balance, can be used to implicitly define a variational distribution.
However, to provide a tight variational bound, one needs to estimate the entropy of the resulting implicit distribution.
\citet{Bridging14} do this by defining an inverse model which estimates backwards transition probabilities, and then optimizes this model in an outer loop.
In contrast, our dynamics are deterministic, and our estimate of the entropy has a simple fixed form.
\paragraph{Bayesian neural networks}
Variational inference has been performed in Bayesian neural-network models~\citep{graves2011practical, deepGPVar14, Miguel2015pbp}.
\citet{kingma2014efficient} show how neural networks having unknown weights can be reformulated as neural networks having known weights but stochastic hidden units, and exploit this connection to perform efficient gradient-based inference in Bayesian neural networks.
\paragraph{Black-box stochastic variational inference}
\citet{alp2014blackbox} introduce a general scheme for variational inference using only the gradients of the log-likelihood of a model.
However, they constrain their variational approximation to be Gaussian, as opposed to our free-form variational distribution.
\section{Future work and extensions}
\paragraph{Optimization with momentum}
One obvious extension would be to design an entropy estimator of
momentum-based optimizers such as stochastic gradient descent with momentum, or
refinements such as Adam~\citep{Adam14}.
However, it is difficult to track the entropy change during the updates to the momentum variables.
\paragraph{Gradient-based hyperparameter optimization}
Hyperparameters typically come in two forms: regularization parameters and training parameters.
Optimizing marginal likelihood rather than training loss lets us set regularization parameters during training without using a validation set.
The marginal likelihood estimate lets us optimize the variational parameters (training hyperparameters) in an outer loop.
However, optimizing more than a few of these is difficult without gradients.
We could gain access to exact gradients of the variational lower bound with respect to all variational parameters by simply using reverse-mode differentiation.
\citet{domke2012generic, MacDuvAda2015hyper} showed that this can be done in a memory-efficient way for momentum-based learning procedures.
Combining these two procedures would allow one to set all hyperparameters using gradient-based methods without the need for a validation set.
\paragraph{Stochastic dynamics}
One possible method to deal with over-zealous reduction in entropy by SGD would be to add noise to the dynamics.
In the case of Gaussian noise, we would recover Langevin dynamics~\citep{neal2011mcmc}.
However, estimating the entropy becomes much more difficult in this case.
\citet{welling2011bayesian} introduced stochastic gradient Langevin dynamics for doing inference with minibatches.
\citet{ma2013estimating} use Langevin dynamics and a floating temperature to estimate partition functions of graphical models.
More generally, we are free to design optimization algorithms that do a better job of producing samples from the true posterior, as long as we can track their entropy.
The gradient-thresholding method proposed in this paper is a simple first example of a refinement to SGD that maintains a tractable entropy estimate while improving the quality of the intermediate distributions.
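For concreteness, one unadjusted Langevin step can be sketched as follows (a sketch only: `grad_log_post` is a hypothetical gradient of the log-posterior, and no Metropolis correction is applied):

```python
import numpy as np

def langevin_step(theta, grad_log_post, alpha, rng):
    """One unadjusted Langevin step: gradient ascent on the log-posterior
    plus Gaussian noise whose scale matches the step size, so the chain
    targets the posterior rather than collapsing onto its modes."""
    noise = rng.standard_normal(np.shape(theta))
    return theta + alpha * grad_log_post(theta) + np.sqrt(2.0 * alpha) * noise
```

The injected noise is what keeps the entropy from shrinking to zero; the difficulty noted above is that the entropy of the resulting distribution no longer has the simple Jacobian form used in this paper.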
\section{Conclusion}
Optimization algorithms with random initializations implicitly define a series of distributions which converge to posterior modes.
We showed that these nonparametric distributions can be seen as variational approximations to the true posterior.
We showed how to produce an unbiased estimate of this variational lower bound by approximately tracking the entropy change at each step of optimization.
This simple and inexpensive calculation turns standard gradient descent into an inference algorithm, and allows the optimization of hyperparameters without a validation set.
Our estimator is compatible with using data minibatches and scales linearly with the number of parameters, making it suitable for large-scale problems.
\subsection{Acknowledgements}
We are grateful to Roger Grosse, Miguel Hern\'andez-Lobato, Matthew Johnson, and Oren Rippel for helpful discussions.
We thank Analog Devices International and Samsung Advanced Institute of Technology for their support.
\end{document}
\section{Introduction}
The search for magnetic semiconductors with a Curie temperature
$T_{\mathrm{C}}$ above room temperature (RT) is currently one of the
major challenges in semiconductor
spintronics.\cite{Dietl:2000_S,Zutic:2004_RMP,Jungwirth:2006_RMP} In
single-phase samples the highest Curie temperatures reported are
$\sim$190~K for (Ga,Mn)As.\cite{Olejnik:2008_PRB,Wang:2008_APL} The
magnetic ordering in these materials is interpreted in terms of the
$p-d$ Zener model.\cite{Dietl:2000_S, Dietl:2001_PRB} This model
assumes that dilute magnetic semiconductors (DMSs) are random alloys,
where a fraction of the host cations is substitutionally replaced by
magnetic ions -- hereafter, by magnetic ions we mean transition metal ions -- and the indirect magnetic coupling is provided by
delocalized or weakly localized carriers ($sp$-$d$ exchange
interactions). The authors adopted the Zener approach within the
virtual-crystal (VCA) and molecular-field (MFA) approximations with a
proper description of the valence band structure in zinc-blende and
wurtzite (wz) DMSs. The model takes into account the strong spin-orbit and the
$k \cdotp p$ couplings in the valence band as well as the influence of
strain on the band density of states. This approach describes qualitatively, and
often quantitatively the thermodynamic, micromagnetic,
transport, and spectroscopic properties of DMSs with delocalized
holes.\cite{Dietl:2004_JPCM,Jungwirth:2006_RMP}
Experimental data for Ga$_{1-x}$Mn$_x$N reveal an astonishingly wide
spectrum of magnetic properties: some groups find high temperature
ferromagnetism\cite{Reed:2001_APL,Hwang:2007_APL,Sonoda:2002_JCG} with
$T_{\mathrm{C}}$ up to 940~K,\cite{Sonoda:2002_JCG} however other
detect only a paramagnetic response and their results show that the
spin$-$spin coupling is dominated by antiferromagnetic
interactions. Generally, the origin of the ferromagnetic response in
Mn doped GaN is not clear and two basic approaches to this issue have
emerged, namely: i) methods based on the mean-field Zener
model.\cite{Dietl:2000_S} -- according to this insight, in the absence
of delocalized or weakly localized holes, no ferromagnetism is
expected for randomly distributed diluted spins. Indeed, recent
studies of (Ga,Mn)N indicate that in samples containing up to $6\%$ of
diluted Mn, holes are strongly localized and, accordingly,
$T_{\mathrm{C}}$ below 10~K is experimentally
revealed.\cite{Sarigiannidou:2006_PRB,Edmonds:2005_APL} Higher values
of $T_{\mathrm{C}}$ could be obtained provided that efficient methods
of hole doping are developed for nitride DMSs. Surprisingly,
however, electric field controlled RT ferromagnetism has been recently
reported in Ga$_{1-x}$Mn$_x$N layers, with a Mn content as low as $x
\approx 0.25\%$.\cite{Nepal:2009_APL} These results ($T_{\mathrm{C}}
\gtrsim 300$~K) cannot be explained in the context of the $p$-$d$
Zener model, where the Curie temperature increases linearly with the
Mn concentration and for $x < 0.5\%$ $T_{\mathrm{C}}$ should not
exceed 60~K; ii) several
studies\cite{Theodoropoulou:2001_APL,Zajac:2003_JAP,Dhar:2003_PRB}
acknowledge the (likely) presence of secondary
phases -- originating from the low solubility of magnetic ions in
GaN -- as being responsible for the observation of ferromagnetism. It
has been found that the aggregation of magnetic ions leads either to
crystallographic phase separation, i.e., to the precipitation of a
magnetic compound, nanoclusters of an elemental ferromagnet, or to the
chemical phase separation into regions with respectively high and low
concentration of magnetic cations, formed without distortion of the
crystallographic structure. It has been proposed recently that the
aggregation of magnetic ions can be controlled by varying their
valence (i.e. by tuning the Fermi level). Particularly relevant in
this context are data for (Zn,Cr)Te,\cite{Kuroda:2007_NM}
(Ga,Fe)N,\cite{Bonanni:2008_PRL} and also
(Ga,Mn)N,\cite{Kuroda:2007_NM,Reed:2005_APL,Kane:2006_JCG} where a
strict correlation between codoping, magnetic properties, and magnetic
ion distribution has been put into evidence.
There is generally a close relation between the ion arrangement
and the magnetic response of a magnetically doped semiconductor.
Specifically, depending on different preparation techniques and
parameters, coherently embedded magnetic nanocrystals [like wz-MnN
in GaN (Refs.~\onlinecite{Martinez-Criado:2005_APL} and \onlinecite{Chan:2008_PRB})] or
precipitates [like $\textit{e.g.}$ MnGa or Mn$_4$N] might in fact
give the major contribution to the total magnetic moment of the
investigated samples. In particular, randomly distributed
localized spins may account for the paramagnetic component of the
magnetization, whereas regions with a high local density of
magnetic cations are presumably responsible for ferromagnetic
features.\cite{Bonanni:2007_SST} In the case of low concentrations
of the magnetic impurity, it is often exceedingly challenging to
categorically identify the origin of the ferromagnetic signatures.
Up to very recently, in most of the reports the observation of
ferromagnetism or ferromagnetic-like behavior with apparent Curie
temperatures near or above RT, has been discussed primarily or even solely based on
magnetic hysteresis measurements. However, indirect means like superconducting quantum
interference device (SQUID)
magnetometry measurements or even the presence of the anomalous or
extraordinary Hall effect, may not be sufficient for a conclusive
statement or to verify a single-phase
system. Therefore, a careful and thorough characterization of the
systems at the nanoscale is required. This can only be achieved
through a precise correlation of the measured magnetic properties with
advanced material characterization methods, like $\textit{e.g.}$ synchrotron x-ray
diffraction (SXRD),
synchrotron based extended x-ray absorption fine structure
(EXAFS) and advanced element-specific microscopy techniques,
suitable for the detection of a crystallographic and/or chemical phase
separation.
The present work is devoted to a comprehensive study of the
Ga$_{1-x}$Mn$_x$N ($x\leq 1\%$) fabricated by metalorganic vapor phase
epitaxy (MOVPE), which was also employed by other
authors.\cite{Nepal:2009_APL,Reed:2005_APL} A careful on-line control
of the growth process is carried out, which is followed by an extended
investigation of the structural, optical, and magnetic properties in
order to shed new light onto the mechanisms responsible for the
magnetic response of the considered system. Particular attention is
devoted to avoid the contamination of the SQUID magnetometry signal with spurious effects
and, thus, to the reliable determination of the magnetic
properties. Experimental procedures involving SXRD, high resolution transmission electron microscopy
(HRTEM), EXAFS and x-ray absorption near-edge spectroscopy (XANES) are
employed to probe the possible presence of secondary phases,
precipitates or nanoclusters, as well as the chemical phase
separation. Moreover, we extensively analyze the properties of single
magnetic-impurity states in the nitride host. The understanding of
this limit is crucial when considering the most recent suggestions for
the controlled incorporation of the magnetic ions and consequently
of the magnetic response through Fermi level engineering. By
combining the different complementary characterization techniques we establish that
randomly distributed Mn ions with a concentration $x < 1$\% generate
a paramagnetic response down to at least 2~K in Ga$_{1-x}$Mn$_x$N. In
view of our findings, the room temperature ferromagnetism observed in
this Mn concentration
range\cite{Nepal:2009_APL,Reed:2005_APL,Kane:2006_JCG,Ham:2006_ASS,Yang:2007_JCG}
has to be assigned to a non-random distribution of transition metal
impurities in GaN. We emphasize that in all reported works on (Ga,Mn)N fabricated by MOVPE
the Mn concentration was well below 5\%.
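As a point of reference for the paramagnetic interpretation, the magnetization of dilute, non-interacting Mn$^{3+}$ ions follows a Brillouin function (a sketch only: it assumes $S=2$ and an effective $g$-factor of 2, and neglects the crystal-field and Jahn-Teller terms treated in Sec.~\ref{sec:magnetic_properties}):

```python
import numpy as np

MU_B = 9.274e-24  # Bohr magneton [J/T]
K_B = 1.381e-23   # Boltzmann constant [J/K]

def brillouin(S, x):
    """Brillouin function B_S(x), valid for x != 0."""
    a = (2.0 * S + 1.0) / (2.0 * S)
    b = 1.0 / (2.0 * S)
    return a / np.tanh(a * x) - b / np.tanh(b * x)

def paramagnetic_moment(B, T, S=2.0, g=2.0):
    """Moment per ion, in units of mu_B, of non-interacting spins S
    (Mn^3+: S = 2) in a field B [T] at temperature T [K]."""
    x = g * S * MU_B * B / (K_B * T)
    return g * S * brillouin(S, x)
```

At low field and high temperature this reduces to the Curie law, while at high field and low temperature the moment saturates towards $gS\mu_{\mathrm{B}} = 4\,\mu_{\mathrm{B}}$ per ion; the anisotropy terms of Sec.~\ref{sec:magnetic_properties} modify this simple picture.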
The paper is organized as follows: in the next section we give a
summary of the fabrication details, $\textit{in situ}$ monitoring of
the employed MOVPE process and an abridged overview of the
characterization techniques, together with a table listing the
principal properties and parameters characterizing the (Ga,Mn)N-based
samples considered. In Sec.~\ref{sec:structural} the results of the
structural analysis of the layers by SXRD, HRTEM, and EXAFS are
reported. These measurements prove a uniform distribution of the Mn
ions in the Ga sublattice of GaN. Section~\ref{sec:singlePhaseGaMnN}
is devoted to the determination of the Mn concentration and of the
charge and electronic state of the magnetic ions. In section
~\ref{sec:magnetic_properties} we give the experimental magnetization
characteristics of the system obtained from SQUID measurements, and
interpret the data based on the group theoretical model for Mn$^{3+}$
ions taking into account the trigonal crystal field, the Jahn-Teller
distortion and the spin-orbit coupling. Finally, conclusions and
outlook stemming from our work are summarized in
Sec.~\ref{sec:Summary}.
\section{Growth procedure}
The wz-(Ga,Mn)N epilayers here considered are fabricated by MOVPE in
an AIXTRON 200 RF horizontal reactor. All structures have been
deposited on $c$-plane sapphire substrates with TMGa (trimethylgallium),
NH$_3$, and MeCp$_2$Mn (bis-methylciclopentadienyl-manganese) as
precursors for, respectively, Ga, N and Mn, and with H$_2$ as carrier
gas. The growth process has been carried out according to a well
established procedure\cite{Bonanni:2003_JCG} consisting of: substrate
nitridation, low temperature (540$^{\circ}$C) deposition of a GaN nucleation
layer (NL), annealing of the NL under NH$_3$ until recrystallization
and the growth of a $\sim$1 $\mu$m thick device-quality GaN buffer at
1030$^{\circ}$C. On top of these structures, Mn doped GaN layers (200-700
nm) at 850$^{\circ}$C, at constant TMGa and different---over the samples
series---MeCp$_2$Mn flow-rates ranging from 25 to 490 sccm (standard
cubic centimeters per minute) have been grown. The nominal Mn content
in subsequently grown samples has been alternatively switched from low
to high and, \textit{vice versa}, to minimize long term memory effects
due to the presence of residual Mn in the reactor. During the whole
growth process the samples have been continuously rotated in order to
promote the deposition homogeneity and \textit{in situ} and on line
ellipsometry is employed for the real time control over the entire
fabrication process. The p-type superlattices have been grown
according to the optimized procedure already
reported.\cite{Simbrunner:2007_APL} Our MOVPE system is equipped with
an \textit{in situ} Isa Jobin Yvon ellipsometer that allows both
spectroscopic (variation of the optical parameters as a function of
the radiation wavelength) and kinetic (ellipsometric angles
$\textit{vs.}$ time) measurements\cite{Bonanni:2007_PRB,Peters:2000_JAP} in the energy range 1.5--5.5~eV. In Table~\ref{tab:SampleNo} the considered (Ga,Mn)N samples are listed together with their specific parameters.
\begingroup
\squeezetable
\begin{table}
\centering
\caption{Data related to the investigated Ga$_{1-x}$Mn$_x$N. The following values are listed: the MeCp$_2$Mn flow
rate employed to grow the Mn-doped layers, the FWHM of the
(0002) reflex from GaN determined by $\textit{ex situ}$ HRXRD, the Mn$^{3+}$
concentration as obtained from magnetization data, the total Mn content from SIMS measurements, and the thickness of each (Ga,Mn)N layer. Letters A and B denote the two different growth series.}
\begin{ruledtabular}
\begin{tabular}{cccccc}
&MeCp$_2$Mn & thickness of & &Mn$^{3+}$ conc. & Mn conc.\\
sample & flow rate & (Ga,Mn)N & FWHM & SQUID & SIMS \\
number & [sccm] & [nm] & [arcsec] & [$10^{20}$ cm$^{-3}$] & [$10^{20}$ cm$^{-3}$]\\
\hline
000B & 0 & 470 & & $<$0.06 & \\
025A & 25 & 450 & 242 & 0.28 & 0.3 \\%& 842\\
050A & 50 & 400 & 267 & 0.8 & 0.6 \\%& 841\\
100A & 100 & 400 & 243 & 0.8 & \\%& 844\\
100B & 100 & 520 & & 0.27 & \\
125A & 125 & 400 & 267 & 0.6 & 0.5 \\%& 849\\
150A & 150 & 400 & 247 & 1.0 & 0.7\\%& 845\\
175A & 175 & 400 & 251 & 2.2 & \\%& 843\\
200B & 200 & 500 & & 0.9 & \\
225A & 225 & 370 & 263 & 1.6 & 1.1 \\%& 850\\
250A & 250 & 370 & 243 & 1.4 & \\%& 851\\
275A & 275 & 400 & 256 & 1.6 & \\%& 854\\
300A & 300 & 400 & 272 & 1.4 & 1.3 \\%& 852\\
300B & 300 & 520 & & 1.4 & \\
325A & 325 & 400 & 269 & 2.2 & \\%& 856\\
350A & 350 & 370 & 273 & 2.2 & \\%& 853\\
375A & 375 & 400 & 284 & 2.5 & 1.9 \\%& 857\\
400A & 400 & 370 & 265 & 2.6 & \\%& 855\\
400B & 400 & 500 & & 2.0 & \\
475A & 475 & 700 & & 2.7 & \\%& 888\\
490A & 490 & 700 & & 3.8 & 2.4\\%& 889\\
490B & 490 & 470 & & 2.7 & \\
\end{tabular}
\end{ruledtabular}
\label{tab:SampleNo}
\end{table}
\endgroup
\section{EXPERIMENTAL TECHNIQUES}
\subsection{HRTEM experimental}
HRTEM studies have been carried out on cross-sectional samples
prepared by standard mechanical polishing followed by Ar$^+$ ion
milling, under a 4$^{\circ}$ angle at 4 kV for less than 2 h. The ion
polishing has been performed in a Gatan 691 PIPS system. The
specimens were investigated using a JEOL 2011 Fast TEM microscope
operated at 200~kV equipped with a Gatan CCD camera. The set-up is
capable of an ultimate point-to-point resolution of 0.19~nm, with the
possibility to image lattice fringes with a 0.14~nm resolution. The
chemical analysis has been accomplished with an Oxford Inca energy
dispersive x-ray spectroscopy (EDS) system.
\subsection{HRXRD and SXRD experimental}
High-resolution x-ray diffraction (HRXRD) rocking curves are routinely
acquired on each sample with a Philips XRD HR1 vertical
diffractometer with a CuK$_{\alpha}$ x-ray source working at a
wavelength of 0.15406 nm ($\sim$ 8~keV). A monochromator with a
Ge(440) crystal configuration is used to collimate the beam, which is
then diffracted by the sample and collected by a Xe-gas detector. Angular ($\omega$) and
radial $\omega$/2$\theta$ scans have been collected along the growth
direction for the (002) GaN reflection, in order to gain information on
the crystal quality of the samples from the full width at half maximum
(FWHM) of the diffraction peak.
Although conventional XRD, if great care is exercised, may also allow
the detection of small embedded clusters (as in the reported case of Co
in ZnO),\cite{Venkatesan:2007_APL,Opel:2008_EPJB,Ney:2010_NJP} we
performed SXRD measurements, which additionally gave us the possibility
to carry out $\textit{in situ}$ annealing experiments. The experiments
have been carried out at the beamline BM20 (Rossendorf Beam Line) of
the European Synchrotron Radiation Facility (ESRF) in Grenoble -
France. Radial coplanar scans in the $2\theta$ range from 20$^{\circ}$ to
60$^{\circ}$ were acquired at a photon energy of 10~keV. The beamline is
equipped with a double-crystal Si(111) monochromator with two
collimating/focusing mirrors (Si and Pt-coating) for rejection of
higher harmonics, allowing an acquisition energy range from 6 to 33~keV. The
measurements are performed using a heavy-duty six-circle Huber
diffractometer, so that the system is suitable for heavy, user-specific
sample environments ($\textit{e.g.}$, in our case a Be dome was required
for the annealing experiments).
\subsection{EXAFS and XANES experimental}
The x-ray absorption fine structure (XAFS) measurements at the Mn-K
edge (6539~eV) have been performed at the GILDA Italian collaborating
research group beamline (BM08) of the ESRF in
Grenoble.\cite{D'Acapito:1998_EN} The monochromator is equipped with a
pair of Si(311) crystals and run in dynamical focusing
mode.\cite{Pascarelli:1996_JSR} Harmonics rejection is achieved
through a pair of Pd-coated mirrors with an estimated cutoff of
18~keV. Data are collected in the fluorescence mode using a 13-element
hyper pure Ge detector and normalized by measuring the incident beam
with an ion chamber filled with nitrogen gas. In order to minimize the
effects of coherent scattering from the substrate, the samples are
mounted on a dedicated sample holder for grazing-incidence
geometry;\cite{Maurizio:2009_RSI} measurements are carried out at room
temperature with an incidence angle of 1$^\circ$ and with the
polarization vector parallel to the sample surface ($E \perp c$). For
each sample the integration time for each energy point and the number
of acquired spectra are chosen in order to collect $\approx 10^6$
counts on the final averaged spectrum. Bragg diffraction peaks are
eliminated by selecting the elements of the fluorescence detector or
by manually de-glitching the affected spectra. In addition, before and
after each measurement a metallic Mn reference foil is measured in
transmission mode to check the stability of the energy scale and to
provide an accurate calibration. Considering the present optics setup
an energy resolution of $\approx 0.2$~eV is obtained at 6539~eV.
In our context, the EXAFS signal $\chi(k)$ is extracted from the
absorption raw data, $\mu(E)$, with the {\sc viper}
program\cite{Klementev:2001_JPDAP} employing a smoothing spline
algorithm and choosing the energy edge value ($E_0$) at the half
height of the energy step (Sec.~\ref{sec:xanes}). The quantitative
analysis is carried out with the {\sc ifeffit}$/${\sc artemis}
programs\cite{Newville:2001_JSR,Ravel:2005_JSR} in the frame of the
atomic model described below. Theoretical EXAFS signals are computed
with the {\sc feff8} code\cite{Ankudinov:1998_PRB} using muffin tin
potentials and the Hedin-Lundqvist approximation for their
energy-dependent part. In order to reduce the correlations between
variables, the minimum set of free fitting parameters used in the
analysis is: $\Delta$E$_0$ (correction to the energy edge), S$_0^2$
(amplitude reduction factor), $\Delta$R$_0$, $\Delta$R$_1$ (lattice
expansion factors, respectively, for the first Mn-N coordination shell
distances and all other upper distances) and $\sigma^2_i$
Debye-Waller factor for the $i^{\rm th}$ coordination shell around
the absorber plus a correlated Debye model\cite{Poiarkova:1999_PRB}
for multiple scattering paths with a fitted Debye temperature of
470(50)~K.\cite{Passler:2007_JAP}
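For illustration, the energy-to-wavenumber conversion and the normalization underlying this extraction can be sketched in Python (a minimal sketch; the polynomial background is only a crude stand-in for the smoothing spline used by {\sc viper}, and all function names are ours):

```python
import numpy as np

HBARSQ_OVER_2M = 3.8099821  # hbar^2 / (2 m_e) in eV * Angstrom^2

def energy_to_k(E, E0):
    """Photoelectron wavenumber k [1/Angstrom] from photon energy
    E [eV] above the edge energy E0."""
    E = np.asarray(E, dtype=float)
    return np.sqrt(np.clip(E - E0, 0.0, None) / HBARSQ_OVER_2M)

def extract_chi(E, mu, E0, deg=5):
    """Crude chi(k): fit a smooth polynomial mu0(k) to the post-edge
    absorption (stand-in for the smoothing spline) and normalize the
    oscillatory part by the edge step."""
    post = E > E0 + 5.0               # skip the near-edge region
    k = energy_to_k(E[post], E0)
    mu0 = np.polyval(np.polyfit(k, mu[post], deg), k)
    chi = (mu[post] - mu0) / mu0[0]   # mu0 at the lowest k as edge step
    return k, chi
```

The $k^2$-weighted signal used in the analysis is then simply `k**2 * chi`.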
\begin{figure}[t]
\includegraphics[width=8.8cm]{Layout1.eps}
\caption{(Color online) Magnetic response at 2, 15, and 200 K of a
typical ($5\times 5\times 0.3$)~mm$^3$ sapphire substrate measured
for both in-plane (darker shade squares) and out-of-plane (lighter
shade circles) configurations after the application of a correction linear in
the magnetic field and proportional to the magnetic
susceptibility of sapphire at 200~K. The magnetic moment obtained
in this way at 2 K reaches a value that would give a magnetization
of $\sim1$ emu/cm$^3$ for (a typical) 200~nm thick layer. Inset:
$m(H)$ at 2~K for the same sapphire sample, but without
correction. The axes labels are the same as in the main panel.
This ferromagnetic-like signal is isotropic, decreases with
temperature and vanishes above 15~K.} \label{fig:Szafiry}
\end{figure}
\subsection{SQUID experimental}
\label{sec:SQex}
The magnetic properties have been investigated in a Quantum Design
MPMS XL 5 SQUID magnetometer between 1.85 and 400~K and up to 5~T.
For magnetic studies the samples are typically cut into ($5\times 5$)
mm$^{2}$ pieces, and both in- and out-of-plane orientations are
probed. The (Ga,Mn)N layers are grown on 330 $\mu$m thick sapphire
substrates, so that the TM-doped overlayers constitute only a tiny
fraction of the volume investigated, and due to the substantial
magnetic dilution their magnetic moment is very small when
compared to the diamagnetic signal of the substrate. Therefore, a
simple subtraction of a diamagnetic component originating from the
sapphire substrate and linear with the field only exposes the
resulting data to various artifacts related to the SQUID system and to
the arrangement of the measurements, as already discussed in
Refs.~\onlinecite{Bonanni:2007_PRB}, \onlinecite{Salzer:2007_JMMM}, and \onlinecite{Ney:2008_JMMM}. In order to circumvent this issue, the
magnetic data presented in this paper are obtained after subtracting
the magnetic response of a sapphire substrate with dimensions
equivalent to those of the investigated sample, independently measured
on the same holders and according to the same experimental
procedure. This method, in particular, eliminates a spurious magnetic
contribution that is due to the sapphire substrate, is nonlinear in
the field and, moreover, depends on the temperature. As
exemplified in Fig.~\ref{fig:Szafiry} this extra $m(T,H)$ constitutes
a nontrivial and quite sizable contribution to the signal of
interest. Additionally - as shown in the inset to this figure - the
sapphire itself may convey a ferromagnetic response to the signal at
the lowest temperatures. We have also made sure that this method is
adequate to eliminate another weak and ferromagnetic-like contribution
appearing in the data after subtracting only the compensation linear
in the field. This fault is caused by an inaccuracy in the value of
the magnetic field as reported by the SQUID system, which assumes that
the field acting on the sample is strictly proportional to the current
sent to the superconducting coil, and disregards the magnet remanence
due to the flux pinning inside the superconducting
windings.\cite{QD:2009_manual} This remanence in our 5~T system is as
high as $-15$~Oe after the field has been raised to $H > +1$~T and
results in a zero field magnetic moment of +$2\times10^{-7}$~emu for
our typical sapphire substrate. Although the value is small, it scales
linearly with the mass of the substrate and exceeds the magnitude of
the signal expected from a submicrometer-thick DMS layer.
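The subtraction procedure can be sketched as follows (a minimal Python sketch; the function names and the optional scaling are ours — in practice the reference substrate is chosen with dimensions equivalent to the sample rather than rescaled):

```python
import numpy as np

def subtract_substrate(H, m_total, H_ref, m_ref, scale=1.0):
    """Subtract the independently measured moment of a bare sapphire
    substrate (interpolated onto the same field grid and optionally
    scaled, e.g. by mass) instead of a simple linear-in-H term."""
    m_sub = scale * np.interp(H, H_ref, m_ref)
    return m_total - m_sub

def linear_correction(H, m, h_min=3e4):
    """The naive alternative: remove a component linear in H, with
    the slope fitted at high fields (H in Oe)."""
    sel = np.abs(H) >= h_min
    slope = np.polyfit(H[sel], m[sel], 1)[0]
    return m - slope * H
```

The point-by-point subtraction also removes the nonlinear and temperature-dependent part of the substrate response, which the linear correction by construction cannot.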
\subsection{SIMS experimental}
The overall Mn concentration in the epilayers has been evaluated via
secondary-ion mass spectroscopy (SIMS). The SIMS analysis is performed
by employing cesium ions as the primary beam with the energy set to
5.5~keV and the beam current kept at 150~nA. The raster size is
$150\times150~\mu$m$^2$ and the secondary ions are collected from a
central region of 60~$\mu$m in diameter. The Mn concentration is
derived from the MnCs$^+$ species, and the matrix signal NCs$^+$ is
taken as reference. Mn-implanted GaN is used as a calibration standard.
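The implant-standard calibration amounts to a relative sensitivity factor (RSF); a minimal Python sketch (the uniform depth grid and all names are our illustrative assumptions):

```python
import numpy as np

def rsf_from_standard(dose, depth_step, i_mncs, i_ncs):
    """Relative sensitivity factor from an implant standard: the
    depth-integrated MnCs+/NCs+ ratio must reproduce the known
    implanted dose [atoms/cm^2]; depth_step in cm."""
    ratio = np.asarray(i_mncs, float) / np.asarray(i_ncs, float)
    return dose / (ratio.sum() * depth_step)

def mn_concentration(rsf, i_mncs, i_ncs):
    """Mn concentration [atoms/cm^3] from the measured ion ratio."""
    return rsf * np.asarray(i_mncs, float) / np.asarray(i_ncs, float)
```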
\section{Structural Properties}\label{sec:structural}
As already underlined, it is necessary to ascertain whether the
investigated material contains any secondary phases, nanoclusters or
precipitates. In this context, it has been realized
recently\cite{Martinez-Criado:2005_APL, Jamet:2006_NM,
Bonanni:2007_PRB,Bonanni:2008_PRL} that the limited solubility of
transition metals in semiconductors can lead to a chemical
decomposition of the alloy, $\textit{i.e.}$ the formation of regions
with the same crystal structure as the semiconductor host, but with
respectively high and low concentration of magnetic constituents. In
this work the structural properties of the system are analyzed by
SXRD, HRTEM and EXAFS.
\subsection{HRXRD and SXRD - results}
\begin{figure}[hb]
\includegraphics[width=8.5 cm]{SXRDandLattice.eps}
\caption{(Color online) a) SXRD spectra for (Ga,Mn)N samples showing
no presence of secondary phases over a broad range of
concentration of the magnetic ions. b) Lattice parameters
\textit{vs.} Mn concentration. Values for an undoped GaN layer are
added for reference.}
\label{fig:SXRD}
\end{figure}
In order to verify the homogeneity of the grown (Ga,Mn)N layers,
conventional XRD measurements have been routinely performed. From
rocking curves around the GaN (002) diffraction peak, the crystal
quality is assessed via the FWHM, and we obtain values in the range of
240 to 290 arcsec, indicating a high degree of crystal perfection of
the layers. For the (Ga,Mn)N (002) diffraction peak we observe a shift
to lower angles with increasing Mn concentration in the acquired
$\omega/2\theta$ scans. This shift points to an increment in the
\textit{c}-lattice constant, as also reported by
others.\cite{Thaler:2004_APL,Cui:2008_APL} Apart from the diffraction
peak shift, no evidence for second phases is observed in the XRD
measurements. These results have been confirmed by the SXRD
diffraction spectra reported in Fig.~\ref{fig:SXRD}(a), where no
crystallographic phase separation is detected over a broad range of Mn
concentrations.
The lattice parameters are determined by averaging the values for the
two symmetric SXRD diffractions (004) and (006) for the
\textit{c}-parameter, and one asymmetric diffraction (104) for the
\textit{a}-parameter. The variation of the lattice parameters with
increasing incorporation of Mn is presented in Fig.~\ref{fig:SXRD}b.
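The conversion from diffraction angles to lattice parameters follows from Bragg's law and the wurtzite interplanar spacing, $1/d^2 = \frac{4}{3}(h^2+hk+k^2)/a^2 + l^2/c^2$. A minimal Python sketch (function names are ours; the wavelength must be set to the actual value, $0.15406$~nm for Cu K$_\alpha$ or $\approx 0.124$~nm at the 10~keV SXRD energy):

```python
import numpy as np

def c_from_symmetric(l, two_theta_deg, lam):
    # (0 0 l) reflection: lam = 2 (c/l) sin(theta) => c = l*lam/(2 sin theta)
    theta = np.radians(two_theta_deg / 2.0)
    return l * lam / (2.0 * np.sin(theta))

def a_from_asymmetric(h, k, l, two_theta_deg, c, lam):
    # wurtzite: 1/d^2 = (4/3)(h^2 + h*k + k^2)/a^2 + l^2/c^2
    theta = np.radians(two_theta_deg / 2.0)
    inv_d2 = (2.0 * np.sin(theta) / lam) ** 2
    return np.sqrt((4.0 / 3.0) * (h * h + h * k + k * k)
                   / (inv_d2 - (l / c) ** 2))
```

Here $c$ is averaged from the (004) and (006) values and $a$ then follows from the (104) angle once $c$ is known.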
\begin{figure}[b]
\includegraphics[width=8 cm]{Fig_annealing.eps}
\caption{(Color online) \textit{In situ} SXRD spectra upon
annealing of sample 400A at different temperatures: no formation of secondary phases is detected up to an annealing temperature of 900~$^{\circ}$C.}
\label{fig:annealing}
\end{figure}
To obtain further information on the solubility of Mn in our (Ga,Mn)N
layers, \textit{in situ} annealing experiments have been carried out
at the ESRF BM20 beamline. Sample 400A was annealed up to 900$^{\circ}$C
in N-rich atmosphere at a pressure of 200~mbar, to compensate for the
nitrogen loss during annealing. Several radial scans have been
acquired upon increasing the sample temperature in subsequent 100$^{\circ}$C
steps, and realignment was performed after reaching each
temperature. The diffraction curves upon annealing are shown in
Fig.~\ref{fig:annealing}, and no additional diffraction peaks related
to the formation of secondary phases have been detected over the whole
process. This leads us to conclude that the considered (Ga,Mn)N grown
with Mn concentration below the solubility limit at the given
deposition conditions is stable in the dilute phase upon annealing
over a considerable thermal range. This behavior is to be contrasted
with the one reported for dilute (Ga,Mn)As, where annealing at
elevated temperatures provokes the formation of either hexagonal or
zinc-blende MnAs nanocrystals.\cite{Moreno:2002_JAP,Tanaka:2001_JCG}
\subsection{HRTEM results}
\par HRTEM has been carried out on all (Ga,Mn)N layers under
consideration and independently of the Mn concentration no evidence of
crystallographic phase separation could be found. This is also
confirmed by selected area electron diffraction (SAED) patterns (not
shown) recorded on different areas of each sample, where no satellite
diffraction spots apart from those of wurtzite GaN are detected. In Fig.~\ref{fig:TEM}, an
example of the HRTEM images acquired along the $[10\overline{1}0]$ (a)
and $[11\overline{2}0]$ (b) zone axes, respectively, is
given. Through measurements previously reported and carried out
with the same microscope, we have been able to discriminate in
(Ga,Fe)N different phases of Fe-rich nanocrystals as small as 3~nm in
diameter, and also to detect mass contrast indicating the local
aggregation of Fe-ions.\cite{Bonanni:2008_PRL,Navarro:2010_PRB} The
HRTEM images in Fig.~\ref{fig:TEM}, in contrast to the case of phase
separated (Ga,Fe)N, strongly suggest that the (Ga,Mn)N films studied
here are in the dilute state.
\begin{figure}[h]
\includegraphics[width=7cm]{fig_TEM.eps}
\caption{HRTEM images: along $[10\overline{1}0]$ (a)
and along the $[11\overline{2}0]$ zone axis (b).}
\label{fig:TEM}
\end{figure}
\par The EDS spectra collected on the (Ga,Mn)N layers provide
significant signatures of the presence of Mn, as evidenced in
Fig.~\ref{fig:EDS}. The EDS detector and the software we used here
identify the Mn element automatically and are sensitive to Mn
concentrations as low as 0.1\% (atomic\%). The Mn concentration for
sample 300A, reported in Fig.~\ref{fig:EDS}, is found to be
0.18\% (atomic\%).
\begin{figure}[h]
\includegraphics[width=7cm]{fig_EDS.eps}
\caption{\label{fig:EDS} EDS spectrum of sample 300A, with the
identification of the Mn peaks [\textit{L}$_\alpha$(0.636 keV),
\textit{K}$_{\alpha}$(5.895 keV) and \textit{K}$_{\beta}$(6.492
keV) ].}
\end{figure}
\subsection{EXAFS results}\label{sec:exafs}
EXAFS (Ref.~\onlinecite{Lee:1981_RMP}) is a well-established tool in the study of semiconductor heterostructures and
nanostructures\cite{Boscherini:2008_book} and has proven its power as
a chemically sensitive local probe for the site identification and
valence state of Mn and Fe dopants in GaN
DMS.\cite{Soo:2001_APL,Sato:2002_JJAP,Biquard:2003_JS,Bacewicz:2003_JPCS,Rovezzi:2009_PRB} The crystallinity of the films and the optimal signal to noise ratio
of the collected spectra are demonstrated by the large number of
atomic shells visible and reproducible by the fits below 8~\AA{} in
the Fourier-transformed spectra reported in Fig.~\ref{fig:exafs} for
the two representative samples 100A and 490A, respectively. In
addition, the homogeneous Mn incorporation along the layer thickness
is tested by measuring the Mn fluorescence yield (at a fixed energy of
6700 eV) as a function of the incidence angle (not shown). The EXAFS
response of these two samples is qualitatively equivalent, as
evidenced in Fig.~\ref{fig:exafs}, and this is confirmed by the
quantitative analysis. The best fits are obtained by employing a
substitutional model of one Mn at a Ga site (Mn$_{\rm Ga}$) in a
wurtzite GaN crystal (using the lattice parameters previously found by
SXRD). The possible presence of additional phases in the sample, such
as octahedral or tetrahedral interstitials (Mn$_{\rm I}^{\rm O}$,
Mn$_{\rm I}^{\rm T}$) in GaN or Mn$_3$GaN
clusters\cite{Giraud:2004_EPL} has been checked by carrying out fits
with a two-phase model. The fraction of Mn$_{\rm Ga}$ is found to
be 98(4)~\% for the pair Mn$_{\rm Ga}$-Mn$_3$GaN,
99(3)\% for the pair Mn$_{\rm Ga}$-Mn$_{\rm I}^{\rm O}$ and 97(3)\%
for the pair Mn$_{\rm Ga}$-Mn$_{\rm I}^{\rm T}$, respectively. With these results we
can safely rule out the occurrence of phases other than Mn$_{\rm Ga}$,
at least above the 5\% level.
\begin{figure}[htbp]
\centering
\includegraphics[width=8.5cm]{fig_exafs.eps}
\caption{(Color online) $k^2$-weighted EXAFS signal, (a), for
samples 100A (circles) and 490A (diamonds) with relative best
fits (solid line) in the region [R$_{\rm min}$-R$_{\rm max}$] and,
(b), amplitude of the Fourier transforms (FT) carried out in the
range [k$_{\rm min}$-k$_{\rm max}$] by a Hanning window (slope
parameter $d$k=1); the vertical lines indicate the position of the
main peaks in the FT of the Mn$_{\rm I}^{\rm T}$, Mn$_{\rm I}^{\rm
O}$ and Mn$_3$GaN additional structures. Their possible presence
would be promptly detected as they fall in a region free from other
peaks. {\sc feff8} simulations, (c), for the tested theoretical
models as described in the text.}
\label{fig:exafs}
\end{figure}
The local structure parameters found for the measured samples are
equivalent within the error bars (reported on the last digit within
parentheses) and averaged values are given for simplicity. The
value of the amplitude reduction factor S$_0^2$~=~0.95(5) demonstrates
the good agreement with the theoretical coordination numbers for
Mn$_{\rm Ga}$ (considering the in-plane polarization) and the
correction to the energy edge $\Delta E_0 = -7(1)$~eV supports the
XANES analysis (Sec.~\ref{sec:xanes}). With respect to the lattice
parameters previously found by SXRD, the long range distortion fits
within the error ($\Delta R_1 = 0.1(2)$~\%), while the Mn-N nearest
neighbors have a $\Delta R_0 = 2.5(5)$~\% (expansion to 1.99(1)~\AA),
in line with previously reported experimental
results\cite{Sato:2002_JJAP,Bacewicz:2003_JPCS,Biquard:2003_JS} and
recent $\textit{ab initio}$ calculations.\cite{Stroppa:2009_PRB} Finally, all
the evaluated $\sigma^2_i$ settle around the average value of $8(2)
\times 10^{-3}$~\AA$^{2}$, confirming the high crystallinity of the
layers.
\section{Properties of homogeneous single-phase $\mathrm{\textbf{(Ga,Mn)N}}$} \label{sec:singlePhaseGaMnN}
Thus, SXRD, HRTEM and XAFS experiments have confirmed the wurtzite
structure of the samples, the absence of secondary phases, and the
location of Mn in the Ga sublattice of the wurtzite GaN
crystal. Furthermore, the samples have been investigated to determine
the actual Mn concentration and the charge state of the magnetic ions.
\subsection{Determination of the Mn concentration}
\label{sec:x-Mn}
The depth profiling capabilities of SIMS provide not only an accurate
analysis of the (Ga,Mn)N layer composition, but also allow monitoring of
the changes in composition along the sample depth. The SIMS depth
profiles reported in Figs.~\ref{fig:SIMS_thin_films}(a) and (b) give
evidence that the distribution of the Mn concentration $n_{\mathrm{Mn}}$ in the
investigated films is essentially uniform over the doped layers,
independent of the magnetic ion content, and that the interface
between the (Ga,Mn)N overlayer and the GaN buffer layer is sharp. This
is confirmed by EDS studies, which, with a sensitivity of around
0.1 at.\%, do not provide any evidence for Mn diffusion into the buffer. The
determined total Mn concentration increases with increasing MeCp$_2$Mn
flow rate and the corresponding $n_{\mathrm{Mn}}$ values for the
considered samples can be found in Table~\ref{tab:SampleNo}.
\begin{figure}[htbp]
\centering
\includegraphics[width=8cm]{fig_SIMS_thin_films.eps}
\caption{(Color online) SIMS depth profiles of Mn, C, O and H for
the samples: a) 150A and b) 375A.}
\label{fig:SIMS_thin_films}
\end{figure}
\subsection{Energy levels introduced by Mn impurities}
The character of the paramagnetic response of DMS depends crucially on
the magnetic ion configuration. In III-V semiconductors, Mn in the
impurity limit substitutes on the cation site, giving three electrons to
the crystal bonds. Depending on the compensation ratio, Mn can exist in
three different charge states and electron configurations, namely: i)
ionized acceptor Mn$^{2+}$, with five electrons localized in the Mn
$d$ shell. The electronic configuration of Mn$^{2+}$ is $d^{5}$, and
the ground level of the ion at zero magnetic field is a degenerate
multiplet with vanishing orbital momentum ($L=0$, $S=5/2$). The
magnetic moment of the ion results solely from the spin, and its
magnetic contribution can be described by a standard Brillouin
function for any orientation of the magnetic field. The neutral
configuration of Mn$^{3+}$ ($S=2$, $L=2$) can be realized in two ways:
ii) by substitutional manganese $d^{4}$ with four electrons tightly
bound in the Mn $d$ shell; iii) Mn$^{2+}$ + hole ($d^{5}$ + hole) with
five electrons in the Mn $d$ shell and a bound hole localized on
neighboring anions.
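For the isotropic Mn$^{2+}$ case ($d^5$, $S=5/2$, $L=0$) the standard Brillouin-function magnetization can be written down directly; a minimal Python sketch in CGS units (constants and names are ours — the anisotropic Mn$^{3+}$ case requires the full crystal-field model of Sec.~\ref{sec:CF} instead):

```python
import numpy as np

MU_B = 9.274e-21  # Bohr magneton [emu]
K_B = 1.3807e-16  # Boltzmann constant [erg/K]

def brillouin(J, x):
    """Standard Brillouin function B_J(x), with the linear limit for
    small arguments to avoid numerical problems."""
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-6
    xs = np.where(small, 1.0, x)  # placeholder keeps coth finite
    a, b = (2 * J + 1) / (2 * J), 1 / (2 * J)
    full = a / np.tanh(a * xs) - b / np.tanh(b * xs)
    return np.where(small, (J + 1) * x / (3 * J), full)

def magnetization(n_ions, H, T, J=2.5, g=2.0):
    """M [emu/cm^3] of n_ions [cm^-3] non-interacting spins; the
    defaults correspond to Mn2+ (S = 5/2, L = 0); H in Oe, T in K."""
    x = g * MU_B * J * np.asarray(H, float) / (K_B * T)
    return n_ions * g * MU_B * J * brillouin(J, x)
```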
\subsection{XANES results}\label{sec:xanes}
The XANES spectra allow one to determine the redox state of the probed
species and give information on the structure of the surroundings of
the absorbing atom.\cite{Yamamoto:2008_XRS} Basically, the near-edge
region reflects the density of those empty states that are
accessible via dipole transitions from the Mn $1s$ shell.
The goal of our XANES analysis is to determine the valence state of Mn
and to confirm the Mn$_{\rm Ga}$ incorporation, in comparison to the
findings and analysis carried out previously for molecular beam
epitaxy (MBE)-grown (Ga,Mn)N, and interpreted in terms of Mn$^{3+}$
(Refs.~\onlinecite{Sarigiannidou:2006_PRB} and \onlinecite{Titov:2005_PRB}) or
Mn$^{2+}$ (Ref.~\onlinecite{Sancho-Juan:2009_JPCM}). In order to
assign the Mn valence state, we first compare the position of the Mn
K absorption edge to that of reference compounds, namely Mn-based
oxides, since no data on Mn nitrides are available to us. This
procedure was already adopted by other
groups\cite{Biquard:2003_JS,Sancho-Juan:2009_JPCM} but its reliability
could be questionable; to clarify this point {\em ab initio}
calculations are also performed.
As shown in Fig.~\ref{fig:xanes}, the XANES spectra determined for two
samples differing in Mn concentrations (100A and 490A) are
identical, confirming a conclusion from the SQUID data on the
independence of the Mn charge state from the Mn concentration. In
Fig.~\ref{fig:xanes}$(a)$ three spectra collected in transmission mode
from commercial powders of MnO, Mn$_2$O$_3$ and MnO$_2$, with
Mn-valence states 2+, 3+, 4+, respectively, are used as reference. As
seen, with the increasing charge state, the edge moves to a higher
energy, as the accumulated positive charge shifts the Mn $1s$ shell
downwards in energy more than the valence states, in agreement with the
Haldane-Anderson rule.
Usually, the edge position is taken at the first inflection point of
the plot, but in the present case (since the oxide spectra exhibit a
broad peak that modifies the slope at the edge) a better estimate of
the edge position is obtained by considering the energy of the half
step-height of the background function. In both investigated samples
this lies at 6550.0(5)~eV. For the oxides, their half-height energies
are determined to be 6545.7(5)~eV, 6550.3(5)~eV, 6553.3(5)~eV for MnO,
Mn$_2$O$_3$ and MnO$_2$, respectively. This would strongly suggest
that we deal with Mn$^{3+}$, in line with the SQUID results
(Sec.~\ref{sec:squid}). On the other hand, taking the position of the
inflection points, the determined charge state would be 2+, as
reported in Ref.~\onlinecite{Sancho-Juan:2009_JPCM}. This demonstrates
that, in this case, relying only on the edge position to determine the
valence state is prone to error and strongly depends on the local
surrounding of the probed species.\cite{Farges:2005_PRB}
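The half-step-height criterion used here can be sketched as follows (a minimal Python sketch; the pre- and post-edge fitting ranges are free choices and the names are ours):

```python
import numpy as np

def half_height_edge(E, mu, pre_range, post_range):
    """Edge position at half the step height: estimate flat pre- and
    post-edge levels, then interpolate linearly to the energy where
    the (roughly monotonic) edge crosses their midpoint."""
    pre = (E >= pre_range[0]) & (E <= pre_range[1])
    post = (E >= post_range[0]) & (E <= post_range[1])
    half = 0.5 * (mu[pre].mean() + mu[post].mean())
    i = int(np.argmax(mu >= half))  # first point at/above the midpoint
    x0, x1, y0, y1 = E[i - 1], E[i], mu[i - 1], mu[i]
    return x0 + (half - y0) * (x1 - x0) / (y1 - y0)
```

Unlike the inflection-point criterion, this estimate is insensitive to the broad peak that modifies the slope at the edge of the oxide spectra.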
\begin{figure}[htbp]
\includegraphics[width=8.5cm]{fig_xanes.eps}
\caption{(Color online) Normalized XANES spectra of the samples
100A and 490A (points) compared with: (a) the reference
manganese oxides (MnO, Mn$_2$O$_3$, MnO$_2$) - the chosen edge
positions are highlighted by vertical lines; (b) {\em ab initio}
absorption spectra (without convolution) for Mn$_{\rm Ga}$ in the
3$d^4$ and 3$d^5$ electronic configurations. The inset (c) shows
the method used to extract the results of Table~\ref{tab:xanes},
focusing the near-edge region for sample 490A with the baseline
(Bkg), the relative fit and its components (A$_1$-A$_{4}$).}
\label{fig:xanes}
\end{figure}
To clarify this point, we look at the pre-edge peaks of the XANES
lines (Fig.~\ref{fig:xanes}$(b)$,$(c)$). In both probed samples, there
are two defined peaks below the absorption edge, which we label A$_1$
and A$_2$, while the edge itself shows two shoulders, A$_3$ and
A$_{4}$. In Table~\ref{tab:xanes} the results of Gaussian fits
performed by using an arctan function as baseline are
reported. Similar findings were previously
interpreted\cite{Titov:2005_PRB,Antonov:2010_PRB} as indicative of the
Mn$^{3+}$ charge state. The peaks A$_1$ and A$_2$ correspond to the
transitions to Mn 3$d$-4$p$ hybrid states, while A$_3$ and A$_4$ end
in the GaN higher conduction bands at positions with a high density of
4$p$ states. Due to the tetrahedral environment, the Mn 3$d$-levels
split into two nearly degenerate $e$- and three nearly degenerate
$t_2$-levels for each spin-direction. The actual position of those
states with respect to the GaN band structure is still a matter of
debate, but from absorption\cite{Korotkov:2001_PBCM,Wolos:2004_PRB_a}
and photoluminescence\cite{Zenneck:2007_JAP} measurements it is known
that for the majority spin carriers in Mn$^{3+}$, the $e$-levels lie
around 1.4~eV below the $t_2$-levels of Mn incorporated
substitutionally in GaN, and the $t_2$-level, {\em i.e.,} the
Mn$^{3+}$/Mn$^{2+}$ state is located about 1.8~eV above the valence
band. An interpretation of simulations applied to x-ray absorption spectra
is given in Refs.~\onlinecite{Titov:2005_PRB} and \onlinecite{Titov:2006_thesis}, and
states that, due to crystal field effects, the 3$d$- and 4$p$-states
can hybridize, making transitions from the 1$s$-level to the
$t_2$-levels dipole allowed, while the interaction of the $e$-levels
with the 4$p$ orbitals is much weaker and cannot be seen in K-edge
XANES.
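The fit underlying Table~\ref{tab:xanes} can be illustrated schematically (a Python sketch using {\tt scipy}; only two Gaussians are included for brevity, all names and starting values are ours, and the FWHM quoted as $W$ relates to the Gaussian $\sigma$ by $W \simeq 2.355\,\sigma$):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(E, step, e0, w, a1, p1, s1, a2, p2, s2):
    """Arctan baseline plus two Gaussian pre-edge peaks."""
    base = step / np.pi * (np.arctan((E - e0) / w) + np.pi / 2.0)
    gauss = lambda a, p, s: a * np.exp(-0.5 * ((E - p) / s) ** 2)
    return base + gauss(a1, p1, s1) + gauss(a2, p2, s2)

def fit_preedge(E, mu, p0):
    """Least-squares fit; returns the parameters by name."""
    popt, _ = curve_fit(model, E, mu, p0=p0)
    keys = ("step", "E0", "width", "A1", "P1", "S1", "A2", "P2", "S2")
    return dict(zip(keys, popt))
```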
\begin{table}[tbp]
\caption{Position $P$, integrated intensity $I$ and full width at half maximum $W$ of the Gaussians fitted to the peaks before and at the absorption edge. The background function is used to normalize the spectra.}
\label{tab:xanes}
\begin{center}
\begin{tabular}{|l|ccc|ccc|}
\hline
\hline
& \multicolumn{3}{c|}{100A} & \multicolumn{3}{c|}{490A}\\
& $P$ (eV) & $I$ & $W$ (eV) & $P$ (eV) & $I$ & $W$ (eV)\\
& $\pm$~0.2 & $\pm$~0.05 & $\pm$~0.1 & $\pm$~0.2 & $\pm$~0.05 & $\pm$~0.1\\
\hline
A$_1$ & 6538.9 & 0.18 & 1.5 & 6538.9 & 0.18 & 1.3 \\
A$_2$ & 6540.8 & 0.32 & 1.6 & 6540.8 & 0.33 & 1.5 \\
A$_3$ & 6545.8 & 0.70 & 3.6 & 6545.7 & 0.64 & 3.3 \\
A$_4$ & 6548.7 & 0.28 & 2.1 & 6548.4 & 0.22 & 2.0 \\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
In view of the above discussion we explain the physical mechanism
behind the observed data by considering possible $\textit{final}$ states
of the transitions from the $1s$ Mn shell. The final state
corresponding to the A$_1$ peak is Mn$^{2+}$, {\em i.e.} a $^6$A$_1$
state ($^6$S for the spherical symmetry), consisting of
$e^{2\uparrow}$ and $t_2^{3\uparrow}$ one electron levels. The A$_2$
peak can be interpreted as a crystal field multiplet derived from the
$^4G$ state consisting of
$e^{2\uparrow}t_2^{2\uparrow}t_2^{\downarrow}$, and lying about 2.5~eV
higher than the A$_1$ state. Apart from what is reported in the literature, a
reason why A$_1$ and A$_2$ are assigned to localized Mn-states is that
from the previous EXAFS analysis (Sec.~\ref{sec:exafs}) we obtain an
absorption edge value of 6543(1)~eV, between the energies of the A$_2$
and A$_3$ peaks, meaning that electrons excited to A$_1$ and A$_2$
cannot backscatter off the surrounding atoms and are thus
localized. This assignment gives a valuable piece of information, namely, that
there is an empty state in the majority-spin $t_2$-level confirming
that most of the incorporated Mn-ions are really in the 3+ valence
state, in agreement with the conclusions of Refs.
\onlinecite{Sarigiannidou:2006_PRB} and \onlinecite{Titov:2005_PRB}. The model
explains also the presence of only one pre-edge peak in the case of
(Ga,Mn)As and $p$-(Zn,Mn)Te.\cite{Titov:2005_PRB} In those systems we
deal with Mn$^{2+}$ and delocalized holes, so that the final state of
the relevant transitions corresponds to the Mn $d^6$ level, involving
only one spin orientation. On the other hand, the XANES data do not
provide information on the radius of the hole localization in
(Ga,Mn)N, in other words, whether the Mn$^{3+}$ configuration
corresponds to the $d^4$ or rather to the $d^5$ + h situation, where
the relevant $t_2$ hole state is partly built from the neighboring
anion wave functions owing to a strong $p-d$ hybridization.
We have also simulated the Mn$_{\rm Ga}$ K-edge absorption spectra in a
Ga$_{95}$Mn$_1$N$_{96}$ cluster (a $4a\times 4a\times 3c$ supercell,
corresponding to a 1\% Mn concentration), focusing on the
Mn electronic configurations 3$d^4$ and 3$d^5$. The calculation is
conducted within the multiple-scattering approach implemented in {\sc
fdmnes}\cite{Joly:2001_PRB} using muffin-tin potentials, the
Hedin-Lundqvist approximation for their energy-dependent part, a
self-consistent potential calculation\cite{Joly:2009_JPC} for
enhancing the accuracy in the determination of the Fermi energy and
the in-plane polarization ($E \perp c$). Although it is common practice
to report convoluted spectra to mimic the experimental resolution, we
find that this procedure can arbitrarily change the layout of the
pre-edge peaks, and for this reason we prefer to show
non-convoluted data [Fig.~\ref{fig:xanes}(b)]. Regarding the fine
structure of the simulated spectra, we have a good agreement with
experimental data, confirming the Mn$_{\rm Ga}$ incorporation as found
by the EXAFS analysis (Sec.~\ref{sec:exafs}). On the other hand, the
simulated pre-edge features need further investigation: the
experimental intensity of A$_1$ and A$_2$ and the position of A$_3$
are not properly reproduced. This could be due to some neglected
effects in the employed formalism, as explained in
Ref.~\onlinecite{Titov:2005_PRB}, where the two-peak structure was
reproduced theoretically within a more elaborate model.
\section{Magnetic properties}
\label{sec:magnetic_properties}
\subsection{SQUID results}\label{sec:squid}
\begin{figure}[b]
\centering
\includegraphics[width=8.5cm]{fig_M_H_diffT.eps}
\caption{(Color online) Magnetization measurements at 1.85, 5, and
15~K of Ga$_{1-x}$Mn$_x$N as a function of the magnetic field
applied parallel (closed circles) and perpendicular (open squares)
to the GaN wurtzite $c$-axis. The solid lines show the
magnetization curves calculated according to the group theoretical
model for non-interacting Mn$^{3+}$ ions in wz-GaN.}
\label{fig:M_H_diffT}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=7.5cm]{fig_M_T.eps}
\caption{(Color online) Temperature dependence of the magnetization
$M$ for sample 375A (points) at $H=10$~kOe. The solid lines
represent the magnetization calculated within the group theoretical
model of non-interacting Mn$^{3+}$ ions in wz-GaN.}
\label{fig:M_T}
\end{figure}
We investigate both the temperature dependence of the magnetization
$M$ at a constant field $M(T)$ and the sample response to the
variation of the external field at a constant temperature $M(H)$. The
same experimental routine is repeated for both in-plane and
out-of-plane configurations, that is with magnetic field applied
perpendicular and parallel to the hexagonal $c$-axis, respectively. In
Fig.~\ref{fig:M_H_diffT} representative low temperature $M(H)$ data
for both orientations are reported. We note that these curves exhibit
a paramagnetic behavior with a pronounced anisotropy with respect to
the $c$-axis of the crystal. This indicates a nonspherical Mn ion
configuration, expected for an $L\neq0$ state. At the same time we
report an absence of any ferromagnetic-like features, that---on the
other hand---are typical for (Ga,Fe)N layers\cite{Bonanni:2007_PRB} at
these concentrations of the magnetic ions, supporting the absence of
crystallographic phase separation in our layers, as suggested by the
SXRD and HRTEM studies. The same finding additionally indicates that
both chemical phase separation (spinodal decomposition) and
medium-to-long range ferromagnetic spin-spin coupling are also absent
in these dilute layers. The latter allows us to treat the Mn ions as
completely non-interacting, at least in the first approximation. The
solid lines in Figs.~\ref{fig:M_H_diffT} and \ref{fig:M_T} represent
fits to our experimental data assuming the paramagnetic response of
non-interacting Mn$^{3+}$ ions ($L=2$, $S=2$) with the trigonal
crystal field of the wurtzite GaN structure and the Jahn-Teller
distortion taken into account (details in Sec.~\ref{sec:CF}). The
overall match validates our approach, which, in turn, is consistent
with previous findings\cite{Graf:2002_APL,Graf:2003_PSSB} that without
an intentional codoping, or when the stoichiometry of GaN:Mn is
maintained, Mn is occupying only the neutral Mn$^{3+}$ acceptor
state. Interestingly, all theoretical lines in
Figs.~\ref{fig:M_H_diffT} and \ref{fig:M_T} are calculated employing
only one set of crystal field parameters (as listed in
Table~\ref{tab:Parameters_CF}) having the Mn$^{3+}$ concentration
$n_{\mathrm{Mn^{3+}}}$ as the only adjustable parameter for each
individual layer. In Fig.~\ref{fig:Mn_Concentration} the
$n_{\mathrm{Mn^{3+}}}$ values as a function of the manganese precursor
flow rate are given together with the total Mn content
$x_{\mathrm{Mn}}$ as determined by SIMS.
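Since $n_{\mathrm{Mn^{3+}}}$ enters the model linearly, the fit described above reduces to a one-parameter linear least-squares problem once the moment per ion has been computed. The following sketch is our own illustration of this step: a simple Brillouin function is used purely as a stand-in for the full crystal-field magnetization curve of Sec.~\ref{sec:CF}, and all function names and the synthetic data are ours.

```python
import numpy as np

def brillouin(J, x):
    # Brillouin function B_J(x); used here only as a stand-in for the
    # full crystal-field magnetization curve
    a = (2.0*J + 1.0) / (2.0*J)
    b = 1.0 / (2.0*J)
    return a / np.tanh(a * x) - b / np.tanh(b * x)

def moment_per_ion(H, T, J=2.0, g=2.0):
    # thermal-average moment per ion in units of mu_B; H in Oe, T in K
    mu_B_over_kB = 6.717e-5          # mu_B / k_B in K/Oe
    x = g * J * mu_B_over_kB * H / T
    return g * J * brillouin(J, x)

def fit_concentration(H, M_data, T):
    # n_Mn is the only adjustable parameter, so the least-squares fit
    # M = n_Mn * m(H) is linear and has a closed-form solution
    m = moment_per_ion(H, T)
    return float(m @ M_data / (m @ m))
```

In the actual analysis the model curve per ion comes from the diagonalization of the crystal-field Hamiltonian rather than from a Brillouin function; only the final one-parameter projection is the same.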
However, there are hints that the interaction between Mn spins may
play a role for $x \gtrsim 0.6$\%. In Fig.~\ref{fig:M_H_normalized}
the $M(H)$ curves, normalized to their in-plane values at high field
($H=50$~kOe), are reported. The fact that the shape of the
magnetization curves is independent of $x$ for $x \lesssim 0.6$\%
means that the interactions between Mn ions are unimportant at these
dilutions. On the other hand, the $M(H)$ curve for the layer with $x =
0.9$\% (490A) deviates markedly from the curves for samples with $x
\lesssim 0.6$\%,
indicating that a presumably ferromagnetic Mn-Mn coupling starts to
emerge with increasing relative number of Mn nearest neighbors in the
layers. Nevertheless, due to the generally low Mn concentration in the
considered samples, no conclusive statement about the strength of the
magnetic couplings can be drawn from our magnetization data.
Interestingly, depending on the very nature of the Mn centers both
ferromagnetic and/or antiferromagnetic $d$-$d$ interactions can emerge
in (Ga,Mn)N. The presence of Mn$^{2+}$ ions essentially leads to
antiferromagnetic superexchange, as in II-Mn-VI DMS, where
independently of the electrical doping, the position of the Mn
$d$-band guarantees its 3$d^5$ configuration. Significantly, the same
antiferromagnetic $d$-$d$ ordering and paramagnetic behavior typical
for $S=5/2$ of Mn$^{2+}$ was reported in $n$-type bulk (Ga,Mn)N samples
containing as much as $9\%$ of Mn.\cite{Zajac:2001_APL} On the other
hand, calculations for Mn$^{3+}$ within the DFT point to ferromagnetic
coupling\cite{Boguslawski:2005_PRB,Cui:2007_PRB} and, experimentally,
a Curie temperature $T_{\mathrm{C}}\simeq 8~K$ was observed in
single-phase Ga$_{1-x}$Mn$_x$N with $x \simeq 6\%$ and the majority of
Mn atoms in the Mn$^{3+}$ charge
state.\cite{Sarigiannidou:2006_PRB,Marcet:2006_PSSC} Our experimental
data seems to support these findings and to extend their validity
towards the very diluted limit. Finally, we remark that the
carrier-mediated ferromagnetism can be excluded at this stage due to
the insulating character of the samples, confirmed by room
temperature four probe resistance measurements and consistent with the
mid-gap location of the Mn acceptor level.
\begin{figure}[tbp]
\centering
\includegraphics[width=7.5cm]{fig_Mn_Concentration.eps}
\caption{(Color online) Mn concentration $n_{\mathrm{Mn}}$ obtained
from magnetization measurements (circles - series A, diamonds -
series B) and SIMS (squares - series A) as a function of the Mn
precursor flow rate.}
\label{fig:Mn_Concentration}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=7.5cm]{fig_M_H_normalized.eps}
\caption{(Color online) Magnetization curves at $T=1.85$~K of
five Ga$_{1-x}$Mn$_x$N samples normalized with respect to their
in-plane magnetization at $H=50$~kOe.}
\label{fig:M_H_normalized}
\end{figure}
The observations presented here point to the uniqueness of Mn in
GaN. The fact that Ga$_{1-x}$Mn$_x$N with $x \lesssim 1$\% is
paramagnetic without even nanometer-scale ordering should be
contrasted with GaN doped with other TM ions. Depending on the growth
conditions, the TM solubility limit is rather low and typically,
except for Mn, it is difficult to introduce more than $1\%$ of magnetic
impurities into randomly distributed substitutional sites. For
example, the solubility limit of Fe in GaN has been shown to be
$x\approx0.4\%$ at optimized growth conditions (see
Ref.~\onlinecite{Bonanni:2007_PRB}), but signatures of a nanoscale
ferromagnetic coupling are observed basically for any
dilution.\cite{Bonanni:2007_PRB} The relatively large solubility limit
of Mn in GaN, in turn, has a remarkable significance in the search for
long-range coupling mediated by itinerant carriers.\cite{Dietl:2000_S,
Dietl:2001_PRB} Not only does it let us foresee a high concentration of
substitutional Mn---important for the long-range ordering---but it can
ensure that the effects brought about by carriers are not masked by
signals from nanocrystals with different phases.
\subsection{Magnetism of Mn$^{3+}$ ions - theory}
\label{sec:CF}
The Mn concentration in our samples is then $x\lesssim 1\%$ as
evaluated by means of various characterization techniques
(subsection~\ref{sec:x-Mn}), implying that most of the Mn ions ($\geq
90\%$) have no nearest magnetic neighbors. Therefore, the model that
considers the Mn ions as single, noninteracting magnetic centers
should provide a reasonable picture. To describe the Mn$^{3+}$ ion we
follow the group theoretical model developed for Cr$^{2+}$ ion by
Vallin\cite{Vallin:1970_PRB, Vallin:1974_PRB} and then successfully
used for a Mn-doped hexagonal GaN
semiconductor.\cite{Gosk:2005_PRB,Wolos:2004_PRB_b} It should be
pointed out that symmetry considerations cannot discriminate between
the $d^5$+hole and $d^4$ many-electron configurations of the Mn ions,
therefore the presented model should be applicable to both
configurations. Through this section, the capital letters $T_i$
($i=1,2$), $E$ denote the irreducible representations of the point
group for the multielectron configurations in contrast to the single
electron states indicated by the small letters $d$, $e$, $t_2$.
We consider a Mn ion that in its free state is in the electronic
configuration $d^5s^2$ of the outer shells. When substituting for the
group III ($s^2p$) cation site, Mn gives three of its electrons to the
crystal bond and assumes the Mn$^{3+}$ configuration. In a tetrahedral
crystal field, the relevant levels are five-fold degenerate with
respect to the projection of the orbital momentum and are split by
this field and by hybridization with the host orbitals into two
sublevels $e$ and $t_2$ with different energies. In the tetrahedral
case the $e$ states lie lower than the $t_2$ states. This fact can be
understood by analyzing the electron density distribution of the
$t_2^{xy}$, $t_2^{yz}$, $t_2^{zx}$ and $e^{x^2-y^2}$, $e^{z^2}$
levels. The density of the $t_2$ state extends along the direction
toward the N ligand anions, while the $e$ orbital has a larger
amplitude in the direction maximizing the distance to the N ion and
due to the negative charge of the N anions, the $t_2$ energy
increases. However, the relevant, {\em i.e.}, the uppermost $t_2$
state may actually originate from orbitals of the neighboring anions,
pulled out from the valence band by the $p$-$d$
hybridization.\cite{Dietl:2008_PRB} If the system has several
localized electrons, they successively occupy the levels from the
bottom, according to Hund's first rule, and keep their spins
parallel. By considering the full orbital and spin moments, the
Mn$^{3+}$ center can be described through the following set of quantum
numbers ($Lm_LSm_S$) with $L=2$ and $S=2$. However, we underline that
this procedure can be used only if the intra-atomic exchange
$\Delta_{ex}$ interaction is larger than the splitting between the $e$
and $t_2$ states $\Delta_{CF}=E_{t_2}-E_e$ ($\Delta_{ex} >
\Delta_{CF}$). After this, the effect of the host crystal is taken
into account as a perturbation like in the single electron
problem. One first forms the $2L+1$ wave functions for the $n$-electron
system determined by Hund's rule, calculates the matrix elements
for these states and determines the energy level structure. In this
way, the impurity ion states are found and classified according to
the irreducible representations of the crystal point group and
characterized by the set ($\Gamma MSm_S$) of quantum numbers, with $M$
the number of the line of an irreducible representation $\Gamma = A_1,
A_2, E, T_1, T_2$ of the corresponding point group. In the case of a
Mn$^{3+}$ ($L=2, S=2$) ion in a tetrahedral environment the ground
state corresponds to the $^5T_2(e^2t_2^2)$ configuration with two
electrons in the $e$ and two electrons in the $t_2$ level. The ground
state is three-fold degenerate, since there are three possibilities to
choose two orbitals from three $t_2$ orbitals. The first excited state
for the Mn$^{3+}$ ion is $^5E(e^1t_2^3)$ (see
Ref.~\onlinecite{Kikoin:2004_B}).
\begingroup
\squeezetable
\begin{table}[tbp]
\centering \caption{Parameters of the group theoretical model used to calculate the magnetization of Ga$_{1-x}$Mn$_x$N. All values are in meV.}
\begin{ruledtabular}
\begin{tabular}{ ccccccc}
$B_4$ & $B_2^0$ & $B_4^0$ & $\tilde{B}_2^0$ & $\tilde{B}_4^0$ & $\lambda_{TT}$ & $\lambda_{TE}$ \\
\hline
\\
11.44&4.2&-0.56&-5.1&-1.02&5.0&10.0\\
\end{tabular}
\end{ruledtabular}
\label{tab:Parameters_CF}
\end{table}
\endgroup
The energy structure of a single ion in the Mn$^{3+}$ charge state can be
described by the Hamiltonian
\begin{eqnarray}
\label{eq:Hcf}
H=H_{\mathrm{CF}}+H_{\mathrm{JT}}+H_{\mathrm{TR}}+H_{\mathrm{SO}}+H_{\mathrm{B}},
\end{eqnarray}
where $H_{\mathrm{CF}}=-2/3B_4(\hat{O}_4^0-20\sqrt{2}\hat{O}_4^3)$
gives the effect of a host having tetrahedral $T_{d}$ symmetry,
$H_{\mathrm{JT}}=\tilde{B}_2^0\hat{\Theta}_2^0+\tilde{B}_4^0\hat{\Theta}_4^0$
is the static Jahn-Teller distortion of tetragonal symmetry,
$H_{\mathrm{TR}}=B_2^0\hat{O}_2^0+B_4^0\hat{O}_4^0$ represents the
trigonal distortion along the GaN hexagonal $c$-axis, which lowers the
symmetry to $C_{3V}$, $H_{\mathrm{SO}}=\lambda\hat{L}\hat{S}$
corresponds to the spin-orbit interaction and
$H_B=\mu_B(\hat{L}+2\hat{S})\textbf{B}$ is the Zeeman term describing
the effect of an external magnetic field. Here $\hat{\Theta}$,
$\hat{O}$ are Stevens equivalent operators for a tetragonal distortion
along one of the cubic axes $[100]$ and trigonal axis $[111]\|c$ (in a
hexagonal lattice) and $B_q^p$, $\tilde{B}_q^p$, $\lambda_{TT}$, and
$\lambda_{TE}$ are parameters of the group theoretical model. As
starting values we have used the parameters reported for Mn$^{3+}$ in
GaN:Mn,Mg \cite{Wolos:2004_PRB_b} which describe well the
magneto-optical data on the intra-center absorption related to the
neutral Mn acceptor in GaN. Remarkably, only a modest
modification (about 10\%) of $\lambda_{TT}$ and $B_2^0$ has been
necessary in order to reproduce our magnetic data (the remaining
parameters are within 3\% of their previously determined values).
Actually, the model with the parameter values collected in Table~\ref{tab:Parameters_CF}
describes both the magnetization $M(H)$ and its crystalline anisotropy
(Figs.~\ref{fig:M_H_diffT} and \ref{fig:M_T}) as well as the position and the field-induced splitting of optical lines.\cite{Wolos:2004_PRB_b}
The ground state of the Mn$^{3+}$ ion is an orbital and spin quintet
$^5D$ with $L=2$ and $S=2$. The term $H_{\mathrm{CF}}$ splits the
$^5D$ ground state into two terms of symmetry $^5E$ and $^5T_2$
(ground term). The $^5E-^5T_2$ splitting is $\Delta_{CF}=120B_4$. The
nonspherical Mn$^{3+}$ ion undergoes further Jahn-Teller distortion,
that lowers the local symmetry and splits the ground term $^5T_2$ into
an orbital singlet $^5B$ and a higher-lying orbital doublet
$^5E$. The trigonal field splits the $^5E$ term into two orbital
singlets and slightly decreases the energy of the $^5B$ orbital
singlet. The spin-orbital term yields further splitting of the spin
orbitals. Finally, an external magnetic field lifts all of the
remaining degeneracies.
For the crystal under consideration, there are three Jahn-Teller
directions: $[100]$, $[010]$ and $[001]$ (center A, B, C
respectively).\cite{Gosk:2005_PRB, Wolos:2004_PRB_b} It should be
pointed out that the magnetic anisotropy of the Mn$^{3+}$ system
originates from different distributions of nonequivalent Jahn-Teller
centers in the two orientations of the magnetic field and the
hexagonal axial field $H_{TR}$ along the $c$-axis. This picture of Mn
in GaN emphasizing the importance of the Jahn-Teller effect, which lowers the local symmetry and splits the ground term $^5T_2$ into an orbital singlet and a doublet, is in agreement with a recent $\textit{ab initio}$ study employing a hybrid exchange potential.\cite{Stroppa:2009_PRB}
The energy level scheme of the Mn$^{3+}$ ion is calculated through a
numerical diagonalization of the full $25\times 25$ Hamiltonian
(\ref{eq:Hcf}) matrix. The average magnetic moment of the Mn ion
$\textbf{m}=\textbf{L}+2\textbf{S}$ (in units of $\mu_B$) can be
obtained according to the formula:
\begin{equation}
\label{eq:M_cf}
\langle\textbf{m}\rangle=Z^{-1}(Z_A\langle\textbf{m}\rangle^A+Z_B\langle\textbf{m}\rangle^B+Z_C\langle\textbf{m}\rangle^C),
\end{equation}
with $Z_i$ ($i=A, B$ or $C$) being the partition function of the $i$-th
center, $Z=Z_A+Z_B+Z_C$ and
\begin{equation}
\label{eq:M_cf_Center}
\langle\textbf{m}\rangle^i=\frac{\sum_{j=1}^N\langle\varphi_{j}|\hat{L}+2\hat{S}|\varphi_{j}\rangle\exp(-E_j^i/k_BT)}{\sum_{j=1}^N\exp(-E_j^i/k_BT)},
\end{equation}
where $E_j^i$ and $\varphi_{j}$ are the $j$-th energy level and the
corresponding eigenstate of the $i$-th Mn$^{3+}$ center, respectively. As
already mentioned, the Mn concentration in our samples is relatively
small $x\lesssim 1\%$. Therefore, the model assuming a system of
single Mn ions provides a reasonable description of the magnetic
behavior. The macroscopic magnetization \textbf{M}, shown in
Figs.~\ref{fig:M_H_diffT} and \ref{fig:M_T}, can then be expressed in
the form
\begin{equation}
\label{eq:m_macro}
\textbf{M}=\mu_B\langle\textbf{m}\rangle n_{\mathrm{Mn}},
\end{equation}
where $n_{\mathrm{Mn}}=N_{\mathrm{Mn}}/V$ is the Mn concentration and
$N_\mathrm{Mn}$ the total number of Mn ions in a volume $V$.
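A minimal numerical sketch of this procedure is given below (our own illustration, not the production code). The three Jahn-Teller centers and the trigonal term of Eq.~\eqref{eq:Hcf} are replaced by a single uniaxial stand-in $d(3\hat{L}_z^2-L(L+1))$, so only the structure of the computation is retained: build the $25\times 25$ matrix for $L=2$, $S=2$, diagonalize, and Boltzmann-average $\hat{L}_z+2\hat{S}_z$. The parameter values are placeholders.

```python
import numpy as np

def ang_mom(j):
    # Jz and the ladder matrices J+, J- for quantum number j,
    # in the basis |j>, |j-1>, ..., |-j>
    m = np.arange(j, -j - 1.0, -1.0)
    jz = np.diag(m)
    jp = np.zeros((len(m), len(m)))
    for k in range(1, len(m)):                     # J+ |m_k> ~ |m_k + 1>
        jp[k - 1, k] = np.sqrt(j*(j + 1) - m[k]*(m[k] + 1))
    return jz, jp, jp.T

def mean_moment(B, T, lam=5.0, d=4.2, kB=0.0862):
    # thermal average of Lz + 2 Sz for an L = 2, S = 2 ion;
    # energies in meV, B in tesla, kB in meV/K.  The crystal-field
    # part of the full model is mimicked by one uniaxial term.
    muB = 5.788e-2                                  # Bohr magneton, meV/T
    Lz, Lp, Lm = ang_mom(2.0)
    Sz, Sp, Sm = ang_mom(2.0)
    I5, kron = np.eye(5), np.kron
    LS = kron(Lz, Sz) + 0.5*(kron(Lp, Sm) + kron(Lm, Sp))
    mz = kron(Lz, I5) + 2.0*kron(I5, Sz)            # Lz + 2 Sz
    # Zeeman sign chosen so that the moment aligns with B > 0
    H = lam*LS + d*kron(3.0*Lz@Lz - 6.0*I5, I5) - muB*B*mz
    E, U = np.linalg.eigh(H)
    w = np.exp(-(E - E.min())/(kB*T))               # Boltzmann weights
    exp_mz = np.einsum('ji,jk,ki->i', U, mz, U)     # <u_i| mz |u_i>
    return float((exp_mz @ w)/w.sum())
```

In the full calculation one would add the Stevens-operator terms of Eq.~\eqref{eq:Hcf} for each Jahn-Teller center and combine the three centers as in Eq.~\eqref{eq:M_cf}.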
\subsection{Search for hole-mediated ferromagnetism}
As already mentioned, according to the theoretical predictions within
the $p-d$ Zener model,\cite{Dietl:2000_S, Dietl:2001_PRB} RT
ferromagnetism is expected in single-phase (Ga,Mn)N and related
compounds, provided that a sufficiently high concentration of both
substitutional magnetic impurities (near 5$\%$ or above) and
valence-band holes is realized. The latter condition is the more
severe one, as the high binding energy of Mn acceptors in the strong
coupling limit leads to hole localization.
\begin{figure}[b]
\centering
\includegraphics[width=7.5cm]{nepal_SIMS.eps}
\caption{(Color online) SIMS depth profiles of our (Ga,Mn)N/(Al,Ga)N:Mg/GaN:Si ($i$-$p$-$n$)
structure.}
\label{fig:nepal_sims}
\end{figure}
Surprisingly, RT ferromagnetism in $p$-type Ga$_{1-x}$Mn$_x$N with a Mn
content as low as $x \approx 0.25\%$ was recently
reported.\cite{Nepal:2009_APL} The investigated modulation-doped
structure consisted of a (Ga,Mn)N/(Al,Ga)N:Mg/GaN:Si ($i$-$p$-$n$)
multilayer and a correlation between the ferromagnetism of the
(Ga,Mn)N film at 300~K and the concentration of holes accumulated at
the (Ga,Mn)N/(Al,Ga)N:Mg interface was shown. The interfacial hole
density was controlled by an external gate voltage applied across the
$p$-$n$ junction of the structure, and a suppression of the FM
features---already existing without the gate bias---was observed upon
application of a moderate gate voltage. Apart from a high value of
$T_{\mathrm{C}}$, a puzzling aspect of the experimental results is the
large magnitude of the spontaneous magnetization,
$75$~$\mu$emu/cm$^2$.\cite{Nepal:2009_APL} Since the holes are
expected to be accumulated in a region with a thickness of the order
of 1~nm, the reported magnetic moment is about two orders of magnitude
larger than the one expected for ferromagnetism originating from an
interfacial region in (Ga,Mn)N with $x = 0.25$\%.
\begin{figure}[htbp]
\centering
\includegraphics[width=7.5cm]{nepal_squid.eps}
\caption{(Color online) Room temperature magnetic signal from the
(Ga,Mn)N/(Al,Ga)N:Mg/GaN:Si structure. For completeness, results
of both in-plane and out-of-plane orientations are
shown. Diamagnetic and paramagnetic contributions have been
compensated.}
\label{fig:nepal_squid}
\end{figure}
Nevertheless, we have decided to check the viability of this approach
that not only seemed to result in high temperature FM in GaN:TM, but
also allowed the all-electrical control of FM. Thus, we have combined
the $p$-type doping procedures we previously
optimized\cite{Simbrunner:2007_APL} with the growth of the dilute
(Ga,Mn)N presented in this work to carefully reproduce the
corresponding structure.\cite{Nepal:2009_APL} The desired architecture
of the investigated sample is confirmed by SIMS profiling (see
Fig.~\ref{fig:nepal_sims}) indicating the formation of well defined
(Ga,Mn)N/(Al,Ga)N:Mg and (Al,Ga)N:Mg/GaN:Si interfaces. However, as
shown in Fig.~\ref{fig:nepal_squid}, no clear evidence of a
ferromagnetic-like response is seen within our present experimental
resolution of $\approx 0.3$~$ \mu\mbox{emu/cm}^{2}$. To strengthen
the point, we note here that the maximum error bar of our results
($\approx 0.7$~$ \mu\mbox{emu/cm}^{2}$) corresponds to about 1/100 of
the saturation magnetization reported in the assessed experiment.
While the absence of a ferromagnetic response at the level of our
sensitivity is to be expected, the presence of a large ferromagnetic
signal found in Ref.~\onlinecite{Nepal:2009_APL} in a nominally
identical structure is surprising. Without a careful structural
characterization of the sample studied in
Ref.~\onlinecite{Nepal:2009_APL} by methods similar to those we have employed
in the case of our layers, the origin of differences in magnetic
properties between the two structures remains unclear.
\section{Summary}
\label{sec:Summary}
In this paper we have investigated Ga$_{1-x}$Mn$_x$N films grown by
MOVPE with manganese concentration $x\lesssim 1\%$. A set of
experimental methods, including SXRD, HRTEM, and EXAFS, has been
employed to determine the structural properties of the studied
material. These measurements reveal the absence of crystallographic
phase separation and a Ga-substitutional position of Mn in GaN. The
findings demonstrate that the solubility of Mn in GaN is much greater
than the one of Cr (Ref.~\onlinecite{Cho:2009_JCG}) and Fe
(Ref.~\onlinecite{Bonanni:2008_PRL}) in GaN grown under the same
conditions. Nevertheless, for the attained Mn concentrations and owing
to the absence of band carriers, the Mn spins remain
uncoupled. Accordingly, pertinent magnetic properties as a function of
temperature, magnetic field and its orientation with respect to the
$c$-axis of the wurtzite structure can be adequately described by the
paramagnetic theory of an ensemble of non-interacting Mn ions in the
relevant crystal field. Our SQUID and XANES results point to the 3+
configuration of Mn in GaN. However, the collected information cannot
discriminate between the $d^4$ and $d^5 + h$ models of the Mn$^{3+}$ state,
that is, on the degree of hole localization on the Mn ions. A negligible
contribution of Mn in the 2+ charge state indicates a low
concentration of residual donors in the investigated films. Our
studies on modulation doped $p$-type Ga$_{1-x}$Mn$_{x}$N/(Ga,Al)N:Mg
heterostructures do not reproduce the high temperature robust
ferromagnetism reported recently for this system.\cite{Nepal:2009_APL}
\section*{Acknowledgements}
The work was supported by the FunDMS Advanced Grant of the European
Research Council within the ``Ideas'' 7th Framework Programme of the EC,
and by the Austrian Fonds zur {F\"{o}rderung} der wissenschaftlichen
Forschung (P18942, P20065 and N107-NAN). We also acknowledge H.~Ohno
and F.~Matsukura for valuable discussions, G. Bauer and R.T. Lechner
for their contribution to the XRD measurements as well as the support
of the staff at the Rossendorf Beamline (BM20) and at the Italian
Collaborating Research Group at the European Synchrotron Radiation
Facility in Grenoble.
\bibliographystyle{apsrev}
\section{Introduction\label{s1}}
Let $\sR^-$ and $\sR^+$ be, respectively, the subsets of all
negative and all positive real numbers and $\chi_E$ refer to
the characteristic function of the subset $E\subset \sR$ -- i.e.
\begin{equation*}
\chi_E(t):=\left \{
\begin{array}{ll}
1 & \text{ if } t\in E, \\
0 & \text{ if } t\in \sR\setminus E \\
\end{array}
\right.
\end{equation*}
In what follows, we often identify the spaces $L^p(\sR^+)$ and
$L^p(\sR^-)$, $1\leq p \leq\infty$ with the subspaces $\chi_{\sR^+} L^p(\sR)$ and $\chi_{\sR^-} L^p(\sR)$ of the space $L^p(\sR)$, which consist of the functions vanishing on $\sR^-$ and $\sR^+$, respectively.
Let $\mathcal{F}$ and $\mathcal{F}^{-1}$ be the direct and inverse Fourier transforms -- i.e.
\begin{equation*}
\mathcal{F}\varphi(\xi):=\int\limits_{-\infty}^\infty e^{i\xi x} \varphi(x)\,dx,\quad
\mathcal{F}^{-1}\psi(x):=\frac1{2\pi}\int\limits_{-\infty}^\infty
e^{-i\xi x}\psi(\xi)\,d\xi,\;\;x\in\sR.
\end{equation*}
Consider the set $\cL$ of functions $c\colon \sR\to \sC$ such that
$c=\cF k$ with $k\in L^1(\sR)$, and let $AP_W(\sR)\subset
L^\infty(\sR)$ be the set of functions $a\colon\sR\to \sC$ having the
representation
\begin{equation}\label{Eq18}
a(t)=\sum_{j\in\sZ} a_j e^{i\delta_j t}, \quad t\in\sR,
\end{equation}
with absolutely convergent series \eqref{Eq18}. It is assumed
that $\delta_j\in\sR$ for all $j\in \sZ$ and $\delta_j \neq \delta_k$ if $j\neq k$. By $G$ we denote the
smallest closed subalgebra of $L^\infty(\sR)$, which contains both
$AP_W(\sR)\subset
L^\infty(\sR)$ and $\cL$. One can show that any function $g\in G$ can be
represented in the form
\begin{equation}\label{Eq19}
g=a+c, \quad a\in AP_W(\sR), \; c\in \cL.
\end{equation}
We also consider the subalgebra $G^+$ ($G^-$) of the algebra $G$,
which consists of all functions \eqref{Eq19} such that all numbers
$\delta_j$ are non-negative (non-positive) and $c=\cF k$ with $k(t)=0$ for all $t\leq 0$ ($t\geq 0$). The functions from $G^+$ and
$G^-$ admit holomorphic extensions to the upper and lower
half-planes, respectively, and the set $G^+\cap G^-$ contains only the constant functions.
Any function $a\in G$ generates an operator $W^0(a)\colon L^p(\sR)\to L^p(\sR)$ and
operators $W(a),H(a)\colon L^p(\sR^+)\to L^p(\sR^+)$ defined by
\begin{equation*
\begin{aligned}
W^0(a)&:=\mathcal{F}^{-1}a\mathcal{F}, \\
W(a)&:=PW^0(a), \\
H(a)&:= PW^0(a)QJ,
\end{aligned}
\end{equation*}
where $P \colon f\mapsto \chi_{\sR^+} f$ and $Q:=I-P$ are the projections onto
the subspaces $L^p(\sR^+)$ and $L^p(\sR^-)$, respectively, and
$J\colon L^p(\sR)\to L^p(\sR)$ is the reflection operator defined by $J\vp :=
\tilde{\vp}$. Here and in what follows, $\tilde{\vp}(t):=\vp(-t)$ for any $\vp\in L^p(\sR)$, $p\in [1,\infty]$. The operator
$W(a)$ is called the convolution on the semi-axis $\sR^+$ or the
Wiener-Hopf operator, whereas $H(a)$ is referred to as the Hankel
operator. It is well-known \cite{GF:1974} that for $a\in G$ all three operators are bounded on the corresponding space $L^p$ for any $p\in [1,\infty)$.
The operators $W^0$ and $W(a)$ can be also represented as
\begin{align*}
W^0(a)\vp(t)& =\sum_{j=-\infty}^\infty a_j \vp(t-\delta_j)
+\int_{-\infty}^\infty k(t-s) \vp(s)\,ds, \quad t\in\sR ,\\
W(a)\vp(t) &=\sum_{j=-\infty}^\infty a_j B_{\delta_j} \vp(t)
+\int_{0}^\infty k(t-s) \vp(s)\,ds, \quad t\in\sR^+ ,
\end{align*}
where
\begin{align*}
B_{\delta_j} \vp(t)&=\vp(t-\delta_j) \quad\text{if}\quad \delta_j\leq 0,\\
B_{\delta_j} \vp(t)&=\left \{
\begin{array}{l}
0, \quad 0\leq t \leq \delta_j\\
\vp(t-\delta_j), \quad t> \delta_j
\end{array}
\right.
\quad \text{if}\quad \delta_j >0.
\end{align*}
Moreover, for $a=\mathcal{F}k$ the operator $H(a)$ acts as
\begin{equation*}
H(a)\vp(t)=\int_0^\infty k(t+s)\vp(s)\,ds
\end{equation*}
and for $a(t)=e^{i\delta t}$ as
\begin{align*}
H(a)\vp(t) &=
\left \{
\begin{array}{ll}
\vp(\delta-t),& \quad 0\leq t\leq \delta\\
0,& \quad t>\delta
\end{array}
\right .,
\quad \text{if}\quad \delta >0,\\
H(a)\vp(t)& = 0, \quad t\in \sR^+, \quad \text{if}\quad \delta \leq0.
\end{align*}
Let us now recall a few useful identities involving the operators mentioned. It is easily seen that if $a,b\in G$, then
\begin{equation*}
W^0(a b)=W^0(a) W^0(b).
\end{equation*}
Wiener-Hopf operators $W(a)$ generally do not possess this property, but according to \cite[pp. 484, 485]{BS:2006} we still have
\begin{equation}\label{cst4}
\begin{aligned}
W(ab)&=W(a)W(b)+H(a) H(\tilde{b}),\\
H(ab)&= W(a)H(b)+H(a)W(\tilde{b}).
\end{aligned}
\end{equation}
Moreover, if $a\in G^-$, $b\in G$ and $c\in G^+$, then
\begin{equation*
W(abc)=W(a) W(b) W(c).
\end{equation*}
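The identities \eqref{cst4} have exact discrete analogues for Toeplitz and Hankel matrices built from Fourier coefficient sequences, which makes them easy to verify numerically. The sketch below is our own illustration; it uses the conventions $T(a)_{jk}=a_{j-k}$, $H(a)_{jk}=a_{j+k+1}$, $(\tilde{b})_n=b_{-n}$, and the truncation of the semi-infinite matrices to finite sections is exact on an inner block when the symbols are banded.

```python
import numpy as np

def T_mat(c, N):
    # N x N finite section of the Toeplitz matrix [c_{j-k}];
    # c is a dict mapping the offset n to the coefficient c_n
    return np.array([[c.get(j - k, 0.0) for k in range(N)] for j in range(N)])

def H_mat(c, N):
    # N x N finite section of the Hankel matrix [c_{j+k+1}]
    return np.array([[c.get(j + k + 1, 0.0) for k in range(N)] for j in range(N)])

def conv(a, b):
    # coefficient sequence of the product ab (discrete convolution)
    out = {}
    for n, x in a.items():
        for m, y in b.items():
            out[n + m] = out.get(n + m, 0.0) + x*y
    return out

def flip(b):
    # coefficients of b~, i.e. (b~)_n = b_{-n}
    return {-n: v for n, v in b.items()}
```

With banded symbols, both products $T(ab)=T(a)T(b)+H(a)H(\tilde{b})$ and $H(ab)=T(a)H(b)+H(a)T(\tilde{b})$ hold entrywise on the block of indices far enough from the truncation edge.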
The operators $W(a)$ are well studied. For various classes of generating functions $a$, the conditions of Fredholmness or semi-Fredholmness of such operators can be efficiently written \cite{BS:2006,BKS:2002,CD:1969,Du:1973,Du:1977,Du:1979,GF:1974}. Moreover, Fredholm and semi-Fredholm Wiener-Hopf operators are one-sided invertible, the corresponding one-sided inverses are known and there is an efficient description of the kernels and cokernels of $W(a)$, $a\in G$.
Now we consider the Wiener-Hopf plus Hankel operators $\sW(a,b)$ acting on the space $L^p(\sR^+)$ and defined by
\begin{equation}\label{WHH}
\sW(a,b):=W(a)+H(b), \quad a,b\in L^\infty(\sR).
\end{equation}
The study of such operators is much more involved. Nevertheless, Fredholm properties of \eqref{WHH} can be established either directly or by passing to a Wiener-Hopf operator with a matrix symbol. Thus Roch \emph{et al.}~\cite{RSS:2011} studied the Fredholmness of Wiener-Hopf plus Hankel operators with piecewise continuous generating functions, acting on $L^p$-spaces, $p\in [1,\infty)$. Another approach, called equivalence after extension, has been applied to operators with generating functions from a variety of classes. Nevertheless, in spite of a vast number of publications, it is mainly applied to operators of a special form, namely, to the operators $\sW(a,a)=W(a)+H(a)$ acting on the $L^2$-space. It turns out that the Fredholmness, one-sided invertibility or invertibility of such operators are equivalent to the corresponding properties of the Wiener-Hopf operator $W (a \tilde{a}^{-1})$, so that they can be studied. However, even if an operator $\sW(a,a)$ is invertible, the corresponding inverse is not given (see, e.g. \cite[Corollary 2.2]{CN:2009} for typical results obtained by the method mentioned). If $a\neq b$, then hardly verifiable assumptions concerning the factorization of auxiliary matrix-functions are used. To study Wiener-Hopf plus Hankel operators of the form $I+H(b)$, another method has been employed in \cite{KS:2000, KS:2001}, where the essential spectrum and the index of such operators are determined.
On the other hand, recently the Wiener-Hopf plus Hankel operators \eqref{WHH} have been studied under the assumption that the generating functions $a$ and $b$ satisfy the condition
\begin{equation}\label{eqn1}
a\tilde{a}=b\tilde{b}.
\end{equation}
Thus if $a,b\in G$, then the Coburn-Simonenko Theorem for some classes of operators $\sW(a,b)$ is established \cite{DS:2014b}, and an efficient description of the space $\ker \sW(a,b)$ is obtained \cite{DS:2017a}. The aim of this work is to find conditions for one-sided invertibility, invertibility and generalized invertibility of the operators $\sW(a,b)$ and to provide efficient representations for the corresponding inverses when generating functions $a$ and $b$ satisfy the matching con\-di\-tion~\eqref{eqn1}. Similar problems for Toeplitz plus Hankel operators have been recently discussed in \cite{BE:2013, BE:2017, DS:2014a, DS:2014, DS:2016a, DS:2017}. However, the situation with Wiener-Hopf plus Hankel operators has some special features. Thus the operators here can also be semi-Fredholm -- i.e. in general, they may have infinite kernels and co-kernels. This creates additional difficulties. Therefore, in some cases, the results obtained are not as complete as for Fredholm Toeplitz plus Hankel operators.
This paper is organized as follows. Section~\ref{s2} contains known results on properties of Wiener-Hopf operators, Wiener-Hopf factorization of
functions $g\in G$ such that $g(t)g(-t)=1$, $t\in \sR$ and demonstrates their role in the description of the kernels of Wiener-Hopf plus Hankel operators $W(a)+H(b)$ under the condition \eqref{eqn1}.
In Section \ref{s3}, we establish necessary conditions for one-sided invertibility of the operators $\sW(a,b)$. Section~\ref{sec4} provides sufficient conditions for one-sided invertibility and presents efficient representations for the corresponding inverses. In Section \ref{s5}, we construct generalized inverses for Wiener-Hopf plus Hankel operators. The invertibility conditions presented in Section \ref{sec6} are supported by simple examples.
\section{Auxiliary results\label{s2}}
Let us recall the properties of Wiener-Hopf and Wiener-Hopf plus Hankel operators with generating functions from the algebra $G$. Thus it was shown in \cite{GF:1974} that for invertible functions $g$ the operators $W(g)$ are one-sided invertible. More precisely, if $a\in AP_W$, $c\in\cL$ and $g=a+c$ is invertible in $G$, then the element $a$ is also invertible in $G$. Therefore, the numbers
\begin{align}\label{ind}
\nu(g):=\lim_{l\to\infty}\frac{1}{2l} [\arg g(t)]_{-l}^l,
\quad n(g):=\frac{1}{2\pi} [\arg
(1+a^{-1}(t)c(t))]_{t=-\infty}^\infty,
\end{align}
are correctly defined. Moreover, the function $g$ admits the
factorization of the form
\begin{equation}\label{Eq20}
g(t)=g_-(t) e^{i\nu t}\left (\frac{t-i}{t+i} \right )^n g_+(t), \quad -\infty
<t <\infty,
\end{equation}
where $g_+^{\pm1}\in G^+$, $g_-^{\pm1}\in G^-$, $\nu=\nu(g)$ and
$n=n(g)$.
Let $-\infty<\nu<\infty$ be a real number. On the space $L^p(\sR^+)$ we consider an operator $U_\nu$ defined by
$$
(U_\nu \vp)(t):= \left \{
\begin{array}{ll}
\vp(t-\nu) & \;\text{ if }\; \max(\nu,0) <t, \\[1ex]
0 & \;\text{ if }\; 0\leq t\leq\max(\nu,0).
\end{array}
\right .
$$
It is easily seen that for any $\nu\geq0$, the operator $U_{\nu}$ is left invertible and $U_{-\nu}$ is one of its left-inverses. Moreover, $U_\nu=W(e^{it\nu})$ and $I-U_\nu U_{-\nu}$ is the projection operator
$$
((I-U_\nu U_{-\nu})\vp)(t):= \left \{
\begin{array}{ll}
\vp(t) & \;\text{ if }\; 0< t < \nu, \\[1ex]
0 & \;\text{ if }\; \nu< t<\infty.
\end{array}
\right .
$$
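A finite-dimensional toy model of $U_\nu$ may be helpful (our illustration only): on samples of a function over a window $[0,T]$, $U_\nu$ becomes a right shift with zero fill, and the relations $U_{-\nu}U_\nu=I$ and $I-U_\nu U_{-\nu}=$ projection onto $[0,\nu)$ can be checked directly, provided the sampled function vanishes near the right end of the window so that nothing is lost by truncation.

```python
import numpy as np

def shift(phi, k):
    # discrete analogue of U_k acting on samples over [0, T]:
    # right shift with zero fill for k >= 0, left shift for k < 0
    out = np.zeros_like(phi)
    if k >= 0:
        out[k:] = phi[:len(phi) - k]
    else:
        out[:k] = phi[-k:]
    return out
```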
We also consider operators $V$ and $V^{(-1)}$ defined by
\begin{align*
(V\vp)(t):=\vp(t)-2\int_{0}^t e^{s-t}\vp(s)\,ds, \,
(V^{(-1)}\vp)(t):=\vp(t)-2\int_t^{\infty} e^{t-s}\vp(s)\,ds.
\end{align*}
Set $ V^{(m)}=V^m$ if $m\geq 0$ and
$V^{(m)}=(V^{(-1)})^{-m}$ if $m<0$. It is known that if $m\in\sN$,
then $V^{(-m)} V^{(m)}=I$ \cite[Chapter 7]{GF:1974}, so that for $m>0$ the
operator $P_m:=I- V^{(m)}V^{(-m)}$ is a projection.
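The identity $V^{(-1)}V=I$ can be sanity-checked by quadrature. The sketch below is our own illustration (the truncation length, grid and test function are assumptions): both integral operators are applied on a truncated half-line with the trapezoid rule.

```python
import numpy as np

T, N = 30.0, 6001
t = np.linspace(0.0, T, N)
h = t[1] - t[0]

def cumtrapz0(g):
    """Trapezoid approximations of int_0^{t_i} g(s) ds, i = 0..N-1."""
    seg = (g[1:] + g[:-1]) * h / 2.0
    return np.concatenate(([0.0], np.cumsum(seg)))

def tailtrapz(g):
    """Trapezoid approximations of int_{t_i}^T g(s) ds (T stands in for infinity)."""
    seg = (g[1:] + g[:-1]) * h / 2.0
    return np.concatenate((np.cumsum(seg[::-1])[::-1], [0.0]))

def V(phi):
    # (V phi)(t) = phi(t) - 2 int_0^t e^{s-t} phi(s) ds
    return phi - 2.0 * np.exp(-t) * cumtrapz0(np.exp(t) * phi)

def V_inv(phi):
    # (V^{(-1)} phi)(t) = phi(t) - 2 int_t^infty e^{t-s} phi(s) ds
    return phi - 2.0 * np.exp(t) * tailtrapz(np.exp(-t) * phi)

phi = np.exp(-t)                       # rapidly decaying test function
err = np.max(np.abs(V_inv(V(phi)) - phi))
assert err < 5e-3                      # V^{(-1)} V = I up to quadrature error
```

For $\varphi(t)=e^{-t}$ one can even check by hand that $V\varphi=(1-2t)e^{-t}$ and $V^{(-1)}V\varphi=\varphi$ exactly, so the residual above is pure discretization error.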
The factorization \eqref{Eq20} has been used in the construction of one-sided inverses for the Wiener-Hopf operators $W(g)$.
\begin{theorem}[{\cite{GF:1974}}]\label{thm5} If $g=a+c\in G$, $a\in AP_W, c\in
\cL$, then the operator $W(g)$ is one-sided invertible in
$L^p(\sR^+)$, $1\leq p<\infty$, if and only if $g$ is invertible
in $G$. Moreover, if $g\in G$ is invertible in $G$ and $\nu:=\nu(g)$, $n:=n(g)$, then
\begin{enumerate}[(i)]
\item If $\nu>0$ and $n\geq0$, then the operator $W(g)$ is left invertible
and
\begin{equation}\label{Eq21}
W_l^{-1}(g)=W(g_+^{-1})V^{(-n)} U_{-\nu} W(g_-^{-1})
\end{equation}
is one of its left-inverses.
\item If $\nu>0$ and $n<0$, then the operator $W(g)$ is left invertible
and
\begin{equation}\label{Eq22}
W_l^{-1}(g)=W(g_+^{-1})(I-U_{-\nu} P_{-n} U_{\nu} )^{-1} U_{-\nu} V^{-n}
W(g_-^{-1})
\end{equation}
is one of its left-inverses.
\item If $\nu<0$ and $n\leq0$, then the operator $W(g)$ is right invertible and
\begin{equation}\label{Eq23}
W_r^{-1}(g)=W(g_+^{-1}) V^{-n} U_{-\nu} W(g_-^{-1})
\end{equation}
is one of its right-inverses.
\item If $\nu<0$ and $n>0$, then the operator $W(g)$ is right invertible and one of its right-inverses is
\begin{equation}\label{Eq24}
W_r^{-1}(g)=W(g_+^{-1}) V^{(-n)}U_{-\nu}(I-U_{\nu} P_{n} U_{-\nu} )^{-1}
W(g_-^{-1}),
\end{equation}
where
\begin{equation}\label{Eq25}
(I-U_{\nu} P_{n} U_{-\nu} )^{-1}=\sum_{j=0}^\infty (U_{\nu} P_{n}
U_{-\nu})^j,
\end{equation}
and the series in the right-hand side of \eqref{Eq25} is uniformly
convergent; the inverse operator in \eqref{Eq22} admits an analogous expansion.
\item If $\nu=0$ and $n\leq0$ $(n\geq0)$, then the operator $W(g)$ is right (left) invertible and one of the corresponding inverses has the form
\begin{equation}\label{Eq24a}
W_{r/l}^{-1}(g)=W(g_+^{-1}) V^{(-n)} W(g_-^{-1}).
\end{equation}
\end{enumerate}
\end{theorem}
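The series in \eqref{Eq25} is a Neumann expansion. In finite dimensions the mechanism looks as follows (a minimal matrix sketch of ours, with a randomly generated contraction standing in for $U_{\nu}P_{n}U_{-\nu}$):

```python
import numpy as np

rng = np.random.default_rng(0)
K = rng.standard_normal((5, 5))
K *= 0.4 / np.max(np.abs(np.linalg.eigvals(K)))   # spectral radius now 0.4 < 1

# Truncated Neumann series: sum_j K^j converges to (I - K)^{-1}
S = sum(np.linalg.matrix_power(K, j) for j in range(120))
assert np.allclose(S, np.linalg.inv(np.eye(5) - K))
```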
Let us point out that there is also an efficient description of the kernels of the operators $W(g)$; however, the structure of $\ker W(g)$ depends on the indices $\nu(g)$ and $n(g)$, and it will be recalled later on.
As far as the Wiener-Hopf plus Hankel operators $\sW(a,b):=W(a)+H(b)$ are concerned, we always assume here that the generating functions $a,b$ belong to $G$ and satisfy the matching condition \eqref{eqn1}. In this case, the duo $(a,b)$ is referred to as a matching pair. Moreover, in what follows we will only consider matching pairs $(a,b)$ whose element $a$ is invertible in $G$. Notice that if $\sW(a,b)$ is semi-Fredholm, then $a$ is invertible in $G$, and the matching condition yields the invertibility of $b$ in $G$.
Let us introduce another pair $(c,d)$ with the elements $c$ and $d$
defined by
\begin{equation*}
c:=ab^{-1}= \tilde{a}^{-1}\tilde{b}, \quad d:=a\tilde{b}^{-1}=
\tilde{a}^{-1}b.
\end{equation*}
This duo is called the subordinated pair for $(a,b)$. The functions $c$ and $d$ possess a number of remarkable properties -- e.g. $c\tilde{c}=1=d\tilde{d}$. Following \cite{DS:2014b}, any function $g\in L_\infty(\sR)$ satisfying the condition $g \tilde{g}=1$ is called a matching function. In passing, note that if $(c,d)$ is
the subordinated pair for a matching pair $(a,b)$, then
$(\bar{d},\bar{c})$ is the subordinated pair for the
matching pair $(\bar{a}, \overline{\tilde{b}})$, which
defines the adjoint operator
\begin{equation}\label{cst10}
\sW^*(a,b)=W(\bar{a})+H(\overline{\tilde{b}}),
\end{equation}
for the operator $\sW(a,b)$.
The next proposition comprises results from \cite{DS:2014b,DS:2017a}. For the reader's convenience, they are reformulated in a form suitable for subsequent presentation.
\begin{proposition}\label{p1}
Assume that $g\in G$ is a matching function -- i.e. $g\tilde{g}=1$. Then
\begin{enumerate}[(i)]
\item Under the condition $g_-(0)=1$, the factors $g_+$ and $g_-$ in the factorization~\eqref{Eq20} are uniquely defined -- viz. the factorization takes the form
\begin{equation}\label{cst20}
g(t)=\left(\boldsymbol\sigma(g)\,\tilde{g}_+^{-1}(t)\right)e^{i\nu t}\Big(\frac{t-i}{t+i}\Big)^n g_+(t),
\end{equation}
where $\nu=\nu(g), n=n(g)$, $\boldsymbol\sigma(g)=(-1)^n g(0)$, $\tilde{g}_+^{\pm1}(t)\in G^-$ and $g_-(t)=\boldsymbol\sigma(g)\,\tilde{g}_+^{-1}(t)$.
\item If $\nu<0$ or if $\nu=0$ and $n<0$, then $W(g)$ is right-invertible and the operators $\mathbf{P}_g^{\pm}$,
\begin{equation*}
\mathbf{P}_g^{\pm}:=(1/2)(I\pm JQW^0(g)P) \colon \ker W(g)\to \ker W(g),
\end{equation*}
considered on the kernel of the operator $W(g)$
are complementary projections.
\item If $(c,d)$ is the subordinated pair for a matching pair $(a,b)\in G \times G$ such that the operator $W(c)$ is right-invertible and $W_r^{-1}(c)$ is any right-inverse of $W(c)$, then
\begin{align*}
\vp^+=\vp^+(a,b)&:=\frac{1}{2} (
W_r^{-1}(c)W(\tilde{a}^{-1})- JQW^0(c)P
W_r^{-1}(c)W(\tilde{a}^{-1}))\nn\\
&\quad + \frac{1}{2} JQW^0(\tilde{a}^{-1}),
\end{align*}
is an injective operator from $\ker W(d)$ into $\ker (W(a)+H(b))$.
\item\label{iv} If $(c,d)$ is the subordinated pair for the matching pair $(a,b)$, then
\begin{enumerate}[(a)]
\item If the operator $W(c)\colon L^p(\sR^+)\to L^p(\sR^+)$, $1<p<\infty$ is right-invertible, then
\begin{equation}\label{ker}
\ker(W(a)+H(b))=\vp^+(\im \mathbf{P}^+_d) \dotplus\im \mathbf{P}^-_c.
\end{equation}
\item If the operator $W(d)\colon L^p(\sR^+)\to L^p(\sR^+)$, $1<p<\infty$ is left-invertible, then
\begin{equation}\label{coker}
\coker(W(a)+H(b))=\vp^+(\im \mathbf{P}^+_{\bar{c}}) \dotplus\im
\mathbf{P}^-_{\bar{d}},
\end{equation}
where the operator $\vp^+$ in \eqref{coker} is defined by the matching pair $(\bar{a}, \bar{\widetilde{b}})$.
\end{enumerate}
\item\label{v} Let $\Lambda_j$ be the normalized Laguerre polynomials and the functions $\psi_j$, $j\in \sZ_+$, be defined by
\begin{align}\label{lag}
\psi_j(t)&:= \left \{
\begin{array}{ll}
\sqrt{2}\, e^{-t} \Lambda_j(2t), & \text{ if } t>0,\\
0, & \text{ if } t<0,
\end{array}
\right . \quad j=0,1,\cdots.
\end{align}
Then for $\nu=0$ and $n<0$,
the following systems $\fB_{\pm}(g)$ of functions $W(g_+^{-1})
\psi_{j}$ form bases in the spaces $\im \mathbf{P}^{\pm}_g$:
\begin{enumerate}[(a)]
\item If $n=-2m$, $m\in\sN$, then
\begin{equation*}
\fB_{\pm}(g)=\{W(g_+^{-1}) \left ( \psi_{m-k-1}\mp
\boldsymbol\sigma(g)\psi_{m+k}\right ): k=0,1,\cdots, m-1\},
\end{equation*}
and
\begin{equation}\label{even}
\dim\im \mathbf{P}^{\pm}_g=m.
\end{equation}
\item If $n=-2m-1$, $m\in\sZ_+$, then
\begin{equation*}
\fB_{\pm}(g)=\{W(g_+^{-1})\left ( \psi_{m+k}\mp
\boldsymbol\sigma(g)\psi_{m-k}\right ): k=0,1,\cdots, m\}\setminus \{0\},
\end{equation*}
and
\begin{equation}\label{odd}
\dim\im \mathbf{P}^{\pm}_g=m+ \frac{1\mp\boldsymbol\sigma(g)}{2}.
\end{equation}
\end{enumerate}
\end{enumerate}
\end{proposition}
\begin{remark}
If $\nu<0$, the corresponding spaces $\im \mathbf{P}^{\pm}_g$ are also described in~\cite{DS:2017a}. However, these representations are not used in what follows, so they are not included in the above proposition.
\end{remark}
\section{Necessary conditions for one-sided\\ invertibility\label{s3}}
From now on we always assume without mentioning it specifically that the generating functions $a$ and $b$ constitute a matching pair. Moreover, let us also recall that if an operator $W(a)+H(b)$, $a,b\in G$ acting in the space $L^p(\sR^+)$, $p\in (1,\infty)$ is Fredholm or semi-Fredholm, then the generating function $a$ is invertible in $G$. Therefore, the elements $c$ and $d$ of the subordinated pair $(c,d)$ are also invertible in $G$ and the Wiener-Hopf operators $W(c)$ and $W(d)$ are Fredholm or semi-Fredholm. Let $\nu_1:=\nu(c)$, $n_1:=n(c)$, $\nu_2:=\nu(d)$, and $n_2:=n(d)$ be the corresponding indices \eqref{ind} of the functions $c$ and $d$. We start with necessary conditions for one-sided invertibility of the operators $W(a)+H(b)$ in the case where at least one of the indices $\nu_1$, $\nu_2$ is not equal to zero. The situation $\nu_1=\nu_2=0$ will be considered later on.
\begin{theorem}\label{thm3.1}
Let $a,b\in G$ and the operator $W(a)+H(b)$ be one-sided invertible in $L^p(\sR^+)$. Then:
\begin{enumerate}[(i)]
\item Either $\nu_1 \nu_2\geq 0$ or $\nu_1>0$ and $\nu_2<0$.
\item If $\nu_1=0$ and $\nu_2>0$, then $n_1 >-1$ or $n_1=-1$ and $\boldsymbol\sigma(c)=-1$.
\item If $\nu_1<0$ and $\nu_2=0$, then $n_2<1$ or $n_2=1$ and $\boldsymbol\sigma(d)=-1$.
\end{enumerate}
\end{theorem}
\textit{Proof}
\emph{(i)} Assume that $\nu_1 \nu_2 < 0$. If $\nu_1<0$ and $\nu_2>0$, then the operator $W(c)$ is right invertible whereas $W(d)$ is left invertible. Moreover, the kernel of the operator $W(c)$ and cokernel of $W(d)$ are infinite-dimensional \cite{GF:1974} and so are the spaces $\im \mathbf{P}^{-}_c$ and $\im \mathbf{P}^{-}_{\overline{d}}$ \cite[Theorems 2.4 and 2.5]{DS:2017a}. Taking into account Proposition~\ref{p1}\ref{iv}, we obtain that $\ker (W(a)+H(b))\neq \{0\}$ and $\coker (W(a)+H(b))\neq \{0\}$, hence the operator $W(a)+H(b)$ is not one-sided invertible.
\emph{(ii)} Let $\nu_1=0$ and $\nu_2>0$. By Proposition~\ref{p1}\ref{iv}, the operator $W(a)+H(b)$ has a non-zero cokernel. If, in addition, $n_1<-1$ or $n_1=-1$ and $\boldsymbol\sigma(c)=1$, then \eqref{even} and \eqref{odd} show that in both cases $\im \mathbf{P}^-_c \neq\{0\}$. Therefore, according to \eqref{ker}, the operator $W(a)+H(b)$ also has a non-trivial kernel and is not one-sided invertible.
The assertion (iii) can be proved analogously.
\qed
Let us briefly discuss the case where $\nu_1>0$ and $\nu_2<0$. As was mentioned in \cite{DS:2017a}, in this situation it is not clear whether the corresponding operator is even normally solvable. Nevertheless, the kernel and cokernel of $W(a)+H(b)$ can still be described. This makes it possible to establish necessary conditions for one-sided invertibility. However, they are not as transparent as before and, in addition to the relations between the indices $\nu_1, \nu_2, n_1, n_2$, the corresponding conditions can involve information about the factors in the Wiener-Hopf factorizations of the subordinated functions $c$ and $d$. We consider one of the possible cases.
\begin{theorem}\label{thm3.2}
Let $\nu_1>0$, $\nu_2<0$, $n_1=n_2=0$, and let $\fN_\nu^p$, $\nu>0$, denote the set of functions $f\in L^p(\sR^+)$ such that $f(t)=0$ for $t\in (0,\nu)$.
\begin{enumerate}[(i)]
\item If the operator $W(a)+H(b)\colon L^p(\sR^+)\to L^p(\sR^+)$, $1<p<\infty$ is invertible from the left, then
\begin{equation}\label{leftinv}
\vp^+(\mathbf{P}^+_d)\cap \fN_{\nu_1/2}^p= \{0\},
\end{equation}
where $\vp^+=\vp^+(ae^{-i\nu_1 t/2}, be^{i\nu_1 t/2})$.
\item If the operator $W(a)+H(b)\colon L^p(\sR^+)\to L^p(\sR^+)$, $1<p<\infty$ is invertible from the right, then
\begin{equation}\label{rightinv}
\vp^+(\mathbf{P}^+_{\overline{c}})\cap \fN_{-\nu_2/2}^p= \{0\},
\end{equation}
where $\vp^+=\vp^+(\overline{a}e^{i\nu_2 t/2},\overline{\tilde{b}}e^{-i\nu_2 t/2})$.
\end{enumerate}
\end{theorem}
\textit{Proof} Let $\nu_1>0$, $\nu_2<0$, $n_1=n_2=0$ and $W(a)+H(b)$ be a left-invertible operator. It can be represented in the form
\begin{equation}\label{eqn3.1}
W(a)+ H(b)= \left ( W \left ( ae^{-i\nu_1 t/2} \right )+ H\left
(
be^{i\nu_1 t/2} \right ) \right ) W \left ( e^{i\nu_1 t/2} \right
).
\end{equation}
Direct computations show that $( ae^{-i\nu_1 t/2}, be^{i\nu_1 t/2})$ is a matching pair with the subordinated pair $(c_1,d_1)=(c e^{-i\nu_1 t}, d)$. Since $\nu(c_1)=0$ and $n(c_1)=n_1=0$, the kernel of the operator $W(c_1)$ is trivial. Consequently, $\im \mathbf{P}^-_{c_1}=\{0\}$, and the relation \eqref{ker} yields
\begin{equation*}
\ker\left ( W \left ( ae^{-i\nu_1 t/2} \right )+ H\left
(
be^{i\nu_1 t/2} \right ) \right )=\vp^+(\im \mathbf{P}^+_d)
\end{equation*}
with the operator $\vp^+=\vp^+(ae^{-i\nu_1 t/2}, be^{i\nu_1 t/2})$.
Therefore, taking into account \eqref{eqn3.1}, we obtain
\begin{equation*}
\ker (W(a)+H(b)) =\{\eta=W(e^{-i\nu_1t/2})u: u\in \vp^+(\mathbf{P}^+_d)\cap \im W(e^{i\nu_1t/2}) \}.
\end{equation*}
If the operator $W(a)+H(b)$ is left invertible, its kernel consists of the zero element only. However, since $\im W(e^{i\nu_1t/2})=\fN_{\nu_1/2}^p$ and
\begin{equation*}
\ker W(e^{-i\nu_1t/2})\cap (\vp^+(\mathbf{P}^+_d)\cap \fN_{\nu_1/2}^p)=\{0\},
\end{equation*}
the assumption
\begin{equation*}
\vp^+(\mathbf{P}^+_d)\cap \fN_{\nu_1/2}^p \neq \{0\}
\end{equation*}
yields the non-triviality of the kernel of $W(a)+H(b)$, so that \eqref{leftinv} holds.
The second assertion in Theorem \ref{thm3.2} comes from the first one by passing to the adjoint operator (see \eqref{cst10}).
\qed
\begin{remark}\label{rem0}
Theorem \ref{thm3.2} raises an interesting question: Do there exist invertible operators $W(a)+H(b)$ such that
\begin{equation*}
\dim \coker W(c)=\dim\ker W(d)=\infty?
\end{equation*}
Note that in the case $\nu(c)=\nu(d)=0$, for any prescribed natural number $N$ one can probably find invertible operators $W(a)+H(b)$ for which
\begin{equation}\label{large}
|\ind W(c)| > N, \quad |\ind W(d)| >N
\end{equation}
Note that the set of Toeplitz plus Hankel operators possesses the property~\eqref{large} -- cf. \cite{DS:2019a}, but for Wiener-Hopf plus Hankel operators, this problem requires a separate study.
\end{remark}
\begin{remark}\label{rem1}
Although the description of the spaces $\im \mathbf{P}^+_d$ and $\im \mathbf{P}^+_{\overline{c}}$ is available \cite{DS:2017a}, the verification of the conditions \eqref{leftinv}-\eqref{rightinv} is not trivial. It depends on the properties of Wiener-Hopf operators constituting the operator $\vp^+$ and may require a lot of effort.
\end{remark}
\begin{remark}\label{rem2}
If $\nu_1>0$, $\nu_2<0$ but $n_1\neq 0$ and/or $n_2\neq 0$, the necessary conditions of one-sided invertibility have the same form \eqref{leftinv} and \eqref{rightinv}, but the representation \eqref{eqn3.1}, the spaces $\fN_\nu^p$ and the operators $\vp^+$ should be redefined accordingly.
\end{remark}
We now consider the situation when both indices $\nu_1$ and $\nu_2$ vanish.
Let us start with an auxiliary result.
\begin{lemma}\label{lem2}
If $(a,b)\in G\times G$ is a matching pair with the subordinated pair $(c,d)$, then for the factorization signatures of the functions $c$ and $d$ the equation
\begin{equation}\label{sign}
\boldsymbol\sigma(c)=\boldsymbol\sigma(d)
\end{equation}
holds and the indices $n_1$ and $n_2$ are simultaneously odd or even.
\end{lemma}
\textit{Proof}
Let $n(a)$ and $n(b)$ be the corresponding indices \eqref{ind} for the functions $a$ and $b$, respectively. Then
\begin{equation}\label{ccc}
n_1=n(c)=n(a b^{-1})=n(a)-n(b), \quad n_2=n(d)=n(a \tilde{b}^{-1})=n(a)+n(b).
\end{equation}
Therefore,
\begin{align*}
\boldsymbol\sigma(c) & = (-1)^{n(a)-n(b)}c(0)= (-1)^{n(a)-n(b)}a(0)b^{-1}(0), \\
\boldsymbol\sigma(d) & = (-1)^{n(a)+n(b)}d(0)= (-1)^{n(a)+n(b)}a(0)\tilde{b}^{-1}(0),
\end{align*}
and since $b(0)=\tilde{b}(0)$ and the numbers $n(a)-n(b)$ and $n(a)+n(b)$ are simultaneously odd or even, the equation \eqref{sign} follows.
Moreover, using the relations \eqref{ccc} again, we obtain
\begin{equation*}
n_1+n_2=2 n(a),
\end{equation*}
so that $n_1$ has the same parity as $n_2$.
\qed
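The bookkeeping in this proof is elementary and can be checked mechanically. In the sketch below (the values of $a(0)$ and $b(0)$ are our own illustrative choices), the relations $n_1=n(a)-n(b)$ and $n_2=n(a)+n(b)$ force equal parity and equal signatures:

```python
# n(c) = n(a) - n(b) and n(d) = n(a) + n(b); with b(0) = btilde(0) this
# forces equal parity and equal signatures.  The values of a(0), b(0) below
# are illustrative, not taken from the text.
a0, b0 = 2.0, -0.5
for na in range(-3, 4):
    for nb in range(-3, 4):
        n1, n2 = na - nb, na + nb
        assert (n1 - n2) % 2 == 0              # n1 and n2 have the same parity
        assert n1 + n2 == 2 * na               # as in the proof
        sigma_c = (-1) ** n1 * a0 / b0         # sigma(c) = (-1)^{n1} c(0)
        sigma_d = (-1) ** n2 * a0 / b0         # sigma(d) = (-1)^{n2} d(0)
        assert sigma_c == sigma_d              # equation (sign)
```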
We start with the left invertibility of the operators $\sW(a,b)$.
\begin{theorem}\label{thm3.3}
If $a,b\in G$, $\nu_1=\nu_2=0$, $n_2\geq n_1$ and the operator $W(a)+H(b)$ is invertible from the left, then the index $n_1$ satisfies the inequality
\begin{equation*}
n_1\geq -1
\end{equation*}
and if $n_1=-1$, then $\boldsymbol\sigma(c)=-1$ and $n_2>n_1$.
\end{theorem}
\textit{Proof}
If $n_1<-1$, then the operator $W(c)$ is right invertible. By Proposition~\ref{p1}\ref{v}, the image of the projection $\mathbf{P}^-_c$ contains non-zero elements, and by \eqref{ker} so does $\ker (W(a)+H(b))$. This contradicts the left invertibility of the operator $W(a)+H(b)$, hence $n_1\geq -1$.
Assume now that $n_1=-1$ and $\boldsymbol\sigma(c)=1$. Using \eqref{ker} and Proposition~\ref{p1}\ref{v} again, we note that $\im \mathbf{P}_c^-\neq \{0\}$, so that the operator $W(a)+H(b)$ has a non-trivial kernel and, therefore, it is not left-invertible. Hence $\boldsymbol\sigma(c)=-1$. Assuming, in addition, that $n_2=-1$ and $\ker (W(a)+H(b))=\{0\}$, we obtain
\begin{equation*}
\boldsymbol\sigma(c)=-1, \quad \boldsymbol\sigma(d)=1,
\end{equation*}
which is not possible by Lemma \ref{lem2}. Hence, $n_2>n_1$.
\qed
\begin{theorem}\label{thm3.4}
If $a,b\in G$, $\nu_1=\nu_2=0$, $n_1>n_2$ and the operator $W(a)+H(b)$ is invertible from the left, then the inequality
\begin{equation*}
n_1\geq 1
\end{equation*}
holds. Moreover, the index $n_2$ is either non-negative or $n_2<0$ and $n_1\geq -n_2$.
\end{theorem}
\textit{Proof}
If $n_1\leq0$, then $n_2\leq -2$ -- cf. Lemma~\ref{lem2} -- and $W(a)+H(b)$ has a non-trivial kernel, which contradicts the left invertibility of this operator. On the other hand, if $1\leq n_1$ and $0\leq n_2$, there is nothing to prove, so we proceed with the case $n_2<0$. By Lemma~\ref{lem2}, the numbers $n_1$ and $n_2$ are either both even or both odd. In both cases the proof of the fact that the indices $n_1$ and $n_2$ satisfy the inequality $n_1\geq -n_2$ is similar, but each situation should be examined separately. Here we only analyse the case where $n_1$ and $n_2$ are odd. Considering $\ind W(c):=\mathbf{k}_1=-n_1$, we choose $k_1\in \sZ$ such that
\begin{equation*}
2k_1 + \mathbf{k}_1=1.
\end{equation*}
Then, according to \cite[Theorem 3.2]{DS:2017a}, we have
\begin{equation}\label{eqnKer}
\begin{aligned}
& \ker(W(a)+H(b)) = \left \{ W\left ( \left ( \frac{t-i}{t+i} \right)^{-k_1}\right )u :\right . \\
& \left . u\in \left \{ \frac{1+\boldsymbol\sigma(c)}{2}W(c_+^{-1})\{\sC\psi_0\} \dotplus \vp^+(\im \mathbf{P}^+_d) \right \} \cap \im W\left (\! \left ( \!\frac{t-i}{t+i} \right)^{k_1}\!\right )\!\right \},
\end{aligned}
\end{equation}
where the operator $\vp^+=\vp^+(a_1,b_1)$ is defined by the matching pair
\begin{equation*}
(a_1,b_1)=\left (a(t)\left(\frac{t-i}{t+i} \right)^{-k_1}, b(t)\left(\frac{t-i}{t+i} \right)^{k_1}\right )
\end{equation*}
and $c_+$ is the plus factor in the Wiener-Hopf factorization \eqref{cst20} of the function $c$. The function $\psi_0$ is defined in \eqref{lag}. Using another representation of the Laguerre polynomials -- cf.~\cite[Eq.~(2.5)]{DS:2017a} -- one can show that
\begin{equation*}
\im W\left ( \left ( \frac{t-i}{t+i} \right)^{k_1}\right ) = \mathrm{clos}\, \mathrm{span}_{L^p(\sR^+)} \left \{ \psi_{k_1}, \psi_{k_1+1}, \cdots \right \}.
\end{equation*}
Thus if a function
$$
u \in \im W\left ( \left ( \frac{t-i}{t+i} \right)^{k_1}\right )
$$
is expanded in a Fourier series of the Laguerre polynomials $\psi_j, j=0,1,\cdots$, its first $k_1$ Fourier-Laguerre coefficients are equal to zero. If we now assume that the dimension of the subspace
\begin{equation*}
\fS(c,d):= \left \{ \frac{1+\boldsymbol\sigma(c)}{2}W(c_+^{-1})\{\sC\psi_0\} \dotplus \vp^+(\im \mathbf{P}^+_d) \right \}
\end{equation*}
is greater than $k_1$, then there is a non-zero function $u_0\in \fS(c,d)$, the first $k_1$ Fourier-Laguerre coefficients of which vanish. Hence, \eqref{eqnKer} shows that the kernel of $W(a)+H(b)$ contains a non-zero element. This contradicts the left invertibility of the operator $W(a)+H(b)$. Therefore,
\begin{equation}\label{dimen}
k_1 \geq \dim \fS(c,d),
\end{equation}
and taking into account the Eq.~\eqref{odd}, we rewrite the inequality \eqref{dimen} as
\begin{equation}\label{aaa}
k_1 \geq \frac{1+\boldsymbol\sigma(c)}{2}+ k_2,
\end{equation}
where
\begin{equation*}
k_2= r +\frac{1-\boldsymbol\sigma(d)}{2}
\end{equation*}
and $-n_2=2r+1$. Since $k_1=(1-\mathbf{k}_1)/2=(1+n_1)/2$, the inequality \eqref{aaa} takes the form
\begin{equation*}
\frac{1 + n_1}{2}\geq \frac{1 +\boldsymbol\sigma(c)}{2} + \frac{-n_2-1}{2} +\frac{1 -\boldsymbol\sigma(d)}{2}
\end{equation*}
or
\begin{equation*}
n_1\geq -n_2 +\boldsymbol\sigma(c)-\boldsymbol\sigma(d).
\end{equation*}
Since $\boldsymbol\sigma(c)=\boldsymbol\sigma(d)$ by Lemma~\ref{lem2}, the proof is completed.
\qed
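The final reduction of the inequality \eqref{aaa} to $n_1\geq -n_2$ is pure arithmetic and can be verified exhaustively over a range of odd indices (a sketch of ours):

```python
# k1 = (1 + n1)/2, k2 = (-n2 - 1)/2 + (1 - s)/2 with s = sigma(c) = sigma(d);
# the inequality k1 >= (1 + s)/2 + k2 is equivalent to n1 >= -n2.
for n1 in range(1, 12, 2):                 # odd n1 > 0
    for n2 in range(-11, 0, 2):            # odd n2 < 0
        for s in (-1, 1):
            k1 = (1 + n1) / 2
            k2 = (-n2 - 1) / 2 + (1 - s) / 2
            assert (k1 >= (1 + s) / 2 + k2) == (n1 >= -n2)
```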
Thus Theorems \ref{thm3.3} and \ref{thm3.4} provide necessary conditions for the left invertibility of the operators $\sW(a,b)=W(a)+H(b)$. Passing to right-invertible operators, one can recall the simple fact that the operator $\sW(a,b)$ is right invertible if and only if the adjoint operator $\sW^*(a,b)$ is left invertible. Moreover, relation \eqref{cst10} shows that
\begin{equation*}
\sW^*(a,b)=\sW(\bar{a}, \overline{\tilde{b}}).
\end{equation*}
We note that $(\bar{a}, \overline{\tilde{b}})$ is also a matching pair with the subordinated pair \linebreak $(c_1,d_1)=(\bar{d}, \bar{c})$, so that
\begin{equation*}
\begin{aligned}
&\nu(c_1)=\nu(\bar{d})=-\nu_2, \quad\nu(d_1)= \nu(\bar{c})=-\nu_1,\\
&n(c_1)= n(\bar{d})=-n_2, \quad n(d_1)= n(\bar{c})=-n_1,\\
&\boldsymbol\sigma(c_1)=\boldsymbol\sigma(d), \quad \boldsymbol\sigma(d_1)=\boldsymbol\sigma(c).
\end{aligned}
\end{equation*}
Now Theorems \ref{thm3.3} and \ref{thm3.4} can be used to write the necessary conditions for the right invertibility of the operators $W(a)+H(b)$. Let us just formulate the corresponding results.
\begin{theorem}\label{thm3.6}
Let $a,b\in G$, $\nu_1=\nu_2=0$, $n_1\leq n_2$ and let the operator $W(a)+H(b)$ be invertible from the right. Then
\begin{equation*}
n_2\leq 1
\end{equation*}
and if $n_2=1$, then $\boldsymbol\sigma(d)=-1$ and $n_1<n_2$.
\end{theorem}
\begin{theorem}\label{thm3.7}
Let $a,b\in G$, $\nu_1=\nu_2=0$, $n_1>n_2$ and let the operator $W(a)+H(b)$ be invertible from the right. Then the inequality
\begin{equation*}
n_2\leq -1
\end{equation*}
holds. Moreover, the index $n_1$ is either non-positive, or $n_1>0$ and $n_1\leq -n_2$.
\end{theorem}
\section{Sufficient conditions for one-sided\\ invertibility and one-sided inverses\label{sec4}}
Our next goal is to establish sufficient conditions for one-sided invertibility of the operators $W(a)+H(b)$. In fact, many necessary conditions above are also sufficient ones.
\begin{theorem}\label{3.10} Let $a,b\in G$ and let the indices $\nu_1, \nu_2, n_1$ and $n_2$ satisfy one of the following conditions:
\begin{enumerate}[(i)]
\item $\nu_1<0$ and $\nu_2<0$.
\item $\nu_1>0$, $\nu_2<0$, $n_1=n_2=0$, the operator $W(a)+H(b)$ is normally solvable and satisfies the condition \eqref{rightinv}.
\item $\nu_1<0$, $\nu_2=0$ and either $n_2<1$, or $n_2=1$ and $\boldsymbol\sigma(d)=-1$.
\item $\nu_1=0$, $n_1\leq0$ and $\nu_2<0$.
\item $\nu_1=0$, $\nu_2=0$ and either
\begin{enumerate}[(a)]
\item $n_1\leq0$, $n_2<1$; or
\item $n_1\leq 0$, $n_2=1$ and $\boldsymbol\sigma(d)=-1$.
\end{enumerate}
\end{enumerate}
Then the operator $W(a)+H(b)\colon L^p(\sR^+)\to L^p(\sR^+)$, $1<p<\infty$ is right invertible.
\end{theorem}
\textit{Proof}
By Proposition \ref{p1}, each condition in Theorem~\ref{3.10} ensures that
\begin{equation*}
\coker (W(a)+H(b))=\{0\}.
\end{equation*}
Hence the operator $W(a)+H(b)$ is invertible from the right.
\qed
Sufficient conditions for the left invertibility of the operators $W(a)+H(b)$ can be obtained from Theorem \ref{3.10} by passing to the adjoint operators and we leave it to the reader.
In the remaining part of this section we deal with the construction of right inverses for the operators $W(a)+H(b)$. Recall that one-sided inverses of Wiener-Hopf operators can be easily determined from the Wiener-Hopf factorizations of the corresponding generating functions -- cf.~Theorem~\ref{thm5}. However, finding inverses for Wiener-Hopf plus Hankel operators is a much more difficult problem, and to the best of our knowledge there has so far been no efficient representation of the corresponding inverses even for the simplest pairs of generating functions. We now establish formulas for the left and right inverses of the operators $W(a)+H(b)$ in the case of matching generating functions.
Let us assume that the operators $W(c)$ and $W(d)$ are invertible from the same side. This condition is not necessary for one-sided invertibility, and the corresponding inverses can also be constructed even if it is not satisfied.
\begin{theorem}\label{t5}
Let $(a,b)\in G \times G$ be a matching pair such that the
operators $W(c)$ and $W(d)$ are invertible from the right. Then the
operator $W(a)+H(b)$ is also right invertible and one of its right inverses has the form
\begin{equation}\label{EqRI}
B:= (I - H(\tilde{c})) \mathbf{A} + H(a^{-1})W_r^{-1}(d),
\end{equation}
where $\mathbf{A}=W_r^{-1}(c) W(\tilde{a}^{-1})W_r^{-1}(d)$.
\end{theorem}
\textit{Proof}
The proof of this result uses equations \eqref{cst4}. Consider the product
$(W(a)+H(b))B$,
\begin{equation}\label{EqN17}
\begin{aligned}
&(W(a)+H(b))B =\\
&\quad (W(a)+H(b))(I - H(\tilde{c})) \mathbf{A} +
(W(a)+H(b))H(a^{-1})W_r^{-1}(d).
\end{aligned}
\end{equation}
It follows from \eqref{cst4} that
\begin{equation*}
\begin{aligned}
H(b)H(\tilde{c})&=W(bc)-W(b)W(c)
=W(a)-W(b)W(c), \\
W(a)H(\tilde{c})&=H(a\tilde{c})-H(a)W(c)=H(b)-
H(a)W(c).
\end{aligned}
\end{equation*}
Therefore, the first product in the right-hand side of \eqref{EqN17} can be rewritten as
\begin{equation}
\begin{aligned}\label{EqN3}
&\quad(W(a)+H(b)) (I-H(\tilde{c}))\mathbf{A} =
(W(b)W(c)+H(a)W(c))\mathbf{A} \\
&=(W(b)+H(a))W(c)\mathbf{A}= (W(b)+H(a))W(c)W_r^{-1}(c)
W(\tilde{a}^{-1})W_r^{-1}(d) \\
& =W(b)W(\tilde{a}^{-1})W_r^{-1}(d)+ H(a)
W(\tilde{a}^{-1})W_r^{-1}(d).
\end{aligned}
\end{equation}
Analogously,
\begin{equation*}
\begin{aligned}
W(a)H(a^{-1})&=H(aa^{-1})-H(a)W(\tilde{a}^{-1})=-H(a)W(\tilde{a}^{-1}), \\
H(b)H(a^{-1})&=W(b\tilde{a}^{-1})-W(b)W(\tilde{a}^{-1})
=W(d)-W(b)W(\tilde{a}^{-1}),
\end{aligned}
\end{equation*}
and the second product in the right-hand side of \eqref{EqN17} has the form
\begin{equation}\label{EqN4}
\begin{aligned}
&\quad (W(a)+H(b))H(a^{-1})W_r^{-1}(d)\\
&=-H(a)W(\tilde{a}^{-1})W_r^{-1}(d)
+
W(d)W_r^{-1}(d)-W(b)W(\tilde{a}^{-1})W_r^{-1}(d)\\
&=I-H(a)W(\tilde{a}^{-1})W_r^{-1}(d)-W(b)W(\tilde{a}^{-1})W_r^{-1}(d).
\end{aligned}
\end{equation}
Combining \eqref{EqN3} and \eqref{EqN4}, one obtains
$$
(W(a)+H(b))B=I,
$$
hence $B$ is a right inverse for the Wiener-Hopf plus
Hankel operator $W(a)+H(b)$.
\qed
\begin{corollary}\label{cor2}
Let $a,b\in G$ and indices $\nu_1, \nu_2, n_1$ and $n_2$ satisfy one of the following conditions:
\begin{enumerate}[(i)]
\item $\nu_1<0$ and $\nu_2<0$.
\item $\nu_1<0$, $\nu_2=0$ and $n_2\leq 0$.
\item $\nu_1=0$, $n_1\leq0$ and $\nu_2<0$.
\end{enumerate}
Then $W(a)+H(b)$ is invertible from the right and one of its right inverses can be constructed by formula \eqref{EqRI}.
\end{corollary}
\begin{example}
Let us consider the operator
\begin{equation}\label{Eq26}
\sW(\nu_1,\nu_2)=W(e^{i\nu_1t})+H(e^{i\nu_2t}),\quad t\in \sR,
\end{equation}
where $\nu_1$ and $\nu_2$ are real numbers such that
\begin{align}
\nu_1-\nu_2\leq 0, \label{Eq27a}\\
\nu_1+\nu_2\leq 0. \label{Eq27b}
\end{align}
In passing, note that the conditions \eqref{Eq27a}-\eqref{Eq27b} are equivalent to the inequality
\begin{equation*}
\nu_1\leq -|\nu_2|,
\end{equation*}
so that $\nu_1\leq0$. Consider now the generating functions
$a(t)=e^{i\nu_1t}$ and $b(t)=e^{i\nu_2t}$. They satisfy the matching
conditions \eqref{eqn1}, namely,
$$
a(t)a(-t)=b(t)b(-t)=1.
$$
The elements $c$ and $d$ of the subordinated pair for
matching pair $(e^{i\nu_1t},e^{i\nu_2t})$ are
$$
c(t)=e^{i(\nu_1-\nu_2)t}, \quad d(t)=e^{i(\nu_1+\nu_2)t}.
$$
Taking into account the conditions \eqref{Eq27a}-\eqref{Eq27b}, we observe that the corresponding Wiener-Hopf operators $W(c)$, $W(d)$ are right invertible and have infinite-dimensional kernels.
In order to construct a right inverse of the
operator \eqref{Eq26} one can use Theorem~\ref{t5}.
Let us recall simple properties of Wiener-Hopf
and Hankel operators with exponential generating functions. For
the generating function $a(t)=e^{i\nu t}$ one has:
\begin{enumerate}[(i)]
\item If $\nu \leq 0$, then the operator $W(e^{i\nu t})$ is
right invertible and one of its right inverses is
$$
W^{-1}_r (e^{i\nu t})=W(e^{-i\nu t}).
$$
\item If $\nu \geq 0$, then the operator $W(e^{i\nu t})$ is
left invertible and one of its left inverses is
$$
W^{-1}_l (e^{i\nu t})=W(e^{-i\nu t}).
$$
\item If $\nu<0$, then $H(e^{i\nu t})=0$.
\end{enumerate}
Therefore,
\begin{equation*}
W_r^{-1}(c)=W(e^{-i(\nu_1-\nu_2)t}), \quad
W_r^{-1}(d)=W(e^{-i(\nu_1+\nu_2)t}).
\end{equation*}
Thus the operator \eqref{Eq26} satisfies the assumptions of Theorem \ref{t5}. In order to write the corresponding right inverse of $W(a)+H(b)$, we first determine the operator~$\mathbf{A}$. Simple computations show that
$$
\mathbf{A}=W(e^{-i(\nu_1-\nu_2)t}) W(e^{-i\nu_2t}).
$$
Therefore the right inverse \eqref{EqRI} for the operator
\eqref{Eq26} has the form
\begin{align*}
(W(e^{i\nu_1t})+H(e^{i\nu_2t}))_r^{-1}=&
(I-H(e^{-i(\nu_1-\nu_2)t}))W(e^{-i(\nu_1-\nu_2)t})
W(e^{-i\nu_2t}) \\
& + H(e^{-i\nu_1t})W(e^{-i(\nu_1+\nu_2)t}).
\end{align*}
Moreover, using formulas \eqref{cst4}, one obtains
\begin{equation*}
H(e^{-i(\nu_1-\nu_2)t})W(e^{-i(\nu_1-\nu_2)t})=0, \quad H(e^{-i\nu_1t})W(e^{-i(\nu_1+\nu_2)t})= H(e^{i\nu_2t}),
\end{equation*}
and the operator $(W(e^{i\nu_1t})+H(e^{i\nu_2t}))_r^{-1}$ can be
finally written as
\begin{equation*}
(W(e^{i\nu_1t})+H(e^{i\nu_2t}))_r^{-1}=
W(e^{-i(\nu_1-\nu_2)t})W(e^{-i\nu_2t}) + H(e^{i\nu_2t}).
\end{equation*}
In particular, if $\nu_2\leq0$, then $H(e^{i\nu_2t})=0$ and the right inverse reduces to $W(e^{-i\nu_1t})$.
\end{example}
We now construct a left inverse for the operator $W(a)+H(b)$.
\begin{theorem}\label{t4.4}
Let $(a,b)\in G \times G$ be a matching pair such that the
operators $W(c)$ and $W(d)$ are invertible from the left. Then the
operator $\sW(a,b)=W(a)+H(b)$ is also left-invertible and one of its left-inverses has the form
\begin{equation}\label{EqLI}
\sW_l(a,b)= \mathbf{C}(I - H(\tilde{d})) + W_l^{-1}(c)H(\tilde{a}^{-1}),
\end{equation}
where $\mathbf{C}=W_l^{-1}(c) W(\tilde{a}^{-1})W_l^{-1}(d)$.
\end{theorem}
\textit{Proof}
Recalling that the adjoint operator $\sW^*(a,b)$ to the operator $W(a)+H(b)$ can be identified with the operator
\begin{equation*}
\sW^*(a,b)=W(a_1)+H(b_1),\quad a_1=\bar{a}, \quad b_1=\overline{\tilde{b}},
\end{equation*}
we note that $(a_1,b_1)$ is a matching pair with the subordinated pair $(c_1,d_1)=(\bar{d}, \bar{c})$. Since $W(c_1)=W(\bar{d})$ and $W(d_1)=W(\bar{c})$ are invertible from the right, the operator $\sW^*(a,b)$ is also right-invertible by Theorem \ref{t5} and according to \eqref{EqRI}, one of its right inverses can be written as
\begin{equation} \label{RInvW}
\begin{aligned}
(\sW^*(a,b))_r^{-1}&=(I-H(\tilde{c}_1)) \textbf{A}_1 +H(a_1^{-1})W_r^{-1}(d_1)\\
&=(I-H(\overline{\tilde{d}})) \textbf{A}_1 +H(\bar{a}^{-1})W_r^{-1}(\bar{c}),
\end{aligned}
\end{equation}
where
\begin{equation}\label{A_1}
\textbf{A}_1=W_r^{-1}(c_1) W(\tilde{a}^{-1}_1) W_r^{-1}(d_1)=W_r^{-1}(\bar{d}) W(\overline{\tilde{a}}^{-1}) W_r^{-1}(\bar{c}).
\end{equation}
The left inverse of the operator $\sW(a,b)$ can now be obtained by computing the adjoint of the operator $(\sW^*(a,b))_r^{-1}$. Since for any right-invertible operator $A$ one has
\begin{equation*}
\left ( A_r^{-1} \right )^* =\left ( A^* \right )_l^{-1},
\end{equation*}
we can use the relations
\begin{equation*}
W^*(g)=W(\bar{g}), \quad H^*(g)=H(\overline{\tilde{g}}), \quad g\in G,
\end{equation*}
to obtain the representation \eqref{EqLI} from \eqref{RInvW}-\eqref{A_1}.
\qed
\section{Generalized invertibility of Wiener-Hopf\\ plus Hankel operators\label{s5}}
An operator $A$ is called generalized invertible if there exists an
operator $A_g^{-1}$, referred to as a generalized inverse for $A$, such that
$$
A A_g^{-1} A =A.
$$
If $A_g^{-1}$ is a generalized inverse for the operator $A$ and
the equation
\begin{equation}\label{EqG}
Ax=y
\end{equation}
is solvable, then the element $x_0=A_g^{-1} y$ is a solution of
equation \eqref{EqG}.
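In finite dimensions the Moore-Penrose pseudoinverse provides a standard example of a generalized inverse; the sketch below (our own illustration) checks the defining identity and the solvability remark:

```python
import numpy as np

# The Moore-Penrose pseudoinverse of a rank-deficient matrix is one
# concrete generalized inverse: A @ Ag @ A = A although A is not invertible.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])           # rank one
Ag = np.linalg.pinv(A)
assert np.allclose(A @ Ag @ A, A)

# If A x = y is solvable, then x0 = Ag y solves it.
y = A @ np.array([1.0, -1.0, 0.5])        # y lies in the range of A by construction
x0 = Ag @ y
assert np.allclose(A @ x0, y)
```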
Our next task is to determine generalized inverses for Wiener-Hopf plus Hankel operators $W(a)+H(b)$ whose generating functions $a$ and $b$ constitute a matching pair. For this, we recall useful formulas connecting Wiener-Hopf plus Hankel operators and matrix Wiener-Hopf operators. According to
\cite[Equation (2.4)]{DS:2014b}, the diagonal operator $\diag
(W(a)+H(b)+Q, W(a)-H(b)+Q)$ can be represented in the form
\begin{equation}\label{eqn3}
\left(\!
\begin{array}{cc}
W(a)+H(b)+Q & 0\\
0 & W(a)-H(b)+Q\\
\end{array}
\!\right) =
\cJ A_1 A_2(W(V(a,b))\!+\!\cQ)C \cJ^{-1}\,,
\end{equation}
where the operators $A_1, A_2$, $\cJ$, $C=C(a,b)$ and $V=V(a,b)$
are defined by
\begin{align}
& A_1:=\diag(I,I)-\diag(P,Q)W^0 \left(
\begin{array}{cc}
a & b \\
\tilde{b} & \tilde{a} \\
\end{array}
\right) \diag(Q,P), \nn\\[1ex]
& A_2:=\diag(I,I)+\cP W^0 (V(a,b)) \cQ, \quad \cJ:= \frac{1}{2}\left(
\begin{array}{cc}
I & J \\
I & -J \\
\end{array}
\right) ,\nn\\
& C(a,b) :=\left(
\begin{array}{cc}
I & 0 \\
W^0(\tilde{b}) & W^0(\tilde{a}) \\
\end{array}
\right),
\quad
V(a,b) :=\left(
\begin{array}{cc}
a-b \tilde{b} \tilde{a}^{-1} & b \tilde{a}^{-1} \\
- \tilde{b}\tilde{a}^{-1} & \tilde{a}^{-1} \\
\end{array}
\right), \nn
\end{align}
and
$$
\cP:=\diag(P,P), \quad \cQ:=\diag(Q,Q)\, .
$$
Using the notation
\begin{align}\label{Eq3.5}
B&:= W(V(a,b))+\cQ, \\[1ex]
\cR&:= \diag(W(a)+H(b),W(a)-H(b)), \quad R:= \cR+\cQ, \nn
\end{align}
we write the equation \eqref{eqn3} as
\begin{equation}\label{Eq4}
R= \cJ A_1 A_2 B C \cJ^{-1}\,.
\end{equation}
Considering the operator $R$, taking into account equation \eqref{Eq4} and the invertibility of the operators $\cJ$, $C$,
$A_1$ and $A_2$, and assuming that $B$ possesses a generalized inverse $B_g^{-1}$, we can write
\begin{equation*}
R^{-1}_g = \cJ C^{-1} B^{-1}_g A_2^{-1} A_1^{-1} \cJ^{-1}.
\end{equation*}
Observe that $R^{-1}_g$ is diagonal since so is the operator $R$.
Thus
$$
R^{-1}_g =\diag(\bF^{-1}_{g},\bK^{-1}_{g}),
$$
and it is clear that the diagonal elements $\bF^{-1}_{g}$ and
$\bK^{-1}_{g}$ have the form
$$
\bF^{-1}_{g}=F^{-1}_{g}+ Q, \quad \bK^{-1}_{g}=K^{-1}_{g}+ Q,
$$
where $F^{-1}_{g},K^{-1}_{g}\colon L^p(\sR^+)\to L^p(\sR^+)$ are
generalized inverses for the operators $W(a)+H(b)$ and $W(a)-H(b)$,
respectively.
In this section we construct a generalized inverse for the operator
$W(a)+H(b)$ provided that the operator $B$ is generalized invertible
and a generalized inverse of $B$ can be represented in a special
form. The following theorem has been proved in \cite{DS:2016a} in the case of Toeplitz plus Hankel operators. For Wiener-Hopf plus Hankel operators the proof literally repeats all constructions there and is omitted here.
\begin{theorem}\label{t4}
Let $(a,b)$ be a matching pair with the subordinated pair $(c,d)$.
Assume that the operator $B$ of \eqref{Eq3.5} is generalized
invertible and has a generalized inverse $B^{-1}_g$ of the form
\begin{equation}\label{Eq5}
B^{-1}_g=
\left(
\begin{array}{cc}
\bA & \bB \\
\bD & 0 \\
\end{array}
\right) +\cQ,
\end{equation}
where $\bA,\bB$ and $\bD$ are operators acting in the space
$L^p(\sR^+)$. Then $W(a)+H(b)$ is generalized invertible and the
operator $G$,
\begin{equation}\label{Eq6}
\begin{aligned}
G :=-H(\tilde{c})(\bA(I-H(d))-\bB
H(\tilde{a}^{-1}))+H(a^{-1})\bD(I-H(d))+W(a^{-1}),
\end{aligned}
\end{equation}
is a generalized inverse for the operator $W(a)+H(b)$.
\end{theorem}
\begin{lemma}\label{lem1}
Let $(a,b)\in G\times G$ be a matching pair such that one of the following conditions holds:
\begin{enumerate}[(i)]
\item The operators $W(c)$ and $W(d)$ are
right invertible.
\item The operators $W(c)$ and $W(d)$ are
left invertible.
\item $W(c)$ and $W(d)$ are, respectively, left and right invertible
operators.
\end{enumerate}
Then the operator $B$ of \eqref{Eq3.5} is generalized invertible and
it has a generalized inverse of the form \eqref{Eq5}.
\end{lemma}
\textit{Proof} For a matching pair $(a,b)$ the operator
$W(V(a,b))$ has the form
$$
W(V(a,b))=
\left(
\begin{array}{cc}
0 & W(d) \\
-W(c) & W(\tilde{a}^{-1}) \\
\end{array}
\right).
$$
Assume for definiteness that both operators $W(c)$ and $W(d)$ are
right invertible. Then the operator $W(V(a,b))$ is also right
invertible and it is easily seen that one of its right inverses is
given by the formula
\begin{equation*}
B_g^{-1}=
\left(
\begin{array}{cc}
W_r^{-1}(c) W(\tilde{a}^{-1})W_r^{-1}(d) & -W_r^{-1}(c)\\[1ex]
W_r^{-1}(d) & 0 \\
\end{array}
\right) +\cQ,
\end{equation*}
where $W_r^{-1}(c)$ and $W_r^{-1}(d)$ are right-inverses of the
operators $W(c)$ and $W(d)$, correspondingly. Thus in this case,
condition \eqref{Eq5} is satisfied with the operators
\begin{equation}\label{Eq15}
\mathbf{A}= W_r^{-1}(c) W(\tilde{a}^{-1})W_r^{-1}(d), \quad
\mathbf{B}= -W_r^{-1}(c), \quad \mathbf{D}=W_r^{-1}(d).
\end{equation}
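As a sanity check (a sketch, under the usual convention that $W(g)$ acts as the compression $PW^0(g)P$, so that $W(g)Q=QW_r^{-1}(g)=0$ and $W(g)W_r^{-1}(g)=P$), block multiplication confirms that the displayed operator is a right inverse of $B$:
\begin{equation*}
B\,B_g^{-1}=
\left(
\begin{array}{cc}
Q & W(d) \\
-W(c) & W(\tilde{a}^{-1})+Q \\
\end{array}
\right)
\left(
\begin{array}{cc}
\mathbf{A}+Q & -W_r^{-1}(c)\\
W_r^{-1}(d) & Q \\
\end{array}
\right)=\diag(I,I),
\end{equation*}
since the diagonal entries equal $P+Q=I$, while the $(2,1)$-entry
$-W(c)\mathbf{A}+(W(\tilde{a}^{-1})+Q)W_r^{-1}(d)=
-W(\tilde{a}^{-1})W_r^{-1}(d)+W(\tilde{a}^{-1})W_r^{-1}(d)$ vanishes.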
The other cases are considered analogously. Thus if both
operators $W(c)$ and $W(d)$ are left invertible, then $B$ is left invertible with a left-inverse having the form \eqref{Eq5}, where
\begin{equation}\label{Eq16}
\mathbf{A}= W_l^{-1}(c) W(\tilde{a}^{-1})W_l^{-1}(d), \quad
\mathbf{B}= -W_l^{-1}(c), \quad \mathbf{D}=W_l^{-1}(d),
\end{equation}
and if $W(c)$ is left-invertible and $W(d)$ is right invertible,
then the corresponding operators $\mathbf{A}$, $\mathbf{B}$ and
$\mathbf{D}$ in \eqref{Eq5} are
\begin{equation}\label{Eq17}
\mathbf{A}= W_l^{-1}(c) W(\tilde{a}^{-1})W_r^{-1}(d), \quad
\mathbf{B}= -W_l^{-1}(c), \quad \mathbf{D}=W_r^{-1}(d),
\end{equation}
which completes the proof.
\qed
Combining Theorem \ref{t4} and Lemma \ref{lem1} one obtains the
following result.
\begin{theorem}\label{thm2.3}
Let the operators $W(c)$ and $W(d)$ satisfy one of the assumptions
of Lemma \ref{lem1} and $\mathbf{A}, \mathbf{B}$ and
$\mathbf{D}$ be the operators defined by one of the
relations \eqref{Eq15}--\eqref{Eq17}. Then the operator $W(a)+H(b)$
is generalized invertible and \eqref{Eq6} is one of its generalized inverses.
\end{theorem}
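For instance, in case (i), substituting \eqref{Eq15} into \eqref{Eq6} gives the explicit generalized inverse
\begin{equation*}
G=-H(\tilde{c})\left(W_r^{-1}(c) W(\tilde{a}^{-1})W_r^{-1}(d)(I-H(d))+W_r^{-1}(c)H(\tilde{a}^{-1})\right)
+H(a^{-1})W_r^{-1}(d)(I-H(d))+W(a^{-1}).
\end{equation*}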
\begin{remark}
We note that in cases (i) and (ii), the operator $W(a)+H(b)$ is one-sided invertible and the formulas for the corresponding inverses obtained in Section~\ref{sec4} are simpler than the representation \eqref{Eq6}.
\end{remark}
\section{Invertibility of Wiener-Hopf plus Hankel operators\label{sec6}}
The results of the previous sections can now be used to establish various invertibility conditions for the operators $W(a)+H(b)$ and to write down the corresponding inverses. Let us formulate one such result and provide a few examples.
\begin{corollary}\label{cor1}
Let $(a,b)$, $a,b\in G $ be a matching pair such that the
operators $W(c)$ and $W(d)$ are invertible. Then the operator
$W(a)+H(b)$ is invertible and
\begin{equation}\label{EqN5}
(W(a)+H(b))^{-1}=(I - H(\tilde{c})) W^{-1}(c)
W(\tilde{a}^{-1})W^{-1}(d) + H(a^{-1})W^{-1}(d).
\end{equation}
\end{corollary}
\textit{Proof}
If the operators $W(c)$ and $W(d)$ are invertible, then relations
(2.7) and (2.4) of \cite{DS:2014b} show that the operator
$W(a)+H(b)$ is invertible, and the result follows from Theorem
\ref{t5}.
\qed
Let us point out that this is a very surprising result. There is a
vast literature devoted to the study of the Fredholmness and
one-sided invertibility of Wiener-Hopf plus Hankel operators in the
situation where generating functions satisfy the relation $b=a$
or $\tilde{b}=a$. Of course, such generating functions
constitute a matching pair. The other case studied is $a(t)=1$ for
all $t\in\sR$ and $b=b(t)$ is a specific matching function. However,
to the best of our knowledge, so far there are no efficient representations
for the inverse operators. On the other hand, for a wide class of generating
functions $g$ the inverse operators $W^{-1}(g)$ can be constructed.
Therefore, formula \eqref{EqN5} is an efficient
tool in constructing the inverse operators $(W(a)+H(b))^{-1}$ in the case where $a$ and $b$ constitute a matching generating pair.
\begin{example}\label{ex2}
Let us consider the operators $W(a)+H(b)$ in the case where $a=b$.
In this situation $c(t)=1$ and $d(t)=a(t) \tilde{a}^{-1}(t)$.
Hence, $H(\tilde{c})=0$, $W(c)=I$ and if the operator $W(d)$ is
invertible, then the operator $W(a)+H(a)$ is also invertible and
$$
(W(a)+H(a))^{-1}=(W(\tilde{a}^{-1}) + H(a^{-1}))W^{-1}(a
\tilde{a}^{-1}).
$$
\end{example}
\begin{example}\label{ex3}
Let $b=\tilde{a}$. Then $c(t)=a(t)
\tilde{a}^{-1}(t)$ and $d(t)=1$. Hence, if the operator $W(c)$ is
invertible, then the operator $W(a)+H(\tilde{a})$ is also invertible and
$$
(W(a)+H(\tilde{a}))^{-1}=(I - H(\tilde{a}a^{-1})) W^{-1}(a
\tilde{a}^{-1}) W(\tilde{a}^{-1}) + H(a^{-1}).
$$
\end{example}
\begin{example}\label{ex4}
Let $a(t)=1$ and $b(t)b(-t)=1$ for all $t\in
\sR$. In this situation, $c(t)=\tilde{b}(t)$, $d(t)=b(t)$ and
if the operators $W(\tilde{b})$ and $W(b)$ are invertible, then
$$
(I+H(b))^{-1}=(I - H(b)) W^{-1}(\tilde{b}) W^{-1}(b).
$$
\end{example}
\section*{Conclusion}
For matching generating functions $a,b\in G$, the invertibility of the operators $W(a)+H(b)$ can be described in terms of indices $\nu$ and $n$ of the subordinated functions $c$ and $d$. Moreover, the corresponding inverses can be represented using only auxiliary Wiener-Hopf and Hankel operators along with the corresponding inverses of scalar Wiener-Hopf operators. This approach is efficient and can be realised as soon as the Wiener-Hopf factorization of the functions $c$ and $d$ is available --- cf.~\eqref{Eq21}-\eqref{Eq24a}.
\section{Introduction}
Our motivation for this work is twofold. On one hand, in a recent paper \cite{AkHaSz2014}, two of the co-authors of the present paper studied a certification method
for approximate roots of exact overdetermined and singular polynomial systems, and wanted to extend the method to certify the multiplicity structure at the root as well. Since all these problems are ill-posed, in \cite{AkHaSz2014} a hybrid symbolic-numeric approach was proposed that included the exact computation of a square polynomial system which had the original root with multiplicity one. In certifying singular roots, this exact square system was obtained from a deflation technique that iteratively added subdeterminants of the Jacobian matrix to the system. However, this deflation technique destroys the multiplicity structure, which is why certifying the multiplicity structure of singular roots of exact polynomial systems remained an open question.
Our second motivation was to find a method that simultaneously refines the accuracy of a singular root and the parameters describing the multiplicity structure at the root. All previous numerical approaches that approximate these parameters apply numerical linear algebra to solve a linear system whose coefficients depend on the approximation of the coordinates of the singular root. Thus the local convergence rate of the parameters was slower than the quadratic convergence of Newton's iteration applied to the singular roots. We were interested in whether the parameters describing the multiplicity structure can be approximated simultaneously with the coordinates of the singular root using Newton's iteration.
In the present paper we first give a new improved version of the deflation method that can be used in the certification algorithm of \cite{AkHaSz2014}, reducing the number of added equations at each deflation iteration from quadratic to linear. We prove that applying a single linear differential form to the input system, corresponding to a generic kernel element of the Jacobian matrix, already reduces both the multiplicity and the depth of the singular root. Secondly, we give a description of
the multiplicity structure using a polynomial number of parameters, and express these parameters together with the coordinates of the singular point as the roots of a multivariate polynomial system. We prove that this new polynomial system has a root corresponding to the singular root, but now with multiplicity one, and that the newly added coordinates describe the multiplicity structure. Thus this second approach completely deflates the system in one step. The number of equations and variables in the second construction depends polynomially on the number of variables and equations of the input system and on the multiplicity of the singular root. Both constructions
are exact in the sense that approximations of the coordinates of
the singular point are only used to detect numerically non-singular
submatrices, and not in the coefficients of the constructed polynomial systems.
\textbf{Related work.}
The treatment of singular roots is a critical issue for numerical analysis and there is a
huge literature on methods which transform the problem into a new one
for which Newton-type methods converge quadratically to the root.
Deflation techniques which add new equations in order to
reduce the multiplicity have already been considered in
\cite{Ojika1983463}, \cite{Ojika1987199}:
By triangulating the Jacobian matrix at the (approximate) root,
new minors of the polynomial Jacobian matrix are added to the initial
system in order to reduce the multiplicity of the singular solution.
A similar approach is used in \cite{HauWam13} and \cite{GiuYak13}, where a maximal
invertible block of the Jacobian matrix at the (approximate) root is
computed and minors of the polynomial Jacobian matrix are added to the
initial system. In \cite{GiuYak13}, an additional step is
considered where the first derivatives of the input polynomials are
added when the Jacobian matrix at the root vanishes.
These constructions are repeated until a system with a simple root is obtained.
In these methods, at each step, the number of added equations is $(n-r)\times (m-r)$,
where $n$ is number of variables, $m$ is the number of equations and $r$
is the rank of the Jacobian at the root.
In~\cite{Lecerf02}, a triangular presentation of the ideal in a
good position and derivations with respect to the leading variables are used
to iteratively reduce the multiplicity. This process is applied for p-adic
lifting with exact computation.
In other approaches, new variables and new equations are introduced simultaneously.
In \cite{YAMAMOTO:1984-03-31}, new variables are introduced to
describe some perturbations of the initial equations and some differentials which
vanish at the singular points.
This approach is also used in \cite{LiZhi2014}, where it is shown that
this iterated deflation process yields a system with a simple root.
In \cite{mantzaflaris:inria-00556021},
perturbation variables are
also introduced in relation with the inverse system of the singular point
to obtain directly a deflated system with a simple root.
The perturbation is constructed from a monomial basis of the local
algebra at the multiple root.
In~\cite{lvz06,lvz08}, only variables for the differentials of the initial
system are introduced.
The analysis of this deflation is improved in \cite{DaytonLiZeng11},
where it is shown that the number of steps is bounded by the order
of the inverse system.
This type of deflation is also used in \cite{LiZhi2013}, for the special case
where the Jacobian matrix at the multiple root has rank $n-1$ (case of
breath one).
In these methods, at each step, the number of variables is at least doubled and new
equations are introduced, which are linear in these new variables.
The mentioned deflation techniques usually break the structure of the local
ring at the singular point. The first method to compute the inverse system describing this structure is due to
F.S. Macaulay \cite{mac1916}
and known as the dialytic method.
More recent algorithms for the construction of inverse systems are described
e.g. in \cite{Marinari:1995:GDM:220346.220368}, reducing the size of the
intermediate linear systems (and exploited in
\cite{Stetter:1996:AZC:236869.236919}).
In~\cite{zeng05}, the dialytic method is used, and the relationship of some deflation methods to the inverse system is analyzed.
The dialytic method has been further improved in \cite{Mourrain97} and more recently in
\cite{mantzaflaris:inria-00556021},
using an integration method. This technique reduces significantly the
cost of computing the inverse system, since it relies on the solution
of linear systems related to the inverse system truncated at some
degree, and not on the number of monomials in this degree.
Multiplication matrices corresponding to systems with singular roots were studied in \cite{Moller95,Corless97}.
The computation of inverse systems has been used to approximate a
multiple root.
In~\cite{Pope2009606}, a minimization approach is used to reduce the value of
the equations and their derivatives at the approximate root, assuming a basis
of the inverse system is known.
In
\cite{WuZhi2011}, the inverse system is constructed via
Macaulay's method; tables of multiplications are deduced and their
eigenvalues are used to improve the approximated root. They
show that the convergence is quadratic at the multiple root. In \cite{LiZhi12}
they show that in the breadth one case the number of parameters needed to describe the
inverse system is small, and use this to compute the singular roots in \cite{LiZhi12b}.
In \cite{mantzaflaris:inria-00556021}, the inverse system is used to
transform the singular root into a simple root of an augmented system.
\textbf{Contributions.}
We propose a new deflation method for polynomial systems with
isolated singular points, which does not introduce new
variables. At each step, a single differential of the system is
considered based on the analysis of the Jacobian at the singular
point. A linear number of new
equations is added, instead of the quadratic increase of previous
deflation methods.
The deflated system does not involve any approximate coefficients and
can therefore be used in certification methods as in \cite{AkHaSz2014}.
To approximate efficiently both the singular point and its inverse
system, we propose a new deflation, which involves a small number of
new variables compared to other approaches which rely on Macaulay
matrices. It is based on a new characterization of the isolated
singular point together with its multiplicity structure. The deflated
polynomial system exploits the nilpotent and commutation properties of
the multiplication matrices in the local algebra of the singular
point. We prove that it has a simple root which yields the root and
the coefficients of the inverse system at this singular point. Due to
the upper triangular form of the multiplication matrices in a
convenient basis of the local algebra, the number of new parameters
introduced in this deflation is less than ${1\over 2} n ({\delta} -1){\delta} $
where $n$ is the number of variables and ${\delta}$ the multiplicity of the
singular point. The parameters involved in the deflated system are
determined from the analysis of an approximation of the singular
point. Nevertheless, the deflated system does not involve any
approximate coefficients and thus it can also be used in certification
techniques such as \cite{AkHaSz2014}.
In this paper we present two new constructions. The first one is a
new deflation method for a system of polynomials with an isolated
singular root. The new construction uses a single linear
differential form defined from the Jacobian matrix of the input, and
defines the deflated system by applying this differential form to
the original system. We prove that the resulting deflated system
has strictly lower multiplicity and depth at the singular point than
the original one. The advantage of this new deflation is that it
does not introduce new variables, and the increase in the number of
equations is linear, instead of the quadratic increase of previous
deflation methods. The second construction gives the coefficients of
the so called inverse system or dual basis, which defines the
multiplicity structure at the singular root. The novelty of our
construction is that we show that the nilpotent and commutation
properties of the multiplication matrices smoothly define the
singular point and its inverse system. We give a system of
equations in the original variables plus a relatively small number
of new variables, and prove that the roots of this new system
correspond to the original multiple roots but now with multiplicity
one, and they uniquely determine the multiplicity structure. The
number of unknowns used to describe the multiplicity structure is
significantly smaller, compared to the direct computation of the
dual bases from the so called Macaulay matrices. Both constructions
are ``exact" in the sense that approximations of the coordinates of
the singular point are only used to detect numerically non-singular
submatrices, and not in the rest of the construction. Thus these
constructions would allow to treat all conjugate roots
simultaneously, as well as to apply these constructions in the
certification of the singular roots and the multiplicity structure
of an exact rational polynomial system.
\section{Preliminaries}
Let ${\bf f}:= (f_1, \ldots, f_N)\in {\mathbb K}[{\bf x} ]^N$ with ${\bf x} =(x_1, \ldots, x_n)$ for some ${\mathbb K}\subset {\mathbb C}$ field. Let ${{\mathbf{\xi}}}=(\xi_1, \ldots, \xi_n)\in {\mathbb C}^n$ be an isolated multiple root of ${\bf f}$.
Let $I=\langle f_1, \ldots, f_N\rangle$, ${\mathfrak m}_{\xi}$ be the maximal ideal at ${\xi}$ and ${Q}$ be the primary component of $I$ at ${{\mathbf{\xi}}}$ so that $\sqrt{{Q}}={\mathfrak m}_{\xi}$.\\
Consider the ring of power series ${\mathbb C}[[{\mbox{\boldmath$\partial$}}_\xi]]:= {\mathbb C}[[\partial_{1,\xi}, \ldots, \partial_{n,\xi}]]$ and we use the notation
for ${\beta}=(\beta_1, \ldots, \beta_n)\in {\mathbb N}^n$
$${\mbox{\boldmath$\partial$}}^{{\beta}}_{\xi}:=\partial_{1, \xi}^{\beta_1}\cdots \partial_{n, \xi}^{\beta_n}.$$
We identify ${\mathbb C}[[{\mbox{\boldmath$\partial$}}_\xi]]$ with the dual space ${\mathbb C}[{\bf x} ]^*$ by
considering ${\mbox{\boldmath$\partial$}}^{{\beta}}_{\xi}$ as derivations and evaluations at ${{\mathbf{\xi}}}$, defined by
\begin{equation}\label{partial}
{\mbox{\boldmath$\partial$}}^{{\beta}}_{ \xi}(p):={\mbox{\boldmath$\partial$}}^{\beta}(p)|_{{{\mathbf{\xi}}}} := \frac{d^{|{\beta}| } p}{d x_1^{\beta_1}\cdots d x_n^{\beta_n}} ({{\mathbf{\xi}}}) \quad \text{ for } p\in {\mathbb C}[{\bf x}].
\end{equation}
Hereafter, the derivations ``at $\mathbf{x}$'' will be denoted
${\mbox{\boldmath$\partial$}}^{{\beta}}$ instead of ${\mbox{\boldmath$\partial$}}^{{\beta}}_{\mathbf{x}}$. The
derivation with respect to the variable $\partial_{i}$ is denoted
$\pp{\partial_{i}}$ $(i=1,\ldots, n)$.
Note that
$$
\frac{1}{{\beta}!} {\mbox{\boldmath$\partial$}}^{{\beta}}_{ \xi}(({\bf x}-{{\mathbf{\xi}}})^{\alpha})=\begin{cases} 1 & \text{ if } {\alpha}={\beta}\\
0 & \text{ otherwise}
\end{cases},
$$
where we use the notation $\frac{1}{{\beta}!}= \frac{1}{\beta_1!\cdots \beta_n!} $.
For $p\in {\mathbb C}[{\bf x}]$ and $ \Lambda\in {\mathbb C}[[{\mbox{\boldmath$\partial$}}_\xi]]={\mathbb C}[{\bf x}]^{*}$,
let
$$
p\cdot \Lambda: q \mapsto \Lambda (p\,q).
$$
We check that $p=(x_i-\xi_i)$ acts as a derivation on
${\mathbb C}[[{\mbox{\boldmath$\partial$}}_\xi]]$:
$$
(x_{i}-\xi_{i}) \cdot {\mbox{\boldmath$\partial$}}^{{\beta}}_{ \xi}= \pp{\partial_{i, \xi}} ({\mbox{\boldmath$\partial$}}^{{\beta}}_{ \xi}).
$$
For an ideal $I\subset {\mathbb C}[{\bf x}]$, let $I^{\perp}=\{\Lambda \in
{\mathbb C}[[{\mbox{\boldmath$\partial$}}_{\xi}]]\mid \forall p\in I, \Lambda (p)=0\}$.
The vector space $I^{\perp}$ is naturally identified with the dual
space of ${\mathbb C}[{\bf x}]/I$.
We check that $I^{\perp}$ is a vector subspace of ${\mathbb C}[[{\mbox{\boldmath$\partial$}}_{\xi}]]$,
which is stable by the derivations $\pp{\partial_{i, \xi}}$.
\begin{lemma}\label{lem:primcomp}
If $Q$ is a ${\mathfrak m}_{\xi}$-primary component of $I$, then $Q^{\perp}=I^{\perp}\cap{\mathbb C}[{\mbox{\boldmath$\partial$}}_{\xi}]$.
\end{lemma}
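A sketch of the argument: since $Q$ is ${\mathfrak m}_{\xi}$-primary, there exists $N$ with ${\mathfrak m}_{\xi}^{N}\subset Q$, so that
\begin{equation*}
Q^{\perp}\subset ({\mathfrak m}_{\xi}^{N})^{\perp}={\rm span}\{{\mbox{\boldmath$\partial$}}^{\beta}_{\xi} : |\beta|<N\}\subset {\mathbb C}[{\mbox{\boldmath$\partial$}}_{\xi}],
\end{equation*}
and $Q^{\perp}\subset I^{\perp}$ because $I\subset Q$. For the reverse inclusion, one writes $I=Q\cap Q'$ with $Q'$ supported away from ${{\mathbf{\xi}}}$, picks $g\in Q'$ with $g({{\mathbf{\xi}}})\neq 0$, and deduces from $\Lambda(q\,g)=0$ for all $q\in Q$, by induction on the order of $\Lambda$, that every $\Lambda\in I^{\perp}\cap{\mathbb C}[{\mbox{\boldmath$\partial$}}_{\xi}]$ annihilates $Q$.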
This lemma shows that to compute $Q^{\perp}$, it suffices to compute
all polynomials of ${\mathbb C}[{\mbox{\boldmath$\partial$}}_{\xi}]$ which are in $I^{\perp}$.
Let us denote this set ${\mathscr D}= I^{\perp}\cap{\mathbb C}[{\mbox{\boldmath$\partial$}}_{\xi}]$. It is a
vector space stable under the derivations
$\pp{\partial_{i, \xi}}$. Its dimension is the dimension of
$Q^{\perp}$ or ${\mathbb C}[{\bf x}]/Q$, that is the {\em multiplicity} of
$\xi$, and denote it by ${\delta}_{\xi} (I)$, or simply by ${\delta}$ if $\xi$ and $I$ are clear from the context.
For an element $\Lambda({\mbox{\boldmath$\partial$}}_\xi) \in {\mathbb C}[{\mbox{\boldmath$\partial$}}_\xi]$ we
define the {\em order} ${\rm ord}(\Lambda)$ to be the maximal
$|{\beta}|$ such that ${\mbox{\boldmath$\partial$}}^{{\beta}}_{ \xi}$ appears in
$\Lambda({\mbox{\boldmath$\partial$}}_\xi)$ with non-zero coefficient.
For $t\in {\mathbb N}$, let ${\mathscr D}_{t}$ be the elements of ${\mathscr D}$ of order $\leq t$.
As ${\mathscr D}$ is of dimension ${\delta}$, there exists a smallest $t\geq 0$ such that
${\mathscr D}_{t+1}= {\mathscr D}_{t}$. Let us call this smallest $t$ the {\em nil-index} of
${\mathscr D}$ and denote it by $o_{\xi} (I)$, or simply by $o$. As ${\mathscr D}$ is stable by the derivations
$\pp{\partial_{i, \xi}}$,
we easily check that for $t\geq o_{\xi} (I)$, ${\mathscr D}_{t}={\mathscr D}$ and
that $o_{\xi} (I)$ is the maximal degree of the elements in ${\mathscr D}$.
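The truncated dual spaces ${\mathscr D}_t$ can be computed by Macaulay's dialytic approach: impose the linear conditions $\Lambda(x^{\alpha} f_k)=0$ on the coefficients of $\Lambda\in{\mathbb C}[{\mbox{\boldmath$\partial$}}_\xi]$ of order at most $t$ and take a nullspace. A small self-contained sketch (the example system is hypothetical, chosen to have a multiplicity-$2$ root at the origin):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = [x1 + x2**2, x1**2 + x2**2]   # hypothetical system with a multiplicity-2 root at 0

# Candidate dual monomials d^beta with |beta| <= 2 (enough here: the nil-index is 1)
betas = [(i, j) for i in range(3) for j in range(3) if i + j <= 2]

def apply_functional(coeffs, p):
    """Value of Lambda = sum_beta c_beta d^beta, evaluated at the origin, on p."""
    return sp.expand(sum(c * sp.diff(p, x1, i, x2, j).subs({x1: 0, x2: 0})
                         for c, (i, j) in zip(coeffs, betas)))

# Dialytic (Macaulay) conditions: Lambda(x^alpha * f_k) = 0 for |alpha| <= 1,
# enforcing membership in I^perp up to the chosen order.
c = sp.symbols('c0:%d' % len(betas))
eqs = [apply_functional(c, m * fk) for m in (sp.Integer(1), x1, x2) for fk in f]
M = sp.Matrix([[eq.coeff(ci) for ci in c] for eq in eqs])
D = M.nullspace()
print(len(D))   # dimension of the truncated dual space = multiplicity delta = 2
```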
\section{Deflation using first differentials}\label{Sec:Deflation}
\noindent To improve the numerical approximation of a root, one usually
applies a Newton-type method to converge quadratically from
a nearby solution to the root of the system, provided it is simple.
In the case of multiple roots, deflation techniques are employed to
transform the system into another one which has an equivalent root
with a smaller multiplicity or even with multiplicity one.
We describe here a construction, using differentials of order one,
which leads to a system with a simple root. This construction improves
the constructions in \cite{lvz06,DaytonLiZeng11}
since no new variables are added.
It also improves the constructions presented in
\cite{HauWam13} and the ``kerneling'' method of \cite{GiuYak13}
by adding a smaller number of equations at each deflation step.
Note that, in \cite{GiuYak13}, there are smart preprocessing and postprocessing
steps which could be utilized in combination with our method. In the preprocessor, one
adds directly partial derivatives of polynomials which are zero at the root.
The postprocessor extracts a square subsystem of the completely deflated system
for which the Jacobian has full rank at the root.
Consider the Jacobian matrix $J_{\mathbf{f}} ({\bf x}) = \left[\partial_{j} f_{i} ({\bf x})\right]$ of the
initial system $\mathbf{f}$.
By reordering properly the rows and columns (i.e., polynomials and variables),
it can be put in the form
\begin{equation}
J_{\mathbf{f}} ({\bf x})
:= \left [
\begin{array}{cc}
A ({\bf x}) & B ({\bf x}) \\
C ({\bf x}) & D ({\bf x})
\end{array}
\right]
\end{equation}
where $A({\bf x})$ is an $r\times r$ matrix with
$r = {\rm rank} J_{\mathbf{f}}({{\mathbf{\xi}}}) = {\rm rank} A({{\mathbf{\xi}}})$.
Suppose that $B({\bf x})$ is an $r\times c$ matrix, so that $c=n-r$. The $c$ columns
\begin{eqnarray*}\label{parkernel}
\det (A ({\bf x})) \left[
\begin{array}{c}
-A^{-1} ({\bf x}) B ({\bf x}) \\
\mathrm{Id}
\end{array}
\right]
\end{eqnarray*}
(for $r= 0$ this is the identity matrix) yield the $c$ elements
$$
\Lambda_{1}^{{\bf x}}=\sum_{j=1}^{n} \lambda_{1,j} ({\bf x}) \partial_{j},~\ldots,~
\Lambda_{c}^{{\bf x}}=\sum_{j=1}^{n}\lambda_{c,j} ({\bf x}) \partial_{j}.
$$
Their coefficients $\lambda_{i,j}({\bf x})\in {\mathbb K}[{\bf x}]$ are polynomial in the
variables ${\bf x}$.
Evaluated at ${\bf x}={{\mathbf{\xi}}}$, they generate the kernel of
$J_{\mathbf{f}} ({{\mathbf{\xi}}})$ and form a basis of ${\mathscr D}_{1}$.
\begin{definition}
The family $D^{{\bf x}}_{1}=\{\Lambda_{1}^{{\bf x}}, \ldots,
\Lambda_{c}^{{\bf x}}\}$ is the {\em formal} inverse system of
order $1$ at ${{\mathbf{\xi}}}$.
For ${\bf i}=\{i_{1},\ldots, i_{k}\}\subset \{1, \ldots, c\}$
with $|{\bf i}|\neq 0$, the ${\bf i}$-{\em deflated system} of order~$1$~of~$\mathbf{f}$~is
$$
\{\mathbf{f}, {\Lambda}_{i_{1}}^{{\bf x}} (\mathbf{f}), \ldots, {\Lambda}_{i_{k}}^{{\bf x}} (\mathbf{f})\}.
$$
\end{definition}
By construction, for $i=1,\ldots,c$,
$$
{\Lambda}_{i}^{{\bf x}} (\mathbf{f})
= \sum_{j=1}^{n} \partial_{j} (\mathbf{f}) {\lambda}_{i,j}({\bf x})
= \det (A ({\bf x})) J_{\mathbf{f}} ({\bf x}) [{\lambda}_{i,j}({\bf x})]
$$
has $n-c$ zero entries. Thus,
the number of non-trivial new equations added in the
${\bf i}$-deflated system
is \mbox{$|{\bf i}|\cdot(N-n+c)$}.
The construction depends on the choice of the invertible block $A({{\mathbf{\xi}}})$ in
$J_{\mathbf{f}} ({{\mathbf{\xi}}})$.
By a linear invertible transformation of the initial system and by
computing an ${\bf i}$-deflated system, one obtains
a deflated system constructed from any $|{\bf i}|$ linearly
independent elements of the~kernel~of~$J_{\mathbf{f}} ({{\mathbf{\xi}}})$.
\begin{example}\label{Ex:Illustrative} Consider the multiplicity $2$ root ${{\mathbf{\xi}}} = (0,0)$
for the system $f_1({\bf x}) = x_1 + x_2^2$ and $f_2({\bf x}) = x_1^2 + x_2^2$.
Then,
{\small $$
J_{\mathbf{f}}({\bf x}) =
\left[
\begin{array}{cc}
A ({\bf x}) & B ({\bf x}) \\
C ({\bf x}) & D ({\bf x})
\end{array}
\right] = \left[\begin{array}{cc} 1 & 2x_2 \\ 2x_1 & 2x_2 \end{array}\right].
$$}
The corresponding vector $[-2x_2 ~~ 1]^T$ yields the element
$$\Lambda_1^{{\bf x}} = -2x_2\partial_1 + \partial_2.$$
Since $\Lambda_1^{{\bf x}}(f_1) = 0$, the $\{1\}$-deflated system of
order $1$ of $\mathbf{f}$ is
$$
\left\{x_1 + x_2^2, ~~x_1^2 + x_2^2, ~-4x_1x_2 + 2x_2\right\}
$$
which has a multiplicity $1$ root at ${{\mathbf{\xi}}}$.
\end{example}
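This computation is easy to check with a computer algebra system; a minimal sketch of the verification (using sympy):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f1, f2 = x1 + x2**2, x1**2 + x2**2

# The kernel element from the example: Lambda_1 = -2*x2*d_1 + d_2
lam = (-2*x2, sp.Integer(1))
new = [sp.expand(sum(l*sp.diff(g, v) for l, v in zip(lam, (x1, x2)))) for g in (f1, f2)]
assert new[0] == 0                      # Lambda_1(f1) vanishes identically

system = [f1, f2] + [g for g in new if g != 0]
J = sp.Matrix([[sp.diff(g, v) for v in (x1, x2)] for g in system])
print(J.subs({x1: 0, x2: 0}).rank())    # full rank 2: the deflated root is simple
```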
We use the following to analyze this deflation procedure.
\begin{lemma}[Leibniz rule]
For $a,b\in {\mathbb K}[{\bf x}]$,
$$
{\mbox{\boldmath$\partial$}}^{\alpha} (a\,b) =\sum_{\beta\in {\mathbb N}^{n}} \frac{1}{\beta!} {\mbox{\boldmath$\partial$}}^{\beta} (a) \pp{\partial}^{{\beta}}
({\mbox{\boldmath$\partial$}}^{\alpha}) (b).
$$
\end{lemma}
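In particular, since $\pp{\partial}^{{\beta}}({\mbox{\boldmath$\partial$}}^{\alpha})=\frac{\alpha!}{(\alpha-\beta)!}\,{\mbox{\boldmath$\partial$}}^{\alpha-\beta}$ for $\beta\leq\alpha$ (and $0$ otherwise), this is the classical Leibniz rule
\begin{equation*}
{\mbox{\boldmath$\partial$}}^{\alpha} (a\,b)=\sum_{\beta\leq \alpha}\binom{\alpha}{\beta}\,{\mbox{\boldmath$\partial$}}^{\beta}(a)\,{\mbox{\boldmath$\partial$}}^{\alpha-\beta}(b).
\end{equation*}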
\begin{proposition}\label{deflation:1} Let $r$ be the rank of
$J_{\mathbf{f}}({{\mathbf{\xi}}})$. Assume that $r<n$. Let ${\bf i}\subset
\{1,\ldots,c\}$ with $0<|{\bf i}|\leq n-r$
and $\mathbf{f}^{(1)}$~be the ${\bf i}$-{\em deflated system} of order $1$ of
$\mathbf{f}$. Then, ${\delta}_{{{\mathbf{\xi}}}}
(\mathbf{f}^{(1)})\geq 1$ and $o_{{{\mathbf{\xi}}}} (\mathbf{f}^{(1)}) < o_{{{\mathbf{\xi}}}} (\mathbf{f})$, which also implies that ${\delta}_{{{\mathbf{\xi}}}}
(\mathbf{f}^{(1)})< {\delta}_{{{\mathbf{\xi}}}}
(\mathbf{f})$.
\end{proposition}
\begin{proof}
By construction, for $i\in{\bf i}$,
the polynomials ${\Lambda}_{i}^{{\bf x}} (\mathbf{f})$
vanish at ${{\mathbf{\xi}}}$, so that ${\delta}_{{{\mathbf{\xi}}}} (\mathbf{f}^{(1)})\ge 1$.
By hypothesis, the Jacobian of $\mathbf{f}$ is not
injective yielding $o_{{{\mathbf{\xi}}}}(\mathbf{f})> 0$.
Let ${\mathscr D}^{(1)}$ be the inverse
system of $\mathbf{f}^{(1)}$ at ${{\mathbf{\xi}}}$.
Since $(\mathbf{f}^{(1)})\supset (\mathbf{f})$,
we have ${\mathscr D}^{(1)}\subset {\mathscr D}$.
In particular, for any non-zero element $\Lambda \in {\mathscr D}^{(1)}\subset
{\mathbb K}[{\mbox{\boldmath$\partial$}}_{{{\mathbf{\xi}}}}]$ and $i\in {\bf i}$,
$\Lambda (\mathbf{f})=0$ and $\Lambda ({\Lambda}^{{\bf x}}_{i}
(\mathbf{f}))=0$.
Using Leibniz rule, for any $p\in {\mathbb K}[{\bf x}]$, we have
{\scriptsize
\begin{eqnarray*}
\Lambda ({\Lambda}_{i}^{{\bf x}} (p)) &=&
\Lambda \left(\sum_{j=1}^{n} {\lambda}_{i,j} ({\bf x})\partial_{j} (p)\right)\\
&=&
\sum_{\beta\in {\mathbb N}^{n}} \sum_{j=1}^{n} \frac{1}{\beta!} {\mbox{\boldmath$\partial$}}_{{{\mathbf{\xi}}}}^{{\beta}} ( {\lambda}_{i,j} ({\bf x}))
\pp{\partial_{{{\mathbf{\xi}}}}}^{{\beta}}
(\Lambda) \partial_{j,{{\mathbf{\xi}}}} (p)\\
&=&
\sum_{\beta\in {\mathbb N}^{n}} \sum_{j=1}^{n} \frac{1}{\beta!} {\mbox{\boldmath$\partial$}}_{{{\mathbf{\xi}}}}^{{\beta}} ( {\lambda}_{i,j} ({\bf x})) {\mbox{\boldmath$\partial$}}_{j,\xi} \pp{\partial_{\xi}}^{{\beta}}
(\Lambda) (p)\\
&=&
\sum_{\beta\in {\mathbb N}^{n}} \Delta_{i,\beta} \pp{\partial_{\xi}}^{{\beta}} (\Lambda) (p)\\
\end{eqnarray*}
} where
{\small
$$
\Delta_{i,{\beta}}=\sum_{j=1}^{n}
{\lambda}_{i,j,{\beta}} \partial_{j,\xi}\in{\mathbb K}[{\mbox{\boldmath$\partial$}}_{{{\mathbf{\xi}}}}]
\hbox{~and~}
{\lambda}_{i,j,{\beta}} = \frac{1}{{\beta}!}
\partial_{{{\mathbf{\xi}}}}^{{\beta}}({\lambda}_{i,j}({\bf x}))\in {\mathbb K}.$$
}
The term $\Delta_{i,{\bf 0}}$ is
$\sum_{j=1}^{n}
\lambda_{i,j} ({{\mathbf{\xi}}}) \partial_{j,\xi}$ which has
degree $1$ in~${\mbox{\boldmath$\partial$}}_{{{\mathbf{\xi}}}}$
since $[\lambda_{i,j} ({{\mathbf{\xi}}})]$ is a non-zero element of
$\ker J_{\mathbf{f}} ({{\mathbf{\xi}}})$.
For simplicity, let $\phi_{i}(\Lambda):= \sum_{{\beta}\in {\mathbb N}^{n}} \Delta_{i,{\beta}}
\pp{\partial}^{{\beta}} (\Lambda)$.
For any $\Lambda\in {\mathbb C}[{\mbox{\boldmath$\partial$}}_{{{\mathbf{\xi}}}}]$, we have
{\small
\begin{eqnarray*}
\pp{\partial_{j,\xi}} (\phi_{i}(\Lambda))
&=&\sum_{\beta\in {\mathbb N}^{n}} \lambda_{i,j,\beta} \pp{\partial}^{{\bf m}\beta}(\Lambda)+
\Delta_{i,\beta} \pp{\partial}^{{\bf m}\beta} (\pp{\partial_{j,\xi}}
(\Lambda))\\
&=&\sum_{\beta\in {\mathbb N}^{n}} \lambda_{i,j,\beta} \pp{\partial}^{{\bf m}\beta}(\Lambda)+
\phi_{i} (\pp{\partial_{j,\xi}} (\Lambda)).
\end{eqnarray*}
} Moreover, if $\Lambda \in {\mathscr D}^{(1)}$, then by definition
$\phi_{i}(\Lambda) (\mathbf{f})=0$.
Since ${\mathscr D}$ and ${\mathscr D}^{(1)}$ are both stable by derivation,
it follows that $\forall \Lambda \in {\mathscr D}^{(1)}$,
$\pp{\partial_{j,\xi}} (\phi_{i} (\Lambda))\in {\mathscr D}^{(1)}+ \phi_{i}({\mathscr D}^{(1)})$.
Since \mbox{${\mathscr D}^{(1)}\subset {\mathscr D}$}, we know ${\mathscr D}+\phi_{i} ({\mathscr D}^{(1)})$ is stable by
derivation. For any element $\Lambda$ of ${\mathscr D}+\phi_{i} ({\mathscr D}^{(1)})$,
$\Lambda (\mathbf{f})=0$. We deduce that ${\mathscr D}+\phi_{i} ({\mathscr D}^{(1)})={\mathscr D}$.
Consequently, the order of the elements in $\phi_{i} ({\mathscr D}^{(1)})$
is at most $o_{{{\mathbf{\xi}}}} (\mathbf{f})$.
The statement follows since $\phi_i$ increases the order by exactly $1$:
the elements of ${\mathscr D}^{(1)}$ therefore have order at most $o_{{{\mathbf{\xi}}}} (\mathbf{f})-1$,
that is, $o_{{{\mathbf{\xi}}}}(\mathbf{f}^{(1)})< o_{{{\mathbf{\xi}}}}(\mathbf{f})$.
\end{proof}
We consider now a sequence of deflations of the system~$\mathbf{f}$.
Let $\mathbf{f}^{(1)}$ be the ${{\bf m} i}_{1}$-deflated system of $\mathbf{f}$. We
construct inductively
$\mathbf{f}^{(k+1)}$ as the ${{\bf m} i}_{k+1}$-deflated system of $\mathbf{f}^{(k)}$ for some
choices of ${{\bf m} i}_{j}\subset \{1,\ldots,n\}$.
\begin{proposition}
There exists $k\leq o_{{{\mathbf{\xi}}}} (\mathbf{f})$ such that ${{\mathbf{\xi}}}$ is a
simple root of $\mathbf{f}^{(k)}$.
\end{proposition}
\begin{proof}
By Proposition \ref{deflation:1}, ${\delta}_{{{\mathbf{\xi}}}} (\mathbf{f}^{(k)}) \geq 1$ and $o_{{{\mathbf{\xi}}}} (\mathbf{f}^{(k)})$ is
strictly decreasing with $k$ until it reaches the value $0$.
Therefore, there exists $k\leq o_{{{\mathbf{\xi}}}}(\mathbf{f})$ such that $o_{{{\mathbf{\xi}}}}
(\mathbf{f}^{(k)})=0$ and ${\delta}_{{{\mathbf{\xi}}}} (\mathbf{f}^{(k)})\geq~1$.
This implies that ${{\mathbf{\xi}}}$ is a simple root of $\mathbf{f}^{(k)}$.
\end{proof}
To minimize the number of equations added at each deflation step, we
take $|{\bf m} i|=1$. Then, the number of non-trivial
new equations added at each step is at most $N-n+c$.
We described this approach using first order differentials
arising from the Jacobian, but this
can be easily extended to use higher order differentials.
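To make a single deflation step concrete, the following self-contained Python sketch (an illustration only; the system, the singular point, and the kernel vector $\lambda=(1,1)$ of the Jacobian are chosen by hand for this small instance, and the function names are ours) augments a system with the equations ${\Lambda}_{i}^{{\bf x}}(\mathbf{f})$ built from a kernel vector of the Jacobian:

```python
from math import prod

# Polynomials in two variables, stored as {(a, b): coeff} for x1^a * x2^b.
f1 = {(1, 0): 1, (0, 1): -1, (2, 0): 1}   # x1 - x2 + x1^2
f2 = {(1, 0): 1, (0, 1): -1, (0, 2): 1}   # x1 - x2 + x2^2

def diff(p, i):
    """Partial derivative with respect to variable i (0 or 1)."""
    q = {}
    for e, c in p.items():
        if e[i] > 0:
            d = list(e); d[i] -= 1
            q[tuple(d)] = q.get(tuple(d), 0) + c * e[i]
    return {e: c for e, c in q.items() if c != 0}

def evalp(p, pt):
    return sum(c * prod(x ** a for x, a in zip(pt, e)) for e, c in p.items())

xi = (0.0, 0.0)
# Jacobian of (f1, f2) at xi: rank 1, so xi is a singular root.
J = [[evalp(diff(f, i), xi) for i in (0, 1)] for f in (f1, f2)]
assert J == [[1.0, -1.0], [1.0, -1.0]]

# lam spans ker J(xi); here it can be read off directly.
lam = (1.0, 1.0)

def deflate(p, lam):
    """Lambda(p) = lam_1 * dp/dx1 + lam_2 * dp/dx2."""
    q = {}
    for i in (0, 1):
        for e, c in diff(p, i).items():
            q[e] = q.get(e, 0) + lam[i] * c
    return {e: c for e, c in q.items() if c != 0}

g1, g2 = deflate(f1, lam), deflate(f2, lam)
assert g1 == {(1, 0): 2.0} and g2 == {(0, 1): 2.0}
```

Here a single step already suffices: the augmented system $(f_1, f_2, 2x_1, 2x_2)$ has a Jacobian of full rank $2$ at the origin, so the root has become simple.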
\section{The multiplicity structure}\label{Sec:PointMult}
Before describing our results, we start this section by recalling the definition of orthogonal primal-dual pairs of bases for the space ${\mathbb C}[{\bf x} ]/Q$ and its dual. The following is a definition/lemma:
\begin{lemma}[Orthogonal primal-dual basis pair]\label{pdlemma}
Let ${\bf f}$, ${{\mathbf{\xi}}}$, $Q$, ${\mathscr D}$, ${\delta}= {\delta}_\xi({\bf f})$ and $o=o_\xi({\bf f})$ be as above.
Then there exists a primal-dual basis pair of the local ring ${\mathbb C}[{\bf x} ]/ {Q}$ with the following properties:
\begin{itemize}
\item The {\em primal basis} of the local ring ${\mathbb C}[{\bf x} ]/ {Q}$ has the form
\begin{equation}\label{pbasis}
B:=\left\{( {\bf x}-\xi)^{{\alpha}_1}, ( {\bf x}-\xi)^{{\alpha}_2},\ldots, ( {\bf x}-\xi)^{{\alpha}_{{\delta}}}\right\}.
\end{equation}
We can assume that $\alpha_1=0$ and that the monomials in $B$ are {\em connected to 1}
(cf.\ \cite{Mourrain99-nf}). Define the set of exponents in $B$
$$
E:=\{\alpha_1, \ldots, \alpha_{\delta}\}.
$$
\item There is a unique {\em dual basis} $\mathbf{\Lambda}\subset {\mathscr D}$ orthogonal to $B$, i.e. the elements of $\mathbf{\Lambda}$ are given in the following form:
\begin{eqnarray}\label{Macbasis}
\Lambda_{0}&=& {\mbox{\boldmath$\partial$}}^{{\alpha}_1}_{{\mathbf{\xi}}}=1_{{\mathbf{\xi}}}\nonumber \\
\Lambda_{1}&=&\frac{1}{{\alpha}_2 !}{\mbox{\boldmath$\partial$}}_{{\mathbf{\xi}}}^{{\alpha}_2} +\sum_{|{\beta}|\leq o \atop {\beta}\not\in E}\nu_{{\alpha}_2, {\beta}} \;{\mbox{\boldmath$\partial$}}_{{\mathbf{\xi}}}^{{\beta}}\nonumber\\
&\vdots&\\
\Lambda_{{\delta}-1}&=&\frac{1}{{\alpha}_{{\delta}} !}{\mbox{\boldmath$\partial$}}_{{\mathbf{\xi}}}^{{\alpha}_{{\delta}}} +\sum_{|{\beta}|\leq o\atop {\beta}\not\in E}\nu_{{\alpha}_{\delta}, {\beta}}\; {\mbox{\boldmath$\partial$}}_{{\mathbf{\xi}}}^{{\beta}},\nonumber
\end{eqnarray}
\item We have
$0=\mathrm{ord}(\Lambda_{0}) \leq \cdots \leq \mathrm{ord}(\Lambda_{{\delta}-1})$, and for all $0\leq t\leq o$ we have
$$
{\mathscr D}_t={\rm span}\left\{ \Lambda_{j}\;:\; \mathrm{ord}(\Lambda_{{j}})\leq t \right\},
$$ where ${\mathscr D}_{t}$ denotes the elements of ${\mathscr D}$ of order $\leq t$, as above.
\end{itemize}
\end{lemma}
\begin{proof}
Let $\succ$ be the graded reverse lexicographic ordering in ${\mathbb C}[{\mbox{\boldmath$\partial$}}]$ such that $\partial_{1}\prec \partial_{2}\prec \cdots \prec \partial_{n}$.
We consider the initial $\mathrm{In}({\mathscr D})=\{\mathrm{In}(\Lambda)\mid \Lambda \in {\mathscr D}\}$ of ${\mathscr D}$ for the monomial ordering $\succ$.
It is a finite set of increasing monomials
$D:=\left\{{\mbox{\boldmath$\partial$}}^{{\alpha}_0}, {\mbox{\boldmath$\partial$}}^{{\alpha}_1},\ldots, {\mbox{\boldmath$\partial$}}^{{\alpha}_{{\delta}-1}}\right\},$
which are the leading monomials of the elements of
$\mathbf{\Lambda}=\{\Lambda_{0}, \Lambda_{1},\ldots$, $\Lambda_{{{\delta}-1}}\} \subset {\mathscr D}.$
As $1\in {\mathscr D}$ and $1$ is the lowest monomial for $\succ$, we have $\Lambda_{0}=1$.
As $\succ$ refines the total degree in ${\mathbb C}[{\mbox{\boldmath$\partial$}}]$, we have $\mathrm{ord}({\Lambda}_{i})=|{\alpha}_{i}|$ and
$0=\mathrm{ord}({\Lambda}_{0}) \leq \cdots \leq \mathrm{ord}({\Lambda}_{{\delta}-1})$.
Moreover, every element in ${\mathscr D}_{t}$ reduces to $0$ by the elements in $\mathbf{\Lambda}$.
As only the elements $\Lambda_{{i}}$ of order $\le t$ are involved in this reduction, we deduce that
${\mathscr D}_{t}$ is spanned by the elements $\Lambda_{{i}}$ with $\mathrm{ord}({\Lambda}_{i})\leq t$.
Let $E=\{ {\alpha}_{0},\ldots, {\alpha}_{{\delta}-1}\}$.
The elements $\Lambda_{{i}}$ are of the form
$$
\Lambda_{{i}}=\frac{1}{{\alpha}_{i} !}{\mbox{\boldmath$\partial$}}_{{\mathbf{\xi}}}^{{\alpha}_{i}} +\sum_{{\beta}\prec {\alpha}_{i}}\nu_{{\alpha}_i, {\beta}}\; {\mbox{\boldmath$\partial$}}_{{\mathbf{\xi}}}^{{\beta}}.
$$
By auto-reduction of the elements $\Lambda_{{i}}$, we can even suppose that ${\beta}\not\in E$ in the summation above, so that they are of the form \eqref{Macbasis}.
Let ${B}=\left\{( {\bf x}-\xi)^{{{\alpha}}_0}, \ldots, ( {\bf x}-\xi)^{{{\alpha}}_{{\delta}-1}}\right\}\subset {\mathbb C}[{\bf x}]
$. As $(\Lambda_{i}(({\bf x}-\xi)^{{{\alpha}}_j}))_{0\le i,j\leq {\delta}-1}$ is the identity matrix, we deduce that
$B$ is a basis of ${\mathbb C}[{\bf x} ]/ {Q}$, which is dual to $\mathbf{\Lambda}$.
As ${\mathscr D}$ is stable by derivation, the leading term of $\frac{d }{d\partial_{i}}(\Lambda_{j})$ is in $D$.
If $\frac{d }{d\partial_{i}}({\mbox{\boldmath$\partial$}}_{{\mathbf{\xi}}}^{{\alpha}_{j}})$ is not zero, then it is the leading term of
$\frac{d }{d\partial_{i}}(\Lambda_{j})$, since the monomial ordering is compatible with the
multiplication by a variable. This shows that $D$ is stable by division by the variable $\partial_{i}$
and that $B$ is connected to $1$. This ends the proof of the lemma.
\end{proof}
Such a basis of ${\mathscr D}$ can be obtained from any other basis of ${\mathscr D}$ by transforming first the coefficient matrix of the given dual basis into row echelon form and then reducing the elements above the pivot coefficients.
The integration method described in \cite{mantzaflaris:inria-00556021} computes a primal-dual pair,
such that the coefficient matrix has a block row-echelon form, each block being associated to an order.
The computation of a basis as in Lemma \ref{pdlemma} can be then performed order by order.
\begin{example} Let
$$f_1=x_1-x_2+x_1^2, \quad f_2= x_1-x_2+x_2^2,
$$
which has a multiplicity $3$ root at ${{\mathbf{\xi}}}=(0,0)$. The integration method described in \cite{mantzaflaris:inria-00556021} computes a primal-dual pair
$$
\tilde{B}=\left\{1,x_1,x_2\right\}, \; \tilde{\mathbf{\Lambda}}=\left\{1, \partial_1+\partial_2, \partial_2+\frac{1}{2}\partial_1^2+ \partial_1\partial_2+\frac{1}{2}\partial_2^2\right\}.
$$
This primal-dual pair is not orthogonal, since $( \partial_1+\partial_2)(x_2)\neq 0$. However, using, say, the degree lexicographic ordering with $x_1>x_2$, we easily deduce the primal-dual pair of Lemma \ref{pdlemma}:
$$
{B}=\left\{1,x_1,x_1^2\right\}, \quad \mathbf{\Lambda}=\tilde{\mathbf{\Lambda}}.
$$
\end{example}
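The orthogonality conditions of Lemma \ref{pdlemma} can be verified numerically. The self-contained Python sketch below (an illustration; it assumes the instance $f_1=x_1-x_2+x_1^2$, $f_2=x_1-x_2+x_2^2$ and the pair $B=\{1,x_1,x_1^2\}$, $\mathbf{\Lambda}=\{1,\,\partial_1+\partial_2,\,\partial_2+\tfrac12\partial_1^2+\partial_1\partial_2+\tfrac12\partial_2^2\}$) checks that $(\Lambda_i(b_j))_{i,j}$ is the identity matrix and that each $\Lambda_i$ vanishes on $f_1$ and $f_2$:

```python
from math import prod

# Polynomials and dual functionals are stored as {(a, b): coeff}; a dual
# functional {(a, b): c, ...} acts as p -> sum of c * (d_1^a d_2^b p)(xi).
def diff(p, i):
    q = {}
    for e, c in p.items():
        if e[i] > 0:
            d = list(e); d[i] -= 1
            q[tuple(d)] = q.get(tuple(d), 0) + c * e[i]
    return q

def evalp(p, pt):
    return sum(c * prod(x ** a for x, a in zip(pt, e)) for e, c in p.items())

def apply_dual(lam, p, xi=(0, 0)):
    total = 0
    for (a, b), c in lam.items():
        q = p
        for _ in range(a):
            q = diff(q, 0)
        for _ in range(b):
            q = diff(q, 1)
        total += c * evalp(q, xi)
    return total

f1 = {(1, 0): 1, (0, 1): -1, (2, 0): 1}                   # x1 - x2 + x1^2
f2 = {(1, 0): 1, (0, 1): -1, (0, 2): 1}                   # x1 - x2 + x2^2
B = [{(0, 0): 1}, {(1, 0): 1}, {(2, 0): 1}]               # 1, x1, x1^2
Lam = [{(0, 0): 1},                                       # 1
       {(1, 0): 1, (0, 1): 1},                            # d1 + d2
       {(0, 1): 1, (2, 0): 0.5, (1, 1): 1, (0, 2): 0.5}]  # d2 + d1^2/2 + d1 d2 + d2^2/2

gram = [[apply_dual(l, b) for b in B] for l in Lam]
assert gram == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]          # orthogonal pair
assert all(apply_dual(l, f) == 0 for l in Lam for f in (f1, f2))
```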
Throughout this section we assume that we are given a fixed primal basis $B$ for ${\mathbb C}[{\bf x} ]/ {Q}$
such that a dual basis $\mathbf{\Lambda}$ of ${\mathscr D}$ satisfying the properties of Lemma \ref{pdlemma} exists. Note that such a primal basis $B$ can be computed numerically from an approximation of $\xi$ and using a modification of the integration method of \cite{mantzaflaris:inria-00556021}.
Given the primal basis $B$, a dual basis can be computed by
Macaulay's dialytic method which can be used to deflate the root ${{\mathbf{\xi}}}$ as
in \cite{lvz08}. This method would introduce
\mbox{$n+({\delta}-1)
\left({{n+o}\choose{n}}-{\delta}\right)$} new variables, which is not polynomial in $n$ and $o$. Below, we give a construction of a
polynomial system that only depends on at most
$n+ n{\delta}({\delta}-1)/2$ variables. These variables
correspond to the entries of the {\em multiplication matrices} that we~define~next.
Let
\begin{eqnarray*}
M_{i} : {\mathbb C}[{\bf x}]/Q&\rightarrow & {\mathbb C}[{\bf x}]/Q\\
p & \mapsto & (x_{i}-\xi_{i})\, p
\end{eqnarray*}
be the multiplication operator by $x_{i}-\xi_{i}$ in
${\mathbb C}[{\bf x}]/Q$. Its transpose operator is
\begin{eqnarray*}
M_{i}^{t} : {\mathscr D}&\rightarrow & {\mathscr D}\\
\Lambda & \mapsto & \Lambda \circ M_{i}= (x_{i}-\xi_{i})\cdot \Lambda = \frac{d}{d\partial_{i,\xi}} (\Lambda)=d_{\partial_{i, \xi}}(\Lambda)
\end{eqnarray*}
where ${\mathscr D}= Q^{\perp}\subset {\mathbb C}[{\mbox{\boldmath$\partial$}}]$. The matrix of
$M_{i}$ in the basis $B$ of ${\mathbb C}[{\bf x}]/Q$ is denoted ${\tt M}_{i}$.
As $B$ is a basis of ${\mathbb C}[{\bf x}]/Q$, we can identify the elements of
${\mathbb C}[{\bf x}]/Q$ with the elements of the vector space ${\rm{span}}_{\mathbb C}( B)$.
We define the normal form $N(p)$ of a polynomial $p$ in ${\mathbb C}[{\bf x}]$ as the
unique element $b$ of ${\rm span}_{\mathbb C}(B)$ such
that $p-b\in Q$. Hereafter, we are going to identify the elements of
${\mathbb C}[{\bf x}]/Q$ with their normal form in ${\rm{span}}_{\mathbb C} (B )$.
For any polynomial $p (x_{1}, \ldots, x_{n}) \in {\mathbb C}[{\bf x}]$, let $p
({\bf M})$ be the operator of ${\mathbb C}[{\bf x}]/Q$ obtained by replacing $x_{i}-\xi_{i}$ by $M_{i}$.
\begin{lemma}
For any $p\in {\mathbb C}[{\bf x}]$, the normal form of $p$ is $N (p)= p ({\bf M}) (1)$ and we
have
$$
p ({\bf M}) (1) = \Lambda_{0}(p)\, 1 + \Lambda_{1}(p) \, ( {\bf x}-{{\mathbf{\xi}}})^{{\alpha}_1}+\cdots + \Lambda_{{{\delta}-1}}(p)\, ( {\bf x}-{{\mathbf{\xi}}})^{{\alpha}_{{\delta}-1}}.
$$
\end{lemma}
This shows that the coefficient vector $[p]$ of $N (p)$ in the basis $B$
is $[p]= (\Lambda_{{i}}(p))_{0\le i \le {\delta}-1}$.
The following lemma is also well known, but we include it here with a proof:
\begin{lemma}\label{multlemma} Let $B$ be as in (\ref{pbasis}) and denote the exponents in $B$ by
$
E:=\{\alpha_1, \ldots, \alpha_{\delta}\}
$ as above.
Let
$$E^+:=\bigcup_{i=1}^n (E+ {\bf e}_i )$$
with $E+{\bf e}_i=\{(\gamma_1, \ldots , \gamma_i+1, \ldots, \gamma_n):\gamma\in E\}$ and
we denote $\partial(E)= E^{+}\setminus E$.
The values of the coefficients $\nu_{\alpha, \beta}$
for $(\alpha,\beta)\in E\times \partial(E)$ appearing in the dual basis (\ref{Macbasis})
uniquely determine the system of pairwise commuting multiplication matrices ${\tt M}_{i}$, namely,
for $i=1, \ldots, n$
\begin{eqnarray}\label{multmat}
{\tt M}_{i}^{t}=
\begin{array}{|ccccc|}
\cline{1-5}
0&\nu_{{\alpha}_1, {\bf e}_i}&\nu_{{\alpha}_2, {\bf e}_i}&\cdots &\nu_{{\alpha}_{{\delta}-1}, {\bf e}_i} \\
0&0&\nu_{{\alpha}_2,{\alpha}_1+{\bf e}_i}&\cdots &\nu_{{\alpha}_{{\delta}-1},{\alpha}_1+{\bf e}_i}\\
\vdots &\vdots &&&\vdots\\
0&0&0&\cdots &\nu_{{\alpha}_{{\delta}-1},{\alpha}_{{\delta}-2}+{\bf e}_i}\\
0&0&0&\cdots &0\\
\cline{1-5}
\end{array}
\end{eqnarray}
Moreover,
$$
\nu_{\alpha_i, \alpha_k+{\bf e}_j}=\begin{cases} 1 & \text{ if } \alpha_i= \alpha_k+{\bf e}_j\\
0 & \text{ if } \alpha_k+{\bf e}_j \in E, \; \alpha_i \neq \alpha_k+{\bf e}_j .
\end{cases}
$$
\end{lemma}
\begin{proof}
As $M_{i}^{t}$ acts as a derivation on ${\mathscr D}$ and ${\mathscr D}$ is closed under derivation, the third property in Lemma \ref{pdlemma} implies that the matrix
of $M_{i}^{t}$ in the basis of $\mathbf{\Lambda}$ of ${\mathscr D}$ has an upper triangular form with
zero (blocks) on the diagonal.
For an element $\Lambda_{{j}}$ of order $k$, its image by
$M_{i}^{t}$ is
{\small \begin{eqnarray*}
&&M_{i}^{t} (\Lambda_{{j}}) =
(x_{i} - \xi_{i}) \cdot \Lambda_{{j}}\\
&&=\sum_{|{\alpha}_{l}|<k} \Lambda_{{j}} ((x_{i}-\xi_{i})
({\bf x}-{{\mathbf{\xi}}})^{{\alpha}_{l}}) \Lambda_{{l}}\\
&&= \sum_{|{\alpha}_{l}|<k} \Lambda_{{j}}
(({\bf x}-{{\mathbf{\xi}}})^{{\alpha}_{l}+{\bf e}_{i}}) \, \Lambda_{{l}}
= \sum_{|{\alpha}_{l}|<k}
\nu_{{\alpha}_{j}, {\alpha}_l+ {\bf e}_i} \Lambda_{{l}}.
\end{eqnarray*} }
This shows that the entries of ${\tt M}_{i}$ are the coefficients of the
dual basis elements corresponding to exponents in $E\times \partial(E)$. The second claim is clear from the definition of ${\tt M}_{i}$.
\end{proof}
The previous lemma shows that the dual basis uniquely defines the system of multiplication matrices for $ i=1, \ldots, n$
{\small
\begin{eqnarray*}
{\tt M}_{i}^{t}&=&\begin{array}{|ccc|}
\cline{1-3}
\Lambda_{{0}}(x_i-\xi_i)&\cdots &\Lambda_{{{\delta}-1}}(x_i-\xi_i) \\
\Lambda_{{0}}\left(( {\bf x}-{{\mathbf{\xi}}})^{{\alpha}_1+{\bf e}_i}\right)&\cdots &\Lambda_{{{\delta}-1}}\left(( {\bf x}-{{\mathbf{\xi}}})^{{\alpha}_1+{\bf e}_i}\right) \\
\vdots &&\vdots\\
\Lambda_{{0}}\left(( {\bf x}-{{\mathbf{\xi}}})^{{\alpha}_{{\delta}-1}+{\bf e}_i}\right)&\cdots &\Lambda_{{{\delta}-1}}\left(( {\bf x}-{{\mathbf{\xi}}})^{{\alpha}_{{\delta}-1}+{\bf e}_i}\right) \\
\cline{1-3}
\end{array}\nonumber\\
&=&
\begin{array}{|ccccc|}
\cline{1-5}
0&\nu_{{\alpha}_1, {\bf e}_i}&\nu_{{\alpha}_2, {\bf e}_i}&\cdots &\nu_{{\alpha}_{{\delta}-1}, {\bf e}_i} \\
0&0&\nu_{{\alpha}_2,{\alpha}_1+{\bf e}_i}&\cdots &\nu_{{\alpha}_{{\delta}-1},{\alpha}_1+{\bf e}_i}\\
\vdots &\vdots &&&\vdots\\
0&0&0&\cdots &\nu_{{\alpha}_{{\delta}-1},{\alpha}_{{\delta}-2}+{\bf e}_i}\\
0&0&0&\cdots &0\\
\cline{1-5}
\end{array}
\end{eqnarray*}}
Note that these matrices are nilpotent, as they are triangular with
a zero diagonal, so that all their eigenvalues are $0$.
As $o$ is the maximal order of the elements of ${\mathscr D}$, we have
${\tt M}^{{\gamma}}=0$ if $|{\gamma}|> o$.\\
Conversely, the system of multiplication matrices ${\tt M}_1, \ldots, {\tt M}_n$ uniquely defines the dual basis as follows. Consider $\nu_{{\alpha}_i,{\gamma}}$ for some $({\alpha}_i, {\gamma})$ such that $|{\gamma}|\leq o$ but ${\gamma} \not\in E^+$. We can uniquely determine $\nu_{{\alpha}_i,{\gamma}}$ from the values of $\{\nu_{{\alpha}_j, {\beta}}\; : \;({\alpha}_j, {\beta})\in E\times \partial(E)\}$ from the following identities:
\begin{equation}\label{restnu}
\nu_{{\alpha}_i,{\gamma}}= \Lambda_{i}(({\bf x} -{{\mathbf{\xi}}})^{{\gamma}})
=[{\tt M}_{({\bf x} -\xi)^{{\gamma}}}]_{1, i}= [{\tt M}^{{\gamma}}]_{1,i}.
\end{equation}
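These identities can be checked on a small instance. In the Python sketch below, the multiplication matrices are entered by hand (an assumption of this sketch: they encode the normal forms $x_2\equiv x_1+x_1^2$ and $x_1^3\equiv 0$ modulo $Q$ for the example system $f_1=x_1-x_2+x_1^2$, $f_2=x_1-x_2+x_2^2$ at the origin, with primal basis $\{1,x_1,x_1^2\}$). The code verifies that the matrices commute, that ${\tt M}^{\gamma}=0$ for $|\gamma|>o=2$, and that the first column of ${\tt M}^{\gamma}$ recovers the dual values $\Lambda_i(({\bf x}-{{\mathbf{\xi}}})^{\gamma})$:

```python
# Multiplication matrices for the local ring of the example system at (0,0),
# in the primal basis {1, x1, x1^2}.  Columns are the hand-computed normal
# forms (assumption): x1 * x1^2 = x1^3 = 0 and x2 = x1 + x1^2 modulo Q.
M1 = [[0, 0, 0], [1, 0, 0], [0, 1, 0]]
M2 = [[0, 0, 0], [1, 0, 0], [1, 1, 0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mpow(g):
    """M^gamma = M1^g1 * M2^g2 (the order is irrelevant here: they commute)."""
    P = [[int(i == j) for j in range(3)] for i in range(3)]
    for _ in range(g[0]):
        P = matmul(P, M1)
    for _ in range(g[1]):
        P = matmul(P, M2)
    return P

def first_col(g):
    """[M^gamma][1]: coefficient vector of the normal form of (x - xi)^gamma."""
    return [row[0] for row in mpow(g)]

assert matmul(M1, M2) == matmul(M2, M1)        # pairwise commuting
# Nilpotency: M^gamma = 0 as soon as |gamma| > o = 2.
assert all(mpow(g) == [[0, 0, 0]] * 3 for g in [(3, 0), (2, 1), (1, 2), (0, 3)])
# First columns recover the dual values Lambda_i((x - xi)^gamma):
assert first_col((0, 1)) == [0, 1, 1]          # Lambda_i(x2)
assert first_col((1, 1)) == [0, 0, 1]          # Lambda_i(x1 * x2)
assert first_col((2, 0)) == [0, 0, 1]          # Lambda_i(x1^2)
```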
The next definition defines the {\em parametric multiplication matrices} that we use in our construction.
\begin{definition}[Parametric multiplication matrices] Let $B$ be as in (\ref{pbasis}), and $E$, $\partial(E)$ as in Lemma \ref{multlemma}. We define an array ${{\mu}}$ of length $n{\delta}({\delta}-1)/2$ consisting of $0$'s, $1$'s and the variables
$\mu_{\alpha_i, \beta}$
as follows: for all $\alpha_i, \alpha_k\in E$ and $j\in \{1, \ldots, n\}$ the corresponding entry is
\begin{eqnarray}\label{defmuE}
{{\mu}}_{\alpha_i, \alpha_k+{\bf e}_j}=\begin{cases} 1 & \text{ if } \alpha_i= \alpha_k+{\bf e}_j\\
0 & \text{ if } \alpha_k+{\bf e}_j \in E, \; \alpha_i \neq \alpha_k+{\bf e}_j \\
\mu_{\alpha_i, \alpha_k+{\bf e}_j} & \text{ if } \alpha_k+{\bf e}_j \not\in E.
\end{cases}
\end{eqnarray}
The {\em parametric multiplication matrices} are defined
for $i=1, \ldots, n$ by
\begin{equation}\label{Mmu}
{\tt M}_{i}^{t}({{\mu}}):= \begin{array}{|ccccc|}
\cline{1-5}
0&{\mu}_{{\alpha}_1, {\bf e}_i}&{\mu}_{{\alpha}_2, {\bf e}_i}&\cdots &{\mu}_{{\alpha}_{{\delta}-1}, {\bf e}_i} \\
0&0&{\mu}_{{\alpha}_2,{\alpha}_1+{\bf e}_i}&\cdots &{\mu}_{{\alpha}_{{\delta}-1},{\alpha}_1+{\bf e}_i}\\
\vdots &\vdots &&&\vdots\\
0&0&0&\cdots &{\mu}_{{\alpha}_{{\delta}-1},{\alpha}_{{\delta}-2}+{\bf e}_i}\\
0&0&0&\cdots &0\\
\cline{1-5}
\end{array} ,
\end{equation}
We denote by
$$
{\tt M}({\mu})^{\gamma}:={\tt M}_1({\mu})^{\gamma_1}\cdots{\tt M}_n({\mu})^{\gamma_n},
$$
and note that for general parameter values ${\mu}$, the matrices ${\tt M}_i(\mu)$ do not commute, so we fix their order by their indices in the above definition of ${\tt M}(\mu)^{\gamma}$.
\end{definition}
\begin{remark} \label{reduce} Note that we can reduce the number of free parameters in the parametric multiplication matrices by exploiting the commutation rules of the multiplication matrices corresponding to a given primal basis $B$. For example, consider the breadth one case, where we can assume that $E=\{{\bf 0}, {\bf e}_1, 2{\bf e}_1, \ldots, (\delta-1){\bf e}_1\}$. In this case the only free parameters appear in the first columns of ${\tt M}_2(\mu), \ldots, {\tt M}_n(\mu)$, the other columns are shifts of these. Thus, it is enough to introduce $ (n-1)(\delta-1)$ free parameters, similarly as in \cite{LiZhi2013}. In Section \ref{Sec:Examples} we present a modification of \cite[Example 3.1]{LiZhi2013} which has breadth two, but also uses at
most $ (n-1)(\delta-1)$ free parameters.
\end{remark}
\begin{definition}[Parametric normal form] \label{parnorm} Let ${\mathbb K}\subset {\mathbb C}$ be a field. We define
\begin{eqnarray*}
\Nc_{{\bf z},{\mu}} : {\mathbb K}[{\bf x}] & \rightarrow & {\mathbb K}[{\bf z},{\mu}]^{{\delta}}\\
p& \mapsto& \Nc_{{\bf z},{\mu}} (p) := \sum_{{\gamma}\in {\mathbb N}^n}
\frac{1}{{\gamma}!} {\mbox{\boldmath$\partial$}}_{{\bf z}}^{{\gamma}}(p) \, {\tt M}({{\mu}})^{{\gamma}}[1].
\end{eqnarray*}
where $[1]=[1,0,\ldots,0]$ is the coefficient vector of $1$ in the basis $B$. This sum is finite since for $|{\gamma}|\geq {\delta}$,
${\tt M}({{\mu}})^{{\gamma}}=0$, so the entries of $\Nc_{{\bf z},{\mu}} (p)$ are polynomials in ${{\mu}}$ and ${\bf z}$.
\end{definition}
Notice that when the matrices ${\tt M}_{i}(\mu)$
($i=1,\ldots,n$) commute, this notation does not depend on the chosen order of the factors.
The specialization at $({\bf x},{\mu})= ({{\mathbf{\xi}}},\bnu)$ is the vector
$$
\Nc_{{{\mathbf{\xi}}},\bnu} (p) =[\Lambda_{{0}} (p),
\ldots, \Lambda_{{{\delta}-1}} (p)]^{t}\in {\mathbb C}^{{\delta}}.
$$
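This specialization can be checked on the same small hand-computed instance (an assumption of this sketch: the matrices below encode the normal forms $x_2\equiv x_1+x_1^2$, $x_1^3\equiv 0$ for $f_1=x_1-x_2+x_1^2$, $f_2=x_1-x_2+x_2^2$ at the origin). The function `normal_form` implements the defining sum of $\Nc_{{\bf z},\mu}$ specialized at ${\bf z}={{\mathbf{\xi}}}$, ${\mu}=\bnu$, and it indeed returns the vector of dual values, vanishing on $f_1$ and $f_2$:

```python
from math import factorial, prod

# Hand-computed multiplication matrices for the running example (assumption).
M1 = [[0, 0, 0], [1, 0, 0], [0, 1, 0]]
M2 = [[0, 0, 0], [1, 0, 0], [1, 1, 0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mpow(g):
    P = [[int(i == j) for j in range(3)] for i in range(3)]
    for _ in range(g[0]):
        P = matmul(P, M1)
    for _ in range(g[1]):
        P = matmul(P, M2)
    return P

def diff(p, i):
    q = {}
    for e, c in p.items():
        if e[i] > 0:
            d = list(e); d[i] -= 1
            q[tuple(d)] = q.get(tuple(d), 0) + c * e[i]
    return q

def evalp(p, pt):
    return sum(c * prod(x ** a for x, a in zip(pt, e)) for e, c in p.items())

def normal_form(p):
    """N_{xi,nu}(p) = sum_gamma 1/gamma! d^gamma p(xi) M^gamma [1], xi = (0,0)."""
    v = [0.0, 0.0, 0.0]
    for g1 in range(3):
        for g2 in range(3 - g1):          # M^gamma = 0 for |gamma| >= 3
            q = p
            for _ in range(g1):
                q = diff(q, 0)
            for _ in range(g2):
                q = diff(q, 1)
            c = evalp(q, (0, 0)) / (factorial(g1) * factorial(g2))
            col = [row[0] for row in mpow((g1, g2))]
            v = [v[i] + c * col[i] for i in range(3)]
    return v

f1 = {(1, 0): 1, (0, 1): -1, (2, 0): 1}
f2 = {(1, 0): 1, (0, 1): -1, (0, 2): 1}
assert normal_form(f1) == [0, 0, 0] and normal_form(f2) == [0, 0, 0]
assert normal_form({(0, 1): 1}) == [0, 1, 1]   # [Lambda_0(x2), Lambda_1(x2), Lambda_2(x2)]
```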
\subsection{The multiplicity structure equations of a singular point}
\noindent We can now characterize the multiplicity structure by
polynomial equations.
\begin{theorem} \label{theorem1}
Let ${\mathbb K}\subset {\mathbb C}$ be any field, ${\bf f}\in{\mathbb K}[{\bf x}]^N$ and let ${{\mathbf{\xi}}}\in
{\mathbb C}^n$ be an isolated solution of ${\bf f}$. Let ${\tt M}_i({\mu})$ for $i=1, \ldots n$ be the parametric multiplication matrices as in~(\ref{Mmu}) and~$\Nc_{{{\mathbf{\xi}}},{\mu}}$ be the parametric normal form as in Defn.~\ref{parnorm} at ${\bf z}={{\mathbf{\xi}}}$.
Then the ideal $J_{{{\mathbf{\xi}}}}$ of ${\mathbb C}[{\mu}]$ generated by the polynomial system
{\small\begin{eqnarray}\label{matrixeq}
\begin{cases}
\Nc_{{{\mathbf{\xi}}},{\mu}} (f_{k})\;\; \text{ for } k=1, \ldots, N, \\
{\tt M}_{i}({{\mu}})\cdot {\tt M}_{j}({{\mu}})-{\tt M}_{j}({{\mu}})\cdot {\tt M}_{i}({{\mu}})\;\;\text{ for } i, j=1, \ldots, n
\end{cases}
\end{eqnarray}}
is the maximal ideal
$$
{\mathfrak m}_{\nu}= ({\mu}_{{\alpha}, {\beta}}- \nu_{{\alpha}, {\beta}}, ({\alpha},{\beta})\in E\times \partial(E))
$$
where $\nu_{{\alpha}, {\beta}}$ are the coefficients of the dual basis defined in~(\ref{Macbasis}).
\end{theorem}
\begin{proof}
As before, the system \eqref{matrixeq} has a solution
${\mu}_{{\alpha},{\beta}}=\nu_{{\alpha},{\beta}}$ for $({\alpha},{\beta})\in
E\times \partial(E)$. Thus $J_{{{\mathbf{\xi}}}}\subset {\mathfrak m}_{\nu}$.
Conversely, let $C={\mathbb C}[\mu]/J_{{{\mathbf{\xi}}}}$ and consider the map
$$
\Phi: C[{\bf x}] \rightarrow C^{{\delta}}, \;\;
p \mapsto \Nc_{{{\mathbf{\xi}}} ,{\mu}}(p).
$$
Let $K$ be its kernel.
Since the matrices ${\tt M}_{i} ({\mu})$ are commuting
modulo $J_{{{\mathbf{\xi}}}}$, we can see that $K$ is an ideal.
As $f_{k}\in K$, we have ${\mathcal I}:=(f_{k}) \subset K$.
Next we show that $Q\subset K$.
By construction, for any $\alpha\in {\mathbb N}^{n}$ we have modulo $J_\xi$
\begin{equation*}\label{eq:rel1}
\Nc_{{{\mathbf{\xi}}} ,{\mu}}(({\bf x}-{{\mathbf{\xi}}})^{\alpha})= \sum_{{\gamma}\in {\mathbb N}^n}
\frac{1}{{\gamma}!} {\mbox{\boldmath$\partial$}}_{{{\mathbf{\xi}}}}^{{\gamma}}(({\bf x}-{{\mathbf{\xi}}})^{\alpha}) \, {\tt M}({{\mu}})^{{\gamma}}[1] = {\tt M}({{\mu}})^{\alpha}[1].
\end{equation*}
Using the previous relation, we check that $\forall p,q \in C[{\bf x}]$,
\begin{equation}\label{eq:prod}
\Phi(p q) = p({{\mathbf{\xi}}}+{\tt M} ({\mu})) \Phi(q)
\end{equation}
where $ p({{\mathbf{\xi}}}+{\tt M} ({\mu}))$ is obtained by replacing $x_{i}-\xi_{i}$ by ${\tt M}_{i} ({\mu})$.
Let $q\in Q$. As $Q$ is the ${\mathfrak m}_{{{\mathbf{\xi}}}}$-primary component of ${\mathcal I}$,
there exists $p\in {\mathbb C}[{\bf x}]$ such that $p ({{\mathbf{\xi}}})\neq 0$ and $p\, q\in
{\mathcal I}$. By~\eqref{eq:prod},~we~have
$$
\Phi (p\, q)= p({{\mathbf{\xi}}}+{\tt M} ({\mu})) \Phi (q) = 0.
$$
Since $p ({{\mathbf{\xi}}})\neq 0$ and $ p({{\mathbf{\xi}}}+{\tt M} ({\mu}))= p ({{\mathbf{\xi}}}) Id + N$ with $N$
lower triangular and nilpotent, $p({{\mathbf{\xi}}}+{\tt M} ({\mu}))$ is invertible.
We deduce that $\Phi(q)= p ({{\mathbf{\xi}}}+{\tt M} ({\mu}))^{-1}\Phi (pq) =0$ and $q \in K$.
Let us show now that $\Phi$ is surjective and more precisely, that
$\Phi (({\bf x}-{{\mathbf{\xi}}})^{\alpha_{k}})={\bf e}_{k}$ (abusing the notation as here
${\bf e}_k$ has length ${\delta}$ not $n$). Since $B$ is connected to $1$,
either $\alpha_k=0$ or there exists $\alpha_j\in E$ such that
$\alpha_k=\alpha_j+{\bf e}_i$ for some $i\in \{1, \ldots, n\}$. Thus the
$j^{\rm th}$ column of ${\tt M}_i({{\mu}})$ is ${\bf e}_k$ by (\ref{defmuE}). As
$\{{\tt M}_{i}({{\mu}}): i=1, \ldots, n\}$ are pairwise commuting, we have
${\tt M}({{\mu}})^{\alpha_k}={\tt M}_i({\mu}) {\tt M}({{\mu}})^{\alpha_j}$, and if we
assume by induction on $|\alpha_{j}|$ that the first column of ${\tt M}({{\mu}})^{\alpha_j}$
is ${\bf e}_j$, we obtain \mbox{${\tt M}({{\mu}})^{\alpha_k}[1]={\bf e}_k$}.
Thus, for $k = 1,\dots,{\delta}$,
$\Phi (({\bf x}-{{\mathbf{\xi}}})^{\alpha_{k}})={\bf e}_{k}$.
We can now prove that ${\mathfrak m}_{\nu}\subset J_{{{\mathbf{\xi}}}}$. As $M_{i}(\nu)$ is the multiplication by $(x_{i}-\xi_{i})$
in ${\mathbb C}[{\bf x}]/Q$, for any $b\in B$ and $i=1,\ldots,n$, we have
$(x_{i}-\xi_{i})\, b = M_{i}(\nu) (b) + q$ with $q\in Q\subset K$. We deduce that for $k=0,\ldots,{\delta}-1$,
$$\hbox{\scriptsize
$\Phi ((x_{i}-\xi_{i})\, ({\bf x} -{{\mathbf{\xi}}})^{\alpha_{k}}) = {\tt M}_{i}({\mu}) \Phi
(({\bf x} -{{\mathbf{\xi}}})^{\alpha_{k}}) = {\tt M}_{i}({\mu}) ({\bf e}_{k}) =
{\tt M}_{i}(\nu) ({\bf e}_{k})$.}$$
This shows that ${\mu}_{\alpha,\beta}-\nu_{\alpha,\beta}\in J_{{{\mathbf{\xi}}}}$
for $(\alpha,\beta) \in E\times\partial(E)$ and that ${\mathfrak m}_{\nu}=J_{{{\mathbf{\xi}}}}$.
\end{proof}
In the proof of the next theorem we need to consider cases when the multiplication matrices do not commute. We introduce the following definition:
\begin{definition}\label{commutator}
Let ${\mathbb K}\subset {\mathbb C}$ be any field. Let ${\mathcal C}$ be the ideal of ${\mathbb K}[{\bf z},\mu]$ generated by entries of the commutation
relations:
${\tt M}_{i}({{\mu}})\cdot {\tt M}_{j}({{\mu}})-{\tt M}_{j}({{\mu}})\cdot
{\tt M}_{i}({{\mu}})=0$, $i,j=1,\ldots,n$. We call ${\mathcal C}$ the {\em commutator ideal}.
\end{definition}
\begin{lemma} For any field ${\mathbb K}\subset {\mathbb C}$, $p\in {\mathbb K}[{\bf x}]$, and $i=1,\ldots,n$, we have
\begin{equation}
\Nc_{{\bf z},{\mu}} ( x_{i} p) = x_{i} \Nc_{{\bf z}, {\mu}} (p) + {\tt M}_{i} ({\mu})\, \Nc_{{\bf z}, {\mu}} (p) + O_{i, {\mu}}(p), \label{eq:nf}
\end{equation}
where $O_{i, \mu}: {\mathbb K}[{\bf x}]\rightarrow {\mathbb K}[{\bf z}, \mu]^{{\delta}}$ is linear, with entries in the commutator ideal ${\mathcal C}$.
\end{lemma}
\begin{proof}
$\Nc_{{\bf z}, {\mu}} (x_{i} p)
=
\sum_{{\gamma}}
\frac{1}{{\gamma}!} \, \partial_{{\bf z}}^{{\gamma}} (x_{i} p) \,
{\tt M}({\mu})^{{\gamma}}[1]
$
\begin{eqnarray*}
& = &
x_{i} \sum_{{\gamma}} \frac{1}{{\gamma}!} \partial_{{\bf z}}^{{\gamma}} (p) \, {\tt M}({{\mu}})^{{\gamma}}[1] +
\sum_{{\gamma}}
\frac{1}{{\gamma}!} \gamma_{i}\, \partial_{{\bf z}}^{{\gamma}-e_{i}} (p) \,
{\tt M}({\mu})^{{\gamma}}[1]\\
& =&
x_{i} \sum_{{\gamma}} \frac{1}{{\gamma}!} \partial_{{\bf z}}^{{\gamma}} (p)
\, {\tt M}({{\mu}})^{{\gamma}}[1]
+ \sum_{{\gamma}} \frac{1}{{\gamma}!} \partial_{{\bf z}}^{{\gamma}} (p) \,
{\tt M}({{\mu}})^{{\gamma}+e_{i}}[1]\\
& = &
x_{i} \, \Nc_{{\bf z}, {\mu}} (p)
+ {\tt M}_{i}({\mu}) \left ( \sum_{{\gamma}} \frac{1}{{\gamma}!} \partial_{{\bf z}}^{{\gamma}} (p) \,
{\tt M}({{\mu}})^{{\gamma}}[1] \right) \\
&&~~~~~~+ \sum_{{\gamma}} \frac{1}{{\gamma}!} \, \partial_{{\bf z}}^{{\gamma}} (p) \,
O_{i, {\gamma}}({{\mu}})[1] \\
\end{eqnarray*}
where $O_{i, {\gamma}}= {\tt M}_{i} ({\mu}) {\tt M} ({\mu})^{{\gamma}}- {\tt M} ({\mu})
^{{\gamma}+e_{i}}$ is a ${\delta} \times {\delta}$ matrix with coefficients in ${\mathcal C}$.
Therefore, $O_{i, \mu}:p\mapsto\sum_{{\gamma}} \frac{1}{{\gamma}!} \partial_{{\bf z}}^{{\gamma}} (p) \,
O_{i,{\gamma}}({{\mu}})[1]$ is a linear functional of $p$ with coefficients
in ${\mathcal C}$.
\end{proof}
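The role of the defect $O_{i,\gamma}$ can be seen on a tiny instance with hand-picked non-commuting parameter values (hypothetical $\mu$ values, not coming from an actual dual basis). With the order ${\tt M}^{\gamma}:={\tt M}_1^{\gamma_1}{\tt M}_2^{\gamma_2}$, the defect for $i=1$, $\gamma=(0,1)$ vanishes identically, while for $i=2$, $\gamma=(1,0)$ it equals minus the commutator:

```python
# Strictly lower triangular 3x3 matrices; M2 is chosen (hypothetically)
# so that M1 and M2 do not commute.
M1 = [[0, 0, 0], [1, 0, 0], [0, 1, 0]]
M2 = [[0, 0, 0], [2, 0, 0], [0, 5, 0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def msub(A, B):
    return [[A[i][j] - B[i][j] for j in range(3)] for i in range(3)]

commutator = msub(matmul(M1, M2), matmul(M2, M1))
assert commutator != [[0] * 3 for _ in range(3)]   # genuinely non-commuting

# i = 1, gamma = (0,1):  O = M1*M2 - M^{(1,1)} = M1*M2 - M1*M2 = 0
O1 = msub(matmul(M1, M2), matmul(M1, M2))
assert O1 == [[0] * 3 for _ in range(3)]
# i = 2, gamma = (1,0):  O = M2*M1 - M^{(1,1)} = -(M1*M2 - M2*M1)
O2 = msub(matmul(M2, M1), matmul(M1, M2))
assert O2 == [[-x for x in row] for row in commutator]
```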
The next theorem proves that the system defined as in~(\ref{matrixeq}) for general ${\bf z}$ has $({{\mathbf{\xi}}}, {\nu})$ as a simple root.
\begin{theorem}
Let ${\bf f}\in {\mathbb K}[{\bf x}]^N$ and ${{\mathbf{\xi}}}\in {\mathbb C}^n$ be as above. Let ${\tt M}_i({\mu})$ for $i=1, \ldots n$ be the parametric multiplication matrices defined in (\ref{Mmu}) and $\Nc_{{\bf x},\mu}$ be the parametric normal form as in Defn.~\ref{parnorm}.
Then $({\bf z}, {\mu})=({{\mathbf{\xi}}}, {\nu})$ is an isolated root with multiplicity one of the polynomial system in ${\mathbb K}[{\bf z}, {\mu}]$:
{\small \begin{eqnarray}\label{overdet}
\begin{cases}
\Nc_{{\bf z},{\mu}} (f_{k}) = 0 \;\text{ for } k=1, \ldots, N,\\
{\tt M}_{i}({{\mu}})\cdot {\tt M}_{j}({{\mu}})-{\tt M}_{j}({{\mu}})\cdot {\tt M}_{i}({{\mu}})=0\;
\text{ for } i, j=1, \ldots, n. \end{cases}
\end{eqnarray}}
\end{theorem}
\begin{proof}
For simplicity, let us denote the (non-zero) polynomials appearing in (\ref{overdet}) by
$$P_1, \ldots, P_M\in {\mathbb K}[{\bf z}, {{\mu}}],$$
where $M\leq N{\delta}+n(n-1)({\delta}-1)({\delta}-2)/4$. To prove the theorem, it is sufficient to prove that the columns of the Jacobian matrix of the system $[P_1, \ldots, P_M]$ at $({\bf z}, {{\mu}})=({{\mathbf{\xi}}}, {\nu})$ are linearly independent. The columns of this Jacobian matrix correspond to the elements in ${\mathbb C}[{\bf z}, {{\mu}}]^*$
$$\partial_{1, \xi}, \ldots, \partial_{n, \xi}, \text{ and } \partial_{\mu_{{\alpha}, {\beta}}}\;\text{ for } \; ({\alpha}, {\beta}) \in E\times \partial(E),
$$
where $\partial_{i, \xi}$ is defined in (\ref{partial}), with ${\bf z}$ replacing ${\bf x}$, and $\partial_{{\mu}_{{\alpha}, {\beta}}}$ is defined by
$$
\partial_{{{\mu}_{{\alpha}, {\beta}}}}(q) = \frac{d q}{ d {\mu}_{{\alpha}, {\beta}}}\left|_{({\bf z}, {\mu})=({{\mathbf{\xi}}}, {\nu})} \right. \quad \text{ for } q\in {\mathbb C}[{\bf z}, {\mu}].
$$
Suppose there exist $a_1, \ldots, a_n,$ and $a_{{\alpha}, {\beta}}\in {\mathbb C}$ for $({\alpha},{\beta}) \in E\times \partial(E)$ not all zero
such that
$$
\Delta:= a_1\partial_{1, \xi}+ \cdots + a_n\partial_{n, \xi}+\sum_{{\alpha}, {\beta}} a_{{\alpha}, {\beta}} \partial_{\mu_{{\alpha}, {\beta}}}\in {\mathbb C}[{\bf z}, {{\mu}}]^*
$$
vanishes on all polynomials $P_1, \ldots, P_M$ in (\ref{overdet}). In
particular, for an element $P_{i} ({\mu})$ corresponding to the commutation
relations and any polynomial $Q \in {\mathbb C}[{\bf x}, \mu]$, using the product rule for the linear differential operator $\Delta$ we get
$$
\Delta (P_{i} Q)= \Delta (P_{i}) Q ({{\mathbf{\xi}}},\bnu) + P_{i} (\bnu) \Delta (Q) = 0
$$
since $ \Delta (P_{i}) =0$ and $P_{i} (\bnu)=0$. By the linearity of $\Delta$, for any
polynomial $C$ in the commutator ideal $ {\mathcal C}$, we have $\Delta (C)=0$.
Furthermore, since $\Delta(\Nc_{{\bf z}, {\mu}}(f_k))=0$ and $$\Nc_{{{\mathbf{\xi}}}, \bnu} (f_k) = [\Lambda_{0}(f_k), \ldots, \Lambda_{{{\delta}-1}}(f_k)]^t,$$ we get that
{\small \begin{equation}\label{dualeq}
(a_1\partial_{1, \xi}+ \cdots+a_n\partial_{n, \xi})\cdot \Lambda_{{{\delta}-1}}(f_k)+ \sum_{|{\gamma}|\leq |{\alpha}_{{\delta}-1}|}p_{{\gamma}}({\nu}) \; {\mbox{\boldmath$\partial$}}^{{\gamma}}_{\xi}(f_k)=0
\end{equation}}
where $p_{{\gamma}}\in {\mathbb C}[{\mu}]$ are some polynomials in the
variables $\mu$ that do not depend on $f_k$.
If $a_{1}, \ldots, a_{n}$ are not all zero, then \eqref{dualeq} provides an element $\tilde{\Lambda}$ of ${\mathbb C}[{\mbox{\boldmath$\partial$}}_{{\mathbf{\xi}}}]$ of order strictly greater than
${\rm ord} (\Lambda_{{{\delta}-1}})=o$ that vanishes on $f_1, \ldots, f_N$.
Let us prove that this higher order differential also vanishes on all multiples of $f_k$ for
$k=1, \ldots, N$.
Let $p\in {\mathbb C}[{\bf x}]$ be such that $\Nc_{{{\mathbf{\xi}}},\bnu} (p)=0$ and $\Delta
(\Nc_{{\bf z},{\mu}} (p))=0$. By~\eqref{eq:nf}, we have
\begin{eqnarray*}
\lefteqn{\Nc_{{{\mathbf{\xi}}},\bnu} ((x_{i}-\xi_{i}) p)}\\ &= &
(x_{i}-\xi_{i}) \Nc_{{{\mathbf{\xi}}},\bnu} (p) +
{\tt M}_{i} (\nu) \Nc_{{{\mathbf{\xi}}},\bnu} (p) + O_{i,\nu} (p) = 0
\end{eqnarray*}
and ${\Delta (\Nc_{{\bf z}, {\mu}} ((x_{i}-\xi_{i}) p))}$
\begin{eqnarray*}
&= &
\Delta ((x_{i}-\xi_{i}) \Nc_{{\bf z}, {\mu}} (p)) +
\Delta ({\tt M}_{i} (\mu) \Nc_{{\bf z}, {\mu}} (p)) + \Delta( O_{i,{\mu}} (p))\\
& = &
\Delta (x_{i}-\xi_{i}) \Nc_{{{\mathbf{\xi}}}, \bnu} (p) + (\xi_{i}-\xi_{i})\Delta( \Nc_{{\bf z}, {\mu}} (p)) \\
&& ~~~~ + \Delta ({\tt M}_{i} (\mu)) \Nc_{{{\mathbf{\xi}}}, \bnu} (p) +
{\tt M}_{i} (\nu) \Delta (\Nc_{{\bf z}, {\mu}} (p)) \\
&& ~~~~ + \Delta (O_{i,{\mu}} (p))\\
& = & 0.
\end{eqnarray*}
As $\Nc_{{{\mathbf{\xi}}},\bnu} (f_{k})=0$ and $\Delta (\Nc_{{\bf z},{\mu}} (f_{k}))=0$
for $k=1,\ldots, N$, we
deduce by induction on the degree of the multipliers and by linearity that for any
element $f$ in the ideal $I$ generated by $f_{1}, \ldots, f_{N}$, we
have
$$
\Nc_{{{\mathbf{\xi}}},\bnu} (f)=0 \hbox{~~~and~~~} \Delta (\Nc_{{\bf z},{\mu}} (f))=0,
$$
which yields $\tilde{\Lambda} \in I^{\bot}$. Thus we have
$\tilde{\Lambda} \in I^{\bot}\cap {\mathbb C}[{\mbox{\boldmath$\partial$}}_{{\mathbf{\xi}}}]= Q^{\bot}$ (by Lemma
\ref{lem:primcomp}).
As there is no element of degree strictly bigger than $o$ in
$Q^{\bot}$, this implies that
$$a_1=\cdots=a_n=0.$$
Then, by specialization at ${\bf x}={{\mathbf{\xi}}}$, $\Delta$ yields an element of the kernel
of the Jacobian matrix of the system \eqref{matrixeq}.
By Theorem \ref{theorem1}, this Jacobian has trivial kernel, since it defines
the simple point $\nu$. We deduce that $\Delta=0$ and $({{\mathbf{\xi}}},\bnu)$ is
an isolated and simple root of the system \eqref{overdet}.
\end{proof}
The following corollary applies the polynomial system defined in (\ref{overdet}) to refine the precision of an approximate multiple root together with the coefficients of its Macaulay dual basis. The advantage of using this system, as opposed to the Macaulay multiplicity matrix, is that the number of variables is much smaller, as noted above.
\begin{corollary} Let ${\bf f}\in {\mathbb K}[{\bf x}]^N$ and ${{\mathbf{\xi}}}\in {\mathbb C}^n$ be as above, and let $\Lambda_{0}({\nu}), \ldots, \Lambda_{{{\delta}-1}}({\nu})$ be the associated dual basis as in~(\ref{Macbasis}). Let $E\subset {\mathbb N}^n$ be as above. Assume that we are given approximations of the singular root and its inverse system as in (\ref{Macbasis})
$$
\tilde{{{\mathbf{\xi}}}} \cong {{\mathbf{\xi}}} \; \text{ and } \; \tilde{\nu}_{\alpha_i, {\beta}}\cong \nu_{\alpha_i, {\beta}} \;\;\forall {\alpha}_i \in E,\; \beta\not\in E, \;|{\beta}|\leq o.
$$
Consider the overdetermined system in ${\mathbb K}[{\bf z}, \mu]$ from (\ref{overdet}).
Then a random square subsystem of (\ref{overdet}) will have
a simple root at ${\bf z}={{\mathbf{\xi}}}$, $\mu=\nu$ with high probability. Thus, we can apply Newton's method to this square subsystem to refine $\tilde{{{\mathbf{\xi}}}}$ and $\tilde{\nu}_{\alpha_i, {\beta}}$ for $({\alpha}_i, {\beta})\in E\times \partial(E)$. For $\tilde{\nu}_{\alpha_i, {\gamma}}$ with ${\gamma}\not \in E^+$ we can use (\ref{restnu}) for the update.
\end{corollary}
\begin{example}\label{Ex:Illustrative2}
Reconsider the setup from Ex.~\ref{Ex:Illustrative} with primal
basis $\{1,x_2\}$ and $E = \{(0,0),(0,1)\}$. We obtain
$${\tt M}_1(\mu) = \left[\begin{array}{cc} 0 & 0 \\ \mu & 0 \end{array}\right]~~\hbox{and}~~
{\tt M}_2(\mu) = \left[\begin{array}{cc} 0 & 0 \\ 1 & 0 \end{array}\right].$$
The resulting deflated system in (\ref{overdet}) is
{\small
$$F(z_1,z_2,\mu) = \left[\begin{array}{c} z_1 + z_2^2 \\
\mu + 2 z_2 \\ z_1^2 + z_2^2 \\ 2 \mu z_1 + 2 z_2 \end{array}\right]$$
}
which has a nonsingular root at $(z_1,z_2,\mu) = (0,0,0)$ corresponding
to the origin with multiplicity structure $\{1,\partial_{2}\}$.
\end{example}
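As a quick numerical sanity check of this example (a SymPy sketch, independent of the authors' \textsc{Matlab} code; the system $F$ is copied from the display above), one can verify that the origin is a root of the deflated system and that the Jacobian there has full column rank:

```python
import sympy as sp

z1, z2, mu = sp.symbols("z1 z2 mu")
# Deflated system F(z1, z2, mu) from the example above
F = sp.Matrix([z1 + z2**2,
               mu + 2*z2,
               z1**2 + z2**2,
               2*mu*z1 + 2*z2])
J = F.jacobian([z1, z2, mu])
origin = {z1: 0, z2: 0, mu: 0}

residual = F.subs(origin)     # zero vector: (0, 0, 0) is a root
rank = J.subs(origin).rank()  # 3 = number of unknowns, so the root is simple
print(residual.T, rank)
```

A Jacobian rank equal to the number of unknowns at a common root is exactly the nonsingularity asserted in the example.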
\section{Examples}\label{Sec:Examples}
Computations for the following examples, as well as several other
systems, along with \textsc{Matlab} code can be found at
\url{www.nd.edu/~jhauenst/deflation/}.
\subsection{A family of examples}
\noindent We first consider a modification of \cite[Example 3.1]{LiZhi2013}. For any $n\geq 2$, the following system has $n$ polynomials, each of degree at most $3$, in $n$ variables:
\begin{eqnarray*}
x_1^3+x_1^2-x_2^2, \;x_2^3+x_2^2-x_3, \ldots, x_{n-1}^3+x_{n-1}^2-x_n, \;x_n^2.
\end{eqnarray*}
The origin is a root of multiplicity $\delta:=2^n$ having breadth $2$ (i.e., the
corank of the Jacobian at the origin is $2$).
We apply our parametric normal form method described in \S~\ref{Sec:PointMult}. As in Remark \ref{reduce}, we can reduce the number of free parameters to at most $(n-1)(\delta-1)$ using the structure of the primal basis $B=\{x_1^ax_2^b:a<2^{n-1}, \; b<2\}$.
The following table shows the multiplicity, number of
variables and polynomials in the deflated system, and the time (in
seconds) it took to compute this system (on an iMac with a 3.4 GHz Intel Core i7 processor and 8GB 1600MHz DDR3 memory).
Note that when comparing our method to an
approach using the
null spaces of Macaulay multiplicity matrices (see for example \cite{DayZen2005,lvz08}), we found that for $n\geq 4$ the deflated system derived from the Macaulay multiplicity matrix was too large to compute. This is because the nil-index at the origin is $2^{n-1}$, so the size of the Macaulay multiplicity matrix is $\;n\cdot{{2^{n-1}+n-1}\choose{n-1}}\times{{2^{n-1}+n}\choose{n}}$.
$$\begin{array}{|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{} &\multicolumn{3}{|c|}{\hbox{New approach}} & \multicolumn{3}{|c|}{\hbox{Null space}} \\
\hline
n & \hbox{mult} & \hbox{vars} & \hbox{poly} & \hbox{time} & \hbox{vars} & \hbox{poly} & \hbox{time}\\
\hline
2 & 4 & 5 & 9 & 1.476 &8&17&2.157\\
\hline
3 & 8 & 17 & 31 & 5.596&192&241&208 \\
\hline
4 & 16 & 49 & 100 & 19.698 &7189 &19804&>76000\\
\hline
5 & 32 & 129 & 296 & 73.168&N/A&N/A&N/A \\
\hline
6 & 64 & 321 & 819 & 659.59 &N/A&N/A&N/A\\
\hline
\end{array}$$}
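The breadth-$2$ claim above can be checked directly for small $n$ (a SymPy sketch, not part of the timed experiments): the Jacobian of the system at the origin always has corank $2$.

```python
import sympy as sp

def family(n):
    """The n-variable system x1^3+x1^2-x2^2, x2^3+x2^2-x3, ..., xn^2."""
    xs = sp.symbols(f"x1:{n+1}")
    polys = [xs[0]**3 + xs[0]**2 - xs[1]**2]
    polys += [xs[i]**3 + xs[i]**2 - xs[i+1] for i in range(1, n - 1)]
    polys.append(xs[n-1]**2)
    return sp.Matrix(polys), list(xs)

def breadth(n):
    f, xs = family(n)
    J0 = f.jacobian(xs).subs({v: 0 for v in xs})
    return len(xs) - J0.rank()   # corank of the Jacobian at the origin

for n in range(2, 6):
    print(n, breadth(n))  # corank 2 for every n
```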
\subsection{Caprasse system}\label{Sec:Caprasse}
\noindent We consider the Caprasse system \cite{Caprasse88,Posso98}:
$$
\begin{array}{l}
f(x_1,x_2,x_3,x_4) =
\left[
\begin{array}{l}
{x_{{1}}}^{3}x_{{3}}-4\,x_{{1}}{x_{{2}}}^{2}x_{{3}}-4\,{x_{{1}}}^{2}x_{{2}}x_{{4}}-2\,{x_{{2}}}^{3}x_{{4}}-4\,{x_{{1}}}^{2}+\\ ~~~~~~~~~10\,{x_{{2}}}^{2}- 4\,x_{{1}}x_{{3}}+10\,x_{{2}}x_{{4}}-2,\\
x_{{1}}{x_{{3}}}^{3}-4\,x_{{2}}{x_{{3}}}^{2}x_{{4}}-4\,x_{{1}}x_{{3}}{x_{{4}}}^{2}-2\,x_{{2}}{x_{{4}}}^{3}-4\,x_{{1}}x_{{3}}+\\ ~~~~~~~~~10\,x_{{2}}x_{{4}}- 4\,{x_{{3}}}^{2}+10\,{x_{{4}}}^{2}-2,\\
{x_{{2}}}^{2}x_{{3}}+2\,x_{{1}}x_{{2}}x_{{4}}-2\,x_{{1}}-x_{{3}},\\
{x_{{4}}}^{2}x_{{1}}+2\,x_{{2}}x_{{3}}x_{{4}}-2\,x_{{3}}-x_{{1}}
\end{array}\right]
\end{array}
$$
}at the multiplicity $4$ root ${{\mathbf{\xi}}}=(2, -\sqrt{-3}, 2, \sqrt{-3})$.
We first consider simply deflating the root.
Using the approaches of \cite{DayZen2005,HauWam13,lvz06}, one iteration suffices.
For example, using an extrinsic and intrinsic version of \cite{DayZen2005,lvz06},
the resulting system consists of 10 and 8 polynomials, respectively,
and 8 and 6 variables, respectively.
Following \cite{HauWam13}, using all minors results in a system
of 20 polynomials in 4 variables which can be reduced to
a system of 8 polynomials in 4 variables using the $3\times3$ minors
containing a full rank~$2\times2$~submatrix.
The approach of \S~\ref{Sec:Deflation} using an $|{\bf m}_i|=1$
step creates a deflated system consisting
of $6$ polynomials in $4$ variables.
In fact, since the null space of the Jacobian at the root
is $2$ dimensional, adding two polynomials is necessary and sufficient.
Next, we consider the computation of both the point and multiplicity structure.
Using an intrinsic null space approach via a second order Macaulay
matrix, the resulting system consists of $64$ polynomials in $37$ variables.
In comparison,
using the primal
basis \mbox{$\{1,x_1,x_2$, $x_1x_2\}$}, the approach
of~\S~\ref{Sec:PointMult}
constructs a system
of $30$ polynomials in $19$ variables.
\subsection{Examples with multiple iterations}\label{Sec:MultipleIterations}
\noindent In our last set of examples, we consider simply deflating a root of the last three systems
from \cite[\S~7]{DayZen2005}
and a system from \cite[\S~1]{Lecerf02}, each of which
required more than one iteration to deflate.
These four systems and corresponding points are:
{\small\begin{itemize}
\item[1:] $\{x_1^4 - x_2 x_3 x_4, x_2^4 - x_1 x_3 x_4, x_3^4 - x_1 x_2 x_4, x_4^4 - x_1 x_2 x_3\}$ at $(0,0,0,0)$ with ${\delta} = 131$ and $o = 10$;
\item[2:] $\{x^4, x^2 y + y^4, z + z^2 - 7x^3 - 8x^2\}$ at $(0,0,-1)$ with ${\delta} = 16$ and $o = 7$;
\item[3:] $\{14x + 33y - 3\sqrt{5}(x^2 + 4xy + 4y^2 + 2) + \sqrt{7} + x^3 + 6x^2y + 12xy^2 + 8y^3, 41x - 18y - \sqrt{5} + 8x^3 - 12x^2y + 6xy^2 - y^3 + 3\sqrt{7}(4xy - 4x^2 - y^2 - 2)\}$ at $Z_3 \approx (1.5055, 0.36528)$ with ${\delta} = 5$ and $o = 4$;
\item[4:] $\{2x_1 + 2x_1^2 + 2x_2 + 2x_2^2 + x_3^2 - 1,
\mbox{$(x_1 + x_2 - x_3 - 1)^3-x_1^3$}, \\
(2x_1^3 + 5x_2^2 + 10x_3 + 5x_3^2 + 5)^3 - 1000 x_1^5\}$ at
$(0,0,-1)$ with ${\delta} = 18$ and $o = 7$.
\end{itemize}}
We compare using the following four methods:
(A) intrinsic slicing version of \cite{DayZen2005,lvz06};
(B) isosingular deflation \cite{HauWam13} via a maximal rank submatrix;
(C) ``kerneling'' method in \cite{GiuYak13};
(D) approach of \S~\ref{Sec:Deflation} using an $|{\bf m}_i|=1$ step.
We performed these methods without the use of preprocessing and postprocessing
as mentioned in \S~\ref{Sec:Deflation} to directly compare the
number of nonzero distinct polynomials, variables, and iterations
for each of these four deflation methods.
\vskip -0.05in
$$
\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
&
\multicolumn{3}{|c|}{\hbox{Method A}} &
\multicolumn{3}{|c|}{\hbox{Method B}} &
\multicolumn{3}{|c|}{\hbox{Method C}} &
\multicolumn{3}{|c|}{\hbox{Method D}}\\
\cline{2-13}
&
\hbox{Poly} & \hbox{Var} & \hbox{It} &
\hbox{Poly} & \hbox{Var} & \hbox{It} &
\hbox{Poly} & \hbox{Var} & \hbox{It} &
\hbox{Poly} & \hbox{Var} & \hbox{It} \\
\hline
1 & 16 & 4 & 2 & 22 & 4 & 2 & 22 & 4 & 2 & 16 & 4 & 2 \\
\hline
2 & 24 & 11 & 3 & 11 & 3 & 2 & 12 & 3 & 2 & 12 & 3 & 3 \\
\hline
3 & 32 & 17 & 4 & 6 & 2 & 4 & 6 & 2 & 4 & 6 & 2 & 4 \\
\hline
4 & 96 & 41 & 5 & 54 & 3 & 5 & 54 & 3 & 5 & 22 & 3 & 5 \\
\hline
\end{array}$$}%
For breadth-one singular points, as in system 3, methods B, C, and D yield
the same deflated system.
Except for methods B and C on the second system, all four methods required the same number of iterations to deflate the root.
For the first and third systems, our new approach matched
the best of the other methods and resulted in a
significantly smaller deflated system for~the~last~one.
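The claimed singularities themselves are easy to verify (a SymPy sketch; system 3 is omitted since its root is only given approximately): each listed point is a root at which the Jacobian is rank-deficient.

```python
import sympy as sp

x1, x2, x3, x4 = sp.symbols("x1 x2 x3 x4")
x, y, z = sp.symbols("x y z")

# (polynomials, variables, claimed singular root) for systems 1, 2 and 4
systems = [
    ([x1**4 - x2*x3*x4, x2**4 - x1*x3*x4,
      x3**4 - x1*x2*x4, x4**4 - x1*x2*x3],
     [x1, x2, x3, x4], (0, 0, 0, 0)),
    ([x**4, x**2*y + y**4, z + z**2 - 7*x**3 - 8*x**2],
     [x, y, z], (0, 0, -1)),
    ([2*x1 + 2*x1**2 + 2*x2 + 2*x2**2 + x3**2 - 1,
      (x1 + x2 - x3 - 1)**3 - x1**3,
      (2*x1**3 + 5*x2**2 + 10*x3 + 5*x3**2 + 5)**3 - 1000*x1**5],
     [x1, x2, x3], (0, 0, -1)),
]

is_root, is_singular = [], []
for polys, vs, pt in systems:
    f = sp.Matrix(polys)
    sub = dict(zip(vs, pt))
    is_root.append(f.subs(sub) == sp.zeros(len(polys), 1))
    is_singular.append(f.jacobian(vs).subs(sub).rank() < len(vs))
print(is_root, is_singular)
```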
\section{Introduction}
\subsection{Main Results}
Given an abelian group $A$ with an action of a finite group $G$, summation along the orbit provides a natural map $\nm_G\colon A_G \to A^G$ from the co-invariants to the invariants.
In general, this map may have both non-trivial kernel and cokernel.
However, when $A$ is a rational vector space, $\nm_G$ is always an isomorphism.
Similarly, given a spectrum $X$ with an action of $G$, the
spectra of homotopy orbits $X_{hG}$ and homotopy fixed points $X^{hG}$
are also related by a canonical norm map $\nm_G\colon X_{hG}\to X^{hG}$.
As before, this map is usually far from being an equivalence.
However, there are certain homology theories such that, when working locally with respect to them, the analogous norm map is always a local equivalence.
For a spectrum $E$, let us denote by $\Sp_{E}$ the $\infty$-category
of $E$-local spectra, and for $X\in\Sp_{E}$ with a $G$-action, we denote
by $X_{hG}$ and $X^{hG}$ the homotopy orbits and homotopy fixed
points respectively, in the $\infty$-category $\Sp_{E}$.
\begin{thm}
[Greenlees-Hovey-Sadofsky, \cite{HState,GState}]\label{thm:Hovey_Sadofsky_Greenlees}Let
$\K\left(n\right)$ be Morava $K$-theory of height $n$. For every
$X\in\Sp_{K\left(n\right)}$ with an action of a finite group $G$,
the canonical norm map
\[
\nm_G\colon X_{hG}\xrightarrow{\,\smash{\raisebox{-0.5ex}{\ensuremath{\scriptstyle\sim}}}\,} X^{hG}\quad\in\quad\Sp_{K\left(n\right)}
\]
is an equivalence.
\end{thm}
Since $\K\left(0\right)=H\mathbb{Q}$, the case $n=0$ follows easily from the invertibility of $\nm_G$ on rational representations of $G$.
However, for $n>0$ this is a remarkable fact, showcasing the intermediary behavior of $\K\left(n\right)$-local homotopy theory, interpolating between zero and positive characteristic.
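To spell out the $n=0$ case (an elementary computation, recorded here for orientation): for a rational vector space $V$ with a $G$-action, the norm and its inverse are

```latex
\nm_G\colon V_G \to V^G,\quad [v]\mapsto \sum_{g\in G} g v,
\qquad
\nm_G^{-1}\colon V^G \to V_G,\quad w\mapsto \Bigl[\tfrac{1}{|G|}\,w\Bigr].
```

Indeed, $\nm_G\bigl(\bigl[\tfrac{1}{|G|}w\bigr]\bigr)=\tfrac{1}{|G|}\sum_{g\in G}gw=w$ for invariant $w$, while $\bigl[\tfrac{1}{|G|}\sum_{g\in G}gv\bigr]=[v]$ in $V_G$ since $[gv]=[v]$.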
Considering the classifying space $BG$ as
an $\infty$-groupoid, the data of an $E$-local spectrum with an action of $G$ is equivalent to a functor $F\colon BG\to\Sp_{E}$. In these terms, the homotopy orbits and homotopy fixed points of the action are then just the colimit and limit of $F$ respectively (again, in $\Sp_{E}$).
In \cite{HopkinsLurie}, Hopkins and Lurie extended \thmref{Hovey_Sadofsky_Greenlees} to more general limits and colimits.
\begin{defn}
\label{def:m_Finite}Given $m\ge-2$, a space $A$ is called \emph{$m$-finite}
if it is $m$-truncated, has finitely many connected components and
all of its homotopy groups are finite. It is called \emph{$\pi$-finite}
if it is $m$-finite for some $m$.
\end{defn}
\begin{thm}
[Hopkins-Lurie, \cite{HopkinsLurie}]\label{thm:Hopkins_Lurie}Let
$A$ be a $\pi$-finite space. For every $F\colon A\to\Sp_{K\left(n\right)}$,
there is a canonical (and natural) equivalence
\[
\nm_{A}\colon\colim_A F\xrightarrow{\,\smash{\raisebox{-0.5ex}{\ensuremath{\scriptstyle\sim}}}\,}\holim_A F\quad\in\quad\Sp_{K\left(n\right)}.
\]
\end{thm}
The special case, where $A=BG$ for a finite group $G$, recovers \thmref{Hovey_Sadofsky_Greenlees}.
The canonical norms of \thmref{Hopkins_Lurie} (and \thmref{Hovey_Sadofsky_Greenlees})
can be set in the broader context of higher semiadditivity, developed in \cite{HopkinsLurie}.
Let $\mathcal{C}$ be an $\infty$-category that admits all (co)limits
indexed by $\pi$-finite spaces. For every $\pi$-finite space $A$,
we have two functors
\[
\colim\limits _{A}\,,\ \holim\limits _{A}\colon\fun\left(A,\mathcal{C}\right)\to\mathcal{C}.
\]
In \cite{HopkinsLurie}, the authors set up a general process that
attempts to construct canonical natural transformations
\[
\nm_{A}\colon\colim\limits _{A}\to\holim\limits _{A}
\]
for all $m$-finite spaces $A$, by induction on $m$. The $m$-th
step of this process requires that all canonical norm maps for $\left(m-1\right)$-finite
spaces, that were constructed in the previous step, are \emph{isomorphisms}.
The property of an $\infty$-category $\mathcal{C}$, that these canonical
norm maps can be constructed and are isomorphisms for all $m$-finite
spaces, is called \emph{$m$-semiadditivity} (see \subsecref{Higher_Semiadditivity}).
We can thus restate \thmref{Hovey_Sadofsky_Greenlees}
as saying that the $\infty$-category $\Sp_{K\left(n\right)}$ is $1$-semiadditive, and \thmref{Hopkins_Lurie} as saying that it
is $\infty$-semiadditive (i.e. $m$-semiadditive for all $m$).
Kuhn extended \thmref{Hovey_Sadofsky_Greenlees} in a different direction, by replacing $\K\left(n\right)$-localization with the closely related telescopic localization. Namely, let $\T\left(n\right)$ be the telescope of a $v_{n}$-self map of some finite spectrum of type $n$.
\begin{thm}
[Kuhn, \cite{Kuhn}]\label{thm:Kuhn}
The $\infty$-category $\Sp_{\T\left(n\right)}$ is $1$-semiadditive.
\end{thm}
In view of \thmref{Kuhn} and \thmref{Hopkins_Lurie}, M. Hopkins asked whether the $\infty$-category $\Sp_{\T\left(n\right)}$ is $\infty$-semiadditive as well.
Our first result is an affirmative answer to this question.
\begin{theorem}[\ref{thm:Tn_Semiaddi}]
\label{thm:telescopic_infty_semiadditive_intro}
\emph{
The $\infty$-category $\Sp_{\T\left(n\right)}$ is $\infty$-semiadditive.
}
\end{theorem}
Our proof of \thmref{telescopic_infty_semiadditive_intro} uses the general framework of higher semiadditivity developed by Hopkins and Lurie in \cite{HopkinsLurie}, but is quite different than their proof of \thmref{Hopkins_Lurie} (see \subsecref{Outline_Proof} for an outline). Since the latter is implied by the former, our argument provides an alternative proof for \thmref{Hopkins_Lurie} as well.
Our next result concerns the classification of $1$-semiadditive localizations of $p$-local spectra with respect to homotopy rings\footnote{In fact, all the results apply more generally to \emph{weak rings}. That is, spectra equipped with a multiplication map and a one-sided unit, and no associativity conditions (see \defref{Weak_Ring}).}. We show that the $\infty$-categories $\Sp_{\K\left(n\right)}$ and $\Sp_{\T\left(n\right)}$ are precisely the minimal and maximal examples of such localizations.
\begin{theorem}[\ref{thm:Monochrom}]
\label{thm:Tn_Universal}
\emph{
Let $R$ be a non-zero $p$-local homotopy ring spectrum. The $\infty$-category $\Sp_{R}$ is $1$-semiadditive if and only if there exists a (necessarily unique) integer $n\ge 0$, such that
\[
\Sp_{\K\left(n\right)}\ss\Sp_{R}\ss\Sp_{\T\left(n\right)}.
\]
}
\end{theorem}
Equivalently, using the Nilpotence Theorem, $\Sp_R$ is $1$-semiadditive if and only if there is exactly one integer $n\ge0$ for which $R\otimes \K\left(n\right) \neq 0$, and $R \otimes H\mathbb{F}_p =0$. Namely, if $R$ is supported at a unique (finite) chromatic height\footnote{A detailed argument for this equivalence is given in the proof of \thmref{Are_Equivalent_Intro} (\ref{thm:Monochrom}).}.
Combining \thmref{telescopic_infty_semiadditive_intro} with \thmref{Tn_Universal}, and using the arithmetic square, we show that for localizations of $\Sp$ with respect to homotopy rings, the entire hierarchy of higher semiadditivity collapses.
\begin{theorem}[\ref{cor:Semiadd_Collapse}]
\label{thm:Main_Theorem}
\emph{
Let $R\in \Sp$ be a homotopy ring spectrum. The $\infty$-category $\Sp_R$ is $1$-semiadditive if and only if it is $\infty$-semiadditive.
}
\end{theorem}
This leads us to formulate the following general conjecture:
\begin{conj}
Every presentable, stable, and $1$-semiadditive $\infty$-category is $\infty$-semiadditive.
\end{conj}
Another remarkable property of the localizations $\Sp_{\K(n)}$ and $\Sp_{\T(n)}$, is the existence of the so-called \emph{Bousfield-Kuhn functor}, i.e. a retract of $\Omega^{\infty}{\colon}\Sp_{R}\to\mathcal{S_{*}}$.
This phenomenon turns out to be also strongly connected to higher semiadditivity. In \cite{ClausenAkhil}, the authors gave a new (and short) proof of \thmref{Kuhn}, by showing that every localization of $\Sp$, that admits a Bousfield-Kuhn functor, is $1$-semiadditive. Combined with all of the above, the situation can be pleasantly summarized as follows:
\begin{theorem}[\ref{thm:Monochrom}]
\emph{
\label{thm:Are_Equivalent_Intro}
Let $R$ be a non-zero $p$-local homotopy ring spectrum.
The following are equivalent:
\begin{enumerate}
\item There is exactly one integer $n\ge0$ for which \(R\otimes \K\left(n\right) \ne 0,\) and $R\otimes H\mathbb{F}_p=0$.
\item There exists a (necessarily unique) integer $n\ge0$, such that
\(\Sp_{\K\left(n\right)}\ss\Sp_{R}\ss\Sp_{\T\left(n\right)}.\)
\item Either $\Sp_{R}=\Sp_{H\bb Q}$, or $\Omega^{\infty}{\colon}\Sp_{R}\to\mathcal{S_{*}}$
admits a retract.
\item $\Sp_{R}$ is $1$-semiadditive.
\item $\Sp_{R}$ is $\infty$-semiadditive.
\end{enumerate}
}
\end{theorem}
It seems appropriate at this point to say a few words about the results summarized in \thmref{Are_Equivalent_Intro}, in light of the (still open) Telescope Conjecture, which asserts that $\Sp_{\K\left(n\right)} = \Sp_{\T\left(n\right)}$ (see \cite{ravconj}).
If true, the property of higher semiadditivity characterizes completely the $\K\left(n\right)$-local $\infty$-categories among localizations of $\Sp$ with respect to homotopy rings (as does the existence of the Bousfield-Kuhn functor).
If false, the property of higher semiadditivity fails to detect the difference, but on the upside, we are provided with more examples of $\infty$-semiadditive $\infty$-categories.
At any rate, our results corroborate the by now well-established fact that the Telescope Conjecture is rather subtle.
The $1$-semiadditivity of $\Sp_{T(n)}$ and $\Sp_{K(n)}$ has found many applications in chromatic homotopy theory. For example, it was used to analyze the Balmer spectrum in an equivariant setting \cite{barthel2017balmer}.
It was also recently used in \cite{heuts2018lie} to generalize Quillen's rational homotopy theory to higher chromatic heights.
In an upcoming work we shall use \thmref{telescopic_infty_semiadditive_intro}, i.e. the \emph{$\infty$-semiadditivity} of $\Sp_{T(n)}$, to lift the maximal abelian Galois extension of $\Sp_{K(n)}$ to $\Sp_{T(n)}$ and draw consequences for the Picard group of $\Sp_{T(n)}$.
We shall now describe an application of $\infty$-semiadditivity of $\Sp_{T(n)}$ to a matter of chromatic homotopy theory, that does not mention higher semiadditivity explicitly.
\begin{theorem}[\ref{thm:Height_Below_n}]
\label{thm:Bounded_Height}
\emph{
Let $R$ be a $p$-local homotopy ring spectrum and let $d\ge0$. The following are equivalent\footnote{The equivalence of (1) and (2) is well known. The new content is that they are both equivalent to (3).}:
\begin{enumerate}
\item $R\otimes \K\left(m\right)=0$ for all $m> d$.
\item $R\otimes \X\left(d+1\right)=0$ for a finite spectrum $\X\left(d+1\right)$ of type $d+1$.
\item $R\otimes \Sigma^{\infty}A=0$ for every $d$-connected $\pi$-finite space $A$.
\end{enumerate}
}
\end{theorem}
Namely, we obtain an equivalence of three different notions of ``height $\le d$'' for a homotopy ring: (1) the ``algebraic'' one using Morava $K$-theories, (2) the ``geometric'' one using finite complexes, and (3) the ``categorical'' one using $\pi$-finite spaces.
The categorical height of a spectrum (i.e. the minimal $d$ for which condition (3) holds) was considered, using different terminology, by Bousfield in \cite{Bousfield82}.
The most prominent example of such $R$ is $\K\left(n\right)$, which by \cite{RavenelWilson}, has categorical height $n$. Bousfield's work also implies that for all $n\ge0$, the spectrum $\T\left(n\right)$ has \emph{some} finite categorical height, but determining its \emph{precise} value has been an open question\footnote{By comparing with $K(n)$, it is clearly at least $n$, but not much has been known about it beyond that.}.
This can now be settled using \thmref{Bounded_Height}: since the algebraic and geometric heights of $\T\left(n\right)$ are known to equal $n$, the categorical height must equal $n$ as well.
The proof of the above results relies on establishing certain consequences of $1$-semiadditivity, especially in the context of \emph{stable} $\infty$-categories.
The main one, which is central to the proof of
\thmref{telescopic_infty_semiadditive_intro}, but is also of independent interest, is the existence of certain canonical ``power operations''.
\begin{theorem}[\ref{thm:Delta_Semi_Add}, \ref{thm:Frob_Lift}]
\emph{
Let $E\in\Sp$, such that $\Sp_{E}$ is $1$-semiadditive (e.g. $E=\T\left(n\right)$)
and let $X$ be an $\bb E_{\infty}$-algebra in $\Sp_{E}$. The commutative
ring $R=\pi_{0}X$ admits a canonical additive $p$-derivation $\delta\colon R\to R$
(see \defref{Delta}). In particular, the operation
\[
\psi\left(x\right)=x^{p}+p\delta\left(x\right)
\]
is a linear map, which is a canonical lift of the Frobenius endomorphism
modulo $p$. The operation $\delta$ (and hence $\psi$) is functorial
with respect to maps of $\bb E_{\infty}$-algebras.
}
\end{theorem}
For $K(1)$-local $\bb{E}_{\infty}$-rings, Hopkins has constructed in \cite{HopkinsK1} similar looking power operations denoted $(\psi,\theta)$. Generalizations of these operations to higher heights were studied by different authors including \cite{StricklandSym} and \cite{Rezkpower}. In particular, a canonical lift of Frobenius was constructed in \cite{StapletonLift} for the Morava $E$-theory cohomology ring of a space. However, even for $K(1)$-local rings, our power operation $\delta$ turns out to be \emph{different} from the operation $\theta$ constructed by Hopkins. We defer the detailed study of the wealth of power operations on $\bb{E}_{\infty}$-algebras in $1$-semiadditive stable symmetric monoidal $\infty$-categories to a future work.
Employing this power operation we obtain a general criterion for detecting nilpotence in the homotopy groups of an $\mathbb{E}_{\infty}$-ring, which is inspired by (and generalizes) a conjecture of J. P. May, that was proved in \cite{MathewMay}.
\begin{theorem}[\ref{thm:May}]
\emph{
\label{thm:Main_Sofic}Let $E$ be a homotopy commutative ring spectrum,
such that $\Sp_{E}$ is $1$-semiadditive and let $R$ be an $\bb E_{\infty}$-ring
spectrum\footnote{In fact, it suffices that $R$ is an $H_\infty$-ring spectrum.}. For every $x\in\pi_{*}R$, if the image of $x$ in $\pi_{*}\left(H\bb Q\otimes R\right)$
is nilpotent, then the image of $x$ in $\pi_{*}\left(E\otimes R\right)$
is nilpotent (i.e. the single homology theory $H\bb Q$ detects nilpotence
in all $1$-semiadditive multiplicative homology theories).
}
\end{theorem}
In fact, we prove \thmref{Main_Sofic} for a wider class of spectra
$E$, which we call \emph{sofic} (see \defref{sofic}). These include all spectra whose Bousfield class is contained in a sum of spectra $E$ for which $\Sp_{E}$ is $1$-semiadditive.
In particular, it can be applied to the Morava $E$-theories of any height and the finite localizations of the sphere $L_{n}^{f}\bb S$.
\subsection{Background on Higher Semiadditivity\label{subsec:Higher_Semiadditivity}}
We shall now give an informal introduction to higher semiadditivity. The goal is to motivate both the concept of higher semiadditivity introduced in \cite[section 4]{HopkinsLurie} and the more general perspective on it that we develop in this paper, using abstract norms and integration.
\subsubsection{From Norms to Integration}
Since the construction of the canonical norm maps is inductive, it
will be helpful to begin with describing some \emph{consequences}
of having invertible norm maps. This will also clarify their relation
to the classical notion of semiadditivity. For an ordinary category
$\mathcal{C}$, semiadditivity is a property, whose main feature is
the ability to sum a finite family of morphisms between two objects.
Similarly, for an $\infty$-category $\mathcal{C}$, being $m$-semiadditive
is a property, whose main feature is the ability to sum an \emph{$m$-finite}
family of morphisms between two objects. Namely, given an $m$-finite
space $A$ and a map
\[
\varphi\colon A\to\map_{\mathcal{C}}\left(X,Y\right),
\]
we define a map
\[
\int\limits _{A}\varphi\colon X\to Y,
\]
which we should think of as the sum (or integral) of $\varphi$ over
$A$, as the composition
\[
X\oto{\,\Delta\,}\holim\limits _{A}X\oto{\holim\varphi}\holim\limits _{A}Y\oto{\nm_{A}^{-1}}\colim\limits _{A}Y\oto{\,\nabla\,}Y.
\]
Note, that for an ordinary semiadditive category, summation over a
finite set $A$ is indeed obtained in this way using the \emph{canonical}
isomorphism
\[
\nm_{A}\colon\coprod_{A}X\xrightarrow{\,\smash{\raisebox{-0.5ex}{\ensuremath{\scriptstyle\sim}}}\,}\prod_{A}X.
\]
As a special case, for every object $X\in\mathcal{C}$, integrating
the constant $A$-family on $\Id_{X}$, produces an endomorphism $|A|\in\map_{\mathcal{C}}\left(X,X\right)$.
This generalizes the ``multiplication by $k$'' endomorphism of $X$ for an integer $k$ and should be thought of as multiplication by the ``cardinality of $A$''.
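For example (the classical case, spelled out): if $A$ is a finite set with $k$ elements and $\mathcal{C}$ is semiadditive, the canonical norm is the identity matrix on the biproduct, so integrating the constant family on $\Id_{X}$ gives

```latex
|A|\cdot\Id_X
\;=\;
\Bigl(X \oto{\,\Delta\,} \prod_{A} X
\oto{\nm_{A}^{-1}} \coprod_{A} X
\oto{\,\nabla\,} X\Bigr)
\;=\; \sum_{a\in A}\Id_{X}
\;=\; k\cdot\Id_{X},
```

recovering multiplication by the integer $k$.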
\subsubsection{From Integration to Norms}
We now turn things around and \emph{construct} norm maps for $m$-finite
spaces by integrating some $\left(m-1\right)$-finite families of
maps. In general, given any space $A$ and a diagram $F\colon A\to\mathcal{C}$,
to specify a morphism
\[
\nm_{A}\colon\colim\limits _{A}F\to\holim\limits _{A}F,
\]
roughly amounts to specifying a compatible collection of morphisms
\[
a,b\in A\colon\quad\nm_{A}^{a,b}\colon F\left(a\right)\to F\left(b\right).
\]
Fixing $a,b\in A$ and denoting by $A_{a,b}$ the space of paths from
$a$ to $b$, the diagram $F$ itself provides a \emph{family} of
candidates for $\nm_{A}^{a,b}$:
\[
F_{a,b}\colon A_{a,b}\to\map\left(F\left(a\right),F\left(b\right)\right).
\]
There is a priori no obvious (compatible) way to choose one of them,
but assuming we are able to integrate maps over the spaces $A_{a,b}$,
we can just ``sum them all''
\[
\nm_{A}^{a,b}=\int\limits _{A_{a,b}}F_{a,b}.
\]
This construction is somewhat easier to grasp when $F$ is constant
on an object $X$. In this special case, a morphism
\[
\colim\limits _{A}X\to\holim\limits _{A}X,
\]
is the same as a map of spaces
\[
A\times A\to\map_{\mathcal{C}}\left(X,X\right).
\]
That is, an ``$A\times A$ matrix'' of endomorphisms of $X$, where
the $\left(a,b\right)\in A\times A$ entry corresponds to $\nm_{A}^{a,b}$.
The construction sketched above specializes to give $\nm_{A}^{a,b}=|A_{a,b}|$.
The construction of the norm in the general case can be thought of
as a ``twisted'' version of the one for the constant diagram.
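To illustrate in the first nontrivial case: for $A=BG$ with $G$ a finite group and $F$ corresponding to an object $X$ with a $G$-action, the basepoint path space is $\Omega BG\simeq G$, which is $0$-finite, and $F_{*,*}\colon G\to\map\left(X,X\right)$ sends $g$ to its action on $X$. Informally, the recipe above thus gives

```latex
\nm_{BG}^{*,*} \;=\; \int\limits_{G} F_{*,*} \;=\; \sum_{g\in G} g
\quad\in\quad \map_{\mathcal{C}}\left(X,X\right),
```

the classical norm element; for the trivial action it is multiplication by $|G|=|\Omega BG|$, in accordance with the formula $\nm_{A}^{a,b}=|A_{a,b}|$.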
\subsubsection{The Inductive Process}
To tie things up, we observe that if $A$ is $m$-finite, then the
path spaces $A_{a,b}$ are $\left(m-1\right)$-finite. Thus, assuming
inductively that we have \emph{invertible} canonical norm maps $\nm_{A}$
for all $\left(m-1\right)$-finite spaces $A$, we obtain a canonical
way to integrate $\left(m-1\right)$-finite families of morphisms.
As explained above, this allows us to define norm maps for all $m$-finite
spaces. It is now a \emph{property} that all those new norm maps are
isomorphisms, which in turn induces an operation of integration over
$m$-finite spaces and so on. We spell out the situation for small
values of $m$.
\begin{itemize}
\item [($-2$)]We define \emph{every} $\infty$-category to be $\left(-2\right)$-semiadditive.
Indeed, if $A$ is $\left(-2\right)$-finite, then $A\simeq\pt$ and
the canonical norm map $\nm_{\pt}$ is the identity natural transformation
of the identity functor. In particular, we get a canonical way to
sum a one point family of maps, which is just
taking the value at the point itself.
\item [($-1$)]The only non-contractible $\left(-1\right)$-finite space
is $A=\operatorname{{\small \normalfont{\text{\O}}}}$. The associated norm map is the unique map
\[
\nm_{\operatorname{{\small \normalfont{\text{\O}}}}}\colon0_{\mathcal{C}}\to1_{\mathcal{C}}
\]
from the initial object to the terminal object of $\mathcal{C}$.
Requiring this map to be an isomorphism is to require the existence
of a zero object. Thus, $\mathcal{C}$ is $\left(-1\right)$-semiadditive
if and only if it is \emph{pointed}. This in turn allows us to integrate
an empty family of morphisms. Namely, given $X,Y\in\mathcal{C}$,
we get a canonical zero map given by the composition
\[
X\to1_{\mathcal{C}}\iso0_{\mathcal{C}}\to Y.
\]
\item [($0$)]A $0$-finite space is one that is equivalent to a finite
set $A$. Given a collection of objects $\left\{ X_{a}\right\} _{a\in A}$
in a \emph{pointed} $\infty$-category $\mathcal{C}$, we get a canonical
map
\[
\nm_{A}\colon\coprod_{a\in A}X_{a}\to\prod_{a\in A}X_{a}.
\]
This map is given by the ``identity matrix'' (this uses the zero
maps, which in turn use the inverse of $\nm_{\operatorname{{\small \normalfont{\text{\O}}}}}$). Requiring these
maps to be isomorphisms is precisely the usual property of being \emph{semiadditive},
which allows one to sum a finite family of morphisms.
\item [($1$)]A connected $1$-finite space is of the form $A=BG$ for a
finite group $G$. A diagram $F\colon BG\to\mathcal{C}$ is equivalent
to an object $X\in\mathcal{C}$ equipped with an action of $G$. When
$\mathcal{C}$ is \emph{semiadditive}, one can construct the canonical
norm map
\[
\nm_{BG}\colon X_{hG}\to X^{hG}
\]
and it can be identified with the classical norm of $G$. If $\mathcal{C}$
is stable, then $\nm_{BG}$ is an isomorphism if and only if its cofiber,
the Tate construction $X^{tG}$, vanishes. It is in this form that
\thmref{Hovey_Sadofsky_Greenlees} and \thmref{Kuhn} were originally
stated and proved.
\end{itemize}
\subsubsection{Relative and Axiomatic Integration}
Just like with ordinary semiadditivity, integration of $m$-finite
families of maps satisfies various compatibilities. These generalize
associativity, changing summation order, distributivity with respect to composition, etc. To conveniently manage those compatibility relations
it is useful to extend the integral operation to the relative case.
Given a map of $m$-finite spaces $q\colon A\to B$, the pullback
along $q$ functor
\[
q^{*}\colon\fun\left(B,\mathcal{C}\right)\to\fun\left(A,\mathcal{C}\right),
\]
admits a left and right adjoint, which we denote by $q_{!}$ and $q_{*}$
respectively. If $\mathcal{C}$ is $\left(m-1\right)$-semiadditive,
one can construct a canonical norm map $\nm_{q}\colon q_{!}\to q_{*}$
and it is an isomorphism when $\mathcal{C}$ is $m$-semiadditive.
Similarly to the absolute case, given objects $X,Y\in\fun\left(B,\mathcal{C}\right)$,
one can use the inverse of $\nm_{q}$ to define ``integration along
the fibers of $q$'',
\[
\int\limits _{q}\colon\map_{\fun\left(A,\mathcal{C}\right)}\left(q^{*}X,q^{*}Y\right)\to\map_{\fun\left(B,\mathcal{C}\right)}\left(X,Y\right).
\]
The approach we take in this paper is to further generalize the situation and to put it in an axiomatic framework. We define a \emph{normed
functor}
\[
q\colon\mathcal{D}\nto\mathcal{C},
\]
to be a functor
\(
q^{*}\colon\mathcal{C}\to\mathcal{D},
\)
that admits a left adjoint $q_{!}$, a right adjoint $q_{*}$, and
is equipped with a natural transformation $\nm_{q}\colon q_{!}\to q_{*}$.
If this natural transformation is an isomorphism, we can use the same
formulas as above to define an abstract integration operation
\[
\int\limits _{q}\colon\map_{\mathcal{D}}\left(q^{*}X,q^{*}Y\right)\to\map_{\mathcal{C}}\left(X,Y\right)
\]
for all $X,Y\in\mathcal{C}$. We proceed to develop a general calculus
of normed functors and integration, which can then be applied to the
context of higher semiadditivity. One advantage of this axiomatic
approach is that it separates the formal aspects of this ``calculus''
from the rather involved inductive construction of the canonical norm
maps. Another advantage is that it unifies many seemingly different
phenomena as special cases of several general formal statements. This
renders the development of the theory more economical and streamlined. Finally, we
believe that this axiomatic framework might be of use elsewhere.
\subsection{Outline of the Proof}
\label{subsec:Outline_Proof}
The core result of this paper is the $\infty$-semiadditivity of $\Sp_{\T\left(n\right)}$. For the convenience of the reader, we shall now sketch the proof.
The argument is inductive on the level of semiadditivity $m$.
The basis of the induction is $m=1$, which is given by \thmref{Kuhn}.
Assume that $\Sp_{T\left(n\right)}$ is $m$-semiadditive.
In order to show that $\Sp_{T\left(n\right)}$ is $\left(m+1\right)$-semiadditive, we need to prove that for every $\left(m+1\right)$-finite space $B$, the natural transformation $\nm_{B}\colon\colim_{B}\to\holim_{B}$ is an isomorphism.
We proceed by a sequence of reductions.
First, since $\Sp_{T\left(n\right)}$ is stable and $p$-local, by \cite[Proposition 4.4.16]{HopkinsLurie}, it suffices to show that
\begin{quotation}
(1) The norm map $\nm_{B}$ is an isomorphism for the single space
$B=B^{m+1}C_{p}$.
\end{quotation}
Now, consider a fiber sequence of spaces
\[
\left(*\right)\quad A\to E\to B,
\]
where $A$ and $E$ are $m$-finite, and $B$ is connected and $\left(m+1\right)$-finite.
We prove that if the natural transformation $|A|$ is \emph{invertible}
(we call such $A$ \emph{amenable}), then $\nm_{B}$ is an isomorphism
(\propref{Amenable_Space}). In fact, it suffices to show that the
component of $|A|$ at the monoidal unit $\bb S_{T\left(n\right)}$
is invertible (\lemref{Box_Unit}). By abuse of notation, we denote
this component also by $|A|$.
In order to apply the above to $B=B^{m+1}C_{p}$, we introduce the
following class of ``candidates'' for $A$. We call a space $A$
\emph{$m$-good} if it is connected and $m$-finite with $\pi_{m}A\neq0$,
and all homotopy groups of $A$ are $p$-groups. Since such $A$ is
in particular nilpotent, one can always fit it in a fiber sequence
$\left(*\right)$ with $B=B^{m+1}C_{p}$. Thus, we are reduced to
showing that
\begin{quotation}
(2) There exists an $m$-good space $A$, such that $|A|\in\pi_{0}\bb S_{T\left(n\right)}$
is invertible.
\end{quotation}
To detect invertibility in the ring $\pi_{0}\bb S_{T\left(n\right)}$,
we transport the problem into a better understood setting. Let $E_{n}$
be the Morava $E$-theory $\bb E_{\infty}$-ring spectrum of height
$n$, and let $\widehat{\Mod}_{E_n}$ be the
$\infty$-category of $\K\left(n\right)$-local $E_{n}$-modules.
The functor
\[
E_n\widehat{\otimes}\left(-\right)\colon\Sp_{T\left(n\right)}\to\widehat{\Mod}_{E_n}
\]
is symmetric monoidal, and hence induces a map of commutative rings
\[
f\colon\pi_{0}\bb S_{T\left(n\right)}\to\pi_{0}E_{n}=\bb Z_{p}[[u_{1},\dots,u_{n-1}]].
\]
Using the Nilpotence Theorem and standard techniques of chromatic
homotopy theory, we show that an element of $\pi_{0}\bb S_{T\left(n\right)}$
is invertible if and only if its image under $f$ is invertible (\corref{Nil_Conservativity_Telescopic}).
Moreover, the functor $E_{n}\widehat{\otimes}\left(-\right)$ is colimit
preserving. Thus, by general arguments of higher semiadditivity we can deduce that $\widehat{\Mod}_{E_n}$
is also $m$-semiadditive (\corref{Semi_Add_Mode}(2)), and moreover,
$f\left(|A|\right)$ coincides with the element $|A|$
of $\pi_{0}E_{n}$ (\corref{Integral_Functor}). Thus, we can replace
$\Sp_{T\left(n\right)}$ with the more approachable $\infty$-category
$\widehat{\Mod}_{E_n}.$ Namely,
it suffices to show
\begin{quotation}
(3) There exists an $m$-good space $A$, such that $|A|\in\pi_{0}E_{n}$
is \emph{invertible}.
\end{quotation}
By \cite[Lemma 1.33]{BobkovaG}, the image of $f$ is contained in the constants $\bb Z_{p}$. Hence, $|A|$ is invertible if and only
if its $p$-adic valuation is zero. On $\bb Z_{p},$ we have the Fermat
quotient operation
\[
\tilde{\delta}\left(x\right)=\frac{x-x^{p}}{p},
\]
with the salient property of reducing the $p$-adic valuation of non-invertible
non-zero elements. The heart of the proof consists of realizing the
algebraic operation $\tilde{\delta}$ as an operation acting on the elements
$|A|$ in a controlled way. It is for this step that it
is crucial that our induction base is $m=1$. Namely, for
a presentable, \emph{$1$-semiadditive}, stable, $p$-local, symmetric
monoidal $\infty$-category $(\mathcal{C},\otimes, \one_{\mathcal{C}})$, we construct a ``power
operation'' (\defref{Delta} and \thmref{Delta_Semi_Add})
\[
\delta\colon\pi_{0}\left(\one_{\mathcal{C}}\right)\to\pi_{0}\left(\one_{\mathcal{C}}\right),
\]
that shares many of the formal properties of $\tilde{\delta}$.
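As a quick numerical illustration of the valuation-reducing property of $\tilde{\delta}$ (an elementary check, playing no logical role in the argument): for $p=3$ and $x=6\in\bb Z_{3}$,
\[
\tilde{\delta}\left(6\right)=\frac{6-6^{3}}{3}=\frac{-210}{3}=-70,
\]
whose $3$-adic valuation is $0$, while that of $6$ is $1$; in particular, $\tilde{\delta}\left(6\right)$ is invertible in $\bb Z_{3}$.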
In particular, specializing to the case $\mathcal{C}=\widehat{\Mod}_{E_n}$,
the operation $\delta$ coincides with $\tilde{\delta}$ on $\bb Z_{p}\subseteq\pi_{0}E_{n}$.
Moreover, for an $m$-good $A$, we have
\[
\delta\left(|A|\right)=|A'|-|A''|,
\]
where $A'$ and $A''$ are also $m$-good (combine \defref{Delta_Semi_Add}
and \thmref{Alpha_Box}). It follows that if $|A|$ is
non-zero (and not already invertible), then at least one of $|A'|$
and $|A''|$ has \emph{lower} $p$-adic valuation than
$|A|$. The prototypical $m$-good space is the Eilenberg-MacLane
space $B^{m}C_{p}$. Hence, it suffices to show that
\begin{quotation}
(4) The element $|B^{m}C_{p}|\in\pi_{0}E_{n}$ is \emph{non-zero}.
\end{quotation}
To get a grip on the elements $|A|$, we reformulate them
in terms of the symmetric monoidal dimension (which does not refer
at all to higher semiadditivity). Let us denote by $A\otimes E_{n}$,
the colimit of the constant $A$-shaped diagram on $E_{n}$ in $\widehat{\Mod}_{E_n}$.
We show that $A\otimes E_{n}$ is a dualizable object\footnote{We show that this follows from higher semiadditivity, but it can also be deduced directly from the finite dimensionality of $K(n)_*(A)$ (\cite{RavenelWilson}). See \cite{hoveystrickland} and \cite{RognesStablyDualizable}.}, and that (\corref{Dim_Sym})
\[
\dim\left(A\otimes E_{n}\right)=|A^{S^{1}}|\quad\in\pi_{0}E_{n}.
\]
Since (using the splitting $X^{S^{1}}\simeq X\times\Omega X$ of the free loop space, which holds for the infinite loop space $X=B^{m}C_{p}$)
\[
|\left(B^{m}C_{p}\right)^{S^{1}}| = |B^{m}C_{p}\times B^{m-1}C_{p}|
= |B^{m}C_{p}||B^{m-1}C_{p}|,
\]
it suffices to show that
\begin{quotation}
(5) The element $\dim\left(B^{m}C_{p}\otimes E_{n}\right)\in\pi_{0}E_{n}$
is \emph{non-zero}.
\end{quotation}
Finally, it can be shown that $\dim\left(A\otimes E_{n}\right)$ equals
the Euler characteristic of the $2$-periodic Morava $K$-theory (\lemref{Morava_Dimension})\footnote{We prove this only for $B^{m}C_{p}$, as this suffices for our purposes,
but this is true in general.}
\[
\chi_{n}\left(A\right)=\dim_{\bb F_{p}}\K\left(n\right)_{0}A-\dim_{\bb F_{p}}\K\left(n\right)_{1}A.
\]
Hence, it suffices to prove that
\begin{quotation}
(6) The integer $\chi_{n}\left(B^{m}C_{p}\right)$ is \emph{non-zero}.
\end{quotation}
This is an immediate consequence of the explicit computation of $\K\left(n\right)_{*}(B^{m}C_{p})$,
carried out in \cite{RavenelWilson}.
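Concretely, by the computation of \cite{RavenelWilson}, $\K\left(n\right)_{*}B^{m}C_{p}$ is concentrated in even degrees, of total dimension $p^{\binom{n}{m}}$ over $\bb F_{p}$, whence
\[
\chi_{n}\left(B^{m}C_{p}\right)=p^{\binom{n}{m}}\ge1.
\]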
We alert the reader that at several points, this outline diverges
from the actual proof we give. Most significantly, we make use of
the fact that the steps (1)--(5) are completely formal and the ideas
involved can be formalized in much greater generality. Instead of
the functor $E_{n}\widehat{\otimes}\left(-\right)$, we can consider any
colimit preserving symmetric monoidal functor $F\colon\mathcal{C}\to\mathcal{D}$
between stable, $p$-local, symmetric monoidal $\infty$-categories.
Given such a functor $F$, we show how to bootstrap $1$-semiadditivity
to higher semiadditivity under appropriate conditions (\thmref{Bootstrap_Machine}).
This necessitates some technical changes in the argument outlined
above\footnote{In particular, we bypass \cite{BobkovaG} using a somewhat different
and more general argument.}. It is only in the final section that we specialize to $\mathcal{C}=\Sp_{T\left(n\right)}$,
and verify the assumptions of this general criterion.
\subsection{Organization}
We now describe the content of each section of the paper.
In section 2, we develop the axiomatic framework of normed functors
and integration. We begin by developing some general calculus for
this notion and study its functoriality properties. We then study
the interaction of integration with symmetric monoidal structures and
the notion of duality. We conclude with a discussion of the property
of \emph{amenability}.
In section 3, we apply the axiomatic theory of section 2 to the
setting of local systems valued in an $m$-semiadditive $\infty$-category.
We begin by recalling the canonical norm on the pullback functor along
an $m$-finite map (introduced in \cite[Section 4.1]{HopkinsLurie}),
and its interaction with various operations. We then consider $m$-finite
colimit preserving functors between $m$-semiadditive $\infty$-categories
(a.k.a.\ $m$-semiadditive functors), and their behavior with respect
to integration. We continue by studying the interaction of $m$-semiadditivity
with symmetric monoidal structures, duality, and dimension. Finally,
we study the behavior of equivariant powers in $1$-semiadditive $\infty$-categories,
which is used in the sequel in the construction of power operations.
In section 4, we construct the above-mentioned power operations for
$1$-semiadditive stable $\infty$-categories. First, we introduce
the algebraic notion of an additive $p$-derivation and study some
of its properties. We then construct an auxiliary operation $\alpha$
in the presence of $1$-semiadditivity. Specializing to the \emph{stable}
($p$-local) case, we construct from $\alpha$ the additive $p$-derivation $\delta$ and establish its naturality properties.
Finally, we formulate and
prove the ``bootstrap machine'', that
gives general conditions for a $1$-semiadditive $\infty$-category to be $\infty$-semiadditive. We conclude the section with a discussion of ``nil-conservativity'' which is a natural setup to which one can apply the bootstrap machine.
In section 5, we apply the abstract theory of sections 2--4 to chromatic
homotopy theory. After some generalities, we use the additive $p$-derivation
of section 4 to derive a generalization of a conjecture of May about nilpotence in $H_{\infty}$-rings.
We then apply the ``bootstrap machine'' to the $1$-semiadditive $\infty$-category $\Sp_{\T\left(n\right)}$, to show that it is $\infty$-semiadditive, and deduce that $\T\left(n\right)$-homology of $\pi$-finite spaces depends only on the $n$-th Postnikov truncation. Finally, we consider localizations with respect to general weak rings. We show, among other things, that in this setting $1$-semiadditivity implies $\infty$-semiadditivity, and that various notions of ``bounded height'' coincide.
\subsection{Acknowledgments}
We would like to thank Tobias Barthel, Agn{\`e}s Beaudry, Gijs Heuts,
and Nathaniel Stapleton for useful discussions, and Shay Ben Moshe for his valuable comments on an earlier draft of the manuscript. We would especially
like to thank Michael Hopkins, for suggesting this question and for
useful discussions. Finally, we thank the anonymous referee for his/her careful reading of the manuscript and the many helpful comments and suggestions.
The first author is partially supported by the Adams Fellowship of
the Israeli Academy of Science. The second author is supported by
the Alon Fellowship and ISF grant 1588/18. The third author is supported
by ISF grant 1650/15.
The second author would like to thank the Isaac Newton Institute for
Mathematical Sciences, Cambridge, for support and hospitality during
the programme ``Homotopy Harnessing Higher Structures'', where work
on this paper was undertaken. This work was supported by EPSRC grant
no.~EP/K032208/1.
\subsection{Terminology and Notation}
Throughout the paper we work in the framework of $\infty$-categories (a.k.a. quasicategories), introduced by A. Joyal \cite{joyalquasicat}, and extensively developed by Lurie in \cite{htt} and \cite{ha}. We shall also use the following terminology and notation:
\begin{enumerate}
\item We use the term \emph{isomorphism} for an invertible morphism of an
$\infty$-category (i.e. an equivalence).
\item We say that a space $A$ is
\begin{enumerate}
\item $\left(-2\right)$-finite, if it is contractible.
\item $m$-finite for $m\ge-1$, if $\pi_{0}A$ is finite and all the fibers
of the diagonal map $\Delta_{A}\colon A\to A\times A$ are $\left(m-1\right)$-finite
(for $m\ge0$, this is equivalent to $A$ having finitely many components,
each of them $m$-truncated with finite homotopy groups).
\item $\pi$-finite, if it is $m$-finite for some integer $m\ge-2$.
\end{enumerate}
\item We say that a $\pi$-finite space $A$ is a $p$-space, if all the
homotopy groups of $A$ are $p$-groups.
\item Given a map of spaces $q\colon A\to B$, for every $b\in B$ we denote
by $q^{-1}\left(b\right)$ the homotopy fiber of $q$ over $b$.
\item For $m\ge-2$, we say that a map of spaces $q\colon A\to B$ is $m$-finite (resp. $\pi$-finite) if $q^{-1}(b)$ is $m$-finite (resp. $\pi$-finite) for all $b\in B$.
\item Given an $\infty$-category $\mathcal{C}$, we say that $\mathcal{C}$
admits all $q$-limits (resp. $q$-colimits) if it admits all limits
(resp. colimits) of shape $q^{-1}\left(b\right)$ for all $b\in B$.
\item Given a functor $F\colon\mathcal{C}\to\mathcal{D}$ of $\infty$-categories,
we say that $F$ preserves $q$-colimits (resp. $q$-limits) if it
preserves all colimits (resp. limits) of shape $q^{-1}\left(b\right)$
for all $b\in B$.
\item We use the notation
\[
f\colon X\oto gY\oto hZ
\]
to denote that $f\colon X\to Z$ is the composition $h\circ g$ (which
is well defined up to a contractible space of choices). We use similar
notation for composition of more than two morphisms.
\item Given functors $F,G\colon\mathcal{C}\to\mathcal{D}$ and $H,K\colon\mathcal{D}\to\mathcal{E}$,
and natural transformations $\alpha\colon F\to G$ and $\beta\colon H\to K$,
we denote their horizontal composition by $\beta\star\alpha\colon HF\to KG$.
The vertical composition of natural transformations is denoted simply
by juxtaposition.
\item For a symmetric monoidal $\infty$-category $\mathcal{C}$, we denote
by $\calg(\mathcal{C})$ the $\infty$-category of $\mathbb{E}_{\infty}$-algebras
in $\mathcal{C}$. We denote by $\cocalg(\mathcal{C})=\calg(\mathcal{C}^{op})^{op}$
the $\infty$-category of $\bb E_{\infty}$-coalgebras in $\mathcal{C}$,
where $\mathcal{C}^{op}$ is endowed with the canonical symmetric
monoidal structure induced from $\mathcal{C}$.
\item For an abelian group $A$ and $k\ge0$, we denote by $B^{k}A$ the
Eilenberg--MacLane space with $k$-th homotopy group equal to $A$.
\end{enumerate}
\section{Norms and Integration}
\label{sec:norms_and_integration}
In this section, we develop an abstract formal framework of norms on
functors between $\infty$-categories and the operation of integration
on maps, that such norms induce. This framework abstracts, axiomatizes, and generalizes
the theory of norms and integrals arising from ambidexterity developed
in \cite[Section 4]{HopkinsLurie}. We develop a ``calculus'' for such integrals
and study their functoriality properties and interaction with monoidal
structures.
\subsection{Normed Functors and Integration}
\subsubsection{Norms and Iso-Norms}
We begin by fixing some terminology regarding adjunctions of $\infty$-categories.
\begin{defn}
Let $F\colon\mathcal{C}\to\mathcal{D}$ be a functor of $\infty$-categories.
\begin{enumerate}
\item By a \emph{left adjoint} to $F$, we mean a pair $\left(L,u\right)$,
where $L\colon\mathcal{D}\to\mathcal{C}$ is a functor and
\[
u\colon\Id_{\mathcal{D}}\to F\circ L
\]
is a unit natural transformation in the sense of \cite[Definition 5.2.2.7]{htt}.
\item By a \emph{right adjoint} to $F$, we mean a pair $\left(R,c\right)$,
where $R\colon\mathcal{D}\to\mathcal{C}$ is a functor and
\[
c\colon F\circ R\to\Id_{\mathcal{D}}
\]
a counit natural transformation (i.e. satisfying the dual of \cite[Definition 5.2.2.7]{htt}).
\end{enumerate}
\end{defn}
Given a datum of a left adjoint $\left(L,u\right)$, there exists
a map $c\colon L\circ F\to\Id_{\mathcal{C}}$, such that $u$ and
$c$ satisfy the zig-zag identities up to homotopy. It also follows
that $c$ is a counit map exhibiting $\left(F,c\right)$ as
a right adjoint to $L$. This counit map $c$ is unique up to homotopy,
and we shall therefore sometimes speak of ``the'' associated counit
map (in fact, the space of such maps together with a homotopy witnessing
\emph{one} of the zig-zag identities is contractible \cite[Proposition 4.4.7]{RiehlV}).
We shall similarly speak of the unit map $u\colon\Id_{\mathcal{C}}\to R\circ F$
associated with a right adjoint $\left(R,c\right)$.
Adjoint functors can be composed in the following (usual) sense:
\begin{defn}
\label{def:Adj_Composition}Given a pair of composable functors
\[
\xymatrix{\mathcal{C}\ar[r]^{F} & \mathcal{D}\ar[r]^{F'} & \mathcal{E}}
,
\]
with left adjoints $\left(L,u\right)$ and $\left(L',u'\right)$ respectively,
the composite map
\[
u''\colon\Id_{\mathcal{E}}\oto{u'}F'L'\oto uF'FLL',
\]
which is well defined up to homotopy, is a unit map exhibiting $LL'$
as left adjoint to $F'F$. We define the counit map of the composition
of right adjoints in a similar way.
\end{defn}
The central notion we are about to study in this section is the following:
\begin{defn}
\label{def:Normed_Functor}Given $\infty$-categories $\mathcal{C}$
and $\mathcal{D}$, a \emph{normed functor}
\[
q\colon\mathcal{D}\nto\mathcal{C},
\]
is a functor $q^{*}\colon\mathcal{C}\to\mathcal{D}$ together with
a left adjoint $\left(q_{!},u_{!}^{q}\right)$, a right adjoint $\left(q_{*},c_{*}^{q}\right)$,
and a natural transformation
\[
\nm_{q}\colon q_{!}\to q_{*},
\]
which we call a \emph{norm}. We say that $q$ is \emph{iso-normed},
if $\nm_{q}$ is a natural isomorphism. For $X\in \mathcal{C}$, we also write
$X_{q}=q_{!}q^{*}X$, and denote by $c_{!}^{q}\colon q_{!}q^{*}\to\Id$
and $u_{*}^{q}\colon\Id\to q_{*}q^{*}$, the associated counit and
unit of the respective adjunctions. We drop the superscript $q$ whenever
it is clear from the context.
\end{defn}
\begin{rem}
In subsequent sections, we shall sometimes abuse language and refer
to $\nm_{q}$ as a \emph{norm on $q^{*}$} and to $q^{*}$ itself
(with the data of $\nm_{q}$) as a normed functor. Since the left
and right adjoints of $q^{*}$ are essentially unique (when they exist),
this seems to be a rather harmless convention.
\end{rem}
There is a useful criterion for detecting when a normed functor is
iso-normed.
\begin{lem}
\label{lem:Iso_Normed_Criterion} A normed functor $q\colon\mathcal{D}\nto\mathcal{C}$
is iso-normed if and only if the norm $\nm_{q}\colon q_{!}\to q_{*}$
is an isomorphism at $q^{*}X$ for all $X\in\mathcal{C}$.
\end{lem}
\begin{proof}
The ``only if'' part is clear. For the ``if'' part, consider the
two diagrams
\[
\xymatrix@C=3pc{ & q_{!}q^{*}q_{*}\ar[d]_{\nm_{q}}^{\wr}\ar[r]^{\ c_{*}} & q_{!}\ar[d]_{\nm_{q}}\\
q_{*}\ar[r]^{u_{*}\ } & q_{*}q^{*}q_{*}\ar[r]^{\ c_{*}} & q_{*},
}
\qquad \qquad
\xymatrix@C=3pc{q_{!}\ar[d]^{\nm_{q}}\ar[r]^{u_{!}\ } & q_{!}q^{*}q_{!}\ar[d]_{\wr}^{\nm_{q}}\ar[r]^{\ c_{!}} & q_{!}\\
q_{*}\ar[r]^{u_{!}\ } & q_{*}q^{*}q_{!},
}
\]
which commute by naturality of the (co)unit maps. By the zig-zag identities,
the composition along the bottom row in the left diagram is the identity.
Thus, the left diagram shows that $\nm_{q}$ has a right inverse.
Similarly, the right diagram shows that $\nm_{q}$ has a left inverse
and therefore $\nm_{q}$ is an isomorphism.
\end{proof}
Given a functor $q^{*}\colon\mathcal{C}\to\mathcal{D}$ with a left
adjoint $\left(q_{!},u_{!}^{q}\right)$ and a right adjoint $\left(q_{*},c_{*}^{q}\right)$,
the data of a natural transformation $\nm_{q}\colon q_{!}\to q_{*}$
is equivalent to the data of its mate $\nu_{q}\colon q^{*}q_{!}\to\Id$.
Moreover,
\begin{lem}
\label{lem:Norm_Counit} Let $q\colon\mathcal{D}\nto\mathcal{C}$
be a normed functor. For every $Y\in\mathcal{D}$, the map $\nm_{q}\colon q_{!}\to q_{*}$
is an isomorphism at $Y\in\mathcal{D}$ if and only if the mate $\nu_{q}\colon q^{*}q_{!}\to\Id$
is a counit map at $Y$. Namely, for all $X\in\mathcal{C}$, the composition
\[
\map_{\mathcal{C}}\left(X,q_{!}Y\right)\oto{q^{*}}\map_{\mathcal{D}}\left(q^{*}X,q^{*}q_{!}Y\right)\oto{\nu\circ-}\map_{\mathcal{D}}\left(q^{*}X,Y\right)
\]
is a homotopy equivalence.
\end{lem}
\begin{proof}
For every $X\in\mathcal{C}$, consider the commutative diagram in
the homotopy category of spaces:
\[
\xymatrix@C=4pc{\map_{\mathcal{C}}\left(X,q_{!}Y\right)\ar[d]^{q^{*}}\ar[r]^{\nm_{q}\circ-} & \map_{\mathcal{C}}\left(X,q_{*}Y\right)\ar[d]^{q^{*}}\ar[dr]^{\sim}\\
\map_{\mathcal{D}}\left(q^{*}X,q^{*}q_{!}Y\right)\ar[r]^{\nm_{q}\circ-}\ar@/_{2pc}/[rr]_{\nu_{q}\circ-} & \map_{\mathcal{D}}\left(q^{*}X,q^{*}q_{*}Y\right)\ar[r]^{c_{*}\circ-} & \map_{\mathcal{D}}\left(q^{*}X,Y\right).
}
\]
By the Yoneda lemma, $\nm_{q}$ is an isomorphism at $Y$ if and only
if the top map in the diagram is an isomorphism for all $X\in\mathcal{C}$.
By 2-out-of-3, this is the case if and only if the composition of the top map
and the diagonal map is an isomorphism for all $X$. Since the diagram
commutes, this holds if and only if the composition of the left vertical
map with the long bottom map is an isomorphism for all $X$, which,
by definition, holds if and only if $\nu_{q}$ is a counit at $Y$.
\end{proof}
\begin{notation}
When $\nm_{q}$ is an isomorphism at $q^*X$, and hence $\nu_{q}$ is
a counit at $X$, we denote the associated \emph{unit} by $\mu_{q,X}\colon X\to q_{!}q^{*}X=X_{q}$.
If $q$ is iso-normed, we let $\mu_{q}\colon\Id\to q_{!}q^{*}$ be
the unit natural transformation associated with $\nu_{q}$. As usual,
we drop the subscript $q$, whenever the map is understood from the
context.
\end{notation}
\begin{rem}
We will use the two points of view, that of a norm $\nm_{q}\colon q_{!}\to q_{*}$
and that of a ``wrong way counit'' $\nu_{q}\colon q^{*}q_{!}\to\Id$
interchangeably. Each point of view has its own advantages. We note
that the definition using $\nu_{q}$ seems to be slightly more general
as it is available even if $q^{*}$ does not (a priori) admit a right
adjoint. In practice, we are mainly interested in situations where
$\nu_{q}$ is indeed a counit map for an adjunction, exhibiting $q_{!}$
as a right adjoint of $q^{*}$. Thus, the gain in generality is rather
negligible.
\end{rem}
\begin{defn}
\label{def:Norm_Construction}We define the identity normed functor
and composition of normed functors (up to homotopy) as follows.
\begin{enumerate}
\item (Identity) For every $\infty$-category $\mathcal{C}$, the identity
normed functor $\Id\colon\mathcal{C}\nto\mathcal{C}$ consists of
the identity functor $\Id\colon\mathcal{C}\to\mathcal{C}$ viewed
as a left and right adjoint to itself using the identity natural transformation
$\Id\to\Id$ as the (co)unit map and with the identity natural transformation
$\Id\to\Id$ as the norm.
\item (Composition) Given a pair of composable normed functors
\[
\xymatrix{\mathcal{E}\ \ar@{>->}[r]^{p} & \mathcal{D}\ \ar@{>->}[r]^{q} & \mathcal{C}}
,
\]
we define their composition $qp\colon\mathcal{E}\nto\mathcal{C}$
by composing the adjunctions (\defref{Adj_Composition})
\[
\left(qp\right)^{*}=p^{*}q^{*},\quad\left(qp\right)_{!}=q_{!}p_{!},\quad\left(qp\right)_{*}=q_{*}p_{*}
\]
and take the norm map to be the horizontal composition of the norms
(the order does not matter)
\[
q_{!}p_{!}\oto{\nm_{q}}q_{*}p_{!}\oto{\nm_{p}}q_{*}p_{*}.
\]
We denote the norm of the composite by $\nm_{qp}$. If $p$ and $q$
are iso-normed, then so is $qp$.
\end{enumerate}
\end{defn}
\begin{rem}
\label{rem:Cat_Norm}It is possible to define an $\infty$-category
$\widehat{\cat}_{\infty}^{\nm}$, whose objects are $\infty$-categories
and morphisms are normed functors, such that the above constructions
give the identity morphisms and composition in the homotopy category.
This $\infty$-category captures the higher coherences manifest in
the above definitions. We intend to elaborate on this point in a future
work, but for the purposes of this one, which will not use the higher
coherences in any way, we shall be content with the above explicit
definitions up to homotopy.
\end{rem}
\subsubsection{Integration}
The main feature of iso-normed functors is that they allow us to define
a formal notion of ``integration'' of maps.
\begin{defn}
Let $q\colon\mathcal{D}\nto\mathcal{C}$ be an iso-normed functor.
For every $X,Y\in\mathcal{C}$, we define an \emph{integral} map
\[
\int\limits _{q}\colon\map_{\mathcal{D}}\left(q^{*}X,q^{*}Y\right)\to\map_{\mathcal{C}}\left(X,Y\right),
\]
which is natural in $X$ and $Y$, as the composition
\[
\map_{\mathcal{D}}\left(q^{*}X,q^{*}Y\right)\oto{q_{*}}\map_{\mathcal{C}}\left(q_{*}q^{*}X,q_{*}q^{*}Y\right)\oto{\nm_{q}^{-1}}\map_{\mathcal{C}}\left(q_{*}q^{*}X,q_{!}q^{*}Y\right)\oto{c_{!}\circ-\circ u_{*}}\map_{\mathcal{C}}\left(X,Y\right).
\]
\end{defn}
\begin{rem}
\label{rem:Integration_Unit}Alternatively, using the \emph{wrong
way unit }$\mu_{q}\colon\Id\to q_{!}q^{*}$, one can define the integral
as the composition
\[
\map_{\mathcal{D}}\left(q^{*}X,q^{*}Y\right)\oto{q_{!}}\map_{\mathcal{C}}\left(q_{!}q^{*}X,q_{!}q^{*}Y\right)\oto{c_{!}\circ-\circ\mu}\map_{\mathcal{C}}\left(X,Y\right).
\]
\end{rem}
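In the motivating example, where $q^{*}\colon\mathcal{C}\to\fun\left(A,\mathcal{C}\right)$ is the constant-diagram functor along a finite set $A$ and $\mathcal{C}$ is semiadditive, both formulas reduce to the familiar sum of a finite family of maps:
\[
\int\limits _{q}f=\sum_{a\in A}f_{a}\quad\in\hom_{h\mathcal{C}}\left(X,Y\right),\qquad f=\left(f_{a}\right)_{a\in A}\colon q^{*}X\to q^{*}Y.
\]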
As a special case we have
\begin{defn}
Let $q\colon\mathcal{D}\nto\mathcal{C}$ be an iso-normed functor.
For every $X\in\mathcal{C}$, we define a map
\[
|q|_{X}\colon X\to X
\]
by
\[
|q|_{X}\coloneqq \int\limits _{q}q^{*}\Id_{X}=\int\limits _{q}\Id_{q^{*}X}.
\]
These are the components of the natural endomorphism $|q|=c_{!}^{q}\circ\mu_{q}$
of $\Id_{\mathcal{C}}$.
\end{defn}
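In the example of a finite set $A$ (via the constant-diagram functor, as above), we have
\[
|q|_{X}=\int\limits _{q}\Id_{q^{*}X}=\sum_{a\in A}\Id_{X},
\]
i.e. $|q|_{X}$ is ``multiplication by the cardinality of $A$'', consistent with the notation $|A|$ used in the introduction.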
Integration satisfies a form of ``homogeneity''.
\begin{prop}
[Homogeneity]\label{prop:Homogenity}Let $q\colon\mathcal{D}\nto\mathcal{C}$
be an iso-normed functor and let $X,Y,Z\in\mathcal{C}$.
\begin{enumerate}
\item For all maps $f\colon q^{*}X\to q^{*}Y$ and $g\colon Y\to Z$ we
have
\[
g\circ\left(\int\limits _{q}f\right)=\int\limits _{q}\left(q^{*}g\circ f\right)\quad\in\hom_{h\mathcal{C}}\left(X,Z\right).
\]
\item For all maps $f\colon X\to Y$ and $g\colon q^{*}Y\to q^{*}Z$ we
have
\[
\left(\int\limits _{q}g\right)\circ f=\int\limits _{q}\left(g\circ q^{*}f\right)\quad\in\hom_{h\mathcal{C}}\left(X,Z\right).
\]
\end{enumerate}
\end{prop}
\begin{proof}
For (1), consider the commutative diagram
\[
\xymatrix@C=3pc{X\ar[d]^{\mu}\ar[r]^{\mu} & q_{!}q^{*}X\ar[d]^{f}\ar[r]^{f} & q_{!}q^{*}Y\ar[d]^{g}\ar[r]^{c_{!}} & Y\ar[d]^{g}\\
q_{!}q^{*}X\ar[r]^{f} & q_{!}q^{*}Y\ar[r]^{g} & q_{!}q^{*}Z\ar[r]^{c_{!}} & Z.
}
\]
The composition along the top and then right path is $g\circ\int\limits _{q}f$,
while the composition along the left and then bottom path is $\int\limits _{q}\left(q^{*}g\circ f\right)$
(see \remref{Integration_Unit}).
For (2), consider the diagram
\[
\xymatrix@C=3pc{X\ar[d]^{f}\ar[r]^{\mu} & q_{!}q^{*}X\ar[d]^{f}\ar[r]^{f} & q_{!}q^{*}Y\ar[d]^{g}\ar[r]^{g} & q_{!}q^{*}Z\ar[d]^{c_{!}}\\
Y\ar[r]^{\mu} & q_{!}q^{*}Y\ar[r]^{g} & q_{!}q^{*}Z\ar[r]^{c_{!}} & Z
}
\]
and apply an analogous argument.
\end{proof}
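In the example of a finite set $A$, where integration is summation, homogeneity specializes to the distributivity of composition over finite sums of maps:
\[
g\circ\sum_{a\in A}f_{a}=\sum_{a\in A}\left(g\circ f_{a}\right),\qquad\left(\sum_{a\in A}g_{a}\right)\circ f=\sum_{a\in A}\left(g_{a}\circ f\right).
\]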
Integration also satisfies a form of ``Fubini's Theorem''.
\begin{prop}
[Higher Fubini's Theorem]\label{prop:Fubini} Given a pair of composable
iso-normed functors
\[
\xymatrix{\mathcal{E}\ \ar@{>->}[r]^{p} & \mathcal{D}\ \ar@{>->}[r]^{q} & \mathcal{C}}
,
\]
for all $X,Y\in\mathcal{C}$, and $f\colon p^{*}q^{*}X\to p^{*}q^{*}Y$,
we have
\[
\int\limits _{q}\left(\int\limits _{p}f\right)=\int\limits _{qp}f\qquad\in\hom_{h\mathcal{C}}\left(X,Y\right).
\]
\end{prop}
\begin{proof}
Since $q$ and $p$ are iso-normed, we can construct the following
diagram
\[
\xymatrix@C=3pc{ & q_{*}p_{*}p^{*}q^{*}X\ar[r]^{f} & q_{*}p_{*}p^{*}q^{*}Y\ar[d]_{\nm_{p}^{-1}}\ar[r]^{\nm_{qp}^{-1}} & q_{!}p_{!}p^{*}q^{*}Y\ar@{=}[d]\ar[rd]^{c_{!}^{qp}}\\
X\ar[dr]_{u_{*}^{q}}\ar[ur]^{u_{*}^{qp}} & & q_{*}p_{!}p^{*}q^{*}Y\ar[d]_{c_{!}^{p}}\ar[r]^{\nm_{q}^{-1}} & q_{!}p_{!}p^{*}q^{*}Y\ar[d]_{c_{!}^{p}} & Y.\\
& q_{*}q^{*}X\ar[uu]_{u_{*}^{p}}\ar[r]^{\int\limits _{p}\negmedspace f} & q_{*}q^{*}Y\ar[r]^{\nm_{q}^{-1}} & q_{!}q^{*}Y\ar[ru]_{c_{!}^{q}}
}
\]
The triangles and the bottom right square commute for formal reasons.
The top right square commutes by the way norms are composed (\defref{Norm_Construction}(2))
and the left rectangle commutes by definition of $\int\limits _{p}f$.
Thus, the composition along the top path, which is $\int\limits _{qp}f$,
is homotopic to the composition along the bottom path, which is $\int\limits _{q}\left(\int\limits _{p}f\right)$.
\end{proof}
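In the example of finite sets, with $p$ and $q$ induced by maps of finite sets $E\to B\to\ast$, Fubini's theorem is the familiar interchange of the order of summation:
\[
\sum_{e\in E}f_{e}=\sum_{b\in B}\ \sum_{e\in p^{-1}\left(b\right)}f_{e}.
\]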
\subsection{Ambidextrous Squares \& Beck-Chevalley Conditions}
In this section we study functoriality properties of norms and integrals
and develop further the ``calculus of integration''.
\subsubsection{Beck-Chevalley Conditions}
We begin by recalling some standard material regarding commuting squares
involving adjoint functors (e.g. see beginning of \cite[Section 7.3.1]{htt}).
A commutative square of functors
\[
\qquad \quad
\vcenter{
\xymatrix@C=3pc{\mathcal{C}\ar[d]_{q^{*}}\ar[r]^{F_{\mathcal{C}}} & \tilde{\mathcal{C}}\ar[d]^{\tilde{q}^{*}}\\
\mathcal{D}\ar[r]^{F_{\mathcal{D}}} & \tilde{\mathcal{D}}
}}
\qquad\left(\square\right)
\]
is, formally, the data of a natural isomorphism
\[
F_{\mathcal{D}}q^{*}\xrightarrow{\,\smash{\raisebox{-0.5ex}{\ensuremath{\scriptstyle\sim}}}\,}\tilde{q}^{*}F_{\mathcal{C}}.
\]
If the vertical functors admit left adjoints $q_{!}\dashv q^{*}$
and $\tilde{q}_{!}\dashv\tilde{q}^{*}$ (suppressing the units),
we get a $\bc_{!}$ (Beck-Chevalley) natural transformation
\[
\beta_{!}\colon\tilde{q}_{!}F_{\mathcal{D}}\oto{u_!^q}
\tilde{q}_{!}F_{\mathcal{D}}q^*q_!\xrightarrow{\,\smash{\raisebox{-0.5ex}{\ensuremath{\scriptstyle\sim}}}\,}
\tilde{q}_{!}\tilde{q}^{*}F_{\mathcal{C}}q_! \oto{c_!^{\tilde{q}}}
F_{\mathcal{C}}q_{!}.
\]
Similarly, if the vertical functors admit right adjoints $q^{*}\dashv q_{*}$
and $\tilde{q}^{*}\dashv\tilde{q}_{*}$, we get a $\bc_{*}$ (Beck-Chevalley)
natural transformation
\[
\beta_{*}\colon F_{\mathcal{C}}q_{*}\oto{u_*^{\tilde{q}}}
\tilde{q}_*\tilde{q}^*F_{\mathcal{C}}q_{*}\xrightarrow{\,\smash{\raisebox{-0.5ex}{\ensuremath{\scriptstyle\sim}}}\,}
\tilde{q}_*F_{\mathcal{D}}q^*q_{*}\oto{c_*^q}
\tilde{q}_{*}F_{\mathcal{D}}.
\]
\begin{defn}
We say that the square $\square$ satisfies the $\text{BC}_{!}$ (resp.
$\text{BC}_{*}$) condition, if $q^{*}$ and $\tilde{q}^{*}$ admit
left (resp. right) adjoints and the map $\beta_{!}$ (resp. $\beta_{*}$)
is an isomorphism.
\end{defn}
\begin{rem}
It may happen that in $\square$, the horizontal functors $F_{\mathcal{C}}$
and $F_{\mathcal{D}}$ also have left or right adjoints. In this case,
there are other BC maps one can write. To avoid confusion, we will
always speak about the BC maps with respect to the \emph{vertical}
functors.
\end{rem}
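As a basic example, let $q^{*}$ and $\tilde{q}^{*}$ both be the constant-diagram functors along a finite set $A$, with the horizontal functors given by applying a single functor $F$ pointwise. Then $\beta_{!}$ is the canonical comparison map
\[
\beta_{!}\colon\coprod_{a\in A}F\left(Y_{a}\right)\to F\left(\coprod_{a\in A}Y_{a}\right),\qquad Y\in\fun\left(A,\mathcal{C}\right),
\]
so the $\bc_{!}$ condition amounts to $F$ preserving these finite coproducts (and dually for $\bc_{*}$ and finite products).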
Given a commutative square $\square$ as above, we denote $u_{*}=u_{*}^{q}$
and $\tilde{u}_{*}=u_{*}^{\tilde{q}}$ and similarly for other (co)unit
maps (when they exist). It is an easy verification, using the zig-zag
identities, that the $\bc$-maps are compatible with these units and counits
in the following sense.
\begin{lem}
\label{lem:BC_Co_Units} Given a commutative square of functors $\square$,
such that $q^{*}$ and $\tilde{q}^{*}$ admit left (resp. right) adjoints,
the following four diagrams commute up to homotopy (when they are
defined)
\[
(1)\xymatrix{ & \red{F_{\mathcal{C}}}{}q_{*}q^{*}\ar[d]^{\beta_{*}}\\
\red{F_{\mathcal{C}}}{}\ar[ru]^{u_{*}}\ar[rd]_{\tilde{u}_{*}} & \tilde{q}_{*}\red{F_{\mathcal{D}}}{}q^{*}\ar[d]^{\wr}\\
& \tilde{q}_{*}\tilde{q}^{*}\red{F_{\mathcal{C}}}{}
}
\qquad
(2)\quad\xymatrix{\red{F_{\mathcal{D}}}{}q^{*}q_{*}\ar[d]^{\wr}\ar[rd]^{c_{*}}\\
\tilde{q}^{*}\red{F_{\mathcal{C}}}{}q_{*}\ar[d]^{\beta_{*}} & \red{F_{\mathcal{D}}}{}\\
\tilde{q}^{*}\tilde{q}_{*}\red{F_{\mathcal{D}}}{}\ar[ru]_{\tilde{c}_{*}}
}
(3)\xymatrix{ & \tilde{q}^{*}\tilde{q}_{!}\red{F_{\mathcal{D}}}{}\ar[d]^{\beta_{!}}\\
\red{F_{\mathcal{D}}}{}\ar[ru]^{\tilde{u}_{!}}\ar[rd]_{u_{!}} & \tilde{q}^{*}\red{F_{\mathcal{C}}}{}q_{!}\ar[d]^{\wr}\\
& \red{F_{\mathcal{D}}}{}q^{*}q_{!}
}
\qquad
(4)\quad\xymatrix{\tilde{q}_{!}\tilde{q}^{*}\red{F_{\mathcal{C}}}{}\ar[d]^{\wr}\ar[rd]^{\tilde{c}_{!}}\\
\tilde{q}_{!}\red{F_{\mathcal{D}}}{}q^{*}\ar[d]^{\beta_{!}} & \red{F_{\mathcal{C}}}{}.\\
\red{F_{\mathcal{C}}}{}q_{!}q^{*}\ar[ru]_{c_{!}}
}
\]
\end{lem}
The $\bc$ maps also satisfy some naturality properties with respect
to horizontal and vertical pasting, as well as multiplication and
exponentiation of squares. We begin with horizontal pasting. Given
a commutative diagram of $\infty$-categories and functors
\[
\quad \qquad \qquad
\vcenter{
\xymatrix@C=3pc{\mathcal{C}\ar[d]^{q^{*}}\ar[r]^{F_{\mathcal{C}}} & \mathcal{\tilde{C}}\ar[d]^{\tilde{q}^{*}}\ar[r]^{G_{\mathcal{C}}} & \mathcal{\tilde{\tilde{C}}}\ar[d]^{\tilde{\tilde{q}}^{*}}\\
\mathcal{D}\ar[r]^{F_{\mathcal{D}}} & \tilde{\mathcal{D}}\ar[r]^{G_{\mathcal{D}}} & \mathcal{\tilde{\tilde{D}}},
}}
\qquad\left(*\right)
\]
we call the outer square the \emph{horizontal pasting} of the
left and right small squares. The following is easy to verify.
\begin{lem}
\label{lem:Horizontal_Pasting_Formula}Given a horizontal pasting
diagram $\left(*\right)$ as above,
\begin{enumerate}
\item The $\bc_{!}$-map for the outer square is homotopic to the composition
of the $\bc_{!}$ maps for the left and right squares
\[
\tilde{\tilde{q}}_{!}\red{G_{\mathcal{D}}}{}\red{F_{\mathcal{D}}}{}\to\red{G_{\mathcal{C}}}{}\tilde{q}_{!}\red{F_{\mathcal{D}}}{}\to\red{G_{\mathcal{C}}}{}\red{F_{\mathcal{C}}}{}q_{!}.
\]
\item The $\bc_{*}$-map for the outer square is homotopic to the composition
of the $\bc_{*}$ maps for the left and right squares
\[
\red{G_{\mathcal{C}}}{}\red{F_{\mathcal{C}}}{}q_{*}\to\red{G_{\mathcal{C}}}{}\tilde{q}_{*}\red{F_{\mathcal{D}}}{}\to\tilde{\tilde{q}}_{*}\red{G_{\mathcal{D}}}{}\red{F_{\mathcal{D}}}{}.
\]
\end{enumerate}
\end{lem}
This immediately implies the following horizontal pasting lemma for
$\bc$ conditions.
\begin{cor}
\label{cor:Horizontal_Pasting_BC}Given a horizontal pasting diagram
$\left(*\right)$ as above, denote by $\square_{L}$, $\square_{R}$
and $\square$, the left, right and outer squares respectively.
\begin{enumerate}
\item If $\square_{L}$ and $\square_{R}$ satisfy the $\bc_{!}$ (resp.
$\bc_{*}$) condition, then so does $\square$.
\item If $\square_{R}$ and $\square$ satisfy the $\bc_{!}$ (resp. $\bc_{*}$)
condition and $G_{\mathcal{C}}$ is conservative, then so does $\square_{L}$.
\end{enumerate}
\end{cor}
We now turn to vertical pasting. Given a commutative diagram of $\infty$-categories
and functors
\[
\qquad \qquad \qquad
\vcenter{
\xymatrix@C=3pc{\mathcal{C}\ar[d]_{q^{*}}\ar[r]^{F_{\mathcal{C}}} & \mathcal{\tilde{C}}\ar[d]^{\tilde{q}^{*}}\\
\mathcal{D}\ar[d]_{p^{*}}\ar[r]^{F_{\mathcal{D}}} & \tilde{\mathcal{D}}\ar[d]^{\tilde{p}^{*}}\\
\mathcal{E}\ar[r]^{F_{\mathcal{E}}} & \tilde{\mathcal{E}},
}}
\qquad\left(**\right)
\]
we call the big outer square (i.e. rectangle) the \emph{vertical pasting} of the top
and bottom small squares. The following is easy to verify.
\begin{lem}
\label{lem:Vertical_Pasting_Formula}Given a vertical pasting diagram
$\left(**\right)$ as above,
\begin{enumerate}
\item The $\bc_{!}$-map for the outer square is homotopic to the composition
of the $\bc_{!}$ maps for the top and bottom squares
\[
\tilde{q}_{!}\tilde{p}_{!}\red{F_{\mathcal{E}}}{}\to\tilde{q}_{!}\red{F_{\mathcal{D}}}{}p_{!}\to\red{F_{\mathcal{C}}}{}q_{!}p_{!}.
\]
\item The $\bc_{*}$-map for the outer square is homotopic to the composition
of the $\bc_{*}$ maps for the top and bottom squares
\[
\red{F_{\mathcal{C}}}{}q_{*}p_{*}\to\tilde{q}_{*}\red{F_{\mathcal{D}}}{}p_{*}\to\tilde{q}_{*}\tilde{p}_{*}\red{F_{\mathcal{E}}}{}.
\]
\end{enumerate}
\end{lem}
Again, this immediately implies the following vertical pasting lemma
for $\bc$ conditions.
\begin{cor}
\label{cor:Vertical_Pasting_BC}Given a vertical pasting diagram $\left(**\right)$
as above, denote by $\square_{T}$, $\square_{B}$ and $\square$,
the top, bottom, and outer squares respectively. If $\square_{T}$
and $\square_{B}$ satisfy the $\bc_{!}$ (resp. $\bc_{*}$) condition,
then so does $\square$.
\end{cor}
Finally, the $\bc$ conditions are also natural with respect to multiplication
and exponentiation.
\begin{lem}
\label{lem:Exponential_Rule_BC}Given a pair of squares corresponding under the adjunction
$
(-)\times\mathcal{E} \dashv \fun(\mathcal{E},-),
$
\[
\qquad \qquad
\vcenter{
\xymatrix@R=2.25pc@C=3pc{\mathcal{C}\times\mathcal{E}\ar[d]_{q^{*}\times\Id}\ar[r]^{F_{\mathcal{C}}} & \tilde{\mathcal{C}}\ar[d]^{\tilde{q}^{*}}\\
\mathcal{D}\times\mathcal{E}\ar[r]^{F_{\mathcal{D}}} & \tilde{\mathcal{D}}
}}
\quad\left(\square_{1}\right),\qquad \qquad
\vcenter{
\xymatrix@C=3pc{\mathcal{C}\ar[d]_{q^{*}}\ar[r]^{\widehat{F}_{\mathcal{C}}\quad} & \fun(\mathcal{E},\tilde{\mathcal{C}})\ar[d]^{\left(\tilde{q}^{*}\right)^{\mathcal{E}}}\\
\mathcal{D}\ar[r]^{\widehat{F}_{\mathcal{D}}\quad} & \fun(\mathcal{E},\tilde{\mathcal{D}})
}}
\quad\left(\square_{2}\right),
\]
the square $\square_{1}$ satisfies the $\bc_{!}$ (resp. $\bc_{*}$)
condition if and only if $\square_{2}$ satisfies the $\bc_{!}$ (resp. $\bc_{*}$)
condition.
\end{lem}
\begin{proof}
Under the canonical equivalence of $\infty$-categories
\[
\fun(\mathcal{D}\times\mathcal{E},\tilde{\mathcal{C}})\simeq\fun(\mathcal{D},\fun(\mathcal{E},\tilde{\mathcal{C}})),
\]
the $\text{BC}_{!}$ (resp. $\text{BC}_{*}$) map for $\square_{1}$
corresponds to the $\text{BC}_{!}$ (resp. $\text{BC}_{*}$) map of
$\square_{2}$ and isomorphisms correspond to isomorphisms.
\end{proof}
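Concretely, under the equivalence $\fun(\mathcal{C}\times\mathcal{E},\tilde{\mathcal{C}})\simeq\fun(\mathcal{C},\fun(\mathcal{E},\tilde{\mathcal{C}}))$
used in the proof, the functor $F_{\mathcal{C}}$ transposes to $\widehat{F}_{\mathcal{C}}$,
given informally by
\[
\widehat{F}_{\mathcal{C}}\left(X\right)\left(e\right)=F_{\mathcal{C}}\left(X,e\right),
\]
the right vertical functor of $\square_{2}$ is post-composition with $\tilde{q}^{*}$,
and its left (resp. right) adjoint, when it exists, is computed by post-composition
with $\tilde{q}_{!}$ (resp. $\tilde{q}_{*}$).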
\subsubsection{Normed and Ambidextrous Squares}
We now consider commuting squares of $\infty$-categories, where the
vertical functors are \emph{normed}.
\begin{defn}
\label{def:Normed_Ambi_Square}We define:
\end{defn}
\begin{enumerate}
\item A \emph{normed square} is a pair of normed functors $q\colon\mathcal{D}\nto\mathcal{C}$
and $\tilde{q}\colon\tilde{\mathcal{D}}\nto\tilde{\mathcal{C}}$,
together with a commutative diagram
\[
\qquad \quad
\vcenter{
\xymatrix@C=3pc{\mathcal{C}\ar[d]_{q^{*}}\ar[r]^{F_{\mathcal{C}}} & \tilde{\mathcal{C}}\ar[d]^{\tilde{q}^{*}}\\
\mathcal{D}\ar[r]^{F_{\mathcal{D}}} & \tilde{\mathcal{D}}.
}}
\qquad\left(*\right)
\]
It is \emph{iso-normed} if $q$ and $\tilde{q}$ are iso-normed.
\item Given a normed square as in (1), we have an associated \emph{norm-diagram}:
\[
\qquad \quad
\vcenter{
\xymatrix@C=3pc{\red{F_{\mathcal{C}}}{}q_{!}\ar[r]^{\nm_{q}} & \red{F_{\mathcal{C}}}{}q_{*}\ar[d]^{\beta_{*}}\\
\tilde{q}_{!}\red{F_{\mathcal{D}}}{}\ar[u]^{\beta_{!}}\ar[r]^{\nm_{\tilde{q}}} & \tilde{q}_{*}\red{F_{\mathcal{D}}}{}.
}}
\qquad\left(\square\right)
\]
\item A \emph{weakly ambidextrous }square is a normed square, such that
the associated norm diagram $\square$ commutes up to homotopy. An
\emph{ambidextrous square }is a weakly ambidextrous square that is
iso-normed (note that an ambidextrous square satisfies the $\bc_{!}$
condition if and only if it satisfies the $\bc_{*}$ condition).
\end{enumerate}
\begin{rem}
We shall often abuse language and say that $\left(*\right)$ is a
normed (or ambidextrous) square, meaning by this that we are also
given normed functors $q$ and $\tilde{q}$ as in the definition.
\end{rem}
As with any definition regarding norms, we can recast the definition
of an ambidextrous square in terms of wrong way counits. As this will
be used in the sequel, we shall spell this out.
\begin{lem}
\label{lem:Triangle_Unit_Counit_Norm_Diagram} Let $\left(*\right)$
be a normed square as in \defref{Normed_Ambi_Square}(1). Consider
the diagrams (where $\mathbin{\rotatebox[origin=c]{90}{$\triangle$}}$ is defined only when $\left(*\right)$
is iso-normed).
\[
\qquad \qquad
\vcenter{
\xymatrix{\red{F_{D}}{}q^{*}q_{!}\ar[rd]^{\nu_{q}}\\
\tilde{q}^{*}\red{F_{\mathcal{C}}}{}q_{!}\ar[u]^{\wr} & \red{F_{D}}\\
\tilde{q}^{*}\tilde{q}_{!}\red{F_{\mathcal{D}}}{}\ar[u]^{\beta_{!}}\ar[ru]_{\nu_{\tilde{q}}}}}
\qquad \left(\mathbin{\rotatebox[origin=c]{-90}{$\triangle$}}\right), \qquad \qquad
\vcenter{
\xymatrix{ & \tilde{q}_{!}\tilde{q}^{*}\red{F_{\mathcal{C}}}{}\ar[d]^{\wr}\\
\red{F_{\mathcal{C}}}{}\ar[ru]^{\mu_{\tilde{q}}}\ar[rd]_{\mu_{q}} & \tilde{q}_{!}\red{F_{\mathcal{D}}}{}q^{*}\ar[d]^{\beta_{!}}\\
& \red{F_{\mathcal{C}}}{}q_{!}q^{*}.
}}
\qquad\left(\mathbin{\rotatebox[origin=c]{90}{$\triangle$}}\right)
\]
\begin{enumerate}
\item The norm-diagram $\square$ commutes if and only if the diagram $\mathbin{\rotatebox[origin=c]{-90}{$\triangle$}}$
commutes.
\item If $\left(*\right)$ is iso-normed, satisfies the $\bc_{!}$ condition
and the norm-diagram $\square$ commutes, then the diagram $\mathbin{\rotatebox[origin=c]{90}{$\triangle$}}$
commutes.
\end{enumerate}
\end{lem}
\begin{proof}
We begin with (1). The norm-diagram $\square$ commutes if and only
if the two maps $\tilde{q}_{!}F_{\mathcal{D}}\to\tilde{q}_{*}F_{\mathcal{D}}$
are homotopic. This holds if and only if their mates $\tilde{q}^{*}\tilde{q}_{!}F_{\mathcal{D}}\to F_{\mathcal{D}}$
are homotopic. To compute the mate, one applies $\tilde{q}^{*}$ and
post-composes with the counit $\tilde{c}_{*}\colon\tilde{q}^{*}\tilde{q}_{*}\to\Id$
(of the right way adjunction). Now, consider the diagram
\[
\xymatrix@C=3pc{\red{F_{D}}{}q^{*}q_{!}\ar[r]^{\nm_{q}} & \red{F_{D}}{}q^{*}q_{*}\ar[d]^{\wr}\ar[rd]^{c_{*}}\\
\tilde{q}^{*}\red{F_{\mathcal{C}}}{}q_{!}\ar[u]^{\wr}\ar[r]^{\nm_{q}} & \tilde{q}^{*}\red{F_{\mathcal{C}}}{}q_{*}\ar[d]^{\beta_{*}} & \red{F_{D}}.\\
\tilde{q}^{*}\tilde{q}_{!}\red{F_{\mathcal{D}}}{}\ar[u]^{\beta_{!}}\ar[r]^{\nm_{\tilde{q}}} & \tilde{q}^{*}\tilde{q}_{*}\red{F_{\mathcal{D}}}{}\ar[ru]_{\tilde{c}_{*}}
}
\]
The triangle on the right commutes by \lemref{BC_Co_Units}(2). The
composition of the top maps is $F_{\mathcal{D}}\nu_{q}$ and of the
bottom maps is $\nu_{\tilde{q}}F_{\mathcal{D}}$. Hence, $\square$
commutes, if and only if $\mathbin{\rotatebox[origin=c]{-90}{$\triangle$}}$ commutes.
We now turn to (2). To check the commutativity of $\mathbin{\rotatebox[origin=c]{90}{$\triangle$}}$,
we may replace $\beta_{!}$ with its inverse. By assumption, all maps
in $\square$ are isomorphisms. Thus, the map $\beta_{!}^{-1}$ in
$\mathbin{\rotatebox[origin=c]{90}{$\triangle$}}$ is homotopic to the composition
\[
\red{F_{\mathcal{C}}}q_{!}\oto{\nm_{q}}\red{F_{\mathcal{C}}}q_{*}\oto{\beta_{*}}\tilde{q}_{*}\red{F_{\mathcal{D}}}\oto{\left(\nm_{\tilde{q}}\right)^{-1}}\tilde{q}_{!}\red{F_{\mathcal{D}}}.
\]
Unwinding the definitions, this exhibits $\beta_{!}^{-1}$ as the $\bc_{*}$-map of the \emph{wrong
way} adjunctions $q^{*}\dashv q_{!}$ and $\tilde{q}^{*}\dashv\tilde{q}_{!}$.
The commutativity of $\mathbin{\rotatebox[origin=c]{90}{$\triangle$}}$ now follows from the compatibility
of $\bc$-maps with units (\lemref{BC_Co_Units}(1)).
\end{proof}
The main feature of ambidextrous squares is that they behave well
with respect to the integral operation.
\begin{prop}
\label{prop:Integral_Ambi}Let
\[
\qquad \quad
\vcenter{
\xymatrix@C=3pc{\mathcal{C}\ar[d]_{q^{*}}\ar[r]^{F_{\mathcal{C}}} & \tilde{\mathcal{C}}\ar[d]^{\tilde{q}^{*}}\\
\mathcal{D}\ar[r]^{F_{\mathcal{D}}} & \tilde{\mathcal{D}}
}}
\qquad\left(\square\right)
\]
be an ambidextrous square that satisfies the $\bc_{!}$ condition
(and hence the $\bc_{*}$ condition). For all $X,Y\in\mathcal{C}$
and $f\colon q^{*}X\to q^{*}Y,$ we have
\[
F_{\mathcal{C}}\left(\int_{q}f\right)=\int_{\tilde{q}}F_{\mathcal{D}}\left(f\right)\quad\in\hom_{h\tilde{\mathcal{C}}}\left(F_{\mathcal{C}}X,F_{\mathcal{C}}Y\right).
\]
In particular, for all $X\in\mathcal{C}$, we have
\[
F_{\mathcal{C}}\left(|q|_{X}\right)=|\tilde{q}|_{F_{\mathcal{C}}\left(X\right)}\quad\in\hom_{h\tilde{\mathcal{C}}}\left(F_{\mathcal{C}}X,F_{\mathcal{C}}X\right).
\]
\end{prop}
\begin{proof}
Since $\square$ is iso-normed, we can construct the following diagram:
\[
\xymatrix@C=3pc{ & \red{F_{\mathcal{C}}}{}q_{*}q^{*}X\ar[d]_{\beta_{*}}^{\wr}\ar[r]^{f} & \red{F_{\mathcal{C}}}{}q_{*}q^{*}Y\ar[d]_{\beta_{*}}^{\wr}\ar[r]^{\nm_{q}^{-1}} & \red{F_{\mathcal{C}}}{}q_{!}q^{*}Y\ar[dr]^{c_{!}}\\
\red{F_{\mathcal{C}}}{}X\ar[ru]^{u_{*}}\ar[dr]_{\tilde{u}_{*}} & \tilde{q}_{*}\red{F_{\mathcal{D}}}{}q^{*}X\ar@{-}[d]^{\wr}\ar[r]^{f} & \tilde{q}_{*}\red{F_{\mathcal{D}}}{}q^{*}Y\ar@{-}[d]^{\wr}\ar[r]^{\nm_{\tilde{q}}^{-1}} & \tilde{q}_{!}\red{F_{\mathcal{D}}}{}q^{*}Y\ar@{-}[d]^{\wr}\ar[u]_{\wr}^{\beta_{!}} & \red{F_{\mathcal{C}}}{}Y.\\
& \tilde{q}_{*}\tilde{q}^{*}\red{F_{\mathcal{C}}}{}X\ar[r]^{f} & \tilde{q}_{*}\tilde{q}^{*}\red{F_{\mathcal{C}}}{}Y\ar[r]^{\nm_{\tilde{q}}^{-1}} & \tilde{q}_{!}\tilde{q}^{*}\red{F_{\mathcal{C}}}{}Y\ar[ru]_{\tilde{c}_{!}}
}
\]
The left and right triangles commute by the compatibility of $\bc$ maps
with (co)units (\lemref{BC_Co_Units}, diagrams (1) and (4) respectively).
The top right square commutes by the assumption that the square $\square$ is
ambidextrous and satisfies the $\bc$ conditions, and the rest of
the squares commute for trivial reasons. Hence, the composition along
the top path is homotopic to the composition along the bottom path,
which proves the first claim. The second claim follows from the first
applied to the map $f=q^{*}\Id_{X}$.
\end{proof}
\subsubsection{Calculus of Normed Squares}
As discussed before, squares of functors can be pasted horizontally
and vertically. We extend these operations to \emph{normed} squares
and consider their compatibility with the notion of ambidexterity.
We begin with horizontal pasting. Given normed functors
\[
q\colon\mathcal{D}\nto\mathcal{C},\quad\tilde{q}\colon\tilde{\mathcal{D}}\nto\tilde{\mathcal{C}},\quad\tilde{\tilde{q}}\colon\mathcal{\tilde{\tilde{D}}}\nto\tilde{\tilde{\mathcal{C}}},
\]
and a commutative diagram
\[
\qquad \qquad
\vcenter{
\xymatrix@C=3pc{\mathcal{C}\ar[d]^{q^{*}}\ar[r]^{F_{\mathcal{C}}} & \mathcal{\tilde{C}}\ar[d]^{\tilde{q}^{*}}\ar[r]^{G_{\mathcal{C}}} & \mathcal{\tilde{\tilde{C}}}\ar[d]^{\tilde{\tilde{q}}^{*}}\\
\mathcal{D}\ar[r]^{F_{\mathcal{D}}} & \tilde{\mathcal{D}}\ar[r]^{G_{\mathcal{D}}} & \mathcal{\tilde{\tilde{D}}},
}}
\qquad\left(*\right)
\]
we call the big outer \emph{normed} square the \emph{horizontal pasting}
of the left and right small \emph{normed} squares. We have the following
horizontal pasting lemma for ambidexterity.
\begin{lem}
[Horizontal Pasting]\label{lem:Horizontal_Pasting_Ambi}Let $\left(*\right)$
be a horizontal pasting diagram of normed squares as above. We denote
by $\square_{L}$, $\square_{R}$ and $\square$, the left, right,
and outer normed squares respectively. If $\square_{L}$ and $\square_{R}$
are (weakly) ambidextrous, then so is $\square$.
\end{lem}
\begin{proof}
Consider the following diagram composed of whiskerings of the norm
diagrams of $\square_{L}$ and $\square_{R}$ (with all horizontal
maps the respective $\bc$-maps).
\[
\xymatrix{\red{G_{\mathcal{C}}}{}\red{F_{\mathcal{C}}}{}q_{!}\ar[d]^{\nm_{q}} & \red{G_{\mathcal{C}}}{}\tilde{q}_{!}\red{F_{\mathcal{D}}}{}\ar[d]^{\nm_{\tilde{q}}}\ar[l] & \tilde{\tilde{q}}_{!}\red{G_{\mathcal{D}}}{}\red{F_{\mathcal{D}}}{}\ar[d]^{\nm_{\tilde{\tilde{q}}}}\ar[l]\\
\red{G_{\mathcal{C}}}{}\red{F_{\mathcal{C}}}{}q_{*}\ar[r] & \red{G_{\mathcal{C}}}{}\tilde{q}_{*}\red{F_{\mathcal{D}}}{}\ar[r] & \tilde{\tilde{q}}_{*}\red{G_{\mathcal{D}}}{}\red{F_{\mathcal{D}}}.
}
\]
By \lemref{Horizontal_Pasting_Formula}, the outer square is the norm
diagram for $\square$, which implies the claim.
\end{proof}
We now turn to vertical pasting. Given normed functors
\[
q\colon\mathcal{D}\nto\mathcal{C},\quad\tilde{q}\colon\tilde{\mathcal{D}}\nto\mathcal{\tilde{C}},\quad p\colon\mathcal{E}\nto\mathcal{D},\quad\tilde{p}\colon\tilde{\mathcal{E}}\nto\tilde{\mathcal{D}}
\]
and a commutative diagram
\[
\qquad \qquad
\vcenter{
\xymatrix@C=3pc{\mathcal{C}\ar[d]_{q^{*}}\ar[r]^{F_{\mathcal{C}}} & \mathcal{\tilde{C}}\ar[d]^{\tilde{q}^{*}}\\
\mathcal{D}\ar[d]_{p^{*}}\ar[r]^{F_{\mathcal{D}}} & \tilde{\mathcal{D}}\ar[d]^{\tilde{p}^{*}}\\
\mathcal{E}\ar[r]^{F_{\mathcal{E}}} & \tilde{\mathcal{E}},
}}
\qquad\left(**\right)
\]
we call the big outer normed square, with respect to the compositions
of normed functors $qp$ and $\tilde{q}\tilde{p}$, the \emph{vertical
pasting} of the top and bottom small normed squares. We have the following
vertical pasting lemma for ambidexterity.
\begin{lem}
[Vertical Pasting]\label{lem:Vertical_Pasting_Ambi}Let $\left(**\right)$
be a vertical pasting diagram of normed squares as above. We denote
by $\square_{T}$, $\square_{B}$ and $\square$, the top, bottom, and outer normed squares respectively. If $\square_{T}$ and $\square_{B}$
are (weakly) ambidextrous, then so is $\square$.
\end{lem}
\begin{proof}
Consider the following diagram composed of whiskerings of the norm
diagrams of $\square_{T}$ and $\square_{B}$ (with all horizontal
maps the respective $\bc$-maps).
\[
\xymatrix{\red{F_{\mathcal{C}}}{}q_{!}p_{!}\ar[d]^{\nm_{q}} & \tilde{q}_{!}\red{F_{\mathcal{D}}}{}p_{!}\ar[d]^{\nm_{\tilde{q}}}\ar[l] & \tilde{q}_{!}\tilde{p}_{!}\red{F_{\mathcal{E}}}{}\ar[d]^{\nm_{\tilde{q}}}\ar[l]\\
\red{F_{\mathcal{C}}}{}q_{*}p_{!}\ar[d]^{\nm_{p}}\ar[r] & \tilde{q}_{*}\red{F_{\mathcal{D}}}{}p_{!}\ar[d]^{\nm_{p}} & \tilde{q}_{*}\tilde{p}_{!}\red{F_{\mathcal{E}}}{}\ar[d]^{\nm_{\tilde{p}}}\ar[l]\\
\red{F_{\mathcal{C}}}{}q_{*}p_{*}\ar[r] & \tilde{q}_{*}\red{F_{\mathcal{D}}}{}p_{*}\ar[r] & \tilde{q}_{*}\tilde{p}_{*}\red{F_{\mathcal{E}}}.
}
\]
By \lemref{Vertical_Pasting_Formula}, the outer diagram is the norm
diagram for $\square$. Thus, it is enough to check that all four
small squares commute. The top right and bottom left squares commute
for trivial reasons. The top left and bottom right squares are whiskerings
of the norm diagrams of $\square_{T}$ and $\square_{B}$ respectively
and hence commute by assumption.
\end{proof}
\subsection{Monoidal Structure and Duality}
In this section, we study the interaction of norms and integration
with (symmetric) monoidal structures on the source and target $\infty$-categories.
Under suitable hypotheses, this interaction allows us to reduce questions
about ambidexterity to questions about duality.
\subsubsection{Tensor Normed Functors}
\begin{defn}
\label{def:Tensor_Normed_Functor}Let $\mathcal{C}$ and $\mathcal{D}$
be monoidal $\infty$-categories. A \emph{$\otimes$-normed functor}
from $\mathcal{D}$ to $\mathcal{C}$ is a normed functor $q\colon\mathcal{D}\nto\mathcal{C}$,
such that $q^{*}$ is monoidal (and hence $q_{!}$ is colax monoidal
by the dual of \cite[Corollary 7.3.2.7]{ha}) and for all $Y\in\mathcal{D}$
and $X\in\mathcal{C}$, the compositions of the canonical maps
\[
q_{!}\left(Y\otimes\left(q^{*}X\right)\right)\to\left(q_{!}Y\right)\otimes\left(q_{!}q^{*}X\right)\oto{\Id\otimes c_{!}}\left(q_{!}Y\right)\otimes X
\]
and
\[
q_{!}\left(\left(q^{*}X\right)\otimes Y\right)\to\left(q_{!}q^{*}X\right)\otimes\left(q_{!}Y\right)\oto{c_{!}\otimes\Id}X\otimes\left(q_{!}Y\right)
\]
are isomorphisms.
\end{defn}
\begin{rem}
The above definition does not depend on the norm and is actually just
a property of the functor $q^{*}$. However, we shall only be interested
in this property in the context of normed functors.
\end{rem}
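\begin{rem}
The isomorphisms required in \defref{Tensor_Normed_Functor} are (left and
right) \emph{projection formulas} for the adjunction $q_{!}\dashv q^{*}$,
asserting that the pushforward $q_{!}$ commutes with tensoring by objects
pulled back from $\mathcal{C}$, e.g.
\[
q_{!}\left(Y\otimes q^{*}X\right)\xrightarrow{\,\smash{\raisebox{-0.5ex}{\ensuremath{\scriptstyle\sim}}}\,}\left(q_{!}Y\right)\otimes X.
\]
\end{rem}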
\begin{notation}
To make diagrams involving (co)units more readable, we shall employ
the following graphical convention. When writing a unit map of an
adjunction whiskered by some functors, we enclose in parentheses the
affected terms in the target. Similarly, when writing a counit map
of an adjunction whiskered by some functors, we underline the affected
terms in the source.
\end{notation}
We adopt the definitions and terminology of \cite{HopkinsLurie} regarding
duality in monoidal $\infty$-categories. In the situation of \defref{Tensor_Normed_Functor}, substituting $q^*\one_{\mathcal{C}}$ for $Y$ gives a natural isomorphism from the functor $q_{!}q^{*}$ to the functor $\one_{q}\otimes-$,
where $\one_{q}=q_{!}q^{*}\one_{\mathcal{C}}$. We can therefore consider
the map
\[
\varepsilon\colon\one_{q}\otimes\one_{q}\simeq q_{!}\underline{q^{*}q_{!}}q^{*}\one_{\mathcal{C}}\oto{\nu}\underline{q_{!}q^{*}}\one_{\mathcal{C}}\oto{c_{!}}\one_{\mathcal{C}}.
\]
\begin{prop}
\label{prop:Iso_Normed_Duality}Let $q\colon\mathcal{D}\nto\mathcal{C}$
be a $\otimes$-normed functor of monoidal $\infty$-categories. The following are equivalent:
\begin{enumerate}
\item $\nm_{q}$ is a natural isomorphism (i.e. $q$ is
iso-normed).
\item $\nm_{q}$ is an isomorphism at $q^{*}\one_{\mathcal{C}}$.
\item The map $\varepsilon\colon\one_{q}\otimes\one_{q}\to\one_{\mathcal{C}}$
is a duality datum (exhibiting $\one_{q}$ as a self dual object in
$\mathcal{C}$).
\end{enumerate}
\end{prop}
\begin{proof}
(1) $\implies$ (2) is obvious. Assume (2). The map $\nm_{q}\colon q_{!}\to q_{*}$
has a mate $\nu\colon q^{*}q_{!}\to\Id$. By \lemref{Norm_Counit},
since $\nm_{q}$ is an isomorphism at $q^{*}\one_{\mathcal{C}}$,
the map $\nu$ is a counit map at $q^{*}\one_{\mathcal{C}}$ and has
an associated unit map $\mu_{\one}\colon\one_{\mathcal{C}}\to q_{!}q^{*}\one_{\mathcal{C}}$.
Let
\[
\eta\colon\one_{\mathcal{C}}\oto{\mu_{\one}}\left(q_{!}q^{*}\right)\one_{\mathcal{C}}\oto{u_{!}}q_{!}\left(q^{*}q_{!}\right)q^{*}\one_{\mathcal{C}}=\one_{q}\otimes\one_{q}.
\]
We prove (3) by showing that $\varepsilon$ and $\eta$ satisfy the
zig-zag identities. As above, we identify $\one_{q}$ with $q_{!}q^{*}\one_{\mathcal{C}}$
and $\one_{q}\otimes\one_{q}$ with $q_{!}q^{*}q_{!}q^{*}\one_{\mathcal{C}}$.
For the first zig-zag identity, consider the diagram
\[
\xymatrix{q_{!}q^{*}\one_{\mathcal{C}}\ar[rr]^{\mu_{\one}}\ar@{=}[rrd]\ar@/^{2pc}/[rrr]^{\eta\star\Id} & & (q_{!}\underline{q^{*})q_{!}}q^{*}\one_{\mathcal{C}}\ar[r]^{u_{!}\quad}\ar[d]^{\nu} & q_{!}\left(q^{*}q_{!}\right)\underline{q^{*}q_{!}}q^{*}\one_{\mathcal{C}}\ar[d]^{\nu}\ar@/^{2.5pc}/@<1ex>[dd]^{\Id\star\varepsilon}\\
& & q_{!}q^{*}\one_{\mathcal{C}}\ar[r]^{u_{!}\quad}\ar@{=}[rd] & q_{!}(q^{*}\underline{q_{!})q^{*}}\one_{\mathcal{C}}\ar[d]^{c_{!}}\\
& & & q_{!}q^{*}\one_{\mathcal{C}}.
}
\]
The square commutes by the interchange law for natural transformations.
The upper triangle commutes by the definition of $\mu_{\one}$ (i.e. the
corresponding zig-zag identity at $\one_{\mathcal{C}}$) and the bottom
one by the zig-zag identities for $u_{!}$ and $c_{!}$. For the second
zig-zag identity, consider
a similar diagram
\[
\xymatrix{q_{!}q^{*}\one_{\mathcal{C}}\ar[rr]^{\mu_{\one}}\ar@{=}[rrd]\ar@/^{2pc}/[rrr]^{\Id\star\eta} & & q_{!}\underline{q^{*}(q_{!}}q^{*})\one_{\mathcal{C}}\ar[r]^{u_{!}\quad}\ar[d]^{\nu} & q_{!}\underline{q^{*}q_{!}}\left(q^{*}q_{!}\right)q^{*}\one_{\mathcal{C}}\ar[d]^{\nu}\ar@/^{2.5pc}/@<1ex>[dd]^{\varepsilon\star\Id}\\
 & & q_{!}q^{*}\one_{\mathcal{C}}\ar[r]^{u_{!}\quad}\ar@{=}[rd] & \underline{q_{!}(q^{*}}q_{!})q^{*}\one_{\mathcal{C}}\ar[d]^{c_{!}}\\
& & & q_{!}q^{*}\one_{\mathcal{C}}.
}
\]
The pieces of this diagram commute by arguments analogous to the previous ones. Now, assume (3). By \lemref{Iso_Normed_Criterion} and \lemref{Norm_Counit},
it is enough to show that $\nu$ is a counit at $q^{*}X$ for all
$X\in\mathcal{C}$. Consider the following diagram
\[
\scalebox{0.9}{
\xymatrix{ & \map\left(Y,\one_{q}\otimes X\right)\ar[dl]_{\sim}\ar[rr]^{\one_{q}\otimes-} & & \map\left(\one_{q}\otimes Y,\one_{q}\otimes\one_{q}\otimes X\right)\ar[ld]_{\sim}\ar[dd]^{\varepsilon\circ-}\\
\map\left(Y,q_{!}q^{*}X\right)\ar[r]^{q^{*}}\ar@{-->}[rd] & \map\left(q^{*}Y,\underline{q^{*}q_{!}}q^{*}X\right)\ar[d]^{\nu\circ-}\ar[r]^{q_{!}} & \map\left(q_{!}q^{*}Y,q_{!}\underline{q^{*}q_{!}}q^{*}X\right)\ar[d]^{\nu\circ-}\\
& \map\left(q^{*}Y,q^{*}X\right)\ar[r]^{q_{!}}\ar[rd]_{\sim} & \map\left(q_{!}q^{*}Y,\underline{q_{!}q^{*}}X\right)\ar[d]^{c_{!}\circ-} & \map\left(\one_{q}\otimes Y,X\right)\ar[ld]_{\sim}\\
& & \map\left(q_{!}q^{*}Y,X\right)
}}
\]
The triangles commute by definition and the rest by naturality. The
composition along the top and then right path is an isomorphism since
$\varepsilon$ is an evaluation map of a duality datum on $\one_{q}$.
Thus, the dashed arrow is an isomorphism by 2-out-of-3, which proves
that $\nu$ is a counit at $q^{*}X$.
\end{proof}
\begin{rem}
A similar result is given in \cite[Proposition 5.1.8]{HopkinsLurie}.
\end{rem}
\subsubsection{Tensor Normed Squares}
The following is the analogous notion to a normed square in the monoidal
setting.
\begin{defn}
A \emph{$\otimes$-normed square} is a pair of $\otimes$-normed functors
$q\colon\mathcal{D}\nto\mathcal{C}$ and $\tilde{q}\colon\tilde{\mathcal{D}}\nto\tilde{\mathcal{C}}$
and a commutative square of monoidal $\infty$-categories and monoidal
functors
\[
\qquad \quad
\vcenter{
\xymatrix@C=3pc{\mathcal{C}\ar[d]_{q^{*}}\ar[r]^{F_{\mathcal{C}}} & \tilde{\mathcal{C}}\ar[d]^{\tilde{q}^{*}}\\
\mathcal{D}\ar[r]^{F_{\mathcal{D}}} & \tilde{\mathcal{D}}.
}}
\qquad\left(*\right)
\]
For a $\otimes$-normed square $\left(*\right)$ as above, we define
a colax natural transformation of functors
\[
\theta\colon\left(-\right)_{\tilde{q}}F_{\mathcal{C}}=\tilde{q}_{!}\tilde{q}^{*}F_{\mathcal{C}}\simeq\tilde{q}_{!}F_{\mathcal{D}}q^{*}\oto{\beta_{!}}F_{\mathcal{C}}q_{!}q^{*}=F_{\mathcal{C}}\left(-\right)_{q}.
\]
Using the isomorphisms from \defref{Tensor_Normed_Functor} we define
the natural isomorphisms
\[
L_{q}\colon\left(X\otimes Y\right)_{q}=q_{!}q^{*}\left(X\otimes Y\right)\simeq q_{!}\left(q^{*}X\otimes q^{*}Y\right)\xrightarrow{\,\smash{\raisebox{-0.5ex}{\ensuremath{\scriptstyle\sim}}}\,} q_{!}q^{*}X\otimes Y=X_{q}\otimes Y,
\]
\[
R_{q}\colon\left(X\otimes Y\right)_{q}=q_{!}q^{*}\left(X\otimes Y\right)\simeq q_{!}\left(q^{*}X\otimes q^{*}Y\right)\xrightarrow{\,\smash{\raisebox{-0.5ex}{\ensuremath{\scriptstyle\sim}}}\,} X\otimes q_{!}q^{*}Y=X\otimes Y_{q}.
\]
\end{defn}
We shall need a technical lemma regarding the compatibility of the
maps $L$, $R$, and $\theta$.
\begin{lem}
\label{lem:Tensor_Square_Compatibility}Let $\left(*\right)$ be a
$\otimes$-normed square as above. For all $X,Y\in\mathcal{C}$, the
following diagram:
\[
\scalebox{0.95}{
\xymatrix{F_{\mathcal{C}}\left(X\otimes Y\right)_{\tilde{q}\tilde{q}}\ar[d]^{\theta_{X\otimes Y}}\ar@{-}[r]^{\sim} & \left(F_{\mathcal{C}}\left(X\right)\otimes F_{\mathcal{C}}\left(Y\right)\right)_{\tilde{q}\tilde{q}}\ar[r]^{R_{\tilde{q}}} & \left(F_{\mathcal{C}}\left(X\right)\otimes F_{\mathcal{C}}\left(Y\right)_{\tilde{q}}\right)_{\tilde{q}}\ar[d]^{\Id\otimes\theta_{Y}}\ar[r]^{L_{\tilde{q}}} & F_{\mathcal{C}}\left(X\right)_{\tilde{q}}\otimes F_{\mathcal{C}}\left(Y\right)_{\tilde{q}}\ar[d]^{\Id\otimes\theta_{Y}}\\
F_{\mathcal{C}}\left(\left(X\otimes Y\right)_{q}\right)_{\tilde{q}}\ar[d]^{\theta_{\left(X\otimes Y\right)_{q}}}\ar[r]^{R_{q}} & F_{\mathcal{C}}\left(X\otimes Y_{q}\right)_{\tilde{q}}\ar[d]^{\theta_{X\otimes Y_{q}}}\ar@{-}[r]^{\sim} & \left(F_{\mathcal{C}}\left(X\right)\otimes F_{\mathcal{C}}\left(Y_{q}\right)\right)_{\tilde{q}}\ar[r]^{L_{\tilde{q}}} & F_{\mathcal{C}}\left(X\right)_{\tilde{q}}\otimes F_{\mathcal{C}}\left(Y_{q}\right)\ar[d]^{\theta_{X}\otimes\Id}\\
F_{\mathcal{C}}\left(\left(X\otimes Y\right)_{qq}\right)\ar[r]^{R_{q}} & F_{\mathcal{C}}\left(\left(X\otimes Y_{q}\right)_{q}\right)\ar[r]^{L_{q}} & F_{\mathcal{C}}\left(X_{q}\otimes Y_{q}\right)\ar@{-}[r]^{\sim} & F_{\mathcal{C}}\left(X_{q}\right)\otimes F_{\mathcal{C}}\left(Y_{q}\right)
}}
\]
commutes up to homotopy.
\end{lem}
\begin{proof}
The top right square commutes by naturality of $L_{\tilde{q}}$ and
the bottom left square commutes by naturality of $\theta$. We now
show the commutativity of the top left rectangle (the commutativity
of the bottom right rectangle is completely analogous). By unwinding
the definition of $R_{q}$, the top left rectangle is obtained by
applying $\left(-\right)_{\tilde{q}}$ to the following diagram
\[
\scalebox{0.95}{
\xymatrix{\left(F_{\mathcal{C}}\left(X\right)\otimes F_{\mathcal{C}}\left(Y\right)\right)_{\tilde{q}}\ar@{-}[d]^{\wr}\ar[r]\ar@/^{2pc}/[rr]^{R_{\tilde{q}}} & F_{\mathcal{C}}\left(X\right)_{\tilde{q}}\otimes F_{\mathcal{C}}\left(Y\right)_{\tilde{q}}\ar[d]^{\theta_{X}\otimes\theta_{Y}}\ar[r]^{\tilde{c}_{!}\otimes\Id} & F_{\mathcal{C}}\left(X\right)\otimes F_{\mathcal{C}}\left(Y\right)_{\tilde{q}}\ar[d]^{\Id\otimes\theta_{Y}}\\
F_{\mathcal{C}}\left(X\otimes Y\right)_{\tilde{q}}\ar[d]^{\theta_{X\otimes Y}} & F_{\mathcal{C}}\left(X_{q}\right)\otimes F_{\mathcal{C}}\left(Y_{q}\right)\ar@{-}[d]^{\wr}\ar[r]^{c_{!}\otimes\Id} & F_{\mathcal{C}}\left(X\right)\otimes F_{\mathcal{C}}\left(Y_{q}\right)\ar@{-}[d]^{\wr}\\
F_{\mathcal{C}}\left(\left(X\otimes Y\right)_{q}\right)\ar[r]\ar@/_{2pc}/[rr]_{R_{q}} & F_{\mathcal{C}}\left(X_{q}\otimes Y_{q}\right)\ar[r]^{c_{!}\otimes\Id} & F_{\mathcal{C}}\left(X\otimes Y_{q}\right).
}}
\]
The left rectangle commutes by the monoidality of $\theta$ and the
bottom right square commutes by naturality. The top right square is
a tensor product of two squares
\[
\qquad \quad
\vcenter{
\xymatrix@C=3pc{F_{\mathcal{C}}\left(X\right)_{\tilde{q}}\ar[d]^{\theta_{X}}\ar[r]^{\tilde{c}_{!}} & F_{\mathcal{C}}\left(X\right)\ar[d]^{\Id}\\
F_{\mathcal{C}}\left(X_{q}\right)\ar[r]^{c_{!}} & F_{\mathcal{C}}\left(X\right)
}}
\qquad\left(\square_{1}\right),\qquad \qquad
\vcenter{
\xymatrix@C=3pc{F_{\mathcal{C}}\left(Y\right)_{\tilde{q}}\ar[d]^{\theta_{Y}}\ar[r]^{\Id} & F_{\mathcal{C}}\left(Y\right)_{\tilde{q}}\ar[d]^{\theta_{Y}}\\
F_{\mathcal{C}}\left(Y_{q}\right)\ar[r]^{\Id} & F_{\mathcal{C}}\left(Y_{q}\right).
}}
\qquad\left(\square_{2}\right)
\]
The square $\square_{2}$ commutes for trivial reasons and the square
$\square_{1}$ commutes by the compatibility of $\bc$-maps with counits
(\lemref{BC_Co_Units}(4)).
\end{proof}
The main fact we shall use about $\otimes$-normed squares is the
following:
\begin{prop}
\label{prop:Tensor_Ambi}Let $\left(*\right)$ be a $\otimes$-normed square
as above. Assume that $\left(*\right)$ is weakly ambidextrous and
satisfies the $\bc_{!}$-condition. If $q$ is iso-normed, then $\tilde{q}$
is iso-normed and the $\bc_{*}$ condition is satisfied as well.
\end{prop}
\begin{proof}
Since the square satisfies the $\bc_{!}$-condition, the natural transformation
$\theta$ is an isomorphism. Observe that $\one_{\tilde{q}}\simeq F_{\mathcal{C}}\left(\one_{\mathcal{C}}\right)_{\tilde{q}}$
and consider the following diagram:
\[
\xymatrix@C=3pc{\one_{\tilde{q}}\otimes\one_{\tilde{q}}\ar[d]_{\theta\otimes\theta}^{\wr}\ar[r]^-{\quad L_{\tilde{q}}^{-1}R_{\tilde{q}}^{-1}\quad} & \tilde{q}_{!}\underline{\tilde{q}^{*}\tilde{q}_{!}}\tilde{q}^{*}F_{\mathcal{C}}\left(\one_{\mathcal{C}}\right)\ar[dd]^{\theta\star\theta}\ar[r]^-{\tilde{\nu}} & \underline{\tilde{q}_{!}\tilde{q}^{*}}F_{\mathcal{C}}\left(\one_{\mathcal{C}}\right)\ar[dd]^{\theta}\ar[rd]^{\tilde{c}_{!}}\\
F_{\mathcal{C}}\left(\one_{q}\right)\otimes F_{\mathcal{C}}\left(\one_{q}\right)\ar[d]^{\wr} & & & F_{\mathcal{C}}\left(\one_{\mathcal{C}}\right)\simeq\one_{\tilde{\mathcal{C}}}.\\
F_{\mathcal{C}}\left(\one_{q}\otimes\one_{q}\right)\ar[r]^-{L_{q}^{-1}R_{q}^{-1}} & F_{\mathcal{C}}\left(q_{!}\underline{q^{*}q_{!}}q^{*}\one_{\mathcal{C}}\right)\ar[r]^-{\nu} & F_{\mathcal{C}}\left(\underline{q_{!}q^{*}}\one_{\mathcal{C}}\right)\ar[ru]_{c_{!}}
}
\]
The middle rectangle and the triangle commute by the compatibility
of $\bc$-maps with counits (\lemref{BC_Co_Units}(4)). The left rectangle
commutes by applying \lemref{Tensor_Square_Compatibility} with $X=Y=\one_{\mathcal{C}}$.
By \propref{Iso_Normed_Duality}, $\varepsilon_{q}\colon\one_{q}\otimes\one_{q}\to\one_{\mathcal{C}}$
is a duality datum and since $F_{\mathcal{C}}$ is monoidal,
\[
F_{\mathcal{C}}\left(\varepsilon_{q}\right)\colon F_{\mathcal{C}}\left(\one_{q}\right)\otimes F_{\mathcal{C}}\left(\one_{q}\right)\to F_{\mathcal{C}}\left(\one_{\mathcal{C}}\right)\simeq\one_{\tilde{\mathcal{C}}}
\]
is a duality datum as well. The commutativity of the above diagram
identifies $F_{\mathcal{C}}\left(\varepsilon_{q}\right)$ with $\varepsilon_{\tilde{q}}$,
and hence $\varepsilon_{\tilde{q}}$ is a duality datum for $\one_{\tilde{q}}$.
By \propref{Iso_Normed_Duality} again, $\tilde{q}$ is iso-normed.
Finally, the $\bc_{*}$ condition is satisfied by 2-out-of-3 for the
norm diagram.
\end{proof}
\subsection{Amenability}
\begin{defn}
An iso-normed functor $q\colon\mathcal{D}\nto\mathcal{C}$ is called
\emph{amenable}, if $|q|$ is an isomorphism natural transformation.
\end{defn}
\begin{rem}
The name is inspired by the notion of amenability in geometric group
theory. Given an object $X\in\mathcal{C}$, the integral operation
\[
\int\limits _{q}\colon\map\left(q^{*}X,q^{*}X\right)\to\map\left(X,X\right)
\]
can be thought of intuitively as ``\emph{summation} over the fibers
of $q$''. Amenability allows us to ``\emph{average} over the fibers
of $q$'' by multiplying the integral with $|q|^{-1}$.
This is especially suggestive in the prototypical example of local-systems,
which we study in the next section.
\end{rem}
\begin{lem}
\label{lem:Amenable_Conservative}Let
\[
\xymatrix@C=3pc{\mathcal{C}\ar[d]_{q^{*}}\ar[r]^{F_{\mathcal{C}}} & \tilde{\mathcal{C}}\ar[d]^{\tilde{q}^{*}}\\
\mathcal{D}\ar[r]^{F_{\mathcal{D}}} & \tilde{\mathcal{D}}
}
\]
be an ambidextrous square, such that $F_{\mathcal{C}}$ is conservative.
If $\tilde{q}$ is amenable, then $q$ is amenable.
\end{lem}
\begin{proof}
Given $X\in\mathcal{C}$, since the square is ambidextrous, we have
by \propref{Integral_Ambi},
\[
F_{\mathcal{C}}\left(|q|_{X}\right)=|\tilde{q}|_{F_{\mathcal{C}}\left(X\right)}.
\]
The claim follows from the assumption that $F_{\mathcal{C}}$ is conservative.
\end{proof}
The next result demonstrates how amenability can be profitably used
for ``averaging''.
\begin{thm}
[Higher Maschke's Theorem]\label{thm:Amenable_Section} Let $q\colon\mathcal{D}\nto\mathcal{C}$
be an iso-normed functor. If $q$ is amenable, then for every $X\in\mathcal{C}$
the counit map $c_{!}\colon q_{!}q^{*}X\to X$ has a section (i.e.
right inverse) up to homotopy. In particular, every object of $\mathcal{C}$
is a retract of an object in the essential image of $q_{!}$.
\end{thm}
\begin{proof}
Let $X\in\mathcal{C}$. By the zig-zag identities,
\[
q^{*}X\oto{u_{!}}q^{*}\underline{q_{!}q^{*}}X\oto{c_{!}}q^{*}X
\]
is the identity on $q^{*}X$. Integrating along $q$ and using \propref{Homogenity}(1),
we get
\[
|q|_{X}=\int\limits _{q}\Id_{\left(q^{*}X\right)}=\int\limits _{q}\left(q^{*}\left(c_{!}\right)_{X}\circ\left(u_{!}\right)_{q^{*}X}\right)=\left(c_{!}\right)_{X}\circ\int\limits _{q}\left(u_{!}\right)_{q^{*}X}.
\]
Hence, if $|q|_{X}$ is an isomorphism, then $\left(c_{!}\right)_{X}$
has a section up to homotopy.
\end{proof}
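Although the formalism of local systems is developed only in the next
section, it may be instructive to indicate already how \thmref{Amenable_Section}
earns its name. The following sketch is informal; all the facts about
representations that it uses are classical.
\begin{example}
Let $G$ be a finite group, let $k$ be a field, and take $\mathcal{C}$
to be the category of $k$-vector spaces. Then $\fun\left(BG,\mathcal{C}\right)$
is the category of $k$-linear representations of $G$ and, for the
base point $e\colon\pt\to BG$, the functor $e^{*}$ takes the underlying
vector space, while $e_{!}$ is induction, $e_{!}W\simeq k\left[G\right]\otimes_{k}W$.
The counit $c_{!}\colon e_{!}e^{*}V\to V$ is given by $g\otimes v\mapsto gv$,
and $|e|_{V}$ is multiplication by the order of $G$, as the fiber
of $e$ is the discrete set $G$ (compare \propref{Base_Change_Ambi}
below). Hence, $e$ is amenable precisely when $\left|G\right|$ is
invertible in $k$, in which case \thmref{Amenable_Section} provides
a section of $c_{!}$; unwinding the construction, one recovers the
classical averaging map
\[
v\mapsto\frac{1}{\left|G\right|}\sum_{g\in G}g\otimes g^{-1}v,
\]
exhibiting every representation as a retract of an induced one, as
in the usual proof of Maschke's theorem.
\end{example}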
\begin{thm}
[Cancellation Theorem]\label{thm:Ambi_Cancellation}Let
\[
\xymatrix{\mathcal{E}\ \ar@{>->}[r]^{p} & \mathcal{D}\ \ar@{>->}[r]^{q} & \mathcal{C}}
\]
be a pair of normed functors. If $p$ is amenable and $qp$
is iso-normed, then $q$ is iso-normed.
\end{thm}
\begin{proof}
The map $\nm_{qp}$ is given by the composition
\[
q_{!}p_{!}\oto{\nm_{q}}q_{*}p_{!}\oto{\nm_{p}}q_{*}p_{*}.
\]
Since $\nm_{qp}$ and $\nm_{p}$ are isomorphisms, so is $q_{!}p_{!}\oto{\nm_{q}}q_{*}p_{!}$;
that is, the component of $\nm_{q}$ at $p_{!}Y$ is an isomorphism
for every $Y\in\mathcal{E}$. By \thmref{Amenable_Section}, every
$X\in\mathcal{D}$ is a retract of $p_{!}Y$ for some $Y\in\mathcal{E}$.
Isomorphisms are closed under retracts, and so $\left(\nm_{q}\right)_{X}$
is an isomorphism for every $X\in\mathcal{D}$.
\end{proof}
\begin{rem}
This is essentially the same argument as the one used in the proof
of \cite[Proposition 4.4.16]{HopkinsLurie}.
\end{rem}
\section{Local-Systems and Ambidexterity}
\label{sec:local_systems}
The main examples of normed functors that we are interested in are
the ones provided by the theory of higher semiadditivity developed in \cite{HopkinsLurie}
and further in \cite{Harpaz}. In what follows, we first briefly recall
the relevant definitions and explain how they fit into the abstract
framework developed in the previous section. Then we apply the theory
of the previous section to this special case. The theory developed
in \cite{HopkinsLurie} is set up in a rather general framework of
Beck-Chevalley fibrations. Even though this framework fits into our
theory of normed functors, for concreteness and clarity, we shall
confine ourselves to the special case of local systems.
\subsection{Local-Systems and Canonical Norms}
Let $\mathcal{C}$ be an $\infty$-category and let $A$ be a space
viewed as an $\infty$-groupoid. We call $\fun\left(A,\mathcal{C}\right)$
the $\infty$-category of $\mathcal{C}$-valued \emph{local systems}
on $A$. Let $q\colon A\to B$ be a map of spaces and assume that
$\mathcal{C}$ admits all $q$-limits and $q$-colimits. The functor of precomposition with $q$, denoted by
\[
q^{*}\colon\fun\left(B,\mathcal{C}\right)\to\fun\left(A,\mathcal{C}\right),
\]
admits both a left adjoint $q_{!}$ and a right adjoint $q_{*}$ (given
by left and right Kan extension respectively). We shall define, after
\cite[Section 4.1]{HopkinsLurie}, a class of \emph{weakly $\mathcal{C}$-ambidextrous}
maps $q$, to which we associate a canonical norm map $\nm_{q}\colon q_{!}\to q_{*}$.
This norm map gives rise to a normed functor
\[
q^{\can}\colon\fun\left(A,\mathcal{C}\right)\nto\fun\left(B,\mathcal{C}\right).
\]
A map $q$ is called \emph{$\mathcal{C}$-ambidextrous} if it is weakly
$\mathcal{C}$-ambidextrous and the associated canonical norm is an
isomorphism (i.e. $q^{\can}$ is iso-normed).
\subsubsection{Base Change \& Canonical Norms}
We begin with some terminology regarding the operation of base change
for local-systems.
\begin{defn}
\label{def:Base_Change_Square}Given an $\infty$-category $\mathcal{C}$
and a pullback diagram of spaces
\[
\qquad \quad
\vcenter{
\xymatrix@C=3pc{\tilde{A}\ar[d]_{\tilde{q}}\ar[r]^{s_{A}} & A\ar[d]^{q}\\
\tilde{B}\ar[r]^{s_{B}} & B
}}
\qquad\left(*\right)
\]
the associated \emph{base-change square} (of $\mathcal{C}$-valued
local-systems) is
\[
\qquad \quad
\vcenter{
\xymatrix@C=3pc{\fun\left(B,\mathcal{C}\right)\ar[d]_{q^{*}}\ar[r]^{s_{B}^{*}} & \fun(\tilde{B},\mathcal{C})\ar[d]^{\tilde{q}^{*}}\\
\fun\left(A,\mathcal{C}\right)\ar[r]^{s_{A}^{*}} & \fun(\tilde{A},\mathcal{C}).
}}
\qquad\left(\square\right)
\]
\end{defn}
\begin{lem}
\label{lem:Base_Change_BC}Let $\mathcal{C}$ be an $\infty$-category
and let $\left(*\right)$ be a pullback diagram of spaces as in \defref{Base_Change_Square}
above. If $\mathcal{C}$ admits all $q$-colimits (resp. $q$-limits),
then the associated base-change square $\square$ satisfies the $\bc_{!}$
(resp. $\bc_{*}$) condition.
\end{lem}
\begin{proof}
For $\bc_{!}$ this is \cite[Proposition 4.3.3]{HopkinsLurie} (note
we only need $q$-colimits). For $\bc_{*}$ a completely analogous
argument works.
\end{proof}
The construction of the canonical norm rests on the following more
general construction.
\begin{defn}
\label{def:Diagonal_Induction}Let $q\colon A\to B$ be a map of spaces
and let $\delta\colon A\to A\times_{B}A$ be the diagonal of $q$. Let $\mathcal{C}$
be an $\infty$-category that admits all $q$-(co)limits and $\delta$-(co)limits.
Given an isomorphism natural transformation
\[
\nm_{\delta}\colon\delta_{!}\xrightarrow{\,\smash{\raisebox{-0.5ex}{\ensuremath{\scriptstyle\sim}}}\,}\delta_{*},
\]
we define the \emph{diagonally induced} norm map
\[
\nm_{q}\colon q_{!}\to q_{*}
\]
as follows. Consider the commutative diagram
\[
\xymatrix@C=3pc{A\ar[rd]^{\delta}\ar@/^{1pc}/@{=}[rrd]\ar@/_{1pc}/@{=}[rdd]\\
& A\times_{B}A\ar[d]^{\pi_{2}}\ar[r]^{\pi_{1}} & A\ar[d]^{q}\\
& A\ar[r]^{q} & B.
}
\]
To the iso-norm $\nm_{\delta}$ corresponds a wrong way unit map
$\mu_{\delta}\colon\Id\to\delta_{!}\delta^{*}$. By \lemref{Base_Change_BC},
the base change square associated with $\left(*\right)$ satisfies
the $\bc_{!}$ condition, and so we can define the composition
\[
\nu_{q}\colon q^{*}q_{!}\oto{\beta_{!}^{-1}}\left(\pi_{2}\right)_{!}\pi_{1}^{*}\oto{\mu_{\delta}}\left(\pi_{2}\right)_{!}\delta_{!}\delta^{*}\pi_{1}^{*}\xrightarrow{\,\smash{\raisebox{-0.5ex}{\ensuremath{\scriptstyle\sim}}}\,}\Id.
\]
We define $\nm_{q}\colon q_{!}\to q_{*}$ to be the mate of $\nu_{q}$
under the adjunction $q^{*}\dashv q_{*}$.
\end{defn}
\begin{rem}
\label{rem:Diagonal_Induction_Integral}In light of \cite[Remark 4.1.9]{HopkinsLurie},
we can informally say that the diagonally induced norm map on $q$
is obtained by integrating the identity map along the diagonal $\delta$.
Though we shall not use this perspective, it is helpful to keep it
in mind.
\end{rem}
Note that if $q\colon A\to B$ is $m$-truncated for some $m\ge-1$,
then $\delta$ is $\left(m-1\right)$-truncated. This allows us to
define canonical norm maps inductively on the level of truncatedness
of the map.
\begin{defn}
\label{def:Ambidexterity}Let $\mathcal{C}$ be an $\infty$-category
and $m\ge-2$ an integer. A map of spaces $q\colon A\to B$ is called
\begin{enumerate}
\item \emph{weakly $m$-$\mathcal{C}$-ambidextrous}, if $q$ is $m$-truncated,
$\mathcal{C}$ admits $q$-(co)limits and either of the two holds:
\begin{itemize}
\item $m=-2$, in which case the inverse of $q^{*}$ is both a left and
right adjoint of $q^{*}$. We define the \emph{canonical norm} map
on $q^{*}$ to be the identity of some inverse of $q^{*}$.
\item $m\ge-1$, and the diagonal $\delta\colon A\to A\times_{B}A$ of $q$
is \emph{$\left(m-1\right)$-$\mathcal{C}$-ambidextrous}. In
this case we define the \emph{canonical norm} on $q^{*}$ to be the
diagonally induced one from the canonical norm of $\delta$.
\end{itemize}
\item \emph{$m$-$\mathcal{C}$-ambidextrous}, if it is weakly $m$-$\mathcal{C}$-ambidextrous
and its canonical norm map is an isomorphism.
\end{enumerate}
A map of spaces $q\colon A\to B$ is called (weakly) $\mathcal{C}$-ambidextrous
if it is (weakly) $m$-$\mathcal{C}$-ambidextrous for some $m$.
\end{defn}
By \cite[Proposition 4.1.10 (5)]{HopkinsLurie}, the canonical norm
associated with a map $q\colon A\to B$ that is $m$-truncated for
some $m$ is independent of $m$.
\begin{defn}
In the situation of \defref{Ambidexterity}, given a map $q\colon A\to B$ that is weakly $\mathcal{C}$-ambidextrous, we define the associated \emph{canonical normed
functor}
\[
q_{\mathcal{C}}^{\can}\colon\fun\left(A,\mathcal{C}\right)\nto\fun\left(B,\mathcal{C}\right),
\]
by
\[
\left(q_{\mathcal{C}}^{\can}\right)^{*}=q^{*},\quad\left(q_{\mathcal{C}}^{\can}\right)_{!}=q_{!},\quad\left(q_{\mathcal{C}}^{\can}\right)_{*}=q_{*},
\]
and the norm map $\nm_{q}\colon q_{!}\to q_{*}$ the canonical norm
of \defref{Ambidexterity}.
\end{defn}
Note that the normed functor $q_{\mathcal{C}}^{\can}$ is iso-normed
if and only if $q$ is $\mathcal{C}$-ambidextrous. We add the following
definition.
\begin{defn}
Let $\mathcal{C}$ be an $\infty$-category. A $\mathcal{C}$-ambidextrous
map $q\colon A\to B$ is called \emph{$\mathcal{C}$-amenable} if
$q^{\can}$ is amenable.
\end{defn}
\begin{notation}
\label{not:Canonical_Norm_Notation}Given a weakly $\mathcal{C}$-ambidextrous
map of spaces $q\colon A\to B$, we write $q^{\can}$ for $q_{\mathcal{C}}^{\can}$
if $\mathcal{C}$ is understood from the context. We also write $\left(-\right)_{q}$,
$\int_{q}$ and $|q|$ instead of $\left(-\right)_{q^{\can}},$
$\int_{q^{\can}}$ and $|q^{\can}|$. For a map $q\colon A\to\pt$,
we shall also say that $A$ is (weakly) $\mathcal{C}$-ambidextrous
or amenable if $q$ is, and write $\left(-\right)_{A}$, $\int_{A}$,
and $|A|$ instead of $\left(-\right)_{q}$, $\int_{q}$,
and $|q|$.
\end{notation}
The next proposition ensures that the canonical norms are preserved
under base change, compositions and identity as in \defref{Norm_Construction}.
\begin{prop}
\label{prop:Canonical_Norms_Compatibility}Let $\mathcal{C}$ be an
$\infty$-category.
\begin{enumerate}
\item (Identity) Given an isomorphism of spaces $q\colon A\xrightarrow{\,\smash{\raisebox{-0.5ex}{\ensuremath{\scriptstyle\sim}}}\,} B$, the
map $q$ is $\mathcal{C}$-ambidextrous and its canonical
norm is the identity of the left and right adjoint inverse of $q^{*}$.
\item (Composition) Given (weakly) $\mathcal{C}$-ambidextrous maps $q\colon A\to B$
and $p\colon B\to C$, the composition $pq\colon A\to C$ is (weakly)
$\mathcal{C}$-ambidextrous and $\left(pq\right)^{\can}$ can be identified
with $p^{\can}q^{\can}$.
\item (Base-change) Let $\left(*\right)$ be a pullback diagram of spaces
as in \defref{Base_Change_Square}. If $q$ is (weakly) $\mathcal{C}$-ambidextrous,
then $\tilde{q}$ is (weakly) $\mathcal{C}$-ambidextrous and the
associated base-change square
\[
\xymatrix@C=3pc{\fun\left(B,\mathcal{C}\right)\ar[d]_{q^{*}}\ar[r]^{s_{B}^{*}} & \fun(\tilde{B},\mathcal{C})\ar[d]^{\tilde{q}^{*}}\\
\fun\left(A,\mathcal{C}\right)\ar[r]^{s_{A}^{*}} & \fun(\tilde{A},\mathcal{C})
}
\]
is (weakly) ambidextrous.
\end{enumerate}
\end{prop}
\begin{proof}
(1) follows directly from the definition. (2) is the content of \cite[Remark 4.2.4]{HopkinsLurie}.
(3) is a restatement of \cite[Remark 4.2.3]{HopkinsLurie}.
\end{proof}
The following is a central notion for this paper.
\begin{defn}\label{def:Semiadd}
Let $m\ge-2$ be an integer. An $\infty$-category $\mathcal{C}$
is called $m$-semiadditive, if it admits all $m$-finite limits and
$m$-finite colimits and every $m$-finite map of spaces is $\mathcal{C}$-ambidextrous.
It is called $\infty$-semiadditive if it is $m$-semiadditive for
all $m$.
\end{defn}
\begin{rem}
Our definition of $m$-semiadditivity agrees with \cite[Definition 3.1]{Harpaz}
and differs slightly from \cite[Definition 4.4.2]{HopkinsLurie} in
that we do not require $\mathcal{C}$ to admit \emph{all} small colimits,
but only $m$-finite ones. Note that using the ``wrong way counit''
perspective, one could phrase $m$-semiadditivity without the \emph{assumption}
that $\mathcal{C}$ admits $m$-finite limits; their existence would
then be a direct \emph{consequence}. Thus, \defref{Semiadd}
is somewhat more general than \cite[Definition 4.4.2]{HopkinsLurie}.
\end{rem}
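For orientation, we unwind \defref{Semiadd} in the lowest degrees;
these identifications are standard (cf. \cite{HopkinsLurie}).
\begin{example}
Every $\infty$-category is $\left(-2\right)$-semiadditive. An $\infty$-category
$\mathcal{C}$ with an initial and a terminal object is $\left(-1\right)$-semiadditive
if and only if the unique map from the initial to the terminal object
is an isomorphism, i.e. $\mathcal{C}$ is pointed. A pointed $\mathcal{C}$
admitting finite products and coproducts is $0$-semiadditive if and
only if for all $X,Y\in\mathcal{C}$ the canonical map
\[
\begin{pmatrix}\Id_{X} & 0\\
0 & \Id_{Y}
\end{pmatrix}\colon X\sqcup Y\to X\times Y
\]
is an isomorphism, i.e. $\mathcal{C}$ is semiadditive in the classical
sense.
\end{example}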
\subsubsection{Base Change and Integration}
We can now apply the theory of integration developed in the previous
section to the canonically normed functors associated with ambidextrous
maps.
\begin{example}
\label{exa:Integral_Sum}(see \cite[Remark 4.4.11]{HopkinsLurie})
Let $\mathcal{C}$ be a 0-semiadditive $\infty$-category (e.g. $\mathcal{C}$ is stable). For every \emph{finite set} $A$, the map $q\colon A\to\pt$ is $\mathcal{C}$-ambidextrous.
Given $X,Y\in\mathcal{C}$, a map of local systems $f\colon q^{*}X\to q^{*}Y$,
can be viewed as a collection of maps $\left\{ f_{a}\colon X\to Y\right\} _{a\in A}$.
We have
\[
\int\limits _{A}f=\sum_{a\in A}f_{a}\quad\in\hom_{h\mathcal{C}}\left(X,Y\right).
\]
\end{example}
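Combining this with the definition of $|q|$ gives a concrete formula
for the ``cardinality'' of a finite set, which also illustrates the
notion of amenability.
\begin{example}
In the situation of \exaref{Integral_Sum}, taking $X=Y$ and $f=\Id_{q^{*}X}$,
we get
\[
|A|_{X}=\int\limits _{A}\Id_{q^{*}X}=\sum_{a\in A}\Id_{X}=n\cdot\Id_{X}\quad\in\hom_{h\mathcal{C}}\left(X,X\right),
\]
where $n$ is the cardinality of the finite set $A$. Thus, $A$ is
$\mathcal{C}$-amenable if and only if $n\cdot\Id_{X}$ is invertible
for every $X\in\mathcal{C}$. For instance, in a $\mathbb{Q}$-linear
category every finite set is amenable, while in the category of ${\mathbb F}_{p}$-vector
spaces only the finite sets of cardinality prime to $p$ are.
\end{example}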
Applying the general theory of integration to base change squares,
we get
\begin{prop}
\label{prop:Base_Change_Ambi}Let $\mathcal{C}$ be an $\infty$-category
and let $\left(*\right)$ be a pullback diagram of spaces as in \defref{Base_Change_Square},
such that $q$ (and hence $\tilde{q}$) is $\mathcal{C}$-ambidextrous.
For all $X,Y\in\fun\left(B,\mathcal{C}\right)$ and $f\colon q^{*}X\to q^{*}Y$,
we have
\[
s_{B}^{*}\int\limits _{q}f=\int\limits _{\tilde{q}}s_{A}^{*}f\quad\in\hom_{h\fun\left(\tilde{B},\mathcal{C}\right)}\left(s_{B}^{*}X,s_{B}^{*}Y\right).
\]
In particular, for all $X\in\fun\left(B,\mathcal{C}\right)$ we have
\[
s_{B}^{*}|q|_{X}=|\tilde{q}|_{s_{B}^{*}X}\quad\in\hom_{h\fun\left(\tilde{B},\mathcal{C}\right)}\left(s_{B}^{*}X,s_{B}^{*}X\right).
\]
\end{prop}
\begin{proof}
Denote by $\square$ the associated base-change square. By \propref{Canonical_Norms_Compatibility}(3),
$\square$ is ambidextrous and by \lemref{Base_Change_BC}, it satisfies
the $\bc_{!}$ condition. Now, the result follows from \propref{Integral_Ambi}.
\end{proof}
As a consequence, we get a form of ``distributivity'' for integration.
\begin{cor}
\label{cor:Distributivity}Let $\mathcal{C}$ be an $\infty$-category
and let $q_{1}\colon A_{1}\to B$ and $q_{2}\colon A_{2}\to B$ be
two $\mathcal{C}$-ambidextrous maps of spaces. Consider the pullback
square
\[
\xymatrix@C=3pc{A_{2}\times_{B}A_{1}\ar[d]_{\pi_{2}}\ar[r]^{\pi_{1}}\ar[dr]|{q_{2}\times_{B}q_{1}} & A_{1}\ar[d]^{q_{1}}\\
A_{2}\ar[r]^{q_{2}} & B.
}
\]
The map $q_{2}\times_{B}q_{1}$ is $\mathcal{C}$-ambidextrous and
for all $X,Y,Z\in\fun\left(B,\mathcal{C}\right)$ and maps
\[
f_{1}\colon q_{1}^{*}X\to q_{1}^{*}Y,\quad f_{2}\colon q_{2}^{*}Y\to q_{2}^{*}Z,
\]
we have
\[
\int\limits _{q_{2}\times_{B}q_{1}}\left(\pi_{2}^{*}f_{2}\circ\pi_{1}^{*}f_{1}\right)=\int\limits _{q_{2}}f_{2}\circ\int\limits _{q_{1}}f_{1}\quad\in\hom_{h\fun\left(B,\mathcal{C}\right)}\left(X,Z\right).
\]
In particular, for every $X\in\fun\left(B,\mathcal{C}\right)$, we
have
\[
|q_{2}\times_{B}q_{1}|_{X}=|q_{2}|_{X}\circ|q_{1}|_{X}\quad\in\hom_{h\fun\left(B,\mathcal{C}\right)}\left(X,X\right).
\]
\end{cor}
\begin{proof}
The map $\pi_{2}$ is $\mathcal{C}$-ambidextrous by \propref{Canonical_Norms_Compatibility}(3)
and therefore $q_{2}\times_{B}q_{1}=q_{2}\pi_{2}$ is $\mathcal{C}$-ambidextrous
by \propref{Canonical_Norms_Compatibility}(2). We now start from
the left hand side and use \propref{Fubini}, \propref{Homogenity}(1),
\propref{Base_Change_Ambi} and \propref{Homogenity}(2) (in that
order).
\[
\int\limits _{q_{2}\times_{B}q_{1}}\left(\pi_{2}^{*}f_{2}\circ\pi_{1}^{*}f_{1}\right)=\int\limits _{q_{2}\pi_{2}}\left(\pi_{2}^{*}f_{2}\circ\pi_{1}^{*}f_{1}\right)=\int\limits _{q_{2}}\int\limits _{\pi_{2}}\left(\pi_{2}^{*}f_{2}\circ\pi_{1}^{*}f_{1}\right)=
\]
\[
\int\limits _{q_{2}}\left(f_{2}\circ\int\limits _{\pi_{2}}\pi_{1}^{*}f_{1}\right)=\int\limits _{q_{2}}\left(f_{2}\circ q_{2}^{*}\int\limits _{q_{1}}f_{1}\right)=\int\limits _{q_{2}}f_{2}\circ\int\limits _{q_{1}}f_{1}.
\]
The second claim follows from applying the first to $f_{2}=q_{2}^{*}\Id_{X}$
and $f_{1}=q_{1}^{*}\Id_{X}$.
\end{proof}
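To see why \corref{Distributivity} deserves the name, it may be helpful
to specialize it to finite sets.
\begin{example}
Let $\mathcal{C}$ be a $0$-semiadditive $\infty$-category, let $B=\pt$
and let $A_{1},A_{2}$ be finite sets. Writing $f_{1}=\left\{ f_{1,a_{1}}\right\} _{a_{1}\in A_{1}}$
and $f_{2}=\left\{ f_{2,a_{2}}\right\} _{a_{2}\in A_{2}}$ as in \exaref{Integral_Sum},
the formula of \corref{Distributivity} reads
\[
\sum_{\left(a_{2},a_{1}\right)\in A_{2}\times A_{1}}f_{2,a_{2}}\circ f_{1,a_{1}}=\left(\sum_{a_{2}\in A_{2}}f_{2,a_{2}}\right)\circ\left(\sum_{a_{1}\in A_{1}}f_{1,a_{1}}\right),
\]
which is the usual distributivity of composition over sums. Similarly,
the second claim reduces to the multiplicativity of cardinalities
of finite sets.
\end{example}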
As another consequence, we obtain the additivity property of the integral.
\begin{prop}[Integral Additivity]
\label{prop:Integral_Additivity}Let $\mathcal{C}$ be a $0$-semiadditive
$\infty$-category and let $q_i\colon A_i\to B$ for $i=1,\dots,k$ be $\mathcal{C}$-ambidextrous maps. Then,
\[
\left(q_{1},\dots,q_{k}\right)\colon A_{1}\sqcup\cdots\sqcup A_{k}\to B
\]
is $\mathcal{C}$-ambidextrous and for all $X,Y\in\fun\left(B,\mathcal{C}\right)$ and maps $f_{i}\colon q_{i}^{*}X\to q_{i}^{*}Y$ for $i=1,\dots,k$,
we have
\[
\int\limits _{\left(q_{1},\dots,q_{k}\right)}\left(f_{1},\dots,f_{k}\right)=\sum_{i=1}^{k}\left(\int\limits _{q_{i}}f_{i}\right)\quad\in\hom_{h\fun\left(B,\mathcal{C}\right)}\left(X,Y\right).
\]
\end{prop}
\begin{proof}
By induction, we may assume $k=2$. Write $\left(q_{1},q_{2}\right)$ as a composition
\[
A_{1}\sqcup A_{2}\oto{q_{1}\sqcup q_{2}}B\sqcup B\oto{\nabla}B,
\]
where $\nabla$ is the fold map. By \cite[Proposition 4.3.5]{HopkinsLurie}, the map $q_1\sqcup q_2$ is $\mathcal{C}$-ambidextrous. Consider the pullback square of spaces, with $j_1$ the natural inclusion of the first summand,
\[
\qquad \quad
\vcenter{
\xymatrix@C=3pc{A_{1}\ar[d]_{q_{1}}\ar[r]^-{j_{1}} & A_{1}\sqcup A_{2}\ar[d]^{q_{1}\sqcup q_{2}}\\
B\ar[r]^-{j_{1}} & B\sqcup B.
}}
\qquad\left(*\right)
\]
By \propref{Base_Change_Ambi} applied to the base-change square of
$\left(*\right)$, we get that
\[
j_{1}^{*}\left(\int\limits _{q_{1}\sqcup q_{2}}\left(f_{1},f_{2}\right)\right)\simeq\int\limits _{q_{1}}f_{1}.
\]
Applying the analogous argument to the second component, we get
\[
\int\limits _{q_{1}\sqcup q_{2}}\left(f_{1},f_{2}\right)=\left(\int\limits _{q_{1}}f_{1},\int\limits _{q_{2}}f_{2}\right).
\]
Since $\nabla\colon B\sqcup B\to B$ is $0$-finite and $\mathcal{C}$ is $0$-semiadditive, $\nabla$ is $\mathcal{C}$-ambidextrous and the map $(q_1,q_2)$ is $\mathcal{C}$-ambidextrous as a composition of two such (\propref{Canonical_Norms_Compatibility}(2)). Using Fubini's Theorem (\propref{Fubini}), and a direct computation from the definition of the integral over $\nabla$ (identical to \exaref{Integral_Sum}) we get
\[
\int\limits _{\left(q_{1},q_{2}\right)}\left(f_{1},f_{2}\right)\simeq\int\limits _{\nabla}\int\limits _{q_{1}\sqcup q_{2}}\left(f_{1},f_{2}\right)=\int\limits _{\nabla}\left(\int\limits _{q_{1}}f_{1},\int\limits _{q_{2}}f_{2}\right)=\int\limits _{q_{1}}f_{1}+\int\limits _{q_{2}}f_{2}.
\]
\end{proof}
\subsubsection{Amenable Spaces}
Ambidexterity of the base-change square also has a corollary for the
notion of amenability.
\begin{cor}
\label{cor:Amenable_Base_Change}Let $\mathcal{C}$ be an $\infty$-category
and let $\left(*\right)$ be a pullback diagram of spaces as in \defref{Base_Change_Square}.
If $s_{B}$ is surjective on connected components and $\tilde{q}$
is $\mathcal{C}$-amenable, then $q$ is $\mathcal{C}$-amenable.
\end{cor}
\begin{proof}
Since $s_B$ is surjective on connected components, the $\mathcal{C}$-ambidexterity of $\tilde{q}$ implies the $\mathcal{C}$-ambidexterity of $q$ by \cite[Corollary 4.3.6]{HopkinsLurie}. Thus, by \propref{Canonical_Norms_Compatibility}(3), the diagram $\square$ of \defref{Base_Change_Square} is ambidextrous. Moreover, as $s_{B}$ is surjective on connected components, $s_{B}^{*}$ is conservative and the claim follows from \lemref{Amenable_Conservative}.
\end{proof}
The following two propositions give the core properties of amenable
spaces.
\begin{prop}
\label{prop:Amenable_Space}Let $\mathcal{C}$ be an $\infty$-category
and let $A\to E\oto pB$ be a fiber sequence of weakly $\mathcal{C}$-ambidextrous
spaces, where $B$ is connected. If $E$ is $\mathcal{C}$-ambidextrous
and $A$ is $\mathcal{C}$-amenable, then $B$ is $\mathcal{C}$-ambidextrous.
\end{prop}
\begin{proof}
By assumption, $A$ is $\mathcal{C}$-amenable and $B$ is connected,
hence by \corref{Amenable_Base_Change}, the map $p$ is $\mathcal{C}$-amenable.
Denote $q\colon B\to\pt$ and consider the pair of composable canonically
normed functors
\[
\xymatrix{\fun\left(E,\mathcal{C}\right)\ \ar@{>->}[r]^{p^{\can}} & \fun\left(B,\mathcal{C}\right)\ \ar@{>->}[r]^{q^{\can}} & \fun\left(\pt,\mathcal{C}\right).}
\]
Since $p^{\can}$ is amenable and $\left(qp\right)^{\can}=q^{\can}p^{\can}$
is iso-normed, by \thmref{Ambi_Cancellation}, $q^{\can}$ is iso-normed.
In other words, the map $q$ (namely, the space $B$) is $\mathcal{C}$-ambidextrous.
\end{proof}
\begin{prop}
\label{prop:Amenable_Contractible}Let $\mathcal{C}$ be an $\infty$-category
and let $A$ be a connected space, such that $\mathcal{C}$ admits
all $A$-(co)limits and $\Omega A$-(co)limits. Denoting $q\colon A\to\pt$,
if $\Omega A$ is $\mathcal{C}$-amenable, then the counit map
\[
c_{!}^{q}\colon q_{!}q^{*}\to\Id,
\]
is an isomorphism.
\end{prop}
\begin{proof}
Let $e\colon\pt\to A$ be a choice of a base point. The composition
\[
\Id=q_{!}\underline{e_{!}e^{*}}q^{*}\oto{c_{!}^{e}}\underline{q_{!}q^{*}}\oto{c_{!}^{q}}\Id
\]
is the counit of the adjunction
\[
\Id=q_{!}e_{!}\dashv e^{*}q^{*}=\Id,
\]
and hence an isomorphism. Thus, the whiskering $q_{!}c_{!}^{e}q^{*}$
is a right inverse of $c_{!}^{q}$ up to isomorphism. It therefore
suffices to show that $c_{!}^{e}$ itself has a right inverse, for
then $q_{!}c_{!}^{e}q^{*}$ admits both a left and a right inverse,
hence is an isomorphism, and so is $c_{!}^{q}$. Since
$A$ is connected and $\Omega A$ is $\mathcal{C}$-amenable, the
map $e$ is $\mathcal{C}$-amenable by \corref{Amenable_Base_Change}.
Thus, by \thmref{Amenable_Section}, the map $c_{!}^{e}$ has a right
inverse.
\end{proof}
\subsubsection{Higher Semiadditivity \& Spans }
We conclude with recalling from \cite{Harpaz} some results regarding
the universality of spans of $m$-finite spaces among $m$-semiadditive
$\infty$-categories. These results are useful in reducing questions
about general $m$-semiadditive categories to the universal case,
in which they are sometimes easier to solve.
Let $\mathcal{S}_{m}\ss\mathcal{S}$ be the full subcategory spanned
by $m$-finite spaces and let $\mathcal{S}_{m}^{m}$ be the $\infty$-category
of spans of $m$-finite spaces (see \cite{Barwick}). Roughly,
\begin{itemize}
\item The objects of $\mathcal{S}_{m}^{m}$ are $m$-finite spaces.
\item A morphism from $A$ to $B$ is a span $A\from E\to B$, where $E$
is $m$-finite as well.
\item Composition, up to homotopy, is given by pullback of spans.
\end{itemize}
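For instance, unwinding the pullback description of composition gives
the following.
\begin{example}
The composition of spans $A\xleftarrow{q}E\oto rB$ and $B\xleftarrow{q'}F\oto{r'}C$
in $\mathcal{S}_{m}^{m}$ is, up to homotopy, the outer span in
\[
\xymatrix@C=2pc{ &  & E\times_{B}F\ar[ld]\ar[rd]\\
 & E\ar[ld]_{q}\ar[rd]^{r} &  & F\ar[ld]_{q'}\ar[rd]^{r'}\\
A &  & B &  & C.
}
\]
In particular, endomorphisms of the unit compose by
\[
\left(\pt\from B\to\pt\right)\circ\left(\pt\from A\to\pt\right)=\left(\pt\from A\times B\to\pt\right),
\]
since $A\times_{\pt}B\simeq A\times B$.
\end{example}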
By \cite[Section 2.2]{Harpaz}, the $\infty$-category $\mathcal{S}_{m}^{m}$
of spans of $m$-finite spaces inherits a symmetric monoidal structure
from the Cartesian symmetric monoidal structure on $\mathcal{S}_{m}$.
While this symmetric monoidal structure is not itself Cartesian, the
unit is $\pt\in\mathcal{S}_{m}^{m}$ and the tensor of two maps $A_{1}\xleftarrow{q_{1}}E_{1}\oto{r_{1}}B_{1}$
and $A_{2}\xleftarrow{q_{2}}E_{2}\oto{r_{2}}B_{2}$ is equivalent
to
\[
A_{1}\times A_{2}\xleftarrow{q_{1}\times q_{2}}E_{1}\times E_{2}\oto{r_{1}\times r_{2}}B_{1}\times B_{2}.
\]
One of the main results of \cite{Harpaz} is that $\mathcal{S}_{m}^{m}$
canonically acts on any $m$-semiadditive $\infty$-category (and
the existence of such an action is in fact equivalent to $m$-semiadditivity).
Formally,
\begin{thm}
[Harpaz, {{\cite[Corollary 5.2]{Harpaz}}}]\label{thm:=00005BHarpaz=00005DSpan_Action}
For every $m$-semiadditive $\mathcal{C}$, there is a unique monoidal
$m$-finite colimit preserving functor $\mathcal{S}_{m}^{m}\to\fun\left(\mathcal{C},\mathcal{C}\right)$.
\end{thm}
Unwinding the definition of this action, we get that
\begin{itemize}
\item The image of an $m$-finite space $a\colon A\to\pt$ is equivalent
to the functor
\[
\left(-\right)_{A}=a_{!}a^{*}\colon\mathcal{C}\to\mathcal{C}
\]
(i.e. colimit over the constant $A$-shaped diagram).
\item The image of a ``right way'' arrow $A\xleftarrow{=}A\oto rB$ is
homotopic to the right way counit map
\[
\left(-\right)_{A}=a_{!}a^{*}\simeq b_{!}\underline{r_{!}r^{*}}b^{*}\oto{c_{!}^{r}}b_{!}b^{*}=\left(-\right)_{B},
\]
where $a\colon A\to\pt$ and $b\colon B\to\pt$ are the unique maps
(i.e. it is the natural map induced on colimits).
\item The image of a ``wrong way'' arrow $B\xleftarrow{q}A\xrightarrow{=}A$
is homotopic to the wrong way unit map
\[
\left(-\right)_{B}=b_{!}b^{*}\oto{\mu_{q}}b_{!}\left(q_{!}q^{*}\right)b^{*}\simeq a_{!}a^{*}=\left(-\right)_{A}
\]
(which can informally be thought of as ``integration along the fibers
of $q$'').
\item The natural transformation $|A|$ at $\pt\in \mathcal{S}_m^m$ is given by the span $\pt \from A \to \pt.$
\end{itemize}
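As a consistency check, note that the last item is compatible with
the preceding two and with the definition of $|A|$ as the integral
of the identity.
\begin{example}
Since $A\times_{A}A\simeq A$, the span $\pt\from A\to\pt$ is the
composition of the ``wrong way'' arrow $\pt\xleftarrow{a}A\xrightarrow{=}A$
followed by the ``right way'' arrow $A\xleftarrow{=}A\oto{a}\pt$.
Its image under the action is therefore the composition
\[
\Id=\left(-\right)_{\pt}\oto{\mu_{a}}\left(-\right)_{A}\oto{c_{!}^{a}}\left(-\right)_{\pt}=\Id,
\]
whose component at each $X\in\mathcal{C}$ is the integral $\int_{A}\Id_{a^{*}X}=|A|_{X}$.
\end{example}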
\begin{rem}
If one is only interested in this functor on the level of homotopy
categories (as we are),
\[
h\mathcal{S}_{m}^{m}\to h\fun\left(\mathcal{C},\mathcal{C}\right),
\]
one can use the above formulas as a \emph{definition}. The compatibility
with composition can be verified using \cite[Proposition 4.2.1 (2)]{HopkinsLurie}.
\end{rem}
\subsection{Higher Semiadditive Functors}
In this section, we study $m$-finite colimit preserving functors between
$m$-semiadditive $\infty$-categories and their behavior with respect
to integration. We call such functors \emph{$m$-semiadditive}.
\begin{defn}
Let $F\colon\mathcal{C}\to\mathcal{D}$ be a functor of $\infty$-categories
and $q\colon A\to B$ a map of spaces. We define the \emph{$\left(F,q\right)$-square
}to be the commutative square
\[
\xymatrix@C=3pc{\fun\left(B,\mathcal{C}\right)\ar[d]_{q^{*}}\ar[r]^{F_{*}} & \fun\left(B,\mathcal{D}\right)\ar[d]^{q^{*}}\\
\fun\left(A,\mathcal{C}\right)\ar[r]^{F_{*}} & \fun\left(A,\mathcal{D}\right),
}
\]
where the horizontal functors are post-composition with $F$ and the
vertical functors are pre-composition with $q$. If $q$ is both weakly $\mathcal{C}$-ambidextrous and weakly $\mathcal{D}$-ambidextrous, then this square is canonically normed.
\end{defn}
\begin{prop}
\label{prop:BC_Functor}Let $F\colon\mathcal{C}\to\mathcal{D}$ be
a functor of $\infty$-categories and $q\colon A\to B$ a map of spaces.
If $\mathcal{C}$ and $\mathcal{D}$ admit, and $F$ preserves, all
$q$-colimits (resp. $q$-limits), then the $\left(F,q\right)$-square
satisfies the $\bc_{!}$ (resp. $\bc_{*}$) condition.
\end{prop}
\begin{proof}
This follows from the point-wise formulas for the left and right Kan
extensions.
\end{proof}
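Explicitly, since $B$ is an $\infty$-groupoid, the relevant (co)limits
are computed over the fibers: for $X\in\fun\left(A,\mathcal{C}\right)$
and $b\in B$,
\[
\left(q_{!}X\right)_{b}\simeq\operatorname*{colim}_{A_{b}}X|_{A_{b}},\qquad\left(q_{*}X\right)_{b}\simeq\lim_{A_{b}}X|_{A_{b}},
\]
where $A_{b}=A\times_{B}\left\{ b\right\} $ denotes the fiber of
$q$ over $b$. The $\bc_{!}$ condition for the $\left(F,q\right)$-square
thus amounts to $F$ commuting with the colimit over each fiber, which
holds by the assumption that $F$ preserves $q$-colimits.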
The following is the main result of this section.
\begin{thm}
\label{thm:Ambi_Functors}Let $F\colon\mathcal{C}\to\mathcal{D}$
be a functor of $\infty$-categories which preserves $\left(m-1\right)$-finite
colimits. Let $q\colon A\to B$ be an $m$-finite map of spaces. If
$q$ is (weakly) $\mathcal{C}$-ambidextrous and (weakly) $\mathcal{D}$-ambidextrous,
then the $\left(F,q\right)$-square is (weakly) ambidextrous.
\end{thm}
\begin{proof}
The statement about ambidexterity follows immediately from the ambidexterity of $q$ and the statement about weak ambidexterity.
We shall prove the latter by induction on $m$. For $m=-2$, both vertical
maps in the $\left(F,q\right)$-square are isomorphisms, and so the
claim follows from \propref{Canonical_Norms_Compatibility}(1). We
therefore assume $m\ge-1$. Consider the diagram
\[
\qquad \quad
\vcenter{
\xymatrix@C=3pc{A\ar[rd]^{\delta}\ar@/^{1pc}/@{=}[rrd]\ar@/_{1pc}/@{=}[rdd]\\
& A\times_{B}A\ar[d]^{\pi_{2}}\ar[r]^{\pi_{1}} & A\ar[d]^{q}\\
& A\ar[r]^{q} & B.
}}
\qquad\left(\heartsuit\right)
\]
The square in the diagram induces a $\bc_{!}$ map $\beta_!\colon\left(\pi_{2}\right)_{!}\pi_{1}^{*}\to q^{*}q_{!}$,
which is an isomorphism by \lemref{Base_Change_BC}. By definition,
$\nu_{q}^{\mathcal{C}}$ is the composition of maps
\[
q^{*}q_{!}\xrightarrow{\beta^{-1}}\left(\pi_{2}\right)_{!}\pi_{1}^{*}\xrightarrow{\mu_{\delta}}\left(\pi_{2}\right)_{!}\delta_{!}\delta^{*}\pi_{1}^{*}\simeq\Id.
\]
By \lemref{Triangle_Unit_Counit_Norm_Diagram}(1), it suffices to
show that the wrong way counit diagram of $q$ commutes. This will
follow from the commutativity of the (solid) diagram:
\[
\xymatrix@C=3pc{q^{*}q_{!}\red{F_{*}}{}\ar[dd]^{\beta_{!}}\ar[r]^-{\beta^{-1}} & \left(\pi_{2}\right)_{!}\left(\pi_{1}\right)^{*}\red{F_{*}}{}\ar@{-->}[dd]^{\wr}\ar[r]^-{\mu_{\delta}^{\mathcal{D}}} & \left(\pi_{2}\right)_{!}\delta_{!}\delta^{*}\left(\pi_{1}\right)^{*}\red{F_{*}}{}\ar@{-->}[d]^{\wr}\ar@/^{1pc}/[rrdd]^{\sim}\\
& & \left(\pi_{2}\right)_{!}\delta_{!}\delta^{*}\red{F_{*}}{}\left(\pi_{1}\right)^{*}\ar@{-->}[d]^{\wr}\\
q^{*}\red{F_{*}}{}q_{!}\ar[dd]^{\wr} & \left(\pi_{2}\right)_{!}\red{F_{*}}{}\left(\pi_{1}\right)^{*}\ar@{-->}[dd]\ar@{-->}[ru]^{\mu_{\delta}^{\mathcal{D}}}\ar@{-->}[rd]_{\mu_{\delta}^{\mathcal{C}}} & \left(\pi_{2}\right)_{!}\delta_{!}\red{F_{*}}{}\delta^{*}\left(\pi_{1}\right)^{*}\ar@{-->}[d]\ar@{-->}[rr]^{\sim} & & \red{F_{*}}.\\
& & \left(\pi_{2}\right)_{!}\red{F_{*}}{}\delta_{!}\delta^{*}\left(\pi_{1}\right)^{*}\ar@{-->}[d]\\
\red{F_{*}}{}q^{*}q_{!}\ar[r]^-{\beta^{-1}} & \red{F_{*}}{}\left(\pi_{2}\right)_{!}\left(\pi_{1}\right)^{*}\ar[r]^-{\mu_{\delta}^{\mathcal{C}}} & \red{F_{*}}{}\left(\pi_{2}\right)_{!}\delta_{!}\delta^{*}\left(\pi_{1}\right)^{*}\ar@/_{1pc}/[rruu]_{\sim}
}
\]
The two trapezoids and the upper triangle commute for formal reasons.
The bottom triangle commutes by \lemref{Vertical_Pasting_Formula}(1)
and the fact that $\pi_{2}\circ\delta=\Id$. For the rectangle on
the left, it is enough to prove the commutativity of the associated
rectangular diagram with $\beta$ instead of $\beta^{-1}$ in both
horizontal lines, which we denote by $\left(*\right)$. We can now
consider the commutative cubical diagram
\[
\vcenter{
\xymatrix@R=1pc@C=1pc{\fun\left(B,\mathcal{C}\right)\ar[dd]\sb(0.3){q^{*}}
\ar[rrr]\sp(0.6){\red{F_{*}}{}}\ar[rd]^{q^{*}}\ar@{..>}[rrrrd] & & & \fun\left(B,\mathcal{D}\right)\ar@{->}'[d][dd]\sp(0.3){q^{*}}\ar[rd]^{q^{*}}\\
& \fun\left(A,\mathcal{C}\right)\ar[dd]\sb(0.3){\pi_{2}^{*}}\ar[rrr]\sp(0.4){\red{F_{*}}{}} & & & \fun\left(A,\mathcal{D}\right)\ar[dd]\sp(0.65){\pi_{2}^{*}}\\
\fun\left(A,\mathcal{C}\right)\ar@{..>}[rrrrd]\ar@{->}'[r][rrr]\sp(0.4){\red{F_{*}}{}\qquad}\ar[rd]^{\pi_{1}^{*}} & & & \fun\left(A,\mathcal{D}\right)\ar[rd]^{\pi_{1}^{*}}\\
& \fun\left(A\times_{B}A,\mathcal{C}\right)\ar[rrr]\sp(0.4){\red{F_{*}}{}} & & & \fun\left(A\times_{B}A,\mathcal{D}\right).
}}
\qquad\left(\spadesuit\right)
\]
Applying \lemref{Horizontal_Pasting_Formula}(1) once to the back
and then right face of $\left(\spadesuit\right)$ and once to the
left and then front face of $\left(\spadesuit\right)$, we get two
presentations of the $\bc_{!}$ map of the diagram
\[
\xymatrix@C=3pc{
\fun(B,\mathcal{C})\ar[d]_{q^*}\ar@{..>}[r] & \fun(A,\mathcal{D})\ar[d]^{\pi_{2}^{*}} \\
\fun(A,\mathcal{C})\ar@{..>}[r] & \fun(A\times_{B}A,\mathcal{D}).
}
\]
These two presentations correspond precisely to the two paths
in $\left(*\right)$.
It is left to check the commutativity of the triangle in the middle,
which is a whiskering of the diagram
\[
\qquad
\vcenter{
\xymatrix{ & \delta_{!}\delta^{*}\red{F_{*}}{}\ar[d]^{\wr}\\
\red{F_{*}}{}\ar[ru]^{\mu_{\delta}^{\mathcal{D}}}\ar[rd]_{\mu_{\delta}^{\mathcal{C}}} & \delta_{!}\red{F_{*}}{}\delta^{*}\ar[d]^{\beta_{!}}\\
& \red{F_{*}}{}\delta_{!}\delta^{*}.
}}
\qquad\left(\mathbin{\rotatebox[origin=c]{90}{$\triangle$}}\right)
\]
The map $\delta$ is an $\left(m-1\right)$-finite map that is both
$\mathcal{C}$-ambidextrous and $\mathcal{D}$-ambidextrous. By assumption,
$F$ preserves $\left(m-1\right)$-finite colimits and so, by the
inductive hypothesis, the norm diagram of the $\left(F,\delta\right)$-square
commutes. Thus, $\mathbin{\rotatebox[origin=c]{90}{$\triangle$}}$ commutes by \lemref{Triangle_Unit_Counit_Norm_Diagram}(2).
\end{proof}
As a corollary, we get a higher analogue of a known fact about $0$-semiadditive
categories.
\begin{cor}
\label{cor:Semi_Add_Functors}Let $F\colon\mathcal{C}\to\mathcal{D}$
be a functor of $m$-semiadditive $\infty$-categories. The functor
$F$ preserves $m$-finite colimits if and only if it preserves $m$-finite
limits.
\end{cor}
\begin{proof}
We proceed by induction on $m$. For $m=-2$, there is nothing to
prove. For $m\ge-1$, assume by induction the claim holds for $m-1$.
Since $\mathcal{C}$ and $\mathcal{D}$ are in particular $\left(m-1\right)$-semiadditive
and $F$ preserves either $\left(m-1\right)$-finite colimits or $\left(m-1\right)$-finite limits,
we deduce that $F$ preserves both. For every $m$-finite $A$, consider
the map $q\colon A\to\pt$. Since $\mathcal{C}$ and $\mathcal{D}$
are in particular $\left(m-1\right)$-semiadditive and $F$ preserves
$\left(m-1\right)$-finite colimits, by \thmref{Ambi_Functors}, the $\left(F,q\right)$-square
is weakly ambidextrous. Since $\mathcal{C}$ and $\mathcal{D}$ are
$m$-semiadditive, the $\left(F,q\right)$-square is in fact ambidextrous.
It follows that the $\left(F,q\right)$-square satisfies the $\bc_{!}$
condition if and only if it satisfies the $\bc_{*}$ condition. Namely,
$F$ preserves $A$-shaped colimits if and only if it preserves $A$-shaped
limits.
\end{proof}
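For $m=0$ this recovers the classical fact mentioned above: in a $0$-semiadditive
$\infty$-category, finite products and coproducts coincide (both are biproducts),
so for a finite set $A$ with $n$ elements, both $\bc$ conditions for $q\colon A\to\pt$
amount to the invertibility of the canonical comparison map between
$\bigoplus_{i=1}^{n}F\left(X_{i}\right)$ and $F\left(\bigoplus_{i=1}^{n}X_{i}\right)$.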
\begin{cor}
\label{cor:product_semiadditive}
Let $\{\mathcal{C}_i\}_{i\in I}$ be a collection of $m$-semiadditive $\infty$-categories. The $\infty$-category $\mathcal{C}\coloneqq \prod_{i\in I}\mathcal{C}_i$ is $m$-semiadditive.
\end{cor}
\begin{proof}
We proceed by induction on $m$.
For $m=-2$ there is nothing to prove, and so we may assume that $m\ge -1$. Let $q\colon A\to B$ be an $m$-finite map of spaces.
By induction, $\mathcal{C}$ is $(m-1)$-semiadditive, and hence $q$ is weakly $\mathcal{C}$-ambidextrous.
In particular, the map $\nm_q^{\mathcal{C}}$ is defined and it is left to show that it is an isomorphism.
For every $i\in I$, the map $q$ is $\mathcal{C}_i$-ambidextrous and the projection $\pi_i \colon \mathcal{C}\to \mathcal{C}_i$ preserves colimits. Thus, by \thmref{Ambi_Functors}, the $(\pi_i,q)$-square is weakly ambidextrous. Additionally, as $\pi_i$ commutes with limits and colimits, the $(\pi_i,q)$-square satisfies both the $\bc_!$ and $\bc_*$ conditions. The $\mathcal{C}_i$-ambidexterity of $q$ implies now that the natural transformation \[
\pi_i \nm_q^{\mathcal{C}}\colon \pi_i q_! \to \pi_i q_*
\]
is a natural isomorphism. Finally, since the collection $\{\pi_i\}_{i\in I}$ is jointly conservative, we deduce that $\nm_q^{\mathcal{C}}$ is an isomorphism.
\end{proof}
\begin{defn}
Let $\mathcal{C}$ and $\mathcal{D}$ be $m$-semiadditive $\infty$-categories.
A functor $F\colon\mathcal{C}\to\mathcal{D}$ is called \emph{$m$-semiadditive}
if it preserves $m$-finite (co)limits.
\end{defn}
The fundamental property of $m$-semiadditive functors, which justifies
their name, is
\begin{cor}
\label{cor:Integral_Functor} Let $F\colon\mathcal{C}\to\mathcal{D}$
be an $m$-semiadditive functor and let $q\colon A\to B$ be an $m$-finite map
of spaces.
For all $X,Y\in\fun\left(B,\mathcal{C}\right)$ and $f\colon q^{*}X\to q^{*}Y$,
we have
\[
F\left(\int\limits _{q}f\right)=\int\limits _{q}F\left(f\right)\quad\in\quad\hom_{h\fun\left(B,\mathcal{D}\right)}\left(FX,FY\right).
\]
In particular, for all $X\in\fun\left(B,\mathcal{C}\right)$ we have
\[
F\left(|q|_{X}\right)=|q|_{F\left(X\right)}\quad\in\quad\hom_{h\fun\left(B,\mathcal{D}\right)}\left(FX,FX\right).
\]
\end{cor}
\begin{proof}
The $\left(F,q\right)$-square is ambidextrous by \thmref{Ambi_Functors}
and satisfies the $\bc$ conditions by \propref{BC_Functor}, and
so the claim follows from \propref{Integral_Ambi}.
\end{proof}
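For example, when $m=0$, $B=\pt$ and $A$ is a finite set with $n$ elements,
a map $f\colon q^{*}X\to q^{*}Y$ amounts to a family of maps $f_{1},\dots,f_{n}\colon X\to Y$
and the integral is their sum, so the first formula reads
\[
F\left(\sum_{i=1}^{n}f_{i}\right)=\sum_{i=1}^{n}F\left(f_{i}\right),
\]
i.e. $F$ is additive on homotopy classes of maps in the usual sense.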
\begin{rem}
In view of \remref{Diagonal_Induction_Integral}, one can reinterpret
\thmref{Ambi_Functors} informally, as saying that
\[
\int\limits _{\delta}F\left(\Id\right)=F\left(\int\limits _{\delta}\Id\right),
\]
where $\delta\colon A\to A\times_{B}A$ is the diagonal of $q\colon A\to B$.
Since $\delta$ is $\left(m-1\right)$-finite, this in turn follows
inductively from \corref{Integral_Functor}. Turning this argument
into a rigorous proof requires some categorical maneuvers that we
preferred to avoid.
\end{rem}
\subsubsection{Multivariate Functors}
We now discuss a multivariate version of higher semiadditive functors.
\begin{defn}
\label{def:External_Peoduct}Let $\mathcal{C}_{1},\dots,\mathcal{C}_{k}$
and $\mathcal{D}$ be $\infty$-categories and $F\colon\prod\limits _{i=1}^{k}\mathcal{C}_{i}\to\mathcal{D}$
a functor. Given a collection of diagrams $X_{i}\colon A_{i}\to\mathcal{C}_{i}$
for $i=1,\dots,k$, their external product $X_{1}\boxtimes\dots\boxtimes X_{k}$
is defined to be the composition
\[
\prod_{i=1}^{k}A_{i}\oto{\prod_{i=1}^{k}X_{i}}\prod_{i=1}^{k}\mathcal{C}_{i}\oto F\mathcal{D}.
\]
This assembles to give a functor
\[
\boxtimes\colon\prod_{i=1}^{k}\fun\left(A_{i},\mathcal{C}_{i}\right)\to\fun\left(\prod_{i=1}^{k}A_{i},\mathcal{D}\right).
\]
Given a collection of maps of spaces $q_{i}\colon A_{i}\to B_{i}$
for $i=1,\dots,k$, we obtain the associated \emph{external product
square}:
\[
\qquad \qquad
\vcenter{
\xymatrix@R=3pc{\prod\limits _{i=1}^{k}\fun\left(B_{i},\mathcal{C}_{i}\right)\ar[d]^{\prod\limits _{i=1}^{k}q_{i}^{*}}\ar[rr]^{\boxtimes} & & \fun\left(\prod\limits _{i=1}^{k}B_{i},\mathcal{D}\right)\ar[d]^{\left(\prod\limits _{i=1}^{k}q_{i}\right)^{*}}\\
\prod\limits _{i=1}^{k}\fun\left(A_{i},\mathcal{C}_{i}\right)\ar[rr]^{\boxtimes} & & \fun\left(\prod\limits _{i=1}^{k}A_{i},\mathcal{D}\right).
}}
\qquad\left(*\right)
\]
\end{defn}
\begin{prop}
\label{prop:BC_External_Product}Let $\mathcal{C}_{1},\dots,\mathcal{C}_{k}$
and $\mathcal{D}$ be $\infty$-categories and $F\colon\prod\limits _{i=1}^{k}\mathcal{C}_{i}\to\mathcal{D}$
a functor. Additionally, let $q_{i}\colon A_{i}\to B_{i}$ for $i=1,\dots,k$
be a collection of maps of spaces. If $F$ preserves all $q_{i}$-colimits
(resp. $q_{i}$-limits) in the $i$-th coordinate, then the external
product square $\left(*\right)$ satisfies the $\text{BC}_{!}$ (resp.
$\text{BC}_{*}$) condition.
\end{prop}
\begin{proof}
We proceed by a sequence of reductions. First, by induction on $k$ and
horizontal pasting (\corref{Horizontal_Pasting_BC}), we can reduce
to $k=2$. Write $q_{1}\times q_{2}$ as a composition
\[
A_{1}\times A_{2}\oto{q_{1}\times\Id}B_{1}\times A_{2}\oto{\Id\times q_{2}}B_{1}\times B_{2}.
\]
The diagram
\[
\xymatrix@R=2pc@C=3pc{\fun\left(B_{1},\mathcal{C}_{1}\right)\times\fun\left(B_{2},\mathcal{C}_{2}\right)\ar[d]^{\Id\times q_{2}^{*}}\ar[r]^-{\boxtimes} & \fun\left(B_{1}\times B_{2},\mathcal{D}\right)\ar[d]^{\left(\Id\times q_{2}\right)^{*}}\\
\fun\left(B_{1},\mathcal{C}_{1}\right)\times\fun\left(A_{2},\mathcal{C}_{2}\right)\ar[d]^{q_{1}^{*}\times\Id}\ar[r]^-{\boxtimes} & \fun\left(B_{1}\times A_{2},\mathcal{D}\right)\ar[d]^{\left(q_{1}\times\Id\right)^{*}}\\
\fun\left(A_{1},\mathcal{C}_{1}\right)\times\fun\left(A_{2},\mathcal{C}_{2}\right)\ar[r]^-{\boxtimes} & \fun\left(A_{1}\times A_{2},\mathcal{D}\right)
}
\]
exhibits $\left(*\right)$ as a vertical pasting of the top and bottom
squares. Hence, by \corref{Vertical_Pasting_BC}, it is enough to
show that each of them satisfies the $\bc_{!}$ (resp. $\bc_{*}$)
condition. We will focus on the bottom square (the argument for the
top square is analogous). Since (co)limits in $A_{2}$-local systems
are computed point-wise, the external product functor
\[
F_{A_{2}}\colon\mathcal{C}_{1}\times\fun\left(A_{2},\mathcal{C}_{2}\right)\to\fun\left(A_{2},\mathcal{D}\right)
\]
preserves in each coordinate the (co)limits which are preserved by
$F$. By replacing the $\infty$-category $\mathcal{C}_{2}$ with
$\fun\left(A_{2},\mathcal{C}_{2}\right)$, the $\infty$-category
$\mathcal{D}$ with $\fun\left(A_{2},\mathcal{D}\right)$ and the
functor $F$ with $F_{A_{2}}$, we may assume without loss of generality
that $A_{2}=\Delta^{0}$. The bottom square becomes
\[
\xymatrix@R=2pc@C=3pc{\fun\left(B_{1},\mathcal{C}_{1}\right)\times\mathcal{C}_{2}\ar[d]^{q_{1}^{*}\times\Id}\ar[r]^{\quad\boxtimes} & \fun\left(B_{1},\mathcal{D}\right)\ar[d]^{q_{1}^{*}}\\
\fun\left(A_{1},\mathcal{C}_{1}\right)\times\mathcal{C}_{2}\ar[r]^{\quad\boxtimes} & \fun\left(A_{1},\mathcal{D}\right).
}
\]
By the exponential rule (\lemref{Exponential_Rule_BC}), it is enough
to show that the left square in the following diagram satisfies the
$\bc_{!}$ (resp. $\bc_{*}$) condition:
\[
\xymatrix@R=2pc@C=3pc{\fun\left(B_{1},\mathcal{C}_{1}\right)\ar[d]^{q_{1}^{*}}\ar[r]^{\boxtimes\qquad} & \fun\left(\mathcal{C}_{2},\fun\left(B_{1},\mathcal{D}\right)\right)\ar[d]^{\left(q_{1}^{\mathcal{C}}\right)^{*}}\ar[r]^{\sim} & \fun\left(B_{1},\fun\left(\mathcal{C}_{2},\mathcal{D}\right)\right)\ar[d]^{q_{1}^{*}}\\
\fun\left(A_{1},\mathcal{C}_{1}\right)\ar[r]^{\boxtimes\qquad} & \fun\left(\mathcal{C}_{2},\fun\left(A_{1},\mathcal{D}\right)\right)\ar[r]^{\sim} & \fun\left(A_{1},\fun\left(\mathcal{C}_{2},\mathcal{D}\right)\right).
}
\]
Equivalently, it is enough to show that the outer square $\square$
satisfies the $\bc_{!}$ (resp. $\bc_{*}$) condition. Observe that
$\square$ is the $\left(F^{\vee},q_{1}\right)$-square for the functor
\[
F^{\vee}\colon\mathcal{C}_{1}\to\fun\left(\mathcal{C}_{2},\mathcal{D}\right),
\]
which is the mate of $F$. From the assumption on $F$, the functor
$F^{\vee}$ preserves $q_{1}$-colimits (resp. $q_{1}$-limits) and
therefore $\square$ satisfies the $\text{BC}_{!}$ (resp. $\text{BC}_{*}$)
condition by the univariate version of \propref{BC_Functor}.
\end{proof}
\begin{defn}
Let $\mathcal{C}_{1},\dots,\mathcal{C}_{k}$ and $\mathcal{D}$ be
$m$-semiadditive $\infty$-categories. An \emph{$m$-semiadditive
multi-functor} $F\colon\prod\limits _{i=1}^{k}\mathcal{C}_{i}\to\mathcal{D}$
is a functor that preserves $m$-finite colimits in each coordinate
separately.
\end{defn}
In particular, we get
\begin{cor}
\label{cor:Semi_Add_Multi_Functor} Let $\mathcal{C}_{1},\dots,\mathcal{C}_{k}$
and $\mathcal{D}$ be $m$-semiadditive $\infty$-categories. Let
$F\colon\prod\limits _{i=1}^{k}\mathcal{C}_{i}\to\mathcal{D}$ be
an $m$-semiadditive multi-functor. For every collection of $m$-finite
maps $q_{i}\colon A_{i}\to B_{i}$ for $i=1,\dots,k$, the external
product square $\left(*\right)$ from \defref{External_Peoduct} satisfies
both $\bc$-conditions.
\end{cor}
\subsection{Symmetric Monoidal Structure}
In this section, we study the interaction of higher semiadditivity
with (symmetric) monoidal structures.
\subsubsection{Monoidal Local Systems}
Let $\left(\mathcal{C},\otimes,\one\right)$ be a (symmetric) monoidal
$\infty$-category. For every space $A$, the $\infty$-category $\fun\left(A,\mathcal{C}\right)$
acquires a point-wise (symmetric) monoidal structure. Moreover, given
a map of spaces $q\colon A\to B$, the functor
\[
q^{*}\colon\fun\left(B,\mathcal{C}\right)\to\fun\left(A,\mathcal{C}\right)
\]
is (symmetric) monoidal in a canonical way (\cite[Example 3.2.4.4]{ha}).
\begin{prop}
\label{prop:Local_Systems_Tensor_Normed}Let $\left(\mathcal{C},\otimes,\one\right)$
be a monoidal $\infty$-category. Let $q\colon A\to B$ be a weakly
$\mathcal{C}$-ambidextrous map of spaces, such that $\otimes$ distributes
over $q$-colimits. The normed functor
\[
q^{\can}\colon\fun\left(A,\mathcal{C}\right)\nto\fun\left(B,\mathcal{C}\right)
\]
is $\otimes$-normed in a canonical way (see \defref{Tensor_Normed_Functor}).
\end{prop}
\begin{proof}
Consider the diagram
\[
\xymatrix@C=4pc{
q_!(q^*X \otimes Y)\ar@{-->}@/_2pc/[ddr]\ar[r]^-{u_{!,X} \otimes u_{!,Y}} &
q_!((q^*\underline{q_!)q^*}X \otimes (q^*q_!)Y)\ar[d]^{\wr}\ar[r]^-{c_{!,X}\otimes \Id} &
q_!(q^*X \otimes q^*q_!Y)\ar[d]^{\wr} \\
& \underline{q_!q^*}(\underline{q_!q^*}X \otimes q_!Y)\ar[d]^{c_{!,(X\otimes q_!Y)}}\ar[r]^-{c_{!,X}\otimes \Id} &
\underline{q_!q^*}(X \otimes q_!Y)\ar[d]^{c_{!,(X\otimes q_!Y)}} \\
& \underline{q_!q^*}X \otimes q_!Y\ar[r]^-{c_{!,X}\otimes \Id} &
X \otimes q_!Y.
}
\]
The triangle on the left commutes by definition, where the dashed arrow is induced by the colax monoidality of $q_!$. The rest of the diagram commutes for formal reasons. The composition along the bottom path of the diagram is the second map in \defref{Tensor_Normed_Functor} and we shall show it is an isomorphism (the proof for the first one follows by symmetry). Since the diagram commutes, it suffices to show that the composition along the top and then right path is an isomorphism. By the zig-zag identities, the latter is homotopic to the composition
\[
\xymatrix@C=4pc{
q_!(q^*X \otimes Y)\ar[r]^-{\Id \otimes u_{!,Y}} &
q_!(q^*X \otimes q^*q_!Y)\xrightarrow{\,\smash{\raisebox{-0.5ex}{\ensuremath{\scriptstyle\sim}}}\,}
q_!q^*(X \otimes q_!Y)\ar[r]^-{c_{!,(X\otimes q_!Y)}} &
X \otimes q_!Y.
}
\]
Finally, this composition is by definition the $\bc_!$ map $\beta_!$ for the square
\[
\xymatrix@C=4pc{
\fun(B,\mathcal{C})\ar[d]^{q^*}\ar[r]^-{X \otimes (-)} &
\fun(B,\mathcal{C})\ar[d]^{q^*} \\
\fun(A,\mathcal{C})\ar[r]^-{q^*X \otimes (-)} &
\fun(A,\mathcal{C}).
}
\]
To see that $\beta_!$ is an isomorphism, it is enough to check this after pulling back to every point $b\in B$. This in turn follows from the assumption that $\otimes$ distributes over $q$-colimits.
\end{proof}
This allows us to apply the general results about $\otimes$-normed
functors to the setting of local systems.
\begin{cor}
\label{cor:Semi_Add_Mode}Let $F\colon\mathcal{C}\to\mathcal{D}$
be an $m$-finite colimit preserving monoidal functor between monoidal
$\infty$-categories that admit $m$-finite colimits over which the
tensor product distributes.
\begin{enumerate}
\item An $m$-finite map of spaces $q\colon A\to B$, that is $\mathcal{C}$-ambidextrous
and weakly $\mathcal{D}$-ambidextrous, is $\mathcal{D}$-ambidextrous.
\item If $\mathcal{C}$ is $m$-semiadditive, then $\mathcal{D}$ is also
$m$-semiadditive.
\end{enumerate}
\end{cor}
\begin{proof}
By \propref{Local_Systems_Tensor_Normed}, $q^{\can}$ is $\otimes$-normed.
By \thmref{Ambi_Functors}, the $\left(F,q\right)$-square is weakly ambidextrous.
Since $F$ preserves $m$-finite colimits, the $\left(F,q\right)$-square
satisfies the $\bc_{!}$-condition. (1) now follows from \propref{Tensor_Ambi}.
We prove (2) by induction on $m$. For $m=-2$, there is nothing to
prove, and so we assume $m\ge-1$. By the inductive hypothesis, we
may assume $\mathcal{D}$ is $\left(m-1\right)$-semiadditive. In
this case, every $m$-finite map $q\colon A\to B$ is weakly $\mathcal{D}$-ambidextrous
and $\mathcal{C}$-ambidextrous, hence by (1), is $\mathcal{D}$-ambidextrous.
\end{proof}
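A typical application of (2): if $\mathcal{C}$ is an $m$-semiadditive symmetric
monoidal $\infty$-category whose tensor product distributes over $m$-finite colimits
and $R$ is a commutative algebra object of $\mathcal{C}$, then (granting that
$\mathrm{Mod}_{R}\left(\mathcal{C}\right)$ admits $m$-finite colimits and that its
relative tensor product distributes over them) the free module functor
$R\otimes\left(-\right)\colon\mathcal{C}\to\mathrm{Mod}_{R}\left(\mathcal{C}\right)$
is symmetric monoidal and $m$-finite colimit preserving, and hence
$\mathrm{Mod}_{R}\left(\mathcal{C}\right)$ is $m$-semiadditive.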
The following definition is the natural notion of (symmetric) monoidal
structure in the realm of $m$-semiadditive $\infty$-categories.
\begin{defn}
An \emph{$m$-semiadditively} (symmetric) monoidal $\infty$-category,
is an $m$-semiadditive (symmetric) monoidal $\infty$-category $\mathcal{C}$,
such that the tensor product distributes over $m$-finite colimits.
\end{defn}
\begin{lem}
\label{lem:Box_Unit}Let $\left(\mathcal{C},\otimes,\one\right)$
be an $m$-semiadditively monoidal $\infty$-category and $A$ an
$m$-finite space.
\begin{enumerate}
\item For every $X\in\mathcal{C}$, we have $|A|_{X}\simeq\Id_{X}\otimes|A|_{\one}$.
\item $A$ is $\mathcal{C}$-amenable if and only if $|A|_{\one}$
is an isomorphism.
\end{enumerate}
\end{lem}
\begin{proof}
We start with (1). Given an object $X\in\mathcal{C}$, the functor
$F_{X}\colon\mathcal{C}\to\mathcal{C}$, given by
\[
F_{X}\left(Y\right)=X\otimes Y,
\]
preserves $m$-finite colimits. Thus, by \corref{Integral_Functor}
we have:
\[
\Id_{X}\otimes|A|_{\one}=F_{X}\left(|A|_{\one}\right)=|A|_{F_{X}\left(\one\right)}=|A|_{X}.
\]
(2) is an immediate corollary of (1).
\end{proof}
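For example, if $A$ is a finite set with $n$ elements, then $|A|_{\one}=n\cdot\Id_{\one}$,
so by (2), $A$ is $\mathcal{C}$-amenable precisely when $n\cdot\Id_{\one}$ is invertible
(e.g. for $\infty$-categories of modules over a commutative ring, precisely when $n$
is invertible in the ring).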
\begin{notation}
For an $m$-semiadditively symmetric monoidal $\infty$-category $\left(\mathcal{C},\otimes,\one\right)$
and an $m$-finite space $A$, we abuse notation by identifying $|A|_{\one}$
with $|A|$. If we want to emphasize the $\infty$-category
$\mathcal{C}$, we write $|A|_{\mathcal{C}}$. By \lemref{Box_Unit},
this conflation of terminology is rather harmless.
\end{notation}
We also have the following consequence for dualizability.
\begin{prop}
\label{cor:Ambi_Dualizable_Spaces} Let $\left(\mathcal{C},\otimes,\one\right)$
be a monoidal $\infty$-category. For every $\mathcal{C}$-ambidextrous space $A$ such that $\otimes$ distributes over $A$-colimits, the object
$\one_{A}$ (see Notation \ref{not:Canonical_Norm_Notation}) is dualizable. In particular, if $\left(\mathcal{C},\otimes,\one\right)$
is an $m$-semiadditively monoidal $\infty$-category, then $\one_{A}$
is dualizable for every $m$-finite space $A$.
\end{prop}
\begin{proof}
By \propref{Local_Systems_Tensor_Normed}, the map $q\colon A\to\pt$
corresponds to a $\otimes$-normed functor
\[
q^{\can}\colon\fun\left(A,\mathcal{C}\right)\nto\mathcal{C}
\]
and by definition $\one_{A}=\one_{q}=q_{!}q^{*}\one$. Thus, the claim
follows from \propref{Iso_Normed_Duality}.
\end{proof}
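For instance, if $A$ is a finite set with $n$ elements, then
$\one_{A}\simeq\bigoplus_{i=1}^{n}\one$, which is visibly dualizable (indeed self dual)
as soon as $\mathcal{C}$ is $0$-semiadditively monoidal.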
\subsubsection{Symmetric Monoidal Dimension}
We now specialize to the \emph{symmetric} monoidal case. We begin
by recalling, following \cite[Section 5.1]{HopkinsLurie}, the notion
of dimension of a dualizable object $X$ in a symmetric monoidal
$\infty$-category $\left(\mathcal{C},\otimes,\one\right)$, which is
defined as follows. Let $X^{\vee}$
be the dual of $X$ and let
\[
\varepsilon\colon X^{\vee}\otimes X\to\one,\quad\eta\colon\one\to X\otimes X^{\vee}
\]
be the evaluation and coevaluation maps respectively.
\begin{defn}
We denote by
\[
\dim_{\mathcal{C}}\left(X\right)\in\End_{\mathcal{C}}\left(\one\right)
\]
the composition
\[
\one\oto{\eta}X\otimes X^{\vee}\oto{\sigma}X^{\vee}\otimes X\oto{\varepsilon}\one,
\]
where $\sigma$ is the swap map of the symmetric monoidal structure.
We say that a space $A$ is dualizable in $\mathcal{C}$, if $\one_{A}$
is dualizable in $\mathcal{C}$ and we denote
\[
\dim_{\mathcal{C}}\left(A\right)=\dim_{\mathcal{C}}\left(\one_{A}\right).
\]
\end{defn}
Dualizability of $m$-finite spaces in $\mathcal{S}_{m}^{m}$ assumes
a particularly simple form.
\begin{prop}
\label{prop:Dim_Sym_Span}Every $m$-finite space $A$ is self dual
in $\mathcal{S}_{m}^{m}$ and satisfies
\[
\dim_{\mathcal{S}_{m}^{m}}\left(A\right)=(\pt\from A^{S^{1}}\to\pt)=|A^{S^{1}}|\in\End_{\mathcal{S}_{m}^{m}}\left(\pt\right).
\]
\end{prop}
\begin{proof}
It is straightforward to check that the spans
\[
\varepsilon\colon(A\times A\xleftarrow{\Delta}A\to\pt)
\]
\[
\eta\colon(\pt\from A\oto{\Delta}A\times A),
\]
satisfy the zig-zag identities and therefore $\varepsilon$ is a duality
pairing exhibiting $A$ as self dual. Moreover, since $\varepsilon\circ\sigma$
is homotopic to $\varepsilon$, where $\sigma\colon A\times A\to A\times A$
is the symmetric monoidal swap, we get $\dim\left(A\right)=\varepsilon\circ\eta$.
Computing the relevant pullback explicitly,
\[
\xymatrix@R=1pc@C=1pc{ & & \ar[dl]\quad A^{S^{1}}\ar[dr]\\
& \ar[dl]A\ar[dr] & & \ar[dl]A\ar[dr]\\
\quad\pt\quad & & A\times A & & \quad\pt\quad
}
\]
we obtain the desired result.
\end{proof}
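For example, if $A$ is a finite set, then every loop in $A$ is constant, so
$A^{S^{1}}\simeq A$ and $\dim_{\mathcal{S}_{m}^{m}}\left(A\right)=|A|$,
the span $\pt\from A\to\pt$.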
As a symmetric monoidal $\infty$-category, $\mathcal{S}_{m}^{m}$
also has the following universal property.
\begin{thm}
[Harpaz, {{\cite[Corollary 5.8]{Harpaz}}}]\label{thm:=00005BHarpaz=00005DSymm_Mon}
Let $\left(\mathcal{C},\otimes,\one\right)$ be an $m$-semiadditively
symmetric monoidal $\infty$-category. There exists a unique $m$-semiadditive
symmetric monoidal functor $\mathcal{S}_{m}^{m}\to\mathcal{C}$ and
its underlying functor is $\one_{\left(-\right)}$.
\end{thm}
From this we immediately get
\begin{cor}
\label{cor:Dim_Sym}Let $\left(\mathcal{C},\otimes,\one\right)$ be
an $m$-semiadditively symmetric monoidal $\infty$-category. Every
$m$-finite space $A$ is dualizable in $\mathcal{C}$ and
\[
\dim_{\mathcal{C}}\left(A\right)=|A^{S^{1}}|\quad\in\hom_{h\mathcal{C}}\left(\one_{\mathcal{C}},\one_{\mathcal{C}}\right).
\]
In particular, if $A$ is a loop space (e.g. $A=B^{k}C_{p}$), we
have
\[
\dim_{\mathcal{C}}\left(A\right)=|A||\Omega A|.
\]
\end{cor}
\begin{proof}
By \thmref{=00005BHarpaz=00005DSymm_Mon}, there is a canonical $m$-finite
colimit preserving symmetric monoidal functor $F\colon\mathcal{S}_{m}^{m}\to\mathcal{C}$.
Since $F\left(A\right)=\one_{A}$ and $F$ is symmetric monoidal,
we have
\[
F\left(\dim_{\mathcal{S}_{m}^{m}}A\right)=\dim_{\mathcal{C}}\left(\one_{A}\right).
\]
Since $F$ also preserves $m$-finite colimits, we have by \corref{Integral_Functor},
that
\[
F\left(|B|_{\pt}\right)=|B|_{F\left(\pt\right)}=|B|_{\one_{\mathcal{C}}}
\]
for all $m$-finite $B$. We are therefore reduced to the universal
case $\mathcal{C}=\mathcal{S}_{m}^{m}$, which is given by \propref{Dim_Sym_Span}.
The last claim follows from \corref{Distributivity} and the fact that
if $A$ is a loop space, then $A^{S^{1}}\simeq A\times\Omega A$.
\end{proof}
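For instance, if $\mathcal{C}$ is $1$-semiadditively symmetric monoidal, then
for $A=BC_{p}=\Omega B^{2}C_{p}$ we have $\Omega A\simeq C_{p}$ and hence
\[
\dim_{\mathcal{C}}\left(BC_{p}\right)=|BC_{p}||C_{p}|=p\,|BC_{p}|,
\]
using that $|C_{p}|$, the integral over the $p$-element set $C_{p}$,
equals $p\cdot\Id_{\one}$.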
\subsection{Equivariant Powers}
Let $\mathcal{C}$ be a symmetric monoidal $\infty$-category and
$p$ a prime. As we shall recall below, for every object $X\in\mathcal{C}$, the $p$-th tensor
power $X^{\otimes p}$ carries a natural action of the cyclic group
$C_{p}\ss\Sigma_{p}$. Moreover, given a map $f\colon X\to Y$, we
get a $C_{p}$-equivariant morphism $f^{\otimes p}\colon X^{\otimes p}\to Y^{\otimes p}$.
Namely, there is a functor
\[
\Theta^{p}\colon\mathcal{C}\to\fun\left(BC_{p},\mathcal{C}\right),
\]
whose composition with $e^{*}\colon\fun\left(BC_{p},\mathcal{C}\right)\to\mathcal{C}$
(where $e\colon\pt\to BC_{p}$) is homotopic to the $p$-th power
functor $\left(-\right)^{\otimes p}\colon\mathcal{C}\to\mathcal{C}$.
In this section, we study the functor $\Theta^{p}$, its naturality
and additivity properties.
\subsubsection{Functoriality \& Integration}
We begin by describing $\Theta^{p}$ formally. It will be useful to work in the greater generality of $\mathcal{C}$-valued \emph{local systems} rather than single objects.
Given a simplicial set $K$ we define the $C_{p}$-equivariant $p$-power of $K$ to be the simplicial set
$K^p_{hC_p} = (K^p\times EC_p)/C_p.$
For $K=\mathcal{C}$ a quasi-category, one can easily verify that $\mathcal{C}^p_{hC_p}$ is a quasi-category as well.
Moreover, since the $C_p$ action on $\mathcal{C}^p\times EC_p$ is \emph{free}, the quasi-category $\mathcal{C}^p_{hC_p}$ is a model for the $\infty$-categorical quotient of $\mathcal{C}^p$ by $C_p$\footnote{Compare \cite[Section 6.1.4]{ha}, where the analogous construction of $\Sigma_n$-equivariant powers is discussed.}. In particular, we can consider this construction for every $A\in \mathcal{S}$, which we also denote by
$A\wr C_{p}=\left(A^{p}\right)_{hC_{p}}.$
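For example, $\pt\wr C_{p}=BC_{p}$. More generally, if $A$ is a discrete space,
the components of $A\wr C_{p}$ correspond to the $C_{p}$-orbits of $A^{p}$:
since $p$ is prime, an orbit is either free, contributing a contractible component,
or a fixed (i.e. constant) tuple, contributing a copy of $BC_{p}$.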
\begin{lem} \label{lem:Wr_Fiber_Product}
The functor
$\left(-\right)\wr C_{p}\colon \mathcal{S} \to \mathcal{S}$
preserves fiber products.
\end{lem}
\begin{proof}
The functor $\left(-\right)\wr C_{p}$ can be identified with the composition
\[
\mathcal{S} \oto{e_*} \fun(BC_p,\mathcal{S})
\simeq \mathcal{S}_{/BC_p} \oto{\pi} \mathcal{S},
\]
where $\pt \oto{e} BC_p$ is a choice of base point. The functor $e_*$ preserves limits as it is a right adjoint, and the canonical projection $\pi$ preserves limits of contractible shape \cite[Proposition 4.4.2.9]{ha}.
\end{proof}
The construction $\left(-\right)\wr C_{p}$ induces a functor
\[
\left(-\right)_{hC_{p}}^{p}:\fun\left(A,\mathcal{C}\right)\to\fun(\left(A^{p}\right)_{hC_{p}},\left(\mathcal{C}^{p}\right)_{hC_{p}}).
\]
Using this, we have the following:
\begin{defn}
\label{def:Theta}Given a symmetric monoidal $\infty$-category $\mathcal{C}$,
we define the functor
\[
\Theta_A^{p}:\fun\left(A,\mathcal{C}\right)\to\fun\left(A\wr C_{p},\mathcal{C}\right)
\]
to be the composition of $\left(-\right)_{hC_{p}}^{p}$ with
\[
\left(\mathcal{C}^{p}\right)_{hC_{p}}\to\left(\mathcal{C}^{p}\right)_{h\Sigma_{p}}\oto{\otimes}\mathcal{C}.
\]
We shall suppress the subscript $A$ in $\Theta_A$ when the space $A$ is understood from the context.
\end{defn}
The $\Theta^{p}$ operation is functorial in the following sense.
\begin{lem}
\label{lem:Theta_Functoriality} Let $F\colon\mathcal{C}\to\mathcal{D}$
be a symmetric monoidal functor between symmetric monoidal $\infty$-categories.
For every space $A$, the diagram
\[
\xymatrix@C=3pc{\fun\left(A,\mathcal{C}\right)\ar[d]_{F_{*}}\ar[r]^-{\Theta^{p}} & \fun\left(A\wr C_{p},\mathcal{C}\right)\ar[d]^{F_{*}}\\
\fun\left(A,\mathcal{D}\right)\ar[r]^-{\Theta^{p}} & \fun\left(A\wr C_{p},\mathcal{D}\right)
}
\]
commutes up to homotopy.
\end{lem}
\begin{proof}
The square in question is the outer square of the following diagram
\[
\xymatrix@C=3pc{\fun\left(A,\mathcal{C}\right)\ar[d]_{F_{*}}\ar[r] & \fun(A\wr C_{p},\mathcal{C}_{hC_{p}}^{p})\ar[d]^{F_{*}}\ar[r]^-{\otimes} & \fun\left(A\wr C_{p},\mathcal{C}\right)\ar[d]^{F_{*}}\\
\fun\left(A,\mathcal{D}\right)\ar[r] & \fun(A\wr C_{p},\mathcal{D}_{hC_{p}}^{p})\ar[r]^-{\otimes} & \fun\left(A\wr C_{p},\mathcal{D}\right).
}
\]
The left square commutes by the functoriality of $\mathcal{C}\mapsto\mathcal{C}_{hC_{p}}^{p}$,
and the right one because $F$ is symmetric monoidal.
\end{proof}
\begin{defn}
For a map of spaces $q\colon A\to B$, the naturality of \defref{Theta}
gives a commutative square
\[
\xymatrix@C=3pc{\fun\left(B,\mathcal{C}\right)\ar[d]^{q^{*}}\ar[r]^-{\Theta_B^{p}} & \fun\left(B\wr C_{p},\mathcal{C}\right)\ar[d]^{\left(q\wr C_{p}\right)^{*}}\\
\fun\left(A,\mathcal{C}\right)\ar[r]^-{\Theta_A^{p}} & \fun\left(A\wr C_{p},\mathcal{C}\right).
}
\]
\end{defn}
We call this the $\Theta^{p}$-square of $q$. If
$q$ is $m$-finite, then so is $q\wr C_{p}$. If additionally $\mathcal{C}$
is $\left(m-1\right)$-semiadditive and admits $m$-finite (co)limits,
the $\Theta^{p}$-square is canonically normed.
\begin{example}
For a space $A$, we have a canonical fiber sequence
\[
A^{p}\to A\wr C_{p}\xrightarrow{\pi}BC_{p}.
\]
The $\Theta^{p}$-square of $q\colon A\to\pt$ is
\[
\xymatrix@C=3pc{\fun\left(\pt,\mathcal{C}\right)\ar[d]^{q^{*}}\ar[r]^-{\Theta^{p}} & \fun\left(BC_{p},\mathcal{C}\right)\ar[d]^{\pi^{*}}\\
\fun\left(A,\mathcal{C}\right)\ar[r]^-{\Theta^{p}} & \fun\left(A\wr C_{p},\mathcal{C}\right).
}
\]
\end{example}
\begin{lem}
\label{lem:Theta_BC}Let $q\colon A\to B$ be a map of spaces and
let $\left(\mathcal{C},\otimes,\one\right)$ be a symmetric monoidal
$\infty$-category that admits all $q$-(co)limits. If $\otimes$
distributes over all $q$-colimits (resp. $q$-limits), then the $\Theta^{p}$-square
satisfies the $\text{BC}_{!}$ (resp. $\text{BC}_{*}$) condition.
\end{lem}
\begin{proof}
We horizontally paste the $\Theta^{p}$-square for $q$ with the square
induced by the pullback diagram
\[
\xymatrix@C=3pc{A^{p}\ar[d]_{q^{p}}\ar[r]^-{\pi_{A}} & A\wr C_{p}\ar[d]^{q\wr C_{p}}\\
B^{p}\ar[r]^-{\pi_{B}} & B\wr C_{p},
}
\]
to obtain
\[
\qquad
\vcenter{
\xymatrix@C=3pc{\fun\left(B,\mathcal{C}\right)\ar[d]^{q^{*}}\ar[r]^-{\Theta_{B}^{p}} & \fun\left(B\wr C_{p},\mathcal{C}\right)\ar[d]^{\left(q\wr C_{p}\right)^{*}}\ar[r]^-{\ \pi_{B}^{*}} & \fun\left(B^{p},\mathcal{C}\right)\ar[d]^{\left(q^{p}\right)^{*}}\\
\fun\left(A,\mathcal{C}\right)\ar[r]^-{\Theta_{A}^{p}} & \fun\left(A\wr C_{p},\mathcal{C}\right)\ar[r]^-{\ \pi_{A}^{*}} & \fun\left(A^{p},\mathcal{C}\right).
}}
\qquad\left(*\right)
\]
The right square $\square_{R}$ satisfies both $\text{BC}$-conditions
by \lemref{Base_Change_BC}. Since $\pi_{B}^{*}$ is conservative
($\pi_{B}$ is surjective on connected components), by \corref{Horizontal_Pasting_BC}(2),
it is enough to show that the outer square $\square$ satisfies the
$\text{BC}_{!}$ (resp. $\text{BC}_{*}$) condition. We can now write
$\square$ as a horizontal pasting of two squares $\square_{L}'$
and $\square_{R}'$ in a different way:
\[
\xymatrix@C=3pc{\fun\left(B,\mathcal{C}\right)\ar[d]^{q^{*}}\ar[r]^-{\Delta} & \fun\left(B,\mathcal{C}\right)^{p}\ar[d]^{\left(q^{*}\right)^{p}}\ar[r]^-{\boxtimes^{p}} & \fun\left(B^{p},\mathcal{C}\right)\ar[d]^{\left(q^{p}\right)^{*}}\\
\fun\left(A,\mathcal{C}\right)\ar[r]^-{\Delta} & \fun\left(A,\mathcal{C}\right)^{p}\ar[r]^-{\boxtimes^{p}} & \fun\left(A^{p},\mathcal{C}\right).
}
\]
The square $\square_{L}'$ satisfies the $\text{BC}$-conditions trivially and
$\square_{R}'$ by \propref{BC_External_Product}.
\end{proof}
\begin{prop}
\label{prop:Theta_Ambi}Let $\left(\mathcal{C},\otimes,\one\right)$ be an $m$-semiadditively symmetric monoidal $\infty$-category and let $q\colon A\to B$ be an $m$-finite map of spaces. The corresponding $\Theta^{p}$-square is ambidextrous.
\end{prop}
\begin{proof}
Since $\mathcal{C}$ is $m$-semiadditive, the $\Theta^{p}$-square
for $q$ is iso-normed and hence it suffices to show that it is weakly
ambidextrous. Namely, that the associated norm diagram commutes. The
proof is very similar to the argument given in \thmref{Ambi_Functors},
and therefore we shall use similar notation and indicate only the
changes that need to be made. We proceed by induction on $m$ using
the diagram of spaces
\[
\qquad
\vcenter{
\xymatrix@C=3pc{A\ar[rd]^{\delta}\ar@/^{1pc}/@{=}[rrd]\ar@/_{1pc}/@{=}[rdd]\\
& A\times_{B}A\ar[d]^{\pi_{2}}\ar[r]^{\pi_{1}} & A\ar[d]^{q}\\
& A\ar[r]^{q} & B.
}}
\qquad\left(\heartsuit\right)
\]
Denoting $\tilde{\left(-\right)}=\left(-\right)\wr C_{p}$, we consider
the diagram of functors from $\fun(A,\mathcal{C})$ to $\fun(A\wr C_{p},\mathcal{C})$
(where all unnamed arrows are $\bc$-maps)
\[
\xymatrix@C=3pc{\tilde{q}^{*}\tilde{q}_{!}\red{\Theta_{A}^{p}}{}\ar[dd]^{\beta_{!}}\ar[r]^-{\beta^{-1}} & \left(\tilde{\pi}_{2}\right)_{!}\left(\tilde{\pi}_{1}\right)^{*}\red{\Theta_{A}^{p}}{}\ar@{-->}[dd]^{\wr}\ar[r]^-{\mu_{\tilde{\delta}}} & \left(\tilde{\pi}_{2}\right)_{!}\tilde{\delta}_{!}\tilde{\delta}^{*}\left(\tilde{\pi}_{1}\right)^{*}\red{\Theta_{A}^{p}}{}\ar@{-->}[d]^{\wr}\ar@/^{1pc}/[rrdd]^{\sim}\\
& & \left(\tilde{\pi}_{2}\right)_{!}\tilde{\delta}_{!}\tilde{\delta}^{*}\red{\Theta_{A\times_{B}A}^{p}}{}\left(\pi_{1}\right)^{*}\ar@{-->}[d]^{\wr}\\
\tilde{q}^{*}\red{\Theta_{B}^{p}}{}q_{!}\ar[dd]^{\wr} & \left(\tilde{\pi}_{2}\right)_{!}\red{\Theta_{A\times_{B}A}^{p}}{}\left(\pi_{1}\right)^{*}\ar@{-->}[dd]\ar@{-->}[ru]^{\mu_{\tilde{\delta}}}\ar@{-->}[rd]_{\mu_{\delta}} & \left(\tilde{\pi}_{2}\right)_{!}\tilde{\delta}_{!}\red{\Theta_{A}^{p}}{}\delta^{*}\left(\pi_{1}\right)^{*}\ar@{-->}[d]\ar@{-->}[rr]^{\sim} & & \red{\Theta_{A}^{p}}.\\
& & \left(\tilde{\pi}_{2}\right)_{!}\red{\Theta_{A\times_{B}A}^{p}}{}\delta_{!}\delta^{*}\left(\pi_{1}\right)^{*}\ar@{-->}[d]\\
\red{\Theta_{A}^{p}}{}q^{*}q_{!}\ar[r]^-{\beta^{-1}} & \red{\Theta_{A}^{p}}{}\left(\pi_{2}\right)_{!}\left(\pi_{1}\right)^{*}\ar[r]^-{\mu_{\delta}} & \red{\Theta_{A}^{p}}{}\left(\pi_{2}\right)_{!}\delta_{!}\delta^{*}\left(\pi_{1}\right)^{*}\ar@/_{1pc}/[rruu]_{\sim}
}
\]
By \lemref{Triangle_Unit_Counit_Norm_Diagram}(1), it suffices to
show that the above (solid) diagram commutes. As in the proof of \thmref{Ambi_Functors},
all the parts, except for the rectangle on the left and the triangle in the middle, commute for formal reasons. The functor
$\left(-\right)\wr C_{p}\colon \mathcal{S}\to \mathcal{S}$
preserves fiber products (\lemref{Wr_Fiber_Product}) and therefore $\tilde{\delta}$ can be identified with the diagonal of $\tilde{q}$.
By \lemref{Theta_BC}, the $\bc_{!}$
map in the middle triangle is an isomorphism. Thus, the middle triangle
commutes by the inductive hypothesis and \lemref{Triangle_Unit_Counit_Norm_Diagram}(2).
As for the rectangle, we apply a similar argument to the one in \thmref{Ambi_Functors},
using again that the functor $\left(-\right)\wr C_{p}$ preserves
fiber products, and the commutative cubical diagram
\[
\qquad
\vcenter{
\xymatrix@R=1pc@C=1pc{\fun\left(B,\mathcal{C}\right)\ar[dd]\sb(0.3){q^{*}}\ar[rrr]\sp(0.6){\red{\Theta_{B}^{p}}{}}\ar[rd]^{q^{*}}\ar@{..>}[rrrrd] & & & \fun\left(B\wr C_{p},\mathcal{C}\right)\ar@{->}'[d][dd]\sp(0.3){\tilde{q}^{*}}\ar[rd]^{\tilde{q}^{*}}\\
& \fun\left(A,\mathcal{C}\right)\ar[dd]\sb(0.3){\pi_{2}^{*}}\ar[rrr]\sp(0.4){\red{\Theta_{A}^{p}}{}} & & & \fun\left(A\wr C_{p},\mathcal{C}\right)\ar[dd]\sp(0.65){\tilde{\pi}_{2}^{*}}\\
\fun\left(A,\mathcal{C}\right)\ar@{->}'[r][rrr]\sp(0.4){\red{\Theta_{A}^{p}}{}\qquad}\ar[rd]^{\pi_{1}^{*}}\ar@{..>}[rrrrd] & & & \fun\left(A\wr C_{p},\mathcal{C}\right)\ar[rd]^{\tilde{\pi}_{1}^{*}}\\
& \fun\left(A\times_{B}A,\mathcal{C}\right)\ar[rrr]\sp(0.4){\red{\Theta_{A\times_{B}A}^{p}}{}} & & & \fun\left((A\times_{B}A)\wr C_{p},\mathcal{C}\right).}
}
\qquad\left(\spadesuit\right)
\]
\end{proof}
\begin{thm}
\label{thm:Theta_Integral} Let $\mathcal{C}$ be an $m$-semiadditively
symmetric monoidal $\infty$-category and $q\colon A\to B$ an $m$-finite
map of spaces. For every $X,Y\in\fun\left(B,\mathcal{C}\right)$ and
$f\colon q^{*}X\to q^{*}Y$, we have
\[
\Theta_{B}^{p}\left(\int\limits _{q}f\right)=\int\limits _{q\wr C_{p}}\Theta_{A}^{p}\left(f\right)\quad\in\hom_{h\fun\left(B\wr C_{p},\mathcal{C}\right)}\left(\Theta^{p}X,\Theta^{p}Y\right).
\]
\end{thm}
\begin{proof}
By \lemref{Theta_BC}, the $\Theta^{p}$-square satisfies the $\bc$
conditions, and by \propref{Theta_Ambi}, it is ambidextrous. Thus,
the claim follows from \propref{Integral_Ambi}.
\end{proof}
\subsubsection{Additivity of Theta}
We now investigate the interaction of $\Theta^{p}$ with addition
of morphisms. Let $\mathcal{C}$ be a $0$-semiadditively symmetric
monoidal $\infty$-category. Given two objects $X,Y\in\mathcal{C}$
and two maps $f,g\colon X\to Y$, we can express $f+g$ as an integral
of the pair $\left(f,g\right)$ over $q\colon\pt\sqcup\pt\to\pt$
(see \exaref{Integral_Sum}). Applying \thmref{Theta_Integral} to
this special case and analyzing the result, we will derive a formula
of the form
\[
\Theta^{p}\left(f+g\right)=\Theta^{p}\left(f\right)+\Theta^{p}\left(g\right)+\text{``induced terms''}.
\]
The $\Theta^{p}$-square for $q\colon\pt\sqcup\pt\to\pt$ is
\[
\qquad
\vcenter{
\xymatrix{\fun\left(\pt,\mathcal{C}\right)\ar[d]^{q^{*}}\ar[rr]^{\Theta^{p}_{\pt}\qquad} & & \fun\left(BC_{p},\mathcal{C}\right)\ar[d]^{\left(q\wr C_{p}\right)^{*}}\\
\fun\left(\pt\sqcup\pt,\mathcal{C}\right)\ar[rr]^{\Theta_{\pt\sqcup\pt}^{p}\quad} & & \fun\left(\left(\pt\sqcup\pt\right)\wr C_{p},\mathcal{C}\right).
}}
\qquad\left(*\right)
\]
Our first goal is to make this diagram more explicit. First, we can
identify $q^{*}$ with the diagonal
\[
\Delta\colon\mathcal{C}\to\mathcal{C}\times\mathcal{C}.
\]
Next, let $S$ be the set
\[
S=\left\{ w\in\left\{ x,y\right\} ^{p}\mid w\neq x^{p},y^{p}\right\} ,
\]
with $x,y$ formal variables, and let $\overline{S}$ be the set of
orbits of $S$ under the action of $C_{p}$ by cyclic shift. We have
a homotopy equivalence of spaces
\[
\left(\pt\sqcup\pt\right)\wr C_{p}\simeq BC_{p}\sqcup BC_{p}\sqcup\overline{S},
\]
and therefore an equivalence of $\infty$-categories
\[
\fun\left(\left(\pt\sqcup\pt\right)\wr C_{p},\mathcal{C}\right)\simeq\mathcal{C}^{BC_{p}}\times\mathcal{C}^{BC_{p}}\times\prod_{\overline{w}\in\overline{S}}\mathcal{C}.
\]
Choosing a base point map $e\colon\pt\to BC_{p}$, we see that up
to homotopy, we have
\[
q\wr C_{p}= (\Id,\Id, e,\dots, e).
\]
Similarly, the bottom arrow of $\left(*\right)$ can be identified
with a functor
\[
\Phi\colon\mathcal{C}\times\mathcal{C}\to\mathcal{C}^{BC_{p}}\times\mathcal{C}^{BC_{p}}\times\prod_{\overline{w}\in\overline{S}}\mathcal{C},
\]
which we now describe. For each $\overline{w}\in\overline{S}$, let
\[
e_{\overline{w}}\colon\pt\to\left(\pt\sqcup\pt\right)\wr C_{p}
\]
be the map choosing the point $\overline{w}\in\overline{S}$ and let
$e_{w}\colon\pt\to\overline{w}$ be the map choosing the point $w\in\overline{w}$.
Given an element $w\in\left\{ x,y\right\} ^{p}$ we define a functor
$w\left(-,-\right):\mathcal{C}\times\mathcal{C}\to\mathcal{C}$ as
the composition
\[
\xymatrix{\fun\left(\pt\sqcup\pt,\mathcal{C}\right)\ar[r]^-{\Delta} & \fun\left(\pt\sqcup\pt,\mathcal{C}\right)^{p}\ar[r]^-{\boxtimes} & \fun\left(\left(\pt\sqcup\pt\right)^{p},\mathcal{C}\right)\ar[r]^-{w^{*}} & \fun\left(\pt,\mathcal{C}\right).}
\]
Informally, for objects $X,Y\in\mathcal{C}$, we have
\[
w\left(X,Y\right)=Z_{1}\otimes Z_{2}\otimes\cdots\otimes Z_{p},\qquad Z_{i}=\begin{cases}
X & \text{if}\quad w_{i}=x\\
Y & \text{if}\quad w_{i}=y
\end{cases}.
\]
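For illustration, in the case $p=3$ the set $S=\left\{ x,y\right\} ^{3}\setminus\left\{ x^{3},y^{3}\right\} $ has six elements, which form two free orbits under the cyclic shift action, so
\[
\overline{S}=\left\{ \overline{xxy},\overline{xyy}\right\} ,\qquad xxy\left(X,Y\right)=X\otimes X\otimes Y,\quad xyy\left(X,Y\right)=X\otimes Y\otimes Y.
\]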
\begin{lem}
\label{lem:Phi_Theta}
There is a natural isomorphism of functors
\[
\Phi\simeq\left(\Theta^{p}\circ p_{1},\Theta^{p}\circ p_{2},\left\{ w\left(-,-\right)\right\} _{\overline{w}\in\overline{S}}\right),
\]
where $p_{i}\colon\mathcal{C}\times\mathcal{C}\to\mathcal{C}$ is
the projection to the $i$-th component (it does not matter which
representative $w$ we take for each $\overline{w}\in\overline{S}$).
\end{lem}
\begin{proof}
The claim about the first two components follows from the commutativity
of the $\Theta^{p}$-square applied to the two inclusion maps $\pt\into\pt\sqcup\pt$.
The pullback square
\[
\xymatrix{\overline{w}\ar[d]\ar[rr] & & \left(\pt\sqcup\pt\right)^{p}\ar[d]^{\sigma}\\
\pt\ar[rr]^{e_{\overline{w}}\qquad} & & \left(\pt\sqcup\pt\right)\wr C_{p}
}
\]
induces the commutative square in the following diagram
\[
\xymatrix{\fun\left(\pt\sqcup\pt,\mathcal{C}\right)\ar[rr]^{\Theta_{\pt\sqcup\pt}^{p}\quad} & & \fun\left(\left(\pt\sqcup\pt\right)\wr C_{p},\mathcal{C}\right)\ar[d]^{e_{\overline{w}}^{*}}\ar[r]^{\sigma^{*}} & \fun\left(\left(\pt\sqcup\pt\right)^{p},\mathcal{C}\right)\ar[d]\ar@{-->}[rd]^{w^{*}}\\
& & \fun\left(\pt,\mathcal{C}\right)\ar[r]^{\Delta} & \fun\left(\overline{w},\mathcal{C}\right)\ar[r]^{e_{w}^{*}} & \fun\left(\pt,\mathcal{C}\right).
}
\]
Observe that the composition of the leftmost horizontal functor and
the left vertical functor is the $\overline{w}$ component of $\Phi$.
Since the composition of the two bottom horizontal functors is the
identity, it suffices to show that the resulting functor
\[
\fun\left(\pt\sqcup\pt,\mathcal{C}\right)\to\fun\left(\pt,\mathcal{C}\right),
\]
obtained from the composition along the entire bottom path of the
diagram, is naturally isomorphic to $w\left(-,-\right)$. Since the
diagram commutes, this is isomorphic to the composition along the
top path of the diagram, which is $w\left(-,-\right)$ by definition.
\end{proof}
Summing up, we have identified the $\Theta^{p}$-square $\left(*\right)$
with the following square
\[
\qquad
\vcenter{
\xymatrix{\mathcal{C}\ar[d]^{\Delta}\ar[rrrrr]^{\Theta^{p}} & & & & & \mathcal{C}^{BC_{p}}\ar[d]^{\left(\Id,\Id,e,\dots,e\right)^{*}}\\
\mathcal{C}\times\mathcal{C}\ar[rrrrr]^{\left(\Theta^{p}\circ p_{1},\Theta^{p}\circ p_{2},\left\{ w\left(-,-\right)\right\} _{\overline{w}\in\overline{S}}\right)\qquad\qquad} & & & & & \mathcal{C}^{BC_{p}}\times\mathcal{C}^{BC_{p}}\times\prod\limits _{\overline{w}\in\overline{S}}\mathcal{C}.
}}
\qquad\left(**\right)
\]
Using this we can compute the effect of $\Theta^{p}$ on the sum of
two maps.
\begin{prop}
\label{prop:Theta_Additiive}Let $\mathcal{C}$ be a $0$-semiadditively
symmetric monoidal $\infty$-category. Given $X,Y\in\mathcal{C}$
and a pair of maps $f,g\colon X\to Y$, we have
\[
\Theta^{p}\left(f+g\right)=\Theta^{p}\left(f\right)+\Theta^{p}\left(g\right)+\sum_{\overline{w}\in\overline{S}}\left(\int\limits _{e}w\left(f,g\right)\right).
\]
\end{prop}
\begin{proof}
The pair $\left(f,g\right)$ can be considered as a map $\left(f,g\right)\colon q^{*}X\to q^{*}Y$.
By \thmref{Theta_Integral}, \lemref{Phi_Theta} and the additivity of the integral (\propref{Integral_Additivity})
we have
\[
\Theta^{p}\left(f+g\right)=\Theta^{p}\left(\int\limits _{q}\left(f,g\right)\right)=\int\limits _{\left(\Id,\Id,e,\dots,e\right)}\left(\Theta^{p}\left(f\right),\Theta^{p}\left(g\right),\left\{ w\left(f,g\right)\right\} _{\overline{w}\in\overline{S}}\right)
\]
\[
=\Theta^{p}\left(f\right)+\Theta^{p}\left(g\right)+\sum_{\overline{w}\in\overline{S}}\left(\int\limits _{e}w\left(f,g\right)\right).
\]
\end{proof}
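For example, for $p=2$ the set $S=\left\{ xy,yx\right\} $ is a single free $C_{2}$-orbit, and the formula of \propref{Theta_Additiive} reads
\[
\Theta^{2}\left(f+g\right)=\Theta^{2}\left(f\right)+\Theta^{2}\left(g\right)+\int\limits _{e}\left(f\otimes g\right),
\]
where $e\colon\pt\to BC_{2}$ is the base point map and $f\otimes g=xy\left(f,g\right)$.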
\section{Higher Semiadditivity and Additive Derivations}
Let $\mathcal{C}$ be a \emph{stable} symmetric monoidal $\infty$-category such that the tensor product distributes over finite coproducts. For every pair of objects $X,Y\in \mathcal{C}$, the set
\[
\hom_{h\mathcal{C}}\left(X,Y\right)=\pi_{0}\map_{\mathcal{C}}\left(X,Y\right)
\]
has a canonical structure of an abelian group. Furthermore, if
\[
X\in\cocalg\left(\mathcal{C}\right),\quad Y\in\calg\left(\mathcal{C}\right),
\]
then the set $\hom_{h\mathcal{C}}\left(X,Y\right)$ acquires a commutative ring structure in the following way. Given $f,g\colon X\to Y$, we define their product as the composition
\[
X \oto{\mathrm{co-mult}} X\otimes X \oto{f\otimes g} Y\otimes Y \oto{\mathrm{mult}} Y.
\]
Fixing a prime $p$ and assuming further that $\mathcal{C}$
is $1$-semiadditively symmetric monoidal, we will construct in this section an operation (which depends on $p$)
\[
\delta\colon\hom_{h\mathcal{C}}\left(X,Y\right)\to\hom_{h\mathcal{C}}\left(X,Y\right),
\]
and show that it is an ``additive $p$-derivation''. We begin with
a general discussion of the algebraic notion of an additive $p$-derivation.
We proceed to construct an auxiliary operation $\alpha$ (which does
not require stability) and study its properties. We then specialize
to the stable case, construct the operation $\delta$ above, and study
its behavior on elements of the form $|A|$. Finally,
we shall use the properties of the operation $\delta$ to provide
a general criterion for deducing $\infty$-semiadditivity
of a presentably symmetric monoidal, $1$-semiadditive, stable, $p$-local
$\infty$-category.
\subsection{Additive \(p\)-derivations}
This section is devoted to the algebraic notion of an additive $p$-derivation.
We recall the definition and establish some of its basic properties.
\subsubsection{Definition \& Properties}
The following is a variant on the notion of a $p$-derivation (e.g.
see \cite[Definition 2.1]{buium2005arithmetic}), in which we do not
require the multiplicative property.
\begin{defn}
\label{def:Delta}Let $R$ be a commutative ring. An \emph{additive
$p$-derivation} on $R$, is a function of sets
\[
\delta\colon R\to R,
\]
that satisfies:
\begin{enumerate}
\item (additivity) $\delta\left(x+y\right)=\delta\left(x\right)+\delta\left(y\right)+\frac{x^{p}+y^{p}-\left(x+y\right)^{p}}{p}$
for all $x,y\in R$.
\item (normalization) $\delta\left(0\right)=\delta\left(1\right)=0.$
\end{enumerate}
The pair $\left(R,\delta\right)$ is called a \emph{semi-$\delta$-ring}.
A \emph{semi-$\delta$-ring homomorphism} from $\left(R,\delta\right)$
to $\left(R',\delta'\right)$, is a ring homomorphism $f\colon R\to R'$,
that satisfies $f\circ\delta=\delta'\circ f$.
\end{defn}
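For example, for $p=2$ the additivity property takes the form
\[
\delta\left(x+y\right)=\delta\left(x\right)+\delta\left(y\right)-xy,
\]
since $\frac{x^{2}+y^{2}-\left(x+y\right)^{2}}{2}=-xy$.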
\begin{rem}
\label{rem:Integer_Coeff_Polynom} The expression
\[
\frac{x^{p}+y^{p}-\left(x+y\right)^{p}}{p}
\]
is actually a polynomial with integer coefficients in the variables
$x$ and $y$ and does not involve division by $p$. In particular,
this is well defined for all $x,y\in R$, even when $R$ has $p$-torsion.
\end{rem}
\begin{rem}
In fact, the condition $\delta\left(0\right)=0$ is superfluous, as
it follows from the additivity property, and we include it in the
definition only for emphasis.
\end{rem}
The following follows immediately from the definitions.
\begin{lem}
\label{lem:Delta_Frob}Let $\delta\colon R\to R$ be an additive $p$-derivation
on a commutative ring $R$. The function $\psi\colon R\to R$ given
by
\[
\psi\left(x\right)=x^{p}+p\delta\left(x\right)
\]
is an additive lift of Frobenius, i.e. it is a homomorphism of abelian groups and agrees with the Frobenius modulo $p$.
\end{lem}
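Explicitly, the additivity of $\psi$ follows from the additivity of $\delta$:
\[
\psi\left(x+y\right)=\left(x+y\right)^{p}+p\delta\left(x\right)+p\delta\left(y\right)+x^{p}+y^{p}-\left(x+y\right)^{p}=\psi\left(x\right)+\psi\left(y\right),
\]
and modulo $p$ we have $\psi\left(x\right)\equiv x^{p}$.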
\begin{example}
\label{exa:Delta_Canonical} The following are some examples of additive
$p$-derivations.
\end{example}
\begin{enumerate}
\item For $R$ a subring of $\bb Q$, the \emph{Fermat quotient}
\[
\tilde{\delta}\left(x\right)=\frac{x-x^{p}}{p}
\]
is an additive $p$-derivation (we shall soon show that it is the
\emph{unique} additive $p$-derivation on any such $R$).
\item The same formula as for the Fermat quotient defines the unique additive $p$-derivation on the ring of $p$-adic integers $\bb Z_{p}$.
\item Fix $m\ge1$. Let $\mathcal{R}_{m}^{\square}$ be the commutative
ring freely generated by formal elements $|A|$, where
$A$ is an $m$-finite space, subject to the relations
\[
|A\sqcup B|=|A|+|B|,\quad|A\times B|=|A||B|.
\]
It is easy to verify that the operation
\[
\delta\left(|A|\right)=|BC_{p}\times A|-|A\wr C_{p}|
\]
is well defined and is an additive $p$-derivation on $\mathcal{R}_{m}^{\square}$.
\end{enumerate}
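As a sanity check for (1), the additivity axiom for the Fermat quotient amounts to the identity
\[
\tilde{\delta}\left(x\right)+\tilde{\delta}\left(y\right)+\frac{x^{p}+y^{p}-\left(x+y\right)^{p}}{p}=\frac{\left(x+y\right)-\left(x+y\right)^{p}}{p}=\tilde{\delta}\left(x+y\right).
\]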
\begin{defn}
For every $x\in\bb Q$, we denote by $v_{p}\left(x\right)\in\bb Z\cup\left\{ \infty\right\} $
the $p$-adic valuation of $x$.
\end{defn}
The fundamental property of the Fermat quotient is
\begin{lem}
\label{lem:Delta_Valuation}For every $x\in\bb Q$, if $0<v_{p}\left(x\right)<\infty$,
then
\[
v_{p}\left(\tilde{\delta}\left(x\right)\right)=v_{p}\left(x\right)-1.
\]
\end{lem}
\begin{proof}
Since $v_{p}\left(x\right)>0$, we have
\[
v_{p}\left(x^{p}\right)=pv_{p}\left(x\right)>v_{p}\left(x\right).
\]
Thus,
\[
v_{p}\left(\frac{x-x^{p}}{p}\right)=v_{p}\left(x-x^{p}\right)-1=v_{p}\left(x\right)-1.
\]
\end{proof}
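For example, for $x=p^{d}$ with $d\ge1$, we have
\[
\tilde{\delta}\left(p^{d}\right)=\frac{p^{d}-p^{dp}}{p}=p^{d-1}\left(1-p^{d\left(p-1\right)}\right),
\]
and hence $v_{p}\left(\tilde{\delta}\left(p^{d}\right)\right)=d-1$. This special case will be used repeatedly below.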
\begin{defn}
Let $R$ be a commutative ring. Let $\phi_{0}\colon\bb Z\to R$ be
the unique ring homomorphism and let $S_{R}$ be the set of primes
$p$, such that $\phi_{0}\left(p\right)\in R^{\times}$. We denote
\[
\bb Q_{R}=\bb Z[S_{R}^{-1}]\ss\bb Q
\]
and $\phi\colon\bb Q_{R}\to R$, the unique extension of $\phi_{0}$.
We call an element $x\in R$ \emph{rational} if it is in the image
of $\phi$. By \exaref{Delta_Canonical}(1), $\left(\bb Q_{R},\tilde{\delta}\right)$
is a semi-$\delta$-ring.
\end{defn}
The following elementary lemma will have several useful consequences.
\begin{lem}
\label{lem:Delta_Module}Let $\left(R,\delta\right)$ be a semi-$\delta$-ring
and let $\tilde{\delta}$ denote the Fermat quotient on $\bb Q_{R}$.
For all $t\in\bb Q_{R}$ and $x\in R$, we have
\[
\delta\left(tx\right)=t\delta\left(x\right)+\tilde{\delta}\left(t\right)x^{p}.
\]
\end{lem}
\begin{proof}
Fix $x\in R$ and consider the function $\varphi\colon\bb Q_{R}\to R$
given by
\[
\varphi\left(t\right)=\delta\left(tx\right)-\tilde{\delta}\left(t\right)x^{p}.
\]
Since
\[
\delta\left(tx+sx\right)=\delta\left(tx\right)+\delta\left(sx\right)+\frac{\left(tx\right)^{p}+\left(sx\right)^{p}-\left(tx+sx\right)^{p}}{p}=
\]
\[
\delta\left(tx\right)+\delta\left(sx\right)+\left(\tilde{\delta}\left(t+s\right)-\tilde{\delta}\left(t\right)-\tilde{\delta}\left(s\right)\right)x^{p}=\varphi\left(t\right)+\varphi\left(s\right)+\tilde{\delta}\left(t+s\right)x^{p},
\]
we get
\[
\varphi\left(t+s\right)=\delta\left(tx+sx\right)-\tilde{\delta}\left(t+s\right)x^{p}=\varphi\left(t\right)+\varphi\left(s\right).
\]
Hence, $\varphi$ is additive and $\varphi\left(1\right)=\delta\left(x\right)$.
Since $\bb Q_{R}$ is a localization of $\bb Z$, $\varphi$ is a
map of $\bb Q_{R}$-modules and we deduce that $\varphi\left(t\right)=t\delta\left(x\right)$
for all $t\in\bb Q_{R}$.
\end{proof}
\subsubsection{$p$-Local Rings}
In the case where $R$ is a \emph{$p$-local} commutative ring, which
is the case we are mainly interested in, the existence of an additive
$p$-derivation on $R$ has several interesting implications.
\begin{prop}
\label{prop:Delta_Torsion_Nilpotent}Let $\left(R,\delta\right)$
be a $p$-local semi-$\delta$-ring. If $x\in R$ is torsion, then
$x$ is nilpotent.
\end{prop}
\begin{proof}
Since $R$ is $p$-local, if $x$ is torsion, then there is $d\in\bb N$,
such that $p^{d}x=0$. By \lemref{Delta_Module}, we have
\[
0=\delta\left(0\right)=\delta\left(p^{d}x\right)=p^{d}\delta\left(x\right)+\tilde{\delta}\left(p^{d}\right)x^{p}.
\]
Multiplying by $x$, we obtain $\tilde{\delta}\left(p^{d}\right)x^{p+1}=0$.
By \lemref{Delta_Valuation}, $v_p\left(\tilde{\delta}\left(p^{d}\right)\right)=d-1$,
and since $R$ is $p$-local, we get $p^{d-1}x^{p+1}=0$. Iterating
this $d$ times we get $x^{\left(p+1\right)^{d}}=0$.
\end{proof}
\begin{prop}
\label{prop:Delta_Injective} Let $\left(R,\delta\right)$ be a non-zero
$p$-local semi-$\delta$-ring. The map $\phi\colon\bb Q_{R}\to R$
is an injective semi-$\delta$-ring homomorphism. In particular $\tilde{\delta}$
is the unique additive $p$-derivation on $\bb Q_{R}$.
\end{prop}
\begin{proof}
Applying \lemref{Delta_Module} to $x=1$, we see that $\phi\circ\tilde{\delta}=\delta\circ\phi$.
If $\phi$ is non-injective, then so is $\phi_{0}\colon\bb Z\to R$
and hence $1\in R$ is torsion. By \propref{Delta_Torsion_Nilpotent},
$1$ is nilpotent and hence $R=0$.
\end{proof}
\begin{rem}
For a non-zero $p$-local semi-$\delta$-ring $\left(R,\delta\right)$,
we abuse notation by identifying $\bb Q_{R}$ with the subset of rational
elements of $R$. There are two options:
\begin{enumerate}
\item If $p\in R^{\times}$, then $\bb Q_{R}=\bb Q\ss R$ and all non-zero rational elements are invertible.
\item If $p\notin R^{\times}$, then $\bb Q_{R}=\bb Z_{\left(p\right)}\ss R$,
and $x\in\bb Q_{R}$ is invertible if and only if $v_{p}\left(x\right)=0$.
\end{enumerate}
\end{rem}
\begin{prop}
\label{prop:Delta_Torsion_Ideal}Let $\left(R,\delta\right)$ be a
$p$-local semi-$\delta$-ring. The ideal $I_{\tor}\ss R$ of torsion
elements is closed under $\delta$.
\end{prop}
\begin{proof}
For $x\in I_{\tor}$, there is $d\in\bb N$, such that $p^{d}x=0$.
By \lemref{Delta_Module},
\[
0=\delta\left(p^{d+1}x\right)=p^{d+1}\delta\left(x\right)+\tilde{\delta}\left(p^{d+1}\right)x^{p}.
\]
By \lemref{Delta_Valuation}, $v_p\left(\tilde{\delta}\left(p^{d+1}\right)\right)=d$
and therefore $\tilde{\delta}\left(p^{d+1}\right)x^{p}=0$. We get
$p^{d+1}\delta\left(x\right)=0$ and hence $\delta\left(x\right)\in I_{\tor}$.
\end{proof}
\begin{defn}
For every commutative ring $R$, we define $I_{\tor}\ss R$ to be
the ideal of torsion elements, and $R^{\tf}=R/I_{\tor}$ to be the
torsion free ring obtained from $R$.
\end{defn}
The following proposition will allow us to ``ignore torsion'' when
dealing with questions of invertibility in $p$-local semi-$\delta$-rings.
First,
\begin{defn}
Given a ring homomorphism $f\colon R\to S$, we say that $f$ \emph{detects
invertibility} if for every $x\in R$, if $f\left(x\right)$ is invertible,
then $x$ is invertible.
\end{defn}
\begin{prop}
\label{prop:Delta_Torsion_Free}Let $\left(R,\delta\right)$ be a
$p$-local semi-$\delta$-ring. There is a unique additive $p$-derivation
$\delta$ on $R^{\tf}$, such that the quotient map $g\colon R\onto R^{\tf}$
is a homomorphism of semi-$\delta$-rings. In addition, $g$ detects
invertibility.
\end{prop}
\begin{proof}
Let $x\in R$ and $y\in I_{\tor}$. We have
\[
\delta\left(x+y\right)-\delta\left(x\right)=\delta\left(y\right)+\left(\frac{x^{p}+y^{p}-\left(x+y\right)^{p}}{p}\right)\in I_{\tor}
\]
since $\delta\left(y\right)\in I_{\tor}$ by \propref{Delta_Torsion_Ideal}
and the expression in parentheses is a multiple of $y$. Thus,
\[
\delta\left(x+I_{\tor}\right):=\delta\left(x\right)+I_{\tor}
\]
is a well defined function on $R^{\tf}$. The operation $\delta$
is an additive $p$-derivation and makes $g$ a homomorphism of semi-$\delta$-rings.
The operation $\delta$ is unique by the surjectivity of $g$. For
the second claim, the kernel of $g$ consists of nilpotent elements
by \propref{Delta_Torsion_Nilpotent} and hence $g$ detects invertibility.
\end{proof}
\subsection{The Alpha Operation}
Let $\mathcal{C}$ be a $0$-semiadditively symmetric monoidal $\infty$-category
and let
\[
X\in\cocalg\left(\mathcal{C}\right),\qquad Y\in\calg\left(\mathcal{C}\right).
\]
Fix a prime $p$. The set
\[
\hom_{h\mathcal{C}}\left(X,Y\right)=\pi_{0}\map_{\mathcal{C}}\left(X,Y\right)
\]
has a structure of a \emph{commutative rig} (i.e. like a ring, but
without additive inverses). Assuming further that $\mathcal{C}$ is
$1$-semiadditively symmetric monoidal, we construct an operation
$\alpha$ (which depends on $p$) on $\hom_{h\mathcal{C}}\left(X,Y\right)$
and study its properties and interaction with the rig structure.
Throughout the section we denote
\[
\pt\oto eBC_{p}\oto r\pt.
\]
\subsubsection{Definition and Naturality}
The $\bb E_{\infty}$-coalgebra and $\bb E_{\infty}$-algebra structures,
on $X$ and $Y$ respectively, provide symmetric comultiplication
and multiplication maps:
\[
\overline{t}_{X}\colon X\to\left(X^{\otimes p}\right)^{hC_{p}}=r_{*}\Theta^{p}\left(X\right)
\]
\[
\overline{m}_{Y}\colon r_{!}\Theta^{p}\left(Y\right)=\left(Y^{\otimes p}\right)_{hC_{p}}\to Y.
\]
These maps have mates
\[
t_{X}\colon r^{*}X\to\Theta^{p}\left(X\right),\qquad m_{Y}\colon\Theta^{p}\left(Y\right)\to r^{*}Y,
\]
such that
\[
e^{*}t_{X}\colon X=e^{*}r^{*}X\to e^{*}\Theta^{p}\left(X\right)=X^{\otimes p}
\]
\[
e^{*}m_{Y}\colon Y^{\otimes p}=e^{*}\Theta^{p}\left(Y\right)\to e^{*}r^{*}Y=Y,
\]
are the ordinary comultiplication and multiplication maps.
\begin{defn}
\label{def:Alpha}Let $\mathcal{C}$ be a $1$-semiadditively symmetric
monoidal $\infty$-category and let
\[
X\in\cocalg\left(\mathcal{C}\right),\qquad Y\in\calg\left(\mathcal{C}\right).
\]
\begin{enumerate}
\item Given $g\colon\Theta^{p}\left(X\right)\to\Theta^{p}\left(Y\right)$,
we define $\overline{\alpha}\left(g\right)\colon X\to Y$ to be either
of the compositions in the commutative diagram
\[
\xymatrix@C=3pc{X\ar[r]^{\overline{t}_{X}\quad} & r_{*}\Theta^{p}\left(X\right)\ar[d]^{g}\ar[r]^{\nm_{r}^{-1}} & r_{!}\Theta^{p}\left(X\right)\ar[d]^{g}\\
& r_{*}\Theta^{p}\left(Y\right)\ar[r]^{\nm_{r}^{-1}} & r_{!}\Theta^{p}\left(Y\right)\ar[r]^{\quad\overline{m}_{Y}} & Y.
}
\]
\item Given $f\colon X\to Y$, we define $\alpha\left(f\right)=\overline{\alpha}\left(\Theta^{p}\left(f\right)\right)$.
\end{enumerate}
\end{defn}
\begin{rem}
\label{rem:H_Infty}In fact, the definition of $\alpha$ uses only
the $H_{\infty}$-algebra structure of $Y$ and the $H_{\infty}$-coalgebra
structure of $X$. Moreover, everything we state and prove in this
section about the properties of $\alpha$ holds when we replace $\bb E_{\infty}$
with $H_{\infty}$.
\end{rem}
\begin{lem}
\label{lem:Alpha_Bar_Additive}The map $\overline{\alpha}\colon \pi_0\map(\Theta^p X,\Theta^p Y) \to \pi_0\map(X,Y)$
is additive.
\end{lem}
\begin{proof}
Since $r_*$ is an additive functor, it induces an additive map
\[
\pi_0\map(\Theta^p X,\Theta^p Y) \to \pi_0\map(r_*\Theta^p X,r_*\Theta^p Y).
\]
The operation $\overline{\alpha}$ consists of the application of this map, followed by pre- and post-composition with fixed maps, both of which are additive operations in a $0$-semiadditive $\infty$-category.
\end{proof}
The operation $\alpha$ is natural with respect to (co)algebra homomorphisms
in the following sense.
\begin{lem}
\label{lem:Alpha_Naturality} Let $\mathcal{C}$ be a $1$-semiadditively
symmetric monoidal $\infty$-category and let
\[
X,X'\in\cocalg\left(\mathcal{C}\right),\qquad Y,Y'\in\calg\left(\mathcal{C}\right).
\]
Given maps $g\colon Y\to Y'$ and $h\colon X'\to X$ of commutative
algebras and coalgebras respectively, for every map $f\colon X\to Y$,
we have
\[
\alpha\left(g\circ f\circ h\right)=g\circ\alpha\left(f\right)\circ h\quad\in\hom_{h\mathcal{C}}\left(X',Y'\right).
\]
\end{lem}
\begin{proof}
Consider the diagram
\[
\xymatrix@C=3pc{X'\ar@{.>}[r]^{\overline{t}_{X'}\ }\ar@{.>}[d]^{h} & r_{*}\Theta^{p}X'\ar[r]^{\nm_{r}^{-1}}\ar[d]^{h}\ar@{.>}[dddr] & r_{!}\Theta^{p}X'\ar[d]^{h}\\
X\ar[r]^{\overline{t}_{X}\ }\ar@{.>}[drrr] & r_{*}\Theta^{p}X\ar[r]^{\nm_{r}^{-1}}\ar[d]^{f} & r_{!}\Theta^{p}X\ar[d]^{f}\\
& r_{*}\Theta^{p}Y\ar[r]^{\nm_{r}^{-1}}\ar[d]^{g} & r_{!}\Theta^{p}Y\ar[r]^{\ \overline{m}_{Y}}\ar[d]^{g} & Y\ar@{.>}[d]^{g}\\
& r_{*}\Theta^{p}Y'\ar[r]^{\nm_{r}^{-1}} & r_{!}\Theta^{p}Y'\ar@{.>}[r]^{\ \overline{m}_{Y'}} & Y'.
}
\]
The squares in the middle column commute by the naturality of the
norm map. The homotopy rendering the bottom right square commutative
is provided by the data that makes $g$ into a morphism of commutative
algebras and similarly for the upper left square and $h$. The composition
along one of the dotted paths is $\alpha\left(g\circ f\circ h\right)$,
while composition along the other dotted path is $g\circ\alpha\left(f\right)\circ h$,
which completes the proof.
\end{proof}
The operation $\alpha$ is also functorial in the following sense.
\begin{lem}
\label{lem:Alpha_Functoriality}Let $F\colon\mathcal{C}\to\mathcal{D}$
be a $1$-semiadditive symmetric monoidal functor between two $1$-semiadditively
symmetric monoidal $\infty$-categories, and let $X\in\cocalg\left(\mathcal{C}\right)$
and $Y\in\calg\left(\mathcal{C}\right)$. The induced map of commutative
rings
\[
F\colon\hom_{h\mathcal{C}}\left(X,Y\right)\to\hom_{h\mathcal{D}}\left(FX,FY\right),
\]
commutes with the operation $\alpha$.
\end{lem}
\begin{proof}
Given a map $g\colon X\to Y$, consider the following diagram:
\[
\xymatrix@C=3pc{ & \red{F}\left(r_{*}\Theta^{p}\left(X\right)\right)\ar[d]_{\wr}^{\beta_{*}}\ar[r]^{\nm_{r}^{-1}} & \red{F}\left(r_{!}\Theta^{p}\left(X\right)\right)\ar[r]^{g} & \red{F}\left(r_{!}\Theta^{p}\left(Y\right)\right)\ar[rd]^{\quad\overline{m}_{Y}}\\
\red{F}X\ar[ru]^{\overline{t}_{X}\quad}\ar[rd]_{\overline{t}_{FX}\quad} & r_{*}\red{F}\left(\Theta^{p}\left(X\right)\right)\ar@{-}[d]^{\wr}\ar[r]^{\nm_{r}^{-1}} & r_{!}\red{F}\left(\Theta^{p}\left(X\right)\right)\ar@{-}[d]^{\wr}\ar[u]_{\wr}^{\beta_{!}}\ar[r]^{g} & r_{!}\red{F}\left(\Theta^{p}\left(Y\right)\right)\ar@{-}[d]^{\wr}\ar[u]_{\wr}^{\beta_{!}} & \red{F}Y.\\
& r_{*}\Theta^{p}\left(\red{F}X\right)\ar[r]^{\nm_{r}^{-1}} & r_{!}\Theta^{p}\left(\red{F}X\right)\ar[r]^{g} & r_{!}\Theta^{p}\left(\red{F}Y\right)\ar[ru]_{\quad\overline{m}_{\red{F}Y}}
}
\]
The vertical isomorphisms in the bottom squares are defined by \lemref{Theta_Functoriality} and the squares commute by the interchange law. The top left square commutes by the
ambidexterity of the $\left(F,r\right)$-square (\thmref{Ambi_Functors})
and the top right square by naturality of the $\bc_{!}$ map. The
triangles commute by the definition of the commutative coalgebra (resp.\!
algebra) structure on $F\left(X\right)$ (resp.\! $F\left(Y\right)$).
Thus, the composition along the top path, which is $F\left(\alpha\left(g\right)\right)$,
is homotopic to the composition along the bottom path, which is $\alpha\left(F\left(g\right)\right)$.
\end{proof}
\subsubsection{Additivity of Alpha}
Our next goal is to understand the interaction of $\alpha$ with sums.
For this, we first need to describe the effect of $\overline{\alpha}$
on ``induced maps''. Recall the notation
\[
\pt\oto eBC_{p}\oto r\pt.
\]
\begin{lem}
\label{lem:Alpha_Induced}Let $\mathcal{C}$ be a $1$-semiadditively
symmetric monoidal $\infty$-category and let $X\in\cocalg\left(\mathcal{C}\right)$
and $Y\in\calg\left(\mathcal{C}\right)$. For every map
\[
h\colon X^{\otimes p}=e^{*}\Theta^{p}\left(X\right)\to e^{*}\Theta^{p}\left(Y\right)=Y^{\otimes p},
\]
the map $\overline{\alpha}\left(\int\limits _{e}h\right)$ is homotopic
to the composition
\[
X\oto{e^{*}t_{X}}X^{\otimes p}\oto hY^{\otimes p}\oto{e^{*}m_{Y}}Y.
\]
\end{lem}
\begin{proof}
Unwinding the definition of the integral, the map $\int\limits _{e}h$
is homotopic to the composition of the following maps
\[
\Theta^{p}\left(X\right)\oto{u_{*}^{e}}e_{*}e^{*}\Theta^{p}\left(X\right)\oto he_{*}e^{*}\Theta^{p}\left(Y\right)\oto{\nm_{e}^{-1}}e_{!}e^{*}\Theta^{p}\left(Y\right)\oto{c_{!}^{e}}\Theta^{p}\left(Y\right).
\]
Plugging this into the definition of $\overline{\alpha}$, we get
that $\overline{\alpha}\left(\int\limits _{e}h\right)$ equals the
composition along the top and then right path in the following diagram
\[
\xymatrix{X\ar[rrd]_{e^{*}t_{X}}\ar[r]^{\overline{t}_{X}\quad} & r_{*}\Theta^{p}\left(X\right)\ar[r]^{u_{*}^{e}\quad} & r_{*}e_{*}e^{*}\Theta^{p}\left(X\right)\ar@{=}[d]\ar[r]^{h} & r_{*}e_{*}e^{*}\Theta^{p}\left(Y\right)\ar@{=}[d]\ar[r]^{\nm_{e}^{-1}} & r_{*}e_{!}e^{*}\Theta^{p}\left(Y\right)\ar[d]^{\nm_{r}^{-1}}\ar[r]^{c_{!}^{e}} & r_{*}\Theta^{p}\left(Y\right)\ar[d]^{\nm_{r}^{-1}}\\
& & e^{*}\Theta^{p}\left(X\right)\ar[r]^{h} & e^{*}\Theta^{p}\left(Y\right)\ar[rrd]_{e^{*}m_{Y}}\ar@{=}[r] & r_{!}e_{!}e^{*}\Theta^{p}\left(Y\right)\ar[r]^{c_{!}^{e}} & r_{!}\Theta^{p}\left(Y\right)\ar[d]^{\overline{m}_{Y}}\\
& & & & & Y.
}
\]
We denote this diagram by $\left(*\right)$. The left square commutes
for trivial reasons, the right square by the interchange law and the
middle by
\[
\nm_{r}^{-1}\circ\nm_{e}^{-1}=\left(\nm_{e}\circ\nm_{r}\right)^{-1}=\left(\nm_{re}\right)^{-1}=\Id.
\]
To see that the left triangle commutes, consider the diagram
\[
\xymatrix@C=3pc{X\ar[rd]_{u_{*}^{r}}\ar[r]^{\overline{t}_{X}\quad} & r_{*}\Theta^{p}\left(X\right)\ar[r]^{u_{*}^{e}\quad} & r_{*}e_{*}e^{*}\Theta^{p}\left(X\right)\\
& r_{*}r^{*}X\ar[u]_{t_{X}}\ar[r]^{u_{*}^{e}} & r_{*}e_{*}e^{*}r^{*}X\ar@{=}[r]\ar[u]_{t_{X}} & X\ar[lu]_-{e^{*}t_{X}}.
}
\]
The square commutes by naturality, and the left triangle by the definition
of mates. Note that the composition along the bottom path is the unit
of the composed adjunction
\[
\Id=e^{*}r^{*}\dashv r_{*}e_{*}=\Id,
\]
and hence is the identity map. It follows that the left triangle in
$\left(*\right)$ is commutative. The proof that the right triangle
in $\left(*\right)$ commutes is completely analogous. Thus, $\left(*\right)$
is commutative and $\overline{\alpha}\left(\int\limits _{e}h\right)$
equals the composition along the bottom diagonal path in $\left(*\right)$,
which completes the proof.
\end{proof}
The main property of $\alpha$ is that it satisfies the following
``addition formula''.
\begin{prop}
\label{prop:Alpha_Additvity}Let $\mathcal{C}$ be a $1$-semiadditively
symmetric monoidal $\infty$-category and let
\[
X\in\cocalg\left(\mathcal{C}\right),\quad Y\in\calg\left(\mathcal{C}\right).
\]
For every $f,g\colon X\to Y$, we have
\[
\alpha\left(f+g\right)=\alpha\left(f\right)+\alpha\left(g\right)+\frac{\left(f+g\right)^{p}-f^{p}-g^{p}}{p}\quad\in\hom_{h\mathcal{C}}\left(X,Y\right)
\]
(as in \remref{Integer_Coeff_Polynom}, this expression does not actually
involve division by $p$).
\end{prop}
\begin{proof}
Since $\overline{\alpha}$ is additive (see \lemref{Alpha_Bar_Additive}),
we get by \propref{Theta_Additiive},
\[
\alpha\left(f+g\right)=\overline{\alpha}\left(\Theta^{p}\left(f+g\right)\right)=\overline{\alpha}\left(\Theta^{p}\left(f\right)+\Theta^{p}\left(g\right)+\sum_{\overline{w}\in\overline{S}}\left(\int\limits _{e}w\left(f,g\right)\right)\right)
\]
\[
=\alpha\left(f\right)+\alpha\left(g\right)+\sum_{\overline{w}\in\overline{S}}\overline{\alpha}\left(\int\limits _{e}w\left(f,g\right)\right).
\]
Now, by \lemref{Alpha_Induced}, the map $\overline{\alpha}\left(\int\limits _{e}w\left(f,g\right)\right)$
is homotopic to the composition
\[
X\oto{e^{*}t_{X}}X^{\otimes p}\oto{w\left(f,g\right)}Y^{\otimes p}\oto{e^{*}m_{Y}}Y.
\]
This is by definition $f^{w_{x}}g^{w_{y}}$, where $w_{x}$ and $w_{y}$
denote the number of occurrences of $x$ and $y$ in $w$ respectively,
and this completes the proof.
\end{proof}
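As noted in \remref{Integer_Coeff_Polynom}, the expression $\frac{(f+g)^{p}-f^{p}-g^{p}}{p}$ has integer coefficients; the divisibility behind this is the classical fact that $p\mid\binom{p}{i}$ for $0<i<p$. The following Python sketch (ours, not part of the text) verifies this for small primes and lists the resulting integer coefficients:

```python
from math import comb

def middle_coefficients(p):
    """Coefficients of ((X+Y)**p - X**p - Y**p) / p, i.e. comb(p, i) // p for 0 < i < p."""
    return [comb(p, i) // p for i in range(1, p)]

for p in (2, 3, 5, 7, 11):
    # p divides comb(p, i) for 0 < i < p, so the division above is exact
    assert all(comb(p, i) % p == 0 for i in range(1, p))
    print(p, middle_coefficients(p))
```

For instance, for $p=3$ the coefficients are $[1,1]$, reflecting $\frac{(f+g)^{3}-f^{3}-g^{3}}{3}=f^{2}g+fg^{2}$.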
\subsubsection{Alpha and the Unit}
We shall now apply the above discussion of the operation $\alpha$
to the special case where $X=Y=\one$ is the unit of a symmetric monoidal
$\infty$-category $\mathcal{C}$. The unit $\one\in\mathcal{C}$
has a unique $\mathbb{E}_\infty$-algebra structure and this structure makes
it initial in $\text{CAlg}\left(\mathcal{C}\right)$. The same argument
applied to $\mathcal{C}^{op}$ shows that $\one$ has also a unique
$\mathbb{E}_\infty$-coalgebra structure and it is terminal with respect to
it.
\begin{defn}
Let $\left(\mathcal{C},\otimes,\one\right)$ be a symmetric monoidal
$\infty$-category. We denote
\[
\mathcal{R}_{\mathcal{C}}=\hom_{h\mathcal{C}}\left(\one,\one\right)
\]
as a commutative monoid. If $\mathcal{C}$ is $0$-semiadditive, then
$\mathcal{R}_{\mathcal{C}}$ is naturally a commutative \emph{rig} and if $\mathcal{C}$ is
stable, then it is a commutative \emph{ring}. Given a symmetric monoidal
functor $F\colon\mathcal{C}\to\mathcal{D},$ the induced map $\varphi:R_{\mathcal{C}}\to R_{\mathcal{D}}$
is a monoid homomorphism. It is also a rig (resp. ring) homomorphism,
when $\mathcal{C}$ and $\mathcal{D}$ are 0-semiadditive (resp. stable)
and $F$ is a $0$-semiadditive functor.
\end{defn}
The goal of this section is to study the operation $\alpha$ on $\mathcal{R}_{\mathcal{C}}$.
We begin with a few preliminaries. Recall the notation
\[
\pt\oto eBC_{p}\oto r\pt.
\]
\begin{lem}
Let $\left(\mathcal{C},\otimes,\one\right)$ be a symmetric monoidal
$\infty$-category. The action of $C_{p}$ on $\one^{\otimes p}\simeq\one$
is trivial. Namely, $\Theta^{p}\left(\one\right)=r^{*}\one$.
\end{lem}
\begin{proof}
We shall show more generally that the action of $\Sigma_{k}$ on $\one^{\otimes k}\simeq\one$
is trivial. The forgetful functor $U\colon\calg\left(\mathcal{C}\right)\to\mathcal{C}$
is symmetric monoidal with respect to the coproduct on $\calg\left(\mathcal{C}\right)$
and $\otimes$ on $\mathcal{C}$ (by \cite[Example 3.2.4.4]{ha} and
\cite[Proposition 3.2.4.7]{ha}). In particular, for every commutative algebra
$A$, the action of $\Sigma_{k}$ on $U\left(A\right)^{\otimes k}$
is induced by the action of $\Sigma_{k}$ on $A^{\sqcup k}$. Since
$\one$ has a canonical commutative algebra structure, and as an object of
$\calg\left(\mathcal{C}\right)$ it is \emph{initial}
(\cite[Corollary 3.2.1.9]{ha}), any $\Sigma_{k}$ action on it as
a commutative algebra is trivial.
\end{proof}
It follows by the above that $r_{!}\Theta^{p}\left(\one\right)\simeq r_{!}r^{*}\one$
and $r_{*}\Theta^{p}\left(\one\right)\simeq r_{*}r^{*}\one$.
\begin{lem}
\label{lem:Co_Algebra_Unit}Let $\left(\mathcal{C},\otimes,\one\right)$
be a symmetric monoidal $\infty$-category. The maps
\[
\overline{t}_{\one}\colon\one\to r_{*}\Theta^{p}\left(\one\right),\quad\overline{m}_{\one}\colon r_{!}\Theta^{p}\left(\one\right)\to\one,
\]
induced from the commutative algebra and coalgebra structures, are
equivalent to the unit and counit maps (respectively)
\[
u_{*}\colon\one\to r_{*}r^{*}\one,\qquad c_{!}\colon r_{!}r^{*}\one\to\one.
\]
\end{lem}
\begin{proof}
This is equivalent to showing that the mate (in both cases) is the
identity map $r^{*}\one\to r^{*}\one$. The algebra structure on $\one\in\mathcal{C}$
is induced from the algebra structure on $\underline{\one}\in\calg\left(\mathcal{C}\right)$,
where $\calg\left(\mathcal{C}\right)$ is endowed with the coCartesian
symmetric monoidal structure and in which $\underline{\one}$ is initial
(\cite[Corollary 3.2.1.9]{ha}). Now, the object $r^{*}\underline{\one}$
is initial in $\fun\left(BC_{p},\calg\left(\mathcal{C}\right)\right)$,
and therefore the only map $r^{*}\underline{\one}\to r^{*}\underline{\one}$
is the identity. A similar argument applies for the comultiplication
map.
\end{proof}
As a consequence, we can describe the effect of $\alpha$ on any element
of $R_{\mathcal{C}}$ using the integral operation.
\begin{prop}
\label{prop:Alpha_One}Let $\left(\mathcal{C},\otimes,\one\right)$
be a $1$-semiadditively symmetric monoidal $\infty$-category. For
every $f\in R_{\mathcal{C}}$, we have
\[
\alpha\left(f\right)=\int\limits _{BC_{p}}\Theta^{p}\left(f\right)\quad\in\mathcal{R}_{\mathcal{C}}.
\]
\end{prop}
\begin{proof}
Unwinding the definition of $\overline{\alpha}$ (\defref{Alpha})
in this case and using \lemref{Co_Algebra_Unit}, we get $\overline{\alpha}\left(-\right)=\int\limits _{BC_{p}}\left(-\right)$.
Hence,
\[
\alpha\left(f\right)=\overline{\alpha}\left(\Theta^{p}\left(f\right)\right)=
\int\limits _{BC_{p}}\Theta^{p}\left(f\right).
\]
\end{proof}
In particular, we get an explicit formula for the operation $\alpha$
on elements of the form $|A|\in\mathcal{R}_{\mathcal{C}}$.
\begin{thm}
\label{thm:Alpha_Box} Let $\mathcal{C}$ be an $m$-semiadditively
symmetric monoidal $\infty$-category for $m\ge1$. For every $m$-finite
space $A$, we have
\[
\alpha\left(|A|\right)=|A\wr C_{p}|\quad\in\mathcal{R}_{\mathcal{C}}.
\]
\end{thm}
\begin{proof}
Consider the maps
\[
q\colon A\to\pt,\quad\pi=q\wr C_{p}\colon A\wr C_{p}\to BC_{p},\quad r\colon BC_{p}\to\pt.
\]
By definition of $\alpha$, \propref{Alpha_One}, the definition of
$|A|$, the ambidexterity of the $\Theta^{p}$-square (\thmref{Theta_Integral})
and Fubini's Theorem (\propref{Fubini}) (in that order) we have
\[
\alpha\left(|A|\right)=\overline{\alpha}\left(\Theta^{p}\left(|A|\right)\right)=\int\limits _{r}\Theta^{p}\left(|A|\right)=\int\limits _{r}\Theta^{p}\left(\int\limits _{q}\Id_{\one}\right)=\int\limits _{r}\int\limits _{\pi}\Theta^{p}\left(\Id_{\one}\right)=\int\limits _{r\pi}\Id_{\one}=|A\wr C_{p}|.
\]
\end{proof}
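To make the formula concrete, here is a sanity check (our illustration, not part of the argument, and valid under an extra assumption): suppose $\mathcal{C}$ is $\bb Q$-linear and higher semiadditive, so that $|A|$ is computed by the Baez--Dolan homotopy cardinality, which is multiplicative in fiber sequences of $\pi$-finite spaces with connected base. Applying this to the fiber sequence $A^{p}\to A\wr C_{p}\to BC_{p}$ and using $|BC_{p}|=1/p$ gives

```latex
% Our illustration, in a Q-linear setting where |-| is homotopy cardinality:
|A \wr C_{p}| \;=\; |A^{p}|\cdot|BC_{p}| \;=\; \frac{|A|^{p}}{p},
\qquad\text{hence}\qquad
\alpha\left(|A|\right) \;=\; \frac{|A|^{p}}{p}.
```

In particular, in this setting $|BC_{p}|\,|A|-\alpha\left(|A|\right)=\left(|A|-|A|^{p}\right)/p$ is the classical Fermat quotient, consistent with the additive $p$-derivation constructed in the next subsection.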
As a consequence, we can identify the action of $\alpha$ on the identity
element of the rig $\hom_{h\mathcal{C}}\left(X,Y\right)$, for any
$X\in\cocalg\left(\mathcal{C}\right)$ and $Y\in\calg\left(\mathcal{C}\right)$.
\begin{lem}
\label{lem:Alpha_Normalization}Let $\mathcal{C}$ be a $1$-semiadditively
symmetric monoidal $\infty$-category and let
\[
X\in\cocalg\left(\mathcal{C}\right),\quad Y\in\calg\left(\mathcal{C}\right).
\]
Denoting $\mathcal{R}=\hom_{h\mathcal{C}}\left(X,Y\right)$, we have
\[
\alpha\left(1_{\mathcal{R}}\right)=|BC_{p}|\circ 1_{\mathcal{R}}\quad\in\mathcal{R},
\]
where $1_{\mathcal{R}}\in\mathcal{R}$ is the multiplicative unit element.
\end{lem}
\begin{proof}
The map $1_{\mathcal{R}}\colon X\to Y$ is the composition of the canonical
maps $X\oto x\one\oto yY$, encoding the counit and unit of the coalgebra
and algebra structures of $X$ and $Y$ respectively. The maps $x$
and $y$ are naturally maps of commutative coalgebras and commutative
algebras respectively. By \lemref{Alpha_Naturality}, we have
\[
\alpha\left(1_{\mathcal{R}}\right)=\alpha\left(y\circ1\circ x\right)=y\circ\alpha\left(1\right)\circ x,
\]
where $1\in\mathcal{R}_{\mathcal{C}}$ is the multiplicative unit element. Observing that $1=|\pt|$, using \thmref{Alpha_Box}, and commuting $|BC_{p}|$ past $y$ (which is legitimate, as $|BC_{p}|$ is given by a natural transformation), we get
\[
y\circ\alpha\left(1\right)\circ x=y\circ\alpha\left(|\pt|\right)\circ x=y\circ|BC_{p}|\circ x=|BC_{p}|\circ y\circ x=|BC_{p}|\circ1_{\mathcal{R}}.
\]
\end{proof}
\subsection{Higher Semiadditivity and Stability}
In this section, we specialize to the \emph{stable} case. Using the
operation $\alpha$ and stability, we construct additive $p$-derivations
and use their properties to formulate a general detection principle
for higher semiadditivity.
\subsubsection{Stability and Additive $p$-Derivations}
\begin{defn}
\label{def:Delta_Semi_Add}Let $\mathcal{C}$ be a \emph{stable} $1$-semiadditively
symmetric monoidal $\infty$-category with
\[
X\in\cocalg\left(\mathcal{C}\right),\quad Y\in\calg\left(\mathcal{C}\right),
\]
and so $R=\hom_{h\mathcal{C}}\left(X,Y\right)$ is a commutative \emph{ring}.
We define an operation $\delta\colon R\to R$ by
\[
\delta\left(f\right)=|BC_{p}|f-\alpha\left(f\right),
\]
for every $f\in R$. In particular, this applies to $\mathcal{R}_{\mathcal{C}}=\hom_{h\mathcal{C}}\left(\one,\one\right)$.
\end{defn}
\begin{thm}
\label{thm:Delta_Semi_Add}Let $\mathcal{C}$ be a stable $1$-semiadditively
symmetric monoidal $\infty$-category with
\[
X\in\cocalg\left(\mathcal{C}\right),\quad Y\in\calg\left(\mathcal{C}\right).
\]
The operation $\delta$ from \defref{Delta_Semi_Add} is an additive
$p$-derivation on $R=\hom_{h\mathcal{C}}\left(X,Y\right)$.
\end{thm}
\begin{proof}
The additivity condition follows from \propref{Alpha_Additvity} and
the normalization follows from \lemref{Alpha_Normalization}.
\end{proof}
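A minimal model case (ours, standing in for the abstract construction): on $\bb Z$, the classical Fermat quotient $\delta(x)=(x-x^{p})/p$ is an additive $p$-derivation. The Python sketch below checks the addition formula it inherits from \propref{Alpha_Additvity}, namely $\delta(x+y)=\delta(x)+\delta(y)+\frac{x^{p}+y^{p}-(x+y)^{p}}{p}$, together with $\delta(1)=0$:

```python
from itertools import product

def fermat_delta(x, p):
    """Classical Fermat quotient on the integers: delta(x) = (x - x**p) / p.
    The division is exact by Fermat's little theorem."""
    assert (x - x**p) % p == 0
    return (x - x**p) // p

for p in (2, 3, 5):
    for x, y in product(range(-6, 7), repeat=2):
        lhs = fermat_delta(x + y, p)
        # the correction term is an exact integer division (freshman's dream mod p)
        rhs = fermat_delta(x, p) + fermat_delta(y, p) + (x**p + y**p - (x + y)**p) // p
        assert lhs == rhs
    assert fermat_delta(1, p) == 0  # normalization
print("addition formula verified")
```

This is only a sanity check in the simplest semi-$\delta$-ring; the content of the theorem is that $\delta$ exists on the abstract rings $\hom_{h\mathcal{C}}\left(X,Y\right)$.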
The additive $p$-derivation of \thmref{Delta_Semi_Add} is natural
in the following sense.
\begin{prop}
\label{prop:Delta_Semi_Add_Naturality} Let $\mathcal{C}$ be a stable
$1$-semiadditively symmetric monoidal $\infty$-category with
\[
X,X'\in\cocalg\left(\mathcal{C}\right),\qquad Y,Y'\in\calg\left(\mathcal{C}\right).
\]
Given maps $g\colon Y\to Y'$ and $h\colon X'\to X$ of commutative
algebras and coalgebras respectively, the function
\[
g\circ(-)\circ h\colon\hom_{h\mathcal{C}}\left(X,Y\right)\to\hom_{h\mathcal{C}}\left(X',Y'\right)
\]
is a homomorphism of semi-$\delta$-rings.
\end{prop}
\begin{proof}
This follows from \lemref{Alpha_Naturality} and naturality of $|BC_{p}|$.
\end{proof}
The additive $p$-derivation of \thmref{Delta_Semi_Add} is also functorial
in the following sense.
\begin{prop}
\label{prop:Delta_Semi_Add_Functoriality}Let $F\colon\mathcal{C}\to\mathcal{D}$
be a symmetric monoidal $1$-semiadditive functor between \emph{stable}
$1$-semiadditively symmetric monoidal $\infty$-categories. Given
\[
X\in\cocalg\left(\mathcal{C}\right),\quad Y\in\calg\left(\mathcal{C}\right),
\]
the map
\[
F\colon\hom_{h\mathcal{C}}\left(X,Y\right)\to\hom_{h\mathcal{D}}\left(FX,FY\right),
\]
is a homomorphism of semi-$\delta$-rings.
\end{prop}
\begin{proof}
By \lemref{Alpha_Functoriality}, $F$ preserves $\alpha$,
and by \corref{Integral_Functor}, $F$ preserves multiplication
by $|BC_{p}|$. Combined with ordinary additivity, it follows
that $F$ preserves $\delta$.
\end{proof}
The theory of $p$-local semi-$\delta$-rings has the following consequence
for stable, $p$-local, $1$-semiadditive, symmetric monoidal $\infty$-categories.
\begin{cor}
\label{cor:Semiadd_Torsion_Nilpotent}Let $\mathcal{C}$ be a stable,
$p$-local, $1$-semiadditively symmetric monoidal $\infty$-category
with
\[
X\in\cocalg\left(\mathcal{C}\right),\qquad Y\in\calg\left(\mathcal{C}\right),
\]
and consider the commutative ring $R=\hom_{h\mathcal{C}}\left(X,Y\right)$.
Every torsion element of $R$ is nilpotent. In particular, if $\bb Q\otimes R=0$,
then $R=0$.
\end{cor}
\begin{proof}
The commutative ring $R$ is $p$-local and admits an additive $p$-derivation
by \thmref{Delta_Semi_Add}, and so the result follows by \propref{Delta_Torsion_Nilpotent}.
The last claim follows by considering the element $1\in R$.
\end{proof}
\subsubsection{Detection Principle for Higher Semiadditivity}
We now formulate the main detection principle for $m$-semiadditivity
for symmetric monoidal, stable, $p$-local $\infty$-categories. For
convenience, we formulate these results for presentable $\infty$-categories
and colimit preserving functors, though what we actually use is only
the existence and preservation of certain limits and colimits.
\begin{lem}
\label{lem:Amenable_Semi_Add} Let $m\ge 1$ and let $\mathcal{C}$ be an $m$-semiadditive presentably symmetric monoidal, stable, $p$-local $\infty$-category.
If there exists a connected $m$-finite $p$-space $A$, such that
$\pi_{m}\left(A\right)\neq0$ and $|A|_{\one}$ is an isomorphism,
then $\mathcal{C}$ is $\left(m+1\right)$-semiadditive.
\end{lem}
\begin{proof}
Since $m\ge1$, the space $B^{m+1}C_{p}$ is connected. Since the $\infty$-category $\mathcal{C}$ is $m$-semiadditive,
the map $q\colon B^{m+1}C_{p}\to\pt$ is weakly $\mathcal{C}$-ambidextrous.
By \cite[Corollary 4.4.23]{HopkinsLurie}, it suffices to show that
$q$ is $\mathcal{C}$-ambidextrous. Since $m$-finite $p$-spaces
are nilpotent, and we assumed that $\pi_m(A)\neq0$, there is a fiber sequence $A\to B\oto{\pi}B^{m+1}C_{p}$
with $B$ an $m$-finite space. Since $|A|_{\one}$ is
invertible, by \lemref{Box_Unit}(2), $A$ is $\mathcal{C}$-amenable.
Hence, by \propref{Amenable_Space}, the space $B^{m+1}C_{p}$ is
$\mathcal{C}$-ambidextrous.
\end{proof}
We can exploit the extra structure given by the additive
$p$-derivation on $\mathcal{R}_{\mathcal{C}}$ to find a space $A$
as in \lemref{Amenable_Semi_Add}.
\begin{prop}
\label{prop:Bootstrap_Algebraic}Let $m\ge 1$ and let $\mathcal{C}$ be an $m$-semiadditive presentably symmetric monoidal, stable, $p$-local $\infty$-category.
Let $h\colon\mathcal{R}_{\mathcal{C}}\to S$
be a semi-$\delta$-ring homomorphism that detects invertibility,
and such that $h\left(|BC_{p}|\right),h\left(|B^{m}C_{p}|\right)\in S$
are rational and non-zero. Then $\mathcal{C}$ is $\left(m+1\right)$-semiadditive.
\end{prop}
\begin{proof}
A space $A$ will be called \emph{$h$-good} if
\begin{itemize}
\item [(a)] $A$ is a connected $m$-finite $p$-space, such that $\pi_{m}\left(A\right)\neq0$.
\item [(b)] $h\left(|A|\right)$ is rational.
\end{itemize}
By \lemref{Amenable_Semi_Add}, it is enough to show that there exists
an $h$-good space $A$, such that $|A|$ is invertible
in $\mathcal{R}_{\mathcal{C}}$. Since $h$ detects invertibility, it suffices
to find such $A$ with $h\left(|A|\right)$ invertible
in $S$. By assumption, $h\left(|B^{m}C_{p}|\right)$ is
rational and therefore $B^{m}C_{p}$ is $h$-good. If $p\in S^{\times}$,
then all non-zero rational elements in $S$ are invertible and we
are done by the assumption that $h\left(|B^{m}C_{p}|\right)\neq0$.
Hence, we assume that $p\notin S^{\times}$. In this case, a rational
element $x\in S$ is invertible if and only if $v_{p}\left(x\right)=0$.
Denoting $v\left(A\right)=v_{p}\left(h\left(|A|\right)\right)$,
it is enough to show that there exists an $h$-good space $A$ with
$v\left(A\right)=0$.
Since $h\left(|B^{m}C_{p}|\right)$ is non-zero and $p$ is not invertible, we get
$0\le v\left(B^{m}C_{p}\right)<\infty$. It therefore
suffices to show that given an $h$-good space $A$ with $0<v\left(A\right)<\infty$,
there exists an $h$-good space $A'$ with
$v\left(A'\right) = v\left(A\right)-1$.
For this, we exploit the operation $\delta$. We compute using \thmref{Alpha_Box} and \corref{Distributivity}:
\[
\delta\left(|A|\right)=|BC_{p}||A|-\alpha\left(|A|\right)=|BC_{p}||A|-|A\wr C_{p}|=|BC_{p}\times A|-|A\wr C_{p}|.
\]
Thus,
\[
\delta\left(h\left(|A|\right)\right)=h\left(\delta\left(|A|\right)\right)=h\left(|BC_{p}\times A|\right)-h\left(|A\wr C_{p}|\right).
\]
Since by assumption $h\left(|BC_{p}|\right)$ is rational,
then by \corref{Distributivity} we get that
\[
h\left(|BC_{p}\times A|\right)=h\left(|BC_{p}|\right)h\left(|A|\right)
\]
is also rational, and moreover, as $p \not\in S^{\times}$, we obtain $v(A)\le v\left(BC_p \times A\right)$. Furthermore, since $h\left(|A|\right)$ is rational, by \propref{Delta_Injective},
the same is true for $\delta\left(h\left(|A|\right)\right)$.
Therefore,
\[
h\left(|A\wr C_{p}|\right)=h\left(|BC_{p}\times A|\right)-h\left(\delta\left(|A|\right)\right)
\]
is also rational.
It is clear that $A\wr C_{p}$ satisfies (a), and so is $h$-good. Since $0<v\left(A\right)<\infty$, by \lemref{Delta_Valuation}, we get
$v_{p}\left(\delta\left(h\left(|A|\right)\right)\right)=v\left(A\right)-1$.
Thus, $v\left(A\wr C_{p}\right) = v\left(A\right)-1$ and this completes the
proof.
\end{proof}
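The key valuation step can be checked numerically in the model case of the Fermat quotient $\delta(x)=(x-x^{p})/p$ on $\bb Z$ (our sketch, standing in for the abstract \lemref{Delta_Valuation}): if $0<v_{p}(x)<\infty$, then $v_{p}\left(\delta(x)\right)=v_{p}(x)-1$.

```python
def v_p(x, p):
    """p-adic valuation of a nonzero integer."""
    assert x != 0
    k = 0
    while x % p == 0:
        x //= p
        k += 1
    return k

def fermat_delta(x, p):
    # delta(x) = (x - x**p) / p; exact by Fermat's little theorem
    return (x - x**p) // p

for p in (2, 3, 5):
    for k in (1, 2, 3):          # positive, finite valuation
        for u in (1, 2, 4, 7):   # candidate p-adic units
            if u % p == 0:
                continue
            x = u * p**k
            # delta(x) = u*p**(k-1) - u**p * p**(k*p - 1), and k*p - 1 > k - 1
            assert v_p(fermat_delta(x, p), p) == k - 1
print("valuation drops by exactly one")
```

Writing $x=up^{k}$ with $u$ a unit makes the mechanism visible: $\delta(x)=up^{k-1}-u^{p}p^{kp-1}$, and since $kp-1>k-1$ for $k\ge1$, the first term dominates the valuation.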
\begin{rem}
The proof did not actually use anything specific to the space $B^{m}C_{p}$.
It would have sufficed to have some $h$-good space $A$ with $h\left(|A|\right)$
rational and non-zero. The space $B^{m}C_{p}$ is just the ``simplest''
one.
\end{rem}
In practice, the situation of \propref{Bootstrap_Algebraic} arises
as follows.
\begin{prop}
\label{prop:Bootstrap_Categorical} Let $m\ge1$, and let $F\colon\mathcal{C}\to\mathcal{D}$
be a colimit preserving symmetric monoidal functor between presentably
symmetric monoidal, stable, $p$-local, $m$-semiadditive $\infty$-categories.
Assume that the map $\varphi\colon \mathcal{R}_{\mathcal{C}}\to \mathcal{R}_{\mathcal{D}}$,
induced by $F$, detects invertibility and that the images of $|BC_{p}|_{\mathcal{D}},|B^{m}C_{p}|_{\mathcal{D}}\in\mathcal{R}_{\mathcal{D}}$
in the ring $\mathcal{R}_{\mathcal{D}}^{\tf}$ are rational and non-zero.
Then $\mathcal{C}$ and $\mathcal{D}$ are $\left(m+1\right)$-semiadditive.
\end{prop}
\begin{proof}
It is enough to prove that $\mathcal{C}$ is $\left(m+1\right)$-semiadditive,
since by \corref{Semi_Add_Mode}, this implies that $\mathcal{D}$
is $\left(m+1\right)$-semiadditive. We shall apply \propref{Bootstrap_Algebraic}
to the composition
\[
\mathcal{R}_{\mathcal{C}}\oto{\varphi}\mathcal{R}_{\mathcal{D}}\oto g\mathcal{R}_{\mathcal{D}}^{\tf},
\]
where $g$ is the canonical projection. By \propref{Delta_Semi_Add_Functoriality},
$\varphi$ is a semi-$\delta$-ring homomorphism and it detects invertibility
by assumption. On the other hand, $g$ is a semi-$\delta$-ring homomorphism
and it detects invertibility by \propref{Delta_Torsion_Free}. It
is only left to observe that $\varphi\left(|A|_{\mathcal{C}}\right)=|A|_{\mathcal{D}}$,
which follows from \corref{Integral_Functor}.
\end{proof}
We conclude with a variant of \propref{Bootstrap_Categorical}, in
which the condition on the elements $|B^{m}C_{p}|_{\mathcal{D}}$,
is replaced by a condition on the closely related elements $\dim_{\mathcal{D}}\left(B^{m}C_{p}\right)$,
and which assembles together the individual statements for different
$m\in\bb N$.
\begin{thm}
\label{thm:Bootstrap_Machine}(Bootstrap Machine) Let $1\le m\le\infty$
and let $F\colon\mathcal{C}\to\mathcal{D}$ be a colimit preserving
symmetric monoidal functor between presentably symmetric monoidal,
stable, $p$-local $\infty$-categories. Assume that
\begin{enumerate}
\item $\mathcal{C}$ is $1$-semiadditive.
\item The map $\varphi\colon \mathcal{R}_{\mathcal{C}}\to \mathcal{R}_{\mathcal{D}}$, induced
by $F$, detects invertibility.
\item For every $0\le k<m$, if the space $B^{k}C_{p}$ is dualizable in
$\mathcal{D}$, then the image of $\dim_{\mathcal{D}}\left(B^{k}C_{p}\right)$
in $\mathcal{R}_{\mathcal{D}}^{\tf}$ is rational and non-zero.
\end{enumerate}
Then $\mathcal{C}$ and $\mathcal{D}$ are $m$-semiadditive.
\end{thm}
\begin{proof}
It suffices to show that $\mathcal{C}$ is $m$-semiadditive, since by \corref{Semi_Add_Mode}, $\mathcal{D}$ is then also $m$-semiadditive.
We prove by induction on $k$ that the images of the elements $|B^{i}C_{p}|_{\mathcal{D}}$ in $\mathcal{R}_{\mathcal{D}}^{\tf}$ are rational and non-zero for all $0\le i < k$, and that $\mathcal{C}$ is $k$-semiadditive.
The base case $k=1$
holds by assumption (1) and the fact that $|C_{p}|_{\mathcal{D}}=p$
is rational and nonzero in $\mathcal{R}_{\mathcal{D}}^{\tf}$, since
the unique ring homomorphism $\bb Z\to\mathcal{R}_{\mathcal{D}}^{\tf}$
is injective by \propref{Delta_Injective}. Assuming the inductive
hypothesis for some $k<m$, we first prove that the image of $|B^{k}C_{p}|_{\mathcal{D}}$ in $\mathcal{R}_{\mathcal{D}}^{\tf}$ is rational and non-zero.
By \corref{Dim_Sym}, $B^{k}C_{p}$ is
dualizable in $\mathcal{D}$, and we have
\[
\dim_{\mathcal{D}}\left(B^{k}C_{p}\right)=|B^{k}C_{p}|_{\mathcal{D}}|B^{k-1}C_{p}|_{\mathcal{D}}\in\mathcal{R}_{\mathcal{D}}^{\tf}.
\]
By assumption (3), $\dim_{\mathcal{D}}\left(B^{k}C_{p}\right)$ is
rational and non-zero and by the inductive hypothesis, the image of
$|B^{k-1}C_{p}|_{\mathcal{D}}$ in $\mathcal{R}_{\mathcal{D}}^{\tf}$
is rational and non-zero as well. Consequently, the image of $|B^{k}C_{p}|_{\mathcal{D}}$
in $\mathcal{R}_{\mathcal{D}}^{\tf}$ must also be rational and non-zero
since $\mathcal{R}_{\mathcal{D}}^{\tf}$ is torsion-free.
We shall now deduce that $\mathcal{C}$ is
$\left(k+1\right)$-semiadditive by applying \propref{Bootstrap_Categorical}
to the functor $F$. Since $|B^{k}C_{p}|_{\mathcal{D}}$ is rational and non-zero, it suffices to show that $|BC_{p}|_{\mathcal{D}}$ is rational and non-zero.
For $k=1$ there is nothing to prove and for $k\ge2$ this follows by the inductive hypothesis.
\end{proof}
\begin{rem}
The proof shows that the assumptions of the theorem above \emph{imply}
that the spaces $B^{k}C_{p}$ are dualizable in $\mathcal{D}$. Thus,
in retrospect, the ``if'' in assumption (3) is superfluous.
\end{rem}
\subsection{Nil-conservativity}
In this subsection we introduce and study a natural condition on a symmetric monoidal functor $\mathcal{C}\to\mathcal{D},$
which ensures that the induced map
$\mathcal{R}_{\mathcal{C}}\to \mathcal{R}_{\mathcal{D}}$
detects invertibility.
For simplicity, we shall work throughout under the assumption of \emph{presentability},
though most of the arguments do not require the full strength of this
assumption.
\begin{defn}
\label{def:Nil_Conservativity}We call a monoidal colimit preserving functor $F\colon\mathcal{C}\to\mathcal{D},$
between stable presentably monoidal $\infty$-categories \emph{nil-conservative}, if for every ring
$R\in\alg(\mathcal{C})$ with $F(R)=0$, we have $R=0$\footnote{This notion is closely related to the notion of ``nil-faithfulness'' defined in \cite{BalmerNil}.}.
\end{defn}
The fundamental example of nil-conservativity in chromatic homotopy theory is provided by the Nilpotence Theorem (\propref{Nilpotence_Support}).
It is immediate from the definition that:
\begin{lem}
\label{lem:Nil_Cancelation} Let $F\colon{\cal C}\to{\cal D}$ and
$G\colon{\cal D}\to{\cal E}$ be monoidal colimit preserving functors between stable presentably monoidal $\infty$-categories.
\begin{enumerate}
\item If $F$ is conservative it is nil-conservative.
\item If $F$ and $G$ are nil-conservative then $GF$ is nil-conservative.
\item If $GF$ is nil-conservative then $F$ is nil-conservative.
\end{enumerate}
\end{lem}
The property of nil-conservativity has a useful
equivalent characterization in terms of conservativity on dualizable
modules. For this we shall need a non-symmetric version of the known fact that
dualizable objects are closed under cofibers in the stable setting.
\begin{lem}
\label{lem:Dualizable_Cofiber}Let $\mathcal{C}$ be a stable presentably monoidal $\infty$-category
and let $R,S\in\alg(\mathcal{C})$. For every cofiber sequence
\[
X\to Y\to Z\quad\in\quad_{S}\BMod_{R}(\mathcal{C}),
\]
if two out of $X,Y$, and $Z$ are right dualizable, then so is the
third.
\end{lem}
\begin{proof}
We treat the case that $X$ and $Y$ are right dualizable (the other
cases are analogous). Given $\mathcal{M}\in\LMod_{\mathcal{C}}(\Pr^L)$
we have a functor
\[
X\otimes_{R}(-)\colon\LMod_{R}(\mathcal{M})\to\LMod_{S}(\mathcal{M}).
\]
Moreover, given a morphism $\mathcal{M}^{\prime}\oto U\mathcal{M}^{\prime\prime}$
in $\LMod_{\mathcal{C}}(\Pr^L)$, we have a commutative diagram
\[
\xymatrix{\LMod_{R}(\mathcal{M}^{\prime})\ar[d]_{U}\ar[rr]^{X\otimes_{R}(-)} & & \LMod_{S}(\mathcal{M}^{\prime})\ar[d]^{U}\\
\LMod_{R}(\mathcal{M}^{\prime\prime})\ar[rr]^{X\otimes_{R}(-)} & & \LMod_{S}(\mathcal{M}^{\prime\prime}).
}
\]
By the dual of \cite[Proposition 4.6.2.10]{ha}, the module $X$ is right dualizable if
and only if for every $\mathcal{M}\in\LMod_{\mathcal{C}}(\Pr^L)$ the
functor $X\otimes_{R}(-)$ admits a left adjoint $F_{X}$, and for
every map $\mathcal{M}^{\prime}\oto U\mathcal{M}^{\prime\prime}$
in $\LMod_{\mathcal{C}}(\Pr^L)$ the Beck-Chevalley map $F_{X}^{\prime\prime}U\oto{\beta_{X}}UF_{X}^{\prime}$ in the above diagram
is an isomorphism\footnote{The statement of \cite[Proposition 4.6.2.10]{ha} considers more general $\mathcal{C}$-left
tensored $\infty$-categories $\mathcal{M}$. The proof however uses
only the special cases $\mathcal{M}=\RMod_{T}(\mathcal{C})$ for $T\in\alg(\mathcal{C})$.}. Since $\mathcal{C}$ is stable, so is every $\mathcal{M}\in\LMod_{\mathcal{C}}(\Pr^L)$
and $\LMod_{S}(\mathcal{M})$.
For every such $\mathcal{M}$, we have a cofiber sequence of functors
\[
X\otimes_{R}(-)\to Y\otimes_{R}(-)\to Z\otimes_{R}(-).
\]
Since the first two admit left adjoints $F_{X}$ and $F_{Y}$ respectively,
so does $Z\otimes_{R}(-)$. Moreover, we have a cofiber sequence of
functors
\[
F_{Z}\to F_{Y}\to F_{X}.
\]
Unwinding the definitions, for every $\mathcal{M}^{\prime}\to\mathcal{M}^{\prime\prime}$
in $\LMod_{\mathcal{C}}(\Pr^L)$, we have a commutative diagram of Beck-Chevalley maps:
\[
\xymatrix{F_{Z}^{\prime\prime}U\ar[d]^{\beta_{Z}}\ar[r] & F_{Y}^{\prime\prime}U\ar[d]^{\beta_{Y}}\ar[r] & F_{X}^{\prime\prime}U\ar[d]^{\beta_{X}}\\
UF_{Z}^{\prime}\ar[r] & UF_{Y}^{\prime}\ar[r] & UF_{X}^{\prime}.
}
\]
Hence, if $\beta_{X}$ and $\beta_{Y}$ are isomorphisms then so is
$\beta_{Z}$.
\end{proof}
\begin{prop}
\label{prop:Nil_Conservativity_Dualizable}A monoidal colimit preserving functor $F\colon\mathcal{C}\to\mathcal{D}$
between stable presentably monoidal $\infty$-categories is nil-conservative, if and only if for every
$S\in\alg(\mathcal{C})$, the induced functor
\[
\overline{F}\colon\LMod_{S}(\mathcal{C})\to\LMod_{F(S)}(\mathcal{D})
\]
is conservative when restricted to the full subcategories of right
dualizable modules.
\end{prop}
\begin{proof}
The `if' part follows from the fact that every ring $R$ is right dualizable
as a left module over itself. Conversely, let $f\colon N_{1}\to N_{2}$
be a map of right dualizable left $S$-modules and let $M$ be the
cofiber of $f$, which is also right dualizable (\lemref{Dualizable_Cofiber}).
It suffices to show that if $\overline{F}M=0$, then $M=0$. Let $M^{\vee}\in\RMod_{S}(\mathcal{C})$
denote the right dual of $M$. We have
\[
M^{\vee}\otimes_{S}M=\hom_{S}(M,M)\quad\in\quad\mathcal{C}
\]
the ring of endomorphisms of $M$. Since $F$ is monoidal and preserves
all colimits (in particular, sifted ones), we have
\[
F(\hom_{S}(M,M))=F(M^{\vee}\otimes_{S}M)=\overline{F}M^{\vee}\otimes_{FS}\overline{F}M=0.
\]
By assumption, we get $\hom_{S}(M,M)=0$ and hence $M=0$.
\end{proof}
Applying \propref{Nil_Conservativity_Dualizable} to $S=\one_{\mathcal{C}},$
we see that a nil-conservative functor is in particular conservative
on right dualizable objects of $\mathcal{C}$ itself.
\begin{cor}
\label{cor:Nil_Conservative_Detects_Inv}Let $F\colon\mathcal{C}\to\mathcal{D}$
be a nil-conservative functor. The induced ring
homomorphism $\mathcal{R}_{\mathcal{C}}\to \mathcal{R}_{\mathcal{D}}$
detects invertibility. In particular, if $A$ is a $\mathcal{C}$-ambidextrous
and $\mathcal{D}$-amenable space, then it is also $\mathcal{C}$-amenable.
\end{cor}
\begin{proof}
This follows from \propref{Nil_Conservativity_Dualizable}, as $\one_{\mathcal{C}}$
is a dualizable object.
\end{proof}
\section{Applications to Chromatic Homotopy Theory
\label{sec:Applications_to_Chromatic}}
In this final section, after fixing some notation and terminology\footnote{We refer the reader to \cite{ravenel2016nilpotence}, for a comprehensive treatment of the fundamentals of chromatic homotopy theory.}
we apply the general theory developed in the previous sections to chromatic homotopy theory. We begin by studying the consequences of $1$-semiadditivity for nilpotence in the homotopy groups of $\bb E_{\infty}$ (and $H_{\infty}$)-ring spectra and for May's conjecture. Then, we prove the main theorem regarding the $\infty$-semiadditivity of $\Sp_{T\left(n\right)}$ and derive some corollaries. Finally, we study higher semiadditivity for localizations with respect to general weak rings (a generalization of a homotopy ring) and the various notions of ``bounded height'' for them.
Throughout, we fix a prime $p$ which will be implicit in all definitions that depend on it, except when explicitly stated otherwise.
\subsection{Generalities of Chromatic Homotopy Theory}
We begin with some generalities, mainly to fix terminology and notation.
Let $\left(\Sp,\otimes,\bb S\right)$ be the symmetric monoidal $\infty$-category
of spectra (see \cite[Corollary 4.8.2.19]{ha})\footnote{This $\infty$-category can be also obtained using a symmetric monoidal model category as in \cite{ekmm} or \cite{SymSpectra}.}.
\subsubsection{Localizations, Rings and Modules}
Recall from \cite[Section 5.2.7]{htt} that a functor $L\colon \Sp\to \Sp$ is called a \emph{localization functor} if it factors as a composition $\Sp\to \Sp_L \to \Sp$, where the second functor is fully faithful and the first is its left adjoint. We abuse notation and denote by $L$ also the left adjoint $\Sp\to \Sp_L$ itself.
We call a map $f$ in $\Sp$ an $L$-equivalence, if $L(f)$ is an isomorphism.
As in \cite[Definition 2.2.1.6, Example 2.2.1.7]{ha}, a functor $L\colon \Sp \to \Sp$ is said to be compatible with the symmetric monoidal structure, if $L$-equivalences are closed under tensor product with arbitrary objects of $\Sp$.
\begin{defn}
\label{defn: tensor localization}
A localization functor $L\colon \Sp \to \Sp$ is called a $\otimes$-localization if $L$ is compatible with the symmetric monoidal structure.
\end{defn}
Note that a localization functor $L\colon \Sp \to \Sp$ is a $\otimes$-localization, if and only if the $L$-acyclic objects are closed under desuspension.
\begin{prop}
\label{prop:Sp_Localization} For every $\otimes$-localization $L\colon \Sp\to \Sp$, the $\infty$-category
$\Sp_L$ is stable, presentable and admits a structure
of a presentably symmetric monoidal $\infty$-category $\left(\Sp_{L},\widehat{\otimes},L\bb S \right)$, such that the functor $L\colon \Sp\to \Sp_L$ is symmetric monoidal. Moreover, the inclusion $\Sp_L\into \Sp$ admits a canonical lax symmetric monoidal structure.
Finally, for all $X,Y\in \Sp_{L}$ we have
\[
X\widehat{\otimes}Y\simeq L\left(X\otimes Y\right).
\]
\end{prop}
\begin{proof}
Applying \cite[Proposition 5.5.4.15]{htt} to the collection of $L$-equivalences, we deduce that $\Sp_L$ is presentable. Since $L$ is a $\otimes$-localization, all claims except for the stability of $\Sp_L$ follow from \cite[Proposition 2.2.1.9]{ha}.
Now, since $\Sp$ is pointed, so is $\Sp_L$ (e.g. from \corref{Semi_Add_Mode}). To show the stability of $\Sp_L$, by \cite[Corollary 1.4.2.27]{ha} it is enough to show that $\Sigma \colon \Sp_L \to \Sp_L$ is an equivalence. Indeed, this functor has an inverse, given by tensoring with $L\left(\Sigma^{-1} \bb{S}\right)$.
\end{proof}
For every spectrum $E\in\Sp$, we denote by $L_E\colon \Sp \to \Sp$ the $\otimes$-localization with essential image the $E$-local spectra\footnote{This functor is also called Bousfield localization after Bousfield who originally constructed it in \cite{BousLoc}.}. We denote $\Sp_{L_E}$ by $\Sp_E$ and $L_E(\bb{S})$ by $\bb{S}_E$.
For a prime $p$, we shall consider also $\otimes$-localizations $L\colon \Sp_{(p)}\to \Sp_{(p)}$. The analogous results and notation apply to the $p$-local case as well.
\begin{prop}
Let $E\in\Sp$ and let $R$ be an $E$-local $\bb E_{\infty}$-ring. The
$\infty$-category $\Mod_{R}^{\left(E\right)}$ of left modules over
$R$ in the symmetric monoidal $\infty$-category $\Sp_{E}$, is presentable
and admits a structure of a presentably symmetric monoidal $\infty$-category.
Moreover, we have a free-forgetful adjunction
\[
F_{R}\colon\Sp_{E}\adj\Mod_{R}^{\left(E\right)}\colon U_{R},
\]
in which $F_{R}$ is symmetric monoidal.
\end{prop}
\begin{proof}
\cite[Corollary 4.5.1.5]{ha} identifies modules over $R$ as an $\bb E_{\infty}$-ring
with left modules over $R$ as an $\bb E_{1}$-ring. By \cite[Theorem 4.5.3.1]{ha}
and \cite[Corollary 4.2.3.7]{ha} this $\infty$-category is equipped
with a presentably symmetric monoidal structure. By \cite[Remark 4.2.3.8]{ha}
and \cite[Remark 4.5.3.2]{ha} applied to the map of algebras $\bb S_{E}\to R$,
we have the adjunction $F_{R}\dashv U_{R}$, such that $F_{R}$ is
symmetric monoidal.
\end{proof}
We shall also consider the following much weaker notion of a ``ring'' spectrum:
\begin{defn}
\label{def:Weak_Ring}
A \emph{weak ring}\footnote{It is called \emph{$\mu$-spectrum} in \cite[Definition 4.8]{hoveystrickland}}
is a spectrum $R\in\Sp$, together with a ``unit'' map $u{\colon}\bb S\to R$
and a ``multiplication'' map $\mu{\colon}R\otimes R\to R$, such that the
composition
\[
\xymatrix{
R\ar[r]^-{u\otimes\Id} & R\otimes R\ar[r]^-{\mu} & R,
}
\]
is homotopic to the identity.
\end{defn}
\begin{example}
\label{example: telescopic weak rings}
Every homotopy ring is a weak ring. In particular, every $\bb E_{1}$-ring, such as the telescopes $\T\left(n\right)$ defined below, is a weak ring.
\end{example}
\begin{lem}
\label{lem:Tensor_Weak_Rings}
Let $R$ and $S$ be weak rings. Then $R\otimes S$
is a weak ring.
\end{lem}
\begin{proof}
The unit is $u_{R}\otimes u_{S}$ and the multiplication is $\mu_{R}\otimes\mu_{S}$ precomposed with the swap of the middle two factors of $\left(R\otimes S\right)\otimes\left(R\otimes S\right)$; the unitality condition for $R\otimes S$ then follows from those of $R$ and $S$.
\end{proof}
Our interest in weak rings stems from the fact that they include
a large class of spectra of interest and have just enough structure
to invoke the Nilpotence Theorem (see \thmref{Nilpotence}).
\subsubsection{Morava Theories}
Given an integer $n\ge0$, let $E_{n}$ be a 2-periodic Morava $E$-theory
of height $n$ with coefficients (for $n\ge1$)
\[
\pi_{*}E_{n}\simeq\bb Z_{p}[[u_{1},\dots,u_{n-1}]][u^{\pm1}],\quad\left|u_{i}\right|=0,\ \left|u\right|=2,
\]
and let $\K\left(n\right)$ be a $2$-periodic Morava $K$-theory
of height $n$ with coefficients ($n\ge1$)
\[
\pi_{*}\K\left(n\right)=\bb F_{p}[u^{\pm1}],\quad\left|u\right|=2.
\]
The spectrum $E_{n}$ admits an $\bb E_{\infty}$-ring structure in
$\Sp$ (by \cite{GoerssH}). The spectrum $\K\left(n\right)$ is obtained from the even $\bb{E}_{\infty}$-ring $E_{n}$ by taking the quotient with respect to the (regular) sequence $(p,u_1,\dots,u_{n-1})$ and hence admits an $\bb{E}_1$-ring structure with an $\bb{E}_{1}$-ring map $E_{n}\to\K\left(n\right)$ (see e.g. \cite{QuotEvenRings}).
Since $E_{n}$ is $\K\left(n\right)$-local,
we can also view it as an $\bb E_{\infty}$-ring in the $\infty$-category $\Sp_{K\left(n\right)}$. We shall use the notation $\widehat{\Mod}_{E_n}$ for $\Mod_{E_n}^{(K(n))}$.
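\begin{example}
To fix ideas, at height $n=1$ one may take $E_{1}$ to be $p$-complete complex $K$-theory $KU_{p}^{\wedge}$ and $\K\left(1\right)$ to be mod $p$ complex $K$-theory $KU/p$, so that
\[
\pi_{*}E_{1}\simeq\bb Z_{p}[u^{\pm1}]\qquad\text{and}\qquad\pi_{*}\K\left(1\right)\simeq\bb F_{p}[u^{\pm1}],\qquad\left|u\right|=2.
\]
\end{example}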
We shall make essential use of the dualizability and dimension of Eilenberg-MacLane spaces in $\widehat{\Mod}_{E_n}$. First, there is a general criterion for a space to be dualizable in $\widehat{\Mod}_{E_n}$:
\begin{lem}
\label{lem:Morava_Dimension}Let $n\ge0$ and let $X$ be a space.
If
\[
\dim_{\bb F_{p}}\left(\K\left(n\right)_{0}\left(X\right)\right)=d<\infty\qquad\text{and}\qquad\K\left(n\right)_{1}\left(X\right)=0,
\]
then $X$ is dualizable in $\widehat{\Mod}_{E_n}$
and
\[
\dim_{\widehat{\Mod}_{E_n}}\left(X\right)=d.
\]
\end{lem}
\begin{proof}
By \cite[Proposition 3.4.3]{HopkinsLurie} (see also \cite[Proposition 8.4]{hoveystrickland}), there is an isomorphism
of $E_{n}$-modules
\[
L_{K\left(n\right)}\left(E_{n}\otimes\Sigma^{\infty}X_{+}\right)\simeq E_{n}^{d},
\]
from which the claim follows immediately.
\end{proof}
\begin{rem}
Using \cite[Proposition 3.4.3]{HopkinsLurie} together with \cite[Proposition 10.11]{mathew_galois}, one can deduce that for every dualizable object $M\in\widehat{\Mod}_{E_n}$,
we have
\[
\dim_{\widehat{\Mod}_{E_n}}\left(M\right)=\dim_{\bb F_{p}}\left(\pi_{0}\left(\K\left(n\right)\otimes_{E_{n}}M\right)\right)-\dim_{\bb F_{p}}\left(\pi_{1}\left(\K\left(n\right)\otimes_{E_{n}}M\right)\right).
\]
But we shall not need this fact.
\end{rem}
Using the classical computations of Ravenel and Wilson we get the following:
\begin{cor}
\label{cor:Morava_Dimension_EM} For all $k\in\bb N$, we have
\[
\dim_{\widehat{\Mod}_{E_n}}\left(B^{k}C_{p}\right)=p^{\binom{n}{k}}\quad\in\pi_{0}\left(E_{n}\right).
\]
In particular, these are all rational and non-zero.
\end{cor}
\begin{proof}
By \cite[Theorem 9.2]{RavenelWilson}, we have
\[
\dim_{\bb F_{p}}\K\left(n\right)_{0}\left(B^{k}C_{p}\right)=p^{\binom{n}{k}}\qquad\text{and}\qquad\K\left(n\right)_{1}\left(B^{k}C_{p}\right)=0.
\]
Hence, the result follows from \lemref{Morava_Dimension}.
\end{proof}
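To illustrate the corollary, at height $n=2$ it gives
\[
\dim_{\widehat{\Mod}_{E_2}}\left(C_{p}\right)=p,\qquad\dim_{\widehat{\Mod}_{E_2}}\left(BC_{p}\right)=p^{2},\qquad\dim_{\widehat{\Mod}_{E_2}}\left(B^{2}C_{p}\right)=p,
\]
and $\dim_{\widehat{\Mod}_{E_2}}\left(B^{k}C_{p}\right)=1$ for all $k\ge3$, since $\binom{2}{k}=0$ in that range.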
\subsubsection{Telescopic Localizations}
\begin{defn}
A finite $p$-local spectrum $X$, i.e. a compact object in the $\infty$-category
$\Sp_{\left(p\right)}$, is said to be of \emph{type $n$}, if $\K\left(n\right)\otimes X\neq0$
and $\K\left(j\right)\otimes X=0$ for $j=0,\dots,n-1$.
\end{defn}
Let $\X\left(n\right)$ be a finite $p$-local spectrum of type $n$.
Let $\bb D\X\left(n\right)=\underline{\hom}\left(\X\left(n\right),\bb S_{(p)}\right)$
be the Spanier-Whitehead dual of $\X\left(n\right)$. The finite $p$-local
spectrum
\[
R=\bb D\X\left(n\right)\otimes\X\left(n\right)=\underline{\hom}\left(\X\left(n\right),\X\left(n\right)\right),
\]
is also of type $n$ by the K\"unneth isomorphism. By replacing $\X\left(n\right)$
with $R$, we may assume that $\X\left(n\right)$ is an $\bb E_{1}$-ring
(see \cite[Section 4.7.1]{ha}). Every type $n$ spectrum $\X\left(n\right)$
admits a $v_{n}$-self map, which is a map
\[
v\colon\Sigma^{k}\X\left(n\right)\to\X\left(n\right),
\]
that is an isomorphism on $\K\left(n\right)_{*}\X\left(n\right)$ and zero on $\K\left(j\right)_{*}\X\left(n\right)$
for $j\neq n$. We let
\[
\T\left(n\right)=v^{-1}\X\left(n\right)=\colim\limits _{k}\left(\X\left(n\right)\oto v\Sigma^{-k}\X\left(n\right)\oto v\Sigma^{-2k}\X\left(n\right)\oto v\dots\right),
\]
be the telescope on $v$. The canonical map $\X\left(n\right)\to\T\left(n\right)$
exhibits $\T\left(n\right)$ as the $\T\left(n\right)$-localization
of $\X\left(n\right)$ (e.g. \cite[Proposition 3.2]{mahowaldsadofskyloc}).
Since the functor $L_{T\left(n\right)}$ is symmetric monoidal, we
can consider $\T\left(n\right)=L_{T\left(n\right)}\X\left(n\right)$
as an $\bb E_{1}$-ring in $\Sp_{T\left(n\right)}$. By the Thick Subcategory and Periodicity theorems \cite{nilp2}, the localization $\Sp_{T(n)}$ depends only on the prime $p$ and the height $n$, and in particular is independent of the choice of $\X\left(n\right)$ and $v$.
It is known (e.g.
\cite[Section 6 (4)]{mahowaldsadofskyloc}) that
\[
\Sp_{K\left(n\right)}\ss\Sp_{T\left(n\right)}\ss\Sp.
\]
Thus, both $E_{n}$ and $\K\left(n\right)$ are also $\T\left(n\right)$-local,
and so we can consider them as an $\bb E_{\infty}$-ring and an $\bb E_{1}$-ring
in $\Sp_{T\left(n\right)}$ respectively.
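\begin{example}
For instance, for an odd prime $p$, one may take $\X\left(1\right)=\bb S/p$, the mod $p$ Moore spectrum, which admits the Adams self map $v_{1}\colon\Sigma^{2p-2}\bb S/p\to\bb S/p$. Then
\[
\T\left(1\right)=v_{1}^{-1}\bb S/p=\colim\left(\bb S/p\oto{v_{1}}\Sigma^{-(2p-2)}\bb S/p\oto{v_{1}}\Sigma^{-2(2p-2)}\bb S/p\oto{v_{1}}\dots\right).
\]
\end{example}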
\subsubsection{Nilpotence Theorem}
Morava $K$-theories are used in the following definition of support:
\begin{defn}
Let $L{\colon}\Sp_{(p)} \to \Sp_{(p)}$ be a $\otimes$-localization functor. The (chromatic) \emph{support} of $L$ is the set
\[\supp(L)=\left\{0\le n\le \infty \mid L\left(\K\left(n\right)\right)\ne 0 \right\}\subseteq \mathbb{N}\cup \{\infty\}.\]
For $E\in \Sp_{(p)}$ we denote $\supp(E)=\supp(L_E)$.
\end{defn}
Note that $L_E(X)=0$ if and only if $E\otimes X=0$ and so $\supp(E)$ coincides with the usual notion of chromatic support of a spectrum. By the K\"unneth Theorem we have
\[\supp(E\otimes E') = \supp(E)\cap \supp(E').\]
\begin{lem}
\label{lem:Kn_LSp}
Let $L{\colon}\Sp_{\left(p\right)}\to\Sp_{\left(p\right)}$
be a $\otimes$-localization functor and let $0\le n \le \infty$. Then $n\in \supp\left(L\right)$ if and only if $\Sp_{\K\left(n\right)}\ss\Sp_{L}$.
\end{lem}
\begin{proof}
If $\Sp_{\K\left(n\right)} \ss \Sp_{L}$ then in particular $\K\left(n\right) \in \Sp_{L}$ and hence \[L(\K\left(n\right))=\K\left(n\right)\ne 0.\]
Conversely, we need to show that if $L\left(X\right)=0$, then $\K\left(n\right)\otimes X$
is zero. We have
\[
L\left(\K\left(n\right)\otimes X\right)\simeq L\left(\K\left(n\right)\right)\,\widehat{\otimes}\,L\left(X\right)=0.
\]
Since $\K\left(n\right)\otimes X$ is a direct sum of
suspended copies of $\K\left(n\right)$, if $\K\left(n\right)\otimes X\neq0$,
then up to a suspension, $\K\left(n\right)$ is a retract of $\K\left(n\right)\otimes X$.
This would imply that $L\left(\K\left(n\right)\right)=0$ in contradiction to the hypothesis.
\end{proof}
\begin{example}\label{exa:Chromatic_Support} The following are examples for the support of some particular types of localization:
\begin{enumerate}
\item For a finite spectrum $\X\left(n\right)$ of type $n$, we have by definition
\[\supp(\X\left(n\right)) = \{n,n+1,\dots,\infty\}.\]
\item For every integer $n\ge0$, we have
\[\supp(\K\left(n\right))= \supp(\T\left(n\right)) = \{n\}\] (see e.g. \cite[Proposition A.2.13]{RavBook}).
\item For a non-zero smashing localization $L$, we have
\[\supp(L)=\{0,\dots,n\}\]
for some $0\le n \le \infty$ (see e.g. \cite[Lemma 4.1]{barthel2015short}).
\item For the Brown-Comenetz spectrum $I_{\mathbb{Q}/\mathbb{Z}}$, we have $\supp({I_{\mathbb{Q}/\mathbb{Z}}})=\operatorname{{\small \normalfont{\text{\O}}}}$ (see e.g. \cite[Proposition 7.4.2]{ravbook2}).
\end{enumerate}
\end{example}
When considering localizations with respect to non-zero \emph{weak rings}, the Nilpotence Theorem of Devinatz-Hopkins-Smith guarantees that the support cannot be empty. Since it is not usually stated in this generality, we include the argument for deriving it from the standard version.
\begin{thm}[Devinatz-Hopkins-Smith]
\label{thm:Nilpotence}
Let $R$ be a $p$-local weak ring. Then $R=0$ if and only if $\supp(R)=\operatorname{{\small \normalfont{\text{\O}}}}$.
\end{thm}
\begin{proof}
Consider the unit map $u{\colon}\bb S\to R$. If $\K\left(n\right)\otimes R=0$
for all $0\le n\le\infty$, then by \cite[Theorem 3(iii)]{nilp2},
the map $u$ is \emph{smash nilpotent}. Namely, $u^{\otimes r}{\colon}\bb S\to R^{\otimes r}$
is null for some $r\ge1$. The commutative diagram
\[
\xymatrix@R=2pc@C=3pc{\bb S\otimes\bb S\ar[d]_{\Id\otimes u}\ar[r]^{u\otimes u} & R\otimes R\ar[d]^{\mu}\\
\bb S\otimes R\ar[ru]^{u\otimes\Id}\ar[r]^{\Id} & R
}
\]
shows that $u$ factors through $u\otimes u$. Applying this iteratively,
we can factor $u$ through the null map $u^{\otimes r}$ and deduce
that $u$ itself is null. Consequently, the factorization of the identity map of
$R$ as the composition
\[
\xymatrix{
R\ar[r]^-{u\otimes\Id} & R\otimes R\ar[r]^-{\mu} & R,
}
\]
implies that it is null and thus $R=0$.
\end{proof}
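\begin{example}
As an illustration, for distinct heights $n\neq m$, the spectrum $\K\left(n\right)\otimes\K\left(m\right)$ is a weak ring by \lemref{Tensor_Weak_Rings}, and by \exaref{Chromatic_Support}(2) together with the K\"unneth isomorphism,
\[
\supp\left(\K\left(n\right)\otimes\K\left(m\right)\right)=\{n\}\cap\{m\}=\operatorname{{\small \normalfont{\text{\O}}}}.
\]
Hence, \thmref{Nilpotence} recovers the classical fact that $\K\left(n\right)\otimes\K\left(m\right)=0$.
\end{example}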
This provides the main example of a nil-conservative functor.
\begin{prop}
\label{prop:Nilpotence_Support}Let $R$ be a $p$-local weak ring. The functor
\[
L\colon\Sp_{R}\to\prod_{n\in\supp(R)}\Sp_{K(n)},
\]
whose $n$-th component is $K(n)$-localization, is nil-conservative.
\end{prop}
\begin{proof}
Let $S$ be an $R$-local ring spectrum. If $L(S)=0$, then $S\otimes K(n)=0$
for all $n\in\supp(R)$. On the other hand, by definition, $R\otimes K(n)=0$ for all
$n\notin\supp(R)$. Consequently, $S\otimes R\otimes K(n)=0$ for
all $n\in\bb N\cup\{\infty\}$. By \lemref{Tensor_Weak_Rings}, $S\otimes R$ is a weak ring and hence by the Nilpotence Theorem (\thmref{Nilpotence}) we get $S\otimes R=0$.
Finally, since $S$ is $R$-local, $S=0$.
\end{proof}
\begin{defn}
\label{def:En_Hat}
Let $\widehat{E}_{n}[-]$ be the composition
\[
\Sp_{T\left(n\right)}\oto{L_{K\left(n\right)}}\Sp_{K\left(n\right)}\oto{F_{E_{n}}}\widehat{\Mod}_{E_n},
\]
where we abuse notation and write $L_{K\left(n\right)}$ also for
the left adjoint of the inclusion $\Sp_{K\left(n\right)}\ss\Sp_{T\left(n\right)}$.
The functor $\widehat{E}_{n}[-]$ is a colimit preserving
symmetric monoidal functor as a composition of two such.
\end{defn}
\begin{cor}
\label{cor:Nil_Conservativity_Telescopic}For every $0\le n<\infty$,
the functor
\[
\widehat{E}_{n}[-]\colon \Sp_{T(n)}\to \widehat{\Mod}_{E_n},
\]
is nil-conservative. Consequently, the canonical map
$\pi_0\bb{S}_{T(n)}\to \pi_0 E_{n}$
detects invertibility.
\end{cor}
\begin{proof}
By \propref{Nilpotence_Support} and the fact that
$\mathrm{supp}(T(n))=\{n\}$ (\exaref{Chromatic_Support}(2)), we get that $L_{K(n)}\colon\Sp_{T(n)}\to\Sp_{K(n)}$
is nil-conservative. Since
\[
E_n \,\widehat{\otimes}\, (-)\colon \Sp_{K(n)}\to \widehat{\Mod}_{E_n}
\]
is conservative, it is in particular nil-conservative and hence the composition $\Sp_{T(n)}\to \widehat{\Mod}_{E_n}$
is nil-conservative. The claim now follows from \corref{Nil_Conservative_Detects_Inv}.
\end{proof}
\begin{rem}
The fact that the functor
\(
\widehat{E}_{n}[-]\colon \Sp_{T(n)}\to \widehat{\Mod}_{E_n}
\)
is conservative on dualizable objects is what gives us the handle on $\Sp_{T(n)}$, which will allow us to prove the $\infty$-semiadditivity of $\Sp_{T(n)}$. However, it has other uses as well. As a consequence of the $\infty$-semiadditivity of $\Sp_{T(n)}$, we have a large supply of dualizable objects in $\Sp_{T(n)}$,
including for example all $\pi$-finite spaces.
In an upcoming work, we shall exploit this fact together with nil-conservativity to lift the maximal abelian Galois extension of $\Sp_{K(n)}$ to $\Sp_{T(n)}$.
\end{rem}
\subsection{Consequences of 1-Semiadditivity}
In this section, we discuss some applications of the theory of $1$-semiadditivity in stable $\infty$-categories to chromatic homotopy theory.
\subsubsection{Power Operations}
\begin{defn}
We denote by $\rm{CRing}$ the category of commutative rings and by $\rm{CRing^{\delta}}$ the category of semi-$\delta$-rings and semi-$\delta$-ring homomorphisms.
\end{defn}
\begin{thm}\label{thm:Frob_Lift}
The functor
\[
\pi_{0}\colon\calg(\Sp_{T\left(n\right)})\to\rm{CRing}
\]
has a lift to a functor
\[
\calg(\Sp_{T\left(n\right)})\to \rm{CRing^{\delta}}
\]
along the forgetful functor $\rm{CRing^{\delta}} \to \rm{CRing}$.
\end{thm}
\begin{proof}
The $\infty$-category $\Sp_{T\left(n\right)}$ is $1$-semiadditive
by \cite{Kuhn} and therefore satisfies the conditions of \thmref{Delta_Semi_Add}.
Thus, for every $R\in\calg(\Sp_{T\left(n\right)})$, the commutative
ring
\[
\pi_{0}R=\hom_{h\Sp_{T\left(n\right)}}(\bb S_{T\left(n\right)},R)
\]
admits an additive $p$-derivation $\delta$. The functoriality follows from \propref{Delta_Semi_Add_Naturality}.
\end{proof}
Note that for a semi-$\delta$-ring $(R,\delta)$, the operation
\[
\psi(x) \coloneqq x^p + p\delta(x)\quad \colon \quad R \to R
\]
is an additive group homomorphism which satisfies $\psi(1)=1$ and $\psi(x)\equiv x^{p}\pmod{pR}$. Thus, \thmref{Frob_Lift} also provides a functorial additive lift of Frobenius for $T(n)$-local commutative ring spectra.
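\begin{example}
For example, the ring $\bb Z$ admits an additive $p$-derivation given by the Fermat quotient
\[
\delta\left(x\right)=\frac{x-x^{p}}{p},
\]
which is integer valued by Fermat's little theorem. The associated lift of Frobenius is $\psi\left(x\right)=x^{p}+p\delta\left(x\right)=x$, the identity, as expected since the Frobenius on $\bb Z/p$ is the identity.
\end{example}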
\begin{rem}
For every $R\in \calg(\Sp_{K(1)})$, Hopkins defined in \cite{HopkinsK1} an additive $p$-derivation
\[
\theta \colon \pi_0(R) \to \pi_0(R).
\]
We warn the reader that our operation $\delta$ is \emph{not} the same as this $\theta$. We can describe precisely the relationship between the two by expressing both in terms of the operation $\alpha(x) = \int_{BC_p}x^p$ as follows:
\begin{align*}
\delta(x) &= |BC_p|x - \alpha(x) \\
\theta(x) &= \frac{1}{p-1}(x^p - \alpha(x)).
\end{align*}
The additivity property of $\theta$ can be easily deduced from the corresponding property of $\alpha$ (\propref{Alpha_Additvity}).
However, the identity $\theta(1)=0$ amounts to $|BC_p|_{\Sp_{K(1)}}= 1$, a fact which does not generalize to higher heights. On the other hand, the operation $\theta$ also satisfies the multiplicativity rule
\[
\theta(xy) = \theta(x)\theta(y) + \theta(x)y^p + x^p\theta(y),
\]
making the associated lift of Frobenius $x^p+p\theta(x)$ a \emph{ring} homomorphism.
\end{rem}
\subsubsection{May's Conjecture}\label{sec:May}
\begin{defn}
\label{def:sofic}Let $\mathcal{C}$ be a stable, presentably symmetric
monoidal $\infty$-category. We say that $\mathcal{C}$ is \emph{sofic},\footnote{The term \emph{sofic} is derived from the Hebrew word ``sofi'' for \emph{finite} (see \cite{weiss1973subshifts}).}
if there exists a stable, $1$-semiadditive, presentably symmetric
monoidal $\infty$-category $\mathcal{D}$ and a colimit preserving,
conservative, lax symmetric monoidal functor $\mathcal{C}\to\mathcal{D}$.
We call a spectrum $E\in\Sp$ sofic, if $\Sp_{E}$ is sofic.
\end{defn}
Every stable, $1$-semiadditive, presentably symmetric monoidal $\infty$-category
is of course sofic, but the latter condition is considerably weaker.
\begin{example}
The spectrum $H\bb Q$ is sofic, and more generally so are the spectra $\K\left(n\right)$
and $\T\left(n\right)$ for all $n$. Any sum of sofic spectra is
sofic, and since being sofic depends only on the Bousfield class (that is, the collection of acyclic objects) of
the spectrum, so are the Morava theories $E_{n}$ for all $n$ and
the telescopic localizations of the sphere spectrum $L_{n}^{f}\bb S$.
\end{example}
\begin{thm}
\label{thm:May}Let $E\in\Sp$ be a sofic homotopy commutative ring
spectrum and let $R$ be an $\bb E_{\infty}$-ring. For every $x\in\pi_{*}R$,
if the image of $x$ in $\pi_{*}\left(H\bb Q\otimes R\right)$ is
nilpotent, then the image of $x$ in $\pi_{*}\left(E\otimes R\right)$
is nilpotent.
\end{thm}
Namely, the single homology theory $H\bb Q$ detects nilpotence in
all sofic homology theories.
\begin{proof}
First, observe that
\[
\pi_{*}\left(H\bb Q\otimes R\right)\simeq\bb Q\otimes\pi_{*}R.
\]
Replacing $x$ with a suitable power, we can assume that $x$ is torsion
in $\pi_{*}R$. Since the homogeneous components of a torsion element
are torsion, we may assume without loss of generality that $x\in\pi_{k}R$
for some $k$ (i.e. $x$ is homogeneous). Consider the corresponding
map $|x|\colon R\to\Sigma^{-k}R$ given by multiplication
by $x$. The telescope
\[
x^{-1}R=\colim\left(R\oto{|x|}\Sigma^{-k}R\oto{|x|}\Sigma^{-2k}R\oto{|x|}\dots\right)
\]
carries a structure of an $\bb E_{\infty}$-ring and the map $R\to x^{-1}R$
induces localization at $x$ on $\pi_{*}$. In particular, since $x$ becomes
both torsion and invertible in $\pi_{*}\left(x^{-1}R\right)$, the unit map $\bb S\to x^{-1}R$ is torsion. Let $F\colon\Sp_{E}\to\mathcal{D}$
be a functor as in \defref{sofic}. Since $F$ and $L_{E}$ are both
lax symmetric monoidal and exact, $F\left(L_{E}\left(x^{-1}R\right)\right)$
is an $\bb E_{\infty}$-algebra and its unit is also torsion. By \corref{Semiadd_Torsion_Nilpotent},
$F\left(L_{E}\left(x^{-1}R\right)\right)=0$ and since $F$ is conservative,
$L_{E}\left(x^{-1}R\right)=0$. It follows that $E\otimes x^{-1}R=0$.
Let $\tilde{x}$ be the image of $x$ in $\pi_{*}\left(E\otimes R\right)$.
Since,
\[
\pi_{*}\left(E\otimes x^{-1}R\right)\simeq\pi_{*}\left(\tilde{x}^{-1}\left(E\otimes R\right)\right)\simeq\tilde{x}^{-1}\pi_{*}\left(E\otimes R\right),
\]
we get that $\tilde{x}$ is nilpotent in $\pi_{*}\left(E\otimes R\right)$.
\end{proof}
\begin{rem}
We could have replaced $\bb E_{\infty}$ by $H_{\infty}$ (see \remref{H_Infty}).
Applying the theorem in this form for $E=\K\left(n\right)$, and using
the Nilpotence Theorem, one can deduce the conjecture of May, that
was proved in \cite{MathewMay}. We also note that the above
theorem can be extended to a general stable presentably symmetric
monoidal $\infty$-category $\mathcal{C}$ with a compact unit (instead
of $\Sp$) and $x\colon I\to R$ any map from an invertible object
$I$ (i.e. an object of the Picard group of $\mathcal{C}$).
\end{rem}
\subsection{Higher Semiadditivity of $T(n)$-Local Spectra}
In this section, we prove the main theorem of the paper. Namely, we show that
the $\infty$-category $\Sp_{T\left(n\right)}$ is $\infty$-semiadditive for all $n\ge0$ and draw some consequences from this.
Our strategy is to apply the ``Bootstrap Machine'' (\thmref{Bootstrap_Machine})
to the functor $\widehat{E}_n[-]$ given in \defref{En_Hat}.
\begin{thm}
\label{thm:Tn_Semiaddi}
For all $n\ge0$, the $\infty$-categories
$\Sp_{T\left(n\right)}$ and $\widehat{\Mod}_{E_n}$
are $\infty$-semiadditive.
\end{thm}
\begin{proof}
We verify the assumptions (1)-(3) of \thmref{Bootstrap_Machine} for the colimit preserving symmetric monoidal functor
\[
\widehat{E}_{n}[-]\colon\Sp_{T\left(n\right)}\to\widehat{\Mod}_{E_n}.
\]
Namely, we need to show that
\begin{enumerate}
\item The $\infty$-categories $\Sp_{T\left(n\right)}$ are $1$-semiadditive.
\item The functor $\widehat{E}_{n}[-]$ detects invertibility.
\item The symmetric monoidal dimensions of the spaces $B^{k}C_{p}$ in $\widehat{\Mod}_{E_n}$ are rational and non-zero.
\end{enumerate}
Claim (1) is proved in \cite{Kuhn}, claim (2) follows from \corref{Nil_Conservativity_Telescopic}, and claim (3) is given by \corref{Morava_Dimension_EM}.
\end{proof}
This readily implies the original result of \cite{HopkinsLurie}.
\begin{cor}
\label{cor:Kn_Semiadd}For all $0\le n<\infty$, the $\infty$-category $\Sp_{K\left(n\right)}$
is $\infty$-semiadditive.
\end{cor}
\begin{proof}
Apply \corref{Semi_Add_Mode} to the localization functor $L_{T\left(n\right)}\colon\Sp_{T\left(n\right)}\to\Sp_{K\left(n\right)}$.
Alternatively, one could just use the same argument as in \thmref{Tn_Semiaddi}.
\end{proof}
By \thmref{Tn_Semiaddi}, both $\infty$-categories $\Sp_{T\left(n\right)}$
and $\widehat{\Mod}_{E_n}$ are $\infty$-semiadditive.
Hence, for every $\pi$-finite space $A$, we have an element $|A|\in\pi_{0}\bb S_{T\left(n\right)}$,
which maps to the corresponding element $|A|\in\pi_{0}E_{n}$
(since the map is induced by a colimit preserving functor). We shall
make some computations regarding these elements and use them to deduce
some new facts about $\Sp_{T\left(n\right)}$.
\begin{lem}
\label{lem:EM_Box_Morava}For every $k,n\ge0$ we have
\[
|B^{k}C_{p}|_{\widehat{\Mod}_{E_n}}=p^{\binom{n-1}{k}}\quad\in\pi_{0}E_{n}.
\]
\end{lem}
\begin{proof}
By \corref{Dim_Sym} and \corref{Morava_Dimension_EM}, we have
\[
p^{\binom{n}{k}}=\dim_{\widehat{\Mod}_{E_n}}\left(B^{k}C_{p}\right)=|B^{k}C_{p}||B^{k-1}C_{p}|.
\]
The result now follows by induction on $k$, using the identity
\[
\binom{n-1}{k}+\binom{n-1}{k-1}=\binom{n}{k}
\]
and the fact that the ring $\pi_{0}E_{n}$ is torsion free.
\end{proof}
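Concretely, unwinding the induction for the first few values of $k$ (the base case uses $|\Omega C_{p}|=|\pt|=1$), we get
\[
|C_{p}|=p^{\binom{n}{0}}=p,\qquad|BC_{p}|=p^{\binom{n}{1}}/p=p^{n-1},\qquad|B^{2}C_{p}|=p^{\binom{n}{2}}/p^{n-1}=p^{\binom{n-1}{2}}.
\]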
\begin{lem}
\label{lem:EM_Box_Tn_Inv}For every $k\ge n\ge0$ the element $|B^{k}C_{p}|_{\Sp_{\T(n)}} \in\pi_{0}\bb S_{T\left(n\right)}$
is invertible.
\end{lem}
\begin{proof}
For $n=0$ this is clear, so we may assume $n\ge 1$. By \corref{Nil_Conservativity_Telescopic}, the map
\[
f\colon\pi_{0}\bb S_{T\left(n\right)}\to\pi_{0}E_{n}
\]
detects invertibility and by \lemref{EM_Box_Morava},
\[
f\left(|B^{k}C_{p}|\right)=p^{\binom{n-1}{k}}=1,
\]
since $\binom{n-1}{k}=0$ for $k\ge n$.
\end{proof}
\begin{thm}
\label{thm:Tn_Contractiblity}
Let $n\ge0$ and let $f\colon A\to B$
be a map with $\pi$-finite $n$-connected homotopy fibers. The induced
map $\Sigma_{+}^{\infty}f\colon\Sigma_{+}^{\infty}A\to\Sigma_{+}^{\infty}B$
is a $\T\left(n\right)$-equivalence.
\end{thm}
\begin{proof}
We begin with a standard general argument that reduces the statement
to the case $B=\pt$, by passing to the fibers. Consider the equivalence
of $\infty$-categories
\[
\mathcal{S}_{/B}\xrightarrow{\,\smash{\raisebox{-0.5ex}{\ensuremath{\scriptstyle\sim}}}\,}\fun\left(B,\mathcal{S}\right),
\]
given by the Grothendieck construction. Let $X\in\fun\left(B,\mathcal{S}\right)$
be the local system of spaces on $B$, that corresponds to $f$ and
let $Y\in\fun\left(B,\mathcal{S}\right)$ be the constant local system
with value $\pt\in\mathcal{S}$. As $Y$ is terminal, there is an essentially
unique map $X\to Y$, which at each point $b\in B$, is the essentially
unique map from $X_{b}$, the homotopy fiber of $f$ at $b$, to $Y_{b}=\pt$.
We recover $f$, up to homotopy, as the induced map on colimits
\[
A\simeq\colim X\to\colim Y\simeq B.
\]
For each $E\in\Sp$, the functor
\[
E\otimes\Sigma_{+}^{\infty}\left(-\right)\colon\mathcal{S}\to\Sp
\]
preserves colimits. Therefore, if the induced map for each homotopy
fiber
\[
E\otimes\Sigma_{+}^{\infty}X_{b}\to E\otimes\Sigma_{+}^{\infty}\pt,
\]
is an isomorphism, then the induced map on colimits is also an isomorphism
\[
E\otimes\Sigma_{+}^{\infty}A\xrightarrow{\,\smash{\raisebox{-0.5ex}{\ensuremath{\scriptstyle\sim}}}\,} E\otimes\Sigma_{+}^{\infty}B.
\]
Now, if $B=\pt$, we have that $A$ is a $\pi$-finite $n$-connected
space. For $n=0$, the claim is obvious, and so we may assume that
$n\ge1$. Therefore, $A$ is simply connected and in particular nilpotent.
Thus, we can refine the Postnikov tower of $A$ to a finite tower
\[
A=A_{0}\to A_{1}\to\dots\to A_{d}=\pt,
\]
such that the homotopy fiber of each $A_{i}\to A_{i+1}$ is of the
form $B^{k}C_{q}$, for $q$ a prime and $k\ge n+1$. It thus suffices
to show that the map
\[
\Sigma_{+}^{\infty}B^{k}C_{q}\to\Sigma_{+}^{\infty}\pt\simeq\bb S,
\]
induced by
\[
g\colon B^{k}C_{q}\to\pt,
\]
is a $\T\left(n\right)$-equivalence. For $q\neq p$ this is clear.
For $q=p$ we apply \propref{Amenable_Contractible} to the map $g$.
For this, we need to check that
\[
|\Omega B^{k}C_{p}|=|B^{k-1}C_{p}|
\]
is invertible in $\pi_{0}\bb S_{T\left(n\right)}$, which follows
from \lemref{EM_Box_Tn_Inv}. Alternatively,
$B^kC_q$ is dualizable in $\Sp_{T(n)}$ by \thmref{Tn_Semiaddi} and \corref{Dim_Sym}, and $L_{K(n)}\colon \Sp_{T(n)}\to \Sp_{K(n)}$ is nil-conservative by \propref{Nilpotence_Support} and \exaref{Chromatic_Support}(2).
Thus, by \propref{Nil_Conservativity_Dualizable}, it suffices to check that the map $g$ is a $K(n)$-equivalence, which follows from the computation of $K(n)_*B^kC_q$ carried out in \cite{RavenelWilson}.
\end{proof}
\begin{rem}
The analogous result for $\K\left(n\right)$ instead of $\T\left(n\right)$
is a consequence of the \cite{RavenelWilson} computation of the $\K\left(n\right)$-homology
of Eilenberg-MacLane spaces. A weaker result for $\T\left(n\right)$,
namely that the conclusion holds if the homotopy fibers of $f$ are
$\pi$-finite and $k$-connected for $k\gg0$, can be deduced from
\cite[Theorem 3.1]{Bousfield82}.
\end{rem}
\begin{cor}
\label{cor:General_Contractibility}Let $n\ge0$ and let $f\colon A\to B$
be a map with $\pi$-finite $n$-connected homotopy fibers. For every
localization $L\colon\Sp_{\left(p\right)}\to\Sp_{\left(p\right)}$
such that $L\left(\X\left(n+1\right)\right)=0$, the induced map
\[
L\left(\Sigma_{+}^{\infty}f\right)\colon L\left(\Sigma_{+}^{\infty}A\right)\to L\left(\Sigma_{+}^{\infty}B\right)
\]
is an isomorphism.
\end{cor}
\begin{proof}
The condition $L\left(\X\left(n+1\right)\right)=0$ ensures that $L\colon\Sp_{\left(p\right)}\to\Sp_{L}$
factors through the finite chromatic localization $L_{n}^{f}\colon\Sp_{\left(p\right)}\to\Sp_{L_{n}^{f}}$,
which is also the localization with respect to the spectrum $T\left(0\right)\oplus\cdots\oplus T\left(n\right)$.
Hence, the claim follows from \thmref{Tn_Contractiblity}.
\end{proof}
\subsection{Higher Semiadditivity and Weak Rings}
\label{sec:semiadd_and_chrom}
In this section, we study higher semiadditivity for more general localizations of spectra, in particular localizations with respect to \emph{weak rings}, which are a very weak version of homotopy rings (see \defref{Weak_Ring}). We begin by studying the chromatic support of a localization and show that three different notions of ``bounded chromatic height'' for weak rings coincide. We then study those localizations of the $\infty$-category $\Sp$ with respect to weak rings which are $1$-semiadditive.
We show that $p$-locally, those are precisely the intermediate localizations between $\Sp_{K(n)}$ and $\Sp_{T(n)}$. We deduce that such localizations are always $\infty$-semiadditive and also derive a characterization of higher semiadditivity in terms of the Bousfield-Kuhn functor.
\subsubsection{General Localizations}
We begin with a discussion regarding general $\otimes$-localizations. The following result relates the $1$-semiadditivity of a $\otimes$-localization and the support of the corresponding localization functor. In a sense, a 1-semiadditive $\otimes$-localization of $\Sp_{\left(p\right)}$ is monochromatic of finite height.
\begin{prop}
\label{prop:Semiadd_Monochrom}Let $L\colon\Sp_{\left(p\right)}\to\Sp_{\left(p\right)}$
be a $\otimes$-localization functor. If $\Sp_{L}$ is 1-semiadditive,
then either $\supp(L)= \operatorname{{\small \normalfont{\text{\O}}}}$ or $\supp(L)=\{n\}$ for some $0 \le n < \infty$.
\end{prop}
\begin{proof}
We start by showing that $\infty \notin \supp(L)$.
Assuming the contrary, by \lemref{Kn_LSp}, we get that
\[
H\mathbb{F}_{p}=\K\left(\infty\right)\in\Sp_{\K\left(\infty\right)}\ss\Sp_{L}.
\]
This is a contradiction to \corref{Semiadd_Torsion_Nilpotent} as $H\mathbb{F}_{p}$
is an $\mathbb{E}_{\infty}$-ring.
It remains to show that $\supp(L)$ cannot contain two different natural numbers. We shall prove this by contradiction. Suppose that there are $0\le m<n<\infty$ such that $m,n\in \supp(L)$.
By \lemref{Kn_LSp} again, it follows that $\Sp_{\K\left(m\right)},\Sp_{\K\left(n\right)}\ss\Sp_{L}$.
In particular, we get $E_{m},E_{n}\in\Sp_{L}$. Consider the object
\[
E_{n}\widehat{\otimes}E_{m}=L\left(E_{n}\otimes E_{m}\right)\in\Sp_{L}.
\]
We begin by showing that $E_{n}\widehat{\otimes}E_{m}\neq0$. Indeed, since
$\Sp_{\K\left(m\right)}\ss\Sp_{L}$ we have
\[
L_{\K\left(m\right)}\left(E_{n}\widehat{\otimes}E_{m}\right)\simeq L_{\K\left(m\right)}L\left(E_{n}\otimes E_{m}\right)\simeq L_{\K\left(m\right)}\left(E_{n}\otimes E_{m}\right).
\]
The spectrum $L_{\K\left(m\right)}\left(E_{n}\otimes E_{m}\right)$
is non-zero by the K\"unneth isomorphism and the fact that
\[
\K\left(m\right)\otimes E_{m},\,\K\left(m\right)\otimes E_{n}\neq0.
\]
The object $E_{n}\widehat{\otimes}E_{m}$ is an
$\bb E_{\infty}$-ring in the $1$-semiadditive $\infty$-category
$\Sp_{L}$ and therefore we have a well defined element $a=|BC_{p}|\in\pi_{0}\left(E_{n}\widehat{\otimes}E_{m}\right)$.
By naturality, $a$ is the image of the elements
\[
a_{n}=|BC_{p}|\in\pi_{0}\left(E_{n}\right),\quad a_{m}=|BC_{p}|\in\pi_{0}\left(E_{m}\right)
\]
under the canonical maps of $\bb E_{\infty}$-rings $E_{n}\to E_{n}\widehat{\otimes}E_{m}$
and $E_{m}\to E_{n}\widehat{\otimes}E_{m}$ respectively. However, the computation in \lemref{EM_Box_Morava}
shows that $a_{n}=p^{n-1}$ and $a_{m}=p^{m-1}$, which are distinct powers of $p$ since $m<n$. Their equality in $\pi_{0}\left(E_{n}\widehat{\otimes}E_{m}\right)$ therefore contradicts the injectivity of the unit map $\bb{Z}\to \pi_0(E_n \widehat{\otimes} E_m)$, which follows from \propref{Delta_Injective}.
\end{proof}
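\begin{example}
For instance, for $n\ge1$, the finite chromatic localization $L_{n}^{f}$ is the localization with respect to $\T\left(0\right)\oplus\cdots\oplus\T\left(n\right)$, so that
\[
\supp(L_{n}^{f})=\{0\}\cup\cdots\cup\{n\}=\{0,\dots,n\}
\]
contains more than one element. Hence, by \propref{Semiadd_Monochrom}, the $\infty$-category $\Sp_{L_{n}^{f}}$ is not $1$-semiadditive.
\end{example}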
\begin{rem}
The definition of $a_{n}$ as $|BC_{p}|\in\pi_{0}\left(E_{n}\right)$
is unambiguous, since by \corref{Integral_Functor} for the symmetric monoidal
localization functor $\Sp_{L}\to\Sp_{\K\left(n\right)},$ it does
not matter whether we consider $E_{n}$ as an object of $\Sp_{L}$
or $\Sp_{\K\left(n\right)}$.
\end{rem}
\begin{rem}
We are not aware of any example of a non-zero $1$-semiadditive $\otimes$-localization $L$, for which $\supp(L) = \operatorname{{\small \normalfont{\text{\O}}}}$.
The techniques of this paper can be used to show that if no such examples exist (as we suspect), then \corref{Semiadd_Collapse} can be generalized to every $\otimes$-localization.
Namely, a $\otimes$-localization of $\Sp$ is $1$-semiadditive if and only if it is $\infty$-semiadditive.
\end{rem}
\subsubsection{Weak Rings}
There are several notions of being ``of height $\le n$'' for the Bousfield
class of a weak ring. The following theorem shows that they are all equivalent.
\begin{thm}
\label{thm:Height_Below_n} Let $R$ be a non-zero $p$-local weak
ring and let $0\le n <\infty$. The following are equivalent:
\begin{enumerate}
\item $\X\left(n+1\right)\otimes R=0$ for some finite spectrum $\X\left(n+1\right)$ of type $n+1$.
\item $\Sigma^{\infty}B^{n+1}C_{p}\otimes R=0$.
\item $\supp(R) \subseteq \{0,\dots,n\}$.
\end{enumerate}
\end{thm}
\begin{proof}
We prove the equivalence by showing first that the implications labeled by solid arrows in the diagram below hold for a general $\otimes$-localization $L$, where the condition
$R\otimes X=0$ is interpreted as $L\left(X\right)=0$. Then we turn to the remaining implication, labeled by the dashed arrow in the diagram.
\[
\xymatrix{
& & \\
\left(1\right)\ar@{=>}[r] & \left(2\right)\ar@{=>}[r] & \left(3\right) \ar@/_1.5pc/@{==>}[ll].}
\]
The implication
$\xymatrix{\left(1\right)\ar@{=>}[r] & \left(2\right)}$ follows from \corref{General_Contractibility}.
Given (2), assume by contradiction that $L\left(\K\left(m\right)\right)\neq0$
for some $n< m\le\infty$. On the one hand, by \lemref{Kn_LSp},
we have
\[
\Sigma^{\infty}B^{n+1}C_{p}\otimes\K\left(m\right)\in\Sp_{K\left(m\right)}\ss\Sp_{L}.
\]
On the other hand, $\Sigma^{\infty}B^{n+1}C_{p}\otimes\K\left(m\right)\neq0$ by the computation of \cite{RavenelWilson}, while by assumption $L\left(\Sigma^{\infty}B^{n+1}C_{p}\right)=0$, so that
\[
\Sigma^{\infty}B^{n+1}C_{p}\otimes\K\left(m\right)=L(\Sigma^{\infty}B^{n+1}C_{p}\otimes\K\left(m\right))=L(\Sigma^{\infty}B^{n+1}C_{p})\widehat{\otimes}L\left(\K\left(m\right)\right)=0,
\]
a contradiction.
It now suffices to show that for a localization with respect to a
non-zero $p$-local weak ring $R$, the implication
$\xymatrix{\left(3\right) \ar@{==>}[r] & \left(1\right)}$ holds as well.
By Example \ref{example: telescopic weak rings}, we may assume that $\X\left(n+1\right)$ is
a weak ring and hence by \lemref{Tensor_Weak_Rings}, the spectrum
$\X\left(n+1\right)\otimes R$ is a weak ring as well. Moreover, we have
\[\supp(\X\left(n+1\right)\otimes R) = \supp(\X\left(n+1\right)) \cap \supp(R)=\operatorname{{\small \normalfont{\text{\O}}}}\] and thus by \thmref{Nilpotence}, we get
$\X\left(n+1\right)\otimes R=0$.
\end{proof}
\begin{rem}
Condition $(2)$ in \thmref{Height_Below_n} has an alternative formulation. As in the proof of \thmref{Tn_Contractiblity}, if $E$ is a $p$-local spectrum, $\Sigma^\infty B^{n+1}C_p \otimes E = 0$ if and only if the following condition is satisfied:
\begin{enumerate}
\item[$\left(2'\right)$] For every map $f{\colon}A\to B$ of $\pi$-finite spaces,
that induces an isomorphism on the $n$-th Postnikov truncation, the map
\[ \Sigma^\infty_+f \otimes E \colon \Sigma^\infty_+A\otimes E \to \Sigma^\infty_+B\otimes E\]
is an isomorphism.
\end{enumerate}
\end{rem}
We now show that for localization with respect to a weak ring, being monochromatic is even more closely related to higher semiadditivity. For this we need the following general lemma.
\begin{lem}
\label{lem:Bousfield_Class_Expansion}Let $E\in\Sp_{\left(p\right)}$.
For every $n\ge0$, the spectrum $E$ is Bousfield equivalent to
\[
\left(\T\left(0\right)\otimes E\right)\oplus\left(\T\left(1\right)\otimes E\right)\oplus\cdots\oplus\left(\T\left(n\right)\otimes E\right)\oplus\left(\X\left(n+1\right)\otimes E\right).
\]
\end{lem}
Note that the Bousfield class of $X\otimes Y$ depends only on the
Bousfield classes of $X$ and $Y$. Hence, in the statement and proof
of the above lemma, we are free to choose the $\T\left(i\right)$-s
and $\X\left(n+1\right)$ as we please.
\begin{proof}
Using the Periodicity Theorem (\cite{nilp2}) we can construct a sequence of finite spectra $\X\left(n\right)$ of type $n$
with $v_{n}$-self maps
\[
v_{n}{\colon}\Sigma^{d_{n}}\X\left(n\right)\to\X\left(n\right),
\]
such that
\begin{enumerate}
\item $\X\left(0\right)=\bb S_{\left(p\right)}$;
\item $\X\left(n+1\right)$ is the cofiber of $v_{n}$.
\item $\T\left(n\right)=v_{n}^{-1}\X\left(n\right)$.
\end{enumerate}
The claim now follows from a repeated application of \cite[Lemma 1.34]{ravconj}.
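Concretely, writing $\langle X\rangle$ for the Bousfield class of $X$, the cited lemma states that $\langle X\rangle=\langle v^{-1}X\rangle\vee\langle \operatorname{cof}(v)\rangle$ for any self-map $v$ of $X$. Since $E$ is $p$-local, $\langle E\rangle=\langle \X\left(0\right)\otimes E\rangle$, and applying the lemma to the self-maps $v_{m}\otimes E$ for $m=0,\dots,n$ gives
\[
\langle E\rangle=\langle \T\left(0\right)\otimes E\rangle\vee\langle \X\left(1\right)\otimes E\rangle=\cdots=\langle \T\left(0\right)\otimes E\rangle\vee\cdots\vee\langle \T\left(n\right)\otimes E\rangle\vee\langle \X\left(n+1\right)\otimes E\rangle.
\]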
\end{proof}
\begin{thm}
\label{thm:Monochrom} Let $R$ be a non-zero $p$-local weak ring.
The following are equivalent:
\begin{enumerate}
\item There exists a (necessarily unique) integer $n\ge0$, such that
\(\Sp_{\K\left(n\right)}\ss\Sp_{R}\ss\Sp_{\T\left(n\right)}.\)
\item Either $\Sp_{R}=\Sp_{H\bb Q}$, or $\Omega^{\infty}{\colon}\Sp_{R}\to\mathcal{S}_{*}$
admits a retract.
\item $\Sp_{R}$ is $\infty$-semiadditive.
\item $\Sp_{R}$ is $1$-semiadditive.
\item $\supp(R) = \{n\}$ for some $0 \le n<\infty$.
\end{enumerate}
Moreover, the integer $n$ in $\left(1\right)$ and $\left(5\right)$
is the same one.\end{thm}
\begin{proof}
Consider the following slight variant of condition (5):
\begin{enumerate}
\item[(5)'] Either $\supp(R)=\operatorname{{\small \normalfont{\text{\O}}}}$ or $\supp(R)=\{n\}$ for some $0 \le n < \infty$.
\end{enumerate}
We shall prove the theorem by verifying all the implications in the following diagram:
\[
\vcenter{
\xymatrix@R=0.2pc@C=4pc{
& & & & & \\ \\
\ar@{==>}[d] & (2)\ar@{=>}@/^0.3pc/[rd] & & & \\
(1)\ar@{=>}@/^0.35pc/[ru]\ar@{=>}@/_0.35pc/[rd] & & (4)\ar@{=>}[r] & (5')\ar@{==>}[r] & (5)\ar@<-1pt>@{--}`u[uuullll]`[llllu][llllu]
\ar@<+1pt>@{--}`u[uuullll]`[llllu][llllu] \\ & (3)\ar@{=>}@/_0.3pc/[ru]
}
}
\]
In fact, we show that the implications labeled by solid arrows hold for a general $\otimes$-localization $L$, and those labeled by dashed arrows hold for $L_R$, where $R$ is a non-zero $p$-local weak ring.
We start by showing that $\xymatrix{\left(1\right)\ar@{=>}[r] & \left(2\right).}$ If $n=0$ then
\[\Sp_{\K\left(0\right)}=\Sp_{\T\left(0\right)}=\Sp_{H\mathbb{Q}}\] and we
are done. Otherwise, let $\Phi_{n}{\colon}\mathcal{S}_{*}\to \Sp_{T(n)}$
be the Bousfield-Kuhn functor (see \cite{kuhn1989morava,bousfield2001telescopic}). We get that $L\circ\Phi_{n}$ is a
retract of $\Omega^{\infty}{\colon}\Sp_{L}\to\mathcal{S}_{*}$. To show that $\xymatrix{\left(1\right)\ar@{=>}[r] & \left(3\right),}$
consider the symmetric monoidal colimit preserving functor $L{\colon}\Sp_{T(n)}\to \Sp_{L}$.
The claim now follows from Theorem \ref{thm:Tn_Semiaddi} and Corollary \ref{cor:Semi_Add_Mode}. The implication
$\xymatrix{\left(2\right)\ar@{=>}[r] & \left(4\right)}$ is proved in \cite[Theorem 2.6]{ClausenAkhil}. Finally, $\xymatrix{\left(3\right)\ar@{=>}[r] & \left(4\right)}$ is
trivial and $\xymatrix{\left(4\right)\ar@{=>}[r] & \left(5\right)}$ is \propref{Semiadd_Monochrom}.
It is left to show the implications $\xymatrix{\left(5\right)' \ar@{==>}[r] & \left(5\right)}$ and $\xymatrix{\left(5\right) \ar@{==>}[r] & \left(1\right),}$ where $L=L_R$ for a non-zero $p$-local weak ring $R$. The first implication follows from \thmref{Nilpotence}. For the second, let $0\le n < \infty$ be such that $\supp(R) = \{n\}$. By \lemref{Kn_LSp}, we have $\Sp_{\K\left(n\right)}\ss\Sp_{R}$.
It remains to show that $\Sp_{R} \subseteq \Sp_{\T\left(n\right)}$. By \lemref{Bousfield_Class_Expansion}, the spectrum
$R$ is Bousfield equivalent to
\[
\left(\T\left(0\right)\otimes R\right)\oplus\left(\T\left(1\right)\otimes R\right)\oplus\cdots\oplus\left(\T\left(n\right)\otimes R\right)\oplus\left(\X\left(n+1\right)\otimes R\right).
\]
From the assumption $\supp \left(R\right)=\{n\}$ and \thmref{Height_Below_n}, we get that $\X\left(n+1\right)\otimes R=0$.
By Example \ref{example: telescopic weak rings} and \lemref{Tensor_Weak_Rings}
we may assume that the spectra $\T\left(m\right)\otimes R$ for $m<n$
are weak rings. Now, for $m\ne n$ we have
\[\supp(\T\left(m\right)\otimes R) = \supp(\T\left(m\right))\cap \supp(R)=\{m\}\cap\{n\}=\operatorname{{\small \normalfont{\text{\O}}}}\]
and therefore $\T\left(m\right)\otimes R=0$ by \thmref{Nilpotence}. It follows that
\[
\Sp_{R}\simeq\Sp_{\T\left(n\right)\otimes R}\ss\Sp_{\T\left(n\right)}.
\]
\end{proof}
We conclude by showing that the equivalence of conditions $\left(3 \right)$ and $\left(4 \right)$ in \thmref{Monochrom} holds for general, not necessarily $p$-local, weak rings.
\begin{lem}
\label{lem:torsion_than_product}Let $E$ be a spectrum and let $\ell$
be an integer such that $E\oto{\times \ell} E$ is null. The canonical
functor $F{\colon}\Sp_{E}\to\prod_{p\mid\ell}\Sp_{E_{(p)}}$ is an equivalence,
where $E_{(p)}$ is the $p$-localization of $E$ at the prime $p$.
\end{lem}
\begin{proof}
The functor $F$ admits a right adjoint $G$ given on objects by
\[
(X_{p})_{\{p\mid \ell\}}\mapsto\bigoplus_{p\mid \ell} X_{p}.
\]
It suffices to show that $F$ is conservative and that the counit of the adjunction is an isomorphism. Since multiplication by $\ell$ is null on $E$, all homotopy groups of $E$ are $\ell$-torsion. It follows that the canonical map $E \to \bigoplus_{p\mid \ell} E_{(p)}$ is an isomorphism on homotopy groups and hence an isomorphism. Thus, $F$ is conservative. The components of the counit are given by
\[
L_{E_{(p)}}\left(\bigoplus_{q\mid \ell}X_{q}\right) \to X_{p}.
\]
Since $L_{E_{(p)}}$ is exact, it is enough to show that $L_{E_{(p)}}(X_q)=0$ for all $q \neq p$. Indeed, multiplication by $p$ acts invertibly on $X_q$ and nilpotently on $E_{(p)}$, hence $E_{(p)}\otimes X_q=0$.
\end{proof}
\begin{cor} \label{cor:Semiadd_Collapse}
Let $R\in \Sp$ be a (not necessarily $p$-local) weak ring. Then $\Sp_{R}$
is 1-semiadditive if and only if it is $\infty$-semiadditive.
\end{cor}
\begin{proof}
Since every $\infty$-semiadditive $\infty$-category is in particular 1-semiadditive, it suffices to assume that $\Sp_{R}$ is 1-semiadditive and deduce that it is $\infty$-semiadditive.
Denote by $R_{(p)}$ the $p$-local weak ring $R\otimes\mathbb{S}_{(p)}$.
By \corref{Semi_Add_Mode} applied to the localization functor $F_{p}\colon \Sp_{R}\to \Sp_{R_{(p)}}$, the $\infty$-category $\Sp_{R_{(p)}}$ is 1-semiadditive
and hence by \thmref{Monochrom} it is $\infty$-semiadditive. We divide into cases according to whether $R\otimes H\mathbb{Q}$ vanishes or not.
If $R\otimes H\mathbb{Q}=0$, then the unit $u_{R}{\colon}\mathbb{S}\to R$ has finite order $\ell$ in $\pi_{0}R$. Hence,
\[
\ell \cdot \Id_R =\ell \cdot \mu_R(u_R \otimes \Id_R)=
\mu_R((\ell \cdot u_R) \otimes \Id_R)=0.
\]
By \lemref{torsion_than_product} we have
\[\Sp_{R}\cong \prod_{p\mid \ell} \Sp_{R_{(p)}}\]
and by \corref{product_semiadditive} it is $\infty$-semiadditive. Now, consider the case where $H\mathbb{Q}\otimes R\ne0$.
For every prime $p$, since $\Sp_{R_{(p)}}$ is 1-semiadditive and \[K(0)\otimes R_{(p)}=H\mathbb{Q}\otimes R\ne0,\] we get from \thmref{Monochrom} that $\supp(R_{(p)})=\{0\}$.
By \thmref{Height_Below_n} applied to the Moore spectrum $M(p)=\X\left(1\right)$, we obtain
\[
R\otimes M(p) \simeq R_{(p)}\otimes M(p)=0.
\]
It follows that $R\in \Sp_{H\mathbb{Q}}=\Mod_{H\mathbb{Q}}$ and hence $\Sp_{R}=\Sp_{H\mathbb{Q}}$ is $\infty$-semiadditive.
\end{proof}
\section{Introduction}
We study a combinatorial object, which we call a
GRRS (generalized reflection root system), see~\Defn{defnGRRS}.
The classical root systems are finite GRRSs without isotropic roots. Our definition of GRRS is motivated by Serganova's definition
of GRS introduced in~\cite{VGRS}, Sect. 1, and by
the following examples:
the set of real roots $\Delta_{re}$ of a symmetrizable
Kac-Moody superalgebra introduced in~\cite{S2} and its subsets $\Delta_{re}(\lambda)$ (``integral real roots''),
see~\cite{GK}.
Each GRRS $R$ is, by definition, a subset of a finite-dimensional complex
vector space $V$ endowed with a symmetric bilinear form $(-,-)$.
The image of $R$ in $V/Ker (-,-)$ is denoted by $cl(R)$; it satisfies weaker
properties than GRRS and is called a WGRS.
An infinite GRRS is called {\em affine} if its image
$cl(R)$ is finite (in this case $cl(R)$ is a finite WGRS; the finite
WGRSs were classified in~\cite{VGRS}).
We show that an irreducible GRRS
containing an isotropic root is either finite or affine.
Recall a theorem of C.~Hoyt that a symmetrizable
Kac-Moody superalgebra with an isotropic simple root
and an indecomposable Cartan matrix (this corresponds to
the irreducibility of GRRS)
is finite-dimensional or affine, see~\cite{H}.
Finite GRRSs correspond to the root systems of finite-dimensional Kac-Moody superalgebras.
In this paper we describe all affine GRRSs $R$ and classify them
for most cases of $cl(R)$. Irreducible affine
GRRSs
with $\dim Ker (-,-)=1$ correspond to symmetrizable affine Lie superalgebras. This case
was treated in~\cite{Sh}; in particular,
it implies that an ``irreducible subsystem'' of the set of real roots of an affine Kac-Moody superalgebra is a set of real roots
of an affine or a finite-dimensional Kac-Moody superalgebra
(this was used in~\cite{GK}).
For each GRRS we introduce a certain subgroup of $Aut R$, which is denoted
by $GW(R)$; if $R$ does not contain isotropic roots, then $GW(R)$ is the usual Weyl group. Let $R$ be an irreducible
affine GRRS (i.e., $cl(R)$ is finite). We show
that if the action of
$GW(cl(R))$ on $cl(R)$ is transitive and $cl(R)\not=A_1$,
then $R$ is either the affinization of $cl(R)$
(see~\S~\ref{defaff} for definition) or, if $cl(R)$ is the root system
of $\mathfrak{psl}(n,n), n>2$, $R$ is
a certain ``bijective quotient'' of the affinization
of the root system of $\mathfrak{pgl}(n,n)$, see~\S~\ref{bijquo} for definition. The action of
$GW(cl(R))$ on $cl(R)$ is transitive if and only if
$cl(R)$ is the root system of a simply laced
Lie algebra or a Lie superalgebra $\fg\not=B(m,n)$, which is not a Lie algebra. If $R$ is such that $cl(R)=B(m,n), m,n\geq 1$ or
$cl(R)=B_n,C_n, n\geq 3$, then
$R$ is classified by non-empty subsets of the affine
space $\mathbb{F}_2^k$ up to affine automorphisms of $\mathbb{F}_2^k$, where $\dim Ker(-,-)=k$;
a similar classification holds for $cl(R)=A_1$.
In the cases $cl(R)=G_2, F_4$, the GRRSs $R$
are parametrized by $s=0,1,\ldots,\dim Ker(-,-)$.
In the remaining case either $cl(R)$ is a finite WGRS, which is not a GRRS, or $cl(R)=BC_n$; we partially classify
the corresponding GRRSs (we describe all possible $R$).
Another combinatorial object, an extended affine root supersystem (EARS),
was introduced and described in a recent paper of
M.~Yousofzadeh~\cite{You}. The main differences between
a GRRS and an EARS are the following: an EARS has a ``string property''
(for each $\alpha,\beta$ in an EARS with $(\alpha,\alpha)\not=0$
the intersection of $\beta+\mathbb{Z}\alpha$
with the EARS is a string $\{\beta-j\alpha|\ j\in \{-p,-p+1,\ldots,q\}\}$ for some $p,q\in\mathbb{Z}$ with
$p-q=2(\alpha,\beta)/(\alpha,\alpha)$), and a GRRS
should be invariant with respect to the ``reflections''
connected to its elements. The string property implies the
invariance with respect to the reflections connected to
non-isotropic roots ($\alpha$ such that $(\alpha,\alpha)\not=0$).
A finite GRRS corresponds to the root system
of a finite-dimensional Kac-Moody superalgebra, and the
finite EARSs include two additional series.
The root system of a symmetrizable affine
Lie superalgebra is an EARS and the set of real roots is a GRRS.
Moreover, the set of roots of a symmetrizable Kac-Moody superalgebra is an EARS only if this algebra is affine or finite-dimensional
(by contrast, the set of real roots is always a GRRS).
For example, the real roots of a Kac-Moody algebra with the Cartan matrix $\begin{pmatrix} 2 &-3\\-3 &2 \end{pmatrix}$ form a GRRS, which can not be embedded in an EARS. However, according to theorem of C.~Hoyt~\cite{H}, an indecomposable symmetrizable Kac-Moody superalgebra with an isotropic real root is affine, so
there are no examples of this nature if the GRRS contains an isotropic root. Even though the GRRSs coming
from Kac-Moody algebras do not exhaust all GRRSs, it follows from Prop. 3.2
in~\cite{You} that
an affine GRRS $R$ can always be embedded in an EARS, i.e.
there exists an EARS $R'$ such that
$R=\{\alpha\in R'|\ \exists\beta\in R'\ (\alpha,\beta)\not=0\}$.
This allows one to obtain a description of affine GRRSs from the description of EARS in~\cite{You}, \cite{Yos}
and, using~\Thm{thmdirsum}, to obtain a description of
the irreducible GRRSs containing isotropic roots.
In Section~\ref{sect1} we give all definitions, examples of GRRSs
and explain the connection between GRRS, GRS introduced in~\cite{VGRS} and root systems of Kac-Moody superalgebras.
In Section~\ref{sectnondeg} we prove that if $R$ is an irreducible GRRS with a non-degenerate symmetric bilinear form and $R$ contains an isotropic root, then $cl(R)$ is finite (and is classified in~\cite{VGRS}).
In Section~\ref{sect3} we prove some lemmas, which are used
later.
In Section~\ref{sect4} we obtain a classification of $R$
for the case when $cl(R)$ is finite and is generated by a basis of $cl(V)$.
In Section~\ref{Ann} we obtain a classification of $R$
for the case when $cl(R)$ is the roots system of
$\mathfrak{psl}(n+1,n+1), n>1$. This is the only situation when
the form $(-,-)$ is degenerate and
$R$ can be finite; this holds in the case $\mathfrak{gl}(n,n)$.
In Section~\ref{sect6} we obtain a classification of $R$
for the case when $cl(R)$ is a finite WGRS, which is not a GRS
($cl(R)=BC(m,n), C(m,n)$) and describe $R$ for the remaining case
$cl(R)=BC_n$. This completes the description of GRRSs $R$ with finite $cl(R)$.
In Section~\ref{sect7} we present the correspondence
between the irreducible affine GRRSs
with $\dim Ker (-,-)=1$ and the symmetrizable affine Lie superalgebras.
\section{Definitions and basic examples}\label{sect1}
In this section we introduce the notion GRRS (generalized reflection root systems) and consider several examples.
\subsection{Notation}
Throughout the paper $V$ will be a finite-dimensional complex vector space with a symmetric bilinear form $(-,-)$.
For $\alpha\in V$ with $(\alpha,\alpha)\not=0$
we introduce a notation
$$k_{\alpha,\beta}:=\frac{2(\alpha,\beta)}{(\alpha,\alpha)}$$
for each $\beta\in V$,
and we define the reflection
$r_{\alpha}\in \End V$ by the usual formula
$$r_{\alpha}(v):=v-k_{\alpha,v}\,\alpha.$$
Clearly, $r_{\alpha}$ preserves $(-,-)$. Note that
\begin{equation}\label{kkk}
k_{\alpha,r_{\gamma}\beta}=k_{\alpha,\beta}-k_{\alpha, \gamma}
k_{\gamma,\beta}
\end{equation}
if $(\alpha,\alpha), (\gamma,\gamma)\not=0$.
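Indeed, (\ref{kkk}) follows directly from the linearity of $(-,-)$ in the second argument: since $r_{\gamma}\beta=\beta-k_{\gamma,\beta}\gamma$, one has
$$k_{\alpha,r_{\gamma}\beta}=\frac{2(\alpha,\beta)-2k_{\gamma,\beta}(\alpha,\gamma)}{(\alpha,\alpha)}=k_{\alpha,\beta}-k_{\alpha,\gamma}k_{\gamma,\beta}.$$
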
We use the following notation: if $X$ is a subset of $V$, then
$X^{\perp}:=\{v\in V|\ \forall x\in X\ (x,v)=0\}$ and
$\mathbb{Z}X$ is the additive subgroup of $V$ generated by $X$
(similarly, $\mathbb{C}X$ is a subspace of $V$ generated by $X$).
\subsection{}\label{defGRRS}
\begin{defn}{defnGRRS}
Let $V$ be a finite-dimensional complex vector space with a symmetric bilinear form $(-,-)$.
A non-empty set $R\subset V$ is called a {\em generalized reflection root system (GRRS)} if the following axioms hold:

(GR0) $\Ker (-,-)\cap R=\emptyset$;

(GR1) the canonical map $\mathbb{Z}R\otimes_{\mathbb{Z}}\mathbb{C}\to V$ is a bijection;

(GR2) for each $\alpha\in R$ with $(\alpha,\alpha)\not=0$
one has $r_{\alpha}R=R$; moreover, $\beta-r_{\alpha}\beta\in\mathbb{Z}\alpha$
for each $\beta\in R$;

(GR3) for each $\alpha\in R$ with $(\alpha,\alpha)=0$ there exists
an invertible map $r_{\alpha}: R\to R$ such that
\begin{equation}\label{riso}
\begin{array}{l}
r_{\alpha}(\alpha)=-\alpha,\ \ r_{\alpha}(-\alpha)=\alpha,\\
r_{\alpha}(\beta)=\beta\ \text{ if }\beta\not=\pm\alpha,\ (\alpha,\beta)=0,\\
r_{\alpha}(\beta)\in\{\beta\pm\alpha\}\ \text{ if }(\alpha,\beta)\not=0.
\end{array}\end{equation}
\end{defn}
\subsubsection{}\label{nullity}
We sometimes write $R\subset V$ is a GRRS, meaning that
$R$ is a GRRS in $V$. If $R\subset V$ is a GRRS, we call $\alpha\in R$ a {\em root};
we call a root $\alpha$ {\em isotropic} if $(\alpha,\alpha)=0$.
\subsubsection{}
\begin{defn}{}
We call a GRRS $R\subset V$ {\em affine}
if $R$ is infinite and the image of $R$ in $V/Ker (-,-)$ is finite.
\end{defn}
\subsubsection{Remarks}\label{alphabeta}
Observe that $R=-R$ if $R$ is a GRRS.
By~\cite{VGRS}, Lem. 1.11, the axiom
(GR3) is equivalent to $R=-R$ and
the condition that for each $\alpha,\beta\in R$
with $(\alpha,\alpha)=0\not=(\alpha,\beta)$
the set $\{\beta\pm\alpha\}\cap R$
contains exactly one element.
In particular, if $R$ is a GRRS, then $r_{\beta}$ is an involution
and it is uniquely defined for any $\beta\in R$.
In~\Thm{thmdirsum} we will show that if $(-,-)$ is non-degenerate, then
(GR1) is equivalent to the condition that $R$ spans $V$.
\subsubsection{Weyl group and $GW(R)$}\label{Weyl}
For any $X\subset V$ denote by $W(X)$ the group
generated by $\{r_{\alpha}|\ \alpha\in X, (\alpha,\alpha)\not=0\}$. Clearly, $W(R)$ preserves the bilinear form $(-,-)$.
If $R\subset V$ is a GRRS, we call $W(R)$
{\em the Weyl group} of $R$. By (GR2)
$R$ is $W(R)$-invariant.
If $R$ is a GRRS, then to each $\alpha\in R$ we assigned an involution
$r_{\alpha}\in Aut(R)$; we denote by $GW(R)$ the subgroup
of $Aut(R)$ generated by these involutions.
\subsubsection{}
In~\cite{VGRS}, Sect. 7,
V.~Serganova considered another object,
where $r_{\alpha}$ were not assumed to be invertible, i.e.
(GR3) is substituted by

(WGR3) for each $\alpha\in R$ with $(\alpha,\alpha)=0$ there exists
a map $r_{\alpha}: R\to R$ satisfying~(\ref{riso}).
If $V$ is endowed with a non-degenerate form and $R\subset V$
satisfies (GR0)-(GR2) and (WGR3), we
call $R$ a {\em weak GRS (WGRS)}; the finite WGRS were
classified in~\cite{VGRS}, Sect. 7.
\subsubsection{}\label{alphabetaW}
Note that $R=-R$ if $R$ is a WGRS.
By~\cite{VGRS}, Lem. 1.11, the axiom
(WGR3) (resp., (GR3)) is equivalent to $R=-R$ and
for each isotropic $\alpha\in R$ the set $\{\beta\pm\alpha\}\cap R$
is non-empty (resp., contains exactly one element) if $\beta\in R$
is such that $(\beta,\alpha)\not=0$.
\subsection{Other definitions}
Classical root systems can be naturally viewed as examples of GRRS, see~\S~\ref{clRS}
below. The following definitions are motivated by this example.
\subsubsection{Subsystems}\label{subsystem}
For a GRRS $R\subset V$ we call $R'\subset R$
a {\em subsystem of $R$} if $R'$ is a GRRS in $\mathbb{C}R'$.
It turns out that $GW(R)$ does not preserve the subsystems:
$B_2$ can be naturally viewed as a subsystem of $B(2,1)$,
but $r_{\alpha}(B_2)$ is not a subsystem if $\alpha$ is isotropic.
If $R'\subset R$ does not contain isotropic roots, then $R'$ is a subsystem if and only if $R'$ is non-empty and $r_{\alpha}R'=R'$
for any $\alpha\in R'$ (note that if $\alpha$ is isotropic, then
$R':=\{\pm \alpha\}$ is not a GRRS, even though
$r_{\alpha}R'=R'$).
Note that for any non-empty $S\subset R$ the intersection $span S\cap R$
is a GRRS in $span S$ if and only if (GR0) holds
(for any $\alpha\in (span S\cap R)$ there exists $\beta\in S$
such that $(\alpha,\beta)\not=0$).
We say that a non-empty set $X\subset R$ {\em generates a subsystem $R'\subset R$}
if $R'$ is a unique minimal (by inclusion) subsystem
containing $X$ (i.e.,
for any subsystem $R''\subset R$ with $X\subset R''$
one has $R'\subset R''$). In particular,
$R$ is generated by $X$
if $R$ is a minimal GRRS containing $X$.
\subsubsection{}\label{reducible}
We call a GRRS $R$ {\em reducible} if $R=R_1\cup R_2$,
where $R_1, R_2$ are non-empty
and $(R_1,R_2)=0$. Note that in this case $R=R_1\coprod R_2$ and
$R_1, R_2$ are subsystems of $R$.
We call a GRRS $R$ {\em irreducible} if
$R$ is not reducible.
If the bilinear form $(-,-)$ is non-degenerate on $V$,
then any GRRS is of the form $\coprod_{i=1}^k R_i\subset \oplus_{i=1}^k V_i$,
where $R_i\subset V_i$ is an irreducible GRRS.
\subsubsection{Isomorphisms}
\label{iso}
We say that two GRRSs $R\subset V,\ R'\subset V'$ are isomorphic
if there exists a linear homothety $\iota: V\to V'$ such that
$\iota(R)=R'$ (by a ``homothety'' we mean that $\iota$ is a
linear isomorphism and there exists $x\in\mathbb{C}^*$
such that $(\iota(v), \iota(w))=x(v,w)$ for all $v,w\in V$).
\subsubsection{Reduced GRRS}
From (GR2), (GR3) one has $R=-R$.
A GRRS $R$ is called {\em reduced} if $\alpha,\lambda\alpha\in R$ for some $\lambda\in\mathbb{C}$ forces $\lambda=\pm 1$.
(It is easy to see that this always holds if $\alpha$ is isotropic; if $\alpha$ is non-isotropic, then (GR2)
gives $\lambda\in\{\pm 1,\pm \frac{1}{2},\pm 2\}$).
\subsection{Examples}
Let us consider several examples of GRRSs.
\subsubsection{Classical root systems}\label{clRS}
Recall that a classical root system is a finite subset $R$
in a Euclidean space $V$ with the properties:
$0\not\in R$, $r_{\alpha}R=R$ for each $\alpha\in R$
and $r_{\alpha}\beta-\beta\in \mathbb{Z}\alpha$ for each
$\alpha,\beta\in R$. We see that $R$ is a finite GRRS in the complexification of $V$. Using~\cite{Ser}, Ch. V, it is easy to show that all finite GRRSs without
isotropic roots are of this form: $R\subset V$ is
a finite GRRS without
isotropic roots if and only if $R\subset\mathbb{R}R$ is a classical root system.
The classical root systems were classified by W.~Killing and E.~Cartan: the reduced irreducible classical root systems
are the series $A_n, n\geq 1$,
$B_n, n\geq 2$, $C_n, n\geq 3$, $D_n, n\geq 4$ and the exceptional root systems
$E_6, E_7, E_8, F_4, G_2$ (the lower index always stands for the dimension of $V$);
sometimes we use the notations $C_1:=A_1,C_2:=B_2$ and $D_3:=A_3$.
The irreducible non-reduced root systems of finite type are of the form $BC_n=B_n\cup C_n, n\geq 1$.
The reduced irreducible classical root systems
are the root systems of finite-dimensional
simple complex Lie algebras.
\subsubsection{Example: GRSs introduced by V.~Serganova}
\label{GRS}
A GRS, introduced by V.~Serganova in~\cite{VGRS}, Sect. 1, is
a finite GRRS $R\subset V$ with a non-degenerate form
$(-,-)$. These systems were classified by V.~Serganova;
let us recall the results of this classification.
A complex simple finite-dimensional Lie superalgebra
$\fg=\fg_{\ol{0}}\oplus \fg_{\ol{1}}$ is called {\em basic classical} if
$\fg_{\ol{0}}$ is reductive and $\fg$
admits a non-degenerate invariant symmetric bilinear form $B$ with $B(\fg_{\ol{0}},\fg_{\ol{1}})=0$.
This bilinear form induces a non-degenerate
symmetric bilinear form on $\fh^*$, where $\fh$ is a Cartan subalgebra of $\fg$.
The set of roots of $\fg$ is a GRS in $\fh^*$ if $\fg\not=\mathfrak{psl}(2,2)$.
Conversely, any GRS
is the root system of a basic classical Lie superalgebra
different from $\psl(2,2)$ (in particular, the non-reduced classical root system
$BC_n$ is the root system of a basic classical
Lie superalgebra $B(0,n)=\mathfrak{osp}(1,2n)$).
The finite WGRSs were classified in~\cite{VGRS}, Sect. 7.
They consist of GRSs and two additional series $BC(m,n), C(m,n)$,
which can be described as follows. Let
$V$ be a complex vector space endowed with a symmetric bilinear form and an orthogonal basis
$\{\vareps_i\}_{i=1}^m\cup\{\delta_j\}_{j=1}^n$
such that $||\vareps_i||^2=-||\delta_j||^2=1$ for
$i=1,\ldots,m, j=1,\ldots,n$.
One has
$$\begin{array}{l}
C(m,n)=\{\pm\vareps_i\pm\vareps_j;\pm 2\vareps_i \}_{1\leq i<j\leq m}
\cup\{\pm \delta_i\pm\delta_j; \pm 2\delta_i\}_{1\leq i<j\leq n}\cup\{\pm\vareps_i\pm\delta_j \}_{1\leq i\leq m}^{1\leq j\leq n},\\
BC(m,n)=C(m,n)\cup
\{\pm\vareps_i;\pm\delta_j \}_{1\leq i\leq m}^{1\leq j\leq n}.\end{array}$$
In particular, $C(1,1)$ is the root system of $\psl(2,2)$.
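Note that $C(1,1)$ satisfies (WGR3) but not (GR3): for the isotropic root $\alpha=\vareps_1+\delta_1$ and $\beta=\vareps_1-\delta_1$ one has $(\alpha,\beta)=2\not=0$, while both
$$\beta+\alpha=2\vareps_1\ \ \text{ and }\ \ \beta-\alpha=-2\delta_1$$
belong to $C(1,1)$, so the set $\{\beta\pm\alpha\}\cap C(1,1)$ contains two elements (cf.~\S~\ref{alphabeta}).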
\subsubsection{Real roots of symmetrizable Kac-Moody algebras}
\label{exasymm}
Let $C$ be a symmetric $n\times n$ matrix with non-zero diagonal entries satisfying the condition $2c_{ij}/c_{ii}\in \mathbb{Z}$
for each $i,j$. Let $\Pi:=\{\alpha_1,\ldots,\alpha_n\}$
be a basis of a complex vector space $V$ and let $(-,-)$ be
the symmetric bilinear form on $V$ given by $(\alpha_i,\alpha_j)=c_{ij}$.
Let $W$ be the subgroup of $GL(V)$ generated by $r_{\alpha_i}$ for
$i=1,\ldots,n$. Then $R(C):=W\Pi$ is a reduced GRRS without isotropic roots. If $C$ is such that $2c_{ij}/c_{ii}<0$ for each $i\not=j$,
then $C$ is a symmetric Cartan matrix and $R(C)$
is the set of real roots
of a symmetrizable Kac-Moody algebra
$\fg(C)$. Using the classification
of Cartan matrices in~\cite{K2} Thm. 4.3,
one readily sees that for a symmetric Cartan matrix $C$,
$R(C)$ is affine and irreducible if and only if $\fg(C)$ is an affine Kac-Moody algebra.
Recall that
a basic classical Lie superalgebra $\fg\not=\mathfrak{psl}(n,n)$
is a symmetrizable Kac-Moody superalgebra and that
a finite-dimensional Kac-Moody superalgebra $\fg\not=\mathfrak{gl}(n,n)$ is a basic classical Lie superalgebra. The root system of a finite-dimensional Kac-Moody superalgebra is a GRRS.
The set of real roots of a symmetrizable affine Kac-Moody superalgebra $\fg$
is an affine GRRS (with $\dim Ker (-,-)$ equals $1$
if $\fg\not=\fgl(n,n)^{(1)}$ and equals $2$ for
$\fgl(n,n)^{(1)}$); these algebras were classified by van de Leur in~\cite{vdL}.
Let $\fh$ be a Cartan subalgebra of a symmetrizable Kac-Moody algebra $\fg(C)$. Then $V$ is a subspace of $\fh^*$ spanned by $\Pi$. Take $\lambda\in\fh^*$ and define
$$\Delta_{re}(\lambda):=\{\alpha\in \Delta_{re}|\
\frac{2(\lambda,\alpha)}{(\alpha,\alpha)}\in\mathbb{Z}\}.$$
Then $\Delta_{re}(\lambda)$ is a subsystem of $\Delta_{re}$.
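Indeed, since $r_{\alpha}$ preserves $(-,-)$ and $r_{\alpha}\lambda=\lambda-k_{\alpha,\lambda}\alpha$, for any $\alpha,\beta\in\Delta_{re}(\lambda)$ one has
$$\frac{2(\lambda,r_{\alpha}\beta)}{(r_{\alpha}\beta,r_{\alpha}\beta)}=\frac{2(r_{\alpha}\lambda,\beta)}{(\beta,\beta)}=k_{\beta,\lambda}-k_{\alpha,\lambda}k_{\beta,\alpha}\in\mathbb{Z},$$
i.e. $r_{\alpha}\beta\in\Delta_{re}(\lambda)$.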
The above construction gives a reduced GRRS. Examples of non-reduced GRRSs without isotropic roots
can be obtained by the following procedure.
Fix $J\subset \{1,\ldots,n\}$ such that
$c_{ji}/c_{jj}\in\mathbb{Z}$ for each $j\in J, i\in \{1,\ldots,n\}$
and introduce
$$R(C)_J:=(\cup_{j\in J} W 2\alpha_j)\cup R(C).$$
It is easy to check that
$R(C)_J$ is a GRRS (which is not reduced for $J\not=\emptyset$).
If $2c_{ij}/c_{ii}<0$ for each $i\not=j$, then $R(C)_J$
is the set of real roots of a symmetrizable Kac-Moody superalgebra $\fg(C, J)$; as before $R(C)_J$ is affine and irreducible
if and only if $\fg(C, J)$ is affine. By~\cite{H},
an indecomposable symmetrizable Kac-Moody superalgebra with an isotropic real root is finite-dimensional or affine. In~\Cor{corimR} we show that
an irreducible GRRS which contains an isotropic root is either finite or affine.
\subsection{Quotients}\label{bijquo}
Let $R\subset V$ be a GRRS and
$V'$ be a subspace of $\Ker (-,-)$.
One readily sees that the image of $R$
in $V/V'$ satisfies the axioms (GR0), (GR2) and (WGR3).
We call this image a {\em quotient} of $R$ and
a {\em bijective quotient} if the restriction
of the canonical map $V\to V/V'$ to $R$ is injective.
The minimal quotient of $R$, denoted by $cl(R)$, is the image of $R$ in $V/Ker (-,-)$; by~\Cor{corimR} (i), $cl(R)$
is a WGRS.
\subsubsection{}\label{lem3}
Let $R\subset V$ be a GRRS, $V'$ be a subspace of $\Ker (-,-)$,
and $\iota: V\to V/V'$ be the canonical map.
Assume that the quotient $\iota(R)$ is a GRRS.
We claim that for any subsystem $R'\subset \iota(R)$,
the preimage of $R'$ in $R$, i.e. $\iota^{-1}(R')\cap R$,
is again a GRRS (and a subsystem of
$R$). The claim
follows from the formula
$\iota(r_{\alpha}\beta)=r_{\iota(\alpha)} \iota(\beta)$
for each $\alpha,\beta\in R$
(note that $r_{\iota(\alpha)}$ is well-defined, since $\iota(R)$ is a GRRS).
\subsection{Direct sums}
Let $R_1\subset V_1$, $R_2\subset V_2$ be GRRSs.
Then $(R_1\cup R_2)\subset (V_1\oplus V_2)$ is again a GRRS.
Let $R=\cup_{i=1}^k R_i\subset V$, where $(R_i,R_j)=0$ for $i\not=j$,
and let $V_i$ be the span of $R_i$. Clearly, $R_i$ is a GRRS in $V_i$.
Since the natural map $\oplus_{i=1}^k V_i \to V$
preserves the form $(-,-)$, $R$ is a bijective quotient
of $\cup_{i=1}^k R_i\subset \oplus_{i=1}^k V_i$.
We conclude that any GRRS is a bijective quotient of
$\cup_{i=1}^k R_i\subset \oplus_{i=1}^k V_i$, where
$R_i\subset V_i$ are irreducible GRRSs. In particular,
if the form $(-,-)$ on $V$ is non-degenerate, then
$V=\oplus_{i=1}^k V_i$.
\subsection{Affinizations}\label{defaff}
Let $V$ be as above and $X\subset V$ be any subset.
Take $V^{(1)}=V\oplus\mathbb{C}\delta$ with the bilinear form $(-,-)'$
such that $\delta\in\Ker (-,-)'$ and the restriction
of $(-,-)'$ to $V$ coincides with the original form $(-,-)$ on $V$.
Set $X^{(1)}:=X+\mathbb{Z}\delta=\{\alpha+s\delta|\ \alpha\in X, s\in \mathbb{Z}\}$.
One readily sees that if $R$ is a GRRS, then $R^{(1)}\subset V^{(1)}$ is a GRRS
and $R$ is a quotient of $R^{(1)}$
in $V^{(1)}/\mathbb{C}\delta=V$.
We call $R^{(1)}\subset V^{(1)}$ the {\em affinization} of $R\subset V$
and use the notation $R^{(n)}\subset V^{(n)}$, where
$R^{(n+1)}:=(R^{(n)})^{(1)}$,
$V^{(n+1)}:=(V^{(n)})^{(1)}$.
If $R$ is a finite GRRS, then $R^{(n)}$ is an affine GRRS for any $n\geq1$.
Note that the affinizations of non-isomorphic GRRS can be isomorphic,
see~\Prop{propAnnx} (iii).
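For example, for $R=A_1=\{\pm\alpha\}$ the affinization is $R^{(1)}=\{\pm\alpha+s\delta|\ s\in\mathbb{Z}\}$, which is the set of real roots of the affine Kac-Moody algebra $A_1^{(1)}$, cf.~\S~\ref{exasymm}.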
\subsection{Generators of a GRRS}
Let $R\subset V$ be a GRRS. Recall that a non-empty subset $X\subset R$ generates a subsystem
$R'$ if $R'$ is a unique minimal (by inclusion) subsystem of $R$ containing $X$.
If $R$ has no isotropic roots, then any non-empty $X\subset R$ generates a unique subsystem, namely, $W(X)X$.
The following lemma gives a sufficient condition when $X$ generates a subsystem.
\subsubsection{}
\begin{lem}{lemGRS1}
Let $R\subset V$ be a GRRS.
(i) If $R'\subset R$ satisfies (GR2), (GR3), then
$R'':=R'\setminus (R')^{\perp}$ is either empty or is a GRRS.

(ii) If a non-empty $X\subset R$ is such that $X\cap X^{\perp}=\emptyset$, then
$X$ generates a subsystem $R'$ of $R$.
\end{lem}
\begin{proof}
(i) Let $R''$ be non-empty and let $V''$ be the span of $R''$.
Let us verify that $R''\subset V''$ is a GRRS. Clearly, (GR1) holds.
If $x\in R''$, then $(x,y)\not=0$ for some $y\in R'$, so
$y\in R''$; thus $x$ is not in the kernel of the restriction
of $(-,-)$ to $V''$, so (GR0) holds. It remains to verify that for each
$\alpha,\beta\in R''$ one has $r_{\alpha}\beta\in R''$. Indeed,
since (GR2), (GR3) hold for $R'$, $r_{\alpha}\beta\in R'$.
If $(\alpha,\beta)=0$, then $r_{\alpha}\beta=\beta\in R''$;
otherwise $(r_{\alpha}\beta,\alpha)\not=0$
(for $(\alpha,\alpha)\not=0$ one has $(r_{\alpha}\beta,\alpha)=-(\beta,\alpha)$
and for $(\alpha,\alpha)=0$ one has $(r_{\alpha}\beta,\alpha)=(\beta,\alpha)$).
Hence $r_{\alpha}\beta\in R''$ as required.
(ii) By~\S~\ref{alphabeta}, for any $\alpha\in R$ the map $r_{\alpha}:R\to R$
satisfying (GR2), (GR3) respectively is uniquely defined.
Take
$$X_0:=X,\ \ X_{i+1}:=\{\pm r_{\alpha}\beta|\ \alpha,\beta\in X_i\},\ \
R':=\bigcup_{i=0}^{\infty} X_i.$$
Clearly, $R'$ satisfies (GR2), (GR3) and lies in any subsystem containing $X$.
Let us show that $R'$ is a GRRS. By (i), it is enough to
verify that $R'\cap (R')^{\perp}=\emptyset$.
Suppose that $v\in R'\cap (R')^{\perp}$; let $i$ be minimal such that
$v\in X_i$. Since $X\cap X^{\perp}=\emptyset$ we have $i\not=0$, so
$v=r_{\alpha}\beta$ for some $\alpha,\beta\in X_{i-1}$ with $(\alpha,\beta)\not=0$.
One readily sees that $(v,\alpha)=\pm (\alpha,\beta)\not=0$, so $(v,R')\not=0$, a contradiction.
\end{proof}
\subsubsection{}\label{DeltaPi}
Let $\fg$ be a basic classical Lie superalgebra, $\Delta\subset\fh^*$ be its roots system
and $\Pi\subset\Delta$ be a set of simple roots.
If $\fg\not=\mathfrak{psl}(n,n)$, $\Pi$ consists
of linearly independent vectors.
If $\fg\not=\mathfrak{osp}(1,2n)$ (i.e., $\Delta\not=BC_n=B(0,n)$),
then $\Delta$ is generated by $\Pi$.
We conclude that for $\fg\not=\mathfrak{psl}(n,n),
\mathfrak{osp}(1,2n)$, the root system $\Delta\subset V$ is generated
by a basis of $V$.
\section{The case when $(-,-)$ is non-degenerate}\label{sectnondeg}
In this section $V\not=0$ is a finite-dimensional complex vector space
and $R\subset V$ satisfies (GR0), (GR2), (WGR3).
As before we say that $R\subset V$ is irreducible
if $R\not= R_1\coprod R_2$,
where $R_1,R_2$ are non-empty sets satisfying (GR2), (WGR3)
and $(R_1,R_2)=0$.
We will prove the following theorem.
\subsection{}
\begin{thm}{thmdirsum}
Assume that the form $(-,-)$ is non-degenerate and $R\subset V$ satisfies (GR0), (GR2), (WGR3)
and
(GR1'): $R$ spans $V$.
Then
(i) If $R$ is irreducible and contains an isotropic root, then
$R$ is finite (such $R$s are classified in~\cite{VGRS});
(ii) $R$ is a WGRS.
\end{thm}
\subsubsection{}
\begin{cor}{corimR}
(i) If $R$ is a GRRS, then the image of $R$ in $V/Ker (-,-)$
is a WGRS.
(ii) If $R$ is an irreducible GRRS which contains an isotropic root, then $R$ is either finite or affine.
\end{cor}
\subsubsection{}
\begin{rem}{}
By~\S~\ref{exasymm}, any symmetric $n\times n$ matrix $C$ with non-zero diagonal entries and
$2c_{ij}/c_{ii}\in\mathbb{Z}$ for each $i\not=j$, gives a GRRS.
Clearly, $(-,-)$ is non-degenerate if and only if $\det C\not=0$.
In this way we obtain a lot of examples of infinite GRRSs with
non-degenerate $(-,-)$ (but they do not contain isotropic roots!).
\end{rem}
\subsection{Proof of~\Thm{thmdirsum}}\label{prthmdir}
We will use the following lemmas.
\subsubsection{}
\begin{lem}{lem2}
For any $\beta\in R$
there exists $\alpha\in R$ such that
$r_{\alpha}\beta$ is non-isotropic and
$(\beta,r_{\alpha}\beta)\not=0$.
\end{lem}
\begin{proof}
If $\beta$ is non-isotropic we take $\alpha:=\beta$.
Let $\beta$ be isotropic. Notice that $(\beta,r_{\alpha}\beta)=0$ implies $r_{\alpha}\beta=\pm \beta$, so it is enough
to show that $r_{\alpha}\beta$ is non-isotropic for some $\alpha\in R$. By (GR0) $\beta\not\in\Ker(-,-)$, so there exists $\gamma\in R$ such that $(\gamma,\beta)\not=0$, which implies
$(\beta,r_{\gamma}\beta)\not=0$. As a consequence,
one of the roots
$r_{\gamma}\beta$ or $r_{r_{\gamma}\beta}\beta$ is non-isotropic.
\end{proof}
\subsubsection{}
\begin{lem}{isoperp}
Let $R$ be irreducible and contain an isotropic root.
For each $\alpha\in R$ there exists an isotropic root $\beta\in R$ with
$(\alpha,\beta)\not=0$.
\end{lem}
\begin{proof}
Let $R_{iso}\subset R$ be the set of isotropic roots.
Let $R_2\subset R$ be the set of non-isotropic roots
in $R\cap R_{iso}^{\perp}$ and $R_1:=R\setminus R_2$.
One readily sees that $R_2$ is a subsystem.
Let us verify that $R_1$ is also a subsystem.
Indeed, let $\alpha,\beta\in R_1$
be such that
$(\alpha,\beta)\not=0$. One has
$(r_{\beta}\alpha,\beta)\not=0$, so
$r_{\beta}\alpha\in R_1$ if $\beta$ is isotropic.
If $\beta$ is non-isotropic, then, taking $\gamma\in R_{iso}$ such that
$(\alpha,\gamma)\not=0$, we get
$(r_{\beta}\alpha,r_{\beta}\gamma)=(\alpha,\gamma)\not=0$
and $r_{\beta}\gamma\in R_{iso}$, that is
$r_{\beta}\alpha\in R_1$ as required.
Thus $\alpha,\beta\in R_1$ with
$(\alpha,\beta)\not=0$ forces $r_{\beta}\alpha\in R_1$.
Hence $R_1$ is a subsystem.
Suppose that there exist
$\alpha\in R_1$, $\beta\in R_2$ with $(\alpha,\beta)\not=0$.
By the construction of $R_2$, both $\alpha,\beta$ are non-isotropic.
Since $(\alpha,\beta)\not=0$ one has $r_{\alpha}\beta=\beta+x\alpha$
for some $x\not=0$. Taking $\gamma\in R_{iso}$ such that
$(\alpha,\gamma)\not=0$, we get
$(r_{\alpha}\beta,\gamma)\not=0$ (since $(\beta,\gamma)=0$),
so $r_{\alpha}\beta\in R_1$. Since
$\alpha$ is non-isotropic and $R_1$ is a subsystem, one has $\beta=r_{\alpha}(r_{\alpha}\beta)\in R_1$, a contradiction.
We conclude that $R=R_1\coprod R_2$ with $(R_1,R_2)=0$.
Since $R$ is irreducible, $R_2$ is empty. This implies the assertion of the lemma
for non-isotropic root $\alpha$.
In the remaining case $\alpha\in R$ is isotropic.
Since $\alpha\not\in Ker (-,-)$, there exists $\gamma\in R$ such that
$(\gamma,\alpha)\not=0$. If $\gamma$ is isotropic, take $\beta:=\gamma$;
if $\gamma$ is non-isotropic, take $\beta:=r_{\gamma}\alpha$.
The assertion follows.
\end{proof}
\subsubsection{}
\begin{cor}{cor1234}
Let $R$ be irreducible and contain an isotropic root. If $\alpha\in R$
is non-isotropic, then for each $\gamma\in R$ one has
$\ \ k_{\alpha,\gamma}\in \{0,\pm 1,\pm 2,\pm
3,\pm 4\}\ \ $
and $\ \ k_{\alpha,\gamma}\in \{0,\pm 1,\pm 2\}\ \ $
if $\gamma$ is isotropic.
\end{cor}
\begin{proof}
We may assume that $(\alpha,\gamma)\not=0$.
If $\gamma$ is isotropic and $\alpha+\gamma\in R$, then
$$
||\alpha+\gamma||^2=(\alpha,\alpha)(1+k_{\alpha,\gamma}),\ \
\frac{2(\alpha+\gamma,\gamma)}{||\alpha+\gamma||^2}=
\frac{k_{\alpha,\gamma}}{1+k_{\alpha,\gamma}},$$
so (GR2) gives $k_{\alpha,\gamma}\in\{-1,-2\}$.
If $\gamma$ is isotropic and $\alpha-\gamma\in R$, then, similarly,
$k_{\alpha,\gamma}\in\{1,2\}$. Since $r_{\gamma}\alpha\in\{\alpha\pm\gamma\}$ lies in $R$,
one of these two cases occurs.
Let $\gamma$ be non-isotropic. By~\Lem{isoperp},
there exists an isotropic $\beta\in R$ such that $(\beta,\gamma)\not=0$.
Since $\beta$ and $r_{\gamma}\beta$ are isotropic,
one has $k_{\alpha,\beta},k_{\gamma,\beta},k_{\alpha,r_{\gamma}\beta}
\in \{0,\pm 1,\pm 2\}$
and $k_{\gamma,\beta}\not=0$. Combining~(\ref{kkk})
and $k_{\alpha,\gamma}\in\mathbb{Z}$, we obtain the required formula.
\end{proof}
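For completeness, the integrality step behind $k_{\alpha,\gamma}\in\{-1,-2\}$ can be spelled out (a routine verification added here for the reader; it uses only (GR2)):

```latex
% Set k := k_{\alpha,\gamma} and assume \alpha+\gamma \in R.
% If \alpha+\gamma is isotropic, then ||\alpha+\gamma||^2=(\alpha,\alpha)(1+k)=0, so k=-1.
% Otherwise (GR2) requires k/(1+k) \in \mathbb{Z}:
\frac{k}{1+k}=1-\frac{1}{1+k}\in\mathbb{Z}
\iff (1+k)\mid 1 \iff k\in\{0,-2\},
% and k \neq 0 since (\alpha,\gamma) \neq 0; hence k \in \{-1,-2\}.
```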
\subsubsection{Proof of finiteness}
Let $R\subset V$ satisfy the assumptions of~\Thm{thmdirsum}.
Let us show that $R$ is finite.
By (GR1') $R$ contains a basis $B$ of $V$.
Since $(-,-)$ is non-degenerate, each $v\in V$ is determined
by the values $(v,b)$, $b\in B$. Thus in order to show that
$R$ is finite, it is enough to verify that
the set $\{(\alpha,\beta)|\ \alpha,\beta\in R\}$
is finite.
If $\alpha,\beta\in R$ are isotropic and $(\alpha,\beta)\not=0$,
then $r_{\alpha}\beta$ is non-isotropic and
$(r_{\alpha}\beta,\alpha)=(\beta,\alpha)$. Thus
$$\{(\alpha,\beta)|\ \alpha,\beta\in R\}=\{0\}\cup S, \text{ where } S:=\{(\alpha,\beta)|\ \alpha,\beta\in R, \ (\alpha,\alpha)\not=0\}.$$
Using~\Cor{cor1234} we conclude that the finiteness of $S$ is equivalent
to the finiteness of $N:=\{(\alpha,\alpha)|\ \alpha\in R\}$.
Let $X\subset R$ be a maximal linearly independent set of non-isotropic roots and let $\alpha$ be a non-isotropic root.
Then $\alpha$ lies in the span of $X$, so $(\alpha,\alpha)\not=0$
implies $(\alpha,\beta)\not=0$ for some $\beta\in X$.
One has $(\alpha,\alpha)/(\beta,\beta)=k_{\beta,\alpha}/
k_{\alpha,\beta}$. From~\Cor{cor1234} we get
$$N\subset \{0\}\cup\{(a/b)(\beta,\beta)|\ \beta\in X,\ a, b \in \{\pm 1 ,\pm 2,\pm 3,\pm 4\}\},$$
so $N$ is finite as required.\qed
\subsubsection{Proof of (GR1)}
It remains to verify that $R$ satisfies (GR1).
Since the form $(-,-)$ is non-degenerate, $(R,V)$ is a direct sum of its irreducible components: $V=\oplus_{i=1}^k V_i$, where $(V_i,V_j)=0$ for $i\not=j$, and $R=\coprod_{i=1}^k R_i$, where
$R_i$ spans $V_i$, $R_i$ is irreducible and satisfies (GR0), (GR2), (WGR3) for each $i=1,\ldots,k$.
Thus without loss of generality we can (and will) assume that
$R$ is irreducible. Let us show that
\begin{equation}\label{Nn}
(-,-) \text{ can be normalized in such a way that }
(\alpha,\beta)\in \mathbb{Q} \text{ for all }\alpha,\beta\in R.
\end{equation}
implies (GR1).
Indeed, let
$B=\{\beta_1,\ldots,\beta_n\}\subset R$ be a basis of $V$
and let $\alpha_1,\ldots,\alpha_k\in R$ be linearly dependent.
For each $i$ write $\alpha_i=\sum_{j=1}^n y_{ij} \beta_j$.
Since $(-,-)$ is non-degenerate and
$(\alpha,\beta)\in \mathbb{Q}$ for each $\alpha,\beta\in R$, we
have $y_{ij}\in \mathbb{Q}$ for each $i,j$. Set $Y:=(y_{ij})$. Since
$\alpha_1,\ldots,\alpha_k$ are linearly dependent, the rows of $Y$
are linearly dependent, so the rank of $Y$ is less than $k$.
By the above, the entries of $Y$ are rational, so
there exists a non-zero rational vector
$X=(x_i)_{i=1}^k$ such that $X^tY=0$. Then
$\sum_{i=1}^k x_i\alpha_i=0$, so $\alpha_1,\ldots,\alpha_k\in R$
are linearly dependent over $\mathbb{Q}$.
Thus the natural map
$\mathbb{Z}R\otimes_{\mathbb{Z}} \mathbb{C}\to V$
is injective. By (GR1') it is also surjective, so (GR1) holds.
Assume that $R$ does not contain isotropic roots. Let us show that
we can normalize $(-,-)$ in such a way that~(\ref{Nn}) holds.
For $\alpha,\beta\in R$
one has $(\alpha,\alpha)/(\beta,\beta)=k_{\beta,\alpha}/
k_{\alpha,\beta}\in\mathbb{Q}$ if $(\alpha,\beta)\not=0$. From the irreducibility of $R$, we obtain $(\alpha,\alpha)/(\beta,\beta)\in\mathbb{Q}$.
Thus we can normalize the form $(-,-)$
in such a way that $(\alpha,\alpha)\in \mathbb{Q}$
for each $\alpha\in R$; in this case $(\alpha,\beta)\in\mathbb{Q}$
for any $\alpha,\beta\in R$, so~(\ref{Nn}) holds.
Now assume that $R$ contains an isotropic root.
By above, $R$ is finite; such systems
were classified in~\cite{VGRS}. From this classification it follows
that $R$ satisfies~(\ref{Nn}) except for $R=D(2,1,a)$ with $a\not\in\mathbb{Q}$; thus (GR1) holds
for such $R$. For $R=D(2,1,a)$ (and, in fact, for each
$R\not=\mathfrak{psl}(n,n)$)
there exists $\Pi\subset R$ such that $R\subset\mathbb{Z}\Pi$
and the elements of $\Pi$ are linearly independent.
In this case, $\Pi$ is a basis of $V$ and $\mathbb{Z}R=\mathbb{Z}\Pi$.
Thus (GR1) holds.\qed
\section{The minimal quotient $cl(R)$}\label{sect3}
In this section $V$ is a complex $(l+k)$-dimensional vector space
endowed with a degenerate symmetric bilinear form $(-,-)$ with a $k$-dimensional kernel, and
$R\subset V$ is a GRRS. The map $cl$ is the canonical map
$V\to V/\Ker(-,-)$. By~\Cor{corimR} (i), $cl(R)$ is a WGRS in $V/\Ker(-,-)$.
\subsection{Gaps}\label{gaps}
Consider the case when $\dim Ker (-,-)=1$.
From (GR1) it follows that $\mathbb{Z}R\cap \Ker(-,-)=\mathbb{Z}\delta$
for some (possibly zero) $\delta$.
For each $\alpha\in cl(R)$ one has
$(cl^{-1}(\alpha)\cap R)\subset \{\alpha'+\mathbb{Z}\delta\}$ for some
$\alpha'\in R$. If $\delta\not=0$, we call $g(\alpha)\in\mathbb{Z}_{\geq 0}$
the {\em gap} of $\alpha$ if
$$cl^{-1}(\alpha)\cap R=\{\alpha'+\mathbb{Z}g(\alpha)\delta\}$$
for some $\alpha'\in R$;
if $\delta=0$ we set $g(\alpha):=0$
(in this case $cl^{-1}(\alpha)\cap R$ contains
only one element).
Observe that the set of gaps is an invariant of the root system.
The gaps have the following properties:
(i) $g(\alpha)$ is defined for all non-isotropic $\alpha\in cl(R)$;
(ii) $g(\alpha)$ are $W(cl(R))$-invariant (if $g(\alpha)$ is defined, then
$g(w\alpha)$ is defined and
$g(w\alpha)=g(\alpha)$ for each $w\in W(cl(R))$);
(iii) if $\alpha,\beta\in cl(R)$ are non-isotropic, then
$k_{\alpha,\beta}g(\alpha)\in\mathbb{Z} g(\beta)$;
(iv) if $cl(R)$ is a GRRS, then $g(\alpha)$ are defined for all $\alpha\in cl(R)$ and $g(\alpha)$
are $GW(cl(R))$-invariant (see~\S~\ref{Weyl} for notation).
The properties (i)--(iii) are standard (we give a short proof in~\S~\ref{lemgap});
(iv) will be established in~\Prop{propFalpha}.
\subsubsection{}\label{lemgap}
Let us show that $g(\alpha)$ satisfies (i)--(iii).
Fix a non-isotropic $\alpha'\in R$ and set
$$M:=\{k\in\mathbb{Z}| \ \alpha'+k\delta\in R\}.$$
For each $x,y,z\in Ker(-,-)$ and $m\in\mathbb{Z}$ one has
\begin{equation}\label{rxyz}
(r_{\alpha'+x}r_{\alpha'+y})^m(\alpha'+z)=\alpha'+2m(x-y)+z.
\end{equation}
Thus for each $p,q,r\in M$ one has $2\mathbb{Z}(p-q)+r\subset M$.
Note that $0\in M$ (since $\alpha'\in R$).
Taking $q=0$ and $r=0$ or $r=p$, we get $\mathbb{Z}p\subset M$.
Hence $M=\mathbb{Z}k$ for some $k\in\mathbb{Z}$.
This gives (i). Combining
$r_{\alpha'}(\beta+p\delta)=r_{\alpha'}\beta+p\delta$ (for any $\beta\in R$)
and the fact that $W(R)$ is generated by the reflections $r_{\beta}$ with non-isotropic
$\beta\in R$, we obtain (ii).
For (iii) take $\alpha,\beta\in cl(R)$. Notice
that $g(\alpha),g(\beta)$ are defined by (i);
by (ii) $g(r_{\alpha}\beta)=g(\beta)$. Take $\alpha',\beta'\in R$
such that $cl(\alpha')=\alpha$ and $cl(\beta')=\beta$.
Since $r_{\alpha'+k\delta}(\beta')=r_{\alpha'}\beta'-k_{\alpha,\beta}k\delta$,
we have $k_{\alpha,\beta}g(\alpha)\in\mathbb{Z} g(\beta)$ as required.
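The identity (\ref{rxyz}) for $m=1$ can be checked directly (a verification added here for the reader; it uses only that $x,y,z\in Ker(-,-)$ and that $\alpha'$ is non-isotropic):

```latex
r_{\alpha'+y}(\alpha'+z)
 =(\alpha'+z)-\frac{2(\alpha'+z,\alpha'+y)}{(\alpha'+y,\alpha'+y)}(\alpha'+y)
 =(\alpha'+z)-2(\alpha'+y)=-\alpha'+z-2y,
\qquad
r_{\alpha'+x}(-\alpha'+z-2y)
 =(-\alpha'+z-2y)+2(\alpha'+x)=\alpha'+2(x-y)+z,
```

since every pairing involving $x,y,z$ vanishes and $(\alpha'+y,\alpha'+y)=(\alpha',\alpha')$; the general case follows by induction on $m$.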
\subsection{Construction of $R'\subset cl(R)$}\label{Falphastr}
Set
$$L:=\mathbb{Z}R\cap \Ker(-,-).$$
From (GR1) it follows that $L=\sum_{i=1}^s \mathbb{Z}\delta_i$,
for some linearly independent
$\delta_1,\ldots, \delta_s\in Ker(-,-)$.
Since $R$ spans $V$, there exists $X:=\{v_1,\ldots,v_l\}\subset R$
whose images form a basis of $V/Ker (-,-)$. We fix $X$ and identify
$V/\Ker (-,-)$ with the vector space $V'\subset V$ spanned
by $v_1,\ldots,v_l$; then $V=V'\oplus \Ker (-,-)$ and $cl:\ V\to V'$
is the projection; in particular, $cl(R)$ is a WGRS in $V'$.
The restriction of $(-,-)$
to $V'$ is non-degenerate, so by~\Lem{lemGRS1} (ii),
the set $X$ generates
a subsystem $R'$ in $R$.
\subsection{Construction of $F(\alpha)$}
For each $\alpha\in V'$ we introduce
$$F(\alpha):=\{v\in \Ker (-,-)|\ \alpha+v\in R\}.$$
Notice that $F(\alpha)$ is non-empty if and only if $\alpha\in cl(R)$.
For each $\alpha\in cl(R)$ one has
\begin{equation}\label{eqFalpha}
F(\alpha)\subset L+\delta_{\alpha}\ \text{ for some }\delta_{\alpha}
\ \text{ where } \delta_{\alpha}=0 \text{ iff } \alpha\in R'.
\end{equation}
\subsection{}
\begin{lem}{lemaff}
If $cl(R)\subset\mathbb{Z}X$, then $R\subset cl(R)+L=cl(R)^{(k)}$,
where $k:=\dim Ker(-,-)=rank L$.
\end{lem}
\begin{proof}
Clearly, $cl(R)+L=cl(R)^{(s)}$, where $s=rank L$.
Fix $\alpha\in R$ and set $\mu:=\alpha-cl(\alpha)$. Then
$\mu \in Ker (-,-)$ and $\mu\in \mathbb{Z}R$, since
$cl(R)\subset\mathbb{Z}R$. Therefore $\mu\in L$. This gives $R\subset cl(R)+L$.
Since $R$ spans $V$, $s=\dim Ker(-,-)$ as required.
\end{proof}
\subsubsection{}
\begin{prop}{propFalpha}
For each $\alpha\in cl(R)$ one has
(i) $F(-\alpha)=-F(\alpha)$;
(ii) $F(w\alpha)=F(\alpha)$ for all $w\in W(R')$;
(iii) $F(\alpha)=-F(\alpha)$ if $\alpha$ is non-isotropic;
(iv) if $cl(R)$ is a GRRS, then for each $\alpha\in R'$
one has $F(\alpha)=-F(\alpha)$ and
$F(w\alpha)=F(\alpha)$ for each $w\in GW(R')$.
\end{prop}
\begin{proof}
The inclusion $R'\subset R$ implies (ii). The formula
$R=-R$ implies (i); (iii) follows from (i) and (ii).
For (iv) let $R'$ be a GRRS. Let us show
that for each $\alpha,\beta\in R'$ one has $F(r_{\beta}\alpha)=F(\alpha)$.
Clearly, this holds if $r_{\beta}\alpha=\alpha$;
by (ii) this holds if $\beta$ is non-isotropic.
Since $r_{\beta}$ is an involution, it is enough to verify that
\begin{equation}\label{mumu}
F(r_{\beta}\alpha)\subset F(\alpha)\ \text{ for isotropic } \beta\in R'.
\end{equation}
Assume that $r_{\beta}\alpha=\alpha+\beta$.
Then $\alpha+\beta\in R'\subset cl(R)$, so $\alpha-\beta\not\in cl(R)$
by~\S~\ref{alphabeta} (since $cl(R)$ is a GRRS).
Take $v\in F(\alpha)$. Then $\alpha+v\in R$,
so $r_{\beta}(\alpha+v)\in R$. Since $cl(\alpha+v-\beta)=\alpha-\beta\not\in cl(R)$,
one has $r_{\beta}(\alpha+v)=\alpha+v+\beta$, so $v\in F(r_{\beta}\alpha)$
as required. The case $r_{\beta}\alpha=\alpha-\beta$ is similar.
Thus we have established the formula $F(r_{\beta}\alpha)=F(\alpha)$ for $\alpha\not=\pm\beta$.
By~\Lem{lem2} there exists $\gamma\in R'$ such that
$r_{\gamma}\beta$ is non-isotropic. By (iii)
$F(r_{\gamma}\beta)=-F(r_{\gamma}\beta)$. By above, $F(\beta)=F(r_{\gamma}\beta)$
(since $r_{\gamma}\beta\not=\pm\beta$). Hence
$F(\beta)=F(-\beta)$. This completes the proof of~(\ref{mumu}) and of (iv).
\end{proof}
\subsubsection{}
\begin{lem}{}
For any $\alpha,\beta\in cl(R)$ such that $r_{\alpha}$
is well-defined and $r_{\alpha}\beta=\alpha+\beta$ one has
\begin{equation}\label{eqFalphabeta}
\begin{array}{l}
F(\alpha+\beta)=F(\alpha)+F(\beta).
\end{array}
\end{equation}
\end{lem}
\begin{proof}
Observe that $F(\alpha+\beta)=F(\alpha)+F(\beta)$ holds if
for any $x\in F(\alpha), y\in F(\beta)$ and $z\in F(\alpha+\beta)$ one has
$$
r_{\alpha+x}(\beta+y)=\alpha+\beta+x+y,\ \ r_{\alpha+x}(\alpha+\beta+z)=\beta+z-x.$$
If $\alpha$ is non-isotropic, then $\alpha+x$ is also non-isotropic and
the above formulae follow from the definition of $r_{\alpha+x}$.
If $\alpha$ is isotropic and $r_{\alpha}$ is well-defined, then,
by~\S~\ref{alphabeta},
$\beta-\alpha,\beta+2\alpha\not\in cl(R)$ (since
$\beta,\alpha+\beta\in cl(R)$), which implies
the above formulae.
Thus~(\ref{eqFalphabeta}) holds.
\end{proof}
\subsubsection{}
\begin{cor}{cor1}
Assume that
$\bullet\ cl(R)=R'$;
$\bullet\ cl(R)$ is a GRRS and $cl(R)\not=A_1$;
$\bullet\ GW(R')$ acts transitively on $R'$.
Then $R=R'+L$, i.e. $R=(R')^{(s)}$.
\end{cor}
\begin{proof}
We claim that $R'$ contains two roots $\alpha,\beta$ with $r_{\alpha}\beta=\alpha+\beta$.
Indeed, if $R'$ contains an isotropic root $\alpha$, then it also contains
$\beta$ such that $(\alpha,\beta)\not=0$, so $r_{\alpha}\beta\in\{\beta\pm\alpha\}$,
so one of the pairs $(\alpha,\beta)$ or $(-\alpha,\beta)$ satisfies
the required condition. If all roots in $R'$ are non-isotropic, then any non-orthogonal
$\alpha',\beta'\in R'$ generate a finite root system, that is one of $A_2,C_2, BC_2$
and such root system contains $\alpha,\beta$ as required.
By~\Prop{propFalpha} (iv) $F(\gamma)$ is the same for all $\gamma\in R'$
and $F(\gamma)=-F(\gamma)$.
Using (\ref{eqFalphabeta}) for the pair $\alpha,\beta$ as above, we
conclude that $F(\alpha)$ is a subgroup of $Ker (-,-)$, so $F(\alpha)=\mathbb{Z}F(\alpha)$.
This implies $\mathbb{Z} R\cap Ker (-,-)=F(\alpha)$, that is
$F(\alpha)=L$ as required.
\end{proof}
\section{Case when $cl(R)$ is finite and is generated by a
basis of $cl(V)$}\label{sect4}
In this section we classify the irreducible GRRSs $R$ with a finite $cl(R)$ generated by a basis of $cl(V)$.
Combining~\S~\ref{GRS}, \ref{DeltaPi}, we conclude
that if $(-,-)$ is non-degenerate, then
$R$ is a root system of a basic classical Lie superalgebra $\fg\not=\mathfrak{psl}(n,n), \mathfrak{osp}(1,2n)$, i.e. $cl(R)$
is from the following list:
\begin{equation}\label{ABCDE}
\begin{array}{l}
\text{classical root systems }: A_n,B_n,C_n,D_n, E_6,E_7,E_8, F_4, G_2,\\
A(m,n)\ m\not=n, B(m,n), m,n\geq 1; C(n), D(m,n),\ m,n\geq 2; D(2,1,a), F(4), G(3).\end{array}
\end{equation}
If the form $(-,-)$ is degenerate, then $cl(R)$ is from the list~(\ref{ABCDE});
the classification is given by the following theorem.
\subsection{}
\begin{thm}{thmAGRS1}
Let $R\subset V$ be a GRRS and
$$k=\dim Ker (-,-)>0.$$
(i) If $cl(R)$ is one of the following GRRSs
$$\begin{array}{l}
A_n, n\geq 2, D_n, n\geq 4, E_6,E_7, E_8;\\
A(m,n)\ m\not=n; C(n), D(m,n), m\geq 2, n\geq 1;
D(2,1,a), F(4), G(3),
\end{array}$$
then $R$ is the affinization of $cl(R)$:
$R=cl(R)^{(k)}$.
(ii) The isomorphism classes of GRRSs $R$ with $cl(R)=A_1$
are in one-to-one correspondence with the equivalence classes
of the subsets $S$ of the affine space $\mathbb{F}_2^k$ containing an affine basis of $\mathbb{F}_2^k$, up to affine automorphisms of $\mathbb{F}_2^k$.
(iii) If $cl(R)=G_2, F_4$, then $R$ is of the form $R(s)$ for $s=0,\ldots,k$
$$R(s):=\{\alpha+\sum_{i=1}^k \mathbb{Z}\delta_i|\ \alpha\in cl(R) \text{ is short}\}\cup \{\alpha+ \sum_{i=1}^s \mathbb{Z}\delta_i+\sum_{i=s+1}^k \mathbb{Z}r\delta_i|\
\alpha\in cl(R)\text{ is long}\},$$
where $\{\delta_i\}$ is a basis of $Ker(-,-)$ and $r=2$ for $F_4$ and $r=3$
for $G_2$. The GRRSs $R(s)$ are pairwise non-isomorphic.
(iv) The GRRSs $R$ with $cl(R)=C_2$
are parametrized by the pairs $(S_1,S_2)$, where
$S_i$ are subsets of the affine space $\mathbb{F}_2^k$ containing zero
such that
(1) $S_1$ contains an affine basis of $\mathbb{F}_2^k$,
(2) $S_1+S_2\subset S_1$.
Moreover, $R(S_1,S_2)\cong R(S_1',S_2')$ if and only if for $i=1,2$ one has
$S_i'=\phi(S_i)+a_i$, where $\phi\in GL(\mathbb{F}_2^k)$
and $a_1,a_2\in \mathbb{F}_2^k$ (so $v\mapsto \phi(v)+a_i$
is an affine automorphism of $\mathbb{F}_2^k$).
(v) The isomorphism classes of GRRSs $R$ with $cl(R)=B_n,C_n, n\geq 3$, $B(m,n), m,n\geq 1$ are in one-to-one correspondence with the equivalence classes of non-empty
subsets $S$ of the affine space $\mathbb{F}_2^k$
up to affine automorphisms of $\mathbb{F}_2^k$.
\end{thm}
\subsubsection{Remarks}
In (ii), (iv) we mean that $R(S)\cong R(S')$ for $S,S'\subset \mathbb{F}_2^k$ if and only if $S'=\psi(S)$ for some affine
automorphism $\psi$.
Notice that all above GRRSs are infinite (so affine).
Observe that (i) corresponds to the case when
$GW(cl(R))$ acts transitively on $cl(R)$ and $cl(R)\not=A_1$.
For $cl(R)=B_n,C_n, F_4, G_2$ and $B(m,n), m,n\geq 1$,
$cl(R)$ has two $GW(cl(R))$-orbits (see~\S~\ref{Weyl} for notation). We denote these orbits by
$O_1, O_2$, where $O_1$ (resp., $O_2$) is the set of short (resp., long)
roots for $C_n, F_4,G_2$ and
$O_1=D_n,D(m,n)$ for $B_n,B(m,n)$
respectively (and $O_2$ is the set of short roots in
both cases).
\subsection{Description of $R(S)$}
In order to describe the above correspondences in (ii)--(iv)
between GRRSs and subsets
in $\mathbb{F}_2^k$ we fix a free abelian group
$L\subset Ker (-,-)$ of rank $k$ and
denote by $\iota_2$ the canonical map
$\iota_2: L\to L/2L\cong\mathbb{F}_2^k$ and by $\iota^{-1}_2$
the preimage of $S\subset \mathbb{F}_2^k$ in $L$.
\subsubsection{Case when $S\subset \mathbb{F}_2^k$ contains zero}
For $cl(R)=A_1=\{\pm \alpha\}$ (case (ii)) one has
$$R(S):=\{\pm \alpha+\iota_2^{-1}(S)\}.$$
For $cl(R)=C_n$ one has
$$\begin{array}{ll}
R(S_1,S_2):=\{\alpha+\iota_2^{-1}(S_1)|\ \alpha\in O_1\}\cup
\{\alpha+ \iota_2^{-1}(S_2)|\ \alpha\in O_2\}\ & \text{ for }n=2,\\
R(S):=\{\alpha+L|\ \alpha\in O_1\}\cup
\{\alpha+ \iota_2^{-1}(S)|\ \alpha\in O_2\} & \text{ for }n>2.
\end{array}$$
In these cases $L:=\mathbb{Z}R\cap Ker(-,-)$.
For $cl(R)=B_n, n\geq 3,\ B(m,n), m,n\geq 1$ we take
$$R(S):=\{\alpha+2L|\ \alpha\in O_1\}\cup
\{\alpha+ \iota_2^{-1}(S)|\ \alpha\in O_2\}.$$
\subsubsection{}\label{choices}
Now assume that $S\subset \mathbb{F}_2^k$ is an arbitrary non-empty set.
Take any $s\in S$ and consider a set $S(s):=S-s$. The set $S(s)$ contains zero (and contains an affine basis for $\mathbb{F}_2^k$ if $S$
contained such a basis), so $R(S-s)$ is defined above. We set $R(S,s):=R(S-s)$. The sets
$S(s)$ (for different choices of $s$) are conjugate under affine automorphisms, so, as we will show in~\S~\ref{isoRS},
the GRRSs corresponding to different choices of $s$ are isomorphic:
$R(S,s')\cong R(S,s'')$ for any $s',s''\in S$ (in other words,
$R(S):=R(S,s)$ is defined up to an isomorphism).
\subsection{Proof of~\Thm{thmAGRS1}}
The rest of this section is devoted to the proof of~\Thm{thmAGRS1}.
We always assume that $0\in S$.
Considering $B_n$ (resp., $B(m,n)$) we always assume that $n\geq 3$ (resp., $m,n\geq 1$).
\subsubsection{}\label{pfAGRS11}
Recall that $cl(R)$ is generated (as a GRRS) by a basis $\Pi$ of $cl(V)$. We take
$X:=\Pi$ in the construction of $R'$ (see~\S~\ref{Falphastr}).
We obtain $R'=cl(R)$, so $cl(R)\subset R$.
Using~\Lem{lemaff} we obtain
$$cl(R)\subset R\subset cl(R)^{(k)}.$$
\subsubsection{}
It is easy to verify that
if $cl(R)$ is as in (i), then $GW(cl(R))$ acts transitively on $cl(R)$, so (i) follows from~\Cor{cor1}.
\subsubsection{Case $cl(R)=A_1$}
Let $cl(R)=A_1=\{\pm\alpha\}$; set $L:=\mathbb{Z}R\cap Ker (-,-)$.
Then $R\subset cl(R)^{(k)}$ and so, by (GR1), $L$ is a free group
of rank $k$. If $k=1$, then $R=A_1^{(1)}$ by~\S~\ref{gaps}.
Consider the case $k>1$. Recall that $R=\{\pm (\alpha+H)\}$, where $H\subset L$ contains $0$, so (GR1) is
equivalent to the condition that $H$ contains a basis of $L$.
For each $x,y\in Ker (-,-)$ one has $r_{\alpha+x}(\alpha+y)=-\alpha+y-2x$,
so (GR2) is equivalent to $2x-y\in H$ for each $x,y\in H$, that is $H+2L\subset H$.
Hence $H$ is a union of equivalence classes of $L/2L=\mathbb{F}_2^k$ which contains $0$ and a basis of $\mathbb{F}_2^k$.
View $\mathbb{F}_2^k$ as an affine space. Recall that an affine basis of a $k$-dimensional
affine space $\mathbb{F}^k$ is a collection of points $x_1,\ldots,x_{k+1}$
such that any point $y\in \mathbb{F}^k$ is of the form $\sum_{i=1}^{k+1} \lambda_i x_i$
for some $\lambda_i\in\mathbb{F}$ with $\sum_{i=1}^{k+1}\lambda_i=1$.
We conclude that $R=\{\pm (\alpha+H)\}$ is a GRRS if and only if
the set $S:=\iota_2(H)\subset\mathbb{F}_2^k$ has the following properties: $0\in S$ and $S$
contains an affine basis of $\mathbb{F}_2^k$.
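Two small sanity checks of this description (illustrations added here, not part of the argument):

```latex
% k = 1: an affine basis of \mathbb{F}_2 consists of two distinct points,
% so 0 \in S forces S = \{0,1\} = \mathbb{F}_2; then H = L and
% R = \{\pm(\alpha + L)\} = A_1^{(1)}, matching \S\ref{gaps}.
%
% k = 2: the points 0, (1,0), (0,1) form an affine basis of \mathbb{F}_2^2:
% for instance (1,1) = 1\cdot 0 + 1\cdot(1,0) + 1\cdot(0,1) with 1+1+1 = 1,
% so S = \{0,(1,0),(0,1)\} already gives a GRRS R(S).
```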
\subsubsection{Construction of $H_1,H_2$}
Assume that $GW(cl(R))$ does not act transitively on $cl(R)$. Then $cl(R)$ has two orbits $O_1$ and $O_2$, see above.
By~\Prop{propFalpha} one has
$$R=\{\alpha+H_1|\ \alpha\in O_1\}\cup \{\alpha+H_2|\ \alpha\in O_2\},$$
where $H_1,H_2\subset Ker(-,-)$ and $0\in H_1, H_2$ (since $cl(R)\subset R)$.
Except for the case $cl(R)=C_2$ the orbit $O_1$ is an irreducible GRRS with a transitive action of
$GW(O_1)$ (one has $O_1=D_n$ for $B_n,C_n, F_4$, $O_1=D(m,n)$ for $B(m,n)$ and $O_1=A_2$ for $G_2$).
Combining~\Lem{lem3} and (i), we obtain
that $H_1$ is a free abelian subgroup of $Ker(-,-)$ if
$cl(R)\not=C_2$. We introduce $L$ as follows:
\begin{equation}\label{eqL}
L:=\left\{ \begin{array}{ll}
H_1 & \text{ for }cl(R)\not=C_2, B(m,n), B_n;\\
\frac{1}{2}H_1 & \text{ for }cl(R)=B(m,n), B_n;\\
\mathbb{Z}R\cap Ker(-,-) & \text{ for }cl(R)=C_2.
\end{array}\right.\end{equation}
\subsubsection{Cases $F_4, G_2$}
For these cases $O_2\cong O_1$, so $O_2$ is also
an irreducible GRRS with a transitive action of
$GW(O_2)$, and thus $H_2$ is a free abelian subgroup of $Ker(-,-)$.
One readily sees that (GR2) is equivalent to
$$H_2+rH_1, H_2+H_2\subset H_2,\ \ H_1+H_2\subset H_1,$$
where $r=2$ for $F_4$ and $r=3$ for $G_2$. This gives
$rL\subset H_2\subset L$, so $H_2/(rL)$ is an additive subgroup of $\mathbb{F}_r^k$. Thus $H_2/(rL)\cong \mathbb{F}_r^s$ for some $0\leq s\leq k$ and $s$ is an invariant of $R$.
This establishes (iii).
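For instance (an illustration of (iii) added here, in the notation above), take $k=1$: the additive subgroups of $\mathbb{F}_r$ are $\{0\}$ and $\mathbb{F}_r$, so $s\in\{0,1\}$ and there are exactly two GRRSs over $cl(R)=F_4$ or $G_2$:

```latex
% s = 1: H_2 = L, so R(1) = cl(R)^{(1)} is the affinization;
% s = 0: H_2 = rL, so in R(0) the long roots occur only with multiples of
%        r\delta_1 (r = 2 for F_4, r = 3 for G_2), i.e. the long roots
%        have gap r in the sense of \S\ref{gaps}, while the short roots have gap 1.
```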
\subsubsection{Case $C_n$}
Take $n>2$.
One readily sees that (GR2) is equivalent to
$$H_2+2H_1, H_2+2H_2\subset H_2,\ \ H_1+H_2\subset H_1.$$
Since $H_1=L$, we get $H_2+2L\subset H_2\subset L$.
Taking $S:=\iota_2(H_2)$, we obtain $R\cong R(S)$.
Take $n=2$. In this case (GR2) is equivalent to
$$H_1+H_2, H_1+2H_1\subset H_1,\ H_2+2H_1, H_2+2H_2\subset H_2.$$
Since $0\in H_1$, we obtain $H_2\subset H_1$, so $L=\mathbb{Z}R\cap Ker(-,-)$ is spanned by $H_1$.
Thus (GR2) is equivalent to $H_i+2L\subset H_i$ for $i=1,2$
and $H_1+H_2\subset H_1$. Taking $S_i:=\iota_2(H_i)$
for $i=1,2$, we obtain $R\cong R(S_1,S_2)$ as required.
\subsubsection{Cases $B_n, B(m,n)$}
One readily sees that (GR2) is equivalent to
$$H_2+2H_2, H_2+H_1\subset H_2,\ \ H_1+2H_2\subset H_1.$$
Since $H_1=2L$, we get $H_2\subset L$ and $H_2+2L\subset H_2$.
Taking $S:=\iota_2(H_2)$, we obtain $R\cong R(S)$.
\subsection{Isomorphisms $R(S)\cong R(S')$}\label{isoRS}
It remains to verify that in (ii)-(v) one has
$R(S)\cong R(S')$ if and only if
$S=\psi(S')$ for some affine transformation $\psi$
(for $C_2$ we have $S_i=\psi(S'_i)$ for $i=1,2$).
\subsubsection{}
Let $R(S)\subset V,\ R(S')\subset V'$ be two isomorphic GRRSs and let $\phi: V\iso V'$
with $\phi(R(S))=R(S')$ be an isomorphism.
Define $L,L'$ and
$H_i, H_i' \ (i=1,2)$
for $R(S)$ and $R(S')$ as above (for $cl(R)=A_1$ we set $O_1:=O_2:=A_1$ and $H_1:=H_2:=H$). From~(\ref{eqL}) one has
$\phi(L)=L'$ and thus $\phi(2L)=2L'$, so $\phi$
induces a linear isomorphism $\phi_2: \mathbb{F}_2^k\iso\mathbb{F}_2^k$
such that $\iota_2'\circ \phi=\phi_2\circ \iota_2$
(where $\iota_2: L/2L\iso \mathbb{F}_2^k$ and
$\iota_2': L'/2L'\iso \mathbb{F}_2^k$ are the natural isomorphisms).
By the above construction,
$R(S)$ and $R(S')$ contain $cl(R(S))\cong cl(R(S'))$.
Take $\alpha\in O_2\subset cl(R(S))$ and let $\alpha'$ be the corresponding element in
$cl(R(S'))$.
Then $\phi(\alpha)=\alpha'+v$ for some $v\in H'_2$. Since
$\phi$ is linear,
$\phi(\alpha+x)=\alpha'+v+\phi(x)$ for each $x\in L$.
This implies $H_2'=v+\phi(H_2)$, that is
$$S'=\iota'_2(H'_2)=\iota'_2(v)+\iota'_2(\phi(H_2))=\iota'_2(v)+
\phi_2(\iota_2(H_2))=\iota'_2(v)+\phi_2(S).$$
This shows that $S'$ is obtained from $S$ by an affine automorphism $\psi:=\iota'_2(v)+\phi_2$
of $\mathbb{F}_2^k$ as required.
For the case $cl(R)=C_2$ the above argument gives
$S'_i=a_i+\phi_2(S_i)$ for some $a_i\in S_i'$ ($i=1,2$).
\subsubsection{}
Let $R(S), R(S')\subset V$ be two GRRSs with $cl(R(S))=cl(R(S'))$ (and the same
$L$), and let $S'=\psi_2(S)+\ol{a}$ if $cl(R)\not=C_2$
(resp., $S_i'=\psi_2(S_i)+\ol{a}_i$ for $i=1,2$ if $cl(R)=C_2$), where $a\in L$ and $\ol{a}\in \mathbb{F}_2^k=L/2L$ (resp., $a_i\in L$ and $\ol{a}_i\in \mathbb{F}_2^k$)
and $\psi_2$ is a linear automorphism of $\mathbb{F}_2^k$.
Fix a linear isomorphism $\psi:L\to L$ such that
$\iota_2\circ \psi=\psi_2\circ \iota_2$.
Recall that $V=Ker(-,-)\oplus \mathbb{C}\Pi$, where
$Ker (-,-)=L\otimes_{\mathbb{Z}}\mathbb{C}$ and
$\Pi\subset cl(R(S))=cl(R(S'))$ is linearly independent in $V$.
Extend $\psi$ to a linear automorphism of $V$ by
putting $\psi(\alpha):=\alpha+a$ for each $\alpha\in \Pi\cap O_2$
and $\psi(\alpha):=\alpha$ for each $\alpha\in \Pi\cap O_1$ if
$cl(R)\not=A_1,C_2$
(resp., $\psi(\alpha):=\alpha+a$ for $\alpha\in \Pi$ if $cl(R)=A_1$ and
$\psi(\alpha):=\alpha+a_i$ for each $\alpha\in \Pi\cap O_i$, where $i=1,2$
for $cl(R)=C_2$).
One readily sees that $\psi$ preserves $(-,-)$ and $\psi(R(S))=R(S')$.
Thus $R(S)\cong R(S')$ (resp., $R(S_1,S_2)\cong R(S'_1,S'_2)$) as required.
\section{Case when $cl(R)$ is the root system
of $\mathfrak{psl}(n+1,n+1)$ for $n>1$}
\label{Ann}
In this section we describe $R$ such that $cl(R)$ is the root system
of $\mathfrak{psl}(n+1,n+1)$ for $n>1$.
\subsection{Description of $A(n,n), A(n,n)_f, A(n,n)_x$}
A finite GRRS $A(n,n)\subset V$ can be described as follows.
Let $V_1$ be a complex vector space endowed
with a symmetric bilinear form
and an orthogonal basis $\vareps_1,\ldots,\vareps_{2n+2}$ such that $(\vareps_i,\vareps_i)=-(\vareps_{n+1+i},\vareps_{n+1+i})=1$ for $i=1,\ldots, n+1$. One has
$$A(n,n)=\{\vareps_i-\vareps_j\}_{i\not=j},\ \ \ V=\{\sum_{i=1}^{2n+2}
a_i\vareps_i|\ \sum_{i=1}^{2n+2} a_i=0\},$$
where the reflection $r_{\vareps_i-\vareps_j}$ is
the restriction of the linear map
$\tilde{r}_{\vareps_i-\vareps_j}\in End(V_1)$
which interchanges $\vareps_i\leftrightarrow \vareps_j$
and preserves all other elements of the basis.
One readily sees that $A(n,n)\subset V$ is a finite GRRS; it is
the root system of the Lie superalgebra $\mathfrak{pgl}(n+1,n+1)$
($V_1$ corresponds to $\fh^*$, where $\fh$ is a Cartan subalgebra of
$\mathfrak{gl}(n+1,n+1)$ and $V\subset V_1$ is dual to the Cartan subalgebra
of $\mathfrak{pgl}(n+1,n+1)$).
The kernel of the bilinear form on $V$ is spanned by
$$I:=\sum_{i=1}^{n+1}(\vareps_i-\vareps_{n+1+i}).$$
The root system of $\mathfrak{psl}(n+1,n+1)$ is the quotient of $A(n,n)$
by $\mathbb{C}I$ (it is a bijective quotient if $n>1$);
we denote it by $A(n,n)_f$: $A(n,n)_f:=cl(A(n,n))$. Recall that
$A(n,n)_f$ is a GRRS if and only if $n>1$ and
$A(1,1)_f$ is a WGRS (denoted by $C(1,1)$ in~\cite{VGRS}, see~\S~\ref{GRS}).
Let $A(n,n)^{(1)}\subset V^{(1)}:=V\oplus \mathbb{C}\delta$ be the affinization of $A(n,n)$. We denote by $cl_x$ the canonical map
$$cl_x: V\oplus\mathbb{C}\delta\to V_x:=(V\oplus\mathbb{C}\delta)/\mathbb{C}(I+ x\delta)$$
and by $A(n,n)_x$ the corresponding quotient of $A(n,n)^{(1)}$:
$$A(n,n)_x:=cl_x(A(n,n)^{(1)}).$$
Note that $A(n,n)_0=(A(n,n)_f)^{(1)}$.
The kernel of $(-,-)$ on $V_x$ is one-dimensional and $$cl(A(n,n)_x)\cong A(n,n)_f.$$
Note that (GR1) holds for
$A(n,n)_x$ only if $x\in\mathbb{Q}$ (since $cl_x(\delta),
x\,cl_x(\delta)\in \mathbb{Z}A(n,n)_x$). We will see that
for $n>1$ this condition is sufficient: $A(n,n)_x$ is a GRRS if and only if
$x\in\mathbb{Q}$; for $n=1$ integral values of $x$ should be excluded, see~\ref{uksi}
below.
\subsection{Description of $A(1,1)_x, \ x\in\mathbb{Q}$}\label{uksi}
Let $x=p/q$ be the reduced form ($p,q\in\mathbb{Z}, q>1, GCD(p,q)=1$).
Set $\delta':=cl_x(\delta)/q,\ e:=cl_x(\vareps_1-\vareps_2)/2,\
d:=cl_x(\vareps_3-\vareps_4
)/2$; note that $\delta', e,d$ form
an orthogonal basis of $V_x$ satisfying $(\delta',\delta')=0$ and
$(e,e)=-(d,d)=1/2$. One has
$$A(1,1)_x=\{\pm 2e+\mathbb{Z}q\delta',\ \pm 2d+\mathbb{Z}q\delta',\
\pm e\pm d+ (\mathbb{Z}q\pm p/2)\delta'\},$$
and $cl(A(1,1)_x)=A(1,1)_f=C(1,1)=\{\pm 2e,\ \pm 2d,\ \pm e\pm d\}$.
Note that $\mathbb{Z}A(1,1)_x\cap Ker (-,-)$ is $\mathbb{Z}\delta'$
(since $GCD(p,q)=1$), so the non-isotropic roots in $C(1,1)$
have the gap $q$ (and the gap of isotropic roots is not defined).
Observe that $A(1,1)_x$ is not a GRRS for $x\in\mathbb{Z}$, since
$\alpha:=e+d+\frac{p}{2}\delta'$, $\beta:=e-d-\frac{p}{2}\delta'$ are isotropic non-orthogonal roots with
$\alpha\pm\beta\in R$, which contradicts (GR3), see~\S~\ref{alphabeta}.
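The failure of (GR3) for integral $x$ can also be seen in a direct computation. The following Python sketch (an added illustration; the membership test hard-codes the displayed description of $A(1,1)_x$ in coordinates $(e,d,\delta')$) verifies that both $\alpha+\beta$ and $\alpha-\beta$ lie in $R$ exactly when $q=1$, i.e. when $x$ is integral.

```python
from fractions import Fraction as Fr

def in_R(root, p, q):
    """Membership test for A(1,1)_x, x = p/q, in coordinates (e, d, delta')."""
    ce, cd, cdelta = root
    if (ce, cd) in [(2, 0), (-2, 0), (0, 2), (0, -2)]:
        return cdelta % q == 0                # +-2e, +-2d + Zq delta'
    if abs(ce) == 1 and abs(cd) == 1:         # +-e +-d + (Zq +- p/2) delta'
        return (cdelta - Fr(p, 2)) % q == 0 or (cdelta + Fr(p, 2)) % q == 0
    return False

def gr3_fails(p, q):
    alpha = (1, 1, Fr(p, 2))                  # alpha = e + d + (p/2) delta'
    beta = (1, -1, -Fr(p, 2))                 # beta  = e - d - (p/2) delta'
    s = tuple(u + v for u, v in zip(alpha, beta))   # alpha + beta = 2e
    d = tuple(u - v for u, v in zip(alpha, beta))   # alpha - beta = 2d + p delta'
    return in_R(s, p, q) and in_R(d, p, q)    # both in R contradicts (GR3)

assert gr3_fails(1, 1) and gr3_fails(3, 1)          # x integral: (GR3) fails
assert not gr3_fails(1, 2) and not gr3_fails(2, 3)  # x = 1/2, 2/3: no such pair
```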
\subsection{}
In this section we prove the following proposition describing the
affine GRRSs $R$ with $cl(R)=A(n,n)_f, n>1$.
\begin{prop}{propAnnx}
(i) $A(1,1)_x$ is a GRRS if and only if $x\in\mathbb{Q},\ x\not\in\mathbb{Z}$;
$A(n,n)_x$ for $n>1$ is a GRRS if and only if $x\in\mathbb{Q}$.
(ii) Let $R$ be a GRRS with $\dim Ker(-,-)=1$ and $cl(R)=A(n,n)_f, n>1$.
If $R$ is finite,
then $R\cong A(n,n)$. If $R$ is infinite, then $R\cong A(n,n)_x$ for some
$x\in\mathbb{Q}$ and it is a bijective quotient of $A(n,n)^{(1)}$;
each $\alpha\in A(n,n)_f$ has the gap $q$.
Moreover $A(n,n)_x\cong A(n,n)_y$ if and only if either $x+y$ or $x-y$ is integral.
(iii) If $R$ is a GRRS with $\dim Ker(-,-)=k+1>1$ and $cl(R)=A(n,n)_f, n>1$, then
$R$ is isomorphic to
$A(n,n)^{(k+1)}$ or to its bijective quotient
$A(n,n)_{1/q}^{(k)}$ for some $q\in\mathbb{Z}_{>0}$ and these GRRS are
pairwise non-isomorphic. Moreover,
$A(n,n)_{p/q}^{(k)}\cong A(n,n)_{1/q}^{(k)}$ if $GCD(p,q)=1$.
\end{prop}
\subsubsection{Remark}
Recall that $A(n,n)_{0}=A(n,n)_f^{(1)}$, so for each $p\in\mathbb{Z}$ one has
$A(n,n)_f^{(k+1)}\cong A(n,n)_p^{(k)}$ for $k\geq 0$.
\subsection{Proof}
By above, $A(n,n)_x$ satisfies (GR1) only if
$x\in\mathbb{Q}$ and, in addition, $x\not\in\mathbb{Z}$
for $n=1$. One readily sees that the converse holds
(these conditions imply (GR1)).
Since $A(n,n)^{(1)}$ is a GRRS, its quotient
$A(n,n)_x$ satisfies (GR0), (GR2) and (WGR3).
Using~\S~\ref{alphabeta} it is easy to show that
(GR3) does not hold if and only if
$n=1$ and $x\in\mathbb{Z}$. This establishes (i).
It is easy to see that $A(n,n)_x$ is a bijective quotient of
$A(n,n)^{(1)}$ for $n>1$.
\subsubsection{}
Let $R$ be a GRRS with $cl(R)=A(n,n)_f, n>1$.
Set $L:=Ker (-,-)\cap \mathbb{Z}R$; by (GR1) one has $L\cong \mathbb{Z}^{k+1}$,
where $k+1=\dim Ker(-,-)$.
Recall that $\tilde{\Pi}:=\{\vareps_i-\vareps_{i+1}\}_{i=1}^{2n+1}$
is a set of simple roots for $A(n,n)$ and
$\Pi:=\{\vareps_i-\vareps_{i+1}\}_{i=1}^{2n}$
is a set of simple roots for a GRRS $A(n,n-1)$. Applying the procedure described in~\S~\ref{Falphastr} to $X:=\Pi$, we get $R'=A(n,n-1)$.
Let $V'$ be the span of $R'$. One has $V=\mathbb{C} I\oplus V'$, so
$R'=A(n,n-1)$ can be naturally viewed as a subsystem of $A(n,n)_f$.
Note that $A(n,n)_f$ has three $GW(A(n,n-1))$-orbits: $A(n,n-1)$ itself, $O_1:=\{\vareps_i-\vareps_{2n}\}_{i=1}^{2n-1}$
and $-O_1$. By~\Prop{propFalpha} for $i\not=j<2n$ one has
$$F(\vareps_i-\vareps_j)=L',\ \ F(\pm(\vareps_i-\vareps_{2n}))=\pm S,$$
where $S,L'\subset L$ and, by~\Thm{thmAGRS1} (since $n>1$),
$L'$ is a free group. By~(\ref{eqFalphabeta}),
$$S=L'+S,\ \ S+(-S)=L',$$
so $S=a+L'$ for some $a\in L$. Note that $L=L'+\mathbb{Z}a$.
If $a\not\in \mathbb{Q}L'$, then $L=L'\oplus \mathbb{Z}a$.
Extending the embedding $A(n,n-1)\to A(n,n)$ by $a\mapsto I$
we obtain the isomorphism $R\cong A(n,n)^{(k)}$.
(If $k=0$, then $L'=0$, so $R\cong A(n,n)$).
If $a\in L'$, then $S=-S=L$ and $R=(A(n,n)_f)^{(k+1)}=(A(n,n)_0)^{(k)}$.
Consider the remaining case $a\in \mathbb{Q}L'\setminus L'$.
Take the minimal $q\in\mathbb{Z}_{>1}$ such that $qa\in L'$ and
the maximal $p\in\mathbb{Z}_{>0}$ such that $qa\in pL$. Then
$GCD(p,q)=1$ and
\begin{equation}\label{LL'}
L'=\mathbb{Z} e\oplus L'',\ S=(p/q+\mathbb{Z})e\oplus L'',\
L=\mathbb{Z} \frac{e}{q}\oplus L''
\ \text{ where }
L''\cong \mathbb{Z}^k\end{equation}
where $e:=\frac{q}{p}a$. Hence
$$R\cong\bigl(A(n,n)/(I-\frac{p}{q}\delta)\bigr)^{(k)}
=\bigl(A(n,n)_{p/q}\bigr)^{(k)}.$$
\subsubsection{}
Let us show that
$A(n,n)_x\cong A(n,n)_y$ if either $x+y$ or $x-y$ is integral.
Consider the linear endomorphisms $\psi, \phi\in End(V\oplus\mathbb{C}\delta)$
defined by
$$\psi(v):=v\ \text{ for }v\in V;
\ \ \psi(\delta):=-\delta,$$
and $\phi(\delta)=\delta$,
$$\phi(\vareps_i-\vareps_{i+1})=\vareps_i-\vareps_{i+1}\ \text{ for }
i=1,\ldots,2n;\ \phi(\vareps_{2n+1}-\vareps_{2n+2})=\vareps_{2n+1}-\vareps_{2n+2}+\delta.
$$
These endomorphisms preserve $(-,-)$ and $A(n,n)^{(1)}$.
Since $\psi(I+x\delta)=I-x\delta$ and $\phi(I+x\delta)=I+(x+1)\delta$,
$\psi$ (resp., $\phi$) induces an isomorphism $V_x\to V_{-x}$
(resp., $V_x\to V_{x+1}$)
which preserves
the bilinear forms and maps $A(n,n)_x$ to $A(n,n)_{-x}$
(resp., to $A(n,n)_{x+1}$).
Hence $A(n,n)_x\cong A(n,n)_{-x}\cong A(n,n)_{x+1}$ as required.
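The map $\phi$ is specified only on the simple roots and $\delta$; one convenient linear extension to $V\oplus\mathbb{C}\delta$ (our choice for the sketch below, consistent with the defining values) is $v\mapsto v-a_{2n+2}\,\delta$ for $v=\sum_i a_i\vareps_i\in V$, together with $\phi(\delta)=\delta$. The following Python sketch (an added illustration) checks that this extension agrees with the defining values, preserves $A(n,n)^{(1)}$ and the bilinear form, and sends $I+x\delta$ to $I+(x+1)\delta$.

```python
from fractions import Fraction as Fr

n = 2
m = 2 * n + 2

def phi(v, vd):
    """phi(v + vd*delta) = v + (vd - a_{2n+2})*delta for v = sum a_i eps_i.
    This linear extension fixes eps_i - eps_{i+1} for i <= 2n and sends
    eps_{2n+1} - eps_{2n+2} to eps_{2n+1} - eps_{2n+2} + delta."""
    return v, vd - v[m - 1]

def form(u, v):  # delta is isotropic and orthogonal to V, so only eps-parts count
    return sum(u[i] * v[i] * (1 if i < n + 1 else -1) for i in range(m))

def unit(i):
    return tuple(1 if t == i else 0 for t in range(m))

alpha = [tuple(a - b for a, b in zip(unit(i), unit(i + 1))) for i in range(m - 1)]
roots = [tuple(a - b for a, b in zip(unit(i), unit(j)))
         for i in range(m) for j in range(m) if i != j]
I = tuple(1 if i < n + 1 else -1 for i in range(m))

# phi agrees with the defining values on the simple roots ...
assert all(phi(alpha[i], 0) == (alpha[i], 0) for i in range(m - 2))
assert phi(alpha[m - 2], 0) == (alpha[m - 2], 1)
# ... preserves A(n,n)^{(1)} = A(n,n) + Z delta and the bilinear form ...
assert all(phi(r, k)[0] == r and phi(r, k)[1] - k in (-1, 0, 1)
           for r in roots for k in range(-3, 4))
assert all(form(u, v) == form(phi(u, 0)[0], phi(v, 0)[0])
           for u in roots for v in roots)
# ... and sends I + x delta to I + (x+1) delta, inducing V_x -> V_{x+1}
x = Fr(2, 5)
assert phi(I, x) == (I, x + 1)
```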
Let us show that
$A(n,n)_x\cong A(n,n)_y$ implies that either $x+y$ or $x-y$ is integral.
For each subset $J$ of $A(n,n)_x$ we
set $sum(J):=\sum_{\alpha\in J}\alpha$ and we let $U$ be the set
of subsets $J$ of $A(n,n)_x$ containing exactly $n+1$ roots.
It is not hard to see that
\begin{equation}\label{Upq}
Ker (-,-)\cap \{sum(J)|\ J\in U\}=\left\{\begin{array}{ll}
(\pm p+\mathbb{Z}q)\delta' & \text{ for even } n,\\
(\pm p+\mathbb{Z}q)\delta' \cup \mathbb{Z}q\delta'& \text{ for odd } n,
\end{array}\right.
\end{equation}
where $\delta'$ is a generator of $\mathbb{Z}R\cap Ker(-,-)\cong \mathbb{Z}$ and
$x=p/q$ with $GCD(p,q)=1$. Thus $A(n,n)_x\cong A(n,n)_y$ implies
$\pm p+\mathbb{Z}q=\pm p'+\mathbb{Z}q'$, where $y=p'/q'$ with $GCD(p',q')=1$.
The claim follows; this completes the proof of (ii).
\subsubsection{}
Now take $R$ such that $cl(R)=A(n,n)_f$ with $n>1$ and
fix $\alpha\in R$. Set $L:=Ker(-,-)\cap \mathbb{Z}R$
and $L':=\{v\in Ker(-,-)| \alpha+v\in R\}$.
One readily sees from above that $L/L'=\mathbb{Z}$
if $R=A(n,n)^{(k)}$ and
$L/L'=\mathbb{Z}/q\mathbb{Z}$ if $R=A(n,n)_{p/q}^{(k)}$ (with $GCD(p,q)=1$).
Therefore $A(n,n)^{(k)}\not\cong A(n,n)_{p/q}^{(k')}$ and
$A(n,n)_{p/q}^{(k)}\cong A(n,n)_{p'/q'}^{(k')}$ with $GCD(p',q')=1$ forces $q=q', k=k'$.
It remains to check that
for $k\geq 1$ one has $A(n,n)_{p/q}^{(k)}\cong A(n,n)_{1/q}^{(k)}$.
Clearly, it is enough to verify this for $k=1$. Note that
$A(n,n)_x^{(1)}$ is the quotient of $A(n,n)^{(2)}$ by $\mathbb{C}(I+x\delta)$:
$A(n,n)\subset V$ and
$$V^{(2)}=V\oplus (\mathbb{C}\delta\oplus\mathbb{C}\delta'),\ \
A(n,n)^{(2)}=A(n,n)+\mathbb{Z}\delta+\mathbb{Z}\delta',$$
where $(V^{(2)},\delta)=(V^{(2)},\delta')=0$.
Consider the linear endomorphism $\phi\in End(V^{(2)})$
defined by $\phi(\delta)=a\delta+q\delta'$, $\phi(\delta')=b\delta+p\delta'$
where $a,b\in\mathbb{Z}$ are such that $pa-qb=1$ and
$$\phi(\vareps_i-\vareps_{i+1})=\vareps_i-\vareps_{i+1}\ \text{ for }
i=1,\ldots,2n;\ \phi(\vareps_{2n+1}-\vareps_{2n+2})=\vareps_{2n+1}-
\vareps_{2n+2}-b\delta-p\delta'.
$$
Then $\phi(I)=I-b\delta-p\delta'$, so
$\phi(I+\frac{p}{q}\delta)=I+\frac{1}{q}\delta$ and
$\phi$ induces an isomorphism $A(n,n)_{p/q}^{(k)}\cong A(n,n)_{1/q}^{(k)}$.
This completes the proof of (iii).
\section{The cases $BC_n, BC(m,n), C(m,n)$}\label{sect6}
\subsection{Case $BC_n$}
Let $cl(R)=BC_n$ and $k=\dim Ker(-,-)$.
Let $\{\vareps_i\}_{i=1}^n$ be an orthonormal basis of $cl(V)$.
Recall that $cl(R)=BC_n$ has three $W(BC_n)$-orbits
$$O_1:=\{\pm\vareps_i\}_{i=1}^n,\ \ O_2:=\{\pm 2\vareps_i\}_{i=1}^n,\ \
O_3:=\{\pm\vareps_i\pm\vareps_j\}_{1\leq i<j\leq n}$$
for $n>1$ and two $W$-orbits, $O_1$ and $O_2$, for $n=1$.
We take $X$ to be a set of simple roots of $B_n=O_1\cup O_3$
($X=\{\vareps_1-\vareps_2,\ldots,\vareps_{n-1}-\vareps_n,\vareps_n\}$)
in the construction of $R'$ (see~\S~\ref{Falphastr}).
Then $R'=B_n$ and $W(BC_n)=W(B_n)$.
We set $H_i:=F(\gamma_i)$ for $\gamma_i\in O_i$ ($i=1,2,3$).
Recall that $-H_i=H_i$ for $i=1,2,3$ and $0\in H_1,H_3$.
\subsubsection{Case $n=1$}
One has $BC_1=\{\pm\vareps_1,\pm 2\vareps_1\}$, $X:=\{\vareps_1\}$.
(GR2) is equivalent to
\begin{equation}\label{eqBC1}
0\in H_1,\ \ \ H_1+2H_1, H_1+H_2\subset H_1,\ \ \ H_2+2H_2, H_2+4H_1\subset H_2.
\end{equation}
Therefore $L:=\mathbb{Z}R\cap Ker(-,-)$ is spanned by $H_1$ and
$$H_1+2L\subset H_1,\ H_2+4L\subset H_2,\ H_2\subset H_1,\ H_2+2H_2\subset H_2.$$
As in~\Thm{thmAGRS1}, we introduce the canonical map
$\iota_2: L\to L/2L\cong\mathbb{F}_2^k$ (where
$k:=\dim Ker(-,-)$). Using~\Thm{thmAGRS1} (ii) we conclude that
$$R=\{\pm (\vareps_1+ \iota_2^{-1}(S))\}\cup\{\pm (2\vareps_1+ H_2)\},$$
where $S\subset \mathbb{F}_2^k=L/2L$ contains zero and a basis
of $\mathbb{F}_2^k$, and $H_2\subset \iota_2^{-1}(S)$ satisfies
$$H_2+4L, H_2+2H_2\subset H_2.$$
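In the smallest case $k=1$ (so $L=\mathbb{Z}$) the closure conditions on $H_2$ can be enumerated by machine: since $H_2+4L\subset H_2$, the set $H_2$ is a union of residue classes modulo $4$, and $H_2+2H_2\subset H_2$ becomes a finite condition on a subset of $\mathbb{Z}/4\mathbb{Z}$. The following Python sketch (an added illustration; it ignores the further constraints $H_2\neq\emptyset$ and $H_2\subset\iota_2^{-1}(S)$) lists the closed subsets.

```python
from itertools import combinations

def closed(T):
    """Check H_2 + 2H_2 subset of H_2 for H_2 given by a subset T of Z/4Z."""
    return all((a + 2 * b) % 4 in T for a in T for b in T)

subsets = [set(c) for r in range(5) for c in combinations(range(4), r)]
closed_subsets = [sorted(T) for T in subsets if closed(T)]
print(closed_subsets)   # [[], [0], [2], [0, 2], [1, 3], [0, 1, 2, 3]]
```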
\subsubsection{Case $n\geq 2$}
(GR2) is equivalent to~(\ref{eqBC1}) and the following
conditions on $H_3$:
$$0\in H_3,\ \ H_1+H_3\subset H_1,\ \ H_2+2H_3\subset H_2,\
\ H_3+2H_1, H_3+H_2, H_3+2H_3\subset H_3,$$
and $H_3+H_3\subset H_3$ for $n>2$.
Set
$$L:=\mathbb{Z}H_3;$$
by above, $Ker(-,-)=L\otimes_{\mathbb{Z}}\mathbb{C}$, so
$L$ has rank $k$.
For $n>2$ each $R$ is of the form $R(S_1,S_2)$, where
$S_1,S_2\subset \mathbb{F}_2^k$ and $0\in S_1$ and $R(S_1,S_2)$
can be described as follows:
$H_1\subset \frac{1}{2}L$ is the preimage of $S_1$ in $\frac{1}{2}L\to \mathbb{F}_2^k=\frac{1}{2}L/L$;
$H_2\subset L$ is the preimage of $S_2$ in $L\to \mathbb{F}_2^k=L/2L$; $H_3=L$.
For $n=2$ each $R$ is of the form
$R(S_1,S_2,H_3)$, where $S_1,S_2$ as above
($S_1,S_2\subset \mathbb{F}_2^k$ and $0\in S_1$),
and $H_3\subset L$ contains $0$, a basis of $L$ and satisfies
$H_2+H_3=2H_1+H_3\subset H_3$ (where $H_1,H_2$ are as for $n>2$).
\subsection{}
\begin{prop}{caseBCmn}
Let $R\subset V$ be a GRRS with $k:=\dim Ker(-,-)$.
(i) The isomorphism classes of GRRSs $R$ with $cl(R)=C(m,n), mn>1$
are in one-to-one correspondence with the equivalence classes
of the proper non-empty
subsets $S$ of the affine space $\mathbb{F}_2^k$
up to the action of an affine automorphism of $\mathbb{F}_2^k$, see~\S~\ref{BCRS} for the description of $R(S)$.
For $m=n$ there is an additional isomorphism
$R(S)\cong R(\mathbb{F}_2^k\setminus S)$.
(ii) The isomorphism classes of GRRSs $R$ with $cl(R)=BC(m,n)$
are in one-to-one correspondence with the equivalence classes
of the pairs of a proper non-empty subset $S$
and a non-empty subset $S'$ of the affine space $\mathbb{F}_2^k$
up to the action of an affine automorphism of $\mathbb{F}_2^k$,
see~\S~\ref{BCRS} for the description of $R(S,S')$.
For $m=n$ there is an additional isomorphism
$R(S, S')\cong R(\mathbb{F}_2^k\setminus S, S')$.
(iii) If $R$ is a GRRS such that $cl(R)=C(1,1)$, then either $R\cong A(1,1)^{(k-1)}$ or $R$ is a ``rational quotient'' $A(1,1)^{(k)}_x$ (for $k=1$ one has
$x\in \mathbb{Q}, 0<x<1/2$, and
for $k>1$ one has $x=1/q$, where $q\in\mathbb{Z}_{>0}$)
of $A(1,1)^{(k)}$, or
$R\cong C(1,1)(S)$ for some non-empty $S\subset\mathbb{F}_2^k$,
see~\S~\ref{BCRS}.
The only isomorphic GRRSs are
$C(1,1)(S)\cong C(1,1)(S')$, where
$S'=\psi(S)$ for some affine automorphism $\psi: \mathbb{F}_2^k\to\mathbb{F}_2^k$, and
$R(S)\cong R(\mathbb{F}_2^k\setminus S)$.
\end{prop}
\subsubsection{Description of $R(S)$}\label{BCRS}
In order to describe the above correspondences in (i)--(iii)
between GRRSs and subsets
in $\mathbb{F}_2^k$ we fix a free abelian group
$L\subset Ker (-,-)$ of rank $k$ and
denote by $\iota_2$ the canonical map
$\iota_2: L\to L/2L\cong\mathbb{F}_2^k$ and by $\iota^{-1}_2(S)$
the preimage of $S\subset \mathbb{F}_2^k$ in $L$.
If $S$ contains zero, then for $cl(R)=C(m,n)$ we take
$$R(S):=\{\pm\vareps_i\pm\vareps_j+L; \pm\delta_s\pm\delta_t+L;
\pm \vareps_i\pm\delta_j+L;
\pm 2\vareps_i+ \iota_2^{-1}(S); \pm 2\delta_s+(L\setminus\iota_2^{-1}(S))\}_{{1\leq i\not=j\leq m}
\atop{1\leq s\not=t\leq n}}.$$
For $BC(m,n)$ we construct $R(S,S')$ by adding to $R(S)$ the roots
$$\{\pm \vareps_i+\frac{1}{2}\iota_2^{-1}(S'); \pm \delta_s+
\frac{1}{2} \iota_2^{-1}(S')\}_{1\leq i\leq m,\ 1\leq s\leq n}.$$
For an arbitrary subset $S$, we take $R(S):=R(S-s)$ (resp., $R(S,S'):=R(S-s,S')$) for some $s\in S$;
the result does not depend on the choice of $s\in S$,
see~\S~\ref{choices}.
\subsubsection{Case $\dim Ker (-,-)=1$}
In this case~\Prop{caseBCmn} gives the following:
for $cl(R)=C(1,1)$, $R$ is either a finite GRRS $A(1,1)$
or $A(1,1)_x$ for $x\in\mathbb{Q}$, or $R(0)$ ($\cong A(1,1)_{1/2}$);
for $cl(R)=C(m,n)$, $R$ is $R(0)$ ($\cong A(2m-1,2n)^{(2)}$);
for $cl(R)=BC(m,n)$, $R$ is either $R(0,0)$ ($\cong A(2n,2m-1)^{(2)}$), or $R(0,1)$ ($\cong A(2m,2n-1)^{(2)}$),
or $R(0,\mathbb{F}_2)$ ($\cong A(2m,2n)^{(4)}$).
Note that $R(0,0)\cong R(1,0)$, $R(1,1)\cong R(0,1)$
and all these GRRSs are isomorphic if $m=n$.
\subsubsection{Isomorphisms}\label{Cmniso}
The conditions when $R(S), R(S')$ (resp., $R(S,S')$ and $R(S_1, S_1')$) are isomorphic can be proven similarly to~\S~\ref{isoRS}. For $m=n$ the involution
$\vareps_i\mapsto\delta_i$ gives rise to the isomorphism
$R(S)\cong R(\mathbb{F}_2^k\setminus S)$
(resp., $R(S, S')\cong R(\mathbb{F}_2^k\setminus S, S')$).
Remark that for $cl(R)=C(1,1)$ one has $A(1,1)_{1/2}\cong R(0)$.
However, in~\Prop{caseBCmn}
we consider only $A(1,1)_x$ for $0<x<1/2$, so
this isomorphism is not mentioned.
\subsection{Proof of~\Prop{caseBCmn}}
Let $X$ be a set of simple roots of $C_m\coprod C_n\subset C(m,n)\subset
BC(m,n)$ (i.e., $X:=\{\vareps_1-\vareps_2,\ldots,\vareps_{m-1}-\vareps_m,
2\vareps_m,\delta_1-\delta_2,\ldots,2\delta_n\}$);
applying the procedure described in~\S~\ref{Falphastr},
we get $R'=C_m\coprod C_n$. The $W(R')$-orbits in $cl(R)$ are
the following: the set of isotropic roots, the set of long
roots of $C_m$ (resp., of $C_n$),
the set of short roots of $C_m$ (resp., of $C_n$), and
for $BC(m,n)$, the set of short roots of $B_m$ (resp., of $B_n$).
Recall that $F(\alpha)$ is the same for elements in the same orbit.
Since all isotropic
roots form one $W(R')$-orbit, $F(-\alpha)=F(\alpha)$
for each isotropic $\alpha$; since $F(-\alpha)=-F(\alpha)$, we get $F(\alpha)=-F(\alpha)$.
We claim that
\begin{equation}\label{MM}
\begin{array}{l}
\forall x,y\in F(\vareps_1-\delta_1)\ \text{ exactly one of }\
x+y\in F(2\vareps_1)\ \text{ and }\ x-y\in F(2\delta_1)\ \text{ holds},\\
F(\vareps_1-\delta_1)+F(2\vareps_1),F(\vareps_1-\delta_1)+F(2\delta_1)
\subset
F(\vareps_1-\delta_1)
\end{array}
\end{equation}
Indeed, for each $x,y\in F(\vareps_1-\delta_1)$
one has $\vareps_1-\delta_1+x,
\vareps_1+\delta_1+y\in R$ so exactly one of two elements $2\vareps_1+x+y$ and
$2\delta_1+y-x$ lies in $R$ (see~\S~\ref{alphabeta}). This establishes the first formula.
The other formulae follow from~(\ref{eqFalphabeta}).
Set
$$L':=\mathbb{Z}(F(2\vareps_1)\cup F(2\delta_1)).$$
Take any $a\in F(\vareps_1-\delta_1)$. By~(\ref{MM}),
\begin{equation}\label{Fe1d1}
F(\vareps_1-\delta_1)=L'\pm a
\end{equation}
and, moreover, for each $b\in L'$ exactly one holds:
$b\in F(2\vareps_1)$ or $b+2a\in F(2\delta_1)$, and, similarly,
$b+2a\in F(2\vareps_1)$ or $b\in F(2\delta_1)$.
Therefore
\begin{equation}\label{C11eq}
L'=F(2\vareps_1)\coprod \bigl((F(2\delta_1)-2a)\cap L'\bigr)=F(2\delta_1)\coprod \bigl((F(2\vareps_1)-2a)\cap L'\bigr).
\end{equation}
Note that $0\in F(2\vareps_1), F(2\delta_1)$ gives $a\not\in L'$.
\subsubsection{Case $C(1,1)$}\label{C11}
If $2a\not\in L'$, then
$F(2\vareps_1)=F(2\delta_1)=L'$. If $a\not\in \mathbb{Q}L'$, then
$R\cong A(1,1)^{(k-1)}$, where $k=\dim Ker(-,-)$
(if $k=1$, then $L'=0$ and $R=A(1,1)$), otherwise
$R\cong A(1,1)_x^{(k-1)}$, see the proof of~\Prop{propAnnx}.
Notice that for $x=p/q$, $2q a\in L'$; we exclude
$q=2$, since $A(1,1)_{1/2}\cong R(0)$, see~\ref{BC11} below.
Consider the case $2a\in L'$.
Since~(\ref{C11eq}) holds for each $a\in F(\vareps_1-\delta_1)$, one has $F(2\delta_1)+2L'=F(2\delta_1)$
and $F(2\vareps_1)+2L'=F(2\vareps_1)$. Now taking
$S:=\iota_2(F(2\vareps_1))$ and the automorphism $\delta_i\mapsto
\delta_i-a$ we get $R\cong R(S)$ as required.
\subsubsection{Case $C(m,n)$ with $mn>1$}\label{Cmn3}
Since $C(m,n)\cong C(n,m)$ we can (and will) assume that $m\geq 2$. Using~(\ref{Fe1d1}) we get
$$F(\vareps_1-\vareps_2)=
F(\vareps_1-\delta_1)+F(\vareps_1-\delta_1)=L'\pm 2a.$$
Since $0\in F(\vareps_1-\vareps_2)$ we obtain
$2a\in L'$ and thus $R\cong R(S)$.
\subsubsection{Case $BC(m,n)$}\label{BC11}
The additional relations include
$$\begin{array}{l}
F(\vareps_1-\delta_1)+F(\delta_1)=F(\vareps_1),\ \ \ \
F(\vareps_1-\delta_1)+F(\vareps_1)=F(\delta_1),\\
F(\vareps_1-\delta_1)+2F(\delta_1),
F(\vareps_1-\delta_1)+2F(\vareps_1)\subset F(\vareps_1-\delta_1),\\
F(\vareps_1)+F(2\vareps_1)\subset F(\vareps_1),
4F(\vareps_1)+F(2\vareps_1)\subset F(2\vareps_1),
\end{array}$$
and similar relations between $F(\delta_1)$ and $F(2\delta_1)$.
In particular,
$$F(\vareps_1-\delta_1)+F(\vareps_1-\delta_1)+F(\vareps_1)=
F(\vareps_1)$$
(so $L'+F(\vareps_1)=F(\vareps_1)$),
and, since $F(\vareps_1-\delta_1)=L'\pm a$,
$2F(\vareps_1)\subset L'\cup (L'-2a)$. Moreover,
$4F(\vareps_1)\subset L'$.
Take $b\in F(\vareps_1)$
and observe that $b\pm 2a\in F(\vareps_1)$.
Since $2F(\vareps_1)\subset L'\cup (L'-2a)$
we get $4a\in L'$ (and $a\not\in L'$ by (\ref{C11eq})).
If $2a\in L'$, we obtain $2F(\vareps_1)\subset L'$
and taking
$S:=\iota_2(F(2\vareps_1))$, $S':=\iota_2(2F(\vareps_1))$
and the automorphism $\delta_i\mapsto
\delta_i-a$ we get $R\cong R(S, S')$ as required.
Let $2a\not\in L'$ (and $2a\in \frac{1}{2}L'$).
Consider an automorphism $\psi: V\to V$ which maps $\delta_1$ to
$\delta_1+a$, and stabilizes
$\vareps_1$ and the elements of $Ker(-,-)$.
Note that $L'$ constructed for $\psi(R)$ is $L'\cup (L'+2a)$,
which is a free group of the same rank as $L'$; moreover,
$$F(\psi(\vareps_1+\delta_1))=L'\cup (L'+2a),\ \
F(\psi(2\vareps_1))=L',\ \ F(\psi(2\delta_1))=L'+2a,$$
and so $\psi(R)=R(\mathbb{F}_2^{k-1},S')$, see above.
This completes the proof of~\Prop{caseBCmn}.
\section{GRRS with finite $cl(R)$ and $\dim Ker(-,-)=1$}\label{sect7}
From the above results, it follows that
the only finite GRRS with a degenerate form $(-,-)$ is $A(n,n)$
(the root system of $\mathfrak{gl}(n,n)$).
As a consequence, if $cl(R)$ is finite and
$R\not=A(n,n)$, then $R$ is affine.
Symmetrizable affine Kac-Moody superalgebras were classified
in~\cite{K2},\cite{vdL}.
Summarizing the above results in the special case when $R$ is
an affine GRRS and
$\dim Ker(-,-)=1$, we see that such GRRSs correspond to
the real roots of symmetrizable affine Kac-Moody superalgebras.
More precisely, except for the case when
$cl(R)$ is the root system of $\mathfrak{psl}(n,n)$,
$n\geq 2$, $R$ is the set of real roots
of some affine Kac-Moody superalgebra $\fg$, see below.
If $cl(R)$ is the root system of $\mathfrak{psl}(n,n)$,
$n\geq 2$, $R$ is a quotient of the set of real roots
of $\mathfrak{pgl}(n,n)^{(1)}$. Conversely: the set of real roots of any affine Kac-Moody superalgebra other than $\mathfrak{gl}(n,n)^{(1)}$ is an affine GRRS with $\dim Ker(-,-)=1$.
If $cl(R)$ is one of $A_n,D_n, E_6,E_7,E_8, A(m,n), m\not=n, C(n), D(m,n), D(2,1,a), F(4), G(3)$,
then $\fg$ is the corresponding non-twisted affine Kac-Moody superalgebra ($R=cl(R)^{(1)}$).
If $cl(R)$ is one of the GRRSs $B_n,C_n, F_4,G_2$ and $B(m,n)$ with $m,n\geq 1$,
then $\fg$ is either the corresponding non-twisted affine Kac-Moody superalgebra or the twisted affine Lie superalgebra
$D_{n+1}^{(2)}, A_{2n-1}^{(2)}, E_6^{(2)}, D_4^{(3)}$ and $D(m+1,n)^{(2)}$ respectively.
If $cl(R)$ is the non-reduced root system
$BC_n=B(0,n)$ ($n\geq 1$), then $\fg$ can be
$B(0,n)^{(1)}, A_{2n}^{(2)}, A(0,2n-1)^{(2)}, C(n+1)^{(2)}$ or $A(0,2n)^{(4)}$ (where $A(0,1)^{(2)}\cong C(2)^{(2)}$
as $A(0,1)\cong C(2)$).
If $cl(R)=BC(m,n)$ ($m,n\geq 1$), then
$\fg=A(2m,2n-1)^{(2)},\ A(2n,2m-1)^{(2)}$ or $A(2m,2n)^{(4)}$.
If $cl(R)=C(m,n)$ with $mn>1$, then
$\fg=A(2m-1,2n-1)^{(2)}$.
\section{\bf Introduction and Results}\label{sec1}
Let $G$ be any group and
$n$ a non-negative integer. For any two
elements $a$ and $b$ of $G$, we define
inductively $[a,_n b]$ the $n$-Engel commutator of the pair
$(a,b)$, as follows:
$$[a,_0 b]:=a,~ [a,b]=[a,_1 b]:=a^{-1}b^{-1}ab \mbox{ and } [a,_n
b]=[[a,_{n-1} b],b]\mbox{ for all }n>0.$$ An element $x$ of $G$
is called right (left, resp.) $n$-Engel if $[x,_ng]=1$ ($[g,_n
x]=1$, resp.) for all $g\in G$. We denote by $R_n(G)$ ($L_n(G)$,
resp.), the set of all right (left, resp.) $n$-Engel elements of
$G$. A group $G$ is called $n$-Engel if $G=L_n(G)$ or
equivalently $G=R_n(G)$. It is clear that $R_0(G)=1$,
$R_1(G)=Z(G)$ the center of $G$, and W.P. Kappe
\cite{kappew} (implicitly) proved $R_2(G)$ is a characteristic
subgroup of $G$. L.C. Kappe and Ratchford \cite{kappe2} have shown that $R_n(G)$ is a subgroup of $G$ whenever
$G$ is a metabelian group, or $G$ is a
center-by-metabelian group such that $[\gamma_k(G),\gamma_j(G)]=1$
for some $k,j\geq 2$ with $k+j-2\leq n$ and $n\geq 3$. Macdonald \cite{macd} has shown
that the inverse or square of a right 3-Engel element need not be
right 3-Engel. Nickel \cite{nick2} generalized Macdonald's result to all $n
\geq 3$. In fact he constructed a group with a right $n$-Engel
element $a$ such that neither $a^{-1}$ nor $a^2$ is a right $n$-Engel
element. The construction of Nickel's example was guided by
computer experiments and arguments based on commutator
calculus. Although Macdonald's example shows that $R_3(G)$ is not
in general a subgroup of $G$, Heineken \cite{hein} has already shown that
if $A$ is the subset of a group $G$ consisting of all elements $a$
such that $a^{\pm 1}\in R_3(G)$, then $A$ is a subgroup if either
$G$ has no element of order $2$ or $A$ consists only of elements
having finite odd order. Newell \cite{newell} proved that the
normal closure of every right $3$-Engel element is nilpotent of
class at most $3$. In Section 2 we prove that if $G$ is a
$2'$-group, then $R_3(G)$ is a subgroup of $G$. Nickel's example
shows that the set of right 4-Engel elements is not a subgroup in
general (see also first Example in Section 4 of \cite{ab1}). In
Section 3, we prove that if $G$ is a locally nilpotent
$\{2,3,5\}'$-group, then $R_4(G)$ is a subgroup of $G$.
Traustason \cite{traus} proved that any locally nilpotent 4-Engel
group $H$ is Fitting of degree at most $4$. This means that the
normal closure of every element of $H$ is nilpotent of class at
most 4. More precisely he proved that if $H$ has no element of
order $2$ or $5$, then $H$ has Fitting degree at most $3$. Now by a result of Havas and Vaughan-Lee \cite{havas}, one knows any 4-Engel group is locally nilpotent and so Traustason's result is true for all $4$-Engel groups. In Section 3, by another result of Traustason \cite{traus2} we show that the normal closure of every right $4$-Engel element in a locally nilpotent
$\{2,3,5\}'$-group, is nilpotent of class at most $7$.
Newman and Nickel \cite{newman} have shown that for every $n\geq
5$ there exists a nilpotent group $G$ of class $n+2$ containing a
right $n$-Engel element $a$ and an element $b$ such that $[b,_na]$
has infinite order. As we mentioned above, Nickel
\cite{nick2} has shown that for every $n\geq 3$ there exists a
nilpotent group of class $n+2$ having a right $n$-Engel element
$a$ and an element $b$ such that $[a^{-1},_n b]=[a^2,_n b]\neq 1$. We have checked that the latter element in Nickel's example
is of finite order whenever $n\in\{5,6,7,8\}$. In Section \ref{se4}, using
the group constructed by Newman and Nickel we show
that there exists a nilpotent group $G$ of class $n+2$ containing elements
$x\in R_n(G)$ and $a\in G$ such that both $[x^{-1},_n a]$ and $[x^{k},_n a]$ have infinite order for every integer
$k\geq 2$.\\
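The $n$-Engel commutator defined above is easy to experiment with in small permutation groups. The following Python sketch (an added illustration; the computations in this paper themselves rely on the {\sf GAP} {\sf nq} package) checks that the dihedral group of order $8$, being nilpotent of class $2$, is $2$-Engel, while in $S_3$ a $3$-cycle $a$ and a transposition $b$ satisfy $[a,_n b]\neq 1$ for all $n$, so $S_3$ is not an Engel group.

```python
def mul(p, q):                     # composition of permutations: apply q, then p
    return tuple(p[q[i]] for i in range(len(p)))

def inv(p):
    w = [0] * len(p)
    for i, pi in enumerate(p):
        w[pi] = i
    return tuple(w)

def comm(a, b):                    # [a, b] = a^{-1} b^{-1} a b
    return mul(mul(inv(a), inv(b)), mul(a, b))

def engel(a, b, n):                # the n-Engel commutator [a, _n b]
    c = a
    for _ in range(n):
        c = comm(c, b)
    return c

e4 = (0, 1, 2, 3)

# Dihedral group of order 8: nilpotent of class 2, hence every pair is 2-Engel
r, s = (1, 2, 3, 0), (3, 2, 1, 0)
D4 = {e4}
while True:
    bigger = D4 | {mul(x, g) for x in D4 for g in (r, s)}
    if bigger == D4:
        break
    D4 = bigger
assert len(D4) == 8
assert all(engel(x, y, 2) == e4 for x in D4 for y in D4)

# S_3 (acting on {0,1,2}, with 3 fixed) is not an Engel group
a, b = (1, 2, 0, 3), (1, 0, 2, 3)
assert all(engel(a, b, n) != e4 for n in range(1, 8))
```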
In \cite{ab1} the following question has been proposed:
\begin{qu}\label{q1}
Let $n$ be a positive integer. Is there a set of prime
numbers $\pi_n$ depending only on $n$ such that the set
of right $n$-Engel elements in any nilpotent or finite
$\pi'_n$-group forms a subgroup?
\end{qu}
In Section 4 we answer
Question \ref{q1} in the negative.\\
As far as we know there is no published example of a group whose set of (bounded) right Engel elements does not form a subgroup. For the set of bounded left Engel elements, however, there is some evidence supporting the idea that the bounded left Engel elements of an arbitrary group need not form a subgroup. We finish the paper by proving that at least one of the following holds:
\begin{enumerate}
\item There is an infinite finitely generated $k$-Engel group of exponent $n$ for some positive integer $k$ and some power $n$ of $2$.
\item There is a group generated by finitely many bounded left Engel elements which is not an Engel group, where by an Engel group we mean a group in which for every two elements $x$ and $y$, there exists an integer $k=k(x,y)\geq 0$ such that $[x,_k y]=1$.
\end{enumerate}
Throughout the paper we frequently use the {\sf nq}
package of Werner Nickel for {\sf GAP}.
All timings given were obtained on an Intel
Pentium 4 1.70GHz processor with 512 MB of RAM running Red Hat Enterprise Linux 5.\\
\section{\bf Right 3-Engel elements}\label{se2}
Throughout for any positive integer $k$ and any group $H$, $\gamma_k(H)$ denotes the $k$th term of the lower central series of $H$.
The main result of this section implies that $R_3(G)$ is a subgroup of $G$ whenever $G$ is a $2'$-group.
Newell \cite{newell} proved that
\begin{thm}\label{th1}
Let $G=\langle a,b,c\rangle$ be a group such that $a,b\in R_3(G)$.
Then
\begin{enumerate}
\item $\langle a,c\rangle$ is nilpotent of class at most $5$ and
$\gamma_5(\langle a,c\rangle)$ has exponent $2$.
\item $G$ is nilpotent of class at most $6$.
\item $\frac{\gamma_5(G)}{\gamma_6(G)}$ has exponent $10$.
Furthermore $[a,c,b,c,c]^2\in \gamma_6(G)$.
\item $\gamma_6(G)$ has exponent $2$.
\end{enumerate}
\end{thm}
\begin{thm}\label{th0}
Let $G$ be a group such that $\gamma_5(G)$ has no element of order $2$. Then $R_3(G)$ is a subgroup of $G$.
\end{thm}
\begin{proof}
Let $a,b\in R_3(G)$ and $c\in G$. We first show that $a^{-1}\in R_3(G)$. We have
\begin{eqnarray}
[a^{-1},c,c,c]&=&[[[a,c,a^{-1}]^{-1}[a,c]^{-1},c],c]\nonumber\\
&=&[[a,c,a,a^{-1},c][a,c,a,c],c][[a,c,c,[a,c]^{-1}][a,c,c]^{-1},c]\nonumber\\
&=&[a,c,a,c,c][a,c,c,c]^{[a,c,c]}\nonumber\\
&=&[a,c,a,c,c]\nonumber
\end{eqnarray}
Therefore by Theorem \ref{th1} (2), $a^{-1}\in R_3(G)$. On the
other hand
\begin{eqnarray}
[ab,c,c,c]&=&[[a,c][a,c,b][b,c],c,c]\nonumber\\
&=&[[a,c,c][a,c,c,[a,c,b]][[a,c,c],[b,c]][a,c,b,c]
[b,c,b,c,[b,c]][b,c,c],c]\nonumber\\
&=&[a,c,c,[b,c],c][a,c,b,c,c].\nonumber
\end{eqnarray}
Now by Theorem \ref{th1} $[a,c,c,[b,c],c],~[a,c,b,c,c]^2 \in
\gamma_6(G)$ and thus $ab\in R_3(G)$.
\end{proof}
Now we give a proof of Theorem \ref{th0} by using {\sf GAP nq}
package of Werner Nickel.
\noindent{\bf Second Proof of Theorem \ref{th0}.} By Theorem
\ref{th1}, we know that $\langle x,y,z\rangle$ is nilpotent if
$x,y\in R_3(G)$ and $z\in G$. We now construct the largest
nilpotent group $H=\langle a,b,c \rangle$ such that $a,b\in
R_3(H)$ and $c\in H$, by {\sf nq} package.
\begin{verbatim}
LoadPackage("nq"); #nq package of Werner Nickel #
F:=FreeGroup(4);a1:=F.1; b1:=F.2; c1:=F.3; x:=F.4;
L:=F/[LeftNormedComm([a1,x,x,x]),LeftNormedComm([b1,x,x,x])];
H:=NilpotentQuotient(L,[x]);
a:=H.1; b:=H.2; c:=H.3; d:=LeftNormedComm([a^-1,c,c,c]);
e:=LeftNormedComm([a*b,c,c,c]); Order(d); Order(e);
C:=LowerCentralSeries(H); d in C[5]; e in C[5];
\end{verbatim}
Then if we consider the elements $d=[a^{-1},c,c,c]$ and
$e=[ab,c,c,c]$ of $H$, we can see by the above commands in {\sf GAP} that $d$ and $e$ are elements of $\gamma_5(H)$ and have orders $2$ and
$4$, respectively. So, in the group $G$, we have $d=e=1$.
This completes the proof. $\hfill \Box$\\
Note that the second proof of Theorem \ref{th0} also shows the necessity of the assumption that $\gamma_5(G)$ has no element of order $2$.
\section{\bf Right 4-Engel elements}\label{se3}
Our main result in this section is to prove the following.
\begin{thm}\label{th2}
Let $G$ be a $\{2,3,5\}'$-group such that $\langle a,b,x\rangle$ is nilpotent for all $a,b\in R_4(G)$ and any $x\in G$. Then $R_4(G)$
is a subgroup of $G$.
\end{thm}
\begin{proof}
Consider the `freest' group, denoted by $U$, generated by two elements $u$, $v$
with $u$ a right 4-Engel element. By this we mean the group $U$
given by the presentation
$$\langle u,v \;|\; [u,_4 x]=1 \;\;\text{for all words}\;\; x \in F_2\rangle,$$
where $F_2$ is the free group generated by $u$ and $v$.
We do not know whether $U$ is nilpotent or not.
A computation with the {\sf nq} package shows that the group
$U$ has a largest nilpotent quotient $M$ of
class $8$.
By the following code, the
group $M$ generated by a right $4$-Engel element $a$ and an
arbitrary element $c$ is constructed.
We then see that the element $[a^{-1},c,c,c,c]$ of $M$
is of order $375=3\times 5^3$. Therefore the inverse of a right
$4$-Engel element of $G$ is again a right $4$-Engel element. The
following code in {\sf GAP} gives a proof of the latter claim. The computation
was completed in about 24 seconds.
\begin{verbatim}
F:=FreeGroup(3); a1:=F.1; b1:=F.2; x:=F.3;
U:=F/[LeftNormedComm([a1,x,x,x,x])];
M:=NilpotentQuotient(U,[x]);
a:=M.1; c:=M.2;
h:=LeftNormedComm([a^-1,c,c,c,c]);
Order(h);
\end{verbatim}
We now show that the product of every two
right 4-Engel elements in $G$
is a right 4-Engel element. Let $a,b\in R_4(G)$ and $c\in G$. Then
we claim that
$$H=\langle a,b,c\rangle \;\; \text{is nilpotent of class at most}\; 7. \;\;\;(*)$$ By induction on the nilpotency class of $H$, we may assume that $H$ is
nilpotent of class at most 8. Now we construct the largest nilpotent group
$K=\langle a_1,b_1,c_1\rangle$ of class 8 such that $a_1,b_1\in R_4(K)$.
\begin{verbatim}
F:=FreeGroup(4);A:=F.1; B:=F.2; C:=F.3; x:=F.4;
W:=F/[LeftNormedComm([A,x,x,x,x]),LeftNormedComm([B,x,x,x,x])];
K:=NilpotentQuotient(W,[x],8);
LowerCentralSeries(K);
\end{verbatim}
The computation took about 22.7 hours. We see that $\gamma_8(K)$
has exponent $60$. Therefore, as $H$ is a $\{2,3,5\}'$-group, we have
$\gamma_8(H)=1$ and this completes the proof of our claim $(*)$. \\
Therefore we have proved that any nilpotent group without elements of orders $2$, $3$ or $5$ which is generated by three elements two of which are right $4$-Engel, is nilpotent of class at most $7$.\\
Now we construct, by the {\sf nq} package, the largest nilpotent group $S$ of class $7$ generated by two right $4$-Engel elements $s,t$ and an arbitrary element $g$. Then one can find by {\sf GAP} that the order of $[st,g,g,g,g]$ in $S$ is
300. Since $H$ is a quotient of $S$, we have that $[ab,c,c,c,c]$ is of order dividing $300$ and so it is trivial, since $H$ is a $\{2,3,5\}'$-group.
This completes the proof.
\end{proof}
\begin{cor}\label{co1}
Let $G$ be a $\{2,3,5\}'$-group such that $\langle a,b,x\rangle$ is nilpotent for all $a,b\in R_4(G)$ and for any $x\in G$. Then $R_4(G)$ is a nilpotent group of class at most $7$. In particular, the normal closure of every right $4$-Engel element of the group $G$ is nilpotent
of class at most $7$.
\end{cor}
\begin{proof}
By Theorem \ref{th2}, $R_4(G)$ is a subgroup of $G$ and so it
is a 4-Engel group. In \cite{traus2} it is shown that every
locally nilpotent 4-Engel $\{2,3,5\}'$-group is nilpotent of class at most 7.
Therefore $R_4(G)$ is nilpotent of class at most 7. Since $R_4(G)$ is a normal set, the second part follows easily.
\end{proof}
Therefore, to prove that the normal closure of any right $4$-Engel element of a $\{2,3,5\}'$-group $G$ is nilpotent, it is enough to show that
$\langle a,b,x\rangle$ is nilpotent for all $a,b\in R_4(G)$ and for any $x\in G$. It is worth noting that Newell \cite{newell} faced a similar obstacle in proving that the normal closure of a right $3$-Engel element is nilpotent in any group.
\begin{cor}
In any $\{2,3,5\}'$-group, the normal closure of any right $4$-Engel element is nilpotent if and only if every $3$-generator subgroup in which two of the generators can be chosen to be right $4$-Engel, is nilpotent.
\end{cor}
\begin{proof}
By Corollary \ref{co1}, it is enough to show that a $\{2,3,5\}'$-group $H=\langle a,b,x\rangle$ is nilpotent whenever $a,b\in R_4(H)$, $x\in H$ and both $\langle a\rangle^H$ and $\langle b\rangle ^H$ are nilpotent. Consider the subgroup $K=\langle a\rangle ^H\langle b\rangle^H$ which is nilpotent by Fitting's theorem. Now we prove that $K$ is finitely generated. We have $K=\langle a,b\rangle^{\langle x\rangle}$ and since $a$ and $b$ are both right $4$-Engel, it is well-known that
$$\langle a\rangle^{\langle x\rangle}=\langle a,a^x,a^{x^2},a^{x^3}\rangle \;\;\text{and}\;\; \langle b\rangle^{\langle x\rangle}=\langle b,b^x,b^{x^2},b^{x^3}\rangle,$$
and so $$K=\langle a,a^x,a^{x^2},a^{x^3},b,b^x,b^{x^2},b^{x^3} \rangle.$$
It follows that $H$ satisfies the maximal condition on its subgroups, as it is (finitely generated nilpotent)-by-cyclic. Now by a famous result of Baer \cite{Baer}, $a$ and $b$ lie in some term $\zeta_m(H)$ of the upper central series of $H$, where $m$ is a positive integer. Hence $H/\zeta_m(H)$ is cyclic and so $H$ is nilpotent. This completes the proof.
\end{proof}
We conclude this section with the following interesting information on the group $M$ in the proof of Theorem \ref{th2}.
In fact, for the largest nilpotent group $M=\langle a,b\rangle$ relative to $a\in R_4(M)$, we have that $M/T$ is isomorphic to the largest (nilpotent) $2$-generated $4$-Engel group $E(2,4)$, where $T$ is the torsion subgroup of $M$, which is a $\{2,3,5\}$-group. Therefore, in a nilpotent $\{2,3,5\}'$-group, a right $4$-Engel element together with an arbitrary element generates a $4$-Engel group. This can be seen by comparing the presentations of $M/T$ and $E(2,4)$ as follows. One can obtain two finitely presented groups {\sf G1} and {\sf G2} isomorphic to $M/T$ and $E(2,4)$, respectively, by {\sf GAP}:
\begin{verbatim}
MoverT:=FactorGroup(M,TorsionSubgroup(M));
E24:=NilpotentEngelQuotient(FreeGroup(2),4);
iso1:=IsomorphismFpGroup(MoverT);iso2:=IsomorphismFpGroup(E24);
G1:=Image(iso1);G2:=Image(iso2);
\end{verbatim}
Next, we find the relators of the groups {\sf G1} and {\sf G2} which are two sets of relators on 13 generators by the following command in {\sf GAP}.
\begin{verbatim}
r1:=RelatorsOfFpGroup(G1);r2:=RelatorsOfFpGroup(G2);
\end{verbatim}
Now, save these two sets of relators by {\sf LogTo} command of {\sf GAP} in a file and go to the file to delete the terms as
\begin{verbatim}
<identity ...>
\end{verbatim}
in the sets {\sf r1} and {\sf r2}. Now call these two modified sets {\sf R1} and {\sf R2}. We show that {\sf R1=R2} as two sets of elements of the free group {\sf f} on 13 generators {\sf f1,f2,...,f13}.
\begin{verbatim}
f:=FreeGroup(13);
f1:=f.1;f2:=f.2;f3:=f.3;f4:=f.4;f5:=f.5;f6:=f.6;
f7:=f.7;f8:=f.8;f9:=f.9;f10:=f.10;f11:=f.11;f12:=f.12;f13:=f.13;
\end{verbatim}
Now by {\sf Read} function, load the file in {\sf GAP} and type the simple command
{\sf R1=R2}. This gives us {\sf true}, which shows that $G_1$ and $G_2$ are two finitely presented groups with the same generators and relators, and so they are isomorphic. We cannot guarantee that another user following the same procedure would obtain exactly the same relators for the {\sf Fp} groups {\sf G1} and {\sf G2} as we have found. We also remark that using the function {\sf IsomorphismGroups} to test whether $G_1\cong G_2$ did not give us a result within 10 hours, and we do not know whether this function can give a result at all. \\
We summarize the above discussion as follows.
\begin{thm}
Let $G$ be a nilpotent group generated by two elements, one of which is a right $4$-Engel element. If $G$ has no element of order $2$, $3$ or $5$, then $G$ is a $4$-Engel group of class at most $6$.
\end{thm}
\section{\bf Right $n$-Engel elements for $n\geq 5$}\label{se4}
In this section we show that for every $n\geq 5$ there is a
nilpotent group $G$ of class $n+2$ containing elements $a$
and $x\in R_n(G)$ such that both $[x^{k},_n a]$ and $[x^{-1},_n a]$ have infinite order for all integers $k\geq 2$.
Note that by Nickel's example \cite{nick2}, for every $n\geq 3$ there is already a nilpotent group $K$ of class $n+2$ containing a right $n$-Engel element $x$ such that $[x^{-1},_n y]=[x^{2},_n y]\not=1$ for some $y\in K$, i.e., neither $x^2$ nor $x^{-1}$ is right $n$-Engel. We have checked, by the {\sf nq} package of Nickel in {\sf GAP}, that $[x^{-1},_n y]=[x^{2},_n y]$ is of finite order whenever $n\in\{5,6,7,8\}$. In fact,
\begin{enumerate}
\item $o([x^{-1},_5 y])=3$, ~~~~~~~~ NqRuntime=1.7 Sec
\item $o([x^{-1},_6 y])=7$, ~~~~~~~~ NqRuntime=54.8 Sec
\item $o([x^{-1},_7 y])=4$, ~~~~~~~~ NqRuntime=1702 Sec
\item $o([x^{-1},_8 y])=9$, ~~~~~~~~ NqRuntime=56406 Sec
\end{enumerate}
Newman and Nickel \cite{newman} constructed a group $H$ as follows.
Let $F$ be the relatively free group, generated by $\{a,b\}$ with
nilpotency class $n+2$ and $\gamma_4( F)$ abelian. Let $M$ be the
(normal) subgroup of $F$ generated by all commutators in $a$, $b$
with at least 3 entries $b$ and the commutators $[b,_{n+1}a]$ and
$[b,_na,b]$. Then $\displaystyle H=\frac{F}{M}$. Note that the
normal closure of $b$ in $H$ is nilpotent of class 2.
We denote the generators of $H$ by $a, b$ again. Put $$ t=[b,_n a],
~~~u_j=[b,_{n-1-j}a,b,_j a],~~~ 0\leq j\leq n-2,$$ $$~~
u=\prod_{j=0}^{n-2}u_j,~~~v=[u_{n-2},a],~~~
w=\prod_{j=0}^{n-3}[u_j,a]
$$ and let $N$ be the subgroup $\langle tuw, t^2w, uw\rangle$.
Then $aN$ is a right $n$-Engel element in $\displaystyle \frac{H}{N}$ and
$[b,_n a]N$ has infinite order in $\displaystyle \frac{H}{N}$.
Now let $H$ be the above group and $N_0:=\langle
u,vw,vt^{-1}\rangle$. First, note that $N_0$ is a normal subgroup of $H$. For, clearly $t,v,w\in Z(H)$ and $u^b=u$. Also it is not hard to see that
$u_j^a=u_j[u_j,a]$ and thus $u^a=u vw$. This means that $N_0^a=N_0$ and so $N_0$ is a normal subgroup of $H$. Now we can state our main result of this section:
\begin{thm}\label{th3}
$[b,_n a]N_0=[b^{-2},_n a]N_0$ and it has infinite order in $\displaystyle
\frac{H}{N_0}$ and $[b^{-1},_n h]\in N_0$ for all $h\in H$.
Furthermore $[b^{-k},_n a]N_0=v^{\binom{k}{2}}N_0$ for all $k\geq 2$.
\end{thm}
\begin{rem} \label{re1} As in \cite{newman}, the proof of Theorem \ref{th3} involves a
series of commutator calculations based, as
usual, on the basic identities listed below, which are mentioned in \cite{newman}. We reproduce them here for the reader's convenience.
\begin{enumerate}
\item $[g, cd]=[g,d][g, c] [g, c,d]$.
\item $[cd,g]=[c,g][c,g,d][d,g]$.
\item $[c^{-1},d]=[c,d,c^{-1}]^{-1}[c,d]^{-1}$.
\item $[c,d^{-1}]=[c,d,d^{-1}]^{-1}[c,d]^{-1}$.
\item $[hk,h_1,\dots,h_s]=[h,h_1,\ldots,h_s]$ for every $k$ in
$\gamma_{n+3-s}(H)$ and arbitrary $h_1,\dots,h_s\in H$.
\item $[g,d,c]=[g,c,d][g,[d,c]]k$, where $k$ is a
product of commutators of weight at least $4$ with entries $g$,
$c$ and $d$.
\item $[a,_n hk]=[a,_n h]$ for all $h\in H$ and $k\in \gamma_3(H)$.
\item $[g,d^{\delta}]=[g,d]^{\delta}[g,_2 d]^{\binom{\delta}{2}}k$,
where $k$ is a product of commutators with at least $3$
entries $d$ and $\delta$ is a positive integer.
\end{enumerate}
\end{rem}
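These identities hold in any group, so they can be checked numerically, e.g. in the symmetric group $S_5$. The following Python sketch (helper names {\sf mul}, {\sf inv}, {\sf comm} are ours) verifies identities (1) and (3) on random triples:

```python
import random
from itertools import permutations

# Elements of S_5 as tuples; composition (p*q)(i) = p(q(i)).
def mul(p, q):
    return tuple(p[q[i]] for i in range(len(p)))

def inv(p):
    r = [0] * len(p)
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

def comm(a, b):
    # [a, b] = a^{-1} b^{-1} a b
    return mul(mul(inv(a), inv(b)), mul(a, b))

random.seed(0)
S5 = list(permutations(range(5)))
ok = True
for _ in range(200):
    g, c, d = (random.choice(S5) for _ in range(3))
    # identity (1): [g, cd] = [g,d][g,c][g,c,d]
    ok &= comm(g, mul(c, d)) == mul(mul(comm(g, d), comm(g, c)),
                                    comm(comm(g, c), d))
    # identity (3): [c^{-1}, d] = [c,d,c^{-1}]^{-1} [c,d]^{-1}
    ok &= comm(inv(c), d) == mul(inv(comm(comm(c, d), inv(c))),
                                 inv(comm(c, d)))
```

Such spot checks are of course no substitute for the algebraic verification, but they guard against transcription errors in the identities.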
\noindent{\bf Proof of Theorem \ref{th3}.} By Remark \ref{re1}(7),
we may assume that $h$ is of the form
$a^{\alpha}b^{\beta}[b,a]^{\gamma}$. The following calculations
may depend to the signs of $\alpha$ and $\beta$; we here
outline only the case in which $\alpha$ and $\beta$ are positive.
\begin{eqnarray}
[b^{-1},_n h]&=&[b^{-1},_n a^{\alpha}b^{\beta}[b,a]^{\gamma}]\nonumber\\
&=&\displaystyle [b^{-1},_n a^{\alpha}b^{\beta}]
\prod_{j=0}^{n-1}[b^{-1},_{n-1-j}a^{\alpha}b^{\beta},[b,a]^{\gamma},_j a^{\alpha}b^{\beta}]\nonumber\\
&=&\displaystyle [b^{-1},_n a^{\alpha}b^{\beta}]\big([b,[b,a],_{n-1}a]
\prod_{j=0}^{n-2}[b,_{n-1-j}a,[b,a],_j
a]\big)^{-\alpha^{n-1}\gamma}.\nonumber
\end{eqnarray}
Since
\begin{eqnarray}
[b,[b,a],_{n-1}a]&=&[[[b,a],b]^{-1},_{n-1} a]\nonumber\\
&=&[b,a,b,_{n-1} a]^{-1}\nonumber\\
&=&v^{-1}\nonumber
\end{eqnarray}
and by Remark \ref{re1} (5) and (6)
$$[b,_{n-1-j}a,[b,a],_j a]=[b,_{n-j}a,b,_j
a]^{-1}[b,_{n-1-j}a,b,_{j+1}a]$$ we have
\begin{eqnarray}
\displaystyle\prod_{j=0}^{n-2}[b,_{n-1-j}a,[b,a],_j
a]&=&\prod_{j=0}^{n-2}[b,_{n-j}a,b,_j a]^{-1}[b,_{n-1-j}a,b,_{j+1} a]\nonumber\\
&=&\prod_{j=0}^{n-3}[b,_{n-1-j}a,b,_{j+1}
a]^{-1}\prod_{j=0}^{n-2}[b,_{n-1-j}a,b,_{j+1} a]
\nonumber\\
&=&v.\nonumber
\end{eqnarray}
Therefore
\begin{eqnarray}
\displaystyle[b^{-1},_n a^{\alpha}b^{\beta}[b,a]^{\gamma}]
&=&[b^{-1},_n a^{\alpha}b^{\beta}](v^{-1}v)^{-\alpha^{n-1}\gamma}\nonumber\\
&=&[b^{-1},_n a^{\alpha}b^{\beta}].\nonumber
\end{eqnarray}
On the other hand by Remark \ref{re1} (8) we have
\begin{eqnarray}
\displaystyle[b^{-1},_n a^{\alpha}b^{\beta}]&=& [b^{-1},_n a^{\alpha}]
\prod_{j=0}^{n-2}[b^{-1},_{n-1-j} a^{\alpha},b^{\beta},_j a^{\alpha}]\nonumber\\
&=&[b^{-1},_n a]^{\alpha^n}[b^{-1},_{n+1}
a]^{n\binom{\alpha}{2}\alpha^{n-1}}(\displaystyle\prod_{j=0}^{n-2}
[b,_{n-1-j}a,b,_ja])^{-\alpha^{n-1}\beta}\nonumber\\
&&\times (\prod_{j=0}^{n-3}[b,_{n-1-j}
a,b,_{j+1}a])^{-(n-2)\binom{\alpha}{2}\alpha^{n-2}\beta}\nonumber\\
&&\times[b,a,b,_{n-2}a]^{-(n-2)\binom{\alpha}{2}\alpha^{n-2}\beta}
[b,_na,b]^{-(n-2)\binom{\alpha}{2}\alpha^{n-2}\beta}\nonumber\\
&=&(vt^{-1})^{\alpha^n}
u^{-\alpha^{n-1}\beta}(vw)^{-(n-2)\binom{\alpha}{2}\alpha^{n-2}\beta}.\nonumber
\end{eqnarray}
Therefore $b^{-1}N_0$ is a right $n$-Engel element in $\displaystyle
\frac{H}{N_0}$. This completes the second part of the theorem.
Since $\langle t,u,v,w\rangle$ is a free abelian group of rank 4,
it is clear that $[b,_na]N_0$ has infinite order. On the other hand
\begin{eqnarray}
\displaystyle [b^{-2},_na]&=&[[b^{-1}, a][b^{-1},a,b^{-1}][b^{-1},a],_{n-1}a]\nonumber\\
&=&[b^{-1},_n
a][b^{-1},a,b^{-1},_{n-1}a][b^{-1},a,b^{-1},[b,a],_{n-2}a][b^{-1},_na]\nonumber\\
&\equiv&[b,a,b,_{n-1}a]\mod N_0\nonumber\\
&\equiv&v\mod N_0.\nonumber
\end{eqnarray}
Since $vt^{-1}\in N_0$ we have $[b,_n a]N_0=tN_0=vN_0=[b^{-2},_n a]N_0$. Now
let $k\geq 2$, $f(1)=0$ and $f(k)=(k-1)+f(k-1)=\binom{k}{2}$. Then
\begin{eqnarray}
\displaystyle [b^{-k},_na]&=&[[b^{-1}, a][b^{-1},a,b^{-(k-1)}][b^{-1},a],_{n-1}a]\nonumber\\
&=&[b^{-1},_n a][b^{-1},a,b^{-(k-1)},_{n-1}a][b^{-1},a,b^{-(k-1)},[b,a],_{n-2}a]
[b^{-(k-1)},_na]\nonumber\\
&\equiv&[b,a,b^{(k-1)},_{n-1}a]v^{f(k-1)}\mod N_0\nonumber\\
&\equiv&v^{f(k)}\mod N_0.\nonumber
\end{eqnarray}
This completes the proof. $\hfill \Box$\\
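As a quick sanity check, the recurrence $f(1)=0$, $f(k)=(k-1)+f(k-1)$ used in the proof above indeed closes to $\binom{k}{2}$; a Python sketch (the helper name {\sf f} is ours):

```python
from math import comb

def f(k):
    # f(1) = 0, f(k) = (k-1) + f(k-1)
    return 0 if k == 1 else (k - 1) + f(k - 1)

# compare against the closed form binom(k, 2)
checks = [f(k) == comb(k, 2) for k in range(1, 25)]
```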
Now we answer in the negative Question \ref{q1}, which was proposed in \cite{ab1}.
Let $T$ be the torsion subgroup of $H/N_0$ and set $x=bN_0T$ and $y=aN_0T$. Then the group $\mathcal{M}=H/N_0T=\langle x,y\rangle$ is torsion-free and nilpotent of class $n+2$, $x\in R_n(\mathcal{M})$, and both $[x^{-1},_n y]$ and $[x^{k},_n y]$ are of infinite order for all integers $k\geq 2$. Since, for any given prime number $p$, a finitely generated torsion-free nilpotent group is residually a finite $p$-group, it follows that for any prime number $p$ and integer $k\geq 2$, there is a finite $p$-group $G(p,k)$ of class $n+2$ containing a right $n$-Engel element $t$ such that neither $t^{k}$ nor $t^{-1}$ is right $n$-Engel. This answers Question \ref{q1} in the negative.
\section{\bf Subgroupness of the set of (bounded) Left Engel elements of a group}
Let $n=2^k\geq 2^{48}$ and $B(X,n)$ be the free Burnside group on the set $X=\{x_i \;|\; i\in \mathbb{N}\}$ of the Burnside variety of exponent $n$ defined by the law
$x^n=1$. Lemma 6 of \cite{IO} states that the subgroup $\langle x_{2k-1}^{n/2}x_{2k}^{n/2}\;|\; k=1,2,\dots\rangle$ of $B(X,n)$ is isomorphic to $B(X,n)$ under the map $x_{2k-1}^{n/2}x_{2k}^{n/2}\mapsto x_k$, $k=1,2,\dots$. Therefore the subgroup
$\mathcal{G}:=\langle x_1^{n/2},x_2^{n/2},x_3^{n/2},x_4^{n/2}\rangle$ is generated by four elements of order $2$ and contains the subgroup $\mathcal{H}=\langle x_1^{n/2}x_2^{n/2},x_3^{n/2}x_4^{n/2} \rangle$ isomorphic to the free $2$-generator Burnside group $B(2,n)$ of exponent $n$. Recall the well-known formula $$[x,_ky]=[x,y]^{(-1)^{k-1}2^{k-1}},$$ which holds for all elements $x$, all elements $y$ of order $2$ in any group, and all integers $k\geq 1$. It follows that the group $\mathcal{G}$ can be generated by four left $49$-Engel elements of $\mathcal{G}$. Thus $$\mathcal{G}=\langle L_{49}(\mathcal{G})\rangle=\langle L(\mathcal{G})\rangle=\langle \overline{L}(\mathcal{G})\rangle,$$
where $L(H)$ ($\overline{L}(H)$, resp.) denotes the set of (bounded, resp.) left Engel elements of a group $H$.\\
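The formula $[x,_k y]=[x,y]^{(-1)^{k-1}2^{k-1}}$ for $y$ of order $2$ is easy to verify numerically in a small group such as $S_5$; the following Python sketch (all helper names are ours) checks it with a transposition $y$:

```python
import random
from itertools import permutations

def mul(p, q):
    # composition of permutations given as tuples
    return tuple(p[q[i]] for i in range(len(p)))

def inv(p):
    r = [0] * len(p)
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

def comm(a, b):
    # [a, b] = a^{-1} b^{-1} a b
    return mul(mul(inv(a), inv(b)), mul(a, b))

def engel(x, y, k):
    # left-normed commutator [x, y, y, ..., y] with k copies of y
    c = comm(x, y)
    for _ in range(k - 1):
        c = comm(c, y)
    return c

def power(p, m):
    # p^m for any integer m
    if m < 0:
        p, m = inv(p), -m
    r = tuple(range(len(p)))
    for _ in range(m):
        r = mul(r, p)
    return r

random.seed(0)
S5 = list(permutations(range(5)))
y = (1, 0, 2, 3, 4)          # a transposition, so y has order 2
ok = all(engel(x, y, k) == power(comm(x, y), (-2) ** (k - 1))
         for x in random.sample(S5, 20) for k in range(1, 6))
```

Note that $(-1)^{k-1}2^{k-1}=(-2)^{k-1}$, which is the exponent used in the code.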
Suppose, if possible, $\mathcal{G}$ is an Engel group. Then $\mathcal{H}$ is also an Engel group. Let $Z$ and $Y$ be two free generators of $\mathcal{H}$. Thus $[Z,_k Y]=1$ for some integer $k\geq 1$. Since $\mathcal{H}$ is the free 2-generator Burnside group of exponent $n$, we have that every group of exponent $n$ is a $k$-Engel group. Therefore, $\mathcal{G}$ is an infinite finitely generated $k$-Engel group of exponent $n$, as $\mathcal{H}$ is infinite by a celebrated result of Ivanov \cite{I}. Hence, we have proved that
\begin{prop}
At least one of the following happens.
\begin{enumerate}
\item There is an infinite finitely generated $k$-Engel group of exponent $n$ for some positive integer $k$ and $2$-power number $n$.
\item There is a group $G$ such that $L(G)=\overline{L}(G)$ and $L(G)$ is not a subgroup of $G$.
\end{enumerate}
\end{prop}
We believe that the subgroup $\mathcal{H}$ cannot be an Engel group, but we are unable to prove it.
\makeatletter
\renewcommand\section{\@startsection{section}{1}{\z@}%
{0.5ex\@plus 0ex \@minus -.5ex}%
{0.5ex\@plus 0ex}%
{\normalfont\bfseries}
}
\renewcommand\subsection{\@startsection{subsection}{2}{\z@}%
{0.5ex\@plus 0ex \@minus -.5ex}%
{0.5ex\@plus 0ex}%
{\normalfont\bfseries}
}
\renewcommand\subsubsection{\@startsection{subsubsection}{3}{\z@}%
{0.5ex\@plus 0ex \@minus -.5ex}%
{0.5ex\@plus 0ex}%
{\normalfont\bfseries}
}
\makeatother
\renewcommand{\thesubsubsection}{\Alph{subsubsection}}
\setlength\floatsep{5pt}
\setlength\textfloatsep{5pt}
\setlength\intextsep{5pt}
\setlength\abovecaptionskip{5pt}
\setlength{\abovedisplayskip}{2pt}
\setlength{\belowdisplayskip}{2pt}
\ninept
\maketitle
\begin{abstract}
It is well-known that a number of excellent super-resolution (SR) methods using convolutional neural networks (CNNs) generate checkerboard artifacts.
A condition to avoid the checkerboard artifacts is proposed in this paper.
So far, checkerboard artifacts have been mainly studied for linear multirate systems,
but the condition to avoid checkerboard artifacts can not be applied to CNNs due to the non-linearity of CNNs.
We extend the avoiding condition for CNNs, and apply the proposed structure to some typical SR methods to confirm the effectiveness of the new scheme.
Experimental results demonstrate that the proposed structure can perfectly avoid generating checkerboard artifacts under two loss conditions: mean square
error and perceptual loss, while keeping the excellent properties that the SR methods have.
\end{abstract}
\begin{keywords}
Super-Resolution, Convolutional Neural Networks, Checkerboard Artifacts
\end{keywords}
\section{Introduction}
\label{sec:intro}
\noindent This paper addresses the problem of checkerboard artifacts generated by some super-resolution (SR) methods using convolutional neural networks (CNNs).
SR methods using CNNs have been widely studied as one of the single image SR techniques, and have superior performances \cite{SRCNN,SRCNN-Ex,VDSR,DRCN,DRRN}.
Moreover, in order to accelerate the processing speed, CNNs including upsampling layers such as deconvolution \cite{Deconv}
and sub-pixel convolution \cite{ESPCN} ones have been proposed \cite{ESPCN,FSRCNN,LapSRN,SRGAN,EnhanceNet,PSRnet}.
However, it is well-known that these SR methods generate periodic artifacts, referred to as checkerboard artifacts \cite{Perceptual_Loss}.
\par
In CNNs, it is well-known that checkerboard artifacts are generated by operations of deconvolution, sub-pixel convolution layers \cite{Checkerboard}.
To overcome these artifacts, smoothness constraint \cite{FlowNet}, post-processing \cite{Perceptual_Loss},
initialization scheme \cite{FreeSubPixel} and different upsampling layer designs \cite{Checkerboard,AdaptiveBilinear,PixelDeconv} have been proposed.
Most of them can not avoid checkerboard artifacts perfectly, although they reduce the artifacts.
Among them, Odena et al. \cite{Checkerboard} have demonstrated that
checkerboard artifacts can be perfectly avoided by using resize convolution layers instead of deconvolution ones.
However, the resize convolution layers can not be directly applied to upsampling layers such as deconvolution and sub-pixel convolution ones,
so this method requires not only a large amount of memory but also a high computational cost.
\par
On the other hand, checkerboard artifacts have been studied to design linear multirate systems including filter banks and wavelets \cite{CB0,CB1,CB2,CB3}.
In addition, it is well-known that checkerboard artifacts are caused by the time-variant property of interpolators in multirate systems,
and the condition for avoiding these artifacts has been given \cite{CB0,CB1,CB2}.
However, the condition to avoid checkerboard artifacts for linear systems can not be applied to CNNs due to the non-linearity of CNNs.
\par
In this paper, we extend the avoiding condition for CNNs, and apply the proposed structure to SR methods using
deconvolution and sub-pixel convolution layers to confirm the effectiveness of the new scheme.
Experimental results demonstrate that the proposed structure can perfectly avoid generating checkerboard artifacts under
two loss conditions: mean square error and perceptual loss, while keeping the excellent properties that the SR methods have.
As a result, it is confirmed that the proposed structure allows us to offer efficient SR methods without any checkerboard artifacts.
\section{preparation}
\label{sec: preparation}
\noindent Conventional SR methods using CNNs and works related to checkerboard artifacts are reviewed here.
\subsection{SR Methods using CNNs}
\label{subsec: SR methods using CNNs}
\noindent SR methods using CNNs are classified into two classes as shown in Fig. \ref{class}.
Interpolation based methods \cite{SRCNN,SRCNN-Ex,VDSR,DRCN,DRRN}, referred to as class A,
do not generate any checkerboard artifacts in CNNs, due to the use of an interpolated image as an input to a network.
In other words, CNNs in this class do not have any upsampling layers.
\par
On the other hand, when CNNs include upsampling layers, there is a possibility that the CNNs generate some checkerboard artifacts.
This class, called class B in this paper, has provided numerous excellent SR methods \cite{ESPCN,FSRCNN,LapSRN,SRGAN,EnhanceNet,PSRnet},
which can be executed faster than those in class A.
Class B is also classified into a number of sub-classes according to the type of upsampling layers.
This paper focuses on class B.
\par
\begin{figure}[tb]
\centering
\centerline{\includegraphics[width=0.98\linewidth]{class.pdf}}
\caption{Classification of SR methods using CNNs}
\label{class}
\end{figure}
CNNs are illustrated in Fig. \ref{sr} for an SR problem, as in \cite{ESPCN}, where the CNNs consist of two convolutional layers and one upsampling layer.
$I_{LR}$ and $f_c^{(l)}(I_{LR})$ are a low-resolution (LR) image and a $c$-th channel feature map at layer $l$, and $f(I_{LR})$ is an output of the network.
The two convolutional layers have learnable weights, biases, and ReLU \cite{ReLU} as an activation function, respectively,
where the weight at layer $l$ has $K_l \times K_l$ as a spatial size and $N_l$ as the number of feature maps.
\par
There are numerous algorithms for computing upsampling layers;
deconvolution \cite{Deconv}, sub-pixel convolution \cite{ESPCN} and resize convolution \cite{Checkerboard} layers
are well-known examples, which are widely used in typical CNNs.
\begin{figure}[tb]
\centering
\centerline{\includegraphics[width=0.8\linewidth]{fsrcnn.pdf}}
\caption{CNNs with an upsampling Layer}
\label{sr}
\end{figure}
\subsection{Works Related to Checkerboard Artifacts}
\label{subsec: Works Related to Checkerboard Artifacts}
\noindent Checkerboard artifacts have been discussed in the design of multirate systems including filter banks and wavelets \cite{CB0,CB1,CB2,CB3}.
However, most of the works have been limited to the case of linear systems, so they can not be directly applied to CNNs due to the non-linearity.
Some works related to checkerboard artifacts for linear systems are summarized, here.
\par
It is known that linear interpolators which consist of up-samplers and linear time-invariant systems
cause checkerboard artifacts due to the periodic time-variant property \cite{CB0,CB1,CB2}.
Figure \ref{interpolator} illustrates a linear interpolator with an up-sampler $\uparrow U$ and a linear time-invariant system $H(z)$,
where positive integer $U$ is an upscaling factor and $H(z)$ is the $z$ transformation of an impulse response.
The interpolator in Fig. \ref{interpolator}(a) can be equivalently represented as a polyphase structure as shown in Fig. \ref{interpolator}(b).
The relationship between $H(z)$ and $R_i(z)$ is given by
\begin{equation}
\label{eq0}
H(z) = \sum_{i=1}^{U}R_{i}(z^{U})z^{-(U-i)},
\end{equation}
where $R_i(z)$ are often referred to as a polyphase filter of the filter $H(z)$.
\par
The necessary and sufficient condition for avoiding the checkerboard artifacts in the system is shown as
\begin{equation}
\label{eq1}
R_{1}(1) = R_{2}(1) = \cdots = R_{U}(1) = G.
\end{equation}
This condition means that all polyphase filters have the same DC value i.e. a constant $G$ \cite{CB0,CB1,CB2}.
Note that each DC value $R_i(1)$ corresponds to the steady-state value of the unit step response in each polyphase filter $R_i(z)$.
In addition, the condition eq.(\ref{eq1}) can also be expressed as
\begin{equation}
\label{eq2}
H(z) = P(z)H_0(z),
\end{equation}
where
\begin{equation}
\label{eq3}
H_0(z) = \sum_{i=0}^{U-1} z^{-i},
\end{equation}
$H_0(z)$ and $P(z)$ are an interpolation kernel of the zero-order hold with factor $U$ and a time-invariant filter, respectively.
Therefore, the linear interpolator with factor $U$ does not generate any checkerboard artifacts, when $H(z)$ includes $H_0(z)$.
In the case without checkerboard artifacts, the step response of the linear system has a steady-state value $G$ as shown in Fig. \ref{interpolator}(a).
Meanwhile, the step response of the linear system has a periodic steady-state signal with the period of $U$,
such as $R_{1}(1)$, ..., $R_{U}(1)$, if eq.(\ref{eq2}) is not satisfied.
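The avoidance condition can be observed numerically. Below is a minimal 1-D NumPy sketch (the helper name {\sf interpolate} and the filter values are ours) comparing a filter whose polyphase DC values differ with a filter of the form $H(z)=P(z)H_0(z)$:

```python
import numpy as np

U = 2

def interpolate(x, h, U):
    up = np.zeros(len(x) * U)
    up[::U] = x                  # up-sampler with factor U
    return np.convolve(up, h)    # linear time-invariant filter H(z)

step = np.ones(32)               # unit step input

# polyphase DC values differ (2 and 4): periodic steady state appears
h_bad = np.array([1.0, 2.0, 1.0, 2.0])
# H(z) = P(z) H0(z), H0 the zero-order-hold kernel: constant steady state
h_good = np.convolve(np.array([1.0, 2.0, 1.0]), np.ones(U))

tail_bad = interpolate(step, h_bad, U)[10:30]    # steady-state region
tail_good = interpolate(step, h_good, U)[10:30]
```

Here {\sf tail\_bad} alternates between the two polyphase DC values $2$ and $4$ (checkerboard artifacts), while {\sf tail\_good} is the constant $4$.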
\begin{figure}[htb]
\centering
\begin{minipage}{\columnwidth}
\centering
\centerline{\includegraphics[width=0.8\linewidth]{multirate-system.pdf}}
\subcaption{General structure}
\end{minipage}
\vspace{-2mm}
\begin{minipage}{\columnwidth}
\centering
\centerline{\includegraphics[width=0.9\linewidth]{multirate-system-polyphase.pdf}}
\subcaption{Polyphase structure}
\end{minipage}
\caption{Linear interpolators with upscaling factor $U$}
\label{interpolator}
\end{figure}
\section{proposed method}
\label{sec: proposed method}
\noindent CNNs are non-linear systems, so conventional works related to checkerboard artifacts can not be directly applied to CNNs.
A condition to avoid checkerboard artifacts in CNNs is proposed, here.
\subsection{CNNs with Upsampling Layers}
\label{subsec: Interpretation of upsampling layers using multirate systems}
\noindent We focus on upsampling layers in CNNs,
for which there are numerous algorithms such as deconvolution \cite{Deconv}, sub-pixel convolution \cite{ESPCN} and resize convolution \cite{Checkerboard}.
For simplicity, one-dimensional CNNs will be considered in the following discussion.
\par
It is well-known that deconvolution layers with non-unit strides cause checkerboard artifacts \cite{Checkerboard}.
Figure \ref{deconv} illustrates a system representation of deconvolution layers \cite{Deconv} which consist of some interpolators,
where $H_{c}$ and $b$ are a weight and a bias in which $c$ is a channel index, respectively.
The deconvolution layer in Fig. \ref{deconv}(a) can be equivalently represented as a polyphase structure in Fig. \ref{deconv}(b),
where $R_{c,n}$ is a polyphase filter of the filter $H_{c}$ in which $n$ is a filter index.
This is a non-linear system due to the bias $b$.
\par
Figure \ref{sub-pixel} illustrates a representation of sub-pixel convolution layers \cite{ESPCN},
where $R_{c,n}$ and $b_{n}$ are a weight and a bias, and $f_n^{\prime}(I_{LR})$ is an intermediate feature map in channel $n$.
Comparing Fig. \ref{deconv}(b) with Fig. \ref{sub-pixel}, we can see that the polyphase structure in Fig. \ref{deconv}(b) is
a special case of sub-pixel convolution layers in Fig. \ref{sub-pixel}.
In other words, Fig. \ref{sub-pixel} is reduced to Fig. \ref{deconv}(b), when satisfying $b_{1}=b_{2}=...=b_{U}$.
Therefore, we will focus on sub-pixel convolution layers as the general case of upsampling layers to discuss checkerboard artifacts in CNNs.
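The equivalence between a strided deconvolution and interleaved polyphase filtering can be sketched in 1-D with NumPy (helper names are ours, and the phase-indexing convention is chosen for simplicity rather than matching eq.(\ref{eq0}) exactly):

```python
import numpy as np

U = 2

def deconv1d(x, h, U):
    # transposed convolution with stride U: y[k] = sum_n x[n] h[k - U n]
    y = np.zeros(U * (len(x) - 1) + len(h))
    for n, xn in enumerate(x):
        y[U * n : U * n + len(h)] += xn * h
    return y

def subpixel1d(x, h, U):
    # filter with the U polyphase components of h, then interleave
    phases = [np.convolve(x, h[i::U]) for i in range(U)]
    y = np.zeros(U * len(phases[0]))
    for i, p in enumerate(phases):
        y[i::U] = p
    return y

x = np.array([1.0, -2.0, 3.0, 0.5])
h = np.array([1.0, 2.0, 3.0, 4.0])   # kernel length divisible by U
same = np.allclose(deconv1d(x, h, U), subpixel1d(x, h, U))
```

The two outputs coincide sample by sample, which is the polyphase identity behind Fig. \ref{deconv}(b).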
\subsection{Checkerboard Artifacts in CNNs}
\label{subsec: Definition of Checkerboard Artifacts in CNNs}
\noindent Let us consider the unit step response in CNNs.
In Fig. \ref{sr}, when the input $I_{LR}$ is the unit step signal $I_{step}$,
the steady-state value of the $c$-th channel feature map in layer $2$ is given as
\begin{equation}
\label{eq4}
\hat{f}_c^{(2)}(I_{step}) = A_c,
\end{equation}
where $A_c$ is a positive constant value, which is determined by the filters, biases and ReLU.
Therefore, from Fig. \ref{sub-pixel}, the steady-state value of the $n$-th channel intermediate feature map is given by, for sub-pixel convolution layers,
\begin{equation}
\label{eq5}
\hat{f}_n^{\prime}(I_{step}) = \,\sum_{c=1}^{N_2} \, A_c \overline{R}_{c,n}\, + b_n,
\end{equation}
where $\overline{R}_{c,n}$ is the DC value of the filter $R_{c,n}$.
\par
Generally, the condition,
\begin{equation}
\label{eq5-}
\hat{f}_1^{\prime}(I_{step}) = \hat{f}_2^{\prime}(I_{step}) = ... = \hat{f}_U^{\prime}(I_{step}),
\end{equation}
is not satisfied, so the unit step response $f(I_{step})$ has a periodic steady-state signal with the period of $U$.
To avoid checkerboard artifacts, eq.(\ref{eq5-}) has to be satisfied, as well as for linear multirate systems.
\begin{figure}[tb]
\centering
\begin{minipage}{\columnwidth}
\centering
\centerline{\includegraphics[width=0.8\linewidth]{deconv-system.pdf}}
\subcaption{General structure}
\end{minipage}
\vspace{-2mm}
\begin{minipage}{\columnwidth}
\centering
\centerline{\includegraphics[width=0.8\linewidth]{deconv-polyphase-system.pdf}}
\subcaption{Polyphase structure}
\end{minipage}
\caption{Deconvolution layer \cite{Deconv}}
\label{deconv}
\end{figure}
\begin{figure}[tb]
\centering
\centerline{\includegraphics[width=0.8\linewidth]{subpixel-system.pdf}}
\caption{Sub-pixel convolution layer \cite{ESPCN}}
\label{sub-pixel}
\end{figure}
\subsection{Upsampling Layers without Checkerboard Artifacts}
\label{subsec: Upsampling Layers without Checkerboard Artifacts}
\noindent To avoid checkerboard artifacts, CNNs must have a constant, i.e. non-periodic, steady-state value of the unit step response.
From eq.(\ref{eq5}), eq.(\ref{eq5-}) is satisfied, if
\begin{equation}
\label{eq6}
\overline{R}_{c,1} = \overline{R}_{c,2} = \cdots = \overline{R}_{c,U}, \quad c = 1, 2, ..., N_{2},
\end{equation}
and
\begin{equation}
\label{eq7}
b_{1} = b_{2} = \cdots = b_{U}.
\end{equation}
Note that, in this case,
\begin{equation}
\label{eq8}
\hat{f}_1^{\prime}(K \cdot I_{step}) = \hat{f}_2^{\prime}(K \cdot I_{step}) = ... = \hat{f}_U^{\prime}(K \cdot I_{step}),
\end{equation}
is also satisfied as for linear systems, where $K$ is an arbitrary constant value.
However, even when each filter $H_c$ in Fig. \ref{sub-pixel} satisfies eq.(\ref{eq2}),
eq.(\ref{eq6}) is met but eq.(\ref{eq7}) is not.
Therefore, we have to seek a new insight to avoid checkerboard artifacts in CNNs.
\par
In this paper, we propose to add the kernel of the zero-order hold with factor $U$, i.e. $H_0$ in eq.(\ref{eq3}), after upsampling layers as shown in Fig. \ref{Pro}.
In this structure, the output signal from $H_0$ can be a constant value, even when an arbitrary periodic signal is inputted to $H_0$.
As a result, Fig. \ref{Pro} can satisfy eq.(\ref{eq5-}).
\par
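The flattening effect of $H_0$ is elementary to check: convolving any $U$-periodic steady-state signal with the length-$U$ all-ones kernel yields a constant. A NumPy sketch with illustrative values:

```python
import numpy as np

U = 4
h0 = np.ones(U)   # zero-order-hold kernel H0 with factor U

# an arbitrary steady-state signal with period U,
# as produced after an upsampling layer
p = np.tile([0.3, 1.7, -0.5, 2.0], 16)

# every length-U window covers exactly one full period,
# so the output is the constant period sum (here 3.5)
y = np.convolve(p, h0, mode='valid')
```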
There are three approaches to use $H_0$ in CNNs by the difference in training CNNs as follows.
\subsubsection{Training CNNs without $H_0$}
\noindent The simplest approach for avoiding checkerboard artifacts is to add $H_0$ to CNNs after training the CNNs.
This approach allows us to perfectly avoid checkerboard artifacts generated by a pre-trained model.
\subsubsection{Training CNNs with $H_0$}
\noindent In approach B, $H_0$ is added to CNNs before training the CNNs, and then the CNNs with $H_0$ are trained.
This approach also allows us to perfectly avoid checkerboard artifacts as well as for approach A.
Moreover, this approach provides higher quality images than those of approach A.
\subsubsection{Training CNNs with $H_0$ inside upsampling layers}
\noindent Approach C is applicable only to deconvolution layers,
whereas approaches A and B are available for both deconvolution layers and sub-pixel convolution ones.
Deconvolution layers always satisfy eq.(\ref{eq7}), so eq.(\ref{eq6}) only has to be considered.
Therefore, CNNs do not generate any checkerboard artifacts when each filter $H_c$ in Fig. \ref{sub-pixel} satisfies eq.(\ref{eq2}).
In approach C, checkerboard artifacts are avoided by convolving each filter $H_c$ with the kernel $H_0$ inside upsampling layers.
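Approach C can be sketched in 1-D with NumPy: convolving an arbitrary kernel with $H_0$ forces all polyphase DC values to coincide (the kernel and variable names are ours):

```python
import numpy as np

U = 4
rng = np.random.default_rng(1)
h = rng.normal(size=9)             # arbitrary (learned) deconvolution kernel
h_c = np.convolve(h, np.ones(U))   # approach C: convolve the kernel with H0

# polyphase DC values of the modified kernel
dc = np.array([h_c[i::U].sum() for i in range(U)])
```

Each polyphase DC value of {\sf h\_c} equals the total sum of {\sf h}, so eq.(\ref{eq6}) holds automatically.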
\begin{figure}[tb]
\centering
\centerline{\includegraphics[width=0.9\linewidth]{subpixelnn-system.pdf}}
\caption{Proposed upsampling layer structure without checkerboard artifacts}
\label{Pro}
\end{figure}
\begin{figure*}[tb]
\centering
\centerline{\includegraphics[width=0.9\linewidth]{vgg.pdf}}
\caption{Experimental results of super-resolution under perceptual loss (PSNR(dB))}
\label{VGG}
\end{figure*}
\begin{figure*}[tb]
\centering
\centerline{\includegraphics[width=0.9\linewidth]{mse.pdf}}
\caption{Experimental results of super-resolution under MSE loss (PSNR(dB))}
\label{MSE}
\end{figure*}
\section{experiments and results}
\label{sec: experiments and results}
\noindent The proposed structure without checkerboard artifacts was applied to
the SR methods using deconvolution and sub-pixel convolution layers to demonstrate the effectiveness.
The CNNs in the experiments were trained under two loss functions: mean squared error (MSE) and perceptual loss.
\subsection{Datasets for Training and Testing}
\label{subsec: Datasets for Training and Testing}
\noindent We employed 91-image set from Yang et al. \cite{91image} as our training dataset.
In addition, the same data augmentation (rotation and downscaling) as in \cite{FSRCNN} was used.
As a result, the training dataset consisting of 1820 images was created for our experiments.
Besides, we used two datasets, Set5 \cite{Set5} and Set14 \cite{Set14}, which are often used for benchmark, as test datasets.
\par
To prepare a training set, we first downscaled the ground truth images $I_{HR}$
with a bicubic kernel to create the LR images $I_{LR}$, where the factor $U=4$ was used.
The ground truth images $I_{HR}$ were cropped into $72 \times 72$ pixel patches
and the LR images were cropped into $18 \times 18$ pixel ones, where the total number of extracted patches was $8,000$.
In the experiments, the luminance channel (Y) of images was used for the MSE loss,
although the three channels (RGB) of images were used for the perceptual loss.
\subsection{Training Details}
\label{subsec: Training Details}
\noindent Table \ref{CNNs} lists the CNNs used in the experiments, which were built on the network in Fig. \ref{sr}.
For the other two layers in Fig. \ref{sr}, we set $(K_1,N_1)=(5,64)$ and $(K_2,N_2)=(3,32)$ as in \cite{ESPCN}.
All networks were trained to minimize either the mean squared error $\frac{1}{2}\|I_{HR}-f(I_{LR})\|^2$
or the perceptual loss $\frac{1}{2}\|\phi(I_{HR})-\phi(f(I_{LR}))\|^2$, each averaged over the training set,
where $\phi$ computes feature maps at the fourth layer of the pre-trained VGG-16 model as in \cite{Perceptual_Loss}.
It is well known that the perceptual loss results in sharper SR images despite lower PSNR values,
and that it generates checkerboard artifacts more frequently than the MSE loss does.
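The two training objectives can be summarized in a short sketch. The following minimal NumPy illustration is not the actual training code, and `toy_phi` is only a hypothetical stand-in for the fourth-layer VGG-16 feature extractor used in the experiments.

```python
import numpy as np

def mse_loss(i_hr, i_sr):
    """Mean squared error loss: 0.5 * ||I_HR - f(I_LR)||^2."""
    return 0.5 * np.sum((i_hr - i_sr) ** 2)

def perceptual_loss(i_hr, i_sr, phi):
    """Perceptual loss: 0.5 * ||phi(I_HR) - phi(f(I_LR))||^2, where phi maps
    an image to a feature tensor (in the experiments, VGG-16 layer 4)."""
    return 0.5 * np.sum((phi(i_hr) - phi(i_sr)) ** 2)

# Toy feature extractor standing in for VGG-16 (hypothetical).
toy_phi = lambda x: x.mean(axis=-1)

hr = np.ones((72, 72, 3))   # a ground-truth patch I_HR
sr = np.ones((72, 72, 3))   # a reconstructed patch f(I_LR)
print(mse_loss(hr, sr))                  # 0.0 for identical patches
print(perceptual_loss(hr, sr, toy_phi))  # 0.0 for identical patches
```

In training, each loss is averaged over the whole training set; a single patch is shown here for clarity.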
\begin{table}[tb]
\caption{CNNs used in the experiments}
\scalebox{0.73}{
\begin{tabular}{l|l|c}
Network Name & Upsampling Layer & $K_3 \times K_3$ \\ \hline
\textbf{Deconv} & Deconvolution \cite{Deconv} & $9 \times 9$ \\
\textbf{Sub-pixel} & Sub-pixel Convolution \cite{ESPCN} & $3 \times 3$ \\
\textbf{ResizeConv} & Resize Convolution \cite{Checkerboard} & $9 \times 9$ \\
\textbf{Deconv+$\rm H_0$} & Deconvolution with $H_0$ ( Approach A or B ) & $9 \times 9$ \\
\textbf{Deconv+$\rm H_0$ (Ap. C)} & Deconvolution with $H_0$ ( Approach C ) & $9 \times 9$ \\
\textbf{Sub-pixel+$\rm H_0$} & Sub-pixel Convolution with $H_0$ ( Approach A or B ) & $3 \times 3$ \\
\end{tabular}}
\label{CNNs}
\end{table}
\par
For training, Adam \cite{Adam} with $\beta_1=0.9$ and $\beta_2=0.999$ was employed as the optimizer.
We set the batch size to $4$ and the learning rate to $0.0001$.
The weights were initialized with the method described by He et al. \cite{He}.
We trained all models for $200$K iterations.
All models were implemented using the TensorFlow framework \cite{Tensorflow}.
\subsection{Experimental Results}
\noindent Figure \ref{VGG} shows examples of SR images generated under the perceptual loss, together with mean PSNR values for each dataset.
In this figure, (b) and (f) include checkerboard artifacts, whereas (c), (d), (e), (g), (h) and (i) do not.
Moreover, the quality of the SR images was significantly improved by avoiding checkerboard artifacts,
and approaches B and C provided better-quality images than approach A.
In Fig. \ref{MSE}, (b) and (f) also include checkerboard artifacts as in Fig. \ref{VGG}, although the distortion is smaller than
under the perceptual loss.
Note that ResizeConv does not generate any checkerboard artifacts, because it uses a pre-defined interpolation as in \cite{SRCNN}.
\par
Table \ref{Time} shows the average execution time when each CNN was run 10 times on images from Set14.
ResizeConv has the highest computational cost in this table, although it does not generate any checkerboard artifacts.
The table shows that the proposed structures have much lower computational costs than resize convolution layers.
Note that the results were measured on a PC with a 3.30 GHz CPU and 16 GB of main memory.
\begin{table}[tb]
\begin{center}
\caption{Execution time of super-resolution (sec)}
\label{Time}
\scalebox{0.75}{
\begin{tabular}{ C{2cm} || C{2.4cm} | C{2.4cm} | C{2.4cm} } \hline
Resolution & \multirow{2}{*}{Deconv} & Deconv+$\rm H_0$ & Deconv+$\rm H_0$ \\
of Input Image & & ( Ap. A or B ) & ( Ap. C ) \\ \hline\hline
$69\times69$ & 0.00871 & 0.0115 & 0.0100 \\
$125\times90\;\,$ & 0.0185 & 0.0270 & 0.0227 \\
$128\times128$ & 0.0244 & 0.0348 & 0.0295 \\
$132\times164$ & 0.0291 & 0.0393 & 0.0377 \\
$180\times144$ & 0.0343 & 0.0476 & 0.0421 \\ \hline \noalign{\vskip2.5mm} \hline
Resolution & \multirow{2}{*}{Sub-pixel} & Sub-pixel+$\rm H_0$ & \multirow{2}{*}{ResizeConv} \\
of Input Image & & ( Ap. A or B ) & \\ \hline\hline
$69\times69$ & 0.0159 & 0.0242 & 0.107 \\
$125\times90\;\,$ & 0.0398 & 0.0558 & 0.224 \\
$128\times128$ & 0.0437 & 0.0619 & 0.299 \\
$132\times164$ & 0.0696 & 0.0806 & 0.383 \\
$180\times144$ & 0.0647 & 0.102 & 0.450 \\ \hline
\end{tabular}
}
\end{center}
\end{table}
\section{conclusion}
\label{conclusion}
\noindent This paper addressed a condition for avoiding checkerboard artifacts in CNNs that include upsampling layers.
The proposed structure can be applied to both deconvolution layers and sub-pixel convolution ones.
The experimental results demonstrated that the proposed structure completely avoids generating checkerboard artifacts under two loss functions,
mean squared error and perceptual loss, while keeping the excellent properties of the SR methods.
As a result, the proposed structure allows us to offer efficient SR methods without any checkerboard artifacts.
It will also be useful for various computer vision tasks
such as semantic segmentation, image synthesis, and image generation.
\newpage
\bibliographystyle{IEEEbib}
\section{Introduction}
\label{sec:introduction}
The case for physics beyond the Standard Model, in the form of non-baryonic Dark Matter (DM), is overwhelming. The evidence, based solely on gravitational interactions, has cast light on the large-scale properties of the dominant DM components. However, the detailed composition and dynamics of DM remain a mystery, and a particle physics interpretation would yield a true understanding of the majority of matter in the Universe. Exploration of the particle nature of DM has typically progressed through the complementary theoretical and experimental study of certain well-motivated DM candidates, leading to specific experiments targeted at detecting the interactions expected. However, as these DM candidates and the overarching frameworks within which they reside remain undiscovered, it is becoming increasingly important to broaden theoretical horizons and to cast a wider experimental net.
One often under-appreciated possibility for DM behavior is that, although we know the majority of DM should be currently cold and collisionless, there may exist rich and varied subcomponents of the DM budget with a host of unexpected properties. In fact, we already know that a small component of the total matter budget exhibits such behavior: the baryonic matter that makes up the visible Universe. This possibility, which was recently emphasized and explored in \Refs{Fan:2013yva,Fan:2013tia}, could have profound consequences for the detection of DM and is deserving of serious consideration.
The possibilities for complex dynamics in a subcomponent of DM are vast. As well as pointing out that DM can have this richer structure, \Refs{Fan:2013yva,Fan:2013tia} highlighted one particularly interesting scenario. If some subcomponent of the DM has significant self-interactions and can cool sufficiently rapidly through scattering and emission of very light, or massless, dark states then this subcomponent could collapse to form structures similar to those observed in the visible sector.\footnote{See also \cite{darkint} for early work concerning long-range dark forces.} In particular this may lead to the formation of galactic dark disks, coexisting within visible galactic baryonic disks. This scenario was termed ``Double-Disk Dark Matter'' or ``DDDM'' for short \cite{Fan:2013yva,Fan:2013tia}. It should be emphasized that this extra subcomponent of DM is distinct from the dominant cold and collisionless component which makes up the majority of the DM.\footnote{It should be noted that the dominant component of DM may also form a disk-like structure \cite{Read:2008fh,Read:2009iv,Purcell:2009yp,Bruch:2009rp} which may influence direct detection signals \cite{MarchRussell:2008dy,Ling:2009eh,Ling:2009cn,Green:2010gw,Billard:2012qu}. However, in the DDDM scenario the dark disk is an entirely different species of particle from that which comprises the main DM halo.}
The DDDM may have interesting non-gravitational interactions with visible-sector particles and can lead to enhanced indirect detection signals, which could be distinguished from conventional DM scenarios as the spatial distribution of the indirect signal could be significantly different for DDDM \cite{Fan:2013yva,Fan:2013tia}. However, as described in \cite{Fan:2013yva,Fan:2013tia}, the direct detection prospects are limited for elastically scattering DDDM. This is because the disk of DDDM is expected to rotate with a comparable velocity to the visible baryonic disk. Hence the average relative velocity between DM and an Earth based detector is small, leading to nuclear recoil energies below typical detector thresholds. There is some relative velocity due to the peculiar velocity of the Sun, the orbit of the Earth, and the velocity dispersion of DDDM. However these components combined are expected to be much smaller than the typical relative velocity expected for standard halo DM.
\begin{figure}[t!]
\centering
\includegraphics[width=0.45\textwidth]{figures/diagram.pdf}
\caption{A schematic diagram of an exothermic DM-nucleus scattering event, where speeds are indicated by the lengths of the arrows. An incoming DM particle is in an excited state and de-excites upon scattering at the centre. The exothermic energy release depends on the mass-splitting between DM states, $M_+-M_- = \delta$, some of which is deposited as nuclear recoil energy. Due to the exothermic nature of the scattering, nuclear recoils are possible even if the scattering event occurs at rest, or at low relative velocity, enabling DDDM direct detection; an otherwise challenging prospect.} \label{fig:exoDM}
\end{figure}
However, this does not rule out the possibility of DDDM direct detection signals. In future experiments perhaps dedicated low-threshold analysis could detect scattering events. Alternatively, if the DDDM is very massive then the kinetic energy is increased and could potentially lead to observable signatures above the low energy thresholds \cite{HeavyDDDM}.
In this paper we consider a modified scenario that can give rise to direct detection signals in current and planned experiments even in a DDDM context. This scenario involves exothermic DDDM scattering on nuclei \cite{Graham:2010ca}. In exothermic DM-nucleus scattering an excited DM state collides with a nucleus at which point the DM de-excites, depositing some part of the DM kinetic energy plus an additional component proportional to the mass-splitting of DM states, $M_+-M_- = \delta$. The energy deposit is manifest in the detector as nuclear recoil energy. This process is depicted in \Fig{fig:exoDM}. Although the final result (a recoiling nucleus) is similar to the result of standard elastic DM-nucleus scattering, the kinematics are quite distinct and the typical energy deposited does not depend on the phase space distribution of DM in the usual way, leading to a novel recoil spectrum.
To enable clarity in comparisons between the phenomenology of DDDM and standard cold and collisionless DM we will refer to the exothermic scattering of DDDM as ExoDDDM and the exothermic scattering of a standard halo DM candidate as ExoDM. In this work we will demonstrate that ExoDDDM may exhibit novel and distinctive energy spectra and modulation characteristics in direct detection experiments.\footnote{Throughout we will assume that the component of DM which is cold and collisionless, and dominates the energy density of DM does not lead to detectable scattering events in direct detection experiments.} In \Sec{sec:pheno} we will outline the main qualitative features which distinguish direct detection signals of ExoDDDM from ExoDM or elastic DM, and show that it may be possible to infer that DM signals are coming from a DDDM subcomponent rather than a standard DM candidate. In \Sec{sec:quantitative} we will make these arguments quantitative by considering current direct detection limits on ExoDDDM. We will also demonstrate that the three candidate events recently reported by the CDMS collaboration \cite{Agnese:2013rvf} can be readily explained by ExoDDDM scattering, in complete consistency with limits from other detectors. In particular, for certain values of the exothermic splitting there is virtually no tension between the strongest limits from the XENON10 and XENON100 experiments and the majority of the $90\%$ best-fit parameter space for an ExoDDDM explanation of the CDMS-Si events, even when the coupling of ExoDDDM to protons and neutrons is equal. We will also discuss the possibility of concurrently explaining the DAMA, CoGeNT and CRESST-II anomalies, finding that this is not possible for all four experiments with ExoDDDM. However consistent interpretations of CDMS-Si and CRESST-II excesses may be possible. 
Variations of the proton and neutron couplings \cite{Kurylov:2003ra,Giuliani:2005my,Chang:2010yk,Feng:2011vu,Feng:2013vod} are also considered in the context of ExoDDDM and new high-mass explanations of the CDMS-Si events are found where $M \lesssim 80$ GeV, in consistency with other bounds. In \Sec{sec:colliderindirect} we consider collider and solar capture constraints on ExoDDDM. In \Sec{sec:model} we construct a complete model of ExoDDDM which exhibits the properties required for cooling and collapse into DDDM as well as the mass splitting and long-lived excited state required for exothermic scattering on nuclei. We conclude in \Sec{sec:conclusions}.
\section{Direct Detection Phenomenology}
\label{sec:pheno}
Before considering the current experimental status of ExoDDDM in \Sec{sec:quantitative} it is first useful to outline some distinguishing qualitative features of ExoDDDM direct detection.\footnote{See also \cite{Graham:2010ca} for discussions of ExoDM scattering.} These features arise due to a combination of effects from the exothermic scattering and the DDDM phase space distribution.
We begin with the features specific to exothermic scattering, first focussing on comparing scattering rates at different detectors. The phenomenology can be best understood by considering the minimum velocity an incoming DM particle must have, $v_{\text{min}}$, to generate a measurable energy deposit in the detector, $E_R$. If the DM down-scatters on a nucleus as $X_+ + N \rightarrow X_- + N$ where $M_{\pm} = M \pm \delta/2$, then the minimum velocity any incoming DM particle must have to produce a given nuclear recoil energy, $E_R$, is
\begin{equation}
v_{\text{min}} (E_R) = \frac{1}{\sqrt{2 M_N E_{R}}} \left| \frac{M_N E_{R}}{\mu_{N}} - \delta \right| ~~,
\label{eq:vmin}
\end{equation}
where $\mu_{N}$ is the reduced mass of the DM-nucleus system and we have assumed $|\delta| \ll M$.
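The minimum velocity of \Eq{eq:vmin} can be evaluated numerically as in the sketch below. The numbers are purely illustrative: the nuclear mass is approximated as $0.9315\,A$ GeV and the splitting $\delta = 50$ keV is an assumed benchmark, not a value taken from any fit.

```python
import numpy as np

C_KMS = 2.998e5  # speed of light in km/s

def v_min_kms(E_R_keV, M_GeV, M_N_GeV, delta_keV):
    """Minimum DM speed (km/s) needed to deposit recoil energy E_R:
    v_min = |M_N E_R / mu_N - delta| / sqrt(2 M_N E_R)."""
    M_N = M_N_GeV * 1e6                               # nuclear mass in keV
    mu_N = M_GeV * M_N_GeV / (M_GeV + M_N_GeV) * 1e6  # reduced mass in keV
    return abs(M_N * E_R_keV / mu_N - delta_keV) / np.sqrt(2.0 * M_N * E_R_keV) * C_KMS

# Illustrative values: M = 5.5 GeV DM down-scattering on silicon (M_N ~ 26.1 GeV).
M, M_Si, delta = 5.5, 26.1, 50.0
print(v_min_kms(10.0, M, M_Si, delta))           # O(100) km/s at E_R = 10 keV
E_star = delta * (M * M_Si / (M + M_Si)) / M_Si  # recoil energy at which v_min = 0
print(E_star, v_min_kms(E_star, M, M_Si, delta))
```

At $E_R = \delta \mu_N / M_N$ the minimum velocity vanishes, so even DM at rest relative to the detector can produce a recoil at that energy.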
The local velocity distribution of DM is assumed to be the same at each detector. For a given nuclear recoil energy \Eq{eq:vmin} demonstrates that there is a detector-dependent DM velocity threshold below which scattering cannot lead to detectable scattering events. Thus the sensitivity of a detector depends on the experimental low energy threshold since this determines the fraction of the total DM distribution which leads to detectable scattering events. Let us consider an idealized situation where there are detectors with different nuclear targets, but with the same low-energy threshold, $E_{\text{thr}}$, and no high energy threshold.\footnote{This is not the situation in reality, but serves to delineate the comparison between experiments. For the sake of simplicity we also ignore finite resolution effects in this general discussion.} In this case a given detector is sensitive to all DM particles with velocity greater than
\[v_{\text{thr}} = \left\{
\begin{array}{l l}
v_{\text{min}} (E_{\text{thr}}) & \quad \delta < E_{\text{thr}} M_N/\mu_N \\
0 & \quad \delta > E_{\text{thr}} M_N/\mu_N
\end{array} \right.\]
The latter arises since if $\delta > E_{\text{thr}} M_N/\mu_N$ there exists some recoil energy within the detector range, for which $v_{\text{min}} (E_R)=0$. This means that the detector is sensitive to the full velocity distribution of the DM, even though $v_{\text{min}} (E_{\text{thr}}) > 0$. For a given detector only DM particles with velocity $v_{\text{thr}}$ or above will be capable of producing a measurable signal. Let us consider the dependence of $v_{\text{thr}}$ on the nuclear mass for the major categories of models:
\begin{itemize}
\item Elastic, heavy DM: In this case $v_{\text{thr}} \approx \sqrt{E_{\text{thr}}/2 M_N}$. Heavier nuclei lead to reduced minimum velocity thresholds and will thus sample more of the DM velocity distribution, leading to greater sensitivity.
\item Elastic, light DM: In this case $v_{\text{thr}} \approx \sqrt{E_{\text{thr}} M_N}/\sqrt{2}M$, the minimum velocity threshold is reduced for lighter nuclei, and detectors with lighter nuclei will sample more of the DM velocity distribution, improving the sensitivity to light DM.
\item Exothermic: From the minimum velocity of the previous two elastic scattering cases we subtract an additional component, such that $v_{\text{thr}} = |v_{\text{thr}}({\delta=0})-\delta/\sqrt{2 M_N E_{\text{thr}}}|$. This extra exothermic term leads to a reduction in the minimum velocity, and can in some cases reduce it to zero. The reduction is greatest for light nuclei, leading to preferential scattering of the DM on lighter target nuclei. For light exothermic DM this further enhances the sensitivity of light-nuclei detectors over heavy-nuclei detectors.
\end{itemize}
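The three cases above can be compared numerically with the following sketch. All inputs are illustrative assumptions (nuclear masses approximated as $0.9315\,A$ GeV, a common threshold $E_{\text{thr}} = 7$ keV, and $\delta = 50$ keV), not values from the analyses discussed later.

```python
import numpy as np

C_KMS = 2.998e5  # speed of light in km/s

def v_thr_kms(E_thr_keV, M_GeV, M_N_GeV, delta_keV):
    """Detector speed threshold (km/s) for a low-energy threshold E_thr and
    no upper threshold, following the piecewise expression in the text."""
    M_N = M_N_GeV * 1e6
    mu_N = M_GeV * M_N_GeV / (M_GeV + M_N_GeV) * 1e6
    if delta_keV > E_thr_keV * M_N / mu_N:
        return 0.0  # some in-range E_R has v_min = 0: full distribution sampled
    return abs(M_N * E_thr_keV / mu_N - delta_keV) / np.sqrt(2.0 * M_N * E_thr_keV) * C_KMS

# Light exothermic DM (M = 5.5 GeV, delta = 50 keV, E_thr = 7 keV):
# the threshold vanishes for the light Si target but is very large for Xe.
for name, M_N in [("Si", 26.1), ("Xe", 122.0)]:
    print(name, v_thr_kms(7.0, 5.5, M_N, 50.0))

# Heavy elastic DM (M = 100 GeV, delta = 0): the heavier nucleus
# gives the lower threshold, as in the first bullet.
for name, M_N in [("Si", 26.1), ("Xe", 122.0)]:
    print(name, v_thr_kms(7.0, 100.0, M_N, 0.0))
```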
Thus we see that with exothermic DM scattering the minimum DM velocity threshold is reduced relative to an elastically scattering DM candidate. Importantly, this opens up the possibility of detecting DDDM in \emph{current} direct detection experiments since in some cases scattering events can occur above thresholds even if the DDDM is essentially stationary relative to the DM detector. For light ExoDDDM there is also a clear preference for lighter target nuclei, which allows an ExoDDDM interpretation of the three events recently reported by CDMS-Si ($A_{\text{Si}} = 28$), completely consistent with bounds from the XENON ($A_{\text{Xe}} = 131$) experiments.
\begin{figure}[t!]
\centering
\includegraphics[height=0.44\textwidth]{figures/spec1.pdf} \includegraphics[height=0.44\textwidth]{figures/spec2.pdf}
\caption{Low energy threshold for the CDMS-Si experiment (dotted black) and the low energy threshold and single photoelectron (S1) threshold for XENON100 (dotted and dot-dashed red respectively). Nuclear recoil energy spectra are also shown for ExoDM (left) and ExoDDDM (right) scattering on various nuclei for two benchmark parameter points. As the mass of a nucleus is increased the typical recoil energies are driven to lower values, and can be below detector thresholds. The recoil spectrum vanishes at low energies and exhibits a peak, in contrast to elastically scattering DM which shows an exponential rise towards lower energies. A significant difference between ExoDM and ExoDDDM is that, due to the small velocity dispersion and relative velocity, ExoDDDM scattering leads to very narrow recoil spectra. With ExoDDDM, limits from XENON100 are further weakened due to the narrow width which keeps all events below the single photoelectron threshold, unlike for ExoDM where Poisson fluctuations can push some events above threshold.}
\label{fig:spectra}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[height=0.38\textwidth]{figures/modamp.pdf}
\caption{The ratio of modulated to unmodulated scattering rates in the CDMS-Si detector for $M=5.5$ GeV as a function of the exothermic splitting for ExoDM and ExoDDDM. In both cases the introduction of an exothermic splitting leads to a reduction in the signal modulation. Below $\delta \sim 32$ keV there is no signal for ExoDDDM since all scattering events are below threshold, hence the ratio is unity, but it should be kept in mind that the signal is vanishing in this region.}
\label{fig:modamp}
\end{figure}
Exothermic scattering also leads to interesting spectral features. In \Fig{fig:spectra} we show sample nuclear recoil spectra for ExoDM and ExoDDDM. A feature common to both scenarios is that the recoil spectrum vanishes at low energies and is peaked at an energy which depends on the parameters of the model and the nucleus in question. This is quite distinct from a standard elastically scattering DM spectrum which typically shows falling exponential behavior. As direct detection experiments push to lower recoil energy thresholds this distinction will become increasingly important as the exponential rise at low energies for light (ordinary elastically-scattering) DM can be constrained more efficiently, whereas constraints on exothermic DM will depend instead on the typical exposure at the expected peak of the spectrum along with the width of the spectrum, which is determined by the velocity distribution.
There are also interesting implications for the annual modulation of the signal. Since the exothermic splitting allows a detector to sample more of the DM velocity distribution than in the elastic scattering case, a smaller fraction of the total events arise from DM particles in the tail of the velocity distribution. An implication of this is that the amplitude of the annual modulation signal is reduced relative to the unmodulated signal, which is a generic signature for exothermic DM scattering. In \Fig{fig:modamp} we show the reduction in the modulation amplitude with increasing exothermic splitting for ExoDM and ExoDDDM scattering in the CDMS-Si detector.
This concludes the discussion of features arising due to exothermic scattering. We now consider the specific differences that arise from the distinctive phase space distribution of DDDM.
At the core of the DDDM proposal is the possibility that a subdominant component of DM may behave in some ways similarly to visible matter and have small typical velocities so that it may also have collapsed within galaxies such as our own, forming structures similar to the galactic disk of visible matter with its higher density. Without detailed numerical simulations we cannot make definite predictions for the properties of such a disk of DDDM, but we can use the visible baryonic disk and the discussion in \Refs{Fan:2013yva,Fan:2013tia} as guidance to estimate the phase space properties of the DDDM. The quantities relevant to direct detection experiments are the local DDDM density, $\rho_0$, the velocity dispersion of the DDDM, $\tilde{v}$, and the relative velocity between the galactic disk and the visible baryonic disk, $v_{\text{rel}}$.
Due to uncertainties in the calculation of the dark disk thickness we treat the local DDDM density as a free parameter (estimates are discussed in \cite{Fan:2013yva,Fan:2013tia}). We will plot results for an assumed local density of $0.3 \text{ GeV/cm}^3$ since this local density is possible for DDDM and it allows straightforward comparison between the required cross-sections for DDDM and standard halo DM. However it should be kept in mind that the density may be higher or lower depending on the specific model under consideration.
In \cite{Fan:2013yva} a specific model of DDDM was constructed and the velocity dispersion estimated. The particular value depends on the dark force coupling strength and the mass of the light states within the dark sector, so a broad range of values are possible. We can estimate a reasonable value by considering a specific benchmark point discussed in \cite{Fan:2013yva}, where for a DM mass $M\approx 10$ GeV, the DDDM velocity dispersion is $\tilde{v} \approx 10^{-4} \text{ c}$. Unless otherwise stated we choose a value $\tilde{v} = 25 \text{ km/s}$, and keep in mind that increasing or reducing $\tilde{v}$ from this value will increase or reduce the width of the recoil spectrum.
Finally we must consider the relative velocity of the visible baryonic and DDDM disks. If a disk of DDDM exists in our galaxy then it may or may not lie in the plane of the visible baryonic disk. If the two disks are not aligned then direct detection signals are unlikely unless the line of intersection of the two disks contains our solar system.\footnote{Such a scenario could lead to interesting signatures due to potentially large relative velocities between DDDM and the Earth. Combined with the small velocity dispersion this scenario would look very much like a DM stream from the perspective of direct detection experiments. We do not consider this scenario further here.} Hence we consider only the possibility that both disks lie in the same plane. In this case, if the dynamics of gravitational collapse and cooling were similar, it is likely that both disks have similar rotational velocities, and small relative velocities. For this reason we will choose a benchmark relative velocity of $v_{\text{rel}}= 0 \text{ km}/\text{s}$, although different relative velocities are possible.\footnote{We will discuss the effect of varying the relative disk velocity in \Sec{sec:varying}.} This completes the parameter specification required to begin extracting the qualitative features of ExoDDDM scattering, which we now outline.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{figures/peakday.pdf} \hspace{0.2in}
\caption{The relationship between the rotational velocity of DDDM relative to the visible baryonic disk, and the date on which the maximum and minimum Earth-DDDM relative velocity occurs. The day number refers to the number of days relative to January $1^{\text{st}}$ 2000. For standard halo DM the maximum relative velocity occurs around day $140$, due to a relative rotational velocity of $v_{\text{rel}} \approx 220 \text{ km/s}$. However for smaller relative velocities the peak velocity can occur much earlier in the year, leading to a significant shift in the phase of the annual modulation. The phase of the signal is more sensitive to the relative rotational velocity at small values. Due to the decreased modulation amplitude signature of ExoDDDM, a large number of events would be required to observe this phase.}
\label{fig:phase}
\end{figure}
\begin{itemize}
\item Narrow recoil spectra: For a standard halo the DM velocity dispersion is expected to be as large as the visible baryonic disk rotational velocity, typically $\mathcal{O} (220) \text{ km/s}$, and this leads to a broad recoil energy spectrum, even for ExoDM. On the contrary, for ExoDDDM, as the velocity dispersion is expected to be small, the recoil spectrum can be very narrow. This feature is shown in \Fig{fig:spectra}, where the typical ExoDM spectrum is much broader than the ExoDDDM spectrum. With a large number of events, a narrow recoil spectrum would be a smoking gun for a small DM velocity dispersion and ExoDDDM scattering.
\item Out-of-phase signal modulation: For DM in a standard halo the relative velocity of the DM and the Earth is dominated by the large rotational velocity of the visible baryonic disk relative to the DM halo, which is $v_{\text{rel}} \approx \mathcal{O} (220) \text{ km/s}$. Hence, once one also accounts for the peculiar velocity of the Sun and the orbit of the Earth, the dates of maximum and minimum relative velocity are predetermined, leading to clear predictions for the phase of the annual modulation signal. For DDDM the relative disk-halo velocity is expected to be smaller, so the extremal velocities and scattering rates can occur on different dates. For DDDM co-rotating with the visible baryonic disk the dates of maximum and minimum relative DM velocity are offset by $\mathcal{O}(100)$ days compared to standard halo DM, and small changes in the relative rotational velocity near $v_{\text{rel}} \approx 0 \text{ km/s}$ can lead to significant changes in the phase of the annual modulation, as shown in \Fig{fig:phase}. Thus, if an annual modulation were observed with an unexpected phase this could point towards DDDM scattering. However it should be kept in mind that a complementary signature of ExoDDDM is that the modulation amplitude can be greatly suppressed (see \Fig{fig:modamp}), and such a modified-phase signature would occur only if the splitting $\delta$ is not so great as to completely suppress any modulation.
\end{itemize}
With all of these features in mind we can now make a simple qualitative estimate of the recoil spectra for ExoDDDM scattering. Since the nuclear recoil spectrum is narrow, and the DM scatters essentially at rest, we can compare different target nuclei by simply considering the recoil energy for scattering at rest
\begin{equation}
E_R (\mb{v} = 0) = \frac{\delta M_\chi}{M_N+M_\chi} ~~.
\label{eq:rest}
\end{equation}
The true recoil spectrum, after integrating over DM phase space, will be narrow and peaked at characteristic energies given by \Eq{eq:rest}.
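Since ExoDDDM scatters essentially at rest, \Eq{eq:rest} fixes the peak of the narrow recoil spectrum for each target. The sketch below evaluates it for several nuclei; the benchmark $M = 5.5$ GeV, $\delta = 50$ keV is an illustrative assumption, and nuclear masses are approximated as $0.9315\,A$ GeV.

```python
def peak_recoil_keV(delta_keV, M_GeV, M_N_GeV):
    """Peak of the narrow ExoDDDM recoil spectrum:
    E_R(v = 0) = delta * M_chi / (M_N + M_chi)."""
    return delta_keV * M_GeV / (M_N_GeV + M_GeV)

# Illustrative benchmark: M = 5.5 GeV, delta = 50 keV.
M, delta = 5.5, 50.0
for name, M_N in [("O", 14.9), ("Si", 26.1), ("Ge", 67.7), ("Xe", 122.0)]:
    print(f"{name}: {peak_recoil_keV(delta, M, M_N):.2f} keV")
```

Heavier nuclei push the peak to lower energies, so for suitable parameters the silicon peak can sit above the CDMS-Si threshold while the xenon peak falls below the XENON thresholds.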
These qualitative features apply to the broad scenario of ExoDDDM and make it clear that optimal ExoDDDM search strategies require detectors with light nuclei, good energy resolution, and, in the event of DM discovery, an integrated exposure great enough to observe the phase of the annual modulation, although we note that the latter could be very challenging.
\section{An ExoDDDM Interpretation of the CDMS-Si Events}
\label{sec:quantitative}
For more than a decade the steady progress in DM direct detection technology has been punctuated by occasional experimental anomalies -- in particular, the long-standing DAMA annual modulation \cite{Bernabei:2010mq}, followed by the CoGeNT excess and modulation \cite{Aalseth:2011wp}, CRESST-II excess \cite{Angloher:2011uu}, and now the CDMS-Si excess \cite{Agnese:2013rvf}. In most cases the anomalous scattering events arise near the experimental low-energy threshold, leading to tentative excitement over light DM interpretations, which is accordingly tempered by concerns over the significance of the signal and the tension between the simplest DM interpretations and null search results.
The differing targets and detection techniques of each experiment make a truly model-independent comparison among the various experiments difficult. However powerful halo-independent strategies have been developed \cite{Fox:2010bu,Fox:2010bz,Frandsen:2011gi,Gondolo:2012rs,HerreroGarcia:2012fu,Bozorgnia:2013hsa} and have made it clear that elastically scattering light DM interpretations of all anomalies are now in tension with results from the XENON experiment \cite{Frandsen:2013cna,DelNobile:2013cta}. However, this tension doesn't necessarily apply to all dark matter models and can be avoided if the microphysics is different.\footnote{For a thorough discussion of the elastic scattering case considering a potential resolution of the tension based on different XENON100 efficiencies see \cite{Hooper:2013cwa}.} We know little about the dark sector, and are in no position to decide which possibilities are and are not reasonable, so any interesting modifications of DM scattering should be considered seriously. Allowing for such modifications may then alleviate tensions between particular DM hints and exclusions.\footnote{In many scenarios where the scattering dynamics are altered, the local DM phase space distribution can become an important factor in determining scattering rates at different experiments and the significant uncertainties therein must always be considered, particularly when the disagreement between different experimental results is marginal \cite{Fairbairn:2008gz,MarchRussell:2008dy,Fox:2010bu,Fox:2010bz,McCabe:2010zh,McCabe:2011sr,Frandsen:2011gi,Green:2011bv,Fairbairn:2012zs,Gondolo:2012rs,HerreroGarcia:2012fu,Bozorgnia:2013hsa}.}
The three candidate scattering events in the CDMS-Si experiment, corresponding to a $\sim 3 \sigma$ excess, recently announced by the CDMS collaboration \cite{Agnese:2013rvf}, are intriguing for a number of reasons. The experiment can distinguish nuclear and electromagnetic scattering events very well and the three observed events have properties consistent with nuclear recoils, as expected for DM scattering. Furthermore, due to extensive calibration the background is well understood and explanations of the events based on backgrounds are not forthcoming. For these reasons, and in light of the significant tension between limits from the XENON experiments and standard light DM interpretations of the CDMS-Si events, it is interesting to consider whether ExoDDDM may offer an explanation of these events in better agreement with the null search results. Even if this signal does not survive, we will learn about ways to distinguish among different categories of DM and what types of measurements will prove important. We delay specific details of the analysis methods to \App{sec:details} and focus here on the results.
\begin{figure}[t]
\centering
\includegraphics[height=0.39\textwidth]{figures/plot1.pdf} \hspace{0.2in} \includegraphics[height=0.39\textwidth]{figures/plot2.pdf} \\ \vspace{0.2in}
\includegraphics[height=0.39\textwidth]{figures/plot3.pdf} \hspace{0.2in} \includegraphics[height=0.39\textwidth]{figures/plot4.pdf}
\caption{$90\%$ best-fit regions (CDMS-Si shaded gray and CRESST-II shaded orange) and $90\%$ exclusion limits (XENON10 solid black, XENON100 dashed red, CDMS-Ge dot-dashed blue, CRESST-II low threshold analysis solid green and SIMPLE in dotted purple). Elastic and exothermic scattering of standard halo DM are shown in the upper panels, and ExoDDDM below. Elastic scattering of light DM gives a good fit to the CDMS-Si events, although there is significant tension with null results. ExoDM reduces the tension and opens up additional parameter space consistent with CDMS-Si and limits from the null search results \cite{Frandsen:2013cna}. ExoDDDM scattering allows for a CDMS-Si interpretation with heavier DM mass (lower right). For lighter ExoDDDM (lower left), the majority of the favored parameter space is consistent with the strongest bounds and the DM mass favored in asymmetric DM models.}
\label{fig:fits}
\end{figure}
We consider limits from all experiments with strong sensitivity to spin-independent light DM scattering. The strongest bounds typically come from the S2-only XENON10 analysis \cite{Angle:2011th} and from XENON100 \cite{Aprile:2012nq}. We also derive limits from the CDMS-Ge dedicated low-threshold analysis \cite{Ahmed:2010wy}. However, we do not consider the CDMS-Ge annual modulation analysis \cite{Ahmed:2012vq} as the low-energy threshold of $5$ keV weakens constraints on ExoDDDM in this case, as evident from \Fig{fig:spectra}. Also, as the modulation amplitude is suppressed for any exothermic model of DM, limits from modulation studies will be further weakened. For the sake of thoroughness we also consider limits from the SIMPLE experiment \cite{Felizardo:2011uw} and the analysis of CRESST-II commissioning run data which includes oxygen recoils to push sensitivity to lower DM masses \cite{Brown:2011dp}. As well as the $90\%$ best-fit region for the CDMS-Si events \cite{Agnese:2013rvf}, we also consider $90\%$ best-fit regions for the CRESST-II excess of events \cite{Angloher:2011uu}; however, it should be noted that this excess, or at least a significant portion of it, may be accounted for with additional background sources \cite{Kuzniak:2012zm}.\footnote{The DAMA annual modulation \cite{Bernabei:2010mq} and the CoGeNT modulation \cite{Aalseth:2011wp} will be discussed in \Sec{sec:multiple}.}
In \Fig{fig:fits} we show best-fit contours for the CDMS-Si and CRESST-II anomalies alongside experimental limits for elastically scattering standard halo DM, ExoDM and ExoDDDM. While an interpretation of the CDMS-Si and CRESST-II anomalies based on elastic scattering of light DM is in tension with XENON10 and XENON100 under the assumption of the standard halo model, an ExoDM interpretation considerably alleviates this tension \cite{Frandsen:2013cna}. Intriguingly, if the scattering were instead due to DDDM particles then there is virtually no constraint on an ExoDDDM interpretation of the CDMS-Si excess for DM with mass in the region predicted by many asymmetric DM models \cite{asymm}.
The almost total consistency between an ExoDDDM explanation of the CDMS-Si excess and the XENON limits results from the combination of exothermic scattering kinematics with the DDDM phase space distribution, as described in \Sec{sec:pheno}. The CDMS-Si events are at energies of 8.2, 9.5, and 12.3 keV and the energy resolution of the detector is 0.5 keV. By considering the silicon recoil spectrum shown in \Fig{fig:spectra} for the 5.5 GeV ExoDDDM scenario it is clear that the 8.2 and 9.5 keV events are accommodated well by the ExoDDDM scattering. For the DDDM velocity distribution chosen here, the 12.3 keV event does not originate from ExoDDDM scattering. However, this does not lead to a bad fit for ExoDDDM scattering since $\mathcal{O}(0.7)$ events are expected from background alone, and the 12.3 keV event is accommodated by this expected background rate.
If the CDMS-Si events really were due to ExoDDDM then this explanation could be verified in two ways. First, since the recoil spectrum is much more peaked than for standard halo DM or ExoDM, further integrated exposure with a silicon detector should see events accumulate in the $\sim 9$ keV region, but not at lower or higher recoil energies. Second, the ExoDDDM interpretation of the CDMS-Si excess would lead to a modification of the amplitude and phase of the annual modulation. \Fig{fig:modamp} shows that for the parameters chosen the amplitude of the modulation would be very suppressed, which is a generic feature of exothermic DM scattering, and the modified phase of the modulation would be difficult to observe.
\subsection{Varying DDDM Phase Space Parameters}
\label{sec:varying}
As discussed in \Sec{sec:pheno} there exist uncertainties in the phase space distribution of the dark disk. Thus it is interesting to consider whether the ExoDDDM explanation of the CDMS-Si excess demonstrated in \Sec{sec:quantitative} is strongly influenced by changes in the disk parameters. The two relevant parameters which could lead to changes in the agreement between the ExoDDDM CDMS-Si interpretation and limits from the XENON experiments are the relative velocity between the dark disk and the visible baryonic disk, $v_{\text{rel}}$, and the velocity dispersion of the dark disk, $\tilde{v}$.
\begin{figure}[h]
\centering
\includegraphics[height=0.39\textwidth]{figures/plotvrot.pdf} \hspace{0.2in} \includegraphics[height=0.39\textwidth]{figures/plotv0.pdf}
\caption{$90\%$ best-fit regions and exclusion limits for ExoDDDM (colors as in \Fig{fig:fits}) with varied dark disk phase space parameters. On the left panel the dark disk rotational velocity is slower than the visible baryonic disk by $50$ km/s. On the right panel the velocity dispersion of the dark disk is increased to $50$ km/s. Both cases demonstrate that the consistency between an ExoDDDM explanation of the CDMS-Si events and exclusion limits from the XENON experiments is robust to uncertainties in the dark disk parameters.}
\label{fig:vary}
\end{figure}
In the left panel of \Fig{fig:vary} we show the best-fit regions and exclusion limits for a relative rotational velocity which has been increased from a co-rotating disk $v_{\text{rel}} = 0$ km/s (\Fig{fig:fits}) to a disk which lags the visible baryonic disk rotation by $v_{\text{rel}} = 50$ km/s. On the right panel of \Fig{fig:vary} we show the impact of doubling the velocity dispersion from $\tilde{v} = 25$ km/s (\Fig{fig:fits}) to $\tilde{v} = 50$ km/s. In both cases the consistency between the ExoDDDM interpretation of the CDMS-Si excess and limits from the XENON experiments persists, demonstrating that this interpretation is robust to uncertainties in the dark disk phase space parameters.
\subsection{Looking Below Thresholds}
\label{sec:belowthresholds}
Further interesting ExoDDDM signatures arise when considering \emph{below} threshold events.\footnote{This was emphasized in \cite{NealTalk} and will be discussed in detail in \cite{Weineretal}. We thank Patrick Fox and Neal Weiner for conversations concerning the importance of below-threshold events.} Here, and in \cite{Aprile:2012nq,Frandsen:2013cna}, when calculating limits from XENON100 \cite{Aprile:2012nq} a low-energy threshold of $6.6$ keV, corresponding to three S1 photoelectrons, is employed. This is shown in \Fig{fig:spectra} where one can see that in both ExoDM and ExoDDDM scenarios the vast majority of xenon scattering events lie below this threshold for the parameters chosen to fit the CDMS-Si excess. For scattering energies below $3.4$ keV, also shown in \Fig{fig:spectra}, the detection efficiency is effectively zero since one expects less than one S1 photoelectron. Hence this represents a hard low-energy threshold. However, it may be possible to lower the threshold from $6.6$ keV down to $3.4$ keV, if a reliable extrapolation of the efficiency could be performed.
In Fig.\ $2$ of \cite{Aprile:2012nq} no signal events are observed below the low-energy threshold of $6.6$ keV down to $3.4$ keV. Hence if one could reliably extend the low energy threshold into this region then, by comparison with \Fig{fig:spectra}, the XENON100 limits could become significantly more constraining on light elastically scattering DM and ExoDM interpretations of the CDMS-Si excess. On the other hand, due to the narrow spectrum for ExoDDDM the xenon scattering events all lie below the single photoelectron threshold, and it is unlikely that ExoDDDM interpretations of the CDMS-Si events would become significantly constrained in this case. Thus an improved understanding of the low energy efficiency of the XENON100 experiment could significantly narrow the range of models which could explain the CDMS-Si events.
We also show the low-energy threshold for the CDMS-Si analysis in \Fig{fig:spectra}. For ExoDM, increased integrated exposure with a silicon detector would lead to additional events below the $7$ keV threshold set by CDMS \cite{Agnese:2013rvf}. However, for ExoDDDM the spectrum is considerably narrower, and additional events would not be expected below the $7$ keV threshold.
A push to lower thresholds with both xenon and silicon-based detectors could illuminate DM interpretations of the CDMS-Si events, and further constrain, or support, ExoDM and ExoDDDM scenarios.
\subsection{Fitting Multiple Anomalies}
\label{sec:multiple}
In \Fig{fig:fits} we have shown best-fit regions for CDMS-Si and CRESST-II. We have not shown best-fit regions for the DAMA and the CoGeNT modulation signals. Interpretations of these anomalies are in tension with limits from the XENON experiments for elastic scattering. ExoDM and ExoDDDM also both give a bad fit to these anomalies for parameters which fit the CDMS-Si events, as we now demonstrate.
We can use \Eq{eq:rest} to estimate the typical exothermic splitting, $\delta$, required to fit a particular anomaly by requiring that the typical recoil energy lies within the energy range of anomalous nuclear recoils at each detector. We estimate these energy ranges in \Tab{tab:energies}, where we have used the sodium quenching factor $q_{Na} = 0.3$ for DAMA.
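\Eq{eq:rest} appears earlier in the paper and is not reproduced in this section. As a purely illustrative cross-check, one can use the zero-relative-velocity limit of exothermic kinematics, $E_R \approx \delta\,\mu_{\chi N}/m_N$ with $\mu_{\chi N}$ the DM--nucleus reduced mass; this is an assumed approximation, not necessarily the exact form of \Eq{eq:rest}:

```python
# Illustrative estimate of the typical exothermic recoil energy,
# assuming the zero-relative-velocity limit E_R ~ delta * mu / m_N
# (mu = DM-nucleus reduced mass); a sketch, not the paper's analysis code.

AMU_GEV = 0.9315  # atomic mass unit in GeV

def typical_recoil_kev(delta_kev, m_dm_gev, a_target):
    """Typical nuclear recoil energy for exothermic scattering of slow DM."""
    m_nuc = a_target * AMU_GEV                   # target nucleus mass (GeV)
    mu = m_dm_gev * m_nuc / (m_dm_gev + m_nuc)   # reduced mass (GeV)
    return delta_kev * mu / m_nuc

# delta = 50 keV, M = 5.5 GeV on silicon (A = 28): lands near the
# 8-10 keV region of the CDMS-Si events.
print(typical_recoil_kev(50.0, 5.5, 28))
```

For $\delta = 50$ keV and a $5.5$ GeV DM mass this gives $E_R \approx 8.7$ keV on silicon, in the range of the observed CDMS-Si events; lighter targets such as oxygen give correspondingly larger typical recoils.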
\begin{table}[t!!]
\centering
\begin{tabular}{c l}
Typical $E_R$ & Experiment \\ \hline
$10.0 \lesssim E_O \lesssim 22.0$ keV & (CRESST-II) \\
$8.2 \lesssim E_{Si} \lesssim 12.3$ keV & (CDMS-Si) \\
$7.5 \lesssim E_{Na} \lesssim 12.5$ keV & (DAMA) \\
$0.5 \lesssim E_{Ge} \lesssim 2.0$ keV & (CoGeNT)
\end{tabular}
\caption{Typical nuclear recoil energies of anomalous events at various experiments.}
\label{tab:energies}
\end{table}
\begin{figure}[h!]
\centering
\includegraphics[height=0.43\textwidth]{figures/approxoverlap.pdf} \hspace{0.2in} \includegraphics[height=0.41\textwidth]{figures/damafit.pdf}
\caption{The exothermic splitting $\delta$ required to give nuclear recoil energies in the typical range required by the various anomalies (left panel). Required values for a given experiment lie between the two corresponding contours. Splittings required for CoGeNT are inconsistent with those required for CRESST-II, CDMS-Si and unquenched DAMA. We also show the modulation spectrum at unquenched DAMA for $\delta = 50$ keV, $M=5.5$ GeV, and $\sigma_n = 10^{-40} \text{ cm}^2$ (right panel). We have increased the relative rotational velocity to $v_{rel} = 100 \text{ km/s}$ to give dates of maximum and minimum relative velocity closer to the dates of maximum and minimum event rates in DAMA; however, the spectrum is in anti-phase with the data.}
\label{fig:anomalies}
\end{figure}
In the left panel of \Fig{fig:anomalies} we find the required values of $\delta$ for these energy ranges, and show the preferred range of $\delta$ between corresponding contours. It is immediately clear that the CDMS-Si and CoGeNT modulation regions do not overlap, making a consistent ExoDDDM explanation for the CoGeNT modulation and CDMS-Si signals unlikely.\footnote{If the channelling effect \cite{Bozorgnia:2010xy} is included for DAMA then the required values of $\delta$ for DAMA are shifted downwards by a factor of $0.3$, and a common ExoDM origin for the DAMA and CoGeNT modulations can be found, as demonstrated in \cite{Graham:2010ca}. However this would not move the CRESST-II and CDMS-Si regions into agreement with CoGeNT, and it is clear that a common ExoDDDM interpretation of CRESST-II, CDMS-Si and CoGeNT is unlikely.
}
For a relative rotational velocity between the dark disk and visible baryonic disk of $v_{rel} = 0 \text{ km/s}$, \Fig{fig:phase} shows that the expected phase of the annual modulation would be incorrect for fitting DAMA. However, increasing the rotational velocity to $v_{rel} = \mathcal{O} (\gtrsim 60) \text{ km/s}$ would bring the dates of maximum and minimum relative DM velocities in line with typical expectations for standard halo DM scattering. Thus the date of maximum DM velocity for DAMA can be broadly compatible with an ExoDDDM explanation of CRESST-II and CDMS-Si. However, due to the exothermic scattering, the amplitude of the modulation predicted at DAMA has the wrong sign. This is shown in the right panel of \Fig{fig:anomalies} for typical masses and splittings but an enhanced cross-section, demonstrating that parameter choices which give a good fit to CDMS-Si typically give a bad fit to the DAMA modulation: the sign of the modulation is wrong, and the cross-section would be too low to explain the amplitude of the annual modulation. Hence a common ExoDDDM explanation of all four direct detection anomalies is unlikely. However, a consistent ExoDDDM interpretation of the CDMS-Si and CRESST-II events is possible.
\begin{figure}[t!]
\centering
\includegraphics[height=0.43\textwidth]{figures/plotiso.pdf}
\caption{Bounds on a heavy ExoDDDM interpretation of the CDMS-Si events when DM couplings to protons and neutrons are tuned to suppress bounds from XENON10 and XENON100. This introduces new higher-mass interpretations of the CDMS-Si events which are consistent with exclusion limits.}
\label{fig:isospin}
\end{figure}
\subsection{Introducing Isospin-Dependent Couplings}
All results presented thus far have been under the assumption that the DM couples equally to protons and neutrons, i.e.\ $f_n=f_p$. However, if this is not the case then the relative scattering rates at different detectors are modified depending on the number of protons and neutrons within the nucleus \cite{Kurylov:2003ra,Giuliani:2005my,Chang:2010yk,Feng:2011vu,Feng:2013vod}. For silicon at CDMS-Si and oxygen at CRESST-II we have $(A-Z)=Z$, hence the scattering rates scale as $(f_n+f_p)^2$ and isospin-dependent couplings will not improve the agreement between the best-fit regions for both experiments. However, isospin-dependent couplings can significantly weaken limits from XENON10 and XENON100 for certain values of these couplings.
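The coherent scattering rate on a given element scales as $\sum_i x_i \left[ Z f_p + (A_i - Z) f_n \right]^2$, summed over isotopes $i$ with abundances $x_i$. A minimal numeric sketch of this scaling (with approximate, truncated isotope abundances rather than the full tables of \cite{Feng:2011vu}) shows why values near $f_n/f_p \approx -0.7$ suppress xenon far more than silicon:

```python
# Sketch of isospin-dependent couplings: per-element rate ~ sum over
# isotopes of abundance * (Z*f_p + (A-Z)*f_n)^2. Isotope lists and
# abundances below are approximate and truncated, for illustration only.

XENON = [(54, 129, 0.264), (54, 131, 0.212), (54, 132, 0.269),
         (54, 134, 0.104), (54, 136, 0.089)]  # (Z, A, approx. abundance)
SILICON = [(14, 28, 0.922), (14, 29, 0.047), (14, 30, 0.031)]

def coherent_factor(isotopes, fn_over_fp):
    return sum(x * (z + (a - z) * fn_over_fp) ** 2 for z, a, x in isotopes)

def suppression(isotopes, fn_over_fp):
    """Rate relative to the isospin-conserving case f_n = f_p."""
    return coherent_factor(isotopes, fn_over_fp) / coherent_factor(isotopes, 1.0)

# Near f_n/f_p = -0.7 xenon is suppressed far more strongly than silicon.
print(suppression(XENON, -0.7), suppression(SILICON, -0.7))
```

With these illustrative numbers, at $f_n/f_p = -0.7$ silicon retains a few percent of its isospin-conserving rate while xenon is suppressed by roughly a further two orders of magnitude, weakening the XENON limits relative to a CDMS-Si signal.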
For ExoDDDM with $M\sim 5$ GeV there is already almost complete agreement between the best-fit regions and XENON limits, and there is no motivation to include isospin-dependent couplings in this case. However, if ExoDDDM scattering and isospin violation are combined, a DM interpretation of the CDMS-Si events for larger DM masses is allowed. For small splittings $\delta \sim 15$ keV a fit of ExoDDDM to CDMS-Si prefers greater DM masses. We calculate XENON limits using the isotope abundances of \cite{Feng:2011vu}, and in \Fig{fig:isospin} show that, by allowing for isospin-dependent couplings to be tuned to suppress the sensitivity of XENON100, some regions with relatively large DM masses, $M\lesssim 80$ GeV, can open up.
\section{Indirect Constraints}
\label{sec:colliderindirect}
Here we estimate and discuss the indirect bounds from colliders and solar capture on broad classes of models which provide an ExoDDDM interpretation of the CDMS-Si events. We find that neither collider production nor solar capture leads to strong constraints at present, although solar capture could become interesting in the future.
Although the DM is neutral and cannot be observed directly, it could in principle show up indirectly at colliders as a missing energy signature: $\cancel{E}_T+X$, where $X$ could be a mono-photon, mono-Z, or mono-gluon. The cross-sections for these processes depend on the particular SM operator coupling the DM to SM states, the size of the coupling, and also the mass of the states which mediate the interactions. For heavy mediators the limits on spin-independent couplings do not typically constrain DM-nucleon cross-sections below $\sigma_n \sim 10^{-40} \text{ cm}^2$ \cite{Primulando:2012xla}, and these limits are further weakened if the mediator is light. Thus collider bounds place no strong constraint on the best-fit parameters considered here. However, if the local DDDM density were lower a larger cross-section would be required to explain the CDMS-Si events, the implied DM-visible sector interaction strength may be greater, and collider signatures may be possible in the near future.
If the DM is symmetric and can annihilate to SM final states, additional indirect constraints arise from neutrino detectors. If $u$ is the speed of a DM particle, determined by the DM velocity distribution, and $v(r)$ is the speed which a particle picks up after falling into the Sun at a radius $r$ from the centre, then a DM particle which has fallen into the Sun will have a total speed $w(r) = \sqrt{u^2+v^2 (r)}$. At this point it could scatter on any of the nuclei contained within the Sun, and if the new speed after scattering, which we denote $\widetilde{w}(r)$, satisfies $\widetilde{w}(r) < v (r)$ it will become gravitationally bound. Over time it may continue to scatter, lose kinetic energy, and fall into the centre of the Sun, eventually annihilating with other captured DM particles. If the DM annihilation cross-section is large enough then the annihilation rate becomes limited only by the capture rate, and the total rate of DM annihilation in the Sun is determined solely from the total capture rate. If the annihilation final states contain SM states then decays or re-scattering of these states can produce neutrinos which propagate to the Earth and can be detected. This leads to indirect constraints on DM from neutrino observatories such as Super-Kamiokande \cite{Desai:2004pq} and IceCube \cite{Aartsen:2012kia}. The IceCube limits are typically most constraining for DM masses of tens of GeV and above, while limits from Super-Kamiokande are typically strongest for lighter DM, in the region of $M \sim 5$ GeV.
For DM with non-standard kinematic properties, such as ``Inelastic DM'' \cite{TuckerSmith:2001hy} the dynamics of capture in the Sun \cite{Nussinov:2009ft,Menon:2009qj,Shu:2010ta} or other astrophysical bodies \cite{McCullough:2010ai} has been shown to exhibit significant departures from the standard elastically scattering case. Therefore consideration of indirect limits from annihilation in the Sun, specifically for ExoDDDM, is deserving of its own complete study. Here we will simply estimate the bounds. If the scattering is exothermic then some portion of the exothermic energy released will go into giving the DM a small kick, sometimes increasing its kinetic energy. Since this kick increases the speed, it reduces the fraction of particles which, after scattering, have speeds below the escape velocity and can in some cases suppress the solar capture of ExoDDDM. Whether or not this suppression is important depends on the balance between the exothermic splitting energy $\delta$, and the kinetic energy picked up by falling into the Sun. If the latter is dominant then the exothermic splitting will be unimportant. If the former dominates then the suppression of capture can be non-negligible.
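The balance between $\delta$ and the infall kinetic energy described above can be put into rough numbers. A minimal sketch, using assumed round values for the solar escape velocity (not taken from the paper):

```python
# Rough comparison of the exothermic splitting with the kinetic energy
# gained falling into the Sun: KE = (1/2) M v_esc^2. The escape-velocity
# values below are assumed round numbers, for illustration only.

C_KM_S = 3.0e5  # speed of light in km/s

def infall_kinetic_energy_kev(m_dm_gev, v_esc_km_s):
    """Non-relativistic kinetic energy in keV."""
    beta = v_esc_km_s / C_KM_S
    return 0.5 * m_dm_gev * 1.0e6 * beta ** 2  # GeV -> keV conversion

# Solar escape velocity: ~620 km/s at the surface, ~1400 km/s near the
# centre (assumed values). Compare with delta ~ 50 keV for M = 5 GeV DM.
for v_esc in (620.0, 1400.0):
    print(v_esc, infall_kinetic_energy_kev(5.0, v_esc))
```

For $M \sim 5$ GeV the infall kinetic energy ranges from roughly $10$ keV near the surface to $\sim 50$ keV near the centre, so splittings $\delta \sim 50$ keV are comparable to it over much of the Sun's interior.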
Once the first exothermic scattering has occurred the DM is in the lower of the two states. To scatter again at tree-level will require inelastic scattering, and if the DM particle does not have sufficient kinetic energy this will be kinematically blocked. However, the DM will also be capable of scattering elastically at one-loop and this cross-section is typically large enough to enable further scattering and energy loss of the DM particle.
\begin{figure}[t!]
\centering
\includegraphics[height=0.43\textwidth]{figures/solarcapture.pdf}
\caption{Solar capture rates for scattering on different nuclei for ExoDM (solid) and ExoDDDM (dashed). The benchmark DM mass and cross-section considered here are $M=5$ GeV and $\sigma_n = 10^{-43} \text{ cm}^2$. Due to the small relative velocity DDDM capture is typically enhanced relative to standard halo DM, and there is little variation due to the exothermic splitting for ExoDDDM, with some suppression when the splitting is increased in the ExoDM case.}
\label{fig:capture}
\end{figure}
To estimate the solar capture rate we follow the calculation of \cite{Shu:2010ta} and use the Solar element abundances of \cite{Asplund:2009fu}. The capture rates are shown in \Fig{fig:capture} for a choice of DM mass and cross-section which allows a fit to the CDMS-Si events. In the elastic limit, which corresponds to the $\delta\rightarrow 0$ limit in \Fig{fig:capture}, one can see that DDDM capture is greatly enhanced over standard halo DM capture. This results from the much lower velocity of DDDM compared to standard halo DM falling into the Sun. If the initial speed of the DM is much larger than the escape velocity then, after scattering, only a small fraction of DM particles will have speeds less than the escape velocity, and only this small fraction will become captured. However if the initial speed is comparable to, or smaller than the escape velocity then a larger fraction of scattered particles will have new speeds below the escape velocity, and the capture rate is subsequently enhanced.\footnote{This is also described in \cite{Gould:1987ir,Gould:1987ww}.}
In \Fig{fig:capture}, as the exothermic splitting is increased the capture rate is suppressed for ExoDM but not significantly for ExoDDDM. For ExoDDDM the kinematics is entirely dominated by the kinetic energy picked up by falling into the Sun, and the exothermic splitting is largely irrelevant. For ExoDM the initial speeds are greater and so a smaller fraction of scattered particles become captured. The effect of the exothermic splitting ``kicking'' the DM back out of the Sun is more pronounced for this small fraction of events, and hence the splitting becomes more important for ExoDM.
To estimate solar capture limits we consider a recent analysis \cite{Kappl:2011kz} which studied spin-independent elastic scattering of light DM. The most constrained final state is when the DM annihilates $100\%$ into neutrinos. This final state is unlikely for concrete models, however we consider limits on this final state as a way of estimating the strongest bounds possible. These limits exclude DM-nucleon cross-sections $\sigma_n \gtrsim 10^{-41} \text{ cm}^2$ for standard halo DM of mass $M \sim 5 \text{ GeV}$ which scatters elastically \cite{Kappl:2011kz}. \Fig{fig:capture} shows that the capture rate for ExoDDDM with $M \sim 5 \text{ GeV}$ and $\delta \sim 50$ keV is enhanced relative to standard elastically scattering DM by a factor of $\sim 6$.\footnote{The enhancement follows almost entirely from the lower relative velocity of the DDDM.} Hence we can estimate the scattering cross-section bound for ExoDDDM annihilation into neutrinos to be $\sigma_n \lesssim 1.7 \times 10^{-42} \text{ cm}^2$. This is approaching, but does not exclude, the required typical cross-section for an explanation of the CDMS-Si excess, which is $\sigma_n \approx 10^{-43} \text{ cm}^2$.\footnote{As stated in \Sec{sec:pheno} there is a large uncertainty in the local DDDM density, and hence the cross-section required to explain the CDMS-Si events may vary greatly. However the solar capture rate scales with the product of cross-section and DM density in the same way, hence variations in the DDDM density will not improve or degrade the limits from solar capture.} Thus solar limits do not strongly constrain an ExoDDDM explanation of the CDMS-Si bounds, even if the most constrained final state is chosen. However since the bounds are within two orders of magnitude it would be interesting to perform a more precise study which includes re-scattering effects in the approach to equilibrium.
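The rescaling behind this estimate is simple enough to state explicitly; a sketch of the arithmetic, with both inputs taken from the discussion above:

```python
# Rescaling the elastic solar-capture bound by the ExoDDDM capture
# enhancement; both input numbers are the ones quoted in the text.

elastic_bound_cm2 = 1.0e-41  # neutrino-channel bound, elastic 5 GeV DM
capture_enhancement = 6.0    # ExoDDDM capture enhancement over elastic DM

exodddm_bound_cm2 = elastic_bound_cm2 / capture_enhancement
print(exodddm_bound_cm2)  # ~1.7e-42 cm^2, above the ~1e-43 cm^2 best fit
```

The resulting estimate, $\sigma_n \lesssim 1.7 \times 10^{-42} \text{ cm}^2$, sits about an order of magnitude above the typical best-fit cross-section.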
Since the DDDM must have large self-interactions additional contributions to the capture rate can arise from DM self-capture. However, as already stated, for any symmetric component of the DM the annihilation rate is typically faster than the capture rate. Hence the number of DM particles within the Sun, which must be scattered upon for self-capture, is kept low due to the efficient annihilation. Thus for models of symmetric ExoDDDM this effect is unlikely to be important.
Alternatively, in models of ExoDDDM which contain an asymmetric component, such as the model presented in \Sec{sec:model}, not all of the DM annihilates. In this case the self-capture of ExoDDDM can be important and the build up of ExoDDDM could lead to interesting effects on the inner dynamics of the Sun. However at present any constraints are not strong \cite{Frandsen:2010yj,Taoso:2010tg,Kouvaris:2010jy,McDermott:2011jp,Iocco:2012wk}. A complete calculation including both symmetric and asymmetric components, allowing for ExoDDDM capture and self-capture (including re-scattering effects) followed by the subsequent annihilation of the symmetric components into various final states is beyond the scope of this work. However, we have demonstrated that strong constraints on ExoDDDM interpretations of the CDMS-Si excess are unlikely to arise from solar capture. That said, it would be very interesting to determine how all of these processes would fit together for specific models.
\section{A Theory of ExoDDDM}
\label{sec:model}
ExoDM scenarios have been considered previously in the context of standard halo DM direct detection, and studies have shown that DM candidates with a cosmologically long-lived excited state can arise in complete theoretical constructions \cite{ArkaniHamed:2008qn,Batell:2009vb,Essig:2010ye,Graham:2010ca}. However, since models of DDDM require a number of fields and involved dynamics, we will present a model of ExoDDDM here as a proof of principle. To do this we must combine the ingredients of ExoDM models \cite{ArkaniHamed:2008qn,Batell:2009vb,Essig:2010ye,Graham:2010ca} with those of DDDM models \cite{Fan:2013yva,Fan:2013tia}.
We begin in the UV with two Abelian gauge symmetries, $\text{U}(1)' \times \text{U}(1)_D$. The $\text{U}(1)'$ will be spontaneously broken by a scalar Higgs field $H'$ at the GeV-scale leading to a massive dark gauge boson $Z'$. We assume this dark gauge boson is kinetically mixed with hypercharge and has mass mixing with the Z-boson \cite{Babu:1997st}, enabling it to mediate exothermic DDDM scattering on nuclei following standard constructions with light mediators \cite{ArkaniHamed:2008qn,Batell:2009vb,Essig:2010ye,Graham:2010ca,Frandsen:2011cg}.\footnote{We thank Felix Kahlhoefer for discussions on this point.}
The $\text{U}(1)_D$ will be unbroken, leaving a massless dark photon $\gamma_D$ which enables the efficient cooling of the DDDM to form a disk structure as in \cite{Fan:2013yva,Fan:2013tia}. Unlike the $\text{U}(1)'$ we require no kinetic mixing between $\text{U}(1)_D$ and $\text{U}(1)_Y$. This can be enforced if the $\text{U}(1)_D$ is embedded in a non-Abelian gauge symmetry in the UV \cite{Dienes:1996zr}, or by other means.\footnote{See e.g.\ the discussion in \Refs{Fan:2013yva,Fan:2013tia}.}
\begin{table}[ht]
\caption{ExoDDDM Gauge Charges}
\centering
\begin{tabular}{c | c c c c}
& $C$&$Y_1$&$Y_2$ & $H'$ \\
\hline
$\text{U}(1)'$ &$0$&$1$&$-1$ & $2$ \\
$\text{U}(1)_D$ & $1$ & $1$ & $1$ & $0$
\end{tabular}
\label{tab:charges}
\end{table}
The gauge eigenstates in the matter sector are Dirac fermions, $C,Y_1,Y_2$, with charges as detailed in \Tab{tab:charges}. The Lagrangian is given by
\begin{eqnarray}
\mathcal{L} & = & \mathcal{L}_{\text{SM}} + \epsilon' F'_{\mu \nu} F^{\mu \nu} + \delta m^2 Z'_{\mu} Z^{\mu} + \mathcal{L}_{\text{Kin}} - V \left(H',{H'}^\ast \right) \\
& & - m_C \overline{C} C - m_Y ( \overline{Y}_1 Y_1 + \overline{Y}_2 Y_2) - (\lambda H' \overline{Y}_1 Y_2 + \text{h.c.}) ~~,
\label{eq:lag}
\end{eqnarray}
where we have given equal vector-like mass to both $Y$ fermions.\footnote{The equality of masses can be enforced by some symmetry in the UV, such as an $\text{SU}(2)$ gauge symmetry of which $Y_1$ and $Y_2$ form a doublet. This could be broken by an $\text{SU}(2)$ adjoint to $\text{SU}(2) \rightarrow \text{U}(1)'$, leading to the charges of \Tab{tab:charges}. In this case $H'$ could arise as the off-diagonal component of an $\text{SU}(2)$ adjoint, which breaks $\text{U}(1)'$ completely.} We assume that the scalar potential is minimized with a non-zero vacuum expectation value $\langle H' \rangle \sim $ GeV, generating a mass for the $Z'$ boson in the $M_{Z'} \sim 100$ MeV range. We also assume that the Yukawa couplings are small, and after $\text{U}(1)'$ breaking we have $\langle \lambda H' \rangle = \delta/2$ in the tens of keV range.\footnote{These small Yukawa couplings of $\lambda \sim \mathcal{O} (10^{-5})$ are technically natural, and so we put them in by hand at this value, as in the SM. However, it would be interesting to consider generating them through some mechanism, originating perhaps at the loop level.} Including this symmetry breaking, the gauge eigenstates $Y_{1,2}$ will mix, forming $\text{U}(1)_D$ eigenstates of mass $m_\pm = m_Y \pm \delta/2$, which we will denote $X_{\pm}$. After diagonalizing to the mass eigenstate basis, the dark photon $\gamma_D$ couples diagonally and the dark $Z'$ boson entirely off-diagonally,
\begin{equation}
\mathcal{L} \supset \sum_\pm g_D \overline{X}_\pm \cancel{\gamma}_D X_\pm + g' \overline{X}_\pm \cancel{Z}' X_\mp ~~.
\end{equation}
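The statement that the two eigenstates are split by exactly $\delta$ can be checked with a two-state diagonalization. An illustrative sketch (the numerical values are benchmark choices, in keV):

```python
# Sketch: diagonalizing the Y1-Y2 mass matrix with equal vector-like
# masses m_Y and off-diagonal entry <lambda H'> = delta/2.
# Benchmark numbers: m_Y = 5.5 GeV, delta = 50 keV (all in keV units).
import numpy as np

m_y = 5.5e6   # keV
delta = 50.0  # keV

mass_matrix = np.array([[m_y, delta / 2.0],
                        [delta / 2.0, m_y]])
m_minus, m_plus = np.linalg.eigvalsh(mass_matrix)  # ascending order
print(m_plus - m_minus)  # the X_+ / X_- splitting equals delta
```

The eigenvalues are $m_Y \pm \delta/2$, independent of the overall mass scale, as used in the text.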
Thus the matter sector of the DDDM will consist of a light Dirac fermion $C$ which can be thought of as the DDDM ``electron'' \cite{Fan:2013yva,Fan:2013tia}, and the two heavy Dirac fermions $X_{\pm}$ which can be thought of as DDDM ``protons''. All have unit charge under the $\text{U}(1)_D$ symmetry and are subject to the long-range forces it mediates.
Before we consider cosmology and direct detection, it is pertinent to establish the lifetime of these states. Both $C$ and $X_-$ are absolutely stable due to symmetries analogous to lepton and baryon number; however, the situation is more subtle for $X_+$. The dark photon couples only diagonally to the $X_\pm$ mass eigenstates, and so the decay process $X_+ \rightarrow X_- + n\times \gamma_D$ is forbidden at tree-level.\footnote{Since the $Z'$ couples entirely off-diagonally, this process does not arise at one-loop level either. This is because any decay $X_+ \rightarrow X_- + n\times \gamma_D$ requires an incoming $X_+$ and an outgoing $X_-$. In any loop diagram an internal vertex involving a $Z'$ boson changes $X_+ \rightarrow X_-$; however, there are an even number of such vertices since each internal $Z'$ has two endpoints, and no net change leading to $X_+ \rightarrow X_-$ can be generated. If the $Z'$ coupled both off-diagonally and diagonally this would not be the case and loops could generate $X_+ \rightarrow X_- + n\times \gamma_D$, hence the assumption of equal vector-like masses in \Eq{eq:lag} is critical to the lifetime of the excited state.} The decay $X_+ \rightarrow Z' + X_- $ is kinematically forbidden; however, the decays $X_+ \rightarrow X_- + \overline{\nu} \nu$ and $X_+ \rightarrow X_- + 3 \gamma$ are generated due to mixing between the $Z'$ boson and the SM photon and $Z$-boson. The lifetime for these decays has been calculated in \cite{Batell:2009vb} for the parameters of interest and comfortably exceeds the age of the Universe. De-excitation in the early Universe, which would deplete the number density of excited states, is inefficient \cite{Batell:2009vb}, and the current relic abundance of $X_+$ is similar to that of $X_-$.
The dark photon $\gamma_D$ drives efficient annihilation of the light fermion $C$ in the early Universe, washing out any relic abundance \cite{Fan:2013yva,Fan:2013tia}. However, this state is required for the cooling and eventual collapse into a dark disk, so we follow \cite{Fan:2013yva,Fan:2013tia} and assume that in the early Universe a number asymmetry is generated in $C$ fermions along with an opposite asymmetry in $X_\pm$. This is analogous to the generation of the baryon asymmetry and, given the plethora of successful models that can generate such an asymmetry in the dark sector \cite{asymm}, we consider this to be a reasonable assumption. Since this asymmetry is shared equally between $X_\pm$, we have $n_{\overline{C}} = 2 n_{X_+} = 2 n_{X_-}$ and there is no net $\text{U}(1)_D$ charge asymmetry. Thus an asymmetric component of $X_\pm$ will exist in this model; however, there may also be an additional symmetric component, depending on the coupling strengths of $\text{U}(1)_D$ and $\text{U}(1)'$.
Finally we must determine the direct detection cross-section. The ExoDDDM-nucleon cross-section in the elastic scattering limit is given by
\begin{eqnarray}
\sigma_n & = & 16 \pi \alpha' \alpha_{EM} {\epsilon'}^2 \frac{\mu_n^2}{M_{Z'}^4} \\
& \approx & \left( \frac{\epsilon'}{10^{-6}} \right)^2 \left( \frac{\alpha'}{10^{-4}} \right) \left( \frac{100 \text{ MeV}}{M_{Z'}} \right)^4 \times 1.4 \times 10^{-40} \text{ cm}^2 ~~,
\end{eqnarray}
hence, due to the light mediator, the required direct detection cross-section is easily obtained even for extremely small kinetic mixing. This completes the model, which contains all of the necessary ingredients for ExoDDDM and is consistent with current bounds.
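As a rough numerical cross-check of the benchmark point quoted above, one can evaluate the cross-section formula directly. The following sketch assumes the heavy-DM limit $\mu_n \approx m_n \approx 0.939$ GeV, takes $\alpha_{EM} \approx 1/137$, and uses the standard conversion $1\,\text{GeV}^{-2} \approx 3.894\times 10^{-28}\,\text{cm}^2$; the variable names are ours.

```python
import math

# Benchmark point from the text; alpha_dark denotes the dark fine-structure
# constant appearing in the sigma_n formula (an assumption of this check).
alpha_dark = 1e-4
alpha_EM = 1.0 / 137.036
eps = 1e-6            # kinetic mixing epsilon'
M_Zp = 0.1            # Z' mass in GeV
mu_n = 0.939          # DM-nucleon reduced mass ~ nucleon mass (heavy-DM limit), GeV

GEV2_TO_CM2 = 3.894e-28   # 1 GeV^-2 expressed in cm^2

sigma_n = 16 * math.pi * alpha_dark * alpha_EM * eps**2 * mu_n**2 / M_Zp**4
print(f"{sigma_n * GEV2_TO_CM2:.2e} cm^2")   # ~1.3e-40 cm^2
```

This lands within roughly ten percent of the quoted $1.4\times 10^{-40}\,\text{cm}^2$; the residual difference is sensitive to the exact $\mu_n$ and $\alpha_{EM}$ used.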
\section{Conclusions}
\label{sec:conclusions}
The idea that the entire DM abundance consists of a single species of cold and collisionless particle has dominated thinking in DM research for a long time. An alternative possibility, where there also exist subdominant components of the total DM abundance which exhibit rich and complex dynamics is relatively unexplored, even though it is both plausible and interesting. One recently proposed concrete scenario, ``Double-Disk Dark Matter'' \cite{Fan:2013yva,Fan:2013tia} involves a component of DM which has long range interactions and can cool to form complex structures such as galactic DM disks, in analogy with the behavior of visible matter. DDDM has many novel phenomenological signatures. However in the simplest scenarios the direct detection prospects appear limited \cite{Fan:2013yva,Fan:2013tia}.
In this work we have demonstrated that if the DDDM contains excited states which can scatter exothermically on nuclei (ExoDDDM) the direct detection phenomenology can instead be very rich, leading to novel signatures that could distinguish DDDM signals from standard DM candidates. The signatures particular to ExoDDDM include highly peaked recoil spectra, a reduced annual modulation amplitude and, if a large number of events were accumulated and the modulation amplitude has not been suppressed too greatly due to the exothermic splitting, an unexpected phase of the annual modulation.
As well as outlining the broad qualitative features of ExoDDDM we have also calculated current direct detection limits on ExoDDDM and investigated whether any of the direct detection anomalies could be explained by such a scenario. Intriguingly the $\sim 3 \sigma$ excess recently announced by the CDMS collaboration \cite{Agnese:2013rvf} can be well explained with ExoDDDM scattering, in some cases with the majority of the preferred parameter space unconstrained by limits from other experiments. We have demonstrated that an ExoDDDM interpretation of the CDMS-Si excess is consistent with collider and indirect limits, and have also sketched a simple model which accommodates exothermic scattering on nuclei and sufficiently rapid cooling and collapse of DM into a dark disk.
As always with anomalous events near threshold in direct detection experiments, only time and further experimental investigation, including a push to understand lower nuclear recoil thresholds, will ultimately determine whether the CDMS-Si excess is unexpected background or tentative hints of DM signal. In the latter case ExoDDDM offers a novel and self-consistent interpretation.
\acknowledgments{We are grateful to Jiji Fan, Patrick Fox, Andrey Katz, Christopher McCabe, Matthew Reece, and Jessie Shelton for useful discussions and to Adam Anderson, Julien Billard, Felix Kahlhoefer and Neal Weiner for useful conversations and comments on an early version of the draft. M.M. is supported by a Simons Postdoctoral Fellowship. The work of L.R. was supported in part by the Fundamental Laws Initiative of the Harvard Center for the Fundamental Laws of Nature and by NSF grants PHY-0855591 and PHY-1216270.}
\section{Introduction}
Treatment effect estimation in the presence of unobserved confounding variables is a very challenging problem, arising in many areas including statistics, biology, computer science and economics. With some additional domain knowledge, such as the existence of instrumental variables or negative controls, the effect of unobserved confounding variables can be removed, leading to consistent estimation of the treatment effect; see \cite{wooldridge2015introductory,lipsitch2010negative} for a review. However, if such information is unavailable, how to correct for the bias due to the unobserved confounding variables is largely unexplored.
In this context, a class of methods known as surrogate variable analysis (SVA) has been proposed to account for hidden variables (e.g., batch effects) in the analysis of genomics data (e.g., \cite{alter2000singular}, \cite{leek2007capturing}, \cite{sun2012multiple}). These methods relax the assumptions commonly used in the literature on instrumental variables or negative controls, but still require substantial a priori knowledge of the data or impose a strict structure on the underlying model. For example, \cite{gagnon2012using} required knowing a null set of features in the exposure variables, \cite{mckennan2019accounting} required row-wise sparsity of the non-null features, and \cite{wang2017confounder} imposed a linear causal relationship between the observed variables and the hidden variables. More recently, \cite{bing2022adaptive} extended these methods to deal with models with high-dimensional features. However, all of these methods ignore the possibility of any interaction between the observed treatment variable and the hidden variables, which arises frequently in the presence of heterogeneous treatment effects (e.g., the treatment effect may vary according to the value of the confounding variables). Failing to account for this structure may lead to model misspecification, so that the validity and interpretability of statistical findings may be severely limited.
In this paper, we consider the following model
\begin{align}\label{eq:first_model}
\bY=\bA^T\bX+\bB^T\bZ+ \sum_{j=1}^{p} \bC^T_j X_j \bZ + \bE,
\end{align}
where $\bY \in \RR^m$ are the response variables, $\bX \in \RR^p$ are the observed covariates (including the treatment variable and observed confounders), $\bZ \in \RR^K$ are the unobserved covariates or hidden variables, and $\bE \in \RR^m$ are the random errors. The matrices $\bA, \bB, \bC_1,...,\bC_p$ are unknown parameters. The model (\ref{eq:first_model}) allows the hidden variables to interact with the observed covariates through the $\bC^T_j X_j \bZ$ terms for $j=1,\dots,p$ in a multiplicative manner. As a special case of (\ref{eq:first_model}), consider $p=1$ and take $X_1\in\{0,1\}$ to be a binary treatment variable. Then the conditional average treatment effect (CATE) can be shown to be $\bA^T+\bC^T_1 \bZ$, which depends on the value of the unobserved confounding variable $\bZ$. Thus, model (\ref{eq:first_model}) provides a parsimonious way to account for heterogeneous treatment effects.
Given $n$ i.i.d. samples $(\bY_i,\bX_i)_{i=1}^n$, our goal is to estimate the unknown matrix $\bA\in\RR^{p\times m}$, the association between $\bX$ and $\bY$ in the presence of hidden variables, which can also be interpreted as the direct effect of the treatment on the response in a causal inference framework \citep{bing2022adaptive}. However, in general, $\bA$ is not identifiable by only observing $(\bY, \bX)$, as $\bZ$ can be correlated with $\bX$ in an arbitrary way. To address this problem, we construct a parameter $\bTheta$ that approximates $\bA$ by teasing out the effect induced by the hidden variables. In particular, we propose a debiased estimator by projecting the response variables onto an appropriate singular vector space. Theoretically, we characterize the stochastic error and approximation error of our debiased estimator under both homoscedastic and the more general heteroscedastic and correlated noises.
The paper is organized as follows. We first give a detailed estimation algorithm in the homoscedastic setting in Section \ref{sec:method}. Section \ref{sec:theoretical} presents our main theoretical result concerning the convergence rate of our debiased estimator. We then extend the method to the heteroscedastic setting in Section \ref{sec:heteroscedastic} by proposing a modified version of our algorithm and giving an adjusted convergence rate. Finally, Sections \ref{sec:simulation} and \ref{sec:dataapplication} give simulation results and a real world data application to high-throughput microarray data.
\subsection{Notation}
For any set $\cS$, we write $|\cS|$ for its cardinality. For any vector $\bv\in \RR^d$, we define its $\ell_q$ norm as $\|\bv\|_q = (\sum_{j=1}^d |\bv_j|^q)^{1/q}$ for some real number $q\ge 0$. For any matrix $\bM \in \RR^{d_1 \times d_2}$, we denote $\|\bM\|_{op}$ and $\|\bM\|_{F}$ as the operator and Frobenius norm, respectively. For any square matrix $\bM$, we also write $\lambda_k(\bM)$ as its $k$th largest eigenvalue. For any two sequences $a_n$ and $b_n$, we write $a_n \lesssim b_n$ (or $a_n=\mathcal{O}(b_n)$) if there exists some positive constant $C$ such that $a_n \le Cb_n$ for any $n$. We let $a_n \asymp b_n$ stand for $a_n\lesssim b_n$ and $b_n \lesssim a_n$. Denote $a\vee b=\max (a,b)$ and $a\wedge b=\min(a,b)$.
\section{Debiased Estimator via SVD}\label{sec:method}
Recall that in model (\ref{eq:first_model}), $\bY \in \RR^m$ are the response variables, $\bX \in \RR^p$ are the observed covariates and $\bZ \in \RR^K$ are the hidden variables, where the number of hidden variables $K$ is unknown and is assumed to be much less than $m$. In addition, we assume the random noise $\bE$ is independent of $\bX$ and $\bZ$. In this section, we focus on the setting with homoscedastic errors, i.e., $\Cov(\bE) = \sigma^2 \bI_m$, where $\bI_m$ is an identity matrix. The extension to heterogeneous and correlated errors is studied in Section \ref{sec:heteroscedastic}.
To motivate the proposed method, write $\bW=\bZ-\bpsi^T \bX$, where $\bpsi := \{ \EE(\bX \bX^T)\}^{-1} \EE(\bX\bZ^T) \in \RR^{p \times K}$ is obtained by the $\mathcal{L}_2$ projection of $\bZ$ onto $\bX$. Note that defining $\bW$ in this way does not require $\bZ$ to depend linearly on $\bX$. Using this decomposition of $\bZ$, we may rewrite (\ref{eq:first_model}) as
\begin{align}\label{model}
\bY &= \bA^T \bX + (\bB^T \bW + \bB^T \bpsi^T \bX) + \sum_{j=1}^{p} \bC_j^T X_j (\bW + \bpsi^T \bX) + \bE \notag \\
&= \sum_{j=1}^{p} ( \bA_{(\cdot,j)}^T + \bB^T \bpsi_{(\cdot,j)}^T ) X_j + \sum_{j=1}^{p} \bC_j^T X_j \cdot \sum_{k=1}^{p} \bpsi_{(\cdot,k)}^T X_k + \bB^T \bW + \sum_{j=1}^{p} \bC_j^T X_j \bW + \bE \notag \\
&= \sum_{j=1}^{p} \bL_{1j}^T X_j + \sum_{1\leq j\leq k\leq p} \bL_{2,jk}^T X_j X_k + \bepsilon,
\end{align}
where $\bL_{1j}^T = \bA_{(\cdot,j)}^T + \bB^T \bpsi_{(\cdot,j)}^T\in\RR^m$, $\bL_{2,jk}^T = \bC_j^T \bpsi_{(\cdot,k)}^T+\bC_k^T \bpsi_{(\cdot,j)}^T\in \RR^m$ for $j\neq k$ and $\bL_{2,jk}^T = \bC_j^T \bpsi_{(\cdot,k)}^T\in \RR^m$ for $j=k$, and $\bepsilon = \bB^T \bW + \sum_{j=1}^{p} \bC_j^T X_j \bW + \bE$. Thus, given $n$ i.i.d copies of $(\bX, \bY)$, we can estimate the coefficient matrices $\bL_{1j}, \bL_{2,jk}$ from a linear regression with all the linear and pairwise interactions among $\bX$.
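As a concrete illustration of this first step, the following numpy sketch (with hypothetical small dimensions) builds the design matrix containing all linear and pairwise interaction terms of $\bX$ and recovers the stacked coefficients $(\bL_{1j}, \bL_{2,jk})$ by least squares; without noise the recovery is exact.

```python
import numpy as np

def interaction_design(X):
    """Stack the linear terms X_j and the pairwise products X_j X_k (j <= k)."""
    p = X.shape[1]
    cols = [X[:, j] for j in range(p)]
    cols += [X[:, j] * X[:, k] for j in range(p) for k in range(j, p)]
    return np.column_stack(cols)              # shape (n, p + p(p+1)/2)

rng = np.random.default_rng(0)
n, p, m = 200, 2, 5
X = rng.normal(size=(n, p))
D = interaction_design(X)
L_true = rng.normal(size=(D.shape[1], m))     # stacked (L_{1j}, L_{2,jk})
Y = D @ L_true                                # noiseless responses for illustration

L_hat, *_ = np.linalg.lstsq(D, Y, rcond=None)
print(np.allclose(L_hat, L_true))             # True: exact recovery without noise
```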
The second step is to estimate the covariance matrix of the residuals $\bepsilon$, which has the following structure
\begin{align}\label{covariance residual}
\EE(\bepsilon\bepsilon^T|\bX)&=\EE[(\bB^T \bW + \bE)(\bB^T \bW + \bE)^T|\bX] + \sum_{1\leq j,k\leq p} \EE(\bC_j^T \bW\bW^T\bC_k|\bX) X_jX_k \notag \\
&~~~+\sum_{j=1}^{p} \EE(\bC_j^T \bW(\bB^T \bW + \bE)^T|\bX) X_j + \sum_{j=1}^{p} \EE((\bB^T \bW + \bE)\bW^T\bC_j|\bX) X_j \notag \\
&=\bphi_{\bB} + \sum_{1\leq j\leq k\leq p} \bphi_{(\bC_j,\bC_k)} X_jX_k + \sum_{j=1}^{p} \bphi_{(\bB, \bC_j)} X_j
\end{align}
where we use the fact that $\bE$ is independent of $\bX, \bZ$, and $\bphi_{\bB} = \bB^T \Cov(\bW|\bX) \bB + \Cov(\bE)$, $\bphi_{(\bC_j,\bC_k)} = \bC_j^T \Cov(\bW|\bX) \bC_k+\bC_k^T \Cov(\bW|\bX) \bC_j$ for $j\neq k$ and $\bphi_{(\bC_j,\bC_k)} = \bC_j^T \Cov(\bW|\bX) \bC_k$ for $j=k$, and $\bphi_{(\bB,\bC_j)}=\bC_j^T \Cov(\bW|\bX) \bB+\bB^T\Cov(\bW|\bX)\bC_j$.
If $\bSigma_W:=\Cov(\bW|\bX)$ does not depend on $\bX$, we can regress the estimated covariance matrix of the residuals on $\bX$ to estimate $(\bphi_{\bB}, \bphi_{(\bC_j,\bC_k)}, \bphi_{(\bB,\bC_j)})$ for $1\leq j,k\leq p$. For notational simplicity, we write $\bphi_{(\bC_j,\bC_j)}$ as $\bphi_{\bC_j}$. Suppose $\bphi_{\bB}$ and $\bphi_{\bC_j}$ for $1\leq j\leq p$ are known (or well estimated via least squares estimation). Under some conditions detailed in Section \ref{sec:theoretical}, we can recover the right singular space of $\bB$ and $\bC_j$ for $1\leq j\leq p$. With some simple algebra, this gives us the projection matrix $\bP_{\bD}=\bD^T(\bD\bD^T)^{-1}\bD$, where $\bD^T := (\bB^T, \bC_1^T...,\bC_p^T)\in\RR^{m\times (p+1)K}$. Define $\bP_{\bD}^{\perp}=\bI_m-\bP_{\bD}$.
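The reason the homoscedastic case is tractable is worth making explicit: since $\bphi_{\bB} = \bB^T\bSigma_W\bB + \sigma^2\bI_m$, the $\sigma^2\bI_m$ term shifts every eigenvalue by $\sigma^2$ but leaves the eigenvectors unchanged, so the top-$K$ eigenvectors of $\bphi_{\bB}$ still span the right singular space of $\bB$. A small numerical illustration (synthetic $\bB$ with $\bSigma_W = \bI_K$; the dimensions are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
K, m, sigma2 = 3, 20, 0.7

B = rng.normal(size=(K, m))                # B in R^{K x m}, with Sigma_W = I_K
phi_B = B.T @ B + sigma2 * np.eye(m)       # phi_B = B^T Sigma_W B + sigma^2 I_m

w, V = np.linalg.eigh(phi_B)               # eigenvalues in ascending order
U_B = V[:, -K:]                            # top-K eigenvectors of phi_B

# Their span equals the row space of B: projecting onto the orthogonal
# complement annihilates B^T.
P_perp = np.eye(m) - U_B @ U_B.T
print(np.abs(P_perp @ B.T).max())          # numerically zero
```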
The key step in our method is to project the $m$-dimensional response $\bY$ to the orthogonal complement of the right singular space of $\bD$. Multiplying $\bP_{\bD}^{\perp}$ on both sides of equation (\ref{eq:first_model}), we get
\begin{align}
\bP_{\bD}^{\perp}\bY&=\bP_{\bD}^{\perp}\bA^T\bX+\bP_{\bD}^{\perp}\bB^T\bZ+ \sum_{j=1}^{p} \bP_{\bD}^{\perp}\bC^T_j X_j \bZ + \bP_{\bD}^{\perp}\bE \nonumber\\
&=\bP_{\bD}^{\perp}\bA^T\bX+ \bP_{\bD}^{\perp}\bE,\label{eq_projection}
\end{align}
where we use the fact that $\bP_{\bD}^{\perp}\bB^T=\bP_{\bD}^{\perp}\bC^T_j=0$ by the definition of the projection matrix. Thus, by leveraging the singular vector space of $\bB$ and $\bC_j$, we eliminate the effect of the hidden variables $\bZ$ from the linear regression, as shown in (\ref{eq_projection}). However, in this case, we can only recover the coefficient matrix $\bTheta^T := \bP_{\bD}^{\perp} \bA^T$, which differs from the parameter of interest $\bA^T$ in general. The difference between $\bTheta$ and $\bA$ leads to an approximation error in our theoretical analysis. In particular, under the conditions in Section \ref{sec:theoretical}, we show that this approximation error is asymptotically negligible.
For clarity, we summarize our debiased estimation procedure in Algorithm \ref{algo:homoscedastic}. Note that the algorithm requires the number of hidden variables $K$ as the input. The discussion on selection of $K$ is deferred to Section \ref{sec:k-selection}.
\begin{algorithm}
\caption{Debiased estimator with homoscedastic noise $\Cov(\bE) = \sigma^2 \bI_m$}\label{algo:homoscedastic}
\textbf{Require: } $n$ i.i.d data $(\bY_i,\bX_i)_{i=1}^n$, rank $K$;
1. Solve $(\hat{\bL}_{1j}, \hat{\bL}_{2,jk})_{1\leq j,k\leq p} = \arg \min \sum_{i=1}^{n}\| \bY_i - \sum_{j=1}^{p} \bL_{1j}^T X_{ji} - \sum_{1\leq j\leq k\leq p} \bL_{2,jk}^T X_{ji}X_{ki}\|_2^2$.
2. Obtain $\hat\Sigma_{\bepsilon_i} = \hat{\bepsilon}_i \hat{\bepsilon}_i^T \in \RR^{m \times m}$ where $\hat{\bepsilon}_i = \bY_i - \sum_{j=1}^{p} \hat\bL_{1j}^T X_{ji} - \sum_{1\leq j\leq k\leq p} \hat\bL_{2,jk}^T X_{ji}X_{ki}$.
3. Solve \begin{align*}
(\hat\bphi_{\bB}, \hat\bphi_{(\bC_j,\bC_k)}, \hat\bphi_{(\bB,\bC_j)})_{1\leq j,k\leq p} = \arg \min \sum_{i=1}^{n} \|\hat\Sigma_{\bepsilon_i} - \bphi_{\bB} - \sum_{1\leq j\leq k\leq p} \bphi_{(\bC_j,\bC_k)} X_{ji}X_{ki} - \sum_{j=1}^{p} \bphi_{(\bB, \bC_j)} X_{ji}\|_F^2.
\end{align*}
4. Compute $\hat \bU_{\bB}\in\RR^{m\times K}$ and $\hat \bU_{\bC_j}\in\RR^{m\times K}$, the first $K$ eigenvectors of $\hat{\bphi}_{\bB}$ and $\hat{\bphi}_{\bC_j}:=\hat\bphi_{(\bC_j,\bC_j)}$. Via SVD, obtain $\hat{\bU}_{\bD} \in \RR^{m \times (p+1) K}$, the first $(p+1) K$ left singular vectors of $(\hat{\bU}_{\bB},\hat{\bU}_{\bC_1},...,\hat{\bU}_{\bC_p})$. Then estimate the projection matrix $\hat{\bP}_{\bD} = \hat{\bU}_{\bD} \hat{\bU}_{\bD}^T$, and $\hat{\bP}_{\bD}^{\perp}=\bI_m-\hat{\bP}_{\bD}$.
5. Obtain the debiased estimator $\hat{\bTheta}$ by solving \begin{align}\label{eq:step 5}
\hat{\bTheta} = \arg \min_{\Theta^T \in \RR^{m\times p}} \sum_{i=1}^{n} \| \hat{\bP}_{\bD}^{\perp} \bY_{i} - \bTheta^T \bX_i\|_2^2.
\end{align}
\end{algorithm}
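Steps 4-5 of the algorithm can be sketched as follows. This is a minimal numpy implementation in which the matrices $\bphi_{\bB}, \bphi_{\bC_j}$ are taken as known; in practice they are replaced by the step-3 estimates, and the function names are ours.

```python
import numpy as np

def projection_complement(phi_B, phi_Cs, K):
    """Step 4: build P_D^perp from phi_B and the list of phi_{C_j}'s."""
    def top_k(S, k):
        _, V = np.linalg.eigh(S)                     # eigenvalues ascending
        return V[:, -k:]                             # top-k eigenvectors
    stacked = np.hstack([top_k(phi_B, K)] + [top_k(S, K) for S in phi_Cs])
    U_D, _, _ = np.linalg.svd(stacked, full_matrices=False)  # (p+1)K left s.v.
    return np.eye(stacked.shape[0]) - U_D @ U_D.T

def debiased_fit(X, Y, P_perp):
    """Step 5: regress the projected responses Y P_D^perp on X."""
    Theta, *_ = np.linalg.lstsq(X, Y @ P_perp, rcond=None)
    return Theta                                     # p x m estimate of Theta

# Sanity check with known B, C_1 and Sigma_W = I_K (K=2, p=1, m=15): the
# projection must annihilate the rows of B and C_1.
rng = np.random.default_rng(2)
K, m = 2, 15
B, C1 = rng.normal(size=(K, m)), rng.normal(size=(K, m))
P_perp = projection_complement(B.T @ B + np.eye(m), [C1.T @ C1], K)
print(np.abs(P_perp @ B.T).max(), np.abs(P_perp @ C1.T).max())  # both ~ 0
```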
\section{Theoretical Results}\label{sec:theoretical}
In this section, we establish the rate of convergence of the proposed debiased estimator. In particular, we focus on the asymptotic regime with $p$ fixed and $K,m,n\rightarrow\infty$.
\begin{assumption}\label{ass_1}
Assume that $\bSigma_W:=\Cov(\bW|\bX)$ does not depend on $\bX$, the matrix $\bSigma_W$ has full rank and the noise is homoscedastic, i.e., $\Cov(\bE) = \sigma^2 \bI_m$.
\end{assumption}
\begin{assumption}\label{ass_2}
Denote $\bar \bX=(X_1,...,X_p, X_1^2, X_1X_2,...,X_p^2)^T\in\RR^{p(p+1)/2}$. Assume that the matrix $\EE(\bar \bX\bar \bX^T)$ is positive definite. We further assume $\max_{1\leq k\leq m}\EE(\bar \bX^T\bar \bX\epsilon^2_k)$ and $\max_{1\leq j,k\leq m}\EE(\bar \bX^T\bar \bX\phi^2_{jk})$ are bounded by a constant, where $\epsilon_k$ is the $k$th entry of $\bepsilon = \bB^T \bW + \sum_{j=1}^{p} \bC_j^T X_j \bW + \bE$, and $\phi_{jk}$ is the $(j,k)$th entry of $\bphi=\bepsilon\bepsilon^T-\bphi_{\bB} - \sum_{1\leq j\leq k\leq p} \bphi_{(\bC_j,\bC_k)} X_jX_k - \sum_{j=1}^{p} \bphi_{(\bB, \bC_j)} X_j$. Finally, for $j=1,..,p$, $\EE(X_j^6)$ is finite.
\end{assumption}
\begin{assumption}\label{ass_3}
Let $\lambda_B$ and $\lambda_{C_j}$ be the $K$th largest eigenvalues of $\bB^T \bSigma_W \bB$ and $\bC_j^T \bSigma_W \bC_j$, respectively. Assume $\lambda_B \asymp \lambda_{C_j} \asymp m$ for $j=1,..,p$. The matrix $\bD^T := (\bB^T, \bC_1^T...,\bC_p^T)\in\RR^{m\times (p+1)K}$ has rank $(p+1)K$, and the $(p+1)K$th largest eigenvalue of $\bD^T \bD$ denoted by $\lambda_D$ satisfies $\lambda_D\asymp m$.
\end{assumption}
Assumption \ref{ass_1} is already required in Section \ref{sec:method} in order to develop the proposed method. Assumption \ref{ass_2} is a standard moment condition that guarantees the desired rate for the least square type estimators in Algorithm \ref{algo:homoscedastic}. Finally, Assumption \ref{ass_3} is known as the pervasiveness assumption in the factor model literature for identification and consistent estimation of the right singular space of $\bB$ and $\bC_j$; see \cite{bai2003inferential,fan2013large}. In particular, $\lambda_B \asymp m$ holds if the smallest and largest eigenvalues of $\bSigma_W$ are bounded away from 0 and infinity by some constants, and the columns of $\bB$ are i.i.d. copies of a $K$-dimensional sub-Gaussian random vector whose covariance matrix has bounded eigenvalues. In this assumption, we also require that $\bD^T$ has full column rank (and $(p+1)K\leq m$), which is used to construct the estimated singular vectors $\hat{\bU}_{\bD}$ in Algorithm \ref{algo:homoscedastic}.
We are now ready to present the main result concerning the convergence rate of our estimator $\hat{\bTheta}$ to the true coefficient matrix $\bA$.
\begin{theorem}\label{thm:thm 1}
Under Assumptions \ref{ass_1}-\ref{ass_3}, the estimator $\hat{\bTheta}$ from Algorithm \ref{algo:homoscedastic} satisfies:
\begin{equation}
\frac{1}{m}\| \hat{\bTheta} - \bA \|_F^2 = \mathcal{O}_p\Big(\frac{1}{n}+\frac{K}{m}\eta\Big),
\end{equation}
where $\eta=\frac{1}{Km}\|\bD \bA^T\|_F^2$.
\end{theorem}
Note that the term $\frac{1}{m}\| \hat{\bTheta} - \bA \|_F^2$ can be interpreted as the squared error per response. This theorem shows that the error can be decomposed into two parts: $\mathcal{O}_p(\frac{1}{n})$, the typical parametric rate for estimating a finite-dimensional parameter, and $\mathcal{O}_p(\frac{K}{m}\eta)$, the approximation error due to the difference between $\bTheta$ and $\bA$. To further examine the approximation error, we can show that $\|\bD \bA^T\|_F^2=\mathcal{O}_p(Km)$ when each row of $\bA$ and $\bD$ is independently generated from $N(0, \bI_m)$ (or more generally $N(0, \bSigma)$ where $\bSigma$ has a bounded operator norm). This implies $\eta=\mathcal{O}_p(1)$. In addition, if $m\gg nK$, the approximation error is asymptotically negligible compared to the parametric rate, leading to $\frac{1}{m}\| \hat{\bTheta} - \bA \|_F^2 = \mathcal{O}_p(\frac{1}{n})$, which is the best possible rate for estimating a finite-dimensional parameter in regular parametric models. Even if $m\gg nK$ does not hold, the estimator is still consistent in terms of the squared error per response, as long as $K/m\rightarrow 0$.
Finally, we note that when $\eta=\mathcal{O}_p(1)$, the error bound decreases with $m$, which implies that by incorporating more types of responses, the treatment effect estimation can be made more accurate. This phenomenon can be viewed as a blessing of dimensionality in our problem.
\section{Extension to Heteroscedastic and Correlated Noise}\label{sec:heteroscedastic}
In this section, we generalize our method to the setting with heteroscedastic and correlated errors, i.e., $\Cov(\bE)$ can be any positive definite matrix. Recall that $\bphi_{\bB} = \bB^T \Cov(\bW|\bX) \bB + \Cov(\bE)$. In view of Algorithm \ref{algo:homoscedastic}, when the noise is heteroscedastic and correlated, the main challenge is that the eigenspace of $\bphi_{\bB}$ corresponding to the first $K$ eigenvalues no longer coincides with the right singular space of $\bB$. Because of this, we may no longer be able to identify $\bP_{\bD}$ via the eigenspaces of the coefficient matrices in step 4 of Algorithm \ref{algo:homoscedastic}. To address this problem, we turn to a recently developed procedure called HeteroPCA \citep{zhang2022heteroskedastic}, which allows us to recover the desired right singular space of $\bB$ by iteratively imputing the diagonal entries of $\bphi_{\bB}$ via the diagonals of its low-rank approximations. For completeness, we restate their procedure in Algorithm \ref{algo:heteroPCA}. The estimation of $\bA$ remains nearly identical to the homoscedastic setting, except that we now estimate $\tilde{\bU}_{\bB}$ by performing HeteroPCA rather than PCA on the coefficient matrix $\hat{\bphi}_{\bB}$ obtained from step 3 in Algorithm \ref{algo:homoscedastic}. The full procedure is detailed in Algorithm \ref{algo:heteroscedastic}. This algorithm requires specifying the number of iterations $T$ in the HeteroPCA algorithm. Usually, a small $T$ such as $T = 5$ yields satisfactory results in our simulations.
To establish the rate of convergence of the debiased estimator in Algorithm \ref{algo:heteroscedastic}, we further impose the following condition.
\begin{assumption}\label{ass_4}
Let $\bU \in \RR^{m \times K}$ be the first $K$ left singular vectors of $\bB^T$, and let $\lambda_1$ and $\lambda_B$ be the first and $K$th largest eigenvalues of $\bB^T \bSigma_W \bB$, respectively. Let $\{e_j\}_{j=1}^{m}$ denote the canonical basis vectors of $\RR^m$. Then there exists a constant $C_U > 0$ such that
\begin{align*}
\frac{\lambda_1}{\lambda_B} \max_{1\leq j\leq m} \|e_j^T \bU\|_2^2 \leq C_U.
\end{align*}
\end{assumption}
This condition is a variation of the incoherence condition in the matrix completion literature \citep{candes2010power}, which is mainly used to recover the singular vector space $\bU$ in the HeteroPCA algorithm. This assumption controls how closely the singular vector space $\bU$ may align with the canonical basis vectors. Intuitively, if $\bU$ becomes more aligned with the canonical basis vectors, it is more difficult to separate $\bB^T \Cov(\bW|\bX) \bB$ and $\Cov(\bE)$ from $\bphi_{\bB}$, even if $\Cov(\bE)$ is just a diagonal matrix.
We note that, unlike the original HeteroPCA algorithm \citep{zhang2022heteroskedastic} which requires the error matrix $\Cov(\bE)$ to be a diagonal matrix, we also allow $\Cov(\bE)$ to have non-zero off-diagonal entries (i.e., correlated errors). For any matrix $\bM$, let $D(\bM)$ be the matrix with diagonal entries equal to the diagonal entries of $\bM$, but with all off-diagonal entries equal to 0. Let $\Gamma(\bM) = \bM - D(\bM)$. The following theorem shows the rate of convergence of the debiased estimator in Algorithm \ref{algo:heteroscedastic}.
\begin{theorem}\label{thm:thm 2}
Assume that $\bSigma_W=\Cov(\bW|\bX)$ does not depend on $\bX$, the matrix $\bSigma_W$ has full rank, and Assumptions \ref{ass_2}-\ref{ass_4} hold. If $\|\Gamma(\Cov(\bE))\|_F = \mathcal{O}(\lambda_B\sqrt{K})$ and $T \gg (1 \vee \log \frac{\sqrt{K} \lambda_B}{\|\Gamma(\Cov(\bE))\|_F})$, then the debiased estimator $\hat{\bTheta}$ from Algorithm \ref{algo:heteroscedastic} satisfies:
\begin{equation}\label{eq_rate_2}
\frac{1}{m}\| \hat{\bTheta} - \bA \|_F^2 = \mathcal{O}_p\Big(\frac{1}{n}+\frac{K}{m}\eta+\frac{\bar \eta}{m^2}\Big),
\end{equation}
where $\eta=\frac{1}{Km}\|\bD \bA^T\|_F^2$ and $\bar\eta=\|\Gamma(\Cov(\bE))\|_F^2$.
\end{theorem}
Compared to the results in Theorem \ref{thm:thm 1}, the error bound with heteroscedastic and correlated noise includes an additional term $\mathcal{O}_p(\frac{\bar \eta}{m^2})$, which comes from the correlation among the noise vector $\bE$. In particular, if the correlation among $\bE$ is relatively weak, so that $\bar\eta\ll m^2/n$, and $\eta\ll m/(nK)$ holds as discussed after Theorem \ref{thm:thm 1}, then the squared error per response attains the parametric rate $\mathcal{O}_p(\frac{1}{n})$.
Finally, we comment that if one applies the proposed Algorithm \ref{algo:homoscedastic} tailored for the homoscedastic noise to deal with the model with heteroscedastic and correlated noise in this section, the rate of the estimator would be slower than (\ref{eq_rate_2}), as it contains an additional error related to the degree of heteroscedasticity of the noise variance. We refer to \cite{bing2022adaptive} for a similar result when the model has no interaction terms.
\begin{algorithm}
\caption{HeteroPCA($\hat{\Sigma}$, $K$, $T$)} \label{algo:heteroPCA}
\textbf{Require: } Matrix $\hat{\Sigma} \in \RR^{m \times m}$, rank $K$, number of iterations $T$;
1. Set $N_{ij}^{(0)} = \hat{\Sigma}_{ij}$ for all $i \neq j$ and $N_{ii}^{(0)} = 0$ for all $i$.
2. For $t=0,1,...,T$:
\hspace{5mm} Compute SVD of $N^{(t)} = \sum_{i=1}^{m} \lambda_i^{(t)} u_i^{(t)} (v_i^{(t)})^T$, where $\lambda_1^{(t)} \geq \lambda_2^{(t)} \geq ... \geq 0$.
\hspace{5mm} Let $\tilde{N}^{(t)} = \sum_{i=1}^{K} \lambda_i^{(t)} u_i^{(t)} (v_i^{(t)})^T$.
\hspace{5mm} Set $N_{ij}^{(t+1)} = \hat{\Sigma}_{ij}$ for all $i \neq j$ and $N_{ii}^{(t+1)} = \tilde{N}_{ii}^{(t)}$.
3. Return $U^{(T)} = (u_1^{(T)}, ..., u_K^{(T)})$.
\end{algorithm}
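A compact implementation of this routine might look as follows (a sketch of the pseudocode above; `hetero_pca` is our name, and the illustration uses a rank-$K$ signal plus a purely diagonal heteroscedastic part):

```python
import numpy as np

def hetero_pca(Sigma_hat, K, T):
    """HeteroPCA: keep the off-diagonal of Sigma_hat fixed and iteratively
    impute the diagonal by the diagonal of the best rank-K approximation."""
    off = Sigma_hat - np.diag(np.diag(Sigma_hat))
    N = off.copy()                                   # step 1: diagonal zeroed
    for _ in range(T):
        U, s, Vt = np.linalg.svd(N)
        N_tilde = U[:, :K] * s[:K] @ Vt[:K, :]       # best rank-K approximation
        N = off + np.diag(np.diag(N_tilde))          # impute the diagonal only
    U, _, _ = np.linalg.svd(N)
    return U[:, :K]

# Illustration: rank-K signal (playing the role of B^T Sigma_W B) plus a
# purely diagonal heteroscedastic part.
rng = np.random.default_rng(3)
K, m = 2, 30
B = rng.normal(size=(K, m))
signal = B.T @ B
Sigma_hat = signal + np.diag(rng.uniform(0.5, 2.0, size=m))

U_hat = hetero_pca(Sigma_hat, K, T=100)
U_sig, *_ = np.linalg.svd(signal)
print(np.linalg.norm(U_hat @ U_hat.T - U_sig[:, :K] @ U_sig[:, :K].T))  # ~ 0
```

Because the off-diagonal entries of `Sigma_hat` coincide with those of the rank-$K$ signal, the signal itself is a fixed point of the iteration, and the recovered subspace matches the signal subspace to high accuracy.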
\begin{algorithm}
\caption{Debiased estimator with heteroscedastic and correlated noise}\label{algo:heteroscedastic}
\textbf{Require: } $n$ i.i.d data $(\bY_i,\bX_i)_{i=1}^n$, rank $K$, number of iterations $T$;
1-3. Same as steps 1-3 of Algorithm \ref{algo:homoscedastic}.
4. Compute $\tilde{\bU}_{\bB}$ from HeteroPCA($\hat{\bphi}_{\bB}$, $K$, $T$) in Algorithm \ref{algo:heteroPCA}. Then compute $\hat \bU_{\bC_j}$, the first $K$ eigenvectors of, $\hat{\bphi}_{\bC_j}:=\hat\bphi_{(\bC_j,\bC_j)}$. Via SVD, obtain $\tilde{\bU}_{\bD} \in \RR^{m \times (p+1) K}$, the first $(p+1) K$ left singular vectors of $(\tilde{\bU}_{\bB},\hat{\bU}_{\bC_1},...,\hat{\bU}_{\bC_p})$. Then estimate the projection matrix $\tilde{\bP}_{\bD} = \tilde{\bU}_{\bD} \tilde{\bU}_{\bD}^T$.
5. Obtain $\hat{\bTheta}$ by solving (\ref{eq:step 5}) with $\tilde{\bP}_{\bD}$ in lieu of $\hat{\bP}_{\bD}$.
\end{algorithm}
\section{Simulation Results}\label{sec:simulation}
We compare our methods outlined in Algorithms \ref{algo:homoscedastic} and \ref{algo:heteroscedastic} with the following list of competitors:
\begin{itemize}
\item Oracle: the estimator obtained from (\ref{eq:step 5}) by using the true values of $\bB$ and the $\bC_j$'s to construct $\bP_{\bD}$. This estimator is not practical and only serves as a benchmark.
\item Non-interaction: the estimator obtained assuming there is no interaction between $\bX$ and $\bZ$ in model (\ref{eq:first_model}). The estimation procedures are adapted from \cite{bing2022adaptive}. We similarly consider two versions of the algorithms with the homoscedastic and heteroscedastic noise.
\item OLS: the ordinary least squares estimator without accounting for the presence of hidden variables.
\end{itemize}
To simplify the presentation, we mainly focus on the non-interaction method for comparison against our method, since the recently proposed methods in surrogate variable analysis (SVA) that seek to account for hidden variables are similar to the non-interaction method presented above.
\subsection{Data generating mechanism}\label{subsec: data generating mechanism}
The design matrix is generated as $\bX_i \sim \mathcal{N}_p(0, \bSigma)$ independently for all $1 \leq i \leq n$, with $\Sigma_{jk} = (-1)^{j+k} (0.5)^{|j-k|}$ for all $1\leq j,k \leq p$. Let $\bZ_i=\bpsi^T\bX_i+\bW_i$ with $\bpsi_{jk} \sim \eta \cdot \mathcal{N}(0.5,0.1)$ independently for all $1\leq j \leq p$ and $1 \leq k \leq K$, where $\eta$ controls the level of dependence between the observed and hidden variables. Set $\bA_{\ell j}^T \sim \mathcal{N}(0.5,0.1)$, and $\bB_{\ell k}^T$ and $\bC_{j, \ell k}^T$ $\sim \mathcal{N}(0.1,1)$ independently for all $1 \leq \ell \leq m$, $1 \leq k \leq K$, and $1\leq j \leq p$. The stochastic error $\bW_i$ is sampled as $\bW_{ik} \sim \mathcal{N}(0,1)$ independently for all $1 \leq i \leq n$ and $1 \leq k \leq K$. In the homoscedastic setting, $\bE_{i\ell} \sim \mathcal{N}(0,1)$ independently for all $1 \leq i \leq n$ and $1 \leq \ell \leq m$; in the heteroscedastic setting, $\bE_{i \ell} \sim \mathcal{N}(0,\tau_{\ell}^2)$, where $\tau_{\ell}^2 = m v_{\ell}^{\alpha} / \sum_{\ell} v_{\ell}^{\alpha} \cdot (p+1)$ and $v_{\ell} \sim \textrm{Unif}\,[0,1]$ independently for $1 \leq \ell \leq m$. This specification of $\tau_{\ell}$ guarantees that the average level of variability scales with the number of hidden interaction terms (i.e., $m^{-1}\sum_{\ell} \tau_{\ell}^2 = p+1$), with larger values of $\alpha$ corresponding to a higher degree of heteroscedasticity.
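This generator can be sketched in a few lines of numpy (a hypothetical helper with small default dimensions; here $\mathcal{N}(\mu, v)$ is parameterized by its variance $v$, matching the specification above):

```python
import numpy as np

def simulate(n, p, m, K, eta, alpha=None, seed=0):
    """Draw (X, Y) from model (1) under the mechanism described above.
    `simulate` is our own helper name, not part of the paper's code."""
    rng = np.random.default_rng(seed)
    idx = np.arange(p)
    Sigma = ((-1.0) ** (idx[:, None] + idx[None, :])
             * 0.5 ** np.abs(idx[:, None] - idx[None, :]))
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    psi = eta * rng.normal(0.5, np.sqrt(0.1), size=(p, K))
    W = rng.normal(size=(n, K))
    Z = X @ psi + W                                    # hidden variables
    A = rng.normal(0.5, np.sqrt(0.1), size=(p, m))
    B = rng.normal(0.1, 1.0, size=(K, m))
    C = rng.normal(0.1, 1.0, size=(p, K, m))
    if alpha is None:                                  # homoscedastic noise
        tau2 = np.ones(m)
    else:                                              # heteroscedastic noise
        v = rng.uniform(size=m)
        tau2 = m * v**alpha / np.sum(v**alpha) * (p + 1)
    E = rng.normal(size=(n, m)) * np.sqrt(tau2)
    interaction = np.einsum('nj,jkm,nk->nm', X, C, Z)  # sum_j X_j C_j^T Z
    Y = X @ A + Z @ B + interaction + E
    return X, Y, A

X, Y, A = simulate(n=100, p=2, m=50, K=3, eta=0.5)
print(X.shape, Y.shape, A.shape)      # (100, 2) (100, 50) (2, 50)
```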
The number of hidden variables $K$ is fixed at 3 and is assumed to be known in all methods except in the experiments concerning selection of $K$ in Section \ref{sec:k-selection}.
\subsection{Experiment Setup}
We fix $p=2$ and consider two separate settings: (i) small $m$ and large $n$ ($m=25$, $n=1000$) and (ii) large $m$ and small $n$ ($m=500$, $n=100$). For the homoscedastic case, we iterate over $\eta \in \{0.1,0.3,...,1.1,1.3\}$; for the heteroscedastic case, we iterate over $\alpha \in \{0,3,6,...,12,15\}$ and fix $\eta = 0.5$. Within each parameter setting, we independently generate 100 datasets according to Section \ref{subsec: data generating mechanism} and report the average Sum Squared Error (SSE) in log scale, $\log (\frac{1}{m} \|\hat{\bTheta} - \bA\|_F^2)$,
and the average Prediction Mean Squared Error (PMSE) in log scale $\log (\frac{1}{n^*m} \|\bY^* - \hat{\bTheta}^T \bX^* \|_F^2)$ on a newly generated test set $(\bX^*, \bY^*)$ with $n^* = 5000$ data points.
\subsection{Results and Discussion}
\subsubsection{SSE}\label{sec:sse}
\begin{figure}[h]
\caption{SSE $\log (\frac{1}{m} \|\hat\bTheta - \bA\|_F^2)$ for homoscedastic (top row) and heteroscedastic (bottom row) settings.}
\centering
\includegraphics[width=0.90\textwidth]{plots/homoscedastic/homoscedastic_SSE.png}
\includegraphics[width=0.90\textwidth]{plots/heteroscedastic/heteroscedastic_SSE.png}
\label{fig:SSE}
\end{figure}
\textit{Homoscedastic setting.} As seen in the top row of Figure \ref{fig:SSE}, our method outperforms the other methods across both settings, whether or not we use HeteroPCA. In particular, it outperforms the methods that do not account for the interactions between the treatment and the hidden variables, which shows the importance of accounting for the effect of interactions in this model. Under setting 1, where we have a sufficiently large amount of data relative to the number of responses ($n\gg m$), the proposed Algorithm \ref{algo:homoscedastic}, tailored for homoscedastic noise, outperforms our Algorithm \ref{algo:heteroscedastic}. Under setting 2, when $m$ is sufficiently large, the two algorithms perform similarly.
\textit{Heteroscedastic setting.} As illustrated in the bottom row of Figure \ref{fig:SSE}, our method once again outperforms the other methods across both settings. Moreover, for setting 1, employing Algorithm \ref{algo:heteroscedastic} with HeteroPCA yields substantially better results than using Algorithm \ref{algo:homoscedastic} with PCA. This suggests that our algorithm tailored for the heteroscedastic setting is most preferable for large $n$ and small $m$ when the underlying noise has a high degree of heteroscedasticity. This is consistent with the discussion following Theorem \ref{thm:thm 2}. As in the homoscedastic setting, in setting 2, when $m$ is large enough, the two algorithms yield very similar results.
\subsubsection{PMSE}\label{sec:testmses}
\begin{figure}[h]
\caption{PMSE $\log (\frac{1}{n^*m} \|\bY^* - \hat{\bTheta}^T \bX^* \|_F^2)$ for homoscedastic settings.}
\centering
\includegraphics[width=0.90\textwidth]{plots/homoscedastic/homoscedastic_PMSE.png}
\label{fig:PMSE}
\end{figure}
The PMSE depicted in Figure \ref{fig:PMSE} exhibits more interesting behavior than the SSE. The prediction performance of our method depends heavily on the relative magnitudes of $m$ and $n$. When $m$ is small and $n$ is large, the prediction error (on the test set) of our method is similar to that of the other methods. One possible explanation is that, while our method can remove the bias due to the hidden variables, this effect is dominated by the variance of the noise $\Var(\bE)$ in the test prediction error, so that all the methods, including OLS, yield similar results. However, in the large $m$ and small $n$ regime, our interaction-based methods lead to much smaller test error, as correcting for the bias due to the hidden variables becomes more imperative. While we mainly focus on treatment effect estimation rather than prediction in this paper, our numerical results also demonstrate the favorable performance of our method in prediction when $m$ is relatively large. The PMSE under the heteroscedastic setting shows a similar pattern and is omitted.
\subsubsection{Selection of $K$}\label{sec:k-selection}
In practice, the number of hidden variables $K$ is often unknown. In this section, we propose a practical approach to estimate $K$ via a variant of the ratio test proposed by \cite{bing2022adaptive}. In particular, we select $K$ as the most common index of the largest eigenvalue gap across the coefficient matrices $\hat{\bphi}_B$ and $\hat{\bphi}_{C_j}$ obtained from step 3 of our algorithms. More formally, define
\begin{align*}
\hat{K} = \arg \max_{i\in\{1,...,K^*\}} | S_i |, ~~\textrm{where}~~S_i=\Big\{ j \in \{1,...,p+1\} : \frac{\hat{\lambda}_{j,i}}{\hat{\lambda}_{j,i+1}} \geq \frac{\hat{\lambda}_{j,k}}{\hat{\lambda}_{j,k+1}} ~~\forall\, k \neq i\Big\},
\end{align*}
where $\hat{\lambda}_{1,i}$ and $\hat{\lambda}_{j,i}$, $1\leq i \leq m$, are the nonincreasing ordered eigenvalues of $\hat{\bphi}_B$ and $\hat{\bphi}_{C_j}$, respectively, and $K^*$ is an upper bound on the number of hidden variables, often taken as $\lfloor (n \wedge m)/2 \rfloor$. In other words, the set $S_i$ collects all indices $j$ such that the eigenvalue ratio $\frac{\hat{\lambda}_{j,i}}{\hat{\lambda}_{j,i+1}}$ is maximized at $i$, and $\hat{K}$ is the index whose set $S_i$ has the largest cardinality. This approach can be viewed as a generalization of the elbow method in clustering and PCA.
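This ratio test can be sketched as follows (hypothetical helper; `eigvals_list` holds the nonincreasing eigenvalues of $\hat{\bphi}_B$ and each $\hat{\bphi}_{C_j}$):

```python
import numpy as np
from collections import Counter

def select_K(eigvals_list, K_star):
    # For each coefficient matrix, vote for the index i that maximizes the
    # consecutive eigenvalue ratio lambda_i / lambda_{i+1}; return the most
    # common vote across the p+1 matrices.
    votes = []
    for lam in eigvals_list:
        ratios = lam[:K_star] / lam[1:K_star + 1]
        votes.append(int(np.argmax(ratios)) + 1)   # 1-indexed
    return Counter(votes).most_common(1)[0][0]
```

For a spectrum with a clear gap after the third eigenvalue, every matrix votes for $i=3$ and the estimate is $\hat{K}=3$.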
To evaluate the performance of this approach, we consider the following experiments. Recall that $\lambda_{K}(\bB^T \bSigma_W \bB)$ is the $K$th largest eigenvalue of $\bB^T \bSigma_W \bB$. We define the signal-to-noise ratio (SNR) as
\begin{align*}
\frac{1}{m} \lambda_{K}(\bB^T \bSigma_W \bB),
\end{align*}
which quantifies how well the $K$th largest eigenvalue is separated from 0. In our experiments, we generate $\bB_{\ell k}^T \sim \mathcal{N}(0.1,1)$ independently for all $1 \leq \ell \leq m$ and $1 \leq k \leq K$, and vary the SNR by setting $\bSigma_W = \sigma_W^2 \bI_K$ for $\sigma_W \in \{0.1,0.3,...,1.3,1.5\}$ while fixing $\alpha = 0$ and $\eta = 0.5$. For comparison, we also consider the method for selecting $K$ used in the non-interaction approach \citep{bing2022adaptive}. We again consider two settings: (i) small $m$ and large $n$ ($m=25, n=1000$) and (ii) large $m$ and large $n$ ($m=500, n=1000$).
From Figures \ref{fig:k_selection1} and \ref{fig:k_selection2}, we see that the non-interaction-based method consistently overestimates the number of hidden variables even when the SNR is large. On the other hand, our method consistently selects the correct value of $K$ (i.e., $K=3$) for large enough SNR.
\begin{figure}[h]
\caption{$\hat{K}$ for non-interaction and interaction based methods when $n \gg m$ ($m=25$, $n=1000$). The green dashed line indicates the true number of hidden features $K=3$.}
\centering
\includegraphics[width=0.90\textwidth]{plots/k_selection/k_selection_noninteract_m25_n1000.png}
\includegraphics[width=0.90\textwidth]{plots/k_selection/k_selection_interact_m25_n1000.png}
\label{fig:k_selection1}
\end{figure}
\begin{figure}[h]
\caption{$\hat{K}$ for non-interaction and interaction based methods when both $m$ and $n$ are large ($m=500$, $n=1000$). The green dashed line indicates the true number of hidden features $K=3$.}
\centering
\includegraphics[width=0.90\textwidth]{plots/k_selection/k_selection_noninteract_m500_n1000.png}
\includegraphics[width=0.90\textwidth]{plots/k_selection/k_selection_interact_m500_n1000.png}
\label{fig:k_selection2}
\end{figure}
\newpage
\section{Real Data}\label{sec:dataapplication}
We consider a microarray dataset gathered by \cite{vawter2004gender}, comprising postmortem microarray samples of 5 men and 5 women from 3 separate regions of the brain. This dataset was originally curated to answer questions surrounding gender differences in the prevalence of neuropsychiatric diseases and has since been used to test various statistical methods that seek to account for hidden variables (see \cite{gagnon2012using} and \cite{wang2017confounder}).
Each individual was sampled by 3 different universities, resulting in $10 \times 3 \times 3 = 90$ chips, of which 6 were missing, leaving a total of $n=84$ samples. Of these $84$ samples, $3$ were replicated, yielding $87$ data points; in our analysis, we omit the replicated samples. The sequences were preprocessed via Robust Multichip Average \citep{irizarry2003exploration} and then normalized feature-wise to yield readings of length $m=12600$. We take gender as the primary variable and the brain pH level at the time of sampling as a secondary covariate, for a total of $p=2$ observed features.
In our data analysis, since the true parameter value is unknown, we compute the test prediction MSE (defined in Section \ref{sec:testmses}) via 10-fold cross-validation for the same 5 methods as in the simulation study (excluding the oracle, which is unavailable). To select the number of hidden variables, we choose $\hat{K}$ as in Section \ref{sec:k-selection}. As seen in Table \ref{tab:real_data}, our interaction-based method for the homoscedastic setting (i.e., Algorithm \ref{algo:homoscedastic}) achieves the lowest cross-validated prediction error. In Section \ref{sec:testmses}, our simulations suggested that one should not ignore the interaction between observed and unobserved covariates when predicting the response $\bY$ for data with large $m$ and small $n$. Our results here corroborate this claim.
\begin{table}
\centering
\begin{tabular}{|c|c|}
\hline
Method & Cross-validated PMSE \\
\hline
OLS & 1.0602 \\
\hline
Non-interaction (homo) & 1.0493 \\
Non-interaction (hetero) & 1.0499 \\
\hline
Interaction (homo) & \textbf{1.0252} \\
Interaction (hetero) & 1.0308 \\
\hline
\end{tabular}
\caption{Cross-validated PMSE for various methods on the real data.}
\label{tab:real_data}
\end{table}
\section*{Acknowledgment}
Ning is supported in part by National Science Foundation (NSF) CAREER award DMS-1941945
and NSF award DMS-1854637.
\bibliographystyle{abbrvnat}
\section{Introduction}
Pedestrian detection is an important task in computer vision and a fundamental component of many applications, such as video surveillance and autonomous driving. Benefiting from the development of deep learning in recent years, especially the proposal of Convolutional Neural Network~(CNN) based general object detection methods~\cite{girshick2014rich,ren2015faster,he2017mask,yolov3,liu2016ssd,dai2016rfcn,lin2017feature,lin2017focal,cai2019cascadercnn}, many works~\cite{optimizedpedestrian,tian2015deep,stewart2016end,zhou2017multilabelpart,discriminativeoccluding,Tang2014,chi2019pedhunter} have adapted them to pedestrian detection, leading to great progress in this field.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=1.\linewidth]{latex/images/intro_example.pdf}
\end{center}
\caption{Illustration of V2F-Net. (a) Results of the Faster-RCNN~\cite{ren2015faster} with FPN~\cite{lin2017feature} baseline. (b) Results of our method. The solid boxes indicate detected boxes and the dashed boxes indicate boxes falsely suppressed by NMS. Boxes belonging to the same pedestrian are drawn in the same color. The arrow in Fig.~\ref{fig:illustration} (b) represents full-body estimation from the visible region. Directly detecting the full-body box tends to cause inaccurate regression; meanwhile, performing NMS on full boxes is responsible for false suppression. Our method predicts the visible box and full-body box of each pedestrian sequentially, and computes IoU in NMS on visible boxes instead of full boxes. Therefore, both full boxes are precise and can be kept even when the IoU between them is high. Best viewed in color.}
\label{fig:illustration}
\end{figure}
However, occlusion remains a big challenge in pedestrian detection, and many efforts have been made to handle this problem. Some prior works~\cite{wang2018repulsion,discriminativeoccluding,chu2020crowddet} use only the full-body box of a pedestrian to train their networks, which more or less implies an assumption: the annotated full-body box is completely visible~\cite{pang2019mask}. However, this assumption may not hold in occlusion cases. Therefore, other researchers try to leverage the visible box to assist full-body pedestrian detection~\cite{pang2019mask,zhang2018Occlusionaware,Zhou_2018_bibox,zhang2018occludedattention,huang2020R2nms}, and have achieved inspiring improvements.
Intuitively, pedestrian detection proceeds in two stages for humans: first \textbf{detect} the visible region by ``seeing'', then \textbf{estimate} the full body based on the structure and proportion of the human body. The visible region is critical in this process, as it is strong evidence for identifying a person and discriminating between two different pedestrians. However, prior works adopt pipelines different from this process of human vision.~\cite{wang2018repulsion, chu2020crowddet} predict only the full-body box directly, while~\cite{Zhou_2018_bibox, huang2020R2nms} output the visible box and full box in parallel. Both consider full-body pedestrian detection as a single problem, which obviously increases the difficulty of network learning.~\cite{prnet} takes the visible box as an auxiliary prior to initialize the full-body anchors. It still fails to realize the key role of the visible box, and thus suffers severely from false suppression by NMS and is limited to datasets with fixed ratios of full-body annotations.
Inspired by the above observation, we propose a new solution from the perspective of the pipeline: V2F-Net. We explicitly decompose occluded pedestrian detection into two sub-problems: visible region detection and full body estimation from the visible region. Each sub-problem corresponds to a sub-network of V2F-Net, called the Visible region Detection Network (VDN) and the Full body Estimation Network (FEN), respectively. V2F-Net has a straightforward pipeline and can be trained end-to-end. An input image is first processed by VDN to detect the visible boxes of all pedestrians; after \textit{Non-maximum Suppression} (NMS), the kept boxes are fed into FEN to estimate the final full-body box for each pedestrian. Fig.~\ref{fig:illustration} shows an illustration of V2F-Net.
One may doubt whether detecting the visible box is harder than the full-body box, as reported in \cite{shao2018crowdhuman}. Based on our observations, the higher $\text{MR}^{-2}$ of visible box detection is mostly due to the relatively lower localization quality of detected visible boxes. This can be partly validated from Table~\ref{tbl:crowdhuman_iou}: if we loosen the matching \textit{Intersection over Union} (IoU) threshold between detected boxes and ground truth, both AP and $\text{MR}^{-2}$ of visible boxes outperform the results of full-body boxes, which indicates that the locations of detected visible boxes are not as precise as those of full-body boxes. We suspect two main reasons for this. Compared to full boxes, visible boxes have a \textit{wider range of aspect ratios} and are usually \textit{smaller}. The former causes regression offsets with larger variance, and the latter leads to higher susceptibility to minor deviations of the offsets. However, detected visible boxes with minor offsets can still be used to estimate the full-body box precisely, thanks to the robustness of FEN and EPM. Therefore, our method works well although the $\text{MR}^{-2}$ of visible box detection is higher than that of full box detection.
\begin{table}[ht]
\centering
\caption{Results of full-body box detection and visible box detection at different matching thresholds. The matching threshold is the IoU level at which a detection counts as a true positive.}
\label{tbl:crowdhuman_iou}
\footnotesize
\begin{tabularx}{1.\linewidth}{X<{\centering}|X<{\centering}X<{\centering}|X<{\centering}X<{\centering}}
\toprule
& \multicolumn{2}{|c|}{Full box detection} & \multicolumn{2}{|c}{Visible box detection} \\
\hline
IoU thres.& AP/\% & $\text{MR}^{-2}$/\% & AP/\% & $\text{MR}^{-2}$/\% \\
\hline
0.5 & 85.18 & 49.18 &85.01 &53.75 \\
\hline
0.4 &88.68&44.23 &88.85&47.38\\
\hline
0.3 &90.76&41.89 &91.39&41.22 \\
\hline
0.2 &92.35&38.95 &93.20&35.08 \\
\hline
0.1 &93.82&35.85 &94.86&29.23 \\
\bottomrule
\end{tabularx}
\end{table}
Thanks to the aforementioned decomposition strategy, V2F-Net has the following advantages: (1) Unlike prior works that tackle the above two tasks in one single network, each sub-network of V2F-Net is responsible for its own task. As a consequence, the learning of V2F-Net becomes simpler and can converge to a better minimum. (2) The direct detection target is changed from the full-body box to the visible box, which greatly reduces the distraction of occluded regions on the features of pedestrians. (3) The intermediate product, the visible box, can be utilized in NMS. By replacing the \textit{Intersection over Union} (IoU) calculation on full-body boxes with visible boxes as in~\cite{huang2020R2nms}, the dilemma of the single threshold of greedy-NMS can be eased a lot.
In order to make the estimation of the full-body box from the visible box more accurate, the features of the visible region must contain enough information about human body parts, so that the network knows in which direction to expand from the visible box and how large the offsets should be. Therefore, we propose a novel module that perceives the visibility of human body parts, called the Embedding-based Part-aware Module (EPM). By adding a visibility loss for each divided part, the network is encouraged to extract features with essential part information.
To summarize, our contributions are as follows:
\begin{itemize}
\item We propose a simple yet effective pipeline to handle occlusion in pedestrian detection by explicit decomposition. It can be taken as a stronger baseline for occluded pedestrian detection.
\item We propose a novel Embedding-based Part-aware Module (EPM) to further improve accuracy of full-body estimation. This module can be discarded during inference, thus will not bring extra computation cost.
\item Our method improves the FPN baseline by 5.85\% AP on CrowdHuman and 2.24\% $\text{MR}^{-2}$ on CityPersons, achieving the state-of-the-art results on both the two challenging benchmarks. Besides, the consistent gains on both one-stage and two-stage detectors demonstrate the generalizability of our method.
\end{itemize}
\section{Related Work}
\paragraph{General Object Detection.}
CNN-based object detectors~\cite{ren2015faster, yolo, liu2016ssd, lin2017focal} have shown great superiority over methods using hand-crafted features~\cite{dollar2014fast, papageorgiou2000trainable}. The state-of-the-art detectors can be divided into two-stage methods and one-stage methods. Two-stage detectors~\cite{girshick2014rich, girshick2015fast, ren2015faster, cai2019cascadercnn} first generate a set of region proposals by methods like Selective Search~\cite{uijlings2013selective} and Region Proposal Network~\cite{ren2015faster}, then these proposals are fed into RCNN to do the final classification and localization. In contrast, one-stage detectors~\cite{yolov3, liu2016ssd, lin2017focal} directly predict objects based on dense sampling of possible locations, skipping the proposal stage. Generally speaking, two-stage detectors can achieve better accuracy while one-stage detectors have an advantage in computational efficiency.
\paragraph{Occluded Pedestrian Detection.}
Part-based approaches handle occlusion by first learning a series of part detectors and then fusing their detection results to generate the final pedestrian boxes. Despite their effectiveness, they are time-consuming in the inference phase. A few previous works design novel loss functions without modifying the network architecture: Repulsion Loss~\cite{wang2018repulsion} takes into consideration the repulsion by other surrounding objects, in addition to the attraction by the target; Aggregation Loss~\cite{zhang2018Occlusionaware} encourages proposals corresponding to the same pedestrian to be compact.
However, it is difficult for a network to discriminate features between visible regions and occluded regions with the annotated full-body box only. Therefore, some works leverage the visible box to assist full-body pedestrian detection. Bi-box~\cite{Zhou_2018_bibox} is the first work to predict the full-body box and visible box in parallel.~\cite{huang2020R2nms} improves it by proposal pairing.~\cite{zhang2018occludedattention} adds an extra occlusion pattern classification loss. These approaches implicitly enforce the detectors to focus on the visible regions of pedestrians. Besides them, some works achieve the same purpose in an explicit way; feature re-weighting and re-scoring are two common strategies. Feature re-weighting methods re-weight the features of pedestrians with information about visible parts:~\cite{zhang2018Occlusionaware} applies element-wise summation on the RoI features of divided parts weighted by the corresponding visibility scores;~\cite{pang2019mask} generates modulated features by a Mask-Guided Attention Branch, which takes RoI features and a predicted spatial attention mask as input. In contrast, re-scoring approaches refine the pedestrian scores with the scores of visible regions:~\cite{Zhou_2018_bibox} fuses the scores of the visible region and full body with softmax;~\cite{Noh_2018_CVPR} computes an occlusion-aware detection score by applying an MLP layer to the scores of parts.
Besides,~\cite{prnet} tackles occluded pedestrian detection as a progressive refinement process. It takes the visible box as an auxiliary prior to initialize the full-body anchors, so as to build a fast one-stage detector. This is achieved by calibrating visible box anchors to a full-body template derived from occlusion statistics. Different from all these works, we propose to decompose occluded pedestrian detection into two more intuitive and simpler tasks. In the decomposed pipeline, the visible box can be fully exploited to improve detection performance.
\paragraph{NMS and its variants.}
NMS is adopted as a post-processing step in most object detectors to remove duplicate proposals belonging to the same identity. In greedy-NMS, a proposal is discarded if its IoU with a more confident proposal is higher than the given IoU threshold. As we know, this causes false suppression when using the commonly used, relatively low threshold in crowded scenes. Soft-NMS~\cite{softnms} improves it by leveraging a soft mechanism to decay the detection scores of pedestrian proposals instead of eliminating them. However, the locations and scores of full boxes alone are not enough to decide whether a proposal is redundant, and thus cannot adapt well to complicated scenes.
To handle this problem, some works predict extra information with the detection network, in addition to the locations and scores of object proposals, and utilize it in NMS. Adaptive-NMS~\cite{adaptiveNMS} outputs an object density indicating the level of occlusion. CaSe~\cite{xie2020count} predicts the number of pedestrians in the corresponding boxes and an embedding for discriminating different pedestrians at the feature level. NOH-NMS~\cite{zhou2020NOH-NMS} introduces the nearby-objects distribution into the NMS pipeline. A more intuitive idea is to compute IoU using visible boxes instead of full boxes. R$^2$NMS~\cite{huang2020R2nms} has shown the effectiveness of this idea on crowded pedestrian benchmarks, and we also leverage this strategy in our method.
\section{Motivation}
Given an image consisting of occluded pedestrians, like the input image in Fig.~\ref{fig:method}, it is hard to tell their precise full-body locations immediately, even for humans. Intuitively, the process of full-body pedestrian detection for humans goes through two stages: in the first stage, we identify each pedestrian by its visible region; in the second stage, we estimate the full-body box from the visible region. Humans can ``see'' the invisible part because we have an empirical estimation of the structure and proportion of the human body. However, most prior works fuse the detection task and the estimation task into one single, harder task, which increases the difficulty of network learning.
A straightforward idea is to divide a hard problem and conquer each sub-problem separately. This thought of decomposition has been proved effective in many computer vision tasks, e.g., pose regression~\cite{dollar2010cascaded_pose}, general object detection~\cite{cai2019cascadercnn} and face alignment~\cite{cao2014face,yan2013learn}. Inspired by these works, we propose to decompose occluded pedestrian detection into two sub-problems: visible region detection and full body estimation. The goal of our method is to build a more intuitive and stronger pipeline to handle occlusion. In the decomposed pipeline, the visible box can be fully taken advantage of to improve detection performance.
\section{Proposed Approach}
\begin{figure*}[!t]
\begin{center}
\includegraphics[width=1.\linewidth]{latex/images/method.pdf}
\end{center}
\caption{Framework of V2F-Net. The input image is first processed by the Visible region Detection Network (VDN) to detect the visible regions of all pedestrians. After NMS (only necessary during inference), the kept boxes are fed into the Full body Estimation Network (FEN) to estimate the full-body box for each pedestrian. During training, the visible boxes are also passed to the Embedding-based Part-aware Module (EPM) to predict the visibility of each part of the corresponding pedestrian. By supervising the part visibility, EPM works as an auxiliary module to make the full-body estimation more accurate. $\bigodot$ is the dot product operation. The dashed line and rectangle indicate components that can be discarded during inference. In the output image, the green boxes and digits represent divided parts and the scores predicted by EPM, respectively.}
\label{fig:method}
\end{figure*}
In this section, we first clarify how V2F-Net decomposes occluded pedestrian detection into visible region detection and full body estimation. Then we introduce EPM to further improve accuracy of full-body box estimation from visible box. At last, we show how to train V2F-Net end-to-end.
\subsection{Pipeline of V2F-Net}
\label{sec:ppl}
Fig.~\ref{fig:method} presents the framework of our method. The V2F-Net consists of two sub-networks: Visible region Detection Network (VDN) and Full body Estimation Network (FEN), and an auxiliary module Embedding-based Part-aware Module (EPM). We will give a detailed introduction of them below. The pipeline of V2F-Net is straightforward during inference: an image is first processed by VDN to detect visible regions of all pedestrians. After NMS the kept visible boxes are fed into FEN to generate final full-body box for each pedestrian, no need for NMS anymore.
It is worth noting that we perform NMS only on the set of visible boxes, for two reasons: (1) As suggested by~\cite{huang2020R2nms}, the IoU between the visible regions of two full-body boxes is a better indicator of whether they belong to the same pedestrian than the IoU between the full-body boxes directly. Therefore, with an appropriate single IoU threshold, many duplicate pedestrian proposals can be removed, while the boxes of two different pedestrians can both be kept, even in crowded scenes. (2) After NMS on visible regions, only a few boxes are passed to FEN, so FEN does not cost much time in the inference phase.
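This modified greedy-NMS can be sketched as follows (a simplified numpy illustration, not the exact implementation): suppression is decided by the IoU between visible boxes, and the kept indices then select both the visible and full-body proposals.

```python
import numpy as np

def iou_1_to_n(box, boxes):
    # box: [x1, y1, x2, y2]; boxes: N x 4
    tl = np.maximum(box[:2], boxes[:, :2])
    br = np.minimum(box[2:], boxes[:, 2:])
    inter = np.clip(br - tl, 0, None).prod(-1)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def nms_on_visible(visible_boxes, scores, thresh=0.5):
    # Greedy NMS where the IoU is computed on visible boxes only.
    order = np.argsort(-scores)
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        ious = iou_1_to_n(visible_boxes[i], visible_boxes[order[1:]])
        order = order[1:][ious < thresh]
    return keep
```

Two heavily overlapping visible boxes lead to one surviving proposal, while a spatially disjoint pedestrian is always kept.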
\paragraph{VDN.} The goal of VDN is to detect the visible region of each pedestrian. VDN can be implemented with minor modifications on original detectors like Faster-RCNN~\cite{ren2015faster} and RetinaNet~\cite{lin2017focal}: the only thing we need to do is replace the regression target from the full box with the visible box. Adjusting the anchor setting is optional, depending on whether anchors are necessary in the chosen base detector. Many works utilize the visible box to reduce the interference of occluded regions on the features of pedestrians by explicit attention~\cite{zhang2018occludedattention,pang2019mask} or implicit multi-task learning~\cite{Zhou_2018_bibox}. Compared to these methods, our solution is simpler and easier to implement.
The loss of VDN is the same as base detector, here we denote it by $\mathcal{L}_{VDN}$. With Faster-RCNN we have
\begin{equation}
\mathcal{L}_{VDN} = \mathcal{L}_{cls1}+\mathcal{L}_{reg1}+\mathcal{L}_{cls2}+\mathcal{L}_{reg2},
\label{equ:vdn_loss}
\end{equation}
where $\mathcal{L}_{cls1}$, $\mathcal{L}_{reg1}$, $\mathcal{L}_{cls2}$ and $\mathcal{L}_{reg2}$ are classification loss and regression loss in RPN and RCNN, respectively.
\paragraph{FEN.} FEN aims to estimate the full-body box from the visible box already detected by VDN. The architecture of FEN is almost the same as the RCNN head in~\cite{ren2015faster}, except that we replace RoI-Pooling with RoI-Align. Specifically, given the visible box of a pedestrian \textit{v}, we use RoI-Align to extract the corresponding features $\bm{F_v}$; these features are fed into two consecutive FC layers with ReLU activation to transform $\bm{F_v}$ into more task-specific features $\bm{F_v^r}$, followed by another FC layer to predict the full-body offsets from the input visible box.
To determine which full-body pedestrian box to predict from the input visible box \textit{v} during training, we use a \textit{vdt$\xrightarrow{}$vgt$\xrightarrow{}$fgt} label assignment strategy. Formally, let $\mathcal{G}={\{(v_i^*, f_i^*)|1 \leq i \leq N_g \}}$ be all ground truth pedestrians, where the $i$-th pedestrian $\mathcal{G}_i$ is represented by its visible box $v_i^*$ and full box $f_i^*$ in a pair, and $N_g$ is the total number of pedestrians. A visible box \textit{v} is considered negative if $\operatorname{IoU}(v, v_i^*)<0.5$ for all $i \in [1, N_g]$; otherwise it is assigned to $\mathcal{G}_j$ as a positive sample, where $j=\operatorname*{argmax}_k\mathrm{IoU}(v, v_k^*), k\in{[1, N_g]}$. For these positive samples, we take the corresponding full box of the assigned ground truth pedestrian as the regression target. The loss of FEN, $\mathcal{L}_{FEN}$, adopts a similar formulation to the regression loss of RCNN in Faster-RCNN; we refer to~\cite{ren2015faster} for more details.
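The \textit{vdt$\xrightarrow{}$vgt$\xrightarrow{}$fgt} assignment can be sketched as follows (hypothetical helper names; boxes are given as $[x_1, y_1, x_2, y_2]$):

```python
import numpy as np

def pairwise_iou(a, b):
    # a: N x 4, b: M x 4 -> N x M IoU matrix
    tl = np.maximum(a[:, None, :2], b[None, :, :2])
    br = np.minimum(a[:, None, 2:], b[None, :, 2:])
    inter = np.clip(br - tl, 0, None).prod(-1)
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    return inter / (area_a[:, None] + area_b[None, :] - inter)

def assign_targets(det_vis, gt_vis, gt_full, pos_thresh=0.5):
    # Match each detected visible box to the gt pedestrian with the highest
    # visible-box IoU; positives regress toward that pedestrian's full box.
    ious = pairwise_iou(det_vis, gt_vis)
    assigned = ious.argmax(axis=1)
    is_pos = ious.max(axis=1) >= pos_thresh
    return is_pos, assigned, gt_full[assigned]
```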
\subsection{Embedding-based Part-aware Module}
The Embedding-based Part-aware Module is proposed to improve the accuracy of full-body estimation from the visible region. First, the full body of a pedestrian is divided into $n_p=5$ parts as in~\cite{zhang2018Occlusionaware}; then we create a part embedding matrix $\bm{E}\in R^{n_p\times d_p}$, where the $i$-th $d_p$-dimensional embedding $\bm{E_i}$ represents the concept of the $i$-th part $P_i$. The design of the part embedding matrix is inspired by word embeddings in NLP, where each word is represented by a vector indicating its semantic information. Similarly, the part embedding matrix is shared across all data and trained together with the other parameters. To discriminate between the general concept of a part and the part of a specific pedestrian, we use different notations, $P_i$ and $p_i$, for the $i$-th part.
Given the features of a detected visible region, we compute their response on each part to determine whether this part is visible in the given visible box. Specifically, we apply the inner product to $\bm{F_v^r}$ and $\bm{E}$, followed by a sigmoid function to limit the response to the range $(0, 1)$. Note that here we use the same features $\bm{F_v^r}$ as FEN rather than transforming $\bm{F_v}$ again, so that the gradients from the loss of EPM can back-propagate through FEN, guiding the regression of FEN to be more accurate. The above procedure can be formulated as:
\begin{equation}
r_i^p=\mathrm{sigmoid}(\bm{F_v^r}\cdot \bm{E_i}), i\in{[1, n_p]},
\label{equ:part_resp}
\end{equation}
where ``$\cdot$'' is the inner product operation, and $r_i^p$ represents the response of feature $\bm{F_v^r}$ on part $p_i$. The higher the response $r_i^p$, the higher the visibility of part $p_i$. For each part, we use a sigmoid cross-entropy loss to supervise the response to be close to the corresponding ground truth label, enforcing the features of the pedestrian to be aware of parts. The total loss of EPM is the sum of the losses over the $n_p$ parts:
\begin{equation}
\mathcal{L}_{EPM} = -\sum_{i=1}^{n_p}\left[y_i^p \operatorname{log} r_i^p + (1-y_i^p) \operatorname{log} (1-r_i^p)\right],
\label{equ:part_loss}
\end{equation}
where $y_i^p$ represents the visibility label for the $i$-th part. $y_i^p$ is defined according to the IoA between the input visible box \textit{v} and part $p_i$ of the assigned pedestrian; we try two modes, hard label and soft label:
\begin{equation}
y_i^p =\left\{
\begin{array}{lr}
\mathbb{I}{(\operatorname{IoA}(v, p_i) \ge 0.5)}, & mode=hard \\
\operatorname{IoA}(v, p_i), & mode=soft\\
\end{array},
\right.
\label{equ: part_gt}
\end{equation}
\begin{equation}
\operatorname{IoA}(a, b) = \frac{\operatorname{area}(a \cap b)}{\operatorname{area}(a)},
\label{equ: ioa}
\end{equation}
where $\mathbb{I}{(.)}$ is an indicator function. We experimentally find that the hard label performs slightly better than the soft one, so we use hard labels in this paper unless otherwise specified. Note that EPM only works in the training phase, so no extra computation cost is brought by this module during inference.
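The part response, IoA, and the EPM loss can be sketched as follows (a minimal numpy illustration with hypothetical helper names; the loss here averages the per-box part-wise cross-entropy over a batch):

```python
import numpy as np

def part_responses(F, E):
    # F: N x d_p RoI features; E: n_p x d_p part embedding matrix
    return 1.0 / (1.0 + np.exp(-(F @ E.T)))      # sigmoid, N x n_p in (0, 1)

def ioa(v, parts):
    # IoA(v, p) = area(v & p) / area(v); v: [x1, y1, x2, y2]; parts: n_p x 4
    tl = np.maximum(v[:2], parts[:, :2])
    br = np.minimum(v[2:], parts[:, 2:])
    inter = np.clip(br - tl, 0, None).prod(-1)
    return inter / ((v[2] - v[0]) * (v[3] - v[1]))

def epm_loss(r, y):
    # binary cross-entropy summed over parts, averaged over boxes
    return -np.mean(np.sum(y * np.log(r) + (1 - y) * np.log(1 - r), axis=1))
```

Hard labels are then obtained as `(ioa(v, parts) >= 0.5).astype(float)`, and soft labels use the IoA values directly.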
\subsection{Training}
Although we decompose occluded pedestrian detection into two successive sub-problems, V2F-Net can still be trained end-to-end. We omit the training of VDN, as there is no difference between the base detector and VDN.
We start with the set of visible boxes $V$ detected by VDN. To avoid having too few positive samples for FEN and EPM, we skip NMS on these visible boxes during training and enrich the training set with ground-truth visible boxes. We denote the augmented set of visible boxes as $\mathcal{V}=V \cup \mathcal{G}^v$, where $\mathcal{G}^v=\{v_i^*|i\in[1, N_g]\}$ represents the visible boxes of all ground-truth pedestrians. After label assignment as in Sec.~\ref{sec:ppl}, we randomly sample $\operatorname{min}(1000, |\mathcal{V}|)$ visible boxes from $\mathcal{V}$ with a positive:negative ratio of 9:1. All these samples are fed into EPM, while only the positive samples are passed to FEN. The total loss of V2F-Net $\mathcal{L}$ is the weighted sum of the above three losses:
\begin{equation}
\mathcal{L} = \mathcal{L}_{VDN}+\alpha\mathcal{L}_{FEN}+\beta\mathcal{L}_{EPM},
\label{equ: total_loss}
\end{equation}
where $\alpha$ and $\beta$ are the balancing factors of $\mathcal{L}_{FEN}$ and $\mathcal{L}_{EPM}$, respectively.
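The training-time sampling and loss weighting described above can be sketched as follows. This is our own minimal sketch, not the released code: the names are hypothetical, the `is_positive` flags stand in for the label assignment of Sec.~\ref{sec:ppl}, and we treat the sample count as a cap, since one cannot draw more boxes without replacement than are available.

```python
import random

def sample_visible_boxes(detected, gt_visible, is_positive, budget=1000, pos_ratio=0.9):
    # Augmented set V = V ∪ G^v, then sample positives/negatives at 9:1.
    boxes = list(detected) + list(gt_visible)
    pos = [b for b, p in zip(boxes, is_positive) if p]
    neg = [b for b, p in zip(boxes, is_positive) if not p]
    n = min(budget, len(boxes))
    n_pos = min(len(pos), round(n * pos_ratio))
    n_neg = min(len(neg), n - n_pos)
    # All sampled boxes go to EPM; only the positives go to FEN.
    return random.sample(pos, n_pos), random.sample(neg, n_neg)

def total_loss(l_vdn, l_fen, l_epm, alpha=0.3, beta=1.0):
    # L = L_VDN + alpha * L_FEN + beta * L_EPM (CrowdHuman defaults).
    return l_vdn + alpha * l_fen + beta * l_epm
```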
\section{Experiments}
To verify the effectiveness of our method, we conduct experiments on two standard crowded datasets: CrowdHuman~\cite{shao2018crowdhuman} and CityPersons~\cite{zhang2017citypersons}. Besides the comparison of quantitative results, we also visualize the learned part visibility to show that EPM works as expected. Finally, we give some analysis and discussion about how to further improve V2F-Net.
\subsection{Implementation Details}
We use Faster-RCNN~\cite{ren2015faster} with FPN~\cite{lin2017feature} as our baseline. RoI-Pooling is replaced with RoI-Align~\cite{he2017mask}, and the backbone is ResNet-50 pre-trained on ImageNet~\cite{russakovsky2015imagenet}. The anchor setting is the same as~\cite{wang2018repulsion,chu2020crowddet}, which uses scale \{1\} and ratios $H/W=\{1, 2, 3\}$ for both CrowdHuman and CityPersons. In our method, we use the same anchor scale as the baseline, but change the ratios to \{0.5, 1, 1.5\} for visible box detection. All models are trained with a batch of 16 images on 8 GPUs and optimized by Stochastic Gradient Descent (SGD) with 0.9 momentum; the weight decay is set to 0.0001. The IoU threshold in NMS during inference is set to 0.5, regardless of whether NMS is performed on visible boxes or full boxes. All added FC layers are initialized following~\cite{lin2017feature}. Each element in the Part Embedding Matrix $E$ is randomly sampled from a uniform distribution over [-0.0005, 0.0005]. For all experiments in this paper, we use AP and $\text{MR}^{-2}$ as evaluation metrics. A higher AP indicates better performance, while for $\text{MR}^{-2}$ lower is better.
For CrowdHuman, we resize images so that the shorter edge equals 800 pixels while the longer edge is smaller than 1400 pixels, during both training and inference. We train for 30 epochs with an initial learning rate of 0.00125. The learning rate is decayed by a factor of 0.1 at epochs 24 and 27. By default we use $\alpha=0.3$, $\beta=1$.
For CityPersons, the image is upsampled by 1.3x in both training and testing following~\cite{zhang2017citypersons}. We train our models on the full training set for 35 epochs. We set the initial learning rate to 0.002, then decay it to 0.0002 at the 20th epoch and 0.00002 at the 30th epoch. The balancing factors of the losses are set to $\alpha=0.5$, $\beta=1$.
\subsection{Experiments on CrowdHuman}
\label{sec:exp_crowdhuman}
CrowdHuman~\cite{shao2018crowdhuman} is a recently released dataset for better evaluating various pedestrian detectors in crowded scenes. On average, there are 22.6 pedestrians per image, and 2.4 pedestrians have IoU$>$0.5 with other pedestrians. The dataset is split into a training set, validation set and test set, containing 15000, 4370 and 5000 images, respectively. All models in this paper are trained on the training set and evaluated on the validation set.
\paragraph{Ablation study.} Table~\ref{tbl:crowd_ablation} shows the ablation results on the CrowdHuman validation set. Our re-implemented baseline is better than the result in the original paper~\cite{shao2018crowdhuman}. Clearly, both the decomposed pipeline V2F and the auxiliary module EPM improve the performance of full-body pedestrian detection. By simply modifying the pipeline from detecting the full body of a pedestrian directly to performing visible region detection and full body estimation sequentially, we achieve gains of 5.63\% AP and 3.89\% $\text{MR}^{-2}$. On top of the decomposed V2F pipeline, the cost-free EPM further improves AP by 0.22\% and $\text{MR}^{-2}$ by 0.78\%. The results validate the effectiveness of our proposed method: not only is recall improved by a large margin, but no additional false positives are introduced.
\begin{table}[ht]
\centering
\caption{ Ablation experiments conducted on the CrowdHuman validation set. \emph{V2F} indicates our proposed pipeline, which performs visible region detection and full body estimation sequentially. \emph{EPM} is the Embedding-based Part-aware Module. Best results are boldfaced.
}
\footnotesize
\label{tbl:crowd_ablation}
\begin{tabularx}{1.\linewidth}{c|cc|X<{\centering}X<{\centering}X<{\centering}}
\toprule
Method & V2F & EPM & AP/\% & $\text{MR}^{-2}$/\% & Recall/\% \\
\hline
baseline in~\cite{shao2018crowdhuman} & & & 84.95 & 50.42 & ---\\
our baseline & & & 85.18 & 46.95 & 76.90 \\
\hline
\multirow{2}*{V2F-Net} & \checkmark & & 90.81 & 43.06 & 83.92\\
& \checkmark & \checkmark & \textbf{91.03} & \textbf{42.28} & \textbf{84.20}\\
\bottomrule
\end{tabularx}
\end{table}
\paragraph{Different pipelines for full-body pedestrian detection.} In addition to our proposed pipeline \emph{V2F}, there are two other strategies for generating the full-body box of a pedestrian: detecting the full-body box only, and predicting the full-body box and visible box in parallel as in~\cite{Zhou_2018_bibox}. We denote them by \emph{F} and \emph{V\&F}, respectively. One may suspect that the improvement of \emph{V2F} comes from the extra computation brought by the FEN. Therefore, we add an extra RCNN head with the same architecture as FEN to our baseline, following the \textit{iterative bounding box regression} of~\cite{gidaris2015object,Gidaris2016Attend}. We denote this strategy as \emph{F$^2$}. For a fair comparison, we do not add a classification branch for re-scoring in \emph{F$^2$}, which means the second RCNN is only used to refine the locations of the full-body boxes detected by the first one.
Table~\ref{tbl:crowd_ablation_ppl} presents the results of the aforementioned pipelines on the CrowdHuman validation set. We can draw the following conclusions: (1) By just adding an extra task of visible region detection, the performance of full-body pedestrian detection can be improved by 1.78\% $\text{MR}^{-2}$, which is consistent with the conclusion in~\cite{Zhou_2018_bibox}. (2) Iterative regression does have a positive effect, but is still 3.3\% AP and 0.83\% $\text{MR}^{-2}$ worse than our decomposed pipeline when using visible boxes for NMS. (3) Replacing the calculation of IoU on full-body boxes with visible boxes does not guarantee an improvement. It works only when the visible boxes are precise enough (see Table~\ref{tbl:crowd_ablation_ppl_v} for quantitative results). (4) Our decomposed pipeline beats all others by 3.3\%$ \sim $5.63\% AP and 0.83\%$ \sim $3.89\% $\text{MR}^{-2}$. In conclusion, our proposed pipeline can serve as a stronger baseline for occluded pedestrian detection.
\begin{table}[ht]
\centering
\caption{Results of different pipelines for full-body pedestrian detection on the CrowdHuman validation set. In the first column, \emph{F} means detecting the full-body box only, \emph{V\&F} means predicting the visible box and full box in parallel, \emph{F$^2$} means iterative full-body regression in 2 steps, and \emph{V2F} is our proposed method without EPM. The second column indicates whether the inputs of NMS are visible boxes or full-body boxes.}
\label{tbl:crowd_ablation_ppl}
\footnotesize
\begin{tabularx}{1.\linewidth}{X<{\centering}|X<{\centering}|X<{\centering}X<{\centering}X<{\centering}}
\toprule
Pipeline & NMS & AP/\% & $\text{MR}^{-2}$/\% & Recall/\% \\
\hline
F & full & 85.18 & 46.95 & 76.90 \\
\hline
\multirow{2}*{V\&F} & full & 85.19 & 45.17 & 77.71 \\
& visible & 86.70 & 51.94 & 79.61 \\
\hline
F$^2$ & full & 87.51 & 43.89 & 79.96 \\
\hline
\multirow{2}*{V2F} & full & 86.04 & 43.84 &80.20\\
& visible & \textbf{90.81} & \textbf{43.06} & \textbf{83.92} \\
\bottomrule
\end{tabularx}
\end{table}
\begin{table}[ht]
\caption{Evaluation of visible region detection for different pipelines. Ours is much better than the result of \emph{V\&F}.}
\label{tbl:crowd_ablation_ppl_v}
\centering
\footnotesize
\begin{tabular}{c|ccc}
\toprule
Pipeline & AP/\% & $\text{MR}^{-2}$/\% & Recall/\%\\
\hline
V\&F & 78.92 & 65.78 & 74.78 \\
\hline
V2F & \textbf{84.90} & \textbf{51.93} & \textbf{78.54}\\
\bottomrule
\end{tabular}
\end{table}
\paragraph{Generalizability of V2F-Net.} In V2F-Net, the VDN can be implemented with minor modifications of the chosen detector. The FEN and EPM only take the multi-scale feature maps and the visible boxes detected by VDN as inputs. Therefore, in principle both one-stage and two-stage detectors can be incorporated into V2F-Net. To demonstrate this, we take Faster-RCNN~\cite{ren2015faster} and RetinaNet~\cite{lin2017focal} as representatives of the two types of detectors, and implement our method based on each of them. Both methods utilize FPN~\cite{lin2017feature}. The implementation details of RetinaNet are almost the same as in the original paper, except that we use anchor ratios \{1, 2, 3\} for better performance. From Table~\ref{tbl:crowd_ablation_detector} we can see that V2F-Net achieves consistent gains when taking either Faster-RCNN or RetinaNet as the base detector. The results validate the generalizability of our method.
\begin{table}[ht]
\centering
\caption{Comparison of results when using different detectors with/without V2F-Net. Both the Faster-RCNN and RetinaNet utilize FPN~\cite{lin2017feature} for better performance.}
\label{tbl:crowd_ablation_detector}
\begin{tabularx}{1.\linewidth}{c|X<{\centering}|X<{\centering}X<{\centering}X<{\centering}}
\toprule
Detector & Method & AP & $\text{MR}^{-2}$ & Recall \\
\hline
\multirow{2}*{FRCNN~\cite{ren2015faster}} & baseline & 85.18 & 46.95 & 76.90 \\
& ours & \textbf{91.03} & \textbf{42.28} & \textbf{84.20} \\
\hline
\multirow{2}*{RetinaNet~\cite{lin2017focal}} & baseline & 81.81 & 56.64 & 74.58 \\
& ours & \textbf{84.92} & \textbf{53.99} & \textbf{76.75}\\
\bottomrule
\end{tabularx}
\end{table}
\paragraph{Hard label \vs~Soft label in EPM.} We compare the results of hard and soft labels for each divided part in Table~\ref{tbl:crowd_label_EPM}. The hard mode performs slightly better than the soft one. We suspect this is because the adopted part-division strategy does not consider human pose, so the soft label instead increases the ambiguity of part visibility.
\begin{table}[ht]
\centering
\caption{Comparison of different modes for part label in EPM. All the other settings are the same.
}
\label{tbl:crowd_label_EPM}
\begin{tabular}{c|cc}
\toprule
Mode & AP/\% & $\text{MR}^{-2}$/\% \\
\hline
hard & \textbf{91.03} & \textbf{42.28} \\
soft & 90.35 & 42.67 \\
\bottomrule
\end{tabular}
\end{table}
\paragraph{Visualization of EPM.} To demonstrate that the EPM works as expected, we visualize the scores predicted by EPM for each divided part. Fig.~\ref{fig:vis_part} presents some examples. As we can see, there is a roughly positive correlation between the score and the visibility of each part.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=1.\linewidth]{latex/images/part_vis.pdf}
\end{center}
\caption{Visualization of the scores predicted by EPM for each part. The example images are cropped from the original images according to the detected full-body box of the pedestrian. The green and red rectangles represent the detected visible boxes and the five parts divided as in~\cite{zhang2018Occlusionaware}. Each predicted score for a part indicates its visibility. Best viewed in color.}
\label{fig:vis_part}
\end{figure}
\paragraph{Comparison with the state-of-the-art methods.} Table~\ref{tbl:crowdhuman_eval} compares our method with the state-of-the-art methods on the CrowdHuman validation set. All methods use the same backbone (Res-50 + FPN) and are evaluated using the same image size. Our method outperforms most of the state-of-the-art pedestrian detectors, but is slightly worse than CrowdDet~\cite{chu2020crowddet} due to our relatively weaker baseline. In Sec.~\ref{sec:discussion} we discuss the combination of these SOTA methods with V2F-Net.
\begin{table}[ht]
\centering
\footnotesize
\caption{Comparison of various crowded detection methods on the CrowdHuman validation set. All methods employ FPN with a Res-50 backbone as the baseline, and are evaluated using the same image size.}
\label{tbl:crowdhuman_eval}
\begin{tabularx}{1.\linewidth}{p{40mm}<{\centering}|X<{\centering}X<{\centering}X<{\centering}}
\toprule
Method & AP/\% & $\text{MR}^{-2}$/\% & Recall/\%\\
\hline
Baseline & 85.18 & 46.95 & 76.90 \\
\hline
Adaptive-NMS~\cite{adaptiveNMS} & 84.71 & 49.73 & --- \\
R$^2$NMS~\cite{huang2020R2nms} & 89.29 & 43.35 & --- \\
CaSe~\cite{xie2020count} & --- & 47.9 & ---\\
NOH-NMS~\cite{zhou2020NOH-NMS} & 89.0 & 43.9 & ---\\
CrowdDet~\cite{chu2020crowddet} & 90.7 & 41.4 & 83.68 \\
\hline
Ours & \textbf{91.03} & 42.28 & \textbf{84.20}\\
\bottomrule
\end{tabularx}
\end{table}
\begin{figure*}[!t]
\begin{center}
\includegraphics[width=1.\linewidth]{latex/images/examples.pdf}
\end{center}
\caption{Visualization of detection results. The first row comes from the FPN baseline and the second row shows our results. All detected boxes are filtered by score$\ge$0.3. The boxes in solid and dashed lines represent boxes kept and boxes falsely suppressed by NMS, respectively.}
\label{fig:example_cropped}
\end{figure*}
\subsection{Experiments on CityPersons}
CityPersons~\cite{zhang2017citypersons} is a subset of CityScapes~\cite{cordts2016cityscapes} which focuses on pedestrian detection. There are 2975, 500 and 1575 images for training, validation and testing, respectively. Compared to CrowdHuman, CityPersons is less crowded, but still contains many occluded cases. We train our models on the full training set, and report results on the reasonable validation subset.
\paragraph{Comparison with the state-of-the-art methods.} Table~\ref{tbl:citypersons_eval} lists the results of the state-of-the-art methods and V2F-Net on CityPersons. Similar to the results on CrowdHuman, both the decomposed strategy and EPM consistently improve the detection performance. Our method achieves gains of 2.24\% $\text{MR}^{-2}$ and 0.94\% AP compared to our FPN baseline. The improved result is comparable to or even better than the state-of-the-art methods.
\begin{table}[ht]
\centering
\caption{Comparison between the state-of-the-art methods and ours on the CityPersons validation set. The third column indicates the upsampling factor applied to the original image in both training and testing. \emph{Ours-V2F} is the simplified version of our V2F-Net without EPM.}
\label{tbl:citypersons_eval}
\begin{tabular}{c|c|c|cc}
\toprule
Method & Backbone & Scale & $\text{MR}^{-2}$ & AP \\
\hline
Baseline & Res-50 & $\times$1.3 & 12.32 & 95.25 \\
\hline
AF-RCNN~\cite{zhang2017citypersons} & \multirow{9}*{VGG-16} & $\times$1.3 & 12.81 & ---\\
OR-CNN~\cite{zhang2018Occlusionaware} & & $\times$1.3 & 11.0 & ---\\
FRCN~\cite{zhou2019discriminative} & & $\times$1.3 & 11.1 & --- \\
Adaptive~\cite{adaptiveNMS} & & $\times$1.3 & 10.8 & --- \\
MGAN~\cite{pang2019mask} & & $\times$1.3 & 10.5 & --- \\
GA~\cite{zhang2018occludedattention} & & $\times$1 & 15.96 & --- \\
Bi-box~\cite{Zhou_2018_bibox} & & $\times$1.3 & 11.24 & --- \\
R$^2$NMS~\cite{huang2020R2nms} & & $\times$1 & 11.1 & --- \\
\hline
Repulsion~\cite{wang2018repulsion}& \multirow{4}*{Res-50} & $\times$1.3 & 11.6 & --- \\
ALFNet~\cite{liu2018ALFNet} & & $\times$1 & 12.0 & --- \\
CrowdDet~\cite{chu2020crowddet} & & $\times$1.3 & 10.7 & 96.1 \\
PRNet~\cite{prnet} & & $\times$1 & 10.8 & --- \\
\hline
Ours-V2F & \multirow{2}*{Res-50} & $\times$1.3 & 11.33 & 95.89 \\
Ours & & $\times$1.3 & \textbf{10.08} & \textbf{96.19} \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Discussion}
\label{sec:discussion}
Although our method has achieved an encouraging improvement, we believe V2F-Net is just a new baseline built on the effective decomposed pipeline. Here we discuss our method, aiming to provide some inspiration for follow-up works.
\paragraph{Performance Analysis.} Thanks to the decomposed solution proposed in this paper, we can perform a more detailed analysis to guide further optimization. To explore how the overall performance is limited by each component of V2F-Net, we conduct three experiments, assuming the VDN, VDN+NMS or FEN to be perfect, respectively: (1) \emph{P-VDN}: replace the boxes detected by VDN with the ground-truth visible boxes $\mathcal{G}^v$. (2) \emph{P-VDN+NMS}: take $\mathcal{G}^v$ as the inputs of FEN. (3) \emph{P-FEN}: the full box of each detected visible box is obtained by label assignment as in training, instead of being estimated by FEN. The difference between the first two experiments is whether NMS is performed on $\mathcal{G}^v$. Setting the scores of these ground-truth visible boxes equal would confuse NMS and evaluation metrics like AP and $\text{MR}^{-2}$, as they require that the input boxes be sorted by confidence. We actually tried this strategy and found that it causes $\text{MR}^{-2}$ to increase rapidly. Therefore, we predict scores for the ground-truth boxes by feeding them into VDN.
From Table~\ref{tbl:upper-analysis} we find that the above experiments show the same trend on both CrowdHuman and CityPersons. Both \emph{P-VDN} and \emph{P-FEN} improve AP slightly but reduce $\text{MR}^{-2}$ by a large margin. This indicates that VDN produces many false positives with high scores, and that FEN should pay more attention to these highly confident false positive visible boxes. Compared to \emph{P-VDN}, \emph{P-VDN+NMS} improves AP and $\text{MR}^{-2}$ to some extent, showing that there is still false suppression even when using visible boxes in NMS.
\begin{table}[ht]
\centering
\caption{Quantitative analysis of how the detection performance is limited by each component of V2F-Net. \emph{P-VDN}, \emph{P-VDN+NMS} and \emph{P-FEN} indicate upgraded V2F-Nets using a perfect VDN, VDN+NMS and FEN respectively, which are obtained by ``cheating'' with ground-truth boxes.
}
\label{tbl:upper-analysis}
\begin{tabularx}{1.\linewidth}{c|c|X<{\centering}X<{\centering}}
\toprule
Dataset & Method & AP/\% & $\text{MR}^{-2}$/\% \\
\hline
\multirow{4}*{CrowdHuman} & V2F-Net & 91.03 & 42.31 \\
& P-VDN & 91.99 & 20.52 \\
& P-VDN+NMS & 95.14 & 17.67 \\
& P-FEN & 92.25 & 35.56 \\
\hline
\multirow{4}*{CityPersons} & V2F-Net & 96.19 & 10.08 \\
& P-VDN & 96.83 & 0.0310 \\
& P-VDN+NMS & 99.68 & 0.0025 \\
& P-FEN & 97.37 & 0.0929 \\
\bottomrule
\end{tabularx}
\end{table}
\paragraph{Beyond V2F-Net.} Despite its effectiveness, V2F-Net still suffers from some failure cases common to other pipelines: crowd errors, false suppression by NMS, etc. Fortunately, many excellent works have proposed effective approaches to these problems, e.g.~\cite{chu2020crowddet} for label assignment ambiguity,~\cite{wang2018repulsion,zhang2018Occlusionaware} for crowd errors, and~\cite{adaptiveNMS, Noh_2018_CVPR, xie2020count} for NMS. We believe the performance of V2F-Net can be much better when combined with these ideas.
\section{Conclusion}
We propose a simple yet effective method to handle occlusion in pedestrian detection: V2F-Net. By decomposing occluded pedestrian detection into visible region detection and full body estimation, the learning of the network becomes easier and can converge to a better minimum. To further improve the accuracy of full body estimation, we propose a novel module called EPM, which is cost-free during inference. We experimentally show the effectiveness of the decomposed pipeline and EPM, and validate the generalizability of our method on both one-stage and two-stage detectors. We consider V2F-Net a new baseline for occluded pedestrian detection, and we believe that when combined with other ideas, the evolution of this pipeline will lead to performance well beyond ours.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
\label{sec.1}
In this note we characterize the set of all Hermitian
positive semidefinite matrices $A$
whose entries have modulus $1$ or
$0$. We comment that the
real matrices in this set whose diagonal
entries are all nonzero, and hence necessarily $1$, belong
to the set of {\bf correlation matrices} (see Horn and Johnson
\cite[p.400]{HJ}).\\
Our characterization here is motivated by a surprising result due
to
Jain and Snyder \cite{JS} which can be stated as follows:\\
\begin{2}
\label{thm.js}
{\rm (Jain and Snyder \cite[Theorems 2 and 3]{JS})}
If $A$ is any positive semidefinite $(0,1)$--matrix, then $A$ is
permutationally
similar to a direct sum of matrices each of which is either an
all $1$'s matrix or a
zero matrix.
In particular,
if $A$ is {\rm (}also{\rm )} irreducible, then $A$ is the
$n\times n$ all
$1$'s matrix.
\end{2}
Jain and Snyder's proof of Theorem \ref{thm.js} rests on
their observation
that any
positive semidefinite $(0,1)$--matrix has a Cholesky
factorization with
a $(0,1)$--Cholesky factor. The
proof of our generalization of Theorem \ref{thm.js} relies on the
following property which is possessed by Hermitian positive
semidefinite
matrices and from which results on Cholesky factorizations
follow:\\
\begin{6}
\label{def.psrp}
{\rm A matrix $A\in \C^{n,n}$ is said to have the
{\it principal submatrix rank property} {\rm (}PSRP{\rm )}
if the
following conditions hold:\\
(i) The column space determined by every set of rows of $A$ is
equal
to the column space of the principal submatrix lying in these
rows.\\
(ii) The row space determined by every set of columns of $A$ is
equal
to the row space of the principal submatrix lying in these
columns.}
\end{6}
It is known (cf. Hershkowitz and Schneider \cite{HS,HS1}
and Johnson \cite{J})
that a positive semidefinite
Hermitian matrix has PSRP. We give the following simple proof for
the sake of completeness:\\
Since a
permutation similarity applied to a positive semidefinite
Hermitian matrix
again yields a positive semidefinite Hermitian matrix,
it is enough to show that the row space
determined by the first
$k$ columns of a positive semidefinite Hermitian matrix $A$ is
equal to the
row space determined by the leading principal submatrix $B$ of $A$ of
size $k$. This is equivalent to showing the following: If $v^T =
[w,0]^T$, where $w$ is of length $k$, and $Bw = 0$ then $Av = 0$.
But, for such $v$, we have $v^*Av = w^*Bw = 0$, and the result
follows since $A$ is positive semidefinite.\\
\section{Main Result}
\label{sec.2}
To facilitate our extension
of Theorem \ref{thm.js}, we
introduce the following notion:
\begin{6}
\label{def.1}
{\rm A matrix $P\in \C^{n,n}$ is called a {\em unitary monomial
matrix} if $P = QD$, where $Q$ is a permutation matrix and $D$ is
a diagonal matrix all of whose diagonal entries are of modulus
$1$.}
\end{6}
We are now ready to state the main result of this note:\\
\begin{2}
\label{diningroom}
A matrix $A \in \C^{n,n}$ is Hermitian positive
semidefinite and all its entries have modulus $1$ or $0$
if and only if $A$ is similar, by means of a unitary monomial
matrix, to a direct sum of matrices each of which is either an
all $1$'s matrix or a
zero matrix.
\end{2}
\underline{\bf Proof}:\
The proof of the ``if'' part is obvious, so we proceed to prove
the ``only if'' part.
This is done by induction on $n$, the size
of $A$. The result is trivial if $n=1$. So let $n > 1$ and assume
the theorem holds for matrices of all sizes less than $n$.\\
If $A$ is reducible, then $A$ is permutation similar to a direct
sum of
positive semidefinite Hermitian matrices of size less than $n$
whose
nonzero entries are all of modulus $1$. By our inductive
assumption each
direct summand is unitarily monomially similar to the direct sum
of all $1$'s
matrices and a $0$ matrix, and hence the same is true for $A$.\\
So assume that $A$ is irreducible. We shall first show that there
exists
a $(0,1)$--matrix $E = D^{-1}P^{-1}APD$, where $D$ is diagonal and
$P=(p_{i,j})$ is a
unitary monomial matrix with $p_{n,n} = 1$. Further all elements
of the
last row and column of $E$ are $1$.\\
No diagonal element of $A$ is $0$, for then the corresponding row
and
column would also be $0$. Since the leading submatrix $B$ of
size $n-1$
of $A$ is positive semidefinite, it follows there is a unitary
monomial
similarity of $B$ such that the resulting matrix is a direct sum
of all
$1$'s matrices and a $0$ matrix. We extend this similarity to a
unitary
monomial similarity of $A$ which leaves the last row and column
of $A$ in
place. We thus obtain a matrix $C = P^{-1}AP$, where $p_{n,n} =
1$, such
that all nonzero elements in the last row and column of $C$ are
of modulus
1 and the leading submatrix of size $n-1$ of $C$ is a direct sum
of all
$1$'s matrices and a $0$ matrix. We partition the last row and
column of
$C$ in conformity with this direct sum. Since $C$ is positive
semidefinite, it follows by PSRP that each subvector of the last
row and
column determined by this partition is a multiple of the all
$1$'s vector
of the appropriate size by a number of modulus $1$ or by $0$.
But if one of the subvectors of the last column were a $0$ multiple
of the all $1$'s vector,
then $C$ would be reducible. Hence each subvector of the last column is
a
multiple of an all $1$'s vector by a number of modulus $1$. Let
$D$ be the
unitary diagonal matrix whose diagonal entries coincide with the
last
column of $C$. Since $D$ has equal entries corresponding to the
blocks of
$B$, it follows that $E = D^{-1}CD$ is a $(0,1)$--matrix whose
last row
and column consists of $1$'s. \\
Our proof shows that the last row and column of $A$ have no $0$
entries. By applying permutation similarities to $A$ and
repeating the above construction, we deduce that the same is true
of every row and column. Hence the matrix $E$ we have obtained is
the all $1$'s matrix.
{\hfill $\Box$}\\
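The ``if'' direction of Theorem \ref{diningroom} is easy to illustrate numerically. The following NumPy sketch (ours, purely illustrative) builds $A = PCP^{*}$ for a random unitary monomial matrix $P$ and a direct sum $C$ of an all $1$'s block and a zero block, then checks the claimed properties of $A$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Direct sum of a 3x3 all-1's block and a 2x2 zero block.
C = np.zeros((5, 5))
C[:3, :3] = 1.0

# A unitary monomial matrix P = Q D: permutation times unimodular diagonal.
Q = np.eye(5)[rng.permutation(5)]
D = np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, 5)))
P = Q @ D

A = P @ C @ P.conj().T  # similarity by the unitary monomial matrix P

# A is Hermitian positive semidefinite ...
assert np.allclose(A, A.conj().T)
assert np.min(np.linalg.eigvalsh(A)) > -1e-10
# ... and every entry of A has modulus 1 or 0.
mods = np.abs(A)
assert np.all((np.abs(mods - 1) < 1e-10) | (mods < 1e-10))
```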
Theorem \ref{diningroom} has several corollaries:\\
\begin{10}
\label{cor.1}
A matrix $A \in \C^{n,n}$ is a positive
definite Hermitian matrix whose nonzero entries are all of
modulus $1$ if and only if
$A$ is the identity matrix.
\end{10}
\begin{10}
\label{cor.2}
A matrix $A \in \C^{n,n}$ is an irreducible positive
semidefinite Hermitian matrix whose
nonzero entries are all of
modulus $1$ if and only if $A$ can be transformed
to the all $1$'s matrix by a unitary diagonal similarity.
\end{10}
\begin{10}
\label{cor.3}
Let $A \in \C^{n,n}$ be a positive
semidefinite Hermitian matrix whose nonzero entries are all of
modulus $1$. Then there is an LU factorization of $A$ with $L$
nonsingular where $L$
and $U$ are similar to $(0,1)$--matrices via the same unitary
diagonal matrix.
\end{10}
\underline{\bf Proof}:\ If $C$ is the $k \times k$ block of all $1$'s
then
it admits the factorization $C = L_1U_1$, where the first column
of
$L_1$ consists of $1$'s, the diagonal entries of $L_1$ are $1$,
the
first row of $U_1$ consists of $1$'s and all other elements of
$L_1$
and $U_1$ are $0$. Now, if $C$ is a direct sum of such blocks and
a $0$
matrix, it easily follows that $C$ admits an LU factorization
where $L_2$ is a lower triangular nonsingular $(0,1)$--matrix
and $U_2$ is an upper triangular $(0,1)$--matrix. If $B$ is
permutation similar to $C$ then we can find a permutation matrix
$P$ which does not change the order of rows and
columns in any given block and for which $B = P^TCP$. Then
$L_3 = P^TL_2P$ is a lower triangular nonsingular $(0,1)$--matrix
and $U_3 = P^TU_2P$ is an upper triangular $(0,1)$--matrix. If $A
= D^*BD$, where $D$ is a unitary diagonal matrix, it follows that
$L = D^*L_3D$ and $U = D^*U_3D$ satisfy the conditions of the
corollary. The conclusion of the corollary now follow by applying
Theorem \ref{diningroom}.
{\hfill $\Box$}\\
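The explicit $L_1U_1$ factorization of an all $1$'s block used in the proof above is easy to verify numerically (illustrative NumPy sketch, ours):

```python
import numpy as np

def lu_all_ones(k):
    # C = L1 @ U1 for the k x k all-1's block: the first column of L1 and the
    # first row of U1 consist of 1's, L1 has unit diagonal, and all other
    # entries of L1 and U1 are 0.
    L1 = np.eye(k)
    L1[:, 0] = 1.0
    U1 = np.zeros((k, k))
    U1[0, :] = 1.0
    return L1, U1

L1, U1 = lu_all_ones(4)
assert np.allclose(L1 @ U1, np.ones((4, 4)))
```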
In a very similar way to the proof of the above corollary, we
can prove the following corollary:\\
\begin{10}
\label{cor.4}
Let $A \in \C^{n,n}$ be a positive
semidefinite Hermitian matrix whose nonzero entries are all of
modulus $1$. Then there is an Cholesky $LL^*$ factorization of
$A$ where $L$ is similar to $(0,1)$--matrices via a unitary
diagonal matrix.
\end{10}
\vspace{.3in}
\hspace{-.25in} \underline{\bf ACKNOWLEDGEMENT} \ The authors
wish to thank
Professor
Emeric Deutsch for bringing the question of characterizing the
$n\times n$ $(0,1)$--matrices which are positive semidefinite to
their attention.
\section{Introduction}
The current popularity of transformer-based models in Natural Language Processing is owed to their capacity in constructing semantically rich representations and to their ability to accommodate transfer-learning. This enables users to pre-train transformer-based language models on large unlabeled corpora, and then fine-tune the representations using smaller sets of labeled examples \cite{howard2018}. How small the labeled dataset can be depends on the complexity of the task, the domain-similarity between the pre-training and labeled datasets, and the model architecture \cite{aharoni2020}. Recent research in few-shot learning has focused largely on the last aspect, proposing neural architectures that capture semantic and compositional information efficiently and allow for faster convergence on new tasks \cite{zhang2020few}. However, the state-of-the-art research is still far from human performance on many tasks \cite{schick2020}. This is even more pronounced in zero-shot experiments, where no labeled data is available at training time \cite{brown2020gpt3}.
Due to the emphasis on scalability and generalizability of these models, what is often left out of consideration are the practical aspects of how these models are commonly applied in real-world settings. As an example, datasets are very often accompanied by metadata tags that include some signal about the nature and content of each document in the corpus. Corporate communications, financial reports, regulatory disclosures, policy guidelines, Wikipedia entries, social media messages, and many other forms of textual records often bear tags indicating their source, type, purpose, or an enterprise categorization standard.
This metadata is often stripped before models are applied to the text, in order to avoid convoluted and bespoke architectures. In some cases, the metadata is simply concatenated to the document \cite{zhang2020minsup} without specific controls on how representations are generated from raw text versus metadata tags. A flexible solution that can generalize to various types of metadata can prove useful in augmenting the richness of semantic representations.
\begin{figure*}[htbp]
\centering
\includegraphics[width=\textwidth]{overview.png}
\caption{Approach Overview. Multi-part input document is converted into an embedding representation using a neural encoder. Self-supervision during training is based on latent topic distribution and (optional) reconstruction for metadata. Input points are then classified based on its neighborhood in the representation space.}
\label{fig_overview}
\end{figure*}
Another aspect that current research often leaves out is the topic distribution of the unlabeled dataset. Transformer models are commonly trained in a stochastic fashion, where the global composition of topics in the corpus is not explicitly built into the loss function. Often the pre-training corpus is large enough for this effect to likely be negligible, but when the corpus is smaller than the common multi-million-document setting, this lack of insight into global distributional statistics may have an impact on the compositionality of the resulting representations.
In this study, we explore how addressing both of these issues can improve the performance of transformer-based models on document classification tasks. We propose a simple yet effective framework that encapsulates universal distributional statistics about the raw text, as well as encodings for metadata tags. The framework includes several components that bring together useful characteristics of self-supervised and multi-task learning:
\begin{enumerate}
\item An LDA-based topic model \cite{blei2003latent} is used to train a transformer-based language model to effectively encode distributional characteristics of the unlabeled dataset. This is done by modeling the text in each document as a distribution of topics, and training the transformer on a KL-Divergence loss against those distributions.
\item Metadata artifacts are used to enrich document representations. Each metadata tag is encoded separately and concatenated with the text representation. The encoding objective is adjusted based on the type of metadata.
\item A multi-task objective is used to jointly learn text and metadata representations.
\item The resulting document representations show strong compositionality and can be plugged into a K-NN algorithm for document-level classification tasks.
\end{enumerate}
Figure \ref{fig_overview} illustrates our framework when applied to a hypothetical dataset of product reviews with a diverse set of metadata artifacts. The raw text of each review is paired with other metadata artifacts available, such as user profile information, product identifiers, and location information. All the artifacts are fed into a deep learning model with the self-learning mechanism adjusted to match the type of artifact presented. For example, for the raw text of the reviews, instead of using a standard Masked Language Model (MLM) objective \cite{devlin2018bert}, the model learns by predicting the latent topic distribution of the text based on a pre-trained generative model such as LDA \cite{blei2003latent}. For categorical metadata such as product identifiers and location, the model predicts the specific metadata tag. Certain metadata can also be directly encoded, bypassing the self-learning task. The resulting representations are semantically rich, and can be plugged into a simple K-NN model for various label-prediction tasks, bypassing the need for complicated, task-specific classification models.
We demonstrate how the representations created by our framework exhibit compositional characteristics that can be useful to granular classification tasks. While small and scalable to many different settings, our framework improves the performance of transformer-based language models on classification tasks on a variety of datasets. Our experiments show that regardless of the underlying neural architecture, performance is enhanced by a robust minimum of 5\% over a conventional fine-tuned model that uses the special CLS embedding for document classification. We also explore the robustness of the framework through a series of ablation studies. The remaining sections of this paper lay out our methodology, describe our datasets, and present experimental results.
\section{Related Work}\label{sec_relwork}
Semi-supervised learning in neural language models has largely focused on ``pre-train and fine-tune'' pipelines \cite{howard2018}, which take advantage of large unlabeled datasets, paired with small labeled datasets. Having produced rich representations during the pre-training phase, the model leverages the labeled dataset to calibrate its parameters in an inductive fashion against a new task \cite{devlin2018bert}. To keep the models simple and flexible, the fine-tuning process needs to be parameter-efficient, fast, and amenable to ``plug-and-play'' applications. This can prove difficult when the distributions of labeled and unlabeled datasets diverge \cite{aharoni2020} or when the labeled dataset is prohibitively small \cite{brown2020gpt3}. A large body of research has thus focused on few-sample learning, attempting to make the pre-training process more robust against such issues \cite{sharaf2020metalearning}. However, performance on NLP tasks remains far from human baselines \cite{wang2020few}. This is partly due to the fact that the inductive fine-tuning process fails to directly take advantage of universal distributional characteristics of the dataset beyond what is encoded in the pre-trained representations. Since the pre-trained representations are themselves stochastically generated, they are not guaranteed to seamlessly encode universal statistics. As a result, in many applied studies, the representations are paired with other distributional signals such as topic models and tf-idf vectors \cite{lim2020uob}.
In response, a growing body of literature in recent years has focused on transductive learning approaches, especially in the computer vision domain \cite{lu2020few}. This paradigm allows the model to actively take advantage of sample distributions during inference. In natural language processing, this paradigm has been used to improve performance on tasks such as cross-domain text classification \cite{ionescu2018transductive} and neural machine translation \cite{poncelas2019transductive}. Transductive Support Vector Machines \cite{joachims1999transductive} have also been applied to granular classification tasks \cite{selvaraj2012extension}. These studies take advantage of a robust labeled dataset to scale to unseen datasets or domains. However, they do not address cases where the original dataset lacks enough labeled examples. Similarly, multi-task learning studies have addressed cases where the model can be robustly trained for one task such as entity extraction, and scale to other tasks such as co-reference resolution \cite{sanh2018hierarchical}.
In this study, we propose a transductive framework that can take advantage of a limited labeled dataset paired with a larger unlabeled dataset to generate rich representations for document classification tasks. The framework brings together useful characteristics of the above-mentioned approaches in a unique pipeline that leverages latent topic models and encodings of useful metadata via a multi-task loss.
\section{Model}\label{sec_model}
In traditional text classification tasks, the input consists of labeled pairs of training examples. The text is fine-tuned on a pre-trained model such as \cite{devlin2018bert} using a classification-specific loss function based on these training labels. Our problem differs from the traditional setting in three aspects: (a) there are no labels available during training, (b) the model must be shared across different types of classification tasks, and (c) the input contains metadata in addition to plain text.
We first introduce the encoder architecture that is used to obtain an input representation that captures both the text and metadata information in a task-agnostic manner. Then we present the decoder structure that employs self-supervision to define the training objective. Finally, we discuss how the learned representation is directly used in downstream classification tasks. Figure~\ref{fig_model1} provides an overview of our model.
\subsection{Representation Learning}\label{subsec_rep}
Given $N$ training examples $X=\{x^1,...,x^N\}$, let $x=(\tau,m_1,...,m_P)$ be an input that contains text $\tau$ accompanied by ${P}$ different metadata artifacts ${m}$. The text consists of $T$ tokens $(\tau_1,...,\tau_T)$ from a fixed vocabulary and each metadata artifact $m_p$ is a sequence $(m_{p1},...,m_{pl},...m_{pL})$ of fixed length $L$. Sequences shorter than $T$ or $L$ are simply padded. Let $m_{pl} \in \Omega^p$, where $\Omega^p$ is the discrete set of possible values for the $p^{th}$ metadata.
The text input is converted into an interim embedding representation $\phi$ using a function
\begin{align}\label{eq_1}
\textbf{f} : \tau \rightarrow \phi,\qquad \phi \in \mathbb{R}^{D_t}
\end{align}
where $D_t$ is the text embedding size. The function $\textbf{f}$ is a Transformer \cite{vaswani2017attention} model that employs a large number of layers with a self-attention mechanism to capture dependencies between arbitrary positions in the text in an efficient manner. The model is initialized with pre-trained parameters, thereby incorporating prior knowledge gained from training on large text corpora. The input tokens are augmented with a special token $[CLS]$ that represents the aggregate information of the entire sequence. The output corresponding to this token at the last layer is used as $\phi$.
There are $P$ independent non-linear functions to convert each metadata input into an interim embedding representation $\psi$:
\begin{equation}\label{eq_2}
\textbf{g}_p : m_{pl} \rightarrow \psi_{pl},\qquad \psi_{pl} \in \mathbb{R}^{D_p},\quad \forall p=1...P,l=1...L
\end{equation}
where $D_p$ is the metadata embedding size. We make use of a feed-forward network with multiple layers as the conversion function $\textbf{g}$, with each layer comprising a linear transformation followed by a non-linear activation. If a metadata value cannot be meaningfully interpreted as text (e.g. a product code or user id), we use a one-hot encoding of the metadata values as input. Otherwise, the input is set to a 300-dimensional vector derived by averaging the GloVe \cite{pennington-etal-2014-glove} vectors corresponding to the words in the metadata text. The embedding for a metadata sequence is aggregated using a mean function that masks out padded positions with indicators $\gamma \in \{0,1\}$ as
\begin{equation}
\psi_p = \frac{1}{L}\sum_l\gamma_{pl}{\psi_{pl}}.
\end{equation}
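As a concrete sketch, the masked mean above can be computed as follows (the dimensions here are illustrative, not the values used in our experiments):

```python
import numpy as np

# Illustrative sizes: L = 4 slots in the metadata sequence, D_p = 3.
L, D_p = 4, 3
psi = np.array([[1.0, 2.0, 3.0],
                [4.0, 5.0, 6.0],
                [0.0, 0.0, 0.0],   # padded slot
                [0.0, 0.0, 0.0]])  # padded slot
gamma = np.array([1, 1, 0, 0])     # mask: 1 = real value, 0 = padding

# psi_p = (1/L) * sum_l gamma_l * psi_l, exactly as in the equation above
psi_p = (gamma[:, None] * psi).sum(axis=0) / L
print(psi_p)  # [1.25 1.75 2.25]
```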
The final embedding representation for an input $x$ is obtained by first concatenating the text and metadata embeddings and then projecting them to a lower-dimensional space using a linear transformation as follows:
\begin{equation}\label{eq_4}
z = W_z^\intercal(\phi \oplus \psi_1 \oplus ... \oplus \psi_P),\qquad z \in \mathbb{R}^{D_e}
\end{equation}
where $z$ is the final input embedding of size $D_e$ and $W_z \in \ \mathbb{R}^{(D_t+\sum_p D_p)\times D_e}$ is a parameter matrix.
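In code, the concatenation and projection in \eqref{eq_4} amount to a single matrix multiplication. The sketch below uses small illustrative dimensions and random weights in place of learned parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes; the experiments use D_t = 768, D_p = 50, D_e = 500.
D_t, D_p, D_e, P = 8, 3, 4, 2

phi = rng.standard_normal(D_t)                       # text embedding from the transformer
psis = [rng.standard_normal(D_p) for _ in range(P)]  # one embedding per metadata artifact

# z = W_z^T (phi (+) psi_1 (+) ... (+) psi_P)
concat = np.concatenate([phi] + psis)                # shape (D_t + P*D_p,)
W_z = rng.standard_normal((D_t + P * D_p, D_e))
z = W_z.T @ concat                                   # final input embedding, shape (D_e,)
print(z.shape)  # (4,)
```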
\begin{figure*}[htbp]
\centering
\includegraphics[height=7cm]{model.png}
\caption{Model Architecture. Embeddings learned independently for different input types are combined and then projected to a lower dimensional space. The model is trained using a multi-task objective function.}
\label{fig_model1}
\end{figure*}
\subsection {Self-supervised Loss}
In the absence of external supervisory information, self-supervision has emerged as a promising solution. The key idea here is to generate synthetic labels automatically from the data and use these labels to construct loss functions. For text input, typically, the identities of some words in the text are masked and the model is trained to recover the original input.
As an alternative, we propose a cross-model fusion approach. A topic model for the text corpus is learnt in an unsupervised manner and the inferred topic distribution is used as synthetic labels. By discovering latent semantics embedded in the text, topic models introduce a partition of the input space. Importantly, the mixed membership of topics for an input enables a soft partition rather than a hard one. Such a fuzzy clustering approach is preferable since it handles overlapping boundaries and complex structures better. This partition-based training objective promotes separation of input data points into groups of similar points. Thus these points can now be classified by simply examining their neighborhood.
Formally, the topic distribution $\varphi$ corresponding to an input text is obtained using a function
\begin{align}
\textbf{h} : \tau \rightarrow \varphi,\qquad \varphi \in \mathbb{R}^{K}
\end{align}
where $K$ is the number of topics and $\textbf{h}$ is a topic modeling function based on LDA \cite{blei2003latent}. The input embeddings obtained using \eqref{eq_4} are first linearly projected into the topic space as:
\begin{align}\label{eq_6}
\lambda^n = W_t^\intercal z^n,\qquad \lambda^n \in \mathbb{R}^K.
\end{align}
Here $\lambda^n$ is the projected distribution in topic space for the $n^{th}$ training example and $W_t \in \mathbb{R}^{D_e \times K}$ is a parameter matrix. The Kullback-Leibler (KL) divergence between the pre-learned topic distribution and the input projection is minimized during training. This translates into the following loss function for text inputs:
\begin{align}\label{eq_L1}
\mathcal{L}^{text} = \sum_n \sum_k \varphi^n_k \log \frac{\varphi^n_k}{\lambda^n_k}.
\end{align}
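A per-document sketch of this loss is below. Note that the projection in \eqref{eq_6} is linear, so we assume here that a softmax normalizes it into a valid distribution before the divergence is computed; all numeric values are illustrative:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

# Pre-learned LDA topic distribution varphi for one document (K = 4 topics).
varphi = np.array([0.6, 0.2, 0.1, 0.1])

# Projected logits W_t^T z for the same document (illustrative values),
# normalized with a softmax (our assumption) to give a distribution lambda.
lam = softmax(np.array([2.0, 0.5, -1.0, -1.0]))

# Per-document term of L^text: KL(varphi || lambda).
kl = float(np.sum(varphi * np.log(varphi / lam)))
print(kl >= 0.0)  # True: KL divergence is non-negative
```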
For the metadata, we employ a loss function that aims to minimize the reconstruction error between the input metadata value and the value decoded from the final embedding representation. Given that each metadata artifact is a sequence of values from a discrete set, a multi-label binary cross-entropy loss is an appropriate choice.
Let there be $V^p$ possible values for the $p^{th}$ metadata i.e. $|\Omega^p|=V^p$ and let $y_p \in \{0,1\}^{V^p}$ denote the multi-label values consolidated from an input metadata sequence\footnote{For instance, if the dataset is a collection of news articles, then the $p^{th}$ metadata artifact might indicate the set of countries related to a given news article. In that case $\Omega^p$ would be the set of all possible countries, $V^p$ would be the number of possible countries, and each $y_p$ would be a vector of size $V^p$ where indices corresponding to relevant countries are set to 1.}. A linear transformation decoder layer first converts the input embeddings into the metadata space as
\begin{align}
\zeta^n_p = W_p^\intercal z^n,\qquad \zeta^n_p \in \mathbb{R}^{V^p}
\end{align}
where $\zeta^n_p$ is the projection for the $n^{th}$ input and $W_p \in \mathbb{R}^{D_e \times V^p}$ is a parameter matrix as before. The reconstruction loss function is formulated as:
\begin{align}\label{eq_L2}
\mathcal{L}^{meta}_{p} = \sum_n \sum_v -y^n_{p,v}\log\sigma(\zeta^n_{p,v})-(1-y^n_{p,v})\log(1-\sigma(\zeta^n_{p,v}))
\end{align}
where $\sigma$ is the standard sigmoid function.
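For a single input and one metadata artifact, the reconstruction loss can be sketched as follows (target and logit values are illustrative):

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# Multi-hot target y_p for one input: V^p = 5 possible metadata values,
# of which values 0 and 3 are present in this input's sequence.
y_p = np.array([1.0, 0.0, 0.0, 1.0, 0.0])

# Decoder logits zeta = W_p^T z for the same input (illustrative values).
zeta = np.array([3.0, -2.0, -1.5, 2.5, -3.0])

s = sigmoid(zeta)
loss = float(np.sum(-y_p * np.log(s) - (1.0 - y_p) * np.log(1.0 - s)))
print(round(loss, 4))  # 0.5044
```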
Using the text and metadata losses in \eqref{eq_L1} and \eqref{eq_L2}, the training objective is framed as
\begin{align}\label{eq_L10}
\min_{\substack{\theta}}\;\omega^{text} \mathcal{L}^{text} + \sum_p \omega^{meta}_{p}\mathcal{L}^{meta}_{p}
\end{align}
where $\theta$ is the set of all model parameters and $\omega$ is a real-valued hyper-parameter that controls the relative importance between the text and various metadata.
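The weighted combination itself is straightforward; the weights and loss values below are illustrative placeholders, not tuned quantities:

```python
# Multi-task objective: weighted sum of the text loss and per-metadata losses.
omega_text = 1.0
omega_meta = [0.5, 0.5]   # one weight per metadata artifact (illustrative)
L_text = 0.37             # placeholder loss values for illustration
L_meta = [0.21, 0.64]

total = omega_text * L_text + sum(w * l for w, l in zip(omega_meta, L_meta))
print(round(total, 3))  # 0.795
```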
\subsection {Unseen Classification Tasks}
The input representations obtained using the above model have several desirable characteristics:
\begin{itemize}
\item the embeddings are in a compact form because of the projection into a lower-dimensional space,
\item the salient information in the inputs is preserved by the reconstruction loss, and
\item similar points are grouped together with the use of soft-partition labels.
\end{itemize}
Hence these embeddings can be used as-is in a non-parametric setting for downstream classification tasks. This approach is particularly attractive in situations where it is expensive to train new classification models or it may not be possible to perform training due to the scarcity of labeled examples.
Given a small number of labeled examples, we use the nearest-neighbor technique to compute the classification label of a query point. Specifically, the embeddings of the query point and the labeled points are obtained as outlined in Sec. \ref{subsec_rep} and the Euclidean distance between them is used to identify the nearest neighbors. The query point's class label is computed using a mode function on the nearest-neighbor labels.
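This non-parametric classifier can be sketched as below, assuming embeddings have already been inferred; the helper name and toy data are ours:

```python
import numpy as np

def knn_classify(query_z, exemplar_z, exemplar_labels, k=10):
    """Majority vote over the k nearest exemplar embeddings
    (Euclidean distance, all neighbors weighted equally)."""
    dists = np.linalg.norm(exemplar_z - query_z, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = [exemplar_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)   # mode of the neighbor labels

# Toy exemplars: two well-separated clusters standing in for learned embeddings.
rng = np.random.default_rng(1)
exemplars = np.vstack([rng.normal(0.0, 0.1, size=(20, 5)),
                       rng.normal(5.0, 0.1, size=(20, 5))])
labels = ["A"] * 20 + ["B"] * 20

print(knn_classify(np.full(5, 5.0), exemplars, labels, k=10))  # B
```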
\section{Experiments}\label{sec_eval}
Experiments are conducted on three real-world datasets that vary in text style and nature of side information. We demonstrate below using these datasets that the proposed solution outperforms standard self-supervised training objectives for a variety of benchmark transformer models.
\begin{table*}[htbp]
\caption{Data samples}
\label{tab_dataset}
\centering
\begin{tabular}{|l|l|r|}
\hline
\multirow{4}{*}{Github-AI} & \multicolumn{1}{l|}{Description} & \multicolumn{1}{l|}{ Handbag GAN. In this project I will implement a DCGAN to see if I can generate handbag designs...} \\\cline{2-3}
& \multicolumn{1}{l|}{Repo Name} & \multicolumn{1}{l|}{GAN experiments} \\\cline{2-3}
& \multicolumn{1}{l|}{Tags} & \multicolumn{1}{l|}{gan,dcgan,deep-learning,google-cloud} \\\cline{2-3}
& \multicolumn{1}{l|}{Labels} & \multicolumn{1}{l|}{Image Generation (granular), Computer Vision (coarse)} \\\hline
\multirow{3}{*}{Amazon} & \multicolumn{1}{l|}{Review} & \multicolumn{1}{l|}{Best little ice cream maker works well, not too noisy, easy to clean. Recommend buying extra...} \\\cline{2-3}
& \multicolumn{1}{l|}{Product} & \multicolumn{1}{l|}{B00000JGRT} \\\cline{2-3}
& \multicolumn{1}{l|}{Label} & \multicolumn{1}{l|}{Home\_and\_Kitchen} \\\hline
\multirow{3}{*}{Twitter} & \multicolumn{1}{l|}{Tweet} & \multicolumn{1}{l|}{greek yogurt fresh fruit honey granola healthy living bakery.} \\\cline{2-3}
& \multicolumn{1}{l|}{Hashtags} & \multicolumn{1}{l|}{\#healthyliving, \#bakeri} \\\cline{2-3}
& \multicolumn{1}{l|}{Label} & \multicolumn{1}{l|}{Food} \\\hline
\end{tabular}
\end{table*}
\subsection{Datasets}\label{subsec_dset1}
The datasets correspond to three different domains and are publicly available. Table \ref{tab_dataset} shows a sample from each dataset.
\textbf{Github-AI \cite{zhang2019higitclass}:} This dataset contains a list of source code repositories that implement algorithms for various machine learning tasks. We use the description summary of a repository as text data. The repository name and the tagged keywords constitute metadata. Note that there can be multiple tags for a repository, and we use up to 5 tags as part of the metadata sequence. If there are fewer than 5 tags (or if the tags are absent), then the sequence is padded. The dataset also contains two different label fields for each repository that relate to the machine learning task hierarchy. The first set of labels denotes the task domain in a coarse manner (e.g. \emph{Computer Vision}) while the second set defines a granular domain (e.g. \emph{Object Detection, Semantic Segmentation}, etc.). These labels are employed to evaluate the learned input representation for classification.
\textbf{Amazon \cite{mcauley2013hidden}:} Review data from the online retailer Amazon is collected in this dataset. There are over 35 million reviews, and we sample 10,000 reviews following a procedure similar to \cite{zhang2020minimally}. The comments entered by a reviewer are used as the text data while the product identifier is used as side information. Since there is a single unique product identifier for each review, the metadata sequence degenerates to a scalar in this case. The product category (e.g. \emph{books}, \emph{sports}, etc.) is used as the classification label.
\textbf{Twitter \cite{zhang2017react}:} This social media dataset contains tweets collected during a three-month period in 2014. The hashtags corresponding to a tweet (e.g. \emph{\#delicious, \#gym,} etc.) are used as side information. These tags do not conform to normal English-style text, often containing complex phrases and unstructured expressions. Hence we tokenize and segment \cite{baziotis2017} these tags so that a tag such as \emph{\#currentsituation} or \emph{\#makemyday} is meaningfully interpreted as \emph{current situation} or \emph{make my day}. The tweet itself is used as the text data. Each tweet is associated with a category (e.g. \emph{food}, \emph{nightlife}, etc.) which we use as its classification label.
\subsection{Baselines for Comparison}\label{subsec_baseline}
For comparison purposes, we use five state-of-the-art neural language models, namely BERT~\cite{devlin2018bert}, DistilBERT~\cite{sanh2019distilbert}, XLNet~\cite{yang2019xlnet}, RoBERTa~\cite{liu2019roberta} and Electra~\cite{clark2020electra}. All these models are based on the Transformer~\cite{vaswani2017attention} architecture; however, they vary in training procedure. BERT masks the identities of some input tokens and aims to reconstruct the original tokens, while XLNet uses a generalized auto-regressive setup. Electra, on the other hand, replaces certain input tokens with plausible alternatives sampled from a generator network. RoBERTa and DistilBERT optimize the BERT training procedure with an improved choice of hyper-parameters and a reduced parameter size, respectively.
There are three different evaluation setups corresponding to these models:
\begin{itemize}
\item \textbf{No Finetuning:} The side information such as the repository name or hashtag is appended to the text data to create a single text block. This text block is then used as input to the model for inference, and the $[CLS]$ token's embedding from the last layer is used as the input representation.
\item \textbf{LM Finetuning:} As in the previous \emph{No Finetuning} setup, a single text block is created from the multi-part input. However, the language model is finetuned using this text before performing inference. This allows the model to adapt its weights based on the nature of the data. Finally, the input representation is inferred from the $[CLS]$ token of this finetuned model.
\item \textbf{Our Approach:} This setup reflects the architecture described in Section \ref{sec_model}. Text and side information are treated independently, with each input part being encoded by its corresponding embedding learner. Unlike the auto-encoding or auto-regressive training objective used in the \emph{LM Finetuning} setup, the transformer weights are updated by the custom self-supervised loss in \eqref{eq_L10}.
\end{itemize}
\subsection{Settings}\label{subsec_settings}
We employ the base configuration of a transformer model, which typically has $12$ layers and $768$ hidden neurons per token. This allows the models to be trained using a single GPU instance (Tesla V100-SXM2-16GB).
The models are implemented using the PyTorch version of the Transformers library \cite{Wolf2019HuggingFacesTS}. We use the Adam optimizer with an initial learning rate of $5e-5$ and an epsilon of $1e-8$. Other parameters include a dropout probability of $0.1$, a sequence length of $512$ and a batch size of $8$. The models were trained for $3$ epochs on the GitHub-AI dataset and for a single epoch on the Amazon and Twitter datasets to avoid overfitting. All hyper-parameters were chosen after a careful grid search.
For the metadata embedding learner, $tanh$ is used as the activation function. One-hot encoded input vectors are used for the tags in the GitHub-AI dataset and the product identifier in the Amazon dataset, while the repository name in the GitHub-AI dataset and the hashtags in the Twitter dataset are initialized with GloVe vectors as outlined in Section \ref{subsec_rep}. The metadata embedding size $D_p$ is set to $50$ while the final input embedding size $D_e$ is set to $500$.
When using nearest neighbor classification, only 10\% of the data is used as exemplars and the rest of the data is used for evaluation. The number of neighbors is set to $10$, and all points in the neighborhood are weighted equally.
\subsection{Results}\label{subsec_results}
The classification results for the GitHub-AI dataset are shown in Tables \ref{tab_ghubaisub} and \ref{tab_ghubaisuper}. The former contains F1 scores for $14$ granular class labels while the latter is for $3$ coarse class labels. The granular classification task is particularly challenging due to the large number of classes and the small sample size. However, this reflects a typical real-world few-shot learning scenario. We see that there is no significant difference in performance between the \emph{No Finetuning} and \emph{LM Finetuning} setups. This dataset is very small with only $1600$ examples and hence the pre-trained weights dominate even after fine-tuning. In contrast, we see that our approach of using a loss function based on topic distribution significantly improves the classifier performance. DistilBERT, with its compact parameter size, has the best F1 score of $50.0\%$ for the granular labels task and $85.2\%$ for the coarse labels task.
\begin{table}[htbp]
\caption{Github-AI dataset granular class labels - F1 scores}
\label{tab_ghubaisub}
\centering
\begin{tabular}{lrrr}
\toprule
\textbf{Transformer} & \textbf{No} & \textbf{LM} & \textbf{Our}\\
\textbf{} & \textbf{Finetuning} & \textbf{Finetuning} & \textbf{Approach}\\
\midrule
BERT~\cite{devlin2018bert} & 22.8 & 27.2 & 45.0\\
DistilBERT~\cite{sanh2019distilbert} & 26.4 & 28.4 & 50.0\\
XLNet~\cite{yang2019xlnet} & 21.6 & 19.2 & 46.5\\
RoBERTa~\cite{liu2019roberta} & 21.3 & 19.9 & 34.4\\
Electra~\cite{clark2020electra} & 21.3 & 20.8 & 29.5\\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[htbp]
\caption{Github-AI dataset coarse class labels - F1 scores}
\label{tab_ghubaisuper}
\centering
\begin{tabular}{lrrr}
\toprule
\textbf{Transformer} & \textbf{No} & \textbf{LM} & \textbf{Our}\\
\textbf{} & \textbf{Finetuning} & \textbf{Finetuning} & \textbf{Approach}\\
\midrule
BERT~\cite{devlin2018bert} & 68.9 & 66.7 & 80.8\\
DistilBERT~\cite{sanh2019distilbert} & 68.7 & 70.6 & 85.2\\
XLNet~\cite{yang2019xlnet} & 67.1 & 66.5 & 85.2\\
RoBERTa~\cite{liu2019roberta} & 67.2 & 66.7 & 69.9\\
Electra~\cite{clark2020electra} & 66.8 & 66.6 & 69.1\\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[htbp]
\caption{Amazon dataset - F1 scores}
\label{tab_amazon1}
\centering
\begin{tabular}{lrrr}
\toprule
\textbf{Transformer} & \textbf{No} & \textbf{LM} & \textbf{Our}\\
\textbf{} & \textbf{Finetuning} & \textbf{Finetuning} & \textbf{Approach}\\
\midrule
BERT~\cite{devlin2018bert} & 32.0 & 78.1 & 86.5\\
DistilBERT~\cite{sanh2019distilbert} & 54.3 & 84.6 & 88.6\\
XLNet~\cite{yang2019xlnet} & 19.0 & 28.1 & 88.8\\
RoBERTa~\cite{liu2019roberta} & 20.8 & 73.3 & 88.2\\
Electra~\cite{clark2020electra} & 22.0 & 37.9 & 86.4\\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[htbp]
\caption{Twitter dataset - F1 scores}
\label{tab_twitter1}
\centering
\begin{tabular}{lrrr}
\toprule
\textbf{Transformer} & \textbf{No} & \textbf{LM} & \textbf{Our}\\
\textbf{} & \textbf{Finetuning} & \textbf{Finetuning} & \textbf{Approach}\\
\midrule
BERT~\cite{devlin2018bert} & 22.2 & 40.1 & 46.0\\
DistilBERT~\cite{sanh2019distilbert} & 32.4 & 44.3 & 50.7\\
XLNet~\cite{yang2019xlnet} & 18.0 & 26.9 & 40.9\\
RoBERTa~\cite{liu2019roberta} & 15.0 & 29.3 & 22.7\\
Electra~\cite{clark2020electra} & 21.0 & 15.3 & 42.9\\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[!htbp]
\centering
\includegraphics[width=8cm]{cm_twitter.png}
\caption{Confusion Matrix for Twitter dataset.}
\label{fig_cmtwitter}
\end{figure}
\begin{figure*}[!htbp]
\centering
\includegraphics[height=10cm]{embeds_amazon_2plot.png}
\caption{Embedding Visualization in 2-D. \emph{top}: Embeddings produced using standard language model training objective. \emph{bottom}: Embeddings produced with loss function based on topic distribution. There is a perceptible grouping of points from the same class in the bottom picture.}
\label{fig_embeds}
\end{figure*}
\begin{figure*}[!htbp]
\centering
\includegraphics[width=\textwidth]{cluster_qual.png}
\caption{Semantic composition in learned embeddings. \emph{left}: Discussions around \emph{lock} topic occur in the same cluster. \emph{right}: Semantically similar labels appear close together in the embedding space.}
\label{fig_clus}
\end{figure*}
\begin{figure}[!htbp]
\centering
\includegraphics[width=8cm]{topic_ghubai.png}
\caption{Topic size effect on classifier performance.}
\label{fig_topic}
\end{figure}
\begin{figure}[!htbp]
\centering
\includegraphics[width=8cm]{tsize_amazon.png}
\caption{Impact of training set size on classifier performance.}
\label{fig_tsize}
\end{figure}
Table \ref{tab_amazon1} contains the results for the Amazon dataset, which has $10$ class labels. We see here that finetuning the language model significantly improves the classifier performance in general. This is unsurprising because, unlike the GitHub-AI dataset, there are more examples to update the pre-trained weights. A noteworthy aspect is that our proposed training objective outperforms all the other baselines. The difference is less pronounced for models such as DistilBERT and BERT. The XLNet model performs the best with an F1 score of $88.8\%$.
The results for the Twitter dataset are presented in Table \ref{tab_twitter1} and the confusion matrix for the class labels is shown in Figure \ref{fig_cmtwitter}. The combination of the informal language style of tweets, a small sample size and a large number of labels results in low F1 scores across all methods. Still, the DistilBERT model using our solution performs the best with an F1 score of $50.7\%$.
\subsection{Discussion}
It is important to understand why the proposed self-supervision based on topic modeling performs better than standard masked language modeling for the above classification tasks. Our approach aligns the input embeddings to reflect topic distributions, thereby partitioning the input space in a soft manner. This induces a clustering of input points, with points that share similar characteristics appearing together. The embeddings generated by standard language models do not necessarily have this clustering property.
Figure \ref{fig_embeds} makes this partitioning effect evident. It plots in 2-D the inferred input embeddings for the Amazon dataset. The top section of this figure contains the embeddings from a finetuned language model, and there are no discernible clusters here. However, in the bottom section, which corresponds to the same model trained using our objective function, we can clearly see patterns of points appearing together. This natural grouping of the inputs makes identification of class labels based on neighborhood search very effective.
The learned embeddings also capture topic characteristics, with input points corresponding to the same latent topic placed together. To validate this, we over-cluster the embeddings using KMeans and qualitatively examine the cluster contents. The left side of Figure \ref{fig_clus} contains one such cluster, where the discussions around \emph{locks} in Amazon reviews are consolidated into the same cluster. Furthermore, semantically similar labels appear close to each other in the embedding space. We measure the distances between the cluster centroids and observe that the cluster closest to an \emph{Apps\_for\_Android} cluster is the \emph{Video\_Games} cluster. Similarly, a \emph{CDs\_and\_Vinyl} cluster is close to a \emph{Movies\_and\_TV} cluster, as illustrated in the right side of Figure \ref{fig_clus}. This level of compositionality opens the model up for applications in hierarchical classification and clustering.
For topic modeling using LDA~\cite{blei2003latent}, we need to specify the number of topics. This hyper-parameter influences the self-supervised loss function. We note that while it is essential to tune the number of topics, the model is not sensitive to an exact value. The effect of different topic sizes on the classifier performance for the GitHub-AI dataset is plotted in Figure \ref{fig_topic}. Having extremely few topics does affect the model performance. However, the results are stable for a wide range of topic sizes.
Finally, we also study the impact of the training set size on the proposed model. While the setting described here is intended for a sparse-label scenario, we ask how the baseline model performs in the presence of large amounts of training data. Figure \ref{fig_tsize} compares the performance between our model and the \emph{LM Finetuning} setup for the Amazon dataset. We see that our model performs significantly better when less training data is available. However, the F1 score of the finetuned model converges to or surpasses ours once the training size increases beyond a threshold. We hypothesize that with a large training set, language semantics are understood better and the neighborhood search becomes easier.
\section{Conclusion}\label{sec_concl}
In this paper, we presented a flexible framework that combines latent topic information and metadata encodings with transformer-based models to learn semantically rich document representations that can be used for classification tasks in a transductive fashion. We demonstrate the flexibility of our framework by applying it to three datasets with diverse characteristics, various sizes, and different types of metadata. We show 4\%+ improvement over out-of-the-box pre-trained embeddings as well as conventional fine-tuning. We also qualitatively illustrate the semantic compositionality of the resulting embeddings. Our framework is especially effective when training data is smaller, when the classification task has a larger number of labels, or when metadata tags provide useful semantic signal that would otherwise be missed. In future work, we hope to explore the effectiveness of our framework in unsupervised hierarchical clustering.
\begin{acks}
This paper was prepared for information purposes by the Artificial Intelligence Research group of JPMorgan Chase \& Co and its affiliates (``JP Morgan''), and is not a product of the Research Department of JP Morgan. JP Morgan makes no representation and warranty whatsoever and disclaims all liability, for the completeness, accuracy or reliability of the information contained herein. This document is not intended as investment research or investment advice, or a recommendation, offer or solicitation for the purchase or sale of any security, financial instrument, financial product or service, or to be used in any way for evaluating the merits of participating in any transaction, and shall not constitute a solicitation under any jurisdiction or to any person, if such solicitation under such jurisdiction or to such person would be unlawful. \copyright 2020 JPMorgan Chase \& Co. All rights reserved
\end{acks}
\bibliographystyle{ACM-Reference-Format}
|
\section{Introduction}
Many problems in applied mathematics can be formulated and solved with the aid of matrix
functions. This includes the solution of linear discrete ill-posed problems \cite{CR},
the solution of time-dependent partial differential equations \cite{DKZ}, and the
determination of the most important node(s) of a network that is represented
by a graph and its adjacency matrix \cite{EH,FMRR}.
Usually, all entries of the adjacency matrix are
assumed to be known. This paper is concerned with the situation when only some columns, and/or
rows, of the matrix are available. This situation arises, for instance, when one
samples columns, and possibly rows, of a large matrix. We will consider applications in
network analysis,
where column and/or row sampling arises naturally in the process of collecting network
data by accessing one node at a time and finding all the other nodes it is connected to.
This is particularly important when
it is too expensive or impractical to collect a full census of all the connections.
A network is represented by a graph $G=\{V,E\}$, which consists of a set
$V=\{v_j\}_{j=1}^n$ of
\emph{vertices} or \emph{nodes}, and a set $E=\{e_k\}_{k=1}^m$ of \emph{edges},
the latter being the links between the vertices. Edges may be
directed, in which case they emerge from a node and end at a node, or undirected.
Undirected edges are ``two-way streets'' between nodes. For notational convenience and
ease of discussion, we consider simple (directed or undirected) unweighted graphs $G$
without self-loops. Then the
adjacency matrix $A=[a_{ij}]_{i,j=1}^n\in{{\mathbb R}}^{n\times n}$ associated with the graph $G$
has the entry $a_{ij}=1$ if there is a directed edge emerging from vertex $v_i$ and
ending at vertex $v_j$; if there is an undirected edge between the vertices $v_i$ and
$v_j$, then $a_{ij}=a_{ji}=1$. Other matrix entries vanish. In particular, the diagonal
entries of $A$ vanish. Typically, $1\le m\ll n^2$, which makes the matrix $A$ sparse. A
graph is said to be undirected if all its edges are undirected, otherwise the graph is
directed. The adjacency matrix for an undirected graph is symmetric; for a directed graph
it is nonsymmetric. Examples of networks include:
\begin{itemize}
\item
Flight networks, with airports represented by vertices and flights by directed edges.
\item
Social networking services, such as Facebook and Twitter, with members or accounts
represented by vertices and interactions between any two accounts by edges.
\end{itemize}
Numerous applications of networks are described in \cite{CEHT,Esbook,Nebook}.
We are concerned with the situation when only some of the nodes and edges of a
graph are known. Each node and its connections to other nodes determine one row
and column of the matrix $A$. Specifically, all edges that point to node $v_i$
determine column $i$ of $A$, and all edges that emerge from this node define the
$i^{\rm th}$ row of $A$. We are interested in
studying properties of networks associated with partially known adjacency matrices.
An important task in network analysis is to determine which vertices of an associated
graph are the most important ones by measuring how well-connected they are to other
vertices of the graph. This kind of importance measure often is referred to as a
\emph{centrality measure}. The choice of a suitable centrality measure depends on what
the graph is modeling. All commonly used centrality measures ignore intrinsic properties
of the vertices, and provide information about their importance within the graph just by
using connectivity information.
A simple approach to measure the centrality of a vertex $v_j$ in a directed graph is to
count the number of edges that point to it. This number is known as the \emph{indegree} of
$v_j$. Similarly, the \emph{outdegree} of $v_j$ is the number of edges that emerge from
this vertex. For undirected graphs, the \emph{degree} of a vertex is the number of edges
that ``touch'' it. However, this approach to measure the centrality of a vertex often is
unsatisfactory, because it ignores the importance of the vertices that $v_j$ is connected
to. Here we consider the computation of certain centrality indices quantifying the
``importance'' of a vertex on the basis of the importance of its neighbors, according to
different criteria of propagation of the vertex importance. Such centrality indices are
based on matrix functions of the adjacency matrix of the graph, and are usually called
spectral centrality indices. In particular, we focus on the Katz index and the subgraph
centrality index. Moreover, we also consider eigenvector centrality, that is, the Perron
eigenvector of the adjacency matrix.
To discuss measures determined by matrix functions, we need the notion of a \emph{walk} in
a graph. A walk of length $k$ is a sequence of $k+1$ vertices
$v_{i_1},v_{i_2},\ldots,v_{i_{k+1}}$ and a sequence of $k$ edges
$e_{j_1},e_{j_2},\ldots,e_{j_k}$, such that $e_{j_\ell}$ points from $v_{i_\ell}$ to
$v_{i_{\ell+1}}$ for $\ell=1,2,\ldots,k$. The vertices and edges of a walk do not have to
be distinct.
It is well known that $[A^k]_{ij}$, i.e., the $(ij)^{\rm th}$
entry of $A^k$, yields the number of walks of length $k$ starting at node $v_i$ and ending
at node $v_j$. Thus, a matrix function evaluated at the adjacency matrix $A$, defined by a
power series $\sum_{k=0}^\infty \alpha_k A^k$ with nonnegative coefficients, can be
interpreted as containing weighted
sums of walk counts, with weights depending on the length of the walk. Unless $A$ is
nilpotent (i.e., the graph is directed and contains no cycles), convergence of the power
series requires
that the coefficients $\alpha_k$ converge to zero; this corresponds well with the
intuitively natural requirement that long walks be given less weight than short walks
(which is the case in \eqref{matfun1a} and \eqref{matfun1b} below).
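As a quick illustration of this interpretation, consider a hypothetical three-node directed graph (the graph and the Python sketch below are ours, not part of the computations reported later in the paper):

```python
import numpy as np

# Hypothetical directed graph on 3 nodes with edges v1->v2, v2->v3, v1->v3
A = np.array([[0, 1, 1],
              [0, 0, 1],
              [0, 0, 0]])

# [A^2]_{1,3} counts walks of length 2 from v1 to v3:
# the only such walk is v1 -> v2 -> v3
A2 = A @ A
print(A2[0, 2])  # 1
```

Since this graph is directed and acyclic, $A$ is nilpotent ($A^3=0$), so here every power series in $A$ terminates after finitely many terms.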
Commonly used matrix functions for measuring the centrality of the vertices
of a graph are the exponential function $\exp(\gamma_e A)$ and the resolvent
$(I-\gamma_r A)^{-1}$, where $\gamma_e$ and $\gamma_r$ are positive user-chosen scaling
parameters; see, e.g., \cite{EH}. These functions can be defined by their power series
expansions
\begin{eqnarray}\label{matfun1a}
\exp(\gamma_e A)&=&I+\gamma_e A+\frac{1}{2!}(\gamma_e A)^2+\frac{1}{3!}(\gamma_e A)^3+
\ldots~,\\
\label{matfun1b}
(I-\gamma_r A)^{-1}&=&I+\gamma_r A+(\gamma_r A)^2+(\gamma_r A)^3+\ldots~.
\end{eqnarray}
For the resolvent, the parameter $\gamma_r$ has to be chosen small enough so that the
power series converges, which is the case when $\gamma_r$ is strictly smaller than
$1/\rho(A)$, where $\rho(A)$ denotes the spectral radius of $A$.
Matrix functions $f(A)$, such as \eqref{matfun1a} and \eqref{matfun1b}, define several
commonly used centrality measures: If $f(A)=\exp(A)$, then $[f(A)\mathbf{1}]_i$ is called the
\emph{total subgraph communicability} of node $v_i$, while the diagonal matrix entry
$[f(A)]_{ii}$ is the \emph{subgraph centrality} of node $v_i$; see, e.g., \cite{BK,EH}.
Moreover, if $f(A)=(I-\gamma_r A)^{-1}$, then $[f(A)\mathbf{1}]_i$ gives the \emph{Katz index}
of node $v_i$; see, e.g., \cite[Chap. 7]{Nebook}.
It may be beneficial to complement the centrality measures above by the measures
$[f(A^T)]_{ii}$ and $[f(A^T)\mathbf{1}]_i$, $i=1,2,\ldots,n$, when the graph $G$ that defines
$A$ is directed. Here and below the superscript $^T$ denotes transposition; see, e.g.,
\cite{BK,DLCMR,EH,Esbook} for discussions on centrality measures defined by functions of
the adjacency matrix.
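The following sketch illustrates these measures on a hypothetical four-node undirected graph; the graph and the parameter choice are ours, with $\gamma_r$ taken well inside the convergence radius $1/\rho(A)$:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 4-node undirected graph: a triangle v1-v2-v3 plus edge v3-v4
A = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 0.],
              [1., 1., 0., 1.],
              [0., 0., 1., 0.]])

rho = np.abs(np.linalg.eigvalsh(A)).max()   # spectral radius
gamma_r = 0.5 / rho                         # safely below 1/rho(A)
ones = np.ones(4)

E = expm(A)                                  # exp(A)
R = np.linalg.inv(np.eye(4) - gamma_r * A)   # resolvent (I - gamma_r A)^{-1}

subgraph_centrality = np.diag(E)      # [exp(A)]_{ii}
total_communicability = E @ ones      # exp(A) 1
katz_index = R @ ones                 # (I - gamma_r A)^{-1} 1

# v3 (index 2) has the highest degree and should come out on top
print(np.argmax(subgraph_centrality), np.argmax(katz_index))
```

All three measures agree on this small example; they can differ on larger graphs, since they weight long walks differently.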
We are interested in computing useful approximations of the largest diagonal entries of
$f(A)$, or the largest entry of $f(A)\mathbf{1}$ or $f(A^T)\mathbf{1}$, when only $1\le k\ll n$ of
the columns and/or rows of $A$ are known. The need to compute such approximations arises when
the entire graph $G$ is not completely known, but only a small subset of the columns or
rows of the adjacency matrix $A$ of $G$ are available. This happens, e.g., when not all
nodes and edges of a graph are known, a situation that is common for large,
complex, real-life networks. The
situation we will consider is when the columns and rows of the adjacency matrix are not
explicitly known, but can be sampled. It is then of considerable interest to investigate
how the sampling should be carried out, as simple random sampling of columns and possibly rows
of a large adjacency matrix does not give the best results. We will describe a sampling
method in Section~\ref{sec2}. A further reason for our interest in computing
approximations of functions of a
large matrix $A$, that only use a few of the columns and/or rows of the matrix, is that
the evaluation of these approximations typically is much cheaper than the evaluation of
functions of $A$.
Another approach to measure centrality is to compute a left or right eigenvector
associated with the eigenvalue of largest magnitude of $A$. In many situations each of
these eigenvectors spans a one-dimensional invariant subspace, has no vanishing
entries, and can be scaled so that all its entries are positive. The so-scaled
eigenvectors are commonly referred to as the left and right Perron vectors for the
adjacency matrix $A$. The left and right Perron vectors are unique up to scaling provided
that the adjacency matrix is irreducible or, equivalently, if the associated graph is
strongly connected. The centrality of a node is given by the relative size of its
associated entry of the (left or right) Perron vector for the adjacency matrix. If the
$j^{\rm th}$ entry of the, say left, Perron vector is the largest, then $v_j$ is the most
important vertex of the graph. This approach to determine node importance is known as
\emph{eigenvector centrality} or \emph{Bonacich centrality}; see, e.g.,
\cite{Bo,Esbook,Nebook} for discussions of this method. We will consider the application
of this method to partially known adjacency matrices.
This paper is organized as follows. Section \ref{sec2} discusses our sampling method for
determining (partial) knowledge of the graph and its associated adjacency matrix. The
evaluation of matrix functions of adjacency matrices that are only partially known is
considered in Section \ref{sec3}, and Section \ref{sec4} describes how an approximation of
the left Perron vector of $A$ can be computed quite inexpensively by using low-rank
approximations determined by sampling. A few computed examples are presented in Section
\ref{sec5}, and concluding remarks can be found in Section \ref{sec6}.
\section{Sampling adjacency matrices}\label{sec2}
Let $\sigma_1\geq\sigma_2\geq\dots\geq\sigma_n\geq0$ be the singular values of a large
matrix $A\in{\mathbb R}^{n\times n}$ and let, for some $1\leq k\ll n$, $\mathbf{u}_1,\mathbf{u}_2,\dots,\mathbf{u}_k$
and $\mathbf{v}_1,\mathbf{v}_2,\dots,\mathbf{v}_k$ be left and right singular (unit) vectors associated with
the $k$ largest singular values. Then the truncated singular value decomposition (TSVD)
\begin{equation}\label{svd}
A^{(k)}=\sum_{j=1}^{k} \sigma_j \mathbf{u}_j\mathbf{v}_j^T,
\end{equation}
furnishes a best approximation of $A$ of rank at most $k$ with respect to the spectral and
Frobenius matrix norms; see, e.g., \cite{TB}. However, the computation of the
approximation \eqref{svd} may be expensive when $n$ is large and $k$ is of moderate size.
This limits the applicability of the TSVD-approximant \eqref{svd}. Moreover, the
evaluation of this approximant requires that all entries of $A$ be explicitly known.
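The optimality property of \eqref{svd} is easy to confirm numerically (a small random example, unrelated to the networks considered later):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))   # generic dense test matrix
k = 5

U, s, Vt = np.linalg.svd(A)
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # truncated SVD of rank k

err = np.linalg.norm(A - A_k, 2)   # spectral-norm error
print(err - s[k])  # ~0: the error equals sigma_{k+1}
```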
As mentioned above, we are concerned with the situation when $A$ is an adjacency matrix
for a simple (directed or undirected) unweighted graph without self-loops and that, while
the whole matrix is not known, we can
sample a (relatively small) number of rows and columns.
Then, approximations different from \eqref{svd} have to be used. This
section discusses methods to sample columns and/or rows of $A$. The
low-rank
approximations of $A$ determined in this manner are used in Sections \ref{sec3} and
\ref{sec4} to compute approximations of spectral node centralities.
In the first step, a random non-vanishing column of $A$ is chosen. Let its index
be $j_1$, and denote the chosen column by $\mathbf{c}_1$.
If the columns $\mathbf{c}_1,\dots,\mathbf{c}_k$ have been chosen, corresponding to the indices
$j_1,\dots,j_k$, at the next step we pick an index $j_{k+1}$ according to a probability
distribution on $\{1,\dots,n\}$ proportional to $\mathbf{c}_1+\cdots+\mathbf{c}_k$.
Thus, at the $(k+1)^{\rm st}$ step, the probability of choosing column $i$ as the next
sampled column is proportional to the number of edges in the network from node $v_i$ to nodes
$v_{j_1},\dots,v_{j_k}$. At each step, if a column has
already been picked, or the new column consists entirely of zeros, this choice is discarded
and the procedure is repeated until a new, nonzero column $\mathbf{c}_{k+1}$ is obtained. We
denote by $J$ the set of indices of the chosen columns; using MATLAB notation, the matrix
$A_{(:,J)}$ is made up of the chosen columns of $A$. Another way of describing this sampling
method is that we pick the first vertex at random, and then pick subsequent vertices
randomly using a probability distribution proportional to $\mathbf{c}_1+\cdots+\mathbf{c}_k$.
We remark that this scheme for selecting columns can just as easily be used in the
case when the edges have positive weights (that is, the nonzero entries of $A$ may be
positive numbers other than 1). Also, if a row-sampling scheme is needed, rows of the
adjacency matrix $A$
can be selected similarly by applying the above scheme to the columns of the matrix
$A^T$; in this case we denote by $I$ the set of row indices. The matrix
$A_{(I,:)}\in{\mathbb R}^{k\times n}$ contains the selected rows of $A$. By alternating column
and row sampling, sets of columns and rows can be determined simultaneously.
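The sampling scheme above can be sketched as follows (an illustrative Python implementation; the function and variable names are our own):

```python
import numpy as np

def sample_columns(A, ell, rng=None):
    """Sample ell distinct nonzero columns of A; after the first (uniformly
    random) pick, index i is chosen with probability proportional to the
    i-th entry of c_1 + ... + c_k, the sum of the columns selected so far."""
    rng = np.random.default_rng(rng)
    n = A.shape[1]
    nonzero = [j for j in range(n) if A[:, j].any()]
    J = [int(rng.choice(nonzero))]
    weights = A[:, J[0]].astype(float)
    while len(J) < ell:
        j = int(rng.choice(n, p=weights / weights.sum()))
        if j in J or not A[:, j].any():   # discard repeats and zero columns
            continue
        J.append(j)
        weights += A[:, j]
    return J

# Example on a hypothetical complete graph K4 (every column is nonzero):
A = np.ones((4, 4)) - np.eye(4)
J = sample_columns(A, 3, rng=0)
print(J)  # three distinct column indices
```

Rows are sampled by applying the same routine to $A^T$; alternating the two calls yields the index sets $I$ and $J$ simultaneously.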
The adaptive cross approximation method (ACA) applied to a matrix $A$ also samples rows
and columns to obtain an approximation of the whole matrix. In ACA, one uses the fact that
the rows and columns of $A_{(I,:)}$ and $A_{(:,J)}$ have common entries. These
entries form the matrix $A_{(I,J)}\in{\mathbb R}^{k\times k}$. When the latter matrix is
nonsingular, the cross approximation of $A$ is given by
\begin{equation}\label{cross}
M_k=A_{(:,J)}A_{(I,J)}^{-1}A_{(I,:)};
\end{equation}
see \cite{FVB,GTZ,GTZ2,MRVBV} for details.
Let $\sigma_{k+1}\geq 0$ be the $(k+1)^{\rm st}$ singular value of $A$. Then the matrix
\eqref{svd} satisfies $\|A-A^{(k)}\|_2=\sigma_{k+1}$, where $\|\cdot\|_2$ denotes the
spectral norm. Goreinov et al. \cite{GTZ} show that there is a matrix $M_k^*$ of rank $k$,
determined by cross approximation of $A$, such that
\begin{equation}\label{cvbd}
\|A-M_k^*\|_2={\mathcal O}(\sigma_{k+1} \sqrt{kn}).
\end{equation}
Thus, cross approximation can determine a near-best approximation of $A$ of rank $k$
without computing the first $k$ singular values and vectors of $A$.
However, the selection of columns and rows of $A$ so that \eqref{cvbd} holds is
computationally difficult. In their analysis, Goreinov et al. \cite{GTZ2} select sets $I$
and $J$ that give the submatrix $A_{(I,J)}$ maximal ``volume'' (modulus of the
determinant). It is difficult to compute these index sets in a fast manner. Therefore,
other methods to select the sets $I$ and $J$ have been proposed; see, e.g.,
\cite{FVB,MRVBV}. They are related to incomplete Gaussian elimination with complete
pivoting. These methods work well when the matrix $A$ is not very sparse. The adjacency
matrices of concern in the present paper typically are quite sparse, and we found the
sampling methods described in \cite{FVB,MRVBV} often to give singular matrices
$A_{(I,J)}$. This makes the use of adaptive cross approximation difficult. We therefore
will not use the expression \eqref{cross} in subsequent sections.
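For completeness, here is a sketch of the cross approximation \eqref{cross} in a favorable dense setting (a synthetic, exactly rank-3 matrix with hand-picked index sets; with such data the approximation is exact):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic matrix of exact rank 3, so the cross approximation can be exact
B = rng.standard_normal((30, 3))
C = rng.standard_normal((3, 30))
A = B @ C

I = [0, 1, 2]    # sampled row indices (hand-picked for this illustration)
J = [0, 1, 2]    # sampled column indices

core = A[np.ix_(I, J)]                        # A_(I,J), assumed nonsingular
M = A[:, J] @ np.linalg.inv(core) @ A[I, :]   # the cross approximation

print(np.linalg.norm(A - M))  # essentially zero for this rank-3 example
```

For the sparse adjacency matrices of interest here, the core block $A_{(I,J)}$ is frequently singular, which is precisely why this construction is not pursued further.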
\section{Functions of low-rank matrix approximations}\label{sec3}
This section discusses the approximation of functions $f$ of a large matrix
$A\in\mathbb{R}^{n\times n}$ that is only partially known. Specifically, we assume
that only $1\leq\ell\ll n$ columns of $A$ are available, and we would like to determine an
approximation of $f(A)$. We will tacitly assume that the function $f$ and matrix $A$ are
such that $f(A)$ is well defined; see, e.g., \cite{GVL,Hi} for several definitions of matrix
functions. For the purpose of this paper, the definition of a matrix function by its power
series expansion suffices; cf. \eqref{matfun1a} and \eqref{matfun1b}. We first will assume
that the matrix $A$ is nonsymmetric. At the end of this section, we will address the
situation when $A$ is symmetric.
Let $P\in\mathbb{R}^{n\times n}$ be a permutation matrix such that the known columns of
the matrix $AP$ have indices $1,2,\ldots,\ell$. Thus, the first columns of $AP$ are
$\mathbf{c}_1,\ldots,\mathbf{c}_\ell$. Let $\widetilde{\mathbf{c}}_j=P^T\mathbf{c}_j$ for $1\leq j\leq\ell$. We first
approximate $P^TAP$ by
\begin{equation}\label{All}
A_\ell=[\widetilde{\mathbf{c}}_1,\dots,\widetilde{\mathbf{c}}_\ell,\underbrace{\mathbf{0},\ldots,\mathbf{0}}_{n-\ell}].
\end{equation}
Thus,
\[
A_\ell=P^TAP\left[\begin{array}{cc} I_\ell & 0 \\ 0 & 0 \end{array}\right]\approx P^TAP,
\]
and then approximate $f(A)=Pf(P^TAP)P^T$ by
\begin{equation}\label{fapprox}
f(A)\approx Pf(A_\ell)P^T.
\end{equation}
Hence, it suffices to consider the evaluation of $f$ at an $n\times n$ matrix whose
last $n-\ell$ columns vanish. We will tacitly assume that $f(A_\ell)$ is well defined.
The computations simplify when $f(0)=0$. We therefore will consider the functions
\begin{equation}\label{modfun}
f(A_\ell)=\exp(\gamma_e A_\ell)-I\mbox{~~~and~~~}f(A_\ell)=(I-\gamma_r A_\ell)^{-1}-I.
\end{equation}
The subtraction of $I$ in the above expressions generally is of no significance for the
analysis of networks, because one typically is interested in the relative sizes of the
diagonal entries of $f(A_\ell)$, or of the entries of the vectors $f(A_\ell)\mathbf{1}$ or
$f(A_\ell^T)\mathbf{1}$.
The power series representations of the functions in \eqref{modfun},
\[
f(A_\ell)=c_1A_\ell+c_2A_\ell^2+\ldots~,
\]
show that only the first $\ell$ columns of the matrix $f(A_\ell)$ contain nonvanishing
entries.
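This column structure is easy to confirm numerically (a hypothetical zero-padded matrix; the numbers are ours):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
n, ell = 8, 3
A_ell = np.zeros((n, n))
A_ell[:, :ell] = rng.integers(0, 2, size=(n, ell))  # known columns; rest zero

F = expm(A_ell) - np.eye(n)   # f(t) = exp(t) - 1 satisfies f(0) = 0

print(np.abs(F[:, ell:]).max())  # columns ell+1,...,n of f(A_ell) vanish
```

Since every power $A_\ell^k$, $k\ge 1$, has vanishing last $n-\ell$ columns, so does any $f(A_\ell)$ with $f(0)=0$.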
Let $\mathbf{v}_1$ be a random unit vector (not belonging to ${\rm span}\{\mathbf{c}_1,\ldots,\mathbf{c}_\ell\}$).
Application of $\ell$ steps of the Arnoldi process to
$A_\ell$ with initial vector $\mathbf{v}_1$, generically, yields the Arnoldi decomposition
\begin{equation}\label{arndec}
A_\ell V_{\ell+1}=V_{\ell+1}H_{\ell+1},
\end{equation}
where $H_{\ell+1}\in\mathbb{R}^{(\ell+1) \times (\ell+1)}$ is an upper Hessenberg matrix
and the matrix $V_{\ell+1}\in\mathbb{R}^{n\times(\ell+1)}$ has orthonormal columns. The
computation of the Arnoldi decomposition \eqref{arndec} requires the evaluation of $\ell$
matrix-vector products with $A_\ell$, which is quite inexpensive since $A_\ell$ has at
most $\ell$ nonvanishing columns. We assume that the decomposition \eqref{arndec} exists.
This is the generic situation. Breakdown of the Arnoldi process, generically, occurs at
step $\ell+1$; see Saad \cite[Chapter 6]{Sa} for a thorough discussion of the Arnoldi
decomposition and its computation.
Introduce the spectral factorization
\begin{equation}\label{Hfact}
H_{\ell+1}=S_{\ell+1}\Lambda_{\ell+1}S_{\ell+1}^{-1},
\end{equation}
which we tacitly assume to exist. This is the generic situation. Thus, the matrix
$\Lambda_{\ell+1}$ is diagonal; its diagonal entries are the eigenvalues of $H_{\ell+1}$.
We may assume that the eigenvalues are ordered by nonincreasing modulus. Then the last
diagonal entry of $\Lambda_{\ell+1}$ vanishes. It follows that the last column of the
matrix $S_{\ell+1}$ is an eigenvector that is associated with a vanishing eigenvalue.
There may be other vanishing diagonal entries of $\Lambda_{\ell+1}$ as well, but this
will not be exploited. The situation when the factorization \eqref{Hfact} does not exist
can be handled as described by Pozza et al. \cite{PPS}.
We have
\[
A_\ell V_{\ell+1}S_{\ell+1}=V_{\ell+1}S_{\ell+1}\Lambda_{\ell+1}.
\]
The columns of $V_{\ell+1}S_{\ell+1}$ are eigenvectors of $A_\ell$. The last column of
$V_{\ell+1}S_{\ell+1}$ is an eigenvector that is associated with a vanishing eigenvalue.
Let $\mathbf{w}_j=V_{\ell+1}S_{\ell+1}\mathbf{e}_j$, $j=1,2,\ldots,\ell$, where $\mathbf{e}_j$ denotes the
$j^{\rm th}$ column of an identity matrix of appropriate order. Then
\[
S_n=[\mathbf{w}_1,\dots,\mathbf{w}_{\ell}, \mathbf{e}_{\ell+1}, \dots,\mathbf{e}_{n}]\in\mathbb{R}^{n\times n}
\]
is an eigenvector matrix of $A_\ell$, and
\[
A_\ell S_{n}=S_{n}\begin{bmatrix}\Lambda_{\ell}& & & \\& 0 &&\\ && \ddots & \\& & & 0
\end{bmatrix},
\]
where $\Lambda_{\ell}$ is the $\ell\times\ell$ leading principal submatrix of
$\Lambda_{\ell+1}$. Hence,
\begin{eqnarray}
\nonumber
f(A_\ell)&=&S_{n}f\left(\begin{bmatrix}\Lambda_{\ell}& & & \\& 0 &&\\ && \ddots & \\
& & & 0\end{bmatrix}\right)S_{n}^{-1} \\
\nonumber\\
\label{fAll}
&=& S_{n}\begin{bmatrix}f(\lambda_1) & & & &&\\
& \ddots & & &&\\ & & f(\lambda_{\ell})& & &\\& & &0 &&\\ & &&&\ddots \\
& & & &&0\end{bmatrix}S_{n}^{-1},
\end{eqnarray}
where we have used the fact that $f(0)=0$.
To evaluate the expression \eqref{fAll}, it remains to determine the first $\ell$ rows of
$S_n^{-1}$. This can be done with the aid of the Sherman--Morrison--Woodbury formulas
\cite[p.\,65]{GVL}. Define the matrix $W=[\mathbf{w}_1,\mathbf{w}_2,\ldots,\mathbf{w}_\ell]\in\mathbb{R}^{n\times\ell}$
and let $I_{\ell,n}\in{\mathbb R}^{\ell\times n}$ denote the matrix formed by the first $\ell$
rows of the identity matrix $I\in{\mathbb R}^{n\times n}$, so that $I_{\ell,n}W\in{\mathbb R}^{\ell\times\ell}$
is the leading $\ell\times\ell$ submatrix of $W$. Then the first $\ell$ rows of $S_n^{-1}$ are given by
$[(I_{\ell,n}W)^{-1},0_{\ell,n-\ell}]$, and we can evaluate
\begin{equation}\label{fAl}
f(A_\ell)=W f(\Lambda_\ell) [(I_{\ell,n}W)^{-1},0_{\ell,n-\ell}],
\end{equation}
where $0_{\ell,n-\ell}\in{\mathbb R}^{\ell\times(n-\ell)}$ denotes a matrix with only zero entries.
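The representation \eqref{fAl} can be validated on a small hypothetical example. For brevity, a dense eigensolver stands in below for the Arnoldi process (the eigenpairs it produces play the role of $\Lambda_\ell$ and $W=V_{\ell+1}S_{\ell+1}(:,1\!:\!\ell)$); the data and tolerances are ours:

```python
import numpy as np
from scipy.linalg import expm

n, ell = 6, 3
C1 = np.array([[1., 1., 0.],
               [0., 2., 1.],
               [0., 0., 3.]])     # distinct nonzero eigenvalues 1, 2, 3
A_ell = np.zeros((n, n))
A_ell[:ell, :ell] = C1
A_ell[ell:, :ell] = 1.0           # arbitrary entries; the last columns are zero

# Eigenpairs of A_ell; a dense eigensolver stands in for the Arnoldi process
lam, S = np.linalg.eig(A_ell)
idx = np.argsort(-np.abs(lam))[:ell]     # the ell eigenvalues of largest modulus
Lam, W = lam[idx], S[:, idx]

f = lambda t: np.exp(t) - 1.0            # f(0) = 0
left = np.linalg.inv(W[:ell, :])         # (I_{l,n} W)^{-1}
F = W @ np.diag(f(Lam)) @ np.hstack([left, np.zeros((ell, n - ell))])

print(np.linalg.norm(F - (expm(A_ell) - np.eye(n))))  # ~ machine precision
```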
Our approximation of $f(A)$ is given by $Pf(A_\ell)P^T$. For a large matrix $A$, the
computationally most expensive part of evaluating this approximation, when the matrix
$A_\ell$ is available, is the computation of the Arnoldi decomposition \eqref{arndec},
which requires ${\mathcal O}(n\ell^2)$ arithmetic floating point operations.
We remark that for functions such that
\begin{equation}\label{ftrans}
f(A)=(f(A^T))^T,
\end{equation}
which includes the functions \eqref{matfun1a} and \eqref{matfun1b}, we may instead sample
rows of $A$, which are columns of $A^T$, to determine an approximation of $f(A)$ using the
same approach as described above. Note that equation \eqref{ftrans} holds for all
matrix functions $f(A)$ that stem from a scalar function $f(t)$.
We turn to the situation when the matrix $A\in{\mathbb R}^{n\times n}$ is symmetric, and
assume that $1\leq\ell\ll n$ of its columns are known. Let the permutation matrix $P$ be
the same as above. Then the first $\ell$ rows and columns of the symmetric matrix
$A_\ell=P^TAP$ are available. Letting $\mathbf{v}_1$ be a random unit vector and applying $\ell$
steps of the symmetric Lanczos process to $A_\ell$ with initial vector $\mathbf{v}_1$ gives,
generically, the Lanczos decomposition
\begin{equation}\label{landec}
A_\ell V_{\ell+1}=V_{\ell+1}T_{\ell+1},
\end{equation}
where $T_{\ell+1}\in\mathbb{R}^{(\ell+1) \times (\ell+1)}$ is a symmetric tridiagonal
matrix and $V_{\ell+1}\in\mathbb{R}^{n\times(\ell+1)}$ has orthonormal columns. The
computation of the decomposition \eqref{landec} requires the evaluation of $\ell$
matrix-vector products with $A_\ell$. We assume $\ell$ is small enough so that the
decomposition \eqref{landec} exists. Breakdown depends on the choice of $\mathbf{v}_1$.
Typically this assumption is satisfied; otherwise the computations can be modified.
Breakdown of the symmetric Lanczos process, generically, occurs at step $\ell+1$. We
now can derive a representation of $f(A_\ell)$ of the form \eqref{fAll}, making use
of the spectral factorization of $T_{\ell+1}$. The derivation in the present situation is
analogous to the derivation of $f(A_\ell)$ in \eqref{fAll}, with the difference that the
eigenvector matrix $S_{\ell+1}$ can be chosen to be orthogonal.
\section{The computation of an approximate left Perron vector}\label{sec4}
Let $A\in{\mathbb R}^{n\times n}$ be the adjacency matrix of a strongly connected graph. Then $A$
has a unique left Perron vector $\mathbf{y}=[y_1,y_2,\ldots,y_n]^T\in{\mathbb R}^n$ of unit length with
all entries positive. As mentioned above, the importance of vertex $v_i$ is proportional
to $y_i$. When the matrix $A$ is nonsymmetric, the left Perron vector measures the
centrality of the nodes as receivers; the right Perron vector yields the centrality of the
nodes as transmitters.
Assume for the moment that the (unmodified) adjacency matrix $A$ is nonsymmetric. We
would like to
determine an approximation of the left Perron vector by using a submatrix determined
by sampling columns and rows as described in Section \ref{sec2}. Let the set $J$ contain
the $\ell$ indices of the sampled columns of $A$. Thus, the matrix
$A_{(:,J)}\in{\mathbb R}^{n\times\ell}$ contains the sampled columns. Similarly, applying the
same column sampling method to $A^T$ gives a set $I$ of $\ell$ indices; the
matrix $A_{(I,:)}\in{\mathbb R}^{\ell\times n}$ contains the sampled rows. We will compute an
approximation of the left Perron vector of $A$ by applying the power method to the matrix
$M_\ell=A_{(:,J)}A_{(I,:)}$, which approximates $A^2$ (without explicitly forming $M_\ell$).
We could instead have applied the power method to $A_{(I,:)}A_{(:,J)}$; since the
matrix $M_\ell$ is never explicitly formed, however, this choice offers no advantage.
Possible nonuniqueness of the Perron vector and non-convergence of the power method can be
remedied by adding a matrix $E\in{\mathbb R}^{n\times n}$
to $M_\ell$, where all entries of $E$ are equal to a small parameter $\varepsilon>0$. The
computations with the power method are carried out without explicitly storing the matrix
$E$ and forming $M_\ell+E$. The iterations with the power method applied to $M_\ell+E$ are
much cheaper than the iterations with the power method applied to $A$, when $\ell\ll n$.
Moreover, our method does not require the whole matrix $A$ to be explicitly known. In the
computed examples reported in Section \ref{sec5}, we achieved fairly accurate rankings of
the most important nodes without using the matrix $E$ defined above. Moreover, we found
that only fairly few rows and columns of $A$ were needed to quite accurately determine the
most important nodes in several ``real'' examples.
When the adjacency matrix $A$ is symmetric, we propose to compute the Perron vector of the
matrix $M_\ell=A_{(:,J)}A_{(J,:)}$, which requires sampling only the columns of $A$ to
form $A_{(:,J)}$, since $A_{(J,:)}=A_{(:,J)}^T$.
Notice that for symmetric matrices the right and left Perron vectors are the same.
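One natural way to organize these computations is sketched below (our own reading of the iteration; the power method touches only the two thin factors, and the all-$\varepsilon$ correction $E$ is applied implicitly as a rank-one update):

```python
import numpy as np

def left_perron_approx(Acols, Arows, eps=0.0, iters=200):
    """Power method for the left Perron vector of M = Acols @ Arows plus an
    implicit all-eps matrix E; neither M nor E is ever formed.  Acols plays
    the role of A_(:,J) (n x ell), Arows the role of A_(I,:) (ell x n)."""
    n = Acols.shape[0]
    x = np.full(n, 1.0 / np.sqrt(n))          # positive start vector
    for _ in range(iters):
        # x^T (M + E) = (Arows^T (Acols^T x))^T + eps * (1^T x) * 1^T
        y = Arows.T @ (Acols.T @ x) + eps * x.sum()
        x = y / np.linalg.norm(y)
    return x

# With all rows and columns sampled, M = A^2, and for a hypothetical
# complete graph K4 the Perron vector is uniform by symmetry:
A = np.ones((4, 4)) - np.eye(4)
x = left_perron_approx(A, A, iters=50)
print(x)  # approximately [0.5, 0.5, 0.5, 0.5]
```

Each iteration costs two thin matrix-vector products, i.e., ${\mathcal O}(n\ell)$ arithmetic operations when $\ell\ll n$.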
\section{Computed examples}\label{sec5}
This section illustrates the performance of the methods discussed when applied to the
ranking of nodes in several ``real'' large networks. All computations were carried out
in MATLAB with standard IEEE754 machine arithmetic on a Microsoft Windows 10 computer
with CPU Intel(R) Core(TM) i7-8550U @ 1.80GHz, 4 Cores, 8 Logical Processors
and 16GB of RAM.
\begin{figure}[htb!]
\begin{center}
\includegraphics[scale=0.9]{soc-Epinios1Exp.pdf}
\end{center}
\caption{soc-Epinions1: The top twenty ranked nodes using the diagonal of $f(A)$ (2nd
column), and rankings determined by the diagonals of $f(A_\ell)$ for
$\ell\in\{500,1000,1500,2000,2500,3000\}$ for $f(t)=\exp(t)-1$. The columns of $A$ are
sampled as described in Section~\ref{sec2}.}\label{socFig}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\includegraphics[scale=0.9]{soc-Epinions1ExpRandom.pdf}
\end{center}
\caption{soc-Epinions1: The top twenty ranked nodes using the diagonal of $f(A)$ (2nd
column), and rankings determined by the diagonals of $f(A_\ell)$ for
$\ell\in\{500,1000,1500,2000,2500,3000\}$ for $f(t)=\exp(t)-1$. The columns of $A$ are
sampled randomly.}\label{socFigrnd}
\end{figure}
\begin{table}[h!]
\renewcommand{\arraystretch}{1.15}
\begin{center}
\begin{tabular}{cccc}
\hline
$\ell$ & Mean & Max & Min \\
\hline
500 & 22.28 & 24.05 & 17.23 \\
1000 & 85.28 & 93.33 & 74.20 \\
1500 & 184.46 & 189.55 & 177.92 \\
2000 & 336.34 & 477.35 & 323.31 \\
2500 & 549.49 & 596.81 & 524.38 \\
3000 & 753.24 & 810.85 & 721.24 \\
\hline \\
\end{tabular}
\end{center}
\caption{soc-Epinions1. Computation time in seconds. Average, max, and min over
50 runs.}\label{soc-Epinions1T}
\end{table}
\subsection{soc-Epinions1}\label{sec:soc}
The network of this example is a ``web of trust'' among members of the website
Epinions.com. This network describes who-trusts-whom. Each user may decide to trust the
reviews of other users or not. The users are represented by nodes. An edge from node $v_i$
to node $v_j$ indicates that user $i$ trusts user $j$. The network is directed with 75,888
members (nodes) and 508,837 trust connections (edges) \cite{RAD,SNAP}. We will illustrate
that one can determine a fairly accurate ranking of the nodes by only using a fairly
small number of columns of the nonsymmetric adjacency matrix $A\in{\mathbb R}^{n\times n}$ with
$n=75888$. The node centrality is determined by evaluating
approximations of the diagonal entries of the matrix function $f(A)=\exp(A)-I$.
We sample $\ell\ll n$ columns of the adjacency matrix $A$ using the method described in
Section~\ref{sec2}. The first column, $\mathbf{c}_1$, is a randomly chosen nonvanishing column of
$A$; the remaining columns are chosen as described in Section~\ref{sec2}. Once the $\ell$
columns of $A$ have been chosen, we evaluate an approximation of $f(A)$ as described in
Section \ref{sec3}. The rankings obtained are displayed in Figure \ref{socFig}; see below
for a detailed description of this figure. When instead all $\ell$ columns of $A$ are chosen
randomly, we obtain the rankings shown in Figure \ref{socFigrnd}. Computing times
are reported in Table \ref{soc-Epinions1T}.
The exact ranking of the nodes of the network is difficult to determine due to the large
size of the adjacency matrix. It is problematic to evaluate $f(A)$ both because of the
large amount of arithmetic required and because of the large storage demand.
While the matrix $A$ is sparse, and therefore can be stored efficiently using a sparse storage
format, the matrix $f(A)$ is dense. In fact, the MATLAB function {\bf expm} cannot be
applied to evaluate $\exp(A)$ on the computer used for the numerical experiments. Instead,
we apply the Arnoldi process to approximate $f(A)$. Specifically, $k$ steps of the Arnoldi
process applied to $A$ with a random unit initial vector generically gives the Arnoldi
decomposition
\begin{equation}\label{arndec2}
AV_k=V_kH_k+\mathbf{g}_k\mathbf{e}_k^T,
\end{equation}
where the matrix $V_k\in{\mathbb R}^{n\times k}$ has orthonormal columns, $H_k\in{\mathbb R}^{k\times k}$ is
an upper Hessenberg matrix, and the vector $\mathbf{g}_k\in{\mathbb R}^n$ is orthogonal to the columns of
$V_k$. We then approximate $f(A)$ by $V_kf(H_k)V_k^T$; see, e.g., \cite{BR,DKZ} for
discussions on the approximation of matrix functions using the Arnoldi process. These
computations were carried out for $k=4000$, $k=6000$, $k=8000$, and $k=9000$, and
rankings ${\rm diag}(V_kf(H_k)V_k^T)$ for these $k$-values were determined. We found
the rankings to converge as $k$ increases. The ranking obtained for $k=9000$ therefore
is considered the ``exact'' ranking. It is shown in the second column of Figure \ref{socFig}.
Subsequent columns of this figure display rankings determined by the diagonal entries of
$f(A_\ell)$ for $\ell=500$, $1000$, $1500$, $2000$, $2500$, and $3000$, when the columns
of $A$ are sampled by the method of Section \ref{sec2}. Each column shows the top 20
ranked nodes. To make it easier for a reader to see the rankings, we use $4$ colors, and
$5$ levels for each color. When we pick $500$ columns of $A$, $9$ of the top $20$
ranked nodes are identified, but only the most important node (35) has the correct
ranking. When $\ell=1000$, the computed ranking improves somewhat. We are able to identify
$11$ of the top $20$ nodes. As we sample more columns of $A$, we obtain improved
rankings. For $\ell=3000$, we are able to identify $17$ of the $20$ most important nodes,
and the rankings get closer to the exact ranking. The figure illustrates that useful
information about node centrality can be determined by sampling many fewer than $n$
columns of $A$. Computing times are reported in Table \ref{soc-Epinions1T}.
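To make the Arnoldi-based approximation of ${\rm diag}(f(A))$ concrete, the following Python sketch illustrates the procedure (our own simplified illustration, not the code used for the experiments; the exponential of the small Hessenberg matrix $H_k$ is evaluated here by a plain Taylor sum):

```python
import numpy as np

def arnoldi(A, b, k):
    # k steps of the Arnoldi process: A V_k = V_k H_k + g_k e_k^T
    n = A.shape[0]
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # breakdown: invariant subspace found
            return V[:, :j + 1], H[:j + 1, :j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :k], H[:k, :k]

def expm_taylor(M, terms=60):
    # exp(M) via a Taylor sum; adequate for the small Hessenberg matrix H_k
    E, T = np.eye(M.shape[0]), np.eye(M.shape[0])
    for j in range(1, terms):
        T = T @ M / j
        E = E + T
    return E

def approx_diag_f(A, k, rng):
    # approximate diag(exp(A) - I) by diag(V_k f(H_k) V_k^T) without forming f(A)
    b = rng.standard_normal(A.shape[0])
    V, H = arnoldi(A, b, k)
    F = expm_taylor(H) - np.eye(H.shape[0])
    return np.einsum('ij,jk,ik->i', V, F, V)  # entry-wise diagonal of V F V^T
```

Only the $k\times k$ matrix $H_k$ has to be exponentiated, and the diagonal of $V_kf(H_k)V_k^T$ can be accumulated row by row, so the dense $n\times n$ matrix $f(A)$ is never formed.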
Figure \ref{socFigrnd} differs from Figure \ref{socFig} in that the columns of the
matrix $A$ are randomly sampled. Comparing these figures shows the sampling method of
Section \ref{sec2} to yield rankings that are closer to the ``exact ranking'' of the
second column for the same number of sampled columns.
\begin{figure}[htb!]
\begin{center}
\includegraphics[scale=0.9]{ca-CondMatExp.pdf}
\end{center}
\caption{ca-CondMat: The top twenty nodes determined by the diagonals of $f(A_\ell)$ for
$\ell\in\{500,1000,1500,2000,2500,3000\}$ for $f(t)=\exp(t)-1$. The columns of $A$ are
sampled as described in Section~\ref{sec2}.}\label{AsFig}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\includegraphics[scale=0.9]{ca-CondMatExpRandom.pdf}
\end{center}
\caption{ca-CondMat: The top twenty nodes determined by the diagonals of $f(A_\ell)$ for
$\ell\in\{500,1000,1500,2000,2500,3000\}$ for $f(t)=\exp(t)-1$. The columns of $A$ are
sampled randomly.}\label{AsFigrnd}
\end{figure}
\begin{table}[h!]
\renewcommand{\arraystretch}{1.15}
\begin{center}
\begin{tabular}{cccc}
\hline
$\ell$ & Mean & Max & Min \\
\hline
500 & 0.89 & 1.02 & 0.76 \\
1000 & 2.38 & 3.54 & 2.06 \\
1500 & 4.61 & 7.66 & 3.55 \\
2000 & 7.55 & 12.43 & 5.78 \\
2500 & 10.87 & 19.45 & 8.45 \\
3000 & 15.53 & 37.50 & 11.03 \\
\hline \\
\end{tabular}
\end{center}
\caption{ca-CondMat. Computation time in seconds. Average, max, and min over
100 runs. }\label{ca-CondMatT}
\end{table}
\subsection{ca-CondMat}\label{sec:as22}
This example illustrates the application of the technique of Section \ref{sec3} to a
symmetric partially known matrix. We consider a collaboration network from e-print
arXiv. The 23,133 nodes of the associated graph represent authors. If author $i$
co-authored a paper with author $j$, then the graph has an undirected edge connecting the
nodes $v_i$ and $v_j$. The adjacency matrix $A$ is symmetric with 186,936 non-zero entries
\cite{LKF,SNAP}. Of the entries, 58 are on the diagonal. Since we are interested in
graphs without self-loops, we set the latter entries to zero. We use the node centrality
measure furnished by the diagonal of $f(A)=\exp(A)-I$.
Figure \ref{AsFig} shows results when using the sampling method described in Section
\ref{sec2} to choose $\ell$ columns of the adjacency matrix $A$. Due to the symmetry of
$A$, we also know $\ell$ rows of $A$. The figure compares the ranking of the nodes using
the diagonal of the matrix $f(A)$ (which is the exact ranking) with the rankings
determined by the diagonal entries of $f(A_\ell)$ for
$\ell\in\{500,1000,1500,2000,2500,3000\}$. The figure shows the top
$20$ ranked nodes determined by each matrix. For $\ell=500$, a couple of the $20$ most
important nodes can be identified among the first $20$ nodes, but their rankings are incorrect.
The most important node (5013) is in the $13^{\rm th}$ position, and the second most
important node (21052) is in the $3^{\rm rd}$ position. Increasing $\ell$ to 1000 yields
more accurate rankings. The most important nodes, i.e., (5013), (21052), and (18746),
are ranked correctly. Increasing $\ell$ further yields rankings that are closer to the
``exact'' ranking of the second column. For instance, $\ell=2000$ identifies $19$ of the $20$ most
important nodes, and $8$ of them have the correct rank. The figure suggests that we
may gain valuable insight into the ranking of the nodes by using only fairly few columns
(and rows) of the adjacency matrix.
Computing times are shown in Table \ref{ca-CondMatT}.
Figure \ref{AsFigrnd} differs from Figure \ref{AsFig} in that the columns of the matrix
$A$ are randomly sampled. Comparing these figures shows that the sampling method of
Section \ref{sec2} gives rankings that are closer to the ``exact ranking'' of the second
column for the same number of sampled columns.
\begin{figure}[htb!]
\begin{center}
\includegraphics[scale=1]{EnronLeftPerron.pdf}
\end{center}
\caption{Enron: The top $20$ ranked nodes given by the left Perron vector of $A$ and of
$M_\ell=A_{(:,J)}A_{(I,:)}$ for $\ell\in\{500,1000,1500,2000,2500,3000\}$. The columns of
$A$ are sampled as described in Section~\ref{sec2}.}
\label{EnronTable}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\includegraphics[scale=1]{EnronLeftPerronRandom.pdf}
\end{center}
\caption{Enron: The top $20$ ranked nodes given by the left Perron vector of $A$ and of
$M_\ell=A_{(:,J)}A_{(I,:)}$ for $\ell\in\{500,1000,1500,2000,2500,3000\}$. The columns of
$A$ are sampled randomly.}\label{EnronTablernd}
\end{figure}
\begin{table}[h!]
\renewcommand{\arraystretch}{1.15}
\begin{center}
\begin{tabular}{cccc}
\hline
$\ell$ & Mean & Max & Min\\
\hline
500 & 0.21 & 0.37 & 0.11 \\
1000 & 0.32 & 0.58 & 0.10 \\
1500 & 0.38 & 0.66 & 0.10 \\
2000 & 0.42 & 0.72 & 0.10 \\
2500 & 0.42 & 0.84 & 0.11 \\
3000 & 0.43 & 0.95 & 0.10 \\
\hline \\
\end{tabular}
\end{center}
\caption{Enron. Computation time in seconds. Average, max, and min over 100 runs.}\label{EnronT}
\end{table}
\subsection{Enron}\label{sec:er}
This example illustrates the application of the method described in Section \ref{sec4} to
a nonsymmetric adjacency matrix. The network in this example is an e-mail exchange
network, which represents e-mails (edges) sent between Enron employees (nodes). The
associated graph is unweighted and directed with 69,244 nodes and 276,143 edges, including
1,535 self-loops. We removed the self-loops before running the experiment. This network
has been studied in \cite{CPAV} and can be found at \cite{SSMC}.
We choose $\ell$ columns of the matrix $A$ as described in Section \ref{sec2} and put
the indices of these columns in the index set $J$. Similarly, we select $\ell$ columns
of the matrix $A^T$, i.e., rows of $A$; the indices of these rows make up the set $I$. This determines the
matrix $M_\ell=A_{(:,J)}A_{(I,:)}\in{\mathbb R}^{n\times n}$ of rank at most $\ell$. We
calculate an approximation of a left Perron vector of $A$ by computing a left Perron
vector of $M_\ell$. The size of the entries of the Perron vectors determines the ranking.
The second column of Figure \ref{EnronTable} shows the ``exact ranking'' determined by a left
Perron vector of $A$. The remaining columns show the rankings defined by Perron vectors of
$M_\ell$ for $\ell\in\{500,1000,1500,2000,2500,3000\}$ with the sampling of the columns
of $A$ carried out as described in Section~\ref{sec2}. The ranking determined by Perron
vectors of $M_\ell$ gets closer to the exact ranking in the second column as $\ell$ increases.
When $\ell=500$, we are able to identify $12$ out of the $20$ most important nodes,
but not in the correct order. The three most important nodes have the correct ranking
for $\ell\geq 1000$. When $\ell\geq 2000$, we can identify almost all of the $20$ most important
nodes, as the only remaining mismatch, node (60606), is actually ranked $21^{\rm st}$.
Computing times are shown in Table \ref{EnronT}.
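Since the nonzero eigenvalues of $CR$ and $RC$ coincide, a left Perron vector of $M_\ell=A_{(:,J)}A_{(I,:)}$ can be computed without ever forming the $n\times n$ product. The sketch below (our own illustration with a simple power iteration; the paper does not prescribe a particular eigensolver) works with the small $\ell\times\ell$ matrix $S=A_{(I,:)}A_{(:,J)}$ and lifts the result back:

```python
import numpy as np

def left_perron_lowrank(C, R, tol=1e-10, maxit=1000):
    # Left Perron vector of M = C @ R without forming the n x n matrix M:
    # if y^T (R C) = lam y^T, then x^T = y^T R satisfies x^T (C R) = lam x^T.
    S = R @ C                            # small ell x ell matrix
    y = np.ones(S.shape[0]) / S.shape[0]
    lam = 0.0
    for _ in range(maxit):
        z = S.T @ y                      # power iteration for the left eigenvector
        lam = np.linalg.norm(z, 1)
        z = z / lam
        if np.linalg.norm(z - y, 1) < tol:
            y = z
            break
        y = z
    x = R.T @ y                          # lift back to length n
    return x / x.sum(), lam
```

For a nonsymmetric adjacency matrix one would set C = A[:, J] and R = A[I, :]; the sizes of the entries of the returned vector x then determine the ranking.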
Figure \ref{EnronTablernd} differs from Figure \ref{EnronTable} in that the columns of
the matrix $A$ are randomly sampled. These figures show that the sampling method of
Section \ref{sec2} gives rankings that are closer to the ``exact ranking'' of the second
column for the same number of sampled columns.
\begin{figure}[htb!]
\begin{center}
\includegraphics[scale=1]{cond-mat-2005Perron.pdf}
\end{center}
\caption{Cond-mat-2005: The top $20$ ranked nodes determined by the Perron vectors of $A$
and of $M_\ell=A_{(:,J)}A_{(J,:)}$ for $\ell\in\{500,1000,1500,2000,2500,3000\}$. The
columns of $A$ are sampled as described in Section~\ref{sec2}.}\label{CondFig}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\includegraphics[scale=1]{cond-mat-2005LeftPerronRandom.pdf}
\end{center}
\caption{Cond-mat-2005: The top $20$ ranked nodes determined by the Perron vectors of $A$
and of $M_\ell=A_{(:,J)}A_{(J,:)}$ for $\ell\in\{500,1000,1500,2000,2500,3000\}$. The
columns of $A$ are sampled randomly.}\label{CondFigrnd}
\end{figure}
\begin{table}[h!]
\renewcommand{\arraystretch}{1.15}
\begin{center}
\begin{tabular}{cccc}
\hline
$\ell$ & Mean & Max & Min \\
\hline
500 & 0.08 & 0.13 & 0.03 \\
1000 & 0.13 & 0.17 & 0.02 \\
1500 & 0.16 & 0.24 & 0.03 \\
2000 & 0.19 & 0.26 & 0.03 \\
2500 & 0.21 & 0.25 & 0.03 \\
3000 & 0.23 & 0.28 & 0.03 \\
\hline \\
\end{tabular}
\end{center}
\caption{Cond-mat-2005. Computation time in seconds. Average, max, and min over 100 runs.}\label{Cond-mat-2005T}
\end{table}
\subsection{Cond-mat-2005}\label{sec:cond}
The network in this example models a
collaboration network of scientists posting preprints in the condensed
matter archive at www.arxiv.org. It is discussed in \cite{N} and can be found at
\cite{MN}. We use an unweighted version of the network. The associated graph is undirected
and has 40,421 nodes and 351,382 edges. We use the Perron vector as a centrality measure,
and compare the node ranking using the Perron vector of $A$ with the ranking determined by
the
Perron vector for the matrices $M_\ell=A_{(:,J)}A_{(J,:)}\in{\mathbb R}^{n\times n}$ for several
$\ell$-values. The matrix $A_{(:,J)}$ is determined as described in Section \ref{sec2},
and $A_{(J,:)}$ is just $A_{(:,J)}^T$.
Figure \ref{CondFig} shows the (exact) ranking obtained with the Perron vector for $A$
(2nd column) and the rankings determined by the Perron vector for $M_\ell$, for
$\ell\in\{500,1000,1500,2000,2500,3000\}$, when the columns of $A$ are sampled as
described in Section \ref{sec2}. We compare the ranking of the top $20$ ranked nodes in
these rankings. When $\ell=500$, the two most important nodes are ranked correctly
by using the Perron vector for $M_{500}$. Moreover, $15$ out of $20$ top ranked nodes are
identified, but their ranking is not correct. For $\ell=2000$, the nine most important
nodes are ranked correctly.
Computing times are displayed in Table \ref{Cond-mat-2005T}.
Figure \ref{CondFigrnd} differs from Figure \ref{CondFig} in that the columns of
the matrix $A$ are randomly sampled. Clearly, the sampling method of Section \ref{sec2}
gives rankings that are closer to the ``exact ranking'' for the same number of sampled
columns.
The above examples illustrate that valuable information about the ranking of nodes can
be gained by sampling columns and rows of the adjacency matrix. The last two examples
determine the left Perron vector. The most popular methods for computing this vector
for a large adjacency matrix
are the power method and enhanced variants of the power method that do not require much
computer storage. These methods, of course, also can be applied to determine the left
Perron vector of the matrices $M_\ell$. It is outside the scope of the present paper to
compare approaches to efficiently compute the left Perron vector. Extrapolation and other
techniques for accelerating the power method are described in
\cite{BRZ,BRZ2,BRZ3,CRZT,JS,JS2,WZW}.
In our experience the sampling method described performs well on many ``real'' networks.
However, one can construct networks for which sampling might not perform well. For instance,
let $G$ be an undirected graph made up of two large clusters with many edges between vertices
in the same cluster, but only one edge between the clusters. The latter edge may be
difficult to detect by the sampling method, which therefore might only
give meaningful results for nodes in one of the clusters. We are presently investigating how the
performance of the sampling method can be quantified. We would like to mention that the
sampling method can be used to study various quantities of interest in network analysis,
such as the total communicability \cite{BK}.
\section{Conclusion}\label{sec6}
In this work we have described novel methods for analyzing large networks in situations
when not all of the adjacency matrix is available. This was done by evaluating matrix
functions or computing approximations of the Perron vector of partially known matrices.
In the computed examples, we considered the situation when only fairly small subsets of
columns, or of rows, or both, are known.
There are two distinct advantages to the approaches developed here:
\begin{enumerate}
\item They are computationally much cheaper than the evaluation of matrix functions or
the computation of the Perron vector of the entire matrix when the adjacency matrix is
large.
\item The methods described correspond to a compelling sampling strategy when obtaining
the full adjacency information of a network is prohibitively costly. In many realistic
scenarios, the easiest way to collect information about a network is to access nodes (e.g.,
individuals) and to interrogate them about the other nodes they are connected to. This
version of sequential sampling is described in Section~\ref{sec2}.
\end{enumerate}
Finally, in order to illustrate the feasibility of our techniques,
we have shown how to approximate well-known node centrality measures for large networks,
obtaining quite good approximate node rankings, by using only a few columns and rows of
the underlying adjacency matrix.
\section*{Acknowledgement}
The authors would like to thank Giuseppe Rodriguez and the anonymous referees for
comments and suggestions.
\section{Introduction}
Research communities seek to make the deployment of general artificial intelligence (AI) and deep neural networks (DNNs) used in everyday life as dependable as possible.
Significant emphasis is placed on handling corrupted input (e.g., due to visual artifacts or attacks) provided to the model.
However, less effort has been dedicated to studying corruptions of the internal state of the model itself, most importantly caused by faults in the underlying hardware.
Such faults can occur naturally, such as memory corruption induced by external (e.g., cosmic neutron) radiation or electric leaking in the circuitry itself, typically manifested as bit flips or stuck-at-0/1s in the memory elements \cite{Athavale2020, Li2017}, which may alter the DNN model parameters (\emph{weight faults}) or the intermediate states (\emph{neuron faults}).
Platform faults can also impact the input while it is held in memory, yet this work focuses on the computational part of the DNN as our goal is to estimate the vulnerability of the model.
The impact of these faults is often unpredictable in systems with large complexity.
Alterations can be of transient or permanent nature:
Transient faults have a short life span of the order of a few clock cycles and are therefore harder to detect by the system.
On the contrary, permanent faults may silently corrupt the system output for a longer period.
Memory protection techniques like error-correcting codes (ECC) can mitigate the risk of hardware faults \cite{Neale2016}; however, they are typically applied only to selected elements to avoid significant cost overheads. Given the rise in technology scaling with smaller node sizes and larger memory areas, future platforms are expected to become even more vulnerable to hardware faults \cite{Neale2016}.
Object detection DNNs are among the most common examples of highly safety-critical DNN applications as they are in autonomous vehicles or in medical image analysis.
Typically, autonomous systems process events based on perception techniques. Hence, it is critically important that potential hazards do not impact the system-level evaluation of events.
While the chances for a hardware fault to occur (for example, the chance of a neutron radiation event hitting a memory element) can be estimated statistically, it remains unclear how to quantify the safety-related impact of the failure of a DNN applied for the purpose of object detection. In contrast to simpler classification problems, the model output here typically consists of a multitude of bounding boxes and classes per image, of which a subset can be altered in the presence of a fault while others remain intact, see Fig.~\ref{fig:example}. We find that commonly used average precision (AP) \cite{Microsoft2017} metrics inappropriately rely on the count of false objects irrespective of their interrelations (grouping in the same image or distributed across multiple frames).
In real-time applications of DNNs, it further matters if the corrupted output is volatile or temporally stable across multiple input frames, since the consumer of the detections typically sits behind a tracking module that can regularize instantaneous alterations. We therefore see the need to establish a safety-related assessment of the vulnerability of object detection workloads under soft errors.
In line with typical specifications from safety assessment, we adopt a generalized notion of a safety hazard as a perturbation that causes a potentially unsafe decision by the end-user of the object detection module.
Therefore, we introduce two variants of the metric \text{\novelMetric } (Image-wise Vulnerability Metric for Object Detection), namely $\SDC$ in case of an image-wise silent data corruption (SDC), and $\DUE$ in case of detectable uncorrectable errors (DUEs).
In this paper, we discuss the characteristics of the AP-based metrics in detail when used to quantify a model's vulnerability.
For example, AP50 is found to be hypersensitive to rare single corruption events compared to an evaluation at the image level.
Our work helps relate the system-level hazard evaluation to the impact of underlying hardware faults.
We find that a hardware fault, if it hits the crucial bits of either a neuron or a weight, can silently lead to excessive amounts of additional false positives (FPs) and increase the rate of false negative (FN) misses.
We further study the impact of permanent faults in a real-time situation by considering continuous video sequences and observing that the error manifestation frequently persists for a critical time interval.
In summary, this paper makes the following contributions:
\vspace{-8pt}
\begin{itemize}
\item We demonstrate that AP-based metrics lead to misleading vulnerability estimates for object detection DNN models (Sec. \ref{sec:ap})
\item We propose an SDC-based/DUE-based metric \text{\novelMetric} to quantify the vulnerability of object detection DNN models under hardware faults (Sec. \ref{sec: proposed metrics}).
\item We evaluate the vulnerability of various representative object detection DNN models using the proposed \text{\novelMetric}, illustrating the probability of a single bit flip resulting in a potentially safety-critical event (Sec. \ref{sec:error_probabilities}).
\item For each such event, we propose various quantitative metrics to estimate the impact severity for typical safety-critical applications (Sec. \ref{sec:error_severities}).
\item We extend our image-based evaluation to a video-based safety-critical system and measure the vulnerability of temporal persistency ($\Aoccfp$ and $\Aoccfn$) due to a permanent fault, by tracking the FPs and FNs across multiple video frames (Sec. \ref{sec:permanent_faults}).
\end{itemize}
\input{related_work}
\input{preliminaries}
\input{methodology}
\input{results_1}
\input{results_2}
\section{Conclusion}
\vspace{-5pt}
This work points out the challenges in estimating the vulnerability of object detection models under bit flip faults. Average precision-based metrics are either overly sensitive or insensitive to corruption events, which can be misleading in a safety context.
For example, for F-RCNN+Kitti, neuron injection experiments showed almost no impact ($<0.1\%$) on the AP50 and mAP metrics. Using the image-based evaluation metric \text{\novelMetric} proposed here, however, we see that $0.7\%$ of all images lose substantial amounts ($>30\%$) of the total TP detections due to a single bit flip.
The evaluation method presented in this work allows us to arrive at a vulnerability estimate that better addresses safety targets. Given the $\SDC$ probabilities and severities (see Fig.~\ref{fig:fault_rates_1} and Tab.~\ref{tab:sev_features}), we conclude that the chances of safety-related corruptions due to soft errors are minor to moderate ($0.4\%-4.2\%$) in the studied setups. $\SDC$ events due to weight faults are about twice as likely as those due to neuron faults. However, if an SDC occurs, the severity can be grave.
The \text{\novelMetric} metric should always be considered in combination with severity features for safety purposes. This is because \text{\novelMetric} does not quantify the severity, but only considers the existence of false and missed bounding boxes.
Our metric is defined relative to the original performance. This means that even if a fault also acts in a beneficial way, i.e., fixing some FP or FN occurrences, it will be categorized as an SDC here.
We estimated this severity with the help of different safety-related features. We observed that high bits of the exponent of floating point numbers, when hit by either neuron or weight faults, can lead to a significant increase in $\FPd$ and $\FNnd$.
This effect is also translated into an average occupancy value that reflects the area portion of the image that is critically altered by a fault. We find that large average occupancies (up to $\Aoccfp \sim 81\%$ for FP and $\Aoccfn\sim86\%$ for FN) are common, reflecting significant safety hazards. Finally, we studied the use case of a sequential real-time image sequence from Lyft to show that permanent \textit{stuck-at} faults on neurons or weights can induce FP objects covering as much as $\sim83\%$ of the image area, creating dangerous ghost objects. Similarly, up to $\sim63\%$ of the TP area in the scene can be missed. Overall, weight faults are more likely to be impactful than neuron faults and have a higher severity in area occupancy (except for permanent FNs).
\vspace{-10pt}
\section*{Acknowledgment}
\vspace{-10pt}
This project has received funding from the European Union's Horizon $2020$ research and innovation programme under grant agreement No $956123$.
Our research was partially funded by the Federal Ministry of Transport and Digital Infrastructure of Germany in the project Providentia++ (01MM19008).
This version of the contribution has been accepted for publication, after peer review (when applicable) but is not the Version of Record and does not reflect post-acceptance improvements, or any corrections. The Version of Record is available online at: \href{https://doi.org/10.1007/978-3-031-14835-4_20}{{https://doi.org/10.1007/978-3-031-14835-4\_20}}. Use of this Accepted Version is subject to the publisher\textquotesingle s Accepted Manuscript terms of use \href{https://www.springernature.com/gp/open-research/policies/accepted-manuscript-terms}{{https://www.springernature.com/gp/open-research/policies/accepted-manuscript-terms}}.
\vspace{-10pt}
\bibliographystyle{splncs04}
\section{Methodology of vulnerability estimation}
\subsection{Issues with average precision}
\label{sec:ap}
\begin{figure*}[!ht]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{metrics/pr_curves_1.png}
\vspace{-3pt}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{metrics/pr_curves_2.png}
\vspace{-3pt}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{metrics/pr_curves_3.png}
\vspace{-3pt}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{metrics/pr_curves_4.png}
\vspace{-3pt}
\caption{}
\end{subfigure}
%
\vspace{-5pt}
\caption{Simulation of the effect of fault injection on the AP metric. Here, an artificial data set of $100$ objects was generated, where each object was classified as TP with a chance of $0.7$ or as an FN otherwise. In addition, FPs were created with a rate of $0.3$ per true detection. Both TPs and FPs are assigned random confidence values between $0.7$ and $1$. To this setup (a), additional FPs simulating the effect of fault injection were added, or existing TPs were randomly eliminated to model fault-induced FNs ((b)-(d)). The diagrams show the PR curves and the effect of fault injection on them. Number and confidence range of the faulty objects are given in the insets.}
\vspace{-8pt}
\label{fig:pr_example}
\vspace{-8pt}
\end{figure*}
In object detection, evaluation and benchmarking methods are most commonly selected from the family of average precision (AP)-based metrics (in combination with specific IoU thresholds such as AP50 or mAP).
Libraries such as CoCo API \cite{Microsoft2017} perform the following relevant steps to obtain AP values from a set of object predictions:
i) ground truth and the predicted objects are collected in groups of the same class label,
ii) within a group, the predicted objects are sorted w.r.t.\ their confidence scores,
iii) the sorted predictions are consecutively assigned to the ground truth objects within the same class group, using an appropriate IoU threshold,
iv) precision and recall (PR) curves are evaluated sequentially through the confidence-ranked TP, FP, and FN objects,
v) the class-wise AP is calculated as the area under the interpolated PR curve of a class, and vi) the overall AP is determined as the average of the class-wise AP values.
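For a single class, steps (ii)--(v) can be sketched as follows (a simplified all-point interpolation rather than the $101$-point variant of the CoCo API; the function name and interface are our own):

```python
import numpy as np

def average_precision(confidences, is_tp, n_gt):
    # sort detections by decreasing confidence and accumulate TP/FP counts
    order = np.argsort(-np.asarray(confidences, dtype=float))
    tp = np.asarray(is_tp, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    cum_fp = np.cumsum(1.0 - tp)
    recall = cum_tp / n_gt
    precision = cum_tp / (cum_tp + cum_fp)
    # interpolated precision at recall r: max precision at any recall >= r
    prec_interp = np.maximum.accumulate(precision[::-1])[::-1]
    r = np.concatenate(([0.0], recall))
    # area under the interpolated step-shaped precision-recall curve
    return float(np.sum((r[1:] - r[:-1]) * prec_interp))
```

This sketch already exhibits the confidence dependence discussed in the following: with a single ground-truth object, a fault-induced FP ranked below the TP leaves the AP at $1.0$, whereas the same FP ranked above the TP halves it to $0.5$.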
It has been pointed out that such AP metrics can lead to non-intuitive results in the detection performance of a model on a specific data set \cite{Redmon2018}.
In the following, we illustrate that an AP-based evaluation can be misleading when estimating the vulnerability of a model against corruption events such as soft errors in a safety-critical real-time context concerning the probability and severity of corruption.
Corruption events lead to additional FP and FN objects merged into or eliminated from the healthy list of detected objects.
We identified the following issues when trying to quantify model vulnerability based on AP metrics:
\begin{itemize}
\item \textbf{Object-level evaluation:} The AP is calculated on an object level, i.e., the amount of TP, FP, FN objects accumulated across all images is used for evaluation. This does not consider how corrupted boxes are distributed across images, i.e., one image with a large number of fault-induced FP detections can have the same effect as many corrupted images with few FP detections each. From a real-time safety perspective, however, the amount of corrupted image frames is typically relevant, as this may determine, for example, the robustness of a video stream used for environment perception.
\item \textbf{Dependency of PR on confidence:} Due to the sequential and integration-based characteristic of the average precision, the fault-induced FP object's impact depends highly on those sample's confidence. This does not reflect the potential safety relevance a low-confidence FP object may have, see more below.
\item \textbf{Dependency of box assignment on confidence:} The strict confidence ranking can, in some cases, lead to a non-optimal global assignment of bounding boxes. For example, a better matching box might have slightly lower confidence than a global optimization would demand.
\item \textbf{Class-wise average:} Common and rare classes have the same weight in the overall AP metric. However, their detection performance and vulnerability can be quite different as they typically relate to the samples the model encountered during training.
\end{itemize}
\vspace{-6pt}
In particular, the second point above is non-intuitive; we therefore illustrate this in more detail in Fig.~\ref{fig:pr_example} with the help of a generic example from a randomly generated data set of $100$ objects.
Additional FPs with low confidence compared to the reference set of objects have a negligible impact on the metric as they get appended to the tail of the PR curve, even when numerous and potentially safety-critical. On the contrary, few high-confidence FP objects can lead to significant drops in the AP as those samples get sorted in at the head of the PR curve to lower it. Fault-induced FNs reduce the area under the PR curve by pushing the samples towards smaller recalls.
\subsection{Proposed metrics: \text{\novelMetric}}
\label{sec: proposed metrics}
We introduce \text{\novelMetric} metrics to measure the image-wise vulnerability of the object detection DNNs.
Our evaluation strategy described in the following seeks to counter the issues with AP-based metrics described in the last section in order to reflect vulnerability estimation better addressing safety targets.
In particular, our approach is characterized by:
\begin{itemize}
\item \textbf{Image-level evaluation:} We evaluate vulnerability on an image level instead of an object level.
This approach reflects that faults jeopardize safety applications when they silently alter the free and occupied space by inducing false detections in an image, and particularly in sequences of images, even if such an alteration involves only a few false objects per frame.
We register image-wise SDC and DUE events, see Sec.~\ref{sec:sdc_due}, to determine the probability of a relevant fault impact. The severity of the latter is evaluated separately in terms of the amount of induced FPs and FNs. Due to their image-based character, $\SDC$ and $\DUE$ metrics are naturally independent of the object confidences.
\item \textbf{Confidence-independent box assignment:} False-positive objects can be critical whether they have high or low confidence, which is masked in the AP metric. We apply a different assignment scheme for FPs and FNs that omits confidence ranking and hence makes the model vulnerability metric independent of the confidence of FPs, see Sec.~\ref{sec:assignment_strat}. The assignment strategy can also be varied to relax class correspondence requirements, which are often overemphasized from a safety perspective. The system can still perform at a degraded level if it is sure of the object location but less sure about the class.
\item \textbf{Class-independent average:} We evaluate the overall sample mean instead of the mean of individual class categories to reflect typical imbalances in the data set concerning object classes.
\end{itemize}
\vspace{-15pt}
\subsubsection{Assignment policy} \label{sec:assignment_strat}
In contrast to the sequential and class-wise matching described in Sec.~\ref{sec:ap}, we calculate the cost matrix from a set of predictions and ground truth objects for a single image. The cost for matching objects is the IoU between the bounding boxes.
If the IoU is below the specified threshold $\IoUeval=0.5$, or if the classes of the two objects are not the same, we assign a maximum cost. To analyze the relevance of exact class predictions for an application, we can harden or soften the class matching from a one-to-one correspondence to compatible class clusters or neglect class matching altogether.
A Hungarian association algorithm \cite{Kuhn1955} is then deployed to obtain the globally optimal cost assignment.
As usual, the number of accepted matches per image represents the true positive (TP) cases.
False detections are registered in the following cases:
i) a FP and a simultaneous FN detection occurs if the IoU with the assigned ground truth object is below the threshold, independent of the predicted class, or if the IoU is sufficiently large, but the classes are not compatible,
ii) a single FP occurs if there is a predicted object that cannot be assigned to any ground truth object with acceptable costs,
iii) a single FN occurs if there is a non-assigned ground truth object. Fig.~\ref{fig:example} shows an illustrative example of assigned TP, FP, and FN boxes. In our setup, we clip predicted bounding boxes reaching beyond the image dimensions -- e.g., due to faults -- to the actual image boundaries.
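The per-image assignment described above can be sketched as follows, assuming boxes in $(x_1, y_1, x_2, y_2)$ format and using SciPy's Hungarian solver; all function and variable names here are illustrative and not taken from our implementation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

MAX_COST = 1e6  # prohibitive cost for invalid pairings

def iou(box_a, box_b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_image(preds, gts, iou_thresh=0.5, match_classes=True):
    """Confidence-independent TP/FP/FN counts for one image.
    preds/gts: lists of (box, class_label) tuples."""
    cost = np.full((len(preds), len(gts)), MAX_COST)
    for i, (pb, pc) in enumerate(preds):
        for j, (gb, gc) in enumerate(gts):
            ov = iou(pb, gb)
            # only overlap above the threshold with compatible class
            # yields a finite cost; class matching can be disabled
            if ov >= iou_thresh and (not match_classes or pc == gc):
                cost[i, j] = 1.0 - ov
    rows, cols = linear_sum_assignment(cost)
    tp = int(sum(cost[r, c] < MAX_COST for r, c in zip(rows, cols)))
    fp = len(preds) - tp   # unmatched or invalid predictions
    fn = len(gts) - tp     # unmatched or invalid ground truths
    return tp, fp, fn
```

Setting `match_classes=False` corresponds to the relaxed, location-only matching variant mentioned above.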
\vspace{-15pt}
\subsubsection{\text{\novelMetric} ($\SDC$ and $\DUE$)} \label{sec:sdc_due}
We define the $\SDC$ rate as the ratio of events where a fault during inference causes a silent corruption of an image and the total number of image inferences.
An SDC is defined as a change in the TP, FP, or FN count of the respective image compared to the original fault-free prediction, given that no irregular \textit{NaN} (not a number) or \textit{Inf} (infinite) values occur during the inference, as shown in Eq.~\ref{eq:SDC_DUE}. Since TPs and FNs are complementary to each other, either TP or FN can be eliminated from $\SDC$ in Eq.~\ref{eq:SDC_DUE}.
The $\DUE$ rate, in turn, is the ratio of events in which the injected fault generates irregular \textit{NaN} or \textit{Inf} values during inference, detected inside the layers or in the predicted output of the respective image; it is likewise computed via Eq.~\ref{eq:SDC_DUE}. As DUE events are detectable by construction, they are typically less critical than SDC events. Explicitly,
\vspace{-15pt}
\begin{align}
\vspace{-25pt}
\begin{split}
\SDC &= \frac{1}{N} \sum^{N}_{i=1} \big\{\big[ (FP_{\text{orig}})_i \neq (FP_{\text{corr}})_i \lor\ (FN_{\text{orig}})_i \neq (FN_{\text{corr}})_i\big]
\land\ \lnot \text{Inf}_i\ \land\ \lnot \text{NaN}_i \big\}, \\
\DUE &= \frac{1}{N} \sum^{N}_{i=1} \left[ \text{Inf}_i\ \lor\ \text{NaN}_i \right].
\end{split}
\vspace{-25pt}
\label{eq:SDC_DUE}
\end{align}
\vspace{-15pt}
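For concreteness, Eq.~\ref{eq:SDC_DUE} can be evaluated from per-image counts as in the following sketch (the dictionary keys are illustrative names, not part of our tooling):

```python
def sdc_due_rates(results):
    """Image-wise SDC and DUE rates.
    results: one dict per image inference with the fault-free ('orig')
    and corrupted ('corr') FP/FN counts and a NaN/Inf flag."""
    n = len(results)
    # SDC: FP or FN count changed, and no NaN/Inf occurred
    sdc = sum(
        (r['fp_orig'] != r['fp_corr'] or r['fn_orig'] != r['fn_corr'])
        and not r['nan_or_inf']
        for r in results
    ) / n
    # DUE: a NaN/Inf value was generated and detected
    due = sum(r['nan_or_inf'] for r in results) / n
    return sdc, due
```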
\section{Preliminaries}
\subsection{Hardware faults vocabulary}
\label{sec:hardware faults vocabulary}
Our fault injection technique includes transient and permanent faults.
Transient faults are random flips ($0\rightarrow 1$ or $1\rightarrow 0$) of a randomly chosen bit that occur during a single image inference and vanish afterward.
Permanent faults are modeled as \text{\statzero} and \text{\statone} errors, meaning that a bit remains consistently in state '0' or '1' without responding to intended updates. Those faults are assumed to persist across many image inferences.
We inject faults either into intermediate computational states of the network (neurons) or into the parameters (weights) of the DNN model, focusing only on convolutional layers, which constitute a significant part of all operations in the studied DNNs.
Both types of faults represent bit flips in the respective memory elements, which hold either temporary states, such as intermediate network layer outputs, or learned and statically stored network parameters.
A fault can potentially induce critical alterations of the model predictions, measured by $\SDC$ or $\DUE$ as shown in Eq.~\ref{eq:SDC_DUE}.
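To make the fault models concrete, the sketch below flips or pins single bits of an FP32 value via its IEEE 754 bit pattern; this is a minimal illustration under the stated bit-numbering convention (bit 0 is the mantissa LSB, bit 31 the sign), not our injection framework:

```python
import struct

def float_bits(x):
    """FP32 value -> 32-bit integer pattern (IEEE 754)."""
    return struct.unpack('<I', struct.pack('<f', x))[0]

def bits_float(b):
    """32-bit integer pattern -> FP32 value."""
    return struct.unpack('<f', struct.pack('<I', b & 0xFFFFFFFF))[0]

def flip_bit(x, pos):
    """Transient fault: toggle bit `pos` (0 = mantissa LSB, 31 = sign)."""
    return bits_float(float_bits(x) ^ (1 << pos))

def stuck_at(x, pos, value):
    """Permanent fault: force bit `pos` to '0' or '1'."""
    b = float_bits(x)
    b = b | (1 << pos) if value else b & ~(1 << pos)
    return bits_float(b)
```

Flipping the highest exponent bit of a value like $1.0$ immediately yields \textit{Inf}, illustrating why high exponent bits are the critical ones.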
\vspace{-10pt}
\subsection{Experimental setup: Models, datasets and system}
We use the standard object detection models Yolov3 \cite{Redmon2018}, RetinaNet \cite{Lin2020}, and Faster-RCNN (F-RCNN) \cite{Ren2017}, together with the test datasets CoCo2017 \cite{Microsoft2017}, Kitti \cite{Geiger2013a}, and Lyft \cite{Lyft}.
We retrained Yolov3 on the Kitti and Lyft datasets, and the Faster-RCNN model on Kitti, for comparative experiments.
We used open-source trained weights for the rest of the models and datasets. The base performances of these models in terms of AP50 and mAP can be found in Fig. \ref{fig:fault_rates_1}. The parameter configurations used for these models (NMS threshold, confidence score, etc.) are taken from the original publications. Since fault injection is compute-intensive, we select a subset of $1000$ images for each dataset to perform the transient fault analysis and use a single Lyft sequence of $126$ images for the permanent fault analysis. All experiments adopt a single-precision floating-point format (FP32) according to the IEEE754 standard \cite{IEEE2019}. Our conclusions also apply to other floating-point formats with the same number of exponent bits, such as BF16 \cite{Intel2018}, since no relevant effect was observed from fault injections in mantissa bits.
\vspace{-10pt}
\section{Related Work}
Estimating the vulnerability or resilience of DNNs against hardware faults has recently received attention as a means to study the safety criticality of a model in real-time operation.
To this end, faults are injected into DNNs during inference either at
the application layer on weights/neurons \cite{Li2017, GeisslerQRAPDGP21}, or via neutron beam experiments (black-box techniques) \cite{DosSantos2019, Hou2020}.
The authors of Refs. \cite{Li2017, Beyer2020} considered transient faults, which are multiple event upsets occurring in data or buffers of DNN accelerators. Many prior works claimed DNNs to have an inherent tolerance towards faults. The authors of Ref. \cite{Beyer2020} studied the vulnerability of DNNs by injecting faults into data paths and buffers at different data type levels and quantified it in the form of SDC probabilities and FIT (failure in time) rates.
It is seen that errors in buffers propagate to multiple locations compared to errors in data-path.
These works estimated the resiliency of the model by injecting multiple faults during feed-forward inference, and the analysis is limited to image classification models such as AlexNet \cite{Krizhevsky2012}, VGG \cite{Simonyan2015}, and ResNet \cite{he2016deep}. Our analysis does not distinguish faults in buffers from faults in the data path: we assume the faults propagate to the application layer, where they may impact either the weights or the neurons, and hence analyze both independently, assuming equal probabilities.
The Ares framework \cite{Reagen2018} demonstrated that activations (neurons) in image classification networks are 50x more resilient than weights. These works focus mainly on fault models involving multiple bit flips captured by bit error rate (BER).
There is limited research done on understanding the vulnerability of object detection DNNs. The work in Ref. \cite{DosSantos2019} quantified the architectural vulnerability factor (AVF) of Yolov3 using metrics like SDC AVF, DUE AVF, and FIT rates.
That work studies fault propagation by injecting a random value into a selected register file rather than flipping a bit. The authors argue that not all SDCs are critical, as the change in the objects' confidence scores after fault injection is tolerable. However, the SDC definition used there is not straightforward: precision and recall are computed at the object level across all images combined, obscuring the actual vulnerability.
The vulnerability of object detection DNNs has also been studied by injecting faults with a neutron beam \cite{Lotfi2019}.
The authors analyzed both transient and permanent faults, but not on continuous video sequences. Moreover, the dataset considered in those experiments was largely limited to one object per image, and faults were additionally injected into the input image. We limit our fault injections to neurons and weights, and only to the convolutional layers of the DNNs, as the fully connected layers did not change much of the observed behavior. We believe fault-injected images do not fall into the category of model vulnerability; they rather belong to the adversarial input space under various adverse fault/noise models. The results of many of these works are hard to compare, as the failure and SDC definitions differ and do not follow a standard baseline. To the best of our knowledge, our paper is the first to demonstrate vulnerabilities of object detection models in detail, using the proposed \text{\novelMetricBF} metrics to measure severities at the image level.
In addition, we introduce the new metrics $\Aoccfp$ and $\Aoccfn$, quantifying the area occupancy of FP/FN blobs, which is essential for establishing the safety criticality of object detection models with respect to specific real-time applications.
\vspace{-10pt}
\section{Transient faults}
\label{sec:transient_faults}
Our evaluation concept is guided by the assumption that in safety-critical applications, both the miss of any existing object as well as the creation of any false positive object can be potentially hazardous.
Therefore, we consider the probability that such an SDC event occurs, and our primary metrics $\SDC$ and $\DUE$ (Eq.~\ref{eq:SDC_DUE}) capture the vulnerability of a model. For transient faults, this evaluation is performed in Sec.~\ref{sec:error_probabilities}. Accordingly, we independently inject 50,000 random single-bit flips into neurons and weights at each inference of the chosen test datasets.
Subsequently, Sec.~\ref{sec:error_severities} discusses the severity of each of those SDC events in terms of the average impact of additional FP and FN objects, their size, and confidence.
If a specific use case is given, the factors of probability and severity can be used to derive the risk of an error \cite{Koopman2019}.
\begin{wrapfigure}{R}{.35\textwidth}
\vspace{-20pt}
\begin{minipage}{\linewidth}
\centering
\includegraphics[width=\linewidth]{metrics/pr_curves_explain/prCurve_orig_corr_classes.pdf}
\end{minipage}
\vspace{-8pt}
\caption{Example of the AP50 PR curves of a few classes from Yolov3 and Kitti in the fault-free and faulty cases.}
\vspace{-18pt}
\label{fig:pr_curve_explain}
\end{wrapfigure}
\subsection{Corruption probability}
\label{sec:error_probabilities}
In Fig.~\ref{fig:fault_rates_1}, we present the fault injection campaigns of all studied networks, comparing the typical benchmark metrics AP-50 and mAP to the $\SDC$ and $\DUE$ rates as defined in Sec.~\ref{sec:sdc_due}.
Both Yolov3 and RetinaNet show a significant change in the AP-50 and mAP metrics under the injected neuron and weight faults: the accuracy can drop from $89.4\%$ to $34.4\%$ (AP-50) due to a single weight fault in the scenario of Yolov3 and Kitti.
On the other hand, F-RCNN shows little sensitivity to the injected faults ($\lesssim 0.8\%$ change in AP-50). At the same time, the $\SDC$ rates vary between $0.4\%$ and $1.8\%$ (neuron faults), and from $1.5\%$ to $4.2\%$ (weight faults). This discrepancy illustrates the need for a more realistic vulnerability estimate.
As shown below in Tab.~\ref{tab:sev_features}, fault injections in Yolov3 and RetinaNet tend to produce many FPs with statistically increased confidence. This leads to a drastic shift of the PR curves, as shown in the example in Fig.~\ref{fig:pr_curve_explain}, where only $3.2\%$ of the 1000 samples have corrupted predictions (a similar effect is demonstrated in Fig.~\ref{fig:pr_example}(c)). Rare classes are particularly susceptible to such faults, diminishing the class-averaged metric further. Since the induced false objects are concentrated on only a few images, the AP metric exaggerates the safety-related vulnerability of the model under soft errors (see also the discussion in Sec.~\ref{sec:ap}).\\
In contrast, the F-RCNN model architecture appears to be very robust against the generation of FPs (see Tab.~\ref{tab:sev_features}). Predictions made in the presence of a soft error have nearly the same confidence as in the fault-free case. However, faults do disturb the detection of objects, as a significant portion of FNs appears (on average between $10-33\%$). Nevertheless, the AP metrics for F-RCNN under fault injection are hardly affected: we observe only minor accuracy drops for both neuron and weight faults, while at the same time about $0.4-0.7\%$ ($1.5-1.8\%$) of the images suffer silent data corruption. In this case, the AP-based metric masks the potentially safety-critical impact of the underlying faults.
We further observe that for Yolov3, $\DUE$ events are generated in $\sim0.9\%$ of the neuron injection cases, while in RetinaNet and F-RCNN and for weight injections, those are negligible ($\lesssim 0.1\%$).
We conclude that the Yolov3 architecture stimulates neuron values that have a higher chance of being flipped to a configuration encoded as \textit{NaN} or \textit{Inf} (in FP32, all exponent bits have to be in state '1'), compared to RetinaNet and F-RCNN.
The weight values of all networks, on the other hand, are closely centered around zero, which makes it very unlikely to reach a \textit{NaN} or \textit{Inf} bit configuration \cite{GeisslerQRAPDGP21} (typically MSB and at least another exponential bit are in state '0' at the same time).
We observe that faults injected in weights at any bit of FP32 cause higher $\SDC$ rates than faults injected at the neuron level, showing roughly $2\times$ more adverse effects on the predictions.
\subsection{Corruption severity}
\label{sec:error_severities}
We next aimed to understand how faults leading to $\SDC$ events corrupt images and how the severity of an $\SDC$ event on a potential safety-critical application can be estimated.
Even though the relevance of a safety feature may depend on the specific application, we identified the following fundamental features as indicative measures of the severity of an SDC fault, see Tab.~\ref{tab:sev_features}:
\begin{itemize}
\item The average number of FP objects induced by a given $\SDC$ fault and the proportion of boxes lost due to a fault, referred to as $\FPd$ and $\FNnd$, respectively, as defined in Eq.~\ref{eq:delta_fp_fn} (the subscript 'n' denotes normalization, as the upper limit of FNs is known, in contrast to FPs).
\item The average size of objects in the presence and absence of SDC ($\avg(\text{size})$), since a significant change of the object size can be safety-critical.
\item The average area of the image that is erroneously occupied due to $\SDC$ induced FP objects ($\Aoccfp$) and the average portion of the vacant area created by not detecting the objects due to $\SDC$ faults ($\Aoccfn$).
\item The average confidence of objects in the presence and absence of $\SDC$, $\avg(\text{conf})$.
\end{itemize}
We motivate this choice more in the following subsections.
\begin{table}[!ht]
\vspace{-25pt}
\caption{Severity features averaged over all $\SDC$ events.}
\vspace{.1cm}
\centering
\resizebox{\textwidth}{!}
{\begin{tabular}{c||c|c|c|c|c|c}
~ & \textbf{Yolo+Coco} & \textbf{Yolo+Kitti} & \textbf{Yolo+Lyft} & \textbf{Retina+Coco} & \textbf{F-RCNN+Coco} & \textbf{F-RCNN+Kitti} \\ \hline \hline
\multicolumn{6}{l}{\textbf{Neurons:}} & \\ \hline \hline
$\avg(\FPd)$ & $\textbf{333}$ & $36$ & $174$ & $33$ & $0$ & $0$ \\ \hline
$\avg(\FNnd)(\%)$ & $42.2$ & $41.3$ & $\textbf{46.6}$ & $16.1$ & $25.3$ & $33.3$ \\ \hline
\begin{tabular}[x]{@{}c@{}}$\avg(\text{conf})$\\$(\text{corr}, \text{orig})$\end{tabular}
& $0.99, 0.52$ & $0.99, 0.51$ & $0.99, 0.65$ & $\textbf{0.79, 0.11}$ & $0.73, 0.73$ & $0.90, 0.89$ \\ \hline
\begin{tabular}[x]{@{}c@{}}$\avg(\text{size})/1e^3 \px$\\$(\text{corr}, \text{orig})$\end{tabular}
& $4.3, 11.2$ & $\textbf{34.5, 2.3}$ & $17.8, 7.3$ & $5.6, 20.3$ & $17.0, 18.6$ & $6.3, 6.8$ \\ \hline
$A_\text{fp-occ} (\%)$ & $36.8$ & $\textbf{62.5}$ & $59.8$ & $0.7$ & $1.7$ & $0.0$ \\ \hline
$A_\text{fn-vac} (\%)$ & $4.0$ & $5.1$ & $4.8$ & $\textbf{53.1}$ & $41.1$ & $39.8$ \\ \hline \hline
%
\multicolumn{6}{l}{\textbf{Weights:}} & \\ \hline \hline
$\avg(\FPd)$ & $\textbf{198}$ & $59$ & $145$ & $7$ & $0$ & $0$ \\ \hline
$\avg(\FNnd)(\%)$ & $23.3$ & $21.7$ & $21.3$ & $4.0$ & $9.6$ & $\textbf{29.6}$ \\ \hline
\begin{tabular}[x]{@{}c@{}}$\avg(\text{conf})$\\$(\text{corr}, \text{orig})$\end{tabular}
& $1.00, 0.53$ & $1.00, 0.52$ & $1.00, 0.65$ & $\textbf{0.62, 0.11}$ & $0.72, 0.73$ & $0.89, 0.88$ \\ \hline
\begin{tabular}[x]{@{}c@{}}$\avg(\text{size})/1e^3 \px$\\$(\text{corr}, \text{orig})$\end{tabular}
& $5.5, 12.1$ & $21.4, 2.5$ & $\textbf{30.8, 6.9}$ & $7.9, 19.8$ & $10.0, 15.0$ & $4.9, 5.0$ \\ \hline
$A_\text{fp-occ} (\%)$ & $40.1$ & $\textbf{81.0}$ & $79.1$ & $1.5$ & $0.3$ & $0.0$ \\ \hline
$A_\text{fn-vac} (\%)$ & $15.1$ & $2.5$ & $6.8$ & $42.3$ & $77.8$ & $\textbf{85.8}$ \\ \hline \hline
\end{tabular}}
\label{tab:sev_features}
\vspace{-25pt}
\end{table}
\vspace{-10pt}
\subsubsection{Fault-induced object generation and loss}
\label{subsec:Fault-induced object generation and loss}
Object detection is commonly used in scenarios where the number of objects, combined with their location and class, is input to safety-critical decision making. Examples include face detection or vehicle counting in traffic surveillance, automated driving, or medical object detection.
Therefore, to assess $\SDC$ severity, we quantify the impact of a fault injection by the differences (a loss in TPs equals the gain in FNs)
\vspace{-5pt}
\begin{align}
\vspace{-25pt}
\begin{split}
\FPd &= \left( FP_{\text{corr}}- FP_{\text{orig}} \right), \\
\FNnd &= (TP_{\text{orig}}- TP_{\text{corr}})/TP_{\text{orig}},
\end{split}
\vspace{-25pt}
\label{eq:delta_fp_fn}
\end{align}
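Eq.~\ref{eq:delta_fp_fn} amounts to the following simple computation per SDC event (illustrative sketch; names are not from our code base):

```python
def fp_fn_deltas(tp_orig, fp_orig, tp_corr, fp_corr):
    """Severity deltas per SDC event: number of induced FPs and the
    normalized share of lost true positives (a loss in TPs equals the
    gain in FNs)."""
    delta_fp = fp_corr - fp_orig
    delta_fn_norm = (tp_orig - tp_corr) / tp_orig if tp_orig else 0.0
    return delta_fp, delta_fn_norm
```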
In Tab.~\ref{tab:sev_features}, we observe that all Yolov3 and RetinaNet scenarios exhibit large numbers of fault-induced FPs ($\gg 100$ in Yolov3 and Coco experiments). For neuron faults, the generation of FPs is, on average, more pronounced.
Furthermore, the normalized FN rates show that already a single fault can cause a significant loss of accurate positive detections. Average FN rates are higher for neuron faults than weight faults and reach averages up to $47\%$ (Yolov3 and Lyft).
F-RCNN models are robust against the generation of FP objects but not immune against fault-induced misses (e.g. Fig.~\ref{fig:example Faster-RCNN}).
The number of generated FPs and FNs varies in a broad sample range, up to the maximum limit of allowed detections (here $1000$), due to the inhomogeneous impact of flips in different bit positions (see Sec.~\ref{sec:bit-wise transient}).
In some situations, additional objects created by faults will match actual ground truth objects, leading to a negative FP or FN difference. This effect originates from the imperfect performance of the original fault-free model and is tolerated here due to the minor impact.
By relaxing the class matching constraints from one-to-one correspondence to no class matching, we can further segment the type of FPs that the $\SDC$ events cause.
It appears that situations where an FP is due to a change in the class label only, or to a shift of the bounding box only, are rare (on average $\lesssim 3$ for Yolo models, $0$ for others). In most cases, \textit{both} the bounding box gets shifted \textit{and} the class label is mixed up, or predicted objects cannot be matched with any ground truth object at all.
\vspace{-15pt}
\subsubsection{Object size and confidence}
Box sizes and confidence values are other severity indicators since large erroneous objects take up a more significant portion of the image space, and high-confidence objects might be handled with priority in some use cases.
Tab.~\ref{tab:sev_features} shows the change of the average box size and confidence of all model detections across the identified $\SDC$ events.
In most models, the typical box size is reduced in the presence of faults, which is partially due to the creation of boxes with zero width or height.
However, there are also scenarios where faults tend to induce overly large objects (Yolov3 and Kitti, Lyft, see Tab.~\ref{tab:sev_features}) that can even fill the entire picture.
The average confidence score of an object after fault injection significantly increases for Yolov3 and RetinaNet, while there is hardly any impact on F-RCNN predictions.
This explains why confidence-sensitive metrics based on AP react differently to fault injections in the respective architectures; see the discussion in Sec.~\ref{sec:ap}.
\vspace{-15pt}
\subsubsection{Area occupancy}
Safety-related decision-making in a dynamic environment is based, most importantly, on the detected free and occupied space. For example, an automated vehicle will determine a driving path depending on the detected drivable space.
A large number of false-positive objects, even when small in size, can in combination cover a significant portion of the image, leaving only little free space. Conversely, in some situations they may overlay each other and occupy only little space.
To reflect a realistic impact on the free space, we first cluster all FP and FN objects into \textit{blobs} by projecting them to a binary space of occupancy and vacancy (see Fig.~\ref{fig:blob_example}).
As we are only interested in fault-induced false objects, our blobs for a given frame at time $t$ are defined as follows:
\vspace{-10pt}
\begin{equation}
\begin{aligned}
FP_{\text{blob}} = \mathcal{I}(\text{det}_\text{corr} - \text{det}_\text{orig}), \\
FN_{\text{blob}} = \mathcal{I}(\text{det}_\text{orig} - \text{det}_\text{corr}).
\label{eq:blob corr}
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
\Aoccfp = |FP_{\text{blob}}|/ A_{\text{image}}, \\
\Aoccfn = |FN_{\text{blob}}|/ |\mathcal{I}(\text{det}_\text{orig})|.
\label{eq:area_blob corr}
\end{aligned}
\end{equation}
In Eq.~\ref{eq:blob corr}, $\text{det}$ denotes the set of all detected bounding boxes (TP and FP), and $\mathcal{I}(x)$ represents the pixel-wise projection to binary occupancy space, i.e., for any pixel $u$ in a blob $x$ it is $\mathcal{I}(u\leq 0)=0$, $\mathcal{I}(u> 0)=1$ (see Fig.~\ref{fig:blob_example}). We define the occupancy coefficients in Eq.~\ref{eq:area_blob corr}, where $A_{\text{image}}$ is the size of the image in pixels and $|\ldots|$ denotes the sum of all nonzero pixels in a blob.
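The blob construction of Eqs.~\ref{eq:blob corr} and \ref{eq:area_blob corr} can be sketched with NumPy as follows, assuming integer pixel boxes $(x_1, y_1, x_2, y_2)$; this is an illustrative rasterization, not our evaluation code:

```python
import numpy as np

def boxes_to_mask(boxes, h, w):
    """Rasterize (x1, y1, x2, y2) boxes into a binary occupancy mask."""
    mask = np.zeros((h, w), dtype=bool)
    for x1, y1, x2, y2 in boxes:
        mask[max(0, y1):min(h, y2), max(0, x1):min(w, x2)] = True
    return mask

def occupancy(det_orig, det_corr, h, w):
    """A_fp-occ and A_fn-vac from fault-free and corrupted detections."""
    m_orig = boxes_to_mask(det_orig, h, w)
    m_corr = boxes_to_mask(det_corr, h, w)
    fp_blob = m_corr & ~m_orig   # pixels newly occupied under the fault
    fn_blob = m_orig & ~m_corr   # pixels that lost coverage
    a_fp = fp_blob.sum() / (h * w)                # normalized by image area
    a_fn = fn_blob.sum() / max(m_orig.sum(), 1)   # normalized by orig. blob
    return a_fp, a_fn
```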
In Tab.~\ref{tab:sev_features}, we see that Yolo+Kitti creates significantly fewer $\FPd$ than Yolo+Coco, but the average $\Aoccfp$ in this case is $\sim 2\times$ greater than the $\Aoccfp$ of Yolo+Coco. This can also be observed in the feature $\avg(\text{size})/1e^3$ (the average bounding box size of all detections combined, TPs+FPs).
In the case of Yolo+Kitti, $\avg(\text{size})/1e^3$ is $15\times$ and $\sim 8\times$ larger than for the original detections when a fault is injected in neurons and weights, respectively.
This implies that $\FPd$ alone cannot determine the safety impact during an $\SDC$ event.
Similarly, F-RCNN creates no $\FPd$, but large vacant space $\FNnd$ by missing TPs. F-RCNN+Kitti under weight faults is the most safety-critical case, as its $\Aoccfn$ is the highest among the studied models. Furthermore, in the case of neuron faults, RetinaNet and F-RCNN have higher $\Aoccfn$.
\begin{figure}[!h]
\centering
\begin{subfigure}[b]{0.48\linewidth}
\centering
\includegraphics[width=0.9\textwidth, height=4cm]{metrics/fp_diff_bpos_all_none_neurons.png}
\caption{FP Neurons}
\end{subfigure}
\begin{subfigure}[b]{0.48\linewidth}
\centering
\includegraphics[width=0.9\textwidth, height=4cm]{metrics/fp_diff_bpos_all_none_weights.png}
\caption{FP Weights}
\end{subfigure}
%
\begin{subfigure}[b]{0.48\linewidth}
\centering
\includegraphics[width=0.9\textwidth, height=4cm]{metrics/fn_diff_bpos_all_none_neurons.png}
\caption{FN Neurons}
\end{subfigure}
\begin{subfigure}[b]{0.48\linewidth}
\centering
\includegraphics[width=0.9\textwidth, height=4cm]{metrics/fn_diff_bpos_all_none_weights.png}
\caption{FN Weights}
\end{subfigure}
\vspace{-8pt}
\caption{Bit-wise analysis of the severity of $\SDC$ events. Diagrams show the FP difference (a), (b) and FN rates (c), (d) for neurons and weights, respectively. Bit 31 is the sign bit, bit 30 the most significant bit (MSB) of the exponent, and bit 23 the lowest exponent bit.}
\label{fig:FPFNvsbit}
\vspace{-8pt}
\end{figure}
\vspace{-10pt}
\subsection{Bit-wise analysis of false object count}
\label{sec:bit-wise transient}
The severity of an $\SDC$ event typically depends on the magnitude of the altered values, where values with a considerable absolute value are more likely to propagate and disrupt the network predictions \cite{Li2017, Hong2019, GeisslerQRAPDGP21}.
Therefore, the severity features are expected to form a non-uniform distribution depending on the flipped bit position.
To gain a better intuition, we here choose to present a bit-wise analysis of the $\FPd$ and $\FNnd$ samples during the $\SDC$ events.
To quantify the impact of bits, we define the bit-averaged false-positive difference, $\bitavg(\FPd)$, which intuitively tells us how many FPs an SDC event with a particular bit position induces, on average.
Similarly, for FNs, the normalized bit-averaged difference, $\bitavg(\FNnd)$, represents what portion of the originally detected objects disappears due to an SDC event with a specific bit position.
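The bit-averaged quantities are plain group-wise means over the SDC events; a minimal sketch (illustrative data layout, not our analysis pipeline):

```python
from collections import defaultdict

def bit_averaged(sdc_events):
    """sdc_events: list of (bit_pos, delta_fp, delta_fn_norm) tuples,
    one per SDC event. Returns, per flipped bit position, the mean
    induced FP difference and the mean normalized FN rate."""
    groups = defaultdict(list)
    for bit, dfp, dfn in sdc_events:
        groups[bit].append((dfp, dfn))
    return {bit: (sum(d[0] for d in vals) / len(vals),
                  sum(d[1] for d in vals) / len(vals))
            for bit, vals in groups.items()}
```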
In Fig.~\ref{fig:FPFNvsbit}, we observe that, for neuron faults, the additional FPs are typically caused by bit flips in one of the three highest exponent bits, as long as those do not lead to DUE instead.
For weight faults, we find a situation similar to classifier networks where the specific value range of weights centered around zero is encoded in bit constellations where the MSB is in state '0' while the next higher exponential non-MSB bits are in state '1', see Ref. \cite{GeisslerQRAPDGP21}.
This explains why almost only MSB flips induce large values and $\SDC$ (with a high number of FPs).
Given the respective relevant neuron and weight bit flips, the $\FNnd$ ratio increases up to $90\%$ in some of the models (meaning that this portion of all true positive detections is lost), as shown in Fig.~\ref{fig:FPFNvsbit}(c),(d). In particular, due to MSB and other high exponent bit flips, the average $\bitavg(\FNnd)$ is $\sim 47\%$. We observe that FN alterations can, to some extent, also be induced by lower exponent bits.
\vspace{-15pt}
\begin{figure}[h]
\centering
\begin{subfigure}[height=4cm]{0.24\textwidth}
\centering
\frame{\includegraphics[width=\textwidth]{Pics/examples/yolov3_coco2017_76625_orig.png}}
\caption{orig}
\end{subfigure}
\begin{subfigure}[height=4cm]{0.24\textwidth}
\centering
\frame{\includegraphics[width=\textwidth]{Pics/examples/yolov3_coco2017_76625_corr.png}}
\caption{corr}
\end{subfigure}
\begin{subfigure}[height=4.cm]{0.24\textwidth}
\centering
\frame{\includegraphics[width=\textwidth]{Pics/examples/test_fp_blob.png}}
\caption{$FP_{\text{blob}}$}
\end{subfigure}
\begin{subfigure}[height=4.cm]{0.24\textwidth}
\centering
\frame{\includegraphics[width=\textwidth]{Pics/examples/test_fn_blob.png}}
\caption{$FN_{\text{blob}}$}
\end{subfigure}
%
\vspace{-8pt}
\caption{Illustration of the clustering of bounding boxes into binary occupancy blobs. In this example we find from (c) and (d) that $\Aoccfp=33.3\%$ and $\Aoccfn=7.5\%$ (white pixels indicate space occupied by fault-induced FPs).}%
\label{fig:blob_example}
\end{figure}
\section{Permanent faults}
\label{sec:permanent_faults}
Our analysis in this section aims to understand whether permanent stuck-at faults (see Sec.~\ref{sec:hardware faults vocabulary}) lead to temporally consistent errors on an object level, resulting in continuous failure.
The object detection model typically receives sequential images from a continuous video stream in real time applications.
We assume a permanent hardware fault hitting the inference module, which in turn causes persistent misdetections on consecutive images.
In this case, the faults will appear either as ghost objects in the output (FPs) or lead to a consecutive miss of an object (FNs); both situations can be highly safety-critical.
A perception pipeline typically also includes a tracking module for detected objects, which can then be used to predict an object's trajectory and make an informed decision concerning the next maneuver of the vehicle. Therefore, we simulate a simple tracking of instantaneous fault-induced FPs and FNs clusters to determine whether they would be persistent in a realistic scenario.
For the analysis in this section, we use Yolov3 and the Lyft dataset, the only dataset in our analysis that provides consecutive images from video sequences (we consider a Lyft sequence from the \textit{CAM\_FRONT} channel featuring $126$ frames).
From our experiments with transient faults injections in Sec.~\ref{sec:transient_faults}, we understand that no effect is observed by altering mantissa bits or by flips in the direction $'1' \to\ '0'$ since this does not generate large values.
Therefore, the experiments of this section are accelerated by using only \text{\statone} faults in the exponential bits of FP32. However, results have been rescaled to account for the probability of injections in all $32$ bits.
We design an experiment in which we inject $1000$ single random permanent faults (in exponent bits) into neurons and weights independently for the sequence considered above, in order to understand their safety impact.
\vspace{-12pt}
\subsection{Evaluating fault persistence}
\vspace{-8pt}
We track the movement of blobs (Eq. \ref{eq:blob corr}) using a simple pixel-wise M/N tracking scheme \cite{Blackman1999}.
The proposed tracker incorporates the following criteria to establish that a given pixel of an FP or FN blob is persistent at a given frame $t$: i) the pixel is occupied in at least $M$ of the last $N$ consecutive frames (if it is also occupied in the current frame, this corresponds to a track update; otherwise it is a coasting track), ii) if the occupancy of that pixel in the last $N$ frames is below $M$, we check the vicinity of that pixel for past occupancy. Deploying a simplified unidirectional motion model, we register a persistent dynamic pixel for the current frame if occupancies above $M$ are found in the past $N$ frames within a sufficiently close vicinity (here $50$ pixels, abbr. px).\\
For FN blobs, we omit coasting due to the nature of detection misses. After registering the persistent pixels computed by the pixel-wise tracker, the occupied ($\Aoccfp$) or free-space ($\Aoccfn$) area is calculated using Eq. \ref{eq:area_blob corr}.
The tracking parameters are chosen as $(M/N)=(10/15)$: the upper frame number is hereby estimated from a critical reaction time to a persistent false target ($\approx0.5\text{s}$) and the frame rate of the Lyft sequence ($30\text{Hz}$), leading to $N=0.5\text{s}\cdot 30\text{s}^{-1}=15$ key sequential frames. This estimated upper number is application-specific and can be adapted to the relevant safety specifications.
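The core M-out-of-N persistence test can be sketched as below; this simplified version omits the coasting and vicinity/motion-model extensions described above and uses illustrative names:

```python
import numpy as np

def persistent_pixels(blob_history, m=10, n=15):
    """blob_history: list of binary occupancy masks, oldest first.
    A pixel counts as persistent at the latest frame if it was occupied
    in at least m of the last n frames (no coasting / motion model)."""
    window = np.stack(blob_history[-n:], axis=0)  # shape (<=n, H, W)
    return window.sum(axis=0) >= m                # boolean persistence map
```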
\vspace{-8pt}
\subsection{Corruption probability and severity}
\vspace{-5pt}
In Fig.~\ref{fig:FP_blob_analysis: tracking} and Fig.~\ref{fig:FN_blob_analysis: tracking}, we show examples of persistent FP and FN blobs in selected frames. The occupied ($\Aoccfp$) and free ($\Aoccfn$) space of an entire video sequence is presented in Fig.~\ref{fig:FP_blob_analysis: explanation} and Fig.~\ref{fig:FN_blob_analysis: explanation}.
For orientation, we also give the area difference between original and ground truth predictions (Fig.~\ref{fig.Tracked area explaination}), $ \Aoccfporig = |\mathcal{I}(\text{det}_\text{orig} - \text{gt})|/ A_{\text{image}}$ and $ \Aoccfnorig = |\mathcal{I}(\text{gt} - \text{det}_\text{orig})|/ |\mathcal{I}(\text{det}_\text{orig})|$ (where \text{gt} is ground truth).
We neglect these contributions originating from model imperfection, as they are a function of training and are found to be small (in the above examples $< 1\%$) compared to the fault-induced occupancy ($\sim66\%$ and $\sim62\%$, respectively). The example demonstrates that tracked FP blobs may persist across the entire image sequence and occupy a significant amount of free space. Similarly, a significant portion of the image can be lost persistently across the sequence (reaching as high as $\sim96\%$).
Our statistical evaluation from $1000$ permanent fault injections on the selected image sequence is given in Fig.~\ref{fig:permanent_faults_stats} for FP and FN.
Fig.~\ref{fig:permanent_faults_stats}(a) and (d) show the SDC probability (in the form of persistent occurrence), while panels (b)-(c) and (e)-(f) detail the severity.
We register an SDC for a given fault if any persistent FP or FN is found during the sequence with a severity of at least level $L$. The severity $L$ is quantified as the average area occupied by the blob (for FP normalized by the image size, for FN by the TP blob size, see above).
The severity levels are varied from $0\%$ to $15\%$ in Fig.~\ref{fig:permanent_faults_stats} to illustrate the effect of softening or hardening of the safety requirements.
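Given the per-fault maximum persistent-blob area, the SDC probability at a severity threshold $L$ amounts to a simple counting step, sketched below (hypothetical helper name; the per-fault severities are assumed to have been computed beforehand by the tracker):

```python
import numpy as np

def sdc_probability(max_severities, level):
    """Fraction of injected faults registered as SDC at threshold `level`.

    max_severities: one value per fault injection, the maximum persistent
    FP (or FN) blob area found during the sequence (0.0 if none); `level`
    is the severity threshold as an area fraction, e.g. 0.15 for 15%.
    """
    sev = np.asarray(max_severities, dtype=float)
    return float(np.mean(sev > level))
```

Sweeping `level` from 0 to 0.15 reproduces the softening/hardening curves of the figure.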
As the severity of a fault is again expected to depend on the bit position of the injected fault, we present both bit-selected and bit-averaged numbers in Fig.~\ref{fig:permanent_faults_stats}(b,c,e,f).
In this figure, the permanent faults in neurons and weights have a probability of $1.8\%$ and $3\%$, respectively, of creating persistent ghost FP objects with a minimal area of $L>0$. With $L>15\%$ of the image area, this reduces to $0.9\%$ and $2.9\%$, respectively. On average, faults hitting the MSB of weights in this model have a $96\%$ probability of manifesting as a persistent FP blob with an area of $>81\%$.
On the other hand, persistent FN blobs incorrectly indicating vacant spaces occur with a much lower chance.
Bitflips cause persistent objects only in the highest exponential bits in the case of neurons, or in the MSB bits in the case of weights. This observation is consistent with the findings from transient faults in Sec.~\ref{sec:transient_faults}.
Using the given area occupancy metrics, permanent weight faults have a higher severity than neuron faults; in particular, weight faults on average induce massive ghost FP blobs of $>83\%$ of the image area.
\section{INTRODUCTION}
The pair density (PD) functional theory has recently attracted particular
interest because it provides the obvious way to improve on the density
functional theory (DFT)\cite{1,2,3,4,5,6,7,8,9}. Ziesche first proposed
the PD functional theory about a decade ago\cite{1,2},
and then many workers followed his work and
have developed a variety of approaches\cite{3,4,5,6,7,8,9}.
Very recently, we have proposed an approximate scheme for calculating the PD
on the basis of the extended constrained-search theory\cite{10,11,12,13,14}.
By introducing the noninteracting reference system\cite{10,11},
the resultant PD corresponds to the best solution within the set of the PDs
that are constructed from the single Slater determinant (SSD).
This PD functional theory has two kinds of merits. The first one is
that the reproduced PD is necessarily $N$-representable. This is a strong
merit because the necessary and sufficient conditions for
the $N$-representable PD have not yet been
known\cite{15,16,17,18,19,20,21,22,23,24,25,26}.
The second merit is the tractable form of the kinetic energy
functional. The kinetic energy functional cannot exactly be written by using
the PD alone. Some approximation is required to be introduced\cite{7}. In this
theory, we have successfully given an approximate form of the kinetic energy
functional with the aid of the coordinate scaling of electrons\cite{10,11}.
On the other hand, we also have the remaining problem in it\cite{10}. Namely,
there exists the possibility that the solution might be far from the correct
value of the ground-state PD. This is because the searching region of the
PDs may be smaller than the set of $N$-representable PDs. In order to improve
the PD functional theory with keeping the above-mentioned merits, we have to
extend the searching region of the PDs to the set of $N$-representable PDs as
closely as possible. At least, we had better extend the searching region
beyond the set of the SSD-representable PDs\cite{27}.
In this paper, we shall employ the strategy for reproducing the PDs not by
means of the SSD, but through the correlated wave function. As the
correlated wave function, we adopt the Jastrow wave function that is defined
as the SSD multiplied by the correlation function\cite{28,29}. Owing to the
correlation function, the searching region substantially becomes larger than
the set of the SSD-representable PDs. Of course, the reproduced PD is kept
$N$-representable because the PD is calculated via the Jastrow wave function
that is a kind of antisymmetric wave functions. Also the second merit is not
missed in the present scheme by utilizing the result of the scaling property
of the kinetic energy functional.
The organization of this paper is as follows. In Sec. II, we provide preliminary definitions of various quantities that appear in the following sections. In Sec. III, by means of the variational principle with respect to the PD, we derive simultaneous equations that yield the best PD, which is superior to the previous one\cite{10}. These equations are quite tractable; a computational method for solving them is also proposed in Sec. III.
\section{PRELIMINARY DEFINITIONS IN THE PD FUNCTIONAL THEORY}
In this section we give the preliminary definitions which will be used in
the present scheme. The PD is defined as the diagonal elements of the
spinless second-order reduced density matrix, i.e.,
\begin{equation}
\label{eq1}
\gamma ^{(2)} ({\rm {\bf r{r}'}};{\rm {\bf r{r}'}})=\left\langle \Psi
\right|\frac{1}{2}\int\!\!\!\int {\hat {\psi }^{+}({\rm {\bf r}},\,\eta
)\hat {\psi }^{+}({\rm {\bf {r}'}},\,{\eta }')\hat {\psi }({\rm {\bf
{r}'}},\,{\eta }')\hat {\psi }({\rm {\bf r}},\,\eta )} \mbox{d}\eta
\mbox{d}{\eta }'\left| \Psi \right\rangle ,
\end{equation}
where $\hat {\psi }({\rm {\bf r}},\,\eta )$
and $\hat {\psi }^{+}({\rm {\bf r}},\,\eta )$
are field operators of electrons, and $\Psi $ is the
antisymmetric wave function, and ${\rm {\bf r}}$ and $\eta $ are spatial and
spin coordinates, respectively. We shall consider a system, the Hamiltonian
of which is given by
\begin{equation}
\label{eq2}
\hat {H}=\hat {T}+\hat {W}+\int {\mbox{d}{\rm {\bf r}}\hat {\rho }({\rm {\bf
r}})v_{ext} ({\rm {\bf r}})} ,
\end{equation}
where $\hat {T}$, $\hat {W}$ and $\hat {\rho }({\rm {\bf r}})$ are operators
of the kinetic energy, electron-electron interaction and electron density,
respectively, and $v_{ext} ({\rm {\bf r}})$ stands for the external
potential. In a similar way to the extended constrained-search
theory\cite{12,13,14}, the universal functional is defined as
\begin{eqnarray}
\label{eq3}
F[\gamma ^{(2)}]&=&
\mathop {\mbox{Min}}\limits_{\Psi \to \gamma^{(2)}
({\rm {\bf r{r}'}};{\rm {\bf r{r}'}})} \left\langle \Psi
\right|\hat {T}+\hat {W}\left| \Psi \right\rangle \nonumber \\
&=&\left\langle {\Psi [\gamma^{(2)}]} \right|\hat {T}+\hat {W}\left|
{\Psi [\gamma ^{(2)}]} \right\rangle,
\end{eqnarray}
where $\Psi \to \gamma ^{(2)} ({\rm {\bf r{r}'}};{\rm {\bf r{r}'}})$
denotes the searching over all antisymmetric wave functions that yield a
prescribed $\gamma ^{(2)} ({\rm {\bf r{r}'}};{\rm {\bf r{r}'}})$. In the
second line, the minimizing wave function is expressed as $\Psi [\gamma
^{(2)}]$. By using Eq. (\ref{eq3}), the Hohenberg-Kohn theorems for the PD
functional theory can be easily proved\cite{1,5}. Here we show only their
results\cite{10}:
\begin{equation}
\label{eq4}
\Psi _{0} =\Psi [\gamma _{0}^{(2)} ],
\end{equation}
and
\begin{eqnarray}
\label{eq5}
E_{0} &=&\mathop {\mbox{Min}}\limits_{\gamma ^{(2)}} E[\gamma ^{(2)}]
\nonumber \\
&=& E[\gamma _{0}^{(2)} ],
\end{eqnarray}
where $\Psi _{0}$, $E_{0}$ and $\gamma _{0}^{(2)}$ are the
ground-state wave function, ground-state energy and ground-state PD,
respectively, and where $E[\gamma ^{(2)}]$ is the total energy functional
that is given by
\begin{equation}
\label{eq6}
E[\gamma ^{(2)}]=F[\gamma ^{(2)}]+\frac{2}{N-1}\int\!\!\!\int {\mbox{d}{\rm
{\bf r}}\mbox{d}{\rm {\bf {r}'}}v_{ext} ({\rm {\bf r}})\gamma
^{(2)}({\rm {\bf r{r}'}};{\rm {\bf r{r}'}})} .
\end{equation}
Equations (\ref{eq4}) and (\ref{eq5}) correspond to the first
and second Hohenberg-Kohn theorems, respectively. Let us suppose that
\begin{equation}
\label{eq7}
T[\gamma ^{(2)}]=\left\langle {\Psi [\gamma ^{(2)}]} \right|\hat {T}\left|
{\Psi [\gamma ^{(2)}]} \right\rangle ,
\end{equation}
then Eq. (\ref{eq6}) is rewritten as
\begin{equation}
\label{eq8}
E[\gamma ^{(2)}]=T[\gamma ^{(2)}]+e^{2}\int\!\!\!\int {\mbox{d}{\rm {\bf
r}}\mbox{d}{\rm {\bf {r}'}}\frac{\gamma ^{(2)}({\rm {\bf r{r}'}};{\rm {\bf
r{r}'}})}{\left| {{\rm {\bf r}}-{\rm {\bf {r}'}}} \right|}}
+\frac{2}{N-1}\int\!\!\!\int {\mbox{d}{\rm {\bf r}}\mbox{d}{\rm {\bf
{r}'}}v_{ext} ({\rm {\bf r}})\gamma ^{(2)}({\rm {\bf r{r}'}};{\rm
{\bf r{r}'}}),}
\end{equation}
where, in the second term, we use the fact that the expectation value of
$\hat {W}$ is exactly written in terms of
$\gamma ^{(2)} ({\rm {\bf r{r}'}};{\rm {\bf r{r}'}})$.
Equation (\ref{eq8}) is the starting expression for the
total energy functional in the PD functional theory.
As mentioned in Sec. I, the kinetic energy of the PD functional theory
cannot be exactly expressed by the PDs alone. In other words, we have to
employ the approximate form in Eq. (\ref{eq8}). So far, the kinetic energy
functional of the PD functional theory has been developed by several
workers\cite{10,30,31}. In this paper, we make use of an approximate form of
the kinetic energy functional which has been derived by utilizing the
scaling property of the kinetic energy functional\cite{10,31}.
The explicit form is given by
\begin{equation}
\label{eq9}
T[\gamma ^{(2)}]=K\int\!\!\!\int {\mbox{d}{\rm {\bf r}}\mbox{d}{\rm {\bf
{r}'}}\gamma ^{(2)}({\rm {\bf r{r}'}};{\rm {\bf r{r}'}})^{\frac{4}{3}}} ,
\end{equation}
where $K$ is an arbitrary constant.
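As a toy illustration, the approximate functional of Eq. (\ref{eq9}) can be evaluated numerically on a discretized grid (a hedged one-dimensional sketch with hypothetical names; the physical integral runs over two three-dimensional coordinates):

```python
import numpy as np

def kinetic_energy_functional(gamma, dr, k_const=1.0):
    """Grid approximation of T[gamma] = K * iint gamma(r, r')**(4/3) dr dr'.

    gamma: (n, n) array sampling the pair density gamma(r_i, r'_j) on a
    uniform grid of spacing dr. This one-dimensional toy discretization
    is only illustrative of the 4/3-power form of the functional.
    """
    # clip guards against small negative values from numerical noise
    integrand = np.power(np.clip(gamma, 0.0, None), 4.0 / 3.0)
    return k_const * integrand.sum() * dr * dr
```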
\section {SINGLE-PARTICLE EQUATIONS}
Equation (\ref{eq5}) corresponds to the variational principle
with respect to the PD. The searching region of the PDs should be of
course within the set of $N$-representable PDs.
For that purpose, we shall introduce the searching
region of the PDs that are calculated from the correlated wave functions.
The searching region is substantially extended as compared with the previous
theory\cite{10}, because it is restricted within the set of SSD-representable
PDs. Extension of the searching region can be regarded as one of appropriate
developments of the PD functional theory\cite{27}.
In this paper we adopt the Jastrow wave function as the correlated wave
function. The explicit evaluation of the PD using the Jastrow wave function
is actually very hard\cite{28,29}. As a consequence, the approximation
technique to evaluate the PD has been developed especially
in the field of nuclear physics. The expectation value of the PD operator
with respect to the Jastrow wave function can be systematically expressed
with the aid of the Yvon-Mayer diagrammatic technique\cite{28,29}.
Here we shall use the lowest-order approximation of the expectation value
of the PD operator.
The Jastrow wave function is defined as \cite{28,29}
\begin{equation}
\label{eq10}
\Psi _{J} (x_{1} ,x_{2} ,\cdot \cdot \cdot \cdot \cdot ,x_{N}
)=\frac{1}{\sqrt {C_{N} } }\prod\limits_{1\le i<j\le N} {f(r_{ij} )\Phi
_{SSD} (x_{1} ,x_{2} ,\cdot \cdot \cdot \cdot \cdot ,x_{N} ),}
\end{equation}
where $\Phi _{SSD} (x_{1} ,x_{2} ,\cdot \cdot \cdot \cdot \cdot ,x_{N} )$ is
the SSD, and where
$f(r_{ij} )=
f\left( {\left| {{\rm {\bf r}}_{i} -{\rm {\bf r}}_{j} } \right|} \right)$
is the correlation function,
and where $C_{N} $ is the normalization constant.
Suppose that the correlation function is
chosen to satisfy the cusp condition for the antisymmetric wave function.
The lowest-order approximation for the expectation value of the PD operator
is given by \cite{28}
\begin{equation}
\label{eq11}
\gamma ^{(2)} ({\rm {\bf r{r}'}};{\rm {\bf r{r}'}})=\left| {f\left(
{\left| {{\rm {\bf r}}-{\rm {\bf {r}'}}} \right|} \right)} \right|^{2}\gamma
_{SSD}^{(2)} ({\rm {\bf r{r}'}};{\rm {\bf r{r}'}}),
\end{equation}
where $\gamma _{SSD}^{(2)} ({\rm {\bf r{r}'}};{\rm {\bf r{r}'}})$ is the
expectation value of the PD operator with respect to the SSD. Supposing $N$
orthonormal spin orbitals of the SSD are denoted as $\left\{ {\psi _{\mu }
(x)} \right\}$, then Eq. (\ref{eq11}) is explicitly expressed as
\begin{eqnarray}
\label{eq12}
\gamma ^{(2)} ({\rm {\bf r{r}'}};{\rm {\bf r{r}'}})=\frac{1}{2}\left|
{f\left( {\left| {{\rm {\bf r}}-{\rm {\bf {r}'}}} \right|} \right)}
\right|^{2}\sum\limits_{\mu _{1} ,\mu _{2} =1}^{N} {\int\!\!\!\int
{\mbox{d}\eta \mbox{d}{\eta }'
\left\{ {\psi _{\mu _{1} }^{\ast } (x)\psi _{\mu _{2} }^{\ast }
({x}')\psi _{\mu _{1} } (x)\psi _{\mu _{2} } ({x}')} \right.} } & &
\nonumber \\
-\left. {\psi _{\mu _{1} }^{\ast } (x)\psi _{\mu _{2} }^{\ast } ({x}')
\psi _{\mu _{2} } (x)\psi _{\mu _{1} } ({x}')} \right\}&.&
\end{eqnarray}
Next, let us consider the variational principle with respect to the PD, i.e.
Eq. (\ref{eq5}). The variation of the PD is performed via the spin
orbitals of Eq. (\ref{eq12}) with the restriction that they
are orthonormal to each other. Using the Lagrange method of undetermined
multipliers, we minimize the following functional without the restriction:
\begin{equation}
\label{eq13}
\Omega \left[ {\left\{ {\psi _{\mu } } \right\}} \right]=E\left[ {\gamma
^{(2)} } \right]-\sum\limits_{\mu ,\nu } {\varepsilon _{\mu \nu } \left\{
{\int {\psi _{\mu }^{\ast } (x)\psi _{\nu } (x)\mbox{d}x}
-\delta _{\mu \nu } }
\right\}} ,
\end{equation}
where Eqs. (\ref{eq8}), (\ref{eq9}) and (\ref{eq12}) are used
in the first term on the right-hand side. The minimizing condition
$\delta \Omega \left[ {\left\{ {\psi _{\mu } } \right\}} \right]=0$
immediately leads to
\begin{eqnarray}
\label{eq14}
& &\sum\limits_\nu {\int {\mbox{d}x_{1} \left\{ {\psi _{\nu }^{\ast }(x_{1} )
\psi_{\nu } (x_{1} )\psi _{\mu } (x)-\psi _{\nu }^{\ast } (x_{1} )\psi
_{\nu } (x)\psi _{\mu } (x_{1} )} \right\}} } \nonumber \\
&\times& \left| {f\left( {\left| {{\rm {\bf r}}-{\rm {\bf r}}_{1} } \right|}
\right)} \right|^{2}\left\{ {\frac{4K}{3}\gamma ^{(2)}({\rm {\bf rr}}_{1}
;{\rm {\bf rr}}_{1} )^{\frac{1}{3}}+\frac{e^{2}}{\left| {{\rm {\bf r}}-{\rm
{\bf r}}_{1} } \right|}+\frac{1}{N-1}\left( {v_{ext} ({\rm {\bf r}})+v_{ext}
({\rm {\bf r}}_{1} )} \right)} \right\} \nonumber \\
&=&\sum\limits_\nu {\varepsilon _{\mu \nu } \psi _{\nu } (x)} ,
\end{eqnarray}
where the chain rule for the functional derivatives is utilized. The
Lagrange multipliers $\varepsilon _{\mu \nu } $ should be determined by
requiring that the spin orbitals are orthonormal to each other:
\begin{equation}
\label{eq15}
\int {\psi _{\mu }^{\ast } (x)\psi _{\nu } (x)\mbox{d}x} =\delta _{\mu \nu }.
\end{equation}
In a similar way to the previous theory\cite{10}, we can simplify the above
equations by means of a unitary transformation of the spin orbitals. It is
easily shown that $\varepsilon _{\mu \nu } $ forms the Hermitian matrix.
Suppose that the unitary matrix which diagonalizes $\varepsilon _{\mu \nu }
$ is written by $U_{\mu \nu } $, then
\begin{equation}
\label{eq16}
\sum\limits_{i,j} {U_{i\mu } ^{\ast }\varepsilon _{ij} U_{j\nu } } =\tilde
{\varepsilon }_{\mu } \delta _{\mu \nu }
\end{equation}
is satisfied, where $\tilde {\varepsilon }_{\mu } $ is the diagonal element
of the diagonal matrix. Let us consider the
following transformation of the spin orbitals:
\begin{equation}
\label{eq17}
\psi _{\mu } (x)=\sum\limits_\nu {U_{\mu \nu } \chi _{\nu } (x)} .
\end{equation}
Substituting Eq. (\ref{eq17}) into Eq. (\ref{eq14}),
and using Eq. (\ref{eq16}), we obtain
\begin{eqnarray}
\label{eq18}
& &\sum\limits_\nu {\int {\mbox{d}x_{1} \left\{ {\chi _{\nu }^{\ast } (x_{1} )
\chi_{\nu } (x_{1} )\chi _{\mu } (x)-\chi _{\nu }^{\ast } (x_{1} )\chi
_{\nu } (x)\chi _{\mu } (x_{1} )} \right\}} } \nonumber \\
&\times& \left| {f\left( {\left| {{\rm {\bf r}}-{\rm {\bf r}}_{1} } \right|}
\right)} \right|^{2}\left\{ {\frac{4K}{3}\gamma ^{(2)}({\rm {\bf rr}}_{1}
;{\rm {\bf rr}}_{1} )^{\frac{1}{3}}+\frac{e^{2}}{\left| {{\rm {\bf r}}-{\rm
{\bf r}}_{1} } \right|}+\frac{1}{N-1}\left( {v_{ext} ({\rm {\bf r}})+v_{ext}
({\rm {\bf r}}_{1} )} \right)} \right\} \nonumber \\
&=& \tilde {\varepsilon }_{\mu } \chi _{\mu } (x).
\end{eqnarray}
Also, Eq. (\ref{eq15}) is transformed into
\begin{equation}
\label{eq19}
\int {\chi _{\mu }^{\ast } (x)\chi _{\nu } (x)\mbox{d}x} =\delta _{\mu \nu }.
\end{equation}
Here note that the expression for
$\gamma ^{(2)} ({\rm {\bf r{r}'}};{\rm {\bf r{r}'}})$
in Eq. (\ref{eq18}) is kept invariant under the unitary
transformation. This is confirmed by substituting Eq. (\ref{eq17})
into Eq. (\ref{eq12}), i.e.,
\begin{eqnarray}
\label{eq20}
\gamma ^{(2)} ({\rm {\bf r{r}'}};{\rm {\bf r{r}'}})=\frac{1}{2}\left|
{f\left( {\left| {{\rm {\bf r}}-{\rm {\bf {r}'}}} \right|} \right)}
\right|^{2}\sum\limits_{\mu _{1} ,\mu _{2} =1}^{N} {\int\!\!\!\int
{\mbox{d}\eta \mbox{d}{\eta }'
\left\{ {\chi _{\mu _{1} }^{\ast } (x)\chi _{\mu _{2} }^{\ast }
({x}')\chi _{\mu _{1} } (x)\chi _{\mu _{2} } ({x}')} \right.} }& &
\nonumber \\
-\left. {\chi _{\mu _{1} }^{\ast } (x)\chi _{\mu _{2} }^{\ast } ({x}')
\chi _{\mu _{2} } (x)\chi _{\mu _{1} } ({x}')} \right\}&.&
\end{eqnarray}
Equations (\ref{eq18}) and (\ref{eq19}) are the simultaneous equations,
and the solutions yield the best PD within the set of PDs that
are calculated from Eq. (\ref{eq20}).
Our previous work may be the first proposal of a computational approach that
deals with problems related to the PD functional theory\cite{10}. The present
scheme is also a computational approach, and further improves on the
previous theory concerning the searching region of the PDs. In that sense,
it would be useful to consider a computational procedure for solving the
simultaneous equations (\ref{eq18}) and (\ref{eq19}).
The procedure proposed here is similar to that of the Hartree-Fock
equation\cite{32}. In order to make the computational procedure readily
comprehensible, let us rewrite Eq. (\ref{eq18}) as
\begin{equation}
\label{eq21}
\left\{ {F({\rm {\bf r}})-\tilde {\varepsilon }_{\delta } } \right\}\chi
_{\delta } (x)=G_{\delta } (x)
\end{equation}
with
\begin{eqnarray}
\label{eq22}
F({\rm {\bf r}})=\int \!\!&{\mbox{d}x_{1}}&\!\!
\left| {f\left( {\left| {{\rm {\bf r}}-{\rm
{\bf r}}_{1} } \right|} \right)} \right|^{2}\sum\limits_{\nu =1}^{N} {\left|
{\chi _{\nu } (x_{1} )} \right|^{2}} \nonumber \\
\!\!&\times&\!\! \left\{ {\frac{4K}{3}\gamma ^{(2)}
({\rm {\bf rr}}_{1} ;{\rm {\bf
rr}}_{1} )^{\frac{1}{3}}+\frac{e^{2}}{\left| {{\rm {\bf r}}-{\rm {\bf
r}}_{1} } \right|}+\frac{1}{N-1}\left( {v_{ext} ({\rm {\bf r}})+v_{ext}
({\rm {\bf r}}_{1} )} \right)} \right\},
\end{eqnarray}
\begin{eqnarray}
\label{eq23}
G_{\delta } (x)=\int \!\!&{\mbox{d}x_{1}}&\!\!
\left| {f\left( {\left| {{\rm {\bf r}}-{\rm
{\bf {r}'}}} \right|} \right)} \right|^{2}\left\{ {\sum\limits_{\nu =1}^{N}
{\chi _{\nu }^{\ast } (x_{1} )\chi _{\delta } (x_{1} )\chi _{\nu }
(x)} } \right\} \nonumber \\
\!\!&\times&\!\! \left\{ {\frac{4K}{3}\gamma ^{(2)}
({\rm {\bf rr}}_{1} ;{\rm {\bf
rr}}_{1} )^{\frac{1}{3}}+\frac{e^{2}}{\left| {{\rm {\bf r}}-{\rm {\bf
r}}_{1} } \right|}+\frac{1}{N-1}\left( {v_{ext} ({\rm {\bf r}})+v_{ext}
({\rm {\bf r}}_{1} )} \right)} \right\},
\end{eqnarray}
where the spin orbital $\chi _{\delta } (x)$ is
the solution of Eq. (\ref{eq21}),
and should be determined in a self-consistent way. Here note that the
right-hand side of Eq. (\ref{eq21}) comes from the second term of
Eq. (\ref{eq20}), and explicitly depends on the spin orbital
$\chi _{\delta } (x)$. The key point to get the self-consistent solution
is that spin orbitals of the previous
iteration are used in calculating $F({\rm {\bf r}})$ and $G_{\delta }
(x)$\cite{32}. By solving simultaneously Eqs. (\ref{eq19}) and (\ref{eq21})
with this technique,
we can get a new set of spin orbitals and energy parameters $\tilde
{\varepsilon }_{\delta } \mbox{'s}$. We continue such a procedure until the
self-consistency for the solutions is accomplished\cite{32}.
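The self-consistent procedure described above follows the generic fixed-point pattern sketched below (schematic only; `update_orbitals`, which would build $F$ and $G_{\delta}$ from the previous iteration's orbitals and solve Eqs. (\ref{eq19}) and (\ref{eq21}), is a hypothetical placeholder):

```python
import numpy as np

def scf_loop(update_orbitals, orbitals0, tol=1e-8, max_iter=200):
    """Generic self-consistent-field iteration (schematic).

    update_orbitals(orbitals) stands for the problem-specific step that
    builds F and G from the previous iteration's orbitals, solves the
    single-particle equations, and re-orthonormalizes; it must return
    the new orbital array. Iteration stops when successive solutions
    agree to within tol.
    """
    orbitals = orbitals0
    for _ in range(max_iter):
        new_orbitals = update_orbitals(orbitals)
        if np.max(np.abs(new_orbitals - orbitals)) < tol:
            return new_orbitals
        orbitals = new_orbitals
    raise RuntimeError("self-consistency not reached")
```

As a stand-in for the orbital update, any contraction map illustrates the loop; e.g. the Babylonian square-root map converges to its fixed point in a few iterations.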
\section{CONCLUDING REMARKS}
In this paper, we propose the PD functional theory that yields the best PD
within the set of PDs that are constructed from the correlated wave
functions. Compared to the previous one\cite{10,11},
the present theory has the following features.
\begin{enumerate}
\item The present theory is superior to the previous one in that
the searching region of the PDs is certainly larger than the set of
SSD-representable PDs without missing the merits of the previous
theory\cite{10}. This means that the resultant PD is more reasonable
than that of the previous theory.
\item The predominance of the present scheme can also be shown from
the viewpoint of the total energy. If the correlation function is chosen
to be unity, then the present theory reduces exactly to the previous one.
It has been already proved that the total energy of the previous theory is
better than that of the Hartree-Fock approximation\cite{11}. If the
correlation function is chosen most appropriately, then the searching
region is substantially equivalent to the set of PDs which are calculated
by varying both correlation function and spin orbitals in Eq. (\ref{eq20}).
Therefore, the total energy of the present scheme is necessarily better
than that of the previous one\cite{10}, and, needless to say,
than that of the Hartree-Fock approximation.
\item In addition to the above merits, the present scheme has the feature
that deserves special emphasis. Due to the fact that the PD functional
theory is still a developing field, few computational
approaches exist so far. Our previous work is perhaps the first to propose
a computational approach to problems related to
the PD functional theory\cite{10,11}. The present scheme is also a computational
approach. The resultant simultaneous equations are quite tractable,
as well as the previous one\cite{10,11}. Also from such a viewpoint,
the present scheme seems to be valuable.
\end{enumerate}
Thus, the resultant simultaneous equations (\ref{eq18}) and
(\ref{eq19}) yield the PD which is definitely closer to
the ground-state PD than the previous theory\cite{10}.
The next step is to perform actual calculations so as to confirm to what
extent the present scheme covers the $N$-representable PDs.
Finally, we would like to comment on the future prospect of the present
theory. Although the present scheme utilizes the lowest-order approximation
of the expectation value of the PD operator, the higher-order corrections
can proceed systematically
with the aid of the Yvon-Mayer diagrams\cite{28,29}.
Of course, it is anticipated that the equations will become more
complicated. But, from the methodological point of view, it is important
that the theoretical framework has the potentiality to improve the
approximation systematically.
\begin{acknowledgments}
This work was partially supported by a Grant-in-Aid for Scientific Research
in Priority Areas "Development of New Quantum Simulators and Quantum Design"
of The Ministry of Education, Culture, Sports, Science, and Technology,
Japan.
\end{acknowledgments}
\chapter{General results on binary supersimple structures}\label{ChapGenRes}
\setcounter{equation}{0}
\setcounter{theorem}{0}
\setcounter{case}{0}
\setcounter{subcase}{0}
We collect in this chapter a number of results that will be used later. We start with an easy but extremely useful proposition saying that all the theories we are interested in are \emph{low}. The relevance of this is that in arguments using the Independence Theorem, lowness allows us to perform the amalgamation of the nonforking extensions ${\rm tp}(b/AB)$ and ${\rm tp}(c/AC)$ if ${\rm stp}(b/A)={\rm stp}(c/A)$. This condition is generally easier to verify than the standard ${\rm Lstp}(b/A)={\rm Lstp}(c/A)$, and in many of the cases that we will encounter, satisfied automatically.
Recall that a simple theory is low if for every formula $\varphi(\bar x,\bar a)$ there exists a natural number $n_\varphi$ such that given any indiscernible sequence $(\bar a_i:i\in\omega)$, if the set $\{\varphi(\bar x,\bar a_i):i\in\omega\}$ is inconsistent, then it is $n_\varphi$-inconsistent.
\begin{notation}
In the rest of this work, we often say that a relation $P$ defines an equivalence relation. Since each predicate is interpreted as an irreflexive relation, this is not strictly true. What we mean is that the reflexive closure of $P$ defines an equivalence relation, or, equivalently, that every triangle with two sides of type $P$ is a $K_3^P$.
\end{notation}
\begin{proposition}
Let $T$ be an $\omega$-categorical simple theory eliminating quantifiers in a finite relational language. Then $T$ is low.
\label{CatSimpLow}
\end{proposition}
\begin{proof}
Let $\varphi(x,a)$ be a formula in $L$. Denote by $m$ the highest arity for a relation in $L$, and let $\ell(a)$ be the length of the tuple $a$. Given any indiscernible sequence $(a_i:i\in\omega)$, the first $m$ tuples of the sequence determine the type over $\varnothing$ of $a_{i_0}...a_{i_k}$ for any $i_0<\ldots<i_k$ and any $k<\omega$.
By the Ryll-Nardzewski Theorem, there are only finitely many types of $(\ell(a)\times m)$-tuples, so there are only finitely many kinds of indiscernible sequences over $\varnothing$. We claim that, given an $A$-indiscernible sequence $(d_i:i\in\omega)$, the set $D=\{\varphi(x,d_i):i\in\omega\}$ is consistent if and only if for any $\varnothing$-indiscernible sequence $(c_i:i\in\omega)$ such that ${\rm tp}(d_0\ldots d_{m-1})={\rm tp}(c_0\ldots c_{m-1})$, the set $C=\{\varphi(x,c_i):i\in\omega\}$ is consistent. If $D$ is consistent, then viewing $(d_i:i\in\omega)$ as indiscernible over $\varnothing$ shows one
direction.
For the other direction, suppose that $C$ is consistent but $D$ is $k$-inconsistent for some $k\in\omega$. Let $u$ satisfy $C$. In particular, $u$ satisfies
$\varphi(x,c_0)\wedge\ldots\wedge\varphi(x,c_{k-1})$. Using homogeneity, there is an automorphism $\sigma$ of $M$ taking $c_0...c_{k-1}$ to $d_0...d_{k-1}$, so $\sigma(u)$ contradicts the $k$-inconsistency of $D$.
Let $I_1,\ldots,I_k$ be representatives of the finitely many kinds of $\varnothing$-indiscernible sequences, and for $1\leq j\leq k$ let $\Phi_j(x)=\{\varphi(\bar x,i):i\in I_j\}$. If $\Phi_j(x)$ is inconsistent, then by indiscernibility it is $n_j$-inconsistent for some minimal $n_j\in\omega$. If we define $n_\varphi:=\max_{j\in\{1,\ldots,k\}}n_j$, then it is clear that for any indiscernible sequence $I$ of $\ell(\bar a)$-tuples, if $\{\varphi(x,i):i\in I\}$ is inconsistent, then it is $n_\varphi$-inconsistent.
\end{proof}
The next theorem appears as Theorem 6.4.6 in Wagner's book \cite{wagner2000simple}.
\begin{theorem}
Let $T$ be a low theory. Then Lascar strong type is the same as strong type, over any set $A$.
\end{theorem}
The immediate corollary is:
\begin{corollary}
Let $T$ be an $\omega$-categorical simple theory eliminating quantifiers in a finite relational language. Then the Lascar strong type of any tuple is the same as its strong type, over any set $A$.\hfill$\Box$
\label{LascarTypes}
\end{corollary}
Recall that an equivalence relation with finitely many classes is referred to as a \emph{finite equivalence relation}. The classes of an $A$-definable finite equivalence relation correspond to strong types over $A$ in a saturated model.
\begin{proposition}
\label{PairwiseIndep}
If $M$ is a binary homogeneous simple structure in which there are no $\varnothing$-definable finite equivalence relations on $M$, then for each $n\in\omega$ greater than 1, whenever $a_1,\ldots,a_n$ are pairwise independent elements of $M$, we have for each $1\leq i\leq n$ that $a_i\indep a_1,\ldots,a_{i-1},a_{i+1},\ldots, a_n$.
\end{proposition}
\begin{proof}
We proceed by induction on $n$. The proposition is trivial for $n=2$; suppose that it holds for all $n\leq n_0$ and $a_1,\ldots,a_{n_0+1}$ are pairwise independent but such that ${\rm tp}(a_1/a_2,\ldots,a_{n_0+1})$ divides over $\varnothing$. By the induction hypothesis, $a_1\indep a_2,\ldots,a_{n_0}$ and $a_1\indep a_{n_0+1}$, so those two types are nonforking extensions of ${\rm tp}(a_1)$. We also have $a_{n_0+1}\indep a_2,\ldots,a_{n_0}$ by induction. Let $b\models{\rm tp}(a_1/a_{n_0+1})$ and $b'\models{\rm tp}(a_1/a_2,\ldots,a_{n_0})$; this also ensures that ${\rm stp}(b)={\rm stp}(b')$, and because ${\rm Th}(M)$ is low by Proposition \ref{CatSimpLow}, they are of the same Lascar strong type. Therefore, ${\rm Lstp}(b/\varnothing)={\rm Lstp}(b'/\varnothing)$. By the Independence Theorem, ${\rm Lstp}(b)\cup{\rm tp}(a_1/a_{n_0+1})\cup{\rm tp}(a_1/a_2,\ldots,a_{n_0})$ is a consistent set of formulas and is realised by some $a'\indep a_2,\ldots,a_{n_0+1}$. But in this case, because the language is binary, $${\rm tp}(a_1/a_2,\ldots,a_{n_0+1})={\rm tp}(a'/a_2,\ldots,a_{n_0+1}),$$a contradiction.
\end{proof}
By Proposition \ref{CatSimpLow}, we can carry out the argument in Proposition \ref{PairwiseIndep} over any set of parameters, as in any low theory $a\equiv^{{\rm stp}}_A b$ if and only if $a\equiv^{{\rm Lstp}}_A b$.
Reformulating Proposition \ref{PairwiseIndep} for sequences:
\begin{observation}\label{MorleySqn}
In a binary homogeneous primitive simple structure, if $(a_i:i\in\omega)$ is an $\varnothing$-indiscernible sequence of singletons such that $a_0\indep a_1$, then $(a_i:i\in\omega)$ is a Morley sequence over $\varnothing$.\hfill$\Box$
\end{observation}
\begin{proposition}
In a supersimple unstable primitive rank 1 homogeneous $n$-graph $(M;R_1,\ldots,R_n)$, $n>1$, each of the $R_i$ is unstable.
\label{Year2}
\end{proposition}
\begin{proof}
In {\rm SU}-rank 1 structures, forking is algebraic, so ${\rm tp}(a/b)$ forks over $\varnothing$ iff $a\in {\rm acl}(b)\setminus {\rm acl}(\varnothing)$. Therefore, each relation is non-algebraic, by primitivity, and so each relation is nonforking. Using the Independence Theorem to amalgamate partial structures over the empty set (cf. Propositions \ref{PairwiseIndep}, \ref{NonforkingAmalgamation}), we can embed infinite half-graphs for each of the $R_i$ into $M$, witnessing instability. See also Theorem \ref{PrimitiveAlice}.
\end{proof}
\begin{remark}
{\rm The argument in Proposition \ref{PairwiseIndep} can be carried out in finitely homogeneous binary simple structures even over sets of parameters as long as we guarantee that the realisations of the types we wish to amalgamate have the same \emph{strong} type over the set of parameters, by Proposition \ref{CatSimpLow}.}
\end{remark}
\begin{definition}
Let $L$ be a finite relational language in which each relation is binary. We will say that a family $\mathcal B$ of finite $L$-structures is the \emph{age of a random $L$-structure} if $\mathcal B$ is an amalgamation class and all the minimal forbidden structures of $\mathcal B$ are of size at most 2.
\end{definition}
\begin{proposition}
Let $M$ be a binary homogeneous simple structure in which there are no $\varnothing$-definable finite equivalence relations on $M$. Suppose that all the relations in $L=\{R_1,\ldots,R_m\}$ are realised in $M$, and $R_1,\ldots,R_k$ are the only forking relations. Then the subfamily of ${\rm Age}(M)$ consisting of all finite $\{R_1,\ldots,R_k\}$-free substructures of $M$ is the age of a random $L\setminus\{R_1,\ldots,R_k\}$-structure.
\label{NonforkingAmalgamation}
\end{proposition}
\begin{proof}
We aim to show that any finite structure not realising any of $R_1,\ldots, R_k$ embeds in $M$; we argue by induction on its size. All the $\{R_1,\ldots,R_k\}$-free structures of size 2 are realised in $M$ because the $R_i$ isolate 2-types. Consider an $\{R_1,\ldots,R_k\}$-free structure $B$ on $n+1$ points. We wish to show that this structure can be embedded into $M$, or, equivalently, that its isomorphism type belongs to ${\rm Age}(M)$.
Let $A=\{a_1,\ldots,a_n\}$ realise the substructure of $B$ on the first $n$ points, embedded in $M$ so that $a_1\indep a_2,\ldots,a_n$. By the induction hypothesis, the type $p_1$ of $a_{n+1}$ over $a_1$ in $B$ and the type $p_2$ of $a_{n+1}$ over $a_2,\ldots,a_n$ are realised in $M$; both are nonforking extensions of the unique strong type over the empty set, which by lowness (Proposition \ref{CatSimpLow}) is Lascar strong. Therefore, by the Independence Theorem, there is a single element $b$ of $M$ simultaneously satisfying both types. Since $B$ is a binary structure, ${\rm tp}(b/a_1)\cup{\rm tp}(b/a_2,\ldots,a_n)\vdash{\rm tp}(b/a_1,\ldots,a_n)$, and we conclude that $B$ can be embedded into $M$.
\end{proof}
By the same argument:
\begin{observation}
Let $M$ be a homogeneous 3-graph of {\rm SU}-rank 2 with no definable finite equivalence relations on $M$, and suppose $S,T$ are nonforking relations. Then all finite $\{S,T\}$-structures can be embedded into the {\rm SU}-rank 2 homogeneous 3-graphs $S(a)$ and $T(a)$, for any vertex $a$.
\label{AllFiniteRFree}
\end{observation}
\begin{proof}
This is a direct consequence of Proposition \ref{NonforkingAmalgamation}.
\end{proof}
The following observation is folklore, but we include a proof for completeness.
\begin{observation}
In a primitive $\omega$-categorical structure, ${\rm acl}(a)=\{a\}$.
\label{AlgClosure}
\end{observation}
\begin{proof}
The relation $x\sim y$ defined by ${\rm acl}(x)={\rm acl}(y)$ is clearly an invariant equivalence relation. If $y\in{\rm acl}(x)$, then ${\rm acl}(y)\subseteq{\rm acl}(x)$; by $\omega$-categoricity and transitivity these are finite sets of the same size, so ${\rm acl}(x)={\rm acl}(y)$ and $x\sim y$. It follows that ${\rm acl}(x)\subseteq x/\sim\subseteq{\rm acl}(x)$, so the $\sim$-classes are exactly the algebraic closures, and in particular are finite. By primitivity, $\sim$ is trivial, and since its classes are finite it must be equality; therefore ${\rm acl}(a)=\{a\}$.
\end{proof}
Given a natural number $m$ and an irreflexive symmetric relation $R$, we denote the structure on $m$ vertices $v_0,\ldots,v_{m-1}$ in which for all distinct $v_i,v_j$ the formula $R(v_i,v_j)$ holds by $K_m^R$. In the following observation, a \emph{minimal} finite equivalence relation is a proper finite equivalence relation with minimal number of classes.
\begin{proposition}
If $(M;R_0,\ldots,R_k)$ is a simple homogeneous transitive $(k+1)$-graph in which $R_0$ is a minimal finite equivalence relation with $m$ classes, and $R_1$ is a nonforking relation realised across any two $R_0$-classes, then the action of ${\rm Aut}(M)$ induced on $M/R_0$ is $(k+1)$-transitive.
\label{EmbeddingCompleteGraphs}
\end{proposition}
\begin{proof}
It suffices to show that $M$ embeds $K_{k+1}^{R_1}$. First note that we can embed the triangle $R_1R_1R_1$ across any three $R_0$-classes. To see this, consider $a,b$ with $R_1(a,b)$. By transitivity, $a$ and $b$ have the same type over the empty set. The relation $R_1$ is realised between any two classes; consider $a',b'$ in a third $R_0$-class such that $R_1(a,a')$ and $R_1(b,b')$ hold. Then $a'$ and $b'$ have the same (Lascar) strong type over $\varnothing$, and ${\rm tp}(a'/a),{\rm tp}(b'/b)$ are nonforking extensions of the unique 1-type over the empty set; we can apply the Independence Theorem to find an element $c$ in that $R_0$-class such that $abc$ is a $K_3^{R_1}$.
The result follows by iterating the same argument, amalgamating nonforking ($R_1$) extensions of smaller complete graphs over the empty set. We can only iterate as many times as the number of $R_0$-classes.
\end{proof}
\begin{proposition}
Let $M$ be a simple homogeneous transitive 3-graph in which $R$ defines an equivalence relation, and assume that the induced action of ${\rm Aut}(M)$ on $M/R$ is transitive, but not 2-transitive, so for any pair of distinct $R$-classes $C,C'$ only one of $S,T$ is realised across $C,C'$. Then the $S,T$-graph induced on a set $X$ containing exactly one element from each $R$-class is homogeneous.
\end{proposition}
\begin{proof}
Consider the graph defined on $M/R$ with predicates $\hat S,\hat T$ which hold of two distinct classes $a/R,b/R$ if for some/any $\alpha\in a/R,\beta\in b/R$ we have $S(\alpha,\beta)$ (respectively, $T(\alpha,\beta)$). This graph is clearly isomorphic to the graph induced on $X$.
\begin{claim}\label{ClaimTrick}
The graph interpreted in $M/R$ as described in the preceding paragraph is homogeneous in the language $\{\hat S,\hat T\}$.
\end{claim}
\begin{proof}
Let $\pi$ denote the quotient map $M\rightarrow M/R$. Given two isomorphic finite substructures $A,A'$ of $M/R$, any transversals of $\pi^{-1}(A)$ and $\pi^{-1}(A')$ are isomorphic, so by the homogeneity of $M$ there exists an automorphism $\sigma$ extending this isomorphism; by invariance of $R$, $\sigma$ takes $\pi^{-1}(A)$ to $\pi^{-1}(A')$. The map $\pi\sigma\pi^{-1}$ is a well-defined automorphism of $M/R$ taking $A$ to $A'$.
\end{proof}
And the result follows.
\end{proof}
The argument from Claim \ref{ClaimTrick} will appear again in the future.
\begin{observation}\label{sop}
In any homogeneous transitive $n$-graph $(M,R_1,\ldots,R_n)$, if $R_i(a)$ is an $R_i$-complete graph, then for any $b\in R_i(a)$ we have $\{a\}\cup R_i(a)=\{b\}\cup R_i(b)$.
\end{observation}
\begin{proof}
If $c\in R_i(b)\setminus R_i(a)$, then both $a$ and $c$ are in $R_i(b)$, which is $R_i$-complete by transitivity, and therefore $R_i(a,c)$ holds, contradiction.
\end{proof}
\begin{observation}
If $(M,R_1,\ldots,R_n)$, where $n>1$, is a primitive homogeneous $n$-graph, then for all $i$ with $1\leq i\leq n$, the structure $R_i(a)$ is not $R_i$-complete.
\label{NotRComplete}
\end{observation}
\begin{proof}
Suppose not. Then, using Observation \ref{sop} and homogeneity, there is $i$ with $1\leq i\leq n$ such that for all $a,b$ with $R_i(a,b)$ we have $\{a\}\cup R_i(a)=\{b\}\cup R_i(b)$. Hence, $\{a\}\cup R_i(a)$ is an $R_i$-connected component. This contradicts primitivity, since $|R_i(a)|>0$ and as $n>1$, $\{a\}\cup R_i(a)\neq M$.
\end{proof}
\begin{definition}\label{DefMultipartite}
An $n$-graph $G$ is \emph{$R$-multipartite} with $k$ ($k>1$, possibly infinite) parts if there exists a (not necessarily definable) partition $P_1,\ldots,P_k$ of its vertex set into nonempty subsets such that if two vertices $x,y$ are $R$-adjacent then they do not belong to the same $P_i$. We will say that $G$ is \emph{$R$-complete-multipartite} if $G$ is $R$-multipartite with at least two parts and $R(a,b)$ holds for all pairs $a,b$ from distinct parts.
\end{definition}
\begin{proposition}\label{PropMultipartite}
Let $(M;R_1,\ldots,R_n)$ be an $R_i$-connected transitive homogeneous $n$-graph. If for some $a\in M$ the set $R_i(a)$ is $R_i$-complete-multipartite, then $M$ is $R_i$-complete-multipartite (and in particular is not primitive).
\end{proposition}
\begin{proof}
For simplicity, we will write $R$ for $R_i$. Note first that the partition of $R(a)$ is invariant over $a$, being defined by the relation $E_a(x,y)\mathrel{\mathop:}= R(a,x)\wedge R(a,y)\wedge\neg R(x,y)$. Take any $b\in R(a)$. By homogeneity, $R(b)$ consists of $a/E_b$ together with $R(a)\setminus(b/E_a)$. We claim that this is all there is in $M$. First note that there are no further parts in $R(b)\setminus R(a)$: if we had $c\in R(b)\setminus R(a)$ not $E_b$-equivalent to $a$, then by homogeneity we would have $R(a,c)$, contradicting $c\notin R(a)$. Therefore, $a/E_b\cup R(a)$ is an $R$-connected component of $M$; by connectedness, it is all of $M$, ${\rm Diam}_R(M)=2$, and $\neg R(x,y)$ defines an equivalence relation, so $M$ is $R$-complete-multipartite.
\end{proof}
\begin{observation}\label{ObsFiniteDiameter}
If $(M,R_1,\ldots,R_n)$ is an $\omega$-categorical $n$-graph, then each connected component of $(M,R_i)$ has finite diameter.
\end{observation}
\begin{proof}
Each of the $R_i$-distances is preserved by automorphisms. If one of the connected components of $(M,R_i)$ has infinite diameter, then there are infinitely many 2-types, contradicting $\omega$-categoricity.
\end{proof}
As a consequence of this observation, in $\omega$-categorical edge-coloured graphs the relation $E_i(x,y)$ which holds if there is a path of colour $i$ between $x$ and $y$ is definable. Also, in primitive $n$-coloured graphs, each $(M,R_i)$ is connected, since the equivalence relation $x\sim_{R_i}y$ that holds if $x$ and $y$ are $R_i$-connected is invariant under ${\rm Aut}(M)$.
\begin{observation}\label{diam}
If $(M,R_1,\ldots,R_n)$ is a homogeneous $n$-graph, then the diameter of each connected component of $(M,R_i)$ is at most $n$.
\end{observation}
\begin{proof}
Suppose there are $a,b\in M$ at $R_i$-distance $n+1$, so there are distinct $a=x_0,x_1,\ldots,x_{n+1}=b$ such that $R_i(x_j,x_{j+1})$ holds for $0\leq j\leq n$ and $R_i$ does not hold in any other pair from $\{x_0,\ldots,x_{n+1}\}$. Then the $n$ pairs $(a,x_j)$ ($2\leq j\leq n+1$) are coloured in the $n-1$ colours other than $R_i$, so at least two of them have the same colour. Using homogeneity, there is an automorphism of $M$ taking the pair with the smaller index in the second coordinate to the other pair, and therefore we can find a shorter path from $a$ to $b$, a contradiction.
\end{proof}
\chapter{The Imprimitive Case: Finite Classes}\label{ChapImprimitiveFC}
\setcounter{equation}{0}
\setcounter{theorem}{0}
\setcounter{case}{0}
\setcounter{subcase}{0}
In this chapter we classify the homogeneous simple unstable 3-graphs with an invariant equivalence relation. We will assume for definiteness that the equivalence relation is the reflexive closure of the predicate $R$. This assumption loses no generality: by the Ryll-Nardzewski Theorem, in our context an invariant equivalence relation is defined by a disjunction of atomic formulas, and since we want the classes to be finite and $M$ to be unstable, a disjunction of two atomic formulas cannot define the equivalence relation.
\section{Imprimitive Structures with Finite Classes}
Let us describe the construction of an imprimitive homogeneous 3-graph with classes of size 2. Start by enumerating the random graph as $\{w_i:i\in\omega\}$, and define a 3-graph $C(\Gamma)$ on countably many vertices $\{v_i:i\in\omega\}$ where $R$ holds for pairs of vertices of the form $v_{2n}v_{2n+1}$,
\[
S(v_i,v_j)\mbox{ if}
\begin{cases}
i\neq j, i=2m, j=2n, E(w_m,w_n)\\
i\neq j, i=2m+1, j=2n+1, E(w_m,w_n)\\
i\neq j, i=2m, j=2n+1, \neg E(w_m,w_n)\\
i\neq j, i=2m+1, j=2n, \neg E(w_m,w_n)\\
\end{cases}
\]
and all other pairs of distinct vertices satisfy $T$ ($E$ denotes the edge relation in the random graph). This structure is a finite cover in the sense of Evans (see \cite{evans1996splitting}, \cite{evans1995finite}) of a reduct of the random graph. Its theory is supersimple of rank 1, as it can be interpreted in $\Gamma\times\{0,1\}$.
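The four cases above amount to a single parity rule: $S(v_i,v_j)$ holds exactly when $E(w_m,w_n)$ agrees with $i\equiv j\pmod 2$. The following sketch (illustrative only; the function names and the finite stand-in for the random graph are ours) computes the colour of a pair:

```python
# Sketch of the colouring rule defining C(Gamma). The predicate E is any
# graph on the indices (standing in for the random graph); here we use a
# small concrete example. All names are illustrative.

def colour(i, j, E):
    """Colour of the pair (v_i, v_j) in C(Gamma), for i != j."""
    m, n = i // 2, j // 2          # indices of the underlying vertices w_m, w_n
    if m == n:                     # the pair v_{2n}, v_{2n+1}
        return "R"
    same_parity = (i % 2 == j % 2)
    # S holds iff the edge relation agrees with the parity of the pair
    return "S" if E(m, n) == same_parity else "T"

# A finite stand-in for the random graph: the single edge w_0 -- w_1.
E = lambda m, n: {m, n} == {0, 1}

print(colour(0, 1, E))  # R : same underlying vertex w_0
print(colour(0, 2, E))  # S : edge w_0w_1, same parity
print(colour(0, 3, E))  # T : edge w_0w_1, different parity
print(colour(0, 4, E))  # T : non-edge w_0w_2, same parity
print(colour(0, 5, E))  # S : non-edge w_0w_2, different parity
```

Swapping the parity of one endpoint exchanges $S$ and $T$ over a non-edge, which is why $C(\Gamma)$ is a finite cover of a reduct of the random graph rather than of the random graph itself.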
Our main result in this chapter is:
\begin{theorem*}
Up to isomorphism, the only imprimitive simple unstable homogeneous 3-graph with finite classes such that all relations are realised in the union of two $R$-classes is $C(\Gamma)$.
\end{theorem*}
\subsection{The proof}\label{sec:observations}
Let $M$ be a homogeneous structure with an invariant equivalence relation $E$, and denote by $M/E$ the set of equivalence classes modulo $E$ in $M$. Then there is a homomorphism $f:{\rm Aut}(M)\rightarrow{\rm Sym}(M/E)$, given by $f(\sigma)(\ulcorner a\urcorner)=\ulcorner\sigma(a)\urcorner$, so that ${\rm Aut}(M)$ acts on $M/E$. We refer to this action as the induced action of ${\rm Aut}(M)$ on $M/E$. The orbit of a tuple of classes under this action is determined by the isomorphism type of the union of those classes in $M$.
Recall that a permutation group $G$ on $\Omega$ is $k$-transitive if it acts transitively on the set of $k$-tuples of distinct elements of $\Omega$.
\begin{observation}\label{ObsTransRE}
Let $M$ be a homogeneous structure with an invariant equivalence relation $E$, and suppose that there is a symmetric binary predicate $S$ such that whenever $A,B$ are distinct $E$-classes, there exist $a\in A,b\in B$ such that $S(a,b)$ holds. Then the induced action of ${\rm Aut}(M)$ on $M/E$ is 2-transitive.
\end{observation}
\begin{proof}
Let $(A,B)$ and $(A',B')$ be pairs of distinct $E$-classes. We wish to prove that there exists $\sigma\in{\rm Aut}(M)$ such that the image under $\sigma$ of $A$ (respectively, $B$) is $A'$ ($B'$). By hypothesis, there exist $a\in A, b\in B$ and $a'\in A', b'\in B'$ such that $S(a,b)$ and $S(a',b')$ hold, so that the function $a\mapsto a', b\mapsto b'$ is a local isomorphism in $M$. By homogeneity, this function is induced by some $\sigma\in{\rm Aut}(M)$, and by invariance of $E$, $A$ is mapped to $A'$ and $B$ to $B'$.
\end{proof}
\begin{remark}\label{Rmk1}
The conclusion of Observation \ref{ObsTransRE} can be strengthened to $k$-transitivity in the case of a transversal symmetric $k$-ary predicate. Note that if $M$ is a homogeneous $n$-graph in which the reflexive closure of $R$ is an equivalence relation and $S$ is realised in the union of any two distinct $R$-classes, then all other relations in the language are also realised in the union of any two classes (otherwise, our global assumption that all relations are realised in $M$ is contradicted).
\end{remark}
\begin{remark}\label{Rmk2}
If all the definable binary relations in $M$ are symmetric, then the converse to Observation \ref{ObsTransRE} is also true (as is the conclusion that all binary relations are realised in the union of any two distinct classes).
\end{remark}
\begin{observation}\label{NotCompBipart}
Let $M$ be an imprimitive homogeneous 3-graph with $R$-classes of size $n<\omega$, and suppose that ${\rm Aut}(M)$ acts 2-transitively on $M/R$. Let $A,B$ be distinct $R$-classes in $M$ and $a\in A$. Then $1\leq |S(a)\cap B|<n$.
\end{observation}
\begin{proof}
Let $r$ denote $|S(a)\cap B|$. Suppose for a contradiction that $r=n$. By homogeneity and symmetry of $S$, for all $b\in B$ there is an automorphism of $M$ taking $b\mapsto a$ and $a\mapsto b$. This automorphism takes $B$ to $A$ by invariance, and $S(a)\cap B$ to $S(b)\cap A$, so that $T$ is not realised in $A\cup B$, contradicting the 2-transitivity of the induced action of ${\rm Aut}(M)$ on $M/R$ by Remark \ref{Rmk2}.
\end{proof}
\begin{observation}\label{ObsTransOnClasses}
Let $M$ be an imprimitive homogeneous 3-graph with $R$-classes of size $n<\omega$, and suppose that ${\rm Aut}(M)$ acts 2-transitively on $M/R$. Let $A,B$ be distinct $R$-classes in $M$. Then ${\rm Aut}(M)_{\{B\}}\cap{\rm Aut}(M)_{\{A\}}$ acts transitively on $B$ and on $A$.
\end{observation}
\begin{proof}
Take any $b,b'\in B$. By Observation \ref{NotCompBipart}, $S(b)\cap A\neq\varnothing$ and $S(b')\cap A\neq\varnothing$, so we can find $a_1,a_2\in A$ such that $S(b,a_1)$ and $S(b',a_2)$ hold, so that $b\mapsto b'$, $a_1\mapsto a_2$ is a local isomorphism. The conclusion follows by homogeneity and invariance of $R$.
\end{proof}
It follows from Observation \ref{ObsTransOnClasses} and our general setting that whenever $B,C$ and $B',C'$ are pairs of distinct $R$-classes, then the structure induced by $M$ on $B\cup C$ is isomorphic to that on $B'\cup C'$, and that for any vertices $b,c$ and classes $K,K'$ not including $b,c$, $|S(b)\cap K|=|S(c)\cap K'|$.
\begin{observation}\label{ObsEasyCase}
Let $M$ be an imprimitive homogeneous 3-graph with $R$-classes of size $n<\omega$, and suppose that ${\rm Aut}(M)$ acts 2-transitively on $M/R$ and $S$ is not an equivalence relation. Let $A,B$ be distinct $R$-classes in $M$ and $r\mathrel{\mathop:}=|S(a)\cap B|$ for any $a\in A$. If $r=1$, then $n=2$.
\end{observation}
\begin{proof}
Since $S$ is not an equivalence relation, there exist pairwise inequivalent $b,c,d\in M$ such that $S(b,c)\wedge S(c,d)\wedge T(b,d)$.
Let $B,C,D$ denote the classes of $b,c,d$. Since $r=1$ and $T(b,d)$ holds, the unique element $d'$ of $S(b)\cap D$ satisfies $d'\neq d$. Similarly, there exists $c'\neq c$ in $C$ such that $S(d',c')$ holds.
If $n>2$, then for any $c''\in C$ with $c''\neq c,c'$ we have ${\rm qftp}(c'/bcd)={\rm qftp}(c''/bcd)$, because $r=1$ implies that both satisfy $T(b,x)\wedge T(d,x)\wedge R(c,x)$. But ${\rm tp}(c'/bcd)\neq{\rm tp}(c''/bcd)$: $c'$ satisfies the formula $\varphi(y)=\exists x(R(d,x)\wedge S(b,x)\wedge S(x,y))$, witnessed by $d'$, whereas $c''$ does not, since $d'$ is the only possible witness ($r=1$) and $S(d')\cap C=\{c'\}$. This contradicts homogeneity. In the following diagram, the heavy lines represent $R$-cliques, solid lines are $S$-edges, and dotted lines are $T$-edges.
\[
\includegraphics[scale=0.7]{fig1.pdf}
\]
\end{proof}
As we stated before, we are interested only in simple unstable 3-graphs, because the stable ones have already been classified. A first question is which of the binary relations divide over the empty set, i.e.,\, for which $P\in\{R,S,T\}$ do we have that the formula $P(x,a)$ divides. Clearly, since we are interested in structures $M$ in which $R$ defines an equivalence relation with infinitely many classes, we have that $R$ divides. Moreover, since there are Morley sequences in any type in a simple theory, we get that at least one of $S,T$ is nonforking. Let us assume without loss of generality that $S$ is nonforking.
\begin{proposition}\label{PropSTNf}
Let $M$ be an imprimitive simple unstable 3-graph in which $R$ defines an equivalence relation with finite classes, and assume that $S$ is nonforking. Then $T$ is nonforking.
\end{proposition}
\begin{proof}
Suppose for a contradiction that $T(x,a)$ divides. Then for any Morley sequence $(c_i)_{i\in\omega}$ we have that $\{T(x,c_i):i\in\omega\}$ is inconsistent. From the fact that $R$ is algebraic and $T$ divides we can derive that any Morley sequence of vertices must be an infinite $S$-clique. Therefore, the $T$-neighbourhood of any vertex contains no infinite $S$-cliques.
Since $S$ and $T$ are unstable, there exists an infinite half-graph: we have an infinite collection of vertices $a_i,b_i$ such that $S(a_i,b_j)$ holds iff $i\leq j$. From this half-graph we can extract, using Ramsey's theorem, an ``indiscernible'' half-graph, i.e.,\, a half-graph which, considered as a sequence of pairs $a_ib_i$ ($i\in\omega$), is indiscernible over $\varnothing$. We will abuse notation and name the elements of this indiscernible half-graph $a_i$ and $b_i$ ($i\in\omega$). It follows from the algebraicity of $R$ and the argument in the preceding paragraph that the $a_i$ and the $b_i$ form infinite $T$-cliques. By homogeneity, the $T$-neighbourhood of any vertex contains a copy of this indiscernible half-graph and is therefore, considered as a separate structure, an unstable 3-graph.
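The half-graph configuration can be pictured with a small sketch (illustrative only; the encoding and names are ours): writing $S(a_i,b_j)$ as $i\leq j$, every $a_i$ has a distinct $S$-neighbourhood among the $b_j$, which is exactly the order property for $S$.

```python
# Sketch of a half-graph on a_0..a_{k-1}, b_0..b_{k-1}, with
# S(a_i, b_j) iff i <= j -- the configuration witnessing that S
# has the order property. Illustrative only.

def half_graph(k):
    return {(i, j): i <= j for i in range(k) for j in range(k)}

H = half_graph(5)
rows = [tuple(H[i, j] for j in range(5)) for i in range(5)]
# each a_i has a different S-neighbourhood among the b_j:
print(len(set(rows)))  # 5
```

No single quantifier-free 1-type over $b_0,\ldots,b_{k-1}$ repeats among the $a_i$, so no uniform bound on the number of types is possible as $k$ grows; this is the failure of stability.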
If $|T(a)\cap B|=1$ for any $R$-class $B$ not containing $a$, then $T(a)$ is $R$-free. By our argument, $T(a)$ is a simple unstable homogeneous $S,T$-graph, so by the Lachlan-Woodrow theorem $T(a)$ is isomorphic to the Random Graph. This contradicts our hypothesis, since the Random Graph contains infinite cliques and infinite independent sets.
Suppose then that $|T(a)\cap B|>1$. Then $R$ defines an equivalence relation with finite classes in $T(a)$, which is unstable and embeds no infinite $S$-cliques. We could have two orbits of pairs of $R$-classes in $T(a)$, so that only one of $S$ and $T$ is realised in any union of two $R$-classes. In this case, consider the (definable) structure $T(a)/R$ with the (definable) relations $\hat T$ and $\hat S$ that hold between two elements $x,y$ of $T(a)/R$ if the pair of classes in $T(a)$ that $x,y$ represent realises only the corresponding relation. Then $T(a)/R$ is a simple unstable graph, isomorphic to the random graph by the Lachlan-Woodrow theorem, and therefore $T(a)$ embeds infinite $S$-cliques. We have reached a contradiction again.
The last case is when both $S$ and $T$ are realised in the union of any two $R$-classes in $T(a)$. Then $T(a)$ is a homogeneous simple unstable 3-graph and ${\rm Aut}(M)_a$ acts 2-transitively on the set of $R$-classes induced in $T(a)$. Notice that in $T(a)$ as a separate structure $S$ is still nonforking, since the indiscernible half-graph shows that the elements of $T(a)$ are $S$-related to an infinite $T$-clique within $T(a)$. So $T(a)$ still satisfies the hypotheses of the proposition, and we can iterate the argument (take intersections $T(a_1)\cap T(a_2)\cap\ldots\cap T(a_n)$ where $a_{i+1}\in T(a_i)$) until we reach a simple unstable $R$-free graph, and therefore a contradiction as in the other cases.
\end{proof}
\begin{remark}\label{trivialremark}
Note that a union $U=A\cup B$ of two $R$-classes is homogeneous in a restricted sense: suppose that $C,D$ are isomorphic subsets of $U$. If their union includes an $S$- or a $T$-edge, then the extension of the isomorphism to an automorphism of $M$ will fix $U$ setwise, and so its restriction to $U$ will be an automorphism of $U$. But if $C$ and $D$ are $R$-cliques of the same size, both contained in the same class $A$, there is no guarantee that there will be an extension of the isomorphism fixing $B$ setwise as well.
\end{remark}
\begin{observation}\label{ObsNoFER}
Let $M$ be a simple unstable homogeneous 3-graph in which $R$ defines an equivalence relation with finite classes. Then $R$ is the only invariant equivalence relation on $M$. In particular, there are no invariant proper equivalence relations with finitely many classes in $M$.
\end{observation}
\begin{proof}
Since $M$ is unstable and $R$ is stable and forking, we have by Proposition \ref{PropSTNf} that $S$ and $T$ are nonforking unstable relations. In particular, $S$ and $T$ do not define equivalence relations. Since $R$ is realised and an equivalence relation, $S\vee T$, $R\vee S$ and $R\vee T$ do not define equivalence relations either. The result follows by quantifier elimination.
\end{proof}
Let $(I,<)$ be a linearly ordered set. The sequence $(a_i:i\in I)$ is $A$-independent if for every $i\in I$, $a_i\indep[A]a_{<i}$. Recall two lemmata from simplicity theory (5.14 and 5.20 (4) in \cite{casanovas2011simple}):
\begin{lemma}
Let $(a_i:i\in I)$ be $A$-independent. If $J,K$ are subsets of $I$ such that $J<K$ (that is, $j<k$ for any $j\in J, k\in K$), then ${\rm tp}((a_i:i\in K)/A(a_i:i\in J))$ does not divide over $A$.
\label{Casanovas514}
\end{lemma}
\begin{lemma}
If $T$ is simple, then $A\indep[B]C\Leftrightarrow A\indep[B]{\rm acl}(C)\Leftrightarrow{\rm acl}(A)\indep[B]C\Leftrightarrow A\indep[{\rm acl}(B)]C$.
\label{Casanovas520}
\end{lemma}
\begin{observation}\label{ObsRFree}
Let $M$ be a simple unstable homogeneous 3-graph in which $R$ defines an equivalence relation with finite classes. Then any $R$-free set is $\varnothing$-independent.
\end{observation}
\begin{proof}
We can use the fact that $S$, $T$ are nonforking relations (Proposition \ref{PropSTNf}) and the Independence Theorem to construct any $R$-free structure, one vertex at a time. At each step, we obtain a vertex that is independent from all the previous vertices.
\end{proof}
\begin{observation}
If $X,Y$ are unions of $R$-classes and $X\cap Y=\varnothing$, then $X\indep Y$.
\label{ObsIndepClasses}
\end{observation}
\begin{proof}
A transversal of $X\cup Y$ is independent by Observation \ref{ObsRFree}. We can order it in such a way that the elements $(c_i:i\in I)$ from a transversal of $X$ appear before the elements $(c_j:j\in J)$ of a transversal of $Y$. By Lemma \ref{Casanovas514}, ${\rm tp}((c_j:j\in J)/(c_i:i\in I))$ does not divide over $\varnothing$; since $X\subseteq{\rm acl}(c_i:i\in I)$ and $Y\subseteq{\rm acl}(c_j:j\in J)$, Lemma \ref{Casanovas520} yields $X\indep Y$.
\end{proof}
\begin{remark}\label{rmksubsets}
By monotonicity, if $K\subset A$, $K'\subset B$, and $A,B$ are distinct $R$-classes we get $K\indep K'$. We will make use of this fact in the next proposition.
\end{remark}
\begin{observation}\label{ObsPartition}
Let $M$ be a homogeneous 3-graph in which $R$ defines an equivalence relation with finite classes and $S$ is not an equivalence relation, and such that ${\rm Aut}(M)$ acts 2-transitively on $M/R$. If for distinct $R$-classes $A,B$ there exists a nontrivial partition $A=A_1\cup\ldots\cup A_m$, $B=B_1\cup\ldots\cup B_m$ such that the structure induced on $A_i\cup B_i$ is a $K_{r,r}^S$ for some $r$, and there are no other $S$-edges in $A\cup B$, then $m=2$.
\end{observation}
\begin{proof}
Suppose for a contradiction that $m>2$. The partition is in this case definable over $a\in A,b\in B$ as an equivalence relation $\sim$. Since $m>2$, we can find $a\in A$ and three elements $b,b',b''\in B$ such that $b\sim b'$, $b\not\sim b''$, and $a$ is $T$-related to all three. Then $abb'$ and $abb''$ form triangles $TTR$ with a common vertex $a$ and are therefore isomorphic, but there is no way to extend this isomorphism to an automorphism of $M$: for $b,b'$ there exists $a'\in A$ such that $S(a',b)\wedge S(a',b')$, but no element of $A$ satisfies this formula for $b,b''$.
\end{proof}
\begin{lemma}\label{Lemma1}
Let $M$ be an imprimitive simple unstable homogeneous 3-graph with $R$-classes of size $n<\omega$, and suppose that ${\rm Aut}(M)$ acts 2-transitively on $M/R$. Let $A,B$ be distinct $R$-classes in $M$ and $r\mathrel{\mathop:}=|S(a)\cap B|$ for any $a\in A$. Then $n=2r$.
\end{lemma}
\begin{proof}
Let us denote $S(a)\cap B$ by $B_a$. Each $a\in A$ picks out (via $B_a$) one of the $n\choose r$ subsets of $B$ of size $r$. By Observation \ref{ObsTransOnClasses}, each element of $B$ is in the $S$-neighbourhood of some element from $A$, so we have $$B=\bigcup_{a\in A}(S(a)\cap B)$$
We can assume by Observation \ref{ObsEasyCase} that $n>2$. Suppose for a contradiction that $2r<n$. Then for all distinct $a,a'\in A$ we have $T(a)\cap T(a')\cap B\neq\varnothing$. From this it follows that $B_a\cap B_{a'}\neq\varnothing$ for all distinct $a,a'\in A$: to see this, suppose that we had distinct $a,a',a''\in A$ such that $B_a\cap B_{a'}=\varnothing$ and $B_a\cap B_{a''}\neq\varnothing$. We can find $b,b'\in B$ such that $T(b,a)\wedge T(b,a'')$ and $T(b',a)\wedge T(b',a')$ hold, so by homogeneity we can move $baa''$ to $b'aa'$ (see diagram below).
\[
\includegraphics[scale=0.7]{fig2.pdf}
\]
But any automorphism doing that should take $B_a\cap B_{a''}$ to $B_a\cap B_{a'}$, which is impossible since one of these is empty and the other is not. Furthermore, the number $r_2=|S(a)\cap S(a')\cap B|$ is constant for all distinct $a$ and $a'$ in $A$.
We have two possible cases: either there exist some $a,a'$ with $B_a=B_{a'}$, or for all $a\neq a'$ we have $B_a\neq B_{a'}$.
\begin{case}\label{Case1}
If for some $a,a'$ we have $B_a=B_{a'}$, define a relation $\sim$ on $A$ by $a\sim a'$ if $B_a=B_{a'}$. Note that for each $\sim$-class $k$, the set $k'=\bigcap_{\kappa\in k} S(\kappa)\cap B$ is an equivalence class of the corresponding equivalence relation defined on $B$ with respect to $A$. Also, the structure induced on $k\cup k'$ is $K_{r,r}^S$. Since $2r<n$, the equivalence relations each have at least three classes, each with at least two elements by Observation \ref{ObsEasyCase}. This cannot happen by Observation \ref{ObsPartition}.
\end{case}
\begin{case}\label{Case2}
Suppose then that for all $a\neq a'$ we have $B_a\neq B_{a'}$. Take $x,x'\in B$ and let $C$ be an $R$-class distinct from $B$; write $C_x=S(x)\cap C$ and $C_{x'}=S(x')\cap C$, and note that $C_x\neq C_{x'}$ by homogeneity and the case hypothesis. Choose $X\subset C$ of size $r$ such that $X\cap C_{x'}\neq\varnothing$, $X\neq C_{x'}$, and $X\cap C_x=\varnothing$. Such an $X$ exists because there are more than $r$ elements in $C\setminus C_x$ and $C\setminus(C_x\cup C_{x'})\neq\varnothing$, by our assumption $2r<n$. By homogeneity, there exists a $\delta$ such that $C_\delta=X$. Now choose $Y\subset B$ of size $r$ containing $x,x'$ (we can find $Y$ because, as a consequence of Observation \ref{ObsEasyCase}, $r\geq 2$); by the same argument there is a $\beta$ such that $B_\beta=Y$. By Remark \ref{rmksubsets}, $X\indep Y$, and also ${\rm tp}(\beta/Y)$ and ${\rm tp}(\delta/X)$ are nonforking over $\varnothing$. Furthermore, $\beta$ and $\delta$ have the same Lascar strong type over $\varnothing$ by Observation \ref{ObsNoFER}, so by the Independence Theorem there is $a\in M$ which is $S$-related to $x$, to $x'$, and to every element of $X$; since $|S(a)\cap C|=r=|X|$, in fact $S(a)\cap C=X$. Pick $c\in X\setminus C_{x'}$; then $S(a,x)\wedge T(x,c)\wedge S(a,c)$ and $S(a,x')\wedge T(x',c)\wedge S(a,c)$ hold.
\[
\includegraphics[scale=0.7]{fig3.pdf}
\]
By homogeneity, there is an automorphism $\sigma$ of $M$ fixing $ac$ and taking $x$ to $x'$; but then $\sigma$ takes $S(x)\cap S(a)\cap C=C_x\cap X=\varnothing$ to $S(x')\cap S(a)\cap C=C_{x'}\cap X\neq\varnothing$, a contradiction.
\end{case}
If $2r>n$, then we can carry out the same argument using $T$-neighbourhoods instead of $S$-neighbourhoods.
\end{proof}
The argument in the preceding lemma actually proves something stronger:
\begin{corollary}\label{CorExistsPartition}
Let $M$ be an imprimitive simple unstable homogeneous 3-graph with $R$-classes of size $n<\omega$, and suppose that ${\rm Aut}(M)$ acts 2-transitively on $M/R$. Let $A=a/R,B=b/R$ be distinct $R$-classes in $M$. Then there exists a partition $A=A_1\cup A_2$, $B=B_1\cup B_2$ such that the structure induced by $M$ on $A_i\cup B_i$ is $K_{r,r}^S$ and there are no more $S$-edges in $A\cup B$.
\end{corollary}
\begin{proof}
Suppose that there exist $c,c'\in A$ such that $B_c=B_{c'}$. Then we can define an equivalence relation $\sim_B$ on $A$ by the formula $$\varphi(u,v;a,b): (\bar R(a,u)\wedge \bar R(a,v))\rightarrow\forall x(\bar R(x,b)\rightarrow(S(x,u)\leftrightarrow S(x,v))),$$ where $\bar R$ is the reflexive closure of $R$. Therefore, for all $c,c'\in A$ either $B_c\cap B_{c'}=\varnothing$ or $B_c=B_{c'}$, so that a set of representatives for the $\sim_B$-classes in $A$ induces a partition of $B$ into sets of size $r$. By symmetry of $S$ and homogeneity, the $\sim_B$-classes are also of size $r$, and by the definition of $\sim_B$ the structure induced on $c/\sim_B\cup B_c$ is $K_{r,r}^S$. The argument from Case \ref{Case1} of Lemma \ref{Lemma1} shows that in this situation there are exactly two equivalence classes.
The argument from Case \ref{Case2} in Lemma \ref{Lemma1} says that we cannot have $B_c\neq B_{c'}$ for all $c\neq c'$ in $A$.
\end{proof}
In other words, in a homogeneous imprimitive 3-graph with finite classes, the classes have size $2r$ for some $r\geq 1$ and the structure induced on a pair of distinct $R$-classes is that of two disjoint copies of $K_{r,r}^S$ (and therefore the rest of the edges form two disjoint copies of $K_{r,r}^T$). Given two distinct $R$-classes $B,C$ and subsets $B_1\subset B, C_1\subset C$, we write $S(B_1,C_1)$ if the structure induced on $B_1\cup C_1$ is a complete bipartite graph (where we interpret $S$ as edges and $R$ as nonedges).
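The configuration just described, two $R$-classes of size $2r$ meeting in two disjoint copies of $K_{r,r}^S$ (with the complementary cross pairs forming $K_{r,r}^T$), can be sketched concretely; the encoding below is ours and purely illustrative.

```python
# Sketch of the structure induced on two R-classes of size n = 2r:
# each class splits into two halves, S joins matching halves and T joins
# the remaining cross pairs. Encoding and names are illustrative only.

def two_classes(r):
    A = [("A", t) for t in range(2 * r)]
    B = [("B", t) for t in range(2 * r)]
    def colour(x, y):
        (cx, ix), (cy, iy) = x, y
        if cx == cy:
            return "R"                      # same class
        return "S" if ix // r == iy // r else "T"
    return A, B, colour

A, B, colour = two_classes(3)               # r = 3, classes of size 6
# every vertex of A has exactly r S-neighbours and r T-neighbours in B
print(all(sum(colour(a, b) == "S" for b in B) == 3 for a in A))  # True
```

The first halves of $A$ and $B$ play the role of $A_1,B_1$ and the second halves that of $A_2,B_2$; the check at the end is the equality $|S(a)\cap B|=r$ of Lemma \ref{Lemma1}.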
\begin{remark}\label{RmkInvariantSubsets}
The equivalence relations on $A$ and $B$ defined in the proof of Corollary \ref{CorExistsPartition} are invariant under automorphisms fixing some $a\in A$ and $b\in B$.
\end{remark}
\begin{observation}
Let $M$ be an imprimitive homogeneous simple unstable 3-graph in which the reflexive closure of $R$ is a nontrivial proper equivalence relation on $M$ with finite classes, and suppose that ${\rm Aut}(M)$ acts 2-transitively on $M/R$. Then ${\rm SU}(M)=1$.
\end{observation}
\begin{proof}
By transitivity, there is a unique 1-type over $\varnothing$, $p$. Let $q\in S(A)$ be an extension of $p$ to $A$. We have two possibilities:
\begin{enumerate}
\item{The type $q(x)$ contains the formula $R(x,a)$ for some $a\in A$. In this case, $q$ is clearly an algebraic extension, and therefore a forking extension of rank 0.}
\item{The type $q(x)$ does not contain $R(x,a)$ for any $a\in A$. Then $q$ is the type of an element in an $R$-class that is not represented in $A$. By Observation \ref{ObsIndepClasses}, $q$ is a nonforking extension of $p$.}
\end{enumerate}
In either case, every forking extension of $p$ is algebraic, so $p$ has {\rm SU}-rank 1 and therefore ${\rm SU}(M)=1$.
\end{proof}
\begin{lemma}\label{Lemma2}
Let $M$ be an imprimitive homogeneous simple unstable 3-graph in which the reflexive closure of $R$ is a nontrivial proper equivalence relation on $M$ with finite classes, and suppose that ${\rm Aut}(M)$ acts 2-transitively on $M/R$. Then the $R$-classes have size 2.
\end{lemma}
\begin{proof}
The proof of this proposition depends on Observations \ref{ObsNoFER}, \ref{ObsRFree}, and \ref{ObsIndepClasses}.
Suppose for a contradiction that the $R$-classes in $M$ have four or more elements. Consider two distinct classes $A$ and $B$. We know from Corollary \ref{CorExistsPartition} that there exist $r$-sets $A_0,A_1$ and $B_0,B_1$ such that $S(A_0,B_0)$ and $S(A_1,B_1)$. By Observation \ref{ObsIndepClasses}, $A\indep B$; take $r$-sets $A_2\subset A, B_2\subset B$ such that all of $A_2\cap A_0, A_2\cap A_1, B_2\cap B_0, B_2\cap B_1$ are nonempty (we can do this because $r\geq 2$). By monotonicity of independence, $A_2\indep B_2$. The formulas $\bigwedge_{a\in A_2} S(x,a)$ and $\bigwedge_{b\in B_2}S(x,b)$ isolate nonforking extensions $p_{A_2}, p_{B_2}$ of the unique type over $\varnothing$; any $c_0\models p_{A_2}, c_1\models p_{B_2}$ in distinct $R$-classes are independent realisations of nonforking extensions of $p$ and satisfy the same Lascar strong type over $\varnothing$ by Observation \ref{ObsNoFER}, so we can use the Independence Theorem to find $c\indep AB$ that is $S$-related to $A_2$ and $B_2$. Similarly, we can find some $d$ that is $S$-related to $A_0$ and $B_2$. Now if we take $a\in A_2\cap A_0,b\in B_1\cap B_2$ then $abc$ and $abd$ form triangles of type $SST$ with $T(a,b)$.
\[
\includegraphics[scale=0.9]{fig4.pdf}
\]
The partial isomorphism $a\mapsto a,b\mapsto b, c\mapsto d$ cannot be extended to an automorphism: by Remark \ref{RmkInvariantSubsets}, any such automorphism fixes $a$ and $b$, and therefore preserves the partition of $A$ into $\sim$-classes, yet it would take $A_2=S(c)\cap A$, which meets both classes, to the class $A_0=S(d)\cap A$. This contradicts the homogeneity of $M$.
\end{proof}
We now have all the ingredients for the proof of our main result.
\begin{theorem}\label{ThmCGamma}
Up to isomorphism, the only imprimitive simple unstable homogeneous 3-graph with finite classes such that all predicates are realised in the union of two classes is $C(\Gamma)$.
\end{theorem}
\begin{proof}
Let $M$ denote an imprimitive simple unstable homogeneous 3-graph with finite classes. We know by Lemma \ref{Lemma2} that the $R$-classes in $M$ have size 2, and by Corollary \ref{CorExistsPartition} that the structure induced on any pair of $R$-classes is the graph on four vertices with two $R$-edges, two $S$-edges, and two $T$-edges, and each of these pairs of edges spans the four vertices.
Our argument makes use of the Independence Theorem and Observation \ref{ObsIndepClasses}. Notice that, since the automorphism group of $M$ acts 2-transitively on the set of $R$-edges, there are no invariant equivalence relations on the set of classes. Therefore, there is a unique Lascar strong type of $R$-classes/edges.
Take any element $A$ of ${\rm Age}(C(\Gamma))$. We may assume without loss of generality that $A$ is a union of $R$-classes. We argue inductively that the structure induced by $C(\Gamma)$ on any $n$-tuple of classes can be embedded into $M$. For $n=1$, this is clear, as the structure is simply an $R$-edge. Similarly, by Corollary \ref{CorExistsPartition}, we have the result for $n=2$. Now suppose that we can embed the structure induced by $C(\Gamma)$ on any $n$-tuple of $R$-classes into $M$, say into $\bar a_1,\ldots,\bar a_n$. By Observation \ref{ObsIndepClasses}, $\bar a_1\indep \bar a_2,\ldots,\bar a_n$; by the primitivity of the action of ${\rm Aut}(M)$ on $M/R$, all classes have the same Lascar strong type over $\varnothing$. So let $\bar b_0\models{\rm tp}(\bar a_{n+1}/\bar a_1)$ and $\bar b_1\models{\rm tp}(\bar a_{n+1}/\bar a_2,\ldots,\bar a_n)$. These two types are realised in $M$ by quantifier elimination and the induction hypothesis. By the Independence Theorem, there exists a class $\bar b_{n+1}$ realising ${\rm tp}(\bar a_{n+1}/\bar a_1)\cup{\rm tp}(\bar a_{n+1}/\bar a_2,\ldots,\bar a_n)$ with $\bar b_{n+1}\indep\bar a_1,\ldots,\bar a_{n}$. The structure induced by $M$ on $\bar a_1,\ldots,\bar a_n,\bar b_{n+1}$ is isomorphic to the original structure in $C(\Gamma)$. This proves ${\rm Age}(C(\Gamma))\subseteq{\rm Age}(M)$; the reverse inclusion follows by the same argument, so $C(\Gamma)$ and $M$ are homogeneous structures of the same age. They are therefore isomorphic.
\end{proof}
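The four-vertex configuration used in the proof can also be checked mechanically. The following Python sketch (vertex labels are arbitrary) enumerates all ways to colour the four cross pairs between two $R$-classes of size 2 with two $S$- and two $T$-edges, and confirms that exactly two colourings have both colour classes spanning all four vertices, and that these two are carried to each other by swapping the two elements of one class:

```python
from itertools import combinations

# Two R-classes {0,1} and {2,3}; the four cross pairs carry S or T.
cross = [(0, 2), (0, 3), (1, 2), (1, 3)]

# Choose which two cross pairs carry S (the other two carry T).
spanning = []
for s_edges in combinations(cross, 2):
    s_verts = {v for e in s_edges for v in e}
    t_edges = [e for e in cross if e not in s_edges]
    t_verts = {v for e in t_edges for v in e}
    # Both the S-pair and the T-pair must span all four vertices.
    if s_verts == {0, 1, 2, 3} and t_verts == {0, 1, 2, 3}:
        spanning.append(s_edges)

# Exactly two such colourings, exchanged by swapping 2 and 3.
assert len(spanning) == 2
swap = {0: 0, 1: 1, 2: 3, 3: 2}
image = tuple(sorted(tuple(sorted((swap[u], swap[v]))) for u, v in spanning[0]))
assert image == tuple(sorted(spanning[1]))
```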
This concludes the analysis of imprimitive unstable structures with finite classes such that all predicates are realised in the union of two classes. What happens if only two predicates are realised in the union of two classes? In that case, the structure induced on a pair of distinct classes $A,B$ is either $K_{n,n}^S$ or $K_{n,n}^T$; as a consequence, all sets containing exactly one element from each $R$-class are isomorphic.
\begin{observation}
Let $M$ be a simple homogeneous 3-graph in which $R$ defines an equivalence relation. If ${\rm Aut}(M)$ acts transitively, but not 2-transitively on $M/R$, then the $S,T$-graph induced on any $X\subset M$ containing exactly one element from each $R$-class is homogeneous.
\label{ObsInterpretedGraph}
\end{observation}
\begin{proof}
Consider the graph defined on $M/R$ with predicates $\hat S,\hat T$ which hold of two distinct classes $a/R,b/R$ if for some/any $\alpha\in a/R,\beta\in b/R$ we have $S(\alpha,\beta)$ (respectively, $T(\alpha,\beta)$). This graph is clearly isomorphic to the graph induced on $X$.
\begin{claim}
The graph interpreted in $M/R$ as described in the preceding paragraph is homogeneous in the language $\{\hat S,\hat T\}$.
\label{ClaimInterpretsHenson}
\end{claim}
\begin{proof}
Let $\pi$ denote the quotient map $M\rightarrow M/R$. Given two isomorphic finite substructures $A,A'$ of $M/R$, any transversals to $\pi^{-1}(A)$ and $\pi^{-1}(A')$ are isomorphic, so by the homogeneity of $M$ there exists an automorphism $\sigma$ taking $\pi^{-1}(A)$ to $\pi^{-1}(A')$. The map $\pi\sigma\pi^{-1}$ is an automorphism of $M/R$ taking $A$ to $A'$.
\end{proof}
And the result follows.
\end{proof}
The graph on $M/R$ described in Observation \ref{ObsInterpretedGraph} is interpretable in $M$, so it must be a simple graph; if $S,T$ are unstable in $M$ it follows that $\hat S,\hat T$ are unstable in $M/R$. By the Lachlan-Woodrow Theorem, the graph on $M/R$ is isomorphic to the Random Graph.
\begin{corollary}\label{CorFiniteClasses}
Let $M$ be a simple homogeneous 3-graph in which $R$ defines an equivalence relation. If ${\rm Aut}(M)$ acts transitively, but not 2-transitively, on $M/R$, then $M\cong\Gamma[K_n^R]$ for some $n\in\omega$.
\end{corollary}
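To illustrate the composition $\Gamma[K_n^R]$ appearing in the corollary, here is a small Python sketch (the four-vertex graph standing in for $\Gamma$ and the value $n=2$ are arbitrary choices for illustration): each vertex of a finite graph is blown up into an $R$-class of size $n$, with $S$ between adjacent classes and $T$ between non-adjacent ones, and we check that the reflexive closure of $R$ is an equivalence relation and that the quotient recovers the original graph:

```python
import itertools

# A small graph standing in for Gamma (a path on 4 vertices).
gamma_edges = {(0, 1), (1, 2), (2, 3)}
n = 2  # size of each R-class

# Vertices of Gamma[K_n^R]: (class, index); R within a class,
# S across classes adjacent in Gamma, T across non-adjacent classes.
verts = [(c, i) for c in range(4) for i in range(n)]

def colour(u, v):
    if u[0] == v[0]:
        return "R"
    return "S" if tuple(sorted((u[0], v[0]))) in gamma_edges else "T"

# The reflexive closure of R is an equivalence relation: transitivity.
for u, v, w in itertools.permutations(verts, 3):
    if colour(u, v) == "R" and colour(v, w) == "R":
        assert colour(u, w) == "R"

# The quotient graph (S-edges between classes) recovers Gamma.
quot_edges = {tuple(sorted((u[0], v[0])))
              for u, v in itertools.combinations(verts, 2)
              if colour(u, v) == "S"}
assert quot_edges == gamma_edges
```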
\chapter{The Rank of Homogeneous Simple 3-graphs}\label{ChapPrimitive}
\setcounter{equation}{0}
\setcounter{theorem}{0}
\setcounter{case}{0}
\setcounter{subcase}{0}
In this chapter we will find all the primitive simple unstable 3-graphs and those imprimitive ones with finitely many infinite classes, but the main result is that primitive homogeneous simple 3-graphs cannot have ${\rm SU}$-rank 2 or higher. The proof of this fact consists of two parts: first we prove that there are no such structures of rank 2, and then we prove by induction that there are no structures of any higher finite rank. It was recently proved by Koponen that binary homogeneous simple structures are supersimple and have finite ${\rm SU}$-rank, so this induction on finite rank suffices.
\section{The Rank of Primitive Homogeneous Simple 3-graphs}\label{SectRank2}
We will prove in this section that all primitive homogeneous simple unstable 3-graphs have {\rm SU}-rank 1. From this and some basic results from Chapter \ref{ChapGenRes}, it will follow that the only such 3-graph is the random 3-graph (the Fra\"iss\'e limit of the class of all finite 3-graphs).
Let $M$ be a simple homogeneous unstable 3-graph. Of the three relations $R,S,T$ (all of which are realised in $M$), we assume that $R$ is stable and forking, and $S,T$ are nonforking. This assumption (not needed in the proof of Theorem \ref{NoRank2Graphs} below) is justified by Theorem \ref{ThmStableForking}. Given any $a\in M$, consider $R(a)$. This is a definable set of rank at most 1 by our assumptions on $R$ and the rank of $M$. What is the structure of $R(a)$?
The main theorem of this chapter is:
\begin{theorem*}\label{MainThm}
Let $M$ be a primitive simple homogeneous 3-graph. Then the theory of $M$ is of {\rm SU}-rank 1.
\end{theorem*}
To prove this theorem, we prove first
\begin{theorem*}\label{NoRank2Graphs}
There are no simple primitive homogeneous 3-graphs of {\rm SU}-rank 2.
\end{theorem*}
This last result is proved by arguing first that $R$ defines an equivalence relation on $R(a)$ with finitely many classes; we use the imprimitivity blocks of the $R$-neighbourhoods to define an incidence structure. This incidence structure is a semilinear space. The analysis divides into two main cases, depending on the $R$-diameter of the 3-graph; most of the work goes into proving the non-existence of primitive homogeneous simple 3-graphs of {\rm SU}-rank 2 and $R$-diameter 2. The case with ${\rm Diam}_R(M)=3$ is considerably easier.
The proof of the first of these theorems rests on the possibility of defining the semilinear space. We use this observation to start an inductive argument on the rank of the structure, and the second theorem is the basis for induction in that proof.
For most of the chapter, we will assume that the {\rm SU}-rank of ${\rm Th}(M)$ is 2; by Theorem \ref{ThmStableForking}, the elements in a primitive simple homogeneous 3-graph satisfy the statement ``if ${\rm tp}(a/B)$ divides over $A\subseteq B$, then dividing is witnessed by a stable formula''. In statements where the language is $\{R,S,T\}$, we assume that $R$ is a forking relation ($R(a,b)$ implies ${\rm tp}(a/b)$ divides over $\varnothing$), and therefore stable. In view of Lachlan's classification of stable homogeneous 3-graphs (see Theorem \ref{Lachlan3graphs}), we may suppose that ${\rm Th}(M)$ is unstable. Since any Boolean combination of stable formulas is stable, it follows that both $S$ and $T$ are unstable, and therefore nonforking. Statements for the language $\{R_1,\ldots,R_n\}$ may be more general and refer to homogeneous $n$-graphs. Note that if all relations are nonforking then a primitive structure $M$ is random in the sense that all its minimal forbidden structures are of size 2 (examples: the Random Graph, Random $n$-edge-coloured graphs), by the Independence Theorem argument used in the proof of Theorem \ref{PrimitiveAlice}.
Recall that for any relation $P$ and tuple $\bar a$, $P(\bar a)=\{\bar x\in M| P(\bar a, \bar x)\}$. We sometimes refer to this set as the $P$-neighbourhood of $\bar a$. In Definition \ref{DefnGraph}, we defined an $n$-graph to be a structure $(M, R_1,\ldots,R_n)$ in which each $R_i$ is binary, irreflexive and symmetric; also, we assume that for all distinct $x,y\in M$ exactly one of the $R_i$ holds and $n\geq2$. Finally, if $M$ is a homogeneous $n$-graph, we assume that for each $i\in\{1,\ldots,n\}$ there exist $a_i,b_i\in M$ such that $R_i(a_i,b_i)$ holds in $M$.
By simplicity, forking and dividing coincide, so in our statements and arguments we usually prove or use dividing instead of forking. We assume that all relations in the language are realised in $M$.
At this point, we know that there are two possibilities for the structure of 3-graphs of rank 2 with relations $R,S,T$: either $(M,R)$ has diameter 2, or it has diameter 3. In the latter case, since ${\rm Aut}(M)$ preserves the $R$-distance, for any $a\in M$ the sets $S(a)$ and $T(a)$ correspond to $R$-distances 2 and 3 from $a$, so ${\rm Aut}(M,R)={\rm Aut}(M,R,S,T)$.
\section{Semilinear 3-graphs of {\rm SU}-rank 2}\label{SectSemilinear}
In this section we establish the basis for the induction that will eventually yield the main result of this chapter, namely that all primitive homogeneous simple 3-graphs have ${\rm SU}$-rank 1. By Theorem \ref{ThmStableForking}, we may assume that the forking relation $R$ is stable. We cannot have more than one forking relation because we assume that each relation in the language isolates a 2-type, so by Theorem \ref{ThmStableForking}, if we had two forking relations then both would be stable, which would imply that the third (which is equivalent to the negation of the other two) is also a stable relation, and the theory of the homogeneous 3-graph would be stable; and by Theorem \ref{Lachlan3graphs} (due to Lachlan), there are no primitive stable 3-graphs. Here we start a case-by-case analysis of these graphs.
\begin{observation}
If $M$ is a primitive simple $\omega$-categorical relational structure of {\rm SU}-rank 2, and $R$ is a forking relation, then $R(a)$ is a set of rank 1.
\label{Rank1}
\end{observation}
\begin{proof}
Given any $a\in M$, $R(a)$ is a set of rank at most 1. If it were of rank 0, then the set of solutions of $R(x,a)$ would be finite, and therefore any element satisfying it would be in the algebraic closure of $a$, impossible by Observation \ref{AlgClosure}. Therefore, the rank of $R(a)$ is 1.
\end{proof}
\begin{proposition}\label{EmbedsAllRComplete}
Suppose that $M$ is a simple primitive homogeneous $R,S,T$-graph, the formula $R(x,a)$ forks, and $S,T$ are unstable, nonforking relations. Then $M$ embeds $K_n^R$ for all $n\in\omega$.
\end{proposition}
\begin{proof}
If $M$ were $K_n^R$-free for some $n$, then either $R(a)$ would be algebraic, contradicting primitivity by Observation \ref{AlgClosure}, or, by Ramsey's Theorem, $M$ would embed an infinite $S$- or $T$-clique, contradicting Observation \ref{NoInfiniteCliques}.
\end{proof}
Note that if $M$ is a simple 3-graph in which $R$ is stable, then $R$ is still a stable relation in the (homogeneous, simple) structure $R(a)$, since any model of the theory of $R(a)$ can be defined in a model of the original theory, and therefore witnesses for instability in the theory of $R(a)$ would also witness instability in the original theory.
What can we say about $R(a)$? We will show in the next section that the action of ${\rm Aut}(M/a)$ is imprimitive on $R(a)$, and that the vertices together with the imprimitivity blocks of their neighbourhoods form a semilinear space. In our argument, we will use Lachlan's classification of stable homogeneous 3-graphs (Theorem \ref{Lachlan3graphs}).
We summarise some properties of some of the infinite stable homogeneous 3-graphs in the table on page \pageref{table}. We present only those structures that may appear as $R(a)$ in a primitive homogeneous 3-graph.
\begin{table}[h]\label{table}
\caption{Some stable homogeneous 3-graphs}
\centering
\begin{tabular}{ccc}
\hline\hline
Structure&Equivalence relations&U-rank\\
\hline
$P^R[K_{\omega}^R]$&$R$&1\\
$K_{\omega}^R[Q^R]$&$S\vee T$&1\\
$Q^R[K_{\omega}^R]$&$R$&1\\
$K_{\omega}^R[P^R]$&$S\vee T$&1\\
$K_{\omega}^R\times K_n^S$&$R,S$&1\\
$K_{\omega}^R\times K_n^T$&$R,T$&1\\
$K_{\omega}^R[K_n^S[K_p^T]]$&$S\vee T,T$&1\\
$K_{\omega}^R[K_n^T[K_p^S]]$&$S\vee T,S$&1\\
$K_m^S[K_{\omega}^R[K_p^T]]$&$T\vee R,T$&1\\
$K_m^T[K_{\omega}^R[K_p^S]]$&$S\vee R,S$&1\\
$K_m^S[K_n^T[K_{\omega}^R]]$&$R\vee T,R$&1\\
$K_m^T[K_n^S[K_{\omega}^R]]$&$R\vee S,R$&1\\
\hline
\end{tabular}
\end{table}
\subsection{Lines}\label{lines}
In this subsection we define the main tool that we will use to eliminate candidate primitive homogeneous 3-graphs of {\rm SU}-rank 2: a family of definable sets we call \emph{lines}. Thus we interpret an incidence structure in $M$ in which lines are infinite and each point belongs to a finite number of lines. It is tempting to try to see this structure as a pseudoplane and use a general result of Simon Thomas on the nonexistence of binary omega-categorical pseudoplanes (see \cite{thomas1998nonexistence}), but our incidence structure falls short of being a pseudoplane or even a weak pseudoplane, which is what Thomas uses in his proof. It is a semilinear space (see Definition \ref{DefSemilinear}), which under some conditions also qualifies as a generalised quadrangle (cf. Observation \ref{Quadrangle}; see the paragraph preceding it for the definition of generalised quadrangle).
\begin{notation}
If the $R$-diameter of $M$ is 3, we adopt the convention that $S(a)$ and $T(a)$ correspond to $R^2(a)$ and $R^3(a)$ (cf. the paragraph after Proposition \ref{Year2}). Note that if the $R$-diameter is 3, the triangle $RRT$ is forbidden, and therefore the $R$-neighbourhood of any vertex $a$ is an $R,S$-graph, which is stable by the stability of $R$.
\label{Diam2Diam3}
\end{notation}
\begin{definition}
A semilinear space $S$ is a nonempty set of elements called \emph{points} provided with a collection of subsets called \emph{lines} such that any pair of distinct points is contained in at most one line and every line contains at least three points.
\label{DefSemilinear}
\end{definition}
\begin{remark}
{\rm As we have mentioned before, these structures are related to weak pseudoplanes. Given a structure $M$ and a definable family $\mathcal B$ of infinite subsets of $M$, the incidence structure $P=(M,\mathcal B)$ is a weak pseudoplane if for any distinct $X,Y\in\mathcal B$ we have $|X\cap Y|<\omega$ and each $p\in M$ lies in infinitely many elements of $\mathcal B$. The connection between our semilinear spaces and weak pseudoplanes is, then, that a semilinear space interpreted (i.e.,\, the lines form a definable family of subsets of $M$) in a homogeneous structure in which each line is infinite and each point lies in infinitely many lines is a weak pseudoplane. In all the semilinear spaces that we will encounter in this chapter, lines are infinite and each point belongs to finitely many lines.}
\end{remark}
The rest of this chapter consists of a study of the properties of a semilinear space definable in homogeneous primitive 3-graphs of {\rm SU}-rank greater than or equal to 2.
\begin{proposition}
Let $M$ be an infinite 3-graph such that ${\rm Aut}(M)$ acts transitively on $M$, $R(a)$ is infinite for $a\in M$, and $R$ defines an equivalence relation on $R(a)$ with finitely many equivalence classes. Denote by $\ell(a,b)$ the maximal $R$-clique in $M$ containing the $R$-edge $ab$. Then $(M,\mathcal{L})$, where $\mathcal{L}=\{\ell(a,b):M\models R(a,b)\}$, is a semilinear space.
\label{Semilinear3Graph}
\end{proposition}
\begin{proof}
We start by justifying our use of \emph{the} when we said that $\ell(a,b)$ is ``the maximal $R$-clique in $M$ containing the $R$-edge $ab$.'' Since we have $R(a,b)$, we know that $b\in R(a)$, so it is an element of one of the finitely many classes of $R$ in $R(a)$. Let $b/R^a$ denote the $R$-equivalence class of $b$ in $R(a)$; then $\{a\}\cup b/R^a$ is an infinite clique containing $a,b$. We claim that any clique containing $a,b$ is a subset of $\{a\}\cup b/R^a$. To see this, let $K$ be an $R$-clique containing $a,b$, and let $x\in K$ with $x\neq a,b$. Such an $x$ exists because $R$ partitions an infinite set into finitely many subsets. Since $K$ is a clique, we have that $x\in R(a)$, and as $R$ defines an equivalence relation on $R(a)$ and $R(x,b)$ holds, we have that $x\in b/R^a$. Therefore, $x\in\{a\}\cup b/R^a$ and $\ell(a,b)$ denotes this set.
So we have that two distinct points (vertices) belong to at most one element of $\mathcal{L}$. Any line contains at least three points, by transitivity of $M$ and the fact that $R$ forms infinite cliques within $R(a)$.
\end{proof}
\begin{definition}
A 3-graph is \emph{semilinear} if it satisfies the hypotheses of Proposition \ref{Semilinear3Graph}. In particular, whenever we refer to a semilinear 3-graph in this chapter we assume that points are incident with only finitely many lines.
\end{definition}
\begin{definition}\label{DefLines}
If $M$ is a semilinear 3-graph and $R(a,b)$ holds in $M$, then $\ell(a,b)$ is the imprimitivity block in $R(a)$ to which $b$ belongs, together with the vertex $a$. Equivalently, it is the largest $R$-clique in $M$ containing $a$ and $b$. We refer to these sets as \emph{lines}.
\end{definition}
We have introduced semilinear 3-graphs because a good deal of the analysis of homogeneous primitive 3-graphs of {\rm SU}-rank 2 depends more on this combinatorial property than on any simplicity or rank assumptions. The next two results establish that anything we prove about semilinear 3-graphs is also true of homogeneous primitive 3-graphs of {\rm SU}-rank 2.
\begin{observation}
Suppose $M$ is a primitive homogeneous simple 3-graph of {\rm SU}-rank 2, where $R$ is a forking relation, $S,T$ are nonforking, and $a\in M$. Then $R(a)$ is imprimitive.
\label{R(a)Imprimitive}
\end{observation}
\begin{proof}
If the $R$-diameter is 2, then all three predicates are realised in $R(a)$. By Observation \ref{Rank1}, $R(a)$ is a 3-graph of rank 1, so it cannot be primitive and unstable by Proposition \ref{Year2}, as it would embed infinite $S$-cliques, contradicting Observation \ref{NoInfiniteCliques}. And by Lachlan's Theorem \ref{Lachlan3graphs}, $R(a)$ cannot be primitive and stable (see the table of 3-graphs without infinite $S$- or $T$-cliques on page \pageref{table}).
If the $R$-diameter is 3, then $R(a)$ is a homogeneous $RS$-graph. It follows from the Lachlan-Woodrow Theorem \ref{LachlanWoodrow} and simplicity that $R(a)$ is isomorphic to $I_m[K_\omega]$ or to $I_\omega[K_n]$ ($m,n\in\omega$).
\end{proof}
\begin{proposition}
If $M$ is a homogeneous simple primitive 3-graph of {\rm SU}-rank 2, then $R$ defines an equivalence relation on $R(a)$ with finitely many infinite classes.
\label{RDefinesEqRel}
\end{proposition}
\begin{proof}
We know from Observation \ref{R(a)Imprimitive} that $R(a)$ is imprimitive. By quantifier elimination and our assumption that exactly one of $R,S,T$ holds for any pair of distinct vertices in $M$, any invariant equivalence relation on $R(a)$ is defined by a disjunction of at most two predicates from $L$. Our two main cases depend on the $R$-diameter of $M$.
\begin{case}
If ${\rm Diam}_R(M)=3$, then $R(a)$ is a homogeneous $R,S$-graph, which must be stable since $R$ is stable and in which both $R$ and $S$ are realised, by Observation \ref{NotRComplete}. The formula $S(x,y)$ does not define an equivalence relation on $R(a)$ by Proposition \ref{PropMultipartite}. Therefore, $R$ is an equivalence relation on $R(a)$ and by Observation \ref{NoInfiniteCliques}, this equivalence relation has finitely many classes, each of which is infinite by homogeneity and the fact that $R(a)$ is an infinite set.
\end{case}
\begin{case}
If ${\rm Diam}_R(M)=2$, then all predicates are realised in $R(a)$.
By Proposition \ref{PropMultipartite} the relation $S\vee T$ does not define an equivalence relation.
If $R\vee S$ defines an equivalence relation on $R(a)$, then it must have finitely many classes as any transversal to $R\vee S$ is a $T$-clique and $T$ does not form infinite cliques in $R(a)$. Each $R\vee S$-class in $R(a)$ is a homogeneous graph, so by the Lachlan-Woodrow Theorem \ref{LachlanWoodrow} it must be of the form $K_n^S[K_\omega^R]$, since $K_n^R[K_\omega^S]$ is impossible because $S$ forms infinite cliques in it. It follows that $R(a)$ is isomorphic to $K_m^T[K_n^S[K_\omega^R]]$, and $R$ defines an equivalence relation on $R(a)$ with $m\times n$ infinite classes (see the table on page \pageref{table}). The same argument shows that if $R\vee T$ defines an equivalence relation on $R(a)$, then $R$ is also an equivalence relation there, with finitely many infinite classes.
If $S$ defines an equivalence relation on $R(a)$, then it is a stable relation on $R(a)$, its classes are finite, and $R(a)$ is a stable 3-graph of one of the forms 6--11 from Lachlan's Theorem \ref{Lachlan3graphs}. We can eliminate all those stable graphs in which $S\vee T$, $R\vee S$, or $R\vee T$ defines an equivalence relation, since we have already dealt with those cases. In all other cases (see the table on page \pageref{table}), $R$ defines an equivalence relation with finitely many infinite classes.
\end{case}
\end{proof}
\setcounter{case}{0}
Observation \ref{R(a)Imprimitive} and Proposition \ref{RDefinesEqRel} tell us that in simple homogeneous primitive 3-graphs of {\rm SU}-rank 2 the forking predicate $R$ defines an equivalence relation on $R(a)$ with finitely many infinite classes. We summarise this in a lemma for easier reference:
\begin{lemma}
Primitive homogeneous simple 3-graphs of {\rm SU}-rank 2 are semilinear. The lines of the semilinear space are infinite and each point is incident with finitely many lines.
\label{LemmaSemilinear}
\end{lemma}
\begin{proof}
By primitivity, none of the relations $R,S,T$ is algebraic (cf. Observation \ref{AlgClosure}), so $R(a)$ is infinite. The transitivity of the 3-graph follows trivially from primitivity. Observation \ref{R(a)Imprimitive} and Proposition \ref{RDefinesEqRel} prove that (the reflexive closure of) $R$ is an equivalence relation on $R(a)$ with finitely many infinite classes.
\end{proof}
We have defined a semilinear space over a homogeneous structure, but there is no reason for it to be homogeneous as a semilinear space. This observation differentiates our work from Alice Devillers' study of homogeneous semilinear spaces (see \cite{DevillersUltrahomogeneous2000}).
In Devillers' formulation, a semilinear space is a two-sorted structure with one sort for points and another for lines; it is homogeneous if the usual condition on the extensibility of local isomorphisms between finite configurations of points and lines is satisfied.
Our semilinear space is defined in a primitive homogeneous simple 3-coloured graph. It is clear that we have two types of non-collinear points, corresponding to $S$- and $T$-edges in the coloured graph. If the diameter of the graph is 2, then we will see that $n=|R(c)\cap R(a)|$ and $m=|R(d)\cap R(a)|$ are not necessarily equal for $c\in S(a)$ and $d\in T(a)$, even though $ac$ and $ad$ are isomorphic as incidence structures. Any automorphism of the semilinear space extending the isomorphism $a\mapsto a, c\mapsto d$ would necessarily take $R(c)\cap R(a)$ to $R(d)\cap R(a)$, which is impossible. Thus, we cannot expect our semilinear spaces to be homogeneous in the sense of Devillers.
We will use the semilinear space to analyse the structure of {\rm SU}-rank 2 graphs.
Any two distinct vertices belong to at most one line and two distinct lines intersect in at most one vertex. Any given vertex belongs only to a finite number of lines, each of which is infinite. As a consequence:
\begin{observation}
Suppose that $M$ is a semilinear 3-graph and $a\in M$. Then for all $d\in R^2(a)$ and $\ell$ a line through $a$, $|R(d)\cap\ell|<2$.
\label{NoTriangles}
\end{observation}
\begin{proof}
If we had two different points $b_1,b_2$ on $\ell\cap R(d)$, then as we have $R(b_1,b_2)$ we get that $b_1,b_2$ belong to the same line through $d$. But then $b_1,b_2\in\ell(a,b_1)\cap\ell(d,b_1)$, contradicting the fact, obvious from Definition \ref{DefSemilinear}, that the intersection of two distinct lines in a semilinear space is either empty or a singleton.
\end{proof}
The situation in primitive semilinear 3-graphs is essentially different from that in primitive structures of {\rm SU}-rank 1. Compare our next observation with Proposition \ref{aclab}.
\begin{observation}
Let $M$ be a primitive homogeneous semilinear 3-graph. If the $R$-distance between $a$ and $b$ is 2, then ${\rm acl}(a,b)\neq\{a,b\}$.
\end{observation}
\begin{proof}
The vertices $a$ and $b$ belong to a finite number of lines. Since the $R$-distance from $a$ to $b$ is 2, there exists at least one element $c\in R(a)$ such that $R(c,b)$ holds. There is at most one such $c$ in any line through $a$, so, as there are only finitely many lines through $a$, the set of such elements is finite. These points are algebraic over $a,b$ and distinct from them.
\end{proof}
Observation \ref{NoTriangles} implies that the lines of the semilinear space interpreted in a semilinear 3-graph do not form triangles.
The sets $R(a),S(a),T(a)$ are homogeneous in the language $L$, so having the same type over $a$ is equivalent to being in the same orbit under ${\rm Aut}(M/a)$. Therefore, we cannot have more than 2 nested $a$-invariant/definable proper nontrivial equivalence relations in any of them, as we would need more than 3 types of edges to distinguish them. For the same reason, the number of lines through $a$ that $R(c)$ meets for $c\in R^2(a)$ is invariant under $a$-automorphisms (which fix the set of lines through $a$) as $c$ varies in an $a$-orbit.
\subsection{The nonexistence of primitive homogeneous 3-graphs of $R$-diameter 2 and {\rm SU}-rank 2}\label{subsectRDiam2}
We know by Lemma \ref{LemmaSemilinear} that finitely many lines are incident with any vertex $a\in M$ in a primitive simple homogeneous 3-graph of {\rm SU}-rank 2. Recall from subsection \ref{lines} that two lines intersect in at most one point (by Observation \ref{NoTriangles} or by the definition of a semilinear space). The main question to ask is: if $a$ and $b$ are not $R$-related, how many lines containing $a$ can the $R$-neighbourhood of $b$ meet?
\begin{proposition}
Let $M$ be a homogeneous primitive semilinear 3-graph with simple theory in which $S$ and $T$ are nonforking predicates, and suppose that ${\rm Diam}_R(M)=2$. If for every $b\in R(a)$ and each line $\ell$ through $b$ other than $\ell(a,b)$ we have that $\ell\cap S(a)$ and $\ell\cap T(a)$ are both nonempty, then either $\ell\cap S(a)$ and $\ell\cap T(a)$ are both infinite, or one of them is of size 1 and the other is infinite.
\label{Intersections}
\end{proposition}
\begin{proof}
Clearly, at least one of $\ell\cap S(a)$ and $\ell\cap T(a)$ is infinite. Suppose for a contradiction that $1<|\ell\cap S(a)|<\omega$. The formula $S(x,a)$ does not divide over $\varnothing$; therefore, by simplicity, for any indiscernible sequence $(a_i)_{i\in\omega}$ with $a_0=a$ the set $\{S(x,a_i):i\in\omega\}$ is consistent; in particular, this is the case when $R(a_0,a_1)$ holds. Therefore, $S(a)$ embeds infinite $R$-cliques and by homogeneity every $R$-related pair in $S(a)$ is in one such clique.
\[
\includegraphics[scale=0.8]{Semilinear01.pdf}
\]
Take any $c,c'\in\ell\cap S(a)$, and let $X$ be an infinite $R$-clique in $S(a)$ containing them. Consider $d\in X\setminus\ell$; then $X\subset\ell(c,d)$, so $c,c'\in\ell(c,d)$, while $b\notin\ell(c,d)$. But $c,c'$ also belong to $\ell=\ell(b,c)$. Therefore, two distinct points lie on two different lines, a contradiction.
\end{proof}
Our next observation is crucial to proving that there are no homogeneous 3-graphs of rank 2 and diameter 2. We mentioned before that the incidence structure interpreted in $M$ by the lines and vertices is close to being a generalised quadrangle. Recall that a generalised quadrangle (see \cite{thas2004symmetry}) is an incidence structure of points and lines with possibly infinite parameters $s$ and $t$ satisfying:
\begin{enumerate}
\item{any two points lie on at most one line,}
\item{any line is incident with exactly $s+1$ points, and any point with exactly $t+1$ lines, and}
\item{if $x$ is a point not incident with a line $L$, then there is a unique point incident with $L$ and collinear with $x$.}
\end{enumerate}
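For intuition, the smallest thick generalised quadrangle is the classical $W(2)$, with parameters $s=t=2$: its points are the 2-subsets of a 6-element set and its lines are the perfect matchings. The following Python sketch verifies the three axioms above for this finite model (the construction is classical; it is included here purely as an illustration of the definition):

```python
from itertools import combinations

pts = list(combinations(range(6), 2))

# Lines: perfect matchings of {0,...,5} into three disjoint pairs.
def matchings(elts):
    if not elts:
        yield ()
        return
    a = elts[0]
    for b in elts[1:]:
        rest = tuple(x for x in elts if x not in (a, b))
        for m in matchings(rest):
            yield ((a, b),) + m

lines = [frozenset(m) for m in matchings(tuple(range(6)))]

assert len(pts) == 15 and len(lines) == 15
assert all(len(L) == 3 for L in lines)                     # s + 1 = 3 points per line
assert all(sum(p in L for L in lines) == 3 for p in pts)   # t + 1 = 3 lines per point

def collinear(p, q):
    return p != q and any(p in L and q in L for L in lines)

# Axiom 3: a point p off a line L is collinear with a unique point of L.
for p in pts:
    for L in lines:
        if p not in L:
            assert sum(collinear(p, q) for q in L) == 1
```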
In \cite{macpherson1991interpreting}, Macpherson proves:
\begin{theorem}
Let $M$ be a homogenizable structure. Then it is not possible to interpret in $M$ any of the following:
\begin{enumerate}
\item{an infinite group,}
\item{an infinite projective plane,}
\item{an infinite generalised quadrangle, or}
\item{an infinite Boolean algebra.}
\end{enumerate}
\label{DugaldQuadrangle}
\end{theorem}
\begin{observation}
If $M$ is a homogeneous primitive semilinear 3-graph and ${\rm Diam}_R(M)=2$, then it is not the case that for all $b\in R^2(a)$ the set $R(b)$ intersects all lines containing $a$.
\label{Quadrangle}
\end{observation}
\begin{proof}
In this case, the incidence structure interpreted in $M$ with lines of the form $\ell(x,y)$ and vertices as points is a generalised quadrangle with infinite lines and as many lines through a point as $R$-classes in $R(a)$, contradicting Theorem \ref{DugaldQuadrangle}.
\end{proof}
The following observation will help us find different points $c,c'$ in $S(a)$ or $T(a)$ such that $R(c)$ and $R(c')$ meet the same lines through $a$. Recall that given a subset $B$ of $M$, the group of all automorphisms of $M$ fixing $B$ setwise is denoted by ${\rm Aut}(M)_{\{B\}}$.
\begin{observation}
Let $M$ be a primitive homogeneous semilinear 3-graph of $R$-diameter 2. Let $X$ be a set of lines incident with $a$. Then ${\rm Aut}(M/a)_{\{\bigcup X\}}$ acts transitively on $\ell\setminus\{a\}$ for all $\ell\in X$.
\label{TransOnLines}
\end{observation}
\begin{proof}
This is trivial if only one of $S,T$ is realised in the union of two lines through $a$, so assume that all types of edges are realised in the union of two lines through $a$.
Note that at least one of $RSS, RTT$ is realised in $R(a)$. Assume without loss of generality that $RSS$ is realised in $R(a)$. Let $b,b'$ be elements of $R(a)$ satisfying $R(b,b')$. Enumerate the lines in $X$ as $\ell_1,\ldots,\ell_k$, and assume $b,b'\in\ell_k\setminus\{a\}$.
\[
\includegraphics[scale=0.7]{Semilinear02.pdf}
\]
We can find elements $d_1\in\ell_1,\ldots,d_{k-1}\in\ell_{k-1}$ such that $S(b,d_i)\wedge S(b',d_i)$ for $i\in\{1,\ldots,k-1\}$, so ${\rm tp}(b/a,d_1,\ldots,d_{k-1})={\rm tp}(b'/a,d_1,\ldots,d_{k-1})$. By homogeneity, there is an automorphism of $M$ fixing $a,d_1,\ldots,d_{k-1}$ (and therefore fixing $\bigcup X$ setwise) taking $b$ to $b'$.
\end{proof}
If only one of $S,T$ is realised in the union of two lines through $a$, then each pair of $R$-classes in $R(a)$ is isomorphic to a complete bipartite graph (the parts of the partition are $R$-cliques and the edges are of colour $S$ or $T$), so we have two orbits of pairs of lines through $a$.
\begin{observation}
Let $M$ be a primitive homogeneous simple semilinear 3-graph of $R$-diameter 2. If in $R(a)$ all relations are realised in the structure induced on a pair of lines through $a$, and there are $m$ lines through $a$, then there is only one orbit of $k$-sets of lines over $a$, for all $k\leq m$.
\label{OneOrbit}
\end{observation}
\begin{proof}
There are two cases, depending on whether we can find witnesses to the instability of $S,T$ within $R(a)$.
If $R(a)$ is a stable structure, then it is isomorphic to $K_m^S\times K_\omega^R$ or to $K_m^T\times K_\omega^R$, by Lachlan's Theorem \ref{Lachlan3graphs}, Observation \ref{NoInfiniteCliques}, and the hypothesis that all relations are realised in the structure induced on a pair of incident lines. In any of these structures there are monochromatic transversal cliques of size $m$, so the observation follows by invariance and homogeneity.
If we can find witnesses to the instability of $S,T$ within $R(a)$, then $R(a)$ is isomorphic to a simple unstable homogeneous 3-graph in which $R$ defines a finite equivalence relation. By Proposition \ref{PropImprimitive3Graphs}, we can embed transversal monochromatic cliques, and again the observation follows by invariance and homogeneity.
\end{proof}
Notice that if for some element $b\in R(a)$ and some line $\ell$ through $b$ different from $\ell(a,b)$ the sets $\ell\cap S(a)$ and $\ell\cap T(a)$ are both nonempty, then, since by homogeneity we can transitively permute the lines through $b$ whilst fixing $ab$, every line through any $b\in R(a)$, except $\ell(a,b)$, meets both orbits over $a$ in $R^2(a)$. Furthermore, the sizes of these intersections do not depend on the line: each is either 1 or infinite, and at least one of them is infinite. To put it differently, if one line through $b$ other than $\ell(a,b)$ is almost entirely contained in $S(a)$ (the point $b$ itself lies in $R(a)$), then every such line is almost entirely contained in one orbit. Now we prove that all lines that meet $R^2(a)$ meet both $S(a)$ and $T(a)$.
We will need the following well-known fact from permutation group theory (see, for example, 2.16 in \cite{cameron1990oligomorphic}) to strengthen Observation \ref{TransOnLines}:
\begin{theorem}
Let $G$ be a permutation group on a countable set $\Omega$, and let $A,B$ be finite subsets of $\Omega$. If $G$ has no finite orbits on $\Omega$, then there exists $g\in G$ with $Ag\cap B=\varnothing$.
\label{ThmNeumann}
\end{theorem}
\begin{proposition}
Suppose that $M$ is a primitive simple homogeneous semilinear 3-graph with $m$ lines through each point, and let $X=\{\ell_i:1\leq i\leq k\}$ be a set of lines through $a\in M$. Then for any transversal $A$ to the $k$ lines in $X$, there exists a transversal $B$ to $X$ such that $B\cong A$ and $B\cap A=\varnothing$.
\label{PropLotsOfTrans}
\end{proposition}
\begin{proof}
This is a direct consequence of Theorem \ref{ThmNeumann} and Observation \ref{TransOnLines}.
\end{proof}
\begin{proposition}
Let $M$ be a simple homogeneous primitive semilinear 3-graph of $R$-diameter 2, in which all predicates are realised in the structure induced on a pair of incident lines, and $a\in M$. Then for all $b\in R(a)$, each line $\ell\neq\ell(a,b)$ through $b$ meets both $S(a)$ and $T(a)$.
\label{Intersections2}
\end{proposition}
\begin{proof}
First note that it is not possible to have $R(b)\cap S(a)=\varnothing$ or $R(b)\cap T(a)=\varnothing$. To see this, suppose for a contradiction that $R(b)\cap S(a)=\varnothing$; moving $b$ by homogeneity within $R(a)$, it follows that $R(b')\cap S(a)=\varnothing$ for all $b'\in R(a)$, contradicting the assumption that vertices in $S(a)$ are at $R$-distance 2 from $a$. Similarly, $R(b)\cap T(a)\neq\varnothing$. Therefore, this proposition can only fail if we have at least 3 lines through $a$.
Suppose for a contradiction that there are $m\geq3$ lines incident with $a$ and for all $b\in R(a)$ and $\ell\neq\ell(a,b)$ through $b$, $\ell\setminus\{b\}\subset S(a)$ or $\ell\setminus\{b\}\subset T(a)$. By Observation \ref{Quadrangle}, we may assume that $k=|R(c)\cap R(a)|<m$ for all $c\in S(a)$. We define two binary relations and a binary function on $S(a)$: for $c,c'\in S(a)$, $E(c,c')$ holds if $R(c)$ and $R(c')$ meet the same lines through $a$, and $C(c,c')$ holds if there exists $b\in R(a)$ such that $b,c,c'$ are collinear. Given two elements $x,y\in S(a)$, let $\#(x,y)$ denote the number of $R$-classes in $R(a)$ that $R(x)$ and $R(y)$ meet in common, that is $\#(x,y)=|\{z\in R(x)\cap R(a):\exists w(w\in R(y)\cap R(a)\wedge( R(w,z)\vee w=z))\}|$.
\begin{case}
If $k=1$, then there are at least four types of unordered pairs of vertices in $S(a)$. We prove this assertion as follows: for $c\in S(a)$, let $b_c$ denote the unique element of $R(c)\cap R(a)$. The relations $\hat P(c,c')$, holding if $P(b_c, b_{c'})$ is true ($P\in \{R,S,T\}$), are invariant and imply that $b_c,c,c'$ are not collinear. From the assumption that for all $\ell\neq\ell(a,b)$ through $b\in R(a)$ the set $\ell\setminus\{b\}$ is contained in $S(a)$ or in $T(a)$, together with the first paragraph of this proof, it follows that $C$ is also realised in $S(a)$. That gives us too many types of unordered pairs of distinct elements in $S(a)$.
\end{case}
\begin{case}
If $2\leq k<m$, then we have two subcases:
\begin{subcase}
If $m-k\geq2$, then we can find at least five types of unordered pairs of elements in $S(a)$. The proof is as follows: Observation \ref{OneOrbit} implies that $E$ is a nontrivial proper equivalence relation on $S(a)$ with $m\choose k$ classes. Now we claim that there are at least four types of $E$-inequivalent pairs in $S(a)$. By Observation \ref{OneOrbit} there exist pairs of elements $c,c'\in S(a)$ with $\neg E(c,c')\wedge\#(c,c')=k-1$. Using Proposition \ref{PropLotsOfTrans}, we can find such pairs which additionally satisfy $R(c)\cap R(c')\cap R(a)\neq\varnothing$, and such pairs with $R(c)\cap R(c')\cap R(a)=\varnothing$. The same argument in the case $\#(c,c')=k-2$ yields two more types of unordered pairs of distinct elements from $S(a)$, giving, together with the $E$-equivalent pairs, a total of at least five.
\end{subcase}
\begin{subcase}
Suppose then that $m-k=1$, so $E$ has $m$ equivalence classes. There is at least one line through $b$ almost entirely contained in $T(a)$, so we are left with at most $m-2$ lines through $b$ distinct from $\ell(a,b)$ which may meet $S(a)$.
\begin{claim}
$R(b)$ meets $m-1$ $E$-classes in $S(a)$.
\label{ClaimMeetsAllClasses}
\end{claim}
\begin{proof}
We know that $R(b)\cap S(a)\neq\varnothing$. Let $c\in R(b)\cap S(a)$. By hypothesis, $|R(c)\cap R(a)|=m-1$. Let $X$ denote $R(c)\cap R(a)$. By Observation \ref{OneOrbit}, we can find $a$-translates $X_i$ of $X$, $i\leq m-1$, transversal to any of the $m-1$ sets of $m-1$ lines through $a$ that include the $R$-class to which $b$ belongs. And by the transitivity of ${\rm Aut}(M/a)$ on $R(a)$ we can find such translates $Y_i$ in those sets of lines with $b\in Y_i$. By homogeneity, each of the automorphisms taking $X$ to $Y_i$ moves $c$ to a new $E$-class.
Clearly, $R(b)$ does not meet the $E$-class of elements whose $R$-neighbourhoods meet all the lines in $R(a)$ except $\ell(a,b)$.
\end{proof}
As the lines are infinite and $E$ has only finitely many classes, for each line $\ell$ through $b$ that meets $S(a)$ there is at least one $E$-class containing infinitely many elements of $\ell$. Since we have at most $m-2$ lines through $b$ that meet $S(a)$ and $R(b)$ meets $m-1$ $E$-classes, at least one line meets more than one $E$-class. Note that if a line $\ell$ meets more than one $E$-class, then the intersection of $\ell$ with each of the $E$-classes it meets is infinite: by homogeneity, elements of $\ell$ in the different classes have the same type over $ab$, and at least one of the intersections is infinite.
Again by homogeneity (we can permute the lines over $b$ that meet $S(a)$ whilst fixing $ab$), each line through $b$ that meets $S(a)$ meets more than one class.
As $k\geq 2$, there exist $b_1,b_2\in R(a)$ such that for some $c\in S(a)$ we have $\ell(b_i,c)\setminus\{b_i\}\subset S(a)$ ($i=1,2$). Take $c'\in\ell(b_1,c)\cap S(a)$ and $c''\in\ell(b_2,c)\cap S(a)$, both distinct from $c$ and $E$-equivalent to $c$. Such elements exist because the intersections of the lines through $b_i$ with the $E$-class of $c$ are infinite, by homogeneity and the fact that at least one of the intersections is infinite; thus $E(c,c')\wedge R(c,c')$ holds. Also, $c'$ and $c''$ are not $R$-related (the lines of the semilinear space do not form triangles, cf.\ Observation \ref{NoTriangles}), but they are $E$-equivalent since $E(c,c')$ and $E(c,c'')$ hold, so at least one of $E(c',c'')\wedge S(c',c'')$ and $E(c',c'')\wedge T(c',c'')$ is realised. This gives us at least two types of $E$-equivalent pairs.
Now we will show that there are at least two types of $E$-inequivalent pairs. By Proposition \ref{PropLotsOfTrans}, we can find pairs of $E$-inequivalent elements with no common $R$-neighbours in $R(a)$ and also pairs of $E$-inequivalent elements with common $R$-neighbours in $R(a)$. Again, we get at least four types of unordered pairs of distinct elements from $S(a)$.
\end{subcase}
\end{case}
\end{proof}
\setcounter{case}{0}
\setcounter{subcase}{0}
\begin{proposition}
Let $M$ be a primitive homogeneous semilinear 3-graph with simple theory and $R$-diameter 2, and assume that $R$ is a forking relation and $S,T$ are nonforking. Then each element is incident with at least three lines.
\label{atleast3}
\end{proposition}
\begin{proof}
Note first that if in any pair of lines through $a$ only two of the predicates in the language are realised, then we get the result automatically because $R,S,T$ are realised in $R(a)$ by the diameter 2 hypothesis. So we may assume that in the structure induced by $M$ on a pair of lines through $a$ all predicates are realised.
By Observation \ref{NotRComplete}, each vertex belongs to at least two lines.
If $R(a)$ has exactly two imprimitivity blocks, then by homogeneity for any $b\in R(a)$ the set $R(b)$ consists of two infinite $R$-cliques as well, one of which is $\ell(a,b)\setminus\{b\}$. Therefore, $R(b)\cap R^2(a)$ is an infinite $R$-clique, and by Proposition \ref{Intersections2}, $R(b)\cap R^2(a)$ meets both $S(a)$ and $T(a)$, as by the diameter 2 hypothesis both $S$ and $T$ are realised in $R(a)$.
\begin{claim}
Suppose that each vertex is incident with exactly two lines. Then for each $b\in R(a)$ there is a unique line through $b$ that meets $R^2(a)$; denote it by $\ell^b$. Both $\ell^b\cap S(a)$ and $\ell^b\cap T(a)$ are infinite.
\end{claim}
\begin{proof}
Proposition \ref{Intersections} tells us that either $\ell^b\cap S(a)$ and $\ell^b\cap T(a)$ are both infinite, or one of them is of size 1 and the other is infinite. As this line is uniquely determined for each $b\in R(a)$, if we had, say $|\ell^b\cap S(a)|=1$ and $|R(c)\cap R(a)|=1$ for all $c\in S(a)$, then this would establish a definable bijection between $R(a)$ and $S(a)$. This is impossible as the rank of $R(a)$ is lower than that of $S(a)$.
Therefore, in order to establish the claim, we need to eliminate the case where $|\ell^b\cap S(a)|=1$ and $|R(c)\cap R(a)|=2$.
By Observation \ref{Quadrangle}, if these conditions are satisfied then $|R(d)\cap R(a)|=1$ for all $d\in T(a)$. Given any $c\in S(a)$, the set $R(c)$ consists of two infinite $R$-cliques by homogeneity; one vertex from each of these two cliques belongs to $R(a)$.
Therefore, for any $c\in S(a)$, all relations in the language are realised in $R(c)\cap T(a)$. Define $Q(d,d')$ on $T(a)$ to hold if there exists $c\in S(a)$ such that $R(d,c)\wedge R(d',c)$ (see figure below).
\[
\includegraphics[scale=0.8]{Semilinear03.pdf}
\]
We claim that $Q\wedge R$, $Q\wedge S$, $Q\wedge T$ are realised in $T(a)$. The reason is that both lines through $c$ are almost entirely contained in $T(a)$: $c$ and the two vertices in $R(c)\cap R(a)$ are the only elements of $R(c)$ not in $T(a)$, since any other element of $R(c)\cap S(a)$ would be forced to be an element of $R(b_1)$ or of $R(b_2)$, contradicting $|R(b)\cap S(a)|=1$ for all $b\in R(a)$. Our claim follows from the transitivity of ${\rm Aut}(M/c)$ on $R(c)$ and Theorem \ref{ThmNeumann}.
Now, since $S$ does not divide over $\varnothing$ in $M$ we must have the triangle $SSR$ in ${\rm Age}(M)$ (otherwise, $S$ would divide, as witnessed by an $\varnothing$-indiscernible sequence $(e_i)_{i\in\omega}$ with $R(e_0,e_1)$). Notice that we have an additional $a$-definable equivalence relation $F$ on $T(a)$ with two classes, $F(d,d')$ holds if $R(d)$ and $R(d')$ meet the same line through $a$. If $Q$ and $F$ were satisfied simultaneously by a pair from $T(a)$ then $F\wedge R$, $F\wedge S$, $F\wedge T$ (realised because $R,S,T$ are realised in the union of any two incident lines), and $\neg F$ already give us too many relations on $T(a)$. And if they are not simultaneously realised by any pair, then any $F$-equivalent pair is $Q$-inequivalent, so this together with the three relations from the preceding paragraph give us four types of unordered pairs of distinct elements from $T(a)$.
\end{proof}
By Observation \ref{Quadrangle}, we may also assume that $|R(c)\cap R(a)|=1$ for all $c\in T(a)$.
Consider the relation $W(x, y)$ on $T(a)$ that holds if there exists a $b\in R(a)$ such that $R(b,x)\wedge R(y,b)$. This is clearly a symmetric and reflexive relation, and if $W(x,y)$ and $W(y,z)$, then there exist $b,b'\in R(a)$ such that $R(x,b)\wedge R(y,b)$ and $R(y,b')\wedge R(z,b')$. The hypothesis that $|R(c)\cap R(a)|=1$ for all $c\in R^2(a)$ implies $b=b'$, as they are both $R$-related to $y$ and in $R(a)$. Therefore, $x,y,z$ are all collinear with $b$ and $W(x,z)$. Given a vertex $c\in T(a)$ denote by $b_c$ the unique element of $R(c)\cap R(a)$, and define $\hat P(c,c')$ on $T(a)$ if $P(b_c,b_{c'})$ holds for $P\in\{R,S,T\}$. This gives us at least four types of unordered pairs of distinct elements in $T(a)$: $W$-equivalent and three types of $W$-inequivalent pairs (corresponding to $\hat R,\hat S,\hat T$).
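Spelling out the final count in the preceding paragraph: the relations
\[
W(x,y)\wedge x\neq y,\qquad \neg W(x,y)\wedge\hat R(x,y),\qquad \neg W(x,y)\wedge\hat S(x,y),\qquad \neg W(x,y)\wedge\hat T(x,y)
\]
are mutually exclusive and invariant over $a$, and all four are realised on pairs of distinct elements of $T(a)$, whereas only the three 2-types corresponding to $R,S,T$ are available.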
\end{proof}
\begin{observation}
If $M$ is a primitive homogeneous simple semilinear 3-graph with ${\rm Diam}_R(M)=2$ in which any point $a$ is incident with at least three lines, then it is not possible for all $c\in R^2(a)$ to satisfy $|R(c)\cap R(a)|=1$.
\label{EachTouchesOne}
\end{observation}
\begin{proof}
In this case, the sets $R(b)\cap R^2(a)$, $b\in R(a)$, partition $R^2(a)$, a set of maximal rank, into infinitely many infinite parts, each consisting of at least 2 infinite $R$-cliques. By Proposition \ref{Intersections}, at least one of $S(a)$ and $T(a)$ is partitioned into infinitely many infinite $R$-cliques by the family of sets $\ell\setminus\{b\}$, where $b\in R(a)$ and $\ell$ is a line through $b$ not containing $a$. We may assume it is $S(a)$.
Define the relation $Q(c,c')$ on $S(a)$ to hold if there exists $b\in R(a)$ such that $R(b,c)\wedge R(b,c')$ holds. We claim that $Q$ is an equivalence relation. It is clearly symmetric and reflexive. Now suppose $Q(x,y)\wedge Q(y,z)$. Then there exist $b,b'\in R(a)$ such that $R(b,x)\wedge R(b,y)$ and $R(b',y)\wedge R(b',z)$, so $b=b'$ since $|R(y)\cap R(a)|=1$. Therefore, $Q(x,z)$ holds.
A $Q$-equivalence class consists of a finite number, greater than 1, of $R$-cliques, because we assume that at least three lines are incident with each point. Define the binary relations $\hat P(c,c')$ to hold if $\neg Q(c,c')$ and $P(b,b')$, where $\{b\}=R(c)\cap R(a)$, $\{b'\}=R(c')\cap R(a)$ and $P\in\{R,S,T\}$. This gives 3 types of $Q$-inequivalent pairs, plus at least two more types of $Q$-equivalent pairs (collinear and not collinear), so we have too many types of unordered pairs of elements from $S(a)$.
\end{proof}
By Observation \ref{NoInfiniteCliques}, not all $R$-free structures can be embedded into $R(a)$. If $R(a)$ is a stable 3-graph, then it must be of one of the forms 6--11 in Theorem \ref{Lachlan3graphs}, as all the others are finite. Observation \ref{NoInfiniteCliques} implies that only one of $m,n,p$ is $\omega$ (and the corresponding superscript is $R$).
The sets of relations realised with endpoints in different classes of an equivalence relation partition the set of types of pairs of classes in a homogeneous binary structure. In our case, there can be no more than 2 types of pairs of $R$-classes in $R(a)$. This is implicitly used in the proof of our next result:
\begin{proposition}
There are no primitive simple homogeneous 3-graphs of $R$-diameter 2 such that all relations are realised in the union of any two maximal $R$-cliques in $R(a)$.
\label{MataCasiTodos}
\end{proposition}
\begin{proof}
By Observations \ref{Quadrangle} and \ref{EachTouchesOne}, we have two cases to analyse:
\begin{case}
For some $c\in R^2(a)$, $|R(c)\cap R(a)|=1$. By homogeneity, this is true for all the elements of the orbit of $c$ under the action of ${\rm Aut}(M/a)$. Without loss of generality, assume $S(a,c)$. We can define $E(x,y)$ on $S(a)$ to hold if $R(x)$ and $R(y)$ meet the same line through $a$, and refine this equivalence relation with $E'(x,y)$, which holds if they meet the same line at the same point. These are equivalence relations, and $E'(x,y)\rightarrow E(x,y)$. For $E$-inequivalent pairs, since both $S$ and $T$ are realised in $R(a)$, we can define $\hat S(x,y)$ and $\hat T(x,y)$ to hold if $S$ (respectively, $T$) holds between the elements of the intersections $R(x)\cap R(a)$ and $R(y)\cap R(a)$. Notice that both $\hat S$ and $\hat T$ are realised, as any element in $R(a)$ has a neighbour in $S(a)$. We have too many 2-types of pairs of distinct elements over $a$, since $E'(x,y)\wedge x\neq y$, $E(x,y)\wedge \neg E'(x,y)$, $\hat S(x,y)\wedge\neg E(x,y)$, $\hat T(x,y)\wedge\neg E(x,y)$ are all realised.\label{MCT1}
\end{case}
\begin{case}
For no element $b$ of $R^2(a)$ does $|R(b)\cap R(a)|=1$ hold. Then, without loss of generality, each $c\in S(a)$ satisfies $|R(c)\cap R(a)|=k$, where $1<k<m$. Define $E(c,c')$ on $S(a)$ to hold if $R(c)$ and $R(c')$ meet the same lines through $a$. There are three subcases to analyse:
\begin{subcase}
If $m-k\geq3$, then define $P_i(c,c')$ on $S(a)$ for $0\leq i\leq\min\{k,m-k\}$ to hold if $R(c)\cup R(c')$ meets a total of $k+i$ lines through $a$. The $P_i$ are invariant under ${\rm Aut}(M/a)$ and mutually exclusive; therefore all cases with $\min\{k,m-k\}\geq3$ are impossible, as we would get at least four types of pairs of distinct elements in $S(a)$. This leaves us with only one more possible case, namely $m-k\geq3$, $k=2$, since the case $m-k\geq3$, $k=1$ is covered in Case \ref{MCT1}.
Suppose then that $m-k\geq3$ and $k=2$. We claim that there are two types of pairs satisfying $P_1$. Let $\{b,b'\}=R(c)\cap R(a)$ for some $c\in S(a)$, and take any line $\ell$ through $a$ not including $b$ or $b'$. By homogeneity, there exists a $b''\in\ell$ satisfying the same relation with $b'$ as $b$. Therefore, there exists $c'\in S(a)$ satisfying $P_1(c,c')$ and the relation $Q(c,c')$ defined by $\exists x(R(a,x)\wedge R(c,x)\wedge R(c',x))$. Using Proposition \ref{Intersections2}, we can find pairs $d,d'$ in $S(a)$ satisfying $P_1(d,d')$ and $R(d)\cap R(d')\cap R(a)=\varnothing$. Therefore, we have at least four types of pairs of distinct elements from $S(a)$, as the relations $E$, $P_1\wedge Q$, $P_1\wedge\neg Q$, $P_2$ are all realised.
\label{CasedefsP}
\end{subcase}
\begin{subcase}
Suppose $m-k=1$. By Observation \ref{TransOnLines}, there exist unordered pairs of distinct elements of $S(a)$ satisfying $E$, and $P_1$ (defined as in Subcase \ref{CasedefsP}) is realised by homogeneity and Observation \ref{OneOrbit}.
Notice that there are two types of pairs satisfying $P_1(c,c')$, namely those with $R(c)\cap R(c')\cap R(a)=\varnothing$, and those with $R(c)\cap R(c')\cap R(a)\neq\varnothing$. Both are realised by Proposition \ref{PropLotsOfTrans}.
This leaves us with two possibilities: for distinct $c,c'\in S(a)$, either $E(c,c')$ implies $R(c)\cap R(c')\cap R(a)=\varnothing$ (this can happen if the structure on any pair of lines through $a$ is that of a perfect matching and $R(c)$ picks a transversal clique of the matching colour), or we can have $E(c,c')\wedge R(c)\cap R(c')\cap R(a)\neq\varnothing$. In the latter case, we have found four types of pairs of unordered distinct elements from $S(a)$.
Therefore, assume that $E(c,c')$ implies $R(c)\cap R(c')\cap R(a)=\varnothing$ for all $c\neq c'$ in $S(a)$. We claim that this can only happen in the situation described before, namely if the structure on two lines is that of a matching and for all pairs $b,b'\in R(c)\cap R(a)$, the edge $bb'$ is of the colour of the matching predicate, say $T$. This claim follows from the argument of Proposition \ref{TransOnLines}: if for some edge $bb'$ in $R(c)\cap R(a)$ we were able to find some $b''$ collinear with $b'$ such that $bb''$ and $bb'$ are of colour $T$, then by homogeneity we could find a $c'$ $E$-equivalent to $c$ with $b\in R(c)\cap R(c')\cap R(a)$.
It follows that in the situation we are considering $T$ is an algebraic predicate in $R(a)$ and the set of $K_{m-1}^T$ in $R(a)$ is in definable bijection with $S(a)$ by the function taking a $T$-clique $\bar c$ to the unique element of $\bigcap\{R(c):c\in\bar c\}\cap S(a)$. This is impossible, since the rank of $S(a)$ is greater than that of the set of $T$-cliques in $R(a)$, as $T$ is algebraic.
\end{subcase}
\begin{subcase}
If $m-k=2$, then the relations $E, P_1, P_2$ defined in Subcase \ref{CasedefsP} are realised in $S(a)$. By the same argument as in Subcase \ref{CasedefsP}, there are two types of pairs $c,c'$ satisfying $P_1$: some with $R(c)\cap R(c')\cap R(a)\neq\varnothing$ and some with $R(c)\cap R(c')\cap R(a)=\varnothing$. Again this gives at least four types of unordered pairs of distinct elements from $S(a)$.
\end{subcase}
\end{case}
\end{proof}
\setcounter{case}{0}
\setcounter{subcase}{0}
Proposition \ref{MataCasiTodos} eliminates all cases where $R(a)$ is unstable, as in that case the structure induced on some pair of infinite $R$-cliques $A,B$ in $R(a)$ is isomorphic to the Random Bipartite Graph. But Proposition \ref{MataCasiTodos} also covers some stable cases (for example, if $S$ or $T$ is a perfect matching on the union of the two $R$-cliques). The only cases that remain are those in which $R(a)$ is stable and the induced structure on any pair of $R$-cliques in $R(a)$ is isomorphic to a complete bipartite graph, that is, those in which for all pairs of lines $\ell_1,\ell_2$ through $a$ and all $(b_1,b_2),(c_1,c_2)\in\ell_1\times\ell_2$ we have ${\rm tp}(b_1b_2)={\rm tp}(c_1c_2)$.
\begin{proposition}
Let $M$ be a homogeneous primitive semilinear 3-graph of $R$-diameter 2 with finitely many lines through each point. If all types of pairs are realised in $R(a)$, but not in any pair of lines through $a$, then it is not possible for any $c\in R^2(a)$ to satisfy $|R(c)\cap R(a)|>3$.
\label{SmallIntersection}
\end{proposition}
\begin{proof}
\begin{claim}
If ${\rm tp}(ac)={\rm tp}(ac')$, then ${\rm tp}(R(c)\cap R(a))={\rm tp}(R(c')\cap R(a))$.
\label{Claim1}
\end{claim}
\begin{proof}
By homogeneity, there exists an automorphism $\sigma\in{\rm Aut}(M/a)$ taking $c\mapsto c'$; this automorphism takes $R(c)\cap R(a)$ to $R(c')\cap R(a)$.
\end{proof}
\begin{claim}
Under the hypotheses of Proposition \ref{SmallIntersection}, the isomorphism type of $R(c)\cap R(a)$ for any $c\in R^2(a)$ depends only on the set of lines through $a$ that $R(c)$ meets.
\label{Claim2}
\end{claim}
\begin{proof}
By Observation \ref{NoTriangles}, the set $R(c)\cap R(a)$ is transversal to a set of $k$ lines through $a$, and by the hypotheses of Proposition \ref{SmallIntersection}, all transversals to the same set of $k$ lines are isomorphic.
\end{proof}
Now suppose that for some $c\in S(a)$ we have $|R(c)\cap R(a)|>3$. By Claim \ref{Claim1}, the intersections of the $R$-neighbourhoods of any two elements of $S(a)$ with $R(a)$ are isomorphic; let $E$ be the (not necessarily proper) equivalence relation on $S(a)$ holding for elements whose $R$-neighbourhoods meet the same set of lines through $a$. Claim \ref{Claim2} says that if $A=R(c)\cap R(a)$ for some $c\in S(a)$ and we take any other set $B$ transversal to the same set of $k>3$ lines, then there exists an automorphism over $a$ taking $A$ to $B$ that moves $c$ to an $E$-equivalent element of $S(a)$. Therefore, the $a$-invariant relations $P_i(c,c')$ holding if $E(c,c')\wedge |R(c)\cap R(c')\cap R(a)|=i$ for $i\in\{0,\ldots,k-1\}$ are all realised. As $k\geq4$, this gives us too many invariant relations on pairs over $a$. This completes the proof of Proposition \ref{SmallIntersection}.
\end{proof}
\begin{lemma}
There are no homogeneous primitive 3-graphs of {\rm SU}-rank 2 and $R$-diameter 2.
\label{NoDiam2}
\end{lemma}
\begin{proof}
We know by Proposition \ref{atleast3} that the number $m$ of lines through $a$ is greater than or equal to 3, and that all types of pairs are realised in $R(a)$, but not in any pair of lines through $a$ (Proposition \ref{MataCasiTodos}). By Proposition \ref{SmallIntersection}, for all $c\in R^2(a)$ we have $|R(c)\cap R(a)|\leq 3$. Assume that $k=\max\{|R(c)\cap R(a)|,|R(d)\cap R(a)|\}$, where $c\in S(a)$ and $d\in T(a)$.
\begin{case}
First we prove that $k=3$ is impossible. Let $E(c,c')$ be the equivalence relation on $S(a)$ that holds if $R(c)$ and $R(c')$ meet the same lines through $a$. The key observation in this case is that the graph induced on $R(c)\cap R(a)$ is a finite homogeneous graph of size 3, so it must be a monochromatic triangle (see also Gardiner's classification \cite{gardiner1976homogeneous} of finite homogeneous graphs).
We start by arguing that $E$ is always a proper equivalence relation on $S(a)$ if $k=3$. By the preceding paragraph, $R(c)\cap R(a)$ is a complete graph in $S$ or $T$. If $E$ were universal on $S(a)$, then it follows either that there are only three lines through $a$ (impossible, as in that case one of the predicates would not be realised in $R(a)$), or, assuming without loss of generality that $R(c)\cap R(a)$ is isomorphic to $K_3^S$, that $R(a)$ is isomorphic to $K_m^T[K_n^S[K_\omega^R]]$. In the latter case, we must have $n=3$, because otherwise we could move by homogeneity the $K_3^S$ corresponding to $R(c)\cap R(a)$ to another set of 3 lines in the same $R\vee S$-class and find $E$-inequivalent elements. Finally, if $m>1$ then again $E$ is a proper equivalence relation, depending on which $R\vee S$-class in $R(a)$ the set $R(c)$ meets. In every case we reach a contradiction; therefore $E$ is a proper equivalence relation on $S(a)$.
Suppose for a contradiction that for $c\in S(a)$ we have $|R(c)\cap R(a)|=3$. Since $E$ is a proper equivalence relation, we have at least 4 invariant and exclusive relations on $S(a)$: $E$-inequivalent and three ways to be $E$-equivalent, as we can define $I_i(c,c')$ on $S(a)$ to hold if $E(c,c')$ and $|R(c)\cap R(c')\cap R(a)|=i$ for $i\in\{0,1,2\}$ (these relations are realised because the intersection of the $R$-neighbourhoods of $c$ and $a$ is a complete monochromatic graph, so any two transversals to the lines that $R(c)$ meets are isomorphic); this already gives us too many invariant relations on pairs from $S(a)$.
\end{case}
\begin{case}
Assume $\max\{|R(c)\cap R(a)|,|R(d)\cap R(a)|\}\leq 2$ ($c\in S(a), d\in T(a)$). By Observation \ref{EachTouchesOne} and Proposition \ref{atleast3}, it must be equal to 2. Suppose that the maximum is attained in $S(a)$. The equivalence relation $E(c,c')$ that holds on $S(a)$ if $R(c)$ and $R(c')$ meet the same lines through $a$ is proper: since $m\geq3$ and $k=2$, we can use homogeneity to move an element of $R(c)\cap R(a)$ to any line containing no element of $R(c)\cap R(a)$; this automorphism moves $c$ to an element of $S(a)$ that is not $E$-equivalent to $c$. Therefore we have at least four types of pairs on $S(a)$: two satisfying $E(c,c')$ (one with $R(c)\cap R(c')\cap R(a)$ empty, the other with it nonempty), and, similarly, two with $\neg E(c,c')$.
\end{case}
We have exhausted the list of possible cases. The conclusion follows.
\end{proof}
\subsection{The nonexistence of primitive homogeneous 3-graphs of $R$-diameter 3 and {\rm SU}-rank 2}\label{subsectRDiam3}
By homogeneity, if the $R$-diameter of the graph is 3 then, since $R$-distance is preserved by automorphisms, the existence of $a,b,c$ with $S(a,c)\wedge R(a,b)\wedge R(b,c)$ implies that every pair $c,c'$ with $S(c,c')$ consists of vertices at $R$-distance 2; similarly, $T(a)$ is then the set of vertices at $R$-distance 3 from $a$. From this point on, we will follow the conventions $S(a)=R^2(a)$ and $T(a)=R^3(a)$.
The situation in diameter 3 is considerably simpler than in diameter 2, as the sets $S(a)$ and $T(a)$ are more clearly separated. The first thing to notice is that if the $R$-diameter of $M$ is 3, then $RRT$ is a forbidden triangle, as $T$ corresponds to $R$-distance 3.
\begin{proposition}
Suppose that $M$ is a semilinear homogeneous primitive 3-graph of $R$-diameter 3 and that each point $a$ is incident with $m<\omega$ lines. Then it is not possible for any $b\in S(a)$ to be collinear with $m$ elements from $R(a)$.
\label{Intersection3}
\end{proposition}
\begin{proof}
The $R$-neighbourhood of $b$ has $m$ $R$-connected components by transitivity. But by homogeneity and diameter 3, $b$ is adjacent to some element of $R^3(a)$. Therefore, if $R(b)$ meets each line through $a$, then $R(b)$ has at least $m+1$ $R$-connected components, contradicting homogeneity.
\end{proof}
\begin{proposition}
Let $M$ be a semilinear homogeneous primitive 3-graph with ${\rm Diam}_R(M)=3$ and $m<\omega$ lines through each point, and let $k$ denote $|R(b)\cap R(a)|$ for any $b\in S(a)$. Then $k=1$.
\label{equalsk}
\end{proposition}
\begin{proof}
By Proposition \ref{Intersection3}, $k<m$. The main point here is that we get the conclusion of Observation \ref{OneOrbit} for free in this situation, as the intersection of any pair of lines through $a$ with $R(a)$ is isomorphic to a complete bipartite graph (edges given by $S$, non-edges given by $R$). We can define an equivalence relation $E$ on $S(a)$ holding for $c,c'$ if $R(c)$ and $R(c')$ meet the same lines through $a$. By Proposition \ref{Intersection3} and homogeneity, $E$ is a nontrivial proper equivalence relation on $S(a)$ with $m\choose k$ classes. Notice that for any $E$-equivalent $c,c'$, the isomorphism types of $R(c)\cap R(a)$ and $R(c')\cap R(a)$ are the same over $a$, and in fact are the same as the isomorphism type of any set transversal to $k$ lines. Therefore, we can define $P_i(c,c')$ for $0\leq i< k$ if $E(c,c')\wedge |R(c)\cap R(c')\cap R(a)|=i$. All of these relations are realised by homogeneity, and invariant over $a$. This implies $k\leq 2$.
Now we eliminate the case $k=2$. If $|R(c)\cap R(a)|=2$ for $c\in S(a)$, then by Proposition \ref{Intersection3} there are at least 3 lines through $a$, and the relation $E$ defined in the preceding paragraph is a proper nontrivial equivalence relation. By the same argument, there are at least two types of $E$-equivalent pairs, plus at least two types of $E$-inequivalent pairs, depending on whether the intersections of their $R$-neighbourhoods meet $R(a)$ or not. This gives at least four mutually exclusive invariant relations on pairs from $S(a)$, exceeding the three available 2-types, a contradiction. Therefore $k=1$.
\end{proof}
The situation is similar to what we had in diameter 2 after Observation \ref{Quadrangle}, but we have the additional information $|R(b)\cap R(a)|=1$ for $b\in S(a)$.
\begin{proposition}
Let $M$ be a semilinear primitive homogeneous 3-graph of $R$-diameter 3 and $m<\omega$ lines through each point. Then $m=2$.
\label{twolines}
\end{proposition}
\begin{proof}
By Proposition \ref{equalsk}, for any $b\in S(a)$ we have $|R(b)\cap R(a)|=1$. Let $m$ denote the number of lines through $a$. We know by Observations \ref{NotRComplete} and \ref{R(a)Imprimitive} that $m\geq 2$. Now suppose for a contradiction that $m\geq 3$. Define $E_1,E_2$ on $S(a)$ by
\begin{equation*}
E_1(c,c')\leftrightarrow R(c)\cap R(a)=R(c')\cap R(a)
\end{equation*}
\begin{equation*}
E_2(c,c')\leftrightarrow R(b,b')\vee b=b'
\end{equation*}
where $\{b\}=R(c)\cap R(a)$ and $\{b'\}=R(c')\cap R(a)$. The relation $E_2$ holds iff $R(c)$ and $R(c')$ intersect the same line through $a$; $E_1$ holds iff they meet $R(a)$ at the same point. There are $m$ $E_2$-classes, and each of them contains infinitely many $E_1$-classes. Since $m\geq3$ and the $R$-diameter of $M$ is 3, each $E_1$-class contains at least two infinite disjoint cliques, corresponding to the lines through a particular $b\in R(a)$. Therefore, we can define an invariant relation $F(c,c')$ holding if $E_1(c,c')\wedge R(c,c')$, breaking each $E_1$-class into finitely many $R$-cliques.
We have only three 2-types over $a$ realised in $S(a)$, corresponding to $R,S,T$; but the three nested proper equivalence relations $F\subseteq E_1\subseteq E_2$ determine four mutually exclusive invariant relations on pairs from $S(a)$ (namely $F$, $E_1\wedge\neg F$, $E_2\wedge\neg E_1$, and $\neg E_2$), a contradiction. Therefore $m=2$.
\end{proof}
\begin{lemma}
There are no primitive homogeneous 3-graphs of {\rm SU}-rank 2 and $R$-diameter 3.
\label{NoDiam3}
\end{lemma}
\begin{proof}
We know by Propositions \ref{equalsk} and \ref{twolines} that under the hypotheses of this lemma we have $|R(c)\cap R(a)|=1$ for all $c\in S(a)$ and there are exactly two lines through each point in $M$. So far, the main characters in our analysis have been $R(a)$ and $R^2(a)$. Now the structure on $R^3(a)$ will also come into play. Since ${\rm Diam}_R(M)=3$ and $|R(a)\cap R(c)|=1$ for $c\in S(a)$, the structure of $S(a)$ consists, by Proposition \ref{twolines}, of two $E_2$-classes, each divided into infinitely many $E_1$-classes ($R$-cliques), where $E_1,E_2$ are as in the proof of Proposition \ref{twolines}. We have two cases:
\begin{case}
Suppose that $S$ holds between $E_1$-classes contained in the same $E_2$-class. Take $d\in T(a)$. The set $R(d)\cap R^2(a)$ meets each $E_1$-class in at most one vertex, and meets only one $E_2$-class ($T$ holds across $E_2$-classes; if $R(d)\cap R^2(a)$ met both $E_2$-classes, then the triangle $RRT$ would be realised, contradicting our assumption that $T(a)=R^3(a)$). Therefore, we can define an equivalence relation on $T(a)$ with two classes: define
\begin{equation*}
F(d,d')\leftrightarrow\exists(c,c'\in S(a))(c\in R(d)\cap S(a)\wedge c'\in R(d')\cap S(a)\wedge E_2(c,c'))
\end{equation*}
So $F(d,d')$ holds iff $R(d)$ and $R(d')$ meet the same $E_2$-class in $R^2(a)$. We have a further subdivision into cases, depending on how many $E_1$-classes $R(d)$ meets:
\begin{subcase}
If $|R(d)\cap R^2(a)|=1$, then we can define on $T(a)$ two more equivalence relations:
\begin{equation*}
F'(e,e')\leftrightarrow E_1(c,c')
\end{equation*}
\begin{equation*}
F''(e,e')\leftrightarrow R(e)\cap S(a)=R(e')\cap S(a)
\end{equation*}
where $\{c\}=R(e)\cap S(a)$ and $\{c'\}=R(e')\cap S(a)$. The condition $|R(d)\cap R^2(a)|=1$ ensures that these relations are transitive. Clearly, $F''\rightarrow F'\rightarrow F$; and as there are two lines through any vertex, $F$ is a proper nontrivial equivalence relation. To prove that $F'$ and $F''$ are both realised and different, take any $c\in S(a)$. There are two lines incident with it, one of which is its $E_1$-class together with some point from $R(a)$; the other line, $\ell$, through $c$ is almost entirely contained in $T(a)$. Two points on $\ell\cap T(a)$ satisfy $F''$, and $F$-equivalent points in $T(a)$ on lines through different elements from $S(a)$ satisfy $F'\wedge\neg F''$ if the elements from $S(a)$ belong to the same $E_1$-class, and satisfy $F\wedge\neg F'$ if the elements from $S(a)$ are $E_2$-equivalent and $S$-related. This gives us three nested invariant equivalence relations on $T(a)$ and hence, as before, four mutually exclusive invariant relations on pairs, exceeding the three available 2-types. This rules out the possibility of $|R(d)\cap R^2(a)|=1$ in the situation of Case I(i).
\end{subcase}
\begin{subcase}
If $R(d)$ meets more than one $E_1$-class, then by homogeneity, since any vertex lies on two lines, it has to intersect exactly two $E_1$-classes. Note that $R(d)\cap S(a)$ is contained in a single $E_2$-class, because the triangle $RRT$ is forbidden. Again, we find too many types realised on $T(a)$. For any pair $d,d'\in T(a)$, the number of $E_1$-classes that $R(d)\cup R(d')$ meets is invariant under $a$-automorphisms. Notice that it is not possible for $|(R(d)\cup R(d'))\cap R^2(a)|$ to be 2, as in that case $d$ and $d'$ would belong to two different lines: by homogeneity, each element $c\in S(a)$ lies on two lines, one of which is its $E_1$-class; therefore, if $d,d'\in T(a)$ are such that $c\in R(d)\cap R(d')$, then $c,d,d'$ must be collinear. Define $F_1(d,d')$ on $T(a)$ if $R(d)$ and $R(d')$ meet the same two $E_1$-classes, and $P(d,d')$ if $R(d)\cap R(d')\cap S(a)\neq\varnothing$. There are pairs satisfying each of $F_1\wedge P$, $F_1\wedge\neg P$, $\neg F_1\wedge P$, and $\neg F_1\wedge\neg P$, giving us four mutually exclusive invariant relations on pairs from $T(a)$, again one more than the three available 2-types.
\end{subcase}
\end{case}
\begin{case}
If $T$ holds between $E_1$-classes contained in the same $E_2$-class, then $S$ holds between $E_2$-classes (as each $E_1$-class is an $R$-clique). Again, we have two subcases, depending on $|R(d)\cap S(a)|$ for $d\in T(a)$:
\begin{subcase}
If $|R(d)\cap S(a)|=1$ for $d\in T(a)$, then we can define an equivalence relation $E'(e,e')$ on $T(a)$ holding if $R(e)$ and $R(e')$ meet the same $E_2$-class in $S(a)$. We will show that we already have three invariant and mutually exclusive relations on unordered pairs in each of the $E'$-classes. Define $\hat R,\hat T$ on $T(a)$ by $\hat P(e,e')$ iff $P$ holds for the points in the intersections of $R(e)$ and $R(e')$ with $S(a)$ ($P\in\{R,T\}$), and $C(e,e')$ if $e,e'$ are collinear with some $c\in S(a)$, which happens iff $R(e)\cap R(e')\cap S(a)\neq\varnothing$. We would need at least one more predicate to separate the $E'$-classes, so at least four invariant and mutually exclusive relations are realised on pairs from $T(a)$, a contradiction.
\end{subcase}
\begin{subcase}
If $|R(d)\cap S(a)|=2$ for $d\in T(a)$, then the intersection with each $E_2$-class is of size one, as otherwise the triangle $RRT$ would be realised. Then we can count the total number of $E_1$-classes that $R(e)$ and $R(e')$ meet, which can be 4, 3, or 2. And in the cases where this number is 3 or 2, we have another two relations, depending on whether $R(e)\cap R(e')\cap S(a)$ is empty or not. Again, we find too many invariant and mutually exclusive relations on unordered pairs of distinct elements from $T(a)$.
\end{subcase}
\end{case}
\end{proof}
We can now prove that no primitive homogeneous simple 3-graphs have {\rm SU}-rank 2.
\begin{theorem}\label{NoRank2}
There are no homogeneous primitive simple 3-graphs of {\rm SU}-rank 2.
\end{theorem}
\begin{proof}
By Observation \ref{diam}, the diameter of a primitive homogeneous simple 3-graph of {\rm SU}-rank 2 is either 2 or 3. Lemmas \ref{NoDiam2} and \ref{NoDiam3} say that both situations are impossible.
\end{proof}
\section{Higher rank}\label{SectHigher}
We have now proved that there are no homogeneous primitive simple 3-graphs of {\rm SU}-rank 2. In this section, we use that result as the basis for an inductive argument on the rank of the theory, using Theorem \ref{ThmStableForking}. We remark that in the course of the proof of nonexistence of simple 3-graphs of rank 2, we used the rank 2 hypothesis only to prove that we can define in $M$ a semilinear space with finitely many lines through each point. Also, for most of the analysis simplicity suffices, and we require supersimplicity only in Propositions \ref{atleast3} and \ref{MataCasiTodos} (and, indirectly, Lemma \ref{NoDiam2}, because its proof uses Propositions \ref{atleast3} and \ref{MataCasiTodos}); in these results we use the fact that the theory is ranked by {\rm SU}, but the specific value of its rank is irrelevant.
Therefore, if we prove that simple homogeneous 3-graphs of rank 3 or greater are semilinear with finitely many lines through each point, then the rest of the argument from Section \ref{SectRank2} is valid in higher rank.
\begin{lemma}
Let $M$ be a homogeneous primitive supersimple 3-graph of {\rm SU}-rank $k\geq2$. Then $M$ is semilinear.
\label{Inductive}
\end{lemma}
\begin{proof}
Independently of the rank, if ${\rm Diam}_R(M)=3$, then $R(a)$ is a stable $RS$-graph. It cannot be primitive by Theorem \ref{LachlanWoodrow}. And $S$ is not an equivalence relation by Proposition \ref{PropMultipartite}; therefore, $R$ is an equivalence relation on $R(a)$ with finitely many infinite classes (by Observation \ref{NoInfiniteCliques}).
So we need only worry about those cases with ${\rm Diam}_R(M)=2$. We proceed by induction on $k$ (common induction, as opposed to transfinite induction, suffices by Koponen's Theorem \ref{Koponen}). The case $k=2$ corresponds to Lemma \ref{LemmaSemilinear}. Suppose that, for some $k\geq3$, we know that there are no primitive homogeneous simple 3-graphs of {\rm SU}-rank $j$ for $2\leq j<k$ (for $k=3$, this is the content of Theorem \ref{NoRank2}).
If we are given a homogeneous primitive simple 3-graph of {\rm SU}-rank $k+1$ and $R$-diameter 2, then we may assume by Theorem \ref{ThmStableForking} that $S$ and $T$ are nonforking predicates, so we know that $R(a)$ is a simple homogeneous 3-graph of rank at most $k$. It follows that $R(a)$ is either imprimitive or of rank 1 as a structure in its own right (it could have a higher rank as a subset of $M$ due to external parameters). If $R(a)$ is imprimitive, the same arguments as in Proposition \ref{RDefinesEqRel} show that $R$ is an equivalence relation; by Observation \ref{NoInfiniteCliques}, it has finitely many classes.
Now we argue that $R(a)$ is not primitive. By the induction hypothesis, if $R(a)$ were primitive, then its rank would be 1.
The structure on $R(a)$ cannot be stable, as in that case it would be one of Lachlan's infinite stable 3-graphs from Theorem \ref{Lachlan3graphs}, all of which are imprimitive.
And $R(a)$ cannot be isomorphic to a primitive unstable 3-graph of rank 1, as by Proposition \ref{Year2} primitivity contradicts the stability of $R$. Therefore, $R(a)$ is imprimitive and $R$ defines an equivalence relation on $R(a)$ with finitely many classes, by Observation \ref{NoInfiniteCliques}. This proves the lemma for all natural numbers $k\geq 3$.
\end{proof}
Lemma \ref{Inductive} tells us that we can define a semilinear space on $M$ just as we did in subsection \ref{lines}. The analysis from subsections \ref{subsectRDiam2} and \ref{subsectRDiam3} translates verbatim to this more general setting, as the rank hypothesis was used there only to ensure that $M$ interprets a semilinear space. As a consequence,
\begin{theorem}
Let $M$ be a primitive simple homogeneous 3-graph. Then the theory of $M$ is of {\rm SU}-rank 1.
\label{NoHigherRank}\hfill$\Box$
\end{theorem}
As a consequence of Theorems \ref{ThmStableForking} and \ref{NoHigherRank}, all the homogeneous simple unstable primitive 3-graphs of finite {\rm SU}-rank have rank 1. We know from Chapter \ref{ChapRank1} that those are random, and that in the case of imprimitive structures with finitely many classes, the transversal relations in a pair of classes are null, complete or random. This leaves us only with imprimitive structures of rank 2.
\chapter{Homogeneous Simple 3-graphs of {\rm SU}-rank 1}\label{ChapRank1}
\setcounter{equation}{0}
\setcounter{theorem}{0}
\setcounter{case}{0}
\setcounter{subcase}{0}
This chapter contains more general results than the others. It deals with simple binary homogeneous structures of {\rm SU}-rank 1, so in particular we do not restrict ourselves to undirected edges or any particular number of colours.
Under the assumption of {\rm SU}-rank 1, ${\rm tp}(a/B)$ forks over $A\subset B$ iff $a\in{\rm acl}(B)\setminus{\rm acl}(A)$, and algebraic closure on an {\rm SU}-rank 1 structure induces a pregeometry.
\subsection{The primitive case}
\begin{proposition}
\label{aclab}
Let $M$ be a binary homogeneous structure with supersimple theory of {\rm SU}-rank 1 such that ${\rm Aut}(M)$ acts primitively on $M$. Then ${\rm acl}(a,b)=\{a,b\}$.
\end{proposition}
\begin{proof}
Suppose not. Then there is $c\in{\rm acl}(ab)\setminus({\rm acl}(a)\cup {\rm acl}(b))$; moreover, $a\indep[\varnothing] b$, since by primitivity (Observation \ref{AlgClosure}) ${\rm acl}(a)=\{a\}$. By primitivity, there is only one strong type of elements over $\varnothing$, and since the rank is finite, this implies that all elements are of the same Lascar strong type. So we have ${\rm Lstp}(a)={\rm Lstp}(b)$. Take two elements $c', c''$ realising ${\rm tp}(c/a)$ and ${\rm tp}(c/b)$ respectively. Note that $c'\indep[\varnothing]a$ and $c''\indep[\varnothing]b$.
Therefore we can apply the Independence Theorem to produce $d\models{\rm Lstp}(a)\cup{\rm tp}(c/a)\cup{\rm tp}(c/b)$ with $d\indep[\varnothing]ab$. Since the language is binary, ${\rm tp}(d/ab)={\rm tp}(c/ab)$ (an algebraic type), so $d\in{\rm acl}(ab)$ which contradicts $d\indep[\varnothing]ab$.
\end{proof}
A stronger statement is:
\begin{proposition}\label{algclosure}
Under the hypotheses of Proposition \ref{aclab}, ${\rm acl}(A)=\bigcup_{a\in A}{\rm acl}(a)=A$ for every finite $A\subset M$.
\end{proposition}
\begin{proof}
We prove this by induction on $|A|$. The case $|A|=1$ is true by primitivity and $|A|=2$ is Proposition \ref{aclab}. Now suppose that the result holds for sets of cardinality $k$, and let $A=\{a_1,\ldots, a_{k+1}\}$.
Suppose that the equality does not hold, and take $b\in {\rm acl}(A)\setminus\bigcup_{a\in A}{\rm acl}(a)$. By the rank 1 assumption, $a_{k+1}\indep[\varnothing] A_0$, where $A_0=A\setminus\{a_{k+1}\}$. Now take $b_0$ realising ${\rm tp}(b/A_0)$ and $b_1$ realising ${\rm tp}(b/a_{k+1})$. By the induction hypothesis, $b_0\indep[\varnothing]A_0$ and $b_1\indep[\varnothing]a_{k+1}$. By primitivity, ${\rm Lstp}(b_0)={\rm Lstp}(b_1)$ (over the empty set), so we can apply the Independence Theorem to get $\beta\models{\rm tp}(b_0/A_0)\cup{\rm tp}(b_1/a_{k+1})$ with $\beta\indep[\varnothing]A$. By rank 1, $\beta$ is not algebraic over $A$.
But ${\rm tp}(\beta/A)={\rm tp}(b/A)$; indeed, ${\rm tp}(\beta/A_0)={\rm tp}(b/A_0)$, which implies that ${\rm tp}(\beta/\alpha)={\rm tp}(b/\alpha)$ for all $\alpha\in A_0$, and also ${\rm tp}(\beta/a_{k+1})={\rm tp}(b/a_{k+1})$. Since the language is binary, this implies that ${\rm tp}(\beta/A)={\rm tp}(b/A)$. This is a contradiction, because $b$ is algebraic over $A$ while $\beta$ is not.
\end{proof}
Let $D(\bar x)$ denote the formula expressing that the elements of the tuple $\bar x$ are all different. Recall that the theory of the Random Graph is axiomatised by the set of sentences $\{\phi_{n,m}:n,m\in\omega\}$, where $\phi_{n,m}$ is
\begin{small}
$$\forall v_1,\ldots,v_n\forall w_1,\ldots,w_m(D(v_1,\ldots,v_n,w_1,\ldots,w_m)\rightarrow\exists x(\bigwedge_{1\leq i\leq n}R(x,v_i)\wedge\bigwedge_{1\leq j\leq m}\neg R(x,w_j))$$
\end{small}
When phrased as ``whenever $V_1$ and $V_2$ are finite disjoint sets of vertices in $G$, there exists a vertex $v$ such that for all $v_1\in V_1$ and $v_2\in V_2$ the formula $R(v,v_1)\wedge\neg R(v,v_2)$ holds in $G$," the axiom schema $\phi_{n,m}$ is known as \emph{Alice's restaurant axiom}.
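The extension property expressed by $\phi_{n,m}$ is concrete enough to probe computationally: in a large sample of the random graph $G(n,1/2)$, a witness for fixed small $V_1,V_2$ exists with overwhelming probability (a fixed candidate vertex works with probability $2^{-4}$ when $|V_1|=|V_2|=2$, so all candidates fail with probability about $(15/16)^{n-4}$). The sketch below is purely illustrative; the names \texttt{random\_graph} and \texttt{alice\_witness} are ours, not part of the text.

```python
import random

def random_graph(n, p=0.5, seed=0):
    """Sample a graph G(n, p) as a set of frozenset edges."""
    rng = random.Random(seed)
    return {frozenset((i, j))
            for i in range(n) for j in range(i + 1, n)
            if rng.random() < p}

def alice_witness(edges, n, v_in, v_out):
    """Return a vertex adjacent to everything in v_in and nothing in v_out,
    as demanded by phi_{|v_in|,|v_out|}, or None if the sample has no witness."""
    for x in range(n):
        if x in v_in or x in v_out:
            continue
        if (all(frozenset((x, v)) in edges for v in v_in)
                and not any(frozenset((x, w)) in edges for w in v_out)):
            return x
    return None

n = 512
edges = random_graph(n)
w = alice_witness(edges, n, {0, 1}, {2, 3})  # an instance of phi_{2,2}
```

Of course, a finite sample only witnesses finitely many instances of the schema; the point is that the axioms describe a behaviour that random finite samples already exhibit.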
We will assume for the rest of this section that $M$ is a binary relational structure, homogeneous in a language $L=\{R_1,\ldots,R_n\}$, and that each 2-type over $\varnothing$ of distinct elements is isolated by one of the relations in the language. Our aim is to show that supersimple primitive binary homogeneous structures are very similar to the random graph, in the sense that we can prove analogues of Alice's restaurant axioms in them. As in other proofs in this chapter, at the core of the argument is the Independence Theorem.
\begin{theorem}
Let $M$ be a countable relational structure homogeneous in the binary language $L=\{R_1,\ldots,R_n\}$, and assume that each complete 2-type over $\varnothing$ is isolated by one of the $R_i$. Suppose that $R_1,\ldots,R_m$ are symmetric relations and $R_{m+1},\ldots,R_n$ are antisymmetric. If $M$ is primitive and $Th(M)$ is supersimple of {\rm SU}-rank 1, then for any collection $\{A_1,\ldots,A_m,A_{m+1},A'_{m+1},\ldots,A_n,A'_n\}$ of pairwise disjoint finite sets of elements from $M$ there exists $v\in M$ such that \[M\models \bigwedge_{i\in\{1,\ldots,m\}}(\bigwedge_{v_i\in A_i}R_i(v,v_i))\wedge\bigwedge_{i\in\{m+1,\ldots,n\}}(\bigwedge_{v_i\in A_i}R_i(v,v_i)\wedge\bigwedge_{w_i\in A_i'}R_i(w_i,v))\]
\label{PrimitiveAlice}
\end{theorem}
\begin{proof}
To prove this, we use Proposition \ref{algclosure} and the Independence Theorem. We may assume that all the $A_i, A'_i$ are of the same size, and will prove this proposition for $|A_i|=1$ (it will be clear that the same argument can be iterated for larger sets). By Proposition \ref{algclosure}, $a_1\indep a_2$ if $a_1\neq a_2$, and for any $A,B,C$, $A\indep[C]B$ if $(A\setminus C)\cap(B\setminus C)=\varnothing$. Let $A_i=\{a_i\}$ and $A'_{j}=\{a'_j\}$ for $m+1\leq j\leq n$, and assume all the $a_i$ are different and therefore pairwise independent. Then by homogeneity, there exist $b_i$ with $R_i(a_i,b_i)$, and ${\rm tp}(b_i/a_i)$ does not fork over $\varnothing$. By primitivity, ${\rm Lstp}(b_1)={\rm Lstp}(b_2)$, so we can apply the Independence Theorem and find $b_{12}\models{\rm Lstp}(b_1)\cup{\rm tp}(b_1/a_1)\cup{\rm tp}(b_2/a_2)$ satisfying $b_{12}\indep a_1a_2$.
Now we have $b_{12}\indep a_1a_2$, and we know $a_1a_2\indep a_3$ and that ${\rm tp}(b_3/a_3)$ does not fork over $\varnothing$. Also, by primitivity ${\rm Lstp}(b_3)={\rm Lstp}(b_{12})$, so we can apply the Independence Theorem again. Iterating this process (including the types over the $a'_j$ for the antisymmetric relations), we find $\alpha\models{\rm Lstp}(b_1)\cup{\rm tp}(b_1/a_1)\cup\ldots\cup{\rm tp}(b_n/a_n)$ independent from $a_1,\ldots,a_m,a_{m+1},a'_{m+1},\ldots,a_n, a'_n$; this $\alpha$ is the required $v$.
\end{proof}
\subsection{Finite equivalence relations}\label{SubsectFER}
If $M$ is a transitive, imprimitive rank 1 structure in which all the definable equivalence relations have infinite classes, then it follows from the rank hypothesis that each of the equivalence relations has finitely many classes. From homogeneity and transitivity it follows that if $E$ is a definable equivalence relation on $M$ and $\neg E(a,b)$, then $a/E$ and $b/E$ are homogeneous structures with the same age, and each has fewer definable equivalence relations than $M$. By $\omega$-categoricity, there are only finitely many definable equivalence relations, so that $M$ is in fact the union of finitely many primitive homogeneous structures (which are the equivalence classes of the finest definable equivalence relation on $M$ with infinite classes) in which all invariant equivalence relations have finite classes. Our next goal is to describe how two classes of a finite equivalence relation in a rank 1 binary homogeneous structure can relate to each other.
The archetypal example of an imprimitive simple unstable binary homogeneous structure with a finite equivalence relation is the Random Bipartite Graph. It is the Fra\"iss\'e limit of the family of all bipartite graphs with a specified partition or equivalence relation; it is not homogeneous as a graph, but is homogeneous in the language $\{R,E\}$, where $E$ is interpreted as an equivalence relation. To axiomatise this theory, it suffices to express that $E$ is an equivalence relation with exactly two infinite classes, $R$ is a graph relation, and that for any finite disjoint subsets $A_1,A_2$ of the same $E$-class there exists a vertex $v$ in the opposite class such that $R(v,a)$ holds for all $a\in A_1$ and $\neg R(v,a')$ holds for all $a'\in A_2$.
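The bipartite axioms can be spot-checked on a finite sample in the same spirit. In the sketch below (our own illustration; \texttt{random\_bipartite} and \texttt{bipartite\_witness} are invented names), $E$ is the same-class relation on two classes of size $n$, $R$ is sampled across the classes only, and we search the opposite class for a witness to one axiom instance.

```python
import random

def random_bipartite(n, p=0.5, seed=1):
    """Classes {0,...,n-1} and {n,...,2n-1}; R is sampled across classes only,
    so the same-class relation E is an equivalence relation with two classes."""
    rng = random.Random(seed)
    return {(a, b) for a in range(n) for b in range(n, 2 * n)
            if rng.random() < p}

def bipartite_witness(edges, n, a_pos, a_neg):
    """Vertex of the second class R-related to all of a_pos (a subset of the
    first class) and to none of a_neg, or None if the sample has no witness."""
    for v in range(n, 2 * n):
        if (all((a, v) in edges for a in a_pos)
                and not any((a, v) in edges for a in a_neg)):
            return v
    return None

n = 512
edges = random_bipartite(n)
v = bipartite_witness(edges, n, {0, 1}, {2, 3})
```

With $n=512$ a witness fails to exist with probability roughly $(15/16)^{512}$, so the search above succeeds on essentially every sample.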
If $A,B$ are different classes of the finest definable finite equivalence relation $E$ on $M$, we will say that a relation $R$ holds \emph{transversally} or \emph{across} $A, B$ if there exist $a\in A$ and $b\in B$ such that $R(a,b)\vee R(b,a)$. Relations which hold transversally for some pair of $E$-classes are referred to as \emph{transversal relations}. Notice that by homogeneity any relation holding across $E$-classes does not hold within a class, and vice versa. By quantifier elimination and our assumption on the disjointness of the binary relations, $E$ is defined by a disjunction of atomic formulas $\bigvee_{i\in I}R_i(x,y)$ for some $I\subset\{1,\ldots, n\}$. Therefore, the transversal relations are those in $L\setminus \{R_i:i\in I\}$. We assume that each 2-type of distinct elements is isolated by a relation in the language; therefore, each relation is either symmetric or antisymmetric.
Given two $E$-classes $A,B$, if only one symmetric relation $R$ holds across $A,B$ then we say that $R$ is \emph{complete bipartite} in $A,B$, for the reason that if we forget the structure within the classes, what we obtain is a complete bipartite graph. All other relations are \emph{null} across $A,B$ in this case, i.e.,\, not realised across these classes.
If $D$ is an antisymmetric relation realised across $A,B$, we say that the ordered pair of classes $(A,B)$ is \emph{directed for} $D$ if all the $D$-edges present in $A\cup B$ go in the same direction, that is, if either $\forall(c,c'\in A\cup B)(D(c,c')\rightarrow c\in A\wedge c'\in B)$ or $\forall(c,c'\in A\cup B)(D(c,c')\rightarrow c\in B\wedge c'\in A)$. A dramatic example of a $D$-directed pair of $E$-classes is when $\forall a\in A\forall b\in B(D(a,b))$. We adopt the convention that if $(A,B)$ is directed for $D$, then the $D$-edges go from $A$ to $B$. If $(A,B)$ is not directed for any $D$, then we say that $(A,B)$ is an \emph{undirected} pair of $E$-classes.
\begin{observation}
Let $M$ be a binary homogeneous imprimitive transitive relational structure in which there are proper nontrivial invariant equivalence relations with infinite classes. Let $E$ be the finest such equivalence relation in $M$. If $(A,B)$ is a directed pair of equivalence classes for some $D\in L$, then no symmetric relations are realised across $A,B$ and for all antisymmetric relations $D'$ in the language realised across $A,B$, either $(A,B)$ or $(B,A)$ is directed for $D'$.
\label{DirectedPairsClasses}
\end{observation}
\begin{proof}
The first assertion follows from the fact that if $R(a,b)$ for some symmetric relation $R$, where $a\in A$ and $b\in B$, then by homogeneity there would exist an automorphism taking $a\rightarrow b$ and $b\rightarrow a$, which is impossible by invariance of $E$ and the fact that $(A,B)$ is directed for $D$. Similarly, if for some directed relation $D'$ we had $a,a'\in A$ and $b\in B$ with $D'(a,b)\wedge D'(b,a')$ then by homogeneity there would exist an automorphism of $M$ taking $ab$ to $ba'$, again impossible since $(A,B)$ is directed for $D$.
\end{proof}
\begin{observation}
Let $M$ be a binary homogeneous imprimitive transitive relational structure with supersimple theory of {\rm SU}-rank 1 in which there are proper nontrivial invariant equivalence relations with infinite classes. Let $E$ be the finest such equivalence relation in $M$, and assume that ${\rm Aut}(M)$ acts primitively on each $E$-class. If $a_1,\ldots,a_n$, $n\geq2$, are distinct $E$-equivalent elements of $M$, then $a_1\indep a_2,\ldots,a_n$.
\label{IndepInClass}
\end{observation}
\begin{proof}
We proceed by induction on $n$. For the case $n=2$, let $a_1,a_2$ be distinct elements of $M$, $E(a_1,a_2)$. In the situation described, each of the relations that imply $E$ is non-algebraic, since otherwise the action of ${\rm Aut}(M)$ on $a_1/E$ would not be primitive. It follows that the relation isolating ${\rm tp}(a_1a_2)$ is nonforking, so $a_1\indep a_2$.
Now suppose that any $k$ distinct $E$-equivalent elements of $M$ are independent. Suppose for a contradiction that $a_1,\ldots,a_{k+1}$ are pairwise independent $E$-equivalent elements of $M$, and $a_{k+1}\notindep a_1,\ldots, a_k$. By the induction hypothesis, $a_1\indep a_2,\ldots,a_k$, $a_{k+1}\indep a_1$ and $a_{k+1}\indep a_2,\ldots,a_k$. Let $b_1\models{\rm tp}(a_{k+1}/a_1)$ and $b_2\models{\rm tp}(a_{k+1}/a_2,\ldots,a_k)$; these are nonforking extensions of the unique 1-type over $\varnothing$ to $a_1$ and $a_2,\ldots,a_k$, and are of the same strong type. Therefore, by the Independence Theorem, there exists $c$ satisfying ${\rm tp}(a_{k+1}/a_1)\cup{\rm tp}(a_{k+1}/a_2,\ldots,a_k)$ in the same class as $a_{k+1}$, independent (i.e.,\, non-algebraic) from $a_1,\ldots,a_k$. But then ${\rm tp}(c/a_1,\ldots,a_k)={\rm tp}(a_{k+1}/a_1,\ldots,a_k)$ because the language is binary, which is impossible, as the type on the left-hand side of the equality is non-algebraic while the other one is algebraic.
\end{proof}
Given a pair of $E$-classes $A,B$, denote the set of nonforking transversal relations realised in $A\cup B$ by $\mathcal I(A,B)$. If $(A,B)$ is a directed pair of classes, then $\mathcal I^*(A,B)$ is the set of nonforking relations $D$ realised in $A\cup B$ such that $D(a,b)$ for some $a\in A, b\in B$. Note that for directed pairs, $\mathcal I(A,B)=\mathcal I^*(A,B)\cup\mathcal I^*(B,A)$.
\begin{proposition}
Let $M$ be a binary homogeneous imprimitive transitive relational structure with supersimple theory of {\rm SU}-rank 1 in which there are proper nontrivial invariant equivalence relations with infinite classes. Let $E$ be the finest such equivalence relation in $M$, and assume that ${\rm Aut}(M)$ acts primitively on each $E$-class. Suppose that $(A,B)$ is a $D_1$-directed pair of $E$-classes. Enumerate $\mathcal I^*(A,B)=\{D_1,\ldots, D_n\}$ and $\mathcal I^*(B,A)=\{Q_1,\ldots,Q_m\}$. Then for all finite disjoint $V_1,\ldots,V_n\subset B$ and $W_1,\ldots,W_m\subset A$ there exist $c\in A$ and $d\in B$ such that $D_i(c,v)$ holds for all $v\in V_i$ ($1\leq i\leq n$) and $Q_j(w,d)$ holds for all $w\in W_j$ ($1\leq j\leq m$).
\end{proposition}
\begin{proof}
We will prove only that for all finite disjoint $V_1,\ldots,V_n\subset B$ there exists $c\in A$ such that $D_i(c,v)$ holds for all $v\in V_i$; the same argument produces the $d$ from the statement.
We proceed by induction on $k=|V_1|+\ldots+|V_n|$, with an inner induction argument. If $k=n$, so that $V_i=\{b_i\}$ for each $i$, then by Observation \ref{IndepInClass} we have $b_1\indep b_2$. There exist $a,a'\in A$ such that $D_1(a,b_1)\wedge D_2(a',b_2)$; since $D_1$ and $D_2$ are nonforking relations, $a\indep b_1$ and $a'\indep b_2$, and since $a,a'$ are $E$-equivalent, they have the same strong type. By the Independence Theorem, there exists $c_{12}\in A$ such that $c_{12}\indep b_1b_2$ and $D_1(c_{12},b_1)\wedge D_2(c_{12},b_2)$. Now suppose that for $t\leq n-1$ we can find $c_{1\ldots t}\indep b_1,\ldots, b_t$ such that $D_1(c_{1\ldots t},b_1)\wedge\ldots\wedge D_t(c_{1\ldots t}, b_t)$. Given distinct $b_1,\ldots, b_{t+1}$ with $t+1\leq n$, it follows from Observation \ref{IndepInClass} that $b_{t+1}\indep b_1,\ldots,b_t$. By the induction hypothesis, there exists $c_{1\ldots t}\indep b_1,\ldots,b_t$ satisfying $\bigwedge_{i=1}^t D_i(c_{1\ldots t},b_i)$; and we know that there exists $c_{t+1}\in A$ such that $D_{t+1}(c_{t+1},b_{t+1})$. Since $D_{t+1}$ is nonforking, $c_{t+1}\indep b_{t+1}$, and by the Independence Theorem, there exists $c_{1\ldots t+1}\indep b_1,\ldots,b_{t+1}$ such that $\bigwedge_{i=1}^{t+1}D_i(c_{1\ldots t+1},b_i)$. This concludes, by induction, the case $k=n$. The same argument proves the inductive step on $k$.
\end{proof}
By the same argument, we can prove:
\begin{proposition}
Let $M$ be a binary homogeneous imprimitive transitive relational structure with supersimple theory of {\rm SU}-rank 1 in which there are proper nontrivial invariant equivalence relations with infinite classes. Let $E$ be the finest such equivalence relation in $M$, and assume that ${\rm Aut}(M)$ acts primitively on each $E$-class. Suppose that $(A,B)$ is an undirected pair of $E$-classes, $\mathcal I(A,B)=\{R_1,\ldots,R_k\}\cup\{D_1,\ldots, D_s\}$, where each $R_i$ is symmetric and each $D_j$ is antisymmetric. Then for all finite disjoint subsets $V_1,\ldots,V_k,W_1,\ldots,W_s,W'_1,\ldots,W'_s\subset B$ there exists $c\in A$ such that $R_i(c,v)$ for all $v\in V_i$, $D_j(c,w)$ for all $w\in W_j$, and $D_j(w,c)$ for all $w\in W_j'$.
\label{ComplicatedAlice}
\end{proposition}
We remark here that if all the relations are symmetric, Proposition \ref{ComplicatedAlice} says that a nonforking transversal relation $R$ occurs across a pair of $E$-classes $A,B$ in one of three ways, namely:
\begin{enumerate}
\item{Complete: only one relation is realised across $A,B$;}
\item{Null: $R$ is not realised in $A\cup B$;}
\item{Random bipartite: given two disjoint nonempty finite subsets $V,V'$ of $A$ (respectively, of $B$), there is a vertex $v$ in $B$ (respectively, in $A$) that is $R$-related to all vertices from $V$ and to none from $V'$.}
\end{enumerate}
The results in this section tell us exactly what to expect from binary supersimple homogeneous structures of {\rm SU}-rank 1. Even though we did not phrase it as a list of structures, Proposition \ref{ComplicatedAlice} is essentially a classification result for imprimitive binary homogeneous structures of {\rm SU}-rank 1 in which one of the relations defines an equivalence relation with infinite classes. Our next proposition is, in the same sense, a classification of unstable imprimitive simple 3-graphs (language $\{R,S,T\}$, all relations symmetric and irreflexive, each pair of distinct vertices realising exactly one of them) in which one of the predicates defines a finite equivalence relation. This result is of interest in the final sections of this chapter; we make implicit use of Proposition \ref{CatSimpLow}:
\begin{proposition}
Let $M$ be a transitive simple unstable homogeneous 3-graph in which $R$ defines an equivalence relation with $m<\omega$ classes. Then $M$ has supersimple theory of {\rm SU}-rank 1, the structure induced on each pair of classes is isomorphic to the Random Bipartite Graph, and for all $k\leq m$ and all $k$-sets of $R$-classes $X$, any $S,T$-graph of size $k$ is realised as a transversal to $X$.
\label{PropImprimitive3Graphs}
\end{proposition}
\begin{proof}
The first assertion follows easily from transitivity (only one 1-type $q_0$ over $\varnothing$) and the fact that if $\varphi(x,\bar a)$ is a formula not implying $x=a_i$ for some $a_i\in\bar a$, then $\varphi$ does not divide over $\varnothing$, so the only forking extensions of the unique 1-type over $\varnothing$ are algebraic. To see this, consider any such $\varphi(x,\bar a)$. We may assume that $\varphi$ is not algebraic, as in that case we would already know that any extension of $q_0$ implying $\varphi$ is algebraic and so of {\rm SU}-rank 0. Let $c$ realise this formula, with $c\not\in\bar a$. We wish to prove that $c\indep\bar a$; by simplicity, this is equivalent to proving $\bar a \indep c$.
Let $\varphi'(\bar x,c)$ be the formula isolating ${\rm tp}(\bar a/c)$. Consider any $\varnothing$-indiscernible sequence $I=(c_i:i\in\omega)$ such that $c\in I$. This is an infinite sequence contained in the $R$-class of $c$. Colour the elements of $I$ according to the types they realise over $\bar a$. Since $\bar a$ is finite, there are only finitely many colours, and by the pigeonhole principle there is an infinite monochromatic subset $I'$ of $I$. Then we have $I'\equiv_c I$ and $I'$ is indiscernible over $\bar a$, so $\varphi'(\bar x,c)$ does not divide over $\varnothing$; hence $\bar a\indep c$, and the {\rm SU}-rank of $q_0$ (and therefore of $M$) is 1.
The relation $R$ is clearly stable in $M$, so $S$ and $T$ must be unstable. By instability, there are parameters $a_i$, $b_i$ ($i\in\omega$) such that $S(a_i,b_j)$ holds iff $i\leq j$. Since $R$ is stable, we have $T(a_i,b_j)$ for all $j<i$ in this sequence of parameters. Consider the $a_ib_i$ as pairs of type $S$, and colour pairs of distinct such pairs by the type they satisfy over $\varnothing$; by Ramsey's theorem we can extract an infinite $\varnothing$-indiscernible sequence of pairs, which we also denote $a_i,b_i$. By indiscernibility, the new $a_i$ and the new $b_i$ form monochromatic cliques, which are of colour $R$ because there are no other infinite monochromatic cliques in $M$. This proves that $S$ and $T$ are realised as transversals to a pair of $R$-classes; by homogeneity, all pairs of classes are isomorphic.
The relation $R$ is clearly nonforking in $M$. By instability, both $S$ and $T$ are non-algebraic, so for any $a\in M$ the sets $S(a)$ and $T(a)$ contain infinite $R$-cliques. It follows that $S$ and $T$ are nonforking transversal relations, so by Proposition \ref{ComplicatedAlice} the structure on any pair of $R$-classes is isomorphic to the Random Bipartite Graph. Using the Independence Theorem, we can embed any $S,T$-graph of size $k$ as a transversal to a union of $k$ $R$-classes, for any $k\leq m$.
\end{proof}
The structures isolated by Proposition \ref{PropImprimitive3Graphs} consist of a finite number $n$ of infinite $R$-cliques, with $S$ and $T$ realised randomly between them. Let us call the structure with $n$ infinite $R$-classes $\mathcal B_n^{ij}$, where $i,j\in\{R,S,T\}$ are the two unstable relations (these structures will appear again in Theorem \ref{ThmClassification}).
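For readers who want to experiment with these structures, the following sketch (illustrative only and not part of any proof; the class size, the parity rule standing in for the random assignment of $S$ and $T$, and all identifiers are our own choices) builds a finite approximation of $\mathcal B_n^{ST}$ and checks the finitary content of Proposition \ref{PropImprimitive3Graphs}: $R$ is an equivalence relation, every pair of distinct vertices realises exactly one predicate, and both $S$ and $T$ appear between any two classes.

```python
def finite_B(n, k):
    """Finite approximation of B_n^{ST}: n R-classes of size k, with S and T
    assigned between distinct classes by a parity rule (a deterministic
    stand-in for the generic assignment in the infinite structure)."""
    verts = [(i, j) for i in range(n) for j in range(k)]
    edge = {}
    for a in range(len(verts)):
        for b in range(a + 1, len(verts)):
            u, v = verts[a], verts[b]
            if u[0] == v[0]:
                edge[(u, v)] = 'R'  # same class: R is the class relation
            else:
                edge[(u, v)] = 'S' if (u[1] + v[1]) % 2 == 0 else 'T'
    return verts, edge

def colour(edge, u, v):
    """Colour of the (unordered) pair u, v."""
    return edge.get((u, v)) or edge[(v, u)]

verts, edge = finite_B(4, 4)
# each pair of distinct vertices realises exactly one of R, S, T
assert all(colour(edge, u, v) in 'RST' for u in verts for v in verts if u != v)
# R holds exactly within the classes, so it is an equivalence relation
assert all((colour(edge, u, v) == 'R') == (u[0] == v[0])
           for u in verts for v in verts if u != v)
# both S and T are realised between any two distinct classes
for i in range(4):
    for j in range(i + 1, 4):
        cols = {colour(edge, (i, a), (j, b)) for a in range(4) for b in range(4)}
        assert cols == {'S', 'T'}
print("ok")
```

Of course, no finite fragment can witness the full homogeneity of $\mathcal B_n^{ij}$; the sketch only illustrates the combinatorial shape of these structures.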
\chapter{Simple 3-graphs of {\rm SU}-rank 2}\label{ChapRank2}
\setcounter{equation}{0}
\setcounter{theorem}{0}
\setcounter{case}{0}
\setcounter{subcase}{0}
We proved in Chapter \ref{ChapPrimitive} that all primitive homogeneous 3-graphs with simple theory have ${\rm SU}$-rank 1. Therefore, all such 3-graphs with rank higher than 1 are imprimitive. Since the invariant equivalence relation is, by quantifier elimination, definable as a disjunction of atomic formulas, and since formulas defining equivalence relations are stable, we are left with two cases:
\begin{enumerate}
\item{Either a single predicate, say $R$, defines an equivalence relation with infinitely many infinite classes, or}
\item{the disjunction of the unstable predicates $S,T$ defines an equivalence relation with infinitely many infinite classes.}
\end{enumerate}
Note that the case in which the disjunction of a stable predicate and an unstable predicate defines an equivalence relation does not occur, since the complement of an equivalence relation is also a stable predicate.
\begin{remark}
The ${\rm SU}$-rank of a homogeneous simple unstable 3-graph $M$ cannot be higher than 2, since that would mean that $M$ is imprimitive and the equivalence relation $E$ has infinitely many infinite classes, each of which is of rank at least 2, implying again imprimitivity and thus stability (we used at least one predicate to define $E$ and one more to define the finer equivalence relation; it follows that all predicates are stable).
\end{remark}
Two structures are easy to isolate:
\begin{proposition}\label{PropInfInf1}
Let $M$ be an unstable homogeneous 3-graph in which the disjunction of two unstable predicates $S,T$ defines an equivalence relation with infinitely many infinite classes. Then each class is isomorphic to the Rado graph.
\end{proposition}
\begin{proof}
Let $C$ be an $S\vee T$-class and consider two finite isomorphic substructures $A,B$ of $C$. By homogeneity of $M$, there exists an automorphism of $C$ taking $A$ to $B$ (namely, the restriction to $C$ of the automorphism of $M$ that exists by homogeneity, which clearly fixes $C$ setwise).
Clearly, any half-graph witnessing the instability of $S$ and $T$ is contained in an $S\vee T$-class, so it follows by the Lachlan-Woodrow Theorem that each class is isomorphic to the Rado graph. The structure is therefore $K_\omega^R[\Gamma]$.
\end{proof}
\begin{proposition}\label{PropInfInf2}
Let $M$ be an unstable homogeneous simple 3-graph in which $R$ defines an equivalence relation with infinitely many infinite classes, and suppose that in the union of any two classes only two predicates are realised. Then $M\cong\Gamma[K_\omega^R]$.
\end{proposition}
\begin{proof}
This follows from the same argument used in Observation \ref{ObsInterpretedGraph}.
\end{proof}
\section{The remaining case}
Propositions \ref{PropInfInf1} and \ref{PropInfInf2} leave us with the work of classifying those homogeneous simple unstable 3-graphs in which $R$ defines an equivalence relation with infinitely many infinite classes and the induced action of ${\rm Aut}(M)$ on $M/R$ is 2-transitive. As in Chapter \ref{ChapImprimitiveFC}, we start our analysis by identifying the structure induced on a pair of classes.
The argument from Observation \ref{NotCompBipart} implies that if we fix some vertex $a$, then $S(a)\cap C\neq\varnothing$ and $T(a)\cap C\neq\varnothing$ for all $R$-classes $C$ not containing $a$.
\begin{proposition}\label{PropPerfectMatching}
Let $M$ be a homogeneous simple unstable 3-graph in which $R$ defines an equivalence relation with infinitely many infinite classes and ${\rm Aut}(M)$ acts 2-transitively on $M/R$. If $S(a)\cap C$ is finite for some class $C$ not containing $a$, then the structure induced by $M$ on a pair of distinct classes $C,D$ is a perfect matching.
\end{proposition}
\begin{proof}
We claim that the structure induced on $C,D$ is homogeneous. Consider two isomorphic finite subsets $A,B$ of $C\cup D$. If $S$ or $T$ is realised in $A$, then the usual argument (see Proposition \ref{PropInfInf1}) proves that there exists an automorphism of $C\cup D$ taking $A$ to $B$.
And if $A\subset C$, $B\subset D$ are $R$-cliques of the same size, then we can find vertices $v_A,v_B$ on the sides opposite $A,B$ such that $T(v_A,a)$ and $T(v_B,b)$ hold for all $a\in A$, $b\in B$. This follows from the fact that, by homogeneity, sets such as $\bigcup_{a\in A}(S(a)\cap D)$ are finite while the classes are infinite. Now $Av_A$ and $Bv_B$ are isomorphic and we can apply the same argument as in the preceding paragraph. The same idea works when $A,B$ are subsets of the same class.
Thus $C\cup D$ is a stable homogeneous 3-graph (stable because the $S$-neighbourhood of a vertex in $C\cup D$ is finite and $R$ is an equivalence relation) in which $R$ defines an equivalence relation with two infinite classes and no disjunction of two predicates is an equivalence relation. Going through Lachlan's list, we conclude that $C\cup D\cong K_\omega^R\times K_2^S$.
\end{proof}
In Chapter \ref{ChapImprimitiveFC} we proved that the only ``interesting'' imprimitive homogeneous simple unstable 3-graph has the property that $S$ (and $T$) form perfect matchings in the union of any two classes. In that case, since the classes are of size 2, if one of the predicates matches the classes then so does the other. One could imagine that an analogue with infinite classes would have either $S$ or $T$ as a matching between any two classes, possibly embedding all $S,T$-graphs as transversals. Fortunately, no such monster exists:
\begin{proposition}\label{PropIntersectionTypes}
Let $M$ be an imprimitive homogeneous simple unstable 3-graph in which $R$ defines an equivalence relation with infinitely many infinite classes, and suppose that both $S$ and $T$ are realised in a pair of classes. If a pair of distinct classes $C,D$ embeds a half-graph for $S,T$, then for any finite disjoint $A,B\subset C$ of the same size there exist $d,d'\in D$ such that ${\rm tp}(d/A)={\rm tp}(d'/\sigma(B))$ for some permutation $\sigma$.
\end{proposition}
\begin{proof}
Note first that the existence of a half-graph for $S,T$ in $C\cup D$ implies that $S(c)\cap D$ and $T(c)\cap D$ are infinite (as are the corresponding neighbourhoods in $C$ for any vertex in $D$).
We proceed by induction on the size of $A$. For $|A|=1$, this follows from the fact that $S(a)\cap E\neq\varnothing$ for all classes $E$ not containing $a$, so there exist vertices in $D$ which are $S$-related to $A$ and to $B$.
Now suppose that up to $|A|=n$ we can find vertices in $D$ connected to $A$ and $B$ only by $S$-edges, and let $a_{n+1},b_{n+1}$ be vertices in subsets $A',B'$ of $C$ of size $n+1$. We know that there exists a subset $X$ of $C$ isomorphic to $A'$ and a vertex in $D$ that is $S$-related to all the elements of $X$: take the first $n+1$ elements of the $C$-side of a half-graph embedded in $C\cup D$, and any vertex on the $D$-side with index larger than theirs. The result follows by homogeneity.
\end{proof}
\begin{corollary}\label{CorHomPairs}
Let $M$ be an imprimitive homogeneous simple unstable 3-graph in which $R$ defines an equivalence relation with infinitely many infinite classes, and suppose that both $S$ and $T$ are realised in a pair of classes. The structure induced on a pair of classes is homogeneous.
\end{corollary}
\begin{proof}
Fix two classes $C,D$, and consider two finite isomorphic subsets $A\subset C\cup D$, $B\subset C\cup D$. If either of $A,B$ includes an $S$- or $T$-edge, then the isomorphism extends to an automorphism of $M$ which fixes $C\cup D$ setwise (and therefore its restriction to $C\cup D$ is an automorphism of the induced structure on $C\cup D$).
Now if each of $A,B$ is a subset of $C$ or $D$, we can use Proposition \ref{PropIntersectionTypes} to find an element in the opposite class $S$-related to $A$ and to $B$ and apply the same argument as before to the extended subsets.
\end{proof}
\begin{theorem}\label{ThmNoNewStr}
There is no homogeneous unstable 3-graph $M$ in which $R$ defines an equivalence relation with infinitely many infinite classes and the structure induced by $M$ on any pair of distinct classes is $K_\omega^R\times K_2^T$.
\end{theorem}
\begin{proof}
\comm
Since $R$ defines an equivalence relation, any such $M$ would satisfy $S\sim T$ by instability. Note that $S\sim^RT$ is impossible because it implies that $T(a)$ contains $R$-edges. It follows that either $S\sim^ST$ or $S\sim^TT$ holds. Any of them implies $SST,STT\in{\rm Age}(M)$. Since $T$ is a perfect matching between two classes, the triangles $RST$ and $SSR$ are also in ${\rm Age}(M)$.
The forbidden triangles of $M$ are $TTR$ (because $T$ is a perfect matching between two $R$-classes), $RRT$, and $RRS$ (because $R$ is transitive). Note that in any amalgamation problem of one-point extensions $B,C$ of $A$, if there is an element $a\in A$ such that $R(b,a)\wedge T(c,a)$ then the edge $S(b,c)$ is forced by the forbidden triangles $TTR, RRT$. If the amalgamation property holds for ${\rm Age}(M)$, then the following two structures are also in ${\rm Age}(M)$ (they are the only possible solution to the amalgamation problem of $b,c$ over the bottom edge):
\[
\includegraphics[scale=0.8]{Rank206.pdf}
\]
But now if we try to amalgamate $U,U'$ over the common $K_3^S$ we obtain the following problem without a solution in ${\rm Age}(M)$:
\[
\includegraphics[scale=0.8]{Rank207.pdf}
\]
Contradicting the amalgamation property.
\ent
This is a direct consequence of Observation \ref{ObsEasyCase}.
\end{proof}
\begin{corollary}\label{Corollary1}
The only simple homogeneous 3-graph in which $R$ defines an equivalence relation with infinitely many infinite classes and $T$ is a perfect matching between any two distinct $R$-classes is $K_\omega^R\times K_\omega^T$.
\end{corollary}
\begin{corollary}\label{Corollary2}
Let $M$ be an imprimitive homogeneous simple unstable 3-graph in which $R$ defines an equivalence relation with infinitely many infinite classes, and suppose that the induced action of ${\rm Aut}(M)$ on $M/R$ is 2-transitive. Then the structure induced by $M$ on the union of any two distinct classes $C,D$ is isomorphic to the Random Bipartite Graph.
\end{corollary}
\begin{proof}
It follows from Theorem \ref{ThmNoNewStr} and Corollary \ref{Corollary1} that there must be half-graphs present in the union of $C$ and $D$, that is, $S\sim^R_RT$. We will prove that the axioms of the Random Bipartite Graph hold for $C\cup D$. Let $A,B$ be two finite disjoint subsets of $C$; we wish to prove that there exists $d\in D$ such that $S(d,a)$ for all $a\in A$ and $T(d,b)$ for all $b\in B$.
Let $n_A$ and $n_B$ be the sizes of $A,B$, and let $(\alpha_i)_{i\in\omega}\subset C$ and $(\beta_i)_{i\in\omega}\subset D$ be a half-graph for $S,T$ in $C\cup D$. Consider the first $n_A+n_B+1$ elements of the sequence $(\alpha_i)_{i\in\omega}$: the vertex $\beta_{n_A+1}$ is such that $S(\alpha_i,\beta_{n_A+1})$ for all $i\leq n_A$, and $T(\alpha_i,\beta_{n_A+1})$ for all $i>n_A$. Now we can apply Corollary \ref{CorHomPairs} to find an element in $D$ that satisfies over $A\cup B$ the same type as $\beta_{n_A+1}$ over $\alpha_1,\ldots,\alpha_{n_A+n_B+1}$.
\end{proof}
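The index computation in the proof above is easy to check mechanically. The sketch below is illustrative only; we adopt the convention $S(\alpha_i,\beta_j)$ iff $i<j$ (one of the two possible indexings of a half-graph, chosen so that $\beta_{n_A+1}$ has the stated pattern) and verify that $\beta_{n_A+1}$ is $S$-related to exactly the first $n_A$ vertices $\alpha_i$ and $T$-related to the rest.

```python
def half_graph(n):
    """Half-graph for S,T between alpha_1..alpha_n in C and beta_1..beta_n
    in D: S(alpha_i, beta_j) iff i < j, and T otherwise."""
    return {(i, j): 'S' if i < j else 'T'
            for i in range(1, n + 1) for j in range(1, n + 1)}

n_A, n_B = 3, 2
n = n_A + n_B + 1
E = half_graph(n)
# beta_{n_A+1} is S-related to alpha_1..alpha_{n_A} and T-related to the
# remaining alpha_i: exactly the pattern needed for A (via S) and B (via T)
pattern = [E[(i, n_A + 1)] for i in range(1, n + 1)]
assert pattern == ['S'] * n_A + ['T'] * (n_B + 1)
print(pattern)
```

Under the other convention ($S(\alpha_i,\beta_j)$ iff $i\leq j$) the same pattern is exhibited by $\beta_{n_A}$; the choice of index is immaterial to the argument.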
\begin{corollary}\label{Corollary3}
Let $M$ be an imprimitive homogeneous simple unstable 3-graph in which $R$ defines an equivalence relation with infinitely many infinite classes, and suppose that $S$ and $T$ are realised in any pair of classes. Then $M$ embeds infinite cliques in all colours.
\end{corollary}
\begin{proof}
We know already that $M$ embeds infinite $R$-cliques, and by Ramsey's Theorem it must embed an infinite clique in at least one of $S,T$. Let us suppose without loss that it embeds infinite $S$-cliques.
Note that $R$ is the only equivalence relation, so in particular there are no equivalence relations with finitely many classes on $M$. This follows from instability, quantifier elimination, and Corollary \ref{Corollary2}: any such relation would be definable as a disjunction of atomic formulas, but we know by instability that neither $S$ nor $T$ define equivalence relations. Corollary \ref{Corollary2} implies that the triangle $RST$ is in ${\rm Age}(M)$, so no disjunction of two predicates is an equivalence relation.
If $M$ does not embed infinite $T$-cliques, then it must be the case that $T(x,a)$ forks, since otherwise the Independence Theorem would guarantee the existence of infinite $T$-cliques. From this it follows that the only nonforking relation in $M$ is $S$, so $T(a)$ does not embed infinite $S$-cliques. But $T(a)$ meets all classes except $a/R$, so there is an infinite transversal in $T(a)$; such a transversal contains no $R$-edges, so by Ramsey's Theorem it contains an infinite $S$- or $T$-clique, a contradiction.
\end{proof}
\begin{proposition}
Let $M$ be a homogeneous simple unstable 3-graph in which $R$ defines an equivalence relation with infinitely many infinite classes, and suppose that between any two $R$-classes both $S$ and $T$ are realised. Then both $S$ and $T$ are nonforking.
\end{proposition}
\begin{proof}
Since $R$ defines an equivalence relation with infinitely many infinite classes, the formula $R(x,a)$ divides. At least one of $S$ or $T$ is nonforking, since there must be Morley sequences in the only 1-type over $\varnothing$, so let us assume without loss that $T$ is nonforking. If the formula $S(x,a)$ divides, then $S(a)$ does not contain infinite $T$-cliques. We know from Corollary \ref{Corollary2} that the structure between any two classes is isomorphic to the Random Bipartite Graph, so $R$ defines an equivalence relation in $S(a)$ with infinitely many infinite classes, and since $S(a)$ is a structure interpreted in $M$, the theory of $S(a)$ is simple.
Note that if $S$ divides, then $T(a)$ is isomorphic to $M$. From this it follows that ${\rm Aut}(M)_a$ acts 2-transitively on the set of $R$-classes in $S(a)$, so $S(a)$ is either $T$-free or both $S$ and $T$ are realised in $S(a)$ between any two classes.
\begin{claim}
$S(a)$ is not $T$-free.
\end{claim}
\begin{proof}
If $S(a)$ is $T$-free, then the triangle $SST$ is forbidden in $M$, which in particular implies that $S\not\sim^ST$, so $S\sim^R_RT$ is forced. From this we derive that there are three forbidden triangles in $M$, namely $RRT, RRS, SST$. Now we will prove that this is inconsistent with homogeneity.
We know from Corollary \ref{Corollary2} that $TTR, SSR$ are in ${\rm Age}(M)$.
\comm Since $S(a)$ is $T$-free and $R$ forms a non-trivial equivalence relation on $S(a)$, the structure on four vertices with five $S$-edges and one $R$-edge is also in ${\rm Age}(M)$. By $S\sim^R_RT$,
\[
\includegraphics[scale=0.8]{Rank201.pdf}
\]
is also in ${\rm Age}(M)$.
Therefore, in the following amalgamation problem $T$ is the only possible solution between $b,c$:
\[
\includegraphics[scale=0.8]{Rank202.pdf}
\]
\ent
And since $T(a)\cong M$ and $SSR\in{\rm Age}(M)$,
\[
C=\includegraphics[scale=0.8]{Rank203.pdf}
\]
is also in ${\rm Age}(M)$. Finally, it follows from $T(a)\cong M$ and $RST\in{\rm Age}(M)$ that
\[
B=\includegraphics[scale=0.8]{Rank204.pdf}
\]
is in ${\rm Age}(M)$. Now we amalgamate $B,C$ over a triangle $RTT$:
\[
\includegraphics[scale=0.8]{Rank205.pdf}
\]
We cannot have $R(b,c)$, because $RRT$ would appear on $b,a_2,c$; and $T(b,c)$ would form $SST$ on the vertices $b,a_0,c$. Finally, $S(b,c)$ would form $SST$ on $b,c,a_1$. Therefore, the amalgamation problem has no solution in ${\rm Age}(M)$, contradicting homogeneity. From this we conclude that both $S$ and $T$ are realised in $S(a)$.
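The case analysis above amounts to a finite search, which can be checked mechanically. The following sketch is illustrative only; the edges from $b$ and $c$ to the common points $a_0,a_1,a_2$ are the assignment inferred from the triangles named in the argument, not data taken verbatim from the figures. It confirms that no colour for the edge $bc$ avoids the forbidden triangles $RRT$, $RRS$, $SST$.

```python
# edges from b and c to the common points a_0, a_1, a_2, as inferred from
# the triangles named in the argument above (a reconstructed assignment)
b_edges = {'a0': 'S', 'a1': 'S', 'a2': 'R'}
c_edges = {'a0': 'S', 'a1': 'T', 'a2': 'T'}
forbidden = {('R', 'R', 'S'), ('R', 'R', 'T'), ('S', 'S', 'T')}

def completions():
    """Colours x for the edge bc that close no forbidden triangle b, a_i, c."""
    good = []
    for x in 'RST':
        triangles = [tuple(sorted((b_edges[a], c_edges[a], x)))
                     for a in ('a0', 'a1', 'a2')]
        if not any(t in forbidden for t in triangles):
            good.append(x)
    return good

# R closes RRT on b,a_2,c; S closes SST on b,c,a_1; T closes SST on b,a_0,c
assert completions() == []
print("no completion exists")
```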
\end{proof}
If $S(a)$ is isomorphic to a stable graph, it follows from Corollary \ref{Corollary1} that $S(a)\cong K_\omega^R\times K_2^S$, because each pair of classes is a stable structure. But then $S(a)$ embeds infinite $T$-cliques.
So $S(a)$ is an unstable structure. By Corollary \ref{Corollary3}, $S(a)$ embeds infinite $T$-cliques. In any case, we have reached a contradiction. Therefore, $T$ is nonforking.
\end{proof}
\comm
\begin{proposition}\label{PropInfInter}
Let $M$ be a homogeneous simple unstable 3-graph in which $R$ defines an equivalence relation with infinitely many infinite classes and ${\rm Aut}(M)$ acts 2-transitively on $M/R$. Then for any vertex $a$ and $R$-class $C$ not containing $a$, both $S(a)\cap C$ and $T(a)\cap C$ are infinite.
\end{proposition}
\begin{proof}
We know by 2-transitivity that $S(a)\cap C$ and $T(a)\cap C$ are nonempty for any $C$ not containing $a$. Suppose for a contradiction that $S(a)\cap C$ is finite, so in particular $S(x,a)$ divides (witnessed by an $R$-clique) and $T$ is nonforking. This immediately implies that $T(a)\cap C$ is infinite.
By Proposition \ref{PropPerfectMatching}, $|S(a)\cap C|=1$ and $S(a)$ is $R$-free. Then in particular $S(x,a)$ divides, as witnessed by an $\varnothing$-indiscernible sequence of vertices that forms an infinite $R$-clique.
We know that $S$ and $T$ are the only unstable relations in $M$, so we must have, in the notation from chapter \ref{ChapStableForking} $S\sim T$. By Proposition \ref{IndiscerniblePairs}, at least one of $S\sim^RT, S\sim^TT, S\sim^ST$ has witnesses in $M$. Of these three options, the first implies that $S(a)\cap C$ is infinite, and thus not $R$-free; the second one implies the existence of infinite $T$-cliques in $S(a)$, impossible since $T$ is the only nonforking predicate and therefore $T$-cliques are Morley sequences over $\varnothing$. Thus, $S\sim^ST$ must hold.
Note that $S(a)$ is a simple $S,T$-graph not embedding infinite $T$-cliques, and therefore it is stable. It follows from the Lachlan-Woodrow Theorem that $S(a)$ is either a union of finite $T$-cliques or an infinite $S$-clique.
If $S(a)$ is an $S$-clique, it follows from Observation \ref{sop} that for any $b\in S(a)$ we have $\{a\}\cup S(a)=\{b\}\cup S(b)$, so we can define an equivalence relation by $x\sim_Sy$ if $S(x,y)\vee x=y$. This relation is clearly symmetric and reflexive, and by transitivity of the action of ${\rm Aut}(M)$, if $S(a,b)\wedge S(b,c)$, then $S(a,c)$ follows because $a,c\in S(b)$. Therefore $S$ is stable and so is the theory of $M$.
The only possibility is for $S(a)$ to be a union of finite $T$-cliques, but this is impossible since $S\sim^ST$ implies that $S(a)$ is unstable.
We conclude that $S(a)\cap C$ is infinite.
\end{proof}
\begin{proposition}
Let $M$ be a homogeneous simple unstable 3-graph in which $R$ defines an equivalence relation with infinitely many infinite classes and ${\rm Aut}(M)$ acts 2-transitively on $M/R$. Then $S$ and $T$ are nonforking.
\end{proposition}
\begin{proof}
The formula $R(x,a)$ divides over $\varnothing$ because it defines one class of an equivalence relation with infinitely many infinite classes; we know by simplicity that there must be a nonforking predicate, since in simple theories there exist Morley sequences for all types. Let us suppose without loss that $T$ is nonforking.
If $S$ divides, then there are no infinite $T$-cliques in $S(a)$. Since $S(a)$ meets all classes except $a/R$, it follows that $S(a)$ contains an infinite $S$-clique; furthermore, by Proposition \ref{PropInfInter}, the intersection of $S(a)$ with any $R$-class is infinite. It follows that $S(a)$ is an infinite homogeneous structure in which $R$ defines an equivalence relation with infinitely many infinite classes.
We know that $S\sim T$ holds because $S,T$ are the only unstable predicates. Since $S(a)$ does not embed infinite $T$-cliques, $S\not\sim^TT$. Also note that by the same argument as in Observation \ref{IsoToM}, $T(a)\cong M$, so ${\rm Aut}(M)_a$ acts 2-transitively on the set of classes not including $a/R$. From this it follows by invariance of $R$ that ${\rm Aut}(M)_a$ also acts 2-transitively on $S(a)/R$. Therefore, either $S$ and $T$ are realised in any two distinct $R$-classes in $S(a)$, or only $S,R$ are realised in $S(a)$.
We know by the same argument as in the Claim that ${\rm Aut}(M)_a$ acts 2-transitively on the set of $R$-classes of $S(a)$, so both $S$ and $T$ are realised in the union of any two $R$-classes in $S(a)$.
\begin{claim}
$S\not\sim^ST$.
\end{claim}
\begin{proof}
If $S\sim^ST$, then $S(a)$ embeds an infinite half-clique for $S,T$ and therefore $T$ is not algebraic in $S(a)$
\end{proof}
\end{proof}
\ent
\begin{corollary}\label{Corollary4}
Let $M$ be a homogeneous simple unstable 3-graph in which $R$ defines an equivalence relation with infinitely many infinite classes, and suppose that between any two $R$-classes both $S$ and $T$ are realised. Then $M$ embeds all finite $S,T$-graphs. In particular, $S\sim^ST$ and $S\sim^TT$.
\end{corollary}
\begin{proof}
We know from the proof of Corollary \ref{Corollary3} that $R$ is the only nontrivial proper equivalence relation on $M$. From this it follows that there is a unique strong type of singletons over $\varnothing$, which is Lascar strong because these theories are low. This allows us to apply the usual argument (see for example Theorem \ref{PrimitiveAlice}) to embed all finite $S,T$-graphs.
\end{proof}
Our goal now is to prove that any $S,T$-graph of size $n$ is realised as a transversal in any union of $n$ $R$-classes. To prove this it suffices, by homogeneity, to prove that any $n$ classes embed a $K_n^S$ (so ${\rm Aut}(M)$ acts highly transitively on $M/R$).
\begin{proposition}\label{PropBasis}
Let $M$ be a homogeneous simple unstable 3-graph in which $R$ defines an equivalence relation with infinitely many infinite classes, and suppose that between any two $R$-classes both $S$ and $T$ are realised. Then ${\rm Aut}(M)$ acts 3-transitively on $M/R$.
\end{proposition}
\begin{proof}
It follows from Corollary \ref{Corollary4} that $S(a)$ and $T(a)$ are unstable 3-graphs, since $S\sim^ST$ and $S\sim^TT$, and from Corollary \ref{Corollary2}, that $R$ defines an equivalence relation with infinitely many infinite classes in $S(a),T(a)$. We wish to prove that ${\rm Aut}(M)_a$ acts 2-transitively on the set of $R$-classes in $S(a)$.
Suppose for a contradiction that the action is not 2-transitive. Then the structure on any pair of $R$-classes in $S(a)$ must be that of a complete bipartite graph. By Proposition \ref{PropInfInf2}, $S(a)\cong T(a)\cong\Gamma[K_\omega^R]$. We have two possibilities: either the structures are ``aligned'' (i.e.,\, for pairs $R(c,c'), R(b,b')$ with $S(a,c)$, $T(a,c')$ and $S(a,b),T(a,b')$ we have $(S(b,c)\wedge S(b',c'))\vee(T(b,c)\wedge T(b',c'))$), or the opposite relation between two classes in $S(a)$ is realised between the corresponding classes in $T(a)$.
In the latter case, given any $c_1,c_2\in S(a)$ with $S(c_1,c_2)$ we can find a $d_1$ with $R(d_1,c_2)$ and $T(c_1,d_1)$, because $T(c_1)\cap c_2/R\neq\varnothing$ and we know that all the elements of $d_1/R\cap S(a)$ are $S$-related to $c_1$. Similarly, given $d_2$ in a third class with $S(d_2,d_1)$, there is $c_2$ satisfying $R(c_2,d_1)\wedge T(d_2,c_2)$, as in the following figure:
\[
\includegraphics[scale=0.8]{Rank208.pdf}
\]
The pairs $c_1,d_1$ and $c_2,d_2$ are isomorphic over $a$, so there must be $\sigma\in{\rm Aut}(M)_a$ such that $\sigma(c_1)=c_2, \sigma(d_1)=d_2$. But such an automorphism would take $c_2$ to some $e'$ with $R(e',d_2)$, so the $S$-edge $c_1c_2$ would necessarily be taken to a $T$-edge, contradicting homogeneity.
This leaves us with the ``aligned'' case. The same argument works here, but we need to justify the existence of the vertices $c_1,c_2,d_1,d_2$ slightly differently. Take three distinct classes $C,D,E$ not containing $a$, such that $C\cap S(a)$ and $D\cap S(a)$ form an $S$-complete bipartite graph, and $D\cap S(a)$, $E\cap S(a)$ form a $T$-complete bipartite graph. We know that $C=(C\cap S(a))\cup (C\cap T(a))$ (and similar statements hold for $D,E$), and that each pair of classes is a Random Bipartite Graph. Let $d_0\in S(a)\cap D$ and $d_1\in T(a)\cap D$. By the extension axiom in the Random Bipartite Graph, there exists some $c_1\in C$ such that $S(d_0,c_1)\wedge T(c_1,d_1)$. This $c_1$ must be in $S(a)$, since $C\cap T(a)$ and $D\cap T(a)$ are complete bipartite in the predicate $S$:
\[
\includegraphics[scale=0.8]{Rank209.pdf}
\]
Between the classes $D,E$, whose intersections with the neighbourhoods of $a$ form $T$-complete bipartite graphs, take $d_2,d_3\in T(a)\cap E$. As before, there exists $c_2\in D$ such that $T(d_3,c_2)\wedge S(d_2,c_2)$. This $c_2$ must be in $S(a)$.
\[
\includegraphics[scale=0.8]{Rank210.pdf}
\]
And we reach a contradiction as before.
\end{proof}
Proposition \ref{PropBasis} suggests a way to prove that ${\rm Aut}(M)$ acts highly transitively on $M/R$: prove that for any finite $S$-clique $A$, ${\rm Aut}(M)_A$ acts 2-transitively on the set of $R$-classes of $\bigcap_{a\in A} S(a)$. To do this, we first need to prove that indeed $R$ defines an equivalence relation on $\bigcap_{a\in A} S(a)$.
\begin{proposition}\label{PropIntersectionNbhds}
Let $M$ be a homogeneous simple unstable 3-graph in which $R$ defines an equivalence relation with infinitely many infinite classes, and suppose that between any two $R$-classes both $S$ and $T$ are realised. Let $A=\{a_1,\ldots,a_n\}\subset M$ be a finite $S$-clique. Then $a_i\indep (A\setminus a_i)$ and $\bigcap_{1\leq i\leq n}S(a_i)$ is a simple unstable homogeneous 3-graph in which $R$ defines an equivalence relation with infinitely many infinite classes.
\end{proposition}
\begin{proof}
The first conclusion follows from the Independence Theorem and homogeneity (one can realise the type of the vertex over the smaller clique via the Independence Theorem). By simplicity, we also have $(A\setminus a_i)\indep a_i$, so for any $\varnothing$-indiscernible sequence $(c_i:i\in\omega)$ the set $\{\bigwedge_{j=1}^n S(x_j,c_i):i\in\omega\}$ is satisfiable, in particular when the $c_i$ form an infinite $R$-clique. There are infinitely many classes in $\bigcap_{1\leq i\leq n}S(a_i)$ as a consequence of Corollary \ref{Corollary4}; instability is also a consequence of Corollary \ref{Corollary4}.
\end{proof}
\begin{lemma}\label{HardLemma}
Let $M$ be a homogeneous simple unstable 3-graph in which $R$ defines an equivalence relation with infinitely many infinite classes, and suppose that between any two $R$-classes both $S$ and $T$ are realised. Let $A=\{a_1,\ldots,a_n\}\subset M$ be a finite $S$-clique. Then ${\rm Aut}(M)_A$ acts 2-transitively on the set of $R$-classes in $X_A=\bigcap_{1\leq i\leq n}S(a_i)$.
\end{lemma}
\begin{proof}
We know by Corollary \ref{Corollary4} that both $S$ and $T$ are realised in $X_A$, and we know by Proposition \ref{PropIntersectionNbhds} that $X_A$ is a homogeneous simple unstable 3-graph in which $R$ defines an equivalence relation with infinitely many infinite classes. The same argument as in Proposition \ref{PropIntersectionNbhds} proves that $Y_A=\bigcap_{1\leq i\leq n}T(a_i)$ is a structure of the same kind.
For $n=1$, the situation is exactly that of Proposition \ref{PropBasis}, so let us use that result as the basis for induction on $n$. Suppose that up to $|A|=n$, ${\rm Aut}(M)_A$ acts 2-transitively on the set of $R$-classes of $X_A$. Let $A^+$ be an $S$-clique of size $n+1$, and denote by $A^-$ a subset of $A^+$ of size $n$. The induction hypothesis and Corollary \ref{Corollary2} imply that the union of any two $R$-classes of $X_{A^-}=\bigcap_{a\in A^-} S(a)$ is isomorphic to the Random Bipartite Graph. If ${\rm Aut}(M)_{A^+}$ does not act 2-transitively on the set of $R$-classes of $X_{A^+}$, then $X_{A^+}$ is isomorphic to $\Gamma[K_\omega^R]$ by Proposition \ref{PropInfInf2}.
Choose any three $R$-classes $C,D,E$ represented in $X_{A^+}$ such that $(C\cup D)\cap X_{A^+}$ is $S$-complete bipartite and $(E\cup D)\cap X_{A^+}$ is $T$-complete bipartite. Let $c_0,c_1\in C\cap X_{A^+}$. Clearly, $X_{A^+}\subset X_{A^-}$, so there exists an element $d_1\in X_{A^-}$ such that $T(d_1,c_1)\wedge S(c_0,d_1)$. This $d_1$ must be in $X_{A^-}\setminus X_{A^+}$, since $(C\cup D)\cap X_{A^+}$ is $S$-complete bipartite.
\[
\includegraphics[scale=0.8]{Rank211.pdf}
\]
By a similar argument, one can find $d_2\in D\cap X_{A^+}$ and $e\in E\cap(X_{A^-}\setminus X_{A^+})$ such that $T(d_2,e)$. Now the pairs $c_1,d_1$ and $d_2,e$ have the same type over $A^+$, since $c_1$ and $d_2$ are $S$-related to all vertices in $A^+$, $d_1$ and $e$ are $S$-related to all vertices in $A^-$, and $T(c_1,d_1), T(d_2,e)$ hold. But an automorphism taking $c_1\mapsto d_2, d_1\mapsto e$ would take an $S$-edge in $X_{A^+}$ to a $T$-edge, contradicting homogeneity. We conclude that ${\rm Aut}(M)_{A^+}$ acts 2-transitively on the set of $R$-classes in $X_{A^+}$. The result follows by induction.
\end{proof}
\begin{corollary}\label{Corollary5}
Let $M$ be a homogeneous simple unstable 3-graph in which $R$ defines an equivalence relation with infinitely many infinite classes, and suppose that between any two $R$-classes both $S$ and $T$ are realised. Then the induced action of ${\rm Aut}(M)$ on $M/R$ is highly transitive.
\end{corollary}
\begin{proof}
By Lemma \ref{HardLemma}, any $n$-tuple of $R$-classes embeds a transversal $K_n^S$. Pick any two $n$-tuples of $R$-classes; each embeds a $K_n^S$, so by homogeneity there is an automorphism of $M$ taking one clique to the other (in any prescribed order). By invariance of $R$, this automorphism maps the first $n$-tuple of classes to the second.
\end{proof}
The conditions we have isolated so far spell out the age of $M$: a simple inductive argument on $n$ following the lines of Observations \ref{ObsRFree} and \ref{ObsIndepClasses} proves:
\begin{theorem}\label{ThmAge}
Let $M$ be a homogeneous simple unstable 3-graph in which $R$ defines an equivalence relation with infinitely many infinite classes, and suppose that between any two $R$-classes both $S$ and $T$ are realised. Then for all $n\in\omega$, all distinct $R$-classes $C_1,\ldots,C_n$ in $M$, and all disjoint finite $A_1^i,A_2^i\subset C_i$ there exists $x\in M$ such that for all $a^i\in A^i_1$ and $b^i\in A^i_2$ the relations $S(x,a^i)$ and $T(x,b^i)$ hold.\hfill$\Box$
\end{theorem}
Let ${\rm Forb}(RRS,RRT)$ denote the family of all finite 3-graphs not embedding the triangles $RRS,RRT$.
\begin{corollary}\label{CorImprimInf}
Let $M$ be a homogeneous simple unstable 3-graph in which $R$ defines an equivalence relation with infinitely many infinite classes, and suppose that between any two $R$-classes both $S$ and $T$ are realised. Then ${\rm Forb}(RRS,RRT)={\rm Age}(M)$.
\end{corollary}
\begin{proof}
It is clear that ${\rm Age}(M)\subset\rm{Forb}(RRS,RRT)$. The other inclusion, $\rm{Forb}(RRS,RRT)\subset{\rm Age}(M)$, is a consequence of Theorem \ref{ThmAge}.
\end{proof}
Condensing the main results above, we get:
\begin{theorem}\label{ThmBstar}
Let $M$ be a homogeneous simple unstable 3-graph in which $R$ defines an equivalence relation with infinitely many infinite classes, and suppose that between any two $R$-classes both $S$ and $T$ are realised. Then each pair of classes is isomorphic to the Random Bipartite Graph, and each $n$-tuple of classes embeds all $S,T$-graphs of size $n$ as transversals. Furthermore, such a structure is unique up to isomorphism.
\end{theorem}
\begin{proof}
The structure of a pair of classes was established above. The second assertion follows from the fact that the age of the Random Graph is contained in the age of $M$, so every finite $S,T$-graph $G$ is realised as a transversal. Since there is only one orbit of finite tuples of $R$-classes, $G$ is realised in all the unions of classes of the same size. Uniqueness follows from Corollary \ref{CorImprimInf}.
\end{proof}
Let us call the structure from Theorem \ref{ThmBstar}, for lack of a better name, $\mathcal B$. We have proved:
\begin{theorem}\label{ThmClassification}
The following is a list of all supersimple infinite transitive homogeneous $n$-graphs with $n\in\{2, 3\}$:
\begin{enumerate}
\item{Stable structures:
\begin{enumerate}
\item{$I_\omega[K_n]$ or its complement $K_\omega[I_n]$ for some $n\in\omega+1$}
\item{$P^i[K_m^i]$}
\item{$K_m^i[Q^i]$}
\item{$Q^i[K_m^i]$}
\item{$K_m^i[P^i]$}
\item{$K_m^i\times K_n^j$}
\item{$K_m^i[K_n^j[K_p^k]]$}
\end{enumerate}
}
\item{Unstable structures:
\begin{enumerate}
\item{Primitive structures:
\begin{enumerate}
\item{The random graph $\Gamma^{i,j}$}
\item{The random 3-graph $\Gamma^{i,j,k}$}
\end{enumerate}
}
\item{Imprimitive structures with infinite classes:
\begin{enumerate}
\item{$K_m^i[\Gamma^{j,k}]$, $m\in\omega+1$}
\item{$\Gamma^{i,j}[K_\omega^k]$}
\item{$\mathcal B_n^{i,j}$, $n\in\omega$, $n\geq2$}
\item{$\mathcal B$}
\end{enumerate}
}
\item{Imprimitive structures in which the equivalence relation has finite classes:
\begin{enumerate}
\item{Structures in which both unstable predicates are realised across any two equivalence classes: $C^i(\Gamma^{j,k})$}
\item{Structures in which only one of the unstable predicates is realised across any two equivalence classes: $\Gamma^{i,j}[K_n^k]$, $n\in\omega$.}
\end{enumerate}
}
\end{enumerate}
}
\end{enumerate}
\end{theorem}
Here $\{i,j,k\}=\{R,S,T\}$, and $\mathcal B_n^{i,j}$ is the 3-graph consisting of $n$ copies of $K^R_\omega$ in which the structure on the union of any two maximal infinite $R$-cliques is isomorphic to the random bipartite graph, and all $S,T$-structures of size $k\leq n$ are realised transversally in the union of any $k$ $R$-classes.
\chapter{Forking in Primitive Simple 3-graphs}\label{ChapStableForking}
\setcounter{equation}{0}
\setcounter{theorem}{0}
\setcounter{case}{0}
\setcounter{subcase}{0}
This chapter contains a proof of the following statement: given two vertices $a,b$ in a primitive simple homogeneous 3-graph $M$, if ${\rm tp}(a/b)$ divides over $\varnothing$, then the formula (relation) isolating ${\rm tp}(a/b)$ is stable.
Suppose we have a counterexample to this statement. Since at least one relation must be nonforking, the first case distinction is on the number of unstable forking relations, which can be either one or two.
This is a long argument by contradiction, and it can be at times confusing. The structure of the argument is as follows:
\[
\includegraphics[scale=0.8]{StableForking20.pdf}
\]
\section{Half-graphs in Primitive Homogeneous Simple 3-graphs}\label{SecUseful}
\begin{proposition}\label{UnstableNonforking}
Let $M$ be a simple unstable primitive homogeneous $n$-graph. If there are at least two non-forking relations, then all the non-forking relations are unstable.
\end{proposition}
\begin{proof}
Suppose that we have at least two nonforking relations, $R_1$ and $R_2$. We will prove that all finite $\{R_1,R_2\}$-graphs can be embedded into $M$, so in particular we can find an infinite half-graph for $R_2$.
We will prove by induction that we can embed every finite $\{R_1,R_2\}$-graph of size $m$ as an independent set (meaning that if $a_1,\ldots,a_m$ are the vertices of the induced graph, then $a_i\indep[\varnothing]a_1\ldots a_{i-1}a_{i+1}\ldots a_m$). The case $m=2$ is our basis for induction, and follows trivially from the fact that $R_1$ and $R_2$ are nonforking.
Suppose that we can embed all $\{R_1,R_2\}$-graphs of size at most $k$ as independent sets in $M$, and we wish to embed an $\{R_1,R_2\}$-graph $G$ of size $k+1$ into $M$. Enumerate the vertices of the graph as $v_1,\ldots,v_{k+1}$. We can embed the subgraph induced on $v_1,\ldots,v_k$ as an independent set $a_1,\ldots,a_k$ in $M$. We have in particular $a_k\indep a_1\ldots a_{k-1}$. By applying the induction hypothesis again, we can find a $\beta$ which satisfies the same atomic formulas over $a_1,\ldots,a_{k-1}$ as $v_{k+1}$ over $v_1,\ldots,v_{k-1}$. Similarly, we can find $\beta'\indep a_k$ satisfying the same atomic relation over $a_k$ as $v_{k+1}$ over $v_k$. We can apply the Independence Theorem to find a common solution $b$ to ${\rm tp}(\beta/a_1\ldots a_{k-1})$ and ${\rm tp}(\beta'/a_k)$ such that $b\indep a_1\ldots a_k$. The graph induced by $M$ on $a_1,\ldots,a_k,b$ is isomorphic to $G$.
\end{proof}
In the case of simple 3-graphs, we have at least one and up to three non-forking relations. Under primitivity and homogeneity, assuming that the atomic formula isolating ${\rm tp}(b/a)$ is stable whenever ${\rm tp}(b/a)$ divides over $\varnothing$, and using Koponen's result on the finiteness of rank of binary homogeneous simple structures, it is not too hard to prove that if we have three non-forking relations then each of them is unstable and $M$ is the random 3-graph (see Theorem \ref{PrimitiveAlice}). If we have two non-forking relations, then they are unstable; the remaining relation could be stable and forking (in which case we have stable forking in the formulation we have chosen for this document), or it could be unstable and forking.
We accumulate in this section some easy results, the conclusions of which are used repeatedly in the main proofs.
In the course of the proofs, we will make extensive use of the Lachlan-Woodrow Theorem \ref{LachlanWoodrow} from \cite{lachlan1980countable}.
\begin{remark}\label{RmkSimpleGraphs}
From the list of structures in the Lachlan-Woodrow Theorem, graphs in the first category are $\omega$-stable of {\rm SU}-rank 1 if just one of $m,n$ is $\omega$; the graph $I_\omega[K_\omega]$ is of rank 2. The Random Graph is supersimple unstable of {\rm SU}-rank 1, and the homogeneous $K_n$-free graphs are not simple.
\end{remark}
\begin{definition}
Given an unstable theory $\mathcal T$ and $M\models\mathcal T$, we say that a predicate $P$ is unstable in a set $X\subset M$ if we can find witnesses to the instability of $P$ within the set $X$, that is, if there exist $\bar a_i,\bar b_i\in X$ ($i\in\omega$) such that $P(\bar a_i,\bar b_j)$ holds iff $i\leq j$. Similarly, we say that a predicate $P$ is nonforking in a set defined by a formula $\varphi(\bar x,\bar a)$ if the formula $\varphi(\bar x,\bar a)\wedge\varphi(\bar a,\bar b)\wedge P(\bar x,\bar b)$ does not fork over $\bar a$. If no set $X$ is specified, $X=M$ is assumed.
\end{definition}
\begin{definition}\label{DefHG}
A \emph{half-graph} for colour $P$ in an $n$-graph is the graph induced on a set of vertices $a_i,b_i$ ($i\in\omega$) witnessing the instability of the formula $P(x,y)$.
\end{definition}
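Explicitly, a half-graph for colour $P$ consists of vertices $a_i,b_i$ ($i\in\omega$) such that
\[
P(a_i,b_j)\iff i\leq j,
\]
so whether $P$ holds of the pair $(a_i,b_j)$ depends only on the relative order of the indices $i$ and $j$.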
We will use half-graphs in the rest of the argument because of the information that they give us about the structure of the neighbourhoods of a vertex in a homogeneous 3-graph.
\begin{proposition}\label{IndiscerniblePairs}
Suppose that $(a_i,b_i)_{i\in\omega}$ is an infinite half-graph for some relation $R$ in a homogeneous $n$-graph. Then we can find an infinite half-graph $(a_i',b_i')_{i\in\omega}$ that is indiscernible as a sequence of pairs $a_i'b_i'$ of type $R(a,b)$.
\end{proposition}
\begin{proof}
The half-graph $(a_i,b_i)_{i\in\omega}$ is an infinite sequence of pairs of type $R$. Colour the pairs of $R$-edges $(a_ib_i,a_jb_j)$, $i\neq j$ according to the type of the four-vertex set $\{a_i,b_i,a_j,b_j\}$. By Ramsey's theorem, there is an infinite monochromatic $X\subset\omega$ for this colouring. The set of pairs $\{a_ib_i:i\in X\}$ is indiscernible because the language is binary, and it is a half-graph for $R$ with the ordering induced by $\omega$ on $X$.
\end{proof}
\begin{definition}
We denote the set of relations from $L$ which are unstable in models of a theory $\mathcal T$ as $L^u(\mathcal T)$. Given two distinct binary relation symbols $R,R'$ in $L^u(\mathcal T)$, we say that $R$ and $R'$ are \emph{compatible} if there exists an indiscernible sequence of pairs of type $R$ which witnesses the instability of $R$ and $R'$. We denote compatibility by $R\sim_{\mathcal T}R'$, omitting $\mathcal T$ whenever it is clear which theory we are referring to. Given witnesses $\alpha_i,\beta_i$ to the compatibility $R\sim_{\mathcal T}R'$, we refer to the sets $A=\{\alpha_i:i\in\omega\}$ and $B=\{\beta_i:i\in\omega\}$ as the \emph{horizontal} cliques of $R\sim_{\mathcal T}R'$.
\end{definition}
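Schematically, witnesses $\alpha_i,\beta_i$ ($i\in\omega$) to the compatibility $R\sim_{\mathcal T}R'$ satisfy
\[
R(\alpha_i,\beta_j)\ \text{for}\ i\leq j,\qquad R'(\alpha_i,\beta_j)\ \text{for}\ j<i,
\]
so each pair $\alpha_i\beta_i$ is of type $R$, and the same indiscernible sequence of pairs witnesses the instability of both $R$ and $R'$.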
\begin{observation}\label{IndiscernibleHalfGraphs}
Let $M$ be an unstable $n$-graph and suppose that $R$ is an unstable relation. Then there exists $R'\in L^u({\rm Th}(M))$ such that $R$ and $R'$ are compatible.
\end{observation}
\begin{proof}
By instability of $R$, there exist parameters $a_i$ and $b_i$, $i\in\omega$, such that $R(a_i,b_j)$ holds iff $i\leq j$. Consider the set of $R$-edges $\{a_ib_i:i\in\omega\}$, and colour the pairs of distinct edges in this set according to the 4-type they satisfy. By Ramsey's theorem, there is an infinite monochromatic set. As a set of vertices, this set witnesses the instability of $R$, and because it is indiscernible over $\varnothing$ as a set of $R$-edges, all the edges $a_ib_j$ with $j<i$ are of the same type $R'\neq R$.
\end{proof}
\begin{definition}
We call a set of parameters witnessing the compatibility of two unstable relations $R,R'$ an \emph{indiscernible half-graph} for $R,R'$.
\end{definition}
\begin{remark}\label{RmkCompatibilityGraph}
Compatibility is a graph relation on $L^u(\mathcal T)$ when $\mathcal T$ is the theory of an infinite $n$-graph ($L=\{R_1,\ldots,R_n\}$). The compatibility graph on $L^u(\mathcal T)$ has no isolated vertices (all vertices have degree at least 1). In particular, when $|L|=3$ and all relations are unstable, the compatibility graph is connected.
\end{remark}
Indiscernible half-graphs for $R,R'$ give us valuable information about the $R$- and $R'$-neighbourhoods of vertices.
\begin{proposition}\label{PropCliques}
Let $M$ be a simple primitive homogeneous $n$-graph in which $R$ divides and $S_1,\ldots,S_k$ are the nonforking relations with respect to ${\rm Th}(M)$ in the language of the $n$-graph. Then for any $a\in M$ and any $i\in\{1,\ldots,k\}$, the set $R(a)$ does not contain any infinite $S_i$-cliques.
\end{proposition}
\begin{proof}
It follows from Observation \ref{MorleySqn} that any Morley sequence over $\varnothing$ is an infinite $S_i$-clique for some $i\in\{1,\ldots,k\}$. By simplicity, dividing is witnessed by Morley sequences, so for any $S_i$-clique enumerated as $(a_j:j\in\omega)$, the set $\{R(x,a_j):j\in\omega\}$ is inconsistent.
\end{proof}
As a consequence,
\begin{observation}\label{NoInfiniteCliques}
If $(M;R,S,T)$ is a homogeneous primitive simple 3-coloured graph in which $R$ is a forking relation and $S, T$ are nonforking, then there are no infinite $S$- or $T$-cliques in $R(a)$.
\end{observation}\hfill$\Box$
\begin{notation}
Whenever we draw a 3-graph, $R$ is represented by plain lines, $S$ by dashed lines, and $T$ by dotted lines.
\end{notation}
In any primitive $\omega$-categorical $n$-graph $M$, we necessarily have finite diameter for each of the predicates $R_i$, since $R_i$-connectivity is an equivalence relation and there are only finitely many types of pairs of elements (see Observation \ref{ObsFiniteDiameter}). We denote the $R_i$-diameter of $M$ by ${\rm Diam}_{R_i}(M)$. Given a predicate $R_i$, we denote the set of elements at $R_i$-distance $m$ from $a$ by $R_i^m(a)$.
\begin{definition}\label{DefRefineComp}
We will use the notation $R\sim^ST$ to indicate that there exists an indiscernible half-graph witnessing $R\sim T$ such that one of the horizontal cliques is of colour $S$.
\end{definition}
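For instance, one configuration witnessing $R\sim^ST$ consists of vertices $a_i,b_i$ ($i\in\omega$) with
\[
R(a_i,b_j)\ \text{for}\ i\leq j,\qquad T(a_i,b_j)\ \text{for}\ j<i,\qquad S(a_i,a_j)\ \text{for}\ i\neq j,
\]
that is, an indiscernible half-graph witnessing $R\sim T$ in which the horizontal clique $\{a_i:i\in\omega\}$ is of colour $S$.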
\begin{proposition}\label{PropHomNeigh}
Let $M$ be a homogeneous binary structure in a language with $n$ binary relations, and let $S$ be a predicate in the language of $M$. Then $S(a)$ is a homogeneous $m$-graph for some $m\leq n$.
\end{proposition}
\begin{proof}
$A\cong B$ for finite $A,B\subset S(a)$ implies $aA\cong aB$, so by the homogeneity of $M$ there is $\sigma\in{\rm Aut}(M/a)$ taking $A$ to $B$ ($\sigma$ clearly fixes $S(a)$ setwise). The restriction of $\sigma$ to $S(a)$ is an automorphism of $S(a)$ taking $A$ to $B$.
\end{proof}
\begin{observation}
Let $M$ be a primitive simple homogeneous 3-graph and suppose that $S$ divides, ${\rm Diam}_S(M)=3$, and $S^2(a)=T(a)$. If $T\sim^SS$, then $S(a)$ is isomorphic to the Random Graph.
\label{ObsIsoRG}
\end{observation}
\begin{proof}
It follows from ${\rm Diam}_S(M)=3$ and $S^2(a)=T(a)$ that $S(a)$ is $R$-free. Consider an indiscernible graph witnessing $T\sim^SS$:
\[
\includegraphics[scale=0.7]{StableForking1.pdf}
\]
The indiscernible half-graph sketched above can be embedded in $S(a)$ by compactness and homogeneity, since each initial segment $(a_ib_i)_{i\leq n}$ can be embedded in $S(a)$ by transitivity of $M$ and the fact that $(a_ib_i)_{i\leq n}$ is contained in $S(a_{n+1})$. Therefore $S(a)$ is isomorphic to a homogeneous (by Proposition \ref{PropHomNeigh}) unstable graph, which must be simple because it is interpretable in $M$. The result follows from Remark \ref{RmkSimpleGraphs}.
\end{proof}
\begin{proposition}
Let $M$ be a primitive simple homogeneous 3-graph and suppose that $S$ divides, $R$ is nonforking, ${\rm Diam}_S(M)=2$ and $S\sim^ST$. If $R$ defines an equivalence relation on $S(a)$, then $S$ and $T$ are nonforking in $S(a)$, and $S(a)$ is isomorphic to $C(\Gamma)$ or to $\Gamma^{S,T}[K_n^R]$ for some $n\in\omega$.
\end{proposition}
\begin{proof}
It follows from Proposition \ref{PropCliques} that the $R$-classes in $S(a)$ are finite. From $S\sim^ST$ we get that $S,T$ are unstable in $S(a)$. The conclusions follow from Theorem \ref{ThmCGamma} and Corollary \ref{CorFiniteClasses}.
\end{proof}
\begin{proposition}\label{PropTwoInfiniteCliques}
Let $M$ be a primitive simple unstable homogeneous 3-graph. Then $M$ embeds infinite monochromatic cliques in at least two colours.
\end{proposition}
\begin{proof}
At least one of the relations, say $R$, is nonforking, so by the Independence Theorem and homogeneity $M$ embeds infinite $R$-cliques. Of the other relations, at least one, without loss of generality $S$, is unstable, and therefore non-algebraic.
If $S$ is nonforking, then there are infinite $S$-cliques by the Independence Theorem. And if $S$ divides, then the $S$-neighbourhood of any $a\in M$ does not embed infinite $R$-cliques by Proposition \ref{PropCliques}, but it must embed an infinite monochromatic clique by Ramsey's Theorem.
\end{proof}
\begin{proposition}\label{PropNoSnotTnoCliquesR}
There are no primitive homogeneous simple 3-graphs $M$ in which $R\sim^SS$, $T$ is nonforking, $S$ and $R$ divide, and $R$ does not form infinite cliques.
\end{proposition}
\begin{proof}
Suppose for a contradiction that such an $M$ exists, and consider $S(a)$ for some $a\in M$. There are two cases to analyse:
\begin{case}
If $SST\in{\rm Age}(S(a))$, then we make the following claim:
\begin{claim}
$S(a)$ is primitive.
\end{claim}
\begin{proof}
From $S\sim^SR$, we get that $S$ and $R$ are unstable in $S(a)$, so the only formulas that could define an equivalence relation in $S(a)$ are $T$ and $S\vee R$.
To eliminate the possibility of $S\vee R$ defining an equivalence relation, note that if it did define one, then by the instability of $S,R$ its classes would be infinite. And since $T$ does not form infinite cliques in $S(a)$, $S\vee R$ would have only finitely many infinite classes. Each class is a simple unstable homogeneous graph, isomorphic to the Random Graph. This contradicts the hypothesis that $R$ does not form infinite cliques.
Finally, $T$ does not define an equivalence relation because in that case we could also find infinite $R$-cliques within $S(a)$, by Theorem \ref{ThmCGamma} and Corollary \ref{CorFiniteClasses}.
\end{proof}
From this claim it follows that $S(a)$ is a homogeneous primitive simple 3-graph in which there are no infinite $R$- or $T$-cliques, contradicting Proposition \ref{PropTwoInfiniteCliques}.
\end{case}
\begin{case}
If $SST\not\in{\rm Age}(S(a))$, then $S(a)$ is an unstable homogeneous $R,T$-graph, again contradicting (by the Lachlan-Woodrow Theorem) the hypothesis of no infinite $R$-cliques in $M$.
\end{case}
\end{proof}
\setcounter{case}{0}
\setcounter{subcase}{0}
\begin{proposition}\label{PropImprimitiveInfClasses}
There are no homogeneous simple 3-graphs in which $S$ defines an equivalence relation with infinitely many infinite classes, $R$ and $T$ are unstable, and $R$ does not form infinite cliques.
\end{proposition}
\begin{proof}
Under these hypotheses, $T$ is the only nonforking relation in the language of $M$: $S$ clearly divides as it is an equivalence relation with infinitely many classes; it is not possible for $R$ to be nonforking because there is only one strong type of elements in $M$, so the Independence Theorem would give us infinite $R$-cliques if $R$ were non-dividing.
There are two cases, depending on whether the triangle $RST$ embeds into $M$ or not.
\begin{case}
If $RST$ does not embed into $M$, then the structure induced on the union of a pair of distinct $S$-classes $K, K'$ is either $T$-free or $R$-free. Therefore, the half-graphs witnessing $R\sim T$ are transversal to an infinite number of $S$-classes; in other words, any transversal to all $S$-classes is unstable and $K_n^R$-free.
Consider the infinite graph defined on $M/S$ with predicates $\hat R,\hat T$ which hold of two distinct classes $a/S,b/S$ if for some/any $\alpha\in a/S,\beta\in b/S$ we have $R(\alpha,\beta)$ (respectively, $T(\alpha,\beta)$).
\begin{claim}
The graph interpreted in $M/S$ as described in the preceding paragraph is homogeneous in the language $\{\hat R,\hat T\}$.
\end{claim}
\begin{proof}
By the same argument as in Corollary \ref{CorFiniteClasses}.
\end{proof}
As a consequence of the claim, the Lachlan-Woodrow Theorem, and the facts that in $M$ the predicates $R$ and $T$ are unstable, and that $M$ does not embed infinite $R$-cliques, we have that $M/S$ is isomorphic to some universal homogeneous $K_n$-free graph (for some $n\in\omega$), which are not simple. This contradicts the simplicity of $M$.
\end{case}
\begin{case}
If $RST$ embeds into $M$, then for any $a\in M$ the set $R(a)$ meets every $S$-class in $M$ not containing $a$. The reason is that, since $RST\in{\rm Age}(M)$, there exists an $S$-class with two elements $c,c'$ such that $R(a,c)\wedge T(a,c')$. An element $b$ in any class that does not contain $a$ satisfies $R(a,b)$ or $T(a,b)$, so by homogeneity $ab\cong ac$ or $ab\cong ac'$, and there is a $b'$ in the same class as $b$ that satisfies the other formula.
By the usual argument, there are no infinite $T$-cliques in $R(a)$ for any $a\in M$, and by the hypothesis of no infinite $R$-cliques in $M$, we get that the only infinite cliques in $R(a)$ are $S$-cliques. But a transversal to $R(a)$ must contain an infinite monochromatic clique, by Ramsey's Theorem. We have reached a contradiction.
\end{case}
\end{proof}
\section{One unstable forking relation}\label{SubsecOneUnstable}
If $R$ is the only forking relation, and it is unstable, then it follows from Proposition \ref{UnstableNonforking} that all relations are unstable. Let us look more closely into the infinite half-graphs witnessing the instability of the forking relation $R$. By Remark \ref{RmkCompatibilityGraph}, the compatibility graph is connected.
\begin{observation}\label{UnstableInR}
Let $M$ be a primitive homogeneous simple 3-graph, and suppose that $R$ is a forking unstable relation and $S,T$ are nonforking and unstable. Then we can find witnesses to the instability of $R$ within $R(a)$.
\end{observation}
\begin{proof}
Let $(a_i,b_i)_{i\in\omega}$ be a half-graph for $R$. Since $S,T$ are nonforking, it follows from Proposition \ref{PropCliques} that $R(a)$ does not contain infinite $S$- or $T$-cliques. We know that $R(a)$ is an infinite set because $R$ is unstable, so the only infinite cliques in $R(a)$ are of colour $R$. From this it follows that when we extract an indiscernible half-graph from a set of witnesses for the instability of $R$ as in Proposition \ref{IndiscerniblePairs}, then the $a_i$ and the $b_i$ form infinite $R$-cliques. Clearly all the elements of the indiscernible half-graph are in $R(a_0)$.
\end{proof}
\begin{remark}
As a direct consequence of Observation \ref{UnstableInR}, there are no primitive homogeneous simple 3-graphs $M$ such that $R$ is forking and unstable and the $R$-diameter of $M$ is 3, as in this case $R(a)$ would be an infinite simple unstable homogeneous graph, isomorphic to the Random Graph by the Lachlan-Woodrow Theorem. But this is impossible, since there are no infinite $S$- or $T$-cliques in $R(a)$.
\end{remark}
\begin{lemma}
There are no primitive homogeneous simple 3-graphs in which all the predicates are unstable and only one of them, $R$, divides.
\label{PropNoSimpleOneDividing}
\end{lemma}
\begin{proof}
Suppose for a contradiction that $M$ is a 3-graph as in the statement of this lemma. By simplicity and instability of $R$, $R(a)$ is an infinite simple 3-graph not embedding infinite cliques of colour $S$ or $T$. By Observation \ref{UnstableInR}, it is unstable. It follows from Proposition \ref{PropTwoInfiniteCliques} that $R(a)$ is not primitive.
By Observation \ref{UnstableInR}, there is at least one more predicate that is unstable in $R(a)$. Let us suppose without loss of generality that $S$ is such a predicate. From the instability of $R,S$ we get directly that $R,S,R\vee T,S\vee T$ do not define equivalence relations. If $T$ defined an equivalence relation on $R(a)$, then its classes would be finite and by Theorem \ref{ThmCGamma} and Corollary \ref{CorFiniteClasses} we could find infinite $T$-cliques. And if $R\vee S$ defines an equivalence relation, then it has finitely many infinite classes (since $R(a)$ does not embed infinite $T$-cliques by Proposition \ref{PropCliques}), each of which is a homogeneous unstable graph, isomorphic to the Random Graph by the Lachlan-Woodrow Theorem. Again we find infinite $S$-cliques in $R(a)$, a contradiction. So there are no invariant proper nontrivial equivalence relations on $R(a)$, contradicting the first paragraph of this proof.
\end{proof}
\section{Unstable 3-graphs not embedding an infinite $R$-clique.}
\begin{proposition}
Let $M$ be a simple homogeneous 3-graph in which $S$ and $T$ are nonforking relations, and which does not embed infinite $R$-cliques. Then $M$ is imprimitive.
\label{PropImprimitivity}
\end{proposition}
\begin{proof}
Suppose for a contradiction that $M$ is primitive. Then $R$ is a forking relation, since otherwise the Independence Theorem and homogeneity would allow us to find arbitrarily large $R$-cliques. Since $M$ is primitive, we have that each of $R,S,T$ is non-algebraic, by $\omega$-categoricity. Consider $R(a)$ for any $a\in M$; this is an infinite set which cannot contain infinite cliques of colour $S$ or $T$ because that would witness that $R$ is nonforking ($S$- and $T$-cliques form Morley sequences over $\varnothing$), and cannot contain infinite $R$-cliques either, because $M$ does not embed infinite $R$-cliques. This contradicts Ramsey's theorem.
\end{proof}
\begin{remark}
Notice that if $M$ is a primitive simple homogeneous 3-graph not embedding infinite $R$-cliques and satisfying $S\sim^ST, S\sim^TT$, then $S$ and $T$ are nonforking. More generally, if $R$ divides and $S\sim^ST, S\sim^TT$, then $S$ and $T$ are nonforking.
\end{remark}
\begin{proposition}
Let $M$ be a simple unstable homogeneous 3-graph in which $R$ is a stable relation and $S,T$ form infinite cliques. If $M$ does not embed infinite $R$-cliques then $M$ is imprimitive.
\end{proposition}
\begin{proof}
Since $R$ is stable, we have $S\sim T$, and as $M$ does not embed infinite $R$-cliques, we have either $S\sim^ST$ or $S\sim^TT$. Suppose for a contradiction that $M$ is primitive. Then $R$ is not algebraic, by $\omega$-categoricity, and divides, by primitivity and the Independence Theorem. This implies that one of the unstable relations, say $S$, is nonforking.
If $S\sim^ST$ and $S$ does not divide, then $T$ does not divide by simplicity (as $T(b)$ contains infinite $S$-cliques), so by Proposition \ref{PropImprimitivity} $M$ is imprimitive.
Therefore, we must have $S\not\sim^ST$ and $S\sim^TT$, and $T$ divides, as otherwise we could use the Independence Theorem to embed each finite substructure of an indiscernible half-graph witnessing $S\sim^ST$ into $M$. Therefore $T(a)$ does not contain any infinite $S$- or $R$-cliques, and is imprimitive by Proposition \ref{PropTwoInfiniteCliques}. From $S\sim^TT$ we get that $S$, $T$ are unstable in $T(a)$, and therefore $S,T,R\vee S,R\vee T$ do not define equivalence relations in $T(a)$. The formula $S\vee T$ does not define an equivalence relation because its classes would be isomorphic to the Random Graph, which is impossible as $T(a)$ does not contain infinite $S$-cliques. So $R$ must define an equivalence relation with finite classes, and $T(a)$ is isomorphic to $C(\Gamma^{ST})$ or $\Gamma^{ST}[K_n^R]$. In any case, $T(a)$ embeds infinite $T$-cliques. We have reached a contradiction in every possible case stemming from the assumption of primitivity, so $M$ must be imprimitive.
\end{proof}
\comm
\begin{proposition}
Let $M$ be a primitive simple unstable 3-graph without infinite $R$-cliques. If $R\sim S$, then $M$ is semilinear.
\end{proposition}
\begin{proof}
By Proposition \ref{PropOneIsStable}, $T$ is stable. Since $R$ does not form infinite cliques, we have $R\sim^SS$ or $R\sim^TS$ and by the usual arguments $R$ divides.
\begin{claim}
$R\not\sim^SS$.
\end{claim}
\begin{proof}
Suppose for a contradiction that $S\sim^SR$, so $R$ and $S$ are unstable in $S(a)$. Then $S$ divides because $R$ divides and there are infinite $S$-cliques in $R(a)$. This implies that $T$ is a nonforking relation. By Proposition \ref{PropTwoInfiniteCliques}, $S$ and $T$ form infinite cliques in $M$. The instability $R\sim^SS$ also implies $SSR\in{\rm Age}(M)$. We have two cases, depending on the $S$-diameter of $M$.
\begin{enumerate}
\item{If ${\rm Diam}_S(M)=3$, then $SST\not\in{\rm Age}(M)$ and $S(a)$ is an $RS$-graph without infinite $R$-cliques. This is impossible by the Lachlan-Woodrow Theorem, as the only simple unstable graph is the Random Graph.}
\item{If ${\rm Diam}_S(M)=2$, then $S(a)$ is a 3-graph in which $R,S$ are unstable, and $S(a)$ does not embed infinite $R$- or $T$-cliques. Then $S(a)$ is imprimitive by Proposition \ref{PropTwoInfiniteCliques} and one of $T$, $R\vee S$ defines an equivalence relation on $S(a)$. If $R\vee S$ defines an equivalence relation, then each class is isomorphic to the Random Graph and therefore embeds infinite $R$-cliques, a contradiction. And if $T$ defines an equivalence relation, then $S(a)$ is isomorphic to one of $C(\Gamma^{RS})$ (if ${\rm Aut}(M/a)$ acts 2-transitively on $S(a)/T$) or $\Gamma^{RS}[K_n^T]$. In any case, there are infinite $R$-cliques.}
\end{enumerate}
Therefore, $R\not\sim^SS$.
\end{proof}
From this claim and $R\sim S$ it follows that $R\sim^TS$ holds. Therefore, $T$ divides, since $R$ divides and $R(a)$ contains infinite $T$-cliques, by $R\sim^TS$. Now we have three cases:
\begin{enumerate}
\item{If $T(a)$ is a $TS$-graph, then $T(a)$ is an infinite stable graph. By the Lachlan-Woodrow Theorem, it is imprimitive, so by Proposition \ref{PropMultipartite} $T$ is an equivalence relation on $T(a)$ which must have finitely many infinite classes since there are no infinite $S$-cliques in $T(a)$. It follows that $M$ is semilinear of $T$-diameter 3.}
\item{If $T(a)$ is a $TR$-graph, then $M$ is semilinear by the same argument as in the previous case.}
\item{If $T(a)$ realises all predicates, then by Proposition \ref{PropTwoInfiniteCliques} $T(a)$ is imprimitive. }
\end{enumerate}
If $T(a)$ is an unstable 3-graph, then there are witnesses to $R\sim S$ in $T(a)$ and $T$ defines an equivalence relation ($R\vee S$ is not an equivalence relation because its classes would be isomorphic to the Random Graph) with infinite classes. It is not possible for $T$ to have infinitely many infinite classes as in that case $M$ would interpret a weak pseudoplane, contradicting Theorem \ref{ThmThomas}. Therefore $T$ has finitely many infinite classes and $M$ is semilinear.
And if $T(a)$ is stable, then it is isomorphic to one of $P^S[K_\omega^S], Q^T[K_\omega^T], K_\omega^T\times K_n^S, K_\omega^T\times K_n^T$, or a wreath product $K_m^i[K_n^j[K_t^k]]$ where $\{i,j,k\}=\{R,S,T\}$, $i\neq T$ and the subindex corresponding to $T$ is $\omega$, by Lachlan's Theorem \ref{Lachlan3graphs} and Proposition \ref{PropMultipartite}. This leaves us with only one case to eliminate, namely $T(a)\cong K_m^S[K_\omega^T[K_n^R]]$ or $T(a)\cong K_m^R[K_\omega^T[K_n^S]]$; in all other cases $M$ is semilinear.
Suppose then that $T(a)$ is isomorphic to $K_m^S[K_\omega^T[K_n^R]]$.
\end{proof}
\ent
\begin{proposition}
Let $M$ be a homogeneous simple unstable 3-graph not embedding infinite $R$-cliques. If $R\sim^SS$, then $M$ is imprimitive.
\label{PropImprimitivity1}
\end{proposition}
\begin{proof}
Suppose for a contradiction that $M$ is primitive. Then $R$ divides over $\varnothing$ by primitivity and the Independence Theorem, and $S$ divides because $R(a)$ contains infinite $S$-cliques, by $R\sim^SS$, and $T$ is the only nonforking relation.
Again by $R\sim^SS$ the homogeneous simple 3-graph induced on $S(a)$ is unstable, contains $S$- and $R$-edges and does not embed infinite $R$- or $T$-cliques. By the Lachlan-Woodrow Theorem, $T$ is realised in $S(a)$ because the only simple unstable graph is the Random Graph. By Proposition \ref{PropTwoInfiniteCliques}, $S(a)$ is imprimitive. Additionally, $R,S$ are unstable in $S(a)$, and so $R,S,R\vee T, S\vee T$ do not define equivalence relations in $S(a)$. Therefore, either $R\vee S$ is an equivalence relation with finitely many infinite classes in $S(a)$, or $T$ is an equivalence relation with finite classes on $S(a)$. It is not possible for $R\vee S$ to be an equivalence relation because in that case each class would be isomorphic to the Random Graph, contradicting that $S(a)$ does not contain infinite $R$-cliques.
And if $T$ defines an equivalence relation with finite classes, then by Theorem \ref{ThmCGamma} and Corollary \ref{CorFiniteClasses}, all the imprimitive simple unstable homogeneous 3-graphs with finite classes embed infinite cliques in two colours, contradicting that $S(a)$ does not embed infinite $R$-cliques.
\end{proof}
\begin{proposition}
Let $M$ be an unstable simple homogeneous 3-graph in which $T$ defines an equivalence relation with infinitely many infinite classes and $R\sim^SS$. Then $M$ embeds infinite $R$-cliques.
\label{PropUnstabInfCliques}
\end{proposition}
\begin{proof}
First note that $T$ is a forking relation because it is a nontrivial equivalence relation with infinitely many classes. If both $R$ and $S$ are nonforking, then the Independence Theorem implies that we can embed any finite $R,S$-graph as a transversal to some $T$-classes, so in particular we can embed arbitrarily large finite $R$-cliques and the result follows by compactness.
Suppose then that one of $R$, $S$ divides. The instability $R\sim^SS$ implies in particular that if $R$ divides then so does $S$; but then all three relations would fork, which is impossible, as there must be Morley sequences of vertices (so at least one relation is nonforking). As a consequence, $R$ is a nonforking relation, and by the Independence Theorem there are infinite $R$-cliques in $M$.
\end{proof}
In the next Proposition, we use the symbol $R\sim^T_TS$ to say that there exists an indiscernible half-graph witnessing $R\sim S$ such that both monochromatic cliques are of colour $T$. This is stronger than $R\sim^TS$, but weaker than $R\sim^TS\wedge R\not\sim^SS$.
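For concreteness, such a configuration can be spelled out as follows; the orientation of the bipartite condition ($i\leq j$ rather than $i<j$) is only a convention, chosen here for definiteness, and may differ from the convention fixed elsewhere in the text. The assertion $R\sim^T_TS$ says that there is an indiscernible sequence $(a_ib_i)_{i\in\omega}$ with
\[
R(a_i,b_j)\ \text{iff}\ i\leq j,\qquad S(a_i,b_j)\ \text{iff}\ i>j,\qquad T(a_i,a_j)\wedge T(b_i,b_j)\ \text{for all}\ i\neq j,
\]
so that both horizontal cliques $\{a_i:i\in\omega\}$ and $\{b_i:i\in\omega\}$ are monochromatic of colour $T$.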
\begin{proposition}
Let $M$ be an unstable simple homogeneous 3-graph in which $T$ defines an equivalence relation with infinite classes and $R\sim^T_TS$. Then ${\rm Aut}(M)$ acts 2-transitively on $M/T$ and the structure induced on the union of any two distinct $T$-classes is isomorphic to the Random Bipartite Graph.
\label{PropUnstableImprimitiveRBG}
\end{proposition}
\begin{proof}
Consider an indiscernible half-graph witnessing $R\sim^T_TS$. This half-graph shows that there exist two $T$-classes $C,C'$ such that $R$ and $S$ are realised in the structure induced by $M$ on $C\cup C'$, so by homogeneity all pairs of $T$-classes are in the same ${\rm Aut}(M)$-orbit and so ${\rm Aut}(M)$ acts 2-transitively on $M/T$. Consequently, we can find witnesses to $R\sim^T_TS$ in the union of any pair of distinct $T$-classes.
\[
\includegraphics[scale=0.7]{StableForking4.pdf}
\]
Now let $C,C'$ be two distinct $T$-classes, and consider two disjoint finite subsets $X,Y\subset C$ of size $n$ and $m$, respectively. Using our witnesses $a_i,b_i$ ($i\in\omega$) for $R\sim^T_TS$, we know that there exists a vertex $v$ which is $T$-inequivalent to $n+m$ distinct elements (namely, $v=a_{n+1}$, which is $T$-inequivalent to $b_1,\ldots,b_{n+m}$) and such that $S(v,b_i)$ holds for $i\in\{1,\ldots,n\}$ and $R(v,b_j)$ holds for $j\in\{n+1,\ldots,n+m\}$. Therefore, each pair of classes satisfies the extension axioms of the Random Bipartite Graph.
To see that the union of any two classes $C,C'$ is a homogeneous 3-graph, note first that if we take two isomorphic finite subsets of $C\cup C'$, each of which meets both classes, then by the homogeneity of $M$ there is an automorphism taking one to the other, and this automorphism fixes $C\cup C'$ setwise. And if the finite sets $A,B$ are contained in the same class $C$, then by the argument in the preceding paragraph we can find elements $c_A,c_B$ such that $S(c_A,a)$ holds for all $a\in A$ and $S(c_B,b)$ holds for all $b\in B$, so there is an automorphism $\sigma$ of $M$ fixing $C\cup C'$ setwise with $\sigma(c_AA)=c_BB$.
Using homogeneity and the fact that all pairs of $T$-classes are in the same ${\rm Aut}(M)$-orbit, the result follows.
\end{proof}
\begin{proposition}
Let $M$ be a homogeneous simple unstable 3-graph with $S\sim^ST$ and not embedding infinite $R$-cliques. Then $M$ is imprimitive.
\label{PropImprimitivity2}
\end{proposition}
\begin{proof}
Suppose for a contradiction that $M$ is primitive. Then $R$ divides, by the Independence Theorem. By $\omega$-categoricity and primitivity, $R$ is not algebraic.
One of $S,T$ is nonforking. If $S$ is nonforking then an indiscernible sequence witnessing $S\sim^ST$ also witnesses that $T$ is nonforking. This is impossible because in this case $R(a)$ is forced to be finite as it does not embed infinite cliques of any colour.
Assume then that $T$ is nonforking and $S$ divides. First, note that ${\rm Diam}_S(M)=2$ as we know that $SST\in{\rm Age}(M)$ by $S\sim^ST$ and so $T(a)\subset S^2(a)$; if $S(a)$ were $R$-free, then it would be isomorphic to the Random Graph by the Lachlan-Woodrow Theorem, and so it would contain infinite $T$-cliques.
So ${\rm Diam}_S(M)=2$ and $S(a)$ is an unstable 3-graph not embedding infinite $R$- or $T$-cliques. By Proposition \ref{PropTwoInfiniteCliques}, $S(a)$ is imprimitive. By $S\sim^ST$, one of $S\vee T$ or $R$ defines an equivalence relation, and it cannot be $S\vee T$ by the Lachlan-Woodrow Theorem (its classes would have to be isomorphic to the Random Graph, and thus embed infinite $T$-cliques), so $T$ defines an equivalence relation with finite classes, which is again impossible as $T$ would be algebraic and we would not be able to find witnesses to $S\sim^ST$ in $S(a)$.
\end{proof}
\section{Homogeneous unstable 3-graphs with two forking relations}\label{SubsecTwoForkingRels}
In this section we prove the non-existence of simple unstable primitive homogeneous 3-graphs with two forking relations. There are two cases: either the nonforking relation is stable, or it is unstable. We treat both cases simultaneously whenever possible.
If all relations are unstable, then none of $R,S,T$ is an algebraic predicate, so we have over any $a$ three infinite orbits of vertices. And if the nonforking relation $R$ is stable, the Independence Theorem and homogeneity guarantee that one can embed infinite $R$-cliques in $M$, so again we get three infinite orbits of vertices over $a$.
\begin{observation}
Let $M$ be a simple primitive homogeneous 3-graph in which $R$ is the only nonforking relation. Then $R(a)$ is isomorphic to $M$.
\label{IsoToM}
\end{observation}
\begin{proof}
The $R$-neighbourhood of any vertex $a$, $R(a)$, is homogeneous in the language $\{R,S,T\}$, so it suffices to prove that ${\rm Age}(M)={\rm Age}(R(a))$. Take any finite structure $A$ embedded into $M$, and find a vertex $v$ satisfying $v\indep[\varnothing]A$. Then for each $a\in A$, $R(v,a)$ holds, as the other two relations divide over $\varnothing$. Therefore $A$ embeds in $R(v)$, and $R(v)\cong R(a)$ by the transitivity of ${\rm Aut}(M)$, so ${\rm Age}(R(a))={\rm Age}(M)$.
\end{proof}
\begin{remark}
In particular, the argument in Observation \ref{IsoToM} implies that the $R$-diameter of any simple primitive homogeneous 3-graph in which $R$ is the only nonforking relation is 2.
\end{remark}
\begin{definition}
The \emph{diameter triple} of a 3-graph $M$ is $$D(M)=({\rm Diam}_R(M), {\rm Diam}_S(M),{\rm Diam}_T(M))$$
\end{definition}
This leaves us with four possible diameter triples for a primitive homogeneous simple 3-graph with only one nonforking relation: (2,3,3), (2,3,2), (2,2,3), (2,2,2).
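As an orienting example (the structure itself is not among those analysed in this section), consider the generic homogeneous 3-graph, the Fra\"iss\'e limit of all finite 3-graphs: every triangle is realised, so each vertex distinct from $a$ lies at $X$-distance at most 2 from $a$ for every $X\in\{R,S,T\}$, and
\[
D(M)=(2,2,2).
\]
Larger entries in a diameter triple thus record forbidden triangles: for instance, ${\rm Diam}_S(M)=3$ holds precisely when $S^2(a)$ misses one of the orbits $X(a)$, that is, when the triangle $SSX$ is forbidden.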
\begin{observation}
Let $M$ be a simple primitive homogeneous 3-graph in which $R$ is the only nonforking relation. If $S\sim T$, then at least one of the following conclusions holds:
\begin{enumerate}
\item{The predicates $S$ and $T$ are unstable in $S(a)$ and we can embed infinite $S$-cliques in both $S(a)$ and $T(a)$.}
\item{The predicates $S$ and $T$ are unstable in $T(a)$ and we can embed infinite $T$-cliques in both $S(a)$ and $T(a)$.}
\end{enumerate}
\label{ObsTwoOutcomes}
\end{observation}
\begin{proof}
We have either $S\sim^ST$ or $S\sim^TT$ by Proposition \ref{PropCliques}. The conclusion follows.
\end{proof}
\subsection{Simple homogeneous 3-graphs with $D(M)=(2,3,3)$}
\begin{proposition}\label{PropDiam233}
Let $M$ be a primitive simple homogeneous 3-graph in which all relations are unstable and only one is nonforking, and which satisfies $D(M)=(2,3,3)$. If $S^3(a)=R(a)$ and $R\sim S$, then $T^3(a)=R(a)$.
\end{proposition}
\begin{proof}
It follows from $S^3(a)=R(a)$ that $S(a)$ is $R$-free, so $SSR$ is a forbidden triangle. Suppose for a contradiction that $T^3(a)=S(a)$. Then $R(a)=T^2(a)$ and $T(a)$ is $S$-free, so $TTS$ is a forbidden triangle. By Remark \ref{RmkCompatibilityGraph}, we have $R\sim T$ or $S\sim T$.
We cannot have $S\sim T$, as this implies (by Proposition \ref{PropCliques}) one of $S\sim^ST,S\sim^TT$, and an indiscernible half-graph witnessing either of these compatibilities embeds $TTS$. Therefore, we must have $R\sim^TT$ and $R\not\sim^ST$. This implies that $T(a)$ is an unstable homogeneous $R,T$-graph, isomorphic to the Random Graph by Remark \ref{RmkSimpleGraphs}. But that contradicts Proposition \ref{PropCliques}.
\end{proof}
\begin{observation}\label{ObsSTUnstable}
Let $M$ be a simple homogeneous 3-graph and suppose that ${\rm Age}(\Gamma^{S,T})\subset{\rm Age}(M)$. Then $S$ and $T$ are unstable in $S(a)$ and $T(a)$.
\end{observation}
\begin{proof}
Any countable $S,T$-graph can be embedded in $M$; in particular, so can the graph consisting of a half-graph $H$ for $S,T$ together with an additional vertex $v$ satisfying $T(v,h)$ for all $h\in H$. Then $H\subset T(v)$, which proves the result for $T(a)$; the same argument, with $S(v,h)$ in place of $T(v,h)$, yields the corresponding statement for $S(a)$.
\end{proof}
\begin{proposition}\label{Prop233SnotsimT}
Let $M$ be a primitive simple homogeneous 3-graph in which all relations are unstable and only one is nonforking, and which satisfies $D(M)=(2,3,3)$. If $S^3(a)=R(a)$ and $R\sim S$, then $S\not\sim T$.
\end{proposition}
\begin{proof}
We know from Proposition \ref{PropDiam233} that under these hypotheses $R(a)=T^3(a)=S^3(a)$, so $T(a)$ and $S(a)$ are $R$-free. If we had $S\sim T$, then $S(a)$ and $T(a)$ are both isomorphic to the Random Graph, by the Lachlan-Woodrow Theorem and Observation \ref{ObsSTUnstable}. Consider any $b\in S(a)$; the set $R(b)$ should be isomorphic to $M$, by homogeneity and Observation \ref{IsoToM}. Notice that, since the $R$-diameter of $M$ is 2, we have $R(b)\cap R(a)\neq\varnothing$.
\begin{claim}
$R(b)\cap T(a)\neq\varnothing$.
\end{claim}
\begin{proof}
We know that $S(a)$ is isomorphic to the Random Graph in predicates $S,T$. Therefore, all triangles with edges in $S,T$ are realised in $M$. From the diameter triple we get that the triangles $SSR,TTR$ are forbidden. From this it follows by primitivity that $RST\in{\rm Age}(M)$, as otherwise $S\vee T$ would define an equivalence relation on $M$. Therefore, $R(b)\cap T(a)\neq\varnothing$.
\end{proof}
Therefore $R(b)$ consists of a nonempty subset of $T(a)$ and a nonempty subset of $R(a)$. Notice that as the $T$-diameter of $M$ is 3 and $R(a)=T^3(a)$, we do not have $T$-edges from $T(a)$ to $R(a)$, so the $T$-graph on $R(b)$ is disconnected, contradicting the primitivity of $M$ by Observation \ref{IsoToM}.
\end{proof}
\setcounter{case}{0}
\begin{lemma}
There are no primitive simple unstable homogeneous 3-graphs in which only $R$ is nonforking and $D(M)=(2,3,3)$.
\label{LemmmaNo233}
\end{lemma}
\begin{proof}
There are two main cases, depending on whether $R$ is stable or unstable.
\begin{case}Suppose that $R$ is a stable nonforking relation and $S,T$ are unstable and forking. In that case $S\sim T$ is the only edge in the compatibility graph, and we cannot have $S\sim^RT$ by Proposition \ref{PropCliques}, so we must have $S\sim^TT$ or $S\sim^ST$. In either case we obtain $R(a)=S^3(a)=T^3(a)$, so each of $S(a)$ and $T(a)$ is $R$-free and unstable, and therefore isomorphic to the Random Graph. Now take $b\in T(a)$ and consider $R(b)$. Since $T(a)$ is $R$-free, we must have $R(b)=(R(b)\cap S(a))\cup (R(b)\cap R(a))$, and each of the intersections is nonempty because the $R$-diameter is 2. The set $R(b)$ should be isomorphic to the primitive structure $M$ by homogeneity and Observation \ref{IsoToM}, but ${\rm Diam}_S(M)=3$ implies that there are no $S$-edges from $R(b)\cap S(a)$ to $R(b)\cap R(a)$, so the $S$-graph of $R(b)$ is disconnected and $R(b)$ is imprimitive, a contradiction.
\end{case}
\begin{case}If $R$ is unstable, we make the following claims:
\begin{claim} $R\not\sim S$\label{Claim233RsimS}
\end{claim}
\begin{proof}
By Proposition \ref{PropDiam233}, $R(a)=S^3(a)=T^3(a)$, so $S(a)$ and $T(a)$ are $R$-free. We claim that $S(a)$ and $T(a)$ are stable graphs. The reason is that the only simple unstable homogeneous graph is the Random Graph, but if any of $S(a), T(a)$ were isomorphic to the Random Graph, then we would be able to find witnesses to $S\sim T$, which is impossible by Proposition \ref{Prop233SnotsimT}. By the Lachlan-Woodrow Theorem, all the infinite stable graphs realising more than one 2-type are imprimitive, so ${\rm Aut}(M/a)$ acts imprimitively on $S(a)$ and $T(a)$ and therefore $S$ is an equivalence relation on $S(a)$ and $T$ is an equivalence relation on $T(a)$ (this follows from Proposition \ref{PropMultipartite}).
It follows from the stability of $S(a)$ and $T(a)$ that $R\sim^TS$ and $R\sim^ST$, so $S$ has infinitely many classes in $S(a)$, and $T$ has infinitely many classes in $T(a)$.
\[
\includegraphics[scale=0.8]{StableForking2.pdf}
\]
Then we argue as follows: consider $b\in S(a)$. Its $T$-neighbourhood should be isomorphic to $T(a)$, but $T$ is not an equivalence relation on $T(b)$, since $T(b)\cap T(a)$ consists of infinite $S$-cliques separated by $T$, so we have triangles $TTS$ in $T(b)$.
\end{proof}
\begin{claim}
$S^3(a)\neq R(a)$\label{Claim233S3}
\end{claim}
\begin{proof}
From Claim \ref{Claim233RsimS}, we know that in this case $R\not\sim S$, so by Remark \ref{RmkCompatibilityGraph} we have $R\sim T$ and $S\sim T$. By Observation \ref{ObsTwoOutcomes}, at least one of $S(a),T(a)$ is isomorphic to the Random Graph, so both of them must be isomorphic to the Random Graph. Additionally, $T^2(a)=S(a)$, so $T^3(a)=R(a)$.
Take any $b\in S(a)$. The $R$-neighbourhood of $b$ consists of a nonempty subset $X$ of $T(a)$ and a nonempty subset $Y$ of $R(a)$. From $T^3(a)=R(a)$ it follows that there are no $T$-edges from $X$ to $Y$, so the $T$-graph on $R(b)$ is disconnected and therefore $R(b)$ is imprimitive, contradicting the primitivity of $M$ by homogeneity and Observation \ref{IsoToM}.
\end{proof}
\end{case}
From Claim \ref{Claim233S3}, we get that $S^2(a)=R(a)$, so $S(a)$ is $T$-free. By the Lachlan-Woodrow Theorem, $S(a)$ is a stable graph: the only unstable homogeneous graph is the Random Graph, which embeds infinite cliques in both colours, whereas $S(a)$ does not embed infinite $R$-cliques by Proposition \ref{PropCliques}.
Our first claim is that we cannot have $R\sim S$ under these hypotheses. The horizontal cliques in any indiscernible half-graph witnessing $R\sim S$ cannot be of type $R$ by Proposition \ref{PropCliques}, and cannot be of type $T$ because $S(a)$ is $T$-free, so they must be of type $S$. But this implies that $S(a)$ is unstable, contradicting the last paragraph.
Therefore we have $R\sim T$ and $S\sim T$ by Remark \ref{RmkCompatibilityGraph}. In any indiscernible half-graph witnessing $S\sim T$, the horizontal cliques cannot be of type $R$ by Proposition \ref{PropCliques}, and cannot be of type $T$ because $S(a)$ is $T$-free. But a horizontal clique of type $S$ would make $S$ unstable in $S(a)$, and then $S(a)$ would be isomorphic to the Random Graph in the predicates $R,S$, which is impossible since there are no infinite $R$-cliques in $S(a)$. This contradiction completes the proof.
\end{proof}
\subsection{Simple homogeneous 3-graphs with $D(M)=(2,3,2)$ or $D(M)=(2,2,3)$}
We can eliminate these two cases simultaneously because of the symmetry of the hypotheses on the predicates $S$ and $T$.
First, the easy case:
\begin{proposition}
There are no primitive simple unstable 3-graphs in which $R$ is stable and nonforking and $S,T$ are unstable and forking, with $D(M)=(2,3,2)$ or $D(M)=(2,2,3)$.
\end{proposition}
\begin{proof}
By the same argument as in Case 1 of Lemma \ref{LemmmaNo233}.
\end{proof}
\begin{observation}\label{Obs232SRfree}
Let $M$ be a primitive homogeneous simple 3-graph in which all relations are unstable and only $R$ is nonforking. If $D(M)=(2,3,2)$, then $R(a)=S^3(a)$.
\end{observation}
\begin{proof}
Suppose for a contradiction that $R(a)=S^2(a)$. Then $S(a)$ is $T$-free. We cannot have $S\sim T$ in this case: as usual, we cannot have horizontal $R$-cliques; and since $SST$ is forbidden, we cannot have horizontal cliques of colour $S$ or $T$, either. Remark \ref{RmkCompatibilityGraph} then forces $R\sim S$ and $R\sim T$.
Let us analyse a set of witnesses for $R\sim S$. The horizontal cliques are forced to be of colour $S$ because $SST$ is forbidden; but this implies that $S(a)$ is an unstable $R,S$-graph, isomorphic to the Random Graph by the Lachlan-Woodrow Theorem. This contradicts Proposition \ref{PropCliques}.
\end{proof}
\begin{observation}
Suppose that $M$ is a simple unstable homogeneous 3-graph such that ${\rm Age}(\Gamma^{S,T})\subset{\rm Age}(M)$. If $M$ embeds $K_n^R$ but not $K_{n+1}^R$, then $M$ is imprimitive and isomorphic to one of $C^R(\Gamma), \Gamma[K_n^R]$, or $K_n^R[\Gamma]$.
\label{ObsUnstableNoRCliques}
\end{observation}
\begin{proof}
Note first that if $M$ is primitive, then $R$ is a forking relation, because the Independence Theorem guarantees the existence of infinite monochromatic cliques for nonforking relations. From ${\rm Age}(\Gamma^{S,T})\subset{\rm Age}(M)$ we get that there are infinite $S$- and $T$-cliques in $M$, that $S,T$ are unstable, and that both $S(a)$ and $T(a)$ embed infinite $S$- and $T$-cliques. At least one of $S, T$ is nonforking, so both of them must be nonforking, since both neighbourhoods embed infinite $S$- and $T$-cliques (and therefore a Morley sequence over $\varnothing$). This is impossible because by primitivity $R$ is non-algebraic, so $R(a)$ contains an infinite monochromatic clique. But $R(a)$ does not contain infinite $S$- or $T$-cliques because $R$ divides, and it clearly does not embed any infinite $R$-cliques. Therefore, $M$ is imprimitive.
We know that $S\sim^ST$ and $S\sim^TT$ hold, so $S,T,S\vee R, T\vee R$ are not equivalence relations. If $R$ is an equivalence relation, then its classes are finite, isomorphic to $K_n^R$, and $M$ is isomorphic to $C^R(\Gamma)$ or $\Gamma[K_n^R]$ by Corollary \ref{CorFiniteClasses}.
And if $S\vee T$ is an equivalence relation, then since there are no infinite $R$-cliques in $M$ we know that $S\vee T$ has finitely many classes, each of which is an infinite unstable graph. Therefore, $M$ is isomorphic to $K_n^R[\Gamma]$.
\end{proof}
\begin{proposition}
Let $M$ be a primitive homogeneous simple 3-graph in which all relations are unstable and only one is nonforking. If $D(M)=(2,3,2)$ and $S\sim^ST$, then $S(a)$ is isomorphic to the Random Graph and $T(a)$ is imprimitive, isomorphic to $C^R(\Gamma)$ or $K_n^R[\Gamma]$ for some $n\geq2$.
\label{PropStructure232}
\end{proposition}
\begin{proof}
From Observation \ref{Obs232SRfree}, we know that $S(a)$ is $R$-free. From $S\sim^ST$, we get that $S(a)$ is an unstable homogeneous graph, and therefore isomorphic to the Random Graph. It follows from Observation \ref{ObsIsoRG} that $T(a)$ is also unstable, and ${\rm Age}(\Gamma^{S,T})\subset{\rm Age}(T(a))$. We know that $R$ is realised in $T(a)$ because the $T$-diameter of $M$ is 2, but $T(a)$ does not embed infinite $R$-cliques. By Observation \ref{ObsUnstableNoRCliques}, $T(a)$ is isomorphic to $C^R(\Gamma)$ or $K_n^R[\Gamma]$, since $SSR$ is a forbidden triangle.
\end{proof}
\begin{proposition}
There are no primitive homogeneous simple 3-graphs in which all relations are unstable and only one is nonforking with $D(M)=(2,3,2)$ and $S\sim T$.
\label{Prop232Graph}
\end{proposition}
\begin{proof}
From Proposition \ref{PropStructure232}, we know that $S(a)$ is isomorphic to the Random Graph and $T(a)$ is imprimitive, isomorphic to $C^R(\Gamma)$ or to $K_n^R[\Gamma]$.
Consider any $b\in S(a)$. By Observation \ref{Obs232SRfree}, there are no $S$-edges from $S(a)$ to $R(a)$, so $S(b)\setminus\{a\}\subset S(a)\cup T(a)$. Here our proof divides into two cases:
\begin{case}
If $T(a)\cong K_n^R[\Gamma]$, then $S(b)\setminus\{a\}$ meets only one $S\vee T$-class in $T(a)$. Define an equivalence relation $E$ on $S(a)$ by letting $E(b,b')$ hold for $b,b'\in S(a)$ if and only if $S(b)$ and $S(b')$ meet the same $S\vee T$-class in $T(a)$. This equivalence relation is invariant under ${\rm Aut}(M/a)$, but cannot be defined over $a$ without quantifiers, since the structure of $S(a)$ is that of the Random Graph. This contradicts homogeneity.
\end{case}
\begin{case}
If $T(a)\cong C^R(\Gamma)$, then $S(b)$ contains at most one element from each $R$-class. First note that the indiscernible half-graphs witnessing $S\sim^ST$ imply that $S(b)\cap T(a)$ is an infinite set containing infinite $S$- and $T$-cliques, so it is an infinite and co-infinite subset of $T(a)$. Similarly, an indiscernible half-graph witnessing $S\sim^TT$ implies that $T(b)\cap T(a)$ is infinite. Let $X$ be the set of vertices in $T(a)$ which are $R$-equivalent to an element of $S(b)\cap T(a)$.
\begin{claim}
$R(b)\cap T(a)\neq\varnothing$.
\end{claim}
\begin{proof}
By Remark \ref{RmkCompatibilityGraph}, we must have $R\sim S$ or $R\sim T$. Note that $R\sim^TS$ and $R\sim^ST$ both imply that $R(b)\cap T(a)\neq\varnothing$. Moreover, they imply that $R(b)\cap T(a)$ is infinite.
It suffices then to prove that we cannot have $R\sim^SS$ or $R\sim^TT$. The first option is impossible as $RSS$ is a forbidden triangle. The second option is impossible as $R$ is stable in $T(a)$.
\end{proof}
\begin{claim}
We have $X=T(b)\cap T(a)$ or $X=R(b)\cap T(a)$. The set $T(a)\setminus(X\cup S(b))$ is a union of $R$-classes in $T(a)$.
\label{ClaimThreePieces}
\end{claim}
\begin{proof}
We know that $X\subset T(a)$ is disjoint from $S(b)\cap T(a)$, so it is a subset of $(R(b)\cup T(b))\cap T(a)$.
Suppose for a contradiction that $X\cap T(b)\neq\varnothing$ and $X\cap R(b)\neq\varnothing$. Take $x\in X\cap R(b)$ and $y\in X\cap T(b)$, and let $x',y'$ be the elements in $S(b)\cap T(a)$ to which $x,y$ are $R$-equivalent in $T(a)$.
\[
\includegraphics[scale=0.8]{StableForking3.pdf}
\]
Then ${\rm qftp}(x'/ab)={\rm qftp}(y'/ab)$, but their complete types differ as $x'$ satisfies the formula $\psi(z)=\exists c(R(b,c)\wedge T(a,c)\wedge R(c,z))$, which $y'$ does not satisfy. The second conclusion follows easily from this and $T(a)=(S(b)\cup T(b)\cup R(b))\cap T(a)$.
\end{proof}
Consider now $T(b)$ for the same $b\in S(a)$. Observation \ref{Obs232SRfree} implies that there are no $R$-edges from $S(a)$ to $R(a)$. From Claim \ref{ClaimThreePieces} we get two cases:
\begin{subcase}
If $T(b)\cap T(a)$ is a union of $R$-classes, then all the elements of $T(b)\cap T(a)$ are already paired into $R$-classes (over $b$), so the elements completing the $R$-classes over $b$ of the $R$-free set $T(b)\cap S(a)$ must lie in $T(b)\cap R(a)$. The set $T(b)\cap S(a)$ is isomorphic to the Random Graph, so $T(b)\cap (R(a)\cup S(a))$ is a homogeneous imprimitive unstable 3-graph with finite $R$-classes. By Theorem \ref{ThmCGamma}, it should be isomorphic to $C(\Gamma)$, but this is impossible as there are no $S$-edges from $T(b)\cap S(a)$ to $T(b)\cap R(a)$.
\end{subcase}
\begin{subcase}
If $R(b)\cap T(a)$ is a union of $R$-classes, then we have two $R$-free parts of $T(b)$, namely $T(b)\cap S(a)$ and $T(b)\cap T(a)$; the third part, $T(b)\cap R(a)$, is not $R$-free. This follows from the same argument as in Claim \ref{ClaimThreePieces}: one of $T(b)\cap R(a)$, $T(b)\cap S(a)$, and $T(b)\cap T(a)$ is a union of $R$-classes over $b$, and it can only be $R(a)\cap T(b)$. Therefore the $R$-free parts are paired by $R$.
There are no $S$-edges from $S(a)$ to $R(a)$, because the $S$-diameter of $M$ is 3 and $R(a)=S^3(a)$, and there are no $R$-edges from $T(b)\cap S(a)$ to $T(b)\cap R(a)$. This implies that $TTR$ embeds in $T(b)$, contradicting $T(b)\cong C(\Gamma)$.
\end{subcase}
\end{case}
This concludes our proof.
\end{proof}
\setcounter{case}{0}
\setcounter{subcase}{0}
For the next result, we need the following definition:
\begin{definition}\label{DefPseudoplane}
A \emph{pseudoplane} is an incidence structure of points and lines which satisfies the following axioms:
\begin{enumerate}
\item{There are infinitely many points on each line.}
\item{There are infinitely many lines through each point.}
\item{Any two lines intersect in only finitely many points.}
\item{Any two points lie on only finitely many lines.}
\end{enumerate}
If $M$ is a structure and $\mathcal L$ is a definable family of infinite subsets of $M$, then $P=(M,\mathcal L)$ is a \emph{weak} pseudoplane if the following conditions are satisfied:
\begin{enumerate}
\item{If $S\neq T\in\mathcal L$, then $|S\cap T|<\omega$.}
\item{Each $p\in M$ lies in infinitely many elements of $\mathcal L$.}
\end{enumerate}
The weak pseudoplane $(M,\mathcal L)$ is \emph{homogeneous} (binary) if the underlying structure $M$ is homogeneous with respect to its (binary) language.
\end{definition}
The following theorem was proved by Simon Thomas in \cite{thomas1998nonexistence}.
\begin{theorem}
There is no binary homogeneous weak pseudoplane.
\label{ThmThomas}
\end{theorem}
\begin{lemma}
There are no primitive simple homogeneous 3-graphs $M$ in which all relations are unstable and only one is nonforking with $D(M)=(2,3,2)$.
\label{LemmaNo232}
\end{lemma}
\begin{proof}
Suppose for a contradiction that $M$ is a primitive simple homogeneous 3-graph with $D(M)=(2,3,2)$. We know from Proposition \ref{Prop232Graph} that $S\not\sim T$.
From $S\not\sim T$ it follows, in particular, that we cannot find half-graphs for $S$ within $S(a)$, so $S$ is stable in $S(a)$ (as is $T$), and $S(a)$ is a stable homogeneous graph realising $S$ and $T$ (indiscernible witnesses to $R\sim^ST$ imply that there are infinite $S$-cliques in $M$, and therefore in $S(a)$; similarly, $R\sim^TS$ implies that $S(a)$ embeds infinite $T$-cliques). By the Lachlan-Woodrow Theorem, $S(a)$ is imprimitive. From the fact that $T$ forms infinite cliques in $S(a)$ and Proposition \ref{PropMultipartite}, we derive that $S$ is an equivalence relation on $S(a)$ with infinitely many infinite classes.
Given two vertices $c,c'\in M$ with $S(c,c')$, define the \emph{line} $\ell(c,c')$ as $\{c\}\cup (c'/S^c)$, where $c'/S^c$ is the imprimitivity block of $c'$ in $S(c)$. It is clear that $\ell(c,c')$ is the maximal $S$-clique in $M$ containing $c$ and $c'$. Let $\mathcal L$ be the set $\{\ell(c,c'):c,c'\in M, S(c,c')\}$.
\begin{claim}
$(M,\mathcal L)$ is a weak pseudoplane (see Definition \ref{DefPseudoplane}).
\end{claim}
\begin{proof}
We need to verify two conditions:
\begin{enumerate}
\item{Let $\ell,\ell'$ be distinct elements of $\mathcal L$. Then $|\ell\cap\ell'|\leq1$: if $c,c'$ were two distinct common points, then $\ell=\ell(c,c')=\ell'$, since $\ell(c,c')$ is the unique maximal $S$-clique containing both $c$ and $c'$.}
\item{The second condition in the definition of a weak pseudoplane (see Definition \ref{DefPseudoplane}) follows trivially from the fact (proved above) that $S(a)$ contains infinitely many infinite $S$-classes.}
\end{enumerate}
\end{proof}
We have reached a contradiction by Theorem \ref{ThmThomas}.
\end{proof}
The same argument, substituting $T$ for $S$, proves that there are no primitive homogeneous simple 3-graphs in which all three predicates are unstable and only one is nonforking with $D(M)=(2,2,3)$. Therefore, the only possibility for such a 3-graph is to have $D(M)=(2,2,2)$.
\subsection{Simple homogeneous 3-graphs with $D(M)=(2,2,2)$.}
This subsection deals with the most delicate cases in this chapter. The strategy is to prove $S\not\sim T$ first (Proposition \ref{PropOneIsStable} to Lemma \ref{lemmaST}). This reduces the compatibility graph to $R\sim^ST, R\sim^TS$, and the only possibility is for both $S(a)$ and $T(a)$ to be of the form $K_m^i[K_n^j[K_o^k]]$, in which only the subindex corresponding to $R$ is finite. Most of the cases can be dealt with directly using easy results from Chapter \ref{ChapGenRes}.
\begin{proposition}
Let $M$ be a simple unstable 3-graph such that $M$ does not embed infinite $R$-cliques. Then one of $R,S,T$ is stable. Moreover, the stable relation is either an equivalence relation or the complement of an equivalence relation.
\label{PropOneIsStable}
\end{proposition}
\begin{proof}
Suppose for a contradiction that all predicates are unstable; we will prove that any such $M$ would be imprimitive, so one of $R,S,T$ is an equivalence relation or the complement of an equivalence relation and is therefore stable.
If ${\rm Aut}(M)$ acts primitively on $M$, then $R(x,a)$ divides over $\varnothing$, as otherwise the Independence Theorem and primitivity guarantee that $M$ embeds infinite $R$-cliques. By Lemma \ref{PropNoSimpleOneDividing}, one of $S$, $T$ divides. Let us say without loss of generality that $S$ divides and $T$ is nonforking. By Lemmas \ref{LemmmaNo233} and \ref{LemmaNo232}, we have $D(M)=(2,2,2)$, so for any $a\in M$ each of $R(a),S(a),T(a)$ is a homogeneous simple 3-graph (i.e.,\, all relations in the language are realised in each of these sets). By Proposition \ref{PropTwoInfiniteCliques}, both $S$ and $T$ form infinite cliques in $M$. By Proposition \ref{PropCliques}, $S(a)$ and $R(a)$ do not embed infinite $T$-cliques.
\begin{claim}
$S\not\sim T$.
\label{ClaimSNotSimT}
\end{claim}
\begin{proof}
Suppose for a contradiction that $S\sim T$ holds. Then we must have $S\sim^ST$ and $S\not\sim^TT$ because there are no infinite $T$-cliques in $S(a)$. From this it follows that $S$ and $T$ are unstable in $S(a)$, so $S(a)$ is a homogeneous simple unstable 3-graph not embedding infinite $R$- or $T$-cliques. By Proposition \ref{PropTwoInfiniteCliques}, $S(a)$ is imprimitive.
By the instability of $S$, $T$ in $S(a)$, we know that $S$, $T$, $R\vee S$, $R\vee T$ do not define equivalence relations on $S(a)$, so this leaves us with two options for the equivalence relation on $S(a)$: it could be defined by $R$ or by $S\vee T$. The latter case is impossible because each class would be an infinite homogeneous simple unstable $ST$-graph, isomorphic to the Random Graph by the Lachlan-Woodrow Theorem, contradicting the fact that $S(a)$ does not embed infinite $T$-cliques.
Similarly, if $R$ defines an equivalence relation on $S(a)$ then either ${\rm Aut}(M/a)$ acts 2-transitively on $S(a)/R$, or it acts transitively, but not 2-transitively on $S(a)/R$. In the former case $S(a)\cong C(\Gamma^{S,T})$ and we can find infinite $T$-cliques in $S(a)$, and in the latter $M$ interprets a Henson graph (cf. the proof of Observation \ref{ObsInterpretedGraph}), contradicting simplicity.
We conclude $S\not\sim T$.
\end{proof}
It follows from Claim \ref{ClaimSNotSimT} and Remark \ref{RmkCompatibilityGraph} that $R\sim S$ and $R\sim T$ hold. By the same argument as before, we have $R\sim^SS$ and $R\not\sim^TS$, $R\sim^ST$ and $R\not\sim^TT$. From $R\sim^SS$ we get that $R,S$ are unstable in $S(a)$, so $S(a)$ is a simple unstable 3-graph not embedding infinite $R$- or $T$-cliques. By Proposition \ref{PropTwoInfiniteCliques}, $S(a)$ is imprimitive and one of $T,R\vee S$ defines an equivalence relation on $S(a)$. As before, $R\vee S$ cannot define an equivalence relation because its classes would be isomorphic to the Random Graph, contradicting that $R$ does not form infinite cliques. Therefore, $T$ defines an equivalence relation with finite classes on $S(a)$ and the same argument from the proof of Claim \ref{ClaimSNotSimT} proves the impossibility of this.
We have reached a contradiction. We conclude that $M$ is imprimitive, so one of $R,S,T$ is either an equivalence relation or its complement. In any case, one of the relations is stable.
\end{proof}
\begin{proposition}\label{PropStrST}
Let $M$ be a primitive homogeneous simple 3-graph with $D(M)=(2,2,2)$ in which $S\sim T$ holds, and $S,T$ are forking relations. Then for any $a$ the sets $S(a)$, $T(a)$ are imprimitive and each is isomorphic to one of $C(\Gamma^{ST}), \Gamma^{ST}[K_n^R]$, or $K_n^R[\Gamma^{ST}]$. In particular ${\rm Age}(\Gamma^{ST})\subset{\rm Age}(M)$.
\end{proposition}
\begin{proof}
If $S\sim T$ holds, then, as $R$ is nonforking, we have that at least one of $S\sim^ST$ and $S\sim^TT$ holds. Suppose that $S\sim^ST$ holds, so $S,T$ are unstable in $S(a)$. Then $S(a)$ is an unstable 3-graph not embedding infinite $R$-cliques and with witnesses for $S\sim^ST$, so it is imprimitive by Proposition \ref{PropImprimitivity2}. If $R$ defines an equivalence relation on $S(a)$, then $S(a)$ is isomorphic to $C(\Gamma^{ST})$ or to $\Gamma^{ST}[K_n^R]$. And if $S\vee T$ is an equivalence relation, then $S(a)\cong K_n^R[\Gamma^{ST}]$.
Notice that any of these conclusions implies ${\rm Age}(\Gamma^{ST})\subset{\rm Age}(M)$, so $S\sim^TT$ also holds and we can carry out the same argument for $T(a)$.
\end{proof}
\begin{proposition}
Let $M$ be a primitive simple 3-graph in which $R,S,T$ are unstable, $R(x,a)$ is nonforking over $\varnothing$, $S,T$ are forking, and $D(M)=(2,2,2)$. Then $R\not\sim^SS$ and $R\not\sim^TT$.
\label{PropRSRT}
\end{proposition}
\begin{proof}
We will prove only $R\not\sim^SS$. Suppose for a contradiction that $R\sim^SS$ holds, so $R$ and $S$ are compatible in $S(a)$. Then $S(a)$ is imprimitive by Proposition \ref{PropImprimitivity1}, as it does not embed infinite $R$-cliques, so one of $R\vee S$ or $T$ is an equivalence relation on $S(a)$. The relation $R\vee S$ is not an equivalence relation on $S(a)$, as its classes would be isomorphic to the Random Graph. Therefore $T$ is an equivalence relation on $S(a)$.
The $T$-classes cannot be finite by Theorem \ref{ThmCGamma} and Corollary \ref{CorFiniteClasses}. And if ${\rm Aut}(M/a)$ does not act 2-transitively on $S(a)/T$, then the same argument as in Claim \ref{ClaimInterpretsHenson} proves that $M$ interprets a Henson graph, contradicting the simplicity of $M$. Since $S$ forms infinite cliques, $T$ must have infinitely many classes. But this contradicts the fact that $S(a)$ does not embed infinite $R$-cliques, by Proposition \ref{PropUnstabInfCliques}.
The proof for $R\not\sim^TT$ is similar.
\end{proof}
Proposition \ref{PropStrST} leaves us with six cases to analyse under $S\sim T$ (nine in principle, but we can eliminate three of them by the symmetry of the hypotheses on $S$ and $T$), listed in Table \ref{TableCases}.
\begin{table}[!h]
\centering
\caption{Possible structures under $S\sim T$}
\begin{tabular}{ccc}
\hline\hline
Case & $S(a)$ & $T(a)$ \\ [0.5ex]
\hline
I & $C(\Gamma^{ST})$ & $\Gamma^{ST}[K_n^R]$ \\
II & $C(\Gamma^{ST})$ & $K_n^R[\Gamma^{ST}]$ \\
III & $\Gamma^{ST}[K_n^R]$ & $K_m^R[\Gamma^{ST}]$ \\
IV & $C(\Gamma^{ST})$ & $C(\Gamma^{ST})$ \\
V & $\Gamma^{ST}[K_n^R]$ & $\Gamma^{ST}[K_m^R]$ \\
VI&$K_n^R[\Gamma^{ST}]$&$K_m^R[\Gamma^{ST}]$\\[1ex]
\hline
\label{TableCases}
\end{tabular}
\end{table}
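The count behind the table (nine assignments of structures to $S(a)$ and $T(a)$, six after using the symmetry of the hypotheses on $S$ and $T$) can be replayed mechanically; the string labels below are informal names for the three isomorphism types, not notation from the text.

```python
from itertools import combinations_with_replacement

# The three possible isomorphism types for S(a) and T(a) given by
# Proposition PropStrST (informal labels).
forms = ["C(Gamma^ST)", "Gamma^ST[K_n^R]", "K_n^R[Gamma^ST]"]

ordered = [(s, t) for s in forms for t in forms]            # nine in principle
unordered = list(combinations_with_replacement(forms, 2))   # after S/T symmetry

print(len(ordered), len(unordered))  # 9 6
```

The six unordered pairs are exactly the rows of the table: three mixed cases and three diagonal ones.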
We can eliminate Case I easily by considering any $c\in T(a)$, and noticing that its $S$-neighbourhood contains a copy of $K_{2,2}$ in which the two sides of the partition are $R$-edges and the crossing edges are of colour $S$ (or $T$). This contradicts homogeneity because $C(\Gamma)$ does not embed that graph. Case III can be eliminated in a similar manner: for any $b\in S(a)$ the set $T(b)$ contains complete bipartite graphs of the same kind (i.e.,\, two disjoint $R$-cliques with all other edges of colour $S$), and this does not embed in $T(a)$ (these arguments apply whether $R$ is stable or not). Case II is more complicated.
\begin{proposition}
There are no primitive simple homogeneous 3-graphs $M$ in which $R$ is the only nonforking relation, $S\sim T$, $R\sim S$ (if $R$ is unstable), $S(a)\cong C(\Gamma^{ST})$ and $T(a)\cong K_n^R[\Gamma^{ST}]$.
\label{PropCaseII}
\end{proposition}
\begin{proof}
Suppose for a contradiction that $M$ satisfies all the conditions in the statement. We will prove that there is a formula with the TP2.
Consider any $c\in T(a)$. Then $S(c)$ consists of an $R$-free subset of $T(a)$, isomorphic to $\Gamma^{ST}$, and two non-empty subsets $X\subset S(a), Y\subset R(a)$. The sets $S(c)\cap T(a)$ and $S(c)\cap S(a)$ are nonempty because $D(M)=(2,2,2)$, so the triangles $SST, TTS$ are in ${\rm Age}(M)$. Note that $S(a)\cong C(\Gamma^{ST})$ implies that the triangle $RST$ embeds into $M$, so $S(c)\cap R(a)\neq\varnothing$.
By homogeneity, $S(c)\cong S(a)$, so one of $X$, $Y$ is a union of $R$-classes in $S(c)$, while the other is $R$-free (homogeneity excludes the possibility of one of $X,Y$ containing both a full $R$-class and an unpaired element). We eliminate these cases in the following claims.
\begin{claim}\label{Claimtp21}
If $S(c)\cap R(a)$ is infinite, then it is not a union of $R$-classes over $c$.
\end{claim}
\begin{proof}
Suppose for a contradiction that $S(c)\cap R(a)$ is a union of $R$-classes over $c$. The set $S(c)\cap T(a)$ is infinite and isomorphic to the Random Graph, and is in definable bijection (via $R$) with $S(c)\cap S(a)$.
Since $S(c)\cap R(a)$ is a union of $R$-classes and $S(c)\cap S(a)$ is $R$-free, then the following structure on four vertices is a minimal forbidden configuration.
\[
\includegraphics[scale=0.7]{StableForking5.pdf}
\]
Clearly, ${\rm Age}(\Gamma^{S,T})\subset{\rm Age}(M)$, so in particular we can find witnesses $(a_ib_i)_{i\in\omega}$ to $T\sim^T_TS$. The sequence $\Phi=(a_ib_i)_{i\in\omega}$ also witnesses that the formula $R(x,a)\wedge S(x,b)$ 2-divides over $\varnothing$.
Since $S(c)\cap R(a)$ is a union of $R$-classes over $c$, we know by the structure of $S(a)$ and homogeneity that there are no $R$-edges from $S(c)\cap R(a)$ to $S(c)\cap S(a)$, and there are edges of colours $S$ and $T$ between $S(c)\cap R(a)$ and $S(c)\cap S(a)$ (this follows from the fact that the triangles $SSR$, $TTR$ are forbidden in $C(\Gamma)$).
\[
\includegraphics[scale=0.7]{StableForking6.pdf}
\]
(Only $R$-edges are shown in the diagram.)
Given an element $p\in S(c)\cap(R(a)\cup S(a))$, its orbit under ${\rm Aut}(M/ac)$ is either $S(c)\cap R(a)$ or $S(c)\cap S(a)$. Therefore, ${\rm Aut}(M/ac)$ acts without finite orbits on $S(c)\cap(R(a)\cup S(a))$ and we can find infinitely many distinct $S$-edges in $S(c)\cap(R(a)\cup S(a))$. Take any infinite sequence $\Sigma$ of $T$-edges in that set. Then $\Sigma$ spans an $R$-free structure.
Now we can use the fact that ${\rm Age}(\Gamma^{ST})\subset{\rm Age}(M)$ to find an array of parameters $a_i^j,b_k^l$ ($i,j,k,l\in\omega$) such that $\{a_i^j b_k^j:i,k\in\omega\}$ is isomorphic to $\Phi$, and for any $f:\omega\rightarrow\omega$ the set $\{a_{f(i)}^i b_{f(i)}^i:i\in\omega\}$ is isomorphic to $\Sigma$. We conclude that $R(x,a)\wedge S(x,b)$ has the TP2.
\end{proof}
Note that if $R$ is unstable, then since $R\sim^TS$ (by Proposition \ref{PropRSRT}), the set $S(c)\cap R(a)$ is infinite, so Claim \ref{Claimtp21} proves in particular that $S(c)\cap R(a)$ is not a union of $R$-classes in $S(c)$ if $R$ is unstable. And if $R$ is stable and $S(c)\cap R(a)$ is a finite union of $R$-classes in $S(c)$, then we have only the instability $S\sim T$, and can find indiscernible isomorphic half-graphs $X,Y$ witnessing it in $R(a)$ and $S(c)$. There is, by homogeneity, a sequence $(\sigma_i:i\in\omega)$ in ${\rm Aut}(M)$ that takes increasingly large initial segments of $Y$ to $X$, so by closedness of ${\rm Aut}(M)$ there is some $\sigma\in{\rm Aut}(M)$ taking $Y$ to $X$. Then $Y\subset T(\sigma(c))$ and $Y\subset R(\sigma(a))$, so $T(c)\cap R(a)$ is infinite by homogeneity, and Claim \ref{Claimtp21} is also valid when $R$ is stable.
\begin{claim}
It is not the case that $S(c)\cap S(a)$ is a union of $R$-classes over $c$.
\end{claim}
\begin{proof}
The proof is similar to that of Claim \ref{Claimtp21}, but in this case the minimal forbidden configuration is
\[
\includegraphics[scale=0.7]{StableForking7.pdf}
\]
So there are no $R$-edges from $S(c)\cap T(a)$ to $S(c)\cap S(a)$ and this last set is a union of $R$-classes over $c$. Again, $S(c)\cap S(a)$ is infinite as a consequence of $R\sim^T S$ if $R$ is unstable, and of $S\sim^ST$ if $R$ is stable, and $R(x,a)\wedge S(x,b)$ 2-divides as witnessed by a sequence of $T$-edges witnessing $S\sim^TT$ in which both monochromatic cliques are of colour $T$. The same argument as in Claim \ref{Claimtp21} proves that we can find an infinite $R$-free sequence of $T$-edges $(a_ib_i)_{i\in\omega}$ such that $\{R(x,a_i)\wedge S(x,b_i):i\in\omega\}$ is consistent. Now using ${\rm Age}(\Gamma^{ST})\subset{\rm Age}(M)$, we can prove the TP2 for $R(x,a)\wedge S(x,b)$.
\end{proof}
We have reached a contradiction as homogeneity implies that one of $S(c)\cap S(a)$ or $S(c)\cap R(a)$ is a union of $R$-classes over $c$.
\end{proof}
\begin{remark}
The same argument, with the appropriate modifications, shows that there are no primitive homogeneous simple 3-graphs $M$ in which $R,S,T$ are unstable, $S\sim T$, $R\sim T$, $T(a)\cong C(\Gamma^{ST})$, and $S(a)\cong K_n^R[\Gamma^{ST}]$. In all cases we find the forbidden structures from the claims and can complete the same arguments.
\label{RmkOtherCases}
\end{remark}
Remark \ref{RmkOtherCases} implies the following:
\begin{proposition}
There are no primitive homogeneous simple unstable 3-graphs in which only $R$ is nonforking, with $S\sim T$, such that $S(a)\cong C(\Gamma^{ST})$ and $T(a)\cong K_n^R[\Gamma^{ST}]$.
\end{proposition}
Note that the arguments in Claim \ref{Claimtp21} depend only on $S(c)\cap T(a)$ being $R$-free, isomorphic to the Random Graph, and on the structure of $S(a)$. This means that we can apply the same methods to eliminate Case IV.
\begin{proposition}
There are no primitive simple homogeneous 3-graphs $M$ in which $R,S,T$ are unstable, $S\sim T$, $S(a)\cong C(\Gamma^{ST})$ and $T(a)\cong C(\Gamma^{ST})$.
\end{proposition}
\begin{proof}
By the same arguments as in Proposition \ref{PropCaseII}.
\end{proof}
\begin{proposition}
There are no primitive simple unstable homogeneous 3-graphs $M$ in which only $R$ is nonforking, $S\sim T$, $S(a)\cong\Gamma^{ST}[K_n^R]$ and $T(a)\cong\Gamma^{ST}[K_m^R]$.
\end{proposition}
\begin{proof}
Suppose for a contradiction that $M$ is a homogeneous 3-graph satisfying all the conditions in the statement.
First note that $n=m$. If $n<m$ then for any $c\in T(a)$ there are $R$-cliques of size $m$ in $S(c)\cap T(a)$, contradicting homogeneity. The same argument proves that $m$ cannot be smaller than $n$.
Now consider $b\in S(a)$ and $S(b)$. The structure of $S(a)$ implies that $S(b)\cap S(a)$ is a union of infinitely many $R$-classes in $S(a)$, and in fact isomorphic to $S(a)$. Note that $S(b)\cap R(a)$ cannot be a union of $R$-classes over $b$, since a full $R$-class over $b$ in $R(a)$ would mean that $S(b)$ embeds $K_{n+1}^R$, impossible by homogeneity. From this it follows that $S(b)\cap T(a)$ is not a union of $R$-classes over $b$.
But now note that $S(b)\cap T(a)$ should embed $K_n^R$, since the structure consisting of two vertices $v,w$ joined by an $S$-edge and $c_1,\ldots,c_n$ forming an $R$-clique, with $T(w,c_i)$ and $S(v,c_i)$ for all $i$, is in ${\rm Age}(M)$ because it can be embedded in $S(a)$ or $T(a)$. We have reached a contradiction.
\comm
From this observation and the structure of $T(a)$, it follows that $S(c)\cap T(a)\cong S(a)$, and in particular is an infinite union of $R$-classes over $c$.
The set $S(c)\cap S(a)$ is also infinite because ${\rm Age}(\Gamma^{ST})\subset{\rm Age}(M)$, so in particular we can find any finite $ST$-graph $G$ such that there are two special vertices $g_1,g_2$ satisfying $T(g_1,g_2)$ and $S(x,g_1), S(x,g_2)$ for all $x\neq g_1,g_2$ in $G$. But $S(c)\neq S(a)$ because otherwise we would be able to define an equivalence relation $Q$ on $M$. Clearly, $S(c)\cap S(a)$ cannot be $R$-free, since $S(c)$ is isomorphic to $S(a)$ and $S(c)\cap T(a)$ is a union of classes.
Moreover, $S(c)\cap S(a)$ is a union of $R$-classes, as it is a proper subset of $S(a)$ and if it were not a union of $R$-classes then we could find elements $u,v\in S(a)\setminus S(c)$ such that there exists $u'\in S(c)\cap S(a)$ with $R(u,u')$ and the $R$-class of $v$ does not meet $S(c)\cap S(a)$. In this case we have ${\rm qftp}(u/ac)={\rm qftp}(v/ac)$ but ${\rm tp}(u/ac)\neq{\rm tp}(v/ac)$, contradicting homogeneity. From this it follows that each of $S(c)\cap S(a), S(c)\cap T(a)$, and $S(c)\cap R(a)$ is a union of $R$-classes.
Finally, $S(c)\cap R(a)$ is also infinite because $R\sim^TS$ witnesses the existence of an infinite $T$-clique in $S(c)\cap R(a)$. Therefore there are no $R$-edges between any two of $S(c)\cap S(a)$, $S(c)\cap T(a)$ and $S(c)\cap R(a)$.
This means that the following six configurations are forbidden in $M$:
\[
\includegraphics[scale=0.7]{StableForking8.pdf}
\]
The hypotheses already give us that the triangle $RST$ is forbidden in $S(a)$ and $T(a)$, as ${\rm Aut}(M/a)$ does not act 2-transitively on $S(a)/R$. Therefore, the following two are also forbidden:
\[
\includegraphics[scale=0.7]{StableForking9.pdf}
\]
Clearly, the structure
\[
\includegraphics[scale=0.7]{StableForking10.pdf}
\]
is in ${\rm Age}(M)$, as the triangle $SST$ can be embedded in $T(a)$. Analysing the structures $C_1$ and $\alpha$, we conclude that the amalgamation problem
\[
\includegraphics[scale=0.7]{StableForking11.pdf}
\]
has $R$ as a forced, i.e.,\, unique, solution ($T(b,c)$ gives $C_1$, and $S(b,c)$ gives $\alpha$). Similarly from $C_4$ and $\beta$ we get that
\[
\includegraphics[scale=0.7]{StableForking12.pdf}
\]
has $R$ as a forced solution; call that structure $D$. From $C_1$ and $C_3$ we find that the problem
\[
\includegraphics[scale=0.7]{StableForking13.pdf}
\]
has $T$ as a forced solution. It follows that the Amalgamation Property fails as the problem
\[
\includegraphics[scale=0.7]{StableForking14.pdf}
\]
has no solution in ${\rm Age}(M)$: by our preceding arguments, the structures on $a_0,a_1,a_2,b$ and $a_0,a_1,a_2,c$ are both in ${\rm Age}(M)$, the former being $B$ and the latter $D$, but $b,c,a_0,a_1$ has forced solution $R(b,c)$ and $b,c,a_1,a_2$ has forced solution $T(b,c)$. Homogeneity fails.
\ent
\end{proof}
\begin{proposition}
There are no primitive simple homogeneous 3-graphs $M$ in which $S,T$ are unstable, $R$ is the only nonforking relation, $S\sim T$, $S(a)\cong K_n^R[\Gamma^{ST}]$ and $T(a)\cong K_m^R[\Gamma^{ST}]$.
\label{PropLastCaseSsimT}
\end{proposition}
\begin{proof}
Suppose for a contradiction that $M$ is a primitive simple homogeneous 3-graph as in the statement.
Given any $a\in M$, let $b\in S(a)$ and $c\in T(b)\cap S(a)$. By homogeneity, $T(b)\cong T(a)$.
The set $T(b)\cap S(a)$ is contained in one $S\vee T$-class over $b$, by the structure of $S(a)$. We use the notation $x/P^y$, where $x,y$ are vertices of $M$ and $P$ is a formula defining an equivalence relation in $T(y)$, to denote the $P$-class of $x$ in $T(y)$.
\begin{claim}
If $c/(S\vee T)^b\cap T(a)\neq\varnothing$, then for any $d\in T(b)$ such that $R(c,d)$ holds we have $d/(S\vee T)^b\subset R(a)$.
\label{ClaimClasses1}\end{claim}
\begin{proof}
If the claim were false, we would be able to find $d\in c/(S\vee T)^b$ and $d'\in T(b)\cap T(a)$ such that $R(d,d')$. These two elements have the same quantifier-free type over $ab$, but there is no $\sigma\in{\rm Aut}(M/ab)$ taking $d$ to $d'$, since $d$ satisfies the existential formula $\varphi(y)=\exists x(T(x,b)\wedge S\vee T(y,x)\wedge T(b,y))$ (where $c$ is the quantified $x$), but $d'$ does not satisfy such a formula (see the illustration below, assuming $T(c,d)$).
\[
\includegraphics[scale=0.7]{StableForking15.pdf}
\]
\end{proof}
By the same argument,
\begin{claim}
If $c/(S\vee T)^b\cap R(a)\neq\varnothing$, then for any $d\in T(b)$ such that $R(c,d)$ holds we have $d/(S\vee T)^b\subset T(a)$.
\label{ClaimClasses2}\hfill$\Box$
\end{claim}
From Claims \ref{ClaimClasses1} and \ref{ClaimClasses2}, it follows that $c/(S\vee T)^b\cap T(a)=\varnothing$ or $c/(S\vee T)^b\cap R(a)=\varnothing$, as otherwise $T(b)$ would consist of only one $S\vee T$-class, contradicting homogeneity as $S\vee T$ is a proper equivalence relation on $T(a)$. What happens if $c/(S\vee T)^b$ meets only one of $T(a),R(a)$?
\begin{claim}
$c/(S\vee T)^b\cap R(a)=\varnothing$.
\label{ClaimClasses3}
\end{claim}
\begin{proof}
Suppose that $c/(S\vee T)^b\cap R(a)\neq\varnothing$. Then, by Claim \ref{ClaimClasses2} $c/(S\vee T)^b\cap T(a)=\varnothing$ and the other $S\vee T$-classes of $T(b)$ are contained in $T(a)$. It follows in particular that there are no $S$- or $T$-edges from $T(b)\cap T(a)$ to $T(b)\cap S(a)$. In particular, the structure
\[
\includegraphics[scale=0.7]{StableForking16.pdf}
\]
is forbidden, contradicting the fact (implied by $S(a)\cong K_n^R[\Gamma^{ST}]$) that ${\rm Age}(\Gamma^{ST})\subset{\rm Age}(M)$.
\end{proof}
\begin{claim}
$c/(S\vee T)^b\cap T(a)=\varnothing$.
\label{ClaimClasses4}
\end{claim}
\begin{proof}
Arguing as in Claim \ref{ClaimClasses3}, we find that if $c/(S\vee T)^b\cap T(a)\neq\varnothing$, then there are no $S$- or $T$-edges from $T(b)\cap S(a)$ to $T(b)\cap R(a)$ or from $T(b)\cap T(a)$ to $T(b)\cap R(a)$. In particular, the following two structures are forbidden:
\[
\includegraphics[scale=0.7]{StableForking17.pdf}
\]
This is impossible as we have three unstable relations, so by Remark \ref{RmkCompatibilityGraph} at least one of $R\sim S$, $R\sim T$ holds. By Proposition \ref{PropRSRT}, one of $R\sim^ST,R\sim^TS$ holds as $R\not\sim^SS$ and $R\not\sim^TT$ and $R$ is the only nonforking relation. But $D_1$ is in the age of an indiscernible half-graph witnessing $R\sim^TS$ and $D_2$ is in the age of the half-graph for $R\sim^ST$.
\end{proof}
From Claims \ref{ClaimClasses1} to \ref{ClaimClasses4}, we conclude that $c/(S\vee T)^b=T(b)\cap S(a)$.
\begin{claim}
There are at least three $S\vee T$-classes in $T(a)$.
\label{ClaimClasses5}
\end{claim}
\begin{proof}
We know that $S\vee T$ is a proper equivalence relation in $T(a)$. Suppose for a contradiction that there are only two $S\vee T$-classes in $T(a)$. Then the $S\vee T$-class in $T(b)$ of any $d$ with $R(d,c)$ meets both $R(a)$ and $T(a)$ (it meets $R(a)$ because $T\sim^SR$, and it meets $T(a)$ because ${\rm Age}(\Gamma^{ST})\subset{\rm Age}(M)$). This implies that there are no $S$- or $T$-edges from $T(b)\cap S(a)$ to $T(b)\cap T(a)$ or from $T(b)\cap S(a)$ to $T(b)\cap R(a)$; in particular, this implies that the structure
\[
\includegraphics[scale=0.7]{StableForking18.pdf}
\]
is forbidden, contradicting ${\rm Age}(\Gamma^{ST})\subset{\rm Age}(M)$.
\end{proof}
The same argument from Claim \ref{ClaimClasses5} shows that there are no $S\vee T$-classes $C$ in $T(b)$ such that $C\cap T(a)\neq\varnothing$ and $C\cap R(a)\neq\varnothing$. From this we conclude that each $S\vee T$-class in $T(b)$ is contained in one of $S(a),T(a),R(a)$. But this implies that there are no $S$- or $T$-edges from $T(b)\cap T(a)$ to $T(b)\cap S(a)$, again contradicting ${\rm Age}(\Gamma^{ST})\subset{\rm Age}(M)$.
\end{proof}
Propositions \ref{PropStrST} to \ref{PropLastCaseSsimT} prove
\begin{lemma}\label{lemmaST}
There are no primitive homogeneous simple 3-graphs in which all relations are unstable and only $R$ is nonforking satisfying $S\sim T$.
\hfill$\Box$\end{lemma}
\begin{corollary}\label{CorTwoInstabilities}
If $M$ is a primitive homogeneous simple 3-graph in which all relations are unstable and only $R$ is nonforking, then $S\not\sim T$, $R\not\sim^SS$, $R\not\sim^TT$, $R\sim^TS$, $R\sim^ST$. In particular, there are no such graphs in which $R$ is stable.
\end{corollary}
\begin{proof}
By Remark \ref{RmkCompatibilityGraph}, Proposition \ref{PropRSRT}, and Lemma \ref{lemmaST}.
\end{proof}
We have seen that $S\sim T$ implies that both $S(a)$ and $T(a)$ are unstable 3-graphs. Is it possible to have $S(a)$ and $T(a)$ stable?
\begin{observation}
If $M$ is a primitive homogeneous simple 3-graph in which $R,S,T$ are unstable, only $R$ is nonforking, and $S(a),T(a)$ are stable 3-graphs, then each of $S(a),T(a)$ is of the form $K^i_m[K^j_n[K^k_o]]$, where $\{i,j,k\}=\{R,S,T\}$ and only the index from $m,n,o\in\omega+1$ corresponding to $R$ is finite.
\label{ObservationStableNeighbourhoods}
\end{observation}
\begin{proof}
By Corollary \ref{CorTwoInstabilities}, $R\sim^ST$ and $R\sim^TS$ are the only instabilities witnessed in $M$, and from this it follows that $S(a)$ and $T(a)$ embed infinite $S$- and $T$-cliques. The observation follows from Theorem \ref{Lachlan3graphs} by inspection.
\end{proof}
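The composed structures $K^i_m[K^j_n[K^k_o]]$ can be realised concretely by colouring an edge according to the outermost coordinate in which its endpoints differ. The sketch below (function names and string labels are ad hoc, not notation from the text) builds a small finite instance and checks the fact used in the next paragraph: the innermost relation $k$ and the join $j\vee k$ define equivalence relations, while the outermost relation alone does not.

```python
from itertools import product

def tower(outer, middle, inner, sizes):
    # Vertices are triples; an edge's colour is the relation attached to the
    # outermost coordinate in which the two endpoints differ, so the graph
    # is K_{sizes[0]}^outer[K_{sizes[1]}^middle[K_{sizes[2]}^inner]].
    verts = list(product(*(range(s) for s in sizes)))
    def colour(u, v):
        if u[0] != v[0]:
            return outer
        if u[1] != v[1]:
            return middle
        return inner
    return verts, colour

verts, colour = tower("R", "S", "T", (2, 3, 2))  # a finite K_2^R[K_3^S[K_2^T]]

def is_equivalence(cols):
    # 'x ~ y iff x == y or colour(x, y) in cols' should be transitive.
    rel = lambda u, v: u == v or colour(u, v) in cols
    return all(not (rel(u, v) and rel(v, w)) or rel(u, w)
               for u in verts for v in verts for w in verts)
```

Here `is_equivalence({"T"})` and `is_equivalence({"S", "T"})` hold, while `is_equivalence({"R"})` fails.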
Observation \ref{ObservationStableNeighbourhoods} implies that there are, in principle, 36 cases to analyse under the hypothesis of stability for $S(a)$ and $T(a)$. We can eliminate 20 of these cases using only Proposition \ref{PropMultipartite}, since in $K_n^i[K_m^j[K_o^k]]$ (where $\{i,j,k\}=\{R,S,T\}$) the relations $k$ and $j\vee k$ are equivalence relations. In other words, $S(a)$ cannot be of the form $K_\omega^S[K^i_n[K^j_m]]$, where $\{i,j\}=\{R,T\}$, and similarly $T(a)$ is not of the form $K_\omega^T[K^{i'}_n[K^{j'}_m]]$ ($\{i',j'\}=\{R,S\}$). Of the sixteen remaining cases, twelve (those in which $R\vee S$ or $R\vee T$ are the coarsest equivalence relations in $S(a)$, $T(a)$) can be eliminated by looking at the set of forbidden triangles in $S(a)$, $T(a)$ and in $T(b)\cap S(a)$ or $S(c)\cap T(a)$, where $b\in S(a)$ and $c\in T(a)$. For example, if $S(a)\cong K_\omega^T[K_n^R[K_\omega^S]]$ and $T(a)\cong K_\omega^S[K_\omega^T[K_m^R]]$, then $T(b)\cap S(a)$ contains triangles $RRS$, which are forbidden in $T(a)$, contradicting homogeneity.
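The bookkeeping in the paragraph above (36 cases, 20 eliminated by Proposition \ref{PropMultipartite}, twelve more by the triangle analysis) can be replayed mechanically; the encoding of a stable neighbourhood as the tuple (outermost, middle, innermost) of relations is an informal device, not notation from the text.

```python
from itertools import permutations

towers = list(permutations(("R", "S", "T")))   # six towers K^i[K^j[K^k]]
pairs = [(s, t) for s in towers for t in towers]

# Proposition PropMultipartite: S(a) cannot have S as its outermost relation,
# and T(a) cannot have T as its outermost relation.
after_multipartite = [(s, t) for s, t in pairs if s[0] != "S" and t[0] != "T"]

# Of the rest, the cases where R∨S or R∨T is the coarsest equivalence
# relation (outermost relation not R) fall to the forbidden-triangle analysis.
remaining = [(s, t) for s, t in after_multipartite
             if s[0] == "R" and t[0] == "R"]

print(len(pairs), len(after_multipartite), len(remaining))  # 36 16 4
```

The four survivors are exactly the rows of Table \ref{table2}.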
This leaves us with the following four cases, listed in Table \ref{table2}.
\begin{table}
\centering
\caption{Cases with $S(a),T(a)$ stable.}\label{table2}
\begin{tabular}{ccc}
\hline\hline
Case & $S(a)$ & $T(a)$ \\ [0.5ex]
\hline
I & $K_n^R[K_\omega^S[K_\omega^T]]$ & $K_n^R[K_\omega^S[K_\omega^T]]$ \\
II & $K_n^R[K_\omega^S[K_\omega^T]]$ & $K_n^R[K_\omega^T[K_\omega^S]]$ \\
III & $K_n^R[K_\omega^T[K_\omega^S]]$ & $K_n^R[K_\omega^S[K_\omega^T]]$ \\
IV & $K_n^R[K_\omega^T[K_\omega^S]]$ & $K_n^R[K_\omega^T[K_\omega^S]]$\\
\hline
\end{tabular}
\end{table}
Cases I, III, IV are easily eliminated via Theorem \ref{ThmThomas} because the maximal $S$- (or $T$-) cliques containing an $S$- ($T$-) edge form the lines of a weak pseudoplane. This leaves only one case under the assumption of stability for $S(a),T(a)$, namely $S(a)\cong K_n^R[K_\omega^S[K_\omega^T]]$ and $T(a)\cong K_m^R[K_\omega^T[K_\omega^S]]$.
\begin{proposition}\label{propfinalcase}
Let $M$ be a primitive simple unstable 3-graph in which $R,S,T$ are unstable relations and $R$ is the only nonforking relation. Then $S(a)$ and $T(a)$ are isomorphic to unstable 3-graphs.
\end{proposition}
\begin{proof}
By the paragraph preceding the statement, it suffices to eliminate the case where $S(a)\cong K_n^R[K_\omega^S[K_\omega^T]]$ and $T(a)\cong K_m^R[K_\omega^T[K_\omega^S]]$.
Consider $b\in S(a)$ and $T(b)$. We have $T(b)=(T(b)\cap S(a))\cup(T(b)\cap T(a))\cup(T(b)\cap R(a))$.
By the structure of $S(a)$, $T(b)\cap S(a)$ is the $T$-class of $b\in S(a)$, excluding $b$ itself. By homogeneity, $T(b)\cong T(a)$, so each element of an infinite $T$-clique in $T(b)$ is contained in an $S$-class over $b$.
Since the $S$-diameter of $M$ is 2, the triangle $TTS$ is in ${\rm Age}(M)$ and $T(b)\cap S(a)\neq\varnothing$. And since the triangle $RST$ is also in ${\rm Age}(M)$ (this follows from $R\sim^ST$), we have $R(b)\cap T(a)\neq\varnothing$. By $R\sim^TS$, $R(b)\cap T(a)$ is infinite and contains infinite $T$-cliques; by $R\sim^ST$, $R(b)\cap T(a)$ also contains infinite $S$-cliques. A similar argument proves that $S(b)\cap T(a)$ is infinite and contains infinite $S$- and $T$-cliques.
Let $X$ be the union of $S$-classes $C$ in $T(a)$ such that there are some $c\in T(b)$ and $d\in C$ with $T(d,b)\wedge S(c,d)$, and let $Y$ be the union of $S$-classes $C$ in $T(a)$ such that there are $c\in T(b)$ and $d\in C$ with $S(d,b)\wedge S(c,d)$. Then either $R(b)\cap T(a)$ is contained in $X$ or is disjoint from $X$, and likewise with $Y$. Otherwise, we would be able to find $d\in T(b)\cap T(a)$ and $e\in R(b)\cap T(a)$ with equal quantifier-free types over $ab$ but distinct types over $ab$.
From this it follows that $R(b)\cap T(a)$ is disjoint from both $X$ and $Y$. By homogeneity, $S(b)\cap T(a)$ and $T(b)\cap T(a)$ are unions of $S$-classes in $T(a)$; but the structure of $S(b)$ must be isomorphic to that of $S(a)$. The only possible way for $S(b)\cap T(a)$ to be a union of $S$-classes in $T(a)$ is for it to consist of exactly one class. But we know that $S(b)\cap T(a)$ contains infinite $T$-cliques, so we have reached a contradiction.
\end{proof}
\begin{corollary}\label{CorFinal}
There are no primitive simple unstable 3-graphs $M$ in which $R,S,T$ are unstable relations and $R$ is the only nonforking relation.
\end{corollary}
\begin{proof}
By Proposition \ref{propfinalcase}, $S(a)$ or $T(a)$ is unstable. If $S(a)$ is unstable, then, by Corollary \ref{CorTwoInstabilities}, one of $R\sim^TS$ or $R\sim^ST$ holds in each of $S(a)$, $T(a)$. We know by Proposition \ref{PropOneIsStable} that there is a stable relation in $S(a), T(a)$, which in each case must be $R$, and which is either an equivalence relation or the complement of one. If $R$ is the complement of an equivalence relation, then each class is infinite and isomorphic to the Random Graph. But then we find $S\sim T$, contradicting Corollary \ref{CorTwoInstabilities}, so $R$ must be an equivalence relation and each of $S(a)$, $T(a)$ is an imprimitive homogeneous unstable 3-graph in which $R$ defines an equivalence relation with finite classes. But then $R$ is stable in $S(a),T(a)$, so they do not embed half-graphs witnessing $R\sim^TS$ or $R\sim^ST$, a contradiction again.
\end{proof}
Now we can conclude:
\begin{theorem}\label{ThmStableForking}
Let $M$ be a homogeneous primitive simple unstable 3-graph. Then if some relation forks, it is stable.
\end{theorem}
\begin{proof}
We have proved that it is not possible to have one (Lemma \ref{PropNoSimpleOneDividing}) or two forking unstable relations (Corollaries \ref{CorTwoInstabilities}, \ref{CorFinal}). Thus, either we have all relations unstable and nonforking, or the forking relation is stable.
\end{proof}
\chapter{Introduction}
\section{Homogeneous Structures}
Homogeneous structures appear in the work of Roland Fra\"iss\'e from the 1950s as relational structures with very large automorphism groups (see \cite{fraisse1954extension}, \cite{fraisse1986theory}), but some trace the origins of the subject to Cantor's proof that any two countable dense linearly ordered sets without endpoints are isomorphic. That theorem is proved by a back-and-forth argument, which in model-theoretic terms says that the theory of $(\mathbb Q,<)$ eliminates quantifiers in the language $\{<\}$.
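That back-and-forth argument is effective: each stage extends a finite order-preserving partial map, using density to place interior points and the absence of endpoints to place extreme ones. A minimal sketch (the function names and the two sample enumerations are illustrative choices):

```python
from fractions import Fraction

def place(pairs, a):
    # Return a point on the right realising the same order relations to the
    # range of `pairs` as `a` has to its domain: density gives interior
    # points, the absence of endpoints gives points beyond the extremes.
    below = [y for x, y in pairs if x < a]
    above = [y for x, y in pairs if x > a]
    if not pairs:
        return Fraction(0)
    if not below:
        return min(above) - 1
    if not above:
        return max(below) + 1
    return (max(below) + min(above)) / 2

def back_and_forth(left, right, steps):
    # Alternate "forth" (match the next left point) and "back" (match the
    # next right point); the finite map stays order-preserving throughout.
    pairs = []
    li = ri = 0
    for step in range(steps):
        if step % 2 == 0:
            a, li = left[li], li + 1
            if all(a != x for x, _ in pairs):
                pairs.append((a, place(pairs, a)))
        else:
            b, ri = right[ri], ri + 1
            if all(b != y for _, y in pairs):
                flipped = [(y, x) for x, y in pairs]
                pairs.append((place(flipped, b), b))
    return pairs

# Finite initial segments of enumerations of two dense orders.
left = [Fraction(1, 2), Fraction(1, 4), Fraction(3, 4),
        Fraction(1, 8), Fraction(5, 8)]
right = [Fraction(0), Fraction(1), Fraction(-1), Fraction(1, 3), Fraction(2)]
stage = back_and_forth(left, right, 10)
```

Iterating through full enumerations of two countable dense linear orders without endpoints, every point is eventually matched, and the union of the stages is an isomorphism.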
This subject is a meeting point for permutation group theory, model theory, and combinatorics. From the model-theoretic perspective, homogeneous structures have many desirable properties: they eliminate quantifiers, are prime, have few types, and their algebraic closure does not grow too quickly. All these properties made a full classification, at least for some restricted languages, accessible. There exist, for example, complete classifications of the finite and countably infinite homogeneous posets (Schmerl, \cite{schmerl1979countable}), graphs (Gardiner, \cite{gardiner1976homogeneous}; Lachlan and Woodrow, \cite{lachlan1980countable}), tournaments (Woodrow, \cite{Woodrow1976}; Lachlan, \cite{lachlan1984countable}), and digraphs (Cherlin, \cite{cherlin1998classification}).
During the 1970s and 80s, stability theory was a rapidly growing subject. Abstractions from the dimension or rank concepts in ``real life" theories were put to work, and whole families of theories were classified. Gardiner and Lachlan found that most finite homogeneous graphs and digraphs could be classified in a similar way: there was a partition of the set of structures into families parametrised by a few numbers. This parallel discovery led to Lachlan and Shelah's study of stable homogeneous structures \cite{lachlan1984stable}, and to Cherlin and Hrushovski's work on structures with few types in \cite{cherlin2003finite}.
\begin{definition}\label{DefHom}
A countable first-order structure $M$ for the relational language $L=\{R_i:i\in I\}$ is \emph{homogeneous} if any isomorphism between finite substructures extends to an automorphism of $M$.
\end{definition}
We will deal with finite languages throughout. It is essential that the language be relational: if it had function symbols, we would have to change ``finite substructures'' to ``finitely generated substructures,'' since functions can be iterated. Notice that this definition is stronger than the usual definition of homogeneity in model theory, where the condition is that partial \emph{elementary maps} extend to automorphisms. This is one reason why our homogeneous structures are often called \emph{ultra}homogeneous. Any partial elementary map is a local isomorphism, and so every ultrahomogeneous structure is homogeneous, but the converse is not true. The countability assumption is not necessary, but we will not consider homogeneous structures of any higher cardinality.
If $M$ is any (not necessarily homogeneous) first-order relational structure, the set of all finite structures isomorphic to substructures of $M$ is called the \emph{age} of $M$, denoted by ${\rm Age}(M)$. It is clear from Definition \ref{DefHom} that the age of a homogeneous structure $M$ is of particular importance if we wish to understand $M$.
Given a countable relational structure $M$ for a countable language, the following are true:
\begin{enumerate}
\item{${\rm Age}(M)$ has countably many members, since $M$ itself is countable.}
\item{${\rm Age}(M)$ is closed under isomorphism, by definition.}
\item{${\rm Age}(M)$ is closed under forming substructures: given $A\in{\rm Age}(M)$, any substructure $B$ of $A$ will be finite, and a composition of the embeddings $B\rightarrow A$ and $A\rightarrow M$ proves that $B\in{\rm Age}(M)$.}
\item{${\rm Age}(M)$ has the \emph{Joint Embedding Property} or JEP: given two structures $A,B\in{\rm Age}(M)$, there exist embeddings $f:A\rightarrow C$ and $g:B\rightarrow C$ for some $C\in{\rm Age}(M)$.}
\end{enumerate}
The next theorem completes the picture:
\begin{theorem}[Fra\"iss\'e]
Let $L$ be a countable first-order relational language, and $\mathcal C$ a class of finite $L$-structures.
\begin{enumerate}
\item{There exists a countable structure $A$ whose age is equal to $\mathcal C$ if and only if $\mathcal C$ satisfies properties 1--4.}
\item{There exists a homogeneous structure $A$ whose age is equal to $\mathcal C$ if and only if $\mathcal C$ satisfies 1--4 and the amalgamation property: given $A,B,C\in\mathcal C$ with embeddings $f_1:A\rightarrow B$ and $f_2:A\rightarrow C$, there exists $D\in\mathcal C$ and embeddings $g_1:B\rightarrow D$ and $g_2:C\rightarrow D$ such that $g_1\circ f_1=g_2\circ f_2$. Furthermore, this structure is unique up to isomorphism.}
\end{enumerate}
\end{theorem}
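As a standard illustration (our own example, not part of the classification above), the class of all finite graphs satisfies the amalgamation property via \emph{free} amalgamation: given embeddings $f_1:A\rightarrow B$ and $f_2:A\rightarrow C$, take
\begin{equation*}
D \;=\; B \sqcup_{A} C, \qquad E(D)\;=\;E(B)\cup E(C),
\end{equation*}
that is, glue $B$ and $C$ along the common copy of $A$ and add no edges between $B\setminus A$ and $C\setminus A$; then $g_1\circ f_1=g_2\circ f_2$ holds by construction. The Fra\"iss\'e limit of this class is the Random Graph.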
In the course of the classification, the Lachlan-Woodrow Theorem will be used many times.
\begin{theorem}[Lachlan-Woodrow 1980]\label{LachlanWoodrow}
Let $G$ be an infinite homogeneous graph. Then either $G$ or $G^c$ is of one of the following forms:
\begin{enumerate}
\item{$I_m[K_n]$ with $\max(m,n)=\infty$,}
\item{Generic omitting $K_{n+1}$,}
\item{Generic (the Random Graph)}
\end{enumerate}
\end{theorem}
Consider a group of permutations $G$ acting on a set $X$. Then $G$ acts on each Cartesian power of $X$ coordinatewise. Peter Cameron introduced the term \emph{oligomorphic} action to describe the situation where $G$ acts on a countably infinite set $X$ and $G$ has finitely many orbits on $X^n$ for each natural number $n$ (see \cite{cameron1990oligomorphic}). The following theorem is an elaborate version of Ryll-Nardzewski's Theorem.
\begin{theorem}\label{RyllNardzewski}
Let $M$ be a countably infinite structure over a countable language and $T={\rm Th}(M)$. The following are equivalent:
\begin{enumerate}
\item{$M$ is $\omega$-categorical\label{Ryll1}}
\item{Every type in $S_n(T)$ is isolated, for all $n\in\omega$\label{Ryll2}}
\item{Each type space $S_n(T)$ is finite\label{Ryll3}}
\item{$(M,{\rm Aut}(M))$ is oligomorphic\label{Ryll4}}
\item{For each $n>0$ there are only finitely many formulas $\varphi(x_1,\ldots,x_n)$ up to ${\rm Th}(M)$-equivalence.\label{Ryll5}}
\end{enumerate}
\end{theorem}
We have chosen two properties to guide our classification: first, the mostly combinatorial property of homogeneity, and second, the model-theoretical property of simplicity. One of the main theorems of simplicity theory is the Independence Theorem, which in our relational setting allows us to find simultaneous solutions to sufficiently independent (in the sense of forking) systems of relations. We will state the Independence Theorem later; first, a few basic facts about $\omega$-categorical and homogeneous structures.
\begin{proposition}
Let $M$ be a countably infinite structure homogeneous over a finite relational language. Then $M$ is $\omega$-categorical.
\end{proposition}
\begin{proof}
Since the language is finite and relational, there can be only finitely many isomorphism types of substructures of $M$ of size $n$; by homogeneity, any two isomorphic finite substructures are in the same orbit, so by Theorem \ref{RyllNardzewski}, $M$ is $\omega$-categorical.
\end{proof}
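To see the counting explicitly in a simple case (our own illustration): over the graph language, with a single irreflexive symmetric binary relation, a structure on $n$ labelled vertices is determined by a choice of edge or non-edge for each of the $\binom{n}{2}$ pairs, so
\begin{equation*}
\#\{\text{isomorphism types of substructures of size } n\} \;\leq\; 2^{\binom{n}{2}} \;<\; \infty,
\end{equation*}
and similarly $|L|^{\binom{n}{2}}$ bounds the count for any finite binary relational language $L$.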
\begin{proposition}
The unique model $M$ of cardinality $\kappa$ of a countable $\kappa$-categorical theory is saturated.
\label{CategoricalSaturated}
\end{proposition}
\begin{proof}
By the L\"owenheim-Skolem theorem and countability.
\end{proof}
Recall that a \emph{small} substructure of a saturated model of cardinality $\kappa$ is a substructure of any cardinality $\lambda<\kappa$. In homogeneous models, partial elementary maps extend to automorphisms. It is not hard to prove that saturated models are homogeneous; as a consequence,
\begin{proposition}\label{PropSaturation}
In a saturated model $M$, two small substructures have the same type if and only if they belong to the same orbit under ${\rm Aut}(M)$.
\end{proposition}
As a direct consequence of Theorem \ref{RyllNardzewski} and Proposition \ref{PropSaturation}, we have:
\begin{proposition}
Let $M$ be a countable $\omega$-categorical structure and $A$ a finite subset of $M$. A subset $X\subset M$ is definable over $A$ if and only if $X$ is a union of orbits of the group of automorphisms of $M$ fixing $A$ pointwise.
\end{proposition}
\begin{proposition}
Let $M$ be a countable $\omega$-categorical structure over a relational language $L$. Then $M$ is homogeneous if and only if ${\rm Th}(M)$ eliminates quantifiers in the language $L$.
\end{proposition}
\begin{proof}
If ${\rm Th}(M)$ eliminates quantifiers, homogeneity follows from saturation, by Proposition \ref{CategoricalSaturated}.
Conversely, given an $n$-tuple $\bar a$ in $M$, its isomorphism type in the language $L$ can be expressed by a quantifier-free formula, and in a homogeneous structure the isomorphism type of $\bar a$ determines its orbit under the action of ${\rm Aut}(M)$, and therefore its complete type. Thus the quantifier-free type (i.e.,\, the isomorphism type of the substructure induced on the tuple) determines the complete type of the tuple, which is precisely quantifier elimination.
\end{proof}
Quantifier elimination is a matter of language; we can always force it on a structure by adding relation symbols to the language for each possible formula. If the structure we start with is $\omega$-categorical, then we need only add finitely many predicates for each natural number $n$, corresponding to the finitely many elements of $S_n(T)$ or, equivalently, to the orbits of ${\rm Aut}(M)$.
\begin{definition}
An \emph{$n$-graph} is a structure $(M, R_1,\ldots,R_n)$ with $n\geq2$, in which each $R_i$ is binary, irreflexive and symmetric, and for all distinct $x,y\in M$ exactly one of the $R_i$ holds. We assume that all the relations in the language are realised in a homogeneous $n$-graph.
For any relation $P$ in the language of an $n$-graph $M$ and any element $a$, $P(a)$ denotes the set $\{x\in M: P(a,x)\}$. We often refer to this as the $P$\emph{-neighbourhood} of $a$.
\label{DefnGraph}
\end{definition}
Some more definitions:
\begin{definition}~
\begin{enumerate}
\item{A \emph{path} of colour $i$ and length $n$ between $x$ and $y$ is a sequence of distinct vertices $x_0, x_1,\ldots, x_n$ such that $x_0=x$, $x_n=y$ and for $0\leq j\leq n-1$ the edge $(x_j,x_{j+1})$ is of colour $i$.}
\item{Two vertices $x,y$ in an edge-coloured graph $(M, R_1,\ldots,R_n)$ are $R_i$\emph{-connected} if there exists a path of colour $i$ between them; a subset $A$ of $M$ is $R_i$-connected if any $a, a'\in A$ are $R_i$-connected by a path in $A$. A maximal $R_i$-connected subset of $M$ is an \emph{$R_i$-connected component.}}
\item{The \emph{$R_i$-distance} between two vertices $x,y$ in an edge-coloured graph, denoted by $d_i(x,y)$, is the length of a minimal $R_i$-path between $x$ and $y$ ($\infty$ if no such path exists). The \emph{$R_i$-diameter} of an $R_i$-connected graph $A$ is defined as the supremum of $\{d_i(x,y)|x,y\in A\}$.}
\item{An $n$-graph is \emph{$R$-multipartite} with $k$ parts ($k>1$, possibly infinite) if there exists a (not necessarily definable) partition $P_1,\ldots,P_k$ of its vertex set into nonempty subsets such that if two vertices $x,y$ are $R$-adjacent then they do not belong to the same $P_i$. We will say that $G$ is \emph{$R$-complete-multipartite} if $G$ is $R$-multipartite with at least two parts and $R(a,b)$ holds for all pairs $a,b$ from distinct classes.}
\item{For any relation $R$, $n\in\omega$, and $a$, $R^n(a)$ is the set of vertices at $R$-distance $n$ from $a$.}
\item{A \emph{half-graph} for colour $R$ with $m$ pairs in an $n$-coloured graph $M$ is a set of vertices $\{a_i:i\in m\}\cup\{b_i:i\in m\}\subset M$ such that $R(a_i,b_j)$ holds iff $i<j$.}
\end{enumerate}
\end{definition}
In this document, we are concerned with homogeneous 3-graphs $M$ (that is, 3-graphs homogeneous in the language $L=\{R,S,T\}$) with simple theory. The present work is a classification of a restricted class of homogeneous structures with simple unstable theory (in fact, supersimple with finite ${\rm SU}$-rank; more on this later), and as such, extends Lachlan's classification of stable homogeneous 3-graphs:
\begin{theorem}[Lachlan 1986, \cite{lachlan1986binary}]\label{Lachlan3graphs}
Every stable homogeneous 3-graph is isomorphic to one of the following:
\begin{multicols}{2}
\begin{enumerate}
\itemsep-0.25em
\item{$P_{**}$}
\item{$Z$}
\item{$Z'$}
\item{$Q_*^i$}
\item{$P^i_*$}
\item{$P^i[K_m^i]$}
\item{$K_m^i[Q^i]$}
\item{$Q^i[K_m^i]$}
\item{$K_m^i[P^i]$}
\item{$K_m^i\times K_n^j$}
\item{$K_m^i[K_n^j[K_p^k]]$}
\end{enumerate}
\end{multicols}
where $\{i,j,k\}=\{R,S,T\}$ and $1\leq m,n,p\leq\omega$.
\end{theorem}
Items 1 to 5 are finite 3-graphs; for 6--11, if at least one of $m,n,p$ is infinite, the 3-graph is infinite. We will not explain what $Z$, the asterisks, and the primes mean, since we are concerned only with infinite graphs. The $j,k$-graph $P^i$ has five vertices, on which both the $j$-edges and the $k$-edges form a pentagon. The $j,k$-graph $Q^i$ is defined on 9 vertices; the $j$- and $k$-edges form a copy of $K_3\times K_3$.
For $1\leq m,n\leq\omega$, $K_m^i\times K_n^j$ is the 3-graph with vertex set $m\times n$ and relations
\[((a_1,b_1),(a_2,b_2))\in\begin{cases}
i &\mbox{if } a_1\neq a_2\wedge b_1=b_2\\
j &\mbox{if } a_1= a_2\wedge b_1\neq b_2\\
k &\mbox{if } a_1\neq a_2\wedge b_1\neq b_2\\
\end{cases}
\]
where we again assume $\{i,j,k\}=\{R,S,T\}$.
And if $G$, $H$ are 3-graphs, then $G[H]$ is the 3-graph with vertex set $V(G)\times V(H)$ in which the 3-graph induced on $\{(a,v):v\in V(H)\}$ is isomorphic to $H$ for each $a\in V(G)$, and for any function $f:V(G)\rightarrow V(H)$, the 3-graph induced on $\{(a,f(a)):a\in V(G)\}$ is isomorphic to $G$. More formally, $P((a,b),(c,d))$ holds in $G[H]$ if $a=c$ and $H\models P(b,d)$, or if $a\neq c$ and $G\models P(a,c)$, where $P\in\{R,S,T\}$.
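For instance (a small example of our own), taking $G=K_2^R$ and $H=K_3^S$ gives the 3-graph $K_2^R[K_3^S]$ on the vertex set $2\times3$, where for distinct vertices
\begin{equation*}
P\big((a,b),(c,d)\big)=\begin{cases}
S &\mbox{if } a=c\wedge b\neq d\\
R &\mbox{if } a\neq c\\
\end{cases}
\end{equation*}
that is, two $S$-triangles joined completely by $R$-edges. Note that $T$ is not realised here; a composition realising all three colours requires the three-level form $K_m^i[K_n^j[K_p^k]]$ appearing in the list above.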
We often divide binary relations in two groups: forking and nonforking. We mean:
\begin{definition}
Let $L=\{R_1,\ldots,R_n\}$ be a binary relational language. We say that $R_i$ is a \emph{forking} relation if $R_i(a,b)$ implies that ${\rm tp}(a/b)$ forks over $\varnothing$. Otherwise, $R_i$ is \emph{nonforking}.
\end{definition}
Recall that given $A\subset B$ and $p\in S(B)$, a \emph{Morley sequence} in $p$ over $A$ is a $B$-indiscernible sequence $(\bar a_i:i\in I)$ of realisations of $p$ such that ${\rm tp}(\bar a_i/B\bar a_0\ldots \bar a_{i-1})$ does not fork over $A$ for all $i\in I$; in particular, $\bar a_i\indep[A](\bar a_j:j<i)$.
In many of our arguments, we make implicit use of the following theorem, especially the last statement, to justify the non-existence of an infinite clique of some particular colour in a neighbourhood of a vertex:
\begin{theorem}
Let $T$ be a first-order theory. The following are equivalent:
\begin{enumerate}
\item{$T$ is simple.}
\item{Forking (dividing) satisfies symmetry.}
\item{A formula $\varphi(\bar x,\bar a)$ does not divide over $A$ if and only if for some Morley sequence $I$ in ${\rm tp}(\bar a/A)$ the set $\{\varphi(\bar x,\bar c):\bar c\in I\}$ is consistent.}
\end{enumerate}
\end{theorem}
The central theorem of simplicity is:
\begin{theorem}
If $B\indep[A]C$, ${\rm tp}(\bar b/AB)$ and ${\rm tp}(\bar c/AC)$ do not fork over $A$, and ${\rm Lstp}(\bar b/A)={\rm Lstp}(\bar c/A)$, then there is $\bar a\models{\rm Lstp}(\bar b/A)\cup{\rm tp}(\bar b/AB)\cup{\rm tp}(\bar c/AC)$, with $\bar a\indep[A]BC$.
\end{theorem}
In the primitive case, our method of classification relies on the following statement, related to the stable forking conjecture (see \cite{brower2012weak}): for a predicate $P$ in the language of a homogeneous simple primitive $3$-graph, if the formula $P(x,a)$ divides, then $P$ is a stable predicate. We will prove this statement in Chapter \ref{ChapStableForking}.
The following theorem was recently proved by Vera Koponen in \cite{koponen2014binary}:
\begin{theorem}\label{Koponen}
Suppose that $M$ is a countable, binary, homogeneous and simple structure. Let $T$ be the complete theory of $M$. Then $T$ is supersimple with finite ${\rm SU}$-rank which is at most $|S_2(T)|$.
\end{theorem}
\begin{definition}
The {\rm SU}-rank is the least function from the collection of all types over parameters in the monster model to ${\rm On}\cup\{\infty\}$ satisfying, for each ordinal $\alpha$, that ${\rm SU}(p)\geq\alpha+1$ if there is a forking extension $q$ of $p$ with ${\rm SU}(q)\geq\alpha$.
\label{DefSU}
\end{definition}
The {\rm SU}-rank is invariant under definable bijections. Additionally, if $q$ is a nonforking extension of $p$, then ${\rm SU}(q)={\rm SU}(p)$. A theory $T$ is supersimple if and only if ${\rm SU}(p)<\infty$ for all real types $p$. In the following theorem, we denote the Hessenberg sum of ordinals by $\oplus$.
\begin{theorem}[Lascar inequalities]
The {\rm SU}-rank satisfies the following inequalities:
\begin{enumerate}
\item{${\rm SU}(a/bA)+{\rm SU}(b/A)\leq{\rm SU}(ab/A)\leq{\rm SU}(a/bA)\oplus{\rm SU}(b/A)$.}
\item{Suppose ${\rm SU}(a/Ab)<\infty$ and ${\rm SU}(a/A)\geq{\rm SU}(a/Ab)\oplus\alpha$. Then ${\rm SU}(b/A)\geq{\rm SU}(b/Aa)+\alpha$.}
\item{Suppose ${\rm SU}(a/Ab)<\infty$ and ${\rm SU}(a/A)\geq{\rm SU}(a/Ab)+\omega^\alpha n$. Then ${\rm SU}(b/A)\geq{\rm SU}(b/Aa)+\omega^\alpha n$.}
\item{If $a\indep[A] b$, then ${\rm SU}(ab/A)={\rm SU}(a/A)\oplus{\rm SU}(b/A)$.}
\end{enumerate}
\label{LascarIneqs}
\end{theorem}
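As a minimal illustration of item 4 (our own example): if $a$ and $b$ each have ${\rm SU}$-rank 1 over $A$ and $a\indep[A]b$, then
\begin{equation*}
{\rm SU}(ab/A)\;=\;{\rm SU}(a/A)\oplus{\rm SU}(b/A)\;=\;1\oplus1\;=\;2,
\end{equation*}
which is the mechanism by which types of higher finite rank arise from independent rank-1 elements.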
Chapter \ref{ChapGenRes} is a collection of easy results that will be used again and again in the rest of the classification. In Chapter \ref{ChapImprimitiveFC}, we will focus on simple unstable homogeneous 3-graphs in which the reflexive closure of the relation $R$ defines an equivalence relation with finite classes. From this it follows in particular that $S$ and $T$ do not define equivalence relations, as in that case $M$ would be a stable graph. We show that there is only one such structure in which all the predicates are realised in the union of two classes, and classify the rest.
We prove in Chapter \ref{ChapPrimitive} that primitive homogeneous simple 3-graphs have ${\rm SU}$-rank 1, which enables us to prove that the only such graph is the analogue in three predicates of the Random Graph. After that, we use the results in Chapter \ref{ChapGenRes} to elucidate the structure of imprimitive homogeneous 3-graphs with infinite classes. |
The dual superconductor model of QCD confinement requires the vacuum to contain a condensate of (chromo)
magnetic monopoles. This led several authors to consider embedded, usually Abelian, subgroups within gauge groups.
The early focus was on the $U(1)$ subgroup of $SU(2)$, with analyses by Savvidy \cite{S77}, Nielsen and Olesen \cite{NO78} and 't~Hooft \cite{tH81}
considering the maximal Abelian gauge, in which the Abelian subgroup is assumed to lie along the internal $e_3$-axis. While they did find a
magnetic condensate to be a lower energy state than the perturbative vacuum, their analyses blatantly violated gauge covariance and offered no evidence that the chromomagnetic background
was due to monopoles. There was also considerable controversy regarding the stability of such a vacuum.
These issues were resolved by the Cho-Duan-Ge (CDG) decomposition \cite{Cho80a, DG79}
which introduces an internal vector to covariantly allow a subgroup embedding within a theory's gauge group to vary throughout spacetime.
Analyses based on this approach confirmed this magnetic background \cite{S77, tH81} and careful consideration of renormalisation and causality \cite{CP02,Cme04,CmeP04,KKP05}
finally resolved such a condensate to be stable through several independent arguments.
It is common for analyses of QCD based on the CDG decomposition to assume
that the monopole condensate comprising the vacuum provides a slow-moving
background to the quantum degrees of freedom (DOFs) \cite{CP02}. This was the basis of a novel approach to
Einstein-Cartan gravity, in which contorsion (or torsion) is the quantised dynamic degree of freedom confined by a slow-moving classical background gravitational
curvature \cite{KP08,P10,CPP10}.
Their work was based on the Lorentz gauge field theory initially put forward by Utiyama-Kibble-Sciama \cite{U56,K61,S64} for which it has long been known that the non-compact
nature of the Lorentz group led to the theory not being postive semi-definite. They dealt with this by performing their initial analyses in Euclidean space, transforming
the Lorentz gauge group to $SO(4) \simeq SU(2) \times SU(2)$, until later work found the theory to be well-defined with propagators for its canonical DOFs \cite{PKT12}.
Instead of including contorsion, we consider the Abelian decomposition of the Lorentz gauge field strength. Drawing on
a considerable body of literature concerning the Abelian decomposition of $SU(2)$ Yang-Mills,
we find an interesting structure without the introduction of contorsion.
To avoid third order derivatives from entering the equations of motion (EOMs), our theory does not include localised translation symmetry,
although it is accepted that spacetime respects the full Poincar\'{e} symmetry group.
We restrict ourselves to the Lorentz subgroup in this work to avoid complications and so that we can
find conventional propagators for the gauge bosons with a Lagrangian quadratic in gravitational curvature.
We remain mindful, however, that this is a reduced symmetry group of gravitational dynamics, rendering our model either a low-energy
effective theory or perhaps even just a toy model.
One of the more confusing mathematical subtleties of the CDG decomposition was the number of canonical degrees of freedom.
Shabanov argued that an additional gauge-fixing condition is needed to remove a supposed \textquotedblleft two extra degrees\textquotedblright \cite{S99b}
introduced by the internal unit vector field used to covariantly describe the embedded subgroup(s). Bae, Cho and Kimm later clarified that
this internal vector did not introduce two extra degrees of freedom requiring gauge fixing, but rather non-canonical DOFs without EOMs \cite{BCK02},
while the proposed constraint was merely a consistency condition.
The interested reader is referred to
\cite{S99,CP02,KMS05,K06} for further details (see, also \cite{lav/mer2016,ren/wan/qu}).
Cho \textit{et al}.{} \cite{CHKP07} approached the issue with Dirac quantisation using second-class constraints.
In an earlier paper \cite{meD15} the authors took a new approach to
rigorously separate the dynamic DOFs from the topological ones. It is based on the Clairaut-type formulation, proposed by one of the authors (SD)
\cite{D10,D14}, a constraintless generalization of the standard Hamiltonian formalism to include Hessians with zero determinant. It provides
a rigorous treatment of the non-physical DOFs in the derivation of EOMs and the quantum commutation relations.
In this paper we apply our Clairaut approach to the gauged Lorentz group \cite{COK12,COP15} theory with a Lagrangian quadratic in curvature.
A review of the CDG decomposition is given in Section \ref{sec:A2}, beginning with an introduction in the context of QCD before illustrating its application to
$SU(2)\times SU(2)$. In Section \ref{sec:parallel} we illustrate the reduction of our theory to two copies of two-colour QCD
and use one-loop results from the latter to inform us about the former. Section \ref{sec:Clairaut} gives
a brief overview of the Clairaut-Hamiltonian formalism and uses it to study the quantisation of this theory,
sorting canonical dynamic DOFs from DOFs describing the embedding of important subgroups and finding deviations from canonical second quantisation even for dynamic fields.
We consider the one-loop effective dynamics in Section \ref{sec:effective}, discussing the effective
particle spectrum in Subsection \ref{subsec:operator} and the possible emergence of the Einstein-Hilbert (EH) term in Subsection \ref{subsec:EH}.
Our final discussion is in Section \ref{sec:discussion}.
\section{A review of the covariant Abelian decomposition of gravity} \label{sec:A2}
\subsection{\label{subsec:CDG}The CDG decomposition in $SU(2)$ QCD}
\subsubsection{Formalism}
Abelian dominance has played a major role in our understanding of the QCD vacuum, facilitating the demonstration of a monopole condensate.
That a magnetic condensate suitable for colour confinement can have lower energy than the perturbative vacuum has been known since the 1970s \cite{S77,NO78,tH81}, but in early work the internal direction
supporting the magnetic background could not be specified in a covariant manner, nor was there support for the magnetic condensate
being due to monopoles. The apparent existence of destabilising tachyon modes was also an issue for some time \cite{NO78,S82,CmeP04}.
These issues were rectified by the introduction of the CDG decomposition, which
specifies the internal direction of the Abelian subgroup in a gauge covariant manner,
allowing the internal direction to vary arbitrarily throughout spacetime.
The application of the CDG decomposition in $N$-colour ($SU(N)$) QCD is as follows:\newline
The Lie group $SU(N)$ has $N^{2}-1$ generators $\lambda^{(a)}$ ($a=1,\ldots
N^{2}-1$), of which $N-1$ are Abelian generators $\Lambda^{(i)}$ ($i=1,\ldots
N-1$).
The gauge transformed Abelian directions (Cartan generators) are denoted as%
\begin{equation}
\hat{n}_{i}(x)=U(x)^{\dagger}\Lambda^{(i)}U(x).
\end{equation}
Gluon fluctuations in the $\hat{n}_{i}$ directions are described by
$c_{\mu}^{(i)}$, where $\mu$ is the Minkowski index. There is a covariant
derivative which leaves the $\hat{n}_{i}$ invariant,
\begin{equation}
\label{eq:Dhat}\hat{D}_{\mu}\hat{n}_{i}(x)\equiv(\partial_{\mu}+g\vec{V}_{\mu
}(x)\times)\hat{n}_{i}(x)=0,
\end{equation}
where $\vec{V}_{\mu}(x)$ is of the form
\begin{equation}
\label{eq:vecV}\vec{V}_{\mu}(x)=c_{\mu}^{(i)}(x)\hat{n}_{i}(x)+\vec{C}_{\mu
}(x), \quad\vec{C}_{\mu}(x)=g^{-1}\partial_{\mu}\hat{n}_{i}(x)\times\hat
{n}_{i}(x).
\end{equation}
The vector notation refers to the internal space, and summation is implied
over $i=1,\ldots N-1$. For later convenience we define
\begin{align}
F^{(i)}_{\mu\nu}(x) = \partial_{\mu}c^{(i)}_{\nu}(x)- \partial_{\nu}%
c^{(i)}_{\mu}(x),& \\
\vec{H}_{\mu\nu}(x) = \partial_{\mu}\vec{C}_{\nu}(x)- \partial_{\nu}\vec
{C}_{\mu}(x) +g\vec{C}_{\mu}(x)\times\vec{C}_{\nu}(x)= &\partial_\mu \hat{n}_i(x) \times \partial_\nu \hat{n}_i(x), \label{eq:H} \\
H^{(i)}_{\mu\nu}(x) = \vec{H}_{\mu\nu}(x) \cdot\hat{n}_{i}(x),& \\
\vec{F}^{(i)}_{\mu\nu}(x) = F^{(i)}_{\mu\nu}(x) \hat{n}_i(x) + \vec{H}_{\mu\nu}(x)&.
\end{align}
The final equality in eqn~(\ref{eq:H}) follows from the definition in eqn~(\ref{eq:vecV}). That $\vec{H}_{\mu\nu}$ reduces to a cross product is significant, as it vanishes whenever $\mu$ and $\nu$ coincide.
The Lagrangian contains the square of this value, namely
\begin{equation}
H^{(i)}_{\mu\nu}(x) H_{(i)}^{\mu\nu}(x) = \left(\partial_\mu \hat{n}_i(x) \times \partial_\nu \hat{n}_i(x)\right) \cdot \left(\partial^\mu \hat{n}_i(x) \times \partial^\nu \hat{n}_i(x)\right).
\end{equation}
The form of eqn~(\ref{eq:vecV}) might suggest the possibility of third or higher time derivatives in a quadratic Lagrangian, but we have now seen that the specific form of the Cho connection
does not allow this.
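For completeness, eqn~(\ref{eq:Dhat}) can be checked directly in the $SU(2)$ case of a single $\hat{n}$ (our own verification, suppressing spacetime arguments), using $\hat{n}\cdot\hat{n}=1$ so that $\partial_\mu\hat{n}\cdot\hat{n}=0$:
\begin{align*}
\hat{D}_{\mu}\hat{n} &= \partial_{\mu}\hat{n} + g\left(c_{\mu}\hat{n} + g^{-1}\partial_{\mu}\hat{n}\times\hat{n}\right)\times\hat{n} \\
&= \partial_{\mu}\hat{n} + \left(\partial_{\mu}\hat{n}\times\hat{n}\right)\times\hat{n} \\
&= \partial_{\mu}\hat{n} + \left(\partial_{\mu}\hat{n}\cdot\hat{n}\right)\hat{n} - \left(\hat{n}\cdot\hat{n}\right)\partial_{\mu}\hat{n} = 0,
\end{align*}
where the first term in the bracket drops out because $\hat{n}\times\hat{n}=0$ and the last line uses the vector triple-product identity.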
The dynamical components of the gluon
field in the off-diagonal directions of the internal space vectors are denoted by $\vec{X}_{\mu}(x)$, so if $\vec{A}_{\mu}(x)$ is the gluon field then%
\begin{equation}
\vec{A}_{\mu}(x)=\vec{V}_{\mu}(x)+\vec{X}_{\mu}(x)=c_{\mu}^{(i)}(x)\hat{n}%
_{i}(x) +\vec{C}_{\mu}(x)+\vec{X}_{\mu}(x),
\end{equation}
where
\begin{equation}
\vec{X}_{\mu}(x) \bot\hat{n}_{i}(x),\; \forall\, 1\le i<N\,,\quad\vec{D}_{\mu}
=\partial_{\mu}+g\vec{A}_{\mu}(x).
\end{equation}
The Lagrangian density is still
\begin{equation} \label{eq:QCD}
\mathcal{L}_{gauge}(x) = -\frac{1}{4} \vec{R}_{\mu\nu}(x) \cdot\vec{R}^{\mu\nu}(x),
\end{equation}
where the field strength tensor of QCD expressed in terms of the CDG
decomposition is
\begin{align} \label{eq:fieldstrength}
\vec{R}_{\mu\nu}(x) & =\vec{F}_{\mu\nu}(x) +(\hat{D}_{\mu}\vec{X}_{\nu}(x)-\hat{D}_{\nu}\vec{X}_{\mu}(x)) +g\vec{X}_{\mu}(x)\times\vec{X}_{\nu}(x).
\end{align}
Gauge transformations are effected with a gauge parameter $\vec{\alpha}(x)$.
Under a gauge transformation $\delta$ with $SU(2)$ parameter $\vec{\alpha}(x)$
\begin{align} \label{eq:transform}
&\delta \hat{V}(x) = \, \hat{D}_\mu \vec{\alpha}(x) \nonumber \\
&\delta c_\mu(x) = \,(\partial_\mu \vec{\alpha}(x) \cdot \hat{n}(x)), \nonumber \\
&\delta \hat{n}(x) = \,\hat{n}(x) \times \vec{\alpha}(x), \nonumber \\
&\delta \vec{C}_\mu(x)= \,(\partial_\mu \vec{\alpha}(x))_{\perp\hat{n}} + g\vec{C}_\mu(x) \times \vec{\alpha}(x), \nonumber \\
&\delta \vec{X}_\mu(x) = \,g \,\vec{X}_\mu(x) \times \vec{\alpha}(x).
\end{align}
The form of the transform for $\vec{X}_\mu$ is the same as that for a coloured source, so that these components are sometimes described as \textquotedblleft valence\textquotedblright. This gauge transformation tells us two interesting things. The first is that the Abelian component $c_\mu$ combined with the Cho connection
$\vec{C}_\mu$ is enough to represent the full gauge symmetry even without the valence components $\vec{X}_\mu$; Cho \textit{et al}.~\cite{COK12,COP15} described this as the \textquotedblleft restricted\textquotedblright{} theory.
The second is that the valence components transform like
a source transforms. There is a corresponding situation in $N=2$ Yang-Mills where the valence gluons are interpreted as colour sources. The importance of this observation
is that we shall later discuss the possibility of mass generation for the valence gluons and this form for the gauge transformation leaves such mass terms covariant. We note however that
a bare mass for $\vec{X}_\mu$ cannot be inserted artificially without spoiling renormalisability.
\subsubsection{The degrees of freedom in the CDG decomposition} \label{subsubsec:DOF}
Henceforth we restrict ourselves to the $SU(2)$ theory, for which there is
only one $\hat{n}$ lying in a three dimensional internal space, and neglect
the $(i)$ indices.
The unit vector $\hat{n}$ possesses two DOFs, and so its inclusion in the gluon field together with the Abelian component $c_\mu$ and
the valence gluons $\vec{X}_\mu$ raises questions about the DOF of the decomposed gluon, with one paper \cite{S99b} advocating the gauge condition
\begin{equation} \label{eq:XfixQCD}
\hat{D}^\mu \vec{X}_\mu(x) = 0,
\end{equation}
to remove two apparent
extra degrees of freedom. The matter was settled by Bae \textit{et al}.~\cite{BCK02}, who demonstrated that the DOFs of $\hat{n}$ were not canonical but topological,
indicating the embedding of the Abelian subgroup in the gauge group. The canonical DOFs are carried by the components $c_\mu, \vec{X}_\mu$, and eqn~(\ref{eq:XfixQCD}) is
a consistency condition expected of valence gluons. Kondo \textit{et al}.{}~\cite{KMS06} considered a stronger condition guaranteed to be unaffected by Gribov copies.
The topological nature of $\hat{n}$ has significance beyond making the canonical DOFs add up correctly. As is well known, monopole configurations in gauge theories are topological
configurations corresponding to the embedding of an Abelian subgroup. The other important consequence is that $\hat{n}$ does not have
a canonical EOM from the Euler-Lagrange equation.
We took an alternative approach to this issue by applying a new method for finding the effects of degenerate variables called the Clairaut formalism.
We further assumed that, as a unit vector, its dynamics were best described by angular variables.
\subsection{CDG decomposition of $SU(2) \times SU(2)$ in Euclidean space} \label{subsec:labelEuclid}
As is well known \cite{KP08,PKT12,U56,K61,S64}, the non-compact nature of the Lorentz group causes Lorentz gauge theories to be non-positive semi-definite. In fact, our attempts to
apply the CDG decomposition to the Lorentz gauge field strength tensor in Minkowski space led to negative kinetic energy terms for some of the gauge fields
(not shown). As demonstrated by Pak \textit{et al}.{}~\cite{KP08,PKT12}, this can be avoided by Wick rotating the theory to Euclidean space and then either
considering effective theories or finding a way to rotate back later without spoiling the quantum theory.
This procedure also rotates the internal Lorentz group to $SO(4)$ which is locally isomorphic to
$SU(2)_R \times SU(2)_L$, corresponding to the right- and left- handed groups generated by
\begin{equation} \label{eq:generators}
\frac{1}{\sqrt{2}}\left(J_l \pm i K_l\right),
\end{equation}
where $J_l, K_l$ are the
rotation and boost operators, respectively.
The two $SU(2)$ subgroups in our gauge theory, though separate, are not independent but are built from the same rotation and boost operators, albeit in
combinations of opposite chirality.
It follows that their respective Abelian directions must correspond, but represent operators of different chirality.
We denote them $\hat{n}_R, \hat{n}_L$ respectively, using these suffixes for other field objects also when appropriate, and apply previously published analyses \cite{S77,tH81,CP02,Cme04,CmeP04}
to each symmetry group.
We apply the CDG decomposition to the $SU(2)_R \times SU(2)_L$ gauge group.
Their Abelian components we denote $_Rc_\mu$ and $_L{c}_\mu$ respectively and the valence components we denote as
$_R\vec{X}_\mu$ and $_L\vec{X}_\mu$ respectively. For each chirality $\chi \in \{R,L\}$ we have the Cho connection
\begin{equation} \label{eq:connection}
_\chi C_\mu(x) = g^{-1}\partial_{\mu} \,_\chi\hat{n}(x) \times {}_\chi\hat{n}(x),
\end{equation}
and monopole field strength
\begin{align}
_\chi \vec{H}_{\mu\nu} \equiv \partial_\mu {}_\chi\vec{C}_\nu(x) - \partial_\nu {}_\chi\vec{C}_\mu(x) + g \, {}_\chi\vec{C}_\mu(x) \times {}_\chi\vec{C}_\nu(x)
&= \partial_\mu \, \hat{n}_\chi(x) \times \partial_\nu \,\hat{n}_\chi(x) \nonumber \\
&\equiv {}_\chi H_{\mu\nu}(x) \,\hat{n}_\chi(x).
\end{align}
\section{\label{sec:parallel}The vacuum of $SU(2)_R \times SU(2)_L$}
Since the component $SU(2)$ symmetry groups have generators mutually orthogonal in the internal space their contributions to the ground state may be calculated independently
and summed. Furthermore, their identical fundamental dynamics imply that $_\chi H_{\mu\nu}$ is independent of $\chi$ whenever no internal vector is involved,
so we replace it with $H_{\mu\nu}$ henceforth.
It is sufficient to calculate to one loop to find a non-zero monopole condensate in the effective action of $SU(2)$ Yang-Mills theory. References \cite{CP02,Cme04,CmeP04} have shown this by
a variety of methods. Useful material on this theory at one-loop order can also be found in references \cite{IZbook12,PSbook95,Wbook96}.
Calculating the relevant one-loop Feynman diagrams in Feynman gauge with dimensional regularisation \cite{Cme04,CmeP04}, we have
\begin{equation}
\Delta S_{eff} = -\frac{11g^2}{96} \sum_{\chi=R,L} \int d^4p\,_\chi\vec{F}_{\mu\nu}(p)\,_\chi\vec{F}_{\mu\nu}(-p) \left( \frac{2}{\epsilon} - \gamma - \ln\Big(\frac{p^2}{\mu^2}\Big)\right).
\end{equation}
An imaginary part is generated by the $\ln\frac{p^2}{\mu^2}$ term only when the momentum $p$ is timelike, leading to the well-known result \cite{S51,Cme04,CmeP04}
that electric backgrounds are unstable
but magnetic ones are not. Using this information we then have the effective potential
\begin{equation}
V = \frac{H^2}{g^2}
\left[1 + \frac{11g^2}{24} \Big(\ln\frac{\sqrt{H^2}}{\mu^2} - c\Big)\right].
\end{equation}
It should be remembered that this close parallel with the corresponding $N=2$ calculation does not hold beyond one loop because then there are diagrams including fields from both $SU(2)$ subgroups.
Defining the running coupling $\bar{g}$ by \cite{Cme04,CmeP04}
\begin{equation}
\frac{\partial^2V}{\partial H^2} \Big|_{\sqrt{H^2}=\bar{\mu}^2}
=\frac{1}{\bar{g}^2}
\end{equation}
leads to a non-trivial local minimum at
\begin{equation}
\langle H \rangle = \bar{\mu}^2 \exp\Big(-\frac{24\pi^2}{11\bar{g}^2} + 1\Big).
\end{equation}
The specific value of $H^2$ is less important than knowing that it is strictly positive and lies along two orthogonal directions in the $SU(2)_R\times SU(2)_L$ internal space.
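As a quick numeric sanity check on the shape of this effective potential, the sketch below locates its non-trivial minimum by a simple grid search. The values $\bar{g}=1$, $\bar{\mu}=1$, $c=0$ are illustrative assumptions only, not the conventions fixed by the renormalisation condition above; the point is just that $V$ develops a minimum at strictly positive $\sqrt{H^2}$ with $V<0$ there.

```python
import math

def V(h, g=1.0, mu=1.0, c=0.0):
    """Effective potential as a function of h = sqrt(H^2) > 0 (illustrative units)."""
    return (h**2 / g**2) * (1.0 + (11.0 * g**2 / 24.0) * (math.log(h / mu**2) - c))

# crude grid search for the non-trivial minimum on (0, 1]
hs = [i * 5e-5 for i in range(1, 20001)]
h_min = min(hs, key=V)

print(h_min, V(h_min))  # a strictly positive condensate with V(h_min) < 0
```

For these toy values the minimum sits near $h\approx\mu^2 e^{c-(2+a)/(2a)-? }$ with $a=11g^2/24$; the precise exponent depends on the coupling conventions, so only the qualitative feature (a non-trivial minimum) should be read off.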
\ignore{
Remembering the form of the cross product in eqn (\ref{eq:cross}), the three point interactions involving the Lorentz-monopole component are limited to the sets $\{C_\mu,c_\nu,\vec{X}_\lambda\}$
and $\{C_\mu(x),\tilde{c}_\nu(x),\vec{X}_\lambda(x)\}$ plus those involving ghost terms, which is clearly just a duplicate of the three-point vertices in the CDG decomposition of $N=2$ Yang-Mills
theory (see Section \ref{subsec:CDG} or references \cite{Cme04,CmeP04}). This is almost the case also for four-point interactions involving the monopole component, except for an additional vertex for
the set $\{C_\mu(x),\vec{X}_\nu(x),\vec{X}_\lambda(x)\}$. This clearly indicates that differences between the two theories will emerge, but not at
one-loop level. We may, therefore, consider effects from this theory at one-loop to be double those from Yang-Mills to this theory up to one-loop level, but must be careful beyond that.
Working through the same one-loop calculations as in the analysis of $SU(2)$ Yang-Mills theory \cite{Cme04,CmeP04},
finds two copies of each of the contributing Feynman diagrams calculated by one of the authors (MLW) in conjunction with Cho \cite{Cme04} and also Pak \cite{CmeP04},
the other two being quadratically divergent and therefore not contributing after renormalisation.
We can find the same result using $\zeta$-function renomalisation with appropriate considerations for causality.
We find therefore that the analysis in these two papers holds in this effective theory yielding a stable vacuum condensate at one loop.
}
\section{Application of Clairaut formalism to the Rotation-Boost decomposition of the gravitational connection} \label{sec:Clairaut}
\subsection{A review of the Hamiltonian-Clairaut formalism}
Here we review the main ideas and formulae of the Clairaut-type formalism for
singular theories \cite{D10,D11,dup2018e}. Let us consider a singular Lagrangian
$L\left( q^{A},v^{A}\right) =L^{\mathrm{deg}}\left( q^{A},v^{A}\right) $,
$A=1,\ldots n$, which is a function of $2n$ variables ($n$ generalized
coordinates $q^{A}$ and $n$ velocities $v^{A}=\dot{q}^{A}=dq^{A}/dt$) on the
configuration space $\mathsf{T}M$, where $M$ is a smooth manifold, for which
the Hessian's determinant is zero. Therefore, the rank of the Hessian matrix $W_{AB}%
=\tfrac{\partial^{2}L\left( q^{A},v^{A}\right) }{\partial v^{A}\partial
v^{B}}$ is $r<n$, and we suppose that $r$ is constant. We can rearrange the
indices of $W_{AB}$ in such a way that a nonsingular minor of rank $r$ appears
in the upper left corner. Then, we represent the index $A$ as follows: if
$A=1,\ldots,r$, we replace $A$ with $i$ (the \textquotedblleft
regular\textquotedblright\ or \textquotedblleft canonical\textquotedblright~index), and, if $A=r+1,\ldots,n$ we replace $A$
with $\alpha$ (the \textquotedblleft degenerate\textquotedblright\ or \textquotedblleft non-canonical\textquotedblright~index).
Obviously, $\det W_{ij}\neq0$, and $\operatorname{rank}W_{ij}=r$. Thus any set
of variables labelled by a single index splits as a disjoint union of two
subsets. We call those subsets regular (having Latin indices) and degenerate
(having Greek indices). Canonical DOFs are obviously described by the former of these subsets while other DOFs can be placed in the second if their contribution to the Wronskian vanishes.
As was shown in \cite{D10,D11}, the \textquotedblleft
physical\textquotedblright\ Hamiltonian can be presented in the form%
\begin{equation}
H_{phys}\left( q^{A},p_{i}\right) =\sum_{i=1}^{r}p_{i}V^{i}\left(
q^{A},p_{i},v^{\alpha}\right) +\sum_{\alpha=r+1}^{n}B_{\alpha}\left(
q^{A},p_{i}\right) v^{\alpha}-L\left( q^{A},V^{i}\left( q^{A}%
,p_{i},v^{\alpha}\right) ,v^{\alpha}\right) , \label{hph1}%
\end{equation}
where the functions%
\begin{equation}
B_{\alpha}\left( q^{A},p_{i}\right) \overset{def}{=}\left. \dfrac{\partial
L\left( q^{A},v^{A}\right) }{\partial v^{\alpha}}\right\vert _{v^{i}%
=V^{i}\left( q^{A},p_{i},v^{\alpha}\right) } \label{h}%
\end{equation}
are independent of the unresolved velocities $v^{\alpha}$ since
$\operatorname{rank}W_{AB}=r$. Also, the r.h.s. of (\ref{hph1}) does
not depend on the degenerate velocities $v^{\alpha}$%
\begin{equation}
\dfrac{\partial H_{phys}}{\partial v^{\alpha}}=0, \label{hpv}%
\end{equation}
which justifies the term \textquotedblleft physical\textquotedblright. The
Hamilton-Clairaut system which describes any singular Lagrangian classical
system (satisfying the second order Lagrange equations) has the form%
\begin{align}
\dfrac{dq^{i}}{dt}=&\left\{ q^{i},H_{phys}\right\} _{phys}-\sum
_{\beta=r+1}^{n}\left\{ q^{i},B_{\beta}\right\} _{phys}\dfrac{dq^{\beta}%
}{dt},\ \ i=1,\ldots r\label{q1}\\
\dfrac{dp_{i}}{dt}=&\left\{ p_{i},H_{phys}\right\} _{phys}-\sum
_{\beta=r+1}^{n}\left\{ p_{i},B_{\beta}\right\} _{phys}\dfrac{dq^{\beta}%
}{dt},\ \ i=1,\ldots r\label{q2}\\
\sum_{\beta=r+1}^{n}&\left[ \dfrac{\partial B_{\beta}}{\partial q^{\alpha}%
}-\dfrac{\partial B_{\alpha}}{\partial q^{\beta}}+\left\{ B_{\alpha}%
,B_{\beta}\right\} _{phys}\right] \dfrac{dq^{\beta}}{dt}\nonumber\\
& =\dfrac{\partial H_{phys}}{\partial q^{\alpha}}+\left\{ B_{\alpha
},H_{phys}\right\} _{phys},\ \ \ \ \ \ \ \ \ \ \ \ \alpha=r+1,\ldots,n
\label{q3}%
\end{align}
where the \textquotedblleft physical\textquotedblright\ Poisson bracket (in
regular variables $q^{i}$, $p_{i}$) is%
\begin{equation}
\left\{ X,Y\right\} _{phys}=\sum_{i=1}^{r}\left( \frac{\partial
X}{\partial q^{i}}\frac{\partial Y}{\partial p_{i}}-\frac{\partial Y}{\partial
q^{i}}\frac{\partial X}{\partial p_{i}}\right) . \label{xyp}%
\end{equation}
Whether the variables $B_{\alpha}\left( q^{A},p_{i}\right) $ have a
nontrivial effect on the time evolution and commutation relations is
equivalent to whether or not the so-called \textquotedblleft$q^{\alpha}$-field
strength\textquotedblright%
\begin{equation}
\mathcal{F}_{\alpha\beta}=\dfrac{\partial B_{\beta}}{\partial q^{\alpha}%
}-\dfrac{\partial B_{\alpha}}{\partial q^{\beta}}+\left\{ B_{\alpha}%
,B_{\beta}\right\} _{phys} \label{f}%
\end{equation}
is non-zero. The reader is referred to references \cite{D10,D14,D11} for more details.
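The mechanics of this formalism can be illustrated on a toy singular Lagrangian. The example below is an assumption of ours for illustration only (it is not drawn from \cite{D10,D11}): $L=\tfrac12 v_1^2+q_2v_3-q_3v_2$, whose Hessian over the velocities has rank one, so $q_1$ is regular and $q_2,q_3$ are degenerate. The sketch computes $B_\alpha$ from eqn~(\ref{h}) and the $q^\alpha$-field strength from eqn~(\ref{f}), which comes out non-zero.

```python
import sympy as sp

q1, q2, q3, v1, v2, v3, p1 = sp.symbols('q1 q2 q3 v1 v2 v3 p1')

# toy singular Lagrangian: only v1 appears quadratically
L = sp.Rational(1, 2) * v1**2 + q2 * v3 - q3 * v2

# Hessian over all velocities has rank 1, so q1 is regular and q2, q3 degenerate
W = sp.hessian(L, (v1, v2, v3))
assert W.rank() == 1

# resolve the regular velocity: p1 = dL/dv1  ->  v1 = V1(p1)
V1 = sp.solve(sp.Eq(p1, sp.diff(L, v1)), v1)[0]

# B_alpha = dL/dv_alpha at v1 = V1; independent of the unresolved velocities
B2 = sp.diff(L, v2).subs(v1, V1)   # -> -q3
B3 = sp.diff(L, v3).subs(v1, V1)   # ->  q2

# "physical" Poisson bracket over the regular pair (q1, p1) only
def pb_phys(X, Y):
    return sp.diff(X, q1) * sp.diff(Y, p1) - sp.diff(Y, q1) * sp.diff(X, p1)

# q^alpha-curvature F_{23} = dB3/dq2 - dB2/dq3 + {B2, B3}_phys
F23 = sp.diff(B3, q2) - sp.diff(B2, q3) + pb_phys(B2, B3)
print(F23)  # -> 2
```

Since $\mathcal{F}_{23}\neq0$ here, the degenerate pair $(q_2,q_3)$ feeds back into the evolution of the regular variables exactly as in eqs.~(\ref{q1})--(\ref{q3}).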
\subsection{The contribution of the Clairaut formalism}
\subsubsection{$q^\alpha$ curvature} \label{subsec:A2curvature}
In this notation,
the angles $\phi, \theta$ are seen, in parallel with our previously published analysis \cite{meD15}, to be degenerate DOFs with unresolved velocities.
Indeed, their contribution to both Lagrangian and Hamiltonian vanishes when their derivatives vanish.
We use the CDG decomposition, in which the embedding of the dominant $U(1)$ direction is denoted by $\hat{n}$, defined by
\begin{equation} \label{eq:npolar}
\hat{n}_\chi(x)\equiv\cos\theta(x) \sin\phi(x) \,_\chi\hat{e}_{1} +\sin\theta(x) \sin
\phi(x)\,_\chi\hat{e}_{2} +\cos\phi(x)\,_\chi\hat{e}_{3}.
\end{equation}
We note that the angles $\phi,\theta$ are independent of $\chi$ for the reasons discussed after eqn (\ref{eq:generators}) and need not be labelled.
The following will prove useful:
\begin{align} \label{eq:nphipolar}
\sin\phi(x) \,_\chi\hat{n}_{\theta}(x) & \equiv\int dy^{4} \frac{d\,_\chi\hat{n}%
(x)}{d\theta(y)} =\sin\phi(x)\,(-\sin\theta(x)\,_\chi\hat{e}_{1}+\cos
\theta(x)\,_\chi\hat{e}_{2}), \nonumber\\
_\chi\hat{n}_{\phi}(x) & \equiv\int dy^{4} \frac{d\hat{n}_\chi(x)}{d\phi(y)}
=\cos\theta(x)\cos\phi(x)\,_\chi\hat{e}_{1} \nonumber \\
&\hspace{20mm}+\sin\theta(x)\cos\phi(x)\,_\chi\hat{e}%
_{2}-\sin\phi(x)\,_\chi\hat{e}_{3}.
\end{align}
For later convenience we note that
\begin{align} \label{eq:nphiphi}
_\chi\hat{n}_{\phi\phi}(x) = -_\chi\hat{n}(x),\; &
_\chi\hat{n}_{\theta\theta}(x) = -\sin\phi(x) \;_\chi\hat{n}(x) - \cos\phi(x) \;_\chi\hat{n}_\phi(x)\;, \nonumber \\
&_\chi\hat{n}_{\theta\phi}(x) = 0,\;\;_\chi\hat{n}_{\phi\theta}(x) = \cos\phi(x) \;_\chi\hat{n}_\theta(x),
\end{align}
and that the vectors $_\chi\hat{n},\,_\chi\hat{n}_{\phi},\,_\chi\hat{n}_{\theta}$, with $_\chi\hat{n}=\,_\chi\hat{n}_{\phi}\times \,_\chi\hat{n}_{\theta}$,
form an orthonormal basis of the internal space.
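The orthonormality claim and the cross-product relation can be verified numerically at an arbitrary point; the angle values below are illustrative assumptions.

```python
import numpy as np

phi, theta = 0.7, 1.2  # arbitrary internal angles at a spacetime point (illustrative)

# n-hat and its normalised angular derivatives, per eqs (eq:npolar), (eq:nphipolar)
n = np.array([np.cos(theta) * np.sin(phi), np.sin(theta) * np.sin(phi), np.cos(phi)])
n_theta = np.array([-np.sin(theta), np.cos(theta), 0.0])
n_phi = np.array([np.cos(theta) * np.cos(phi),
                  np.sin(theta) * np.cos(phi), -np.sin(phi)])

basis = [n, n_phi, n_theta]
# orthonormality: the Gram matrix of mutual dot products is the identity
G = np.array([[a @ b for b in basis] for a in basis])
assert np.allclose(G, np.eye(3))

# and n = n_phi x n_theta
assert np.allclose(np.cross(n_phi, n_theta), n)
```

The same check passes for any $(\phi,\theta)$, since the three vectors are just the spherical frame of the internal two-sphere.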
Substituting the above into the Cho connection in eqn~(\ref{eq:vecV}) gives
\begin{align}
g \,_\chi\vec{C}_{\mu}(x) & =(\cos\theta(x) \cos\phi(x) \sin\phi(x)\partial_{\mu
}\theta(x) +\sin\theta(x)\partial_{\mu}\phi(x))\,_\chi\hat{e}_{1}\nonumber\\
& +(\sin\theta(x) \cos\phi(x) \sin\phi(x) \partial_{\mu}\theta(x) -\cos
\theta(x) \partial_{\mu}\phi(x))\,_\chi\hat{e}_{2} -\sin^{2}\phi(x)\partial_{\mu}%
\theta(x)\,_\chi\hat{e}_{3}\nonumber\\
& =\sin\phi(x)\,\partial_{\mu}\theta(x)\,_\chi\hat{n}_{\phi}(x)-\partial_{\mu
}\phi(x)\,_\chi\hat{n}_{\theta}(x)
\end{align}
from which it follows that
\begin{equation}
g^{2} \,_\chi\vec{C}_{\mu}(x)\times \,_\chi\vec{C}_{\nu}(x) =\sin\phi(x)(\partial_{\mu}\phi(x)
\partial_{\nu}\theta(x) -\partial_{\nu}\phi(x)\partial_{\mu}\theta(x))\hat
{n}_\chi(x),\label{eq:CxC}%
\end{equation}
where we again see that higher order time derivatives are thwarted.
Since their Lagrangian terms do not fit the form of canonical DOFs, we consider them instead to be degenerate, having no canonical DOFs of their own but manifesting through their alteration
of the EOMs of the dynamic variables.
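The identity in eqn~(\ref{eq:CxC}) follows from $g\,_\chi\vec{C}_\mu=\sin\phi\,\partial_\mu\theta\,_\chi\hat{n}_\phi-\partial_\mu\phi\,_\chi\hat{n}_\theta$ together with $\hat{n}=\hat{n}_\phi\times\hat{n}_\theta$, and can be confirmed numerically; the angle and derivative values below are arbitrary illustrative assumptions.

```python
import numpy as np

phi, theta = 0.7, 1.2            # internal angles at a point (illustrative)
dmu_phi, dmu_theta = 0.3, -0.5   # stand-ins for the mu-derivatives of phi, theta
dnu_phi, dnu_theta = 1.1, 0.4    # stand-ins for the nu-derivatives

n = np.array([np.cos(theta) * np.sin(phi), np.sin(theta) * np.sin(phi), np.cos(phi)])
n_theta = np.array([-np.sin(theta), np.cos(theta), 0.0])
n_phi = np.array([np.cos(theta) * np.cos(phi),
                  np.sin(theta) * np.cos(phi), -np.sin(phi)])

# g C_mu = sin(phi) d_mu(theta) n_phi - d_mu(phi) n_theta
gC_mu = np.sin(phi) * dmu_theta * n_phi - dmu_phi * n_theta
gC_nu = np.sin(phi) * dnu_theta * n_phi - dnu_phi * n_theta

# g^2 C_mu x C_nu = sin(phi) (d_mu(phi) d_nu(theta) - d_nu(phi) d_mu(theta)) n
lhs = np.cross(gC_mu, gC_nu)
rhs = np.sin(phi) * (dmu_phi * dnu_theta - dnu_phi * dmu_theta) * n
assert np.allclose(lhs, rhs)
```

Only the antisymmetric combination of the angle derivatives survives, which is why the cross term points purely along $\hat{n}_\chi$.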
Finding these alterations first requires the Clairaut-related quantities
\begin{align}
B_{\phi}(x) & = \int dy^{3} \frac{\delta\mathcal{L}}%
{\delta\,_{x}\partial_{0} \phi(x)}\nonumber\\
& =\sum_{\chi = R,L} \int dy^{3} \int dy_0 \, \delta(x_{0} - y_{0}) \Big( \sin\phi(y)
_{y}\partial_{\mu}\theta(y)\hat{n}_\chi(y) \nonumber \\
&\hspace{25mm}+\,_\chi\hat{n}_{\theta}(y)\times {}_\chi\vec{X}_{\mu
}(y)\Big)\cdot{}_\chi\vec{R}_{0\mu}(y)\, \delta^{3}(\vec{x}-\vec{y})\nonumber\\
& = \sum_{\chi = R,L}\Big(\sin\phi(x) \,\partial_{\mu}\theta(x)\hat{n}_\chi(x) +\,_\chi\hat{n}_{\theta
}(x)\times{}_\chi\vec{X}_{\mu}(x)\Big)\cdot{}_\chi\vec{R}_{0\mu}(x),\label{eq:gphi}\\
B_{\theta}(x)&= \int dy^{3} \frac{\delta\mathcal{L}%
}{\delta\,_{x}\partial_{0} \theta(x)}\nonumber\\
& =-\sum_{\chi = R,L} \int dy^{3} \int dy_{0} \delta(x_{0}-y_{0})\sin\phi(y) \Big( _{y}%
\partial_{\mu}\phi(y)\,\hat{n}_\chi(y) \nonumber\\
&\hspace{25mm}+\sin\phi(y)\,_\chi\hat{n}_{\phi}(y)\times{}_\chi\vec{X}_{\mu}(y)\Big)\cdot{}_\chi\vec{R}_{0\mu}(y)
\,\delta^{3}(\vec{x}-\vec{y})\nonumber\\
& =-\sum_{\chi = R,L} \sin\phi(x) \Big(\partial_{\mu}\phi(x)\,\hat{n}_\chi(x)+
\,_\chi\hat{n}_{\phi}(x)\times{}_\chi\vec{X}_{\mu}(x)\Big)\cdot{}_\chi\vec{R}_{0\mu}(x).
\label{eq:gtheta}%
\end{align}
\begin{align}
\frac{\delta B_{\phi}(x)}{\delta\theta(y)}=
& \sum_{\chi = R,L} \Big( \sin\phi(x)\,_\chi\hat{n}_{\theta\theta}(x) \times {}_\chi\vec{X}_\mu \cdot {}_\chi\vec{R}_{0\mu}(x) - \,_\chi T_\phi (x) \Big)
\delta^{4}(x-y), \\
\frac{\delta B_{\theta}(x)}{\delta\phi(y)}=
& -\sum_{\chi = R,L}\Big( \cos \phi(x)\Big(\partial_{\mu}\phi(x)\,\hat{n}_\chi(x)+\,_\chi\hat{n}_{\phi}(x)
\times{}_\chi\vec{X}_{\mu}(x)\Big)\cdot \Big( {}_\chi\vec{R}_{0\mu}(x) + \,_\chi\vec{H}_{0\mu}(x) \Big) \nonumber \\
& \hspace{90mm} + \,_\chi T_\theta (x) \Big)\delta^{4}(x-y),
\end{align}
where
\begin{align}
_\chi T_\phi (x) = & \,\partial_k \Big[ \sin \phi(x) \, \hat{n}_\chi \cdot {}_\chi\vec{R}_{0k} (x)
- \Big( \sin \phi(x) \partial_k \theta (x)
+ \,_\chi\hat{n}_\theta (x) \times {}_\chi\vec{X}_k \cdot \hat{n}_\chi \Big) \partial_0 \phi(x) \Big], \\
_\chi T_\theta (x) = & - \partial_k \Big[ \sin \phi(x) \Big( \hat{n}_\chi \cdot {}_\chi\vec{R}_{0k} (x)
+ \Big( \partial_k \phi (x) + \,_\chi\hat{n}_\phi (x) \times {}_\chi\vec{X}_k \cdot \hat{n}_\chi \Big) \partial_0 \theta(x)
\Big)\Big],
\end{align}
are the surface terms arising from the derivatives
$\frac{\delta (\partial \theta)}{\delta \theta},\,\frac{\delta (\partial \phi)}{\delta \phi}$, and the Latin index $k$
is used to indicate that only spatial indices are summed over.
This yields the $q^{\alpha}$-curvature
\begin{align}
\label{eq:curvature}\mathcal{F}_{\theta\phi}(x) =&\int dy^{4} \Big( \frac
{\delta B_{\theta}(x)}{\delta\phi(y)}-\frac{\delta B_{\phi}(x)}{\delta
\theta(y)}\Big)\delta^{4}(x-y) +\{B_{\phi}(x),B_{\theta}(x)\}_{phys}%
\nonumber\\
=&-\sum_{\chi = R,L} \cos\phi(x)\Big(\partial_{\mu}\phi(x)\,\hat{n}_\chi(x)+ \,_\chi\hat{n}_{\phi}(x) \times
{}_\chi\vec{X}_{\mu}(x) \Big) \cdot \Big({}_\chi\vec{R}_{0\mu}(x) + \,_\chi\vec{H}_{0\mu}(x) \Big) \nonumber \\
&-\sum_{\chi = R,L} \sin\phi(x)\,_\chi\hat{n}_{\theta\theta}(x) \times {}_\chi\vec{X}_\mu(x)
\cdot{}_\chi\vec{R}_{\mu 0}(x)
+ \sum_{\chi = R,L}\Big( \,_\chi T_\phi (x) - \,_\chi T_\theta (x)\Big).
\end{align}
where we have used that the bracket $\{B_{\phi},B_{\theta}\}_{phys} $
vanishes because $B_{\phi}$ and $B_{\theta}$ share the
same dependence on the dynamic DOFs and their derivatives.
In earlier work on the Clairaut formalism \cite{D11,D10} this was called the
$q^{\alpha}$-field strength, but we call it $q^{\alpha}$-curvature in quantum
field theory applications to avoid confusion.
This non-zero $\mathcal{F}_{\theta\phi}$ is necessary, and usually
sufficient, to indicate a non-dynamic contribution to the conventional
Euler-Lagrange EOMs. More
significant is a corresponding alteration of the quantum commutators, with
repercussions for canonical quantisation and the particle number.
\subsubsection{Corrections to the equations of motion}
Generalizing eqs.~(7.1,7.3,7.5) in \cite{D10} (see also the discussion around eqn~(\ref{hph1})),
\begin{equation}
\label{eq:alterEOM}\partial_{0}q(x)=\{q(x),H_{phys}\}_{new}= \frac{\delta
H_{phys}}{\delta p(x)} -\int dy^{4} \sum_{\alpha=\phi,\theta}
\frac{\delta B_{\alpha}(y)}{\delta p(x)} \partial_{0}\alpha(y),
\end{equation}
the derivative of the Abelian component, complete with corrections from the
monopole background, is
\begin{equation} \label{eq:altEOMc}
\partial_{0} \,_\chi c_{\sigma}(x) =\frac{\delta H_{phys}}{\delta\,_\chi\Pi^{\sigma}(x)}
-\int dy^{4} \sum_{\alpha=\phi,\theta}
\frac{\delta B_{\alpha}(y)}{\delta\,_\chi\Pi^{\sigma}(x)}\partial_{0}\alpha(y).
\end{equation}
The effect of the second term is to remove the monopole contribution to
$\frac{\delta H_{phys}}{\delta\,_\chi\Pi^{\sigma}}$. To see this, consider that, by construction,
the monopole contribution to the Lagrangian and Hamiltonian is dependent on the
time derivatives of $\theta,\phi$, so the monopole component of
$\frac{\delta H_{phys}}{\delta\,_\chi\Pi^{\sigma}}$ is
\begin{align}
\frac{\delta}{\delta\,_\chi\Pi_{\sigma}(x)} H_{phys} |_{\dot{\theta}\dot{\phi}}
=& \frac{\delta}{\delta\,_\chi\Pi_{\sigma}(x)} \Big(
\frac{\delta H_{phys}}{\delta \partial_0 \theta(x)} \partial_0 \theta(x)
+ \frac{\delta H_{phys}}{\delta \partial_0 \phi(x)} \partial_0 \phi(x)
\Big) \nonumber \\
=& \frac{\delta}{\delta\,_\chi\Pi_{\sigma}(x)} \Big(
\frac{\delta L_{phys}}{\delta \partial_0 \theta(x)} \partial_0 \theta(x)
+ \frac{\delta L_{phys}}{\delta \partial_0 \phi(x)} \partial_0 \phi(x)
\Big) \nonumber \\
=& \frac{\delta}{\delta\,_\chi\Pi_{\sigma}(x)} \Big(
B_\theta (x) \partial_0 \theta (x) + B_\phi (x) \partial_0 \phi(x) \Big),
\end{align}
which is a consistency condition for eq.~(\ref{eq:altEOMc}). This confirms the necessity of
treating the monopole as a non-dynamic field.
We now observe that
\begin{equation}
\frac{\delta B_{\theta}(x)}{\delta \,_\chi c_{\sigma}(y)} = \frac{\delta B_{\phi}%
(x)}{\delta \,_\chi c_{\sigma}(y)} = 0,
\end{equation}
from which it follows that the EOM of
$_\chi c_{\sigma}$ receives no correction. However its $\{,\}_{phys}$ contribution,
corresponding to the terms in the conventional EOM for the Abelian component,
already contains a contribution from the monopole field strength.
Repeating the above steps for the valence gluons $_\chi\vec{X}_{\mu}$, assuming $\sigma \ne 0$,
and combining
\begin{equation}
\hat{D}_{0}\,_\chi\vec{\Pi}_{\sigma}(x) =\frac{\delta H}{\delta\,_\chi\vec{X}_{\sigma}(x)}
-\int dy^{4} \sum_{\alpha=\phi,\theta}
\frac{\delta B_{\alpha}(y)}{\delta\,_\chi\vec{X}_{\sigma}(x)}\partial_{0}\alpha(y)
\end{equation}
with
\begin{align}
\frac{\delta B_{\phi}(y)}{\delta\,_\chi\vec{X}_{\sigma}(x)} =&-\Big(\Big(
\sin\phi(y)_{y}\partial_{\sigma}\theta(y)\hat{n}_\chi(y)
+\,_\chi\hat{n}_{\theta}(y)\times\,_\chi\vec{X}_{\sigma}(y)\Big)
\times \,_\chi\vec{X}_0(y) \nonumber \\
& \hspace{55mm}-\,_\chi\hat{n}_\phi(y) \hat{n}_\chi \cdot \,_\chi\vec{R}_{0\sigma}(y) \Big)\delta^{4}(x-y),
\end{align}%
\begin{align}
\frac{\delta B_{\theta}(y)}{\delta\,_\chi\vec{X}^{\sigma}(x)} =&\Big(\Big(
\partial_{\sigma}\phi(y)\hat{n}_\chi(y)+\sin\phi(y)\,
_\chi\hat{n}_{\phi}(y)\times\,_\chi\vec{X}_{\sigma}(y)\Big)
\times \,_\chi\vec{X}_0(y) \nonumber \\
&\hspace{45mm}-\sin\phi(y) \,_\chi\hat{n}_{\theta}(y) \hat{n}_\chi(y) \cdot \,_\chi\vec{R}_{0\sigma}(y)\Big) \delta^{4}(x-y),
\end{align}
gives
\begin{align}
\hat{D}_{0}\,_\chi\vec{\Pi}_{\sigma}(x) = &
\frac{\delta H}{\delta\,_\chi\vec{X}_{\sigma}(x)}
-\frac{1}{2}\Big(\sin\phi(x)\Big(\partial_{\sigma}\phi(x)\partial_{0}%
\theta(x) -\partial_{\sigma}\theta(x)\partial_{0}\phi(x)\Big)\hat{n}_\chi(x)\nonumber\\
& +\Big(\sin\phi(x)\,_\chi\hat{n}_{\phi}(x)\partial_{0}\theta(x) -\,_\chi\hat{n}_{\theta}%
(x)\partial_{0}\phi(x)\Big) \times\,_\chi\vec{X}_{\sigma}(x)\Big)\times \,_\chi\vec{X}_0(x) \nonumber\\
= & \frac{\delta H}{\delta\,_\chi\vec{X}_{\sigma}(x)} -\frac{1}{2}
g^{2}\Big(\,_\chi\vec{C}_{\sigma}(x)\times\,_\chi\vec{C}_{0}(x) +\,_\chi\vec{C}_{0}(x)\times
\,_\chi\vec{X}_{\sigma}(x) \Big)\times\,_\chi\vec{X}_{0}(x).\label{eq:XEOM}%
\end{align}
This is the converse of the situation for the Abelian gluon: the time derivative of
$_\chi\vec{X}_{\sigma}$
is uncorrected, while the EOM of its conjugate momentum
receives a correction which cancels the monopole's electric contribution to
$\{\hat{D}_{0}\,_\chi\vec{X}_{\sigma},H_{phys}\}_{phys}$. This is required by
the conservation of topological current.
\subsubsection{Corrections to the commutation relations} \label{subsubsec:commutation}
Corrections to the classical Poisson bracket correspond to corrections to the
equal-time commutators in the quantum regime. We shall see corrections for commutators with fields of different $SU(2)_\chi$ representations even though there were no such crossover terms
in the effective potential calculation.
Denoting conventional commutators as $[,]_{phys}$ and the corrected ones as $[,]_{new}$, for
$\mu,\nu\ne0$ we have
\begin{align} \label{eq:ccnew}
[_\chi c_{\mu}(x),\,_{\tilde{\chi}} c_{\nu}(z)]_{new} = [\,_\chi c_{\mu}(x),\,_{\tilde{\chi}} c_{\nu}(z)]_{phys}\hspace{65mm}\nonumber\\
-\int dy^{4}\Big( \frac{\delta B_{\theta}(y)}{\delta\,_\chi\Pi_{\mu}(x)}
\mathcal{F}_{\theta\phi}^{-1}(z)
\frac{\delta B_{\phi}(y)}{\delta\,_{\tilde{\chi}}\Pi_{\nu}(z)} - \frac{\delta B_{\phi}%
(y)}{\delta\,_\chi\Pi_{\mu}(x)} \mathcal{F}_{\phi\theta}^{-1}(z)
\frac{\delta B_{\theta}(y)}{\delta\,_{\tilde{\chi}}\Pi_{\nu}(z)}
\Big) \delta^{4}(x-z)\nonumber\\
= [\,_\chi c_{\mu}(x),\,_{\tilde{\chi}} c_{\nu}(z)]_{phys} \hspace{95mm}\nonumber\\
- \sin \phi(x) \sin\phi(z)( \partial_{\mu}%
\phi(x)\partial_{\nu}\theta(z)- \partial_{\nu}\phi(z)\partial_{\mu}\theta(x))
\mathcal{F}_{\theta\phi}^{-1}(z) \delta^{4}(x-z).
\end{align}
The second term on the final line, after integration over $d^4 z$,
clearly becomes
\begin{align}
H_{\mu\nu} (x) \sin\phi(x) \mathcal{F}_{\theta\phi}^{-1}(x),
\end{align}
indicating the role of the monopole condensate in the correction.
By contrast, the commutation relations
\begin{align}
[\,_\chi c_{\mu}(x),\,_{\tilde{\chi}}\Pi_{\nu}(z)]_{new} = &\,[\, _\chi c_{\mu}(x),\,_{\tilde{\chi}}\Pi_{\nu}(z)]_{phys}, \nonumber \\
[\,_\chi\Pi_{\mu}(x),\,_{\tilde{\chi}}\Pi_{\nu}(z)]_{new} = &\,[\,_\chi\Pi_{\mu}(x),\,_{\tilde{\chi}}\Pi_{\nu}(z)]_{phys} ,
\end{align}
are unchanged. Nonetheless, the deviation from the canonical commutation shown in eqn~(\ref{eq:ccnew}) is inconsistent with the
particle creation/annihilation operator formalism of conventional second quantization, so that particle number is no longer
well-defined for the $_\chi c_\mu$ fields.
Repeating for the valence part,
\begin{align} \label{eq:PiXnew}
[\,_\chi\Pi^a_{\mu}(x),&\,_{\tilde{\chi}}\Pi^b_{\nu}(z)]_{new} \nonumber \\
=& [\,_\chi\Pi^a_{\mu}(x),\,_{\tilde{\chi}}\Pi^b_{\nu}(z)]_{phys}
-\int dy^{4} \Big( \frac{\delta B_{\theta}%
(y)}{\delta \,_\chi X^a_{\mu}(x)} \frac{\delta B_{\phi}(y)}{\delta \,_{\tilde{\chi}} X^b_{\nu}(z)}
- \frac{\delta B_{\phi}(y)}{\delta \,_\chi X^a_{\mu}(x)} \frac{\delta B_{\theta
}(y)}{\delta \,_{\tilde{\chi}} X^b_{\nu}(z)} \Big) \,\mathcal{F}_{\theta\phi}^{-1}(z)\nonumber \\
= & [\,_\chi\Pi^a_{\mu}(x),\,_{\tilde{\chi}}\Pi^b_{\nu}(z)]_{phys}
+ \Big(\sin\phi(z) n^a_\phi(x) n^b_\theta(z) \,_\chi\vec{R}_{0\mu}(x) \cdot \hat{n}_\chi(x)
\,_{\tilde{\chi}}\vec{R}_{0\nu}(z) \cdot \hat{n}_{\tilde{\chi}}(z) \nonumber \\
&- \sin\phi(x) n^a_\theta(x) n^b_\phi(z) \,_\chi\vec{R}_{0\mu}(z) \cdot \hat{n}_\chi(z)
\,_{\tilde{\chi}}\vec{R}_{0\nu}(x) \cdot \hat{n}_{\tilde{\chi}}(x) \Big)
\times \mathcal{F}_{\theta\phi}^{-1}(z)\,
\delta^{4}(x-z),
\end{align}
where the second term on the final line, after integration over $d^4 z$, becomes
\begin{align}
(n^a_\phi(x) n^b_\theta(x) - n^a_\theta(x) n^b_\phi(x))\sin\phi(x)
\,_\chi\vec{R}^{0\mu}(x) \cdot \hat{n}_\chi(x) \,_{\tilde{\chi}}\vec{R}^{0\nu}(x) \cdot \hat{n}_{\tilde{\chi}}(x)
\mathcal{F}_{\theta\phi}^{-1}(x),
\end{align}
while other relevant commutators are unchanged
\begin{align}
[\,_\chi X^a_{\mu}(x),\,_{\tilde{\chi}}\Pi^b_{\nu}(z)]_{new} = [\,_\chi X^a_{\mu}(x),\,_{\tilde{\chi}} \Pi^b_{\nu}(z)]_{phys}, \nonumber \\
[\,_\chi X^a_{\mu}(x),\,_{\tilde{\chi}} X^b_{\nu}(z)]_{new} = [\,_\chi X^a_{\mu}(x),\,_{\tilde{\chi}} X^b_{\nu}(z)]_{phys} .
\end{align}
\ignore{
\begin{align}
\frac{\delta B_{\theta}(y)}{\delta\vec{X}^{\sigma}(x)} =&\,\Big(\epsilon^{abc}\Big(
\,_y\partial_{\sigma}\phi(y)\hat{n}_b(y) +\epsilon_{bde}\sin\phi(y)\,
\hat{n}^d_{\phi}(y)\times\vec{X}^e_{\sigma}(y)\Big)
\times \vec{X}_{c\,0}(y) \nonumber \\
&-\sin\phi \; \hat{n}^a_{\theta}(y) \mathbf{\hat{n}}(y) \cdot \mathbf{R}_{0\sigma}(y)\Big) \delta^{4}(x-y),
\end{align}
}
\ignore{
For the sake of completeness we expand the final term in these two equations, finding
\begin{align}
\mathbf{\hat{l}}(y) \cdot \mathbf{R}_{0\nu}(y) = &\,F_{0\sigma}(y) + \hat{n}(y) \cdot \Big(\vec{X}_\mu(y) \times \vec{X}_0(y) - \vec{X}_\mu(y) \times \vec{X}_0(y) \Big)
\end{align}
}
\section{Effective action} \label{sec:effective}
\subsection{Particle number and the monopole background} \label{subsec:operator}
It is textbook knowledge that gravitational curvature spoils canonical quantisation, but our approach gives a detailed
mechanism. It also provides some narrowly defined circumstances under which it may be salvaged.
For a monopole background $_\chi\vec{H}_{\mu\nu}$,
the form of eqn (\ref{eq:ccnew}) indicates that corrections would arise for ${}_\chi c_\sigma$ polarised along either of the
$\mu,\nu$ directions.
The only way to avoid this is if ${}_\chi c_\sigma$ is polarised in
the direction of the monopole field strength, requiring that the Abelian component of the connection propagate at a right-angle to the monopole field-strength.
However, the form of the monopole field strength
requires that a non-vanishing field must have a varying orientation in space, since it is proportional to the derivatives of the angles $\phi,\theta$. So even if the Abelian gauge component is
propagating at a right-angle to the monopole field-strength with its polarisation in the direction of the field strength, in general this could not be assumed to continue as the orientation of the
monopole field strength varied. However,
if the variation were gradual over space in comparison to the wavelength of ${}_\chi c_\mu$ then it might continue to propagate while adjusting to the required orientations in a manner analogous to
photon polarisation being rotated by successive, closely oriented, polarising filters.
On the other hand, if the wavelength of ${}_\chi c_\mu$ is significant compared to the length scale of the field variation then such a mechanism could not act and the particle's
energy would be either absorbed or deflected by the condensate, effectively suppressing the longer wavelengths and providing a measure of the background curvature.
One important observation is that the background field is (Lorentz) magnetic, so that at any point in spacetime a reference frame exists where the monopole field and its associated potential
lie entirely along the spatial directions.
The particle-inconsistent contribution from eqn~(\ref{eq:PiXnew}) only occurs in the presence of a background electric component of the monopole field strength, vanishing when the
polarisation of ${}_\chi \vec{X}_\mu$ is orthogonal to the electric component of the background field. This restricts the polarisation for a transversally polarised field whose direction of propagation is
not in the direction of this electric component, but not otherwise. Of course, the electric component of the background
monopole field can always be removed by a suitable Lorentz transformation, but this still leaves the particle interpretation frame-dependent.
Some authors have argued that the valence gluons in two-colour QCD gain an effective mass term \cite{KMS05,K06} via their quartic interaction with the non-trivial monopole condensate.
A similar mechanism could apply to the valence components of this theory. Consider the following quartic term from eqs~(\ref{eq:QCD},\ref{eq:fieldstrength}),
\begin{align}
\frac{g^2}{4}& (_\chi\vec{C}_\mu(x) \times _\chi\vec{X}_\nu(x)) \cdot (_\chi\vec{C}^\mu(x) \times \,_\chi\vec{X}^\nu(x)) \nonumber \\
= \frac{g^2}{4}& (_\chi\vec{C}_\mu(x) \cdot \,_\chi\vec{C}^\mu(x) \; _\chi\vec{X}_\nu(x) \cdot \,_\chi\vec{X}^\nu(x)
- \,_\chi\vec{C}_\mu(x) \cdot \,_\chi\vec{X}^\nu(x) \; _\chi\vec{X}_\nu(x) \cdot \,_\chi\vec{C}^\mu(x) ).
\end{align}
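The contraction above rests on the standard Lagrange identity $(\vec{A}\times\vec{B})\cdot(\vec{C}\times\vec{D})=(\vec{A}\cdot\vec{C})(\vec{B}\cdot\vec{D})-(\vec{A}\cdot\vec{D})(\vec{B}\cdot\vec{C})$ applied in the internal space; a quick numeric check with random 3-vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C, D = rng.normal(size=(4, 3))  # random internal-space 3-vectors

# (A x B) . (C x D) = (A.C)(B.D) - (A.D)(B.C)
lhs = np.cross(A, B) @ np.cross(C, D)
rhs = (A @ C) * (B @ D) - (A @ D) * (B @ C)
assert np.isclose(lhs, rhs)
```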
Remembering that the Lorentz monopole fields $_\chi\vec{C}_\mu$ have non-zero condensates yields the terms
\begin{equation}
\frac{g^2}{4} \langle_\chi\vec{C}_\mu(x) \cdot \,_\chi\vec{C}^\mu(x) \rangle \; \,_\chi\vec{X}_\nu(x) \cdot \,_\chi\vec{X}^\nu(x) ,
\end{equation}
so that the monopole condensate is seen to generate a mass term for the valence component.
Such a mass term is covariant under the gauge transformation because, as shown in the discussion of eqs~(\ref{eq:transform}), the valence components transform as sources,
although explicitly adding a mass term for these fields would spoil renormalisability.
In this case the valence components could also be longitudinally polarised. With longitudinal polarisation the only restriction is that the direction of propagation
be orthogonal to the background electric component of the monopole field strength.
The valence component might therefore enjoy a limited particle interpretation under a range of circumstances.
We observe that the two monopole field strengths $_R\vec{H}_{\mu\nu}, \,_L\vec{H}_{\mu\nu}$ sum to give a net field strength lying purely along the rotation directions in the internal space.
Exactly how this affects the observed dynamics of the theory, or even if it does, is unclear. We were unable to find a linear combination of the gauge fields to separate rotation and boost generators
which was equivalent to the original theory. If there is an effect then a reasonable scenario is that the coupling
to linear momentum would dominate that to rotational momentum at large distances, as determined by the length scale of the condensate.
\subsection{Hilbert-Einstein term} \label{subsec:EH}
Kim and Pak \cite{KP08} considered the
effects of a torsion condensate. They found the resulting background field strength, if constant, spontaneously generated an EH term if the curvature tensor is expanded around it
(see the discussion of eqn (45) in their paper \cite{KP08}).
Since our background is attributable to an Abelian background field
we expect the effective theory to have an Abelianised EH term, similar to that derived by Cho \textit{et al}.{}~\cite{COK12,COP15} when applying the CDG decomposition to the Levi-Civita tensor.
Such details must await further work, but we are
encouraged to believe that the theory may be Wick rotated back to Lorentz space for a positive semi-definite effective theory. Not only do all quantum fields have kinetic terms with the correct sign, but
the Lagrangian's lowest order derivative terms come from an emergent term sometimes added to rectify the lack of positive semi-definiteness.
\section{Discussion} \label{sec:discussion}
We have applied the CDG decomposition to a Lorentz gauge theory and confirmed that it has a monopole condensate at one loop. Using the Clairaut formalism we have found how the monopole background
modifies the canonical EOMs for the physical DOFs. Lorentz gauge theory has the problem of being non-positive semi-definite, which can be handled by adding an EH term. We did not add such a term
but instead postponed the problem by Wick rotating the theory into Euclidean space where the Lorentz gauge group becomes locally isomorphic to $SU(2)_R \times SU(2)_L$.
We found the spontaneous generation of a vacuum condensate which others have argued \cite{KP08,PKT12} leads to an effective Hilbert-Einstein term.
The CDG decomposition introduces an internal unit vector to indicate the local internal direction of the Abelian subgroup of the gauged symmetry group.
However, the unit vector used to specify this subgroup does not have a canonical EOM and is degenerate.
If we expand it in terms of its angular dependence, since its information content is purely directional, then those angles are
also degenerate and we do not derive canonical EOMs for them. They do however add additional terms with important consequences for the theory's physics.
They therefore may not be ignored, but require appropriate theoretical tools to analyse them. The authors addressed these issues in a previous analysis of QCD.
The purpose of this paper was to do so for a theory relevant to gravity. The main advantage of working in a gauged Lorentz theory, for us, is that the
gauge fields have quadratic kinetic terms well-suited to our Clairaut-based approach, in addition to the opportunity to apply analyses and even results from
$SU(2)$ Yang-Mills theories.
We have not considered the effects of matter fields in the fundamental representation. We do note in passing that differences in this part of the spectrum must lead to variations in the magnitude
for the monopole condensate, so the differences in their matter spectra suggest that this theory has significantly different infrared behaviour from that of $SU(2)$ QCD.
We also observe that the net monopole condensate lies in a direction of a rotation generator. We have not been able to derive corresponding canonical DOFs to reflect this so the physical
significance of this observation, if any, remains obscure.
We have left the inclusion of translation symmetry to subsequent work.
A full gravitational theory must of course include the full Poincar\'{e} symmetry group, but we submit
that our Lorentz-only theory makes a sufficiently good approximation to indicate some relevant phenomenology.
\section{Introduction}
\IEEEPARstart{R}{emote} control of everything in the world is expected to be provided intelligently through the internet within the next 20 years. We offer a method to improve energy-efficient consumption for processing queries on the Internet of Things. The importance of the issue becomes clear from a few examples. Suppose a volcanic eruption occurs and the sensors placed inside the volcano do not work because their batteries have been depleted; they cannot report the correct data in time, so only data from the last few days is available, and the essential warning tasks cannot be performed before the accident occurs. Suppose blood platelets are being carried from a blood transfusion center to a hospital and, during transfer, the temperature of the enclosure inside the car rises [1], rendering the platelets useless. The product could be corrupted because the necessary information about the high temperature was not reported to the base station in time, which could lead to a patient's death after an injection of the corrupt product. Assume there are sensors in the rooms of babies, who are not able to describe their pain, connected to a pain assessment center; videos of the babies' faces can be sent over the Internet of Things, so that pain can be diagnosed. If the energy of such a sensor has run out, a newborn in the first stage of a disease such as cataracts could be diagnosed too late, allowing the illness to progress and cause blindness [2]. Suppose a sock for the prevention of diabetic ulcers is invented, where sensors recognize increased pressure in one area of the foot and notify the patient via SMS. The patient expects the sensors to work properly; thus, the sensors should not run out of energy, since an energy loss could lead to a lost leg. 
Suppose we keep important materials in a warehouse outside of town, where the temperature and humidity of the environment must be continuously monitored and maintained, and we do so using the Internet of Things. If the power of the sensors runs out when a robber enters the store, the police cannot be notified. These examples show that energy-efficient consumption in wireless sensor networks, which form a large part of the Internet of Things, is very important. A 'thing' is a device that has several sensors; when these things connect together through the internet, the Internet of Things assigns each an id and a specific IP [3]. One example of the Internet of Things is a pain evaluation system, in which a baby's room is networked, a video records the baby's facial expressions, and the video is sent as a data stream to the nerve center of the hospital. This method can help families assess whether the baby is crying out of pain or for another reason, and it is useful for newborns and disabled persons who are unable to talk. The structure of the article is as follows: we first explain the Internet of Things and then examine clustering methods for the Internet of Things. Since most of the Internet of Things consists of sensor nodes, this part is very similar to clustering in a wireless sensor network. We then describe our proposed method and conclude with the results of our experiments.
\section{Related works}
A database management system operates on data stored in databases, whereas a data flow management system operates on data that is generated by sensors at a given time interval and on the queries that must be answered over that data. Tang et al [10] manage the data flow and its queries by splitting the wireless sensor area into grid cells, and then propose a hierarchical, energy-saving clustering index tree for the grid cells. They created a time-dependent query technique for serving continuous queries.
Initially, sensor nodes report their values to the base station. If these values change significantly in comparison to the previously reported values,
they report their values to the base station again.
The queries are processed by collecting and analyzing the sensor values stored at the base station. They reduce energy consumption and provide an energy-saving hierarchical clustering index tree to facilitate time-sensitive regional querying on the Internet of Things. Tang et al [10] presented an energy model focused on building a routing tree; finding cluster centers was not important to them. They computed the weights of the cells, and the stages of their proposed tree construction are as follows: making the shortest route to the base station. In [10], four algorithms were presented: the first two algorithms construct the grid cells, the weights of the grid cells are then calculated, and the hierarchical tree is drawn.
On the other hand, [10] did not consider overlapping clusters. It often happens that some geographic coordinates are covered by two or more clusters; in that case, repeated data is sensed and sent to the base station. If these geographic coordinates were covered by only one cluster, energy would be saved. We show that deleting such duplicate data saves energy.
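The saving can be illustrated with a small sketch (our own illustrative code, not taken from [10]; the one-unit-per-reading energy model is an assumption): if each transmitted reading costs a fixed amount of energy, keeping each coordinate's reading only once reduces the number of transmissions proportionally.

```python
def dedupe_reports(cluster_reports):
    """Keep each coordinate's reading only once, even if several
    overlapping clusters sensed it."""
    seen = set()
    unique = []
    for cluster in cluster_reports:
        for coord, value in cluster:
            if coord not in seen:
                seen.add(coord)
                unique.append((coord, value))
    return unique

def energy(reports):
    # Illustrative energy model: one unit per transmitted reading.
    return sum(len(cluster) for cluster in reports)
```

With two clusters sharing one coordinate, deduplication drops four transmissions to three.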
Several methods exist for choosing the right cluster head within a cluster in wireless sensor networks. Since most elements of the Internet of Things use wireless sensor networks, these methods can also be used to select the proper cluster head within a cluster in the Internet of Things. We give a brief explanation of two methods in this regard. Abdoulsalam and Kamer in 2010 proposed a method called W-Leach. In the W-Leach protocol a cluster head is selected based on the weights of the sensors, where the weights are based on sensor densities and their residual energy. The sensor-node density is determined by the number of live sensors in a specified region divided by the total number of live sensors in the network.
The weight of each sensor is based on the following rules: 1) greater density $\Rightarrow$ more weight; 2) more remaining energy $\Rightarrow$ more weight. If no sensor is found within the range, the density is 1; if all the sensors are within the transmission range, the density is set to 'n'. In order to send data to the cluster head, sensors are selected whose densities are lower than a predefined threshold when compared to sensors that have a higher density in the network; otherwise, they wait for the next round to transfer their data to the corresponding cluster heads. Farooq in 2010 proposed a method called Mr-Leach, which divides the network into different layers of clusters. Mr-Leach applies the concept of equal clustering, in which every node in a layer reaches the base station with the same number of hops. Choosing cluster heads and other sub-cluster heads at the second level is carried out by the base station; therefore, the cost of calculations at the sensor level can be reduced. This is done in three phases: creating clusters at the low level, cluster recognition at different levels by the base station, and scheduling. Pattem S studied "the impact of spatial correlation on routing with compression in wireless sensor networks" [5]. For sensor networks, a "low latency and energy efficient routing tree for wireless sensor networks with multiple mobile sinks" was established by Han S-W [4]. Tyagi S and Neeraj Kumar made a full study in "A systematic review on clustering and routing techniques based upon LEACH protocol for wireless sensor networks" [6]. Gastpar et al [7] studied the repetitive messages that the sensor nodes had seen and the relationships between messages, which required an environment for storing data and a technique for finding duplicate data; this caused network latency and a costly network implementation. 
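A minimal sketch of the W-Leach weighting idea described above (our own illustrative code; the protocol only states that density and residual energy both increase the weight, so combining them as a product, and the field names used here, are assumptions):

```python
def density(sensor, sensors, tx_range):
    # Number of live sensors within transmission range (including the
    # sensor itself): 1 if none are in range, n if all are.
    alive_in_range = [
        s for s in sensors
        if s["alive"]
        and ((s["x"] - sensor["x"]) ** 2
             + (s["y"] - sensor["y"]) ** 2) ** 0.5 <= tx_range
    ]
    return max(1, len(alive_in_range))

def weight(sensor, sensors, tx_range):
    # Hypothetical combination: greater density and more residual
    # energy both increase the weight, so we simply multiply them.
    return density(sensor, sensors, tx_range) * sensor["energy"]

def choose_cluster_head(sensors, tx_range):
    live = [s for s in sensors if s["alive"]]
    return max(live, key=lambda s: weight(s, sensors, tx_range))
```

A dense sensor with moderate energy can thus outweigh an isolated sensor with high energy.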
In [8], Al-Turjmana FM expanded "Quantifying connectivity in wireless sensor networks with grid-based deployments." The "Energy consumption bounds analysis and its applications for grid-based wireless sensor networks" by Peng I-H and Chen Y-W has attracted considerable attention recently [9]. Pathan A-SK used "A secure energy-efficient routing protocol for WSN" [11]. They built a routing tree in which the query was transmitted to the child nodes and the answer returned in the opposite direction to the root; this required knowing the parent-child relationships and had high energy consumption. Soheili and Kalogeraki discussed "Spatial queries in sensor networks" in the proceedings of the 13th annual ACM international workshop on geographic information systems [12]. They reduced the number of nodes involved in a query, but this also required knowing the parent-child relationships and had high energy consumption. Caione, Brunelli and Benini implemented and investigated "Distributed compressive sampling for lifetime optimization in dense wireless sensor networks" [13]. Sardari, Beirami, Zou and Fekri proposed "Content-aware network data compression by using joint memorization and clustering" [14]. The advantages of some WSN methods are visible in [15] and [16]. Tan, Korpeoglu and Stojmenovic computed power-efficient data aggregation trees for sensor networks [17]. Using a service-oriented architecture, Guinard, Trifa, Karnouskos, Spiess, and Savio developed a process to run instances of real-world services and to select and dynamically query them [18]. Kang, Lee, Kim, Choi, Im, and Kang E-Y utilized in-network queries for wireless sensor networks [19].
\begin{table}[htbp]
\centering
\caption{Comparison of 10 methods of determining the cluster head on the Internet of Things [6].\label{table:1}}
\begin{small}
\tiny
\begin{tabular}{|p{1.16cm}|p{0.22cm}|p{0.35cm}|p{0.50cm}|p{0.57cm}|p{0.45cm}|p{0.45cm}|p{0.94cm}|p{0.40cm}|p{0.40cm}|}
\hline
&Delay&Energy saving&load balance&Scalability&fault tolerance&Moving Nodes&Communication cost&How to transfer data\\ \hline
Handy in 2002 & Low & Good & Not tested& Low &Not tested&Nodes are fixed&Top&Single hop\\ \hline
Nego in 2007&Much&Very good&Not tested&too much&Not tested&Nodes are quasi constants&Not tested&Not tested\\ \hline
Yang in 2010 & Not tested & Good & Not tested& Not tested &Not tested&Nodes are fixed&Not tested&Single hop\\ \hline
Wang in 2010 & Not tested & Very good & Not tested& Not tested &Not tested&Nodes are fixed&Not tested&Single hop\\ \hline
Sen in 2011 & Not tested &Good &Yes& Not tested &Not tested&Nodes are fixed&Not tested&Single hop\\ \hline
Duan in 2009 & Not tested & Good & Yes& Not tested &Not tested&Nodes are fixed&Top&Single hop\\ \hline
WALFA in 2009 & Not tested & Good & Yes&much &Not tested&Nodes are fixed&Not tested&Multi- hops\\ \hline
FAROOQ in 2010 & Not tested & Very good & Yes& Not tested &Not tested&Nodes are fixed&Not tested&Multi- hops\\ \hline
ABDOULSALAM and KAMER in 2010 &Not tested & Very good & Not tested& Not tested &Not tested&Nodes are fixed&Not tested&Single hop\\ \hline
JUT in 2011 & Not tested & Good & Yes& Not tested &Not tested&Nodes are fixed&Not tested&Single hop\\ \hline
\end{tabular}
\end{small}
\end{table}
As a matter of fact, for the minimum spanning tree and the shortest path tree approaches, we need energy-efficient correlated data aggregation for wireless sensor networks [20]. Park S-J et al used a spatial index tree and broadcast queries, which had high energy consumption.
In the Internet of Things, smart things communicate with each other (of course, this is expected to happen fully within the next 20 years). To satisfy the specific requirements of all users when a zone is to be monitored continuously, the strategy implemented is to sense and independently collect the data of the sub-areas for each query. This method is not energy efficient because the values reported by sensors may be the same as the values sensed during the previous time interval.
In [10], no attention was given to the problem of filtering data, though data filtering is a very effective way to reduce energy consumption. If every sensor sends its data directly to the base station, significant amounts of memory are needed and, of course, the accuracy of the data increases. Tang et al [10] tried to improve this method, which consumes a lot of energy.
\begin{figure*}[htbp!]
\centering
\includegraphics[width=7in,height=12.5cm]{figure1internetofthing.png}
\centering
\caption{Results of the graphical implementation of the algorithms of reference [10] in the Java environment.}
\label{Graph A}
\end{figure*}
\section{Our solution}
Their method was that when a major modification occurred in the data sensed by a sensor, it would be reported to the base station. They used a clustering method in which the size of the cluster was not important: some clusters might be large while others could be small.
In our proposed approach, the data is filtered by the header in each grid cell. The functions typically used when filtering data are maximum, minimum, or average on a header node; we focused on the averaging function in our scenario design as well as in the implementation of our proposed approach.
For example, if the data filtering executed by a header in a grid cell is an averaging function and the cluster is larger than a certain limit, the data collected by a node at the lowest point of the cluster differs greatly from the data at the highest point of the cluster. We suggest adding a few headers to such a cluster, in effect breaking the cluster and creating new clusters based on the K-Means algorithm.
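The cluster-breaking step can be sketched as follows (our own minimal K-Means implementation with a deterministic farthest-point initialization, standing in for whatever K-Means routine an implementation would use; the node-count threshold is the one described above):

```python
def kmeans(points, k, iters=20):
    """Plain Lloyd's algorithm on 2-D points with deterministic
    farthest-point initialization."""
    centers = [points[0]]
    while len(centers) < k:
        far = max(points, key=lambda p: min(
            (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centers))
        centers.append(far)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: (p[0] - centers[i][0]) ** 2
                                          + (p[1] - centers[i][1]) ** 2)
            clusters[j].append(p)
        centers = [(sum(x for x, _ in cl) / len(cl),
                    sum(y for _, y in cl) / len(cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return clusters

def maybe_break(cluster, max_nodes, k=2):
    """Break an oversized cluster into k sub-clusters (each then gets
    its own header); small clusters are left untouched."""
    if len(cluster) <= max_nodes:
        return [cluster]
    return kmeans(cluster, k)
```

A cluster of six nodes with two well-separated groups and a threshold of four is split into two sub-clusters of three nodes each.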
Also, in the first and second algorithms given in [10], header selection inside a grid cell is based on the Leach method. According to Table 1, we instead select headers based on the Abdoulsalam $\&$ Kamer method (2010), since this method implements hierarchical clustering and reduces energy consumption substantially. A method can also be considered, in accordance with Table 1, if it uses load balancing and significantly reduces energy consumption, like Farooq's method (2010). On the other hand, when a cluster is large, the number of leaves that are children of a higher level in the index tree of Tang et al [10] increases too much, and transmitting through only one parent to a higher level, especially when using the averaging method, means that an accurate filtering of the data has not been performed.
\begin{figure*}[htbp!]
\centering
\includegraphics[width=5in,height=10.5cm]{figure2internetofthing.png}
\centering
\caption{Proposed index tree based on the simulation of the wireless sensor nodes of reference [10].}
\label{Graph B}
\end{figure*}
For example, in a temperature-averaging scenario, the information of one leaf (a grid cell) may show a temperature of 2 degrees, and the temperature of its neighboring leaf differs only slightly. But a farther leaf, which shares a common father with our desired leaf at a higher level, may report a temperature of 22 degrees. In the averaging, the header that acts as the parent of these leaves sends a temperature of 12 degrees to the root of the tree or the base station. In such situations, our suggestion is that, to reduce error when decisions and analyses are made by the base station, a subtree must be used that sends the leaf information directly to the root of the tree. We show our proposed index tree in Figure 2, and we propose adding two functions to algorithms 1 and 2 in [10], which are used to construct the index tree. 1) Decide whether the cluster is to be broken or not. This is done by considering an upper threshold on the number of nodes in a cluster; for example, if the number of nodes exceeds a certain limit, the cluster is broken.
2) Add a routine for constructing a sub-tree in a large cluster that is broken: its leaves are the grid cells and its root is the cluster header, which connects directly to the root of the tree. We have written a program in C++ that reads coordinate information, including temperatures, from an input file, along with the minimum and maximum possible numbers of clusters (named m and M in the algorithms of [10]). The program sorts them and builds a hierarchical tree; at the same time, the average values of the leaves are calculated in the father sensor.
Then we receive a query from the user, and the program determines which regions the query relates to and returns the answer. By changing the numbers of clusters m and M, we conclude that as the number of clusters increases, the query response gets closer to the actual data value of a sensor. We implemented algorithms 1, 2, 3, and 4 of the paper [10] graphically in the Java environment and, by changing the value of m, increased the depth of the tree. We found that increasing the depth of the tree is beneficial: in this case, the average data in the header is closer to the real data of the sensor. This program first creates the grid cells, then calculates their weights, and finally draws the corresponding hierarchical tree. The results of this program are presented graphically in Figure 1. Our environment can be a closed environment, a large forest, or underwater; the data in our system can be wind speed, temperature, or humidity. Data sensed by sensors is sent to the header of the related grid cell and filtered by this header. There are several functions for filtering data; in our scenario we use the average function. In [10], the clusters are asymmetric and clustering is done based on proximity.
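The averaging-at-the-parent behaviour described above can be sketched as follows (our own illustrative structure; the field names and the rectangle-containment test are assumptions, not taken from [10]):

```python
class IndexNode:
    def __init__(self, region, children=(), value=None):
        self.region = region            # (xmin, ymin, xmax, ymax) bounds
        self.children = list(children)
        # A leaf stores its sensed reading; an internal node stores the
        # average of its children's values.
        if self.children:
            self.value = sum(c.value for c in self.children) / len(self.children)
        else:
            self.value = value

    def contained_in(self, box):
        return (box[0] <= self.region[0] and box[1] <= self.region[1]
                and self.region[2] <= box[2] and self.region[3] <= box[3])

    def query(self, box):
        # Answer a regional query with the stored averages of the
        # highest nodes that fall fully inside the query box.
        if self.contained_in(box):
            return [self.value]
        hits = []
        for c in self.children:
            hits.extend(c.query(box))
        return hits
```

For the 2-degree / 22-degree example above, the parent stores 12 degrees, which is exactly what a query covering both cells receives.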
\begin{figure*}[htbp]
\centering
\includegraphics[width=4in,height=9.5cm]{figure3internetthing.png}
\centering
\caption{Comparison of the difference between the header data and the actual data of one node.}
\label{Graph C}
\end{figure*}
Physically, a sensor's data is close to that of its neighboring sensor, but ultimately the sensor located at the lowest point of a large cluster has very different data from the sensor placed at the highest point of the cluster. It is wrong for only one average of this big cluster to be reported to the base station; another header should be put into the cluster. In other words, the cluster is broken. In our scenario, the sensors are of the same type. There may be a sensor in the shade while its neighboring sensor is in sunlight, or one or more sensors may be damaged at that moment and report incorrect data. Keep in mind that, in accordance with the principle of data clustering, irrelevant data is not placed in the cluster.
\begin{figure*}[htbp!]
\centering
\includegraphics[width=5in,height=10.5cm]{figure4internetthing.png}
\centering
\caption{Comparison of energy consumption in the normal state with the case where the overlapping area of clusters is eliminated.}
\label{Graph D}
\end{figure*}
To improve on the work of [10], we changed the index tree suggested by the authors; to avoid erroneous data being sent to the base station instead of correct data, we suggest using the new tree structure presented in Figure 2.
\section{Evaluation}
We implemented a program based on our solution and achieved the following results. The number of nodes did not affect our experiment.
\begin{table}[htbp]
\centering
\caption{Comparison of the difference between the cluster head data and the real data of a node.}
\resizebox{0.5\textwidth}{!}{
\begin{tabular}{|c|c|}
\hline
Number of clusters & Difference between the cluster head's data and the actual data \\ \hline
1 & 1 \\ \hline
2 & 2 \\ \hline
3 & 1 \\ \hline
4 & 0.5 \\ \hline
\end{tabular}
}
\end{table}
We achieved the above results in our implementation and concluded that increasing the number of headers increases the accuracy of the data sent to the base station.
In the article we chose, the clusters are heterogeneous, which means there is a large difference between the two ends of a cluster. Data filtering, especially the averaging function, does not extract correct data from the data of such a cluster; in this case, we need to add a few headers to the cluster and transfer the information to the base station with several headers instead of one. This means that large clusters should be broken into several clusters. We did this with the K-Means algorithm, implemented in C++. We also measured energy consumption in the normal mode as well as in the mode where the overlapping cluster areas were removed, by writing a program in the Java environment, and we obtained the following results.
\begin{table}[htbp]
\centering
\caption{ Comparison of energy consumption in the normal state with the case where the overlapping area is eliminated.}
\begin{small}
\tiny
\begin{tabular}{|p{0.72cm}|p{0.44cm}|p{0.39cm}|p{0.39cm}|p{0.39cm}|p{0.39cm}|p{0.39cm}|p{0.39cm}|p{0.39cm}|p{0.39cm}|p{0.39cm}}
\hline
&\begin{sideways}10:00:00 AM \ \ \end{sideways}&\begin{sideways}10:00:01 AM\end{sideways}&\begin{sideways}10:00:03 AM\end{sideways}&\begin{sideways}10:00:05 AM\end{sideways}&\begin{sideways}10:00:07 AM\end{sideways}&\begin{sideways}10:00:09 AM\end{sideways}&\begin{sideways}10:00:11 AM\end{sideways}&\begin{sideways}10:00:13 AM\end{sideways}&\begin{sideways}10:00:14 AM\end{sideways}\\ \hline
Consumption of energy by removing the overlapping area & 60 &30 & 60& 30 &30&60&30&30&30\\ \hline
Consumption of energy in normal mode&150&60&90&30&30&90&30&30&90\\ \hline
\end{tabular}
\end{small}
\end{table}
\section{Conclusion}
We focused on [10], which studied grid-based clustering and the construction of an index tree. Data filtering, especially in the case of average aggregation, cannot extract correct data from the data of one large cluster. On these occasions, we add a few head nodes to such clusters and transmit the data to the base station through several head nodes; this means that large clusters should be broken into several smaller clusters. We did this with the K-Means algorithm, implemented in C++, and we tried to improve the clustering of [10] and made a change to its proposed index tree. On the other hand, it happens very often that repeated information is sent by two overlapping clusters to the base station, and we were able to save energy by removing the duplicate data. We cluster within the clusterable network area so that, when a cluster head crashes, we prevent the network from disconnecting by adding some new sensors, which is easy and energy saving.
\section{Introduction}
The importance of evolution can hardly be overstated, in so far as it permeates all
sciences. Indeed, in the 150 years that have passed since the publication of
{\em On the Origin of Species} \cite{darwin:1859}, the original idea of Darwin
that evolution takes place through descent with modification acted upon by
natural selection has become a key concept in many sciences. Thus, nowadays
one can speak of course of evolutionary biology, but there are also evolutionary
disciplines in economics, psychology, linguistics, or computer science, to name a
few.
Darwin's theory of evolution was based on the idea of natural selection. Natural
selection is the process through which favorable heritable traits become more
common in successive generations of a population of reproducing organisms,
displacing unfavorable traits in the struggle for resources. In order to cast
this process in a mathematically
precise form, J.\ B.\ S.\ Haldane and Sewall Wright introduced, in the so-called
modern evolutionary synthesis of the 1920's, the concept of fitness. They applied
theoretical population ideas to the description of evolution and, in that context, they
defined fitness as the expected number of offspring of an individual that reach
adulthood. In this way they were able to come up with a well-defined measure of
the adaptation of individuals and species to their environment.
The simplest mathematical theory of evolution one can think of arises when one
assumes that the fitness of a species does not depend on the distribution of frequencies
of the different species in the population, i.e., it only depends on factors that are intrinsic
to the species under consideration or on environmental influences. Sewall Wright
formalized this idea in terms of fitness landscapes ca.\ 1930, and in that context
R.\ Fisher proved his celebrated theorem, that states that the mean fitness of a
population is a non-decreasing function of time, which increases proportionally
to variability. Since then, a lot of work has been done on
this kind of models; we refer the reader to
\cite{roughgarden:1979,peliti:1996,drossel:2001,ewens:2004} for
reviews.
The approach in terms of fitness landscapes is, however, too simple and, in general,
it is clear that the fitness of a species will depend on the composition of the population
and will therefore change accordingly as the population evolves. If one wants to
describe evolution at this level, the tool of reference is
evolutionary game theory. Brought
into biology by Maynard Smith \cite{maynard-smith:1982}
as an
``exaptation''\footnote{Borrowing the term introduced by Gould and Vrba in evolutionary
theory, see \cite{gould:1982}.}
of the game theory developed originally for economics \cite{morgenstern:1947},
it has since become a unifying framework
for other disciplines, such as sociology or anthropology \cite{gintis:2000}.
The key feature of this mathematical apparatus is that it allows one to deal with evolution on a
frequency-dependent fitness landscape or, in other words, with strategic interactions
between entities, these being individuals, groups, species, etc. Evolutionary game theory
is thus the generic approach to evolutionary dynamics \cite{nowak:2006a} and contains
as a special case constant, or fitness landscape, selection.
In its thirty-year history,
a great deal of research in evolutionary game theory
has focused on the properties and applications of the replicator
equation \cite{hofbauer:1998}. The replicator equation was introduced in
1978 by Taylor and Jonker \cite{taylor:1978} and describes the evolution of
the frequencies of population
types taking into account their mutual influence on their fitness. This important
property allows the replicator equation to capture the essence of selection and, among
other key results, it provides a connection between the biological concept of
evolutionarily stable strategies \cite{maynard-smith:1982}
with the economic concept of Nash equilibrium \cite{nash:1950}.
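In the notation introduced below, with $x_i$ the frequency of strategy $i$ and $W$ the payoff matrix, the replicator equation takes the standard form

```latex
\dot{x}_i = x_i \left[ (W x)_i - x^{T} W x \right], \qquad i = 1, \ldots, n,
```

where $(Wx)_i$ is the expected payoff of strategy $i$ against the population and $x^{T} W x$ is the population-average payoff, so that strategies doing better than average grow in frequency.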
As we will see below, the replicator equation
is derived in a specific framework that involves a number
of assumptions, beginning with that of an infinite, well-mixed population with
no mutations. By well-mixed population it is understood that every individual
either interacts with
every other
one or at least has the same probability to interact with any other individual
in the population.
This hypothesis implies that any individual effectively
interacts with a player which uses the average strategy within the
population (an approach that has been
traditionally
used in physics under the name of mean-field approximation).
Deviations from the well-mixed population scenario
affect strongly and non-trivially the outcome of the evolution, in a way which
is difficult
to apprehend in principle. Such deviations can
arise when one considers, for instance, finite
size populations, alternative learning/reproduction dynamics, or some kind of
structure
(spatial or temporal) in the interactions between individuals.
In this review we will
focus on this last point, and discuss the consequences of relaxing the
hypothesis that
every player interacts or can interact with every other one. We will address
both
spatial and temporal
limitations in this paper, and refer the reader to
Refs.~\cite{nowak:2006a,hofbauer:1998} for discussions of other perturbations.
For the sake of definiteness, we will consider those effects, that go beyond
replicator dynamics, in the specific context of the emergence of cooperation,
a problem of paramount importance with implications at all levels,
from molecular biology to societies and ecosystems
\cite{pennisi:2005}; many
other applications of evolutionary dynamics
have also been proposed but it would be too lengthy to discuss all of them
here (the interested reader should
see, e.g., \cite{nowak:2006a}). Cooperation, understood as a fitness-decreasing
behavior that increases others' fitness,
is an evolutionary puzzle, and many researchers have considered
alternative approaches to the replicator equation as possible explanations of
its
ubiquity in human (and many animal) societies.
As it turns out, human behavior is unique in nature. Indeed, altruism or
cooperative behavior exists in other species, but it can be
understood in terms of genetic relatedness (kin selection,
introduced by Hamilton \cite{hamilton:1964a,hamilton:1964b}) or of repeated
interactions (as proposed by Trivers \cite{trivers:1971}).
Nevertheless, human cooperation extends to genetically unrelated
individuals and to large groups, characteristics that cannot
be understood within those schemes. Subsequently, a number of
theories based on group and/or cultural evolution have been put
forward in order to explain altruism (see
\cite{hammerstein:2003} for a review). Evolutionary game theory is also being
intensively used for this research, its main virtue being that it allows to
pose the dilemmas involved in cooperation in a simple, mathematically
tractable manner. To date, however, there is not a generally accepted
solution to this puzzle \cite{nowak:2006b}.
Considering temporal and spatial effects means, in the language of
physics, going beyond mean-field
to include fluctuations and correlations. Therefore, a first step is to
understand what are the
basic mean field results. To this end, in Section~\ref{sec:2} we briefly
summarize the main
features of replicator equations and introduce the concepts we will refer to
afterwards.
Subsequently, Section~\ref{sec:3} discusses how fluctuations
can be taken into account in evolutionary game theory, and specifically we will
consider that,
generically, interactions and dynamics (evolution) need not occur at the same
pace. We will
show that the existence of different time scales leads to quite unexpected
results, such as
the survival and predominance of individuals that would be the less fit in the
replicator
description. For games in finite populations with two types of individuals or
strategies,
the problem can be understood in terms of Markov processes and the games can be
classified
according to the influence of the time scales on their equilibrium structure.
Other situations
can be treated by means of numerical simulations with similarly non-trivial
results.
Section~\ref{sec:4} deals with spatial effects.
The inclusion of population structure in evolutionary game theory has been
the subject of intense research in the last 15 years, and a complete review
would be beyond
our purpose (see e.g.\ \cite{szabo:2007}). The existence of a network describing
the possible interactions
in the population has been identified as one of the factors that may promote
cooperation among
selfish individuals \cite{nowak:2006b}. We will discuss the results available to
date and show how they can
be reconciled by realizing the role played by different networks, different
update rules for the
evolution of strategies and the equilibrium structure of the games. As a result,
we will be
able to provide a clear-cut picture of the extent to which population
structure promotes
cooperation in two-strategy games.
Finally, in Section~\ref{sec:5} we discuss the implications of the reviewed
results on a more general
context. Our major conclusion will be the lack of generality of models in
evolutionary game
theory, where details of the dynamics and the interaction modify qualitatively
the results.
Such behavior is not intuitive to physicists, who are used to disregarding those
details as
unimportant. Therefore, until we are able to discern what is and what is not
relevant, when dealing with problems in other sciences,
modeling properly and accurately specific problems is of utmost importance. We
will also indicate a few directions of research that arise from the
presently available
knowledge and that we believe will be most appealing in the near future.
\section{Basic concepts and results of evolutionary game theory}
\label{sec:2}
In this section, we summarize the main facts about evolutionary game theory
that we are going
to need in the remainder of the paper. The focus is on the stability of
strategies and on the
replicator equation, as an equivalent to the dynamical description of a mean
field approach
which we will be comparing with. This summary is by no means intended to be
comprehensive
and we encourage the reader to consult the review \cite{hofbauer:2003} or, for
full details,
the books \cite{maynard-smith:1982,gintis:2000,hofbauer:1998}.
\subsection{Equilibria and stability}
The simplest type of game has only two players and, as this will be the one we
will be
dealing with, we will not delve into further complications. Player $i$ is
endowed with a
finite number $n_i$ of strategies. A game is
defined by listing the strategies available to the players
and the payoffs they yield: When a player, using strategy $s_i$, meets another,
who in
turn uses strategy $s_j$, the former receives a payoff $W_{ij}$ whereas the
latter receives
a payoff $Z_{ij}$. We will restrict ourselves to symmetric games, in which the
roles of both
players are exchangeable (except in the example considered in
Section~\ref{sec:ultimatum}); mathematically, this means that the set of
strategies are the
same for both players and that $W=Z^T$. Matrix $W$ is then called the payoff
matrix of the
normal form of the game. In the original economic formulation
\cite{morgenstern:1947}
payoffs were understood as utilities, but Maynard Smith
\cite{maynard-smith:1982} reinterpreted
them in terms of fitness, i.e.\ in terms of reproductive success of the involved
individuals.
The fundamental step towards ``solving'' the game or, in other words, finding what
strategies will
be played, was put forward by John Nash \cite{nash:1950}
by introducing the concept of equilibrium. In $2\times 2$ games, a
pair of
strategies $(s_i,s_j)$ is a Nash equilibrium if no unilateral change of strategy
allows any player to improve her payoff. When we restrict ourselves to symmetric
games, one can say simply, by an abuse of language \cite{hofbauer:2003}, that a
strategy $s_i$ is a
Nash equilibrium if it is a best reply to itself: $W_{ii}\ge W_{ji},\, \forall
s_j$ (a strict Nash equilibrium if the inequality is strict).
This in turn implies that if
both players are playing strategy $s_i$, none of them has any incentive to
deviate unilaterally
by choosing other strategy. As an example, let us consider the famous
\emph{Prisoner's Dilemma game},
which we will be discussing throughout the review. Prisoner's Dilemma was
introduced
by Rapoport and Chammah \cite{rapoport:1965} as a model of the implications of
nuclear
deterrence during the Cold War, and is given by the following payoff matrix (we
use the
traditional convention that the matrix indicates payoffs to the row player)
\begin{equation} \label{eq:anxo1}
\begin{array}{ccc}
& \mbox{ }\, {\rm C} & \!\!\!\!\!\! {\rm D} \\
\begin{array}{c} {\rm C} \\ {\rm D} \end{array} & \left(\begin{array}{c} 3 \\ 5
\end{array}\right. & \left.\begin{array}{c} 0 \\ 1
\end{array}\right). \end{array} \end{equation}
The strategies are named C and D for cooperating and defecting, respectively.
This game is
referred to as Prisoner's Dilemma because it is usually posed in terms of two
persons
that are arrested accused of a crime. The police separates them and makes the
following
offer to them: If one confesses and incriminates the other, she will receive
a large reduction in the sentence, but if both confess they will only get a
minor
reduction; and if nobody confesses then the police is left only with
circumstantial
evidence, enough to imprison them for a short period. The
amounts of the sentence reductions are given by the payoffs in (\ref{eq:anxo1}).
It is
clear from it that D is a strict Nash equilibrium: To begin
with, it is a dominant strategy, because no matter what the column player
chooses to do, the row
player is always better off by defecting; and when both players defect, none
will improve
her situation by cooperating. In terms of the prisoners, this translates into
the fact that
both will confess if they behave rationally. The dilemma arises when one
realizes that both players would be better off cooperating, i.e.\ not
confessing, but rationality leads them unavoidably to confess.
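The pure-strategy condition is easy to check mechanically: strategy $i$ is a strict Nash equilibrium whenever $W_{ii}>W_{ji}$ for every $j\neq i$. The following sketch (plain Python; the helper name and layout are ours, purely for illustration) verifies that D is the only strict equilibrium of the payoff matrix (\ref{eq:anxo1}):

```python
# Convention as in the text: W[i][j] is the payoff obtained by a player
# using strategy i against an opponent using strategy j.

def strict_nash_pure(W):
    """Indices i such that strategy i is a strict Nash equilibrium,
    i.e. W[i][i] > W[j][i] for every j != i."""
    n = len(W)
    return [i for i in range(n)
            if all(W[i][i] > W[j][i] for j in range(n) if j != i)]

# Prisoner's Dilemma payoffs, rows/columns ordered (C, D).
W_pd = [[3, 0],
        [5, 1]]
print(strict_nash_pure(W_pd))  # prints [1]: D is the only strict equilibrium
```

Running the same check on the Hawk-Dove matrix introduced below returns an empty list, anticipating that its only equilibrium is a mixed one.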
The above discussion concerns Nash equilibria in pure strategies. However,
players can
also use the so-called mixed strategies, defined by a vector with as many
entries as
available strategies, every entry indicating the probability of using that
strategy. The notation
changes then accordingly: We use vectors ${\bf x}=(x_1\,x_2 \ldots x_n)^T$,
which are
elements of the simplex $S_n$ spanned by the vectors ${\bf e}_i$ of the standard
unit
basis (vectors ${\bf e}_i$ are then identified with the $n$ pure strategies). The
definition of a Nash equilibrium in mixed strategies is identical to the
previous one:
The strategy profile ${\bf x}$ is a Nash equilibrium if it is a best reply to
itself in terms
of the expected payoffs, i.e.\ if ${\bf x}^TW{\bf x}\ge {\bf y}^TW{\bf x},\,
\forall {\bf y}\in S_n$.
Once mixed strategies have been introduced, one can prove, following Nash
\cite{nash:1950}
that every normal form game has at least one Nash equilibrium, although it need
not necessarily be a
Nash equilibrium in pure strategies. An example we will also be discussing below
is
given by the \emph{Hawk-Dove game} (also called \emph{Snowdrift} or
\emph{Chicken} in the literature \cite{sugden:1986}),
introduced by Maynard Smith and Price to describe animal conflicts
\cite{maynard-smith:1973}
(strategies are labeled H and D for hawk and dove, respectively)
\begin{equation} \label{eq:anxo2}
\begin{array}{ccc}
& \mbox{ }\, {\rm D} & \!\!\!\!\!\! {\rm H} \\
\begin{array}{c} {\rm D} \\ {\rm H} \end{array} & \left(\begin{array}{c} 3 \\ 5
\end{array}\right. & \left.\begin{array}{c} 1 \\ 0
\end{array}\right). \end{array} \end{equation}
In this case, neither H nor D is a Nash equilibrium, but there is indeed
one Nash equilibrium
in mixed strategies, that can be shown \cite{maynard-smith:1982} to be given by
playing D with probability 1/3. This makes sense in terms of the meaning of the
game, which is an
anti-coordination game, i.e.\ the best thing to do is the opposite of the
other player.
Indeed, in the Snowdrift interpretation, two people are trapped by a snowdrift
at the two
ends of a road. For every one of them, the best option is not to shovel snow off
to free
the road and let the other person do it; however, if the other person does not
shovel, then
the best option is to shovel oneself. There is, hence, a temptation to defect
that creates
a dilemmatic situation (in which mutual defection leads to the worst possible
outcome).
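For a symmetric $2\times 2$ game, the interior mixed equilibrium follows from the indifference condition that both pure strategies earn the same expected payoff against it. A minimal sketch (Python with exact rational arithmetic; the function is our own illustration, not a standard routine) recovers the 1/3 result for the matrix (\ref{eq:anxo2}):

```python
from fractions import Fraction

def interior_mixed_equilibrium(W):
    """Probability of playing the first strategy at the interior mixed
    equilibrium of a symmetric 2x2 game, from the indifference
    condition (W x)_1 = (W x)_2 with x = (p, 1 - p)."""
    (a, b), (c, d) = W
    denom = a - c + d - b
    if denom == 0:
        raise ValueError("no isolated interior equilibrium")
    return Fraction(d - b, denom)

# Hawk-Dove payoffs, rows/columns ordered (D, H).
W_hd = [[3, 1],
        [5, 0]]
print(interior_mixed_equilibrium(W_hd))  # prints 1/3: play D with probability 1/3
```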
In the same way as he reinterpreted monetary payoffs in terms of reproductive
success,
Maynard Smith reinterpreted mixed strategies as population frequencies. This
allowed
one to leave behind the economic concept of the rational individual and move forward to
biological
applications (as well as in other fields). As a consequence, the economic
evolutionary idea
in terms of learning new strategies gives way to a genetic transmission of
behavioral strategies
to offspring. Therefore, Maynard Smith's interpretation of the
above result is that a population consisting of one third of individuals that
always use
the D strategy and two thirds of H-strategists is a stable genetic
polymorphism. At
the core of this concept is his notion of \emph{evolutionarily stable strategy}.
Maynard Smith
defined a strategy as evolutionarily stable if the following two conditions are
satisfied
\begin{eqnarray}
\label{eq:anxo3}
{\bf x}^TW{\bf x} & \geq & {\bf y}^TW{\bf x},\, \forall {\bf y} \in S_n,\\
{\rm if}\,\, {\bf x}\neq {\bf y}\,\, {\rm and}\,\, {\bf y}^TW{\bf x}&=&{\bf
x}^TW{\bf x},\, {\rm then}\,\, {\bf x}^TW{\bf y}>
{\bf y}^TW{\bf y}.
\end{eqnarray}
The rationale behind this definition is again of a population theoretical type:
These are
the conditions that must be fulfilled for a population of ${\bf x}$-strategists
to be non-invadable
by any ${\bf y}$-mutant. Indeed, either ${\bf x}$ performs better against itself
than ${\bf y}$ or,
if they perform equally, ${\bf x}$ performs better against ${\bf y}$ than ${\bf
y}$ itself.
These two conditions guarantee non-invasibility of the population. On the other
hand,
comparing the definitions of evolutionarily stable strategy and Nash equilibrium
one
can immediately see that a strict Nash equilibrium is an evolutionarily stable
strategy
and that an evolutionarily stable strategy is a Nash equilibrium.
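These conditions can also be probed numerically. The sketch below (Python; a brute-force illustration of ours that samples random mutants, not a proof technique) confirms that in the Prisoner's Dilemma (\ref{eq:anxo1}) all-D is evolutionarily stable while all-C is not:

```python
import random

def payoff(p, W, q):
    """Expected payoff p^T W q of mixed strategy p against q."""
    return sum(p[i] * W[i][j] * q[j] for i in range(2) for j in range(2))

def looks_like_ess(x, W, trials=10000, tol=1e-9, seed=0):
    """Probe Maynard Smith's two ESS conditions for candidate x against
    randomly sampled mutants y.  A numerical sketch, not a proof."""
    rng = random.Random(seed)
    for _ in range(trials):
        y1 = rng.random()
        y = (y1, 1.0 - y1)
        if abs(y[0] - x[0]) < tol:
            continue                          # y is (numerically) x itself
        if payoff(y, W, x) > payoff(x, W, x) + tol:
            return False                      # mutant does better against x
        if (abs(payoff(y, W, x) - payoff(x, W, x)) < tol
                and payoff(x, W, y) <= payoff(y, W, y) + tol):
            return False                      # second condition fails
    return True

W_pd = [[3, 0], [5, 1]]                       # Prisoner's Dilemma, (C, D)
print(looks_like_ess((0, 1), W_pd))           # prints True: all-D is an ESS
print(looks_like_ess((1, 0), W_pd))           # prints False: all-C is invadable
```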
\subsection{Replicator dynamics}
After Nash proposed his definition of equilibrium, the main criticism that the
concept has received
relates to how equilibria are reached. In other words, Nash provided a rule to
decide which
are the strategies that rational players should play in a game, but how do
people involved in
actual game-theoretical settings but without knowledge of game theory find the
Nash
equilibrium? Furthermore, in case there is more than one Nash equilibrium, which
one
should be played, i.e., which one is the true ``solution'' of the game? These
questions
gave rise to a great number of works dealing with learning and with refinements
of the
concept that allowed one to distinguish among equilibria, particularly within the
field of
economics. This literature is out of the scope of the present review and the
reader is
referred to \cite{vegaredondo:2003} for an in-depth discussion.
One of the answers to the above criticism arises as a bonus from the ideas of
Maynard Smith.
The notion of evolutionarily stable strategy has implicit some kind of dynamics
when we
speak of invasibility by mutants; a population is stable if when a small
proportion of it
mutates it eventually evolves back to the original state. One could therefore
expect
that, starting from some random initial condition, populations would evolve to
an evolutionarily
stable strategy, which, as already stated, is nothing but a Nash equilibrium.
Thus, we would
have solved the question as to how the population ``learns'' to play the Nash
equilibrium and
perhaps the problem of selecting among different Nash equilibria. However, so
far we have
only spoken of an abstract
dynamics; nothing has been specified about how the population, or the strategies
it contains, evolves.
The replicator equation, due to Taylor and Jonker \cite{taylor:1978}, was the
first and most successful proposal of an evolutionary game dynamics. Within the
population dynamics framework,
the state of the population, i.e.\ the distribution of strategy frequencies,
is given by ${\bf x}$ as above. A first key point is that we assume that the
$x_i$ are differentiable functions
of time $t$: This requires in turn assuming that the population is infinitely
large (or that
$x_i$ are expected values for an ensemble of populations). Within this
hypothesis,
we can now postulate a law of
motion for ${\bf x}(t)$. Assuming further that individuals meet randomly,
engaging in a
game with payoff matrix $W$, then $(W {\bf x})_i$ is the expected payoff for an
individual using
strategy $s_i$, and ${\bf x}^T W{ \bf x}$ is the average payoff in the
population state ${\bf x}$.
If we, consistently with our interpretation of payoff as fitness, postulate that
the per capita rate of growth of the subpopulation using strategy $s_i$ is
proportional to its payoff, we arrive at the \emph{replicator equation} (the
name was first proposed in
\cite{schuster:1983})
\begin{eqnarray}
\label{eq:anxo4}
\dot{x}_i=x_i[(W {\bf x})_i-
{\bf x}^TW{\bf x}],
\end{eqnarray}
where the term ${\bf x}^TW{\bf x}$ arises to ensure the constraint $\sum_i x_i
= 1$ ($\dot{x}_i$ denotes the time derivative of $x_i$).
This equation translates into mathematical terms the elementary principle of
natural selection:
Strategies, or individuals using a given strategy, that reproduce more
efficiently spread, displacing
those with smaller fitness. Note also that states with $x_i=1$,
$x_j=0$, $\forall j\neq i$ are solutions of
Eq.\ (\ref{eq:anxo4}) and, in fact, they are absorbing states, playing a
relevant role in the
dynamics of the system in the absence of mutation.
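As a concrete illustration, Eq.~(\ref{eq:anxo4}) is easily integrated numerically for a $2\times 2$ game. The following sketch (Python, a plain explicit-Euler scheme of our own choosing; step size and horizon are arbitrary) shows defectors taking over in the Prisoner's Dilemma (\ref{eq:anxo1}) even from an initial population with only 1\% defectors:

```python
def replicator_step(x, W, dt=0.01):
    """One explicit-Euler step of the replicator equation for the
    vector x of strategy frequencies in a symmetric 2x2 game."""
    fitness = [sum(W[i][j] * x[j] for j in range(2)) for i in range(2)]
    mean = sum(x[i] * fitness[i] for i in range(2))
    return [x[i] + dt * x[i] * (fitness[i] - mean) for i in range(2)]

W_pd = [[3, 0],
        [5, 1]]              # Prisoner's Dilemma, strategies (C, D)
x = [0.99, 0.01]             # start with 99% cooperators
for _ in range(20000):       # integrate up to t = 200
    x = replicator_step(x, W_pd)
print(round(x[1], 3))        # prints 1.0: defectors take over
```

Note that the Euler step preserves the normalization $\sum_i x_i=1$ exactly, since the correction term is precisely the mean payoff.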
Once an equation has been proposed, one can resort to the tools of dynamical
systems theory
to derive its most important consequences. In this regard, it is interesting to
note that the
replicator equation can be transformed by an appropriate change of variables into a
system of
Lotka-Volterra type \cite{hofbauer:1998}.
For our present purposes, we will focus only on the relation of the
replicator dynamics with the two equilibrium concepts discussed in the preceding
subsection.
The rest points of the replicator equation are those frequency distributions
${\bf x}$ that make the
right-hand side of Eq.~(\ref{eq:anxo4}) vanish, i.e.\ those that verify either $x_i=0$ or
$(W {\bf x})_i = {\bf x}^TW{\bf x},\, \forall i=1,\dots,n$.
The solutions of this system of equations are all the mixed strategy Nash
equilibria of the game \cite{gintis:2000}. Furthermore, it is not difficult
to show (see e.g.\ \cite{hofbauer:1998}) that strict Nash equilibria are
asymptotically stable, and that stable rest points are Nash equilibria. We thus
see that the replicator equation provides us with an evolutionary mechanism
through which the players, or the population, can arrive at a Nash equilibrium
or, equivalently, to an evolutionarily stable strategy. The different basins of
attraction of the different equilibria further explain which of them is selected
in case there are more than one.
For our present purposes, it is important to stress the hypothesis involved
(explicitly or
implicitly) in the derivation
of the replicator equation:
\begin{enumerate}
\item The population is infinitely large.
\item Individuals meet randomly or play against every other one, such that the
payoff of
strategy $s_i$ is proportional to the payoff averaged over the current
population state ${\bf x}$.
\item There are no mutations, i.e.\ strategies increase or decrease in frequency
only due to reproduction.
\item The variation of the population is linear in the payoff difference.
\end{enumerate}
Assumptions 1 and 2 are, as we stated above, crucial for deriving the replicator
equation, because they allow one
to replace the fitness of a given strategy by its mean value when the population
is described
in terms of frequencies. Of course, finite populations deviate from the values
of
frequencies corresponding to infinite ones. In a series of recent works,
Traulsen and co-workers have considered this problem
\cite{traulsen:2005,claussen:2005,traulsen:2006}.
They have identified different microscopic stochastic processes that
lead to the standard or the adjusted replicator dynamics, showing that
differences at the individual level can
lead to qualitatively different dynamics in asymmetric conflicts and,
depending on the population size, can
even invert the direction of the evolutionary process. Their analytical
framework, which they
have extended to include an arbitrary number of strategies, provides
good approximations to simulation results for very small sizes. For a recent
review of these
and related issues, see \cite{claussen:2008}. On the other hand, there has also
been
some work showing that evolutionarily stable strategies in infinite populations
may lose
their stable character when the population is small (a result not totally
unrelated to those
we will discuss in Section~\ref{sec:3}). For examples of this in the context of
Hawk-Dove games, see \cite{fogel:1997,fogel:1998}.
Assumption 3 does not pose any severe problem. In fact, mutations
(or migrations among physically separated groups, whose mathematical description
is
equivalent) can be included, yielding the so-called replicator-mutator equation
\cite{page:2002}. This is in turn equivalent to the Price equation
\cite{price:1970},
in which a term involving the covariance of fitness and strategies appears
explicitly. Mutations have been also included in the framework of finite
size populations \cite{traulsen:2006} mentioned above.
We refer the reader to references \cite{page:2002,frank:1995}
for further analysis of this issue.
Assumption 4 is actually the core of the definition of replicator dynamics. In
Section~\ref{sec:4}
below we will come back to this point, when we discuss the relation of
replicator dynamics
to the rules used for the update of strategies in agent-based models.
Work beyond the hypothesis of
linearity can proceed in different directions, by considering
generalized
replicator equations of the form
\begin{eqnarray}
\label{eq:anxo5}
\dot{x}_i=x_i[W_i({\bf x})-
{\bf x}^TW({\bf x})].
\end{eqnarray}
The precise choice for the functions $W_i({\bf x})$ depends of course on the
particular situation
one is trying to model.
A number of the results on the replicator equation carry over to several such
choices.
This topic is well summarized in \cite{hofbauer:2003} and the interested reader
can proceed
from there to the relevant references.
Assumption 2 is the one to which this review is devoted and, once again,
there are very many different possibilities in which it may not hold. We will
discuss in depth below the
case in which the time scale of selection is faster than that of interaction,
leading to the
impossibility that a given player can interact with all others. Interactions may
be also
physically limited, either for geographical reasons (individuals interact only
with those
in their surroundings), for social reasons (individuals interact only with those
with whom
they are acquainted) or otherwise. As in previous cases, these variations
prevent one from
using the expected value of fitness of a strategy in the population as a good
approximation
for its growth rate. We will see the consequences this has in the following
sections.
\subsection{The problem of the emergence of cooperation}
One of the most important problems to which evolutionary game theory is being
applied
is the understanding of the emergence of cooperation in human (albeit
non-exclusively)
societies \cite{pennisi:2005}. As we stated in the Introduction, this is an
evolutionary
puzzle that can be accurately expressed within the formalism of game theory. One
of
the games that has been most often used in connection with this problem is the
Prisoner's Dilemma introduced above, Eq.~(\ref{eq:anxo1}). As we have seen,
rational players should unavoidably defect and never cooperate, thus leading
to a very bad outcome for both players. On the other hand, it is evident that if
both players had cooperated they would have been much better off. This is a
prototypical example of a social dilemma \cite{kollock:1998} which is, in fact,
(partially) solved in societies.
Indeed, the very existence of human society, with its highly specialized labor
division, is a proof that cooperation is possible.
In more biological terms, the question can be phrased using again the concept of
fitness.
Why should an individual help another achieve more fitness, implying more
reproductive
success and a chance that the helper is eventually displaced? It is important to
realize
that such a cooperative effort is at the roots of very many biological phenomena,
from
mutualism to the appearance of multicellular organisms
\cite{maynard-smith:1995}.
When one considers this problem in the framework of the replicator equation, the conclusion
is immediate and disappointing: Cooperation is simply not possible. As defection is the only
Nash equilibrium of Prisoner's Dilemma, for any initial condition with
a positive fraction of defectors, replicator dynamics will inexorably
take the population to a final state in which they all are defectors. Therefore, one needs to
understand how the replicator equation framework can be supplemented or superseded
for evolutionary game theory to become closer to what is observed in the real world
(note that
there is no hope for classical game theory in this respect as it is based on the perfect
rationality of the players). Relaxing the above discussed assumptions leads, in
some
cases, to possible solutions to this puzzle, and our aim here is to summarize
and
review what has been done along these lines with Assumption 2.
\section{The effect of different time scales}
\label{sec:3}
Evolution is generally supposed to occur at a slow pace: Many generations
may be needed for a noticeable change to arise in a species. This is indeed
how Darwin understood the effect of natural selection, and he always
referred to its cumulative effects over very many years. However, this need
not be the case and, in fact, selection may occur faster
than the interaction between individuals (or of the individuals with their
environment). Thus, recent
experimental studies have reported observations of fast selection
\cite{hendry:1999,hendry:2000,yoshida:2003}. It is also conceivable
that in man-monitored or laboratory processes one might make
selection, rather than interaction, the faster process. Another
context where these ideas apply naturally is that of
cultural evolution or social learning, where the time scale of selection is
much closer to the time scale of interaction. Therefore, it is
natural to ask about the consequences of the above assumption and the
effect of relaxing it.
This issue has already been considered from an economic viewpoint in
the context of equilibrium selection (but see an early biological example breaking
the assumption of purely random matching in \cite{fagen:1980}, which considered
Hawk-Dove games where strategists are more likely to encounter individuals
using their same strategy). This refers
to a situation in which for a game there is more than one equilibrium, like
in the \emph{Stag Hunt game}, given e.g.\ by the following payoff
matrix
\begin{equation} \label{eq:anxo6}
\begin{array}{ccc}
& \mbox{ }\, {\rm C} & \!\!\!\!\!\! {\rm D} \\
\begin{array}{c} {\rm C} \\ {\rm D} \end{array} & \left(\begin{array}{c} 6 \\ 5
\end{array}\right. & \left.\begin{array}{c} 1\\ 2
\end{array}\right). \end{array} \end{equation}
This game was already posed as a metaphor by Rousseau \cite{skyrms:2003},
which reads as follows: Two people go out hunting for stag, because two
of them are necessary to hunt down such a big animal. However, any one of them
can cheat the other by hunting hare, which one can do alone, leaving the other
one with no possibility of getting the stag. Therefore, we have a coordination
game, in which the best option is to do as the other: Hunt stag together or both
hunting hare separately.
In game theory, this translates into the fact that
both C and D are Nash equilibria, and in principle one is
not able to determine which one would be selected by the players, i.e., which
one is the solution of the game. One rationale to choose was proposed by
Harsanyi and Selten\footnote{Harsanyi and Selten received the Nobel Prize
in Economics for this contribution, along with Nash, in 1994.}
\cite{harsanyi:1988}, who classified C as the Pareto-efficient
equilibrium (hunting stag is more profitable than hunting hare),
because that is the most beneficial for both players, and D as the
risk-dominant equilibrium, because it is the strategy that is better in case the other
player chooses D (one can hunt hare alone). Here the tension arises then from the
risk involved in cooperation, rather than from the temptation to defect of
Snowdrift games
\cite{macy:2002} (note that both tensions are present in Prisoner's Dilemma).
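Under the replicator dynamics the coordination structure is transparent: the interior rest point of the matrix (\ref{eq:anxo6}) is unstable and separates the basins of attraction of the two pure equilibria. A short numerical sketch (Python, explicit Euler; a toy integration of our own, not taken from the works cited here) makes this bistability visible:

```python
def replicator_step(x_c, W, dt=0.01):
    """Euler step of the replicator equation for the fraction x_c of
    C-players in a symmetric 2x2 game (the fraction of D is 1 - x_c)."""
    f_c = W[0][0] * x_c + W[0][1] * (1 - x_c)   # payoff of C against the mix
    f_d = W[1][0] * x_c + W[1][1] * (1 - x_c)   # payoff of D against the mix
    return x_c + dt * x_c * (1 - x_c) * (f_c - f_d)

W_sh = [[6, 1],
        [5, 2]]              # Stag Hunt payoffs, strategies (C, D)

# Interior rest point from the indifference condition f_c = f_d.
x_star = (W_sh[1][1] - W_sh[0][1]) / (
    W_sh[0][0] - W_sh[1][0] + W_sh[1][1] - W_sh[0][1])
print(x_star)                # prints 0.5

# Trajectories starting on either side of x_star end in different equilibria.
for x0 in (x_star - 0.05, x_star + 0.05):
    x = x0
    for _ in range(100000):
        x = replicator_step(x, W_sh)
    print(round(x, 3))       # prints 0.0 (all D), then 1.0 (all C)
```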
Kandori {\em et al} \cite{kandori:1993} showed that the risk-dominant equilibrium is
selected when using a stochastic evolutionary game dynamics, proposed by Foster and Young
\cite{foster:1990}, that considers that every player interacts with every other one
(implying slow selection). However, fast selection leads to another
result. Indeed, Robson and Vega-Redondo \cite{robson:1996} considered the situation in which every player is matched to another one and therefore they only play one game before selection acts. In that case, they
showed that the outcome changed and that the Pareto-efficient equilibrium is selected.
This result was qualified later by Miekisz \cite{miekisz:2005}, who showed that
the selected equilibrium depended on the population size and the mutation level of the
dynamics. Recently, this issue has also been considered in \cite{traulsen:2009}, which compares the situation where the contribution of the game to the fitness is small (weak selection, see Section 4.6 below) to the one where the game is the main source of the fitness, finding that in the former the results are
equivalent to the well-mixed population, but not in the latter, where the
conclusions of \cite{robson:1996} are recovered. It is also worth noticing in this regard the works by Boylan
\cite{boylan:1992,boylan:1995}, where he studied the types of random matching that can still be
approximated by continuous equations. In any case, even if the above are not
general results and their application is mainly in economics, we have already a
hint that time scales may play a non-trivial role in evolutionary games.
In fact, as we will show below, rapid selection affects evolutionary
dynamics in such a dramatic way that for some games it even changes
the stability of equilibria. We will begin our discussion by briefly summarizing
results on a model for the emergence of altruistic behavior, in which the
dynamics is not replicator-like, but that illustrates nicely the very important
effects of fast selection. We will subsequently proceed to present a
general theory for symmetric $2 \times 2$ games. There, in order to make
explicit the relation
between selection and interaction time scales, we use a discrete-time
dynamics that produces results equivalent to the replicator dynamics
when selection is slow. We will then show that the pace at which selection
acts on the population is crucial for the appearance and stability of
cooperation. Even in non-dilemma games such as the \emph{Harmony game}
\cite{licht:1999}, where cooperation is the only possible rational
outcome, defectors may be selected for if population renewal is very
rapid.
\subsection{Time scales in the Ultimatum game}
\label{sec:ultimatum}
As a first illustration of the importance of time scales
in evolutionary game dynamics, we begin by dealing with this problem in the context of a
specific set of such experiments, related to the \emph{Ultimatum game} \cite{guth:1982,henrich:2004}.
In this game,
under conditions of anonymity, two players are shown a sum
of money. One of the players, the ``proposer'',
is instructed to offer any amount
to the other, the ``responder''. The proposer can make only one
offer, which the responder can accept or reject. If the offer is
accepted, the money is shared accordingly; if rejected, both
players receive nothing. Note that the Ultimatum game is not
symmetric, in so far as proposer and responder have clearly
different roles and are therefore not exchangeable. This will be
our only such an example, and the remainder of the paper will only
deal with symmetric games. Since the game is played only once
(no repeated interactions) and anonymously (no reputation gain;
for more on explanations of altruism relying on reputation
see \cite{nowak:1998}),
a self-interested responder will accept any amount of money
offered. Therefore, self-interested proposers will offer the
minimum possible amount, which will be accepted.
The above prediction, based on the rational character of the players,
contrasts clearly with the results of
actual Ultimatum game experiments with human subjects,
in which average offers do not even approximate the self-interested prediction.
Generally speaking, proposers offer respondents very substantial
amounts (50\% being a typical modal offer) and respondents
frequently reject offers below 30\% \cite{camerer:2003,fehr:2003}. Most of the
experiments have been carried out with university students in
western countries, showing a large degree of individual variability
but a striking uniformity between groups in average behavior.
A large study in 15 small-scale societies \cite{henrich:2004}
found that, in all
cases, respondents or proposers behave in a reciprocal manner.
Furthermore, the behavioral variability across groups was much
larger than previously observed: While mean offers in the case
of university students are in the range 43\%-48\%, in the
cross-cultural study they ranged from 26\% to 58\%.
How does this fit in our focus topic, namely the emergence of cooperation?
The fact that indirect reciprocity is excluded by the anonymity
condition and that interactions
are one-shot (repeated interaction, the mechanism proposed by Axelrod to
foster cooperation \cite{axelrod:1981,axelrod:1984}, does not apply)
allows one to interpret rejections in terms of the so-called
strong reciprocity \cite{gintis:2000a,fehr:2002}.
This amounts to considering that
these behaviors are truly altruistic, i.e.\ that
they are costly for the individual performing them in so far as
they do not result in direct or indirect benefit. As a consequence,
we return to our evolutionary puzzle: The negative effects of
altruistic acts must decrease the altruist's fitness as compared to
that of the recipients of the benefit, ultimately leading to
the extinction of altruists. Indeed, standard evolutionary game
theory arguments applied to the Ultimatum game lead to the expectation
that, in a well-mixed population, punishers (individuals
who reject low offers) have less chance to survive
than rational
players (individuals who accept any offer) and eventually disappear.
We will now show that this conclusion
depends on the dynamics, and that different dynamics may lead to
the survival of punishers through fluctuations.
Consider a population of $N$ agents playing the Ultimatum game,
with a fixed sum of money $M$ per game.
Random pairs of players are chosen, of which one is the proposer
and another one is the respondent. In its simplest version,
we will assume that
players are capable of other-regarding behavior (empathy); consequently,
in order to optimize their gain,
proposers offer the minimum amount of money
that they would accept. Every agent has her own, fixed
acceptance threshold, $1\leq t_i\leq M$ ($t_i$ are always integer
numbers for simplicity). Agents have only one strategy:
Respondents reject any offer
smaller than their own acceptance threshold, and
accept offers otherwise.
Money
shared as a consequence of accepted offers accumulates to the
capital of each player, and is subsequently
interpreted as fitness as usual.
After $s$ games,
the agent with the overall minimum fitness is
removed (randomly picked if there are several)
and a new agent is introduced by duplicating that
with the maximum fitness, i.e.\ with the same threshold and the
same fitness (again randomly picked if there are
several). Mutation is introduced in the duplication process by
allowing changes of $\pm 1$ in the acceptance threshold of the
newly generated player with probability 1/3 each. Agents
have no memory (interactions are one-shot) and no information
about other agents (no reputation gains are possible). We note that the
dynamics of this model is not equivalent to the replicator
equation, and therefore the results do not apply directly in that context.
In fact, such an extremal dynamics leads to an amplification of the effect
of fluctuations that allows one to observe more clearly the influence of time
scales. This is the reason why we believe it will help make our main
point.
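To make the rules concrete, here is a minimal reimplementation of the dynamics just described (Python; population size, run length and the tie-breaking details are our own choices, much smaller than in the simulations shown in Fig.~\ref{figure:anxo1}):

```python
import random

def ultimatum_run(N=100, M=100, s=10, rounds=20000, seed=1):
    """Sketch of the model: empathic proposers offer their own
    acceptance threshold; every s games the poorest agent is replaced
    by a copy of the richest, mutated by -1, 0 or +1 (1/3 each)."""
    rng = random.Random(seed)
    thresh = [1] * N                    # initial condition of Fig. 1 (left)
    fitness = [0] * N
    for game in range(1, rounds + 1):
        p, r = rng.sample(range(N), 2)  # proposer and responder
        offer = thresh[p]               # minimum the proposer would accept
        if offer >= thresh[r]:          # accepted: the sum M is shared
            fitness[p] += M - offer
            fitness[r] += offer
        if game % s == 0:               # selection step, random tie-breaking
            worst = min(range(N), key=lambda i: (fitness[i], rng.random()))
            best = max(range(N), key=lambda i: (fitness[i], rng.random()))
            t = thresh[best] + rng.choice((-1, 0, 1))
            thresh[worst] = min(M, max(1, t))
            fitness[worst] = fitness[best]
    return sum(thresh) / (N * M)        # mean threshold, as a fraction of M

print(ultimatum_run())
```

Varying $s$ in this sketch is the natural way to explore the role of the selection time scale; quantitative agreement with Fig.~\ref{figure:anxo1} is not to be expected at these small sizes.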
\begin{figure}[t]
\centering
\includegraphics[height=.25\textheight]{fig01a}\hspace*{5mm}
\includegraphics[height=.25\textheight]{fig01b}
\caption{Left: mean acceptance threshold as a function of simulation
time. Initial condition is that all agents have $t_i=1$.
Right: acceptance threshold distribution after $10^8$ games (note that this
distribution, for small $s$, is not stationary).
Initial condition is that all agents have uniformly distributed, random
$t_i$.
In both cases, $s$ is as indicated in the plot. \label{figure:anxo1}}
\end{figure}
Fig.~\ref{figure:anxo1} shows the typical outcome of simulations of our model
for
a population of $N=1000$ individuals. An important point to note
is that we are not plotting averages but a single realization for each value of $s$; the
realizations we plot are not specially chosen but rather are representative of the typical
simulation results. We have chosen to plot single realizations instead of averages to
make clear to the reader the large fluctuations arising for small $s$, which are key
to understanding the results and which we discuss below.
As we can see, the mean acceptance threshold rapidly evolves towards
values around 40\%, while the whole
distribution of thresholds converges to a peaked function, with
the range of acceptance thresholds for the agents covering about
10\% of the available values.
These are values compatible with the experimental results discussed
above. The
mean acceptance threshold fluctuates
during the length of the simulation, never reaching a stationary value
for the durations we have explored. The width of the peak fluctuates
as well, but on a much smaller scale than the position.
The fluctuations are larger for smaller values of $s$, and when $s$
becomes of the order of $N$ or larger, the evolution of the mean
acceptance threshold is very smooth. As is clear from Fig.~\ref{figure:anxo1}, for very small values
of $s$, the differences in payoff arising from the fact that only some players play are
amplified by our extremal dynamics, resulting in a very noisy behavior of the
mean threshold. This is a crucial point and will
be discussed in more detail below.
Importantly, the typical evolution
we are describing does not depend on the initial condition. In particular,
a population consisting solely of self-interested agents, i.e.\ all
initial thresholds set to $t_i=1$, evolves in the same fashion.
Indeed, the curves shown in the left panel of Fig.~\ref{figure:anxo1} (which again correspond to
single realizations) have been obtained with such an initial condition,
and it can be clearly observed that self-interested agents disappear
in the early stages of the evolution.
The number of players and the value $M$ of the capital at stake in every
game are not important either, and increasing $M$ only leads
to a higher resolution of the threshold distribution function, whereas smaller
mutation rates simply change the pace of evolution.
To appreciate the effect of time scales, it is important to recall
previous studies of the Ultimatum game by Page and
Nowak \cite{page:2000,page:2002a}.
The model introduced in those works has a dynamics completely
different from ours: Following standard evolutionary game theory,
every player plays with every other one in both roles (proponent and
respondent), and afterwards players reproduce with probability
proportional to their payoff (which is fitness in the reproductive
sense). Simulations and adaptive dynamics equations show that the
population ends up composed of players with fair (50\%) thresholds.
Note that this is not what one would expect on a rational basis, but
Page and Nowak traced this result back to empathy, i.e.\ the fact that
the model is constrained to offer what one would accept. In any
event, what we want to stress here is that their findings are also
different from our observations: We only reach an equilibrium for large
$s$. The reason for this difference is that the Page-Nowak model dynamics
describes the $s/N\to\infty$ limit of our model, in which between
death-reproduction events the time-averaged gain obtained by all players is,
to high accuracy, a constant $O(N)$ times the mean payoff. We thus see that our
model is more general
because it has one free parameter, $s$, that allows selecting different
regimes whereas the Page-Nowak dynamics is only one limiting case.
Those different regimes are what we have described as fluctuation dominated
(when $s/N$ is finite and not too large) and the regime analyzed by
Page and Nowak (when $s/N\to\infty$).
This amounts to saying that by varying $s$ we can
study regimes far from the standard evolutionary game theory
limit. As a result, we find a variability of outcomes for the
acceptance threshold consistent with the observations in real
human societies \cite{henrich:2004,fehr:2003}. Furthermore, if one considers that the acceptance threshold and the offer can be set independently,
the results differ even more \cite{sanchez:2005}: While in the model of Page and
Nowak
both magnitudes evolve to take very low values, close to zero, in
the model presented here the results, when $s$ is small, are very similar to the one-threshold
version, leading again to values compatible with the experimental observations.
This in turn implies that rapid selection may be an alternative to empathy as
an explanation of human behavior in this game.
The main message to be taken from this example is that
fluctuations due to the finite number of games $s$ are very important.
Among the results summarized above, the
evolution of a population entirely
formed by self-interested players into a diversified population with a
large majority of altruists is the most relevant and surprising one.
One can argue that the underlying reason for this is precisely
the presence of
fluctuations in our model. For the sake of definiteness, let us
consider the case $s=1$ (agent replacement takes place after every game)
although the discussion applies to larger (but finite) values of $s$ as
well. After one or more games, a mutation event will take place
and a ``weak altruistic punisher'' (an agent with $t_i=2$) will appear
in the population,
with a fitness inherited from its ancestor. For this new agent to be
removed at the next iteration, our model rules imply that this agent has to have
the lowest fitness, and also that it does not play as a proposer in
the next game (if playing as a responder the agent will earn nothing
because of her threshold). In any other event this altruistic punisher
will survive at least one cycle, during which another altruist can appear by mutation. It is thus clear that fluctuations indeed help altruists to take
over: As soon as a few altruists are present in the population, it is
easy to see analytically that they will survive and proliferate even
in the limit $s/N\to\infty$.
\subsection{Time scales in symmetric binary games}
The example in the previous subsection suggests that there certainly is an issue of
relative time scales in evolutionary game theory that can have serious implications.
In order to gain insight into this question, it is important to consider a general framework,
and therefore we will now look at the general problem of symmetric $2\times 2$ games.
Asymmetric games can be treated similarly, albeit in a more cumbersome manner, and
their classification involves many more types; we feel, therefore, that the symmetric
case is a much clearer illustration of the effect of time scales. In what follows, we
review and extend previous results of ours \cite{roca:2006,roca:2007}, emphasizing
the consequences of the existence of different time scales.
Let us consider a population of $N$ individuals, each of whom plays with a fixed
strategy, that can be either C or D (for ``cooperate'' and ``defect''
respectively, as in Section~\ref{sec:2}). We denote the payoff that an
$X$-strategist gets when confronted to a $Y$-strategist ($X$ and $Y$ are C or
D) by the matrix element $W_{XY}$.
For a certain time individuals interact with other individuals in pairs randomly
chosen from the
population. During these interactions individuals collect payoffs. We shall refer to the
interval between two interaction events as the \emph{interaction time}. Once the interaction
period has finished reproduction occurs, and in steady state selection acts immediately
afterwards restoring
the population size to the maximum allowed by the environment. The time between two of these
reproduction/selection events will be referred to as the \emph{evolution time}.
Reproduction and selection can be implemented in at least two different ways. The first one
is through the Fisher-Wright process \cite{ewens:2004} in which each individual generates a
number of offspring proportional to her payoff. Selection acts by randomly killing
individuals of the new generation until restoring the size of the population back to $N$
individuals. The second option for the evolution is the Moran process \cite{ewens:2004,moran:1962}. It amounts
to randomly choosing an individual for reproduction with probability proportional to payoffs; her single offspring then replaces another individual chosen uniformly at random (i.e.\ with probability $1/N$ for each). In this manner the population size always remains constant. The Fisher-Wright process is
an appropriate model for species which produce a large number of offspring in the next
generation but only a few of them survive, and the next generation replaces the previous one
(like insects or some fishes). The Moran process is a better description for species which
give rise to few offspring and reproduce in continuous time, because individuals neither
reproduce nor die simultaneously, and death occurs at a constant rate. The original process
was generalized to the frequency-dependent fitness context of evolutionary game theory
by Taylor {\em et al.} \cite{taylor:2004}, and used to study the conditions for selection favoring the invasion and/or fixation
of new phenotypes. The results were found to depend on whether the population was infinite or
finite, leading to a classification of the process in three or eight scenarios, respectively.
Both the Fisher-Wright and Moran processes define Markov chains
\cite{karlin:1975,grinstead:1997}
on the population, characterized by the number of its C-strategists $n\in\{0,1,\dots,N\}$,
because in both cases it is assumed that the composition of the next generation is
determined solely by the composition of the current generation. Each process defines a
stochastic matrix $P$ with elements $P_{n,m}=p(m|n)$, the probability that the next generation
has $m$ C-strategists provided the current one has $n$. While for the Fisher-Wright process
all the elements of $P$ may be nonzero, for the Moran process the only nonzero
elements are those
for which $m=n$ or $m=n\pm 1$. Hence Moran is, in the jargon of Markov chains, a
\emph{birth-death process} with two absorbing states, $n=0$ and $n=N$
\cite{karlin:1975,grinstead:1997}. Such a process is mathematically simpler, and for this
reason it will be the one we will choose for our discussion on the effect of time scales.
To introduce explicitly time scales we will implement the Moran process in the
following way, generalizing the proposal by Taylor {\em et al.}
\cite{taylor:2004}. During $s$ time steps pairs of individuals will be chosen to
play, one pair every time step. After
that the above-described reproduction/selection process will act according to
the payoffs
collected by players during the $s$ interaction steps. Then, the payoffs of all players are set
to zero and a new cycle starts. Notice that in general players
will play a different number of times ---some not at all--- and this will be reflected in the
collected payoffs. If $s$ is too small most players will not have the opportunity to play
and chance will have a more prominent role in driving the evolution of the population.
Quantifying this effect requires that we first compute the probability that, in
a population of $N$ individuals of which $n$ are C-strategists, an
$X$-strategist is chosen
to reproduce after the $s$ interaction steps. Let $n_{XY}$ denote the number of
pairs of $X$- and $Y$-strategists that are chosen to play. The probability of
forming
a given pair, denoted $p_{XY}$, will be
\begin{equation}
p_{\text{CC}}=\frac{n(n-1)}{N(N-1)}, \qquad
p_{\text{CD}}=2\frac{n(N-n)}{N(N-1)}, \qquad
p_{\text{DD}}=\frac{(N-n)(N-n-1)}{N(N-1)}.
\end{equation}
Then the probability of a given set of $n_{XY}$ is dictated by the
multinomial distribution
\begin{equation}
M(\{n_{XY}\};s)=
\begin{cases}
\displaystyle s!\frac{p_{\text{CC}}^{n_{\text{CC}}}}{n_{\text{CC}}!}
\frac{p_{\text{CD}}^{n_{\text{CD}}}}{n_{\text{CD}}!}
\frac{p_{\text{DD}}^{n_{\text{DD}}}}{n_{\text{DD}}!}, &
\text{if $\displaystyle n_{\text{CC}}+n_{\text{CD}}+n_{\text{DD}}=s$,} \\
0, & \text{otherwise.}
\end{cases}
\label{eq:multinomial}
\end{equation}
For a given set of variables $n_{XY}$, the payoffs collected by C- and
D-strategists are
\begin{equation}
W_{\text{C}}=2n_{\text{CC}} W_{\text{CC}}+n_{\text{CD}} W_{\text{CD}}, \qquad
W_{\text{D}}=n_{\text{CD}} W_{\text{DC}}+2n_{\text{DD}} W_{\text{DD}}.
\end{equation}
Then the probabilities of choosing a C- or D-strategist for reproduction are
\begin{equation}
P_{\text{C}}(n)=\Expect_M\left[\frac{W_{\text{C}}}{W_{\text{C}}+W_{\text{D}}}\right], \qquad
P_{\text{D}}(n)=\Expect_M\left[\frac{W_{\text{D}}}{W_{\text{C}}+W_{\text{D}}}\right],
\end{equation}
where the expectations $\Expect_M\left[ \cdot \right]$ are taken over the
probability distribution $M$ (\ref{eq:multinomial}). Notice that we have to
guarantee $W_X \ge 0$ for the above expressions to define
a true probability. This forces us to choose all payoffs $W_{XY}\ge 0$.
In addition, we have studied the
effect of adding a baseline fitness to every player, which is equivalent to a
translation of the payoff matrix $W$, obtaining the same qualitative results
(see below).
Once these probabilities are obtained the Moran process accounts for the transition probabilities
from a state with $n$ C-strategists to another with $n\pm 1$ C-strategists. For $n\to n+1$
a C-strategist must be selected for reproduction (probability $P_{\text{C}}(n)$) and a
D-strategist for being replaced (probability $(N-n)/N$). Thus
\begin{equation}
P_{n,n+1}=p(n+1|n)=\frac{N-n}{N}P_{\text{C}}(n).
\end{equation}
For $n\to n-1$ a D-strategist must be selected for reproduction (probability $P_{\text{D}}(n)$)
and a C-strategist for being replaced (probability $n/N$). Thus
\begin{equation}
P_{n,n-1}=p(n-1|n)=\frac{n}{N}P_{\text{D}}(n).
\end{equation}
Finally, the transition probabilities are completed by
\begin{equation}
P_{n,n}=1-P_{n,n-1}-P_{n,n+1}.
\end{equation}
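For finite $s$ these transition probabilities are easiest to obtain numerically. The sketch below is our own illustration (function names are ours): rather than summing over the multinomial distribution (\ref{eq:multinomial}), it estimates $P_{\text{C}}(n)$ by sampling the $s$ pairings directly.

```python
import random

def P_C(n, N, s, W, samples=20_000, seed=1):
    """Monte Carlo estimate of the probability that a C-strategist is
    chosen for reproduction after s pairings, for payoffs
    W = (W_CC, W_CD, W_DC, W_DD), all assumed positive."""
    Wcc, Wcd, Wdc, Wdd = W
    rng = random.Random(seed)
    pop = ['C'] * n + ['D'] * (N - n)
    total = 0.0
    for _ in range(samples):
        WC = WD = 0.0
        for _ in range(s):                 # s pairings per selection cycle
            a, b = rng.sample(pop, 2)
            if a == b == 'C':
                WC += 2 * Wcc              # both cooperators earn W_CC
            elif a == b == 'D':
                WD += 2 * Wdd              # both defectors earn W_DD
            else:                          # mixed pair
                WC += Wcd
                WD += Wdc
        total += WC / (WC + WD)
    return total / samples

def transition(n, N, s, W):
    """Moran transition probabilities P(n -> n+1) and P(n -> n-1)."""
    pc = P_C(n, N, s, W)
    return (N - n) / N * pc, n / N * (1 - pc)
```

As $s$ grows, the estimate converges to the ratio of average payoffs given in the slow selection limit below.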
\subsubsection{Slow selection limit}
Let us assume that $s\to\infty$, i.e.\ the evolution time is much longer than the interaction time.
Then the distribution (\ref{eq:multinomial}) will be peaked at the values
$n_{XY}=
sp_{XY}$, the larger $s$ the sharper the peak. Therefore in this limit
\begin{equation}
P_{\text{C}}(n)\to\frac{\overline{W_{\text{C}}}(n)}{\overline{W_{\text{C}}}
(n)+\overline{W_{\text{D}}}(n)}, \qquad
P_{\text{D}}(n)\to\frac{\overline{W_{\text{D}}}(n)}{\overline{W_{\text{C}}}(n)+\overline{W_{\text{D}}}(n)},
\end{equation}
where
\begin{equation}
\overline{W_{\text{C}}}(n)=\frac{n}{N}\left[\frac{n-1}{N-1}(W_{\text{CC}}-W_{\text{CD}})+W_{\text{CD}}
\right], \qquad
\overline{W_{\text{D}}}(n)=\frac{N-n}{N}\left[\frac{n}{N-1}(W_{\text{DC}}-W_{\text{DD}})+W_{\text{DD}}
\right].
\end{equation}
In general, for a given population size $N$ we have to resort to a numerical
evaluation of the various quantities that characterize a birth-death process,
according to the formulas in Appendix~\ref{app:A}. However, for large $N$ the
transition probabilities can be expressed in terms of
the fraction of C-strategists $x=n/N$ as
\begin{eqnarray}
P_{n,n+1} &=& x(1-x)\frac{w_{\text{C}}(x)}{xw_{\text{C}}(x)+(1-x)w_{\text{D}}(x)}, \\
P_{n,n-1} &=& x(1-x)\frac{w_{\text{D}}(x)}{xw_{\text{C}}(x)+(1-x)w_{\text{D}}(x)},
\end{eqnarray}
where
\begin{equation}
w_{\text{C}}(x)=x(W_{\text{CC}}-W_{\text{CD}})+W_{\text{CD}}, \qquad
w_{\text{D}}(x)=x(W_{\text{DC}}-W_{\text{DD}})+W_{\text{DD}}.
\label{eq:wCwD}
\end{equation}
The expressions $w_{\text{C}}$ and $w_{\text{D}}$ are, respectively, the expected payoffs of a cooperator and a defector in this limit of large $s$ and $N$.
The factor $x(1-x)$ in front of $P_{n,n+1}$ and $P_{n,n-1}$ arises as a consequence of
$n=0$ and $n=N$ being absorbing states of the process. There is another equilibrium $x^*$
where $P_{n,n\pm 1}=P_{n\pm 1,n}$, i.e.\ $w_{\text{C}}(x^*)=w_{\text{D}}(x^*)$,
with $x^*$ given by
\begin{equation}
x^*=\frac{W_{\text{CD}}-W_{\text{DD}}}{W_{\text{DC}}-W_{\text{CC}}+W_{\text{CD}}-W_{\text{DD}}}.
\label{eq:xstar}
\end{equation}
For $x^*$ to be a valid equilibrium $0<x^*<1$ we must have
\begin{equation}
(W_{\text{DC}}-W_{\text{CC}})(W_{\text{CD}}-W_{\text{DD}})>0.
\label{eq:mixedeq}
\end{equation}
This equilibrium is stable\footnote{Here the notion of stability implies that the process will
remain near $x^*$ for an extremely long time, because as long as $N$ is finite, no matter how
large, the process will eventually end up in $x=0$ or $x=1$, the absorbing states.} as long as
the function $w_{\text{C}}(x)-w_{\text{D}}(x)$ is decreasing at $x^*$: for then, if $x<x^*$,
$P_{n,n+1}>P_{n+1,n}$, and if $x>x^*$, $P_{n,n-1}>P_{n-1,n}$, i.e.\ the process will tend to restore the
equilibrium, whereas if the function is increasing the process will be led out of $x^*$ by
any fluctuation. In terms of (\ref{eq:wCwD}) this implies
\begin{equation}
W_{\text{DC}}-W_{\text{CC}}>W_{\text{DD}}-W_{\text{CD}}.
\label{eq:stability}
\end{equation}
Notice that the two conditions
\begin{equation}
w_{\text{C}}(x^*)=w_{\text{D}}(x^*), \qquad
w'_{\text{C}}(x^*)<w'_{\text{D}}(x^*),
\end{equation}
are precisely the conditions arising from the replicator dynamics for $x^*$ to
be a stable equilibrium \cite{nowak:2006a,hofbauer:1998}, albeit expressed in a
different manner than in Section~\ref{sec:2} ($w'_X$ represents the
derivative of $w_X$ with respect to $x$).
Out of the classic dilemmas, condition (\ref{eq:mixedeq}) holds for Stag Hunt and Snowdrift games,
but condition (\ref{eq:stability}) only holds for the latter. Thus, as we have
already seen, only Snowdrift has a dynamically stable mixed population.
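These conditions are easy to check numerically. The snippet below (our own; it merely evaluates Eqs.~(\ref{eq:xstar})--(\ref{eq:stability})) uses the Snowdrift and Stag Hunt payoff values employed later in the figures:

```python
def mixed_equilibrium(Wcc, Wcd, Wdc, Wdd):
    """Return (x*, stable) if condition (eq:mixedeq) holds, else (None, None)."""
    if (Wdc - Wcc) * (Wcd - Wdd) <= 0:      # no interior equilibrium, Eq. (eq:mixedeq)
        return None, None
    xstar = (Wcd - Wdd) / (Wdc - Wcc + Wcd - Wdd)   # Eq. (eq:xstar)
    stable = (Wdc - Wcc) > (Wdd - Wcd)              # Eq. (eq:stability)
    return xstar, stable

# Snowdrift payoffs used below: stable mixed equilibrium at x* ~ 0.19
print(mixed_equilibrium(1.0, 0.2, 1.8, 0.01))
# Stag Hunt payoffs used below: interior equilibrium at x* ~ 0.49, unstable
print(mixed_equilibrium(1.0, 0.01, 0.8, 0.2))
```

For a Prisoner's Dilemma-like matrix, condition (\ref{eq:mixedeq}) fails and no interior equilibrium exists, consistent with category (i) below.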
This analysis leads us to conclude that the standard setting of evolutionary games as advanced above, in which the
time scale for reproduction/selection is implicitly (if not explicitly) assumed to be much longer
than the interaction time scale, automatically yields the distribution of equilibria dictated by
the replicator dynamics for that game. We have explicitly shown this to be true for binary games,
but it can be extended to games with an arbitrary number of strategies. In the
next section we will
analyze what happens if this assumption on the time scales does not hold.
\subsubsection{Fast selection limit}
When $s$ is finite, considering all the possible pairings and their payoffs, we
arrive at
\begin{equation}
\begin{split}
P_{\text{C}}(n)=\sum_{j=0}^s \sum_{k=0}^{s-j} &
2^{s-j-k} \frac{s!n^{s-k}(n-1)^j(N-n)^{s-j}(N-n-1)^k}{j!k!(s-j-k)!N^s(N-1)^s} \\
&\times \frac{2jW_{\text{CC}}+(s-j-k)W_{\text{CD}}}{2jW_{\text{CC}}+2kW_{\text{DD}}
+(s-j-k)(W_{\text{CD}}+W_{\text{DC}})},
\end{split}
\end{equation}
and $P_{\text{D}}(n)=1-P_{\text{C}}(n)$. We have not been able to write this
formula in a simpler way, so we have to evaluate it numerically for
every choice of the payoff matrix. However, in order to have
a glimpse at the effect of reducing the number of interactions between
successive
reproduction/selection events, we can examine analytically the extreme case
$s=1$, for which
\begin{eqnarray}
P_{n,n+1} &=& \frac{n(N-n)}{N(N-1)}\left[\frac{2W_{\text{CD}}}{W_{\text{DC}}+W_{\text{CD}}}+
\frac{n}{N}\frac{W_{\text{DC}}-W_{\text{CD}}}{W_{\text{DC}}+W_{\text{CD}}}-\frac{1}{N}\right], \\
P_{n,n-1} &=& \frac{n(N-n)}{N(N-1)}\left[1+\frac{n}{N}\frac{W_{\text{DC}}-W_{\text{CD}}}{W_{\text{DC}}
+W_{\text{CD}}}-\frac{1}{N}\right].
\end{eqnarray}
From these equations we find that
\begin{equation}
\frac{P_{n,n-1}}{P_{n,n+1}}=\frac{Dn+S(N-1)}{D(n+1)+S(N-1)-D(N+1)}, \qquad
D= W_{\text{DC}}-W_{\text{CD}}, \qquad
S= W_{\text{DC}}+W_{\text{CD}},
\end{equation}
and this particular dependence on $n$ allows us to find the following closed-form expression for
$c_n$, the probability that starting with $n$ cooperators the population ends up with all cooperators (see Appendix~\ref{app:B})
\begin{equation}
c_n=\frac{R_n}{R_N}, \qquad
R_n=
\begin{cases}
\displaystyle \prod_{j=1}^n\frac{S(N-1)+Dj}{S(N-1)-D(N+1-j)}-1, &
\text{if $D\ne 0$,} \\
n, & \text{if $D=0$.}
\end{cases}
\label{eq:cn}
\end{equation}
The first thing worth noticing in this expression is that it only depends on the two off-diagonal
elements of the payoff matrix (through their sum, $S$, and difference, $D$). This means that in an
extreme situation in which the evolution time is so short that it only allows a single pair of
players to interact, the outcome of the game only depends on what happens when two players with
different strategies play. The reason is obvious: Only those two players that have been chosen to
play will have a chance to reproduce. If both players have strategy $X$, an
$X$-strategist will be
chosen to reproduce with probability $1$. Only if the two players use different strategies does the choice of the player that reproduces depend on the payoffs, and in this case they are precisely $W_{\text{CD}}$ and $W_{\text{DC}}$.
Of course, as $s$ increases this effect crosses over to recover the outcome for the case $s\to\infty$.
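Eq.~(\ref{eq:cn}) is straightforward to evaluate numerically; a sketch (our own, with our function names) computing the whole vector of absorption probabilities for $s=1$:

```python
def fixation_probs(N, Wcd, Wdc):
    """Probabilities c_n of absorption into n = N for s = 1, Eq. (eq:cn).
    Only the off-diagonal payoffs enter, through S and D."""
    D, S = Wdc - Wcd, Wdc + Wcd
    if D == 0:
        return [n / N for n in range(N + 1)]   # neutral drift
    R = [0.0]
    prod = 1.0
    for j in range(1, N + 1):
        prod *= (S * (N - 1) + D * j) / (S * (N - 1) - D * (N + 1 - j))
        R.append(prod - 1.0)
    return [r / R[N] for r in R]               # c_n = R_n / R_N
```

For the Harmony payoffs used below ($W_{\text{CD}}=0.25$, $W_{\text{DC}}=0.75$, so $D>0$) this gives $c_n$ essentially zero except very close to $n=N$, in agreement with the discussion that follows.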
We can extend our analysis further for the case of large populations. If we denote
$x=n/N$ and $c(x)=c_n$, then we can write, as $N\to\infty$,
\begin{equation}
c(x) \sim \frac{e^{N\phi(x)}-1}{e^{N\phi(1)}-1}, \qquad
\phi(x) = \int_0^x\left[\ln(S+Dt)-\ln(S+D(t-1))\right]\,dt.
\end{equation}
Then
\begin{equation}
\phi'(x)=\ln\left(\frac{S+Dx}{S+D(x-1)}\right),
\end{equation}
which has the same sign as $D$, and hence $\phi(x)$ is increasing for $D>0$ and decreasing for
$D<0$.
Thus if $D>0$, because of the factor $N$ in the argument of the exponentials and the
fact that $\phi(x)>0$ for $x>0$, the exponential will increase sharply with $x$. Then, expanding
around $x=1$,
\begin{equation}
\phi(x)\approx \phi(1)-(1-x)\phi'(1),
\end{equation}
so
\begin{equation}
c(x)\sim \exp\{-N\ln(1+D/S)(1-x)\}.
\label{eq:asympDp}
\end{equation}
The outcome for this case is that absorption will take place at $n=0$ for almost any initial
condition, except if we start very close to the absorbing state $n=N$, namely
for $n \gtrsim N-1/\ln(1+D/S)$.
On the contrary, if $D<0$ then $\phi(x)<0$ for $x>0$ and the exponential will be peaked at $0$.
So expanding around $x=0$,
\begin{equation}
\phi(x)\approx x\phi'(0)
\end{equation}
and
\begin{equation}
c(x)\sim 1-\exp\{-N\ln(1-D/S)x\}.
\label{eq:asympDm}
\end{equation}
The outcome in this case is therefore symmetrical with respect to the case $D>0$, because
now the probability of ending up absorbed into $n=N$ is $1$ for nearly all initial conditions
except for a small range near $n=0$ determined by $n \lesssim 1/\ln(1-D/S)$.
In both cases the
range of exceptional initial conditions increases with decreasing $|D|$, and in
particular when $D=0$ the evolution becomes neutral,\footnote{Notice that if $D=0$ then $W_{\text{DC}}=
W_{\text{CD}}$ and therefore the evolution does not favor any of the two strategies.}
as is reflected in the fact that in this special case $c_n=n/N$ (cf.\ Eq.~(\ref{eq:cn}))
\cite{ewens:2004}.
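The asymptotic expressions (\ref{eq:asympDp}) and (\ref{eq:asympDm}) are also trivial to evaluate; the following sketch is our own:

```python
import math

def c_asymptotic(x, N, Wcd, Wdc):
    """Large-N approximation to the absorption probability c(x):
    Eq. (eq:asympDp) for D > 0 and Eq. (eq:asympDm) for D < 0."""
    D, S = Wdc - Wcd, Wdc + Wcd
    if D > 0:
        return math.exp(-N * math.log(1 + D / S) * (1 - x))
    if D < 0:
        return 1 - math.exp(-N * math.log(1 - D / S) * x)
    return x                      # neutral case, c(x) = x
```

For $D>0$ the result is appreciable only when $N-n \lesssim 1/\ln(1+D/S)$, and symmetrically for $D<0$, as stated above.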
In order to illustrate the effect of a finite $s$, even in the case when $s>1$, we will consider
all possible symmetric $2\times 2$ games. These were classified by Rapoport and Guyer
\cite{rapoport:1966} in $12$ non-equivalent classes which, according to their
Nash equilibria
and their dynamical behavior under replicator dynamics, fall into three different categories:
\begin{enumerate}[(i)]
\item Six games have $W_{\text{CC}}>W_{\text{DC}}$ and $W_{\text{CD}}>W_{\text{DD}}$, or
$W_{\text{CC}}<W_{\text{DC}}$ and $W_{\text{CD}}<W_{\text{DD}}$. For them, their unique Nash
equilibrium corresponds to the dominant strategy (C in the first case and D in the second case).
This equilibrium is the global attractor of the replicator dynamics.
\item Three games have $W_{\text{CC}}>W_{\text{DC}}$ and $W_{\text{CD}}<W_{\text{DD}}$.
They have several Nash equilibria, one of them with a mixed strategy, which is an unstable
equilibrium of the replicator dynamics and therefore acts as a separator of the basins of
attractions of two Nash equilibria in pure strategies, which are the attractors.
\item The remaining three games have $W_{\text{CC}}<W_{\text{DC}}$ and $W_{\text{CD}}>W_{\text{DD}}$.
They also have several Nash equilibria, one of them with a mixed strategy, but in this case
this is the global attractor of the replicator dynamics.
\end{enumerate}
Examples of the first category are the Harmony and Prisoner's Dilemma games.
Category (ii) includes the Stag Hunt game, whereas the Snowdrift game belongs
to category (iii).
We will begin by considering one example of category (i): the Harmony game. To that end we
will choose the parameters $W_{\text{CC}}=1$, $W_{\text{CD}}=0.25$, $W_{\text{DC}}=0.75$
and $W_{\text{DD}}=0.01$. The name of this game refers to the fact that it represents no conflict,
in the sense that all players get the maximum payoff by following strategy C. The values of $c_n$
obtained for different populations $N$ and several values of $s$ are plotted in Fig.~\ref{fig:harmony}.
The curves for large $s$ illustrate the conflict-free character of this game, as the probability
$c_n$ is almost $1$ for every starting initial fraction of C-strategists. The results for small
$s$ also illustrate the effect of fast selection, as the inefficient strategy, D, is
selected for almost any initial fraction of C-strategists. The effect is more pronounced the
larger the population. The crossover between the two regimes takes place at $s=2$ or $3$, but it
depends on the choice of payoffs. A look at Fig.~\ref{fig:harmony} reveals that
the crossing over to the $s\to\infty$ regime as $s$ increases has no connection
whatsoever with $N$, because it occurs nearly at the
same values for any population size $N$. It does depend, however, on the
precise values of the payoffs. As a further check, in Fig.~\ref{fig:asympt} we
plot the results for $s=1$ for different population sizes $N$ and
compare with the asymptotic prediction (\ref{eq:asympDp}), showing its
great accuracy for values of $N=100$ and higher; even for $N=10$ the deviation
from the exact results is not large.
\begin{figure}[t]
\subfigure[]{\includegraphics[width=55mm,clip=]{fig02a}}
\subfigure[]{\includegraphics[width=55mm,clip=]{fig02b}}
\subfigure[]{\includegraphics[width=55mm,clip=]{fig02c}}
\caption[]{Absorption probability $c_n$ to state $n=N$ starting from initial
state $n$, for a Harmony game (payoffs $W_{\text{CC}}=1$,
$W_{\text{CD}}=0.25$, $W_{\text{DC}}=0.75$ and
$W_{\text{DD}}=0.01$), population sizes $N=10$ (a), $N=100$ (b) and $N=1000$
(c), and for values of $s=1$, $2$, $3$, $10$ and $100$. The values for $s=100$
are indistinguishable from the results of replicator dynamics.}
\label{fig:harmony}
\end{figure}
\begin{figure}[t]
\centering{\includegraphics[width=85mm,clip=]{fig03}}
\caption[]{Same as in Fig.~\ref{fig:harmony} plotted
against $N-n$, for $s=1$ and $N=10$, $100$ and $1000$. The solid line is the asymptotic prediction
(\ref{eq:asympDp}).}
\label{fig:asympt}
\end{figure}
Let us now move to category (ii), well represented by the Stag Hunt game,
discussed
in the preceding subsection.
We will choose for this game the payoffs $W_{\text{CC}}=1$,
$W_{\text{CD}}=0.01$, $W_{\text{DC}}=0.8$ and $W_{\text{DD}}=0.2$. The values of $c_n$ obtained for
different populations $N$ and several values of $s$ are plotted in Fig.~\ref{fig:stag-hunt}. The panel (c) for $s=100$ reveals the behavior of the system according to the
replicator dynamics: Both strategies are attractors, and the crossover fraction
of C-strategists separating the two basins of attraction (given by
Eq.~(\ref{eq:xstar})) is, for this case, $x^*
\approx 0.49$. We can see that the effect of decreasing $s$ amounts to shifting
this crossover
towards $1$, thus increasing the basins of attraction of the risk-dominated strategy. In the extreme
case $s=1$ this strategy is the only attractor. Of course, for small population sizes
(Fig.~\ref{fig:stag-hunt}(a)) all these effects (the existence of the threshold and its shifting
with decreasing $s$) are strongly softened, although still noticeable. An
interesting feature of this game is that the effect of a finite $s$ is more
persistent than in the Harmony game: whereas in the latter the replicator
dynamics was practically recovered for values of $s \geq 10$, in Stag Hunt
we have to go up to $s=100$ to find the same behavior.
\begin{figure}[t]
\subfigure[]{\includegraphics[width=55mm,clip=]{fig04a}}
\subfigure[]{\includegraphics[width=55mm,clip=]{fig04b}}
\subfigure[]{\includegraphics[width=55mm,clip=]{fig04c}}
\caption[]{Absorption probability $c_n$ to state $n=N$ starting from initial
state $n$, for a Stag Hunt game (payoffs $W_{\text{CC}}=1$,
$W_{\text{CD}}=0.01$, $W_{\text{DC}}=0.8$ and
$W_{\text{DD}}=0.2$), population sizes $N=10$ (a), $N=100$ (b) and $N=1000$
(c), and for values
of $s=1$, $3$, $5$, $10$ and $100$. Results from replicator dynamics are also plotted for
comparison.}
\label{fig:stag-hunt}
\end{figure}
Finally, a representative of category (iii) is the Snowdrift game, for which we
will choose the
payoffs $W_{\text{CC}}=1$, $W_{\text{CD}}=0.2$, $W_{\text{DC}}=1.8$ and $W_{\text{DD}}=0.01$.
For these values,
the replicator dynamics predicts that both strategies coexist with fractions of population given
by $x^*$ in (\ref{eq:xstar}), which for these parameters takes the value $x^*\approx 0.19$.
However, a birth-death process for finite $N$ always ends up in absorption into
one of the absorbing states. In fact, for any $s$ and $N$ and this choice of
payoffs, the population always ends up
absorbed into the $n=0$ state ---except when it starts very close to $n=N$. But
this case has
a peculiarity that makes it entirely different from the previous ones. Whereas
for the former cases the absorption time (\ref{eq:tau}) is $\tau=O(N)$
regardless of the value of $s$, for Snowdrift the absorption time is $O(N)$ for
$s=1$ but grows very fast with $s$ towards an asymptotic value
$\tau_{\infty}$ (see Fig.~\ref{fig:snowdrift}(a)) and $\tau_{\infty}$ grows
exponentially with
$N$ (see Fig.~\ref{fig:snowdrift}(b)). This means that, while for $s=1$ the process behaves
as in previous cases, being absorbed into the $n=0$ state, as $s$ increases there is a crossover
to a regime in which the transient states become more relevant than the absorbing state
because the population spends an extremely long time in them. In fact, the process oscillates around the
mixed equilibrium predicted by the replicator dynamics. This is illustrated by the distribution
of visits to states $0<n<N$ before absorption (\ref{eq:visits}), shown in Fig.~\ref{fig:visits}. Thus the
effect of fast selection on Snowdrift games amounts to a qualitative change from
the mixed
equilibrium to the pure equilibrium at $n=0$.
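The absorption times discussed here can be obtained without simulation. For the $s\to\infty$ transition probabilities, the expected absorption time $\tau_n$ obeys the standard first-step equations $(P_{n,n+1}+P_{n,n-1})\tau_n = 1 + P_{n,n+1}\tau_{n+1} + P_{n,n-1}\tau_{n-1}$ with $\tau_0=\tau_N=0$. The sketch below (our own first-step computation, not the formulas of the appendix) solves this tridiagonal system:

```python
def absorption_time(N, W):
    """Expected absorption times tau_n for the s -> infinity Moran chain,
    solved by the Thomas algorithm for the tridiagonal first-step system."""
    Wcc, Wcd, Wdc, Wdd = W
    def rates(n):
        x = n / N
        wc = x * (Wcc - Wcd) + Wcd         # Eq. (eq:wCwD)
        wd = x * (Wdc - Wdd) + Wdd
        f = x * (1 - x) / (x * wc + (1 - x) * wd)
        return f * wc, f * wd              # P(n -> n+1), P(n -> n-1)
    a, b, c, d = [], [], [], []            # rows for interior states n = 1..N-1
    for n in range(1, N):
        up, down = rates(n)
        a.append(-down); b.append(up + down); c.append(-up); d.append(1.0)
    for i in range(1, N - 1):              # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    tau = [0.0] * (N + 1)                  # absorbing boundaries tau_0 = tau_N = 0
    tau[N - 1] = d[-1] / b[-1]             # back substitution
    for i in range(N - 3, -1, -1):
        tau[i + 1] = (d[i] - c[i] * tau[i + 2]) / b[i]
    return tau
```

With the Snowdrift payoffs above, $\tau$ from the center of the chain grows very rapidly with $N$, in line with Fig.~\ref{fig:snowdrift}(b).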
\begin{figure}[t]
\begin{center}
\subfigure[]{\includegraphics[width=75mm,clip=]{fig05a}}\qquad
\subfigure[]{\includegraphics[width=75mm,clip=]{fig05b}}
\end{center}
\caption[]{Absorption time starting from the state $n/N=0.5$ for a Snowdrift
game (payoffs
$W_{\text{CC}}=1$, $W_{\text{CD}}=0.2$, $W_{\text{DC}}=1.8$ and
$W_{\text{DD}}=0.01$), as a function of $s$ for population size $N=100$ (a) and
as a function of $N$ in the limit $s\to\infty$ (b). Note the logarithmic scale
for the absorption time.}
\label{fig:snowdrift}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=85mm,clip=]{fig06}
\end{center}
\caption[]{Distribution of visits to state $n$ before absorption for population $N=100$,
initial number of cooperators $n=50$ and
several values of $s$. The game is the same Snowdrift game of Fig.~\ref{fig:snowdrift}.
The curve for $s=100$ is indistinguishable from the one for $s\to\infty$ (labeled `replicator').}
\label{fig:visits}
\end{figure}
Having illustrated the effect of fast selection in these three representative
games, we can now present the general picture. Similar results arise in the
remaining $2\times 2$ games, fast selection favoring in all cases the strategy
with the highest payoff against the opposite strategy. For
the remaining five games of category (i) this means favoring the dominant
strategy (Prisoner's Dilemma is a prominent example of it). The other two cases
of category (ii) also
experience a change in the basins of attraction of the two equilibria. Finally, the remaining
two games of category (iii) experience the suppression of the coexistence state in favor of
one of the two strategies. The conclusion of all this is that fast selection completely changes the outcome of replicator dynamics. Since the off-diagonal terms of social dilemmas satisfy $W_{\text{DC}} > W_{\text{CD}}$, this change in outcome has a negative influence on cooperation, as we have seen in all the games considered. Even for some payoff matrices of a non-dilemma game such as Harmony, it can make defectors invade the population.
Two final remarks are in order. First, these results do not change qualitatively with the population size. In fact, Eqs.~(\ref{eq:asympDp}) and~(\ref{eq:asympDm}) and Fig.~\ref{fig:asympt} very clearly illustrate this.
Second, the extreme $s=1$ case raises a possible concern about this analysis:
all this might just be an effect of the fact that most players do not play and
therefore have no chance to be selected for reproduction. In order to sort this
out we have repeated the analysis introducing a baseline fitness for all
players, so that even if a player does not play she can still be selected for
reproduction. Her probability will of course be smaller than that of the
players who do play; however, we should bear in mind that when $s$ is very
small the overwhelming majority of players are of this type, and this
compensates for the smaller probability. Thus, let $f_b$ be the total
baseline fitness that all players share per round, so that $sf_b/N$ is
the baseline fitness every player has at the time reproduction/selection occurs.
This choice implies that if $f_b=1$ the overall baseline fitness and that
arising from the game are similar, regardless of $s$ and $N$. If $f_b$ is very
small ($f_b\lesssim 0.1$), the result is basically the same as that for $f_b=0$.
The effect for $f_b=1$ is illustrated in Fig.~\ref{fig:baseline} for Harmony and
Stag Hunt games. Note also that at very large baseline fitness ($f_b\gtrsim 10$)
the evolution is almost neutral, although the small deviations induced by the
game ---which are determinant for the ultimate fate of the population--- still
follow the same pattern (see Fig.~\ref{fig:baseline10}). Interestingly, Traulsen {\em et al.}
\cite{traulsen:2007} arrive at similar results by using a Fermi-like rule (see Sec.\ 4.1 below)
to introduce noise (temperature) in the selection process, together with a
probability $q$ of interaction between individuals that leads to heterogeneity
in the payoffs, i.e., in the terms used above, to fluctuations, which in turn
reduce the intensity of selection, as is the case when we introduce a very
large baseline fitness.
\begin{figure}
\begin{center}
\subfigure[]{\includegraphics[width=75mm,clip=]{fig07a}}\qquad
\subfigure[]{\includegraphics[width=75mm,clip=]{fig07b}}
\end{center}
\caption[]{Absorption probability starting from state $n$ for the Harmony game
of Fig.~\ref{fig:harmony} (a) and the Stag Hunt game of Fig.~\ref{fig:stag-hunt}
(b) when $N=100$ and baseline fitness $f_b=1$.}
\label{fig:baseline}
\end{figure}
\begin{figure}
\begin{center}
\subfigure[]{\includegraphics[width=75mm,clip=]{fig08a}}\qquad
\subfigure[]{\includegraphics[width=75mm,clip=]{fig08b}}
\end{center}
\caption[]{Same as Fig.~\ref{fig:baseline} for $f_b=10$.}
\label{fig:baseline10}
\end{figure}
\clearpage
\section{Structured populations}
\label{sec:4}
Having seen the highly non-trivial effects of considering temporal fluctuations
in evolutionary games, in this section we are going to consider the effect of
relaxing the well-mixed hypothesis by allowing the existence of spatial
correlations in the population. Recall from Section~2.2 that a well-mixed
population presupposes that every individual interacts with equal probability
with every other one in the population, or equivalently that each individual
interacts with the ``average'' individual. It is not clear,
however, that this hypothesis holds in many practical situations. Territorial or
physical constraints may limit the interactions between individuals, for
example. On the other hand, an all-to-all network of relationships does not seem
plausible in large societies; other key phenomena in social life, such as
segregation or group formation, challenge the idea of a mean player that
everyone interacts with.
It is appropriate, therefore, to take into consideration the existence
of a certain network of relationships in the population, which determines
who interacts with whom. This network of relationships is what we will
call from now on the \emph{structure} of the population. Consistently,
a well-mixed population will be labeled as \emph{unstructured} and will
be represented by a complete graph. Games on many different types of networks have been investigated, examples of which include regular lattices \cite{nowak:1992,hauert:2002}, scale-free networks \cite{santos:2006a}, real social networks \cite{lozano:2008}, etc. This section is not intended to be an
exhaustive review of all this existing work, and we refer the reader to \cite{szabo:2007} for such a
detailed account. We rather want to give a panoramic and a more personal and
idiosyncratic view of the field, based on the main available results and our own
research.
It is at least reasonable to expect that the existence of structure in a
population could give rise to the appearance of correlations and that they
would have an impact on the evolutionary outcome. For more than fifteen
years investigation into this phenomenon has been a hot topic of research, as the
seminal result by Nowak and May \cite{nowak:1992}, which reported an impressive
fostering of cooperation in Prisoner's Dilemma on spatial lattices, triggered a
wealth of work focused on the extension of this effect to other games, networks
and strategy spaces. On the other hand, the impossibility in most cases of
analytical approaches and the complexity of the corresponding numerical
agent-based models have made any attempt at an exhaustive approach very demanding.
Hence most studies have concentrated on concrete settings with a particular
kind of game, which in most cases has been the Prisoner's Dilemma
\cite{nowak:1992,nowak:1994,lindgren:1994,hutson:1995,grim:1996,nakamaru:1997,
szabo:1998,brauchli:1999,abramson:2001,cohen:2001,vainstein:2001,lim:2002,
schweitzer:2002,ifti:2004,tang:2006,perc:2008}. Other games have been much less
studied as regards the influence of population structure, as shown by the
comparatively much smaller number of works on Snowdrift or Hawk-Dove games
\cite{killingback:1996,hauert:2004,sysi-aho:2005,kun:2006,tomassini:2006,
zhong:2006}, or Stag Hunt games \cite{blume:1993,ellison:1993,kirchkamp:2000}.
Moreover, comprehensive studies in the space of $2 \times 2$ games are very
scarce \cite{hauert:2002,santos:2006a}. As a result, many interesting features
of population structure and its influence on evolutionary games have been
reported in the literature, but the scope of these conclusions is rather
limited to particular models, so a general understanding of these issues, in the
broader context of $2 \times 2$ games and different update rules, is generally
missing.
However, the availability and performance of computational resources in
recent years have allowed us to undertake a systematic and exhaustive simulation
program \cite{roca:2009a,roca:2009b} on these evolutionary models. As a result
of
this study we have reached a number of conclusions that obviously relate to
previous research and that we will discuss in the following. In
some cases, these are generalizations of known results to wider sets of games
and update rules, as for example for the issue of the synchrony of the updating
of strategies
\cite{nowak:1992,nowak:1994,lindgren:1994,kun:2006,
tomassini:2006,kirchkamp:2000,huberman:1993} or the effect of small-world
networks vs regular
lattices \cite{abramson:2001,tomassini:2006,masuda:2003,tomochi:2004}. In
other cases, the more general view of our analysis has allowed us to
integrate apparently contradictory results in the literature, such as those on
cooperation in Prisoner's Dilemma vs.\ Snowdrift games
\cite{nowak:1992,killingback:1996,hauert:2004,sysi-aho:2005,tomassini:2006}, or
the importance of clustering in spatial lattices
\cite{cohen:2001,ifti:2004,tomassini:2006}. Other conclusions of ours, however,
refute what seem to be established opinions in the field, such as the alleged
robustness of the positive influence of spatial structure on Prisoner's
Dilemma \cite{nowak:1992,hauert:2002,nowak:1994}. And finally, we have
reached novel conclusions that have not been highlighted by previous research,
such as the robustness of the influence of spatial structure on coordination games,
or the asymmetry between the effects on games with mixed equilibria
(coordination and anti-coordination games) and how it varies with the intensity
of selection.
It is important to make clear from the beginning that evolutionary games on
networks may be sensitive to another source of variation with respect
to replicator dynamics besides the introduction of spatial correlations. This
source is the update rule, i.e.\ the rule that defines the evolution dynamics of
individuals' strategies, whose influence seems to have been overlooked
\cite{hauert:2002}. Strictly speaking, only when the model implements the
so-called \emph{replicator rule} (see below) is one considering the effect of
the restriction of relationships that the population structure implies, in
comparison with standard replicator dynamics. When using a different update
rule, however, we are adding a second dimension of variability, which amounts to
relaxing another assumption of replicator dynamics, namely number 4, which posits a
population variation linear in the difference of payoffs (see
Section~\ref{sec:2}). We will show extensively that this issue may have a huge
impact on the evolutionary outcome.
In fact, we will see that there is not a general influence of population
structure on evolutionary games. Even for a particular type of network, its
influence on cooperation depends largely on the kind of game and the
specific update rule. All one can do is to identify relevant
\emph{topological characteristics} that have a consistent effect on a broad
range of games and update rules, and explain this influence in terms of the same
basic principles. To this end, we will be looking
at the asymptotic states for different values of the game parameters, and not at how
the system behaves when the parameters are varied, which would be an approach
of a more statistical mechanics character. In this respect, it is worth pointing out that
some studies did use this perspective: thus, it has been shown that the extinction
transitions when the temptation parameter varies within the Prisoner's Dilemma game
and the evolutionary dynamics is stochastic fall in the directed percolation universality
class, in agreement with a well known conjecture \cite{hinrichsen:2000}. In particular,
some of the pioneering works in using a physics viewpoint on evolutionary games
\cite{szabo:1998,chiappin:1999}
have verified this result for specific models. The behavior changes under deterministic
rules such as {\em unconditional imitation} (see below), for which this extinction
transition is discontinuous.
Although our ultimate interest may be the effect on the evolution of
cooperation, measuring to what extent cooperation is enforced or inhibited
is not enough to clarify this effect. As in previous sections, our basic
observables will be the dynamical equilibria of the model, in comparison with
the equilibria of our reference model with standard replicator dynamics
--which, as we have explained in Section~\ref{sec:2}, are closely related to
those of the basic game--. The understanding of how the population structure
modifies qualitatively and quantitatively these equilibria will give us a much
clearer view on the behavior and properties of the model under study, and hence
on its influence on cooperation.
\subsection{Network models and update rules}
Many kinds of networks have been considered as models for population structure
(for recent reviews on networks, see \cite{newman:2003,boccaletti:2006}). A
first class includes networks that introduce a spatial arrangement of
relationships, which can represent territorial or physical constraints in the
interactions between individuals. Typical examples of this group are regular
lattices, with different degrees of neighborhood. Another important group is that
of synthetic networks that try to reproduce important properties that have been
found in real networks, such as the small-world or scale-free properties.
Prominent examples among these are Watts-Strogatz small-world networks
\cite{watts:1998} and Barab\'asi-Albert scale-free networks \cite{albert:2002}.
Finally, ``real'' social networks that come directly from experimental data have
also been studied, as for example in \cite{holme:2003,guimera:2003}.
As was stated before, one crucial component of the evolutionary models that we
are discussing in this section is the update rule, which determines how the
strategy of individuals evolves in time. There is a very large variety of update rules that have been used in the literature, each one arising from different backgrounds. The most important for our purposes is the \emph{replicator rule}, also known as the \emph{proportional imitation rule}, which is inspired on replicator dynamics and we describe in the following.\footnote{To our knowledge, Helbing was the first to show that a macroscopic population evolution following replicator dynamics could be induced by a microscopic imitative update rule \cite{helbing:1992a,helbing:1992b}. Schlag proved later the
optimality of such a rule under certain information constraints, and named it
\emph{proportional imitation} \cite{schlag:1998}.} Let $i = 1 \ldots N$ label
the individuals in the population. Let $s_i$ be the
strategy of player $i$, $W_i$ her payoff and $N_i$ her neighborhood, with $k_i$
neighbors. One neighbor $j$ of player $i$ is chosen at random, $j \in N_i$. The
probability of player $i$ adopting the strategy of player $j$ is given by
\begin{equation}
\label{eq:repldyn}
p^t_{ij} \equiv \Probability{ \{ s_j^t \to s_i^{t+1} \} } =
\left\{ \begin{array}{ll}
( W_j^t - W_i^t ) / \Phi \,, & W_j^t > W_i^t \,, \\
0 \,, & W_j^t \leq W_i^t \,,
\end{array} \right.
\end{equation}
with $\Phi $ appropriately chosen as a function of the payoffs to ensure $\Probability{ \{\cdot\} } \in [0,1]$.
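In code, the rule reads as follows (a minimal sketch; the function and variable names are ours, and we use the concrete normalization $\Phi = \max(k_i,k_j)\,(\max(1,T)-\min(0,S))$ quoted later in this section for the payoff matrix (\ref{eq:payoff-matrix})):

```python
def replicator_prob(W_i, W_j, k_i, k_j, S, T):
    """Probability that player i adopts the strategy of her randomly chosen
    neighbor j, Eq. (repldyn): proportional to the payoff difference when j
    did strictly better, and zero otherwise."""
    if W_j <= W_i:
        return 0.0
    # Normalization ensuring the probability lies in [0, 1] for these payoffs.
    Phi = max(k_i, k_j) * (max(1.0, T) - min(0.0, S))
    return (W_j - W_i) / Phi
```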
\begin{figure}[t]
\centering
\includegraphics[width=0.9\textwidth]{fig09}
\caption{Asymptotic density of cooperators $x^*$ with the replicator rule on a
complete network, when the initial density of cooperators is $x^0 =1/3$ (left,
A), $x^0 =1/2$ (middle, B) and $x^0 =2/3$ (right, C). This is the standard
outcome for a well-mixed population with replicator dynamics, and thus
constitutes the reference to assess the influence of a given population
structure (see main text for further details).}
\label{fig:compnet}
\end{figure}
The reason for the name of this rule is the fact that the equation of
evolution with this update rule, for large sizes of the population, is equal, up
to a time scale factor, to that of replicator dynamics
\cite{gintis:2000,hofbauer:1998}. Therefore, the complete network with
the replicator rule constitutes the finite-size, discrete-time version of replicator
dynamics on an infinite, well-mixed population in continuous time.
Fig.~\ref{fig:compnet} shows the evolutionary outcome of this model, in the same
type of plot as subsequent results in this section. Each panel of this figure
displays the asymptotic density of cooperators $x^*$ for a different initial
density $x^0$, in a grid of points in the $ST$-plane of games. The payoff
matrix of each game is given by
\begin{equation}
\label{eq:payoff-matrix}
\begin{array}{cc}
& \begin{array} {cc} \!\! \mbox{C} & \mbox{D} \end{array} \\
\begin{array}{c} \mbox{C} \\ \mbox{D} \end{array} &
\left( \begin{array}{cc} 1 & S \\ T & 0 \end{array} \right).
\end{array}
\end{equation}
We will consider the generality of this choice of parameters at the end of this section, after introducing the other evolutionary rules. Note that, in the notation of Section~\ref{sec:3}, we have taken $W_{\text
{CC}}=1, W_{\text {CD}}=S, W_{\text {DC}}=T, W_{\text {DD}}=0$; note also that
for these payoffs,
the normalizing factor in the replicator rule can be chosen as
$\Phi = \max(k_i,k_j) ( \max(1,T) - \min(0,S) )$. In this manner,
we visualize the space of symmetric $2 \times 2$ games as a plane of
co-ordinates $S$ and $T$ --for \emph{Sucker's} and \emph{Temptation}--,
which are the respective payoffs of a cooperator and a defector when confronting
each other. The four quadrants represented correspond to the following
games: Harmony (upper left), Stag Hunt (lower left), Snowdrift or Hawk-Dove
(upper right) and Prisoner's Dilemma (lower right). As expected, these results
reflect the close relationship between the equilibria of replicator dynamics and
the equilibria of the basic game. Thus, all Harmony games end up in full
cooperation and all Prisoner's Dilemmas in full defection, regardless of the
initial condition. Snowdrift games reach a mixed strategy equilibrium, with
density of cooperators $x_e = S / (S + T - 1)$. Stag Hunt games are the only
ones whose outcome depends on the initial condition, because of their bistable
character with an unstable equilibrium also given by $x_e$. To allow a
quantitative comparison of the degree of cooperation in each game, we have
introduced a quantitative index, the average cooperation over the region
corresponding to each game, which appears beside each quadrant. The results in
Fig.~\ref{fig:compnet} constitute the reference against which the effect of
population structure will be assessed in the following.
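For reference, the quadrant structure of the $ST$-plane and the mixed equilibrium quoted above can be encoded directly (a small sketch; function names are ours, and the degenerate boundaries $S=0$, $T=1$ are not treated specially):

```python
def classify_game(S, T):
    """Quadrants of the ST-plane for payoffs W_CC=1, W_CD=S, W_DC=T, W_DD=0."""
    if S > 0:
        return "Harmony" if T < 1 else "Snowdrift"
    return "Stag Hunt" if T < 1 else "Prisoner's Dilemma"

def mixed_equilibrium(S, T):
    """x_e = S / (S + T - 1): the stable mixed equilibrium of Snowdrift
    and the unstable (basin-boundary) equilibrium of Stag Hunt."""
    return S / (S + T - 1)
```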
One interesting variation of the replicator rule is the \emph{multiple replicator rule}, which differs in that the whole neighborhood is checked simultaneously, thus making a strategy change more probable. With this rule the probability that player $i$ maintains her strategy is
\begin{equation}
\label{eq:multirepldyn}
\Probability{ \{ s_i^t \to s_i^{t+1} \} } =
\prod \limits_{j \in N_i} ( 1 - p_{ij}^t ) ,
\end{equation}
with $p^t_{ij}$ given by (\ref{eq:repldyn}). In case the strategy update takes place, the neighbor $j$ whose strategy is adopted by player $i$ is selected with probability proportional to $p^t_{ij}$.
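The probability (\ref{eq:multirepldyn}) of keeping the current strategy is just a product over the neighborhood (sketch; names ours):

```python
def multiple_replicator_keep_prob(p_ij):
    """Probability that player i keeps her strategy under the multiple
    replicator rule, Eq. (multirepldyn): every neighbor j independently
    fails to be imitated. p_ij is the list of pairwise imitation
    probabilities p_ij^t from Eq. (repldyn)."""
    keep = 1.0
    for p in p_ij:
        keep *= 1.0 - p
    return keep
```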
A different option is the following \emph{Moran-like rule}, also called \emph{Death-Birth rule}, inspired by the Moran dynamics described in Section~\ref{sec:3}. With this rule a player adopts the strategy of one of her neighbors, or keeps her own, with a probability proportional to the payoffs
\begin{equation}
\label{eq:moran}
\Probability{ \{ s_j^t \to s_i^{t+1} \} } =
\frac {W_j^t - \Psi} {\sum \limits_{k \in N^*_i} ( W_k^t - \Psi )} ,
\end{equation}
with $N^*_i = N_i \cup \{i\}$.
Because payoffs may be negative in Prisoner's Dilemma and Stag Hunt games, the constant $\Psi = \max_{j \in N^*_i}(k_j) \min(0,S)$ is subtracted from them. Note that with this rule a player can adopt, with low probability, the strategy of a neighbor that has done worse than herself.
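A sketch of the resulting adoption probabilities over the extended neighborhood $N^*_i$ (names ours; the payoff and degree lists run over $N^*_i$, player $i$ included):

```python
def moran_adopt_probs(payoffs, degrees, S):
    """Moran-like (Death-Birth) rule, Eq. (moran): probabilities of adopting
    each strategy in N*_i, proportional to shifted payoffs. The shift
    Psi = max(degrees) * min(0, S) makes all weights non-negative."""
    Psi = max(degrees) * min(0.0, S)
    weights = [W - Psi for W in payoffs]
    total = sum(weights)
    return [w / total for w in weights]
```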
The three update rules presented so far are imitative rules. Another important example of this kind is the \emph{unconditional imitation rule}, also known as the \emph{``follow the best''} rule \cite{nowak:1992}. With this rule each player chooses the strategy of the individual with the largest payoff in her neighborhood, provided this payoff is greater than her own. A crucial difference with the previous rules is that this one is deterministic.
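Unconditional imitation can be sketched in a few lines (names ours; ties with the player's own payoff keep the current strategy, and the tie-breaking between equally best neighbors is an arbitrary choice of this sketch):

```python
def unconditional_imitation(strategies, payoffs, i, neighbors):
    """'Follow the best' rule: player i copies the strategy of the
    highest-payoff individual in her neighborhood, but only if that payoff
    strictly exceeds her own (deterministic rule)."""
    best = i
    for j in neighbors:
        if payoffs[j] > payoffs[best]:
            best = j
    return strategies[best]
```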
Another rule that has received a lot of attention in the literature, especially in economics, is the \emph{best response rule}. In this case, instead of some kind of imitation of neighbors' strategies based on payoff scoring, the player has enough cognitive abilities to realize whether she is playing an optimal strategy (i.e.\ a best response) given the current configuration of her neighbors. If this is not the case, she adopts that optimal strategy with probability $p$. It is clear that this rule is innovative, as it is able to introduce strategies not present in the population, in contrast with the previous, purely imitative rules.
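For the $2 \times 2$ games (\ref{eq:payoff-matrix}) the best response is easy to compute explicitly, since it only depends on the number of cooperating neighbors. A sketch (names ours; the stochastic revision with probability $p$ is left out, and ties are broken in favor of D, an arbitrary choice):

```python
def best_response(n_coop, k, S, T):
    """Myopic best response for payoffs W_CC=1, W_CD=S, W_DC=T, W_DD=0,
    against n_coop cooperating neighbors out of k."""
    payoff_C = n_coop * 1.0 + (k - n_coop) * S
    payoff_D = n_coop * T  # defectors earn 0 against defectors
    return 'C' if payoff_C > payoff_D else 'D'
```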
Finally, an update rule that has been widely used in the literature, because it is analytically tractable, is the \emph{Fermi rule}, based on the Fermi
distribution function \cite{szabo:1998,blume:2003,traulsen:2006a}. With this
rule, a neighbor $j$ of player $i$ is selected at random (as with the replicator rule) and the probability of player $i$ acquiring the strategy of $j$ is given by
\begin{equation}
\label{eq:fermidyn}
\Probability{ \{ s_j^t \to s_i^{t+1} \} } =
\displaystyle \frac {1} {1 + \exp \left( - \beta \, ( W_j^t - W_i^t )
\right) } .
\end{equation}
The parameter $\beta$ controls the intensity of selection, and can be understood
as the inverse of temperature or noise in the update rule. Low $\beta$
represents high temperature or noise and, correspondingly, weak selection
pressure. Whereas this rule has been employed to study resonance-like behavior
in evolutionary games on lattices \cite{szabo:2005b}, we use it in this work to
deal with the issue of the intensity of selection (see
Subsection~\ref{subsec:weak-selection}).
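Eq.~(\ref{eq:fermidyn}) translates directly into code (sketch; names ours):

```python
import math

def fermi_prob(W_i, W_j, beta):
    """Probability that player i adopts the strategy of her randomly chosen
    neighbor j, Eq. (fermidyn). beta is the intensity of selection: small
    beta gives nearly random (neutral) updating, while large beta
    approaches a deterministic step function of the payoff difference."""
    return 1.0 / (1.0 + math.exp(-beta * (W_j - W_i)))
```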
Having introduced the evolutionary rules we will consider, it is important to
recall our choice for the payoff matrix (\ref{eq:payoff-matrix}), and discuss
its generality. Most of the rules (namely the replicator, the multiple
replicator, the unconditional imitation and the best response rules) are
invariant on homogeneous networks\footnote{The invariance under
translations of the payoff matrix does not hold if the network is
heterogeneous. In this case, players with higher degrees receive
comparatively more (less) payoff under positive (negative) translations. Only
very recently has this issue been studied in the literature \cite{luthi:2009}.}
under translation and (positive) scaling of the payoff matrix. Among the
remaining rules, the dynamics changes upon translation for the Moran rule and
upon scaling
for the Fermi rule. The corresponding changes in these last two cases amount to
a modification of the intensity of selection, which we also treat in this
work. Therefore, we consider that the parameterization of
(\ref{eq:payoff-matrix}) is general enough for our purposes.
It is also important to realize that for a complete network, i.e.\ for a
well-mixed or unstructured population, the differences between update rules may
not be relevant, since in general they do not change the evolutionary
outcome \cite{roca:2009c}. These differences, however, become crucial when the
population has some structure, as we will point out in the following.
The results displayed in Fig.~\ref{fig:compnet} have been obtained analytically,
but the remaining results of this section come from the simulation of
agent-based models. In all cases, the population size is $N=10^4$ and the
allowed time for convergence is $10^4$ time steps, which we have checked is
enough to reach a stationary state. One time step represents one update event
for every individual in the population (exactly so in the case of synchronous
update, and on average in the asynchronous case), so it can be considered as
one generation. The asymptotic density of cooperators is obtained
averaging over the last $10^3$ time steps, and the values presented in the plots
are the result of averaging over 100 realizations. Cooperators and defectors are
randomly located at the beginning of evolution and, when applicable, networks
have been built with periodic boundary conditions. See \cite{roca:2009a} for
further details.
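The measurement protocol just described can be condensed into a short skeleton (illustrative only; the \texttt{step} callback, which advances the population one generation and returns the current density of cooperators, is a placeholder of ours):

```python
def run_simulation(step, t_max=10_000, t_avg=1_000):
    """Evolve for t_max generations and average the cooperator density over
    the last t_avg generations, as in the protocol described above. The
    further average over independent realizations is taken outside."""
    densities = []
    for t in range(t_max):
        x = step()
        if t >= t_max - t_avg:
            densities.append(x)
    return sum(densities) / len(densities)
```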
\subsection{Spatial structure and homogeneous networks}
In 1992 Nowak and May published a very influential paper \cite{nowak:1992}, where they showed the dramatic effect that the spatial distribution of a population could have on the evolution of cooperation. This has become the prototypical example of the promotion of cooperation favored by the structure of a population, also known as network reciprocity \cite{nowak:2006b}. They considered the following Prisoner's Dilemma:
\begin{equation}
\label{eq:nowak-pd}
\begin{array}{cc}
& \begin{array} {cc} \! \mbox{C} & \! \mbox{D} \end{array} \\
\begin{array}{c} \mbox{C} \\ \mbox{D} \end{array} &
\left( \begin{array}{cc} 1 & 0 \\ T & \epsilon \end{array} \right),
\end{array}
\end{equation}
with $1 \le T \le 2$ and $\epsilon \lesssim 0$. Note that this one-dimensional parameterization corresponds in the $ST$-plane to a line very near the boundary with Snowdrift games.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{fig10}
\caption{Asymptotic density of cooperators $x^*$ in a square lattice with degree
$k=8$ and initial density of cooperators $x^0=0.5$, when the game is
the Prisoner's Dilemma as given by (\ref{eq:nowak-pd}), proposed by Nowak and
May \cite{nowak:1992}. Note that the outcome with replicator dynamics on a
well-mixed population is $x^*=0$ for all the displayed range of the temptation
parameter $T$. Notice also the singularity at $T=1.4$ with unconditional
imitation. The surrounding points are located at $T=1.3999$ and $T=1.4001$.}
\label{fig:nowak}
\end{figure}
Fig.~\ref{fig:nowak} shows the great fostering of cooperation reported by \cite{nowak:1992}. The authors explained this influence in terms of the formation of clusters of cooperators, which give cooperators enough payoff to survive even when surrounded by some defectors. This model has a crucial detail, whose importance we will stress later: The update rule used is unconditional imitation.
Since the publication of this work many studies have investigated related models
with different games and networks, reporting qualitatively consistent results
\cite{szabo:2007}. However, Hauert and Doebeli published in 2004 another
important result \cite{hauert:2004}, which cast a shadow of doubt on the
generality of the positive influence of spatial structure on cooperation. They
studied the following parameterization of Snowdrift games:
\begin{equation}
\label{eq:hauert-sd}
\begin{array}{cc}
& \begin{array} {ccc} \!\!\!\!\! \mbox{C} & \mbox{ } &
\!\!\! \mbox{D } \end{array} \\
\begin{array}{c} \mbox{C} \\ \mbox{D} \end{array} &
\left( \begin{array}{cc} 1 & 2-T \\ T & 0 \end{array} \right),
\end{array}
\end{equation}
with $1 \le T \le 2$ again.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{fig11}
\caption{Asymptotic density of cooperators $x^*$ in a square lattice with degree
$k=8$ and initial density of cooperators $x^0=0.5$, when the game is
the Snowdrift game as given by (\ref{eq:hauert-sd}), proposed by Hauert and
Doebeli \cite{hauert:2004}. The result for a well-mixed population is displayed
as a reference as a dashed line.}
\label{fig:hauert}
\end{figure}
The unexpected result obtained by the authors is displayed in Fig.~\ref{fig:hauert}. Only for low $T$ is there some improvement of cooperation,
whereas for medium and high $T$ cooperation is inhibited. This is a surprising
result, because the basic game, the Snowdrift, is in principle more favorable to
cooperation. As we have seen above, its only stable equilibrium is a mixed
strategy population with some density of cooperators, whereas the unique
equilibrium in Prisoner's Dilemma is full defection (see
Fig.~\ref{fig:compnet}). In fact, a previous paper by Killingback and Doebeli
\cite{killingback:1996} on the Hawk-Dove game, a game equivalent to
the Snowdrift game, had reported an effect of spatial structure equivalent to a
promotion of
cooperation.
Hauert and Doebeli explained their result in terms of the hindrance to cluster
formation and growth, at the microscopic level, caused by the payoff structure
of the Snowdrift game. Notwithstanding the different cluster dynamics in both
games, as observed by the authors, a hidden contradiction looms in their
argument, because it implies some kind of \emph{discontinuity} in the
microscopic dynamics in the crossover between Prisoner's Dilemma and Snowdrift
games ($S=0, 1 \le T \le 2$). However, the equilibrium structure of both basic
games, which drives this microscopic dynamics, is not discontinuous at this
boundary, because for both games the only stable equilibrium is full defection.
So, where does this change in the cluster dynamics come from?
\begin{figure}[t]
\centering
\subfigure[]{\includegraphics[width=0.45\textwidth,clip=]{fig12a}}\qquad
\subfigure[]{\includegraphics[width=0.45\textwidth,clip=]{fig12b}}
\caption{Asymptotic density of cooperators $x^*$ in square lattices with degree
$k=8$ and initial density of cooperators $x^0=0.5$, for both Prisoner's Dilemma
(\ref{eq:nowak-pd}) and Snowdrift games (\ref{eq:hauert-sd}), displayed
separately according to the update rule: (a) unconditional imitation (Nowak and
May's model \cite{nowak:1992}), (b) replicator rule (Hauert and Doebeli's model
\cite{hauert:2004}). The result for Snowdrift in a well-mixed population is
displayed as a reference as a dashed line. The similar influence of regular
lattices on both games becomes clear when the key role of the update rule is
taken into account (see main text for details).}
\label{fig:nowak-hauert}
\end{figure}
The fact is that there is no such difference in the cluster dynamics between
Prisoner's Dilemma and Snowdrift games; the difference lies in the update rules
used in the models. Nowak and May \cite{nowak:1992}, and Killingback and Doebeli
\cite{killingback:1996}, used the unconditional imitation rule, whereas Hauert
and Doebeli \cite{hauert:2004} employed the replicator rule. The crucial role
of the update rule becomes clear in Fig.~\ref{fig:nowak-hauert}, where results
in Prisoner's Dilemma and Snowdrift are depicted separately for each update
rule. It shows that, if the update rule used in the model is the same, the
influence on both games, in terms of promotion or inhibition of cooperation, has
a similar dependence on $T$. For both update rules, cooperation is fostered in
Prisoner's Dilemma and Snowdrift at low values of $T$, and cooperation is
inhibited at high $T$. Note that with unconditional imitation the crossover
between both behaviors takes place at $T \approx 1.7$, whereas with the
replicator rule it occurs at a much lower value of $T \approx 1.15$. The logic
behind this influence is better explained in the context of the full $ST$-plane,
as we will show later.
The fact that this apparent contradiction has been resolved considering the role
of the update rule is a good example of its importance. This conclusion is in
agreement with those of \cite{tomassini:2006}, which performed an exhaustive
study on Snowdrift games with different network models and update rules, but
refutes those of \cite{hauert:2002}, which argued that the effect of
spatial lattices was almost independent of the update rule. In consequence, the
influence of the network models that we consider in the following is presented
separately for each kind of update rule, highlighting the differences in results
when appropriate. Apart from this, to assess and explain the influence of
spatial structure, we need to consider it along with games that have different
equilibrium structures, not only a particular game, in order to draw
sufficiently general conclusions. One way to do this is to
study their effect on the space of $2 \times 2$ games described by the
parameters $S$ and $T$ (\ref{eq:payoff-matrix}). A first attempt was made by
Hauert \cite{hauert:2002}, but some problems in that study render it
inconclusive (see \cite{roca:2009a} for details on this issue).
Apart from lattices of different degrees (4, 6 and 8), we have also considered
homogeneous random networks, i.e.\ random networks where each node has exactly
the same number of neighbors. The aim of comparing with this kind of network is
to isolate the effect of the spatial distribution of individuals from that of
the mere limitation of the number of neighbors and the \emph{context
preservation} \cite{cohen:2001} of a degree-homogeneous random network. The
well-mixed population hypothesis implies that every player plays with the
``average''
player in the population. From the point of view of the replicator rule this
means that every player samples successively the population in each evolution
step. It is not unreasonable to think that if the number of neighbors is
sufficiently restricted the result of this random sampling will differ from the
population average, thus introducing changes in the evolutionary outcome.
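To make the discussion concrete, the replicator rule used throughout can be sketched as a proportional-imitation process: a player compares her payoff with that of a randomly chosen neighbor and imitates with a probability proportional to the payoff difference, when positive. The following minimal Python sketch is our own illustration, not code from the cited studies; the payoff convention $R=1$, $P=0$ follows (\ref{eq:payoff-matrix}), while the normalization constant `phi` is an assumption, chosen only so that the probability stays in $[0,1]$ for degree-$k$ players.

```python
import random

def payoff(s_i, neighbors, S, T):
    """Accumulated payoff of a player with strategy s_i ('C' or 'D')
    against her neighbors, with payoffs R=1, S, T, P=0."""
    table = {('C', 'C'): 1.0, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): 0.0}
    return sum(table[(s_i, s_j)] for s_j in neighbors)

def replicator_update(s_i, pi_i, s_j, pi_j, k, S, T, rng=random):
    """Proportional imitation: copy a random neighbor's strategy with
    probability proportional to the (positive) payoff difference.
    phi is the largest possible payoff gap for degree-k players
    (an assumption here, used only as a normalization)."""
    phi = k * (max(1.0, T) - min(0.0, S))
    p = max(0.0, (pi_j - pi_i) / phi)
    return s_j if rng.random() < p else s_i
```

Note that when the neighbor scores worse, the imitation probability is zero, so the rule is non-innovative: a strategy absent from the neighborhood can never be adopted.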
\begin{figure}[t]
\centering
\includegraphics[width=0.9\textwidth]{fig13}
\caption{Asymptotic density of cooperators $x^*$ in homogeneous random networks
(upper row, A to C) compared to regular lattices (lower row, D to F), with
degrees $k=$ 4 (A, D), 6 (B, E) and 8 (C, F). The update rule is the replicator
rule and the initial density of cooperators is $x^0 =0.5$. The plots show that
the main influence occurs in Stag Hunt and Snowdrift games, especially for
regular lattices with large clustering coefficient, $k=$ 6 and 8 (see main
text).}
\label{fig:spatial-replicator}
\end{figure}
Fig.~\ref{fig:spatial-replicator} shows the results for the replicator rule with
random and spatial networks of different degrees. First, it is clear that the
influence of these networks is negligible on Harmony games and minimal
on Prisoner's Dilemmas, given the reduced range of parameters where it is
noticeable. There is, however, a clear influence on Stag Hunt and Snowdrift
games, which is always of opposite sign: An enhancement of cooperation in Stag
Hunt and an inhibition in Snowdrift. Second, it is illuminating to consider the
effect of increasing the degree. For the random network, it means that its weak
influence vanishes. The spatial lattice, however, whose result is very similar
to that of the random one for the lowest degree ($k=4$), displays remarkable
differences for the greater degrees ($k=$ 6 and 8). These differences are a
clear promotion of cooperation in Stag Hunt games and a lesser, but measurable,
inhibition in Snowdrift games, especially for low $S$.
The relevant topological feature that underlies this effect is the existence of
clustering in the network, understood as the presence of triangles or,
equivalently, common neighbors
\cite{newman:2003, boccaletti:2006}. In regular lattices, for $k=4$ there is no
clustering, but there is for $k=6$ and $8$. This point explains the difference
between the conclusions of Cohen et al.\ \cite{cohen:2001} and those of Ifti et
al.\ \cite{ifti:2004} and Tomassini et al.\ \cite{tomassini:2006}, regarding
the role of network clustering in the effect of spatial populations. In
\cite{cohen:2001}, rectangular lattices of degree $k=4$ were considered, which
have strictly zero clustering because there are no closed triangles in the
network; hence no differences in outcome between the spatial and the random
topology were found. In the latter case, on the contrary, both
studies employed rectangular lattices of degree $k=8$, which do have clustering,
and thus they identified it as a key feature of the network, for particular
parameterizations of the games they were studying, namely Prisoner's Dilemma
\cite{ifti:2004} and Snowdrift \cite{tomassini:2006}.
Additional evidence for this conclusion is the fact that small-world
networks, which include random links to reduce the average path between nodes
while maintaining the clustering, produce almost indistinguishable results from
those of Fig.~\ref{fig:spatial-replicator}~D-F. This conclusion is in agreement
with existing theoretical work about small-world networks, on Prisoner's
Dilemma \cite{abramson:2001,masuda:2003,tomochi:2004} and its extensions
\cite{wu:2005,wu:2006}, on Snowdrift games \cite{tomassini:2006}, and also with
experimental studies on coordination games \cite{cassar:2007}. The difference
between the effect of regular lattices and small-world networks consists, in
general, in a \emph{greater efficiency} of the latter in reaching the
stationary state (see \cite{roca:2009a} for a further discussion on this
comparison).
\begin{figure}
\centering
\includegraphics[height=0.8\textheight]{fig14}
\caption{Snapshots of the evolution of a population on a regular lattice of
degree $k=8$, playing a Stag Hunt game ($S=-0.65$ and $T=0.65$). Cooperators are
displayed in red and defectors in blue. The update rule is the replicator rule
and the initial density of cooperators is $x^0 =0.5$. The upper left label shows
the time step $t$. During the initial steps, cooperators with low local density
of cooperators in their neighborhood disappear, whereas those with high local
density grow into the clusters that eventually take up the complete population.}
\label{fig:spatial-replicator-snapshots}
\end{figure}
The mechanism that explains this effect is the formation and growth of clusters
of cooperators, as Fig.~\ref{fig:spatial-replicator-snapshots} displays for a
particular realization. The outcome of the population is then totally determined
by the stability and growth of these clusters, which in turn depend on the
dynamics of cluster interfaces. This means that the result is no longer
determined by the global population densities but by the local densities that
the players at the cluster interfaces see in their
neighborhood. In fact, the primary effect of network clustering is to favor,
i.e.\ to maintain or to increase, the high local densities that were present
in the initial random population. This favoring produces opposite effects in
Stag Hunt and Snowdrift games. As an illustrative example,
consider that the global density is precisely that of the mixed equilibrium of
the game. In Stag Hunt games, as this equilibrium is unstable, a higher local
density induces the conversion of nearby defectors to cooperators, thus making
the cluster grow. In Snowdrift games, on the contrary, as the equilibrium is
stable, it causes the conversion of cooperators to defectors. See
\cite{roca:2009a} for a full discussion on this mechanism.
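The role of the mixed equilibrium invoked here can be made explicit from the payoff parameterization $R=1$, $P=0$: the interior equilibrium of the replicator dynamics is $x^*=S/(S+T-1)$, unstable in Stag Hunt and stable in Snowdrift. The following short check is our own sketch, not part of the cited analysis:

```python
def mixed_equilibrium(S, T):
    """Interior equilibrium x* of replicator dynamics for payoffs
    R=1, S, T, P=0: f_C(x) = x + (1-x)S and f_D(x) = xT, so
    f_C = f_D gives x* = S / (S + T - 1)."""
    return S / (S + T - 1.0)

def is_stable(S, T):
    """The gradient of f_C - f_D with respect to x is 1 - S - T;
    the interior equilibrium is stable iff this is negative."""
    return (1.0 - S - T) < 0

# Stag Hunt example used in the text (S=-0.65, T=0.65): x*=0.65, unstable,
# so local densities above x* push the cluster towards full cooperation.
# Snowdrift example (S=0.5, T=1.5): x*=0.5, stable.
```

This is precisely why a locally elevated cooperator density has opposite consequences in the two games.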
In view of this, recalling that these are the results for the replicator rule,
and that therefore they correspond to the correct update rule to study the
influence of population structure on replicator dynamics, we can state that the
presence of clustering (triangles, common neighbors) in a network is a relevant
topological feature for the evolution of cooperation. Its main effects are, on
the one hand, a promotion of cooperation in Stag Hunt games, and, on the other
hand, an inhibition (of lower magnitude) in Snowdrift games.
We note, however, that clustering may not be the only relevant factor governing
the game
asymptotics: one can devise peculiar graphs, not representing proper spatial
structure, where
other influences prove relevant. This is the case of networks consisting of
complete subgraphs connected to each other by a few links \cite{szabo:2005b},
a system whose behavior, in spite of the high clustering coefficient, is
similar to that observed on the traditional square lattice where the
clustering coefficient is
zero. This was subsequently related \cite{vukov:2006} to the existence of
overlapping triangles that support the spreading of cooperation. We thus see
that our claim about the outcome of evolutionary games on networks with
clustering is anything but general and depends on the translational invariance
of the network.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\textwidth]{fig15}
\caption{Asymptotic density of cooperators $x^*$ in homogeneous random networks
(upper row, A to C) compared to regular lattices (lower row, D to F), with
degrees $k=$ 4 (A, D), 6 (B, E) and 8 (C, F). The update rule is unconditional
imitation and the initial density of cooperators is $x^0 =0.5$. Again as in
Fig.~\ref{fig:spatial-replicator}, spatial lattices have greater influence than
random networks when the clustering coefficient is high ($k=$ 6 and 8). In this
case, however, the beneficial effect for cooperation goes well into Snowdrift
and Prisoner's Dilemma quadrants.}
\label{fig:spatial-imitation}
\end{figure}
Other stochastic non-innovative rules, such as the multiple replicator and Moran
rules, yield similar results, without qualitative differences \cite{roca:2009a}.
Unconditional imitation, on the contrary, has a very different influence,
as can be seen in Fig.~\ref{fig:spatial-imitation}.
In the first place, homogeneous random networks themselves have a marked
influence, that \emph{increases} with network degree for Stag Hunt and Snowdrift
games, but decreases for Prisoner's Dilemmas. Secondly, there are again no
important differences between random and spatial networks if there is no
clustering in the network (note how the transitions between the different
regions in the results are the same). There are, however, stark differences when
there is clustering in the network. Interestingly, these are the cases with an
important promotion of cooperation in Snowdrift and Prisoner's Dilemma games.
\begin{figure}
\centering
\includegraphics[height=0.8\textheight]{fig16}
\caption{Snapshots of the evolution of a population on a regular lattice of
degree $k=8$, playing a Stag Hunt game ($S=-0.65$ and $T=0.65$). Cooperators are
displayed in red and defectors in blue. The update rule is unconditional
imitation and the initial density of cooperators is $x^0 =1/3$ (this lower value
than that of Fig.~\ref{fig:spatial-replicator-snapshots} has been used to make
the evolution longer and thus more easily observable). The upper left label
shows the time step $t$. As with the replicator rule (see
Fig.~\ref{fig:spatial-replicator-snapshots}), during the initial time steps
clusters emerge from cooperators with high local density of cooperators in their
neighborhood. In this case, the interfaces advance deterministically at each
time step, thus giving a special significance to flat interfaces and producing a
much faster evolution than with the replicator rule (compare time labels with
those of Fig.~\ref{fig:spatial-replicator-snapshots})}
\label{fig:spatial-imitation-snapshots}
\end{figure}
In this case, the dynamical mechanism is the formation and growth of clusters of
cooperators as well, and the fate of the population is again determined by the
dynamics of cluster interfaces. With unconditional imitation, however, given its
deterministic nature, interfaces advance one link every time step. This makes
the calculation of the conditions for their advancement very easy, because
these conditions come down to those of a flat interface between cooperators and
defectors
\cite{roca:2009a}. See Fig.~\ref{fig:spatial-imitation-snapshots} for a typical
example of evolution.
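The flat-interface mechanism can be verified in a minimal simulation (our own sketch, not code from the cited studies): on a $k=8$ lattice with a Moore neighborhood, periodic boundaries in the vertical direction only, and payoffs $R=1$, $P=0$, one synchronous step of unconditional imitation in the Stag Hunt game of the figure advances a vertical cooperator front by exactly one column.

```python
def neighbors(r, c, rows, cols):
    """Moore (8-cell) neighborhood, periodic vertically, open horizontally."""
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            cc = c + dc
            if 0 <= cc < cols:
                yield (r + dr) % rows, cc

def step_unconditional_imitation(grid, S, T):
    """One synchronous step: each player copies the strategy of her
    best-scoring neighbor if that neighbor outscores her."""
    rows, cols = len(grid), len(grid[0])
    pay = {('C', 'C'): 1.0, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): 0.0}
    score = [[sum(pay[(grid[r][c], grid[rr][cc])]
                  for rr, cc in neighbors(r, c, rows, cols))
              for c in range(cols)] for r in range(rows)]
    new = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            br, bc = max(neighbors(r, c, rows, cols),
                         key=lambda rc: score[rc[0]][rc[1]])
            if score[br][bc] > score[r][c]:
                new[r][c] = grid[br][bc]
    return new

# Vertical flat interface: cooperators fill columns 0..m-1.
rows, cols, m = 10, 12, 6
grid = [['C' if c < m else 'D' for c in range(cols)] for r in range(rows)]
grid = step_unconditional_imitation(grid, S=-0.65, T=0.65)
```

Here an interface defector earns $3T$ while her best (cooperator) neighbor earns $5+3S$; since $5+3S>3T$ for this game, the whole defector column flips, advancing the front one link per step.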
\begin{figure}[t]
\centering
\includegraphics[width=0.9\textwidth]{fig17}
\caption{Asymptotic density of cooperators $x^*$ in regular lattices of degree
$k=8$, for different initial densities of cooperators $x^0=$ 1/3 (A, D), 1/2 (B,
E) and 2/3 (C, F). The update rules are the replicator rule (upper row, A to C)
and unconditional imitation (lower row, D to F). With the replicator rule, the
evolutionary outcome in Stag Hunt games depends on the initial condition, as is
revealed by the displacement of the transition line between full cooperation and
full defection. However, with unconditional imitation this transition line
remains in the same position, thus showing the insensitivity to the initial
condition. In this case, the outcome is determined by the presence of small
clusters of cooperators in the initial random population, which takes place for
a large range of values of the initial density of cooperators $x^0$.}
\label{fig:spatial-initial-condition}
\end{figure}
An interesting consequence of the predominant role of flat interfaces with
unconditional imitation is that, as long as there is in the initial population a
flat interface (i.e.\ a cluster with one, such as a $3 \times 2$ cluster
in an 8-neighbor lattice), the cluster will grow and eventually extend to the
entire population. This feature corresponds to the $3 \times 3$ cluster rule
proposed by Hauert \cite{hauert:2002}, which relates the outcome of the entire
population to that of a cluster of this size. This property makes the
evolutionary outcome quite independent of the initial density of cooperators,
because even for a low initial density the probability that a suitable small
cluster exists will be high for sufficiently large populations; see
Fig.~\ref{fig:spatial-initial-condition}~D-F for the effect of different
initial conditions. Nevertheless, it is important to realize that this rule is based on
the dynamics of flat interfaces and, therefore, it is only valid for
unconditional imitation. Other update rules that also give rise to clusters,
such as the replicator rule, develop interfaces with different shapes,
rendering the particular case of flat interfaces irrelevant. As a consequence,
the evolutionary outcome becomes dependent on the initial condition, as
Fig.~\ref{fig:spatial-initial-condition}~A-C displays.
In summary, the relevant topological feature of these homogeneous networks, for
the games and update rules considered so far, is the clustering of the network.
Its effect depends largely on the update rule, and the most that can be said in
general is that, besides not affecting Harmony games, it consistently promotes
cooperation in Stag Hunt games.
\subsection{Synchronous vs asynchronous update}
Huberman and Glance \cite{huberman:1993} questioned the generality of the
results reported by Nowak and May \cite{nowak:1992}, in terms of the
synchronicity of the update of strategies. Nowak and May used synchronous
update, which means that every player is updated at the same time, so the
population evolves in successive generations. Huberman and Glance, in contrast,
employed asynchronous update (also called random sequential update), in which
individuals are updated independently one by one, hence the neighborhood of each
player always remains the same while her strategy is being updated. They showed
that, for a particular game, the asymptotic cooperation obtained with
synchronous update disappeared. This has become since then one of the most
well-known and cited examples of the importance of synchronicity in the update
of strategies in evolutionary models. Subsequent works have, in turn,
criticized the importance of this issue, showing that the conclusions of
\cite{nowak:1992} are robust \cite{nowak:1994,szabo:2005}, or restricting the
effect reported by \cite{huberman:1993} to particular instances of Prisoner's
Dilemma \cite{lindgren:1994} or to the short memory of players
\cite{kirchkamp:2000}. Other works, however, in the different context of
Snowdrift games \cite{kun:2006,tomassini:2006} have found that the influence on
cooperation can be positive or negative, in the asynchronous case compared with
the synchronous one.
\begin{figure}[t!]
\centering
\includegraphics[width=0.6\textwidth]{fig18}
\caption{Asymptotic density of cooperators $x^*$ in regular lattices of degree
$k=8$, with synchronous update (left, A and C) compared to asynchronous (right,
B and D). The update rules are the replicator rule (upper row) and unconditional
imitation (lower row). The initial density of cooperators is $x^0 =0.5$. For the
replicator rule, the results are virtually identical, showing the lack of
influence of the synchronicity of update on the evolutionary outcome. In the
case of unconditional imitation the results are very similar, but there are
differences for some points, especially Snowdrift games with $S \lesssim 0.3$
and $T>5/3 \approx 1.67$. The particular game studied by Huberman and Glance
\cite{huberman:1993}, for which a suppression of cooperation due to
asynchronous update was reported, belongs to this region.}
\label{fig:huberman-st}
\end{figure}
We have thoroughly investigated this issue, finding that the effect of
synchronicity in the update of strategies is the exception rather than the
rule. With the replicator rule, for example, the evolutionary outcome in both
cases is virtually identical, as Fig.~\ref{fig:huberman-st}~A-B shows. Moreover,
in this case, the time evolution is also very similar
(see Fig.~\ref{fig:huberman-time}~A-B). With unconditional imitation there are
important differences only in one particular subregion of the space of
parameters, corresponding mostly to Snowdrift games, to which the specific
game studied by Huberman and Glance belongs (see Fig.~\ref{fig:huberman-st}~C-D
and \ref{fig:huberman-time}~C-D).
\begin{figure}[t!]
\centering
\includegraphics[width=0.99\textwidth]{fig19}
\caption{Time evolution of the density of cooperators $x$ in regular lattices of
degree $k=8$, for typical realizations of Stag Hunt (left, A and C) and
Snowdrift games (right, B and D), with synchronous (continuous lines) or
asynchronous (dashed lines) update. The update rules are the replicator rule
(upper row) and unconditional imitation (lower row). The Stag Hunt games for the
replicator rule (A) are: a, $S=-0.4$, $T=0.4$; b, $S=-0.5$, $T=0.5$; c,
$S=-0.6$, $T=0.6$; d, $S=-0.7$, $T=0.7$; e, $S=-0.8$, $T=0.8$. For unconditional
imitation the Stag Hunt games (C) are: a, $S=-0.6$, $T=0.6$; b, $S=-0.7$,
$T=0.7$; c, $S=-0.8$, $T=0.8$; d, $S=-0.9$, $T=0.9$; e, $S=-1.0$, $T=1.0$. The
Snowdrift games are, for both update rules (B, D): a, $S=0.9$, $T=1.1$; b,
$S=0.7$, $T=1.3$; c, $S=0.5$, $T=1.5$; d, $S=0.3$, $T=1.7$; e, $S=0.1$, $T=1.9$.
The initial density of cooperators is $x^0 =0.5$. The time scale of the
asynchronous realizations has been re-scaled by the size of the population, so
that for both kinds of update a time step represents the same number of update
events in the population. Figures A and B show that, in the case of the replicator
rule, not only the outcome but also the time evolution is independent of the
update synchronicity. With unconditional imitation the results are also very
similar for Stag Hunt (C), but somewhat different in Snowdrift (D) for large $T$,
displaying the influence of synchronicity in this subregion. Note that in all
cases unconditional imitation yields a much faster evolution than the replicator
rule.}
\label{fig:huberman-time}
\end{figure}
\clearpage
\subsection{Heterogeneous networks}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\textwidth]{fig20}
\caption{Asymptotic density of cooperators $x^*$ with the replicator update
rule, for model networks with different degree heterogeneity: homogeneous random
networks (left, A), Erd\H{o}s-R\'enyi random networks (middle, B) and
Barab\'asi-Albert scale-free networks (right, C). In all cases the average
degree is $\bar{k}=8$ and the initial density of cooperators is $x^0=0.5$.
As degree heterogeneity grows, from left to right, cooperation in Snowdrift
games is clearly enhanced.}
\label{fig:santos-replicator}
\end{figure}
The other important topological feature for evolutionary games was introduced by
Santos and co-workers \cite{santos:2006a,santos:2005,santos:2006c}, who studied
the effect of degree heterogeneity, in particular scale-free networks. Their
main result is shown in Fig.~\ref{fig:santos-replicator}, which displays the
variation in the evolutionary outcome induced by increasing the variance of the
degree distribution in the population, from zero (homogeneous random networks)
to a finite value (Erd\H{o}s-R\'enyi random networks), and then to infinity
(scale-free networks). The enhancement of cooperation as degree heterogeneity
increases is very clear, especially in the region of Snowdrift games. The effect
is not so strong, however, in Stag Hunt or Prisoner's Dilemma games. Similar
conclusions are obtained with other scale-free topologies, such as
Klemm-Egu\'iluz scale-free networks \cite{klemm:2002}. Very recently, it has
been shown \cite{assenza:2009} that much as we discussed above for the case of
spatial structures, clustering is also a factor improving the cooperative
behavior in scale-free networks.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\textwidth]{fig21}
\caption{Asymptotic density of cooperators $x^*$ with unconditional imitation as
update rule, for model networks with different degree heterogeneity: homogeneous
random networks (left, A), Erd\H{o}s-R\'enyi random networks (middle, B) and
Barab\'asi-Albert scale-free networks (right, C). In all cases the average
degree is $\bar{k}=8$ and the initial density of cooperators is $x^0=0.5$.
As degree heterogeneity grows, from left to right, cooperation in Snowdrift
games is enhanced again. In this case, however, cooperation is inhibited in Stag
Hunt games and reaches a maximum in Prisoner's Dilemmas for Erd\H{o}s-R\'enyi
random networks.}
\label{fig:santos-imitation}
\end{figure}
The positive influence on Snowdrift games is quite robust against changes in
network degree and the use of other update rules. On the other hand, the
influence on Stag Hunt and Prisoner's Dilemma games is quite restricted and very
dependent on the update rule, as Fig.~\ref{fig:santos-imitation} reveals. In
fact, with unconditional imitation cooperation is inhibited in Stag Hunt games
as the network becomes more heterogeneous, whereas in Prisoner's Dilemmas it
seems to have a maximum at networks with finite variance in the degree
distribution.
A very interesting insight from the comparison between the effects of network
clustering and degree heterogeneity is that they mostly affect games with one
equilibrium in mixed strategies, and that in addition the effects on these games
are different. This highlights the fact that they are different fundamental
topological properties, which induce mechanisms of different nature. In the case
of network clustering we have seen the formation and growth of clusters of
cooperators. For network heterogeneity the phenomenon is the bias and
stabilization of the strategy oscillations in Snowdrift games towards the
cooperative strategy
\cite{gomezgardenes:2007,poncela:2007}, as we explain in the following. The
asymptotic state of Snowdrift games in homogeneous networks consists of a mixed
strategy population, where every individual oscillates permanently between
cooperation and defection. Network heterogeneity tends to prevent this
oscillation, making players in more connected sites more prone to
be cooperators. At first, having more neighbors makes any individual receive
more payoff, regardless of her strategy, and hence she has an evolutionary
advantage. For a defector, this is a short-lived advantage, because it triggers
the change of her neighbors to defectors, thus losing payoff. A high-payoff
cooperator, on
the contrary, will cause the conversion of her neighbors to cooperators,
increasing even more her own payoff. These highly connected cooperators
constitute the hubs that drive the population, fully or partially, to
cooperation. It is clear that this mechanism takes place when cooperators
collect more payoff from a greater neighborhood, independently of their
neighbors' strategies. This only happens when $S>0$, which is the reason why the
positive effect on cooperation of degree heterogeneity is mainly restricted to
Snowdrift games.
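The degree dependence of this hub advantage can be made explicit with a one-line computation (our own illustration, using the payoff convention $R=1$, $P=0$ of (\ref{eq:payoff-matrix})):

```python
def cooperator_payoff(k, x, S):
    """Accumulated payoff of a cooperator with degree k and a
    fraction x of cooperating neighbors (payoffs R=1, P=0)."""
    return k * (x + (1.0 - x) * S)

# With S > 0 (Snowdrift), a larger degree k always means more payoff,
# even in a fully defecting neighborhood (x = 0), so hubs keep their
# advantage while converting neighbors. With S < 0 the sign flips at
# x = 0, which is why the mechanism is restricted to S > 0.
```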
\clearpage
\subsection{Best response update rule}
So far, we have dealt with imitative update rules, which are non-innovative.
Here we present the results for an innovative rule, namely best response. With
this rule each player chooses, with certain probability $p$, the strategy that
is the best response for her current neighborhood. This rule is also referred to
as myopic best response, because the player only takes into account the last
evolution step to decide the optimum strategy for the next one. Compared to the
rules presented previously, this one assumes more powerful cognitive abilities
on the individual, as she is able to discern the payoffs she can obtain
depending on her strategy and those of her neighbors, in order to choose the best
response. From this point of view, it constitutes a next step in the
sophistication of update rules.
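A minimal sketch of this myopic best response rule, under the payoff convention $R=1$, $P=0$ of (\ref{eq:payoff-matrix}), is the following (our own illustration; the tie-breaking towards defection is an arbitrary choice):

```python
import random

def best_response(neighborhood, S, T, current, p=0.1, rng=random):
    """With probability p, switch to the strategy that maximizes the
    payoff against the current neighborhood; otherwise keep the
    current strategy. neighborhood is a list of 'C'/'D' strategies."""
    if rng.random() >= p:
        return current
    nc = neighborhood.count('C')
    nd = len(neighborhood) - nc
    payoff_C = nc * 1.0 + nd * S   # C vs C pays 1, C vs D pays S
    payoff_D = nc * T              # D vs C pays T, D vs D pays 0
    return 'C' if payoff_C > payoff_D else 'D'
```

Note that, unlike the imitative rules above, this rule is innovative: a player can adopt a strategy that no neighbor is currently using.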
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{fig22}
\caption{Asymptotic density of cooperators $x^*$ in a square lattice with degree
$k=8$ and best response as update rule, in the model with Snowdrift
(\ref{eq:hauert-sd}) studied by Sysi-Aho and co-workers \cite{sysi-aho:2005}.
The result for a well-mixed population is displayed as a reference. Note how
the promotion or inhibition of cooperation does not follow the same dependence
on $T$ as in the case with the replicator rule studied by Hauert and Doebeli
\cite{hauert:2004} (Fig.~\ref{fig:hauert}).}
\label{fig:sysi-aho}
\end{figure}
An important result of the influence of this rule for evolutionary games was
published in 2005 by Sysi-Aho and co-workers \cite{sysi-aho:2005}. They studied
the combined influence of this rule with regular lattices, in the same
one-dimensional parameterization of Snowdrift games (\ref{eq:hauert-sd}) that
was employed by Hauert and Doebeli \cite{hauert:2004}. They reported a
modification in the cooperator density at equilibrium, with an increase for some
subrange of the parameter $T$ and a decrease for the other, as
Fig.~\ref{fig:sysi-aho} shows.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{fig23}
\caption{Asymptotic density of cooperators $x^*$ in random (left, A and D),
regular (middle, B and E), and scale-free networks (right, C and F) with
average degrees $\bar{k}=4$ (upper row, A to C) and 8 (lower row, D to F). The
update rule is best response with $p=0.1$ and the initial density of cooperators
is $x^0 =0.5$. Differences are negligible in all cases; note, however, that the
steps appearing in the Snowdrift quadrant are slightly different.}
\label{fig:best-response}
\end{figure}
At the time, it was intriguing that regular lattices had opposite effects
(promotion or inhibition of cooperation) in some ranges of the parameter $T$,
depending on the update rule used in the model. Very recently we have carried
out a thorough investigation of the influence of this update rule on a wide
range of networks \cite{roca:2009b}, focusing on the key topological properties
of network clustering and degree heterogeneity. The main conclusion of this
study is that, with only one relevant exception, the best response rule
suppresses the effect of population structure on evolutionary games.
Fig.~\ref{fig:best-response} shows a summary of these results. In all cases the
outcome is very similar to that of replicator dynamics on well-mixed populations
(Fig.~\ref{fig:compnet}), despite the fact that the networks studied explore
different options of network clustering and degree heterogeneity. The steps in
the equilibrium density of Snowdrift games, as those reported in
\cite{sysi-aho:2005}, show up in all cases, with slight variations which depend
mostly on the mean degree of the network.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{fig24}
\caption{Asymptotic density of cooperators $x^*$ in regular lattices with
initial density of cooperators $x^0 =1/3$. The degrees are $k=4$ (left, A),
$k=6$ (middle, B) and $k=8$ (right, C). The update rule is best response with
$p=0.1$. Comparing with Fig.~\ref{fig:compnet}~A, there is a clear displacement
of the boundary between full defection and full cooperation in Stag Hunt games,
which amounts to a promotion of cooperation. The widening of the border in panel
C is a finite size effect, which disappears for larger populations. See main
text for further details.}
\label{fig:best-response-lattices}
\end{figure}
The exception to the absence of network influence is the case of regular
lattices, and consists of a modification of
the unstable equilibrium in Stag Hunt games, in the sense that it produces a
promotion of cooperation for initial densities lower than 0.5 and a
corresponding symmetric inhibition for greater densities. An example of this
effect is given in Fig.~\ref{fig:best-response-lattices}, where the outcome
should be compared to that of well-mixed populations in
Fig.~\ref{fig:compnet}~A. The reason for this effect is that the lattice creates
special conditions for the advancement (or receding) of the interfaces of
clusters of cooperators. We refer the interested reader to \cite{roca:2009b} for
a detailed description of this phenomenon. Very remarkably, in this case network
clustering is not relevant, because the effect also takes place for degree
$k=4$, at which there is no clustering in the network.
\clearpage
\subsection{Weak selection}
\label{subsec:weak-selection}
This far, we have considered the influence of population structure in the case
of \emph{strong selection pressure}, which means that the fitness of individuals
is totally determined by the payoffs resulting from the game. In general this
may not be the case, and then to relax this restriction the fitness can be
expressed as $f = 1 - w + w \pi$ \cite{nowak:2004a}. The parameter $w$
represents the intensity of selection and can vary between $w=1$ (strong
selection limit) and $w \gtrsim 0$ (weak selection limit). With a different
parameterization, this implements the same idea as the baseline fitness
discussed in Section~\ref{sec:3}. We note that another interpretation of this
limit, namely $\delta$-weak selection, has recently been proposed
\cite{wild:2007}; it assumes that the game matters much for the determination
of reproductive success, but that selection is weak because mutant and
wild-type strategies are very similar. This second interpretation leads to
different results \cite{wild:2007} and we do not deal with it here, but rather
stick with the first one, which is by far the most generally used.
The weak selection limit has the nice property of being tractable analytically.
For instance, Ohtsuki and Nowak have studied evolutionary games on homogeneous
random networks using this approach \cite{ohtsuki:2006}, finding an interesting
relation with replicator dynamics on well-mixed populations. Using our
normalization of the game (\ref{eq:payoff-matrix}), their main result can be
written as the following payoff matrix
\begin{equation}
\left(
\begin{array}{cc} 1 & S + \Delta \\ T - \Delta & 0 \end{array}
\right).
\end{equation}
This means that the evolution in a population structured according to a random
homogeneous network, in the weak selection limit, is the same as that of a
well-mixed population with a game defined by this modified payoff matrix. The
effect of the network thus reduces to the term $\Delta$, which depends on the
game, the update rule and the degree $k$ of the network. With respect to the
influence on cooperation it admits a very straightforward interpretation: If
both the original and the modified payoff matrices correspond to a Harmony or
Prisoner's Dilemma game, then there is logically no influence, because the
population ends up equally in full cooperation or full defection; otherwise,
cooperation is enhanced if $\Delta > 0$, and inhibited if $\Delta < 0$.
The actual values of $\Delta$, for the update
rules Pairwise Comparison (PC), Imitation (IM) and
Death-Birth (DB) (see \cite{ohtsuki:2006} for full details), are
\begin{eqnarray}
\Delta_{PC} &=& \frac {S - (T-1)} {k-2} \\
\Delta_{IM} &=& \frac {k + S - (T-1)} {(k+1)(k-2)} \\
\Delta_{DB} &=& \frac {k + 3(S - (T-1))} {(k+3)(k-2)} ,
\end{eqnarray}
$k$ being the degree of the network. A very remarkable feature of these
expressions is that for every pair of games with parameters $(S_1,T_1)$ and
$(S_2,T_2)$, if $S_1-T_1 = S_2-T_2$ then $\Delta_1 = \Delta_2$. Hence the
influence on cooperation for such a pair of games, even if one is a Stag Hunt
and the other is a Snowdrift, will be the same. This stands in stark contrast to
all the reported results with strong selection, which generally exhibit
different, and in many cases opposite, effects on both games. Besides this, as
the term $S - (T-1)$ is negative in all Prisoner's Dilemmas and in half the
cases of Stag Hunt and Snowdrift games, the beneficial influence on cooperation
is quite limited for degrees $k$ such as those considered above \cite{roca:2009a}.
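As a concrete numerical illustration, the three $\Delta$ expressions and the resulting modified payoff matrix can be evaluated directly. The following Python sketch is our own illustrative code (the function names are ours, not from \cite{ohtsuki:2006}); its final assertion checks the invariance under equal $S-T$ noted above:

```python
# Delta for the three update rules on a homogeneous random network of
# degree k, with the normalized game (payoffs 1, S, T, 0).
def delta_pc(S, T, k):  # Pairwise Comparison
    return (S - (T - 1)) / (k - 2)

def delta_im(S, T, k):  # Imitation
    return (k + S - (T - 1)) / ((k + 1) * (k - 2))

def delta_db(S, T, k):  # Death-Birth
    return (k + 3 * (S - (T - 1))) / ((k + 3) * (k - 2))

def modified_payoff_matrix(S, T, delta):
    # Equivalent well-mixed game: S shifted by +delta, T by -delta.
    return [[1.0, S + delta], [T - delta, 0.0]]

# Two games with the same S - T (a Snowdrift-like and a Stag-Hunt-like
# one) receive exactly the same correction Delta.
k = 8
snowdrift, stag_hunt = (0.5, 1.2), (-0.2, 0.5)  # both have S - T = -0.7
for delta in (delta_pc, delta_im, delta_db):
    assert abs(delta(*snowdrift, k) - delta(*stag_hunt, k)) < 1e-12
```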
Another way to investigate the influence of the intensity of selection is to employ the Fermi update rule, presented above, which makes it possible to study numerically the effect of varying the intensity of selection on any network model. Figs.~\ref{fig:fermi-lattice} and~\ref{fig:fermi-scalefree} display the results obtained, for different intensities of selection, on networks that
are prototypical examples of strong influence on evolutionary games, namely regular lattices with high clustering and scale-free networks, with large degree heterogeneity. In both cases, as the intensity of selection is reduced, the effect of the network becomes weaker and more symmetrical between Stag Hunt and Snowdrift games. Therefore, these results show that the strong and weak selection limits are not comparable from the viewpoint of the evolutionary outcome, and that weak selection largely inhibits the influence of population structure.
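For reference, the Fermi rule takes a standard form in which a player adopts a neighbor's strategy with a probability given by the Fermi function of the payoff difference, $\beta$ playing the role of the intensity of selection. The sketch below is our own minimal implementation (the name \texttt{fermi\_adoption\_prob} is ours):

```python
import math

def fermi_adoption_prob(payoff_self, payoff_neighbor, beta):
    """Probability that a player imitates a neighbor's strategy.

    Large beta approaches deterministic imitation of better-performing
    neighbors; beta -> 0 makes the update almost random, i.e. weak
    selection.
    """
    return 1.0 / (1.0 + math.exp(-beta * (payoff_neighbor - payoff_self)))

# Equal payoffs give a coin flip regardless of beta.
assert fermi_adoption_prob(1.0, 1.0, 10.0) == 0.5
```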
\begin{figure}
\centering
\includegraphics[width=\textwidth]{fig25}
\caption{Asymptotic density of cooperators $x^*$ in regular lattices of degree $k=8$, for the Fermi update rule with $\beta$ equal to 10 (A), 1 (B), 0.1 (C) and 0.01 (D). The initial density of cooperators is $x^0=0.5$. For high $\beta$ the result is quite similar to that obtained with the replicator rule (Fig.~\ref{fig:spatial-replicator}~F). As $\beta$ decreases, or equivalently for weaker intensities of selection, the influence becomes smaller and more symmetrical between Stag Hunt and Snowdrift games.}
\label{fig:fermi-lattice}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{fig26}
\caption{Asymptotic density of cooperators $x^*$ in Barab\'asi-Albert scale-free
networks of average degree $\bar{k}=8$, for the Fermi update rule with
$\beta$ equal to 10 (A), 1 (B), 0.1 (C) and 0.01 (D). The initial density of
cooperators is $x^0=0.5$. As in Fig.~\ref{fig:fermi-lattice}, for high $\beta$
the result is quite similar to that obtained with the replicator rule
(Fig.~\ref{fig:santos-replicator}~C), and analogously, as $\beta$ decreases the
influence of the network becomes smaller and more symmetrical between Stag Hunt
and Snowdrift games.}
\label{fig:fermi-scalefree}
\end{figure}
\clearpage
\section{Conclusion and future prospects}
\label{sec:5}
In this review, we have discussed non-mean-field effects on evolutionary game dynamics. Our
reference framework for comparison has been the replicator equation, a pillar of modern evolutionary
game theory that has produced many interesting and fruitful insights on different fields. Our purpose
here has been to show that, in spite of its many successes, the replicator equation is only a part of
the story, much in the same manner as mean-field theories have been very important in physics
but they cannot (nor are they intended to) describe all possible phenomena. The main issues we
have discussed are the influence of fluctuations, by considering the existence of more than one
time scale, and of spatial correlations, through the constraints on interaction arising from an
underlying network structure. In doing so, we have shown a wealth of evidence supporting our
first general conclusion: Deviations with respect to the hypothesis of a well-mixed population
(including nonlinear dependencies of the fitness on the payoff or not) have a large influence on
the outcome of the evolutionary process and in a majority of cases do change the equilibria
structure, stability and/or basins of attraction.
The specific question of the existence of different time scales was discussed in
Section~\ref{sec:3}. This is a problem that has received some attention in economics but
otherwise it has been largely ignored in biological contexts. In spite of this, we have
shown that considering fast evolution in the case of the Ultimatum game may lead to
a non-trivial, unexpected conclusion: That individual selection may be enough to explain
the experimental evidence that people do not behave rationally. This is an important point
in so far as, to date, simple individual selection was believed not to provide an understanding
of the phenomena of altruistic punishment reported in many experiments \cite{camerer:2003}.
We thus see that the effect of different time scales might be determinant and therefore must be
considered among the relevant factors with an influence on evolutionary phenomena.
This conclusion is reinforced by our general study of symmetric $2 \times 2$
games, which
shows that the equilibria of about half of the possible games change when considering
fast evolution. Changes are particularly surprising in the case of the Harmony game, in
which it turns out that when evolution is fast, the selected strategy is the ``wrong'' one,
meaning that it is the less profitable for the individual and for the population. Such a
result implies that one has to be careful when speaking of adaptation through natural
selection, because in this example we have a situation in which selection leads to a
bad outcome through the influence of fluctuations. It is clear that similar instances may
arise in many other problems. On the other hand, as for the particular question of the
emergence of cooperation,
our results imply that in the framework of the classical $2 \times 2$ social
dilemmas, fast
evolution is generally bad for the appearance of cooperative behavior.
The results reported here concerning the effect of time scales on evolution are only the
first ones in this direction and, clearly, much remains to be done. In this respect, we
believe that it would be important to work out the case of asymmetric $2 \times
2$ games,
trying to reveal possible general conclusions that apply to families of them. The work
on the Ultimatum game \cite{sanchez:2005} is just a first example, but no systematic
analysis of asymmetric games has been carried out. A subsequent extension to
games with more strategies would also be desirable; indeed, the richness of the
structures arising in those games (such as, e.g., the rock-scissors-papers game
\cite{hofbauer:1998}) suggests that considering fast evolution may lead to quite
unexpected results. This has been very recently considered in the framework of the
evolutionary minority game \cite{challet:1997} (where many strategies are possible,
not just two or three) once again from an economics perspective
\cite{zhong:2009}; the conclusion of this paper, namely that there is a phase transition as
a function of the time scale parameter, observable in the predictability of market behavior, is a further hint of the interest of this problem.
In Section~\ref{sec:4} we have presented a global view of the influence of
population structure on evolutionary games. We have seen a rich variety of
results, of unquestionable interest, which on the downside reflect the
non-generality of this kind of evolutionary model. Almost every detail of the
model affects the outcome, some of them dramatically.
We have provided evidence that population structure
does not necessarily promote cooperation in evolutionary game theory, showing instances in which
population structure enhances or inhibits it. Nonetheless, we have identified
two topological properties, network clustering and degree heterogeneity, as
those that allow a more unified approach to the characterization and
understanding of the influence of population structure on evolutionary games.
For a certain subset of update rules, and for some subregion in the space of
games, they induce consistent modifications in the outcome. In summary, network
clustering has a positive impact on cooperation in Stag Hunt games and degree
heterogeneity in Snowdrift games. Therefore, it would be reasonable to expect
similar effects in other networks which share these key topological properties.
In fact, there is another topological feature of networks that conditions
evolutionary games, albeit of a different type: The community structure
\cite{newman:2003,boccaletti:2006}. Communities are subgraphs of densely
interconnected nodes, and they represent some kind of mesoscopic organization. A
recent study \cite{lozano:2008} has pointed out that communities may have their
own effects on the game asymptotics in otherwise similar graphs, but more work
is needed to assess this influence.
On the other hand, the importance of the update rules cannot be overstated. We
have seen that for the best response and Fermi rules even these ``robust''
effects of population structure are greatly reduced. It is very remarkable from
an applications point of view that the influence of population structure is
inhibited so greatly when update rules more sophisticated than merely imitative
ones are considered, or when the selection pressure is reduced. It is evident
that a sound justification of several aspects of the models is mandatory for
applications. Crucial details, such as the payoff structure of the game, the
characteristics of the update rule or the main topological features of the
network are critical for obtaining significant results. For the same reasons,
unchecked generalizations of the conclusions obtained from a particular model,
which go beyond the kind of game, the basic topology of the network or the
characteristics of the update rule, are very risky in this field of research.
Very easily the evolutionary outcome of the model could change dramatically,
making such generalizations invalid.
This conclusion has led a number of researchers to address the issue from a further
evolutionary viewpoint: Perhaps, among the plethora of possible networks one can
think of, only some of them (or some values of their magnitudes) are really important,
because the rest are not found in actual situations. This means that networks themselves
may be subject to natural selection, i.e., they may co-evolve along with the game under
consideration. This promising idea has already been proposed
\cite{zimmermann:2004,marsili:2004,eguiluz:2005,santos:2006b,pacheco:2006,ohtsuki:2007,fosco:2007,pacheco:2008}
and a number of interesting results, which would deserve a separate review in their own
right\footnote{For a first attempt, see Sec.~5 of \cite{gross:2008}.}, have been
obtained regarding the emergence of cooperation. In this respect, it has been
observed that co-evolution seems to favor the
stabilization of cooperative behavior, more so if the network is not rewired
from a
preexisting one but rather grows upon arrival of new players \cite{poncela:2008}.
A related approach, in which the dynamics of the interaction network results from the mobility of
players over a spatial substrate, has been the focus of recent works \cite{sicardi:2009,helbing:2009}.
Although these lines of research are appealing and natural when one thinks of possible
applications, we believe the same caveat applies: It is still too early to draw general
conclusions and it might be that details would be again important. Nevertheless,
work along these lines is needed to assess the potential applicability of these
types of models. Interestingly, the same approach is also being introduced to
understand which strategy update rules should be used, once again as a manner
to discriminate among the very many possibilities. This was pioneered by
Harley \cite{harley:1981} (see also the book by Maynard Smith
\cite{maynard-smith:1982},
where the paper by Harley is presented as a chapter) and a few works have appeared
in the last few years
\cite{kirchkamp:1999,moyano:2009,szabo:2008,szolnoki:2008,szolnoki:2009};
although the available results are too specific to allow for a glimpse of any general
feature, they suggest that continuing this research may render fruitful results.
We thus reach our main conclusion: The outcome of
evolutionary game theory depends to a large extent on the details,
a result that has very important implications
for the use of evolutionary game theory to model actual biological, sociological or
economical systems. Indeed, in view of this lack of generality, one has to look carefully
at the main factors involved in the situation to be modeled, because they need to
be represented as closely to reality as necessary to produce conclusions relevant to
the case of interest. Note that this does not mean that it is not possible to
study evolutionary
games from a more general viewpoint; as we have seen above, general conclusions
can be drawn, e.g., about the beneficial effects of clustering for cooperation or the
key role of hubs in highly heterogeneous networks. However, what we do mean is
that one should not take such general conclusions for granted when thinking of a
specific problem or phenomenon, because it might well be that some of its specifics
render these abstract ideas inapplicable. On the other hand, it might be possible that
we are not looking at the problem in the right manner; there may be other magnitudes
we have not identified yet that allow for a classification of the different games and
settings into something similar to universality classes. Whichever the case, it seems
clear to us that much research is yet to be done along the lines presented here.
We hope that this review encourages others to walk off the beaten path in order
to make substantial contributions to the field.
\section*{Acknowledgements}
This work has been supported by projects MOSAICO, from the Spanish Ministerio de Educaci\'on y
Ciencia, and MOSSNOHO and SIMUMAT, from the Comunidad Aut\'onoma de Madrid.
\section{Introduction}
Identifying ligands that bind tightly to a given protein target is a crucial first step in drug discovery. Experimental methods such as high-throughput screening are time-consuming, and costly, while physics-based methods are computationally expensive and can be inaccurate \cite{MacConnell2017, Schneider2017, Grinter2014, Chen2015}. The emergence of large datasets enables data-driven approaches to be applied to this problem. In recent years, a variety of ML-based approaches have been developed to identify active ligands for a protein target given screening data \cite{Gawehn2016, Colwell2018, Ripphausen2011}. These approaches report outstanding {\it in silico} success on benchmark datasets \cite{Ragoza2017, Ramsundar2017, Gomes2017}.
However, recent studies analyzing how neural network models attribute their results suggest that even high-performing models often do not learn the correct rule \cite{McCloskey2019}. Fingerprint-based models also share these issues; \citet{Sheridan2019} observed that high-performing models of the same dataset attribute binding activity to different atoms within a molecule. Attribution of virtual screening models is particularly important because accurate identification of pharmacophores would enable medicinal chemists to improve potential drug candidates.
In this paper, we evaluate attributions from fingerprint-based virtual screening models, complementing a previous analysis of graph convolutional models \cite{McCloskey2019}. We propose a general framework for evaluating model attributions that does not require gradients, and use \textit{in silico} datasets to evaluate a number of standard models. In agreement with previous work \cite{Sheridan2019}, our results establish that high-performing models may not learn the correct binding rule. Going beyond previous work, we analyze properties of the data that lead to misattributions, and provide insight into how this can be mitigated. Our analysis reveals that attribution results can be improved by (i) adding fragment-matched decoys, and (ii) accounting for spurious correlations in the data that originate from both the nature of small molecule structures and the definition of fingerprint descriptors.
\section{Methods}
\subsection{Datasets and Models}
To measure attribution we construct a number of \textit{in silico} datasets, where each dataset has a specified binding logic that requires $3$ randomly selected fragments to be present for a molecule to be active. We identified $600$ active and $600$ inactive ligands from the ZINC12 database \cite{Irwin2012} using each binding logic to generate each dataset. Dataset 0 required a benzene, alkyne, and amino group to be considered active; dataset 1 an alkyne, benzene, and hydroxyl; dataset 2 a fluorine, alkene, and benzene; and dataset 3 a benzene, ether, and amino group. Our binding logics mimic known pharmacophores for real proteins; for example, most binders to soluble epoxide hydrolase have an amide and a urea group \cite{Waltenberger2016}.
We used ECFP6 fingerprints with 2048 bits \cite{Rogers2010} to featurize molecules. We tested Naive Bayes, Logistic Regression, and Random Forest models, all implemented using scikit-learn \cite{Pedregosa2011}. Naive Bayes used no prior; Logistic Regression used $C = 1$; Random Forest used $100$ trees and a maximum depth of $25$. Model AUCs (Area under the Receiver Operator Characteristic curve) were computed on a randomly held-out test set comprising $20\%$ of the data, with an even split of actives and inactives. All train/test splits were repeated $50$ times to measure the AUC variation.
\subsection{Evaluating Attribution}
For attribution analysis we retrained the models on all the data, without any hold-out sets. Since some models were not differentiable, we developed an attribution method that works for any fingerprint-based model. Our method relies on SIS (sufficient input subsets), which identifies sufficient subsets of the input features to reach a particular score threshold for the given model \cite{Carter2018}. Our null feature mask was the vector of all $0$s. If our model predicted a binding probability of $x$ for a given molecule, the threshold used for SIS attribution was $\frac{\lceil 100 (x-0.01) \rceil}{100}$.
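In code, the threshold is simply a ceiling to the next hundredth after subtracting $0.01$ from the predicted probability; a minimal sketch (our own, mirroring the formula above):

```python
import math

def sis_threshold(x):
    """SIS score threshold for a molecule predicted to bind with
    probability x: ceil(100 * (x - 0.01)) / 100."""
    return math.ceil(100 * (x - 0.01)) / 100

# A predicted binding probability of 0.978 gives a threshold of 0.97,
# so SIS keeps feature subsets retaining nearly all of the score.
assert sis_threshold(0.978) == 0.97
```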
The next step in our attribution procedure is to translate the significant features (fingerprint bits) back to fragments. For each molecule, we used rdkit to indicate the relevant fragment or fragments for each bit \cite{Landrum2006}. For a pre-trained model $m$ and a specific molecule, we generate an attribution vector $\mathbf{v}_m$ of dimensionality the number of atoms in the molecule. Each element of $\mathbf{v}_m$ corresponds to an atom in the molecule and is the number of significant fragments containing that atom. When a single bit corresponds to multiple fragments, all are considered significant. The ground truth attribution vector $\mathbf{v}_R$ for each molecule has $\mathbf{v}_R = 1$ for any atom belonging to fragment in the binding logic and $\mathbf{v}_R = 0$ otherwise.
We used $\mathbf{v}_m$ and $\mathbf{v}_R$ to compute four metrics of attribution accuracy. First, for a given molecule the cosine similarity $\frac{\mathbf{v}_m \cdot \mathbf{v}_R}{||\mathbf{v}_m|| ||\mathbf{v}_R||}$ provides an attribution score for the model $m$. This score is normalized by the attribution score for the same model trained on randomly labeled data to avoid data structure effects \cite{Adebayo2018}. The attribution false positive score is the proportion of atoms that the model erroneously considers relevant. The attribution false negative score is the proportion of relevant atoms that the attribution misses. Finally, the comparative attribution score between two models $m_1$ and $m_2$ is $\frac{\mathbf{v}_{m_1} \cdot \mathbf{v}_{m_2}}{||\mathbf{v}_{m_1}|| ||\mathbf{v}_{m_2}||}$. We computed attribution scores for $50$ randomly selected molecules, each predicted to bind with probability at least $0.95$, to estimate errors in the attribution scores.
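These four metrics reduce to elementary vector operations. The sketch below is our own pure-Python rendering; note that normalizing the false positive and false negative scores by the number of irrelevant and relevant atoms, respectively, is our reading of ``proportion'' here:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def attribution_scores(v_m, v_R):
    """v_m: per-atom counts of significant fragments (model attribution).
    v_R: 0/1 ground-truth mask of atoms in the binding logic."""
    score = cosine(v_m, v_R)
    n_irrelevant = sum(1 for r in v_R if r == 0)
    n_relevant = sum(1 for r in v_R if r == 1)
    # atoms the model marks as relevant although they are not
    fp = sum(1 for m, r in zip(v_m, v_R) if m > 0 and r == 0) / n_irrelevant
    # relevant atoms the attribution misses entirely
    fn = sum(1 for m, r in zip(v_m, v_R) if m == 0 and r == 1) / n_relevant
    return score, fp, fn

# A model that highlights one wrong atom and misses one relevant atom:
score, fp, fn = attribution_scores([1, 0, 2, 0], [1, 1, 0, 0])
assert fp == 0.5 and fn == 0.5
```

The comparative attribution score between two models is the same cosine applied to their two attribution vectors.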
\section{Attribution Results}
The models, especially logistic and random forest, perform outstandingly well on the \textit{in silico} datasets; Figure \ref{fig:ToyDatasetAnalysis} shows that many models achieve very high AUCs. However, the normalized attribution scores in Figure \ref{fig:ToyDatasetAnalysis} are much lower. Even high-performing models tend to generate poor attribution scores with large variance between molecules, suggesting that the models are learning something very different from the ground truth logic.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.45\textwidth]{Figures/CombinedToyDatasetNormalizedAnalysisShort.pdf}
\caption{Comparison of attribution scores to AUC on \textit{in silico} datasets. The attribution score is a measure of how close the model's attribution is to the real rule for a particular molecule. Even high-performing models have poor attribution scores, suggesting that they are learning the wrong rule.}
\label{fig:ToyDatasetAnalysis}
\end{figure}
To better understand this data, Figure \ref{fig:ToyDatasetFPFN} splits the attribution score into false positives and false negatives, while Figure \ref{fig:ToyDataThreeElemAdversarial} shows some attribution images for dataset 1. Both suggest that logistic regression is more susceptible to false negatives, while random forest is more susceptible to false positives. These results call into question the validity of using these models to physically interpret predictions or to garner medicinal chemistry insight. In order to further establish that our attribution results provide insight into model performance, we used knowledge of the misattributed atoms to manually generate adversarial examples; a sample are shown in Figure \ref{fig:ToyDataThreeElemAdversarial}. This shows that erroneous attributions correspond to weaknesses in the trained models.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.45\textwidth]{Figures/CombinedToyDatasetFPFNShort.pdf}
\caption{(a) Attribution False Positive and (b) False Negatives. We observe higher rates of attribution false negatives for the logistic regression models. Random forest models have higher rates of false positives.}
\label{fig:ToyDatasetFPFN}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.5\textwidth]{Figures/ToyDatasetThreeElem1AdversarialMolsSmall.pdf}
\caption{Attribution Images and Adversarial Examples for Dataset 1, with binding rule benzene, alkyne, and hydroxyl. The left molecule is the attribution image for the specified model. Red has highest attribution; white has lowest. The right image is an adversarial example constructed using the observed misattributions. The success of the adversarial examples indicates that our attribution method correctly identifies flaws in model performance.}
\label{fig:ToyDataThreeElemAdversarial}
\end{figure}
\subsection{Misattribution: Fragment-Matched Decoys}
To understand attribution false negatives, we compute the Pearson correlation between the presence of every feature and the binding activity of each molecule and examine those features most correlated with activity. We observe that benzene does not appear among the top $20$ features for dataset 1 (data not shown), despite being required in the binding logic, whereas the top $20$ does contain features that are not part of the binding logic at all. This surprising observation likely explains why models fail to place high weight on benzene, as seen in Figure \ref{fig:ToyDataThreeElemAdversarial}. To address this issue, we add fragment-matched decoys: inactive molecules that have some, but not all, of the fragments required for binding to the dataset. Specifically, we include $150$ generic inactives, $150$ inactives with the first two fragments, $150$ inactives with the first and third fragments, and $150$ inactives with the second two fragments.
We find that adding these fragment-matched decoys decreases the number of attribution false negatives (data not shown), and improves normalized attribution scores (Figure \ref{fig:ToyDataDebiasEffect}). However, our models are still not perfect, as they have plenty of attribution false positives and it is still possible to generate adversarial examples (data not shown). Thus adding fragment-matched decoys is necessary but not sufficient to improve the overall attribution accuracy of our models. When working with real world datasets, one could identify fragment-matched decoys by screening molecules with some, but not all, of the fragments observed in known actives. For example, combinatorial libraries where molecules are generated by linking fragments selected from fragment libraries would serve this purpose.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.4\textwidth]{Figures/CombinedToyDatasetThreeElemDebiasAnalysisShort.pdf}
\caption{Effect of Adding Fragment-Matched Decoys on Normalized Attribution Score for Dataset 1. Adding fragment-matched decoys improves overall attribution scores, but they are still not perfect, at least partly due to high rates of false positives.}
\label{fig:ToyDataDebiasEffect}
\end{figure}
\subsection{Misattribution: Background Correlations}
To understand attribution false positives, we note that the features most correlated with activity in dataset 1 include a number of ethers that connect alkynes to benzenes or to alcohols. The alkynes, benzenes, and alcohols are part of the binding logic, but the ether is not. Similarly, in Figure \ref{fig:ToyDataThreeElemAdversarial}, we see that ethers are incorrectly included in the attributed fragments. This suggests that at least some attribution false positives are due to features that are highly correlated with activity but are not part of the binding logic. This could be caused by the fact that the features are not truly independent, i.e. some features may co-occur in dataset molecules more often than would be expected at random. To measure this, we compute the Pearson correlation between every pair of features across all molecules in dataset 1. This analysis reveals that ether is highly correlated with both the benzene and alkyne groups that are present in the binding logic, explaining why ether is highly correlated with activity. Since the dataset molecules are drawn randomly from ZINC12, these high inter-feature correlations can only be a result of spurious background correlations already present in the ZINC12 dataset.
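Pairwise feature correlations of this kind are straightforward to compute from the binary fingerprint matrix. The following sketch (a hand-rolled Pearson coefficient, so that no external libraries are assumed) computes the full pair-correlation table for a toy $0/1$ matrix:

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length sequences
    (assumes neither is constant)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def feature_pair_correlations(X):
    """X: molecules x features 0/1 matrix (list of rows).
    Returns {(i, j): r} for all feature pairs i < j."""
    cols = list(zip(*X))
    return {(i, j): pearson(cols[i], cols[j])
            for i in range(len(cols))
            for j in range(i + 1, len(cols))}

# Toy matrix: features 0 and 1 always co-occur -> correlation 1.
X = [[1, 1, 0], [0, 0, 1], [1, 1, 1], [0, 0, 0]]
corr = feature_pair_correlations(X)
assert abs(corr[(0, 1)] - 1.0) < 1e-12
```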
It has previously been shown that background correlations of hashed fingerprints are approximately drawn from the standard Marchenko-Pastur distribution that would be expected for random vectors drawn from a multivariate Gaussian distribution \cite{Lee2016}. This suggests that there is no additional correlation structure inherent in molecular fingerprint data. Despite this result, both the nature of small molecule structures and the manner in which fingerprints are constructed suggest that there should be correlations between bits that correspond to e.g. fragments and their substructures. To probe this more deeply, we computed the pair correlation matrix for a random sample of $4000000$ molecules from ZINC12 using unhashed ECFP6 fingerprints and found significant background correlation, as shown in Figure \ref{fig:ToyDatasetCorrelationMatrix}. This suggests that the hashing process obscures important information about the background correlation structure.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.5\textwidth]{Figures/CombinedToyDatasetCorrelationMatrixShort.pdf}
\caption{Correlation between Background and Dataset 1. (A) Top right: Heatmap of dataset 1 correlation matrix using hashed fingerprints. Bottom left: Background dataset correlation matrix from unhashed fingerprints. (B) Scatter plot of these correlations. We observe that many spurious correlations in dataset 1 are likely caused by the background. Correlations that are only present in the dataset are useful for determining the correct binding logic for each dataset.}
\label{fig:ToyDatasetCorrelationMatrix}
\end{figure}
Our results indicate that some of the highest correlations in the hashed fingerprints observed in dataset 1 are related to this background correlation structure of the unhashed fingerprints, as shown by the cloud of points near the $y = x$ line in the scatter plot in Figure \ref{fig:ToyDatasetCorrelationMatrix}. Other correlations appear only in the dataset and are not present in the background; this generates the cloud of points surrounding the $y$-axis in the scatterplots of Figure \ref{fig:ToyDatasetCorrelationMatrix}. Our trained models need to distinguish informative correlations caused by the binding logic from spurious background correlations caused by molecular descriptors.
Successfully distinguishing between spurious and legitimate correlations does help uncover the correlations that correspond directly to the binding logic. After normalizing the dataset correlations by the background correlation, the highest correlations are between an alkyne and hydroxyl, a hydroxyl and a benzene, and an alkyne and a benzene: exactly the groups in the binding logic. Thus correlations that occur most strongly above background do correspond to the binding logic used to build the dataset.
\subsection{Comparison to Real Datasets}
In order to verify that attribution results for our \textit{in silico} datasets reflect those for real datasets, we examined comparative attribution scores. Real activity data was acquired from ChEMBL24.1 \cite{Davies2015, Gaulton2017} for a number of protein targets, with inactive molecules acquired from PubChem \cite{Mervin2018, Kim2016} following the procedure in \citet{Sundar2019}. We see similar degrees of disagreement between the models on the \textit{in silico} datasets and on the real datasets, as measured by comparative attribution scores (data not shown). We note that if two high-performing models like logistic regression and random forest disagree on attributions for specific molecules for which they make accurate binding predictions, then at least one must be wrong.
\section{Conclusions}
Our results suggest a number of important cautionary notes about the success of fingerprint-based protein/ligand binding models. Even high-performing models may misattribute, and using \textit{in silico} datasets to explicitly test model performance can help identify models that perform attribution correctly. We demonstrated that false negative attributions can be mitigated by adding fragment-matched decoys, where future work will test the sensitivity to accurate decoy selection. A corresponding approach for real datasets would be to screen molecules with a subset of the fragments present in known actives. Further, we have shown that the strongest false positives originate from correlations present in the background data that cause various fragments to be spuriously correlated with activity. Thus one key to mitigating false positives is to develop models that account for the background correlation structure.
\nocite{Unterthiner2014, Ramsundar2015, Duvenaud2015, Wallach2015, Kearnes2016}
\nocite{Verdonk2004, Cleves2008, Rohrer2009, Sheridan2013, Wallach2018, Liu2019, Sundar2019}
\bibliographystyle{icml2020}
\section{Introduction}
Prior to their introduction by the Soviet chess program
KAISSA in the late 1960s, bitboards were used in checkers
playing programs as described in \citeaby{samuel59}.
The elegance and performance advantages of bitboard-based
programs attracted many chess programmers and bitboards were used by
most early programs (\citeaby{slate78}, \citeaby{bitman70},
and \citeaby{hyatt90}).
But to fully exploit the performance advantages of parallel, bitwise
logical operations afforded by bitboards, most programs maintain,
and incrementally update, rotated bitboards. These rotated bitboards allow
for easy attack detection without having to loop over the squares
of a particular rank, file, or diagonal as described in \citeaby{heinz97}
and \citeaby{hyatt99}. The file occupancy is computed by using an
occupancy bitboard rotated 90 degrees and then using the rank attack
hash tables to find the attacked squares. Once the attacked squares are known, they are
mapped back to their original file squares for move generation.
The diagonal attacks are handled similarly except that the rotation
involved is 45 (or -45) degrees depending on which diagonal is being
investigated. These rotated occupancy bitboards are incrementally
updated after each move to avoid the performance penalty of dynamically
recreating them from scratch at every move.
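Without a rotated board, the occupancy bits of a file are scattered eight bits apart and must be gathered one square at a time. The following Python sketch (hypothetical; not taken from any of the programs cited above) shows the per-lookup loop that rotated bitboards are designed to avoid:

```python
def file_occupancy(occupied, file_index):
    """Gather the 8 bits of one file (file_index 0 = h-file) into a byte.

    This is the bit-collection loop that rotated bitboards eliminate
    by keeping a 90-degree-rotated copy of the board up to date.
    """
    occ = 0
    for rank in range(8):
        if occupied & (1 << (rank * 8 + file_index)):
            occ |= 1 << rank
    return occ

# Pieces on h2 (bit 8) and h5 (bit 32) give a file occupancy of 0b00010010.
assert file_occupancy((1 << 8) | (1 << 32), 0) == 0b00010010
```

Rotated bitboards trade this per-lookup loop (and its diagonal counterparts) for the cost of incrementally updating the rotated copies after every move.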
\section{Direct Lookup}
As researchers and practitioners explore Shannon type B
approaches to chess programming \citebay{shannon50}, code clarity
and expressive power become important in implementing complex
evaluation functions and move ordering algorithms.
Many high level programming languages (notably Python \citebay{python93}) have
useful predefined data structures (e.g. associative arrays), implemented as
dynamically resizable hash tables that resolve collisions by probing techniques. The basic
lookup function used in Python is based on Algorithm D: Open Addressing with
Double Hashing from Section 6.4 in \citeaby{knuth98}. We define four
dictionaries that are two-dimensional
hash tables which are the main focus of this paper:
\textbf{\textsf{rank\_attacks}},
\textbf{\textsf{file\_attacks}},
\textbf{\textsf{diag\_attacks\_ne}},
and \textbf{\textsf{diag\_attacks\_nw}}
representing the rank, file, and two diagonal directions (``ne'' represents
the northeast A1-H8 diagonals and ``nw'' represents the northwest A8-H1
diagonals). In order to use these hash tables directly, we need to also
create rank, file and diagonal mask bitboards for each of the squares
(e.g. $diag\_mask\_ne[c4] = a2 | b3 | c4 | d5 | e6 | f7 | g8$).
These hash tables only need to be
generated at startup. The initial cost of calculating these tables
can be avoided altogether if the table
values are stored in a file and simply retrieved.
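The mask bitboards can be generated in a few lines. The helper below is a sketch (not one of the paper's listings) that builds the A1-H8 diagonal mask for a named square, using the square numbering introduced later (h1 = bit 0, a1 = bit 7, ..., a8 = bit 63):

```python
def sq(name):
    """Bitboard value of a square named like 'c4' (h1 = bit 0, a8 = bit 63)."""
    f, r = name[0], int(name[1])
    return 1 << ((r - 1) * 8 + (ord('h') - ord(f)))

def diag_mask_ne(name):
    """Mask of the A1-H8 ('northeast') diagonal passing through the square."""
    f, r = ord(name[0]) - ord('a'), int(name[1]) - 1
    mask = 0
    for df in range(8):                 # walk every file, keep in-board ranks
        dr = r + (df - f)               # squares on this diagonal share rank - file
        if 0 <= dr < 8:
            mask |= 1 << (dr * 8 + (7 - df))
    return mask

# The example from the text: the c4 mask is a2|b3|c4|d5|e6|f7|g8.
assert diag_mask_ne('c4') == (sq('a2') | sq('b3') | sq('c4') | sq('d5')
                              | sq('e6') | sq('f7') | sq('g8'))
```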
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.5]{fig1.jpg}
\caption{C4 White Bishop Attacks and Attacked Squares Bitboard}
\label{diagattack}
\end{center}
\end{figure}
The first dimension represents the
location of the attacking (sliding) piece and the second dimension represents
the occupancy of the particular rank, file, or diagonal. The first dimension has
64 possible values and the second has 256 possible values (except for the
diagonals with fewer than eight squares). While
the sizes of these hash tables are small, the actual values are fairly large
(up to $2^{64}-1$). The reason for this is that these hash tables are indexed
directly by the bitboard values retrieved from the chess board.
In Figure \ref{diagattack}, the squares attacked by the bishop at square c4
are found by calculating the occupancy of the two diagonals
intersecting at square c4, performing a logical OR of the attacked
squares obtained by direct lookup of the two diagonal hash tables,
and finally removing the squares occupied by friendly pieces.
The techniques described in this paper provide the attacked squares
that are both unoccupied and occupied. These same attack
vectors are also used in evaluation functions that require attacks from
a certain square as well as attacks on a certain square.
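In code, the lookup procedure just described might read as follows (a hypothetical Python sketch; the table and mask names follow the definitions above, and the tables are passed in explicitly so the fragment is self-contained):

```python
def bishop_attacks(square_bb, occupied, friendly,
                   diag_attacks_ne, diag_attacks_nw,
                   diag_mask_ne, diag_mask_nw):
    """Attacked squares for a bishop via direct lookup of the two
    diagonal hash tables, with friendly-occupied squares removed."""
    occ_ne = occupied & diag_mask_ne[square_bb]   # A1-H8 diagonal occupancy
    occ_nw = occupied & diag_mask_nw[square_bb]   # A8-H1 diagonal occupancy
    attacks = (diag_attacks_ne[square_bb][occ_ne]
               | diag_attacks_nw[square_bb][occ_nw])
    return attacks & ~friendly
```

The same two lookups, combined with the rank and file tables, give rook and queen attacks.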
\subsection{Rank Attacks}
The rank attack hash array can best be understood by starting with the
first rank. (Note: in the subsequent listings, the convenience variables
for each square are created so that h1=1,
a1=128, h8=72057594037927936, a8=9223372036854775808, etc.)
The rank attack for the first rank is given by the following:
$$ rank\_attacks_{rank_1}(p_{rank_1},o_{rank_1}) = \sum _{i=r}^{\pi-1} 2^i + \sum _{i=\pi+1}^{l}2^i $$
\noindent
where $\pi$ is the bit position of the sliding piece (rook or queen) on the
first rank, so that its square value is $p_{rank_1}=2^{\pi}$ (with h1 as bit 0
and a1 as bit 7), $o_{rank_1}$ is the occupancy value for the first rank,
$r$ is the bit position of the first occupied
square to the right of the sliding piece ($r=0$ if that side is empty), and
$l$ is the bit position of the first occupied
square to its left ($l=7$ if that side is empty). Each sum thus runs from the
piece outward up to and including the first blocking square, so the attack set
contains every empty square between the piece and the blockers, together with
the blockers themselves.
\noindent
Then finally, to find the rank attacks at the $i^{th}$ rank, we
simply move this first rank value ``up'' in rank by multiplying
by $256^{i-1}$, since moving a piece up one rank on the chessboard
is the same as left shifting a binary number by 8 or multiplying by $2^8$.
$$ rank\_attacks_{rank_i}(p_{rank_i},o_{rank_i}) = rank\_attacks_{rank_1}(p_{rank_1},o_{rank_1}) \cdot 256^{i-1} $$
\noindent
where the piece position key and occupancy key at rank $i$ are obtained from
the rank 1 values by multiplying by the same factor
$$ p_{rank_i} = p_{rank_1} \cdot 256^{i-1}$$
$$ o_{rank_i} = o_{rank_1} \cdot 256^{i-1}$$
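The shift identity underlying this step is easy to verify numerically (a quick sanity check, not one of the paper's listings):

```python
# Moving a first-rank attack pattern up to rank i is a left shift by
# 8*(i-1) bits, which is the same as multiplying by 256**(i-1).
attack_rank1 = 0b01110110          # an arbitrary first-rank attack value
for i in range(1, 9):
    assert attack_rank1 << (8 * (i - 1)) == attack_rank1 * 256 ** (i - 1)
```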
An implementation of this is shown in Listing \ref{rank}.
Here, the function's outer loop (variable i in line 3)
iterates the attacking piece over the squares of
the first rank beginning with square h1. The rank\_attacks hash table
is initialized in line 2 and in lines 4 and 5. The second loop iterates over
the possible 256 occupancy values for a rank (line 6). After some initialization,
the function moves one square to the right of the attacking piece,
adding the value to the hash table. If the square is occupied, there is a
piece that will block further movement in this direction and so we break out of
this right side summation. This process is repeated for the left side of the
attacking square (lines 12-15). Finally, when the rank\_attack hash table is complete
for the particular attacking square, the function shifts the values for each
respective rank for the remaining ranks (lines 16-21). Note that this
hash table includes blocking squares that are occupied by both enemy
and friendly pieces. The friendly piece occupancy will need to be
removed before assembling the legal moves.
\begin{lstlisting}[float=ht,frame=tb,label=rank,caption={Rank Attack Lookup Table}]{}
def get_rank_attacks ():
    rank_attacks = {}
    for i in range(8):
        for r in range(8):
            rank_attacks[1 << (i + (r * 8))] = {}
        for j in range(256):
            rank_attacks[1 << i][j] = 0
            for right in range(i-1, -1, -1):
                rank_attacks[1 << i][j] |= 1 << right # save it
                if ((1 << right) & j != 0): # non empty space
                    break
            for left in range(i+1,8):
                rank_attacks[1 << i][j] |= 1 << left # save it
                if ((1 << left) & j != 0): # non empty space
                    break
            for rank in range(1,8):
                x = 1 << (i+(rank*8))
                y = j << (rank*8)
                value = rank_attacks[1 << i][j]
                newvalue = value << (rank*8)
                rank_attacks[x][y] = newvalue
    return(rank_attacks)
\end{lstlisting}
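To make the table contents concrete, the standalone function below (a condensed sketch of the inner loops of Listing \ref{rank}, not the listing itself) computes a single first-rank entry directly:

```python
def first_rank_attacks(piece_bit, occ):
    """Attacked squares for a rook on the first rank (h1 = bit 0)."""
    attacks = 0
    for b in range(piece_bit - 1, -1, -1):   # toward h1
        attacks |= 1 << b
        if occ & (1 << b):                   # first blocker ends the run
            break
    for b in range(piece_bit + 1, 8):        # toward a1
        attacks |= 1 << b
        if occ & (1 << b):
            break
    return attacks

# A rook on d1 (bit 4) with blockers on f1 (bit 2) and b1 (bit 6)
# attacks e1, f1, c1, and b1:
assert first_rank_attacks(4, 0b01000100) == 0b01101100
```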
\subsection{File Attacks}
The file attacks hash table uses the values obtained in the rank attack
table on the first rank and performs a 90 degree rotation.
In the approach shown here, the 8th file
$file\_attacks_{file_8}$ hash table is obtained by converting the rank 1
$rank\_attacks_{rank_1}$
table to base 256. The bitboard position of the sliding piece
as well as the occupancy are also converted in a similar fashion.
$$ file\_attacks_{file_8}(p_{file_8},o_{file_8}) = \sum_{i=1}^{8} B_i \cdot 256^{i-1} $$
where $B_i$ is the $i^{th}$ bit of the rank 1 rank\_attacks value
(with the h1 bit as $B_1$ and the a1 bit as $B_8$) and
$$ p_{file_8} = 256^{(8- \tilde f)} $$
$$ o_{file_8} = \sum_{i=1}^{8} O_i \cdot 256^{i-1}$$
where $\tilde f$ is the actual file number (a $=1,\dots,$ h $=8$) of the
position square $p_{rank_1}$
on the first rank and
$O_i$ is the $i^{th}$ bit of the occupancy on the first
rank.
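The rank\_to\_file() helper itself is not listed in the paper; one possible implementation consistent with the base-2 to base-256 conversion described here is:

```python
def rank_to_file(x):
    """Spread the 8 bits of a first-rank value onto the h-file:
    bit i (value 2**i) becomes bit 8*i (value 256**i)."""
    out = 0
    for i in range(8):
        if x & (1 << i):
            out |= 1 << (8 * i)
    return out

assert rank_to_file(0b1) == 1                # h1 stays on h1
assert rank_to_file(0b10) == 256             # g1 maps to h2
assert rank_to_file(0b10000000) == 1 << 56   # a1 maps to h8
```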
\begin{lstlisting}[float=tp,frame=tb,label=file,caption={File Attack Lookup Table}]{}
def get_file_attacks ():
    # this routine assumes that the rank_attacks have already
    # been calculated.
    file_attacks = {}
    for i in range(64):
        r = rank[1 << i] - 1
        mirror_i = rank_to_file((1 << i) >> (8*r)) << r
        file_attacks[mirror_i] = {}
        for j in range(256):
            mirror_j = rank_to_file(j) << r
            value = rank_attacks[1 << i][j << (8*r)]
            lower_value = value >> (8*r)
            file_value = rank_to_file(lower_value)
            final_value = file_value << r
            file_attacks[mirror_i][mirror_j] = final_value
    return(file_attacks)
\end{lstlisting}
The implementation of this is shown in Listing \ref{file} and uses the rank\_attacks
hash table found earlier (line 11). This function has
an outer loop that ranges over the 64 squares, for the attacking piece,
and for each of these, an inner loop that loops over all the occupancy values.
In line 7, we find the symmetric square
value if reflected across the A8-H1 diagonal (e.g. g1 is reflected across
the line of symmetry and onto square h2, f1 to h3, etc.). In this way, the
position values are flipped or ``rotated'' 90 degrees and the occupancy
values are also rotated in line 10. The function rank\_to\_file() performs
this rotation by converting the number to base two and then to base 256.
In line 11, the attacked squares
that were calculated in Listing \ref{rank} are also rotated.
\subsection{Diagonal Attacks} \label{diags}
The attacked squares along the diagonals
are a little more complex to calculate using the base conversion
technique used on the file\_attacks. A more direct approach
like the one used to find the rank\_attacks, involving shifting and adding,
is used. The diagonal hash tables can be found by summing over the squares
up to and including the blocking square. The attacks along an A1-H8 diagonal are given by
$$ diag\_attacks\_ne(p,o) = \sum _{i=r}^{\pi-1} S_i + \sum _{i=\pi+1}^{l}S_i $$
\noindent
where $\pi$ is the position index of the sliding piece (bishop or queen) along
the diagonal, $p$ and $o$ are the bitboard values of the piece position and of
the occupancy of the diagonal, $S_i$ is the bitboard value of the $i^{th}$
square along the diagonal, $r$ is the index of the first occupied square on
one side of the piece ($r=0$ if that side is empty), and $l$ is the index of
the first occupied square on the other side ($l=n-1$ for a diagonal of $n$
squares if that side is empty).
The corresponding equation for the other diagonal direction (A8-H1)
is not written out but has the same structure.
\begin{lstlisting}[float=tp,frame=tb,label=diagslisting,
caption={Generalized Attack Lookup Table}]{}
def get_attacks (square_list=None):
    attack_table = {}
    attack_table[0] = {}
    attack_table[0][0] = 0
    for i in range(len(square_list)):
        list_size = len(square_list[i])
        for current_position in range(list_size):
            current_bb = square_list[i][current_position]
            attack_table[current_bb] = {}
            for occupation in range(1 << list_size):
                moves = 0
                for newsquare in range(current_position+1,list_size):
                    moves |= square_list[i][newsquare]
                    if ((1 << newsquare) & occupation):
                        break
                for newsquare in range(current_position-1,-1,-1):
                    moves |= square_list[i][newsquare]
                    if ((1 << newsquare) & occupation):
                        break
                temp_bb = 0
                while (occupation):
                    lowest = lsb(occupation)
                    temp_bb |= square_list[i][bin2index[lowest]]
                    occupation = clear_lsb(occupation)
                attack_table[current_bb][temp_bb] = moves
    return(attack_table)
def get_diag_attacks_ne ():
    diag_values = [[h1],
                   [h2,g1],
                   [h3,g2,f1],
                   [h4,g3,f2,e1],
                   [h5,g4,f3,e2,d1],
                   [h6,g5,f4,e3,d2,c1],
                   [h7,g6,f5,e4,d3,c2,b1],
                   [h8,g7,f6,e5,d4,c3,b2,a1],
                   [g8,f7,e6,d5,c4,b3,a2],
                   [f8,e7,d6,c5,b4,a3],
                   [e8,d7,c6,b5,a4],
                   [d8,c7,b6,a5],
                   [c8,b7,a6],
                   [b8,a7],
                   [a8]]
    return(get_attacks(diag_values))

def get_diag_attacks_nw ():
    diag_values = [[a1],
                   [b1,a2],
                   [c1,b2,a3],
                   [d1,c2,b3,a4],
                   [e1,d2,c3,b4,a5],
                   [f1,e2,d3,c4,b5,a6],
                   [g1,f2,e3,d4,c5,b6,a7],
                   [h1,g2,f3,e4,d5,c6,b7,a8],
                   [h2,g3,f4,e5,d6,c7,b8],
                   [h3,g4,f5,e6,d7,c8],
                   [h4,g5,f6,e7,d8],
                   [h5,g6,f7,e8],
                   [h6,g7,f8],
                   [h7,g8],
                   [h8]]
    return(get_attacks(diag_values))
\end{lstlisting}
\begin{lstlisting}[float=tp,frame=tb,label=newrankfile,
caption={Generalized Rank and File Attack Lookup Table}]{}
def get_rank_attacks ():
    # these are the rank square values
    rank_values = [[a1,b1,c1,d1,e1,f1,g1,h1],
                   [a2,b2,c2,d2,e2,f2,g2,h2],
                   [a3,b3,c3,d3,e3,f3,g3,h3],
                   [a4,b4,c4,d4,e4,f4,g4,h4],
                   [a5,b5,c5,d5,e5,f5,g5,h5],
                   [a6,b6,c6,d6,e6,f6,g6,h6],
                   [a7,b7,c7,d7,e7,f7,g7,h7],
                   [a8,b8,c8,d8,e8,f8,g8,h8]]
    return(get_attacks(rank_values))
def get_file_attacks ():
    # these are the file square values
    file_values = [[a1,a2,a3,a4,a5,a6,a7,a8],
                   [b1,b2,b3,b4,b5,b6,b7,b8],
                   [c1,c2,c3,c4,c5,c6,c7,c8],
                   [d1,d2,d3,d4,d5,d6,d7,d8],
                   [e1,e2,e3,e4,e5,e6,e7,e8],
                   [f1,f2,f3,f4,f5,f6,f7,f8],
                   [g1,g2,g3,g4,g5,g6,g7,g8],
                   [h1,h2,h3,h4,h5,h6,h7,h8]]
    return(get_attacks(file_values))
\end{lstlisting}
An implementation of this is shown in Listing \ref{diagslisting}. Each diagonal is looped
over (line 5) for the outer loop and the attacking piece is moved along the diagonal
for the inner loop (line 7). For each position of the attacking piece, all of the
possible occupancies are generated (line 10) and the two inner loops, one for the
right side (lines 12-15) and one for the left side (lines 16-19), are used to accumulate
open squares until blocking bits are encountered. The function completes by
converting the occupancy value to a bitboard number along the diagonal.
Not shown is a hash table called \textbf{bin2index}
used to convert bitboard values to square index
values (e.g. a1$\rightarrow$7).
The function is called with a list of lists of the values of the diagonals.
For the A1-H8 direction (also referred to as the ``northeast'' or ne direction),
the diagonal values are shown in lines 28-42 and for the A8-H1 diagonals, the
diagonal values passed into the function are shown in lines 46-60.
This algorithm is general enough to generate the rank and file attack
tables as well; the reformulated rank and file functions are shown in
Listing \ref{newrankfile}.
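The helpers lsb() and clear\_lsb() used in Listing \ref{diagslisting} are not themselves listed; the standard two's-complement idioms below are one way to write them:

```python
def lsb(x):
    """Isolate the least significant set bit of x as a bitboard value."""
    return x & -x

def clear_lsb(x):
    """Clear the least significant set bit of x."""
    return x & (x - 1)

assert lsb(0b101100) == 0b100
assert clear_lsb(0b101100) == 0b101000
```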
\section{Experimental Results} \label{resultssection}
\begin{table}[h]
\begin{center}
\begin{tabular}{|l|c|c|}
\hline OS and CPU & Rotated Bitboards Time (s) & Direct Lookup Time (s) \\
\hline
\hline
OS X 10.4.9 2.33 GHz Intel Core 2 Duo & 7.29 & 6.42 \\
Linux 2.6.9 3.4 GHz Intel Quad Xeon & 8.82 & 7.67 \\
OS X 10.4.9 1.67 GHz PowerPC G4 & 16.31 & 14.13 \\
SunOS 5.8 1.5 GHz dual UltraSPARC-IIIi & 23.95 & 19.06 \\
FreeBSD 6.2 500 MHz Pentium 3 & 58.17 & 44.85 \\
\hline
\end{tabular}
\caption{Move Generation Results for Rotated Bitboards and Direct Lookup. \label{results}}
\end{center}
\end{table}
A performance comparison of simple move generation
was made between a rotated bitboard implementation and
a direct lookup implementation. The test results are shown in Table \ref{results}.
The times shown reflect a comparison of the move generation routines.
A well known and well studied set of test cases exists in the Encyclopedia of Chess
Middle Games (ECM).
Positions were selected from the ECM \citebay{krogius80} and for each of the 879
test positions, a list of the main board position as well as three rotated boards
were precalculated and saved in a list used by both methods. The moves were then generated
for each of these 879 positions using the same list of bitboards generated earlier.
The move generation functions for direct lookup
and those for the rotated bitboards differed only in how they handled the sliding piece attacks.
This process was repeated 10 times for each of the two types of approaches.
In generating moves for rotated bitboards, we required additional shifting and masking operations
before the lookup of the attacks could take place. Furthermore, the overhead
of maintaining and updating the rotated bitboards is not accounted for since
these test positions represent games in mid play where the rotated bitboards
were precalculated.
The results shown indicate that directly looking up the attacking moves
for sliding pieces in hash tables improves the move generation speeds
by roughly 10\% to 15\% on the newer architectures tested, and by over
20\% on the older processors. Further efficiencies
can be expected in a full implementation where the overhead of maintaining
rotated bitboards is eliminated.
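For reference, the relative improvements implied by Table \ref{results} can be recomputed directly (here as the reduction relative to the rotated-bitboard times):

```python
# Times from Table 1, in the order the rows are listed.
rotated = [7.29, 8.82, 16.31, 23.95, 58.17]
direct  = [6.42, 7.67, 14.13, 19.06, 44.85]
# Percent reduction of the direct-lookup time relative to rotated bitboards;
# roughly 12-13% on the newer machines and 20-23% on the older two.
speedup = [(r - d) / r * 100 for r, d in zip(rotated, direct)]
```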
The implementation and test code is made available in an Open-Source,
interactive, chess programming module called ``Shatranj'' \citebay{shatranj06}.
\section{Conclusions and Future Work}
We have described an approach for obtaining direct access to the attacked
squares of sliding pieces without resorting to rotated
bitboards. Detailed algorithms and working code illustrate how
the four hash tables were derived. The attacked squares are retrieved directly
from the hash tables once the occupancy for the particular rank, file, or diagonal
is obtained with the appropriate masks. Using these four hash tables,
maintaining incrementally updated rotated bitboards
becomes unnecessary, as does the shifting and masking
needed to obtain the consecutive occupancy bits for a rank,
file, or diagonal.
In addition to simplifying the implementation, we can expect a performance
improvement in move generation of at least 10\%.
Taking the implementation a bit further,
the hash tables described in this
paper are also useful for implementing evaluation functions
which include piece mobility and attack threats. When the additional
impact of complex evaluation functions is taken into account,
the speed improvements should be greater than the results presented here.
Since most chess implementations do not use a high level interpreted language
such as Python, it is difficult to estimate the effect of cache loading and
execution speed. The results presented here only reflect the savings
seen by move generation and not those of a fully implemented chess engine.
Further research is needed to quantify the effect of these changes on
cache utilization in a complete chess engine.
Alternatives to rotated bitboards have gained some popularity recently.
Minimal perfect hash functions as described in
\citebay{czech92} have been used to create hash tables where the
index is calculated based on the mover square and occupancy bitboard.
A recent refinement of this described in \citebay{leiserson98},
called magic move generation, further reduces the memory requirements of the
hash table. In this approach, a ``magic multiplier'' for a particular
square is multiplied by an occupancy bitboard and then shifted by
another ``magic'' number. This provides a hash index where the attacked
squares can be retrieved from a hash table. Performance comparisons
of the built in hash tables provided by interpreted languages and
techniques involving manually creating minimal perfect hash functions
as well as hash functions using de Bruijn sequences (a.k.a. magic move
generation techniques) could also be explored in future work.
Representation of chess knowledge with the data structures
provided by high level languages seems to have received very little
attention since the primary focus of the majority of work has been
improving execution speed, an area that places interpreted languages at a
distinct disadvantage.
\section{Introduction}
\indent The Bogoliubov--de Gennes (BdG) equation and the gap equation describe spatially inhomogeneous states in various kinds of condensed matter systems, such as superconductors\cite{DeGennes}, polyacetylene\cite{TakayamaLinLiuMaki}, and ultracold atomic Fermi gases. The equivalent equations also appear in the mean field theory of the Nambu--Jona-Lasinio (NJL) or the chiral Gross--Neveu (GN) model in high-energy physics\cite{NambuJonaLasinio,GrossNeveu,DashenHasslacherNeveu}.\\
\indent It is generally a difficult problem to obtain a self-consistent exact solution satisfying not only
the BdG equation but also the gap equation, and only a few analytic examples have been known so far, such as the one- and two-kink (polaron in polyacetylene) solutions \cite{TakayamaLinLiuMaki,DashenHasslacherNeveu,Shei,OkunoOnodera,Feinberg} and the real\cite{BrazovskiiGordyuninKirova,Horovitz} and complex\cite{BasarDunne} kink-crystals.
Recently, the present authors determined the most general self-consistent solutions under uniform boundary conditions\cite{TakahashiNitta}. The solutions describe $ n $-soliton states, in which the soliton positions are arbitrary but the phase shifts must be discretized. More recently, these solutions have been generalized to the time-dependent case\cite{DunneThies}.
\indent In this paper, we present several complementary results that were absent in our previous work. First, we prove \textit{directly from the gap equation} that the self-consistent soliton solutions must have reflectionless potentials, using the form of the Jost solutions derived from the Riemann-Hilbert approach\cite{AblowitzSegur}. In the preceding works, this fact was proved via the functional derivative with respect to the reflection coefficient\cite{DashenHasslacherNeveu,Shei}. Our derivation is more transparent because one can see directly how the reflection coefficient vanishes. Next, we give the self-consistent condition for the system consisting of only right-movers, which is more commonly used in high-energy physics. The resulting condition is consistent with the time-independent case of the recent work\cite{DunneThies}.
\section{Model}
\begin{figure}[t]
\begin{center}
\includegraphics{andreevappsyn.eps}
\caption{\label{fig:andapp}(Color online) Schematic of the Andreev approximation. The left (right) figure shows the dispersion relation before (after) the Andreev approximation. The red dashed line represents the dispersion relation of free particles and holes and the blue solid line represents the dispersion relation of quasiparticles in the gapped system.}
\end{center}
\end{figure}
The one-dimensional BdG system is given by the BdG equation and the gap equation as a self-consistent condition
\begin{align}
\begin{pmatrix} -\frac{1}{2}\partial_x^2-\mu_\uparrow & \Delta(x) \\ \Delta(x)^* & \frac{1}{2}\partial_x^2+\mu_\downarrow \end{pmatrix}\begin{pmatrix}u \\ v \end{pmatrix}=\epsilon \begin{pmatrix} u \\ v \end{pmatrix},\quad -\frac{\Delta(x)}{g}=\sum_{\text{occupied states}} uv^*,
\end{align}
where $ \mu_{\uparrow, \downarrow}=\frac{k_F^2}{2}\pm h $ with a Fermi momentum $ k_F $ and a magnetic field $ h $. In this paper we only consider the case $ h=0 $. Following the Andreev approximation, we linearize the dispersion relation around the Fermi points by substituting $ (u(x),v(x))=\mathrm{e}^{\pm\mathrm{i}k_Fx}(u_{\text{R,L}}(x),v_\text{R,L}(x)) $ near the right and left Fermi point $ k=\pm k_F $ and ignoring second-order derivatives (See Fig.~\ref{fig:andapp}). Moving to the new unit with $ k_F=1 $, we obtain the BdG equation for the right- and left-movers
\begin{align}
\begin{pmatrix} -\mathrm{i}\partial_x & \Delta(x) \\ \Delta(x)^* & \mathrm{i}\partial_x \end{pmatrix}\begin{pmatrix} u_{\text{R}} \\ v_{\text{R}} \end{pmatrix}=\epsilon \begin{pmatrix} u_{\text{R}} \\ v_{\text{R}} \end{pmatrix},\quad \begin{pmatrix} \mathrm{i}\partial_x & \Delta(x) \\ \Delta(x)^* & -\mathrm{i}\partial_x \end{pmatrix}\begin{pmatrix} u_{\text{L}} \\ v_{\text{L}} \end{pmatrix}=\epsilon \begin{pmatrix} u_{\text{L}} \\ v_{\text{L}} \end{pmatrix} \label{eq:BdGLR01}
\end{align}
and the gap equation
\begin{align}
-\frac{\Delta(x)}{g}=\sum_{\text{occupied states}} u_{\text{R}}^{}v_{\text{R}}^*+u_{\text{L}}^{}v_{\text{L}}^*. \label{eq:BdGLR02}
\end{align}
If we consider the system consisting of only right-movers, the fundamental equations are given by
\begin{align}
\begin{pmatrix} -\mathrm{i}\partial_x & \Delta(x) \\ \Delta(x)^* & \mathrm{i}\partial_x \end{pmatrix}\begin{pmatrix} u \\ v \end{pmatrix}=\epsilon \begin{pmatrix} u \\ v \end{pmatrix},\quad -\frac{\Delta(x)}{g}=\sum_{\text{occupied states}} uv^*, \label{eq:BdGR01}
\end{align}
which corresponds to the NJL or the chiral GN model\cite{NambuJonaLasinio,GrossNeveu,DashenHasslacherNeveu}.\\
\indent Henceforth, when we cite Eq.~(x) from our Letter \cite{TakahashiNitta}, we write it as Eq.~(L.x). Similarly, Eq.~(y) in the Supplemental Material of our Letter \cite{TakahashiNitta} is written as Eq.~(S.y). We assume that the gap function $ \Delta(x) $ satisfies the finite-density boundary condition (S.2). Since the solution of the BdG equation for left-movers is expressed by the one for right-movers [Eq.~(L.21)], we can write down the gap equation only using the quantities of right-movers, and we always do so and omit the subscript R hereafter.\\
\indent We use the same definitions of right and left Jost solutions $ f_\pm(x,s) $ and the transition coefficients $ a(s) $ and $ b(s) $ in (S.10)-(S.17), where $ s $ is a uniformizing variable defined by $ \epsilon(s)=\frac{m}{2}(s+s^{-1}) $ and $ k(s)=\frac{m}{2}(s-s^{-1}) $ [Eq.~(L.10)]. In order to fit the notations with Ref.~\cite{TakahashiNitta}, we define the scattering states $ (u(x,s),v(x,s)) $ and bound states $ (u_j(x),v_j(x)) \ (j=1,\dots,n) $ using the left Jost solution as follows:
\begin{align}
\begin{pmatrix} u(x,s) \\ v(x,s) \end{pmatrix}:=f_-(x,s^{-1}),\quad \begin{pmatrix} u_j(x) \\ v_j(x) \end{pmatrix}=\begin{pmatrix} f_j(x) \\ s_j f_j(x)^* \end{pmatrix}=-C_jf_-(x,s_j), \label{eq:reful401}
\end{align}
where $ s_j (j=1,\dots,n) $ is a discrete eigenvalue satisfying $ |s_j|=1 $, and $ C_j=|b(s_j)|c_j $ is a normalization constant of the $j$-th bound state (See Ref.~\cite{TakahashiNitta} for more detail). We also write $ \kappa_j=-\mathrm{i}k(s_j) $ and $ e_j(x)=C_j\mathrm{e}^{\kappa_j x} $. If $ \Delta(x) $ is a reflectionless potential, they reduce to (L.18) and (L.19) with the linear equation (L.13), or equivalently, (S.83) and (S.84) with (S.79). The asymptotic form of scattering states are given by
\begin{align}
\begin{pmatrix} u(x,s) \\ v(x,s) \end{pmatrix} \!\rightarrow\! \begin{cases} \begin{pmatrix}1\\[-0.75ex] s^{-1}\end{pmatrix}\mathrm{e}^{\mathrm{i}k(s)x} &\!\!(x\rightarrow-\infty) \\ \mathrm{e}^{\mathrm{i}\theta\sigma_3}\left[a(s)^* \begin{pmatrix} 1 \\[-0.75ex] s^{-1}\end{pmatrix}\mathrm{e}^{\mathrm{i}k(s)x}-b(s)\begin{pmatrix}s^{-1} \\[-0.75ex] 1 \end{pmatrix}\mathrm{e}^{-\mathrm{i}k(s)x}\right]&\!\!(x\rightarrow+\infty). \end{cases} \label{eq:reful301}
\end{align}
The transmission and reflection coefficients are defined by $ t(s)=1/a(s) $ and $ r(s)=b(s)/a(s) $.\\
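The uniformizing variable builds the dispersion relation $ \epsilon^2=k^2+m^2 $ into the parametrization identically, both on the real $ s $-axis (scattering states) and on the unit circle (bound states). A quick numerical check (a Python sketch, not part of the analysis itself):

```python
import cmath

m = 1.0
def eps(s): return 0.5 * m * (s + 1/s)   # epsilon(s) = (m/2)(s + 1/s)
def k(s):   return 0.5 * m * (s - 1/s)   # k(s)       = (m/2)(s - 1/s)

# eps(s)**2 - k(s)**2 = m**2 for real s and for |s| = 1 alike.
for s in (0.3, -2.0, cmath.exp(0.7j)):
    assert abs(eps(s)**2 - k(s)**2 - m**2) < 1e-12
```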
\indent For a system with both right- and left-movers described by Eqs.~(\ref{eq:BdGLR01}) and (\ref{eq:BdGLR02}), we consider the same occupation state considered in Ref.~\cite{TakahashiNitta}. As we derive in the next section, the gap equation in the infinite-length limit is given by
\begin{align}
0=&\sum_{\text{b.s.}}\nu_ju_j(x)v_j(x)^*\nonumber \\
&+\left[\int_{-\infty}^0-\int_0^\infty\right]\frac{\mathrm{d}s}{2\pi}\left( \frac{m}{2}\left( u(x,s)v(x,s)^*+r(s)^*u(x,s)^2 \right)-\frac{\Delta(x)}{2s} \right)\!, \label{eq:gapeqforLR}
\end{align}
which gives the generalization of Eq.~(L.27) with a non-vanishing reflection coefficient $ r(s) $. For a system with only right-movers described by Eq.~(\ref{eq:BdGR01}), we also consider the similar occupation state, in which the negative-energy scattering states are completely filled and positive ones are empty, and the bound states are filled partially. Writing the filling rate of the $ j $-th bound state as $ \nu_j:=N_j/N $, we obtain the following gap equation in the next section:
\begin{align}
0=\sum_{\text{b.s.}}\nu_ju_j(x)v_j(x)^*+\int_{-\infty}^0\frac{\mathrm{d}s}{2\pi}\left( \frac{m}{2}\left( u(x,s)v(x,s)^*+r(s)^*u(x,s)^2 \right)-\frac{\Delta(x)}{2s} \right)\!. \label{eq:gapeqforR}
\end{align}
\section{Gap equation}
\indent In this section, we derive the gap equation in the infinite-volume limit. In order to avoid the mathematical difficulties of infinite systems, we first treat a finite system of length $ L $ on the interval $ [-\frac{L}{2},\frac{L}{2}] $, and then take the limit $ L\rightarrow\infty $.
\subsection{Discretized eigenstates in a finite system}
\indent To include the effect of solitons' phase shifts, we consider the following ``twisted'' periodic boundary condition:
\begin{align}
\Delta(\tfrac{L}{2})=\mathrm{e}^{2\mathrm{i}\theta}\Delta(-\tfrac{L}{2}),\quad u(\tfrac{L}{2})=\mathrm{e}^{\mathrm{i}\theta}u(-\tfrac{L}{2}),\quad v(\tfrac{L}{2})=\mathrm{e}^{-\mathrm{i}\theta}v(-\tfrac{L}{2}). \label{eq:reful303}
\end{align}
We note, however, that the final expression Eq.~(\ref{eq:gapssinf01}) below does not depend on the detail of the boundary condition. We assume that $ L $ is sufficiently large and hence the asymptotic form of the left Jost solution (\ref{eq:reful301}) can be used in the substitution at $ x=\pm\frac{L}{2} $.
Then, after a straightforward calculation, we obtain a set of discretized scattering eigenstates satisfying the boundary condition (\ref{eq:reful303}), which is given by
\begin{align}
\begin{pmatrix} u_{\text{PB}}(x,s) \\ v_{\text{PB}}(x,s) \end{pmatrix}\!=\mathrm{e}^{-\mathrm{i}\varphi(s)}\!\begin{pmatrix} u(x,s) \\ v(x,s) \end{pmatrix}\!+\mathrm{e}^{\mathrm{i}\varphi(s)}\!\begin{pmatrix} v(x,s)^* \\ u(x,s)^* \end{pmatrix}\!\!,\ \mathrm{e}^{2\mathrm{i}\varphi(s)}=r(s)\left(1+\mathrm{i}\frac{|t(s)|}{|r(s)|}\right) \label{eq:reful203}
\end{align}
with the discretization condition $ \mathrm{e}^{\mathrm{i}k(s)L}=t(s)^*\left(1+\mathrm{i}\frac{|r(s)|}{|t(s)|}\right). $
\subsection{Gap equation in the infinite-length limit}
\indent Now we move to the derivation of the gap equation in the infinite-length limit. Since $ |u_{\text{PB}}(x,s)|^2+|v_{\text{PB}}(x,s)|^2\xrightarrow{x \to \pm\infty} 2(1+s^{-2})+\text{(oscillating terms)} $, the relation, $ \lim_{L\rightarrow\infty}\frac{1}{L}\int_{-L/2}^{L/2}\left(|u_{\text{PB}}(x,s)|^2+|v_{\text{PB}}(x,s)|^2\right)\mathrm{d}x=2(1+s^{-2}) $, follows.
When $ L $ is sufficiently large, the wavenumbers of discretized eigenstates are given by $ k(s)=\frac{2\pi n}{L} $ with an integer $ n $. Therefore the sum is replaced by
\begin{align}
\frac{1}{L}\sum_{\substack{\text{s.s.} \\ \epsilon\gtrless 0}}\xrightarrow{L \to \infty}\int_{-\infty}^\infty\frac{\mathrm{d}k}{2\pi}=\pm\int_0^{\pm\infty}\frac{m(1+s^{-2})\mathrm{d}s}{4\pi}
\end{align}
Thus, the contribution of positive- (negative-) energy scattering states to the gap equation can be evaluated as
\begin{align}
&\frac{1}{L}\sum_{\substack{\text{s.s.} \\ \epsilon\gtrless 0}}\frac{u_{\text{PB}}(x,s)v_{\text{PB}}(x,s)^*}{\frac{1}{L}\int_{-L/2}^{L/2}\left(|u_{\text{PB}}(x,s)|^2+|v_{\text{PB}}(x,s)|^2\right)\mathrm{d}x} \rightarrow \pm\int_0^{\pm\infty}\frac{mu_{\text{PB}}(x,s)v_{\text{PB}}(x,s)^*\mathrm{d}s}{8\pi} \nonumber \\
&=\pm\int_0^{\pm\infty}\frac{m(u(x,s)v(x,s)^*+r(s)^*u(x,s)^2)\mathrm{d}s}{4\pi} \label{eq:gapssinf01}
\end{align}
To obtain the last line, we have used Eq.~(\ref{eq:reful203}) and the relations $\int\mathrm{d}sr(s)v(x,s)^{*2}=\int\mathrm{d}sr(s)^*u(x,s)^2 $ and $ \int\mathrm{d}s\frac{|t(s)|}{|r(s)|}r(s)v(x,s)^{*2}=\int\mathrm{d}s\frac{|t(s)|}{|r(s)|}r(s)^*u(x,s)^2 $, which can be shown using the relations $ r(s^{-1})=r(s)^* $ and $ s^{-1}v(x,s^{-1})^*=u(x,s) $ valid for real $s$.
Using the expression (\ref{eq:gapssinf01}), and following the same discussion in Ref.~\cite{TakahashiNitta}, we obtain the gap equation (\ref{eq:gapeqforLR}). By a similar procedure, we also obtain Eq.~(\ref{eq:gapeqforR}) for a system with only right-movers.
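The change of variables from $ k $ to $ s $ used above can be checked numerically: the Jacobian is $ \mathrm{d}k/\mathrm{d}s=\frac{m}{2}(1+s^{-2}) $, which is exactly the factor appearing in the replacement of the momentum sum (a Python sketch, not part of the analysis itself):

```python
m = 1.0
def k(s): return 0.5 * m * (s - 1/s)

# Central-difference derivative of k(s) against the analytic Jacobian.
h = 1e-6
for s in (0.5, 1.3, -2.0):
    numeric = (k(s + h) - k(s - h)) / (2 * h)
    exact = 0.5 * m * (1 + s**-2)
    assert abs(numeric - exact) < 1e-6
```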
\section{Proof of reflectionless nature}
\subsection{Gel'fand-Levitan-Marchenko equation}
The Gel'fand-Levitan-Marchenko (GLM) equation, which determines the kernel $ K(x,y) $ of the integral representation for the left Jost solution [Eq.~(S.40)], is given by Eqs.~(S.59)-(S.62). If we rewrite them in the matrix form using the relation (S.43), we obtain
\begin{gather}
K(x,y)+F(x,y)+\int_{-\infty}^x\mathrm{d}z K(x,z)F(z,y)=0, \\
F(x,y):=\frac{m}{4\pi}\int_{-\infty+\mathrm{i}0}^{\infty+\mathrm{i}0}\mathrm{d}s r(s)f_0(x,s)f_0(y,s)^T\sigma_1+\sum_jC_j^2s_jf_0(x,s_j)f_0(y,s_j)^T\sigma_1
\end{gather}
with $ f_0(x,s):=\left(\begin{smallmatrix}s^{-1} \\ 1 \ \end{smallmatrix}\right)\mathrm{e}^{-\mathrm{i}k(s)x} $.
Following the result of the Riemann-Hilbert approach \cite{AblowitzSegur}, the kernel $ K(x,y) $ can be written as
\begin{align}
K(x,y)&=-\frac{m}{4\pi}\int_{-\infty+\mathrm{i}0}^{\infty+\mathrm{i}0}\mathrm{d}sr(s)\frac{f_-(x,s)}{s}f_0(y,s)^T\sigma_1-\sum_j C_j^2s_j\frac{f_-(x,s_j)}{s_j}f_0(y,s_j)^T\sigma_1. \label{eq:RHK01}
\end{align}
\indent By substituting Eq.~(\ref{eq:RHK01}) into the integral representation of $ f_-(x,s) $ [Eq.~(S.40)], and recalling the notations for scattering and bound states introduced in Eq.~(\ref{eq:reful401}), we obtain
\begin{align}
\begin{pmatrix} u(x,s) \\ v(x,s) \end{pmatrix}&=\left[\begin{pmatrix}1 \\ s^{-1} \end{pmatrix}+\int_{-\infty-\mathrm{i}0}^{\infty-\mathrm{i}0}\frac{\mathrm{d}\zeta}{2\pi\mathrm{i}} r(\zeta)^*\begin{pmatrix} u(x,\zeta) \\ v(x,\zeta) \end{pmatrix}\frac{\mathrm{e}^{\mathrm{i}k(\zeta)x}}{(1-\zeta s)}\right. \nonumber \\
&\qquad\qquad\qquad\qquad\qquad\quad \left. +\frac{2\mathrm{i}}{m}\sum_j\begin{pmatrix} f_j(x) \\ s_jf_j(x)^* \end{pmatrix}\frac{e_j(x)}{s_j-s}\right]\mathrm{e}^{\mathrm{i}k(s)x} \label{eq:RHscat001} \\
\intertext{and}
\begin{pmatrix} f_j(x) \\ s_jf_j(x)^* \end{pmatrix}&=-\left[ \begin{pmatrix} 1 \\ s_j \end{pmatrix}+\int_{-\infty-\mathrm{i}0}^{\infty-\mathrm{i}0}\frac{\mathrm{d}\zeta}{2\pi\mathrm{i}} r(\zeta)^*\begin{pmatrix} u(x,\zeta) \\ v(x,\zeta) \end{pmatrix}\frac{\mathrm{e}^{\mathrm{i}k(\zeta)x}}{(1-\zeta s_j^{-1})}\right. \nonumber \\
&\qquad\qquad\qquad\qquad\qquad \left. +\frac{2\mathrm{i}}{m}\sum_l\begin{pmatrix}f_l(x) \\ s_lf_l(x)^*\end{pmatrix}\frac{e_l(x)}{s_l-s_j^{-1}} \right]e_j(x). \label{eq:RHbound001}
\end{align}
Here, we have used the relation $ r(\zeta)^*=r(\zeta^{-1}) $ for real $ \zeta $. Note that Eqs.~(\ref{eq:RHscat001}) and (\ref{eq:RHbound001}) provide a closed set of equations for left Jost solutions expressed without using the kernel $ K(x,y) $.
Using the relation $ \Delta(x)=m+2\mathrm{i}K_{12}(x,x) $ (Prop. 2 of Supplemental Material of Ref.~\cite{TakahashiNitta}) and Eq.~(\ref{eq:RHK01}) with the change of variable $ s=\zeta^{-1} $, we obtain
\begin{align}
\Delta(x)=m+m\int_{-\infty-\mathrm{i}0}^{\infty-\mathrm{i}0}\frac{\mathrm{d}\zeta}{2\pi\mathrm{i}} r(\zeta)^*u(x,\zeta)\mathrm{e}^{\mathrm{i}k(\zeta)x}+2\mathrm{i}\sum_js_j^{-1}e_j(x)f_j(x). \label{eq:RHgap01}
\end{align}
When $ r(\zeta)\equiv 0 $, Eqs.~(\ref{eq:RHscat001}), (\ref{eq:RHbound001}) and (\ref{eq:RHgap01}) reduce to Eqs. (L.19), (L.13), and (L.14).
\subsection{Proof of the reflectionless nature of self-consistent multi-soliton solutions}
Using the first components of Eqs.~(\ref{eq:RHscat001}) and (\ref{eq:RHbound001}) and Eq.~(\ref{eq:RHgap01}), after a somewhat lengthy calculation, we obtain
\begin{align}
&\frac{m}{2}u(x,s)v(x,s)^*-\frac{\Delta(x)}{2s}=\frac{m}{2}u(x,s)\frac{u(x,s^{-1})}{s}-\frac{\Delta(x)}{2s} \nonumber \\
&=-2\sum_js_j^{-1}f_j(x)^2\frac{\sin\theta_j}{|s-s_j|^2}+\frac{m}{2}\int_{-\infty-\mathrm{i}0}^{\infty-\mathrm{i}0}\frac{\mathrm{d}\zeta}{2\pi\mathrm{i}}r(\zeta)^*u(x,\zeta)^2\frac{1-\zeta^2}{(s-\zeta)(1-s\zeta)} \label{eq:afterlong01}
\end{align}
for real $ s $. Here $ \theta_j $ is defined by $ s_j=\mathrm{e}^{\mathrm{i}\theta_j} $ (see Ref.~\cite{TakahashiNitta} for more details).
Using Eq.~(\ref{eq:afterlong01}) and the formulae
\begin{align}
\int\frac{\sin\theta_j\mathrm{d}s}{|s-s_j|^2}=\tan^{-1}\left[ \frac{s-\cos\theta_j}{\sin\theta_j} \right],\quad \int\frac{(1-\zeta^2)\mathrm{d}s}{(s-\zeta)(1-\zeta s)}=\log\frac{s-\zeta}{1-\zeta s},
\end{align}
we obtain
\begin{align}
&(\text{R.H.S. of Eq.~(\ref{eq:gapeqforLR})})\nonumber \\
&=\sum_js_j^{-1}f_j(x)^2\left[ \nu_j-\frac{2\theta_j-\pi}{\pi} \right]+\frac{m}{2}\int_{-\infty}^\infty\frac{\mathrm{d}s}{2\pi}r(s)^*u(x,s)^2\left[ \frac{\log s^2}{\mathrm{i}\pi}-\operatorname{sgn}s \right].
\end{align}
Because of linear independence of $ f_j(x) $'s and $ u(x,s) $'s, we finally obtain
\begin{align}
\nu_j=\frac{2\theta_j-\pi}{\pi},\quad r(s)= 0,
\end{align}
for $ j=1,\dots, n $ and any real $ s $, and we thus conclude the reflectionless property $ r(s)=0 $. On the other hand, for the system with only right-movers, the gap equation [Eq.~(\ref{eq:gapeqforR})] becomes
\begin{align}
&(\text{R.H.S. of Eq.~(\ref{eq:gapeqforR})})\nonumber \\
&=\sum_js_j^{-1}f_j(x)^2\left[ \nu_j-\frac{\theta_j}{\pi} \right]+\frac{m}{2}\int_{-\infty}^\infty\frac{\mathrm{d}s}{2\pi}r(s)^*u(x,s)^2\left[ \frac{\log s^2}{2\mathrm{i}\pi}+H(-s) \right],
\end{align}
where $ H(s) $ is the Heaviside step function. Thus the self-consistent condition is given by
\begin{align}
\nu_j=\frac{\theta_j}{\pi},\quad r(s)=0,
\end{align}
for $ j=1,\dots, n $ and any real $ s $. The result $ \nu_j=\theta_j/\pi $ is consistent with the recent work~\cite{DunneThies}.
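As a sanity check, the two antiderivative formulae used in this section can be verified numerically by central finite differences; the sketch below (parameter values are illustrative) relies only on the integrands as written.

```python
import cmath
import math

def F1(s, theta_j):
    # antiderivative of sin(theta_j)/|s - s_j|^2 with s_j = exp(i*theta_j), s real
    return math.atan((s - math.cos(theta_j)) / math.sin(theta_j))

def g1(s, theta_j):
    # integrand: |s - s_j|^2 = s^2 - 2 s cos(theta_j) + 1 for real s
    return math.sin(theta_j) / (s * s - 2.0 * s * math.cos(theta_j) + 1.0)

def F2(s, zeta):
    # antiderivative of (1 - zeta^2)/((s - zeta)(1 - zeta*s))
    return cmath.log((s - zeta) / (1.0 - zeta * s))

def g2(s, zeta):
    return (1.0 - zeta ** 2) / ((s - zeta) * (1.0 - zeta * s))

def num_deriv(F, s, h=1e-6, **params):
    # central finite-difference approximation of dF/ds
    return (F(s + h, **params) - F(s - h, **params)) / (2.0 * h)
```

For generic real arguments away from the singularities, both derivatives agree with the integrands to high accuracy.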
\section{Summary}
In this paper, we have proved that the self-consistent soliton solutions must have reflectionless potentials. Combining this paper with our previous work~\cite{TakahashiNitta}, we provide a fully self-contained derivation of static self-consistent solutions under the uniform boundary condition. One future problem is to generalize these solutions to a modulated background, using the solution recently obtained in Ref.~\cite{TakahashiarXiv}.
\begin{acknowledgements}
The work of M.~N. is supported in part by KAKENHI (No. 25400268 and 25103720).
\end{acknowledgements}
\section{Introduction}
\subsection{Motivation: Arnold diffusion and instabilities}
By the Arnold--Liouville theorem a completely integrable Hamiltonian system
can be written in action-angle coordinates: for the action
$p$ in an open set $U\subset \mathbb{R}^n$ and the angle $\theta$ on
the $n$-dimensional torus $\mathbb{T}^n$ there is a function $H_0(p)$
such that the equations of motion have the form
\[
\dot \theta=\omega(p),\quad \dot p=0, \qquad \text{ where }\ \omega(p):=\partial_p H_0(p).
\]
The phase space is foliated by invariant $n$-dimensional tori $\{p=p_0\}$
with either periodic or quasi-periodic motions
$\theta(t)=\theta_0+t\,\omega (p_0)$ (mod 1). There are many different
examples of integrable systems (see e.g. Wikipedia).
It is natural to consider small Hamiltonian perturbations
\[
H_\varepsilon (\theta,p)=H_0(p)+\varepsilon H_1(\theta,p),\qquad \theta\in\mathbb{T}^n,\ p\in U
\]
where $\varepsilon$ is small. The new equations of motion become
\[
\dot \theta=\omega(p)+\varepsilon \partial_pH_1,\quad \dot p=-\varepsilon \partial_\theta H_1. \qquad \qquad
\]
In the sixties, Arnold \cite{Arn64} (see also \cite{Arn89, Arn94})
conjectured that {\it for a generic analytic perturbation there
are orbits $(\theta,p)(t)$ for which the variation of the actions
is of order one, i.e. $\|p(t)-p(0)\|$ is bounded from
below uniformly in $\varepsilon$ for all $\varepsilon$ sufficiently small.}
See \cite{BKZ,Ch,KZ12,KZ14a, Ma2, Ma3} about recent
progress proving this conjecture for convex Hamiltonians.
\subsection{KAM stability}
Obstructions to any form of instability, in general, and
to Arnold diffusion, in particular, are widely known,
following the works of Kolmogorov, Arnold, and Moser, nowadays called KAM
theory. The fundamental result
says that for a properly non-degenerate $H_0$ and for all
sufficiently regular perturbations $\varepsilon H_1$, the system
defined by $H_\varepsilon$ still has many invariant $n$-dimensional
tori. These tori are small deformations
of the unperturbed tori, and the measure of the union of these invariant
tori tends to full measure as $\varepsilon$ goes to zero.
One consequence of KAM theory is that for $n=2$ there are
no instabilities. Indeed, generic energy surfaces $S_E=\{H_\varepsilon =E\}$ are
$3$-dimensional manifolds whereas KAM tori are $2$-dimensional.
Thus, KAM tori separate surfaces $S_E$ and prevent orbits from diffusing.
\subsection{A priori unstable systems}
In \cite{Arn64} Arnold proposed to study
the following important example
\[
\begin{aligned}
H_\varepsilon (p,q,I,\varphi,t)=\dfrac{I^2}{2}+H_0(p,q)
+\varepsilon H_1(p,q,I,\varphi,t):=
\qquad \qquad \qquad \qquad
\\
=\underbrace{\dfrac{I^2}{2}}_{rotor}+
\underbrace{\dfrac{p^2}{2}+(\cos q-1)}_{pendulum}+
\varepsilon H_1 (p,q,I,\varphi,t),
\end{aligned}
\]
where $q,\varphi,t\in \mathbb{T}$ are angles, $p,I\in \mathbb{R}$ are actions
(see Figure \ref{fig:rotor-pendulum}), and $H_1=(\cos q-1)(\cos \varphi+\cos t)$.
\begin{figure}[h]
\begin{center}
\includegraphics[width=10cm]{PhasePortraitRot-Pend.pdf}
\end{center}
\caption{The rotor times the pendulum }
\label{fig:rotor-pendulum}
\end{figure}
For $\varepsilon=0$ the system is a direct product of the rotor
$\ddot \varphi=0$ and the pendulum $\ddot q=\sin q$.
Instabilities occur when the $(p,q)$-component follows
the separatrices $H_0(p,q)=0$ and passes near the saddle
$(p,q)=(0,0)$. Equations of motion for $H_\varepsilon$ have
a (normally hyperbolic) invariant cylinder $\Lambda_\varepsilon$
which is $\mathcal{C}^1$ close to $\Lambda_0=\{p=q=0\}$.
Systems having an invariant cylinder with a family of
separatrix loops are called {\it a priori unstable}.
Since they were introduced by Arnold \cite{Arn64}, they
received a lot of attention both in mathematics and physics
community see e.g. \cite{Be,CY,Ch,CV,DLS,GL,T2,T3}.
Chirikov \cite{Ch2} and his followers made extensive numerical
studies of the Arnold example. He conjectured that
{\it the $I$-displacement behaves randomly,
where the randomness is due to the choice of initial conditions
near $H_0(p,q)=0$}.
More exactly, one integrates solutions whose initial conditions
are chosen randomly $\varepsilon$-close to $H_0(p,q)=0$
over a time $\sim \varepsilon^{-2}\ln \varepsilon^{-1}$. This leads to
the $I$-displacement being of order one and having some
distribution. This coined the name for this phenomenon:
{\it Arnold diffusion}.
Let $\varepsilon=0.01$ and $T=\varepsilon^{-2}\ln \varepsilon^{-1}$.
In Figure \ref{fig:histograms} we present several
histograms plotting the displacement of the $I$-component
after times $T$, $2T$, $4T$, $8T$ for 6 different groups of initial
conditions, each histogram being built from $10^6$ points. In each
group we start with a large set of initial conditions close
to $p=q=0,\ I=I^*$.\footnote{These histograms are part of
the forthcoming paper of the third author with P. Roldan
containing an extensive numerical analysis of the dynamics of
Arnold's example.} One distinct feature is that only the distribution
(a) is close to symmetric, while all the others exhibit a drift.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=11.55cm]{some-histograms.pdf}
\end{center}
\label{fig:histograms}
\end{figure}
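Such histograms can be reproduced, at least qualitatively, by direct numerical integration of the Arnold example; the sketch below is a minimal symplectic-Euler integrator, where the step size, integration time, and the way initial conditions are sampled near the separatrix are illustrative choices, not those used for the figure.

```python
import math
import random

def arnold_step(q, p, phi, I, t, eps, dt):
    """One symplectic-Euler step for
    H_eps = I^2/2 + p^2/2 + (cos q - 1) + eps*(cos q - 1)*(cos phi + cos t):
    kick the momenta using the old angles, then drift the angles."""
    p += dt * math.sin(q) * (1.0 + eps * (math.cos(phi) + math.cos(t)))
    I += dt * eps * (math.cos(q) - 1.0) * math.sin(phi)
    q += dt * p
    phi += dt * I
    return q, p, phi, I

def I_displacement(eps, n_steps, dt, rng):
    # sample an initial condition near the saddle, eps-close to the
    # pendulum separatrix H0(p,q) = p^2/2 + cos q - 1 = 0
    q = rng.uniform(0.05, 0.15)
    p = math.sqrt(max(2.0 * (1.0 - math.cos(q)) + 2.0 * rng.uniform(-eps, eps), 0.0))
    phi, I, t = rng.uniform(0.0, 2.0 * math.pi), 1.0, 0.0
    I0 = I
    for _ in range(n_steps):
        q, p, phi, I = arnold_step(q, p, phi, I, t, eps, dt)
        t += dt
    return I - I0
```

Collecting `I_displacement` over many random initial conditions, for times of order $\varepsilon^{-2}\ln\varepsilon^{-1}$, produces histograms of the kind shown above; for $\varepsilon=0$ the action $I$ is exactly conserved.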
A similar stochastic behaviour was observed numerically in
many other nearly integrable problems (\cite{Ch2} pg. 370,
\cite{DL, La}, see also \cite{SLSZ}). To give another illustrative
example, consider the motion of asteroids in the asteroid belt.
\subsection{Fluctuations of eccentricity in
Kirkwood gaps in the asteroid belt}
The asteroid belt is located between the orbits of Mars and
Jupiter and contains around one million asteroids with a diameter
of at least one kilometer. When astronomers build
a histogram of asteroids according to their orbital period, there are
well-known gaps in the distribution, called {\it Kirkwood gaps}
(see the Figure below).
\begin{figure}[h!]
\begin{center}
\includegraphics[width=8cm]{Kirkwood-gaps.jpg}
\end{center}
\label{fig:Kirkwood-gaps}
\end{figure}
These gaps occur when the ratio of the periods of an asteroid and of
Jupiter is a rational number with small denominator: $1/3$, $2/5$, $3/7$, $1/2$.
This corresponds to the so-called {\it mean motion resonances
of the three body problem}.
{Wisdom \cite{Wi} made a numerical analysis of the dynamics at
the $1/3$ resonance and observed drastic jumps of the eccentricity
of asteroids, large enough that the orbit of an asteroid
starts crossing the orbit of Mars. Once the orbits do cross, the asteroids
eventually undergo ejection, collision, or capture.
Later it was shown that this mechanism of jumps also applies to
the $2/5$ resonance. However, the resonances $3/7$ and $1/2$
exhibit a different nature of instability (see e.g. \cite{Moo}). }
{In \cite{FGKR} for small (unrealistic) eccentricity of
Jupiter, we construct
a dynamical structure along the $1/3$ resonance which hypothetically
leads to random
fluctuations of the eccentricity. Using this structure we prove
the existence of orbits whose eccentricity changes by $\mathcal{O}(1)$ for
the restricted planar three body problem.}
Outside of these resonances one could argue that KAM
theory provides stability (see e.g. \cite{Mo}).
\subsection{Random iteration of cylinder maps}
Consider the time one map of $H_\varepsilon$, denoted
$$
F_\varepsilon:(p,q,I,\varphi)\to (p',q',I',\varphi').
$$
It turns out that for initial conditions in certain domains $\varepsilon$-close
to $H_0(p,q)=0$, one can define
a return map to an $\mathcal{O}(\varepsilon)$-neighborhood of $(p,q)=0$.
Often such a map is called {\it a separatrix map}; in
the $2$-dimensional case it was introduced by the physicists
Filonenko and Zaslavskii \cite{FZ}.
In multidimensional setting such a map was defined and
studied by Treschev \cite{PT,T1,T2,T3}.
It turns out that starting near $(p,q)=0$ and iterating $F_\varepsilon$ until
the orbit comes back to $(p,q)=0$ leads to a family of maps
of a cylinder
$$
f_{\varepsilon,p,q}:(I,\varphi) \to (I',\varphi'), \qquad
(I,\varphi)\in \mathbb{A}=\mathbb{R}\times \mathbb{T}
$$
which are close to integrable. Since the $(p,q)$-component
has a saddle at $(p,q)=0$, there is sensitive
dependence on the initial condition in $(p,q)$, and the returns do
have some randomness in $(p,q)$. The precise nature
of this randomness is not clear at the moment. There are
several coexisting behaviours: unstable diffusive,
stable quasi-periodic, and orbits sticking to KAM tori. Which
behaviour is dominant is yet to be understood. Perhaps the
mechanism of capture into resonances \cite{Do}
is also relevant in this setting.
In \cite{KZZ} we construct a normally hyperbolic invariant
lamination (NHIL) for an open class of trigonometric
perturbations
$H_1=P(\exp(i\varphi),\,\exp(i t),\,\exp(iq)).$
Constructing
unstable orbits along a NHIL is also discussed in
\cite{dlL}. In general, NHILs give rise to a skew shift.
For example, let $\Sigma=\{-1,1\}^\mathbb{Z}$ be the space of
infinite sequences of $-1$'s and $1$'s and
$\sigma:\Sigma \to \Sigma$ be the standard shift.
\vskip 0.1in
{\it Consider a skew product of cylinder maps
$$F:\mathbb A \times \Sigma \to \mathbb A \times \Sigma,
\qquad F(r,\theta;\omega)=(f_\omega(r,\theta),\sigma \omega),
$$
where each $f_\omega(r,\theta)$ is a nearly integrable cylinder
map, in the sense that it almost preserves the $r$-component
\footnote{The reason we switch
from the $(I,\varphi)$-coordinates on the cylinder to
$(r,\theta)$ is that we perform a coordinate change.}.}
\vskip 0.1in
The goal of the present paper is to study a class of skew
products wide enough to include those arising in Arnold's example
with a trigonometric perturbation of the above type
(see \cite{GKZ,KZZ}).
Now we formalize our model and present the main result.
\subsection{Diffusion processes and infinitesimal generators}
\label{sec:diffusion-generators}
We recall some basic probabilistic notions.
Consider a Brownian motion
$\{B_t,\, t\ge 0\}$.
It is a properly chosen limit of the standard
random walk. A generalisation of a Brownian motion is
{\it a diffusion process} or {\it an Ito diffusion}. To define it
let $(\Omega,\mathcal F,P)$ be a probability space.
A process $R:[0,+\infty) \times \Omega \to \mathbb{R}$ is called an Ito diffusion
if it satisfies {\it a stochastic differential equation} of the form
\begin{equation}\label{eq:diffusion}
\mathrm{d} R_{t} = b(R_{t}) \, \mathrm{d} t +
\sigma (R_{t}) \, \mathrm{d} B_{t},
\end{equation}
where $B_t$ is a Brownian motion and $b : \mathbb{R} \to \mathbb{R}$ and
$\sigma : \mathbb{R} \to \mathbb{R}$ are Lipschitz functions called
the drift and the variance respectively. For a point
$r \in \mathbb{R}$, let $\mathbb{P}_r$ denote the law of $R$
given initial data $R_0 = r$, and let $\mathbb{E}_r$ denote
expectation with respect to $\mathbb{P}_r$.
The {\it infinitesimal generator} of $R$ is the operator $A$,
which is defined to act on suitable functions $f :\mathbb{R}\to \mathbb{R}$ by
\[
A f (r) = \lim_{t \downarrow 0} \dfrac{\mathbb{E}_{r} [f(R_{t})] - f(r)}{t}.
\]
The set of all functions $f$ for which this limit exists at
a point $r$ is denoted $D_A(r)$, while $D_A$ denotes
the set of all $f$'s for which
the limit exists for all $r\in \mathbb{R}$. One can show that any
compactly-supported $\mathcal{C}^2$ function $f$
lies in $D_A$ and that
\begin{eqnarray} \label{eq:diffusion-generator}
Af(r)=b(r) \dfrac{\partial f}{\partial r}+ \dfrac 12 \sigma^2(r)
\dfrac{\partial^2 f}{\partial r^2}.
\end{eqnarray}
The distribution of a diffusion process is characterized by
the drift $b(r)$ and the variance $\sigma(r)$.
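Concretely, a diffusion of the form \eqref{eq:diffusion} can be simulated by the Euler--Maruyama scheme, and the generator \eqref{eq:diffusion-generator} checked against Monte Carlo averages; the constant coefficients in the usage below are an illustrative choice, not tied to the model of this paper.

```python
import math
import random

def euler_maruyama(r0, b, sigma, t, n_steps, rng):
    """Simulate dR = b(R) dt + sigma(R) dB up to time t."""
    dt = t / n_steps
    r = r0
    for _ in range(n_steps):
        r += b(r) * dt + sigma(r) * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return r

def generator_estimate(f, r0, b, sigma, t, n_paths, rng):
    """Monte Carlo estimate of (E_r[f(R_t)] - f(r)) / t, which for small t
    approximates A f(r) = b f' + (1/2) sigma^2 f''."""
    avg = sum(f(euler_maruyama(r0, b, sigma, t, 50, rng))
              for _ in range(n_paths)) / n_paths
    return (avg - f(r0)) / t
```

For constant $b$ and $\sigma$ and $f(r)=r^2$ one has $Af(r)=2br+\sigma^2$, which the estimate reproduces up to an $O(t)$ bias and Monte Carlo noise.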
\section{The model and statement of the main result}
Let $\varepsilon>0$ be a small parameter and $l\ge 7$,
$s\geq 0$ be integers. Denote by $\mathcal O_s(\varepsilon)$
a $\mathcal C^s$ function whose $\mathcal C^s$ norm is
bounded by $C\varepsilon$ with $C$ independent of $\varepsilon$.
A similar definition applies to any power of $\varepsilon$. As before
$\Sigma$ denotes $\{-1,1\}^\mathbb{Z}$ and
$\omega=(\dots,\omega_0,\dots)\in \Sigma$.
Consider nearly integrable maps
\begin{eqnarray} \label{mapthetar}
f_\omega:\mathbb{T}
\times \mathbb{R}
& \longrightarrow &
\mathbb{T} \times \mathbb{R} \qquad \qquad
\qquad \qquad \qquad \qquad
\nonumber\\
f_\omega:
\left(\begin{array}{c}\theta\\r\end{array}\right) &
\longmapsto &
\left(\begin{array}{c}\theta+r+\varepsilon u_{\omega_0}(\theta,r)+
\mathcal O_s(\varepsilon^{1+a},\omega)
\\
r+\varepsilon v_{\omega_0}(\theta,r)+
\varepsilon^2 w_{\omega_0}(\theta,r)+\mathcal O_s(\varepsilon^{2+a},\omega)
\end{array}\right),
\end{eqnarray}
for $\omega_0\in \{-1,1\}$, where $u_{\omega_0},\ v_{\omega_0},$
and $w_{\omega_0}$ are bounded $\mathcal{C}^l$ functions,
$1$-periodic in $\theta$, $\mathcal O_s(\varepsilon^{1+a},\omega)$
and $\mathcal O_s(\varepsilon^{2+a},\omega)$ denote remainders
depending on $\omega$ and uniformly $\mathcal C^s$ bounded
in $\omega$, and $a> 1/2$. Assume
\[
\max |v_i(\theta,r)|\le 1,
\]
where the maximum is taken over $i=\pm 1$ and all
$(\theta,r)\in \mathbb{A}$ (otherwise, renormalize $\varepsilon$), and
\[
\|u_i\|_{\mathcal{C}^6}, \|v_i\|_{\mathcal{C}^6}, \|w_i\|_{\mathcal{C}^6}\leq C
\]
for some $C>0$ independent of $\varepsilon$.
Even if the maps $f_\omega$ depend on the full sequence $\omega$, the dependence
on the elements $\omega_k$, $k\neq 0$, is rather weak, since they only appear in
the small remainders. Therefore, we abuse notation and denote these maps
by $f_1$ and $f_{-1}$. Strictly speaking we do not have two but an infinite number of
maps. Nevertheless, they can be treated as just two maps since the remainders
are negligible.
We study the random iterations of these maps $f_1$ and $f_{-1}$, assuming that
at each step the probability of performing either
map is $1/2$. The importance of understanding iterations of
several maps for problems of diffusion is well known
(see e.g. \cite{K,Mo}).
Denote the expected potential and
the difference of potentials by
\[
\begin{split}
\mathbb{E}u(\theta,r)&:=
\frac 12 (u_1(\theta,r)+u_{-1}(\theta,r)),\ \ \ \mathbb{E}v(\theta,r):=\frac 12
(v_1(\theta,r)+v_{-1}(\theta,r)),\\
u(\theta,r)&:=\frac 12 (u_1(\theta,r)-u_{-1}(\theta,r)),\ \ \
v(\theta,r):=\frac 12 (v_1(\theta,r)-v_{-1}(\theta,r)).
\end{split}
\]
Suppose the following assumptions hold:
\begin{itemize}
\item[{\bf [H0]}] ({\it zero average})
{For} each $r\in \mathbb{R}$ and $i=\pm 1$ we have
$\int v_i(\theta,r)\,d\theta=0$.
\item[{\bf [H1]}]
For each $r\in \mathbb{R}$ we have $\int_0^1 v^2(\theta,r)\,d\theta=:\sigma^2(r) \neq0$;
\item[{\bf [H2]}] The functions $v_i(\theta,r)$ are trigonometric polynomials
in $\theta$, i.e. for some positive integer $d$ we have
\[
v_i(\theta,r)=\sum_{k\in \mathbb{Z},\ 0<|k|\le d}
v_{{i}}^{(k)}(r) e^{2\pi ik\theta}.
\]
\item[{\bf [H3]}] ({\it no common zeroes})
For each integer $n\in \mathbb{Z}$ the potentials $v_{1}(\theta,n)$ and
$v_{-1}(\theta,n)$ have no common zeroes or, equivalently,
$f_1$ and $f_{-1}$ have no common fixed points.
\item[{\bf [H4]}] ({\it no common periodic orbits})
For any rational $r=p/q\in\mathbb Q$
with $p,q$ relatively prime, $1\le |q|\le 2d$, and any $\theta\in \mathbb{T}$,
either
\[
\sum_{k=1}^q v_{-1}\left(\theta+\frac kq,\frac{p}{q}\right) \ne
0
\]
or
\[
\sum_{k=1}^q\left[v_{-1}\left(\theta+\frac kq,\frac{p}{q}\right)-
v_1\left(\theta+\frac kq,\frac{p}{q}\right)\right]^2\ne
0.
\]
This prohibits $f_1$ and $f_{-1}$ from having common periodic
orbits of period $|q|$.
\item[{\bf [H5]}] ({\it no degenerate periodic points})
Suppose for any rational $r=p/q\in\mathbb Q$
with $p,q$ relatively prime, $1\le |q|\le 2d$, the function:
$$\mathbb{E}v_{p,q}(\theta,r)=\sum_{\substack{k\in
\mathbb{Z}\\0<|kq|<d}}\mathbb{E}v^{kq}(r)e^{2\pi ikq\theta}$$
has distinct non-degenerate zeroes, where $\mathbb{E}v^{j}(r)$
denotes the $j$--th Fourier coefficient of $\mathbb{E}v(\theta,r)$.
\end{itemize}
For $\omega\in\{-1,1\}^\mathbb{Z}$ we
can rewrite the maps $f_{\omega}$ in the following form:
\begin{equation*}
f_{\omega}
\left(\begin{array}{c}\theta\\r\end{array}\right)\longmapsto
\left(\begin{array}{c}\theta+r+\varepsilon \mathbb{E}u(\theta,r)+
\varepsilon\omega_0 u(\theta,r)+\mathcal O_s(\varepsilon^{1+a},\omega)
\\
r+\varepsilon \mathbb{E}v(\theta,r)
+\varepsilon\omega_0 v(\theta,r)+\varepsilon^2 w_{\omega_0}(\theta,r)
+\mathcal O_s(\varepsilon^{2+a},\omega)
\end{array}\right).
\end{equation*}
Let $n$ be a positive integer and $\omega_k\in\{-1,1\}$, $k=0,\dots,n-1$, be
independent random variables with $\mathbb{P}\{\omega_k=\pm1\}=1/2$ and
$\Omega_n=\{\omega_0,\dots,\omega_{n-1}\}$.
Given an initial condition $(\theta_0,r_0)$ we denote
\[
(\theta_n,r_n):=f^n_{\Omega_n}(\theta_0,r_0)=
f_{\omega_{n-1}}\circ f_{\omega_{n-2}}\circ \cdots
\circ f_{\omega_0}(\theta_0,r_0).
\]
A straightforward calculation shows that:
\begin{equation}\label{mapthetanrn}
\begin{array}{rcl}
\theta_n&=&\displaystyle\theta_0+nr_0+\varepsilon\left(
\sum_{k=0}^{n-1}
\mathbb{E}u(\theta_k,r_k)+ \sum_{k=0}^{n-2}(n-k-1) \mathbb{E}v(\theta_k,r_k)\right)
\\
&&\displaystyle+\varepsilon\left(
\sum_{k=0}^{n-1} \omega_k
u(\theta_k,r_k)+\sum_{k=0}^{n-2}(n-k-1) \omega_k v(\theta_k,r_k)\right)
+\mathcal O_s(n\varepsilon^{1+a})
\bigskip\\
r_n&=&\displaystyle r_0+\varepsilon\sum_{k=0}^{n-1}
\mathbb{E}v(\theta_k,r_k)+\varepsilon
\sum_{k=0}^{n-1}\omega_kv(\theta_k,r_k)
+\mathcal O_s(n\varepsilon^{2+a})
\end{array}
\end{equation}
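When the $\mathcal O_s$ remainders and the $\varepsilon^2 w_{\omega_0}$ terms are dropped, \eqref{mapthetanrn} becomes an exact (telescoping) identity: since $\mathbb{E}u+\omega_k u=u_{\omega_k}$ and $\mathbb{E}v+\omega_k v=v_{\omega_k}$, the two bracketed sums combine. This can be checked numerically; the particular potentials below, with $u_{\pm1}=v_{\pm1}$, are an illustrative choice only.

```python
import math
import random

U = {1: lambda t: math.cos(2 * math.pi * t),
     -1: lambda t: math.sin(2 * math.pi * t)}
V = U  # take u_w = v_w for simplicity

def check_expansion(theta0, r0, eps, omegas):
    """Iterate theta' = theta + r + eps*u_w(theta), r' = r + eps*v_w(theta)
    and compare with the closed-form sums of the expansion."""
    theta, r, thetas = theta0, r0, []
    for w in omegas:
        thetas.append(theta)
        theta, r = theta + r + eps * U[w](theta), r + eps * V[w](theta)
    n = len(omegas)
    r_pred = r0 + eps * sum(V[w](t) for w, t in zip(omegas, thetas))
    th_pred = (theta0 + n * r0
               + eps * sum(U[w](t) for w, t in zip(omegas, thetas))
               + eps * sum((n - 1 - k) * V[omegas[k]](thetas[k])
                           for k in range(n - 1)))
    return abs(r - r_pred), abs(theta - th_pred)
```

Both discrepancies vanish up to floating-point roundoff, confirming the coefficients $(n-k-1)$ in the expansion.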
\begin{thm}\label{maintheorem}
Assume that, in the notations above, conditions {\bf [H0-H5]}
hold and take $r_0\in\mathbb{R}$. Let $n_\varepsilon \varepsilon^2 \to s$ as $\varepsilon\to 0$ for some
$s>0$. Then as $\varepsilon \to 0$ the distribution of $ r_{n_\varepsilon}-r_0$
converges weakly to $R_s$, where $R_\bullet$ is
a diffusion process of the form \eqref{eq:diffusion},
with the drift and the variance
\begin{eqnarray} \label{eq:drift-variance}
b(R )=\int_0^1E_2(\theta,R)\,d\theta,
\qquad \sigma^2(R)=\int_0^1v^2(\theta,R)\,d\theta.
\end{eqnarray}
for a certain function $E_2$, defined in \eqref{eq:drift}.
\end{thm}
\paragraph{Remarks}
\begin{itemize}
\item If the map is area preserving and exact, one can check that
\[
b(R)=0
\]
(see Corollary \ref{corZeroDrift}).
\item In the case that $u_{\pm1}=v_{\pm 1}$ and that they are
independent of $r$, we have two area-preserving standard
maps. In this case the assumptions become
\begin{itemize}
\item{\bf [H0]} $\int v_i(\theta)d\theta=0$ for $i=\pm 1$;
\item{\bf [H1]} $v$ is not identically zero;
\item{\bf [H2]} the functions $v_i$ are trigonometric
polynomials.
\end{itemize}
A good example is $u_1(\theta)=v_1(\theta)=\cos 2\pi\theta$ and
$u_{-1}(\theta)=v_{-1}(\theta)=\sin 2\pi\theta$. In this case
\[
b(r):=\int_0^1E_2(\theta,r)d\theta\equiv 0,\qquad
\sigma^2=\int_0^1 v^2(\theta)\,d\theta=\frac{1}{4}
\]
and for $n\le \varepsilon^{-2}$
the distribution of $r_n-r_0$ converges to the normal distribution
with zero mean and variance $\varepsilon^2 n \sigma^2$,
denoted $\mathcal N(0,\varepsilon^2 n \sigma^2)$. More generally,
we have the following ``vertical central limit theorem'':
\begin{thm}\label{submaintheorem}
Assume that, in the notations above, conditions {\bf [H0-H5]}
hold. Let $n_\varepsilon \varepsilon^2 \to s$ as $\varepsilon\to 0$ for some
$s>0$. Then as $\varepsilon \to 0$ the distribution of $ r_{n_\varepsilon}-r_0$
converges weakly to a normal random variable
$\mathcal{N}(0,s\,\sigma^2).$
\end{thm}
\item Numerical experiments of Moeckel \cite{Moe1} show
that the absence of common fixed points and periodic orbits (see Hypotheses {\bf [H3]}
and {\bf [H4]}) is not necessary to deal with the resonant zones. One
could probably replace it by a weaker non-degeneracy condition, e.g. that
the linearizations of the maps $f_{\pm 1}$ at the common fixed and periodic
points are different.
\item In \cite{Sa} Sauzin studies random iterations of the standard
maps
$$(\theta,r)\to (\theta+r+\lambda \phi(\theta),r+\lambda \phi(\theta)),
$$
where $\lambda$ is chosen randomly from $\{-1,0,1\}$, and proves
the vertical central limit theorem.
In \cite{MS,Sa2} Marco--Sauzin present examples of nearly
integrable systems having a set of initial conditions
exhibiting the vertical central limit theorem.
\item In \cite{Ma} Marco derives a sufficient condition for a
skew-shift to be a step skew-shift.
\item The condition [H2] that the functions $v_i$ are
trigonometric polynomials in $\theta$ also seems redundant;
however, removing it leads to considerable technical
difficulties (see Section \ref{sec:different-strips}). In short, for
perturbations
by a trigonometric polynomial there are finitely many resonant
zones. This finiteness considerably simplifies the analysis.
\item One can replace $\Sigma=\{-1,1\}^\mathbb{Z}$ with
$\Sigma_N=\{0,1,\dots,N-1\}^\mathbb{Z}$, consider any finite number of maps of the form
(\ref{mapthetar}) and a transitive Markov chain with some transition probabilities.
If conditions {\bf [H0--H5]} are satisfied for the proper averages
$\mathbb{E}v$ of $v$, then Theorem \ref{maintheorem} holds.
\end{itemize}
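For the area-preserving example in the remarks above ($u_{\pm1}=v_{\pm1}$ with $v_1=\cos2\pi\theta$, $v_{-1}=\sin2\pi\theta$, so that $\sigma^2=1/4$), the diffusive behaviour of $r_n-r_0$ can be probed numerically. The sketch below compares the empirical variance with the diffusive scale $\varepsilon^2 n\sigma^2$; the sample sizes are illustrative, and the near-zero empirical drift is consistent with Corollary \ref{corZeroDrift} for exact area-preserving maps.

```python
import math
import random

def step(theta, r, w, eps):
    # theta' = theta + r + eps*v_w(theta), r' = r + eps*v_w(theta),
    # so theta' = theta + r' (a randomly kicked standard map)
    v = math.cos(2 * math.pi * theta) if w == 1 else math.sin(2 * math.pi * theta)
    r += eps * v
    return (theta + r) % 1.0, r

def empirical_stats(eps, n, trials, rng):
    r0 = 0.5 * (math.sqrt(5.0) - 1.0)   # irrational initial action
    disps = []
    for _ in range(trials):
        theta, r = rng.random(), r0
        for _ in range(n):
            theta, r = step(theta, r, rng.choice((-1, 1)), eps)
        disps.append(r - r0)
    mean = sum(disps) / trials
    var = sum((d - mean) ** 2 for d in disps) / trials
    return mean, var
```

With $\varepsilon=0.05$ and $n=400$ the diffusive prediction is $\varepsilon^2 n\sigma^2=0.25$; the measured variance is of this order, although resonance capture makes the agreement only approximate at these parameter values.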
\section{Strategy of the proof}
The random map \eqref{mapthetanrn} has two significantly different regimes:
resonant and non-resonant. In this paper we analyze
\eqref{mapthetanrn}
\emph{away from resonances}. The resonance setting is analyzed in
\cite{CGK}. The main result of \cite{CGK} is presented in Section
\ref{sec:ResonantRegime}.
We proceed to define the two regimes. Let
\begin{equation}\label{def:FourierSupp}
\mathcal N=\{k\in\mathbb{Z}: (\mathbb{E} u^k,\mathbb{E} v^k)\neq 0\}.
\end{equation}
Fix $\beta>0$.
Then, the $\beta$-non-resonant domain is defined as
\begin{equation}\label{eq:non-res-domain}
\mathcal{D}_{\beta}=\left\{r\in \mathbb{R}: \forall q\in \mathcal N, \
p\in\mathbb{Z}\
\text{we have }\ \left|r-\frac{p}{q}\right|\ge 2\beta \right\}.
\end{equation}
Notice that, by Hypothesis \textbf{H2}, $\mathcal{D}_{\beta}$ contains the subset
of $\mathbb{R}$ which excludes the $2\beta$-neighborhoods of all rational numbers
$p/q$ with $0<|q|\le 2d$. Analogously, we can define the resonant domains
associated to a rational $p/q$ with $q\in \mathcal N$ as
\begin{equation}\label{eq:res-domain}
\mathcal{R}^{p/q}_{\beta}=\left\{r\in \mathbb{R}: \ \left|r-\frac{p}{q}\right|\le 2\beta \right\}.
\end{equation}
\subsection{Strip decomposition}
Fix $\gamma\in (0,1)$. We divide the non-resonant zone of the cylinder, namely
$\mathbb{T}\times \mathcal{D}_\beta$ (see \eqref{eq:non-res-domain}), in strips
$\mathbb{T}\times I^j_\gamma$, where $I^j_\gamma\subset\mathcal{D}_\beta,\ j\in \mathbb{Z}$,
are intervals of length $\varepsilon^\gamma$. Then we study how the random
variable $r_n-r_0$ behaves in each strip. More precisely, decompose
the process $r_n(\omega), n\in\mathbb{Z}_+$ into infinitely many time intervals
defined by stopping times
\begin{eqnarray} \label{eq:stopping-time}
0<n_1<n_2<\dots,
\end{eqnarray}
where
\begin{itemize}
\item $r_{n_i}(\omega)$ is $\varepsilon$-close to
the boundary between $I^j_\gamma$ and $I^{j+1}_\gamma$ for
some $j\in \mathbb{Z}$
\item $r_{n_{i+1}}(\omega)$ is $\varepsilon$-close to the other
boundary of either $I^j_\gamma$ or of $I^{j+1}_\gamma$ and
$n_{i+1}>n_i$ is the smallest integer with this property.
\end{itemize}
Since $\varepsilon\ll \varepsilon^\gamma$, being $\varepsilon$-close to the boundary
of $I^j_\gamma$ means, up to a negligible error, jumping from $I^j_\gamma$
to the neighbouring interval $I^{j\pm 1}_\gamma$. In what follows, for brevity,
we drop the dependence of the $r_n(\omega)$'s on $\omega$. For reasons which will be clear
in
in
Sections \ref{sec:TI-case} and
\ref{sec:IR-case}, we consider $\gamma\in (4/5,4/5+1/40)$.
In \cite{CGK}, we proceed analogously by partitioning the resonant zones.
Nevertheless, the partition is significantly different.
\subsection{Strips with
different quantitative behaviour}\label{sec:different-strips}
Fix
\[
\nu = \frac{1}{4}\quad\text{ and }\quad b>0\quad \text{ such that }\quad
\rho:=\nu-2b>0.
\]
Consider
the $\varepsilon^\gamma$-grid in the non-resonant zone $\mathcal D_\beta$ (see
\eqref{eq:non-res-domain}). Denote by $I_\gamma$ a segment
whose end points are in the grid. Since in the present paper we only deal with
the non-resonant zone, we only need to distinguish among
the two following types of strips $I_\gamma$ (other types for the resonant zones
are defined in \cite{CGK}).
\begin{itemize}
\item\textbf{The Totally Irrational case:}
A strip $I_\gamma$ is called {\it totally irrational} if,
whenever $r \in I_\gamma$ satisfies $|r-p/q|<\varepsilon^\nu$ with
$\gcd(p,q)=1$, then $|q|>\varepsilon^{-b}$.
In this case, we show that there is a good ``ergodization''
and
\[
\sum_{k=0}^{n-1}\omega_kv\left(\theta_0+k\frac{p}{q}\right)\approx
\sum_{k=0}^{n-1}\omega_kv\left(\theta_0+kr_0^*\right).
\]
for any
$r_0^*\in I_\gamma\cap(\mathbb{R} \setminus\mathbb{Q})$. These strips cover most
of
the cylinder and give the dominant contribution to
the behaviour of $r_n-r_0$. Eventually it will lead to the desired
weak convergence to a diffusion process (Theorem \ref{maintheorem}).
\item \textbf{The Imaginary Rational (IR) case:}
A strip $I_\gamma$ is called {\it imaginary rational} if
there exists a rational $p/q$ in an $\varepsilon^\nu$ neighborhood of $I_\gamma$
with $2d<|q|<\varepsilon^{-b}$.
We call these strips Imaginary Rational,
since the leading term of the angular dynamics
is a rational rotation; however, the associated averaged system
vanishes because $u_i$ and $v_i$ only have harmonics of order $k$ with
$|k|\leq d$.
In Appendix \ref{sec:measure-IR-RR}, we show that the imaginary rational
strips occupy an $\mathcal O(\varepsilon^\rho)$-fraction of the cylinder.
We can show that orbits spend a small fraction of the total time
in these strips and that the global behaviour is determined
by the behaviour in the complement.
\end{itemize}
\subsection{The Normal Forms}
The first step is to find a normal form, so that
the deterministic part of map \eqref{mapthetanrn}
is as simple as possible. It is given in
Theorem \ref{thm:normal-form}.
In short, we shall see that the
deterministic system in both the TI case and
the IR case is
a small perturbation of the twist map
\begin{equation*}
\left(\begin{array}{c}\theta\\r\end{array}\right)
\longmapsto
\left(\begin{array}{c}\theta+r\\r\end{array}\right).
\end{equation*}
On the contrary, in the resonant zones studied in \cite{CGK}, the deterministic
system will be close to a pendulum-like system
\begin{equation*}
\left(\begin{array}{c}\theta\\r\end{array}\right)
\longmapsto
\left(\begin{array}{c}\theta+r\\r+\varepsilon E(\theta,r)\end{array}\right),
\end{equation*}
for an ``averaged'' potential $E(\theta,r)$ (see Theorem
\ref{thm:normal-form}, \eqref{NFnear}).
We note that this system has the following approximate first integral
\[
H(\theta,r)=\frac{r^2}{2}-\varepsilon\int_0^\theta E(s,r)ds,
\]
so that indeed it is close to a pendulum-like system.
This will lead to different qualitative behaviours when considering
the random system.
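As a quick consistency check (our own illustrative simplification: we pass to the continuous-time interpolation $\dot\theta=r$, $\dot r=\varepsilon E(\theta,r)$ of the map), the derivative of $H$ along this flow is
\[
\frac{d}{dt}H(\theta,r)
= r\,\dot r-\varepsilon E(\theta,r)\,\dot\theta
-\varepsilon\,\partial_r\Big(\int_0^\theta E(s,r)\,ds\Big)\,\dot r
=\varepsilon\, r\, E(\theta,r)-\varepsilon\, E(\theta,r)\,r+\mathcal{O}(\varepsilon^2)
=\mathcal{O}(\varepsilon^2),
\]
so $H$ is indeed conserved up to order $\varepsilon^2$.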
\subsection{Analysis of the Martingale problem in each kind of
strip}\label{sec:Martingale}
The next step is to study the behaviour of the random system
respectively in Totally Irrational and Imaginary Rational strips
(see Sections \ref{sec:TI-case} and \ref{sec:IR-case}).
More precisely, we use a discrete version of the scheme
by Freidlin and Wentzell \cite{FW}, giving a sufficient condition
to have weak convergence to a diffusion process as $\varepsilon\to0$
in terms of the associated Martingale problem. Namely, $R_s$ is a
diffusion process with drift $b(r)$ and diffusion coefficient $\sigma(r)$ provided that
for any $s>0$, any time
$n\le s\varepsilon^{-2}$ and any $(\theta_0,r_0)$ we have that as $\varepsilon\to 0$,
\begin{eqnarray}\label{eq:suff-condition}
\mathbb{E}\left(f(r_{n})- \varepsilon^2
\sum_{k=0}^{n-1}
\left(b(r_k)f'(r_k)+\frac{\sigma^2(r_k)}{2}f''(r_k)\right)
\right)-f(r_0)\to 0.
\end{eqnarray}
This implies the main result, Theorem \ref{maintheorem}.
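As an illustration of how \eqref{eq:suff-condition} is used (a toy example of ours, not taken from the proof), consider the simple random walk $r_{n+1}=r_n+\varepsilon\xi_n$ with fair coin flips $\xi_n=\pm1$, for which $b\equiv 0$ and $\sigma\equiv 1$. A Monte Carlo estimate of the left-hand side with the test function $f(r)=r^2$ then vanishes up to sampling error:

```python
import random

def martingale_defect(eps, n, trials, seed=0):
    """Monte Carlo estimate of
    E[f(r_n) - eps^2 * sum_k (b(r_k) f'(r_k) + sigma^2/2 f''(r_k))] - f(r_0)
    for the walk r_{k+1} = r_k + eps*xi_k, with f(r) = r^2, b = 0, sigma = 1."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        r = 0.0
        correction = 0.0
        for _ in range(n):
            correction += eps ** 2  # (sigma^2 / 2) * f''(r_k) = 1 for f(r) = r^2
            r += eps * rng.choice((-1, 1))
        total += r * r - correction  # f(r_n) - eps^2 * sum, with f(r_0) = 0
    return total / trials  # expectation is exactly 0 for this walk

defect = martingale_defect(eps=0.1, n=100, trials=20000)
```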
The proof of \eqref{eq:suff-condition} is done in two steps.
First, we describe the local behaviour in each strip and
then we combine the information. We define Markov times $0=n_0<n_1<n_2 <\dots
<n_{m-1}<n_m=n\leq s\varepsilon^{-2}$
for some random $m=m(\omega)$ such that each $n_k$ is a stopping
time as in \eqref{eq:stopping-time} and $n_m$ is the final time. Almost surely
$m(\omega)$ is
finite. We decompose the above sum
\[
\mathbb{E}\left(\sum_{k=0}^{m-1} \left[f(r_{n_{k+1}})-
f(r_{n_{k}})-\varepsilon^2
\sum_{s=n_k}^{n_{k+1}-1}\left(b(r_s)f'(r_s)+\frac{\sigma^2(r_s)}{2}
f''(r_s)\right)\right]\right),
\]
analyze each summand in the corresponding strip and then prove that the
whole sum converges to 0 as $\varepsilon\to 0$.
\subsubsection{A TI Strip}
\label{sec:TI-prelim}
Let the drift and the variance be as in \eqref{eq:drift-variance}.
Let $r_0$ be $\varepsilon$-close to the common boundary of two totally
irrational strips and let $n_\gamma$ be the stopping time of hitting
the $\varepsilon$-neighbourhoods of the adjacent boundaries, or let
$n_\gamma=n\leq s\varepsilon^{-2}$ be the final time.
In Lemma \ref{lemmaexpectation}
we prove that for some $\zeta>0$
\begin{eqnarray}\label{exp-formula}
\begin{aligned}
&&\mathbb{E}\left(f(r_{n_\gamma})- \varepsilon^2
\sum_{k=0}^{n_\gamma-1}\left(b(r_k)f'(r_k)+\frac{\sigma^2(r_k)}{2}
f''(r_k)\right)
\right)\\
&& \qquad \qquad \qquad \qquad \qquad \qquad \qquad
- f(r_0)=\mathcal{O}(\varepsilon^{2\gamma+\zeta}).
\end{aligned}
\end{eqnarray}
\subsubsection{An IR Strip}
\label{sec:IR-prelim}
Consider the drift and variance given in \eqref{eq:drift-variance}.
Let $r_0$ be $\varepsilon$-close to the boundary of an imaginary
rational strip and let $n_\gamma$ be the stopping time of hitting
the $\varepsilon$-neighbourhoods of the adjacent boundaries, or let
$n_\gamma=n\leq s\varepsilon^{-2}$ be the final time.
Fix any $\delta>0$ small. In Lemma \ref{lemmaexpectation-IR} we prove that
\begin{eqnarray*}
&&\mathbb{E}\left(f(r_{n_\gamma})- \varepsilon^2
\sum_{k=0}^{n_\gamma-1}
\left(b(r_k)f'(r_k)+
\frac{\sigma^2(r_k)}{2}f''(r_k)\right)\right)\\
&&\qquad \qquad \qquad \qquad \qquad \qquad \qquad
\qquad \qquad -f(r_0)=\mathcal{O}(\varepsilon^{2\gamma-\delta}).
\end{eqnarray*}
\subsection{The resonant zones $\mathcal
R_\beta^{p/q}$}\label{sec:ResonantRegime}
The resonant zones $\mathcal R_\beta^{p/q}$ defined in \eqref{eq:res-domain} are studied in \cite{CGK}.
We summarize here the key steps
(for a more precise statement see Lemma
\ref{lemma:expectlemmaBigStrips} and the remark that follows it).
Fix $p/q$ with $|q|\leq 2 d$ and consider the associated resonant zone $\mathcal
R_\beta^{p/q}$ for some $\beta>0$ independent of $\varepsilon$ ($\beta$ is chosen so
that the
different resonant regions do not overlap).
In $\mathcal
R_\beta^{p/q}$ we do not analyze the stochastic behavior in $r$ but in a
different variable. In \cite{CGK} we show, through a normal form, that,
after a suitable change of coordinates, the
deterministic map associated to \eqref{mapthetanrn} has an approximate first
integral $H$ of the form
\[
H^{p/q}(\theta, r)=\frac{r^2}{2}+\varepsilon V^{p/q}(\theta,
r)+\mathcal{O}\left(\varepsilon^2\right).
\]
In the resonant zone \eqref{eq:res-domain}, we analyze the process
$(\theta_{qn},H_n)$ with
\[H_n:=H^{p/q}\left(\theta_{qn},R_{qn}\right).\]
We prove that $H_n-H_0$ converges weakly to a diffusion process
$H_s$ with $s=\varepsilon^{2}n$.
Notice that the limiting process does not take place on a line
but on a graph, as in \cite{FW}.
More precisely, consider the level sets of the function
$H^{p/q}(\theta,r)$. The critical points of the potential
$V^{p/q}(\theta)$ give rise to critical points of the associated
Hamiltonian system. Moreover, if the critical point is a local minimum
of $V$, it corresponds to a center of the Hamiltonian system,
while if it is a local maximum of $V^{p/q}$, it corresponds to
a saddle. Now, if for every value
$H\in \mathbb{R}$ we identify all the points $(\theta,r)$ in the same
connected component of the curve $\{H^{p/q}(\theta,r)=H\}$,
we obtain a graph $\Gamma$ (see Figure \ref{fig:potential-graph}
for an example). The interior vertices of this graph represent
the saddle points of the underlying Hamiltonian system jointly with
their separatrices, while the exterior vertices represent the centers
of the underlying Hamiltonian system. Finally, the edges of the graph
represent the domains that have the separatrices as boundaries.
The process $H_n$ can be viewed as a process on the graph.
\begin{figure}[h]
\begin{center}
\includegraphics[width=8.75cm]{potential-graph.pdf}
\end{center}
\caption{(a) A potential and the phase portrait of its corresponding
Hamiltonian system. (b) The associated graph $\Gamma$.}
\label{fig:potential-graph}
\end{figure}
In \cite{CGK} we analyze the stochastic behavior in this graph by proving an
analogous sufficient condition to \eqref{eq:suff-condition} on the
graph. Namely, we use that $H_n$ converges weakly to the
diffusion process $H_s$ provided that
for any $s>0$, any time
$n\le s\varepsilon^{-2}$ and any $(\theta_0,H_0)$ we have that as $\varepsilon\to 0$,
\[
\mathbb{E}\left(f(H_{n})- \varepsilon^2
\sum_{k=0}^{n-1}
\left(b(H_k)f'(H_k)+\frac{\sigma^2(H_k)}{2}f''(H_k)\right)
\right)-f(H_0)\to 0,
\]
and then we relate the $H$-process to the $r$-process.
\subsection{Plan of the rest of the paper}
In Section \ref{sec:NormalForm} we state and prove
the normal form theorem for the expected cylinder map
$\mathbb{E} f$. The main difference with a typical normal form is that
we need to have not only the leading term in $\varepsilon$,
but also $\varepsilon^2$-terms. The latter terms give information
about the drift $b(r)$ (see \eqref{eq:drift}).
In Section \ref{sec:TI-case} we analyze the Totally Irrational
case and prove approximation for the expectation from
Section \ref{sec:TI-prelim}. In Section \ref{sec:IR-case} we analyze the
Imaginary Rational case and prove an analogous formula from
Section \ref{sec:IR-prelim}. In Section \ref{sec:IR-cyl-to-line} we prove
Theorem \ref{maintheorem} using
the analysis of the TI and IR strips.
In Appendix \ref{sec:measure-IR-RR} we estimate the measure
of the complement of the TI strips. In Appendix \ref{sec:auxiliaries} we
present
several auxiliary lemmas used in the proofs.
\section{The Normal Form Theorem}\label{sec:NormalForm}
In this section we prove the Normal Form Theorem,
which allows us to deal with the simplest possible
deterministic system. To this end, we state
a technical lemma needed in the proof of the theorem.
This is a simplified version (sufficient for our purposes) of Lemma 3.1
in \cite{BKZ}.
\begin{lem}\label{lemClnorms}
Let $g(\theta,r)\in\mathcal{C}^l\left(\mathbb{T}\times B\right)$,
where $B\subset\mathbb{R}$. Then
\begin{enumerate}
\item If $l_0\leq l$ and $k\neq0$, then $\|g_k(r)e^{2\pi i k \theta}\|_{\mathcal{C}^{l_0}}\leq |k|^{l_0-l}\|g\|_{\mathcal{C}^{l}}$, where $g_k$ denotes the $k$-th Fourier coefficient of $g$ in $\theta$.
\item Let $g_k(r)$ be functions that satisfy
$\|\partial_{r}^{\alpha}g_k\|_{\mathcal{C}^0}\leq M|k|^{-\alpha-2}$
for all $\alpha\leq l_0$ and some $M>0$. Then
$$\left\|\sum_{\substack{k\in\mathbb{Z}\\0<k\leq d}}
g_k(r)e^{2\pi i k \theta}\right\|_{\mathcal{C}^{l_0}}\leq cM,$$
for some constant $c$ depending on $l_0$.
\end{enumerate}
\end{lem}
Let $\mathcal{R}$ be the finite set of resonances of the map \eqref{mapthetar},
namely,
\[
\mathcal{R}=\{p/q\in\mathbb{Q}\,:\,\gcd(p,q)=1, |q|\leq 2d\}.
\]
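For concreteness (our own illustrative snippet), the representatives of $\mathcal{R}$ in $[0,1)$ can be enumerated directly; for instance, for $d=2$ one finds the six resonances $0,\tfrac14,\tfrac13,\tfrac12,\tfrac23,\tfrac34$:

```python
from fractions import Fraction
from math import gcd

def resonances(d):
    """Reduced fractions p/q in [0, 1) with 1 <= q <= 2d (the set R modulo 1)."""
    return sorted(
        Fraction(p, q)
        for q in range(1, 2 * d + 1)
        for p in range(q)
        if gcd(p, q) == 1
    )

res = resonances(2)  # six resonances for d = 2
```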
\begin{thm}\label{thm:normal-form}
Consider the expected map $\mathbb{E}f$ associated to the map \eqref{mapthetar}
\begin{equation}\label{def:ExpectedMap}
\mathbb{E} f:
\begin{pmatrix}\theta\\r\end{pmatrix}\longmapsto
\begin{pmatrix}\theta+r+\varepsilon \mathbb{E}u(\theta,r)
+\mathcal O_s(\varepsilon^{1+a})
\\
r+\varepsilon \mathbb{E}v(\theta,r)+\varepsilon^2 \mathbb{E} w(\theta,r)+
\mathcal O_s(\varepsilon^{2+a})
\end{pmatrix}.
\end{equation}
Assume that the functions $\mathbb{E}u(\theta,r)$, $\mathbb{E}v(\theta,r)$ and $\mathbb{E} w(\theta,r)$
are $\mathcal{C}^l$, $l\geq3$. Fix $\beta>0$
small and $0\leq s\leq l-2$. Then, there exists $K>0$ independent of $\varepsilon$
and a canonical change of variables
\begin{eqnarray*}
\Phi:\mathbb{T}\times\mathbb{R}&\rightarrow&
\mathbb{T}\times\mathbb{R},\\
(\tilde \theta,\tilde r)&\mapsto&(\theta,r),
\end{eqnarray*}
such that
\begin{itemize}
\item If $|\tilde r-p/q|\geq \beta$ for all $p/q\in\mathcal{R}$,
then
\begin{equation}\label{NFfar}
\begin{aligned}
\Phi^{-1}\circ\mathbb{E}f\circ\Phi(\tilde\theta,\tilde r)=\qquad
\qquad \qquad \qquad \qquad \qquad \qquad \\
\begin{pmatrix}\tilde \theta+\tilde r+\varepsilon \mathbb{E}u(\tilde\theta,\tilde r)-\varepsilon \mathbb{E}v(\tilde\theta,\tilde r)+\varepsilon
E_1(\tilde\theta,\tilde r)+\mathcal O_s(\varepsilon^{1+a})+\mathcal{O}_s(\varepsilon^2\beta^{-(2s+4)})
\\
\tilde r+\varepsilon^2E_2(\tilde\theta,\tilde
r)+\mathcal{O}_s(\varepsilon^{2+a})+\mathcal{O}_s(\varepsilon^3\beta^{-(3s+5)})\end{pmatrix}
,
\end{aligned}
\end{equation}
where $E_1$ and $E_2$ are some $\mathcal{C}^{l-1}$ functions. There exists a
constant $K$ such that for any $0\leq s\leq l-1$ one has
$$\|E_1\|_{\mathcal{C}^s}\leq K\|\mathbb{E}v\|_{\mathcal{C}^{s+1}},\qquad \|E_2\|_{\mathcal{C}^s}\leq K\beta^{-(2s+3)}.$$
Moreover, $E_2$ satisfies
\begin{equation}\label{eq:drift}
\begin{split}
b(r)=&\int_0^1 E_2(\tilde\theta,\tilde
r)d\tilde\theta\\
=&\int_0^1 \Big(\mathbb{E} w(\tilde\theta,\tilde
r)-\partial_\theta\mathbb{E} v(\tilde\theta,\tilde
r)\mathbb{E} u(\tilde\theta,\tilde
r) \\
&+\partial_{\theta}S_1(\tilde\theta,\tilde r)\left(\partial_{\tilde r}
\mathbb{E}v(\tilde\theta,\tilde
r)-\partial_{\theta}\mathbb{E}v(\tilde\theta,\tilde
r)+\partial_{\theta}\mathbb{E}u(\tilde\theta,\tilde
r)\right)\Big)d\tilde \theta.
\end{split}
\end{equation}
In particular, $b(r)$ satisfies $\|b\|_{\mathcal{C}^0}\leq K$.
\item If $|\tilde r-p/q|\leq 2\beta$ for a given
$p/q\in\mathcal{R}$, then
{\small \begin{equation}\label{NFnear}
\Phi^{-1}\circ\mathbb{E}f\circ\Phi(\tilde\theta,\tilde r)=
\end{equation}
\begin{equation}
\begin{pmatrix}\tilde\theta+\tilde
r+\varepsilon\left[\mathbb{E}u\left(\tilde\theta,\frac{p}{q}\right)-\mathbb{E}v\left(\tilde\theta,
\frac{p}{q}\right)+\mathbb{E}v_{p,q}\left(\tilde\theta,\frac{p}{q}\right)+
E_3(\tilde\theta)\right]+\mathcal{O}_s\left(\varepsilon^{1+a},\varepsilon\beta,\varepsilon^3\beta^{
-(2s+4)}\right)\\
\tilde r+\varepsilon\mathbb{E}v_{p,q}(\tilde\theta,\tilde r)+
\varepsilon^2 E_4(\tilde\theta,\tilde
r)+\mathcal{O}_s(\varepsilon^{2+a},\varepsilon^3\beta^{-(3s+5)})
\end{pmatrix},\nonumber
\end{equation}}
where $\mathbb{E}v_{p,q}$ is the $\mathcal{C}^l$ function defined as
\begin{equation}\label{defEvpq}
\mathbb{E}v_{p,q}(\tilde\theta,\tilde r)=\sum_{k\in \mathcal{R}_\beta^{p,q}}\mathbb{E}v^k(\tilde
r)e^{2\pi ik\tilde\theta},
\end{equation}
and $E_3$ is the $\mathcal{C}^{l-1}$ function
\begin{equation}\label{defE3}
E_3(\tilde\theta)=-\sum_{\substack{0<|k|\leq d\\k\not\in\mathcal{R}_\beta^{p,q}}}\frac{i(\mathbb{E}v^k)'(p/q)}{2\pi
k}e^{2\pi i k\tilde\theta},
\end{equation}
where
\begin{equation}\label{def:rpq}
\mathcal{R}_\beta^{p,q}=\{k\in\mathbb{Z}\,:\,k\neq0,\,|k|\leq2d,\,
kp/q\in\mathbb{Z}\}.
\end{equation}
Moreover, $E_4$ is a $\mathcal{C}^{l-1}$ function and there exists a constant
$K$ such that for all $0\leq s\leq l-1$ one has
$$\|E_4\|_{\mathcal{C}^s}\leq K\beta^{-(2s+3)}.$$
\end{itemize}
Also, $\Phi$ is $\mathcal{C}^2$-close to the identity.
More precisely, there exists a constant $M$ independent
of $\varepsilon$ such that
\begin{equation}\label{normPhi-Id}
\|\Phi-\textup{Id}\|_{\mathcal{C}^2}\leq M\varepsilon.
\end{equation}
\end{thm}
\begin{cor}\label{corZeroDrift}
If the map \eqref{def:ExpectedMap} is area preserving and exact, then
\[
b(r)\equiv 0.
\]
\end{cor}
\begin{proof}[Proof of Corollary \ref{corZeroDrift}]
It is enough to recall the following two facts. First, expanding $\mathbb{E}
f^*(dr\wedge d\theta)-dr\wedge d\theta$ in $\varepsilon$ and taking the first order,
the fact that $\mathbb{E} f$ is area preserving implies
$\partial_{\tilde r}
\mathbb{E}v(\tilde\theta,\tilde
r)-\partial_{\theta}\mathbb{E}v(\tilde\theta,\tilde
r)+\partial_{\theta}\mathbb{E}u(\tilde\theta,\tilde
r)=0$. Second, expanding $\mathbb{E}
f^*(rd\theta)-rd\theta$ in $\varepsilon$ and taking the first and second orders,
exactness implies $\int_0^1 \mathbb{E} v(\tilde\theta,\tilde
r)d\tilde\theta=0$ and
\[
\int_0^1 \left(\mathbb{E} w(\tilde\theta,\tilde
r)-\partial_\theta\mathbb{E} v(\tilde\theta,\tilde
r)\mathbb{E} u(\tilde\theta,\tilde
r)\right)d\tilde\theta=0.
\]
Plugging these two identities into \eqref{eq:drift} yields $b(r)\equiv0$.
\end{proof}
\begin{rmk} \label{transition-exponent}
Notice that in the case $\beta=\varepsilon^{1/11}$ and $s=0$
the remainder term
$\mathcal{O}_0(\varepsilon^3\beta^{-5})=\mathcal{O}_0(\varepsilon^{28/11})$ is dominated
by $\mathcal{O}_0(\varepsilon^{2+a})$ if $1/2<a<6/11$, since then $2+a<28/11$.
\end{rmk}
\begin{proof}[Proof of Theorem \ref{thm:normal-form}]
Consider the canonical change defined implicitly by
a given generating function
$S(\theta,\tilde r)=\theta\tilde r+\varepsilon S_1(\theta,\tilde r)$, that is
\[
\begin{split}
\tilde \theta=&\partial_{\tilde r} S(\theta, \tilde r)=\theta+\varepsilon
\partial_{\tilde r}S_1(\theta,\tilde r)\\
r=&\partial_\theta S(\theta, \tilde r)=\tilde r+\varepsilon \partial_\theta
S_1(\theta,\tilde r).
\end{split}
\]
We shall start by writing explicitly the first orders of
the $\varepsilon$-series of $\Phi^{-1}\circ\mathbb{E}f\circ\Phi$.
If $(\theta,r)=\Phi(\tilde\theta,\tilde r)$ is the change given by the
generating function $S$, then one has
\begin{eqnarray}\label{Phi}
\begin{aligned}
\Phi(\tilde\theta,\tilde r)=\qquad \qquad \qquad \qquad
\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad\\
\begin{pmatrix}\tilde\theta-\varepsilon\partial_{\tilde r}S_1(\tilde\theta,\tilde
r)+\varepsilon^2\partial_\theta\partial_{\tilde r}S_1(\tilde\theta,\tilde
r)\partial_{\tilde r}S_1(\tilde\theta,\tilde
r)+\mathcal{O}_s(\varepsilon^3\|\partial_\theta^2\partial_{\tilde
r}S_1(\partial_{\tilde r}S_1)^2\|_{\mathcal{C}^s})\\
\tilde r+\varepsilon\partial_\theta S_1(\tilde\theta,\tilde r)-\varepsilon^2\partial_\theta^2S_1(\tilde\theta,\tilde r)\partial_{\tilde r}S_1(\tilde\theta,\tilde r)+\mathcal{O}_s(\varepsilon^3\|\partial_\theta^3S_1(\partial_{\tilde r}S_1)^2\|_{\mathcal{C}^s})
\end{pmatrix},
\end{aligned}
\end{eqnarray}
and its inverse is given by
\begin{eqnarray}\label{Phiinverse}
\begin{aligned}
\Phi^{-1}(\theta,r)=\qquad \qquad \qquad \qquad
\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad
\\
\begin{pmatrix}\theta+\varepsilon\partial_{\tilde
r}S_1(\theta,r)-\varepsilon^2\partial^2_{\tilde r}S_1(\theta,r)\partial_\theta
S_1(\theta,r)+\mathcal{O}_s(\varepsilon^3\|\partial^3_{\tilde r}S_1(\partial_\theta
S_1)^2\|_{\mathcal{C}^s})
\\
r-\varepsilon\partial_\theta S_1(\theta,r)+\varepsilon^2\partial_\theta\partial_{\tilde r}S_1(\theta,r)\partial_\theta S_1(\theta,r)+\mathcal{O}_s(\varepsilon^3\|\partial_\theta\partial_{\tilde r}^2S_1(\partial_\theta S_1)^2\|_{\mathcal{C}^s})
\end{pmatrix}.
\end{aligned}
\end{eqnarray}
One can see that
\begin{equation}\label{EfPhi}
\mathbb{E}f\circ\Phi(\tilde\theta,\tilde r)=\begin{pmatrix}
\tilde\theta + \tilde r+ \varepsilon
A_1+\varepsilon^2A_2+\varepsilon^3A_3+\mathcal{O}_s\left(\varepsilon^{1+a}\right)\\
\tilde r+\varepsilon
B_1+\varepsilon^2B_2+\varepsilon^3B_3+\mathcal{O}_s\left(\varepsilon^{2+a}\right)
\end{pmatrix},
\end{equation}
where
\begin{equation}\label{defA1}
\begin{split}
A_1=&\mathbb{E}u(\tilde\theta,\tilde r)-\partial_{\tilde r}S_1(\tilde\theta,\tilde
r)+\partial_\theta S_1(\tilde\theta,\tilde r)\\
A_2=&-\partial_\theta\mathbb{E}u(\tilde\theta,\tilde r)\partial_{\tilde
r}S_1(\tilde\theta,\tilde r)+\partial_r\mathbb{E}u(\tilde\theta,\tilde r)\partial_\theta
S_1(\tilde\theta,\tilde r)\\
&+\partial_\theta\partial_{\tilde r}S_1(\tilde\theta,\tilde r)\partial_{\tilde
r}S_1(\tilde\theta,\tilde r) -\partial_\theta^2S_1(\tilde\theta,\tilde
r)\partial_{\tilde r}S_1(\tilde\theta,\tilde r),\\
A_3=&\mathcal{O}_s(\|\partial_\theta^2\partial_{\tilde r}S_1(\partial_{\tilde
r} S_1)^2\|_{\mathcal{C}^s})+\mathcal{O}_s(\|\partial_\theta^3
S_1(\partial_{\tilde r} S_1)^2\|_{\mathcal{C}^s})\\
&+\mathcal{O}_s(\|\mathbb{E}u\|_{\mathcal{C}^{s+1}}\|\partial_\theta
S_1\|_{\mathcal{C}^{s+1}}\|\partial_{\tilde r}S_1\|_{\mathcal{C}^s})\\
&+\mathcal{O}_s(\|\mathbb{E}u\|_{\mathcal{C}^{s+2}}(\|\partial_\theta
S_1\|_{\mathcal{C}^s}+\|\partial_{\tilde
r}S_1\|_{\mathcal{C}^s})^2),
\end{split}
\end{equation}
and
\begin{equation}\label{defB2}
\begin{split}
B_1=&\mathbb{E}v(\tilde\theta,\tilde r)+\partial_\theta S_1(\tilde\theta,\tilde r),\\
B_2=&\mathbb{E} w(\tilde \theta, \tilde r)-\partial_\theta\mathbb{E}v(\tilde\theta,\tilde
r)\partial_{\tilde r}S_1(\tilde\theta,\tilde
r)\\
&+\partial_r\mathbb{E}v(\tilde\theta,\tilde r)\partial_\theta
S_1(\tilde\theta,\tilde r)-\partial^2_\theta
S_1(\tilde\theta,\tilde r)\partial_{\tilde r}S_1(\tilde\theta,\tilde
r),\\
B_3=&\mathcal{O}_s(\|\partial_\theta^3S_1(\partial_{\tilde
r}S_1)^2\|_{\mathcal{C}^s})+\mathcal{O}_s(\|\mathbb{E}v\|_{\mathcal{C}^{s+1}}
\|\partial_\theta S_1\|_{\mathcal{C}^{s+1}}\|\partial_{\tilde
r}S_1\|_{\mathcal{C}^s})\\
&+\mathcal{O}_s(\|\mathbb{E}v\|_{\mathcal{C}^{s+2}}(\|\partial_\theta
S_1\|_{\mathcal{C}^s}+\|\partial_{\tilde
r}S_1\|_{\mathcal{C}^s})^2).
\end{split}
\end{equation}
Then, using \eqref{Phiinverse},
\begin{equation}\label{PhiinverseEfPhi}
\Phi^{-1}\circ\mathbb{E}f\circ\Phi(\tilde\theta,\tilde r)=\begin{pmatrix}\tilde
\theta+\tilde r+\varepsilon\hat A_1+\varepsilon^2\hat A_2+\mathcal
O_s\left(\varepsilon^{1+a}\right)\\\tilde r+\varepsilon\hat B_1+\varepsilon^2\hat B_2+\varepsilon^3\hat
B_3+\mathcal
O_s\left(\varepsilon^{2+a}\right)\end{pmatrix},
\end{equation}
where
\begin{equation}\label{defA1hat}
\begin{split}
\hat A_1=&A_1+\partial_{\tilde r}S_1(\tilde\theta+\tilde r,\tilde
r),\\
\hat A_2=&A_2+\varepsilon A_3+\mathcal{O}_s(\|\partial_\theta\partial_{\tilde
r}S_1A_1\|_{\mathcal{C}^s})+\mathcal{O}_s(\|\partial_{\tilde
r}^2S_1B_1\|_{\mathcal{C}^s})\\
&+\mathcal{O}_s(\|\partial_{\tilde r}^2S_1\partial_\theta
S_1\|_{\mathcal{C}^s}),
\end{split}
\end{equation}
and
\begin{equation}\label{defB2hat}
\begin{split}
\hat B_1=&B_1-\partial_\theta S_1(\tilde \theta+\tilde r,\tilde
r)\\
\hat B_2=&B_2-\partial_\theta^2S_1(\tilde\theta+\tilde r,\tilde
r)A_1-\partial_{\tilde r}\partial_\theta S_1(\tilde\theta+\tilde r,\tilde
r)B_1\\
&+\partial_\theta\partial_{\tilde r}S_1(\tilde \theta+\tilde r,\tilde
r)\partial_\theta S_1(\tilde\theta+\tilde r,\tilde r),
\\
\hat B_3=&B_3+\mathcal{O}_s(\|\partial_\theta\partial_{\tilde
r}^2S_1(\partial_\theta S_1)^2\|_{\mathcal{C}^s})\\
&+\mathcal{O}_s(\|\partial_\theta^2 S_1 (A_2+\varepsilon
A_3)\|_{\mathcal{C}^s}+\|\partial_\theta\partial_{\tilde r} S_1
B_2\|_{\mathcal{C}^s})\\
&+\mathcal{O}_s(\|\partial_\theta^3S_1
A_1^2\|_{\mathcal{C}^s}+\|\partial_\theta^2\partial_{\tilde r}
S_1A_1B_1\|_{\mathcal{C}^s}+\|\partial_\theta\partial_{\tilde r}^2S_1
B_1^2\|_{\mathcal{C}^s})\\
&+\mathcal{O}_s(\|\partial_\theta^2\partial_{\tilde r}S_1A_1\partial_\theta
S_1\|_{\mathcal{C}^s}+\|\partial_\theta\partial_{\tilde r}^2
S_1B_1\partial_\theta S_1\|_{\mathcal{C}^s})\\
&+\mathcal{O}_s(\|\partial_\theta\partial_{\tilde r}S_1\partial_\theta^2S_1
A_1\|_{\mathcal{C}^s}+\|(\partial_\theta\partial_{\tilde
r}S_1)^2B_1\|_{\mathcal{C}^s}).
\end{split}
\end{equation}
Now that we know the terms of order $\varepsilon$ and $\varepsilon^2$ of
$\Phi^{-1}\circ\mathbb{E}f\circ\Phi$, we proceed to find a suitable $S_1(\theta,\tilde
r)$ to make $\hat B_1$ as simple as possible. Ideally we would like
that $\hat B_1=0$ by solving the following equation whenever it is possible
\begin{equation}\label{eq:cohomological}
\partial_{\theta}S_1(\tilde\theta,\tilde r)+\mathbb{E}v(\tilde\theta,\tilde
r)-\partial_{\theta}S_1(\tilde\theta+\tilde r,\tilde r)=0.
\end{equation}
One can find a formal solution of this equation by solving the
corresponding equation for the Fourier coefficients. Write $S_1$
and $\mathbb{E}v$
in their Fourier series
\begin{equation}
\label{eq:Genfunction}
S_1(\theta,\tilde r)=
\sum_{k\in\mathbb{Z}}S_1^k(\tilde r)e^{2\pi ik\theta},
\end{equation}
$$\mathbb{E}v(\theta,r)=\sum_{\substack{k\in\mathbb{Z}\\0<|k|\leq d}}\mathbb{E}v^k(r)e^{2\pi ik\theta}.$$
It is obvious that for $|k|>d$ and $k=0$ we can take $S_1^k(\tilde r)=0$. For
$0<|k|\leq d$ we obtain the following homological equation for $S_1^k(\tilde r)$
\begin{equation} \label{eq:HomEq}
2\pi ikS_1^k(\tilde r)\left(1-e^{2\pi ik\tilde r}\right)+\mathbb{E}v^k(\tilde r)=0.
\end{equation}
This equation cannot be solved if $e^{2\pi ik\tilde r}=1$, i.e. if $k\tilde
r\in\mathbb{Z}$. We note that there exists a constant $L$, independent of
$\varepsilon$, $L<d^{-1}$, such that if $\tilde r\neq p/q$ satisfies
$$0<|\tilde r-p/q|\leq L$$
then $k\tilde r\not \in\mathbb{Z}$ for all $0<k\leq d$. Restricting
ourselves to the domain $|\tilde r-p/q|\leq L$, we have that if
$kp/q\not\in\mathbb{Z}$ equation \eqref{eq:HomEq} always has a solution, and if
$kp/q\in\mathbb{Z}$ this equation has a solution except at $\tilde r=p/q$.
Moreover, in the case that the solution exists, it is equal to:
$$S_1^k(\tilde r)=\frac{i\mathbb{E}v^k(\tilde r)}{2\pi k\left(1-e^{2\pi ik\tilde r}\right)}.$$
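One can check this solution numerically for a single harmonic (a sanity check of ours, with the hypothetical choices $\mathbb{E}v(\theta)=\cos(2\pi\theta)$, i.e. $\mathbb{E}v^{\pm1}=\tfrac12$, and a non-resonant $\tilde r$); the left-hand side of \eqref{eq:cohomological} then vanishes to machine precision:

```python
import cmath
import math

R_TILDE = 0.3711          # non-resonant action value (our arbitrary choice)
V = {1: 0.5, -1: 0.5}     # Fourier coefficients of Ev(theta) = cos(2*pi*theta)

def S1_coeff(k):
    """S_1^k = i*Ev^k / (2*pi*k*(1 - e^{2*pi*i*k*r}))."""
    return 1j * V[k] / (2 * math.pi * k * (1 - cmath.exp(2j * math.pi * k * R_TILDE)))

def dtheta_S1(theta):
    """theta-derivative of S_1 at (theta, R_TILDE), summed over the two modes."""
    return sum(2j * math.pi * k * S1_coeff(k) * cmath.exp(2j * math.pi * k * theta)
               for k in V)

def v(theta):
    return math.cos(2 * math.pi * theta)

# residual of the cohomological equation d_theta S1 + v - d_theta S1(. + r) on a grid
residual = max(
    abs(dtheta_S1(t) + v(t) - dtheta_S1(t + R_TILDE))
    for t in (j / 37 for j in range(37))
)
```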
We modify this solution slightly to make it well defined also at $\tilde
r=p/q$. To this end, let us consider a $\mathcal{C}^\infty$ function
$\mu(x)$ such that
\begin{equation*}
\mu(x)=\left\{\begin{array}{rcl}
1 &\textrm{ if }& |x|\leq1,\\
0 &\textrm{ if }& |x|\geq2,
\end{array}\right.
\end{equation*}
and $0<\mu(x)<1$ if $|x|\in(1,2)$. Then we define
$$\mu_k(\tilde r)=\mu\left(\frac{1-e^{2\pi ik\tilde r}}{2\pi k
\beta
}\right),$$
and take
\begin{equation}\label{defS1k}
S_1^k(\tilde r)=\frac{i\mathbb{E}v^{k}(\tilde r)(1-\mu_k(\tilde r))}{2\pi k(1-e^{2\pi ik\tilde r})}.\end{equation}
This function is well defined since the numerator is identically zero in a
neighbourhood of $\tilde r=p/q$, the unique zero
of the denominator (when it is indeed a zero, that is, when
$k\in\mathcal{N}\cap q\mathbb{Z}$; see \eqref{def:FourierSupp}). More precisely,
we claim that
\begin{equation}\label{valuesmuk}
\mu_k(\tilde r)=\left\{\begin{array}{ccl}
1&\textrm{ if }& k\in\mathcal{N}\cap q\mathbb{Z} \ \textrm{ and }\ |\tilde
r-p/q|\leq
\beta
/2,\\
0&\textrm{ if }& k\in\mathcal{N}\cap q\mathbb{Z} \ \textrm{ and }\ |\tilde
r-p/q|\geq 3\beta
,
\\
0&\textrm{ if }&k\not\in\mathcal{N}\cap q\mathbb{Z}.
\end{array}\right.
\end{equation}
Indeed if $k\in\mathcal{N}\cap q\mathbb{Z}$ there exists a constant $M$
independent of $\tilde r$ and $\varepsilon$ such that
$$
\frac{1}{\beta
}|\tilde r-p/q|(1-M|\tilde r-p/q|)
\leq
\left|\frac{1-e^{2\pi ik\tilde r}}{2\pi k \beta
}\right|\leq\frac{1}{ \beta
}|\tilde r-p/q|(1+M|\tilde r-p/q|).
$$
Then, on the one hand, if $k\in\mathcal{N}\cap q\mathbb{Z}$ and
$|\tilde r-p/q|\leq \beta
/2$ we have:
$$
\left|\frac{1-e^{2\pi ik\tilde r}}{2\pi k \beta
}\right|\leq\frac{1}{2}+\frac{M}{4} \beta
<1,$$
for $\beta$ sufficiently small, and thus $\mu_k(\tilde r)=1$.
On the other hand, if $|\tilde r-p/q|\geq 3\beta
$ then
$$
\left|\frac{1-e^{2\pi ik\tilde r}}{2\pi k\beta
}\right|\geq 3-9M \beta
>2,$$
for $\beta$ sufficiently small, and thus $\mu_k(\tilde r)=0$.
Finally, if $k\not\in \mathcal{N}\cap q\mathbb{Z}$ then
$$\left|\frac{1-e^{2\pi ik\tilde r}}{2\pi k \beta
}\right|\geq \frac{M}{\beta}
>2$$
for $\beta$ sufficiently small and then we also have
$\mu_k(\tilde r)=0$.
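The construction of $\mu_k$ can be made concrete with the standard smooth cutoff built from $e^{-1/t}$ (a sketch under our own choice of $\mu$; the argument only requires the stated properties). For $p/q=\tfrac12$, $k=2$ and $\beta=0.05$ one recovers the three regimes of \eqref{valuesmuk}:

```python
import cmath
import math

def h(t):
    """Smooth function vanishing to all orders at t <= 0."""
    return math.exp(-1.0 / t) if t > 0 else 0.0

def mu(x):
    """C^infinity cutoff: 1 for |x| <= 1, 0 for |x| >= 2, strictly between otherwise."""
    a = abs(x)
    return h(2.0 - a) / (h(2.0 - a) + h(a - 1.0))

def mu_k(r, k, beta):
    """mu_k(r) = mu((1 - e^{2*pi*i*k*r}) / (2*pi*k*beta)), as in the text."""
    z = (1 - cmath.exp(2j * math.pi * k * r)) / (2 * math.pi * k * beta)
    return mu(z)

plateau = mu_k(0.5, 2, 0.05)            # at r = p/q: equals 1
far = mu_k(0.5 + 3 * 0.05, 2, 0.05)     # |r - p/q| >= 3*beta: equals 0
mid = mu_k(0.5 + 1.5 * 0.05, 2, 0.05)   # transition region: strictly in (0, 1)
```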
Now we proceed to check that the first order terms of \eqref{PhiinverseEfPhi} take the form \eqref{NFfar} if
$|\tilde r-p/q|\geq 3\beta$
and \eqref{NFnear} if
$|\tilde r-p/q|\leq \beta/2$.
On the one hand, by definitions in \eqref{defS1k} of
the coefficients $S_1^k(\tilde r)$ and in \eqref{defB2hat}
of $\hat B_1$, we have
$$
\hat B_1=\sum_{0<|k|\leq d}
\mu_k(\tilde r)\mathbb{E}v^k(\tilde r)e^{2\pi i k\tilde\theta}.$$
Then, recalling \eqref{valuesmuk} we obtain
\begin{equation}\label{valuesB1hat}
\hat B_1=\left\{\begin{array}{lcl}0&\quad
\textrm{ if }&|\tilde r-p/q|\geq 3\beta
\\
\displaystyle\sum_{k\in\mathcal{N}\cap q\mathbb{Z}}
\mathbb{E}v^k(\tilde r)e^{2\pi i k\tilde\theta}=
\mathbb{E}v_{p,q}(\tilde\theta,\tilde r)&\quad
\textrm{ if }&|\tilde r-p/q|\leq \beta/2
.\end{array}\right.
\end{equation}
Here we have used the definition \eqref{defEvpq} of
$\mathbb{E}v_{p,q}(\tilde\theta,\tilde r)$. On the other hand, from
the definition \eqref{defS1k} of $S_1^k(\tilde r)$ one can check that
\begin{eqnarray*}
&&-\partial_{\tilde r}S_1(\tilde\theta,\tilde r)+
\partial_{\tilde r}S_1(\tilde\theta+\tilde r,\tilde r)\\
&&=-\partial_\theta S_1(\tilde\theta+\tilde r,\tilde
r)-\sum_{0<|k|\leq d}\frac{i(\mathbb{E}v^k)'(\tilde r)(1-\mu_k(\tilde r))+
i\mathbb{E}v^k(\tilde r)\mu_k'(\tilde r)}{2\pi k}e^{2\pi i k\tilde\theta}.
\end{eqnarray*}
Recalling definitions \eqref{defA1hat} of $\hat A_1$ and
\eqref{defB2hat} of $\hat B_1$, this implies that
\begin{equation}\label{hatA1rewrite}
\begin{split}
\hat A_1=&\mathbb{E}u(\tilde\theta,\tilde r)-\mathbb{E}v(\tilde\theta,\tilde r)+\hat B_1\\
&-\sum_{0<|k|\leq d}\frac{i(\mathbb{E}v^k)'(\tilde r)(1-\mu_k(\tilde r))+i\mathbb{E}v^k(\tilde
r)\mu_k'(\tilde r)}{2\pi k}e^{2\pi i k\tilde\theta}.
\end{split}
\end{equation}
Then we use \eqref{valuesB1hat} and \eqref{valuesmuk}
again, noting that $\mu'_k(\tilde r)=0$ in both regions
$|\tilde r-p/q|\geq 3\beta$
and $|\tilde r-p/q|\leq \beta/2$.
Moreover, we note that for $|\tilde r-p/q|\leq \beta/2$,
$$
\mathbb{E}v_{p,q}(\tilde\theta,\tilde r)=
\mathbb{E}v_{p,q}(\tilde\theta,p/q) +\mathcal{O}(\beta
),$$
$$(\mathbb{E}v^k)'(\tilde r)=(\mathbb{E}v^k)'(p/q)+\mathcal{O}(
\beta
).$$
Define
\begin{equation}\label{defE1}
E_1(\tilde\theta,\tilde r)=-\sum_{0<|k|\leq d}
\frac{i(\mathbb{E}v^k)'(\tilde r)}{2\pi k}e^{2\pi i k\tilde\theta}.
\end{equation}
Analogous $\mathcal{O}(\beta)$ approximations hold for $\mathbb{E}u(\tilde\theta,\tilde r)$ and $\mathbb{E}v(\tilde\theta,\tilde r)$. Then, recalling the definition
\eqref{defE3} of $E_3$, equation \eqref{hatA1rewrite} yields
\begin{equation}\label{valuesA1hat}
\hat A_1=\left\{\begin{array}{lcl}\mathbb{E}u(\tilde\theta,\tilde r)-\mathbb{E}v(\tilde\theta,\tilde r)+E_1(\tilde\theta,\tilde r)&\,
\textrm{ if }&|\tilde r-p/q|\geq 3\beta
,\\ \Delta \mathbb{E}(\tilde\theta,p/q)
+\mathbb{E}v_{p,q}(\tilde\theta,p/q)+E_3(\tilde\theta)+\mathcal{O}(\beta) &\,\textrm{ if
}&|\tilde r-p/q|\leq \beta/2,\end{array}\right.
\end{equation}
where $\mathbb{E}u(\tilde\theta,p/q)-\mathbb{E}v(\tilde\theta,p/q)=\Delta \mathbb{E}(\tilde\theta,p/q)$.
In conclusion, by \eqref{valuesA1hat} and \eqref{valuesB1hat}
we obtain that the first order terms of \eqref{Phiinverse} coincide
with the first order terms of \eqref{NFfar} and \eqref{NFnear} in
each region.
For the $\varepsilon^2$-terms we rename $\hat B_2$ in the following way
\begin{equation}\label{defE2}
\begin{split}
E_2(\tilde\theta,\tilde r)&=\hat B_2|_{\{|\tilde r-p/q|\geq
3\beta
\}}, \\
E_4(\tilde\theta,\tilde r)&=\hat B_2|_{\{|\tilde r-p/q|\leq\beta/2
\}}.
\end{split}
\end{equation}
Now we see that $E_2$ satisfies \eqref{eq:drift}. To avoid
long notation, in the following we do not write explicitly that
expressions $A_i$, $B_i$, $\hat A_i$ and $\hat B_i$ are restricted
to the region $\{|\tilde r-p/q|\geq 3\beta
\}$. We note that since in
this region we have $\hat B_1=0$ by \eqref{valuesB1hat}, recalling
the definition \eqref{defB2hat} of $\hat B_1$ it is clear that
$B_1=\partial_\theta S_1(\tilde\theta+\tilde r,\tilde r)$. Hence, from
definition \eqref{defB2hat} of $\hat B_2$ it is straightforward to see that
\begin{equation}\label{hatB2simple}
\hat B_2=B_2-\partial_\theta^2S_1(\tilde\theta+\tilde r,\tilde r)A_1.
\end{equation}
Recalling that $\hat A_1=A_1+\partial_{\tilde r}S_1(\tilde\theta+\tilde
r,\tilde r)$ and using the definition of $A_1$ in
\eqref{defA1} and the definition \eqref{defB2} of $B_2$,
\begin{equation}\label{B2hatequality}
\begin{split}
E_2(\tilde\theta,\tilde r)=&\,\hat B_2|_{\{|\tilde r-p/q|\geq3\beta
\}}\\
=&\,\mathbb{E} w(\tilde\theta,\tilde r)-\partial_\theta\mathbb{E}v(\tilde\theta,\tilde
r)\partial_{\tilde r}S_1(\tilde\theta,\tilde r) \\
&+\partial_r\mathbb{E}v(\tilde\theta,\tilde r)\partial_\theta S_1(\tilde\theta,\tilde
r)-\partial^2_\theta S_1(\tilde\theta,\tilde r)\partial_{\tilde
r}S_1(\tilde\theta,\tilde r) \\
&-\partial_\theta^2S_1(\tilde\theta+\tilde r,\tilde
r)\left[\mathbb{E}u(\tilde\theta,\tilde r)+\partial_\theta S_1(\tilde\theta,\tilde
r)-\partial_{\tilde r} S_1(\tilde\theta,\tilde r)\right]\\
&-\partial_{\theta}\partial_{\tilde r}S_1(\tilde\theta+\tilde r,\tilde
r)\left[\mathbb{E} v(\tilde\theta,\tilde r)+\partial_\theta S_1(\tilde\theta,\tilde
r)-\partial_\theta S_1(\tilde\theta+\tilde r,\tilde r)\right].
\end{split}
\end{equation}
Since, for $|\tilde r-p/q|\geq3\beta$, $S_1$ satisfies
\eqref{eq:cohomological}, the last row of the definition of $E_2$ vanishes and
the same happens with
\[
\begin{split}
-\partial_\theta \mathbb{E} v(\tilde\theta,\tilde r)\partial_{\tilde r}
S_1(\tilde\theta,\tilde r)-\partial_\theta^2S_1(\tilde\theta,\tilde r)\partial_{\tilde
r}S_1(\tilde\theta,\tilde r)+\partial_\theta^2 S_1(\tilde\theta+\tilde r,\tilde
r)\partial_{\tilde r}S_1(\tilde\theta,\tilde r)&=\\
\partial_{\tilde r}S_1(\tilde\theta,\tilde r)\left( -\partial_\theta \mathbb{E}
v(\tilde\theta,\tilde r)-\partial_\theta^2S_1(\tilde\theta,\tilde r)+\partial_\theta^2
S_1(\tilde\theta+\tilde r,\tilde
r)\right)&=0.
\end{split}
\]
Therefore,
\[
\begin{split}
b(\tilde r)=&\int_0^1 E_2(\tilde\theta,\tilde r)d\tilde\theta\\
=&\int_0^1 \Big(\mathbb{E} w(\tilde\theta,\tilde
r)+\partial_{\tilde r}\mathbb{E}v(\tilde\theta,\tilde
r)\partial_{\theta}S_1(\tilde\theta,\tilde r) \\
&-\partial^2_\theta S_1(\tilde\theta+\tilde r,\tilde
r)\left(\mathbb{E}u(\tilde\theta,\tilde r)+\partial_\theta S_1(\tilde\theta,\tilde
r)\right)\Big)d\tilde \theta.
\end{split}
\]
Using $\partial^2_\theta S_1(\tilde\theta+\tilde r,\tilde
r)=\partial^2_\theta S_1(\tilde\theta,\tilde
r)+\partial_\theta\mathbb{E}v(\tilde\theta,\tilde
r)$ and taking into account that $\int_0^1 \partial^2_\theta
S_1(\tilde\theta,\tilde
r)\partial_\theta S_1(\tilde\theta,\tilde
r)d\tilde\theta=0$, we have that
\[
\begin{split}
b(\tilde r)=&\int_0^1 \Big(\mathbb{E} w(\tilde\theta,\tilde
r)+\partial_{\tilde r}\mathbb{E}v(\tilde\theta,\tilde
r)\partial_{\theta}S_1(\tilde\theta,\tilde r)-\partial_\theta \mathbb{E}v(\tilde\theta,\tilde
r)\mathbb{E} u(\tilde\theta,\tilde
r) \\
&-\partial_\theta \mathbb{E}v(\tilde\theta,\tilde
r)\partial_{\theta}S_1(\tilde\theta,\tilde
r)-\partial^2_{\theta}S_1(\tilde\theta,\tilde r)\mathbb{E}u(\tilde\theta,\tilde
r)\Big)d\tilde \theta.
\end{split}
\]
Integrating by parts, we obtain \eqref{eq:drift}.
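In more detail (a routine step we spell out for convenience), the integration by parts uses only periodicity in $\tilde\theta$, so the boundary terms vanish:
\[
-\int_0^1 \partial^2_{\theta}S_1(\tilde\theta,\tilde r)\,
\mathbb{E}u(\tilde\theta,\tilde r)\,d\tilde\theta
=\int_0^1 \partial_{\theta}S_1(\tilde\theta,\tilde r)\,
\partial_\theta\mathbb{E}u(\tilde\theta,\tilde r)\,d\tilde\theta.
\]
Combining this with the terms $\partial_{\tilde r}\mathbb{E}v\,\partial_{\theta}S_1$ and $-\partial_\theta\mathbb{E}v\,\partial_{\theta}S_1$ gives exactly the bracket $\partial_{\theta}S_1\left(\partial_{\tilde r}\mathbb{E}v-\partial_{\theta}\mathbb{E}v+\partial_{\theta}\mathbb{E}u\right)$ in \eqref{eq:drift}.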
We note that, from the definition \eqref{defS1k} of the Fourier coefficients of
$S_1$, it is clear that $S_1$ is $\mathcal{C}^l$ with respect to $r$. Since it
just has a finite number of nonzero coefficients, it is analytic with respect to
$\theta$. Then, from the definitions \eqref{defE2} of $E_2$ and
$E_4$ and the expression \eqref{defB2hat} of $\hat B_2$, it is clear that both
$E_2$ and $E_4$ are $\mathcal{C}^{l-1}$.
Finally, we bound the $\mathcal{C}^0$-norms of the functions $E_2$, $b(r)$ and
$E_4$, as well as the error terms. To this end, we bound the
$\mathcal{C}^l$ norms of $S_1$ and its derivatives. We will use Lemma
\ref{lemClnorms} and proceed similarly to \cite{BKZ}. We note that
\begin{enumerate}
\item If $\mu_k(\tilde r)\neq 1$ we have $|1-e^{2\pi ik\tilde r}|>M
\beta
|k|$, and thus
$$\left|\frac{1}{1-e^{2\pi ik\tilde r}}\right|<M^{-1}
\beta^{-1}
|k|^{-1}.$$
\item Then, using that $\|f\circ g\|_{\mathcal{C}^l}\leq
C\|f{|_{\textrm{Im}(g)}}\|_{\mathcal{C}^l}\left(1+\|g\|_{\mathcal{C}^l}
^l\right)$, we get that
$$\left\|\frac{1}{1-e^{2\pi ik\tilde r}}\right\|_{\mathcal{C}^l}\leq
M \beta
^{-(l+1)}|k|^{-(l+1)},$$
for some constant $M$ (not necessarily the same as in item 1).
\item Using the rule for the norm of the composition again
and the fact that $\|\mu\|_{\mathcal{C}^l}$ is bounded
independently of $\beta$, we get
$$\|\mu_k(\tilde r)\|_{\mathcal{C}^l}\leq M\beta^{-l}
|k|^{-l},$$
for some constant $M$, and the same bound is obtained
for $\|1-\mu_k(\tilde r)\|_{\mathcal{C}^l}$.
\end{enumerate}
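The lower bound in item 1 rests on the elementary inequality $|1-e^{2\pi ix}|=2|\sin\pi x|\ge 4\,\mathrm{dist}(x,\mathbb{Z})$, using $\sin\pi t\ge 2t$ on $[0,1/2]$. A minimal numerical sketch of this mechanism (Python, purely illustrative; not part of the estimates above):

```python
import numpy as np

# |1 - e^{2 pi i x}| = 2|sin(pi x)|, and sin(pi t) >= 2t on [0, 1/2],
# so the divisor is bounded below by 4 dist(x, Z); inverting this gives
# the small-divisor upper bound of item 1.
def dist_to_integers(x):
    frac = x % 1.0
    return np.minimum(frac, 1.0 - frac)

xs = np.linspace(0.001, 0.999, 2000)
divisors = np.abs(1.0 - np.exp(2j * np.pi * xs))
assert np.all(divisors >= 4.0 * dist_to_integers(xs) - 1e-12)
```

Equality holds at $x=1/2$, where both sides equal $2$.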
Using items $2$ and $3$ above and the fact that $\|\mathbb{E}v^k\|_{\mathcal{C}^l}$ are
bounded, we get that
{\small \begin{eqnarray*}
\left\|\partial_{\tilde r^\alpha}\left[\frac{1-\mu_k(\tilde r)i\mathbb{E}v^k(\tilde r)}{2\pi k(1-e^{2\pi ik\tilde r})}\right]\right\|_{\mathcal{C}^0}&\leq&M_1\sum_{\alpha_1+\alpha_2=\alpha}\frac{1}{2\pi|k|}\|1-\mu_k(\tilde r)\|_{\mathcal{C}^{\alpha_1}}\left\|\frac{1}{1-e^{2\pi ik\tilde r}}\right\|_{\mathcal{C}^{\alpha_2}}\\
&\leq& M_2\beta^{-(\alpha+1)}
|k|^{-\alpha-2}.
\end{eqnarray*}}
Then, by item $2$ of Lemma \ref{lemClnorms}, we obtain
$$ \|S_1\|_{\mathcal{C}^l}\leq M\beta^{-(l+1)}.$$
One can also see that $\|\partial_{\tilde r} S_1\|_{\mathcal{C}^l}\leq
M\|S_1\|_{\mathcal{C}^{l+1}}$ and $\|\partial_\theta S_1\|_{\mathcal{C}^l}\leq
M\|S_1\|_{\mathcal{C}^l}$. In general, one has
\begin{equation}\label{boundderivsS1}
\|\partial_\theta^n\partial_{\tilde r}^m S_1\|_{\mathcal{C}^l}\leq M\beta^{-(l+m+1)}
.
\end{equation}
Now, recalling definitions \eqref{defE2} of $E_2$ and $E_4$,
and using \eqref{B2hatequality},
bound \eqref{boundderivsS1} implies that for $0\leq s\leq l-1$ there exists some
$K>0$ independent of $\varepsilon$ and $\beta$ such that
$$\|E_2\|_{\mathcal{C}^s}\leq K\beta^{-(2s+3)},\qquad \|E_4\|_{\mathcal{C}^0}\leq K\beta^{-(2s+3)}.$$
To bound the $\mathcal{C}^s$ norm, $0\leq s\leq l-1$, of $b(r)$ in \eqref{eq:drift},
we use again \eqref{boundderivsS1} to obtain
$$
\|b\|_{\mathcal{C}^s}\leq K\beta^{-(s+1)}
.$$
Similarly, and taking into account that for $n=1,2$ we have
$$
\|\mathbb{E}u\|_{\mathcal{C}^{s+n}}\leq K,\qquad \|\mathbb{E}v\|_{\mathcal{C}^{s+n}}\leq K,
$$
because $s\leq l-2$, the error term in
the equation for $\tilde r$ satisfies
\begin{equation}\label{errorr}
\varepsilon^3\hat B_3=\mathcal{O}_s(\varepsilon^3\beta^{-(3s+5)}),
\end{equation}
and the error terms for the equation of $\tilde \theta$,
\begin{equation}\label{errortheta}
\varepsilon^2\hat A_2=\mathcal{O}_s(\varepsilon^2\beta^{-(2s+4)}).
\end{equation}
This completes the proof for the normal forms \eqref{NFfar} and \eqref{NFnear}
(in the latter case, we have to take into account
the extra error term of order $\mathcal{O}(\varepsilon^{1+a})$ caused
by the $\beta$-error term in \eqref{valuesA1hat}).
To prove \eqref{normPhi-Id}, we just need to recall \eqref{Phi} and
use \eqref{boundderivsS1}. Then one obtains
$$\|\Phi-\textrm{Id}\|_{\mathcal{C}^2}\leq M'\varepsilon\|S_1\|_{\mathcal{C}^3}.
$$
\end{proof}
From now on we assume that our deterministic system is
in normal form, and we drop the tildes.
\section{Analysis of the Martingale problem in the strips
of each type}
After performing the change to normal form (Theorem \ref{thm:normal-form}), the
$n$-th iteration of the original map (see \eqref{mapthetanrn}) takes, in both
the Totally Irrational and the Imaginary Rational zones, the form
\begin{equation}\label{eq:NRmap-n}
\begin{split}
\theta_n=&\displaystyle\theta_0+nr_0+\mathcal{O}(n^2\varepsilon),\\
r_n=&\displaystyle r_0+\varepsilon\sum_{k=0}^{n-1}\omega_k[v(\theta_k,r_k)+\varepsilon
v_2(\theta_k,r_k)]\\
&+\varepsilon^2\sum_{k=0}^{n-1}E_2(\theta_k,r_k)+\mathcal{O}(n\varepsilon^{
2+a}),
\end{split}
\end{equation}
where $v_2(\theta,r)$ is a given function which can be written explicitly in
terms of $v(\theta,r)$ and $S_1(\theta,r)$.
\subsection{The TI case}\label{sec:TI-case}
Recall that we have defined $\gamma\in (4/5,4/5+1/40)$ and $\nu=1/4$. A
strip $I_\gamma$ is a totally irrational segment if every rational
$p/q\in I_\gamma$ satisfies $|q|>\varepsilon^{-b}$, where $0<2b<\nu$; we define
$b=(\nu-\rho)/2$ for a certain $0<\rho<\nu$. In the following we shall
assume that $\rho$ satisfies an extra condition, which ensures that certain
inequalities involving the degree of differentiability $l$ of certain
$\mathcal{C}^l$ functions are satisfied. Assume that $l\geq 6$ and define the
constant
\begin{equation}\label{constantR}
R= \frac{l-5}{l-2}>0.
\end{equation}
We choose $\rho$ satisfying
\begin{equation}\label{conditionbeta}
\rho=R\nu.
\end{equation}
\begin{lem}\label{lemmasigma2}
Fix $\tau\in (0,1/40)$ and let $g$ be a $\mathcal{C}^l$ function, $l\ge 6$.
Suppose $r^*$ satisfies the following condition: if for some
rational $p/q$ we have $|r^*-p/q|<\varepsilon^\nu$, then $|q|>\varepsilon^{-b}$.
Then,
for $\varepsilon>0$ small enough there is
$N\le \varepsilon^{-(\nu+b+2\tau)}$
such that for some $K$ independent of $\varepsilon$ and any $\theta^*$ we have
\[\left|N\,\int_0^1g(\theta,r^*)d\theta-\sum_{k=0}^{N-1}g(\theta^*+kr^*,
r^*)\right|\leq K\varepsilon^{\tau}. \]
\end{lem}
\begin{proof} Denote $g_0(r)=\int_0^1g(\theta,r)d\theta$.
Expand $g(\theta,r)$ in its Fourier series, i.e.
$$
g(\theta,r)=g_0(r)+\sum_{m\in \mathbb{Z}\setminus \{0\}} g_m(r) e^{2\pi im\theta}
$$
for some $g_m(r):\mathbb{R}\to \mathbb{C}$. Then we have
\begin{equation}\label{rewritesum}
\begin{split}
\sum_{k=0}^{N-1}&(g(\theta^*+kr^*,r^*)-g_0(r^*))=
\sum_{k=0}^{N-1}\sum_{m\in \mathbb{Z}\setminus \{0\}}g_m(r^*) e^{2\pi
im(\theta^*+kr^*)}\\
=&\sum_{k=0}^{N-1}\sum_{1\leq|m|\leq[\varepsilon^{-b}]}g_m(r^*) e^{2\pi
im(\theta^*+kr^*)}+\sum_{k=0}^{N-1}\sum_{|m|\geq[\varepsilon^{-b}]}g_m(r^*) e^{2\pi
im(\theta^*+kr^*)}\\
=&\sum_{1\leq|m|\leq[\varepsilon^{-b}]}g_m(r^*)e^{2\pi im\theta^*}\sum_{k=0}^{N-1}
e^{2\pi imkr^*}+\sum_{k=0}^{N-1}
\sum_{|m|\geq[\varepsilon^{-b}]}g_m(r^*) e^{2\pi im(\theta^*+kr^*)}\\
=&\sum_{1\leq|m|\leq[\varepsilon^{-b}]}g_m(r^*)e^{2\pi im\theta^*}
\frac{e^{2\pi iNmr^*}-1}{e^{2\pi imr^*}-1}+
\sum_{k=0}^{N-1}\sum_{|m|\geq[\varepsilon^{-b}]}
g_m(r^*) e^{2\pi im(\theta^*+kr^*)}.\\
\end{split}
\end{equation}
To bound the first sum in \eqref{rewritesum} we distinguish the following
cases:
\begin{itemize}
\item If $r^*$ is a rational $p/q$, we know that $|q|>\varepsilon^{-b}$.
\begin{itemize}
\item
If $|q|\le \varepsilon^{-(\nu+b+2\tau)}$, then pick $N=|q|$ and the first sum
vanishes.
\item If $|q|>\varepsilon^{-(\nu+b+2\tau)}$, then, by the definition of $r^*$, for
any $s/m$ with $|m|<\varepsilon^{-b}$ we have
$|m r^*-s|>\varepsilon^\nu$. By the pigeonhole principle
there exist integers $0<N=\tilde q<\varepsilon^{-(\nu+b+2\tau)}$ and $\tilde p$ such
that
$|\tilde qr^*-\tilde p|\le 2 \varepsilon^{\nu+b+2\tau}$.
\end{itemize}
\item If $r^*$ is irrational, consider a continued fraction expansion
$p_n/q_n\to r^*$ as $n\to \infty$. Choose $p'/q'=p_n/q_n$ with
$n$ such that $q_{n+1}>\varepsilon^{-(\nu+b+2\tau)}$. This implies that
$|q'r^*-p'|<1/|q_{n+1}|\le \varepsilon^{\nu+b+2\tau}$.
The same argument as above
shows that for any value $|m|<\varepsilon^{-b}$ we have
\hbox{$|m r^*-s|>\varepsilon^\nu$}.
\end{itemize}
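Both bullet points produce a denominator $N=\tilde q$ making $|\tilde q r^*-\tilde p|$ small; in the irrational case this comes from continued fraction convergents and the classical estimate $|q_n r^*-p_n|<1/q_{n+1}$. A short Python sketch checking this estimate for the golden mean (the helper \texttt{convergents} is ours, for illustration only):

```python
import math

def convergents(x, n_terms=12):
    """Continued fraction convergents p_n/q_n of x (standard recurrences)."""
    a, t = [], x
    for _ in range(n_terms):
        ai = math.floor(t)
        a.append(ai)
        if t == ai:
            break
        t = 1.0 / (t - ai)
    ps, qs = [0, 1], [1, 0]          # p_{-2}, p_{-1} and q_{-2}, q_{-1}
    for ai in a:
        ps.append(ai * ps[-1] + ps[-2])
        qs.append(ai * qs[-1] + qs[-2])
    return list(zip(ps[2:], qs[2:]))

r_star = (math.sqrt(5) - 1) / 2      # golden mean: q_n are Fibonacci numbers
cvgts = convergents(r_star)
for (p, q), (_, q_next) in zip(cvgts, cvgts[1:]):
    # The classical bound |q_n r* - p_n| < 1/q_{n+1} used in the proof.
    assert abs(q * r_star - p) < 1.0 / q_next
```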
Let $N$ be as above. Then, since $|m|\leq \varepsilon^{-b}$,
\[\left|\sum_{1\leq|m|\leq[\varepsilon^{-b}]}\ g_m(r^*)\ e^{2\pi im\theta^*}\
\frac{e^{2\pi iNmr^*}-1}{e^{2\pi imr^*}-1}\right|\leq 2
\varepsilon^{\tau}\sum_{1\leq|m|\leq[\varepsilon^{-b}]}|g_m(r^*)|.\]
Since $g(\theta,r)$ is $\mathcal{C}^l$, its Fourier
coefficients satisfy $|g_m(r^*)|\le C|m|^{-l},\ m\ne 0$. Thus we can bound the
first sum in \eqref{rewritesum} by
\[
\left|\sum_{1\leq|m|\leq[\varepsilon^{-b}]}\ g_m(r^*)\ e^{2\pi im\theta^*}\
\frac{e^{2\pi iNmr^*}-1}{e^{2\pi imr^*}-1}\right|
\leq K\varepsilon^{\tau}\sum_{1\leq|m|\leq[\varepsilon^{-b}]}\frac{1}{|m|^{2}}\leq K\varepsilon^\tau.
\]
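The decay $|g_m|\le C|m|^{-l}$ of the Fourier coefficients of a $\mathcal{C}^l$ function, which drives both bounds here, can be observed on an explicit example: the periodization of $g(\theta)=\theta^2(1-\theta)^2=B_4(\theta)+\tfrac1{30}$ satisfies $|g_m|=\tfrac{3}{2\pi^4}|m|^{-4}$ exactly, by the Fourier series of the Bernoulli polynomial $B_4$; this is consistent with (indeed better than) the $\mathcal{C}^l$ bound, since the periodization is $\mathcal{C}^2$. A Python check (illustrative only, not part of the proof):

```python
import numpy as np

N = 4096
theta = np.arange(N) / N
g = (theta * (1 - theta)) ** 2        # periodization is C^2 on the torus

# DFT/N equals the sum of aliased Fourier coefficients, and the aliased
# tail is negligible here, so g_hat[m] approximates g_m very accurately.
g_hat = np.fft.fft(g) / N

for m in range(1, 20):
    exact = 3.0 / (2.0 * np.pi ** 4 * m ** 4)   # from B_4's Fourier series
    assert abs(abs(g_hat[m]) - exact) < 1e-10
```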
To bound the second sum, we use again the bound for the Fourier coefficients
$g_m(r^*)$
\begin{equation}\label{secondsum}
\left|\sum_{k=0}^{N-1}\sum_{|m|\geq[\varepsilon^{-b}]}g_m(r^*) e^{2\pi
im(\theta+kr^*)}\right|\leq N\sum_{|m|\geq[\varepsilon^{-b}]}\frac{1}{|m|^{l}}
\le K \varepsilon^{(l-1)b-(\nu+b+2\tau)}.
\end{equation}
Taking into account that $b=(\nu-\rho)/2$, $\rho= R\nu$
where $R=(l-5)/(l-2)$, $\nu=1/4$ and $\tau\in (0,1/40)$, one obtains
\[
\left|\sum_{k=0}^{N-1}\sum_{|m|\geq[\varepsilon^{-b}]}g_m(r^*) e^{2\pi
im(\theta+kr^*)}\right|\leq
K\varepsilon^{\frac{\nu}{2}-2\tau}\leq K\varepsilon^{\tau}.
\]
\end{proof}
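The phenomenon behind Lemma \ref{lemmasigma2} is easy to observe numerically: for a badly approximable $r^*$ such as the golden mean and a zero-mean trigonometric polynomial $g$, the Birkhoff sums $\sum_{k<N}g(\theta^*+kr^*)$ stay bounded uniformly in $N$, since each mode is a finite geometric series with a controlled divisor. A hedged Python sketch (the choices of $g$, $\theta^*$, $r^*$ are ours, purely illustrative):

```python
import numpy as np

def g(theta):
    # Zero-mean trigonometric polynomial: int_0^1 g(theta) d theta = 0.
    return np.cos(2 * np.pi * theta) + 0.3 * np.sin(4 * np.pi * theta)

r_star = (np.sqrt(5) - 1) / 2        # golden mean: badly approximable
theta_star = 0.1234
k = np.arange(50_000)
partial_sums = np.cumsum(g(theta_star + k * r_star))

# Summing each mode as a geometric series bounds |sum_{k<N} g| by a
# constant depending only on the small divisors |1 - e^{2 pi i m r*}|.
assert np.max(np.abs(partial_sums)) < 4.0
```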
Fix a totally irrational strip $I_\gamma$ and let $(\theta_0,r_0)\in I_\gamma$.
Recall that
$n_\gamma\leq n\leq
s\varepsilon^{-2}$
is either the exit time from $I_\gamma$, that is, the first index such that
$(\theta_{n_\gamma+1},r_{n_\gamma+1})\not\in I_{\gamma}$, or
the final time $n_\gamma=n$.
\begin{lem}\label{lemma:TI:exittime}
Fix $\gamma\in (4/5,4/5+1/40)$. Then, there
exists a constant $C>0$ such that,
\begin{itemize}
\item For any $\delta\in (0, 2(1-\gamma))$ and $\varepsilon>0$ small
enough,
\[
\mathbb{P}\{n_\gamma<\varepsilon^{-2(1-\gamma)+\delta}\}\leq
e^{-\frac{C}{\varepsilon^{\delta}}}.
\]
\item For any $\delta>0$ and $\varepsilon>0$ small
enough,
\[
\mathbb{P}\{\varepsilon^{-2(1-\gamma)-\delta}<n_\gamma<s\varepsilon^{-2}\}\leq
e^{-\frac{C}{\varepsilon^{\delta}}}.
\]
\end{itemize}
\end{lem}
\begin{proof}
We first prove the second statement. Let $\widetilde n_\gamma=[\varepsilon^{-2(1-\gamma)}],$
$n_\delta=[\varepsilon^{-\delta}]$, and
$n_i=i\widetilde n_\gamma$. Then,
\begin{equation}\label{probN-D3}
\begin{split}
\mathbb{P}\left\{n_\gamma>\varepsilon^{-2(1-\gamma)-\delta}\right\}\leq&\,\mathbb{P}\left\{|r_{n_{i+1}}
-r_ { n_i } |\leq
\varepsilon^\gamma\,\textrm{ for all }i=0,\dots, n_\delta-1\right\}\\
\leq&\prod_{i=0}^{n_\delta-1}\mathbb{P}\left\{|r_{n_{i+1}}-r_{n_i}|\leq
\varepsilon^{\gamma}\right\}.
\end{split}
\end{equation}
We have that
\[r_{n_{i+1}}=r_{n_i}+\varepsilon
\sum_{k=0}^{\widetilde n_\gamma-1}\omega_k
v(\theta_{n_i+k},r_{n_i+k})+\mathcal{O}(\widetilde n_\gamma\varepsilon^2).\]
Taking also into account that
$\theta_{n_i+k}=\theta_{n_i}+kr_{n_i}+\mathcal{O}
(\widetilde n_\gamma^2\varepsilon)$ for $0\leq k\leq \widetilde n_\gamma$, we can write
\begin{equation}\label{eqHni}
r_{n_{i+1}}=r_{n_i}+\varepsilon
\sum_{k=0}^{\widetilde n_\gamma-1}\omega_k v\left(\theta_{n_i}+k
r_{n_i},r_{n_i}\right)+\mathcal{O}(\widetilde n_\gamma^3\varepsilon^2).
\end{equation}
Define
\begin{equation}\label{def:xi}
\xi=\frac{1}{\sqrt{\widetilde n_\gamma}}\sum_{k=0}^{\widetilde n_\gamma-1}\omega_k
v(\theta_{n_i}+kr_{n_i},r_{n_i}).
\end{equation}
As $\widetilde n_\gamma\to\infty$ (i.e., as $\varepsilon\to 0$), the random
variable $\xi$ converges in distribution to a normal random variable
$\mathcal{N}(0,\sigma^2(\theta_{n_i},r_{n_i}))$ with
\[
\sigma^2(\theta_{n_i},r_{n_i})=\frac{1}{\widetilde
n_\gamma}\sum_{k=0}^{\widetilde n_\gamma-1}v^2(\theta_{
n_i}+kr_{n_i},r_{n_i}).\]
Then it suffices to use Lemma \ref{lemmasigma2} (if $\widetilde n_\gamma\geq
\varepsilon^{-(\nu+b+2\tau)}$, one splits the sum into several blocks) and
Hypothesis {\bf [H1]} to ensure that
$\sigma^2(\theta_{n_i},r_{n_i})\geq K>0$ for some constant $K$. Then
\eqref{eqHni} yields
\[
r_{n_{i+1}}-r_{n_i}=\varepsilon \widetilde n_\gamma^{1/2}\xi+\mathcal{O}
(\widetilde n_\gamma^3\varepsilon^2).\]
Then, using that $\gamma\in (4/5,4/5+1/40)$,
\[
\mathbb{P}\{|r_{n_{i+1}}-r_{n_i}|\leq
\varepsilon^{\gamma}\}=\mathbb{P}\{|\xi+\mathcal{O}(\varepsilon^{5\gamma-4})|\leq
1\}\leq\mathbb{P}\{|\xi|\leq 2\}.\]
Since $\xi$ converges in
distribution to $\mathcal{N}(0,\sigma^2(\theta_{n_i},r_{n_i}))$ and
$\sigma^2(\theta_{n_i},r_{n_i})\geq K>0$, one has
\[\mathbb{P}\{|r_{n_{i+1}}-r_{n_i}|\leq \varepsilon^{\gamma}\}\leq\rho_0,\]
for some $0<\rho_0<1$ (not to be confused with the exponent $\rho$ above). Using
this in \eqref{probN-D3} one obtains the claim of
the lemma with $C=-\log\rho_0>0$.
For the first statement, note that $\mathbb{P}\{n_\gamma<\varepsilon^{-(1-\gamma)}/2\}=0$: since
$|r_{k+1}-r_k|\leq 2\varepsilon$, leaving a strip of width $\varepsilon^\gamma$ requires at least
$\varepsilon^{-(1-\gamma)}/2$ iterations. Thus, we only need to analyze
$\mathbb{P}\{\varepsilon^{-(1-\gamma)}/2\leq n_\gamma<\varepsilon^{-2(1-\gamma)+\delta}\}$, which is
equivalent to
\[
\mathbb{P}\{\exists\, n\in [\varepsilon^{-(1-\gamma)}/2,\varepsilon^{-2(1-\gamma)+\delta}):
|r_n-r_0|\geq \varepsilon^\gamma\}.
\]
\]
Proceeding as before, for $\varepsilon>0$ small enough,
\[
\begin{split}
\mathbb{P}\left\{ |r_{n}-r_0|\geq \varepsilon^\gamma\right\}&\leq
\mathbb{P}\left\{\left|\varepsilon\sum_{k=0}^{n-1}\omega_kv(\theta_0+r_0k,
r_0)+\mathcal{O}\left(\varepsilon^2n^3\right)\right|\geq\varepsilon^\gamma\right\} \\
&\leq \mathbb{P}\left\{\left|\xi+\mathcal{O}(\varepsilon
n^{5/2})\right|\geq\varepsilon^{\gamma-1}n^{-1/2}\right\}
\end{split}
\]
where $\xi$ is the random variable defined in \eqref{def:xi} with $n_i=0$ and
$\widetilde n_\gamma$ replaced by $n$. Now, using
that $\gamma\in (4/5,4/5+1/40)$ and $n\in
[\varepsilon^{-(1-\gamma)}/2,\varepsilon^{-2(1-\gamma)+\delta})$,
we have that
\[
\mathbb{P}\left\{\left|\xi+\mathcal{O}\left(\varepsilon
n^{5/2}\right)\right|\geq\varepsilon^{\gamma-1}n^{-1/2}\right\}\leq
\mathbb{P}\left\{|\xi|\geq\frac{\varepsilon^{-\delta/2}}{2}\right\}.
\]
By Lemma \ref{main lemma} and Hypothesis {\bf [H1]}, $\xi$
converges to a normal random variable with $\sigma^2>0$
(with lower bound independent of $\varepsilon$) as
$\varepsilon\rightarrow 0$. Thus,
\[
\mathbb{P}\left\{ |r_{n}-r_0|\geq \varepsilon^\gamma\right\} \leq e^{-\frac{C}{\varepsilon^\delta}}
\]
for some $C>0$ independent of $\varepsilon$. Then, since
$\sharp\left([\varepsilon^{-(1-\gamma)}/2,\varepsilon^{-2(1-\gamma)+\delta})\cap\mathbb{Z}\right)
\sim \varepsilon^{-2(1-\gamma)+\delta}$,
\[
\mathbb{P}\{\exists\, n\in [\varepsilon^{-(1-\gamma)}/2,\varepsilon^{-2(1-\gamma)+\delta}):
|r_n-r_0|\geq \varepsilon^\gamma\}\leq e^{-\frac{C}{\varepsilon^\delta}},
\]
after possibly taking a smaller $C>0$.
\end{proof}
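The proof hinges on the variance identity $\mathbb{E}\xi^2=\frac{1}{\widetilde n_\gamma}\sum_k v^2$ for the sum \eqref{def:xi}, which is exact whenever the $\omega_k$ are independent with mean $0$ and variance $1$. A Monte Carlo sketch in Python (the function $v$ below is an illustrative stand-in, not the $v$ of the model):

```python
import numpy as np

rng = np.random.default_rng(0)

def v(theta):
    return 1.0 + 0.5 * np.cos(2 * np.pi * theta)   # illustrative, never zero

n = 200
theta0, r0 = 0.3, (np.sqrt(5) - 1) / 2
vals = v(theta0 + np.arange(n) * r0)               # v along the unperturbed orbit
sigma2 = np.mean(vals ** 2)                        # predicted variance of xi

M = 20_000
omega = rng.choice([-1.0, 1.0], size=(M, n))       # Rademacher weights
xi = (omega @ vals) / np.sqrt(n)

# E[xi] = 0 and E[xi^2] = (1/n) sum_k v^2 exactly for such omega_k;
# the Monte Carlo averages should match up to sampling error.
assert abs(xi.mean()) < 0.05
assert abs(np.mean(xi ** 2) - sigma2) < 0.1
```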
Now we state the main lemma of this section which shows the convergence of the
random map to a diffusion process in the strip $I_{\gamma}$. To this end,
we define the functions $b$ and $\sigma$ as in \eqref{eq:drift-variance}.
\begin{lem}\label{lemmaexpectation}
Let $\nu$, $b=(\nu-\rho)/2$ and $\rho$
satisfy \eqref{conditionbeta} and $\gamma\in (4/5,4/5+1/40)$. Let
$f:\mathbb{R}\rightarrow\mathbb{R}$ be any $\mathcal{C}^l$ function with $l\ge
3$ and $\|f\|_{\mathcal{C}^3}\leq C$ for some constant $C>0$ independent of $\varepsilon$.
Then there exists $\zeta>0$ such that
\begin{eqnarray*}
&&\mathbb{E}\Bigg(f(r_{n_\gamma})- \varepsilon^2
\sum_{k=0}^{n_\gamma-1}\left(b(r_k)f'(r_k)+\frac{\sigma^2(r_k)}{2}
f''(r_k)\right) \Bigg)\\
&&\qquad \qquad \qquad \qquad \qquad \qquad \qquad
- f(r_0)=\mathcal{O}(\varepsilon^{2\gamma+\zeta}).
\end{eqnarray*}
\end{lem}
\begin{proof}
Let us denote
\begin{equation}\label{defeta}
\eta=f(r_{n_\gamma})-\varepsilon^2
\sum_{k=0}^{n_\gamma-1}\left(b(r_k)f'(r_k)+\frac{ \sigma^2(r_k)}{2}
f''(r_k)\right).
\end{equation}
Writing
\[f(r_{n_\gamma})=f(r_0)+
\sum_{k=0}^{n_\gamma-1}\left(f(r_{k+1})-f(r_k)\right)\]
and Taylor expanding each term inside the sum, we get
\begin{eqnarray*}
f(r_{n_\gamma})&=&f(r_0)+\sum_{k=0}^{n_\gamma-1}\Big[
f'(r_k)(r_{k+1}-r_k)\\
&&+\frac{1}{2}f''(r_k)(r_{k+1}-r_k)^2+\mathcal{O}(\varepsilon^3)\Big].
\end{eqnarray*}
Substituting this in \eqref{defeta} we get
\begin{equation}\label{eta-version2}
\begin{split}
\eta=&f(r_0)+\sum_{k=0}^{n_\gamma-1}
\Big[f'(r_k)(r_{k+1}-r_k)+
\frac{1}{2}f''(r_k)(r_{k+1}-r_k)^2\Big]\\
&-\varepsilon^2\sum_{k=0}^{n_\gamma-1}
\left[b(r_k)f'(r_k)+\frac{\sigma^2(r_k)}{2}
f''(r_k)\right ] +\sum_{k=0}^{n_\gamma-1}\mathcal{O}\left(\varepsilon^3\right).
\end{split}
\end{equation}
Using \eqref{eq:NRmap-n} we can write
\[
\begin{split}
r_{k+1}-r_k&=\varepsilon\omega_k[v(\theta_k,r_k)+\varepsilon
v_2(\theta_k,r_k)]+\varepsilon^2E_2(\theta_k,r_k)
+\mathcal{O}(\varepsilon^{2+a})\\
(r_{k+1}-r_k)^2&=\varepsilon^2v^2(\theta_k,r_k)+\mathcal{O}(\varepsilon^3).
\end{split}
\]
Thus, \eqref{eta-version2} can be written as
\begin{equation}\label{eta-version3}
\begin{split}
\eta=&f(r_0)+\varepsilon\sum_{k=0}^{n_\gamma-1}
f'(r_k)\omega_k\left[v(\theta_k,r_k)
+\varepsilon v_2(\theta_k,r_k)\right]\\
&+\varepsilon^2\sum_{k=0}^{n_\gamma-1}
f'(r_k)\left[E_2(\theta_k,r_k)-b(r_k)\right]
\\
&+\frac{\varepsilon^2}{2}\sum_{k=0}^{n_\gamma-1}f''(r_k)
\left[v^2(\theta_k,r_k)-\sigma^2(r_k)\right]\\
&+\sum_{k=0}^{n_\gamma-1}\mathcal{O}(\varepsilon^{2+a}).
\end{split}
\end{equation}
Note first that since $\omega_k$ is independent of
$(\theta_k,r_k)$ and $\mathbb{E}(\omega_k)=0$, we have
\[
\begin{split}
\mathbb{E}(\omega_kf'(r_k)[v(\theta_k,r_k)+\varepsilon v_2(\theta_k,r_k)])&=\\
\mathbb{E}(\omega_k)\mathbb{E}(f'(r_k)[v(\theta_k,r_k)+\varepsilon v_2(\theta_k,r_k)])&=0
\end{split}
\]
for all $k\in\mathbb{N}$. So, we do not need to analyze the term in the first
row.
Using the law of total expectation and taking $\delta>0$ small enough, we
split $\mathbb{E}(\eta)$ as
\begin{equation}\label{def:TI:expectation:splittingexit}
\begin{split}
\mathbb{E}\left(\eta\right)=&\mathbb{E}\left(\eta\,|\,\varepsilon^{-2(1-\gamma)+\delta}\leq
n_\gamma\leq \varepsilon^{-2(1-\gamma)-\delta}\right)
\mathbb{P}\left\{\varepsilon^{-2(1-\gamma)+\delta}\leq n_\gamma\leq
\varepsilon^{-2(1-\gamma)-\delta}\right\}\\
+&\mathbb{E}\left(\eta\,|\,n_\gamma< \varepsilon^{-2(1-\gamma)+\delta}\right)
\mathbb{P}\left\{n_\gamma<\varepsilon^{-2(1-\gamma)+\delta}\right\}\\
+&\mathbb{E}\left(\eta\,|\,\varepsilon^{-2(1-\gamma)-\delta}<n_\gamma\leq s\varepsilon^{-2}\right)
\mathbb{P}\left\{\varepsilon^{-2(1-\gamma)-\delta}<n_\gamma\leq s\varepsilon^{-2}\right\}.
\end{split}
\end{equation}
We treat first the second and third rows. Taking into
account that
\[
\left|\mathbb{E}\left(\eta-f(r_0)\,|\,n_\gamma< \varepsilon^{-2(1-\gamma)+\delta}\right)\right|
\leq K\varepsilon^2n_\gamma\leq
K\varepsilon^{2\gamma+\delta}
\]
and using the first statement
of Lemma
\ref{lemma:TI:exittime}, we obtain the bound needed for the second row of
\eqref{def:TI:expectation:splittingexit}. For the third row, it is enough to
use the second statement of Lemma \ref{lemma:TI:exittime} and
\[
\left|\mathbb{E}\left(\eta-f(r_0)\,|\,
n_\gamma\in\left(
\varepsilon^{-2(1-\gamma)-\delta},s\varepsilon^{-2}\right]\right)\right|\leq K\varepsilon^2n_\gamma\leq
Ks.
\]
For the first row in \eqref{def:TI:expectation:splittingexit}, we need more
accurate estimates; namely, upper bounds for
\begin{equation}\label{def:As}
\begin{split}
A_1&=\varepsilon^2\sum_{k=0}^{n_\gamma-1}
f'(r_k)\left[E_2(\theta_k,r_k)-b(r_k)\right], \\
A_2&=\frac{\varepsilon^2}{2}\sum_{k=0}^{n_\gamma-1}f''(r_k)
\left[v^2(\theta_k,r_k)-\sigma^2(r_k)\right], \\
A_3&=\sum_{k=0}^{n_\gamma-1}\mathcal{O}\left(\varepsilon^{2+a}\right).
\end{split}
\end{equation}
Here $\varepsilon^{-2(1-\gamma)+\delta}\leq
n_\gamma\leq \varepsilon^{-2(1-\gamma)-\delta}$.
For the last term $A_3$, it is enough to use
\begin{equation}\label{errortermsum1-small}
|A_3|\leq \left|\sum_{k=0}^{n_\gamma-1}\mathcal{O}(\varepsilon^{2+a})
\right|\leq K\varepsilon^{2+a}n_\gamma\leq K\varepsilon^{2\gamma+d},
\end{equation}
where $d=a-\delta>0$ due to smallness of $\delta$ and $K$ is independent of
$\varepsilon$.
The terms $A_1$ and $A_2$ are bounded analogously. We show how to bound the
first one. Consider the constant $N$ given by Lemma \ref{lemmasigma2}. Then,
we write $n_\gamma$ as $n_\gamma=P_\gamma N+Q_\gamma$ for some $P_\gamma$ and $0\leq
Q_\gamma<N$ and $A_1$ as $A_1=A_{11}+A_{12}$ with
\[
\begin{split}
A_{11}&= \varepsilon^2\sum_{k=0}^{P_\gamma-1}
\sum_{j=0}^{N-1}f'(r_{kN+j})\left[E_2(\theta_{kN+j},
r_{kN+j})-b(r_{kN+j})\right ], \\
A_{12}&=\varepsilon^2\sum_{j=0}^{Q_\gamma-1}
f'(r_{P_\gamma N+j})\left[E_2(\theta_{P_\gamma N+j},r_{P_\gamma
N+j})-b(r_{P_\gamma N+j})\right].
\end{split}
\]
The term $A_{12}$ can be bounded as $|A_{12}|\leq K\varepsilon^{2}Q_\gamma$. Now, by Lemma
\ref{lemmasigma2}, $Q_\gamma<N\leq \varepsilon^{-(\nu+b+2\tau)}$, which implies
\[
|A_{12}|\leq K\varepsilon^{2-\nu-b-2\tau}\leq K
\varepsilon^{2\gamma+\tau}\varepsilon^{2(1-\gamma)-\nu-b-3\tau}.
\]
Thus, it suffices to check that $2(1-\gamma)-\nu-b-3\tau\geq 0$. Using that
$\nu=1/4$, $\gamma\in (4/5,4/5+1/40)$ and \eqref{conditionbeta}, we have
\[
2(1-\gamma)-\nu-b\geq \frac{1}{160}.
\]
Therefore, taking $\tau\in (0,10^{-4})$, we have $2(1-\gamma)-\nu-b-3\tau\geq 0$.
For the term $A_{11}$ we use \eqref{eq:NRmap-n} to obtain
\[
\begin{split}
A_{11}&= \varepsilon^2\sum_{k=0}^{P_\gamma-1}
\sum_{j=0}^{N-1}f'(r_{kN})\left[E_2(\theta_{kN}+jr_{kN},
r_{kN})-b(r_{kN})\right]+\mathcal{O}(P_\gamma N^3\varepsilon^3) \\
&= \varepsilon^2\sum_{k=0}^{P_\gamma-1}
f'(r_{kN})\sum_{j=0}^{N-1}\left[E_2(\theta_{kN}+jr_{kN},
r_{kN})-b(r_{kN})\right]+\mathcal{O}(P_\gamma N^3\varepsilon^3).
\end{split}
\]
Now, using Lemma \ref{lemmasigma2}, we have
\[
|A_{11}|\leq K\left(\varepsilon^{2+\tau}P_\gamma+P_\gamma N^3\varepsilon^3\right),
\]
for some constant $K>0$ independent of $\varepsilon$. Using that
$P_\gamma$ and $N$ satisfy
$$
P_\gamma N\leq n_\gamma\leq \varepsilon^{-2(1-\gamma)-\delta}
$$ and $\gamma\in (4/5,4/5+1/40)$, we have that
\begin{equation}
|A_{11}|\leq K\left(\varepsilon^{2\gamma+\tau-\delta}+\varepsilon^{6\gamma-3-3\delta}\right)\leq
K\left(\varepsilon^{2\gamma+\tau-\delta}+\varepsilon^{2\gamma+1/5-3\delta}\right).
\end{equation}
Proceeding analogously, one can bound $A_2$. Thus, it is enough to take
$\delta<\tau,$ $\delta<\frac{1}{15}$ and
\[
\zeta=\min\left\{\tau-\delta,\frac{1}{5}-3\delta ,a-\delta\right\}
\]
to obtain that, for $n_\gamma\in (\varepsilon^{-2(1-\gamma)+\delta},
\varepsilon^{-2(1-\gamma)-\delta})$,
\[
\eta=f(r_0)+\varepsilon\sum_{k=0}^{n_\gamma-1}f'(r_k)\omega_k\left[
v(\theta_k,r_k)+\varepsilon v_2(\theta_k,r_k)\right]+ \mathcal{O}(\varepsilon^{2\gamma+\zeta})
\]
and therefore
\[
\mathbb{E}\left(\eta\,|\,\varepsilon^{-2(1-\gamma)+\delta}\leq
n_\gamma\leq \varepsilon^{-2(1-\gamma)-\delta}\right) \times
\]
\[
\mathbb{P}\{\varepsilon^{-2(1-\gamma)+\delta}\leq n_\gamma\leq
\varepsilon^{-2(1-\gamma)-\delta}\}=f(r_0)+ \mathcal{O}(\varepsilon^{2\gamma+\zeta}).
\]
This completes the proof of the lemma.
\end{proof}
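In the driftless toy case $b=0$, $f(r)=r^2$, Lemma \ref{lemmaexpectation} reduces to $\mathbb{E}(r_n^2)-r_0^2\approx\varepsilon^2\sum_{k<n}\sigma^2(r_k)$, which can be confirmed by direct simulation of a map of the form \eqref{eq:NRmap-n} without the $\mathcal{O}(\varepsilon^2)$ terms. A hedged Python sketch (all model choices, in particular $v$ and the parameters, are ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def v(theta):
    return 1.0 + 0.5 * np.cos(2 * np.pi * theta)

eps, n, M = 0.01, 500, 4000
sigma2 = 1.0 + 0.5 ** 2 / 2            # int_0^1 v(theta)^2 d theta = 1.125

theta = np.full(M, 0.2)
r = np.full(M, (np.sqrt(5) - 1) / 2)   # start in a "totally irrational" regime
r0 = r.copy()
for _ in range(n):
    omega = rng.choice([-1.0, 1.0], size=M)
    theta, r = theta + r, r + eps * omega * v(theta)

# Martingale-problem prediction with f(r) = r^2 (f'' = 2, no drift term):
predicted = eps ** 2 * n * sigma2
observed = np.mean(r ** 2 - r0 ** 2)
assert abs(observed - predicted) < 0.3 * predicted
```

The generous tolerance accounts both for the Monte Carlo error and for the Birkhoff-average error in replacing $\frac1n\sum_k v^2(\theta_k)$ by $\int_0^1 v^2$.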
\subsection{The IR case}\label{sec:IR-case}
The ideas to deal with Imaginary Rational strips are essentially the same as
in the Totally Irrational case. Recall that after performing the change
to normal form (Theorem \ref{thm:normal-form}), we are dealing with
\eqref{eq:NRmap-n}.
We also recall that given an imaginary rational strip $I_\gamma$ there exists a
unique $r^*\in I_\gamma$, with $r^*=p/q$ and $|q|<\varepsilon^{-b}$, in its
$\varepsilon^\nu$--neighborhood.
Fix an Imaginary Rational strip $I_\gamma$ and
let $(\theta_0,r_0)\in I_\gamma$. Recall that
$n_\gamma\leq s\varepsilon^{-2}$ is either
the exit time from $I_\gamma$, that is, the first index such that
$(\theta_{n_\gamma+1},r_{n_\gamma+1})\not\in I_\gamma$, or the final time $n_\gamma=n$. One
has estimates for the exit time analogous to the ones in Lemma
\ref{lemma:TI:exittime}.
\begin{lem}\label{lemma:IR:exittime}
Fix $\gamma\in (4/5,4/5+1/40)$. Then, there
exists a constant $C>0$ such that,
\begin{itemize}
\item For any $\delta\in (0, 2(1-\gamma))$ and $\varepsilon>0$ small
enough,
\[
\mathbb{P}\{n_\gamma<\varepsilon^{-2(1-\gamma)+\delta}\}\leq
e^{-{C\varepsilon^{-\delta}}}.
\]
\item For any $\delta>0$ and $\varepsilon>0$ small
enough,
\[
\mathbb{P}\{\varepsilon^{-2(1-\gamma)-\delta}<n_\gamma<s\varepsilon^{-2}\}\leq
e^{-{C\varepsilon^{-\delta}}}.
\]
\end{itemize}
\end{lem}
\begin{proof}
We prove the second statement. The first one can be proved following the same
lines as in Lemma \ref{lemma:TI:exittime} and the modifications that we use to
prove the second statement. As in Lemma \ref{lemma:TI:exittime}, we
define $\widetilde n_\gamma=[\varepsilon^{-2(1-\gamma)}]$,
$n_\delta=[\varepsilon^{-\delta}]$, and
$n_i=i\widetilde n_\gamma$, and we use
\[
\begin{split}
\mathbb{P}\left\{n_\gamma>\varepsilon^{-2(1-\gamma)-\delta}\right\}\leq&\,\mathbb{P}\left\{|r_{n_{i+1}}
-r_ { n_i } |\leq
\varepsilon^\gamma\,\textrm{ for all }i=0,\dots, n_\delta-1\right\}\\
\leq&\prod_{i=0}^{n_\delta-1}\mathbb{P}\left\{|r_{n_{i+1}}-r_{n_i}|\leq
\varepsilon^{\gamma}\right\}.
\end{split}
\]
We have
\begin{equation}\label{eqHni:IR}
r_{n_{i+1}}=r_{n_i}+\varepsilon
\sum_{k=0}^{\widetilde n_\gamma-1}\omega_k v\left(\theta_{n_i}+k
r_{n_i},r_{n_i}\right)+\mathcal{O}(\widetilde n_\gamma^3\varepsilon^2).
\end{equation}
Considering $\xi$ defined in \eqref{def:xi}, we want to show that as $\widetilde
n_\gamma\to\infty$, it converges in distribution to a normal
random variable
$\mathcal{N}(0,\sigma^2(\theta_{n_i},r_{n_i}))$ with positive variance.
Using Lemma \ref{main lemma}, we need a lower bound for
\[
\sigma^2(\theta_{n_i},r_{n_i})=\lim_{\widetilde n_\gamma\to\infty}
\frac{1}{\widetilde
n_\gamma}\sum_{k=0}^{\widetilde n_\gamma-1}v^2(\theta_{
n_i}+kr_{n_i},r_{n_i}).\]
Taking into account that there exists a rational $r=p/q$ with $d<q<\varepsilon^{-b}$
in
an $\varepsilon^\nu$-neighborhood of the imaginary rational strip
$I_\gamma$, we have
\[
\sigma^2(\theta_{n_i},r_{n_i})=\lim_{\widetilde n_\gamma\to\infty}
\frac{1}{\widetilde
n_\gamma}\sum_{k=0}^{\widetilde n_\gamma-1}v^2(\theta_{
n_i}+kr_{n_i},p/q)+\mathcal{O}\left(\varepsilon^\nu\right).\]
The right hand side is a trigonometric polynomial of degree $2d$ in
$\theta_{n_i}$ and therefore it can have at most $4d$ zeros. Therefore, taking
$\varepsilon$ small enough, we have that
$\sigma^2(\theta_{n_i},r_{n_i})\geq K>0$ for some constant $K$.
Then, the rest of the proof follows the same lines as in Lemma
\ref{lemma:TI:exittime}.
\end{proof}
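The key step in the proof, namely that averaging $v^2$ over the orbit of the rational rotation by $p/q$ kills every Fourier mode whose frequency is not a multiple of $q$ and leaves a low-degree trigonometric polynomial, is easy to verify numerically. A Python sketch with the illustrative choice $v(\theta)=1+\cos(2\pi\theta)$, so that $v^2=\tfrac32+2\cos(2\pi\theta)+\tfrac12\cos(4\pi\theta)$ (degree $d=1$):

```python
import numpy as np

def v(theta):
    return 1.0 + np.cos(2 * np.pi * theta)

def orbit_average_v2(theta, q, p=1):
    """(1/q) sum_{j<q} v^2(theta + j p/q): only modes with q | m survive."""
    return np.mean(v(theta + np.arange(q)[:, None] * p / q) ** 2, axis=0)

theta = np.linspace(0.0, 1.0, 200)

# q = 5 > 2d = 2: every nonzero mode is killed; the average is the
# constant int_0^1 v^2 = 3/2 > 0.
assert np.max(np.abs(orbit_average_v2(theta, 5) - 1.5)) < 1e-12

# q = 2 <= 2d: the modes m = +-2 survive, giving 3/2 + cos(4 pi theta)/2,
# a nonconstant trigonometric polynomial still bounded below by 1 > 0.
avg2 = orbit_average_v2(theta, 2)
assert np.max(np.abs(avg2 - (1.5 + 0.5 * np.cos(4 * np.pi * theta)))) < 1e-12
assert np.min(avg2) >= 1.0 - 1e-12
```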
\begin{lem}\label{lemmaexpectation-IR}
Let $\nu$, $b=(\nu-\rho)/2$ and $\rho$
satisfy \eqref{conditionbeta} and $\gamma\in (4/5,4/5+1/40)$. Fix $\delta>0$
small. Let
$f:\mathbb{R}\rightarrow\mathbb{R}$ be any $\mathcal{C}^l$ function with $l\ge
3$ and $\|f\|_{\mathcal{C}^3}\leq C$ for some constant $C>0$ independent of $\varepsilon$.
Then,
\begin{eqnarray*}
&&\mathbb{E}\left(f(r_{n_\gamma})-\varepsilon^2
\sum_{k=0}^{n_\gamma-1}\left(b(r_k)f'(r_k)+\frac{\sigma^2(r_k)}{2
}
f''(r_k)\right)\right)\\
&&-f(r_0)=\mathcal{O}(\varepsilon^{2\gamma-\delta}),
\end{eqnarray*}
where $b$ and $\sigma$ are the functions introduced in
\eqref{eq:drift-variance}.
\end{lem}
\begin{proof}
Proceeding as in the proof of Lemma \ref{lemmaexpectation}, we define
\begin{equation}\label{defeta-IR}
\eta=f(r_{n_\gamma})-
\varepsilon^2
\sum_{k=0}^{n_\gamma-1}\left(b(r_k)f'(r_k)+\frac{\sigma^2(r_k)}{2
}
f''(r_k)\right),
\end{equation}
which can be written as
\begin{equation}\label{eta-version3-IR}
\begin{split}
\eta=&f(r_0)+\sum_{k=0}^{n_\gamma-1}f'(r_k)\varepsilon\omega_k\left
[v(\theta_k,r_k)+\varepsilon v_2(\theta_k,r_k)\right]\\
&+\varepsilon^2\sum_{k=0}^{n_\gamma-1}f'(r_k)\left[E_2(\theta_k,
r_k)-b(r_k)\right]\\
&+\frac{\varepsilon^2}{2}\sum_{k=0}^{n_\gamma-1}f''(r_k)\left[
v^2(\theta_k,r_k)-\sigma^2(r_k)\right]\\
&+\sum_{k=0}^{n_\gamma-1}\mathcal{O}(\varepsilon^{2+a}).
\end{split}
\end{equation}
Using the law of total expectation and taking $\delta>0$ small enough,
\begin{eqnarray}\label{lawtotalexp-v2}
\begin{aligned}
\mathbb{E}\left(\eta\right)&=
\mathbb{E}\left(\eta\,|\,\varepsilon^{-2(1-\gamma)+\delta}\leq n_\gamma\leq
\varepsilon^{-2(1-\gamma)-\delta}\right)
\mathbb{P}\left\{\varepsilon^{-2(1-\gamma)+\delta}\leq n_\gamma\leq
\varepsilon^{-2(1-\gamma)-\delta}\right\}\\
&+\mathbb{E}\left(\eta\,|\,n_\gamma< \varepsilon^{-2(1-\gamma)+\delta}\right)
\mathbb{P}\left\{n_\gamma<
\varepsilon^{-2(1-\gamma)+\delta}\right\}\\
&+\mathbb{E}\left(\eta\,|\, \varepsilon^{-2(1-\gamma)-\delta}<n_\gamma\leq s\varepsilon^{-2}\right)
\mathbb{P}\left\{\varepsilon^{-2(1-\gamma)-\delta}<n_\gamma\leq s\varepsilon^{-2}\right\}.
\end{aligned}
\end{eqnarray}
By Lemma \ref{lemma:IR:exittime}, we have
\[
\begin{split}
\mathbb{P}\left\{n_\gamma<\varepsilon^{-2(1-\gamma)+\delta}\right\}&\le
e^{-{C\varepsilon^{-\delta}}} \\
\mathbb{P}\left\{\varepsilon^{-2(1-\gamma)-\delta}<n_\gamma\leq s\varepsilon^{-2}\right\}&\le
e^{-{C\varepsilon^{-\delta}}}.
\end{split}
\]
As in the proof of Lemma \ref{lemmaexpectation},
\begin{equation}\label{eq:IR:zeroexpeceps}
\begin{split}
\mathbb{E}(\omega_kf'(r_k)[v(\theta_k,r_k)+\varepsilon v_2(\theta_k,r_k)])&=\\
\mathbb{E}(\omega_k)\mathbb{E}(f'(r_k)[v(\theta_k,r_k)+\varepsilon v_2(\theta_k,r_k)])&=0
\end{split}
\end{equation}
for all $k\in\mathbb{N}$, and one can obtain the needed estimates for the
second and third rows of \eqref{lawtotalexp-v2} exactly as in the proof of Lemma
\ref{lemmaexpectation}.
To bound the first row in \eqref{lawtotalexp-v2} from above, recall that
$n_\gamma\in [\varepsilon^{-2(1-\gamma)+\delta},\varepsilon^{-2(1-\gamma)-\delta}]$. Then, using
\eqref{eta-version3-IR} and \eqref{eq:IR:zeroexpeceps},
\[
\left |\mathbb{E}\left(\eta-f(r_0)\,|\,\varepsilon^{-2(1-\gamma)+\delta}\leq n_\gamma\leq
\varepsilon^{-2(1-\gamma)-\delta}\right)\right|\leq K\varepsilon^2 n_\gamma\leq K\varepsilon^{2\gamma-\delta}.
\]
This completes the proof of the lemma.
\end{proof}
\subsection{From a local diffusion to the global one: proof of Theorem
\ref{maintheorem} }
\label{sec:IR-cyl-to-line}
In Sections \ref{sec:TI-case} and \ref{sec:IR-case} we proved
\emph{local} versions of formula \eqref{eq:suff-condition} in totally irrational and imaginary rational strips.
Namely, as long as we stay in one of the strips $I^j_\gamma$ of these two types,
for any $s>0$, any time
$n\le s\varepsilon^{-2}$ and any $(\theta_0,r_0)$, as $\varepsilon\to 0$, we have
\begin{equation}\label{def:ExpNF}
\mathbb{E}(\eta_f)\to 0\,\, \text{ with }\eta_f= f(r_{n})- \varepsilon^2
\sum_{k=0}^{n-1}\left(b(r_k)f'(r_k)+\frac{\sigma^2(r_k)}{2}
f''(r_k)\right)-f(r_0).
\end{equation}
In \cite{CGK}, an analogous analysis is done for the resonant strips. To complete the proof of Theorem
\ref{maintheorem}, it suffices to prove the global version in the whole
cylinder. Namely, when the iterates visit totally irrational,
imaginary rational strips and resonant zones.
To this end, we need to analyze how the iterates visit the different strips. We
model these visits as a \emph{random walk}. It turns out that in the cores of
the resonant zones we face serious technical difficulties, since they are
significantly different from the non-resonant zones (see
\cite{CGK}). Since the cores have a very
small measure, we prove that the fraction of time spent in them is
rather low and, thus, has small influence on the long time behavior.
To be able to finally combine the resonant and non-resonant regimes, we
consider a second division of both the resonant and non-resonant zones into
strips of bigger size than $I_\gamma^j$. The behavior in those strips will be the
same for both non-resonant and resonant strips. This will allow us to later
``join'' both regimes.
Fix a parameter $\kappa\in (1/11,1/3)$ and divide both the resonant and
non-resonant zones into intervals $\mathcal{I}_\kappa^j$ of length $\varepsilon^\kappa$. The
non-resonant zones are divided so that the endpoints of those strips coincide
with endpoints of the previous grid of strips $I_\gamma^j$. Each interval
$\mathcal{I}_\kappa^j$ contains $\varepsilon^{\kappa-\gamma}$ strips $I_\gamma^j$. This new division of the
resonant zones is done in \cite{CGK}.
We prove in the non-resonant strips $\mathcal{I}_\kappa^j$ a result analogous to Lemma
\ref{lemmaexpectation}. Namely, we show that, since the relative measure of
Imaginary Rational strips is very small, the behavior in the strip $\mathcal{I}_\kappa^j$
is given by the behavior of the Totally Irrational substrips $I_\gamma^j$.
\begin{lem}\label{lemma:expectlemmaBigStrips}
Consider $C>0$, $\kappa\in (1/11,1/3)$ and a strip $\mathcal{I}_\kappa^j$ in the
non-resonant zone $\mathcal D_\beta$ (see \eqref{eq:non-res-domain}). Let
$f:\mathbb{R}\rightarrow\mathbb{R}$ be any $\mathcal{C}^l$ function with $l\ge
3$ and $\|f\|_{\mathcal{C}^3}\leq C$. Then there exists $\zeta>0$ such that
\begin{equation}\label{eq:ExpectIntermStrips}
\mathbb{E}\Bigg(f(r_{n_\kappa})- \varepsilon^2
\sum_{k=0}^{n_\kappa-1}\left(b(r_k)f'(r_k)+\frac{\sigma^2(r_k)}{2}
f''(r_k)\right) \Bigg)
- f(r_0)=\mathcal{O}(\varepsilon^{2\kappa+\zeta}).
\end{equation}
where $b$ and $\sigma$ are the functions defined in \eqref{eq:drift-variance}.
Moreover, call $n_\kappa$ the exit time from these strips. Then, there
exists a constant $C'>0$ such that,
\begin{itemize}
\item For any $\delta\in (0, 2(1-\kappa))$ and $\varepsilon>0$ small
enough,
\[
\mathbb{P}\{n_\kappa<\varepsilon^{-2(1-\kappa)+\delta}\}\leq
e^{- C' \varepsilon^{- \delta}}.
\]
\item For any $\delta>0$ and $\varepsilon>0$ small
enough,
\[
\mathbb{P}\{\varepsilon^{-2(1-\kappa)-\delta}<n_\kappa<s\varepsilon^{-2}\}\leq
e^{- C' \varepsilon^{- \delta}}.
\]
\end{itemize}
\end{lem}
This lemma is proven in Section \ref{sec:ProofExpecIntermediateStrips}. An analogous lemma for the resonant zones is proven in \cite{CGK}; there, $\mathcal D_\beta$ from \eqref{eq:non-res-domain}
is replaced by $\mathcal R_\beta^{p/q}$ from \eqref{eq:res-domain} and the $r$-component by the Hamiltonian $H$, everything else being the same.
\subsubsection{Proof of Lemma \ref{lemma:expectlemmaBigStrips}}\label{sec:ProofExpecIntermediateStrips}
The strip $\mathcal{I}_\kappa=\mathcal{I}_\kappa^j$ is the union of $\varepsilon^{\kappa-\gamma}$ totally irrational
and imaginary rational strips. We analyze the number of visits made to
each strip and
prove that the time spent in imaginary rational strips is small compared with
the time spent in the totally irrational strips. Assume $r_0=0$ (if not, just
apply a translation). We want to
model the visits to the different strips in $\mathcal{I}_\kappa$ by a symmetric random walk.
Modifying slightly the strips considered in
Sections
\ref{sec:TI-case} and \ref{sec:IR-case}, we consider the endpoints of the strips
\[
r_j=A_j \varepsilon^\gamma,\quad j\in\mathbb{Z}
\]
with some (later determined) constants $A_j$ independent of $\varepsilon$ to the leading order and satisfying $A_0=0$, $A_1=A>0$
and
$A_j<A_{j+1}$ for $j>0$ (and similarly for $j<0$). We consider the strips
\[
I_\gamma^j=[r_j,r_{j+1}]=[A_j\,\varepsilon^\gamma, A_{j+1}\,\varepsilon^\gamma].
\]
To analyze the visits to these strips, we consider the lattice of points
$\{r_j\}_{j\in\mathbb{Z}}\subset\mathbb{R}$ and we analyze the ``visits'' to these points. By
visit we mean the existence of an iterate $\mathcal{O}(\varepsilon)$-close to it. Lemmas
\ref{lemma:TI:exittime} and \ref{lemma:IR:exittime} imply that
if we start with $r=r_j$ we hit either $r_{j-1}$ or
$r_{j+1}$ with probability one. This process can be treated as a random walk
for $j\in\mathbb{Z}$,
\begin{equation}\label{def:randomwalk:inter}
S_j=\sum_{i=0}^{j-1}Z_i,
\end{equation}
where $Z_i$ are Bernoulli variables taking values $\pm 1$. The $Z_i$ are not necessarily
symmetric. Thus, we choose
the constants $A_j>0$ so that the $Z_i$ are Bernoulli
variables with $p=1/2$.
\begin{lem}\label{lemma:randomwalkinter}
There exist constants $J_\pm>0$ independent of $\varepsilon$
and $\{A_j\}_j,\ j\in \left[\lfloor J_-\varepsilon^{\kappa-\gamma}\rfloor, \lfloor
J_+\varepsilon^{\kappa-\gamma}\rfloor \right]$, such that
\begin{itemize}
\item
$A_j=A_{j-1}+(A_1-A_{0})\exp\left(-\int_0^{r_{j-1}}\frac {
2b(r)}{\sigma^2(r)}dr\right)+\mathcal{O}(\varepsilon^{\gamma}).$
\item $\displaystyle \mathcal{I}_\kappa\subset\bigcup_{j=\lfloor
J_-\varepsilon^{\kappa-\gamma}\rfloor}^{\lfloor
J_+\varepsilon^{\kappa-\gamma}\rfloor}[A_j,A_{j+1}]$.
\item The random walk process induced by the map \eqref{eq:NRmap-n} on the
lattice $\displaystyle\{r_j\}_j,\
j\in \left[ \lfloor J_-\varepsilon^{\kappa-\gamma}\rfloor, \lfloor
J_+\varepsilon^{\kappa-\gamma}\rfloor \right]$ is a symmetric random walk.
\end{itemize}
\end{lem}
\begin{proof}
To compute the probability of hitting (an $\varepsilon$-neighborhood of) either
$r_{j\pm 1}$ from $r_j$, we use the local
expectation lemmas (Lemmas \ref{lemmaexpectation} and
\ref{lemmaexpectation-IR}). Therefore we can consider $f$ in the kernel of the
infinitesimal generator $A$ of the diffusion process (see
\eqref{eq:diffusion-generator}) and solve the boundary problem
\[
b(r)f'(r)+\frac{1}{2}\sigma^2(r)f''(r)=0,\qquad f(r_{j-1})=0,\quad f(r_{j+1})=1.
\]
The solution gives the probability of hitting $r_{j+1}$ before hitting
$r_{j-1}$ starting at a given
$r\in[r_{j-1},r_{j+1}]$. The unique solution is given by
\[
f(r)=\frac{\int_{r_{j-1}}^{r}\exp(-\int_0^\rho
\frac{2b(s)}{\sigma^2(s)}ds)\ d\rho}{
\int_{r_{j-1}}^{r_{j+1}}\exp(-\int_0^\rho
\frac{2b(s)}{\sigma^2(s)}ds)\ d\rho}.
\]
We use $f$ to choose the coefficients $A_j$ iteratively (both as $j>0$ increases
and $j<0$ decreases). Assume that $A_{j-1}$, $A_j$ have been fixed. Then, to
have a symmetric random walk, we have to choose $A_{j+1}$ such that
$f(r_j)=1/2$.
Define
\[
m(r)=\exp\left(-\int_0^{r}
\frac{2b(s)}{\sigma^2(s)}ds\right)
\]
and $D_j=A_j-A_{j-1}$. Then, using the mean value theorem, $f(r_j)=1/2$ can be
written as
\[
\frac{m(\xi_j)D_j}{m(\xi_j)D_j+m(\xi_{j+1})D_{j+1}}=\frac{1}{2}
\]
where $\xi_j\in [A_{j-1},A_j]$ and $\xi_{j+1}\in [A_{j},A_{j+1}]$. Thus, one
has
\[
D_{j+1}=\frac{m(\xi_{j})}{m(\xi_{j+1})}D_j \quad \text{ which implies }\quad
D_{j+1}=\frac{m(\xi_{1})}{m(\xi_{j+1})}D_1.
\]
Thus the length $D_j$ of the strip $I_\gamma^j=[r_j,r_{j+1}]=[A_j\varepsilon^\gamma,
A_{j+1}\varepsilon^\gamma]$ satisfies
\[
D_j=\frac{m(\xi_{1})}{m(\xi_{j})}\ D_1
=A \exp\left(-\int_0^{r_{j-1}}
\frac{2b(s)}{\sigma^2(s)}ds\right)+\mathcal{O}(\varepsilon^{\gamma}).
\]
The distortion of the strips does not depend on $\varepsilon$ (at first order). Therefore, adjusting $A$ and $J_+$, one can obtain intervals
$[r_{j}, r_{j+1}]=[A_{j}\varepsilon^\gamma, A_{j+1}\varepsilon^\gamma]$ which cover $\mathcal{I}_\kappa\cap\{r>0\}$.
Proceeding analogously for $j<0$, one covers $\mathcal{I}_\kappa\cap\{r<0\}$.
\end{proof}
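The calibration of the lattice $\{A_j\}$ can be checked numerically. The following Python sketch is an illustration under assumed toy coefficients ($b(r)=-r$, $\sigma(r)=1$, so $m(r)=e^{r^2}$); these are not the functions of \eqref{eq:drift-variance}. It picks each $A_{j+1}$ by bisection so that consecutive strips carry equal $m$-mass, which is exactly the condition $f(r_j)=1/2$, and then verifies the hitting probabilities.

```python
import math

# Toy coefficients (an assumption for illustration, not the paper's b, sigma):
# drift b(r) = -r, variance sigma^2(r) = 1, so the "scale density" is
# m(r) = exp(-int_0^r 2b/sigma^2 ds) = exp(r^2).
def m(r):
    return math.exp(r * r)

def integral_m(a, b_, n=2000):
    # composite Simpson rule for int_a^b m(rho) drho
    h = (b_ - a) / n
    s = m(a) + m(b_)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * m(a + i * h)
    return s * h / 3

def hitting_prob(r, lo, hi):
    # probability of hitting `hi` before `lo` starting at r
    # (scale-function formula for the diffusion with drift b, variance sigma^2)
    return integral_m(lo, r) / integral_m(lo, hi)

# Calibrate lattice points A_j so that from A_j the process hits A_{j-1} or
# A_{j+1} with probability 1/2 each, i.e. equal m-mass on consecutive strips.
A = [0.0, 0.1]
for _ in range(8):
    target = integral_m(A[-2], A[-1])   # symmetric <=> equal m-mass per strip
    lo, hi = A[-1], A[-1] + 1.0         # bisection bracket for A_{j+1}
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if integral_m(A[-1], mid) < target:
            lo = mid
        else:
            hi = mid
    A.append(0.5 * (lo + hi))

# Each consecutive pair now gives a fair coin flip:
for j in range(1, len(A) - 1):
    assert abs(hitting_prob(A[j], A[j - 1], A[j + 1]) - 0.5) < 1e-6
# Strip lengths shrink where m is larger, as D_{j+1} = (m(xi_j)/m(xi_{j+1})) D_j predicts.
lengths = [A[j + 1] - A[j] for j in range(len(A) - 1)]
assert all(l2 < l1 for l1, l2 in zip(lengths, lengths[1:]))
```

Since $m$ is increasing in this toy model, equal $m$-mass forces the strip lengths to decrease, in agreement with the relation $D_{j+1}=\frac{m(\xi_{j})}{m(\xi_{j+1})}D_j$ above.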
To prove \eqref{eq:ExpectIntermStrips}, we need to combine the iterations within each strip $I_\gamma^j$ and the random walk evolution among the strips. Since we have $\varepsilon^{\kappa-\gamma}$ strips, the exit time $j^*$ for the random walk $S_j$ from $\mathcal{I}_\kappa$ satisfies the following. There exists $C>0$ such that for any small $\delta$ and $\varepsilon$,
\begin{equation}\label{eq:Probj*}
\begin{split}
\mathbb{P} \left(j^*\geq \varepsilon^{2(\kappa-\gamma)-\frac{\delta}{2}}\right) & \leq
e^{-C\, \varepsilon^{-{\delta/2}}}\\
\mathbb{P} \left(j^*\leq \varepsilon^{2(\kappa-\gamma)+\frac{\delta}{2}}\right) & \leq
e^{-C\, \varepsilon^{-{\delta/2}}}.
\end{split}
\end{equation}
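The orders of magnitude in \eqref{eq:Probj*} reflect the classical fact that a simple symmetric random walk exits a lattice of $N$ sites after about $N^2$ steps; with $N\sim\varepsilon^{\kappa-\gamma}$ strips this gives $j^*\sim\varepsilon^{2(\kappa-\gamma)}$. A minimal Python sketch (an illustration only; the discrete harmonic equation below is the standard one and is not taken from the paper) computes the expected exit time exactly.

```python
from fractions import Fraction

def expected_exit_time(N, k):
    """Expected exit time from {0,...,N} for a simple symmetric random walk
    started at k, computed exactly from h(j) = 1 + (h(j-1)+h(j+1))/2."""
    # Write h(j) = a_j * s + b_j with unknown s = h(1); the recurrence
    # h(j+1) = 2 h(j) - h(j-1) - 2 propagates (a_j, b_j) linearly.
    a, b = [0, 1], [0, 0]           # h(0) = 0, h(1) = s
    for j in range(1, N):
        a.append(2 * a[j] - a[j - 1])
        b.append(2 * b[j] - b[j - 1] - 2)
    s = Fraction(-b[N], a[N])       # enforce the boundary condition h(N) = 0
    return a[k] * s + b[k]

# Sanity check against the classical closed form h(k) = k (N - k):
N = 40
for k in range(N + 1):
    assert expected_exit_time(N, k) == k * (N - k)
```

The quadratic growth $h(k)=k(N-k)\sim N^2$ is what makes the thresholds $\varepsilon^{2(\kappa-\gamma)\pm\delta/2}$ in \eqref{eq:Probj*} the natural ones.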
We use this to obtain the probabilities for the exit time $n_\kappa$ stated in
Lemma \ref{lemma:expectlemmaBigStrips}. We prove the second statement for
$n_\kappa$; the other one can be proved analogously. Call $j^*$ the exit time for
the random walk and $n_\gamma^j$, $j=1, \ldots,j^*$, the consecutive exit times from the
strips visited before hitting the endpoints of $\mathcal{I}_\kappa$. Define also
$\Delta_j=n_\gamma^j-n_\gamma^{j-1}$ for $j\geq 2$, $\Delta_1=n_\gamma^1$ and
$X=\{\varepsilon^{-2(1-\kappa)-\delta}<n_\kappa<s\varepsilon^{-2}\}$. We condition the
probability as follows,
{\small \[
\begin{split}
&\mathbb{P}\{X\}\\
&\leq\mathbb{P}\left\{X\left|j^*\in
(\varepsilon^{2(\kappa-\gamma)+\frac{\delta}{2}},\varepsilon^{2(\kappa-\gamma)-\frac{\delta}{2}}), \Delta_j\in(
\varepsilon^{-2(1-\gamma)+\frac{\delta}{2}},\varepsilon^{-2(1-\gamma)-\frac{\delta}{2}}), j=1,\ldots,
j^*\right.\right\}\\
&\times \mathbb{P}\left\{j^*\in (\varepsilon^{2(\kappa-\gamma)+\frac{\delta}{2}},\varepsilon^{2(\kappa-\gamma)-\frac{\delta}{2}}), \Delta_j\in( \varepsilon^{-2(1-\gamma)+\frac{\delta}{2}},\varepsilon^{-2(1-\gamma)-\frac{\delta}{2}}), j=1,\ldots, j^*\right\}\\
&+\mathbb{P}\left\{X\left|j^*\not\in
(\varepsilon^{2(\kappa-\gamma)+\frac{\delta}{2}},\varepsilon^{2(\kappa-\gamma)-\frac{\delta}{2}}) \text{ or }
\exists j, \Delta_j\not\in(
\varepsilon^{-2(1-\gamma)+\frac{\delta}{2}},\varepsilon^{-2(1-\gamma)-\frac{\delta}{2}})\right.\right\}\\
&\times\mathbb{P}\left\{j^*\not\in
(\varepsilon^{2(\kappa-\gamma)+\frac{\delta}{2}},\varepsilon^{2(\kappa-\gamma)-\frac{\delta}{2}}) \text{ or }
\exists j, \Delta_j\not\in(
\varepsilon^{-2(1-\gamma)+\frac{\delta}{2}},\varepsilon^{-2(1-\gamma)-\frac{\delta}{2}})\right\}.
\end{split}
\]}
For the first term in the conditioned probability we show that
{\small \[
\mathbb{P}\left\{X\left|j^*\in
(\varepsilon^{2(\kappa-\gamma)+\frac{\delta}{2}},\varepsilon^{2(\kappa-\gamma)-\frac{\delta}{2}}), \Delta_j\in(
\varepsilon^{-2(1-\gamma)+\frac{\delta}{2}},\varepsilon^{-2(1-\gamma)-\frac{\delta}{2}}), j=1,\ldots,
j^*\right.\right\}=0.
\]}
Indeed, on this event we have that
\[
n_\kappa=\sum_{j=1}^{j^*}\Delta_j\leq j^*\sup_j \Delta_j<
\varepsilon^{2(\kappa-\gamma)-\frac{\delta}{2}}\cdot
\varepsilon^{-2(1-\gamma)-\frac{\delta}{2}}=\varepsilon^{-2(1-\kappa)-\delta},
\]
which is incompatible with the event $X$.
Therefore, we only need to bound the second term in the conditioned probability. To this end, we need
an upper bound for the number of visited strips. Since $n\le s\varepsilon^{-2}$ and $|r_n-r_{n-1}|\lesssim\varepsilon$, there exists a
constant $c>0$ such that
\[
\Delta_j=n_\gamma^j-n_\gamma^{j-1}\geq c\varepsilon^{\gamma-1}\quad\text{ for }j=1,\ldots, j^*.
\]
This implies that
\begin{equation}\label{def:maxvisitedstrips}
j^*\lesssim \varepsilon^{-1-\gamma}.
\end{equation}
Thus, using Lemmas \ref{lemma:TI:exittime} and \ref{lemma:IR:exittime},
{\small
\[
\begin{split}
&\mathbb{P}\left\{j^*\not\in (\varepsilon^{2(\kappa-\gamma)+\frac{\delta}{2}},\varepsilon^{2(\kappa-\gamma)-\frac{\delta}{2}}) \text{ or } \Delta_j\not\in( \varepsilon^{-2(1-\gamma)+\frac{\delta}{2}},\varepsilon^{-2(1-\gamma)-\frac{\delta}{2}}) \text{ for some } j=1,\ldots, j^*\right\}\\
&\leq\varepsilon^{-1-\gamma} e^{-C\varepsilon^{-\delta/2}}.
\end{split}
\]}
Thus, taking a smaller $C>0$ and taking $\varepsilon$ small, we obtain the second
statement for $n_\kappa$ in Lemma \ref{lemma:expectlemmaBigStrips}. One can prove
the lower bound for $n_\kappa$ analogously.
It only remains to prove \eqref{eq:ExpectIntermStrips}.
We define the
Markov times $0=n_\gamma^0<n_\gamma^1<n_\gamma^2 <\dots <n_\gamma^{j^*-1}<n_\gamma^{j^*}<n$
for some random $j^*=j^*(\omega)$ such that each $n_\gamma^j$ is the stopping
time as in \eqref{eq:stopping-time}, where $j^*$ denotes either the exit time
from $\mathcal{I}_\kappa$ or the last change between strips $I_\gamma^j$ inside $\mathcal{I}_\kappa$.
By \eqref{eq:Probj*}, $j^*(\omega)$ is the exit time except for an exponentially
small probability. Denoting by $\eta_f$ the quantity inside the expectation in
\eqref{eq:ExpectIntermStrips}, we use conditioned expectation as
\[
\mathbb{E}(\eta_f)=\mathbb{E}(\eta_f|A_1)\mathbb{P} (A_1)+\mathbb{E}(\eta_f|A_2)\mathbb{P}(A_2)
\]
with
\[
\begin{split}
A_1=&\Big\{\varepsilon^{-2(1-\kappa)+\delta}<n_\kappa<\varepsilon^{-2(1-\kappa)-\delta}, j^*\in
(\varepsilon^{2(\kappa-\gamma)+\frac{\delta}{2}},\varepsilon^{2(\kappa-\gamma)-\frac{\delta}{2}}),\\
&\Delta_j\in( \varepsilon^{-2(1-\gamma)+\frac{\delta}{2}},\varepsilon^{-2(1-\gamma)-\frac{\delta}{2}}),
j=1,\ldots, j^*\Big\}\\
A_2=&A_1^c.
\end{split}
\]
Lemmas \ref{lemma:TI:exittime}, \ref{lemma:IR:exittime}, the estimates for
$n_\kappa$ given in Lemma \ref{lemma:expectlemmaBigStrips} and
\eqref{eq:Probj*} imply that $\mathbb{P}(A_2)\ll \varepsilon^{2\kappa+\zeta}$. Moreover, since
we only consider functions $f$ such that $\|f\|_{\mathcal
C^3}\leq C$ with $C>0$ independent of $\varepsilon$, we have that
\[
\left|\mathbb{E}(\eta_f|A_2)\mathbb{P}(A_2)\right|\lesssim \varepsilon^{2\kappa+\zeta}.
\]
Therefore, it only remains to bound $\mathbb{E}(\eta_f|A_1)\mathbb{P} (A_1)$. We use that
$\mathbb{P} (A_1)\leq 1$ and we estimate $\mathbb{E}(\eta_f|A_1)$.
We decompose $\eta_f$ as
$\eta_f=\sum_{j=0}^{j^*} \eta_j$ with
\begin{equation*}
\eta_j=f(r_{n_\gamma^{j+1}})-
f(r_{n_\gamma^{j}})- \varepsilon^2
\sum_{s=n_\gamma^j}^{n_\gamma^{j+1}}\left(b(r_s)f'(r_s)+\frac{\sigma^2(r_s)}{2}
f''(r_s)\right).
\end{equation*}
Lemmas \ref{lemmaexpectation} and \ref{lemmaexpectation-IR} imply that for
any $j$,
\begin{equation}\label{def:localExpec:Intermediate}
\begin{split}
|\mathbb{E}(\eta_j)|&\lesssim\varepsilon^{2\gamma+\zeta}\qquad\text{ for totally irrational
strips}\\
|\mathbb{E}(\eta_j)|&\lesssim\varepsilon^{2\gamma-\delta}\qquad\quad\text{for imaginary rational
strips},
\end{split}
\end{equation}
for some $\delta>0$ arbitrarily small and some $\zeta>0$. To use these estimates,
we need to control how many visits are made to each type of strip. Recall
that the visits to the strips are modelled by the symmetric random walk
$S_j$. Denote by
$B\subset M= \{1,\ldots,\lceil \varepsilon^{\kappa-\gamma}\rceil\}\subset\mathbb N$ the
endpoints of the imaginary rational strips $I_\gamma^j$ in $\mathcal{I}_\kappa$. By Appendix
\ref{sec:measure-IR-RR}, we know that
\[
|B|\lesssim \varepsilon^{\kappa-\gamma+\rho}.
\]
Denote by $\mu= |B|/\lceil \varepsilon^{\kappa-\gamma}\rceil$ the relative measure of $B$ in $M$.
\begin{lem}\label{lemma:ShortVisitsB} Fix $\delta>0$ small. There exists a constant
$C>0$ such that for $\varepsilon>0$ small enough,
\[
\mathbb{P}\left(\sharp \left\{j\in [0,j^*):S_j\in B\right\}\geq
j^*\mu\varepsilon^{-\delta} \right)\leq e^{-C \varepsilon^{-\delta/2}}
\]
\end{lem}
\begin{proof}
We have that
\[
\mathbb{P}\left(\sharp \left\{j\in [0,j^*):S_j\in B\right\}\geq
j^*\mu\varepsilon^{-\delta} \right)= \mathbb{P}\left(\sum_{k\in B}\sharp \left\{j\in [0,j^*):S_j=k\right\}\geq
j^*\mu\varepsilon^{-\delta} \right)
\]
If the total number of visits to $B$ is at least $j^*\mu\varepsilon^{-\delta}$, there must exist $k^*\in B$ which is visited at least $j^*\mu\varepsilon^{-\delta}/|B|\geq j^*\varepsilon^{\gamma-\kappa-\delta}$ times. Hence, by a union bound,
\[
\mathbb{P}\left(\sum_{k\in B}\sharp \left\{j\in [0,j^*):S_j=k\right\}\geq
j^*\mu\varepsilon^{-\delta} \right)\leq \sum_{k^*\in B}\mathbb{P}\left(\sharp \left\{j\in [0,j^*):S_j=k^*\right\}\geq
j^*\varepsilon^{\gamma-\kappa-\delta} \right).
\]
Since we start the random walk at $S_0=0$, the probability of visiting $k^*$ a given number of times is at most the probability of visiting $0$ that number of times. Namely,
\[
\mathbb{P}\left(\sharp \left\{j\in [0,j^*):S_j=k^*\right\}\geq
j^*\varepsilon^{\gamma-\kappa-\delta} \right)\leq \mathbb{P}\left(\sharp \left\{j\in [0,j^*):S_j=0\right\}\geq
j^*\varepsilon^{\gamma-\kappa-\delta} \right).
\]
We prove that this probability is exponentially small in $\varepsilon$. Denote by $f_k$
the random variable that gives the number of iterates between the $(k-1)$-th and the $k$-th
visit to zero. Then,
\[
\begin{split}
\mathbb{P}\left(\sharp \left\{j\in [0,j^*):S_j=0\right\}\geq
j^*\varepsilon^{\gamma-\kappa-\delta} \right) &=\mathbb{P}\left(\sum_{k=1}^{\lceil j^*\varepsilon^{\gamma-\kappa-\delta} \rceil}f_k\leq j^*\right)\\
&\leq \prod_{k=1}^{\lceil j^*\varepsilon^{\gamma-\kappa-\delta} \rceil}\mathbb{P}\left(f_k\leq j^*\right).
\end{split}
\]
Since the random variables $\{f_k\}$ are independent and identically distributed,
\[
\mathbb{P}\left(\sharp \left\{j\in [0,j^*):S_j=0\right\}\geq
j^*\varepsilon^{\gamma-\kappa-\delta} \right)\leq \mathbb{P}\left(f_1\leq j^*\right)^{\lceil j^*\varepsilon^{\gamma-\kappa-\delta} \rceil}.
\]
\]
Since we are dealing with a symmetric random walk, it is well known that
\[
\mathbb{P}(f_1= m)=\begin{pmatrix} 2m\\ m\end{pmatrix}\frac{2^{-2m}}{2m-1},
\]
which satisfies
\[
\mathbb{P}(f_1= m)\sim \frac{1}{\sqrt{\pi m}(2m-1)} \qquad\text{ as }m\to +\infty.
\]
Therefore, there exists a constant $c>0$ such that for $m$ large enough
\[
\mathbb{P}(f_1\leq m)\leq 1-cm^{-1/2}.
\]
Then, one can conclude that
\[
\begin{split}
\mathbb{P}\left(\sharp \left\{j\in [0,j^*):S_j\in B\right\}\geq
j^*\mu\varepsilon^{-\delta} \right)&\leq |B|\,\mathbb{P}\left(f_1\leq j^*\right)^{\lceil j^*\varepsilon^{\gamma-\kappa-\delta} \rceil}\\
& \leq |B|\left(1- \frac{c}{(j^*)^{1/2}}\right)^{\lceil j^*\varepsilon^{\gamma-\kappa-\delta} \rceil}\\
& \leq e^{-C \varepsilon^{-{\delta/2}}},
\end{split}
\]
for some constant $C>0$ independent of $\varepsilon$, provided $\varepsilon$ is small enough.
\end{proof}
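The first-return-time facts used above can be verified exactly. The Python sketch below (an illustration; here $f_1=m$ is measured in pairs of steps, i.e. the first return occurs at time $2m$) checks the identity $\sum_{k\le m}\mathbb{P}(f_1=k)=1-\binom{2m}{m}2^{-2m}$ with exact rational arithmetic, together with the lower bound on the return probability that yields $\mathbb{P}(f_1\leq m)\leq 1-cm^{-1/2}$.

```python
from fractions import Fraction
from math import comb

def u(m):
    # probability that a simple symmetric walk is at 0 at time 2m
    return Fraction(comb(2 * m, m), 4 ** m)

def f_return(m):
    # probability that the FIRST return to 0 happens at time 2m
    return u(m) / (2 * m - 1)

# Classical identity P(f_1 > m) = u(m), i.e. the partial sums satisfy
# sum_{k<=m} f_return(k) = 1 - u(m); exact check with rationals.
for m in range(1, 60):
    assert sum(f_return(k) for k in range(1, m + 1)) == 1 - u(m)

# Tail lower bound: u(m) ~ 1/sqrt(pi m), so P(f_1 > m) >= c m^{-1/2},
# which is the bound P(f_1 <= m) <= 1 - c m^{-1/2} used in the proof.
assert all(float(u(m)) * m ** 0.5 > 0.28 for m in range(1, 200))
```

The constant $0.28$ above is a crude numerical stand-in for $c$; the asymptotic value of $u(m)\sqrt{m}$ is $1/\sqrt{\pi}\approx 0.564$.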
Lemma \ref{lemma:ShortVisitsB} implies that it is enough to deal
with the case
\[
\sharp\{j\in[0,j^*):S_j\in B\}\leq j^*\mu\varepsilon^{-\delta}\leq
\varepsilon^{2(\kappa-\gamma)+\rho-2\delta},
\]
where we have used that $j^*\leq\varepsilon^{2(\kappa-\gamma)-\delta}$ and $\mu\lesssim \varepsilon^\rho$.
Using this and \eqref{def:localExpec:Intermediate}, we can deduce that
\[
\begin{split}
\left|\mathbb{E}\left(\left.\sum_{j=0}^{j^*}\eta_j\right| A_1\right)\right|
&\lesssim
\varepsilon^{2\gamma+\zeta}\varepsilon^{-2(\gamma-\kappa)-2\delta}
+\varepsilon^{2\gamma-\delta}\varepsilon^{-2(\gamma-\kappa)+\rho-2\delta}\\
&\leq \varepsilon^{2\kappa+\zeta-2\delta}+\varepsilon^{2\kappa+\rho-3\delta}.
\end{split}
\]
\]
Therefore, taking $\delta>0$ small enough, we have proven
\eqref{eq:ExpectIntermStrips}.
\subsubsection{Proof of Theorem \ref{maintheorem}}
To complete the proof of Theorem \ref{maintheorem} it is enough to use Lemmas
\ref{lemma:expectlemmaBigStrips} and the
corresponding lemma for the resonant strips
given in \cite{CGK} and
model the visits to the strips $\mathcal{I}_\kappa^j$ as a random walk as we have done for
the strips $I_\gamma^j$ to prove Lemma \ref{lemma:expectlemmaBigStrips} in Section
\ref{sec:ProofExpecIntermediateStrips}.
This proof is slightly different since we are dealing with a
non-compact domain and therefore we need estimates for the low probability of
making large excursions. As before, we assume $r_0=0$ (if not, just
apply a translation) and we
treat the visits to the different strips $\mathcal{I}_\kappa^j$ as a random walk.
Consider $R\gg 1$, which we will fix a posteriori, and
consider the interval $[-R,R]$.
To prove \eqref{def:ExpNF}, we condition the expectation in a different way as
for the proof of Lemma \ref{lemma:expectlemmaBigStrips}. We condition it as
\begin{equation}\label{def:SplittingExpect}
\begin{split}
\mathbb{E} (\eta)= &\mathbb{E} \left(\eta \left||r_n|< R\,\text{ for all }n\leq
s\varepsilon^{-2}\right.\right)\mathbb{P}\left(|r_n|< R\,\text{ for all }n\leq
s\varepsilon^{-2}\right)\\
&+ \mathbb{E} \left(\eta \left|\exists n^*\leq
s\varepsilon^{-2} \text{ with }|r_{n^*}|\geq R\right.\right)\mathbb{P}\left(\exists n^*\leq
s\varepsilon^{-2} \text{ with }|r_{n^*}|\geq R\right).
\end{split}
\end{equation}
We bound each row. We start with the second one.
Since we are considering
$n\leq
s\varepsilon^{-2}$ and we consider functions $f$ such that $\|f\|_{\mathcal
C^3(\mathbb R)}\leq C$ with $C>0$ independent of $\varepsilon$, we have that
\[
\left| \mathbb{E} \left(\eta \left|\exists n^*\leq
s\varepsilon^{-2} \text{ with }|r_{n^*}|\geq R\right.\right)\right|\leq C'
\]
for some $C'>0$ which depends on $s$ but is independent of $\varepsilon$ and $R$.
Thus,
to bound the second row, it is
enough to prove that choosing $R$ large enough, $\mathbb{P}\left(\exists n^*\leq
s\varepsilon^{-2} \text{ with }|r_{n^*}|\geq R\right)$ can be made as small as desired
uniformly for small $\varepsilon$.
We divide the interval $[-R,R]$ into equal substrips $\mathcal{I}_\kappa^j$ of length
$\varepsilon^\kappa$, so that there are of order $R\varepsilon^{-\kappa}$ strips. We model the
visits to these strips as a non-symmetric random walk $S_j$
as in \eqref{def:randomwalk}. Note that this is significantly different from
Section \ref{sec:ProofExpecIntermediateStrips} since now the probabilities of
going left or right depend on the point (because of the drift).
Now the random walk is $S_j=\sum_{k=1}^j Z_k$, where each $Z_k$ is a
Bernoulli variable with probabilities $p_j$, $q_j$ which depend on the visited
strip. Proceeding as in the proof of Lemma \ref{lemma:randomwalkinter} and
taking into account that we have uniform bounds for the drift given in Theorem
\ref{thm:normal-form}, one can prove that at every strip the probabilities
$p_j$, $q_j$ satisfy
\[
\left|p_j-\frac{1}{2}\right|\leq C\varepsilon^\kappa,\quad \left|q_j-\frac{1}{2}\right|\leq C\varepsilon^\kappa
\]
for some constant $C>0$ which is independent of $\varepsilon$ and $R$. As a consequence,
\begin{equation}\label{eq:ExpecBernouilli}
\left|\mathbb{E} Z_j\right| \leq 2C\varepsilon^\kappa.
\end{equation}
Call $j^*$ the first visit to one of the strips
containing $r=\pm R$. It is clear that
\[
j^*\geq R\varepsilon^{-\kappa}.
\]
We fix $\delta>0$ small and we condition $\mathbb{P}\left(|r_n|< R\,\text{ for all }n\leq
s\varepsilon^{-2}\right)$ as follows. Call $X=\{|r_n|< R,\text{ for all }n\leq
s\varepsilon^{-2}\}$,
\begin{equation}\label{eq:ExitProbFinal}
\begin{split}
\mathbb{P}(X)=&\mathbb{P}\left(X\left|j^*\leq R^\delta\varepsilon^{-2\kappa}\right.\right)\mathbb{P}\left(j^*\leq R^\delta\varepsilon^{-2\kappa}\right)\\
&+\mathbb{P}\left(X\left|j^*> R^\delta\varepsilon^{-2\kappa}\right.\right)\mathbb{P}\left(j^*> R^\delta\varepsilon^{-2\kappa}\right).
\end{split}
\end{equation}
For the first row it is enough to use $|\mathbb{P}\left(X\left|j^*\leq R^\delta\varepsilon^{-2\kappa}\right.\right)|\leq 1$ and the following lemma.
\begin{lem}
Fix $\varepsilon_0>0$. Then, for any $\varepsilon\in (0,\varepsilon_0)$ and $R>0$ large enough,
\[
\mathbb{P}\left(j^*\leq R^\delta\varepsilon^{-2\kappa}\right)\leq e^{-CR^{2-\delta}}
\]
for some constant $C>0$ independent of $\varepsilon$ and $R$.
\end{lem}
\begin{proof}
Since the number of strips is $R\varepsilon^{-\kappa}$,
\[
\mathbb{P}\left(j^*\leq R^\delta\varepsilon^{-2\kappa}\right)\leq \mathbb{P}\left(\exists j^*\leq
R^\delta\varepsilon^{-2\kappa}:\left|\sum_{k=1}^{j^*}Z_k\right|\geq R\varepsilon^{-\kappa}\right).
\]
Define $Y_k=Z_k-\mathbb{E} Z_k$. Then, for $R$ large enough and taking \eqref{eq:ExpecBernouilli} into account,
\[
\begin{split}
\mathbb{P}\left(\left|\sum_{k=1}^{j^*}Z_k\right|\geq R\varepsilon^{-\kappa}\right) &= \mathbb{P}\left(\left|\sum_{k=1}^{j^*}Y_k+\sum_{k=1}^{j^*}\mathbb{E} Z_k\right|\geq R\varepsilon^{-\kappa}\right)\\
&\leq \mathbb{P}\left(\left|\sum_{k=1}^{j^*}Y_k\right|\geq R\varepsilon^{-\kappa}-C j^*\varepsilon^{\kappa}\right)\\
&= \mathbb{P}\left(\left|\frac{1}{\sqrt{j^*}}\sum_{k=1}^{j^*}Y_k\right|\geq
\frac{R\varepsilon^{-\kappa}}{\sqrt{j^*}}-C \sqrt{j^*}\varepsilon^{\kappa}\right).
\end{split}
\]
\]
Using that $j^*\leq R^\delta\varepsilon^{-2\kappa}$ and taking $R$ big enough,
\[
\frac{R\varepsilon^{-\kappa}}{\sqrt{j^*}}-C \sqrt{j^*}\varepsilon^{\kappa}\geq \frac{R\varepsilon^{-\kappa}}{2\sqrt{j^*}},
\]
which implies,
\[
\mathbb{P}\left(\left|\sum_{k=1}^{j^*}Z_k\right|\geq R\varepsilon^{-\kappa}\right)\leq
\mathbb{P}\left(\left|\frac{1}{\sqrt{j^*}}\sum_{k=1}^{j^*}Y_k\right|\geq
\frac{R\varepsilon^{-\kappa}}{2\sqrt{j^*}}\right).
\]
The variables $Y_k$ are independent but not identically distributed.
Nevertheless, their third moments have a uniform upper bound independent of
$\varepsilon$ and $R$. Then, one can apply the Lyapunov central limit theorem to prove that
\[\frac{1}{\sqrt{j^*}}\sum_{k=1}^{j^*}Y_k\]
tends in distribution to a normal random variable whose
variance is positive and has a lower bound independent of $\varepsilon$ and $R$.
Therefore,
\[
\mathbb{P}\left(\left|\sum_{k=1}^{j^*}Z_k\right|\geq R\varepsilon^{-\kappa}\right)\leq e^{-C' \frac{R^2\varepsilon^{-2\kappa}}{4j^*}},
\]
for some $C'>0$ independent of $\varepsilon$ and $R$. This implies that
\[
\mathbb{P}\left(\exists j^*\leq R^\delta\varepsilon^{-2\kappa}:\left|\sum_{k=1}^{j^*}Z_k\right|\geq R\varepsilon^{-\kappa}\right)\leq e^{-C'R^{2-\delta}},
\]
after reducing $C'$ slightly if necessary.
\end{proof}
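The large-excursion estimate just proven can be illustrated by an exact computation at toy scales. In the Python sketch below, the parameters $n$, the per-step bias and the excursion size $L$ are assumptions standing in, respectively, for $R^\delta\varepsilon^{-2\kappa}$, the drift bias of order $\varepsilon^\kappa$, and $R\varepsilon^{-\kappa}$: a nearly symmetric walk exceeds a level $L\gg \sqrt{n}+n\cdot\mathrm{bias}$ only with very small probability.

```python
def excursion_prob(n, p, L):
    # P(|S_n| >= L) for S_n a sum of n i.i.d. +-1 steps with P(+1) = p,
    # computed exactly by dynamic programming over the walk's position.
    dist = {0: 1.0}
    for _ in range(n):
        new = {}
        for pos, pr in dist.items():
            new[pos + 1] = new.get(pos + 1, 0.0) + pr * p
            new[pos - 1] = new.get(pos - 1, 0.0) + pr * (1 - p)
        dist = new
    return sum(pr for pos, pr in dist.items() if abs(pos) >= L)

# Toy scales (assumed for illustration): n ~ R^delta eps^{-2kappa} steps,
# bias ~ eps^kappa, excursion level L ~ R eps^{-kappa}.
n, bias = 400, 0.01
tail = excursion_prob(n, 0.5 + bias, 100)
assert tail < 1e-4   # consistent with the e^{-C' R^{2-delta}} bound
```

Here $\sqrt{n}+n\cdot\mathrm{bias}=20+4$ while $L=100$, so the excursion lies deep in the Gaussian tail, which is what the Lyapunov CLT step exploits.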
Now we bound the second row in \eqref{eq:ExitProbFinal}.
Call $N_j$ the exit time of
$r_n$ from the strip visited at the
$j$-th visit. The expectation $\mathbb{E}
N_j$ depends only on the visited strip and not otherwise on $j$, since the different
visits to the same strip are independent and identically distributed.
Moreover, a direct consequence of Lemmas \ref{lemma:expectlemmaBigStrips} and
the analogous lemma for resonant zones given in \cite{CGK} is
that
\[
C^{-1}\varepsilon^{-2(1-\kappa)}\leq \mathbb{E} N_j\leq C\varepsilon^{-2(1-\kappa)}
\]
for some constant $C>0$ independent of $\varepsilon$ and $R$ (the lengths of the
strips are $R$ independent).
To bound the second row in \eqref{eq:ExitProbFinal}, we use $\mathbb{P}\left(j^*>
R^\delta\varepsilon^{-2\kappa}\right)\leq 1$ and we condition $\mathbb{P}\left(X\left|j^*>
R^\delta\varepsilon^{-2\kappa}\right.\right)$ as follows. Fix $\lambda>0$ small independent
of $\varepsilon$ and $R$ and define the variables
\[
M_j=\frac{N_j-\mathbb{E} N_j}{\mathbb{E} N_j}.
\]
Then,
\begin{equation}\label{eq:ProbExitFinal2}
\begin{split}
\mathbb{P}\left(X\left|j^*>
R^\delta\varepsilon^{-2\kappa}\right.\right)=& \mathbb{P}\left(X\left|j^*>
R^\delta\varepsilon^{-2\kappa}, \left|\frac{1}{j^*}\sum_{j=1}^{j^*}M_j\right|
>\lambda\right.\right)\mathbb{P}\left(\left.\left|\frac{1}{j^*}\sum_{j=1}^{j^*}
M_j\right| >\lambda\right|j^*>R^\delta\varepsilon^{-2\kappa}\right)\\
&+\mathbb{P}\left(X\left|j^*>
R^\delta\varepsilon^{-2\kappa}, \left|\frac{1}{j^*}\sum_{j=1}^{j^*}M_j\right|
\leq\lambda\right.\right)\mathbb{P}\left(\left.\left|\frac{1}{j^*}\sum_{j=1}^{j^*}
M_j\right| \leq\lambda\right|j^*>R^\delta\varepsilon^{-2\kappa}\right).
\end{split}
\end{equation}
We start by bounding the first row.
It can be easily seen that $\mathrm{Var}(M_j)\leq C$ for some $C>0$ which is
independent of $j$. Since $\mathbb{E} M_j=0$,
\[
\mathbb{P}\left(\left.
\left|\frac{1}{j^*}\sum_{j=1}^{j^*}M_j\right| >\lambda \right|j^*>
R^\delta\varepsilon^{-2\kappa}\right)\to 0
\]
as $\varepsilon\to 0$, which gives the necessary estimates for the first row in
\eqref{eq:ProbExitFinal2}. Therefore, it only remains to bound the second row
in \eqref{eq:ProbExitFinal2}. To this end, it is enough to point out that
\[
\left|\frac{1}{j^*}\sum_{j=1}^{j^*}M_j\right| \leq\lambda
\]
implies
\[
n^*\geq \sum_{j=1}^{j^*-1}N_j \geq (1-\lambda)(j^*-1)\min_j \mathbb{E} N_j.
\]
Therefore, $n^*\gtrsim R^\delta \varepsilon^{-2}$. Nevertheless, by hypothesis, $n^*\leq
s\varepsilon^{-2}$. Therefore, taking $R$ large enough (depending on $s$), we obtain
\[
\mathbb{P}\left(X\left|j^*>
R^\delta\varepsilon^{-2\kappa}, \left|\frac{1}{j^*}\sum_{j=1}^{j^*}M_j\right|
\leq\lambda\right.\right)=0.
\]
This completes the proof of the fact that the second row in
\eqref{def:SplittingExpect} goes to zero as $\varepsilon\to 0$ and $R\to+\infty$.
We now prove that the first row in
\eqref{def:SplittingExpect} goes to zero as $\varepsilon\to 0$ for any fixed $R>0$.
We proceed as in the proof of Lemma \ref{lemma:expectlemmaBigStrips} and
model the visits to
the strips in $[-R,R]$ as a symmetric random walk. The number
of strips is of order $C(R)\varepsilon^{-\kappa}$ for some function $C(R)$ independent of
$\varepsilon$.
As in the proof of Lemma \ref{lemma:expectlemmaBigStrips}, we modify slightly
the strips
$\mathcal{I}_\kappa^j$. Consider the endpoints of the strips
\[
r_j=A_j \varepsilon^\kappa,\quad j\in\mathbb{Z}
\]
with some constants $A_j$ independent of $\varepsilon$ satisfying $A_0=0$, $A_1=A>0$
and
$A_j<A_{j+1}$ for $j>0$ (and analogously for negative $j$'s). We consider the
strips
\[
\mathcal{I}_\kappa^j=[r_j,r_{j+1}]=[A_j\varepsilon^\kappa, A_{j+1}\varepsilon^\kappa].
\]
To analyze the visits to these strips, we consider the lattice of points
$\{r_j\}_{j\in\mathbb{Z}}\subset\mathbb{R}$ and we treat the ``visits'' to these
points. Lemma \ref{lemma:expectlemmaBigStrips} and the analogous lemma for
resonant zones given in \cite{CGK} imply that
if we start with $r=r_j$ we hit either $r_{j-1}$ or
$r_{j+1}$ with probability one. We treat this process as a
random walk
for $j\in\mathbb{Z}$,
\begin{equation}\label{def:randomwalk}
S_j=\sum_{i=0}^{j-1}Z_i,
\end{equation}
where $Z_i$ are Bernoulli variables taking values $\pm 1$. We choose
the constants $A_j>0$ properly so that the $Z_i$ are Bernoulli
variables with $p=1/2$, that is, so that we obtain a
classical symmetric random walk.
\begin{lem}
There exist constants $J_\pm>0$ and $\{A_j\}_{j=\lfloor
J_-\varepsilon^{-\kappa}\rfloor}^{\lfloor
J_+\varepsilon^{-\kappa}\rfloor}$, all independent of $\varepsilon$, such that
\begin{itemize}
\item $\displaystyle
A_j=A_{j-1}+(A_1-A_{0})\exp\left(-\int_0^{r_{j-1}}\frac {
2b(r)}{\sigma^2(r)}dr\right)+\mathcal{O}(\varepsilon^{\kappa})$.
\item $\displaystyle [-R,R]\subset\bigcup_{j=\lfloor
J_-\varepsilon^{-\kappa}\rfloor}^{\lfloor
J_+\varepsilon^{-\kappa}\rfloor}[A_j,A_{j+1}]$.
\item The random walk process induced by the map \eqref{eq:NRmap-n} on the
lattice $\displaystyle\{r_j\}_{j=\lfloor
J_-\varepsilon^{-\kappa}\rfloor}^{\lfloor
J_+\varepsilon^{-\kappa}\rfloor}$ is a symmetric random walk.
\end{itemize}
\end{lem}
The proof of this lemma is analogous to the proof of Lemma
\ref{lemma:randomwalkinter}.
Now we prove the convergence to zero of the first row in
\eqref{def:SplittingExpect}. In that case we stay in $[-R,R]$ for all times
$n\leq s\varepsilon^{-2}$ and we can model the whole evolution as a symmetric random
walk. Let $j^*$ be the number of changes of strip up to time
$n=\lfloor s\varepsilon^{-2}\rfloor$. We define the
Markov times $0=n_0<n_1<n_2 <\dots <n_{j^*-1}<n_{j^*}<n$
for some random $j^*=j^*(\omega)$ such that each $n_j$ is the stopping
time as in \eqref{eq:stopping-time}. Almost surely $j^*(\omega)$ is
finite. We decompose $\eta_f$, the quantity inside the expectation in \eqref{def:ExpNF}, as $\eta_f=\sum_{j=0}^{j^*} \eta_j$ with
\begin{equation*}
\eta_j=f(r_{n_{j+1}})-
f(r_{n_{j}})- \varepsilon^2
\sum_{s=n_j}^{n_{j+1}}\left(b(r_s)f'(r_s)+\frac{\sigma^2(r_s)}{2}
f''(r_s)\right).
\end{equation*}
Lemma \ref{lemma:expectlemmaBigStrips} and the analogous lemma for resonant
zones in \cite{CGK} imply that for
any $j$,
\begin{equation}\label{def:localExpec}
|\mathbb{E}(\eta_j)|\lesssim\varepsilon^{2\kappa+\zeta}
\end{equation}
for some $\zeta>0$. Define $\Delta_j=n_{j+1}-n_j$. We split
$\mathbb{E}(\eta_f)$ as
\begin{equation}\label{def:SplitGlobalExpec}
\begin{split}
\mathbb{E}(\eta_f)= & \mathbb{E}\left(\left.\sum_{j=0}^{j^*}\eta_j\right|
\varepsilon^{-2(1-\kappa)+\delta}\leq \Delta_j\leq
\varepsilon^{-2(1-\kappa)-\delta}\,\,\forall j\right)\\
&\qquad\times\mathbb{P}\left(\varepsilon^{-2(1-\kappa)+\delta}\leq
\Delta_j\leq
\varepsilon^{-2(1-\kappa)-\delta}\,\,\forall j\right)\\
&+\mathbb{E}\left(\left.\sum_{j=0}^{j^*}\eta_j\right| \,\,\exists j \,\text{ s.
t. }
\Delta_j<\varepsilon^{-2(1-\kappa)+\delta} \text{ or }
\Delta_j>\varepsilon^{-2(1-\kappa)-\delta}\right)\\
&\qquad\times \mathbb{P}\left(\exists
j \,\text{ s. t. }
\Delta_j<\varepsilon^{-2(1-\kappa)+\delta} \text{ or }
\Delta_j>\varepsilon^{-2(1-\kappa)-\delta}\right)
\end{split}
\end{equation}
where $j$ satisfies $0\leq j\leq j^*-1$.
We first bound the second term in the sum. We need
to estimate how many strips the iterates may
visit for $n\le s\varepsilon^{-2}$. Proceeding as in the proof of Lemma
\ref{lemma:expectlemmaBigStrips}, since we have $|r_n-r_{n-1}|\lesssim\varepsilon$,
there exists a
constant $c>0$ such that
\[
|n_{j+1}-n_j|\geq c\varepsilon^{\kappa-1}\quad\text{ for }j=0,\ldots, j^*-1.
\]
Therefore
\begin{equation}\label{def:maxvisitedstrips:final}
j^*\lesssim \varepsilon^{-1-\kappa}.
\end{equation}
Then, by Lemma \ref{lemma:expectlemmaBigStrips}
and the corresponding lemma for resonant zones in \cite{CGK}, for any small
$\delta$,
\[
\mathbb{P}\left(\exists
j \,\text{ s. t. }
\Delta_j<\varepsilon^{-2(1-\kappa)+\delta} \text{ or }
\Delta_j>\varepsilon^{-2(1-\kappa)-\delta}\right)\leq
\varepsilon^{-1-\kappa}e^{-C\varepsilon^{-\delta}}.
\]
This implies,
\[
\begin{split}
\Bigg| \mathbb{E}\Bigg(\sum_{j=0}^{j^*}\eta_j\Bigg| &\,\,\exists j \,\text{ s.
t. }
\Delta_j<\varepsilon^{-2(1-\kappa)+\delta} \text{ or }
\Delta_j>\varepsilon^{-2(1-\kappa)-\delta}\Bigg)\Bigg|\\
&\times \mathbb{P}\left(\exists
j \,\text{ s. t. }
\Delta_j<\varepsilon^{-2(1-\kappa)+\delta} \text{ or }
\Delta_j>\varepsilon^{-2(1-\kappa)-\delta}\right)\\
&\leq
\varepsilon^{-1-\kappa}\cdot \varepsilon^{2\kappa+\zeta}\cdot\varepsilon^{-1-\kappa}e^{
-C\varepsilon^{-\delta}}.
\end{split}
\]
Now we bound the first term in \eqref{def:SplitGlobalExpec}. Taking into
account the assumptions on the exit times $\Delta_j$, we can assume
\begin{equation}\label{eq:j*:final}
\varepsilon^{-2\kappa+\delta}\leq j^*\leq \varepsilon^{-2\kappa-\delta}.
\end{equation}
Now we are ready to prove that the first term in \eqref{def:SplitGlobalExpec}
tends to zero with $\varepsilon$. We bound the probability by one. To prove
that
the conditioned expectation in the first line tends to zero with $\varepsilon$,
it is enough to take into account \eqref{def:localExpec} and
\eqref{eq:j*:final}, to obtain
\[
\left|\mathbb{E}\left(\left.\sum_{j=0}^{j^*}\eta_j\right|
\varepsilon^{-2(1-\kappa)+\delta}\leq \Delta_j\leq
\varepsilon^{-2(1-\kappa)-\delta}\,\,\forall j\right)\right|
\lesssim
\varepsilon^{2\kappa+\zeta}\cdot\varepsilon^{-2\kappa-\delta}
\leq \varepsilon^{\zeta-\delta}.
\]
Therefore, taking $\delta>0$ small enough, we have that the first term in
\eqref{def:SplitGlobalExpec} tends to zero with $\varepsilon$. This completes the proof
of \eqref{def:ExpNF} and therefore of Theorem \ref{maintheorem}.
\section{Introduction}
In his landmark 1977 paper, Purcell elucidated the unique challenges faced by microorganisms attempting to propel themselves in an inertia-less world \cite{purcell77}. In the creeping flow limit, viscous stresses dominate, and thus shape-changing motions which are invariant under time reversal cannot produce any net locomotion -- the so-called {scallop theorem}. In order to circumvent this limitation many microorganisms are observed to pass waves along short whip-like appendages known as flagella, usually transverse planar waves for many flagellated eukaryotic cells, and helical waves for prokaryotes \cite{lighthill76,brennan77,lauga09b}.
In the first of a series of pioneering papers on the swimming of microorganisms, G.I. Taylor investigated such motions back in 1951 by considering the self-propulsion of a two-dimensional sheet which passes waves of transverse displacement \cite{taylor51}. By stipulating that such waves have a small amplitude relative to their wavelength, Taylor utilized a perturbation expansion to compute the steady swimming speed of the sheet to fourth order in amplitude. Drummond later extended Taylor's calculation of the swimming speed of an oscillating sheet to eighth order in amplitude \cite{drummond66}. A
concise presentation of the derivation can be found in Steve Childress' textbook \cite{childress81}.
Since then many more sophisticated theoretical and computational models have been proposed to study the locomotion of microorganisms which are well documented in several review articles \cite{lighthill76,brennan77,lauga09b}. Nevertheless, the simplicity of the swimming sheet still provides opportunity for insight and analysis into such problems as swimming in viscoelastic fluids \cite{lauga07,teran10}, the synchronization of flagellated cells \cite{elfring09, elfring10}, or peristaltic pumping between walls \cite{jaffrin71,pozrikidis87b,teran08,felderhof09}. The swimming sheet has also been utilized to yield theoretical insight into inertial swimming \cite{childress81,reynolds65,tuck68}.
In this paper we show that through some mathematical manipulation the perturbation expansion for an inextensible sheet outlined by Taylor (\S\ref{series}) may be performed systematically so that the result may be obtained to arbitrary order in amplitude (\S\ref{large}). The resulting series obtained is found to be divergent for order one wave amplitudes. Using boundary-integral computations as benchmark results (\S\ref{BI}), we show however that the series may be transformed to obtain an infinite radius of convergence (\S\ref{comparison}), thus providing an analytical model valid for arbitrarily large wave amplitude. The coefficients for both the original and the transformed series are included as supplementary material.
\section{Series solution for Taylor's swimming sheet}
\label{series}
\subsection{Setup}
\begin{figure}
\centering
\includegraphics[width=.75\textwidth]{figure1}
\caption{Left: Graphical representation of Taylor's swimming sheet. Right: Examples of the wave amplitudes studied in this paper: $\epsilon=0.1,1,7$.}
\label{figsystem}
\end{figure}
We consider a two dimensional sheet of amplitude $b$ which passes waves of transverse displacement at speed $c=\omega/k$, where $\omega$ is the frequency and $k$ is the wavenumber (see Fig.~\ref{figsystem}). The material coordinates of such a sheet, denoted by $s$, are given by
\begin{eqnarray}
y_{s} &=& b \sin (kx-\omega t).
\end{eqnarray}
We use the following dimensionless variables for length $x^*=xk$ and time $t^*=t\omega$ (where *'s indicate dimensionless quantities). The ratio of the amplitude of the waves to their wavelength is given by $\epsilon=bk$. For convenience we use the wave variable $z=x^*-t^*$ and therefore write
\begin{equation}
y_{s}^* = \epsilon \sin (z) = \epsilon f (z).
\end{equation}
The regime we consider here, that of microorganisms, is the creeping flow limit governed by the Stokes equations for incompressible Newtonian flows
\begin{eqnarray}
\boldsymbol{\nabla}\cdot\mathbf{u}^*&=&0, \\
\boldsymbol{\nabla} p^* &=& \nabla^2\mathbf{u}^*,
\end{eqnarray}
where the velocity field $\mathbf{u^*}=\{u,v\}/c$ and pressure field $p^*=p/\mu\omega$. We now drop the *'s for convenience.
In two dimensions the continuity equation is automatically satisfied by invoking the stream function $\psi$ where
\begin{equation}
u = - \frac{\partial \psi}{\partial y} ,\quad v = \frac{\partial \psi}{\partial x}\cdot
\end{equation}
The Stokes equations are then transformed into a biharmonic equation in the stream function
\begin{equation}
\nabla^{4}\psi = 0.
\end{equation}
The components of velocity of a material point of the sheet are denoted by $u_{0}$ and $v_{0}$. The conditions to be satisfied by the field $\psi$ at the surface $y = \epsilon f(z)$ are hence
\begin{equation}
-\frac{\partial \psi}{\partial y} | _{y=\epsilon f}= u_{0}, \quad \frac{\partial \psi}{\partial x} | _{y=\epsilon f}=v_{0}.
\label{CL0}
\end{equation}
In order to find an analytical solution we seek a regular perturbation expansion in powers of $\epsilon$,
\begin{equation}
\psi \sim \sum_{k=1}^K \epsilon^{k}\psi^{(k)},
\end{equation}
with
\begin{eqnarray}
u_0\sim\sum_{k=1}^{K}\epsilon^k u_0^{(k)}, \quad v_0\sim\sum_{k=1}^{K}\epsilon^k v_0^{(k)},
\end{eqnarray}
where $K$ is the order to which we wish to take our expansion.
We consider here only the upper-half solution which, by symmetry, is sufficient to yield the swimming velocity. The solution to the biharmonic equation which yields bounded velocities in the upper half plane, at $\mathcal{O}(\epsilon^k)$, is given by
\begin{equation}
\psi^{(k)} = U^{(k)}y+\sum_{j=1}^\infty \bigg[(A_{j}^{(k)}+B_{j}^{(k)}y)\sin (jz) +(C_{j}^{(k)}+D_{j}^{(k)}y)\cos (jz)\bigg]e^{-jy}.
\label{gensol}
\end{equation}
We look to solve this problem in a frame moving with the sheet, and hence the term $U^{(k)}y$ allows the waving sheet to move relative to the far field with a velocity equal to $\mathbf{U}\sim-\sum_{k=1}^{K}\epsilon^{k}U^{(k)}\mathbf{e}_x$.
In order to express the stream function on the boundary we expand $\psi$ in powers of $\epsilon$ about $y=0$ using Taylor expansions, and get
\begin{equation}
-\frac{\partial \psi}{\partial y}| _{y=\epsilon f} = - \sum_{k=1}^\infty \epsilon ^{k} \sum_{n=0}^{k-1}\frac{f^{n}}{n!} \frac{\partial ^{n+1} \psi ^{(k-n)}}{ \partial y^{n+1}} | _{y=0},
\end{equation}
and \begin{equation}
\frac{\partial \psi}{\partial x}| _{y=\epsilon f} = \sum_{k=1}^\infty \epsilon ^{k} \sum_{n=0}^{k-1}\frac{f^{n}}{n!} \frac{\partial ^{n+1} \psi ^{(k-n)}}{ \partial x \partial y^{n}} | _{y=0}.
\end{equation}
Substituting for $\psi$ from Eq.~\eqref{gensol} and equating with the boundary conditions we find that for $k\in [1,K]$ we must have
\begin{eqnarray}
u_0^{(k)} = -U^{(k)}- \sum_{n=0}^{k-1} \sum_{j=1}^{k-n} \frac{(-j \sin (z))^{n}}{n!}\bigg[\left(-jA_{j}^{(k-n)}+(n+1)B_{j}^{(k-n)} \right)\sin (jz) \nonumber \\
+\left(-jC_{j}^{(k-n)}+(n+1)D_{j}^{(k-n)}\right)\cos (jz)\bigg],
\end{eqnarray}
and
\begin{eqnarray}
v_0^{(k)} = \sum_{n=0}^{k-1} \sum_{j=1}^{k-n} \frac{(-j \sin (z)) ^{n}}{n!}&\bigg[&\left(jA_{j}^{(k-n)}-nB_{j}^{(k-n)}\right)\cos (jz)
+\left(-jC_{j}^{(k-n)}+nD_{j}^{(k-n)}\right)\sin (jz)\bigg].
\end{eqnarray}
At order $k$, the unknowns, which are the $k^{\text{th}}$ coefficients $A_{j}^{(k)}$, $B_{j}^{(k)}$, $C_{j}^{(k)}$, $D_{j}^{(k)}$ and swimming speed $U^{(k)}$, are in the $n=0$ term only. Factoring this off and rearranging we obtain
\begin{eqnarray}\label{eq1}
u_0^{(k)}+\tilde{G}^{(k)} = -U^{(k)}+\sum_{j=1}^{k}\bigg[(jA_{j}^{(k)}-B_{j}^{(k)})\sin (jz)+(jC_{j}^{(k)}-D_{j}^{(k)})\cos (jz)\bigg],
\end{eqnarray}
and
\begin{eqnarray}\label{eq2}
v_0^{(k)}-\tilde{H}^{(k)}&=& \sum_{j=1}^{k}\bigg[jA_{j}^{(k)}\cos (jz)-jC_{j}^{(k)}\sin (jz)\bigg],
\end{eqnarray}
with
$\tilde{G}^{(k)}$ and $\tilde{H}^{(k)}$ given by
\begin{eqnarray}
\tilde{G}^{(k)}=\sum_{n=1}^{k-1} \sum_{j=1}^{k-n} \frac{(-j \sin (z)) ^{n}}{n!}\bigg[(-jA_{j}^{(k-n)}+(n+1)B_{j}^{(k-n)} )\sin (jz)
+(-jC_{j}^{(k-n)}+(n+1)D_{j}^{(k-n)})\cos (jz) \bigg],
\end{eqnarray}
and
\begin{eqnarray}
\tilde{H}^{(k)}=\sum_{n=1}^{k-1} \sum_{j=1}^{k-n} \frac{(-j \sin (z))^{n}}{n!}&\bigg[&(jA_{j}^{(k-n)}-nB_{j}^{(k-n)} )\cos (jz)
+(-jC_{j}^{(k-n)}+nD_{j}^{(k-n)} )\sin (jz)\bigg].
\end{eqnarray}
Provided the solution for the flow field is known for all orders up to $k-1$, the left-hand side of Eqs.~\eqref{eq1}--\eqref{eq2} is thus known, and all the unknowns, determining the $k^{\text{th}}$ order terms, are on the right-hand side.
The terms $\tilde{G}^{(k)}$ and $\tilde{H}^{(k)}$ may conveniently be rearranged into a Fourier series of order $k$ as
\begin{eqnarray}
\tilde{G}^{(k)}&=&\sum_{j=0}^{k} \tilde{K}_j^{(k)} \cos (jz)+\sum_{j=1}^{k} \tilde{S}_j^{(k)} \sin (jz),\\
\tilde{H}^{(k)}&=&\sum_{j=0}^{k} \tilde{T}_j^{(k)} \cos (jz)+\sum_{j=1}^{k} \tilde{R}_j^{(k)} \sin (jz).
\end{eqnarray}
A simple closed-form expression for the Fourier coefficients is not easily obtained; however, they are readily computed numerically.
Finally, as we show below, the $k^{\text{th}}$ term of the components of velocity at the boundary can be written as a Fourier cosine series of order $k$
\begin{equation}
u_{0}^{(k)} = \sum_{j=0}^{k}\alpha_{j}^{(k)}\cos(jz),
\label{u0}
\end{equation}
and
\begin{equation}
v_{0}^{(k)} = \sum_{j=0}^{k}\beta_{j}^{(k)}\cos(jz).
\label{v0}
\end{equation}
We can hence write, for all $k$, the system to solve
\begin{eqnarray}
jA_{j}^{(k)}-B_{j}^{(k)} &=&\tilde{S}_j^{(k)}, \\
jC_{j}^{(k)}-D_{j}^{(k)} &=&\alpha_{j}^{(k)}+\tilde{K}_j^{(k)},\\
jC_{j}^{(k)} &=&\tilde{R}_j^{(k)},\\
jA_{j}^{(k)} &=&\beta_{j}^{(k)}-\tilde{T}_j^{(k)},
\end{eqnarray}
for $j \in \left[1,k\right]$, or more compactly
\begin{eqnarray}
\mathcal{J}_j\mathbf{A}_j^{(k)}=\mathbf{\tilde{b}}_j^{(k)}.
\end{eqnarray}
The determinant of the coefficient matrix is $\det(\mathcal{J}_j)=j^2$, and hence $\mathcal{J}_j$ is invertible for all $j\ne 0$. The solutions for each $j$ are decoupled, and thus for each $k$ we invert a $4k\times 4k$ block-diagonal matrix.
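The decoupled block solve is easily illustrated. The sketch below (a minimal numerical check of our own, assuming numpy, and not taken from the implementation used for the results; the right-hand-side values are arbitrary) builds one $4\times 4$ block, verifies $\det(\mathcal{J}_j)=j^2$, and solves it:

```python
import numpy as np

def solve_block(j, S, aK, R, bT):
    """Solve one 4x4 block for (A_j, B_j, C_j, D_j) at harmonic j >= 1.

    Right-hand sides: S = S~_j, aK = alpha_j + K~_j, R = R~_j, bT = beta_j - T~_j.
    """
    J = np.array([[j, -1.0, 0.0, 0.0],   # j A_j - B_j = S~_j
                  [0.0, 0.0, j, -1.0],   # j C_j - D_j = alpha_j + K~_j
                  [0.0, 0.0, j, 0.0],    # j C_j = R~_j
                  [j, 0.0, 0.0, 0.0]])   # j A_j = beta_j - T~_j
    assert abs(np.linalg.det(J) - j**2) < 1e-9 * j**2   # det(J_j) = j^2
    return np.linalg.solve(J, np.array([S, aK, R, bT]))

A, B, C, D = solve_block(3, 1.0, 2.0, 3.0, 4.0)
assert abs(3*A - 4.0) < 1e-12 and abs(3*C - 3.0) < 1e-12
```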
Note that for the mean $j=0$ terms we obtain
\begin{eqnarray}
U^{(k)}&=& -\alpha_{0}^{(k)}-\tilde{K}_0^{(k)},\\
0&=&\beta_{0}^{(k)}-\tilde{T}_0^{(k)}.
\end{eqnarray}
We thus see that the swimming speed at $\mathcal{O}(\epsilon^k)$ depends only on the mean at that order. We also find that since there is no far-field vertical velocity we require $\beta_{0}^{(k)}=\tilde{T}_0^{(k)}$, which are both known, in order to avoid an ill-posed problem. This means that since we do not allow a mean vertical flow in the solution of the stream function (which gives $\tilde{T}_0^{(k)}=0$) then the vertical boundary conditions must have zero mean, $\beta_{0}^{(k)}=0$.
Now we can solve for the swimming speed up to $\mathcal{O}(\epsilon^k)$ by solving the above system at all orders up to $k$ sequentially, provided we have the Fourier coefficients of the boundary conditions up to $\mathcal{O}(\epsilon^k)$.
\subsection{Boundary conditions}
Following Taylor \cite{taylor51}, we wish the material of the sheet to be inextensible. In a frame moving at the wave speed the shape of the sheet is at rest \cite{taylor51,childress81}, therefore in a frame moving with the sheet the boundary conditions are
\begin{eqnarray}
u_{0} &=& -Q \cos\theta+1, \\
v_{0} &=& -Q \sin\theta,
\end{eqnarray}
where $\tan\theta=y_s'$ and $Q$, the material velocity in the moving frame, is given by
\begin{eqnarray}
Q &=&\frac{1}{2\pi}\int_0^{2\pi} \sqrt{1+\epsilon^2 \cos ^2 (z)} \text{d} z.
\end{eqnarray}
Expanding in powers of $\epsilon$ and integrating we obtain
\begin{eqnarray}
Q&=&\sum_{n=0}^{\infty}\frac{(-1)^{n+1}}{(2n-1)2^{4n}}\binom{2n}{n}^{2}\epsilon^{2n},\nonumber \\
&=&\sum_{n=0}^{\infty}q_{n}\epsilon^{2n}.
\end{eqnarray}
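As a quick consistency check of this expansion (a sketch of our own in Python, not the arbitrary-precision C code used for the full series), the partial sums of $\sum_n q_n\epsilon^{2n}$ can be compared with a direct quadrature of the integral defining $Q$; the binomial series converges for $\epsilon<1$:

```python
import math

def q(n):
    # q_n = (-1)^(n+1) C(2n,n)^2 / ((2n-1) 2^(4n))
    return (-1)**(n + 1) / ((2*n - 1) * 2**(4*n)) * math.comb(2*n, n)**2

def Q_series(eps, N=25):
    return sum(q(n) * eps**(2*n) for n in range(N))

def Q_quad(eps, M=4096):
    # mean of sqrt(1 + eps^2 cos^2 z) over one period; the trapezoid rule
    # on a periodic grid converges spectrally here
    return sum(math.sqrt(1 + eps**2 * math.cos(2*math.pi*m/M)**2)
               for m in range(M)) / M

assert abs(Q_series(0.5) - Q_quad(0.5)) < 1e-10
```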
Similarly we expand $\cos\theta$ in powers of $\epsilon$ to give
\begin{eqnarray}
\cos\theta &=& \sum_{n=0}^{\infty}\epsilon^{2n}(-1)^{n}\frac{1}{2^{4n}}\binom{2n}{n}\left[-\binom{2n}{n}+2\sum_{r=0}^{n}\binom{2n}{n-r}\cos (2rz)\right]\nonumber \\
&=&\sum_{n=0}^{\infty}\epsilon^{2n}\sum_{r=0}^{n}t_{r}^{n}\cos(2rz).
\end{eqnarray}
Letting $k=2n$ and considering only even values we obtain
\begin{eqnarray}
u_{0}&=& 1-\sum_{k=0}^{\infty}\epsilon^k\sum_{r=0}^{k/2}\cos(2rz)\sum_{p=r}^{k/2}t_{r}^{p}q_{\frac{k}{2}-p}\nonumber\\
&=&-\sum_{k=2}^{\infty}\epsilon^k\sum_{r=0}^{k/2}\cos(2rz)\sum_{p=r}^{k/2}t_{r}^{p}q_{\frac{k}{2}-p}.
\end{eqnarray}
Now letting $j=2r$ we find for even $j$ and even $k\ge2$
\begin{equation}
u_0^{(k)}=\sum_{j=0}^{k}\alpha_j^{(k)}\cos(jz),
\end{equation}
where we have defined
\begin{equation}
\alpha_j^{(k)}=-\sum_{p=j/2}^{k/2}t_{\frac{j}{2}}^p q_{\frac{k}{2}-p},
\end{equation}
while $u_0^{(k)}=0$ for odd $k$ and $\alpha_j^{(k)}=0$ for odd $j$.
We then know that
\begin{eqnarray}
v_0=-y_s'(z)Q\cos\theta,
\end{eqnarray}
and hence we find for odd $j$ and odd $k\ge3$
\begin{eqnarray}
v_0^{(k)}&=& \sum_{j=1}^{k}\beta_j^{(k)}\cos(jz),\\
\beta_1^{(k)}&=& \alpha_0^{(k-1)}+\frac{1}{2}\alpha_2^{(k-1)},\\
\beta_j^{(k)}&=& \frac{\alpha_{j-1}^{(k-1)}+\alpha_{j+1}^{(k-1)}}{2}, \quad 3\le j\le k-2,\\
\beta_k^{(k)}&=&\frac{1}{2}\alpha_{k-1}^{(k-1)},
\end{eqnarray}
and for $k=1$, $\beta_1^{(1)}=-1$. In contrast, $v_0^{(k)}=0$ for even $k$ and $\beta_j^{(k)}=0$ for even $j$. We see that the vertical component of the boundary velocity has no mean component at any order in $\epsilon$ which, as we saw in the previous section, is required given the form of the solution. With these coefficients we can now solve a linear system at each order to obtain $U^{(k)}$ to arbitrary order.
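These recursions are easy to check numerically. The sketch below (our own Python translation, not the C/GNU MP code described next) builds $\alpha_j^{(k)}$ and $\beta_j^{(k)}$ from the formulas for $q_n$ and $t_r^n$, and compares the truncated series for $u_0$ and $v_0$ against the exact inextensible boundary conditions at small $\epsilon$:

```python
import math

def qc(n):   # coefficients of Q = sum_n q_n eps^(2n)
    return (-1)**(n + 1) / ((2*n - 1) * 2**(4*n)) * math.comb(2*n, n)**2

def tc(r, n):   # coefficients of cos(theta) = sum_n eps^(2n) sum_r t_r^n cos(2rz)
    pre = (-1)**n / 2**(4*n) * math.comb(2*n, n)
    return pre * (math.comb(2*n, n) if r == 0 else 2*math.comb(2*n, n - r))

def alpha(j, k):   # u_0^(k) = sum_j alpha_j^(k) cos(jz);  j, k even, k >= 2
    return -sum(tc(j//2, p) * qc(k//2 - p) for p in range(j//2, k//2 + 1))

def u0_series(z, eps, K=4):
    return sum(eps**k * sum(alpha(j, k) * math.cos(j*z)
                            for j in range(0, k + 1, 2))
               for k in range(2, K + 1, 2))

def v0_series(z, eps, K=5):
    tot = -eps * math.cos(z)                      # beta_1^(1) = -1
    for k in range(3, K + 1, 2):
        a = {j: alpha(j, k - 1) for j in range(0, k, 2)}
        b = {1: a[0] + 0.5*a[2]}                  # beta_1^(k)
        for j in range(3, k - 1, 2):              # beta_j^(k), 3 <= j <= k-2
            b[j] = 0.5*(a[j - 1] + a[j + 1])
        b[k] = 0.5*a[k - 1]                       # beta_k^(k)
        tot += eps**k * sum(bj * math.cos(j*z) for j, bj in b.items())
    return tot

def Q_num(eps, M=4096):
    return sum(math.sqrt(1 + eps**2 * math.cos(2*math.pi*m/M)**2)
               for m in range(M)) / M

def u0_exact(z, eps):   # u_0 = 1 - Q cos(theta)
    return 1 - Q_num(eps) / math.sqrt(1 + eps**2 * math.cos(z)**2)

def v0_exact(z, eps):   # v_0 = -Q sin(theta)
    return -eps * math.cos(z) * Q_num(eps) / math.sqrt(1 + eps**2 * math.cos(z)**2)

# truncation errors should be O(eps^6) and O(eps^7) respectively
eps = 0.1
assert all(abs(u0_series(z, eps) - u0_exact(z, eps)) < 1e-5
           for z in (0.0, 0.4, 1.1, 2.5))
assert all(abs(v0_series(z, eps) - v0_exact(z, eps)) < 1e-5
           for z in (0.0, 0.4, 1.1, 2.5))
```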
In practice the number of terms obtainable will be limited by numerical technique. To obtain the first one thousand terms of the series used in the analysis in the following sections, the system of equations was solved using the $\textit{C}$ programming language with GNU MP, the {GNU multiple precision arithmetic library} \cite{gmp}, using $300$ digits of accuracy.
\section{Analysis and improvement of the perturbation series}
\label{large}
In the previous sections we presented methodology to obtain the solution to the swimming speed $U$ in the form of a perturbation series
\begin{equation}\label{originalseries}
U(\epsilon)\sim\sum_{k=1}^{K}U^{(k)}\epsilon^{k}.
\end{equation}
It remains of course to be seen whether the series will converge to $U$ for arbitrary $\epsilon$.
We analyze here the convergence properties of the series, and methods to improve upon that convergence.
In Fig.~\ref{fig2_3label} we plot the coefficients of the series $U^{(k)}$ against $k$. On Fig.~\ref{fig2_3label}a are plotted the first 100 terms, and on Fig.~\ref{fig2_3label}b the logarithm of the absolute value of the nonzero terms up to $k=1000$.
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{figure2}
\caption{Coefficients of the series for the swimming speed, Eq.~\eqref{originalseries}.
(a): The first 100 terms of the series $U^{(k)}$; (b): $\ln\left(
\left|U^{(k)}\right|\right)$ for $k=1$ to $1000$ for nonzero values of $U^{(k)}$.}
\label{fig2_3label}
\end{figure}
We see that the coefficients have an exponentially increasing amplitude while alternating in sign, $U^{(k)}>0$ for $k=4n-2$ and $U^{(k)}<0$ for $k=4n$ where $n\in\mathbb{N}$. We also note that due to the $\epsilon\rightarrow-\epsilon$ symmetry of the geometry in the problem, all odd powers in the series are zero. It is therefore useful to recast the series as follows
\begin{eqnarray}\label{newseries}
U=
\sum_{k=1}^{K}U^{(k)}\epsilon^{k}=\sum_{k=1}^{K/2}U^{(2k)}\epsilon^{2k}= \delta\sum_{k=0}^{K/2-1}c_k\delta^k,
\end{eqnarray}
where $c_k=U^{(2k+2)}$ and $\delta=\epsilon^2$. The coefficients $c_k$ for $k=1$ to 500 have been reproduced in the included supplementary material of the manuscript.
The sign of $c_k$ alternates in a regular manner which indicates that the nearest singularity lies on the negative real axis and since only positive values of $\delta$ have any meaning, there is no physical significance to the singularity; it does of course govern the radius of convergence of the series \cite{vandyke74}.
\subsection{Series convergence}
The radius of convergence, $\delta_0$, of the power series
\begin{eqnarray}
f(\delta)\sim\sum_k c_k \delta^k,
\end{eqnarray}
may be simply found by using the ratio test
\begin{eqnarray}
\delta_0 = \lim_{k\rightarrow \infty} \frac{c_{k-1}}{c_k}\cdot
\end{eqnarray}
In order to find this value we must extrapolate due to the finite number of terms. To aid this process, Domb and Sykes noted that it is helpful to plot $c_k/c_{k-1}$ against $1/k$ \cite{domb57}. The reason is that if the singular function $f$ has a dominant factor
\begin{eqnarray}
(\delta_0-\delta)^\gamma \quad &\text{for}& \quad \gamma\ne 0,1,2,...,\\
(\delta_0-\delta)^\gamma\ln(\delta_0-\delta) \quad &\text{for}& \quad \gamma= 0,1,2,...,
\end{eqnarray}
then the coefficients behave like
\begin{equation}
\frac{c_{k}}{c_{k-1}}\sim \frac{1}{\delta_{0}}\left(1-\frac{1+\gamma}{k}\right),
\label{dombsykes}
\end{equation}
for large $k$ \cite{vandyke74,hinch91}. The result in Eq.~(\ref{dombsykes}) indicates that the intercept $1/k=0$ in a Domb-Sykes plot gives the reciprocal of the radius of convergence while the slope approaching the intercept gives $\gamma$. In Fig.~\ref{figckDS} we show the Domb-Sykes plot of the series $c_k$. The plot indicates that the nearest singularity is at $\delta_{0}\approx-0.914912217581184$, and that $\gamma=-1$ corresponding to a first order pole.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{figure3}
\caption{Domb-Sykes plot of the coefficients $c_k$, from the series in Eq.~\eqref{newseries}, shows convergence to $1/\delta_0\approx -1.093$.}
\label{figckDS}
\end{figure}
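The extrapolation in Eq.~(\ref{dombsykes}) is simple to implement. The sketch below applies it to a model function with a known algebraic singularity (a toy example of our own, not the sheet coefficients), recovering both $\delta_0$ and $\gamma$ from the coefficient ratios:

```python
# Taylor coefficients of f(d) = (d0 - d)^g satisfy, exactly,
#   c_k / c_{k-1} = (1/d0) * (1 - (1 + g)/k),
# so a straight-line fit of the ratios against 1/k recovers d0 and g.
d0, g = -0.915, -0.5
c = [1.0]                       # overall normalization is irrelevant to the ratios
for k in range(1, 60):
    c.append(c[-1] * (k - 1 - g) / (k * d0))

# fit a line through the last two ratios and read off intercept and slope
k1, k2 = 58, 59
r1, r2 = c[k1]/c[k1 - 1], c[k2]/c[k2 - 1]
slope = (r2 - r1) / (1/k2 - 1/k1)
intercept = r1 - slope/k1
assert abs(1/intercept - d0) < 1e-10          # intercept = 1/d0
assert abs(-slope/intercept - 1 - g) < 1e-8   # slope/intercept = -(1+g)
```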
\subsection{Euler transformation}\label{sec:euler}
One approach to improve the convergence of the series is to factor out the first-order pole characterized above, and then characterize the singularities of the new series. However, we find that the resulting series is no more tractable, due to the presence of an apparent branch cut in the complex plane close to $\delta = -1$.
Alternatively, the original non-physical singularity $\delta_0$ may be mapped to infinity using an Euler transformation, introducing a new small variable
\begin{eqnarray}
\tilde{\delta}=\frac{\delta}{\delta-\delta_0}\cdot
\end{eqnarray}
The power series for $f$ is then recast as
\begin{equation}\label{eq:euler}
f\sim\sum_k c_k \delta^k \sim \sum_k d_k \tilde{\delta}^k.
\end{equation}
The coefficients $d_k$ for $k=1$ to 500 have been reproduced in the included supplementary material of the manuscript. Their values for $k>50$ are shown in Fig.~\ref{figdk}a and we can see that they decay in magnitude for large $k$.
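In practice the $d_k$ follow from the $c_k$ by composing the series with the inverse of the Euler map, $\delta=-\delta_0(\tilde{\delta}+\tilde{\delta}^2+\cdots)$, and collecting powers of $\tilde{\delta}$. A minimal sketch of this composition (our own, verified on a toy simple pole for which the transformed series terminates exactly):

```python
def polymul(a, b, N):
    """Product of two truncated power series, keeping orders 0..N."""
    out = [0.0] * (N + 1)
    for i, ai in enumerate(a):
        if i > N:
            break
        for j, bj in enumerate(b):
            if i + j > N:
                break
            out[i + j] += ai * bj
    return out

def euler_coeffs(c, delta0, N):
    """d_k such that sum_k c_k delta^k = sum_k d_k dt^k with dt = delta/(delta-delta0)."""
    inner = [0.0] + [-delta0] * N       # delta = -delta0*(dt + dt^2 + ...)
    d = [0.0] * (N + 1)
    d[0] = c[0]
    power = [1.0]                       # running inner^k
    for k in range(1, min(len(c), N + 1)):
        power = polymul(power, inner, N)
        for m, pm in enumerate(power):
            d[m] += c[k] * pm
    return d

# sanity check: f = 1/(1 - delta/delta0) has a simple pole at delta0;
# under the Euler map it becomes exactly 1 - dt.
delta0 = -0.915
d = euler_coeffs([delta0**(-k) for k in range(9)], delta0, 8)
assert abs(d[0] - 1.0) < 1e-9 and abs(d[1] + 1.0) < 1e-9
assert max(abs(x) for x in d[2:]) < 1e-9
```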
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{figure4}
\caption{
Coefficients of the new series for the swimming speed using the Euler transformation, Eq.~\eqref{eq:euler}.
(a) Coefficients $d_{k}$ of the new series;
(b) Domb-Sykes plot of the coefficients shows a convergence to one.}
\label{figdk}
\end{figure}
In order to find the radius of convergence of the new series, $d_k$, we again turn to the Domb-Sykes plot, shown in Fig.~\ref{figdk}b. We see that $d_k/d_{k-1}\rightarrow 1$ as $k^{-1}\rightarrow 0$, and since $\delta/(\delta-\delta_0)\rightarrow 1$ when $\delta\rightarrow \infty$, we now have an infinite radius of convergence in the original variable $\delta$. Note that the vastly improved convergence does not necessarily mean the series will provide a good approximation beyond $\delta_0$ \cite{hinch91}; however, we will see in the results section that it actually provides an excellent fit to the numerical results.
\subsection{Pad\'{e} approximants}\label{sec:pade}
\begin{figure}[t]
\centering
\includegraphics[width=0.75\textwidth]{figure5}
\caption{Zeros in the complex plane of the denominators of various Pad\'{e} approximants for $M=N=10$, 50, 100, 200 and 249.}
\label{figpadezero}
\end{figure}
A popular scheme to improve the convergence properties of series is to recast the series as a rational polynomial
\begin{eqnarray}
f(\delta)\sim \sum_{k=0}^{M+N} c_k\delta^k \sim \frac{\sum_0^M a_k \delta^k}{\sum_0^N b_k\delta^k}=P_{N}^{M},
\end{eqnarray}
where $M+N\le K/2-1$. Multiplying both sides by the denominator $\sum b_k\delta^k$ and matching the terms of order $\delta^k$ for $k=M+1,\ldots,M+N$, we obtain a square matrix to invert for $b_1,...,b_N$, where one takes $b_0=1$ with no loss of generality \cite{bender78}. One can then solve for the $a_k$.
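A sketch of this construction (a generic implementation of our own, assuming numpy, and not the code used for the figures), checked against two approximants with known closed forms:

```python
import math
import numpy as np

def pade(c, M, N):
    """[M/N] Pade approximant from series coefficients c_0..c_{M+N}, with b_0 = 1."""
    cof = lambda m: c[m] if m >= 0 else 0.0
    # matching orders M+1..M+N:  sum_{j=1}^{N} b_j c_{k-j} = -c_k
    A = np.array([[cof(k - j) for j in range(1, N + 1)]
                  for k in range(M + 1, M + N + 1)])
    rhs = np.array([-c[k] for k in range(M + 1, M + N + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
    # numerator coefficients follow from the low orders 0..M
    a = np.array([sum(b[j] * cof(k - j) for j in range(min(k, N) + 1))
                  for k in range(M + 1)])
    return a, b

# sanity checks: the [1/1] Pade of 1/(1-x) is 1/(1-x) itself,
# and the [1/1] Pade of exp(x) is (1 + x/2)/(1 - x/2)
a, b = pade([1.0, 1.0, 1.0], 1, 1)
assert abs(b[1] + 1.0) < 1e-12 and abs(a[1]) < 1e-12
a, b = pade([1/math.factorial(k) for k in range(3)], 1, 1)
assert abs(a[1] - 0.5) < 1e-12 and abs(b[1] + 0.5) < 1e-12
```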
We apply this method to our swimming sheet, and plot the zeros of different Pad\'{e} denominators with $M=N$ in Fig.~\ref{figpadezero}.
It is evident that the pole we identified earlier at $\delta_{0}$ is well reproduced here. The interesting feature beyond this is the fact that the remaining zeros do not exhibit any consistency, which indicates a branch cut in the complex plane.
\subsection{Shanks transformation}\label{sec:shanks}
A scheme to improve the rate of convergence of a sequence of partial sums
\begin{eqnarray}
S_n=\sum_{k=0}^{n}c_k\delta^k,
\end{eqnarray}
for $n=0$ to $N\le K/2-1$, is to assume they are in a geometric progression
\begin{eqnarray}
S_n=A+BC^n.
\end{eqnarray}
Solving for A by nonlinear extrapolation of three sums yields
\begin{eqnarray}
A_n=S_n-\frac{\left(S_{n+1}-S_{n}\right)\left(S_n-S_{n-1}\right)}{\left(S_{n+1}-S_n\right)-\left(S_n-S_{n-1}\right)}.
\end{eqnarray}
The $A_n$'s, for $n=1$ to $N-1$, can then be considered a new sequence of partial sums, and the Shanks transformation may thereby be repeated $(N-1)/2$ times \cite{hinch91}.
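A compact implementation of the repeated transformation (our own sketch, applied here to the classical alternating series for $\ln 2$ rather than to the swimming-speed partial sums):

```python
import math

def shanks(S):
    """One Shanks transformation of a sequence of partial sums."""
    return [S[n] - (S[n+1] - S[n]) * (S[n] - S[n-1]) /
                   ((S[n+1] - S[n]) - (S[n] - S[n-1]))
            for n in range(1, len(S) - 1)]

# partial sums of ln 2 = 1 - 1/2 + 1/3 - ... converge very slowly ...
S, tot = [], 0.0
for k in range(1, 12):
    tot += (-1)**(k + 1) / k
    S.append(tot)

T = S
for _ in range(4):          # ... but respond very well to repeated Shanks
    T = shanks(T)
assert abs(S[-1] - math.log(2)) > 1e-2
assert abs(T[-1] - math.log(2)) < 1e-6
```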
\section{Boundary Integral Formulation}
\label{BI}
In order to provide benchmark results for the analysis of the perturbation series and its various transformations, we use the boundary integral method to obtain what we will consider to be an {exact} solution of the swimming speed for waves of arbitrarily large amplitude.
We briefly summarize the principle of the method here. The Lorentz reciprocal theorem states that two solutions to the Stokes equations, $\left(\mathbf{u}, \boldsymbol{\sigma}\right)$ and $\left(\tilde{\mathbf{u}},\tilde{\boldsymbol{\sigma}}\right)$, are related by
\begin{eqnarray}
\int_S \left(\mathbf{u}\cdot\tilde{\boldsymbol{\sigma}}\right)\cdot\mathbf{n} \ dS=\int_S \left(\tilde{\mathbf{u}}\cdot\boldsymbol{\sigma}\right)\cdot\mathbf{n} \ dS,
\label{lorenz}
\end{eqnarray}
within a volume $V$ bounded by the surface $S$ whose unit normal $\mathbf{n}$ is taken pointing into the fluid. The velocity and stress fields, $\tilde{\mathbf{u}}(\mathbf{x})$ and $\tilde{\boldsymbol{\sigma}}(\mathbf{x})$, are taken to be fundamental solutions for two-dimensional Stokes flow due to a point force at $\mathbf{x}_0$,
\begin{eqnarray}
\tilde{\mathbf{u}}(\mathbf{x})&=&\frac{1}{4\pi}\mathbf{G}(\hat{\mathbf{x}})\cdot\tilde{\mathbf{f}}(\mathbf{x}_0), \\
\tilde{\boldsymbol{\sigma}}(\mathbf{x})&=&\frac{1}{4\pi}\mathbf{T}(\hat{\mathbf{x}})\cdot\tilde{\mathbf{f}}(\mathbf{x}_0),
\end{eqnarray}
where $\hat{\mathbf{x}}=\mathbf{x}-\mathbf{x}_0$ and the two dimensional Stokeslet $\mathbf{G}$, and stresslet $\mathbf{T}$ are given by
\begin{eqnarray}
\mathbf{G}&=&-\mathbf{I}\ln(\left|\hat{\mathbf{x}}\right|)+\frac{\hat{\mathbf{x}}\hat{\mathbf{x}}}{\left|\hat{\mathbf{x}}\right|^2},\\
\mathbf{T}&=&-4\frac{\hat{\mathbf{x}}\hat{\mathbf{x}}\hat{\mathbf{x}}}{\left|\hat{\mathbf{x}}\right|^4}\cdot
\end{eqnarray}
Taking the singular point $\mathbf{x}_0$ to be on the boundary $S$ one obtains from Eq.~\eqref{lorenz} a boundary integral solution to two-dimensional Stokes equations for the velocity
\begin{eqnarray}
\mathbf{u}(\mathbf{x}_0)=\frac{1}{2\pi}\int_S\left(\mathbf{u}(\mathbf{x})\cdot\mathbf{T}(\hat{\mathbf{x}})\cdot\mathbf{n}(\mathbf{x})-\mathbf{f}(\mathbf{x})\cdot\mathbf{G}(\hat{\mathbf{x}})\right)\ d S(\mathbf{x}), \label{bi2}
\end{eqnarray}
where $\mathbf{f}=\boldsymbol{\sigma}\cdot\mathbf{n}$.
We wish to capture the swimming speed of an infinite sheet, therefore the domain of integration is an entire half plane of fluid bounded by the sheet. In order to avoid performing an integration over the entire boundary, it is convenient to use an array of periodically placed Stokeslets and stresslets, given by
\begin{eqnarray}
\mathbf{G}^p&=&\sum_{n=-\infty}^{\infty}-\mathbf{I}\ln(\left|\hat{\mathbf{x}}_n\right|)+\frac{\hat{\mathbf{x}}_n\hat{\mathbf{x}}_n}{\left|\hat{\mathbf{x}}_n\right|^2},\\
\mathbf{T}^p&=&\sum_{n=-\infty}^{\infty}-4\frac{\hat{\mathbf{x}}_n\hat{\mathbf{x}}_n\hat{\mathbf{x}}_n}{\left|\hat{\mathbf{x}}_n\right|^4},
\end{eqnarray}
where $\hat{\mathbf{x}}_n=\{\hat{x}_0+2\pi n,\hat{y}_0\}$, so that we may then instead integrate $\mathbf{G}^p$ and $\mathbf{T}^p$ over a single period \cite{pozrikidis87}. The periodic Stokeslet and stresslet may be conveniently expressed in closed form \cite{pozrikidis87,pozrikidis92}, through the use of the following summation formula
\begin{eqnarray}
A=\sum_{n=-\infty}^{\infty}\ln(\left|\hat{\mathbf{x}}_n\right|)=\frac{1}{2}\ln\left[2\cosh(\hat{y}_0)-2\cos(\hat{x}_0)\right],
\end{eqnarray}
and its derivatives, as follows
\begin{eqnarray}
G_{xx}^p &=& -A-\partial_yA+1, \\
G_{xy}^p &=& y\partial_xA, \\
G_{yy}^p &=& -A+y\partial_y A,
\end{eqnarray}
and
\begin{eqnarray}
T^p_{xxx} &=& -2\partial_x(2A+y\partial_yA), \\
T^p_{xxy} &=& -2\partial_y(y\partial_yA), \\
T^p_{xyy} &=& 2y\partial_{xy}A, \\
T^p_{yyy} &=& -2(\partial_yA-y\partial_{yy}A).
\end{eqnarray}
The remaining elements follow from a permutation of the indices of the Stokeslet and stresslet which leaves the right hand side unchanged \cite{pozrikidis92}.
The flow is quiescent at infinity and periodic on $2\pi$ and therefore the domain of integration $S$ reduces to the surface of the sheet over one period. To facilitate integration the continuous boundary is discretized into $N$ straight line elements $S_n$ and we assume that $\mathbf{f}$ is a linear function over each particular interval, $\mathbf{f}\rightarrow\mathbf{f}_n$ (see Ref.~\cite{higdon85}). We decompose the boundary velocity into surface deformations and rigid body motion $\mathbf{u}\rightarrow\mathbf{u}_n+\mathbf{U}$, where $\mathbf{u}_n$ is a linear function over each interval and $\mathbf{U}\equiv -U\mathbf{e}_x$. Then $\mathbf{x}_0$ is taken at the center of each of the $N$ segments $S_n$, where the velocity is known, $\mathbf{x}_0\rightarrow \mathbf{x}_m$. The $\mathbf{G}^p$ and $\mathbf{T}^p$ are regularized by subtracting off the Stokeslet and stresslet from their periodic counterparts. The two-dimensional Stokeslet and stresslet are then integrated analytically and added back.
We thereby obtain from Eq.~\eqref{bi2} a linear system for $\mathbf{f}_n$ and $U$, given by
\begin{eqnarray}
\mathbf{u}(\mathbf{x}_m)+\mathbf{U}=\frac{1}{2\pi}\sum_{n=1}^N\Bigg[-\int_{S_n}\mathbf{f}_n\cdot\left(\mathbf{G}^p-\mathbf{G}\right)d S_n
-\int_{S_n}\mathbf{f}_n\cdot\mathbf{G} dS_n
\nonumber\\
+\int_{S_n}(\mathbf{u}_n+\mathbf{U})\cdot\left(\mathbf{T}^p-\mathbf{T}\right)\cdot\mathbf{n}_n d S_n
+\int_{S_n}(\mathbf{u}_n+\mathbf{U})\cdot\mathbf{T}\cdot\mathbf{n}_n dS_n
\Bigg].
\end{eqnarray}
We then obtain $U$ by specifying that the sheet is force free
\begin{eqnarray}
\sum_{n=1}^N \left[\mathbf{e}_x\cdot \int_{S_n} \mathbf{f}_n dS_n\right]=0.
\end{eqnarray}
The numerical procedure was validated by reproducing Pozrikidis' results for shear flow over a sinusoidal surface \cite{pozrikidis87}.
\section{Comparison between series solution and computations}
\label{comparison}
\subsection{Series solution}
We first show the convergence of the unaltered series expansion, Eq.~\eqref{originalseries}, in Fig.~\ref{figseries} where we display the swimming speed of the sheet, $U$, as a function of its amplitude, $\epsilon$.
The red squares indicate numerical points computed with the boundary integral method. We plot the results for Taylor's original fourth-order expansion (dashed-dot line), which is reasonably accurate up to $\epsilon\approx 0.4$. The series with $K=20$ is shown as a dashed line. As more terms are added, the series with $K=1000$ (solid line) fails to converge beyond the singularity at $\epsilon=\sqrt{-\delta_{0}}\approx0.95651$, as expected from the analysis in \S\ref{large}.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{figure6}
\caption{Swimming speed, $U$, against wave amplitude, $\epsilon$, for the unaltered series, Eq.~\eqref{originalseries}, with $K=4$ (dashed-dot), $K=20$ (dashed), $K=1000$ (solid). The series diverges for $\epsilon\approx 0.9565$. Red squares indicate data points from the boundary integral method.}
\label{figseries}
\end{figure}
\subsection{Euler transformation}
The presence of the singularity on the negative real axis for the series $c_k$ led naturally to an Euler transformation to map the singularity to infinity which, as detailed in \S\ref{sec:euler}, yields a series with an infinite radius of convergence in $\delta$, and thus in $\epsilon$. In Fig.~\ref{figeuler} we plot the results of the Euler-transformed series, Eq.~\eqref{eq:euler},
for the swimming speed, $U$, against the wave amplitude, $\epsilon$. The results are markedly improved over the original unaltered series. With $K=4$ we obtain results which are accurate up to $\epsilon\approx1.3$, already higher than for Taylor's fourth-order formula. With $K=20$ terms, $U(\epsilon)$ is found to be accurate up to $\epsilon\approx2$, and when using $K=100$ terms we obtain results which are accurate for $\epsilon>7$. With all $K=500$ terms the series is accurate up to $\epsilon\approx 15$ with a relative error of 1\% (the series is, however, convergent for all values of $\epsilon$).
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{figure7}
\caption{Swimming speed, $U$, against wave amplitude, $\epsilon$, for the Euler series, Eq.~\eqref{eq:euler}, with $K=4$ (dashed-dot), $K=20$ (dashed), $K=100$ (solid). Red squares indicate data points from the boundary integral method.}
\label{figeuler}
\end{figure}
\subsection{Pad\'{e} approximants and Shanks transformation}
Pad\'{e} approximants provide a convenient (yet brute-force) way to drastically improve the performance of the series without the need to investigate the analytic structure of the underlying function. We find that using only a few terms provides very good results, as we show in Fig.~\ref{figpadeshanks}. For $K=4$ we obtain $P_{2}^{2}$ (dashed) which is accurate past the singularity, while for $K=22$ we obtain $P_{10}^{10}$ (solid) which is accurate up to $\epsilon\approx 4$, and shows an error which is reasonably small for larger amplitudes. Unfortunately the coefficient matrix which must be inverted to obtain the $b_k$ coefficients of the Pad\'{e} approximants becomes increasingly ill-conditioned as more terms of the series are added and we see diminishing returns from the Pad\'{e} approximants of higher order expansions; for example, $P_{150}^{150}$ is only accurate up to $\epsilon\approx5$.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{figure8}
\caption{Swimming speed, $U$, against amplitude, $\epsilon$, for (repeated) Shanks transformations of partial sums up to $S_2$ (dotted) and $S_6$ (dashed-dot) and for the Pad\'{e} approximants $P_{2}^{2}$ (dashed), $P_{10}^{10}$ (solid). Red squares indicate data points from the boundary integral method.}
\label{figpadeshanks}
\end{figure}
Similarly, repeated Shanks transformations of the first few partial sums result in a marked improvement of the convergence of the series. We see in Fig.~\ref{figpadeshanks} that the (repeated) Shanks transformation of partial sums up to $S_2$ (dotted line) yields results nearly identical to the $P_{2}^{2}$ approximant, while for terms up to $S_6$ (dashed-dot) we see reasonable accuracy up to $\epsilon\approx2$, in agreement with the results from Ref.~\cite{drummond66}. We find however that the addition of any further terms in the sequence leads to a pronounced decrease in the convergence properties of the sum.
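The transformation itself is compact. A sketch under the usual definition $S(s_k)=(s_{k+1}s_{k-1}-s_k^2)/(s_{k+1}+s_{k-1}-2s_k)$, applied repeatedly until fewer than three entries remain:

```python
import numpy as np

def shanks(s):
    """One Shanks transformation of a sequence of partial sums s_0..s_K."""
    s = np.asarray(s, dtype=float)
    num = s[2:] * s[:-2] - s[1:-1] ** 2
    den = s[2:] + s[:-2] - 2.0 * s[1:-1]
    return num / den

def repeated_shanks(s):
    """Apply the Shanks transformation until fewer than 3 terms remain."""
    s = np.asarray(s, dtype=float)
    while len(s) >= 3:
        s = shanks(s)
    return s[-1]
```

On the partial sums of the alternating harmonic series, for example, three repeated transformations of $S_0,\dots,S_6$ already recover $\ln 2$ to several digits, whereas the raw sums are still oscillating in the first decimal place.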
\section{Concluding Remarks}
Despite its simplicity, Taylor's swimming sheet model is still used to provide physical insight into many interesting natural phenomena. In this paper, we demonstrated that by systematizing the perturbation expansion outlined by Taylor in the wave amplitude, $\epsilon$, the solution for the swimming speed can be obtained in a straightforward fashion to arbitrarily high order. The series unfortunately diverges beyond $\epsilon\approx 0.9565$ due to a nonphysical first-order pole on the negative real axis. In order to improve the convergence of the series, the singularity can be mapped to infinity via an Euler transformation. The recast series then has an infinite radius of convergence and produces spectacularly accurate results for very large amplitudes (albeit requiring a good number of terms). An alternative is to reformulate the series using Pad\'{e} approximants or repeated Shanks transformations, which give reasonable accuracy for moderate amplitudes with fewer terms, but can become problematic for very large amplitudes.
\vspace{1cm}
This paper is dedicated to Steve Childress whose textbook on swimming and flying remains an inspiration. We thank Glenn Ierley for useful discussions and advice. Funding by the NSF (CBET-0746285) and NSERC (PGS D3-374202) is gratefully acknowledged.
\bibliographystyle{elsarticle-num}
\section{Introduction}
Of the current ensemble of $\sim$30 free-floating young
planetary mass objects \citep[]{Gag14, Gag15}, PSO J318.5-22
\citep[]{Liu13} is the closest analogue in properties to imaged
exoplanet companions. \citet[]{Gag14} and
\citet[]{Liu13} identify it as a $\beta$ Pic moving group member
\citep[23$\pm$3~Myr,][]{Mam14}
and it possesses
colors and magnitudes similar to the HR 8799
planets \citep[][]{Mar08,Mar10} and 2M1207-39b \citep[][]{Cha05}.
PSO J318.5-22 has
T$_\mathrm{eff}$ = 1160$^{+30}_{-40}$ K and a published mass estimate of
6.5$^{+1.3}_{-1.0}$ M$_{Jup}$ for an age of 12 Myr \citep[]{Liu13},
rising to 8.3$\pm$0.5 M$_{Jup}$ for the updated age of 23$\pm$3 Myr
(Allers et al. submitted). PSO J318.5-22 is intermediate
in mass and luminosity between 51 Eri b \citep[$\sim$2~M$_{Jup}$,][]{Mac15} and $\beta$ Pic b
\citep[$\sim$11-12~M$_{Jup}$,][]{Lag10, Bon14}, the two known exoplanet companions in the $\beta$ Pic
moving group.
Because PSO J318.5-22 is free-floating, it enables high
precision characterization not currently possible for exoplanet companions to bright
stars. In particular, we report here the first detection of
photometric variability in a young, L/T transition planetary mass object.
Variability is common for cool brown dwarfs
but until now has not been probed for lower-mass planetary objects
with similar effective temperatures.
Recent large-scale surveys of brown dwarf variability
with Spitzer have revealed mid-IR variability of up to a few percent
in $>$50\% of L and T type brown dwarfs \citep[]{Met15}.
\citet[]{Bue14} find that $\sim$30\% of the L5-T6 objects surveyed in
their HST SNAP survey show variability trends and large ground-based
surveys also find widespread variability \citep[]{Rad14a, Wil14, Rad14b}.
While variability amplitude may be increased across the L/T transition
\citep[]{Rad14a}, variability is now robustly observed
across a wide range of L and T spectral types. We therefore
expect variability in young extrasolar planets, which share
similar T$_\mathrm{eff}$ and spectral types but lower surface gravity.
In fact,
\citet[]{Met15} tentatively find a correlation between low surface gravity and
high-amplitude variability in their L dwarf sample.
Observed field brown dwarf variability is likely produced by rotational
modulation of inhomogenous cloud cover over the 3-12 hour rotational periods
of these objects \citep[]{Zap06}.
\citet[]{Apa13} and \citet[]{Bue15} find that the observed
variability amplitude as a function of wavelength
is best fit by a combination of thin and thick cloud
layers. We expect a similar mechanism to drive variability in
planetary mass objects with similar T$_\mathrm{eff}$, albeit with
potentially longer periods, as these objects will not yet have
spun up with age. Only a handful of
directly imaged exoplanet companions are amenable to variability searches
using high-contrast imagers such as SPHERE at the VLT \citep[]{Beu08} and GPI at
Gemini \citep[]{Mac14}; to search for variability in a larger sample of planetary mass
objects and young, very low mass brown dwarfs,
we have been conducting the first survey for free-floating
planet variability using NTT SoFI \citep[]{Moo98}. We have observed 22 objects to
date, of which 7 have mass estimates $<$13 M$_{Jup}$ and all have mass
estimates $<$25 M$_{Jup}$. PSO J318.5-22 is the first variability
detection from this survey.
\section{Observations and Data Reduction}
\begin{deluxetable}{lccccc}
\tablecolumns{10}
\tablewidth{3in}
\tabletypesize{\tiny}
\tablecaption{SOFI observations of PSO J318.5-22\label{tab:obs}}
\tablehead{
\colhead{Date} & \colhead{Filter} & \colhead{DIT} &
\colhead{NDIT} & \colhead{Exp. Time} & \colhead{On-Sky Time}}
\startdata
2014 Oct 9 & J$_{S}$ & 10 s & 6 & 3.80 hours & 5.15 hours \\
2014 Nov 9 & J$_{S}$ & 15 s & 6 & 2.40 hours & 2.83 hours \\
2014 Nov 10 & K$_{S}$ & 20 s & 6 & 2.80 hours & 3.16 hours \\
\enddata
\end{deluxetable}
We obtained 3 datasets for PSO J318.5-22 with NTT SoFI
(0.288$\arcsec$/ pixel, 4.92'$\times$4.92' field of view)
in October and November 2014.
Observations are presented in Table~\ref{tab:obs}.
We attempted to cover as much of the unknown rotation period
as possible; however, scheduling constraints and weather conditions
limited our observations to 2-5 hours on sky. In search mode we
observed in J$_{S}$; however, we also obtained a K$_{S}$ follow-up
lightcurve for PSO J318.5-22.
We nodded the target between two positions on the chip, ensuring that, at each jump from
position to position, the object was accurately placed on the same
original pixel. This allowed for sky-subtraction while
preserving photometric stability. We followed an ABBA
nodding pattern, taking three exposures at each nod position.
Data were corrected for crosstalk artifacts between quadrants, flat-fielded using special
dome flats which correct for the ``shade'' (illumination dependent
bias) found in SoFI images, and illumination corrected using
observations of a standard star.
Sky frames for each nod position were created by median
combining normalized frames from the other nod positions
closest in time. These were then re-scaled to and subtracted
from the science frame. Aperture photometry
for all sources on the frame was performed using
the IDL task aper.pro with aperture radii of 4, 4.5, 5, 5.5, 6, and 6.5 pixels and
background subtraction annuli from 21-31 pixels.
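In outline, the per-source measurement is a sky-annulus-subtracted aperture sum. The following is a simplified numpy stand-in for \texttt{aper.pro}, not the exact pipeline; fractional-pixel aperture edges are ignored here:

```python
import numpy as np

def aper_flux(img, x0, y0, r_ap, r_in, r_out):
    """Sky-subtracted circular-aperture flux: sum of pixels inside r_ap,
    minus the median sky level estimated in the annulus r_in <= r < r_out."""
    y, x = np.indices(img.shape)
    r = np.hypot(x - x0, y - y0)
    sky = np.median(img[(r >= r_in) & (r < r_out)])
    ap = r < r_ap
    return img[ap].sum() - sky * ap.sum()
```

Using the median of the annulus makes the background estimate robust against faint neighbouring sources falling inside it.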
\section{Light Curves}
We present the final binned J$_{S}$ lightcurve from October 2014
(with detrended reference stars for comparison) in
Fig.~\ref{fig:lightcurves1} and the final binned J$_{S}$ and K$_{S}$
lightcurves from November
2014 in Fig.~\ref{fig:lightcurves2}. Raw light curves
obtained from aperture photometry display fluctuations in brightness
due to changing atmospheric transparency, airmass, and residual
instrumental effects.
These changes
can be removed via
division by a calibration curve calculated from carefully
chosen, well-behaved reference stars \citep[]{Rad14a}.
To detrend our lightcurves, first we discarded
potential reference stars with peak flux values below 10 or greater
than 10000 ADU (where array non-linearity is limited to $<$1.5$\%$). Different nods
were normalized via division by their median flux before being
combined to give a relative flux light curve. For each star a
calibration curve was created by median combining all other reference
stars (excluding the target and the star in question). The
standard deviation and linear slope for each lightcurve were calculated
and stars with a standard deviation or slope $\sim$1.5-3 times greater
than that of the target were discarded. This process was iterated
until a set of well-behaved reference stars was
chosen. Final detrended light curves were obtained by dividing the raw
curve for each star by its calibration curve. The best lightcurves
shown here are with the aperture that minimizes the standard deviation
after removing a smooth polynomial \citep[as~done~in~][]{Bil13} --
for all epochs, the 4 pixel aperture (similar to the PSF FWHM) yielded
the best result.
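The core of this procedure (normalize each curve, build a median calibration curve from the other stars, and divide) can be sketched as follows; this minimal sketch omits the iterative rejection of reference stars, and the array names are illustrative:

```python
import numpy as np

def detrend(fluxes, target):
    """fluxes: (n_stars, n_times) raw light curves; row `target` is the
    science target.  Each curve is normalized by its median, then divided
    by the median calibration curve built from all the other stars."""
    norm = fluxes / np.median(fluxes, axis=1, keepdims=True)
    calib = np.median(np.delete(norm, target, axis=0), axis=0)
    return norm[target] / calib
```

Because transparency and airmass changes affect all stars on the frame in common, the division removes the shared trend while preserving variability intrinsic to the target.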
Final lightcurves
are shown binned by a factor of three -- combining all three exposures
taken in each ABBA nod position. Error bars were calculated in
a similar manner as in \citet[]{Bil13} -- a low-order polynomial was fit to
the final lightcurve and then subtracted to remove any astrophysical
variability and the standard deviation of the subtracted lightcurve was
adopted as the typical error on a given photometric point
(shown in each lightcurve as the error bar given on the first
photometric point). As a check, we also
measured photometry and light curves using both the publicly
available aperture photometry pipeline from \citet[]{Rad14b} as well as
the psf-fitting pipeline described in \citet[]{Bil13}. Results
from all three pipelines were consistent.
We found the highest amplitude of variability in our J$_{S}$
lightcurve from 9 October 2014 -- over the
five hours observed, PSO J318.5-22 varies by 10$\pm$1.3$\%$.
The observed variability does not correlate with airmass
changes -- the target was overhead for the majority of this observation,
with airmass between 1 and 1.2 for the first
3 hours, increasing to $\sim$2 by the end of the observation.
The flattening of the lightcurve from 4-5 hours elapsed time in our
lightcurve may be indicative of a minimum in the lightcurve.
However, as no clear repetition of maxima or minima have been covered,
the strongest constraints we can place on the rotational period and variability
amplitude for PSO J318.5-22 in this epoch are that the period
must be $>$5 hours and the amplitude must be $\geq$10$\%$.
If the variation is sinusoidal, these observations point to an
even longer period of $>$7-8 hours.
On 9 November 2014, we recovered J$_{S}$ variability with a
somewhat smaller amplitude of 7$\pm$1$\%$ over our three hour long
observation. A maximum is seen 1 hour into the observation
and a potential minimum is seen at 2 hours into the observation.
The observed variability is not correlated with airmass changes
during the observation -- the observation started at
airmass = 1.1, increasing steadily to airmass =2.0 at the end of the
observation.
If the variability is roughly sinusoidal and single peaked,
this observation would suggest a period of $\sim$3 hours; however,
we cannot constrain the period beyond requiring it to be
$>$3 hours, as we have not covered
multiple extrema and as the light curve could potentially be double-peaked
\citep[]{Rad12}. The lightcurve evolved considerably
between the October and November 2014 epochs -- a phenomenon also found
in other older variable brown dwarfs \citep[]{Rad12, Rad14a, Art09,
Met15,Gil13}.
On 11 November 2014, we obtained a K$_{S}$ lightcurve for PSO J318.5-22.
Given its extremely red colors, PSO J318.5-22 is brighter
in K$_{S}$ than J$_{S}$ and is one of the brightest objects in the
SoFI field. Thus, we attain higher photometric
precision in our K$_{S}$ (0.7$\%$) lightcurve compared to J$_{S}$ (1 -
1.3$\%$). Fitting slopes to the target and 3
similarly-bright reference stars, the target increases in flux by 0.9$\%$ per hour
while the reference stars have slopes of 0.1-0.6$\%$ / hour
(consistent with a flat line within our photometric precision).
Thus, we tentatively find a marginal variability trend of up to 3$\%$ over
our 3 hour observation, requiring reobservation to be confirmed.
Additionally, in this case
the tentative variability is not completely uncorrelated with airmass
changes -- during this observation, airmass increased steadily from 1.1 to
2.2.
\begin{figure}
\includegraphics[width=3.5in]{f1.eps}
\caption{ Final binned J$_S$ lightcurve and comparison
detrended reference stars from 9 October 2014.
Typical error bars are shown
on the first photometric point.
The variability amplitude at this epoch is $>$10$\%$ with a period
of $>$5 hours. \label{fig:lightcurves1}
}
\end{figure}
\begin{figure}
\includegraphics[width=3.5in]{f2a.eps}
\includegraphics[width=3.5in]{f2b.eps}
\caption{
Top: Final binned J$_S$ lightcurve from 9 November 2014.
Bottom: Final binned K$_{S}$ lightcurve from 11 November 2014.
Lightcurves are presented similarly as in Fig.~\ref{fig:lightcurves1}.
The
J$_{S}$ variability amplitude at this epoch is $>$7$\%$ with a period
of $\geq$3 hours. We marginally detect K$_{S}$ variability,
with amplitude up to 3$\%$ over our 3 hour observation.
\label{fig:lightcurves2}}
\end{figure}
\section{Discussion}
This is the first detection of variability in such a cool, low-surface
gravity object. While variability has been
detected previously for very young ($<$1-2 Myr) planetary mass
objects in star-forming regions such as Orion \citep[cf.~][]{Joe13},
such variability is driven by a different mechanism than
expected for PSO J318.5-22. These previous detections have been for
M spectral type objects with much higher T$_\mathrm{eff}$ than PSO J318.5-22.
At these temperatures, variability is driven by starspots induced by
the magnetic fields of these objects or ongoing accretion.
PSO J318.5-22 is too cool to have starspots and
likely too old for ongoing accretion. From its red colors, PSO
J318.5-22 must be entirely cloudy \citep[]{Liu13}. Thus the likely mechanism producing
the observed variability is inhomogeneous cloud cover, as has
been found previously to drive variability in higher mass brown dwarfs
with similar T$_\mathrm{eff}$ \citep[]{Art09, Rad12, Bue14, Rad14a,
Rad14b, Wil14, Apa13, Bue15}. Notably, among
L dwarfs surveyed at high-photometric precision ($<$3$\%$),
PSO J318.5-22's J band
variability amplitude is the highest measured for an L dwarf to date
(cf. Yang et al. 2015 and Buenzli et al. submitted) -- reinforcing
the suggestion by \citet[]{Met15}
that variability amplitudes might be typically larger for lower
gravity objects.
To model cloud-driven as well as hot-spot variability,
we follow the approach of \citet[]{Art09} and \citet[]{Rad12},
combining multiple 1-d models to represent different regions of cloud cover.
We consider the observed atmosphere of our object to be composed of
flux from two distinct cloud regions (varying in temperature and/or in cloud
prescription) with fluxes of $F_{1}$ and $F_{2}$ respectively and with
a minimum filling fraction for the $F_{2}$ region of $a$.
The peak-to-trough amplitude of variability ($\Delta F$ / $F$, i.e.
the change of flux divided by the mid-brightness flux) observed in a given
bandpass due to a change of filling fraction over the
observation is given by Equation 2 of \citet[]{Rad12},
where $\Delta$a is the change in filling factor over the observation,
$\Delta$F = $F_{2}$ - $F_{1}$, and $\alpha$ = $a$ + 0.5$\Delta a$,
the filling fraction of the $F_{2}$ regions at mid-brightness:
\begin{equation}
\begin{array}{l}
A = \frac{(1-a-\Delta a) F_{1} + (a + \Delta a) F_{2} - (1 - a) F_{1}
- a F_{2}}{0.5[(1-a-\Delta a) F_{1} + (a + \Delta a) F_{2} + (1 - a) F_{1}
+ a F_{2}]} \\
=\frac{\Delta a }{\alpha + F_{1} / \Delta F} \\
\end{array}
\label{eq:ff}
\end{equation}
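The reduced form in Equation~\ref{eq:ff} can be checked numerically against the defining two-component expression (a small sketch; the $F_1$, $F_2$, $a$, $\Delta a$ values below are arbitrary placeholders):

```python
def amplitude(F1, F2, a, da):
    """Peak-to-trough amplitude A = da / (alpha + F1/dF), with
    dF = F2 - F1 and alpha = a + 0.5*da (the reduced form of Eq. (1))."""
    dF = F2 - F1
    alpha = a + 0.5 * da
    return da / (alpha + F1 / dF)

def amplitude_direct(F1, F2, a, da):
    """Same quantity from the unreduced two-component expression."""
    top = (1 - a - da) * F1 + (a + da) * F2 - ((1 - a) * F1 + a * F2)
    bot = 0.5 * ((1 - a - da) * F1 + (a + da) * F2
                 + (1 - a) * F1 + a * F2)
    return top / bot
```

The agreement follows because the numerator collapses to $\Delta a\,\Delta F$ and the mid-brightness denominator to $F_1+\alpha\,\Delta F$.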
We calculated the synthetic photon fluxes $F_{1}$ and $F_{2}$ using
the cloudy exoplanet models of \citet[]{Mad11} and the filter
transmissions provided for the SoFI $J_{S}$ and $K_{S}$ filters.
While a diversity of brown dwarf / exoplanet cloud models are
available \citep[e.g.~][]{Sau08, All01, All12}, the \citet[]{Mad11}
models are particularly tuned to fit the
cloudy atmospheres and extremely
red colors of young low-surface gravity objects such as the HR 8799
exoplanets \citep[]{Mar08, Mar10}. As PSO J318.5-22 is a
free-floating analogue of these exoplanets, the \citet[]{Mad11}
models are the optimal choice for this analysis.
Because PSO J318.5-22's extraordinarily red colors preclude clear
patches in its atmosphere \citep[]{Liu13}, we consider
only combinations of cloudy models.
The \citet[]{Mad11} models describe the cloud distribution
according to a shape function, $f(P)$:
\begin{equation}
f(P) =
\begin{cases}
(P/P_{\rm u})^{s_{\rm u}} & P \le P_{\rm u} \\
f_{\rm cloud} & P_{\rm u} \le P \le P_{\rm d} \\
\left(P/P_{\rm d}\right)^{-s_{\rm d}}& P \ge P_{\rm d} \, , \\
\end{cases}
\label{eq:cloud}
\end{equation}
where P$_{u}$ and P$_{d}$ are the pressures at the upper and lower pressure
cutoffs of the cloud and P$_{u}$ $<$ P$_{d}$. The indices
s$_{u}$ and s$_{d}$ control how rapidly the clouds dissipate at their
upper and lower boundaries. We consider combinations of 3
cloud models from \citet[]{Mad11}, with 60 $\mu$m
grain sizes and solar metallicity:
\begin{equation}
\begin{array}{l}
\mbox{Model E: } s_{\rm u}=6, s_{\rm d}=10, f_{\rm cloud}=1 \\
\mbox{Model A: } s_{\rm u}=0, s_{\rm d}=10, f_{\rm cloud}=1 \\
\mbox{Model AE: } s_{\rm u}=1, s_{\rm d}=10, f_{\rm cloud}=1 \\
\end{array}
\end{equation}
where model E cuts off rapidly at altitude, model A provides the
thickest clouds, extending all the way to the top of the atmosphere,
and model AE provides an intermediate case.
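Equation~\ref{eq:cloud} and the three parameter sets translate directly into code (a sketch; the $P_u$ and $P_d$ values are free parameters of the models and are chosen arbitrarily in the example):

```python
def cloud_shape(P, Pu, Pd, su, sd, f_cloud=1.0):
    """Cloud shape function f(P) of Eq. (2): power-law cutoffs above Pu
    and below Pd, with a flat cloud deck of f_cloud in between."""
    if P <= Pu:
        return (P / Pu) ** su          # upper cutoff
    if P <= Pd:
        return f_cloud                 # main cloud deck
    return (P / Pd) ** (-sd)           # lower cutoff

# The three model variants considered in the text (f_cloud = 1)
MODELS = {"E": dict(su=6, sd=10), "A": dict(su=0, sd=10), "AE": dict(su=1, sd=10)}
```

Model A keeps full cloud opacity all the way to the top of the atmosphere ($s_u=0$ gives $f(P)=1$ for all $P\le P_u$), while model E's $s_u=6$ truncates the cloud sharply above $P_u$.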
Fitting single component models to the spectrum presented in \citet[]{Liu13},
we find that the best single component fit is for A prescription clouds
with T$_\mathrm{eff}$=1100 K (see Fig.~\ref{fig:spectrum}). This agrees well
with the derived T$_\mathrm{eff}$=1160$^{+30}_{-40}$ K from \citet[]{Liu13}.
We thus adopt T$_\mathrm{eff}$=1100 K as the temperature of the dominant
cloud component, with a second cloud component at T$_{2}$. Explicitly
fitting multi-cloud component models, we find that a combination of
80$\%$ model A clouds with T$_\mathrm{eff}$=1100 K and 20$\%$
model A clouds with T$_\mathrm{eff}$=1200 K
marginally fit the spectrum better than a single component fit. Multi-component
fits using multiple cloud prescriptions do not fit the spectrum well
-- model A clouds (or similar) are likely
the dominant cloud component in this atmosphere.
We did not attempt further analysis of the spectrum in terms of variable
cloud components, as the spectrum was observed at a different epoch
than the variability monitoring.
We then calculated synthetic fluxes in $J_{S}$ and $K_{S}$ for models
with all three cloud prescriptions, T$_\mathrm{eff}$ from 700-1700
K, and log(g)=4 (matching the measured log(g) of PSO J318.5-22
from \citet[]{Liu13}). Then, considering different
values for $a$, we solved for $\Delta$a from Equation~\ref{eq:ff}
for the maximum observed amplitude
in $J_{S}$, with T$_{1}$ = 1100 K, different values of T$_{2}$, and varying
cloud prescriptions (plotted in the bottom
panels of Fig.~\ref{fig:modelcurves1}
for a minimum T$_{2}$ filling fraction of 0.2).
The required change in filling fraction is large for small
$\Delta$T, but only small variations in filling factor are needed to
drive the observed variability for abs($\Delta$T) $>$ 200 K.
Considering different
values for $a$, we calculated the variability amplitude ratio
$A_{K_S} / A_{J_S}$ for the same combinations of
T$_{1}$, T$_{2}$, and varying cloud prescriptions. We adopt the same
convention as \citet[]{Rad12}, where the thicker cloud prescription
is used for the F$_{1}$ regions. In the inhomogenous cloud case,
we also assume that the thinner
cloud producing the F$_{2}$ region is at a hotter
T$_\mathrm{eff}$ than the F$_{1}$ regions (i.e. the thin
cloud top is deeper in the atmosphere and thus hotter), so $\Delta$T = T$_{2}$ - T$_{1}$ $>$ 0.
Representative results for predicted amplitude ratio are presented
in Fig.~\ref{fig:modelcurves1} -- similar to \citet[]{Rad12},
different minimum filling
fractions yield qualitatively similar results, so we
present only $a$=0.2 results here. Inhomogeneous
combinations of clouds are shown on the left, homogeneous
combinations on the right (i.e. hot spots instead of cloud patchiness
as the driver of variability).
Observations of variable brown dwarfs have generally found
abs($A_{K_S} / A_{J_S}$) $<$ 1 \citep[see~e.g.~][]{Art09, Rad12,
Rad14a, Wil14, Rad14b}, thus, we shade this region in yellow
in Fig. ~\ref{fig:modelcurves1}.
As we have not yet covered a whole period of this variability
nor do we have simultaneous multi-wavelength observations,
we cannot determine $A_{K_S} / A_{J_S}$ with the data
in hand. It remains to be seen whether abs($A_{K_S} / A_{J_S}$) is
also $<$1 for PSO J318.5-22, which is much redder in $J-K$ than
the high-g, bluer objects for which $A_{K_S} / A_{J_S}$ is robustly
measured.
Future observations that
cover the entire period of variability at multiple wavelengths are
necessary to characterize the source of this variability.
However, in advance of these observations, it is instructive to
consider what amplitude ratios can be produced for young low surface
gravity objects with thick clouds.
In the case of inhomogeneous cloud cover (E+AE, E+A, A+AE),
combinations of thick clouds can produce $A_{K_S} / A_{J_S} <$ 1,
for $\Delta$T $>$150, similar to what was found by \citet[]{Rad12}
for the field early T 2MASS J21392676+0220226.
However, while \citet[]{Rad12} found that single component cloud models
from \citet[]{Sau08} with f$_{sed}$=3 always have
$A_{K_S} / A_{J_S} >$ 1, we do not find this to be the case
with all of the \citet[]{Mad11} cloud models.
This is true in the E+E case, but
for combinations of thicker cloud models (AE+AE, A+A),
$A_{K_S} / A_{J_S} $ can be $<$1.
Unlike \citet[]{Rad12}, who rule out homogeneous cloud
cover with hot spots as a source of variability for the T1.5 brown dwarf
2MASS J21392676+0220226 based on a measured $A_{K_S} / A_{J_S} $ $<$1,
a measurement of $A_{K_S} / A_{J_S} $ $<$1 for a young, low surface
gravity object with thick clouds would be consistent with both
inhomogeneous clouds (patchy cloud cover) and homogeneous clouds
(hot spots).
\section{Conclusions}
We detect significant variability in the young,
free-floating planetary mass object PSO J318.5-22, suggesting that
planetary companions to stars with similar colors (e.g. the HR 8799 planets) may also be
variable. With variability amplitudes from 7-10$\%$ in J$_{S}$ at
two separate epochs over 3-5 hour observations, we constrain the period to
$>$5 hours, likely $>$7-8 hours in the case of sinusoidal variation.
In K$_{S}$, we marginally detect a variability trend of
up to 3$\%$ over our 3 hour observation. Our marginal detection
suggests that the variability amplitude in K$_{S}$
may be smaller than that in J$_{S}$, but simultaneous
multi-wavelength observations are necessary to confirm this.
Using the models of \citet[]{Mad11},
combinations of both homogeneous and inhomogeneous cloud prescriptions
can tentatively model variability with abs($A_{K_S} / A_{J_S}$) $<$ 1
for young, low surface gravity objects with thick clouds.
Only one exoplanet rotation period has been measured to date -- 7-9
hours for $\beta$ Pic b \citep[]{Sne14}.
PSO J318.5-22 is only the second young planetary mass object with
constraints placed on its rotational period and is likely also a fast rotator like $\beta$
Pic b, with possible rotation periods from
$\sim$5-20 hours. PSO J318.5-22 is thus an important link
between the rotational properties of exoplanet companions
and those of old, isolated Y dwarfs with similar masses.
\acknowledgements
We thank the anonymous referee for useful comments which helped
improve this paper. This work was supported by a consolidated grant from STFC.
E.B. was supported by the Swiss National Science Foundation (SNSF).
D.H. acknowledges support from the ERC and DFG.
\begin{figure}
\begin{tabular}{c}
\includegraphics[width=3.5in]{f3a.eps} \\
\includegraphics[width=3.5in]{f3b.eps}\\
\end{tabular}
\caption{Top: Best fit single model spectra for
model A, AE, and E clouds from \citet[]{Mad11}
overlaid on the spectrum of PSO J318.5-22 presented in \citet[]{Liu13}.
The overall best fit model has
T$_{\mathrm{eff}}$=1100 K, log(g)=4, solar metallicity, and model A (thick) clouds.
AE and E cloud models fail to reproduce the observed spectrum.
Bottom: T$_{\mathrm{eff}}$=1400 K, log(g)=4, solar metallicity,
and model A (thick) cloud spectrum as well as the
best fit multi-component spectrum, consisting of 80$\%$ T$_{\mathrm{eff}}$=1100 K + 20$\%$
T$_{\mathrm{eff}}$=1200 K model A clouds overplotted on the
\citet[]{Liu13} spectrum. Hotter models do not fit the observed
features of the \citet[]{Liu13} spectrum; the combined 1100 K
+ 1200 K model spectrum reproduces the observed spectrum marginally better
than the best fit single component model.
\label{fig:spectrum}}
\end{figure}
\begin{figure}
\begin{tabular}{cc}
\includegraphics[width=3in]{f4a.eps} &
\includegraphics[width=3in]{f4b.eps} \\
\includegraphics[width=3in]{f4c.eps} &
\includegraphics[width=3in]{f4d.eps} \\
\end{tabular}
\caption{top left and top right: predicted K$_{S}$ to J$_{S}$
amplitude ratio $A_{K_S} / A_{J_S}$ as a function of $\Delta$T, the temperature
difference between cloud components at T$_{1}$ and T$_{2}$, and for
a filling fraction of the T$_{2}$ regions of 0.2. Inhomogeneous cloud cover is plotted on the left (AE+E, A+E, A+AE)
and homogeneous cloud cover is plotted on the right (E+E, A+A,
AE+AE). The yellow region denotes the values of the
amplitude ratio that have previously been found for variable field
brown dwarfs.
Bottom left and bottom right: maximum change in filling fraction
needed to produce the observed amplitude A$_{J_S}$ as a function of $\Delta$T.
\label{fig:modelcurves1}
}
\end{figure}
\section{Introduction}
Affective understanding is a task that aims to predict the viewer expressions evoked by videos, which could be applied to video creation and recommendation. With the rapid increase of video content on the Internet, this technique has become even more desired and has attracted enormous interest recently\cite{baveye2017affective,wang2015video}. However, due to the affective gap between video content and the emotional response, it remains a challenging topic.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1\linewidth]{Multi-Granularity Network with Modal Attention for Affective Understanding/Figure1.png}
\end{center}
\caption{Visualization of the scores of the 15 expressions for a video in the EEV dataset.}
\label{fig:long}
\label{fig:onecol}
\end{figure}
The main methods in this area are multi-modal models, which combine visual and auditory features for affective prediction\cite{poria2015towards,sun2019gla}. However, in the recently proposed frame-level affective prediction task\cite{sun2021eev}, the expression at a specific frame could be determined by signals of various timescales, so these previous methods may be insufficient and imprecise. In this paper, we propose a Multi-Granularity Network with Modal Attention named MGN-MA, which extends previous multi-modal features into multi-granularity levels for frame-level affective understanding. Specifically, the multi-granularity features could be divided into three levels with various timescales: frame-level features, clips-level features and video-level features. The frame-level feature contains the visually salient content of the frame and the audio style at that moment. The clips-level features represent semantic-context information around the frame, e.g. human behaviors and semantic information from speech. The video-level feature consists of the main affective information of the whole video, which is related to the theme of the video. Taking the above features into consideration, we employ a modal attention module with modal drop-out to further emphasize the more affection-relevant modals. Finally, the representation goes through a MOE model\cite{jordan1994hierarchical} for expression prediction. Experiments on the Evoked Expressions from Videos (EEV) dataset verify the effectiveness of our proposed method.
\begin{figure*}[htp]
\begin{center}
\includegraphics[width=1\linewidth]{Multi-Granularity Network with Modal Attention for Affective Understanding/Figure2.png}
\end{center}
\caption{An overview of the proposed Multi-Granularity Network with Modal Attention (MGN-MA). The frame-level, clips-level and video-level features are extracted by the multi-granularity feature construction module. Then the multi-granularity multi-modal feature is fed into the modal attention fusion module and the expression classification module.}
\label{fig2}
\end{figure*}
\section{Method}
The overall framework of our proposed Multi-Granularity Network with Modal Attention is illustrated in Fig.~\ref{fig2}, which could be divided into Multi-Granularity Feature Construction, Modal Attention Fusion and Expression Classification. Taking frames, clips and the whole video as the multi-granularity input, three sub-networks are employed to extract frame-level, clips-level and video-level features and construct the Multi-Granularity Feature (MGF). Then the above MGF is fed into the Modal Attention Fusion (MAF) module to obtain the multi-modal feature. Finally, a MOE model is used to predict the evoked expressions of the frame.
\subsection{Multi-Granularity Feature Construction}
To provide sufficient information for frame-level expression prediction, we construct the Multi-Granularity Feature, which contains frame-level features, clips-level features and video-level features. More details of the feature extraction will be discussed as follows.
\subsubsection{Frame-level Features}
\,\,\,\,\,\,\,\textbf{Image Feature.} We employ the Inception-Resnet-v2\cite{szegedy2017inception} architecture pretrained on ImageNet\cite{deng2009imagenet} for image feature extraction, which could represent the visual-semantic content appearing in the frame. We extract
the output of the last hidden layer as the feature and obtain a 1536-D vector for each frame. Image feature extraction is performed at 6 Hz.
\textbf{Audio Feature.} We employ a VGG-style model provided by AudioSet\cite{gemmeke2017audio} trained on a preliminary version of YouTube8M\cite{abu2016youtube} for audio feature extraction. Following the method from \cite{hershey2017cnn}, the raw audio signal is first divided into 960 ms frames, then decomposed with a short-time Fourier transform applying 25 ms windows every 10 ms, which results in log-mel spectrogram patches of 96×64 bins. The spectrogram is further fed into the model and a 128-D embedding is obtained.
\subsubsection{Clips-level Features}
\,\,\,\,\,\,\,\textbf{Action Feature.} An S3D\cite{xie2018rethinking} CNN backbone is used to extract the action feature for human behaviors. The S3D backbone is trained with the MIL-NCE method\cite{miech2020end}, which employs self-supervised information between visual content and speech texts on the HowTo100M\cite{miech2019howto100m} narrated video dataset. Around the target frame, we extract a context video clip with 32 frames sampled at 10 fps, and the 512-D output of the S3D linear layer is used as the embedding.
\textbf{Subtitle Feature.} Subtitles (or automatically generated subtitles) are downloaded for each video, and the nearest speech text for each frame could be obtained. Then a base BERT model pretrained on BooksCorpus (800M words)\cite{devlin2018bert} and English Wikipedia (2,500M words) is applied to extract the 768-D embedding of the speech text.
\subsubsection{Video-level Features}
To extract video-level feature, we trained a video-level expression prediction network on the EEV dataset\cite{sun2021eev} and the output of last hidden layer with 1024-D is used. The video-level expression is obtained by the averaged labels of frames. For each video, we take the 80 uniform selected video frames, audio frames and video title as input, and extract the multi-modal features by above Inception-Resnet-v2\cite{szegedy2017inception}, VGG-Style model\cite{hershey2017cnn} and BERT\cite{devlin2018bert} model. Features of video frames and audio frames are fed into its own NetVLAD sub-network for temporal pooling, and further merged into the video-level feature with the title feature. We propose a modal attention fusion module for feature fusion, which will be discussed in the following section.
\subsection{Modal Attention Fusion}
The modal attention fusion module is proposed to emphasize more affection-relevant features in the above multi-granularity multi-modal features. Specifically, The weighting value of each feature is adaptively predicted by an attention module $\alpha(\cdot)$, which outputs a normalized weight by softmax. The input of the modal attention fusion module is defined as the concatenation of above multi-granularity multi-modal features. By multiplying the weight to the corresponding feature, the total merged multi-modal feature $V$ could be obtained:
\begin{equation}
V=\overset{N}{\underset{i=1}\bigoplus}{v_i\times \alpha_i}, \, \, \alpha = e^{(w_k\times v_i)}/\sum_{j=1}^N{e^{(w_j\times v_j)}}
\end{equation}
where $\bigoplus$ is concatenation operator, $w_i$ is the weight of feature $v_i$, $N$ is total number of modals. A modal level dropout mechanism is performed by replacing specific feature by zeros vector randomly, which could improve the robustness of the model.
\subsection{Expression Classification}
A 3-experts MOE\cite{jordan1994hierarchical} model is used for expression classification. To train all the parameters in our model, we minimize the classification loss between the predicted score and ground-truth score. The predicted score $o_{i,j}$ is first converted to probabilistic score by Sigmoid function:
\begin{equation}
p_{i,j} = \frac{1}{1 + e^{-o_{i,j}}}
\end{equation}
Then the classification loss function is computed as :
\begin{equation}
L = -\frac{1}{B\times C}\sum_{i=1}^B\sum_{j=1}^Cy_{i,j}\log(p_{i,j})+(1-y_{i,j})\log(1-p_{i,j})
\end{equation}
where B is the batch size, C is the class number and $y_{i,j}$ is the $j_{th}$ expression label of the $i_{th}$ sample.
\section{Experiment}
\subsection{Datasets and Protocols}
\textbf{EEV dataset} The dataset contains 8 million annotations of viewer facial reactions to 5,153 videos (370 hours). Each video is annotated at 6 Hz with 15 continuous evoked expression labels, corresponding to the facial expression of viewers who reacted to the video.
\textbf{Protocols} The performance is evaluated using correlation computed for each expression in each video, then averaged over the expressions and the videos. The correlation is based on scipy:
\begin{equation}
r=\frac{\sum(x-m_x)(y-m_y)}{\sqrt{\sum(x-m_x)^2\sum(y-m_y)^2}}
\end{equation}
where \(x\) is the predicted values (0~1) for each expression, \(y\) is the ground truth value (0~1) for each expression, \(m_x\) is the average of \(x\) and \(m_y\) is the average of \(y\). Note that correlation is computed over each video.
\subsection{Implementation Details}
Our model is trained with Adam optimizer with 30 epochs. We start training with a learning rate of 0.0001. We use a mini-batch size of 1536. The dimension of the multi-modal feature is set to 1024. For the mixture of experts, we apply batch normalization before the non-linear layer. To avoid over-fitting, we select the snapshot of the model that gains the best result of correlation on the validation set. As some video might be deleted or made private, we employ the available 3024 videos for training and 730 videos for validation. The models are implemented in TensorFlow.
\subsection{Results}
To evaluate the importance of different granularity of features, we start with frame-level features and gradually add the clip-level features and video-level features, the experiment results on the test set of the EEV dataset are shown in Table 1. For the frame-level feature, the performance of image feature and audio feature are evaluated separately with the correlation score of 0.00349 and 0.00574, which implies that audio is more effective than visual content for affective prediction. The combination of image and audio feature could further increase the performance to 0.00739. After the introduction of clip-level features, the correlation score is remarkably increased by 0.0589. This indicates the importance of semantic-context information around the frame, which is not supplied from the image feature and audio feature. It is noted that the subtitle feature are more effective than action features, this means more semantic information exists in the subtitles. Moreover, the introduction of video-level feature further increases the performance by 0.00201, and proves the importance of the affective information from the whole video. The above results indicates that our proposed multi-granularity features are effective for affective prediction.
\begin{table}[htp]
\centering
\begin{tabular}{lc}
\hline
Input & Correction \\
\hline
Image & 0.00349 \\
Audio & 0.00574 \\
Image+Audio & 0.00739 \\
Image+Audio+Action & 0.00893 \\
Image+Audio+Action+Subtitle & 0.01328 \\
Image+Audio+Action+Subtitle+Video & 0.01529 \\
\hline
\\
\end{tabular}
\caption{Comparison of different input modalities on EEV}
\label{tab:my_label}
\end{table}
\begin{table}[htp]
\centering
\begin{tabular}{lc}
\hline
Method & Correction \\
\hline
MLP & 0.01529 \\
MOE & 0.01718 \\
MAF+MOE & 0.01849 \\
Ensemble & 0.02292 \\
\hline
\\
\end{tabular}
\caption{Comparison of different model methods on EEV}
\label{tab:my_label}
\end{table}
In addition to the proposed the multi-granularity feature, we further evaluate the performance of modal fusion module and expression classification module. As shown in Table 2, four methods are compared with the same multi-granularity feature. Compared with Multilayer Perception classification, MOE improves the correlation result by 0.00189. The model achieves the best performance when the number of expert is set as 3. When the Modal Attention Fusion (MAF) module is introduced to replace the feature concatenation fusion, the correlation score is improved from 0.01718 to 0.01849. We ensembles five models to further improve the performance, and achieve a correlation score of 0.02292, which improves the result by 0.00443 compared with single model.
\section{Conclusion}
In this paper, the multi-granularity network with modal attention (MGN-MA) is proposed for dense affective predictions. MGN-MA consists of multi-granularity feature construction module, modal attention fusion module and expression classification module. The multi-granularity feature construction module could learn more semantic content and video theme information. The modal attention fusion module further improves the performance by emphasizing more affection-relevant features in the multi-granularity multi-modal features. The MGN-MA achives the correlation score of 0.02292 in the EEV challenge.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
Affective understanding is a task that aims to predict the viewer expressions evoked by videos, which can be applied to video creation and recommendation. With the rapid increase of video content on the Internet, this technique has become even more desirable and has attracted enormous interest recently\cite{baveye2017affective,wang2015video}. However, due to the affective gap between video content and the emotional response, it remains a challenging topic.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1\linewidth]{Multi-Granularity Network with Modal Attention for Affective Understanding/Figure1.png}
\end{center}
\caption{Visualization of the scores of the 15 expressions for a video in the EEV dataset.}
\label{fig:long}
\label{fig:onecol}
\end{figure}
The main methods in this area are multi-modal models, which combine visual and auditory features for affective prediction\cite{poria2015towards,sun2019gla}. However, for the recently proposed frame-level affective prediction task\cite{sun2021eev}, where the expression of a specific frame can be determined by signals of various timescales, these previous methods may be insufficient and imprecise. In this paper, we propose a Multi-Granularity Network with Modal Attention, named MGN-MA, which extends previous multi-modal features into multi-granularity levels for frame-level affective understanding. Specifically, the multi-granularity features can be divided into three levels with different timescales: frame-level features, clip-level features and video-level features. The frame-level features contain the visually salient content of the frame and the audio style at that moment. The clip-level features represent the semantic-context information around the frame, e.g.\ human behaviors and semantic information from speech. The video-level feature consists of the main affective information of the whole video, which is related to the theme of the video. Taking the above features into consideration, we employ a modal attention module with modal dropout to further emphasize the more affection-relevant modalities. Finally, the representation goes through an MOE model\cite{jordan1994hierarchical} for expression prediction. Experiments on the Evoked Expressions from Videos (EEV) dataset verify the effectiveness of our proposed method.
\begin{figure*}[htp]
\begin{center}
\includegraphics[width=1\linewidth]{Multi-Granularity Network with Modal Attention for Affective Understanding/Figure2.png}
\end{center}
\caption{An overview of the proposed Multi-Granularity Network with Modal Attention (MGN-MA). The frame-level, clip-level and video-level features are extracted by the multi-granularity feature construction module. Then the multi-granularity multi-modal feature is fed into the modal attention fusion module and the expression classification module.}
\label{fig2}
\end{figure*}
\section{Method}
The overall framework of our proposed Multi-Granularity Network with Modal Attention is illustrated in Fig.~\ref{fig2}. It can be divided into Multi-Granularity Feature Construction, Modal Attention Fusion and Expression Classification. Taking frames, clips and the whole video as the multi-granularity input, three sub-networks are employed to extract frame-level, clip-level and video-level features and construct the Multi-Granularity Feature (MGF). The MGF is then fed into the Modal Attention Fusion (MAF) module to obtain the fused multi-modal feature. Finally, an MOE model is used to predict the evoked expressions of the frame.
\subsection{Multi-Granularity Feature Construction}
To provide sufficient information for frame-level expression prediction, we construct the Multi-Granularity Feature, which contains frame-level, clip-level and video-level features. The details of the feature extraction are discussed below.
\subsubsection{Frame-level Features}
\,\,\,\,\,\,\,\textbf{Image Feature.} We employ the Inception-ResNet-v2\cite{szegedy2017inception} architecture pretrained on ImageNet\cite{deng2009imagenet} for image feature extraction, which represents the visual-semantic content appearing in the frame. We extract the output of the last hidden layer as the feature and obtain a 1536-D vector for each frame. Image features are extracted at 6 Hz.
\textbf{Audio Feature.} We employ a VGG-style model provided by AudioSet\cite{gemmeke2017audio}, trained on a preliminary version of YouTube8M\cite{abu2016youtube}, for audio feature extraction. Following the method of \cite{hershey2017cnn}, the raw audio signal is first divided into 960 ms frames and then decomposed with a short-time Fourier transform applying 25 ms windows every 10 ms, which yields log-mel spectrogram patches of $96\times 64$ bins. Each spectrogram patch is fed into the model to obtain a 128-D embedding.
\subsubsection{Clips-level Features}
\,\,\,\,\,\,\,\textbf{Action Feature.} An S3D\cite{xie2018rethinking} CNN backbone is used to extract the action feature describing human behaviors. The S3D backbone is trained with the MIL-NCE method\cite{miech2020end}, which exploits self-supervised correspondence between visual content and speech text on the HowTo100M\cite{miech2019howto100m} narrated video dataset. Around the target frame, we extract a context video clip of 32 frames sampled at 10 fps, and the 512-D output of the S3D linear layer is used as the embedding.
\textbf{Subtitle Feature.} Subtitles or automatically generated subtitles are downloaded for each video, so that the speech text nearest to each frame can be obtained. A base BERT model pretrained on the BooksCorpus (800M words)\cite{devlin2018bert} and English Wikipedia (2,500M words) is then applied to extract a 768-D embedding of the speech text.
\subsubsection{Video-level Features}
To extract the video-level feature, we train a video-level expression prediction network on the EEV dataset\cite{sun2021eev} and use the 1024-D output of its last hidden layer. The video-level expression labels are obtained by averaging the frame labels. For each video, we take 80 uniformly sampled video frames, audio frames and the video title as input, and extract the multi-modal features with the Inception-ResNet-v2\cite{szegedy2017inception}, VGG-style\cite{hershey2017cnn} and BERT\cite{devlin2018bert} models described above. The features of the video frames and audio frames are each fed into their own NetVLAD sub-network for temporal pooling, and are then merged with the title feature into the video-level feature. We propose a modal attention fusion module for this feature fusion, which is discussed in the following section.
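For orientation, the documented embedding sizes can be collected into a small sketch. The extractor stubs below are hypothetical placeholders for the pretrained models named above; only the dimensions follow the text:

```python
# Collect the documented embedding sizes of the five feature extractors and
# assemble one per-frame multi-granularity feature.  The extractor stubs are
# hypothetical placeholders for the pretrained models named in the text; only
# the dimensions follow the paper.
FEATURE_DIMS = {
    "image":    1536,  # Inception-ResNet-v2, last hidden layer (frame level)
    "audio":     128,  # VGG-style AudioSet model (frame level)
    "action":    512,  # S3D linear-layer output (clip level)
    "subtitle":  768,  # base BERT embedding (clip level)
    "video":    1024,  # video-level network, last hidden layer
}

def extract_stub(name):
    """Hypothetical extractor: a zero vector of the documented size."""
    return [0.0] * FEATURE_DIMS[name]

def multi_granularity_feature():
    """Concatenate frame-, clip- and video-level features for one frame."""
    feat = []
    for name in ("image", "audio", "action", "subtitle", "video"):
        feat.extend(extract_stub(name))
    return feat

print(len(multi_granularity_feature()))  # 3968 dimensions before fusion
```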
\subsection{Modal Attention Fusion}
The modal attention fusion module is proposed to emphasize the more affection-relevant features among the multi-granularity multi-modal features described above. Specifically, the weighting value of each feature is adaptively predicted by an attention module $\alpha(\cdot)$, which outputs weights normalized by a softmax. The input of the modal attention fusion module is defined as the concatenation of the multi-granularity multi-modal features. By multiplying each feature by its weight, the merged multi-modal feature $V$ is obtained:
\begin{equation}
V=\overset{N}{\underset{i=1}{\bigoplus}}\,v_i\times \alpha_i\,, \quad \alpha_i = e^{w_i\cdot v_i}\Big/\sum_{j=1}^N{e^{w_j\cdot v_j}}
\end{equation}
where $\bigoplus$ is the concatenation operator, $w_i$ is the attention parameter for feature $v_i$, and $N$ is the total number of modalities. A modal-level dropout mechanism randomly replaces a specific feature with a zero vector, which improves the robustness of the model.
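The fusion step above can be sketched as follows. The learned attention parameters $w_i$ are hypothetical stand-ins, and scalar scores $w_i\cdot v_i$ are assumed as in the equation above:

```python
import math
import random

def modal_attention_fusion(features, weights, dropout_p=0.0):
    """Sketch of modal attention fusion: each modality feature v_i is scaled
    by a softmax weight alpha_i and the scaled features are concatenated.
    `weights` stands in for the learned attention parameters w_i (hypothetical
    values here); `dropout_p` implements the modal-level dropout by zeroing a
    whole modality at random."""
    if dropout_p > 0.0:
        features = [v if random.random() > dropout_p else [0.0] * len(v)
                    for v in features]
    scores = [sum(w * x for w, x in zip(w_i, v_i))
              for w_i, v_i in zip(weights, features)]
    m = max(scores)                          # stabilise the softmax
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    alphas = [e / z for e in exps]
    fused = []                               # concatenation of alpha_i * v_i
    for a, v_i in zip(alphas, features):
        fused.extend(a * x for x in v_i)
    return fused, alphas
```

With all attention parameters equal, the softmax degenerates to uniform weights over the modalities, so the module falls back to plain (scaled) concatenation.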
\subsection{Expression Classification}
A mixture-of-experts (MOE)\cite{jordan1994hierarchical} model with 3 experts is used for expression classification. To train all the parameters in our model, we minimize the classification loss between the predicted scores and the ground-truth scores. The predicted score $o_{i,j}$ is first converted to a probabilistic score by the sigmoid function:
\begin{equation}
p_{i,j} = \frac{1}{1 + e^{-o_{i,j}}}
\end{equation}
Then the classification loss function is computed as:
\begin{equation}
L = -\frac{1}{B\times C}\sum_{i=1}^B\sum_{j=1}^C\left[y_{i,j}\log(p_{i,j})+(1-y_{i,j})\log(1-p_{i,j})\right]
\end{equation}
where $B$ is the batch size, $C$ is the number of classes, and $y_{i,j}$ is the $j$-th expression label of the $i$-th sample.
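The two formulas above amount to a standard binary cross-entropy averaged over batch and classes, as in this sketch:

```python
import math

def sigmoid(o):
    """Convert a raw score o_ij to a probabilistic score p_ij."""
    return 1.0 / (1.0 + math.exp(-o))

def bce_loss(scores, labels):
    """Binary cross-entropy averaged over the batch (B) and the classes (C);
    `scores` and `labels` are B x C nested lists of raw outputs o_ij and
    targets y_ij."""
    B, C = len(scores), len(scores[0])
    total = 0.0
    for o_row, y_row in zip(scores, labels):
        for o, y in zip(o_row, y_row):
            p = sigmoid(o)
            total += y * math.log(p) + (1.0 - y) * math.log(1.0 - p)
    return -total / (B * C)
```

For a raw score of $0$ the sigmoid gives $p=0.5$, so each element contributes $\log 2\approx 0.693$ to the loss regardless of its label.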
\section{Experiment}
\subsection{Datasets and Protocols}
\textbf{EEV dataset.} The dataset contains 8 million annotations of viewer facial reactions to 5,153 videos (370 hours). Each video is annotated at 6 Hz with 15 continuous evoked expression labels, corresponding to the facial expressions of viewers who reacted to the video.
\textbf{Protocols.} The performance is evaluated using the correlation computed for each expression in each video, averaged over the expressions and the videos. The correlation follows the SciPy implementation:
\begin{equation}
r=\frac{\sum(x-m_x)(y-m_y)}{\sqrt{\sum(x-m_x)^2\sum(y-m_y)^2}}
\end{equation}
where \(x\) denotes the predicted values (in $[0,1]$) for each expression, \(y\) the ground-truth values (in $[0,1]$), and \(m_x\) and \(m_y\) the averages of \(x\) and \(y\), respectively. Note that the correlation is computed over each video.
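The evaluation protocol can be made concrete with a short sketch; the nested-dict layout of the predictions is an assumption for illustration:

```python
import math

def pearson(x, y):
    """Pearson correlation r between two equal-length sequences,
    matching the formula above."""
    n = len(x)
    m_x = sum(x) / n
    m_y = sum(y) / n
    num = sum((a - m_x) * (b - m_y) for a, b in zip(x, y))
    den = math.sqrt(sum((a - m_x) ** 2 for a in x)
                    * sum((b - m_y) ** 2 for b in y))
    return num / den

def eev_score(preds, truths):
    """Average the per-expression correlation over expressions and videos.
    `preds` and `truths` are lists (one entry per video) of dicts mapping an
    expression name to its per-frame value sequence."""
    scores = []
    for p_vid, t_vid in zip(preds, truths):
        for expr in p_vid:
            scores.append(pearson(p_vid[expr], t_vid[expr]))
    return sum(scores) / len(scores)
```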
\subsection{Implementation Details}
Our model is trained with the Adam optimizer for 30 epochs. We start training with a learning rate of 0.0001 and use a mini-batch size of 1536. The dimension of the fused multi-modal feature is set to 1024. For the mixture of experts, we apply batch normalization before the non-linear layer. To avoid over-fitting, we select the snapshot of the model that attains the best correlation on the validation set. As some videos might have been deleted or made private, we employ the available 3024 videos for training and 730 videos for validation. The models are implemented in TensorFlow.
\subsection{Results}
To evaluate the importance of the different feature granularities, we start with the frame-level features and gradually add the clip-level and video-level features; the experimental results on the test set of the EEV dataset are shown in Table~1. For the frame-level features, the image feature and the audio feature are evaluated separately, yielding correlation scores of 0.00349 and 0.00574, which implies that audio is more effective than visual content for affective prediction. Combining the image and audio features further increases the performance to 0.00739. After the introduction of the clip-level features, the correlation score increases remarkably by 0.00589. This indicates the importance of the semantic-context information around the frame, which is not supplied by the image and audio features. Note that the subtitle feature is more effective than the action feature, which suggests that more semantic information resides in the subtitles. Moreover, the introduction of the video-level feature further increases the performance by 0.00201, confirming the importance of the affective information of the whole video. These results indicate that our proposed multi-granularity features are effective for affective prediction.
\begin{table}[htp]
\centering
\begin{tabular}{lc}
\hline
Input & Correlation \\
\hline
Image & 0.00349 \\
Audio & 0.00574 \\
Image+Audio & 0.00739 \\
Image+Audio+Action & 0.00893 \\
Image+Audio+Action+Subtitle & 0.01328 \\
Image+Audio+Action+Subtitle+Video & 0.01529 \\
\hline
\end{tabular}
\caption{Comparison of different input modalities on EEV}
\label{tab:modalities}
\end{table}
\begin{table}[htp]
\centering
\begin{tabular}{lc}
\hline
Method & Correlation \\
\hline
MLP & 0.01529 \\
MOE & 0.01718 \\
MAF+MOE & 0.01849 \\
Ensemble & 0.02292 \\
\hline
\end{tabular}
\caption{Comparison of different model methods on EEV}
\label{tab:methods}
\end{table}
In addition to the proposed multi-granularity feature, we further evaluate the performance of the modal fusion and expression classification modules. As shown in Table~2, four methods are compared using the same multi-granularity feature. Compared with multilayer perceptron (MLP) classification, MOE improves the correlation result by 0.00189. The model achieves its best performance when the number of experts is set to 3. When the Modal Attention Fusion (MAF) module replaces plain feature concatenation, the correlation score improves from 0.01718 to 0.01849. We ensemble five models to further improve the performance and achieve a correlation score of 0.02292, which improves the result by 0.00443 compared with the single model.
\section{Conclusion}
In this paper, the Multi-Granularity Network with Modal Attention (MGN-MA) is proposed for dense affective prediction. MGN-MA consists of a multi-granularity feature construction module, a modal attention fusion module and an expression classification module. The multi-granularity feature construction module learns more semantic content and video theme information. The modal attention fusion module further improves the performance by emphasizing the more affection-relevant features among the multi-granularity multi-modal features. MGN-MA achieves a correlation score of 0.02292 in the EEV challenge.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
The advent of quantum mechanics at the beginning of the 20th century marks an important step into a new era of physics. The classical
theory was left behind and it became possible to describe physical processes occurring at the atomic length scale of $\unit[10^{-10}]{m}$.
Quantum mechanics was then also applied to atomic nuclei, reducing the length scale of its applicability by another five orders of magnitude.
In the aftermath, quantum field theory was developed, which for several decades has been used with great success to understand
physics down to approximately $\unit[10^{-18}]{m}$.
Perhaps a similar revolution is currently taking place at the beginning of the 21st century. The interest in experiments looking for signs of
quantum gravity, which is expected to play a role at the Planck length, i.e.\ at $\unit[10^{-35}]{m}$, has been steadily increasing. If
Einstein's relativity and quantum physics are assumed to be still valid at this scale, both theories will have to merge into a new theory
correctly describing fluctuations of spacetime itself. Currently there is no chance of investigating quantum gravitational effects directly
at the Planck energy. However, there is a clear signal for such effects that may be visible far below the Planck energy: a violation of
Lorentz symmetry.
The latter is motivated by various approaches to a fundamental theory: string theory, loop quantum gravity, noncommutative spacetime, etc.
Since we currently do not know what such a theory looks like, it makes sense to study Lorentz violation within a model-independent
framework: the Standard-Model Extension (SME) \cite{ColladayKostelecky1998}. This is a collection of all field-theoretic operators
that are symmetric with respect to the gauge group $\mathit{SU}_{\mathrm{c}}(3) \times \mathit{SU}_L(2) \times \mathit{U}_Y(1)$ of the ordinary
Standard Model but that violate particle Lorentz invariance. The current goal of experiments is to measure the parameters associated with these
operators. A detection of nonvanishing Lorentz-violating coefficients could deliver significant insights into how different particle sectors are
affected by quantum gravitational phenomena. So far it has been impossible to detect Lorentz violation. Hence, with better and better
experiments, ever stricter constraints on Lorentz-violating parameters can be set.
\section{Modified Maxwell theory}
In the power-counting renormalizable photon sector of the SME there exist two modifications: Maxwell--Chern--Simons (MCS) theory and modified
Maxwell theory. The first is characterized by a dimensionful scale and a background vector field, whereas the second involves a dimensionless
fourth-rank tensor background field. The dimensionful parameter of MCS theory was already heavily bounded by astrophysical observations
\cite{Carroll-etal1990}, whereas some of the parameters of modified Maxwell theory --- especially the isotropic sector --- were only weakly
bounded in 2008. This was the motivation to consider in Ref.\ \refcite{Klinkhamer:2008ky} a modified quantum electrodynamics (QED) that
results from minimally coupling modified Maxwell theory to a standard Dirac theory of spin-1/2 fermions with charge $e$ and mass $M$:
\begin{subequations}
\begin{equation}
S_{\mathrm{modQED}}=S_{\mathrm{modMax}}+S_{\mathrm{standDirac}}\,,
\end{equation}
\begin{equation}
\label{eq:modified-maxwell-theory}
S_{\mathrm{modMax}}=\int_{\mathbb{R}^4} \mathrm{d}^4x\,\left\{-\frac{1}{4}F^{\mu\nu}(x)F_{\mu\nu}(x)-\frac{1}{4}\kappa^{\mu\nu\varrho\sigma}F_{\mu\nu}(x)F_{\varrho\sigma}(x)\right\}\,,
\end{equation}
\begin{equation}
\label{eq:standard-dirac-theory}
S_{\mathrm{standDirac}}=\int_{\mathbb{R}^4} \mathrm{d}^4x\,\overline{\psi}(x)\Big\{\gamma^{\mu}\big[\mathrm{i}\partial_{\mu}-eA_{\mu}(x)\big]-M\Big\}\psi(x)\,.
\end{equation}
\end{subequations}
Here $F_{\mu\nu}(x)\equiv \partial_{\mu}A_{\nu}(x)-\partial_{\nu}A_{\mu}(x)$ is the field strength tensor of the \textit{U}(1) photon field $A_{\mu}(x)$
and $\psi(x)$ is the spinor field. The fields are defined in Minkowski spacetime with the metric tensor $(\eta_{\mu\nu})=\mathrm{diag}(1,-1,-1,-1)$.
The second term of Eq.\ \refeq{eq:modified-maxwell-theory} containing the fourth-rank tensor background field $\kappa^{\mu\nu\varrho\sigma}$
manifestly violates particle Lorentz invariance. This field is a low energy effective description of possible physics at the Planck scale
and it defines preferred spacetime directions.
The modified photon action given by Eq.\ \refeq{eq:modified-maxwell-theory} can be restricted to a sector exhibiting nonbirefringent\footnote{at
first order in Lorentz violation} photon dispersion laws. This is possible via the following ansatz for the background field:
\begin{equation}
\kappa^{\mu\nu\varrho\sigma}=\frac{1}{2}(\eta^{\mu\varrho}\widetilde{\kappa}^{\nu\sigma}-\eta^{\mu\sigma}\widetilde{\kappa}^{\nu\varrho}
-\eta^{\nu\varrho}\widetilde{\kappa}^{\mu\sigma}+\eta^{\nu\sigma}\widetilde{\kappa}^{\mu\varrho})\,,
\end{equation}
where $\widetilde{\kappa}^{\mu\nu}$ is a symmetric and traceless $(4\times 4)$-matrix: $\widetilde{\kappa}^{\mu\nu}=\widetilde{\kappa}^{\nu\mu}$, $\widetilde{\kappa}^{\mu}_{\phantom{\mu}\mu}=0$. The isotropic part of modified Maxwell theory is then defined by the choice
\begin{equation}
(\widetilde{\kappa}^{\mu\nu})\equiv \frac{3}{2}\widetilde{\kappa}_{\mathrm{tr}}\,\mathrm{diag}\left(1,\frac{1}{3},\frac{1}{3},\frac{1}{3}\right)\,.
\end{equation}
From the field equations results a modified photon dispersion law that is isotropic in three-space:
\begin{equation}
\label{eq:modified-dispersion-law}
\omega(k)=c\,\sqrt{\frac{1-\widetilde{\kappa}_{\mathrm{tr}}}{1+\widetilde{\kappa}_{\mathrm{tr}}}}k\,.
\end{equation}
Here $\omega$ is the photon frequency and $k$ the wave number. Note that $c$ is the maximum attainable velocity of the Dirac particle
described by Eq.\ \refeq{eq:standard-dirac-theory}, which is not affected by Lorentz violation. Depending on the choice of
$\widetilde{\kappa}_{\mathrm{tr}}$, the modified photon velocity no longer coincides with the maximum velocity of massive
particles. This leads to peculiar particle-physics processes, which are forbidden in standard QED.
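Expanding the square root in the dispersion law for small $\widetilde{\kappa}_{\mathrm{tr}}$ makes the sign of the velocity shift explicit (a short side calculation):
\begin{equation}
\frac{\omega(k)}{k}=c\,\sqrt{\frac{1-\widetilde{\kappa}_{\mathrm{tr}}}{1+\widetilde{\kappa}_{\mathrm{tr}}}}
=c\left(1-\widetilde{\kappa}_{\mathrm{tr}}\right)+\mathcal{O}\big(\widetilde{\kappa}_{\mathrm{tr}}^{2}\big)\,,
\end{equation}
so that the photon propagates slower than $c$ for $\widetilde{\kappa}_{\mathrm{tr}}>0$ and faster than $c$ for $\widetilde{\kappa}_{\mathrm{tr}}<0$.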
\section{Vacuum Cherenkov radiation and photon decay}
The modified photon dispersion relation of Eq.\ \refeq{eq:modified-dispersion-law} allows $\widetilde{\kappa}_{\mathrm{tr}}$ to lie in the interval
$(-1,1]$. For $\widetilde{\kappa}_{\mathrm{tr}}\in (0,1]$ the photon velocity is smaller than $c$ and Dirac particles can travel faster
than photons. In that case a Cherenkov-type process in vacuum takes place, leading to an energy loss of the Dirac particle by the emission
of modified photons $\widetilde{\upgamma}$. This process occurs above a certain threshold energy $E_{\mathrm{thresh}}$ of the massive particle
(e.g.\ a proton $\mathrm{p^+}$), and its radiated-energy rate $\mathrm{d}W/\mathrm{d}t$ far above threshold is:
\begin{equation}
\label{eq:threshold-cherenkov}
E_{\mathrm{thresh}}^{\mathrm{p^+}\rightarrow \mathrm{p^+}\widetilde{\upgamma}}=\frac{M_{\mathrm{p}}c^2}{2}\sqrt{2+\frac{2}{\widetilde{\kappa}_{\mathrm{tr}}}}\,,\quad
\left.\frac{\mathrm{d}W}{\mathrm{d}t}\right|^{\mathrm{p^+}\rightarrow \mathrm{p^+}\widetilde{\upgamma}}_{E\gg E_{\mathrm{thresh}}^{\mathrm{p^+}\rightarrow \mathrm{p^+}\widetilde{\upgamma}}}\simeq \frac{7}{12}\alpha\widetilde{\kappa}_{\mathrm{tr}}\frac{E^2}{\hbar}\,.
\end{equation}
Herein, $M_{\mathrm{p}}$ is the proton mass, $\alpha\equiv e^2/(4\pi\varepsilon_0\hbar c)$ the electromagnetic fine structure constant,\footnote{with the
vacuum permittivity $\varepsilon_0$} and $\hbar$ Planck's constant. For vanishing isotropic parameter $\widetilde{\kappa}_{\mathrm{tr}}$ the threshold
energy goes to infinity and the radiated energy rate vanishes showing that the process is forbidden in standard~QED.
For $\widetilde{\kappa}_{\mathrm{tr}}\in (-1,0)$ the modified photon velocity is larger than the maximum velocity of Dirac
particles. In this case a photon may decay, preferably into an electron--positron pair $\mathrm{e^+e^-}$. This
decay is possible above a certain threshold energy $E_{\mathrm{thresh}}$ of the photon and has the following decay rate~$\Gamma$:
\begin{equation}
\label{eq:threshold-photon-decay}
E_{\mathrm{thresh}}^{\widetilde{\upgamma}\rightarrow \mathrm{e^+e^-}}=M_{\mathrm{e}}c^2\sqrt{2-\frac{2}{\widetilde{\kappa}_{\mathrm{tr}}}}\,,\quad \Gamma^{\widetilde{\upgamma}\rightarrow \mathrm{e^+e^-}}|_{E\gg E_{\mathrm{thresh}}^{\widetilde{\upgamma}\rightarrow \mathrm{e^+e^-}}}\simeq -\frac{2}{3}\alpha\widetilde{\kappa}_{\mathrm{tr}}E\,,
\end{equation}
with the electron mass $M_{\mathrm{e}}$.
The crucial difference from vacuum Cherenkov radiation is the minus sign appearing together with $\widetilde{\kappa}_{\mathrm{tr}}$. This tells us that
the above equations only make sense for negative Lorentz-violating parameters.
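For $E\gg M_{\mathrm{e}}c^{2}$ the threshold condition can be inverted to show how an observed photon constrains the negative parameter range: a photon of energy $E$ arriving from a distant source implies
\begin{equation}
|\widetilde{\kappa}_{\mathrm{tr}}|\leq\frac{2}{E^{2}/(M_{\mathrm{e}}^{2}c^{4})-2}\simeq 2\left(\frac{M_{\mathrm{e}}c^{2}}{E}\right)^{2}\,,
\end{equation}
which for a $\unit[56]{TeV}$ photon is of order $10^{-16}$.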
\begin{table}[t!]
\tbl{Ultra-high-energy photon and cosmic-ray events. For details on the Auger event consult Ref.\ \refcite{Klinkhamer:2008ky} and references
therein. The HEGRA event is obtained from the last bin of Fig.\ 3 in Ref.\ \refcite{Aharonian:2004gb}, which has a significance of 2.7$\sigma$.
Since the energy is high enough and the source was identified as the Crab nebula, the left-hand bin endpoint with
$E\simeq\unit[56]{TeV}$ can be taken as a fiducial photon event. For the energy uncertainty a conservative estimate of 10\% is used far above
the detector threshold where the uncertainty mainly originates from statistical~errors.
}
{\begin{tabular}{@{}cccc@{}}
\toprule
Experiment & Observation & Energy $E$ & Energy uncertainty $\Delta E/E$ \\
\colrule
HEGRA & 1997--2002 & $\unit[56]{TeV}$ [Fig.\ 3 of Ref.\ \refcite{Aharonian:2004gb}] & 10\% [p.\ 12 of Ref.\ \refcite{Aharonian:2004gb}] \\
Auger & ID 737165 & $\unit[212]{EeV}$ [see Ref.\ \refcite{Klinkhamer:2008ky}] & 25\% [see Ref.\ \refcite{Klinkhamer:2008ky}] \\
\botrule
\end{tabular}}
\label{tab:uhecr-events}
\end{table}
\section{Updated two-sided bound on the isotropic parameter}
If a hadronic primary or a photon is detected on Earth, its energy must be smaller than the threshold energy of
Eq.\ \refeq{eq:threshold-cherenkov} or Eq.\ \refeq{eq:threshold-photon-decay}, respectively. Using the events of
Table~\ref{tab:uhecr-events}, we obtain the following updated two-sided bound on the isotropic parameter of modified Maxwell theory at
the 2$\sigma$~level:
\begin{equation}
-2 \cdot 10^{-16} < \widetilde{\kappa}_{\mathrm{tr}} < 6\cdot 10^{-20}\,.
\end{equation}
The lower bound has been improved by a factor of 4 in comparison to Ref.~\refcite{Klinkhamer:2008ky}.
\section*{Acknowledgments}
It is a pleasure to thank R. Wagner for helpful discussions and for pointing out Ref.\ \refcite{Aharonian:2004gb} during the CPT'13
conference in Bloomington, Indiana.
\section{Introduction and main results}
The idea of spin injection from a ferromagnetic metal into a
paramagnetic metal was first proposed by Aronov
\cite{bib:Aronov2}. In the spin injection process the charge
current flowing between the ferromagnetic and paramagnetic metals
produces a non-equilibrium magnetization in the paramagnet. This
magnetization is proportional to the induced difference of the
chemical potentials of electrons with opposite spins
\cite{bib:Aronov2} --- the spin accumulation. Non-equilibrium spin
imbalance due to injection was observed by Johnson and Silsbee
\cite{bib:Johnson}. The theory of spin injection was developed in
detail in many works \cite{bib:Son, bib:Johnson2, bib:Valet,
bib:Hershfield, bib:Rashba} and is well studied experimentally;
see \cite{bib:Review, bib:Dyakonov} for a review. However, the
degree of electron spin polarization is relatively small in
standard spin injection from a ferromagnetic into a paramagnetic
metal \cite{bib:Jedema1, bib:Jedema2}. In order to increase the
non-equilibrium polarization it is interesting to look for the
possibility of a spin-injection-based magnetic transition in
metamagnetic metals.
Here we consider the metamagnetic transition of itinerant
electrons induced by the spin injection mechanism. Let us briefly
describe the properties of a metamagnet \cite{bib:Wohlfarth,
bib:Goto}. When the energy splitting of electrons with opposite
spins is smaller than the characteristic energy scale of the
itinerant electrons, the magnetic part of the free energy density
can be expanded in powers of the magnetization, $F(H,M) = aM^2
+bM^4+cM^6-MH$, where the coefficients $a,b,c$ are determined by
the energy dependence of the density of states at the Fermi level,
$H$ is the external magnetic field and $M$ is the magnetization.
At $b<0$ the magnetic part of the free energy $F(H=0,M)$ may have
an extremum at nonzero $\lvert M\rvert$, as shown in Fig.
\ref{fig:1}, which schematically illustrates the evolution of the
free energy with increasing magnetic field due to the contribution
of the term $-MH$. At small magnetic field the state with low
magnetization has the lower energy, while at magnetic fields
larger than the so-called metamagnetic field $H_{m}$ the
metamagnetic state acquires the lower energy and the system goes
over to the state with higher magnetization. The metamagnetic
state is thus induced by the external magnetic field through a
first-order phase transition \cite{bib:Wohlfarth, bib:Goto}.
A metamagnetic transition of itinerant electrons may occur
\cite{bib:Wohlfarth, bib:Goto} in strongly enhanced Pauli
paramagnets when the Fermi level is close to a peak in the
electron density of states. In this case the Zeeman splitting
increases the density of states and drives the ferromagnetic
transition.
In the non-equilibrium case, the difference of the chemical
potentials of electrons with opposite spins plays the role of an
external magnetic field. A characteristic feature of this
effective magnetic field $H^{\textmd{eff}}(x)$ is its spatial
non-uniformity, which results in a finite length of the
metamagnetic state. In the contact between a ferromagnet and a
metamagnetic metal, spin accumulation, and therefore the effective
magnetic field, is generated within a region of the order of the
spin relaxation length in the vicinity of the contact with the
ferromagnet and at the domain wall between the metamagnetic and
paramagnetic states. We assume that the domain wall thickness is
much smaller than the spin relaxation length.
\begin{figure}[t] \centering
\includegraphics[width=8cm]{pic1.eps}
\caption{Free energy $F(H,M)$ dependence on the magnetization $M$
of the metamagnet shown schematically for different magnetic
fields $H_2>H_1$. The state with higher magnetization has lower
energy at higher magnetic fields. Inset: Dependence of the
magnetization on magnetic field.}\label{fig:1}
\end{figure}
The ferromagnetic metal - metamagnetic state contact is shown
schematically in Fig. \ref{fig:2}. The metamagnetic state is
located at $0<x<d$. The metamagnetic phase emerges at electric
currents such that the effective magnetic field satisfies
$H^{\mathrm{eff}}(x=0)\geq H_{m}$. If $d$ is of the order of or
larger than the spin relaxation length, the effective field
$H^{\textmd{eff}}(x)$ can be estimated as the sum
$H^{\textmd{eff}}(x)=H_{F-m}^{\textmd{eff}}(x)+H_{m-p}^{\textmd{eff}}(x)$
of the field due to the spin accumulation at the boundary $x=0$,
\begin{equation}\label{F-m}
H_{F-m}^{\textmd{eff}}(x) =
\frac{eJ}{g\mu_B}\frac{2R_FR_m}{R_F+R_m}[\Pi_F-\Pi_m]e^{-x/\ell_{m}}
\end{equation}
and the effective field due to the spin accumulation at the domain
wall at $x=d$:
\begin{equation}\label{m-p}
H_{m-p}^{\textmd{eff}}(x) =
\frac{eJ}{g\mu_B}\frac{2R_mR_p}{R_m+R_p}\Pi_{m}
e^{-(d-x)/\ell_{m}}
\end{equation}
This case is shown by the solid line in Fig. \ref{fig:3}. In
expressions (\ref{F-m}) and (\ref{m-p}) $J$ is the current
density, $e$ is the electron charge, $\mu_B = |e|\hbar/2mc$ is the
Bohr magneton and $g=2$ for electrons,
\begin{eqnarray}\label{pi}
\Pi_{F,m} = \frac{\sigma_{\uparrow F,m}-\sigma_{\downarrow
F,m}}{\sigma_{\uparrow F,m}+\sigma_{\downarrow F,m}}
\end{eqnarray}
are the conductivity spin polarizations, where
$\sigma_{\alpha} = e^2 D_{\alpha} \nu_{\alpha}$ are the
corresponding conductivities in the ferromagnetic, metamagnetic
and paramagnetic states for electrons with spin $\alpha$,
$D_{\alpha}$ is the diffusion coefficient and $\nu_{\alpha}$ is
the density of states at the Fermi level,
\begin{equation}\label{resist}
R_{F,m}=\ell_{F,m}\frac{\sigma_{\uparrow F,m}+\sigma_{\downarrow
F,m}}{4\sigma_{\uparrow F,m}\sigma_{\downarrow F,m}} , R_p =
\frac{\ell_{p}}{\sigma_{p}}
\end{equation}
are the effective resistances, and the spin relaxation lengths are
defined as $\ell = \sqrt{\overline{D}t_s}$, where in each state
$\overline{D} = (D_{\uparrow}\sigma_{\downarrow} +
D_{\downarrow}\sigma_{\uparrow})/(\sigma_{\uparrow}+\sigma_{\downarrow})$
and $t_s$ is the spin relaxation time.
If the domain wall thickness is small, the transition between the
metamagnetic and paramagnetic states takes place at $x=d$ when
\begin{equation}\label{main}
H^{\textmd{eff}}(d)=H_{m}
\end{equation}
as shown in Fig. \ref{fig:3}. Taking the sum of expressions
(\ref{F-m}) and (\ref{m-p}), we estimate the corresponding length
of the metamagnetic region for $d\geq \ell_{m}$ as
\begin{equation}\label{length}
d \sim \ell_{m}\ln\left|\frac{R_F[\Pi_F -\Pi_m
]/(R_m+R_F)}{g\mu_BH_m/2eJR_m -R_p\Pi_m/(R_m+R_p)} \right|
\end{equation}
At large electrical current, when $g\mu_BH_m/2eJR_m \rightarrow
R_p\Pi_m/(R_m+R_p)$, the length of the metamagnetic region given
by expression (\ref{length}) diverges, $d\rightarrow\infty$. This
threshold dependence of $d$ on the current density occurs because
of the spin accumulation generated at the domain wall between the
metamagnetic and paramagnetic states. However, since the effective
field in most of the region is below $H_m$, the energy of the
stationary metamagnetic state at large $d$ becomes higher than the
energy of the paramagnetic state. We therefore propose that the
system must go over to the paramagnetic state at large values of
the current density. A more detailed discussion of the transition
is given below.
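As an illustration, expression (\ref{length}) can be evaluated numerically. The following sketch uses dimensionless, purely hypothetical parameter values (the resistances, polarizations and the combination $g\mu_B H_m/e$ below are not taken from any real material) to show how $d$ grows and finally diverges as the current density approaches the threshold value.

```python
import math

# Hypothetical dimensionless parameters (illustration only, not a real material)
R_F, R_m, R_p = 2.0, 1.0, 0.5      # effective resistances, Eq. (resist)
Pi_F, Pi_m = 0.6, 0.2              # conductivity spin polarizations, Eq. (pi)
h_m = 1.0                          # g*mu_B*H_m/e in the same units as J*R
ell_m = 1.0                        # spin relaxation length in the metamagnet

def d_of_J(J):
    """Length of the metamagnetic region, Eq. (length)."""
    numerator = R_F * (Pi_F - Pi_m) / (R_m + R_F)
    denominator = h_m / (2 * J * R_m) - R_p * Pi_m / (R_m + R_p)
    return ell_m * math.log(abs(numerator / denominator))

# Threshold current: the denominator of Eq. (length) vanishes
J_thr = h_m * (R_m + R_p) / (2 * R_m * R_p * Pi_m)

for frac in (0.5, 0.9, 0.99):
    print(f"J = {frac:4.2f} J_thr : d = {d_of_J(frac * J_thr):6.3f} ell_m")
```

For these values $d$ increases monotonically with $J$ and grows without bound as $J\to J_{thr}$, reproducing the threshold behaviour discussed above.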
\begin{figure}[t] \centering
\includegraphics[width=8cm]{pic2.eps}
\caption{Ferromagnetic $(-L/2,0)$ - metamagnetic $(0,L/2)$ contact.
The shaded region indicates the high-magnetization state of the
metamagnet induced by the spin injection from the ferromagnet.
$\ell_{F}, \ell_{m}, \ell_{p}$ are the spin diffusion lengths in
the ferromagnet and in the metamagnet in the high and low
magnetization regimes.}\label{fig:2}
\end{figure}
\begin{figure}[t] \centering
\includegraphics[width=8cm]{pic3.eps}
\caption{Top: Dependence of the effective magnetic field on the
coordinate for two values of the current density $|J_2|>|J_1|$.
The effective field decreases inside the metamagnet and the phase
transition takes place at $x=d(J)$, where $H^{\mathrm{eff}} = H_m$.
Bottom: Magnetization profile, where $M_F, M_m, M_p$ are the
corresponding magnetizations of the ferromagnet and of the high and
low magnetization states of the metamagnet.}\label{fig:3}
\end{figure}
\section{Electrical spin injection}
Consider the spin injection process from a ferromagnetic metal
into a metamagnetic metal. We focus on the spin and charge transport
in the presence of the spin-orbit coupling and the short-range
exchange electron-electron interactions. We assume the vector of
the non-equilibrium magnetization in the metamagnetic metal to be
parallel to the vector of the magnetization in the ferromagnet.
The Green's function in Keldysh technique in the matrix form
\begin{equation}\nonumber
\underline{\hat{G}}=\left(\begin{array}{clcr}
\hat{G}^R&\hat{G}^K\\
0&\hat{G}^A
\end{array}\right)
\end{equation}
is given by the retarded $\hat{G}^R(\mathrm{x},\mathrm{x}')$,
advanced $\hat{G}^A(\mathrm{x},\mathrm{x}')$ and Keldysh
$\hat{G}^K(\mathrm{x},\mathrm{x}')$ functions, where
$\mathrm{x}=(\mathbf{r}, t)$ denotes the position and time
arguments and the hat $\left( \hat{} \right)$ stands for a matrix
in spin space. In what follows we consider the stationary regime,
in which the function $\underline{\hat{G}}$ satisfies the equation
\begin{eqnarray}\nonumber
&[(\omega+\frac{1}{2m}\nabla^2_{\mathbf{r}} + \mu -U(\mathbf{r})
-e\phi(\mathbf{r}) )\hat{1}-&\\ \nonumber &-
\hat{U}_{so}(\mathbf{r})+\hat{\varepsilon}(\mathbf{r})]
\underline{\hat{G}}(\mathbf{r},\mathbf{r}',\omega)
=\hat{1}\delta(\mathbf{r}-\mathbf{r}')&
\end{eqnarray}
where $\phi(\mathbf{r})$ is the electrostatic potential,
$U(\mathbf{r})$ is the random potential of the impurities assumed
to be Gaussian distributed with $\langle U(\mathbf{r})\rangle=0$,
$\langle
U(\mathbf{r})U(\mathbf{r}')\rangle=2\pi\nu\tau\delta(\mathbf{r}-\mathbf{r}')$,
$\tau$ is the mean free time and $\nu=(\nu_\uparrow +
\nu_\downarrow)/2$ is the density of states. The spin-orbit
interaction of electrons with impurities is described by the
potential
$\hat{U}_{so}(\mathbf{r})=i\gamma\mathbf{\hat{\sigma}}(\mathbf{\nabla}
U(\mathbf{r})\times\mathbf{\nabla})$, where $\gamma$ is the
spin-orbit coupling constant and $\hat{\sigma}$ is the Pauli
matrix \cite{bib:Altshuler}. The contribution of the short-range
electron-electron exchange interactions to the spin splitting is
described by the term $\hat{\varepsilon}(\mathbf{r})$
\begin{equation}\nonumber
\varepsilon_{\alpha}(\mathbf{r})=\frac{-i\lambda}{2}\int\frac{d\mathbf{p}d\omega}{(2\pi)^4}G^{K}_{-\alpha}(\mathbf{r},
\mathbf{p},\omega)
\end{equation}
where $\lambda$ is the electron coupling constant. It is
convenient to apply the Fourier transformation with respect to the
relative coordinate $\mathbf{r}_1 = \mathbf{r}-\mathbf{r}'$,
\begin{equation}\nonumber
\underline{\hat{G}}(\mathbf{R},\mathbf{p},\omega) =\int
d^3\mathbf{r}_1 \underline{\hat{G}}(\mathbf{R}+\mathbf{r}_1/2,
\mathbf{R}-\mathbf{r}_1/2) e^{-i\mathbf{pr_1}}
\end{equation}
in which $\mathbf{R}=(\mathbf{r}+\mathbf{r}')/2$. The retarded and
advanced Green functions $\hat{G}^R$ and $\hat{G}^A$ averaged over
disorder in the $\mu\tau\gg1$ approximation are diagonal in spin
space and are given by
\begin{equation}\label{green}
\hat{G}^{R,A}(\mathbf{R},\mathbf{p},\omega) =
\frac{1}{\omega-\xi_\mathbf{p}+\hat{\varepsilon}(\mathbf{R})\pm
i\hat{\gamma}}
\end{equation}
where $\xi_\mathbf{p}$ is the electron dispersion,
$2\gamma_{\alpha} = \tau_{\alpha}^{-1}+
(t^{-1}_{\alpha}-t^{-1}_{-\alpha})/2$,
$\tau_{\alpha}^{-1}=\tau_{0\alpha}^{-1}+\tau_{s\alpha}^{-1}$ is
the inverse scattering time due to disorder and spin-orbit
interactions for the spin state $\alpha$, and
$t^{-1}_{s\alpha}=4/(3\tau_{s\alpha})$ \cite{bib:Altshuler}. We
assume that the momentum relaxation time $\tau_{0\alpha}$ is
smaller than the time $t_{s \alpha}$ corresponding to the
spin-flip process. Note that in the metamagnet under consideration
the exchange energy is a coordinate-dependent function. In
deriving the equation for the function $\hat{G}^K$ we assume the
limit in which the exchange energy is small compared to the Fermi
energy, $\mid\varepsilon_\downarrow -\varepsilon_\uparrow \mid/\mu
< 1$. In this limit the equation for the function $\hat{G}^K$
yields
\begin{eqnarray}\nonumber
&\mathbf{v}(\mathbf{\nabla}_{\mathbf{R}} +
[\mathbf{\nabla}_{\textmd{R}}\varepsilon_{\alpha} +
e\mathbf{E}]\partial_{\varepsilon_p})G^{K}_{\alpha}=-\left(
\frac{1}{\tau_{\alpha}} -\frac{1}{ t_{\alpha s}}+ \frac{1}{
t_{-\alpha s}}\right)G^{K}_{\alpha}&\\ \nonumber&
+\left(\frac{F_{\alpha}}{\tau_{\alpha}} -
\frac{F_{\alpha}-F_{-\alpha}}{2t_{\alpha s
}}\right)[G^{R}_{\alpha}-G^{A}_{\alpha}]&
\end{eqnarray}
where $\mathbf{E}=-\nabla\phi(\mathbf{r})$ is the electric field
and we denote the coordinate- and frequency-dependent function
\begin{equation}\nonumber
F_{\alpha}(\mathbf{R},\omega)=\frac{i}{2\pi\nu_{\alpha}}\int\frac{d\mathbf{p}}{(2\pi)^3}G^{K}_{\alpha}(\mathbf{R},\mathbf{p},\omega)
\end{equation}
In the diffusion approximation for the function
$F_{\alpha}(\mathbf{R},\omega)$ one obtains the equation
\begin{equation}\label{diff}
\mathbf{\nabla}\sigma_{\alpha}\mathbf{\nabla}F_{\alpha}(\mathbf{R},\omega)
= \frac{\nu_{\alpha}
\nu_{-\alpha}}{2\nu}\frac{F_{\alpha}(\mathbf{R},\omega)-F_{-\alpha}(\mathbf{R},\omega)}{t_{s}}
\end{equation}
where $\sigma_{\alpha} = e^2 \nu_{\alpha} D_{\alpha}$ is the
conductivity, $D_{\alpha} = v^2_{\alpha}\tau_{\alpha}/3$ is the
diffusion coefficient, and the densities of states are
space-dependent functions.
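For constant coefficients, subtracting the two spin components of Eq. (\ref{diff}) shows that the spin imbalance $F_\uparrow-F_\downarrow$ decays exponentially on the scale $\ell=\sqrt{\overline{D}t_s}$ defined above. The following sketch, with arbitrary illustrative parameter values, checks numerically that the decay length extracted from the equation coincides with that definition.

```python
import math

# Arbitrary illustrative parameters (dimensionless units)
nu_up, nu_dn = 1.3, 0.7       # densities of states for the two spin states
D_up, D_dn = 0.8, 1.1         # diffusion coefficients
t_s, e = 2.0, 1.0             # spin relaxation time, electron charge

sigma_up = e**2 * nu_up * D_up
sigma_dn = e**2 * nu_dn * D_dn
nu = (nu_up + nu_dn) / 2

# Subtracting the two components of Eq. (diff) at constant coefficients gives
# (F_up - F_dn)'' = [nu_up*nu_dn/(2*nu*t_s)]*(1/sigma_up + 1/sigma_dn)*(F_up - F_dn)
inv_ell2 = nu_up * nu_dn / (2 * nu * t_s) * (1 / sigma_up + 1 / sigma_dn)
ell_from_eq = 1 / math.sqrt(inv_ell2)

# Definition used in the text: ell = sqrt(D_bar * t_s)
D_bar = (D_up * sigma_dn + D_dn * sigma_up) / (sigma_up + sigma_dn)
ell_def = math.sqrt(D_bar * t_s)

print(ell_from_eq, ell_def)   # the two lengths agree
```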
Let us consider the case when the functions in Eq. (\ref{diff})
depend on a single spatial coordinate $x$ only. Consider the
interface between a ferromagnetic metal that occupies the region
$x<0$ and a metamagnetic metal at $x>0$. We assume that the
lengths $L/2$ of the ferromagnetic and metamagnetic regions are
much larger than the corresponding spin diffusion lengths. We also
assume the external reservoirs of the sample at $x=\pm L/2$ to be
in the spin-equilibrium state. The electric field in the system is
treated through the boundary conditions
\begin{eqnarray}\label{boundary}\nonumber
F_{\alpha}(-L/2,\omega) = f(\omega-eV/2)
\\ F_{\alpha}(L/2,\omega) = f(\omega+eV/2)
\end{eqnarray}
where $f(\omega)=\tanh(\omega/2T)$ and $V$ is the potential
difference across the structure. The solution of Eq.
(\ref{diff}) is a continuous function at the interface $x=0$,
\begin{equation}\label{con1}
F_{\alpha}(0-,\omega)=F_{\alpha}(0+,\omega)
\end{equation}
while the derivatives satisfy
\begin{equation}\label{con2}
\sigma_{\alpha}\frac{\partial F_{\alpha}(x,\omega)}{\partial x}|
_{x=0-} = \sigma_{\alpha}\frac{\partial
F_{\alpha}(x,\omega)}{\partial x}|_{x=0+}
\end{equation}
describing the continuity of the current density at the interface.
The current density carried by spin up and spin down electrons is
given as
\begin{equation}\nonumber
J_{\alpha}(x)= \frac{1}{2e}\int \sigma_{\alpha}\frac{\partial
F_{\alpha}(x,\omega)}{\partial x}d\omega
\end{equation}
We solve Eq. (\ref{diff}) treating the boundaries
ferromagnet-paramagnet, ferromagnet-metamagnet and
metamagnet-paramagnet independently. This approximation is valid
in the limit when the length of the metamagnetic region satisfies
$d>\ell_m$. Taking into account that the length of the system is
larger than the corresponding spin-diffusion lengths, we solve the
diffusion equation in the region $x>0$ with the boundary
(\ref{boundary}) and continuity (\ref{con1}), (\ref{con2})
conditions:
\begin{eqnarray}\label{solution}\nonumber
&&F_{ \uparrow, \downarrow| p, m}(x,\omega)= f(\omega+eV/2)
+A_{p,m}[(x-L/2) \pm\\
&&\pm 2\sigma_{\downarrow, \uparrow| p, m}(\Pi_F-\Pi_{p,
m})\frac{R_F R_{p, m}}{(R_F+R_{p, m})}e^{-x/\ell_{p,m}}]
\end{eqnarray}
where $p, m$ denote the low and high magnetization regimes of the
metamagnet and $F$ stands for the ferromagnet; the coefficient
\begin{eqnarray}\nonumber
A_{p,m} = \frac{2(\sigma_{\uparrow F}+ \sigma_{\downarrow
F})[f(\omega+eV/2)-f(\omega-eV/2)]}{ [(\sigma_{\uparrow F}+
\sigma_{\downarrow F})+(\sigma_{\uparrow |p, m}+
\sigma_{\downarrow |p, m})]L }
\end{eqnarray}
is connected with the current density as
\begin{eqnarray}\nonumber
J=J_{\uparrow}(x)+J_{\downarrow}(x)=\frac{1}{2e}\int
[\sigma_{\uparrow |p, m}+\sigma_{\downarrow |p, m}]A_{p,m}d\omega
\end{eqnarray}
The conductivity spin polarizations and the resistances in the
ferromagnet and metamagnet are defined by expressions (\ref{pi})
and (\ref{resist}). Note that in the low magnetization regime of
the metamagnet $\sigma_{\uparrow p} = \sigma_{\downarrow p} =
\sigma_{p}/2$ and $D_{\uparrow p} = D_{\downarrow p} = D_{p}$.
Solution (\ref{solution}) has to be supplemented by the local
neutrality condition, which self-consistently determines the
electric potential in the sample. The spin injection process does
not change the electron concentration:
\begin{equation}\label{neitrality}
N = \frac{1}{2}\int
[\nu_{\uparrow}F_{\uparrow}(x,\omega)+\nu_{\downarrow}F_{\downarrow}(x,\omega)]d\omega
\end{equation}
\section{Paramagnetic state} The low-magnetization state of the
metamagnet can be studied by solving Eq. (\ref{diff}) for a
contact between a ferromagnetic metal and a paramagnetic metal at
$x=0$. The effective field due to the spin accumulation is
\begin{equation}\nonumber
H_p^{\mathrm{eff}}(x) = \frac{eJ}{g\mu_B} \frac{2R_F
R_p}{R_F+R_p}\Pi_Fe^{-x/\ell_{p}}
\end{equation}
and the magnetization at $x>0$ is
\begin{equation}\label{para}
M_{p}(x) =\frac{(g\mu_B)^2
\nu_p}{1-\lambda\nu_p}H_{p}^{\mathrm{eff}}(x)
\end{equation}
The effective magnetic field produced by the spin accumulation in
the ferromagnetic metal at $x<0$ is
\begin{equation}\label{FerroFieldF-N}
H_{Fp}^{\mathrm{eff}} (x) = \frac{eJ}{g\mu_B} \frac{2R_F
R_{p}}{R_F+R_{p}}\Pi_{F}e^{x/\ell_{F}}
\end{equation}
where the resistances $R_{F}$ and $R_{p}$ are given by Eq.
(\ref{resist}).
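The factor $1/(1-\lambda\nu_p)$ in Eq. (\ref{para}) is the exchange (Stoner) enhancement of the induced magnetization: as $\lambda\nu_p\to1$ the response to the spin-accumulation field grows without bound, which is the mechanism that makes a strongly enhanced paramagnet a candidate metamagnet. A minimal numerical sketch, with illustrative numbers only:

```python
def enhancement(lam_nu):
    """Exchange enhancement factor 1/(1 - lambda*nu_p), cf. Eq. (para)."""
    return 1.0 / (1.0 - lam_nu)

# The closer lambda*nu_p is to the Stoner criterion, the larger the response
for x in (0.0, 0.5, 0.9):
    print(f"lambda*nu_p = {x:3.1f} -> enhancement {enhancement(x):5.1f}")
```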
\section{Metamagnetic transition}
The self-consistency equation for the magnetization density $M(x)$ in the sample is
defined as
\begin{eqnarray}\label{magnetization}\nonumber
&M(x) =g\mu_B[\varepsilon_{\downarrow}(x) -
\varepsilon_{\uparrow}(x)]/\lambda =&\\& = -\frac{g\mu_B}{2}\int
[\nu_{\uparrow}F_{\uparrow}(x,\omega)-\nu_{\downarrow}F_{\downarrow}(x,\omega)]
d\omega&
\end{eqnarray}
In the case of an equilibrium metamagnetic metal, Eq.
(\ref{magnetization}) has two solutions even without an external
magnetic field, corresponding to the two minima of the free
energy; see the inset in Fig. \ref{fig:1}. The transition between
these solutions takes place when the magnetic field is equal to
$H_{m}$. One can verify that in the linear response in $V$ the
spin-dependent part of expression (\ref{solution}) enters Eqs.
(\ref{neitrality}) and (\ref{magnetization}) as a magnetic field.
The procedure for finding the solutions is the following. We
assume that there is a metamagnetic state in the system at
$0<x<d$. We then solve Eq. (\ref{diff}) for the spin accumulation
at the two boundaries and self-consistently determine the value of
$d$ from Eq. (\ref{main}).
To obtain Eq. (\ref{main}) we need to consider the transition in
more detail. Near the transition between the metamagnetic and
paramagnetic states the spatial derivatives of the magnetization
have to be taken into account:
\begin{equation}\label{x}\nonumber
-K\frac{d^2}{dx^2}M+\frac{\delta F(H_{m}^{\mathrm{eff}} (x),M)}{\delta
M}=0
\end{equation}
Here $K$ is a positive constant. Let $M_{w}(x-d)$ be the solution
describing the transition between the metamagnetic and
paramagnetic states at the point $x=d$ in a uniform magnetic
field. It satisfies the equation
\begin{equation}\label{y}\nonumber
-K\frac{d^2}{dx^2}M_{w}+\frac{\delta F(H_{m},M_{w})}{\delta M_{w}}=0
\end{equation}
Assuming small difference $H_{m}^{\mathrm{eff}} (x)-H_{m}$ at
$x\approx d$ and substituting $M=M_{w}(x-d)+\delta M$, we obtain
\begin{equation}\label{delta-M}\nonumber
-K\frac{d^2}{dx^2}\delta M+\frac{1}{2}\frac{\delta ^2 F(H_{m},M)}{\delta
M^2}|_{M=M_{w}}\delta M=H_{m}^{\mathrm{eff}} (x)-H_{m}
\end{equation}
A solution of this equation exists if
\begin{equation}\label{condition}
\int dx\Psi_{0}(x)(H_{m}^{\mathrm{eff}} (x)-H_{m})=0
\end{equation}
where $\Psi_{0}(x)$ is the eigenfunction corresponding to the zero
mode, $E_{0}=0$, of the equation
\begin{equation}\label{z}\nonumber
-K\frac{d^2}{dx^2}\Psi_{k}+\frac{1}{2}\frac{\delta ^2
F(H_{m},M)}{\delta M^2}|_{M=M_{w}}\Psi_{k}=E_{k}\Psi_{k}
\end{equation}
$\Psi_{0}(x)$ has no zeros and is localized near $x=d$ in a region
of the order of the domain wall thickness. This mode describes a
small translation of $M$, so in a uniform field one has $E_{0}=0$.
In the case when $\ell_{m,p}$ are much larger than the domain wall
thickness, condition (\ref{condition}) yields Eq. (\ref{main}).
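The zero mode can be written down explicitly. Differentiating the domain-wall equation for $M_w$ with respect to $x$ gives (this form is not stated in the text above; it follows from the standard translational-mode argument, up to the factor-of-two convention in the definition of the operator for $\Psi_k$)

```latex
% Differentiate  -K M_w'' + \delta F(H_m,M_w)/\delta M_w = 0  with respect to x:
\begin{equation*}
-K\frac{d^2}{dx^2}\!\left(\frac{dM_{w}}{dx}\right)
+\frac{\delta^{2}F(H_{m},M)}{\delta M^{2}}\bigg|_{M=M_{w}}\frac{dM_{w}}{dx}=0,
\qquad
\Psi_{0}(x)\propto\frac{dM_{w}(x-d)}{dx}.
\end{equation*}
```

Since $M_w$ is monotonic across the wall, its derivative indeed has no zeros and is localized on the scale of the wall thickness.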
\subsection{Metamagnetic state}
Let us discuss different realizations of spin accumulation.
(1) Consider the case when $\Pi_F-\Pi_m$ and $\Pi_m$ have the same
sign. The effective fields of the two contacts then have the same
sign as well. The estimate for the length of the metamagnetic
region $d$ in the limit $d\geq\ell_{m}$ is given by expression
(\ref{length}); $d$ diverges at a threshold electrical current
density.
(2) Let $\Pi_F-\Pi_m$ and $\Pi_m$ have opposite signs, so that the
effective fields of the two contacts have different signs as well.
The analysis shows that a solution of Eq. (\ref{main}) with finite
$d$ exists for
$|H_{F-m}^{\textmd{eff}}(0)|>|H_{m-p}^{\textmd{eff}}(d)|$. With
increasing electrical current density $d$ stays finite.
In the metamagnetic region the magnetization is
\begin{equation}\label{Magnet-meta}\nonumber
M_m (x) = M_{m}^{0} +\frac{(g\mu_B)^2\nu_m}{1-\lambda \nu_m}
H_m^{\mathrm{eff}}(x)
\end{equation}
Here $M_{m}^{0}$ is the magnetization of the metamagnetic state
calculated at zero magnetic field and $\nu_m = 2\nu_{\uparrow
m}\nu_{\downarrow m}/(\nu_{\uparrow m}+\nu_{\downarrow m})$.
Spin accumulation also appears in the ferromagnetic metal at
$x<0$:
\begin{equation}\label{FerroFieldF-m}
H_{Fm}^{\mathrm{eff}} (x) = \frac{eJ}{g\mu_B} \frac{2R_F
R_{m}}{R_F+R_{m}}[\Pi_F - \Pi_{m}]e^{x/\ell_{F}}
\end{equation}
where $R_{F}$ and $R_{m}$ are given by Eq. (\ref{resist}).
\subsection{Free energy criterion}
We propose that the realization of the metamagnetic state must be
energetically favorable over the realization of the paramagnetic
state. In the regime linear in the applied current, the magnetic
part of the free energy when the paramagnetic state is realized
is
\begin{equation}\nonumber
\delta\mathcal{F}_{\textmd{Fp}} = - M_F
\int_{-L/2}^{0}H^{\mathrm{eff}}_{Fp}(x) dx
\end{equation}
where $M_F$ is the magnetization of the ferromagnetic contact.
When the metamagnetic state is realized it is
\begin{eqnarray}\nonumber
&\delta\mathcal{F}_{\textmd{Fmp}} = - M_F \int_{-L/2}^{0}
H^{\mathrm{eff}}_{Fm}(x) dx -&\\ \nonumber&- M^{0}_m
\int_{0}^{d}[H^{\mathrm{eff}}_{m}(x)-H_{m}] dx+F_{S}&
\end{eqnarray}
The effective magnetic fields in the ferromagnetic region are
given by expressions (\ref{FerroFieldF-N}) and
(\ref{FerroFieldF-m}). $F_{S}$ is the energy associated with the
domain wall and the $F-m$ boundary. While the domain wall energy
is positive, the sign of the $F-m$ boundary energy depends on the
relative directions of the magnetizations in the ferromagnet and
the metamagnet. An estimate of $F_{S}$ depends on details that are
beyond the scope of this paper.
From the criterion $\delta \mathcal{F}_{\textmd{Fp}}-\delta
\mathcal{F}_{\textmd{Fmp}}\geq 0$ for the realization of the
metamagnetic transition one can estimate the threshold value of
the current density. In the limiting case $R_F>R_m>R_p$, in which
the contributions of the $m-p$ and $F-p$ boundaries to the free
energy are smaller than the corresponding contribution from the
$F-m$ interface, one obtains the estimate
\begin{eqnarray}\nonumber
J_{thr}\approx\frac{g\mu_B}{e}\frac{
H_m/2R_m}{(\Pi_F-\Pi_m)(1+\ell_F M_F/\ell_m
M^0_m)}\frac{d(J_{thr})}{\ell_m}
\end{eqnarray}
\subsection{Ferromagnet-metamagnet-ferromagnet structure}
Let us briefly discuss the spin-injected metamagnetic state in a
system with a metamagnetic metal placed between two ferromagnetic
contacts with opposite directions of the magnetizations. In this
case $\delta\mathcal{F}_{\textmd{Fp}}=0$ because of the
cancellation of the contributions of the ferromagnets with
opposite magnetizations. In the metamagnetic state
\begin{eqnarray}\nonumber
&\delta\mathcal{F}_{\textmd{FmF}} = - M^{0}_m
\int_{0}^{d}[H^{\mathrm{eff}}_{m}(x)-H_{m}] dx&
\end{eqnarray}
Both ferromagnets contribute equally to the effective field. For
$d\geq \ell_{m}$, using expression (\ref{F-m}) we obtain the
threshold value of the electrical current density at which
$\delta\mathcal{F}_{\textmd{FmF}}\leq 0$:
\begin{equation}\nonumber
J_{thr}=\frac{g\mu_B}{e}H_{m}\frac{R_F+R_m}{4R_FR_m[\Pi_F-\Pi_m]}\frac{d}{\ell_{m}}
\end{equation}
Note that the expression for $J_{thr}$ for the $F-m$ contact in
the limit discussed in the previous section is similar to that for
the $F-m-F$ contact. Also note that in this geometry the
transition to the paramagnetic state with increasing current is
absent.
\section{Conclusions}
To conclude, we have studied the metamagnetic transition of
itinerant electrons in a metamagnet under spin injection from a
ferromagnetic metal. The spin injection produces a non-equilibrium
effective magnetic field in the metamagnet which drives the
transition. We have calculated the effective magnetic fields and
the electrical currents required for the metamagnetic transition.
We have shown that the length of the metamagnetic state has a
threshold dependence on the electrical current due to the
effective magnetic field self-generated at the domain wall.
Typical values of the spin accumulation in metals are in the $\mu
\mathrm{eV}$ range \cite{bib:Fert, bib:Zaffalon}, which
corresponds to effective magnetic fields in the ten-mT range at
reasonably high current densities. Metallic metamagnets with
metamagnetic fields in the tesla range are well known
\cite{bib:Goto}. Applying an external magnetic field one can
easily bring such a system close to the transition.
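The quoted field scale can be checked directly: the effective field is related to the spin accumulation by $g\mu_B H^{\textmd{eff}}=\Delta\mu$, so with $g=2$ and $\mu_B\approx5.788\times10^{-5}\,\mathrm{eV/T}$ a splitting of $1\,\mu\mathrm{eV}$ corresponds to roughly $9\,\mathrm{mT}$:

```python
MU_B_EV_PER_T = 5.7883818e-5   # Bohr magneton in eV/T (CODATA value)
G_FACTOR = 2.0                 # g = 2 for electrons, as assumed in the text

def effective_field_mT(spin_accumulation_eV):
    """Effective field H = Delta_mu / (g * mu_B), returned in mT."""
    return spin_accumulation_eV / (G_FACTOR * MU_B_EV_PER_T) * 1e3

print(effective_field_mT(1e-6))   # ~8.6 mT for a 1 micro-eV spin accumulation
```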
The well-studied compound $\mathrm{YCo_{2}}$ in crystalline form
undergoes the metamagnetic transition at
$H_{m}=70\,\mathrm{T}$ \cite{bib:Goto}, while in nanostructured
form it is a weak ferromagnet \cite{bib:Narayana}, suggesting the
possibility of reducing the metamagnetic field with a proper
technology. Another possibility is to study systems with a
temperature-induced metamagnetic transition \cite{bib:Markosyan}.
Unfortunately, the spin relaxation length, the main parameter that
governs the magnitude as well as the spatial distribution of the
effective field, is not known in such systems.
\section{Acknowledgments}
We thank V.I. Kozub and A.T. Burkov for valuable discussions. We
are grateful for the financial support of Federal Program under
Grant No. 2009-1.5-508-008-012 and RFFI under Grant No.
10-02-00681-A.
\section{Introduction}
The Travelling Salesperson Problem (TSP) is one of the most central problems in combinatorial optimization.
The problem asks to find a shortest closed walk visiting each vertex at least once in an edge-weighted graph, or
alternatively to find a shortest Hamilton cycle in a complete graph where the edge weights satisfy the triangle inequality.
The Travelling Salesperson Problem is notoriously hard.
The approximation factor of $3/2$ established by Christofides~\cite{christo} has not been improved for 40 years
despite a significant effort of many researchers.
The particular case of the problem, the Hamilton Cycle Problem, was among the first problems to be shown to be NP-hard.
Moreover, Karpinski, Lampis and Schmied~\cite{KLS13} have recently shown that
the Travelling Salesperson Problem is NP-hard to approximate within the factor $123/122$,
improving the earlier inapproximability results of Lampis~\cite{cor13} and of Papadimitriou and Vempala~\cite{papavemp}.
In this paper, we are concerned with an important special case of the Travelling Salesperson Problem, the graphic TSP,
which asks to find a shortest closed walk visiting each vertex at least once in a graph where all edges have unit weight.
We will refer to such a walk as a \emph{TSP walk}.
There has recently been a lot of research focused on approximation algorithms for the graphic TSP,
which was ignited by the breakthrough of the $3/2$-approximation barrier in the case of $3$-connected
cubic graphs by Gamarnik, Lewenstein and Sviridenko~\cite{cor08}.
This was followed by the improvement of the $3/2$-approximation factor for the general graphic TSP
by Oveis Gharan, Saberi and Singh~\cite{cor16}.
Next, M\"omke and Svensson~\cite{graprox} designed a $1.461$-approximation algorithm for the problem and
Mucha~\cite{cor15} showed that their algorithm is actually a $13/9$-approximation algorithm.
This line of research culminated with the $7/5$-approximation algorithm of Seb\"o and Vygen~\cite{cor21}.
We here focus on the case of graphic TSP for cubic graphs,
which was at the beginning of this line of improvements.
The $(3/2-5/389)$-approximation algorithm of Gamarnik et al.~\cite{cor08} for $3$-connected cubic graphs
was improved by Aggarwal, Garg and Gupta~\cite{cor01}, who designed a $4/3$-approximation algorithm.
Next, Boyd et al.~\cite{boyd} found a $4/3$-approximation algorithm for $2$-connected cubic graphs.
The barrier of the $4/3$-approximation factor was broken by Correa, Larr\'e and Soto~\cite{correa}
who designed a $(4/3-1/61236)$-approximation algorithm for this class of graphs.
The currently best algorithm for $2$-connected cubic graphs
is the $1.3$-approximation algorithm of Candr\'akov\'a and Lukot{\hskip -0.3ex}'ka~\cite{candluk},
based on their result on the existence of a TSP walk of length at most $1.3n-2$ in $2$-connected cubic $n$-vertex graphs.
We improve this result as follows.
Note that we obtain a better approximation factor and
Theorem~\ref{thm-alg} also applies to a larger class of graphs.
\begin{theorem}
\label{thm-main}
There exists a polynomial-time algorithm that for a given $2$-connected subcubic $n$-vertex graph
with $n_2$ vertices of degree two outputs a TSP walk of length at most
$$\frac{9}{7}n+\frac{2}{7}n_2-1\;.$$
\end{theorem}
\begin{theorem}
\label{thm-alg}
There exists a polynomial-time $9/7$-approximation algorithm for the graphic TSP for cubic graphs.
\end{theorem}
At this point, we should remark that we have not attempted to optimize the running time of our algorithm.
Also note that our approximation factor matches the approximation factor for cubic bipartite graphs
in the algorithm of Karp and Ravi~\cite{karpra},
who designed a $9/7$-approximation algorithm for the graphic TSP for cubic bipartite graphs.
However, van Zuylen~\cite{zuylen} has recently found a $5/4$-approximation algorithm for this class of graphs.
Both the result of Karp and Ravi, and the result of van Zuylen are based on finding
a TSP walk of length of at most $9n/7$ and $5n/4$, respectively, in an $n$-vertex cubic bipartite graph.
On the negative side, Karpinski and Schmied~\cite{KS13} showed that
the graphic TSP is NP-hard to approximate within the factor of $535/534$ in the general case and
within the factor $1153/1152$ in the case of cubic graphs.
In addition to improving the approximation factor for the graphic TSP for cubic graphs,
our contribution lies in bringing several new ideas to the table.
The proof of our main result, Theorem~\ref{thm-main}, differs from the usual line of proofs in this area.
In particular,
to establish the existence of a TSP walk of length at most $9n/7-1$ in a $2$-connected cubic $n$-vertex graph,
we allow subcubic graphs as inputs and perform reductions in this larger class of graphs.
While we cannot establish the approximation factor of $9/7$ for this larger class of graphs,
we are still able to show that
our techniques yield the existence of a TSP walk of length at most $9n/7-1$ for cubic $n$-vertex graphs.
In addition, unlike in the earlier results,
we do not construct a TSP walk in the final reduced graph by linking cycles in a spanning $2$-regular subgraph of the reduced graph
but we consider spanning subgraphs with vertices of degree zero and two, which gives us additional freedom.
We conclude with a brief discussion on possible improvements of the bound from Theorem~\ref{thm-main}.
In Section~\ref{sec-lb}, we give a construction of a $2$-connected cubic $n$-vertex graph
with no TSP walks of length smaller than $\frac{5}{4}n-2$ (Proposition~\ref{prop-qrepl}) and
a $2$-connected subcubic $n$-vertex graph with $n_2=\Theta(n)$ vertices of degree two
with no TSP walks of length smaller than $\frac{5}{4}n+\frac{1}{4}n_2-1$ (Proposition~\ref{prop-drepl});
the former construction was also found independently by Maz\'ak and Lukot{\hskip -0.3ex}'ka~\cite{mazluk}.
We believe that these two constructions provide the tight examples for an improvement of Theorem~\ref{thm-main} and
conjecture the following. We also refer to a more detailed discussion at the end of Section~\ref{sec-lb}.
\begin{conjecture}\label{conj-main}
Every $2$-connected subcubic $n$-vertex graph with $n_2$ vertices of degree two
has a TSP walk of length at most
$$\frac{5}{4}n+\frac{1}{4}n_2-1\;.$$
\end{conjecture}
We would like to stress that it is important that Conjecture~\ref{conj-main} deals with simple graphs,
i.e., graphs without parallel edges. Indeed, consider the cubic graph $G$ obtained as follows:
start with the graph that has two vertices of degree three that are joined by three paths,
each having $2\ell$ internal vertices of degree two, and replace every second edge of these paths with a pair of parallel edges
to get a cubic graph. The graph $G$ has $n=6\ell+2$ vertices but no TSP walk of length shorter than $8\ell+2$.
\section{Preliminaries}\label{sec-prelim}
In this section, we fix the notation used in the paper and
make several simple observations on the concepts that we use.
All graphs considered in this paper are \emph{simple}, i.e., they do not contain parallel edges.
When we allow parallel edges, we will always emphasize this by referring to the considered graph as a \emph{multigraph}.
We will occasionally want to stress that a graph obtained during the proof has no parallel edges and
we will do so by saying that it is simple even if saying so is superfluous.
The \emph{underlying graph} of a multigraph $H$ is the graph obtained
from $H$ by suppressing parallel edges,
i.e., replacing each set of parallel edges by a single edge.
If $G$ is a graph, its vertex set is denoted by $V(G)$ and its edge set by $E(G)$.
Further, the number of vertices of $G$ is denoted by $n(G)$ and the number of its vertices of degree two by $n_2(G)$.
If $w$ is a vertex of $G$,
then $G-w$ is a graph obtained by deleting the vertex $w$ and all the edges incident with $w$.
Similarly, if $W$ is a set of vertices of $G$, then $G-W$ is the graph obtained by deleting
all vertices of $W$ and edges incident with them.
Finally, if $F$ is a set of its edges, then $G\setminus F$ is the graph obtained from $G$ by removing the edges of $F$
but none of the vertices.
A graph with all vertices of degree at most three is called \emph{subcubic}.
We say that a graph $G$ is \emph{$k$-connected}
if it has at least $k+1$ vertices and $G-W$ is connected for any $W\subseteq V(G)$ containing at most $k-1$ vertices.
If $G$ is connected but not $2$-connected, then a vertex $v$ such that $G-v$ is not connected is called a \emph{cut-vertex}.
Maximal $2$-connected subgraphs of $G$ are called \emph{blocks}.
Note that a vertex of a graph is contained in two or more blocks if and only if it is a cut-vertex.
A subset $F$ of the edges of a graph $G$ is an \emph{edge-cut}
if the graph $G\setminus F$ has more components than $G$ and $F$ is minimal with this property.
Such a subset $F$ containing exactly $k$ edges will also be referred to as \emph{$k$-edge-cut}.
An edge forming a $1$-edge-cut is called a \emph{cut-edge}.
A graph $G$ is \emph{$k$-edge-connected} if it has no $\ell$-edge-cut for $\ell<k$.
Note that a subcubic graph $G$ with at least two vertices is $2$-connected if and only if it is $2$-edge-connected.
A \emph{$\theta$-graph} is a simple graph obtained from a pair of vertices joined by three parallel edges
by subdividing some of the edges several times.
In other words,
a $\theta$-graph is a graph formed by two vertices of degree three joined by three paths whose internal vertices have degree two,
such that at most one of these paths is trivial, i.e., is a single edge.
In our arguments, we will need to consider a special type of cycle of length six in subcubic graphs,
one that resembles a $\theta$-graph.
A cycle $K=v_1\ldots v_6$ of length six in a subcubic graph $G$ is a \emph{$\theta$-cycle},
if all vertices $v_1,\ldots,v_6$ have degree three, their neighbors $x_1$, \ldots, $x_6$ outside of $K$ are pairwise distinct, and
if $G-V(K)$ has three connected components,
one containing $x_1$ and $x_2$, one containing $x_4$ and $x_5$, and one containing $x_3$ and $x_6$.
See Figure~\ref{fig-thetacycle} for an example.
The vertices $v_3$ and $v_6$ of the cycle $K$ will be referred to as the \emph{poles} of the $\theta$-cycle $K$.
\begin{figure}
\begin{center}
\includegraphics[width=60mm]{fig-thetacycle.pdf}
\end{center}
\caption{A $\theta$-cycle with poles $v_3$ and $v_6$.}\label{fig-thetacycle}
\end{figure}
We say that a multigraph is \emph{Eulerian} if all its vertices have even degree;
note that we do not require the multigraph to be connected,
i.e., a multigraph has an Eulerian tour if and only if it is Eulerian and connected.
A subgraph is \emph{spanning} if it contains all vertices of the original graph,
possibly some of them as isolated vertices, i.e., vertices of degree zero.
It is easy to relate the length of the shortest TSP walk in a graph $G$
to the size of Eulerian multigraphs using edges of $G$ as follows.
To simplify our presentation, let ${\rm tsp}(G)$ denote the length of the shortest TSP walk in a graph $G$.
\begin{observation}\label{obs-ep}
For every graph $G$,
${\rm tsp}(G)$ is equal to the minimum number of edges of a connected Eulerian multigraph $H$
such that the underlying graph of $H$ is a spanning subgraph of $G$.
\end{observation}
\begin{proof}
Let $W$ be a TSP walk of length ${\rm tsp}(G)$, and
let $H$ be the multigraph on the same vertex set as $G$ such that
each edge $e$ of $G$ is included in $H$ with multiplicity equal to the number of times that it is used by $W$.
In particular, edges not traversed by $W$ are not included in $H$ at all.
Clearly, the multigraph $H$ is connected and Eulerian,
the number of its edges is equal to the length of $W$ and
its underlying graph is a spanning subgraph of $G$.
We next establish the other inequality claimed in the statement.
Let $H$ be a connected Eulerian multigraph whose underlying graph is a spanning subgraph of $G$, and
$H$ has the smallest possible number of edges.
A closed Eulerian tour in $H$ yields a TSP walk in $G$ (just follow the tour in $G$) and
the length of this TSP walk is equal to the number of edges of $H$.
Hence, ${\rm tsp}(G)$ is at most the number of edges of $H$.
\end{proof}
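The second half of the proof extracts a closed Eulerian tour of $H$ and follows it in $G$. For concreteness, here is a minimal sketch of Hierholzer's algorithm for this extraction step (the function name and the edge-list encoding of the multigraph are our own choices, not taken from the paper):

```python
def eulerian_tour(n, edges):
    """Return a closed Eulerian tour (as a vertex sequence) of a connected
    Eulerian multigraph on vertices 0..n-1 given as an edge list
    (a parallel edge simply appears twice in the list)."""
    adj = [[] for _ in range(n)]
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    used = [False] * len(edges)
    stack, tour = [0], []
    while stack:
        v = stack[-1]
        # discard edges already traversed from their other endpoint
        while adj[v] and used[adj[v][-1][1]]:
            adj[v].pop()
        if adj[v]:
            u, i = adj[v].pop()
            used[i] = True
            stack.append(u)
        else:
            tour.append(stack.pop())
    return tour[::-1]

# a triangle with one doubled pendant edge: connected and Eulerian
walk = eulerian_tour(4, [(0, 1), (1, 2), (2, 0), (2, 3), (2, 3)])
```

Following the returned vertex sequence edge by edge in $G$ gives a TSP walk whose length is the number of edges of $H$, exactly as in the proof.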
We now explore the link between Eulerian spanning subgraphs and
the minimum length of a TSP walk further.
For a graph $F$, let $c(F)$ denote the number of non-trivial components of $F$,
i.e., components formed by two or more vertices, and
let $i(F)$ be the number of isolated vertices of $F$.
We define the \emph{excess} of a graph $F$ as
$${\rm exc}(F)=2c(F)+i(F).$$
If $G$ is a subcubic graph, we define
$${\rm minexc}(G)=\min\;\{{\rm exc}(F):\text{$F$ spanning Eulerian subgraph of $G$}\}.$$
Note that any subcubic Eulerian graph $F$ is a union of $c(F)$ cycles and $i(F)$ isolated vertices,
i.e., the spanning subgraph $F$ of a subcubic graph $G$ with ${\rm exc}(F)={\rm minexc}(G)$ must also have this structure.
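As a sanity check, ${\rm exc}(F)$ can be computed directly from this definition; the following sketch (with a hypothetical edge-list encoding of $F$ and a small union-find to count components, all our own choices) mirrors the formula above:

```python
def excess(n, edges):
    """exc(F) = 2*(non-trivial components) + (isolated vertices) for a
    graph F on vertices 0..n-1 given by its edge list."""
    parent = list(range(n))

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    for u, v in edges:
        parent[find(u)] = find(v)
    degree = [0] * n
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    roots = {find(v) for v in range(n)}
    isolated = sum(1 for v in range(n) if degree[v] == 0)
    nontrivial = len(roots) - isolated  # isolated vertices are singleton components
    return 2 * nontrivial + isolated

# a spanning subgraph consisting of a triangle and one isolated vertex
value = excess(4, [(0, 1), (1, 2), (2, 0)])
```

For the triangle together with one isolated vertex this gives $2\cdot 1+1=3$.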
The values of ${\rm minexc}(G)$ for graphs with a very simple structure are given in the next observation (note that
the condition $k_2\not=0$ implies that the $\theta$-graph is simple).
\begin{observation}
\label{obs-nontarg}
The following holds.
\begin{enumerate}
\item If $G$ is a cycle, then ${\rm minexc}(G)=2<\frac{n(G)+n_2(G)}{4}+1$.
\item If $G=K_4$, then ${\rm minexc}(G)=2=\frac{n(G)+n_2(G)}{4}+1$.
\item If $G$ is a $\theta$-graph with $k_1$, $k_2$ and $k_3$ vertices of degree two on the paths joining its two
vertices of degree three, where $k_1\le k_2\le k_3$ and $k_2\not=0$, then ${\rm minexc}(G)=2+k_1\le\frac{n(G)+n_2(G)}{4}+1$.
\end{enumerate}
\end{observation}
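Since a spanning Eulerian subgraph of a subcubic graph is simply an edge subset in which every vertex has even degree, the values in the observation can be verified on small instances by exhaustive search. The following brute-force sketch (our own encoding, feasible only for tiny graphs and not part of any algorithm in the paper) checks a cycle of length five, $K_4$, and the $\theta$-graph with $k_1=k_2=k_3=1$:

```python
from itertools import combinations

def minexc(n, edges):
    """Minimum excess over all spanning Eulerian subgraphs (edge subsets
    in which every vertex has even degree) of a graph on vertices 0..n-1."""
    best = None
    for k in range(len(edges) + 1):
        for subset in combinations(edges, k):
            degree = [0] * n
            for u, v in subset:
                degree[u] += 1
                degree[v] += 1
            if any(d % 2 for d in degree):
                continue  # not Eulerian
            # count components of the subgraph with union-find
            parent = list(range(n))

            def find(v):
                while parent[v] != v:
                    parent[v] = parent[parent[v]]
                    v = parent[v]
                return v

            for u, v in subset:
                parent[find(u)] = find(v)
            roots = {find(v) for v in range(n)}
            isolated = sum(1 for v in range(n) if degree[v] == 0)
            exc = 2 * (len(roots) - isolated) + isolated
            best = exc if best is None else min(best, exc)
    return best

cycle5 = [(i, (i + 1) % 5) for i in range(5)]
k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
# theta-graph with one internal vertex on each of the three paths
theta = [(0, 2), (2, 1), (0, 3), (3, 1), (0, 4), (4, 1)]
```

The search confirms ${\rm minexc}=2$ for the cycle and $K_4$, and ${\rm minexc}=2+k_1=3$ for this $\theta$-graph.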
We next relate the quantity ${\rm minexc}(G)$ to the length of the shortest TSP walk in $G$.
\begin{observation}\label{obs-exc}
Let $G$ be a connected subcubic $n$-vertex graph, and
let $F$ be a spanning Eulerian subgraph of $G$.
There exists a polynomial-time algorithm that finds a TSP walk of length $n-2+{\rm exc}(F)$.
In addition, the minimum length of a TSP walk in $G$ is equal to
$${\rm tsp}(G)=n-2+{\rm minexc}(G)\;.$$
\end{observation}
\begin{proof}
Let $F$ be a spanning Eulerian subgraph of $G$.
We aim to construct a TSP walk of length $n-2+{\rm exc}(F)$.
The subgraph $F$ has $c(F)+i(F)$ components.
Since $F$ is subcubic, each of the $c(F)$ non-trivial components of $F$ is a cycle,
which implies that $F$ has $n-i(F)$ edges.
Since $G$ is connected, there exists a subset $S$ of the edges of $G$ such that $|S|=c(F)+i(F)-1$ and
$F$ together with the edges of $S$ is connected.
Clearly, such a subset $S$ can be found in linear time.
Let $H$ be the multigraph obtained from $F$ by adding each edge of $S$ with multiplicity two.
Since $H$ is a connected Eulerian multigraph whose underlying graph is a spanning subgraph of $G$,
the argument from the proof of Observation~\ref{obs-ep} yields a TSP walk of length
$$|E(H)|=|E(F)|+2|S|=n-i(F)+2(c(F)+i(F)-1)=n-2+{\rm exc}(F),$$
which can be found in linear time.
In particular, it holds that ${\rm tsp}(G)\le n-2+{\rm exc}(F)$.
Since the choice of $F$ was arbitrary,
we conclude that ${\rm tsp}(G)\le n-2+{\rm minexc}(G)$.
To finish the proof, we need to show that $n-2+{\rm minexc}(G)\le{\rm tsp}(G)$.
By Observation~\ref{obs-ep}, there exists a connected Eulerian multigraph $H$ with $|E(H)|={\rm tsp}(G)$
such that its underlying graph is a spanning subgraph of $G$.
By the minimality of $|E(H)|$, every edge of $H$ has multiplicity at most two (otherwise,
we can decrease its multiplicity by $2$ while keeping the multigraph Eulerian and connected).
Similarly, removing any pair of parallel edges of $H$ disconnects $H$ (otherwise,
the resulting multigraph would still be Eulerian and connected, contradicting the minimality of $|E(H)|$),
i.e., the edge in the underlying graph of $H$ corresponding to a pair of parallel edges is a cut-edge.
Let $F$ be the graph obtained from $H$ by removing all the pairs of parallel edges.
The number of components of $F$ is equal to
$$c(F)+i(F)=\frac{|E(H)|-|E(F)|}{2}+1.$$
Since $F$ is subcubic, it is a union of $c(F)$ cycles and $i(F)$ isolated vertices,
which implies that $|E(F)|=n-i(F)$.
Consequently, we get that
$$c(F)+i(F)=\frac{|E(H)|-(n-i(F))}{2}+1,$$
which yields the desired inequality
$$n-2+{\rm minexc}(G)\le n-2+{\rm exc}(F)=n-2+2c(F)+i(F)=|E(H)|={\rm tsp}(G).$$
\end{proof}
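The identity can be checked mechanically on a small example. For the $\theta$-graph with $k_1=k_2=k_3=1$, Observation~\ref{obs-nontarg} gives ${\rm minexc}(G)=2+k_1=3$, so the shortest TSP walk should have length $n-2+3=6$. The following brute force over edge multiplicities in $\{0,1,2\}$ (sufficient by the minimality argument in the proof above; the encoding and function name are our own) confirms this via the characterization of Observation~\ref{obs-ep}:

```python
from itertools import product

def tsp_length(n, edges):
    """Shortest TSP walk length: minimum edge count of a connected
    spanning Eulerian multigraph using each edge 0, 1 or 2 times
    (higher multiplicities are never needed, as argued in the proof)."""
    best = None
    for mult in product(range(3), repeat=len(edges)):
        degree = [0] * n
        parent = list(range(n))

        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v

        for (u, v), m in zip(edges, mult):
            degree[u] += m
            degree[v] += m
            if m:
                parent[find(u)] = find(v)
        if any(d % 2 for d in degree):
            continue  # not Eulerian
        if len({find(v) for v in range(n)}) != 1:
            continue  # not connected and spanning
        size = sum(mult)
        best = size if best is None else min(best, size)
    return best

# theta-graph with one internal vertex on each of the three paths
theta = [(0, 2), (2, 1), (0, 3), (3, 1), (0, 4), (4, 1)]
length = tsp_length(5, theta)
```

Indeed, the optimum here consists of a $4$-cycle through two of the paths plus a doubled edge reaching the remaining internal vertex, giving $6=n-2+{\rm minexc}(G)$ edges.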
\section{Reductions}\label{sec-redu}
In this section, we present a way of reducing a 2-connected subcubic graph to a smaller one such that
a spanning Eulerian subgraph of the smaller graph yields a spanning Eulerian subgraph of the original graph
with few edges.
We now define this process more formally.
For subcubic graphs $G$ and $G'$, let $$\delta(G,G')=(n(G)+n_2(G))-(n(G')+n_2(G'))\;.$$
We say that a 2-connected subcubic graph $G'$ is a \emph{reduction} of a 2-connected subcubic graph $G$
if $n(G')<n(G)$, $\delta(G,G')\ge 0$, and there exists a linear-time algorithm that
turns any spanning Eulerian subgraph $F'$ of $G'$ into a spanning Eulerian subgraph $F$ of $G$ satisfying
\begin{equation}\label{eq-redu}
{\rm exc}(F)\le {\rm exc}(F')+\tfrac{\delta(G,G')}{4}.
\end{equation}
For the proof of our main result,
it would be enough to prove the lemmas in this section with $\frac{1}{4}$ replaced by $\frac{2}{7}$ in (\ref{eq-redu}).
However, this would not simplify most of our arguments and
we believe that the stronger form of (\ref{eq-redu}) can be useful in an eventual proof of Conjecture~\ref{conj-main}.
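To illustrate the intended use of (\ref{eq-redu}), suppose (hypothetically) that reductions could be applied repeatedly,
producing a sequence $G=G_0,G_1,\ldots,G_k$ of $2$-connected subcubic graphs in which $G_k$ is basic.
Since the quantities $\delta(G_{i-1},G_i)$ telescope, chaining (\ref{eq-redu}) along the sequence yields
$${\rm minexc}(G)\le{\rm minexc}(G_k)+\frac{1}{4}\sum_{i=1}^{k}\delta(G_{i-1},G_i)={\rm minexc}(G_k)+\frac{(n(G)+n_2(G))-(n(G_k)+n_2(G_k))}{4}\;.$$
Since Observation~\ref{obs-nontarg} bounds ${\rm minexc}(G_k)$ by $\frac{n(G_k)+n_2(G_k)}{4}+1$ for basic graphs,
this would give ${\rm minexc}(G)\le\frac{n(G)+n_2(G)}{4}+1$ and thus, by Observation~\ref{obs-exc},
exactly the bound of Conjecture~\ref{conj-main}.
This sketch is only meant to indicate the role of the constant $\frac{1}{4}$,
as we do not establish the existence of such a reduction for every non-basic graph.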
The reductions that we consider involve altering a subgraph $K$ of a graph $G$ such that
$K$ has some additional specific properties. This subgraph sometimes needs to be provided as
a part of an input of an algorithm that constructs $G'$.
We say that a reduction is a \emph{linear-time reduction with respect to a subgraph $K$}
if there exists a linear-time algorithm that transforms $G$ to $G'$ given $G$ and a subgraph $K$ with the specific properties.
We will say that a reduction is a \emph{linear-time reduction}
if there exists a linear-time algorithm that both finds a suitable subgraph $K$ and performs the reduction.
If a graph $G$ admits such a reduction, we will say that $G$ has a linear-time reduction or
that $G$ has a linear-time reduction with respect to a subgraph $K$.
The reductions that we present are intended to be applied to an input subcubic $2$-connected graph
until the resulting graph is basic or has a special structure.
A subcubic $2$-connected graph is \emph{basic} if it is a cycle, a $\theta$-graph, or $K_4$.
A subcubic $2$-connected graph that is not basic will be referred to as \emph{non-basic}.
We say that a $2$-connected subcubic graph $G$ is a \emph{proper} graph
if $G$ is non-basic, has no cycle with at most four vertices of degree three, and
has no cycle of length five or six with five vertices of degree three.
In Subsection~\ref{sub-proper},
we will show that every non-basic $2$-connected subcubic graph that is not proper has a linear-time reduction.
In addition to proper $2$-connected subcubic graphs,
we will also consider clean $2$-connected subcubic graphs.
This definition is more complex and we postpone it to Subsection~\ref{sub-clean}.
\subsection{Cycles with few vertices of degree three}
\label{sub-proper}
In this subsection, we show that a non-basic $2$-connected subcubic graph that is not proper
has a linear-time reduction, i.e.,
every graph containing a cycle with at most four vertices of degree three or
a cycle of length five or six containing five vertices of degree three
has a linear-time reduction.
We present the reductions in Lemmas~\ref{lemma-c2e}--\ref{lemma-no5} assuming that such a cycle is given.
We remark that such a cycle can be found in linear time (if it exists) using the following argument:
a subcubic $n$-vertex graph $G$ has at most $3\cdot 2^{k-1}n$ cycles containing at most $k$ vertices of degree three.
Indeed, suppressing all vertices of degree two in $G$ results in a cubic multigraph,
its cycles of length at most $k$ are in one-to-one correspondence with cycles with at most $k$ vertices of degree three in $G$, and
it is possible to list all cycles of length at most $k$ in a cubic multigraph in linear time.
The fact that we can list all such cycles in linear time is important for Lemmas~\ref{lemma-no4} and~\ref{lemma-no5}
where we need to choose a cycle with at most $k$ vertices of degree three with some additional properties.
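The suppression step used in this counting argument can be sketched as follows (a minimal implementation over a hypothetical edge-list multigraph representation; it assumes that suppressing a degree-two vertex never creates a loop, which holds in our setting since a $2$-connected graph other than a cycle contains no cycle whose vertices, except possibly one, all have degree two):

```python
def suppress_degree_two(vertices, edges):
    """Suppress all degree-two vertices of a multigraph given as an edge
    list: repeatedly replace a degree-two vertex and its two incident
    edges by a single edge joining its two neighbours (this may create
    parallel edges); assumes no loop is ever created."""
    vertices = set(vertices)
    edges = [tuple(e) for e in edges]
    while True:
        degree = {v: 0 for v in vertices}
        for u, v in edges:
            degree[u] += 1
            degree[v] += 1
        target = next((v for v in vertices if degree[v] == 2), None)
        if target is None:
            return vertices, edges
        # the two neighbours of the suppressed vertex, in edge-list order
        neighbours = [u for e in edges if target in e for u in e if u != target]
        edges = [e for e in edges if target not in e] + [tuple(neighbours)]
        vertices.remove(target)

# the theta-graph with one internal vertex on each of its three paths
theta = [(0, 2), (2, 1), (0, 3), (3, 1), (0, 4), (4, 1)]
cubic_vertices, cubic_edges = suppress_degree_two(range(5), theta)
```

For this $\theta$-graph, the procedure yields the cubic multigraph consisting of two vertices joined by three parallel edges.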
\begin{lemma}\label{lemma-c2e}
Every non-basic $2$-connected subcubic graph $G$ that contains a cycle $K$ with at most two vertices of degree three
has a linear-time reduction.
\end{lemma}
\begin{proof}
Since $G$ is neither a cycle nor a $\theta$-graph, it follows that $V(G)\neq V(K)$.
Since $G$ is 2-connected, $K$ contains exactly two vertices of degree three, say $v_1$ and $v_2$.
Let $x_1$ and $x_2$ be their neighbors outside of $K$, and
let $k_1$ and $k_2$ be the numbers of internal vertices of the two paths between $v_1$ and $v_2$ in $K$.
We can assume that $k_1\le k_2$ by symmetry.
If $x_1=x_2$, then either $G$ is a $\theta$-graph or $x_1$ is incident with a cut-edge;
since neither of these is possible, it holds that $x_1\neq x_2$.
Suppose that $k_1=0$ and $k_2=1$, i.e., $K$ is a triangle.
Let $z$ be the vertex of $K$ distinct from $v_1$ and $v_2$, and let $G'=G-z$.
Note that $G'$ is a 2-connected subcubic graph.
We claim that $G'$ is a reduction of $G$.
Since $n(G')=n(G)-1$ and $n_2(G')=n_2(G)+1$, it follows that $\delta(G,G')=0$.
Consider a spanning Eulerian subgraph $F'$ of $G'$.
If $F'$ contains the edge $v_1v_2$,
then let $F$ be the spanning Eulerian subgraph of $G$ obtained from $F'$ by removing the edge $v_1v_2$ and adding the path $v_1zv_2$.
If $F'$ does not contain the edge $v_1v_2$, i.e., $v_1$ and $v_2$ are isolated vertices of $F'$,
then let $F$ be the spanning Eulerian subgraph of $G$ obtained from $F'$ by adding the cycle $K$.
It holds that ${\rm exc}(F)={\rm exc}(F')$ in both cases.
It remains to consider the case $k_1+k_2\ge 2$.
Let $G'$ be obtained from $G-V(K)$ by adding a path $x_1wx_2$ where $w$ is a new vertex;
note that $w$ has degree two in $G'$ and $\delta(G,G')=2(k_1+k_2)$.
Since $x_1\neq x_2$, $G'$ is simple.
We show that $G'$ is a reduction of $G$.
Let $F'$ be a spanning Eulerian subgraph of $G'$;
we will construct a spanning Eulerian subgraph $F$ of $G$.
If $F'$ contains the path $x_1wx_2$,
then let $F$ be obtained from $F'-w$ by adding the vertices of $K$ and the edges $x_1v_1$, $x_2v_2$, and
the path in $K$ between $v_1$ and $v_2$ with $k_2$ internal vertices.
Note that the $k_1$ vertices of the other path between $v_1$ and $v_2$ in $K$ are isolated in $F$.
Observe that
$${\rm exc}(F)={\rm exc}(F')+k_1\le {\rm exc}(F')+\frac{k_1+k_2}{2},$$
since $k_1\le k_2$.
If $w$ is an isolated vertex of $F'$,
then let $F$ be obtained from $F'-w$ by adding the cycle $K$.
In this case, we get that
$${\rm exc}(F)={\rm exc}(F')+1\le {\rm exc}(F')+\frac{k_1+k_2}{2}.$$
Since it holds that ${\rm exc}(F)\le{\rm exc}(F')+\frac{1}{4}\delta(G,G')$ in both cases,
the proof of the lemma is finished.
\end{proof}
In the next lemma, we consider cycles containing three vertices of degree three.
\begin{lemma}\label{lemma-no3}
Every non-basic $2$-connected subcubic graph $G$ that contains a cycle $K$ with three vertices of degree three
has a linear-time reduction.
\end{lemma}
\begin{proof}
Let $v_1$, $v_2$ and $v_3$ be the three vertices of degree three of $K$.
Since $G$ is 2-connected, each of the vertices $v_1$, $v_2$ and $v_3$ has a neighbor outside the cycle $K$;
let $x_i$ be such a neighbor of the vertex $v_i$, $i\in\{1,2,3\}$.
Further, let $P_i$ denote the path between $v_{i+1}$ and $v_{i+2}$ in $K$ that does not contain $v_i$ for $i\in\{1,2,3\}$ (indices
are taken modulo three), and let $k_i$ be the number of its internal vertices.
By symmetry, we can assume that $k_1\le k_2\le k_3$.
Since $G$ is not basic, in particular, $G\neq K_4$,
we can assume that $x_2\neq x_3$ if $k_1=k_2=k_3=0$.
Let $G'$ be obtained from $G-V(K)$ by adding a vertex $z$ and
paths $Q_1$, $Q_2$ and $Q_3$ joining $z$ with $x_1$, $x_2$, and $x_3$, respectively,
such that $Q_1$ has $k_1+1$ internal vertices, $Q_2$ has $k_2$ internal vertices, and $Q_3$ has $k_3$ internal vertices.
Note that the graph $G'$ is simple since if $k_2=k_3=0$, then $x_2\neq x_3$.
Also note that $\delta(G,G')=0$.
We now show that $G'$ is a reduction of $G$.
Let $F'$ be a spanning Eulerian subgraph of $G'$.
If the vertex $z$ is isolated in $F'$, then let $F$ be obtained from $F'$ by removing $z$ and
the internal vertices of $Q_1$, $Q_2$, and $Q_3$ (all of these are isolated vertices in $F'$) and adding the cycle $K$.
Observe that $c(F)=c(F')+1$ and $i(F)=i(F')-2-k_1-k_2-k_3$ in this case.
If $F'$ contains paths $Q_i$ and $Q_j$, $i\neq j$, $i,j\in\{1,2,3\}$,
then let $F$ be obtained from $F'$ by removing the vertex $z$ and the internal vertices of $Q_1$, $Q_2$, and $Q_3$,
adding the vertices of $K$, edges $x_iv_i$ and $x_jv_j$, and the edges of the paths $P_i$ and $P_j$.
We have $c(F)=c(F')$ and $i(F)\le i(F')$ in this case.
In both cases, it holds ${\rm exc}(F)\le{\rm exc}(F')$, which finishes the proof of the lemma.
\end{proof}
In the final two lemmas of this subsection,
we will present several possible reductions of a configuration $K$ and
choose the one that is $2$-connected.
Since it is possible to test $2$-connectivity of a graph in linear time,
the reductions presented in Lemmas~\ref{lemma-no4} and~\ref{lemma-no5} are linear-time.
\begin{lemma}\label{lemma-no4}
Every non-basic $2$-connected subcubic graph $G$ that contains a cycle $K$ with four vertices of degree three
has a linear-time reduction.
\end{lemma}
\begin{proof}
Choose a shortest cycle $K$ of $G$ that contains four vertices of degree three, and
let $v_1,\ldots,v_4$ be these vertices listed in the cyclic order around $K$.
Since $K$ is the shortest possible and
all cycles in $G$ contain at least four vertices of degree three by Lemmas~\ref{lemma-c2e} and~\ref{lemma-no3},
every vertex $v_i$ has a neighbor $x_i$ outside the cycle $K$, $i\in\{1,\ldots,4\}$.
In addition, it holds that $x_i\neq x_{i+1}$ (indices are taken modulo four).
Let $P_i$ denote the path between $v_i$ and $v_{i+1}$ in $K$ (again, indices are taken modulo four), and
let $k_i$ be the number of internal vertices of $P_i$.
Finally, let $k=k_1+\cdots+k_4$.
We present two possible reductions parameterized by $j\in\{1,2\}$.
Let $G_j$ be the graph obtained from $G$ by removing the edges and internal vertices of the paths $P_j$ and $P_{j+2}$.
Suppose that neither $G_1$ nor $G_2$ is 2-connected.
In particular, the vertices of $G_1$ can be partitioned into non-empty sets $A$ and $B$ such that
there is at most one edge between $A$ and $B$ in $G_1$.
If $x_1\in A$ and $x_4\in B$, then this edge is contained in $P_4+x_1v_1+x_4v_4$;
by symmetry, we can assume that $x_2,x_3\in B$,
which yields that the edge $v_1x_1$ is a cut-edge in $G$, which is impossible.
Hence, it must hold that $x_1,x_4\in A$ and $x_2,x_3\in B$.
Since $G$ is 2-connected, there exists a path between $x_1$ and $x_4$ using only the vertices of $A\setminus V(K)$ and
a path between $x_2$ and $x_3$ using only the vertices of $B\setminus V(K)$.
The symmetric argument applied to $G_2$ yields the existence of such paths
between $x_1$ and $x_2$, and between $x_3$ and $x_4$, which is impossible
since there is at most one edge between $A$ and $B$.
It follows that at least one of the graphs $G_1$ and $G_2$ is 2-connected.
By symmetry, we assume that $G_1$ is $2$-connected in the rest of the proof.
We first consider the case that $k_1=k_3=0$.
Let $G'$ be the graph obtained from $G-V(K)$ by adding paths $x_1z_1x_4$ and $x_2z_2x_3$,
where $z_1$ and $z_2$ are new vertices, each having degree two in $G'$.
Note that $G'$ is isomorphic to a graph obtained from $G_1$ by suppressing some vertices of degree two;
in particular, $G'$ is 2-connected.
Also note that $\delta(G,G')=2k$.
We next show that $G'$ is a reduction of $G$.
Consider a spanning Eulerian subgraph $F'$ of $G'$.
We distinguish several cases based on whether the vertices $z_1$ and $z_2$ are isolated in $F'$.
\begin{itemize}
\item If both vertices $z_1$ and $z_2$ are isolated in $F'$,
then let $F$ be obtained from $F'-\{z_1,z_2\}$ by adding the cycle $K$.
Note that ${\rm exc}(F)={\rm exc}(F')$ in this case.
\item Assume that $z_1$ is not isolated, i.e., the edges $x_1z_1$ and $x_4z_1$ are contained in $F'$,
but $z_2$ is isolated in $F'$.
We consider two spanning Eulerian subgraphs $F_1$ and $F_2$ of $G$.
The subgraph $F_1$ is obtained from $F'-\{z_1,z_2\}$ by adding the vertices of $K$,
the edges $x_1v_1$ and $x_4v_4$, and edges of the path $P_4$.
The subgraph $F_2$ is obtained from $F'-\{z_1,z_2\}$ by adding the vertices of $K$,
the edges $x_1v_1$ and $x_4v_4$, and the edges of the paths $P_1$, $P_2$, and $P_3$.
Note that ${\rm exc}(F_1)={\rm exc}(F')+k_1+k_2+k_3+1={\rm exc}(F')+k_2+1$ and ${\rm exc}(F_2)={\rm exc}(F')+k_4-1$.
Let $F$ be one of the subgraphs $F_1$ and $F_2$ with the smaller excess.
Since ${\rm exc}(F_1)+{\rm exc}(F_2)=2{\rm exc}(F')+k$, we get that ${\rm exc}(F)\le{\rm exc}(F')+k/2$.
\item The case that $z_1$ is isolated in $F'$ but $z_2$ is not is symmetric to the case that we have just analyzed.
\item If neither $z_1$ nor $z_2$ is isolated in $F'$,
then let $F$ be obtained from $F'-\{z_1,z_2\}$ by adding the vertices of $K$,
the edges $x_iv_i$ for $i\in\{1,\ldots, 4\}$, and
the edges of the paths $P_2$ and $P_4$.
Since $k_1=k_3=0$, we get that ${\rm exc}(F)={\rm exc}(F')$.
\end{itemize}
In all the cases we have found a spanning Eulerian subgraph $F$ of $G$ with ${\rm exc}(F)\le{\rm exc}(F')+k/2={\rm exc}(F')+\delta(G,G')/4$.
We can assume that $k_1+k_3\ge 1$ in the rest of the proof.
Note that this implies that neither $x_1x_4$ nor $x_2x_3$ is an edge of $G$ (otherwise,
$G$ would contain a cycle with at most four vertices of degree three that is shorter than $K$).
We now distinguish two cases: $k\ge 2$ and $k=1$.
We first consider the case that $k\ge 2$.
Let $G'$ be the graph obtained from $G-V(K)$ by adding edges $x_1x_4$ and $x_2x_3$.
Since $G'$ can be obtained from $G_1$ by suppressing vertices of degree two,
it follows that $G'$ is 2-connected.
Also note that $G'$ is simple since neither $x_1x_4$ nor $x_2x_3$ is an edge of $G$, and that $\delta(G,G')=2k+4$.
We next verify that $G'$ is a reduction of $G$.
To do so, consider a spanning Eulerian subgraph $F'$ of $G'$ and
distinguish four cases based on the inclusion of the edges $x_1x_4$ and $x_2x_3$ in $F'$
to construct a spanning Eulerian subgraph $F$ of $G$.
\begin{itemize}
\item If neither the edge $x_1x_4$ nor the edge $x_2x_3$ is in $F'$,
then let $F$ be obtained from $F'$ by adding the cycle $K$.
Note that ${\rm exc}(F)={\rm exc}(F')+2$.
\item If the edge $x_1x_4$ is in $F'$ but the edge $x_2x_3$ is not,
then we consider two spanning Eulerian subgraphs $F_1$ and $F_2$ of $G$, and
choose $F$ to be the one with the smaller excess.
The subgraph $F_1$ is obtained from $F'$ by removing the edge $x_1x_4$ and
by adding the vertices of $K$ and the edges $x_1v_1$ and $x_4v_4$, and the edges of the path $P_4$.
The subgraph $F_2$ is obtained from $F'$ by removing the edge $x_1x_4$ and
by adding the vertices of $K$ and the edges $x_1v_1$ and $x_4v_4$, and the edges of the paths $P_1$, $P_2$, and $P_3$.
Note that ${\rm exc}(F_1)={\rm exc}(F')+k_1+k_2+k_3+2$ and ${\rm exc}(F_2)={\rm exc}(F')+k_4$.
Hence, if $F$ is the one of the subgraphs $F_1$ and $F_2$ with the smaller excess,
then ${\rm exc}(F)\le{\rm exc}(F')+\frac{k}{2}+1$.
\item The case that the edge $x_1x_4$ is not contained in $F'$ but the edge $x_2x_3$ is
is symmetric to the case that we have just analyzed.
\item The final case is that both the edges $x_1x_4$ and $x_2x_3$ are in $F'$.
We again construct two spanning Eulerian subgraphs $F_1$ and $F_2$ of $G$, and
choose $F$ to be the one with the smaller excess.
We start with removing the edges $x_1x_4$ and $x_2x_3$ from $F'$ and
adding the vertices of $K$ together with the edges $x_iv_i$ for $i\in\{1,\ldots, 4\}$.
To create the subgraph $F_1$, we also add the edges of the paths $P_2$ and $P_4$, and
to create the subgraph $F_2$, we add the edges of the paths $P_1$ and $P_3$.
Note that the latter can result in either creating or merging two cycles of $F'$,
in particular, $c(F_2)\le c(F')+1$.
Hence, we get that ${\rm exc}(F_1)={\rm exc}(F')+k_1+k_3$ and ${\rm exc}(F_2)\le{\rm exc}(F')+k_2+k_4+2$.
Since $F$ is the one of the subgraphs $F_1$ and $F_2$ with the smaller excess,
we get that ${\rm exc}(F)\le{\rm exc}(F')+\frac{k}{2}+1$.
\end{itemize}
Since $k\ge 2$,
the excess ${\rm exc}(F)$ of the spanning Eulerian subgraph $F$ of $G$ is at most ${\rm exc}(F')+\frac{k}{2}+1={\rm exc}(F')+\delta(G,G')/4$
in all the four cases.
The final case to consider is that $k=1$.
Since $k_1+k_3\ge 1$, we can assume by symmetry that $k_1=1$ and $k_2=k_3=k_4=0$.
In this case, we consider the graph $G'$ obtained from $G-V(K)$ by adding the edge $x_1x_4$ and a path $x_2zx_3$,
where $z$ is a new vertex of degree two.
Again, $G'$ is isomorphic to a graph obtained from $G_1$ by suppressing some vertices of degree two,
in particular, $G'$ is 2-connected. Also note that $\delta(G,G')=4$.
To show that $G'$ is a reduction of $G$,
one considers a spanning Eulerian subgraph $F'$ of $G'$ and distinguishes four cases based on whether
the edge $x_1x_4$ and the path $x_2zx_3$ are contained in $F'$.
If neither of them is, we construct a spanning Eulerian subgraph $F$ of $G$ by removing the vertex $z$ and
including the cycle $K$; note that ${\rm exc}(F)={\rm exc}(F')+1$ in this case.
If one of them is contained in $F'$ but the other is not,
we construct a spanning Eulerian subgraph $F$ by removing the edge $x_1x_4$ and the edges of the path $x_2zx_3$,
adding the vertices of $K$ together with the edges $x_iv_i$ for those $i\in\{1,\ldots, 4\}$ such that the degree of $x_i$ is odd and
the edges of three of the paths $P_1$, $P_2$, $P_3$ and $P_4$ in a way that $F$ is an Eulerian subgraph of $G$.
Note that ${\rm exc}(F)\le{\rm exc}(F')$ (the inequality is strict if $z$ is an isolated vertex in $F'$).
Finally, if both the edge $x_1x_4$ and the path $x_2zx_3$ are contained in $F'$,
we construct $F$ by removing the edge $x_1x_4$ and the edges of the path $x_2zx_3$, and
by adding the vertices of $K$ together with the edges $x_iv_i$ for $i\in\{1,\ldots, 4\}$ and
the edges of the paths $P_2$ and $P_4$.
Note that ${\rm exc}(F)={\rm exc}(F')+1$ in this case since the only internal vertex of $P_1$ is isolated in $F$.
Hence, we have constructed a spanning Eulerian subgraph $F$ of $G$
with ${\rm exc}(F)\le{\rm exc}(F')+1={\rm exc}(F')+\delta(G,G')/4$ in each of the cases.
\end{proof}
In the final lemma of this subsection, we deal with cycles of length five or six that
contain five vertices of degree three.
\begin{lemma}\label{lemma-no5}
Every non-basic $2$-connected subcubic graph $G$ that contains a cycle $K$ of length at most $6$ with five vertices of degree three
has a linear-time reduction.
\end{lemma}
\begin{proof}
By Lemmas~\ref{lemma-c2e}--\ref{lemma-no4},
we can assume that every cycle of $G$ contains at least five vertices of degree three.
Let $K$ be a cycle of length five or six that contains five vertices of degree three.
If $G$ contains such cycles of both lengths five and six, choose $K$ to be a cycle of length five.
By symmetry, we can assume that the vertices $v_1$, \ldots, $v_5$ of degree three of $K$ form a path $v_1v_2v_3v_4v_5$;
if $K$ has length five, then $v_5v_1$ is an edge, and
if $K$ has length six, then there is a vertex $z$ of degree two such that $v_5zv_1$ is a path in $G$.
Let $x_i$ be the neighbor of $v_i$ outside the cycle $K$ for $i\in\{1,\ldots,5\}$.
The vertices $x_1,\ldots,x_5$ are pairwise distinct (otherwise, $G$ would contain a cycle with at most four vertices of degree three).
Since $G$ has no cycle with at most four vertices of degree three,
$G$ does not contain the edge $x_ix_{i+1}$ for any $i\in\{1,\ldots,5\}$ (indices are taken modulo five).
Let $G_1$ be the graph obtained from $G-V(K)$ by adding the edge $x_5x_1$ and
a new vertex $w$ that is adjacent to the vertices $x_2$, $x_3$ and $x_4$.
Note that $\delta(G,G_1)\in\{4,6\}$.
If $F'$ is a spanning Eulerian subgraph of $G_1$,
then there exists a spanning Eulerian subgraph $F$ of $G$ with $c(F)=c(F')$ and $i(F)\le i(F')+1$,
i.e., with ${\rm exc}(F)\le{\rm exc}(F')+1$.
Hence, if $G_1$ is $2$-connected, it is a reduction of $G$.
Suppose that $G_1$ is not 2-connected.
Hence, the vertices of $G_1$ can be partitioned into non-empty sets $A$ and $B$ such that
there is at most one edge between $A$ and $B$ in $G_1$.
Since the original graph $G$ is 2-connected, both $x_1$ and $x_5$ belong to the same set, say $A$, and
the vertex $w$ to the other set, i.e., the set $B$.
In addition, at most one of the neighbors of $w$ in $G_1$ belongs to $A$ (there is at most one edge between $A$ and $B$) and
$G_1$ contains at most one of the edges $x_1x_4$ and $x_2x_5$ (for the same reason).
By symmetry, we assume that $G_1$ does not contain the edge $x_2x_5$.
If $x_1x_4$ is an edge of $G_1$, then either the edge $wx_4$ or the edge $x_1x_4$ is the edge between $A$ and $B$ and
the vertex $x_2$ must belong to $B$.
If $x_1x_4$ is not an edge of $G_1$, then at least one of the vertices $x_2$ and $x_4$ is in $B$ and
we can assume by symmetry that this vertex is $x_2$.
In either case, we have arrived at the conclusion that $x_2$ is in $B$ and $x_2x_5$ is not an edge of $G$.
Since there is at most one edge between $A$ and $B$ and the original graph is $2$-connected,
there exist disjoint paths $Q_1$ and $Q_2$,
where $Q_1$ connects the vertices $x_1$ and $x_5$ (and is fully contained in $A$) and
$Q_2$ connects the vertex $x_2$ with the vertex $x_j$ for $j=3$ or $j=4$ (and this path is fully contained in $B$).
Let $G_2$ be the graph obtained from $G-V(K)$ by adding the edge $x_2x_5$ and
a vertex $y$ that is adjacent to the vertices $x_1$, $x_3$ and $x_4$.
Note that $G_2$ is simple since $x_2x_5$ is not an edge of $G$.
If the length of $K$ in $G$ is six, we additionally subdivide the edge $x_3y$.
Since $\delta(G,G_2)=4$ and every spanning Eulerian subgraph $F'$ of $G_2$
can be transformed to a spanning Eulerian subgraph $F$ of $G$ with ${\rm exc}(F)\le {\rm exc}(F')+1$,
we get that $G_2$ is a reduction of $G$ unless $G_2$ is not $2$-connected.
We show that $G_2$ must be $2$-connected in the rest of the proof.
Suppose that $G_2$ is not $2$-connected,
i.e., the vertices of $G_2$ can be partitioned into non-empty sets $C$ and $D$ such that
there is at most one edge between $C$ and $D$ in $G_2$.
The path $Q_1$ from $x_1$ to $x_5$, the edge $x_5x_2$, the path $Q_2$ from $x_2$ to $x_j$ and
the path from $x_j$ to $x_1$ through $y$ form a cycle in $G_2$.
This implies that all the four vertices $x_1$, $x_2$, $x_j$ and $x_5$ are in the same set and
we can assume by symmetry that they are in the set $C$.
Consequently, the remaining vertex $x_{7-j}$ must be in $D$ (note that $7-j$ is either $3$ or $4$),
which implies that the edge $x_{7-j}v_{7-j}$ is a cut-edge in $G$, which is impossible.
\end{proof}
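The excess bookkeeping in these proofs compares spanning subgraphs through the quantities $c(F)$ and $i(F)$. The following minimal sketch (the representation and the function name are ours; we assume that $c(F)$ counts the components containing at least one edge and $i(F)$ counts the isolated vertices, which matches the computations in the surrounding proofs) illustrates the computation:

```python
def components_and_isolated(vertices, edges):
    """Return (c, i) for the spanning subgraph (vertices, edges):
    c is the number of components containing at least one edge and
    i is the number of isolated vertices."""
    adj = {v: set() for v in vertices}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, c, i = set(), 0, 0
    for v in adj:
        if v in seen:
            continue
        if not adj[v]:
            i += 1          # trivial component: an isolated vertex
            continue
        c += 1              # a new non-trivial component found
        stack = [v]
        while stack:
            x = stack.pop()
            if x not in seen:
                seen.add(x)
                stack.extend(adj[x])
    return c, i
```

For instance, a triangle on $\{0,1,2\}$ together with three isolated vertices yields $c=1$ and $i=3$, so modifications such as "add the cycle $K$ and delete three isolated vertices" change $c$ by $+1$ and $i$ by $-3$, as in the proofs above.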
\subsection{Cycles of length six}
\label{sub-six}
Lemmas~\ref{lemma-c2e}--\ref{lemma-no5} imply that
every non-basic $2$-connected subcubic graph $G$ that is not proper has a linear-time reduction.
In this subsection, we focus on proper $2$-connected subcubic graphs that
contain a cycle of length six that satisfies some additional assumptions.
Note that all six vertices of such a cycle must have degree three,
each of them has a neighbor not contained in the cycle, and
these neighbors are pairwise distinct.
In Lemmas~\ref{lemma-6-opp3}--\ref{lemma-6oppcuts} that we establish in this subsection,
we assume that a cycle $K$ with the properties stated in the lemmas is given.
The properties asserted by the lemmas can be checked in linear time.
In Lemma~\ref{lemma-no26}, this follows from the fact that every cycle of length six in a subcubic graph
can be intersected by at most a constant number of other cycles of length six.
Since all cycles of length six can be listed in linear time (see the arguments
given at the beginning of Subsection~\ref{sub-proper}),
it is possible, in quadratic time, to find a cycle of length six with the properties given in one of the lemmas or
to conclude that no such cycle exists.
\begin{lemma}
\label{lemma-6-opp3}
Let $G$ be a proper $2$-connected subcubic graph,
let $K=v_1v_2v_3v_4v_5v_6$ be a cycle of length six in $G$, and
let $x_i$ be the neighbor of $v_i$ not contained in $K$ for $i=1,\ldots, 6$.
Let $A,B$ be a partition of the vertices of $G-V(K)$ such that $x_1,x_3,x_5\in A$ and $x_2,x_4,x_6\in B$.
If $G-V(K)$ has no edge between $A$ and $B$, then $G$ has a linear-time reduction with respect to $K$.
\end{lemma}
\begin{proof}
Let $G'$ be the graph obtained from $G-V(K)$
by adding the paths $x_1z_1x_2$, $x_3z_2x_4$ and $x_5z_3x_6$,
where $z_1$, $z_2$, and $z_3$ are new vertices, each having degree two in $G'$.
Note that the graph $G'$ is simple since the vertices $x_1$, \ldots, $x_6$ are pairwise distinct as $G$ is proper.
In addition, $G'$ is $2$-connected since $G$ is $2$-connected, and $\delta(G,G')=0$.
Let $F'$ be a spanning Eulerian subgraph of $G'$.
We show that $F'$ can be transformed to a spanning Eulerian subgraph $F$ of $G$ with ${\rm exc}(F)\le {\rm exc}(F')$.
Since there are no edges between $A$ and $B$,
either one or three of the vertices $z_1$, $z_2$ and $z_3$ are isolated in $F'$.
If all of the three vertices are isolated in $F'$,
then $F$ is obtained from $F'-\{z_1,z_2,z_3\}$ by adding the cycle $K$ including its edges.
Note that ${\rm exc}(F)={\rm exc}(F')-1$ in this case.
Suppose that only one of the vertices is isolated, say $z_3$.
Since there are no edges between $A$ and $B$,
the cycle of $F'$ that contains $z_1$ consists of the path $x_1z_1x_2$, a path from $x_2$ to $x_4$ inside $B$,
the path $x_4z_2x_3$, and a path from $x_3$ to $x_1$ inside $A$.
Let $F$ be the spanning Eulerian subgraph of $G$ obtained from $F'-\{z_1,z_2,z_3\}$
by adding the paths $x_2v_2v_3x_3$ and $x_4v_4v_5v_6v_1x_1$;
we have $c(F)=c(F')$ and $i(F)=i(F')-1$, and hence ${\rm exc}(F)={\rm exc}(F')-1$.
We conclude that $G'$ is a reduction of $G$.
\end{proof}
Note that unlike in all the other lemmas in this section,
we consider a partition of the vertices of the original graph $G$ in the next lemma
since $G\setminus E(K)$ contains all the vertices of $G$.
\begin{lemma}\label{lemma-6-ob1}
Let $G$ be a proper $2$-connected subcubic graph and let $K=v_1\ldots v_6$ be a cycle of length six in $G$.
If there exists a partition of the vertex set of $G\setminus E(K)$ into two sets $A$ and $B$ such that
$v_1,v_3\in A$, $v_2,v_4,v_5,v_6\in B$, and there is at most one edge between $A$ and $B$,
then $G$ has a linear-time reduction with respect to $K$.
\end{lemma}
\begin{proof}
Let $G_A$ and $G_B$ be the subgraphs of $G\setminus E(K)$ induced by $A$ and $B$, respectively.
Since $G$ is 2-connected, the graph $G_A$ is connected, and the graph $G_B$ has at most two components.
First suppose that $G_B$ is connected or has two components each containing two of the vertices $v_2$, $v_4$, $v_5$ and $v_6$.
Considering a spanning forest of $G_B$, we derive that
$G_B$ contains two disjoint paths whose end-vertices are the vertices $v_2$, $v_4$, $v_5$ and $v_6$.
Let $Q_1$ be a path from $v_1$ to $v_3$ with all internal vertices in $A$, and
let $Q_2$ and $Q_3$ be the paths between two disjoint pairs of the vertices $v_2$, $v_4$, $v_5$ and $v_6$ such that
all their internal vertices are in $B$.
By symmetry, we can assume that neither $Q_2$ nor $Q_3$ connects $v_5$ and $v_6$.
Suppose now that $G_B$ has two components such that
one of them contains one and the other contains three of the vertices $v_2$, $v_4$, $v_5$ and $v_6$.
Let $C$ be the former component.
If $C$ contains the vertex $v_5$,
then the edge between $A$ and $B$ joins a vertex of $A$ and a vertex of $C$, and
we can apply Lemma~\ref{lemma-6-opp3}.
Hence, we can assume that $C$ does not contain the vertex $v_5$, and
let $v_j$ be the vertex contained in $C$.
By symmetry, we can assume that $j\not=6$, i.e., $j=2$ or $j=4$.
Let $Q_1$ be a tree in $G$ such that its leaves are the vertices $v_1$, $v_3$ and $v_j$ and
all its vertices belong to $A$ or $C$, and
let $Q_2$ be a tree in $G$ such that its leaves are the vertices $v_{6-j}$, $v_5$ and $v_6$ (note that $6-j$ is $2$ or $4$) and
all its vertices belong to the component of $G_B$ different from $C$.
Let $G'$ be the graph obtained from $G\setminus E(K)$ by identifying the vertices $v_1$ and $v_5$ to a single vertex $z_{15}$,
identifying $v_2$ and $v_4$ to a single vertex $z_{24}$, and identifying $v_3$ and $v_6$ to a single vertex $z_{36}$.
The paths and trees $Q_i$, which we have constructed in the previous two paragraphs,
yield that the graph $G'$ is $2$-connected.
Note that $\delta(G,G')=0$.
We establish that $G'$ is a reduction of $G$.
Let $F'$ be a spanning Eulerian subgraph of $G'$.
If all three vertices $z_{15}$, $z_{24}$ and $z_{36}$ are isolated in $F'$,
then we can extend $F'$ by adding the cycle $K$ to obtain a spanning Eulerian subgraph $F$ of $G$ with ${\rm exc}(F)={\rm exc}(F')-1$.
If two of the vertices $z_{15}$, $z_{24}$ and $z_{36}$ are isolated in $F'$,
then we can extend $F'$ by rerouting one of its cycles through the cycle $K$
to obtain a spanning Eulerian subgraph $F$ of $G$ with ${\rm exc}(F)\in\{{\rm exc}(F')-1,{\rm exc}(F')\}$.
Finally, if one of the vertices $z_{15}$, $z_{24}$ and $z_{36}$ is an isolated vertex in $F'$,
it is possible to reroute the cycle(s) of $F'$ containing the two of the vertices $z_{15}$, $z_{24}$ and $z_{36}$
to get an Eulerian spanning subgraph $F$ such that
the number of non-trivial components of $F$ does not exceed that of $F'$ and
the same is true for the number of isolated vertices, i.e., ${\rm exc}(F)\le{\rm exc}(F')$.
Hence, we can assume that none of the vertices $z_{15}$, $z_{24}$ and $z_{36}$ is isolated in $F'$.
If the vertices $z_{15}$, $z_{24}$ and $z_{36}$ are contained in at least two different cycles of $F'$,
it is possible to complete the three paths of $F'-\{z_{15},z_{24},z_{36}\}$
to an Eulerian spanning subgraph $F$ of $G$ in a way that
there are at most two cycles of $F$ passing through the cycle $K$ and
none of the vertices of $K$ is isolated in $F$. In particular, ${\rm exc}(F)\le{\rm exc}(F')$.
Consequently, we can assume that all the vertices $z_{15}$, $z_{24}$ and $z_{36}$ are contained in the same cycle of $F'$.
Let $R$, $R'$ and $R''$ be the paths of this cycle after removing the vertices $z_{15}$, $z_{24}$ and $z_{36}$.
Observe that one of the paths is fully contained in $A$ and connects the neighbors of the vertices $v_1$ and $v_3$;
let $R$ be this path.
Since the paths $R$, $R'$ and $R''$ together with the vertices $z_{15}$, $z_{24}$ and $z_{36}$ form a cycle,
it follows that neither the path $R'$ nor the path $R''$ connects the neighbors of the vertices $v_5$ and $v_6$.
Hence, $G$ contains a cycle formed by the paths $R$, $R'$, $R''$,
the edges joining $v_1,\ldots,v_6$ to their neighbors outside $K$ and
the edges $v_1v_2$, $v_3v_4$ and $v_5v_6$.
Replacing the cycle of $F'$ containing the vertices $z_{15}$, $z_{24}$ and $z_{36}$ with this cycle
yields an Eulerian spanning subgraph $F$ of $G$ with ${\rm exc}(F)={\rm exc}(F')$.
This finishes the proof that $G'$ is a reduction of $G$.
\end{proof}
In the next two lemmas,
we show that two different types of cycles of length six that are not $\theta$-cycles can be reduced.
\begin{lemma}\label{lemma-6-ob0}
Let $G$ be a proper $2$-connected subcubic graph,
let $K=v_1v_2v_3v_4v_5v_6$ be one of its cycles of length six, and
let $x_i$ be the neighbor of $v_i$ not contained in $K$ for $i=1,\ldots,6$.
Let $A,B$ be a partition of the vertices of $G-V(K)$ such that $x_1,x_2\in A$, $x_3,x_4,x_5,x_6\in B$, and
there is no edge between $A$ and $B$.
If $K$ is not a $\theta$-cycle, then $G$ has a linear-time reduction with respect to $K$.
\end{lemma}
\begin{proof}
Since $G$ is proper, the vertices $x_1, \ldots, x_6$ are pairwise distinct.
Let $G_A$ and $G_B$ be the subgraphs of $G$ induced by $A$ and $B$.
The $2$-connectivity of $G$ implies that $G_A$ is connected and $G_B$ has at most two components,
each containing two vertices among $x_3,\ldots,x_6$.
If $G_B$ contains an edge-cut of size at most one separating $\{x_3,x_5\}$ from $\{x_4,x_6\}$,
then a reduction of $G$ can be obtained using Lemma~\ref{lemma-6-ob1},
which we apply with one of the sides of this cut in $G_B$ playing the role of $A$ and
the rest of the vertices outside the cycle $K$ playing the role of $B$ in the statement of Lemma~\ref{lemma-6-ob1}.
We conclude that $G-V(K)$ contains three disjoint paths $Q_1$, $Q_2$, and $Q_3$ such that
$Q_1$ connects $x_1$ with $x_2$, $Q_2$ connects $x_3$ with $x_4$ or $x_6$, and
$Q_3$ connects $x_5$ with the other of the vertices $x_4$ and $x_6$.
Let $G_1$ be the graph obtained from $G-V(K)$ by adding paths $x_1z_1x_4$, $x_2z_2x_5$, and $x_3z_3x_6$,
where $z_1$, $z_2$ and $z_3$ are new vertices, each having degree two in $G_1$.
Note that $\delta(G,G_1)=0$.
We show that $G_1$ is a reduction of $G$ assuming that $G_1$ is $2$-connected.
Let $F_1$ be a spanning Eulerian subgraph of $G_1$.
If at least two of the vertices $z_1$, $z_2$ and $z_3$ are isolated in $F_1$,
then it is easy to construct a spanning Eulerian subgraph $F$ of $G$ with ${\rm exc}(F)\le{\rm exc}(F_1)$.
Hence, assume that at most one of the vertices $z_1$, $z_2$ and $z_3$ is isolated in $F_1$.
Since the vertices $z_1$ and $z_2$ form a $2$-vertex cut in $G_1$, it follows that
the paths $x_1z_1x_4$ and $x_2z_2x_5$ are contained in the same cycle of $F_1$.
If $z_3$ is isolated in $F_1$,
then let $F$ be a spanning Eulerian subgraph of $G$ obtained from $F_1-\{z_1,z_2,z_3\}$
by adding the paths $x_1v_1v_6v_5x_5$ and $x_2v_2v_3v_4x_4$.
Note that ${\rm exc}(F)={\rm exc}(F_1)-1$ in this case.
If $z_3$ is not isolated in $F_1$, i.e., the path $x_3z_3x_6$ is contained in a cycle of $F_1$,
then let $F$ be obtained from $F_1-\{z_1,z_2,z_3\}$
by adding the paths $x_1v_1v_6x_6$, $x_2v_2v_3x_3$ and $x_4v_4v_5x_5$.
Observe that $c(F)=c(F_1)$, which implies ${\rm exc}(F)={\rm exc}(F_1)$.
We conclude that $G_1$ is a reduction of $G$ if $G_1$ is $2$-connected.
It remains to consider the case that $G_1$ is not 2-connected.
This implies that the path $Q_2$ connects $x_3$ with $x_6$, and
the path $Q_3$ connects $x_4$ with $x_5$.
In addition, the vertices of the subgraph $G_B$ can be split into two parts $B'$ and $B''$ such that
$B'$ contains the vertices $x_3$ and $x_6$, $B''$ contains the vertices $x_4$ and $x_5$, and
there is at most one edge between $B'$ and $B''$.
Since $K$ is not a $\theta$-cycle, there must be at least one edge between $B'$ and $B''$,
i.e., there is exactly one edge between $B'$ and $B''$.
Let $e$ be this edge.
Let $G_2$ be the graph obtained from $G-V(K)$ by adding the edges $x_2x_3$, $x_1x_4$ and $x_5x_6$, and
by subdividing $e$ by one new vertex $w$.
Observe that $G_2$ is $2$-connected and $\delta(G,G_2)=4$.
In addition, $G_2$ is simple since $G$ is proper.
We show that $G_2$ is a reduction of $G$.
Let $F_2$ be a spanning Eulerian subgraph of $G_2$.
If $w$ is an isolated vertex in $F_2$,
then $F_2$ contains either none or all of the edges $x_2x_3$, $x_1x_4$ and $x_5x_6$.
In the former case, let $F$ be the spanning Eulerian subgraph of $G$ obtained from $F_2-w$ by adding the cycle $K$.
In the latter case, let $F$ be the subgraph obtained from $F_2-w$ by removing the edges $x_2x_3$, $x_1x_4$ and $x_5x_6$ and
adding the paths $x_2v_2v_3x_3$, $x_4v_4v_5x_5$ and $x_6v_6v_1x_1$.
Since $c(F)=c(F_2)+1$ and $i(F)=i(F_2)-1$ in the former case, and $c(F)=c(F_2)$ and $i(F)=i(F_2)-1$ in the latter case,
it follows that ${\rm exc}(F)\le{\rm exc}(F_2)+1$.
If $w$ is not an isolated vertex,
then the subgraph $F_2$ either contains the edge $x_5x_6$ or
it contains the edges $x_2x_3$ and $x_1x_4$.
In the former case, let $F$ be the spanning Eulerian subgraph of $G$ obtained from $F_2$
by replacing the path through $w$ with the edge $e$,
removing the edge $x_5x_6$ and adding the path $x_6v_6v_1\cdots v_5x_5$.
In the latter case, let $F$ be the spanning Eulerian subgraph of $G$ obtained from $F_2$
by replacing the path through $w$ with the edge $e$,
removing the edges $x_2x_3$ and $x_1x_4$, and
adding the paths $x_2v_2v_3x_3$ and $x_1v_1v_6v_5v_4x_4$.
In both cases, we get that $c(F)=c(F_2)$ and $i(F)=i(F_2)$, which yields that ${\rm exc}(F)={\rm exc}(F_2)$.
This concludes the proof that $G_2$ is a reduction of $G$.
\end{proof}
\begin{lemma}\label{lemma-6-oppa}
Let $G$ be a proper $2$-connected subcubic graph,
let $K=v_1v_2v_3v_4v_5v_6$ be one of its cycles of length six, and
let $x_i$ be the neighbor of $v_i$ not contained in $K$ for $i=1,\ldots,6$.
If $K$ is not a $\theta$-cycle and
the vertices $x_1$ and $x_4$ are in different components of $G-V(K)$,
then $G$ has a linear-time reduction with respect to $K$.
\end{lemma}
\begin{proof}
Let $A$ and $B$ be a partition of the vertices of $G-V(K)$ such that
$x_1\in A$ and $x_4\in B$, and there is no edge between $A$ and $B$.
By symmetry, we can assume that $|A\cap \{x_1,\ldots, x_6\}|\le 3$.
If $x_3\in A$ or $x_5\in A$,
then the reduction exists by Lemma~\ref{lemma-6-ob1};
e.g., if $x_3\in A$, apply the lemma with $A\cup\{v_1,v_3\}$ playing the role of the set $A$ and
with $B\cup\{v_2,v_4,v_5,v_6\}$ playing the role of the set $B$ from the statement of the lemma.
If $A\cap\{x_1,\ldots,x_6\}=\{x_1,x_2,x_6\}$,
then the reduction also exists by Lemma~\ref{lemma-6-ob1}:
apply the lemma with $A\cup\{v_6,v_2\}$ playing the role of the set $A$ and
with $B\cup\{v_1,v_3,v_4,v_5\}$ playing the role of the set $B$.
We conclude that $A\cap\{x_1,\ldots,x_6\}\subseteq\{x_1,x_2,x_6\}$ and, since $G$ is $2$-connected, $|A\cap\{x_1,\ldots,x_6\}|=2$.
By symmetry, we can assume that $A\cap\{x_1,\ldots,x_6\}=\{x_1,x_2\}$ and $B\cap\{x_1,\ldots,x_6\}=\{x_3,x_4,x_5,x_6\}$.
The existence of the reduction now follows from Lemma~\ref{lemma-6-ob0}.
\end{proof}
Lemmas~\ref{lemma-6-ob0} and~\ref{lemma-6-oppa} yield the following.
\begin{lemma}\label{lemma-6-nocut}
Let $G$ be a proper $2$-connected subcubic graph.
If $G$ contains a cycle $K$ of length six that is not a $\theta$-cycle and that contains an edge belonging to a $2$-edge-cut,
then $G$ has a linear-time reduction with respect to $K$.
\end{lemma}
\begin{proof}
Let $v_1,\ldots,v_6$ be the vertices of the cycle $K$.
By symmetry, we can assume that the edge $v_1v_2$ is contained in a $2$-edge-cut.
The $2$-edge-cut must contain another edge $e$ of the cycle $K$.
Since $G$ is $2$-connected, this edge $e$ is neither $v_1v_6$ nor $v_2v_3$.
If the edge $e$ is $v_3v_4$ or $v_5v_6$, then the reduction exists by Lemma~\ref{lemma-6-ob0}.
Otherwise, the edge $e$ is the edge $v_4v_5$ and the reduction exists by Lemma~\ref{lemma-6-oppa}.
\end{proof}
\begin{lemma}\label{lemma-6-mainred}
Let $G$ be a proper $2$-connected subcubic graph,
let $K=v_1v_2v_3v_4v_5v_6$ be one of its cycles of length six, and
let $x_i$ be the neighbor of $v_i$ not contained in $K$ for $i=1,\ldots,6$.
If the edge $v_1x_1$ is not contained in a $2$-edge-cut, then $G$ has a linear-time reduction unless
\begin{itemize}
\item all the edges $v_2x_2$, $v_3x_3$, $v_5x_5$, and $v_6x_6$ are contained in $2$-edge-cuts, and
\item there exists a partition $A$ and $B$ of the vertices of $G-V(K)$ such that
$x_1,x_2,x_6\in A$, $x_3,x_4,x_5\in B$,
$G-V(K)$ contains exactly one edge between $A$ and $B$, and
both the subgraphs induced by $A$ and $B$ are connected.
\end{itemize}
\end{lemma}
\begin{proof}
The cycle $K$ is not a $\theta$-cycle since all edges incident with a $\theta$-cycle are contained in $2$-edge-cuts.
Since the edge $v_1x_1$ is not contained in a $2$-edge-cut, the degree of $x_1$ is three,
in particular, its degree in $G-V(K)$ is two.
Note that $G-V(K)$ contains a path $Q_{25}$ connecting the vertex $x_2$ with the vertex $x_5$,
a path $Q_{36}$ connecting $x_3$ with $x_6$, and a path $Q_{14}$ connecting $x_1$ with $x_4$ (the three paths need not be disjoint)
since otherwise the existence of the reduction of $G$ follows from Lemma~\ref{lemma-6-oppa}.
Let $G_1$ be the graph obtained from $G-V(K)$ by adding the edge $x_2x_6$ and a vertex $z$ adjacent to $x_3$, $x_4$, and $x_5$.
Note that $\delta(G,G_1)=4$.
Since $G$ is proper, the vertices $x_2$ and $x_6$ are not adjacent in $G$.
Hence $G_1$ is a simple subcubic graph.
Observe that any spanning Eulerian subgraph $F_1$ of $G_1$
can be transformed to a spanning Eulerian subgraph $F$ of $G$ with ${\rm exc}(F)\le {\rm exc}(F_1)+1$.
Hence, $G_1$ is a reduction of $G$ unless $G_1$ is not 2-connected.
In the rest of the proof, we assume that $G_1$ is not 2-connected.
This implies that there exists a partition of the vertices of $G_1$ into non-empty sets $A$ and $B$
such that there is at most one edge between $A$ and $B$.
By symmetry, we can assume that $x_2$ is contained in $A$.
Note that the edge $x_2x_6$ and the paths $Q_{36}$, $x_3zx_5$ and $Q_{25}$
contain a cycle passing through the edge $x_2x_6$ and a cycle passing through the path $x_3zx_5$;
note that their union need not be a cycle since the paths $Q_{36}$ and $Q_{25}$ need not be disjoint.
This implies that $x_6\in A$, and either $\{x_3,x_5\}\subseteq A$ or $\{x_3,x_5\}\subseteq B$.
If $\{x_3,x_5\}\subseteq A$, then either $G$ is not $2$-connected (if $x_1\in A$),
or the edge $v_1x_1$ is contained in a $2$-edge-cut in $G$ (if $x_1\in B$).
Since both these conclusions are impossible, we get that $\{x_3,x_5\}\subseteq B$.
Hence, there is an edge between $A$ and $B$ and this edge is contained in both paths $Q_{25}$ and $Q_{36}$.
Let $e_0$ be this edge.
Observe that $e_0$ is not incident with the vertex $z$, since $e_0$ is an edge of $G$ and $z$ does not exist in $G$.
In particular, both the vertices $z$ and $x_4$ belong to $B$.
If $x_1\in B$, then Lemma~\ref{lemma-6-ob1} yields the existence of a reduction of $G$.
So, we can assume that $x_1\in A$.
This yields that the path $Q_{14}$ also contains the edge $e_0$.
Since all paths $Q_{14}$, $Q_{25}$, and $Q_{36}$ must contain the edge $e_0$,
we conclude that $G-V(K)-e_0$ has exactly two components;
one of the two components has the vertex set $A$, in particular, it contains the vertices $x_1$, $x_2$ and $x_6$, and
the other component has the vertex set $B\setminus\{z\}$ and contains the vertices $x_3$, $x_4$ and $x_5$.
If $v_ix_i$ is not contained in a $2$-edge-cut for some $i\in\{2,3,5,6\}$, say $i=2$,
then consider the graph $G_2$ obtained from $G-V(K)$ by adding the edge $x_1x_3$ and
a new vertex $z$ adjacent to $x_4$, $x_5$, and $x_6$.
If the graph $G_2$ were not $2$-connected, it would follow that $G$ is not $2$-connected.
Hence, $G_2$ is a reduction for $G$ (note that the edge $v_ix_i$ can play the role of the edge $v_1x_1$
at the beginning of our proof).
\end{proof}
\begin{lemma}\label{lemma-6no2e}
Let $G$ be a proper $2$-connected subcubic graph,
let $K=v_1v_2v_3v_4v_5v_6$ be one of its cycles of length six, and
let $x_i$ be the neighbor of $v_i$ not contained in $K$ for $i=1,\ldots,6$.
If neither $v_1x_1$ nor $v_4x_4$ is contained in a $2$-edge-cut, then $G$ has a linear-time reduction with respect to $K$.
\end{lemma}
\begin{proof}
Lemma~\ref{lemma-6-mainred} yields that there either exists a reduction of $G$ or
a partition $A$ and $B$ of the vertices of $G-V(K)$ such that $x_1,x_2,x_6\in A$ and $x_3,x_4,x_5\in B$,
$G-V(K)$ contains exactly one edge $e$ between $A$ and $B$, and
both subgraphs of $G-V(K)$ induced by $A$ and $B$ are connected.
In the former case, the proof of the lemma is finished.
So, we focus on the latter case.
Let $G'$ be the graph obtained from $G-V(K)$ by adding the edges $x_2x_3$ and $x_5x_6$, and by subdividing the edge $e$ twice.
Observe that $G'$ is a $2$-connected simple subcubic graph and $\delta(G,G')=0$.
Let $F'$ be a spanning Eulerian subgraph of $G'$.
If $F'$ does not contain the path corresponding to the edge $e$ or one of the edges $x_2x_3$ and $x_5x_6$,
then $F'$ does not contain any of the edges $x_2x_3$ and $x_5x_6$, and
$G$ has a spanning Eulerian subgraph $F$ such that $c(F)=c(F')+1$ and $i(F)=i(F')-2$, i.e., ${\rm exc}(F)={\rm exc}(F')$.
If $F'$ does not contain the path corresponding to the edge $e$ but contains both the edges $x_2x_3$ and $x_5x_6$,
then $G$ has a spanning Eulerian subgraph $F$ such that $c(F)=c(F')$ and $i(F)=i(F')$, i.e., ${\rm exc}(F)={\rm exc}(F')$.
Finally, if $F'$ contains the path corresponding to the edge $e$,
then $F'$ contains one of the edges $x_2x_3$ and $x_5x_6$, and
$G$ has a spanning Eulerian subgraph $F$ such that $c(F)=c(F')$ and $i(F)=i(F')$, i.e., ${\rm exc}(F)={\rm exc}(F')$.
In all the cases, it holds that $G$ has a spanning Eulerian subgraph $F$ with ${\rm exc}(F)\le {\rm exc}(F')$.
We conclude that $G'$ is a reduction of $G$.
\end{proof}
We now combine Lemmas~\ref{lemma-6-nocut}, \ref{lemma-6-mainred} and \ref{lemma-6no2e}.
\begin{lemma}\label{lemma-no26}
Let $G$ be a proper $2$-connected subcubic graph and
let $K$ and $K'$ be two distinct cycles of length six in $G$.
If the cycles $K$ and $K'$ intersect and
at least one of them is not a $\theta$-cycle,
then $G$ has a linear-time reduction with respect to $K\cup K'$.
\end{lemma}
\begin{proof}
We can assume that $K$ is not a $\theta$-cycle by symmetry.
Since the cycles $K$ and $K'$ are distinct,
the cycle $K$ is incident with at least two edges of $K'$ not contained in $K$.
None of these edges is contained in a $2$-edge-cut by Lemma~\ref{lemma-6-nocut}.
By Lemma~\ref{lemma-6-mainred}, these two edges must be incident with the opposite vertices of $K$.
Finally, the existence of the reduction follows from Lemma~\ref{lemma-6no2e}.
\end{proof}
We finish this subsection with two additional lemmas on edges incident with cycles of length six that
are contained in $2$-edge-cuts.
\begin{lemma}\label{lemma-6adjcuts}
Let $G$ be a proper $2$-connected subcubic graph,
let $K=v_1v_2v_3v_4v_5v_6$ be one of its cycles of length six, and
let $x_i$ be the neighbor of $v_i$ not contained in $K$ for $i=1,\ldots,6$.
If $K$ is not a $\theta$-cycle, and
there exists $i<j$ such that $j-i\neq 3$ and $\{v_ix_i, v_jx_j\}$ is a $2$-edge-cut,
then $G$ has a linear-time reduction with respect to $K$.
\end{lemma}
\begin{proof}
By symmetry, we can assume that $j-i$ is equal to $1$ or $2$.
If $j-i=1$, then the existence of the reduction follows from Lemma~\ref{lemma-6-nocut}, and
if $j-i=2$, then its existence follows from Lemma~\ref{lemma-6-ob1}.
\end{proof}
\begin{lemma}\label{lemma-6oppcuts}
Let $G$ be a proper $2$-connected subcubic graph,
let $K=v_1v_2v_3v_4v_5v_6$ be one of its cycles of length six, and
let $x_i$ be the neighbor of $v_i$ not contained in $K$ for $i=1,\ldots,6$.
If there exists $1\le i<j\le 3$ such that
both $\{v_ix_i,v_{i+3}x_{i+3}\}$ and $\{v_jx_j,v_{j+3}x_{j+3}\}$ are $2$-edge-cuts in $G$,
then $G$ has a linear-time reduction with respect to $K$.
\end{lemma}
\begin{proof}
Since $G$ is $2$-connected,
$G-V(K)$ has three components $C_1$, $C_2$, and $C_3$, and
the vertices $x_i$ and $x_{i+3}$ are contained in $C_i$ for $i\in\{1,2,3\}$.
Let $G'$ be the graph obtained from $G-V(K)$ by adding the edges $x_1x_5$ and $x_2x_6$, and
the path $x_3wx_4$, where $w$ is a new vertex of degree two in $G'$.
Note that $G'$ is a simple $2$-connected subcubic graph and $\delta(G,G')=4$.
Let $F'$ be a spanning Eulerian subgraph of $G'$.
The subgraph $F'$ either contains none of the edges $x_1x_5$, $x_2x_6$, $x_3w$ and $x_4w$, or
it contains all of them.
In the former case, $G$ has a spanning Eulerian subgraph $F$ with $c(F)=c(F')+1$ and $i(F)=i(F')-1$, i.e., ${\rm exc}(F)={\rm exc}(F')+1$.
In the latter case, $G$ has a spanning Eulerian subgraph $F$ with $c(F)=c(F')$ and $i(F)=i(F')$, i.e., ${\rm exc}(F)={\rm exc}(F')$.
It follows that $G'$ is a reduction of $G$.
\end{proof}
\subsection{Cycles of length seven}
\label{sub-seven}
In this subsection, we establish two lemmas concerning the reductions involving cycles of length seven.
As in Subsection~\ref{sub-six}, we assume that a cycle $K$ with the properties stated in the lemmas is given.
Since the properties asserted by Lemmas~\ref{lemma-no2in7} and~\ref{lemma-cutsin7} can be checked in linear time and
all cycles of length seven can be listed in linear time, it is possible, in quadratic time,
to find a cycle of length seven with the properties given in one of Lemmas~\ref{lemma-no2in7} and~\ref{lemma-cutsin7} or
to conclude that no such cycle exists.
To prove the first of the lemmas, we need to use the Splitting Lemma of Fleischner~\cite{fleischsplit},
which we now state.
Let us introduce some additional notation.
We say that a graph $G'$ is obtained from a graph $G$ by \emph{splitting off} the edges $u_1v$ and $u_2v$
if $G'$ is obtained from $G$ by removing the edges $u_1v$ and $u_2v$ and adding the edge $u_1u_2$.
We will always apply this operation to edges incident with the same vertex.
We can now state the Splitting Lemma.
\begin{lemma}[Splitting Lemma]\label{lemma-fleisch}
Let $G$ be a $2$-edge-connected graph and let $v$ be a vertex of degree at least $4$.
\begin{itemize}
\item If $v$ is a cut-vertex and $e_1$ and $e_2$ are two edges incident with $v$ that belong to different blocks of $G$,
then splitting off $e_1$ and $e_2$ results in a 2-edge-connected graph.
\item If $v$ is not a cut-vertex and $e_1$, $e_2$, and $e_3$ are edges incident with $v$,
then splitting off $e_1$ and $e_2$ or splitting off $e_2$ and $e_3$ results in a 2-edge-connected graph.
\end{itemize}
\end{lemma}
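The splitting-off operation and the guarantee of the Splitting Lemma can be illustrated by the following sketch (the multiset edge representation, the function names and the brute-force $2$-edge-connectivity test are ours and serve only as an illustration):

```python
def split_off(edges, u1, v, u2):
    """Split off the edges u1v and u2v at the vertex v: remove one copy
    of each and add the edge u1u2.  Edges are frozensets, so the graph
    is assumed to be loopless (u1 != u2)."""
    edges = list(edges)
    edges.remove(frozenset((u1, v)))
    edges.remove(frozenset((u2, v)))
    edges.append(frozenset((u1, u2)))
    return edges

def is_2_edge_connected(vertices, edges):
    """Brute-force check: the graph is connected and stays connected
    after deleting any single edge."""
    def connected(es):
        adj = {x: set() for x in vertices}
        for e in es:
            a, b = tuple(e)
            adj[a].add(b)
            adj[b].add(a)
        stack, seen = [next(iter(vertices))], set()
        while stack:
            x = stack.pop()
            if x not in seen:
                seen.add(x)
                stack.extend(adj[x])
        return len(seen) == len(vertices)
    return connected(edges) and all(
        connected(edges[:k] + edges[k + 1:]) for k in range(len(edges)))
```

As a toy example, take two triangles sharing a vertex $v$ of degree four; $v$ is a cut-vertex, and splitting off two of its edges from different blocks preserves $2$-edge-connectivity (the first bullet of the lemma), while splitting off two edges of the same block disconnects this particular graph.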
We are now ready to prove the first lemma of this subsection.
\begin{lemma}\label{lemma-no2in7}
Let $G$ be a proper $2$-connected subcubic graph.
If $G$ has a cycle $K$ of length seven that contains a vertex of degree two,
then $G$ has a linear-time reduction with respect to $K$.
\end{lemma}
\begin{proof}
Since $G$ is proper, $K$ is an induced cycle and at most two vertices of $K$ have degree two.
Let $v_1, \ldots, v_k$ be the vertices of $K$ of degree three in order around the cycle;
note that $k$ is five or six.
Further, let $x_i$ be the neighbor of $v_i$ outside of $K$ for $i\in\{1,\ldots,k\}$.
Since $G$ is proper,
the vertices $x_1,\ldots,x_k$ are pairwise distinct.
Moreover,
if $i\neq j$ and either $|i-j|\le 2$, or $|i-j|\ge k-2$, then $x_ix_j$ is not an edge of $G$.
Let $G'$ be the graph obtained from $G$ by contracting the cycle $K$ to a single vertex $w$.
By Lemma~\ref{lemma-fleisch} and symmetry, we can assume that
the graph $G''$ obtained from $G'$ by splitting off $wx_1$ and $wx_2$ is $2$-edge-connected.
We first deal with the case that $k=5$.
Note that $G''$ is a simple $2$-connected subcubic graph and $\delta(G,G'')=8$.
Let $F'$ be a spanning Eulerian subgraph of $G''$.
If the vertex $w$ is isolated in $F'$ and $F'$ does not contain the edge $x_1x_2$,
then there exists a spanning Eulerian subgraph $F$ of $G$ with $c(F)=c(F')+1$ and $i(F)=i(F')-1$.
If either the vertex $w$ is isolated and $F'$ contains the edge $x_1x_2$, or
the vertex $w$ is not isolated and $F'$ does not contain the edge $x_1x_2$,
then there exists a spanning Eulerian subgraph $F$ of $G$ with $c(F)=c(F')$ and $i(F)\le i(F')+1$.
Finally, if the vertex $w$ is not isolated and $F'$ contains the edge $x_1x_2$,
then there exists a spanning Eulerian subgraph $F$ of $G$ with $c(F)=c(F')$ and $i(F)\le i(F')+3$, and
if there is no such subgraph $F$ with $i(F)\le i(F')+2$,
then there is also a spanning Eulerian subgraph $F$ with $c(F)\le c(F')+1$ and $i(F)=i(F')$.
In all the cases, we conclude that there is a spanning Eulerian subgraph $F$ of $G$ with ${\rm exc}(F)\le{\rm exc}(F')+2$.
We now deal with the case $k=6$.
If $G'$ is not $2$-connected, then $w$ must be a cut-vertex and $G'$ has two blocks,
each containing two neighbors of $w$.
Regardless of whether $w$ is a cut-vertex,
Lemma~\ref{lemma-fleisch} implies that splitting off $wx_3$ with either $wx_4$ or $wx_5$ and
suppressing $w$ yields a $2$-connected subcubic graph $G''$.
Note that $G''$ is simple and $\delta(G,G'')=8$.
Let $e$, $e'$ and $e''$ be the edges of $G''$ not contained in $G$.
Consider a spanning Eulerian subgraph $F'$ of $G''$.
If $F'$ uses none of the edges $e$, $e'$ and $e''$,
then there exists a spanning Eulerian subgraph $F$ of $G$ with $c(F)=c(F')+1$ and $i(F)=i(F')$.
If $F'$ uses exactly one of the edges $e$, $e'$ and $e''$,
then there exists a spanning Eulerian subgraph $F$ with $c(F)=c(F')$ and $i(F)\le i(F')+2$.
If $F'$ uses exactly two of the edges $e$, $e'$ and $e''$,
then there exists a spanning Eulerian subgraph $F$ with $c(F)\le c(F')+1$ and $i(F)\le i(F')+1$, and
if there is no such subgraph $F$ with $c(F)<c(F')+1$ or $i(F)<i(F')+1$,
then there is also a spanning Eulerian subgraph $F$ with $c(F)=c(F')$ and $i(F)=i(F')+2$.
Finally, if $F'$ uses all the edges $e$, $e'$ and $e''$,
then there exists a spanning Eulerian subgraph $F$ with $c(F)\le c(F')+2$ and $i(F)=i(F')$, and
if there is no such subgraph $F$ with $c(F)\le c(F')+1$,
then there is also a spanning Eulerian subgraph $F$ with $c(F)=c(F')$ and $i(F)=i(F')+1$.
In all the cases, there exists a spanning Eulerian subgraph $F$ of $G$ with ${\rm exc}(F)\le {\rm exc}(F')+2$,
i.e., $G''$ is a reduction of $G$.
\end{proof}
Note that Lemmas~\ref{lemma-no5} and~\ref{lemma-no2in7} yield that
if a proper $2$-connected subcubic graph $G$ contains a cycle of length at most seven that contains a vertex of degree two,
then $G$ has a linear-time reduction.
We next prove the final lemma of this section.
\begin{lemma}\label{lemma-cutsin7}
Let $G$ be a proper $2$-connected subcubic graph and let $K=v_1v_2\ldots v_m$ be a cycle in $G$ of length at most $7$.
If each of the edges $v_1v_m$ and $v_2v_3$ is contained in a $2$-edge-cut
but the edges $v_1v_m$ and $v_2v_3$ themselves do not form a $2$-edge-cut,
then $G$ has a linear-time reduction with respect to $K$.
\end{lemma}
\begin{proof}
Since $G$ is proper, the length of $K$ is at least six, i.e., $m\ge 6$.
If $m=6$, then the existence of a reduction of $G$ follows from Lemma~\ref{lemma-6-nocut}.
Hence, we can assume that $m=7$.
In addition, all the vertices of $K$ have degree three (otherwise, Lemma~\ref{lemma-no2in7} yields the existence of a reduction).
For $i=1,\ldots,7$, let $x_i$ be the neighbor of the vertex $v_i$ outside of $K$.
Let $e_{17}$ be an edge forming a $2$-edge-cut with the edge $v_1v_7$, and
let $e_{23}$ be an edge forming a $2$-edge-cut with the edge $v_2v_3$.
Note that the edges $e_{17}$ and $e_{23}$ must be edges of the cycle $K$.
Moreover, since $G$ is $2$-connected and the edges $v_1v_7$ and $v_2v_3$ do not form a $2$-edge-cut,
it follows that $e_{17}$ is one of the edges $v_3v_4$, $v_4v_5$ and $v_5v_6$ and
$e_{23}$ is one of the edges $v_4v_5$, $v_5v_6$ and $v_6v_7$.
For $(i,j)\in\{(1,7),\,(2,3)\}$,
let $U_{ij}$ and $V_{ij}$ be the two sides of the $2$-edge-cut formed by the edges $v_iv_j$ and $e_{ij}$;
by symmetry, we can assume that $v_i\in U_{ij}$.
Since neither $e_{17}$ nor $e_{23}$ is the edge $v_1v_2$, we have $v_1,v_2\in U_{17}\cap U_{23}$.
Since $e_{17}\neq v_2v_3$ and $e_{23}\neq v_1v_7$, we have $v_3\in U_{17}\cap V_{23}$ and $v_7\in U_{23}\cap V_{17}$.
Finally, since $e_{23}\neq v_3v_4$, we have $v_4\in V_{23}$, and the symmetric argument yields that $v_6\in V_{17}$.
Suppose that the vertex $v_5$ is contained in $V_{17}$.
If $v_4$ were also contained in $V_{17}$, then the edge $v_3x_3$ would be a cut-edge in $G$, which is impossible.
If $v_4$ were not contained in $V_{17}$, i.e., it were contained in $U_{17}$,
then the edges $v_1v_7$ and $v_2v_3$ would form a $2$-edge-cut, which is also impossible.
Hence, the vertex $v_5$ must be contained in $U_{17}$.
The symmetric argument yields that $v_5$ is contained in $U_{23}$.
Consequently, the edge $e_{17}$ is the edge $v_5v_6$ and the edge $e_{23}$ is the edge $v_4v_5$.
It follows that $G-V(K)$ has three components with vertex sets
$A=(U_{17}\cap U_{23})\setminus V(K)$, $B=(U_{17}\cap V_{23})\setminus V(K)$, and $C=(U_{23}\cap V_{17})\setminus V(K)$.
Note that $x_1,x_2,x_5\in A$, $x_3,x_4\in B$, and $x_6,x_7\in C$.
Let $G'$ be obtained from $G$ by removing $v_1$ and $v_2$,
adding edges $v_3x_1$ and $v_7x_2$, and by subdividing the edge $v_5x_5$ once;
let $w$ be the new vertex of degree two.
The graph $G'$ is a simple $2$-connected subcubic graph and $\delta(G,G')=0$.
Let $F'$ be a spanning Eulerian subgraph of $G'$.
Let $F''$ be the spanning Eulerian subgraph of $G$ obtained from $F'$ as follows.
First, include the vertices $v_1$ and $v_2$ as isolated vertices.
If $F'$ contains the path $v_5wx_5$, then replace it with the edge $v_5x_5$; otherwise, remove the vertex $w$.
If $F'$ contains the edge $v_3x_1$, include the edges $x_1v_1$ and $v_3v_2$, and
if $F'$ contains the edge $v_7x_2$, include the edges $x_2v_2$ and $v_7v_1$.
Finally, include the edge $v_1v_2$ if the vertices $v_1$ and $v_2$ have odd degree so far.
Note that the resulting graph $F''$ is a spanning Eulerian subgraph of $G$,
$c(F'')=c(F')$, $i(F'')\le i(F')+1$, and if $i(F'')=i(F')+1$,
then all the three vertices $v_1$, $v_2$ and $v_5$ are isolated in $F''$.
If $i(F'')\le i(F')$, set $F=F''$; otherwise, let $F$ be the spanning Eulerian subgraph of $G$
with the edge set equal to the symmetric difference of $E(F'')$ and $E(K)$.
In the latter case, $c(F)\le c(F')$ and $i(F)=i(F')-2$.
It follows that $G'$ is a reduction of $G$.
\end{proof}
\subsection{Clean subcubic graphs}
\label{sub-clean}
We now summarize the facts that have been established in this section.
We will call a non-basic $2$-connected subcubic graph $G$ clean
if none of the lemmas that we have proven can be applied to $G$.
Formally, a $2$-connected subcubic graph $G$ is \emph{clean} if it is proper and
\begin{itemize}
\item[(CT1)] no cycle of length at most $7$ in $G$ contains a vertex of degree two,
\item[(CT2)] every cycle of length six in $G$ that is not a $\theta$-cycle is disjoint from all other cycles of length six,
\item[(CT3)] every cycle $K=v_1\ldots v_m$ of length $m\le 7$ in $G$ satisfies that
if each of the edges $v_1v_m$ and $v_2v_3$ is contained in a $2$-edge-cut,
then the edges $v_1v_m$ and $v_2v_3$ themselves form a $2$-edge-cut, and
\item[(CT4)] every cycle $K=v_1\ldots v_6$ of length six in $G$ satisfies at least one of the following
\begin{itemize}
\item[(a)] $K$ is a $\theta$-cycle, or
\item[(b)] each edge exiting $K$ is contained in a $2$-edge-cut but no two of them together form a $2$-edge-cut, or
\item[(c)] each edge exiting $K$ is contained in a $2$-edge-cut, and there exists exactly one pair $i$ and $j$ with $1\le i<j\le 6$
such that the edges $v_ix_i$ and $v_jx_j$ form a $2$-edge-cut, and this pair satisfies $j-i=3$, or,
\item[(d)] precisely one edge exiting $K$, say $v_1x_1$, is not contained in a $2$-edge-cut, and
there exists a partition $A$ and $B$ of the vertices of $G-V(K)$
such that $x_1,x_2,x_6\in A$, $x_3,x_4,x_5\in B$,
there is exactly one edge between $A$ and $B$, and
both $A$ and $B$ induce connected subgraphs of $G-V(K)$,
\end{itemize}
where $x_i$ is the neighbor of the vertex $v_i$ outside the cycle $K$, $i\in\{1,\ldots,6\}$.
\end{itemize}
Summarizing the results of this section, we get the following.
\begin{theorem}
\label{cor-summary}
There exists an algorithm running in time $O(n^3)$ that
constructs, for a given $n$-vertex $2$-connected subcubic graph $G$,
a reduction of $G$ that is either basic or clean.
\end{theorem}
\begin{proof}
We show that if $G$ is neither basic nor clean,
then one of Lemmas~\ref{lemma-c2e}--\ref{lemma-cutsin7} applies.
As discussed in Subsections~\ref{sub-proper}--\ref{sub-seven},
it is possible to check the existence of a reduction as described in these lemmas,
to find the corresponding subgraph and to perform the reduction in quadratic time.
Since each step results in decreasing the sum $n(G)+n_2(G)$,
the algorithm stops after at most $O(n)$ steps, which yields the claimed running time.
If $G$ is basic, then there is nothing to prove.
If $G$ is not proper, a reduction exists by Lemma~\ref{lemma-no5}.
If $G$ is proper and fails to satisfy (CT1), the existence of a reduction follows from Lemma~\ref{lemma-no2in7} (note that
$G$ cannot have a cycle of length at most six containing a vertex of degree two since it is proper).
If $G$ is proper and does not satisfy (CT2), then a reduction exists by Lemma~\ref{lemma-no26}, and
if it does not satisfy (CT3), then a reduction exists by Lemma~\ref{lemma-cutsin7}.
Finally, if $G$ is proper and fails to satisfy (CT4),
then a reduction exists by one of Lemmas~\ref{lemma-6-mainred}, \ref{lemma-6no2e}, \ref{lemma-6adjcuts} or \ref{lemma-6oppcuts}.
\end{proof}
\section{Main result}\label{sec-proof}
We need a few additional results before we can prove Theorem~\ref{thm-main}.
The first concerns the structure of cycles passing through vertices of a cycle of length six
in a clean $2$-connected subcubic graph.
Let $v$ be a vertex of degree three in a graph $G$, and let $x_1$, $x_2$ and $x_3$ be its neighbors.
The \emph{type} of $v$ is the triple $(\ell_1, \ell_2, \ell_3)$ such that
$\ell_1$, $\ell_2$ and $\ell_3$ are the lengths of the shortest cycles containing the paths $x_1vx_2$, $x_1vx_3$ and $x_2vx_3$, respectively.
In our consideration, the order of the coordinates of the triple will be irrelevant,
so we will always assume that the lengths satisfy that $\ell_1\le\ell_2\le\ell_3$.
A type $(\ell'_1,\ell'_2,\ell'_3)$ \emph{dominates} the type $(\ell_1, \ell_2, \ell_3)$
if $\ell'_i\ge \ell_i$ for every $i=1,2,3$.
If $K$ is a cycle in a graph $G$ and each vertex of $K$ has degree three,
then the \emph{type} of the cycle $K$ is the multiset of the types of the vertices of $K$.
Finally, a multiset $M_1$ of types \emph{dominates} a multiset $M_2$ of types
if there exists a bijection between the types contained in $M_1$ and $M_2$ such that
each type of $M_1$ dominates the corresponding type in $M_2$.
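Domination between two multisets of types is a finite matching problem and can be checked mechanically. The following short script (purely illustrative and not part of the formal development; all function names are ours) verifies domination by brute force over bijections:

```python
from itertools import permutations

def dominates(t1, t2):
    # Coordinate-wise comparison of type triples, sorted as l1 <= l2 <= l3.
    return all(x >= y for x, y in zip(sorted(t1), sorted(t2)))

def multiset_dominates(m1, m2):
    # m1 dominates m2 if some bijection matches every type of m1
    # to a type of m2 that it dominates; brute force over all bijections.
    return len(m1) == len(m2) and any(
        all(dominates(t1, t2) for t1, t2 in zip(m1, perm))
        for perm in permutations(m2)
    )

# Example: if every vertex of a six-cycle has type (6,8,8), the type of the
# cycle dominates {(6,7,7), (6,7,7), (6,8,8), (6,8,8), (6,8,8), (6,8,8)}.
target = [(6, 7, 7), (6, 7, 7), (6, 8, 8), (6, 8, 8), (6, 8, 8), (6, 8, 8)]
print(multiset_dominates([(6, 8, 8)] * 6, target))  # True
print(multiset_dominates([(6, 7, 7)] * 6, target))  # False
```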
We can now prove the following lemma (note that all vertices of the cycle $K$ in the lemma
must have degree three since $G$ is assumed to be clean).
\begin{lemma}\label{lemma-6types}
Let $G$ be a clean $2$-connected subcubic graph and let $K=v_1v_2\ldots v_6$ be a cycle of length six in $G$.
If $K$ is not a $\theta$-cycle, then the type of $K$ dominates at least one of the following multisets:
\begin{itemize}
\item $\{(6,7,7),\;(6,7,7),\;(6,8,8),\;(6,8,8),\;(6,8,8),\;(6,8,8)\}$,
\item $\{(6,7,7),\;(6,7,8),\;(6,7,8),\;(6,8,8),\;(6,8,8),\;(6,8,8)\}$, or
\item $\{(6,7,7),\;(6,7,8),\;(6,7,9),\;(6,7,9),\;(6,8,8),\;(6,8,8)\}$.
\end{itemize}
\end{lemma}
\begin{proof}
Let $x_i$ be the neighbor of $v_i$ outside of $K$, $i=1,\ldots,6$.
Since $G$ is clean, the cycle $K$ satisfies one of the four conditions in (CT4).
As $K$ is not a $\theta$-cycle, it must satisfy (CT4)(b), (CT4)(c) or (CT4)(d).
We analyze each of these three cases separately.
Suppose that the cycle $K$ satisfies (CT4)(b),
i.e., each edge $v_ix_i$, $i=1,\ldots,6$, is contained in a $2$-edge-cut but no two of them together form a $2$-edge-cut.
Let $K'$ be a cycle in $G$ containing the edge $v_1x_1$.
If the intersection of $K$ and $K'$ is not a path, then the length of $K'$ is at least ten by (CT2).
In the rest, we assume that the intersection of $K$ and $K'$ is a path and that
the cycles $K$ and $K'$ share a path $v_1v_2\ldots v_k$.
If $k=2$, then the length of $K'$ is at least $8$ by (CT3) since both $v_1x_1$ and $v_2x_2$ are contained in a $2$-edge-cut.
If $x_1$ or $x_k$ has degree two, then the length of $K'$ is also at least $8$ by (CT1).
Hence, we assume that $k\ge 3$ and that both the vertices $x_1$ and $x_k$ have degree three.
Let $C_1$ and $C_2$ be the blocks of $G-V(K)$ containing the vertices $x_1$ and $x_k$, respectively, and
let $e_1$ and $e_2$ be the cut-edges of $G-V(K)$ that are contained in $K'$ and
that are incident with $C_1$ and $C_2$, respectively.
Note that $C_1$ and $C_2$ are vertex-disjoint and $e_1\not=e_2$
since the edges $v_1x_1$ and $v_kx_k$ do not form a $2$-edge-cut by (CT4)(b).
In addition, $e_1$ is not incident with $x_1$ and $e_2$ is not incident with $x_k$
since the vertices $x_1$ and $x_k$ have degree three and $G$ is $2$-connected.
We conclude that the cycle $K'$ has at least five vertices outside the cycle $K$:
the vertices $x_1$, $x_k$ and the end vertices of $e_1$ and $e_2$.
Hence, the length of the cycle $K'$ is at least $k+5\ge 8$.
Since $K'$ was an arbitrary cycle containing the edge $v_1x_1$ and
the symmetric argument applies to each of the edges $v_ix_i$, $i=1,\ldots,6$,
we conclude that the type of each vertex of $K$ dominates $(6,8,8)$.
In particular, the type of $K$ dominates the first multiset from the statement of the lemma.
Suppose next that $K$ satisfies (CT4)(c),
i.e., each edge $v_ix_i$, $i=1,\ldots,6$, is contained in a $2$-edge-cut, and
there exists exactly one pair $i$ and $j$ with $1\le i<j\le 6$ such that
the edges $v_ix_i$ and $v_jx_j$ form a $2$-edge-cut, and this pair satisfies $j-i=3$.
By symmetry, we can assume that the edges $v_1x_1$ and $v_4x_4$ form a $2$-edge-cut.
Let $K'$ be an arbitrary cycle containing an edge $v_ix_i$ for $i=1,\ldots,6$.
The length of $K'$ is at least seven by (CT2),
which implies that the type of $v_i$ dominates $(6,7,7)$.
If $i\in\{2,3,5,6\}$, then the arguments presented in the analysis of the case (CT4)(b)
yield that the length of $K'$ is at least eight, i.e., the type of $v_i$ dominates $(6,8,8)$.
We conclude that the type of $K$ dominates the first multiset from the statement of the lemma.
Finally, suppose that $K$ satisfies (CT4)(d), i.e.,
the edge $v_1x_1$ is not contained in a $2$-edge-cut while
each of the edges $v_ix_i$, $i=2,\ldots,6$, is contained in a $2$-edge-cut,
there exists a partition $A$ and $B$ of the vertices of $G-V(K)$
such that $x_1,x_2,x_6\in A$, $x_3,x_4,x_5\in B$,
there is exactly one edge between $A$ and $B$, and
both $A$ and $B$ induce connected subgraphs of $G-V(K)$.
Note that the structure of $G-V(K)$ implies that
no two of the edges $v_1x_1,\ldots,v_6x_6$ form a $2$-edge-cut.
Let $K'$ be an arbitrary cycle containing an edge incident with the cycle $K$.
The length of $K'$ is at least seven by (CT2).
If the intersection of the cycles $K$ and $K'$ is not a path, then the length of $K'$ is at least eight.
If the cycle $K'$ does not contain the edge $v_1x_1$,
then the analysis of the case (CT4)(b) yields that the length of $K'$ is at least eight.
Hence, the length of $K'$ is at least eight unless
$K'$ contains the edge $v_1x_1$ and the intersection of $K$ and $K'$
is a path $v_1v_2\ldots v_k$ with $k\le 4$ (if $k=5$ or $k=6$, then the length of $K'$ is at least nine by (CT2)).
If every cycle of length seven intersects the cycle $K$ only in two vertices,
then the type of $v_1$ dominates $(6,7,7)$,
the types of $v_2$ and $v_6$ dominate $(6,7,8)$, and
the types of $v_3$, $v_4$, and $v_5$ dominate $(6,8,8)$.
Consequently, the type of $K$ dominates the second multiset from the statement of the lemma.
In the rest of the proof, we assume that there exists a cycle $K'$ of length seven such that
the intersection of $K$ and $K'$ is a path $v_1v_2\ldots v_k$ with $k=3$ or $k=4$.
Let $C$ be the block of $G-V(K)$ containing the vertex $x_k$, and
let $e=zz'$ be the cut-edge incident with $C$ that is contained in $K'$.
Note that $V(C)\subset B$, in particular, $x_1\not\in V(C)$.
Since each of the edges $v_3x_3$, $v_4x_4$ and $v_5x_5$ is contained in a $2$-edge-cut and
the subgraph of $G-V(K)$ induced by $B$ is connected, both the end-vertices of $e$ are in $B$,
i.e., $e$ is not the edge between $A$ and $B$.
By (CT1), the degree of $x_k$ is three, which implies that $e$ is not incident with $x_k$.
We conclude that $k=3$: otherwise, $K'$ contains the four vertices $v_1,\ldots,v_4$,
the vertices $x_1$ and $x_4$, and the two end-vertices of $e$.
Moreover, the cycle $K'$ is the cycle $v_1v_2v_3x_3zz'x_1$.
Note that since $k=3$, the type of $v_4$ dominates $(6,8,8)$.
Since both the end-vertices $z$ and $z'$ of the edge $e$ are contained in $B$,
the edge $z'x_1$ is the unique edge between $A$ and $B$.
Since the edge $v_3x_3$ is contained in a $2$-edge-cut, $G$ is $2$-connected and the degree of $x_3$ is three,
it follows that the edges $v_3x_3$ and $zz'$ form a $2$-edge-cut in $G$.
If $G$ had a cycle of length seven passing through the vertex $v_5$,
then the symmetric argument would yield that the edges $v_5x_5$ and $z''z'$ form a $2$-edge-cut in $G$,
where $z''$ is the neighbor of $z'$ different from $z$ and $x_1$.
This is impossible since the edge $v_4x_4$ is contained in a $2$-edge-cut and
the subgraph of $G-V(K)$ induced by $B$ is connected,
and so we conclude that the vertex $v_5$ is contained in no cycle of length seven.
Hence, we have established that the type of $v_5$ dominates $(6,8,8)$.
Also note that the type of $v_3$ dominates $(6,7,8)$
since any cycle of length seven containing $v_3$ contains the path $v_2v_3x_3$.
Consider now a cycle $K''$ in $G$ containing the path $v_3v_2x_2$.
If $K''$ contains only the vertices from $V(K)\cup A$,
then $K''$ contains at least five vertices of $K$ and at least four vertices of $A$;
otherwise, there would be a cycle of length six intersecting the cycle $K$, which is excluded by (CT2).
If the cycle $K''$ contains some vertices from the set $B$,
then it contains at least two vertices of the cycle $K$,
five vertices of $A$ (otherwise, the path of $K''$ from $x_2$ to $x_1$ together with the path $x_2v_2v_1x_1$
would form a cycle of length six intersecting $K$) and
two vertices in $B$ (the vertices $x_3$ and $z'$ cannot coincide since the edge $v_3x_3$ is contained in a $2$-edge-cut).
In both cases the length of $K''$ is at least nine.
We conclude that the type of the vertex $v_2$ dominates $(6,7,9)$.
The symmetric argument yields that the type of $v_6$ dominates $(6,7,9)$.
Since the type of $v_1$ dominates $(6,7,7)$, it follows that the type of $K$
dominates the third multiset from the statement of the lemma.
\end{proof}
The following lemma follows from the description of the perfect matching polytope by Edmonds~\cite{edmonds1965maximum} and
the fact that the perfect matching polytope has a strong separation oracle~\cite{padberg1982odd};
see e.g.~\cite{grotschel2012geometric} for further details.
\begin{lemma}\label{lemma-2m}
There exists a polynomial-time algorithm that
for a given cubic $2$-connected $n$-vertex graph
outputs a collection of $m\le n/2+2$ perfect matchings $M_1,\ldots,M_m$ and non-negative coefficients $a_1,\ldots,a_m$ such that
$a_1+\cdots+a_m=1$ and
$$\sum_{i=1}^m a_i\chi_{M_i}=(1/3,\ldots,1/3)\in{\mathbb R}^{E(G)}\;,$$
where $\chi_{M_i}\in{\mathbb R}^{E(G)}$ is the characteristic vector of $M_i$.
\end{lemma}
Lemma~\ref{lemma-2m} gives the following.
\begin{lemma}\label{lemma-2fs}
There exists a polynomial-time algorithm that
for a given $2$-connected $n$-vertex subcubic graph
outputs a collection of $m\le n/2+2$ spanning Eulerian subgraphs $F_1,\ldots,F_m$ and
probabilities $p_1,\ldots,p_m\ge 0$, $p_1+\cdots+p_m=1$ that satisfy the following.
If a spanning Eulerian subgraph $F$ is equal to $F_i$ with probability $p_i$, $i=1,\ldots,m$,
then ${\mathbb P}[e\in E(F)]=2/3$ for every edge $e$ of $G$.
In particular, a vertex of degree three is contained in a cycle of $F$ with probability one and
a vertex of degree two is isolated with probability $1/3$.
\end{lemma}
\begin{proof}
Let $G$ be the input $2$-connected $n$-vertex subcubic graph, and
let $G'$ be the $2$-connected cubic graph obtained from $G$ by suppressing all vertices of degree two.
Apply the algorithm from Lemma~\ref{lemma-2m} to $G'$ to get a collection of $m$ perfect matchings $M_1,\ldots,M_m$ and
non-negative coefficients $a_1,\ldots,a_m$ with the properties stated in the lemma. Note that $m\le n/2+2$.
Let $F'_i$ be the $2$-factor of $G'$ consisting of the edges not contained in $M_i$, and
let $F_i$ be the spanning Eulerian subgraph of $G$ consisting of the edges contained in paths corresponding to the edges of $F'_i$,
$i=1,\ldots,m$.
It is easy to see that the lemma holds for $F_1,\ldots,F_m$ with $p_i=a_i$, $i=1,\ldots,m$.
\end{proof}
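As a concrete sanity check of the edge-probability property (illustrative only), consider $K_4$, the smallest $2$-connected cubic graph: its three perfect matchings taken with coefficients $1/3$ give the fractional point $(1/3,\ldots,1/3)$, and the complementary $2$-factors contain each edge with probability $2/3$.

```python
from itertools import combinations

# Edges of K_4 on vertices 0, 1, 2, 3.
edges = set(combinations(range(4), 2))

# The three perfect matchings of K_4, each taken with coefficient 1/3.
matchings = [{(0, 1), (2, 3)}, {(0, 2), (1, 3)}, {(0, 3), (1, 2)}]
weights = [1 / 3] * 3

# Every edge lies in exactly one matching, so its total fractional weight is 1/3.
for e in edges:
    assert abs(sum(w for m, w in zip(matchings, weights) if e in m) - 1 / 3) < 1e-9

# Hence each complementary 2-factor F_i = E(K_4) \ M_i contains e with probability 2/3.
factors = [edges - m for m in matchings]
for e in edges:
    assert abs(sum(w for f, w in zip(factors, weights) if e in f) - 2 / 3) < 1e-9

print("every edge of K_4 is covered with probability 2/3")
```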
We now combine Lemmas~\ref{lemma-6types} and~\ref{lemma-2fs}.
\begin{lemma}\label{lemma-expected}
There exists a polynomial-time algorithm that given a clean $2$-connected subcubic graph $G$
outputs a spanning Eulerian subgraph $F$ of $G$ such that $${\rm exc}(F)\le \frac{2n(G)+2n_2(G)}{7}\;.$$
\end{lemma}
\begin{proof}
We first apply the algorithm from Lemma~\ref{lemma-2fs} to get
a collection of $m\le n/2+2$ spanning Eulerian subgraphs $F_1,\ldots,F_m$ and probabilities $p_1,\ldots,p_m$.
We show that
\begin{equation}
{\mathbb E}\;{\rm exc}(F)\le \frac{2n(G)+2n_2(G)}{7}\;,\label{eq-2fs}
\end{equation}
which implies the statement of the lemma since the number of the subgraphs $F_1,\ldots,F_m$ is linear in $n$ and
the excess of each of them can be computed in linear time.
In particular, the algorithm can output the subgraph $F_i$ with the smallest ${\rm exc}(F_i)$.
We now show that (\ref{eq-2fs}) holds.
We apply a double counting argument, which we phrase as a discharging argument.
At the beginning, we assign to each vertex of degree three a charge of $2/7$ and
to each vertex of degree two a charge of $4/7$. Let $c_1(v)$ be the initial charge of a vertex $v$.
Note that the sum of the initial charges of the vertices
is the right side of the inequality (\ref{eq-2fs}).
We next choose a random spanning Eulerian subgraph $F$ among the subgraphs $F_1,\ldots,F_m$
with probabilities given by $p_1,\ldots,p_m$.
The charge of each vertex that is isolated in $F$ is decreased by one unit, and
the charge of each vertex contained in a cycle of length $k$ in $F$ is decreased by $2/k$ units.
Let $c_2(v)$ be the new charge of a vertex $v$.
Observe that the total decrease of charge of the vertices is equal to ${\rm exc}(F)$,
i.e.,
$${\rm exc}(F)=\sum_{v\in V(G)} c_1(v)-c_2(v)\;.$$
Hence, it is enough to prove that
\begin{equation}
{\mathbb E}\;\,\sum_{v\in V(G)} c_2(v)\ge 0.
\label{eq:Ec1}
\end{equation}
To prove (\ref{eq:Ec1}), we consider the expectation of $c_2(v)$ for individual vertices $v$ of $G$.
If $v$ is a vertex of $G$ of degree two,
then every cycle of $G$ that contains $v$ has length at least eight by (CT1).
With probability $1/3$, the vertex $v$ is isolated and loses one unit of charge;
with probability $2/3$, it is contained in a cycle and loses at most $2/8=1/4$ units of charge.
We conclude that
$${\mathbb E}\; c_2(v)\ge \frac{4}{7}-\frac{1}{3}-\frac{2}{3}\cdot\frac{1}{4}=\frac{1}{14}>0\;.$$
If $v$ is a vertex of $G$ of degree three with type $(\ell_1,\ell_2,\ell_3)$,
we proceed as follows.
Since each edge incident with $v$ is contained in $F$ with probability $2/3$,
$v$ is contained in a cycle of $F$ with a particular pair of its neighbors with probability $1/3$.
It follows that the expected value of $c_2(v)$ is at least
$${\mathbb E}\; c_2(v)\ge\frac{2}{7}-\frac{1}{3}\left(\frac{2}{\ell_1}+\frac{2}{\ell_2}+\frac{2}{\ell_3}\right)\;.$$
Since $G$ is clean, the type of $v$ dominates $(6,6,6)$.
If the type of $v$ dominates $(7,7,7)$, then ${\mathbb E}\; c_2(v)\ge 0$.
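Indeed, if all three lengths are at least seven, the lower bound above is non-negative:

```latex
$${\mathbb E}\; c_2(v)\ge\frac{2}{7}-\frac{1}{3}\left(\frac{2}{7}+\frac{2}{7}+\frac{2}{7}\right)
=\frac{2}{7}-\frac{2}{7}=0\;.$$
```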
Hence, we focus on vertices contained in cycles of length six in $G$ in the rest of the proof.
Let $K=v_1\ldots v_6$ be a cycle of length six in $G$.
Since $G$ is clean, each vertex of $K$ has degree three.
Suppose that $K$ is not a $\theta$-cycle.
By (CT2), $K$ is disjoint from all other cycles of length six in $G$.
Observe that
\begin{itemize}
\item if the type of $v_i$ dominates $(6,8,8)$, then ${\mathbb E}\; c_2(v_i)\ge \frac{1}{126}$,
\item if the type of $v_i$ dominates $(6,7,7)$, then ${\mathbb E}\; c_2(v_i)\ge -\frac{1}{63}$,
\item if the type of $v_i$ dominates $(6,7,8)$, then ${\mathbb E}\; c_2(v_i)\ge -\frac{1}{252}$, and
\item if the type of $v_i$ dominates $(6,7,9)$, then ${\mathbb E}\; c_2(v_i)\ge \frac{1}{189}$.
\end{itemize}
Since the type of the cycle $K$ dominates one of the three multisets listed in Lemma~\ref{lemma-6types},
it holds that
$${\mathbb E}\; c_2(v_1)+\cdots+c_2(v_6)\ge 0\;.$$
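This inequality is a direct computation from the four bounds above; for instance, for the first and second multisets,

```latex
$$2\cdot\left(-\frac{1}{63}\right)+4\cdot\frac{1}{126}=0
\qquad\mbox{and}\qquad
-\frac{1}{63}+2\cdot\left(-\frac{1}{252}\right)+3\cdot\frac{1}{126}=\frac{-4-2+6}{252}=0\;,$$
```

and for the third multiset, $-\frac{1}{63}-\frac{1}{252}+2\cdot\frac{1}{189}+2\cdot\frac{1}{126}=\frac{-12-3+8+12}{756}=\frac{5}{756}>0$.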
It remains to analyze the case that $K$ is a $\theta$-cycle.
By symmetry, we can assume that the vertices $v_1$ and $v_4$ are its poles.
Let $x_i$ be the neighbor of $v_i$ outside of $K$, $i=1,\ldots,6$.
Further, let $P=x_6v_6v_1v_2x_2$, $P_1=x_6v_6v_1$ and $P_2=x_2v_2v_1$.
Since each of the paths $P_1$ and $P_2$ is contained in $F$ with probability $1/3$,
the subgraph $F$ contains the path $P$ with probability at most $1/3$;
let $p$ be this probability.
Since $G$ is clean (and so proper), the distance between $x_2$ and $x_3$ in $G-V(K)$ is at least three;
likewise, the distance between $x_5$ and $x_6$ in $G-V(K)$ is at least three.
Hence, any cycle containing $P_1$ or $P_2$ has length at least $10$, and
any cycle containing $P$ has length at least $14$.
Since $F$ contains the path $P$ with probability $p$,
the path $P_1$ but not $P$ with probability $1/3-p$,
the path $P_2$ but not $P$ with probability $1/3-p$, and
neither $P_1$ nor $P_2$ with probability $1/3+p$,
it follows that
$${\mathbb E}\; c_2(v_1)=\frac{2}{7}-p\cdot\frac{1}{7}-2\left(\frac{1}{3}-p\right)\cdot\frac{1}{5}-\left(\frac{1}{3}+p\right)\cdot\frac{1}{3}=
\frac{13}{315}-\frac{8}{105}p\ge\frac{1}{63}\;.$$
The symmetric argument yields that ${\mathbb E}\; c_2(v_4)\ge\frac{1}{63}$.
Since every cycle in $G$ containing the path $P_2$ has length at least $10$,
the type of $v_2$ dominates $(6,6,10)$ and thus ${\mathbb E}\; c_2(v_2)\ge -\frac{1}{315}$.
The same holds for vertices $v_3$, $v_5$ and $v_6$.
Let $Q_1$ be the set of all poles of $\theta$-cycles in $G$, and
let $Q_2$ be the set of vertices contained in a $\theta$-cycle that are not a pole of a (possibly different) $\theta$-cycle.
Since each vertex of $Q_2$ has a neighbor in $Q_1$, it follows that $|Q_2|\le 3|Q_1|$.
The previous analysis yields that
$${\mathbb E}\;\sum_{v\in Q_1\cup Q_2} c_2(v)\ge |Q_1|\left(\frac{1}{63}-3\cdot\frac{1}{315}\right)=\frac{2}{315}|Q_1| \ge 0.$$
Since the set $Q_1\cup Q_2$ and the vertex set of cycles of length six that are not $\theta$-cycles are disjoint,
the inequality (\ref{eq:Ec1}) follows.
\end{proof}
We are ready to prove Theorems~\ref{thm-main} and~\ref{thm-alg}.
\begin{proof}[Proof of Theorem~\ref{thm-main}]
By Observation~\ref{obs-exc},
it is enough to construct a spanning Eulerian subgraph $F$ of $G$ with
$${\rm exc}(F)\le \frac{2(n(G)+n_2(G))}{7}+1.$$
If $G$ is basic, such a subgraph $F$ exists by Observation~\ref{obs-nontarg}, and
can easily be constructed in polynomial time.
If $G$ is not basic,
we can find a reduction $G'$ of $G$ that is either basic or clean in polynomial time by Theorem~\ref{cor-summary}.
If $G'$ is basic, then we find a spanning Eulerian subgraph with
$${\rm exc}(F')\le \frac{2(n(G')+n_2(G'))}{7}+1$$
as in the case when $G$ itself is basic.
If $G'$ is clean, then Lemma~\ref{lemma-expected} yields that
we can construct in polynomial time a spanning Eulerian subgraph $F'$ of $G'$ such that
$${\rm exc}(F')\le \frac{2(n(G')+n_2(G'))}{7}.$$
Since $G'$ is a reduction of $G$, we can find in polynomial time a spanning Eulerian subgraph $F$ of $G$ such that
$${\rm exc}(F)\le {\rm exc}(F')+\frac{\delta(G,G')}{4}\le {\rm exc}(F')+\frac{2\delta(G,G')}{7}\le \frac{2(n(G)+n_2(G))}{7}+1\;,$$
which finishes the proof of the theorem.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm-alg}]
Let $G$ be an input cubic graph and let $n$ be the number of its vertices.
We assume that $G$ is connected since $G$ would not have a TSP walk otherwise.
Let $F$ be the set of bridges of $G$, which can be found in linear time using the standard algorithm based on DFS.
Further, let $G'$ be the graph obtained from $G$ by removing the edges of $F$, and
let $n_0$ and $n_2$ be the number of its vertices of degree zero and two, respectively.
Note that $G'$ has no vertices of degree one
since if two edges incident with a vertex $v$ in a cubic graph are bridges,
then the third edge incident with $v$ is also a bridge.
Finally, let $k$ be the number of non-trivial components of $G'$,
i.e., the components of $G'$ that are not formed by a single vertex.
Observe that the number of vertices of degree two in $G'$ is at most $2k-2$,
i.e., $n_2\le 2k-2$.
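The bridge-finding step can be sketched as follows (a standard DFS low-link computation for simple connected graphs; an illustrative recursive implementation, not part of the proof):

```python
import sys

def bridges(adj):
    # Bridges of a connected simple graph given as adjacency lists (DFS low-link).
    sys.setrecursionlimit(10000)
    n = len(adj)
    order = [-1] * n  # DFS discovery times
    low = [0] * n     # lowest discovery time reachable using at most one back edge
    found = set()
    timer = [0]

    def dfs(v, parent):
        order[v] = low[v] = timer[0]
        timer[0] += 1
        for u in adj[v]:
            if u == parent:
                continue
            if order[u] == -1:
                dfs(u, v)
                low[v] = min(low[v], low[u])
                if low[u] > order[v]:
                    found.add(frozenset((v, u)))  # the tree edge vu is a bridge
            else:
                low[v] = min(low[v], order[u])

    dfs(0, -1)
    return found

# Two triangles joined by the edge 2-3: that edge is the unique bridge.
adj = [[1, 2], [0, 2], [0, 1, 3], [2, 4, 5], [3, 5], [3, 4]]
print(bridges(adj) == {frozenset((2, 3))})  # True
```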
We next apply the algorithm from Theorem~\ref{thm-main} to each non-trivial component of $G'$, and
obtain a collection of $k$ TSP walks such that the sum of their lengths is at most
$$\frac{9}{7}(n-n_0)+\frac{2}{7}n_2-k\;\mbox{.}$$
These $k$ TSP walks can be connected by traversing each of the edges of $F$ twice,
which yields a TSP walk in $G$ of total length at most
\begin{equation}
\frac{9}{7}(n-n_0)+\frac{2}{7}n_2-k+2|F|\;\le\;\frac{9}{7}(n-n_0)+2|F|\;\mbox{.}\label{eq-alg-1}
\end{equation}
The inequality in (\ref{eq-alg-1}) follows from the inequality $n_2\le 2k-2$,
which we have observed earlier in the proof.
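Explicitly, the inequality $n_2\le 2k-2$ gives

```latex
$$\frac{2}{7}n_2-k\le\frac{2}{7}(2k-2)-k=\frac{4k-4-7k}{7}=-\frac{3k+4}{7}\le 0\;.$$
```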
Since any TSP walk in $G$ must have length at least $(n-n_0)+2|F|$,
the upper bound in (\ref{eq-alg-1}) on the length of the constructed TSP walk
is at most $9/7$ times the length of the optimal TSP walk in $G$,
which yields the desired approximation factor of the algorithm.
\end{proof}
\section{Lower bounds}\label{sec-lb}
In this section, we provide two constructions of 2-connected subcubic graphs that illustrate that
the bound claimed in Conjecture~\ref{conj-main} would be the best possible.
The constructions are based on two operations that we analyze in Lemmas~\ref{lemma-drepl} and~\ref{lemma-qrepl}.
\begin{figure}
\begin{center}
\includegraphics{fig-DGv.pdf}
\end{center}
\caption{Replacing a vertex of degree two with a cycle of length four in Lemma~\ref{lemma-drepl}.}\label{fig-DGv}
\end{figure}
\begin{lemma}\label{lemma-drepl}
Let $G$ be a 2-connected subcubic graph,
let $v$ be a vertex of $G$ that has exactly two neighbors, and
let $x$ and $y$ be its two neighbors.
Further, let $G'$ be the graph obtained from $G$ by removing the vertex $v$,
adding a cycle $v_1v_2v_3v_4$ and edges $xv_1$ and $yv_3$ as in Figure~\ref{fig-DGv}.
The graph $G'$ is a 2-connected subcubic graph and
it holds that $n(G')=n(G)+3$, $n_2(G')=n_2(G)+1$ and ${\rm minexc}(G')={\rm minexc}(G)+1$.
\end{lemma}
\begin{proof}
It is clear that $G'$ is a 2-connected subcubic graph such that
$n(G')=n(G)+3$ and $n_2(G')=n_2(G)+1$.
So, we need to show that ${\rm minexc}(G')={\rm minexc}(G)+1$.
We start with showing that ${\rm minexc}(G')\le{\rm minexc}(G)+1$.
Let $F$ be a spanning Eulerian subgraph of $G$ with ${\rm exc}(F)={\rm minexc}(G)$.
We now construct a spanning Eulerian subgraph $F'$ of $G'$.
If $v$ is an isolated vertex in $F$, then $F'$ contains all the edges of $F$ and the cycle $v_1v_2v_3v_4$.
Note that $c(F')=c(F)+1$ and $i(F')=i(F)-1$.
Otherwise, $v$ has degree two in $F$ and we let $F'$ contain the edges $xv_1$, $v_1v_2$, $v_2v_3$, $v_3y$ and
all the edges of $F$ except for $vx$ and $vy$.
In this case, we have that $c(F')=c(F)$ and $i(F')=i(F)+1$.
In both cases, we get that ${\rm exc}(F')={\rm exc}(F)+1={\rm minexc}(G)+1$,
which implies that ${\rm minexc}(G')\le{\rm minexc}(G)+1$.
We next prove that ${\rm minexc}(G)\le{\rm minexc}(G')-1$.
Consider a spanning Eulerian subgraph $F'$ of $G'$ with ${\rm exc}(F')={\rm minexc}(G')$.
We reverse the transformation described in the previous paragraph.
By symmetry, we can assume that $F'$ contains either the cycle $v_1v_2v_3v_4$ or the path $xv_1v_2v_3y$.
In the former case, let $F$ be the spanning Eulerian subgraph of $G$ containing all the edges of $F'$
except for the edges of the cycle $v_1v_2v_3v_4$.
In the latter case, let $F$ be the spanning Eulerian subgraph of $G$ containing the edges $vx$, $vy$ and
all the edges of $F'$ except for the edges $xv_1$, $v_1v_2$, $v_2v_3$, $v_3y$.
In both cases, it holds that ${\rm exc}(F)={\rm exc}(F')-1$,
which implies that ${\rm minexc}(G)\le{\rm minexc}(G')-1$ as desired.
\end{proof}
Repeated applications of the operation described in Lemma~\ref{lemma-drepl}
starting with the graph $K_{2,3}$ yield the following.
\begin{proposition} \label{prop-drepl}
For every integer $n\ge 5$, $n\equiv 2\;{\rm mod}\; 3$,
there exists a $2$-connected subcubic $n$-vertex graph $G$ such that
$${\rm minexc}(G)=\frac{n(G)+n_2(G)}{4}+1.$$
\end{proposition}
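The arithmetic behind Proposition~\ref{prop-drepl} is straightforward: the graph $K_{2,3}$ has $n=5$ and $n_2=3$, so the claimed base value is $\frac{5+3}{4}+1=3$, and each application of Lemma~\ref{lemma-drepl} increases both sides of the identity by one:

```latex
$$\frac{(n+3)+(n_2+1)}{4}+1=\left(\frac{n+n_2}{4}+1\right)+1\;.$$
```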
The second operation is more involved.
A \emph{diamond} in a graph $G$ is an induced subgraph isomorphic to $K_4^-$,
i.e., the graph $K_4$ with one edge removed.
\begin{figure}
\begin{center}
\includegraphics[width=120mm]{fig-QGK.pdf}
\end{center}
\caption{The operation of replacing a diamond analyzed in Lemma~\ref{lemma-qrepl}.}\label{fig-QGK}
\end{figure}
\begin{lemma}\label{lemma-qrepl}
Let $G$ be a 2-connected cubic graph containing a diamond $D$.
Let $v_1$, $v_2$, $w_1$ and $w_2$ be the vertices of the diamond as depicted in Figure~\ref{fig-QGK}, and
let $x_1$ and $x_2$ be the neighbors of $v_1$ and $v_2$ outside of the diamond $D$.
Further, let $G'$ be the graph obtained from $G$ by removing the vertices of the diamond $D$ and
inserting the subgraph depicted in Figure~\ref{fig-QGK}.
The graph $G'$ is a 2-connected cubic graph with $n(G')=n(G)+8$ and ${\rm minexc}(G')={\rm minexc}(G)+2$.
Moreover, the graph $G'$ contains at least two diamonds.
\end{lemma}
\begin{proof}
As in the proof of Lemma~\ref{lemma-drepl},
the only non-trivial assertion of the lemma is that ${\rm minexc}(G')={\rm minexc}(G)+2$.
Let the labels of the vertices be as in Figure~\ref{fig-QGK}.
We start with showing ${\rm minexc}(G')\le{\rm minexc}(G)+2$.
Consider a spanning Eulerian subgraph $F$ of $G$ with ${\rm exc}(F)={\rm minexc}(G)$.
By symmetry, we can assume that the subgraph $F$ contains either the path $x_1v_1w_1w_2v_2x_2$ or the cycle $v_1w_1v_2w_2$.
In the former case, let $F'$ be the spanning subgraph of $G'$ that
contains the path $x_1z_1u_1v^1_1w^1_1w^1_2v^1_2u_2z_2x_2$ and the cycle $v^2_1w^2_1v^2_2w^2_2$
instead of the path $x_1v_1w_1w_2v_2x_2$.
In the latter case, $F'$ is the spanning subgraph of $G'$ that
contains the cycles $z_1u_1v^1_1w^1_1w^1_2v^1_2u_2z_2$ and $v^2_1w^2_1v^2_2w^2_2$
instead of the cycle $v_1w_1v_2w_2$.
In both cases, it holds that $c(F')=c(F)+1$ and $i(F')=i(F)$,
which implies that ${\rm minexc}(G')\le{\rm exc}(F')={\rm exc}(F)+2={\rm minexc}(G)+2$.
We next prove the opposite inequality ${\rm minexc}(G)\le{\rm minexc}(G')-2$.
Let $F'$ be a spanning Eulerian subgraph of $G'$ with ${\rm exc}(F')={\rm minexc}(G')$.
A simple case analysis using that $F'$ has the minimum possible excess yields that
we can assume that $F'$ contains either the path $x_1z_1u_1v^1_1w^1_1w^1_2v^1_2u_2z_2x_2$ and the cycle $v^2_1w^2_1v^2_2w^2_2$ or
the cycles $z_1u_1v^1_1w^1_1w^1_2v^1_2u_2z_2$ and $v^2_1w^2_1v^2_2w^2_2$.
In both cases, we can reverse the operation described in the previous paragraph
to get a spanning Eulerian subgraph $F$ of $G$ with ${\rm exc}(F)={\rm exc}(F')-2$.
It follows that ${\rm minexc}(G)\le{\rm exc}(F)={\rm exc}(F')-2={\rm minexc}(G')-2$ as desired.
\end{proof}
Consider the cubic graph formed by two diamonds and two edges joining the vertices of degree two in different diamonds, and
repeatedly apply the operation described in Lemma~\ref{lemma-qrepl}. This yields the following proposition.
\begin{proposition} \label{prop-qrepl}
For every integer $n\ge 8$, $n\equiv 0\;{\rm mod}\; 8$,
there exists a $2$-connected cubic $n$-vertex graph $G$
with ${\rm minexc}(G)=n/4$.
\end{proposition}
Propositions~\ref{prop-drepl} and~\ref{prop-qrepl}, and Observation~\ref{obs-exc} yield that
neither the coefficient $5/4$ nor the coefficient $1/4$ in Conjecture~\ref{conj-main} can be improved.
Indeed, for every $\alpha<5/4$,
there exist infinitely many 2-connected cubic graphs $G$ with ${\rm tsp}(G)>\alpha n(G)+o(n(G))$ by Proposition~\ref{prop-qrepl}.
Likewise, for every $\beta<1/4$,
there exist infinitely many 2-connected subcubic graphs $G$ with ${\rm tsp}(G)>\frac{5}{4}n(G)+\beta n_2(G)+o(n(G))$.
While neither of the two coefficients in Conjecture~\ref{conj-main} can be improved in general,
it may be possible to prove better bounds under some additional structural assumptions.
In particular, Conjecture~\ref{conj-main} asserts that ${\rm tsp}(G)\le 3n(G)/2$ for 2-connected subcubic graphs,
while Boyd et al.~\cite{boyd} proved that ${\rm tsp}(G)\le 4n(G)/3$ for such graphs $G$,
which is tight up to an additive constant.
\bibliographystyle{siam}
\section{Introduction}
Copula-based models have been widely used in many applied fields such as finance \citep{Embrechts/McNeil/Straumann:2002,Nasri/Remillard/Thioub:2020}, hydrology \citep{Genest/Favre:2007,Zhang/Singh:2019} and medicine \citep{Clayton:1978,deLeon/Wu:2011}, to cite a few. According to Sklar's representation \citep{Sklar:1959}, for a random vector $\mathbf{X} = (X_1,\ldots,X_d)$ with joint distribution function $H$ and margins $F_1, \ldots, F_d$, there exists a non-empty set $\ensuremath{\mathcal{S}}_H$ of copula functions $C$ so that for any $x_1,\ldots, x_d$,
$
H(x_1,\ldots,x_d) = C\{F_1(x_1),\ldots,F_d(x_d)\},\quad \text{ for any } C\in \ensuremath{\mathcal{S}}_H.
$
If all margins are continuous, then $\ensuremath{\mathcal{S}}_H$ contains a unique copula, which is the distribution function of the random vector $\mathbf{U} = (U_1,\ldots,U_d)$, with $U_j = F_j(X_j)$, $j\in \{1,\ldots,d\}$. However, in many applications, discontinuities are often present in one or several margins. Whenever at least one margin is discontinuous, $\ensuremath{\mathcal{S}}_H$ is infinite, and in this case,
any $C \in\ensuremath{\mathcal{S}}_H$ is only uniquely defined on the closure of the range $\ensuremath{\mathcal{R}}_\mathbf{F}= \ensuremath{\mathcal{R}}_{F_1} \times \cdots \times \ensuremath{\mathcal{R}}_{F_d}$ of $\mathbf{F} = (F_1,\ldots,F_d)$, where
$\ensuremath{\mathcal{R}}_{F_j}=\{F_j(y): \; y\in\ensuremath{\mathbb{R}}\}$ is the range of $F_j$, $j\in\{1,\ldots,d\}$. This can lead to identifiability issues raised in
\cite{Faugeras:2017, Geenens:2020,Geenens:2021}, creating also estimation problems that need to be addressed.
However, even if the copula is not unique,
it still makes sense to use a copula family $\{C_{\boldsymbol\theta}: {\boldsymbol\theta}\in \ensuremath{\mathcal{P}}\}$ to define multivariate models, provided one is aware of the possible limitations. Indeed, the copula-based model $\ensuremath{\mathcal{K}}_{\boldsymbol\theta}(\mathbf{x}) = C_{\boldsymbol\theta}\{F_1(x_1),\ldots, F_d(x_d)\}$, ${\boldsymbol\theta}\in\ensuremath{\mathcal{P}}$, is a well-defined family of distributions for which estimating ${\boldsymbol\theta}$ is a challenge. \\
In the literature, the case of discontinuous margins is not always treated properly. It is either ignored or, in some cases, continuous margins are fitted to the data \citep{Chen/Singh/Guo/Mishra/Guo:2013}. This procedure does not solve the problem since there will still be ties.
An explicit example that underlines the problem with ignoring ties is given in Section \ref{sec:est} and Remark \ref{rem:incorrect_method}. A solution proposed in the literature is to use jittering, where data are perturbed by independent random variables, introducing extra variability.
Our first aim is to address the identifiability issues for the model
$\{\ensuremath{\mathcal{K}}_{\boldsymbol\theta}:{\boldsymbol\theta}\in\ensuremath{\mathcal{P}}\}$, when the margins are unknown and arbitrary. Our second aim is to present formal inference methods for the estimation of ${\boldsymbol\theta}$.
More precisely, we will consider a semiparametric approach for the estimation of ${\boldsymbol\theta}$ when the margins are arbitrary, i.e., each margin can be continuous, discrete, or even a mixture of a discrete and a continuous distribution. The estimation approach is based on a pseudo log-likelihood taking into account the discontinuities. We also propose a pairwise composite log-likelihood. In the literature, few articles have focused on the estimation of the copula parameters in the case of noncontinuous margins \citep{Song/Li/Yuan:2009, deLeon/Wu:2011, Zilko/Kurowicka:2016,Ery:2016,Li/Li/Qin/Yan:2020}. Most of them considered only the case where all components are either discrete or continuous, within a fully parametric model, the exceptions being \cite{Ery:2016} and \cite{Li/Li/Qin/Yan:2020}. In \cite{Ery:2016}, in the bivariate discrete case, the author considered a semiparametric approach and studied its asymptotic properties. In \cite{Li/Li/Qin/Yan:2020}, the authors proposed a semiparametric approach for the estimation of the copula parameters for bivariate data with arbitrary distributions, without presenting any asymptotic results or discussing identifiability issues. \\
The article is organized as follows: In Section \ref{sec:limits}, we discuss the important topic of identifiability, as well as the limitations of the copula-based approach for multivariate data with arbitrary margins. Conditions are stated in order to have identifiability so that the estimation problem is well-posed. Next, in Section \ref{sec:est}, we describe the estimation methodology, using the pseudo log-likelihood approach as well as the composite pairwise approach, while the estimation error is studied in Section \ref{ssec:conv1}.
By using an extension of the asymptotic behavior of rank-based statistics \citep{Ruymgaart/Shorack/vanZwet:1972} to data with arbitrary distributions (Theorem \ref{thm:rum_ext}), we show that the limiting distribution of the pseudo log-likelihood estimator is Gaussian. Similar results are obtained for pairwise composite pseudo log-likelihood. In addition, we can obtain similar asymptotic results if the margins are estimated from parametric families, instead of using the empirical margins. This is discussed in Remark \ref{rem:IFM}.
Finally, in Section \ref{sec:exp}, numerical experiments are performed to assess the convergence and precision of the proposed estimators in bivariate and trivariate settings, while in Section \ref{sec:example}, as an example of application, a copula-based regression approach is proposed to investigate the relationship between the duration and severity, using the hydrological data in \cite{Shiau:2006}.
\section{Identifiability and limitations}\label{sec:limits}
As exemplified in \cite{Faugeras:2017, Geenens:2020}, there are several problems and limitations when applying copula-based models to data with arbitrary distributions, the most important being identifiability. This is discussed first. Then, we will examine some limitations of these models.
\subsection{Identifiability}\label{ssec:identify}
For the discussion about identifiability, we consider two cases: copula family identifiability, and copula parameter identifiability.
For copula family identifiability, it may happen that two copulas from different families, say $C_{\boldsymbol\theta}$ and $D_{\boldsymbol\psi}$, belong to $\ensuremath{\mathcal{S}}_H$.
For example, in \cite{Faugeras:2017}, the author considered the bivariate Bernoulli case, whose distribution is completely determined by $h(0,0)=P(X_1=0,X_2=0)$, $p_1=P(X_1=0)$, and $p_2=P(X_2=0)$.
Provided that the copula families $C_\theta$ and $D_\psi$ are rich enough, there exist unique parameters $\theta_0$ and $\psi_0$ so that
$h(0,0) = C_{\theta_0}(p_1,p_2) = D_{\psi_0}(p_1,p_2)$. For calculation purposes, the choice of the copula in this case is immaterial, and there is no possible interpretation of the copula or its parameter or the type of dependence induced by each copula. All that matters is $h(0,0)$, or the associated odds ratio \citep{Geenens:2020}, or Kendall's tau of $C^\maltese$, given in this case by $\tau^\maltese = 2\{h(0,0)-p_{1}p_{2}\}$. Here, $C^\maltese$ is the so-called multilinear copula, depending on the margins and belonging to $\ensuremath{\mathcal{S}}_H$; see, e.g., \cite{Genest/Neslehova/Remillard:2017}. To illustrate the fact that the computations are the same for any $C\in\ensuremath{\mathcal{S}}_H$, consider the conditional distribution of $X_2$ given $X_1=x_1$, given by
\begin{equation}\label{eq:RegressionCop}
P(X_2\le x_2|X_1=x_1) = \left\{
\begin{array}{l}
\left. \dfrac{ \partial} {\partial_{u}} C(u,v)\right|_{u=F_1(x_1), v = F_2(x_2)}, \\
\text{ if $F_1$ is continuous at $x_1$}, \\
\\
\dfrac{C\{F_1(x_1), F_2(x_2)\}-C\{F_1(x_1-),F_2(x_2)\}}{F_1(x_1)-F_1(x_1-)}, \\
\text{ if $F_1$ is not continuous at $x_1$},
\end{array}
\right.
\end{equation}
where $F_1(x_1-) = P(X_1< x_1)$.
The value of expression \eqref{eq:RegressionCop} is independent of the choice of $C\in\ensuremath{\mathcal{S}}_H$, since all copulas in $\ensuremath{\mathcal{S}}_H$ have the same value on the closure of the range $\ensuremath{\mathcal{R}}_\mathbf{F}$.
So apart from the lack of interpretation of the type of dependence, there is no problem for calculations, as long as for the chosen copula family, its parameter is identifiable, as defined next.\\
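The fact that such computations do not depend on the chosen family can be checked numerically. The sketch below is a minimal illustration in Python; the margins $p_1=0.4$, $p_2=0.6$ and the target $h(0,0)=0.35$ are illustrative values, not taken from the text. Calibrating a Clayton and a Frank copula to the same value $C(p_1,p_2)=h(0,0)$ yields the same conditional probability $P(X_2= 0\mid X_1=0)=C(p_1,p_2)/p_1$ from \eqref{eq:RegressionCop}.

```python
import math

p1, p2 = 0.4, 0.6   # P(X1 = 0), P(X2 = 0): illustrative values
h00 = 0.35          # target h(0,0), inside the Frechet bounds (0, 0.4)

def clayton(u, v, t):
    # Clayton copula, t > 0
    return (u ** (-t) + v ** (-t) - 1.0) ** (-1.0 / t)

def frank(u, v, t):
    # Frank copula, t > 0
    return -math.log1p(math.expm1(-t * u) * math.expm1(-t * v)
                       / math.expm1(-t)) / t

def calibrate(C, lo=1e-4, hi=50.0):
    # bisection for t solving C_t(p1, p2) = h00; both families are
    # strictly increasing in t here, so the root is unique
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if C(p1, p2, mid) < h00 else (lo, mid)
    return 0.5 * (lo + hi)

t_clayton, t_frank = calibrate(clayton), calibrate(frank)

# Second branch of the conditional distribution formula at x1 = x2 = 0:
# P(X2 <= 0 | X1 = 0) = {C(p1, p2) - C(0, p2)}/{p1 - 0} = C(p1, p2)/p1
cond_clayton = clayton(p1, p2, t_clayton) / p1
cond_frank = frank(p1, p2, t_frank) / p1
```

Both calibrated copulas return the same conditional probability $h(0,0)/p_1 = 0.875$, even though the two calibrated parameters live on completely different scales.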
We now consider the identifiability of the copula parameter for a given copula family $\{C_{\boldsymbol\theta} : {\boldsymbol\theta}\in\ensuremath{\mathcal{P}}\}$.
Since one of the aims is to estimate the parameter of the family $\{\ensuremath{\mathcal{K}}_{\boldsymbol\theta}=C_{\boldsymbol\theta}\circ \mathbf{F}: {\boldsymbol\theta}\in\ensuremath{\mathcal{P}}\}$, the following definition of identifiability is needed.
\begin{defi}\label{def:identity0}
For a copula family $\{C_{\boldsymbol\theta}: {\boldsymbol\theta}\in\ensuremath{\mathcal{P}}\}$ and a vector of margins $\mathbf{F} = (F_1,\ldots,F_d)$, the parameter ${\boldsymbol\theta}$ is identifiable with respect to $\mathbf{F}$
if the mapping ${\boldsymbol\theta}\mapsto \ensuremath{\mathcal{K}}_{\boldsymbol\theta} = C_{\boldsymbol\theta}\circ \mathbf{F} $ is injective, i.e., if for ${\boldsymbol\theta}_1, {\boldsymbol\theta}_2 \in \ensuremath{\mathcal{P}}$, ${\boldsymbol\theta}_1\neq {\boldsymbol\theta}_2$ implies that $\ensuremath{\mathcal{K}}_{{\boldsymbol\theta}_1} \neq \ensuremath{\mathcal{K}}_{{\boldsymbol\theta}_2}$. This is equivalent to the existence of $\mathbf{u}\in \ensuremath{\mathcal{R}}_\mathbf{F}^*$ such that $C_{{\boldsymbol\theta}_1}(\mathbf{u}) \neq C_{{\boldsymbol\theta}_2} (\mathbf{u})$, where for a vector of margins $\mathbf{G}$,
$$
\ensuremath{\mathcal{R}}_{\mathbf{G}}^* = \left\{\mathbf{u} \in \ensuremath{\mathcal{R}}_{\mathbf{G}} \cap (0,1]^d : u_j < 1 \text{ for at least two indices } j\right\}.
$$
\end{defi}
The following result, proven in the Supplementary Material A,
is essential for checking that a given mapping is injective whenever $\ensuremath{\mathcal{R}}_\mathbf{F}^*$ is finite.
\begin{prop}\label{prop:rolle}
Suppose $\mathbf{T}$ is a continuously differentiable mapping from a convex open set $O\subset \ensuremath{\mathbb{R}}^p$ to $\ensuremath{\mathbb{R}}^q$. If $p>q$, then $\mathbf{T}$ is not injective. Also, $\mathbf{T}$ is injective if the rank of the derivative $\mathbf{T}'$ is $p\le q$.
Furthermore, if the rank of $\mathbf{T}'({\boldsymbol\theta}_0)$ is $p$ for some ${\boldsymbol\theta}_0 \in O$, the rank of $\mathbf{T}'({\boldsymbol\theta})$ is $p$ for any ${\boldsymbol\theta}$ in some neighborhood of ${\boldsymbol\theta}_0$. Finally, if the maximal rank is $r<p$, attained at some ${\boldsymbol\theta}_0\in O$, then the rank of $\mathbf{T}'$ is $r$ in some neighborhood of ${\boldsymbol\theta}_0$, and $\mathbf{T}$ is not injective.
\end{prop}
In practice, the margins $\mathbf{F}$ are unknown, and since $\mathbf{F}_n = (F_{n1},\ldots,F_{nd})$ converges to $\mathbf{F}$,
where $\displaystyle F_{nj}(y) = \dfrac{1}{n+1}\sum_{i=1}^n \ensuremath{\mathbb{I}}(X_{ij}\le y)$, $y\in \ensuremath{\mathbb{R}}$, $j\in \{1,\ldots,d\}$,
one could verify that the parameter is identifiable with respect to $\mathbf{F}_n$, i.e., that the mapping ${\boldsymbol\theta}\mapsto C_{\boldsymbol\theta}\circ \mathbf{F}_n$ is injective. The latter does not necessarily imply identifiability with respect to $\mathbf{F}$ but if the parameter is not identifiable with respect to $\mathbf{F}_n$, one should choose another parametric family $C_{\boldsymbol\theta}$. \\
Next, the cardinality of $\ensuremath{\mathcal{R}}_{\mathbf{F}_n}^*$ is
$q_n= m_{n1} \times \cdots \times m_{nd} - \sum_{j=1}^d m_{nj} +d-1$, where $m_{nj}$ is the size of the support of $F_{nj}$, $j\in\{1,\ldots,d\}$. Let $ \{\mathbf{u}_i: 1\le i\le q_n\}$ be an enumeration of $\ensuremath{\mathcal{R}}_{\mathbf{F}_n}^* $. As a result,
${\boldsymbol\theta}\mapsto C_{\boldsymbol\theta}\circ \mathbf{F}_n$ is injective iff the mapping ${\boldsymbol\theta}\mapsto \mathbf{T}_n({\boldsymbol\theta}) = \{ C_{\boldsymbol\theta}(\mathbf{u}_1), \ldots, C_{\boldsymbol\theta}(\mathbf{u}_{q_n})\}^\top$ is injective.
There are two cases to consider: $p>q_n$ or $p\le q_n$.
\begin{itemize}
\item First, one should not choose a copula family with $p>q_n$, since in this case, according to Proposition
\ref{prop:rolle}, the mapping $\mathbf{T}_n$ cannot be injective, so the parameter is not identifiable.\\
\item Next, if $p\le q_n$, it follows from Proposition \ref{prop:rolle}
that $\mathbf{T}_n$ is injective if $\mathbf{T}_n'$ has rank $p$. Also, it follows from Proposition \ref{prop:rolle}
that if the rank of $\mathbf{T}_n'({\boldsymbol\theta})$ is $p$, then there is a neighborhood $\ensuremath{\mathcal{N}} \subset \ensuremath{\mathcal{P}}$ of ${\boldsymbol\theta}$ for which the rank of $\mathbf{T}_n'(\tilde{\boldsymbol\theta})$ is $p$, for any $\tilde{\boldsymbol\theta}\in\ensuremath{\mathcal{N}}$. One could then restrict the estimation to this neighborhood if necessary. Note that the matrix $\mathbf{T}_n'({\boldsymbol\theta})$ is composed of the (column) gradient vectors $\dot C_{\boldsymbol\theta}(\mathbf{u}_j)$, $j \in \{1,\ldots, q_n\}$. The latter can be calculated explicitly or approximated numerically. To check that the rank of $\mathbf{T}_n'$ is $p$,
one needs to choose a bounded neighborhood $\ensuremath{\mathcal{N}}$ for ${\boldsymbol\theta}$, choose an appropriate covering of $\ensuremath{\mathcal{N}}$ by balls of radius $\delta$, with respect to an appropriate metric, corresponding to a given accuracy $\delta$ needed for the estimation, and then compute the rank of $\mathbf{T}_n'({\boldsymbol\theta})$ for all the centers ${\boldsymbol\theta}$ of the covering balls.
\end{itemize}
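The numerical rank check just described can be implemented in a few lines. The sketch below is an illustration in Python; the BB1-type two-parameter copula, the two evaluation points, and the tolerance are illustrative assumptions. It approximates the gradients $\dot C_{\boldsymbol\theta}(\mathbf{u}_j)$ by central finite differences and tests whether the resulting matrix has full rank $p=2$.

```python
def bb1(u, v, theta, delta):
    # BB1-type two-parameter copula (theta > 0, delta >= 1), used here
    # only as an example of a family with p = 2
    w = (u ** (-theta) - 1.0) ** delta + (v ** (-theta) - 1.0) ** delta
    return (1.0 + w ** (1.0 / delta)) ** (-1.0 / theta)

def gradients(points, theta, delta, h=1e-5):
    # rows of T_n'(theta, delta): central finite differences of
    # (theta, delta) -> C(u_j) at each evaluation point u_j
    rows = []
    for u, v in points:
        d_th = (bb1(u, v, theta + h, delta) - bb1(u, v, theta - h, delta)) / (2 * h)
        d_de = (bb1(u, v, theta, delta + h) - bb1(u, v, theta, delta - h)) / (2 * h)
        rows.append((d_th, d_de))
    return rows

def rank_p2(rows, tol=1e-8):
    # rank of an m x 2 matrix: 2 iff some 2 x 2 minor exceeds the tolerance
    for i in range(len(rows)):
        for j in range(i + 1, len(rows)):
            if abs(rows[i][0] * rows[j][1] - rows[i][1] * rows[j][0]) > tol:
                return 2
    return 1 if any(abs(x) > tol for r in rows for x in r) else 0

points = [(0.3, 0.4), (0.7, 0.8)]
J = gradients(points, theta=1.0, delta=2.0)
```

With the two points, the Jacobian has rank $2$, so the parameter is locally identifiable there; with a single point ($q_n=1<p$), rank $2$ is impossible, in line with the case $p>q_n$ of Proposition \ref{prop:rolle}.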
\begin{example}\label{ex:bernoulli} In the bivariate Bernoulli case, $q=q_n=1$, so a copula parameter is identifiable if for any fixed $\mathbf{u}\in (0,1)^2$, the mapping ${\boldsymbol\theta}\mapsto C_{\boldsymbol\theta}(\mathbf{u})$ is injective.
As a result of the previous discussion, $p=1$, i.e., the parameter should be 1-dimensional and for any fixed $\mathbf{u}\in (0,1)^2$, the mapping $\theta\mapsto C_\theta(\mathbf{u})$ must be strictly monotonic since $\partial_\theta C_\theta(\mathbf{u})$ must be always positive or always negative.
\end{example}
\begin{example}\label{ex:monotone}
A 1-dimensional parameter is identifiable for any $\mathbf{F}$ for the following copula families and their rotations: the bivariate Gaussian copula, the Plackett copula, as well as the multivariate Archimedean copulas with one parameter like the Clayton, Frank, Gumbel, Joe. This is because for any fixed $\mathbf{u}\in (0,1)^d$, ${\boldsymbol\theta}\mapsto C_{\boldsymbol\theta}(\mathbf{u})$ is strictly monotonic. Note also that by \cite[Theorem A2]{Joe:1990}, every meta-elliptic copula $C_{\boldsymbol\rho}$, with ${\boldsymbol\rho}$ the correlation matrix parameter, is ordered with respect to $\rho_{ij}$.
\end{example}
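The strict monotonicity invoked in Example \ref{ex:monotone} is easy to verify numerically. The sketch below is an illustration in Python; the fixed point $\mathbf{u}=(0.5,0.5)$ and the parameter grid are arbitrary choices. It checks that $\theta\mapsto C_\theta(\mathbf{u})$ is strictly increasing for the Clayton and Frank families.

```python
import math

def clayton(u, v, t):
    # Clayton copula, t > 0
    return (u ** (-t) + v ** (-t) - 1.0) ** (-1.0 / t)

def frank(u, v, t):
    # Frank copula, t > 0
    return -math.log1p(math.expm1(-t * u) * math.expm1(-t * v)
                       / math.expm1(-t)) / t

u, v = 0.5, 0.5                         # any fixed point of (0, 1)^2
grid = [0.2 * k for k in range(1, 51)]  # t in (0, 10], illustrative grid

clayton_vals = [clayton(u, v, t) for t in grid]
frank_vals = [frank(u, v, t) for t in grid]

# strict monotonicity of t -> C_t(u, v) gives injectivity, hence
# identifiability even from a single point of R_F^*
increasing = (all(a < b for a, b in zip(clayton_vals, clayton_vals[1:]))
              and all(a < b for a, b in zip(frank_vals, frank_vals[1:])))
```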
\begin{remark}[Student copula]\label{rem:student}
There are cases where $p<q_n$ is needed. For example, for the bivariate Student copula, $q_n \ge 3>p=2$ is sometimes necessary. To see why,
note that the point $\mathbf{u} = (1/2,1/2)$ determines $\rho$ uniquely, since for any $\nu>0$, $C_{\rho,\nu}\left(\dfrac{1}{2},\dfrac{1}{2}\right)= \dfrac{1}{4}+\dfrac{1}{2\pi}\arcsin{\rho}$. Take for example $\rho=0.3$. Then the mapping $\nu\mapsto C_{0.3,\nu}(0.75,0.55)$ is not injective: the equation $C_{0.3,\nu}(0.75,0.55) = 0.452$ has two solutions, namely $\nu=0.224965$ and $\nu = 0.79944$, so the rank of $\mathbf{T}'(\rho,\nu)$ is
$1 < p$ at the points $\mathbf{u}_1=(0.5,0.5)$ and $\mathbf{u}_2 = (0.75,0.55)$.
However, if $\nu$ is restricted to a finite set like $\{k:\; 1\le k \le 50\} $, then the mapping is injective. This restriction of the values of $\nu$ makes sense if one wants to use the Student copula. In fact, exact values of $C_{\rho,\nu}$ can be computed explicitly only when $\nu$ is an integer \citep{Dunnett/Sobel:1954}; this is the case, for example, in the R package \textit{copula}. Otherwise, one needs to use numerical integration \citep{Genz:2004}, which might lead to numerical inconsistencies when differentiating the copula with respect to $\nu$ in a given open interval.
\end{remark}
\subsection{Limitations}\label{ssec:limits}
Having discussed identifiability issues, we are now in a position to discuss limitations of the copula-based approach for modeling multivariate data. Two main issues have been identified when one or some margins have discontinuities: interpretation and dependence on margins.
\subsubsection{Interpretation}
As exemplified in the extreme case of the bivariate Bernoulli distribution discussed in Example \ref{ex:bernoulli}, interpretation of the copula or its parameter can be hopeless.
Recall that the object of interest is the family $\{\ensuremath{\mathcal{K}}_{\boldsymbol\theta} : {\boldsymbol\theta}\in\ensuremath{\mathcal{P}}\}$, not the copula family. Even if $X_1$ has only one atom at $0$ and $X_2$ is continuous, one only ``knows'' $ C_{\boldsymbol\theta}$ on $\bar \ensuremath{\mathcal{R}}= \{[0,F_1(0-)]\cup [F_1(0),1]\}\times [0,1]$. For example, how can we interpret the form of dependence induced by a Gaussian copula or a Student copula restricted to such $\bar\ensuremath{\mathcal{R}}$? Apart from tail dependence, there is not much one can say.
As in the continuous bivariate case, one could plot a large sample, say $n=1000$, but the resulting scatter plot is not very informative. This is even worse if the support of $H$ is finite. As an illustration, we generated 1000 pairs of observations from Gaussian, Student(2.5), Clayton, and Gumbel copulas with $\tau=0.7$, where the margin $F_1$ is a mixture of a Gaussian and an atom at $0$ (with probability $0.1$), and $F_2$ is Gaussian. The scatter plots are given in Figure \ref{fig:simgaussian}, while the scatter plots of the associated pseudo-observations $\left( F_{n1}(X_{i1}), F_{n2}(X_{i2}) \right)$ are displayed in Figure \ref{fig:pseudos}. The raw datasets in Figure \ref{fig:simgaussian} do not say much, apart from the fact that $X_1$ has an atom at $0$, while Figure \ref{fig:pseudos} illustrates the fact that the copula is only known on $\bar \ensuremath{\mathcal{R}}$. All the graphs are similar except the one from the Clayton copula. Note that in general, the scatter plot of pseudo-observations is an approximation of the graph of the pairs $\left(F_1\circ F_1^{-1}(U_{i1}), F_2\circ F_2^{-1}(U_{i2})\right)$, where $(U_{i1},U_{i2})\sim C$. So one can see that even if there is only one atom, one cannot really interpret the type of dependence.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.2]{Gaussian.jpg}
\caption{Scatter plots of 1000 pairs from Gaussian, Student(2.5), Clayton, and Gumbel copulas with $\tau=0.7$, where $F_1$ is a mixture of a Gaussian (with probability $0.9$) and a Dirac at $0$, and $F_2$ is Gaussian. }
\label{fig:simgaussian}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.2]{pseudos.jpg}
\caption{Scatter plots of the pseudo-observations from the data sets illustrated in Figure \ref{fig:simgaussian}.}
\label{fig:pseudos}
\end{figure}
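The construction behind Figures \ref{fig:simgaussian}--\ref{fig:pseudos} can be sketched in a few lines. The snippet below uses only the Python standard library for the Gaussian-copula case; the choices $\rho=\sin(\pi\tau/2)$ with $\tau=0.7$ and atom weight $0.1$ follow the figure captions, while the seed is an arbitrary assumption. It simulates the sample and computes the normalized ranks, showing that all observations tied at the atom share a single pseudo-observation.

```python
import math
import random
from statistics import NormalDist

random.seed(12345)                    # arbitrary seed (assumption)
nd = NormalDist()
n = 1000
rho = math.sin(math.pi * 0.7 / 2.0)   # Gaussian copula with Kendall's tau = 0.7

def F1_inv(u):
    # quantile function of F1(x) = 0.9*Phi(x) + 0.1*1{x >= 0}:
    # a standard Gaussian with an atom of mass 0.1 at 0
    if u <= 0.45:
        return nd.inv_cdf(u / 0.9)
    if u < 0.55:
        return 0.0                    # F1 jumps from 0.45 to 0.55 at 0
    return nd.inv_cdf((u - 0.1) / 0.9)

sample = []
for _ in range(n):
    z1 = random.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * random.gauss(0.0, 1.0)
    sample.append((F1_inv(nd.cdf(z1)), z2))   # F2 = Phi, so X2 = z2

def pseudo(col):
    # normalized ranks F_n(x) = #{y <= x}/(n + 1); ties share one value
    return [sum(1 for y in col if y <= x) / (n + 1) for x in col]

u1 = pseudo([x for x, _ in sample])
n_atom = sum(1 for x, _ in sample if x == 0.0)
atom_pseudo = {w for (x, _), w in zip(sample, u1) if x == 0.0}
```

All pairs with $X_{i1}=0$ (about $10\%$ of the sample) are mapped to the single value $F_{n1}(0)\approx F_1(0)=0.55$, which produces the vertical strip of tied pseudo-observations visible in Figure \ref{fig:pseudos}.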
\subsubsection{Dependence on margins}\label{sssec:margins}
In the bivariate Bernoulli case, \cite{Geenens:2020} proposed the odds ratio $\omega$ as a ``margin-free'' measure of dependence. In fact, \cite{Geenens:2020} quotes Item 3 of \cite[Theorem 6.3 (p. 116)]{Rudas:2018} which says that if ${\boldsymbol\theta}_1 = (p_1,p_2)$ are the margins' parameters and $\theta$ is a parameter, whose range does not depend on ${\boldsymbol\theta}_1$, called a variation independent parametrization in \cite{Rudas:2018}, and if $({\boldsymbol\theta}_1,\theta)$ determines the full distribution $H$, then $\theta$ is a one-to-one function of the odds ratio $\omega$. However, as a proof of this statement, \cite{Rudas:2018} says that it is because $({\boldsymbol\theta}_1,\omega)$ determines $H$. His argument only proves that $\theta$ is a one-to-one function of $\omega$ for a fixed ${\boldsymbol\theta}_1$, i.e., for fixed margins, not that it is a one-to-one function of $\omega$ alone. In fact, according to Rudas' definition, taking the Gaussian copula $C_\rho$, $\rho\in [-1,1]$, it follows that $({\boldsymbol\theta}_1,\rho)$ is also a variation independent parametrization of $H$, by simply setting $H(p_1,p_2) =C_\rho(p_1,p_2)$. Therefore, $\rho$ qualifies as a ``margin-free'' association measure as well, if one agrees that margin-free means ``variation independent in the sense of \cite{Rudas:2018}''.
Note that this construction also works for any rich enough copula family such as Clayton, Frank, or Plackett, and that it also applies to all margins, not only Bernoulli margins. In fact, the property of being one-to-one is the same as what we defined as ``parameter identifiability'' in Section \ref{ssec:identify}.
\section{Estimation of parameters}\label{sec:est}
In this section, we will show how to estimate the parameter ${\boldsymbol\theta}$ of the family of multivariate distributions $\{\ensuremath{\mathcal{K}}_{\boldsymbol\theta} = C_{\boldsymbol\theta}\circ F: {\boldsymbol\theta}\in \ensuremath{\mathcal{P}}\}$, where $\ensuremath{\mathcal{P}}\subset \ensuremath{\mathbb{R}}^p$ is convex. It is assumed that the given copula family $\{ C_{\boldsymbol\theta}: {\boldsymbol\theta}\in \ensuremath{\mathcal{P}}\}$ satisfies the following assumption.
\begin{hyp}\label{hyp:cop}
${\boldsymbol\theta} \mapsto C_{\boldsymbol\theta} $ on $\ensuremath{\mathcal{P}}$ is thrice continuously differentiable with respect to ${\boldsymbol\theta}$; for any ${\boldsymbol\theta}\in \ensuremath{\mathcal{P}}$, the density $c_{\boldsymbol\theta}$ exists, is thrice continuously differentiable, and is strictly positive on $(0,1)^d$.
Furthermore, for a given vector of margins $\mathbf{F}$, ${\boldsymbol\theta} \mapsto C_{\boldsymbol\theta}\circ \mathbf{F} $ is injective on $\ensuremath{\mathcal{P}}$.
\end{hyp}
Suppose that $\mathbf{X}_1,\ldots,\mathbf{X}_n$ are iid with $\mathbf{X}_i\sim \ensuremath{\mathcal{K}}_{{\boldsymbol\theta}_0}$, for some ${\boldsymbol\theta}_0\in\ensuremath{\mathcal{P}}$. Without loss of generality, we may suppose that $\mathbf{X}_i = \mathbf{F}^{-1}(\mathbf{U}_i)$, where
$\mathbf{U}_1,\ldots,\mathbf{U}_n$ are iid with $\mathbf{U}_i\sim C_{{\boldsymbol\theta}_0}$.
Estimating parameters for arbitrary distributions is more challenging than in the continuous case.
For continuous (unknown) margins, copula parameters can be estimated using different approaches, the most popular ones based on pseudo log-likelihood being the normalized ranks method \citep{Genest/Ghoudi/Rivest:1995,Shih/Louis:1995} and the IFM method (Inference Functions for Margins) \citep{Joe/Xu:1996, Joe:2005}.
Since the margins are unknown, it is tempting to ignore that some margins are discontinuous and consider maximizing the (averaged) pseudo log-likelihood $
L_n({\boldsymbol\theta}) = \dfrac{1}{n}\sum_{i=1}^n \log{ c_{\boldsymbol\theta}(\mathbf{U}_{n,i})}$, $\mathbf{U}_{n,i}=\mathbf{F}_{n}(\mathbf{X}_{i})$, $i\in\{1,\ldots,n\}$.
Note that one could also replace the nonparametric margins with parametric ones, as in the IFM approach.
However, in either case, there is a problem with using $L_n$ when there are atoms. To see this, consider a simple bivariate model with Bernoulli margins, i.e., $P(X_{ij}=0)= p_j$, $P(X_{ij}=1)=1-p_j$, $j\in \{1,2\}$. Then, the full (averaged) log-likelihood is
\begin{eqnarray*}
\ell_n^\star ({\boldsymbol\theta},p_1,p_2) &=& h_n(0,0)\log\{C_{\boldsymbol\theta}(p_{1},p_{2})\}+
h_n(0,1)\log\{p_{1} - C_{\boldsymbol\theta}(p_{1},p_{2})\}\\
&&
+ h_n(1,0)\log\{p_{2} - C_{\boldsymbol\theta}(p_{1},p_{2})\} + h_n(1,1)\log\{1-p_{1} -p_{2}+ C_{\boldsymbol\theta}(p_{1},p_{2})\},
\end{eqnarray*}
where $h_n(x_1,x_2) = \dfrac{1}{n}\sum_{i=1}^n
\ensuremath{\mathbb{I}}\{X_{i1}=x_1,X_{i2}=x_2\}$. Hence, the pseudo log-likelihood $\ell_n$ for ${\boldsymbol\theta}$, when $p_1$ and $p_2$ are estimated by
$p_{n1} = h_n(0,0)+h_n(0,1)$ and $p_{n2} = h_n(0,0)+h_n(1,0)$, is
\begin{eqnarray*}
{\ell}_n ({\boldsymbol\theta})&=& h_n(0,0)\log\{C_{\boldsymbol\theta}(p_{n1},p_{n2})\}+
h_n(0,1)\log\{p_{n1} - C_{\boldsymbol\theta}(p_{n1},p_{n2})\}\\
&&
+ h_n(1,0)\log\{p_{n2} - C_{\boldsymbol\theta}(p_{n1},p_{n2})\} + h_n(1,1)\log\{1-p_{n1} -p_{n2}+ C_{\boldsymbol\theta}(p_{n1},p_{n2})\}.
\end{eqnarray*}
It is clear that under general conditions, when the parameter is 1-dimensional,
and the copula family is rich enough,
maximizing $\ell_n$ produces a consistent estimator for ${\boldsymbol\theta}$ while maximizing $L_n$ will not.
In fact, for $\ell_n$, the estimator $\theta_n$ of $\theta$ is the solution of $C_\theta(p_{n1},p_{n2}) = h_n(0,0)$ \citep[Example 13]{Genest/Neslehova:2007}.
Note also that $\ell_n$ converges almost surely to
\begin{eqnarray*}
\ell_\infty({\boldsymbol\theta}) &=& h(0,0)\log\{C_{\boldsymbol\theta}(p_{1},p_{2})\}+
h(0,1)\log\{p_{1} - C_{\boldsymbol\theta}(p_{1},p_{2})\}\\
&& \qquad + h(1,0)\log\{p_{2} - C_{\boldsymbol\theta}(p_{1},p_{2})\}
+ h(1,1)\log\{1-p_{1} -p_{2}+ C_{\boldsymbol\theta}(p_{1},p_{2})\},
\end{eqnarray*}
where $h(i,j)=P(X_1=i,X_2=j)$, $i,j\in \{0,1\}$.
However, the estimator obtained by maximizing $L_n$ is not consistent in general.
In fact, for any copula $C_\theta$ with a density $c_\theta$ which is continuous and non-vanishing on $(0,1]^2$,
\begin{eqnarray*}
L_n ({\boldsymbol\theta}) &= & h_n(0,0)\log{ c_\theta\left(\dfrac{n}{n+1}p_{n,1}, \dfrac{n}{n+1}p_{n,2}\right)}+
h_n(1,0)\log{c_\theta\left(\dfrac{n}{n+1}, \dfrac{n}{n+1}p_{n,2}\right)}\\
&&\quad+ h_n(0,1)\log{c_\theta\left(\dfrac{n}{n+1}p_{n,1}, \dfrac{n}{n+1}\right)}+ h_n(1,1)\log{c_\theta\left(\dfrac{n}{n+1}, \dfrac{n}{n+1}\right)}
\end{eqnarray*}
converges almost surely to
\begin{eqnarray*}
L_\infty ({\boldsymbol\theta}) &=& h(0,0)\log{c_\theta\left(p_1, p_2\right)}+
h(1,0)\log{c_\theta\left(1, p_2\right)} \\
&& \qquad + h(0,1)\log{c_\theta\left(p_1, 1 \right)}+ h(1,1)\log{c_\theta\left( 1,1\right)}.
\end{eqnarray*}
In particular, for a Clayton copula with $\theta_0=2$, one gets
\begin{eqnarray*}
L_\infty ({\boldsymbol\theta}) &=& \log(1+\theta)+ \theta ( p_1\log{p_1} +p_2\log{p_2}) \\
&& \qquad +h(0,0)(1+2\theta)
\left\{\log{C_\theta(p_1,p_2)} -\log(p_1 p_2)\right\},
\end{eqnarray*}
with $h(0,0) = C_{\theta_0}(1/2,1/2) = 1/\sqrt{7}$. As displayed in Figure \ref{fig:claytonbernoulli}, for the limit $L_\infty$ of $L_n$, the supremum is attained at $\theta= 4.9439$, while for the limit $\ell_\infty$ of $\ell_n$, the supremum is attained at the correct value $\theta=\theta_0=2$.
\BB{\begin{figure}[ht!]
\centering
\includegraphics[scale=0.25]{ClaytonBL} \includegraphics[scale=0.25]{ClaytonBEll}
\caption{Graphs of $L_\infty(\theta)$ (left panel) and $\ell_\infty(\theta)$ (right panel) for the bivariate Bernoulli case with Clayton copula ($\theta_0=2$). }
\label{fig:claytonbernoulli}
\end{figure}}
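The comparison in Figure \ref{fig:claytonbernoulli} can be checked numerically. The sketch below (Python; the grid over $\theta\in(0,10]$ and its step are illustrative) evaluates the two limiting criteria for the Clayton copula with $p_1=p_2=1/2$ and $\theta_0=2$, and locates their maximizers by a grid search: $\ell_\infty$ peaks at $\theta_0=2$, while $L_\infty$ peaks near $4.94$.

```python
import math

p1 = p2 = 0.5
theta0 = 2.0

def clayton(u, v, t):
    # Clayton copula, t > 0
    return (u ** (-t) + v ** (-t) - 1.0) ** (-1.0 / t)

h00 = clayton(p1, p2, theta0)        # = C_2(1/2, 1/2) = 1/sqrt(7)
h01, h10 = p1 - h00, p2 - h00
h11 = 1.0 - p1 - p2 + h00

def ell_inf(t):
    # limit of the pseudo log-likelihood that accounts for the atoms
    C = clayton(p1, p2, t)
    return (h00 * math.log(C) + h01 * math.log(p1 - C)
            + h10 * math.log(p2 - C) + h11 * math.log(1.0 - p1 - p2 + C))

def L_inf(t):
    # limit of the naive criterion that treats the data as continuous
    C = clayton(p1, p2, t)
    return (math.log(1.0 + t) + t * (p1 * math.log(p1) + p2 * math.log(p2))
            + h00 * (1.0 + 2.0 * t) * (math.log(C) - math.log(p1 * p2)))

grid = [0.05 * k for k in range(1, 201)]     # theta in (0, 10]
t_ell = max(grid, key=ell_inf)               # maximizer of ell_inf
t_L = max(grid, key=L_inf)                   # maximizer of L_inf
```

The grid search returns the true value $\theta_0=2$ for $\ell_\infty$, whereas the maximizer of $L_\infty$ lands far away, near $4.94$, confirming the inconsistency of the naive criterion.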
\begin{remark}\label{rem:incorrect_method}
This simple example shows that one must be very careful in estimating copula parameters when the margins are not continuous. In particular, one should not use the usual pseudo-MLE method based on $L_n$ or the IFM method with continuous margins fitted to data with ties.
\end{remark}
\subsection{Pseudo log-likelihoods}\label{sec:lik}
For any $j\in \{1,\ldots,d\}$, let $\ensuremath{\mathcal{A}}_j = \{x\in \ensuremath{\mathbb{R}}: \Delta F_j(x)>0\}$ be the (countable) set of atoms of $F_j$, where for a general univariate distribution function $\ensuremath{\mathcal{G}}$, $\Delta \ensuremath{\mathcal{G}}(x) = \ensuremath{\mathcal{G}}(x) - \ensuremath{\mathcal{G}}({x}-)$, with $\ensuremath{\mathcal{G}}({x}-) = \lim_{n\to\infty}\ensuremath{\mathcal{G}}(x-1/n)$. Throughout this paper, we assume that $\ensuremath{\mathcal{A}}_j$ is closed.
For $j\in\{1,\ldots,d\}$, $\mu_{cj}$ denotes the counting measure on $\ensuremath{\mathcal{A}}_j$, and let $\ensuremath{\mathcal{L}}$ be Lebesgue's measure; both measures are defined on $(\ensuremath{\mathbb{R}},\ensuremath{\mathcal{B}}_\ensuremath{\mathbb{R}})$, and $\mu_{cj}+\ensuremath{\mathcal{L}}$, also defined on $(\ensuremath{\mathbb{R}},\ensuremath{\mathcal{B}}_\ensuremath{\mathbb{R}})$, is $\sigma$-finite.
Further
let $\mu$ be the product measure on $\ensuremath{\mathbb{R}}^d$ defined by $\mu = (\mu_{c1}+\ensuremath{\mathcal{L}})\times \cdots \times (\mu_{cd}+\ensuremath{\mathcal{L}})$, which is also $\sigma$-finite. In what follows, $\ensuremath{\mathcal{G}}^{-1}(u)$, $u\in (0,1)$, is defined as the left-continuous inverse
$
\ensuremath{\mathcal{G}}^{-1}(u) = \inf\{x: \ensuremath{\mathcal{G}}(x)\ge u\}$.
Since our aim is to estimate the parameter of the copula family without knowing the margins, the following assumption is necessary.
\begin{hyp}\label{hyp:margins}
The margins $F_1,\ldots, F_d$ do not depend on the parameter
${\boldsymbol\theta} \in \ensuremath{\mathcal{P}} \subset \ensuremath{\mathbb{R}}^p$. In addition, for any $j\in \{1,\ldots,d\}$, $F_j$ has density $f_j$ with respect to the measure $\mu_{cj}+\ensuremath{\mathcal{L}}$.
\end{hyp}
What is meant in Assumption \ref{hyp:margins} is that if the margins were parametric (e.g., Poisson), their parameters are not related to the parameter of the copula. This assumption is needed when one wants to estimate the margins first, and then estimate ${\boldsymbol\theta}$.
Assumption \ref{hyp:margins} really means that $\nabla_{\boldsymbol\theta} F_j(x_j) \equiv 0$, which is a natural assumption. This assumption is also implicit in the continuous case.\\
To estimate the copula parameters, one could use at least two different pseudo log-likelihoods: an informed one, if $\ensuremath{\mathcal{A}}_1,\ldots, \ensuremath{\mathcal{A}}_d$ are known, and a non-informed one, if some atoms are not known. The latter is the approach proposed in \cite{Li/Li/Qin/Yan:2020} in the bivariate case. From a practical point of view, it is easier to implement, since there is no need to define the atoms. However, it requires more assumptions, and one could argue that in practice, atoms should be known.
Before writing these pseudo log-likelihoods, we need to introduce some notations.\\
Suppose that $C$ is a copula having a continuous density $c$ on $(0,1)^d$. For any $B\subset \{1,\ldots,d\}$ and for any $\mathbf{u} = (u_1,\ldots,u_d) \in (0,1)^d$, set $\partial_B C (\mathbf{u}) = \{\prod_{j\in B}\partial_{u_j} \}C(\mathbf{u})$, where $\partial_{\{1,\ldots,d\}} C(\mathbf{u}) = c(\mathbf{u})$ is the density of $C_{\boldsymbol\theta}$ and $\partial_\emptyset C(\mathbf{u}) = C(\mathbf{u})$. Finally, for any vector $\mathbf{G} = (\ensuremath{\mathcal{G}}_1,\ldots,\ensuremath{\mathcal{G}}_d)$ of margins, and any $B\subset \{1,\ldots,d\} $, set
\begin{equation}\label{eq:FB}
\left(\tilde \mathbf{G}^{(B)}(\mathbf{x})\right)_j =
\left\{
\begin{array}{ll}
\ensuremath{\mathcal{G}}_{j}({x_{j}}-) & \text{, if } j \in B;\\
\ensuremath{\mathcal{G}}_{j}(x_{j}) & \text{, if } j \in B^\complement = \{1,\ldots,d\}\setminus B.
\end{array}\right.
\end{equation}
Under Assumptions \ref{hyp:cop}--\ref{hyp:margins}, it follows from Equation (3) in the Supplementary Material B that the full (averaged) log-likelihood is given by
\begin{equation*}
\ell_{n}^\star({\boldsymbol\theta}) %
= \dfrac{1}{n} \sum_{A\subset \{1,\ldots,d\}} \sum_{i=1}^n J_A(\mathbf{X}_i) \log{K_{A,{\boldsymbol\theta}}(\mathbf{X}_i)} + \dfrac{1}{n} \sum_{A\subset \{1,\ldots,d\}}\sum_{i=1}^n \sum_{j \in A^\complement}\log f_j(X_{ij}),
\end{equation*}
where, for any $A\subset \{1,\ldots,d\}$, $J_A(\mathbf{x})= \left[\prod_{j\in A} \ensuremath{\mathbb{I}}\{x_j\in \ensuremath{\mathcal{A}}_j \}\right] \left[\prod_{j\in A^\complement} \ensuremath{\mathbb{I}}\{x_j\not\in \ensuremath{\mathcal{A}}_j \}\right]$ and
\begin{equation}\label{eq:GAtheta}
K_{A,{\boldsymbol\theta}} (\mathbf{x})= \sum_{B\subset A} (-1)^{| B|} \partial_{A^\complement} C_{\boldsymbol\theta}\left\{\tilde\mathbf{F}^{(B)}(\mathbf{x})\right\},
\end{equation}
with the usual convention that a product over the empty set is $1$.
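For intuition, the signed sum over subsets $B\subset A$ in \eqref{eq:GAtheta} can be coded directly. The sketch below (Python; the helper name and the test copula are our own illustration, not from the paper) enumerates subsets with \texttt{itertools}; for the independence copula and $A$ equal to the full index set, the alternating sum telescopes to the product of the marginal jumps, which gives a quick sanity check.

```python
from itertools import chain, combinations
import math

def signed_subset_sum(A, F_left, F_right, C):
    """sum_{B subset A} (-1)^{|B|} C(u^(B,A)), where coordinate j of the
    argument is F_left[j] (a left limit) if j in B, and F_right[j] otherwise."""
    subsets = chain.from_iterable(combinations(A, r) for r in range(len(A) + 1))
    total = 0.0
    for B in subsets:
        u = [F_left[j] if j in B else F_right[j] for j in range(len(F_right))]
        total += (-1) ** len(B) * C(u)
    return total

# Sanity check with the independence copula: for A = {0,...,d-1}, the
# alternating sum telescopes to prod_j (F_right[j] - F_left[j]).
F_left, F_right = [0.2, 0.5, 0.1], [0.4, 0.7, 0.3]
indep = lambda u: math.prod(u)
value = signed_subset_sum((0, 1, 2), F_left, F_right, indep)
```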
If the margins were known or estimated first, then maximizing
$\ell_n^\star({\boldsymbol\theta})$ with respect to ${\boldsymbol\theta}$ would be the same as maximizing
$\displaystyle
\sum_{A\subset \{1,\ldots,d\}} \sum_{i=1}^n J_A(\mathbf{X}_i) \log{K_{A,{\boldsymbol\theta}}(\mathbf{X}_i)}$,
since by Assumption \ref{hyp:margins}, the margins do not depend on ${\boldsymbol\theta}$.
As a result, replacing the unknown margins by the empirical estimators, one gets the ``informed'' (averaged) pseudo log-likelihood $\ell_n$, defined by
\begin{equation}\label{eq:ll2d}
\ell_{n}({\boldsymbol\theta}) %
= \dfrac{1}{n} \sum_{A\subset \{1,\ldots,d\}} \sum_{i=1}^n J_A(\mathbf{X}_i) \log{K_{n,A,{\boldsymbol\theta}}(\mathbf{X}_i)},
\end{equation}
where for any $A\subset \{1,\ldots,d\}$,
$\displaystyle K_{n,A,{\boldsymbol\theta}}(\mathbf{x})= \sum_{B\subset A} (-1)^{| B|} \partial_{A^\complement} C_{\boldsymbol\theta}\left\{\tilde\mathbf{F}_{n}^{(B)}(\mathbf{x})\right\}$,
and
$ \tilde\mathbf{F}_{n}^{(B)}(\mathbf{x})$ is defined by \eqref{eq:FB}, with $\mathbf{G}=\mathbf{F}_n$.
Setting ${\boldsymbol\theta}_n= \displaystyle \arg\max_{{\boldsymbol\theta}\in\ensuremath{\mathcal{P}}} \ell_n({\boldsymbol\theta})$, define $ {\boldsymbol\Theta}_n = n^{1/2}({\boldsymbol\theta}_n-{\boldsymbol\theta}_0)$.
As \cite{Li/Li/Qin/Yan:2020} did in the bivariate case, we can also define the ``non-informed'' (averaged) pseudo log-likelihood $\tilde\ell_{n}({\boldsymbol\theta})$ as
\begin{equation*}
\tilde\ell_{n}({\boldsymbol\theta}) %
= \dfrac{1}{n} \sum_{A\subset \{1,\ldots,d\}} \sum_{i=1}^n J_{n,A}(\mathbf{X}_i) \log {K_{n,A,{\boldsymbol\theta}}(\mathbf{X}_i)},
\end{equation*}
where $J_{n,A}(\mathbf{x})=\left[\prod_{j\in A} \ensuremath{\mathbb{I}}\{\Delta F_{nj}(x_j) > 1/(n+1) \}\right] \left[\prod_{j\in A^\complement} \ensuremath{\mathbb{I}}\{\Delta F_{nj}(x_j) = 1/(n+1) \}\right]$ is an approximation of $J_A(\mathbf{x})$. Of course, for a given $i$, if $X_{ij}$ is an atom, then, when $n$ is large enough, $\Delta F_{nj}(X_{ij})> 1/(n+1)$. However, for a given $n$, it is possible that $\Delta F_{nj}(X_{ij})= 1/(n+1)$ even when $X_{ij}$ is an atom.
If the real value of the parameter is ${\boldsymbol\theta}_0$ and $\tilde{\boldsymbol\theta}_n= \displaystyle \arg\max_{{\boldsymbol\theta} \in \ensuremath{\mathcal{P}}} \tilde\ell_n({\boldsymbol\theta})$, we set $\tilde {\boldsymbol\Theta}_n = n^{1/2}\left(\tilde {\boldsymbol\theta}_n-{\boldsymbol\theta}_0\right)$.
\begin{example}\label{ex:bivLL}
If $d=2$, then
$$
\begin{array}{lll}
K_{n,\emptyset,{\boldsymbol\theta}}(x_1,x_2) &=& c_{\boldsymbol\theta}\{F_{n1}(x_1),F_{n2}(x_2)\},\label{eq:0}\\
K_{n,\{1\},{\boldsymbol\theta}}(x_1,x_2) &=& \partial_{u_2}C_{\boldsymbol\theta}\{F_{n1}(x_1),F_{n2}(x_2)\}-\partial_{u_2}C_{\boldsymbol\theta}\{F_{n1}(x_1-),F_{n2}(x_2)\},\label{eq:LL1}\\
K_{n,\{2\},{\boldsymbol\theta}}(x_1,x_2) &=& \partial_{u_1}C_{\boldsymbol\theta}\{F_{n1}(x_1),F_{n2}(x_2)\}-\partial_{u_1}C_{\boldsymbol\theta}\{F_{n1}(x_1),F_{n2}(x_2-)\},\label{eq:LL2}\\
K_{n,\{1,2\},{\boldsymbol\theta}}(x_1,x_2) &=&
C_{\boldsymbol\theta}\{F_{n1}(x_1),F_{n2}(x_2)\}-C_{\boldsymbol\theta}\{F_{n1}(x_1-),F_{n2}(x_2)\} \nonumber\\
&& \qquad -
C_{\boldsymbol\theta}\{F_{n1}(x_1),F_{n2}(x_2-)\}+C_{\boldsymbol\theta}\{F_{n1}(x_1-),F_{n2}(x_2-)\}
,\label{eq:LL12}\\
\end{array}
$$
and
\begin{eqnarray*}\label{eq:tilde_ll2d}
\tilde\ell_n({\boldsymbol\theta}) &=& \dfrac{1}{n} \sum_{i=1}^n \left[ J_{n,\emptyset}(\mathbf{X}_i) \log{K_{n,\emptyset,{\boldsymbol\theta}}(\mathbf{X}_i)}+J_{n,\{1\}}(\mathbf{X}_i)\log{ K_{n,\{1\},{\boldsymbol\theta}}(\mathbf{X}_i) } \right.\\
&& \qquad
\left. + J_{n,\{2\}}(\mathbf{X}_i)\log{K_{n,\{2\},{\boldsymbol\theta}}(\mathbf{X}_i)} +J_{n,\{1,2\}}(\mathbf{X}_i) \log{K_{n,\{1,2\},{\boldsymbol\theta}}(\mathbf{X}_i)}\right].\nonumber
\end{eqnarray*}
\end{example}
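To make the example concrete, the four quantities above can be evaluated in closed form for, say, the Clayton copula, whose conditional distribution $\partial_{u_2}C_{\boldsymbol\theta}$ and density $c_{\boldsymbol\theta}$ are standard. The Python sketch below is our own illustration (not code from the paper); it checks the closed-form derivatives by finite differences and verifies that all four $K$ terms are positive on a test point.

```python
th = 2.0  # Clayton parameter (Kendall's tau = th/(th+2) = 0.5)

def C(u, v):                     # Clayton copula
    return (u**-th + v**-th - 1.0) ** (-1.0 / th)

def dC2(u, v):                   # partial derivative in the second argument
    return v**(-th - 1) * (u**-th + v**-th - 1.0) ** (-1.0 / th - 1)

def dC1(u, v):                   # partial derivative in the first argument
    return u**(-th - 1) * (u**-th + v**-th - 1.0) ** (-1.0 / th - 1)

def dens(u, v):                  # copula density c_theta
    return (th + 1) * (u * v)**(-th - 1) * (u**-th + v**-th - 1.0) ** (-1.0 / th - 2)

def K_terms(a1, b1, a2, b2):
    """The four K_{n,A,theta} of Example 1, with a_j = F_nj(x_j-), b_j = F_nj(x_j)."""
    return {
        "empty": dens(b1, b2),
        "{1}":   dC2(b1, b2) - dC2(a1, b2),
        "{2}":   dC1(b1, b2) - dC1(b1, a2),
        "{1,2}": C(b1, b2) - C(a1, b2) - C(b1, a2) + C(a1, a2),
    }
```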
\begin{remark}\label{rem:Li1}
In \cite{Li/Li/Qin/Yan:2020}, instead of using $F_{nj}(X_{ij}-)$, the authors used $F_{nj}(X_{ij}-)+\dfrac{1}{n+1}$. The difference between our pseudo log-likelihood and theirs is negligible. However, the choice we propose seems more natural and simplifies notations in the multivariate case, which was not considered in \cite{Li/Li/Qin/Yan:2020}.
\end{remark}
Finally, one can see that computing the pseudo log-likelihoods $\ell_n$ or $\tilde\ell_n$ might be cumbersome when $d$ is large. To overcome this problem, we propose to use a pairwise composite pseudo log-likelihood. For a review on this approach in other settings, see, e.g., \cite{Varin/Reid/Firth:2011}. See also \cite{Oh/Patton:2016} for the composite method in a particular copula context.
Here, the pairwise composite pseudo log-likelihood is simply defined as
$\displaystyle
\check \ell_n = \sum_{1\le k < l\le d} \ell_n^{(k,l)}$,
where $\ell_n^{(k,l)}$ is the pseudo log-likelihood defined by \eqref{eq:ll2d} for the pairs $(X_{ik},X_{il})$, $i\in\{1,\ldots,n\}$. One can also replace $\ell_n$ by $\tilde\ell_n$ in the previous expression.
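In code, the composite criterion is just a loop over pairs. A minimal sketch (Python; \texttt{pair\_ll} stands for any bivariate pseudo log-likelihood such as \eqref{eq:ll2d}, and the names are ours):

```python
from itertools import combinations

def composite_ll(data, pair_ll, theta):
    """check-ell_n(theta): sum over pairs (k, l) of a bivariate pseudo
    log-likelihood evaluated on the projected sample."""
    d = len(data[0])
    return sum(pair_ll([(row[k], row[l]) for row in data], theta)
               for k, l in combinations(range(d), 2))
```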
If $\check {\boldsymbol\theta}_n= \displaystyle \arg\max_{{\boldsymbol\theta} \in \ensuremath{\mathcal{P}}} \check\ell_n({\boldsymbol\theta})$, set $\check {\boldsymbol\Theta}_n = n^{1/2}(\check {\boldsymbol\theta}_n-{\boldsymbol\theta}_0)$.
The asymptotic behavior of the estimators ${\boldsymbol\theta}_n$, $\tilde{\boldsymbol\theta}_n$, and $\check{\boldsymbol\theta}_n$ is studied next.
\subsection{Notations and other assumptions}\label{ssec:results}
For sake of simplicity, the gradient column vector of a function $g$ with respect to ${\boldsymbol\theta}$ is often denoted $\dot g$ or $\nabla_{\boldsymbol\theta} g$, while the associated Hessian matrix is denoted by $\ddot g$ or $\nabla_{\boldsymbol\theta}^2 g$.
Before stating the main convergence results, for any $A\subset \{1,\ldots,d\}$, $0\le a_j < b_j \le 1$, $j\in A$, $u_j\in (0,1)$, $j\in A^\complement$, define
\begin{equation}\label{eq:varphi-cop}
{\boldsymbol\varphi}_{A,{\boldsymbol\theta}}\left(\mathbf{a},\mathbf{b},\mathbf{u}\right) = \frac{\sum_{B\subset A}(-1)^{|B|}\partial_{A^\complement}\dot C_{\boldsymbol\theta}\left((\mathbf{a},\mathbf{b},\mathbf{u})^{(B,A)}\right)}{\sum_{B\subset A}(-1)^{|B|}\partial_{A^\complement} C_{\boldsymbol\theta}\left((\mathbf{a},\mathbf{b},\mathbf{u})^{(B,A)}\right)},
\end{equation}
where for $B\subset A$,
$\left((\mathbf{a},\mathbf{b},\mathbf{u})^{(B,A)}\right)_j =
\left\{
\begin{array}{ll}
a_j ,& j\in B, \\
b_j, & j \in A\setminus B,\\
u_j , & j\in A^\complement.
\end{array}
\right.$
In particular, ${\boldsymbol\varphi}_{\emptyset,{\boldsymbol\theta}}(\mathbf{u}) = \frac{\dot c_{\boldsymbol\theta}(\mathbf{u})}{c_{\boldsymbol\theta}(\mathbf{u})}$, and if $A=\{1,\ldots, k\}$, $k\in\{1,\ldots,d\}$, one has
\begin{multline*}
{\boldsymbol\varphi}_{A,{\boldsymbol\theta}}(\mathbf{a},\mathbf{b},\mathbf{u}) = {\boldsymbol\varphi}_{A,{\boldsymbol\theta}}(a_1,\ldots,a_k,b_1,\ldots,b_k, u_{k+1},\ldots, u_{d}) \\
= \frac{\int_{a_1}^{b_1} \cdots \int_{a_k}^{b_k} \dot c_{\boldsymbol\theta} (s_1,\ldots,s_k,u_{k+1},\ldots,u_d)ds_1 \cdots d s_k}{\int_{a_1}^{b_1} \cdots \int_{a_k}^{b_k}c_{\boldsymbol\theta}(s_1,\ldots,s_k,u_{k+1},\ldots,u_d)ds_1 \cdots d s_k}.
\end{multline*}
Next, set
$\mathbf{H}_{A,{\boldsymbol\theta}}(\mathbf{x})=
J_A(\mathbf{x}) {\boldsymbol\varphi}_{A,{\boldsymbol\theta}}\left\{\mathbf{F}_{A}(\mathbf{x}-), \mathbf{F}_{A}(\mathbf{x}), \mathbf{F}_{A^\complement}(\mathbf{x}) \right\}$.
Finally, set
$\ensuremath{\mathcal{K}}=\ensuremath{\mathcal{K}}_{{\boldsymbol\theta}_0}$ and
$\mathbf{H}_{\boldsymbol\theta} = \sum_{A\subset \{1,\ldots,d\}} \mathbf{H}_{A,{\boldsymbol\theta}}$.
\begin{hyp}\label{hyp:varphi}
The functions ${\boldsymbol\varphi}_{A,{\boldsymbol\theta}}$, $A\subset \{1,\ldots,d\}$, satisfy Assumptions \ref{hyp:varphiApp}--\ref{hyp:est}.
\end{hyp}
Next, to be able to deal with $\tilde\ell_n$ and the associated composite pseudo log-likelihood, one needs extra assumptions. First, one needs to control the gradient of the likelihood.
\begin{hyp}\label{hyp:max} There is a neighborhood $\ensuremath{\mathcal{N}}$ of ${\boldsymbol\theta}_0$ such that
for any $A\subset \{1,\ldots,d\}$,
$$
n^{-1/2}\max_{{\boldsymbol\theta}\in\ensuremath{\mathcal{N}}} \max_{1\le i\le n}
\left| \frac{
\sum_{B\subset A} (-1)^{|B|} \partial_{A^\complement}\dot C_{{\boldsymbol\theta}}\left\{\mathbf{F}_n^{(B)}(\mathbf{X}_i)\right\}}{\sum_{B\subset A} (-1)^{|B|} \partial_{A^\complement}C_{{\boldsymbol\theta}}\left\{\mathbf{F}_n^{(B)}(\mathbf{X}_i)\right\}}\right| \stackrel{Pr}{\to} 0.
$$
\end{hyp}
Second, in order to measure the errors one makes by considering $J_{n,A}$ instead of $J_A$, set
$$
\ensuremath{\mathcal{E}}_{nj}= \sum_{i=1}^n \ensuremath{\mathbb{I}}\{\Delta F_{nj}(X_{ij}) = 1/(n+1)\}\ensuremath{\mathbb{I}}(X_{ij}\in \ensuremath{\mathcal{A}}_j).
$$
Then, $\displaystyle E(\ensuremath{\mathcal{E}}_{nj}) = V_n(F_j) = n\sum_{x\in \ensuremath{\mathcal{A}}_j}\Delta F_j(x)\{1-\Delta F_j(x)\}^{n-1}$, $j\in\{1,\ldots,d\}$.
\begin{hyp}\label{hyp:support}
For any $j\in\{1,\ldots,d\}$, $\displaystyle
\limsup_{n\to\infty}V_n(F_j) <\infty$.
\end{hyp}
Assumption \ref{hyp:support} means that the average number of indices $i$ so that $ \Delta F_{nj}(X_{ij})=1/(n+1)$ when $X_{ij} \in \ensuremath{\mathcal{A}}_j$ is bounded.
This assumption holds true when the discrete part of the margin is a finite discrete distribution, a Geometric distribution, a Negative Binomial distribution, or a Poisson distribution, using Remark 1
in the Supplementary Material C.
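Numerically, $V_n(F_j)$ is easy to evaluate for a given discrete margin. The sketch below (Python; our own check, not from the paper) confirms that for a Poisson margin the quantity stays bounded in $n$, in line with Assumption \ref{hyp:support}.

```python
from math import exp, factorial

def poisson_pmf(lam, k):
    return exp(-lam) * lam**k / factorial(k)

def V_n(pmf, support, n):
    """V_n(F) = n * sum_x DeltaF(x) * (1 - DeltaF(x))^(n-1)."""
    return n * sum(p * (1.0 - p) ** (n - 1)
                   for p in (pmf(x) for x in support) if p > 0)

# Poisson(5) margin, with the support truncated far in the right tail.
vals = [V_n(lambda k: poisson_pmf(5.0, k), range(80), n) for n in (100, 1000, 10000)]
```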
\subsection{Convergence of estimators}\label{ssec:conv1}
Recall that for any $j\in \{1,\ldots,d\}$ and any $i\in \{1,\ldots,n\}$, $X_{ij} = F_{j}^{-1}(U_{ij})$, where
$\mathbf{U}_1 = (U_{11}, \ldots, U_{1d})$, $\ldots, \mathbf{U}_n = (U_{n1}, \ldots, U_{nd})$ are iid observations from copula $C_{{\boldsymbol\theta}_0}$. Also, let $P_{{\boldsymbol\theta}_0}$ be the associated probability distribution of $\mathbf{X}_i$, for any $i\in \{1,\ldots,n\}$.\\
Now, for any $j\in\{1,\ldots,d\}$, and any $y\in \ensuremath{\mathbb{R}}$, $
F_{nj}(y) = B_{nj}\circ F_j(y)$,
where
$\displaystyle
B_{nj}(v) = \dfrac{1}{n+1}\sum_{i=1}^n \ensuremath{\mathbb{I}}(U_{ij}\le v)$, $v\in [0,1]$.
Note that if for some $i\in \{1,\ldots,n\}$, $j\in\{1,\ldots,d\}$, $X_{ij}\not\in \ensuremath{\mathcal{A}}_j$, i.e., $\Delta F_j(X_{ij})=0$, then $F_{nj}(X_{ij}) = B_{nj}(U_{ij})$.
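The rescaled empirical margins and their left limits are simple to compute. A small sketch (Python; the helper names are ours) showing that a value observed exactly once has jump $1/(n+1)$, the threshold used in $J_{n,A}$:

```python
def F_n(sample, y):
    """Rescaled empirical cdf: F_n(y) = #{i : X_i <= y} / (n + 1)."""
    return sum(x <= y for x in sample) / (len(sample) + 1)

def F_n_left(sample, y):
    """Left limit: F_n(y-) = #{i : X_i < y} / (n + 1)."""
    return sum(x < y for x in sample) / (len(sample) + 1)

def jump(sample, y):
    """Jump of the rescaled empirical cdf at y."""
    return F_n(sample, y) - F_n_left(sample, y)
```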
It is well-known that the processes $\ensuremath{\mathbb{B}}_{nj}(u_j) = n^{1/2}\{B_{nj}(u_j)-u_j\}$, $u_j\in [0,1]$, $j\in\{1,\ldots,d\}$, converge jointly in $C([0,1])$ to $\ensuremath{\mathbb{B}}_j$, denoted $\ensuremath{\mathbb{B}}_{nj} \rightsquigarrow \ensuremath{\mathbb{B}}_j$, where $\ensuremath{\mathbb{B}}_j$ are Brownian bridges, i.e., $\ensuremath{\mathbb{B}}_1, \ldots, \ensuremath{\mathbb{B}}_d$ are continuous centered Gaussian processes with
$
{\rm Cov}\{\ensuremath{\mathbb{B}}_j(s),\ensuremath{\mathbb{B}}_k(t)\}
= P_{{\boldsymbol\theta}_0}(U_{1j}\le s, U_{1k}\le t)-st$, $s,t\in [0,1]$.
In particular, for any $j\in\{1,\ldots,d\}$, ${\rm Cov}\{\ensuremath{\mathbb{B}}_j(s),\ensuremath{\mathbb{B}}_j(t)\} = \min(s,t)-st$.
Before stating the main convergence results for the estimation errors ${\boldsymbol\Theta}_n = n^{1/2}({\boldsymbol\theta}_n-{\boldsymbol\theta}_0) $ and
$\tilde{\boldsymbol\Theta}_n = n^{1/2}\left(\tilde {\boldsymbol\theta}_n-{\boldsymbol\theta}_0\right) $, for any $A\subset \{1,\ldots,d\}$, define ${\boldsymbol\zeta}_{A,i} = \mathbf{H}_{A,{\boldsymbol\theta}_0}(\mathbf{X}_i)$ and set $\displaystyle {\boldsymbol\zeta}_i = \sum_{A\subset\{1,\ldots,d\}}{\boldsymbol\zeta}_{A,i}$.
Next, set $\displaystyle \ensuremath{\mathcal{W}}_{n,1} = n^{-1/2}\sum_{i=1}^n {\boldsymbol\zeta}_i$ and let
\begin{eqnarray*}
\ensuremath{\mathcal{W}}_{n,2}
&=& - \sum_{j=1}^d \sum_{A\not\ni j} \int c_{{\boldsymbol\theta}_0}(\mathbf{u}) {\boldsymbol\eta}_{A,j,2}(\mathbf{u})\ensuremath{\mathbb{B}}_{nj}(u_{j})d\mathbf{u}\\
&& \quad - \sum_{j=1}^d \sum_{x_j\in \ensuremath{\mathcal{A}}_j} \ensuremath{\mathbb{B}}_{nj}\{F_j(x_j)\}\sum_{A\ni j} \int c_{{\boldsymbol\theta}_0}(\mathbf{u}) {\boldsymbol\eta}_{A,j,2+}(x_j,\mathbf{u})d\mathbf{u} \\
&& \qquad
+ \sum_{j=1}^d \sum_{x_j\in \ensuremath{\mathcal{A}}_j} \ensuremath{\mathbb{B}}_{nj}\{F_j(x_j-)\} \sum_{A\ni j} \int c_{{\boldsymbol\theta}_0}(\mathbf{u}) {\boldsymbol\eta}_{A,j,2-}(x_j, \mathbf{u})d\mathbf{u},
\end{eqnarray*}
where the functions ${\boldsymbol\eta}_{A,j}, {\boldsymbol\eta}_{A,j,\pm}, {\boldsymbol\eta}_{A,j,1,\pm}, {\boldsymbol\eta}_{A,j,2\pm}$ are defined in Appendix \ref{app:eta}.
Finally, set $\ensuremath{\mathcal{W}}_{n,0} = n^{-1/2}\sum_{i=1}^n \dfrac{\dot c_{{\boldsymbol\theta}_0}(\mathbf{U}_i)}{c_{{\boldsymbol\theta}_0}(\mathbf{U}_i)}$.
Basically, $\ensuremath{\mathcal{W}}_{n,1}$ is what one should have if the margins were known, while $\ensuremath{\mathcal{W}}_{n,2}$ is the price to pay for not knowing the margins. Finally, $\ensuremath{\mathcal{W}}_{n,0}$ is needed for obtaining bootstrapping results \citep{Genest/Remillard:2008, Nasri/Remillard:2019}.
The following result is a consequence of Theorem \ref{thm:gen_est} and Theorem \ref{thm:zeta}, proven in Appendix \ref{app:Zest} and Appendix \ref{app:zeta} respectively.
\begin{theorem}\label{thm:main}
Under Assumptions \ref{hyp:cop}--\ref{hyp:varphi}, ${\boldsymbol\Theta}_n $ converges in law to ${\boldsymbol\Theta} = \ensuremath{\mathcal{J}}_1^{-1}(\ensuremath{\mathcal{W}}_1+\ensuremath{\mathcal{W}}_2)$,
where $\ensuremath{\mathcal{W}}_1 \sim N(0,\ensuremath{\mathcal{J}}_1)$, $E\left(\ensuremath{\mathcal{W}}_1 \ensuremath{\mathcal{W}}_0^\top\right) = \ensuremath{\mathcal{J}}_1$, and $E\left(\ensuremath{\mathcal{W}}_2 \ensuremath{\mathcal{W}}_0^\top\right) = 0$.
If in addition Assumptions \ref{hyp:max}--\ref{hyp:support} hold true,
then $\tilde{\boldsymbol\Theta}_n-{\boldsymbol\Theta}_n$ converges in probability to $0$, so $\tilde{\boldsymbol\Theta}_n$ converges in law to ${\boldsymbol\Theta}$.
\end{theorem}
\begin{cor}\label{cor:reg}
${\boldsymbol\theta}_n$ and $\tilde {\boldsymbol\theta}_n$ are regular estimators of ${\boldsymbol\theta}_0$ in the sense that ${\boldsymbol\Theta}_n$ and $\tilde {\boldsymbol\Theta}_n$ converge to ${\boldsymbol\Theta}$ and $E\left({\boldsymbol\Theta} \ensuremath{\mathcal{W}}_0^\top \right) = I_d$.
\end{cor}
\begin{proof}
Theorem \ref{thm:main} yields
$ E\left( {\boldsymbol\Theta} \ensuremath{\mathcal{W}}_0^\top \right) = \ensuremath{\mathcal{J}}_1^{-1}E \left\{ (\ensuremath{\mathcal{W}}_1+\ensuremath{\mathcal{W}}_2) \ensuremath{\mathcal{W}}_0^\top \right\} = I_d$.
\end{proof}
Having obtained the asymptotic behavior of ${\boldsymbol\theta}_n$ and $\tilde{\boldsymbol\theta}_n$, it is easy to obtain the following result.
\begin{cor}\label{cor:LLcomp}
Under Assumptions \ref{hyp:cop}--\ref{hyp:varphi},
$\check {\boldsymbol\Theta}_n = n^{1/2}(\check{\boldsymbol\theta}_n-{\boldsymbol\theta}_0) $ converges in law to
$$
\check{\boldsymbol\Theta} = \left(\sum_{1\le k<l\le d} \ensuremath{\mathcal{J}}^{(k,l)}\right)^{-1} \sum_{1\le k<l\le d} \ensuremath{\mathcal{W}}^{(k,l)},
$$
where $\ensuremath{\mathcal{W}}^{(k,l)} = \ensuremath{\mathcal{W}}_1^{(k,l)}+ \ensuremath{\mathcal{W}}_2^{(k,l)}$ are defined as $\ensuremath{\mathcal{W}}_1$ and $\ensuremath{\mathcal{W}}_2$ but restricted to the pairs $(X_{ik},X_{il})$, $i\in\{1,\ldots,n\}$. Moreover, $\check{\boldsymbol\theta}_n$ is regular. The same result holds if $\ell_n$ is replaced by $\tilde\ell_n$, provided that, in addition, Assumptions \ref{hyp:max}--\ref{hyp:support} hold true.
\end{cor}
\begin{remark}\label{rem:IFM}
The previous results hold if one uses parametric margins instead of nonparametric margins, provided the margins are smooth enough and the estimated parameters converge in law. In fact, assume that for $j\in\{1,\ldots,d\}$, $F_{nj} = F_{j,{\boldsymbol\gamma}_{nj}}$, $F_{j} = F_{j,{\boldsymbol\gamma}_{0j}}$, and
$\ensuremath{\mathbb{F}}_{nj}=n^{1/2} \left\{F_{nj} - F_{j}\right\}$ converges in law to $\ensuremath{\mathbb{F}}_j = {\boldsymbol\Gamma}_j^\top \dot F_j$, with
$\dot F_j = \left. \nabla_{{\boldsymbol\gamma}_j}F_{j,{\boldsymbol\gamma}_j}\right|_{{\boldsymbol\gamma}_j={\boldsymbol\gamma}_{0j}}$, and ${\boldsymbol\Gamma}_j$ is a centered Gaussian random vector.
Then, under Assumptions \ref{hyp:cop}--\ref{hyp:varphi}, replacing respectively $\ensuremath{\mathbb{B}}_{j}(u_j)$, $\ensuremath{\mathbb{B}}_{j}\{F_j(x_j)\}$, and $\ensuremath{\mathbb{B}}_{j}\{F_j(x_j-)\}$ with
$\ensuremath{\mathbb{F}}_{j}\left\{F_j^{-1}(u_j)\right\}$,
$\ensuremath{\mathbb{F}}_{j}(x_j)$, and $\ensuremath{\mathbb{F}}_{j}(x_j-)$, one obtains the analogs of Theorem \ref{thm:main} and Corollaries \ref{cor:reg}--\ref{cor:LLcomp}.
\end{remark}
\section{Numerical experiments}\label{sec:exp}
In this section, we study the quality and performance of the proposed estimators,
for various choices of copula families, margins and sample sizes.
Note that all simulations were done with the more demanding case $\tilde{\boldsymbol\theta}_n$.
In the first set of experiments, we deal with the bivariate case, and in a second set of experiments, we compute the composite estimator in a trivariate setting.
First, in the bivariate case, we consider five copula families and five pairs of margins for each copula family. For the first experiment (Exp1), both margins are standard Gaussian, i.e.,
$F_1,F_2\sim N(0,1)$. In the second experiment (Exp2), the margins are Poisson with parameters $5$ and $10$ respectively, i.e., $F_1\sim \ensuremath{\mathcal{P}}(5)$ and $F_2\sim \ensuremath{\mathcal{P}}(10)$, while in the third experiment (Exp3), $F_1 \sim \ensuremath{\mathcal{P}}(10)$ and $F_2\sim N(0,1)$.
In the fourth experiment (Exp4), $F_1$ is a rounded Gaussian, namely $X_1 = \lfloor 1000 Z_1\rfloor$, with $Z_1\sim N(0,1)$, and $F_2\sim N(0,1)$.
Finally, for the fifth experiment (Exp5), $F_1$ is zero-inflated, with $F_1(0-)=0$, $F_1(x) = 0.05+0.95(2F_2(x)-1)$, $x\ge 0$, and $F_2\sim N(0,1)$.
To estimate the parameter $\theta$ of the Clayton, Frank, Gumbel, Gaussian and Student (with $\nu=5$) copula families corresponding to a Kendall's tau $\tau_0 = 0.5$, samples of size $n\in\{100,250,500\}$ were generated for each copula family, and ${\boldsymbol\theta}_n$ was computed. Here, $\tau$ is Kendall's tau of the copula family $C_\theta$, written $\tau(C_\theta)$. For results to be comparable throughout copula families, we computed the relative
bias and the relative root mean square error (RMSE) of $\tau(C_{\theta_n})$, instead of $\theta_n$. The results for $1000$ samples are reported in Table \ref{tab:est2d}.
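For reference, the map $\theta\mapsto\tau(C_\theta)$ has a closed form for most of these families (Clayton: $\tau=\theta/(\theta+2)$; Gumbel: $\tau=1-1/\theta$; Gaussian and Student: $\tau=\tfrac{2}{\pi}\arcsin\rho$); Frank requires a Debye function and is omitted here. A sketch of the conversion and of the relative bias/RMSE computation (Python; illustration only, not the simulation code used for the tables):

```python
import math

def tau_clayton(theta):
    return theta / (theta + 2.0)

def tau_gumbel(theta):
    return 1.0 - 1.0 / theta

def tau_elliptical(rho):        # Gaussian and Student copulas
    return 2.0 / math.pi * math.asin(rho)

def relative_bias_rmse(tau_hats, tau0):
    """Relative bias and relative RMSE, as fractions of tau0."""
    n = len(tau_hats)
    bias = sum(t - tau0 for t in tau_hats) / n
    rmse = math.sqrt(sum((t - tau0) ** 2 for t in tau_hats) / n)
    return bias / tau0, rmse / tau0
```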
As one can see, the estimator performs quite well for the five numerical experiments and the five copula families. Furthermore, the precision depends on the copula family, but for a given copula family, the precision does not vary significantly with the margins.
Even the case when both margins are continuous (Exp1) does not yield the best results. When the sample size is 250 or more, the relative bias is always smaller than 2\%. Finally, as expected, both the bias and the RMSE decrease when the sample size increases.
\begin{table}
\caption{\label{tab:est2d} Relative bias and relative RMSE (in parentheses) in percent for $\tau(C_{\theta_n})$ vs $\tau_0$ when $n\in \{100,250,500\}$, based on 1000 samples. For the Student copula, $\nu=5$ is assumed known.}
\centering
\begin{tabular}{crrrrr}
\hline
& \multicolumn{5}{c}{Copula}\\
\cline{2-6}
Margins & \multicolumn{1}{c}{Clayton} & \multicolumn{1}{c}{Frank} & \multicolumn{1}{c}{Gumbel} & \multicolumn{1}{c}{Gaussian} & \multicolumn{1}{c}{Student} \\[6pt]
& \multicolumn{5}{c}{$n=100$} \\
\hline
Exp1 & -0.42 (9.62) & 0.40 (9.30) & 3.02 (10.9) & 2.92 (9.40) & 2.18 (11.1) \\
Exp2 & 0.52 (10.2) & 0.86 (9.64) & 3.70 (11.4) & 3.16 (9.80) & 2.64 (11.5) \\
Exp3 & 0.02 (9.84) & 0.58 (9.40) & 3.34 (11.1) & 2.96 (9.52) & 2.36 (11.3) \\
Exp4 & -0.44 (9.62) & 0.40 (9.30) & 3.02 (10.9) & 2.88 (9.38) & 2.14 (11.1) \\
Exp5 & -1.00 (9.90) & 0.36 (9.30) & 2.76 (10.8) & 2.42 (9.28) & 1.68 (11.0) \\[6pt]
& \multicolumn{5}{c}{$n=250$} \\
\hline
Exp1 & 0.18 (6.06)& -0.32 (5.76) & 0.56 (6.12) & 1.30 (5.92) & 0.92 (6.52) \\
Exp2 & 0.50 (6.44)& -0.00 (5.92) & 1.52 (6.36) & 1.72 (6.18) & 1.52 (6.82) \\
Exp3 & 0.40 (6.22)& -0.20 (5.84) & 0.96 (6.16) & 1.42 (6.02) & 1.16 (6.60) \\
Exp4 & 0.12 (6.08)& -0.32 (5.76) & 0.60 (6.12) & 1.30 (5.92) & 0.92 (6.52) \\
Exp5 & -0.10 (6.22)& -0.34 (5.76) & 0.42 (6.12) & 1.06 (5.90) & 0.62 (6.56) \\[6pt]
& \multicolumn{5}{c}{$n=500$} \\
\hline
Exp1 & 0.08 (4.44) & -0.04 (3.88) & 0.38 (4.44)& 0.64 (4.40) & 0.34 (4.48) \\
Exp2 & 0.36 (4.60) & 0.06 (4.02) & 0.72 (4.64)& 0.76 (4.30) & 0.54 (4.68) \\
Exp3 & 0.28 (4.50) & -0.00 (3.90) & 0.52 (4.52)& 0.70 (4.24) & 0.42 (4.56) \\
Exp4 & 0.04 (4.44) & -0.04 (3.88) & 0.40 (4.44)& 0.62 (4.19) & 0.32 (4.48) \\
Exp5 & 0.10 (4.48) & -0.04 (3.88) & 0.34 (4.44)& 0.54 (4.20) & 0.24 (4.48) \\
\hline
\end{tabular}
\end{table}
In the second set of experiments, using the pairwise composite estimator $\check{\boldsymbol\theta}_n$, we estimated the parameters of a trivariate non-central squared Clayton copula \citep{Nasri:2020} with Kendall's tau $\tau_0=0.5$, and non-centrality parameters $a_{10} = 0.9$, $a_{20} = 2.3$, and $a_{30}= 1.4$. In this case, for all five experiments, $F_1$ and $F_2$ are defined as before, while $F_3\sim N(0,1)$. The results are displayed in Table \ref{tab:est3d}. As one could have guessed, the estimation of $\tau$ is not as good as in the bivariate case, but the results are good enough. As for the non-centrality parameters, the estimation of $a_2$, which has a large value (the upper bound being $3$), is not as good as the other values, but this is consistent with the simulations in \cite{Nasri:2020}. All in all, the composite method of estimation yields quite satisfactory results.
\begin{table}[ht!]
\caption{Relative bias and RMSE (in parentheses) in percent for the estimation errors of $\tau_0$ and $a_j$ for $j=1,2,3$ in the case of the trivariate non-central squared Clayton copula when $n\in \{100, 250, 500\}$, based on 1000 samples.}\label{tab:est3d}
\begin{tabular}{crrrr}
\hline
& \multicolumn{4}{c}{Parameters}\\
\cline{2-5}
Margins & \multicolumn{1}{c}{$\tau $} & \multicolumn{1}{c}{$a_1$} & \multicolumn{1}{c}{$a_2$} & \multicolumn{1}{c}{$a_3$} \\[6pt]
& \multicolumn{4}{c}{$n=100$} \\
\hline
Exp1 & 4.01 (11.21) & -4.92 (24.95) & -30.67 (41.09) & -5.08 (21.56) \\
Exp2 & 4.24 (11.68) & -4.70 (25.27) & -28.71 (40.58) & -4.83 (21.33) \\
Exp3 & 4.31 (11.50) & -5.74 (23.72) & -30.02 (40.95) & -5.32 (21.08) \\
Exp4 & 3.99 (11.22) & -4.93 (25.04) & -30.74 (41.10) & -5.12 (21.52) \\
Exp5 & 3.99 (11.18) & -4.98 (24.34) & -30.63 (41.12) & -5.22 (21.18) \\[3pt]
& \multicolumn{4}{c}{$n=250$} \\
\hline
Exp1 & 1.34 (6.80) & -4.10 (13.05) & -21.93 (34.13) & -3.82 (13.07) \\
Exp2 & 1.32 (6.97) & -2.82 (13.24) & -20.59 (33.63) & -3.15 (13.07) \\
Exp3 & 1.41 (6.85) & -3.47 (13.47) & -20.46 (33.45) & -3.66 (13.08) \\
Exp4 & 1.33 (6.77) & -4.01 (12.98) & -22.01 (34.03) & -3.77 (12.83) \\
Exp5 & 1.34 (6.80) & -4.10 (13.04) & -21.93 (34.14) & -3.82 (13.08) \\[3pt]
& \multicolumn{4}{c}{$n=500$} \\
\hline
Exp1 & 0.63 (4.56) & -2.07 (8.81) & -12.30 (25.44) & -2.19 (8.09) \\
Exp2 & 0.74 (4.74) & -1.98 (9.13) & -12.81 (26.67) & -2.19 (8.54) \\
Exp3 & 0.73 (4.61) & -2.07 (8.93) & -12.24 (25.45) & -2.30 (8.20) \\
Exp4 & 0.63 (4.56) & -2.06 (8.84) & -12.31 (25.44) & -2.21 (8.13) \\
Exp5 & 0.63 (4.56) & -2.05 (8.78) & -12.20 (25.33) & -2.17 (8.06) \\
\hline
\end{tabular}
\end{table}
\section{Example of application}\label{sec:example}
In this section, we propose a rigorous method to study the relationship between duration and severity for hydrological data used in \cite{Shiau:2006}. The data were kindly provided by the author.
There are many articles in the hydrology literature about modeling drought duration and severity with copulas; see, e.g., \cite{Chen/Singh/Guo/Mishra/Guo:2013, Shiau:2006}. One of the main tools to compute the drought duration and severity is the so-called Standardized Precipitation Index (SPI) \citep{McKee/Doesken/Kleist:1993}. Basically, \cite{McKee/Doesken/Kleist:1993} suggest fitting a gamma distribution over a moving average (1-month, 3-month, etc.) of the precipitations and then transforming them into a Gaussian distribution. However, it may happen that there are several zero values in the observations, so that fitting a continuous distribution is not possible.
Using the data kindly provided by Professor Shiau (daily precipitations in millimeters for the Wushantou gauge station from 1932 to 2001), we see from Figure \ref{fig:densityMAP} that even taking a 1-month moving average leads to a zero-inflated distribution. So, instead of fitting a gamma distribution to the moving average, as is often done, we suggest simply applying the inverse of the Gaussian distribution function to the empirical distribution. Then, one can compute the duration and severity: a drought is defined as a sequence of consecutive days with negative SPI values, say $SPI_{i}, \dots, SPI_j$; the length of the sequence is the duration $D$, i.e., $D=j-i+1$, and the severity is defined by $S = -\sum_{k=i}^j SPI_k$. It makes sense to consider the severity $S$ as a continuous random variable, but the duration $D$ is integer-valued. Again, in the literature, a continuous distribution is usually fitted to $D$, which is incorrect. These variables are then divided by $30$ in order to represent months. With the dataset, we obtained 175 drought periods.
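The duration/severity extraction just described is a run-length computation. A minimal sketch (Python; toy SPI values, not the Wushantou data):

```python
def droughts(spi):
    """Maximal runs of negative SPI values: returns a list of
    (duration, severity) pairs, with severity = -sum of SPI over the run."""
    events, i, n = [], 0, len(spi)
    while i < n:
        if spi[i] < 0:
            j = i
            while j < n and spi[j] < 0:   # extend the run of negative values
                j += 1
            events.append((j - i, -sum(spi[i:j])))
            i = j
        else:
            i += 1
    return events
```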
A non-parametric estimation of the density of the severity per month is displayed in Figure \ref{fig:densityMAP}; it seems to be a mixture of at least two distributions. We tried mixtures of up to $4$ gamma distributions without success. A scatter plot of the duration and the severity also appears in Figure \ref{fig:densityMAP}. With the copula-based methodology developed here, based on a measure of fit, we chose the Frank copula.
In contrast, the preferred copula families in \cite{Shiau:2006} were the Galambos and Gumbel families. Using a smoothed distribution for the severity $S$, we can compute the conditional probability $P(D > y|S=s)$ for $y=1$ to $8$ months, in addition to the conditional expectation $E(D|S=s)$. These functions are displayed in Figure \ref{fig:condDuration}.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.3]{densityMAP}
\includegraphics[scale=0.3]{densitySeverity}
\includegraphics[scale=0.3]{scatterShiau}
\caption{Estimated zero-inflated density for the 1-month moving average of precipitations (top left) and severity per month (top right), together with a scatter plot of the duration and severity (bottom).}
\label{fig:densityMAP}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.3]{CondCdfDuration}
\includegraphics[scale=0.3]{expectedDuration}
\caption{Conditional probability $P(D \le y)$ for $y\in \{1,\ldots,8\}$ months and conditional expectation of the duration given severity per month.}
\label{fig:condDuration}
\end{figure}
\section{Conclusion}
We presented methods based on pseudo log-likelihood for estimating the parameter of copula-based models for arbitrary multivariate data.
These pseudo log-likelihoods depend on the non-parametric margins and are adapted to take ties into account. We have also shown that the methodology can be extended to the case of parametric margins. According to numerical experiments, the proposed estimators perform quite well. As an example of application, we
estimated the relationship between drought duration and severity in hydrological data, where the problem of ties is often ignored. The proposed methodologies can also be applied to high-dimensional data. For this reason, we have shown in Corollary \ref{cor:LLcomp} that
the pairwise composite method applied to our bivariate pseudo log-likelihood is valid.
Finally, in a future work, we will also develop bootstrapping methods and formal tests of goodness-of-fit.
\section{Introduction}
The Croatian Physical Society organizes the annual Summer school for young physicists [1], intended to reward elementary and high school students for their accomplishments on the national physics competitions. The school typically consists of half-day lectures combined with workshops, experiments or games that take place during leisure hours. For the last three editions of the summer school, we developed a variant of the popular game ``Pictionary'' as a small competition for the students. ``Physionary'', as we named the variant, has proven to be very successful in entertaining the students, not only during the evenings intended for the competition, but also during the rest of leisure time.
\section{Game description}
``Physionary'' is a game loosely derived and expanded from the commercially-distributed ``Pictionary''. A similar game was developed earlier for University biology students [2], however, we expanded further on the game since we found it beneficial to do so.
The students are divided into groups based on their age and are given a number of cards, each containing 6 terms from elementary or high school physics. The terms are taken from indices of physics curricula or physics manuals and divided into 5 sets of cards, one for each high school grade and one for elementary school grades. A die is thrown to randomly select the ordinal number of the term on the card. A one minute timer is started and one of the students from each group is required to draw the term found on his/her card, while the rest of the group has to guess what term it is. If the team accomplishes this within the given time allotment, they receive a point. Several rounds are played this way before the pace of the game is then made faster by decreasing the available time for drawing. After a predetermined number of rounds played in this manner, the second phase of the game begins. In this phase, the students don't draw the terms but instead try to ``act out'' the term given on their cards, as it is usually done in charades. This is also done within a given time allotment, typically set to a minute.
This game turned out to be not only entertaining, but also highly educational. Sometimes, a specific term may not be recognized by some of the members of a certain group. However, we noticed that the said term is quickly taken in by those members, as evidenced by their future recognition of that, as well as similar, terms. Once the students start to communicate concepts pictorially, they move away from their definitions and try to use everyday examples to convey them to their group. We have also noticed that a sense of connection between various terms is formed, since the students find it beneficial to explain a new term with the help of terms that they have already drawn on paper -- they simply circle the term that was already guessed by the group. Finally, the students were often found drawing graphs and diagrams, which is a skill they need to develop in physics, but are often not motivated to do so.
\begin{wrapfigure}[16]{r}{0.3\textwidth}
\begin{center}
\includegraphics[width=0.28\textwidth]{pictionary.png}
\end{center}
\caption{An example of a playing card.}
\end{wrapfigure}
\section{Conclusion}
We have expanded on well-known popular party games to create an effective and entertaining physics learning tool. The skills developed during game play seem to be beneficial to the students, and the terms they are required to draw or act out are taken from their curricula. The students also seem to develop another important skill during the game - using simple physical concepts in everyday situations, a skill they are most often found lacking. In what follows, we present a selection of 10 terms (out of 850) from each of the 5 sets of cards. We have limited the selection to 10 terms since we expect that different countries will have differing curricula, so we thought it best that everyone interested make their own sets of cards.
\newpage
\begin{center}
\begin{table}[H]
\begin{tabular}{ |c|c|c|c|c| }
\hline
\bf elementary & \bf 1st grade &\bf 2nd grade &\bf 3rd grade &\bf 4th grade \\
\hline
heat insulator & joule & diffusion & diffraction & plasma \\
surface area & frequency & capacitor & rainbow & atom \\
power & dynamics & inductivity & standing wave & antiparticle \\
mass & buoyancy & linear expansion & intensity & semiconductor \\
sliding friction & unit & Lorentz force & sound & fractal \\
Solar energy & fluid & work & rotation & boson \\
pulley & projectile motion & ideal gas & length contraction & red giant \\
molecule & cosmic speed & insulator & phase & mass defect \\
cavity & energy & interaction & lens & butterfly effect \\
electricity & Galileo & isobar & light guide & quark \\
\hline
\end{tabular}
\caption{A sample of the terms given on the ``Physionary'' cards, sorted into 5 classes according to the students' grade.} \label{tab:title}
\end{table}
\end{center}
\end{document}
\section{Introduction}
One of the most fundamental questions in the study of composition operators is to characterize when such an operator is well-defined and continuous in terms of its symbol. The goal of this article is to consider this question for weighted locally convex spaces of one real variable smooth functions.
Let $\varphi: \mathbb{R} \to \mathbb{R}$ be smooth. In \cite{GJ} Galbis and Jord\'a showed that the composition operator $C_\varphi: \mathscr{S} \to \mathscr{S}, \, f \mapsto f \circ \varphi$, with $\mathscr{S}$ the space of rapidly decreasing smooth functions \cite{Schwartz}, is well-defined (continuous) if and only if
$$
\exists N \in \mathbb{Z}_+~:~ \sup_{x \in \mathbb{R}} \frac{1+ |x|}{(1+|\varphi(x)|)^N} < \infty
$$
and
$$
\forall p\in \mathbb{Z}_+~\exists N \in \mathbb{N}~:~ \sup_{x \in \mathbb{R}} \frac{|\varphi^{(p)}(x)|}{(1+|\varphi(x)|)^{N}} < \infty.
$$
Albanese et al.\ \cite{AJM} proved that the composition operator $C_\varphi: \mathscr{O}_M \to \mathscr{O}_M$, with $\mathscr{O}_M$ the space of slowly increasing smooth functions \cite{Schwartz}, is well-defined (continuous) if and only if
$\varphi \in \mathscr{O}_M$. In \cite[Remark 2.6]{AJM} they also pointed out that the corresponding result for the space $\mathscr{O}_C$ of very slowly increasing smooth functions \cite{Schwartz} is false, namely, they showed that $\sin (x^2) \notin \mathscr{O}_C$, while, obviously, $\sin x, x^2 \in \mathscr{O}_C$.
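For the reader's convenience, we briefly indicate the computation behind this counterexample. For $f(x) = \sin(x^2)$ one verifies inductively that
$$
f^{(p)}(x) = (2x)^p \sin\Big(x^2 + p\frac{\pi}{2}\Big) + O(|x|^{p-2}), \qquad |x| \to \infty.
$$
Membership in $\mathscr{O}_C$ would require a single $N \in \mathbb{N}$ with $\sup_{x \in \mathbb{R}} |f^{(p)}(x)|/(1+|x|)^N < \infty$ for every $p$; evaluating along a sequence $x_j \to \infty$ with $\sin(x_j^2 + p\pi/2) = 1$ shows that $|f^{(p)}|$ grows like $|x|^p$, so every $p > N$ violates this bound. The same estimates show that $f \in \mathscr{O}_M$, as each individual derivative is polynomially bounded.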
Inspired by these results, we study in this article the following general question: Given two weighted locally convex spaces $X$ and $Y$ of smooth functions, when is the composition operator $C_\varphi: X \to Y$ well-defined (continuous)? We shall consider this problem for $X$ and $Y$ both being Fr\'echet spaces, $(LF)$-spaces, or $(PLB)$-spaces.
We now state a particular instance of our main result that covers many well-known spaces. We need some preparation.
Given a positive continuous function $v$ on $\mathbb{R}$, we write $\mathscr{B}^n_v$, $n \in \mathbb{N}$, for the Banach space consisting of all $f \in C^n(\mathbb{R})$ such that
$$
\|f\|_{v,n} = \max_{p \leq n} \sup_{x \in \mathbb{R}} \frac{|f^{(p)}(x)|}{v(x)} < \infty.
$$
For $v \geq 1$ we consider the following three weighted spaces of smooth functions
\begin{align*}
\mathscr{K}_{v} &= \varprojlim_{N \in \mathbb{N}} \mathscr{B}^N_{1/v^N} , \\
\mathscr{O}_{C,v} &= \varinjlim_{N \in \mathbb{N}} \varprojlim_{n \in \mathbb{N}} \mathscr{B}^n_{v^N} ,\\
\mathscr{O}_{M,v} &= \varprojlim_{n \in \mathbb{N}} \varinjlim_{N \in \mathbb{N}} \mathscr{B}^n_{v^N} .
\end{align*}
Theorem \ref{main} below implies the following result:
\begin{theorem} \label{intro} Let $v,w: \mathbb{R} \to [1,\infty)$ be continuous functions such that
$$
\sup_{x,t \in \mathbb{R},|t| \leq 1} \frac{v(x+t)}{v^\lambda(x)} < \infty \qquad \mbox{and} \qquad \sup_{x,t \in \mathbb{R},|t| \leq 1} \frac{w(x+t)}{w^\mu(x)} < \infty,
$$
for some $\lambda,\mu > 0$. Let $\varphi: \mathbb{R} \to \mathbb{R}$ be smooth. Then,
\begin{enumerate}
\item[$(I)$] The following statements are equivalent:
\begin{enumerate}
\item[$(i)$] $C_\varphi(\mathscr{K}_{v} ) \subseteq \mathscr{K}_{w}$.
\item[$(ii)$] $C_\varphi: \mathscr{K}_{v} \rightarrow \mathscr{K}_{w}$ is continuous.
\item[$(iii)$] $\varphi$ satisfies the following two properties
\begin{enumerate}
\item [$(a)$] $\displaystyle \exists \lambda > 0~:~ \sup_{x \in \mathbb{R}} \frac{w(x)}{v^\lambda(\varphi(x))} < \infty$.
\item [$(b)$] $\displaystyle \forall p \in \mathbb{Z}_+~\exists \lambda > 0~:~ \sup_{x \in \mathbb{R}} \frac{|\varphi^{(p)}(x)|}{v^\lambda(\varphi(x))} < \infty$.
\end{enumerate}
\end{enumerate}
\item[$(II)$] The following statements are equivalent:
\begin{enumerate}
\item[$(i)$] $C_\varphi(\mathscr{O}_{C,v} ) \subseteq \mathscr{O}_{C,w}$.
\item[$(ii)$] $C_\varphi: \mathscr{O}_{C,v} \rightarrow \mathscr{O}_{C,w}$ is continuous.
\item[$(iii)$] $\varphi$ satisfies the following two properties
\begin{enumerate}
\item [$(a)$] $\displaystyle \exists \mu > 0~:~ \sup_{x \in \mathbb{R}} \frac{v(\varphi(x))}{w^\mu(x)} < \infty$.
\item [$(b)$] $\displaystyle \forall p,k \in \mathbb{Z}_+~:~ \sup_{x \in \mathbb{R}} \frac{|\varphi^{(p)}(x)|}{w^{1/k}(x)} < \infty$.
\end{enumerate}
\end{enumerate}
\item[$(III)$] The following statements are equivalent:
\begin{enumerate}
\item[$(i)$] $C_\varphi(\mathscr{O}_{M,v} ) \subseteq \mathscr{O}_{M,w}$.
\item[$(ii)$] $C_\varphi: \mathscr{O}_{M,v} \rightarrow \mathscr{O}_{M,w}$ is continuous.
\item[$(iii)$] $\varphi$ satisfies the following two properties
\begin{enumerate}
\item [$(a)$] $\displaystyle \exists \mu > 0~:~ \sup_{x \in \mathbb{R}} \frac{v(\varphi(x))}{w^\mu(x)} < \infty$.
\item [$(b)$] $\displaystyle \forall p \in \mathbb{Z}_+~\exists \mu > 0~:~ \sup_{x \in \mathbb{R}} \frac{|\varphi^{(p)}(x)|}{w^\mu(x)} < \infty$.
\end{enumerate}
\end{enumerate}
\end{enumerate}
\end{theorem}
By setting $v(x) = w(x) = 1+|x|$ in Theorem \ref{intro} we recover the above results about $\mathscr{S}$ and $\mathscr{O}_M$ from \cite{GJ, AJM} as well as the following characterization for the space $\mathscr{O}_C$ of very slowly increasing smooth functions: $C_\varphi: \mathscr{O}_C \to \mathscr{O}_C$ is well-defined (continuous) if and only if
$$
\exists N \in \mathbb{N}~:~ \sup_{x \in \mathbb{R}} \frac{|\varphi(x)|}{(1+|x|)^N} < \infty\qquad \mbox{and} \qquad \forall p,k \in \mathbb{Z}_+~:~ \sup_{x \in \mathbb{R}} \frac{|\varphi^{(p)}(x)|}{(1+|x|)^{1/k}} < \infty.
$$
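To illustrate this criterion: $\varphi(x) = x + \sin x$ satisfies both conditions, since all of its derivatives are bounded, so $C_\varphi$ maps $\mathscr{O}_C$ into itself. On the other hand, $\varphi(x) = x^2$ satisfies the first condition with $N = 2$ but violates the second one for $p = 1$, as $|\varphi'(x)| = 2|x|$ is not $O((1+|x|)^{1/k})$ for $k \geq 2$; this is consistent with the counterexample $\sin(x^2) \notin \mathscr{O}_C$ from \cite{AJM} mentioned above.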
For $v = w = 1$, Theorem \ref{intro} gives the following result for the Fr\'echet space $\mathscr{B}$ of smooth functions that are bounded together with all their derivatives \cite{Schwartz}: $C_\varphi: \mathscr{B} \to \mathscr{B}$ is well-defined (continuous) if and only if $\varphi' \in \mathscr{B}$. Another interesting choice is $v(x) = w(x) = e^{|x|}$, for which Theorem \ref{main} characterizes composition operators on spaces of exponentially decreasing/increasing smooth functions \cite{hasumi, zielezny}. We leave it to the reader to explicitly formulate this and other examples.
\section{Statement of the main result}
A pointwise non-decreasing sequence $V = (v_{N})_{N \in \mathbb{N}}$ of positive continuous functions on $\mathbb{R}$ is called a \emph{weight system} if $v_0 \geq 1$ and
$$
\forall N~\exists M \geq N~:~ \sup_{x,t \in \mathbb{R},|t| \leq 1} \frac{v_N(x+t)}{v_M(x)} < \infty.
$$
We shall also make use of the following condition on a weight system $V = (v_{N})_{N \in \mathbb{N}}$:
\begin{equation}
\forall N,M~ \exists K \geq N,M~:~ \sup_{x \in \mathbb{R}} \frac{v_N(x)v_M(x)}{v_K(x)} < \infty.
\label{cond-mult}
\end{equation}
\begin{exampl} \label{example}
Let $v: \mathbb{R} \to [1,\infty)$ be a continuous function satisfying
$$
\sup_{x,t \in \mathbb{R},|t| \leq 1} \frac{v(x+t)}{v^N(x)} < \infty
$$
for some $N \in \mathbb{N}$ (cf.\ Theorem \ref{intro}). Then,
$$V_v = (v^N)_{N \in \mathbb{N}}$$ is a weight system satisfying \eqref{cond-mult}.
\end{exampl}
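As a concrete instance, the weight $v(x) = 1 + |x|$ satisfies the above condition already with $N = 1$: for $|t| \leq 1$,
$$
\frac{v(x+t)}{v(x)} = \frac{1+|x+t|}{1+|x|} \leq \frac{2+|x|}{1+|x|} \leq 2, \qquad x \in \mathbb{R},
$$
and similarly $v(x) = e^{|x|}$ satisfies it with $N = 1$ since $e^{|x+t|} \leq e \cdot e^{|x|}$ for $|t| \leq 1$. Moreover, $V_v$ always satisfies \eqref{cond-mult} with $K = N + M$, as $v^N v^M = v^{N+M}$.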
Recall that for a positive continuous function $v$ on $\mathbb{R}$ and $n \in \mathbb{N}$, we write $\mathscr{B}^n_v$ for the Banach space consisting of all $f \in C^n(\mathbb{R})$ such that
$$
\|f\|_{v,n} = \max_{p \leq n} \sup_{x \in \mathbb{R}} \frac{|f^{(p)}(x)|}{v(x)} < \infty.
$$
Let $V = (v_{N})_{N \in \mathbb{N}}$ be a weight system. We shall be concerned with the following weighted spaces of smooth functions
\begin{align*}
\mathscr{K}_{V} &= \varprojlim_{N \in \mathbb{N}} \mathscr{B}^N_{1 / v_N} , \\
\mathscr{O}_{C,V} &= \varinjlim_{N \in \mathbb{N}} \varprojlim_{n \in \mathbb{N}} \mathscr{B}^n_{v_N} , \\
\mathscr{O}_{M,V} &= \varprojlim_{n \in \mathbb{N}} \varinjlim_{N \in \mathbb{N}} \mathscr{B}^n_{v_N} .
\end{align*}
Note that $\mathscr{K}_{V}$ is a Fr\'echet space, $\mathscr{O}_{C,V}$ is an $(LF)$-space, and $\mathscr{O}_{M,V}$ is a $(PLB)$-space. Furthermore, we have the following continuous inclusions
$$
\mathscr{D}(\mathbb{R}) \subset \mathscr{K}_{V} \subset \mathscr{O}_{C,V} \subset \mathscr{O}_{M,V} \subset C^\infty(\mathbb{R}),
$$
where $\mathscr{D}(\mathbb{R})$ denotes the space of compactly supported smooth functions. The spaces $\mathscr{K}_V$ were introduced and studied by Gelfand and Shilov \cite{GS}, while we refer to \cite{DV} for more information on the spaces $\mathscr{O}_{C,V}$. For $N,n \in \mathbb{N}$ fixed we will also need the following spaces
$$
\mathscr{B}_{v_N} = \varprojlim_{n \in \mathbb{N}} \mathscr{B}^n_{v_N}, \qquad \mathscr{O}^n_{M,V} = \varinjlim_{N \in \mathbb{N}} \mathscr{B}^n_{v_N}.
$$
The goal of this article is to show the following result.
\begin{theorem} \label{main}
Let $V = (v_{N})_{N \in \mathbb{N}}$ and $W = (w_{M})_{M \in \mathbb{N}}$ be two weight systems and let $\varphi: \mathbb{R} \to \mathbb{R}$ be smooth.
\begin{enumerate}
\item[$(I)$] Suppose that $V$ satisfies \eqref{cond-mult}. The following statements are equivalent:
\begin{enumerate}
\item[$(i)$] $C_\varphi(\mathscr{K}_{V} ) \subseteq \mathscr{K}_{W}$.
\item[$(ii)$] $C_\varphi: \mathscr{K}_{V} \rightarrow \mathscr{K}_{W}$ is continuous.
\item[$(iii)$] $\varphi$ satisfies the following two properties
\begin{enumerate}
\item [$(a)$] $\displaystyle \forall M~\exists N~:~ \sup_{x \in \mathbb{R}} \frac{w_M(x)}{v_N(\varphi(x))} < \infty$.
\item [$(b)$] $\displaystyle \forall p \in \mathbb{Z}_+~\exists N~:~ \sup_{x \in \mathbb{R}} \frac{|\varphi^{(p)}(x)|}{v_N(\varphi(x))} < \infty$.
\end{enumerate}
\end{enumerate}
\item[$(II)$] Suppose that $W$ satisfies \eqref{cond-mult}. The following statements are equivalent:
\begin{enumerate}
\item[$(i)$] $C_\varphi(\mathscr{O}_{C,V} ) \subseteq \mathscr{O}_{C,W}$.
\item[$(ii)$] $C_\varphi: \mathscr{O}_{C,V} \rightarrow \mathscr{O}_{C,W}$ is continuous.
\item[$(iii)$] $\displaystyle \forall N~\exists M$ such that $C_\varphi: \mathscr{B}_{v_N} \rightarrow \mathscr{B}_{w_M}$ is continuous.
\item[$(iv)$] $\varphi$ satisfies the following two properties
\begin{enumerate}
\item [$(a)$] $\displaystyle \forall N~ \exists M~:~ \sup_{x \in \mathbb{R}} \frac{v_N(\varphi(x))}{w_M(x)} < \infty$.
\item [$(b)$] $\displaystyle\exists M~\forall p,k \in \mathbb{Z}_+~:~ \sup_{x \in \mathbb{R}} \frac{|\varphi^{(p)}(x)|}{w^{1/k}_M(x)} < \infty$.
\end{enumerate}
\end{enumerate}
\item[$(III)$] Suppose that $W$ satisfies \eqref{cond-mult}. The following statements are equivalent:
\begin{enumerate}
\item[$(i)$] $C_\varphi(\mathscr{O}_{M,V} ) \subseteq \mathscr{O}_{M,W}$.
\item[$(ii)$] $C_\varphi: \mathscr{O}_{M,V} \rightarrow \mathscr{O}_{M,W}$ is continuous.
\item[$(iii)$]$C_\varphi: \mathscr{O}^n_{M,V} \rightarrow \mathscr{O}^n_{M,W}$ is continuous for all $n \in \mathbb{N}$.
\item[$(iv)$] $\varphi$ satisfies the following two properties
\begin{enumerate}
\item [$(a)$] $\displaystyle \forall N~ \exists M~:~ \sup_{x \in \mathbb{R}} \frac{v_N(\varphi(x))}{w_M(x)} < \infty$.
\item [$(b)$] $\displaystyle \forall p \in \mathbb{Z}_+~\exists M~:~ \sup_{x \in \mathbb{R}} \frac{|\varphi^{(p)}(x)|}{w_M(x)} < \infty$.
\end{enumerate}
\end{enumerate}
\end{enumerate}
\end{theorem}
The proof of Theorem \ref{main} will be given in the next section. The spaces $\mathscr{K}_{v}$, $\mathscr{O}_{C,v}$ and $\mathscr{O}_{M,v}$ from the introduction can be written as
$$
\mathscr{K}_{v} = \mathscr{K}_{V_v}, \qquad \mathscr{O}_{C,v} = \mathscr{O}_{C,V_v}, \qquad \mathscr{O}_{M,v} = \mathscr{O}_{M,V_v},
$$
where $V_v = (v^N)_{N \in \mathbb{N}}$ is the weight system from Example \ref{example}. Hence, Theorem \ref{intro} is a direct consequence of Theorem \ref{main} with $V = V_v$ and $W = V_w$.
\section{Proof of the main result}
Throughout this section we fix a smooth symbol $\varphi: \mathbb{R} \to \mathbb{R}$. We need two lemmas in preparation for the proof of Theorem \ref{main}. For $n \in \mathbb{N}$ we set
$$
\| f \|_n = \| f \|_{1,n} = \max_{p \leq n} \sup_{x \in \mathbb{R}} |f^{(p)}(x)|.
$$
\begin{lemma}\label{lemma-1} Let $v, \widetilde{v},w$ be three positive continuous functions on $\mathbb{R}$ such that
$$
C_0 = \sup_{x,t \in \mathbb{R},|t| \leq 1} \frac{v(x+t)}{\widetilde{v}(x)} < \infty.
$$
Let $p,n \in \mathbb{N}$ be such that
\begin{equation}
\| C_\varphi(f) \|_{w,p} \leq C_1 \| f \|_{\widetilde{v},n}, \qquad \forall f \in \mathscr{D}(\mathbb{R}),
\label{norm-ineq}
\end{equation}
for some $C_1 > 0$. Then,
\begin{equation}
\sup_{x \in \mathbb{R}} \frac{v(\varphi(x))}{w(x)} < \infty ,
\label{ineq-0}
\end{equation}
and, if $p \geq 1$, also
\begin{equation}
\sup_{x \in \mathbb{R}} \frac{v(\varphi(x))|\varphi'(x)|^{p}}{{w}(x)} < \infty ,
\label{ineq-1}
\end{equation}
and
\begin{equation}
\sup_{x \in \mathbb{R}} \frac{v(\varphi(x))|\varphi^{(p)}(x)|}{{w}(x)} < \infty.
\label{ineq-2}
\end{equation}
\end{lemma}
\begin{proof}
Given $f \in \mathscr{D}(\mathbb{R})$ with $\operatorname{supp} f \subseteq [-1,1]$, we set $f_x = f(\, \cdot \, -\varphi(x))$ for $x \in \mathbb{R}$. Note that
\begin{equation}
\| f_x \|_{\widetilde{v},n} \leq \frac{C_0\| f\|_n}{v(\varphi(x))}, \qquad x \in \mathbb{R}.
\label{norm-comp}
\end{equation}
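To verify \eqref{norm-comp}, observe that $\operatorname{supp} f_x \subseteq [\varphi(x)-1, \varphi(x)+1]$ and that, for $y = \varphi(x)+t$ with $|t| \leq 1$, the definition of $C_0$ (applied at the point $y$ with increment $\varphi(x)-y$) gives $v(\varphi(x)) \leq C_0 \widetilde{v}(y)$. Hence, for all $p \leq n$,
$$
\sup_{y \in \mathbb{R}} \frac{|f_x^{(p)}(y)|}{\widetilde{v}(y)} = \sup_{|t| \leq 1} \frac{|f^{(p)}(t)|}{\widetilde{v}(\varphi(x)+t)} \leq \frac{C_0 \| f \|_n}{v(\varphi(x))}.
$$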
We first show \eqref{ineq-0}. Choose $f \in \mathscr{D}(\mathbb{R})$ with $\operatorname{supp} f \subseteq [-1,1]$ such that $f(0) = 1$. For all $x \in \mathbb{R}$ it holds that
$$
\| C_\varphi(f_x) \|_{w,p} \geq \frac{C_\varphi(f_x)(x)}{w(x)} = \frac{1}{w(x)}.
$$
Hence, by \eqref{norm-ineq} and \eqref{norm-comp}, we obtain that
$$
\frac{v(\varphi(x))}{{w}(x)} \leq C_0C_1 \| f\|_n, \qquad \forall x \in \mathbb{R}.
$$
Now assume that $p \geq 1$. We prove \eqref{ineq-1}. Choose $f \in \mathscr{D}(\mathbb{R})$ with $\operatorname{supp} f \subseteq [-1,1]$ such that $f^{(j)}(0) = 0$ for $j = 1, \ldots, p-1$ and $f^{(p)}(0)=1$. Fa\`a di Bruno's formula implies that for all $x \in \mathbb{R}$
$$
\| C_\varphi(f_x) \|_{w,p} \geq \frac{|C_\varphi(f_x)^{(p)}(x)|}{w(x)} = \frac{|\varphi'(x)|^p}{w(x)}.
$$
Similarly as in the proof of \eqref{ineq-0}, the result now follows from \eqref{norm-ineq} and \eqref{norm-comp}. Finally, we show \eqref{ineq-2}. Choose $f \in \mathscr{D}(\mathbb{R})$ with $\operatorname{supp} f \subseteq [-1,1]$ such that $f'(0) = 1$ and $f^{(j)}(0)=0$ for $j = 2, \ldots, p$. Fa\`a di Bruno's formula implies that for all $x \in \mathbb{R}$
$$
\| C_\varphi(f_x) \|_{w,p} \geq \frac{|C_\varphi(f_x)^{(p)}(x)|}{w(x)} = \frac{|\varphi^{(p)}(x)|}{w(x)}.
$$
As before, the result is now a consequence of \eqref{norm-ineq} and \eqref{norm-comp}.
\end{proof}
\begin{lemma}\label{lemma-2}
Let $v$ and $w$ be positive continuous functions on $\mathbb{R}$. Then,
\begin{enumerate}
\item[$(i)$] If
$$
\sup_{x \in \mathbb{R}} \frac{v(\varphi(x))}{w(x)} < \infty,
$$
then $C_\varphi : \mathscr{B}^0_v \rightarrow \mathscr{B}^0_w$ is well-defined and continuous.
\item[$(ii)$] Let $n \in \mathbb{Z}_+$. If
$$
\sup_{x \in \mathbb{R}} \frac{v(\varphi(x))}{w(x)} \prod_{p=1}^n |\varphi^{(p)}(x)|^{k_p} < \infty
$$
for all $(k_1, \ldots, k_n) \in \mathbb{N}^n$ with $\sum_{j = 1}^p jk_j \leq p$ for all $p = 1, \ldots, n$, then $C_\varphi : \mathscr{B}^n_{v} \rightarrow \mathscr{B}^n_{w}$ is well-defined and continuous.
\end{enumerate}
\end{lemma}
\begin{proof}
$(i)$ Obvious. \\
\noindent $(ii)$ This is a direct consequence of $(i)$ and Fa\`a di Bruno's formula.
\end{proof}
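For the reader's convenience, we recall the version of Fa\`a di Bruno's formula used throughout this article: for $p \in \mathbb{Z}_+$,
$$
(f \circ \varphi)^{(p)}(x) = \sum \frac{p!}{k_1! \cdots k_p!} f^{(k_1 + \cdots + k_p)}(\varphi(x)) \prod_{j=1}^{p} \left( \frac{\varphi^{(j)}(x)}{j!} \right)^{k_j},
$$
where the sum ranges over all $(k_1, \ldots, k_p) \in \mathbb{N}^p$ with $\sum_{j=1}^{p} j k_j = p$; this underlies the condition on the multi-indices $(k_1, \ldots, k_n)$ in Lemma \ref{lemma-2}$(ii)$.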
\begin{proof}[Proof of Theorem \ref{main}]
$(I)$ $(i)\Rightarrow (ii)$: Since $C_\varphi: C^\infty(\mathbb{R}) \to C^\infty(\mathbb{R})$ is continuous, this follows from the closed graph theorem for Fr\'echet spaces. \\
$(ii)\Rightarrow (iii)$: For all $p,M \in \mathbb{N}$ there are $n,L \in \mathbb{N}$ such that
$$
\| C_\varphi(f)\|_{1/w_M, p} \leq C\| f\|_{1/v_L, n}, \qquad \forall f \in \mathscr{K}_V.
$$
Choose $N \geq L$ such that
$$
\sup_{x,t \in \mathbb{R},|t| \leq 1} \frac{v_L(x+t)}{v_N(x)} = \sup_{x,t \in \mathbb{R},|t| \leq 1} \frac{1/v_N(x+t)}{1/v_L(x)} < \infty.
$$
Lemma \ref{lemma-1} with $w = 1/w_M$, $v = 1/v_N$ and $\widetilde{v} = 1/v_L$ yields that
$$
\sup_{x \in \mathbb{R}} \frac{w_M(x)}{v_N(\varphi(x))} < \infty
$$
and (recall that $w_M \geq 1$)
$$
\sup_{x \in \mathbb{R}} \frac{|\varphi^{(p)}(x)|}{v_N(\varphi(x))} < \infty.
$$
$(iii) \Rightarrow (i)$: As $V$ satisfies \eqref{cond-mult}, this follows from Lemma \ref{lemma-2}.
\\ \\
$(II)$ $(i) \Rightarrow (ii)$: Since $C_\varphi: C^\infty(\mathbb{R}) \to C^\infty(\mathbb{R})$ is continuous, this follows from De Wilde's closed graph theorem. \\
$(ii) \Rightarrow (iii)$: This is a consequence of Grothendieck's factorization theorem. \\
$(iii) \Rightarrow (iv)$: Fix an arbitrary $N \in \mathbb{N}$. Choose $L \geq N$ such that
$$
\sup_{x,t \in \mathbb{R},|t| \leq 1} \frac{v_N(x+t)}{v_L(x)} < \infty.
$$
Choose $K \in \mathbb{N}$ such that $C_\varphi: \mathscr{B}_{v_L} \to \mathscr{B}_{w_K}$ is continuous. For all $m \in \mathbb{Z}_+$ there are $n \in \mathbb{Z}_+$ and $C > 0$ such that
$$
\| C_\varphi(f)\|_{w_K, m} \leq C\| f\|_{v_L, n}, \qquad \forall f \in \mathscr{B}_{v_L}.
$$
Lemma \ref{lemma-1} with $w = w_K$, $v = v_N$ and $\widetilde{v} = v_L$ yields that
\begin{equation}
\sup_{x \in \mathbb{R}} \frac{v_N(\varphi(x))}{w_K(x)} < \infty
\label{ineq-proof-1}
\end{equation}
and (recall that $v_N \geq 1$)
\begin{equation}
\sup_{x \in \mathbb{R}} \frac{|\varphi'(x)|}{{w_K^{1/m}}(x)} < \infty \qquad \mbox{and} \qquad \sup_{x \in \mathbb{R}} \frac{|\varphi^{(m)}(x)|}{w_K(x)} < \infty.
\label{ineq-proof-2}
\end{equation}
Equation \eqref{ineq-proof-1} shows $(a)$. We now prove $(b)$. To this end, we will make use of the following Landau-Kolmogorov type inequality due to Gorny \cite{gorny}: For all
$j \leq m \in \mathbb{Z}_+$ there is $C > 0$ such that
\begin{equation}
\| g^{(j)}\| \leq C \| g\|^{1 - j/m} \left(\max \{ \| g \| , \| g^{(m)}\| \} \right)^{j/m}, \qquad \forall g \in C^\infty([-1,1]),
\label{Gorny}
\end{equation}
where $\| \, \cdot \, \|$ denotes the sup-norm on $[-1,1]$. Choose $M \geq K$ such that
$$
\sup_{x,t \in \mathbb{R},|t| \leq 1} \frac{w_K(x+t)}{w_M(x)} < \infty.
$$
Let $p,k \in \mathbb{Z}_+$ and $x \in \mathbb{R}$ be arbitrary. Equation \eqref{ineq-proof-2} yields that for all $m \in \mathbb{Z}_+$ there is $C > 0$ such that
$$
\| \varphi'(x+ \, \cdot \, )\| \leq Cw_M^{1/m}(x) \qquad \mbox{and} \qquad \| \varphi^{(m)}(x+\, \cdot \, )\| \leq Cw_M(x).
$$
By applying \eqref{Gorny} to $g = \varphi'(x+\, \cdot \, )$ and $m \geq p$ such that
$$
\left(1 - \frac{p-1}{m} \right) \frac{1}{m} + \frac{p-1}{m} \leq \frac{1}{k}
$$
we find that (recall that $w_M \geq 1$)
\begin{align*}
|\varphi^{(p)}(x)| &\leq \| \varphi^{(p)}(x+\, \cdot \,)\| \\
&\leq C \| \varphi'(x+\, \cdot \, )\|^{1 - (p-1)/m} \left(\max \{ \| \varphi'(x+\, \cdot \,) \| , \| \varphi^{(m+1)}(x+\, \cdot \,)\| \} \right)^{(p-1)/m} \\
&\leq C'w_M^{1/k}(x).
\end{align*}
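For completeness, let us make the exponent count in the last estimate explicit: the bounds $\| \varphi'(x+\, \cdot \,)\| \leq Cw_M^{1/m}(x)$ and $\| \varphi^{(m+1)}(x+\, \cdot \,)\| \leq Cw_M(x)$ give, since $w_M \geq 1$,
$$
\| \varphi'(x+\, \cdot \, )\|^{1 - \frac{p-1}{m}} \left(\max \{ \| \varphi'(x+\, \cdot \,) \| , \| \varphi^{(m+1)}(x+\, \cdot \,)\| \} \right)^{\frac{p-1}{m}} \leq C'' w_M^{\theta}(x), \qquad \theta = \left(1 - \frac{p-1}{m}\right)\frac{1}{m} + \frac{p-1}{m},
$$
and $m$ was chosen precisely so that $\theta \leq 1/k$.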
$(iv)\Rightarrow(i)$: As $W$ satisfies \eqref{cond-mult}, this follows from Lemma \ref{lemma-2}.
\\ \\
$(III)$ $(iii) \Rightarrow (ii) \Rightarrow (i)$: Obvious. \\
$(i) \Rightarrow (iv)$: Fix arbitrary $p \in \mathbb{Z}_+$ and $N \in \mathbb{N}$. Choose $L \geq N$ such that
$$
\sup_{x,t \in \mathbb{R}, |t| \leq 1} \frac{v_N(x+t)}{v_L(x)} < \infty.
$$
Since $\mathscr{B}_{v_L} \subset \mathscr{O}_{M,V}$ and $ \mathscr{O}_{M,W} \subset \mathscr{O}^p_{M,W}$, we obtain that $C_\varphi(\mathscr{B}_{v_L}) \subset \mathscr{O}^p_{M,W}$. As $C_\varphi: C^\infty(\mathbb{R}) \to C^p(\mathbb{R})$ is continuous, De Wilde's closed graph theorem implies that $C_\varphi: \mathscr{B}_{v_L} \to \mathscr{O}^p_{M,W}$ is continuous. Grothendieck's factorization theorem yields that there is $M \in \mathbb{N}$ such that $C_\varphi: \mathscr{B}_{v_L} \to \mathscr{B}^p_{w_M}$ is well-defined and continuous, and thus that
$$
\| C_\varphi(f)\|_{w_M, p} \leq C\| f\|_{v_L, n}, \qquad \forall f \in \mathscr{B}_{v_L},
$$
for some $n \in \mathbb{N}$ and $C > 0$. Lemma \ref{lemma-1} with $w = w_M$, $v = v_N$ and $\widetilde{v} = v_L$ yields that
$$
\sup_{x \in \mathbb{R}} \frac{v_N(\varphi(x))}{w_M(x)} < \infty
$$
and (recall that $v_N \geq 1$)
$$
\sup_{x \in \mathbb{R}} \frac{|\varphi^{(p)}(x)|}{w_M(x)} < \infty.
$$
$(iv)\Rightarrow(iii)$: As $W$ satisfies \eqref{cond-mult}, this follows from Lemma \ref{lemma-2}.
\end{proof}
\begin{acknowledgement}
L. Neyt gratefully acknowledges support by FWO-Vlaanderen through the postdoctoral grant 12ZG921N.
\end{acknowledgement}
\section{Introduction} \label{Sec:Intro}
\IEEEPARstart{A}{n} array of new technologies that can contribute to beyond-5G (B5G) and sixth-generation (6G) wireless networks is being proposed and extensively investigated, including massive multiple-input multiple-output (m-MIMO) \cite{BJORNSON20193} and the use of higher frequency bands such as millimeter wave (mmWave) \cite{mmWave} and even terahertz (THz) \cite{Terahertz}. However, to apply these technologies, it may be necessary to develop new network software and hardware platforms. For example, millions of antennas and baseband units (BBUs) need to be deployed to support mmWave and THz technologies. As a low-cost solution, the reconfigurable intelligent surface (RIS) has recently drawn significant attention \cite{IRS1,IRS2, IRS3, IRS-RIS}.
The RIS-assisted wireless network not only provides lower cost compared with some other technologies, but also leads to less overhead to the existing wireless systems \cite{wu2020tutorial}. RIS is a planar surface composed of reconfigurable passive printed dipoles connected to a controller. The controller is able to reconfigure the phase shifts according to the incident signal, to achieve more favorable signal propagation. By its nature, RIS provides a higher degree of freedom for data transmission, thus improving the capacity and reliability. In contrast to traditional relay technology, RIS consumes much less energy without amplifying noise or generating self-interference, as the signal is passively reflected. Also, as deploying RIS in a wireless network is modular, it is more suitable for upgrading current wireless systems.
\subsection{Related Works}
\subsubsection{Studies on Characteristics of RIS} The authors of \cite{Emil1} propose a far-field path loss model for RIS by the physical optics method, and explain why RIS can beamform the diffuse signal to the desired receivers. In \cite{9119122, bjornson2019intelligent, huang2019reconfigurable}, the differences and similarities among RIS, decode-and-forward (DF), and amplify-and-forward (AF) relays are discussed. The authors of \cite{bjornson2019intelligent} point out that a sufficient number of reconfigurable elements should be considered to make up for the low reflecting channel gain without any amplification. The authors of \cite{8970580} compare the performance of non-orthogonal multiple access (NOMA) and orthogonal multiple access (OMA) in the RIS-assisted downlink for the two-user scenario, and reveal that OMA outperforms NOMA for near-RIS user pairing. Additionally, even though the adjustment range of the reflection elements is usually limited in order to reduce the cost in practice, \cite{8746155} shows that such a limited range can still guarantee good spectral efficiency. Furthermore, in \cite{zou2020joint}, RIS powered by wireless energy transmission is considered, and the results show that RIS can work well with wireless charging.
\subsubsection{Studies on Single-cell System with RIS} Inspired by the advantages of RIS, the authors of \cite{wu2019towards} outline several typical use cases of RIS, {e.g.}, improving the experience of the user located in poorly covered areas and enhancing the capacity and reliability of massive device communications. The authors of \cite{9039554} optimize the reflection elements to improve the rate of the cell-edge users in an RIS-assisted orthogonal frequency division multiplexing access (OFDMA)-based system. In \cite{wu1}, the authors jointly adopt the semidefinite relaxation (SDR) and the alternating optimization (AO) method to improve the spectral and energy efficiency of an RIS-assisted multi-user multi-input single-output (MISO) downlink system. For the same type of system, the authors of \cite{guo2019weighted} propose an algorithm based on the fractional programming technique to improve the capacity. Besides, due to its excellent compatibility, the RIS is able to adapt to various multiple access techniques, e.g., NOMA \cite{9203956, 9240028} and the promising rate splitting multiple access (RSMA) \cite{yang2020energy}. Additionally, a variety of RIS-assisted applications are investigated in \cite{9076830, 9110849, 9133107, 8743496, 9133130, Huang_additional}. The works in \cite{CR1, CR2, CR3, CR4, CR5} study RIS-assisted spectrum sharing scenarios. The authors of \cite{CR1} integrate an RIS into a multi-user full-duplex cognitive radio network (CRN) to simultaneously improve the system performance of the secondary network and efficiently reduce the interference from the primary users (PUs). The authors of \cite{CR2} provide an algorithm based on block coordinate descent to maximize the weighted sum rate of the secondary users (SUs) in CRNs subject to their total power.
The work in \cite{CR3} investigates two types of CSI error models for the PU-related channels in RIS-assisted CRNs, and proposes two schemes based on the successive convex approximation method to jointly optimize the transmit precoding and phase shift matrices. The authors of \cite{CR4} propose an alternating optimization method based on semidefinite relaxation techniques, jointly optimizing vertical beamforming and the RIS, to maximize the spectral efficiency of the secondary network in an RIS-assisted CRN. The work in \cite{CR5} studies an RIS-assisted spectrum sharing underlay cognitive radio wiretap channel, and proposes efficient algorithms to enhance the secrecy rate of the SU for three different cases: full CSI, imperfect CSI, and no CSI. In addition, the authors of \cite{learning1} develop a deep reinforcement learning (DRL) based algorithm to obtain the transmit beamforming and phase shifts in RIS-assisted multi-user MISO systems. A novel DRL-based hybrid beamforming algorithm is designed in \cite{learning2} to improve the coverage range of THz-band frequencies in multi-hop RIS-assisted communication systems. The authors of \cite{learning3} propose a decaying deep Q-network based algorithm to solve an energy consumption minimization problem in RIS-assisted unmanned aerial vehicle (UAV) enabled wireless networks.
\subsubsection{Studies on Multi-cell System with RIS} All the above works \cite{wu2019towards, 9039554, wu1, guo2019weighted, 9203956, 9240028, yang2020energy, 9076830, 9110849, 9133107, 8743496, 9133130, CR1, CR2, CR3, CR4, CR5} focus on a single-cell setup. Different from the single-cell scenario, the key issue for a multi-cell system is the presence of inter-cell interference. The authors of \cite{hua2020intelligent} jointly optimize transmission beamforming and the reflection elements to maximize the minimum user rate in an RIS-assisted joint processing coordinated multipoint (JP-CoMP) system. For fairness, a max-min weighted signal-interference-plus-noise ratio (SINR) problem in an RIS-assisted multi-cell MISO system is solved by three AO-based algorithms in \cite{xie2020max}, and the numerical results demonstrate that RIS can help improve the SINR and suppress the inter-cell interference, especially for cell-edge users. Three algorithms are proposed to minimize the sum power of a large-scale discrete-phase RIS-assisted MIMO multi-cell system in \cite{omid2020irs}. In \cite{ni2020resource}, the authors give a novel algorithm for resource allocation in an RIS-assisted multi-cell NOMA system to maximize the sum rate. The authors of \cite{kim2020exploiting} jointly optimize the reflection elements, base station (BS)-user pairing, and user transmit power by a DRL approach to maximize the sum rate for a multi-RIS-assisted MIMO multi-cell uplink system.
\subsection{Our Contributions}
It is noteworthy that almost all existing papers studying RIS-assisted multi-cell systems focus on the rate maximization problem, which usually means making full use of the resources. In many scenarios, however, the user demands are finite, and hence it is more relevant to meet the user data demands than to achieve the highest total rate by exhausting all the resources. In such scenarios, inter-cell interference is not represented by its worst-case value, as not all resources are used for transmission. With this in mind, we consider the time-frequency resource consumption minimization problem in the multi-RIS-assisted multi-cell system. Different from the existing full-buffer works, in our scenario the mutual interference is not known a priori and cannot be taken as the worst-case value. The main contributions of this paper can be summarized as follows:
\begin{itemize}
\item We formulate an optimization problem for the multi-RIS-assisted multi-cell system. Our objective is to optimize the reflection coefficients of all RISs in the system to minimize the total time-frequency resource consumption subject to the user demand requirement.
\item Due to the non-convexity of this problem, we first investigate its single-cell version. We derive an approximate convex model for the single-cell problem. We then propose an algorithm based on the Majorization-Minimization (MM) method to obtain a locally optimal solution.
\item In the next step, we embed the single-cell algorithm into an algorithmic framework to obtain a locally optimal solution for the overall multi-cell problem, and prove its feasibility and convergence. We also prove that the algorithmic framework reaches global optimality if the single-cell problem can be solved to optimality.
\item We evaluate the performance of the system optimized by our algorithmic framework and compare it with three benchmark solutions. The numerical results demonstrate that the proposed algorithmic framework is capable of achieving significant time-frequency resource savings.
\end{itemize}
\subsection{Organization and Notation}
\subsubsection{Organization} The remainder of this paper is organized as follows. In Section \ref{Sec:SystemModel}, we describe the multi-RIS-assisted multi-cell system model and formulate our optimization problem with the load coupling model for characterizing inter-cell interference. In Section \ref{Sec:SingleCell}, we investigate the single-cell problem, and propose an algorithm based on the MM method. In Section \ref{Sec:multi-cell}, we propose an algorithmic framework for the multi-cell problem. In Section \ref{Sec:evaluation}, simulation results are shown for performance evaluation. Finally, we conclude in Section \ref{Sec:conclusion}.
\subsubsection{Notation} Matrices and vectors are denoted by boldface capital and lower case letters, respectively. $\mathbb{C}^{1 \times M}$ and $\mathbb{C}^{M \times 1}$ denote the sets of complex matrices of size $1 \times M$ and $M \times 1$, respectively. $\mathfrak{diag} \{{\cdot}\}$ denotes the diagonalization operation. For a complex value $\mathrm{e}^{\mathrm{i}\theta}$, $\mathrm{i}$ denotes the imaginary unit. $x\sim \mathcal{CN}(\mu,\sigma^2)$ denotes the circularly symmetric complex Gaussian (CSCG) distribution with mean $\mu$ and variance $\sigma^2$. Transpose, conjugate, and transpose-conjugate operations are denoted by $(\cdot)^T$, $(\cdot)^\star$, and $(\cdot)^H$, respectively. $\Re\{\cdot\}$ and $\Im\{\cdot\}$ denote the real and imaginary parts of a complex number, respectively. $\mathcal{O}(\cdot)$ denotes the order of computational complexity. In addition, $||\cdot||_{\infty}$ denotes the infinity norm of a vector.
\section{System Model and Problem Formulation} \label{Sec:SystemModel}
\subsection{System Model}\label{Preliminaries}
\begin{figure}[tbp]
\centering
\begin{overpic}
[scale=0.16]{sys.pdf}
\put(38.5,46){$g_{i,j}$}
\put(35.5,36.5){$\boldsymbol{H}_{l,i}$}
\put(42,30){$\boldsymbol{G}_{l,j}$}
\end{overpic}
\caption{This figure shows a concise example of the received signal and interference at a user in the multi-RIS-assisted multi-cell system, where the blue solid and dashed lines represent the direct and reflected signals, respectively, and the red solid and dashed lines represent the direct and reflected interference, respectively.}\label{fig:sys}
\end{figure}
We consider a downlink multi-cell wireless system with a total of $L$ RISs and $J$ user equipments (UEs) distributed in $I$ cells. Denote by $\mathcal{L} = \{1,2,...,L \}$, $\mathcal{I} = \{1,2,...,I \}$, and $\mathcal{J} = \{1,2,...,J\}$ the sets of RISs, cells, and UEs, respectively. In addition, $\mathcal{J}_i$ $(\forall i \in \mathcal{I})$ represents the set of the UEs served by cell $i$. Without loss of generality, we assume that all RISs have the same number (denoted by $M$) of reflection elements. Let $\mathcal{M} = \{1,2,...,M\}$.
We use $g_{ij}$, $\boldsymbol{G}_{il} \in \mathbb{C}^{1 \times M}$, and $\boldsymbol{H}_{lj}\in \mathbb{C}^{M \times 1}$ to denote the channel gain from the BS of cell $i$ to UE $j$, the channel from the BS of cell $i$ to RIS $l$, and the channel from RIS $l$ to UE $j$, respectively. Note that the channels between BSs and RISs, as well as those between RISs and users in RIS-assisted multi-cell systems, can be estimated by existing methods, such as the channel estimation framework based on the PARAllel FACtor (PARAFAC) decomposition in \cite{Estimation}. The diagonal reflection matrix of the $l$-th RIS is denoted by
\begin{equation}
\boldsymbol{\Theta}_l =\mathfrak{diag} \{\phi_{l1}, \phi_{l2} , ..., \phi_{lM} \},
\end{equation}
where $\phi_{lm} = \lambda_{lm} \mathrm{e}^{\mathrm{i}\theta_{lm}}$, $\forall m \in \mathcal{M}$. Further, $\phi_{lm}$ is the $m$-th reflection coefficient of the $l$-th RIS, where $\lambda_{lm}$ and $\theta_{lm}$ represent its amplitude and phase, respectively. In this paper, we consider the following three value domains of the reflection coefficient.
\subsubsection{Ideal} Ideally, the amplitude and phase can be adjusted independently, i.e., $\lambda_{lm} \in \left[0,1\right]$ and $\theta_{lm} \in [0, 2\pi]$. This leads to the following domain definition:
\begin{equation}
\mathcal{D}_1=\left\{ \lambda_{lm} \mathrm{e}^{\mathrm{i}\theta_{lm}}\Big| \lambda_{lm} \in [0,1], \theta_{lm} \in [0,2\pi]\right\}.
\end{equation}
\subsubsection{Continuous Phase Shifter} In this case, the amplitude is at its maximum, {i.e.}, $\lambda_{lm} = 1$, and only the phase can be adjusted. Under this assumption, the corresponding domain is:
\begin{equation}
\mathcal{D}_2=\left\{ \mathrm{e}^{\mathrm{i}\theta_{lm}}\Big| \theta_{lm} \in [0,2\pi]\right\}.
\end{equation}
\subsubsection{Discrete Phase Shifter} This case is also referred to as a practical RIS, which provides only a finite number of phase shifts with the amplitude fixed at one. We assume the phase shift takes one of $N$ discrete values in the following domain:
\begin{equation}
\mathcal{D}_3= \left\{ \mathrm{e}^{\mathrm{i}\theta_{lm}}\Big| \theta_{lm}\in \{0,\Delta\theta, ..., \left(N-1\right)\Delta\theta\}, N\geq2\right\},
\end{equation}
where $\Delta \theta = 2\pi / N$.
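As a quick numerical illustration (not part of the system model; the element count $M$ and level count $N$ below are arbitrary), reflection coefficients can be drawn from the three domains and assembled into the diagonal matrix $\boldsymbol{\Theta}_l$:

```python
# Illustrative sketch: reflection coefficients in D1, D2, D3 and the
# diagonal reflection matrix Theta_l. Sizes and values are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 8                        # elements per RIS, discrete phase levels

amp = rng.uniform(0.0, 1.0, M)     # lambda_lm in [0, 1]
phase = rng.uniform(0.0, 2 * np.pi, M)

phi_d1 = amp * np.exp(1j * phase)  # D1: independent amplitude and phase
phi_d2 = np.exp(1j * phase)        # D2: unit amplitude, continuous phase
step = 2 * np.pi / N               # D3: phases on the grid {0, step, ...}
phi_d3 = np.exp(1j * step * np.round(phase / step))

Theta = np.diag(phi_d2)            # diagonal reflection matrix Theta_l

assert np.all(np.abs(phi_d1) <= 1.0)       # D1 amplitude constraint
assert np.allclose(np.abs(phi_d2), 1.0)    # D2 unit modulus
assert np.allclose(np.abs(phi_d3), 1.0)    # D3 unit modulus
```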
Note that, for single-user single-antenna scenarios with perfect CSI, amplitude control is unnecessary, because the optimal amplitude is one both in theory and in practice \cite{Amplitude_Control}. For multi-user systems with perfect CSI, the work in \cite{guo2019weighted} has shown that the performance gain provided by amplitude control is negligible. Therefore, we do not exclude amplitude control in our paper, and use optimization to examine whether our scenario with cell load coupling also conforms to this observation. Our numerical results (see Section \ref{Sec:evaluation}) show that the optimal RIS amplitude is indeed one.
We use $x_k$ to represent the transmitted signal at BS in cell $k$. Accordingly, the overall received signal including interference at UE $j$ is given by
\begin{align}
y_j& =\underbrace{ \sum_{k \in \mathcal{I}}g_{kj}x_k}_{\text{Direct links (from BSs)}} + \underbrace{\sum_{k \in \mathcal{I}}\sum_{l \in \mathcal{L}} \boldsymbol{G}_{kl} \boldsymbol{\Theta}_l\boldsymbol{H}_{lj}x_k}_{\text{RIS-assisted links}} +z_j \\
&= \sum_{k \in \mathcal{I}} \left( g_{kj} + \sum_{l \in \mathcal{L}} \boldsymbol{G}_{kl} \boldsymbol{\Theta}_l\boldsymbol{H}_{lj}\right) x_k + z_j,\label{y1}
\end{align}
where $z_j\sim \mathcal{CN}(0,\sigma^2)$ is the additive white Gaussian noise at UE $j$. Since UE $j$ in cell $i$ treats all signals from the other cells as interference, (\ref{y1}) can be rewritten as
\begin{align}
y_j& =\underbrace{ \left(g_{ij} + \sum_{l \in \mathcal{L}} \boldsymbol{G}_{il} \boldsymbol{\Theta}_l\boldsymbol{H}_{lj}\right)x_i}_{\text{Desired signal}} \notag \\
&+ \underbrace{ \sum_{k\in\mathcal{I}, \atop k\neq i}\left(g_{kj} + \sum_{l \in \mathcal{L}} \boldsymbol{G}_{kl} \boldsymbol{\Theta}_l\boldsymbol{H}_{lj}\right)x_k}_{\text{Interference}} +z_j.
\end{align}
Let $P_k$ be the transmission power per resource block (RB) in cell $k$, and let $\rho_k \in [0,1]$ denote the proportion of RBs consumed in cell $k$; this quantity is referred to as the cell {\em load} in \cite{mogensen2007lte}. In practice, it is difficult to fully coordinate the inter-cell interference in large-scale multi-cell networks. For this reason, we use the load levels to characterize a cell's likelihood of interfering with the others: a cell transmitting on many RBs, i.e., with high load, generates more interference than an almost idle cell with low load. We remark that this inter-cell interference approximation is suitable for network-level performance analysis, and \cite{fehske2012aggregation, klessig2015performance} have shown that it has good accuracy for inter-cell interference characterization. Thus, the power of the interference received at UE $j$ is given by
\begin{equation}
\sum_{k\in\mathcal{I}, \atop k\neq i} | g_{kj}\!+\! \sum_{l \in \mathcal{L}}\boldsymbol{G}_{kl} \boldsymbol{\Theta}_l\boldsymbol{H}_{lj} |^2\rho_k{P_k},
\end{equation}
where $g_{kj}$ and $\sum_{l \in \mathcal{L}}\boldsymbol{G}_{kl} \boldsymbol{\Theta}_l\boldsymbol{H}_{lj}$ are the channel gains of the direct and the RIS-assisted interference links between cell $k$ and UE $j$, respectively.
Hence, the signal-to-interference-and-noise ratio (SINR) of UE $j$ in cell $i$ is modelled as
\begin{equation}\label{sinr1}
\text{SINR}_j \left(\boldsymbol{\rho}, \boldsymbol{\phi} \right)\!=\! \frac{ \left | g_{ij} \! + \! \sum\limits_{l \in \mathcal{L}}\boldsymbol{G}_{il} \boldsymbol{\Theta}_l\boldsymbol{H}_{lj}\right |^2{P_i} }{ \sum\limits_{k\in\mathcal{I}, \atop k\neq i} \left | g_{kj}\!+\! \sum\limits_{l \in \mathcal{L}}\boldsymbol{G}_{kl} \boldsymbol{\Theta}_l\boldsymbol{H}_{lj}\right |^2{P_k}\rho_k \!+\! \sigma^2},
\end{equation}
where $\boldsymbol{\rho} = [\rho_1, \rho_2, ..., \rho_I ]^T$ and $\boldsymbol{\phi} = \{\phi_{lm} | l \in \mathcal{L}, m \in \mathcal{M} \}$. Note that the load $\rho_k$ of cell $k$ in (\ref{sinr1}), i.e., the proportion of resources used for transmission in cell $k$, serves as an interference scaling factor.
The achievable capacity of UE $j$ is then $\log_2 \left(1+\text{SINR}_j \left(\boldsymbol{\rho}, \boldsymbol{\phi} \right)\right)$ per RB. Let $d_j$ denote the demand of UE $j$, and denote by $B$ and $K$ the bandwidth of each RB and the total number of RBs in one cell, respectively. In addition, let $\rho_j$ be the proportion of RBs consumed by UE $j$ in its serving cell. We have
\begin{equation}
KB\rho_j \log_2\left(1+\text{SINR}_j \left(\boldsymbol{\rho}, \boldsymbol{\phi}\right)\right) \geq d_j.
\end{equation}
For any cell $i$, we have
\begin{equation}\label{rhoi}
\rho_i = \sum_{j\in \mathcal{J}_i} \rho_j \geq \sum_{j\in \mathcal{J}_i} \frac{d_j}{ K B \log_2\left(1+\text{SINR}_j \left(\boldsymbol{\rho}, \boldsymbol{\phi}\right)\right)}.
\end{equation}
For convenience, in the following discussion we use normalized demands $d_j$ such that $B$ and $K$ can be omitted.
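To illustrate the load computation in (\ref{rhoi}) under normalized demands, consider the following sketch; the SINR and demand values are arbitrary:

```python
# Sketch of the per-UE and per-cell load (eq. rhoi) with normalized
# demands (B = K = 1). All numbers are illustrative.
import numpy as np

sinr = np.array([3.0, 7.0, 1.5])   # SINR_j of the UEs served by cell i
d = np.array([0.4, 0.6, 0.2])      # normalized demands d_j

rho_ue = d / np.log2(1.0 + sinr)   # demand constraint met with equality
rho_i = rho_ue.sum()               # cell load rho_i

assert np.isclose(rho_ue[0], 0.4 / 2.0)   # since log2(1 + 3) = 2
assert rho_i <= 1.0                       # the loads must stay feasible
```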
\subsection{Problem Formulation}\label{MathematicalFormulation}
We consider the following optimization problem where the reflection coefficients are to be optimized for minimizing the total resource consumption, subject to the user demand requirement. \begin{subequations}\label{formulation}
\begin{align}
\textup{[{\rm P1}]}\ \ \ \underset{\boldsymbol{\rho}, \boldsymbol{\phi}}{\min} \ \ & \sum_{ i \in \mathcal{I}} \rho_i \label{P1obj} \\
\textup{ s.t.}\ \ \ & \rho_j \log_2\left(1+\text{SINR}_j \left(\boldsymbol{\rho}, \boldsymbol{\phi}\right)\right) \geq d_j, \forall j \in \mathcal{J}, \label{P1C1}\\
& \rho_i = \sum_{j\in \mathcal{J}_i} \rho_j , \forall i \in \mathcal{I},\label{P1C2}\\
& \phi_{lm} \in \mathcal{D}, \forall l \in \mathcal{L},\forall m \in \mathcal{M}.\label{P1C3}
\end{align}
\end{subequations}
The objective function (\ref{P1obj}) is the (normalized) total number of required RBs. Constraint (\ref{P1C1}) represents the user demand requirement and is non-convex. Constraint (\ref{P1C2}) defines the load of cell $i$. Constraint (\ref{P1C3}) specifies the value domain of the reflection coefficients, in which $\mathcal{D}$ can be any of $\mathcal{D}_1$, $\mathcal{D}_2$, and $\mathcal{D}_3$. In general, problem P1 is hard to solve: not only is it non-convex, but the cells are also highly coupled. From equation (\ref{sinr1}) in constraint (\ref{P1C1}), we can see that the inter-cell interference depends on the load levels, which in turn are governed by the interference. Moreover, the problem has different properties for different domains, whose impact is discussed in the sequel.
\section{Optimization within a Cell} \label{Sec:SingleCell}
Let us first consider the simpler single-cell case, and later we will use the derived results to address the multi-cell problem. In the single-cell problem, we minimize the load of any generic cell ${i}$, whereas the load levels as well as the RIS reflection coefficients of the other cells are given. Let $\mathcal{L}_i$ $(\forall i \in \mathcal{I})$ represent the set of RISs in cell $i$. We define $\boldsymbol{\rho}_{-i} = \left\{{\rho}_{k}|k\in \mathcal{I},k\neq i\right\}$, $\boldsymbol{\phi}_{-i} = \left\{ {\phi}_{lm} |m\in \mathcal{M}, l\in \mathcal{L}\setminus\mathcal{L}_i\right\}$, and $\boldsymbol{\phi}_{i} = \left\{ {\phi}_{lm} |m\in \mathcal{M}, l\in \mathcal{L}_i\right\}$.
\subsection{Formulation and Transformation}
For a UE $j$ in cell $i$, the intended signal received at UE $j$ is given by
\begin{align}
&{ \Bigg | \underbrace{g_{ij}+ \sum_{l \in \mathcal{L}\setminus \mathcal{L}_i} \boldsymbol{G}_{il} \boldsymbol{\Theta}_l \boldsymbol{H}_{lj}}_{\text{known}}+\underbrace{\sum_{l \in \mathcal{L}_i} \boldsymbol{G}_{il} \boldsymbol{\Theta}_l \boldsymbol{H}_{lj}}_{\text{to be optimized}} \Bigg|^2 P_i}\notag \\
=& { \Bigg | \hat{g}_{ij}+{\sum_{l \in \mathcal{L}_i} \boldsymbol{G}_{il} \boldsymbol{\Theta}_l \boldsymbol{H}_{lj}} \Bigg|^2 P_i},
\end{align}
where $\hat{g}_{ij}$ is the known part of the total gain.
Similarly, the interference received at UE $j$ is given by
\begin{equation}
{\sum_{k\in\mathcal{I}, \atop k\neq i}\Bigg | \hat{g}_{kj}+{\sum_{l \in \mathcal{L}_i}\boldsymbol{G}_{kl} \boldsymbol{\Theta}_l\boldsymbol{H}_{lj}}\Bigg|^2 P_k \rho_k}.
\end{equation}
Let $\boldsymbol{\Phi}_l = [\phi_{l1}, \phi_{l2}, ..., \phi_{lM}]^T$. Then we have
\begin{equation}
\boldsymbol{G}_{il} \boldsymbol{\Theta}_l\boldsymbol{H}_{lj} = \boldsymbol{\Lambda}_{ijl}\boldsymbol{\Phi}_l,
\end{equation}
where $\boldsymbol{\Lambda}_{ijl} = \boldsymbol{G}_{il}\mathfrak{diag} \{\boldsymbol{H}_{lj}\} $. To distinguish it from the SINR expression (\ref{sinr1}) of the multi-cell problem, we denote the SINR of UE $j$ in the single-cell problem by $\text{SINR}_j^{\left \langle s \right \rangle} \left(\boldsymbol{\phi}_{i} \right)$, given as follows.
\begin{align}\label{SSINR}
\text{SINR}_j^{\left \langle s \right \rangle} \left(\boldsymbol{\phi}_{i} \right)= \frac{ \left | \hat{g}_{ij}+ \sum\limits_{l \in \mathcal{L}_i}\boldsymbol{\Lambda}_{ijl}\boldsymbol{\Phi}_l\right |^2{P_i} }{ \sum\limits_{k\in\mathcal{I}, \atop k\neq i} \left | \hat{g}_{kj}+ \sum\limits_{l \in \mathcal{L}_i}\boldsymbol{\Lambda}_{kjl}\boldsymbol{\Phi}_l\right |^2{P_k}\rho_k + \sigma^2}.
\end{align}
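The vectorization $\boldsymbol{G}_{il} \boldsymbol{\Theta}_l\boldsymbol{H}_{lj} = \boldsymbol{\Lambda}_{ijl}\boldsymbol{\Phi}_l$ used above can be verified numerically on random channels (the size below is arbitrary):

```python
# Numerical check of G Theta H = (G diag(H)) Phi for one RIS link.
import numpy as np

rng = np.random.default_rng(1)
M = 5
G = rng.standard_normal(M) + 1j * rng.standard_normal(M)  # 1 x M row channel
H = rng.standard_normal(M) + 1j * rng.standard_normal(M)  # M x 1 column channel
Phi = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, M))       # reflection vector

lhs = G @ np.diag(Phi) @ H   # G Theta H with Theta = diag(Phi)
Lam = G * H                  # Lambda = G diag(H), an elementwise product
rhs = Lam @ Phi

assert np.allclose(lhs, rhs)
```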
Then the single-cell problem is as follows.
\begin{subequations}\label{single_formulation}
\begin{align}
\textup{[\rm P2]}\ \ \ \underset{ \boldsymbol{\phi}_{i}}{\min} \ \ &\sum_{j\in \mathcal{J}_i} \frac{d_j}{\log_2\left(1+ \text{SINR}_j^{\left \langle s \right \rangle} \left( \boldsymbol{\phi}_{i} \right) \right)} \label{P2obj}\\
\textup{ s.t.}\ \ \ & \phi_{lm} \in \mathcal{D}, \forall m \in \mathcal{M},\forall l \in \mathcal{L}_i. \label{P2C1}
\end{align}
\end{subequations}
Note that (\ref{P2obj}) is non-convex. To proceed, we first introduce auxiliary variables $\boldsymbol{\gamma}_i = [\gamma_1,\gamma_2,..., \gamma_{J_i}]^T$, where $J_i = |\mathcal{J}_i|$. Then problem P2 can be rewritten as follows.
\begin{subequations}
\begin{align}
\textup{[\rm P2.1]}\ \ \ \underset{ \boldsymbol{\phi}_{i}, \boldsymbol{\gamma}_i }{\min} \ \ &\sum_{j\in \mathcal{J}_i} \frac{d_j}{\log_2\left(1+ \gamma_j \right)} \label{P2.1obj}\\
\textup{ s.t.}\ \ \ & \text{(\ref{P2C1})}, \notag\\
& \text{SINR}_j^{\left \langle s \right \rangle} \left( \boldsymbol{\phi}_{i} \right) \geq \gamma_j , \forall j \in \mathcal{J}_i.
\label{P2.1C2}
\end{align}
\end{subequations}
Clearly, the objective function of P2.1 is convex. However, the additional constraint (\ref{P2.1C2}) is not convex. We introduce auxiliary variables $\boldsymbol{\beta}_i = [\beta_1,\beta_2,..., \beta_{J_i}]^T$ such that
\begin{equation}\label{convexconstraint}
\sum_{k\in\mathcal{I}, k\neq i}\left | \hat{g}_{kj}+ \sum_{l \in \mathcal{L}_i}\boldsymbol{\Lambda}_{kjl}\boldsymbol{\Phi}_l\right |^2{P_k}\rho_k + \sigma^2\leq \beta_j,\forall j \in \mathcal{J}_i.
\end{equation}
\begin{lemma}
Constraint (\ref{convexconstraint}) is convex.
\end{lemma}
\begin{proof}
Note that
\begin{align}
&\left | \hat{g}_{kj}+ \sum_{l \in \mathcal{L}_i}\boldsymbol{\Lambda}_{kjl}\boldsymbol{\Phi}_l\right |^2 \notag\\
=& \left(\hat{g}_{kj}+ \sum_{l \in \mathcal{L}_i}\boldsymbol{\Lambda}_{kjl}\boldsymbol{\Phi}_l\right)\left(\hat{g}_{kj}^\star+ \sum_{l \in \mathcal{L}_i} \boldsymbol{\Phi}^H_l\boldsymbol{\Lambda}_{kjl}^H\right)\notag\\
=&\sum_{l \in \mathcal{L}_i}\boldsymbol{\Phi}_l^H\boldsymbol{\Lambda}_{kjl}^H\boldsymbol{\Lambda}_{kjl}\boldsymbol{\Phi}_l \!+\! 2\Re\left\{\hat{g}_{kj}^\star\sum_{l \in \mathcal{L}_i}\boldsymbol{\Lambda}_{kjl}\boldsymbol{\Phi}_l\right\} \!+\! | \hat{g}_{kj}|^2 ,
\end{align}
where $\sum_{l \in \mathcal{L}_i} \boldsymbol{\Phi}_l^H\boldsymbol{\Lambda}_{kjl}^H\boldsymbol{\Lambda}_{kjl}\boldsymbol{\Phi}_l$ is a convex quadratic form (representable by a second-order cone (SOC) constraint) and $2\Re\{\hat{g}_{kj}^\star\sum_{l \in \mathcal{L}_i}\boldsymbol{\Lambda}_{kjl}\boldsymbol{\Phi}_l\}$ is affine; hence constraint (\ref{convexconstraint}) is convex.
\end{proof}
With $\boldsymbol{\beta}_i$, the SINR constraint (\ref{P2.1C2}) can be expressed by
\begin{equation}\label{non-convexconstraint}
\left | \hat{g}_{ij}+ \sum_{l \in \mathcal{L}_i}\boldsymbol{\Lambda}_{ijl}\boldsymbol{\Phi}_l\right |^2{P_i} \geq \beta _j \gamma _j,\forall j \in \mathcal{J}_i.
\end{equation}
Thus, problem P2.1 can be restated as follows.
\begin{subequations}\label{final_formulation}
\begin{align}
\textup{[\rm P2.2]}\ \ \ \underset{ \boldsymbol{\phi}_{i}, \boldsymbol{\gamma}_i, \boldsymbol{\beta}_i}{\min} \ \ &\sum_{j\in \mathcal{J}_i} \frac{d_j}{\log_2\left(1+ \gamma_j \right)} \label{P2fobj}\\
\textup{ s.t.}\ \ \ & \text{(\ref{P2C1}), (\ref{convexconstraint}), and (\ref{non-convexconstraint})}. \notag
\end{align}
\end{subequations}
\begin{proposition}
Problems P2 and P2.2 are equivalent.
\end{proposition}
\begin{proof}
The result follows since, by construction, there is a one-to-one mapping between the optimal solutions of P2 and P2.2.
\end{proof}
\subsection{Problem Approximation}
Consider P2.2, where constraints (\ref{P2C1}) and (\ref{non-convexconstraint}) are non-convex. We first deal with constraint (\ref{non-convexconstraint}), and defer the discussion of domain constraint (\ref{P2C1}).
Applying the identity $\beta _j \gamma _j = \frac{1}{4}\left(\left(\beta _j+\gamma_j\right)^2-\left(\beta _j-\gamma_j\right)^2\right)$ to constraint (\ref{non-convexconstraint}) yields the following equivalent constraint:
\begin{align}\label{non-convexconstraint2}
\left(\beta _j+\gamma_j\right)^2 &-\left(\beta _j-\gamma_j\right)^2\notag\\
& - 4{P_i}\left | \hat{g}_{ij}+ \sum_{l \in \mathcal{L}_i}\boldsymbol{\Lambda}_{ijl}\boldsymbol{\Phi}_l\right |^2 \leq 0,\forall j \in \mathcal{J}_i.
\end{align}
We define a function as follows
\begin{align}\label{definition1}
&\mathcal{F}_j(\gamma_j, \beta_j, \boldsymbol{\phi}_i) \triangleq \notag\\
& \left(\beta _j+\gamma_j\right)^2-\left(\beta _j-\gamma_j\right)^2 - 4{P_i}\left | \hat{g}_{ij}+ \sum_{l \in \mathcal{L}_i}\boldsymbol{\Lambda}_{ijl}\boldsymbol{\Phi}_l\right |^2, \forall j \in \mathcal{J}_i.
\end{align}
Observing that $\mathcal{F}_j$ is the difference of two convex (DC) functions, we adopt the DC programming technique:
\begin{enumerate}
\item The Taylor expansion of $\left(\beta _j-\gamma_j\right)^2$ at point $(\tilde{\beta}_j,\tilde{\gamma}_j)$ is given by
\begin{equation}\label{TE1}
\left(\beta _j\!-\!\gamma_j\right)^2 \!=\! 2(\tilde{\beta} _j-\tilde{\gamma}_j)\left(\beta _j \!-\!\gamma_j\right) - (\tilde{\beta} _j-\tilde{\gamma}_j)^2 \!+\! R_1(\beta_j,\gamma_j),
\end{equation}
where $R_1(\beta_j,\gamma_j)$ is the remainder. Since $\left(\beta _j-\gamma_j\right)^2$ is convex, we immediately have
\begin{equation}\label{TE2}
R_1(\beta_j,\gamma_j) \geq 0.
\end{equation}
By (\ref{TE1}) and (\ref{TE2}), we have
\begin{equation}\label{equality1}
\left(\beta _j-\gamma_j\right)^2 \geq 2(\tilde{\beta} _j-\tilde{\gamma}_j)\left(\beta _j-\gamma_j\right) - (\tilde{\beta} _j-\tilde{\gamma}_j)^2.
\end{equation}
\item Similarly, we have
\begin{align}\label{equality2}
&\left | \hat{g}_{ij}+ \sum_{l \in \mathcal{L}_i}\boldsymbol{\Lambda}_{ijl}\boldsymbol{\Phi}_l\right |^2 \notag \\
=& \left(\hat{g}_{ij}+ \sum_{l \in \mathcal{L}_i}\boldsymbol{\Lambda}_{ijl}\boldsymbol{\Phi}_l\right)\left(\hat{g}_{ij}^\star+ \sum_{l \in \mathcal{L}_i} \boldsymbol{\Phi}^H_l\boldsymbol{\Lambda}_{ijl}^H\right)\notag\\
=& \left|\sum_{l \in \mathcal{L}_i}\boldsymbol{\Lambda}_{ijl}\boldsymbol{\Phi}_l \right|^2 + 2\Re\left\{\hat{g}_{ij}\sum_{l \in \mathcal{L}_i}\boldsymbol{\Phi}_l^H\boldsymbol{\Lambda}_{ijl}^H\right\}+| \hat{g}_{ij}|^2 \notag\\
\geq &2\Re\left\{ \left(\hat{g}_{ij}+\sum_{l \in \mathcal{L}_i} \boldsymbol{\Lambda}_{ijl}\tilde{\boldsymbol{\Phi}}_l\right)^\star \sum_{l \in \mathcal{L}_i}\boldsymbol{\Lambda}_{ijl}\boldsymbol{\Phi}_l\right\} \notag \\
& \qquad\qquad\qquad\qquad -\left|\sum_{l \in \mathcal{L}_i}\boldsymbol{\Lambda}_{ijl}\tilde{\boldsymbol{\Phi}}_l\right|^2+|\hat{g}_{ij}|^2.
\end{align}
\end{enumerate}
Then we define the following function
\begin{align}\label{definition2}
&\!\tilde{\mathcal{F}}_j(\gamma_j, \beta_j, \boldsymbol{\phi}_i) \!=\! \left(\beta _j\!+\!\gamma_j\right)^2\!-\!2(\tilde{\beta} _j\!-\!\tilde{\gamma}_j)\!\left(\beta _j\!-\!\gamma_j\right)\!+\! (\tilde{\beta} _j\!-\!\tilde{\gamma}_j)^2\notag\\
&\qquad- 4{P_i}\Bigg( 2\Re\left\{ \left(\hat{g}_{ij}+\sum_{l \in \mathcal{L}_i} \boldsymbol{\Lambda}_{ijl}\tilde{\boldsymbol{\Phi}}_l\right)^\star \sum_{l \in \mathcal{L}_i}\boldsymbol{\Lambda}_{ijl}\boldsymbol{\Phi}_l\right\} \notag \\
& \qquad\qquad\qquad\quad -\left|\sum_{l \in \mathcal{L}_i}\boldsymbol{\Lambda}_{ijl}\tilde{\boldsymbol{\Phi}}_l\right|^2+|\hat{g}_{ij}|^2\Bigg),\forall j \in \mathcal{J}_i.
\end{align}
\begin{lemma}\label{ubf}
$\tilde{\mathcal{F}}_j(\gamma_j, \beta_j, \boldsymbol{\phi}_i) \geq {\mathcal{F}}_j(\gamma_j, \beta_j, \boldsymbol{\phi}_i)$.
\end{lemma}
\begin{proof}
It follows immediately from definitions (\ref{definition1}) and (\ref{definition2}) together with inequalities (\ref{equality1}) and (\ref{equality2}).
\end{proof}
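The majorization property of Lemma \ref{ubf}, including tightness at the expansion point, can be checked numerically for a single-RIS instance (all data below are arbitrary):

```python
# Numerical check that F_tilde majorizes F, with equality at the
# expansion point (beta_t, gamma_t, Phi_t). One RIS, arbitrary data.
import numpy as np

rng = np.random.default_rng(2)
M, P = 4, 1.0
g = 0.3 + 0.2j                                            # hat{g}_{ij}
Lam = rng.standard_normal(M) + 1j * rng.standard_normal(M)  # Lambda_{ijl}

def F(beta, gamma, Phi):
    s = g + Lam @ Phi
    return (beta + gamma) ** 2 - (beta - gamma) ** 2 - 4 * P * abs(s) ** 2

def F_tilde(beta, gamma, Phi, beta_t, gamma_t, Phi_t):
    u = Lam @ Phi_t
    lin = 2 * np.real(np.conj(g + u) * (Lam @ Phi)) - abs(u) ** 2 + abs(g) ** 2
    return ((beta + gamma) ** 2
            - 2 * (beta_t - gamma_t) * (beta - gamma)
            + (beta_t - gamma_t) ** 2
            - 4 * P * lin)

Phi_t = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, M))
bt, gt = 1.5, 0.7
for _ in range(100):                  # F_tilde >= F at random points
    b, c = rng.uniform(0.0, 3.0, 2)
    Phi = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, M))
    assert F_tilde(b, c, Phi, bt, gt, Phi_t) >= F(b, c, Phi) - 1e-9
# equality at the expansion point, as required by the MM method
assert np.isclose(F_tilde(bt, gt, Phi_t, bt, gt, Phi_t), F(bt, gt, Phi_t))
```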
Thus, we use the following constraint to approximate constraint (\ref{non-convexconstraint2}).
\begin{equation}\label{approximate constraints}
\tilde{\mathcal{F}}_j(\gamma_j, \beta_j, \boldsymbol{\phi}_i) \leq 0, \forall j \in \mathcal{J}_i.
\end{equation}
Note that $\tilde{\mathcal{F}}_j(\gamma_j, \beta_j, \boldsymbol{\phi}_i)$ is the sum of a convex function and affine functions, so the approximate constraint (\ref{approximate constraints}) is convex. We thus obtain the following approximate problem.
\begin{subequations}
\begin{align}
\textup{[\rm P2.3]}\ \ \ \underset{ \boldsymbol{\phi}_{i}, \boldsymbol{\gamma}_i ,\boldsymbol{\beta}_i }{\min} \ \ &\sum_{j\in \mathcal{J}_i} \frac{d_j}{\log_2\left(1+ \gamma_j \right)} \label{objjjjj} \\
\textup{ s.t.}\ \ \ &\text{(\ref{P2C1}), (\ref{convexconstraint}), and (\ref{approximate constraints})}. \notag
\end{align}
\end{subequations}
The objective function (\ref{objjjjj}) and constraints (\ref{convexconstraint}) and (\ref{approximate constraints}) are all convex. Next, we investigate the effect of constraint (\ref{P2C1}).
\subsection{The Impact of Reflection Coefficient Models}
\subsubsection{Ideal}
Since $\mathcal{D}_1$ is a convex set, the approximate problem P2.3 is convex for this domain, and the optimal solution can be found efficiently via existing solvers. Making use of the following remark, the MM method ({a.k.a.} Successive Upper-bound Minimization) can be applied to approach P2 and guarantees a local optimum \cite{wu1983convergence}.
\begin{remark}\label{remark}
By Lemma \ref{ubf}, the Lagrange dual function of problem P2.3 is an upper bound of that of P2.2.
\end{remark}
Specifically, we can find a locally optimal solution to P2 by solving a sequence of successive upper-bound approximate problems of the form P2.3. The algorithm for the single-cell case with $\mathcal{D}_1$ is detailed in Algorithm \ref{al0}.
\begin{algorithm}[tbp]\label{al0}
\caption{Single-cell optimization based on MM}
\KwIn{$\boldsymbol{\rho}_{-i}, \{d_j, g_{ij}, \boldsymbol{G}_{il}, \boldsymbol{H}_{lj}\}, \forall l \in \mathcal{L}_i, \forall j \in \mathcal{J}_i, \epsilon$;}
\KwOut{$\rho_i$, $ \boldsymbol{\phi}_i$;}
Initialize $\{ {\boldsymbol{\phi}}^{(0)}_i, {\boldsymbol{\gamma}}^{(0)}_i,{\boldsymbol{\beta}}^{(0)}_i\}$\;
$t\leftarrow0$\;
\Repeat
{{\rm (\ref{objjjjj}) or (\ref{objjjjjjj}) converges with respect to $\epsilon$}\label{step7}
}
{
$\{ \tilde{\boldsymbol{\phi}}_i, \tilde{\boldsymbol{\gamma}}_i,\tilde{\boldsymbol{\beta}}_i\}\leftarrow\{{\boldsymbol{\phi}}^{(t)}_i, {\boldsymbol{\gamma}}^{(t)}_i,{\boldsymbol{\beta}}^{(t)}_i\}$\;
{
Obtain $\{ {\boldsymbol{\phi}}^{(t+1)}_i, {\boldsymbol{\gamma}}^{(t+1)}_i,{\boldsymbol{\beta}}^{(t+1)}_i\}$ by P2.3 or P2.5\;
}
$t\leftarrow t+1$\;
}
$\rho_i = \sum_{j\in \mathcal{J}_i} \frac{d_j}{\log_2\left(1+ \gamma_j^{(t)} \right)}$, $\boldsymbol{\phi}_i = \boldsymbol{\phi}_i^{(t)}$\;
\Return {$\rho_i $, $\boldsymbol{\phi}_i $}\;
\end{algorithm}
\subsubsection{Continuous Phase Shifter}
Domain $ \mathcal{D}_2$ is non-convex, and we handle this issue by the penalty method. The resulting optimization problem can be rewritten as
\begin{subequations}
\begin{align}
\textup{[\rm P2.4]}\ \ \ \underset{ \boldsymbol{\phi}_{i}, \boldsymbol{\gamma}_i ,\boldsymbol{\beta}_i}{\min} \ \ &\sum_{j\in \mathcal{J}_i} \frac{d_j}{\log_2\left(1+ \gamma_j \right)} - C \sum_{ l \in \mathcal{L}_i, \atop m\in \mathcal{M}}\left(\left|\phi_{lm}\right|^2 -1\right)
\label{P2.4obj}\\
\textup{ s.t.}\ \ \ &\text{(\ref{convexconstraint}) and (\ref{approximate constraints})}, \notag\\
& \left | \phi_{lm} \right | \leq 1, \forall m \in \mathcal{M},\forall l \in \mathcal{L}_i,\label{P2.4C1}
\end{align}
\end{subequations}
where $C>0$ is the penalty parameter. Note that the penalty term $C \sum_{l \in \mathcal{L}_i,m\in \mathcal{M}}(\left|\phi_{lm}\right|^2 -1 )$ enforces $\left|\phi_{lm}\right|^2 = 1$ at the optimal solution of P2.4 for sufficiently large $C$. However, the objective function becomes non-convex, and we again use the DC programming technique to approximate it. With the first-order Taylor expansion of $\phi_{lm}^\star\phi_{lm}$, we have
\begin{equation}\label{1st}
\left|\phi_{lm}\right|^2 = \phi_{lm}^\star\phi_{lm} \geq 2\Re\left\{\tilde{\phi}_{lm}^\star \phi_{lm} \right\}-|\tilde{\phi}_{lm} |^2.
\end{equation}
Then P2.4 can be approximated with the following convex problem.
\begin{subequations}
\begin{align}
\textup{[\rm P2.5]}\ \underset{ \boldsymbol{\phi}_{i}, \boldsymbol{\gamma}_i ,\boldsymbol{\beta}_i}{\min} \ \ &\sum_{j\in \mathcal{J}_i} \frac{d_j}{\log_2\left(1+ \gamma_j \right)} - 2 C \sum_{ m \in \mathcal{M}, \atop l \in \mathcal{L}_i} \Re\left\{\tilde{\phi}_{lm}^\star \phi_{lm} \right\}\label{objjjjjjj} \\
\textup{ s.t.}\ \ \ & \text{(\ref{convexconstraint}), (\ref{approximate constraints}), and (\ref{P2.4C1})} \notag.
\end{align}
\end{subequations}
Note that the constant term $C\sum_{l \in \mathcal{L}_i, m\in\mathcal{M}}(|\tilde{\phi}_{lm} |^2+1)$ is not stated explicitly in (\ref{objjjjjjj}), as it has no impact on the optimum.
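The penalty majorization based on (\ref{1st}), including the dropped constant, can likewise be verified on random points (all numbers below are arbitrary):

```python
# Check that the linearized penalty upper-bounds -C(|phi|^2 - 1), with
# equality at the expansion point phi_t. Values are illustrative.
import numpy as np

rng = np.random.default_rng(3)
C = 10.0
phi_t = 0.8 * np.exp(1j * 0.4)     # expansion point (tilde phi)

def penalty(phi):
    return -C * (abs(phi) ** 2 - 1.0)

def penalty_ub(phi):
    # upper bound from the first-order expansion of phi* phi
    return -2.0 * C * np.real(np.conj(phi_t) * phi) + C * (abs(phi_t) ** 2 + 1.0)

for _ in range(100):
    phi = rng.uniform(0.0, 1.0) * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi))
    assert penalty_ub(phi) >= penalty(phi) - 1e-12
assert np.isclose(penalty_ub(phi_t), penalty(phi_t))   # tight at phi_t
```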
We obtain a locally optimal solution to P2 with domain $\mathcal{D}_2$ by the MM method, solving a sequence of problems of the form P2.5. The algorithm is again Algorithm \ref{al0}, as all major algorithmic steps for $\mathcal{D}_1$ remain unchanged.
\subsubsection{Discrete Phase Shifter}
If $\mathcal{D} = \mathcal{D}_3$, the single-cell problem P2 is NP-hard.
\begin{proposition}\label{NP-hard}
The single-cell problem P2 is NP-hard when $\mathcal{D} = \mathcal{D}_3$.
\end{proposition}
\begin{proof}
Please refer to Appendix.
\end{proof}
As P2 with $\mathcal{D}_3$ is NP-hard, we resort to an approximate solution. We first obtain the solution $\phi_{lm}^{\left \langle \mathcal{D}_2 \right \rangle}$ $( \forall m \in \mathcal{M},\forall l \in \mathcal{L}_i)$ to P2 with $\mathcal{D}_2$ by Algorithm \ref{al0}. Then we use the following rounding operation to obtain an approximate solution $\phi_{lm}^{\left \langle \mathcal{D}_3 \right \rangle}$ $( \forall m \in \mathcal{M},\forall l \in \mathcal{L}_i)$:
\begin{equation}\label{rounding}
\phi_{lm}^{\left \langle \mathcal{D}_3 \right \rangle} = \underset{\phi_{lm}\in\mathcal{D}_3}{\text{argmin}} \left| \phi_{lm} -\phi_{lm}^{\left \langle \mathcal{D}_2 \right \rangle}\right|, \forall m \in \mathcal{M},\forall l \in \mathcal{L}_i.
\end{equation}
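The rounding in (\ref{rounding}) amounts to a nearest-neighbor search over the $N$ grid points of $\mathcal{D}_3$; a minimal sketch (the phase values below are arbitrary):

```python
# Minimal sketch of rounding continuous phase shifts onto the N-point
# grid of D3. Input phases are illustrative.
import numpy as np

def round_to_d3(phi_d2, N):
    grid = np.exp(1j * 2.0 * np.pi * np.arange(N) / N)  # the N points of D3
    idx = np.argmin(np.abs(grid[None, :] - phi_d2[:, None]), axis=1)
    return grid[idx]

phi_d2 = np.exp(1j * np.array([0.1, 3.0, 5.9]))
phi_d3 = round_to_d3(phi_d2, N=4)          # grid is {1, i, -1, -i}

assert np.allclose(np.abs(phi_d3), 1.0)
# each phase moves by at most half a grid step (pi / N)
assert np.max(np.abs(np.angle(phi_d3 * np.conj(phi_d2)))) <= np.pi / 4 + 1e-9
```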
\subsection{Complexity Analysis}
Algorithm \ref{al0} consists of two parts: 1) an iterative process based on the MM method, and 2) solving the convex approximate problems. We first analyze the computational complexity of solving problem P2.3 by a standard interior-point method \cite{ben2001lectures}. There are $M \times |\mathcal{L}_i|$ SOC constraints of size two (in the number of variables), $|\mathcal{J}_i|$ SOC constraints of size $2\times M\times|\mathcal{L}_i|+1$, and $|\mathcal{J}_i|$ SOC constraints of size $M\times|\mathcal{L}_i| + 2$, with $2\times M\times|\mathcal{L}_i| + 2\times|\mathcal{J}_i|$ optimization variables. To simplify the notation, let $Q = M \times |\mathcal{L}_i| $ and $T = |\mathcal{J}_i|$. From \cite{6891348}, the complexity of solving P2.3 is thus given by
\begin{align}
&\sqrt{Q+2T}\left(2Q+2T\right)\Big(4Q+T\left(2Q+1\right)^2 \notag\\
& \qquad+T\left(Q+2\right)^2+\left(2Q+2T\right)^2\Big) = \mathcal{O}(Q^{3.5}T^{2.5}).
\end{align}
Similarly, the computational complexity of solving problem P2.5 is also $\mathcal{O}(Q^{3.5}T^{2.5})$. In addition, since the complexity of the iterative process based on the MM method is $\mathcal{O}\left(\ln(1/\epsilon)\right)$, where $\epsilon$ is the convergence tolerance in Step \ref{step7}, the overall complexity of Algorithm \ref{al0} is $\mathcal{O}\left(Q^{3.5}T^{2.5}\ln(1/\epsilon)\right)$.
\section{Multi-cell Load Optimization} \label{Sec:multi-cell}
This section proposes an algorithmic framework for the multi-cell problem and then discusses its convergence.
In the previous section, we solved the single-cell problem via Algorithm \ref{al0}: a locally optimal load of cell $i$ can be obtained when the loads $\boldsymbol{\rho}_{-i}$ of the other cells and the reflection coefficients $\boldsymbol{\phi}_{-i}$ of the RISs in the other cells are given. We aim to solve the multi-cell problem by iteratively solving the single-cell problem. However, Algorithm \ref{al0} may return different locally optimal solutions for different initial parameters. If we directly embed it into an algorithmic framework based on the fixed-point method as in \cite{siomina2012analysis, 8353846}, convergence is not guaranteed. To avoid this issue, the initial parameters of Algorithm \ref{al0} should be pre-determined rather than random within the framework. We therefore use $f_i\left(\boldsymbol{\rho}_{-i},\boldsymbol{\phi}_{-i}, \Psi_i \right) $ to denote the process of obtaining $\rho_i$ by Algorithm \ref{al0} with pre-determined initial parameters, {i.e.},
\begin{equation}
f_i\left(\boldsymbol{\rho}_{-i},\boldsymbol{\phi}_{-i}, \Psi_i \right) = \rho_i,
\end{equation}
where $\rho_i$ is obtained by Algorithm \ref{al0} with the initial parameters $\Psi_i = \{ {\boldsymbol{\phi}}^{(0)}_i, {\boldsymbol{\gamma}}^{(0)}_i,{\boldsymbol{\beta}}^{(0)}_i\}$.
\begin{lemma}\label{lemma3}
$f_i\left(\boldsymbol{\rho}_{-i},\boldsymbol{\phi}_{-i}, \Psi_i \right)$ is well-defined.
\end{lemma}
\begin{proof}
Note that $\rho_i$ is obtained by solving a sequence of convex problems, where the solution of one problem serves as the initial point of the next. By the construction of the MM method, Algorithm \ref{al0} always converges, and the same holds if it is applied repeatedly a finite number of times. Therefore, $f_i\left(\boldsymbol{\rho}_{-i},\boldsymbol{\phi}_{-i}, \Psi_i \right)$ is well-defined.
\end{proof}
With $f_i\left(\boldsymbol{\rho}_{-i},\boldsymbol{\phi}_{-i}, \Psi_i \right)$ and Lemma \ref{lemma3}, we then propose an algorithmic framework based on the following iteration to obtain a locally optimal solution.
\begin{align}\label{iterationequality}
&\boldsymbol{\rho}^{(\tau+1)}= \Big[f_1\left(\boldsymbol{\rho}_{-1}^{(\tau)},\boldsymbol{\phi}_{-1}^{(\tau)}, \Psi_1^{(\tau)} \right), f_2\left(\boldsymbol{\rho}_{-2}^{(\tau)},\boldsymbol{\phi}_{-2}^{(\tau)}, \Psi_2^{(\tau)} \right), \notag \\
& \qquad\qquad\qquad\qquad\qquad\quad ...,f_I\left(\boldsymbol{\rho}_{-I}^{(\tau)},\boldsymbol{\phi}_{-I}^{(\tau)}, \Psi_I^{(\tau)} \right)\Big]^T.
\end{align}
The algorithmic framework is detailed in Algorithm \ref{al1}, named iterative convex approximation (ICA) algorithm.
\begin{algorithm}[tbp]\label{al1}
\caption{Iterative convex approximation (ICA)}
\KwIn{ $\{d_j, g_{ij}, \boldsymbol{G}_{il}, \boldsymbol{H}_{lj}\}, \forall i \in \mathcal{I}, \forall l \in \mathcal{L}, \forall j \!\in \mathcal{J}, \varepsilon$;}
\KwOut{$\boldsymbol{\rho}$, $ \boldsymbol{\phi}$;}
Initialize $\boldsymbol{\phi}^{(0)}$\label{step1}\;
Obtain $\boldsymbol{\rho}^{(0)}, \boldsymbol{\gamma}^{(0)} = \{ \boldsymbol{\gamma}_1^{(0)}, \boldsymbol{\gamma}_2^{(0)}, ..., \boldsymbol{\gamma}_I^{(0)}\}$, and $\boldsymbol{\beta}^{(0)} = \{ \boldsymbol{\beta}_1^{(0)}, \boldsymbol{\beta}_2^{(0)}, ..., \boldsymbol{\beta}_I^{(0)}\}$ from solving the multi-cell problem with $\boldsymbol{\phi}^{(0)}$\label{step2}\;
$\tau \leftarrow0$\;
\Repeat
{$||\boldsymbol{\rho}^{(\tau)}- \boldsymbol{\rho}^{(\tau-1)}||_{\infty} \leq \varepsilon$
}
{
\For{{\rm cell} $i$, $\forall i \in \mathcal{I}$}{${\rho}_i^{(\tau+1)} = f_i\left(\boldsymbol{\rho}_{-i}^{(\tau)},\boldsymbol{\phi}_{-i}^{(\tau)}, \Psi_i^{(\tau)} \right)$\;\label{s5} $\boldsymbol{\phi}_{-i}^{(\tau+1)}\leftarrow \boldsymbol{\phi}_{-i}^{(\tau)}$ retrieved from Step \ref{s5}\; $\Psi_i^{(\tau+1)}\leftarrow \Psi_i^{(\tau)}$ retrieved from Step \ref{s5}\; }
$\tau \leftarrow \tau+1$\;
}
$\boldsymbol{\rho} = \boldsymbol{\rho}^{(\tau)}$, $\boldsymbol{\phi}=\boldsymbol{\phi}^{(\tau)}$\;
\Return {$\boldsymbol{\rho} $, $\boldsymbol{\phi}$}\;
\end{algorithm}
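The outer iteration (\ref{iterationequality}) can be sketched in a few lines of Python. The per-cell solvers below are hypothetical affine stand-ins for Algorithm \ref{al0}, chosen only to make the example self-contained and convergent; the cell-by-cell update and the infinity-norm stopping rule mirror Algorithm \ref{al1}.

```python
import numpy as np

def ica(f, rho0, eps=1e-6, max_iter=100):
    """Sketch of the ICA outer loop: each cell i updates its load via its
    single-cell solver f[i], holding the other cells' loads fixed, until
    the load vector stops changing in the infinity norm."""
    rho = np.asarray(rho0, dtype=float)
    for _ in range(max_iter):
        new = np.array([f[i](np.delete(rho, i)) for i in range(len(rho))])
        if np.max(np.abs(new - rho)) <= eps:  # infinity-norm stopping rule
            return new
        rho = new
    return rho

# Two-cell toy example with affine stand-in solvers (assumption).
f = [lambda r: 0.2 + 0.3 * r.sum(),
     lambda r: 0.1 + 0.4 * r.sum()]
rho = ica(f, [1.0, 1.0])  # converges to the unique fixed point
```

In the actual framework, each $f_i$ runs Algorithm \ref{al0} with the pre-determined initial parameters $\Psi_i$, so one outer iteration is far more expensive than in this toy example.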
Steps \ref{step1} and \ref{step2} obtain appropriate initial parameters. With $\boldsymbol{\phi}^{(0)}$ given, our problem takes the form of the problem in \cite{siomina2012analysis}; $\boldsymbol{\rho}^{(0)}$ can then be easily obtained, and $\boldsymbol{\gamma}^{(0)}$ and $\boldsymbol{\beta}^{(0)}$ can be calculated accordingly. The convergence of ICA is shown below.
\begin{lemma}\label{lemma4}
$f_i\left(\boldsymbol{\rho}_{-i},\boldsymbol{\phi}_{-i}, \Psi_i \right) $ is a monotonically increasing function of $\boldsymbol{\rho}_{-i}$.
\end{lemma}
\begin{proof}
When $\boldsymbol{\phi}_{-i}$ and $\Psi_i$ are fixed, our single-cell problem P2 degenerates into the single-cell problem in \cite{siomina2012analysis}, where the corresponding function has been proved to be monotonically increasing in $\boldsymbol{\rho}_{-i}$.
\end{proof}
\begin{lemma}\label{lemma5}
$\rho_i^{(1)} \leq \rho_i^{(0)}, \forall i \in \mathcal{I}$ holds in ICA.
\end{lemma}
\begin{proof}
In ICA, $\boldsymbol{\rho}^{(1)}$ is obtained by (\ref{iterationequality}) based on the MM method with $\boldsymbol{\rho}^{(0)}$ as the initial parameter. By the construction of the MM method, the obtained locally optimal solution cannot be worse than the initial one, i.e., $\rho_i^{(1)} \leq \rho_i^{(0)}, \forall i \in \mathcal{I}$.
\end{proof}
\begin{proposition}
ICA is convergent.
\end{proposition}
\begin{proof}
From Lemmas \ref{lemma4} and \ref{lemma5}, ${\rho}^{(\tau+1)}_i \leq {\rho}^{(\tau)}_i$ holds for any $\tau \geq 0$ and $\forall i \in \mathcal{I}$. Therefore, the total load is monotonically decreasing over the iterations of ICA; convergence then follows from the fact that the load levels are non-negative.
\end{proof}
A particularly interesting aspect of the multi-cell problem is under what condition it can be solved to global optimality with our multi-cell algorithm. As an interesting finding, our algorithm converges to the global optimum if the single-cell problem can be solved to optimality.
\begin{theorem}\label{additional}
The global optimum of the multi-cell problem can be obtained if the single-cell problem can be solved to optimum.
\end{theorem}
\begin{proof}
First, we use $F_i(\boldsymbol{\rho}_{-i},\boldsymbol{\phi}_{i})$ to represent the optimal solution to the single-cell optimization problem under any given reflection coefficient vector $\boldsymbol{\phi}_i$, which can be directly expressed by
\begin{align}
F_i(\boldsymbol{\rho}_{-i},\boldsymbol{\phi}_{i}) = \sum_{j\in \mathcal{J}_i} \frac{d_j}{\log_2\left(1+ \text{SINR}_j^{\left \langle s \right \rangle} \left( \boldsymbol{\phi}_{i} \right) \right)}.
\end{align}
By \cite[Theorem 2]{siomina2012analysis}, $F_i(\boldsymbol{\rho}_{-i},\boldsymbol{\phi}_{i})$ is strictly concave in $\boldsymbol{\rho}_{-i}$, and by \cite[Corollary 3]{siomina2012analysis}, it can easily be shown to have the scalability property, i.e., for any $\alpha>1$,
\begin{equation}\label{ine28}
\alpha F_i(\boldsymbol{\rho}_{-i},\boldsymbol{\phi}_{i}) > F_i(\alpha \boldsymbol{\rho}_{-i},\boldsymbol{\phi}_{i}).
\end{equation}
In addition, $\text{SINR}_j^{\left \langle s \right \rangle} \left(\boldsymbol{\phi}_{i} \right)$ is clearly monotonically decreasing in $\boldsymbol{\rho}_{-i}$. Therefore, $F_i(\boldsymbol{\rho}_{-i},\boldsymbol{\phi}_{i})$ has {monotonicity}, i.e., for any $\boldsymbol{\rho}'_{-i} > \boldsymbol{\rho}_{-i}$,
\begin{equation}\label{ine39}
F_i(\boldsymbol{\rho}'_{-i},\boldsymbol{\phi}_{i}) > F_i( \boldsymbol{\rho}_{-i},\boldsymbol{\phi}_{i}).
\end{equation}
These properties define the so-called standard interference function (SIF) \cite{Yates}; hence, $F_i(\boldsymbol{\rho}_{-i},\boldsymbol{\phi}_{i})$ is an SIF.
Let $G_i(\boldsymbol{\rho}_{-i})$ be the optimal solution to the single-cell optimization problem P2, then we have
\begin{align}\label{fmin}
G_i(\boldsymbol{\rho}_{-i}) &= F_i(\boldsymbol{\rho}_{-i},\boldsymbol{\phi}^*_{i}) = \underset{ \boldsymbol{\phi}_{i}}{\min} \ \rho_i \ \textup{s.t.}\ \text{(\ref{P2C1})} \notag\\
&= \min\left\{F_i(\boldsymbol{\rho}_{-i},\boldsymbol{\phi}_{i}) | \boldsymbol{\phi}_{i} \in \mathcal{D} \right\}
\end{align}
where $\boldsymbol{\phi}^*_{i}$ is the optimal reflection coefficient vector. We now show that $G_i(\boldsymbol{\rho}_{-i})$ is also an SIF by proving its scalability and monotonicity as follows.
\begin{itemize}
\item Scalability: \\
For any $\alpha>1$, let $\boldsymbol{\phi}^{\odot}_{i}$ be the optimal reflection coefficients vector for $F_i(\alpha\boldsymbol{\rho}_{-i},\boldsymbol{\phi}_{i})$, i.e.,
\begin{equation}\label{last1}
G_i(\alpha \boldsymbol{\rho}_{-i}) = F_i(\alpha \boldsymbol{\rho}_{-i},\boldsymbol{\phi}^{\odot}_{i}) \leq F_i(\alpha \boldsymbol{\rho}_{-i},\boldsymbol{\phi}^{*}_{i}).
\end{equation}
By (\ref{ine28}),
\begin{equation}\label{last2}
F_i(\alpha \boldsymbol{\rho}_{-i},\boldsymbol{\phi}^{*}_{i}) < \alpha F_i( \boldsymbol{\rho}_{-i},\boldsymbol{\phi}^{*}_{i}) = \alpha G_i( \boldsymbol{\rho}_{-i}).
\end{equation}
Therefore, by (\ref{last1}) and (\ref{last2}), we have
\begin{equation}
G_i(\alpha \boldsymbol{\rho}_{-i}) < \alpha G_i( \boldsymbol{\rho}_{-i}).
\end{equation}
\item Monotonicity: \\
Since, for any $\boldsymbol{\phi}_{i} \in \mathcal{D}$ and $\boldsymbol{\rho}'_{-i} > \boldsymbol{\rho}_{-i}$, inequality (\ref{ine39}) holds, we have
\begin{equation}
\min \{F_i(\boldsymbol{\rho}'_{-i},\boldsymbol{\phi}_{i}) | \boldsymbol{\phi}_{i} \in \mathcal{D} \} \! > \! \min\{F_i(\boldsymbol{\rho}_{-i},\boldsymbol{\phi}_{i}) | \boldsymbol{\phi}_{i} \in \mathcal{D} \}.
\end{equation}
Therefore, for $\boldsymbol{\rho}'_{-i} > \boldsymbol{\rho}_{-i}$,
\begin{equation}
G_i(\boldsymbol{\rho}'_{-i}) > G_i(\boldsymbol{\rho}_{-i}).
\end{equation}
\end{itemize}
Therefore, $G_i(\boldsymbol{\rho}_{-i})$ is an SIF, for which fixed-point iteration guarantees convergence and optimality \cite{Yates}. Hence the conclusion.
\end{proof}
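To illustrate the fixed-point machinery invoked above, the following Python toy iterates an assumed affine SIF $I(\boldsymbol{\rho}) = \boldsymbol{c} + \boldsymbol{B}\boldsymbol{\rho}$ (with $\boldsymbol{c} > 0$ and $\boldsymbol{B} \geq 0$ elementwise, which satisfies positivity, monotonicity, and scalability) from two very different starting points; both runs reach the same unique fixed point, as Yates' framework guarantees.

```python
import numpy as np

# Toy standard interference function (assumption): I(rho) = c + B @ rho,
# with c > 0 and B >= 0 elementwise, so that positivity, monotonicity,
# and scalability (alpha*I(rho) > I(alpha*rho) for alpha > 1) all hold.
B = np.array([[0.0, 0.2, 0.1],
              [0.1, 0.0, 0.2],
              [0.2, 0.1, 0.0]])
c = np.array([0.3, 0.2, 0.4])

def I(rho):
    return c + B @ rho

def iterate(rho, n=200):
    """Plain fixed-point iteration rho <- I(rho)."""
    for _ in range(n):
        rho = I(rho)
    return rho

fp_low = iterate(np.zeros(3))          # start from zero load
fp_high = iterate(10.0 * np.ones(3))   # start from a high load
rho_star = np.linalg.solve(np.eye(3) - B, c)  # unique fixed point
```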
Theorem \ref{additional} implies that the multi-cell problem can be solved to global optimality if an optimal single-cell algorithm is embedded into our algorithmic framework. The performance of the framework remains satisfactory even if the embedded single-cell algorithm only guarantees local optimality (e.g., Algorithm \ref{al0}). For small-scale scenarios with $\mathcal{D}_3$, the optimum of the single-cell problem can be found by exhaustive search, and the global optimum of the multi-cell problem is then guaranteed by Theorem \ref{additional}. With the global optimum available, we gauge the performance of the algorithmic framework embedded with Algorithm \ref{al0} in the next section.
\section{Performance Evaluation} \label{Sec:evaluation}
\subsection{Preliminaries}
We use a cellular network of seven cells, adopting a wrap-around technique. In each cell, ten UEs are randomly and uniformly distributed. The cells have the same number of RISs, and all RISs have the same number of reflection elements. For each cell, its RISs are evenly distributed around the BS, at a distance of $250$ m from the BS that is located at the center of the cell. Each RIS has $M = 20$ reflection elements, and we use $S_i = | \mathcal{L}_i | \times M$ to denote the total number of reflection elements in cell $i$. The channel between the BS of cell $i$ and UE $j$ is given by $g_{ij} = D_{ij}^{-\alpha_{cu}} g_0$, where $D_{ij}$ is the BS-UE distance, $\alpha_{cu} = 3.5$ is the path loss exponent, and $g_0$ follows a Rayleigh distribution. Similarly, the channel from the BS of cell $i$ to RIS $l$ is given by $\boldsymbol{G}_{il} = D_{il}^{-\alpha_{ci}} \boldsymbol{G}_0$, and the channel from RIS $l$ to UE $j$ by $\boldsymbol{H}_{lj} = D_{lj}^{-\alpha_{iu}} \boldsymbol{H}_0$. Additional simulation parameters are given in Table \ref{tab:test}.
\begin{table}[htbp]
\caption{\label{tab:test}Simulation Parameters}
\begin{center}
\begin{tabular}{lll}
\toprule
\textbf{Parameter} & \textbf{Value} \\
\midrule
Cell radius & $500$ m \\
Carrier frequency & $2$ GHz \\
Total bandwidth & $20$ MHz \\
RB power in each cell $P$ & $1$ W\\
Noise power spectral density $\sigma ^2$ & $-174$ dBm/Hz \\
Convergence tolerance $\varepsilon$ & $10^{-4}$ \\
\bottomrule
\end{tabular}
\end{center}
\end{table}
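A minimal sketch of the channel generation described above; the complex-Gaussian realization of the Rayleigh-faded terms and the per-element i.i.d. small-scale vectors are our assumptions, since the distributions of $\boldsymbol{G}_0$ and $\boldsymbol{H}_0$ are not restated here.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 20           # reflection elements per RIS
alpha_cu = 3.5   # BS-UE path loss exponent
alpha_ris = 2.2  # BS-RIS / RIS-UE path loss exponent (one setting used later)

def bs_ue_channel(D_ij):
    """g_ij = D_ij^{-alpha_cu} * g0, with |g0| Rayleigh distributed
    (g0 drawn as a unit-variance complex Gaussian)."""
    g0 = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    return D_ij ** (-alpha_cu) * g0

def ris_channel(D, alpha=alpha_ris, size=M):
    """G_il = D^{-alpha_ci} * G0 (or H_lj = D^{-alpha_iu} * H0), with the
    small-scale term drawn i.i.d. per reflection element (assumption)."""
    small = (rng.standard_normal(size) + 1j * rng.standard_normal(size)) / np.sqrt(2)
    return D ** (-alpha) * small
```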
\subsection{Benchmarks and Initialization}
In our simulation, we use three benchmark schemes as follows.
\subsubsection{No RIS} We take the system without RIS as a baseline scheme. For this scheme, we can compute the global optimum of resource minimization using the fixed-point method in \cite{siomina2012analysis}.
\subsubsection{Random} In this scheme, the RIS reflection coefficients are randomly chosen from domain $\mathcal{D}_1$. Once the coefficients are set, the optimum subject to the chosen coefficient values can again be obtained by the fixed-point method.
\subsubsection{Decomposition} In this scheme, the RIS reflection coefficients of the cells are optimized independently, by treating inter-cell interference as zero or the worst-case value, namely Decomposition-1 and Decomposition-2, respectively. The resulting single-cell problem for domain $\mathcal{D}_1$ for a generic cell $i$ can be reformulated as follows.
\begin{subequations}
\begin{align}
\textup{[\rm P3]}\ \underset{ \boldsymbol{\phi}_{i}}{\min} \ \ &\sum_{j\in \mathcal{J}_i} \frac{d_j}{\log_2\left(1+ \frac{ | \hat{g}_{ij} + \sum_{l \in \mathcal{L}_i}\boldsymbol{\Lambda}_{ijl}\boldsymbol{\Phi}_l |^2{P_i} }{\Upsilon + \sigma^2} \right)} \\
\textup{ s.t.}\ \ \ & \phi_{lm} \in \mathcal{D}_1, \forall m \in \mathcal{M},\forall l \in \mathcal{L}_i,
\end{align}
\end{subequations}
where the given inter-cell interference is either $\Upsilon = 0$ (Decomposition-1) or $\Upsilon = \sum_{k\in\mathcal{I}, \atop k\neq i} \left | \hat{g}_{kj}+ \sum_{l \in \mathcal{L}_i}\boldsymbol{\Lambda}_{kjl}\boldsymbol{\Phi}_l\right |^2{P_k}$ (Decomposition-2).
Following \cite{9039554}, by introducing a set of auxiliary variables $\boldsymbol{y}_i = [y_1, y_2, ..., y_{J_i} ]^T$, $\boldsymbol{a}_i = [a_1, a_2, ..., a_{J_i} ]^T$, and $\boldsymbol{b}_i = [b_1,b_2, ..., b_{J_i} ]^T$, problem P3 can be approximated by the following problem.
\begin{subequations}
\begin{align}
\textup{[\rm P3.1]}\underset{ \boldsymbol{\phi}_{i}, \boldsymbol{y}_i, \atop \boldsymbol{a}_i, \boldsymbol{b}_i}{\min} \ \ &\sum_{j\in \mathcal{J}_i} \frac{d_j}{\log_2\left(1+ \frac{ y_j{P_i} }{\sigma^2} \right)} \label{P2.7obj}\\
\textup{ s.t.}\ \ \ & |\phi_{lm} |\leq 1, \forall m \in \mathcal{M},\forall l \in \mathcal{L}_i. \label{}\\
&a_j = \Re\left\{ \hat{g}_{ij}+ \sum_{l \in \mathcal{L}_i}\boldsymbol{\Lambda}_{ijl}\boldsymbol{\Phi}_l \right\}, \forall j\in \mathcal{J}_i\\
&b_j = \Im\left\{ \hat{g}_{ij}+ \sum_{l \in \mathcal{L}_i}\boldsymbol{\Lambda}_{ijl}\boldsymbol{\Phi}_l \right\}, \forall j\in \mathcal{J}_i\\
&y_j \leq \tilde{a}_j^2+\tilde{b}_j^2+2\tilde{a}_j({a}_j-\tilde{a}_j)+2\tilde{b}_j({b}_j-\tilde{b}_j), \notag \\ &\qquad\qquad\qquad\qquad\qquad\qquad\quad \forall j\in \mathcal{J}_i.
\end{align}
\end{subequations}
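The last constraint of P3.1 is the first-order Taylor expansion of $a_j^2 + b_j^2$ at $(\tilde{a}_j, \tilde{b}_j)$; since the squared norm is convex, this tangent plane is a global lower bound, which is exactly the surrogate the MM method requires. A quick numerical check with randomly drawn points (for illustration only):

```python
import numpy as np

def taylor_lb(a, b, a_t, b_t):
    """First-order expansion of a^2 + b^2 at the point (a_t, b_t)."""
    return a_t**2 + b_t**2 + 2*a_t*(a - a_t) + 2*b_t*(b - b_t)

rng = np.random.default_rng(1)
pts = rng.standard_normal((1000, 4))
# gap = (a^2 + b^2) - lower bound = (a - a_t)^2 + (b - b_t)^2 >= 0,
# with equality exactly at the expansion point.
gap = [(a**2 + b**2) - taylor_lb(a, b, a_t, b_t) for a, b, a_t, b_t in pts]
```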
\begin{algorithm}[tbp]\label{al2}
\caption{Optimization for the decomposition scheme based on MM for cell $i$}
\KwIn{ $\{d_j, g_{ij}, \boldsymbol{G}_{il}, \boldsymbol{H}_{lj}\}, \forall l \in \mathcal{L}_i, \forall j \!\in \mathcal{J}_i, \varepsilon$;}
\KwOut{$\boldsymbol{\rho}$, $ \boldsymbol{\phi}$;}
Initialize $ {\boldsymbol{\phi}}^{(0)}_i$\;
$t \leftarrow 0$\;
\Repeat
{{\rm (\ref{P2.7obj}) converges with respect to $\varepsilon$}
}
{
$ \tilde{\boldsymbol{\phi}}_i \leftarrow {\boldsymbol{\phi}}^{(t)}_i$\;
$\tilde{a}_j \leftarrow \Re\left\{ \hat{g}_{ij}+ \sum_{l \in \mathcal{L}_i}\boldsymbol{\Lambda}_{ijl}\tilde{\boldsymbol{\Phi}}_l \right\}, \forall j\in \mathcal{J}_i$\;
$\tilde{b}_j \leftarrow \Im\left\{ \hat{g}_{ij}+ \sum_{l \in \mathcal{L}_i}\boldsymbol{\Lambda}_{ijl}\tilde{\boldsymbol{\Phi}}_l \right\}, \forall j\in \mathcal{J}_i$\;
Obtain ${\boldsymbol{\phi}}^{(t+1)}_i$ by solving P3.1\;
$t\leftarrow t+1$\;
}
\Return {$\boldsymbol{\phi}_i= \boldsymbol{\phi}_i^{(t)}$}\;
\end{algorithm}
P3.1 is a convex optimization problem that can be solved efficiently. An algorithm based on the MM method for problem P3 is provided in Algorithm \ref{al2}. After solving P3.1 for all individual cells to obtain the RIS reflection coefficients, the resulting optimum of P1 can be computed by the fixed-point method.
As for the value domain of the RIS reflection coefficients, we consider $\mathcal{D}_1$, $\mathcal{D}_2$, $\mathcal{D}_3$ (2-bit), and $\mathcal{D}_3$ (1-bit). Specifically, $\mathcal{D}_3$ (2-bit) represents the discrete-phase domain with $N = 4$, and $\mathcal{D}_3$ (1-bit) that with $N = 2$. Additionally, in our simulation, we initialize all reflection coefficients to $\phi = \mathrm{e}^{\mathrm{i} \pi}$.
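For the discrete domains, a continuous-phase coefficient can be mapped to the nearest admissible phase by rounding its angle. This nearest-point projection is a common heuristic shown only for illustration; it is not the procedure used by ICA itself.

```python
import numpy as np

def project_to_discrete(phi, N):
    """Map a unit-modulus coefficient exp(i*theta) to the nearest point of
    the N-phase domain {exp(i*2*pi*n/N) : n = 0, ..., N-1}."""
    theta = np.angle(phi)
    n = np.round(theta / (2 * np.pi / N)) % N
    return np.exp(1j * 2 * np.pi * n / N)

phi = np.exp(1j * 1.0)                   # continuous-phase coefficient
phi_1bit = project_to_discrete(phi, 2)   # 1-bit domain: {1, -1}
phi_2bit = project_to_discrete(phi, 4)   # 2-bit domain: {1, i, -1, -i}
```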
\subsection{Impact of Demand}
\begin{figure}[tbp]
\begin{center}
\begin{tikzpicture}
\begin{axis}[
xlabel={Normalized demand $d$ (Mbps)},
ylabel={Total load},
xmin=0.4, xmax=0.8,
ymin=0, ymax=6,
xtick={0.4, 0.5, 0.6, 0.7, 0.8},
ytick={0, 1, 2, 3, 4, 5, 6},
legend pos=north west,
grid style=densely dashed,
tick label style={font=\footnotesize},
]
\addplot[ color=darkgray, mark=square, mark options={solid}, line width=0.8pt]
coordinates { (0.4, 2.00) (0.5, 2.70) (0.6, 3.47) (0.7, 4.32) (0.8, 5.25) };
\addplot[ color=darkgray, mark=o, densely dashed, mark options={solid}, line width=0.8pt]
coordinates { (0.4, 1.94) (0.5, 2.62) (0.6, 3.40) (0.7, 4.20) (0.8, 5.14) };
\addplot[ color= teal, mark=x, densely dashed, mark options={solid}, line width=0.8pt]
coordinates { (0.4, 1.11) (0.5, 1.8) (0.6, 2.60) (0.7, 3.50) (0.8, 4.5) };
\addplot[ color= teal, mark=square, mark options={solid}, line width=0.8pt]
coordinates { (0.4, 1.85) (0.5, 2.25) (0.6, 2.78) (0.7, 3.30) (0.8, 3.78) };
\addplot[ color=red, mark=o, line width=0.8pt]
coordinates { (0.4, 0.97) (0.5, 1.48) (0.6, 2.05) (0.7, 2.68) (0.8, 3.4) };
\addplot[ color=blue, mark=x, densely dashed, mark options={solid}, line width=0.8pt ]
coordinates { (0.4, 0.98) (0.5, 1.50) (0.6, 2.06) (0.7, 2.69) (0.8, 3.41) };
\addplot[ color=orange, mark=square, line width=0.8pt ]
coordinates {(0.4, 1.03) (0.5, 1.54) (0.6, 2.12) (0.7, 2.77) (0.8, 3.52)};
\addplot[color=orange, mark=o, densely dashed, mark options={solid}, line width=0.8pt]
coordinates {(0.4, 1.17) (0.5, 1.73) (0.6, 2.36) (0.7, 3.07) (0.8, 3.87)};
\legend{No RIS, Random-$\mathcal{D}_1$, Decomposition-1-$\mathcal{D}_1$, Decomposition-2-$\mathcal{D}_1$, ICA-$ \mathcal{D}_1$, ICA-$\mathcal{D}_2$, ICA-$\mathcal{D}_3$ (2-bit), ICA-$\mathcal{D}_3$ (1-bit)}
\end{axis}
\end{tikzpicture}
\caption{Total load as a function of the normalized demand $d$ when $S_i = 140$ $(\forall i \in \mathcal{I})$ and $\alpha_{ci} = \alpha_{iu} = 2.2$.} \label{fig:1}
\end{center}
\end{figure}
Fig. \ref{fig:1} illustrates the total load, i.e., the resource consumption, with respect to the normalized demand $d$, when $S_i = 140$ $(\forall i \in \mathcal{I})$ and $\alpha_{ci} = \alpha_{iu} = 2.2$. It can be seen that the RIS provides little improvement if it is not optimized, as the difference between the performance without RIS and that of the random scheme is very small. In contrast, ICA achieves significant performance gains compared with the three benchmark solutions. Specifically, ICA-$\mathcal{D}_1$ saves up to $51.3\%$ of the resource compared with no RIS when $d = 0.4$. Even with the most restrictive setup, ICA-$\mathcal{D}_3$ (1-bit), a $41.4\%$ reduction in resource usage is achieved.
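The quoted savings can be reproduced from the total-load values at $d = 0.4$ in Fig. \ref{fig:1}; since the plotted coordinates are rounded, the recomputed percentages deviate slightly from those reported.

```python
# Total-load values at d = 0.4, read from the (rounded) figure coordinates.
no_ris = 2.00        # no-RIS baseline
ica_d1 = 0.97        # ICA with continuous domain D1
ica_d3_1bit = 1.17   # ICA with 1-bit discrete phases

saving_d1 = (no_ris - ica_d1) / no_ris          # ~51.5%, vs. the quoted 51.3%
saving_1bit = (no_ris - ica_d3_1bit) / no_ris   # ~41.5%, vs. the quoted 41.4%
```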
\subsection{Discussion of the Coupling Effects}
We have evaluated the two decomposition schemes with given interference. In Fig. \ref{fig:1}, the performance of Decomposition-1 is close to that of ICA-$\mathcal{D}_1$ when $d \leq 0.4$, since small demand leads to negligible inter-cell interference. As the demand increases, accounting for inter-cell interference becomes increasingly important, and thus the gap between Decomposition-1 and ICA-$\mathcal{D}_1$ grows. For example, Decomposition-1 results in $32.8\%$ more resource consumption than ICA-$\mathcal{D}_1$ when $d=0.8$. Conversely, the performance of Decomposition-2 is close to that of ICA-$\mathcal{D}_1$ when the demand is high; however, Decomposition-2 erroneously predicts $80\%$ more resource usage when $d = 0.4$. In summary, the results obtained by the two decomposition schemes both deviate significantly from those obtained when the coupling relation between cells is accounted for. This observation demonstrates the importance of capturing the dynamics due to load coupling in optimization.
\subsection{Impact of RIS}
\begin{figure}[tbp]
\begin{center}
\begin{tikzpicture}
\begin{axis}[
xlabel={The number of reflection elements $S_i$},
ylabel={Total load},
xmin=100, xmax=180,
ymin=0, ymax=4,
xtick={100, 120, 140, 160, 180},
ytick={0, 1, 2, 3, 4},
legend pos=south west,
grid style=densely dashed,
tick label style={font=\footnotesize},
]
\addplot[ color=darkgray, mark=square, mark options={solid}, line width=0.8pt]
coordinates { (100, 3.47) (120, 3.47) (140, 3.47) (160, 3.47) (180, 3.47) };
\addplot[ color=darkgray, mark=o, densely dashed, mark options={solid}, line width=0.8pt]
coordinates { (100, 3.43) (120, 3.42) (140, 3.40) (160, 3.36) (180, 3.33) };
\addplot[ color= teal, mark=x, densely dashed, mark options={solid}, line width=0.8pt]
coordinates { (100, 3.10) (120, 2.85) (140, 2.60) (160, 2.40) (180, 2.3) };
\addplot[ color= teal, mark=square, mark options={solid}, line width=0.8pt]
coordinates { (100, 3.22) (120, 3.0) (140, 2.78) (160, 2.60) (180, 2.5) };
\addplot[ color=red, mark=o, line width=0.8pt]
coordinates { (100, 2.59) (120, 2.30) (140, 2.05) (160, 1.84) (180, 1.7) };
\addplot[ color=blue, mark=x, densely dashed, mark options={solid}, line width=0.8pt ]
coordinates { (100, 2.60) (120, 2.31) (140, 2.06) (160, 1.85) (180, 1.71) };
\addplot[ color=orange, mark=square, line width=0.8pt ]
coordinates {(100, 2.70) (120, 2.38) (140, 2.12) (160, 1.90) (180, 1.76)};
\addplot[color=orange, mark=o, densely dashed, mark options={solid}, line width=0.8pt]
coordinates {(100, 2.86) (120, 2.61) (140, 2.36) (160, 2.17) (180, 2.05)};
\legend{No RIS, Random-$\mathcal{D}_1$, Decomposition-1-$\mathcal{D}_1$, Decomposition-2-$\mathcal{D}_1$, ICA-$ \mathcal{D}_1$, ICA-$\mathcal{D}_2$, ICA-$\mathcal{D}_3$ (2-bit), ICA-$\mathcal{D}_3$ (1-bit)}
\end{axis}
\end{tikzpicture}
\caption{Total load as a function of the number of reflection elements $S_i$ $(\forall i \in \mathcal{I})$ when $d = 0.6$ Mbps and $\alpha_{ci} = \alpha_{iu} = 2.2$.} \label{fig:2}
\end{center}
\end{figure}
Fig. \ref{fig:2} depicts the total load versus the number of reflection elements, when $d = 0.6$ Mbps and $\alpha_{ci} = \alpha_{iu} = 2.2$. With more reflection elements, all schemes except the random scheme clearly benefit.
\begin{figure}[tbp]
\begin{center}
\begin{tikzpicture}
\begin{axis}[
xlabel={$\alpha_{RIS}$},
ylabel={Total load},
xmin=2, xmax=2.6,
ymin=0, ymax=4,
xtick={2, 2.2, 2.4, 2.6},
ytick={0, 1, 2, 3, 4, 5, 6},
legend pos=south east,
grid style=densely dashed,
tick label style={font=\footnotesize},
]
\addplot[ color=darkgray, mark=square, mark options={solid}, line width=0.8pt]
coordinates { (2, 3.47) (2.2, 3.47) (2.4, 3.47) (2.6, 3.47) };
\addplot[ color=darkgray, mark=o, densely dashed, mark options={solid}, line width=0.8pt]
coordinates { (2, 3.25) (2.2, 3.40) (2.4, 3.40) (2.6, 3.42) };
\addplot[ color= teal, mark=x, densely dashed, mark options={solid}, line width=0.8pt]
coordinates { (2, 2.15) (2.2, 2.60) (2.4, 2.85) (2.6, 3.00) };
\addplot[ color= teal, mark=square, mark options={solid}, line width=0.8pt]
coordinates { (2, 2.35) (2.2, 2.78) (2.4, 3.02) (2.6, 3.15) };
\addplot[ color=red, mark=o, line width=0.8pt]
coordinates { (2, 1.38) (2.2, 2.03) (2.4, 2.30) (2.6, 2.43) };
\addplot[ color=blue, mark=x, densely dashed, mark options={solid}, line width=0.8pt ]
coordinates { (2, 1.39) (2.2, 2.04) (2.4, 2.32) (2.6, 2.455) };
\addplot[ color=orange, mark=square, line width=0.8pt ]
coordinates { (2, 1.50) (2.2, 2.12) (2.4, 2.38) (2.6, 2.5)};
\addplot[color=orange, mark=o, densely dashed, mark options={solid}, line width=0.8pt]
coordinates { (2, 1.86) (2.2, 2.36) (2.4, 2.58) (2.6, 2.74)};
\legend{No RIS, Random-$\mathcal{D}_1$, Decomposition-1-$\mathcal{D}_1$, Decomposition-2-$\mathcal{D}_1$, ICA-$ \mathcal{D}_1$, ICA-$\mathcal{D}_2$, ICA-$\mathcal{D}_3$ (2-bit), ICA-$\mathcal{D}_3$ (1-bit)}
\end{axis}
\end{tikzpicture}
\caption{Total load as a function of the path loss exponent $\alpha_{RIS} = \alpha_{ci} = \alpha_{iu}$, when $d = 0.6$ and $S_i = 140$ $(\forall i \in \mathcal{I})$.} \label{fig:3}
\end{center}
\end{figure}
Fig. \ref{fig:3} shows the impact of the RIS path loss exponent on the total load. We observe that the total load increases with $\alpha_{RIS}$, since a larger $\alpha_{RIS}$ leads to a lower channel gain between the RIS and the BS/UEs. The path loss exponent of the RIS depends on the physical environment. For example, in practice $\alpha_{RIS}$ is usually smaller when the RIS is at a higher altitude due to fewer obstacles; however, with an increasing height of the RIS position, the distance between the RIS and the BS/UEs also increases, which leads to a larger path loss.
We also observe from Figs. \ref{fig:1}-\ref{fig:3} that the curves of ICA-$\mathcal{D}_1$ and ICA-$\mathcal{D}_2$ almost overlap. This means that ICA loses little performance even when the amplitudes of the reflection coefficients are restricted to one. In addition, for domain $\mathcal{D}_3$, the performance of ICA-$\mathcal{D}_3$ is close to that of ICA-$\mathcal{D}_1$, meaning that even RISs with low phase adjustability in practice can still help.
\subsection{Discussion for Practical RIS}
\begin{figure}[tbp]
\begin{center}
\begin{tikzpicture}
\centering
\begin{axis}[
ybar, axis on top,
tick label style={font=\footnotesize},
tick align=inside,
xlabel={Normalized demand $d$ (Mbps)},
ylabel={Total load},
ymin=0, ymax=2,
xtick={1, 2, 3, 4, 5},
enlarge x limits=true,
legend style={at={(0.98, 0.98)},
legend pos=north west,
anchor=north west,
legend columns=-1},
legend image code/.code={\draw [#1] (0cm,-0.1cm) rectangle (0.2cm,0.25cm); },
]
\addplot coordinates { (1, 0.256) (2, 0.5715) (3, 0.9215) (4, 1.299) (5, 1.70) };
\addplot coordinates { (1, 0.246) (2, 0.56) (3, 0.9115) (4, 1.279) (5, 1.68) };
\legend{ICA-$\mathcal{D}_3$ (2-bit), Optimal-$\mathcal{D}_3$ (2-bit)}
\end{axis}
\end{tikzpicture}
\caption{Comparison between the solution obtained by our algorithm and the optimal solution in a small-scale simulation scenario under different normalized demands.} \label{fig:4}
\end{center}
\end{figure}
By Theorem \ref{additional}, for a small number of RIS elements, one can exhaustively enumerate the combinations of phase shifts in $\mathcal{D}_3$ for the single-cell problem, and embedding this into the fixed-point method gives the global optimum for the multi-cell system. This allows us to assess the performance of our algorithm with respect to global optimality. To this end, we consider a $3$-cell scenario with two UEs and ten reflection elements in total in each cell. The results in Fig. \ref{fig:4} show that ICA delivers solutions within at most a 4\% gap to the global optimum for this small-scale scenario.
\subsection{Convergence Analysis}
\begin{figure}[tbp]
\begin{center}
\begin{tikzpicture}
\begin{semilogyaxis}[
xlabel={Iteration $\tau$ in the ICA algorithm},
ylabel={$||\boldsymbol{\rho}^{(\tau)}- \boldsymbol{\rho}^{(\tau-1)}||_{\infty}$},
xmin=2, xmax=10,
xtick={ 2, 3, 4, 5, 6, 7, 8, 9, 10},
legend pos=north east,
grid style=densely dashed,
tick label style={font=\footnotesize},
]
\addplot[ color=darkgray, dashdotted, line width=0.8pt ]
coordinates { (2, 1.5E-2) (10, 4E-9) };
\addplot[ color=red, dotted, line width=0.8pt ]
coordinates { (2, 3E-2) (10, 3E-10) };
\addplot[ color=blue, densely dashed, mark options={solid}, line width=0.8pt ]
coordinates { (2, 2E-2) (10, 1E-10) };
\legend{ $d = 0.4\ S = 180$, $d = 0.4\ S = 100$, $d = 0.8\ S = 100$}
\end{semilogyaxis}
\end{tikzpicture}
\caption{The residual $||\boldsymbol{\rho}^{(\tau)}- \boldsymbol{\rho}^{(\tau-1)}||_{\infty}$ as a function of iteration $\tau$ in the ICA algorithm under three scenarios.} \label{fig:5}
\end{center}
\end{figure}
We show the convergence behavior of ICA in Fig. \ref{fig:5}. The convergence is consistently very fast. Note that the convergence becomes slightly slower for smaller demand. The reason is that a smaller demand requires a lower load in the final solution, and as a result, the difference between the initial load and the final load becomes larger. Similar reasoning applies to the convergence rate for larger $S$.
\section{Conclusion} \label{Sec:conclusion}
In this paper, we have investigated the resource minimization problem with interference coupling subject to user demand requirements in multi-RIS-assisted multi-cell systems. An algorithmic framework has been designed to obtain a locally optimal solution. The numerical results have shown that RISs can enhance the multi-cell system performance significantly. Additionally, even RISs with a few discrete adjustable phases can achieve good performance. RIS-assisted multi-cell scenarios are complex, and system simulation of large-scale scenarios is time-consuming; it is therefore desirable to explore model-based approaches that can characterize the performance reasonably well. In this paper, we chose the load coupling model to capture the system dynamics, namely the dependency between interference and resource consumption. Although we acknowledge that the model is not exact, we hope this work represents one step towards understanding multi-cell interference-coupled scenarios with RISs.
\section*{Appendix}
To prove Proposition \ref{NP-hard}, we use a polynomial-time reduction from the 3-satisfiability (3-SAT) problem to P2. The 3-SAT problem is NP-complete \cite{karp1972reducibility}. All notations we define in this appendix are used for this proof only.
Consider a 3-SAT instance with $n$ Boolean variables $x_1, x_2, ..., x_n$ and $m$ clauses. Denote by $\widehat{x}_i$ the negation of $x_i$. A literal is a variable or its negation. Each clause is a disjunction of exactly three distinct literals, for example, $(\widehat{x}_1 \vee x_2 \vee \widehat{x}_n)$. The problem is to determine whether or not there exists an assignment of true/false values to the variables such that all clauses are true.
For any instance of 3-SAT with $n$ variables and $m$ clauses, we construct a corresponding network with a single cell, $m$ UEs, and $n$ RISs each having one reflection element, with domain $\mathcal{D}_3= \{ \mathrm{e}^{-\mathrm{i}\frac{\lambda}{4c}}, \mathrm{e}^{\mathrm{i}\frac{\lambda}{4c}}\}$, where $\lambda$ is the wavelength of the carrier and $c$ is the speed of light. Each variable of the 3-SAT instance is mapped to an RIS. Moreover, the assignment of true/false to $x_i$ corresponds to setting the reflection coefficient of RIS $i$ to $\mathrm{e}^{\mathrm{i}\frac{\lambda}{4c}}$ and $\mathrm{e}^{-\mathrm{i}\frac{\lambda}{4c}}$, respectively. There is a one-to-one correspondence between the clauses of the 3-SAT instance and the UEs. It is not difficult to see that the parameters of our P2 scenario can be set such that the following hold.
\begin{enumerate}
\item The phase of the direct signal from the BS to all UEs is always at a crest.
\item For any UE, there are three RISs that potentially contribute to the signal; the other RISs have no effect on the UE (e.g., due to obstacles). For UE $j$, $j=1, 2, ..., m$, these three RISs are those corresponding to the three literals in clause $j$ of the 3-SAT instance. Specifically, the literal $x_i$ in clause $j$ corresponds to the case where the phase of the reflected signal from RIS $i$ to UE $j$ is at a crest if the reflection coefficient $\mathrm{e}^{\mathrm{i}\frac{\lambda}{4c}}$ is chosen, and at a trough for $\mathrm{e}^{-\mathrm{i}\frac{\lambda}{4c}}$. Conversely, the literal $\widehat{x}_i$ in the clause corresponds to the case where the phase is at a trough for $\mathrm{e}^{\mathrm{i}\frac{\lambda}{4c}}$ and at a crest for $\mathrm{e}^{-\mathrm{i}\frac{\lambda}{4c}}$.
\item The demand of the UEs is set such that it cannot be satisfied if and only if the power of the overall composite signal is zero. For each of the three reflected signals received by a UE, the received amplitude is one-third of that of the direct signal from the BS. By 1), for any UE, the power of the overall composite signal is zero (i.e., the demand cannot be satisfied) if and only if its three reflected signals are all at a trough.
\end{enumerate}
Obviously, solving an instance of the 3-SAT problem corresponds to determining whether the constructed P2 scenario has a feasible solution. Hence the conclusion.
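The cancellation argument can be verified mechanically. In the Python sketch below, the direct path contributes amplitude $1$ and each of the three relevant reflected paths contributes amplitude $1/3$ (the amplitude ratio that makes exact cancellation possible), signed $+$ at a crest (true literal) and $-$ at a trough (false literal); the composite amplitude then vanishes exactly for the falsifying assignment of the clause.

```python
from itertools import product

def composite_amplitude(clause, assignment):
    """Composite signal amplitude at the UE of one clause in the reduction:
    +1 from the direct path (always at a crest), and +/- 1/3 from each of
    the three relevant RISs, '+' when the literal is true (crest) and '-'
    when it is false (trough)."""
    s = 1.0
    for var, negated in clause:
        literal_true = assignment[var] != negated
        s += (1.0 / 3.0) if literal_true else -(1.0 / 3.0)
    return s

# Clause (x1 OR not-x2 OR x3), encoded as (variable index, negated?).
clause = [(0, False), (1, True), (2, False)]
results = {bits: composite_amplitude(clause, bits)
           for bits in product([False, True], repeat=3)}
# The amplitude is zero exactly for the single falsifying assignment
# (x1, x2, x3) = (False, True, False), and positive otherwise.
```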
\bibliographystyle{IEEEtran}
\section{Introduction}\label{sec0}
Quantum entanglement, which is recognized as an essential ingredient for testing local hidden variable theories against quantum mechanics, has extensive application in quantum computing and quantum information processing. It is well known that multipartite entangled states have many properties more peculiar than the bipartite ones because they exhibit the contradiction between local hidden variable theories and quantum mechanics even for nonstatistical predictions, as opposed to the statistical ones for the Einstein-Podolsky-Rosen (EPR) states~\cite{DPTAJP904358, AFPRA0469}. It has been shown that genuine three-qubit entanglement comes in two different inconvertible types represented by the GHZ state and the W state~\cite{DMAAAJP9058, WGJPRA0062}. The GHZ state is inequivalent to the W state in the sense that they cannot be converted into each other even under stochastic local operations and classical communication (SLOCC). These two kinds of entangled states represent two distinct classes of three-qubit entanglement and can perform different quantum information processing tasks, and much interest has been devoted to the investigation of how to convert the two types of entanglement into each other. Based on positive operator valued measures (POVMs), Walther {\it et al.} proposed a method to convert a GHZ state to an arbitrarily good approximation to a W state and experimentally realized this scheme in the three-photon case~\cite{PKAPRL0594}. In Ref.~\cite{TTSTMNPRL09102}, the authors experimentally demonstrated a transformation of two Einstein-Podolsky-Rosen photon pairs distributed among three parties into a three-photon W state using local operations and classical communication. We also proposed a linear optical method to convert $N-1$ $(N\geq3)$ entangled two-photon pairs distributed among $N$ parties into an $N$-photon W state~\cite{HIJTP1352}. 
Through a dissipative dynamics process in an open quantum system, Song {\it et al.} showed that a four-atom W state can be converted into a GHZ state deterministically~\cite{JXQLYHPRA1388}. Furthermore, we proposed a linear-optics-based scheme for the local conversion of four Einstein-Podolsky-Rosen photon pairs distributed among five parties into four-photon polarization-entangled decoherence-free states using SLOCC and non-photon-number-resolving detectors~\cite{HSAXKOE1119}. These works make it possible to convert different kinds of quantum states into each other.
Cavity quantum electrodynamics (QED) systems are promising candidates for quantum information processing because atoms are suitable for storing information in stationary nodes and photons for transporting information. In practice, however, the distribution of entanglement between atoms over long distances is difficult because of unavoidable transmission losses and decoherence in the quantum channel. In recent years, several schemes have been proposed to realize quantum computation and engineer entanglement between atoms trapped in distant optical cavities, either through detection of leaking photons~\cite{HXYSKJPB0942, SPMVPRL9983, LHPRL0390, DMSPRL0391, HXKJPB0942}, or through direct connection of the cavities by optical fiber~\cite{TPRL9779, JPHHPRL9778, ASSPRL0696, ZFPRA0775, JYHEPL0987, SAPL0994, SCFPRA1082, SCPB1019, HSAKJOSAB1229}. In this paper, we propose a scheme for converting a three-atom W state into a GHZ state with a certain success probability. In the scheme, each $\lambda\lambda$-type atom is individually trapped in an optical cavity, and through the interference and detection of the polarized photons leaking out of the separate optical cavities, the conversion from a three-atom W state to a GHZ state is achieved. The scheme does not require the simultaneous click of the detectors, and it is robust against asynchronous emission of the polarized photons and against detection inefficiency. Furthermore, we consider the influence of cavity decay, atomic spontaneous decay, and photon leakage on the success probability of the scheme and the fidelity of the state for a practical system. 
Compared with Ref.~\cite{PKAPRL0594}, our scheme has the following merits: (i) the method proposed in Ref.~\cite{PKAPRL0594} converts a three-photon GHZ state into an approximate W state, while in our scheme a three-atom W state can be converted into an exact GHZ state; (ii) in the ideal case, the fidelity of the converted state in Ref.~\cite{PKAPRL0594} is 3/4, while in our scheme the fidelity is 1.0.
\begin{figure}
\includegraphics[width=5.3in]{fig1.eps}\caption{(Color online) The level configuration and excitation scheme of the atoms. $|g_{L}\rangle,~|g_{R}\rangle,~|e_{L}\rangle$, and $|e_{R}\rangle$ are the ground states, $|f_{L}\rangle$ and $|f_{R}\rangle$ are the excited states. The transition $|g_L\rangle\leftrightarrow|f_L\rangle$ ($|g_R\rangle\leftrightarrow|f_R\rangle$) is driven by a classical field with left-circular (right-circular) polarization and the transition $|e_L\rangle\leftrightarrow|f_L\rangle$ ($|e_R\rangle\leftrightarrow|f_R\rangle$) is coupled to cavity mode $a_L$ ($a_R$) with the left-circular (right-circular) polarization.}
\end{figure}
\section{Conversion of a three-atom W state to a GHZ state by interference of polarized photons}\label{sec1}
We consider a $\lambda\lambda$-type atom, as shown in Fig.~1. This kind of level structure has been proposed to generate entangled single-photon wave packets~\cite{CKPJPPRA0061} and achieve quantum computation in a single
cavity~\cite{TSJPPRL9575, XWPRA0571}. The transition $|g_L\rangle\leftrightarrow|f_L\rangle$
($|g_R\rangle\leftrightarrow|f_R\rangle$) is driven by a classical field with left-circular (right-circular)
polarization. The transition $|e_L\rangle\leftrightarrow|f_L\rangle$ ($|e_R\rangle\leftrightarrow|f_R\rangle$)
is coupled to cavity mode $a_L$ ($a_R$) with the left-circular (right-circular) polarization. The Hamiltonian
of the system is written as
\begin{eqnarray}\label{e01}
H_{\rm I}=\sum\limits_{j=L,R}{\big[}\Delta|f_j\rangle\langle f_j|+(\lambda_ca_j|f_j\rangle\langle e_{j}|+\Omega|f_j\rangle\langle g_j|+{\rm H.c.}){\big]},
\end{eqnarray}
where $\Delta$ is the detuning between the cavity mode and the corresponding atomic transition, $\lambda_c$ is the coupling strength between the atom and cavity mode, $\Omega$ is the Rabi frequency of the classical field, and $a_L$ and $a_R$ are the annihilation operators of the left-circular and right-circular polarization
modes $L$ and $R$, respectively. Under the large-detuning conditions $\Delta\gg \lambda_c, \Omega$,
the excited states of the atom $|f_L\rangle$ and $|f_R\rangle$ are only virtually excited during the atom-cavity
interaction process. Therefore, the effect of rapidly oscillating terms can be neglected and the levels $|f_L\rangle$ and $|f_R\rangle$
can be eliminated adiabatically, leading to the effective Hamiltonian
\begin{eqnarray}\label{e02}
H_{\rm eff}=\sum\limits_{j=L,R}-\left[\frac{\lambda_c^2}{\Delta}|e_{j}\rangle\langle e_{j}|a_{j}^\dag a_{j}+\frac{\Omega^2}{\Delta}|g_j\rangle\langle g_j|+\frac{\lambda_c\Omega}{\Delta}{\big(}|g_j\rangle\langle e_{j}|a_{j}+{\rm H.c.}{\big)}\right].
\end{eqnarray}
\begin{figure}
\includegraphics[width=4.2in]{fig2.eps}\caption{(Color online) Schematic setup of converting a three-atom W state to a GHZ state. Atoms $a$, $b$, and $c$ are trapped in three Fabry-P\'{e}rot cavities $A$, $B$, and $C$, respectively. Here QWP denotes a quarter-wave plate, PBS denotes a polarization beam splitter that transmits $H$ photon and reflects $V$ photon, HWP denotes a half-wave plate, $M$ denotes mirror, and $D$ is a conventional photon detector.}
\end{figure}
Assume that atoms $a$, $b$, and $c$, which are respectively trapped in three spatially
separated optical cavities $A$, $B$, and $C$, as shown in Fig.~2, are in the following three-atom entangled W state,
\begin{eqnarray}\label{e03}
|\Psi\rangle_{{\rm W}}=\frac{1}{\sqrt{3}}(|g_{L}\rangle_a|g_{L}\rangle_b|g_{R}\rangle_c+|g_{L}\rangle_a|g_{R}\rangle_b|g_{L}\rangle_c+|g_{R}\rangle_a|g_{L}\rangle_b|g_{L}\rangle_c).
\end{eqnarray}
This kind of W state can be prepared using the method proposed in Ref.~\cite{HXYSKJPB0942}. We then perform a Hadamard gate operation, which can be achieved by a $\pi/2$ microwave pulse, on atoms $a$, $b$, and $c$, respectively, accomplishing the transformation
\begin{eqnarray}\label{e04}
|g_{L}\rangle&\rightarrow&\frac{1}{\sqrt{2}}(|g_{L}\rangle+|g_{R}\rangle),\cr\cr
|g_{R}\rangle&\rightarrow&\frac{1}{\sqrt{2}}(|g_{L}\rangle-|g_{R}\rangle).
\end{eqnarray}
After that, the state becomes
\begin{eqnarray}\label{e05}
|\Psi\rangle_1&=&\frac{1}{2\sqrt{6}}(3|g_{L}\rangle_a|g_{L}\rangle_b|g_{L}\rangle_c-3|g_{R}\rangle_a|g_{R}\rangle_b|g_{R}\rangle_c+|g_{L}\rangle_a|g_{L}\rangle_b|g_{R}\rangle_c
+|g_{L}\rangle_a|g_{R}\rangle_b|g_{L}\rangle_c\cr\cr&&-|g_{L}\rangle_a|g_{R}\rangle_b|g_{R}\rangle_c
+|g_{R}\rangle_a|g_{L}\rangle_b|g_{L}\rangle_c-|g_{R}\rangle_a|g_{L}\rangle_b|g_{R}\rangle_c-|g_{R}\rangle_a|g_{R}\rangle_b|g_{L}\rangle_c).
\end{eqnarray}
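As a quick numerical sanity check (not part of the original derivation; a short Python sketch), one can apply the single-qubit transformation of Eq.~(\ref{e04}) to each qubit of the W state in Eq.~(\ref{e03}) and recover exactly the amplitudes of Eq.~(\ref{e05}):

```python
# Apply the Hadamard of Eq. (4) to each qubit of the W state of Eq. (3)
# and compare the resulting amplitudes with Eq. (5).
from itertools import product
from math import sqrt

def hadamard_amp(src, tgt):
    # |g_L> -> (|g_L>+|g_R>)/sqrt(2),  |g_R> -> (|g_L>-|g_R>)/sqrt(2)
    if src == 'L':
        return 1 / sqrt(2)
    return (1 if tgt == 'L' else -1) / sqrt(2)

w = {'LLR': 1/sqrt(3), 'LRL': 1/sqrt(3), 'RLL': 1/sqrt(3)}
out = {}
for tgt in map(''.join, product('LR', repeat=3)):
    out[tgt] = sum(a * hadamard_amp(s[0], tgt[0]) * hadamard_amp(s[1], tgt[1])
                   * hadamard_amp(s[2], tgt[2]) for s, a in w.items())

c = 1 / (2 * sqrt(6))
expected = {'LLL': 3*c, 'RRR': -3*c, 'LLR': c, 'LRL': c,
            'LRR': -c, 'RLL': c, 'RLR': -c, 'RRL': -c}
assert all(abs(out[k] - v) < 1e-12 for k, v in expected.items())
```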
If optical cavities $A$, $B$, and $C$ are initially prepared in vacuum states $|0_L,0_R\rangle_A\otimes|0_L,0_R\rangle_B\otimes|0_L,0_R\rangle_C$,
after time $t$, the temporal evolution of the total system is expressed as
\begin{eqnarray}\label{e06}
|\Psi(t)\rangle_2&=&\frac{1}{2\sqrt{6}}{\big(}3|\phi_{L}(t)\rangle_a|\phi_{L}(t)\rangle_b|\phi_{L}(t)\rangle_c-3|\phi_{R}(t)\rangle_a|\phi_{R}(t)\rangle_b|\phi_{R}(t)\rangle_c
\cr\cr&&+|\phi_{L}(t)\rangle_a|\phi_{L}(t)\rangle_b|\phi_{R}(t)\rangle_c+|\phi_{L}(t)\rangle_a|\phi_{R}(t)\rangle_b|\phi_{L}(t)\rangle_c
\cr\cr&&-|\phi_{L}(t)\rangle_a|\phi_{R}(t)\rangle_b|\phi_{R}(t)\rangle_c+|\phi_{R}(t)\rangle_a|\phi_{L}(t)\rangle_b|\phi_{L}(t)\rangle_c
\cr\cr&&-|\phi_{R}(t)\rangle_a|\phi_{L}(t)\rangle_b|\phi_{R}(t)\rangle_c-|\phi_{R}(t)\rangle_a|\phi_{R}(t)\rangle_b|\phi_{L}(t)\rangle_c{\big)},
\end{eqnarray}
where
\begin{eqnarray}\label{e07}
|\phi_{L}(t)\rangle_\mu=\alpha|g_L\rangle_\mu|0_L,0_R\rangle_\nu+\beta|e_L\rangle_\mu|1_L,0_R\rangle_\nu,\cr\cr
|\phi_{R}(t)\rangle_\mu=\alpha|g_R\rangle_\mu|0_L,0_R\rangle_\nu+\beta|e_R\rangle_\mu|0_L,1_R\rangle_\nu,
\end{eqnarray}
with $\mu=a,b,c$, $\nu=A,B,C$, and
\begin{eqnarray}\label{e08}
\alpha&=&\frac{\lambda_c^2+\Omega^2\cos[(\lambda_c^2+\Omega^2)t/\Delta]+{\rm i}\Omega^2\sin[(\lambda_c^2
+\Omega^2)t/\Delta]}{\lambda_c^2+\Omega^2},\cr\cr
\beta&=&\frac{-\lambda_c\Omega+\Omega^2\cos[(\lambda_c^2+\Omega^2)t/\Delta]+{\rm i}\lambda_c\Omega\sin[(\lambda_c^2
+\Omega^2)t/\Delta]}{\lambda_c^2+\Omega^2}.
\end{eqnarray}
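A short numerical check (a Python sketch; the parameter values are arbitrary illustrative choices) confirms that the choices $\lambda_c=\Omega$ and $t=\Delta\pi/(\lambda_c^2+\Omega^2)$ used below give $\alpha=0$ and $|\beta|=1$ in Eq.~(\ref{e08}), i.e., complete transfer of each atomic ground-state amplitude to an excited state plus an emitted cavity photon:

```python
# Evaluate the coefficients of Eq. (8) at lambda_c = Omega and
# t = Delta*pi/(lambda_c^2 + Omega^2); the values below are illustrative.
from math import cos, sin, pi

lam = 1.0            # lambda_c (arbitrary units)
Om = lam             # choice lambda_c = Omega
Delta = 50.0         # large detuning, value only illustrative
t = Delta * pi / (lam**2 + Om**2)

arg = (lam**2 + Om**2) * t / Delta   # equals pi for this choice of t
alpha = (lam**2 + Om**2*cos(arg) + 1j*Om**2*sin(arg)) / (lam**2 + Om**2)
beta = (-lam*Om + Om**2*cos(arg) + 1j*lam*Om*sin(arg)) / (lam**2 + Om**2)

assert abs(alpha) < 1e-10            # alpha vanishes: no population left in |g>
assert abs(abs(beta) - 1.0) < 1e-10  # |beta| = 1: one photon emitted per atom
```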
With the choices $\lambda_c=\Omega$ and $t=\frac{\Delta\pi}{\lambda_c^2+\Omega^2}$, we have $\alpha=0$ and $|\beta|=1$, so each atom emits one polarized photon. The photons leaking out of the cavities $A$, $B$, and $C$
first pass through a quarter-wave plate (QWP), whose action is to
convert left-circularly polarized photons into vertically
polarized photons and right-circularly polarized photons into
horizontally polarized photons, i.e., $|1_L,0_R\rangle\rightarrow|V\rangle$
and $|0_L,1_R\rangle\rightarrow|H\rangle$,
respectively, as shown in Fig.~2. Next, the photons in modes 1, 2, and 3 pass through
a series of polarization beam splitters (PBSs) and half-wave plates (HWPs). Here the
PBS transmits the horizontal polarization and reflects the vertical polarization, and the action of the HWP is given by the transformation
\begin{eqnarray}\label{e9}
|H\rangle\rightarrow\frac{1}{\sqrt{2}}(|H\rangle+|V\rangle),\cr\cr
|V\rangle\rightarrow\frac{1}{\sqrt{2}}(|H\rangle-|V\rangle).
\end{eqnarray}
After that, the resulting state of the atom-photon system is given by
\begin{eqnarray}\label{e10}
|\Psi\rangle_r&=&\frac{1}{2\sqrt{6}}(-3|e_L\rangle_a|e_L\rangle_b|e_L\rangle_c|\psi_7^-\rangle|\psi_8^-\rangle|\psi_9^-\rangle
+3|e_R\rangle_a|e_R\rangle_b|e_R\rangle_c|\psi_7^+\rangle|\psi_8^+\rangle|\psi_9^+\rangle
\cr\cr&&-|e_L\rangle_a|e_L\rangle_b|e_R\rangle_c|\psi_7^-\rangle|\psi_8^-\rangle|\psi_8^+\rangle
-|e_L\rangle_a|e_R\rangle_b|e_L\rangle_c|\psi_7^-\rangle|\psi_7^+\rangle|\psi_9^-\rangle
\cr\cr&&+|e_L\rangle_a|e_R\rangle_b|e_R\rangle_c|\psi_7^-\rangle|\psi_7^+\rangle|\psi_8^+\rangle
-|e_R\rangle_a|e_L\rangle_b|e_L\rangle_c|\psi_8^-\rangle|\psi_9^-\rangle|\psi_9^+\rangle
\cr\cr&&+|e_R\rangle_a|e_L\rangle_b|e_R\rangle_c|\psi_8^-\rangle|\psi_8^+\rangle|\psi_9^+\rangle
+|e_R\rangle_a|e_R\rangle_b|e_L\rangle_c|\psi_7^+\rangle|\psi_9^-\rangle|\psi_9^+\rangle),
\end{eqnarray}
where $|\psi_m^\pm\rangle=1/\sqrt{2}(|H_m\rangle\pm|V_m\rangle)$, with $m\in\{7,8,9\}$.
Finally, the photons are detected by conventional photon detectors of the kind commonly used
in photonic experiments. Such a detector cannot
resolve the number of detected photons but only tells us
whether photons are present in a detection event, with nonunit probability
$\eta_d$. The dark-count rate of the detector is usually considerably low
and hence can be neglected. The positive-operator-valued measure
(POVM) describing a conventional photon detector
is given by~\cite{HSPRA0979, HSAXKOE1119}
\begin{eqnarray}\label{e11}
\Pi_{{\rm
off}}&=&\sum\limits_{k=0}^\infty{\big(}1-\eta_d{\big)}^k|k\rangle\langle
k|,\cr\Pi_{{\rm
click}}&=&\sum\limits_{k=0}^\infty{\big[}1-{\big(}1-\eta_d{\big)}^k{\big]}|k\rangle\langle
k|.
\end{eqnarray}
Here $\Pi_{{\rm off}}$ is the POVM element for no photocounts and $\Pi_{{\rm click}}$ is that for photocounts. We only consider the events in which one of the detectors ($D_{7H}$, $D_{7V}$) detects photons while the other does not register any photon, and similarly for the detector pairs ($D_{8H}$, $D_{8V}$) and ($D_{9H}$, $D_{9V}$).
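Both POVM elements are diagonal in the Fock basis, so their basic properties can be checked in a few lines of Python (a sketch on a truncated Fock space; the efficiency value is illustrative):

```python
# Diagonal entries of the POVM elements of Eq. (11) on a truncated Fock space.
eta = 0.6                       # detector efficiency (illustrative)
kmax = 10                       # Fock-space truncation (assumption)
off = [(1 - eta)**k for k in range(kmax + 1)]
click = [1 - (1 - eta)**k for k in range(kmax + 1)]

# POVM completeness: Pi_off + Pi_click = identity on each Fock state
assert all(abs(o + c - 1.0) < 1e-12 for o, c in zip(off, click))
# a single photon is registered with probability eta, as expected
assert abs(click[1] - eta) < 1e-12
# the vacuum never produces a click (dark counts neglected)
assert abs(off[0] - 1.0) < 1e-12
```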
After the detection of photons, the state of atoms $a$, $b$, and $c$ is given by
\begin{eqnarray}\label{e12}
\rho^k_{{\rm out}}&=&\frac{{\rm
Tr}_{7H,7V,8H,8V,9H,9V}\left[\Pi_{{\rm
click}}^{7\delta}\Pi_{{\rm click}}^{8\delta}\Pi_{{\rm
click}}^{9\delta}\Pi_{{\rm
off}}^{7\gamma}\Pi_{{\rm off}}^{8\gamma}\Pi_{{\rm
off}}^{9\gamma}{\big(}|\Psi\rangle_r{_r\langle\Psi|}{\big)}\right]}{\
{\rm
Tr}_{a,b,c,7H,7V,8H,8V,9H,9V}\left[\Pi_{{\rm
click}}^{7\delta}\Pi_{{\rm click}}^{8\delta}\Pi_{{\rm
click}}^{9\delta}\Pi_{{\rm
off}}^{7\gamma}\Pi_{{\rm off}}^{8\gamma}\Pi_{{\rm
off}}^{9\gamma}{\big(}|\Psi\rangle_r{_r\langle\Psi|}{\big)}\right]\
}\cr\cr&=&|\Psi\rangle_a^j{_a^j\langle\Psi|},
\end{eqnarray}
where $\delta\neq\gamma\in\{H,V\}$, $j=1,2$, and
$|\Psi\rangle_r$ is denoted by Eq.~({\ref{e10}}), with
\begin{eqnarray}\label{e13}
|\Psi\rangle_a^1&=&\frac{1}{\sqrt{2}}(|e_L\rangle_a|e_L\rangle_b|e_L\rangle_c+|e_R\rangle_a|e_R\rangle_b|e_R\rangle_c),\cr\cr
|\Psi\rangle_a^2&=&\frac{1}{\sqrt{2}}(-|e_L\rangle_a|e_L\rangle_b|e_L\rangle_c+|e_R\rangle_a|e_R\rangle_b|e_R\rangle_c),
\end{eqnarray}
which are three-atom GHZ states. Here $|\Psi\rangle_a^1$ corresponds to the case in which photon detectors $\{D_{7H}, D_{8H}, D_{9V}\}$ (or $\{D_{7H}, D_{8V}, D_{9H}\}$, or $\{D_{7V}, D_{8H}, D_{9H}\}$, or $\{D_{7V}, D_{8V}, D_{9V}\}$) detect photons while the others do not register any photon, and $|\Psi\rangle_a^2$ corresponds to the case in which photon detectors $\{D_{7H}, D_{8H}, D_{9H}\}$ (or $\{D_{7H}, D_{8V}, D_{9V}\}$, or $\{D_{7V}, D_{8H}, D_{9V}\}$, or $\{D_{7V}, D_{8V}, D_{9H}\}$) detect photons. The state $|\Psi\rangle_a^2$ can be transformed into $|\Psi\rangle_a^1$ by applying a classical microwave pulse to change the sign of an arbitrary atomic state.
The overall success probability for obtaining the state in Eq.~({\ref{e13}}) is
\begin{eqnarray}\label{e14}
P=\frac{3\eta_d^3}{4},
\end{eqnarray}
with $\eta_d$ being the quantum efficiency of the photon detectors. Performing the following transformations
\begin{eqnarray}\label{e15}
|e_L\rangle_i&\rightarrow&|g_L\rangle_i,\cr\cr
|e_R\rangle_i&\rightarrow&|g_R\rangle_i,~~i=a,b,c,
\end{eqnarray}
which can be achieved by individually applying fast Raman transitions to
atoms $a$, $b$, and $c$~\cite{HXYSKJPB0942, HASKJPB1043}, maps the
state in Eq.~({\ref{e13}}) to the state
\begin{eqnarray}\label{e16}
|\Psi\rangle_{\rm GHZ}=\frac{1}{\sqrt{2}}(|g_L\rangle_a|g_L\rangle_b|g_L\rangle_c+|g_R\rangle_a|g_R\rangle_b|g_R\rangle_c).
\end{eqnarray}
In this way, the conversion from a three-atom W state to a GHZ state is achieved, with a maximal success probability of $3/4$ in the ideal case.
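As an illustrative cross-check (a Python sketch; the efficiency value is an assumption), the success probability of Eq.~(\ref{e14}) can be reproduced by noting that only the two GHZ-type terms of Eq.~(\ref{e10}) place exactly one photon in each of the output modes 7, 8, and 9; since their atomic parts are orthogonal, their probabilities add:

```python
# Enumerate the eight terms of Eq. (10): (coefficient/norm, photon output modes).
# The atomic part is irrelevant for the detection pattern.
from math import sqrt, isclose

norm = 1 / (2 * sqrt(6))
terms = [(-3, (7, 8, 9)), (+3, (7, 8, 9)),
         (-1, (7, 8, 8)), (-1, (7, 7, 9)), (+1, (7, 7, 8)),
         (-1, (8, 9, 9)), (+1, (8, 8, 9)), (+1, (7, 9, 9))]

eta = 0.85   # detector efficiency (illustrative value)
# A term can produce one click in each detector pair (D7, D8, D9) only if it
# puts exactly one photon into each of the output modes 7, 8, 9.
p_success = sum((norm * c)**2 for c, modes in terms
                if sorted(modes) == [7, 8, 9]) * eta**3

assert isclose(p_success, 3 * eta**3 / 4)   # reproduces Eq. (14)
```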
\section{The effects of cavity decay, spontaneous decay, and photon leakage}\label{sec2}
In this section, we investigate the influence of cavity decay, atomic spontaneous decay and photon leakage of the cavities. When the cavity decay is considered, the Hamiltonian is rewritten as
\begin{eqnarray}\label{e17}
H_{\rm eff}^\prime=H_{\rm eff}-{\rm i}\kappa\sum\limits_{j=L,R}a_j^\dag a_j,
\end{eqnarray}
where $H_{\rm eff}$ is given by Eq.~({\ref{e02}}) and both polarization modes $L$ and $R$ have been assumed to have the same loss rate $\kappa$. The evolution coefficients $\alpha$ and $\beta$ in Eq.~({\ref{e08}}) are thus replaced by
\begin{eqnarray}\label{e18}
\alpha^\prime&=&\frac{[\phi\cosh(\phi t/2)+\kappa\sinh(\phi t/2)]e^{\varphi t}}{\phi},\cr\cr
\beta^\prime&=&\frac{{\rm i}\eta(e^{\phi t}-1)e^{t(\varphi-\phi/2)}}{\phi},
\end{eqnarray}
where $\phi=\sqrt{\kappa^2-4\eta^2}$, $\eta=\frac{\lambda_c^2}{\Delta}$, $\varphi={\rm i}\eta-\frac{\kappa}{2}$,
and $\lambda_c=\Omega$. In this case, the success probability, corresponding to successful detections of the photons,
for obtaining the state $|\Psi\rangle_{\rm GHZ}$ in Eq.~({\ref{e16}}) at time $t$ is given by
\begin{eqnarray}\label{e19}
P_d=\frac{6\eta^6\left[1-\cos(\phi^\prime t)\right]^3e^{-3\kappa t}}{{\phi^\prime}^6},
\end{eqnarray}
where $\phi^\prime=\sqrt{4\eta^2-\kappa^2}$. The success probability $P_d$ as a function
of $\kappa t$ for different values of $\eta$ is plotted in Fig.~3, which shows that the cavity decay rate $\kappa$ is the dominant noise source in the state conversion. The success probability $P_d$ (for $\kappa=0$, the maximal success probability is $P_d=0.75$) decreases rapidly from 0.715584 to 0.03853 as $\kappa t$ increases from 0.0156582 to 0.9896 (here we set $\eta=100\kappa$). Therefore, the waiting time for the effective detection of photons can be chosen to be a few cavity lifetimes $1/\kappa$.
\begin{figure}
\includegraphics[width=4.5in]{fig3.eps}\\
\caption{(Color online) The success probability $P_d$ as a function of $\kappa t$ with different values of $\eta$ for detecting three photons successfully in different modes shown in Fig.~2.}\label{f03}
\end{figure}
On the other hand, for convenience, we consider only the subsystem consisting of one atom and one cavity. Based on the density-matrix formalism, the master equation for the density matrix of this subsystem can be expressed as
\begin{eqnarray}\label{e21}
\dot{\rho}&=&-{\rm i}[H_{\mathrm{I}},\rho]-\sum_{j=L, R}\Bigg[\frac{\kappa_{j}}{2}\left(a_{j}^{\dag}a_{j}\rho-2a_{j}\rho a_{j}^{\dag}+\rho a_{j}^{\dag}a_{j}\right)\cr\cr
&&+\sum_{x=g,e}\frac{\gamma_{j}^{fx}}{2}\left(\sigma_{j}^{ff}\rho-2\sigma_{j}^{xf}\rho\sigma_{j}^{fx}+\rho\sigma_{j}^{ff}\right)\Bigg],
\end{eqnarray}
where $H_{\mathrm{I}}$ is given by Eq.~(\ref{e01}), $\kappa_{j}~(j=L, R)$ denotes the decay rate of the cavity field, $\gamma_{j}^{fx}~(x=g, e)$ denotes the spontaneous decay rate of the atom from level $|f_{j}\rangle$ to $|x_{j}\rangle$, and $\sigma_{j}^{mn}=|m_{j}\rangle\langle n_{j}|~(m,n=f,g,e)$ are the atomic transition and projection operators. For convenience, we assume $\gamma_{j}^{fx}=\gamma_{a}/2$, owing to the equiprobable transitions $|f_{j}\rangle\leftrightarrow|x_{j}\rangle$, and set $\kappa_{j}=\gamma_{j}^{fx}$ for simplicity. In the following, we analyze and discuss the parameter conditions and the experimental feasibility of the present scheme. Choosing a scaling $\gamma$, all parameters can be expressed in dimensionless units relative to $\gamma$; we set $\Omega=2.9\gamma$, $\Delta=14\gamma$, and $\lambda_{c}=2.86\gamma$. By solving Eq.~(\ref{e21}) numerically, we obtain the effects of the atomic spontaneous decay and the photon leakage of the cavities on the fidelity for the full system of three atoms and three cavities, as shown in Fig.~4.
In current experiments, the parameters $\lambda_{c}=2.5~\mathrm{GHz}$, $\kappa=10~\mathrm{MHz}$, and $\gamma_{a}=10~\mathrm{MHz}$ have been reported in Refs.~\cite{STKKEHPRA0571,MFMNP062}. For such parameters, the calculated fidelity is about $91.04\%$, which is relatively high. Even if the atomic spontaneous decay increases such that $\lambda_{c}/\gamma_{a}=50$, the fidelity can still reach $90.09\%$.
\begin{figure}
\includegraphics[width=5in]{fig4.eps}\\
\caption{(Color online) The fidelity as affected by the atomic spontaneous decay and the photon leakage of the cavities. The parameters are chosen as $\Omega=2.9\gamma$, $\Delta=14\gamma$, and $\lambda_{c}=2.86\gamma$.}\label{f04}
\end{figure}
\section{Conclusions}\label{sec3}
In conclusion, based on the atom-cavity interaction and linear optical elements, we have proposed a method to convert a three-atom W state into a GHZ state by interference of polarized photons emitted by atoms trapped in spatially separated cavities. In our scheme, the levels $|F=1/2,m=-1/2\rangle$ and $|F=1/2,m=1/2\rangle$ of
$4^2P_{1/2}$ for a $^{40}{\rm Ca}^+$ can be used as the excited states $|f_L\rangle$ and $|f_R\rangle$, $|F=1/2,m=1/2\rangle$ and $|F=1/2,m=-1/2\rangle$ of
$4^2S_{1/2}$ can be used as the ground states $|g_L\rangle$ and $|g_R\rangle$, and $|F=3/2,m=-3/2\rangle$ and
$|F=3/2,m=3/2\rangle$ of $3^2D_{3/2}$ can be used to serve as the states $|e_L\rangle$ and $|e_R\rangle$, respectively. The lifetimes
of the atomic levels $|e_L\rangle$, $|e_R\rangle$, $|g_L\rangle$, and $|g_R\rangle$ are comparatively long so that we can neglect the spontaneous decay
of these states. We have analyzed the effect of cavity decay on the success probability and the effects of spontaneous decay and photon leakage on the state fidelity; the calculated results show that our scheme should be experimentally realizable with current cavity QED and linear optical
techniques.
\begin{center}
{\small {\bf ACKNOWLEDGMENTS}}
\end{center}
This work was supported by the National Natural Science Foundation of China under
Grant Nos. 11264042, 11465020, 61465013, 11165015, and 11564041.
\section{Introduction}
\label{i}
Time evolution of the density $\varrho = \varrho(t,x)$ and the velocity $\vc{u} = \vc{u}(t,x)$ of a compressible barotropic viscous fluid can be described
by the Navier--Stokes system
\begin{eqnarray}
\label{i1}
\partial_t \varrho + {\rm div}_x (\varrho \vc{u}) &=& 0,
\\ \label{i2}
\partial_t (\varrho \vc{u}) + {\rm div}_x (\varrho \vc{u} \otimes \vc{u}) + \nabla_x p(\varrho) &=& {\rm div}_x \mathbb{S} (\nabla_x \vc{u}),\\
\label{i3}
\mathbb{S} (\nabla_x \vc{u}) &=& \mu \left( \nabla_x \vc{u} + \nabla_x^t \vc{u} - \frac{2}{3} {\rm div}_x \vc{u} \mathbb{I} \right) + \eta {\rm div}_x \vc{u} \mathbb{I}.
\end{eqnarray}
We assume the fluid is confined to a bounded physical domain $\Omega \subset R^3$, where the velocity satisfies the no-slip boundary conditions
\begin{equation} \label{i4}
\vc{u}|_{\partial \Omega} =0.
\end{equation}
For the sake of simplicity, we ignore the effect of external forces in the momentum equation (\ref{i2}).
{\color{black}
In the literature there is a large variety of efficient numerical methods developed for the compressible Euler and Navier-Stokes equations. The most classical of them are the finite volume methods, see, e.g., \cite{feist1}, \cite{kroener}, \cite{tadmor-ns}, the methods based on a suitable
combination of the finite volume and finite element methods \cite{dolejsi}, \cite{ffl}, \cite{fflw}, \cite{gal1}, \cite{gal2},
or the discontinuous Galerkin schemes, e.g.~\cite{feist2}, \cite{feist3} and the references therein.
Although these methods are frequently used for many physical or engineering applications, there are only partial theoretical results available
concerning their analysis for the compressible Euler or
Navier-Stokes systems. We refer to the works of Tadmor et al.~\cite{tadmor2}, \cite{tadmor5}, \cite{tadmor} for entropy stability in the context of hyperbolic balance laws and to the works of Gallou\"et et al.~\cite{gal1}, \cite{gal2} for the stability analysis of the
mixed finite volume--finite element methods based on the Crouzeix-Raviart elements for compressible viscous flows.
In \cite{rohde} Jovanovi\'{c} and Rohde obtained the error estimate for entropy dissipative finite volume methods applied to nonlinear hyperbolic balance laws under (a rather restrictive) assumption of the global existence of a bounded, smooth exact solution.}
Our goal in this paper is to study convergence of solutions to the numerical scheme proposed originally by Karlsen and Karper \cite{KarKar3},
\cite{KarKar2}, \cite{KarKar1}, \cite{Karp}
to solve problem (\ref{i1}--\ref{i4}) in polygonal (numerical) domains, and later modified in
\cite{FeKaMi} to accommodate approximations of smooth physical domains. The scheme is implicit and of mixed type, where the convective terms are approximated via
upwind operators, while the viscous stress is handled by means of the Crouzeix--Raviart finite element method. As shown by Karper \cite{Karp} and in
\cite{FeKaMi},
the scheme provides a family of numerical solutions containing a sequence that converges to a weak solution of the Navier-Stokes system as the discretization
parameters tend to zero. Recently, Gallou{\"e}t et al. \cite{GalHerMalNov} established rigorous error estimates under the assumption that the limit problem admits
a smooth solution. Numerical experiments illustrating theoretical predictions have been performed in \cite{FLNNS}.
We consider the problem under physically realistic assumptions, where theoretical results are still in short supply. In particular, our results cover
completely the \emph{isentropic} pressure--density state equation
\begin{equation} \label{i5}
p(\varrho) = a \varrho^{\gamma}, \ 1 < \gamma < 2.
\end{equation}
Note that the assumption $\gamma < 2$ is not restrictive in this context as the largest physically relevant exponent is $\gamma = \frac{5}{3}$.
Let us remark that the available theoretical results concerning global-in-time existence of \emph{weak} solutions cover only the case
$\gamma > \frac{3}{2}$ \cite{FNP}, see also the recent result by Plotnikov and Weigant \cite{PloWei} for the borderline case in the 2D setting.
{Similarly}, the
error estimates obtained by Gallou{\"e}t et al. \cite{GalHerMalNov} provide convergence under the same conditions yielding explicit convergence rates for
$\gamma > \frac{3}{2}$ and mere boundedness of the numerical solutions in the limit case $\gamma = \frac{3}{2}$.
Our goal is to establish convergence of the numerical solutions in the full range of the adiabatic exponent $\gamma$ specified in (\ref{i5}).
The main idea is to use the concept of \emph{dissipative measure-valued solution} to problem (\ref{i1}--\ref{i4}) introduced recently in {\cite{FGSWW1}}, \cite{GSWW}.
These are, roughly speaking, measure-valued solutions satisfying, in addition, an energy inequality in which the dissipation defect
measure dominates the concentration remainder in the equations. Although very general, a dissipative measure-valued solution coincides
with the strong solution of the same initial-value problem as long as the latter exists, see \cite{FGSWW1}. Our approach is based on the following steps:
\begin{itemize}
\item
We recall the numerical energy balance identified in Karper's original paper.
\item
We use the energy estimates to show stability of the numerical method.
\item
A consistency formulation of the problem is derived involving numerical solutions and error terms vanishing with the time step
$\Delta t$ and the spatial
discretization parameter $h$ approaching zero.
\item
We show that the family of numerical solutions generates a dissipative measure-valued solution of the problem. Such a result is, of course,
of independent interest. {\color{black} As claimed recently by Fjordholm et al.~\cite{tadmor3}, \cite{tadmor4} the dissipative measure-valued solutions yield,
{at least in the context of \emph{hyperbolic} conservation laws}, a more appropriate solution concept than the weak entropy solutions.}
\item
Finally, using the weak--strong uniqueness principle established in \cite{FGSWW1}, we infer that the numerical solutions converge (a.a.) pointwise
to the smooth solution of the limit problem as long as the latter exists.
\end{itemize}
The paper is organized as follows. The numerical scheme is introduced in Section \ref{N}. In Section \ref{S}, we recall the numerical counterpart of the
energy balance and derive stability estimates. In Section \ref{C}, we introduce a consistency formulation of the problem and estimate the numerical
errors. Finally, we show that the numerical scheme generates a dissipative measure-valued solution to the compressible Navier--Stokes system
and state our main convergence results in Section \ref{M}.
\section{Numerical scheme}
\label{N}
To begin, we introduce the notation necessary to formulate our numerical method.
\subsection{Spatial domain, mesh}
We suppose that $\Omega \subset R^3$ is a bounded domain. We consider a polyhedral approximation $\Omega_h$, where $\Omega_h$ is a polygonal domain,
\[
\Ov{\Omega}_h = \cup_{E^j \in E_h} E^j, \ {\rm int}[ E^i ] \cap {\rm int}[E^j] = \emptyset \ \mbox{for}\ i \ne j,
\]
where each $E^j \in E_h$ is a closed tetrahedron that can be obtained via the affine transformation
\[
E^j = h \mathbb{A}_{E^j} \tilde E + \vc{a}_{E^j}, \ \mathbb{A}_{E^j} \in R^{3 \times 3}, \ \vc{a}_{E^j} \in R^3,
\]
where $\tilde E$ is the reference element
\[
\tilde{E} = {\rm co} \left\{ [0,0,0], [1,0,0], [0,1,0], [0,0,1] \right\},
\]
and
where all eigenvalues of the matrix $\mathbb{A}_{E^j}$ are bounded above and below away from zero uniformly for $h \to 0$.
The family $E_h$ of all tetrahedra covering $\Omega_h$ is called \emph{mesh}, the positive number $h$ is the parameter of spatial discretization.
We write
\[
\begin{split}
a &\stackrel{<}{\sim} b \Leftrightarrow a \leq c b, \ c > 0 \ \mbox{independent of}\ h, \\
a &\stackrel{>}{\sim} b \Leftrightarrow a \geq c b, \ c > 0 \ \mbox{independent of}\ h, \\
a&\approx b \Leftrightarrow a \stackrel{<}{\sim} b \ \mbox{and}\ a \stackrel{>}{\sim} b.
\end{split}
\]
Furthermore, we suppose that:
\begin{itemize}
\item
a non-empty intersection of two elements $E^j$, $E^i$ is their common face, edge, or vertex;
\item
for all compact sets $K_i \subset \Omega$, $K_e \subset R^3 \setminus \Ov{\Omega}$ there is $h_0 > 0$ such that
\[
K_i \subset \Omega_h, \ K_e \subset R^3 \setminus \Ov{\Omega}_h \ \mbox{for all}\ 0 < h < h_0.
\]
\end{itemize}
The symbol $\Gamma_h$ denotes the set of all faces in the mesh. We distinguish exterior and interior faces:
\[
\Gamma_h = \Gamma_{h, {\rm int}} \cup \Gamma_{h, {\rm ext}},\
\Gamma_{h, {\rm ext}} = \left\{ \Gamma \in \Gamma_h \ \Big| \ \Gamma \subset \partial \Omega_h \right\},\
\Gamma_{h, {\rm int}} = \Gamma_h \setminus \Gamma_{h, {\rm ext}}.
\]
\subsection{Function spaces}
Our scheme utilizes spaces of piecewise smooth functions, for which we define the traces
\[
v^{\rm out} = \lim_{\delta \to 0} v(x + \delta \vc{n}_\Gamma),\
v^{\rm in} = \lim_{\delta \to 0} v(x - \delta \vc{n}_\Gamma), \ x \in \Gamma, \ \Gamma \in \Gamma_{h, {\rm int}},
\]
where $\vc{n}_\Gamma$ denotes the outer normal vector to the face $\Gamma \subset \partial E$. Analogously, we define
$v^{\rm in}$ for $\Gamma \subset \Gamma_{h,{\rm ext}}$. We simply write $v$ for $v^{\rm in}$ if no confusion arises. We also define
\[
\ju{v} = v^{\rm out} - v^{\rm in}, \ \avg{ v } = \frac{ v^{\rm out} + v^{\rm in} }{2}, \ \av{v} = \frac{1}{|\Gamma|} \intG{v}.
\]
Next, we introduce the space of piecewise
constant functions
\[
Q_h (\Omega_h) = \left\{ v \in L^1(\Omega_h) \ \Big|\ v|_E = {\rm const} \in R \ \mbox{for any} \ E \in E_h \right\},
\]
with the associated projection
\[
\Pi^{Q}_h : L^1 (\Omega_h) \to Q_h (\Omega_h), \ \Pi^Q_h [v] = \left< v \right>_{E} = \frac{1}{|E|} \int_E v \ \,{\rm d} {x}, \ E \in E_h.
\]
We shall occasionally write
\[
\Pi^Q_h [v] = \avo{v}.
\]
Finally, we introduce the Crouzeix--Raviart finite element spaces
\[
V_h (\Omega_h) = \left\{ v \in L^2(\Omega_h) \ \Big| \ v|_E = \mbox{affine function for any} \ E \in E_h, \
\intG{ v^{\rm in} } = \intG{ v^{\rm out}} \ \mbox{for}\ \Gamma \in \Gamma_{h, {\rm int}} \right\},
\]
\[
V_{0,h} (\Omega_h) = \left\{ v \in V_h (\Omega_h) \ \Big| \ \intG{ v^{\rm in} } = 0 \ \mbox{for}\ \Gamma \in \Gamma_{h, {\rm ext}} \right\},
\]
along with the associated projection
\[
\Pi^V_h : W^{1,1}(\Omega_h) \to V_h (\Omega_h), \ \intG{ \Pi^V_h [v] } = \intG{v} \ \mbox{for any}\ \Gamma \in \Gamma_h.
\]
We denote by $\nabla_h v$, ${\rm div}_h v$ the piecewise constant functions resulting from the action of the corresponding differential
operator on $v$ on each fixed element in $E_h$,
\[
\nabla_h v \in Q_h(\Omega_h; R^3), \ \nabla_h v = \nabla_x v \ \mbox{on each}\ E \in E_h,\
{\rm div}_h \vc{v} \in Q_h(\Omega_h), \ {\rm div}_h \vc{v} = {\rm div}_x \vc{v} \ \mbox{on each}\ E \in E_h.
\]
\subsection{Discrete time derivative, dissipative upwind}
For a given time step $\Delta t > 0$ and the (already known) value of the numerical solution $v^{k-1}_h$ at a given time level $t_{k-1} = (k-1) \Delta t$,
we introduce the discrete time derivative
\[
D_t v_h = \frac{ v^k_h - v^{k-1}_h }{\Delta t}
\]
to compute the numerical approximation $v^k_h$ at the level $t_k = t_{k-1} + \Delta t$.
To approximate the convective terms, we use the dissipative upwind operators introduced in \cite{FeKaMi} (see also \cite{FeiKaPok}), specifically,
\begin{equation} \label{N1}
\begin{split}
{\rm Up}[r_h, \vc{u}_h] &= \underbrace{\av{ r_h } \avg{ \vc{u}_h \cdot \vc{n}}}_{\rm convective \ part} - \frac{1}{2}
\underbrace{\max\{ h^\alpha; | \avg{ \vc{u}_h \cdot \vc{n}} | \} \ju{ r_h }}_{\rm dissipative \ part}
\\ &= \underbrace{ r_h^{\rm out} [ \avg{\vc{u}_h \cdot \vc{n}} ]^- +
r_h^{\rm in} [ \avg{\vc{u}_h \cdot \vc{n}} ]^+}_{\rm standard \ upwind} - \frac{h^\alpha}{2} \ju{r_h} \chi \left( \frac{ \avg{\vc{u}_h \cdot \vc{n} }}{h^\alpha} \right),
\end{split}
\end{equation}
where
\[
\chi(z) = \left\{ \begin{array}{ll} 0 & \mbox{for}\ z < -1, \\ z + 1 & \mbox{for} \ -1 \leq z \leq 0, \\
1 - z & \mbox{for} \ 0 < z \leq 1, \\
0 & \mbox{for}\ z > 1.
\end{array} \right.
\]
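The equivalence of the two expressions for ${\rm Up}$ in (\ref{N1}) rests on the elementary identity $\max\{ h^\alpha; |v| \} - |v| = h^\alpha \chi(v / h^\alpha)$. This can be cross-checked numerically; the following Python sketch is purely illustrative (the function names, the sampling ranges, and the conventions $\ju{r} = r^{\rm out} - r^{\rm in}$, $\av{r} = (r^{\rm in} + r^{\rm out})/2$ are our assumptions, not part of the scheme):

```python
import random

def chi(z):
    # cut-off function of the scheme: chi(z) = 1 - |z| on [-1, 1], zero outside
    return max(0.0, 1.0 - abs(z))

def up_average_form(r_in, r_out, v, h, alpha):
    # Up[r, u] = <r> <u.n> - (1/2) max{h^alpha, |<u.n>|} [[r]],
    # with the assumed conventions <r> = (r_in + r_out)/2, [[r]] = r_out - r_in
    return 0.5 * (r_in + r_out) * v - 0.5 * max(h**alpha, abs(v)) * (r_out - r_in)

def up_standard_form(r_in, r_out, v, h, alpha):
    # standard upwind r_out [v]^- + r_in [v]^+ minus the h^alpha-dissipation
    standard = r_out * min(v, 0.0) + r_in * max(v, 0.0)
    return standard - 0.5 * h**alpha * (r_out - r_in) * chi(v / h**alpha)

random.seed(1)
for _ in range(1000):
    r_in, r_out = random.uniform(0.0, 5.0), random.uniform(0.0, 5.0)
    v = random.uniform(-2.0, 2.0)
    h, alpha = random.uniform(0.01, 0.5), random.uniform(0.1, 1.0)
    a = up_average_form(r_in, r_out, v, h, alpha)
    b = up_standard_form(r_in, r_out, v, h, alpha)
    assert abs(a - b) < 1e-12 * (1.0 + abs(a))
```

Both forms agree to machine precision, confirming that the dissipative upwind is the standard upwind flux augmented by the extra $h^\alpha$-dissipation active only where $|\avg{\vc{u}_h \cdot \vc{n}}| \leq h^\alpha$.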
\subsection{Numerical scheme}
Given the initial data
\begin{equation} \label{N2}
\varrho^0_h \in Q_h (\Omega_h), \ \vc{u}^0_h \in V_{0,h} (\Omega_h; R^3),
\end{equation}
and the numerical solution
\[
\varrho^{k-1}_h \in Q_h (\Omega_h), \ \vc{u}^{k-1}_h \in V_{0,h} (\Omega_h; R^3),\ k \geq 1,
\]
the value $[\varrho^k_h, \vc{u}^k_h] \in Q_h (\Omega_h) \times V_{0,h} (\Omega_h; R^3)$ is obtained as a solution of the following system of equations:
\begin{equation} \label{N3}
\intOh{ D_t \varrho^k_h \phi } - \sum_{\Gamma \in \Gamma_{h, {\rm int}}} \intG{ {\rm Up}[\varrho^k_h, \vc{u}^k_h] \ju{\phi} } = 0
\end{equation}
for any $\phi \in Q_h(\Omega_h)$;
\begin{equation} \label{N4}
\begin{split}
\intOh{ D_t \left( \varrho^k_h \avo{ \vc{u}^k_h } \right) \cdot \vcg{\phi} } &- \sum_{\Gamma \in \Gamma_{h, {\rm int}}} \intG{ {\rm Up}[\varrho^k_h \avo{ \vc{u}^k_h} , \vc{u}^k_h]
\cdot \ju{ \avo{ \vcg{\phi} } } }- \intOh{ p(\varrho^k_h) {\rm div}_h \vcg{\phi} }\\
&+ \mu \intOh{ \nabla_h \vc{u}^k_h : \nabla_h \vcg{\phi} } + \left( \frac{\mu}{3} + \eta \right) \intOh{ {\rm div}_h \vc{u}^k_h {\rm div}_h \vcg{\phi} } =
0
\end{split}
\end{equation}
for any $\vcg{\phi} \in V_{0,h}(\Omega_h; R^3)$. The specific form of the viscous stress in (\ref{N4}) reflects the fact that the viscosity coefficients
are constant.
It was shown in \cite{Karp} (see also \cite[Part II]{FeiKaPok}) that system (\ref{N3}), (\ref{N4}) is solvable for any choice of the initial data (\ref{N2}). In addition,
$\varrho^k_h > 0$ whenever $\varrho^0_h > 0$. In general, the solution $[\varrho^k_h, \vc{u}^k_h]$ may not be uniquely determined by $[\varrho^{k-1}_h, \vc{u}^{k-1}_h]$
unless the time step $\Delta t$ is suitably restricted by a CFL type condition. We comment further on this option in Remark \ref{RC3} below.
As shown in \cite{FeKaMi} (see also \cite[Part II]{FeiKaPok}), the family of numerical solutions converges, up to a suitable subsequence, to a weak solution of the Navier--Stokes
system (\ref{i1}--\ref{i4}) as $h \to 0$ if
\begin{itemize}
\item
the time step is adjusted so that $\Delta t \approx h$;
\item
the viscosity coefficients satisfy $\mu > 0$, $\eta \geq 0$;
\item
the pressure satisfies
\[
p(\varrho) = a \varrho^{\gamma} + b \varrho, \ a,b > 0, \ \gamma > 3.
\]
\end{itemize}
If the limit solution of the Navier--Stokes system is smooth, then qualitative error estimates can be derived on condition
that $p$ satisfies (\ref{i5}) with $\gamma \geq 3/2$, see Gallou{\"e}t et al. \cite{GalHerMalNov}.
Unfortunately, many real world applications correspond to smaller adiabatic exponents, the most prominent being air with
$\gamma = 7/5$.
It is therefore of great interest to discuss convergence
of the scheme in the physically relevant range
$1 < \gamma < 2$.
\section{Stability - energy estimates}
\label{S}
It is crucial for our analysis that the numerical scheme (\ref{N2}--\ref{N4}) admits a certain form of total energy balance.
For the pressure potential
\[
P(\varrho) = \frac{a}{\gamma - 1} \varrho^{\gamma}, \ P''(\varrho) = \frac{p'(\varrho)}{\varrho} = a \gamma \varrho^{\gamma - 2},
\]
the \emph{total energy balance} reads
\begin{equation} \label{S1}
\begin{split}
&\intOh{ D_t \left[ \frac{1}{2} \varrho^k_h |\avo{\vc{u}^k_h}|^2 + P(\varrho^k_h) \right] }
+ \intOh{ \left[\mu |\nabla_h \vc{u}^k_h |^2 + (\mu/3 + \eta)
|{\rm div}_h \vc{u}^k_h |^2\right] }
\\
&= - \frac{1}{2} \intOh{ P''(s^k_h) \frac{ \left( \varrho^k_h - \varrho^{k-1}_h \right)^2}{\Delta t} } - \intOh{\frac{\Delta t}{2} \varrho^{k-1}_h
\left| \frac{ \avo{\vc{u}^k_h} - \avo{\vc{u}^{k-1}_h} } {\Delta t} \right|^2 }
\\
&- \frac{h^\alpha}{2} \sum_{\Gamma \in \Gamma_{h, {\rm int}}} \intG{
\ju{\varrho^k_h} \ju{ P' (\varrho^k_h )}\chi \left( \frac{ \avg{\vc{u}^k_h \cdot \vc{n} }}{h^\alpha} \right) }
\\
&- \frac{1}{2} \sum_{\Gamma \in \Gamma_h} \intG{ P''(z^k_h) \ju{ \varrho^k_h}^2 | \avg{ \vc{u}^k_h \cdot \vc{n} } | }
\\
&- \frac{h^\alpha}{2} \sum_{\Gamma \in \Gamma_{h, {\rm int}}} \intG{ \av{ \varrho^k_h } \cdot \ju{ \avo{\vc{u}^k_h} }^2
\chi \left( \frac{ \avg{ \vc{u}^k_h \cdot \vc{n} } }{h^\alpha} \right) }
\\
&- \frac{1}{2} \sum_{\Gamma \in \Gamma_{h, {\rm int}} } \intG{ \left( (\varrho^k_h)^{\rm in} [ \avg{ \vc{u}^k_h \cdot \vc{n} } ]^+ -
(\varrho^k_h)^{\rm out} [ \avg{ \vc{u}^k_h \cdot \vc{n}} ]^- \right) \ju{ \avo{\vc{u}^k_h} }^2 },
\end{split}
\end{equation}
with
\[
s^k_h \in {\rm co}\{ \varrho^k_h, \varrho^{k-1}_h \}, \ z^k_h \in {\rm co}\{ (\varrho^k_h)^{\rm in}, (\varrho^k_h)^{\rm out} \},
\]
see \cite[Chapter 7, Section 7.5.4]{FeiKaPok}.
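The relation $P''(\varrho) = p'(\varrho)/\varrho$ underlying the total energy balance can be spot-checked by finite differences. The snippet below is a quick sanity check only, with $p(\varrho) = a \varrho^\gamma$ and parameter values of our choosing:

```python
def P(rho, a=1.3, gamma=1.4):
    # pressure potential P(rho) = a rho^gamma / (gamma - 1)
    return a * rho**gamma / (gamma - 1.0)

def p_prime(rho, a=1.3, gamma=1.4):
    # derivative of the pressure p(rho) = a rho^gamma
    return a * gamma * rho**(gamma - 1.0)

def second_diff(f, x, eps=1e-4):
    # central second difference approximating f''(x)
    return (f(x + eps) - 2.0 * f(x) + f(x - eps)) / eps**2

# P''(rho) should agree with p'(rho)/rho = a*gamma*rho^(gamma-2)
for rho in (0.5, 1.0, 2.0, 5.0):
    assert abs(second_diff(P, rho) - p_prime(rho) / rho) < 1e-3
```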
As the numerical densities are positive,
all terms on the right-hand side of (\ref{S1}) representing numerical dissipation
are non-positive. For completeness, we remark that the scheme conserves the total mass, specifically,
\begin{equation} \label{S1a}
\intOh{ \varrho^k_h } = \intOh{ \varrho^0_h }, \ k=1,2, \dots
\end{equation}
\subsection{Dissipative terms and the pressure growth}
It is easy to check that
\begin{equation} \label{S1b}
P''(z) (\varrho_1 - \varrho_2)^2 \geq a\gamma (\varrho^{\gamma/2}_1 - \varrho^{\gamma/2}_2 )^2 \ \mbox{whenever}\
z \in {\rm co} \{ \varrho_1, \varrho_2 \}, \ \varrho_1, \varrho_2 > 0, \ 1 < \gamma < 2.
\end{equation}
Indeed, it is enough to assume $0 < \varrho_1 \leq z \leq \varrho_2$; whence
\[
P''(z) (\varrho_1 - \varrho_2)^2 \geq a \gamma \varrho_2^{\gamma - 2} (\varrho_1 - \varrho_2)^2,
\]
and (\ref{S1b}) reduces to showing
\[
\varrho_2^{\gamma/2 - 1} (\varrho_2 - \varrho_1) \geq (\varrho^{\gamma/2}_2 - \varrho^{\gamma/2}_1 )
\ \mbox{or, equivalently,}\ \varrho_1 \varrho_2^{\gamma/2 - 1} \leq \varrho_1^{\gamma/2},
\]
where the last inequality follows immediately as $\varrho_1 \leq \varrho_2$, $1 < \gamma < 2$.
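The inequality (\ref{S1b}) just proved also admits a direct numerical spot-check; the sketch below (random sampling over ranges of our choosing) is purely illustrative:

```python
import random

def check_S1b(a=1.0, trials=2000):
    # verify P''(z) (rho1 - rho2)^2 >= a*gamma*(rho1^(g/2) - rho2^(g/2))^2
    # for z between rho1 and rho2, where P''(z) = a*gamma*z^(gamma-2)
    random.seed(0)
    for _ in range(trials):
        gamma = random.uniform(1.001, 1.999)
        r1 = random.uniform(1e-3, 10.0)
        r2 = random.uniform(1e-3, 10.0)
        z = random.uniform(min(r1, r2), max(r1, r2))
        lhs = a * gamma * z**(gamma - 2.0) * (r1 - r2)**2
        rhs = a * gamma * (r1**(gamma / 2) - r2**(gamma / 2))**2
        assert lhs >= rhs * (1.0 - 1e-12) - 1e-12
    return True

assert check_S1b()
```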
Consequently, the terms on the right-hand side of (\ref{S1}) representing the numerical dissipation and containing $P''$ satisfy
\begin{equation} \label{S2}
\begin{split}
\frac{1}{2} \intOh{ P''(s^k_h) \frac{ \left( \varrho^k_h - \varrho^{k-1}_h \right)^2}{\Delta t} }
&\geq \frac{a\gamma}{2} \intOh{ \frac{ \left( (\varrho^k_h)^{\gamma/2} - (\varrho^{k-1}_h)^{\gamma/2} \right)^2}{\Delta t} },\\
\frac{h^\alpha}{2} \sum_{\Gamma \in \Gamma_{h, {\rm int}}} \intG{
\ju{\varrho^k_h} \ju{ P' (\varrho^k_h )}\chi \left( \frac{ \avg{\vc{u}^k_h \cdot \vc{n} }}{h^\alpha} \right) } &\geq
\frac{a\gamma h^\alpha}{2} \sum_{\Gamma \in \Gamma_{h, {\rm int}}} \intG{
\ju{ (\varrho^k_h)^{\gamma / 2} }^2 \chi \left( \frac{ \avg{\vc{u}^k_h \cdot \vc{n} }}{h^\alpha} \right) },\\
\frac{1}{2} \sum_{\Gamma \in \Gamma_h} \intG{ P''(z^k_h) \ju{ \varrho^k_h}^2 | \avg{ \vc{u}^k_h \cdot \vc{n} } | } &\geq
\frac{a\gamma}{2} \sum_{\Gamma \in \Gamma_h} \intG{ \ju{ (\varrho^k_h )^{\gamma/2} }^2 | \avg{ \vc{u}^k_h \cdot \vc{n} } | }.
\end{split}
\end{equation}
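The second bound in (\ref{S2}) combines the mean value theorem with (\ref{S1b}), yielding the jump inequality $\ju{\varrho} \ju{P'(\varrho)} \geq a \gamma \ju{\varrho^{\gamma/2}}^2$. A random spot-check of this consequence (an illustrative sketch only, with sampling of our choosing):

```python
import random

def P_prime(r, a, g):
    # P'(rho) = a*g/(g-1) * rho^(g-1) for the potential P(rho) = a rho^g/(g-1)
    return a * g / (g - 1.0) * r**(g - 1.0)

random.seed(4)
a = 1.0
for _ in range(2000):
    g = random.uniform(1.01, 1.99)
    r1 = random.uniform(1e-2, 10.0)
    r2 = random.uniform(1e-2, 10.0)
    # jump convention [[f]] = f(r1) - f(r2); the product is convention-independent
    lhs = (r1 - r2) * (P_prime(r1, a, g) - P_prime(r2, a, g))
    rhs = a * g * (r1**(g / 2) - r2**(g / 2))**2
    assert lhs >= rhs * (1.0 - 1e-12) - 1e-12
```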
In particular, the energy balance (\ref{S1}) gives rise to
\begin{equation} \label{S3}
\begin{split}
&\intOh{ D_t \left[ \frac{1}{2} \varrho^k_h |\avo{\vc{u}^k_h}|^2 + P(\varrho^k_h) \right] }
+ \intOh{ \left[\mu |\nabla_h \vc{u}^k_h |^2 + (\mu/3 + \eta)
|{\rm div}_h \vc{u}^k_h |^2\right] }
\\
& + a\intOh{ \frac{ \left( (\varrho^k_h)^{\gamma/2} - (\varrho^{k-1}_h)^{\gamma/2} \right)^2}{\Delta t} } + \Delta t \intOh{ \varrho^{k-1}_h
\left| \frac{ \avo{\vc{u}^k_h} - \avo{\vc{u}^{k-1}_h} } {\Delta t} \right|^2 }
\\
&+ a\sum_{\Gamma \in \Gamma_h} \intG{ \ju{ (\varrho^k_h)^{\gamma/2} }^2 \max\left\{ h^\alpha ; | \avg{ \vc{u}^k_h \cdot \vc{n} } | \right\} }
\\
&+ a{h^\alpha}\sum_{\Gamma \in \Gamma_{h, {\rm int}}} \intG{ \av{ \varrho^k_h } \cdot \ju{ \avo{\vc{u}^k_h} }^2
\chi \left( \frac{ \avg{ \vc{u}^k_h \cdot \vc{n} } }{h^\alpha} \right) }
\\
&+ \sum_{\Gamma \in \Gamma_{h, {\rm int}} } \intG{ \left( (\varrho^k_h)^{\rm in} [ \avg{ \vc{u}^k_h \cdot \vc{n} } ]^+ -
(\varrho^k_h)^{\rm out} [ \avg{ \vc{u}^k_h \cdot \vc{n}} ]^- \right) \ju{ \avo{\vc{u}^k_h} }^2 } \stackrel{<}{\sim} 0.
\end{split}
\end{equation}
\section{Consistency}
\label{C}
Our goal is to derive a consistency formulation for the discrete solutions satisfying (\ref{N3}), (\ref{N4}). To this end, it is convenient
to deal with quantities defined on $R \times \Omega_h$. Accordingly, we introduce
\begin{equation} \label{C1}
\varrho_h (t, \cdot) = \varrho^0_h \ \mbox{for} \ t < \Delta t, \ \varrho_h(t, \cdot) = \varrho^k_h \ \mbox{for}\ t \in [k \Delta t, (k+1) \Delta t),\ k = 1, 2, \dots,
\end{equation}
\begin{equation} \label{C2}
\vc{u}_h (t, \cdot) = \vc{u}^0_h \ \mbox{for} \ t < \Delta t, \ \vc{u}_h(t, \cdot) = \vc{u}^k_h \ \mbox{for}\ t \in [k \Delta t, (k+1) \Delta t), \ k = 1, 2, \dots,
\end{equation}
and
\begin{equation} \label{C3}
D_t v_h = \frac{ v_h(t, \cdot) - v_h(t - \Delta t, \cdot) }{\Delta t}, \ t > 0.
\end{equation}
For the sake of simplicity, we keep the time step $\Delta t$ constant; a similar ansatz, however, works also for $\Delta t = \Delta t_k$ adjusted
at each iteration level.
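The piecewise constant in time interpolants and the associated discrete time derivative can be sketched in a few lines; variable names and the list-based storage below are ours, purely for illustration:

```python
def interpolant(values, dt):
    # values[k] = v^k_h; returns v_h with v_h(t) = v^0_h for t < dt
    # and v_h(t) = v^k_h for t in [k*dt, (k+1)*dt)
    def v(t):
        k = max(int(t // dt), 0)
        return values[min(k, len(values) - 1)]
    return v

def discrete_time_derivative(values, dt):
    # D_t v_h(t) = (v_h(t) - v_h(t - dt)) / dt for t > 0
    v = interpolant(values, dt)
    return lambda t: (v(t) - v(t - dt)) / dt

# on [k*dt, (k+1)*dt) the derivative equals (v^k - v^{k-1}) / dt
vals, dt = [1.0, 3.0, 2.0], 0.5
Dt = discrete_time_derivative(vals, dt)
assert abs(Dt(0.75) - (3.0 - 1.0) / 0.5) < 1e-12
assert abs(Dt(1.25) - (2.0 - 3.0) / 0.5) < 1e-12
```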
A suitable consistency formulation of equation (\ref{N3}) reads
\begin{equation} \label{C10}
- \intOh{ \varrho^0_h \varphi (0, \cdot) } = \int_0^T \intOh{ \left[ \varrho_h \partial_t \varphi + \varrho_h \vc{u}_h \cdot \nabla_x \varphi \right] } \,{\rm d} t
+ \mathcal{O}(h^\beta), \ \beta > 0,
\end{equation}
for any test function $\varphi \in C^\infty_c([0, \infty) \times \Ov{\Omega}_h )$, where $\beta$ denotes a generic positive exponent and,
accordingly, the remainder term $\mathcal{O}(h^\beta)$, which may also depend on the test function
$\varphi$, tends to zero as $h \to 0$. Similarly, we want to rewrite (\ref{N4}) in the form
\begin{equation} \label{C16}
\begin{split}
- \intOh{ \varrho^0_h \avo{ \vc{u}^0_h } \cdot \vcg{\varphi}(0, \cdot) } &= \int_0^T \intOh{
\Big[ \varrho_h \avo{\vc{u}_h} \cdot \partial_t \vcg{\varphi} + \varrho_h \avo{\vc{u}_h} \otimes \vc{u}_h : \nabla_x \vcg{\varphi} + p(\varrho_h ) {\rm div}_x \vcg{\varphi} \Big] } \ \,{\rm d} t \\
&- \int_0^T \intOh{ \Big[ \mu \nabla_h \vc{u}_h : \nabla_x \vcg{\varphi} + (\mu/3 + \eta) {\rm div}_h \vc{u}_h \, {\rm div}_x \vcg{\varphi} \Big] } \,{\rm d} t
+ \mathcal{O}(h^\beta)
\end{split}
\end{equation}
for any $\vcg{\varphi} \in C^\infty_c([0, \infty) \times \Omega_h; R^3)$.
\subsection{Preliminaries, some useful estimates}
We collect certain well-known estimates used in the subsequent analysis. We refer to \cite[Part II, Chapters 8,9]{FeiKaPok} for the proofs.
\subsubsection{Discrete negative and trace estimates for piecewise smooth functions}
The following inverse inequality
\begin{equation} \label{S4}
\| v \|_{L^p(\Omega_h) } \stackrel{<}{\sim} h^{ 3 \left( \frac{1}{p} - \frac{1}{q} \right) } \| v \|_{L^q(\Omega_h)}, \ 1 \leq q \leq p \leq \infty,
\end{equation}
holds for any $v \in Q_h(\Omega_h)$.
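For piecewise constant functions, the inverse estimate (\ref{S4}) in fact holds with constant one, since $\|v\|^p_{L^p} = h^3 \sum_E |v_E|^p$ and the $\ell^p$ norm is dominated by the $\ell^q$ norm for $p \geq q$. The following sketch verifies this numerically on a uniform mesh of $n^3$ cells of equal volume $h^3$ (the mesh construction is our simplification, purely illustrative):

```python
import random

def lp_norm(cell_values, h, p):
    # L^p norm of a piecewise constant function on cells of volume h^3
    if p == float("inf"):
        return max(abs(v) for v in cell_values)
    return sum(h**3 * abs(v)**p for v in cell_values) ** (1.0 / p)

random.seed(2)
n = 8                                  # n^3 cells of size h = 1/n
h = 1.0 / n
vals = [random.uniform(-1.0, 1.0) for _ in range(n**3)]
for q, p in [(1.0, 2.0), (2.0, 6.0), (1.0, float("inf"))]:
    inv_p = 0.0 if p == float("inf") else 1.0 / p
    scale = h ** (3.0 * (inv_p - 1.0 / q))   # h^{3(1/p - 1/q)}
    assert lp_norm(vals, h, p) <= scale * lp_norm(vals, h, q) + 1e-12
```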
The trace estimates read
\begin{equation} \label{S4a}
\| v \|_{L^p(\Gamma)} \stackrel{<}{\sim} h^{-1/p} \| v \|_{L^p(E)} \ \mbox{whenever}\ \Gamma \subset \partial E, \ 1 \leq p \leq \infty
\end{equation}
for any $v \in Q_h(\Omega_h)$.
Finally, we report a discrete version of Poincar\' e's inequality
\begin{equation} \label{S4b}
\| v - \avo{v} \|_{L^2(E)} \equiv \| v - \Pi^Q_h [v] \|_{L^2(E)} \stackrel{<}{\sim} h \| \nabla_h v \|_{L^2(E)}
\ \mbox{for any}\ v \in V_h(\Omega_h).
\end{equation}
\subsubsection{Sobolev estimates for broken norms}
We have
\begin{equation} \label{S5}
\| v \|_{L^6(\Omega_h)}^2 \stackrel{<}{\sim} \sum_{\Gamma \in \Gamma_{h, {\rm int}}} \intG{ \frac{\ju{ v }^2}{h} } + \| v \|^2_{L^2(\Omega_h)}
\end{equation}
for any $v \in Q_h(\Omega_h)$.
In particular, we may combine the negative estimates (\ref{S4}) with (\ref{S5}) to obtain
\begin{equation} \label{S6}
\begin{split}
\| \varrho_h \|_{L^\infty(\Omega_h)} &= \left( \left\| \varrho_h^{\gamma/2} \right\|_{L^\infty(\Omega_h)} \right)^{2 / \gamma} \stackrel{<}{\sim}
h^{-1/\gamma} \left( \left\| \varrho_h^{\gamma/2} \right\|_{L^6(\Omega_h)}^2 \right)^{1 / \gamma}\\
&\stackrel{<}{\sim} h^{-1/\gamma} \left( \sum_{\Gamma \in \Gamma_{h, {\rm int}}} \intG{ \frac{\ju{ \varrho_h^{\gamma/2} }^2}{h} }\right)^{1 / \gamma}
+ h^{-1/\gamma} \left( \left\| \varrho_h^{\gamma/2} \right\|_{L^2(\Omega_h)}^2 \right)^{1 / \gamma}\\
&\stackrel{<}{\sim} h^{- \frac{2 + \alpha}{\gamma} } \left( \sum_{\Gamma \in \Gamma_{h, {\rm int}}} \intG{ h^\alpha {\ju{ \varrho_h^{\gamma/2} }^2} }\right)^{1 / \gamma}
+ h^{-1/\gamma} \left\| \varrho_h \right\|_{L^\gamma (\Omega_h)}.
\end{split}
\end{equation}
Next, we have the discrete variant of Sobolev's inequality
\begin{equation} \label{S6b}
\| v \|^2_{L^6(\Omega_h)} \stackrel{<}{\sim} \sum_{E \in E_h} \| \nabla_h v \|^2_{L^2(E; R^3)} \equiv \| \nabla_h v \|_{L^2(\Omega_h; R^3)}^2
\end{equation}
for any $v \in V_{0,h}(\Omega_h)$.
Finally, we recall the projection estimates for the Crouzeix--Raviart spaces
\begin{equation} \label{S7}
\left\| \Pi^V_h [v] - v \right\|_{L^q(\Omega_h)} + h \left\| \nabla_h \Pi^V_h [v] - \nabla_x v \right\|_{L^q(\Omega_h;R^3)}
\stackrel{<}{\sim} h^j \| \nabla^j v \|_{L^q(\Omega_h; R^{3j})},\ j=1,2, \ 1 \leq q \leq \infty.
\end{equation}
\subsubsection{Upwind consistency formula}
We report the universal formula
\begin{equation} \label{C4}
\begin{split}
\intOh{ r \vc{u} \cdot \nabla_x \phi } &= \sum_{\Gamma \in \Gamma_{h, {\rm int}}} \intG{ {\rm Up}[r, \vc{u}] \ju{F} }
\\
&+ \frac{h^\alpha}{2} \sum_{\Gamma \in \Gamma_{h, {\rm int}}} \intG{ \ju{r} \ju{F} \chi \left( \frac{ \avg{\vc{u} \cdot \vc{n} }}{h^\alpha} \right) }
\\
&+ \sum_{E \in E_h} \sum_{\Gamma_E \subset \partial E} \int_{\Gamma_E} (F - \phi) \ju{r} [ \avg{ \vc{u} \cdot \vc{n} }]^- \ {\rm dS}_x
\\
&+ \sum_{E \in E_h} \sum_{\Gamma_E \subset \partial E} \int_{\Gamma_E} \phi r \Big(\vc{u} \cdot \vc{n} - \avg{ \vc{u} \cdot \vc{n} } \Big) \ {\rm dS}_x
+ \intOh{ r (F - \phi) {\rm div}_h \vc{u} }
\end{split}
\end{equation}
for any
$r, F \in Q_h(\Omega_h)$, $\vc{u} \in V_{0,h}(\Omega_h; R^3)$, $\phi \in C^1(\Omega_h)$, see \cite[Chapter 9, Lemma 7]{FeiKaPok}.
\subsection{Consistency formulation of the continuity method}
Our goal is to derive the consistency formulation (\ref{C10}) of the discrete equation of continuity (\ref{N3}).
\subsubsection{Time derivative}
\label{TD1}
We consider test functions of the form $\psi(t) \phi(x)$ to obtain
\[
\begin{split}
\int_0^T \intOh{ D_t (\varrho_h) \avo{\psi \phi} } \ \,{\rm d} t &= \int_0^T \psi \intOh{ D_t (\varrho_h) \phi } \ \,{\rm d} t \\
&= - \int_0^T \intOh{ \frac{ \psi(t + \Delta t) - \psi(t) }{\Delta t} \varrho_h \phi } \ \,{\rm d} t - \frac{1}{\Delta t} \int_{-\Delta t}^0 \intOh{ \varrho^0_h \psi (t + \Delta t) \phi }\ \,{\rm d} t
\end{split}
\]
whenever $\psi \in C^\infty_c[0, T)$ and $\Delta t$ is small enough so that the support of $\psi$ does not intersect the interval $[T - \Delta t, \infty)$.
By means of the mean-value theorem we get that
\begin{equation} \label{C5}
\int_0^T \intOh{ D_t (\varrho_h) \avo{\psi \phi} } \ \,{\rm d} t
= - \int_0^T \intOh{ \partial_t \psi \varrho_h \phi } \ \,{\rm d} t - \intOh{ \varrho^0_h \psi (0) \phi } + \mathcal{O}(h^\beta)
\end{equation}
for any $\phi \in C(\Omega_h)$, $\psi \in C^\infty_c[0, T)$. Note that the $\mathcal{O}(h^\beta)$ term depends on the second derivative of $\psi$.
\subsubsection{Convective term - upwind}
Relation (\ref{C4}) evaluated for $r = \varrho^k_h$, $\vc{u} = \vc{u}^k_h$, $F = \avo{\phi}$, $\phi \in C^1(\Omega_h)$ gives rise to
\begin{equation} \label{C6}
\begin{split}
\intOh{ \varrho^k_h \vc{u}^k_h \cdot \nabla_x \phi } &= \sum_{\Gamma \in \Gamma_{h, {\rm int}}} \intG{ {\rm Up}[\varrho^k_h, \vc{u}^k_h] \ju{\avo{\phi}} }
\\
&+ \frac{h^\alpha}{2} \sum_{\Gamma \in \Gamma_{h, {\rm int}}} \intG{ \ju{\varrho^k_h} \ju{\avo{\phi} } \chi \left( \frac{ \avg{\vc{u}^k_h \cdot \vc{n} }}{h^\alpha} \right) }
\\
&+ \sum_{E \in E_h} \sum_{\Gamma_E \subset \partial E} \int_{\Gamma_E} (\avo{\phi} - \phi) \ju{\varrho^k_h} [ \avg{ \vc{u}^k_h \cdot \vc{n} }]^- \ {\rm dS}_x
\\
&+ \sum_{E \in E_h} \sum_{\Gamma_E \subset \partial E} \int_{\Gamma_E} \phi \varrho^k_h \Big(\vc{u}^k_h \cdot \vc{n} - \avg{ \vc{u}^k_h \cdot \vc{n} } \Big) \ {\rm dS}_x
+ \intOh{ \varrho^k_h (\avo{\phi} - \phi) {\rm div}_h \vc{u}^k_h }.
\end{split}
\end{equation}
Using the elementary inequality
\begin{equation} \label{C6a}
\left| \varrho_1 - \varrho_2 \right| \leq \left| (\varrho_1)^{\gamma/2} - (\varrho_2)^{\gamma/2} \right| \left| (\varrho_1)^{1 - \gamma/2} + (\varrho_2)^{1 - \gamma/2} \right|,\
1 \leq \gamma \leq 2
\end{equation}
we get
\[
\begin{split}
\frac{h^\alpha}{2} &\left| \sum_{\Gamma \in \Gamma_{h, {\rm int}}} \intG{ \ju{\varrho^k_h} \ju{\avo{\phi} } \chi \left( \frac{ \avg{\vc{u}^k_h \cdot \vc{n} }}{h^\alpha} \right) }
\right|
\stackrel{<}{\sim} h^{1 + \alpha} \| \phi \|_{C^1(\Ov{\Omega}_h)} \sum_{\Gamma \in \Gamma_{h, {\rm int}}} \intG{ \left| \ju{\varrho^k_h} \right| } \\
&\stackrel{<}{\sim}
h^{1 + \alpha} \| \phi \|_{C^1(\Ov{\Omega}_h)} \left( \sum_{\Gamma \in \Gamma_{h, {\rm int}}} \intG{ \ju{(\varrho^k_h)^{\gamma/2}}^2 } +
\sum_{\Gamma \in \Gamma_{h, {\rm int}}} \intG{ \av{ (\varrho^k_h)^{1 - \gamma/2} }^2 } \right),
\end{split}
\]
where, by virtue of (\ref{S3}),
\[
h^{1 + \alpha} \| \phi \|_{C^1(\Ov{\Omega}_h)} \sum_{\Gamma \in \Gamma_{h, {\rm int}}} \intG{ \ju{(\varrho^k_h)^{\gamma/2}}^2 } \leq
c(\phi) h g_k,\ \Delta t \sum_{k} g_k < \infty,
\]
and, in accordance with (\ref{S1a}) and the trace estimates (\ref{S4a}),
\[
h^{1 + \alpha} \| \phi \|_{C^1(\Ov{\Omega}_h)} \sum_{\Gamma \in \Gamma_{h, {\rm int}}} \intG{ \av{ (\varrho^k_h) ^{1 - \gamma/2} }^2 }
\stackrel{<}{\sim} h^\alpha c(\phi) \sum_{E \in E_h } \intE{ (\varrho^k_h) ^{2 - \gamma} } \stackrel{<}{\sim} h^\alpha.
\]
We may infer that
\begin{equation} \label{C7}
\frac{h^\alpha}{2} \left\| \sum_{\Gamma \in \Gamma_{h, {\rm int}}} \intG{ \ju{\varrho_h} \ju{\avo{\phi} } \chi \left( \frac{ \avg{\vc{u}_h \cdot \vc{n} }}{h^\alpha} \right) }
\right\|_{L^1(0,T)} = \mathcal{O}(h^\beta), \beta > 0 \ \mbox{whenever}\ \alpha > 0.
\end{equation}
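The elementary inequality (\ref{C6a}) invoked above admits a direct numerical spot-check; the random sampling below is of our choosing and purely illustrative:

```python
import random

random.seed(3)
for _ in range(5000):
    gamma = random.uniform(1.0, 2.0)
    r1 = random.uniform(0.0, 10.0)
    r2 = random.uniform(0.0, 10.0)
    # |r1 - r2| <= |r1^(g/2) - r2^(g/2)| * (r1^(1-g/2) + r2^(1-g/2))
    lhs = abs(r1 - r2)
    rhs = abs(r1**(gamma / 2) - r2**(gamma / 2)) * (r1**(1.0 - gamma / 2) + r2**(1.0 - gamma / 2))
    assert lhs <= rhs + 1e-9
```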
Next, using (\ref{S3}) again, we deduce
\[
\begin{split}
&\left| \sum_{E \in E_h} \sum_{\Gamma_E \subset \partial E} \int_{\Gamma_E} (\avo{\phi} - \phi) \ju{\varrho^k_h} [ \avg{ \vc{u}^k_h \cdot \vc{n} }]^- \ {\rm dS}_x
\right| \\ &\stackrel{<}{\sim} h \| \phi \|_{C^1(\Ov{\Omega}_h)} \sum_{E \in E_h}\sum_{\Gamma_E \subset \partial E} \int_{\Gamma_E} | \ju{ (\varrho^k_h)^{\gamma/2} } |
\ | \av{ (\varrho^k_h)^{1 - \gamma/2} } | \ |\avg{ \vc{u}^k_h \cdot \vc{n} } | \ {\rm dS}_x\\
&\stackrel{<}{\sim} h \left( \sum_{E \in E_h}\sum_{\Gamma_E \subset \partial E} \int_{\Gamma_E} \ju{ (\varrho^k_h)^{\gamma/2} }^2 |\avg{ \vc{u}^k_h \cdot \vc{n} } | \ {\rm dS}_x
\right)^{1/2}\left( \sum_{E \in E_h}\sum_{\Gamma_E \subset \partial E} \int_{\Gamma_E} (\varrho^k_h)^{2 - \gamma} |\vc{u}^k_h| \ {\rm dS}_x
\right)^{1/2}\\
&\stackrel{<}{\sim} h^{1/2} \left( \sum_{E \in E_h}\sum_{\Gamma_E \subset \partial E} \int_{\Gamma_E} \ju{ (\varrho^k_h)^{\gamma/2} }^2 |\avg{ \vc{u}^k_h \cdot \vc{n} }
| \ {\rm dS}_x
\right)^{1/2}\left( \sum_{E \in E_h} \int_{E} (\varrho^k_h)^{2 - \gamma} |\avo{\vc{u}^k_h}| \ \,{\rm d} {x}
\right)^{1/2}
\end{split}
\]
whence, using (\ref{S6}) to control the last term, we conclude
\begin{equation} \label{C8}
\left\| \sum_{E \in E_h} \sum_{\Gamma_E \subset \partial E} \int_{\Gamma_E} (\avo{\phi} - \phi) \ju{\varrho_h} [ \avg{ \vc{u}_h \cdot \vc{n} }]^- \ {\rm dS}_x
\right\|_{L^2(0,T)} = \mathcal{O}(h^\beta).
\end{equation}
Furthermore,
\[
\begin{split}
\sum_{E \in E_h} \sum_{\Gamma_E \subset \partial E} \int_{\Gamma_E} \phi \varrho^k_h \Big(\vc{u}^k_h \cdot \vc{n} - \avg{ \vc{u}^k_h \cdot \vc{n} } \Big) \ {\rm dS}_x
= \sum_{E \in E_h} \sum_{\Gamma_E \subset \partial E} \int_{\Gamma_E} \left( \phi - \avg{\phi}
\right) \varrho^k_h \Big(\vc{u}^k_h \cdot \vc{n} - \avg{ \vc{u}^k_h \cdot \vc{n} } \Big) \ {\rm dS}_x,
\end{split}
\]
where, by virtue of Poincar\' e's inequality and the trace estimates (\ref{S4a}),
\[
\begin{split}
&\left|
\sum_{E \in E_h} \sum_{\Gamma_E \subset \partial E} \int_{\Gamma_E} \left( \phi - \avg{\phi} \right)
\varrho^k_h \Big(\vc{u}^k_h \cdot \vc{n} - \avg{ \vc{u}^k_h \cdot \vc{n} } \Big) \ {\rm dS}_x \right| \\
&\stackrel{<}{\sim} h \| \nabla_x \phi \|_{L^\infty(\Omega_h)}
\sum_{E \in E_h} \sum_{\Gamma_E \subset \partial E} \int_{\Gamma_E} \varrho^k_h \left| \vc{u}^k_h \cdot \vc{n} - \avg{ \vc{u}^k_h \cdot \vc{n} } \right| \ {\rm dS}_x
\stackrel{<}{\sim} \sum_{E \in E_h} \int_{E} \varrho^k_h \left| \vc{u}^k_h \cdot \vc{n} - \avg{ \vc{u}^k_h \cdot \vc{n} } \right| \ \,{\rm d} {x}\\
&\stackrel{<}{\sim} h \sum_{E \in E_h} \| \nabla_h \vc{u}^k_h \|_{L^2(E)} \| \varrho^k_h \|_{L^2(E)} \stackrel{<}{\sim} h \| \nabla_h \vc{u}^k_h \|_{L^2(\Omega_h)} \| \varrho^k_h \|_{L^2(\Omega_h)}
\stackrel{<}{\sim} h \| \nabla_h \vc{u}^k_h \|_{L^2(\Omega_h)} \| \varrho^k_h \|_{L^\infty(\Omega_h)}^{1/2}.
\end{split}
\]
Going back to (\ref{S6}), we observe that the right-hand side is controlled as soon as
\begin{equation} \label{C9}
1 - \frac{2 + \alpha}{2 \gamma} > 0, \ \mbox{meaning that}\ \alpha < 2 (\gamma - 1).
\end{equation}
Finally, it is easy to check that the last integral in (\ref{C6}) can be handled in the same way.
Thus we conclude that the consistency formulation (\ref{C10}) holds
for any test function $\varphi \in C^\infty_c([0, \infty) \times \Ov{\Omega}_h )$ as long as $\alpha > 0$, $\gamma >1$ are interrelated through (\ref{C9}).
\subsection{Consistency formulation of the momentum method}
Our goal is to take $\Pi^V_h [\vcg{\phi}]$, $\vcg{\phi} \in C^\infty_c(\Omega_h; R^3)$ as a test function in the momentum scheme (\ref{N4}). To begin, observe that
\[
\begin{split}
\intOh{ \nabla_h \vc{u}_h : \nabla_h \Pi^V_h [\vcg{\phi}] } = \intOh{ \nabla_h \vc{u}_h : \nabla_x \vcg{\phi} },& \
\intOh{ {\rm div}_h \vc{u}_h {\rm div}_h \Pi^V_h [\vcg{\phi}] } = \intOh{ {\rm div}_h \vc{u}_h {\rm div}_x \vcg{\phi} },\\
\intOh{ p(\varrho_h) {\rm div}_h \Pi^V_h [\vcg{\phi}] } &= \intOh{ p(\varrho_h) {\rm div}_x \vcg{\phi} },
\end{split}
\]
see \cite[Chapter 9, Lemma 8]{FeiKaPok}.
\subsubsection{Time derivative}
\label{Tder}
We compute
\begin{equation} \label{C11}
\begin{split}
\intOh{ D_t (\varrho^k_h \avo{\vc{u}^k_h} ) \cdot \vcg{\phi} } &= \intOh{ D_t (\varrho^k_h \avo{\vc{u}^k_h}) \cdot \Pi^V_h [\vcg{\phi} ] }
\\
&+
\intOh{ \varrho^{k-1}_h \frac{ \avo{\vc{u}^k_h} - \avo{\vc{u}^{k-1}_h} }{\Delta t} \cdot \left( \vcg{\phi} - \Pi^V_h [\vcg{\phi}] \right) }
\\
&+ \intOh{ \frac{ \varrho^k_h - \varrho^{k-1}_h }{\Delta t} \avo{\vc{u}^{k}_h} \cdot \left( \vcg{\phi} - \Pi^V_h [\vcg{\phi}] \right) },
\end{split}
\end{equation}
where
\[
\begin{split}
\left| \intOh{ \varrho^{k-1}_h \frac{ \avo{\vc{u}^k_h} - \avo{\vc{u}^{k-1}_h} }{\Delta t} \cdot \left( \vcg{\phi} - \Pi^V_h [\vcg{\phi}] \right) } \right|
&\stackrel{<}{\sim} h^2 \| \vcg{\phi} \|_{C^2(\Ov{\Omega}_h; R^3)} \intOh{ \varrho^{k-1}_h \left| \frac{ \avo{\vc{u}^k_h} - \avo{\vc{u}^{k-1}_h} }{\Delta t} \right| }\\
\stackrel{<}{\sim} h^2 \left( \intOh{ \varrho^{k-1}_h } \right)^{1/2} & \left( \intOh{ \varrho^{k-1}_h \left| \frac{ \avo{\vc{u}^k_h} - \avo{\vc{u}^{k-1}_h} }{\Delta t} \right|^2 }
\right)^{1/2}\\ \stackrel{<}{\sim} h^2 (\Delta t)^{-1/2} &\left( \Delta t \intOh{ \varrho^{k-1}_h \left| \frac{ \avo{\vc{u}^k_h} - \avo{\vc{u}^{k-1}_h} }{\Delta t} \right|^2 }
\right)^{1/2},
\end{split}
\]
where the rightmost integral is controlled in $L^2(0,T)$ by the numerical dissipation in (\ref{S3}).
As for the remaining integral, we may use inequality (\ref{C6a}) to obtain
\[
\begin{split}
&\left| \intOh{ \frac{ \varrho^k_h - \varrho^{k-1}_h }{\Delta t} \avo{\vc{u}^{k}_h} \cdot \left( \vcg{\phi} - \Pi^V_h [\vcg{\phi}] \right) } \right|
\stackrel{<}{\sim} h^2 \intOh{ \frac{ |\varrho^k_h - \varrho^{k-1}_h| }{\Delta t} | \avo{\vc{u}^{k}_h} | }\\
&\stackrel{<}{\sim} h^2 (\Delta t)^{-1} \| \vc{u}^k_h \|_{L^6(\Omega_h; R^3)} \sup_{k} \| \varrho^k_h \|_{L^{6/5}(\Omega_h)}
\stackrel{<}{\sim} h^2 (\Delta t)^{-1} h^{-1/2} \| \vc{u}^k_h \|_{L^6(\Omega_h; R^3)} \sup_{k} \| \varrho^k_h \|_{L^{1}(\Omega_h)}.
\end{split}
\]
Finally, we may repeat the same argument as in Section \ref{TD1} to conclude that
\begin{equation} \label{C13}
\begin{split}
\int_0^T &\intOh{ \psi D_t (\varrho_h \avo{\vc{u}_h} ) \cdot \Pi^V_h [\vcg{\phi} ] }\ \,{\rm d} t \\ &= - \int_0^T \intOh{ \varrho_h \avo{\vc{u}_h} \cdot \vcg{\phi} \partial_t \psi }
\ \,{\rm d} t
- \intOh{ \psi(0) \varrho^0_h \avo{ \vc{u}^0_h } \cdot \vcg{\phi} } + \mathcal{O}(h^\beta)
\end{split}
\end{equation}
provided $\psi \in C^\infty_c[0,T)$, $\vcg{\phi} \in C^\infty_c(\Omega_h; R^3)$.
\subsubsection{Convective term - upwind}
Applying formula (\ref{C4}) we obtain
\begin{equation} \label{C14}
\begin{split}
\intOh{ \varrho^k_h \left( \avo{\vc{u}^k_h} \otimes \vc{u}^k_h \right) : \nabla_x \vcg{\phi} }
&- \sum_{\Gamma \in \Gamma_{h, {\rm int}}} \intG{ {\rm Up}[\varrho^k_h \avo{\vc{u}^k_h}, \vc{u}^k_h] \cdot \ju{\avo{ \Pi^V_h[\vcg{\phi}] }} }
\\
&= \frac{h^\alpha}{2} \sum_{\Gamma \in \Gamma_{h, {\rm int}}} \intG{ \ju{\varrho^k_h \avo{ \vc{u}^k_h } } \cdot \ju{ \avo{ \Pi^V_h[\vcg{\phi}] }}
\chi \left( \frac{ \avg{\vc{u}^k_h \cdot \vc{n} }}{h^\alpha} \right) }
\\
&+ \sum_{E \in E_h} \sum_{\Gamma_E \subset \partial E} \int_{\Gamma_E} \left( \avo{ \Pi^V_h[\vcg{\phi}] } - \vcg{\phi} \right) \cdot
\ju{\varrho^k_h \avo{ \vc{u}^k_h }} [ \avg{ \vc{u}^k_h \cdot \vc{n} }]^- \ {\rm dS}_x
\\
&+ \sum_{E \in E_h} \sum_{\Gamma_E \subset \partial E} \int_{\Gamma_E} \varrho^k_h \vcg{\phi} \cdot \avo{ \vc{u}^k_h } \Big(\vc{u}^k_h \cdot \vc{n} - \avg{ \vc{u}^k_h \cdot \vc{n} } \Big) \ {\rm dS}_x
\\
&+ \intOh{ \varrho^k_h \avo{ \vc{u}^k_h } \cdot \left( \avo{ \Pi^V_h[\vcg{\phi}] } - \vcg{\phi} \right) {\rm div}_h \vc{u}^k_h }.
\end{split}
\end{equation}
We proceed in several steps.
\medskip
{\bf Step 1}
\medskip
Applying (\ref{S7}) we get
\[
\begin{split}
&\left| \frac{h^\alpha}{2} \sum_{\Gamma \in \Gamma_{h, {\rm int}}} \intG{ \ju{\varrho^k_h \avo{ \vc{u}^k_h } } \cdot \ju{ \avo{ \Pi^V_h[\vcg{\phi}] }}
\chi \left( \frac{ \avg{\vc{u}^k_h \cdot \vc{n} }}{h^\alpha} \right) } \right|\\
&\stackrel{<}{\sim} h^{1 + \alpha} \sum_{\Gamma \in \Gamma_{h, {\rm int}}}
\intG{ \left|\ \ju{\varrho^k_h \avo{ \vc{u}^k_h } } \right|\ \chi \left( \frac{ \avg{\vc{u}^k_h \cdot \vc{n} }}{h^\alpha} \right)} ,
\end{split}
\]
where
\begin{equation} \label{C15}
\ju{\varrho^k_h \avo{ \vc{u}^k_h } } = (\varrho^k_h)^{\rm out} \ju {\avo{ \vc{u}^k_h }} + \avo{\vc{u}^k_h} \ju{ \varrho^k_h }.
\end{equation}
Consequently
\[
\begin{split}
&\left| \frac{h^\alpha}{2} \sum_{\Gamma \in \Gamma_{h, {\rm int}}} \intG{ \ju{\varrho^k_h \avo{ \vc{u}^k_h } } \cdot \ju{ \avo{ \Pi^V_h[\vcg{\phi}] }}
\chi \left( \frac{ \avg{\vc{u}^k_h \cdot \vc{n} }}{h^\alpha} \right) } \right| \\
&\stackrel{<}{\sim} h^{1 + \alpha} \left( \sum_{\Gamma \in \Gamma_{h, {\rm int}}}
\intG{ \av{\varrho^k_h} \ju{ \avo{ \vc{u}^k_h } }^2 \chi \left( \frac{ \avg{\vc{u}^k_h \cdot \vc{n} }}{h^\alpha} \right)} \right)^{1/2} \left( \sum_{\Gamma \in \Gamma_{h, {\rm int}}}
\intG{ \varrho^k_h } \right)^{1/2} \\
&+ h^{1 + \alpha} \sum_{\Gamma \in \Gamma_{h, {\rm int}}}
\intG{ \left| \avo{\vc{u}^k_h} \ju{ \varrho^k_h } \right| } ,
\end{split}
\]
where the first integral on the right-hand side is controlled by the numerical dissipation in (\ref{S3}) and the trace estimates.
Finally, applying the inequality (\ref{S4}), trace inequality (\ref{S4a}) and Sobolev's inequality (\ref{S6b}), we obtain
\[
\begin{split}
h^{1 + \alpha} & \sum_{\Gamma \in \Gamma_{h, {\rm int}}}
\intG{ \left|\ \avo{\vc{u}^k_h} \ju{ \varrho^k_h } \ \right| } \stackrel{<}{\sim} h^{1 + \alpha}\sum_{\Gamma \in \Gamma_{h, {\rm int}}}
\intG{ |\avo{\vc{u}^k_h}|\ \left| \av{ \varrho^k_h }^{1 - \gamma/2} \right| \ \left| \ju{ (\varrho^k_h)^{\gamma/2} } \right| } \\
&\stackrel{<}{\sim} h^{1 + \alpha} \sum_{\Gamma \in \Gamma_{h, {\rm int}}} \left( \intG{ \ju{ (\varrho^k_h)^{\gamma/2} }^2 } \right)^{1/2}
\| \avo{ \vc{u}^k_h } \|_{L^6(\Gamma)} \left\| (\varrho^k_h)^{1 - \gamma/2} \right\|_{L^3 (\Gamma)} \\
&\stackrel{<}{\sim} h^{\frac{1 + \alpha}{2}} \sum_{E \in E_h} \left( h^\alpha \int_{\partial E} \ju{ (\varrho^k_h)^{\gamma/2} }^2 \ {\rm dS}_x \right)^{1/2}
\| \avo{ \vc{u}^k_h } \|_{L^6(E)} \left\| (\varrho^k_h)^{1 - \gamma/2} \right\|_{L^3 (E)}\\
&\stackrel{<}{\sim} h^{\frac{1 + \alpha}{2}} \left\| \nabla_h \vc{u}^k_h \right\|_{L^2(\Omega_h)} \left\| (\varrho^k_h)^{1 - \gamma/2} \right\|_{L^3 (\Omega_h)},
\end{split}
\]
where we have used the numerical dissipation in (\ref{S3}). Thus, in order to complete the estimates we have to control
\[
\left\| (\varrho^k_h)^{1 - \gamma/2} \right\|_{L^3 (\Omega_h)}
\]
uniformly in $k$. As $1 < \gamma < 2$, it is enough to consider the critical case $\gamma = 1$, for which the inverse inequality
(\ref{S4}) gives rise to
\[
\left\| (\varrho^k_h)^{1/2} \right\|_{L^3 (\Omega_h)} = \left( \left\| \varrho^k_h \right\|_{L^{3/2} (\Omega_h)} \right)^{1/2}
\stackrel{<}{\sim} h^{-1/2} \| \varrho^k_h \|^{1/2}_{L^1(\Omega_h)}.
\]
\medskip
{\bf Step 2}
\medskip
Using (\ref{C15}) we deduce
\[
\begin{split}
\sum_{E \in E_h} \sum_{\Gamma_E \subset \partial E} \int_{\Gamma_E} & \left( \avo{ \Pi^V_h[\vcg{\phi}] } - \vcg{\phi} \right) \cdot
\ju{\varrho^k_h \avo{ \vc{u}^k_h }} [ \avg{ \vc{u}^k_h \cdot \vc{n} }]^- \ {\rm dS}_x\\
&= \sum_{E \in E_h} \sum_{\Gamma_E \subset \partial E} \int_{\Gamma_E} \left( \avo{ \Pi^V_h[\vcg{\phi}] } - \vcg{\phi} \right) \cdot
\Big( (\varrho^k_h)^{\rm out} \ju {\avo{ \vc{u}^k_h }} + \avo{\vc{u}^k_h} \ju{ \varrho^k_h } \Big) [ \avg{ \vc{u}^k_h \cdot \vc{n} }]^- \ {\rm dS}_x,
\end{split}
\]
where, furthermore,
\[
\begin{split}
&\left| \sum_{E \in E_h} \sum_{\Gamma_E \subset \partial E} \int_{\Gamma_E} \left( \avo{ \Pi^V_h[\vcg{\phi}] } - \vcg{\phi} \right)
(\varrho^k_h)^{\rm out} \ju {\avo{ \vc{u}^k_h } } [ \avg{ \vc{u}^k_h \cdot \vc{n} }]^- \ {\rm dS}_x \right|\\ &\stackrel{<}{\sim}
h^2 \| \vcg{\phi} \|_{C^2(\Ov{\Omega}_h;R^3)} \left( \sum_{E \in E_h} \sum_{\Gamma_E \subset \partial E} \int_{\Gamma_E} -
(\varrho^k_h)^{\rm out} \ju {\avo{ \vc{u}^k_h } }^2 [ \avg{ \vc{u}^k_h \cdot \vc{n} }]^- \ {\rm dS}_x
\right)^{1/2} \times \\
& \times \left( \sum_{E \in E_h} \sum_{\Gamma_E \subset \partial E} \int_{\Gamma_E}
(\varrho^k_h)^{\rm out} |\avo{\vc{u}^k_h}| \ {\rm dS}_x \right)^{1/2},
\end{split}
\]
where the former integral in the product on the right-hand side is controlled by the numerical dissipation in (\ref{S3}), while
\[
\sum_{E \in E_h} \sum_{\Gamma_E \subset \partial E} \int_{\Gamma_E}
(\varrho^k_h)^{\rm out} |\avo{\vc{u}^k_h}| \ {\rm dS}_x \stackrel{<}{\sim} h^{-1} \| \vc{u}^k_h \|_{L^6(\Omega_h; R^3)} \| \varrho^k_h \|_{L^{6/5} (\Omega_h)}
\stackrel{<}{\sim} h^{-3/2} \| \vc{u}^k_h \|_{L^6(\Omega_h; R^3)} \| \varrho^k_h \|_{L^{1} (\Omega_h)}.
\]
Finally,
\[
\begin{split}
&\left| \sum_{E \in E_h} \sum_{\Gamma_E \subset \partial E} \int_{\Gamma_E} \left( \avo{ \Pi^V_h[\vcg{\phi}] } - \vcg{\phi} \right) \cdot
\avo{\vc{u}^k_h} \ju{ \varrho^k_h } [ \avg{ \vc{u}^k_h \cdot \vc{n} }]^- \ {\rm dS}_x \right| \\ &\stackrel{<}{\sim} h^2
\sum_{E \in E_h} \sum_{\Gamma_E \subset \partial E} \| \avo{\vc{u}^{k}_h} \|_{L^6(\Gamma)}^2 \| \varrho^k_h \|_{L^{3/2}(\Gamma)}
\stackrel{<}{\sim} h \| \vc{u}^k_h \|^2_{L^6(\Omega_h)} \| \varrho^k_h \|_{L^{3/2} (\Omega_h)} \stackrel{<}{\sim} h^{3 - 3/\gamma} \| \vc{u}^k_h \|^2_{L^6(\Omega_h)} \| \varrho^k_h \|_{L^{\gamma}
(\Omega_h)},
\end{split}
\]
where the exponent $3 - 3/\gamma > 0$ as soon as $\gamma > 1$.
\medskip
{\bf Step 3}
\medskip
We write
\[
\begin{split}
\sum_{E \in E_h} \sum_{\Gamma_E \subset \partial E} &\int_{\Gamma_E} \varrho^k_h \vcg{\phi} \cdot \avo{ \vc{u}^k_h }
\Big(\vc{u}^k_h \cdot \vc{n} - \avg{ \vc{u}^k_h \cdot \vc{n} } \Big) \ {\rm dS}_x \\ &=
\sum_{E \in E_h} \sum_{\Gamma_E \subset \partial E} \int_{\Gamma_E} \varrho^k_h ( \vcg{\phi} - \avg{ \vcg{\phi} } ) \cdot \avo{ \vc{u}^k_h }
\Big(\vc{u}^k_h \cdot \vc{n} - \avg{ \vc{u}^k_h \cdot \vc{n} } \Big) \ {\rm dS}_x,
\end{split}
\]
where, by virtue of the trace inequality (\ref{S4a}) and Poincar\' e's inequality (\ref{S4b}),
\[
\begin{split}
&\left| \sum_{E \in E_h} \sum_{\Gamma_E \subset \partial E} \int_{\Gamma_E} \varrho^k_h ( \vcg{\phi} - \avg{ \vcg{\phi} } ) \cdot \avo{ \vc{u}^k_h }
\Big(\vc{u}^k_h \cdot \vc{n} - \avg{ \vc{u}^k_h \cdot \vc{n} } \Big) \ {\rm dS}_x \right| \\
&\stackrel{<}{\sim} h \left\| \sqrt{\varrho^k_h} \right\|_{L^\infty(\Omega_h)} \sum_{E \in E_h} \sum_{\Gamma_E \subset \partial E} \int_{\Gamma_E} \sqrt{\varrho^k_h} | \avo{ \vc{u}^k_h } |
\Big| \vc{u}^k_h \cdot \vc{n} - \avg{ \vc{u}^k_h \cdot \vc{n} } \Big| \ {\rm dS}_x\\
&\stackrel{<}{\sim} h \left\| \sqrt{\varrho^k_h} \right\|_{L^\infty(\Omega_h)} \| \sqrt{\varrho^{k}_h} \avo{ \vc{u}^k_h } \|_{L^2(\Omega_h)} \|
\nabla_h \vc{u}^k_h \|_{L^2(\Omega_h; R^3)},
\end{split}
\]
where, in view of (\ref{S6}),
\[
\left\| \sqrt{\varrho^k_h} \right\|_{L^\infty(\Omega_h)} \stackrel{<}{\sim} h^{- \frac{2 + \alpha}{2 \gamma}},
\ \mbox{with} \ \frac{2 + \alpha}{2 \gamma} < 1, \ \mbox{or, equivalently,}\ 0 < \alpha < 2 (\gamma - 1).
\]
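Indeed, the two forms of the restriction on $\alpha$ are equivalent by elementary algebra:
\[
\frac{2 + \alpha}{2 \gamma} < 1 \ \Longleftrightarrow \ 2 + \alpha < 2 \gamma \ \Longleftrightarrow \ \alpha < 2(\gamma - 1);
\]
as $\gamma > 1$, the admissible range $(0, 2(\gamma - 1))$ is non-empty.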
\medskip
{\bf Step 4}
\medskip
Finally,
\[
\begin{split}
&\left| \intOh{ \varrho^k_h \avo{ \vc{u}^k_h } \cdot \left( \avo{ \Pi^V_h[\vcg{\phi}] } - \vcg{\phi} \right) {\rm div}_h \vc{u}^k_h } \right| \\ &\stackrel{<}{\sim}
h^2 \left\| \sqrt{\varrho^k_h} \right\|_{L^\infty(\Omega_h)} \| \sqrt{\varrho^{k}_h} \avo{ \vc{u}^k_h } \|_{L^2(\Omega_h)} \|
\nabla_h \vc{u}^k_h \|_{L^2(\Omega_h; R^3)};
\end{split}
\]
whence the rest of the proof follows exactly as in Step 3.
Summing up the previous observations, we obtain the consistency formulation of the momentum method (\ref{C16}).
\begin{Remark} \label{RC1}
As $\vcg{\varphi}$ has compact support, equation (\ref{C16}) is also satisfied on the limit domain $\Omega$ for all $h$ small enough.
\end{Remark}
Thus we have shown the following result.
\begin{Proposition} \label{PC1}
Let the pressure $p$ satisfy (\ref{i5}), with $1 < \gamma < 2$. Suppose that $[ \varrho_h, \vc{u}_h]$ is a family of numerical solutions
given through (\ref{C1}), (\ref{C2}), where $[\varrho^k_h, \vc{u}^k_h]$ satisfy (\ref{N2}--\ref{N4}) with
\begin{equation} \label{expo}
\Delta t \approx h,\ 0 < \alpha < 2 (\gamma - 1).
\end{equation}
Then
\[
- \intOh{ \varrho^0_h \varphi (0, \cdot) } = \int_0^T \intOh{ \left[ \varrho_h \partial_t \varphi + \varrho_h \vc{u}_h \cdot \nabla_x \varphi \right] } \,{\rm d} t
+ \mathcal{O}(h^\beta), \ \beta > 0,
\]
for any test function $\varphi \in C^\infty_c([0, \infty) \times \Ov{\Omega}_h )$,
\begin{equation} \label{E2}
\begin{split}
- \intOh{ \varrho^0_h \avo{ \vc{u}^0_h } \cdot \vcg{\varphi}(0, \cdot) } &= \int_0^T \intOh{
\Big[ \varrho_h \avo{\vc{u}_h} \cdot \partial_t \vcg{\varphi} + \varrho_h \avo{\vc{u}_h} \otimes \vc{u}_h : \nabla_x \vcg{\varphi} + p(\varrho_h ) {\rm div}_x \vcg{\varphi} \Big] } \ \,{\rm d} t \\
&- \int_0^T \intOh{ \Big[ \mu \nabla_h \vc{u}_h : \nabla_x \vcg{\varphi} + (\mu/3 + \eta) {\rm div}_h \vc{u}_h \cdot {\rm div}_x \vcg{\varphi} \Big] } \,{\rm d} t
+ \mathcal{O}(h^\beta), \ \beta > 0,
\end{split}
\end{equation}
for any $\vcg{\varphi} \in C^\infty_c([0, \infty) \times \Omega_h; R^3)$.
Moreover, the solution satisfies the energy inequality
\begin{equation} \label{E3}
\begin{split}
\intOh{ \left[ \frac{1}{2} \varrho_h | \avo{\vc{u}_h } |^2 + P(\varrho_h) \right] (\tau, \cdot)} &+
\int_0^\tau \intOh{ \mu |\nabla_h \vc{u}_h |^2 + (\mu/3 + \eta) |{\rm div}_h \vc{u}_h |^2 } \ \,{\rm d} t \\ &\leq
\intOh{ \left[ \frac{1}{2} \varrho^0_h | \avo{\vc{u}^0_h } |^2 + P(\varrho^0_h) \right]}
\end{split}
\end{equation}
for a.e. $\tau \in [0,T]$.
\end{Proposition}
\begin{Remark} \label{RC3}
A close inspection of the previous discussion shows that the same method can be used to handle a variable time step $\Delta t_k$ adjusted at each iteration by means of a CFL-type condition, {\color{black} such as $ \| |\vc{u}_h^{k-1}| + c_h^{k-1} \|_{L^\infty(\Omega)} \Delta t_k / h \leq CFL $. Here $CFL \in (0,1]$ and $c_h^{k-1} \equiv \sqrt{p'(\varrho_h^{k-1})}$ denotes the sound speed. Although this condition is necessary for the stability of time-explicit
numerical schemes, it may still be appropriate even for implicit schemes in regions of high-speed flow. Note
that the only part of the proof of Proposition \ref{PC1} that must be changed is Section \ref{Tder}, where the time derivative in the momentum method
is estimated.}
\end{Remark}
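To make the step selection of Remark \ref{RC3} concrete, the following Python sketch (illustrative only; the function name, the array representation of the discrete fields, and the default parameter values are our own placeholder assumptions, not part of the scheme) evaluates the variable step $\Delta t_k$ from the CFL-type condition, using the sound speed $c = \sqrt{p'(\varrho)} = \sqrt{a \gamma \varrho^{\gamma - 1}}$ for the pressure law $p(\varrho) = a \varrho^\gamma$.

```python
import numpy as np

def cfl_time_step(u, rho, h, a=1.0, gamma=1.4, cfl=0.9):
    """Variable time step from the CFL-type condition
       max(|u| + c) * dt / h <= cfl,
    with local sound speed c = sqrt(p'(rho)) = sqrt(a*gamma*rho**(gamma-1))
    for the isentropic pressure law p(rho) = a * rho**gamma."""
    c = np.sqrt(a * gamma * rho ** (gamma - 1.0))  # local sound speed
    speed = np.abs(u) + c                          # characteristic speed |u| + c
    return cfl * h / np.max(speed)
```

For a state at rest with unit density, the step reduces to $\Delta t = CFL\, h/\sqrt{a\gamma}$; larger velocities shrink the admissible step.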
\section{Measure-valued solutions}
\label{M}
Our ultimate goal is to perform the limit $h \to 0$. For the sake of simplicity, we consider the initial data
\[
\varrho_0 \in L^\infty( R^3), \ \varrho_0 \geq \underline{\varrho} > 0 \ \mbox{a.a. in}\ R^3,\
\vc{u}_0 \in L^2 (R^3; R^3).
\]
With this ansatz, it is easy to find the approximation $[\varrho^0_h, \vc{u}^0_h]$ such that
\begin{equation} \label{M1}
\begin{split}
\varrho^0_h &\to \varrho_0 \ \mbox{in}\ L^\gamma_{\rm loc}(\Omega), \ \varrho^0_h > 0, \ \intOh{\varrho^0_h \phi} \to
\intO{ \varrho_0 \phi } \ \mbox{for any}\ \phi \in L^\infty(R^3),\\
\varrho^0_h \avo{\vc{u}^0_h} &\to \varrho_0 \vc{u}_0 \ \mbox{in}\ L^2_{\rm loc}(\Omega; R^3),\
\intOh{ \varrho^0_h \avo{ \vc{u}^0_h } \cdot \phi } \to \intO{ \varrho_0 \vc{u}_0 \cdot \phi }
\ \mbox{for any} \ \phi \in L^\infty(R^3; R^3),\\
&\intOh{ \left[ \frac{1}{2} \varrho^0_h |\avo{\vc{u}^0_h} |^2 + P(\varrho^0_h) \right] } \to
\intO{ \left[ \frac{1}{2} \varrho_0 |{\vc{u}_0} |^2 + P(\varrho_0) \right] } \ \mbox{as}\ h \to 0.
\end{split}
\end{equation}
\subsection{Weak limit}
Extending $\varrho_h$ by $\underline{\varrho} > 0$ and $\vc{u}_h$ to be zero outside $\Omega_h$, we may use the energy estimates (\ref{E3}) to
deduce that, at least for suitable subsequences,
\[
\begin{split}
\varrho_h &\to \varrho \ \mbox{weakly-(*) in}\ L^\infty(0,T; L^\gamma(\Omega)),\ \varrho \geq 0 \\
\avo{\vc{u}_h}, \ \vc{u}_h &\to \vc{u} \ \mbox{weakly in}\ L^2((0,T) \times \Omega; R^3),\\
\mbox{where} \ \vc{u} &\in L^2(0,T; W^{1,2}_0(\Omega)), \ \nabla_h \vc{u}_h \to \nabla_x \vc{u} \ \mbox{weakly in} \ L^2((0,T) \times \Omega;
R^{3 \times 3}), \\
\varrho_h \avo{\vc{u}_h} &\to \Ov{\varrho \vc{u}} \ \ \mbox{weakly-(*) in}\ L^\infty(0,T; L^{\frac{2\gamma}{\gamma + 1}}(\Omega; R^3)),
\end{split}
\]
see { \cite{FeKaMi} or \cite[Part II, Section 10.4]{FeiKaPok}}.
\begin{Remark} \label{RM1}
Note that, by virtue of Poincar\' e's inequality (\ref{S4b}) and the energy estimates (\ref{E3}),
\[
\| \vc{u}_h - \avo{\vc{u}_h } \|_{L^2(0,T; L^2(K; R^3)) } \stackrel{<}{\sim} h \ \mbox{for any compact}\ K \subset \Omega,
\]
in particular, the weak limits of $\vc{u}_h$, $\avo{\vc{u}_h}$ coincide in $\Omega$.
\end{Remark}
In addition, the limit functions satisfy the equation of continuity in the form
\begin{equation} \label{M2}
- \intO{ \varrho_0 \varphi (0, \cdot) } = \int_0^T \intO{ \left[ \varrho \partial_t \varphi + \Ov{\varrho \vc{u}} \cdot \nabla_x \varphi \right] } \,{\rm d} t
\end{equation}
for any test function $\varphi \in C^\infty_c([0, \infty) \times \Ov{\Omega} )$. It follows from (\ref{M2}) that
$\varrho \in C_{\rm weak}([0,T]; L^\gamma(\Omega))$; whence (\ref{M2}) can be rewritten as
\begin{equation} \label{M3}
\left[ \intO{ \varrho \varphi (t, \cdot) } \right]_{t = 0}^{t = \tau} = \int_0^\tau
\intO{ \left[ \varrho \partial_t \varphi + \Ov{\varrho \vc{u}} \cdot \nabla_x \varphi \right] } \,{\rm d} t
\end{equation}
for any $0 \leq \tau \leq T$ and any $\varphi \in C^\infty([0,T] \times \Ov{\Omega})$.
\subsection{Young measure generated by numerical solutions}
\label{young}
The energy inequality (\ref{S1}), along with the consistency (\ref{C10}), (\ref{C16}) provide a suitable platform for the use of the theory of
measure-valued solutions developed in \cite{FGSWW1}. Consider the family $[\varrho_h, \vc{u}_h]$. In accordance with the weak convergence statement
derived in the preceding part, this family generates a Young measure, i.e., a parameterized measure
\[
\nu_{t,x} \in L^\infty((0,T) \times \Omega; \mathcal{P}([0, \infty) \times R^3)) \ \mbox{for a.a.}\ (t,x) \in (0,T) \times \Omega,
\]
such that
\[
\left< \nu_{t,x}, g(\varrho, \vc{u}) \right> = \Ov{g(\varrho, \vc{u})}(t,x)\ \mbox{for a.a.}\ (t,x) \in (0,T) \times \Omega,
\]
whenever $g \in C([0, \infty) \times R^3)$, and
\[
g(\varrho_h, \vc{u}_h) \to \Ov{g(\varrho, \vc{u})} \ \mbox{weakly in}\ L^1((0,T) \times \Omega).
\]
Moreover, in view of Remark \ref{RM1}, the Young measures generated by $[\varrho_h, \vc{u}_h]$ and $[\varrho_h, \avo{\vc{u}_h}]$ coincide for a.a.
$(t,x) \in (0,T) \times \Omega$.
Accordingly, the equation of continuity (\ref{M3}) can be written as
\begin{equation} \label{M4}
\left[ \intO{ \varrho \varphi (t, \cdot) } \right]_{t = 0}^{t = \tau} = \int_0^\tau
\intO{ \left[ \varrho \partial_t \varphi + \left< \nu_{t,x}; \varrho \vc{u} \right> \cdot \nabla_x \varphi \right] } \,{\rm d} t
\end{equation}
for the same class of test functions as in (\ref{M3}).
In order to apply a similar treatment to the momentum equation (\ref{E2}), we have to replace the expression
$\varrho_h \avo{\vc{u}_h} \otimes \vc{u}_h$ in the convective term by $\varrho_h \avo{\vc{u}_h} \otimes \avo{\vc{u}_h}$. This is possible as
\[
\begin{split}
& \left\| \varrho_h \avo{\vc{u}_h} \otimes \vc{u}_h - \varrho_h \avo{\vc{u}_h} \otimes \avo{\vc{u}_h} \right\|_{L^1(\Omega_h; R^{3 \times 3})}
=\left\| \varrho_h \avo{\vc{u}_h} \otimes (\vc{u}_h - \avo{\vc{u}_h}) \right\|_{L^1(\Omega_h; R^{3 \times 3})}\\
&\stackrel{<}{\sim} h \| \sqrt{\varrho_h} \avo{\vc{u}_h} \|_{L^2(\Omega_h; R^3)} \| \nabla_h \vc{u}_h \|_{L^2(\Omega_h; R^{3 \times 3})}
\| \sqrt{\varrho_h} \|_{L^\infty(\Omega_h)},
\end{split}
\]
where, by virtue of (\ref{S6}),
\[
h \| \sqrt{\varrho_h} \|_{L^\infty(\Omega_h)} \stackrel{<}{\sim} h^{ 1 - \frac{2 + \alpha}{2 \gamma}},
\]
where the exponent is positive { as soon as (\ref{expo}) holds, specifically, $0 < \alpha < 2 (\gamma - 1)$}. Moreover, we have
\[
\varrho_h \avo{\vc{u}_h} \otimes \avo{ \vc{u}_h} + p(\varrho_h) \mathbb{I} \to
\left\{ \varrho \vc{u} \otimes \vc{u} + p(\varrho) \mathbb{I} \right\}\ \mbox{weakly-(*) in}\ \left[ L^\infty(0,T; \mathcal{M}(\Omega) )\right]^{3 \times 3};
\]
whence letting $h \to 0$ in (\ref{E2}) gives rise to
\[
\begin{split}
- \intO{ \varrho_0 \vc{u}_0 \cdot \vcg{\varphi}(0, \cdot) } &= \int_0^T \intO{
\Big[ \left< \nu_{t,x}; \varrho \vc{u} \right> \cdot \partial_t \vcg{\varphi} + \left\{ \varrho \vc{u} \otimes \vc{u} + p(\varrho) \mathbb{I} \right\} : \nabla_x
\vcg{\varphi} \Big] } \ \,{\rm d} t \\
&- \int_0^T \intO{ \Big[ \mu \nabla \vc{u} : \nabla_x \vcg{\varphi} + (\mu/3 + \eta) {\rm div} \vc{u} \cdot {\rm div}_x \vcg{\varphi} \Big] } \,{\rm d} t
\end{split}
\]
or, equivalently,
\begin{equation} \label{M5}
\begin{split}
\left[ \intO{ \left< \nu_{t,x} ; \varrho \vc{u} \right> \cdot \vcg{\varphi}(t, \cdot) } \right]_{t = 0}^{t = \tau} &= \int_0^\tau \intO{
\Big[ \left< \nu_{t,x}; \varrho \vc{u} \right> \cdot \partial_t \vcg{\varphi} + \left\{ \varrho \vc{u} \otimes \vc{u} + p(\varrho) \mathbb{I} \right\} : \nabla_x
\vcg{\varphi} \Big] } \ \,{\rm d} t \\
&- \int_0^\tau \intO{ \Big[ \mu \nabla \vc{u} : \nabla_x \vcg{\varphi} + (\mu/3 + \eta) {\rm div} \vc{u} \cdot {\rm div}_x \vcg{\varphi} \Big] } \,{\rm d} t
\end{split}
\end{equation}
for any $0 \leq \tau \leq T$, $\vcg{\varphi} \in C^\infty_c([0,T] \times \Omega; R^3)$, where we have set
\[
\nu_{0,x} = \delta_{[\varrho_0(x), \vc{u}_0(x)]}.
\]
Finally, we introduce the \emph{concentration remainder}
\[
\mathcal{R} = \left\{ \varrho \vc{u} \otimes \vc{u} + p(\varrho) \mathbb{I} \right\} - \left< \nu_{t,x}; \varrho \vc{u} \otimes \vc{u} + p(\varrho) \mathbb{I} \right>
\in [ L^\infty(0,T; \mathcal{M}(\Omega)) ]^{3 \times 3}
\]
and rewrite (\ref{M5}) in the form
\begin{equation} \label{M6}
\begin{split}
&\left[ \intO{ \left< \nu_{t,x} ; \varrho \vc{u} \right> \cdot \vcg{\varphi}(t, \cdot) } \right]_{t = 0}^{t = \tau} \\ &= \int_0^\tau \intO{
\Big[ \left< \nu_{t,x}; \varrho \vc{u} \right> \cdot \partial_t \vcg{\varphi} + \left< \nu_{t,x}; \varrho \vc{u} \otimes \vc{u} \right>
: \nabla_x \vcg{\varphi} + \left< \nu_{t,x}, p(\varrho) \right> {\rm div}_x
\vcg{\varphi} \Big] } \ \,{\rm d} t \\
&- \int_0^\tau \intO{ \Big[ \mu \nabla \vc{u} : \nabla_x \vcg{\varphi} + (\mu/3 + \eta) {\rm div} \vc{u} \cdot {\rm div}_x \vcg{\varphi} \Big] } \,{\rm d} t
+ \int_0^\tau \intO{ \mathcal{R} : \nabla_x \vcg{\varphi} } \,{\rm d} t
\end{split}
\end{equation}
for any $0 \leq \tau \leq T$, $\vcg{\varphi} \in C^\infty_c([0,T] \times \Omega; R^3)$.
Similarly, the energy inequality (\ref{E3}) can be written as
\begin{equation} \label{M7}
\begin{split}
\left[ \intO{ \left< \nu_{t,x}; \frac{1}{2} \varrho | {\vc{u} } |^2 + P(\varrho) \right> } \right]_{t = 0}^{t = \tau} &+
\int_0^\tau \intO{ \mu |\nabla \vc{u} |^2 + (\mu/3 + \eta) |{\rm div} \vc{u} |^2 } \ \,{\rm d} t \\ + \mathcal{D}(\tau) &\leq 0
\end{split}
\end{equation}
for a.e. $\tau \in [0,T]$, with the \emph{dissipation defect} $\mathcal{D}$ satisfying
\begin{equation} \label{M8}
\int_0^\tau \| \mathcal{R} \|_{\mathcal{M}(\Omega)}\ \,{\rm d} t \stackrel{<}{\sim} \int_0^\tau \mathcal{D}(t) \ \,{\rm d} t ,\
\mathcal{D}(\tau) \geq \liminf_{h \to 0} \int_0^\tau \intOh{ |\nabla_h \vc{u}_h|^2 }\ \,{\rm d} t - \int_0^\tau \intO{ |\nabla_x \vc{u} |^2 }\ \,{\rm d} t ,
\end{equation}
cf. \cite[Lemma 2.1]{FGSWW1}.
{At this stage, we recall the concept of \emph{dissipative measure-valued solution} introduced in \cite{FGSWW1}. These are measure-valued
solutions of the Navier-Stokes system (\ref{i1}--\ref{i4}) satisfying the energy inequality (\ref{M7}), where the concentration remainder
in the momentum equation is dominated by the dissipation defect as stated in (\ref{M8}) and the following analogue of Poincar\' e's inequality
holds:
\begin{equation} \label{PI}
\lim_{h \to 0} \int_0^\tau \intOh{ | \vc{u}_h - \vc{u} |^2 } \ \,{\rm d} t \leq \liminf_{h \to 0} \int_0^\tau \intOh{ |\nabla_h \vc{u}_h|^2 }
\,{\rm d} t - \int_0^\tau \intO{ |\nabla_x \vc{u} |^2 } \ \,{\rm d} t
(\leq \mathcal{D}(\tau)) ,
\end{equation}
where $\vc{u}$ is a weak limit of $\vc{u}_h$, or, equivalently, of $\avo{\vc{u}_h}$.
Consequently,
relations (\ref{M4}), (\ref{M6}--\ref{M8}) imply that the Young measure $\{ \nu_{t,x} \}_{(t,x) \in (0,T) \times \Omega}$ represents
a dissipative measure-valued solution of the Navier-Stokes system (\ref{i1}--\ref{i4})
in the sense of \cite{FGSWW1} as soon as we check (\ref{PI}).}
By the standard Poincar\' e inequality in $\Omega_h$ we get, on the one hand,
\[
\intOh{ |\vc{u}_h - \vc{u} |^2 } \stackrel{<}{\sim} \intOh{ |\vc{u}_h - \Pi^V_h [\vc{u}] |^2 } + \intOh{ |\Pi^V_h [\vc{u}] - \vc{u} |^2 }
\stackrel{<}{\sim} \intOh{ |\nabla_h \vc{u}_h - \nabla_h \Pi^V_h [\vc{u}] |^2 } + \mathcal{O}(h^\beta).
\]
On the other hand,
\[
\liminf_{h \to 0} \int_0^\tau \intOh{ |\nabla_h \vc{u}_h|^2 } \,{\rm d} t - \int_0^\tau \intO{ |\nabla_x \vc{u} |^2 }\ \,{\rm d} t =
\liminf_{h \to 0} \int_0^\tau \intOh{ |\nabla_h \vc{u}_h - \nabla_x \vc{u} |^2 } \,{\rm d} t .
\]
Thus it is enough to observe that, by virtue of (\ref{S7}),
\[
\nabla_h \Pi^V_h [\vc{u}] \to \nabla_x \vc{u} \ \mbox{(strongly) in}\ L^2(\Omega_h;R^{3 \times 3}) \ \mbox{whenever}\ \vc{u} \in W^{1,2}_0 (\Omega; R^3).
\]
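Combining the last two displays with this strong convergence yields, up to a multiplicative constant, the desired bound (\ref{PI}):
\[
\lim_{h \to 0} \int_0^\tau \intOh{ | \vc{u}_h - \vc{u} |^2 } \ \,{\rm d} t
\stackrel{<}{\sim} \liminf_{h \to 0} \int_0^\tau \intOh{ |\nabla_h \vc{u}_h - \nabla_x \vc{u} |^2 } \,{\rm d} t
= \liminf_{h \to 0} \int_0^\tau \intOh{ |\nabla_h \vc{u}_h|^2 } \,{\rm d} t - \int_0^\tau \intO{ |\nabla_x \vc{u} |^2 } \ \,{\rm d} t .
\]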
Seeing that validity of (\ref{M6}) as well as the bound on the concentration remainder (\ref{M8}) can be extended to the class of test functions
$\vcg{\varphi} \in C^1([0,T] \times \Ov{\Omega}; R^3)$, $\vcg{\varphi}|_{\partial \Omega} = 0$,
we have shown the following result.
\begin{Theorem} \label{MT1}
Let the pressure $p$ satisfy (\ref{i5}), with $1 < \gamma < 2$. Suppose that $[ \varrho_h, \vc{u}_h]$ is a family of numerical solutions
given through (\ref{C1}), (\ref{C2}), where $[\varrho^k_h, \vc{u}^k_h]$ satisfy (\ref{N2}--\ref{N4}) with
\[
\Delta t \approx h,\ 0 < \alpha < 2 (\gamma - 1),
\]
and the initial data satisfy (\ref{M1}).
Then any Young measure $\{ \nu_{t,x} \}_{(t,x) \in (0,T) \times \Omega}$ generated by $[\varrho^k_h, \vc{u}^k_h]$ for $h \to 0$ represents a
dissipative measure-valued solution of the Navier-Stokes system (\ref{i1}--\ref{i4}) in the sense of \cite{FGSWW1}.
\end{Theorem}
Of course, the conclusion of Theorem \ref{MT1} is rather weak, and, in addition, the Young measure need not be unique. On the other hand, however,
we may use the weak-strong uniqueness principle established in \cite[Theorem 4.1]{FGSWW1} to obtain our final convergence result.
\begin{Theorem} \label{MT2}
In addition to the hypotheses of Theorem \ref{MT1}, suppose that the Navier-Stokes system (\ref{i1}--\ref{i4}) endowed with the initial
data $[\varrho_0, \vc{u}_0]$ admits a regular solution $[\varrho, \vc{u}]$ belonging to the class
\[
\varrho, \ \nabla_x \varrho,\ \vc{u}, \nabla_x \vc{u} \in C([0,T] \times \Ov{\Omega}), \ \partial_t \vc{u} \in L^2(0,T; C(\Ov{\Omega}; R^3)),\
\varrho > 0, \ \vc{u}|_{\partial \Omega} = 0.
\]
Then
\[
\varrho_h \to \varrho \ \mbox{(strongly) in}\ L^\gamma((0,T) \times K), \ \vc{u}_h \to \vc{u} \ \mbox{(strongly) in}\ L^2((0,T) \times K; R^3)
\]
for any compact $K \subset \Omega$.
\end{Theorem}
{Indeed, the weak--strong uniqueness implies that the Young measure generated by the family of numerical solutions coincides
at each point $(t,x)$ with the Dirac mass supported by the smooth solution of the problem. In particular, the numerical solutions converge strongly and
no oscillations occur. Note that the Navier--Stokes system admits local-in-time strong solutions for arbitrary smooth initial data, see,
e.g., Cho et al. \cite{ChoChoeKim}, and even global-in-time smooth solutions for small initial data, see, e.g.,
Matsumura and Nishida \cite{MANI}, as soon as the physical domain $\Omega$ is sufficiently smooth. }
{\color{black}
\section{Conclusions}
We have studied the convergence of numerical solutions obtained by
the mixed finite element--finite volume scheme applied to the isentropic Navier-Stokes equations.
We have assumed the isentropic pressure--density
state equation $p(\varrho)=a \varrho^\gamma$ with $\gamma \in (1,2)$. Recall that
this assumption is not restrictive, since the largest physically relevant
exponent is $\gamma = 5/3$. In order to establish the convergence result we have used
the concept of dissipative measure-valued solutions. These are measure-valued solutions that, in addition, satisfy
an energy inequality in which the dissipation defect measure
dominates the concentration remainder in the equations.
The energy inequality (\ref{S1}), along with the consistency (\ref{C10}), (\ref{C16}) gave us a suitable framework to apply the theory of
measure-valued solutions. As shown in Section~\ref{young}, the numerical solutions $[\varrho_h, \vc{u}_h]$ generate a Young measure, i.e., a parameterized measure $\{ \nu_{t,x} \}_{(t,x) \in (0,T) \times \Omega}$, which represents a dissipative measure-valued solution of the Navier-Stokes system (\ref{i1}--\ref{i4}), cf.~Theorem \ref{MT1}. Finally, using
the weak-strong uniqueness principle established in \cite[Theorem 4.1]{FGSWW1} we have obtained the convergence of the numerical solutions to the exact regular solution, as long as the latter exists, cf.~Theorem~\ref{MT2}.
The present result is the first convergence result for numerical solutions of the three-dimensional compressible isentropic
Navier-Stokes equations for the full range of adiabatic exponents $\gamma \in (1,2)$.
}
\def\cprime{$'$} \def\ocirc#1{\ifmmode\setbox0=\hbox{$#1$}\dimen0=\ht0
\advance\dimen0 by1pt\rlap{\hbox to\wd0{\hss\raise\dimen0
\hbox{\hskip.2em$\scriptscriptstyle\circ$}\hss}}#1\else {\accent"17 #1}\fi}
\section{Introduction}
\label{sec:intro}
The growth of blood vessels is a complex multiscale process called angiogenesis that is the basis of organ growth and repair in healthy conditions and also of pathological developments such as cancerous tumors \cite{fol74,car05,fig08,car11}. Cells in an incipient tumor located in tissue experience lack of oxygen and nutrients, and stimulate production of vessel endothelial growth factor that, in turn, induces growth of blood vessels (angiogenesis) from a nearby primary vessel in the tumor direction \cite{fol74,car05}. Blood brings oxygen and nutrients that foster tumor growth. In angiogenesis, events happening in cellular and subcellular scales unchain endothelial cell motion and proliferation, build millimeter scale blood sprouts and networks thereof \cite{car11,CT05,GG05,fru07}. Angiogenesis imbalance contributes to numerous malignant, inflammatory, ischaemic, infectious, and immune disorders \cite{car05}. For these reasons, immense human and material resources are devoted to understanding and controlling angiogenesis. Theoretical efforts based on angiogenesis models go hand in hand with experiments \cite{lio77,sto91,cha93,cha95,and98,ton01,lev01,pla03,man04,sun05a,sun05,ste06,bau07,cap09,jac10,das10,swa11,sci11,sci13,cot14,dej14,ben14,bon14,hec15}. Models range from very simple to extraordinarily complex and often try to illuminate some particular mechanism; see the review \cite{hec15}. Realistic microscopic models involve postulating mechanisms and a large number of parameters that cannot be directly estimated from experiments, but they often yield qualitative predictions that can be tested. An important challenge is to extract mesoscopic and macroscopic descriptions of angiogenesis from the diverse microscopic models.
Early angiogenesis macroscopic models consisted of reaction-diffusion equations for densities of cell and chemicals (growth factors, fibronectin, etc.) \cite{lio77,cha93,cha95}. These models do not allow to treat the growth and evolution of individual blood vessels. Later models focused on the evolution of the cells at the tip of a vessel sprout. The ten or so cells at a vessel tip are highly motile and do not proliferate. They follow chemotactic and haptotactic clues as they advance toward hypoxic regions that experience lack of oxygen. These cells are followed by proliferating stalk cells that build a capillary in their wake. Thus {\em tip cell models} are based on the motion of single particles representing the tip cells and their trajectories constitute the advancing blood vessels \cite{sto91,and98,pla03,man04,cap09,bon14,hec15,ter16}. More realistic and necessarily more complex models illuminate tip and stalk cell dynamics, the motion of tip and stalk cells on the extracellular matrix outside blood vessels, blood circulation in newly formed vessels, and so on \cite{bau07,jac10,ben14,hec15}.
In recent work \cite{bon14,ter16}, we have been trying to bridge the gap between microscopic descriptions of early stage tumor induced angiogenesis that require large numerical simulations and macroscopic descriptions that are amenable to a more thorough theoretical study. We consider a simple tip cell model in which tip stochastic extension is driven by the gradient of growth factors (chemotaxis), there is a random branching of tips and tips join with existing blood vessels (anastomosis). We have derived a deterministic description for the density of vessel tips consisting of an integrodifferential equation for the tip density coupled to a reaction-diffusion equation for the tumor angiogenic factor (TAF, which comprises vessel endothelial and other growth factors) \cite{bon14,ter16}. The stochastic model can be made more realistic by adding equations characterizing haptotaxis, the influence of other chemicals or drugs, etc. While cell densities can be extracted from numerical simulations of microscopic models, our equation for the tip density \cite{bon14} incorporates tip branching and anastomosis as derived from a stochastic model \cite{ter16}, not postulated ad hoc. It turns out that the tip density soon forms a moving lump that advances towards the tumor. The longitudinal section of the stable lump (that we may term {\em angiton}) is approximately given by a moving soliton-like wave \cite{bon16}. This wave is an exact 1D solution of a reduced equation for the marginal tip density on the whole real line that has constant chemotactic force and no diffusion. It appears by differentiating a domain-wall solution (topological soliton) connecting two spatially homogeneous states. Numerical evidence shows that it is asymptotically stable \cite{bon16}. Technically speaking, it is not known whether two soliton-like waves in the angiogenesis model equations emerge unchanged from collisions except for a phase shift. 
Therefore we do not claim that angiogenesis soliton-like lump profiles are true solitons. However stable soliton-like waves are central to the arguments of the present paper and, by an abuse of language, we will call them solitons. In this, we follow extended usage in the physical literature in which other stable waves such as ``topological solitons'' \cite{MS04} or ``diffusive solitons'' \cite{rem99} are called simply solitons despite not emerging unscathed from collisions \cite{MS04,rem99}. The soliton shape and velocity depend on two collective coordinates. The vessel tip density approaches the soliton solution after an initial formation stage. After its formation and until the vessels are close to the tumor, the tip density is described by the soliton and the solution of its two collective coordinate equations.
In this paper, we deduce the equations for the angiogenesis soliton and its collective coordinates, solve the latter numerically and reconstruct the marginal tip density from the soliton formula. Then we show that it agrees with both the solution of the deterministic description and with the ensemble average of the tip density as extracted from the stochastic process. Although the fluctuations are large, we give numerical evidence that the position of the soliton peak is very close to that of the maximum of the marginal tip density for different replicas or realizations of the stochastic process. This implies that the simple description based on the soliton may give useful information about single replicas of the angiogenesis process. While our simple model needs to be completed to discuss control of angiogenesis, we show how changing a single parameter results in seemingly arresting the process.
The rest of the paper is as follows. We recall the stochastic model of \cite{bon14} and its deterministic description \cite{ter16} in Section \ref{sec:model}. By a Chapman-Enskog method, we derive a reduced equation for the marginal tip density in Section \ref{sec:reduced}. By neglecting diffusion and considering constant coefficients in the resulting equation, we find in Section \ref{sec:soliton} an analytical expression for the soliton of the marginal tip density \cite{bon16}. Section \ref{sec:cc} contains a derivation of the differential equations for the two collective coordinates of the soliton. The coefficients appearing in these equations contain spatial averages of the TAF density. In Section \ref{sec:numerical}, we explain how to calculate the coefficients in the collective coordinate equations, solve them numerically, reconstruct the soliton and, through it, the marginal vessel tip density. We compare it with direct solutions of the deterministic description and ensemble averages of the stochastic process. Although realizations of the stochastic angiogenic process provide very different looking vessel networks, we also show that the maximum of the marginal density for each realization follows closely the soliton peak. Section \ref{sec:conclusions} contains our conclusions and the Appendices are devoted to technical matters.
\section{Model} \label{sec:model}
Early stages of angiogenesis are described by a simple stochastic model in \cite{bon14,ter16}. It consists of a system of Langevin equations for the extension of vessel tips, a tip branching process and tip annihilation (anastomosis) when they merge with existing vessels. A tip $i$ is born at a random time $T^i$ from a moving tip (we ignore branching from mature vessels) and disappears at a later random time $\Theta^i$, either by reaching the tumor or by anastomosis. At time $T^i$, the velocity of the newly created tip $i$ is selected out of a normal distribution,
\begin{eqnarray}
\delta_{\sigma_v}(\mathbf{v}-\mathbf{v}_0)= \frac{e^{-|\mathbf{v}-\mathbf{v}_0|^2/\sigma_v^2}}{\pi\sigma_v^2},\label{eq1}
\end{eqnarray}
with mean $\mathbf{v}_0$ and a narrow variance $\sigma_v^2$. In addition, the probability that a tip branches from one of the existing ones during an infinitesimal time interval $(t, t + dt]$ is taken proportional to $\sum_{i=1}^{N(t)}\alpha(C(t,\mathbf{X}^i(t)))dt$, where $C(t,\mathbf{x})$ is the TAF concentration and
\begin{eqnarray}
\alpha(C)=\alpha_1\frac{C}{C_R+C},\quad C_R>0,\,\, \alpha_1>0,\label{eq2}
\end{eqnarray}
in which $C_R$ is a reference concentration. The change per unit time of the number of tips in boxes $d\mathbf{x}$ and $d\mathbf{v}$ about $\mathbf{x}$ and $\mathbf{v}$ is
\begin{eqnarray}\nonumber
&&\sum_{i=1}^{N(t)}\alpha(C(t,\mathbf{X}^i(t)))\, \delta_{\sigma_v}(\mathbf{v}^i(t)-\mathbf{v}_0)=\int_{d\mathbf{x}}\int_{d\mathbf{v}}\alpha(C(t,\mathbf{x)})\\
&&\times \delta_{\sigma_v}(\mathbf{v}-\mathbf{v}_0)
\sum_{i=1}^{N(t)}\delta(\mathbf{x}-\mathbf{X}^i(t))\delta(\mathbf{v}-\mathbf{v}^i(t)) d\mathbf{x} d\mathbf{v}. \label{eq3}
\end{eqnarray}
The Langevin equations for tip extensions are
\begin{eqnarray}
&&d\mathbf{X}^i(t)=\mathbf{v}^i(t)\, dt,\nonumber\\
&&d\mathbf{v}^i(t)= \left[- k\, \mathbf{v}^i(t)+\mathbf{F}\!\left(C(t,\mathbf{X}^i(t))\right)\!\right]\! dt + \sigma\, d\mathbf{W}^i(t), \label{eq4}
\end{eqnarray}
where $\mathbf{X}^i(t)$ and $\mathbf{v}^i(t)$ are the tip position and velocity of tip $i$ at time $t$, $\mathbf{W}^i(t)$ are independent identically distributed (i.i.d.) standard Brownian motions, and $k$ (friction coefficient) and $\sigma$ are positive parameters. At each time $t$ there are $N(t)$ active tips. The chemotactic force is
\begin{eqnarray}
\mathbf{F}(C)&=& \frac{d_1}{(1+\gamma_1C)^q}\nabla_x C, \label{eq5}
\end{eqnarray}
where $d_1$, $\gamma_1$, and $q$ are positive parameters. The TAF concentration solves
\begin{eqnarray}
\frac{\partial}{\partial t}C(t,\mathbf{x})\!&\!=\!& \! d_2 \Delta_x C(t,\mathbf{x})
-\eta C(t,\mathbf{x})\nonumber \\&\times&
\!\left| \sum_{i=1}^{N(t)} \mathbf{v}^i(t)\delta_{\sigma_x}(\mathbf{x}-\mathbf{X}^i(t))\right|\!.\label{eq6}
\end{eqnarray}
Here $d_2$ (diffusivity) and $\eta$ are positive parameters, whereas $\delta_{\sigma_x}(\mathbf{x})$ is a regularized smooth delta function (e.g., a Gaussian with variances $l_x^2$ and $l_y^2$ proportional to $\sigma_x^2$ along the $x$ and $y$ directions, respectively) that becomes $\delta(\mathbf{x})$ in the limit as $\sigma_x\to 0$.
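For illustration, the Langevin system (\ref{eq4}) can be integrated by the Euler--Maruyama method. The Python sketch below is our own minimal example, with placeholder parameter values and a user-supplied frozen force field standing in for $\mathbf{F}(C(t,\mathbf{X}^i(t)))$; it advances a single tip in two dimensions:

```python
import numpy as np

def euler_maruyama_tip(x0, v0, force, k=1.0, sigma=0.5, dt=1e-3, n_steps=1000, seed=0):
    """Euler-Maruyama integration of  dX = v dt,
    dv = (-k v + F(X)) dt + sigma dW  for a single vessel tip in 2D.
    `force` maps a position array of shape (2,) to a force array of shape (2,)."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    v = np.array(v0, dtype=float)
    path = [x.copy()]
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=2)   # Brownian increment
        x = x + v * dt                              # tip extension
        v = v + (-k * v + force(x)) * dt + sigma * dW
        path.append(x.copy())
    return np.array(path), v
```

In the noiseless case the velocity relaxes towards the terminal value $\mathbf{F}/k$, consistent with the friction term in (\ref{eq4}).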
There is a counterpart to the stochastic model for the densities of vessel tips and the vessel tip flux, defined as ensemble averages over a sufficient number $\mathcal{N}$ of replicas (realizations) $\omega$ of the stochastic process:
\begin{eqnarray}
p_{\mathcal{N}}\!(t,\mathbf{x},\mathbf{v})\!&=&\!\frac{1}{\mathcal{N}}\sum_{\omega=1}^\mathcal{N}\sum_{i=1}^{N(t,\omega)}\delta_{\sigma_x}(\mathbf{x}-\mathbf{X}^i(t,\omega))\nonumber\\&\times&
\delta_{\sigma_v}(\mathbf{v}-\mathbf{v}^i(t,\omega)),\label{eq7}\\
\tilde{p}_{\mathcal N}(t,\mathbf{x})\!\!&=&\!\frac{1}{\mathcal{N}}\sum_{\omega=1}^\mathcal{N}\sum_{i=1}^{N(t,\omega)}\delta_{\sigma_x}(\mathbf{x}-\mathbf{X}^i(t,\omega)), \label{eq8}\\
\mathbf{j}_{\mathcal N}(t,\mathbf{x})\!\!&=&\!\frac{1}{\mathcal{N}}\!\sum_{\omega=1}^\mathcal{N}\!\sum_{i=1}^{N(t,\omega)}\!\!\mathbf{v}^i(t,\omega)\delta_{\sigma_x}(\mathbf{x}-\mathbf{X}^i(t,\omega)).\label{eq9}
\end{eqnarray}
As $\mathcal{N}\to\infty$, these ensemble averages tend to the tip density $p(t,\mathbf{x},\mathbf{v})$, the marginal tip density $\tilde{p}(t,\mathbf{x})$, and the tip flux $\mathbf{j}(t,\mathbf{x})$, respectively. In \cite{ter16} it is shown that the angiogenesis model has a deterministic description based on the following equation for the density of vessel tips, $p(t,\mathbf{x},\mathbf{v})$,
\begin{eqnarray}
&&\frac{\partial}{\partial t} p(t,\mathbf{x},\mathbf{v})=\alpha(C(t,\mathbf{x}))\,
p(t,\mathbf{x},\mathbf{v})\delta_{v}(\mathbf{v}-\mathbf{v}_0)\nonumber\\
&& - \gamma\, p(t,\mathbf{x},\mathbf{v}) \int_0^t \tilde{p}(s,\mathbf{x})\, ds - \mathbf{v}\cdot \nabla_x p(t,\mathbf{x},\mathbf{v}) \nonumber\\
&& - \nabla_v \cdot [(\mathbf{F}(C(t,\mathbf{x}))-k\mathbf{v}) p(t,\mathbf{x},\mathbf{v})]\nonumber\\
&&+ \frac{\sigma^2}{2} \Delta_{v} p(t,\mathbf{x},\mathbf{v}),
\label{eq10}\\
&& \tilde{p}(t,\mathbf{x})=\int p(t,\mathbf{x},\mathbf{v}')\, d \mathbf{v'}. \label{eq11}
\end{eqnarray}
The TAF equation \eqref{eq6} becomes
\begin{eqnarray}
\frac{\partial}{\partial t}C(t,\mathbf{x})=d_2 \Delta_x C(t,\mathbf{x})- \eta\, C(t,\mathbf{x})\!\left| \mathbf{j}(t,\mathbf{x})\right|\!,\label{eq12}
\end{eqnarray}
where $\mathbf{j}(t,\mathbf{x})$ is the current density (flux) vector at any point $\mathbf{x}$ and any time $t\geq 0$,
\begin{equation}
\mathbf{j}(t,\mathbf{x})= \int \mathbf{v}'
p(t,\mathbf{x},\mathbf{v}')\, d \mathbf{v'}. \label{eq13}
\end{equation}
Alternatively, if $N(t)$ becomes very large (which is precluded by anastomosis), the same deterministic description can be derived by using the law of large numbers \cite{bon14}.
\begin{table}[ht]
\begin{center}\begin{tabular}{ccccccc}
\hline
$\mathbf{x}$& $\mathbf{v}$ & $t$ &$C$& $p$ &$\tilde{p}$&$\mathbf{j}$\\
$L$ & $\tilde{v}_0$ & $\frac{L}{\tilde{v}_0}$ & $C_R$ & $\frac{1}{\tilde{v}_0^2L^2}$& $\frac{1}{L^2}$& $\frac{\tilde{v}_0}{L^2}$\\
mm&$\mu$m/hr & hr& mol/m$^2$&$10^{21}\frac{\mbox{s$^2$}}{\mbox{m$^4$}}$ & $10^{5}$m$^{-2}$&m$^{-1}$s$^{-1}$\\
$2$& 40 & 50 & $10^{-16}$ & 2.025 & 2.5 & 0.0028 \\
\hline
\end{tabular}
\end{center}
\caption{Units for nondimensionalizing the model equations. }
\label{table1}
\end{table}
The deterministic description consisting of Equations \eqref{eq10} and \eqref{eq12} is well posed, as it has been proved to have unique smooth solutions \cite{car16}. After nondimensionalization as in Table \ref{table1} \cite{bon14,ter16}, \eqref{eq10} and \eqref{eq12} become
\begin{eqnarray}
&&\frac{\partial}{\partial t} p(t,\mathbf{x},\mathbf{v})=
\frac{A\, C(t,\mathbf{x})}{1+C(t,\mathbf{x})}\,
p(t,\mathbf{x},\mathbf{v})\delta_{v}(\mathbf{v}-\mathbf{v}_0)\nonumber\\
&& - \Gamma p(t,\mathbf{x},\mathbf{v}) \int_0^t \int p(s,\mathbf{x},\mathbf{v}')\, d \mathbf{v'} ds - \mathbf{v}\cdot \nabla_x p(t,\mathbf{x},\mathbf{v}) \nonumber\\
&& - \nabla_v \cdot\left[\!\left(\frac{\delta\,\nabla_x C(t,\mathbf{x})}{[1+\Gamma_1C(t,\mathbf{x})]^q}-\beta\mathbf{v}\right) p(t,\mathbf{x},\mathbf{v})
\right]\nonumber\\
&&+ \frac{\beta}{2} \Delta_{v} p(t,\mathbf{x},\mathbf{v}),
\label{eq14}\\
&&\frac{\partial}{\partial t}C(t,\mathbf{x})=\kappa \Delta_x C(t,\mathbf{x})- \chi\, C(t,\mathbf{x})\!\left| \mathbf{j}(t,\mathbf{x})\right|\!,\label{eq15}
\end{eqnarray}
respectively. The dimensionless parameters are defined in Table \ref{table2} and the boundary conditions to solve \eqref{eq14}-\eqref{eq15} are listed in Appendix \ref{app2}.
\begin{table}[ht]
\begin{center}\begin{tabular}{cccccccc}
\hline
$\delta$ & $\beta$ &$A$& $\Gamma$& $\Gamma_1$ &$\kappa$&$\chi$&$\sigma_v$\\
$\frac{d_1C_R}{\tilde{v}_0^2}$ & $\frac{kL}{\tilde{v}_0}$ & $\frac{\alpha_1L}{\tilde{v}_0^3}$ & $\frac{\gamma}{\tilde{v}_0^2}$&$\gamma_1C_R$& $\frac{d_2}{\tilde{v}_0 L}$& $\frac{\eta}{L}$&-\\
1.5 & 5.88 & $22.42$ & 0.145 & 1& $0.0045$ & 0.002&0.08 \\
\hline
\end{tabular}
\end{center}
\caption{Dimensionless parameters. }
\label{table2}
\end{table}
\section{Reduced equation for the marginal tip density}\label{sec:reduced}
We can obtain a simpler equation for the marginal vessel tip density \eqref{eq11} provided the overall tip density rapidly approaches a local equilibrium, which is a displaced Maxwellian:
\begin{eqnarray}
p^{(0)}(t,\mathbf{x},\mathbf{v})=\frac{1}{\pi}e^{-|\mathbf{v}-\mathbf{v}_0|^2}\tilde{p}(t,\mathbf{x}).\label{eq16}
\end{eqnarray}
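A quick numerical check (outside the derivation) confirms that the velocity factor in \eqref{eq16} is a normalized Gaussian on the plane, so that integrating $p^{(0)}$ over $\mathbf{v}$ recovers $\tilde{p}$:

```python
import numpy as np

# The factor (1/pi) exp(-|v - v0|^2) in Eq. (16) is a normalized Gaussian
# on the 2D velocity plane, so integrating p^(0) over v recovers p~.
v = np.linspace(-6.0, 6.0, 801)          # grid for the components of v - v0
dv = v[1] - v[0]
VX, VY = np.meshgrid(v, v)
weight = np.exp(-(VX**2 + VY**2)) / np.pi
total = weight.sum() * dv**2             # quadrature of the velocity integral
print(total)                             # close to 1
```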
The source terms in \eqref{eq14} (the first two terms on its right hand side) select velocities in a small neighborhood of $\mathbf{v}_0$, as such velocities are the only ones for which the birth term proportional to $\alpha(C)\delta_v(\mathbf{v}-\mathbf{v}_0)$, cf.\ Eq.\ \eqref{eq1}, can compensate the anastomosis death term. To derive the simpler equation for $\tilde{p}$, we use the Chapman-Enskog method \cite{BT10}. We first rewrite (\ref{eq14}) as
\begin{eqnarray}
\mathcal{L}p &\equiv &\beta\,\nabla_v\cdot\left(\frac{1}{2}\nabla_vp+(\mathbf{v}-\mathbf{v}_0)p\right)\nonumber\\
&=& \epsilon\left[\frac{\partial p}{\partial t} +\beta\, (\mathbf{F}-\mathbf{v}_0)\cdot\nabla_v p +\mathbf{v}\cdot\nabla_x p \right.\nonumber\\&-&\left.
\alpha p\,\delta_{v}(\mathbf{v}-\mathbf{v}_0) +\Gamma p \int_0^t \tilde{p}(s,\mathbf{x})\, ds\right]\!, \label{eq17}\\
\alpha&=&\frac{A\, C}{1+C},\label{eq18}\\
\mathbf{F}&=& \frac{\delta}{\beta}\,\frac{\nabla_x C(t,\mathbf{x})}{[1+\Gamma_1C(t,\mathbf{x})]^q}.\label{eq19}
\end{eqnarray}
We have included a scaling parameter $\epsilon$ on the right hand side of (\ref{eq17}), as we will consider it small compared to the left hand side. After the computations that follow, we will restore $\epsilon=1$. Note that (\ref{eq16}) satisfies
\begin{eqnarray}
\mathcal{L}p^{(0)} =0,\label{eq20}
\end{eqnarray}
i.e., (\ref{eq17}) with $\epsilon=0$. We now assume that the terms on the right hand side of (\ref{eq17}) are small compared to those on its left hand side (formally, $\epsilon\ll 1$) and that we can expand $p$ in the asymptotic series
\begin{eqnarray}
p=p^{(0)} +\epsilon p^{(1)} + \epsilon^2 p^{(2)} +\ldots .\label{eq21}
\end{eqnarray}
Inserting this into (\ref{eq11}), we find
\begin{eqnarray}
\int p^{(j)} d\mathbf{v} =0,\quad j=1,2,\ldots. \label{eq22}
\end{eqnarray}
We assume now that
\begin{eqnarray}
\frac{\partial\tilde{p}}{\partial t}=\mathcal{F}^{(0)} +\epsilon\mathcal{F}^{(1)}+\ldots,\label{eq23}
\end{eqnarray}
where the $\mathcal{F}^{(j)}$ should be determined by solvability conditions to be derived below. Inserting (\ref{eq21}) and (\ref{eq23}) in (\ref{eq17}) and equating like powers of $\epsilon$ in the result, we obtain the hierarchy of equations (\ref{eq20}) and
\begin{eqnarray}
&&\mathcal{L}p^{(1)}\! =\frac{e^{- V^2}}{\pi}\!\left[\mathcal{F}^{(0)} +\mathbf{v}\cdot\nabla_x\tilde{p} - 2\beta\mathbf{V}\!\cdot\! (\mathbf{F}-\mathbf{v}_0)\tilde{p} \right.\nonumber\\&&\quad\left.
- \alpha\tilde{p} \delta_{v}(\mathbf{V})+ \Gamma\tilde{p} \int_0^t \tilde{p}(s,\mathbf{x})\, ds\right]\!,\label{eq24}\\
&&\mathcal{L}p^{(2)}\! =\frac{e^{-V^2}}{\pi}\mathcal{F}^{(1)} +\mathbf{v}\cdot\nabla_xp^{(1)}\nonumber\\&&\quad
-2\beta\mathbf{V}\!\cdot\! (\mathbf{F}-\mathbf{v}_0)p^{(1)}- \alpha p^{(1)} \delta_{v}(\mathbf{V}) \nonumber\\&& \quad
+ \Gamma p^{(1)} \int_0^t \tilde{p}(s,\mathbf{x})\, ds,\label{eq25}
\end{eqnarray}
etc. Here $\mathbf{V}=\mathbf{v}-\mathbf{v}_0$ and $V=|\mathbf{V}|$. For these equations to have bounded solutions, we need to impose the conditions
\begin{eqnarray}
\int \mathcal{L}p^{(j)} d\mathbf{v} =0,\quad j=1,2,\ldots, \label{eq26}
\end{eqnarray}
as the adjoint problem $\mathcal{L}^\dagger v=0$ has constant solutions. For (\ref{eq24}), this condition yields
\begin{eqnarray}
\mathcal{F}^{(0)} =\frac{\alpha}{\pi}\tilde{p} -\mathbf{v}_0\cdot\nabla_x\tilde{p}- \Gamma\tilde{p} \int_0^t \tilde{p}(s,\mathbf{x})\, ds, \label{eq27}
\end{eqnarray}
which, inserted back in (\ref{eq24}), produces the equation
\begin{eqnarray}
\mathcal{L}p^{(1)} &=&\frac{e^{- V^2}}{\pi}\!\left\{\alpha\!\left[\frac{1}{\pi} -\delta_{v}(\mathbf{V})\right]\!\tilde{p} \right.\nonumber\\ &+&\left.
\mathbf{V}\cdot\!\left[\nabla_x\tilde{p}-2\beta(\mathbf{F}-\mathbf{v}_0)\tilde{p}\right]\right\}.\label{eq28}
\end{eqnarray}
The solution of (\ref{eq28}) that satisfies (\ref{eq22}) is
\begin{eqnarray}
p^{(1)} =-\frac{e^{- V^2}}{\pi}\mathbf{V}\!\cdot\!\left[\nabla_x\tilde{p}-2\beta(\mathbf{F}-\mathbf{v}_0)\tilde{p}\right] \nonumber\\
+\frac{\alpha\tilde{p}}{2\pi^2}e^{-V^2} \!\left[ \int_0^\infty e^{-t}\ln t \, dt -\ln V^2\right]\!. \label{eq29}
\end{eqnarray}
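The integral in \eqref{eq29} is the known constant $\int_0^\infty e^{-t}\ln t\,dt=\Gamma'(1)=-\gamma_E\approx -0.5772$, minus the Euler--Mascheroni constant; a short numerical verification:

```python
import numpy as np
from scipy.integrate import quad

# Verify that the integral in Eq. (29) equals minus the Euler-Mascheroni
# constant. Split at t = 1 so that the integrable log singularity at
# t = 0 sits on a finite interval.
f = lambda t: np.exp(-t) * np.log(t)
value = quad(f, 0.0, 1.0)[0] + quad(f, 1.0, np.inf)[0]
print(value, -np.euler_gamma)
```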
Insertion of (\ref{eq29}) into the solvability condition (\ref{eq26}) for $j=2$ produces
\begin{eqnarray}
\mathcal{F}^{(1)} &=&\frac{1}{2\beta}\Delta_x\tilde{p}+\nabla_x\cdot\!\left[\left(\mathbf{v}_0- \mathbf{F}\right)\tilde{p}\right] \nonumber\\&+&
\frac{\alpha^2\tilde{p}}{2\pi^2\beta (1+\sigma_v^2)}\ln\!\left(1+\frac{1}{\sigma_v^2}\right)\!.\label{eq30}
\end{eqnarray}
We now substitute (\ref{eq27}) and (\ref{eq30}) in (\ref{eq23}) and recall $\epsilon=1$, thereby finding the Smoluchowski-type equation
\begin{eqnarray}
\frac{\partial\tilde{p}}{\partial t}+\nabla_x\cdot(\mathbf{F}\tilde{p})-\frac{1}{2\beta}\Delta_x\tilde{p}=\mu\,\tilde{p}\nonumber\\
-\Gamma\tilde{p}\int_0^t\tilde{p}(s,\mathbf{x})\, ds, \label{eq31}\\
\mu=\frac{\alpha}{\pi}\left[1+\frac{\alpha}{2\pi\beta(1+\sigma_v^2)}\ln\!\left(1+\frac{1}{\sigma_v^2}\right)\!\right]\!.\label{eq32}
\end{eqnarray}
Note that the convective terms in \eqref{eq31} correspond to having ignored inertia in the Langevin equation \eqref{eq4}, which then becomes $d\mathbf{X}^i(t)=(\mathbf{F}/k)\, dt + (\sigma/k)\, d\mathbf{W}^i (t)$. Our perturbation procedure just renormalizes the birth term $\alpha(C)$ in \eqref{eq14} or \eqref{eq17}.
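To gauge the size of this renormalization, \eqref{eq32} can be evaluated with the Table \ref{table2} parameters at a sample dimensionless TAF level; the choice $C=1$ below is purely illustrative:

```python
import numpy as np

# Renormalized birth rate mu of Eq. (32), evaluated with the Table 2
# values A = 22.42, beta = 5.88, sigma_v = 0.08 at the illustrative
# dimensionless TAF level C = 1 (this choice of C is an assumption).
A, beta, sigma_v = 22.42, 5.88, 0.08
C = 1.0
alpha = A * C / (1 + C)                                        # Eq. (18)
correction = alpha / (2 * np.pi * beta * (1 + sigma_v**2)) \
    * np.log(1 + 1 / sigma_v**2)
mu = (alpha / np.pi) * (1 + correction)                        # Eq. (32)
print(alpha / np.pi, mu)   # about 3.57 and 9.01: amplification by ~2.5
```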
The flux (\ref{eq13}) in the reaction-diffusion equation (\ref{eq15}) is $\mathbf{j}(t,\mathbf{x})\approx \mathbf{v}_0\tilde{p}(t,\mathbf{x})$, so that (\ref{eq15}) becomes
\begin{eqnarray}
\frac{\partial}{\partial t}C(t,\mathbf{x})=\kappa \Delta_x C(t,\mathbf{x})- \chi\, C(t,\mathbf{x})\,\tilde{p}(t,\mathbf{x}),\label{eq33}
\end{eqnarray}
because $|\mathbf{v}_0|=1$ in our nondimensional units.
The boundary conditions for (\ref{eq31}) are: (i) $\tilde{p}(t,\mathbf{x})$ known at $x=1$ and equal to its instantaneous value there; and (ii) known flux $j_0$ at $x=0$ \cite{bon14}. Boundary condition (i) is a free boundary condition that avoids explicitly modeling the tumor; it replaces the more appropriate absorbing boundary condition $\tilde{p}=0$ at the tumor. In condition (ii), the flux can be approximated as
\begin{eqnarray}\nonumber
\int (\mathbf{v}_0+\mathbf{V})p(t,\mathbf{x},\mathbf{v})\, d\mathbf{V}&=& \mathbf{v}_0\tilde{p}+\int \mathbf{V} p^{(1)}d\mathbf{V} \\&=&
\mathbf{F}\tilde{p}- \frac{1}{2\beta} \nabla_x \tilde{p}. \nonumber
\end{eqnarray}
At $x=0$, the $x$-component of $\mathbf{F}$ is zero and therefore the boundary condition for $\tilde{p}$ becomes $-\frac{1}{2\beta}\frac{\partial\tilde{p}}{\partial x}= j_0$, i.e.,
\begin{eqnarray}
\left.-\frac{1}{2\beta}\frac{\partial\tilde{p}}{\partial x}\right|_{x=0}= v_0\mu\,\tilde{p}\, \theta(\tau-t), \label{eq34}
\end{eqnarray}
in which $\theta(t)=1$ if $t>0$ and $\theta(t)=0$ otherwise is the unit step function. In \eqref{eq34}, we have renormalized the birth rate coefficient $\alpha$ to $\mu$ in harmony with the change in birth rate when going from the equation for the vessel tip density \eqref{eq14} to \eqref{eq31} for the marginal vessel tip density; see \eqref{b6} in Appendix \ref{app2}. In \cite{bon14,ter16} and in the numerical calculations of this paper, $\tau=\infty$.
\section{Soliton}
\label{sec:soliton}
We now find an approximate soliton solution of (\ref{eq31}) following \cite{bon16}. Firstly, let us define
\begin{eqnarray}
\rho(t,\mathbf{x}) = \int_0^t\tilde{p}(s,\mathbf{x})\, ds, \label{eq35}
\end{eqnarray}
and ignore diffusion in (\ref{eq31}), which then becomes
\begin{eqnarray}
&&\frac{\partial^2\rho}{\partial t^2}+\nabla_x\cdot\!\left(\mathbf{F}\frac{\partial\rho}{\partial t}\right)\! =\mu\frac{\partial\rho}{\partial t}-\Gamma\rho\frac{\partial\rho}{\partial t}. \label{eq36}
\end{eqnarray}
The coefficients $\kappa$ and $\chi$ in (\ref{eq33}) are very small \cite{bon14} and therefore the TAF concentration varies very slowly compared with the marginal tip density. We will also assume that the initial TAF concentration varies on a larger spatial scale than the soliton size and that the TAF gradient is directed along the $x$ axis, which constitutes a good approximation \cite{bon14}. Then $\mathbf{F}$ and $\mu$ are almost constant and we will seek a solution of the form
\begin{eqnarray}
\rho(t,\mathbf{x}) = \rho(\xi),\quad\xi= x-ct, \label{eq37}
\end{eqnarray}
for (\ref{eq36}). The resulting ordinary differential equation is
\begin{eqnarray}
&&\left(c-F_x\right)\!\frac{\partial^2\rho}{\partial\xi^2}+(\mu-\Gamma\rho)\frac{\partial\rho}{\partial\xi} =0, \label{eq38}
\end{eqnarray}
in which $F_x$ is the $x$-component of the chemotactic force $\mathbf{F}$. Integrating \eqref{eq38} once, we obtain
\begin{eqnarray}
\left(c-F_x\right)\!\frac{\partial\rho}{\partial\xi}+\left(\mu-\frac{\Gamma}{2}\rho\right)\!\rho =-K, \label{eq39}
\end{eqnarray}
where $K$ is a constant. From this, we get
\begin{eqnarray}
\left(c-F_x\right)\!\frac{2}{\Gamma}\frac{\partial\rho}{\partial\xi}=\rho^2-2\frac{\mu}{\Gamma}\rho-\frac{2K}{\Gamma}. \label{eq40}
\end{eqnarray}
Setting $\rho=\frac{\mu}{\Gamma}+\nu\tanh(\lambda\xi)$, we find $\nu^2=\frac{\mu^2+2K\Gamma}{\Gamma^2}$ and $2\nu\lambda(c-F_x)/\Gamma=-\nu^2$, thereby obtaining
\begin{eqnarray}
\rho=\frac{\mu}{\Gamma}-\frac{\sqrt{2K\Gamma+\mu^2}}{\Gamma}\tanh\!\left[\frac{\sqrt{2K\Gamma+\mu^2}}{2(c-F_x)}(\xi-\xi_0)\right]\!. \label{eq41}
\end{eqnarray}
Here $\xi_0$ is a constant of integration. Thus $\tilde{p}=\frac{\partial\rho}{\partial t}=-c\frac{\partial\rho}{\partial\xi}$ yields
\begin{eqnarray}
\tilde{p}=\frac{(2K\Gamma+\mu^2)c}{2\Gamma(c-F_x)}\mbox{sech}^2\!\left[\frac{\sqrt{2K\Gamma+\mu^2}}{2(c-F_x)}(x-ct-\xi_0)\right]\!\!. \label{eq42}
\end{eqnarray}
This is similar to the usual soliton solution of the Korteweg-de Vries equation except that we now have three parameters, $c$, $K$ and $\xi_0$. Note that the soliton appears as a consequence of a dominant balance of the time derivative, convection, and source terms in \eqref{eq31}. The existence of the soliton solution is a consequence of the quadratic anastomosis term in \eqref{eq14} first derived in \cite{bon14}. While simulations of the deterministic \cite{bon14} and stochastic descriptions \cite{ter16} clearly exhibit a soliton-like solution, the derivation presented here first appeared in \cite{bon16}.
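The algebra leading from \eqref{eq40} to \eqref{eq41} can be verified numerically; in this sketch the parameter values are illustrative only, not fitted to the model:

```python
import numpy as np

# Check that rho(xi) of Eq. (41) satisfies the first-order ODE (40):
# (c - F_x)(2/Gamma) rho' = rho^2 - 2(mu/Gamma) rho - 2K/Gamma.
# The parameter values below are illustrative only.
mu, Gamma, K, c, Fx, xi0 = 9.0, 0.145, 40.0, 1.1, 0.3, 0.0
root = np.sqrt(2 * K * Gamma + mu**2)
xi = np.linspace(-2.0, 2.0, 2001)
rho = mu / Gamma - (root / Gamma) * np.tanh(root / (2 * (c - Fx)) * (xi - xi0))
lhs = (c - Fx) * (2.0 / Gamma) * np.gradient(rho, xi)
rhs = rho**2 - 2 * (mu / Gamma) * rho - 2 * K / Gamma
print(np.max(np.abs(lhs - rhs)) / np.max(np.abs(rhs)))  # finite-difference error
```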
\section{Collective coordinates}\label{sec:cc}
In this section, we shall discuss the effect of small diffusion and a slowly varying TAF concentration on the soliton. Let the soliton solution (\ref{eq42}) be written as
\begin{eqnarray}
&&\!\tilde{p}_s=\frac{(2K\Gamma+\mu^2)c}{2\Gamma(c-F_x)}\mbox{sech}^2 s, \label{eq43}\\
&&s=\frac{\sqrt{2K\Gamma+\mu^2}}{2(c-F_x)}\,\xi, \quad\xi=x-X(t),\label{eq44}\\
&&\dot{X}=\frac{dX}{dt}=c.\label{eq45}
\end{eqnarray}
Here $X(t)$, $c(t)$ and $K(t)$ are time-dependent {\em collective coordinates} characterizing the soliton. They are supposed to vary slowly so that the marginal tip density is described by a soliton that moves and changes shape slowly according to the changes of its collective coordinates. To find equations for them, we adapt the perturbation method explained in References \cite{mer97,san14}. Note that $\tilde{p}_s$ is a function of $\xi$ and also of $\mathbf{x}$ and $t$ through $C(t,\mathbf{x})$,
\begin{equation}
\tilde{p}_s=\tilde{p}_s\!\left(\xi;K,c,\mu(C),F_x\!\left(C,\frac{\partial C}{\partial x}\right)\!\right)\!.
\label{eq46}
\end{equation}
We assume that the time and space variations of $C$, which appear when $\tilde{p}_s$ is differentiated with respect to $t$ or $x$, produce terms that are small compared to $\partial\tilde{p}_s/\partial\xi$. As indicated in Appendix \ref{app3}, we shall consider that $\mu(C)$ is approximately constant, ignore $\partial C/\partial t$ because the TAF concentration is varying slowly (the dimensionless coefficients $\kappa$ and $\chi$ appearing in the TAF equation \eqref{eq33} are very small according to Table \ref{table2}) and ignore $\partial^2\tilde{p}_s/\partial i\partial j$, where $i,j=K,\, F_x$. Appendix \ref{app1} explains what happens if we relax these assumptions. We now insert (\ref{eq43}) and \eqref{eq44} into (\ref{eq31}), thereby obtaining
\begin{eqnarray}
&&\left(F_x-\dot{X}\right)\!\frac{\partial\tilde{p}_s}{\partial\xi}+\frac{\partial\tilde{p}_s}{\partial K} \dot{K}+\frac{\partial\tilde{p}_s}{\partial c}\dot{c}+\tilde{p}_s\nabla_x\cdot\mathbf{F} \nonumber\\&&
+\frac{\partial\tilde{p}_s}{\partial F_x}\!\left(\frac{\partial F_x}{\partial t}+\mathbf{F}\cdot\nabla_xF_x\right)\! -\frac{1}{2\beta}\!\left(\frac{\partial^2\tilde{p}_s}{\partial\xi^2} \right.
\nonumber\\
&&\left.+2\frac{\partial^2\tilde{p}_s}{\partial\xi\partial F_x}\frac{\partial F_x}{\partial x}+ \frac{\partial\tilde{p}_s}{\partial F_x}\Delta_xF_x\right)\!=\mu\tilde{p}_s \nonumber\\&&
-\Gamma\tilde{p}_s\!\!\int_0^t\tilde{p}_s dt. \label{eq47}
\end{eqnarray}
Eq.\ (\ref{eq31}) with $1/\beta=0$ and constant $\mathbf{F}$ has the soliton solution \eqref{eq43}-\eqref{eq44}. Using this fact and \eqref{eq45}, (\ref{eq47}) becomes
\begin{eqnarray}
&&\frac{\partial\tilde{p}_s}{\partial K} \dot{K}+\frac{\partial\tilde{p}_s}{\partial c}\dot{c}=\mathcal{A},
\label{eq48}\\
&&\mathcal{A}=\! \frac{1}{2\beta}\frac{\partial^2\tilde{p}_s}{\partial\xi^2}\!-\!\tilde{p}_s\nabla_x\!\cdot\!\mathbf{F}\!-\!\frac{\partial\tilde{p}_s}{\partial F_x}\!\left[\mathbf{F}\!\cdot\!\nabla_xF_x\!-\!\frac{1}{2\beta}\Delta_xF_x\right]\! \nonumber\\&&\quad
+\frac{1}{\beta}\frac{\partial^2\tilde{p}_s}{\partial\xi\partial F_x}\frac{\partial F_x}{\partial x}. \label{eq49}
\end{eqnarray}
See Appendix \ref{app3} for the precise meaning of these equations.
We now find collective coordinate equations (CCEs) for $K$ and $c$. As the lump-like angiton moves on the $x$ axis, we set $y=0$ to capture the location of its maximum. On the $x$ axis, the profile of the angiton is the soliton \eqref{eq43}-\eqref{eq44}. We first multiply (\ref{eq48}) by $\partial\tilde{p}_s/\partial K$ and integrate over $x$. We consider a fully formed soliton far from the primary vessel and the tumor. As it decays exponentially for $|\xi |\gg 1$, the soliton is considered to be localized on some finite interval $(-\mathcal{L}/2,\mathcal{L}/2)$. The coefficients in the soliton formulas \eqref{eq43}-\eqref{eq44} and in \eqref{eq48} depend on the TAF concentration at $y=0$; therefore they are functions of $x$ and time and get integrated over $x$. The TAF varies slowly on the support of the soliton, and therefore we can approximate the integrals over $x$ by
\begin{eqnarray}
&&\!\int_\mathcal{I}\! F(\tilde{p}_s(\xi;x,t),x) dx \nonumber\\&&\quad\quad\quad
\approx\!\frac{1}{\mathcal{L}}\int_\mathcal{I}\!\!\left(\int_{-\mathcal{L}/2}^{\mathcal{L}/2}\! F(\tilde{p}_s(\xi;x,t),x) d\xi\!\right)\! dx. \label{eq50}
\end{eqnarray}
See Appendix \ref{app3}. The interval $\mathcal{I}$ over which we integrate should be large enough to contain most of the soliton, of extension $\mathcal{L}$. Thus the CCEs hold only after the initial soliton formation stage. Near the tumor, the boundary condition affects the soliton and we should exclude an interval near $x=1$ from $\mathcal{I}$. We shall specify the integration interval $\mathcal{I}$ in the next section. Acting similarly, we multiply \eqref{eq48} by $\partial\tilde{p}_{s}/\partial c$ and integrate over $x$. From the two resulting formulas, we then find $\dot{K}$ and $\dot{c}$ as fractions. The factors $1/\mathcal{L}$ cancel out from their numerators and denominators. As the soliton tails decay exponentially to zero, we can set $\mathcal{L}\to\infty$ and obtain the following CCEs \cite{bon16}
\begin{widetext}
\begin{eqnarray}
&&\dot{K}=\frac{\int_{-\infty}^\infty \frac{\partial\tilde{p}_s}{\partial K}\mathcal{A}d\xi\int_{-\infty}^\infty\!\!\left(\frac{\partial\tilde{p}_s}{\partial c}\right)^2\!\!d\xi\!- \int_{-\infty}^\infty \frac{\partial\tilde{p}_s}{\partial c}\mathcal{A}d\xi\int_{-\infty}^\infty\!\frac{\partial\tilde{p}_s}{\partial K}\frac{\partial\tilde{p}_s}{\partial c} d\xi\!}{\int_{-\infty}^\infty\!\!\left(\frac{\partial\tilde{p}_s}{\partial K}\right)^2\!\!d\xi\!\int_{-\infty}^\infty\!\!\left(\frac{\partial\tilde{p}_s}{\partial c}\right)^2\!\!d\xi\!-\left(\int_{-\infty}^\infty \frac{\partial\tilde{p}_s}{\partial c}\frac{\partial\tilde{p}_s}{\partial K}d\xi\right)^2},\label{eq51}\end{eqnarray}
\begin{eqnarray}
&&\dot{c}=\frac{\int_{-\infty}^\infty \frac{\partial\tilde{p}_s}{\partial c}\mathcal{A}d\xi\int_{-\infty}^\infty\!\!\left(\frac{\partial\tilde{p}_s}{\partial K}\right)^2\!\!d\xi\!- \int_{-\infty}^\infty \frac{\partial\tilde{p}_s}{\partial K}\mathcal{A}d\xi\int_{-\infty}^\infty\!\frac{\partial\tilde{p}_s}{\partial K}\frac{\partial\tilde{p}_s}{\partial c} d\xi\!}{\int_{-\infty}^\infty\!\!\left(\frac{\partial\tilde{p}_s}{\partial K}\right)^2\!\!d\xi\!\int_{-\infty}^\infty\!\!\left(\frac{\partial\tilde{p}_s}{\partial c}\right)^2\!\!d\xi\!-\left(\int_{-\infty}^\infty \frac{\partial\tilde{p}_s}{\partial c}\frac{\partial\tilde{p}_s}{\partial K}d\xi\right)^2}. \label{eq52}\end{eqnarray}
\end{widetext}
In these equations, all terms varying slowly in space have been averaged over the interval $\mathcal{I}$. The last term in \eqref{eq49} is odd in $\xi$ and does not contribute to the integrals in \eqref{eq51} and \eqref{eq52} whereas all other terms in \eqref{eq49} are even in $\xi$ and do contribute. The integrals appearing in (\ref{eq51}) and (\ref{eq52}) are calculated in Appendix \ref{app4}. The resulting CCEs are
\begin{eqnarray}
&&\dot{K}= \frac{(2K\Gamma\!+\!\mu^2)^2}{4\Gamma\beta(c\!-\!F_x)^2}\frac{\frac{4\pi^2}{75}\!+\!\frac{1}{5}\!+\!\!\left(\frac{2F_x}{5 c}\!-\!\frac{2\pi^2}{75}\!-\!\frac{9}{10}\right)\!\!\frac{F_x}{c}}{\left(1-\frac{4\pi^2}{15}\right)\!\left(1-\frac{F_x}{2c}\right)^2} \nonumber\\&&\quad
-\frac{2K\Gamma+\mu^2}{2\Gamma c\!\left(1-\frac{F_x}{2c}\right)}\!\left(c\nabla_x\!\cdot\mathbf{F}\! +\!\mathbf{F}\!\cdot\!\nabla_x F_x-\frac{\Delta_xF_x}{2\beta}\right)\!, \label{eq53}
\end{eqnarray}
\begin{eqnarray}
\dot{c}=-\frac{7(2K\Gamma+\mu^2)}{20\beta(c-F_x)}\frac{1-\frac{4\pi^2}{105}}{\left(1-\frac{4\pi^2}{15}\right)\!\left(1-\frac{F_x}{2c}\right)\!} \nonumber\\
+\frac{\mathbf{F}\!\cdot\!\nabla_x F_x -(c-F_x)\nabla_x\!\cdot\!\mathbf{F}-\frac{\Delta_xF_x}{2\beta}}{2-\frac{F_x}{c}},
\label{eq54}
\end{eqnarray}
in which the functions of $C(t,x,y)$ have been averaged over the interval $\mathcal{I}$ and we have set $y=0$. We expect the CCEs \eqref{eq53}-\eqref{eq54} to describe the mean behavior of the soliton whenever it is far from the primary vessel and the tumor. We support this point of view with the numerical simulations reported in the next section.
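The following Python sketch shows how \eqref{eq45} and the CCEs \eqref{eq53}-\eqref{eq54} may be integrated in practice; the spatially averaged, TAF-dependent coefficients are frozen at illustrative values here, whereas in the actual computation they are recomputed from $C$ as described in the next section:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate the collective coordinate equations (53)-(54) together with
# dX/dt = c of Eq. (45). The averaged TAF-dependent coefficients
# (Fx, divF, FgradFx, lapFx) and mu are frozen at illustrative values.
beta, Gamma, mu = 5.88, 0.145, 9.0
Fx, divF, FgradFx, lapFx = 0.3, 0.1, 0.05, 0.0   # assumed averaged values

def cce_rhs(t, y):
    K, c, X = y
    a = 2 * K * Gamma + mu**2                     # shorthand 2*K*Gamma + mu^2
    r = Fx / c
    Kdot = (a**2 / (4 * Gamma * beta * (c - Fx)**2)
            * (4 * np.pi**2 / 75 + 0.2
               + (2 * Fx / (5 * c) - 2 * np.pi**2 / 75 - 0.9) * r)
            / ((1 - 4 * np.pi**2 / 15) * (1 - r / 2)**2)
            - a / (2 * Gamma * c * (1 - r / 2))
            * (c * divF + FgradFx - lapFx / (2 * beta)))     # Eq. (53)
    cdot = (-7 * a / (20 * beta * (c - Fx)) * (1 - 4 * np.pi**2 / 105)
            / ((1 - 4 * np.pi**2 / 15) * (1 - r / 2))
            + (FgradFx - (c - Fx) * divF - lapFx / (2 * beta))
            / (2 - r))                                        # Eq. (54)
    return [Kdot, cdot, c]                                    # Eq. (45)

# initial conditions as in the next section: t0 = 0.2, X0 = 0.22, c0 = X0/t0
sol = solve_ivp(cce_rhs, (0.2, 0.6), [40.0, 1.1, 0.22], rtol=1e-8)
print(sol.y[:, -1])    # K, c, X at t = 0.6
```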
\section{Numerical results}
\label{sec:numerical}
Based on numerical simulations \cite{bon16}, we expect that the vessel tip density approaches the soliton after some time. Initially there are few tips and the density is small so that the nonlinear anastomosis terms in \eqref{eq14} or in \eqref{eq31} are small. Tips proliferate and the anastomosis terms kick in. The soliton formation should be described as the solution of a semi-infinite initial-boundary value problem. Ideally, we would match the solution of the soliton formation stage with a stage of a soliton moving far from boundaries, which is the crucial stage described by Equations \eqref{eq53}-\eqref{eq54} for the collective coordinates. We expect the soliton solution to be an asymptotically stable solution of the vessel tip density equation \eqref{eq14} on the whole 1D real line and also for the 2D slab geometry considered in this paper (provided the primary vessel is at $x=-\infty$ and the tumor is at $x=+\infty$). For a slowly varying TAF density, the stable soliton will instantaneously adapt its shape and velocity according to the solution of the CCEs \eqref{eq53}-\eqref{eq54}.
In this paper, we will solve numerically the full equations \eqref{eq14} (with $q=1$) and \eqref{eq15} for the vessel tip density and the TAF density (deterministic description), which we will also obtain by ensemble averages from stochastic simulations as explained in \cite{ter16}. From these simulations, we will obtain the evolution of the soliton collective coordinates thereby reconstructing the marginal tip density at $y=0$ from \eqref{eq43}. The soliton provides a simple description of tumor induced angiogenesis that agrees with numerical simulations of the stochastic process and with numerical simulations of the deterministic description.
Both deterministic and stochastic simulations show that the soliton is formed after some time $t_0=0.2$ (10 hours) following angiogenesis initiation. To find the soliton evolution afterwards, we need to solve the CCEs \eqref{eq53}-\eqref{eq54}, whose coefficients are spatial averages over a certain interval $x\in\mathcal{I}$ that depend on the TAF concentration $C(t,x,y)$ and its derivatives calculated at $y=0$. The interval $\mathcal{I}$ should exclude regions affected by boundaries. We calculate the spatially averaged coefficients in \eqref{eq53}-\eqref{eq54} by: (i) approximating all differentials by second order finite differences, (ii) setting $y=0$, and (iii) averaging the coefficients from $x=0$ to 0.6 by taking the arithmetic mean of their values at all grid points in the interval $\mathcal{I}=(0,0.6]$. For $x>0.6$, the boundary condition at $x=1$ influences the outcome and therefore we leave values for $x>0.6$ out of the averaging.
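Steps (i)-(iii) amount to the following sketch for a single coefficient; the TAF profile used below is a made-up placeholder rather than a solution of \eqref{eq15}:

```python
import numpy as np

# Steps (i)-(iii): second-order finite differences for the derivatives of C
# at y = 0, then an arithmetic mean over the grid points of I = (0, 0.6].
# C_line stands for C(t, x, 0); the profile below is only a placeholder.
x = np.linspace(0.0, 1.0, 51)                    # grid step 0.02, as in the text
dx = x[1] - x[0]
C_line = np.exp(-((x - 1.0) ** 2) / 0.1)         # TAF increasing towards the tumor
dCdx = np.gradient(C_line, dx)                   # second-order differences
mask = (x > 0.0) & (x <= 0.6 + 1e-12)            # interval I; boundary region excluded
avg_dCdx = dCdx[mask].mean()                     # spatially averaged coefficient
print(avg_dCdx)
```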
\begin{figure}[h]
\begin{center}
\includegraphics[width=6.0cm]{fig1a.eps}
\includegraphics[width=6.0cm]{fig1b.eps}
\includegraphics[width=6.0cm]{fig1c.eps}
\end{center}
\vskip-3mm
\caption{ Evolution of the collective coordinates: (a) $K(t)$, (b) $c(t)$, and (c) $X(t)$. \label{fig1}}
\end{figure}
The initial conditions for the CCEs \eqref{eq45}, \eqref{eq53} and \eqref{eq54} are set as follows. $X(t_0)=X_0$ is the location of the marginal tip density maximum, $\tilde{p}(t_0,x=X_0,0)$. We find $X_0=0.22$ from the deterministic description and $X_0=0.2$ from the stochastic description. We set $c(t_0)=c_0=X_0/t_0$. $K(t_0)=K_0$ is determined so that the maximum marginal tip density at $t=t_0$ coincides with the soliton peak. This yields $K_0=173$ (deterministic description) and 39 (stochastic description). Solving the CCEs \eqref{eq45}, \eqref{eq53} and \eqref{eq54} with these initial conditions, we obtain the curves depicted in Figure \ref{fig1}.
\begin{figure}[h]
\begin{center}
\includegraphics[width=9.0cm]{fig2a.eps}
\includegraphics[width=9.0cm]{fig2b.eps}
\end{center}
\vskip-4mm \caption{Deterministic description: Comparison between the maximum value of $\tilde{p}(t,x,0)$ and its value as predicted by soliton collective coordinates. (a) Evolution of the maximum value of the marginal tip density (relative error smaller than 4.5\%). (b) Evolution of the position of the maximum marginal tip density on $[0,1]$ (at $t=20$ and 22 h, the absolute error is the space step in the numerical method, $\Delta x = 0.02$; at $t=24$ h, the error is $4\Delta x$). \label{fig2}}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=9.0cm]{fig3a.eps}
\includegraphics[width=9.0cm]{fig3b.eps}
\end{center}
\vskip-4mm \caption{Same as in Figure \ref{fig2} for the stochastic description. The zoom in Figure \ref{fig3}(a) corresponds to Figure \ref{fig2}(a) but we have drawn the same figure with a larger time span to show more clearly the time interval over which the soliton approximates the maximum marginal tip density. The relative error is smaller than $6.7\%$ for the maximum marginal tip density (calculated by ensemble average over 400 realizations \cite{ter16}), whereas the error in the predicted position of the maximum marginal tip density is $\Delta x = 0.02$ at 22h and $2\Delta x$ at 24 h. \label{fig3}}
\end{figure}
Using the soliton collective coordinates depicted in Figure \ref{fig1} and \eqref{eq43}-\eqref{eq44}, we reconstruct the marginal vessel tip density and find its maximum value and the location thereof for all times $t>t_0$. Figure \ref{fig2} shows that the soliton as predicted from the CCEs \eqref{eq45}, \eqref{eq53} and \eqref{eq54} compares very well with the tip density obtained by direct numerical simulation of the deterministic equations. An alternative way to find the coefficients of the CCEs and their proper initial conditions is to use ensemble averages of the stochastic process. Figure \ref{fig3} shows that such a reconstruction of the soliton agrees very well with the vessel tip density provided by ensemble averages of the stochastic process during the 14 hour time interval when soliton motion is not affected by boundaries. There is a large discrepancy between the maximum marginal tip density as predicted by the soliton and by the stochastic process during the first 10 hours of angiogenesis, which clearly marks the duration of the initial stage of soliton formation. After this stage, we note that the location of the maximum of the marginal tip density is very closely predicted by the location of the soliton peak as a function of time, both by using ensemble averages of the stochastic process as in Figure \ref{fig3} and by solving numerically the deterministic description as in Figure \ref{fig2}. This is also clearly shown in the reconstruction of the soliton marginal tip density depicted in Figure \ref{fig4}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=10cm]{fig4a.eps}\\
\includegraphics[width=10cm]{fig4b.eps}
\end{center}
\caption{Comparison of the marginal tip density profile to that of the moving soliton for (a) and (b): Deterministic description; (c) and (d): Stochastic description averaged over 400 replicas.
\label{fig4}}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=9.0cm]{fig5.eps}
\end{tabular}
\end{center}
\vskip-4mm \caption{ Position of the soliton peak density compared to that of the maximum marginal tip density for different replicas of the stochastic process. \label{fig5}}
\end{figure}
So far, our reconstructions have been based on ensemble averages or, what is quite similar, the marginal tip density as given by the deterministic description. In past work \cite{ter16}, we have shown that fluctuations about the mean are large and therefore the stochastic process is not self-averaging for a single realization: anastomosis precludes the formation of a large number of active tips that may enforce mean-field behavior. However a deterministic description is still possible for averages over a sufficiently large number of realizations of the stochastic process (four hundred realizations suffice), as explained extensively in \cite{ter16}. This raises an important question: {\em How well do these ensemble averages and the soliton construction represent single replicas of the stochastic process?} Figure \ref{fig5} gives a positive answer for the location of the soliton peak: The position of the soliton peak is a good approximation to the location of the maximum marginal tip density for different replicas of the stochastic process. While vessel networks may differ widely from replica to replica, the position of the maximum marginal tip density is about the same for different replicas. As the maximum of the marginal tip density is a good measure of the advancing vessel network, the soliton peak location also characterizes it. The existence of other seemingly self-averaging quantities related to the soliton is an open question.
\begin{figure}[h]
\begin{center}
\begin{tabular}{c}\hskip -0.4cm
\includegraphics[width=9.0cm]{fig6.eps}
\end{tabular}
\end{center}
\vskip -0.6cm \caption{ Marginal vessel tip density profiles at 24 (dashed lines) and 36 hours (solid lines) for $\beta=5.88$ (blue lines), and $\beta=29.4$ (red lines). \label{fig6}}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{c}\hskip -0.4cm
\includegraphics[width=9.0cm]{fig7.eps}
\end{tabular}
\end{center}
\vskip -1.6cm \caption{Comparison between the vessel networks of two replicas after 36 hours for (a) $\beta=5.88$, and (b) $\beta=29.4$. The TAF level curves have also been depicted. \label{fig7}}
\end{figure}
We can use the soliton construction as a simple means to evaluate the influence of new mechanisms on angiogenesis. For instance, suppose that some drug causes the friction coefficient $\beta$ to increase fivefold. Then the marginal tip density gets delayed, as shown by Figure \ref{fig6}. This can be evaluated easily and cheaply by solving the CCEs. What does this mean for replicas of the angiogenesis process? Figure \ref{fig7} displays the vessel networks formed after 36 hours for $\beta = 5.88$ and 29.4 in two different replicas of the stochastic process. For $\beta = 5.88$, the vessel network of one replica of the angiogenesis process has reached the tumor at $x=1$ after 36 hours, whereas for $\beta= 29.4$ the vessel network is only halfway to the tumor after that time. Had the increase in $\beta$ been the result of some therapy, we could have ascertained its merits by solving the CCEs and inferring the arrest of the vessel network from the result.
\section{Conclusions}
\label{sec:conclusions}
Previous work has shown that a simple stochastic model of tumor induced angiogenesis could be described deterministically by an integrodifferential equation of Fokker-Planck type with a linear birth term and a nonlinear death (anastomosis) term \cite{bon14,ter16}. Anastomosis keeps the number of vessel tips rather small (about one hundred) and therefore the vessel tip density has to be reconstructed from ensemble averages of the stochastic process, which is not self-averaging. Numerical simulations of stochastic and deterministic equations show that the vessel tip density advances from the primary vessel towards the tumor as a stable moving lump or angiton whose profile along the $x$ axis is soliton-like \cite{bon14,ter16,bon16}. An analytic formula for the longitudinal profile of the angiton (called the ``soliton'' in this paper) can be deduced by ignoring spatio-temporal variation of the tumor angiogenic factor and diffusion \cite{bon16}. This formula involves two collective coordinates that characterize the shape and velocity of the soliton \cite{bon16}.
In the present work, we have derived the reduced equation for the marginal tip density by means of a Chapman-Enskog method. We have deduced the differential equations for the collective coordinates whose terms involve spatial averages over the fully grown soliton far from the tumor. We can deduce these equations both from the deterministic description and from ensemble averages of the full stochastic model. In both cases, the soliton provides a good reconstruction of the deterministic marginal tip density or its version based on ensemble averages, provided the soliton is not too close to the tumor. As said before, fluctuations are large because anastomosis keeps a small number of active vessel tips at all times. Nevertheless, we have shown that the position of the maximum marginal tip density as given by the soliton is quite close to that given by any replica of the stochastic angiogenesis process. This indicates that the simple soliton construction yields good predictions of the evolution of the blood vessel network.
There are mechanisms not included in our stochastic conceptual model of angiogenesis. However, many mechanisms such as haptotaxis can be included by adding terms to the force $\mathbf{F}$ in the Langevin equation for the vessel tips that depend on additional continuum fields (fibronectin, matrix degrading enzymes, etc.; see, e.g., \cite{cap09}). The effects of anti-angiogenic factors could be treated by including additional reaction-diffusion equations and their effects on the vessel tips \cite{lev01}. Such terms can be straightforwardly incorporated into the equations for the soliton collective coordinates using the same methodology as explained in the present paper. There are other models that postulate reinforced random walks \cite{and98,pla03,man04} or cellular Potts models with Monte Carlo dynamics \cite{bau07,sci11,hec15} instead of Langevin equations to describe the extension of vessel tips. Insofar as Fokker-Planck equations can be derived from master equations in appropriate limits \cite{gar10} and branching and anastomosis are similar to those of our conceptual model, we could use the same methodology as in the present paper to study such models. Let us recall that the soliton solution comes through a balance of the birth and death terms, the convection term, and the time derivative term in the equation for the marginal tip density. These terms would also appear in special limits of the random walk or cellular Potts models. We consider the work presented in this paper a blueprint for using the soliton methodology to analyze more complex angiogenesis models and a first step to control angiogenesis through soliton dynamics.
\acknowledgments
We thank Vincenzo Capasso, Bjorn Birnir and Boris Malomed for fruitful discussions. This work has been supported by the Ministerio de Econom\'\i a y Competitividad grant MTM2014-56948-C2-2-P.
\section{Introduction}
\label{sec:intro}
Overwhelming evidence from astrophysical observations indicates that only 20\%
of the matter in the universe is made of ordinary matter, while the remaining
80\% is made of some form of matter called ``dark matter''~\cite{Bertone:2004pz}. Although these
indirect astrophysical observations convince us that dark matter exists, dark
matter has not yet been directly observed. The standard model of particle
physics, which has been very successful in explaining the properties of
ordinary matter, can neither explain dark matter's existence nor its
properties. The discovery and identification of dark matter would have a
profound impact on cosmology, astronomy, and particle physics. A leading dark
matter candidate consistent with all astrophysical data is a weakly interacting
massive particle (WIMP)~\cite{Goodman:1985,Jungman:1996,Freedman:2003}.
WIMPs could be studied through observations of ordinary-matter particles produced by DM annihilations in the halo of the Milky Way, through production of DM particles in high-energy collisions at accelerators such as the Large Hadron Collider (LHC), or through their interactions with atomic nuclei in specially designed detectors. Since the collision rates are expected to be very small, large detectors with low backgrounds and excellent detection capability for rare collisions are needed to detect
dark matter.
There are many direct detection experiments deployed in underground laboratories around the world.
An incomplete recent review describing these experiments can be found in Ref.~\cite{Akimov:2011za}.
When WIMPs scatter with atoms in a detection medium, they will recoil and generate kinetic motion
of atoms (heat), ionization (free electrons) and scintillation (de-excitation of excited electrons).
Direct detection experiments measure one, two, or even all three of these signatures.
Clearly, the convenience of measuring any of these signals depends on the choice
of material. In pure semiconductors, such as those used by the CDMS collaboration~\cite{CDMS} and others, it is easy
to measure the electric current generated by electrons as well as hole carriers. In the case of noble liquid detectors
(XENON100~\cite{XENON100}, LUX~\cite{LUX}, and others \cite{others}), a light signal is usually measured by photo multiplier tubes (PMTs); ionization electrons drifting in an external electric field are either detected through their charge or through electroluminescence. In crystals, the light intensity is
usually the only signal measured. In fact, the well-known DAMA/LIBRA experiment measured scintillation
light only~\cite{DAMA}. For heat measurements, the detector has to be kept at a very low temperature, typically tens of millikelvin, which is a cryogenic challenge, particularly for large masses.
Among all the direct detection experiments, the xenon dual-phase technology appears to be particularly promising. In fact, for the last 3--4 years, the XENON100 and LUX experiments
have produced the best limits over a wide range of WIMP masses~\cite{XENON100,LUX}.
There are many reasons why liquid xenon (LXe) appears to be a good choice. First,
the detection of both prompt scintillation
and ionization electrons in dual-phase mode allows not only discrimination between nuclear and electron
recoils, but also a fiducialization of events through the time projection chamber (TPC) technology. Second, xenon does not have long-lived radioactive isotopes and can be highly purified. Third, xenon has a large atomic mass, which entails a large
WIMP scattering cross section. WIMP-nucleus scattering is coherent and hence the cross section is proportional
to the square of the atomic mass number $A$. Moreover, xenon has a large $Z$ which improves self-shielding from external
gamma rays. Finally, xenon is not prohibitively expensive, allowing detector target masses to reach ton-scale
with reasonable cost. Xenon liquefaction temperature is around $-100^\circ$\,C, thus cryogenics is
relatively easy to manage.
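The coherent ($\sigma \propto A^2$) enhancement mentioned above can be made concrete with a minimal numerical illustration. This is our sketch, not part of the original text: real event rates also involve nuclear form factors and kinematic factors, which are ignored here, and the comparison target (argon) is chosen only as a familiar lighter noble liquid.

```python
# Illustrative only: relative spin-independent WIMP-nucleus cross-section
# enhancement from coherence (sigma proportional to A^2), comparing xenon
# with a lighter target. Form factors and kinematics are neglected.

def coherent_enhancement(a_target, a_reference):
    """Ratio of coherent cross sections for two target mass numbers."""
    return (a_target / a_reference) ** 2

A_XE = 131  # average xenon mass number
A_AR = 40   # argon, a common lighter noble-liquid target

print(coherent_enhancement(A_XE, A_AR))  # ~10.7: xenon vs argon, per nucleus
```
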
The Particle and Astrophysical Xenon (PandaX) collaboration was established in 2009 and was first supported by
the Ministry of Science and Technology in China through a 973-project and by the Ministry of Education
through a 985III-project at Shanghai Jiao Tong University (SJTU). The initial
collaboration consisted mainly of physicists from SJTU, from Shandong University and
from the Shanghai Institute of Applied Physics, Chinese Academy of Sciences. The collaboration was joined
later by groups from University of Maryland, Peking University, and University of Michigan in 2011, and by the Yalong River Hydropower Development Co. Ltd. in 2012. The collaboration has been further supported by the National Science Foundation of China (NSFC), and by some of the collaborating institutions.
The PandaX experiment uses a liquid-xenon detector system suitable for both
direct dark-matter detection and $^{136}$Xe double-beta decay search. Similar to the XENON and LUX experiments,
the PandaX detector operates in dual-phase mode, allowing detection of both prompt scintillation and
ionizations through proportional scintillation. The central time projection chamber will be staged, with
the first stage accommodating a target mass of about 120\,kg, similar to that of
XENON100. In stage II, the target mass will be increased to about 0.5\,ton. In the final stage, the detector can be upgraded to a multi-ton target mass. Most sub-systems and the stage-I TPC were developed in the particle physics lab at SJTU, and have been
transported to the China Jinping underground lab (CJPL) in August 2012. After successful installation, two engineering runs
were carried out in 2013. The system entered commissioning in December 2013
and has been collecting science data since late March 2014. A small prototype for PandaX was
developed and is running in the particle physics lab at SJTU~\cite{karl}.
The underground lab, CJPL, emerged from a government-led project to construct two large hydropower plants next to and through the
Jinping Mountain, Sichuan, China~\cite{jinping}. Jinping is located about 500\,km southwest of Chengdu, the capital of Sichuan
province. It can be accessed either by car from Chengdu, or by a short flight to Xichang, followed by a 1.5\,hr car
ride. A detailed description of CJPL can be found in Ref.~\cite{CJPL}.
Some discussion about the lab will be given in Sec. \ref{sec:background}.
In this paper, we describe the goals and the technical realization of the PandaX detector system.
In Sec. II, we describe the cryogenic system to liquefy, purify and maintain a ton-scale xenon detector.
In Sec. III, we consider the design and construction of the stage-I TPC. In Sec. IV, we discuss the properties and performance of the photomultiplier tube system. In Sec. V, we discuss the sources of background for the experiment, including cosmic rays, the shield for environmental neutrons and gamma rays, and the xenon distillation system. We also briefly discuss the background simulations. In Sec. VI, we describe the data taking electronics and data acquisition system (DAQ). We conclude the report with Sec. VII.
\section{Cryogenic and Gas-handling System}
\label{sec:cryogenics}
A reliable cryogenic and gas handling system is crucial for any noble-liquid detector system.
As the size of the detector increases, cryogenic stability, safety, and efficiency of gas handling and purification become increasingly important issues.
The PandaX goal is to build a system storing, circulating and cooling xenon gas up to 2--3\,tons.
So far the system has been set up and tested with about 500\,kg of xenon.
A detailed description of the cryogenics system has been published in Ref.~\cite{Li:2012zs}.
A schematic sketch of the cryogenic and gas handling systems is shown in Fig.~\ref{fig:cryosys} (left panel).
The detector is encapsulated in a double-walled cryostat for thermal insulation.
Additionally, seven layers of aluminized Mylar foil are used in the insulating vacuum to reduce the heat load into the cryogenic detector. The inner vessel is 0.75\,m in diameter and 1.25\,m high.
It is over-dimensioned for stage I to accommodate the future upgrade to stage II. It contains appropriate filler material to reduce the amount of xenon needed during operation. The outer vessel is made of 5\,cm-thick high-purity oxygen-free copper.
The cryogenic system was designed as a series of independent modules, each
with a specific function. The modules are connected to the detector via common
tubing used by all modules, not unlike the bus structure in a computer system connecting all devices. This modular cryogenic system was named the Cooling Bus (Fig.~\ref{fig:cryosys}, right panel).
Each module performs a separate function such as evacuation, heat exchange, condensation, emergency cooling and sensor mounting.
The gas handling system consists of a gas storage and recovery system, a circulation pump, a purification getter and a heat exchanger.
\begin{figure*}[!ht]
\begin{center}
\includegraphics[width=0.49\textwidth]{cryo1.jpg}
\includegraphics[width=0.49\textwidth]{cryo2a.jpg}
\end{center}
\caption{Left panel: Schematic layout of the cryogenic and the gas system for PandaX. Right panel: Photograph of the Cooling Bus for liquefaction and recirculation. In the foreground is the getter for purification.}
\label{fig:cryosys}
\end{figure*}
The detector and the Cooling Bus are connected by concentric tubes on the evacuation module consisting of a
fore-pump and a turbo pump for the inner vessel, and a pair of pumps and a zeolite-filled cryo pump for the outer vessel.
Normally the vacuum jacket is maintained by the fore-pump and the turbo pump for the outer vessel.
A cryo-sorption pump is also connected to continue
pumping the outer vacuum in case of power failures.
The system is cooled by a single Iwatani PC150 pulse tube refrigerator (PTR), driven by a 7.3\,kW air-cooled M600 compressor from Oxford Instruments.
Its cooling power was measured to be around 180\,W~\cite{Li:2012zs}.
It is mounted on a cylindrical copper block made of oxygen-free copper.
The copper block closes off the inner chamber and acts as a cold-finger for liquefying the xenon gas.
The PTR can thus be serviced or replaced without exposing the detector volume to air.
A copper cup that serves as a heater is installed between the PTR cold-head and the cold-finger. The temperatures of the cold-head and
cold-finger are measured by Pt100 temperature sensors.
A PID temperature controller (Lakeshore 340) regulates the heating power to keep the temperature of the cold-finger constant.
In case of failure of the PTR system, e.g., a power failure, a pressure sensor starts the flow of LN$_2$ through a cooling coil once the pressure exceeds a set point about 0.5\,bar above the normal operating pressure. This cooling continues until the xenon pressure drops below a second set point, about 0.5\,bar below the normal operating pressure. The pressure sensor and the LN$_2$ control valve are powered by a dedicated uninterruptible power supply (UPS). The warming up and LN$_2$ cooling are shown by their effect on the xenon pressure in Fig.~\ref{fig:ecool}. Note that the emergency cooling system is always operational but normally does not contribute to the detector cooling, since the set points are never reached. This cooling can continue indefinitely if sufficient LN$_2$ is available. As the ultimate safety device, a burst disc limits the maximum pressure in the detector. The burst pressure of 3.5\,bar (absolute) is sufficiently above the emergency set point not to trigger when LN$_2$ cooling is available, yet low compared with the maximum allowable pressure of the PMTs. Since the burst pressure is rather critical for safeguarding the xenon and the PMTs, a burst disc with a precisely controlled pressure rating of $\pm 5$\% was chosen.
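The two-set-point behavior described above is a simple hysteresis loop, and can be sketched as follows. This is our illustration, not the actual control implementation; the normal operating pressure below is an assumed placeholder, since the text only specifies the $\pm 0.5$\,bar offsets around it.

```python
# Sketch of the emergency LN2 cooling hysteresis described in the text:
# the valve opens when the xenon pressure rises 0.5 bar above the normal
# operating pressure and closes once it falls 0.5 bar below it.
# P_OPERATING is an assumed value for illustration only.

P_OPERATING = 2.0           # bar, assumed normal operating pressure
P_HIGH = P_OPERATING + 0.5  # open the LN2 valve above this
P_LOW = P_OPERATING - 0.5   # close the LN2 valve below this

def update_valve(pressure, valve_open):
    """Return the new valve state for the current pressure (hysteresis)."""
    if pressure >= P_HIGH:
        return True
    if pressure <= P_LOW:
        return False
    return valve_open  # between the set points: keep the previous state

# Example trace: warming up after a power failure, then LN2 cooling
# brings the pressure back below the lower set point.
state = False
for p in (2.0, 2.3, 2.6, 2.2, 1.7, 1.4):
    state = update_valve(p, state)
```

Note that the valve state is retained between the two set points, so the system does not chatter around a single threshold.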
\begin{figure*}[!ht]
\begin{center}
\includegraphics[width=0.90\textwidth]{cryo4.png}
\end{center}
\caption{Results of an emergency cooling system test: Inner pressure as a function of time when the electric power is off.}
\label{fig:ecool}
\end{figure*}
A heat-exchange module is used as part of the purification system which is designed to purify the liquid xenon continuously.
Constantly recirculating the xenon through a high temperature getter (SAES, PS4-M750-R-2, Max: 100\,SLPM)
gradually improves purity
and removes electro-negative molecules originating from out-gassing of the surfaces of the detector materials.
The impurity level in LXe determines the attenuation length for scintillation light and the lifetime of drifting charges. The most common electronegative impurities are H$_2$O, O$_2$, CO$_2$, and CO.
The gas recirculation system is driven by a double diaphragm pump (KNF, PM26937-1400.12) and/or
a custom-made Q-drive pump~\cite{wang}. The two pumps are installed in parallel and can back up each other in case of failure.
The flow rate in the current operation is about 30\,SLPM. The setup is sufficiently powerful to purify xenon gas in a ton-scale detector.
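As a rough cross-check of the quoted flow rate, one can estimate how long a single volume exchange of the xenon inventory takes. This estimate is ours, not from the text: it uses the ideal-gas xenon density at standard conditions, and purification in practice requires several volume exchanges through the getter.

```python
# Rough estimate of the time for one full volume exchange at the quoted
# recirculation rate of 30 SLPM (standard liters per minute), using the
# ideal-gas density of xenon at 0 C and 1 atm. Illustrative only.

MOLAR_MASS_XE = 131.29  # g/mol
MOLAR_VOLUME = 22.414   # L/mol at standard conditions
RHO_STD = MOLAR_MASS_XE / MOLAR_VOLUME  # ~5.9 g per standard liter

def turnover_time_hours(mass_kg, flow_slpm=30.0):
    """Time (hours) for one volume exchange of `mass_kg` of xenon."""
    grams_per_hour = flow_slpm * RHO_STD * 60.0
    return mass_kg * 1000.0 / grams_per_hour

print(turnover_time_hours(500))   # ~47 h for a 500 kg fill
print(turnover_time_hours(1000))  # ~95 h per exchange at the ton scale
```
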
For ton-scale operation, hundreds of thousands of liters of xenon gas have to be recovered and stored at room temperature.
At Jinping lab, custom-made 220\,L steel high-pressure bottles and LN$_2$ dewars are used for this purpose.
Their working pressure is 8\,MPa, with each dewar storing about 250\,kg xenon gas.
These bottles can be cooled down by filling LN$_2$ into the dewars to recover xenon gas from the detector.
Tests showed that it takes 2--3 days to recover about 500\,kg xenon in the detector for the \mbox{stage-I} experiment.
To control the LXe level in the inner vessel, a volume of about 10\,L, called the overflow chamber, is used. The LXe can flow through a pipe from the TPC to the overflow chamber. A Bowden cable is attached to the end of the pipe, so that the height of the pipe's outlet can be tuned from outside. With this method, the liquid level in the TPC can be controlled with a precision of about 0.1\,mm. The liquid flowing into the overflow chamber is recirculated through the getter and liquefied back into the inner vessel.
The cryogenic system for the PandaX LXe detector was tested extensively in the SJTU particle physics lab in 2012. After the move to CJPL in August 2012, two engineering runs were performed. The system was filled with 450\,kg of xenon, which was later recovered. It takes approximately 3\,days to liquefy the xenon gas with LN$_2$-assisted cooling.
Each time the system performed as expected.
Fig.~\ref{fig:running} shows typical values for inner pressure, outer vacuum level, and cold finger temperature as a function of time
as the system is running. These values can be accessed through a slow-control system via the internet. They indicate stable running conditions.
\begin{figure*}[!ht]
\begin{center}
\includegraphics[width=0.95\textwidth]{cryo2.jpg}
\includegraphics[width=0.95\textwidth]{cryo3.jpg}
\end{center}
\caption{Inner pressure, outer vacuum level and temperature of the cold-finger vs time, with 450\,kg liquid xenon in the inner vessel.}
\label{fig:running}
\end{figure*}
\section{PandaX Stage-I TPC}
\label{sec:tpc}
The time projection chamber for the PandaX stage-I experiment is optimized for light collection efficiency to achieve a low energy threshold
for a high sensitivity to light dark matter at around 10\,GeV/c$^2$. The diameter of the field cage is 60\,cm and the drift length is 15\,cm, accommodating a liquid xenon target mass of 120~kg.
For stage II, the design is similar except for the drift length which will be increased to 60\,cm.
The mechanical design for the TPC is shown in Fig.~\ref{fig:design-stage1a}. There are three main parts to the TPC: the top PMT array, the field cage and the bottom PMT array. In the following subsections, we will discuss
in detail its construction.
\begin{figure*}[!ht]
\begin{center}
\includegraphics[width=0.40\textwidth]{InnerStructure-Run5.pdf}
\includegraphics[width=0.59\textwidth]{InnerStructure-Run5_cross-section.pdf}
\end{center}
\caption{Mechanical design of stage-I TPC showing the top PMT array, the field cage and the bottom PMT array. Left panel: Full view showing the completely integrated TPC. Right panel: Cross-sectional view showing the detailed components.}
\label{fig:design-stage1a}
\end{figure*}
\subsection{Field cage}
The field cage, shown in Fig.~\ref{fig:electrodes}, is the main part of the TPC and consists of a teflon cylinder of 15\,cm height and 60\,cm diameter. The cylinder is built from 36 pieces of 5-mm thick teflon panels interlocking with adjacent panels. Among them, 18 panels are designed with 5-mm thick tablets on their top and bottom ends, by which each of them is fixed on a teflon support by teflon bolts. These 18 teflon supports serve as the bearing carrier of the field cage and hold all other components besides the teflon panels. There are altogether four electrodes, named the anode, gate grid, cathode and screening electrode from top to bottom in the TPC. Each electrode has an inner diameter of 60\,cm, the same as that of the field cage.
To fabricate the gate grid, the cathode and the screening electrodes, stainless steel wires (304 or 316L) with 200\,$\mu$m diameter are pressed between two stainless steel rings by screws (see Fig.~\ref{fig:electrodes} center). The rings are made of 316L stainless steel with 3\,mm thickness and 15\,mm width. The spacing between two wires is 5\,mm, which results in 96\% optical transparency for the electrode. Each wire has a tension of 2.8\,N (43\% of yield strength), provided by 288\,g weight rods hanging on both sides of each wire during production. The anode electrode is made of a photo-etched mesh with crossing bars of 200\,$\mu$m width and 5\,mm spacing, providing 92\% optical transparency. The anode mesh is fixed above the grid with 5-mm thick teflon rings (see Fig.~\ref{fig:electrodes} right). Thus the distance between the anode mesh and the gate grid wires is 8\,mm.
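The quoted transparencies follow directly from the element width and pitch: a parallel-wire grid shadows a fraction $d/p$ of the light, while a crossed mesh shadows in both directions. The short check below is ours, using only numbers from this paragraph (200\,$\mu$m elements on a 5\,mm pitch).

```python
# Geometric check of the quoted optical transparencies of the electrodes.
# A wire grid blocks a fraction d/pitch; a crossed mesh blocks that
# fraction along each of the two perpendicular directions.

D = 0.200    # mm, wire diameter / mesh bar width (from the text)
PITCH = 5.0  # mm, wire/bar spacing (from the text)

def wire_grid_transparency(d, pitch):
    return 1.0 - d / pitch

def crossed_mesh_transparency(d, pitch):
    return (1.0 - d / pitch) ** 2

print(wire_grid_transparency(D, PITCH))    # 0.96, as quoted for the wire electrodes
print(crossed_mesh_transparency(D, PITCH)) # ~0.92, as quoted for the anode mesh
```
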
\begin{figure*}[!ht]
\begin{center}
\includegraphics[height=1.45in]{InnerStructure_TPC-Run5.pdf}\hspace*{2mm}
\includegraphics[height=1.45in]{wire-electrode.jpg}\hspace*{2mm}
\includegraphics[height=1.45in]{anode-mesh-teflon-ring.jpg}
\end{center}
\caption{Left panel: Mechanical design of the field cage showing the teflon panels and supports, three electrodes and the field shaping rings. Center panel: Photograph of the wire electrode for the gate grid, the cathode and the screening electrodes. Right panel: Photograph showing the anode mesh is fixed above the grid ring with a teflon ring in between.}
\label{fig:electrodes}
\end{figure*}
The anode, the gate grid and the cathode electrodes are fixed on the teflon supports as shown in Fig.~\ref{fig:electrodes}. The teflon supports are mounted to the top copper plate by PEEK bolts so the field cage is integrated with the top PMT array. To make the drift field uniform, 14 pieces of OFHC copper shaping rings were arranged outside the teflon panels, between the gate grid and the cathode in equal spacing. They are made of OFHC copper tubes with 6\,mm outer diameter and 5\,mm inner diameter, and clamped by teflon supports and teflon panels (see Fig.~\ref{fig:resistor}). 500\,M$\Omega$\, surface-mount resistors (SM20D from Japan FineChem Company, Inc.), rated for maximum voltage of 5 kV, are tied by bare copper wires between each two adjacent shaping rings, between shaping rings and cathode or gate grid, and between gate grid and anode (see Fig.~\ref{fig:resistor}). Two resistor chains are mounted on the electrodes to prevent interruption of the experiment if a single resistor breaks during operation. A photograph showing the field cage, integrated with the top PMT array, installed in the detector is shown in Fig.~\ref{fig:resistor}.
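The divider chain described above can be sanity-checked with a back-of-the-envelope calculation. The step count and total voltage below are our assumptions for illustration, not values stated in this paragraph: we take one chain of 15 equal 500\,M$\Omega$ steps between gate and cathode (gate to first ring, 13 ring-to-ring gaps, last ring to cathode) and 15\,kV across the chain, as in the initial operation described below.

```python
# Back-of-the-envelope check of the field-shaping divider chain.
# Assumptions (not from the text): 15 equal steps between gate and
# cathode, 15 kV total across the chain.

R_STEP = 500e6   # ohm, surface-mount resistor value from the text
N_STEPS = 15     # assumed: gate-ring1 + 13 ring-ring gaps + ring14-cathode
V_TOTAL = 15e3   # volt, assumed gate-to-cathode potential difference

current = V_TOTAL / (N_STEPS * R_STEP)     # chain current, A
v_per_resistor = V_TOTAL / N_STEPS         # volt across each resistor
p_per_resistor = v_per_resistor * current  # power dissipated per resistor, W

print(current)         # 2e-06 A: negligible load on the HV supply
print(v_per_resistor)  # 1000 V: well under the 5 kV resistor rating
print(p_per_resistor)  # 0.002 W: negligible heat load into the LXe
```

The check confirms that each resistor sits comfortably within its 5\,kV rating and that the chain adds a negligible heat load.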
\begin{figure*}[!ht]
\begin{center}
\includegraphics[height=1.5in]{TPC-11.jpg}\hspace*{3mm}
\includegraphics[height=1.5in]{resistor-anode-gate.jpg}\hspace*{3mm}
\includegraphics[width=0.26\textwidth]{TPC-from-top.jpg}
\end{center}
\caption{Photographs showing the resistors between the shaping rings, the gate grid and the cathode (left), a resistor mounted between anode and the gate grid (center), and the field cage, together with the top PMT array, hanging from the top flange of the inner vessel (right).}
\label{fig:resistor}
\end{figure*}
\subsection{PMT arrays}
The bearing carrier of the top PMT array is an 8-mm thick oxygen-free high-conductivity (OFHC) copper plate which holds 143 Hamamatsu R8520-406 PMTs.
All top PMTs are fixed directly on bases by pin-socket connections, while all bases are mounted on the OFHC copper plate by spring-loaded screws. From the edge to the center, the 143 PMTs are arranged uniformly in 6 concentric rings around a center PMT. The spacing of adjacent rings is 52.5\,mm and the number of PMTs in each ring is 36, 36, 28, 20, 14 and 8 from the edge to the center. To achieve a better position reconstruction for events at large radius, the diameter of the outermost PMT ring is 630\,mm, which is larger than that of the field cage. The teflon reflector covering the space between PMTs is made of five pieces of 6-mm thick teflon plates with openings according to the arrangement of PMTs. These teflon plates interlock with each other to prevent gaps from appearing due to the shrinkage of teflon at LXe temperature, and are mounted on the OFHC copper plate by PEEK bolts.
The bottom PMT array is supported by an 8\,mm thick OFHC copper plate which holds the 37 Hamamatsu R11410-MOD PMTs. Unlike the top PMTs, each bottom PMT is held by a pair of stainless steel clamps, fixed in between two 8\,mm thick teflon clamps which are fixed on the OFHC copper plate by stainless steel bolts, and all bases are fixed directly on each PMT by pin-socket connection, as shown in Fig.~\ref{fig:botpmtpic}. Teflon reflectors are mounted between the PMT windows to increase light collection. The teflon reflector is made up of 7 pieces of 2.4-mm thick teflon sheets, which are fixed on the OFHC copper plate by teflon bolts. To reduce the electric field strength near the bottom PMT photocathodes to acceptable levels, the grounded screening electrode wires are placed 5\,mm above the bottom PMT windows. As in the field cage, 36 pieces of shorter teflon panels are designed to improve light collection efficiency, and fixed on the OFHC copper plate by sharing the same teflon bolts with the screening electrode. The entire array with all PMTs assembled on the copper plate is shown in Fig.~\ref{fig:botpmtpic}. The PMT HV and signal cables are connected on the bottom flange of the inner vessel. To minimize the empty space below the PMTs and to shield gamma rays from the bottom flange, a copper filler is installed below the PMT array (see Fig.~\ref{fig:botpmtpic}).
\begin{figure*}[!ht]
\begin{center}
\includegraphics[height=1.55in]{InnerStructure_BottomPMTArray-Run5.pdf}\hspace*{3mm}
\includegraphics[height=1.55in]{bottom-pmt-array.jpg}\hspace*{3mm}
\includegraphics[height=1.55in]{bottom-pmt-on-filler.png}
\end{center}
\caption{Left panel: Mechanical design of the bottom PMT array and the teflon reflectors. Center panel: Photograph showing all 37 R11410 PMTs installed on the copper plate of the bottom PMT array. Right panel: Photograph showing the bottom PMTs with the copper filler and how they are connected to the bottom feedthroughs.}
\label{fig:botpmtpic}
\end{figure*}
\subsection{Initial operation of the TPC}
For dual-phase operation of the PandaX TPC, the anode is connected to ground, while the gate and cathode grids are held at negative high voltage (HV). The cathode is connected to a custom-made HV feedthrough, while the gate grid is connected to a commercial feedthrough rated up to 10\,kV. During the engineering run in liquid xenon with no PMT signal readout, the high voltage on the cathode reached 36\,kV before electrical breakdown, and the high voltage on the gate grid reached 6\,kV (the maximum of the power supply). However, during full operation of the detector with PMT readout, micro-discharge signals were observed by the PMTs when the cathode voltage reached 20\,kV; similar discharge signals were observed when the gate grid was above 5\,kV. These discharges produce many small signals at the level of one or a few photoelectrons, preventing useful data taking. This limits the drift field to 1\,kV/cm across the 15\,cm drift gap.
To extract electrons from the liquid into the gas phase, we set the liquid level between the anode and the gate grids. The liquid level can be adjusted with an overflow point controlled by an external motion feedthrough. The level should be set at least 3~mm above the gate wires so that it covers the 3~mm-thick stainless steel ring, avoiding discharges from any sharp point on the ring should it be exposed to the gas. We set the level 4~mm above the gate grid for the initial operation of the TPC, giving a 4~mm gas gap. For the extraction field in the gas above the liquid xenon, a 5~kV voltage between the anode and gate grids provides an extraction field of 8.3~kV/cm, corresponding to an electron extraction yield of 90\% according to~\cite{Aprile:JPG14}.
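The field values quoted above can be reproduced with a short calculation. This is our check, not part of the text: the extraction-field estimate treats the anode-gate gap as a series dielectric stack of 4\,mm gas and 4\,mm liquid, and assumes a relative permittivity of about 1.96 for liquid xenon (a literature value, not given here).

```python
# Reproduces the quoted drift and extraction fields. Assumption: relative
# permittivity of liquid xenon EPS_LXE ~ 1.96 (literature value). Geometry
# from the text: 15 cm drift gap; 8 mm anode-gate gap with the liquid
# level 4 mm above the gate (4 mm gas + 4 mm liquid).

EPS_LXE = 1.96  # assumed relative permittivity of liquid xenon

# Drift field: (cathode - gate) voltage across the 15 cm drift gap.
drift_field = (20e3 - 5e3) / 15.0  # V/cm

# Extraction field in the gas: continuity of the displacement field across
# the liquid surface gives E_gas = V / (d_gas + d_liq / eps_liq).
d_gas, d_liq = 0.4, 0.4                  # cm
e_gas = 5e3 / (d_gas + d_liq / EPS_LXE)  # V/cm

print(drift_field)  # 1000 V/cm, i.e. the quoted 1 kV/cm
print(e_gas)        # ~8.3e3 V/cm, matching the quoted 8.3 kV/cm
```
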
Following the adjustment of high voltages on the cathode and gate grids, both S1 and S2 signals are observed by the PMTs after the liquid xenon reached a good purity.
A typical S1--S2 waveform summed over all PMTs, and the S2 signal distribution among the PMTs, for a single-site event is shown in Fig.~\ref{fig:waveform}.
\begin{figure*}[!ht]
\begin{center}
\includegraphics[height=2.8in]{waveform.pdf}\hspace*{0mm}
\end{center}
\caption{The summed waveform and S2 signal distribution on all PMTs of a typical single-site event during the calibration run.}
\label{fig:waveform}
\end{figure*}
\section{Photomultiplier system}
\label{sec:pmt}
PandaX uses specially developed PMTs to detect the prompt and proportional scintillation light. The photomultiplier system has to satisfy many requirements: good quantum efficiency for the VUV light (178\,nm) from xenon, low radioactivity, single-photoelectron (SPE) resolution, good timing resolution, suitability for cryogenic operation ($-100^\circ$\,C) and for high-pressure operation ($>$3\,atm), and minimal outgassing from the bases and cables. These requirements are met by the one-inch Hamamatsu R8520-406 and by the three-inch Hamamatsu R11410-MOD photomultiplier tubes, which instrument the top and
bottom photomultiplier arrays, respectively. In this section, we describe various aspects
of the PMT system, including the basic properties, bases, high voltage and decoupler, feedthrough and cabling, calibration and test results.
\subsection{Photomultiplier tubes}
The Hamamatsu model R8520-406 is a 10-stage, one-inch square photomultiplier tube,
rated for a temperature range of $-110^\circ$\,C to $+50^\circ$\,C, and 5-atm pressure resistance. The
typical gain is $10^6$ at a HV setting of 800\,V. The
cathode window is made of synthetic silica, and the cathode material is bialkali, yielding an excellent quantum efficiency of about 30\% at 175\,nm. The radioactivity of this tube has been measured by the XENON100 collaboration in Ref.~\cite{ref:Xenon_rad}.
The Hamamatsu model R11410-MOD with ceramic stem is a 12-stage, three-inch circular photomultiplier
tube, also rated
for LXe temperature with a typical gain of $5\times10^6$ at 1500\,V.
The maximum pressure rating, updated in 2011, is
0.4\,MPa (absolute)~\cite{ref:R11410-MOD}. The quantum efficiency is $>$30\% at
175\,nm. Radiopurity measurements were performed
elsewhere~\cite{ref:Xenon_rad}, as well as in our own counting station at CJPL (see Sec.~\ref{Sec:counting-station}).
Note that for low-temperature operation, the flying leads were cut to a length of 10\,mm from the ceramic stem for attachment to the voltage divider.
\subsection{Voltage dividers and bases}
Positive HV voltage dividers (bases)
are chosen to put the photocathode at ground a) to reduce
the noise level and improve the single photoelectron (SPE)
resolution, b) to eliminate potential
interference with the electric field profile of the TPC, and c) to allow
bringing the signal and HV into and out of the vessel through the same coaxial cable
(decouplers will be placed outside the vessel) to minimize the
number of cable feedthroughs.
\begin{figure}[!htb]
\centering
\subfigure[~R8520-406]
{
\label{fig:R8520_sch}
\includegraphics[width=0.9\textwidth, angle=0]{1-in-1.pdf}
}
\subfigure[~R11410-MOD]
{
\label{fig:R11410_sch}
\includegraphics[width=0.9\textwidth, angle=0]{3-inch-1.pdf}
}
\caption{PMT base schematics for (a) the R8520-406 and (b) the R11410-MOD bases.
}
\label{fig:base_sch}
\end{figure}
The design of the voltage divider for the R8520-406 PMT follows
recommendations from Hamamatsu, with schematics
shown in Fig.~\ref{fig:R8520_sch}.
To limit the heat load for cryogenic operation, the total resistance of
the divider chain is 13\,MOhm, so that heat power from each base is only 0.05\,W
under the normal voltage setting (800\,V). The base is back-terminated
with a 100\,kOhm resistor R$_{16}$, which increases both the charge
collection at the output end and the low-frequency bandwidth,
which is critical to minimize signal distortion for S2-type signals.
With the rest of the frontend electronics terminated at 50\,Ohm, this results in some signal reflection
(see Sec.~\ref{sec:bench}). A 10\,nF capacitor C$_{4}$
between the cathode and anode is necessary to remove signal oscillation.
The design of the R11410-MOD base is very similar to that of the R8520-406 base, with its schematic
shown in Fig.~\ref{fig:R11410_sch}. For all base capacitors,
low background ceramic X7R capacitors from Kyocera Inc., rated at 1\,kV, were selected.
Two capacitors were put in series between the cathode and the anode for the R11410-MOD base.
To reduce radioactivity, C$_{2}$ and
C$_{3}$ were removed from both bases. Signal distortion of typical S2 pulses was
measured to be negligible (see Sec.~\ref{sec:bench}).
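The quoted heat load of the divider chain follows directly from Ohm's law; a minimal sketch, using only the resistance and operating voltage stated above:

```python
# Sketch: check the quoted heat load of the R8520-406 divider chain.
# Values from the text: 13 MOhm total chain resistance, 800 V operating HV.
V_hv = 800.0           # operating high voltage [V]
R_chain = 13e6         # total divider chain resistance [Ohm]

P = V_hv**2 / R_chain  # DC power dissipated in the chain [W]
print(f"{P*1e3:.0f} mW per base")  # ~49 mW, i.e. the quoted ~0.05 W
```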
A photograph of an R8520-406 base is shown in Fig.~\ref{fig:bases}.
Several technical issues were addressed in its construction.
Cirlex, a kapton-based material, was chosen as the base material due to its good radiopurity and low outgassing characteristics. Pure silver tracks were deposited onto the
PCB without other add-ons. Ceramic capacitors from Kyocera,
and lead-free soldering tin (Sil-Fos) from Lucas-Milhaupt/Handy \& Harman
were selected and used on the base to minimize radioactivity.
KAP3, a UHV coaxial cable
from MDC Inc., was selected as the signal/HV cable. The receptacles for the
PMT pins were chosen from Mil-Max Inc. The base of the R11410-MOD PMT was
constructed in a very similar way, as shown in the right panel of Fig.~\ref{fig:bases}.
\begin{figure}[!htbp]
\centering
\subfigure[~R8520-406]
{
\includegraphics[height=2.2in]{R8520_base_pic.pdf}
}
\subfigure[~R11410-MOD]
{
\includegraphics[height=2.2in]{R11410_base_pic.pdf}
}
\caption{Photographs of the PMT bases for (a) the R8520-406 and (b) the R11410-MOD models.
}
\label{fig:bases}
\end{figure}
The PMTs and bases were radioassayed in the HPGe counting station
in Jinping,
with results summarized in Sec.~\ref{Sec:counting-station}. The radioactivity levels
from the PMTs are in agreement with those reported in Ref.~\cite{ref:Xenon_rad}.
The high
$^{238}$U/$^{232}$Th/$^{40}$K content in the one-inch base is likely due to the
particular type of pin receptacles used, since other material components
are identical to those on the three-inch bases.
\subsection{Signal-HV decoupler and high voltage system}
The signal (fast pulses) and the DC high voltage are
decoupled outside the detector via a decoupler module.
A similar design from the Daya Bay experiment~\cite{ref:decoupler} was followed.
A schematic sketch of the decoupler is shown in Fig.~\ref{fig:decoupler} (left).
The decoupling capacitor, rated for 2\,kV, is chosen to be 100\,nF based on
a SPICE simulation and bench tests to minimize the signal
distortion for S2-like signals. At the input
of the DC high voltage, a 3-stage high pass filter is implemented to remove
the ripples from the high voltage supply.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.45\textwidth]{decoupler_diagram2.pdf}
\includegraphics[width=0.45\textwidth]{decoupler_pic.pdf}
\caption{Left panel: Schematic diagram of a single channel signal-HV decoupler circuit. Right panel: Photograph of a 48-channel decoupler module.}
\label{fig:decoupler}
\end{figure}
A photograph of a 48-channel decoupler module (4 U) is
shown in Fig.~\ref{fig:decoupler} (right).
Each module consists of four
12-channel PCBs. The high voltage
inputs into the module are through four China-standard mil-spec DB25 connectors, tested for 2\,kV. Cables leading to PMTs are through
48 SHV connectors on the back panel, and the decoupled signals to the electronics
are through the 48 BNC connectors on the front panel.
The PMT high voltage system is from CAEN SpA, with a
SY1527LC~\cite{ref:SY1527} main frame
and four A1932AP~\cite{ref:A1932A} modules, each supplying 48 HV channels
up to 3\,kV. The ground of these HV channels is configured to float
relative to the ground of the main frame. The output connector on each module is a
52-pin HV connector by Radiall. A custom fan-out cable is made
to connect each Radiall connector to four DB-25 connectors, as the
side input to the decoupler box (see Fig.~\ref{fig:decoupler}).
To reduce the 200\,kHz noise from the HV supply, additional RC filters
were implemented before the HV enters into the decoupler box.
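The choice of the 100\,nF decoupling capacitor can be illustrated with a simple high-pass estimate; the 50\,Ohm frontend termination is an assumption here (the actual value was validated by the SPICE simulation and bench tests mentioned above):

```python
import math

# Sketch: corner frequency of the high-pass formed by the 100 nF decoupling
# capacitor (from the text) and an assumed 50 Ohm frontend termination.
C = 100e-9   # decoupling capacitor [F]
R = 50.0     # assumed termination resistance [Ohm]

f_c = 1.0 / (2.0 * math.pi * R * C)   # high-pass corner frequency [Hz]
print(f"corner frequency ~{f_c/1e3:.0f} kHz")  # ~32 kHz
```

A few-$\mu$s-wide S2 pulse has most of its spectral content above this corner, consistent with the small distortion observed in the bench tests.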
\subsection{Cabling}
The PandaX PMT cabling scheme is shown in Fig.~\ref{fig:cabling}.
As mentioned previously, the KAP3 cables connect to individual PMT bases. For the top
PMT bases, the cables exit the inner vessel through six
48-pin double-sided high voltage CF35 feedthrough flanges by Kyocera.
The lower PMT feedthroughs
are two commercial double-ended 41-pin low temperature HV feedthroughs by MPF Products Inc.
The connectors to these feedthroughs are custom-made with PEEK as insulator
and lead-free sockets by TE/AMP.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.9\textwidth]{cabling_scheme.pdf}
\caption{Schematic overview of the PandaX cabling scheme.}
\label{fig:cabling}
\end{figure}
The RG316 coaxial cables carry the PMT signals/HV into the outer vessel vacuum.
From the feedthrough flange on the inner vessel, every six cables
are grouped with the other end soldered onto a male China-standard
mil-spec DB15 connector, with core and ground individually separated.
A custom cable assembly, consisting of 12 RG316 cables,
connects two of these connectors to a 24-pin double-sided vacuum feedthrough
by LEMO Inc.
Eight such LEMO feedthroughs are sealed against an ISO160 flange on the
outer vessel via o-rings. For the
bottom PMT cables, a cable assembly with RG316 cables connects
the bottom feedthrough with another MPF double-sided 41-pin feedthrough mounted
on the wall of the outer vessel.
The cables outside the outer vessels are RG316 as well, each connecting to the
decoupler modules through individual SHV connectors. The total cable length
from the decoupler to the PMT base is approximately 5\,m.
\subsection{LED calibration system}
A fiber optics system is installed in the detector to carry external
LED light pulses into the detector for single photoelectron calibration.
Optical fibers feed into the inner vessel through commercial ultra-high
vacuum fiber feedthroughs. The open ends of the fibers were inserted
into three teflon rods
mounted on the wall of the TPC.
Three external blue LEDs are driven by custom-built
pulsers~\cite{ref:LEDPulser}. Fast light pulses ($<$10\,ns) are triggered by a
TTL pulse, which also serves as the external trigger of the frontend electronics,
with its intensity controlled by a negative DC voltage.
\subsection{PMT performance during commissioning}
\label{sec:bench}
PMT properties were measured with the full electronics chain during commissioning of the PandaX detector. Liquid xenon was filled into the
detector with the liquid-gas interface located between the grid of the extraction
field and the anode. A typical S1-S2 waveform from a bottom PMT measured by the digitizer is shown in Fig.~\ref{fig:wf}, in which a multi-site scattering event is clearly identified.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.45\textwidth]{S1-S2-R11410.pdf}
\caption{A typical multi-site event waveform seen by a bottom PMT. The waveform is digitized by the CAEN V1724 FADC.}
\label{fig:wf}
\end{figure}
Low intensity LED runs were used to calibrate the gains of the PMTs.
The charge distributions for two typical PMTs are shown in
Fig.~\ref{fig:spe_spectrum}.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.45\textwidth]{10406.pdf}
\includegraphics[width=0.45\textwidth]{21401.pdf}
\caption{Histograms of typical charge distributions for an R8520-406 (left)
and an R11410-MOD (right) PMT with fits overlaid (red curves).
The leftmost peak in each histogram is the pedestal.
The abscissa is the integrated SPE charge in ADC bits,
with a conversion of 1 bit = 2.75$\times10^{-3}$\,pC before the
Phillips amplifier.}
\label{fig:spe_spectrum}
\end{figure}
The gain and SPE resolution can be obtained by fitting the
SPE charge peak.
All PMT gains were adjusted to approximately $2\times10^6$ using an empirical
HV dependence of V$^\beta$, where $\beta$ is about 7 (8) for the top (bottom)
PMTs.
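The gain equalization procedure can be sketched as follows; the starting gain and voltage in the example are hypothetical, while $\beta$, the target gain, and the charge-to-bits conversion are taken from the text:

```python
# Sketch: gain equalization via the empirical relation G ~ V^beta, plus the
# expected SPE charge at the target gain.
e = 1.602e-19          # electron charge [C]
beta = 7.0             # empirical exponent for the one-inch tubes (from text)
G_target = 2e6         # target gain (from text)

def hv_for_gain(V0, G0, G1, beta):
    """HV needed to move the gain from G0 to G1, assuming G ~ V^beta."""
    return V0 * (G1 / G0) ** (1.0 / beta)

# Hypothetical example: a tube at 800 V with measured gain 1.5e6
V_new = hv_for_gain(800.0, 1.5e6, G_target, beta)

# Expected SPE charge at the target gain, in pC and in FADC bits
Q_pC = G_target * e * 1e12    # ~0.32 pC
bits = Q_pC / 2.75e-3         # ~116 bits (1 bit = 2.75e-3 pC, from text)
print(f"new HV ~{V_new:.0f} V, SPE ~{Q_pC:.2f} pC ~ {bits:.0f} bits")
```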
At this gain setting, a typical SPE resolution
for the R8520-406 PMTs is 60\%, with the average pulled up to 71\% by a few noisy
tubes. For the R11410-MOD PMTs, it is 39\%. These values include
the contribution from the phototubes as well as from the entire electronics
chain.
The dark rates of the PMTs in LXe are monitored periodically by taking DAQ runs
with random triggers. At a pulse finding threshold of 0.5\,photoelectron, the
typical dark rate of an R8520-406 tube is 30\,Hz, and that of an R11410-MOD tube
is 1\,kHz, with their distributions shown in Fig.~\ref{fig:dark_rate}.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.45\textwidth]{Bottom_dark_rate_LXe.pdf}
\includegraphics[width=0.45\textwidth]{Top_dark_rate_LXe.pdf}
\caption{Distributions of the PMT dark rates in LXe for the bottom (left)
and top (right) arrays. The uncertainties are computed based on the standard
deviations of the run-by-run fluctuations.}
\label{fig:dark_rate}
\end{figure}
Compared to the
initial data taken at room temperature, when the inner vessel
was filled with GXe, the top PMT dark rate dropped by a factor of 40,
while the bottom PMT dark rates only decreased by a factor of 2. To search for S1
pulses in the waveform, the offline software requires that at least
two phototubes fired within a window of approximately 100\,ns to form a signal.
Based on the random trigger run, the accidental probability due to PMT dark
rate within an S1 search window of 75\,$\mu$s is less
than 3\%.
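The order of magnitude of this accidental probability can be estimated from the dark rates alone. In the sketch below the split of the 180 tubes between the arrays is an assumption made for illustration; the dark rates and window lengths are from the text:

```python
# Rough estimate of the accidental two-fold S1 coincidence probability from
# PMT dark counts. Tube counts per array are assumed for illustration.
n_top, r_top = 143, 30.0    # assumed number of top tubes, dark rate [Hz]
n_bot, r_bot = 37, 1000.0   # assumed number of bottom tubes, dark rate [Hz]
tau = 100e-9                # S1 coincidence window [s]
T = 75e-6                   # S1 search window [s]

R_tot = n_top * r_top + n_bot * r_bot   # summed dark rate [Hz]
# Rate of accidental >=2-fold coincidences ~ R_tot^2 * tau (small-rate limit);
# probability of at least one in the search window ~ rate * T.
P_acc = R_tot**2 * tau * T
print(f"P(accidental) ~ {P_acc:.1%}")   # ~1.3%, below the quoted 3% bound
```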
The afterpulsing charge distribution and probability were also measured tube-by-tube. Low intensity LED runs were performed to produce SPEs in a narrow time window, and afterpulses were searched for in subsequent time windows. The average afterpulsing probability for the R11410-MOD tubes, computed as the ratio of hits between 0 and 4,000\,ns (with the random hits estimated from the pretrigger region subtracted) to the primaries, is 1.5\% in LXe and 1.7\% at room temperature. The corresponding average for the R8520-406 tubes is 2.2\% (2.4\%) at room (LXe) temperature.
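The afterpulse probability computation described above amounts to a background-subtracted ratio; a minimal sketch with invented counts:

```python
# Sketch of the afterpulse probability: hits in the 0-4000 ns window after
# the primary, with the random-hit estimate from an equal-length pretrigger
# window subtracted. All counts below are hypothetical.
n_primary = 100000   # LED-induced SPE primaries (hypothetical)
n_after = 1800       # hits in the 0-4000 ns afterpulse window (hypothetical)
n_pre = 300          # hits in an equal-length pretrigger window (hypothetical)

p_afterpulse = (n_after - n_pre) / n_primary
print(f"afterpulse probability ~ {p_afterpulse:.1%}")  # 1.5% for these counts
```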
\section{Experimental Background}
\label{sec:background}
Controlling, understanding and reducing experimental background is crucial in dark matter direct detection experiments. As the target mass grows, this becomes ever more important. In this section, the various background sources to the PandaX experiment are considered, keeping in mind its evolution through the various stages to a ton-scale experiment. We briefly describe the Jinping underground lab, and the expected cosmic
ray and environmental radioactivity backgrounds. We further describe the passive PandaX shield, the low-background counting station at CJPL, the special Kr purification tower, and results from Monte Carlo background simulations for the stage I experiment.
\subsection{CJPL and cosmic ray flux}
The Jinping deep underground lab, CJPL, emerged from a government-led project to construct two large hydropower plants next to and in the Jinping Mountain, Sichuan, China~\cite{jinping}, with a combined power output of 8.4\,GW. Jinping is located about 500\,km southwest of Chengdu, the capital of Sichuan province. It can be accessed either by car from Chengdu, or by a short flight to Xichang, followed by a 1.5\,hr car ride. CJPL was constructed jointly by Tsinghua University and the Yalong River Hydropower Development Company, Ltd. (EHDC) in Sichuan, China, in 2009.
\begin{figure}[!htbp]
\centering
\includegraphics[height=2in]{lab1.jpg}
\caption{Schematic picture of Jinping underground lab's L-shaped structure off traffic tunnel A.}
\label{fig:lab}
\end{figure}
The Jinping facility is located near the middle of traffic tunnel A, one of the two traffic tunnels that are maintained by the 21$^{\text{st}}$ Bureau of the China Railroad Construction Company. Its location, shown in Fig.~\ref{fig:lab}, provides an ideal environment for low background experiments.
The facility is shielded by 2,400\,m of mainly marble, which is radioactively
quiet rock, as shown in
Table~\ref{tab:rock}. PandaX occupies the first 10\,m in a $6\times 6\times 40$\,m$^3$ cavern as depicted in
Fig.~\ref{fig:lab}. At a depth of 6,800 m.w.e. the muon flux was
measured~\cite{cite:CJPLcosmicray} at 62 events/(m$^2\cdot \text{year})$ (compared to about 100\,Hz/m$^2$
at sea level). Monte Carlo simulations show
that neutrons generated by those muons in the detector, the shielding
material, and the surrounding rock and concrete produce a nuclear recoil-like
background of less than 0.002/year in a 25\,kg detector. Natural radioactivity
from radon gas in the lab is reduced to about 10\,Bq/m$^3$ by
flushing dry nitrogen gas through the passive shield, and contributes at negligible levels to the overall background.
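The quoted muon rate of 62 events/(m$^2\cdot$year) can be cross-checked against the measured flux of $2.0\times10^{-10}$/(cm$^2\cdot$s) cited later in this section; a one-line unit conversion:

```python
# Cross-check: convert the measured CJPL muon flux to events/(m^2 * yr).
flux_cm2_s = 2.0e-10     # measured flux [1/(cm^2 s)]
sec_per_yr = 3.156e7     # seconds per year

flux_m2_yr = flux_cm2_s * 1e4 * sec_per_yr
print(f"{flux_m2_yr:.0f} muons/(m^2*yr)")  # ~63, consistent with the quoted 62
```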
\begin{table}[!htbp]
\centering
\includegraphics[height=1.5in]{lab2.png}
\caption{Radioactivity of the rocks surrounding the CJPL lab.}
\label{tab:rock}
\end{table}
\subsection{Passive shield}
The PandaX passive shield was built to attenuate neutrons and gamma rays from environmental materials such as the cavern wall rocks and concrete. It is an octagonal structure, 316\,cm wide and 368\,cm tall, enclosing a cylindrical cavity of 124\,cm in
diameter and 175\,cm in height. The innermost layer of the shield is a cylindrical OFHC copper
vessel. Surrounding the shield structure is a steel platform that supports the cryogenic system and
the electronics. The total weight of the passive shield is 93\,tons, composed of 12\,ton copper, 58\,ton lead, 20\,ton polyethylene, and 3\,ton steel.
The shield was designed with two important requirements: (i) allow less than one neutron or gamma induced background event per year in the energy region useful
for dark matter detection, and (ii) satisfy the space constraint of CJPL, since the space allocation for PandaX was limited to a length of about 10\,m, and a width of 3.5\,m.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.45\textwidth]{shield1.png}
\includegraphics[width=0.45\textwidth]{shield2.png}
\caption{Left panel: Passive shield cross sectional view. Right panel: Top view of the shield.}
\label{fig:sectionview}
\end{figure}
To satisfy these requirements, the shield was built as a five-layer structure consisting of, from the outside inward, 40\,cm of polyethylene, 20\,cm of lead, 20\,cm of polyethylene, and 5\,cm of copper. The innermost layer is a cylindrical 5-cm thick copper vessel, used both as the final shielding layer and as the wall of the cryostat.
The cross-sectional and top views of the shield are shown in Fig.~\ref{fig:sectionview}.
The horizontal leveling of the copper vessel is adjustable via two rotary degrees of freedom
with a range of $\pm 0.25^\circ$, used to adjust the relative leveling of the liquid xenon surface
and the TPC grids.
In the vertical direction, the maximum height of the crane's lifting hook is 510\,cm, which must accommodate
the length of the lifting rope (50\,cm), the height of the inner vessel plus the outer vessel, and the height of the shield base ($\sim$90\,cm).
Thus the maximum height of the outer vessel is about 185\,cm. In the horizontal direction,
after subtracting the shield thickness and construction space, the diameter of the outer vessel is limited to 135\,cm.
There are four 6-inch side openings around the outer vessel as shown in Fig.~\ref{fig:sectionview},
designed for the cooling bus tubes, the signal cables, and the high voltage and calibration feedthroughs.
Because polyethylene and lead have poor mechanical strengths, a steel structure had to be installed to support the weight of the shield. To contain radioactivity, most of the steel is used outside the lead shield.
Each side of the polyethylene plate is made with 45$^\circ$-sloped edges, and the small gaps between the adjacent plates are tangential to the inner space to prevent neutron leakage.
The shield covers are easily removable for detector installation and maintenance. They consist of one copper cover, one assembly cover, three lead covers and two outer polyethylene covers.
In the cover design, the payload of the crane is an important factor. The assembly cover weighs about 2,800\,kg, consisting of an outer top copper plate, an inner polyethylene layer, and some lead bricks. The lead cover is comprised of a steel box
and 14 layers of lead plates, each 5\,mm thick. The steel boxes are mechanically rigid enough to be lifted by the crane. Each steel box weighs about 3.5\,tons. The outer polyethylene cover is divided into two parts due to the space limitations in the lab.
\subsection{Counting station}
\label{Sec:counting-station}
To reach the required sensitivity for PandaX, stringent requirements are placed on the radioactivity levels of most detector components and materials. According to Monte Carlo simulations, the radioactive elements (we are mostly concerned with $^{238}$U, $^{232}$Th, $^{137}$Cs, $^{60}$Co, and $^{40}$K) in most central components should not exceed activity levels of mBq per piece or per kg. A counting station was constructed at CJPL to select materials that meet this requirement.
An illustration of the counting station is shown in Fig.~\ref{fig:counting1}.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.45\textwidth]{counting_station_pic.png}
\includegraphics[width=0.45\textwidth]{bkg_GEMMX.pdf}
\caption{Left panel: Illustration of the HPGe and the shielding.
The HPGe (grey) is surrounded by several OFHC copper plates (yellow) with a total thickness of 10\,cm.
The 20-cm thick lead shielding (blue) is built with lead bricks, each brick with a size of
5$\times$10$\times$20\,cm$^3$. Right panel: Background energy spectrum accumulated over a one-month period.}
\label{fig:counting1}
\end{figure}
It houses an Ortec GEMMX low background, high purity germanium detector (HPGe), model 94100-HJ-LB-C-108. It has a relative efficiency of 175\%
and a resolution of 2.3\,keV (FWHM) at 1.33\,MeV. The signal is read out by an Ortec DSEPC502 DAQ system with a shaping time of 12\,$\mu$s.
The HPGe is shielded by 10-cm thick OFHC copper and 20-cm thick pure lead.
While the radioactive elements $^{226}$Ra($^{238}$U), $^{228}$Th($^{232}$Th), $^{137}$Cs and $^{60}$Co in the copper are all well below 1\,mBq/kg, $^{40}$K is relatively high, at 4$\pm$1\,mBq/kg, most probably due to sample surface contamination during counting.
The $^{210}$Pb level in the lead
ranges from 110 to 290\,Bq/kg. The counting chamber has a cross section of 20$\times$20\,cm$^2$ and is 34\,cm high (including the HPGe).
It is continuously flushed with dry nitrogen gas at 7--10\,L/min. Since the counting station is operated at the CJPL underground laboratory, no anti-cosmic-ray system is needed.
The background energy spectrum, as shown in the right panel of Fig.~\ref{fig:counting1}, indicates traces of $^{210}$Pb, $^{228}$Th($^{232}$Th), $^{226}$Ra($^{238}$U)
and $^{40}$K.
If we assume that all $^{226}$Ra photon lines originate from radon gas in the counting chamber, the $^{222}$Rn level is 1\,Bq/m$^3$ and constitutes one of the dominant background sources.
Many detector components were counted, including the three-inch and the one-inch PMTs. Some of the results are listed in Table~\ref{table:counting_results}.
A very high level of $^{226}$Ra($^{238}$U) is found in the stainless steel sample from the inner vessel.
For PandaX stage-II, a new inner vessel will be built from more radiopure stainless steel.
\begin{table}[th!]
\center
\begin{tabular}{|c|c||c|c|c|c|c|c|}
\hline
samples & unit & $^{226}$Ra & $^{228}$Th & $^{40}$K &$^{137}$Cs & $^{60}$Co & $^{235}$U \\ \hline
PMT8520 & mBq/PMT & $<$0.11 & $<$0.08 & 9.8$\pm$1.0 & 0.20$\pm$0.05 & 0.50$\pm$0.05 & $<$0.12 \\ \hline
Base 8520& mBq/base& 0.07$\pm$0.2 & $<$0.32 & $<$2.4 & 0.4$\pm$0.1 & $<$0.07 & $<$0.18 \\ \hline
PMT11410& mBq/PMT & $<$0.72 & $<$0.83 & 15$\pm$8 & $<$0.31 & 3.4$\pm$0.4 & 1.4$\pm$1.2 \\ \hline
Base 11410&mBq/base& 1.1$\pm$0.1 & 0.16$\pm$0.15 & $<$1.3 & 0.33$\pm$0.08 & $<$0.06 & 0.4$\pm$0.2 \\ \hline
PTFE samples& mBq/kg & 2.3-33 & $<$1-$<$14 & $<$6 - 89 & 6 - 68 & $<$0.3 - $<$6 & $<$3 - $<$8 \\ \hline
SS sample & mBq/kg & 118$\pm$3 & 17$\pm$4 & 53$\pm$14 & $<$0.9 & 6.0$\pm$0.8 & $<$8 \\ \hline
\end{tabular}
\caption{
Radioactivity of the PMTs and some other detector components.
The PTFE samples are used for the TPC construction. The stainless steel (SS) sample
is from the stage-I inner vessel.
}
\label{table:counting_results}
\end{table}
\subsection{Xenon purification system}
Commercially supplied xenon contains a Kr contamination of about 10\,ppb. Therefore, a tower to remove
Kr from xenon using standard distillation methods was designed and constructed at SJTU.
The distillation tower was designed to fulfill the following requirements:
The amount of Kr in the purified xenon should be reduced by up to three orders of magnitude compared to that in the original xenon. The collection efficiency of xenon should be 99\%. The processing speed of the system should be 5\;kg xenon per hour, so that 840\,kg xenon gas can be purified in a week.
Xenon should be fed into the distillation system as a gas.
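The throughput and processing-time requirements above are consistent with each other, as a quick check shows:

```python
# Check: 5 kg/h of continuous processing over one week.
rate_kg_per_h = 5.0
hours_per_week = 24 * 7

total = rate_kg_per_h * hours_per_week
print(f"{total:.0f} kg/week")  # 840 kg, as stated
```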
The McCabe-Thiele (M-T) method was used to design the distillation column~\cite{mt}.
The original xenon gas flows into the distillation column through a feeding point, and the purified xenon is exhausted from the re-boiler. The concentration of Kr in purified xenon is lower than that in original xenon. The off gas is exhausted from the condenser which contains the highest concentration of Kr. The reflux of xenon is controlled both by the heating power provided by the re-boiler, and the cooling power provided by the GM cryo-cooler.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.5\textwidth]{krr.png}
\includegraphics[width=0.35\textwidth]{krr1.jpg}
\caption{Left panel: Schematic overview of the technological process of the cryogenic distillation system, 1--Getter, 2--Heat exchanger, 3--Cryo-cooler 1,
4--Cryo-cooler 2, 5--Condenser, 6--Distillation column, 7--Liquid meter, 8--Re-boiler, 9--Vacuum jacket. Right panel: Photograph of the actual
distillation system.}
\label{fig:techpro}
\end{figure}
The flow diagram of the distillation system is illustrated in Fig.~\ref{fig:techpro}. The original xenon gas pressure is set to 215\,kPa before entering the distillation system. The flow rate of the original xenon gas is controlled by both the needle valve FM1 and the flow controller. The original xenon gas flows through the getter, and then through the heat exchanger, which pre-cools the xenon gas; for this purpose, the purified xenon gas extracted from the distillation apparatus is used as the cooling medium~\cite{mk}. After the pre-cooling process, the temperature of the xenon gas is reduced to 192\,K by GM cryo-cooler 1, and the cooled xenon gas is fed into the distillation column. After being fed in, the xenon gas is further cooled down by the liquid xenon that drops down from the condenser, reaching gas-liquid equilibrium at the packings. The xenon gas in the condenser is liquefied by GM cryo-cooler 2. The output cooling power is controlled by electrical heaters installed at cryo-cooler 2 and by a PID temperature controller. The temperature is kept at 179.5\,K, monitored by silicon diode temperature sensors. The saturated xenon gas with the highest concentration of krypton, named the ``off gas'', is collected from the condenser, with its flow rate controlled by the needle valve FM3 and the flow controller. The re-boiler is a cylindrical stainless steel vessel with a height of 318\,mm and a diameter of 273\,mm. Some of the liquid xenon in the re-boiler is vaporized and returns to the distillation column for recycling, while the rest, called ``purified xenon'', is collected in a collection bottle~\cite{slutsky} via the heat exchanger. The flow rate of the purified xenon gas is controlled by the needle valve FM2 and by the flow controller. The purified xenon and off gas are collected in stainless steel bottles cooled by liquid nitrogen.
A photograph of the distillation column is shown in the right panel of Fig.~\ref{fig:techpro}.
The required purity level of the purified xenon determines the height of the distillation column. In general, the higher the distillation column, the higher the purity. To guarantee the required purity level, the total height of the distillation column is set to 4\,m. The lengths of the rectifying and the stripping sections of the distillation column are 1.9\,m and 2.1\,m, respectively. The inner diameter of the distillation column is 0.08\,m. The cryogenic distillation column is thermally insulated by high vacuum multilayer insulation, and the vacuum is maintained at about $6\times 10^{-3}$\,Pa by a vacuum pump. The total heat leakage of the column is less than 6\,W.
The condenser at the top of the distillation tower is designed as funnel-type.
Its output power is adjusted by three heaters installed at the cold head of the cryo-cooler, and its temperature is regulated at 179.5\,K.
The electrical heater is adopted in the re-boiler at the bottom of the distillation tower, and the temperature of the re-boiler can be controlled at 180\,K. A capacitive liquid meter is installed in the re-boiler to monitor the amount of liquid xenon.
Stainless steel 304 was used as the material of the distillation tower.
A new type of highly efficient structured packing, PACK-13C~\cite{li}, was adopted for this distillation column; it combines the advantages of structured and random packing, with a specific surface area of 1,135\,m$^2$/m$^3$. The PACK-13C structure was fabricated by pressing a double-layer stainless steel silk screen into a corrugated shape.
This distillation system has been used to purify 500\,kg xenon gas that contained ppb levels of Kr concentrations. The concentration of Kr in the off gas was about 10$^{-6}$\,mol/mol, which is consistent with the transfer of the majority of the Kr in the original xenon sample to the off-gas. Assuming mass conservation, the concentration of Kr in the purified xenon is expected to be at the ppt level, which satisfies the requirements for G2 dark matter experiments.
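The mass-balance argument above can be made explicit. In the sketch the feed concentration is taken as 10\,ppb (the ``ppb level'' quoted above, an assumption for illustration), and the off gas is identified with the $\sim$1\% of xenon not collected, per the 99\% collection efficiency requirement:

```python
# Kr mass-balance check for the distillation numbers quoted above.
c_feed = 10e-9   # assumed Kr/Xe in feed [mol/mol] (10 ppb, illustrative)
f_off = 0.01     # fraction of xenon sent to off gas (99% collection)
c_off = 1e-6     # measured Kr/Xe in off gas [mol/mol]

kr_removed = f_off * c_off   # Kr carried away per mole of feed xenon
print(f"off gas accounts for ~{kr_removed/c_feed:.0%} of the feed Kr")
```

Within the measurement precision the off gas thus accounts for essentially all of the feed Kr, consistent with a residual concentration at the ppt level.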
\subsection{Background simulation}
The three main background sources for the PandaX experiment are (i) the detector components, (ii) the CJPL lab concrete walls (including the rock), and (iii) the cosmic ray muons (including induced secondary particles). To estimate the background levels,
a Geant4 based Monte Carlo simulation package was constructed with the lab and detector geometry implemented.
To simulate the energy deposited inside the sensitive liquid xenon volume from various background sources, three selection cuts were applied to the simulated events.
First, a fiducial volume (FV) was defined in the center of the liquid xenon sensitive volume, with a radius
of 25\,cm and a height of 5\,cm. The fiducial volume was surrounded in all directions by at least a 5-cm thick layer of additional liquid xenon.
With the so-called FV cut, events that deposit more than 0.1\,keV energy outside the FV are rejected.
Second, under the assumption that the TPC position reconstruction resolution is 3\,mm,
only single-site events are accepted. Those are events with no two energy deposits above 1\,keV separated by more than 3\,mm.
Third, for the so-called electron recoil events, i.e., events with energy deposit from gamma or electron, but not from neutron interactions, an additional 99.5\% rejection efficiency from the S2-S1 selection is assumed.
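The three cuts can be sketched as follows; the geometry helper and the event representation are illustrative, not the actual simulation package, and the fiducial volume is assumed to be centered at the coordinate origin:

```python
import math

# Sketch of the three background selection cuts described above, applied to
# one simulated event given as a list of energy deposits.
FV_R, FV_HALF_H = 0.25, 0.025   # fiducial radius [m], half-height [m] (5 cm tall)

def in_fv(x, y, z):
    return math.hypot(x, y) < FV_R and abs(z) < FV_HALF_H

def passes_cuts(deposits, is_nuclear_recoil, er_rejection=0.995):
    """deposits: list of (x, y, z, energy_keV). Returns the survival weight."""
    # (1) FV cut: reject any deposit of > 0.1 keV outside the FV
    if any(e > 0.1 and not in_fv(x, y, z) for x, y, z, e in deposits):
        return 0.0
    # (2) single-site cut: no two deposits > 1 keV separated by > 3 mm
    big = [(x, y, z) for x, y, z, e in deposits if e > 1.0]
    for i in range(len(big)):
        for j in range(i + 1, len(big)):
            if math.dist(big[i], big[j]) > 3e-3:
                return 0.0
    # (3) 99.5% S2/S1 rejection applied to electron recoil events
    return 1.0 if is_nuclear_recoil else (1.0 - er_rejection)

# A single-site gamma deposit at the detector centre survives with weight 0.005
w = passes_cuts([(0.0, 0.0, 0.0, 8.0)], is_nuclear_recoil=False)
print(f"{w:.3f}")  # 0.005
```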
The number of background events per year, within a 5 to 15\,keV window is summarized in Table~\ref{table:simulation_results}. Other copper components include the shaping rings, the copper plates to hold the top and bottom PMT arrays, and the copper filler.
\begin{table}[th!]
\center
\begin{tabular}{|c|c|}
\hline
background source & \# bkgd. events / year \\ \hline\hline
cosmic muon & $<$0.0015 \\ \hline
CJPL concrete wall & $<$0.009 \\ \hline
PMTs and bases & 4.4 \\ \hline
PTFE reflectors & 0.44 (gamma) + 0.66 (neutron) \\ \hline
inner SS vessel & 3.2 (gamma) + 0.24 (neutron) \\ \hline
Cu components & 1.2 \\ \hline
outer Cu vessel & 0.39 \\ \hline
Rn (1 Bq/m$^3$) & 0.007 \\ \hline
\end{tabular}
\caption{
Number of background events per year from various detector components and sources. The contributions from PMTs etc are for the stage-I experiment.
}
\label{table:simulation_results}
\end{table}
At present, CJPL has the largest overburden and the correspondingly
lowest cosmic muon flux at $(2.0\pm0.4) \times 10^{-10}$/(cm$^2\cdot$s)~\cite{cite:CJPLcosmicray}.
Ten thousand muons were simulated and no single event survived the three selection cuts.
To increase simulation statistics, 1 million muon-induced neutrons following the energy distribution parameterized
in Ref.~\cite{cite:Mei_Hime_underground_bkg} were simulated and only 1 event survived the FV and single-site cuts.
After all selection cuts, the cosmic muon induced background within 5 to 15\,keV is less than 0.0015 per year.
The CJPL concrete walls contain (1.63$\pm$0.17)\,kBq/kg of $^{226}$Ra,
(6.5$\pm$0.9)\,Bq/kg of $^{232}$Th and (19.9$\pm$3.4)\,Bq/kg of $^{40}$K~\cite{cite:CDEXintroduction}.
The gamma induced background events within the 5 to 15\,keV window is less than 0.001 per year, thus negligible for PandaX stage-I.
The neutron flux from the CJPL concrete walls, from ($\alpha$,n) reactions and spontaneous fission of U and Th, was simulated with the SOURCES-4A~\cite{cite:source4a} package together with Geant4,
and is $2.0\times 10^{-6}$/(cm$^2\cdot$s).
The induced background within the 5 to 15\,keV window is less than 0.008 per year.
The simulations have shown that the dominant background originates from the detector components. The gamma-induced background event rate is 18.1\,mDRU after the single-site and fiducial volume selections, including 8.5\,mDRU from the PMTs and 6.1\,mDRU from the stainless steel inner vessel.
If we assume that the Kr in the liquid xenon is at the 70\,ppt level, it will contribute an additional 2.8\,mDRU.
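The mDRU rates and the per-year counts in Table~\ref{table:simulation_results} can be checked for consistency. The fiducial mass below is estimated from the quoted FV geometry (r = 25\,cm, h = 5\,cm) with an assumed LXe density of 2.86\,g/cm$^3$:

```python
import math

# Consistency check: convert the PMT gamma rate in mDRU to events/year
# within the 5-15 keV window, after the 99.5% S2/S1 rejection.
rho = 2860.0                              # assumed LXe density [kg/m^3]
m_fv = rho * math.pi * 0.25**2 * 0.05     # fiducial mass [kg], ~28 kg

rate_mdru = 8.5        # PMT gamma rate [mDRU = 1e-3 events/kg/day/keV]
window_kev = 10.0      # 5-15 keV window
er_acceptance = 0.005  # after 99.5% electron recoil rejection

events_per_year = rate_mdru * 1e-3 * m_fv * window_kev * 365 * er_acceptance
print(f"FV mass ~{m_fv:.0f} kg, PMT gammas ~{events_per_year:.1f}/yr")  # ~4.4/yr
```

The result reproduces the 4.4 events/year listed for the PMTs and bases in Table~\ref{table:simulation_results}.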
\section{Electronics and Data Acquisition System}
The electronics and data acquisition system takes the signals detected by the
PMTs
and, after a sequence of signal processing,
produces and records raw data for physics analysis. The main
functionality of the system includes signal amplification, waveform
sampling and digitization, trigger and event readout, as well
as system monitoring.
The PandaX detector is
instrumented with 180 PMTs to detect
scintillation photons. A typical physics event
consists of
two time-correlated signals, S1 and S2. Combining the signals recorded by all photomultiplier tubes, the
typical range of the S1 signal for low energy nuclear
recoil events extends from a few to a few tens of
photoelectrons. The charge of the corresponding S2 signal is typically
one hundred times larger,
with a width of a few $\mu$s. For stage I, the maximum electron drift length
is 15\,cm. Assuming a drift speed of 2\,mm/$\mu$s, the
maximum separation between S1 and S2 is 75\,$\mu$s.
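The 75\,$\mu$s figure follows directly from the quoted drift length and drift speed; a trivial arithmetic sketch (not part of the original text):

```python
# Maximum S1-S2 time separation from the quoted stage-I numbers.
drift_length_mm = 150.0       # 15 cm maximum electron drift length
drift_speed_mm_per_us = 2.0   # assumed electron drift speed

max_s1_s2_separation_us = drift_length_mm / drift_speed_mm_per_us
assert max_s1_s2_separation_us == 75.0  # matches the 75 us quoted above
```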
A channel-by-channel digitizer system (flash ADC or FADC) was chosen for PandaX. The electronics and data acquisition system have to satisfy the following requirements: (i) the digitizers have to be able to measure the charge of single photoelectrons (SPEs)
accurately. For a gain of 2$\times10^{6}$, the raw SPE signal has an amplitude of about 2\,mV and a width of 10--20\,ns. This sets the specification of the digitizer to have a sampling rate of
at least 100\,MS/s and to have a charge resolution of 14\,bit. Pre-amplification
is also desired to boost the raw signal size; (ii) the trigger has to be highly efficient for low energy nuclear recoils. For these events, since both the S1 and S2 amplitudes are small, a charge
trigger has to be used with a threshold lower than 200\,PE (on S2); (iii) each readout window needs to be larger than 150\,$\mu$s so that the S1-S2 event pair is captured in a single readout regardless of which signal
generated the trigger; and (iv) the event rate, together with the event size, determines the bandwidth
for data transfer. Based on simulations, the expected event rate
with a 1\,keV$_{ee}$ threshold is about 3\,Hz. It is therefore required that the
DAQ is capable of taking more than 10\,Hz continuously without deadtime.
\subsection{Electronics design}
The design of the PandaX electronics is very similar to that of previous experiments~\cite{XENON100, LUX}.
The hardware components were commercially available, while the
DAQ software was developed in-house. The schematic layout of the electronics system is shown in
Fig.~\ref{fig:schematic_EnDAQ}.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.95\textwidth]{daq_schematic.pdf}
\caption{Schematic layout of the electronics and DAQ system.}
\label{fig:schematic_EnDAQ}
\end{figure}
The PandaX electronics system is a mixed VME and NIM system.
The NIM amplifier modules (Phillips 779) take PMT signals from the decoupler, amplify them by a factor of 10, and feed them to FADC modules. The CAEN V1724 FADCs perform waveform sampling of the input signals (100\,MS/s, 14\,bit) and provide digitally summed signals at a 40\,MS/s frequency (Esum). Upon receiving a trigger signal, the digitized samples are recorded in the event buffer, ready to be read out by the DAQ computer via an optical link.
The signals of the digital sum from all FADC modules are summed via FAN-IN/OUT modules to form the Esum signal, which is integrated by a spectroscopy amplifier (ORTEC 575A) with a 1.5\,$\mu$s shaping time, producing a signal whose height scales with the total charge of all channels. This charge signal is fed into a CAEN V814 discriminator. Tests have shown that the threshold can be lowered to about 150--200\,PE, sufficient to trigger on S2 for low energy nuclear recoil events. Efforts are continuing to further reduce that threshold. The resulting \emph{Trig} signal is fed into a general logic module (CAEN V1495), which makes a trigger decision together with all \emph{Busy} signals from the FADCs as the VETO. The resulting output signal is the main trigger for the FADC readout window. The scaler module (CAEN V830) is used to monitor the system performance, such as the trigger efficiency and the live time.
Fiber optical links connect the VME electronics to the DAQ computer.
When there are data ready in the FADCs' buffers, the DAQ program reads out the data from all FADC units via the so-called MBLT cycle. The data segments are built as physics events and are written to a disk array attached to the computer for permanent storage.
A photograph of the electronics system at CJPL and a schematic layout of the rack components are shown in Fig.~\ref{fig:rack}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\textwidth]{rack_layout.pdf}
\caption{Photograph of the electronics system in Jinping (left) and a schematic layout of the rack components (right). The decoupler units (4x4U) as well as the HV supply crate can be found in the left rack, together with NIM crate hosting the Phillips pre-amplifiers. The right rack contains the VME electronics and the trigger logic.}
\label{fig:rack}
\end{figure}
\subsection{Data acquisition and online data processing software}
The DAQ software was written in C++ using the software library
provided by CAEN. The basic logic flow uses a simple framework which allows one to
configure the DAQ via an input XML file and
start/end a run. The data is transferred from the FADC to the
DAQ computer via optical bridge using the MBLT protocol to minimize
the system deadtime. The raw data file is in custom binary format,
encoding all waveforms. Waveform samples consistent with the baseline
are suppressed with a threshold of 20 ADC bits using the FPGA algorithm
provided by CAEN (so-called baseline suppression). This suppression leads to about a factor of 8 reduction in
data size.
The downstream data processing and monitoring scheme is illustrated in
Fig.~\ref{fig:daq3}.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.65\textwidth]{daq_architecture.png}
\caption{The architecture of the data acquisition and online monitoring
system.}
\label{fig:daq3}
\end{figure}
To achieve real time monitoring of the system performance,
a separate process parses the output file on the fly and produces
data quality figures.
The DAQ computer is located inside the underground laboratory, with binary
data written to a local disk array during data taking. A real-time analyzer program reads the data on the fly and generates
plots of hit maps, waveforms and the trigger rate on a web page.
A file copier program, running on the same
server, copies the new data to a remote file system provided by a net
attached storage (NAS) server outside the laboratory, connected to the
underground lab via fiber links. The NAS server
is attached to a multi-core computing server. Once copying of a raw binary file is completed, a file tracker program
invokes an analysis chain that converts the raw binary file into ROOT format, analyzes
the data and generates more detailed data quality plots, all viewable via
a web browser.
The data transfer rate was tested throughout the entire processing chain
up to 60\,MB/s, corresponding roughly to a trigger rate of
100\,Hz with 200\,$\mu$s readout windows on all channels and baseline suppression
enabled.
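These figures can be cross-checked with a back-of-the-envelope estimate; the 2-byte per-sample storage size below is an assumption for illustration, not a specification taken from the text:

```python
# Rough event-size and sustained-rate estimate for the FADC readout.
n_channels = 180           # instrumented PMTs
sampling_rate_MSps = 100   # CAEN V1724 sampling rate, MS/s
window_us = 200            # readout window per trigger
bytes_per_sample = 2       # 14-bit samples stored as 16-bit words (assumed)
suppression_factor = 8     # quoted baseline-suppression reduction

samples_per_channel = sampling_rate_MSps * window_us            # 20000
raw_event_MB = n_channels * samples_per_channel * bytes_per_sample / 1e6
suppressed_event_MB = raw_event_MB / suppression_factor         # ~0.9 MB

# sustained trigger rate supported by the tested 60 MB/s transfer rate
supported_rate_Hz = 60.0 / suppressed_event_MB
```

With these assumptions a suppressed event is about 0.9\,MB, so 60\,MB/s sustains several tens of Hz, the same order as the rough 100\,Hz figure quoted above.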
\section{Summary}
In this paper, we report the design and performance of a dark matter direct detection experiment operating in xenon dual-phase mode in the Jinping underground laboratory in China. The cryogenic system, the gas circulation system, the gas purification system, the level adjusting mechanism, the TPC high voltage, the PMTs and the S1 and S2 signals have all been tested in two engineering runs, and performed as expected. The experiment entered science data collection in late March 2014, and is expected to report results later this year. After completion of stage I, the detector will be upgraded to accommodate a larger target mass.
This project has been supported by a 985 grant from Shanghai Jiao Tong University, a 973 grant from
Ministry of Science and Technology of China (No. 2010CB833005), a grant from National Science Foundation of China (No.11055003), and
a grant from the Office of Science and Technology in Shanghai
Municipal Government (No. 11DZ2260700). The project has also been sponsored by Shandong University, Peking University, the University of Maryland,
and the University of Michigan.
We would like to thank many people including E. Aprile, X. F. Chen, C. Hall, T. D. Lee, Z. Q. Lin, C. Liu, L. L\"u, Y. H. Peng, L. Teng, W. L. Tong, H. G. Wang,
J. White, Y. L. Wu, Q. H. Ye, and Q. Yue for help and discussions at various levels. We are particularly indebted to Jie Zhang for his strong support and crucial help
during many stages of this project. Finally, we thank the following organizations and personnel for indispensable logistics and other support:
the CJPL administration including directors J. P. Chen and K. J. Kang and manager J. M. Li, Yalong River Hydropower
Development Company, Ltd. including the chairman of the board H. S. Wang, and manager X. T. Chen and his Jinping tunnel management
team from the 21st Bureau of the China Railway Construction Co.
\clearpage
\section{Introduction}
The AdS/CFT conjecture \cite{Maldacena:1997re} allows us to give geometrical descriptions of super-conformal field theories in different dimensions in terms of a given brane configuration in Supergravity. For the case of $\mathcal{N}=4$ Supersymmetric theories, these gauge/gravity dualities have been established in different dimensions, e.g. \cite{Lozano:2020txg}-\cite{Lozano:2021rmk} for AdS$_{2}$, \cite{Macpherson:2018mif}-\cite{Lozano:2019jza} for AdS$_{3}$, \cite{DHoker:2007zhm}-\cite{Akhond:2021ffz} for AdS$_{4}$, \cite{Gaiotto:2009gz}-\cite{Aharony:2012tz} for AdS$_{5}$, \cite{DHoker:2016ujz}-\cite{Legramandi:2021aqv} for AdS$_{6}$ and \cite{Apruzzi:2013yva}-\cite{Nunez:2018ags} for AdS$_{7}$ among others.
Since Abelian T-duality (ATD) is a symmetry of string theory, applying an ATD transformation to a Supergravity solution with a given holographically dual field theory produces a geometry that describes the same dynamics in a different way. Both geometries are therefore holographically dual to the same field theory.
The previous statement is no longer valid if one considers non-Abelian T-duality (NATD) transformations. These transformations, first developed for pure NS-NS fields in \cite{delaOssa:1992vci} and then extended for backgrounds containing R-R fields that have a $SU(2)$ isometry subgroup in \cite{Sfetsos:2010uq} (the extension beyond $SU(2)$ was done in \cite{Lozano:2011kb} and an explicit flux transformation was given in \cite{Kelekci:2014ima}), do not preserve the symmetries of the compact manifold along which one performs the transformations. Hence, the background obtained after the NATD transformations describes different dynamics with respect to its seed, which can be seen at the level of the string $\sigma$-model \cite{Giveon:1993ai}. In this sense, NATD is not a symmetry of String Theory but rather a solution generating method, see e.g. \cite{Gasperini:1993nz,Elitzur:1994ri}.
It is then natural to ask what is the corresponding dual field theory of a background obtained after NATD transformations. This was first addressed in \cite{Itsios:2012zv,Itsios:2013wd}, where it was shown that it is indeed possible to describe properties of the NATD geometry in terms of those of its predecessor, e.g. observables charged under the dualised isometries changed after the transformation while non-charged (neutral) observables and the holographic central charge remain the same. However, no precise proposal for a dual field theory to NATD geometries was given until the work of \cite{Lozano:2016kum} in AdS$_5 \times S^{5}$, where the dual theory came in the form of a `long quiver' CFT preserving $\mathcal{N}=2$ supersymmetry.
In \cite{Lozano:2016wrs} the NATD of the AdS$_{4}\times S^{7}$ reduction to Type IIA Supergravity was shown to have a brane-configuration given by an unbounded Hanany-Witten set-up \cite{Hanany:1996ie}, which suggests that the dual field theory is not well defined. Using the formalism of \cite{Lozano:2016kum,Assel:2011}, a consistent way of completing the field theory was then developed, such that the completed form can be written in terms of a linear quiver. This leads to a new Type IIB background, in which the NATD appears as a zoom-in on a region of this completed background.
Here we continue with the discussion about the holographic description of $\text{AdS}_4$ backgrounds obtained via NATD using the electrostatic-like problem set-up developed in \cite{Akhond:2021ffz}, providing an interpretation of NATDs as a zoom-in on backgrounds that have a well defined holographic dual, along similar lines to the $\text{AdS}_6$ case in \cite{Legramandi:2021uds}.
The outline of the paper is as follows: in section 2 we present two solutions to Type IIB Supergravity on AdS$_{4}$, which correspond to the ATD and the NATD found in \cite{Lozano:2016wrs}, computing the respective Page charges in each case. In section 3 we argue how the NATD appears as a zoom-in of the backgrounds found in \cite{Akhond:2021ffz}, which were shown to be dual to linear quiver field theories. In section 4 we study a new AdS$_{4}$ Supergravity solution. We conclude in section 5 with a summary of the paper and some final remarks.
\section{AdS$_{4}$ Type IIB Background}
In \cite{DHoker:2007zhm}, an infinite family of solutions to Type IIB Supergravity that are dual to $\mathcal{N}=4$ super-conformal field theories was proposed. It was then shown in \cite{Akhond:2021ffz} that the same family of solutions can be obtained in the `electrostatic' formalism. In order to match the conformality and global symmetries of the field theory, the Supergravity background geometry must contain an AdS$_{4}$ factor and two two-spheres $S^{2}_{1}$ and $S^{2}_{2}$. The other NS and R background fields must preserve the isometries of the geometry. The Type IIB AdS$_4 \times S^2 \times S^2$ solution has the following background
\begin{equation}\label{eqn:back1}
\begin{gathered}
ds_{10,st}^2=f_1(\sigma,\eta)\bigg[ds^2(\text{AdS}_4)+f_2(\sigma,\eta)ds^2(S_1^2)+f_3(\sigma,\eta)ds^2(S_2^2)+f_4(\sigma,\eta)(d\sigma^2+d\eta^2)\bigg],
\\e^{-2\Phi}=f_5(\sigma,\eta),~~~~~B_2=f_6(\sigma,\eta)\text{Vol}(S_1^2), ~~~C_2=f_7(\sigma,\eta)\text{Vol}(S_2^2), ~~\tilde{C}_4=f_8(\sigma,\eta)\text{Vol}(\text{AdS}_4),
\\f_1=\frac{\pi}{2}\sqrt{\frac{\sigma^3\partial_{\eta\sigma}^2V}{\partial_{\sigma}(\sigma \partial_{\eta}V)}},~~~~ f_2=-\frac{\partial_{\eta}V\partial_{\sigma}(\sigma \partial_{\eta}V)}{\sigma\Lambda},~~~~f_3=\frac{\partial_{\sigma}(\sigma \partial_{\eta}V)}{\sigma \partial_{\eta \sigma}^2V}, ~~~~ f_4=-\frac{\partial_{\sigma}(\sigma \partial_{\eta}V)}{\sigma^2\partial_{\eta}V},
\\f_5=-16\frac{\Lambda \partial_{\eta}V}{\partial_{\eta \sigma}^2V},~~~~f_6=\frac{\pi}{2}\Bigg(\eta-\frac{\sigma \partial_{\eta}V\partial_\eta^2V}{\Lambda}\Bigg),~~~~f_7=-2\pi\Bigg(\partial_\sigma(\sigma V)-\frac{\sigma \partial_\eta V \partial_\eta ^2V}{\partial_{\eta \sigma}^2V}\Bigg),
\\f_8=-\pi^2\sigma^2\bigg(3\partial_\sigma V + \frac{\sigma \partial_\eta V \partial_\eta^2V}{\partial_\sigma(\sigma\partial_\eta V)}\bigg), ~~~~ \Lambda = \partial_\eta V \partial_{\eta \sigma}^2V + \sigma\big((\partial_{\eta\sigma}^2V)^2+(\partial_\eta^2V)^2\big),
\end{gathered}
\end{equation}
with
\begin{equation}\label{eqn:back2}
F_1=0,~~~~H=dB_2,~~~~~F_3=dC_2,~~~~~F_5=d\tilde{C}_4+*d\tilde{C}_4.
\end{equation}
Notice that all functions $f_{1},...,f_{8}$ can be determined in terms of a single function $V(\sigma,\eta)$ and its partial derivatives.
In order to be a solution of Type IIB Supergravity, $V$ must satisfy
\begin{equation}\label{eqn:2dpde}
\partial_\sigma(\sigma^2\partial_\sigma V(\sigma,\eta))+\sigma^2\partial_\eta^2 V(\sigma,\eta)=0.
\end{equation}
By using the redefinition
\begin{equation}
V(\sigma,\eta)=\frac{\partial_\eta W(\sigma,\eta)}{\sigma},
\end{equation}
equation \eqref{eqn:2dpde} becomes the 2D Laplace equation
\begin{equation}\label{eqn:laplace}
\partial_\sigma^2 W(\sigma,\eta)+\partial_\eta^{2}W(\sigma,\eta)=0.
\end{equation}
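This reduction can be checked symbolically: the left-hand side of \eqref{eqn:2dpde} evaluated on $V=\partial_\eta W/\sigma$ is exactly $\sigma\,\partial_\eta$ acting on the Laplacian of $W$. A SymPy sketch (a consistency check, not part of the original derivation):

```python
# SymPy check that V = d_eta W / sigma maps the PDE (eqn:2dpde)
# to the 2D Laplace equation for W.
import sympy as sp

sigma, eta = sp.symbols('sigma eta', positive=True)
W = sp.Function('W')(sigma, eta)
V = sp.diff(W, eta) / sigma

pde = sp.diff(sigma**2 * sp.diff(V, sigma), sigma) + sigma**2 * sp.diff(V, eta, 2)
laplacian = sp.diff(W, sigma, 2) + sp.diff(W, eta, 2)

# the PDE equals sigma times the eta-derivative of the Laplacian of W
assert sp.simplify(pde - sigma * sp.diff(laplacian, eta)) == 0
```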
Within this set-up, the boundary conditions imposed in \cite{Akhond:2021ffz} were analogous to an electrostatic problem in which there are two parallel conducting planes and a line of charge density given by $\mathcal{R}(\eta)$, which is usually called the `Rank function'. This Rank function, which must be piecewise linear in order to have quantized Page charges, completely characterises the brane configuration and therefore the field content of the dual quiver theory.
We are interested in solutions of the form \eqref{eqn:back1} which do not necessarily satisfy the same boundary conditions. In the variables $z,\bar{z}=\sigma\pm i\eta$, any solution to the Laplace equation can be written as $W=f(z)+\bar{f}(\bar{z})$. In what follows we will only consider the case in which $f(z)$ and $\bar{f}(\bar{z})$ are regular, hence we can expand them in a Taylor series. After imposing a reality condition for $W$, namely $W=W^{\star}$, and going back to the variables $\sigma,\eta$, the most general solution to equation \eqref{eqn:laplace} (ignoring boundary conditions) takes the form
\begin{equation}\label{eqn:W}
W = \alpha_{0} + \sum^{+\infty}_{n=1}\left[ \alpha_{n}\sum^{\floor{\frac{n}{2}}}_{k=0} \binom{n}{2k} (-1)^{k} \sigma^{n-2k} \eta^{2k} + \beta_{n}\sum^{\floor{\frac{n-1}{2}}}_{k=0} \binom{n}{2k+1} (-1)^{k} \sigma^{n-2k-1} \eta^{2k+1} \right],
\end{equation}
from which we obtain the following general solution to equation \eqref{eqn:2dpde}
\begin{equation}\label{eqn:Veq}
V = \sum^{+\infty}_{n=1}\left[ \alpha_{n}\sum^{\floor{\frac{n}{2}}}_{k=0} 2k\binom{n}{2k} (-1)^{k} \sigma^{n-2k-1} \eta^{2k-1} + \beta_{n}\sum^{\floor{\frac{n-1}{2}}}_{k=0} (2k+1)\binom{n}{2k+1} (-1)^{k} \sigma^{n-2k-2} \eta^{2k} \right].
\end{equation}
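The terms in \eqref{eqn:W} are precisely the real and imaginary parts of $(\sigma+i\eta)^n$, hence harmonic, and the corresponding terms of \eqref{eqn:Veq} solve \eqref{eqn:2dpde} term by term. A SymPy sketch verifying this for the first few $n$:

```python
# SymPy check of the harmonic-polynomial series (eqn:W) and (eqn:Veq).
import sympy as sp

sigma, eta = sp.symbols('sigma eta', positive=True)

def pde(V):
    """Left-hand side of (eqn:2dpde), simplified."""
    return sp.simplify(sp.diff(sigma**2 * sp.diff(V, sigma), sigma)
                       + sigma**2 * sp.diff(V, eta, 2))

for n in range(1, 7):
    W_alpha = sum(sp.binomial(n, 2*k) * (-1)**k * sigma**(n - 2*k) * eta**(2*k)
                  for k in range(n//2 + 1))
    W_beta = sum(sp.binomial(n, 2*k + 1) * (-1)**k * sigma**(n - 2*k - 1) * eta**(2*k + 1)
                 for k in range((n - 1)//2 + 1))
    zn = sp.expand((sigma + sp.I*eta)**n)
    assert sp.expand(W_alpha - sp.re(zn)) == 0   # real part of (sigma + i eta)^n
    assert sp.expand(W_beta - sp.im(zn)) == 0    # imaginary part
    # V = d_eta W / sigma solves the PDE term by term
    assert pde(sp.diff(W_alpha, eta) / sigma) == 0
    assert pde(sp.diff(W_beta, eta) / sigma) == 0
```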
We will now show that different solutions appear by considering truncations of the Taylor series.
\subsection{Abelian T-dual (ATD) Solution}
We first consider the case in which all the coefficients $\alpha_{n}$ and $\beta_{n}$ are set to zero, except for $\alpha_{2}$ and $\alpha_{3}$. This then leads to the following potential
\begin{equation}\label{eqn:A}
V_\text{ATD}(\sigma,\eta) = 2\alpha_{2}\frac{\eta}{\sigma} -6\alpha_{3}\eta.
\end{equation}
Let us now set $\alpha_{2}=6\alpha_{3}$, $\alpha_{3} = M/12$ and use the change of coordinates
\begin{equation}
\sigma = 2\cos^{2}\left(\frac{\mu}{2}\right), \ \ \ \ \eta = \frac{2}{\pi}r.
\end{equation}
The ranges of the coordinates are $\mu\in [0,\pi]$ and, as we will explain below, it is convenient to choose $r\in [n\pi,(n+1)\pi]$, where the points $r=n\pi$ and $r=(n+1)\pi$ are identified. With this parametrization, the background metric reads
\begin{equation}\label{eqn:abelian}
ds^{2} = \pi\cos\left(\frac{\mu}{2}\right)\left[ds^{2}(\text{AdS}_4)
+ \sin^{2}\left(\frac{\mu}{2}\right)ds^{2}(S^{2}_{1})
+ \cos^{2}\left(\frac{\mu}{2}\right)ds^{2}(S^{2}_{2})
+ d\mu^{2}+\frac{4}{\pi^{2}\sin^{2}(\mu)}dr^{2}\right] \end{equation}
and the rest of the background fields are given by
\begin{align}
e^{-2\Phi} &=4 M^{2}\tan^{2}\left(\frac{\mu}{2}\right),\\
B_{2} &= r \, \text{Vol}(S^{2}_{1}), \\
C_{2} &= 2M r\, \text{Vol}(S^{2}_{2}),\\
\tilde{C}_{4} &= 6\pi M r\, \text{Vol}(\text{AdS}_{4}).
\end{align}
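The dilaton can be recovered directly from the electrostatic data: plugging $V_\text{ATD}$ with $\alpha_2=M/2$, $\alpha_3=M/12$ into the expression for $f_5$ in \eqref{eqn:back1} gives $4M^2(2-\sigma)/\sigma$, which is $4M^2\tan^2(\mu/2)$ under $\sigma=2\cos^2(\mu/2)$. A SymPy sketch of this check:

```python
# SymPy check of e^{-2 Phi} = f5 for the ATD potential (eqn:A).
import sympy as sp

sigma, eta, M = sp.symbols('sigma eta M', positive=True)
V = M*eta/sigma - M*eta/2      # 2*alpha_2 = M and 6*alpha_3 = M/2

Veta = sp.diff(V, eta)
Vetasigma = sp.diff(V, eta, sigma)
Vetaeta = sp.diff(V, eta, 2)
Lam = Veta*Vetasigma + sigma*(Vetasigma**2 + Vetaeta**2)
f5 = -16 * Lam * Veta / Vetasigma

# tan^2(mu/2) = (2 - sigma)/sigma under sigma = 2 cos^2(mu/2)
assert sp.simplify(f5 - 4*M**2*(2 - sigma)/sigma) == 0
```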
The field strengths for the gauge forms are
\begin{align}
H_{3} &= dr \wedge\text{Vol}(S^{2}_{1}),\\
F_{3} &=2M dr \wedge \text{Vol}(S^{2}_{2}) ,\\
F_5 &= 6\pi M dr\wedge\text{Vol}(\text{AdS}_4) - \frac{3}{4}\pi^{2}M\sin^3(\mu)d\mu \wedge \text{Vol}(S_{1}^{2}) \wedge \text{Vol}(S_{2}^{2}).
\end{align}
By observation, one can easily see the presence of singularities at $\mu=\pi$ (from the D6 branes in Type IIA) and at $\mu=0$ (from the NS5 branes), as discussed in \cite{Lozano:2016wrs}, and hence the system is defined in the interval $\mu \in [0,\pi]$ (corresponding to $\sigma \in [0,2]$). The dilaton diverges to positive infinity as $\mu \rightarrow 0$ and to negative infinity as $\mu \rightarrow \pi$. Hence, the above Supergravity description is not well defined in these limits.
This solution corresponds to the Abelian T-dual of the Type IIA solution obtained from a dimensional reduction of 11D Supergravity \cite{Lozano:2016wrs}.
\subsubsection*{Page Charges}
We now calculate the Page Charges, using the following definition
\begin{equation}\label{eqn:PageCharge}
Q_{D_p/NS5}=\frac{1}{(2\pi)^{7-p}\alpha'}\int_{\Sigma_{8-p}}\widehat{F}_{8-p},
\end{equation}
with $\alpha'=1$, $2\kappa_{10}^2T_{D_p}=(2\pi)^{7-p}$ and $\widehat{F}_p=F_p \wedge e^{-B_2}$. The background of equation \eqref{eqn:back1} has the following Page fluxes
\begin{equation}\label{eqn:hats}
\widehat{F}_3=F_3,~~~~~~~~~~~~~~~~~ \widehat{F}_5=F_5-B_2 \wedge F_3.
\end{equation}
When integrating the charges, it is convenient to choose $r\in [n\pi,(n+1)\pi]$ since this is the minimal building block of the brane set-up, i.e. the smallest interval in which we obtain integer charges for all the gauge forms. \\\\
For the NS5 charge, we integrate in the submanifold $\mathcal{M}_{1}=(r,S^{2}_{1})$ for any value of $\mu$
\begin{equation}
N_{NS5}=\frac{1}{(2\pi)^2}\int_{\mathcal{M}_{1}}H_{3}=1.
\end{equation}
For the D5 charge, the integration is performed in the three-cycle $\mathcal{M}_{2}=(r,S^{2}_{2})$ for any value of $\mu$
\begin{equation}
N_{D5}=\frac{1}{(2\pi)^2}\int_{\mathcal{M}_{2}}F_3=2M.
\end{equation}
Finally, for the D3 charge, we integrate in the cycle $\mathcal{M}_{3}=(\mu,S^{2}_{1},S^{2}_{2})$ for any value of $r$
\begin{align}
N_{D3} &=-\frac{1}{(2\pi)^4}\int_{\mathcal{M}_{3}}\widehat{F}_5=M,
\end{align}
where the negative charge suggests that the orientation of the cycle should be swapped in this case.
If we had considered instead the interval $r\in [n\pi,(n+2)\pi]$, the NS5 and D5 charges would have counted the branes between $[n\pi,(n+1)\pi]$ and $[(n+1)\pi,(n+2)\pi]$, thus not giving the smallest possible building block.
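The three integrals are elementary; each $\text{Vol}(S^2)$ contributes $4\pi$, and on the D3 cycle at fixed $r$ the $B_2\wedge F_3$ piece of $\widehat{F}_5$ drops out since it carries a $dr$. A SymPy sketch reproducing the charges:

```python
# SymPy check of the ATD Page charges over r in [n*pi, (n+1)*pi].
import sympy as sp

r, mu, M, n = sp.symbols('r mu M n', positive=True)
volS2 = 4*sp.pi   # integral of Vol(S^2)

# NS5: H3 = dr ^ Vol(S1^2)
N_NS5 = sp.integrate(1, (r, n*sp.pi, (n + 1)*sp.pi)) * volS2 / (2*sp.pi)**2
assert sp.simplify(N_NS5 - 1) == 0

# D5: F3 = 2 M dr ^ Vol(S2^2)
N_D5 = sp.integrate(2*M, (r, n*sp.pi, (n + 1)*sp.pi)) * volS2 / (2*sp.pi)**2
assert sp.simplify(N_D5 - 2*M) == 0

# D3: only the dmu-part of F5 contributes at fixed r
N_D3 = -sp.integrate(-sp.Rational(3, 4)*sp.pi**2*M*sp.sin(mu)**3,
                     (mu, 0, sp.pi)) * volS2**2 / (2*sp.pi)**4
assert sp.simplify(N_D3 - M) == 0
```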
The full brane configuration for this geometry is described in figure
\ref{fig:1BraneSetup}, in which M D3-branes are stretched between NS5-branes located at $n\pi$ and $(n+1)\pi$, with 2M D5-branes in each interval. The two NS5-branes represent the same brane, as the points $n\pi$ and $(n+1)\pi$ are identified. The corresponding balanced linear quiver diagram is then given in figure \ref{fig:Quiver1}.
\begin{figure}
\centering
\subfigure[Brane set-up]
{
\begin{tikzpicture}
\draw[->] (4,2) -- (4,2.5) node[above] {$\mu$};
\draw[->] (4,2) -- (4.5,2) node[right] {$r$};
\draw (2,1.5) node[above] {D5};
\node[label=above:] at (1.7,1.2){\LARGE $\otimes$};
\node[label=above:] at (2.3,1.2){\LARGE $\otimes$};
\draw (1,-2) -- (1,2.5);
\draw (1,2.6) node[above] {NS5};
\draw (1,-2.16) node[below] {$n\pi$};
\draw[dashed] (3,-2) -- (3,2.5);
\draw (3,2.6) node[above] {NS5};
\draw (3,-2) node[below] {$(n+1)\pi$};
\draw (1,0) -- (3,0);
\draw (2,-1) node[above] {D3};
\end{tikzpicture}
\label{fig:1BraneSetup}
}
\quad\quad\quad\quad\quad\quad\quad
\subfigure[Linear quiver]
{
\centering
\begin{tikzpicture}[square/.style={regular polygon,regular polygon sides=4}]
\node at (0,2) [square,inner sep=0.1em,draw] (f100) {$2M$};
\node at (0,0) [circle,inner sep=0.5em,draw] (c100) {$M$};
\draw (c100) -- (f100);
\draw (0,-2) node[above] {};
\draw (2,0) node[above] {};
\draw (-2,0) node[above] {};
\draw (0,-2.3) node[above] {};
\draw plot [smooth, tension=6] coordinates {(0.38,-0.38) (0,-1.5) (-0.38,-0.38)};
\end{tikzpicture}
\label{fig:Quiver11}
\label{fig:Quiver1}
}
\caption{The Hanany-Witten (NS5, D3, D5) brane set-up for the ATD solution in (a), along with the corresponding balanced linear quiver diagram, (b). The brane set-up consists of $M$ D3-branes stretched between NS5-branes located at $n\pi$ and $(n+1)\pi$, with $2M$ D5-branes in each interval (taking $M=1$). Since the points $r=n\pi$ and $r=(n+1)\pi$ are identified, the two NS5-branes in the figure correspond to the same brane (represented using a dashed line).}
\end{figure}
\newpage
\subsection{Non-Abelian T-dual (NATD) Solution}
Another well known solution can be obtained by considering the case in which only $\beta_{3}$ and $\beta_{4}$ are non-vanishing, namely
\begin{equation}\label{eqn:NA}
V_\text{NATD} = \beta_{3}\left( 3\frac{\eta^{2}}{\sigma} -3\sigma \right) + \beta_{4}(4\sigma^{2}-12\eta^{2}).
\end{equation}
After fixing $\beta_{3}= 8\beta_{4}$ and $\beta_{4}=M/96$, and using the change of coordinates
\begin{equation}
\sigma = 2\cos^{2}\left(\frac{\mu}{2}\right), \ \ \ \ \eta = \frac{2}{\pi}r,
\end{equation}
we obtain the following metric
\begin{equation}
ds^{2} = \pi\cos\left(\frac{\mu}{2}\right)\left[ds^{2}(
\text{AdS}_{4})
+\frac{4}{\pi^{2}}\frac{r^{2}}{\Delta}\sin^{2}\left(\frac{\mu}{2}\right)ds^{2}(S^{2}_{1})
+\cos^{2}\left(\frac{\mu}{2}\right)ds^{2}(S^{2}_{2})
+d\mu^{2}+\frac{4}{\pi^{2}\sin^2(\mu)}dr^{2}\right],
\end{equation}
where $\Delta = \Delta(\mu,r)$ is given by
\begin{equation}
\Delta(\mu,r) = \sin^{2}\left(\frac{\mu}{2}\right)\sin^{2}(\mu)
+\frac{4}{\pi^{2}}r^{2},
\end{equation}
with the rest of the background fields given by
\begin{align}
e^{-2\Phi}&=M^2\tan^{2}\left(\frac{\mu}{2}\right) \Delta, \\
B_{2} &=\frac{4}{\pi^{2}}\frac{r^{3}}{\Delta}\text{Vol}(S^{2}_{1}),\\
C_{2} &= \frac{\pi}{4}M\left( \frac{4}{\pi^{2}}r^{2}-\cos(\mu)\left(\cos^{2}(\mu)-3\right) \right)\text{Vol}(S^{2}_{2}),\\
\tilde{C}_{4} &= \frac{\pi^{2}}{8}M\left( \frac{24}{\pi^{2}}r^{2} -( \cos(2\mu)-4\cos(\mu) ) \right)\text{Vol}(\text{AdS}_{4}).
\end{align}
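As for the ATD solution, the dilaton follows from $f_5$ in \eqref{eqn:back1}. Under the coordinate change one has $\sin^2(\mu/2)=(2-\sigma)/2$ and $\sin^2\mu=\sigma(2-\sigma)$, so $\tan^2(\mu/2)\,\Delta$ becomes $\frac{2-\sigma}{\sigma}\big(\frac{\sigma(2-\sigma)^2}{2}+\eta^2\big)$. A SymPy sketch of this check:

```python
# SymPy check of e^{-2 Phi} = M^2 tan^2(mu/2) Delta for V_NATD (eqn:NA).
import sympy as sp

sigma, eta, M = sp.symbols('sigma eta M', positive=True)
b3, b4 = M/12, M/96   # beta_3 = 8 beta_4, beta_4 = M/96
V = b3*(3*eta**2/sigma - 3*sigma) + b4*(4*sigma**2 - 12*eta**2)

Veta = sp.diff(V, eta)
Vetasigma = sp.diff(V, eta, sigma)
Vetaeta = sp.diff(V, eta, 2)
Lam = Veta*Vetasigma + sigma*(Vetasigma**2 + Vetaeta**2)
f5 = -16 * Lam * Veta / Vetasigma

# Delta in the (sigma, eta) variables, with eta = 2 r / pi
Delta = sigma*(2 - sigma)**2/2 + eta**2
assert sp.simplify(f5 - M**2*(2 - sigma)/sigma * Delta) == 0
```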
The gauge fields lead to the following field strengths
\begin{align}
H_{3} &= \frac{r^{2}}{2\pi^2\Delta^{2}}
\left[ 8\left( \frac{4}{\pi^{2}}r^{2} + 3\sin^{2}\left(\frac{\mu}{2}\right)\sin^{2}(\mu) \right)dr
- r\bigg(\sin(\mu)+4\sin(2\mu)-3\sin(3\mu)\bigg) d\mu \right]
\wedge \text{Vol}(S^{2}_{1}),\\
F_{3} &= \frac{\pi}{2}M\left[ \frac{4}{\pi^{2}}r \, dr - \frac{3}{2}\sin^{3}(\mu) d\mu \right]\wedge \text{Vol}(S^{2}_{2}),
\end{align}
together with the self-dual 5-form
\begin{equation}
\begin{aligned}
F_{5} &= M\frac{\pi^{2}}{8}
d\left( \frac{24}{\pi^{2}}r^{2} -( \cos(2\mu)-4\cos(\mu) ) \right)\wedge\text{Vol}(\text{AdS}_{4})
\\ &\phantom{=}-M
\frac{r^{2}\sin^{2}(\mu)}{\pi\Delta}\left( 3r\sin(\mu) d\mu +2\sin^{2}\left(\frac{\mu}{2}\right)dr
\right) \wedge\text{Vol}(S^{2}_{1})\wedge\text{Vol}(S^{2}_{2}).
\end{aligned}
\end{equation}
This solution reproduces the non-Abelian T-dual of the Type IIA solution in \cite{Lozano:2016wrs}. As in the ATD case, we have $\mu \in [0,\pi]$ due to the presence of singularities. Once again, the above Supergravity description breaks down as $\mu \rightarrow 0$ and $\mu \rightarrow \pi$, since the dilaton diverges in these limits.
\subsubsection*{Page Charges}
Now we compute the Page charges using equation \eqref{eqn:PageCharge}. Note that unlike the ATD, in this case the range of $r$ is non-compact. However, the charges below are calculated over the interval $r \in [n\pi,(n+1)\pi]$, with $n \in \mathbb{N}$, since each one of these intervals is a minimal building block of the brane set-up.
Before computing the charges, it is convenient to perform the following large gauge transformation
\begin{equation}
B_{2} \rightarrow B_{2}-n\pi\text{Vol}(S^{2}_{1}).
\end{equation}
For the NS5 charge, we integrate over the submanifold $\Sigma_{1}=(r,S^{2}_{1})$ for $\mu=0$ and $\mu=\pi$
\begin{equation}
N_{NS5}=\frac{1}{(2\pi)^{2}}\int_{\Sigma_{1}|_{\mu=0,\pi}}H_{3}=1.
\end{equation}
This suggests that the locations of the NS5-branes correspond to the locations of the singularities of the background.\\
For the D5 charge, we integrate over the submanifold $\Sigma_{2}=(r,S^{2}_{2})$ for any value of $\mu$
\begin{equation}
N_{D5}=\frac{1}{(2\pi)^{2}}\int_{\Sigma_{2}}F_{3}=(2n+1)M.
\end{equation}
Finally, for the D3 charge, we integrate over the cycle $\Sigma_{3}=(\mu,S^{2}_{1},S^{2}_{2})$ for any value of $r$
\begin{equation}
N_{D3}=-\frac{1}{(2\pi)^{4}}\int_{\Sigma_{3}}\widehat{F}_{5}=Mn.
\end{equation}
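The quantization of $N_{D3}$ relies on an exact cancellation: on this cycle the $\Delta$-dependent pieces of $F_5$ and $B_2\wedge F_3$ cancel in $\widehat{F}_5$, leaving $-\frac{3\pi^2}{4}Mn\sin^3\mu$. A SymPy sketch of the cancellation and the resulting integral:

```python
# SymPy check of the NATD D3 Page charge after the large gauge transformation.
import sympy as sp

mu, r, M, n = sp.symbols('mu r M n', positive=True)
Delta = sp.sin(mu/2)**2 * sp.sin(mu)**2 + 4*r**2/sp.pi**2

# coefficient of dmu ^ Vol(S1^2) ^ Vol(S2^2) in F5 at fixed r
F5_coeff = -M * r**2 * sp.sin(mu)**2 / (sp.pi*Delta) * 3*r*sp.sin(mu)
# gauge-transformed B2 coefficient and the dmu-part of F3
B2_coeff = 4*r**3/(sp.pi**2*Delta) - n*sp.pi
F3_coeff = -sp.Rational(3, 4)*sp.pi*M*sp.sin(mu)**3

Fhat5_coeff = sp.simplify(F5_coeff - B2_coeff*F3_coeff)
# all Delta-dependence cancels
assert sp.simplify(Fhat5_coeff + sp.Rational(3, 4)*sp.pi**2*M*n*sp.sin(mu)**3) == 0

N_D3 = -sp.integrate(Fhat5_coeff, (mu, 0, sp.pi)) * (4*sp.pi)**2 / (2*sp.pi)**4
assert sp.simplify(N_D3 - M*n) == 0
```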
The full brane configuration of the entire geometry (for the case $M=1$) is described in figure
\ref{fig:BraneSetup}, in which $Mn$ D3-branes are stretched between two NS5-branes located at $n\pi$ and $(n+1)\pi$, with $(2n+1)M$ D5-branes in each interval. The corresponding (overbalanced) linear quiver diagram is then given in figure \ref{fig:Quiver2}.
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}
\draw[->] (10.8,2.3) -- (10.8,2.8) node[above] {$\mu$};
\draw[->] (10.8,2.3) -- (11.3,2.3) node[right] {$r$};
\node[label=above:D5] at (0,1.5){\LARGE $\otimes$};
\node[label=above:] at (1.5,1.5){\LARGE $\otimes$};
\node[label=above:D5] at (2,1.5){\LARGE $\otimes$};
\node[label=above:] at (2.5,1.5){\LARGE $\otimes$};
\node[label=above:] at (3.5,1.5){\LARGE $\otimes$};
\node[label=above:] at (4,1.5){\LARGE $...$};
\node[label=above:] at (4.5,1.5){\LARGE $\otimes$};
\draw (4,1.85) node[above] {D5};
\node[label=above:] at (7.5,1.5){\LARGE $\otimes$};
\node[label=above:] at (8,1.5){\LARGE $...$};
\node[label=above:] at (8.5,1.5){\LARGE $\otimes$};
\draw (8,1.85) node[above] {D5};
\draw (1,-2) -- (1,2.5);
\draw (1,2.6) node[above] {NS5};
\draw (1,-2.16) node[below] {$\pi$};
\draw (3,-2) -- (3,2.5);
\draw (3,2.6) node[above] {NS5};
\draw (3,-2.09) node[below] {$2\pi$};
\draw (5,-2) -- (5,2.5);
\draw (5,2.6) node[above] {NS5};
\draw (5,-2.1) node[below] {$3\pi$};
\draw (7,-2) -- (7,2.5);
\draw (7,2.6) node[above] {NS5};
\draw (7,-2.19) node[below] {$n\pi$};
\draw (9,-2) -- (9,2.5);
\draw (9,2.6) node[above] {NS5};
\draw (9,-2) node[below] {$(n+1)\pi$};
\draw (1,0) -- (3,0);
\draw (3,0.5) -- (5,0.5);
\draw (3,0.25) -- (5,0.25);
\draw (5.4,0) node[right] {.~~.~~.};
\draw (7,0.75) -- (9,0.75);
\draw (7,0.5) -- (9,0.5);
\draw (7,0.25) -- (9,0.25);
\node at (8,0.05){$.$};
\node at (8,-0.25){$.$};
\node at (8,-0.55){$.$};
\draw (7,-0.75) -- (9,-0.75);
\draw (9.4,0) node[right] {.~~.~~.};
\draw (2,-0.75) node[above] {D3};
\draw (4,-.5) node[above] {D3};
\draw (8,-1.5) node[above] {D3};
\end{tikzpicture}
\end{center}
\caption{The Hanany-Witten (NS5, D3, D5) brane set-up for the NATD solution (taking $M=1$), in which $Mn$ D3-branes are stretched between two NS5 branes located at $n\pi$ and $(n+1)\pi$, with $(2n+1)M$ D5-branes in each interval.}
\label{fig:BraneSetup}
\end{figure}
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}[square/.style={regular polygon,regular polygon sides=4}]
\node at (0,0) [square,inner sep=0.3em,draw] (f100) {$M$};
\node at (2,2) [square,inner sep=0.1em,draw] (f200) {$3M$};
\node at (4,2) [square,inner sep=0.1em,draw] (f300) {$5M$};
\node at (6,2) [square,inner sep=0.1em,draw] (f400) {$7M$};
\node at (2,0) [circle,inner sep=0.5em,draw] (c100) {$M$};
\node at (4,0) [circle,inner sep=0.4em,draw] (c200) {$2M$};
\node at (6,0) [circle,inner sep=0.4em,draw] (c300) {$3M$};
\draw (c100) -- (f200);
\draw (c100) -- (c200);
\draw (c200) -- (f300);
\draw (c200) -- (c300);
\draw (c300) -- (f400);
\draw (c300) -- (7.5,0);
\draw (8,0) node[right] {.~~.~~.};
\end{tikzpicture}
\end{center}
\caption{The corresponding overbalanced linear quiver diagram for the NATD geometry.}
\label{fig:Quiver2}
\end{figure}
\newpage
\section{On the Completion of Geometries}
\subsection{NATD as a zoom-in}
In \cite{Legramandi:2021uds}, by using the electrostatic problem formalism, evidence was presented on how the non-Abelian T-dual of AdS$_{6}$ in massive Type IIA backgrounds could be thought of as a particular zoom-in of a more general solution, which possesses a well-defined holographic dual. Here, we give a similar argument for the NATD of AdS$_{4}$ in Type IIA Supergravity.
In \cite{Akhond:2021ffz}, the infinite family of backgrounds dual to balanced linear quivers is derived from the following potential (we consider $\sigma>0$ for simplicity)
\begin{equation}\label{LinearQuivers}
V_{\text{LQ}}(\sigma,\eta) = \sum^{+\infty}_{k=0} \frac{a_{k}}{\sigma} \cos\left( \frac{k\pi\eta}{P} \right) e^{-\frac{k\pi\sigma}{P}},
\end{equation}
with $a_{k}$ the coefficients of the Fourier expansion of the respective Rank function (which contains the ranks of colour and flavour groups). Expanding this potential in a Taylor series gives
\begin{equation}
V_{\text{LQ}}(\sigma,\eta)
= \frac{1}{\sigma} \sum^{+\infty}_{k=0}a_{k}
- \frac{\pi}{P} \sum^{+\infty}_{k=0}k\, a_{k}
+ \frac{\pi^{2}}{2P^{2}}\left( -\frac{\eta^{2}}{\sigma} + \sigma \right) \sum^{+\infty}_{k=0}k^{2}\, a_{k}
+ \frac{\pi^{3}}{6P^{3}}\left( 3\eta^{2}-\sigma^{2} \right)\sum^{+\infty}_{k=0}k^{3}\, a_{k}
+ ...
\end{equation}
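As a quick numerical sanity check of the expansion above (our own sketch, not part of the original derivation), one can compare a single Fourier mode of \eqref{LinearQuivers} with $a_k=1$ against the cubic truncation, treating $x=k\pi/P$ as the small expansion parameter; the residual should then scale with the quartic term:

```python
import math

def mode(x, sigma, eta):
    # single Fourier mode of V_LQ with a_k = 1, writing x = k*pi/P
    return math.cos(x * eta) * math.exp(-x * sigma) / sigma

def truncation(x, sigma, eta):
    # Taylor expansion up to cubic order in x, as given in the text
    return (1.0 / sigma
            - x
            + 0.5 * x**2 * (sigma - eta**2 / sigma)
            + (x**3 / 6.0) * (3.0 * eta**2 - sigma**2))

sigma, eta, x = 1.3, 0.7, 1e-3
err = abs(mode(x, sigma, eta) - truncation(x, sigma, eta))
assert err < 1e-10   # quartic remainder: error scales like x**4
```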
By defining
\begin{equation}
\beta_{1} = \sum^{+\infty}_{k=0}a_{k}, \, \, \,\,
\beta_{2} = - \frac{\pi}{P}\sum^{+\infty}_{k=0}k\, a_{k},\,\,\,\,
\beta_{3} = -\frac{1}{3}\frac{\pi^{2}}{2P^{2}}\sum^{+\infty}_{k=0}k^{2}\, a_{k}, \,\,\,\,
\beta_{4} = -\frac{1}{4}\frac{\pi^{3}}{6P^{3}}\sum^{+\infty}_{k=0}k^{3}\, a_{k},
\end{equation}
one can check that the truncated expansion completely contains the potential for the NATD ($\beta_{1}$ does not appear in any background fields nor in the Rank function, and $\beta_{2}$ appears as a gauge transformation of $C_{2}$). As for the $\text{AdS}_6$ case, this suggests that the $\text{AdS}_4$ NATD corresponds to a zoom-in of a background for which we can completely characterise the holographic dual. This means that, even though the NATD holographic dual is not properly defined, it can be completed into a 3d $\mathcal{N}=4$ linear quiver.
\subsection{Comments on the ATD}\label{ATD general solution}
The ATD background can be found via a similar zoom-in in the family of orthogonal functions to \eqref{LinearQuivers}, namely
\begin{equation}\label{eqn:ATDzoomin}
V_{\perp}(\sigma,\eta) = \sum^{+\infty}_{k=0} \frac{a_{k}}{\sigma} \sin\left( \frac{k\pi\eta}{P} \right) e^{-\frac{k\pi\sigma}{P}}.
\end{equation}
Notice that, although this is a solution to the equation of motion, it does not satisfy the boundary conditions of the electrostatic problem. This means that it is not possible to use the same arguments to suggest that the ATD can be completed into a more general geometry with a well-defined holographic dual. This is consistent with the fact that the ATD already has a precise holographic dual: the same one as its seed background.
\section{A New Supergravity Solution}
In this section we investigate the background obtained when using a third finite part of the infinite Taylor series expansion of $V(\sigma,\eta)$. Note that, in addition to the previous two examples, it is the only other solution for which the $\text{AdS}_4$ warp factor $f_1(\sigma,\eta)$ depends only on a single coordinate. For a constant $f_1(\sigma,\eta)$, the string background was shown to be integrable in the case of $\text{AdS}_7$ in \cite{Filippas:2019puw} (where the classical 2d dynamics of the $\sigma$-model can be recast in terms of a Lax pair, leading to an infinite number of conserved charges). However, in this case, a constant $f_1(\sigma,\eta)$ is not possible. Hence, an $\text{AdS}_4$ warp factor depending on a single coordinate is the next best option, as integrability may occur for fixed values of that coordinate.
Now we present the new solution of Type IIB Supergravity that comes from considering the following higher order truncation to equation \eqref{eqn:ATDzoomin}
\begin{equation}\label{eqn:new}
V_\text{HO} = \alpha_{4}\left( 4\frac{\eta^{3}}{\sigma}-12\eta\sigma \right).
\end{equation}
Consider the change of coordinates
\begin{align}
\sigma &= \frac{2\sqrt{2}}{\pi}~r\sin(\theta),\\
\eta &= \frac{2\sqrt{2}}{\pi}~r\cos(\theta).
\end{align}
After fixing $\alpha_{4} = M/64$, we find
\begin{equation}
ds^{2} = r\left( ds^{2}(\text{AdS}_{4})
+ \frac{\cos(2\theta)}{\tilde{\Delta}}ds^{2}(S^{2}_{1})
+ 2\sin^{2}\left(\theta\right) ds^{2}(S^{2}_{2})
+ \frac{2}{\cos(2\theta)}\left(\frac{dr^{2}}{r^2}+d\theta^{2}\right)\right),
\end{equation}
where $\tilde{\Delta} = 2\cos^{2}(\theta)+1$. In order to ensure that the metric does not change signature, we must make the restriction $\theta \in \left[0, \frac{\pi}{4} \right]$.
In addition, there is the Dilaton
\begin{equation}
e^{-2\Phi} = \frac{9M^2}{\pi^2} r^{2}\cos(2\theta)\tilde{\Delta},
\end{equation}
and the NS and R forms
\begin{align}
B_{2} &= 2\sqrt{2}\,\frac{r\cos(\theta)}{\tilde{\Delta}} \text{Vol}(S^{2}_{1}), \\
C_{2} &= \frac{12}{\pi}M r^{2}\sin^{3}(\theta)\cos(\theta) \text{Vol}(S^{2}_{2}),\\
\tilde{C}_{4} &= \frac{6\sqrt{2}}{\pi}Mr^{3}\cos(\theta)\text{Vol}(\text{AdS}_{4}),
\end{align}
which lead to the following field strengths
\begin{align}
H_{3} &= \frac{\sqrt{2}}{\tilde{\Delta}^{2}}\bigg( 2 \cos(\theta)\tilde{\Delta}\,dr
+ \,r\,\Big(\sin(3\theta)-\sin(\theta)\Big)\, d\theta \bigg)\wedge\text{Vol}(S^{2}_{1}) ,\\
F_{3} &= \frac{M}{\pi}\Big( 24 \sin^{3}(\theta)\cos(\theta)\, r dr
+ 12 r^{2}(2\tilde{\Delta}-3)\sin^{2}(\theta)\, d\theta\Big)\wedge \text{Vol}(S^{2}_{2}),
\end{align}
and also
\begin{equation}
\begin{aligned}
F_{5} &= d\left( \frac{6\sqrt{2}}{\pi}Mr^{3}\cos(\theta) \right)\wedge\text{Vol}(\text{AdS}_{4})\\
&\phantom{=} - \frac{12\sqrt{2}}{\pi \tilde{\Delta}}M\,r^{2}\cos(2\theta)\sin^{2}(\theta)
\Big( \sin(\theta)\, dr + 3r\cos(\theta) \, d\theta \Big)
\wedge\text{Vol}(S^{2}_{1})\wedge \text{Vol}(S^{2}_{2}).
\end{aligned}
\end{equation}
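As a consistency check (ours; the solution above is stated without derivation), the identities $H_{3}=dB_{2}$ and $F_{3}=dC_{2}$ can be verified numerically by comparing the $dr$ and $d\theta$ components of the field strengths with finite-difference derivatives of the potentials, stripping the two-sphere volume forms and setting $M=1$:

```python
import math

M = 1.0
def Delta(th):  # tilde-Delta = 2 cos^2(theta) + 1
    return 2.0 * math.cos(th)**2 + 1.0

# potentials: scalar coefficients of Vol(S^2_1) and Vol(S^2_2)
def B2(r, th):
    return 2.0 * math.sqrt(2.0) * r * math.cos(th) / Delta(th)

def C2(r, th):
    return (12.0 / math.pi) * M * r**2 * math.sin(th)**3 * math.cos(th)

# field strengths: (dr, dtheta) components, volume forms stripped
def H3(r, th):
    pref = math.sqrt(2.0) / Delta(th)**2
    return (pref * 2.0 * math.cos(th) * Delta(th),
            pref * r * (math.sin(3.0 * th) - math.sin(th)))

def F3(r, th):
    return ((M / math.pi) * 24.0 * math.sin(th)**3 * math.cos(th) * r,
            (M / math.pi) * 12.0 * r**2 * (2.0 * Delta(th) - 3.0) * math.sin(th)**2)

def grad(f, r, th, h=1e-6):  # central differences: (d/dr, d/dtheta)
    return ((f(r + h, th) - f(r - h, th)) / (2 * h),
            (f(r, th + h) - f(r, th - h)) / (2 * h))

r, th = 2.0, 0.6
for pot, fs in ((B2, H3), (C2, F3)):
    dd, exact = grad(pot, r, th), fs(r, th)
    assert all(abs(a - b) < 1e-6 for a, b in zip(dd, exact))
```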
This background has the nice property of a dilatation symmetry: under the transformation $r \rightarrow \lambda r$ (with $\lambda$ a constant), each field simply scales by a power of $\lambda$, leaving the form of the background otherwise invariant. This is a slightly weaker condition than in the ATD case, in which the background metric and field strengths are fully translation invariant. It is interesting to observe, however, that the solution presented here, appearing as a higher order truncation of equation \eqref{eqn:ATDzoomin}, comes from the same family of solutions as the ATD.
\subsubsection*{Page Charges}
In the previous two solutions, the interval of $\mu$ was fixed by singularities, but we were free to choose the form of the interval of $r$ in order to ensure quantisation of the Page charges. In the NATD case, this included splitting $r \in [0,+\infty[$ into an infinite number of finite `minimal building block' intervals. The same needs to be done here, noting the presence of a singularity at $r=0$. Hence, choosing $\theta \in [0,\frac{\pi}{4}]$ and $r \in [n\pi,(n+1)\pi]$, with $n \in \mathbb{N}$, integer quantisation of the Page charges can be achieved. \\\\
For the NS5 charge, we integrate over the submanifold $\chi_{1}=(r,S^{2}_{1})$, fixing $\theta=\pi/4$ to obtain integer quantisation (as taking $\theta=0$ leads to an irrational value of $N_{NS5}$)
\begin{equation}
N_{NS5}=\frac{1}{(2\pi)^{2}}\int_{\chi_{1}|_{\theta=\pi/4}}H_{3}=1.
\end{equation}
For the D5 charge, we first integrate over the submanifold $\chi_{2}=(r,S^{2}_{2})$, fixing $\theta=\pi/4$ (as $\theta=0$ gives zero charge)
\begin{equation}
N_{D5}=\frac{1}{(2\pi)^{2}}\int_{\chi_{2}|_{\theta=\pi/4}}F_{3}=3M(2n+1).
\end{equation}
Finally, for the D3 charge, we integrate over the submanifold $\chi_{3}=(\theta,S^{2}_{1},S^{2}_{2})$ at fixed $r=n\pi$
\begin{equation}
N_{D3}=-\frac{1}{(2\pi)^{4}}\int_{\chi_{3}|_{r=n\pi}}\widehat{F}_{5}=2Mn^3.
\end{equation}
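The NS5 and D5 charges can be reproduced by direct numerical integration (a sketch of our own; we assume each unit two-sphere contributes $\mathrm{Vol}(S^2)=4\pi$, and fix $M=1$, $n=2$ for concreteness):

```python
import math

M, n = 1.0, 2
th = math.pi / 4.0
Delta = 2.0 * math.cos(th)**2 + 1.0   # = 2 at theta = pi/4
vol_S2 = 4.0 * math.pi                # volume of a unit two-sphere

def integrate(f, a, b, N=20000):      # simple trapezoid rule
    h = (b - a) / N
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + i * h) for i in range(1, N)))

# dr-components of H3 and F3 at fixed theta = pi/4 (Vol(S^2) stripped)
H3_r = lambda r: math.sqrt(2.0) / Delta**2 * 2.0 * math.cos(th) * Delta
F3_r = lambda r: (M / math.pi) * 24.0 * math.sin(th)**3 * math.cos(th) * r

N_NS5 = integrate(H3_r, n * math.pi, (n + 1) * math.pi) * vol_S2 / (2.0 * math.pi)**2
N_D5 = integrate(F3_r, n * math.pi, (n + 1) * math.pi) * vol_S2 / (2.0 * math.pi)**2

assert abs(N_NS5 - 1.0) < 1e-6
assert abs(N_D5 - 3.0 * M * (2 * n + 1)) < 1e-6
```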
The brane configuration for this geometry is shown in figure \ref{fig:3BraneSetup}, in which $2Mn^3$ D3-branes are stretched between two NS5-branes located at $n\pi$ and $(n+1)\pi$, with $3M(2n+1)$ D5-branes in each interval. The corresponding (overbalanced) linear quiver diagram is then given in figure \ref{fig:Quiver3}.
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}
\draw[->] (11.5,2.3) -- (11.5,2.8) node[above] {$\theta$};
\draw[->] (11.5,2.3) -- (12,2.3) node[right] {$r$};
\node[label=above:] at (0,1.5){\LARGE $\otimes$};
\node[label=above:D5] at (0.5,1.5){\LARGE $\otimes$};
\node[label=above:] at (1,1.5){\LARGE $\otimes$};
\node[label=above:] at (2,1.5){\LARGE $\otimes$};
\node[label=above:] at (2.5,1.5){\LARGE $...$};
\node[label=above:] at (3,1.5){\LARGE $\otimes$};
\draw (2.5,1.85) node[above] {D5};
\node[label=above:] at (4,1.5){\LARGE $\otimes$};
\node[label=above:] at (4.5,1.5){\LARGE $...$};
\node[label=above:] at (5,1.5){\LARGE $\otimes$};
\draw (4.5,1.85) node[above] {D5};
\node[label=above:] at (8,1.5){\LARGE $\otimes$};
\node[label=above:] at (8.5,1.5){\LARGE $...$};
\node[label=above:] at (9,1.5){\LARGE $\otimes$};
\draw (8.5,1.85) node[above] {D5};
\draw (1.5,-2) -- (1.5,2.5);
\draw (1.5,2.6) node[above] {NS5};
\draw (1.5,-2.16) node[below] {$\pi$};
\draw (3.5,-2) -- (3.5,2.5);
\draw (3.5,2.6) node[above] {NS5};
\draw (3.5,-2.09) node[below] {$2\pi$};
\draw (5.5,-2) -- (5.5,2.5);
\draw (5.5,2.6) node[above] {NS5};
\draw (5.5,-2.1) node[below] {$3\pi$};
\draw (7.5,-2) -- (7.5,2.5);
\draw (7.5,2.6) node[above] {NS5};
\draw (7.5,-2.19) node[below] {$n\pi$};
\draw (9.5,-2) -- (9.5,2.5);
\draw (9.5,2.6) node[above] {NS5};
\draw (9.5,-2) node[below] {$(n+1)\pi$};
\draw (1.5,0) -- (3.5,0);
\draw (1.5,-0.25) -- (3.5,-0.25);
\draw (3.5,0.75) -- (5.5,0.75);
\draw (3.5,0.5) -- (5.5,0.5);
\draw (3.5,0.25) -- (5.5,0.25);
\node at (4.5,0.05){$.$};
\node at (4.5,-0.25){$.$};
\node at (4.5,-0.55){$.$};
\draw (3.5,-0.75) -- (5.5,-0.75);
\draw (5.9,0) node[right] {.~~.~~.};
\draw (7.5,0.75) -- (9.5,0.75);
\draw (7.5,0.5) -- (9.5,0.5);
\draw (7.5,0.25) -- (9.5,0.25);
\node at (8.5,0.05){$.$};
\node at (8.5,-0.25){$.$};
\node at (8.5,-0.55){$.$};
\draw (7.5,-0.75) -- (9.5,-0.75);
\draw (9.9,0) node[right] {.~~.~~.};
\draw (2.5,-1) node[above] {D3};
\draw (4.5,-1.5) node[above] {D3};
\draw (8.5,-1.5) node[above] {D3};
\end{tikzpicture}
\end{center}
\caption{The Hanany-Witten (NS5, D3, D5) brane set-up for the new solution, in which $2Mn^3$ D3-branes are stretched between two NS5-branes located at $n\pi$ and $(n+1)\pi$, with $3M(2n+1)$ D5-branes in each interval (taking $M=1$ in the figure).}
\label{fig:3BraneSetup}
\end{figure}
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}[square/.style={regular polygon,regular polygon sides=4}]
\node at (0,0) [square,inner sep=0.1em,draw] (f100) {$3M$};
\node at (2,2) [square,inner sep=0.1em,draw] (f200) {$9M$};
\node at (4,2) [square,inner sep=-0.15em,draw] (f300) {$15M$};
\node at (6,2) [square,inner sep=-0.15em,draw] (f400) {$21M$};
\node at (2,0) [circle,inner sep=0.4em,draw] (c100) {$2M$};
\node at (4,0) [circle,inner sep=0.25em,draw] (c200) {$16M$};
\node at (6,0) [circle,inner sep=0.25em,draw] (c300) {$54M$};
\draw (c100) -- (f200);
\draw (c100) -- (c200);
\draw (c200) -- (f300);
\draw (c200) -- (c300);
\draw (c300) -- (f400);
\draw (c300) -- (7.5,0);
\draw (8,0) node[right] {.~~.~~.};
\end{tikzpicture}
\end{center}
\caption{The corresponding overbalanced linear quiver diagram for the new geometry.}
\label{fig:Quiver3}
\end{figure}
\newpage
\section{Conclusions}
In this paper we used the electrostatic problem formalism developed in \cite{Akhond:2021ffz} to obtain three AdS$_{4}\times S^{2}\times S^{2}$ backgrounds in Type IIB Supergravity, derived from the polynomial solutions of the 2D Laplace equation defining the geometry. However, these solutions do not satisfy the boundary conditions that lead to linear quiver field theories. Two of them correspond to the Abelian and non-Abelian T-duals of the Type IIA background obtained from the dimensional reduction of AdS$_{4}\times S^{7}$. We therefore re-derive the NATD solution using a different approach to that of \cite{Lozano:2016wrs}, characterising the solutions by specific terms in the polynomial expansion of the potential. Hence, the dual field theory should be more easily determined via this approach. The third solution is a new Supergravity background which, together with the ATD and NATD cases, completes the set of three backgrounds with the simplest possible $\text{AdS}_4$ warp factor, namely one depending on only a single coordinate. We then computed the Page charges for each background, and described the Hanany-Witten brane set-up and linear quiver diagrams for all three geometries. As in previous references, we find that the NATD brane set-up is unbounded, making the dual field theory description obscure.
We then showed how the potential leading to the NATD solution emerges as a truncation of the full solution of the electrostatic problem, which leads to linear quiver field theories as holographic duals. This agrees with the picture that NATD solutions can be completed into a background that has a well-defined holographic dual. The ATD backgrounds are self-consistent on their own, i.e.\ since they describe the same dynamics as their seed, they are dual to the same field theory. The new background exhibits a dilatation symmetry and originates from the same general solution as the ATD case.
Interestingly, each term in the expansion of the potential is itself a solution, meaning any combination of terms will lead to a different background, with the ATD and NATD cases emerging naturally. The dynamics of the full $\text{AdS}_4$ Type IIB Supergravity solution dual to the linear quivers is then constructed from the separate constituent dynamics of each term in the Taylor series.
It is worth noting that, although they are obtained as truncations of different families of solutions, the NATD and the new solution both have a brane set-up that is unbounded. It would be interesting to see whether all other solutions obtained as truncations share this property. Along the same lines, if one were to find a solution coming from a higher order term in the Taylor series expansion of the potential that leads to the linear quiver, one could use the same arguments as for the NATD to argue that those solutions can be completed. This is not true for solutions belonging to the same family as the ATD, such as the new background presented here. Since these solutions come from a potential that does not satisfy the boundary conditions of the electrostatic problem, it is not possible to argue that they can be completed in a similar way, with the ATD being an exception to this argument. It remains to be seen whether those solutions can be completed into a background with a well-defined holographic dual.
\section*{Acknowledgments}
The authors thank Mohammad Akhond, Andrea Legramandi, Carlos Nuñez and Lucas Schepers for useful discussions and guidance. The work of R.S. is supported by STFC grant ST/W507878/1.
\section*{Introduction}
\label{sec:intro}
Automatic analysis of electrocardiograms (ECGs)
promises substantial improvements in critical care. ECGs offer an inexpensive and non-invasive way to diagnose irregularities in heart functioning. \textit{Arrhythmias} are abnormal heartbeats which alter both the morphology and frequency of ECG waves, and can be detected in an ECG exam. However, identifying and classifying arrhythmias manually is not only error-prone but also cumbersome. Clinicians may have to analyze each heartbeat in an ECG record, and in critical care settings, carefully analysing each heartbeat is nearly impossible. As a consequence, the medical machine learning (ML) community has worked extensively on computational models to automatically detect and characterize arrhythmias~\cite{ebrahimi2020review, luz2016ecg}.
Rajpurkar et al. \cite{hannun2019cardiologist} demonstrated that modern ML models trained on a large and diverse corpus of patients can exceed the performance of certified cardiologists in detecting abnormal heartbeats. But their Convolutional Neural Network model was trained on a manually annotated dataset of more than 64,000 ECG records from over 29,000 patients. Clearly, research on automated arrhythmia detection has moved the burden of monitoring ECG in critical care to annotating and curating large databases on which ML models can be trained and validated.
This currently prevailing process involves laborious manual data labeling that is a major bottleneck of supervised medical ML applications in practice.
Popular ML techniques, in particular deep learning, require a large supply of reliably annotated training data, containing records from a diverse cohort of patients.
According to Moody et al.~\cite{moody2001impact} and our own experience,
raw medical data is abundant, but its thorough characterization can be involved and expensive.
This reliance on labeled data forces researchers to often use static and older datasets, despite evolving patient populations, systematic improvements in understanding of diseases, and advances in medical equipment.
Recent developments in e.g.\ web-based tools to visualize and annotate ECG signals have not reduced the annotation time and effort significantly.
For example, it took $4$ doctors almost $3$ months to annotate $15,000$ short ECG records using the LabelECG tool~\cite{ding2019labelecg}.
In general, gold standard expert annotations can be costly. Conservative estimates place the hourly cost of highly qualified labor for the related task of EEG annotation between $\$50$ and $\$200$ per hour~\cite{abend2015much}.
\begin{figure*}[!htbp]
\centering
\includegraphics[width = .9\textwidth]{Workflow.png}
\caption{Data programming with time series heuristics can affordably train competitive end models for automated ECG adjudication. Instead of labeling each data point by hand (fully supervised setting), experts encode their domain knowledge using noisy labeling functions (LFs). A label model then learns the unobserved empirical accuracy of LFs and uses them to produce probabilistic data label estimates using weighted majority vote.}
\label{fig:workflow}
\end{figure*}
In this work, we explore the use of multiple cheaper albeit perhaps noisier supervision sources to learn an arrhythmia detector, without access to ground truth labels of individual samples.
We follow the recently proposed \textit{data programming (DP)}~\cite{ratner2016data} framework in which a factor graph is used to model user-designed heuristics to obtain a probabilistic label for each heartbeat instance.
DP has gained attention from the medical imaging and general ML community and has been used for various tasks such as automated detection of seizures from electroencephalography \cite{saab2020weak}, intracranial hemorrhage detection with computed tomography, or automated triage of Extremity Radiograph Series \cite{dunnmon2020cross}.
Our experiments with ECG data from the MIT-BIH Arrhythmia Database indicate that with as few as $6$ heuristics, we are able to train an arrhythmia detection model with only a small amount of human effort.
The resulting model is competitively accurate when compared to a model trained on the same data with full supply of pointillistic ground truth annotations.
It can also outperform another alternative model trained using active learning, a popular technique used to reduce data labeling efforts when they are expensive.
We also show that domain heuristics can be automatically tuned to account for inter-patient variability and further boost reliability of the resulting models.
While many different types of arrhythmias exist, for illustration purposes we focus on identifying heartbeats showing Premature Ventricular Contractions (PVCs). Whereas isolated infrequent PVCs are usually benign, frequent PVCs with exceptionally wide QRS complexes\footnote[2]{QRS complexes are generally the most prominent spike seen on a typical ECG. They are a combination of the Q wave, R wave and S wave, which occur in rapid succession, and represent an electrical impulse.} may be indicative of heart disease and eventually lead to sudden cardiac death~\cite{moulton1990premature}.
However, our approach is general and applicable to all classes of abnormal heartbeats.
\section*{Related Work}
\paragraph{Automated Arrhythmia Detection} Automatically detecting abnormal heartbeats is a widely studied problem. Most researchers in the past relied on manually labeled corpora such as the MIT-BIH Arrhythmia Database, the AHA Database for Evaluation of Ventricular Detectors, etc., to train and validate their models~\cite{ebrahimi2020review, luz2016ecg}. Rajpurkar et al.~\cite{hannun2019cardiologist} recently demonstrated that a deep Convolutional Neural Network (CNN) can even exceed the performance of experienced cardiologists. However, their model was trained on as many as 64,121 thirty-second ECG records from 29,163 patients, manually-labeled by a group of certified cardiographic technicians.
Hence, to fuel advances in automated arrhythmia detection and, more generally, in ML-aided healthcare, there is a clear need to affordably label vast amounts of data.
Some recent studies have attempted to address the annotation bottleneck, albeit at a different context and scale. These studies have used semi-supervised or active learning to incrementally improve the accuracy of models without significant expert intervention. For instance, to overcome inter-patient variability without additional manual labeling of patient specific data, Zhai et al.~\cite{zhai2020semi} iteratively updated the preliminary predictions of their trained CNN using a semi-supervised approach. Correspondingly, Wang et al. \cite{wang2019global} used active learning on newly acquired data to choose the most informative unlabeled data points and incorporate them in the training set. Sayantan et al.~\cite{sayantan2018classification} used active learning to improve their model's classification results with the help of an expert. So far, the work which comes closest to addressing the problem of intelligently labeling vast quantities of ECG data is that of Pasolli et al.~\cite{pasolli2010active}. Starting from a small sub-optimal training set, the authors proposed three active learning strategies to choose additional heartbeat instances to further train a Support Vector Machine (SVM) model. Their work demonstrated that models trained using active learning can achieve impressive performance while using few labeled samples. In this work, we also compare the performance of our weakly supervised method with active learning. In contrast to Pasolli et al.~\cite{pasolli2010active}, who used a manually curated set of ECG features and trained an SVM using margin sampling, we train a Convolutional Neural Network (CNN), which can automatically learn rich feature representations, using uncertainty sampling, another popular active learning strategy.
\paragraph{Weak Supervision} Of late, some studies have explored the use of multiple noisy heuristics to programmatically label data at scale. The recently proposed Data Programming framework~\cite{ratner2016data}, where experts express their domain knowledge in terms of intuitive yet perhaps noisy labeling functions (LFs), is a prominent example. Recent studies have used DP for a wide range of clinical applications, ranging from detecting aortic valve malformations using cardiac MRI sequences \cite{fries2019weakly}, seizures using EEG \cite{saab2020weak} and brain hemorrhage using 3D head CT scans \cite{dunnmon2020cross}. With the exception of Khattar et al.~\cite{khattar2019multi}, most prior work on DP has been on image~\cite{dunnmon2020cross, fries2019weakly} or natural language~\cite{ratner2017snorkel} modalities. Moreover, prior DP research either used weak annotations from lab technicians, students~\cite{saab2020weak}, or heuristics built on auxiliary modalities (e.g., clinician notes, text reports \cite{dunnmon2020cross}, patient videos \cite{saab2020weak}), some of which only allow for coarse annotation of the entire time series rather than of the individual segments. On the contrary, we define heuristics directly on time series. This enables seamless labeling of entire time series or their segments using the same framework.
\section*{Methodology}\label{sec:methods}
In this section, we will describe how we use domain knowledge to define heuristics to detect PVC in ECG time series. These heuristics will noisily label subsets of data. We will model these noisy labels to obtain an estimate of the unknown true class label for each data point. We then use the estimated labels to train the final classifier, which will be evaluated on held out test data and compared to alternative models trained using ground-truth labels directly. Fig.~\ref{fig:workflow} describes the full workflow we follow to train the end model $f$.
\noindent\paragraph{Domain Knowledge to Identify PVC}
\label{sec:heuristics}
A Premature Ventricular Contraction is a fairly common event when the heartbeat is initiated by an impulse from an ectopic focus which may be located anywhere in the ventricles rather than the sinoatrial node. On the ECG, a PVC beat appears earlier than usual with an abnormally tall and wide QRS-complex, with its ST-T vector directed opposite to the QRS vector, Fig.~\ref{fig:PVCandNormalbeat}(ii).
These generic characteristics allowed one non-domain-expert user to define $6$ heuristics in less than $30$ minutes. The user was initially unfamiliar with clinical ECG interpretation and referred to an online textbook \cite{PVCreference} to develop heuristics.
Expert clinicians are likely able to define heuristics more rapidly and thoroughly.
The heuristics listed below are defined directly on time series. This is in contrast to prior work which uses weak annotations or heuristics defined on an auxiliary modality such as text or images.
\textbf{Heuristics:} i.\ R-wave appears earlier than usual, ii.\ R-wave is taller than usual, iii.\ R-wave is wider than usual, iv.\ QRS-vector is directed opposite to the ST-vector, v.\ QRS-complex is inverted, vi.\ Inverted R-wave is taller than usual.
\paragraph{Modeling Labeling Functions over Patient Time Series}
We will now describe our formal assumptions about the dataset and heuristics, and introduce the modeling procedure. Given an ECG dataset of $p$ patients $X=\{x^j\}_{j=1}^p$, where $x^j \in \mathbb{R}^{T}$ are raw ECG vectors of length $T$, we can segment each ECG $x^j$ into $B<T$ beats such that $x^j=\{x^j_1,\dots,x^j_B\}$. Each segment $b \in \{1,\dots,B\}$ has an unknown class label $y^j_b \in \{-1,1\}$, where $y^j_b=1$ represents a premature ventricular contraction (\texttt{PVC}).
Our goal is to use domain knowledge to model the unknown $y^j_b$, without having to annotate the instances individually, to then train an end classifier $f(x^j_b)=y^j_b$ for automatic detection of PVC.
We define $m$ labeling functions (LFs) $\{\lambda_h(x^j_b)\}_{h=1}^{m}$ directly on the time series. These LFs noisily label subsets of beats, with $\lambda_h(x^j_b) \in \{-1,0,1\}$ corresponding to votes for \textit{negative, abstain, or positive}. These LFs do not have to be perfect and may conflict on some samples, but must have accuracy better than random \cite{boecking2020interactive}. DP uses this voting behavior to infer true labels by learning the empirical accuracies, propensities and, optionally, dependencies of the LFs via a factor graph.
We use a factor graph as introduced in Ratner et al.~\cite{ratner2016data} to model the $m$ user defined labeling functions. For simplicity, we assume that the LFs are independent conditioned on the unobserved class label.
Let $Y^j=(y^j_1,\dots,y^j_B) \in \{-1,1\}^B$ be the concatenated vector of the unobserved class variables for the $B$ beat segments of patient $j$, and let $\Lambda^j \in \{-1,0,1\}^{B \times m}$ be the LF output matrix, where $\Lambda^j_{ik}=\lambda_k(x^j_i)$ is the output of LF $k$ on beat $i$ of patient $j$.
We define a factor for LF accuracy as
\begin{equation*}
\phi_{i,k}^{Acc}(\Lambda,Y) \triangleq \mathbbm{1}\{\Lambda_{ik}=y_i\}
\end{equation*}
We also define a factor of LF propensity as
\begin{equation*}
\phi_{i,k}^{Lab}(\Lambda,Y) \triangleq \mathbbm{1}\{\Lambda_{ik}\neq 0\}
\end{equation*}
Then, the label model for a patient $j$ is defined as
\begin{align}
\begin{split}\label{eq:labelmodel}
p_{\theta}( Y^j, \Lambda^j )
&\triangleq Z_{\theta}^{-1}
\exp (\sum_{i=1}^B \sum_{k=1}^m\theta^{Acc}_k \phi_{i,k}^{Acc}(\Lambda^j_i,y^j_i) + \sum_{i=1}^B \sum_{k=1}^m\theta^{Lab}_k \phi_{i,k}^{Lab}(\Lambda^j_i,y^j_i))
\end{split}
\end{align}
where $Z_{\theta}$ is a normalizing constant. We use Snorkel~\cite{ratner2017snorkel} to learn $\theta$ by minimizing the negative log marginal likelihood given the observed $\Lambda^j$. Finally, as introduced in Ratner et al.~\cite{ratner2016data}, the end classifier $f$ is trained with a noise-aware loss function that uses the probabilistic labels $\hat{Y}^j = p_{\theta}( Y^j| \Lambda^j )$.
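To build intuition for Eq.~(\ref{eq:labelmodel}), the following is a deliberately simplified sketch (not Snorkel's actual estimator, which learns $\theta$ from $\Lambda$ alone): once the LF accuracies are known, conditional independence reduces the probabilistic label for a single beat to a naive-Bayes posterior over weighted votes:

```python
import math

def probabilistic_label(votes, accuracies):
    """Posterior P(y = +1 | votes) for one beat under the conditional
    independence assumption, with known LF accuracies and a flat prior.
    votes[k] in {-1, 0, +1} (0 = abstain); accuracies[k] in (0.5, 1)."""
    log_odds = 0.0
    for v, acc in zip(votes, accuracies):
        if v == 0:          # abstaining LFs carry no evidence on y
            continue
        w = math.log(acc / (1.0 - acc))   # log-odds weight of one vote
        log_odds += w * v
    return 1.0 / (1.0 + math.exp(-log_odds))

# three LFs of accuracy 0.9, 0.7, 0.6: two vote PVC, one abstains
p = probabilistic_label([+1, 0, +1], [0.9, 0.7, 0.6])
assert p > 0.5
```

Note how more accurate LFs dominate the vote through larger log-odds weights, while an equally accurate pair of conflicting votes cancels to a posterior of $0.5$.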
\begin{figure*}[!hbtp]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.25\textwidth]{Normalbeat.png}
&\hspace{15mm}\includegraphics[width=0.23\textwidth]{PVCbeat.png} \\
(i)&\hspace{10mm}(ii) \\
\end{tabular}
\caption{Examples of a normal (i) and PVC (ii) heartbeat. Dotted green horizontal lines represent the ECG baselines detected during pre-processing, blue and red vertical lines mark the QRS-complexes and T-waves.}
\label{fig:PVCandNormalbeat}
\end{figure*}
\noindent\paragraph{From Domain Knowledge to an Automated Arrhythmia Detector}
First, we minimally pre-processed the raw ECG signals by removing baseline wandering using a forward/backward, fourth-order high-pass Butterworth filter \cite{lenis2017comparison}. To segment ECG ($x^j$) into individual beats ($x^j_b$), we followed a simple segmentation procedure, where we considered the time segment between two alternate QRS-complexes to be a heartbeat.
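A minimal sketch of this segmentation step (illustrative only; \texttt{segment\_beats} is a hypothetical helper, and in practice the R-peak indices come from the database annotations):

```python
def segment_beats(ecg, r_peaks):
    """Split an ECG trace into beats: each beat spans the interval
    between two alternate QRS-complexes, i.e. beat i runs from the
    R-peak before it to the R-peak after it."""
    beats = []
    for i in range(1, len(r_peaks) - 1):
        beats.append(ecg[r_peaks[i - 1]:r_peaks[i + 1]])
    return beats

ecg = list(range(100))                  # stand-in trace
beats = segment_beats(ecg, [10, 30, 55, 80])
assert len(beats) == 2
assert beats[0] == ecg[10:55] and beats[1] == ecg[30:80]
```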
\begin{figure}[!htbp]
\centering
\includegraphics[width = .9\textwidth]{Subject_220.png}
\caption{Distribution of the location, height (topological prominence), and width of QRS-complexes of one patient. To turn a loosely defined heuristic such as ``\textit{R-wave appears earlier than usual}'' into an LF, we must characterize the ``usual'' location of the R-wave. To accomplish this, we fit a robust Gaussian distribution to model the variance of R-wave locations, and assume any beat located two standard deviations earlier than the estimated mean (solid green line at $\text{T}_{\texttt{Early R-wave}} = 292 \ ms$) to be a PVC beat. Since these attributes vary widely from patient to patient, we automatically compute these thresholds separately for each patient and heuristic.}
\label{fig:Tuning}
\end{figure}
We had to determine the precise locations of the QRS-complexes and T-waves. As in most prior work, we used the approximate locations of the R-wave available for each ECG record in the database, along with Scipy's peak finding algorithm \cite{2020SciPy-NMeth} to find the exact locations of the R and T waves. Further, we used the RANSAC algorithm~\cite{fischler1981random} to fit a robust linear regression line to each ECG record, to determine its baseline (horizontal green lines in Fig.~\ref{fig:PVCandNormalbeat}). The baselines were used to accurately characterize the height and depth of the R and T-waves\footnote[3]{The topological prominence measure returned by Scipy's peak finding algorithm was imprecise.} (blue and red vertical lines in Fig.~\ref{fig:PVCandNormalbeat}).
Next, we defined $6$ simple LFs based on the domain knowledge
to assign probabilistic labels (\texttt{PVC}, \texttt{OTHER} or \texttt{ABSTAIN}) to each beat. Fig.~\ref{fig:labelingfunctions} provides example pseudocodes for two of the LFs that were defined. To express the loosely-defined domain knowledge we described previously as LFs, we have to automatically assign thresholds to them. For instance, one heuristic to identify a PVC beat is to check whether its ``\textit{R-wave appears earlier than usual}''. To turn this heuristic into a LF ($\text{LF}_{\texttt{Early R-wave}}$), one has to determine the ``usual'' position of the R-wave. For this, we used the Minimum Covariance Determinant algorithm~\cite{rousseeuw1999fast} to find the covariance of the most-normal subset of the frequency histogram. We then set the threshold to the value $2$ standard deviations away from the estimated mean in the direction of interest. For example, for a particular subject (Fig.~\ref{fig:Tuning}) $\text{LF}_{\texttt{Early R-wave}}$ returns \texttt{PVC} for any beat with the R-wave appearing earlier than $\text{T}_{\texttt{Early R-wave}} = 238$ ms (vertical green line). To account for inter-patient variability, we automatically compute these subjective thresholds for each heuristic and every patient separately. Note that some of our heuristics did not require estimating any subject-specific parameters.
\begin{figure*}[!hbt]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.47\textwidth]{LF2.png}
&\includegraphics[width=0.47\textwidth]{LF1.png}\\
\end{tabular}
\caption{Example Python code for $\text{LF}_{\texttt{Wide R-wave}}$ and $\text{LF}_{\texttt{Early R-wave}}$. The \texttt{findRwaveWidth()} and \texttt{findRwave()} sub-routines return the precise width and positions of the R-wave in a beat, while the variables \texttt{WIDTH\_RWAVE} and \texttt{TIME\_RWAVE\_APPEARS\_EARLIER} reflect the thresholds $\text{T}_{\texttt{Wide R-wave}}$ and $\text{T}_{\texttt{Early R-wave}}$.}
\label{fig:labelingfunctions}
\end{figure*}
\paragraph{The End-Model Classifier}
With these heuristics, we use the label model in Eq.~(\ref{eq:labelmodel}) to obtain probabilistic labels for heartbeats of all training patients in the MIT-BIH Arrhythmia Database.
We use these probabilistic labels and the segmented beats to train a noise-aware ResNet classifier, in which we weigh each sample according to the maximum probability that it belongs to either class.
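The noise-aware weighting can be sketched as follows; the end model in the paper is a ResNet, so this numpy sketch only illustrates how probabilistic labels become sample weights and how the weights enter a loss:

```python
import numpy as np

def noise_aware_targets(probs):
    """Turn probabilistic labels into hard labels plus confidence weights.

    probs : (n, 2) array of P(OTHER), P(PVC) from the label model.
    Returns hard labels and per-sample weights = max class probability,
    so confidently labeled beats contribute more to the training loss.
    """
    probs = np.asarray(probs, dtype=float)
    labels = probs.argmax(axis=1)
    weights = probs.max(axis=1)
    return labels, weights

def weighted_cross_entropy(pred_probs, labels, weights):
    """Confidence-weighted negative log-likelihood."""
    eps = 1e-12
    nll = -np.log(pred_probs[np.arange(len(labels)), labels] + eps)
    return float(np.sum(weights * nll) / np.sum(weights))
```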
Recent studies have shown that ResNet not only performs on par with most of the state-of-the-art time series classification models~\cite{fawaz2019deep}, but also works well for automatic arrhythmia detection~\cite{ebrahimi2020review}.
\section*{Experiments and Results}
\label{sec:experiments}
\paragraph{Data}
The Massachusetts Institute of Technology – Beth Israel Hospital (MIT-BIH) Arrhythmia Database~\cite{moody2001impact} is one of the most commonly used datasets to evaluate automated Arrhythmia detection models. It contains $48$ half-hour excerpts of two-lead ECG recordings from $47$ subjects. In most records, the first channel is the Modified Limb lead II (MLII), obtained by placing electrodes on the chest. We only used the first channel to detect PVC events, since the QRS-complex is more prominent in MLII. The second channel is usually lead V1, but may also be V2, V4 or V5 depending on the subject. We refer the interested reader to Moody et al.~\cite{moody2001impact} for more details on how the database was curated and originally annotated.
\noindent\paragraph{Experimental Setup}
Our experimental setup follows the evaluation protocol of arrhythmia classification models stipulated by the American Association of Medical Instrumentation (AAMI) as described in~\cite{aami2008r}. The AAMI standard, however, does not specify which heartbeats or patients should be used for training classification models, and which for evaluating them~\cite{luz2016ecg}. Hence, we used the inter-patient heartbeat division protocol proposed by De Chazal et al.~\cite{de2004automatic} to partition the MIT-BIH Arrhythmia Database into subsets DS1 and DS2 to make model evaluation realistic. Furthermore, in the MIT-BIH Database, PVCs only account for $8\%$ of the $100,000$ beats, thus to prevent issues stemming from high class imbalance, we randomly oversampled \texttt{PVC} beats in DS1 before using it to train the ResNet classifier. The architecture of our ResNet models is the same as in Fawaz et al.~\cite{fawaz2019deep}. To tune the learning rate, batch size, and number of feature maps hyper-parameters, we split the training data into train and validation subsets in the $70/30$ proportion. All the models were trained for $25$ epochs. In the next subsection, we report the results of the ResNet models which had the best true positive rate (TPR) at low false positive rate (FPR) on the validation data.
We also compare the performance of our weakly supervised model with an active learning (AL) alternative. For AL, we used ResNet to iteratively identify data points for manual labeling using uncertainty sampling~\cite{settles2012active}. The model initially had access to a randomly sampled balanced seed set of 100 labeled data points. In each AL iteration, we retrained ResNet using the training data extended with 100 newly labeled data points. We continued this process until the training set consisted of 4,000 points. AL hyper-parameters (the query size and size of the seed set) are similar to Pasolli and Melgani's setup~\cite{pasolli2010active}. Table~\ref{tab:results} reports the performance of the final ResNet model trained on 4000 data points incrementally labeled using AL, averaged over 10 random initializations of the seed set.
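One query step of uncertainty sampling can be sketched as follows (least-confidence sampling; a generic sketch rather than the exact experimental code):

```python
import numpy as np

def select_uncertain(pred_probs, labeled_mask, query_size=100):
    """Pick the unlabeled pool points the model is least certain about.

    pred_probs   : (n, 2) predicted class probabilities on the pool
    labeled_mask : boolean array, True where a point is already labeled
    Uncertainty is 1 - max class probability (least confidence).
    """
    uncertainty = 1.0 - np.max(pred_probs, axis=1)
    uncertainty[labeled_mask] = -np.inf   # never re-query labeled points
    return np.argsort(uncertainty)[::-1][:query_size]
```

Each AL iteration labels the returned points, adds them to the training set, and retrains the model.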
\begin{table*}[!htbp]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{c|c c c c c c c c c}
\Xhline{1pt}
\textbf{Model} & $\textbf{TPR}$ & $\textbf{TNR}$ & $\textbf{PPV}$ & $\textbf{FPR}$ & $\textbf{Acc}$ & $\textbf{FPR}_{50\% \ \text{TPR}}$ & $\textbf{FNR}_{50\% \ \text{TNR}}$ & $\textbf{TPR}_{1\% \ \text{FPR}}$ & $\textbf{TNR}_{1\% \ \text{FNR}}$ \\ \hline
\textbf{Fully sup.} & 0.884 & 0.970 & 0.664 & 0.030 & 96.25 & 0.005 & 0.028 & \textbf{0.793} & 0.266 \\
\textbf{Pr.\ labels} & 0.645 & 0.960 & 0.523 & 0.039 & 85.84 & 0.019 & 0.140 & 0.165 & 0.252 \\
\textbf{Active learn.} & 0.514 & \textbf{0.993} & \textbf{0.821} & \textbf{0.007} & 94.15 & 0.020 & 0.021 & 0.604 & 0.405 \\
\textbf{Weak sup.} & \textbf{0.892} & 0.965 & 0.629 & 0.036 & \textbf{97.25} & \textbf{0.004} & \textbf{0.013} & 0.707 & \textbf{0.466} \\ \Xhline{1pt}
\end{tabular}}
\caption{Results on held-out test set. Weakly supervised ResNet performs on par with the fully supervised model and outperforms ResNet trained using active learning. $\textbf{FPR}_{50\% \ \text{TPR}}$ and $\textbf{FNR}_{50\% \ \text{TNR}}$ represent the FPR and FNR at $50\%$ TPR and TNR, respectively. Similarly, $\textbf{TPR}_{1\% \ \text{FPR}}$ and $\textbf{TNR}_{1\% \ \text{FNR}}$ represent the TPR and TNR at $1\%$ FPR and FNR, respectively. The reported AL results are averaged over 10 independent initializations of the random seed set. All measures are computed with \texttt{PVC} as the positive class.}
\label{tab:results}
\end{table*}
\noindent\paragraph{Results}
We trained ResNet models on DS1 as a training set, using either probabilistic labels or the full ground truth, and evaluated them on the held-out set DS2. The results, summarized in Tab.~\ref{tab:results}, reveal that the end classifier trained using weak supervision is competitive with the model trained on the full ground truth data. Moreover, our weakly supervised model also outperformed the ResNet trained using 4,000 data points obtained via active learning.
\begin{figure*}[!hbt]
\centering
\begin{tabular}{cccc}
\includegraphics[width=0.23\textwidth]{specificity_AL.png}
&\includegraphics[width=0.23\textwidth]{sensitivity_AL.png}
&\includegraphics[width=0.23\textwidth]{TNR1FNR_AL.png}
&\includegraphics[width=0.23\textwidth]{TPR1FPR_AL.png}\\
\end{tabular}
\caption{Active learning results. Our weakly supervised model either exceeds or matches the performance of its AL counterpart. The shaded red regions correspond to the 95\% Wilson's score intervals.}
\label{fig:Activelearningresults}
\end{figure*}
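The 95\% Wilson score intervals shown in Fig.~\ref{fig:Activelearningresults} follow directly from the number of successes and trials:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion.

    Unlike the normal approximation, the interval stays inside [0, 1]
    and behaves well for proportions near 0 or 1, which matters at the
    low error rates reported here.
    """
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return (centre - half, centre + half)
```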
Let us review the key insights stemming from these results. First, the thresholds for the labeling functions that were automatically determined by our proposed auto-thresholding algorithm varied quite drastically across subjects. For instance, the threshold on the position of the R-wave, $\text{T}_{\texttt{Early R-wave}}$ had a mean of $230 \ ms$ and a standard deviation of $77.14 \ ms$.
This simple personalization of the LF parameters turned out to be the key to good generalization properties of the end-model; it failed to perform well when these parameters were fixed to reasonable global settings.
The auto-thresholding algorithm is a practically important contribution of our work, as it allows our methods to scale across diverse cohorts of subjects while mitigating potentially excessive manual effort in tuning LFs to specific patients. However, unsurprisingly, even with auto-tuning, our LFs and the estimated probabilistic labels (denoted {``Pr.\ labels"} in Tab.~\ref{tab:results}) were not perfect. In fact, we observed high variability in the performance of Pr.\ labels across different subjects, when compared to ground truth. For example, while they had almost perfect sensitivity for Subject 228 ({TPR} = 0.994), they performed extremely poorly for Subject 214 ({TPR} = 0). Overall, Pr.\ labels had low TPR and high TNR on their own on the training set and held-out test set, which is understandable given the prior class imbalance.
\begin{figure*}[!hbt]
\centering
\begin{tabular}{ccc}
\includegraphics[width=0.3\textwidth]{Normal_beat_1_WS.png}
&\includegraphics[width=0.3\textwidth]{PVC_beat_17109_WS.png}
&\includegraphics[width=0.3\textwidth]{PVC_beat_17468_WS.png} \\
(i)&(ii)&(iii) \\
\includegraphics[width=0.3\textwidth]{Normal_beat_1_GT.png} &\includegraphics[width=0.3\textwidth]{PVC_beat_17109_GT.png}
&\includegraphics[width=0.3\textwidth]{PVC_beat_17468_GT.png}\\
(iv)&(v)&(vi) \\
\end{tabular}
\caption{Class Activation Maps for weakly [(i) - (iii)] and fully supervised [(iv) - (vi)] ResNet reveal that the models discriminate between \texttt{PVC} and \texttt{OTHER} beats primarily based on the morphology of the QRS-complex.
The models appear to have learned to focus on similar regularities. Graphs (i) and (iv) represent an example \texttt{OTHER} beat, while the others show two examples of \texttt{PVC} beats.}
\label{fig:PVC&NormalCAM}
\end{figure*}
Tab.~\ref{tab:results} summarizes performance metrics on the held-out test set (True Positive Rate (TPR), True Negative Rate (TNR), Positive Predictive Value (PPV), False Positive Rate (FPR), and Accuracy (Acc)) measured at the 50-50 class likelihood threshold, chosen for consistency with prior literature on the PVC prediction task. The weakly supervised ResNet ({Weak sup.}) significantly improves sensitivity to the {PVC} class compared to applying the LF labels ({Pr.\ labels}) directly to the test data. This illustrates that WS ResNet is able to generalize effectively beyond the hypothesis learned by the noisy weak LFs. Our end model performs on par with the fully supervised ResNet ({Fully sup.}) trained on the same data but using all the available pointillistic labels in the MIT-BIH Database.
Tab.~\ref{tab:results} also compares performance of the four models under consideration at operational settings of pragmatic interest in clinical practice, that is at very low error rates. We report the ability to confidently identify positive cases at FPR of 1\%, and the ability to confidently identify negatives at FNR of 1\%. We complement these results with the error rates observed at 50\% probability of detection of both negative and positive cases.
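Operating points such as $\textbf{TPR}_{1\% \ \text{FPR}}$ can be read off the ROC curve; a sketch using scikit-learn's \texttt{roc\_curve} (labels and scores are illustrative):

```python
import numpy as np
from sklearn.metrics import roc_curve

def tpr_at_fpr(y_true, y_score, target_fpr=0.01):
    """Highest achievable TPR subject to FPR <= target_fpr."""
    fpr, tpr, _ = roc_curve(y_true, y_score)
    ok = fpr <= target_fpr
    return float(tpr[ok].max()) if ok.any() else 0.0
```

Swapping the roles of the two classes gives the corresponding $\textbf{TNR}_{1\% \ \text{FNR}}$ operating point.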
The results show very little operational utility potential from applying the inferred probabilistic labels directly.
However, our weakly supervised ResNet model trained on those inferred labels is highly competitive to the equivalently structured ResNet trained on the abundant supply of manually annotated data.
Weakly supervised ResNet appears particularly strong at identifying negative cases, while its positive recall performance is close to the ground-truth based equivalent.
Next, we compare the performance of the weakly supervised ResNet against ResNet trained using active learning (``Active learn."\ in Tab.~\ref{tab:results}); the weakly supervised model performs better on all performance metrics barring TNR, PPV and FPR. Graphs in Fig.~\ref{fig:Activelearningresults} show that, in the range of up to 4,000 pointillistically labeled training data points, the weakly supervised model either outperforms or matches its active learning counterpart, at a drastically lower requirement of human effort.
To closely examine what the weakly and fully supervised ResNet models are learning, we plotted Class Activation Maps~\cite{zhou2016learning} of normal and PVC beats in Fig.~\ref{fig:PVC&NormalCAM}. It is evident that to discriminate between PVCs and other beats, our models are primarily paying attention to the QRS complexes in these examples.
Moreover, it also appears from these plots that both models not only perform on par, but they also tend to focus on similar signatures of the ECG signals.
This observation suggests at least some equivalence between the model trained on ground-truth annotation and the one trained on labels inferred from a small number of simple heuristics.
These results reassure us that the more expensive process can be effectively replaced by the proposed framework of weak supervision that uses a few labeling functions based on high-level aspects of domain knowledge derived directly from the time series characteristics.
\section*{Discussion and Conclusion}
We demonstrated that weak supervision with domain heuristics defined directly on time series provides a promising avenue for training medical ML models without the need for large, manually annotated datasets.
To support this claim, we developed an arrhythmia detection model which performs on par with its fully-supervised counterpart, and does not need point-by-point data annotation.
This weakly supervised model has been developed in a fraction of the time that would be required to provide a fully labeled training set.
We only needed a handful of heuristics to infer probabilistic labels sufficient to yield a reliable end model.
These simple heuristics reflected basic clinical intuition that can be gleaned from ECG diagnostics tutorials.
We expect that engaging expert clinicians to harvest additional heuristics would allow further improvements.
We posit that the proposed approach not only saves effort and time, but also aligns the process of knowledge acquisition from domain experts better with human nature
than its tedious pointillistic data annotation alternative. Further, we show that domain heuristics can be automatically tuned to patient-specific characteristics by defining parameter tuning rules.
In our example, auto-tuning of ECG waveform interpretations accounts for inter-patient variability, while keeping manual labor to a minimum.
The ML community has devised several techniques to overcome the limitations of expensive pointillistic labeling such as intelligently choosing the most informative training samples to label \cite{settles2012active}, combining both labeled and unlabeled data \cite{van2020survey} and harnessing the power of crowds \cite{foncubierta2012ground, orting2020survey}. While semi-supervised learning has been successfully applied to improve arrhythmia detection models without patient-specific data~\cite{zhai2020semi}, these methods still rely on a significant proportion of labeled training data to start with. On the other hand, crowdsourcing has shown promise in generating ground truth for e.g.\ medical imaging, but prior research~\cite{foncubierta2012ground} found several limitations such as the lack of trustworthiness, inability of non-expert workers to annotate fine-grained categories and ethical concerns around patient privacy. Active learning, however, has by far been the most commonly utilized technique in settings where annotating large quantities of data {\em en-masse} is prohibitively expensive \cite{wang2014797}.
Multiple avenues of future work include modelling dependencies between LFs to improve both the efficiency and accuracy of label models, and developing a library of time series primitives to streamline development of LFs for such data.
We would also like to build interfaces to support interactive discovery of LFs and to rigorously validate resulting end models.
Further, we intend to investigate hybrid approaches that will opportunistically combine weak supervision with pointillistic active learning,
and conduct user studies with clinicians to better understand the challenges and opportunities for interactive harvesting of domain knowledge.
We also aim to enable detection of other types of abnormalities that can be seen in ECG data, and apply our approach to other types of hemodynamic monitoring waveforms.
Time series data is prevalent in healthcare. However, the costs of preparing such data for training and validating new models, and of maintaining already deployed models,
prevent the otherwise realizable benefits of machine learning in clinical decision support from being widely achieved.
We believe that approaches similar to the one presented in this paper can help make a decisive push towards
proliferating beneficial uses of machine learning in this important field of application.
\section*{Acknowledgements}
This work was partially supported by the Defense Advanced Research Projects Agency award FA8750-17-2-0130, and by the Space Technology Research Institutes grant from National Aeronautics and Space Administration’s Space Technology Research Grants Program.
\bibliographystyle{unsrt}
\section{Introduction}
TikTok, a short-form video-sharing platform with over one billion active users \cite{stokel2022tiktok}, is currently one of the largest social media platforms in the world. Despite its popularity, the platform’s apparent influence on public understanding of the Russian invasion of Ukraine remains perplexing. Major media organizations have dubbed the ongoing war started by the Russian invasion of Ukraine “the first TikTok war” \cite{dang2022tiktok,frenkel2022tiktok,paul2022tiktok,chayka2022watching, tiffany2022myth}, alluding to the platform’s significant role in shaping how the war is seen around the world.
TikTok has emerged as a prominent source for information about the war in Ukraine. For instance, over a quarter of people below age 25 in the United States consider TikTok their primary news source \cite{matsa2022americans}, suggesting that many young people in the West are building their understanding of the war from TikTok videos. The app has also emerged as a force within the Russian media ecosystem: after Russia criminalized criticism of the military, TikTok prevented Russian users from uploading new content and seeing new content from outside Russia, thus shaping what information Russians receive and disperse in relation to the war \cite{oremus2022tiktok}. TikTok evidently has substantial power over how major world events like the invasion of Ukraine are perceived and discussed \cite{lorenz2022tiktok}, but this influence is currently poorly understood.
We seek to investigate, and enable others to investigate, the relationship between TikTok and the discourse surrounding the invasion of Ukraine, as well as its underlying mechanics. The aim is that research from this dataset will help us understand the specific differences between the wider world of news and older social media, and the new world of TikTok. We want to learn how the affordances of the platform result in divergence in language use, discourse topics and interaction patterns. In this work we will show that those social dynamics can be uncovered in this data.
To create the dataset, we identified hashtags and keywords explicitly related to the conflict to collect a core set of videos (or \textquote{TikToks}). We then compiled comments associated with these videos. In total we collected approximately 16 thousand videos and 11 million comments, from approximately 6 million users.
Here we present a preliminary analysis of social dynamics in the dataset by observing three successive levels of social structure. At the macro-level, we look at word use and the languages used on the platform; at the meso-level, we expose semantic clusters reflecting real-world perspectives; and at the micro-level, we investigate dyadic user relationships. We show that the data contains evidence of rich political and social processes at each stage.
To create this dataset, we developed a TikTok web scraper, PyTok, that allows us to download a variety of data from the platform. We have open sourced PyTok on Github at \url{https://github.com/networkdynamics/pytok}. We have released the data required to recreate the dataset (in order to allow user withdrawal from the dataset) here: \url{https://doi.org/10.5281/zenodo.7534952}. We also release the scripts used to prepare this analysis here: \url{https://github.com/networkdynamics/polar-seeds}.
\section{Method}
In this section, we will explain how we built the dataset. We required a method that both searched TikTok content and pulled the returned results (whether videos, comments, or users). An initial survey of social media scraping libraries revealed several available options, none of which completely fit our needs. Specifically, some libraries are exclusively browser automation based, which results in slow collection times. Others are entirely API-based, which is vulnerable to TikTok API changes and CAPTCHAs \cite{freelon2018computational}.
We therefore opted to develop a library that strikes a balance between both approaches: initial requests use a web browser automation library that can fetch browser cookies without in-depth knowledge of the TikTok API, followed by API requests once the relevant cookies are stored for improved performance. We started the library as a fork of the TikTok-Api developed by David Teather\footnote{https://github.com/davidteather/TikTok-Api}. Our library can collect videos from hashtags, general searches, and a user's history, as well as comments from videos. Video downloading has not yet been included as a feature.
\begin{table*}[h]
\centering
\begin{tabular}{c|c}
Search Term & Meaning \\
\hline
\foreignlanguage{russian}{володимирзеленський} & Ukrainian for \textquote{Volodymyr Zelensky} \\
\foreignlanguage{russian}{славаукраїні} & Ukrainian for: \textquote{Glory to Ukraine} \\
\foreignlanguage{russian}{россия} & Russian for \textquote{Russia} \\
\foreignlanguage{russian}{війнавукраїні} & Ukrainian for \textquote{War in Ukraine} \\
\foreignlanguage{russian}{війна} & Ukrainian for \textquote{War} \\
\foreignlanguage{russian}{нівійні} & Ukrainian for \textquote{No War} \\
\foreignlanguage{russian}{нетвойне} & Russian for \textquote{No War} \\
\foreignlanguage{russian}{зеленский} & Russian for \textquote{Zelensky} \\
\foreignlanguage{russian}{зеленський} & Ukrainian for \textquote{Zelensky} \\
\foreignlanguage{russian}{путинхуйло} & Russian for \textquote{Fuck Putin} \\
\foreignlanguage{russian}{путінхуйло} & Ukrainian for \textquote{Fuck Putin} \\
\end{tabular}
\caption{Translations of search terms used for finding videos related to the invasion. Note the generally pro-Ukraine sentiment, as these hashtags are both more popular and were selected for their common co-occurrence with the seed-set hashtags.}
\label{tab:translations}
\end{table*}
Using our library, we collected all videos appearing under TikTok's search functionality (under this URL: \url{https://www.tiktok.com/tag/{search-term}}) for a range of hashtags. However, hashtag searches seem to be limited to approximately the 1000 most viewed videos. Therefore, to expand the video set, we also used the general search functionality which allows more videos to be found for a search term, for example using the URL: \url{https://www.tiktok.com/search/video?q={search-term}}. This search comes at the cost of including more unrelated videos due to the fuzzy search functionality. In some cases where a search term produced many unrelated videos, searching with a hashtag in the general search functionality improved the specificity of the search.
We used the following search terms in our video collection: \\
\texttt{standwithukraine, russia, nato, putin, moscow, zelenskyy, stopwar, stopthewar, ukrainewar, ww3, \foreignlanguage{russian}{володимирзеленський, славаукраїні, путінхуйло, россия, війнавукраїні, зеленський, нівійні, війна, нетвойне, зеленский, путинхуйло}, \#denazification, \#specialmilitaryoperation, \#africansinukraine, \#putinspeech, \#whatshappeninginukraine}. \\
We built the list of search terms via an initial qualitatively derived seed set of hashtags (the first ten) and collected videos tagged with these terms. We then looked at the ranked co-occurring hashtags in the videos to expand our set of search terms. Although we started with only English-language hashtags in the seed set, we realised that Ukrainian and Russian hashtags had the highest co-occurrences with our seed set of hashtags, so we added them as the next 10 search terms in the list. These search terms are not all strictly associated with the invasion of Ukraine, but in our initial explorations we found that, in general, the majority of content collected using the terms was pertinent. See Table \ref{tab:translations} for translations of the non-English search terms.
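The co-occurrence ranking used to expand the seed set can be sketched as follows (the \texttt{hashtags} field name is illustrative of the per-video metadata):

```python
from collections import Counter

def rank_cooccurring_hashtags(videos, seed_hashtags, top_k=20):
    """Rank hashtags by how often they co-occur with the seed set.

    videos : iterable of dicts, each with a 'hashtags' list per video
    Returns the top_k non-seed hashtags, candidates for new search terms.
    """
    seed = {h.lower() for h in seed_hashtags}
    counts = Counter()
    for video in videos:
        tags = {h.lower() for h in video.get("hashtags", [])}
        if tags & seed:                 # the video matched the seed set
            counts.update(tags - seed)  # credit its other hashtags
    return [tag for tag, _ in counts.most_common(top_k)]
```

Applying this ranking to the videos collected with the English seed terms surfaced the Ukrainian and Russian hashtags added to the list above.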
Due to the limited search functionality of TikTok, only the most popular videos are provided for a search. So in order to find less popular videos, one needs to use less popular search terms. We were able to find five terms related to the invasion that had a small total view count, which we added to our search terms to get a set of videos with a range of view counts. Note these last five search terms are written with a hashtag, which is needed to increase the specificity of TikTok's fuzzy search engine for these less popular terms.
Due to the fuzzy search functionality of TikTok, some of the videos returned by the search did not contain an exact search match and in some cases were completely unrelated. For example, in the most extreme case we observed, results for \textquote{denazification} mostly returned results for \textquote{derealization}, a topic unrelated to Ukraine. In this case, all videos with \textquote{derealization} were removed, due to the clear proliferation of completely unrelated videos. We found that after the removal of \textquote{derealization}, 95\% of the videos in the dataset included at least one of our 25 keyword search terms. We leave in the 5\% of videos that do not contain one of our search terms, with the intuition that most of the time the fuzzy search functionality returns videos related to the war even when they do not contain exact search terms. If a cleaner dataset is required, the videos can easily be filtered to include only a more precise set of hashtags.
With the initial video set fetched, we also collected the comments for each of these videos (limiting to 1000 comments per video to ensure reasonable collection duration), to provide additional text data for analysis.
Note that, unlike Twitter, TikTok does not allow others to see a user's follows. Furthermore, although a user can display the videos they have liked on their profile, this option is seldom used. This prevents methods developed for Twitter using follower or like networks from being used on TikTok data.
Because TikTok data collection methods are still in their infancy, our methodology represents an important contribution to the computational study of TikTok. In particular, our methodology is flexible and allows researchers who are interested in studying other subjects or phenomena on TikTok to tailor it to their needs. However, the issues we encountered with TikTok's limited search functionalities highlight how challenging unbiased sampling is under these constraints. It is urgent that future work focus on best-practice methodologies for obtaining more representative samples of data from TikTok.
\section{Dataset}
We have released the dataset, including only the data required to re-create it (to allow users to delete content from TikTok and be removed from the dataset if they wish), at the following repository: \url{https://doi.org/10.5281/zenodo.7534952}. Contained in this repository are the unique identifiers in a CSV required to re-create the dataset, alongside scripts that will automatically pull the full dataset. The full dataset will be created as a collection of JSON files, and there is a script available in the repository to assemble the most salient data into CSV files. Additional metadata can be found at the dataset repository.
In total the full dataset contains approximately 15.9 thousand videos and 10.8 million comments, from 6.3 million users. On average, we captured approximately 1.9 comments per commenting user and 1.5 videos per user who posted a video.
We plan to further expand this dataset as collection processes progress and the war continues. We will version the dataset to ensure reproducibility.
\section{Experiments}
Our objective for this dataset was to enable investigation into social dynamics on TikTok around a real-world event with which the platform and its users strongly interacted. To this end, we conducted three initial investigations on this dataset. Each investigation analyzes a different level of social interaction as it pertains to the Ukraine conflict.
At the macro-level of social interaction on the platform, we want to understand how the relative use of words and languages changed over time, especially Russian and Ukrainian. We then zoom in to the meso-level, from language in general to themes, looking at the extent to which politically significant semantic clusters were exposed by topic modelling. Finally, at the most granular level, we look at dyadic relationships in an interaction network to observe the nature of individuated user interactions on the platform.
\subsection{Words and Languages}
We first took a macro view of how language prominence evolved on the platform, looking at how the words used on the platform as a whole changed over time. Particularly, we wanted to measure the use of languages and words from the beginning of the invasion and onwards, to understand if there was simply one changepoint at the start of the invasion, or if there were more complex, ongoing changes. The time series produced by this exposes the temporal share of language communities and the various attention cycles of the platform.
To do this we used simple keyword, hashtag, and language searches. Figure \ref{fig:all_over_time} features a range of measures showing the variation in data over the time-span of the invasion.
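The time series in Fig.~\ref{fig:all_over_time} reduce to counting keyword matches per time bin; a pandas sketch (the column names are illustrative of the comment metadata):

```python
import pandas as pd

def keyword_share_over_time(comments, keyword, freq="W"):
    """Weekly share of comments mentioning a keyword.

    comments : DataFrame with 'createtime' (parseable dates) and 'text'
    Returns a Series indexed by week: matching comments / total comments.
    """
    df = comments.set_index(pd.to_datetime(comments["createtime"]))
    hits = df["text"].str.contains(keyword, case=False, na=False)
    weekly_hits = hits.resample(freq).sum()
    weekly_total = hits.resample(freq).count()
    return (weekly_hits / weekly_total).fillna(0.0)
```

The same binning applies to hashtag counts and to per-language comment counts (after language identification).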
\begin{figure}[!h]
\centering
\includegraphics[width=\columnwidth]{figs/all_over_time.png}
\caption{Variety of metrics showing the change in language and languages used over the course of the invasion. Note the uptick in comments at the end of the period, due to TikTok ranking more recent comments among the top comments.}
\label{fig:all_over_time}
\end{figure}
\begin{figure*}[h]
\centering
\includegraphics[width=\textwidth]{figs/topics_over_time.png}
\caption{Frequency of a set of topics over the timeline of the invasion. The set of topics was selected for diversity of temporal and categorical nature. Topic names show the top 4 words in the topic as chosen by TF-IDF.}
\label{fig:topic_timeline}
\end{figure*}
At this general level of social interaction, we see a number of behavioural changes over time. Notably, we see evidence of the mass adoption of the Ukrainian language instead of Russian over time by users who at some point use the Ukrainian language \cite{afanasiev2022war}. We also see a slow change from majority English text to Russian text over the course of the invasion, indicating sustained attention from Russian speakers but a decrease in attention from English speakers. It seems likely that the medium of video allows greater mutual interaction between different language populations, engendering sophisticated temporal language dynamics that may not be seen on a text based platform.
We also examined country and leader name mentions, finding that attention remained on Putin and Russia throughout the war with only minor fluctuations, indicating a surprisingly sustained attention cycle. Conversely, we see sustained attention on Ukraine but limited attention on Zelensky (here tracking the total usage of Zelensky, Zelenskyy and Zelenskiy, the three common spellings). Despite heavy attention on Zelensky in the news, he did not receive comparably high attention on TikTok.
In general the data clearly shows that there were a number of large-scale language use changes in the run up to, at the time of, and after, the invasion, particularly of a political nature.
\subsection{Topics}
As many young people increasingly use TikTok as a primary news source, it is important to verify whether the information they consume about major world events aligns with credible news media coverage, in order to limit the power of misinformation. Additionally, we wanted to investigate how TikTok users discussed the invasion of Ukraine over time as a preliminary step to understanding how TikTok shapes a user's worldview. We therefore zoomed into the meso-level to find semantic clusters on the platform and identify if they reflect perspectives of news media coverage of the invasion. For this experiment, we used topic modelling to examine video comments, as they represent the richest and most numerous source of data in our dataset. We used the BERTopic library \cite{grootendorst2022bertopic} and found that using a RoBERTa model fine-tuned on a Twitter dataset \cite{barbieri2020tweeteval} combined with HDBSCAN based hierarchical clustering provided the clearest breakdown of discourse on the platform.
A timeline of the frequency of a set of topics, hand-picked for diversity of temporal and categorical nature, can be found in Figure \ref{fig:topic_timeline}.
Here we can clearly see that there is a diverse range of content reflecting various political perspectives occurring on the platform. We might expect to only see content that is aligned with perspectives in the general public media discourse, but here we also see highly popular platform specific content, indicative of discourse unique to TikTok.
The frequency of topic use over time reflects a combination of temporal dynamics, from spiking at the time of invasion to increasing over the course of the war. We see discussion of world wars spiking well before the official start of the invasion, mention of a popular song in pro-Russia circles spiking just after the invasion, and discussion of rising oil prices spiking as prices reacted to events. However, other topics show steady use, including the popular pro-Russia chant \textquote{uraaa}, and references to \textquote{fake news}. We see that TikTok features a range of discussion centred on subjects both in the media and unique to the space, indicating complex temporal meso-level processes that are causing the shifts in major topics occurring on the platform.
\subsection{Interactions}
\begin{figure*}[!h]
\centering
\includegraphics[width=\textwidth]{figs/degree_distributions.png}
\caption{Degree distributions of the multi-graph interaction network. Note the majority of interactions are comments on videos, followed by comments replying to other comments. Video shares and video mentions are fairly infrequent in this dataset.}
\label{fig:degree_dists}
\end{figure*}
\begin{figure}[!h]
\centering
\includegraphics[width=\columnwidth]{figs/edge_counts.png}
\caption{Distribution of interaction counts in dyadic user relationships. We see approximately 10 million dyadic user relationships with 1 interaction, and 1 relationship with more than 500 interactions.}
\label{fig:edge_counts}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=\columnwidth]{figs/edge_dir_share.png}
\caption{Reciprocity share in dyadic user relationships. A reciprocity of 0 indicates the interactions are entirely one sided, i.e. only one of the users is taking part in the interaction with the other not responding. A reciprocity of 0.5 indicates perfect reciprocity, i.e. the two users interact with each the same amount. Colour represents the number of instances with that particular combination of interaction count and reciprocity.}
\label{fig:edge_dir_share}
\end{figure}
In previous analyses we confirmed that a diverse set of discourse patterns are occurring on the platform. Next, we wanted to gain an initial insight into the extent to which the content on the platform was occurring in a social manner. In other words, are there interactions between users like in a social network, or does TikTok look more like an entertainment platform where everyone is watching different media and not interacting with each other? This has major implications for the role that TikTok plays as a mass-scale platform with potential influence on millions of people across the world, particularly in a geopolitical situation as complex and high-stakes as the invasion of Ukraine. Specifically, evidence of interactions between users would indicate that people are actively discussing the invasion of Ukraine and thus contributing to the global discourse about the Russian-Ukraine war, making TikTok an important site for modern political engagement. However, a lack of interaction could suggest that users are simply consuming content about the invasion, but not using TikTok to discuss or debate specific topics in a meaningful way on the platform.
To enable this analysis, we zoomed in to the micro-level of interaction patterns between users on the platform. We constructed a multi-graph user interaction network from the data through the content on the platform of videos and their associated comments. This network consists of users as nodes, where edges can be any kind of observed interaction between a user. The possible interactions are: sharing a video (duets, stitches etc.), commenting on a video, replying to a comment, mentioning another user in a comment, and mentioning another user in a video caption.
Figure \ref{fig:degree_dists} shows the degree distributions of the multi-graph, including for each type of interaction. We see that the degree distributions are heavy tailed, i.e. some users have a very large number of interactions with other users on the platform. Figure \ref{fig:edge_counts} shows the distribution of interaction counts for all user dyads in the network. Again this is heavy tailed, i.e. most users have only one interaction between them, but some have hundreds. Figure \ref{fig:edge_dir_share} shows the reciprocity of interactions in the network. Here we see that most interactions are one sided, i.e. one user always interacting with another in one direction, but there are some more even-sided interactions.
At the most granular level, we see that though most interactions are uni-directional and one-time events, seeming to indicate that in general TikTok is used as an entertainment platform with minimal personal social interaction. However, there is a heavy-tail of reciprocal and frequent interactions in user dyads. The small number of high interaction count dyads could be evidence of localised and concentrated social activity in TikTok. It would be informative for further work on this to shed light on the kinds of users who engage in such interactions - are they content creators? well known users? or entirely random accounts? The profile of the interacting users would shed light on the nature and import of these interactions on TikTok.
\section{Conclusion}
An initial exploration of the dataset presented here indicates that there are myriad ongoing complex language dynamics on the TikTok platform related to the invasion of Ukraine. These language dynamics can be seen at macro, meso, and micro-levels of complexity, from coarse language changes to patterns in dyadic relationships, demonstrating that this dataset exposes TikTok's particular social dynamics. It is clear that there are variety of behaviours occurring on, and perhaps driven by, the platform, and that these behaviours can be seen in this dataset. Further research on these processes could prove fruitful and significant.
We hope that, by releasing this dataset and library, researchers can work towards a more holistic understanding of the dynamics and behaviours occurring on TikTok, and resulting stronger comparative analyses between platforms produce a deeper comprehension of these social processes.
\section{Ethical Statement}
While this dataset could allow the exposure of the influence of TikTok on its user base and improve the tranparency of the platform, it also represents an analysis of personal data that TikTok users may not have fully informed consent on. Though all data analyzed in this dataset is public facing, they may not have released that data with the knowledge of all its final uses. This is why we have the released the dataset in such a way that if users want to remove the content they have on TikTok, it will no longer be able to be accessed as part of this dataset.
\section{Acknowledgments}
We would like to thank Dinara Karimova for her help with translations and interpreting our results.
\printbibliography
\end{document}
\section{Introduction}
TikTok, a short-form video-sharing platform with over one billion active users \cite{stokel2022tiktok}, is currently one of the largest social media platforms in the world. Despite its popularity, the platform’s apparent influence on public understanding of the Russian invasion of Ukraine remains perplexing. Major media organizations have dubbed the ongoing war started by the Russian invasion of Ukraine “the first TikTok war” \cite{dang2022tiktok,frenkel2022tiktok,paul2022tiktok,chayka2022watching, tiffany2022myth}, alluding to the platform’s significant role in shaping how the war is seen around the world.
TikTok has emerged as a prominent source for information about the war in Ukraine. For instance, over a quarter of people below age 25 in the United States consider TikTok their primary news source \cite{matsa2022americans}, suggesting that many young people in the West are building their understanding of the war from TikTok videos. The app has also emerged as a force within the Russian media ecosystem: after Russia criminalized criticism of the military, TikTok prevented Russian users from uploading new content and seeing new content from outside Russia, thus shaping what information Russians receive and disperse in relation to the war \cite{oremus2022tiktok}. TikTok evidently has substantial power over how major world events like the invasion of Ukraine are perceived and discussed \cite{lorenz2022tiktok}, but this influence is currently poorly understood.
We seek to investigate, and enable others to investigate, the relationship between TikTok and the discourse surrounding the invasion of Ukraine, as well as its underlying mechanics. The aim is that research from this dataset will help us understand the specific differences between the wider world of news and older social media, and the new world of TikTok. We want to learn how the affordances of the platform result in divergence in language use, discourse topics and interaction patterns. In this work we will show that those social dynamics can be uncovered in this data.
To create the dataset, we identified hashtags and keywords explicitly related to the conflict to collect a core set of videos (or \textquote{TikToks}). We then compiled comments associated with these videos. In total we collected approximately 16 thousand videos and 11 million comments, from approximately 6 million users.
Here we present a preliminary analysis of social dynamics in the dataset by observing three successive levels of social structure. At the macro-level, we look at word use and the languages used on the platform; at the meso-level, we expose semantic clusters reflecting real-world perspectives; and at the micro-level, we investigate dyadic user relationships. We show that the data contains evidence of rich political and social processes at each stage.
To create this dataset, we developed a TikTok web scraper, PyTok, that allows us to download a variety of data from the platform. We have open sourced PyTok on Github at \url{https://github.com/networkdynamics/pytok}. We have released the data required to recreate the dataset (in order to allow user withdrawal from the dataset) here: \url{https://doi.org/10.5281/zenodo.7534952}. We also release the scripts used to prepare this analysis here: \url{https://github.com/networkdynamics/polar-seeds}.
\section{Method}
In this section, we will explain how we built the dataset. We required a method that both searched TikTok content and pulled the results (whether videos, comments, or users) returned. An initial search of social media scraping tools revealed several available libraries, with none completely fitting our needs. Specifically, some libraries are exclusively browser-automation based, which results in slow collection times. Others are entirely API-based, which is vulnerable to TikTok API changes and CAPTCHAs \cite{freelon2018computational}.
We therefore opted to develop a library that strikes a balance between both approaches: initial requests use a web browser automation library that can fetch browser cookies without in-depth knowledge of the TikTok API, followed by API requests once the relevant cookies are stored for improved performance. We started the library as a fork of the TikTok-Api library developed by David Teather\footnote{\url{https://github.com/davidteather/TikTok-Api}}. Our library can collect videos from hashtags, general searches, and a user's history, as well as comments from videos. Video downloading has not yet been included as a feature.
\begin{table*}[h]
\centering
\begin{tabular}{c|c}
Search Term & Meaning \\
\hline
\foreignlanguage{russian}{володимирзеленський} & Ukrainian for \textquote{Volodymyr Zelensky} \\
\foreignlanguage{russian}{славаукраїні} & Ukrainian for \textquote{Glory to Ukraine} \\
\foreignlanguage{russian}{россия} & Russian for \textquote{Russia} \\
\foreignlanguage{russian}{війнавукраїні} & Ukrainian for \textquote{War in Ukraine} \\
\foreignlanguage{russian}{війна} & Ukrainian for \textquote{War} \\
\foreignlanguage{russian}{нівійні} & Ukrainian for \textquote{No War} \\
\foreignlanguage{russian}{нетвойне} & Russian for \textquote{No War} \\
\foreignlanguage{russian}{зеленский} & Russian for \textquote{Zelensky} \\
\foreignlanguage{russian}{зеленський} & Ukrainian for \textquote{Zelensky} \\
\foreignlanguage{russian}{путинхуйло} & Russian for \textquote{Fuck Putin} \\
\foreignlanguage{russian}{путінхуйло} & Ukrainian for \textquote{Fuck Putin} \\
\end{tabular}
\caption{Translations of search terms used for finding videos related to the invasion. Note the generally pro-Ukraine sentiment, due to these hashtags being both more popular and selected due to common co-occurrence with seed-set hashtags.}
\label{tab:translations}
\end{table*}
Using our library, we collected all videos appearing under TikTok's search functionality (under this URL: \url{https://www.tiktok.com/tag/{search-term}}) for a range of hashtags. However, hashtag searches seem to be limited to approximately the 1000 most viewed videos. Therefore, to expand the video set, we also used the general search functionality which allows more videos to be found for a search term, for example using the URL: \url{https://www.tiktok.com/search/video?q={search-term}}. This search comes at the cost of including more unrelated videos due to the fuzzy search functionality. In some cases where a search term produced many unrelated videos, searching with a hashtag in the general search functionality improved the specificity of the search.
We used the following search terms in our video collection: \\
\texttt{standwithukraine, russia, nato, putin, moscow, zelenskyy, stopwar, stopthewar, ukrainewar, ww3, \foreignlanguage{russian}{володимирзеленський, славаукраїні, путінхуйло, россия, війнавукраїні, зеленський, нівійні, війна, нетвойне, зеленский, путинхуйло}, \#denazification, \#specialmilitaryoperation, \#africansinukraine, \#putinspeech, \#whatshappeninginukraine}. \\
We built the list of search terms via an initial qualitatively derived seed set of hashtags (the first ten) and collected videos tagged with these terms. We then looked at the ranked co-occurring hashtags in the videos to expand our set of search terms. Although we started with only English-language hashtags in the seed set, we realised that Ukrainian and Russian hashtags had the highest co-occurrences with our seed set of hashtags, so we added them as the next 10 search terms in the list. These search terms are not all strictly associated with the invasion of Ukraine, but in our initial explorations we found that, in general, the majority of content collected using the terms was pertinent. See Table \ref{tab:translations} for translations of the non-English search terms.
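The hashtag-expansion step described above can be sketched in a few lines. The function name and the toy hashtag lists below are illustrative stand-ins, not part of the released scripts:

```python
from collections import Counter

def rank_cooccurring_hashtags(videos, seed_set):
    """Rank non-seed hashtags by how often they appear on videos
    that carry at least one seed hashtag."""
    seed = set(seed_set)
    counts = Counter()
    for tags in videos:
        tags = set(tags)
        if tags & seed:  # video matched the seed set
            counts.update(tags - seed)
    return counts.most_common()

# Toy hashtag lists (illustrative, not real dataset rows):
videos = [
    ["ukrainewar", "standwithukraine", "nowar"],
    ["putin", "nowar", "peace"],
    ["cats", "funny"],
]
ranked = rank_cooccurring_hashtags(videos, ["ukrainewar", "putin", "standwithukraine"])
```

The top-ranked co-occurring hashtags then become candidate search terms for the next collection round.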
Due to the limited search functionality of TikTok, only the most popular videos are provided for a search, so in order to find less popular videos, one needs to use less popular search terms. We were able to find five terms related to the invasion that had a small total view count, which we added to our search terms to get a set of videos with a range of view counts. Note that these last five search terms are written with a hashtag, which is needed to increase the specificity of TikTok's fuzzy search engine for these less popular terms.
Due to the fuzzy search functionality of TikTok, some of the videos returned by the search did not contain an exact search match and in some cases were completely unrelated. For example, in the most extreme case we observed, results for \textquote{denazification} mostly returned results for \textquote{derealization}, a topic unrelated to Ukraine. In this case, all videos with \textquote{derealization} were removed, due to the clear proliferation of completely unrelated videos. We found that after the removal of \textquote{derealization}, 95\% of the videos in the dataset included at least one of our 25 keyword search terms. We leave in the 5\% of videos that do not contain one of our search terms, with the intuition that most of the time the fuzzy search functionality returns videos related to the war even when they do not contain exact search terms. If a cleaner dataset is required, the videos can easily be filtered to only include a more precise set of hashtags.
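The coverage check described above amounts to counting videos whose text matches at least one search term. The captions below are hypothetical examples, not rows from the dataset:

```python
def keyword_coverage(captions, search_terms):
    """Fraction of captions containing at least one search term (case-insensitive)."""
    terms = [t.lower() for t in search_terms]
    matched = sum(1 for c in captions if any(t in c.lower() for t in terms))
    return matched / len(captions) if captions else 0.0

# Hypothetical captions, not rows from the dataset:
captions = [
    "#ukrainewar latest updates",
    "#derealization explained",
    "standwithukraine from day one",
    "my cat being funny",
]
share = keyword_coverage(captions, ["ukrainewar", "standwithukraine"])  # 0.5 here
```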
With the initial video set fetched, we also collected the comments for each of these videos (limiting to 1000 comments per video to ensure reasonable collection duration), to provide additional text data for analysis.
Note that, unlike Twitter, TikTok does not allow others to see a user's follows. Furthermore, although a user can display the videos they have liked on their profile, this option is seldom used. This prevents methods developed for Twitter using follower or like networks from being used on TikTok data.
Because TikTok data collection methods are still in their infancy, our methodology represents an important contribution to the computational study of TikTok. In particular, our methodology is flexible and allows researchers who are interested in studying other subjects or phenomena on TikTok to curate it to their needs. However, the issues we encountered with TikTok's limited search functionalities highlight how challenging unbiased sampling is with these constraints. It is urgent that future work focus on best-practice methodologies for obtaining more representative samples of data from TikTok.
\section{Dataset}
We have released the dataset, including only the data required to re-create it (to allow users to delete content from TikTok and be removed from the dataset if they wish), at the following repository: \url{https://doi.org/10.5281/zenodo.7534952}. Contained in this repository are the unique identifiers in a CSV required to re-create the dataset, alongside scripts that will automatically pull the full dataset. The full dataset will be created as a collection of JSON files, and there is a script available in the repository to assemble the most salient data into CSV files. Additional metadata can be found at the dataset repository.
In total the full dataset contains approximately 15.9 thousand videos and 10.8 million comments, from 6.3 million users. There are approximately 1.9 comments on average per user captured, and 1.5 videos per user who posted a video.
We plan to further expand this dataset as collection processes progress and the war continues. We will version the dataset to ensure reproducibility.
\section{Experiments}
Our objective for this dataset was to enable investigation into social dynamics on TikTok around a real-world event that the platform and its users strongly engaged with. To this end, we conducted three initial investigations on this dataset. Each investigation analyzes a different level of social interaction as it pertains to the Ukraine conflict.
At the macro-level of social interaction on the platform, we want to understand how the relative use of words and languages changed over time, especially Russian and Ukrainian. We then zoom in to the meso-level, from language in general to themes, looking at the extent to which politically significant semantic clusters were exposed by topic modelling. Finally, at the most granular level, we look at dyadic relationships in an interaction network to observe the nature of individuated user interactions on the platform.
\subsection{Words and Languages}
We first took a macro view of how language prominence evolved on the platform, looking at how the words used on the platform as a whole changed over time. Particularly, we wanted to measure the use of languages and words from the beginning of the invasion onwards, to understand if there was simply one changepoint at the start of the invasion, or if there were more complex, ongoing changes. The resulting time series expose the temporal share of language communities and the various attention cycles of the platform.
To do this we used simple keyword, hashtag, and language searches. Figure \ref{fig:all_over_time} features a range of measures showing the variation in data over the time-span of the invasion.
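A minimal sketch of how such keyword time series can be produced, assuming each comment carries a post date. The helper name and sample comments are illustrative, not taken from our analysis scripts:

```python
from collections import Counter
from datetime import date, timedelta

def weekly_mention_counts(comments, keyword):
    """Count comments mentioning `keyword`, bucketed by the Monday of each week.

    `comments` is an iterable of (date, text) pairs.
    """
    counts = Counter()
    for day, text in comments:
        if keyword.lower() in text.lower():
            counts[day - timedelta(days=day.weekday())] += 1
    return dict(counts)

# Hypothetical comments, not real dataset rows:
comments = [
    (date(2022, 2, 21), "putin must stop"),
    (date(2022, 2, 24), "no war"),
    (date(2022, 2, 25), "Putin again in the news"),
    (date(2022, 3, 3), "oil prices rising"),
]
series = weekly_mention_counts(comments, "putin")
```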
\begin{figure}[!h]
\centering
\includegraphics[width=\columnwidth]{figs/all_over_time.png}
\caption{Variety of metrics showing the change in language and languages used over the course of the invasion. Note the uptick in comments at the end of the period, due to TikTok putting more recent comments in the top ranking comments.}
\label{fig:all_over_time}
\end{figure}
\begin{figure*}[h]
\centering
\includegraphics[width=\textwidth]{figs/topics_over_time.png}
\caption{Frequency of a set of topics over the timeline of the invasion. The set of topics was selected for diversity of temporal and categorical nature. Topic names show the top 4 words in the topic as chosen by TF-IDF.}
\label{fig:topic_timeline}
\end{figure*}
At this general level of social interaction, we see a number of behavioural changes over time. Notably, we see evidence of the mass adoption of the Ukrainian language instead of Russian over time by users who at some point use the Ukrainian language \cite{afanasiev2022war}. We also see a slow change from majority English text to Russian text over the course of the invasion, indicating sustained attention from Russian speakers but a decrease in attention from English speakers. It seems likely that the medium of video allows greater mutual interaction between different language populations, engendering sophisticated temporal language dynamics that may not be seen on a text based platform.
We also examined country and leader name mentions, finding that attention was held on Putin and Russia throughout the war with only minor fluctuations, indicating a surprisingly sustained attention cycle. Conversely, we see sustained attention on Ukraine but limited attention on Zelensky (here tracking the total usage of Zelensky, Zelenskyy and Zelenskiy, the three common spellings). Despite heavy attention on Zelensky in the news, attention was not held particularly highly on him on TikTok.
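Aggregating mentions across the three spellings can be done with a single case-insensitive pattern; the sample texts below are hypothetical:

```python
import re

# One case-insensitive pattern covering the three common spellings.
ZELENSKY = re.compile(r"\bzelensk(?:y|yy|iy)\b", re.IGNORECASE)

def count_mentions(texts):
    """Total mentions of any spelling variant across a list of texts."""
    return sum(len(ZELENSKY.findall(t)) for t in texts)

# Hypothetical comment texts:
texts = ["Zelenskyy speech today", "zelensky and Putin", "ZELENSKIY interview", "no mention here"]
n = count_mentions(texts)  # 3 for these texts
```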
In general the data clearly shows that there were a number of large-scale language use changes in the run up to, at the time of, and after, the invasion, particularly of a political nature.
\subsection{Topics}
As many young people increasingly use TikTok as a primary news source, it is important to verify if the information they consume about major world events is aligned with credible news media coverage to impede the power of misinformation. Additionally, we wanted to investigate how TikTok users discussed the invasion of Ukraine over time as a preliminary step to understanding how TikTok shapes a user's worldview. We therefore zoomed into the meso-level to find semantic clusters on the platform and identify if they reflect perspectives of news media coverage of the invasion. For this experiment, we used topic modelling to examine video comments, as they represent the richest and most numerous source of data in our dataset. We used the BERTopic library \cite{grootendorst2022bertopic} and found that using a RoBERTa model fine-tuned on a Twitter dataset \cite{barbieri2020tweeteval} combined with HDBSCAN based hierarchical clustering provided the clearest breakdown of discourse on the platform.
A timeline of the frequency of a set of topics, hand-picked for diversity of temporal and categorical nature, can be found in Figure \ref{fig:topic_timeline}.
Here we can clearly see that there is a diverse range of content reflecting various political perspectives occurring on the platform. We might expect to only see content that is aligned with perspectives in the general public media discourse, but here we also see highly popular platform specific content, indicative of discourse unique to TikTok.
The frequency of topic use over time reflects a combination of temporal dynamics, from spiking at the time of invasion to increasing over the course of the war. We see discussion of world wars spiking well before the official start of the invasion, mention of a popular song in pro-Russia circles spiking just after the invasion, and discussion of rising oil prices spiking as prices reacted to events. However, other topics show steady use, including the popular pro-Russia chant \textquote{uraaa}, and references to \textquote{fake news}. We see that TikTok features a range of discussion centred on subjects both in the media and unique to the space, indicating complex temporal meso-level processes that are causing the shifts in major topics occurring on the platform.
\subsection{Interactions}
\begin{figure*}[!h]
\centering
\includegraphics[width=\textwidth]{figs/degree_distributions.png}
\caption{Degree distributions of the multi-graph interaction network. Note the majority of interactions are comments on videos, followed by comments replying to other comments. Video shares and video mentions are fairly infrequent in this dataset.}
\label{fig:degree_dists}
\end{figure*}
\begin{figure}[!h]
\centering
\includegraphics[width=\columnwidth]{figs/edge_counts.png}
\caption{Distribution of interaction counts in dyadic user relationships. We see approximately 10 million dyadic user relationships with 1 interaction, and 1 relationship with more than 500 interactions.}
\label{fig:edge_counts}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=\columnwidth]{figs/edge_dir_share.png}
\caption{Reciprocity share in dyadic user relationships. A reciprocity of 0 indicates the interactions are entirely one sided, i.e. only one of the users is taking part in the interaction with the other not responding. A reciprocity of 0.5 indicates perfect reciprocity, i.e. the two users interact with each other the same amount. Colour represents the number of instances with that particular combination of interaction count and reciprocity.}
\label{fig:edge_dir_share}
\end{figure}
In previous analyses we confirmed that a diverse set of discourse patterns are occurring on the platform. Next, we wanted to gain an initial insight into the extent to which the content on the platform was occurring in a social manner. In other words, are there interactions between users like in a social network, or does TikTok look more like an entertainment platform where everyone is watching different media and not interacting with each other? This has major implications for the role that TikTok plays as a mass-scale platform with potential influence on millions of people across the world, particularly in a geopolitical situation as complex and high-stakes as the invasion of Ukraine. Specifically, evidence of interactions between users would indicate that people are actively discussing the invasion of Ukraine and thus contributing to the global discourse about the Russia-Ukraine war, making TikTok an important site for modern political engagement. However, a lack of interaction could suggest that users are simply consuming content about the invasion, but not using TikTok to discuss or debate specific topics in a meaningful way on the platform.
To enable this analysis, we zoomed in to the micro-level of interaction patterns between users on the platform. We constructed a multi-graph user interaction network from the videos on the platform and their associated comments. This network consists of users as nodes, where edges can be any kind of observed interaction between two users. The possible interactions are: sharing a video (duets, stitches etc.), commenting on a video, replying to a comment, mentioning another user in a comment, and mentioning another user in a video caption.
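A minimal pure-Python stand-in for this construction (our analysis code may differ; the interaction records below are hypothetical):

```python
from collections import defaultdict

def build_interaction_multigraph(interactions):
    """Aggregate raw records into directed edge counts per interaction type.

    `interactions` is a list of (source_user, target_user, interaction_type);
    returns {(source, target): {interaction_type: count}}.
    """
    edges = defaultdict(lambda: defaultdict(int))
    for src, dst, kind in interactions:
        edges[(src, dst)][kind] += 1
    return edges

def out_degree(edges, user):
    """Total outgoing interactions of `user`, summed over all edge types."""
    return sum(c for (s, _), kinds in edges.items() if s == user for c in kinds.values())

# Hypothetical interaction records:
records = [
    ("alice", "bob", "video_comment"),
    ("alice", "bob", "comment_reply"),
    ("carol", "bob", "video_comment"),
    ("bob", "alice", "comment_mention"),
]
edges = build_interaction_multigraph(records)
```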
Figure \ref{fig:degree_dists} shows the degree distributions of the multi-graph, including for each type of interaction. We see that the degree distributions are heavy tailed, i.e. some users have a very large number of interactions with other users on the platform. Figure \ref{fig:edge_counts} shows the distribution of interaction counts for all user dyads in the network. Again this is heavy tailed, i.e. most users have only one interaction between them, but some have hundreds. Figure \ref{fig:edge_dir_share} shows the reciprocity of interactions in the network. Here we see that most interactions are one sided, i.e. one user always interacting with another in one direction, but there are some more even-sided interactions.
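The reciprocity measure plotted in Figure \ref{fig:edge_dir_share} can be computed per dyad as the share of interactions in the minority direction; this is a sketch of that definition, not the exact analysis code:

```python
def dyad_reciprocity(count_ab, count_ba):
    """Share of a dyad's interactions in the minority direction.

    0.0 means entirely one-sided; 0.5 means both users interact equally often.
    """
    total = count_ab + count_ba
    return min(count_ab, count_ba) / total if total else 0.0

one_sided = dyad_reciprocity(7, 0)  # 0.0
balanced = dyad_reciprocity(4, 4)   # 0.5
```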
At the most granular level, we see that most interactions are uni-directional and one-time events, seeming to indicate that in general TikTok is used as an entertainment platform with minimal personal social interaction. However, there is a heavy tail of reciprocal and frequent interactions in user dyads. The small number of high-interaction-count dyads could be evidence of localised and concentrated social activity on TikTok. It would be informative for further work to shed light on the kinds of users who engage in such interactions: are they content creators, well-known users, or entirely random accounts? The profile of the interacting users would shed light on the nature and import of these interactions on TikTok.
\section{Conclusion}
An initial exploration of the dataset presented here indicates that there are myriad ongoing complex language dynamics on the TikTok platform related to the invasion of Ukraine. These language dynamics can be seen at macro, meso, and micro-levels of complexity, from coarse language changes to patterns in dyadic relationships, demonstrating that this dataset exposes TikTok's particular social dynamics. It is clear that there is a variety of behaviours occurring on, and perhaps driven by, the platform, and that these behaviours can be seen in this dataset. Further research on these processes could prove fruitful and significant.
We hope that, by releasing this dataset and library, researchers can work towards a more holistic understanding of the dynamics and behaviours occurring on TikTok, and that the resulting stronger comparative analyses between platforms will produce a deeper comprehension of these social processes.
\section{Ethical Statement}
While this dataset could allow the exposure of the influence of TikTok on its user base and improve the transparency of the platform, it also represents an analysis of personal data for which TikTok users may not have given fully informed consent. Though all data analyzed in this dataset is public facing, users may not have released that data with the knowledge of all its final uses. This is why we have released the dataset in such a way that if users remove their content from TikTok, it will no longer be accessible as part of this dataset.
\section{Acknowledgments}
We would like to thank Dinara Karimova for her help with translations and interpreting our results.
\printbibliography
\end{document}
\section{Introduction}
\label{introduction:sec}
The evolution of young stellar objects (YSOs), over a wide range of masses, seems to be characterised by significant
accretion from a magnetised circumstellar disc and
ejection of collimated jets. Both processes are tightly related: jets can remove excess angular momentum,
so that some of the disc material can accrete onto the star~\citep[see, e.g.][]{konigl00,pudritz_PPV}.
Their interplay is relatively well understood and tested in low-mass YSOs, whereas
not much is known about high-mass YSOs (HMYSOs; $M > 8$\,M$_\odot$; O and early B spectral types), as their formation mechanism and evolution are still a matter of debate~\citep[see, e.g.][]{tan}.
Substantial progress in understanding the formation mechanism of HMYSOs is gained by connecting
the gas dynamics close to the forming star~\citep[e.g.][]{kraus10,ilee,cesaroni,sanna15} with the parsec-scale jet and outflow emission being driven by the central accreting
source~\citep[e.g.][]{sanna14,caratti15}.
Moreover, compelling observational evidence for the accretion disc scenario has recently come from the detection of compact discs in Keplerian
rotation~\citep[e.g.][]{kraus10,ilee,cesaroni} and parsec-scale collimated jets driven by HMYSOs~\citep[e.g.][]{stecklum,varricatt,caratti15}.
\object{IRAS\,13481-6124} ($\alpha$(J2000)=13:51:37.856, $\delta$(J2000)=-61:39:07.52) is among the few HMYSOs where an accretion disc~\citep[][]{kraus10,ilee,boley},
an outflow, and a collimated parsec-scale jet~\citep[][]{kraus10,stecklum12,caratti15} have been detected.
Located at a distance of $\sim$3.2\,kpc, IRAS\,13481-6124 has a bolometric luminosity of 5.7$\times$10$^4$\,L$_\odot$~\citep[][]{lumsden13}.
By modelling the spectral energy distribution (SED), \citet{grave} inferred an age of $\sim$10$^4$\,yr and a protostellar mass of $\sim$20\,M$_\odot$ (i.e. an O9 ZAMS spectral type).
NIR interferometric observations with the Very Large Telescope Interferometer (VLTI) in the $K$-band continuum revealed a compact dusty disc~\citep[$\sim$5.4\,mas in diameter, or $\sim$17.3\,au at 3.2\,kpc;][]{kraus10},
tilted by $\sim$45$\degr$ with respect to the plane of the sky. Modelling of the CO band-head emission lines at 2.3\,$\mu$m suggests a disc in Keplerian rotation~\citep[][]{ilee}.
Perpendicular to the disc, a well-collimated (precession angle$\sim$8$\degr$) parsec-scale jet has also been detected~\citep[with position angle -P.A.- of $\sim$206$\degr$/26$\degr$ east of north, blue- and red-shifted lobes, respectively;][]{caratti15},
being traced by shocked H$_2$ and [\ion{Fe}{ii}] lines. The shocked H$_2$ emission is observed down to $\sim$20\,000\,au from the source~\citep[][]{stecklum12}.
Closer to the source (a few arcseconds), the H$_2$ emission is mostly excited by fluorescence and it is not detected at the smallest spatial scales observed with SINFONI~\citep[$\sim$0.1$\arcsec$ or $\sim$320\,au;][]{stecklum12}.
NIR spectroscopy also shows bright \ion{H}{i} emission, confined to the HMYSO circumstellar environment~\citep[$\leq$10\,000\,au;][]{stecklum12}.
The Br$\gamma$ line on source shows a shallow
P\,Cygni profile and high-velocity wings~\citep[several hundred km\,s$^{-1}$;][]{stecklum12}.
Their spectro-astrometric analysis indicates a photocentre shift between line and continuum.
The Br$\gamma$ line profile might then be the product of different kinematical components, which originate from different locations:
magnetospheric accretion flows, a hot disc atmosphere, strong winds from the stellar surface, or from the inner regions of the disc (within a few au), or an extended
and collimated ionised jet.
To clarify the nature of the different kinematic components of the Br$\gamma$ line and probe the circumstellar environment of IRAS\,13481-6124, we therefore
carried out, for the first time, VLTI/AMBER spectro-interferometric observations of this HMYSO at medium spectral resolution.
This is the first NIR interferometric study that spatially and spectrally resolves an HMYSO in both continuum and Br$\gamma$ line.
Section~\ref{observations:sec} reports our interferometric observations and data reduction, while the interferometric
results are presented in Sect.~\ref{results:sec}. Finally, in Sect.~\ref{discussion:sec}, we discuss the origin of the Br$\gamma$ in IRAS\,13481-6124.
\section{Observations and data reduction}
\label{observations:sec}
IRAS\,13481-6124 was observed on 28 February 2013 in two runs
at the ESO/VLTI with AMBER~\citep[][]{petrov07} and the UT1-UT2-UT3 telescope configuration.
The projected baseline lengths extend from $\sim$38\,m to $\sim$85\,m, and their position angle ranges between $\sim$38$\degr$ and
$\sim$62$\degr$, i.e. relatively close to parallel to the jet P.A. Details of the observational settings are reported in Table~\ref{tab:obs}.
We used AMBER medium spectral resolution mode in the $K$-band (MR-2.1 mode with nominal $ \mathrm R $ = 1500)
covering the spectral range from 1.926 to 2.275\,$\mu$m around the Br$\gamma$-line emission (at 2.166\,$\mu$m).
Although IRAS\,13481-6124 in the $K$ band is bright enough ($K$=4.9\,mag) to be observed interferometrically with AMBER and the UTs, its
$H$-band magnitude ($H$=7.6\,mag) exceeds the limit for employing the fringe tracker FINITO~\citep[][]{gai}, which works in that band.
The adopted detector integration time (DIT) was 0.3\,s per interferogram and we integrated on source for about 25 and 30 minutes in the first and second run, respectively.
Owing to the better average seeing of the second run ($\sim$0.75$\arcsec$ vs. $\sim$0.95$\arcsec$) and the slightly longer exposure on source, the second-run data have a higher
S/N ratio. Star HD 103125 was observed before and after the science observations with the same observational settings,
and used as an interferometric calibrator to derive the transfer function.
To reduce our interferograms, we applied our own data reduction software, which is based on the P2VM algorithm~\citep[][]{tatulli07} and
provides us with wavelength-dependent visibilities, wavelength-dependent differential phases, closure phases, and wavelength-calibrated spectra.
The wavelength calibration was refined using the numerous telluric lines present in the observed wavelength
range~\citep[for more details on the wavelength calibration method, see][]{weigelt,rebeca15}. We estimate
an uncertainty in the wavelength calibration of $\sim$1.5\,\AA~($\sim$20\,km\,s$^{-1}$). The spectral resolution measured on the spectrally
unresolved telluric features around the Br$\gamma$ line is $ \mathrm R \sim$2200 or $\Delta$v $\sim$ 140\,km\,s$^{-1}$.
To convert the observed wavelengths into radial velocities, we used a local standard of rest (LSR) velocity of -37.9\,km\,s$^{-1}$~\citep[][]{lumsden13}. Therefore all the velocities provided
in this paper are with respect to the LSR.
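As a quick consistency check, both velocity scales quoted above follow from $\Delta v = c\,\Delta\lambda/\lambda$. The sketch below is a minimal illustration using only the numbers quoted in the text, and reproduces the $\sim$20\,km\,s$^{-1}$ calibration uncertainty and the $\sim$140\,km\,s$^{-1}$ velocity resolution:

```python
# Consistency checks for the velocity scales quoted above (a sketch;
# all input numbers are taken directly from the text).
C_KMS = 299792.458      # speed of light [km/s]
LAM_BRG_UM = 2.166      # Brgamma wavelength [um]

# A wavelength-calibration uncertainty of ~1.5 Angstrom maps to:
dlam_um = 1.5e-4                          # 1.5 Angstrom in microns
dv_cal = C_KMS * dlam_um / LAM_BRG_UM     # ~20.8 km/s

# The measured spectral resolution R ~ 2200 corresponds to:
dv_res = C_KMS / 2200.0                   # ~136 km/s, i.e. ~140 km/s
```

Both values agree with those stated in the text within rounding.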
\begin{table*}
\centering
\caption{\label{tab:obs} Log of the VLTI/AMBER observations of IRAS\,13481-6124.}
\begin{scriptsize}
\begin{tabular}{ccccccccccc}
\hline\hline
IRAS 13481-6124 & \multicolumn{2}{c}{Time [UT]}& Unit Telescope & Spectral & Wavelength & DIT\tablefootmark{a} & N\tablefootmark{b} & Seeing & Baseline & PA \\
Observation & Start & End & array & mode & range & & & & & \\
date & & & & & ($\mu$m) & (s) & &($\arcsec$) & (m) & ($\degr$) \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
2013 Feb. 28 & 08:30 & 08:55 & UT1-UT2-UT3 & MR-K-2.1 & 1.926--2.275 & 0.3 & 3200 & 0.8-1.1 & 40/46/85 & 54/38/45 \\
2013 Feb. 28 & 09:11 & 09:42 & UT1-UT2-UT3 & MR-K-2.1 & 1.926--2.275 & 0.3 & 4800 & 0.6-0.9 & 38/44/81 & 62/45/53 \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{scriptsize}
\begin{flushleft}
\hspace{0mm}
\tablefoot{
\tablefoottext{a}{Detector integration time per interferogram.}
\tablefoottext{b}{Number of interferograms.}
}
\end{flushleft}
\end{table*}
\begin{figure*}[h!]
\includegraphics[width= 18.8 cm]{Fig1.eps}
\caption{\label{fig:interferometry} {\it Left: Panel\,1.} AMBER-MR interferometric measurements of the Br$\gamma$ line in IRAS\,13481-6124 for run 1 (inserts 1a--1d).
From top to bottom: line flux (1a), wavelength-dependent visibilities (1b), differential phases (1c), and closure phase (1d),
observed at three different projected baselines (see labels in figure).
For clarity, the differential phases of the first and last baselines are shifted by +20$\degr$ and -20$\degr$, respectively.
{\it Middle left: Panel\,2.} Visibilities and continuum-corrected (pure) Br$\gamma$-line visibilities of our AMBER-MR observation of IRAS\,13481-6124 for run 1 (inserts 2b--2d).
From top to bottom: line flux (2a), visibilities of first (2b), second (2c) and third baseline (2d).
{\it Middle right: Panel\,3.} AMBER-MR interferometric measurements of the Br$\gamma$ line in IRAS\,13481-6124 for run 2 (inserts 3a--3d).
{\it Right: Panel\,4.} Visibilities and continuum-corrected (pure) Br$\gamma$-line visibilities of our AMBER-MR observation of IRAS\,13481-6124 for run 2 (inserts 4b--4d).
Blue dashed lines encompass the emission peak of the Br$\gamma$ line.}
\end{figure*}
\section{Results}
\label{results:sec}
Our AMBER-MR spectrum shows a rising continuum and a bright Br$\gamma$ emission line with a P\,Cygni profile (inserts 1a--4a of Figure~\ref{fig:interferometry}).
No other lines above a three sigma threshold are detected in the spectrum.
Figure~\ref{fig:interferometry} shows our interferometric observables (line profile - inserts 1a--4a; visibilities - inserts 1b and 3b; differential phases - inserts 1c and 3c; closure phase
- inserts 1d and 3d) of the Br$\gamma$ line and adjacent continuum
for the first (Panel\,1) and second run (Panel\,3) along with the inferred continuum-corrected Br$\gamma$-line visibilities (inserts 2b--2d and 4b--4d)
in four spectral channels (namely those with line-to-continuum ratio larger or equal to 1.1) of run 1 (Panel\,2) and run 2 (Panel\,4).
Visibilities give information on the size of the emitting region (continuum and/or line), whereas the closure phase quantifies its asymmetry.
Differential phases are a measure of the photocentre shift of the line with respect to the continuum.
The upper inserts of Figure~\ref{fig:interferometry} (1a--4a) display the Br$\gamma$-line profile normalised to the continuum.
The line is spectrally resolved and shows a wide range of velocities (from $\sim$-500 to 500\,km\,s$^{-1}$).
The emission line peaks at about 50\,km\,s$^{-1}$, with a broad red-shifted wing extending up to $\sim$500\,km\,s$^{-1}$.
The absorption feature on the blue-shifted side ranges from about -500 to -150\,km\,s$^{-1}$.
The P\,Cygni profile indicates the presence of outflowing matter in the form of a stellar wind or jet, which produces
the typical blue-shifted absorption feature through self-absorption~\citep[see, e.g.][]{mitchell}. The outflow terminal velocity is $\sim$-500\,km\,s$^{-1}$.
In Fig.~\ref{fig:interferometry}, upper-middle (1b and 3b) and lower-middle (1c and 3c) inserts of Panel\,1 (left) and Panel\,3 (middle-right)
show the wavelength-dependent visibilities and differential phases for the three observed baselines of run 1 (Panel\,1) and run 2 (Panel\,3), respectively,
whereas the closure phases are shown in the lower inserts (1d and 3d). The results of the two runs are similar although,
as mentioned in Sect.~\ref{observations:sec}, the second run data have higher S/N ratio.
Across the line profile, changes in the visibility and differential phase with respect to the continuum are observed.
Blue dashed lines depict the position of the Br$\gamma$-emission peak in the four panels.
The Br$\gamma$ visibility at the line peak is larger than the continuum visibility at all six baselines
(see middle-upper panels of left and middle-right inserts in Fig.~\ref{fig:interferometry}),
indicating that, on average, the Br$\gamma$-emitting region is spatially resolved and more compact than the continuum.
The absolute visibilities of the shortest baselines (red and green lines) display higher values, which indicates that both continuum and line are less spatially resolved along
these baselines with respect to the longest ones. These short baselines are comparable in length and thus the wavelength-dispersed visibilities are very similar. Notably,
the values of the visibility across the Br$\gamma$-absorption feature at the two longest baselines (85 and 81\,m, P.A. = 45$\degr$ and 53$\degr$, respectively)
are slightly smaller than the average continuum visibility.
This means that the Br$\gamma$ photons at high blue-shifted velocities must originate from a region larger than the continuum, whereas the red-shifted Br$\gamma$ emission must
come from a region that is smaller than the continuum. In other words, the size of the blue-shifted gas in absorption is larger than that of the emitting gas,
suggesting the presence of a wind or an outflow.
The differential phase (DP) at the four longest baselines displays an `S' shape, more pronounced at the 85 and 81\,m baselines, whereas
the differential phase at the shortest baselines (40 and 38\,m, P.A. = 62$\degr$ and 54$\degr$, respectively) shows a significant displacement with respect to the continuum that appears only
close to the line peak and along the red-shifted wing (see middle-lower panels of left and middle-right inserts in Fig.~\ref{fig:interferometry}).
This indicates that the outflow is detected mainly at the longest baselines and at P.A.s up to 53$\degr$, i.e. that the outflow is quite collimated (within $\sim$27$\degr$ from the
jet axis, P.A.~206$\degr$/26$\degr$).
Moreover, since the differential phases are related to the line photocentre shifts with respect to the continuum,
our findings indicate that both red-shifted and blue-shifted line emission wings, at the longest baselines, are spatially extended and located in opposite directions with respect to the continuum emission.
We note that the observed closure phase (CP) (see lower panels of left and middle-right inserts in Fig.~\ref{fig:interferometry}) differs from zero.
Inside the error bars,
no significant closure-phase variations of the Br$\gamma$ line, with respect to the continuum, are detected.
This is because of the high noise level in the wavelength differential CPs ($\sim$10$\degr$),
which is about one order of magnitude higher than that in the DPs.
The non-zero closure phase originates from the asymmetry of the brightness distribution of the continuum.
Owing to the disc inclination of $\sim$45$\degr$ with respect to the plane of the sky, the far side of the inner disc rim (i.e. the one towards the blue-shifted side of the jet)
displays an area larger (i.e. brighter) than
the near side~\citep[see panel\,d of Fig.~1 in][]{kraus10}. To confirm this scenario, we compare our observations
with the synthetic images around 2.16\,$\mu$m that are derived from the radiative transfer model presented by \citet{kraus10}.
Although this model was originally adjusted to fit AMBER visibilities and closure phases taken with the auxiliary telescopes,
it also provides a very good fit to our new, higher-S/N UT data. The predicted CP value of $\sim$26$\degr$ is comparable to the measured
values of 25$\degr$$\pm$4$\degr$ (run 2) and 29$\degr$$\pm$6$\degr$ (run 1).
Notably the continuum asymmetry also affects the observed DPs of the Br$\gamma$ line, causing the blue-shifted DPs to be systematically smaller than the red-shifted ones.
\begin{figure}
\centering
\includegraphics[width=9.0cm]{fig2_n_old.eps}
\caption{\label{fig:UV} Br$\gamma$ displacement for different (radial) velocity channels (with velocity bins from -550 to 550\,km\,s$^{-1}$) derived from the
measured differential phases, after correcting for the continuum contribution. Different colours indicate different velocity bins.
Black solid and red dashed ellipses indicate the extent of the K-band continuum with its uncertainty~\citep[1.94$\pm$0.37\,mas;][]{kraus10},
whereas the red line shows the position of the jet axis~\citep[][]{caratti15}.}
\end{figure}
To infer the size of the Br$\gamma$-emitting region across the different velocity channels, we first compute the continuum-subtracted (or pure-line)
visibilities at the six baselines (see middle-left and right inserts in Fig.~\ref{fig:interferometry}), following \citet{weigelt07}.
The ring-fit diameter of the continuum is taken from \citet{kraus10} (5.4\,mas or 17.3\,au at 3.2\,kpc).
The average diameter of the Br$\gamma$-emitting region (averaged over 8 channels, 4 for each baseline, as plotted for the pure line visibilities of Fig.~\ref{fig:interferometry})
is $\sim$4\,mas (i.\,e. $\sim$13\,au at 3.2\,kpc) at the two shortest baselines
(37.7 and 39.8\,m), $\sim$3.4\,mas ($\sim$11\,au) at the medium baselines (43.7 and 45.9\,m),
and $\sim$2\,mas (6.4\,au) at the longest baselines (80.5 and 84.9\,m). On average the size of the Br$\gamma$-emitting region is smaller
than the continuum-emitting region.
It is worth noting, however, that
the P\,Cygni absorption in the blue wing of the Br$\gamma$ line makes
the continuum correction of the visibility uncertain for wavelengths
shorter than the peak wavelength of the emission
line. In this case, the continuum-correction overestimates
the size of the line-emitting region. This might explain why the values of the pure line visibilities (middle-left and right inserts of Fig.~\ref{fig:interferometry})
decrease, moving from red-shifted to blue-shifted wavelengths.
On the other hand, if this effect is real, it would imply that the two lobes are not spatially symmetric, probably because of the screening effect of the disc
on the red-shifted lobe of the outflow.
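The continuum correction described above can be sketched as the standard complex subtraction of the continuum contribution from the total measured visibility (cf. the method of Weigelt et al. cited in the text); the function name and the unit continuum normalisation are illustrative assumptions, not the authors' actual implementation:

```python
import math

def pure_line_visibility(f_tot, v_tot, v_cont, dphi_deg=0.0):
    """
    Continuum-corrected (pure-line) visibility for one spectral channel.
    f_tot   : total line-to-continuum flux ratio (continuum normalised to 1)
    v_tot   : measured total visibility in the channel
    v_cont  : visibility of the adjacent continuum
    dphi_deg: differential phase of the channel w.r.t. the continuum [deg]
    The complex subtraction F_L V_L = F_tot V_tot e^{i phi} - F_c V_c
    reduces to a scalar difference when dphi = 0.
    """
    f_line = f_tot - 1.0                       # pure line flux
    phi = math.radians(dphi_deg)
    re = f_tot * v_tot * math.cos(phi) - v_cont
    im = f_tot * v_tot * math.sin(phi)
    return math.hypot(re, im) / f_line
```

For example, a channel with a line-to-continuum ratio of 2, total visibility 0.5, and continuum visibility 0.4 yields a pure-line visibility of 0.6, i.e. a line-emitting region more compact than the continuum.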
Finally, the calibrated DPs were converted into photocentre shifts (p) of the Br$\gamma$-velocity components, with respect to the continuum emission,
by solving the following equation: $p_i = (-DP_i \lambda)/(2 \pi B_i)$, where $p_i$ is
the photocentre displacement projection of the 2D photocentre vector $\vec{p}$ on the baseline $B_i$, and $\lambda$ is
the wavelength of the considered spectral channel~\citep[][]{lebouquin}.
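The equation above can be turned into a small helper returning the projected displacement in milliarcseconds; the function name and example values are illustrative:

```python
import math

MAS_PER_RAD = 180.0 / math.pi * 3600.0 * 1000.0   # radians -> milliarcseconds

def photocentre_shift_mas(dp_deg, baseline_m, lam_um):
    """
    Projected photocentre displacement p_i = -DP_i * lambda / (2 pi B_i),
    as in the equation above, returned in mas (illustrative sketch).
    dp_deg     : differential phase [deg]
    baseline_m : projected baseline length [m]
    lam_um     : wavelength of the spectral channel [um]
    """
    dp_rad = math.radians(dp_deg)
    p_rad = -dp_rad * (lam_um * 1e-6) / (2.0 * math.pi * baseline_m)
    return p_rad * MAS_PER_RAD
```

A differential phase of $-10\degr$ at the 85\,m baseline and 2.166\,$\mu$m corresponds to a shift of about 0.15\,mas, i.e. sub-mas displacements well within the inner-rim scale discussed here.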
A single astrometric solution (i.e. a single 2D vector $\vec{p}$) was fitted to all baselines within each spectral channel.
As for the visibilities, we then subtracted the continuum contribution to get
the photocentre displacements of the pure-line emitting region~\citep[see Eq.~3 in][]{kraus12a}. Figure~\ref{fig:UV} shows the continuum-subtracted Br$\gamma$-line photocentre displacements (with the
error bars in polar coordinates), which were
derived from the astrometric solution in the different (radial) velocity channels
(colour coded). Both the inner rim, defined by the study of the NIR continuum~\citep[see][]{kraus10}, and the known jet axis~\citep[see][]{caratti15}, are shown as a black solid ellipse and red solid line, respectively.
The red dashed ellipses indicate the uncertainty on the size of the inner rim~\citep[1.97$\pm$0.37\,mas;][]{kraus10}.
Fig.~\ref{fig:UV} shows that the photocentre-shifts in the blue-shifted (green, cyan and blue dots; from ${\rm v}_{\rm r}\sim$-420 to $\sim$-140\,km\,s$^{-1}$) and red-shifted
(orange, red, and magenta dots; from ${\rm v}_{\rm r} \sim$140 to $\sim$520\,km\,s$^{-1}$) wings of the Br$\gamma$ are increasing considerably with increasing velocities and are roughly displaced along a straight line,
close to the jet axis, in the blue- and red-shifted lobes, respectively.
The significance of the red-shifted data points at the highest velocity (magenta dots) is reduced with respect to the other points, since they appear
outside the obscuring disc only because of the large uncertainty of their position.
The low-velocity components (black dots; with ${\rm v}_{\rm r} \sim$-50 and 50\,km\,s$^{-1}$) are also aligned with the jet axis, but are much closer to the central source ($\lesssim$2\,au),
indicating a strong acceleration of the gas in the inner 10\,au. We note that,
at 2\,au, the Keplerian velocity of a 20\,M$_\sun$ object is $\sim$95\,km\,s$^{-1}$ (or ${\rm v}_{\rm r}\sim$67\,\,km\,s$^{-1}$), that is,
it is not spectrally resolved by our observations ($\Delta$v $\sim$ 140\,km\,s$^{-1}$).
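The quoted Keplerian velocity follows from $v = \sqrt{GM/r}$; the sketch below (assuming the $\sim$45$\degr$ disc tilt stated earlier for the radial projection) reproduces the $\sim$95 and $\sim$67\,km\,s$^{-1}$ values:

```python
import math

G = 6.674e-11          # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30       # solar mass [kg]
AU = 1.496e11          # astronomical unit [m]

def keplerian_velocity_kms(mass_msun, radius_au):
    """Circular Keplerian velocity v = sqrt(G M / r) in km/s."""
    return math.sqrt(G * mass_msun * M_SUN / (radius_au * AU)) / 1e3

v_kep = keplerian_velocity_kms(20.0, 2.0)        # ~94 km/s for 20 Msun at 2 au
v_rad = v_kep * math.sin(math.radians(45.0))     # ~67 km/s radial projection
```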
Our findings thus suggest that, on the one hand, the high-velocity component is tracing the jet/outflow,
and, on the other, the low-velocity component is tracing slow moving gas, such as a disc wind (in Keplerian rotation) or
the very base of the jet, namely the region of the disc where the jet footpoints are located.
\section{Origin of the Br$\gamma$ line in IRAS\,13481-6124}
\label{discussion:sec}
The origin of the Br$\gamma$ line in YSOs is controversial and there are
several physical mechanisms that could produce this emission, such as accretion of matter onto the star~\citep[e.g.][]{eisner09},
or outflowing material from a disc wind~\citep[e.g.][]{weigelt,rebeca15,caratti15b}, extended wind or outflow~\citep[e.g.][]{rebeca16}, or jet~\citep[e.g.][]{stecklum12}.
Our AMBER/VLTI observations of IRAS\,13481-6124 suggest that the main Br$\gamma$ emission emanates from an ionised jet.
The observed Br$\gamma$ P\,Cygni profile indicates the presence of a fast ionised jet/outflow ($\sim$500\,km\,s$^{-1}$).
This value agrees well with the radio jet velocities measured in other HMYSOs~\citep[see, e.g.][]{heathcote98,marti,curiel,torrelles11}.
Visibilities and differential phases at high velocities indicate that the outflowing matter is spatially extended, from a few au to a few tens of au, and the photocentre shifts
grow with increasing radial velocities.
Both observables also suggest that the flow is relatively well collimated ($\lesssim$30$\degr$) close to the known parsec-scale jet P.A.,
implying that we are observing a collimated wind or flow.
Initial jet opening angles are typically $\sim$30$\degr$ in YSOs and a considerable degree of focusing in low-mass YSOs happens at several au from the source~\citep[see, e.g.][]{ray07}.
It is thus reasonable to presume that the corresponding jet collimation in HMYSOs occurs at even larger distances from the source (a few tens of au).
Indeed, as magneto-centrifugal launching models are scale-free~\citep[see, e.g.][]{ferreira97,ferreira04,pudritz_PPV}, for higher masses
the configuration readjusts to larger scales consistently with the different values of the gravitational potential.
As IRAS\,13481-6124 has a parsec-scale jet, we could then argue that the high-velocity component of the Br$\gamma$ line (100\,km\,s$^{-1}$ < ${\rm v}_{\rm r}$ < 500\,km\,s$^{-1}$)
is tracing the (poorly collimated) jet, which extends from a few au to tens of au from the central source.
On the other hand, the low-velocity component (${\rm v}_{\rm r}$ < 100\,km\,s$^{-1}$),
although partially resolved, is more compact, being located $\lesssim$2\,au from the source and
well inside the inner-rim disc. This slow moving gas component is also aligned with the jet axis.
Owing to our limited spectral resolution and given that this velocity is also compatible with Keplerian rotation
(at a distance of $\lesssim$2\,au), this component might originate from a disc wind or the jet foot-point.
We note that a similar geometry for the Br$\gamma$ emission in another HMYSO (\object{W 33A}) was inferred by \citet{davies} with spectro-astrometry.
Finally, even if the error bars are considered, the Br$\gamma$ visibility is lower than one, i.e. it is spatially resolved at the longest baselines, indicating that the
bulk of the emission cannot originate from accretion, which would not be spatially resolved at our baselines.
Moreover, infalling gas would have free-fall velocities of several hundred km\,s$^{-1}$, which are not detected close to the source.
In conclusion, most of the observed Br$\gamma$ emission must originate from the ionised jet.
In principle, the jet should be fully ionised, as it is
exposed to the UV radiation of a 20\,M$_\sun$ HMYSO~\citep[][]{tanaka}.
As for the case of irradiated jets in massive star-forming regions~\citep[see, e.g.][]{reipurth98,bally06},
the gas of the jet would then be fully traced by the \ion{H}{i} emission, and its density would be proportional to the \ion{H}{i} line intensity.
Therefore, measurements of intensity, velocity, and size of the Br$\gamma$-emitting region might provide a good estimate
of the mass-loss rate.
\begin{acknowledgements}
A.C.G., R.G.L., and T.P.R. were supported by Science Foundation Ireland, grant 13/ERC/I2907.
A.K. and S.K. acknowledge support from a STFC Ernest Rutherford fellowship and grant (ST/J004030/1, ST/K003445/1), and Marie-Sklodowska Curie CIG grant (Ref. 618910).
A.S. was supported by the Deutsche
Forschungsgemeinschaft (DFG) Priority Program 1573.
This research has also made use of NASA's Astrophysics Data System Bibliographic Services and the SIMBAD database operated
at the CDS, Strasbourg, France.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}\label{intro_grad}
The physical, chemical, and kinematic information carried by the stellar
spectra are fundamental for understanding how the Milky Way formed and evolved.
The increasing demand for stellar spectra by astronomers engaged in Galactic
archaeology led to large spectroscopic surveys that could be carried out thanks
to the availability of efficient multi-object spectrographs,
to the fast growth of the data storage capability, and the
computational power of modern computers.
Past, present, and future surveys (such as the RAdial Velocity Experiment, RAVE, \citealp[Steinmetz
et al.][]{rave}; the Sloan Extension for Galactic
Understanding and Exploration, SEGUE, \citealp[Yanny et
al.,][]{yanny}; the Large Sky Area Multi-Object Fiber Spectroscopic
Telescope, LAMOST, \citealp[Zhao et al.][]{zhao}; The Apache Point
Observatory Galactic Evolution Experiment, APOGEE, \citealp[Allende Prieto
et al.][]{apogee}; the Galactic Archaeology with
HERMES-GALAH Survey, \citealp[Zucker et al.][]{zucker}; the Gaia-ESO Public
Spectroscopic Survey, \citealp[Gilmore et al.][]{gilmore}; the 4-Metre
multi-Object Spectroscopic Telescope, 4MOST, \citealp[de Jong et
al.][]{dejong}; Gaia, \citealp[Perryman et al.][]{perryman},
\citealp[Lindegren et al.][]{lindegren}) delivered and
will deliver millions of stellar spectra that need to be analyzed to
derive stellar parameters (effective temperatures \temp, gravity \logg,
metallicity \met) and elemental abundances. The analysis of such an amount of data
is a challenge that can be addressed by its automation.
Today there is considerable effort to develop software for this purpose.
Some software packages implement the classical spectral analysis by measuring equivalent
widths (EW) of isolated, well known absorption lines and by deriving
stellar parameters from the excitation equilibrium and ionization balance
(such as the Fast Automatic Moog Analysis, FAMA, \citealp[Magrini et
al.][]{magrini}; GALA, \citealp[Mucciarelli et al.][]{mucciarelli};
ARES \citealp[Sousa et al.][]{sousa}).
These programs are particularly oriented to
high-resolution, high signal-to-noise (S/N) spectra, for which isolated lines
can be recognized, and their EW can be reliably measured thanks to a safe
continuum placement.
Other methods are based on grids of synthetic spectra, but they
differ in the ``line-fitting" or ``full-spectrum-fitting" approach, i.e.,
by fitting isolated absorption lines one-by-one (e.g., MyGIsFOS,
\citealp[Sbordone et al.][]{sbordone}; Stellar Parameters
Determination Software, SPADES, \citealp[Posbic et al.][]{posbic}), or
by fitting full spectral ranges (e.g., the MATrix Inversion for Spectral SynthEsis, MATISSE,
\citealp[Recio-Blanco et al.][]{recio-blanco}; neural networks, among others
{\it statnet} by \citealp[Bailer-Jones][]{bailer-jones};
FERRE, \citealp[Allende Prieto et al.][]{allendeprieto}).
Spectroscopy Made Easy (SME, \citealp[Valenti \& Piskunov][]{valenti})
distinguishes itself from the other codes since it
synthesizes on-the-fly single absorption lines or parts of the
spectrum to be matched with the observed ones.\\
The line-fitting analysis can derive stellar
parameters and chemical abundances with the drawback of neglecting the
significant amount of information carried by the unused part of the
observed spectrum. This penalizes the line-fitting approach for low S/N, low
metallicity, and low-resolution spectra, in which the number of usable lines
may be too small to carry out this analysis.
On the other hand, the full-spectrum-fitting approach cannot deliver
chemical abundances
because the grid of synthetic spectra needed to cover the
whole parameter and chemical space (and account for
many elemental abundances) would be too big to be handled
\footnote{The ASPCAP pipeline can derive
abundances for up to 15 elements with a technique that can be classified as
a line-fitting approach. See Elia Garcia Perez et al.
\cite{eliagarciaperez}.}.
Other techniques can use real spectra as templates
(``The Cannon", \citealp[Ness et al.][]{ness}; ULySS, \citealp[Koleva et
al.][]{koleva}) with the advantage of overcoming the systematic errors that
stem from the synthetic spectra (due to our incomplete knowledge of atomic parameters
and stellar atmospheres) but share with the previous techniques the
challenge of collecting a number of templates
large enough to uniformly cover the stellar parameter and chemical space.\\
The accuracy of the parameters derived with any of the methods proposed so
far (including this work) depends on two fundamental pillars: the
precision and accuracy of i) the atomic parameters of the absorption lines
employed and ii) the reliability of the stellar atmosphere models. In both
these areas, significant
progress has been made in recent years, and because of their importance, they deserve
further support. The recent praiseworthy efforts to supply laboratory
oscillator strengths (\citealp[Ruffoni et al.][]{ruffoni_ges},
but see also other works cited later on) cover a number of lines that may meet
the needs of the classical spectral analysis (i.e., the line-fitting
approach), but these are too few for the needs of a full-spectrum-fitting analysis.
On the other hand, the 3D stellar atmosphere modeling
(\citealp[Asplund][]{asplund2005}; \citealp[Freytag et
al.][]{freytag}; \citealp[Magic et al.][]{magic}) shows that realistic
model atmospheres can reproduce the observed spectra with great accuracy.
However, the computational power required today to analyze wide spectral
ranges with these tools is prohibitive.
To this lively field that is rich in new ideas, we contribute with a software called
\Space\ (Stellar Parameters And Chemical abundances Estimator) that
implements a new method of performing stellar spectral analysis. \Space\ is based on a
method born from the experience of the RAVE chemical pipeline \cite[Boeche et
al.][]{boeche11} developed to derive elemental abundances from the
spectra of the RAVE survey (Steinmetz et al. \citealp{rave}; Kordopatis
et al. \citealp{kordopatisDR4}).
The RAVE chemical pipeline relies on stellar parameters that must be
provided by other sources, and it only derives chemical abundances.
\Space\ extends the RAVE chemical pipeline's foundations and performs
an independent, complete spectral analysis. Although \Space\ employs a
full-spectrum-fitting approach, it derives stellar parameters as much as
chemical elemental abundances for FGK-type stars. Unlike other codes
dedicated to stellar parameter estimation, \Space\ does not rely on a
library of synthetic spectra, or measure the EW of
absorption lines, but it makes use of functions that describe how
the EW of the lines changes in the parameter and chemical space.
In the next section we explain the general concepts on which \Space\ is
based.
\section{Method}\label{sec_method}
The usual methods employed to estimate stellar parameters from spectra are i)
to directly compare the observed spectrum with the synthetic one to find
the best match and ii) to measure the EWs of the absorption lines of the
observed spectrum from which the stellar parameters are inferred.
In both cases the spectrum must be synthesized, and the stellar parameters of the
synthetic spectrum are varied until the spectrum (first case) or
the line's EWs (second case) match the observed ones. Any spectrum synthesis
depends on a stellar atmosphere model,
which represents the physical conditions in the stellar atmosphere to
the best of our knowledge.
For this reason stellar parameters and chemical abundances obtained from spectral
analysis are indirect measurements, and we say that they are derived (and not
measured).
Regardless of the method employed, to estimate the stellar parameters from spectra
we must construct a spectrum model and compare it to the observed
spectrum. \Space\ is no exception: it constructs a spectrum model
and compares it with the observed spectrum with a simple $\chi^2$ analysis.
Its peculiarity is the novel way to construct the spectrum model, which is
not a direct synthesis. At first glance this method may look cumbersome, but it
eventually gives consistent advantages that we describe below.\\
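The comparison step can be illustrated with a minimal sketch. The $\chi^2$ definition is the standard one; the data, model names, and noise value are invented for illustration and do not come from the \Space\ implementation:

```python
import numpy as np

def chi2(flux_obs, flux_model, sigma):
    """Pixel-by-pixel chi-square between an observed and a model spectrum."""
    return np.sum(((flux_obs - flux_model) / sigma) ** 2)

# Pick the best of several candidate spectrum models (toy normalized fluxes).
flux_obs = np.array([1.00, 0.90, 0.80, 0.90, 1.00])
models = {
    "too_weak":   np.array([1.00, 0.95, 0.90, 0.95, 1.00]),
    "good":       np.array([1.00, 0.91, 0.79, 0.89, 1.00]),
    "too_strong": np.array([1.00, 0.80, 0.60, 0.80, 1.00]),
}
best = min(models, key=lambda k: chi2(flux_obs, models[k], sigma=0.01))
```

In practice the candidate models are not a fixed dictionary but are generated on the fly by varying the stellar parameters, as described in the following.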
Consider a stellar spectrum of low to medium spectral resolution\footnote{In
the following the spectral resolution at wavelength $\lambda$ is defined as
R$=\frac{\lambda}{\Delta \lambda}$, where $\Delta \lambda$ is the
Full-Width-Half-Maximum (FWHM) of the instrumental profile.}
($R\sim2\,000-20\,000$) with a known instrumental profile. An absorption line can be fit
with a Voigt profile of known FWHM and strength (i.e., EW).
We start from the na\"ive idea that a normalized spectrum can be reproduced by
subtracting Voigt profiles of appropriate wavelengths, FWHMs,
and EWs from a constant function equal to one (representing the normalized
continuum). Under the weak line approximation
the spectrum so constructed would reproduce the observed spectrum
with fair precision\footnote{Here we just want to outline the main idea.
To generalize the method the weak line approximation
must be removed, and this is discussed in Sec.~\ref{sec_ewlibrary}.}.
To construct a full spectrum in this way we
need to know the EWs of the lines at the wanted \temp, \logg, and
abundance [El/H]\footnote{We define chemical
abundance of a generic element ``El"
as [El/H]=$\log\frac{N(El)}{N(H)}-\log\frac{N(El)_{\odot}}{N(H)_{\odot}}$ where
$N$ is the number of particles per unit volume.}
of the generic element ``El" the lines belong to.
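The na\"ive construction just described can be sketched as follows. This is a minimal illustration under the weak-line approximation, using Gaussian rather than Voigt profiles for brevity; the wavelengths, FWHMs, and EWs are invented:

```python
import numpy as np

def gaussian_line(wl, center, fwhm, ew):
    """Absorption profile whose integrated area equals the EW (weak-line regime)."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    depth = ew / (sigma * np.sqrt(2.0 * np.pi))
    return depth * np.exp(-0.5 * ((wl - center) / sigma) ** 2)

def spectrum_model(wl, lines):
    """Normalized spectrum: subtract one profile per line from the continuum (=1)."""
    flux = np.ones_like(wl)
    for center, fwhm, ew in lines:
        flux -= gaussian_line(wl, center, fwhm, ew)
    return np.clip(flux, 0.0, None)  # guard against unphysical negative flux

wl = np.arange(5212.0, 5214.0, 0.01)                  # 0.01 A/pix grid
lines = [(5212.6, 0.12, 0.05), (5213.2, 0.12, 0.02)]  # (wavelength, FWHM, EW) in A
model = spectrum_model(wl, lines)
```

Under the weak-line approximation the absorbed flux integrates to the sum of the EWs, which is what the EW library must provide at any point of the parameter space.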
For this purpose we synthesize the lines for a grid of stellar parameters \temp,
\logg, and chemical abundance [El/H]\footnote{The microturbulence employed
is a function of \temp\ and \logg\ as clarified in Appendix \ref{appx_microt}.},
measure the EWs at such points,
and store them into a library that we called the EW library.
The EW library contains all the information that describes the
strength of the lines in the stellar parameter and chemical space.
So defined, the EW of an absorption line is a function of the stellar parameters that we call
the General Curve-Of-Growth (GCOG), to recall that it
is the generalization of the well known Curve-Of-Growth (COG) function
(which can be obtained from the GCOG by fixing the parameters \temp\ and
\logg\ and leaving the abundance [El/H] as a free variable).
By using the EW library we can construct spectrum models with
stellar parameters and abundances corresponding to grid
points of the library.
To overcome the discreteness of the grid in the parameter space, we use
continuous functions that fit the EWs of the lines in the parameter and
chemical space. This can be done with polynomial functions that we
call ``polynomial GCOGs" and that we store in the ``GCOG library".
The advantage of this method is that
we just need to vary the parameters \temp, \logg, and abundances [El/H] in
the polynomial GCOGs to change the strength of the lines and construct
spectrum models for any stellar parameters and abundances until we find the
one that matches the observed spectrum best\footnote{The difference to
the RAVE chemical pipeline (Boeche et al. \citealp{boeche11}) is that the
latter takes \temp\ and \logg\ as external input and uses polynomial COGs to
derive chemical abundances only.}. This method is implemented in the code
that we call \Space.
To achieve this result, three steps are necessary: i) to build a line list of
absorption lines that must be as complete as possible (ideally all the
lines visible in stellar spectra), ii) to build an EW library where the EWs of
every absorption line are stored as a function of \temp, \logg, and [El/H], and
iii) to use the EW library to fit the polynomial GCOGs and store their
coefficients in the GCOG library that is employed by the code \Space\ to construct the
spectrum model. These steps are outlined in the next sections.\\
\section{The line list}\label{sec_lines_list}
To build the EW library we need a list of atomic and molecular absorption
lines and their physical parameters.
These physical parameters are: wavelength, atom or molecule
identification, oscillator strength ($f$, often expressed as logarithm
\loggf, where $g$ is the statistical weight),
excitation potential ($\chi$),
van der Waals damping constant $C6$, and dissociation energy $D_0$ (only for
molecules).
The atomic line list was taken from the Vienna Atomic Line Database (VALD,
Kupka et al. \citealp{kupka}), with the option that selects the lines with
expected strengths larger than 1\% in at least one of the normalized
spectra of the Sun, Arcturus, and Procyon\footnote{The VALD web interface
lists the lines whose strengths are larger than 1\% of the normalized
flux of a synthetic spectrum whose stellar parameters correspond to the
grid point nearest to the stellar
parameters provided by the user. For instance, for
the Sun the closest grid point is at \temp=5750~K, \logg=4.5, \met=0~dex.}.
Afterwards, the EWs of these lines were
re-computed with the code MOOG \cite[Sneden][]{sneden}
and only the lines with EW$>$1m\AA\ in at least
one of these stars were included in the line list.
The molecular line list was taken from Kurucz \cite{kurucz} and selected
with the same procedure employed for the atomic lines.
In the present work the line list covers the wavelength intervals
5212-6860\AA\ and 8400-8924\AA. We chose the first interval because
it is commonly covered by optical spectra, while the second
interval (the \ion{Ca}{II}\ triplet region) becomes particularly important because
of the Gaia spectral coverage. Extensions of the line list to other wavelength
ranges can be done in the future.
\subsection{The atomic lines}\label{sec_atomic_lines}
The wavelengths and excitation potentials were adopted from the VALD database.
The oscillator strengths are discussed in Sec.\ref{sec_gf_calibration}.
The van der Waals damping constants $C6$ were taken from the VALD database
when such values are available.
When VALD does not provide the damping constants,
we adopted the Uns\"old approximation (computed by MOOG) multiplied by
the enhancement factor E$_{\gamma}$ following the recipe of Edvardsson et
al. \cite{edvardsson} and Chen et al. \cite{chen}. For the neutral iron
lines \ion{Fe}{I}, this recipe assigns E$_{\gamma}=1.2$ for lines with
$\chi\le2.6$eV and E$_{\gamma}=1.4$ for $\chi>2.6$eV (Simmons \& Blackwell
\citealp{blackwell}), whereas for the ionized \ion{Fe}{II}\ lines E$_{\gamma}=2.5$
(Holweger et al. \citealp{holweger90}). For \ion{K}{I}, \ion{Ti}{I}, and \ion{V}{I}\
E$_{\gamma}=1.5$ (Chen et al. \citealp{chen}), for \ion{Na}{I}\ E$_{\gamma}=2.1$
(Holweger et al. \citealp{holweger71}), for \ion{Ca}{I}\ E$_{\gamma}=1.8$ (Oneill
\& Smith \citealp{oneill}), for \ion{Ba}{II}\ E$_{\gamma}=3.0$ (Holweger \&
Mueller \citealp{holweger74}). For any other element,
E$_{\gamma}=2.5$ (Maeckle et al. \citealp{maeckle}).\\
Precise damping constants were computed by Barklem et
al. \cite{barklem00} and Barklem \& Aspelund-Johansson \cite{barklem05}.
Such values are contained in the MOOG data files and, by setting
the MOOG keyword ``{\it damping}=1", we imposed the use of the Barklem values
when they are available.
There are a few cases for which there are no Barklem damping
constants and for which the enhancement factor E$_{\gamma}$ does not apply.
These are:\\
\begin{itemize}
\item The strong and broad lines of \ion{H}{I}.
Our synthesis with MOOG under Local Thermodynamic Equilibrium (LTE)
assumptions and one dimensional (1D) stellar atmosphere models (see
Sec.\ref{sec_spectra_models} for details) renders an H$\alpha$ line that is
too weak at the line core, whereas the synthetic Paschen \ion{H}{I}\ lines
in the near infrared are too strong at the tip of the line and
too weak in the wings with respect to
the observed lines. For all these lines we adopted the
Uns\"old approximation and calibrated the \loggf s by hand to improve the
fit. However, the match between synthetic and observed \ion{H}{I}\ lines remains
unsatisfactory and the lines at 6562.797\AA, 8467.258\AA, 8502.487\AA,
8598.396\AA, 8665.022\AA, 8750.476\AA, and 8862.787\AA\
are neglected during the \Space\ estimation process. The other Paschen \ion{H}{I}\ lines
in the interval 8400-8924\AA\ are so weak in our standard stars that
they can be neglected in the \temp\ and \logg\ range considered.
\item For \ion{Si}{I}\ the damping constants reported in VALD always appear too
small. In fact, the \ion{Si}{I}\ lines observed in real spectra are always broader
than the lines synthesized with the VALD damping constant. Also the
value E$_{\gamma}=2.5$ suggested by Holweger \cite{holweger73}
appears too small\footnote{This is in contrast with Wedemeyer
\cite{wedemeyer} who found that the Uns\"old approximation can well
describe the wings of silicon lines on the Sun.}.
After some tests, we adopted E$_{\gamma}=4.5$
which improves the match of the wings in many (but not all) \ion{Si}{I}\
lines.
\item For the \ion{Mg}{I}\ lines 8712.682\AA, 8717.815\AA,
and 8736.016\AA\ we adopted E$_{\gamma}=6.0$ in order to
better match their broad wings. These lines are
multiplets treated as one line (as explained in the following).
\end{itemize}
The need for the E$_{\gamma}$ adjustments just reported was
recognized during the first attempts to calibrate the \loggf s, and the
adjustments were applied before the final calibration procedure
(described later in Sec.~\ref{sec_spectra_models}).
In the line list there are multiplets where the lines are so close that
they are physically blended. Because lines of multiplets have the same $\chi$,
these physically blended multiplets can be described (as a first approximation)
as if they were one single line.
For multiplets with lines closer than 0.1\AA\ we
adopted one single line with the same $\chi$ and wavelength, which
is the average of the multiplet's wavelengths. As \loggf\ we adopt the
multiplet's largest \loggf, which is afterwards calibrated by the \loggf\
calibration routine (described in Sec.\ref{sec_gf_calibration}).\\
\subsection{The molecular lines}
Molecular lines of several species are present in the
considered wavelength ranges. While in hot stars molecules
have a very low probability to form and their spectral lines have negligible
strengths, in cool stars molecular lines become important.
In the range 5212-6860\AA\ the spectra of cool stars
show many absorption features that belong to the species CN, CH, MgH, and TiO.
However, the very high number of molecular lines
present in the range 5212-6860\AA\ prevents us from performing a reliable
calibration of their \loggf s (this is discussed in Sec.~\ref{sec_CN}).
Therefore, in this work we only treat the CN molecule in the wavelength region
8400-8924\AA\ where the CN lines are sparse and
most of them can be identified one by one.
Physical parameters such as wavelengths and excitation potential are
taken from Kurucz \cite{kurucz}.
The CN molecule dissociation energy $D_0=7.63$eV was taken from Reddy et al. \cite{reddy}.
Oscillator strengths for the CN were taken from Kurucz and afterwards
calibrated by the \loggf\ calibration routine (Sec.\ref{sec_gf_calibration}).
Molecular multiplets are treated like the atomic multiplets (see
Sec.\ref{sec_atomic_lines}).\\
\section{The \loggf\ calibration procedure}\label{sec_gf_calibration}
After the preparation of the line list outlined in
Sec.~\ref{sec_lines_list} we focus on the accuracy of the $gf$-values.\\
Most of the $gf$-values were derived from
theoretical and semi-empirical calculations (e.g., Seaton \citealp{seaton};
\citealp[Kurucz \& Peytremann][]{kurucz_gf})
which are known to have significant errors \cite[Bigot \&
Th{\'e}venin][]{bigot}. Although substantial efforts have been and are
currently being made to obtain precise $gf$-values from laboratory measurements
(from Blackwell et al. \citealp{blackwell72}, \citealp{blackwell76},
\citealp{blackwell79}, \citealp{blackwell82}, to the more recent works by Ruffoni et al.
\citealp{ruffoni}) the number of lines for which laboratory $gf$-values are
available is still small with respect to the number of lines visible in a
stellar spectrum. Besides, the lines targeted for $gf$ laboratory
measurements are the unblended ones, important for the classical
spectral analysis. This leaves the blended lines uncovered by the laboratory
measurements.
To improve the quality of the numerous (but inaccurate) $gf$-values provided by
the theoretical computations, some authors calibrate the oscillator
strengths by setting the $gf$-values to match the strength of the
synthetic line with the corresponding line in the Sun spectrum
(among others Gurtovenko \& Kostik \citealp{gurtovenko81},\citealp{gurtovenko82};
Th\'evenin \citealp{thevenin89};\citealp{thevenin90}; Borrero
et al. \citealp{borrero}), in two stars like the Sun and
Arcturus (Kirby et al. \citealp{kirby}; Boeche et al.
\citealp{boeche11}) or in three stars like the Sun, Procyon and $\epsilon$
Eri \cite[Lobel et al.][]{lobel}. Recently, Martins et al. \cite{martins}
used the spectra of three different stars (the Sun, Arcturus, and Vega)
and a statistical technique
(the cross-entropy algorithm) to recover the oscillator strengths and broadening
parameters that minimize the difference between the observed spectra and the
synthetic ones. These works employ the idea that, by using more than one
star we can disentangle lines in blends and recover their individual atomic
parameters\footnote{This idea was also employed by \citealp[Boeche et al.][]{boeche11}
for the $gf$ calibration alone, but using a less sophisticated method.}.\\
For our line list we decided to calibrate
the oscillator strengths of any blended or isolated line on five different
stellar spectra.
This is necessary because \Space\ employs a full-spectrum-fitting analysis,
and the few lines with reliable oscillator strengths
available would not be sufficient.
In the following we outline our calibration method. This method
compares the strengths of the synthetic lines with
two (or more) stellar spectra in order to correct the $gf$ values of isolated
and blended lines. The method was first proposed in
Boeche et al. \cite{boeche11} and we report it here.\\
In the framework of 1D atmosphere models and LTE assumptions, the EW of a line is a
function of parameters such as \temp, \logg, the abundance [El/H], the excitation
potential $\chi$, the $gf$-value, the damping
constant, and the microturbulence $\xi$. Given the atomic parameters $\chi$
and damping constant C6, and assuming that we know with good accuracy the stellar
parameters \temp, \logg, [El/H], and $\xi$ of the stars employed, the EW of a line
can be described as a function of \loggf\ alone \\
\begin{equation}\label{eq_ew}
EW=\mathcal{F}(\log gf).
\end{equation}
Since we know the stellar parameters of the Sun,
by measuring the EW of a line in the Sun spectrum we can
determine its $gf$-value by using equation~(\ref{eq_ew}). This is the so
called ``astrophysical calibration" of the $gf$-values and it only applies
to isolated lines.\\
Now consider a blend made of two lines
$l_1$ and $l_2$.
The equivalent width of the whole blend in the solar spectrum can be written as
\begin{equation}\label{eq_ew_oneline}
EW_{Sun}^{blend} = \mathcal{F}_{Sun}(\log gf_1,\log gf_2),
\end{equation}
and the equation is underdetermined because the two oscillator strengths are
unknown. To make it determined we need to measure the EW of the
blend in another star for which we know the stellar parameters and abundances.
Then, we can write the equation system\\
\begin{eqnarray}\label{eq_ew_twolines}
EW_{Sun}^{blend} & = & \mathcal{F}_{Sun}(\log gf_1,\log gf_2)\nonumber\\
EW_{star}^{blend} & = & \mathcal{F}_{star}(\log gf_1,\log gf_2),
\end{eqnarray}
and the system is determined. In the general case of a blend composed of $n$
lines, the $gf$-values can be determined by measuring the EWs of
the blend in $k$ different well known stars, and the equation
system\\
\begin{equation}\label{eq_ew_manylines}
EW_{k}^{blend} = \mathcal{F}_{k}(\log gf_1,...,\log gf_n)
\end{equation}
is determined when $k \ge n$. Degeneracy can happen when more than one line
belongs to the same element and their excitation potentials $\chi$ are
the same (like in multiplets). In this case the lines behave as one line.
If the lines are close in wavelength we can approximate them as if they were
one single line (as described in Sec.~\ref{sec_atomic_lines}). If more than
one line belongs to the same element and the $\chi$s are
not the same, the degeneracy can be broken by choosing stars with different
stellar parameters, so that the contribution of the lines to the total
EW of the blend is different in different spectra.\\
Ideally, equation system (\ref{eq_ew_manylines})
states that for any line or blend, the \loggf s can be astrophysically calibrated. In practice,
this is not fully true for at least two reasons.
First, we can only work with a limited number of stars, which may not be large enough to solve
all the possible blends (for instance, when $n>k$, equation system~(\ref{eq_ew_manylines})
is underdetermined).
Second, uncertainties in the
EW measurements, in the continuum correction, or in the atmospheric
parameters prevent the equality between the measured and the
synthesized EWs (which can be seen as the left- and righthand side of the equation system
(\ref{eq_ew_manylines})). In this realistic case (which is the case of this
work) we can only minimize
the residuals between the left- and the righthand side terms of the
equation system. Besides, because the analytical forms of the $\mathcal{F}$ functions in
the equation system are unknown, the solution of the system relies
on an iterative process where the EWs of the observed spectra (lefthand terms of the system)
are compared with the EWs resulting from the synthesis of the spectra (righthand
terms of the system) and the variables \loggf s are varied
until the residuals are minimized.
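Such an iterative scheme can be sketched numerically. The forward model below is an invented weak-line toy (EWs linear in the $gf$-values, with made-up star-dependent weights), and the damping factor and step confinement are illustrative choices, not the actual procedure described later:

```python
import numpy as np

# Toy forward model: under the weak-line approximation the EW of a blend in
# star k is a weighted sum of the lines' gf-values. The weights c[k, l] stand
# in for the star-dependent response (excitation, abundance); all numbers
# here are invented.
c = np.array([[1.0, 0.3],
              [0.4, 1.2],
              [0.8, 0.9]])            # k = 3 stars, n = 2 blended lines
true_loggf = np.array([-1.0, -2.0])
ew_obs = c @ 10.0 ** true_loggf       # "measured" EWs (left-hand terms)

loggf = np.array([-1.5, -1.5])        # inaccurate theoretical starting values
for _ in range(300):
    ew_model = c @ 10.0 ** loggf      # "synthesized" EWs (right-hand terms)
    resid = (ew_obs - ew_model) / ew_obs
    # Attribute each blend's residual to the lines in proportion to their
    # fractional contribution, then damp and confine the step.
    frac = (c * 10.0 ** loggf) / ew_model[:, None]
    delta = (frac * resid[:, None]).sum(axis=0) / frac.sum(axis=0)
    loggf += np.clip(0.4 * delta, -0.05, 0.05)
```

With $k=3$ stars and $n=2$ lines the system is overdetermined and consistent, so the iteration drives both \loggf s to their true values.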
A further difficulty arises when some of the elemental abundances of the stars
employed are unknown. If an element of unknown abundance has isolated lines
in the spectra of these stars, then we can derive its abundance after the \loggf s of these
lines have been calibrated on the Sun or on another well known star.\\
In the following we outline our solution, which makes use of
the spectra of the Sun and four other stars.
\subsection{Stellar spectra and atmosphere models}\label{sec_spectra_models}
To minimize the residuals between the left- and righthand side
terms of the equation system (\ref{eq_ew_manylines}),
we used high resolution and high S/N spectra
of five stars. We chose the spectra of the Sun, Arcturus
(both from Hinkle et al., \citealp{hinkle}), Procyon,
$\epsilon$ Eri, and $\epsilon$ Vir (from Blanco-Cuaresma et al.,
\citealp{blanco-cuaresma}). These five stars belong to the Gaia FGK benchmark
stars proposed as standard stars for calibration purposes
(Blanco-Cuaresma et al., \citealp{blanco-cuaresma}; Jofr\'e et al.,
\citealp{jofre}).
The Sun and Arcturus spectra were observed with the same instrument
(the Coud\'e feed telescope and spectrograph at Kitt Peak),
while the other spectra were observed with three different
spectrometers. The spectra of Procyon and
$\epsilon$ Vir were observed with the NARVAL spectropolarimeter with a
spectral resolution of R$\sim$81\,000 covering the full range of wavelengths
between 3000\AA\ and 11\,000\AA. For $\epsilon$ Eri, Blanco-Cuaresma et al.
only provide spectra taken with the UVES and HARPS spectrometers, which
present gaps in the wavelength coverage (at $\sim$5304-5336\AA\ for HARPS and
at $\sim$5770-5840\AA\ and 8540-8661\AA\ for UVES).
For this reason we employed the HARPS spectrum for
the wavelength range 5712-6260\AA\ and the UVES spectrum everywhere else.
Unfortunately, the gap at 8540-8661\AA\ cannot be covered, because
no spectra from other instruments are available in this wavelength range.
This means that for this wavelength interval, $\epsilon$ Eri does not play any
role in the $\log gf$ calibration, which is only performed on the spectra of
the other four stars.
All the spectra were re-sampled to a dispersion of 0.01\AA/pix to
match the dispersion of the synthetic spectra. In fact, the calibration
routine (described in Sec.~\ref{sec_corr_routine}) requires that both have
the same sampling.
Hinkle et al. provided the spectra free of telluric lines (they were
subtracted from the observed spectra), therefore
they cannot affect the strengths of the absorption lines.
Conversely, telluric lines are present in
the Blanco-Cuaresma spectra and, for the wavelength ranges
considered here, they mainly affect the range $\sim$6274-6320\AA.
The presence of telluric lines superimposed on an absorption line can affect
its \loggf\ calibration. However, we verified that the final
effect on the calibrated \loggf s is in general weak or negligible, because the
calibration is performed simultaneously on two spectra free from telluric
lines and on three others in which the telluric lines do not lie at the same
wavelengths because of the different velocity corrections $\Delta v$
applied (the spectra exhibit different radial velocities).\\
To synthesize the spectra we used the code MOOG (Sneden et al.,
\citealp{sneden}) and the stellar atmosphere models
from the ATLAS9 grid \cite[Castelli \& Kurucz][]{castelli} updated to the
2012 version\footnote{http://wwwuser.oats.inaf.it/castelli/grids.html}.
For the solar spectrum we assumed an
effective temperature \temp=5777K, gravity \logg=4.44, metallicity
\met=0.00~dex. For Arcturus we assumed the stellar parameters of
Ram{\'{\i}}rez \& Allende Prieto \cite{ramirez}, while
for the other stars we adopted the stellar parameters
given in Jofr\'e et al. \cite{jofre}.
The microturbulence $\xi$ adopted is inferred during the calibration process as
explained in Sec.~\ref{sec_microt}.
All these stellar parameters are summarized in Tab.~\ref{tab_st_params}.
The elemental abundances adopted for the Sun are [El/H]=0~dex by
definition, with solar abundances adopted from
Grevesse \& Sauval \cite{grevesse}. For Arcturus we adopted the elemental
abundances given by Ram\'irez \& Allende Prieto \cite{ramirez}. For elements
for which Ram\'irez \& Allende Prieto gave no abundance, we imposed [El/H]=\met\ at the beginning of the
calibration and left open the possibility of changing the abundance during the
calibration process. In fact, if the element has an isolated line its
\loggf\ can be calibrated on the Sun and its abundance on Arcturus can be
therefore derived. Similarly, at the beginning of the calibration process we
imposed [El/H]=\met\ for the elements of the other stars and allowed
possible changes of [El/H] during the process.\\
Because the instrumental resolution varies with wavelength, and because
MOOG adopts one constant FWHM per synthesized interval (used to convolve the
synthetic spectrum with the adopted instrumental profile)
we synthesized the spectra in four pieces covering the
wavelength ranges 5212-5712\AA, 5712-6260\AA, 6260-6860\AA, and 8400-8924\AA.
The line profile of the synthetic spectra was broadened with a Gaussian
profile (to reproduce the instrumental profile of the spectrograph)
and a macroturbulence profile, the best matching values
of which were chosen via eye inspection for every wavelength range.
While the macroturbulence is constant across these wavelengths, the Gaussian
instrumental profile broadens with wavelength.
In the last column of Tab.~\ref{tab_st_params} we report the macroturbulence
$v_{mac}$ adopted, while in Tab.~\ref{tab_fwhm} we summarize the best matching
Gaussian FWHM chosen for the four wavelength ranges.
\begin{table}[t]
\caption[]{Effective temperature (K), gravity, metallicity (dex), micro- and
macroturbulence (in \kmsec) adopted to synthesize the spectra of the standard stars.}
\label{tab_st_params}
\vskip 0.3cm
\centering
\begin{tabular}{l|c|ccccc}
\hline
\noalign{\smallskip}
star & sp. class & \temp & \logg & \met & $\xi$ & $v_{mac}$\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
Sun & G2V & 5777 & 4.44 & 0.00 & 1.3 & 2.5\\
Arcturus &K1.5III& 4286 & 1.66 & -0.52 & 1.7 & 5.3\\
Procyon &F5IV-V& 6554 & 3.99 & -0.04 & 2.1 & 7.5\\
$\epsilon$ Eri &K2Vk:& 5050 & 4.60 & -0.09 & 1.1 & 3.5\\
$\epsilon$ Vir &G8III& 4983 & 2.77 & +0.15 & 1.5 & 6.0\\
\noalign{\smallskip}
\hline
\end{tabular}
\end{table}
\begin{table}[t]
\caption[]{Instrumental FWHMs adopted for the synthetic spectra in
the four wavelength ranges.}
\label{tab_fwhm}
\vskip 0.3cm
\centering
\begin{tabular}{l|cccc}
\hline
\noalign{\smallskip}
star & \multicolumn{4}{c}{FWHM(\AA) at} \\
& 5212- & 5712- & 6260- & 8400-\\
& 5712\AA & 6260\AA & 6860\AA & 8924\AA\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
Sun & 0.04 & 0.05 & 0.06 & 0.07 \\
Arcturus & 0.04 & 0.04 & 0.04 & 0.07 \\
Procyon & 0.05 & 0.05 & 0.06 & 0.08 \\
$\epsilon$ Eri & 0.05 & 0.03 & 0.07 & 0.08 \\
$\epsilon$ Vir & 0.05 & 0.05 & 0.06 & 0.07 \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{table}
\begin{figure*}[t]
\begin{minipage}[t]{6cm}
\includegraphics[width=6cm,viewport=30 28 232 390]{initial_deg.pdf}
\caption{Distributions of the $NEWR$s of the elements Fe (gray points) and Ti
(black points) as a function of their EW for the five stars before the beginning of
the calibration routine.}
\label{initial}
\end{minipage}
\hfill
\begin{minipage}[t]{6cm}
\includegraphics[width=6cm,viewport=30 28 232 390]{ini_abd_101iter_deg.pdf}
\caption{As in Fig.~\ref{initial} but after 100 iterations of the \loggf\ calibration
routine.}
\label{first_gf_corr}
\end{minipage}
\hfill
\begin{minipage}[t]{6cm}
\includegraphics[width=6cm,viewport=30 28 232 390]{final_corr_deg.pdf}
\caption{As in Fig.~\ref{initial} but at the final stage, after many
iterations of the calibration routine and properly adjusted abundances.}
\label{final_corr}
\end{minipage}
\end{figure*}
\subsection{The \loggf s calibration routine and the abundances
correction}\label{sec_corr_routine}
The calibration routine consists of two parts: the $\log gf$s calibration and
the abundances correction. The first part is semi-automatic, the second part
is manual.
The procedure begins with the first synthesis of the five
spectra by using the code MOOG. At the beginning, we fix the abundances that
are known, and these remain unchanged through the whole process with a few
exceptions illustrated later on.
For the Sun the abundances are fixed at [El/H]=0~dex. For Arcturus
we fix the abundances of 16 elements as given by Ram{\'{\i}}rez
\& Allende Prieto \cite{ramirez}. For the other elements we set
[El/H]=[M/H]. Since the elemental abundances of the other
stars were treated as unknown\footnote{Precise abundances of these stars have been derived by
Jofr{\'e} et al. \cite{jofre15}, whose results were not yet public at the time of this work.},
we set [El/H]=[M/H] for them at the beginning.
Then the first \loggf\ calibration continues as follows:
\begin{enumerate}
\item The 5 spectra are synthesized by using the adopted line list and
atmosphere models.
\item The observed spectra are re-normalized with the same routine used for
the \Space\ code (see Sec.\ref{sec_renorm}). In this case the
interval has a radius of 5\AA\ and only normalized fluxes larger than 0.98 are
considered.
\item With MOOG (driver {\it ewfind}) the equivalent widths $EW_i^k$ of the lines
for the 5 spectra are computed. These are the expected EWs of the lines
if they were isolated.
\item The isolation degree parameter $iso$ for the $i$-th line
and $k$-th star is computed as
$$
iso_i^k=\frac{EW_i^k}{\sum_{\lambda_i-0.3\AA}^{\lambda_i+0.3\AA} EW_i^k}
$$
where $\lambda_i$ is the central wavelength of the $i$-th line.
It approximates the fraction of the flux absorbed by the $i$-th line over the
total flux absorbed by all the lines present in an interval 0.6\AA\ wide centered on the $i$-th line.
\item The Normalized Equivalent Width Residual ($NEWR$) for the $i$-th line and the
$k$-th star is computed as follows
$$
NEWR_i^k=\frac{F_i^{k,synt}-F_i^{k,obs}}{1-F_i^{k,obs}}
$$
where $F_i$ is the flux integrated over an interval
centered on the $i$-th line. The width of the interval is 0.05\AA\ if the
$EW<70$m\AA, 0.10\AA\ if $70\leq EW<150$m\AA, 0.15\AA\ if $150\leq EW<200$m\AA,
0.20\AA\ if $200\leq EW<250$m\AA, and 0.30\AA\ if $EW\geq 250$m\AA.
The $NEWR$ represents the residual between the strengths of the synthetic and the
observed line. When the $NEWR$ is negative this means that the synthetic line is
stronger than the observed one (and vice versa).
\item The $\log gf$ calibration of the $i$-th line is performed by adding the quantity
$$
\Delta \log gf_i=-\frac{\sum_k NEWR_i^k \cdot iso_i^k}{\sum_k iso_i^k}
$$
to the $\log gf_i$ .
If the line belongs to an atom the weighted sum considers all five spectra,
otherwise (i.e., if it belongs to a molecule) the spectrum of Procyon is neglected (because no
molecular line is visible in its spectrum).
If $\Delta \log gf_i>0.05$, we confine it to this value to avoid
divergences. If $\Delta \log gf_i<0.01$ then we set it to zero.
If $\log gf_i<$-9.99 or $\log gf_i>$3.0 the line is
removed from the line list.
\item The EWs of the lines are computed with the driver {\it ewfind} of MOOG. The
lines with $EW\leq$3m\AA\ in all of the five spectra are removed from the
line list.
\item The routine is repeated from step~1.
\end{enumerate}
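Steps 4 and 5 of the routine can be sketched as follows. The line wavelengths, EWs, and pixel fluxes are invented, and the discrete NEWR below uses a fixed pixel window rather than the EW-dependent interval widths of step 5:

```python
import numpy as np

def isolation_degree(wavelengths, ews, i, half_window=0.3):
    """Step 4: fraction of the EW in a 0.6 A window belonging to the i-th line."""
    mask = np.abs(wavelengths - wavelengths[i]) <= half_window
    return ews[i] / ews[mask].sum()

def newr(flux_synt, flux_obs):
    """Step 5: Normalized Equivalent Width Residual over the line window.
    For a normalized spectrum the continuum integral over n pixels is n."""
    return (flux_synt.sum() - flux_obs.sum()) / (len(flux_obs) - flux_obs.sum())

# Toy data: three lines; the first two are blended, the third is isolated.
wl = np.array([6000.00, 6000.20, 6001.50])
ew = np.array([30.0, 10.0, 50.0])                 # mA
iso = [isolation_degree(wl, ew, i) for i in range(len(wl))]

# A synthetic line deeper than the observed one yields a negative NEWR.
flux_synt = np.array([0.90, 0.80, 0.90])
flux_obs = np.array([0.95, 0.90, 0.95])
```

The isolated line has $iso=1$, while the blended pair shares the window's EW; the $iso$ values then weight the $\Delta \log gf$ correction of step 6.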
This routine is always followed with three exceptions:
i) for very strong lines (some tens of lines) the
chosen interval (step~5 of the routine) is larger than 0.05\AA\ to
match the full line instead of the core alone;
the interval was chosen after eye inspection;
ii) many intense lines in the star $\epsilon$ Eri have particularly
wide wings, which cannot be well synthesized; therefore, for these lines
the calibration $\Delta \log gf$ was computed by neglecting
the $\epsilon$ Eri spectrum; iii) for strong lines like the \ion{H}{I}\ lines,
the \ion{Na}{I}\ doublet at 5889 and 5895\AA, the \ion{Ca}{II}\ triplet in the infrared
region, and a few other intense \ion{Fe}{I}\ lines, the \loggf s were set by hand
after eye inspection.
\begin{figure*}[t]
\centering
\includegraphics[width=16cm,clip,viewport=98 123 475 599]{final_abd_ini_gf_281014.pdf}
\caption{Gray thick lines: observed spectra. Dotted lines: synthesized spectra
with VALD \loggf s but final abundances. Black lines: synthetic spectra
with calibrated \loggf s and final abundances.
}
\label{final_abd_ini_llist}
\end{figure*}
\begin{table*}
\caption{Final chemical abundances for the 5 stars at the end of the
\loggf\ calibration compared with the abundances by Ram{\'{\i}}rez \&
Allende Prieto \cite{ramirez} and Allende Prieto et al. \cite{allende}.
The values with an asterisk, ``*", are the abundances of the non-ionized
element derived by Ram{\'{\i}}rez \& Allende Prieto
\cite{ramirez}.}\label{tab_5stars_abd}
\centering
\begin{tabular}{l|l|cc|cc|cc|c}
\hline
\noalign{\smallskip}
element & N &\multicolumn{2}{c}{Arcturus} & \multicolumn{2}{c}{Procyon} &
\multicolumn{2}{c}{$\epsilon$ Eri}& $\epsilon$ Vir\\
\hline
& & [El/H] & [El/H] & [El/H] & [El/H] & [El/H] & [El/H] & [El/H]\\
& & us & Ramirez & us & Allende Prieto & us & Allende Prieto & us \\
\hline
Mg & 24 & -0.15 & -0.15 & -0.04 & -0.01 & -0.09 & -0.03 & 0.15\\
Si & 228 & -0.19 & -0.19 & -0.02 & 0.07 & 0.01 & -0.01 & 0.25\\
Ca & 74 & -0.41 & -0.41 & -0.04 & 0.25 & -0.09 & -0.01 & 0.15\\
Sc & 76 & -0.37 & -0.37* & -0.04 & 0.07 & -0.09 & 0.02 & -0.02\\
Ti & 463 & -0.25 & -0.25* & 0.02 & 0.13 & -0.07 & 0.01 & 0.01\\
V & 265 & -0.32 & -0.32 & -0.04 & & -0.02 & & 0.02\\
Cr & 225 & -0.57 & -0.57 & -0.04 & & -0.09 & & 0.08\\
Fe & 1436 & -0.52 & -0.52 & -0.06 & 0.03 & -0.05 & -0.06 & 0.15\\
Co & 186 & -0.23 & -0.43 & -0.04 & 0.05 & -0.09 & -0.08 & 0.15\\
Ni & 194 & -0.46 & -0.46 & -0.04 & 0.07 & -0.09 & -0.06 & 0.13\\
\hline
\end{tabular}
\end{table*}
At the beginning of the calibration routine the $NEWR$s are distributed
as shown in Fig.~\ref{initial}
(in this and in the two following figures we only show the $NEWR$s of the elements
Fe and Ti for the sake of clarity).
After 100 iterations the $NEWR$ distributions look as shown in
Fig.~\ref{first_gf_corr}. Note that the dispersion of the points has decreased
and that the Fe lines (gray points) in Procyon and the Ti lines (black
points) in $\epsilon$ Vir show an offset. The
offsets are due to the assumption of the wrong Fe and Ti abundances for these stars at the
beginning of the procedure. To continue with the \loggf s calibration we
must apply the second part of the calibration process, i.e., the abundance
correction, which is done manually. The negative offset of the Ti lines in $\epsilon$
Vir indicates that the Ti lines are too strong, therefore, to match the observed
spectrum, the Ti abundance must be decreased.
Similarly, when the offset is positive, the abundance must be increased.
These evaluations and the consequent changes in abundance are done by observing the
distribution of the $NEWR$s of the lines whose isolation degree parameter $iso$
is larger than 0.99 (i.e., using isolated lines alone) in order to infer the right
abundances. If no isolated lines are present, the
abundances remain [El/H]=[M/H].
The abundance correction is performed after the \loggf s calibration, and
both are carried out iteratively until no more lines
are rejected by the calibration routine and the $NEWR$ distributions
are centered on zero.\\
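The correction logic just described (use the $NEWR$s of the isolated lines of an element and shift its abundance according to the sign of their offset) can be illustrated with a minimal sketch. The function name, the proportional update rule, and the gain factor are our illustrative assumptions, not the actual implementation:

```python
def correct_abundance(abundance, newrs, iso_degrees, iso_min=0.99, gain=0.1):
    """One abundance-correction step, sketched.

    newrs:       NEWR values of the element's lines.
    iso_degrees: isolation degree parameter of each line; only
                 isolated lines (iso > iso_min) are used.
    A negative mean NEWR means the synthetic lines are too strong,
    so the abundance is decreased; a positive mean increases it.
    """
    isolated = [n for n, iso in zip(newrs, iso_degrees) if iso > iso_min]
    if not isolated:          # no isolated lines: keep [El/H] = [M/H]
        return abundance
    offset = sum(isolated) / len(isolated)
    return abundance + gain * offset   # nudge the abundance toward NEWR = 0
```

Iterating this step together with the \loggf\ calibration until the $NEWR$ distributions are centered on zero reproduces the loop described above.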
We want to spend a few more words on the abundance correction. The optimal
condition for the \loggf s calibration would be to have precise elemental
abundances for all the stars employed (as required by
equation~(\ref{eq_ew_manylines}))
so that the abundance correction would not be necessary.
Because at the time of this work precise abundances of these stars were not available,
in order to fulfill the condition in equation~(\ref{eq_ew_manylines})
as much as possible, we adopted the known
abundances, i.e., the Sun and the Arcturus abundances. For the
other stars (or elements) for which we do not have chemical abundances,
we adopted [El/H]=[M/H] and then followed the method described before
(i.e., observing the distribution of the $NEWR$s of the isolated lines of
an element and changing its abundance to minimize the average of the absolute
$NEWR$ values) whenever the adopted initial abundance was not satisfactory for this element.
In the case of the element \ion{Co}{} in Arcturus, we decided to follow this method
and we adjusted its abundance to [Co/H]=$-0.23$~dex because by using the
value [Co/H]=$-0.43$~dex derived by Ram{\'{\i}}rez \& Allende Prieto it was
not possible to minimize the absolute average $NEWR$ values for all the stars.\\
We want to stress that the solar abundances were never
changed during the whole process. The Sun is synthesized with the
Grevesse \& Sauval \cite{grevesse} solar abundances and
these abundances must not be changed because this is the reference point
on which the whole calibration procedure is based. Without this reference
point no calibration is possible, because the equation system
(\ref{eq_ew_manylines}) would become underdetermined.\\
\subsection{Setting the microturbulence}\label{sec_microt}
At the beginning of this work we tested the calibration
routine several times in order to find the best procedure. In some of these tests
we adopted the microturbulence values reported in Jofr\'e et al.
\cite{jofre}, which are 1.2, 1.3, 1.8, 1.1, and 1.1 \kmsec\ for the Sun,
Arcturus, Procyon, $\epsilon$ Eri, and $\epsilon$ Vir, respectively. With
these values we could not find satisfactory results, which means that the
$NEWR$s of the isolated Fe lines did not align close to the $NEWR=0$ line for some
of the stars, no matter what Fe abundance was adopted.
Therefore, we decided to change the $\xi$ values interactively
during the calibration process to minimize the absolute $NEWR$ values. The
$NEWR$ values are sensitive to microturbulence, and in
Fig.~\ref{newr_microt_variation} we show the difference in the $NEWR$s
distributions observed by changing the microturbulence by 0.4~\kmsec\ (while
all other parameters are fixed) for the
Sun and Arcturus, with the best performing $\xi$ value in the left-hand panels.
Our final best $\xi$ values are reported in Tab.~\ref{tab_st_params}.
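The interactive tuning of $\xi$ amounts to choosing, among a set of candidate microturbulence values, the one that minimizes the mean absolute $NEWR$ of the isolated Fe lines. A minimal sketch, where the `newrs_for_xi` callback stands in for a full spectral synthesis and is our assumption:

```python
def best_microturbulence(xi_candidates, newrs_for_xi):
    """Pick the microturbulence value that minimizes the mean |NEWR|.

    newrs_for_xi: function mapping a candidate xi (km/s) to the list
    of NEWRs of the isolated Fe lines synthesized with that xi
    (a hypothetical interface standing in for the synthesis step).
    """
    def mean_abs_newr(xi):
        newrs = newrs_for_xi(xi)
        return sum(abs(n) for n in newrs) / len(newrs)
    return min(xi_candidates, key=mean_abs_newr)
```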
\subsection{The final line list}\label{sec_final_ll}
The whole calibration
procedure described in the previous section is performed by
applying the \loggf\
routine and the abundance correction alternately until convergence of the
abundances and until no more lines are removed by the \loggf\ routine. The
process began with a line list of 8\,947 lines. After convergence, the
final line list contains 4\,643 lines. The $NEWR$ distribution of the final
line list is shown in Fig.~\ref{final_corr}, while in Tab.~\ref{tab_5stars_abd}
we report the abundances derived with the abundance correction procedure for
the elements that we consider reliable (see Sec.~\ref{sec_loggf_accuracy}
for further explanations). The final line list with the calibrated \loggf s
is released together with the GCOG library.
At the end of the calibration procedure we verified by eye that the
synthetic and observed spectra match well
over the whole wavelength range considered. In some
cases there are unidentified absorption lines for which none of the lines
given in the VALD database seems to match. These lines are neglected
during the analysis performed by \Space.
We removed by hand some lines of the line list because they
were clearly erroneous (but not removed by the calibration routine
because they lie under incorrectly fitted lines, like under the \ion{Ca}{}\
triplet lines, for instance) or because their \loggf s were badly affected
by unidentified lines. In Fig.~\ref{final_abd_ini_llist} we compare
part of the synthetic and the observed spectra before and after the \loggf s
calibration. For this figure the synthesis was performed using the
final abundances reported in Tab.~\ref{tab_5stars_abd}, so that the
differences between the spectra synthesized with the VALD \loggf s and the
calibrated \loggf s are only due to the difference in \loggf s.
Fig.~\ref{final_abd_ini_llist}
shows that the spectra synthesized with the new calibrated \loggf s
match the observed spectra better than the ones synthesized with the
VALD \loggf s.
\begin{figure}
\includegraphics[width=9cm,viewport=28 28 408 196]{newr_microt_variation_deg.pdf}
\caption{$NEWR$ distributions for the Fe lines in Arcturus (top panels) and
the Sun (bottom panels) using microturbulences $\xi$ that differ by 0.4
\kmsec. Here we use Fe lines with isolation degree parameter $iso>0.99$
(i.e., these Fe lines are isolated).}
\label{newr_microt_variation}
\end{figure}
\subsection{Validation of the calibrated \loggf
s}\label{sec_loggf_validation}
In Fig.~\ref{comp_loggf} (left panel) we compare the original \loggf s
of the VALD database with our final calibrated \loggf s for those lines that
have $EW>5$m\AA\ in the solar spectrum. The residuals have an average offset
of $\sim+0.01$ with a dispersion of $\sim0.29$ (statistics computed excluding
strong lines, for which \loggf\ was calibrated by hand, and after
rejection of outliers with a 3$\sigma$ clipping).
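The offset and dispersion quoted above are computed on the residuals after a 3$\sigma$ clipping. A minimal sketch of such a statistic; iterating the clip until no further point is rejected is our assumption:

```python
import statistics

def clipped_offset_dispersion(calibrated, reference, nsigma=3.0):
    """Mean offset and dispersion of (calibrated - reference) after
    iterative n-sigma clipping of the outliers."""
    residuals = [c - r for c, r in zip(calibrated, reference)]
    while True:
        mu = statistics.mean(residuals)
        sd = statistics.pstdev(residuals)
        kept = [x for x in residuals if abs(x - mu) <= nsigma * sd]
        if len(kept) == len(residuals):   # converged: nothing clipped
            return mu, sd
        residuals = kept
```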
Because VALD is a database that collects data from several sources with
different degrees of precision, we want to verify the robustness of our calibrated
\loggf s by comparing them with precise values.
For this purpose, we accessed the NIST database \cite[Kramida et al.][]{nist}
and selected lines that have a $gf$ precision better than 10\% ($\sim$0.04
in \loggf) and $EW>5$m\AA\ in the solar spectrum. With these
criteria we found 328 lines in common with our line list. After removing the
lines with $EW<5.0$m\AA\ (for which we expect the largest calibrated \loggf\ errors) we
are left with 223 lines belonging to
the elemental species \ion{C}{I}, \ion{N}{I}, \ion{O}{I}, \ion{Na}{I}, \ion{Mg}{I}, \ion{Si}{II}, \ion{Sc}{I}, \ion{Sc}{II}, \ion{Ti}{I},
\ion{V}{I}, \ion{Cr}{I}, \ion{Mn}{I}, \ion{Fe}{I}, and \ion{Co}{I}. The comparison between the NIST and the
calibrated \loggf s is shown in the right panel of Fig.~\ref{comp_loggf}.
The comparison with high precision \loggf s shows that our calibrated \loggf
s have an offset of $\sim-0.12$~dex with a dispersion of $\sim$0.1~dex.
The negative offset means that our \loggf s are more negative than the
corresponding NIST \loggf s. This can be due to several causes:
i) an inappropriate line profile of the synthetic spectra (the line
profile can vary for lines with different strengths and line broadening
parameters);
ii) an inappropriate continuum normalization of the observed spectra;
iii) neglected Non Local Thermodynamic Equilibrium (NLTE) and 3D effects;
iv) errors in the stellar parameters adopted to synthesize the spectra.
As a last remark, one may regard the adopted solar abundances (Grevesse \&
Sauval \citealp{grevesse}) as
the cause of the offset of our calibrated \loggf s.
If these abundances (and in particular the Fe
abundance, which has the highest number of lines represented in
Fig.~\ref{comp_loggf}) were too high, the calibration would render
\loggf s lower than expected. Some works, based on laboratory \loggf s,
derived a solar iron abundance\footnote{Here we define
$\log[\epsilon(Fe)]=\log\frac{N(Fe)}{N(H)}+12$.} of
$\log[\epsilon(Fe)]=7.44$ (\citealp[Ruffoni et al.][]{ruffoni_ges};
\citealp[Bergemann et al.][]{bergemann2012}). Since we adopted
$\log[\epsilon(Fe)]=7.50$, this would explain part of the offset
observed for our \loggf s.
\begin{figure}
\includegraphics[width=9cm,bb=14 30 390 224]{comp_loggf_deg.pdf}
\caption{{\bf Left}: comparison between the VALD and our calibrated \loggf s.
Black and gray points represent atomic and molecular CN lines, respectively. {\bf Right}: comparison
between the NIST and our \loggf s values for those lines with NIST \loggf\ precision
better than 10\%. Only lines having $EW>5$m\AA\ in the solar spectrum are
reported here.
The offsets and standard deviations are computed as ``calibrated minus
reference" after rejecting the outliers (crossed points) with a 3$\sigma$
clipping.}
\label{comp_loggf}
\end{figure}
\subsection{On the accuracy of the calibrated \loggf
s}\label{sec_loggf_accuracy}
Although we verified by eye the good match between the spectra synthesized
with our final line list and the observed spectra, this does not guarantee
good accuracy of the astrophysically calibrated \loggf s for all the lines of
the line list.
In fact, the spectra exhibit many blended features composed of many lines
that cannot be fully resolved; for such blends the equation system
(\ref{eq_ew_manylines}) is underdetermined. On the other hand, weak lines
($EW\lesssim10$m\AA) are the ones most affected by imprecision of the continuum
placement or by blends. A difference of 0.5\% of the normalized flux between
the synthetic and the observed spectra can look like a ``good match" to
an eye inspection, but it leads to a very poor accuracy of the \loggf\ of a
line having an EW of a few m\AA. Another source of uncertainty comes from the abundance
correction procedure when the lines of one element are all weak in the Sun's
spectrum. With the Sun as reference point, when the lines are weak the
match with the synthetic spectrum is subject to the uncertainties discussed above,
so that the reference point becomes uncertain. For this reason,
the elemental abundances output by the code \Space\ (in its
present version) are limited to those
species for which the number and strength of the
lines are large enough for a good abundance estimation of the
five stars during the abundance correction process outlined in
Sec.~\ref{sec_corr_routine}. These elemental abundances are the ones
reported in Tab.~\ref{tab_5stars_abd}. The abundances of other elements are also
internally derived by \Space\ but are used as ``dummy" elements and
rejected at the end of the analysis.\\
There are further reasons why the calibrated \loggf s of some lines
may be not physically meaningful. We employed stellar atmosphere models
that are one-dimensional and the physical processes are assumed to take
place in Local Thermodynamic Equilibrium (LTE). This is an approximation that, in
some cases, is too rough to describe real stellar atmospheres. Some absorption lines
suffer from non-LTE effects, which can affect the observed EW.
Therefore, if we perform an astrophysical calibration of the \loggf s of
one of these lines under LTE assumptions, the calibrated \loggf\ value
can be significantly different from the real value (which expresses the
probability of the electronic transition) and the difference accounts for
the neglected non-LTE effect. This is not the right way to correct for non-LTE
effects and it may lead to systematic errors when stellar parameters
and the chemical abundances are derived.\\
During the \loggf s calibration and abundance correction procedure, we identified several
strong lines that cannot be correctly synthesized in our five standard
stars. The profiles of these synthetic lines have too strong (or too weak) wings
with respect to the observed lines in the spectra of the standard stars.
Some of these lines are reported in Tab.~\ref{tab_lines_reject} with a
qualitative goodness of fit of the wings (and strength) between the synthetic and observed
lines. For some lines (such as most of the \ion{H}{I}\ lines and the \ion{Na}{I}\ doublet at
$\sim$5890\AA) we changed the \loggf s (and also the damping constants for
the Paschen \ion{H}{I}\ lines) by hand in order to match the strength
of these lines in the solar spectrum. However, the match is often not satisfactory.
Most of the \ion{Mn}{I}\ lines show a line width that is too narrow in the synthetic spectra
with respect to the observed ones, and in the synthetic solar spectrum these lines
are too strong at the core, although their EWs seem to be close to the observed ones.
The \ion{Mn}{I}\ abundance is therefore rejected from the \Space\ results.
All these discrepancies can be due to non-LTE effects, 3D effects, and
hyperfine splitting of the lines that we do not take into account in the present
work.
\begin{table}[t]
\caption[]{Qualitative match of the wings of some intense lines between the synthetic
and the observed ones. The symbols ``+" and ``-" mean that the synthetic
line is too strong or too weak (respectively) with respect to the observed
one. ``Ok" means that the match is satisfactory. These lines may suffer from
non-LTE and/or 3D effects.}
\label{tab_lines_reject}
\vskip 0.3cm
\centering
\begin{tabular}{l|ccccc}
\hline
\noalign{\smallskip}
wavelength & Sun & Arcturus & Procyon & $\epsilon$ Eri& $\epsilon$ Vir\\
\hline
\noalign{\smallskip}
5269.537 \ion{Fe}{I} & ok & + & ok & ok & + \\
5328.039 \ion{Fe}{I} & ok & + & ok & ok & + \\
5371.489 \ion{Fe}{I} & - & + & ok & - & ok \\
5405.775 \ion{Fe}{I} & ok & + & ok & ok & + \\
5889.9510 \ion{Na}{I} & ok & + & - & ok & - \\
5895.9240 \ion{Na}{I} & ok & + & + & ok & - \\
8498.023 \ion{Ca}{II} & ok & - & ok & ok & - \\
8542.091 \ion{Ca}{II} & ok & - & ok & & - \\
8662.141 \ion{Ca}{II} & ok & - & ok & & - \\
8806.756 \ion{Mg}{I} & ok & - & ok & ok & - \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{table}
Because these ``poorly matching" lines can negatively affect the stellar parameter estimations, they
are rejected from the analysis performed by \Space.\\
However, the fact that the spectra synthesized with our line list with
calibrated \loggf s match reasonably well\footnote{The residuals between the
synthetic and the observed spectra have a standard deviation of
$\sim1-2$\% of the normalized flux.} the great majority of the spectral
range of our standard stars (which span a wide range in temperature and
gravity) and that most of the abundances derived
during the abundance correction process are close to the ones reported in
high-resolution studies (see Tab.~\ref{tab_5stars_abd}), suggests that our line
list under the LTE assumption can be employed to derive reliable stellar parameters and
chemical abundances in the \temp\ and \logg\ ranges covered by the
five calibration stars adopted in this work.
\begin{figure}
\includegraphics[width=9cm,bb=97 286 493 499]{comp_sun_61CygA.pdf}
\caption{Normalized spectra of the Sun (black line) and 61CygA (gray line).
The ``plus" symbols indicate the positions of the atomic lines.}
\label{comp_sun_61CygA}
\end{figure}
\subsection{The molecular lines}\label{sec_CN}
In the previous sections we discussed mainly the atomic lines, although
molecular lines of several species are present in the wavelength ranges
considered. During the preparation of this work we did several tests
to verify whether a \loggf s calibration of atomic and molecular lines together
was possible. We found that i) the calibration is not always
possible, and ii) when it is possible, the calibrated \loggf s are
physically meaningless and can only be used as dummy values.
The first point applies to the wavelength range 5212-6860\AA, where the
very high number of molecular lines of the species CN, CH, MgH, and TiO
generates a forest of weak lines in cool star spectra that makes
the identification of the lines impossible, so that the equation system (\ref{eq_ew_manylines})
becomes underdetermined.
In Fig.~\ref{comp_sun_61CygA} we compare the spectrum of the Sun (normalized by Hinkle et al.
\citealp{hinkle}, \temp=5777~K, \logg=4.44, \met=0.0~dex) and 61CygA (normalized by
Blanco-Cuaresma et
al. \citealp{blanco-cuaresma}, \temp=4374~K, \logg=4.63, \met=$-0.33$~dex).
The forest of weak molecular lines in 61CygA is
so dense that it creates a ``pseudo-continuum" that hides the real continuum
and prevents the correct estimation of the EWs of the atomic lines.
This convinced us that, at present, our method cannot calibrate \loggf s of
molecular lines in the interval 5212-6860\AA.
Besides, to calibrate \loggf s of atomic lines we need spectra ``free" of
molecular lines. Therefore we checked that the standard stars employed for the
\loggf\ calibration are not significantly affected by molecular lines.
This is true for dwarf stars having \temp$\gtrsim5000$~K
and for a giant star like Arcturus.\\
In the interval 8400-8924\AA\ the second point above applies:
here we can identify the CN lines and calibrate their \loggf s, but we strongly doubt
the accuracy of the calibration. When the original \loggf s by Kurucz
are applied, the synthetic CN lines of Arcturus are far too strong
with respect to the observed ones. Molecular lines are known to be
prone to NLTE effects
(Hinkle \& Lambert,\citealp{hinkle_molecule}; Schweitzer et al.
\citealp{schweitzer}; Plez, \citealp{plez}) and 3D effects \cite[Ivanauskas
et al.][]{ivanauskas}, and their strengths may not be correctly
reproduced under 1D LTE assumption.
In order to match the strengths of the CN lines observed on the Sun and on
Arcturus at the same time we needed to set the Arcturus \ion{C}{}\ and \ion{N}{}\ abundances
to [C/H]=[N/H]=$-0.34$~dex, which lie between the atomic
abundances by Ram{\'{\i}}rez \& Allende Prieto
\cite{ramirez} who found [C/H]=$-0.09$~dex and [N/H]=$-0.42$~dex
and the ones of Smith et al. \cite{smith} who found [C/H]=$-0.56$~dex and
[N/H]=$-0.28$~dex.\\
We believe that Arcturus' low \ion{C}{}\ and \ion{N}{}\ abundances found
by us merely counterbalance the 3D NLTE effects that we could not take into account.
Thus, the CN lines
in the wavelength interval 8400-8924\AA\ are employed by \Space\ as ``dummy"
lines and the results are rejected after the estimation process.
\section{The Equivalent Widths (EW) library}\label{sec_ewlibrary}
We built the EW library using the driver {\it ewfind} of the code MOOG,
which computes the expected EW of the absorption lines for a given stellar
atmosphere model. We employed the atmosphere models grid ATLAS9
by Castelli \& Kurucz \cite{castelli} updated to the
2012 version. The Castelli \& Kurucz grid has steps
in stellar parameters (500~K in \temp, 0.5 in \logg, and 0.5 in \met) that
are too wide for our needs. We linearly interpolated the models to obtain a
finer grid with steps of 200~K in \temp, 0.4 in \logg, and 0.2~dex in \met\ and
covering the ranges 3600-7400~K in \temp, 0.2 to 5.4 in \logg\footnote{The
Castelli \& Kurucz grid has a \logg\ upper limit of 5.0. To explore the
$\chi^2$ space around this limit (necessary for cool dwarf stars that can
have \logg$\sim4.8$, for instance) \Space\ needs to construct spectral models
with \logg$>$5.0. Thus, we extended the EW library to \logg=5.4 by
computing the EW of the lines with a linear extrapolation.},
and $-2.4$ to $+0.4$~dex in \met. In the following we always refer to this grid.
The microturbulence $\xi$ assigned to each atmosphere model is computed
as a function of \temp\ and \logg. This function is described in
Appendix~\ref{appx_microt}.\\
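The grid refinement rests on linear interpolation between tabulated models. The one-dimensional case (along \temp\ only) can be sketched as follows, assuming the interpolated quantity varies smoothly between grid nodes; the real grid also spans \logg\ and \met:

```python
def interpolate_ew(teff, grid_teffs, grid_ews):
    """Linearly interpolate a quantity (e.g., an EW) between the two
    tabulated atmosphere models bracketing the requested Teff.

    grid_teffs must be sorted in increasing order; this 1-D sketch
    illustrates the refinement applied along each grid dimension.
    """
    for (t0, t1), (e0, e1) in zip(zip(grid_teffs, grid_teffs[1:]),
                                  zip(grid_ews, grid_ews[1:])):
        if t0 <= teff <= t1:
            frac = (teff - t0) / (t1 - t0)
            return e0 + frac * (e1 - e0)
    raise ValueError("Teff outside the tabulated grid")
```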
Note that in Sec.~\ref{sec_method} we defined
the GCOG as a function of
the three variables \temp, \logg, and [El/H] (and not \met).
However, to construct the EW library we need the metallicity \met\ of the atmosphere
model. In fact, besides the \temp, \logg, and the abundance [El/H], the EW of a line
also depends on the opacity of the stellar atmosphere in which the line
forms, which is driven by atmospheric metallicity \met. This means that to compute the GCOG
of a line we must also define the metallicity of the atmosphere model,
making (in this specific case) the GCOG a function of four variables.
Therefore, we define the stellar parameter grid in the three dimensions \temp,
\logg, and \met\ plus a fourth dimension that accounts for the relative abundance
[El/M].
To construct the EW library, for every point of the grid and every line of our line list we computed
the EW of the lines at 6 different abundance enhancements with respect to
the nominal metallicity of the atmosphere model, that is (for a generic element
El) [El/M]$=-0.4$, $-0.2$, $0.0$, $+0.2$, $+0.4$, and $+0.6$. These 6 points
belong to the COG of the lines for every grid point.
The EW library so constructed contains the EWs of the lines synthesized as if they were
isolated. Because \Space\ constructs the spectrum model by summing up the absorption lines
with given EWs, the spectrum model is realistic if the lines are
isolated or, in case of blends, if the EWs of the involved lines are small
(i.e., weak line approximation).
Because these conditions are not always satisfied in a real spectrum,
in the following we discuss how to remove the weak line approximation.
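The construction of a spectrum model from tabulated EWs under the weak line approximation can be sketched as follows. The Gaussian profile and its FWHM are illustrative assumptions; the actual line profile adopted by \Space\ may differ:

```python
import math

def spectrum_model(wavelengths, lines, fwhm=0.2):
    """Normalized spectrum model obtained by subtracting line profiles
    with prescribed EWs from a flat continuum (weak line approximation).

    lines: list of (center_wavelength, EW) pairs, with the EWs in the
    same wavelength units as `wavelengths`.
    """
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    flux = []
    for w in wavelengths:
        depth = 0.0
        for center, ew in lines:
            # Gaussian of unit area scaled to the line's EW
            amplitude = ew / (sigma * math.sqrt(2.0 * math.pi))
            depth += amplitude * math.exp(-0.5 * ((w - center) / sigma) ** 2)
        flux.append(1.0 - depth)
    return flux
```

Because the profiles are simply summed, blends built this way obey equation~(\ref{eq_blend_instr}) exactly, which is realistic only for weak lines.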
\subsection{The weak line approximation problem}
Consider the case of two or more lines that are instrumentally blended but
physically isolated in a spectrum. We can write\\
\begin{equation}\label{eq_blend_instr}
EW_{tot}=\sum^n_i EW_i,
\end{equation}
where $EW_{tot}$ is the total EW of the blended feature, $EW_i$ are the EWs of
the lines computed as isolated, and $n$ the number of
lines considered.\\
Consider now the same lines as before, but physically blended.
If the lines have small EWs, then equation~(\ref{eq_blend_instr}) is still
(approximately) valid because their line opacity is
small and does not affect the local opacity significantly.
This is what we call weak line approximation. Under these conditions
we can use the $EW_i$ of the lines contained in
the EW library and,
assuming a line profile, subtract the lines from a normalized continuum to obtain a
spectrum model which approximates the synthetic spectrum well.
Unfortunately, the weak line approximation can rarely be applied because
strong and broad absorption lines are common in real spectra.
In case of strong lines, equation~(\ref{eq_blend_instr}) is no longer true,
because the opacities of the lines mutually diminish the flux absorbed by each of them,
and equation~(\ref{eq_blend_instr}) becomes the inequality\\
\begin{equation}\label{eq_blend_physic}
EW_{blend}<\sum^n_i EW_i,
\end{equation}
where $EW_{blend}$ indicates the total equivalent width of the blend and
$EW_i$ is as in equation~(\ref{eq_blend_instr}).
In this case, summing up the $EW_i$ of the lines contained in the EW
library would render a spectrum model where the blends are too strong with
respect to the synthetic ones.
This is shown in Fig.~\ref{EW_corr_example} where a blended
feature constructed using equation~(\ref{eq_blend_instr}) (dotted line)
with EWs from the EW library turns out to be much stronger than the
$EW_{blend}$ of the feature synthesized by MOOG (black line).
\begin{figure}[t]
\centering
\includegraphics[width=9cm,bb=83 484 277 669]{EW_corr_example.pdf}
\caption{Comparison between the synthetic spectrum with stellar parameters
\temp=4200~K, \logg=1.4, and \met=0.0~dex (the black solid line)
and the corresponding spectrum model
(the gray solid line) constructed by \Space\ using the EWs corrected
for the opacity of the neighbor lines as described in Sec.~\ref{sec_corr_opac}.
The dotted line is the spectrum model constructed using the EW of the lines
computed as if they were isolated (i.e., no correction for the opacity of
the neighbor lines). Plus, cross, and triangle symbols indicate the
position of the \ion{Fe}{}, \ion{V}{}, and \ion{Ni}{}\ lines, respectively.}
\label{EW_corr_example}
\end{figure}
To correctly
reproduce the blend, the $EW_i$ of the EW library
must be corrected for the opacity of the neighboring lines, so that the
EWs employed to construct the spectrum model are smaller than those of the
corresponding isolated lines.
These corrected quantities, which we call ``equivalent widths corrected
for the opacity of the neighboring lines" ($EW_i^c$), are smaller than $EW_i$
and satisfy the equation\\
\begin{equation}\label{eq_blend_corr}
EW_{blend}=\sum^n_i EW_i^c.
\end{equation}
The quantity $EW_i^c$ cannot be computed with MOOG. In fact, to know the
quantity $EW^c$ we need to compute the fraction of the contribution
function due to each absorber present in the stellar atmosphere
(continuum, atoms, molecules) that forms the blend. These fractions of the contribution
function are not usually computed by spectral synthesis codes. This information is
lost when the spectral synthesis code computes the
total opacity $\kappa_{\lambda}$ at wavelength $\lambda$
by summing up the opacities of all the absorbers to obtain the optical depth
$\tau_{\lambda}$.
The way to compute the fraction of the contribution function is discussed
in Sec.~\ref{sec_contrib_lines} and, although a rigorous solution was found, it cannot be
used to correct the EWs of the library. An approximate solution must be
adopted, and this is outlined in Sec.~\ref{sec_corr_opac}.
\subsection{Separating the contributions of each
absorber}\label{sec_contrib_lines}
In the attempt to obtain the quantity $EW^c$,
we tackled and solved the problem of computing the
fractions of the contribution function due to each absorber individually.
Unfortunately, the result turned out to be inapplicable for our purpose: we can
compute the flux absorbed by each absorber but this cannot be written in terms of EW.
To explain this apparent paradox, we here outline the general result
and point the reader to Appendix~\ref{appendix_cf} for the full detailed solution.
Although the problem concerns blends, the simple case of one isolated line
is also illustrative of the case of multiple absorbers, as in blends. In fact, in the case of one line
the opacity is due to two absorbers: the continuum and the line.
In Fig.~\ref{flux_level_8446.388} we show an absorption line
(gray solid line) and the continuum in the absence of the absorption line (dotted line).
The EW of this line, as commonly defined, is represented by the area
between the gray and the dotted line. In this way, the EW does not
represent the total flux absorbed by the line, because the continuum (dotted
line) has been computed in the absence of the line, i.e., the opacity of the
line has been neglected. When the line opacity is taken into account,
the continuum level is higher (the dashed line of
Fig.~\ref{flux_level_8446.388}) because it absorbs less radiation.
In fact, when the line is present, its
opacity diminishes the intensity of the radiation and the continuum absorber is left
with less radiation to absorb. Therefore, the real flux absorbed by the line
is represented by the area included between the gray and the dashed lines of
Fig.~\ref{flux_level_8446.388}, which is much bigger than the EW as
usually defined. This proves that, although we can
precisely determine the real quantity of flux absorbed by a line (in a
synthetic spectrum), we still miss the solution of our problem.
In fact, to reconstruct a spectrum model by summing up
the absorbed fluxes we need to consider the continuum level at any
wavelength. At the stage of development of our work, the variation of the
continuum level as a function of the strength of the lines looks too
complicated to be implemented. Therefore we must follow another method to
approximate the $EW^c$ quantities and apply
equation~(\ref{eq_blend_corr}) to construct the spectrum models.
\begin{figure}[t]
\centering
\includegraphics[width=9cm,bb=89 484 277 669]{flux_level_8446_388.pdf}
\caption{Emerging (synthetic) fluxes obtained when opacities of the continuum and the
line are accounted for separately.
The gray solid line is the synthetic spectrum of the line \ion{Fe}{I}\ at
8446.388\AA, i.e. the emerging flux when the line and continuum opacities are
accounted for. The black solid line represents the emerging flux
when the emissions alone are accounted for.
The dotted line is the level of the continuum when the continuum opacity
alone is accounted for. The dashed line is the level of the continuum across the
line when both line and continuum opacities are accounted for.
The y-axis is expressed as logarithm (base
10) of the normalized flux.}
\label{flux_level_8446.388}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=6cm,angle=0,bb=150 73 630 551]{ew_5445.pdf}
\caption{Two dimensional section of the General COG of the \ion{Fe}{I}\ line at 5445.042\AA\ as a function of
\temp\ and \logg. The \met\ and Fe abundances have been fixed at 0.0~dex.}
\label{ew_5445}
\end{figure}
\subsection{Approximated correction for the opacity of the neighbor lines}\label{sec_corr_opac}
The method to approximate the quantity $EW^c$ is based on the idea that when
the first derivative of the COG (expressed as EW as a function of abundance)
of a line inside a blend is small, its contribution to the absorbed flux
(i.e., its EW) is small too, similarly to what happens for the isolated
line. The method, outlined in the following example, makes use of
the EWs of synthesized isolated lines and blended features.
The EWs of isolated lines are computed with the MOOG driver {\it ewfind},
which numerically integrates the depression of the synthetic line with
respect to the continuum. Because this driver does not handle more than one
line at a time, to compute the EW of blends we need to synthesize the blend
and numerically integrate the depression\footnote{For a faster procedure we
modified the driver {\it ewfind} to make it handle more than one line at a
time.}.\\
Consider one line in a blend composed of two (or more) lines indexed with $i$. The lines belong to
different elements $El_i$. By using MOOG we compute the EWs of
the lines as if they were isolated for 6 different abundances
[$El_i$/M]=$-0.4$, $-0.2$, 0.0, +0.2, +0.4, +0.6~dex
so that we have six points of the COG of the line (we call it COG$^i_{iso}$).
Similarly, for every line we synthesize the whole blend and we measure the
total equivalent width of the blend $EW^i_{blend}$.
This is done by synthesizing the blend in which all the lines have constant
[$El_i$/M]=0.0 except for the $i$-th line, which assumes six different abundances
[$El_i$/M]. The six EWs of the blend so obtained represent the
COG of the $i$-th line in the blend (we call them
COG$^i_{blend}$). If the opacity of one line is not affected by the other line,
then COG$^i_{iso}$=COG$^i_{blend}$, otherwise COG$^i_{iso}\ne$COG$^i_{blend}$. In
particular, if the first derivative of COG$^i_{blend}$ is smaller than the
one of COG$^i_{iso}$, it means that the contribution of the
$i$-th line to the absorbed flux of the blend (this is the $EW^c$ quantity we look for) is
smaller than the one absorbed when the $i$-th line is isolated.
Thus, the quantity COG$^i_{blend}$ can be used to approximate $EW^{c}$ as
follows:
\begin{enumerate}
\item Compute the first derivatives of the curves-of-growth $\delta COG^i_{iso}$ and
$\delta COG^i_{blend}$.
\item Perform a first correction of $EW^i_{iso}$ as follows\\
$$
EW^{i,c}_{iso}=EW^i_{iso}\cdot\frac{\delta COG^i_{blend}}{\delta COG^i_{iso}}
$$
\item Under the assumption that the ratio of $EW^{i,c}_{iso}$ between the lines is
conserved in the blend, the contribution to the absorbed flux of the
$i$-th line in the blend is approximated as\\
$$
EW^{i,c}=EW^{i,c}_{iso}\cdot\frac{EW^i_{blend}}{\sum_i EW^{i,c}_{iso}}.
$$
\end{enumerate}
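The two-step correction above can be sketched in code as follows (an
illustrative Python version; in \Space\ the inputs are the MOOG-computed
EWs and COG derivatives described above):

```python
def corrected_ews(ew_iso, d_cog_iso, d_cog_blend, ew_blend):
    """Approximate the contribution EW^c of each line in a blend.

    ew_iso      -- isolated-line EWs at the reference abundance
    d_cog_iso   -- first derivatives of the isolated COGs
    d_cog_blend -- first derivatives of the in-blend COGs
    ew_blend    -- total EW of the synthesized blend (scalar)
    """
    # Step 2: scale each isolated EW by the ratio of the COG derivatives.
    ew_iso_c = [ew * db / di
                for ew, di, db in zip(ew_iso, d_cog_iso, d_cog_blend)]
    # Step 3: redistribute the blend's total EW, preserving the ratios.
    total = sum(ew_iso_c)
    return [ew_c * ew_blend / total for ew_c in ew_iso_c]
```

By construction, the corrected contributions sum to the total EW of the blend.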
In the general case of a blend of $n$ lines and $m$ elements with $m<n$, there
are two or more lines that belong to the same element $El$. In this case, the
strengths of the $El$ lines would change together when we change the
abundance [$El$/M] during the synthesis of the blend, and this must be avoided
in order to evaluate the COG$^i_{blend}$ of the target line.
This problem is solved by changing
the \loggf\ (and not the abundance) of the line, so that the target line is
the only line the strength of which changes in the blend.\\
The corrected values $EW^{i,c}$ are computed for all the EWs contained in
the EW library considering any line closer than $\Delta \lambda=0.5$\AA\
to the target line. Fig.~\ref{EW_corr_example} shows the improvement
obtained for a blend when the corrected values $EW^c$s are used to construct the spectrum model
(gray solid line) with respect to the model constructed with EWs (dotted
line).\\
The limit of 0.5\AA\ is satisfactory for most of the lines. For a few intense
and broad lines (for instance, the \ion{Ti}{II}\ at 5226.538\AA), which can affect lines
farther than 0.5\AA, a larger limit would be necessary. At this stage of
development, \Space\ neglects these lines during the analysis.
\begin{figure*}[t]
\centering
\includegraphics[width=18cm,bb=91 342 545 614]{GCOG_TGME_sections_gray.pdf}
\caption{One dimensional sections of the GCOGs of several absorption lines in the range
5212-5222\AA\ as a function of \temp, \logg, \met, and abundance [El/M].
The three (out of four) fixed stellar parameters are reported in the
panels. The gray points connected with gray lines represent the $EW^{i,c}$s of the
lines. The black lines are one dimensional sections of the polynomials GCOGs of the same
absorption lines.}
\label{GCOG_TGME_sections}
\end{figure*}
\section{The General Curve-Of-Growth (GCOG) library}
The COG of a line is the function that gives its
EW as a function of the abundance
of the element the line belongs to.
This function can be recovered from the EW
library where, for each absorption line, we stored six points
of the COG between the [El/M] values $-0.4$ and $+0.6$~dex in steps of
0.2~dex.
The EW library also contains the EWs that the lines
assume over a grid spanning a wide range in the stellar parameters \temp,
\logg, and \met.
Because the EW of a line changes not only as a function of the abundance
but also as a function of \temp\ and \logg, we extend the concept of COG.
We call {\it General Curve-of-Growth} (GCOG) the function that
describes the EW of a line as function of the variables \temp, \logg, and
[El/H], where [El/H] represents the abundance of the generic element El the
line belongs to (see Fig.~\ref{ew_5445}).
Unfortunately, the GCOG has no analytical form; therefore, to obtain the EW
of a line we must approximate the GCOG with a polynomial function in the
parameter space.
In principle the GCOG has a three dimensional domain (\temp, \logg,
[El/H]). As already reported in Sec.\ref{sec_ewlibrary}, because we rely on a grid of stellar
atmosphere models the opacity of which depends on \met, we must construct the polynomials
in a four-dimensional space (\temp, \logg, \met, [El/M]). We refer to these
functions as ``polynomial GCOGs", and they are constructed to approximate the
GCOG of the lines. Thanks to the polynomial GCOGs, \Space\ can compute the
expected EW of any line at any point of the parameter space (\temp, \logg,
\met, [El/H]) removing in this way the discontinuity of the grid in the EW library.
\subsection{The polynomial GCOGs}\label{sec_poly_GCOG}
We fit a polynomial GCOG for every absorption line in the parameter space.
Because of the difficulties in fitting a
function over points covering the whole parameter space, the polynomial
GCOGs fit the EWs that the line assumes over a limited stellar parameter
interval surrounding the points of the grid. The width of this interval is 800~K in \temp, 1.6 in \logg,
0.8~dex in \met, and 1.0~dex in [El/M], which includes five grid points for the
first three dimensions and six for the last dimension.
For instance, for the grid
point \temp=4200~K, \logg=1.4, and \met=0.0 the polynomial GCOG fits the EWs
that the line has at \temp=3800, 4000, 4200, 4400, and 4600~K, \logg=0.6, 1.0,
1.4, 1.8, and 2.2, \met=$-0.4$, $-0.2$, 0.0, $+0.2$, and $+0.4$~dex, and the six
abundance points [El/M]=$-0.4$, $-0.2$, 0.0, $+0.2$, $+0.4$, and $+0.6$~dex
(Fig.~\ref{GCOG_TGME_sections}). In total, every polynomial GCOG
fits 750 EWs. The polynomial GCOG function has the form\\
\begin{equation}\label{poly_GCOG}
EW_{poly}=\sum_{i,j,k,l}^{i+j+k+l \leq 4} a_{ijkl} (\mbox{\temp})^i (\mbox{\logg})^j (\mbox{\met})^k (\mbox{[El/M]})^l.
\end{equation}
So defined, the polynomial GCOG has 70 coefficients $a_{ijkl}$, computed with a
routine that minimizes the $\chi^2$ between the polynomial and the
given EWs. The residuals between the $EW_{poly}$ (given by the polynomial
GCOG) and the EWs of the library are shown in Fig.~\ref{GCOG_residuals}.
The residuals are on average 2.6\% of the expected EW, which is
equivalent to an error of $\sim 0.01$~dex in chemical abundance.
Fig.~\ref{GCOG_TGME_sections} shows the polynomial GCOG of 50 absorption
lines compared with the expected EW plotted as a function of \temp,
\logg,\ \met, and abundance [El/M].
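As a cross-check of the parametrization, the 70 coefficients correspond
exactly to the monomials of total degree $\leq4$ in the four variables
(\temp, \logg, \met, [El/M]). A short sketch that enumerates and evaluates
them (illustrative Python, not the actual fitting code):

```python
from itertools import product

def gcog_terms(max_deg=4, n_vars=4):
    """Exponent tuples (i, j, k, l) with i+j+k+l <= max_deg for a
    polynomial in (Teff, logg, [M/H], [El/M])."""
    return [e for e in product(range(max_deg + 1), repeat=n_vars)
            if sum(e) <= max_deg]

def eval_gcog(coeffs, terms, x):
    """Evaluate EW_poly as the sum of coefficient * monomial."""
    ew = 0.0
    for a, e in zip(coeffs, terms):
        term = a
        for xv, ev in zip(x, e):
            term *= xv ** ev
        ew += term
    return ew
```

Counting the exponent tuples confirms the 70 coefficients quoted above.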
\section{The \Space\ code}\label{sec_space_code}
In the following we outline the main structure of the code. This is not intended
to be a user manual. A detailed tutorial on how to use \Space\ and the full
description of the available functionalities of the code are provided
together with the code.\\
The \Space\ code is written in FORTRAN95. It processes one spectrum per
run. The observed spectrum must be wavelength calibrated, continuum
normalized and radial velocity corrected.
When launched, \Space\ reads the parameter file that must include the name
of the spectrum to process, the address of the GCOG library, a first guess
of the FWHM, and other optional settings.
In the following we
outline the algorithm that summarizes the \Space\ analysis procedure,
specifying the most important routines, which we explain later.
This algorithm carries out the following steps:\\
\begin{enumerate}
\item Upload the observed spectrum.
\item Make a first rough estimation of the stellar parameters \temp, \logg,
and \met. (This is performed by the ``starting
point routine".)
\item Find the closest grid point to the estimated \temp, \logg, and \met\
and upload the corresponding polynomial GCOG.
\item Derive \temp, \logg, and \met. (This is performed by the TGM routine.)
\item Find the closest grid point to the derived \temp, \logg, and \met.
If it is different from the previous grid point, then upload the polynomial
GCOG of the new grid point and go to step~4, otherwise continue.
\item Re-normalize the observed spectrum. (This is performed by the
re-normalization routine.)
\item Derive \temp, \logg, \met\ from the re-normalized spectrum. (This is performed by the TGM routine.)
\item Find the closest grid point to the estimated \temp, \logg, and \met.
If it is different from the previous grid point, then upload the polynomial
GCOG of the new grid point and go to step~6, otherwise continue.
\item Derive the chemical abundances [El/M]. (This is performed by the ABD routine.)
\item Go to step~6 and repeat until convergence.
\item Derive the confidence limits for \temp, \logg, \met, and [El/M]
(optional).
\item End the process and write out the results.
\end{enumerate}
Every step is composed of routines and sub-routines. The
most important ones are described in the following.\\
This algorithm can be executed with or without step~10.
This is controlled by the keyword {\it ABD\_loop} (abundance loop) that can be
set by the user. When {\it ABD\_loop} is switched on, step~10 is executed,
otherwise it is skipped.
The two settings show significant differences when run on real
and synthetic spectra. This is discussed in Sec.\ref{sec_validation}.
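The grid-hopping logic of steps~3--5 (and analogously steps~6--8) can be
sketched as follows; the three callables are placeholders for the
corresponding \Space\ routines and are purely illustrative:

```python
def fit_with_grid_hopping(spectrum, estimate_tgm, closest_grid_point,
                          load_gcog):
    """Iterate the TGM fit, reloading the polynomial GCOGs whenever the
    solution moves to a new grid point, until the grid point is stable."""
    params = estimate_tgm(spectrum, gcog=None)       # rough starting point
    grid_point = closest_grid_point(params)
    while True:
        gcog = load_gcog(grid_point)                 # upload polynomial GCOG
        params = estimate_tgm(spectrum, gcog=gcog)   # TGM routine
        new_point = closest_grid_point(params)
        if new_point == grid_point:                  # stable grid cell
            return params
        grid_point = new_point
```

The loop terminates when two consecutive solutions fall in the same grid
cell, so the adopted GCOGs are always the ones closest to the final solution.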
\begin{figure}[t]
\centering
\includegraphics[width=9cm,bb=24 25 403 239]{GCOG_residuals_deg.pdf}
\caption{{\bf Gray points}: residuals between the EW given by the polynomial GCOG and the
EW of the library as a function of EW for 100 absorption lines (in the range
5212-5235\AA) at the grid point \temp=4200~K, \logg=1.4 and \met=0.0~dex.
{\bf Black points}: as before but for the line \ion{Fe}{I}\ at 5231.395\AA\ alone.
The residuals are normalized for the
EW so that the values in the y-axis and the statistic in the panel
express the errors of the polynomial GCOG normalized to the expected EW.
}
\label{GCOG_residuals}
\end{figure}
\subsection{The ``make model" routine}\label{sec_makemodel}
To derive the stellar parameters and the chemical abundances,
\Space\ constructs several spectrum models and compares them to the observed spectrum,
looking for the model that renders the minimum $\chi^2$.
The routine that constructs the model (called the ``make model" routine) is
therefore particularly important and it follows this algorithm:\\
\begin{enumerate}
\item Set the initial spectrum model with the same number of
pixels and wavelengths of the observed spectrum and initial flux normalized
to one. We call it the ``working model".
\item Consider the stellar parameter with which the model must be constructed.
\item Consider the first absorption line of the line list.
\item Compute the $EW^c$ of the absorption line by using
its polynomial GCOG.
\item Compute the strength of the line profile at every pixel around the center of the
line and subtract it from the working model. The result is the
new working model.
\item Consider the next line and go to step~4 until the last
line has been reproduced.
\end{enumerate}
The line profile adopted is a Voigt function approximated with the
implementation by McLean et al. \cite{mclean}. We modified this
implementation so that the line profile becomes broader as a
function of \logg\ and EW with a law that can be different for some special lines
(for instance, lines with large damping constants).
For a detailed description of the line profile adopted we refer the reader to
Appendix~\ref{app_voigt}.
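Schematically, the construction of the working model amounts to subtracting
one scaled profile per line from the continuum. In the simplified sketch
below, a Gaussian of equivalent width $EW^c$ stands in for the modified
Voigt profile actually used by \Space:

```python
import math

def make_model(wavelengths, lines, fwhm):
    """Build a normalized spectrum model; `lines` is a list of
    (center, ew_c) pairs with ew_c in wavelength units."""
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    model = [1.0] * len(wavelengths)       # step 1: flux-one working model
    for center, ew_c in lines:             # steps 3-6: loop over the lines
        for i, wl in enumerate(wavelengths):
            depth = (ew_c / (sigma * math.sqrt(2.0 * math.pi))
                     * math.exp(-0.5 * ((wl - center) / sigma) ** 2))
            model[i] -= depth              # step 5: subtract the profile
    return model
```

Because the profile is normalized to unit area, the integrated depression of
each subtracted line equals its $EW^c$.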
\subsection{The ``starting point" routine}
This routine finds the first rough estimation of the stellar parameters.
It uses the ``TGM routine" outlined in the next section, with the
difference that the polynomial GCOG employed has been computed not over a
small volume of the parameter space (as explained in Sec.~\ref{sec_poly_GCOG})
but over the whole parameter space. This polynomial GCOG has larger errors
with respect to the other polynomials contained in the GCOG library, but it
permits a rough and fast estimation of the parameters, which is used as
starting point by the next TGM routine.
\subsection{The ``TGM routine"}
This part of the code is responsible for deriving the stellar parameters.
It employs the Levenberg-Marquardt method to minimize the $\chi^2$ between
the models and the observed spectrum in the parameter space (\temp,
\logg, \met). At the fourth
step of the main algorithm, the TGM routine uses the observed spectrum as
provided by the user, while at the seventh step it uses the observed spectrum
after the re-normalization (explained in section~\ref{sec_renorm}).
Because the polynomial GCOGs were computed over an
interval of stellar parameters that covers 800~K in \temp, 1.6 in \logg, and
0.8~dex in \met\ (see Sec.\ref{sec_poly_GCOG}) centered on a grid point (we
call it the ``central point"), their reliability decreases with the distance
from the central point.
When the stellar parameters given by the TGM routine are close to a grid
point that is not the central point, the TGM routine stops,
uploads the polynomial GCOG for this new grid point, and repeats.
In this way the routine always finds the minimum $\chi^2$ close to the
central point, where the EWs provided by the polynomial GCOG are the most
reliable.
For some spectra (e.g., spectra with very low S/N, spectra of very cool/very
hot stars, or spectra of stars with high rotational velocity \Vrot)
the TGM routine may try to move beyond the extension of the GCOG library. In
this case \Space\ writes a warning message and exits with no results.\\
Apart from the stellar parameters, the TGM routine also estimates two other
parameters: the radial velocity and the FWHM of the instrumental profile.
Because \Space\ only processes radial-velocity-corrected spectra, the radial
velocity estimation of \Space\ is not a real measurement, but it is an internal setting
to improve the match between the model spectrum and the observed spectrum.
Usually this quantity amounts to a small fraction of FWHM.
Similarly, \Space\ searches for the FWHM that best matches the instrumental
profile.
The optimization of the FWHM gives \Space\ some flexibility in estimating
the stellar parameters for stars with a rotational velocity \Vrot\ different
from zero. However, the Voigt profile adopted by
\Space\ cannot properly fit the shape of the lines of stars with high \Vrot,
and the \Vrot\ limit beyond which the line profile becomes inadequate depends
on the spectral resolution. This limit is higher for low-resolution spectra
in which the instrumental line profile dominates over the physical profile of
the line.
\subsection{The re-normalization routine}\label{sec_renorm}
As stressed before, \Space\ can only handle flux-normalized spectra.
However, it can perform a re-normalization to adjust the continuum
level. This operation may be unnecessary
for high-resolution and high-S/N spectra, where the continuum is
clearly detectable and a normalization done with the commonly used IRAF task
{\it continuum} is usually satisfactory.
For low-resolution, low-S/N spectra, and in particular for spectra crowded
with lines, the continuum cannot be clearly identified. At low resolution, the
lines are instrumentally blended and they create a pseudo-continuum that can lie
under the real continuum. In this case, the IRAF task {\it continuum} cannot
correctly estimate the continuum and returns too low a continuum level, leading to an
underestimation of the metallicity (as well as of the other stellar parameters,
since they are correlated). As an example, in Fig.~\ref{comp_spectra_SN100} we show
a low- and a high-resolution spectrum (synthetic) of a high-metallicity star, and the result of the
continuum normalization performed with the IRAF task {\it continuum}, using
a spline function and {\it low\_rej=1} and {\it high\_rej=4}, settings that
take the presence of absorption lines in the spectra into account.
\begin{figure}[t]
\centering
{\includegraphics[width=9cm,bb=66 287 503 562]{comp_spectra_SN100.pdf}}
\caption{Synthetic spectrum with the stellar parameters \temp=4212~K, \logg=1.8,
\met=0.0~dex, and S/N=100 at resolution R=20\,000 (top) and R=5\,000
(bottom). The black line is the correctly normalized spectrum (synthesized by
MOOG and noise added),
while the dashed line is the same spectrum after normalization with the task
{\it continuum} of IRAF.}
\label{comp_spectra_SN100}
{\includegraphics[width=9cm,bb=69 286 503 562]{plot_normaliz.pdf}}
\caption{Comparison between the IRAF and \Space\ normalization.
{\bf Top}: Solid and dashed black lines are as in bottom panel of
Fig.~\ref{comp_spectra_SN100}. The gray line was obtained by processing the
dashed line (IRAF normalized spectrum) with the \Space\ re-normalization
routine. {\bf Bottom}: residuals between the IRAF
normalized (dashed line) and \Space\ normalized (gray line) spectra with
respect to the correctly normalized spectrum.}
\label{plot_normaliz}
\end{figure}
Because of the instrumental blending of the lines, the low-resolution spectrum
suffers from a too low continuum estimation and its normalized flux is too high.
When the same {\it continuum} settings are used for very low S/N spectra,
the normalized spectra suffer the opposite problem:
the noise dominates the spectrum and the flux distribution becomes nearly
symmetric with respect to the continuum.
(This commonly happens in spectroscopic surveys when the number of spectra
to normalize is large and the parameters of the task {\it continuum}
cannot be set by hand for every spectrum.)
In this case, the settings {\it low\_rej=1}
and {\it high\_rej=4} are not appropriate and cause a too high estimated continuum
(and an overestimation of the metallicity).
To fix this problem, \Space\ re-normalizes the observed spectrum.
In the following, $f_{obs}$ indicates the normalized flux of the
observed spectrum and $f_{model}$ indicates the normalized flux of the spectrum
model. The re-normalization routine works as follows:\\
\begin{enumerate}
\item Consider the $i$-th pixel at wavelength $\lambda(i)$ and the
$n$ pixels for which $|\lambda-\lambda(i)|<30$\AA.
\item Compute the average of the observed flux $\overline{f_{obs}}$, the average
of the residuals $\overline{(f_{obs}-f_{model})}$ and their standard deviation
$\sigma_{res}$ of the $n$ pixels defined before.
\item From the set of $n$ pixels defined in step~1, reject the pixels with
$f_{obs}\leq\overline{f_{obs}}-2\cdot\sigma_{res}$. The new set of
pixels now contains $m\leq n$ pixels.
\item With the new set of $m$ pixels, compute the new average of the observed
flux $\overline{f_{obs}}$, the average of the model
$\overline{f_{model}}$.
\item Compute the continuum level at the $i$-th pixel as
$$
cont(i)=1.+(\overline{f_{obs}}-\overline{f_{model}})
$$
\item Re-normalize the $i$-th pixel of the observed spectrum as
$$
f_{renorm}(i)=\frac{f_{obs}(i)}{cont(i)}
$$
\item Move to the next $i$-th pixel and go to step~1 until all the pixels
have been processed.
\end{enumerate}
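The routine above can be sketched as follows (an illustrative Python
version; the FORTRAN implementation may differ in details such as the
treatment of the spectrum edges):

```python
def renormalize(wavelengths, f_obs, f_model, window=30.0):
    """Re-normalize f_obs using the sigma-clipped local offset between
    observed and model fluxes (steps 1-7)."""
    n = len(f_obs)
    f_renorm = []
    for i in range(n):
        # step 1: pixels within the running window around pixel i
        idx = [j for j in range(n)
               if abs(wavelengths[j] - wavelengths[i]) < window]
        # step 2: local statistics of the residuals
        res = [f_obs[j] - f_model[j] for j in idx]
        mean_obs = sum(f_obs[j] for j in idx) / len(idx)
        mean_res = sum(res) / len(res)
        sigma = (sum((r - mean_res) ** 2 for r in res) / len(res)) ** 0.5
        # step 3: reject pixels more than 2 sigma below the local mean flux
        keep = [j for j in idx if f_obs[j] > mean_obs - 2.0 * sigma]
        # steps 4-5: local continuum from the clipped means
        if keep:
            cont = 1.0 + (sum(f_obs[j] for j in keep) / len(keep)
                          - sum(f_model[j] for j in keep) / len(keep))
        else:
            cont = 1.0
        f_renorm.append(f_obs[i] / cont)   # step 6
    return f_renorm
```

The clipping in step~3 makes the continuum estimate insensitive to the
deepest absorption features inside the window.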
In Fig.~\ref{plot_normaliz} we show the synthetic spectrum at R=5\,000 (also seen
in the bottom panel of Fig.~\ref{comp_spectra_SN100}) together with
the IRAF normalized spectrum and the spectrum after the
re-normalization. The re-normalization routine employed by \Space\ can greatly
decrease the offset caused by the IRAF normalization.
\subsection{The ABD routine}
The routine to derive the chemical abundances (called ``the ABD routine")
works similarly to the TGM routine. The ABD routine is run after the TGM
routine. The abundances [El/M] are varied by a minimization routine until
the $\chi^2$ between the model and the observed spectrum is minimized.
\subsection{Internal errors estimation}\label{sec_error_est}
\Space\ can estimate the expected errors for the parameters \temp, \logg, and
the chemical abundances [El/M]. The routine dedicated to this task finds the
confidence limits of the stellar parameters intended as the region of the parameter
space {\it that contains a certain percentage of the probability distribution
function} \citep{nr}. If we want to determine the extension of the region
that has a 68\% of probability to include the resulting parameter (say
\temp$^{best}$) with the lowest $\chi^2$ ($\chi^2_{best}$), this region is an interval
that has an upper and a lower limit \temp$^{up}$
and \temp$^{low}$ with $\chi^2=\chi^2_{best}+1$. Because
the stellar parameters are correlated, the confidence limits of one
parameter are a function of the others, so that the upper and lower confidence limits of
\temp\ as a function of \logg\ correspond to the largest and smallest values
of \temp\ with $\chi^2=\chi^2_{best}+\Delta\chi^2$ where $\Delta\chi^2$
depends on the number of degrees of freedom \citep{nr}.
The higher the number of degrees of freedom, the larger the confidence limits.
Because the determination of
these limits is computationally expensive, we limited this determination
to three variables, namely \temp, \logg, and \met\ for these three stellar
parameters, and \temp, \logg, and [El/M] for the
chemical abundance of the generic element El. This means that we determine the upper and
lower limits of any parameter at $\chi^2=\chi^2_{best}+3.53$.
These confidence limits must be considered as internal errors (therefore smaller than the real
errors) because they do not take external errors into account like
the mismatch between the atmosphere model and the real stellar atmosphere,
uncertainties in the atomic transition probability, in the continuum
placements and other uncertainties in the spectrum model construction.\\
The error estimation is an option left to the user because it is computationally
expensive: when done, it can easily double (or more) the time required to process
the same spectrum without error estimation.
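For a single parameter sampled on a grid, the search for the confidence
limits reduces to finding the extreme values whose $\chi^2$ stays within
$\chi^2_{best}+\Delta\chi^2$. A minimal sketch with $\Delta\chi^2=3.53$,
using a hypothetical $\chi^2$ function:

```python
def confidence_limits(chi2, grid, delta=3.53):
    """Lower and upper confidence limits of one parameter.  `chi2` maps
    a parameter value to its chi^2 minimized over the other parameters."""
    chi2_best = min(chi2(p) for p in grid)
    # keep the parameter values inside the delta-chi^2 region
    inside = [p for p in grid if chi2(p) <= chi2_best + delta]
    return min(inside), max(inside)
```

In \Space\ the function `chi2` is evaluated by the fit itself, which is why
the error estimation is computationally expensive.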
\subsection{Output results}
At the end of the process, \Space\ writes four output files. One file called
``space\_TGM\_ABD.dat" contains the resulting stellar parameters \temp, \logg, and chemical
abundances with their confidence intervals, the number
of lines measured, and a few other parameters like the $\chi^2$ of the best
matching spectrum model, internal RV and FWHM. The second output file
called ``space\_model.dat" contains a table the columns of which correspond to i)
the pixel wavelength of the observed spectrum, ii) the flux of the observed
spectrum,
iii) the flux of the observed spectrum after re-normalization, iv) the flux of the
best matching model, v) the continuum level adopted for the
re-normalization, and vi) the
weights of the pixels (rejected pixels have weight=0). The third output file
``space\_ew\_meas.txt" contains the EWs employed by \Space\ to construct
the model. We want to stress that {\em these are not the EWs of the
absorption lines} but merely the $EW^{c}$ (EW corrected for the opacity of the
neighboring lines) computed from the polynomial GCOG during
the construction of the best matching model.
The fourth output file
``space\_msg.txt" contains the warning messages generated when something goes wrong
during the analysis.
\begin{table*}[th]
\caption[]{Ages, iron abundance ranges, and coefficients $m$ and $q$ of the
linear law $[El/Fe]=m\cdot[Fe/H]+q$ that expresses the chemical abundances
used to synthesize the spectra of the three mock stellar populations.}
\label{tab_three_pop}
\vskip 0.3cm
\centering
\begin{tabular}{ll|c|cc|cc|cc|cc}
\hline
\noalign{\smallskip}
mock & age & [Fe/H] range & \multicolumn{2}{c|}{\ion{C}{}, \ion{N}{}, \ion{O}{}} &
\multicolumn{2}{c|}{\ion{Mg}{}} & \multicolumn{2}{c|}{\ion{Al}{}, \ion{Si}{}, \ion{Ca}{}, \ion{Ti}{}} &
\multicolumn{2}{c}{other elements}\\
population &(Gyr)& (dex) & $m$ & $q$ & $m$ & $q$ & $m$ & $q$ &
$m$ & $q$\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
thin disk stars& 5 & $0.0\leq$[Fe/H]$\leq+0.2$& 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0
& 0.0\\
& & $-0.8<$[Fe/H]$<0.0$& $-0.4$ & 0.0 & $-0.3$ & 0.0 & $-0.15$
& 0.0 & 0.0 & 0.0\\
\hline
\noalign{\smallskip}
halo/thick disk stars& 10 & $-1.0<$[Fe/H]$<-0.2$& $-0.5$ & $+0.1$ & $-0.3$ &
$+0.1$ & $-0.2$ & $+0.1$ & 0.0 & 0.0\\
& & $-2.2\leq$[Fe/H]$\leq-1.0$& $0.0$ &$+0.6$ & $0.0$ & $+0.4$ &
$0.0$ & $+0.3$ & 0.0 & 0.0\\
\hline
\noalign{\smallskip}
accreted stars & 10 & $-2.2\leq$[Fe/H]$\leq-1.0$& 0.0 & 0.0 & 0.0 & 0.0 & 0.0
& 0.0 & 0.0 & 0.0\\
\noalign{\smallskip}
\hline
\end{tabular}
\end{table*}
\section{Validation}\label{sec_validation}
To establish the precision and accuracy of the stellar parameters and
chemical abundances derived by \Space, we run the code on several sets of
synthetic and real spectra with well-known parameters and compare these with
the parameters derived by \Space. We test \Space\ on the wavelength ranges
5212-6270\AA\ and 6310-6900\AA, which avoids the range 6270-6310\AA\ where
the presence of telluric lines can affect the analysis. The tests are
performed on spectra with spectral resolution between 2\,000 and
20\,000\footnote{At R$\lesssim$20\,000 the line profile implemented in \Space\
is expected to work best. See Appendix~\ref{app_voigt} for a short discussion
on the accuracy of the line profile as a function of the spectral
resolution.}.\\
Before the presentation of these tests, we illustrate and discuss the
accuracy with which \Space\ constructs the spectrum models (i.e. how close these
models match the synthetic spectra from which they are derived).
\subsection{Spectrum models accuracy}
In Sec.~\ref{sec_makemodel} we outlined the algorithm that constructs the
spectrum model which must be compared with the observed spectrum. Our goal
was to make a spectrum model that looks as close as possible to a synthetic spectrum.
To evaluate the accuracy with which the spectrum models constructed by
\Space\ match the corresponding synthetic spectra, we synthesized two
spectra and
compared them with the corresponding spectrum models constructed with the
same stellar parameters. The goodness of the match illustrates the precision with
which the strength of the lines (encoded in the EW library first, and then
in the GCOG library) and the adopted line profile can reproduce a realistic
spectrum model. We chose to synthesize the spectra of a dwarf star
(\temp=5800~K, \logg=4.2, \met=0.0~dex) and of a giant star (\temp=4200~K, \logg=1.4, and
\met=0.0~dex) degraded to a spectral resolution of R=12\,000\footnote{The
accuracy with which the model spectra match the synthetic ones can change
with spectral resolution and stellar parameters. For the sake of brevity, we
only present here two spectra as exemplary cases.}. The
relatively high metallicity adopted generates spectra rich in lines, allowing
us to verify how well \Space\ can reproduce blended features (more numerous in
spectra of giants) and
how well it fits the profile of strong lines (usually broader in dwarf stars).
In the case of blended features we test the correction for the opacity of
the neighboring lines applied to the EW library (Sec.~\ref{sec_corr_opac}),
for isolated strong and weak lines we verify the goodness of the Voigt profile adopted (described in
Appendix~\ref{app_voigt}). In general, any line is affected by the precision
with which the polynomial GCOGs represent the expected EWs.
The comparison between models and synthetic spectra is shown in
Fig.~\ref{test_reconstr}. The top panel of the figure shows that the
normalized flux of the models differs by no more than 1\% for most of the
wavelengths, with a general standard deviation $\sigma$ of 0.2\% and 0.6\%
for the dwarf and the giant, respectively. This statistic was computed after
the rejection of the gray shaded areas, which were rejected during the
analysis because in the case of real spectra they are affected by unidentified lines, lines with NLTE effects, or
lines for which the correction for the opacity of the neighboring lines is not
satisfactory (see last paragraph of Sec.~\ref{sec_corr_opac}).\\
Although not perfect, the spectrum models match the corresponding synthetic
spectra with a satisfactory degree of accuracy.
\begin{figure}[t]
\centering
{\includegraphics[width=9cm,bb=64 290 505 686]{test_reconstr.pdf}}
\caption{Comparison between spectrum models and the corresponding synthetic
spectra. The black lines are synthetic spectra of a cool giant star
(bottom panel) and a warm dwarf star (middle panel) with stellar parameters
as reported in the panels. The overplotted colored lines represent the
spectrum models constructed by \Space\ for the corresponding giant (red
line) and dwarf (blue line) star. The shaded areas indicate the wavelength
ranges rejected during the \Space\ analysis (see text for more details).
The residuals (model minus synthetic spectrum) are reported in the top panel, together with the standard
deviation of the residuals for the dwarf and giant spectra (in blue and red,
respectively) after the exclusion of the gray shaded areas. The
color version of this plot is available in the electronic edition.}
\label{test_reconstr}
\end{figure}
\subsection{Tests on synthetic spectra}
To verify the ability of \Space\ to distinguish the stellar parameters and chemical abundances of
different Milky Way stellar populations of different ages, metallicity, and
evolutionary
stages, we synthesized the spectra of three mock stellar populations with
characteristics that mimic the thin disk stars, the halo/thick disk stars, and
accreted stars with non-enhanced $\alpha$ abundances (a dwarf galaxy accreted by the Milky
Way, for instance). All the synthetic spectra were synthesized with
MOOG, adopting the final line list described in Sec.~\ref{sec_final_ll} and
atmosphere models from the grid ATLAS9 by Castelli \& Kurucz \cite{castelli}
(updated to the 2012 version) linearly interpolated to the wanted stellar parameters.
\subsubsection{Construction of the synthetic mock populations}
We construct a mock sample with a total of 1200 spectra for the three
populations (300, 600, and 300 spectra for
the thin, halo/thick, and accreted populations, respectively),
randomly chosen from the PARSEC isochrones (Bressan et al.
\citealp{bressan} complemented by Chen et al. \citealp{chen_bressan})
to cover the stellar parameter range 3600 to 7500~K in \temp, 0.2 to 5.0 in
\logg, and $-2.0$ to $+0.3$~dex in [Fe/H].
In this way, the mock sample covers uniformly the chosen isochrones.\\
Their chemical abundances were chosen with the following characteristics:\\
\begin{itemize}
\item {\bf mock thin disk stars}: they cover the iron abundance range
$-0.8\leq$[Fe/H]$<+0.3$~dex and their \temp\ and \logg\ were taken
from
isochrones with an age of 5~Gyr. Their $\alpha$-element enhancement [El/Fe]
becomes progressively higher for lower [Fe/H]. (In the next plots this
population is represented by blue points.)
\item {\bf mock halo/thick disk stars}: they cover the iron abundance
range $-2.0\leq$[Fe/H]$<-0.2$~dex and their \temp\ and \logg\ were
taken from the
isochrones with an age of 10~Gyr. Their $\alpha$-element enhancement [El/Fe]
becomes progressively higher for lower [Fe/H] down to $[Fe/H]=-1.0$~dex, and
stays constantly high ($\sim+0.4$) for $[Fe/H]<-1.0$~dex.
(In the next plots this
population is represented by red crosses.)
\item {\bf mock accreted stars}: they cover the iron abundance
range $-2.0\leq$[Fe/H]$<-1.0$~dex and their \temp\ and \logg\ were
adopted from
isochrones with an age of 10~Gyr. Their $\alpha$-element enhancements [El/Fe]
are equal to zero. (In the next plots this
population is represented by dark green triangles.)
\end{itemize}
\begin{figure}[t]
\centering
{\includegraphics[width=9cm,bb=101 288 359 455]{TG_R12_SN100_synt_ABDloop.pdf}}
\caption{Distribution of the three mock populations on the \temp\ and
\logg\ plane as synthesized (left panel) and derived by \Space\ (right
panel). The blue points, red crosses, and green triangles represent the
thin, halo/thick disc, and accreted stars, respectively.
The solid, dashed, and dotted black lines show isochrones at
[M/H]=0.0~dex and 5~Gyr, [M/H]=$-1.0$~dex and 10~Gyr, and [M/H]=$-2.0$~dex and
10~Gyr, respectively. The light gray errorbars represent the confidence
intervals of the individual measurements. A missing
errorbar indicates that the error is larger than the parameter grid. The
color version of this plot is available in the electronic edition.}
\label{TG_R12_SN100_synt_ABDloop}
\end{figure}
\begin{figure*}[t]
\begin{minipage}{18cm}
\centering
{\includegraphics[width=14cm,bb=85 286 531 524]{correl_R12_SN100_synt_ABDloop.pdf}}
\caption{Residuals between derived and reference parameters (y-axis) as a
function of the reference parameters (x-axis). Symbols and colors are as in
Fig.\ref{TG_R12_SN100_synt_ABDloop}. The color version of this plot is
available in the electronic edition.}
\label{correl_R12_SN100_synt_ABDloop}
\vskip 0.3cm
\centering
{\includegraphics[width=14cm,bb=85 286 531 524]{XFe_R12_SN100_synt_ABDloop.pdf}}
\caption{Chemical abundances derived by \Space\ for the three mock
populations. Symbols and colors are as in Fig.\ref{TG_R12_SN100_synt_ABDloop}. The
reference abundances of these three populations are represented by the
light blue, light red, and light green solid lines for the thin disc, the
halo/thick disc, and accreted populations, respectively. The color version
of this plot is available in the electronic edition.}
\label{XFe_R12_SN100_synt_ABDloop}
\end{minipage}
\end{figure*}
More precisely, the abundance of the generic element $El$ follows
a linear law which can be expressed as $[El/Fe]=m\cdot[Fe/H]+q$,
where $m$ and $q$ have different values for different [Fe/H] intervals
and elements. The exact $m$ and $q$ values for each element
are listed in Tab.~\ref{tab_three_pop}. The distributions of these
populations in \temp\ and \logg\ are shown in the left panel of
Fig.~\ref{TG_R12_SN100_synt_ABDloop}.
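For concreteness, the piecewise-linear law can be sketched as follows. The segment boundaries, slopes $m$, and intercepts $q$ below are illustrative placeholders, not the values listed in Tab.~\ref{tab_three_pop}.

```python
# Hedged sketch of the piecewise-linear abundance law [El/Fe] = m*[Fe/H] + q,
# with different (m, q) per [Fe/H] interval. The segment values below are
# illustrative placeholders, not the ones tabulated in the paper.

def el_fe(fe_h, segments):
    """Return [El/Fe] for a given [Fe/H] from ordered (lo, hi, m, q) segments."""
    for lo, hi, m, q in segments:
        if lo <= fe_h < hi:
            return m * fe_h + q
    raise ValueError("[Fe/H] outside the tabulated intervals")

# Alpha-like behaviour of the mock halo/thick-disc population: a constant
# plateau of +0.4 below [Fe/H] = -1.0 and a linear decline toward 0.0.
mock_segments = [(-2.0, -1.0, 0.0, 0.4),
                 (-1.0, 0.2, -0.4, 0.0)]

print(el_fe(-1.5, mock_segments))  # on the plateau
print(el_fe(-0.5, mock_segments))  # on the declining branch
```

The same lookup applied per element, with the tabulated $(m, q)$ pairs, reproduces the mock chemical sequences.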
The samples were degraded to resolutions of R=20\,000, 12\,000, 5\,000, and
2\,000 and to signal-to-noise ratios of S/N=100, 50, 30, and 20 (by
adding Poissonian noise) for a total amount of
19\,200 spectra. The stellar parameters and abundances of these spectra were
derived with \Space\ and compared with the expected ones.
The measurements were performed with the keyword {\it ABD\_loop} on.
We switched off the internal
re-normalization to evaluate the goodness of the GCOG library to provide the
right EW of the lines and the ability of \Space\ to reproduce the
correct line profile of the absorption lines.
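The degradation step can be sketched as below: a Gaussian convolution to the target resolving power followed by Poissonian noise scaled to the target S/N. The grid spacing, line parameters, and the toy spectrum are assumptions for illustration only, not the actual test-set construction.

```python
import numpy as np

np.random.seed(0)  # reproducible noise realization

def degrade(wave, flux, r_target, snr):
    """Broaden a normalized spectrum to resolving power r_target, add Poisson noise."""
    fwhm = np.mean(wave) / r_target                # instrumental FWHM (Angstrom)
    sigma_pix = fwhm / (2.354 * (wave[1] - wave[0]))
    half = int(4 * sigma_pix)
    x = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (x / sigma_pix) ** 2)
    kernel /= kernel.sum()
    smooth = np.convolve(flux, kernel, mode="same")
    counts = np.random.poisson(smooth * snr ** 2)  # continuum counts ~ snr^2
    return counts / snr ** 2

# Toy normalized spectrum: continuum plus one Gaussian absorption line
wave = np.linspace(5212.0, 5220.0, 800)
flux = 1.0 - 0.6 * np.exp(-0.5 * ((wave - 5216.0) / 0.05) ** 2)
noisy = degrade(wave, flux, r_target=12000, snr=100)
```

Edge pixels are not padded in this sketch, so the convolution dims the spectrum near the borders; a production pipeline would pad or trim the edges.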
\begin{figure}[t]
\centering
\includegraphics[width=9cm,bb=62 286 508 691]{internal_errors_TGM_R.pdf}
\caption{{\bf Black symbols}: standard deviations of the residuals of the stellar parameters
(expressed as estimated minus reference values) as a function of S/N
for the four different resolutions considered here. {\bf Grey symbols}: as the black
symbols but the y-axis expresses the average semi-width of the estimated confidence
interval. The black and gray symbols would match if the error distributions
were Gaussian and there were no systematic errors.}
\label{internal_errors_TGM_R}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=8cm,bb=66 191 470 641]{plot_bressan_MC_TGM.pdf}
\caption{Distributions of the derived stellar parameters (black solid histogram),
and the lower and upper limits of the confidence intervals (dotted-dashed red
histogram and thick dashed green histogram, respectively) as computed in
Sec.~\ref{sec_error_est} for 100 Monte Carlo realizations
of the synthetic spectrum with \temp=4538~K,
\logg=2.42, [Fe/H]=0.30~dex, S/N=100, and R=12\,000. The black
point represents the median of the derived stellar parameters
while the errorbars show the lower and upper limit of the interval that
holds the 68\% of the measurements. The vertical dashed lines represent the
reference values. The color version of this plot is available in the
electronic edition.}
\label{plot_bressan_MC_TGM}
\end{figure}
\begin{figure}[ht]
\centering
{\includegraphics[width=8cm,bb=70 287 505 689]{plot_bressan_MC_ElH.pdf}}
\caption{As in Fig.~\ref{plot_bressan_MC_TGM} but for the elemental abundances
of \ion{Mg}{}, \ion{Ca}{}, and \ion{Cr}{}. The color version of this plot is available in the
electronic edition.}
\label{plot_bressan_MC_ElH}
\end{figure}
\subsubsection{Results}
We ran \Space\ on the sets of synthetic spectra just outlined.
For the sake of brevity, we only report
here the representative case of R=12\,000 and S/N=100.
In Fig.~\ref{TG_R12_SN100_synt_ABDloop} we show the
distribution in \temp\ and \logg\ of the synthetic spectra (left panel)
in comparison with the same parameters derived by
\Space\ (right panel). The \temp\ and \logg\ derived by \Space\ follow
closely the isochrones for all three synthetic populations.
In Fig.~\ref{correl_R12_SN100_synt_ABDloop} we show
the residuals between the derived and reference values as a function
of the reference stellar parameters while in
Fig.~\ref{XFe_R12_SN100_synt_ABDloop} we report the distribution of
the chemical abundances in the chemical plane for nine elements
derived from the same spectra.
\subsubsection{Error estimation in synthetic spectra}
In the panels of Fig.~\ref{correl_R12_SN100_synt_ABDloop} we report the dispersions of the
measurements around the expected values of \temp, \logg, and \met, while the confidence
interval of the single measurements (computed as explained in
Sec.~\ref{sec_error_est}) are shown with the light gray errorbars.
The errorbars in Fig.~\ref{correl_R12_SN100_synt_ABDloop} are often smaller
than the dispersion of the residuals, which suggests an underestimation of the errors.
This is summarized in
Fig.~\ref{internal_errors_TGM_R} where the overall dispersion of the
residuals for the stellar parameters (derived minus reference, black symbols) are compared with the
half-width of the confidence intervals (gray symbols) for different resolutions and
S/Ns. The black and gray symbols are closer where the stochastic noise
dominates, i.e., at low S/N, while for high S/N the confidence intervals always
underestimate the stellar parameter dispersions. The reason can be guessed from
Fig.~\ref{correl_R12_SN100_synt_ABDloop}: the overall dispersion $\sigma$
is inflated by the presence of systematic errors
in \temp, \logg, and [Fe/H] for which the computed confidence intervals
cannot account. The latter
can only account for the stochastic errors. We verified
this statement by generating 100 Monte Carlo realizations of a few synthetic
spectra, deriving their stellar parameters and chemical abundances, and
comparing them with the confidence intervals computed by \Space. The
distributions of the parameters and the confidence intervals obtained with
this test show that the confidence intervals only account for the stochastic noise
and fail to recognize the systematic errors when present (the shift between the average
of the black histogram and the expected value represented with a black dashed
line in Fig.~\ref{plot_bressan_MC_TGM}). The chemical abundances
recovered by \Space\ for the three mock samples (see Fig.~\ref{XFe_R12_SN100_synt_ABDloop})
are accurate and follow the expected sequences traced by the colored solid
lines. No particular systematic error is visible and the errorbars appear to
be a good representation of the dispersion around the expected value.
This is supported by
the statistic obtained from the 100 Monte Carlo realizations cited before
and illustrated in Fig.~\ref{plot_bressan_MC_ElH}. A further discussion of
the systematics errors seen in this section can be found in
Sec.~\ref{sec_discussion}.
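The Monte Carlo argument of this section can be reproduced with a toy model: an estimator carrying a deliberate bias yields a scatter that matches a noise-only confidence half-width, while its offset from the truth goes unreported. All numbers below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy Monte Carlo: 100 noisy "measurements" of Teff with a stochastic error
# of 40 K and a deliberate systematic offset of 60 K (illustrative values).
true_teff = 4538.0
noise_sigma = 40.0
systematic = 60.0

estimates = true_teff + systematic + rng.normal(0.0, noise_sigma, size=100)

# A noise-only confidence interval has half-width ~ noise_sigma: it matches
# the scatter of the realizations but is blind to the 60 K offset.
scatter = float(estimates.std())
offset = float(estimates.mean() - true_teff)
print(scatter, offset)
```

The scatter comes out close to the injected 40~K, while the mean sits near 60~K above the truth: exactly the behaviour of the black histogram versus the dashed reference line in the figures.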
\subsection{Tests on real spectra}\label{sec_test_real}
We employed sets of publicly available spectra like the ELODIE spectral library
\cite[Prugniel et al.][]{prugniel}, the spectra of the benchmark stars \cite[Jofr\'e et
al.][]{jofre}, and the spectra of the S4N catalogue \cite[Allende Prieto et
al.][]{allende}.
For the ELODIE spectra we selected those spectra for which
the authors report literature stellar parameters flagged as being of good and
excellent quality (quality flags ``3" and ``4") to be compared with the stellar
parameters derived by \Space. For the benchmark and S4N stars we compare our
results with the high quality stellar parameters provided by Jofr\'e et al.
and Allende Prieto et al., respectively.
All these spectra have high spectral resolution
($>$60\,000) and high S/N. To test \Space\ on spectra of lower resolution
and S/N, we degraded the spectra to resolutions of R=20\,000, 12\,000, 5\,000, and 2\,000 and to
S/N=100, 50, 30, and 20 by adding artificial Poissonian noise\footnote{Many of these
spectra have S/N that are not high (S/N$\sim$60-100) so that by adding
artificial noise the final S/N is actually lower than the nominal one.}.
Although the original spectra were already normalized, after degrading them
we re-normalized them with the IRAF task {\it continuum} in order to simulate the continuum
obtained when these spectra are normalized at their nominal
spectral resolution. Then, the spectra were processed with \Space\ and the
derived stellar parameters compared with the reference values.
For the sake of brevity, we only report
here the representative case of R=12\,000 and S/N=100.
We measured the spectra with the {\it ABD\_loop} keyword
in the wavelength range 5212-6270\AA\ and
6310-6900\AA\ to avoid the telluric lines in the range 6270-6310\AA\ that may
degrade the quality of the measurements. In this case we switched on the
internal re-normalization as usually done for spectra for which the continuum level
must be refined.
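The footnote's caveat follows from noise sources adding in quadrature; a back-of-the-envelope sketch (the S/N values below are assumptions for illustration):

```python
# When artificial noise at a nominal S/N is added to a spectrum that already
# has finite S/N, the two noise contributions add in quadrature, so the
# resulting S/N is lower than the nominal one.

def combined_snr(snr_orig, snr_added):
    return (snr_orig ** -2 + snr_added ** -2) ** -0.5

# A spectrum with original S/N ~ 80, degraded "to" a nominal S/N = 100:
print(round(combined_snr(80.0, 100.0), 1))
```

For an original S/N of 80 the combined value drops to roughly 62, consistent with the footnote's remark that the final S/N is lower than the nominal one.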
\begin{figure}[t]
\centering
{\includegraphics[width=9cm,bb=101 288 359 505]{TG_R12_SN100_real_ABDloop.pdf}}
\caption{{\bf Left panel}: Distribution of the reference stellar parameters
in the \temp\ and
\logg\ plane for the ELODIE library (solid points), the benchmark stars
(crosses), and the S4N stars (triangles). {\bf Right panel}: As before but
with the stellar parameters derived by \Space. The colors code the [Fe/H].
Errorbars are reported in light gray. The solid, dashed, and dotted black lines trace the isochrones for
[M/H]=0.0~dex and 5~Gyr, [M/H]=$-1.0$~dex and 10~Gyr, and [M/H]=$-2.0$~dex and
10~Gyr, respectively. The color version of this plot is available in the
electronic edition.}
\label{TG_R12_SN100_real_noABDloop}
\end{figure}
\subsubsection{Results}
The distributions on the \temp\ and \logg\ plane of the reference parameters
and the derived parameters with \Space\ are shown in the left and right
panels of Fig.\ref{TG_R12_SN100_real_noABDloop}, respectively.
The derived parameters appear to follow fairly well the isochrones.
In Fig.~\ref{correl_R12_SN100_real_ABDloop}
the residuals between the derived and expected values as a function
of the reference stellar parameters show small residuals in all the panels,
except for the middle right panel, which reveals a systematically low \logg\
with lower [Fe/H]. This feature is discussed in
Sec.\ref{sec_logg_systematic}.
\subsubsection{Error estimation in real spectra}
Because for real spectra we do not have exact stellar parameters
but estimations from high resolution spectra, our evaluation of the
errors relies on the dispersion of the
residuals between the parameters derived by \Space\ and the high resolution
parameters that we use as reference. This is
summarized in Fig.~\ref{errors_real} for different resolutions and S/N
ratios. Because
the reference parameters also suffer from errors, the
dispersions reported in Fig.~\ref{errors_real} are actually
an overestimation of the \Space\ errors because they result from the
quadratic sum of the reference errors plus the \Space\ errors\footnote{
The reference \logg\ values of the S4N and the benchmark stars were derived from parallaxes.
Because such estimates usually carry much smaller errors than
those derived from spectra, we expect that for these stars the \Space\ errors are the
major contributors to the dispersion of the residuals.}.
For the individual elements, Fig.~\ref{XFe_R12_SN100_real_ABDloop}
shows nine relative abundances derived from the same spectra. For these quantities
reference values can be only found for the S4N spectra, for which we have
seven elements in common. The comparison between derived and reference
abundances is reported in Fig.~\ref{plot_s4n}.
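Under the independence assumption just stated, a rough method error can be recovered by subtracting the reference error from the residual dispersion in quadrature. The numbers below are illustrative, not measurements from this work.

```python
import math

# Residual dispersion = quadratic sum of reference and method errors, so the
# method error follows by quadrature subtraction (assuming independence).

def method_error(residual_dispersion, reference_error):
    if reference_error >= residual_dispersion:
        return 0.0  # reference errors dominate; nothing left to attribute
    return math.sqrt(residual_dispersion ** 2 - reference_error ** 2)

# logg-like toy numbers: 0.25 dex residual scatter, 0.10 dex reference error
print(method_error(0.25, 0.10))
```

With these toy values about 0.23~dex of the scatter would be attributable to the method itself, which is why the dispersions in the figure are upper limits on the \Space\ errors.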
\subsection{Measure of the whole ELODIE spectral library}
We derived stellar parameters and chemical abundances for the whole ELODIE
spectral library at a resolution of R=12\,000 and S/N=100.
\Space\ provided results for 1386 spectra (out of 1959 spectra of the
library), while it did not converge for those spectra whose stellar parameters
lie beyond the stellar parameter volume covered by the GCOG library.
The high number of stars provided by the full ELODIE library gives more
robust statistics than the previous test. The derived
parameters are reported in appendix~\ref{appendix_tests_real},
Tabs.~\ref{ELODIE_table}, \ref{benchmark_table}, and
\ref{S4N_table}, and are plotted in Fig.~\ref{R12_SN100_elodie_whole_noABD}.
\subsection{Discussion}\label{sec_discussion}
Tests on synthetic and real spectra showed that the derived stellar
parameters and chemical abundances are reliable and have a good precision.
The accuracy suffers from systematic errors (in particular for \temp, \logg, and
[Fe/H]) highlighted in the test on
synthetic spectra. This particularly affects spectra that have a high density of
strong lines, for which the correction for the opacity of the neighboring
lines (seen in Sec.~\ref{sec_corr_opac}) applied to a wavelength interval
0.5\AA\ wide becomes insufficient. In this case, the expected EW of the
lines stored in the EW library (and encoded in the GCOG library) is too
large and leads to misestimations of the stellar parameters with the
systematic errors seen in Fig.~\ref{correl_R12_SN100_synt_ABDloop}.
However, these errors are relatively small (up
to 100~K in \temp, 0.2 in \logg, and 0.1~dex in [Fe/H]). While these errors
affect mostly metal rich cool dwarfs (\temp$<$4500~K, [Fe/H]$>$0~dex) and,
to some extent, cool giants (\logg$<$0.5) in synthetic spectra, in the test
with real spectra they do not seem to play a significant role
(Fig.~\ref{correl_R12_SN100_real_ABDloop}) perhaps because they are smaller
than the stochastic errors. On the other hand, the measurements done with the
whole ELODIE sample (Fig.~\ref{R12_SN100_elodie_whole_noABD}) show an
underestimation of the gravity for dwarf stars cooler than \temp=4800~K
(they do not follow the isochrones, as expected) and an apparent
gravity overestimation
of the red clump stars of $\sim+0.25$, within a
general picture that confirms the reliability of the results in every other
respect.\\
Another source of systematic errors is the adopted line profile
(Appendix~\ref{app_voigt}). The \Space\ line profile is an empirical function of the EW, broadening
constants, and \logg\ of the star, and it proved to fit reasonably well the
lines for most of the stellar parameters. However, there are some
discrepancies that cause the systematic deviations from the expected stellar
parameters just discussed. An improved function for the line profile can reduce
the systematic errors and this is one of the many possible improvements that
are left for the next version of \Space.\\
\begin{figure*}[t]
\begin{minipage}{18cm}
\centering
{\includegraphics[width=14cm,bb=85 286 531 524]{correl_R12_SN100_real_ABDloop.pdf}}
\caption{Residuals between derived and reference parameters (y-axis) as a
function of the reference parameters (x-axis). ELODIE, benchmark, and S4N
stars are indicated with black points, red crosses, and blue triangles,
respectively. Errorbars are reported in light gray. A missing
errorbar indicates that the error is larger than the parameter grid.
The color version of this plot is available in the electronic edition.}
\label{correl_R12_SN100_real_ABDloop}
\vskip 0.3cm
\centering
{\includegraphics[width=14cm,bb=85 286 531 524]{XFe_R12_SN100_real_ABDloop.pdf}}
\caption{Chemical abundances derived by \Space\ for the ELODIE, benchmark,
and S4N stars. Symbols are as in Fig.\ref{correl_R12_SN100_real_ABDloop}.
The color version of this plot is available in the electronic edition.}
\label{XFe_R12_SN100_real_ABDloop}
\end{minipage}
\end{figure*}
Despite the errors in stellar parameters, the resulting chemical abundances
are reliable, as shown in Fig.~\ref{XFe_R12_SN100_synt_ABDloop} and
Fig.~\ref{XFe_R12_SN100_real_ABDloop} for synthetic and real spectra,
respectively. For the synthetic spectra the distribution of the derived chemical
abundances on the chemical plane follows closely the expected values (light colored
solid lines in Fig.~\ref{XFe_R12_SN100_synt_ABDloop}), while for real
spectra the chemical abundance distributions follow fairly well the pattern
expected for the
Milky Way stars. A one-to-one comparison of the derived chemical abundances
with the reference abundances of the S4N spectra (Fig.~\ref{plot_s4n})
reveals that some of the elements may suffer from systematic errors. In
particular \ion{Sc}{}\ and \ion{Ti}{}\ seem to be underestimated by $\sim$0.1~dex with
respect to the S4N estimations. For the
other elements, the abundances agree fairly well.
It is not clear what may cause the underestimation of the \ion{Sc}{}\ and \ion{Ti}{}\ abundances.
The absorption lines of these two elements
are weak in the Sun, which makes the calibration
of the \loggf s and the determination of the abundances of the other
standard stars used for the calibration (Sec.~\ref{sec_gf_calibration})
difficult.
This can lead to a systematic offset of the calibrated \loggf s of the lines
of these elements, and therefore to an offset in the derived abundances.
On the other hand, in
Sec.~\ref{sec_loggf_validation} we showed how the calibrated \loggf s seem
to be smaller than the good quality \loggf s we took as reference. This would
lead to an overestimation of the chemical abundances, which is the opposite
of the underestimation seen. Moreover, it would affect all the elements and
not \ion{Sc}{}\ and \ion{Ti}{}\ alone.\\
Most of the systematic errors seen in synthetic spectra become
indistinguishable in the tests with real spectra, where the stochastic errors are larger.
However, there is at least one systematic error highlighted by the
tests on real spectra that must be discussed. It affects the \logg\ and it is
discussed in the next section.
\subsubsection{On the systematic error in \logg\ in real
spectra}\label{sec_logg_systematic}
The results obtained with synthetic and real spectra prove to be reliable
and in fair agreement for all the stellar parameters except for \logg, for
which \Space\ derives systematically low gravities for metal-poor spectra. The
absence of this systematic error in the tests on synthetic spectra
excludes that the error may originate in the way in which \Space\ constructs the
spectrum models. In the attempt to shed light on this, we tested \Space\
with different settings, and we found that running \Space\ with and without the
keyword {\it ABD\_loop} (which executes or skips the
step~10 of the algorithm outlined in Sec.\ref{sec_space_code})
leads to results that are in agreement for all the
stellar parameters but for \logg. In Fig.~\ref{plot_logg_comp} the residuals in
\logg\ are shown as a function of [Fe/H] for both settings for comparison
purposes. With {\it ABD\_loop}, real spectra show the systematic error just
cited, whereas this is absent in synthetic spectra. Conversely, without {\it
ABD\_loop} the systematic error in \logg\ for real spectra is greatly
reduced, but the same systematic with opposite sign appears in synthetic
spectra for the $\alpha$-enhanced stars (red crosses in
Fig.~\ref{plot_logg_comp}) and not for the non-enhanced ones (green triangles).
This seems to be in agreement with
the real spectra, because for the stars here considered, the low metallicity
stars are $\alpha$-enhanced too. This suggests that the only stars affected by
this systematic error are the $\alpha$-enhanced ones.
\begin{figure*}[t]
\centering
{\includegraphics[width=16cm,bb=58 113 570 375]{plot_s4n.pdf}}
\caption{Comparison between derived and reference chemical abundances of
seven elements for the S4N
spectra. Points marked with crosses have been excluded from the statistics reported in the
panels.}
\label{plot_s4n}
\end{figure*}
This indicates that the discrepancy observed between synthetic and real
spectra does not depend on the method employed, but that it may originate from
i) incorrect microturbulence adopted for the EW library or from
ii) the adopted 1D atmosphere models and LTE assumption, whose discrepancy
with the physical conditions of real stars becomes larger at lower metallicity
(\citealp[Asplund][]{asplund2005}; \citealp[Bergemann et al.][]{bergemann2012}). We are inclined to exclude that
the atomic parameters like damping constants or oscillator strengths may
play a significant role in this systematic error because otherwise it should equally
affect metal-rich and metal-poor stars.
The future development of a new GCOG library that accounts for NLTE effects and 3D
atmosphere models should shed light on the origin of this systematic and, hopefully,
remove it. Because this problem cannot be solved in the present work, we
choose to leave the option to the user whether to use the {\it ABD\_loop}
keyword. In the appendix of this work, the
results of the tests on real spectra run without the {\it ABD\_loop}
keyword are presented.\\
A further systematic error is the underestimation ($\sim-0.2$) of \logg\
for dwarf (\logg$\gtrsim4.2$) stars. This effect is smaller in the
test with synthetic spectra than in real spectra.
The fact that it is small for synthetic spectra seems to
suggest that the origin of the problem may, as before, lie in the basic assumptions made,
such as the LTE assumption and the stellar atmosphere models adopted.\\
\subsection{Remark}
In this paper we mostly aimed at validating the method proposed. For the sake of
brevity, we do not discuss the tests done by using other functions available in
\Space\ and we just briefly mention two of them here\footnote{The full list of
functions available is provided with the tutorial that accompanies the
code \Space.}. \Space\ accepts keywords like {\it T\_force} and {\it G\_force}
that force \Space\ to look for solutions with fixed \temp\ and/or \logg\
given by the user. This is particularly useful for low S/N spectra
for which \Space\ cannot converge to precise stellar parameters. For
instance, a robust photometric temperature passed to \Space\ with the
keyword {\it T\_force} helps to improve
the \logg\ and chemical abundance estimations. Another useful keyword is {\it
alpha}, which forces \Space\ to derive the abundance of the $\alpha$-elements (\ion{Mg}{}, \ion{Si}{},
\ion{Ca}{}, and \ion{Ti}{}) as if they were one single element, while all other elements
(excluding
\ion{C}{}, \ion{N}{}, and \ion{O}{}) are treated as a single separate element called ``metals". As
before, this is useful to get abundances from low-quality spectra that carry little
information.
\section{Publication}
The source code of \Space\ will be publicly available soon together with the
line list and the GCOG library. The code will be released under a GPL license.
In addition, a VO-integrated service allowing operation of \Space\
without installation is available \cite[Boeche et al.][]{boeche2015}.
A simple Web front end to this service can be found at
http://dc.g-vo.org/SP\_ACE.
\section{Future work}
In this work we outlined the method the code \Space\ relies upon and
the solutions chosen up to now, which prove to work but are far from perfect.
Many improvements and further developments are possible. Among the most
important we cite the following ones:
\begin{itemize}
\item {\bf Extension of the stellar parameter grid}: it is possible to
extend the coverage of the GCOG library to hotter temperatures than those
currently covered (\temp$<$7400~K) and to higher gravities. The latter
have been
extended to \logg=5.4 by extrapolation, because stellar atmosphere
models with \logg$>$5.0 are not provided by the ATLAS9 grid. An extension of
the stellar atmosphere grid (and subsequent extension of the GCOG library)
up to \logg$\sim$6 is planned for the near future.
\item {\bf Extension of the line list}: currently the wavelength range
covered by the GCOG library is 5212-6860\AA\ and 8400-8920\AA. We plan to
extend the wavelength range, in particular toward bluer wavelengths. With
the extension of the grid to hotter stars, the line list will be augmented
by ionized/high excitation potential lines only visible at high temperatures.
\item {\bf Molecular lines}: at the present time the molecular lines we
take into account are the CN lines in the range 8400-8920\AA. An extension to
the optical region would improve the derivation of stellar parameters for
cool stars. How this problem can be solved in the framework of the method used
by \Space\ is still unclear.
\item {\bf Opacity correction}: an improved method to correct the EW for
the opacity of the neighboring lines has to be found. A further investigation
of the rigorous solution proposed in Appendix~\ref{appendix_cf}, or
new techniques of line deconvolution (like the one proposed by
\citealp[Sennhauser et al.][]{sennhauser}) may lead to a solution.
\item{\bf Improved line profile}: the present line profile adopted in
\Space\ is an empirical function that represents fairly well the shape of the
lines over a wide range of parameters, but still is not good enough at
the borders of the parameter grid. It is possible and desirable to find a
new improved line profile function that would permit the removal of some of the
systematic errors seen in synthetic spectra.
\item {\bf Extension to other stellar atmosphere models}: the present EW library has
been constructed with the 2012 version of the ATLAS9 atmosphere grid by Castelli \& Kurucz
\cite{castelli}, but it can be done with any other atmosphere models. The creation of
EW and GCOG libraries based on MARCS \cite[Gustafsson et al.][]{gustafsson} or PHOENIX
\cite[Husser et al.][]{husser} models is desirable and we plan to do it in the
near future.
\item {\bf Extension to 3D models and NLTE assumptions}: although the
construction of a whole EW library with 3D atmosphere model and NLTE
assumptions seems still prohibitive in terms of computing costs, the
integration of the present EW library with a few important absorption lines
whose EWs have been computed under NLTE assumptions and/or a 3D atmosphere model is
doable. For instance, computing the EWs of H$\alpha$,
the \ion{Fe}{I}\ at 5269.537\AA, and one line of the \ion{Ca}{II}\ triplet with 3D
atmosphere models and/or under NLTE assumptions and integrating these EWs into
the present EW library would greatly increase the ability of \Space\ to constrain
the stellar parameters, in particular for low metallicity or low S/N
spectra where only strong lines can be seen.
\end{itemize}
While for some of the above points the amount of work may be considerable,
for other points the necessary work is small and it would bring significant
improvements in a short time.
\begin{figure}[t]
\centering
\includegraphics[width=9cm,bb=63 287 509 690]{errors_real.pdf}
\caption{Standard deviations of the residuals of the stellar parameters
derived from real spectra
as a function of S/N for the four different resolutions considered.}
\label{errors_real}
\end{figure}
\begin{figure}[t]
\centering
\resizebox{\hsize}{!}
{\includegraphics[width=9cm,bb=75 286 378 455]{plot_logg_comp.pdf}}
\caption{Residuals between derived and reference \logg\ for real spectra
(top) and synthetic spectra (bottom) when the long (left panels) and short
(right panels) versions of \Space\ are used to derive the stellar
parameters. The symbols are as in Fig.~\ref{correl_R12_SN100_real_ABDloop} and
Fig.\ref{TG_R12_SN100_synt_ABDloop} for real and synthetic spectra,
respectively. The color version of this plot is available in the electronic
edition.}
\label{plot_logg_comp}
\end{figure}
\section{Conclusions}
In this work we proposed and described a new method to derive stellar
parameters and chemical elemental abundances from stellar spectra. Based on calibrated
oscillator strengths of a complete line list and on 1D
atmosphere models and under LTE assumptions, this method relies on
polynomial functions (stored in the GCOG library) that describe the
EWs of the lines as a function of
the stellar parameters and chemical abundance. The method is implemented
in the code \Space, which constructs on the fly spectral models and
minimizes the $\chi^2$ computed between the models and the observed spectrum.
The method follows a full-spectrum-fitting approach, which means i) it ensures the reliability
of the spectrum models by calibrating the oscillator strengths of the adopted line
list on high-resolution spectra of stars with well-known stellar
parameters, and ii) it employs all the possible absorption lines (thousands)
in a wide wavelength range to derive the stellar parameters and abundances,
exploiting the information carried by lines that are usually rejected in the classical
analysis because they are blended or have unreliable theoretical atomic parameters. This
approach proved to be successful, obtaining reliable stellar parameters and
chemical abundances even in spectra carrying little information,
such as low-resolution or low S/N spectra, for which the classical analysis
based on EW measurements cannot be applied.\\
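As a schematic illustration of the $\chi^2$ minimization at the core of the method, the toy grid search below stands in for the on-the-fly model construction that \Space\ performs; the "model grid", line shapes, and all numbers are illustrative assumptions, not the actual implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
npix = 200

# Toy "model library": one spectrum per trial Teff, here just a Gaussian
# absorption line whose position depends on the parameter (illustrative only).
params = np.linspace(4000.0, 6000.0, 21)
models = {t: 1.0 - 0.3 * np.exp(-0.5 * ((np.arange(npix) - t / 40.0) / 5.0) ** 2)
          for t in params}

# "Observed" spectrum: the model at Teff = 5000 K plus Gaussian noise
truth = models[5000.0]
err = 0.01
obs = truth + rng.normal(0.0, err, npix)

# Chi-square between the observation and every model; pick the minimum
chi2 = {t: float(np.sum(((obs - m) / err) ** 2)) for t, m in models.items()}
best = min(chi2, key=chi2.get)
print(best)
```

The minimization recovers the input parameter; \Space\ does the analogous search in the multi-dimensional space of stellar parameters and abundances, rebuilding the model spectrum at each step from the GCOG polynomials.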
The method is far from perfect, but we believe it shows considerable
promise already
at the present stage of development. It is highly automated, so
that it is suitable for the analysis of large spectroscopic surveys. It is
flexible, in the sense that its internal re-normalization and internal
re-setting of the radial velocity of the spectrum make \Space\ independent
from the initial quality of the normalization and RV correction performed by
previous users or reduction pipelines. It is independent from the
stellar atmosphere models used to create the GCOG library on which
\Space\ relies. In fact, the GCOG library can be constructed by using
any stellar atmosphere models available in the literature, or under LTE or NLTE
assumptions, with no need to change the code \Space.\\
An on-line version of the code \Space\
is available on the German Astrophysical Virtual Observatory
web server at the address http://dc.g-vo.org/SP\_ACE.
The source code will be made publicly available soon.
\begin{acknowledgements}
B.C. thanks: H.-G.~Ludwig for the numerous useful discussions on
atomic parameters and stellar atmosphere models; M. Demleitner and H. Heinl
for their support in preparing the web front-end of \Space\ and useful
discussions.
We acknowledge advice and assistance in publishing the web service
provided by the German Astrophysical Virtual Observatory (GAVO).
We acknowledge funding from Sonderforschungsbereich SFB 881 ``The Milky Way
System" (subproject A5) of the German Research Foundation (DFG).
\end{acknowledgements}
\newpage
\section{INTRODUCTION}
Semantic description and understanding of dynamic road scenes from an egocentric video is a central problem in the realization of effective driving assistance technologies required to interpret and predict road user behavior. In the driving context, scene refers to the place where such behaviors occur, and includes attributes such as environment (road types), weather, road-surface, traffic, lighting, etc. Notably, scene context features serve as important priors for other downstream tasks such as recognition of objects, behavior, action, and intention, as well as robust navigation and localization. For example, cross-walks at intersections are likely places to find pedestrians crossing or waiting to cross. Likewise, knowing that an ego-vehicle is approaching an intersection helps auxiliary modules to look for traffic lights to slow down. Needless to say, effective solutions to the traffic scene classification problem provide contextual cues that promise to help driving assist technologies to reach human-level visual understanding and reasoning.
\begin{figure}
\includegraphics[height=6cm,width=\linewidth]{Images/Intro_AEP.png}
\centering
\caption{Viewpoint variations when approaching, entering, and passing through an intersection.}
\label{fig:Intro_AEP}
\end{figure}
\begin{figure*}
\centering
\includegraphics[scale=0.5,width=\textwidth]{Images/finalintro.jpg}
\caption{Temporal video annotations at multiple levels, including Places, Environments, Weather, and Road Surface.
}
\centering
\label{fig:intro}
\end{figure*}
The vast majority of research in scene classification has been conducted to address the problem of single image classification~\cite{zhou2017places}~\cite{yu2015lsun}. Recently, dynamic scene classification datasets~\cite{shroff2010moving} and associated algorithms~\cite{feichtenhofer2016dynamic} have emerged that exploit spatiotemporal features.
However, the majority of previous work considers a stationary camera and studies image motion (i.e., the spatial displacement of image features) induced by the projected movement of scene elements over time. Typical examples include a rushing river, a waterfall, or the motion of cars on a highway viewed from a surveillance camera. For driving tasks, scene understanding requires a dynamic representation characterized by image motion from moving traffic participants as well as variations in image formation that emerge from the vehicle's ego-motion. The latter is an important and challenging problem that has not been addressed, primarily due to a lack of related datasets for driving scenes.
To address this solution gap, this paper critically examines the dynamic traffic scene classification problem under space-time variations in viewpoint (and therefore scene appearance) that arise from the egocentric formation of images collected from a moving vehicle. In particular, a novel driving scene video dataset is introduced to enable dynamic traffic scene classification. The dataset includes temporal annotations on place, environment (road type), and weather/surface conditions, and explicitly labels the viewpoint variations using multiple levels. Specifically, the place categories are annotated temporally with fine grained labels such as Approaching (A), Entering (E), and Passing (P), depending on the ego-car's position relative to the place of interest. An example of this multi-level temporal annotation is depicted in Figures~\ref{fig:Intro_AEP} and~\ref{fig:intro}. This example illustrates the effect of view variations (caused by the changing distance to the intersection) as a vehicle approaches the scene of interest (i.e., an intersection). The video clip is labeled using the three layers (A, E, P) to highlight the distinct appearance changes, and showcases the proposed fine grained annotation strategy, which is important for vehicle navigation and localization.
The main contributions of this work are as follows. First, a dataset is released that includes 80 hours of diverse, high quality driving video clips collected in the San Francisco Bay Area
\footnote{\url{https://usa.honda-ri.com/hsd}}. The dataset includes temporal annotations for road places, road environment, weather, and road surface conditions. This dataset is intended to promote research in fine-grained dynamic scene classification for driving scenes. The second contribution includes development of machine learning algorithms that utilize the semantic context and temporal nature of the dataset to improve classification results. Finally, we present algorithms and experimental results that showcase how extracted features can serve as strong priors and help with tactical driver behavior understanding.
\begin{figure*}
\includegraphics[height=6cm,width=\linewidth]{Images/datastats.jpg}
\centering
\caption{Left bar plot shows the fine grained distribution of Road Places, where ``I'' denotes intersection. Right bar plots depict other classes supported by our dataset. Top left and bottom row shows the number of frames used to benchmark classification algorithms on Road Environment, Road Surface and Weather. The top right plot shows each 3-way, 4-way or 5-way intersection labelled as an intersection with traffic signals, stop signs, or with none of the above.}
\label{placechart}
\end{figure*}
\section{RELATED WORK}
\subsection{Driving Data sets}
Large scale public datasets geared towards automated or advanced driver assist systems ~\cite{ geiger2012we, jain2015car, chen2018lidar, xu2017end, ramanishka2018toward, yu2018bdd100k, maddern20171}, and scene understanding ~\cite{huang2018apolloscape, cordts2016cityscapes, madhavan2017bdd,neuhold2017mapillary}, have helped the development of algorithms to better understand the scene layout and behavior of traffic participants. These datasets have limitations since they either do not adequately support dynamic scene classification or provide only a non-exhaustive list of driving scene classes.
Several datasets support pixel wise annotations for semantic segmentation~\cite{madhavan2017bdd,huang2018apolloscape,neuhold2017mapillary,cordts2016cityscapes}. While it may be useful to learn semantic segmentation models to parse the scene, we cannot infer the type of scene reliably from them alone. This would mean that separate models need to be developed to aggregate the semantic segmentation outputs and infer the type of scene. Other datasets provide models for understanding ego and participant behaviors in driving scenes~\cite{jain2015car,ramanishka2018toward}, but they do not have an exhaustive list of scene classes. With respect to datasets, the work most similar to ours is described in~\cite{maddern20171,garg2018don't}. While they provide labels for scene and weather classification, those datasets are more focused toward image retrieval and localization problems.
\subsection{Scene and Weather classification}
The MIT Places dataset~\cite{zhou2017places} and the Large Scene Understanding dataset (LSUN)~\cite{yu2015lsun} were introduced to benchmark several deep learning based classification algorithms. Our dataset serves a similar purpose for traffic scenes; in addition, we support temporal classification to benchmark algorithms that are robust to spatio-temporal variations of the same scene. Moreover, driving scenes have an unbalanced class distribution along with less inter-class variation, making classification much more challenging. For example, fine grained classification of 3-way and 4-way intersections from a single frontal camera view is very challenging due to the small scene variations between these two classes.
Existing frame based scene and weather classification methods can be grouped as follows: adding semantic segmentation and contextual information \cite{yao2012describing,li2017multi,lin2017rscm}, using hand crafted features \cite{bolovinou2013dynamic,feichtenhofer2016dynamic,sikiric2014image}, using multi resolution features \cite{wang2017knowledge,wu2017traffic}, or using multi-sensor fusion \cite{jonsson2011road,jonsson2015road}. Given the success and superior performance of deep learning classification methods, we elected to use a learning based approach, along with experimentation on how to add semantic segmentation and temporal feature aggregation to improve the results.
\begin{table}[]
\centering
\setlength{\tabcolsep}{0.1pt}
\begin{tabular}{|c|c|c|c|c|}
\hline
\textbf{Datasets} & \textbf{Temporal} & \textbf{Purpose} & \textbf{Areas} & \textbf{Road} \\ \hline
Cityscapes\cite{cordts2016cityscapes} & N & Sem. Segmentation & U & Y \\ \hline
ApolloScape\cite{huang2018apolloscape} & Y & Sem. Segmentation & U & Y \\ \hline
BDD-Nexar \cite{madhavan2017bdd} & N & Sem. Segmentation & U, H & Y \\ \hline
Kitti\cite{geiger2012we} & Y & Detection & U & Y \\ \hline
DR(eye)VE \cite{dreyeve2018} & Y & Driver Behavior & U & Y \\ \hline
HDD\cite{ramanishka2018toward} & Y & Driver Behavior & U,H,L & Y \\ \hline
LSUN\cite{yu2015lsun} & N & Scene Understanding & - & N \\ \hline
Places\cite{zhou2017places} & N & Scene Understanding & - & N \\ \hline
Ours & Y & Scene Understanding & U,H,L,R & Y \\ \hline
\end{tabular}
\caption{Comparison of datasets. U-\textit{Urban}, H-\textit{Highway}, L-\textit{Local}, R-\textit{Rural}, Temporal - \textit{Temporal annotations}, Road- \textit{Traffic scenes}}
\label{tablecomp}
\end{table}
\subsection{Temporal Classification}
Video classification and human activity recognition tasks ~\cite{karpathy2014large,tran2015learning,feichtenhofer2016convolutional} have helped develop various state of the art deep learning methods for temporal aggregation. These methods aggregate spatio-temporal features through Long Short-Term Memory modules (LSTM)~\cite{yue2015beyond} or Temporal Convolution Networks (TCN) ~\cite{r2plus1d_cvpr18}. While such methods help activity recognition tasks by understanding object level motion primitives, they do not translate directly to temporal scene classification. In fact, frame based result averaging might be more suitable for our problem. Moreover, the entire scene is the focus of our task, not just the human actor.
Recently, work has been done on region proposal generation~\cite{chao2018rethinking,xu2017r}, where two stream architectures are used to generate the start and end time of an event as well as the activity class. Our work is inspired by these methods. Our best model is a two stream architecture that decouples the region proposal and classification tasks. Specifically, we use the proposal generator to trim the untrimmed video and aggregate the features to classify the entire trimmed segment. This method outperforms simple frame based averaging techniques. For example, it is better to decide whether the class is a 4-way intersection by looking at the segment (approaching, entering, and passing) in its entirety rather than on a per frame basis. This helps the model parse the same intersection from various viewpoints. Details of the proposed method are provided in Section IV.
\section{Overview Of the Honda Scenes Dataset}
\label{dataset}
\subsection{Data collection platform}
Our collection platform involves two instrumented vehicles, each with different camera hardware and setup. The first vehicle contains \textit{three} NVIDIA RCCB(60FOV) and \textit{one} NVIDIA RCCB(100FOV). The second vehicle contains \textit{two} Pointgrey Grasshopper (80FOV) and \textit{one} Grasshopper (100FOV). This varied setup enables development of algorithms that support better generalization to camera hardware and positioning. The cameras cover approximately 160 degree frontal view.
Data was collected around the San Francisco Bay Area region over a course of six months under different weather and lighting conditions. Urban, Residential (Local), Highway, Ramp, and Rural areas are covered in this dataset. Different routes are taken for each recording session to avoid overlap in the scenes. Moreover, targeted data collection is done to reduce the impact of unbalanced class distributions for rare classes such as railway crossings or rural scenes. The total size of the post-processed dataset is approximately 60 GB and 80 video hours. The videos are converted to a resolution of $1280 \times 720$ at $30$ fps.
\subsection{Data set statistics and comparison}
Table~\ref{tablecomp} shows the overall comparison with other datasets. Our dataset is the only large scale driving dataset for the purpose of driving scene understanding. The dataset was annotated with an exhaustive list of classes typical of driving scenarios. Three annotators label each task and cross-check results to ensure quality. Intermediate models are trained to check the annotations and to scale the dataset with a human in the loop. The ELAN \cite{brugman2004annotating} annotation tool is used to annotate videos at multiple levels. The levels include Road Places, Road Environment (Road Types), Weather, and Road Surface condition. The data is split into training and validation so as to avoid any geographical overlap. This enforces generalization of models to new unseen areas, changes in lighting conditions, and changes in viewpoint orientation. Further details about the class distribution are described below.
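The geographically disjoint split can be sketched as follows; the helper name and the region labels are hypothetical and not part of the released tooling:

```python
# Illustrative sketch: assign whole recording sessions to train or
# validation by geographic region, so no area appears in both splits.
def split_by_region(sessions, val_regions):
    """sessions: iterable of (session_id, region) pairs.
    val_regions: regions reserved entirely for validation."""
    train = [sid for sid, region in sessions if region not in val_regions]
    val = [sid for sid, region in sessions if region in val_regions]
    return train, val
```

Because every session from a held-out region goes to validation, a model cannot score well by memorizing the appearance of a particular neighborhood.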
\textbf{Road Places and Environment} The 80 hour video clips are annotated with Road Place and Road Environment labels in a hierarchical and causal manner. There are three levels in the hierarchy: Road Environment is annotated at the top level, followed by the Road Place classes at the mid level, and fine grained annotations such as \textit{approaching}, \textit{entering}, and \textit{passing} at the bottom level. This forms a descriptive dataset that allows our algorithms to learn the inter-dependencies between the levels.
The Road Environment labels include \textit{urban}, \textit{local}, \textit{highway}, and \textit{ramps}. The \textit{local} label covers residential scenes, which are typically less traffic prone and contain more driveways than \textit{urban} scenes. The \textit{ramps} class generally appears at highway exits and as connectors between two highways or between a highway and other road types.
Each fine grained annotation is clearly defined based on the view from the ego-vehicle. The \textit{three-way}, \textit{four-way}, and \textit{five-way} intersections each have \textit{approaching}, \textit{entering}, and \textit{passing} labels based on the ego-vehicle's position relative to the stop-line, traffic signal, and/or stop sign. Similarly, \textit{construction zones}, \textit{rail crossings}, \textit{overhead bridges}, and \textit{tunnels} are labelled based on the ego-vehicle's position relative to the construction event, railway tracks, overhead bridge, and tunnel, respectively. Since the notion of entering does not exist or is too abrupt for the \textit{lane merge}, \textit{lane branch}, and \textit{zebra crossing} classes, these categories are annotated with only \textit{approaching} and \textit{passing} fine grained labels. The overall class distribution is illustrated in Figure~\ref{placechart}.
\textbf{Weather and Road Surface condition} Due to the lack of snow weather conditions in the San Francisco Bay Area, a separate targeted data collection was performed in Japan specifically for snow weather and snow surface conditions. This also helps the weather and surface prediction models generalize well to different places and road types.
Video sequences are semi-automatically labeled using weather data and GPS information before further processing and quality checking by human annotators.
The temporal annotations for weather contain classes such as \textit{clear}, \textit{overcast}, \textit{snowing}, \textit{raining} and \textit{foggy} weather conditions. The Road Surface has \textit{dry}, \textit{wet}, \textit{snow} labels. Only frames with sufficient snow coverage on the road (more than 50\%) are labeled as snow surface condition.
This keeps the road surface and weather condition labels mutually exclusive.
While we do provide temporal annotations, only sampled frames are used for our experiments. When predicting conditions on untrimmed test videos, the results are averaged over a temporal window, as these conditions do not change drastically from frame to frame. Figure~\ref{placechart} shows the distribution of images over classes for weather and road surface conditions.
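The temporal averaging of per-frame predictions can be sketched as follows; the window size and two-class layout are illustrative, not the exact values used in our experiments:

```python
import numpy as np

def smooth_predictions(frame_probs, window=15):
    """Average per-frame class probabilities over a centered sliding
    window, then take the argmax to get per-frame class decisions."""
    frame_probs = np.asarray(frame_probs, dtype=float)  # T x num_classes
    n = len(frame_probs)
    smoothed = np.empty_like(frame_probs)
    for t in range(n):
        lo, hi = max(0, t - window // 2), min(n, t + window // 2 + 1)
        smoothed[t] = frame_probs[lo:hi].mean(axis=0)
    return smoothed.argmax(axis=1)
```

A single noisy frame whose raw argmax disagrees with its neighbors is absorbed by the window, which matches the assumption that weather and surface conditions change slowly.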
\section{METHODOLOGY}
This section describes the proposed methods for dynamic road scene classification for holistic scene understanding with respect to an ego vehicle driving on a road. Our proposed methods are able to predict \textit{road weather}, \textit{road surface condition}, \textit{road environment} and \textit{road places}.
\subsection{Experiments}
All experiments are based on the \textit{resnet50} model. It should be noted that the proposed methods can be applied using any base Convolutional Neural Network (CNN); any performance improvement on the base CNN could transfer directly to our methods. These models run on an NVIDIA P100 at 10 \textit{fps}.
\textbf{Road Weather and Road Surface Condition: } To classify weather and road surface, we chose to train a frame based \textit{resnet50}~\cite{he2016deep} model. For weather and road surface, approximately \textit{$3000$} images of each class were used to fine-tune models pre-trained on the places365~\cite{zhou2017places} dataset. Since \textit{foggy} weather is a rare class, it was not used in our current set of experiments to avoid an unbalanced class distribution. As a first experiment, we fine-tuned a \textit{resnet50} pretrained on the places365 dataset.
The weather category is independent of traffic participants in the scene. Therefore, the base model \textit{resnet50} was fine-tuned on images where traffic participants were masked out as shown in Figure~\ref{sampleimage}. A semantic segmentation model based on Deeplab~\cite{chen2018deeplab} was used to segment and mask the traffic participants and allow the model to focus on the scene.
The results are presented in Table~\ref{weathersurfacetable}, illustrating that semantic masking improves performance.
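The masking step reduces to zeroing out pixels whose segmentation label belongs to a traffic-participant class; the label ids below are placeholders that depend on the segmentation model's label space:

```python
import numpy as np

# Hypothetical label ids for traffic participants (e.g. person, rider,
# car); the actual ids depend on the segmentation model used.
TRAFFIC_IDS = [11, 12, 13]

def mask_traffic_participants(rgb, seg, fill=0):
    """Remove traffic participants from the image so the weather/surface
    classifier attends only to the static scene (road, sky, buildings)."""
    out = np.asarray(rgb).copy()
    out[np.isin(seg, TRAFFIC_IDS)] = fill  # boolean mask over H x W pixels
    return out
```

The masked image keeps the full scene layout while discarding appearance cues from vehicles and pedestrians, which are uninformative for weather.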
\begin{figure}
\centering
\includegraphics[width=4cm,height=1.9cm]{Images/input_mask.png}
\includegraphics[width=4cm,height=1.9cm]{Images/output_mask.png}
\qquad
\caption
{A sample image from our training set and the corresponding RGB-masked image where the traffic participants are removed.}
\label{sampleimage}
\end{figure}
\begin{table}[t]
\begin{center}
\resizebox{\linewidth}{!}{
\begin{tabular}{|*{10}{c|}}
\hline
\multicolumn{1}{|c}{Input} &
\multicolumn{5}{|c}{Weather} & \multicolumn{4}{|c|}{Road Surface} \\ \hline
\hline
-& clear & overcast & rain & snow & \textbf{mean} & dry & wet & snow & \textbf{mean} \\ \hline
\hline
RGB & 0.86 & 0.83 & 0.83 & 0.82 & 0.83 & 0.92 & 0.90 & 0.992 & 0.94 \\ \hline
RGB\textit{(masked)} & \textbf{0.92} & 0.83 & \textbf{0.96} & \textbf{0.94} &\textbf{0.91 } & \textbf{0.93} &\textbf{0.92} & \textbf{0.997} &\textbf{0.95} \\ \hline
\end{tabular}}
\caption{Class-wise F-scores for the weather classification and road surface condition classification models}
\label{weathersurfacetable}
\end{center}
\end{table}
\begin{table}[t]
\begin{center}
\resizebox{\linewidth}{!}{
\begin{tabular}{|*{6}{l|}}
\hline
Input & Highway & Urban & Local & Ramp & \textbf{mean}\\ \hline \hline
RGB & 0.86 & 0.81 & 0.33 & 0.07 & 0.52\\ \hline
RGB\textit{ (masked)} & \textbf{0.91} & \textbf{0.83} & 0.33 & 0.20 & \textbf{0.56}\\\hline
RGBS\textit{ (4 channel)} & 0.89 & 0.81 & \textbf{0.34} & 0.13 & 0.54 \\\hline
S\textit{ (1 channel)}& 0.90 & 0.81 & 0.24 & \textbf{0.25} & 0.55\\ \hline
\end{tabular}
}
\caption{Class-wise F-scores for road environment }
\label{roadtypetables}
\end{center}
\end{table}
\textbf{ Road Environment: } For Road Environment, experiments were performed with \textit{resnet50} pre-trained on the places365 dataset. Similar to the weather and road surface experiments, the input to the model was progressively changed with no change to the training protocol. More specifically, experiments were conducted on the original input images (RGB), images concatenated with semantic segmentation (RGBS), images with traffic participants masked using semantic segmentation (RGB-masked), and finally only a one channel semantic segmentation image (S). The class wise results are shown in Table \ref{roadtypetables}.
Interestingly, while RGB-masked images show the overall best performance, semantic segmentation alone outperforms RGB images, especially for the \textit{ramp} class. This might be due to the fact that scene structure is sufficient to capture the curved and uncluttered nature of highway ramps. However, while decomposing the images into scene semantics allows the model to learn valuable structural information, it loses important texture cues about the type of buildings and driveways. Hence there is considerable confusion between the \textit{local} and \textit{urban} classes, resulting in lower \textit{local} performance.
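For reference, the four input variants can be constructed as in this sketch; the traffic-participant label ids are placeholders that depend on the segmentation label space:

```python
import numpy as np

def build_inputs(rgb, seg, traffic_ids=(11, 12, 13)):
    """The four input variants compared for road environment: RGB,
    4-channel RGBS, RGB with participants masked out, and 1-channel S."""
    rgb = np.asarray(rgb, dtype=float)   # H x W x 3
    seg = np.asarray(seg, dtype=float)   # H x W label map
    masked = rgb.copy()
    masked[np.isin(seg, traffic_ids)] = 0.0          # hard attention
    return {
        "RGB": rgb,
        "RGBS": np.concatenate([rgb, seg[..., None]], axis=-1),
        "RGB-masked": masked,
        "S": seg[..., None],
    }
```

Only the input tensor changes between experiments; the backbone and training protocol stay fixed, so differences in F-score are attributable to the input representation.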
\begin{figure*}
\includegraphics[scale=0.21,trim={0.6cm 0 0.6cm 0.2cm},clip]{Images/placesmodel.jpg}
\centering
\caption{Event Proposal Outline}
\label{places_block}
\end{figure*}
\textbf{ Road Places: } Similar to the road environment experiments, RGB, RGBS, RGB-masked, and S inputs were used to fine-tune a \textit{resnet50} model pre-trained on places365 for road places. In these frame-based experiments, the approaching, entering, and passing sub-classes are treated as separate classes, i.e., \textit{approaching 3-way intersection}, \textit{entering 3-way intersection}, and \textit{passing 3-way intersection} are treated as three different classes.
In addition to the frame-based experiments, standard LSTM\cite{hochreiter1997long} architectures were added to our best frame based models. Such models allow the capture of the temporal aspect of our labels (\textit{approaching, entering, passing}). While LSTM and Bi-LSTM models improve our results, we hypothesize that decoupling the rough locality (\textit{approaching, entering, passing}) from the event class might help our models to better understand the scene.
Hence we propose a two stream architecture for event proposal and prediction, as depicted in Figure \ref{places_block}. The event proposal network proposes candidate frames for the start and end of each event. This involves a classification network that predicts \textit{approaching, entering, passing} as the class labels and allows the model to learn temporal cues, such as \textit{approaching} always being followed by \textit{entering} and then \textit{passing}. These candidate frames are then sent as an event window to the prediction network. The prediction module aggregates all frames in this window through global average pooling and produces a single class label for the entire event. The prediction module is similar to the R2D model in \cite{r2plus1d_cvpr18}. During testing, we first segment out the event windows as proposals, followed by final event classification using the event prediction module.
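A minimal sketch of the two stream idea, assuming per-frame proposal labels and per-frame features are already available; the linear classifier stands in for the full prediction network:

```python
import numpy as np

def propose_event_windows(frame_labels):
    """Group consecutive approaching/entering/passing frames into one
    (start, end) window per event; background frames separate events."""
    windows, start = [], None
    for t, lab in enumerate(frame_labels):
        if lab != "background" and start is None:
            start = t
        elif lab == "background" and start is not None:
            windows.append((start, t))
            start = None
    if start is not None:
        windows.append((start, len(frame_labels)))
    return windows

def classify_event(frame_feats, window, weights):
    """Average-pool the frame features inside the proposed window and
    apply a linear classifier (weights: num_classes x feat_dim)."""
    s, e = window
    pooled = np.asarray(frame_feats)[s:e].mean(axis=0)  # global avg pooling
    return int(np.argmax(weights @ pooled))
```

Pooling over the whole approaching-entering-passing window lets a single decision aggregate evidence from all viewpoints of the same intersection, instead of committing per frame.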
A summary of our results is shown in Table \ref{roadplacestable}. It must be noted that for the temporal experiments, while different input data were used, only our best results (RGB-masked) are displayed. We note that the performance of our model is worse than the Bi-LSTM model for the Branch and Merge classes, possibly because these events are very short and the feature averaging done by the prediction module does not help.
\begin{table}[]
\centering
\begin{tabular}{m{20mm}m{20mm}m{20mm}} \toprule
Type & Input & Mean F-score \\ \bottomrule
& RGB & 0.208 \\ \cmidrule(l){2-3}
& S & 0.169 \\ \cmidrule(l){2-3}
Frame Based & RGBS & 0.216 \\ \cmidrule(l){2-3}
& RGB-Mask & 0.233 \\ \bottomrule
& LSTM &0.243 \\ \cmidrule(l){2-3}
Temporal & Bi-LSTM &0.275 \\ \cmidrule(l){2-3}
 & Event Proposal \textbf{(ours)} & 0.285 \\ \bottomrule
\end{tabular}
\caption{Mean F-scores for road place classification with frame-based and temporal models}
\label{roadplacestable}
\end{table}
\begin{table}[]
\centering
\resizebox{\linewidth}{!}{%
\begin{tabular}{lllllllllllllll}
\hline
\multicolumn{1}{|c|}{\multirow{2}{*}{CLASS}} & \multicolumn{1}{c|}{B} & \multicolumn{3}{c|}{\textbf{I5}} & \multicolumn{3}{c|}{\textbf{RC}} & \multicolumn{3}{c|}{\textbf{C}} & \multicolumn{2}{c|}{\textbf{LM}} & \multicolumn{2}{c|}{\textbf{RM}} \\ \cline{2-15}
\multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{-} & \multicolumn{1}{l|}{A} & \multicolumn{1}{c|}{E} & \multicolumn{1}{c|}{P} & \multicolumn{1}{c|}{A} & \multicolumn{1}{c|}{E} & \multicolumn{1}{c|}{P} & \multicolumn{1}{c|}{A} & \multicolumn{1}{c|}{E} & \multicolumn{1}{c|}{P} & \multicolumn{1}{c|}{A} & \multicolumn{1}{c|}{P} & \multicolumn{1}{c|}{A} & \multicolumn{1}{c|}{P} \\ \hline
\multicolumn{1}{|l|}{Bi-LSTM} & \multicolumn{1}{l|}{0.88} & \multicolumn{1}{l|}{0.00} & \multicolumn{1}{l|}{0.00} & \multicolumn{1}{l|}{0.09} & \multicolumn{1}{l|}{\textbf{0.24}} & \multicolumn{1}{l|}{0.14} & \multicolumn{1}{l|}{0.46} & \multicolumn{1}{l|}{0.02} & \multicolumn{1}{l|}{0.05} & \multicolumn{1}{l|}{0.29} & \multicolumn{1}{l|}{\textbf{0.09}} & \multicolumn{1}{l|}{\textbf{0.28}} & \multicolumn{1}{l|}{\textbf{0.16}} & \multicolumn{1}{l|}{\textbf{0.23}} \\ \hline
\multicolumn{1}{|l|}{Ours} & \multicolumn{1}{l|}{\textbf{0.92}} & \multicolumn{1}{l|}{0} & \multicolumn{1}{l|}{0} & \multicolumn{1}{l|}{0} & \multicolumn{1}{l|}{0.23} & \multicolumn{1}{l|}{\textbf{0.47}} & \multicolumn{1}{l|}{\textbf{0.46}} & \multicolumn{1}{l|}{\textbf{0.02}} & \multicolumn{1}{l|}{\textbf{0.06}} & \multicolumn{1}{l|}{\textbf{0.38}} & \multicolumn{1}{l|}{0.056} & \multicolumn{1}{l|}{0.08} & \multicolumn{1}{l|}{0.13} & \multicolumn{1}{l|}{0.16} \\ \hline
\multicolumn{1}{|l|}{\multirow{2}{*}{CLASS}} & \multicolumn{1}{l|}{-} & \multicolumn{3}{c|}{\textbf{O/B}} & \multicolumn{3}{c|}{\textbf{I3}} & \multicolumn{3}{c|}{\textbf{I4}} & \multicolumn{2}{c|}{\textbf{LB}} & \multicolumn{2}{l|}{\textbf{RB}} \\ \cline{2-15}
\multicolumn{1}{|l|}{} & \multicolumn{1}{l|}{-} & \multicolumn{1}{l|}{A} & \multicolumn{1}{l|}{E} & \multicolumn{1}{l|}{P} & \multicolumn{1}{l|}{A} & \multicolumn{1}{l|}{E} & \multicolumn{1}{l|}{P} & \multicolumn{1}{l|}{A} & \multicolumn{1}{l|}{E} & \multicolumn{1}{l|}{P} & \multicolumn{1}{l|}{A} & \multicolumn{1}{l|}{P} & \multicolumn{1}{l|}{A} & \multicolumn{1}{l|}{P} \\ \hline
\multicolumn{1}{|l|}{Bi-LSTM} & \multicolumn{1}{l|}{-} & \multicolumn{1}{l|}{0.23} & \multicolumn{1}{l|}{0.55} & \multicolumn{1}{l|}{0.53} & \multicolumn{1}{l|}{0.03} & \multicolumn{1}{l|}{\textbf{0.28}} & \multicolumn{1}{l|}{\textbf{0.27}} & \multicolumn{1}{l|}{0.14} & \multicolumn{1}{l|}{0.68} & \multicolumn{1}{l|}{0.66} & \multicolumn{1}{l|}{\textbf{0.36}} & \multicolumn{1}{l|}{\textbf{0.22}} & \multicolumn{1}{l|}{\textbf{0.28}} & \multicolumn{1}{l|}{\textbf{0.28}} \\ \hline
\multicolumn{1}{|l|}{Ours} & \multicolumn{1}{l|}{-} & \multicolumn{1}{l|}{\textbf{0.42}} & \multicolumn{1}{l|}{\textbf{0.58}} & \multicolumn{1}{l|}{\textbf{0.59}} & \multicolumn{1}{l|}{\textbf{0.08}} & \multicolumn{1}{l|}{0.16} & \multicolumn{1}{l|}{0.23} & \multicolumn{1}{l|}{\textbf{0.31}} & \multicolumn{1}{l|}{\textbf{0.70}} & \multicolumn{1}{l|}{\textbf{0.67}} & \multicolumn{1}{l|}{0.30} & \multicolumn{1}{l|}{0.19} & \multicolumn{1}{l|}{0.24} & \multicolumn{1}{l|}{0.22} \\ \hline
\end{tabular}%
}
\caption{Summary of F-score results for Places. B-\textit{Background}, I-\textit{Intersection}, RC-\textit{Railway},
C-\textit{Construction}, LM-\textit{Left Merge}, RM-\textit{Right Merge}, LB\textit{-Left Branch}, RB-\textit{Right Branch}, O/B-\textit{Overhead Bridge}}
\label{roadplacesclasstable}
\end{table}
\subsection{Implementation details}
All resnet50 models fine-tuned in this paper were pre-trained on the places365 dataset. Data augmentation was performed to reduce over-fitting: random flips, random resizing, and random cropping were employed. All experiments were performed on an NVIDIA P100. All videos were sampled at 3Hz to obtain the frames used in experiments. The SGD optimizer was used for the frame-based experiments and the Adam optimizer for the LSTM based experiments.
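The frame sampling and augmentation steps can be sketched as follows; the crop ratio is illustrative, and the actual training used standard random resize/crop transforms:

```python
import numpy as np

def sample_frames(num_frames, src_fps=30, target_hz=3):
    """Indices of frames kept when sampling a src_fps video at target_hz."""
    step = src_fps // target_hz        # 30 fps at 3 Hz -> every 10th frame
    return list(range(0, num_frames, step))

def augment(img, rng):
    """Random horizontal flip followed by a random crop."""
    if rng.random() < 0.5:
        img = img[:, ::-1]             # horizontal flip
    h, w = img.shape[:2]
    ch, cw = int(0.9 * h), int(0.9 * w)   # illustrative 90% crop
    top = int(rng.integers(0, h - ch + 1))
    left = int(rng.integers(0, w - cw + 1))
    return img[top:top + ch, left:left + cw]
```

Sampling at 3 Hz keeps adjacent training frames sufficiently different, while flips and crops perturb viewpoint slightly without changing the scene label.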
\subsection{Visualization of learned representations}
It has been shown that even CNNs trained on just image labels have localization ability~\cite{zhou2016learning,selvaraju2017grad,zhang2018top}. Here, we use one such method, class activation maps~\cite{zhou2016learning}, to show the localization ability of our models. Figure \ref{heatmap} shows some localizations produced by our place, weather, and surface classification CNNs.
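The class activation map of \cite{zhou2016learning} reduces to weighting the final convolutional feature maps by the classifier weights of the target class; a minimal sketch:

```python
import numpy as np

def class_activation_map(conv_feats, fc_weights, class_idx):
    """conv_feats: final conv feature maps, shape C x H x W.
    fc_weights: fully connected weights, shape num_classes x C.
    Returns the H x W activation map for class_idx, scaled to [0, 1]."""
    # Weighted sum over channels: contracts the C axis.
    cam = np.tensordot(fc_weights[class_idx], conv_feats, axes=1)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()               # normalize for heat-map display
    return cam
```

This works because global average pooling makes each fc weight a per-channel importance score, so the weighted feature maps highlight the spatial evidence for the predicted class.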
\begin{figure}
\centering
\subfigure[]{\includegraphics[width=\linewidth]{Images/placeheatmap.jpg}}
\subfigure[]{\includegraphics[width=\linewidth]{Images/weatherheatmap.jpg}}
\qquad
\caption
{Example localization of our models. (a) Place classification: heat maps show that our models are able to localize the distinctive cues for different place classes, (b) Weather and Road Surface conditions: the heat-map activations correctly fall on snow regions to predict the weather while always focusing on road to predict the road surface conditions.}
\label{heatmap}
\end{figure}
\section{Behavior Understanding}
The Honda Research Institute Driving Dataset (HDD)~\cite{ramanishka2018toward} was released to enable research on naturalistic driver behavior understanding. The dataset includes 104 hours of driving using a highly instrumented vehicle. Ego-vehicle driving behaviors such as \textit{left turn, right turn, lane merge} are annotated. A CNN + LSTM architecture, as shown in Figure~\ref{hddmodel}, was proposed to classify the behavior temporally. Due to the inclusion of the vehicle's Controller Area Network (CAN) bus signal, which records measurements such as steering angle and speed, the results for \textit{left turn, right turn} were significantly higher than for difficult actions such as \textit{lane change, lane merge, and branch}. Hence the current architecture, which infers actions directly from images, fails to capture important cues.
We propose that using an intermediate scene context feature representation helps the model attend to important cues in the scene, as evidenced by our attention maps in Figure \ref{heatmap}. For a fair comparison, we use the same RGB images to extract intermediate representations from a frame based \textit{resnet50} model trained on our dataset. This corresponds to the first row of Table \ref{roadplacestable}. While replacing the input with our scene context features, we keep the rest of the architecture and training protocol exactly the same. Since we only replace the model weights, the number of parameters does not change.
As shown in Table~\ref{hddtable}, scene context features improve the overall mean average precision (mAP), especially for rare and difficult classes. Though our model is trained on a different dataset, scene context features embed a better representation than direct image features. Since the model is able to describe the scene and attend to different regions, it can better associate actions with scenes. For example, a lane branch action occurs in the presence of a possible lane branch, and a U-turn generally occurs at an intersection; otherwise, it is a false positive. Hence our dataset and pre-trained models can serve as priors (\textit{false positive removers}) and descriptive scene cues (\textit{soft attention to the scene}) for other driving related tasks.
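A sketch of the only change we make to the HDD pipeline, assuming per-frame scene features and CAN measurements have already been extracted and synchronized (dimensions are illustrative):

```python
import numpy as np

def build_lstm_inputs(scene_feats, can_signals):
    """Per-frame input to the behavior LSTM after our modification:
    scene context features (penultimate layer of our place classifier)
    concatenated with the synchronized CAN-bus measurements
    (e.g. speed, steering angle)."""
    scene_feats = np.asarray(scene_feats)   # T x D_scene
    can_signals = np.asarray(can_signals)   # T x D_can
    assert len(scene_feats) == len(can_signals), "streams must be aligned"
    return np.concatenate([scene_feats, can_signals], axis=1)
```

The downstream LSTM and training protocol are untouched; only the image-feature extractor's weights are swapped, so parameter counts match the baseline.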
\begin{figure}
\includegraphics[width=\linewidth]{Images/HDD.jpg}
\centering
\caption{(a) Tactical Driver behavior understanding pipeline in \cite{ramanishka2018toward}. (b) Modification done using our scene context features.}
\label{hddmodel}
\end{figure}
\begin{table}[t]
\begin{center}
\resizebox{0.7\linewidth}{!}{
\begin{tabular}{|*{3}{c|}}
\hline
Ego-motion behavior & HDD\cite{ramanishka2018toward}& \textbf{Ours}\\ \hline \hline
Intersection Passing & 0.77 & 0.80\\ \hline
Left turn & 0.76 & 0.78\\ \hline
Right turn & 0.77 & 0.78\\ \hline
Left lane Change & 0.42 & 0.55\\ \hline
Right lane change & 0.23 & 0.52\\ \hline
Left lane branch & 0.25 & 0.47\\ \hline
Right lane branch & 0.01 & 0.17\\ \hline
Crosswalk Passing & 0.12 & 0.17\\ \hline
RailRoad Passing & 0.03 & 0.02\\ \hline
Merge & 0.05 & 0.07\\ \hline
U-turn & 0.18 & 0.29\\ \hline
Overall & 0.33 & \textbf{0.42}\\ \hline
\end{tabular}
}
\caption{Mean average precision (mAP) without (HDD) and with (Ours) scene context features.}
\label{hddtable}
\end{center}
\end{table}
\section{Conclusion}
In this paper, we introduced a novel traffic scene dataset and proposed algorithms that utilize the spatio-temporal nature of the dataset for various classification tasks.
We demonstrated experimentally that hard attention through semantic segmentation helps scene classification. For the various scene classes studied in this paper, we showed that the motion of scene elements captured by our temporal models provides a more descriptive representation of traffic scene videos than their static appearance alone.
Our models perform better than the conventional CNN + LSTM architectures used for temporal activity recognition. Furthermore, we have shown the importance of weak object localization for various classification tasks. Experimental observations based on trained models show that dynamic classification of road scenes provide important priors for higher level understanding of driving behavior.
In future work, we plan to explore annotation of object boundaries for rare classes to achieve better supervision for attention. Finally, work is ongoing on developing models that are causal and can support multiple outputs to address issues with place classes which are not mutually exclusive (e.g. construction zone at an intersection).
\section*{ACKNOWLEDGMENT}
We would like to thank our colleagues Yi-Ting Chen, Haiming Gang, Kalyani Polagani, and Kenji Nakai for their support and input.
\clearpage
\section{Introduction}
A key issue in computational neuroscience is the interpretation of neural signaling, as expressed by a neuron's sequence of action potentials.
An emerging notion is that neurons may in fact encode information at multiple timescales simultaneously \cite{fairhall2001multiple,wark2009timescales,panzeri2010sensory,lundstrom2010multiple}: the precise timing of spikes may be conveying high-frequency information, and slower measures, like the rate of spiking, may be relating low-frequency information. Such multi-timescale encoding comes naturally, at least for sensory neurons, as the statistics of the outside world often exhibit self-similar multi-timescale features \cite{vanHateren1997processing} and the magnitude of natural signals can extend over several orders of magnitude.
Since neurons are limited in the rate and resolution with which they can emit spikes, the mapping of large dynamic-range signals into spike-trains is an integral part of attempts at understanding neural coding.
Experiments have extensively demonstrated that neurons adapt their response when facing persistent changes in signal magnitude. Typically, adaptation changes the relation between the magnitude of the signal and the neuron's discharge rate. Since adaptation thus naturally relates to neural coding, it has been extensively scrutinized
\cite{brenner2000adaptive,wark2007sensory,famulare2009feature}. Importantly, adaptation is found to additionally exhibit features like dynamic gain control, when the standard deviation but not the mean of the signal changes \cite{fairhall2001multiple}, and long-range time-dependent changes in the spike-rate response are found in response to large magnitude signal steps, with the changes following a power-law decay (e.g. \cite{drew2006models}).
Tying the notions of self-similar multi-scale natural signals and adaptive neural coding together, it has recently been suggested that neuronal adaptation allows neuronal spiking to communicate a {\em fractional derivative} of the actual computed signal \cite{lundstrom2008fractional,lundstrom2010multiple}. Fractional derivatives are a generalization of standard `integer' derivatives (`first order', `second order'), to real valued derivatives (e.g. `0.5th order'). A key feature of such derivatives is that they are non-local, and rather convey information over essentially a large part of the signal spectrum \cite{lundstrom2008fractional}.
Here, we show how neural spikes can encode temporal signals when the spike-train {\em itself} is taken as the fractional derivative of the signal. We show that this is the case for a signal approximated by a sum of shifted power-law kernels starting at respective times $t_i$ and decaying proportional to $1/(t-t_i)^{\beta}$.
Then, the fractional derivative of this approximated signal corresponds to a sum of spikes at times $t_i$, provided that the order of fractional differentiation $\alpha$ is equal to $1-\beta$: a spike-train {\em{is}} the $\alpha = 0.2$ fractional derivative of a signal approximated by a sum of power-law kernels with exponent $\beta = 0.8$. Such signal encoding with power-law kernels can be carried out for example with simple standard thresholding spiking neurons with a refractory reset following a power-law.
As fractional derivatives contain information over many time-ranges, they are naturally suited for predicting signals. This links to notions of predictive coding, where neurons communicate deviations from expected signals rather than the signal itself. Predictive coding has been suggested as a key feature of neuronal processing in e.g. the retina \cite{hosoya2005dynamic}. For self-similar scale-free signals, future signals may be influenced by past signals over very extended time-ranges: so-called long-memory. For example, fractional Brownian motion (fBm) can exhibit long-memory, depending on its Hurst-parameter $H$: for $H>0.5$, fBm models exhibit long-range dependence (long-memory), where the autocorrelation function follows a power-law decay \cite{wornell1999signal}. The long-memory nature of signals approximated with sums of power-law kernels naturally extends this signal approximation into the future, along the autocorrelation of the signal, at least for self-similar $1/f^{\gamma}$-like signals. The key ``predictive'' assumption we make is that a neuron's spike-train up to time $t$ contains all the information that the past signal contributes to the future signal at $t'>t$.
The correspondence between a spike-train as a fractional derivative and a signal approximated as a sum of power-law kernels is only exact when spike-trains are taken as a sum of Dirac-$\delta$ functions and the power-law kernels as $1/t^{\beta}$. As both responses are singular, neurons would only be able to approximate this. We show empirically how sums of (approximated) $1/t^{\beta}$ power-law kernels can accurately approximate long-memory fBm signals via simple difference thresholding, in an online greedy fashion. Encoding signals this way, we show that the power-law kernels approximate synthesized signals with about half the number of spikes needed to obtain the same Signal-to-Noise-Ratio, when compared to the same encoding method using similar but exponentially decaying kernels.
We further demonstrate the approximation of sine-wave modulated white-noise signals with sums of power-law kernels. The resulting spike-trains, expressed as ``instantaneous spike-rate'', exhibit the phase-lead reported in \cite{lundstrom2010multiple}, with suppression of activity on the ``back'' of the sine-wave modulation, and stronger suppression for lower values of the power-law exponent (corresponding to a higher order of {\em our} fractional derivative).
We find the effect is stronger when encoding the actual sine wave envelope, mimicking the difference between thalamic and cortical neurons reported in \cite{lundstrom2010multiple}. This may suggest that these cortical neurons are more concerned with encoding the sine wave envelope.
The power-law approximation also allows for the transparent and straightforward implementation of temporal signal filtering by a post-synaptic, receiving neuron. Since neural {\em de}coding by a receiving neuron corresponds to adding a power-law kernel for each received spike, modifying this receiving power-law kernel then corresponds to a temporal filtering operation, effectively exploiting the wide-spectrum nature of power-law kernels. This is particularly relevant, since, as has been amply noted \cite{drew2006models,fusi2005cascade}, power-law dynamics can be closely approximated by a weighted sum or cascade of exponential kernels. Temporal filtering would then correspond to simply tuning the weights for this sum or cascade. We illustrate this notion with an encoding/decoding example for both a high-pass and low-pass filter.
\begin{figure}[t]
\center
\includegraphics[width=110mm,height=40mm]{LNL}
\caption{Linear-Non-Linear filter, with spike-decoding front-end and spike-encoding back-end.
}
\label{fig:LNL}
\end{figure}
\vspace{-0.00025cm}
\section{Power-law Signal Encoding}
\vspace{-0.0002cm}
Neural processing can often be reduced to a Linear-Non-Linear (LNL) filtering operation on incoming signals \cite{bishop1995neural} (figure \ref{fig:LNL}), where inputs are linearly weighted and then passed through a non-linearity to yield the neural activation. As this computation yields analog activations, and neurons communicate through spikes, the additional problem faced by spiking neurons is to decode the incoming signal and then encode the computed LNL filter again into a spike-train. The standard spiking neuron model is that of Linear-Nonlinear-Poisson spiking, where spikes have a stochastic relationship to the computed activation \cite{chichilnisky2001simple}. Here, we interpret the spike encoding and decoding in the light of processing and communicating signals with fractional derivatives \cite{lundstrom2008fractional}.
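A minimal numerical sketch of the LNL stage described above follows; the boxcar weights and $\tanh$ nonlinearity are purely illustrative choices, not values from the text:

```python
import numpy as np

def lnl_neuron(x, w, nonlin=np.tanh):
    """Toy Linear-Non-Linear stage: linearly filter the input signal,
    then apply a static nonlinearity to obtain the activation."""
    linear = np.convolve(x, w, mode="full")[:len(x)]  # causal filtering
    return nonlin(linear)

x = np.sin(np.linspace(0.0, 10.0, 500))
activation = lnl_neuron(x, w=np.ones(5) / 5.0)        # 5-tap boxcar
```

A spiking implementation would additionally need the spike decode/encode steps discussed next.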
At least for signals with mainly (relatively) high-frequency components, it has been well established that a neural signal can be decoded with high fidelity by associating a fixed kernel with each spike, and summing these kernels \cite{rieke1999spikes}; keeping track of doublet and triplet spikes allows for even greater fidelity. This approach however only worked for signals with a frequency response lacking low frequencies \cite{rieke1999spikes}. Low-frequency changes lead to ``adaptation'', where the kernel is adapted to fit the signal again \cite{fairhall2001efficiency}. For long-range predictive coding, the absence of low frequencies leaves little to predict, as the effective correlation time of the signals is then typically very short as well \cite{rieke1999spikes}.
Using the notion of predictive coding in the context of (possible) long-range dependencies, we define the goal of signal encoding as follows: let a signal $x_j(t)$ be the result of the continuous-time computation in neuron $j$ up to time $t$, and let neuron $j$ have emitted spikes $t_j$ up to time $t$. These spikes should be emitted such that the signal $x_j(t')$ for $t'<t$ is decoded up to some signal-to-noise ratio, {\em and} these spikes should be predictive for $x_j(t')$ for $t'>t$ in the sense that no additional spikes are needed at times $t'>t$ to convey the predictive information up to time $t$.
Taking kernels as a signal filter of fixed width, as in the general approach in \cite{rieke1999spikes}, has the important drawback that the signal reconstruction incurs a delay for the duration of the filter: its detection cannot be communicated until the filter is actually matched to the signal. This is inherent to any backward-looking filter-matching solution. Alternatively, a predictive coding approach could rely on only a very short backward-looking filter, minimizing the delay in the system, and continuously compute a forward predictive signal. At any time in the future then, only deviations of the actual signal from this expectation are communicated.
\subsection{Spike-trains as fractional derivative}
As recent work has highlighted the possibility that neurons encode fractional derivatives, it is noteworthy that the non-local nature of fractional calculus offers a natural framework for predictive coding. In particular, as we will show, when we assume that the predictive information about the future signal is fully contained in the current set of spikes, a signal approximated as a sum of power-law kernels corresponds to a fractional derivative in the form of a sum of Dirac-$\delta$ functions, which the neuron can obviously communicate through timed spikes.
The fractional derivative $r(t)$ of a signal $x(t)$ is denoted as $D^{\alpha} x(t) $, and intuitively expresses:
\[
r(t) = \frac{d^{\alpha}}{d t^{\alpha}} x(t),
\]
where $\alpha$ is the fractional order, e.g. $0.5$. This is most conveniently computed through the Fourier transformation in the frequency domain, as a simple multiplication:
\[
R(\omega) = H(\omega) X(\omega),
\]
where the Fourier-transformed fractional derivative operator $H(\omega)$ is by definition $(i\omega)^{\alpha}$ \cite{lundstrom2008fractional}, and $X(\omega)$ and $R(\omega)$ are the Fourier transforms of $x(t)$ and $r(t)$ respectively.
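As a numerical sketch (our own illustration, not code from the paper), this Fourier-domain definition can be applied directly to a sampled periodic signal:

```python
import numpy as np

def fractional_derivative(x, alpha, dt=1.0):
    """Order-alpha fractional derivative of a sampled periodic signal,
    computed by multiplying its FFT with (i*omega)**alpha."""
    n = len(x)
    omega = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)
    H = (1j * omega) ** alpha      # the operator H(omega) in the text
    H[0] = 0.0                     # drop the singular DC term
    return np.real(np.fft.ifft(H * np.fft.fft(x)))

# Sanity check: alpha = 1 recovers the ordinary derivative of sin = cos.
t = np.linspace(0.0, 2.0 * np.pi, 1024, endpoint=False)
dx = fractional_derivative(np.sin(t), 1.0, dt=t[1] - t[0])
```

For non-integer $\alpha$ the principal branch of $(i\omega)^{\alpha}$ remains conjugate-symmetric in $\omega$, so the result stays real up to numerical noise.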
We assume that neurons carry out predictive coding by emitting spikes such that all predictive information is contained in the current spikes, and no more spikes will be fired if the signal follows this prediction. Approximating spikes by Dirac-$\delta$ functions, we take the spike-train up to some time $t_0$ to be the fractional derivative of the past signal {\em and} be fully predictive for the expected influence the past signal has on the future signal:
\[
r(t) = \sum_{t_i < t_0} \delta(t-t_i)
\]
The task is to find a signal $\hat{x}(t)$ that corresponds to an approximation of the actual signal $x(t)$ up to $t_0$, and where the predicted signal contribution $x(t)$ for $t>t_0$ due to $x(t<t_0)$ does not require additional future spikes. We note that a sum of power-law decaying kernels with power-law $t^{-\beta}$ for $\beta = 1-\alpha$ corresponds to such a fractional derivative: the Fourier-transform for a power-law decaying kernel of form $t^{-\beta}$ is proportional to $(i\omega)^{\beta-1}$, hence for a signal that just experienced a single step from 0 to 1 at time $t$ we get:
\[
R(\omega) = (i\omega)^{\alpha} (i\omega)^{\beta-1},
\]
and setting $\beta = 1-\alpha$ yields a constant in Fourier-space, which of course is the Fourier-transform of $\delta(t)$. It is easy to check that shifted power-law decaying kernels, e.g. $(t-t_a)^{-\beta}$ correspond to a shifted fractional derivative $\delta(t-t_a)$, and the fractional derivative of a sum of shifted power-law decaying kernels corresponds to a sum of shifted delta-functions. Note that for decaying power-laws, we need $\beta >0$, and for fractional derivatives we require $\alpha >0$.
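The identity can also be checked in the time domain. In the sketch below (our own illustration; unit time step and lengths are arbitrary), a discrete Grünwald-Letnikov fractional derivative of order $\alpha=0.2$ is applied to a sampled $t^{-0.8}$ kernel; the response is concentrated at the first sample, i.e. essentially an impulse:

```python
import numpy as np

# Grunwald-Letnikov fractional derivative D^alpha via the recursive
# binomial weights w_0 = 1, w_k = w_{k-1} * (k - 1 - alpha) / k.
alpha, beta, n = 0.2, 0.8, 5000
w = np.ones(n)
for k in range(1, n):
    w[k] = w[k - 1] * (k - 1 - alpha) / k

x = np.arange(1, n + 1) ** (-beta)    # sampled power-law kernel
y = np.convolve(w, x)[:n]             # D^alpha x, unit time step

print(y[0], abs(y[-1]))               # impulse-like: mass at sample 0
```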
Thus, with the reverse reasoning, a signal approximated as the sum of power-law decaying kernels corresponds to a spike-train with spikes positioned at the start of each kernel, and, beyond the current time $t$, this sum of decaying kernels is interpreted as a prediction of the extent to which the future signal can be predicted by the past signal.
Obviously, both the Dirac-$\delta$ function and the $1/t^{\beta}$ kernels are singular (figure \ref{fig:approxkern}a) and can only be approximated. For real applications, only some part of the $1/t^{\beta}$ curve can be considered, effectively leaving the magnitude of the kernel and the high-frequency component (the extent to which the initial $1/t^{\beta}$ peak is approximated) as free parameters. Figure \ref{fig:approxkern}b illustrates the signal approximated by a random spike train; compared to a sum of exponentially decaying $\alpha$-kernels, the long-memory effects of power-law decay kernels are evident.
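For illustration, the decoding step of figure \ref{fig:approxkern}b, summing one approximated power-law kernel per spike, can be sketched as follows (the spike times and parameters are made-up values, and the kernel uses the sigmoid-smoothed form of eq. \eqref{eq:plaw}):

```python
import numpy as np

def powerlaw_kernel(t, beta=0.8, k=50.0):
    """Approximated 1/t**beta kernel: a sigmoidal rise
    v(t,k) = 2*logsig(k*t) - 1 removes the singularity at t = 0."""
    out = np.zeros_like(t)
    pos = t > 0
    v = 2.0 / (1.0 + np.exp(-k * t[pos])) - 1.0
    out[pos] = v * t[pos] ** (-beta)
    return out

dt = 1e-3
t = np.arange(0.0, 10.0, dt)
spike_times = [0.5, 1.2, 3.0]            # hypothetical spike train
xhat = sum(powerlaw_kernel(t - ts) for ts in spike_times)
```

The slowly decaying tails are what carry the long-memory prediction: the reconstruction stays well above zero long after the last spike.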
\begin{figure}[t]
\center
\includegraphics[width=120mm]{figApproxKernB}
\caption{a) Signal $x(t)$ and corresponding fractional derivative $r(t)$: $1/t^{\beta}$ power-laws and delta-functions; b) power-law approximation, timed to spikes; compared to sum of $\alpha$-functions (black dashed line). c) Approximated $1/t^{\beta}$ power-law kernel for different values of $k$ from eq. (2). d) The approximated $1/t^{\beta}$ power-law kernel (blue line) can be decomposed as a weighted sum of $\alpha$-functions with various decay time-constants (dashed lines).
\vspace{-0.00025cm}
}
\label{fig:approxkern}
\end{figure}
\subsection{Practical encoding}
To explore the efficacy of the power-law kernel approach to signal encoding/decoding, we take a standard thresholding online approximation approach, where neurons communicate only deviations between the current computed signal $x(t)$ and the emitted approximated signal $\hat{x}(t)$ exceeding some threshold $\theta$.
The emitted signal $\hat{x}(t)$ is constructed as the (delayed) sum of filter kernels $\kappa$ each starting at the time of the emitted spike:
\[
\hat{x}(t) = \sum_{t_j < t} \kappa(t-(t_j+\Delta)),
\]
the delay $\Delta$ corresponds to the time-window over which the neuron considers the difference between computed and emitted signal. In a spiking neuron, such computation would be implemented simply by, for instance, a refractory current following a power-law. Allowing for both positive and negative spikes (corresponding to tightly coupled neurons with reversed threshold polarity \cite{rieke1999spikes}), this would expand to:
\[
\hat{x}(t) = \sum_{t_j^+ < t} \kappa(t-(t_j^+ +\Delta))-\sum_{t_j^- < t} \kappa(t-(t_j^- +\Delta)).
\]
Considering just the fixed time-window thresholding approach, a spike is emitted each time the difference between the computed signal $x(t)$ and the emitted signal $\hat{x}(t)$ plus (or minus) the kernel $\kappa(t)$ summed over some time-window exceeds the threshold $\theta$:
\begin{align}
r(t_0) & = \delta(t_0) & \quad \text{if } \sum_{\tau=t_0-\Delta}^{t_0} \left(|x(\tau)-\hat{x}(\tau)| - |x(\tau)-(\hat{x}(\tau)+\kappa(\tau))|\right) > \theta, \nonumber \\
& = -\delta(t_0) & \quad \text{if } \sum_{\tau=t_0-\Delta}^{t_0} \left(|x(\tau)-\hat{x}(\tau)| - |x(\tau)-(\hat{x}(\tau)-\kappa(\tau))|\right) > \theta,
\end{align}
the signal approximation improvement is computed here as the difference between the absolute error of the current approximation and the absolute error when a kernel is added (or subtracted).
As an approximation of $1/t^{\beta}$ power-law kernels, we let the kernel first quickly rise, and then decay according to the power-law. For a practical implementation, we use a $1/t^{\beta}$ signal multiplied by a modified version of the logistic sigmoid function $\text{logsig}(t) = 1 / (1 + \exp(-t))$: $v(t,k) = 2 \,\text{logsig}(k t)-1$, such that the kernel becomes:
\begin{equation}
\kappa(t) = \lambda v(t,k) 1/t^{\beta},
\label{eq:plaw}
\end{equation}
where $\kappa(t)$ is zero for $t<0$, and parameter $k$ determines the angle of the initial increasing part of the kernel. The resulting kernel is further scaled by a factor $\lambda$ to achieve a certain signal approximation precision (kernels for power-law exponent $\beta = 0.5$ and several values of $k$ are shown in figure \ref{fig:approxkern}c). As an aside, the resulting (normalized) power-law kernel can be approximated very accurately, over multiple orders of magnitude, by a sum of just 11 $\alpha$-function exponentials (figure \ref{fig:approxkern}d).
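The greedy thresholding scheme of eq. (1) can be sketched as follows; the step signal, window length, threshold and kernel scale $\lambda$ are all illustrative choices, not values from the paper:

```python
import numpy as np

def greedy_encode(x, kappa, window, theta):
    """Greedy online thresholding encoder, a sketch of eq. (1):
    emit a +/- spike whenever adding (subtracting) the kernel reduces
    the approximation error over the next `window` samples by > theta."""
    n, L = len(x), len(kappa)
    xhat = np.zeros(n + L)
    pos, neg = [], []
    for t0 in range(n - window):
        err = x[t0:t0 + window] - xhat[t0:t0 + window]
        base = np.sum(np.abs(err))
        if base - np.sum(np.abs(err - kappa[:window])) > theta:
            xhat[t0:t0 + L] += kappa       # positive spike at t0
            pos.append(t0)
        elif base - np.sum(np.abs(err + kappa[:window])) > theta:
            xhat[t0:t0 + L] -= kappa       # negative spike at t0
            neg.append(t0)
    return xhat[:n], pos, neg

# Toy usage on a step signal (1 ms sampling, illustrative parameters).
tk = np.arange(1, 2000) * 1e-3
kappa = 0.05 * (2.0 / (1.0 + np.exp(-50.0 * tk)) - 1.0) * tk ** -0.8
x = np.concatenate([np.zeros(200), np.ones(1800)])
xhat, pos, neg = greedy_encode(x, kappa, window=50, theta=1.0)
```

After an initial burst at the step onset, only occasional refill spikes are needed because the power-law tails keep predicting the sustained level.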
Next, we compare the efficiency of signal approximation with power-law predictive kernels as compared to the same approximation using standard fixed kernels. For this, we synthesize self-similar signals with long-range dependencies. We first remark on some properties of self-similar signals with power-law statistics, and on how to synthesize them.
\subsection{Self-similar signals with power-law statistics}
There is extensive literature on the synthesis of statistically self-similar signals with $1/f$-like statistics, going back at least to Kolmogorov \cite{kolmogorov1940kurven} and Mandelbrot \cite{mandelbrot1968fractional}. Self-similar signals exhibit slowly decaying variances, long-range dependencies and a spectral density following a power law. Importantly, for wide-sense self-similar signals, the autocorrelation function also decays following a power-law.
Although various distinct classes of self-similar signals with $1/f$-like statistics exist \cite{wornell1999signal}, fractional Brownian motion (fBm) is a popular model for many natural signals. Fractional Brownian motion is characterized by its Hurst-parameter $H$, where $H=0.5$ corresponds to regular Brownian motion, and fBm models with $H>0.5$ exhibit long-range (positive) dependence. The spectral density of an fBm signal is proportional to a power-law, $1/f^{\gamma}$, where $\gamma = 2H+1$.
We used fractional Brownian motion to generate self-similar signals for various $H$ values, using the {\tt wfbm} function from the Matlab wavelet toolbox.
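Lacking Matlab's {\tt wfbm}, a simple spectral-synthesis stand-in can be used for experimentation: shape white Gaussian phases to a $1/f^{\gamma}$ amplitude spectrum with $\gamma=2H+1$. Note this is our own approximation, not an exact fBm sampler:

```python
import numpy as np

def spectral_fbm_like(n, H, seed=0):
    """Approximate a long-memory signal with a 1/f**gamma spectrum,
    gamma = 2H + 1, by shaping random phases in Fourier space."""
    rng = np.random.default_rng(seed)
    gamma = 2.0 * H + 1.0
    f = np.fft.rfftfreq(n, d=1.0)
    amp = np.zeros_like(f)
    amp[1:] = f[1:] ** (-gamma / 2.0)      # power spectrum ~ 1/f**gamma
    phases = np.exp(2j * np.pi * rng.random(len(f)))
    x = np.fft.irfft(amp * phases, n=n)
    return x - x[0]                        # start at zero, like fBm

x = spectral_fbm_like(4096, H=0.7)
```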
\section{Signal encoding/decoding}
\vspace{-0.00025cm}
\subsection{Encoding long-memory self-similar signals}
\begin{figure}[t]
\center
\includegraphics[width=140mm]{figApproxSigB}
\caption{Left: example of encoding of an fBm signal with power-law kernels. Using an exponentially decaying kernel (inset) required 1398 spikes vs. 618 for the power-law kernel ($k=50$), for the same SNR. Right: SNR for various $\beta$ power-law exponents using a fixed number of spikes (48Hz), with curves for different $H$-parameters, each curve averaged over five 16s signals. The dashed blue curve plots the $H=0.6$ curve, using fewer spikes (36Hz); the flat bottom dotted line shows the average performance of the non-power-law exponentially decaying kernel, also for $H=0.6$.
\vspace{-0.00025cm}
}
\label{fig:approxsign}
\end{figure}
We applied the thresholded kernel approximation outlined above to synthesized fBm signals with $H>0.5$, to ensure long-term dependence in the signal. An example of such an encoding is given in figure \ref{fig:approxsign}, left panel, using both positive and negative spikes (inset, red line: the power-law kernel used). When encoding the same signal with kernels without the power-law tail (inset, blue line), the approximation required more than twice as many spikes for the same Signal-to-Noise-Ratio (SNR).
In figure \ref{fig:approxsign}, right panel, we compared the encoding efficacy for signals with different $H$-parameters, as a function of the power-law exponent, using the same number of spikes for each signal (achieved by changing the $\lambda$ parameter and the threshold $\theta$).
We find that more slowly varying signals, corresponding to higher $H$-parameters, are better encoded by the power-law kernels. More surprisingly, we find that signals are consistently best encoded for low $\beta$-values, in the order of $0.1 - 0.3$.
Similar results were obtained for different values of $k$ in equation \eqref{eq:plaw}.
We should remark that without negative spikes, there is no longer a clear performance advantage for power-law kernels (even for large $\beta$): where power-law kernels are beneficial on the rising part of a signal, they lose on downslopes where their slow decay cannot follow the signal.
\subsection{Sine-wave modulated white-noise}
Fractional derivatives as an interpretation of neuronal firing-rate has been put forward by a series of recent papers \cite{lundstrom2008fractional,lundstrom2009sensitivity,lundstrom2010multiple}, where experimental evidence was presented to suggest such an interpretation.
A key finding in \cite{lundstrom2010multiple} was that the instantaneous firing rate of neurons along various processing stages of a rat's whisker movement exhibit a phase-lead relative to the amplitude of the movement modulation. The phase-lead was found to be greater for cortical neurons as compared to thalamic neurons. When the firing rate corresponds to the $\alpha$-order fractional derivative, the phase-lead would correspond to greater fractional order $\alpha$ in the cortical neurons \cite{lundstrom2008fractional} . We used the sum-of-power-laws to approximate both the sine-wave-modulated white noise and the actual sine-wave itself, and found similar results (figure \ref{fig:phaselead}): smaller power-law exponents, in our interpretation also corresponding to larger fractional derivative orders, lead to increasingly fewer spikes at the back of the sine-wave (both in the case where we encode the signal with both positive and negative spikes -- then counting only the positive spikes -- and when the signal is approximated with only positive spikes -- not shown). We find an increased phase-lead when approximating the actual sine-wave kernel as opposed to the white-noise modulation, suggesting that perhaps cortical neurons more closely encode the former as compared to thalamic neurons.
\begin{figure}[t]
\center
\includegraphics[width=140mm,height=50mm]{sinePhaseLead}
\caption{Sinewave phase-lead. Left: when encoding sine-wave modulated white noise (inset); right: encoding the sine-wave signal itself (inset). Average firing rate is computed over 100ms, and normalized to match the sine-wave kernel.
\vspace{-0.00035cm}
}
\label{fig:phaselead}
\end{figure}
\subsection{Signal Frequency Filtering}
For a receiving neuron $i$ to properly interpret a spike-train $r(t)_j$ from neuron $j$, both neurons would need to keep track of past events over extended periods of time: current spikes have to be added to or subtracted from the future expectation signal that was already communicated through past spikes. The required power-law processes can be implemented in various manners, for instance as a weighted sum or a cascade of exponential processes \cite{drew2006models,lundstrom2008fractional}.
A natural benefit of implementing power-law kernels as a weighted sum or cascade of exponentials is that a receiving neuron can carry out temporal signal filtering simply by tuning the respective weight parameters for the kernel with which it decodes spikes into a signal approximation.
In figure \ref{fig:frequencyfilt2}, we illustrate this with power-law kernels that are transformed into high-pass and low-pass filters. We first approximated our power-law kernel \eqref{eq:plaw} with a sum of 11 exponentials (depicted in the left-center inset). Using this approximation, we encoded the signal (figure \ref{fig:frequencyfilt2}, center). The signal was then reconstructed using the resultant spikes, using the power-law kernel approximation, but with some zeroed out exponentials (respectively the slowly decaying exponentials for the high-pass filter, and the fast-decaying kernels for the low-pass filter). Figure \ref{fig:frequencyfilt2}, most right, shows the resulting filtered signal approximations. Obviously, more elaborate tuning of the decoding kernel with a larger sum of kernels can approximate a vast variety of signal filters.
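The construction can be sketched as follows; the log-spaced time constants and the split point between ``fast'' and ``slow'' terms are illustrative choices:

```python
import numpy as np

# Approximate the power-law kernel of eq. (2) by a weighted sum of
# exponentials (least-squares weights over a fixed log-spaced basis),
# then derive filtered decoding kernels by zeroing weight subsets.
dt = 1e-3
t = np.arange(1, 5000) * dt
kappa = (2.0 / (1.0 + np.exp(-50.0 * t)) - 1.0) * t ** -0.8

taus = np.logspace(-2.5, 1.0, 11)            # 11 time constants
E = np.exp(-t[:, None] / taus[None, :])      # exponential basis
w, *_ = np.linalg.lstsq(E, kappa, rcond=None)

w_low = w.copy();  w_low[:6] = 0.0           # drop fast terms: low-pass
w_high = w.copy(); w_high[6:] = 0.0          # drop slow terms: high-pass
kappa_low, kappa_high = E @ w_low, E @ w_high
```

By linearity, the two filtered kernels sum back to the full approximation, so a receiving neuron can split an incoming spike-train into frequency bands at essentially no extra cost.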
\begin{figure}[t]
\center
\includegraphics[width=140mm]{figFilterFreq}
\caption{Illustration of frequency filtering with modified decoding kernels. The square boxes show the respective kernels in both time and frequency space. See text for further explanation.
\vspace{-0.0005cm}
}
\label{fig:frequencyfilt2}
\end{figure}
\section{Discussion}
\vspace{-0.00045cm}
Taking advantage of the relationship between power-laws and fractional derivatives, we outlined the peculiar fact that a sum of Dirac-$\delta$ functions, when taken as a fractional derivative, corresponds to a signal in the form of a sum of power-law kernels. Exploiting the obvious link to spiking neural coding, we showed how a simple thresholding spiking neuron can compute a signal approximation as a sum of power-law kernels; importantly, such a simple thresholding spiking neuron closely fits standard biological spiking neuron models, when the refractory response follows a power-law decay (e.g. \cite{pozzorini2010}). We demonstrated the usefulness of such an approximation when encoding slowly varying signals, finding that encoding with power-law kernels significantly outperformed similar but exponentially decaying kernels that do not take long-range signal dependencies into account.
Compared to the work where the firing rate is considered as a fractional derivative, e.g. \cite{lundstrom2008fractional}, the present formulation extends the notion of neural coding with fractional derivatives to individual spikes, and hence finer temporal variations: each spike effectively encodes very local signal variations, while also keeping track of long-range variations.
The interpretation in \cite{lundstrom2008fractional} of the fractional derivative $r(t)$ as a {\em rate} leads to a 1:1 relation between the fractional derivative order and the power-law decay exponent of adaptation of about $0.2$ \cite{lundstrom2008fractional,xu1996logarithmic,drew2006models}. For such a fractional derivative order $\alpha$, our derivation implies a power-law exponent for the power-law kernels $\beta = 1-\alpha \approx 0.8$, consistent with our sine-wave reconstruction, as well as with recent adapting spiking neuron models \cite{pozzorini2010}. We find that when signals are approximated with non-coupled positive and negative neurons (i.e. one neuron encodes the positive part of the signal, the other the negative), such faster-decaying power-law kernels encode more efficiently than slower decaying ones. Non-coupled signal encoding obviously fares badly when signals rapidly change polarity; this however seems consistent with human illusory experiences \cite{stocker2009}.
As noted, the singularity of $1/t^{\beta}$ power-law kernels means that the initial part of the kernel can only be approximated. Here, we initially focused our simulation on the use of long-range power-law kernels for encoding slowly varying signals. A more detailed approximation of this initial part of the kernel may be needed to incorporate effects like gain modulation \cite{hong2008intrinsic,famulare2009feature}, and to determine to what extent the power-law kernels already account for this phenomenon. This would also provide a natural link to existing neural models of spike-frequency adaptation, e.g. \cite{jolivet2006integrate}, as they are primarily concerned with modeling the spiking neuron behavior rather than the computational aspects.
We used a greedy online thresholding process to determine when a neuron would spike to approximate a signal, this in contrast to offline optimization methods that place spikes at optimal times, like Smith \& Lewicki \cite{smith2005efficient}. The key difference of course is that the latter work is concerned with decoding a signal, and in effect attempts to determine the effective neural (temporal) filter. As we aimed to illustrate in the signal filtering example, these notions are not mutually exclusive: a receiving neuron could very well filter the incoming signal with a carefully shaped weighted sum of kernels, and then, when the filter is activated, signal the magnitude of the match through fractional spiking.
Predictive coding seeks to find a careful balance between encoding known information as well as future, derived expectations \cite{tishby2000information}. It does not seem unreasonable to formulate this balance as a no-going-back problem, where current computations are projected forward in time, and corrected where needed. In terms of spikes, this would correspond to our assumption that, absent new information, no additional spikes need to be fired by a neuron to transmit this forward information.
The kernels we find are somewhat in contrast to the kernels found by Bialek et al. \cite{rieke1999spikes}, where the optimal filter exhibited both a negative and a positive part and no long-range ``tail''. Several practical issues may contribute to this difference, not least the relative absence of low frequency variations, as well as the fact that the signal considered is derived from the fly's H1 neurons. These two neurons have only partially overlapping receptive fields, and the separation into positive and negative spikes is thus slightly more intricate. We need to remark though that we see no impediment for the presented signal approximation to be adapted to such situations, or situations where more than two neurons encode fractions of a signal, as in population coding, e.g. \cite{huys2007fast}.
Finally, we would like to remark that the issue of long-range temporal dependencies such as discussed here seems to be relatively unappreciated. As pointed out in \cite{drew2006models}, long-range power-law dynamics would seem to offer a variety of ``hooks'' for computation through time, like for temporal difference learning and relative temporal computations (and possibly exploiting the many noted correspondences between spatial and temporal statistics \cite{schwartz2007space}).
{\bf Acknowledgement: } work by JOR supported by NWO Grant 612.066.826, SMB partly by NWO Grant 639.021.203.
\small
\bibliographystyle{unsrt}
\section{Introduction}\label{intro}
For a projective algebraic variety $X\subset {\mathbb P} W$, the \ti{$k$-th secant variety} $\sigma_k(X)$ is defined
by
\begin{equation}
\sigma_k(X) = \overline{ \bigcup_{x_1,\ldots,x_k\in X}{\mathbb P} \langle x_1,\ldots,x_k\rangle }\subset {\mathbb P} W
\end{equation}
where $\langle x_1,\ldots,x_k\rangle\subset W$ denotes the linear span of the points $x_1,\ldots,x_k$ and the overline
denotes Zariski closure. Let $V$ be an $(n+1)$-dimensional complex vector space and $W=S^d V$ be the subspace of symmetric $d$-way tensors in $V^{\otimes d}$. Equivalently, we can also think of $W$ as the space of homogeneous polynomials of degree $d$ in $n+1$ variables. When $X$ is the Veronese embedding $v_d({\mathbb P} V)$ of rank one symmetric $d$-way tensors over $V$ in ${\mathbb P} W$, then $\sigma_k(X)$ is the variety of symmetric $d$-way tensors of border rank at most $k$ (see subsection \ref{prelim} for terminology and details).
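A classical example (standard, and not specific to this paper) illustrates why the Zariski closure is needed: for $d\ge 3$ the monomial $x^{d-1}y$ has symmetric rank $d$, yet
\[
x^{d-1}y \;=\; \lim_{t\to 0}\,\frac{(x+ty)^{d}-x^{d}}{dt},
\]
so $[x^{d-1}y]$ lies in $\sigma_2(v_d({\mathbb P} V))$ as a limit of rank-two forms; its border rank is $2$.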
If $X$ is an irreducible variety and $\sigma_k(X)$ its $k$-secant variety, then it is well known that
\begin{equation}\label{sing_ineq}
\Sing(\sigma_k (X))\supseteq \sigma_{k-1}(X)~,
\end{equation}
(e.g. see \cite[coro. 1.8]{Ad}). Equality holds in many basic examples, like determinantal varieties defined by minors of a generic matrix, but the strict inequality also holds for some other tensors (e.g. just have a look at \cite[coro. 7.17]{MOZ} for the case $\sigma_2(X)$ when $X$ is the Segre embedding ${\mathbb P} V_1\times\cdots\times{\mathbb P} V_r$ or \cite[figure 1, p.18]{AOP2} for the third secant variety of Grassmannian $\mathbb{G}(2,6)$).
Therefore, it is interesting to compute more cases and to give a general treatment of the singularities of secant varieties. Further, knowledge of the singular locus is crucial for the so-called \ti{identifiability problem}, which asks when a tensor decomposition is unique (see \cite[thm. 4.5]{COV}) and has recently received much attention in this context. In this paper, we deal with the case of the third secant variety of Veronese embeddings, $\sigma_3(v_d ({\mathbb P} V))$.
From now on, let $X$ be the Veronese variety $v_d({\mathbb P} V)$ in ${\mathbb P} S^d V={\mathbb P}^N$ with $N=\dim_{\mathbb C} S^d V-1={n+d\choose n}-1$. One could ask the following problem:
\begin{Problem}\label{prbm_Vero} Let $V={\mathbb C}^{n+1}$. Determine for which triples $(k,d,n)$ with $k\ge2, d\ge2$ and $n\ge1$ the singular locus satisfies
\[\Sing(\sigma_k(v_d({\mathbb P} V)))=\sigma_{k-1}(v_d({\mathbb P} V))~,\]
and describe $\Sing(\sigma_k(v_d({\mathbb P} V)))$ when this equality fails.
\end{Problem}
We would like to remark here that our question is a set-theoretic one. First, it is classical that the equality in Problem \ref{prbm_Vero} holds for the binary case (i.e. $n=1$) (see e.g. \cite[thm. 1.45]{IK}) and also for the case of quadratic forms (i.e. $d=2$) (see e.g. \cite[thm. 1.26]{IK}). In the case of $k=2$, Kanev proved in \cite[thm. 3.3]{Kan} that it holds for any $d,n$. Thus, we only need to take care of the cases $k\ge3, d\ge3$ and $n\ge2$. For the first case $(k,d,n)=(3,3,2)$, it is also well-known that the singular locus of the Aronhold hypersurface $\sigma_3(v_3({\mathbb P}^2))$ in ${\mathbb P}^9$ is equal to $\sigma_2(v_3({\mathbb P}^2))$ (e.g. \cite[see remarks in section 2]{Ott}). See the table in Figure \ref{sing3table}.
\section{Singularities of third secant of $v_d({\mathbb P}^n)$}\label{sect_Vero}
Choose any form $f\in S^{d}V$. We define the \defi{span} of $f$ to be $\langle f\rangle:=\{\partial\in V^{\vee}|\partial(f)=0\}^{\perp}$ in $V$. So, $f$ also belongs to $S^d \langle f\rangle$ and $\dim\langle f\rangle$ is the minimal number of variables in which we can express $f$ as a homogeneous polynomial of degree $d$. Note that $\dim\langle f\rangle=1$ means $f\in v_d({\mathbb P} V)$ by definition. We often abuse notation and write $f$ for the point $[f]$ in ${\mathbb P} S^d V$ represented by it. We say a form $f\in\sigma_3(X)\setminus \sigma_2(X)$ is \ti{degenerate} if $\dim\langle f\rangle=2$ and \ti{non-degenerate} otherwise. We begin this section by stating our main theorem for the cases $k=3, d\ge3$ and $n\ge2$.
\begin{Thm}[Singularity of $\sigma_3(v_d({\mathbb P}^n))$]\label{sing3vero} Let $X$ be the $n$-dimensional Veronese variety $v_d({\mathbb P} V)$ in ${\mathbb P}^N$ with $N={n+d\choose d}-1$. Then the singular locus satisfies
\begin{displaymath}
\Sing(\sigma_3(X))=\sigma_2(X)
\end{displaymath}
as a set for all $(d,n)$ with $d\ge3$ and $n\ge2$ unless $d=4$ and $n\ge3$. In the exceptional case $d=4$, for each $n\ge3$ the singular locus $\Sing(\sigma_3(v_4({\mathbb P} V)))$ is $D\cup\sigma_2(v_4({\mathbb P} V))$, where $D$ denotes the locus of all the degenerate forms $f$ (i.e. $\dim\langle f\rangle=2$) in $\sigma_3(v_4({\mathbb P} V))\setminus\sigma_2(v_4({\mathbb P} V))$.
\end{Thm}
\begin{proof}
Combine Corollary \ref{thm_d=3}, Theorem \ref{thm_nondeg} and \ref{thm_deg}.
\end{proof}
We can sum up all the relevant results into the following table:
\begin{figure}[!htb]
$$
\begin{array}{|l|c|c|}
\hline
{\bf (k,d,n)}&{\bf Singular~locus~of~}\sigma_k(v_d({\mathbb P}^n))&{\bf Comment~\&~Reference}\\
\hline
(\ge2,\ge2,1)&\sigma_{k-1}&\trm{Classical; case of binary forms, \cite[thm. 1.45]{IK}}\\
\hline
(\ge2,2,\ge1)&\sigma_{k-1}&\trm{Symmetric matrix case, \cite[thm. 1.26]{IK}}\\
\hline
(2,\ge2,\ge1)&\sigma_{1}&\trm{\cite[thm. 3.3]{Kan}}\\
\hline
(3,3,2)&\sigma_{2}&\trm{Aronhold hypersurface, \cite[remarks in \S 2]{Ott}}\\
\hline
(3,\ge4,2)&\sigma_{2}&\trm{Thm. \ref{sing3vero}+Thm. \ref{thm_deg}}\\
\hline
(3,3,\ge3)&\sigma_{2}&\trm{Coro. \ref{thm_d=3}}\\
\hline
(3,4,\ge3)&D\cup\sigma_{2}&\trm{Only exceptional case ($d=4$), Thm. \ref{thm_deg}}\\
\hline
(3,\ge5,\ge3)&\sigma_{2}&\trm{Thm. \ref{sing3vero}+Thm. \ref{thm_deg}}\\
\hline
\end{array}~.
$$
\caption{Singular locus of $\sigma_k(v_d({\mathbb P}^n))$.}
\label{sing3table}
\end{figure}
\subsection{Preliminaries}\label{prelim}
For the proof, we recall some preliminaries on (border) ranks and geometry of symmetric tensors and list a few known facts on them for future use.
First of all, the equations defining $\sigma_3(v_d({\mathbb P} V))$ come from so-called \defi{symmetric flattenings}.\\
Consider the polynomial ring $S^\bullet V={\mathbb C}[x_0,\ldots,x_n]$ (we call this ring $S$) and consider another polynomial ring $T=S^\bullet V^\vee={\mathbb C}[y_0,\ldots,y_n]$, where $V^\vee$ is the \ti{dual space} of $V$. Define the differential action of $T$ on $S$ as follows: for any $g\in T_{d-k}, f\in S_d$, we set
\begin{equation}
g\cdot f=g(\partial_0,\partial_1,\ldots,\partial_n)f\in S_k~.
\end{equation}
Let us take bases for $S_k$ and $T_{d-k}$ as
\begin{align}\label{bases}
\mbf{X}^{I}=\frac{1}{i_0 !\cdots i_n !}x_0^{i_0}\cdots x_n^{i_n}&\quad\trm{and}\quad
\mbf{Y}^{J}=y_0^{j_0}\cdots y_n^{j_n}~,
\end{align}
with $|I|=i_0+\cdots+i_n=k$ and $|J|=j_0+\cdots+j_n=d-k$. For a given $f=\sum_{|I|=d}a_I\cdot \mbf{X}^I$ in $S_d$, we have a linear map
\[\phi_{d-k,k}(f):T_{d-k}\to S_k,\quad g\mapsto g\cdot f\] for any $k$ with $1\le k\le d-1$, which can be represented by the following ${k+n\choose n}\times{d-k+n\choose n}$-matrix:
\begin{equation}\label{flatmat}
\left(\begin{array}{ccc}
&&\\
&a_{I,J}&\\
&&\end{array}\right) \quad \trm{with $a_{I,J}=a_{I+J}$}~,
\end{equation}
in the bases defined above. We call this the \ti{symmetric flattening} (or \ti{catalecticant}) of $f$. It is easy to see that the transpose $\phi_{d-k,k}(f)^{T}$ is equal to $\phi_{k,d-k}(f)$.
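To make the flattening (\ref{flatmat}) concrete, here is a minimal sympy sketch (not part of the paper; the helper names and the choices $n=2$, $d=4$ are ours, and we use the unscaled monomial bases, which does not affect the rank): it builds $\phi_{d-k,k}(f)$ as the matrix of coefficients of the derivatives $\mbf{Y}^J\cdot f$.

```python
# A sketch (not from the paper): the symmetric flattening phi_{d-k,k}(f)
# for n = 2, realized over the unscaled monomial bases (rank is unaffected).
from itertools import combinations_with_replacement
from sympy import symbols, diff, expand, Matrix, Mul, Poly

xs = symbols('x0 x1 x2')
x0, x1, x2 = xs

def monomials(deg):
    """All monomials of total degree deg in x0, x1, x2."""
    return [Mul(*c) for c in combinations_with_replacement(xs, deg)]

def catalecticant(f, d, k):
    """Matrix of phi_{d-k,k}(f): rows are the degree-(d-k) differential
    operators Y^J, columns the monomial basis of S_k."""
    rows, cols = monomials(d - k), monomials(k)
    entries = []
    for r in rows:
        exps = Poly(r, *xs).monoms()[0]   # exponent vector J of the operator
        g = f
        for v, e in zip(xs, exps):
            g = diff(g, v, e)             # apply Y^J to f
        p = Poly(expand(g), *xs)
        entries.append([p.coeff_monomial(c) for c in cols])
    return Matrix(entries)

# Fermat quartic: border rank 3, and indeed rank(phi_{2,2}) = 3
assert catalecticant(x0**4 + x1**4 + x2**4, 4, 2).rank() == 3
assert catalecticant(x0**3*x1 + x2**4, 4, 2).rank() == 3
```

Both sample quartics have border rank $3$, and their middle catalecticants indeed have rank exactly $3$.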
Given a homogeneous polynomial $f$ of degree $d$, the minimum number of linear forms $l_{i}$ needed to write $f$ as a sum of $d$-th powers is the so-called (Waring) \defi{rank} of $f$ and denoted by $\rank(f)$. The (Waring) \defi{border rank} is this notion in the limiting sense. In other words, if there is a family $\{f_{\epsilon}\mid \epsilon >0 \}$ of polynomials with constant rank $r$ and $\lim_{\epsilon \to 0}f_{\epsilon} = f$, then we say that $f$ has border rank at most $r$. The minimum such $r$ is called the border rank of $f$ and denoted by $\brank(f)$. Note that by definition $\sigma_k(v_d({\mathbb P} V))$ is the variety of homogeneous polynomials $f$ of degree $d$ with border rank $\brank(f)\le k$.
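As a standard illustration of the gap between rank and border rank (a classical example, not taken from this paper): the monomial $x_0^{d-1}x_1$ has rank $d$ but border rank $2$, being the limit of the rank-$2$ family $\frac{1}{d\epsilon}\big((x_0+\epsilon x_1)^d-x_0^d\big)$. A quick sympy check for $d=5$:

```python
# Standard example (not from the paper): x0^(d-1)*x1 has border rank 2.
from sympy import symbols, expand, limit, simplify

x0, x1, t = symbols('x0 x1 t')
d = 5
# For t != 0, f_t is a scaled difference of two d-th powers of linear
# forms, hence f_t has rank <= 2.
f_t = ((x0 + t*x1)**d - x0**d) / (d*t)
f_0 = limit(expand(f_t), t, 0)
# The limit is the monomial x0^(d-1)*x1 (whose rank is in fact d).
assert simplify(f_0 - x0**(d-1)*x1) == 0
```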
It is obvious that if $f$ has (border) rank 1, then any symmetric flattening $\phi_{d-k,k}(f)$ has rank 1. By subadditivity of matrix rank, we also know that $\rank~\phi_{d-k,k}(f)\le r$ if $\brank(f)\le r$. We have the following known result for the defining equations of $\sigma_3(X)$;
\begin{Prop}[Defining equations of $\sigma_3(v_d({\mathbb P}^n))$]\label{eqn_s3} Let $X$ be the $n$-dimensional Veronese variety $v_d({\mathbb P} V)$ in ${\mathbb P}^N$ with $N={n+d\choose n}-1$. For any $(d,n)$ with $d\ge3, n\ge2$, $\sigma_3(X)$ is defined scheme-theoretically by the $4\times 4$-minors of the two symmetric flattenings
\[\phi_{d-1,1}(F)\colon {S^{d-1}V}^{\vee}\to V\quad\trm{and}\quad\phi_{d-\lfloor \frac d2\rfloor,\lfloor \frac d2\rfloor}(F)\colon {S^{d-\lfloor \frac d2\rfloor}V}^{\vee}\to S^{\lfloor \frac d2\rfloor}V~,\]
where $F$ is the form $\displaystyle\sum_{I\in{\mathbb N}^{n+1}} a_I\cdot\mbf{X}^I$ of degree $d$ whose coefficients $a_I$ are regarded as indeterminates.
\end{Prop}
\begin{proof}
Aronhold invariant ($n=2$, see e.g. \cite[p.247]{IK}) and symmetric inheritance (Proposition 2.3.1 in \cite{LO}) prove the result for the case $d=3$. For any $d\ge4$, see Theorem 3.2.1 (1) in \cite{LO}.
\end{proof}
Since there is a natural $\operatorname{SL}_{n+1}({\mathbb C})$-action on $\sigma_3(X)$, we may study the singular locus orbit by orbit, and we can choose a canonical representative of each $\operatorname{SL}_{n+1}({\mathbb C})$-orbit as below.
First, suppose $f\in\sigma_3(X)\setminus\sigma_2(X)$ is a degenerate form (i.e. $\dim\langle f\rangle=2$). Choose $x_0, x_1$ as the basis of $\langle f\rangle$. Then, we recall the following lemma
\begin{Lem}\label{deg_normal} For any $d\ge4$ and $n\ge1$, any general degenerate form $f\in \sigma_3(v_d({\mathbb P} V))\setminus\sigma_2(v_d({\mathbb P} V))$ can be written as $x_0^{d}+\alpha \cdot x_1^{d}+\beta\cdot (x_0+x_1)^{d}$, up to $\operatorname{SL}_{n+1}({\mathbb C})$-action, for some nonzero $\alpha, \beta\in{\mathbb C}$.
\end{Lem}
\begin{proof} Since $\dim\langle f\rangle=2$, let $U:=\langle f\rangle={\mathbb C}\langle x_0,x_1\rangle$, a subspace of $V$. For such an $f\in \sigma_3(v_d({\mathbb P} V))\setminus\sigma_2(v_d({\mathbb P} V))$, it is easy to see that \[3=\brank(f)\le\brank(f,U)~,\] where the latter is the border rank of $f$ considered as a polynomial in $S^\bullet U$. On the other hand, we also have $\brank(f,U)\le3$, because the symmetric flattenings $\phi_{d-1,1}(f,U)$ and $\phi_{d-\lfloor \frac d2\rfloor,\lfloor \frac d2\rfloor}(f,U)$ are just submatrices of $\phi_{d-1,1}(f)$ and of $\phi_{d-\lfloor \frac d2\rfloor,\lfloor \frac d2\rfloor}(f)$ respectively, and therefore all their $4\times4$-minors also vanish (so, $f\in\sigma_3(v_d({\mathbb P} U))$). Since $\rank(f,U)$ and $\brank(f,U)$ coincide for a \ti{general} $f$ in the rational normal curve case (see e.g. \cite{CG}), we have $\rank(f,U)=3$. Thus, for some nonzero $\lambda,\mu\in{\mathbb C}$ we can write $f$ as
\begin{align*}
f(x_0,x_1)&=(a_0 x_0+a_1 x_1)^{d}+(b_0 x_0+b_1 x_1)^d +\{\lambda(a_0 x_0+a_1 x_1)+\mu(b_0 x_0+b_1 x_1)\}^{d}\\
&=X_0^{d}+(\frac{\lambda}{\mu})^d\cdot X_1^d+\lambda^d\cdot(X_0+X_1)^d~,
\end{align*}
by rescaling and using an $\operatorname{SL}_{n+1}({\mathbb C})$-change of coordinates, which proves our assertion.\end{proof}
\begin{Remk}\label{d<=3} There are some remarks related to Lemma \ref{deg_normal} as follows:
\begin{itemize}
\item[(a)] Note that there does not exist a degenerate form corresponding to an orbit in $\sigma_3(v_d({\mathbb P} V))\setminus\sigma_2(v_d({\mathbb P} V))$ if $d\le 3$. In this case, if $f$ is degenerate, then $f$ always belongs to $\sigma_2(v_d({\mathbb P} V))$, since $\phi_{d-1,1}(f)$ has at most two nonzero rows and hence all the $3\times3$-minors of $\phi_{d-1,1}(f)$ vanish.
\item[(b)] In fact, in the $d=4$ case, Lemma \ref{deg_normal} holds for \ti{all} degenerate forms $f\in\sigma_3(v_4({\mathbb P} V))\setminus\sigma_2(v_4({\mathbb P} V))$, because every form in $\sigma_3(v_4({\mathbb P}^1))\setminus\sigma_2(v_4({\mathbb P}^1))$ has rank $3$ (see \cite{CG} and also \cite[chap.4]{LT}).
\end{itemize}
\end{Remk}
Now let us put all types of canonical representatives of the $\operatorname{SL}_{n+1}({\mathbb C})$-orbits together as follows:
\begin{Thm}\label{normal_form} There are 4 types of homogeneous forms representing $\operatorname{SL}_{n+1}({\mathbb C})$-orbits in $\sigma_3(v_d({\mathbb P} V))\setminus\sigma_2(v_d({\mathbb P} V))$: the three non-degenerate orbits, represented by
\[\trm{$x_0^{d}+x_1^{d}+x_2^{d}~,\quad x_0^{d-1}x_1+x_2^{d}~,\quad x_0^{d-2}x_1^{2}+x_0^{d-1}x_2$}~,\] and the binary type corresponding to $D$, the locus of all orbits represented by degenerate forms, which appears only if $d\ge4$; a general point of $D$ can be written as $x_0^{d}+\alpha x_1^{d}+\beta(x_0+x_1)^{d}$ for some nonzero $\alpha, \beta\in{\mathbb C}$.\end{Thm}
\begin{proof}
Combine \cite[thm. 10.2]{LT}, Lemma \ref{deg_normal} and Remark \ref{d<=3}.
\end{proof}
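As a quick sanity check of the representatives above (ours, not part of the paper), one can verify with sympy that the three non-degenerate normal forms satisfy $\dim\langle f\rangle=3$, while a binary representative has $\dim\langle f\rangle=2$. Here $\dim\langle f\rangle$ is computed as the number of linearly independent first partial derivatives, with the sample value $d=5$ and arbitrarily chosen $\alpha=2$, $\beta=3$:

```python
# Sanity check (ours): dim<f> equals the number of linearly independent
# first partial derivatives of f (the rank of the first catalecticant).
from itertools import combinations_with_replacement
from sympy import symbols, diff, expand, Matrix, Mul, Poly

x0, x1, x2 = xs = symbols('x0 x1 x2')

def span_dim(f, d):
    """dim<f> for a degree-d form f in x0, x1, x2."""
    basis = [Mul(*c) for c in combinations_with_replacement(xs, d - 1)]
    rows = []
    for v in xs:
        p = Poly(expand(diff(f, v)), *xs)
        rows.append([p.coeff_monomial(m) for m in basis])
    return Matrix(rows).rank()

d = 5
nondeg = [x0**d + x1**d + x2**d,            # Fermat type
          x0**(d-1)*x1 + x2**d,             # unmixed type
          x0**(d-2)*x1**2 + x0**(d-1)*x2]   # mixed type
assert all(span_dim(f, d) == 3 for f in nondeg)
# a binary (degenerate) representative, with alpha = 2, beta = 3
assert span_dim(x0**d + 2*x1**d + 3*(x0 + x1)**d, d) == 2
```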
Let us introduce more basic terms and facts. Let $Z\subset{\mathbb P} W$ be a variety and $\hat{Z}$ be its affine cone in $W$. Consider a (closed) point $p\in \hat{Z}$ and say $[p]$ the corresponding point in ${\mathbb P} W$. We denote the \ti{affine tangent space to $Z$ at $[p]$} in $W$ by $\hat{T}_{[p]}Z$ and we define the \defi{(affine) conormal space to $Z$ at $[p]$}, $\hat{N}^{\vee}_{[p]}Z$ as the annihilator $(\hat{T}_{[p]}Z)^{\perp}\subset W^{\vee}$. Since $\dim \hat{N}^{\vee}_{[p]}Z+\dim\hat{T}_{[p]}Z=\dim W$ and $\dim Z\le \dim \hat{T}_{[p]}Z-1$, we get that $\dim \hat{N}^{\vee}_{[p]}Z\le \codim(Z,{\mathbb P} W)$ and the equality holds if and only if $Z$ is smooth at $[p]$. This conormal space is quite useful to study the tangent space of $Z$.
Let us recall the \defi{apolar ideal} $f^{\perp}\subset T$. For any given form $f\in S^d V$, we call $\partial\in T_t$ \ti{apolar} to $f$ if the differentiation $\partial(f)$ gives zero (i.e. $\partial\in\ker\phi_{t,d-t}(f)$). And we define the \defi{apolar ideal} $f^{\perp}\subset T$ as
\[f^\perp:=\{\partial\in T~|~\partial(f)=0\}~.\]
It is straightforward to see that $f^\perp$ is indeed an ideal of $T$. Moreover, it is well-known that the quotient ring $T_f:=T/f^\perp$ is an \ti{Artinian Gorenstein algebra with socle degree $d$} (see e.g. \cite{IK}).
In our case, we have a nice description of the conormal space in terms of this apolar ideal as follows:
\begin{Prop}\label{conormal_prop} Let $X$ be the $n$-dimensional Veronese variety $v_d({\mathbb P} V)$ as above and $f$ be any form in $S^d V$. Suppose that $f$ corresponds to a (closed) point of $\sigma_3(X)\setminus\sigma_2(X)$ and that $\rank~\phi_{d-1,1}(f)=3,~\rank~\phi_{d-\lfloor \frac d2\rfloor,\lfloor \frac d2\rfloor}(f)=3$. Then, for any $(d,n)$ with $d\ge4, n\ge2$ we have
\begin{equation}\label{conormal1}
\hat{N}^{\vee}_{f}\sigma_3(X)=(f^\perp)_1\cdot(f^\perp)_{d-1}+(f^\perp)_{\lfloor \frac d2\rfloor}\cdot(f^\perp)_{d-\lfloor \frac d2\rfloor}~,
\end{equation}
where the sum is taken as a ${\mathbb C}$-subspace in $T_d={S^d V}^{\vee}$.
\end{Prop}
\begin{proof}
First, recall that $\phi_{d-k,k}(f)^{T}=\phi_{k,d-k}(f)$. We also note that
\[\ker \phi_{d-k,k}(f)=(f^\perp)_{d-k}\quad\trm{and}\quad(\im~\phi_{d-k,k}(f))^\perp=\ker (\phi_{d-k,k}(f)^{T})=\ker \phi_{k,d-k}(f)=(f^\perp)_{k}~.\]
Whenever $\rank~\phi_{d-1,1}(f)=3$ and $\rank~\phi_{d-\lfloor \frac d2\rfloor,\lfloor \frac d2\rfloor}(f)=3$, we have
\begin{equation}\label{conormal}
\hat{N}^{\vee}_{f}\sigma_3(X)=\langle\ker \phi_{d-1,1}(f)\cdot(\im~\phi_{d-1,1}(f))^{\perp}\rangle+\langle\ker \phi_{d-\lfloor \frac d2\rfloor,\lfloor \frac d2\rfloor}(f)\cdot(\im~\phi_{d-\lfloor \frac d2\rfloor,\lfloor \frac d2\rfloor}(f))^{\perp}\rangle
\end{equation}
(see \cite[Proposition 2.5.1]{LO}), which proves the proposition.
\end{proof}
\begin{Remk}\label{deg_conormal} Note that, in the case $n=2$ or $\dim\langle f\rangle=2$ (i.e. $f$ degenerate), to compute the conormal space $\hat{N}^{\vee}_{f}\sigma_3(X)$ we only need to consider the symmetric flattening $\phi_{d-\lfloor \frac d2\rfloor,\lfloor \frac d2\rfloor}$, so that we have
\begin{equation}\label{conormal2}
\hat{N}^{\vee}_{f}\sigma_3(X)=(f^\perp)_{\lfloor \frac d2\rfloor}\cdot(f^\perp)_{d-\lfloor \frac d2\rfloor}~.
\end{equation}
In the case $n=2$, $\phi_{d-1,1}$ has only 3 rows, so there is no non-trivial $4\times4$-minor to give a local equation of $\sigma_3(X)$ at $f$. In the case $\dim\langle f\rangle=2$, we may consider $f\in{\mathbb C}[x_0,x_1]_d$ and choose bases as in (\ref{bases}). Then, we can write the matrix of $\phi_{d-1,1}$ and its evaluation at $f$, $\phi_{d-1,1}(f)$, as
\begin{align*}
\phi_{d-1,1}=\left(\begin{array}{c:cccccc}
&y_0^{d-1}&y_0^{d-2}y_1&\cdots&y_n^{d-1}\\ \hdashline
x_0&&&&\\
x_1&&&&\\
x_2&&a_I&&\\
\vdots&&&\\
x_n&&&
\end{array}\right)~,&\quad\phi_{d-1,1}(f)=
\left(\begin{array}{c:cccccc}
&y_0^{d-1}&y_0^{d-2}y_1&\cdots&y_n^{d-1}\\ \hdashline
x_0&\ast&\ast&\cdots&\ast\\
x_1&\ast&\ast&\cdots&\ast\\
x_2&0&0&\cdots&0\\
\vdots&\vdots&\vdots&\vdots&\vdots\\
x_n&0&0&\cdots&0
\end{array}\right)~.
\end{align*}
So, each $4\times4$-submatrix of $\phi_{d-1,1}$, with minor denoted by $D_4(\phi_{d-1,1})$, has rank at most $2$ at $f$. Hence, we see that all the partial derivatives in the Jacobian
\[\frac{\partial D_4(\phi_{d-1,1})}{\partial a_I}(f)=0\]
for each index $I$ with $|I|=d$, so $D_4(\phi_{d-1,1})$ does not contribute to spanning the conormal space of $\sigma_3(X)$ at $f$. Indeed, at least one row of the submatrix (say $(a_I~a_J~a_K~a_L)$) vanishes at $f$, and the Laplace expansion of $D_4(\phi_{d-1,1})$ along this row
\[D_4(\phi_{d-1,1})=\pm\bigg(a_I\cdot D^I_3(\phi_{d-1,1})-a_J\cdot D^J_3(\phi_{d-1,1})+a_K\cdot D^K_3(\phi_{d-1,1})-a_L\cdot D^L_3(\phi_{d-1,1})\bigg)\]
guarantees all the partials of $D_4(\phi_{d-1,1})$ become zero at $f$ as follows: for example, we see that
\begin{align*}
\pm\frac{\partial D_4(\phi_{d-1,1})}{\partial a_I}(f)=&~ D^I_3(\phi_{d-1,1})(f)+a_I(f)\cdot\frac{\partial D^I_3(\phi_{d-1,1})}{\partial a_I}(f)-a_J(f)\cdot\frac{\partial D^J_3(\phi_{d-1,1})}{\partial a_I}(f)\\
&+a_K(f)\cdot\frac{\partial D^K_3(\phi_{d-1,1})}{\partial a_I}(f)-a_L(f)\cdot\frac{\partial D^L_3(\phi_{d-1,1})}{\partial a_I}(f)~=~0~,
\end{align*}
where $a_I(f)=a_J(f)=a_K(f)=a_L(f)=0$ and $D^I_3(\phi_{d-1,1})(f)=0$ because the rank of the corresponding $3\times3$-submatrix is at most 2 at $f$.
\end{Remk}
\subsection{Cases of non-degenerate orbits}
For the locus of non-degenerate orbits in $\sigma_3(X)\setminus\sigma_2(X)$, we may consider a useful reduction method through the following arguments:
\begin{Lem}\label{span_lem} For every $f\in \sigma_3(v_d({\mathbb P}^n))$ ($d,n\ge 2$), there exists a linear
${\mathbb P}^2={\mathbb P} U\subset{\mathbb P}^n={\mathbb P} V$ such that $f\in \sigma_3(v_d({\mathbb P} U))$. In particular, for every $f\in \sigma_3(v_d({\mathbb P}^n))\setminus\sigma_2(v_d({\mathbb P}^n))$, $2\le\dim\langle f\rangle\le 3$.
\end{Lem}
\begin{proof}
When $f\in\sigma_3(v_d({\mathbb P}^n))$ (i.e. border rank $\le 3$), the image of the flattening
${S^{d-1}{\mathbb C}^{n+1}}^{\vee}\to{\mathbb C}^{n+1}$ has dimension $\le 3$, and it is contained in a 3-dimensional subspace $U$ as required, i.e. $\dim\langle f\rangle\le 3$.
\end{proof}
Recall that we denote the locus of degenerate forms in $\sigma_3(X)\setminus\sigma_2(X)$ by $D$ (see Theorem \ref{sing3vero} for notation). Then, by Lemma \ref{span_lem}, we have an obvious corollary as follows:
\begin{Coro}\label{coro_span} For each $f\in\sigma_3(v_d({\mathbb P}^n))\setminus\left(D\cup\sigma_2(v_d({\mathbb P}^n))\right)$, there exists a unique 3-dimensional subspace $U$ such that $f\in \sigma_3(v_d({\mathbb P} U))$.
\end{Coro}
\begin{proof} For those $f$, which correspond to three orbits in Theorem \ref{normal_form}, the dimension of $\langle f\rangle$ is exactly $3$ so that the subspace $U=\langle f\rangle$ is precisely determined in the claimed cases.
\end{proof}
When $d=3$, we also have an immediate corollary as follows:
\begin{Coro}[$d=3$ case]\label{thm_d=3} For every $n\ge 2$ and $d=3$,
$\sigma_3(v_3({\mathbb P}^n))\setminus\sigma_2(v_3({\mathbb P}^n))$ is smooth.
\end{Coro}
\begin{proof}
By Remark \ref{d<=3} (a), there is no degenerate orbit in this case. So the claim follows directly from the smoothness result on the Aronhold hypersurface (i.e. the case $n=2$ in Figure \ref{sing3table}) together with the fibration argument in the proof of Theorem \ref{thm_nondeg} for any $n\ge3$.
\end{proof}
Here is the theorem for non-degenerate orbits for any $d\ge4$:
\begin{Thm}[Non-degenerate locus]\label{thm_nondeg} For every $n\ge 2$ and $d\ge4$,
$\sigma_3(v_d({\mathbb P}^n))\setminus\left(D\cup\sigma_2(v_d({\mathbb P}^n))\right)$ is smooth.
\end{Thm}
\begin{proof}
Write ${\mathbb P}^n={\mathbb P} V$ with $V={\mathbb C}\langle x_0,x_1,\cdots,x_n\rangle$ and its dual $V^{\vee}={\mathbb C}\langle y_0,y_1,\cdots,y_n\rangle$. First, we claim that one may reduce the problem to the case $n=2$. Construct the following map
$$\sigma_3(v_d({\mathbb P}^n))\setminus\left(D\cup\sigma_2(v_d({\mathbb P}^n))\right)\st{\pi}\mathrm{Gr}({\mathbb P} U,{\mathbb P}^n)\quad\trm{with $\dim {\mathbb P} U=2$.}$$
This map is well defined by Corollary \ref{coro_span} and each fiber $\pi^{-1}({\mathbb P} U)$ is isomorphic to
$\sigma_3(v_d({\mathbb P} U))\setminus\left(D\cup\sigma_2(v_d({\mathbb P} U))\right)$. So, if we prove our theorem for the case $n=2$, then the fibers of $\pi$ are all isomorphic and smooth. Hence $\pi$ becomes a fibration over a smooth variety with smooth fibers. This shows that its domain
$\sigma_3(v_d({\mathbb P}^n))\setminus\left(D\cup\sigma_2(v_d({\mathbb P}^n))\right)$ is smooth, which proves our assertion.\\
So, from now on, let us assume $d\ge4$ and $n=2$. We can consider three different cases according to Theorem \ref{normal_form}.\\
Case (i) $f_1=x_0^{d}+x_1^{d}+x_2^{d}$ (Fermat-type). It is well-known that the orbit of this Fermat-type $f_1$ is almost transitive under $\operatorname{SL}_{3}({\mathbb C})$, i.e. $f_1$ corresponds to a general point of $\sigma_3(v_d({\mathbb P}^2))$. Thus, $\sigma_3(v_d({\mathbb P}^2))$ is smooth at $f_1$.
Case (ii) $f_2=x_0^{d-1}x_1+x_2^d$ (Unmixed-type). By Remark \ref{deg_conormal} (i.e. the $n=2$ case), we just need to consider $(f_2^\perp)_{\lfloor \frac d2\rfloor}\cdot(f_2^\perp)_{d-\lfloor \frac d2\rfloor}$ as in (\ref{conormal2}) to compute $\dim \hat{N}^{\vee}_{f_2}\sigma_3(X)$. Say $s:=\lfloor \frac d2\rfloor$. For $d\ge4$, we have $2\le s \le d-s\le d-2$. Note that $\dim \hat{N}^{\vee}_{f_2}\sigma_3(X)\le \codim(\sigma_3(X),{\mathbb P} S^d U)={d+2\choose 2}-9$. So, it is enough to show $\dim \hat{N}^{\vee}_{f_2}\sigma_3(X)\ge{d+2\choose 2}-9$ to prove that $\sigma_3(X)$ is smooth at $f_2$.
Since the summands of $f_2$ separate the variables (i.e. unmixed-type), we can see that the apolar ideal $f_2^\perp$ is generated as
\[f_2^\perp=\bigg(\{Q_1=y_0 y_2, Q_2=y_1^2, Q_3=y_1 y_2\}\bigcup~\{\trm{other generators in degree $\ge d$}\}\bigg)~.\]
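These degree-2 generators can be checked symbolically; the following sympy sketch (a sanity check of ours, not part of the proof, with the sample value $d=6$) verifies that $y_0y_2$, $y_1^2$, $y_1y_2$ annihilate $f_2$ while the remaining degree-2 monomial operators do not:

```python
# Sanity check (ours, sample value d = 6): the degree-2 part of the
# apolar ideal of f_2 = x0^(d-1)*x1 + x2^d.
from sympy import symbols, diff

x0, x1, x2 = symbols('x0 x1 x2')
d = 6
f2 = x0**(d-1)*x1 + x2**d

def apply_op(f, exps):
    """Apply the monomial operator y0^a * y1^b * y2^c to f."""
    for v, e in zip((x0, x1, x2), exps):
        f = diff(f, v, e)
    return f

# Q1 = y0*y2, Q2 = y1^2, Q3 = y1*y2 annihilate f2 ...
for Q in [(1, 0, 1), (0, 2, 0), (0, 1, 1)]:
    assert apply_op(f2, Q) == 0
# ... while the other degree-2 monomial operators do not
for Q in [(2, 0, 0), (1, 1, 0), (0, 0, 2)]:
    assert apply_op(f2, Q) != 0
```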
So, we have
\begin{align*}
&(f_2^\perp)_s=\{h\cdot Q_i~|~\forall h\in T_{s-2},~i=1,2,3~\}~\trm{and}~(f_2^\perp)_{d-s}=\{h^\prime\cdot Q_i~|~\forall h^\prime\in T_{d-s-2},~i=1,2,3~\}\\
&\Rightarrow\quad\hat{N}^{\vee}_{f_2}\sigma_3(X)=(f_2^\perp)_s \cdot (f_2^\perp)_{d-s}=\{h^{\prime\prime}\cdot Q_i Q_j~|~\forall h^{\prime\prime}\in T_{d-4},~i,j=1,2,3~\}~.
\end{align*}
Thus, if we denote the ideal $(Q_1,Q_2,Q_3)$ by $I$, then $\dim\hat{N}^{\vee}_{f_2}\sigma_3(X)$ is equal to the value of the Hilbert function $H(I^2,t)$ at $t=d$. Moreover, it is easy to see that $I^2$ has a minimal free resolution
\[0\to T(-6)\to T(-5)^6\to T(-4)^6\to I^2\to 0~,\]
which shows the Hilbert function of $I^2$ can be computed as
\begin{align*}
H(I^2,d)&=6{d-4+2\choose 2}-6{d-5+2\choose 2}+{d-6+2\choose 2}\\
&=\left\{ \begin{array}{ll}
0 & \textrm{$(d\le3)$}\\
&\\
{d+2\choose 2}-9 & \textrm{$(d\ge4)$}
\end{array} \right.~.
\end{align*}
This implies that $\dim\hat{N}^{\vee}_{f_2}\sigma_3(X)={d+2\choose 2}-9$ for any $d\ge4$, which means that our $\sigma_3(X)$ is smooth at $f_2$ (see also Figure \ref{Mink1}).
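Since $Q_1,Q_2,Q_3$ are monomials here, $I^2$ is the monomial ideal generated by the six products $Q_iQ_j$, and the value $H(I^2,t)$ is just the number of degree-$t$ monomials divisible by one of them. The following small Python check (ours, not part of the proof) confirms the displayed formula for several values of $d$:

```python
# Sanity check (ours): I^2 = (Q1^2, Q1*Q2, Q1*Q3, Q2^2, Q2*Q3, Q3^2) is a
# monomial ideal, so H(I^2, t) counts degree-t monomials divisible by a
# generator.
from math import comb

# exponent vectors (in y0, y1, y2) of the six products Qi*Qj
gens = [(2, 0, 2), (1, 2, 1), (1, 1, 2), (0, 4, 0), (0, 3, 1), (0, 2, 2)]

def H(t):
    """Hilbert function of I^2 in degree t."""
    count = 0
    for i in range(t + 1):
        for j in range(t + 1 - i):
            k = t - i - j
            if any(i >= a and j >= b and k >= c for a, b, c in gens):
                count += 1
    return count

assert H(3) == 0
assert all(H(t) == comb(t + 2, 2) - 9 for t in range(4, 10))
```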
\begin{figure}[!hbt]
\begin{align*}
\definecolor{zzzzff}{rgb}{0.6,0.6,1}
\definecolor{ffqqtt}{rgb}{1,0,0.2}
\definecolor{ttttff}{rgb}{0.2,0.2,1}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=2.0cm,y=2.0cm]
\clip(-0.3,-0.44) rectangle (2.32,2.22);
\fill[line width=1.6pt,color=zzzzff,fill=zzzzff,fill opacity=0.25] (0.98,1.73) -- (0.1,0.18) -- (0.23,0) -- (1.78,0) -- (1.78,0.37) -- cycle;
\draw [line width=0.4pt] (0.98,1.73)-- (0,0);
\draw [line width=0.4pt] (0,0)-- (2,0);
\draw [line width=0.4pt] (2,0)-- (0.98,1.73);
\draw [line width=1.6pt,color=zzzzff] (0.98,1.73)-- (0.1,0.18);
\draw [line width=1.6pt,color=zzzzff] (0.1,0.18)-- (0.23,0);
\draw [line width=1.6pt,color=zzzzff] (0.23,0)-- (1.78,0);
\draw [line width=1.6pt,color=zzzzff] (1.78,0)-- (1.78,0.37);
\draw [line width=1.6pt,color=zzzzff] (1.78,0.37)-- (0.98,1.73);
\draw (0.75,0.73) node[anchor=north west] {\large$P_1$};
\draw [->] (0.98,1.73) -- (0.98,2.1);
\draw [->] (0,0) -- (-0.29,-0.21);
\draw [->] (2,0) -- (2.3,-0.23);
\draw (0.7,2.23) node[anchor=north west] {$ j $};
\draw (-0.3,0.15) node[anchor=north west] {$ i $};
\draw (2.16,0.15) node[anchor=north west] {$ k $};
\draw (-0.03,0.02) node[anchor=north west] {$d-s$};
\begin{scriptsize}
\fill [color=ttttff] (0.98,1.73) circle (2.5pt);
\draw [color=ffqqtt] (0,0) circle (2.5pt);
\draw[color=ffqqtt] (2,0) circle (2.5pt);
\draw[color=ffqqtt] (1.89,0.18) circle (2.5pt);
\fill [color=ttttff] (1.78,0) circle (2.5pt);
\fill [color=ttttff] (1.78,0.37) circle (2.5pt);
\fill [color=ttttff] (0.23,0) circle (2.5pt);
\fill [color=ttttff] (0.1,0.18) circle (2.5pt);
\fill [color=ttttff] (0.88,1.55) circle (2.5pt);
\fill [color=ttttff] (1.08,1.55) circle (2.5pt);
\fill [color=ttttff] (0.22,0.38) circle (2.5pt);
\fill [color=ttttff] (0.36,0.18) circle (2.5pt);
\fill [color=ttttff] (0.47,0) circle (2.5pt);
\fill [color=ttttff] (1.67,0.17) circle (2.5pt);
\fill [color=ttttff] (1.53,0) circle (2.5pt);
\end{scriptsize}
\end{tikzpicture}
\definecolor{ffwwtt}{rgb}{1,0.4,0.2}
\definecolor{ffqqtt}{rgb}{1,0,0.2}
\definecolor{ccttqq}{rgb}{0.8,0.2,0}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=2.0cm,y=2.0cm]
\clip(-0.5,-0.44) rectangle (2.32,2.22);
\fill[line width=1.6pt,color=ffwwtt,fill=ffwwtt,fill opacity=0.25] (0.97,1.3) -- (0.28,0.16) -- (0.38,0) -- (1.66,0) -- (1.65,0.31) -- cycle;
\draw [line width=0.4pt] (0.97,1.3)-- (0.19,0);
\draw [line width=0.4pt] (0.19,0)-- (1.87,0);
\draw [line width=0.4pt] (1.87,0)-- (0.97,1.3);
\draw [line width=1.6pt,color=ffwwtt] (0.97,1.3)-- (0.28,0.16);
\draw [line width=1.6pt,color=ffwwtt] (0.28,0.16)-- (0.38,0);
\draw [line width=1.6pt,color=ffwwtt] (0.38,0)-- (1.66,0);
\draw [line width=1.6pt,color=ffwwtt] (1.66,0)-- (1.65,0.31);
\draw [line width=1.6pt,color=ffwwtt] (1.65,0.31)-- (0.97,1.3);
\draw (0.75,0.69) node[anchor=north west] {\large$P_2$};
\draw [->] (0.97,1.3) -- (0.98,1.8);
\draw [->] (0.19,0) -- (-0.29,-0.21);
\draw [->] (1.87,0) -- (2.3,-0.23);
\draw (0.7,1.91) node[anchor=north west] {$ j $};
\draw (-0.25,0.11) node[anchor=north west] {$ i $};
\draw (2.13,0.13) node[anchor=north west] {$k $};
\draw (0.13,0.01) node[anchor=north west] {$s$};
\begin{scriptsize}
\fill [color=ttttff] (0.97,1.3) circle (2.5pt);
\draw [color=ffqqtt] (1.87,0) circle (2.5pt);
\draw [color=ffqqtt] (0.19,0) circle (2.5pt);
\draw [color=ffqqtt] (1.76,0.15) circle (2.5pt);
\fill [color=ttttff] (1.66,0) circle (2.5pt);
\fill [color=ttttff] (1.65,0.31) circle (2.5pt);
\fill [color=ttttff] (0.38,0) circle (2.5pt);
\fill [color=ttttff] (0.28,0.16) circle (2.5pt);
\fill [color=ttttff] (0.89,1.17) circle (2.5pt);
\fill [color=ttttff] (1.06,1.17) circle (2.5pt);
\fill [color=ttttff] (0.39,0.33) circle (2.5pt);
\fill [color=ttttff] (0.49,0.17) circle (2.5pt);
\fill [color=ttttff] (0.58,0) circle (2.5pt);
\fill [color=ttttff] (1.55,0.16) circle (2.5pt);
\fill [color=ttttff] (1.45,0) circle (2.5pt);
\end{scriptsize}
\end{tikzpicture}
\definecolor{qqqqff}{rgb}{0,0,1}
\definecolor{qqzzzz}{rgb}{0,0.6,0.6}
\definecolor{ffqqtt}{rgb}{1,0,0.2}
\definecolor{ttttff}{rgb}{0.2,0.2,1}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=2.0cm,y=2.0cm]
\clip(-0.6,-0.44) rectangle (2.72,2.30);
\fill[line width=1.6pt,color=qqzzzz,fill=qqzzzz,fill opacity=0.25] (0.98,1.82) -- (0.11,0.34) -- (0.4,-0.03) -- (1.61,-0.04) -- (1.61,0.75) -- cycle;
\draw [line width=0.4pt] (0.98,1.82)-- (-0.11,-0.03);
\draw [line width=0.4pt] (-0.11,-0.03)-- (2.08,-0.04);
\draw [line width=0.4pt] (2.08,-0.04)-- (0.98,1.82);
\draw [line width=1.6pt,color=qqzzzz] (0.98,1.82)-- (0.11,0.34);
\draw [line width=1.6pt,color=qqzzzz] (0.11,0.34)-- (0.4,-0.03);
\draw [line width=1.6pt,color=qqzzzz] (0.4,-0.03)-- (1.61,-0.04);
\draw [line width=1.6pt,color=qqzzzz] (1.61,-0.04)-- (1.61,0.75);
\draw [line width=1.6pt,color=qqzzzz] (1.61,0.75)-- (0.98,1.82);
\draw (0.56,0.89) node[anchor=north west] {\large$P_1+P_2$};
\draw [->] (0.98,1.82) -- (0.99,2.31);
\draw [->] (-0.11,-0.03) -- (-0.41,-0.3);
\draw [->] (2.08,-0.04) -- (2.35,-0.32);
\draw (0.75,2.29) node[anchor=north west] {$j$};
\draw (-0.41,0.11) node[anchor=north west] {$i$};
\draw (2.19,0.11) node[anchor=north west] {$k$};
\draw (-0.19,-0.02) node[anchor=north west] {$d$};
\begin{scriptsize}
\fill [color=ttttff] (0.98,1.82) circle (2.5pt);
\draw [color=ffqqtt] (-0.11,-0.03) circle (2.5pt);
\draw [color=ffqqtt] (2.08,-0.04) circle (2.5pt);
\draw[color=ffqqtt] (1.86,0.32) circle (2.5pt);
\fill [color=ttttff] (1.61,-0.04) circle (2.5pt);
\fill [color=ttttff] (1.61,0.75) circle (2.5pt);
\fill [color=ttttff] (0.4,-0.03) circle (2.5pt);
\fill [color=ttttff] (0.11,0.34) circle (2.5pt);
\fill [color=ttttff] (0.87,1.63) circle (2.5pt);
\fill [color=ttttff] (1.09,1.63) circle (2.5pt);
\fill [color=ttttff] (0.25,0.58) circle (2.5pt);
\fill [color=ttttff] (0.4,0.35) circle (2.5pt);
\fill [color=ttttff] (0.68,-0.03) circle (2.5pt);
\fill [color=ttttff] (1.59,0.31) circle (2.5pt);
\draw [color=ffqqtt] (1.75,0.52) circle (2.5pt);
\draw[color=ffqqtt] (1.97,0.13) circle (2.5pt);
\draw[color=ffqqtt] (1.87,-0.04) circle (2.5pt);
\draw[color=ffqqtt] (1.73,0.14) circle (2.5pt);
\fill [color=qqqqff] (0.25,0.15) circle (2.5pt);
\fill [color=qqqqff] (0.54,0.16) circle (2.5pt);
\draw[color=ffqqtt] (0,0.15) circle (2.5pt);
\draw[color=ffqqtt] (0.16,-0.03) circle (2.5pt);
\end{scriptsize}
\end{tikzpicture}
\end{align*}
\caption{Case of $f_2=x_0^{d-1}x_1+x_2^d$. $P_1$ is the lattice polytope in ${\mathbb R}_{\ge0}^3$ consisting of exponent vectors $(i,j,k)$ of the monomials $y_0^i y_1^j y_2^k$ in $(f_2^\perp)_{d-s}$ and $P_2$ is the one corresponding to $(f_2^\perp)_{s}$. $P_1+P_2$ is the Minkowski sum of the two polytopes, whose lattice points are exactly the exponent vectors of $\hat{N}^{\vee}_{f_2}\sigma_3(X)=(f_2^\perp)_{d-s}\cdot(f_2^\perp)_{s}$, which
contains all the monomials of $T_d$ except the 9 monomials $y_0^d,~~ y_0^{d-1}y_1,~~ y_0^{d-2}y_1^2,~~ y_0^{d-3}y_1^3,~~ y_0^{d-1}y_2,~~ y_0 y_2^{d-1},~~ y_2^d,~~ y_1 y_2^{d-1},~~ y_0^{d-2}y_1 y_2$. This also shows $\dim\hat{N}^{\vee}_{f_2}\sigma_3(X)={d+2\choose 2}-9$.}
\label{Mink1}
\end{figure}
Case (iii) $f_3=x_0^{d-2}x_1^{2}+x_0^{d-1}x_2$ (Mixed-type). In this case, we similarly compute $\dim \hat{N}^{\vee}_{f_3}\sigma_3(X)$ via $(f_3^\perp)_{s}\cdot(f_3^\perp)_{d-s}$ to show the smoothness of $\sigma_3(X)$ at $f_3$ (recall $s:=\lfloor \frac d2\rfloor$ and $2\le s \le d-s\le d-2$).
Let $Q_1:=y_0y_2-\frac{d-1}{2}y_1^2\in T_2$. We easily see that
\[f_3^\perp=\bigg(\{Q_1, Q_2=y_1 y_2, Q_3=y_2^2\}\bigcup~\{\trm{other generators in degree $\ge d-1$}\}\bigg)~.\]
Let $I$ be the ideal generated by three quadrics $Q_1,Q_2,Q_3$. By the same reasoning as (ii), we have
\begin{align*}
\dim\hat{N}^{\vee}_{f_3}\sigma_3(X)&=\dim(f_3^\perp)_s \cdot (f_3^\perp)_{d-s}= H(I^2,d)=\left\{ \begin{array}{ll}
0 & \textrm{$(d\le3)$}\\
&\\
{d+2\choose 2}-9 & \textrm{$(d\ge4)$}
\end{array} \right.~,
\end{align*}
because in this case $I^2$ also has the same minimal free resolution $0\to T(-6)\to T(-5)^6\to T(-4)^6\to I^2\to 0$. Hence, we obtain the smoothness of $\sigma_3(X)$ at $f_3$ (see also Figure \ref{Mink2}).
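The claimed quadric generators of $f_3^\perp$ can also be verified directly: applying $Q_1=y_0y_2-\frac{d-1}{2}y_1^2$, $Q_2=y_1y_2$ and $Q_3=y_2^2$ to $f_3$ as differential operators gives zero. A short sympy check (ours, not part of the proof) over a few values of $d$:

```python
# Sanity check (ours): Q1, Q2, Q3 annihilate f_3 for several values of d.
from sympy import symbols, diff, expand, Rational

x0, x1, x2 = symbols('x0 x1 x2')

for d in range(4, 9):
    f3 = x0**(d-2)*x1**2 + x0**(d-1)*x2
    # Q1 = y0*y2 - (d-1)/2 * y1^2
    assert expand(diff(f3, x0, x2) - Rational(d - 1, 2)*diff(f3, x1, 2)) == 0
    assert diff(f3, x1, x2) == 0   # Q2 = y1*y2
    assert diff(f3, x2, 2) == 0    # Q3 = y2^2
```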
\begin{figure}[!htb]
\begin{align*}
\definecolor{wwwwww}{rgb}{0.4,0.4,0.4}
\definecolor{zzzzff}{rgb}{0.6,0.6,1}
\definecolor{ffqqtt}{rgb}{1,0,0.2}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=2.0cm,y=2.0cm]
\clip(-0.3,-0.44) rectangle (2.32,2.22);
\fill[line width=1.6pt,color=zzzzff,fill=zzzzff,fill opacity=0.25] (0.98,1.73) -- (0,0) -- (1.49,0) -- (1.64,0.19) -- (1.63,0.62) -- cycle;
\draw [line width=0.4pt] (0.98,1.73)-- (0,0);
\draw [line width=0.4pt] (0,0)-- (2,0);
\draw [line width=0.4pt] (2,0)-- (0.98,1.73);
\draw [line width=1.6pt,color=zzzzff] (0.98,1.73)-- (0,0);
\draw [line width=1.6pt,color=zzzzff] (0,0)-- (1.49,0);
\draw [line width=1.6pt,color=zzzzff] (1.49,0)-- (1.64,0.19);
\draw [line width=1.6pt,color=zzzzff] (1.64,0.19)-- (1.63,0.62);
\draw [line width=1.6pt,color=zzzzff] (1.63,0.62)-- (0.98,1.73);
\draw (0.8,0.86) node[anchor=north west] {\large$P_1$};
\draw [->] (0.98,1.73) -- (0.98,2.1);
\draw [->] (0,0) -- (-0.29,-0.21);
\draw [->] (2,0) -- (2.3,-0.23);
\draw (0.73,2.13) node[anchor=north west] {$ j $};
\draw (-0.25,0.21) node[anchor=north west] {$ k $};
\draw (2.04,0.21) node[anchor=north west] {$ i $};
\draw (-0.01,0.03) node[anchor=north west] {$d-s$};
\draw [line width=1.6pt,dash pattern=on 5pt off 5pt,color=wwwwww] (1.77,0.4)-- (1.76,0);
\begin{scriptsize}
\draw [color=ffqqtt] (2,0) circle (2.5pt);
\draw [color=ffqqtt] (1.89,0.18) circle (2.5pt);
\fill [color=black] (1.63,0.62) circle (2.5pt);
\fill [color=black] (1.49,0) circle (2.5pt);
\fill [color=ffqqtt] (1.77,0.4) ++(-3pt,0 pt) -- ++(3pt,3pt)--++(3pt,-3pt)--++(-3pt,-3pt)--++(-3pt,3pt);
\draw [color=ffqqtt] (1.76,0) circle (2.5pt);
\fill [color=black] (1.64,0.19) circle (2.5pt);
\end{scriptsize}
\end{tikzpicture}
\definecolor{wwwwww}{rgb}{0.4,0.4,0.4}
\definecolor{ffwwtt}{rgb}{1,0.4,0.2}
\definecolor{ffqqtt}{rgb}{1,0,0.2}
\definecolor{ttttff}{rgb}{0.2,0.2,1}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=2.0cm,y=2.0cm]
\clip(-0.5,-0.44) rectangle (2.32,2.22);
\fill[line width=1.6pt,color=ffwwtt,fill=ffwwtt,fill opacity=0.25] (0.97,1.3) -- (0.19,0.01) -- (1.4,0) -- (1.51,0.16) -- (1.52,0.51) -- cycle;
\draw [line width=0.4pt] (0.97,1.3)-- (0.19,0);
\draw [line width=0.4pt] (0.19,0)-- (1.87,0);
\draw [line width=0.4pt] (1.87,0)-- (0.97,1.3);
\draw [line width=1.6pt,color=ffwwtt] (0.97,1.3)-- (0.19,0.01);
\draw [line width=1.6pt,color=ffwwtt] (0.19,0.01)-- (1.4,0);
\draw [line width=1.6pt,color=ffwwtt] (1.4,0)-- (1.51,0.16);
\draw [line width=1.6pt,color=ffwwtt] (1.51,0.16)-- (1.52,0.51);
\draw [line width=1.6pt,color=ffwwtt] (1.52,0.51)-- (0.97,1.3);
\draw (0.8,0.71) node[anchor=north west] {\large$P_2$};
\draw [->] (0.97,1.3) -- (0.98,1.8);
\draw [->] (0.19,0) -- (-0.29,-0.21);
\draw [->] (1.87,0) -- (2.3,-0.23);
\draw (0.73,1.85) node[anchor=north west] {$j$};
\draw (-0.25,0.15) node[anchor=north west] {$k$};
\draw (2.16,0.15) node[anchor=north west] {$i$};
\draw (0.13,0.03) node[anchor=north west] {$s$};
\draw [line width=1.6pt,dash pattern=on 5pt off 5pt,color=wwwwww] (1.63,0.34)-- (1.63,0);
\begin{scriptsize}
\draw[color=ffqqtt] (1.87,0) circle (2.5pt);
\draw [color=ffqqtt] (1.76,0.15) circle (2.5pt);
\fill [color=black] (1.52,0.51) circle (2.5pt);
\fill [color=black] (1.4,0) circle (2.5pt);
\fill [color=ffqqtt] (1.63,0.34) ++(-3pt,0 pt) -- ++(3pt,3pt)--++(3pt,-3pt)--++(-3pt,-3pt)--++(-3pt,3pt);
\draw [color=ffqqtt] (1.63,0) circle (2.5pt);
\fill [color=black] (1.51,0.16) circle (2.5pt);
\end{scriptsize}
\end{tikzpicture}
\definecolor{zzqqff}{rgb}{0.6,0,1}
\definecolor{qqzztt}{rgb}{0,0.6,0.2}
\definecolor{ffwwqq}{rgb}{1,0.4,0}
\definecolor{ttttff}{rgb}{0.2,0.2,1}
\definecolor{qqzzzz}{rgb}{0,0.6,0.6}
\definecolor{ffqqtt}{rgb}{1,0,0.2}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=2.0cm,y=2.0cm]
\clip(-0.6,-0.44) rectangle (2.72,2.32);
\fill[line width=1.6pt,color=qqzzzz,fill=qqzzzz,fill opacity=0.25] (0.98,1.82) -- (1.37,1.15) -- (1.37,0.69) -- (1.38,0.32) -- (1.26,0.13) -- (1.15,-0.04) -- (-0.11,-0.03) -- cycle;
\draw [line width=0.4pt] (0.98,1.82)-- (-0.11,-0.03);
\draw [line width=0.4pt] (-0.11,-0.03)-- (2.08,-0.04);
\draw [line width=0.4pt] (2.08,-0.04)-- (0.98,1.82);
\draw (0.45,0.67) node[anchor=north west] {\large$P_1+P_2$};
\draw [->] (0.98,1.82) -- (0.99,2.31);
\draw [->] (-0.11,-0.03) -- (-0.41,-0.3);
\draw [->] (2.08,-0.04) -- (2.35,-0.32);
\draw (0.76,2.24) node[anchor=north west] {$j$};
\draw (-0.41,0.11) node[anchor=north west] {$k$};
\draw (2.29,0.11) node[anchor=north west] {$i$};
\draw (-0.14,0.01) node[anchor=north west] {$d$};
\draw [line width=1.6pt,color=qqzzzz] (0.98,1.82)-- (1.37,1.15);
\draw [line width=1.6pt,color=qqzzzz] (1.37,1.15)-- (1.37,0.69);
\draw [line width=1.6pt,color=qqzzzz] (1.37,0.69)-- (1.38,0.32);
\draw [line width=1.6pt,color=qqzzzz] (1.38,0.32)-- (1.26,0.13);
\draw [line width=1.6pt,color=qqzzzz] (1.26,0.13)-- (1.15,-0.04);
\draw [line width=1.6pt,color=qqzzzz] (1.15,-0.04)-- (-0.11,-0.03);
\draw [line width=1.6pt,color=qqzzzz] (-0.11,-0.03)-- (0.98,1.82);
\draw [line width=1.6pt,dash pattern=on 5pt off 5pt,color=wwwwww] (1.64,0.7)-- (1.63,0.31);
\draw [line width=1.6pt,dash pattern=on 5pt off 5pt,color=wwwwww] (1.63,0.31)-- (1.63,-0.04);
\draw [line width=1.6pt,dash pattern=on 5pt off 5pt,color=wwwwww] (1.51,0.92)-- (1.51,0.5);
\draw [line width=1.6pt,dash pattern=on 5pt off 5pt,color=wwwwww] (1.51,0.5)-- (1.5,0.13);
\draw [line width=1.6pt,dash pattern=on 5pt off 5pt,color=wwwwww] (1.38,0.32)-- (1.39,-0.05);
\begin{scriptsize}
\draw[color=ffqqtt] (2.08,-0.04) circle (2.5pt);
\draw[color=ffqqtt] (1.86,0.32) circle (2.5pt);
\draw[color=ffqqtt] (1.63,-0.04) circle (2.5pt);
\fill[color=ffqqtt] (1.64,0.7) ++(-3pt,0 pt) -- ++(3pt,3pt)--++(3pt,-3pt)--++(-3pt,-3pt)--++(-3pt,3pt);
\fill[color=ffqqtt] (1.51,0.92) ++(-3pt,0 pt) -- ++(3pt,3pt)--++(3pt,-3pt)--++(-3pt,-3pt)--++(-3pt,3pt);
\fill[color=ffqqtt] (1.51,0.5) ++(-3pt,0 pt) -- ++(3pt,3pt)--++(3pt,-3pt)--++(-3pt,-3pt)--++(-3pt,3pt);
\draw[color=ffqqtt] (1.75,0.52) circle (2.5pt);
\draw[color=ffqqtt] (1.97,0.13) circle (2.5pt);
\draw[color=ffqqtt] (1.87,-0.04) circle (2.5pt);
\draw[color=ffqqtt] (1.73,0.14) circle (2.5pt);
\fill [color=black] (1.37,0.69) circle (2.5pt);
\fill [color=black] (1.37,1.15) circle (2.5pt);
\fill [color=black] (1.38,0.32) circle (2.5pt);
\draw [color=ffqqtt] (1.5,0.13) circle (2.5pt);
\draw [color=ffqqtt] (1.63,0.31) circle (2.5pt);
\fill [color=black] (1.39,-0.05) circle (2.5pt);
\fill [color=black] (1.26,0.13) circle (2.5pt);
\fill [color=black] (1.15,-0.04) circle (2.5pt);
\end{scriptsize}
\end{tikzpicture}
\end{align*}
\caption{Case of $f_3=x_0^{d-2}x_1^{2}+x_0^{d-1}x_2$. $P_1$ (resp. $P_2$) is the lattice polytope consisting of the exponent vectors $(i,j,k)$ of the monomials $y_0^i y_1^j y_2^k$ in $(f_3^\perp)_{d-s}$ (resp. in $(f_3^\perp)_{s}$). A \ti{dashed} line means an equivalence relation between monomials given by the multiples of $Q_1$ in $P_1$ and $P_2$ and by those of the $Q_i Q_j$'s in $P_1+P_2$. The quotient space $T_d/(f_3^\perp)_{d-s}\cdot(f_3^\perp)_{s}$ can be represented by the 9 \ti{circle} monomials in $P_1+P_2$, namely $y_0^d,~~ y_0^{d-1}y_1,~~ y_0^{d-2}y_1^2,~~ y_0^{d-3}y_1^3,~~ y_0^{d-1}y_2,~~ y_0^{d-2} y_2^{2},~~ y_0^{d-3} y_1^2y_2,~~ y_0^{d-3}y_1 y_2^{2},~~ y_0^{d-2}y_1 y_2$, modulo the \ti{dashed} relations; this shows $\dim\hat{N}^{\vee}_{f_3}\sigma_3(X)={d+2\choose 2}-9$, i.e., the non-singularity at $f_3$.}
\label{Mink2}
\end{figure}
\end{proof}
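The Hilbert function computation $H(I^2,d)={d+2\choose 2}-9$ used above can be cross-checked for small $d$ by exact linear algebra over $\mathbb{Q}$: the degree-$d$ part of $I^2$ is spanned by the monomial multiples of the six quartics $Q_iQ_j$, so its dimension is the rank of the corresponding coefficient matrix. The following is a minimal sketch (all function names are ours, not from any library):

```python
from fractions import Fraction
from itertools import combinations_with_replacement
from math import comb

def monomials(deg):
    # exponent triples (a, b, c) with a + b + c = deg in C[y0, y1, y2]
    return [(deg - b - c, b, c) for b in range(deg + 1) for c in range(deg + 1 - b)]

def mul(p, q):
    # multiply two polynomials stored as {exponent_triple: coefficient}
    r = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            e = tuple(u + v for u, v in zip(e1, e2))
            r[e] = r.get(e, Fraction(0)) + c1 * c2
    return {e: c for e, c in r.items() if c != 0}

def rank(rows):
    # rank over Q by Gaussian elimination
    rows = [list(r) for r in rows if any(r)]
    rk, ncols = 0, (len(rows[0]) if rows else 0)
    for col in range(ncols):
        piv = next((i for i in range(rk, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        for i in range(rk + 1, len(rows)):
            if rows[i][col] != 0:
                f = rows[i][col] / rows[rk][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[rk])]
        rk += 1
    return rk

def hilbert_I2(d):
    # H(I^2, d) for I = (Q1, Q2, Q3), Q1 = y0*y2 - (d-1)/2 * y1^2,
    # Q2 = y1*y2, Q3 = y2^2; (I^2)_d is spanned by monomial multiples
    # of the six quartics Qi*Qj.
    c = Fraction(d - 1, 2)
    Q1 = {(1, 0, 1): Fraction(1), (0, 2, 0): -c}
    Q2 = {(0, 1, 1): Fraction(1)}
    Q3 = {(0, 0, 2): Fraction(1)}
    quartics = [mul(a, b) for a, b in combinations_with_replacement([Q1, Q2, Q3], 2)]
    idx = {e: i for i, e in enumerate(monomials(d))}
    rows = []
    for g in quartics:
        for m in monomials(d - 4):
            row = [Fraction(0)] * len(idx)
            for e, co in mul(g, {m: Fraction(1)}).items():
                row[idx[e]] = co
            rows.append(row)
    return rank(rows)
```

For $d=4,\ldots,7$ this reproduces ${d+2\choose 2}-9$, in agreement with the minimal free resolution $0\to T(-6)\to T(-5)^6\to T(-4)^6\to I^2\to 0$.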
\begin{Remk} From the viewpoint of apolarity, the three cases in Theorem \ref{thm_nondeg} can be explained geometrically as follows: if we consider the base locus of the ideal $I$ generated by the three quadrics in each apolar ideal $f_i^\perp$, then case (i) corresponds to three distinct points, case (ii) to one reduced point and one non-reduced point of length 2, and case (iii) to one non-reduced point of length 3 (not lying on a line).\end{Remk}
\subsection{Degenerate case : binary forms}
Since there is no degenerate form for $d=3$ (see Remark \ref{d<=3} (a)), it is enough to consider the smoothness of the degenerate locus for $d\ge4$.
\begin{Thm}[Degenerate locus]\label{thm_deg} Let $D$ be the locus of all the degenerate forms in $\sigma_3(v_d({\mathbb P}^n))\setminus\sigma_2(v_d({\mathbb P}^n))$. Then, for any $d\ge4, n\ge2$, $\sigma_3(v_d({\mathbb P}^n))$ is singular on $D$ if and only if $d=4$ and $n\ge3$.
\end{Thm}
\begin{proof} Let $f_D$ be any form belonging to $D$. For this degenerate case, by Remark \ref{deg_conormal}, we have
\[\hat{N}^{\vee}_{f_D}\sigma_3(X)=(f_D^\perp)_{\lfloor \frac d2\rfloor}\cdot(f_D^\perp)_{d-\lfloor \frac d2\rfloor}~.\]
First of all, let us consider $f_D$ as a polynomial in ${\mathbb C}[x_0,x_1]$ (i.e. $f_D=f_D(x_0,x_1)$). Then, by the Hilbert--Burch theorem (see e.g. \cite[thm. 1.54]{IK}) we know that $T/f_D^\perp$ is an Artinian Gorenstein algebra with socle degree $d$ and that $f_D^\perp$ is a complete intersection of two homogeneous polynomials $F, G$ of degrees $a$ and $b$ with $a+b=d+2$ \ti{as an ideal of ${\mathbb C}[y_0,y_1]$}. Since $\rank~\phi_{d-3,3}(f_D)=3$, the kernel of $\phi_{d-3,3}(f_D)$ in ${\mathbb C}[y_0,y_1]_3$ is one-dimensional, which gives one \ti{cubic} generator $F$ in $f_D^\perp$.
When $f_D$ is general, $f_D=x_0^{d}+\alpha x_1^{d}+\beta (x_0+x_1)^{d}$ for some $\alpha,\beta\in{\mathbb C}^\ast$ by Lemma \ref{deg_normal}, so we have $F=y_0^2 y_1-y_0 y_1^2$. Even when $f_D$ is not general, we have $F=y_0^2 y_1$ up to change of coordinates, because the apolar ideal of such a non-general $f_D$ corresponds to the case of one multiple root on ${\mathbb P}^1$ (see \cite{CG} and also \cite[chap.4]{LT}).
Therefore, we obtain that
\[f_D^\perp=\big(F,G\big)\quad \trm{with $F=y_0^2 y_1-y_0 y_1^2$ or $F=y_0^2 y_1$, for some polynomial $G$ of degree $d-1$,}\]
and that $f_D^\perp$ \ti{as an ideal in $T={\mathbb C}[y_0,y_1,\ldots,y_n]$} has its degree parts $(f_D^\perp)_{\lfloor \frac d2\rfloor}$ and $(f_D^\perp)_{d-\lfloor \frac d2\rfloor}$, both of which are generated by $F,y_2,\ldots,y_n$, since $d\ge4$ so that $\lfloor \frac d2\rfloor, d-\lfloor \frac d2\rfloor<d-1$.
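That the cubic $F=y_0^2 y_1-y_0 y_1^2$ indeed annihilates the general $f_D$ can be confirmed symbolically, since under apolarity $y_i$ acts as $\partial/\partial x_i$. A sketch for one fixed degree (the choice $d=7$ is arbitrary; any $d\ge4$ works):

```python
import sympy as sp

x0, x1, a, b = sp.symbols('x0 x1 alpha beta')
d = 7  # arbitrary degree >= 4
f = x0**d + a * x1**d + b * (x0 + x1)**d

# F = y0^2*y1 - y0*y1^2 annihilates f iff
# (d/dx0)^2 (d/dx1) f equals (d/dx0) (d/dx1)^2 f
F_of_f = sp.diff(f, x0, 2, x1, 1) - sp.diff(f, x0, 1, x1, 2)
assert sp.expand(F_of_f) == 0
```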
Now, let us compute the dimension of the conormal space as follows:\\
i) $d=4$ case (i.e. $\lfloor \frac d2\rfloor=2$) : In this case, we have
\[\hat{N}^{\vee}_{f_D}\sigma_3(X)=(f_D^\perp)_{2}\cdot(f_D^\perp)_{2}=(y_2,\ldots,y_n)_2\cdot(y_2,\ldots,y_n)_2=(\{y_i y_j~|~2\le i,j\le n\})_4~.\] So, we get
\begin{align*}
\dim \hat{N}^{\vee}_{f_D}\sigma_3(X)&=\dim T_4 -\dim\big\langle y_0^4,y_0^3 y_1,\cdots,y_1^4\big\rangle-\dim\big\langle\{y_0^3\cdot\ell, y_0^2 y_1\cdot\ell, y_0 y_1^2\cdot\ell, y_1^3\cdot\ell~|~\ell=y_2,\ldots,y_n\}\big\rangle\\
&={4+n\choose 4}-5-4(n-1)~.
\end{align*}
This shows us that $\sigma_3(X)$ is singular at $f_D$ if and only if $n\ge3$, because the expected codimension is ${4+n\choose 4}-3n-3$.
ii) $d=5$ case (i.e. $\lfloor \frac d2\rfloor=2$) : Recall that $F$ is $y_0^2 y_1-y_0 y_1^2$ or $y_0^2 y_1$, the cubic generator of $f_D^\perp$. Then,
\[\hat{N}^{\vee}_{f_D}\sigma_3(X)=(f_D^\perp)_{2}\cdot(f_D^\perp)_{3}=(y_2,\ldots,y_n)_2\cdot(F,y_2,\ldots,y_n)_3~.\]
\begin{align*}
\dim \hat{N}^{\vee}_{f_D}\sigma_3(X)&=\dim T_5 -\dim\big\langle y_0^5,y_0^4 y_1,\cdots,y_1^5\big\rangle\\
&\quad-\dim\bigg\langle\{y_0^4\cdot\ell, y_0^3 y_1\cdot\ell, y_0^2 y_1^2\cdot\ell, y_0 y_1^3\cdot\ell, y_1^4\cdot\ell\}\setminus\{y_0 F\cdot\ell, y_1 F\cdot\ell~|~\ell=y_2,\ldots,y_n\}\bigg\rangle\\
&={5+n\choose 5}-6-3(n-1)=\trm{expected $\codim(\sigma_3(X),{\mathbb P} S^5 V)$}~,
\end{align*}
which gives that $\sigma_3(X)$ is smooth at $f_D$ in this case.
iii) $d\ge6$ case : Here we have $\hat{N}^{\vee}_{f_D}\sigma_3(X)=(f_D^\perp)_{\lfloor \frac d2\rfloor}\cdot(f_D^\perp)_{d-\lfloor \frac d2\rfloor}=(F,y_2,\ldots,y_n)_{\lfloor \frac d2\rfloor}\cdot(F,y_2,\ldots,y_n)_{d-\lfloor \frac d2\rfloor}~.$
\begin{align*}
\dim \hat{N}^{\vee}_{f_D}\sigma_3(X)&=\dim T_d -\dim\bigg\langle\{y_0^{d-1}\cdot\ell, y_0^{d-2} y_1\cdot\ell, \ldots, y_1^{d-1}\cdot\ell\}\setminus\{y_0^{d-4}F\cdot\ell,\ldots,y_1^{d-4}F\cdot\ell~|~\ell=y_2,\ldots,y_n\}\bigg\rangle\\
&\quad-\dim\bigg(\{y_0^d,y_0^{d-1} y_1,\cdots,y_1^d\}\setminus\{y_0^{d-6}\cdot F^2,y_0^{d-7}y_1\cdot F^2,\ldots,y_1^{d-6}\cdot F^2\}\bigg)\\
&={d+n\choose d}-\big\{d-(d-3)\big\}(n-1)-\big\{(d+1)-(d-5)\big\}\\&={d+n\choose d}-3(n-1)-6=\trm{expected $\codim(\sigma_3(X),{\mathbb P} S^d V)$}~,
\end{align*}
which implies that $\sigma_3(X)$ is also smooth at $f_D$.
\end{proof}
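The elementary dimension comparisons in the three cases above can be cross-checked numerically: $\sigma_3(X)$ is smooth at $f_D$ precisely when the conormal dimension computed in each case equals the expected codimension ${d+n\choose d}-3n-3$, and this should fail exactly for $d=4$, $n\ge3$. A sketch (the function names are ours):

```python
from math import comb

def expected_codim(d, n):
    # codim of sigma_3(v_d(P^n)) in P(S^d V); dim sigma_3(X) = 3n + 2
    return comb(d + n, d) - 3 * n - 3

def conormal_dim(d, n):
    # dim of the conormal space at a degenerate form f_D, per case
    if d == 4:
        return comb(4 + n, 4) - 5 - 4 * (n - 1)
    if d == 5:
        return comb(5 + n, 5) - 6 - 3 * (n - 1)
    return comb(d + n, d) - 3 * (n - 1) - 6  # case d >= 6

# smoothness at f_D <=> conormal dimension equals expected codimension
for n in range(2, 7):
    for d in range(4, 10):
        smooth = conormal_dim(d, n) == expected_codim(d, n)
        assert smooth == (not (d == 4 and n >= 3))
```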
\subsection{Defining equations of $\Sing(\sigma_3(X))$} As an immediate corollary of Theorem \ref{sing3vero}, we obtain defining equations of the singular locus of the third secant variety $\sigma_3(X)$ of the Veronese embedding.
\begin{Coro} Let $X$ be the $n$-dimensional Veronese embedding as above. The singular locus of $\sigma_3(X)$ is cut out by the $3\times 3$-minors of the two symmetric flattenings $\phi_{d-1,1}$ and $\phi_{d-2,2}$, except in the case $d=4$ and $n\ge3$, in which the (set-theoretic) defining ideal of the locus is the intersection of the ideal generated by the previous $3\times 3$-minors with the ideal generated by the $3\times 3$-minors of $\phi_{d-1,1}$ and the $4\times 4$-minors of $\phi_{d-\lfloor \frac d2\rfloor,\lfloor \frac d2\rfloor}$.
\end{Coro}
\begin{proof}
It is well-known that $\sigma_2(X)$ is cut out by the $3\times 3$-minors of the two flattenings $\phi_{d-1,1}$ and $\phi_{d-2,2}$ (see \cite[thm. 3.3]{Kan}). It is also easy to see that $D$, the locus of degenerate forms inside $\sigma_3(X)$, is cut out by the $3\times 3$-minors of $\phi_{d-1,1}$ and the $4\times 4$-minors of $\phi_{d-\lfloor \frac d2\rfloor,\lfloor \frac d2\rfloor}$, by the argument in Remark \ref{deg_conormal} and Proposition \ref{eqn_s3}. Thus, using these two facts, the conclusion follows immediately from Theorem \ref{sing3vero}.
\end{proof}
\paragraph*{\textbf{Acknowledgements}} The author would like to express his deep gratitude to Giorgio Ottaviani for introducing the problem, giving many helpful suggestions to him, and encouraging him to complete this work. He also thanks Luca Chiantini and Luke Oeding for useful conversations.
\section{The Eigenvalues of the Periodic Landau Hamiltonian}
\label{Spectrum_Landau_Hamiltonian_Section}
In this appendix, we investigate the eigenvalues of the periodic Landau Hamiltonian and their multiplicity. We shall fix an arbitrary charge $q\in \mathbb{N}$ and we consider the space $L_{\mathrm{mag}}^{q,2}(Q_B)$ of $L^2_{\mathrm{loc}}(\mathbb{R}^3)$-functions $\Psi$, which are gauge-periodic with respect to the magnetic translations
\begin{align}
T_{B,q}(v)\Psi(x) &:= \mathrm{e}^{\i \frac{q\mathbf B}{2}\cdot (v \wedge x)} \Psi(x+v), & v &\in \mathbb{R}^3, \label{Magnetic_Translation_charge-q}
\end{align}
of the lattice $\Lambda_B$ defined above \eqref{Fundamental_cell}, that is, these functions satisfy $T_{B, q}(\lambda)\Psi = \Psi$ for every $\lambda\in \Lambda_B$. The magnetic translations obey $T_{B,q}(v+w) = \mathrm{e}^{\i \frac{q\mathbf B}{2} \cdot (v\wedge w)} T_{B, q}(v) T_{B,q}(w)$, whence the group $\{T_{B,q}(\lambda)\}_{\lambda\in \Lambda_B}$ is abelian.
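The composition law for the magnetic translations can be checked pointwise on any test function. The following is a minimal numerical sketch (the field vector, standing for $q\mathbf B$, the test function, and the sample points are arbitrary choices):

```python
import cmath
import numpy as np

b = np.array([0.0, 0.0, 0.7])  # stands for q*B; any vector works

def T(v, f):
    # magnetic translation: (T(v) f)(x) = exp(i/2 b.(v ^ x)) f(x + v)
    return lambda x: cmath.exp(0.5j * np.dot(b, np.cross(v, x))) * f(x + v)

# arbitrary smooth complex-valued test function and sample data
f = lambda x: cmath.exp(1j * np.dot(np.array([0.3, -1.2, 0.5]), x)) / (1.0 + np.dot(x, x))
x = np.array([0.4, -0.8, 1.1])
v = np.array([1.0, 0.2, -0.5])
w = np.array([-0.3, 0.9, 0.4])

# T(v + w) = exp(i/2 b.(v ^ w)) T(v) T(w), evaluated at x
lhs = T(v + w, f)(x)
rhs = cmath.exp(0.5j * np.dot(b, np.cross(v, w))) * T(v, T(w, f))(x)
assert abs(lhs - rhs) < 1e-12
```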
On the Sobolev space $H_{\mathrm{mag}}^{q,2}(Q_B)$ of gauge-periodic functions, where
\begin{align}
H_{\mathrm{mag}}^{q,m}(Q_B) &:= \bigl\{ \Psi\in L_{\mathrm{mag}}^{q,2}(Q_B) : \Pi^\nu \Psi\in L_{\mathrm{mag}}^{q, 2} (Q_B) \quad \forall \nu\in \mathbb{N}_0^3, |\nu|_1\leq m\bigr\} \label{Periodic_Sobolev_Space_charge-q}
\end{align}
for $m\in \mathbb{N}_0$, we consider the Landau Hamiltonian $\Pi_q^2$ with magnetic momentum given by $\Pi_q :=-\i \nabla + q \mathbf A$. This operator commutes with the translations in \eqref{Magnetic_Translation_charge-q} and the magnetic flux through the unit cell $Q_B$ is equal to $2\pi q$, see \eqref{Fundamental_cell} and the discussion below \eqref{Fundamental_cell}. In this respect, Sections \ref{Magnetically_Periodic_Samples} and \ref{Periodic Spaces} correspond to the special cases $q=1$ and $q=2$, respectively.
We choose a Bloch--Floquet decomposition $\mathcal{U}_{\mathrm{BF}}$ (see also Section \ref{Schatten_Classes}) such that $\Pi_q$ fibers according to
\begin{align}
\mathcal{U}_{\mathrm{BF}} \, \Pi_q \, \mathcal{U}_{\mathrm{BF}}^* = \int_{[0,1)^3}^\oplus \mathrm{d} \vartheta \; \Pi_q(\vartheta) \label{Bloch-Floquet-decomposition}
\end{align}
with fiber momentum operators
\begin{align*}
\Pi_q(\vartheta) := -\i \nabla + q\mathbf A + \sqrt{2\pi B} \, \vartheta
\end{align*}
acting on the magnetic Sobolev space $H_{\mathrm{mag}}^{q,2}(Q_B)$ in \eqref{Periodic_Sobolev_Space_charge-q}.
\begin{prop}
\label{Spectrum_Landau_Hamiltonian}
For every $B >0$, $q\in \mathbb{N}$, and $\vartheta\in [0, 1)^3$, the spectrum of $\Pi_q(\vartheta)^2$ consists of the isolated eigenvalues
\begin{align}
E_{q,B, \vartheta}(k, p) &:= q \, B \, (2k+1) + 2\pi \, B \, (p + \vartheta_3)^2, & k\in \mathbb{N}_0, \; p\in \mathbb{Z}. \label{Spectrum_Landau_Hamiltonian_eq1}
\end{align}
Furthermore, their multiplicity is finite and equals
\begin{align}
\dim \ker ( \Pi_q(\vartheta)^2 - E_{q,B, \vartheta}(k, p)) = q.
\label{Spectrum_Landau_Hamiltonian_eq2}
\end{align}
\end{prop}
In preparation for the proof, we first note that, by rescaling, $\Pi_q(\vartheta)^2$ is isospectral to $B \, (-\i \nabla + \frac q2 e_3\wedge x+ \sqrt{2\pi}\, \vartheta )^2$. We henceforth assume that $B =1$.
Furthermore, we introduce the notation $x = (x_\perp , x_3)^t$ and define the two-dimensional operator $\Pi_{\perp, q}(\vartheta) := (\Pi_q^{(1)}(\vartheta), \Pi_q^{(2)}(\vartheta))^t$. This operator acts on functions $\psi_\perp$ satisfying the gauge-periodic condition $T_{\perp,q} (\lambda) \psi_\perp = \psi_\perp$ for all $\lambda \in \sqrt{2\pi}\, \mathbb{Z}^2$ with
\begin{align}
T_{\perp,q} (v) \psi_\perp(x) &:= \mathrm{e}^{\i \frac{q}{2} (v_1x_2 - v_2x_1)} \psi_\perp(x + v), & v &\in \mathbb{R}^2.
\end{align}
The following result is well known, even for more general lattices, see for example \cite[Proposition 6.1]{Tim_Abrikosov}. We include the proof for the sake of completeness, adding the treatment of the perturbation by $\vartheta$.
\begin{lem}
\label{Spectrum_Landau_Hamiltonian_Lemma}
For every $q\in \mathbb{N}$, the spectrum of the operator $\Pi_{\perp, q}(\vartheta)^2$ consists of the isolated eigenvalues $E_q(k) := (2k+1)q$, $k\in \mathbb{N}_0$. Each eigenvalue $E_q(k)$ is $q$-fold degenerate.
\end{lem}
\begin{proof}
Since $[\Pi_q^{(1)}(\vartheta), \Pi_q^{(2)}(\vartheta)] = -\i q$, the creation and annihilation operators
\begin{align}
a(\vartheta) &:= \frac{1}{\sqrt{2q}} \bigl( \Pi_q^{(1)}(\vartheta) - \i \Pi_q^{(2)}(\vartheta)\bigr), & a^*(\vartheta) &:= \frac{1}{\sqrt{2q}} \bigl( \Pi_q^{(1)}(\vartheta) + \i \Pi_q^{(2)}(\vartheta)\bigr) \label{Creation-Annihilation}
\end{align}
satisfy $[a(\vartheta), a^*(\vartheta)] = 1$ and it is easy to show that
\begin{align}
\Pi_{\perp, q}(\vartheta)^2 = q\, (2\, a^*(\vartheta) a(\vartheta) + 1). \label{Spectrum_Landau_Hamiltonian_eq3}
\end{align}
From this, we read off the formula for $E_q(k)$.
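For completeness, we record the one-line computation behind \eqref{Spectrum_Landau_Hamiltonian_eq3}: since
\begin{align*}
2q \; a^*(\vartheta) a(\vartheta) = \bigl( \Pi_q^{(1)}(\vartheta) + \i \Pi_q^{(2)}(\vartheta)\bigr) \bigl( \Pi_q^{(1)}(\vartheta) - \i \Pi_q^{(2)}(\vartheta)\bigr) = \Pi_{\perp, q}(\vartheta)^2 - \i\, [\Pi_q^{(1)}(\vartheta), \Pi_q^{(2)}(\vartheta)] = \Pi_{\perp, q}(\vartheta)^2 - q~,
\end{align*}
the operator $\Pi_{\perp, q}(\vartheta)^2$ equals $q\,(2\, a^*(\vartheta) a(\vartheta) + 1)$.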
The rest of the proof is devoted to the statement about the degeneracy. First, with the help of the creation and annihilation operators, it is easy to show that the degeneracy of $E_q(k)$ is equal to that of $E_q(0)$ for all $k\in \mathbb{N}_0$. Therefore, it is sufficient to determine the degeneracy of $E_q(0)$. By \eqref{Spectrum_Landau_Hamiltonian_eq3}, $\ker(\Pi_{\perp, q}^2(\vartheta) - q)$ equals $\ker(a(\vartheta))$ so it suffices to determine the latter. A straightforward calculation shows that
\begin{align*}
\mathrm{e}^{\frac q4 |x_\perp - \frac 2q \sqrt{2\pi} J\vartheta|^2} \; a(\vartheta) \; \mathrm{e}^{-\frac q4|x_\perp - \frac 2q \sqrt{2\pi} J\vartheta|^2} &= - \frac{\i}{\sqrt{2q}} \, [ \partial_{x_1} - \i \partial_{x_2}], & J := \bigl(\begin{matrix} 0 & -1 \\ 1 & 0\end{matrix}\bigr).
\end{align*}
Therefore, the property $\psi_\perp \in\ker a(\vartheta)$ is equivalent to the function $\xi := \mathrm{e}^{\frac q4 |x_\perp - \frac 2q \sqrt{2\pi} J\vartheta|^2} \psi_\perp$ satisfying $\partial_{x_1}\xi - \i \partial_{x_2}\xi =0$. If we identify $z = x_1 + \i x_2\in \mathbb{C}$, then $J\vartheta = \i (\vartheta_1 + \i \vartheta_2)$ and this immediately implies that the complex conjugate function $\ov \xi$ solves the Cauchy-Riemann differential equations, whence it is entire. We define the entire function
\begin{align*}
\Theta(z) := \mathrm{e}^{-2\i z \Re \vartheta} \, \mathrm{e}^{-\frac{q}{2\pi} (z - \frac{2\pi \i}{q} \vartheta)^2} \; \ov {\xi \Bigl( \sqrt{\frac 2\pi} \, z\Bigr)}.
\end{align*}
A tedious calculation shows that the gauge-periodicity of $\psi_\perp$ is equivalent to the relations
\begin{align}
\Theta(z + \pi) &= \Theta(z), \label{Spectrum_Landau_1}\\
\Theta(z + \i \pi) &= \mathrm{e}^{-2\pi \vartheta} \, \mathrm{e}^{-2\i q z}\, \mathrm{e}^{q\pi} \, \Theta(z). \label{Spectrum_Landau_2}
\end{align}
Therefore, it suffices to show that the space of entire functions $\Theta$ which obey \eqref{Spectrum_Landau_1} and \eqref{Spectrum_Landau_2} is a vector space of dimension $q$. We claim that \eqref{Spectrum_Landau_1} implies that $\Theta$ has an absolutely convergent Fourier series expansion of the form
\begin{align}
\Theta(z) = \sum_{k\in \mathbb{Z}} c_k \; \mathrm{e}^{2\i kz}. \label{Spectrum_Landau_3}
\end{align}
To prove this, we first note that, for fixed imaginary part $x_2$, we may expand $\Theta$ in an absolutely convergent series $\Theta(z) = \sum_{k\in \mathbb{Z}} a_k(x_2) \mathrm{e}^{2\i kx_1}$ with
\begin{align*}
a_k(x_2) = \frac 1\pi \int_0^\pi \mathrm{d} x_1 \; \mathrm{e}^{-2\i k x_1} \, \Theta(x_1 + \i x_2).
\end{align*}
By the Cauchy-Riemann equations, it is easy to verify that $a_k' = -2k\, a_k$. Therefore, the number $c_k := \mathrm{e}^{2k x_2} \, a_k(x_2)$ is independent of $x_2$ and provides the expansion \eqref{Spectrum_Landau_3}. Furthermore, \eqref{Spectrum_Landau_2} implies that $c_{k+q} = \mathrm{e}^{-\pi (2k+q)} \mathrm{e}^{2\pi \vartheta} c_k$. Therefore, the series \eqref{Spectrum_Landau_3} is fully determined by the values of $c_0, \ldots, c_{q-1}$ and we conclude that $\ker a(\vartheta)$ is a $q$-dimensional vector space.
\end{proof}
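The counting argument above can be illustrated numerically: a truncated series built from arbitrary seeds $c_0,\ldots,c_{q-1}$ via the recursion $c_{k+q} = \mathrm{e}^{-\pi(2k+q)}\mathrm{e}^{2\pi\vartheta}c_k$ satisfies \eqref{Spectrum_Landau_1} and \eqref{Spectrum_Landau_2} up to truncation error. The following sketch treats $\vartheta$ as a real scalar and uses arbitrarily chosen seeds:

```python
import cmath, math

def theta_coeffs(q, theta, seeds, m):
    # Fourier coefficients c_k for -m*q <= k <= (m+1)*q - 1, generated from
    # the seeds c_0, ..., c_{q-1} via c_{k+q} = e^{-pi(2k+q)} e^{2 pi theta} c_k
    c = {k: float(seeds[k]) for k in range(q)}
    for k in range(0, m * q):
        c[k + q] = math.exp(-math.pi * (2 * k + q)) * math.exp(2 * math.pi * theta) * c[k]
    for k in range(-1, -m * q - 1, -1):
        c[k] = math.exp(math.pi * (2 * k + q)) * math.exp(-2 * math.pi * theta) * c[k + q]
    return c

def Theta(z, c):
    # truncated series Theta(z) = sum_k c_k e^{2 i k z}
    return sum(ck * cmath.exp(2j * k * z) for k, ck in c.items())

q, theta = 2, 0.3                       # arbitrary charge and scalar theta
c = theta_coeffs(q, theta, [1.0, 0.5], 8)
z = 0.2 + 0.1j

# Theta(z + pi) = Theta(z)
assert abs(Theta(z + math.pi, c) - Theta(z, c)) < 1e-10 * abs(Theta(z, c))
# Theta(z + i pi) = e^{-2 pi theta} e^{-2 i q z} e^{q pi} Theta(z)
rhs = math.exp(-2 * math.pi * theta) * cmath.exp(-2j * q * z) * math.exp(q * math.pi) * Theta(z, c)
assert abs(Theta(z + 1j * math.pi, c) - rhs) < 1e-10 * abs(rhs)
```

The Gaussian decay of the coefficients makes the truncation error negligible, so both functional equations hold to machine precision.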
\begin{proof}[Proof of Proposition \ref{Spectrum_Landau_Hamiltonian}]
As mentioned before, it suffices to prove the proposition for $B = 1$. It is easy to verify that for any $\vartheta \in [0,1)^3$ the spectrum of $(\Pi_q(\vartheta)^{(3)})^2$ consists of the simple eigenvalues $2\pi\, (p+ \vartheta_3)^2$ with $p\in \mathbb{Z}$. Since $\Pi_q(\vartheta)^{(3)}$ and $\Pi_{\perp,q}(\vartheta)$ commute, Lemma \ref{Spectrum_Landau_Hamiltonian_Lemma} implies the existence of an orthonormal basis of eigenvectors for $\Pi_q(\vartheta)^2$ of the form $\psi_\perp^{k, m}(x_\perp) \psi_3^{\vartheta, p}(x_3)$ with $k\in \mathbb{N}_0$, $m = 1, \ldots, q$ and $p\in \mathbb{Z}$, corresponding to the eigenvalue $E_{q,1, \vartheta}(k,p)$. This proves the formulas \eqref{Spectrum_Landau_Hamiltonian_eq1} and \eqref{Spectrum_Landau_Hamiltonian_eq2}.
\end{proof}
\subsection{Introduction}
Ginzburg--Landau (GL) theory was introduced in 1950 as the first macroscopic and phenomenological description of superconductivity \cite{GL}. The theory comprises a system of partial differential equations for a complex-valued function, the order parameter, and an effective magnetic field. Ginzburg--Landau theory has been highly influential and has been investigated in numerous works, among which are \cite{Sigal1, Sigal2, Serfaty, SandierSerfaty, Correggi3, Correggi2, Giacomelli1, Giacomelli2, Giacomelli3} and references therein.
Bardeen--Cooper--Schrieffer (BCS) theory of superconductivity is the first commonly accepted and Nobel prize awarded microscopic theory of superconductivity \cite{BCS}. As a major breakthrough, the theory features a pairing mechanism between the electrons below a certain critical temperature, which causes the electrical resistance in the system to drop to zero in the superconducting phase. This effect is due to an effective attraction between the electrons, which arises as a consequence of the phonon vibrations of the lattice ions in the superconductor.
One way to formulate BCS theory mathematically is via the BCS free energy functional or BCS functional for short. As Leggett pointed out in \cite{Leg1980}, the BCS functional can be obtained from a full quantum mechanical description of the system by restricting attention to quasi-free states, see also \cite{de_Gennes}. Such states are determined by their one-particle density matrix and the Cooper pair wave function. The BCS functional has been studied intensively from a mathematical point of view in the absence of external fields in \cite{Hainzl2007, Hainzl2007-2, Hainzl2008-2, Hainzl2008-4, FreiHaiSei2012, BraHaSei2014, FraLe2016, DeuchertGeisinger} and in the presence of external fields in \cite{HaSei2011, BrHaSei2016, FraLemSei2017, A2017, CheSi2020}. The BCS gap equation arises as the Euler--Lagrange equation of the BCS functional and its solution is used to compute the spectral gap of an effective Hamiltonian, which is open in the superconducting phase. BCS theory from the point of view of its gap equation is studied in \cite{Odeh1964, BilFan1968, Vanse1985, Yang1991, McLYang2000, Yang2005}.
The present article continues a series of works, in which the \emph{macroscopic} GL theory is derived from the \emph{microscopic} BCS theory in a regime close to the critical temperature and for weak external fields. This endeavor has been initiated by Gor'kov in 1959 \cite{Gorkov}. The first mathematically rigorous derivation of the GL functional from the BCS functional has been provided by Frank, Hainzl, Seiringer, and Solovej for periodic external electric and magnetic fields in 2012 in \cite{Hainzl2012}. An important assumption of this work is that the flux of the external magnetic field through the unit cell of periodicity of the system vanishes. This excludes for example a homogeneous magnetic field. The techniques from this GL derivation have been further developed in \cite{Hainzl2014} to compute the BCS critical temperature shift caused by the external fields. The first important step towards overcoming the zero magnetic flux restriction in \cite{Hainzl2012,Hainzl2017} has been made by Frank, Hainzl, and Langmann, who considered in \cite{Hainzl2017} the problem of computing the BCS critical temperature shift for systems exposed to a homogeneous magnetic field within the framework of linearized BCS theory. Recently, the derivation of the GL functional and the computation of BCS critical temperature shift (for the full nonlinear model) could be extended to the case of a constant magnetic field by Deuchert, Hainzl, and Maier in \cite{DeHaSc2021}. The goal of the present work is to further extend the results in \cite{DeHaSc2021} to the case of general external magnetic fields with an arbitrary flux through the unit cell.
GL theory arises from BCS theory when the temperature is sufficiently close to the critical temperature and when the external fields are weak and slowly varying. More precisely, if $0 < h \ll 1$ denotes the ratio between the microscopic and the macroscopic length scale, then the external electric field $W$ and the magnetic vector potential $\mathbf A$ are given by $h^2 W(hx)$ and $h\mathbf A(hx)$, respectively. Furthermore, the temperature regime is such that $T - {T_{\mathrm{c}}} = -{T_{\mathrm{c}}} Dh^2$ for some constant $D >0$, where ${T_{\mathrm{c}}}$ is the critical temperature in absence of external fields. When this scaling is in effect, it is shown in \cite{Hainzl2012} and \cite{DeHaSc2021} that the Cooper pair wave function $\alpha(x,y)$ is given by
\begin{align}
\alpha(x,y) = h\, \alpha_*(x - y) \, \psi \left( \frac{h(x+y)}{2}\right) \label{eq:intro1}
\end{align}
to leading order in $h$. Here, $\alpha_*$ is the microscopic Cooper pair wave function in the absence of external fields and $\psi$ is the GL order parameter.
Moreover, the influence of the external fields causes a shift in the critical temperature of the BCS model, which is described by linearized GL theory in the same scaling regime. More precisely, it has been shown in \cite{Hainzl2014,Hainzl2017}, and \cite{DeHaSc2021} that the critical temperature shift in BCS theory is given by
\begin{align}
{T_{\mathrm{c}}}(h) = {T_{\mathrm{c}}} (1 - {D_{\mathrm{c}}} h^2) \label{eq:intro2}
\end{align}
to leading order, where ${D_{\mathrm{c}}}$ denotes a critical parameter that can be computed using linearized GL theory.
The present work is an extension of the paper \cite{DeHaSc2021}, where the case of a constant magnetic field was considered. In this article, we incorporate periodic electric fields $W$ and general vector potentials $\mathbf A$ that give rise to periodic magnetic fields. This, in particular, generalizes the results in \cite{Hainzl2012,Hainzl2014} to the case of general external magnetic fields with non-zero flux through the unit cell. We show that within the scaling introduced above, the Ginzburg--Landau energy arises as the leading-order correction at order $h^4$. Furthermore, we show that the Cooper pair wave function admits the leading order term \eqref{eq:intro1} and that the critical temperature shift is given by \eqref{eq:intro2} to leading order. The main technical novelty of this article is a further development of the phase approximation method, which has been pioneered in the framework of BCS theory for the case of the constant magnetic field in \cite{Hainzl2017} and \cite{DeHaSc2021}. It allows us to compute the BCS energy of a class of trial states (Gibbs states) in a controlled way. This trial state analysis is later used in the proofs of the upper and lower bounds for the BCS free energy. The proof of our lower bound additionally uses a priori bounds for certain low-energy BCS states that include the magnetic field and have been established in \cite{DeHaSc2021}.
\subsection{Gauge-periodic samples}
\label{Magnetically_Periodic_Samples}
Our objective is to study a system of three-dimensional fermionic particles that is subject to weak and slowly varying external electromagnetic fields within the framework of BCS theory. Let us define the magnetic field $\mathbf B \coloneqq h^2 e_3$. It can be written in terms of the vector potential $\mathbf A_{\mathbf B}(x) \coloneqq \frac{1}{2} \mathbf B \wedge x$, where $x \wedge y$ denotes the cross product of two vectors $x,y \in \mathbb{R}^3$, as $\mathbf B = \curl \mathbf A_\mathbf B$. To the vector potential $\mathbf A_{\mathbf B}$ we associate the magnetic translations
\begin{align}
T(v)f(x) &\coloneqq \mathrm{e}^{\i \frac {\mathbf B} 2\cdot (v\wedge x)} f(x+v), & v &\in \mathbb{R}^3, \label{Magnetic_Translation}
\end{align}
which commute with the magnetic momentum operator $-\i \nabla + \mathbf A_\mathbf B$. The family $\{ T(v) \}_{v\in \mathbb{R}^3}$ satisfies $T(v+w) = \mathrm{e}^{\i \frac{\mathbf B}{2} \cdot (v \wedge w)} T(v) T(w)$ and is therefore a unitary representation of the Heisenberg group. We assume that our system is periodic with respect to the Bravais lattice $\Lambda_h \coloneqq \sqrt{2\pi} \, h^{-1} \, \mathbb{Z}^3$ with fundamental cell
\begin{align}
Q_h &\coloneqq \bigl[0, \sqrt{2\pi} \, h^{-1}\bigr]^3 \subseteq \mathbb{R}^3. \label{Fundamental_cell}
\end{align}
Let $b_i = \sqrt{2\pi} \, h^{-1} \, e_i$ denote the basis vectors that span $\Lambda_h$. The magnetic flux through the face of the unit cell spanned by $b_1$ and $b_2$ equals $2 \pi$, and hence the abelian subgroup $\{ T(\lambda) \}_{\lambda \in \Lambda_h}$ is a unitary representation of the lattice group.
Our system is subject to an external electric field $W_h(x) = h^2W(hx)$ with a fixed function $W \colon \mathbb{R}^3 \rightarrow \mathbb{R}$, as well as a magnetic field defined in terms of the vector potential $\mathbf A_h(x) = h \mathbf A(hx)$, which admits the form $\mathbf A \coloneqq \mathbf A_{e_3} + A$ with $A \colon \mathbb{R}^3\rightarrow \mathbb{R}^3$ and $\mathbf A_{e_3}$ as defined above. We assume that $A$ and $W$ are periodic with respect to $\Lambda_1$. The flux of the magnetic field $\curl A_h$ through all faces of the unit cell $Q_h$ vanishes because $A_h$ is a periodic function. Accordingly, the magnetic field $\curl \mathbf A_h$ has the same fluxes through the faces of the unit cell as $\mathbf B$.
The above representation of $\mathbf A_h$ is general in the sense that any periodic magnetic field $B(x)$ that satisfies the Maxwell equation $\divv B = 0$ can be written as the curl of a vector potential $A_B$ of the form $A_B(x)= \frac{1}{2} b \wedge x + A_{\mathrm{per}}(x)$, where $b$ denotes the vector with components given by the average magnetic flux of $B$ through the faces of $Q_h$ and $A_{\mathrm{per}}$ is a periodic vector potential. For more information concerning this decomposition we refer to \cite[Chapter 4]{Diss_Marcel}. For a treatment of the two-dimensional case, see \cite{Tim_Abrikosov}.
\subsection{The BCS functional}
\label{BCS_functional_Section}
In BCS theory a state is conveniently described by its generalized one-particle density matrix, that is, by a self-adjoint operator $\Gamma$ on $L^2(\mathbb{R}^3) \oplus L^2(\mathbb{R}^3)$, which obeys $0 \leq \Gamma \leq 1$ and is of the form
\begin{align}
\Gamma = \begin{pmatrix} \gamma & \alpha \\ \ov \alpha & 1 - \ov \gamma \end{pmatrix}. \label{Gamma_introduction}
\end{align}
Here, $\ov \alpha$ denotes the operator $\alpha$ with the complex conjugate integral kernel in the position space representation. Since $\Gamma$ is self-adjoint we know that $\gamma$ is self-adjoint and that $\alpha$ is symmetric in the sense that its integral kernel satisfies $\alpha(x,y) = \alpha(y,x)$. This symmetry is related to the fact that we exclude spin degrees of freedom from our description and assume that all Cooper pairs are in a spin singlet state. The condition $0 \leq \Gamma \leq 1$ implies that the one-particle density matrix $\gamma$ satisfies $0 \leq \gamma \leq 1$ and that $\alpha$ and $\gamma$ are related through the inequality
\begin{align}
\alpha \alpha^* \leq \gamma ( 1- \gamma). \label{gamma_alpha_fermionic_relation}
\end{align}
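Let us briefly sketch how \eqref{gamma_alpha_fermionic_relation} arises. The condition $0 \leq \Gamma \leq 1$ is equivalent to the operator inequality $\Gamma (1 - \Gamma) \geq 0$, and a direct computation with \eqref{Gamma_introduction} shows that the upper left block of $\Gamma(1-\Gamma)$ equals
\begin{align*}
\gamma (1 - \gamma) - \alpha \ov\alpha = \gamma(1-\gamma) - \alpha \alpha^*,
\end{align*}
where we used that $\ov\alpha = \alpha^*$ holds because the kernel of $\alpha$ is symmetric. Testing $\Gamma(1-\Gamma) \geq 0$ with vectors of the form $(f, 0)$ yields \eqref{gamma_alpha_fermionic_relation}.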
Let us define the magnetic translations $\mathbf T(\lambda)$ on $L^2(\mathbb{R}^3)\oplus L^2(\mathbb{R}^3)$ by
\begin{align*}
\mathbf T(v) &\coloneqq \begin{pmatrix}
T(v) & 0 \\ 0 & \ov{T(v)}\end{pmatrix}, & v &\in \mathbb{R}^3.
\end{align*}
We say that a BCS state $\Gamma$ is \emph{gauge-periodic} provided $\mathbf T(\lambda) \, \Gamma \, \mathbf T(\lambda)^* = \Gamma$ holds for any $\lambda\in \Lambda_h$. This implies the relations $T(\lambda) \, \gamma \, T(\lambda)^* = \gamma$ and $T(\lambda)\,\alpha \,\ov{T(\lambda)}^* = \alpha$, or, in terms of integral kernels,
\begin{align}
\gamma(x, y) &= \mathrm{e}^{\i \frac \mathbf B 2 \cdot (\lambda \wedge (x-y))} \; \gamma(x+\lambda,y+ \lambda), \notag\\
\alpha(x, y) &= \mathrm{e}^{\i \frac \mathbf B 2 \cdot (\lambda \wedge (x+y))} \; \alpha(x+\lambda,y+ \lambda), & \lambda\in \Lambda_h. \label{alpha_periodicity}
\end{align}
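To illustrate how these kernel relations emerge, consider $T(\lambda) \, \gamma \, T(\lambda)^*$. Since $T(\lambda)$ acts as $(T(\lambda)f)(x) = \mathrm{e}^{\i \frac{\mathbf B}{2} \cdot (\lambda \wedge x)} f(x+\lambda)$, we find
\begin{align*}
\bigl( T(\lambda) \, \gamma \, T(\lambda)^* \bigr)(x,y) = \mathrm{e}^{\i \frac{\mathbf B}{2} \cdot (\lambda \wedge x)} \; \gamma(x+\lambda, y+\lambda) \; \mathrm{e}^{-\i \frac{\mathbf B}{2} \cdot (\lambda \wedge y)},
\end{align*}
and equating this with $\gamma(x,y)$ yields the stated relation for $\gamma$. The relation for $\alpha$ follows in the same way; since $\ov{T(\lambda)}$ carries the complex conjugate phase, its adjoint contributes the factor $\mathrm{e}^{+\i \frac{\mathbf B}{2} \cdot (\lambda \wedge y)}$, which turns the difference $x-y$ into the sum $x+y$.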
We further say that a gauge-periodic BCS state $\Gamma$ is \emph{admissible} if
\begin{align}
\Tr \bigl[\gamma + (-\i \nabla + \mathbf A_\mathbf B)^2\gamma\bigr] < \infty \label{Gamma_admissible}
\end{align}
holds. Here $\Tr[\mathcal{R}]$ denotes the trace per unit volume of an operator $\mathcal{R}$ defined by
\begin{align}
\Tr [\mathcal{R}] &\coloneqq \frac{1}{|Q_h|} \Tr_{L^2(Q_h)} [\chi \mathcal{R} \chi], \label{Trace_per_unit_volume_definition}
\end{align}
where $\chi$ denotes the characteristic function of the cube $Q_h$ in \eqref{Fundamental_cell} and $\Tr_{L^2(Q_h)}[\cdot]$ is the usual trace over an operator on $L^2(Q_h)$. By the condition in \eqref{Gamma_admissible}, we mean that $\chi \gamma \chi$ and $\chi (-\i \nabla + \mathbf A_\mathbf B)^2 \gamma \chi$ are trace-class operators.
Eqs.~\eqref{gamma_alpha_fermionic_relation}, \eqref{Gamma_admissible}, and the same inequality with $\gamma$ replaced by $\overline{\gamma}$ imply that $\alpha$, $(-\i \nabla + \mathbf A_\mathbf B)\alpha$, and $(-\i \nabla + \mathbf A_\mathbf B) \ov \alpha$ are locally Hilbert--Schmidt. We will rephrase this property as a notion of $H^1$-regularity for the kernel of $\alpha$ in Section~\ref{Preliminaries} below.
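For orientation, we note that for a translation-invariant operator $\mathcal{R} = f(-\i\nabla)$ with a sufficiently regular function $f$ the integral kernel is constant on the diagonal, and the trace per unit volume reduces to a momentum integral:
\begin{align*}
\Tr\bigl[ f(-\i\nabla) \bigr] = \frac{1}{|Q_h|} \int_{Q_h} \mathrm{d} x \; \frac{1}{(2\pi)^3} \int_{\mathbb{R}^3} \mathrm{d} p \; f(p) = \frac{1}{(2\pi)^3} \int_{\mathbb{R}^3} \mathrm{d} p \; f(p).
\end{align*}
In particular, the normalization by $|Q_h|$ removes the volume dependence for periodic operators.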
Let $\Gamma$ be an admissible BCS state. We define the Bardeen--Cooper--Schrieffer free energy functional, or BCS functional for short, at temperature $T\geq 0$ by the formula
\begin{align}
\FBCS(\Gamma) &\coloneqq \Tr\bigl[ \bigl( (-\i \nabla + \mathbf A_h)^2 - \mu + W_h \bigr)\gamma \bigr] - T\, S(\Gamma) \notag \\
&\hspace{120pt} - \frac{1}{|Q_h|} \int_{Q_h} \mathrm{d} X \int_{\mathbb{R}^3} \mathrm{d} r\; V(r) \, |\alpha(X,r)|^2,
\label{BCS functional}
\end{align}
where $S(\Gamma)= - \Tr [\Gamma \ln(\Gamma)]$ denotes the von Neumann entropy per unit volume and $\mu\in \mathbb{R}$ is a chemical potential. The interaction energy is written in terms of the center-of-mass and relative coordinates $X = \frac{x+y}{2}$ and $r = x-y$. Throughout this paper, we write, by a slight abuse of notation, $\alpha(x,y) \equiv \alpha(X,r)$. That is, we use the same symbol for the function depending on the original coordinates and for the one depending on $X$ and $r$.
The natural space of interaction potentials guaranteeing that the BCS functional is bounded from below is $V \in L^{\nicefrac 32}(\mathbb{R}^3) + L_{\varepsilon}^{\infty}(\mathbb{R}^3)$, that is, the set of interaction potentials for which $V$ is relatively form bounded with respect to the Laplacian. Under these assumptions it can be shown that the BCS functional satisfies the lower bound
\begin{align}
\FBCS(\Gamma) &\geq \frac 12 \Tr \bigl[ \gamma + (-\i \nabla + \mathbf A_\mathbf B)^2 \gamma\bigr] - C \label{BCS functional_bounded_from_below}
\end{align}
for some constant $C>0$. In other words, the BCS functional is bounded from below and coercive on the set of admissible BCS states.
The normal state $\Gamma_0$ is the unique minimizer of the BCS functional when restricted to admissible states with $\alpha = 0$ and reads
\begin{align}
\Gamma_0 &\coloneqq \begin{pmatrix} \gamma_0 & 0 \\ 0 & 1-\ov\gamma_0 \end{pmatrix}, & \gamma_0 &\coloneqq \frac 1{1 + \mathrm{e}^{ ((-\i\nabla + \mathbf A_h)^2 + W_h-\mu)/T}}. \label{Gamma0}
\end{align}
Its name is motivated by the fact that it is also the unique minimizer of the BCS functional if the temperature $T$ is chosen sufficiently large.
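Let us briefly sketch, on a formal level, why $\Gamma_0$ is the minimizer among admissible states with $\alpha = 0$. For such states the BCS functional reduces to
\begin{align*}
\FBCS(\Gamma) = \Tr\bigl[ h_0 \, \gamma \bigr] + T \, \Tr\bigl[ \gamma \ln \gamma + (1-\gamma) \ln (1-\gamma) \bigr], \qquad h_0 = (-\i\nabla + \mathbf A_h)^2 + W_h - \mu.
\end{align*}
For each eigenvalue $e$ of $h_0$ the function $x \mapsto e \, x + T \bigl( x \ln x + (1-x) \ln(1-x) \bigr)$ on $[0,1]$ attains its unique minimum at $x = (1 + \mathrm{e}^{e/T})^{-1}$, which leads to the Fermi--Dirac distribution $\gamma_0$ in \eqref{Gamma0}.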
We define the BCS free energy by
\begin{align}
F^{\mathrm{BCS}}(h, T) \coloneqq \inf\bigl\{ \FBCS(\Gamma) - \FBCS(\Gamma_0) : \Gamma \text{ admissible}\bigr\} \label{BCS GS-energy}
\end{align}
and say that the system is superconducting at temperature $T$ if $F^{\mathrm{BCS}}(h, T) < 0$. Although it is not difficult to prove that the BCS functional has a minimizer, we refrain from giving a proof here. If we assume that the BCS functional has a minimizer $\Gamma$ then the condition $F^{\mathrm{BCS}}(h, T) < 0$ implies $\alpha = \Gamma_{12} \neq 0$.
The goal of this paper is to derive an asymptotic formula for $F^{\mathrm{BCS}}(h, T)$ for small $h > 0$. This will allow us to derive Ginzburg--Landau theory and to show how the critical temperature depends on the external electric and magnetic field and on $h$. For our main results to hold, we need the following assumptions.
\begin{asmp}
\label{Assumption_V}
We assume that the interaction potential $V$ is a radial function that satisfies $(1+|\cdot|^2) V\in L^2(\mathbb{R}^3) \cap L^\infty(\mathbb{R}^3)$. Moreover, the electric and the magnetic potentials $W\in W^{1, \infty}(\mathbb{R}^3)$ and $A\in W^{3, \infty}(\mathbb{R}^3; \mathbb{R}^3)$ are $\Lambda_1$-periodic functions, i.e. $W(x + \lambda) = W(x)$ and $A(x + \lambda) = A(x)$ for $\lambda\in \Lambda_1$ and all $x\in \mathbb{R}^3$. We also assume that $A(0) = 0$.
\end{asmp}
\subsection{The critical temperature of the BCS functional}
\subsection{The translation-invariant BCS functional}
\label{BCS_functional_TI_Section}
In the absence of external fields we describe the system by translation-invariant states, that is, we assume that the integral kernels of $\gamma$ and $\alpha$ are of the form $\gamma(x-y)$ and $\alpha(x-y)$. The trace per unit volume is in this case defined with respect to a cube with sidelength $1$. We denote the resulting translation-invariant BCS functional by $\mathcal{F}^{\mathrm{BCS}}_{\mathrm{ti},T}$. The translation-invariant BCS functional is studied in detail in \cite{Hainzl2007}, see also the review article \cite{Hainzl2015}. In \cite{Hainzl2007} it has been shown that there is a unique critical temperature ${T_{\mathrm{c}}} \geq 0$ such that $\mathcal{F}^{\mathrm{BCS}}_{\mathrm{ti},T}$ has a minimizer with $\alpha \neq 0$ for $T < {T_{\mathrm{c}}}$. The normal state in \eqref{Gamma0} with $h=0$ is the unique minimizer if $T\geq {T_{\mathrm{c}}}$. Moreover, the critical temperature ${T_{\mathrm{c}}}$ can be characterized by a linear criterion: It equals the unique temperature $T$ such that the linear operator
\begin{align*}
K_{T} - V
\end{align*}
has zero as its lowest eigenvalue. Here $K_T = K_T(-\i \nabla)$ with the symbol
\begin{align}
K_T(p) \coloneqq \frac{p^2 - \mu}{\tanh \frac{p^2-\mu}{2T}}. \label{KT-symbol}
\end{align}
The operator $K_{T} - V $ is understood to act on the space $L_{\mathrm{sym}}^2(\mathbb{R}^3)$ of reflection-symmetric square-integrable functions on $\mathbb{R}^3$. To be precise, the results in \cite{Hainzl2007} have been proven without the assumption $\alpha(-x) = \alpha(x)$ for a.e. $x\in \mathbb{R}^3$. In this case, the operator $K_{T_{\mathrm{c}}} - V$ acts on functions in the Hilbert space $L^2(\mathbb{R}^3)$ instead of $L_{\mathrm{sym}}^2(\mathbb{R}^3)$. The results in \cite{Hainzl2007}, however, equally hold in the case of symmetric Cooper pair wave functions. That is, they hold in the same way if $V$ is reflection symmetric and if the translation-invariant BCS functional is minimized over functions $\gamma(x)$ and $\alpha(x)$ that are both assumed to be reflection symmetric.
We note that the function $K_T(p)$ satisfies the inequalities $K_T(p) \geq 2T$ for $\mu \geq 0$, as well as $K_T(p)\geq |\mu|/\tanh(|\mu|/(2T))$ for $\mu < 0$. Our assumptions on $V$ guarantee that the essential spectrum of $K_T-V$ equals that of $K_T$, and hence an eigenvalue below $2T$ for $\mu \geq 0$ or below $|\mu|/\tanh(|\mu|/(2T))$ for $\mu < 0$ is necessarily isolated and of finite multiplicity. This, in particular, applies to an eigenvalue of $K_T - V$ at $0$.
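These bounds follow from the elementary structure of the symbol: writing $t = (p^2 - \mu)/(2T)$ we have
\begin{align*}
K_T(p) = 2T \, \frac{t}{\tanh(t)} \geq 2T,
\end{align*}
since $t/\tanh(t) \geq 1$ for all $t \in \mathbb{R}$. If $\mu < 0$, then $p^2 - \mu \geq |\mu| > 0$, and the monotonicity of $x \mapsto x/\tanh(x/(2T))$ on $(0,\infty)$ gives the second bound.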
We are interested in the situation where ${T_{\mathrm{c}}} > 0$ and where the translation-invariant BCS functional has a unique minimizer with a radial Cooper pair wave function (s-wave Cooper pairs) for $T$ close to ${T_{\mathrm{c}}}$. The following assumptions guarantee that we are in such a situation. Part (b) should be compared to \cite[Theorem~2.8]{DeuchertGeisinger}.
\begin{asmp}
\label{Assumption_KTc}
We assume that the interaction potential $V$ is such that the following holds:
\begin{enumerate}[(a)]
\item We have ${T_{\mathrm{c}}} >0$.
\item The lowest eigenvalue of $K_{{T_{\mathrm{c}}}} - V$ is simple.
\end{enumerate}
\end{asmp}
As has been shown in \cite[Theorem 3]{Hainzl2007}, our first assumption is satisfied if $V \geq 0$ does not vanish identically. Throughout this paper we denote by $\alpha_*$ the unique solution to the equation
\begin{align}
K_{T_{\mathrm{c}}} \alpha_* = V\alpha_*. \label{alpha_star_ev-equation}
\end{align}
Since $V$ is radial we know that the same is true for $\alpha_*$. Without loss of generality we will assume that $\alpha_*$ is real-valued and satisfies $\Vert \alpha_*\Vert_{L^2(\mathbb{R}^3)} = 1$. If we write the above equation as $\alpha_* = K_{{T_{\mathrm{c}}}}^{-1} V\alpha_*$, we see that $V\in L^\infty(\mathbb{R}^3)$ implies $\alpha_*\in H^2(\mathbb{R}^3)$. Moreover, we know from \cite[Proposition 2]{Hainzl2012} that
\begin{align}
\int_{\mathbb{R}^3} \mathrm{d} x \; \bigl[ |x^\nu \alpha_*(x)|^2 + |x^\nu \nabla \alpha_*(x)|^2 \bigr] < \infty \label{Decay_of_alphastar}
\end{align}
holds for $\nu \in \mathbb{N}_0^3$.
\subsection{The Ginzburg--Landau functional}
\label{Ginzburg-Landau-functional_Section}
We say that a function $\Psi$ on $Q_h$ is \textit{gauge-periodic} if the magnetic translations of the form
\begin{align}
T_h(\lambda)\Psi(X) &\coloneqq \mathrm{e}^{\i \mathbf B \cdot (\lambda \wedge X)} \; \Psi(X + \lambda ), & \lambda &\in \Lambda_h, \label{Magnetic_Translation_Charge2}
\end{align}
leave $\Psi$ invariant. We highlight that $T(\lambda)$ in \eqref{Magnetic_Translation} equals $T_h(\lambda)$ provided we replace $\mathbf B$ by $2\mathbf B$. Let $\Lambda_0, \Lambda_2, \Lambda_3 >0$, $\Lambda_1, D \in \mathbb{R}$, and let $\Psi$ be a gauge-periodic function in the case $h=1$. The Ginzburg--Landau functional is defined by
\begin{align}
\mathcal E^{\mathrm{GL}}_{D}(\Psi) &\coloneqq \frac{1}{|Q_1|} \int_{Q_1} \mathrm{d} X \; \bigl\{ \Lambda_0 \; |(-\i\nabla + 2\mathbf A)\Psi(X)|^2 + \Lambda_1 \, W(X)\, |\Psi(X)|^2 \notag \\
&\hspace{150pt} - D \, \Lambda_2\, |\Psi(X)|^2 + \Lambda_3\,|\Psi(X)|^4\bigr\}. \label{Definition_GL-functional}
\end{align}
We highlight the factor $2$ in front of the magnetic vector potential in \eqref{Definition_GL-functional}. Its appearance is due to the fact that $\Psi$ describes the center-of-mass motion of Cooper pairs carrying twice the charge of a single fermion.
The Ginzburg--Landau energy is defined by
\begin{align*}
E^{\mathrm{GL}}(D) \coloneqq \inf \bigl\{ \mathcal E^{\mathrm{GL}}_{D}(\Psi) : \Psi\in H_{\mathrm{mag}}^1(Q_1)\bigr\}.
\end{align*}
We also define the critical parameter
\begin{align}
{D_{\mathrm{c}}} &\coloneqq \frac 1{\Lambda_2} \inf \spec_{L_{\mathrm{mag}}^2(Q_1)} \bigl(\Lambda_0 \, (-\i \nabla + \mathbf A)^2 + \Lambda_1 \; W\bigr). \label{Dc_Definition}
\end{align}
As has been shown in \cite[Lemma 2.5]{Hainzl2014}, we have $E^{\mathrm{GL}}(D) < 0$ if $D > {D_{\mathrm{c}}}$ and $E^{\mathrm{GL}}(D) =0$ if $D \leq {D_{\mathrm{c}}}$.
In our analysis we encounter the Ginzburg--Landau functional in an $h$-dependent version, where $Q_1, \mathbf A, W$, and $D$ in \eqref{Definition_GL-functional} are replaced by $Q_h, \mathbf A_h, W_h$, and $h^2 D$ respectively. If we denote this functional by $\mathcal E^{\mathrm{GL}}_{D, h}(\Psi)$ we have
\begin{equation*}
\inf \bigl\{ \mathcal E^{\mathrm{GL}}_{D, h}(\Psi) : \Psi\in H_{\mathrm{mag}}^1(Q_h)\bigr\} = h^4 E^{\mathrm{GL}}(D),
\end{equation*}
which follows by scaling. More precisely, for given $\psi$ the function
\begin{align}
\Psi(X) &\coloneqq h \; \psi(h \, X), & X\in \mathbb{R}^3, \label{GL-rescaling}
\end{align}
obeys
\begin{align}
\mathcal E^{\mathrm{GL}}_{D, h}(\Psi) = h^4 \mathcal E^{\mathrm{GL}}_{D}(\psi). \label{EGL-scaling}
\end{align}
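To verify \eqref{EGL-scaling}, we insert \eqref{GL-rescaling} into $\mathcal E^{\mathrm{GL}}_{D, h}$ and substitute $X = h^{-1} x$. For the kinetic term, we have
\begin{align*}
\bigl( (-\i \nabla + 2 \mathbf A_h) \Psi \bigr)(X) = h^2 \, \bigl( (-\i \nabla + 2 \mathbf A) \psi \bigr)(hX),
\end{align*}
so that, using $|Q_h| = h^{-3} |Q_1|$,
\begin{align*}
\frac{1}{|Q_h|} \int_{Q_h} \mathrm{d} X \; |(-\i\nabla + 2\mathbf A_h)\Psi(X)|^2 = \frac{h^4}{|Q_1|} \int_{Q_1} \mathrm{d} x \; |(-\i\nabla + 2\mathbf A)\psi(x)|^2.
\end{align*}
The terms containing $W_h$, $h^2 D$, and $|\Psi|^4$ each produce a factor $h^4$ in the same way.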
\subsection{Main results}
\label{Main_Result_Section}
Our first main result concerns an asymptotic expansion of the BCS free energy in the small parameter $h>0$. The precise statement is captured in the following theorem.
\begin{bigthm}
\label{Main_Result}
Let Assumptions \ref{Assumption_V} and \ref{Assumption_KTc} hold, let $D \in \mathbb{R}$, and let the coefficients $\Lambda_0, \Lambda_1, \Lambda_2$, and $\Lambda_3$ be given by \eqref{GL-coefficient_1}-\eqref{GL_coefficient_3} below. Then there are constants $C>0$ and $h_0 >0$ such that for all $0 < h \leq h_0$, we have
\begin{align}
F^{\mathrm{BCS}}(h,\, {T_{\mathrm{c}}}(1 - Dh^2)) = h^4 \; \bigl( E^{\mathrm{GL}}(D) + R \bigr), \label{ENERGY_ASYMPTOTICS}
\end{align}
with $R$ satisfying the estimate
\begin{align}
Ch \geq R \geq - \mathcal{R} \coloneqq -C h^{\nicefrac 1{6}}. \label{Rcal_error_Definition}
\end{align}
Moreover, for any approximate minimizer $\Gamma$ of $\FBCS$ at $T = {T_{\mathrm{c}}}(1 - Dh^2)$ in the sense that
\begin{align}
\FBCS(\Gamma) - \FBCS(\Gamma_0) \leq h^4 \bigl( E^{\mathrm{GL}}(D) + \rho\bigr)
\label{BCS_low_energy}
\end{align}
holds for some $\rho \geq 0$, we have the decomposition
\begin{align}
\alpha(X, r ) = \alpha_*(r) \Psi(X) + \sigma(X,r) \label{Thm1_decomposition}
\end{align}
for the Cooper pair wave function $\alpha = \Gamma_{12}$. Here, $\sigma$ satisfies
\begin{align}
\frac{1}{|Q_h|} \int_{Q_h} \mathrm{d} X \int_{\mathbb{R}^3} \mathrm{d} r \; |\sigma(X, r)|^2 &\leq C h^{\nicefrac {11}3}, \label{Thm1_error_bound}
\end{align}
$\alpha_*$ is the normalized zero energy eigenstate of $K_{{T_{\mathrm{c}}}}-V$, and the function $\Psi$ obeys
\begin{align}
\mathcal E^{\mathrm{GL}}_{D, h}(\Psi) \leq h^4 \left( E^{\mathrm{GL}}(D) + \rho + \mathcal{R} \right). \label{GL-estimate_Psi}
\end{align}
\end{bigthm}
Our second main result is a statement about the dependence of the critical temperature of the BCS functional on $h>0$ and on the external fields.
\begin{bigthm}
\label{Main_Result_Tc} \label{MAIN_RESULT_TC}
Let Assumptions \ref{Assumption_V} and \ref{Assumption_KTc} hold. Then there are constants $C>0$ and $h_0 >0$ such that for all $0 < h \leq h_0$ the following holds:
\begin{enumerate}[(a)]
\item Let $0 < T_0 < {T_{\mathrm{c}}}$. If the temperature $T$ satisfies
\begin{equation}
T_0 \leq T \leq {T_{\mathrm{c}}} \, ( 1 - h^2 \, ( {D_{\mathrm{c}}} + C \, h^{\nicefrac 12}))
\label{eq:lowertemp}
\end{equation}
with ${D_{\mathrm{c}}}$ in \eqref{Dc_Definition}, then we have
\begin{equation*}
F^{\mathrm{BCS}}(h,T) < 0.
\end{equation*}
\item If the temperature $T$ satisfies
\begin{equation}
T \geq {T_{\mathrm{c}}} \, ( 1 - h^2 \, ( {D_{\mathrm{c}}} - \mathcal{R} ) )
\label{eq:uppertemp}
\end{equation}
with ${D_{\mathrm{c}}}$ in \eqref{Dc_Definition} and $\mathcal{R}$ in \eqref{Rcal_error_Definition}, then we have
\begin{equation*}
\FBCS(\Gamma) - \FBCS(\Gamma_0) > 0
\end{equation*}
unless $\Gamma = \Gamma_0$.
\end{enumerate}
\end{bigthm}
\begin{bems}
\label{Remarks_Main_Result}
\begin{enumerate}[(a)]
\item Theorem~\ref{Main_Result} and Theorem~\ref{Main_Result_Tc} extend similar results in \cite{Hainzl2012} and \cite{Hainzl2014} to the case of general external electric and magnetic fields. In these references the main restriction is that the vector potential is assumed to be periodic, that is, the corresponding magnetic field has vanishing flux through the faces of the unit cell $Q_h$, compare with the discussion in Section~\ref{Magnetically_Periodic_Samples}. Removing this restriction causes major mathematical difficulties already for the constant magnetic field because its vector potential cannot be treated as a perturbation of the Laplacian. More precisely, it was possible in \cite{Hainzl2012,Hainzl2014} to work with a priori bounds for low-energy states that do not involve the external magnetic field. As noticed in the discussion below Remark~6 in \cite{Hainzl2017}, this is not possible if the magnetic field has nonzero flux through the faces of the unit cell. To prove a priori bounds that involve a constant magnetic field one has to deal with the fact that the components of the magnetic momentum operator do not commute, which leads to significant technical difficulties. These difficulties have been overcome in \cite{DeHaSc2021}, which allowed us to extend the results \cite{Hainzl2012,Hainzl2014} to the case of a system in a constant magnetic field. Our proof of Theorem~\ref{Main_Result} and Theorem~\ref{Main_Result_Tc} uses these a priori bounds, and should therefore be interpreted as an extension of the methods in \cite{DeHaSc2021} to the case of general external electric and magnetic fields. The main technical novelty of this article is a further development of the phase approximation method, which has been pioneered in the framework of BCS theory for the case of the constant magnetic field in \cite{Hainzl2017} and \cite{DeHaSc2021}. 
It allows us to compute the BCS free energy of a class of trial states (Gibbs states) in a controlled way, and is the key new ingredient for our proof of upper and lower bounds for the BCS free energy in the presence of general external fields.
\item When we compare our result in Theorem~\ref{Main_Result} to the main Theorem in \cite{Hainzl2012}, we notice the following differences: (1) We use microscopic coordinates while macroscopic coordinates are used in \cite{Hainzl2012,Hainzl2014}, see the discussion above \cite[Eq.~(1.4)]{Hainzl2012}. (2) Our free energy is normalized by a large volume factor, see \eqref{Trace_per_unit_volume_definition} and \eqref{BCS functional}. This is not the case in \cite{Hainzl2012,Hainzl2014}. Accordingly, the GL energy appears on the order $h^4$ in our setting and on the order $h$ in the setting in \cite{Hainzl2012}. (3) The leading order of the Cooper pair wave function in \cite[Theorem 1]{Hainzl2012} is of the form
\begin{equation}
\frac{1}{2} \alpha_*(x-y) (\Psi(x) + \Psi(y)).
\label{DHS1:eq:remarksA1}
\end{equation}
This should be compared to \eqref{Thm1_decomposition}, where relative and center-of-mass coordinates are used. When we use the a priori bound for $\Vert \nabla \Psi \Vert_2$ below Eq.~(5.61) in \cite{Hainzl2012}, we see that this decomposition equals that in \eqref{Thm1_decomposition} to leading order in $h$.
\item The Ginzburg--Landau energy appears at the order $h^4$. This needs to be compared to the energy of the normal state, which is of order $1$ in $h$. To understand the order of the GL energy we need to realize that each factor of $\Psi$ in $\mathcal E^{\mathrm{GL}}_{D, h}$ defined below \eqref{Dc_Definition} carries a factor $h$. This follows from the scaling in \eqref{GL-rescaling} and the fact that the GL energy is normalized by the volume factor $|Q_h|^{-1}$. Moreover, every magnetic momentum operator carries a factor $h$ because $\Psi$ varies on the length scale $h^{-1}$ and the electric potential carries a factor $h^2$. In combination, these considerations explain the size of all terms in the GL functional. It is worth noting that the prefactor $-h^2 D$ in front of the quadratic term without external fields equals $(T-T_{\mathrm{c}})/T_{\mathrm{c}}$.
\item The size of the remainder in \eqref{Thm1_error_bound} should be compared to the $L^2$-norm per unit volume of the leading order part of the Cooper pair wave function in \eqref{Thm1_decomposition}, which satisfies
\begin{equation*}
\frac{1}{| Q_h |} \int_{Q_h} \mathrm{d} X \int_{\mathbb{R}^3} \mathrm{d} r \; | \alpha_*(r) \Psi(X) |^2 \sim h^2
\end{equation*}
if $D>0$. We highlight that $\alpha_*$ varies on the microscopic length scale $1$ and that $\Psi$ captures the effects of the external fields on the macroscopic length scale $h^{-1}$.
\item Our bounds show that $D$ in Theorem~\ref{Main_Result} can be chosen as a function of $h$ as long as $|D| \leq D_0$ holds for some constant $D_0>0$.
\item The upper bound for the error in \eqref{Rcal_error_Definition} is worse than the corresponding bound in \cite{DeHaSc2021} by the factor $h^{-1}$. It is of the same size as the comparable error term in \cite[Theorem~1]{Hainzl2012}.
\item Theorem~\ref{MAIN_RESULT_TC} gives bounds on the temperature regions where superconductivity is present or absent. The interpretation of the theorem is that the critical temperature of the full model satisfies
\begin{equation*}
{T_{\mathrm{c}}}(h) = {T_{\mathrm{c}}} \left( 1 - D_{\mathrm{c}} h^2 \right) + o(h^2),
\end{equation*}
with the critical temperature ${T_{\mathrm{c}}}$ of the translation-invariant problem. The coefficient $D_{\mathrm{c}}$ is determined by linearized Ginzburg--Landau theory, see \eqref{Dc_Definition}. The above equation allows us to compute the upper critical field $B_{\mathrm{c}2}$, above which superconductivity is absent. It also allows us to compute the derivative of $B_{\mathrm{c}2}$ with respect to $T$ at ${T_{\mathrm{c}}}$, see \cite[Appendix~A]{Hainzl2017}.
\item We expect that the assumption $0 < T_0 < {T_{\mathrm{c}}}$ in part (a) of Theorem~\ref{MAIN_RESULT_TC}, which also appeared in \cite{Hainzl2017,DeHaSc2021}, is only of technical nature. We need it because our trial state analysis breaks down as $T$ approaches zero. We note that there is no such restriction in part (b) of Theorem~\ref{MAIN_RESULT_TC} or in Theorem~\ref{Main_Result}.
\end{enumerate}
\end{bems}
\subsection{Organization of the paper and strategy of proof}
For the convenience of the reader we give here a short summary of the organization of the paper and the proof of our two main results.
In Section~\ref{sec:HeuristicComputation} we provide a brief non-rigorous computation that shows from which terms in the BCS functional the different terms in the GL functional arise. Afterwards we complete in Section~\ref{Preliminaries} the introduction of our mathematical setup. That is, we collect useful properties of the trace per unit volume and introduce the relevant spaces of gauge-periodic functions.
In Section~\ref{Upper_Bound} we collect the results of our trial state analysis. We introduce a class of Gibbs states with Cooper pair wave functions that admit a product structure of the form $\alpha_*(r) \Psi(X)$ to leading order in $h$. Here, $\alpha_*$ is the ground state wave function in \eqref{alpha_star_ev-equation} and $\Psi$ is a gauge-periodic function. We state and motivate several results concerning the Cooper pair wave function and the BCS energy of these states. Afterwards we use these statements to provide the proof of the upper bound for the BCS free energy in \eqref{ENERGY_ASYMPTOTICS} as well as the proof of Theorem~\ref{Main_Result_Tc} (a). It is important to note that these results are needed again in Section \ref{Lower Bound Part B}, where we give the proof of the lower bound on the BCS free energy in \eqref{ENERGY_ASYMPTOTICS} and the proof of Theorem \ref{Main_Result_Tc} (b).
In Section~\ref{Proofs} we provide the proofs of the results in Section~\ref{Upper_Bound} concerning the Cooper pair wave function and the BCS free energy of our trial states. It is the main part of our article and contains the main technical novelties. Our approach is based on an application of the phase approximation method for general magnetic fields to our nonlinear setting. The phase approximation is a well-known tool in the physics literature, see, e.g., \cite{Werthamer1966}, and has also been used in the mathematical literature to study spectral properties of Schr\"odinger operators involving a magnetic field, for instance in \cite{NenciuCorn1998,Nenciu2002}. In the case of a constant magnetic field, this method has been pioneered within the framework of linearized BCS theory in \cite{Hainzl2017} and for the full nonlinear model in \cite{DeHaSc2021}. An application of the phase approximation method to the case of a magnetic field with zero flux through the unit cell is contained in unpublished notes by Frank, Geisinger, Hainzl, and Tzaneteas \cite{BdGtoGL}. The main technical novelty in Section~\ref{Proofs} is a further development of the phase approximation method for general external fields in our nonlinear setting. This allows us to compute the BCS free energy of a class of trial states (Gibbs states) in a controlled way, which is the key new ingredient for the proof of upper and lower bounds for the BCS free energy in the presence of general external fields. Our approach should also be compared to the trial state analysis in \cite{Hainzl2012,Hainzl2014}, where a semi-classical expansion is used to treat magnetic fields with zero flux through the unit cell. We highlight that the trial state analysis for general external fields requires considerably more effort than the one for a constant magnetic field in \cite{DeHaSc2021}. This is also reflected in the length of the proofs.
In Section~\ref{Lower Bound Part A}, we prove a priori estimates for BCS states, whose BCS free energy is smaller than or equal to that of the normal state $\Gamma_0$ in \eqref{Gamma0} (low-energy states). More precisely, we show that the Cooper pair wave function of any such state is, to leading order as $h \to 0$, given by $\alpha_*(r) \Psi(X)$ with $\alpha_*$ in \eqref{alpha_star_ev-equation} and with a gauge-periodic function $\Psi$. The proof of the same statement in the case of a constant magnetic field has been the main novelty in \cite{DeHaSc2021}. To treat the case of general external fields, we perturbatively remove the periodic vector potential $A$ and the electric potential $W$. This allows us to reduce the problem to the case of a constant magnetic field treated in \cite{DeHaSc2021}. Similar a priori estimates for the case of magnetic fields with zero flux through the unit cell had been proved for the first time in \cite{Hainzl2012}.
The proofs of the lower bound on \eqref{ENERGY_ASYMPTOTICS} and of Theorem~\ref{Main_Result_Tc}~(b), which go along the same lines as those presented in \cite{Hainzl2012,Hainzl2014,DeHaSc2021}, are given in Section~\ref{Lower Bound Part B}. They complete the proofs of Theorems \ref{Main_Result} and \ref{Main_Result_Tc}. The main idea is to use the a priori estimates in Section \ref{Lower Bound Part A} to replace a general low-energy state in the BCS functional by a Gibbs state, whose Cooper pair wave function has the same leading order behavior for small $h$, in a controlled way. This, in particular, allows us to estimate the BCS energy of a general low-energy state in terms of that of a Gibbs state, which has been computed in Sections~\ref{Upper_Bound} and \ref{Proofs}. Because of the considerable overlap in content with the related section in \cite{DeHaSc2021}, we shortened the proofs in this section to a minimal length.
Throughout the paper, $c$ and $C$ denote generic positive constants that change from line to line. We allow them to depend on the various fixed quantities like $h_0$, $D_0$, $\mu$, ${T_{\mathrm{c}}}$, $V$, $A$, $W$, $\alpha_*$, etc. Further dependencies are highlighted.
\subsection{Heuristic computation of the terms in the Ginzburg--Landau functional}
\label{sec:HeuristicComputation}
In the following we present a brief and non-rigorous computation of the BCS energy of the trial state that we use in the proof of the upper bound for the BCS free energy in Section~\ref{UPPER_BOUND}. The goal is to show from which terms in the BCS functional the different terms in the Ginzburg--Landau (GL) functional arise. A more detailed and more precise discussion of these issues can be found in Section~\ref{UPPER_BOUND}.
Our trial state (a Gibbs state) is defined by
\begin{equation}
\begin{pmatrix} \gamma_{\Delta} & \alpha_{\Delta} \\ \overline{\alpha_{\Delta}} & 1 - \overline{\gamma_{\Delta}} \end{pmatrix} =
\Gamma_{\Delta} = \frac{1}{1+\mathrm{e}^{\beta H_{\Delta}}}.
\label{eq:heuristicstrialstate}
\end{equation}
Here, $\beta^{-1} = T = {T_{\mathrm{c}}}(1 - Dh^2)$ and the Hamiltonian is given by
\begin{equation*}
H_{\Delta} = \begin{pmatrix} (-\mathrm{i} \nabla + \mathbf A_h )^2 - \mu & \Delta \\ \overline{\Delta} & -\overline{(-\mathrm{i} \nabla + \mathbf A_h )^2} + \mu \end{pmatrix},
\end{equation*}
where the operator $\Delta$ is defined via its integral kernel
\begin{equation*}
\Delta(x,y) = -2\, V \alpha_* (x-y)\, \Psi_h \left( \frac{x+y}{2} \right).
\label{eq:AB1}
\end{equation*}
The function $\Psi_h(X) = h \psi(hX)$ is chosen such that $\psi$ is a minimizer of the Ginzburg--Landau functional in \eqref{Definition_GL-functional}. We therefore have
\begin{equation*}
\frac{1}{| Q_h|} \int_{Q_h} \mathrm{d} X \ | \Psi_h(X) |^2 \sim h^2
\end{equation*}
as well as
\begin{equation}
\Tr [ \Delta^* \Delta ] = \frac{4}{| Q_h|} \int_{Q_h} \mathrm{d} X \; | \Psi_h(X) |^2 \int_{\mathbb{R}^3} \mathrm{d} r \; | V(r) \alpha_*(r) |^2 \sim h^2.
\label{eq:heurusticscalingdelta}
\end{equation}
Here $r=x-y$ and $X=(x+y)/2$ denote relative and center-of-mass coordinates. The operator $\Delta$ in \eqref{eq:heuristicstrialstate} is therefore a small perturbation if $0 < h \ll 1$.
The BCS free energy of the trial state $\Gamma_{\Delta}$ is given by
\begin{align*}
\FBCS(\Gamma_{\Delta}) - \FBCS(\Gamma_0) &= \frac{1}{2} \Tr\left[ H_0 (\Gamma_{\Delta} - \Gamma_0) \right] - T S(\Gamma_{\Delta}) + T S(\Gamma_0) \\
&\hspace{30pt} - \frac{1}{| Q_h |} \int_{Q_h} \mathrm{d} X \int_{\mathbb{R}^3} \mathrm{d} r \; V(r) | \alpha_{\Delta}(X,r) |^2,
\end{align*}
where $\Gamma_0$ denotes the normal state in \eqref{Gamma0}. Applications of
\begin{equation*}
\Tr\left[ H_0 (\Gamma_{\Delta} - \Gamma_0) \right] = \Tr\left[ H_{\Delta} \Gamma_{\Delta} - H_0 \Gamma_0 \right] - \Tr[ (H_{\Delta} - H_0 ) \Gamma_{\Delta} ],
\end{equation*}
and \cite[Eqs.~(4.3-4.5)]{Hainzl2012} allow us to rewrite this formula as
\begin{align}
\FBCS(\Gamma_{\Delta}) - \FBCS(\Gamma_0) =& -\frac {1}{2 \beta}\Tr_0 \bigl[ \ln\bigl( 1+\exp\bigl( - \beta H_\Delta \bigr)\bigr) - \ln\bigl( 1+\exp\bigl( -\beta H_0 \bigr)\bigr)\bigr] \nonumber \\
&+ \frac{\langle \alpha_*, V\alpha_*\rangle_{L^2(\mathbb{R}^3)}}{|Q_h|} \int_{Q_h} \mathrm{d} X \ |\Psi_h(X)|^2 \nonumber \\
&- \frac{1}{|Q_h|} \int_{Q_h} \mathrm{d} X\int_{\mathbb{R}^3} \mathrm{d} r\; V(r) \, \bigl|\alpha_\Delta(X,r) - \alpha_*(r) \Psi_h(X)\bigr|^2. \label{eq:heuristics1}
\end{align}
Here $\Tr_0[A] = \Tr[PAP] + \Tr[QAQ]$ with
\begin{equation*}
P = \begin{pmatrix}
1 & 0 \\ 0 & 0 \end{pmatrix}
\end{equation*}
and $Q = 1 - P$.
To identify the terms in the GL functional, we need to expand the terms in \eqref{eq:heuristics1} in powers of $h$. To that end, we first expand them up to fourth order in powers of $\Delta$ because the Ginzburg--Landau functional is a fourth order polynomial in $\Psi_h$. Afterwards, we expand the resulting terms in powers of $h$, that is, we use that the external fields $W_h(x) = h^2 W(hx)$ and $\mathbf A_h(x) = h \mathbf A(hx)$ as well as the temperature $T = {T_{\mathrm{c}}}(1 - Dh^2)$ with $D>0$ depend on $h$. It turns out that the last term on the right side of \eqref{eq:heuristics1} is of the order $o(h^4)$. Since the GL energy appears on the order $h^4$ it does not contribute to it. In our trial state analysis in Sections~\ref{Upper_Bound}~and~\ref{Proofs} we show that there is a linear operator $L_{T,\mathbf A,W}$ and a cubic map $N_{T,\mathbf A,W}(\Delta)$ such that
\begin{align}
-\frac {1}{2\beta}\Tr_0 \bigl[ \ln\bigl( 1+\exp\bigl( - \beta H_\Delta \bigr)\bigr) - \ln\bigl( 1+\exp\bigl( -\beta H_0 \bigr)\bigr)\bigr] & \nonumber \\
&\hspace{-120pt} = -\frac{1}{4} \langle \Delta, L_{T,\mathbf A,W} \Delta \rangle + \frac{1}{8} \langle \Delta, N_{T,\mathbf A,W}(\Delta) \rangle + o(h^4) \label{eq:heuristicquadraticterms}
\end{align}
holds. In combination with the first term in the second line of \eqref{eq:heuristics1}, the quadratic terms in \eqref{eq:heuristicquadraticterms} contain the quadratic terms in the Ginzburg--Landau functional:
\begin{align*}
&-\frac{1}{4} \langle \Delta, L_{T,\mathbf A,W} \Delta \rangle + \frac{\langle \alpha_*, V\alpha_*\rangle_{L^2(\mathbb{R}^3)}}{|Q_h|} \int_{Q_h} \mathrm{d} X \; |\Psi_h(X)|^2 \\
&= \frac{1}{|Q_h|} \int_{Q_h} \mathrm{d} X \; \bigl\{ \Lambda_0 \; |(-\i\nabla + 2\mathbf A_h)\Psi_h(X)|^2 + \Lambda_1 \, W_h(X)\, |\Psi_h(X)|^2 - Dh^2 \, \Lambda_2\, |\Psi_h(X)|^2 \bigr\} \\
&\hspace{0.5cm} + o(h^4)
\end{align*}
with $\Lambda_0, \Lambda_1,$ and $\Lambda_2$ defined in \eqref{GL-coefficient_1}--\eqref{GL-coefficient_2}. From the quartic term in \eqref{eq:heuristicquadraticterms} we will extract the quartic term in the GL functional:
\begin{equation*}
\frac{1}{8} \langle \Delta, N_{T,\mathbf A,W}(\Delta) \rangle = \frac{\Lambda_3}{|Q_h|} \int_{Q_h} \mathrm{d} X \; |\Psi_h(X)|^4 + o(h^4).
\end{equation*}
The coefficient $\Lambda_3$ is defined in \eqref{GL_coefficient_3}. Accordingly,
\begin{equation*}
\FBCS(\Gamma_{\Delta}) - \FBCS(\Gamma_0) = h^4 \left( E^{\mathrm{GL}}(D) + o(1) \right),
\end{equation*}
where we used $\mathcal E^{\mathrm{GL}}_{D, h}(\Psi_h) = h^4 E^{\mathrm{GL}}(D)$ in the last step.
\section{Introduction and Main Results}
\input{1_Introduction_and_Main_Result/1.1_Introduction}
\input{1_Introduction_and_Main_Result/1.2_Gauge-periodic_samples}
\input{1_Introduction_and_Main_Result/1.3_BCS-functional}
\input{1_Introduction_and_Main_Result/1.4_Critical_temperature_of_BCS-functional}
\input{1_Introduction_and_Main_Result/1.5_Translation_Invariant_BCS-functional}
\input{1_Introduction_and_Main_Result/1.6_Ginzburg-Landau_functional}
\input{1_Introduction_and_Main_Result/1.7_Main_Results}
\input{1_Introduction_and_Main_Result/1.8_Organization_of_paper}
\subsection{Schatten classes}
\label{Schatten_Classes}
The trace per unit volume in \eqref{Trace_per_unit_volume_definition} gives rise to Schatten classes of periodic operators, whose norms play an important role in our proofs. In this section we recall several well-known facts about these norms.
For $1 \leq p < \infty$, the $p$\tho\ local von Neumann--Schatten class $\mathcal{S}^p$ consists of all gauge-periodic operators $A$ having finite $p$-norm, that is, $\Vert A\Vert_p^p \coloneqq \Tr (|A|^p) <\infty$. The space $\mathcal{S}^\infty$ of bounded gauge-periodic operators is equipped with the usual operator norm. We note that the $p$-norm is not monotone decreasing in the index $p$, in contrast to the usual Schatten norms, where such a monotonicity holds; see the discussion below \cite[Eq.~(3.9)]{Hainzl2012}.
We recall that the triangle inequality
\begin{align*}
\Vert A + B\Vert_p \leq \Vert A\Vert_p + \Vert B\Vert_p
\end{align*}
holds for operators in $\mathcal{S}^p$ for $1 \leq p \leq \infty$. We also have the generalized version of Hölder's inequality
\begin{align}
\Vert AB\Vert_r \leq \Vert A\Vert_p \Vert B\Vert_q, \label{Schatten-Hoelder}
\end{align}
which holds for $1 \leq p,q,r \leq \infty$ with $\frac{1}{r} = \frac{1}{p} + \frac{1}{q}$. The familiar inequality
\begin{align*}
| \Tr A | \leq \Vert A \Vert_1
\end{align*}
also holds in the case of local Schatten norms.
The above inequalities can be deduced from their versions for the usual Schatten norms, see, e.g., \cite{Simon05}, with the help of the magnetic Bloch--Floquet decomposition. We refer to \cite[Section XIII.16]{Reedsimon4} for an introduction to the Bloch--Floquet transformation and to \cite{Stefan_Peierls} for a treatment of the magnetic case. To be more precise, a gauge-periodic operator $A$ satisfies the unitary equivalence
\begin{align*}
A \cong \int^{\oplus}_{[0,\sqrt{ 2 \pi }\, h]^3} \mathrm{d} k \; A_{k},
\end{align*}
which we use to write the trace per unit volume as
\begin{align}
\Tr A = \int_{[0,\sqrt{ 2 \pi }\, h]^3} \frac{\mathrm{d} k}{(2\pi)^3} \; \Tr_{L^2(Q_h)} A_{k}. \label{eq:ATPUV}
\end{align}
Here, $\Tr_{L^2(Q_h)}$ denotes the usual trace over $L^2(Q_h)$. When we use that $(AB)_k = A_k B_k$ holds for gauge-periodic operators $A$ and $B$, the above-mentioned inequalities for the trace per unit volume are implied by their usual versions.
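To illustrate this reduction, let $1 \leq p,q,r < \infty$ with $\frac 1r = \frac 1p + \frac 1q$. Since $(|AB|^r)_k = |A_k B_k|^r$, the inequality \eqref{Schatten-Hoelder} follows from the usual Schatten--Hölder inequality applied in each fiber together with Hölder's inequality in the variable $k$:
\begin{align*}
\Vert AB\Vert_r^r &= \int_{[0,\sqrt{ 2 \pi }\, h]^3} \frac{\mathrm{d} k}{(2\pi)^3} \; \Tr_{L^2(Q_h)} |A_k B_k|^r \\
&\leq \int_{[0,\sqrt{ 2 \pi }\, h]^3} \frac{\mathrm{d} k}{(2\pi)^3} \; \bigl( \Tr_{L^2(Q_h)} |A_k|^p \bigr)^{\nicefrac rp} \; \bigl( \Tr_{L^2(Q_h)} |B_k|^q \bigr)^{\nicefrac rq} \leq \Vert A\Vert_p^r \; \Vert B\Vert_q^r,
\end{align*}
where the last step is Hölder's inequality with the conjugate exponents $\nicefrac pr$ and $\nicefrac qr$.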
\subsection{Gauge-periodic Sobolev spaces}
\label{Periodic Spaces}
In this section we introduce Banach spaces of gauge-periodic functions, which will be used to describe Cooper pair wave functions of BCS states.
When working with center-of-mass and relative coordinates $(X,r)$ it is useful to define the magnetic momentum operators
\begin{align}
\Pi_{\mathbf A} &\coloneqq -\i\nabla_X + 2 \mathbf A(X), & \tilde \pi_{\mathbf A} &\coloneqq -\i\nabla_r + \frac 12 \mathbf A(r). \label{Magnetic_Momenta_full_COM}
\end{align}
We will also use the notation
\begin{align}
\Pi &\coloneqq -\i\nabla_X + 2 \mathbf A_{\mathbf B}(X), & \tilde \pi &\coloneqq -\i\nabla_r + \frac 12 \mathbf A_\mathbf B(r). \label{Magnetic_Momenta_COM}
\end{align}
If several coordinates appear in an equation we sometimes write $\Pi_X$ and $\tilde \pi_r$ to highlight on which coordinate $\Pi$ and $\tilde \pi$ are acting.
A function $\Psi \in L_\mathrm{loc}^p(\mathbb{R}^3)$ with $1 \leq p \leq \infty$ belongs to the space $L_{\mathrm{mag}}^p(Q_h)$ provided $T_h(\lambda)\Psi = \Psi$ holds for all $\lambda\in\Lambda_h$ (with $T_h(\lambda)$ in \eqref{Magnetic_Translation_Charge2}). We endow $L_{\mathrm{mag}}^p(Q_h)$ with the usual $p$-norm per unit volume
\begin{align}
\Vert \Psi\Vert_{L_{\mathrm{mag}}^p(Q_h)}^p &\coloneqq \fint_{Q_h} \mathrm{d} X \; |\Psi(X)|^p \coloneqq \frac{1}{|Q_h|} \int_{Q_h} \mathrm{d} X \; |\Psi(X)|^p \label{Periodic_p_Norm}
\end{align}
if $1 \leq p < \infty$ and with the $L^{\infty}(Q_h)$-norm if $p=\infty$. When it does not lead to confusion we use the abbreviation $\Vert \Psi\Vert_p$.
Analogously, for $m\in \mathbb{N}_0$, we define the Sobolev spaces of gauge-periodic functions corresponding to the constant magnetic field as
\begin{align}
H_{\mathrm{mag}}^m(Q_h) &\coloneqq \bigl\{ \Psi\in L_{\mathrm{mag}}^2(Q_h) : \Pi^\nu \Psi\in L_{\mathrm{mag}}^2(Q_h) \quad \forall \nu\in \mathbb{N}_0^3, |\nu|_1\leq m\bigr\}, \label{Periodic_Sobolev_Space}
\end{align}
where $|\nu |_1 \coloneqq \sum_{i=1}^3 \nu_i$ for $\nu\in \mathbb{N}_0^3$. It is a Hilbert space when endowed with the inner product
\begin{align}
\langle \Phi, \Psi\rangle_{H_{\mathrm{mag}}^m(Q_h)} &\coloneqq \sum_{|\nu|_1\leq m} h^{-2 - 2|\nu|_1} \; \langle \Pi^\nu \Phi, \Pi^\nu \Psi\rangle_{L_{\mathrm{mag}}^2(Q_h)}. \label{Periodic_Sobolev_Norm}
\end{align}
We note that if $\Psi$ is a gauge-periodic function then so is $\Pi^\nu \Psi$, since the magnetic momentum operator $\Pi$
commutes with the magnetic translations $T_h(\lambda)$ in \eqref{Magnetic_Translation_Charge2}. Furthermore, $\Pi$ is a self-adjoint operator on $H_{\mathrm{mag}}^1(Q_h)$.
The norms introduced in \eqref{Periodic_p_Norm} and \eqref{Periodic_Sobolev_Norm} display a scaling behavior with respect to $h$, which is motivated by the Ginzburg--Landau scaling in \eqref{GL-rescaling}. More precisely, whenever $\psi \in L_{\mathrm{mag}}^p(Q_1)$ and $\Psi(x) = h \psi(hx)$, then
\begin{align}
\Vert \Psi\Vert_{L_{\mathrm{mag}}^p(Q_h)} = h \, \Vert \psi\Vert_{L_{\mathrm{mag}}^p(Q_1)} \label{Periodic_p_Norm_scaling}
\end{align}
for every $1\leq p \leq \infty$. That is, $\Psi \sim h$ in any $p$-norm per unit volume.
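For later reference, let us verify \eqref{Periodic_p_Norm_scaling} for $1 \leq p < \infty$. With the substitution $Y = hX$, using that the fundamental cells satisfy $Q_1 = h\, Q_h$ and hence $|Q_h| = h^{-3} |Q_1|$, we find
\begin{align*}
\Vert \Psi\Vert_{L_{\mathrm{mag}}^p(Q_h)}^p = \frac{1}{|Q_h|} \int_{Q_h} \mathrm{d} X \; h^p \, |\psi(hX)|^p = \frac{h^{p-3}}{|Q_h|} \int_{Q_1} \mathrm{d} Y \; |\psi(Y)|^p = h^p \, \Vert \psi\Vert_{L_{\mathrm{mag}}^p(Q_1)}^p.
\end{align*}
The case $p = \infty$ is immediate from $\Vert \Psi\Vert_\infty = h\, \Vert \psi\Vert_\infty$.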
The inner product in \eqref{Periodic_Sobolev_Norm} is chosen such that
\begin{align*}
\Vert \Psi\Vert_{H_{\mathrm{mag}}^m(Q_h)} = \Vert \psi\Vert_{H_{\mathrm{mag}}^m(Q_1)}
\end{align*}
holds. This follows from \eqref{Periodic_p_Norm_scaling} and the fact that $\Vert \Pi^\nu\Psi\Vert_2^2$ scales as $h^{2 + 2|\nu|_1}$ for $\nu\in \mathbb{N}_0^3$. Such scaled norms have also been used in \cite{DeHaSc2021} but not in \cite{Hainzl2012,Hainzl2014}.
For the sake of completeness, let us also mention the following magnetic Sobolev inequality. There is a constant $C>0$ such that for any $h>0$ and any $\Psi\in H_{\mathrm{mag}}^1(Q_h)$, we have
\begin{align}
\Vert \Psi\Vert_{L_{\mathrm{mag}}^6(Q_h)}^2 &\leq C \, h^{-2}\, \Vert \Pi \Psi\Vert_{L_{\mathrm{mag}}^2(Q_h)}^2. \label{Magnetic_Sobolev}
\end{align}
The proof can be found in \cite{DeHaSc2021} below Eq.~(2.7).
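Let us briefly indicate how the factor $h^{-2}$ in \eqref{Magnetic_Sobolev} matches the scaling discussed above. If $\Psi(x) = h\psi(hx)$, then \eqref{Periodic_p_Norm_scaling} gives $\Vert \Psi\Vert_6^2 = h^2\, \Vert \psi\Vert_6^2$, while $\Vert \Pi\Psi\Vert_2^2$ scales as $h^4\, \Vert \Pi\psi\Vert_2^2$. Accordingly, \eqref{Magnetic_Sobolev} is equivalent to the $h$-independent inequality
\begin{align*}
\Vert \psi\Vert_{L_{\mathrm{mag}}^6(Q_1)}^2 \leq C\, \Vert \Pi \psi\Vert_{L_{\mathrm{mag}}^2(Q_1)}^2
\end{align*}
on the unit cell $Q_1$.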
The Cooper pair wave function $\alpha$ of an admissible BCS state $\Gamma$ belongs to the Hilbert--Schmidt class $\mathcal{S}^2$ defined in Section \ref{Schatten_Classes}, see the discussion below \eqref{Trace_per_unit_volume_definition}. The symmetry and the gauge-periodicity of the kernel of $\alpha$ in \eqref{alpha_periodicity} can be reformulated as
\begin{align}
\alpha(X,r) &= \mathrm{e}^{\i \mathbf B \cdot (\lambda \wedge X)} \; \alpha(X+ \lambda, r), \quad \lambda\in \Lambda_h; & \alpha(X,r) &= \alpha(X, -r) \label{alpha_periodicity_COM}
\end{align}
in terms of center-of-mass and relative coordinates. In other words, $\alpha(X,r)$ is a gauge-periodic function of the center-of-mass coordinate $X \in \mathbb{R}^3$ and a reflection-symmetric function of the relative coordinate $r \in \mathbb{R}^3$. We make use of the unitary equivalence of $\mathcal{S}^2$ and the space
\begin{align*}
{L^2(Q_h \times \Rbb_{\mathrm s}^3)} \coloneqq L_{\mathrm{mag}}^2(Q_h) \otimes L_{\mathrm{sym}}^2(\mathbb{R}^3),
\end{align*}
which consists of all square-integrable functions satisfying \eqref{alpha_periodicity_COM}. We also define the norm
\begin{align*}
\Vert \alpha\Vert_{{L^2(Q_h \times \Rbb_{\mathrm s}^3)}}^2 \coloneqq \fint_{Q_h} \mathrm{d} X\int_{\mathbb{R}^3} \mathrm{d} r \; |\alpha(X, r)|^2 = \frac{1}{|Q_h|} \int_{Q_h} \mathrm{d} X\int_{\mathbb{R}^3} \mathrm{d} r \; |\alpha(X, r)|^2.
\end{align*}
The identity $\Vert \alpha\Vert_2 = \Vert \alpha\Vert_{{L^2(Q_h \times \Rbb_{\mathrm s}^3)}}$ follows from \eqref{alpha_periodicity_COM}. In the following we therefore identify the scalar products $\langle \cdot, \cdot\rangle$ on ${L^2(Q_h \times \Rbb_{\mathrm s}^3)}$ and $\mathcal{S}^2$ with each other and we do not distinguish between operators in $\mathcal{S}^2$ and their kernels as this does not lead to confusion.
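The identity can be seen as follows. Writing the local Hilbert--Schmidt norm in terms of the integral kernel and introducing center-of-mass and relative coordinates $x = X + \frac r2$, $y = X - \frac r2$, whose Jacobian equals $1$, we obtain
\begin{align*}
\Vert \alpha\Vert_2^2 = \frac{1}{|Q_h|} \int_{Q_h} \mathrm{d} x \int_{\mathbb{R}^3} \mathrm{d} y \; |\alpha(x,y)|^2 = \frac{1}{|Q_h|} \int_{\mathbb{R}^3} \mathrm{d} r \int_{Q_h - \frac r2} \mathrm{d} X \; |\alpha(X,r)|^2 = \Vert \alpha\Vert_{{L^2(Q_h \times \Rbb_{\mathrm s}^3)}}^2,
\end{align*}
where in the last step we used that $|\alpha(X,r)|^2$ is periodic in $X$ by \eqref{alpha_periodicity_COM}, so that the inner integral may be taken over $Q_h$.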
By $H^1(Q_h\times \mathbb{R}_{\mathrm s}^3)$ we denote the Sobolev space of all functions $\alpha\in L^2(Q_h\times \mathbb{R}_{\mathrm s}^3)$, which have finite $H^1$-norm defined by
\begin{align}
\Vert \alpha\Vert_{H^1(Q_h\times \mathbb{R}_{\mathrm s}^3)}^2 &\coloneqq \Vert \alpha\Vert_2^2 + \Vert \Pi\alpha\Vert_2^2 + \Vert \tilde \pi\alpha\Vert_2^2 \label{H1-norm}
\end{align}
with $\Pi$ and $\tilde \pi$ in \eqref{Magnetic_Momenta_COM}.
We highlight that the norm in \eqref{H1-norm} is equivalent to the two norms
\begin{align}
\Tr [\alpha\alpha^*] + \Tr [(-\i \nabla + \mathbf A_\mathbf B)\alpha \alpha^* (-\i \nabla + \mathbf A_\mathbf B)] + \Tr [(-\i \nabla + \mathbf A_\mathbf B) \alpha^* \alpha (-\i \nabla + \mathbf A_\mathbf B)] \label{Norm_equivalence_1}
\end{align}
and
\begin{align}
\Vert \alpha\Vert_2^2 + \Vert (-\i \nabla + \mathbf A_\mathbf B)\alpha\Vert_2^2 + \Vert \alpha (-\i \nabla + \mathbf A_\mathbf B)\Vert_2^2, \label{Norm_equivalence_2}
\end{align}
compare also with the discussion below \eqref{Trace_per_unit_volume_definition}. We also note that the $H^m$-norm in \eqref{Periodic_Sobolev_Norm} and the $H^1$-norm in \eqref{H1-norm} are equivalent to the norms that we obtain when $\mathbf A_{\mathrm{B}}$ is replaced by $\mathbf A = \mathbf A_{\mathrm{B}} + A$ with a periodic vector potential $A \in L^{\infty}(\mathbb{R}^3)$.
\subsection{Periodic Sobolev spaces}
\section{Preliminaries}
\label{Preliminaries}
\input{2_Preliminaries/2.1_Schatten_classes}
\input{2_Preliminaries/2.2_Gauge-periodic_Sobolev_spaces}
\input{2_Preliminaries/2.3_Periodic_Sobolev_spaces}
\subsection{The Gibbs states \texorpdfstring{$\Gamma_\Delta$}{GammaDelta}}
For $\Psi\in L_{\mathrm{mag}}^2(Q_h)$ we define the gap function $\Delta\in {L^2(Q_h \times \Rbb_{\mathrm s}^3)}$ by
\begin{align}
\Delta(X,r) \coloneqq \Delta_\Psi(X, r) &\coloneqq -2 \; V\alpha_*(r) \Psi(X). \label{Delta_definition}
\end{align}
We also introduce the one-particle Hamiltonian
\begin{align}
\mathfrak{h}_{\mathbf A, W} &\coloneqq \mathfrak{h}_\mathbf A + W \coloneqq (-\i \nabla +\mathbf A_h )^2 + W_h - \mu \label{hfrakAW_definition}
\end{align}
as well as
\begin{align}
H_{\Delta} &\coloneqq H_0 + \delta \coloneqq \begin{pmatrix}
\mathfrak{h}_{\mathbf A, W} & 0 \\ 0 & -\ov{\mathfrak{h}_{\mathbf A, W}}
\end{pmatrix} + \begin{pmatrix}
0 & \Delta \\ \ov \Delta & 0
\end{pmatrix} = \begin{pmatrix}
\mathfrak{h}_{\mathbf A, W} & \Delta \\ \ov \Delta & -\ov {\mathfrak{h}_{\mathbf A, W}}
\end{pmatrix}. \label{HDelta_definition}
\end{align}
The Gibbs state at inverse temperature $\beta = T^{-1} >0$ is defined by
\begin{align}
\begin{pmatrix} \gamma_\Delta & \alpha_\Delta \\ \ov{\alpha_\Delta} & 1 - \ov{\gamma_\Delta}\end{pmatrix} = \Gamma_\Delta \coloneqq \frac{1}{1 + \mathrm{e}^{\beta H_\Delta}}. \label{GammaDelta_definition}
\end{align}
We highlight that the choice $\Delta =0$ yields the normal state $\Gamma_0$ in \eqref{Gamma0}. In our proof of the upper bound for the free energy in \eqref{ENERGY_ASYMPTOTICS} we will choose $\Psi$ as a minimizer of the Ginzburg--Landau functional in \eqref{Definition_GL-functional}, which satisfies the scaling in \eqref{GL-rescaling}. Since the $L^2(\mathbb{R}^3)$-norm of $V\alpha_*$ is of the order $1$, the local Hilbert--Schmidt norm of $\Delta$ is of the order $h$ in this case. In the proof of the lower bound we have less information about the function $\Psi$. The related difficulties are discussed in Remark~\ref{rem:alpha} below.
\begin{lem}[Admissibility of $\Gamma_\Delta$]
\label{Gamma_Delta_admissible}
Let Assumptions \ref{Assumption_V} and \ref{Assumption_KTc} hold. Then, for any $h>0$, any $T>0$, and any $\Psi\in H_{\mathrm{mag}}^1(Q_h)$, the state $\Gamma_\Delta$ in \eqref{GammaDelta_definition} is admissible.
\end{lem}
The choice of the states $\Gamma_\Delta$ is motivated by the following observation. Using standard variational arguments one can show that any minimizer $\Gamma$ of the BCS functional satisfies the nonlinear Bogolubov--de Gennes equation
\begin{align}
\Gamma &= \frac 1{1 + \mathrm{e}^{\beta \, \mathbb{H}_{V\alpha}}}, & \mathbb{H}_{V\alpha} = \begin{pmatrix} \mathfrak{h}_{\mathbf A, W} & -2\, V\alpha \\ -2\, \ov{V\alpha} & -\ov{\mathfrak{h}_{\mathbf A, W}}\end{pmatrix}. \label{BdG-equation}
\end{align}
Here, $V\alpha$ is the operator given by the integral kernel $V(r)\alpha(X,r)$. Since we are interested in approximate minimizers of the BCS functional, we choose $\Gamma_{\Delta}$ as an approximate solution to the BdG-equation in \eqref{BdG-equation}. The next result shows that, as far as the leading order behavior of $\alpha_{\Delta}$ is concerned, this is indeed the case. It should be compared to \eqref{Thm1_decomposition}.
\begin{prop}[Structure of $\alpha_\Delta$]
\label{Structure_of_alphaDelta} \label{STRUCTURE_OF_ALPHADELTA}
Let Assumptions \ref{Assumption_V} and \ref{Assumption_KTc}~(a) be satisfied and let $T_0>0$ be given. Then, there is a constant $h_0>0$ such that for any $0 < h \leq h_0$, any $T\geq T_0$, and any $\Psi\in H_{\mathrm{mag}}^2(Q_h)$ the function $\alpha_\Delta$ in \eqref{GammaDelta_definition} with $\Delta \equiv \Delta_\Psi$ as in \eqref{Delta_definition} has the decomposition
\begin{align}
\alpha_\Delta(X,r) &= \Psi(X) \alpha_*(r) - \eta_0(\Delta)(X,r) - \eta_{\perp}(\Delta)(X,r). \label{alphaDelta_decomposition_eq1}
\end{align}
The remainder functions $\eta_0(\Delta)$ and $\eta_\perp(\Delta)$ have the following properties:
\begin{enumerate}[(a)]
\item The function $\eta_0$ satisfies the bound
\begin{align}
\Vert \eta_0\Vert_{H^1(Q_h \times \Rbb_{\mathrm s}^3)}^2 &\leq C\; \bigl( h^5 + h^2 \, |T - {T_{\mathrm{c}}}|^2\bigr) \; \bigl( \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^6 + \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2\bigr). \label{alphaDelta_decomposition_eq2}
\end{align}
\item The function $\eta_\perp$ satisfies the bound
\begin{align}
\Vert \eta_\perp\Vert_{{H^1(Q_h \times \Rbb_{\mathrm s}^3)}}^2 + \Vert |r|\eta_\perp\Vert_{{L^2(Q_h \times \Rbb_{\mathrm s}^3)}}^2 &\leq C \; h^6 \; \Vert \Psi\Vert_{H_{\mathrm{mag}}^2(Q_h)}^2. \label{alphaDelta_decomposition_eq3}
\end{align}
\item The function $\eta_\perp$ has the explicit form
\begin{align*}
\eta_\perp(X, r) &= \int_{\mathbb{R}^3} \mathrm{d} Z \int_{\mathbb{R}^3} \mathrm{d} s \; k_T(Z, r-s) \, V\alpha_*(s) \, \bigl[ \cos(Z\cdot \Pi) - 1\bigr] \Psi(X)
\end{align*}
with $k_T(Z,r)$ defined in Section~\ref{Proofs} below \eqref{MTA_definition}. Moreover, for any radial $f,g\in L^2(\mathbb{R}^3)$ the operator
\begin{align*}
\iiint_{\mathbb{R}^9} \mathrm{d} Z \mathrm{d} r \mathrm{d} s \; f(r) \, k_T(Z, r-s) \, g(s) \, \bigl[ \cos(Z\cdot \Pi) - 1\bigr]
\end{align*}
commutes with $\Pi^2$. In particular,
if $P$ and $Q$ are two spectral projections of $\Pi^2$ with $P Q = 0$, then $\eta_\perp$ satisfies the orthogonality property
\begin{align}
\bigl\langle f(r) \, (P \Psi)(X), \, \eta_{\perp}(\Delta_{Q\Psi}) \bigr\rangle = 0.
\label{alphaDelta_decomposition_eq4}
\end{align}
\end{enumerate}
\end{prop}
\begin{bem}
\label{rem:alpha}
The statement of Proposition \ref{Structure_of_alphaDelta} should be read in two different ways, depending on whether we are interested in proving the upper or the lower bound for the BCS free energy. In the former case, the bound on $\Vert |r|\eta_\perp\Vert_{{L^2(Q_h \times \Rbb_{\mathrm s}^3)}}$ in part (b) and the statement in part (c) are irrelevant. The reason is that the gap function $\Delta\equiv \Delta_\Psi$ is defined with a minimizer $\Psi$ of the GL functional, whose $H_{\mathrm{mag}}^2(Q_h)$-norm is uniformly bounded. In this case all remainder terms can be estimated using \eqref{alphaDelta_decomposition_eq2} and \eqref{alphaDelta_decomposition_eq3}.
In contrast, in the proof of the lower bound for the BCS free energy in Section~\ref{Lower Bound Part B} we are forced to work with a trial state $\Gamma_{\Delta}$, whose gap function is defined in terms of a function $\Psi$ that is related to a low-energy state of the BCS functional. The properties of such a function are captured in Theorem~\ref{Structure_of_almost_minimizers} below. In this case we only have a bound on the $H_{\mathrm{mag}}^1(Q_h)$-norm of $\Psi$ at our disposal. To obtain a function in $H_{\mathrm{mag}}^2(Q_h)$, we introduce a regularized version of $\Psi$ as in \cite[Section~6]{Hainzl2012}, \cite[Section~6]{Hainzl2014}, \cite[Section~7]{Hainzl2017}, and \cite[Section~6]{DeHaSc2021} by $\Psi_\leq \coloneqq \mathbbs 1_{[0,\varepsilon]}(\Pi^2)\Psi$ for some $h^2 \ll \varepsilon \ll 1$, see Corollary \ref{Structure_of_almost_minimizers_corollary}. The $H_{\mathrm{mag}}^2(Q_h)$-norm of $\Psi_\leq$ is not uniformly bounded in $h$, see \eqref{Psileq_bounds} below. This causes a certain error term in the proof of the lower bound to be large a priori. Parts (b) and (c) of Proposition~\ref{Structure_of_alphaDelta} are needed to overcome this problem. Since many details of the relevant proof in Section~\ref{Lower Bound Part B} have been omitted because they go along the same lines as those in \cite[Section~6]{DeHaSc2021}, we refer to \cite[Remark~3.3]{DeHaSc2021} for more details.
\end{bem}
\subsection{The BCS energy of the states \texorpdfstring{$\Gamma_\Delta$}{GammaDelta}}
In this section we compute the BCS free energy of our trial states $\Gamma_\Delta$. The goal is to show that this energy minus the energy of the normal state $\Gamma_0$ is, to leading order as $h \to 0$, given by the Ginzburg--Landau energy of the function $\Psi$ appearing in the definition of $\Delta$. For a brief heuristic summary of these computations we refer to Section~\ref{sec:HeuristicComputation}.
We start our discussion by introducing the operators $L_{T, \mathbf A, W}$ and $N_{T, \mathbf A, W}$ that naturally appear when the BCS energy of $\Gamma_{\Delta}$ is expanded in powers of the gap function $\Delta$, see \eqref{eq:heuristicquadraticterms}. The Matsubara frequencies are given by
\begin{align}
\omega_n &\coloneqq \pi (2n+1) T, \qquad n \in \mathbb{Z}. \label{Matsubara_frequencies}
\end{align}
For a local Hilbert--Schmidt operator $\Delta$ with integral kernel $\Delta(x,y)$ satisfying \eqref{alpha_periodicity} and $\Delta(x,y) = \Delta(y,x)$ we define the linear map $L_{T, \mathbf A, W}$ by
\begin{align}
L_{T, \mathbf A, W}\Delta &\coloneqq -\frac 2\beta \sum_{n\in \mathbb{Z}} \frac 1{\i \omega_n - \mathfrak{h}_{\mathbf A, W}} \, \Delta \, \frac 1 {\i \omega_n + \ov{\mathfrak{h}_{\mathbf A, W}}}. \label{LTAW_definition}
\end{align}
The operator $\mathfrak{h}_{\mathbf A, W}$ is defined in \eqref{hfrakAW_definition}.
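To motivate the definition \eqref{LTAW_definition}, we recall the standard Matsubara summation formula: for real numbers $E$ and $E'$ with $E + E' \neq 0$ one has
\begin{align*}
-\frac 2\beta \sum_{n\in \mathbb{Z}} \frac 1{(\i \omega_n - E)(\i \omega_n + E')} = \frac{\tanh\bigl(\frac{\beta E}{2}\bigr) + \tanh\bigl(\frac{\beta E'}{2}\bigr)}{E + E'}.
\end{align*}
In particular, for $W = 0$, $\mathbf A = 0$, and a translation-invariant gap function, the operator $L_{T,0,0}$ acts in the relative momentum variable as multiplication by $\tanh(\frac \beta 2(p^2-\mu))/(p^2-\mu)$, the multiplier familiar from the translation-invariant gap equation.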
In the parameter regime we are interested in, we obtain the quadratic terms in the Ginzburg--Landau functional from $\langle \Delta, L_{T, \mathbf A, W}\Delta\rangle$. The spectral properties of the operator $L_{T, \mathbf A, W}$ have been studied in great detail for $W=0$ and $\mathbf A = \mathbf A_{e_3}$ in \cite{Hainzl2017}. This allowed the authors to compute the BCS critical temperature shift caused by a small constant magnetic field within the framework of linearized BCS theory. That this prediction is accurate also when the nonlinear problem is considered has been shown in \cite{DeHaSc2021}.
Moreover, the nonlinear (cubic) map $N_{T, \mathbf A, W}$ is defined by
\begin{align}
N_{T, \mathbf A, W}(\Delta) &\coloneqq \frac 2\beta \sum_{n\in \mathbb{Z}} \frac 1{\i \omega_n - \mathfrak{h}_{\mathbf A, W}}\, \Delta \, \frac 1{\i\omega_n + \ov{\mathfrak{h}_{\mathbf A, W}}} \, \ov \Delta \, \frac 1{\i\omega_n - \mathfrak{h}_{\mathbf A, W}}\, \Delta \, \frac 1{\i\omega_n + \ov{\mathfrak{h}_{\mathbf A, W}}}. \label{NTAW_definition}
\end{align}
The expression $\langle \Delta, N_{T, \mathbf A, W}(\Delta)\rangle$ gives rise to the quartic term in the Ginzburg--Landau functional. The operator $N_{T, \mathbf A, W}$ also appeared in \cite{BdGtoGL} and in \cite{DeHaSc2021}.
From the following lemma we know that $L_{T,\mathbf A,W} \Delta$ and $N_{T, \mathbf A, W}(\Delta)$ are both in ${L^2(Q_h \times \Rbb_{\mathrm s}^3)}$ provided $\Delta$ satisfies the symmetry relations in \eqref{alpha_periodicity_COM} and some mild regularity assumptions.
\begin{lem}
\label{lem:propsLN}
The map $L_{T,\mathbf A,W}$ is a bounded linear operator on ${L^2(Q_h \times \Rbb_{\mathrm s}^3)}$. Assume that the integral kernel $\Delta \in L^{2}(Q_h \times \mathbb{R}_{\mathrm{s}}^3)$ defines a bounded operator on $L^2(\mathbb{R}^3)$. Then we have $N_{T, \mathbf A, W}(\Delta) \in {L^2(Q_h \times \Rbb_{\mathrm s}^3)}$.
\end{lem}
The following representation formula for the BCS functional is the starting point of our proofs of Theorems \ref{Main_Result} and \ref{Main_Result_Tc}. It will be used to prove upper and lower bounds, and is therefore formulated for general BCS states and not only for our trial states. The proof of Proposition~\ref{BCS functional_identity} can be found in \cite[Proposition 3.4]{DeHaSc2021}.
\begin{prop}[Representation formula for the BCS functional]
\label{BCS functional_identity} \label{BCS FUNCTIONAL_IDENTITY}
Let $\Gamma$ be an admissible state. For any $h>0$, let $\Psi\in H_{\mathrm{mag}}^1(Q_h)$ and let $\Delta \equiv \Delta_\Psi$ be as in \eqref{Delta_definition}. For $T>0$ and if $V\alpha_*\in L^{\nicefrac 65}(\mathbb{R}^3) \cap L^2(\mathbb{R}^3)$, there is an operator $\mathcal{R}_{T, \mathbf A, W}^{(1)}(\Delta)\in \mathcal{S}^1$ such that
\begin{align}
\FBCS(\Gamma) - \FBCS(\Gamma_0)& \notag\\
&\hspace{-70pt}= - \frac 14 \langle \Delta, L_{T, \mathbf A, W} \Delta\rangle + \frac 18 \langle \Delta, N_{T, \mathbf A, W} (\Delta)\rangle + \Vert\Psi \Vert_{L_{\mathrm{mag}}^2(Q_h)}^2 \, \langle \alpha_*, V\alpha_*\rangle_{L^2(\mathbb{R}^3)} \notag \\
&\hspace{-40pt}+ \Tr\bigl[\mathcal{R}_{T, \mathbf A, W}^{(1)}(\Delta)\bigr] \notag\\
&\hspace{-40pt}+ \frac{T}{2} \mathcal{H}_0(\Gamma, \Gamma_\Delta) - \fint_{Q_h} \mathrm{d} X \int_{\mathbb{R}^3} \mathrm{d} r \; V(r) \, \bigl| \alpha(X,r) - \alpha_*(r) \Psi(X)\bigr|^2, \label{BCS functional_identity_eq}
\end{align}
where
\begin{align}
\mathcal{H}_0(\Gamma, \Gamma_\Delta) \coloneqq \Tr_0\bigl[ \Gamma(\ln \Gamma - \ln \Gamma_\Delta) + (1 - \Gamma)(\ln(1-\Gamma) - \ln(1 - \Gamma_\Delta))\bigr] \label{Relative_Entropy}
\end{align}
denotes the relative entropy of $\Gamma$ with respect to $\Gamma_\Delta$. Moreover, $\mathcal{R}_{T, \mathbf A, W}^{(1)}(\Delta)$ obeys the estimate
\begin{align*}
\Vert\mathcal{R}_{T, \mathbf A, W}^{(1)}(\Delta) \Vert_1 \leq C \; T^{-5} \; h^6 \; \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^6.
\end{align*}
\end{prop}
The definition \eqref{Relative_Entropy} of the relative entropy uses a weaker form of trace called $\Tr_0$, which is defined as follows. We call a gauge-periodic operator $A$ acting on $L^2(\mathbb{R}^3)\oplus L^2(\mathbb{R}^3)$ weakly locally trace class if $P_0AP_0$ and $Q_0AQ_0$ are locally trace class, where
\begin{align}
P_0 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \label{P0}
\end{align}
and $Q_0 = 1-P_0$. For such operators the weak trace per unit volume is defined by
\begin{align}
\Tr_0 (A)\coloneqq \Tr\bigl( P_0AP_0 + Q_0 AQ_0\bigr). \label{Weak_trace_definition}
\end{align}
If an operator $A$ is locally trace class then it is also weakly locally trace class but the converse statement does not hold in general. The converse is true, however, if $A \geqslant 0$. We highlight that if $A$ is locally trace class then the weak trace per unit volume and the trace per unit volume coincide. Before their appearance in the context of BCS theory in \cite{Hainzl2012,Hainzl2014,DeHaSc2021}, weak traces of the above kind appeared in \cite{HLS05,FLLS11}.
Let us have a closer look at the right side of \eqref{BCS functional_identity_eq}. From the terms in the first line we will extract the Ginzburg--Landau functional, see Theorem \ref{Calculation_of_the_GL-energy} below. The terms in the second and third line contribute to the remainder. The term in the second line is small in absolute value, but the techniques used to bound the third line differ for upper and lower bounds. This is responsible for the different qualities of the upper and lower bounds in Theorems \ref{Main_Result} and \ref{Main_Result_Tc}, see \eqref{Rcal_error_Definition}. For an upper bound we choose $\Gamma \coloneqq \Gamma_\Delta$. Hence $\mathcal{H}_0(\Gamma_\Delta, \Gamma_\Delta)=0$ and the last term in \eqref{BCS functional_identity_eq} can be estimated with the help of Proposition~\ref{Structure_of_alphaDelta}. To obtain a lower bound, the third line needs to be bounded from below using the lower bound for the relative entropy in \cite[Lemma~6.1]{DeHaSc2021}, which appeared for the first time in \cite[Lemma~5]{Hainzl2012}.
Before we state the next result, we introduce the function
\begin{align}
\hat{V\alpha_*}(p) \coloneqq \int_{\mathbb{R}^3} \mathrm{d}x\; \mathrm{e}^{-\i p\cdot x} \, V(x)\alpha_*(x), \label{Gap_function}
\end{align}
which also fixes our convention of the Fourier transform.
\begin{thm}[Calculation of the GL energy]
\label{Calculation_of_the_GL-energy} \label{CALCULATION_OF_THE_GL-ENERGY}
Let Assumptions \ref{Assumption_V} and \ref{Assumption_KTc} (a) hold and let $D\in \mathbb{R}$ be given. Then, there is a constant $h_0>0$ such that for any $0 < h \leq h_0$, any $\Psi\in H_{\mathrm{mag}}^2(Q_h)$, $\Delta \equiv \Delta_\Psi$ as in \eqref{Delta_definition}, and $T = {T_{\mathrm{c}}}(1 - Dh^2)$, we have
\begin{align}
- \frac 14 \langle \Delta, L_{T, \mathbf A, W} \Delta\rangle + \frac 18 \langle \Delta, N_{T, \mathbf A, W} (\Delta)\rangle + \Vert \Psi\Vert_{L_{\mathrm{mag}}^2(Q_h)}^2 \; \langle \alpha_*, V\alpha_*\rangle_{L^2(\mathbb{R}^3)} & \notag\\
&\hspace{-60pt}= \mathcal E^{\mathrm{GL}}_{D, h}(\Psi) + R(h). \label{Calculation_of_the_GL-energy_eq}
\end{align}
Here,
\begin{align*}
|R(h)|\leq C \, \bigl[ h^5 \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2 + h^6 \, \Vert\Psi\Vert_{H_{\mathrm{mag}}^2(Q_h)}^2 \bigr] \, \bigl[ 1 + \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2 \bigr]
\end{align*}
and with the functions
\begin{align}
g_1(x) &\coloneqq \frac{\tanh(x/2)}{x^2} - \frac{1}{2x}\frac{1}{\cosh^2(x/2)}, & g_2(x) &\coloneqq \frac 1{2x} \frac{\tanh(x/2)}{\cosh^2(x/2)}, \label{XiSigma}
\end{align}
the coefficients $\Lambda_0$, $\Lambda_1$, $\Lambda_2$, and $\Lambda_3$ in $\mathcal E^{\mathrm{GL}}_{D, h}$ are given by
\begin{align}
\Lambda_0 &\coloneqq \frac{{\beta_{\mathrm{c}}}^2}{16} \int_{\mathbb{R}^3} \frac{\mathrm{d} p}{(2\pi)^3} \; |(-2)\hat{V\alpha_*}(p)|^2 \; \bigl( g_1 ({\beta_{\mathrm{c}}}(p^2-\mu)) + \frac 23 {\beta_{\mathrm{c}}} \, p^2\, g_2({\beta_{\mathrm{c}}}(p^2-\mu))\bigr), \label{GL-coefficient_1}\\
\Lambda_1 &\coloneqq \frac{{\beta_{\mathrm{c}}}^2}{4} \int_{\mathbb{R}^3} \frac{\mathrm{d} p}{(2\pi)^3} \; |(-2)\hat{V\alpha_*}(p)|^2 \; g_1({\beta_{\mathrm{c}}} (p^2-\mu)), \label{GL-coefficient_W} \\
\Lambda_2 &\coloneqq \frac{{\beta_{\mathrm{c}}}}{8} \int_{\mathbb{R}^3} \frac{\mathrm{d} p}{(2\pi)^3} \; \frac{|(-2)\hat{V\alpha_*}(p)|^2}{\cosh^2(\frac{{\beta_{\mathrm{c}}}}{2}(p^2 -\mu))},\label{GL-coefficient_2} \\
\Lambda_3 &\coloneqq \frac{{\beta_{\mathrm{c}}}^2}{16} \int_{\mathbb{R}^3} \frac{\mathrm{d} p}{(2\pi)^3} \; |(-2) \hat{V\alpha_*}(p)|^4 \; \frac{g_1({\beta_{\mathrm{c}}}(p^2-\mu))}{p^2-\mu}.\label{GL_coefficient_3}
\end{align}
\end{thm}
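We note in passing that the functions in \eqref{XiSigma} can equivalently be written as derivatives, which is readily checked by direct differentiation:
\begin{align*}
g_1(x) = -\frac{\mathrm{d}}{\mathrm{d} x} \, \frac{\tanh(x/2)}{x}, \qquad\qquad g_2(x) = -\frac{1}{2x} \, \frac{\mathrm{d}}{\mathrm{d} x} \, \frac{1}{\cosh^2(x/2)}.
\end{align*}
In particular, both functions are smooth on $\mathbb{R}$, with $g_1(x) = O(|x|^{-2})$ and $g_2(x)$ exponentially decaying as $|x|\to \infty$.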
It has been argued in \cite{Hainzl2012,Hainzl2014} that the coefficients $\Lambda_0$, $\Lambda_2$, and $\Lambda_3$ are positive. The coefficient $\Lambda_1$ can, in principle, have either sign. Its sign is related to the derivative of $T_{\mathrm{c}}$ with respect to $\mu$, see the remark below Eq.~(1.21) in \cite{Hainzl2012}.
We highlight the small factor $h^5$ in front of the $H_{\mathrm{mag}}^1$-norm of $\Psi$ in the bound for $|R(h)|$. It is worse than the comparable estimate in \cite[Theorem 3.5]{DeHaSc2021}; this loss is a consequence of the presence of the periodic vector potential $A$. The error is, however, of the same size as the related error terms in \cite{Hainzl2012,Hainzl2014}.
Theorem \ref{Calculation_of_the_GL-energy} provides us with a result for the BCS energy for temperatures of the form $T = {T_{\mathrm{c}}}(1 - Dh^2)$ with $D \in \mathbb{R}$ fixed. In the proof of Theorem~\ref{Main_Result_Tc}~(a) we also need the information that our system is superconducting for smaller temperatures. The precise statement is captured in the following proposition.
\begin{prop}[A priori bound on Theorem \ref{Main_Result_Tc} (a)]
\label{Lower_Tc_a_priori_bound}
Let Assumptions \ref{Assumption_V} and \ref{Assumption_KTc} (a) hold and let $T_0>0$. Then, there are constants $h_0>0$ and $D_0>0$ such that for all $0 < h \leq h_0$ and all temperatures $T$ obeying
\begin{align*}
T_0 \leq T < {T_{\mathrm{c}}} (1 - D_0 h^2),
\end{align*}
there is a BCS state $\Gamma$ with
\begin{align}
\FBCS(\Gamma) - \FBCS(\Gamma_0) < 0. \label{Lower_critical_shift_2}
\end{align}
\end{prop}
\subsection{The upper bound on \texorpdfstring{(\ref{ENERGY_ASYMPTOTICS})}{(\ref{ENERGY_ASYMPTOTICS})} and proof of Theorem \ref{Main_Result_Tc} (a)}
\label{Upper_Bound_Proof_Section}
The results in the previous section can be used to prove the upper bound on \eqref{ENERGY_ASYMPTOTICS} and Theorem~\ref{Main_Result_Tc} (a). These proofs are almost literally the same as in the case of a constant magnetic field, and we therefore refer to \cite[Section 3.3]{DeHaSc2021} for a detailed presentation.
\section{Trial States and their BCS Energy}
\label{Upper_Bound} \label{UPPER_BOUND}
In this section we introduce a class of trial states (Gibbs states), state several results concerning their Cooper pair wave function and their BCS free energy, and use these results to prove the upper bound on \eqref{ENERGY_ASYMPTOTICS} as well as Theorem~\ref{Main_Result_Tc}~(a). The trial states $\Gamma_{\Delta}$ are of the form stated in \eqref{eq:heuristicstrialstate}. In Proposition~\ref{Structure_of_alphaDelta} we show that if $\Delta$ is given by $V \alpha_*(r) \Psi(X)$ with a gauge-periodic function $\Psi$ that is small in an appropriate sense for small $h$, then $[\Gamma_{\Delta}]_{12} = \alpha_{\Delta} \approx \alpha_*(r) \Psi(X)$ to leading order in $h$. In Proposition~\ref{BCS functional_identity} we prove a representation formula for the BCS functional that allows us to compute the BCS energy of the trial states $\Gamma_\Delta$. Finally, in Theorem~\ref{Calculation_of_the_GL-energy} we extract the terms of the Ginzburg--Landau functional from the BCS free energy of $\Gamma_{\Delta}$. The proofs of these statements are given in Section~\ref{Proofs}. Our trial state analysis should be viewed as a further development of the one in \cite{DeHaSc2021} for the constant magnetic field. The techniques we develop in Sections \ref{Upper_Bound} and \ref{Proofs} are based on gauge-invariant perturbation theory, which was pioneered in the framework of linearized BCS theory for a constant external magnetic field in \cite{Hainzl2017}. Our approach should also be compared to the trial state analysis in \cite{Hainzl2012,Hainzl2014}, where a semi-classical expansion is used to treat magnetic fields with zero flux through the unit cell.
\input{3_Upper_Bound/3.1_Gibbs_states_GammaDelta}
\input{3_Upper_Bound/3.2_BCS_energy_GammaDelta}
\input{3_Upper_Bound/3.3_Proof_of_Upper_Bound}
\subsection{Schatten norm estimates for operators given by product kernels}
\label{Estimates_on_product_wave_functions_Section}
During our trial state analysis, we frequently need Schatten norm estimates for operators defined by integral kernels of the form $\tau(x-y) \Psi((x+y)/2)$. The relevant estimates are provided in the following lemma, whose proof can be found in \cite[Lemma 4.1]{DeHaSc2021}.
\begin{lem}
\label{Schatten_estimate}
Let $h>0$, let $\Psi$ be a gauge-periodic function on $Q_h$ and let $\tau$ be an even and real-valued function on $\mathbb{R}^3$. Moreover, let the operator $\alpha$ be defined via its integral kernel $\alpha(X,r) \coloneqq \tau(r)\Psi(X)$, i.e., $\alpha$ acts as
\begin{align*}
\alpha f(x) &= \int_{\mathbb{R}^3} \mathrm{d} y \; \tau(x - y) \Psi\bigl(\frac{x+y}{2}\bigr) f(y), & f &\in L^2(\mathbb{R}^3).
\end{align*}
\begin{enumerate}[(a)]
\item Let $p \in \{2,4,6\}$. If $\Psi\in L_{\mathrm{mag}}^p(Q_h)$ and $\tau \in L^{\frac {p}{p-1}}(\mathbb{R}^3)$, then $\alpha \in \mathcal{S}^p$ and
\begin{align*}
\Vert \alpha\Vert_p \leq C \; \Vert \tau\Vert_{\frac{p}{p-1}} \; \Vert \Psi\Vert_p.
\end{align*}
\item For any $\nu > 3$, there is a $C_\nu >0$, independent of $h$, such that if $(1 +|\cdot|)^\nu \tau\in L^{\nicefrac 65}(\mathbb{R}^3)$ and $\Psi\in L_{\mathrm{mag}}^6(Q_h)$, then $\alpha \in \mathcal{S}^\infty$ and
\begin{align*}
\Vert \alpha\Vert_\infty &\leq C_\nu \, h^{-\nicefrac 12} \; \max\{1 , h^\nu\} \; \Vert (1 + |\cdot|)^\nu \tau\Vert_{\nicefrac 65} \; \Vert \Psi\Vert_6.
\end{align*}
\end{enumerate}
\end{lem}
\subsection{Proof of Proposition \ref{BCS functional_identity}}
\label{BCS functional_identity_proof_Section}
We recall the definitions of $\Delta(X, r) = -2\, V\alpha_*(r) \Psi(X)$ in \eqref{Delta_definition}, the Hamiltonian $H_\Delta$ in \eqref{HDelta_definition} and $\Gamma_\Delta = (1 + \mathrm{e}^{\beta H_\Delta})^{-1}$ in \eqref{GammaDelta_definition}. Throughout this section we assume that the function $\Psi$ in the definition of $\Delta$ is in $H_{\mathrm{mag}}^1(Q_h)$. From Lemma~\ref{Gamma_Delta_admissible}, which is proved in Section~\ref{sec:proofofadmissibility} below, we know that $\Gamma_{\Delta}$ is an admissible BCS state in this case. We define the anti-unitary operator
\begin{align*}
\mathcal{J} &:= \begin{pmatrix} 0 & J \\ -J & 0\end{pmatrix}
\end{align*}
with $J$ defined below \eqref{Gamma_introduction}. The operator $H_\Delta$ obeys the relation $\mathcal{J} H_\Delta \mathcal{J}^* = - H_\Delta$, which implies $\mathcal{J}\Gamma_\Delta \mathcal{J}^* = 1 - \Gamma_\Delta$. Using this and the cyclicity of the trace, we write the entropy of $\Gamma_{\Delta}$ as
\begin{align}
S(\Gamma_\Delta) = \frac 12 \Tr[ \varphi(\Gamma_\Delta)] \label{entropy matrix},
\end{align}
where $\varphi(x) := -[x\ln(x) + (1 - x) \ln(1-x)]$ for $0 \leq x \leq 1$.
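For the reader's convenience we sketch this step. With the convention $S(\Gamma) = -\Tr[\Gamma \ln(\Gamma)]$ for the entropy per unit volume, the relation $\mathcal{J}\Gamma_\Delta \mathcal{J}^* = 1 - \Gamma_\Delta$ and the invariance of the (real) trace of a self-adjoint operator under conjugation with the anti-unitary $\mathcal{J}$ imply
\begin{align*}
\Tr[ \Gamma_\Delta \ln(\Gamma_\Delta) ] = \Tr[ \mathcal{J} \, \Gamma_\Delta \ln(\Gamma_\Delta) \, \mathcal{J}^* ] = \Tr[ (1 - \Gamma_\Delta) \ln(1 - \Gamma_\Delta) ],
\end{align*}
and hence
\begin{align*}
S(\Gamma_\Delta) = -\frac 12 \Tr\bigl[ \Gamma_\Delta \ln(\Gamma_\Delta) + (1 - \Gamma_\Delta) \ln(1 - \Gamma_\Delta) \bigr] = \frac 12 \Tr[ \varphi(\Gamma_\Delta) ].
\end{align*}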
In order to rewrite the BCS functional, it is useful to introduce a weaker notion of trace per unit volume. More precisely, we call a gauge-periodic operator $A$ acting on $L^2(\mathbb{R}^3)\oplus L^2(\mathbb{R}^3)$ weakly locally trace class if $P_0AP_0$ and $Q_0AQ_0$ are locally trace class, where
\begin{align}
P_0 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \label{P0}
\end{align}
and $Q_0 = 1-P_0$, and we define its weak trace per unit volume by
\begin{align}
\Tr_0 (A):= \Tr\bigl( P_0AP_0 + Q_0 AQ_0\bigr). \label{Weak_trace_definition}
\end{align}
If an operator is locally trace class then it is also weakly locally trace class, but the converse need not be true. It is true, however, in the case of nonnegative operators. If an operator is locally trace class then its weak trace per unit volume and its usual trace per unit volume coincide.
Weak traces of the above kind appeared in \cite{HLS05,FLLS11} before they were used in the context of BCS theory in \cite{Hainzl2012,Hainzl2014}. In \cite[Lemma 1]{HLS05} it was shown that if two weak traces $\Tr_P$ and $\Tr_{P'}$ are defined via projections $P$ and $P'$, then $\Tr_{P}(A) = \Tr_{P'}(A)$ holds for appropriate $A$ provided $P - P'$ is a Hilbert--Schmidt operator.
Let $\Gamma$ be an admissible BCS state and recall the normal state $\Gamma_0$ in \eqref{Gamma0}.
In terms of the weak trace per unit volume, the BCS functional can be written as
\begin{align}
\FBCS(\Gamma) - \FBCS(\Gamma_0) \hspace{-90pt}& \notag\\
&= \frac 12 \Tr\bigl[ (H_0\Gamma - H_0\Gamma_0) - T \varphi(\Gamma) + T \varphi(\Gamma_0)\bigr] - \fint_{Q_B} \mathrm{d} X \int_{\mathbb{R}^3} \mathrm{d} r \; V(r)\, |\alpha(X,r)|^2 \notag\\
&= \frac 12 \Tr_0\bigl[ (H_\Delta\Gamma_\Delta - H_0\Gamma_0) - T\varphi(\Gamma_\Delta) + T\varphi(\Gamma_0)\bigr] \label{kinetic energy} \\
&\hspace{55pt} + \frac 12 \Tr_0\bigl[ (H_\Delta \Gamma - H_\Delta \Gamma_\Delta) - T \varphi(\Gamma) + T\varphi(\Gamma_\Delta)\bigr] \label{relative_entropy_term}\\
&\hspace{55pt} - \frac 12 \Tr_0 \begin{pmatrix}
0 & \Delta \\ \ov \Delta & 0 \end{pmatrix} \Gamma - \fint_{Q_B} \mathrm{d} X \int_{\mathbb{R}^3} \mathrm{d} r \; V(r)\, |\alpha(X,r)|^2. \label{interaction-term}
\end{align}
Note that we added and subtracted the first terms in \eqref{kinetic energy} and \eqref{interaction-term} to replace the Hamiltonian $H_0$ in \eqref{relative_entropy_term} by $H_\Delta$. The operators inside the traces in \eqref{kinetic energy} and \eqref{relative_entropy_term} are not necessarily locally trace class, which is the reason we introduce the weak local trace. We also note that \eqref{relative_entropy_term} equals $\frac T2$ times the relative entropy $ \mathcal{H}_0(\Gamma, \Gamma_\Delta)$ of $\Gamma$ with respect to $\Gamma_{\Delta}$, defined in \eqref{Relative_Entropy}.
The first term in \eqref{interaction-term} can be written as
\begin{align}
-\frac 12\Tr_0 \begin{pmatrix} 0 & \Delta \\ \ov \Delta & 0\end{pmatrix} \Gamma &
=2\Re \fint_{Q_B} \mathrm{d} X \int_{\mathbb{R}^3} \mathrm{d} r\; (V\alpha_*)(r) \Psi(X) \; \ov \alpha(X,r). \label{additional-term}
\end{align}
The integrands in \eqref{interaction-term} and \eqref{additional-term} are equal to
\begin{equation*}
-|\alpha(X,r)|^2 + 2\Re\alpha_*(r)\Psi(X) \; \ov \alpha (X,r) = -\bigl|\alpha(X, r) - \alpha_*(r)\Psi(X)\bigr|^2 + \bigl|\alpha_*(r)\Psi(X)\bigr|^2.
\end{equation*}
To rewrite \eqref{kinetic energy} we need the following identities, whose proofs are straightforward computations:
\begin{align}
\Gamma_\Delta &= \frac 12 - \frac 12 \tanh\bigl( \frac \beta 2 H_\Delta\bigr), & \ln(\Gamma_\Delta) &= -\frac \beta 2 H_\Delta - \ln\bigl( 2\cosh\bigl( \frac \beta 2 H_\Delta\bigr)\bigr), \notag\\
1 - \Gamma_\Delta &= \frac 12 + \frac 12 \tanh\bigl( \frac\beta 2H_\Delta\bigr), & \ln(1 - \Gamma_\Delta) &= \frac\beta 2H_\Delta - \ln\bigl( 2\cosh\bigl( \frac \beta 2H_\Delta\bigr)\bigr). \label{GammaRelations}
\end{align}
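The identities in the first column follow from the elementary computation
\begin{align*}
\frac{1}{1 + \mathrm{e}^{x}} = \frac{\mathrm{e}^{-\nicefrac x2}}{\mathrm{e}^{\nicefrac x2} + \mathrm{e}^{-\nicefrac x2}} = \frac 12 \Bigl( 1 - \tanh\bigl( \frac x2 \bigr) \Bigr),
\end{align*}
applied with $x = \beta H_\Delta$ and $x = -\beta H_\Delta$ via the spectral theorem. The identities in the second column follow upon writing $\Gamma_\Delta = \mathrm{e}^{-\nicefrac{\beta H_\Delta}{2}} \, \bigl( 2 \cosh\bigl( \frac \beta 2 H_\Delta \bigr) \bigr)^{-1}$ and $1 - \Gamma_\Delta = \mathrm{e}^{\nicefrac{\beta H_\Delta}{2}} \, \bigl( 2 \cosh\bigl( \frac \beta 2 H_\Delta \bigr) \bigr)^{-1}$ and taking logarithms.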
Eq.~\eqref{GammaRelations} implies
\begin{align}
\Gamma_\Delta \ln(\Gamma_\Delta) + (1-\Gamma_\Delta)\ln(1-\Gamma_\Delta) = -\ln\bigl( 2\cosh\bigl( \frac \beta 2H_\Delta\bigr) \bigr) + \frac{\beta}{2} H_\Delta \tanh\bigl( \frac\beta 2 H_\Delta\bigr),
\label{GammaDeltaRelation}
\end{align}
as well as
\begin{align*}
\beta H_\Delta \Gamma_\Delta - \varphi(\Gamma_\Delta) &= \frac{\beta}{2} H_\Delta - \ln \bigl( 2 \cosh\bigl( \frac \beta 2H_\Delta\bigr)\bigr).
\end{align*}
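Indeed, by \eqref{GammaRelations} and \eqref{GammaDeltaRelation} we have
\begin{align*}
\beta H_\Delta \Gamma_\Delta - \varphi(\Gamma_\Delta) &= \frac \beta 2 H_\Delta \Bigl( 1 - \tanh\bigl( \frac \beta 2 H_\Delta \bigr) \Bigr) - \ln\bigl( 2 \cosh\bigl( \frac \beta 2 H_\Delta \bigr) \bigr) + \frac \beta 2 H_\Delta \tanh\bigl( \frac \beta 2 H_\Delta \bigr),
\end{align*}
and the two $\tanh$ terms cancel.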
This allows us to rewrite \eqref{kinetic energy} as
\begin{align}
\frac 1{2\beta}\, \Tr_0\bigl[ (\beta H_\Delta\Gamma_\Delta - \beta H_0\Gamma_0) - \varphi(\Gamma_\Delta) + \varphi(\Gamma_0)\bigr] & \notag \\
&\hspace{-180pt}= \frac 14 \Tr_0 \bigl[ H_\Delta - H_0\bigr] - \frac 1{2\beta}\Tr_0 \bigl[ \ln\bigl( \cosh\bigl( \frac \beta 2H_\Delta\bigr)\bigr) - \ln\bigl( \cosh\bigl( \frac \beta 2H_0\bigr)\bigr)\bigr]. \label{trace-difference-ln}
\end{align}
We note that $H_\Delta - H_0$ is an offdiagonal matrix, and hence weakly locally trace class with weak trace equal to $0$. This, in particular, implies that the second term on the right side of \eqref{trace-difference-ln} is weakly locally trace class. To summarize, our intermediate result reads
\begin{align}
\FBCS(\Gamma) - \FBCS(\Gamma_0) \hspace{-60pt} & \notag\\
&= -\frac 1{2\beta}\Tr_0 \bigl[ \ln\bigl( \cosh\bigl( \frac \beta 2H_\Delta\bigr)\bigr) - \ln\bigl( \cosh\bigl( \frac \beta 2H_0\bigr)\bigr)\bigr] \notag\\
&\hspace{30pt} + \Vert\Psi \Vert_{L_{\mathrm{mag}}^2(Q_B)}^2 \, \langle \alpha_*, V\alpha_*\rangle_{L^2(\mathbb{R}^3)} \notag \\
&\hspace{30pt} +\frac{T}{2} \mathcal{H}_0(\Gamma, \Gamma_\Delta) - \fint_{Q_B} \mathrm{d} X\int_{\mathbb{R}^3} \mathrm{d} r\; V(r) \, \bigl|\alpha(X,r) - \alpha_*(r) \Psi(X)\bigr|^2. \label{BCS functional-intermediate}
\end{align}
In order to compute the first term on the right side of \eqref{BCS functional-intermediate}, we need Lemma~\ref{BCS functional_identity_Lemma} below. It is the main technical novelty of our trial state analysis and should be compared to the related part in the proof of \cite[Theorem~2]{Hainzl2012}. The main difference between our proof of Lemma~\ref{BCS functional_identity_Lemma} and the relevant parts of the proof of \cite[Theorem~2]{Hainzl2012} is that we use the product representation of the hyperbolic cosine in \eqref{cosh-Product} below instead of a Cauchy integral representation of the function $z \mapsto \ln(1+e^{-z})$. In this way we obtain better decay properties in the subsequent resolvent expansion, which simplifies the analysis considerably.
As already noted above, the admissibility of $\Gamma_{\Delta}$ implies that the difference between the two operators in the first term on the right side of \eqref{BCS functional-intermediate} is weakly locally trace class. We highlight that this is a nontrivial statement because neither of the two operators separately has this property. We also highlight that our proof of Lemma~\ref{BCS functional_identity_Lemma} does not require this as an assumption; it implies the statement independently.
In combination with \eqref{BCS functional-intermediate}, Lemma~\ref{BCS functional_identity_Lemma} below proves Proposition~\ref{BCS functional_identity}. Before we state the lemma, we recall the definitions of the operators $L_{T, \mathbf A, W}$ and $N_{T, \mathbf A, W}$ in \eqref{LTAW_definition} and \eqref{NTAW_definition}, respectively.
\begin{lem}
\label{BCS functional_identity_Lemma}
Let $V\alpha_*\in L^{\nicefrac 65}(\mathbb{R}^3)\cap L^2(\mathbb{R}^3)$. For any $B>0$, any $\Psi\in H_{\mathrm{mag}}^1(Q_B)$, and any $T>0$, the operator
\begin{align*}
\ln\bigl( \cosh\bigl( \frac \beta 2H_\Delta\bigr)\bigr) - \ln \bigl( \cosh\bigl( \frac \beta 2H_0\bigr)\bigr)
\end{align*}
is weakly locally trace class and satisfies
\begin{align}
-\frac 1{2\beta} \Tr_0\Bigl[ \ln\bigl( \cosh\bigl( \frac \beta 2H_\Delta\bigr)\bigr) - \ln \bigl( \cosh\bigl( \frac \beta 2H_0\bigr)\bigr) \Bigr] & \notag\\
&\hspace{-140pt}= -\frac 14\langle \Delta, L_{T, \mathbf A, W}\Delta\rangle + \frac 18 \langle \Delta, N_{T, \mathbf A, W}(\Delta)\rangle + \Tr\mathcal{R}_{T, \mathbf A, W}^{(1)}(\Delta). \label{BCS functional_identity_Lemma_eq2}
\end{align}
The operator $\mathcal{R}_{T, \mathbf A, W}^{(1)}(\Delta)$ is locally trace class and its trace norm satisfies the bound
\begin{align*}
\Vert \mathcal{R}_{T, \mathbf A, W}^{(1)}(\Delta) \Vert_1 &\leq C\; T^{-5} \; B^3 \; \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_B)}^6.
\end{align*}
\end{lem}
\begin{proof}[Proof of Lemma \ref{BCS functional_identity_Lemma}]
We recall the Matsubara frequencies in \eqref{Matsubara_frequencies} and write the hyperbolic cosine in terms of the following product expansion, see \cite[Eq. (4.5.69)]{Handbook},
\begin{align}
\cosh\bigl(\frac \beta 2x\bigr) &= \prod_{k=0}^\infty \Bigl( 1 + \frac{x^2}{\omega_k^2}\Bigr).
\label{cosh-Product}
\end{align}
Since each factor in \eqref{cosh-Product} is at least $1$ and $\cosh(y) \leq \mathrm{e}^{|y|}$ for $y \in \mathbb{R}$, we have
\begin{align*}
0 &\leq \sum_{k= 0}^\infty \ln \bigl( 1 + \frac{x^2}{\omega_k^2}\bigr) = \ln \bigl( \cosh\bigl( \frac \beta 2 x\bigr)\bigr) \leq \frac\beta 2 \; |x|, & x &\in \mathbb{R},
\end{align*}
and accordingly
\begin{align*}
\ln\bigl( \cosh\bigl( \frac{\beta}{2} H_\Delta\bigr)\bigr) = \sum_{k=0}^\infty \ln \bigl( 1+ \frac{H_\Delta^2}{\omega_k^2}\bigr)
\end{align*}
holds in a strong sense on the domain of $|H_\Delta|$. Since $\Delta$ is a bounded operator by Lemma \ref{Schatten_estimate}, the domains of $|H_\Delta|$ and $|H_0|$ coincide. The identity
\begin{align}
\ln\bigl( \cosh\bigl( \frac{\beta}{2} H_\Delta\bigr)\bigr) - \ln\bigl( \cosh\bigl( \frac{\beta}{2} H_0\bigr)\bigr) = \sum_{k=0}^\infty \bigl[ \ln \bigl( \omega_k^2+ H_\Delta^2 \bigr) - \ln \bigl( \omega_k^2 + H_0^2 \bigr) \bigr] \label{BCS functional_identity_Lemma_1}
\end{align}
therefore holds in a strong sense on the domain of $|H_0|$. Elementary arguments show that
\begin{align}
\ln\bigl( \omega^2 + H_\Delta^2\bigr) - \ln\bigl(\omega^2 + H_0^2\bigr) = -\lim_{R\to\infty} \int_\omega^R \mathrm{d} u \; \Bigl[\frac{2u}{u^2 + H_\Delta^2} - \frac{2u }{u^2 + H_0^2}\Bigr] \label{ln-integral}
\end{align}
holds for $\omega>0$ in a strong sense on the domain of $\ln(1+|H_0|)$. Therefore, by \eqref{BCS functional_identity_Lemma_1} and \eqref{ln-integral}, we have
\begin{align}
\ln\bigl( \cosh\bigl(\frac \beta 2 H_\Delta\bigr)\bigr) - \ln\bigl( \cosh\bigl( \frac \beta 2 H_0\bigr)\bigr) & \notag \\
&\hspace{-100pt}= -\i \sum_{k=0}^\infty \int_{\omega_k}^\infty \mathrm{d} u \; \Bigl[\frac{1}{\i u - H_\Delta } - \frac{1}{\i u - H_0} + \frac{1}{\i u + H_\Delta} - \frac{1}{\i u + H_0}\Bigr] \label{lncosh-difference}
\end{align}
in a strong sense on the domain of $|H_0|$. By a slight abuse of notation, we have incorporated the limit in \eqref{ln-integral} into the integral.
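In passing from \eqref{ln-integral} to \eqref{lncosh-difference} we used the partial fraction decomposition
\begin{align*}
\frac{2u}{u^2 + H^2} = \i \, \Bigl[ \frac{1}{\i u - H} + \frac{1}{\i u + H} \Bigr],
\end{align*}
valid for $u > 0$ and self-adjoint $H$, which follows from $(\i u - H)^{-1} + (\i u + H)^{-1} = 2 \i u \, \bigl( (\i u)^2 - H^2 \bigr)^{-1} = -\i \, 2u \, ( u^2 + H^2 )^{-1}$.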
In the next step we use the resolvent expansion
\begin{align}
(z-H_\Delta)^{-1} = (z-H_0)^{-1} + (z-H_0)^{-1} \; (H_\Delta - H_0)\; (z-H_\Delta)^{-1} \label{Resolvent_Equation}
\end{align}
to see that the right side of \eqref{lncosh-difference} equals
\begin{align*}
\mathcal{O}_1 + \mathcal{D}_2 + \mathcal{O}_3 + \mathcal{D}_4 + \mathcal{O}_5 - 2\beta \, \mathcal{R}_{T, \mathbf A, W}^{(1)}(\Delta),
\end{align*}
with two diagonal operators $\mathcal{D}_2$ and $\mathcal{D}_4$, three offdiagonal operators $\mathcal{O}_1$, $\mathcal{O}_3$ and $\mathcal{O}_5$, and a remainder term $\mathcal{R}_{T, \mathbf A, W}^{(1)}(\Delta)$. Here $\delta \coloneqq H_\Delta - H_0$ denotes the offdiagonal part of $H_\Delta$, and the index of the operators reflects the number of $\delta$ matrices appearing in their definition. The diagonal operators $\mathcal{D}_2$ and $\mathcal{D}_4$ are given by
\begin{align*}
\mathcal{D}_2 &:= -\i \sum_{k=0}^\infty\int_{\omega_k}^\infty \mathrm{d} u \; \Bigl[\frac{1}{\i u - H_0}\delta \frac{1}{\i u - H_0}\delta \frac{1}{\i u - H_0} + \frac{1}{\i u + H_0}\delta \frac{1}{\i u + H_0}\delta \frac{1}{\i u + H_0} \Bigr] , \\
\mathcal{D}_4 &:= -\i \sum_{k=0}^\infty\int_{\omega_k}^\infty \mathrm{d} u \; \Bigl[\frac{1}{\i u - H_0}\delta \frac{1}{\i u - H_0}\delta \frac{1}{\i u - H_0}\delta \frac{1}{\i u - H_0}\delta \frac{1}{\i u - H_0} \\
&\hspace{120pt}+ \frac{1}{\i u + H_0}\delta \frac{1}{\i u + H_0}\delta \frac{1}{\i u + H_0}\delta \frac{1}{\i u + H_0}\delta \frac{1}{\i u + H_0}\Bigr]
\end{align*}
and the offdiagonal operators read
\begin{align*}
\mathcal{O}_1 &:= -\i \sum_{k=0}^\infty\int_{\omega_k}^\infty \mathrm{d} u \; \Bigl[\frac{1}{\i u - H_0}\delta \frac{1}{\i u - H_0} + \frac{1}{\i u + H_0}\delta \frac{1}{\i u + H_0}\Bigr], \\
\mathcal{O}_3 &:= -\i \sum_{k=0}^\infty\int_{\omega_k}^\infty \mathrm{d} u \; \Bigl[\frac{1}{\i u - H_0}\delta \frac{1}{\i u - H_0}\delta \frac{1}{\i u - H_0}\delta \frac{1}{\i u - H_0} \\
&\hspace{85pt}+ \frac{1}{\i u + H_0}\delta \frac{1}{\i u + H_0}\delta \frac{1}{\i u + H_0}\delta \frac{1}{\i u + H_0}\Bigr],\\
\mathcal{O}_5 &:= -\i \sum_{k=0}^\infty\int_{\omega_k}^\infty \mathrm{d} u \; \Bigl[\frac{1}{\i u - H_0}\delta \frac{1}{\i u - H_0}\delta \frac{1}{\i u - H_0}\delta \frac{1}{\i u - H_0}\delta \frac{1}{\i u - H_0}\delta \frac{1}{\i u - H_0} \\
&\hspace{85pt}+ \frac{1}{\i u + H_0}\delta \frac{1}{\i u + H_0}\delta \frac{1}{\i u + H_0}\delta \frac{1}{\i u + H_0}\delta \frac{1}{\i u + H_0}\delta \frac{1}{\i u + H_0}\Bigr].
\end{align*}
Since the operators $\mathcal{O}_1$, $\mathcal{O}_3$, and $\mathcal{O}_5$ are offdiagonal, they are weakly locally trace class and their weak local trace equals $0$. We also note that the operator $\mathcal{O}_1$ is not necessarily locally trace class, which is why we need to work with the weak local trace. The operator $\mathcal{R}_{T, \mathbf A, W}^{(1)}(\Delta)$ is defined by
\begin{align*}
\mathcal{R}_{T, \mathbf A, W}^{(1)}(\Delta) & \\
&\hspace{-50pt}:= \frac{\i}{2\beta} \sum_{k=0}^\infty\int_{\omega_k}^\infty \mathrm{d} u \; \Bigl[\frac{1}{\i u - H_0}\delta \frac{1}{\i u - H_0}\delta \frac{1}{\i u - H_0}\delta \frac{1}{\i u - H_\Delta}\delta \frac{1}{\i u - H_0}\delta \frac{1}{\i u - H_0}\delta \frac{1}{\i u - H_0} \\
&\hspace{40pt}+ \frac{1}{\i u + H_0}\delta \frac{1}{\i u + H_0}\delta \frac{1}{\i u + H_0}\delta \frac{1}{\i u + H_\Delta}\delta \frac{1}{\i u + H_0}\delta \frac{1}{\i u + H_0}\delta \frac{1}{\i u + H_0}\Bigr].
\end{align*}
It remains to compute the traces of $\mathcal{D}_2$ and $\mathcal{D}_4$, and to estimate the trace norm of $\mathcal{R}_{T, \mathbf A, W}^{(1)}(\Delta)$. We first consider $\mathcal{D}_2$ and use Hölder's inequality in \eqref{Schatten-Hoelder} to estimate
\begin{align}
\Bigl\Vert \frac{1}{\i u \pm H_0} \delta \frac{1}{\i u \pm H_0} \delta \frac{1}{\i u \pm H_0}\Bigr\Vert_1 &\leq \Bigl\Vert \frac{1}{\i u \pm H_0} \Bigr\Vert_\infty^3 \; \Vert \delta\Vert_2^2 \leq \frac{2}{u^3} \; \Vert \Delta\Vert_2^2.
\end{align}
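Here we used $\Vert (\i u \pm H_0)^{-1} \Vert_\infty \leq u^{-1}$ for $u > 0$, which holds because $H_0$ is self-adjoint, as well as the fact that $\delta$ is the offdiagonal matrix with entries $\Delta$ and $\ov \Delta$, whence
\begin{align*}
\Vert \delta \Vert_2^2 = \Vert \Delta \Vert_2^2 + \Vert \ov \Delta \Vert_2^2 = 2 \, \Vert \Delta \Vert_2^2.
\end{align*}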
Therefore, Lemma \ref{Schatten_estimate} shows that the combination of series and integral defining $\mathcal{D}_2$ converges absolutely in local trace norm.
In particular, $\mathcal{D}_2$ is locally trace class and we may arbitrarily interchange the trace with the sum and the integral to compute its trace. We do this, use the cyclicity of the trace, and obtain
\begin{equation}
\Tr \mathcal{D}_2 = -\i \sum_{k=0}^\infty \int_{\omega_k}^\infty \mathrm{d} u \; \Tr\Bigl[\Bigl(\frac{1}{\i u - H_0}\Bigr)^2 \delta \frac{1}{\i u - H_0}\delta + \Bigl(\frac{1}{\i u + H_0}\Bigr)^2\delta \frac{1}{\i u + H_0}\delta \Bigr]. \label{eq:A7}
\end{equation}
Integration by parts shows
\begin{align*}
\int_{\omega_k}^\infty \mathrm{d} u \; \Bigl(\frac{1}{\i u \pm H_0}\Bigr)^2 \delta \frac{1}{\i u \pm H_0}\delta &= -\i\; \frac{1}{\i \omega_k \pm H_0} \delta \frac{1}{\i \omega_k \pm H_0} \delta \\
&\hspace{80pt}- \int_{\omega_k}^\infty \mathrm{d} u \; \frac{1}{\i u \pm H_0} \delta \Bigl( \frac{1}{\i u \pm H_0}\Bigr)^2 \delta,
\end{align*}
and another application of the cyclicity of the trace yields
\begin{align}
\Tr \int_{\omega_k}^\infty \mathrm{d} u \; \Bigl(\frac{1}{\i u \pm H_0}\Bigr)^2 \delta \frac{1}{\i u \pm H_0}\delta &= -\frac \i 2 \; \Tr \frac{1}{\i \omega_k \pm H_0} \delta \frac{1}{\i \omega_k \pm H_0} \delta. \label{BCS functional_identity_Lemma_2}
\end{align}
Note that
\begin{align}
\frac{1}{\i \omega_k \pm H_0}\, \delta\, \frac{1}{\i \omega_k \pm H_0} \, \delta
&= \begin{pmatrix}
\frac{1}{\i \omega_k \pm \mathfrak{h}_h} \, \Delta \frac{1}{\i \omega_k \mp \ov{\mathfrak{h}_h}} \, \ov \Delta \\ & \frac{1}{\i \omega_k \mp \ov{\mathfrak{h}_h}}\, \ov \Delta \frac{1}{\i \omega_k \pm \mathfrak{h}_h} \, \Delta
\end{pmatrix}. \label{Calculation-entry}
\end{align}
We combine this with \eqref{eq:A7} and \eqref{BCS functional_identity_Lemma_2} and summarize the cases $\pm$ into a single sum over $n\in \mathbb{Z}$. This yields
\begin{align*}
-\frac{1}{2\beta}\Tr \mathcal{D}_2 &= \frac{1}{2\beta}\sum_{n\in \mathbb{Z}} \Bigl\langle \Delta, \frac{1}{\i\omega_n - \mathfrak{h}_h} \Delta \frac{1}{\i\omega_n + \ov{\mathfrak{h}_h}}\Bigr\rangle = - \frac 14\langle \Delta, L_{T, \mathbf A, W}\Delta\rangle,
\end{align*}
where $L_{T, \mathbf A, W}$ is the operator defined in \eqref{LTAW_definition}.
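Combining the two signs into a single sum over $n \in \mathbb{Z}$ is possible because the Matsubara frequencies in \eqref{Matsubara_frequencies} obey the symmetry (assuming the convention $\omega_n = \pi T (2n+1)$)
\begin{align*}
\omega_{-n-1} = \pi T \, \bigl( 2(-n-1)+1 \bigr) = -\pi T \, (2n+1) = -\omega_n, \qquad n \in \mathbb{Z},
\end{align*}
so the contribution of the sign $+$ at frequency $\omega_k$ equals that of the sign $-$ at frequency $\omega_{-k-1}$.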
We argue as above to see that the integrand in the series and integral defining $\mathcal{D}_4$ is bounded by $C \Vert \Delta\Vert_4^4 \, u^{-5}$. Moreover, we have $\Vert \Delta\Vert_4^4 \leq CB^2 \Vert V\alpha_*\Vert_{\nicefrac 43}^4 \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_B)}^4$ by \eqref{Magnetic_Sobolev} and Lemma \ref{Schatten_estimate}. Therefore, the series and integral defining $\mathcal{D}_4$ also converge absolutely in local trace norm. The trace of $\mathcal{D}_4$ is computed similarly to that of $\mathcal{D}_2$. With $N_{T, \mathbf A, W}$ defined in \eqref{NTAW_definition}, the result reads
\begin{align}
-\frac{1}{2\beta}\Tr \mathcal{D}_4 &= \frac 18\langle \Delta, N_{T, \mathbf A, W}(\Delta)\rangle. \label{NTB_size_bound_2}
\end{align}
In the case of $\mathcal{R}_{T, \mathbf A, W}^{(1)}(\Delta)$, we bound the trace norm of the operator inside the integral by $u^{-7}\Vert \Delta\Vert_6^6$. Using \eqref{Magnetic_Sobolev} and Lemma~\ref{Schatten_estimate}, we estimate the second factor by a constant times $\Vert V\alpha_*\Vert_{\nicefrac 65}^6 B^{-3}\Vert \Pi\Psi\Vert_2^6 \leq CB^3 \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_B)}^6$. Finally, integration over $u$ yields the term $\frac 16 \, \pi^{-6} \, T^{-6} \, (2k+1)^{-6}$, which is summable in $k$. This proves the claimed bound for the trace norm of $\mathcal{R}_{T, \mathbf A, W}^{(1)}(\Delta)$.
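Here the $u$-integration can be carried out explicitly: with $\omega_k = \pi T (2k+1)$, we have
\begin{align*}
\int_{\omega_k}^\infty \mathrm{d} u \; u^{-7} = \frac{1}{6 \, \omega_k^6} = \frac 16 \, \pi^{-6} \, T^{-6} \, (2k+1)^{-6}.
\end{align*}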
\end{proof}
\subsection{Proof of Theorem \ref{Calculation_of_the_GL-energy}}
\label{Calculation_of_the_GL-energy_proof_Section}
Before we start with the proof of Theorem~\ref{Calculation_of_the_GL-energy}, we briefly mention the two main steps. In the first step we compute $\langle \Delta, L_{T, \mathbf A, W} \Delta\rangle$. To that end, we decompose the operator $L_{T, \mathbf A, W}$ into several increasingly simpler parts, which allows us to extract the quadratic terms in the GL functional. The related analysis can be found in Sections~\ref{sec:Q1}--\ref{Summary_quadratic_terms_Section}. The quartic term in the GL functional emerges from $\langle \Delta, N_{T, \mathbf A, W}(\Delta) \rangle$. In the second step we study the nonlinear operator $N_{T, \mathbf A, W}$ and introduce comparable steps of simplification as for $L_{T, \mathbf A, W}$. The related analysis starts in Section~\ref{sec:NT}. The relation to the existing literature will be discussed mostly in Sections~\ref{sec:repLT}, \ref{Approximation_of_LTA^W_Section}, and \ref{sec:NT} after the relevant mathematical objects have been introduced.
\input{4_Proofs/4.3_Calculation_of_GL-energy/2_decomposition_LTAW}
\input{4_Proofs/4.3_Calculation_of_GL-energy/3_analysis_LTA}
\input{4_Proofs/4.3_Calculation_of_GL-energy/4_analysis_LTAW_electric_part}
\input{4_Proofs/4.3_Calculation_of_GL-energy/5_summary_quadratic_terms}
\input{4_Proofs/4.3_Calculation_of_GL-energy/6_analysis_NTAW}
\subsection{Magnetic resolvent estimates}
\label{DHS2:Phase_approximation_method_Section}
In this section we provide bounds for the resolvent kernel
\begin{align}
G^z_h(x,y) &\coloneqq \frac{1}{z - (-\i \nabla + \mathbf A_h)^2 + \mu}(x,y), & x,y &\in \mathbb{R}^3 \label{Ghz_definition}
\end{align}
of the magnetic Laplacian that will be applied extensively in the proofs of Proposition~\ref{Structure_of_alphaDelta} and Theorem~\ref{Calculation_of_the_GL-energy}. Our analysis is based on gauge-invariant perturbation theory in the spirit of Nenciu, see \cite[Section~V]{Nenciu2002}, and generalizes the analysis for the constant magnetic field in \cite[Section~2]{Hainzl2017} and \cite[Section~4.4.1]{DeHaSc2021}. In the case of a bounded magnetic vector potential, versions of some of our results appeared in \cite{BdGtoGL}.
We introduce the non-integrable phase factor, also called the Wilson line, by
\begin{align}
\Phi_\mathbf A(x,y) \coloneqq -\int_y^x \mathbf A(u) \cdot \mathrm{d} u \coloneqq -\int_0^1 \mathrm{d} t\; \mathbf A(y + t(x-y))\cdot (x-y). \label{PhiA_definition}
\end{align}
In the case of the constant magnetic field the right side of \eqref{PhiA_definition} reduces to $\frac{\mathbf B}{2}\cdot(x \wedge y)$. We also define the gauge-invariant kernel $g_h^z(x,y)$ via the equation
\begin{align}
G_h^z(x,y) = & \, \mathrm{e}^{\i \Phi_{\mathbf A_h}(x,y)} g_h^z(x,y), & x,y &\in \mathbb{R}^3. \label{ghz_definition}
\end{align}
It should be compared to the translation-invariant (and gauge-invariant) kernel $g_B^z(x-y)$ introduced in \cite[Eq.~(4.27)]{Hainzl2017} for the constant magnetic field. Its gauge-invariance makes it the natural starting point for a perturbative analysis.
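In this connection we also record the computation behind the formula stated below \eqref{PhiA_definition}. Assuming the symmetric gauge $\mathbf A_{\mathbf B}(x) = \frac 12 \mathbf B \wedge x$ for the constant magnetic field $\mathbf B$, we have
\begin{align*}
\Phi_{\mathbf A_{\mathbf B}}(x,y) = -\frac 12 \int_0^1 \mathrm{d} t \; \bigl( \mathbf B \wedge ( y + t(x-y) ) \bigr) \cdot (x-y) = -\frac 14 \, \bigl( \mathbf B \wedge (x+y) \bigr) \cdot (x-y) = \frac{\mathbf B}{2} \cdot ( x \wedge y ),
\end{align*}
where the last step uses the triple product identity $(\mathbf B \wedge c)\cdot d = \mathbf B \cdot (c \wedge d)$ and $(x+y) \wedge (x-y) = -2 \, x \wedge y$.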
The integral kernel of the operator $(z+\Delta + \mu)^{-1}$, where $\Delta$ denotes the Laplacian, will be denoted by $g_0^z(x-y)$. The main result of this subsection is the following proposition.
\begin{prop}
\label{gh-g_decay}
Assume that $\mathbf A = \mathbf A_{e_3} + A$ with $A \in W^{3,\infty}(\mathbb{R}^3,\mathbb{R}^3)$. For $t, \omega\in \mathbb{R}$ let
\begin{align}
f(t, \omega) \coloneqq \frac{|\omega| + |t + \mu|}{(|\omega| + (t + \mu)_-)^2}, \label{gh-g_decay_f}
\end{align}
where $x_- \coloneqq -\min\{x,0\}$. For any $a\geq 0$, there are constants $\delta_a , C_a > 0$ such that for all $t, \omega\in \mathbb{R}$ and for all $h \geq 0$ with $f(t, \omega) \, h^2 \leq \delta_a$ there are ($h$-dependent) even $L^1(\mathbb{R}^3)$-functions $\rho^{\i\omega + t}$, $\rho_\nabla^{\i\omega + t}$, $\tau^{\i\omega + t}$, and $\tau_\nabla^{\i\omega + t}$ such that
\begin{align}
|g_h^{\i\omega + t}(x,y)| &\leq \rho^{\i\omega + t} (x-y), \notag\\
|\nabla_x g_h^{\i\omega + t}(x,y)| &\leq \rho_\nabla^{\i\omega + t} (x-y), \notag \\
|\nabla_y g_h^{\i\omega + t}(x,y)| &\leq \rho_\nabla^{-\i\omega + t} (x-y), \label{gh-g_decay_eq1}
\end{align}
as well as
\begin{align}
|g_h^{\i\omega + t}(x,y) - g_0^{\i\omega + t}(x - y)| &\leq \tau^{\i\omega + t} (x-y), \notag \\
| \nabla_x g_h^{\i\omega + t}(x,y) - \nabla_x g_0^{\i \omega + t}(x - y)| &\leq \tau_\nabla^{\i\omega + t} (x-y), \notag \\
|\nabla_y g_h^{\i\omega + t}(x,y) - \nabla_y g_0^{\i \omega + t}(x - y)| &\leq \tau_\nabla^{-\i\omega + t} (x-y). \label{gh-g_decay_eq2}
\end{align}
Furthermore, we have the estimates
\begin{align}
\Vert \, |\cdot|^a \rho^{\i\omega + t} \Vert_1 &\leq C_a \, f(t, \omega)^{1 + \frac a2}, \notag \\
\Vert \, |\cdot|^a \rho_\nabla^{\i\omega + t} \Vert_1 &\leq C_a \, f(t,\omega)^{\frac 12 + \frac a2} \, \Bigl[ 1 + \frac{|\omega| + |t + \mu|}{|\omega| + (t + \mu)_-} \Bigr], \label{gh-g_decay_eq3}
\end{align}
and
\begin{align}
\Vert \, |\cdot|^a \tau^{\i\omega + t} \Vert_1 &\leq C_a \, h^3 \, f(t, \omega)^{\frac 52 + \frac a2}, \notag \\
\Vert \, |\cdot|^a \tau_\nabla^{\i\omega + t} \Vert_1 &\leq C_a \, h^3 \, f(t, \omega)^{2 + \frac a2} \Bigl[ 1 + \frac{|\omega| + |t + \mu|}{|\omega| + (t + \mu)_-} \Bigr]. \label{gh-g_decay_eq4}
\end{align}
\end{prop}
\begin{bem}
The bounds in the above proposition should be compared to those for $g_B^z(x)$ in \cite[Lemma~10]{Hainzl2017} (estimates without gradient) and \cite[Lemma~4.5]{DeHaSc2021} (estimates with gradient). Although the kernel $g_h^z(x,y)$ defined in \eqref{ghz_definition} is not translation-invariant, $|g_h^z(x,y)|$, $|g_h^z(x,y) - g_0^z(x-y)|$, and the same terms with a gradient can be bounded by translation-invariant kernels. Moreover, these translation-invariant kernels satisfy $L^1$-norm bounds that are mostly of the same quality as those obtained for the kernels $|g_B^z(x)|$, $|g_B^z(x) - g_0^z(x)|$, and the same terms with a gradient in \cite{Hainzl2017,DeHaSc2021}. We highlight that, in comparison to \cite[Eq.~(4.34)]{DeHaSc2021}, we lose a power of the small parameter $h$ in the estimate in \eqref{gh-g_decay_eq4}. This is due to the second term in the bracket in \eqref{Thz_definition} below and it is in accordance with comparable bounds in \cite{Hainzl2012} and \cite{Hainzl2014}. The fact that the above kernels can be bounded from above by translation-invariant kernels is an important ingredient for the proofs of Proposition~\ref{Structure_of_alphaDelta} and Theorem~\ref{Calculation_of_the_GL-energy}.
\end{bem}
Before we give the proof of Proposition~\ref{gh-g_decay} we provide two lemmas. The first lemma concerns $L^1$-norm bounds for the kernel $g_0^z$ and its gradient. Its proof can be found in \cite[Lemma 4.4]{DeHaSc2021}. The bound for $g_0^z$ (but not the one for $\nabla g_0^z$) appeared previously in \cite[Lemma~9]{Hainzl2014}.
\begin{lem}
\label{g_decay}
Let $a > -2$. There is a constant $C_a >0$ such that for $t,\omega\in \mathbb{R}$, we have
\begin{align}
\left \Vert \, |\cdot|^a g_0^{\i \omega + t}\right\Vert_1 &\leq C_a \; f(t, \omega)^{1+ \frac a2}
\label{g_decay_eq1}
\end{align}
with $f(t, \omega)$ in \eqref{gh-g_decay_f}. Furthermore, for any $a > -1$, there is a constant $C_a >0$ with
\begin{align}
\left \Vert \, |\cdot|^a \nabla g_0^{\i\omega + t} \right\Vert_1 \leq C_a \; f(t, \omega)^{\frac 12 + \frac a2} \; \Bigl[ 1 + \frac{|\omega| + |t+ \mu|}{|\omega| + (t + \mu)_-}\Bigr]. \label{g_decay_eq2}
\end{align}
\end{lem}
The second lemma provides us with formulas for the gradient of the function $\Phi_\mathbf A(x,y)$ defined in \eqref{PhiA_definition} with respect to $x$ and $y$.
\begin{lem}
\label{PhiA_derivative}
Assume that $\mathbf A = \mathbf A_{e_3} + A$ with $A \in W^{2,\infty}(\mathbb{R}^3,\mathbb{R}^3)$. Then we have
\begin{align}
\nabla_x \Phi_\mathbf A(x,y) &= -\mathbf A(x) + \tilde \mathbf A(x,y), & \nabla_y \Phi_\mathbf A(x,y) &= \mathbf A(y) - \tilde \mathbf A(y,x), \label{PhiA_derivative_eq1}
\end{align}
where
\begin{align}
\tilde \mathbf A (x,y) \coloneqq \int_0^1 \mathrm{d}t \; t \curl \mathbf A(y + t(x - y)) \wedge (x-y) \label{Atilde_definition}
\end{align}
is the transversal Poincar\'e gauge relative to $y$.
\end{lem}
\begin{bem}
The function $\Phi_\mathbf A(x,y)$ is a gauge transformation that relates $\mathbf A(x)$ and $\tilde \mathbf A(x, y)$.
\end{bem}
\begin{proof}[Proof of Lemma~\ref{PhiA_derivative}]
From Morrey's inequality we know that $\curl \mathbf A$ is Lipschitz continuous, and hence the line integral in \eqref{Atilde_definition} is well defined. For two vector fields $v$ and $w$ we have
\begin{align}
\nabla (v \cdot w) = (v\cdot \nabla)w + (w\cdot \nabla) v + v\wedge \curl w + w\wedge \curl v. \label{PhiA_derivative_4}
\end{align}
We apply this equality for fixed $y\in \mathbb{R}^3$ to
\begin{align*}
v(x) &= \int_0^1\, \mathrm{d}t \; \mathbf A(y + t(x-y)), & w(x) &= x-y.
\end{align*}
Our definition implies $\curl w = 0$ and we find that
\begin{align}
-\nabla_x \Phi_\mathbf A(x,y) &= \Bigl( \int_0^1 \mathrm{d}t \; \mathbf A(y + t(x-y)) \cdot \nabla_x\Bigr) (x-y) \notag \\
&\hspace{-60pt} + \left( (x-y) \cdot \nabla_x\right) \int_0^1 \mathrm{d}t \; \mathbf A(y + t(x-y)) + (x-y) \wedge \int_0^1 \mathrm{d}t \; t \, \curl \mathbf A(y + t(x-y)). \label{PhiA_derivative_1}
\end{align}
The first term on the right side equals
\begin{align}
\sum_{i=1}^3 \int_0^1\mathrm{d}t \; \mathbf A_i(y + t(x-y)) \, \partial_i (x-y) &= \int_0^1 \mathrm{d}t \; \mathbf A(y + t(x-y)). \label{PhiA_derivative_2}
\end{align}
To rewrite the second term on the right side of \eqref{PhiA_derivative_1}, we use integration by parts and find
\begin{align}
\left( (x-y)\cdot \nabla_x\right) \int_0^1 \mathrm{d}t \; \mathbf A(y + t(x-y)) &= \int_0^1 \mathrm{d}t \; t \, \frac{\mathrm{d}}{\mathrm{d}t} \mathbf A(y + t(x-y)) \notag \\
&\hspace{-50pt} = t \, \mathbf A(y + t(x-y))\Big|_0^1 - \int_0^1 \mathrm{d}t \; \mathbf A(y + t(x-y)). \label{PhiA_derivative_3}
\end{align}
Therefore, the sum of the terms in \eqref{PhiA_derivative_2} and \eqref{PhiA_derivative_3} equals $\mathbf A(x)$. Since the last term on the right side of \eqref{PhiA_derivative_1} equals $-\tilde \mathbf A(x,y)$, this proves the first equation in \eqref{PhiA_derivative_eq1}. The second equation follows from $\Phi_\mathbf A(x,y) = -\Phi_\mathbf A(y,x)$.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{gh-g_decay}]
We use the abbreviation $z = \i \omega + t$ throughout the proof. In the first step we express $G_h^z(x,y)$ in \eqref{Ghz_definition} in terms of the kernel
\begin{align}
\tilde G_h^z(x,y) \coloneqq \mathrm{e}^{\i \Phi_{\mathbf A_h}(x,y)} \, g_0^z(x-y). \label{Gtildehz_definition}
\end{align}
From \eqref{PhiA_derivative_eq1} we know that
\begin{align}
(-\i \nabla_x + \mathbf A_h(x)) \; \mathrm{e}^{\i \Phi_{\mathbf A_h}(x,y)} = \mathrm{e}^{\i \Phi_{\mathbf A_h}(x,y)}\; (-\i\nabla_x + \tilde \mathbf A_h(x,y)), \label{PhiA_Magnetic_Momentum_Action}
\end{align}
where $\tilde \mathbf A(x,y)$ in \eqref{Atilde_definition} is our vector potential in Poincar\'e gauge. Furthermore, a short computation shows that \eqref{PhiA_Magnetic_Momentum_Action} implies the operator equation
\begin{align}
(z - (-\i \nabla + \mathbf A_h)^2 + \mu) \tilde G_h^z = \mathbbs 1 - T_h^z, \label{Ghz_Thz_relation}
\end{align}
where $T_h^z$ is the operator defined by the integral kernel
\begin{align}
T_h^z (x,y) \coloneqq \mathrm{e}^{\i \Phi_{\mathbf A_h}(x,y)} \bigl( 2 \, \tilde \mathbf A_h(x,y) \cdot (-\i \nabla_x) -\i \divv_x \tilde \mathbf A_h(x,y) + |\tilde \mathbf A_h(x,y)|^2 \bigr) g_0^z(x-y). \label{Thz_definition}
\end{align}
Since $g_0^z$ is a radial function and the vector $\tilde \mathbf A(x,y)$ is perpendicular to $x-y$, the first term inside the brackets in \eqref{Thz_definition} vanishes. The operator $T_h^z$ also appears in \cite{Nenciu2002}.
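For the reader's convenience, we sketch the short computation behind \eqref{Ghz_Thz_relation}. Applying \eqref{PhiA_Magnetic_Momentum_Action} twice and expanding the square, we find
\begin{align*}
(-\i \nabla_x + \mathbf A_h(x))^2 \, \tilde G_h^z(x,y) &= \mathrm{e}^{\i \Phi_{\mathbf A_h}(x,y)} \, \bigl(-\i \nabla_x + \tilde \mathbf A_h(x,y)\bigr)^2 \, g_0^z(x-y) \\
&\hspace{-60pt}= \mathrm{e}^{\i \Phi_{\mathbf A_h}(x,y)} \bigl( -\Delta_x + 2\, \tilde \mathbf A_h(x,y) \cdot (-\i \nabla_x) - \i \divv_x \tilde \mathbf A_h(x,y) + |\tilde \mathbf A_h(x,y)|^2 \bigr) g_0^z(x-y).
\end{align*}
Since $g_0^z$ is the integral kernel of $(z + \Delta + \mu)^{-1}$, that is, $(z + \Delta_x + \mu) \, g_0^z(x-y) = \delta(x-y)$, this yields \eqref{Ghz_Thz_relation}.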
We claim that
\begin{align}
|T_h^z(x,y) | &\leq M_\mathbf A \, h^3 \; \eta^z(x-y) \label{Thz_boundedness_1}
\end{align}
holds, where $\eta^z$ is the ($h$-dependent) function
\begin{align}
\eta^z(x) &\coloneqq \bigl( |x| + \Vert \curl \mathbf A\Vert_\infty^{\nicefrac 12} \, h \, |x|^2\bigr) \, |g_0^z(x)| \label{hz_definition}
\end{align}
and
\begin{align}
M_\mathbf A \coloneqq \max\bigl\{ \Vert \curl(\curl \mathbf A) \Vert_\infty \, , \, \Vert \curl \mathbf A \Vert_\infty^{\nicefrac 32} \bigr\}. \label{MA_definition}
\end{align}
We note that the bound in \eqref{Thz_boundedness_1} holds as an equality with a similar function on the right side in the case of a constant magnetic field, see the proof of Lemma~10 in \cite{Hainzl2017}.
To prove \eqref{Thz_boundedness_1}, we first derive a bound for $|\divv_x \tilde \mathbf A(x,y)|$. For two vector fields $v$ and $w$ we have
\begin{align*}
\divv (v \wedge w) = \curl (v) \cdot w - \curl (w) \cdot v.
\end{align*}
We apply the above equality with the choice $v(x) = \curl \mathbf A(y + t(x-y))$, $w(x) = x-y$ and use $\curl w =0$ to write
\begin{align*}
\divv_x \bigl(\curl \mathbf A(y+t(x-y)) \wedge (x-y)\bigr) &= t \; \curl(\curl \mathbf A) (y + t (x-y)) \cdot (x-y).
\end{align*}
Since $\curl(\curl \mathbf A)$ is bounded, which follows from $A \in W^{3,\infty}(\mathbb{R}^3,\mathbb{R}^3)$, we conclude that
\begin{align}
|\divv_x \tilde \mathbf A(x,y)| &\leq \int_0^1\mathrm{d}t \; t^2\, | \curl (\curl \mathbf A)(y + t(x-y))\cdot (x-y)| \notag\\
&\leq \Vert \curl (\curl \mathbf A) \Vert_\infty \; |x-y|. \label{Thz_boundedness_2}
\end{align}
We also have
\begin{align}
|\tilde \mathbf A(x,y)| &\leqslant \int_0^1 \mathrm{d}t \; t \; |\curl \mathbf A(y + t(x-y)) \wedge (x-y)| \notag \\
&\leq \Vert \curl \mathbf A\Vert_\infty \, |x-y|. \label{Thz_boundedness_3}
\end{align}
In combination with $\Vert \curl(\curl \mathbf A_h)\Vert_\infty \leq M_\mathbf A h^3$ and $\Vert \curl \mathbf A_h\Vert_\infty^2 \leq M_\mathbf A \, \Vert \curl \mathbf A\Vert_\infty^{\nicefrac 12} h^4$, this proves \eqref{Thz_boundedness_1}.
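Spelled out, the combination of \eqref{Thz_boundedness_2} and \eqref{Thz_boundedness_3} with the two scaling bounds reads
\begin{align*}
|T_h^z(x,y)| &\leq \bigl( \Vert \curl (\curl \mathbf A_h) \Vert_\infty \, |x-y| + \Vert \curl \mathbf A_h \Vert_\infty^2 \, |x-y|^2 \bigr) \, |g_0^z(x-y)| \\
&\leq M_\mathbf A \, h^3 \, \bigl( |x-y| + \Vert \curl \mathbf A \Vert_\infty^{\nicefrac 12} \, h \, |x-y|^2 \bigr) \, |g_0^z(x-y)| = M_\mathbf A \, h^3 \, \eta^z(x-y),
\end{align*}
where we also used that the first term inside the brackets in \eqref{Thz_definition} does not contribute.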
Next, we have a closer look at the function $\eta^z$. Lemma~\ref{g_decay} and the assumption $f(t, \omega) \, h^2 \leq \delta_a$ imply the bound
\begin{align}
\Vert \, |\cdot|^a \eta^z\Vert_1 \leq C_a \; f(t, \omega)^{\frac 32 + \frac a2} \bigl[ 1 + h \, f(t,\omega)^{\nicefrac 12}\bigr] \leq C_a \, f(t, \omega)^{\frac 32 + \frac a2}.
\label{eq:boundL1hz}
\end{align}
In particular,
\begin{align}
M_\mathbf A \, h^3 \, \Vert \eta^z\Vert_1 \leq C \, M_\mathbf A \, h^3 \, f(t,\omega)^{\frac 32} \leq \frac 12 \label{gh-g_decay_1}
\end{align}
for all allowed $t, \omega$ and $h$ provided $\delta_a$ is chosen small enough. The bound in \eqref{Thz_boundedness_1} and an application of Young's inequality therefore show that the operator norm of $T_h^z$ satisfies
\begin{align}
\Vert T_h^z\Vert_\infty \leq M_\mathbf A \, h^3 \; \Vert \eta^z \Vert_1 \leqslant \frac{1}{2}. \label{Thz_boundedness_eq2}
\end{align}
We use \eqref{Ghz_Thz_relation} and this bound to write the resolvent of the magnetic Laplacian as
\begin{align}
\frac 1{z - (-\i \nabla + \mathbf A_h)^2 + \mu} = \tilde G_h^z\; \frac{1}{1-T_h^z} = \tilde G_h^z\; \sum_{j=0}^\infty \bigl(T_h^z\bigr)^j. \label{Neumann-series}
\end{align}
This finishes the first step of our proof. In the second step we use \eqref{Neumann-series} to prove the claimed bounds for the integral kernel $g_h^z(x,y)$. Our first goal is to prove the bounds for $g_h^z(x,y)$ without a gradient.
In the following we use the notation $\mathcal{S}_h^z = \sum_{j=1}^\infty ( T_h^z )^j$. Eq.~\eqref{Neumann-series} allows us to write the kernel $g_h^z(x,y)$ as
\begin{align}
g_h^z(x,y) = g_0^z(x-y) + \mathrm{e}^{-\i \Phi_{\mathbf A_h}(x,y)} \int_{\mathbb{R}^3} \mathrm{d} u \; \mathrm{e}^{\i \Phi_{\mathbf A_h}(x, u)} g_0^z(x-u) \; \mathcal{S}_h^z(u,y). \label{GAz-GtildeAz_decay_3}
\end{align}
We use \eqref{Thz_boundedness_1} to bound the integral kernel of the operator $\mathcal{S}_h^z$ by
\begin{equation}
| \mathcal{S}_h^z (x,y) | \leqslant \sum_{j=1}^{\infty} (M_{\mathbf A} h^3)^{j} (\eta^z)^{*j}(x-y) \eqqcolon s^z(x-y),
\label{tau_definition}
\end{equation}
where $(\eta^z)^{*j}$ denotes the $j$-fold convolution of $\eta^z$ with itself. An application of the inequality
\begin{align}
|x_1 + \cdots + x_j|^a \leq j^{(a-1)_+} \bigl(|x_1|^a + \cdots + |x_j|^a \bigr) \label{DHS2:convexity}
\end{align}
with $a \geq 0$ and $x_+ = \max\{ 0,x \}$ allows us to see that the function $s^z$ satisfies the pointwise bound
\begin{align*}
|x|^a \, s^z(x) \leq \sum_{j=1}^\infty (M_\mathbf A h^3)^j \sum_{m=1}^j j^{(a-1)_+} \, \eta^z * \cdots * \bigl( |\cdot|^a \eta^z\bigr) * \cdots * \eta^z(x).
\end{align*}
Here, $|\cdot|^a \eta^z$ appears in the $m$\tho\ slot. An application of \eqref{eq:boundL1hz}, \eqref{gh-g_decay_1}, and Young's inequality therefore implies
\begin{align}
\int_{\mathbb{R}^3} \mathrm{d} x \ |x|^a s^z(x) &\leq M_{\mathbf A} h^3 \, \Vert \, |\cdot|^a \eta^z\Vert_1 \sum_{j=1}^\infty \frac{j^{1 + (a-1)_+}}{2^{j-1}} \leq C_a \, h^3 \, f(t, \omega)^{\frac 32 + \frac a2}. \label{gh-g_decay_2}
\end{align}
Let us also define the functions
\begin{align*}
\rho^z(x) &\coloneqq |g_0^z(x)| + |g_0^z| \ast s^z(x), & \tau^z(x) &\coloneqq |g_0^z| \ast s^z(x).
\end{align*}
From \eqref{GAz-GtildeAz_decay_3} we know that
\begin{align*}
|g_h^z(x,y)| &\leqslant \rho^z(x-y), & |g_h^z(x,y) - g_0^z(x-y)| &\leqslant \tau^z(x-y).
\end{align*}
The claimed bounds for the $L^1(\mathbb{R}^3)$-norms of $\rho^z$ and $\tau^z$ in \eqref{gh-g_decay_eq3} and \eqref{gh-g_decay_eq4} follow from Lemma~\ref{g_decay}, \eqref{tau_definition}, and \eqref{gh-g_decay_2}. It remains to prove the bounds involving a gradient.
An application of Lemma~\ref{PhiA_derivative} and \eqref{Thz_boundedness_3} shows
\begin{align*}
|\nabla_x \mathrm{e}^{-\i \Phi_\mathbf A(x,y)} \mathrm{e}^{\i \Phi_\mathbf A(x,u)}| \leq |\tilde \mathbf A(x,y)| + |\tilde \mathbf A(x,u)| \leqslant C h^2 \left( |x-y| + |x -u| \right).
\end{align*}
In combination with \eqref{GAz-GtildeAz_decay_3} and \eqref{tau_definition}, this implies
\begin{equation*}
|\nabla_x g_h^z(x,y)| \leq |\nabla g_0^z(x-y)| + C h^2 \int_{\mathbb{R}^3} \mathrm{d} u \left( |x-u| + |u-y| \right) |g_0^z(x-u)| s^z(u-y).
\end{equation*}
We denote the right side of the above inequality by $\rho_{\nabla}^z(x-y)$. The claimed bound for $\rho_{\nabla}^z$ follows immediately from those for $g_0^z$ and $s^z$, see Lemma~\ref{g_decay} and \eqref{gh-g_decay_2}, and the assumption $f(t, \omega) \, h^2 \leq \delta_a$. A bound for $|\nabla_x g_h^z(x,y) - \nabla g_0^z(x-y)|$ can be obtained similarly. The bounds for $|\nabla_y g_h^z(x,y)|$ and $|\nabla_y g_h^z(x,y) - \nabla g_0^z(x-y)|$ can be obtained when we use the identity $G_h^z(x,y) = \ov{G_h^{\ov z}(y,x)}$. This proves Proposition~\ref{gh-g_decay}.
\end{proof}
\subsection{Proof of Lemma \ref{Gamma_Delta_admissible}}
\label{sec:proofofadmissibilitya}
Let us recall the definition of $\Gamma_\Delta$ in \eqref{GammaDelta_definition}, from which we infer that it is a gauge-periodic generalized fermionic one-particle density matrix. Consequently, we only need to verify the trace class condition in \eqref{Gamma_admissible}.
We use the identity $(\exp(x)+1)^{-1} = (1-\tanh(x/2))/2$ to write $\Gamma_\Delta$ as
\begin{align}
\Gamma_\Delta &= \frac 12 - \frac 12 \tanh\bigl( \frac \beta 2 H_\Delta\bigr). \label{GammaDelta_tanh_relation1}
\end{align}
Let us also recall the Mittag--Leffler series expansion
\begin{align}
\tanh\bigl( \frac \beta 2 z\bigr) &= -\frac{2}{\beta} \sum_{n\in \mathbb{Z}} \frac{1}{\i\omega_n - z}, \label{tanh_Matsubara}
\end{align}
see e.g. \cite[Eq. (3.12)]{DeHaSc2021}. Its convergence becomes manifest by combining the $+n$ and $-n$ terms. When we use \eqref{GammaDelta_tanh_relation1} and the resolvent identity
\begin{align}
(z- T)^{-1} = (z-S)^{-1} + (z-T)^{-1} \; (S - T)\; (z- S)^{-1} \label{Resolvent_Equation}
\end{align}
for two operators $S$ and $T$, we find
\begin{align}
\Gamma_\Delta = \frac 12 - \frac 12 \tanh\bigl( \frac \beta 2 H_\Delta\bigr) = \frac 12 + \frac{1}{\beta} \sum_{n\in \mathbb{Z}} \frac{1}{\i \omega_n - H_\Delta} = \Gamma_0 + \mathcal{O} + \mathcal{Q}_{T,\mathbf A, W}(\Delta). \label{alphaDelta_decomposition_1}
\end{align}
Here $\Gamma_0$ denotes the normal state in \eqref{Gamma0} and
\begin{align}
\mathcal{O} &\coloneqq \frac 1\beta \sum_{n\in \mathbb{Z}} \frac{1}{ \i \omega_n - H_0} \delta \frac{1}{ \i \omega_n - H_0}, \notag \\
\mathcal{Q}_{T,\mathbf A, W}(\Delta) &\coloneqq \frac 1\beta \sum_{n\in \mathbb{Z}} \frac{1}{ \i \omega_n - H_0} \delta\frac{1}{ \i \omega_n - H_0} \delta \frac{1}{ \i \omega_n - H_\Delta} \label{alphaDelta_decomposition_2}
\end{align}
with $\delta$ in \eqref{HDelta_definition}.
Since the diagonal components of the operator $\mathcal{O}$ equal zero, this term does not contribute to the $11$-component $\gamma_{\Delta}$ of $\Gamma_{\Delta}$. In the following, we use the notation $\pi = - \i \nabla + \mathbf A_{\mathbf B}$. To see that $(1 + \pi^2) [\mathcal{Q}_{T, \mathbf A, W}(\Delta)]_{11}$ is locally trace class, we use
\begin{align*}
\frac{1}{\i \omega_n \pm H_0}\, \delta\, \frac{1}{\i \omega_n \pm H_0} \, \delta
&= \begin{pmatrix}
\frac{1}{\i \omega_n \pm \mathfrak{h}_{\mathbf A, W}} \, \Delta \frac{1}{\i \omega_n \mp \ov{\mathfrak{h}_{\mathbf A, W}}} \, \ov \Delta & 0 \\ 0 & \frac{1}{\i \omega_n \mp \ov{\mathfrak{h}_{\mathbf A, W}}}\, \ov \Delta \frac{1}{\i \omega_n \pm \mathfrak{h}_{\mathbf A, W}} \, \Delta
\end{pmatrix}
\end{align*}
to write it as
\begin{align*}
\bigl[ \mathcal{Q}_{T, \mathbf A, W}(\Delta)\bigr]_{11} = \frac 1\beta \sum_{n\in \mathbb{Z}} \frac{1}{\i\omega_n - \mathfrak{h}_{\mathbf A, W}} \, \Delta \, \frac{1}{\i\omega_n + \ov{\mathfrak{h}_{\mathbf A, W}}} \, \ov \Delta \, \Bigl[ \frac{1}{\i\omega_n - H_\Delta}\Bigr]_{11}.
\end{align*}
The operator $\mathfrak{h}_{\mathbf A, W}$ is defined in \eqref{hfrakAW_definition}. An application of Hölder's inequality in \eqref{Schatten-Hoelder} shows that the local trace norm of the term inside the sum is bounded by
\begin{align*}
\Bigl\Vert (1 + \pi^2) \frac{1}{\i \omega_n - \mathfrak{h}_{\mathbf A, W}}\Bigr\Vert_\infty \; \frac{1}{|\omega_n|^2} \; \Vert \Delta\Vert_2^2.
\end{align*}
Using the Cauchy--Schwarz inequality, we see that $\mathfrak{h}_{\mathbf A, W} \leqslant C (1+\pi^2)$, which implies that the operator norm in the above equation is bounded uniformly for $n \in \mathbb{Z}$. Since $|\omega_n|^{-2}$ is summable in $n$ and $\Delta$ is locally Hilbert--Schmidt, these considerations show that $(1+\pi^2)[\mathcal{Q}_{T,\mathbf A, W}(\Delta)]_{11}$ is locally trace class. It remains to show that $(1 + \pi^2) \gamma_0$ with $\gamma_0$ in \eqref{Gamma0} is locally trace class.
To that end, we first note that
\begin{equation*}
\left\Vert (1+\pi^2) \gamma_0 \right\Vert_1 \leqslant \left\Vert (1+\pi^2) \frac{1}{1 + \mathfrak{h}_{\mathbf A, W} + \mu} \right\Vert_{\infty} \ \left\Vert (1+\mathfrak{h}_{\mathbf A, W} + \mu) \gamma_0 \right\Vert_{1}.
\end{equation*}
We argue as above to see that the first norm on the right side is finite. To obtain a bound for the second norm, we first note that there is a constant $C>0$ such that $(1+x)(\exp(\beta (x-\mu))+1)^{-1} \leqslant C \exp(-\beta x/2)$ holds for $x > a > -\infty $. The constant $C$ depends on $\beta,\mu$, and $a$. Accordingly,
\begin{equation*}
\left\Vert (1+\mathfrak{h}_{\mathbf A, W} + \mu) \gamma_0 \right\Vert_{1} \leqslant C \Tr \exp(-\beta \mathfrak{h}_{\mathbf A, W}/2).
\end{equation*}
From Corollary~A.1.2 and Corollary~B.13.3 in \cite{Simon82} we know that for any $t>0$ the operator $\exp(-t \mathfrak{h}_{\mathbf A, W})$ has an integral kernel $k_{t}(x,y)$ that satisfies
\begin{equation*}
\left\Vert k_t \right\Vert_{2,\infty}^2 = \esssup_{x \in \mathbb{R}^3} \int_{\mathbb{R}^3} \mathrm{d} y \ | k_{t}(x,y)|^2 < \infty.
\end{equation*}
Accordingly,
\begin{equation*}
\Tr \exp(-\beta \mathfrak{h}_{\mathbf A, W}/2) = \left\Vert \exp(-\beta \mathfrak{h}_{\mathbf A, W}/4) \right\Vert_2^2 = \fint_{Q_h} \mathrm{d} x \int_{\mathbb{R}^3} \mathrm{d} y \ |k_{\beta/4}(x,y)|^2 \leqslant \Vert k_{\beta/4} \Vert_{2,\infty}^2.
\end{equation*}
We conclude that $(1+\pi^2) \gamma_0$ is locally trace class. This ends the proof of Lemma~\ref{Gamma_Delta_admissible}.
\subsection{Proof of Lemma~\ref{lem:propsLN}}
We start by proving that $L_{T,\mathbf A,W}$ is a bounded linear operator on ${L^2(Q_h \times \Rbb_{\mathrm s}^3)}$. To that end, we first check that for $\Delta \in {L^2(Q_h \times \Rbb_{\mathrm s}^3)}$, we have $L_{T,\mathbf A,W} \Delta \in \mathcal{S}^2$. Using H\"older's inequality for the trace per unit volume in \eqref{Schatten-Hoelder} and $\omega_n = \pi (2n+1) T $, we see that
\begin{equation}
\sum_{n \in \mathbb{Z}} \left\Vert \frac 1{\i \omega_n - \mathfrak{h}_{\mathbf A, W}} \, \Delta \, \frac 1 {\i \omega_n + \ov{\mathfrak{h}_{\mathbf A, W}}} \right\Vert_2 \leqslant \Vert \Delta \Vert_2 \sum_{n \in \mathbb{Z}} \frac{1}{\omega_n^2} \leqslant C \Vert \Delta \Vert_2 \sum_{n \in \mathbb{Z}} \frac{1}{(2n+1)^2},
\label{eqA:1B}
\end{equation}
which proves the claim.
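We note that the first inequality in \eqref{eqA:1B} rests on the standard resolvent estimate for the self-adjoint operator $\mathfrak{h}_{\mathbf A, W}$, whose spectrum is real, namely
\begin{align*}
\Bigl\Vert \frac{1}{\i \omega_n - \mathfrak{h}_{\mathbf A, W}} \Bigr\Vert_\infty \leq \frac{1}{\mathrm{dist}\bigl( \i \omega_n, \sigma(\mathfrak{h}_{\mathbf A, W}) \bigr)} \leq \frac{1}{|\omega_n|},
\end{align*}
applied to both resolvents in the summand.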
Next, we show that $L_{T, \mathbf A, W}\Delta$ satisfies \eqref{alpha_periodicity_COM}. To that end, we need the identity
\begin{align}
\frac{1}{\i\omega_n + \ov{\mathfrak{h}_{\mathbf A, W}} - \mu} (x,y) = -\frac{1}{-\i \omega_n - \mathfrak{h}_{\mathbf A, W} + \mu}(y,x).
\label{GAz_Kernel_of_complex_conjugate}
\end{align}
It follows from $\ov{ \mathcal{R}^* (x,y) } = \mathcal{R}(y,x)$ for a general operator $\mathcal{R}$ with kernel $\mathcal{R}(x,y)$ and
\begin{align*}
\frac{1}{z - \ov{\mathfrak{h}_{\mathbf A, W}} + \mu} = \ov{\Bigl(\frac{1}{z - \mathfrak{h}_{\mathbf A, W} + \mu} \Bigr)^*}.
\end{align*}
Using the coordinate transformation $(w_1,w_2) \mapsto (w_1+v,w_2+v)$ and the above identities for the resolvent kernel, we see that
\begin{align}
L_{T,\mathbf A,W} \Delta(X+v,r) &= \frac{2}{\beta} \sum_{n \in \mathbb{Z}} \iint_{\mathbb{R}^3\times \mathbb{R}^3} \mathrm{d} w_1 \mathrm{d} w_2 \; \Delta \bigl( \frac{w_1+w_2}{2} + v, w_1 - w_2\bigr) \notag \\
&\hspace{-60pt} \times \frac{1}{\i\omega_n - \mathfrak{h}_{\mathbf A, W}+ \mu}\bigl( X + v + \frac r2, w_1\bigr) \frac{1}{-\i\omega_n - \mathfrak{h}_{\mathbf A, W}+ \mu}\bigl( X + v - \frac r2, w_2\bigr)
\label{eqA:2B}
\end{align}
holds. We highlight that we wrote $\Delta$ in terms of relative and center-of-mass coordinates in \eqref{eqA:2B}. We have $T(v) \mathfrak{h}_{\mathbf A, W} T(v)^* = \mathfrak{h}_{\mathbf A, W}$, $v \in \Lambda_h$, where $T(v)$ is the magnetic translation in \eqref{Magnetic_Translation}, and hence the same identity holds with $\mathfrak{h}_{\mathbf A, W}$ replaced by its resolvent. In combination with
\begin{align*}
&\iint_{\mathbb{R}^3\times \mathbb{R}^3} \mathrm{d} x \mathrm{d} y \; \overline{\varphi(x)} \, [T(v)(z - \mathfrak{h}_{\mathbf A, W})^{-1}T(v)^*](x,y) \, \varphi(y) \\
&\hspace{3cm}= \int_{\mathbb{R}^6} \mathrm{d} x \mathrm{d} y \; \overline{\varphi(x)} \, \mathrm{e}^{\i \frac{\mathbf B}{2} \cdot (v \wedge (x-y))} \, (z - \mathfrak{h}_{\mathbf A, W})^{-1}(x+v,y+v) \, \varphi(y),
\end{align*}
which holds for $z \in \rho(\mathfrak{h}_{\mathbf A, W})$, this proves that the resolvent kernel of $\mathfrak{h}_{\mathbf A, W}$ obeys the first relation in \eqref{alpha_periodicity}. When we combine this and the fact that $\Delta$ satisfies the second relation in \eqref{alpha_periodicity}, we see that we pick up the total phase
\begin{align*}
\mathrm{e}^{\i \frac{\mathbf B}{2} \cdot (v\wedge (X + \frac r2 - w_1))} \, \mathrm{e}^{\i \frac{\mathbf B}{2} \cdot (v\wedge (X - \frac r2 - w_2))} \, \mathrm{e}^{\i \frac{\mathbf B}{2} \cdot (v\wedge (w_1 + w_2))} = \mathrm{e}^{\i \mathbf B\cdot (v\wedge X)}
\end{align*}
in \eqref{eqA:2B}. This proves the first equation in \eqref{alpha_periodicity_COM} for $L_{T, \mathbf A, W} \Delta(X,r)$.
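Indeed, by bilinearity of the wedge product the three exponents combine, since their arguments sum to $2X$:
\begin{align*}
\frac{\mathbf B}{2} \cdot \Bigl( v \wedge \bigl( \bigl(X + \tfrac r2 - w_1\bigr) + \bigl(X - \tfrac r2 - w_2\bigr) + (w_1 + w_2) \bigr) \Bigr) = \frac{\mathbf B}{2} \cdot ( v \wedge 2X ) = \mathbf B \cdot (v \wedge X).
\end{align*}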
To see that the second equation in \eqref{alpha_periodicity_COM} is true, we further observe that the Matsubara frequencies obey $-\omega_n = \omega_{-(n+1)}$. Hence, the index shift $n \mapsto -n-1$ and \eqref{GAz_Kernel_of_complex_conjugate} imply the desired symmetry. This proves that $L_{T,\mathbf A,W}$ is a bounded linear operator on ${L^2(Q_h \times \Rbb_{\mathrm s}^3)}$.
To show that $N_{T,\mathbf A,W}(\Delta) \in \mathcal{S}^2$, we use the bound
\begin{align*}
&\sum_{n \in \mathbb{Z}} \left\Vert \frac 1{\i \omega_n - \mathfrak{h}_{\mathbf A, W}}\, \Delta \, \frac 1{\i\omega_n + \ov{\mathfrak{h}_{\mathbf A, W}}} \, \ov \Delta \, \frac 1{\i\omega_n - \mathfrak{h}_{\mathbf A, W}}\, \Delta \, \frac 1{\i\omega_n + \ov{\mathfrak{h}_{\mathbf A, W}}} \right\Vert_2 \\
&\hspace{8cm} \leq C \, \Vert \Delta \Vert_2 \ \Vert \Delta \Vert_{\infty}^2 \ \sum_{n \in \mathbb{Z}} \frac{1}{(2n+1)^4}.
\end{align*}
The proof of \eqref{alpha_periodicity_COM} for $N_{T,\mathbf A,W}(\Delta)$ goes along the same lines as that for $L_{T,\mathbf A,W} \Delta$. This proves Lemma~\ref{lem:propsLN}.
\subsubsection{Decomposition of \texorpdfstring{$L_{T, \mathbf A, W}$}{LTAW} --- separation of \texorpdfstring{$W$}{W}}
\label{sec:Q1}
We use the resolvent equation in \eqref{Resolvent_Equation} to decompose the operator $L_{T, \mathbf A, W}$ in \eqref{LTAW_definition} as
\begin{align}
L_{T, \mathbf A, W} = L_{T, \mathbf A} + \Wcal_{T, \mathbf A} + \mathcal{R}_{T, \mathbf A,W}^{(2)}, \label{LTAW_decomposition}
\end{align}
where $L_{T, \mathbf A} = L_{T, \mathbf A, 0}$,
\begin{align}
\Wcal_{T, \mathbf A} \Delta &\coloneqq -\frac 2 \beta \sum_{n\in \mathbb{Z}} \Bigl[ \frac 1{\i \omega_n - \mathfrak{h}_\mathbf A} \, W_h \, \frac 1{\i \omega_n - \mathfrak{h}_\mathbf A} \, \Delta \, \frac{1}{\i \omega_n + \ov{\mathfrak{h}_\mathbf A}} \notag \\
&\hspace{100pt} - \frac{1}{\i\omega_n - \mathfrak{h}_\mathbf A} \, \Delta \, \frac{1}{\i \omega_n + \ov{\mathfrak{h}_\mathbf A}} \, W_h \, \frac{1}{\i\omega_n + \ov{\mathfrak{h}_\mathbf A}}\Bigr], \label{LTA^W_definition}
\end{align}
and
\begin{align}
\mathcal{R}_{T, \mathbf A, W}^{(2)} \Delta &\coloneqq -\frac 2\beta \sum_{n\in \mathbb{Z}} \Bigl[ \frac{1}{\i\omega_n - \mathfrak{h}_\mathbf A} \, W_h \, \frac{1}{\i \omega_n - \mathfrak{h}_\mathbf A} \, W_h \, \frac{1}{\i\omega_n - \mathfrak{h}_{\mathbf A, W}} \, \Delta \, \frac{1}{\i \omega_n + \ov{\mathfrak{h}_\mathbf A}} \notag \\
&\hspace{60pt} + \frac{1}{\i\omega_n - \mathfrak{h}_\mathbf A} \, \Delta \, \frac{1}{\i\omega_n + \ov{\mathfrak{h}_\mathbf A}} \, W_h \, \frac{1}{\i\omega_n + \ov{\mathfrak{h}_{\mathbf A, W}}} \, W_h \, \frac{1}{\i \omega_n + \ov{\mathfrak{h}_\mathbf A}} \notag \\
&\hspace{60pt} -\frac{1}{\i \omega_n - \mathfrak{h}_\mathbf A} \, W_h \, \frac{1}{\i\omega_n - \mathfrak{h}_{\mathbf A, W}} \, \Delta \, \frac{1}{\i\omega_n + \ov{\mathfrak{h}_{\mathbf A, W}}} \, W_h \, \frac 1{\i\omega_n + \ov{\mathfrak{h}_\mathbf A}} \Bigr]. \label{RTAW2_definition}
\end{align}
In the special case of a constant magnetic field, the operator $L_{T, \mathbf A}$ appeared for the first time in \cite{Hainzl2017}. The operator $\Wcal_{T, 0}$ (case of no external magnetic fields) was studied in \cite{ProceedingsSpohn}. In Sections~\ref{sec:repLT}--\ref{Analysis_of_MTA_Section} we analyze $\langle \Delta, L_{T, \mathbf A} \Delta \rangle$ and extract the first and the third term in the GL functional from it. Afterwards, we study in Sections~\ref{LTAW_action_Section} and \ref{Approximation_of_LTA^W_Section} the quadratic form $\langle \Delta, \Wcal_{T, \mathbf A} \Delta \rangle$, which contributes the second term of the GL functional. Finally, in Section~\ref{Summary_quadratic_terms_Section}, we collect the results of the previous sections and provide a bound for $\langle \Delta, \mathcal{R}_{T, \mathbf A, W}^{(2)}\Delta \rangle$, showing that this term does not contribute to the GL functional.
\subsubsection{A representation formula for \texorpdfstring{$L_{T,\mathbf A}$}{LTA} and an outlook on the quadratic terms}
\label{sec:repLT}
In the following subsections we compute the contribution from $\langle \Delta, L_{T, \mathbf A} \Delta \rangle$ to the Ginzburg--Landau energy. Our analysis is based on the following representation formula for the operator $L_{T, \mathbf A}$, which characterizes its action solely in terms of relative and center-of-mass coordinates.
\begin{lem}
\label{LTA_action}
The operator $L_{T,\mathbf A} \colon {L^2(Q_h \times \Rbb_{\mathrm s}^3)} \rightarrow {L^2(Q_h \times \Rbb_{\mathrm s}^3)}$ acts as
\begin{align*}
(L_{T,\mathbf A}\alpha) (X,r) &= \iint_{\mathbb{R}^3\times \mathbb{R}^3} \mathrm{d} Z \mathrm{d} s \; k_{T, \mathbf A}(X, Z, r, s) \; (\mathrm{e}^{\i Z \cdot (-\i \nabla_X)}\alpha) (X,s)
\end{align*}
with
\begin{align}
k_{T,\mathbf A}(X, Z, r, s) \coloneqq \frac 2\beta \sum_{n\in \mathbb{Z}} k_{T, \mathbf A}^n(X, Z, r, s) \; \mathrm{e}^{\i \tilde \Phi_{\mathbf A_h}(X, Z, r, s)}, \label{kTA_definition}
\end{align}
where
\begin{align}
k_{T, \mathbf A}^n(X, Z, r, s) &\coloneqq g_h^{\i\omega_n} \bigl(X + \frac r2, X + Z + \frac s2\bigr) \; g_h^{-\i\omega_n}\bigl(X - \frac r2, X + Z - \frac s2\bigr) \label{kTAn_definition}
\end{align}
with $g_h^z$ in \eqref{ghz_definition} and
\begin{align}
\tilde \Phi_\mathbf A (X, Z, r, s) &\coloneqq \Phi_\mathbf A\bigl(X + \frac r2, X + Z + \frac s2\bigr) + \Phi_\mathbf A\bigl(X - \frac r2, X + Z - \frac s2\bigr) \label{LTA_PhitildeA_definition}
\end{align}
with $\Phi_\mathbf A$ in \eqref{PhiA_definition}.
\end{lem}
\begin{bem}
Lemma~\ref{LTA_action} should be compared to the representation formula in \cite[Lemma~11]{Hainzl2017} for the operator $L_{T, B}$ in \cite[Eq.~(8)]{Hainzl2017}. The differences between the two representation formulas are related to the fact that our magnetic field is non-constant. Accordingly, the kernel $g_h^z(x,y)$ is not translation-invariant and $\Phi_{\mathbf A_h}(x,y)$ does not simply equal $\frac{\mathbf B}{2} \cdot ( x \wedge y)$. This results in the dependence of the function $k_{T, \mathbf A}^n$ on the coordinate $X$ and in the fact that the representation formula in Lemma~\ref{LTA_action} is not symmetric under the transformation $Z \mapsto -Z$. As a consequence, the operator $\cos(Z\cdot \Pi)$ in \cite[Lemma~11]{Hainzl2017} is replaced by $\exp(\i Z\cdot (-\i \nabla_X))$. Both the cosine function and the full magnetic momentum operator in its argument will be recovered at a later stage, see \eqref{MtildeTA_definition} below. The main guiding principle behind the above representation formula and its subsequent analysis is that we should think of $\Phi_{\mathbf A_h}(x,y)$ as a generalization of $\frac{\mathbf B}{2} \cdot ( x \wedge y)$. The latter has more convenient algebraic properties, which are responsible for much of the simplicity of the analysis in \cite{Hainzl2017,DeHaSc2021} in comparison to the present work. In the case of $\Phi_{\mathbf A_h}(x,y)$, we do computations as if similar relations were satisfied and afterwards carefully bound the emergent remainder terms.
\end{bem}
\begin{proof}[Proof of Lemma \ref{LTA_action}]
With \eqref{GAz_Kernel_of_complex_conjugate} applied to $W =0$ the integral kernel of $L_{T, \mathbf A}$ can be written as
\begin{align*}
L_{T,\mathbf A} \alpha(x, y) &= \frac 2\beta \sum_{n\in \mathbb{Z}} \iint_{\mathbb{R}^3\times \mathbb{R}^3} \mathrm{d} u\mathrm{d} v\; G_h^{\i\omega_n} (x, u) \, G_h^{-\i\omega_n}(y, v) \, \alpha(u,v).
\end{align*}
We note that $\alpha$ and $L_{T,\mathbf A} \alpha$ are not yet written in terms of relative and center-of-mass coordinates, which will be done in the next step. To that end, we define the coordinates $X =\frac{x+y}{2}$, $r=x-y$,
\begin{align*}
u &= X + Z + \frac s2, & v &= X + Z - \frac s2,
\end{align*}
and introduce the notation $\zeta_X^r \coloneqq X + \frac r2$. This allows us to write the above equation as
\begin{align*}
L_{T,\mathbf A} \alpha(X, r) &= \frac 2\beta \sum_{n\in \mathbb{Z}} \iint_{\mathbb{R}^3\times \mathbb{R}^3} \mathrm{d} Z\mathrm{d} s\; G_h^{\i\omega_n} (\zeta_X^r, \zeta_{X+Z}^s) \, G_h^{-\i\omega_n}(\zeta_X^{-r}, \zeta_{X+Z}^{-s}) \, \alpha(X + Z, s).
\end{align*}
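We note that the substitution produces no Jacobian factor, since the linear map $(u,v) \mapsto (Z,s)$ has determinant of modulus one:
\begin{align*}
Z &= \frac{u+v}{2} - X, & s &= u - v, & \mathrm{d} u \, \mathrm{d} v &= \mathrm{d} Z \, \mathrm{d} s.
\end{align*}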
We highlight that, by a slight abuse of notation, we denoted the kernels of $\alpha$ and $L_{T,\mathbf A} \alpha$ when expressed in terms of relative and center-of-mass coordinates still by the same symbols. When we use the relation $G^z_h(x,y) = \exp( \i \Phi_{\mathbf A_h}(x,y) ) g_h^z(x,y)$ and
\begin{align*}
\alpha(X+Z,s) = \mathrm{e}^{\i Z\cdot (-\i \nabla_X)} \alpha(X, s),
\end{align*}
we see that the above identity implies the claimed formula.
\end{proof}
We analyze the operator $L_{T,\mathbf A}$ in four steps. In the first three steps, we introduce three operators of increasing simplicity in their dependence on $\mathbf A_h$. More precisely, we write it as
\begin{align}
L_{T,\mathbf A} = (L_{T,\mathbf A} - \tilde L_{T,\mathbf A}) + (\tilde L_{T,\mathbf A} - \tilde M_{T,\mathbf A}) + (\tilde M_{T,\mathbf A} - M_{T,\mathbf A}) + M_{T, \mathbf A} \label{LTA_decomposition}
\end{align}
with the operators $\tilde L_{T,\mathbf A}$, $\tilde M_{T, \mathbf A}$, and $M_{T,\mathbf A}$ defined below in \eqref{LtildeTA_definition}, \eqref{MtildeTA_definition}, and \eqref{MTA_definition}, respectively. As we will show, the operators in brackets in \eqref{LTA_decomposition} do not contribute to the GL functional. The operator $\tilde L_{T,\mathbf A}$ is obtained from $L_{T,\mathbf A}$ when we replace the kernels $g_h^{z}(x,y)$ in \eqref{kTAn_definition} by $g_0^z(x-y)$. To obtain the operator $\tilde M_{T,\mathbf A}$ from $\tilde L_{T,\mathbf A}$, we need to replace the phase factor in the definition of $k_{T,\mathbf A}^n(X,Z,r,s)$ by $\exp( -\i (r-s) \cdot D \mathbf A_h(X) (r+s)/4 )$, where $D \mathbf A_h$ denotes the Jacobi matrix of $\mathbf A_h$, and $\exp(\i Z \cdot (-\i \nabla_X))$ by $\cos( Z \cdot \Pi_{\mathbf A_h})$. Finally, $M_{T,\mathbf A}$ emerges when we replace $\exp( -\i (r-s) \cdot D \mathbf A_h(X) (r+s)/4 )$ in the definition of $\tilde M_{T, \mathbf A}$ by $1$. In the fourth and final step we extract the quadratic terms in the GL functional (except the one proportional to $W$) as well as a term that cancels the last term on the left side of \eqref{Calculation_of_the_GL-energy_eq} from $\langle \Delta, M_{T,\mathbf A} \Delta \rangle$. To that end, we expand the operator $\cos( Z \cdot \Pi_{\mathbf A_h})$ up to second order in powers of $Z \cdot \Pi_{\mathbf A_h}$ and use $T = T_{\mathrm{c}}(1 - D h^2)$.
In the case of a constant magnetic field a similar decomposition of $L_{T,\mathbf A}$ has been introduced for the first time in \cite{Hainzl2017}. In this reference the operator $\tilde L_{T,\mathbf A}$ is called $M_{T,\mathbf A}$ and our $M_{T,\mathbf A}$ is called $N_{T,\mathbf B}$. We did not follow the notation in \cite{Hainzl2017} because the symbol $N_{T,\mathbf A,W}$ is reserved for the nonlinear term in our paper. The decomposition of $L_{T,\mathbf A}$ in \cite{Hainzl2017} has also been used in \cite{DeHaSc2021}. In comparison to these two references we have an additional term (the operator $\tilde M_{T,\mathbf A}$) in our decomposition of $L_{T,\mathbf A}$, which is a consequence of the fact that we are dealing with a general magnetic field. Generally speaking, the magnetic vector potential $\mathbf A_h$ is more difficult to treat than $\mathbf A_{\mathbf B}$ because several algebraic relations that hold for the latter do not hold for the former. Our main contribution in this section is that we overcome the related mathematical difficulties in the computation of the above terms. Another main difference between our work and \cite{Hainzl2017} is that we additionally need $H_{\mathrm{mag}}^1(Q_h)$-norm estimates. In the special case of a constant magnetic field such bounds have been proved in \cite{DeHaSc2021}. It should also be noted that $L_{T,\mathbf A}$ acts on $L^2(\mathbb{R}^6)$ in \cite{Hainzl2017}, while it acts on ${L^2(Q_h \times \Rbb_{\mathrm s}^3)}$ in our case and in \cite{DeHaSc2021}.
\subsubsection{Approximation of \texorpdfstring{$L_{T, \mathbf A}$}{LTA}}
\label{Approximation_of_LTAW_Section}
\paragraph{The operator $\tilde L_{T, \mathbf A}$.}
\label{para:approx1}
We define the operator $\tilde L_{T, \mathbf A}$ by
\begin{align}
\tilde L_{T, \mathbf A}\alpha(X,r) &\coloneqq \iint_{\mathbb{R}^3 \times \mathbb{R}^3} \mathrm{d} Z \mathrm{d} s \; \tilde k_{T, \mathbf A}(X, Z, r,s) \; ( \mathrm{e}^{\i Z \cdot (-\i \nabla_X)} \alpha)(X,s) \label{LtildeTA_definition}
\end{align}
with
\begin{align}
\tilde k_{T, \mathbf A} (X, Z, r,s) \coloneqq \frac 2\beta \sum_{n\in \mathbb{Z}} k_T^n(Z, r-s) \; \mathrm{e}^{\i \tilde \Phi_{\mathbf A_h}(X, Z, r, s)}, \label{ktildeTA_definition}
\end{align}
where $\tilde \Phi_\mathbf A$ is defined in \eqref{LTA_PhitildeA_definition} and
\begin{align}
k_T^n(Z, r) \coloneqq k_{T, 0}^n(0, Z, r, 0) = g_0^{\i\omega_n}\bigl(Z - \frac{r}{2} \bigr) \, g_0^{-\i \omega_n}\bigl( Z + \frac{r}{2} \bigr). \label{kTn_definition}
\end{align}
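The equality in \eqref{kTn_definition} can be checked directly from \eqref{kTAn_definition}: the translation invariance and the radial symmetry of $g_0^z$ give
\begin{align*}
k_{T, 0}^n(0, Z, r, 0) = g_0^{\i\omega_n}\bigl( \tfrac r2 - Z \bigr) \, g_0^{-\i\omega_n}\bigl( -\tfrac r2 - Z \bigr) = g_0^{\i\omega_n}\bigl( Z - \tfrac{r}{2} \bigr) \, g_0^{-\i \omega_n}\bigl( Z + \tfrac{r}{2} \bigr).
\end{align*}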
The following proposition allows us to replace the operator $L_{T, \mathbf A}$ by $\tilde L_{T, \mathbf A}$ in the computation of the BCS energy of our trial state.
\begin{prop}
\label{LTA-LtildeTA}
Assume that $\mathbf A = \mathbf A_{e_3} + A$ with $A \in W^{2,\infty}(\mathbb{R}^3,\mathbb{R}^3)$ periodic, let $|\cdot|^k V\alpha_*\in L^2(\mathbb{R}^3)$ for $k \in \{ 0,1 \}$, $\Psi\in H_{\mathrm{mag}}^1(Q_h)$, and denote $\Delta \equiv \Delta_\Psi$ as in \eqref{Delta_definition}. For any $T_0 > 0$ there is $h_0>0$ such that for any $0 < h \leq h_0$ and any $T\geq T_0$ we have
\begin{align*}
\Vert L_{T, \mathbf A} \Delta - \tilde L_{T, \mathbf A} \Delta\Vert_{H^1(Q_h \times \Rbb_{\mathrm s}^3)}^2 \leq C \; h^8 \; \left( \Vert V\alpha_*\Vert_2^2 + \Vert \ |\cdot| V\alpha_*\Vert_2^2 \right) \; \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2.
\end{align*}
\end{prop}
\begin{bem}
\label{rem:A1}
In order to prove Theorem~\ref{Calculation_of_the_GL-energy}, we only need the bound
\begin{equation*}
| \langle \Delta, (L_{T, \mathbf A} - \tilde L_{T, \mathbf A})\Delta\rangle | \leqslant C \; h^5 \; \left( \Vert V\alpha_*\Vert_2^2 + \Vert \ |\cdot| V\alpha_*\Vert_2^2 \right) \; \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2,
\end{equation*}
which is a direct consequence of Cauchy--Schwarz, Proposition~\ref{LTA-LtildeTA}, Lemma~\ref{Schatten_estimate}, and \eqref{Periodic_Sobolev_Norm}. We prove the more general statement in Proposition~\ref{LTA-LtildeTA} here because we need it in the proof of Proposition~\ref{Structure_of_alphaDelta} in Section~\ref{sec:proofofadmissibility} below.
\end{bem}
Let us define the functions
\begin{align}
F_{T,h}^{a} &\coloneqq \frac 2\beta \sum_{m=0}^a \sum_{n\in \mathbb{Z}} \bigl(|\cdot|^m \, \tau^{\i \omega_n} \bigr) * \rho^{-\i\omega_n} + \tau^{\i\omega_n} * \bigl(|\cdot|^m \, \rho^{-\i\omega_n} \bigr) \notag \\
&\hspace{80pt}+ \bigl(|\cdot|^a \, |g_0^{\i\omega_n}|\bigr) * \tau^{-\i\omega_n} + |g_0^{\i\omega_n}| * \bigl(|\cdot|^a \, \tau^{-\i\omega_n} \bigr) \label{LTA-LtildeTA_FTA_definition}
\end{align}
with $a \geq 0$ and
\begin{align}
G_{T, h} &\coloneqq \frac 2\beta \sum_{\# \in \{ \pm \}} \sum_{n\in \mathbb{Z}} \tau_\nabla^{\# \i\omega_n} * \rho^{-\i\omega_n} + \tau^{\i\omega_n} * \rho_\nabla^{-\# \i \omega_n} + |\nabla g_0^{\# \i\omega_n}| * \tau^{-\i\omega_n} + |g_0^{\i\omega_n}| * \tau_\nabla^{-\# \i\omega_n}, \label{LTA-LtildeTA_GTA_definition}
\end{align}
which play a prominent role in the proof of Proposition~\ref{LTA-LtildeTA}. We also recall the definition of the Matsubara frequencies $\omega_n$ in \eqref{Matsubara_frequencies}, of $g_0^z$ in \eqref{ghz_definition}, and of the functions $\rho^z$, $\tau^z$, $\rho_{\nabla}^z$, and $\tau_{\nabla}^z$ in Proposition~\ref{gh-g_decay}.
We claim that for any $T_0 > 0$ there is $h_0>0$ such that for any $0 < h \leq h_0$ and any $T\geq T_0$ we have
\begin{align}
\Vert F_{T, h}^a \Vert_1 + \Vert G_{T, h} \Vert_1 \leq C_a \; h^3. \label{LTA-LtildeTA_FThGTh}
\end{align}
To prove this claim, we apply Young's inequality, Proposition~\ref{gh-g_decay}, and Lemma~\ref{g_decay}, and note that the function $f(t, \omega)$ in \eqref{gh-g_decay_f} obeys the estimate
\begin{align}
f(0, \omega_n) &\leq C \; |2n+1|^{-1}. \label{g0_decay_f_estimate1}
\end{align}
Moreover,
\begin{align}
1+\frac{|\omega_n| + |\mu|}{|\omega_n| + \mu_-} \leq C. \label{g0_decay_f_estimate2}
\end{align}
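For instance, a typical convolution term in \eqref{LTA-LtildeTA_FTA_definition} is estimated by Young's inequality as
\begin{align*}
\bigl\Vert \bigl(|\cdot|^m \, \tau^{\i \omega_n}\bigr) * \rho^{-\i\omega_n} \bigr\Vert_1 \leq \bigl\Vert \, |\cdot|^m \, \tau^{\i\omega_n} \bigr\Vert_1 \; \bigl\Vert \rho^{-\i\omega_n}\bigr\Vert_1,
\end{align*}
and the $L^1$-norms on the right side are controlled by Proposition~\ref{gh-g_decay}, which provides the factor $h^3$ as well as, via \eqref{g0_decay_f_estimate1} and \eqref{g0_decay_f_estimate2}, enough decay in $n$ to carry out the sum over the Matsubara frequencies.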
In combination, these considerations prove our claim. We are now prepared to give the proof of the above proposition.
\begin{proof}[Proof of Proposition \ref{LTA-LtildeTA}]
We have
\begin{align}
\Vert L_{T, \mathbf A}\Delta - \tilde L_{T, \mathbf A} \Delta\Vert_{H^1(Q_h \times \Rbb_{\mathrm s}^3)}^2 &= \Vert L_{T, \mathbf A}\Delta - \tilde L_{T, \mathbf A} \Delta\Vert_2^2 \notag \\
&\hspace{-50pt} + \Vert \Pi (L_{T, \mathbf A}\Delta - \tilde L_{T, \mathbf A} \Delta)\Vert_2^2 + \Vert \tilde \pi(L_{T, \mathbf A}\Delta - \tilde L_{T, \mathbf A} \Delta)\Vert_2^2 \label{LTA-LtildeTBA_8}
\end{align}
and we claim that the first term on the right side satisfies
\begin{align}
\Vert L_{T, \mathbf A}\Delta - \tilde L_{T, \mathbf A} \Delta\Vert_2^2 &\leq 4 \; \Vert \Psi\Vert_2^2 \; \Vert F_{T, h}^0 * |V\alpha_*| \, \Vert_2^2. \label{LTA-LtildeTBA_1}
\end{align}
Using Young's inequality, \eqref{Periodic_Sobolev_Norm}, and \eqref{LTA-LtildeTA_FThGTh}, we see that the right side of \eqref{LTA-LtildeTBA_1} is bounded by a constant times $h^8 \Vert V\alpha_*\Vert_2^2 \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2$. Hence, once \eqref{LTA-LtildeTBA_1} is established, the claimed bound for this term follows.
To prove \eqref{LTA-LtildeTBA_1}, we start by noting that
\begin{align}
\Vert L_{T, \mathbf A}\Delta - \tilde L_{T, \mathbf A}\Delta\Vert_2^2 & \leq 4 \int_{\mathbb{R}^3} \mathrm{d} r \iint_{\mathbb{R}^3 \times \mathbb{R}^3} \mathrm{d} Z\mathrm{d} Z' \iint_{\mathbb{R}^3 \times \mathbb{R}^3} \mathrm{d} s\mathrm{d} s' \; |V\alpha_*(s)| \; |V\alpha_*(s')| \notag\\
&\hspace{60pt} \times \esssup_{X\in \mathbb{R}^3} | (k_{T, \mathbf A} - \tilde k_{T, \mathbf A}) (X, Z, r, s)| \notag\\
&\hspace{60pt} \times \esssup_{X\in \mathbb{R}^3} |(k_{T, \mathbf A} - \tilde k_{T, \mathbf A}) (X, Z', r, s')| \notag\\
&\hspace{30pt}\times \fint_{Q_h} \mathrm{d} X \; |\mathrm{e}^{\i Z \cdot (-\i \nabla_X)} \Psi(X)| \; | \mathrm{e}^{\i Z'\cdot (-\i \nabla_X)}\Psi(X)|. \label{Expanding_the_square}
\end{align}
Since the norm of the operator $\mathrm{e}^{\i Z\cdot (-\i \nabla_X)}$ equals $1$, we have
\begin{align}
\fint_{Q_h} \mathrm{d} X \; |\mathrm{e}^{\i Z \cdot (-\i \nabla_X)} \Psi(X)| \; | \mathrm{e}^{\i Z'\cdot (-\i \nabla_X)}\Psi(X)| &\leq \Vert \Psi\Vert_2^2. \label{LTA-LtildeTBA_3}
\end{align}
Consequently, \eqref{Expanding_the_square} yields
\begin{align}
\Vert L_{T, \mathbf A}\Delta - \tilde L_{T, \mathbf A}\Delta\Vert_2^2 & \leq 4\; \Vert \Psi\Vert_2^2 \notag\\
&\hspace{-80pt} \times \int_{\mathbb{R}^3 }\mathrm{d} r\, \Bigl| \iint_{\mathbb{R}^3 \times \mathbb{R}^3} \mathrm{d} Z\mathrm{d} s \; \esssup_{X\in \mathbb{R}^3} |(k_{T, \mathbf A} - \tilde k_{T, \mathbf A}) (X, Z, r, s)|\; |V\alpha_*(s)|\Bigr|^2. \label{LTA-LtildeTBA_2}
\end{align}
We note that
\begin{align}
k_{T, \mathbf A}^n(X, Z, r, s) - k_T^n(Z, r - s) & \notag \\
&\hspace{-120pt} = \bigl( g_h^{\i \omega_n} - g_0^{\i \omega_n}\bigr) \bigl( X+ \frac r2, X + Z+ \frac s2\bigr) \, g_h^{-\i \omega_n} \bigl( X - \frac r2, X + Z - \frac s2\bigr) \notag \\
&\hspace{-100pt} + g_0^{\i \omega_n} \bigl(Z - \frac {r-s}2\bigr) \, \bigl( g_h^{-\i \omega_n} - g_0^{-\i \omega_n} \bigr) \bigl( X - \frac r2 , X + Z - \frac s2\bigr) \label{LTA-LtildeTBA_15}
\end{align}
and hence, by Proposition~\ref{gh-g_decay}, the integrand in \eqref{LTA-LtildeTBA_2} is bounded by
\begin{align}
\bigl|(k_{T, \mathbf A} - \tilde k_{T, \mathbf A}) (X, Z, r, s)\bigr| &\leq \frac 2\beta\, \smash{\sum_{n\in \mathbb{Z}}} \, \bigl[ \tau^{\i\omega_n} \bigl( Z - \frac {r-s}2\bigr) \; \rho^{-\i\omega_n} \bigl( Z + \frac {r-s}2\bigr) \notag\\
&\hspace{50pt}+ |g_0^{\i\omega_n}|\bigl( Z -\frac {r-s}2\bigr)\; \tau^{-\i\omega_n} \bigl( Z +\frac {r-s}2\bigr)\bigr]. \label{LTA-LtildeTBA_11}
\end{align}
We combine \eqref{LTA-LtildeTBA_11} with the fact that the functions in \eqref{LTA-LtildeTBA_11} are even (see Proposition~\ref{gh-g_decay}) to show
\begin{align}
\int_{\mathbb{R}^3} \mathrm{d} Z \; \esssup_{X\in \mathbb{R}^3} |(k_{T, \mathbf A} - \tilde k_{T, \mathbf A})(X, Z, r, s)| \leq F_{T, h}^0(r-s), \label{LTA-LtildeTBA_5}
\end{align}
where $F_{T, h}^0$ is the function in \eqref{LTA-LtildeTA_FTA_definition}. When we apply \eqref{LTA-LtildeTBA_5} to \eqref{LTA-LtildeTBA_2}, we obtain \eqref{LTA-LtildeTBA_1}.
Let us pause for a moment and highlight the main idea behind the above bound because it will reappear frequently in the subsequent analysis. The kernels in \eqref{LTA-LtildeTBA_15} are not translation-invariant. From Proposition~\ref{gh-g_decay} we know, however, that they can be bounded by translation-invariant kernels. When we do this, we see that we obtain convolutions of translation-invariant kernels after the integration over $Z$ has been carried out. These convolutions can now be estimated with the $L^1$-norm bounds in Proposition~\ref{gh-g_decay}. We highlight that this emergent simplicity is difficult to find when working in the operator picture. Our analysis is inspired by the analysis for the constant magnetic field in \cite{Hainzl2017}, where much of the above structure is more apparent.
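For even $\tau$, this mechanism can be recorded in a single line: after a translation of the integration variable,
\begin{align*}
\int_{\mathbb{R}^3} \mathrm{d} Z\; \tau\bigl( Z - \frac{r-s}{2}\bigr)\, \rho\bigl( Z + \frac{r-s}{2}\bigr) = \int_{\mathbb{R}^3} \mathrm{d} Z\; \tau\bigl( (r-s) - Z\bigr)\, \rho( Z ) = \bigl( \tau * \rho\bigr)(r-s),
\end{align*}
which is precisely how the convolutions in \eqref{LTA-LtildeTA_FTA_definition} emerge from \eqref{LTA-LtildeTBA_11}.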
For the second term on the right side of \eqref{LTA-LtildeTBA_8}, we claim the bound
\begin{align}
\Vert \Pi(L_{T, \mathbf A}\Delta - \tilde L_{T, \mathbf A}\Delta)\Vert_2^2 &\leq C \, h^2 \; \Vert (F_{T, h}^1 + G_{T, h} ) * |V\alpha_*| \,\Vert_2^2 \; \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2 \notag \\
&\leq C \, h^8 \; \Vert V\alpha_*\Vert_2^2 \; \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2. \label{LTA-LtildeTBA_6}
\end{align}
The second inequality follows from Young's inequality and \eqref{LTA-LtildeTA_FThGTh}. To prove the first inequality in \eqref{LTA-LtildeTBA_6}, we note that the gradient can act either on $\Psi$ or on $k_{T,\mathbf A}- \tilde k_{T,\mathbf A}$, and we start by considering the term where it acts on $\Psi$. We therefore apply \eqref{Expanding_the_square} with $\exp(\i Z \cdot (-\i \nabla_X))$ replaced by $\Pi_X \, \exp(\i Z \cdot (-\i \nabla_X))$ (we recall that $\Pi_X = -\i \nabla_X + 2 \mathbf A_{\mathbf B}(X)$) and replace \eqref{LTA-LtildeTBA_3} by
\begin{align*}
&\fint_{Q_h} \mathrm{d} X \; |\Pi_X \, \mathrm{e}^{\i Z \cdot (-\i \nabla_X)}\Psi(X)| \; |\Pi_X \, \mathrm{e}^{\i Z' \cdot (-\i \nabla_X)}\Psi(X)| \\
&\hspace{6cm}\leq \Vert \Pi_X \, \mathrm{e}^{\i Z \cdot (-\i \nabla_X)}\Psi\Vert_2\; \Vert \Pi_X \, \mathrm{e}^{\i Z' \cdot (-\i \nabla_X)}\Psi\Vert_2.
\end{align*}
A direct computation shows that
\begin{align}
\Pi_X\, \mathrm{e}^{\i Z\cdot (-\i \nabla_X)} = \mathrm{e}^{\i Z \cdot (-\i \nabla_X)} \bigl[\Pi_X - \mathbf B \wedge Z\bigr]. \label{PiU_intertwining}
\end{align}
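For completeness, we note that \eqref{PiU_intertwining} can be checked directly: $\mathrm{e}^{\i Z \cdot (-\i \nabla_X)}$ acts as translation by $Z$ and $2 \mathbf A_{\mathbf B}(X) = \mathbf B \wedge X$, whence
\begin{align*}
\Pi_X \, \mathrm{e}^{\i Z\cdot (-\i \nabla_X)} \psi(X) &= \bigl( -\i \nabla_X + \mathbf B \wedge X \bigr) \psi(X + Z) \\
&= \mathrm{e}^{\i Z\cdot (-\i \nabla_X)} \bigl[ -\i \nabla_X + \mathbf B \wedge (X - Z) \bigr] \psi(X) = \mathrm{e}^{\i Z \cdot (-\i \nabla_X)} \bigl[ \Pi_X - \mathbf B \wedge Z \bigr] \psi(X).
\end{align*}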
Using this and \eqref{Periodic_Sobolev_Norm}, we find
\begin{align}
\Vert \Pi_X \, \mathrm{e}^{\i Z\cdot (-\i \nabla_X)} \Psi\Vert_2 &\leq \Vert \Pi\Psi\Vert_2 + |\mathbf B| \, |Z| \; \Vert \Psi\Vert_2 \leq C\, h^2 \; \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)} \; (1 + |Z|), \label{PiXcos_estimate}
\end{align}
which subsequently proves
\begin{align}
\Vert \Pi(L_{T, \mathbf A}\Delta - \tilde L_{T, \mathbf A}\Delta)\Vert_2^2 &\leq C \, h^2 \; \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2 \notag\\
&\hspace{-110pt} \times \int_{\mathbb{R}^3} \mathrm{d} r \Bigl( \; \Bigl| \iint_{\mathbb{R}^3 \times \mathbb{R}^3} \mathrm{d} Z\mathrm{d} s \; h \, (1+ |Z|) \; \esssup_{X\in \mathbb{R}^3} |(k_{T, \mathbf A} - \tilde k_{T, \mathbf A})(X, Z, r,s)| |V\alpha_*(s)|\Bigr|^2 \notag \\
&\hspace{-2.5cm} + \Bigl| \iint_{\mathbb{R}^3 \times \mathbb{R}^3} \mathrm{d} Z\mathrm{d} s \; \esssup_{X\in \mathbb{R}^3} |(\nabla_X k_{T, \mathbf A} - \nabla_X \tilde k_{T, \mathbf A})(X, Z, r, s)| \, |V\alpha_*(s)|\Bigr|^2 \; \Bigr). \label{LTA-LtildeTBA_9}
\end{align}
We claim that
\begin{align}
|\nabla_X \tilde\Phi_\mathbf A(X, Z, r, s) | \leq C \, \Vert D\mathbf A\Vert_\infty \Bigl( \bigl| Z + \frac{r-s}{2}\bigr| + \bigl| Z - \frac{r-s}{2}\bigr|\Bigr) \label{LTA-LtildeTBA_16}
\end{align}
holds, where $D\mathbf A$ denotes the Jacobi matrix of $\mathbf A$. To see this, we use Lemma~\ref{PhiA_derivative} and compute
\begin{align*}
\nabla_X \tilde \Phi_\mathbf A(X, Z, r, s) &= \mathbf A\bigl( X + Z + \frac s2\bigr) - \mathbf A\bigl( X + \frac r2\bigr) + \mathbf A\bigl(X + Z - \frac s2\bigr) - \mathbf A\bigl( X - \frac r2\bigr) \\
&\hspace{40pt} + \tilde \mathbf A\bigl( X + \frac r2, X + Z + \frac s2\bigr) - \tilde \mathbf A \bigl( X + Z + \frac s2, X +\frac r2\bigr) \\
&\hspace{40pt} + \tilde \mathbf A\bigl( X - \frac r2, X + Z - \frac s2\bigr) - \tilde \mathbf A \bigl( X + Z - \frac s2, X - \frac r2\bigr).
\end{align*}
The claim now follows from this identity by a first order Taylor expansion. In combination, Proposition~\ref{gh-g_decay}, \eqref{LTA-LtildeTBA_15}, and \eqref{LTA-LtildeTBA_16} imply that, for $h$ small enough,
\begin{align*}
\int_{\mathbb{R}^3} \mathrm{d} Z \; \esssup_{X\in \mathbb{R}^3} |(\nabla_X k_{T, \mathbf A} - \nabla_X \tilde k_{T, \mathbf A})(X, Z, r, s)| &\leq \bigl( G_{T, h} + F_{T, h}^1\bigr) (r-s).
\end{align*}
The functions $F_{T, h}^1$ and $G_{T, h}$ are defined in \eqref{LTA-LtildeTA_FTA_definition} and \eqref{LTA-LtildeTA_GTA_definition}, respectively. A similar argument that uses $|Z| \leq
| Z + \frac{r-s}{2}| + | Z - \frac{r-s}{2}|$ shows
\begin{equation*}
\int_{\mathbb{R}^3 } \mathrm{d} Z \; (1+ |Z|) \; \esssup_{X\in \mathbb{R}^3} |(k_{T, \mathbf A} - \tilde k_{T, \mathbf A})(X, Z, r,s)| \leqslant F_{T, h}^1(r-s).
\end{equation*}
When we insert these two bounds into \eqref{LTA-LtildeTBA_9}, this proves \eqref{LTA-LtildeTBA_6}. It remains to consider the third term on the right side of \eqref{LTA-LtildeTBA_8}.
We claim that it satisfies
\begin{align}
\Vert \tilde \pi (L_{T, \mathbf A}\Delta - \tilde L_{T, \mathbf A} \Delta)\Vert_2^2 &\leq C \; \Vert \Psi\Vert_2^2 \; \Vert (F_{T, h}^1 + G_{T, h}) * | \ |\cdot| V\alpha_*| \, \Vert_2^2 \notag \\
&\leq C \, h^8\; \Vert \ |\cdot| \ V\alpha_*\Vert_2^2 \; \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2. \label{LTA-LtildeTBA_4}
\end{align}
The second inequality follows from Young's inequality, \eqref{Periodic_Sobolev_Norm}, and \eqref{LTA-LtildeTA_FThGTh}. To see that the first inequality holds, we first estimate
\begin{align}
\Vert \tilde \pi (L_{T, \mathbf A}\Delta - \tilde L_{T, \mathbf A}\Delta)\Vert_2^2 & \leq 4 \;\Vert \Psi\Vert_2^2 \notag \\
&\hspace{-100pt} \times \int_{\mathbb{R}^3} \mathrm{d} r \, \Bigl| \iint_{\mathbb{R}^3 \times \mathbb{R}^3} \mathrm{d} Z\mathrm{d} s \; \esssup_{X\in \mathbb{R}^3} |(\tilde \pi k_{T, \mathbf A} - \tilde \pi\tilde k_{T, \mathbf A} )(X, Z, r, s)| \;|V\alpha_*(s)| \Bigr|^2. \label{LTA-LtildeTBA_10}
\end{align}
We start by noting that
\begin{align*}
&\tilde \pi k_{T, \mathbf A}(X, Z, r, s) - \tilde \pi \tilde k_{T, \mathbf A}(X, Z, r, s) \\
&= \mathrm{e}^{\mathrm{i} \widetilde{\Phi}_{\mathbf A}(X,Z,r,s)} \bigl(-\i \nabla_r + \frac 14 \mathbf B \wedge r + \nabla_r \widetilde{\Phi}_{\mathbf A}(X,Z,r,s) \bigr) \frac{2}{\beta} \sum_{n\in \mathbb{Z}} ( k_{T, \mathbf A}^n(X, Z, r, s) - k_T^n(Z, r-s) )
\end{align*}
and
\begin{align*}
\nabla_r \tilde \Phi_{\mathbf A}(X, Z, r, s) &= -\frac 12 \bigl[ \mathbf A\bigl( X + \frac r2\bigr) - \mathbf A\bigl( X - \frac r2\bigr) \bigr] \\
&\hspace{30pt} + \frac 12 \bigl[ \tilde \mathbf A\bigl( X + \frac r2, X + Z + \frac s2\bigr) - \tilde \mathbf A \bigl( X - \frac r2, X + Z - \frac s2\bigr)\bigr],
\end{align*}
which follows from Lemma~\ref{PhiA_derivative}. We combine these two identities and estimate
\begin{align}
|\tilde \pi k_{T, \mathbf A} - \tilde \pi \tilde k_{T, \mathbf A}| \leq& \frac 2\beta \sum_{n\in \mathbb{Z}} |\nabla_r k_{T, \mathbf A}^n - \nabla_r k_T^n| + |k_{T, \mathbf A} - \tilde k_{T, \mathbf A}| \left( \frac{| \mathbf B |}{4} + h^2 \Vert D \mathbf A \Vert_{\infty} \right) \nonumber \\
&\hspace{2.5cm}\times \left( | r-s| + |s| + \left| Z + \frac{r-s}{2} \right| + \left| Z - \frac{r-s}{2} \right| \right). \label{LTA-LtildeTBA_13}
\end{align}
We apply
\begin{align}
|r-s|^a &= \bigl| \frac{r-s}{2} + Z + \frac{r-s}{2} - Z\bigr|^a \leq 2^{(a-1)_+} \Bigl( \bigl| Z - \frac{r-s}{2}\bigr|^a + \bigl| Z + \frac{r-s}{2}\bigr|^a\Bigr), \label{r-s_estimate}
\end{align}
which holds for $a \geq 0$, with $a=1$ to bound $|r-s|$ in \eqref{LTA-LtildeTBA_13}. When we put \eqref{LTA-LtildeTBA_13} and \eqref{r-s_estimate} together, and additionally use Proposition~\ref{gh-g_decay} and \eqref{LTA-LtildeTBA_15}, we find
\begin{align}
\int_{\mathbb{R}^3} \mathrm{d} Z \; \esssup_{X\in \mathbb{R}^3} |(\tilde \pi k_{T, \mathbf A} - \tilde \pi\tilde k_{T, \mathbf A}) (X, Z, r, s)| \leq C \bigl(F_{T, h}^1 + G_{T, h}\bigr)(r-s) (1+|s|). \label{LTA-LtildeTBA_7}
\end{align}
Finally, \eqref{LTA-LtildeTBA_7} and \eqref{LTA-LtildeTBA_10} imply \eqref{LTA-LtildeTBA_4}, which finishes the proof of Proposition~\ref{LTA-LtildeTA}.
\end{proof}
\paragraph{The operator $\tilde M_{T,\mathbf A}$.}
We define $\tilde M_{T,\mathbf A}$ by
\begin{align}
\tilde M_{T,\mathbf A}\alpha(X,r) \coloneqq \iint_{\mathbb{R}^3 \times \mathbb{R}^3} \mathrm{d} Z\mathrm{d} s \; k_T(Z , r-s) \, \mathrm{e}^{-\i \frac{r-s}{4} \cdot D\mathbf A_h (X)(r+s)} \, (\cos(Z \cdot \Pi_{\mathbf A_h}) \alpha) (X,s), \label{MtildeTA_definition}
\end{align}
where
\begin{equation}
k_T(Z, r) \coloneqq \frac{2}{\beta} \sum_{n\in \mathbb{Z}} g_0^{\i\omega_n}\bigl(Z - \frac{r}{2} \bigr) \, g_0^{-\i \omega_n}\bigl( Z + \frac{r}{2} \bigr).
\label{eq:AKT0}
\end{equation}
By $(D\mathbf A)_{ij} \coloneqq \partial_j \mathbf A_i$ we denote the Jacobi matrix of $\mathbf A$. Here and in the following we use the notation that $D\mathbf A_h (X)(r+s)$ denotes the matrix $D\mathbf A_h (X)$ applied to $r+s$. Accordingly, $\frac{r-s}{4} \cdot D\mathbf A_h (X)(r+s)$ is the inner product of this vector with $\frac{r-s}{4}$.
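In components, the phase in \eqref{MtildeTA_definition} therefore reads
\begin{align*}
\frac{r-s}{4} \cdot D\mathbf A_h(X)(r+s) = \frac 14 \sum_{i,j=1}^3 (r-s)_i \; \partial_j (\mathbf A_h)_i(X) \; (r+s)_j.
\end{align*}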
The following proposition allows us to replace $\tilde L_{T, \mathbf A}$ by $\tilde M_{T, \mathbf A}$ in our computations.
\begin{prop}
\label{LtildeTA-MtildeTA}
Assume that $\mathbf A = \mathbf A_{e_3} + A$ with a periodic vector potential $A \in W^{2,\infty}(\mathbb{R}^3,\mathbb{R}^3)$, let $|\cdot|^k V\alpha_*\in L^2(\mathbb{R}^3)$ for $k \in \{ 0,1,2 \}$, $\Psi\in H_{\mathrm{mag}}^1(Q_h)$, and denote $\Delta \equiv \Delta_\Psi$ as in \eqref{Delta_definition}. For any $T_0 > 0$ there is $h_0>0$ such that for any $0 < h \leq h_0$ and any $T\geq T_0$ we have
\begin{align}
\Vert \tilde L_{T, \mathbf A}\Delta - \tilde M_{T,\mathbf A}\Delta \Vert_{H^1(Q_h \times \Rbb_{\mathrm s}^3)}^2 &\leq C\; h^8 \; \max_{k =0,1,2} \Vert \, |\cdot|^k V\alpha_*\Vert_2^2 \;\Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2. \label{LtildeTA-MtildeTA_eq1}
\end{align}
\end{prop}
Before we give the proof of Proposition~\ref{LtildeTA-MtildeTA}, we state and prove two preparatory lemmas. The first lemma allows us to extract the term $\Phi_{2A_h}(X, X + Z)$ and to replace $\mathrm{e}^{\i Z \cdot (-\i \nabla)}$ by $\mathrm{e}^{\i Z \cdot \Pi_{\mathbf A_h}}$ in the definition of $\tilde L_{T, \mathbf A}$. In \cite{Hainzl2017,DeHaSc2021}, where only the constant magnetic field is present, this approximation holds as an algebraic identity. A version of the first bound in \eqref{Phi_Approximation_eq1} has, in the case of a periodic vector potential, been proved in \cite{BdGtoGL}. In that reference the factor $|Z|$ does not appear on the right side, which is why the bound stated there appears to be incorrect.
\begin{lem}
\label{Phi_Approximation}
Assume that $A \in \Wvec{2}$. We then have
\begin{align}
\sup_{X\in \mathbb{R}^3} \bigl| \tilde \Phi_A(X, Z, r, s) - \Phi_{2A}(X,X+Z) + \frac 14 (r-s)\cdot D A(X)(r+s)\bigr| & \notag\\
&\hspace{-200pt} \leq C \; \Vert D^2A\Vert_\infty \; \bigl( |Z| + |r-s|\bigr) \bigl( |s|^2 + |r-s|^2\bigr), \label{Phi_Approximation_eq1}
\end{align}
where $\Phi_A$ is defined in \eqref{PhiA_definition} and $\tilde \Phi_A$ is defined in \eqref{LTA_PhitildeA_definition}. We also have
\begin{align}
\sup_{X\in \mathbb{R}^3} \bigl| \nabla_X \tilde \Phi_A(X, Z, r, s) - \nabla_X \Phi_{2A} (X, X + Z) \bigr| & \notag\\
&\hspace{-100pt} \leq C \; \Vert D^2A \Vert_\infty\; \bigl( |Z| + |r-s|\bigr) \bigl(|s| + |r-s|\bigr) \label{Phi_Approximation_eq3}
\end{align}
as well as
\begin{align}
\sup_{X\in \mathbb{R}^3} \bigl| \nabla_X (r-s) \cdot DA(X) \cdot (r+s) \bigr| \leq C \, \Vert D^2A\Vert_\infty \, |r-s| \, \bigl( |s| + |r-s|\bigr) \label{Phi_Approximation_eq4}
\end{align}
and
\begin{align}
\sup_{X\in \mathbb{R}^3} \bigl| \nabla_r \tilde \Phi_A(X, Z, r, s) + \frac 14 \nabla_r (r-s)\cdot D A(X)(r+s) \bigr| &\notag\\
&\hspace{-100pt}\leq C\, \Vert D^2A\Vert_\infty \, \bigl( |s|^2 + |r-s|^2 + |Z|^2\bigr). \label{Phi_Approximation_eq6}
\end{align}
The same bounds hold if $A$ is replaced by $\mathbf A_{e_3} + A$.
\end{lem}
\begin{proof}[Proof of Lemma \ref{Phi_Approximation}]
We use the notation $\zeta_X^r \coloneqq X + \frac r2$ and start by writing
\begin{align}
\tilde \Phi_A(X, Z, r, s) &= \int_0^1 \mathrm{d}t\; \bigl[A\bigl( \zeta_{X + Z - tZ}^{s + t(r-s)} \bigr) + A\bigl(\zeta_{X + Z - tZ}^{-s - t(r-s)}\bigr) \bigr] \cdot Z \notag \\
& \hspace{50pt} - \int_0^1 \mathrm{d}t\; \bigl[A\bigl(\zeta_{X + Z - tZ}^{s + t(r-s)}\bigr) - A\bigl( \zeta_{X + Z - tZ}^{-s - t(r-s)} \bigr) \bigr] \cdot \frac{r-s}{2}. \label{Phi_Approximation_1}
\end{align}
A second order Taylor expansion in the variable $\pm \frac 12 (s + t(r-s))$ allows us to show that
\begin{align}
\Bigl| A\bigl(X + Z- tZ \pm \frac{s + t(r-s)}{2}\bigr) - A(X + Z-tZ) \mp \frac 12 DA(X + Z-tZ) (s + t(r-s)) \Bigr| & \notag\\
&\hspace{-220pt} \leq C\; \Vert D^2A\Vert_\infty \; \bigl(|s|^2 +|r-s|^2\bigr). \label{Phi_Approximation_2}
\end{align}
For the first term on the right side of \eqref{Phi_Approximation_1}, this implies
\begin{align}
\Bigl| \int_0^1 \mathrm{d}t\; \bigl[A\bigl( \zeta_{X + Z - tZ}^{s + t(r-s)} \bigr) + A\bigl( \zeta_{X + Z - tZ}^{-s - t(r-s)} \bigr) \bigr] \cdot Z -\Phi_{2A}(X,X+Z) \Bigr| & \notag\\
&\hspace{-100pt} \leq C \, \Vert D^2A\Vert_\infty \, |Z| \, \bigl(|s|^2 + |r-s|^2\bigr), \label{Phi_Approximation_4}
\end{align}
and we find
\begin{align}
\Bigl| - \int_0^1 \mathrm{d}t\; \bigl[A\bigl( \zeta_{X + Z - tZ}^{s + t(r-s)} \bigr) - A\bigl(\zeta_{X + Z - tZ}^{-s - t(r-s)}\bigr) \bigr] \cdot \frac{r-s}{2}+ \frac 14 (r-s) \cdot DA(X) (r+s) \Bigr| & \notag\\
&\hspace{-350pt} \leq \Bigl| -\frac{r-s}2 \cdot \int_0^1 \mathrm{d} t \; \bigl[ DA(X + Z - tZ) - DA(X) \bigr](s + t(r-s)) \Bigr| \notag \\
&\hspace{-150pt} + C\, \Vert D^2A\Vert_\infty \, \bigl( |s|^2 + |r-s|^2\bigr) \, |r-s| \notag \\
&\hspace{-350pt} \leq C\, \Vert D^2A \Vert_\infty \, |r-s| \, \bigl[|s|^2 + |r-s|^2 + \bigl(|s| + |r-s|\bigr) |Z| \bigr] \label{Phi_Approximation_3}
\end{align}
for the second term. Adding up \eqref{Phi_Approximation_4} and \eqref{Phi_Approximation_3} proves \eqref{Phi_Approximation_eq1}. The bound \eqref{Phi_Approximation_eq3} follows from a first order Taylor expansion, and \eqref{Phi_Approximation_eq4} is a straightforward computation. It remains to prove \eqref{Phi_Approximation_eq6}.
When we differentiate \eqref{Phi_Approximation_1} with respect to $r$ this yields
\begin{align}
\nabla_r \tilde \Phi_A(X, Z, r, s) &= \int_0^1 \mathrm{d}t\; \frac t2 \ Z \cdot \bigl[DA\bigl( \zeta_{X + Z - tZ}^{s + t(r-s)} \bigr) - DA\bigl(\zeta_{X + Z - tZ}^{-s - t(r-s)}\bigr) \bigr] \notag \\
& \hspace{50pt} - \int_0^1 \mathrm{d}t\; \frac t2 \ \frac{r-s}{2} \cdot \bigl[DA\bigl(\zeta_{X + Z - tZ}^{s + t(r-s)} \bigr) + DA\bigl(\zeta_{X + Z - tZ}^{-s - t(r-s)} \bigr) \bigr] \notag \\
& \hspace{50pt} - \frac 12 \int_0^1 \mathrm{d}t\; \bigl[A\bigl(\zeta_{X + Z - tZ}^{s + t(r-s)} \bigr) - A\bigl(\zeta_{X + Z - tZ}^{-s - t(r-s)} \bigr) \bigr]. \label{Phi_Approximation_7}
\end{align}
A first order Taylor expansion shows that the absolute value of the first term on the right side of \eqref{Phi_Approximation_7} is bounded by $C\Vert D^2A\Vert_\infty (|s| + |r-s|) |Z|$. We also note that
\begin{align}
\frac 14 \nabla_r (r - s) \cdot DA(X) (r + s) = \frac 14 (r-s) \cdot DA(X) + \frac 14 DA(X) (r+s). \label{Phi_Approximation_9}
\end{align}
The second term on the right side of \eqref{Phi_Approximation_7} obeys
\begin{align}
\Bigl| \int_0^1 \mathrm{d}t\; t \; \frac{r-s}{4} \cdot \bigl[DA\bigl(\zeta_{X + Z - tZ}^{s + t(r-s)} \bigr) + DA\bigl(\zeta_{X + Z - tZ}^{-s - t(r-s)} \bigr) \bigr] - \frac 14 (r-s) \cdot DA(X) \Bigr| & \notag \\
&\hspace{-360pt} \leq \Bigl| \int_0^1 \mathrm{d} t \; \frac{t}{2} \; (r-s) \cdot \bigl[ DA(X + Z - tZ) - DA(X) \bigr] \Bigr| + C \, \Vert D^2A\Vert_\infty \bigl( |s| + |r-s|\bigr) |r-s| \notag\\
&\hspace{-360pt} \leq C \, \Vert D^2A\Vert_\infty \, \bigl[ |r-s| \, |Z| + \bigl( |s| + |r-s|\bigr) |r-s|\bigr] . \label{Phi_Approximation_8}
\end{align}
For the third term on the right side of \eqref{Phi_Approximation_7}, we use the bound
\begin{align*}
\Bigl| \frac 12 \int_0^1 \mathrm{d}t\; \bigl[A\bigl(\zeta_{X + Z - tZ}^{s + t(r-s)} \bigr) - A\bigl(\zeta_{X + Z - tZ}^{-s - t(r-s)} \bigr) \bigr] - \frac 14 DA(X) (r+ s)\Bigr| \leq \mathcal{T}_+ + \mathcal{T}_- + \mathcal{T},
\end{align*}
where
\begin{align*}
\mathcal{T}_\pm &\coloneqq \Bigl| \frac 12 \int_0^1 \mathrm{d} t \; \bigl[ A\bigl( \zeta_{X + Z - tZ}^{\pm s \pm t(r-s)}\bigr) \mp A(X + Z - tZ) - \frac 12 DA(X + Z- tZ) (s+ t(r-s)) \bigr]\Bigr|
\end{align*}
and
\begin{align*}
\mathcal{T} &\coloneqq \Bigl| \frac 12 \int_0^1 \mathrm{d} t \; DA(X + Z- tZ) (s + t(r-s)) - \frac 14 DA(X) (r+s)\Bigr|.
\end{align*}
By \eqref{Phi_Approximation_2}, we have
\begin{align*}
\mathcal{T}_\pm &\leq C \, \Vert D^2 A\Vert_\infty \, \bigl( |s|^2 + |r-s|^2\bigr), & \mathcal{T} &\leq C\, \Vert D^2A\Vert_\infty \, \bigl( |s| + |r-s|\bigr ) \, |Z|.
\end{align*}
In combination with \eqref{Phi_Approximation_8}, these considerations imply \eqref{Phi_Approximation_eq6}. The proof for $\mathbf A_{e_3} + A$ is literally the same.
\end{proof}
The next lemma is a substitute for the identity
\begin{align}
\mathrm{e}^{\i \mathbf B \cdot (X \wedge Z)} \mathrm{e}^{\i Z \cdot P_X} = \mathrm{e}^{\i Z \cdot \Pi_X }
\label{Magnetic_Translation_constant_Decomposition}
\end{align}
in the case of a general magnetic field. It holds because $\mathbf B \cdot (X\wedge Z) = Z \cdot (\mathbf B \wedge X)$ and the latter commutes with $Z \cdot \Pi_X$. Here, we used the notations $P = -\i \nabla$ for the momentum operator and $\Pi = -\i \nabla + 2 \mathbf A_{\mathbf B}$ for its magnetic counterpart. Sometimes when several variables appear in an equation we write, e.g., $P_X$, $\Pi_X$, etc.\ to indicate on which variable $P$, $ \Pi$, etc.\ is acting.
\begin{lem}
\label{Magnetic_Translation_Representation}
Assume that $A \in L^{\infty}(\mathbb{R}^3,\mathbb{R}^3)$ is a periodic function. Then
\begin{align}
\mathrm{e}^{\i \Phi_{2A_h}(X , X +Z)} \, \mathrm{e}^{ \i Z\cdot \Pi_X } = \mathrm{e}^{ \i Z\cdot \Pi_{\mathbf A_h} },
\end{align}
where $\Pi_{\mathbf A_h} = P_X + 2 \mathbf A_h(X)$ is understood to act on the $X$ coordinate.
\end{lem}
The above lemma is a consequence of the following more abstract proposition, whose proof can be found in \cite[p. 290]{Werthamer1966}. For the sake of completeness we repeat it here.
\begin{prop}
\label{Operator_Equality_Abstract}
Let $\mathcal{H}$ be a separable Hilbert space, let $P\colon \mathcal{D}(P)\rightarrow \mathcal{H}$ be a densely defined self-adjoint operator, and assume that $Q$ is bounded and self-adjoint. Assume further that $[\mathrm{e}^{\i tP} \, Q \, \mathrm{e}^{-\i tP} , \mathrm{e}^{\i sP} \, Q \, \mathrm{e}^{-\i sP} ] =0$ for every $t,s\in [0,1]$. Then, we have
\begin{align*}
\exp\Bigl(\i \int_0^1\mathrm{d}t \; \mathrm{e}^{\i tP} \, Q\, \mathrm{e}^{-\i tP}\Bigr) \, \mathrm{e}^{\i P} = \mathrm{e}^{\i(P + Q)}.
\end{align*}
\end{prop}
\begin{proof}
For $s\in \mathbb{R}$ we define $Q(s) \coloneqq \mathrm{e}^{\i sP} \, Q \, \mathrm{e}^{-\i sP}$ and $W(s) \coloneqq \mathrm{e}^{\i s(P+Q)} \, \mathrm{e}^{-\i sP}$. On $\mathcal{D}(P)$, we may differentiate $W$ to get
\begin{align*}
-\i W'(s) = \mathrm{e}^{\i s(P+Q)} (P+Q) \, \mathrm{e}^{-\i sP} - \mathrm{e}^{\i s(P+Q)} \, P \, \mathrm{e}^{-\i sP}
= \mathrm{e}^{\i s(P+Q)} \, Q \, \mathrm{e}^{-\i sP} = W(s) \, Q(s).
\end{align*}
Using that $Q$ is bounded we conclude that this identity also holds on $\mathcal{H}$. Hence, $W$ satisfies the linear differential equation $W'(s) = \i W(s) \, Q(s)$ with initial condition $W(0) = 1$. Since by assumption $[Q(s),Q(t)] = 0$ holds for all $s,t \in [0,1]$, the operator
\begin{align*}
\tilde W(s) \coloneqq \exp\Bigl(\i \int_0^s \mathrm{d}t\; Q(t)\Bigr)
\end{align*}
satisfies the same differential equation with the same initial condition. By uniqueness of the solution, we have $W(s) = \tilde W(s)$, and hence
\begin{align*}
\exp\Bigl(\i \int_0^s \mathrm{d}t\; Q(t)\Bigr) = \mathrm{e}^{\i s(P+Q)}\mathrm{e}^{-\i sP}.
\end{align*}
With the choice $s =1$ this equation proves the claim.
\end{proof}
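To illustrate Proposition~\ref{Operator_Equality_Abstract} in a simple situation (a one-dimensional caricature of the application in the proof of Lemma~\ref{Magnetic_Translation_Representation} below, with a bounded function $a$ playing the role of $A_h$), choose $\mathcal{H} = L^2(\mathbb{R})$, $P = Z p$ with $p = -\i \partial_x$ and $Z \in \mathbb{R}$, and let $Q$ be multiplication by $2 Z a(x)$. Then $\mathrm{e}^{\i t P} \, Q \, \mathrm{e}^{-\i t P}$ is multiplication by $2 Z a(x + tZ)$, so all these operators commute, and the proposition yields
\begin{align*}
\exp\Bigl( \i \int_0^1 \mathrm{d} t\; 2 Z \, a(x + tZ)\Bigr)\, \mathrm{e}^{\i Z p} = \mathrm{e}^{\i Z( p + 2 a(x))}.
\end{align*}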
\begin{proof}[Proof of Lemma \ref{Magnetic_Translation_Representation}]
We first show that
\begin{equation}
\mathrm{e}^{ \i \Phi_{2A_h}(X, X+Z) } \, \mathrm{e}^{ \i Z \cdot P_X } = \mathrm{e}^{ \i Z \cdot (P_X + 2A_h(X)) }
\label{eq:Afirststep}
\end{equation}
holds. To that end, we apply Proposition~\ref{Operator_Equality_Abstract} with the choices $P = Z\cdot P_X$, where $P_X = -\i \nabla_X$, and $Q = 2 Z\cdot A_h(X)$ and find
\begin{align*}
\exp\Bigl(\i \int_0^1\mathrm{d}t \; \mathrm{e}^{\i t Z\cdot P_X} \, 2 Z\cdot A_h(X) \, \mathrm{e}^{-\i t Z \cdot P_X}\Bigr) \, \mathrm{e}^{\i Z\cdot P_X} = \mathrm{e}^{\i(Z \cdot (P_X + 2 A_h(X)))}.
\end{align*}
It remains to compute the integral in the exponential, which reads
\begin{equation*}
\int_0^1\mathrm{d}t \; \mathrm{e}^{\i t Z\cdot P_X} \, 2 Z\cdot A_h(X) \, \mathrm{e}^{-\i t Z \cdot P_X} = 2 \int_0^1\mathrm{d}t \; Z\cdot A_h(X+tZ) = \Phi_{2A_h}(X,X+Z).
\end{equation*}
To obtain the last equality we applied the coordinate transformation $t\mapsto 1 -t$. In combination, these considerations prove \eqref{eq:Afirststep}.
Next, we use \eqref{eq:Afirststep} and \eqref{Magnetic_Translation_constant_Decomposition} to see that
\begin{align*}
\mathrm{e}^{\i \Phi_{2A_h}(X , X +Z)} \, \mathrm{e}^{ \i Z\cdot \Pi_X } &= \mathrm{e}^{ \i \Phi_{2A_h}(X, X+Z) } \, \mathrm{e}^{ \i Z \cdot P_X } \; \mathrm{e}^{\i \mathbf B \cdot(X \wedge Z)} = \mathrm{e}^{ \i Z \cdot (P_X + 2A_h(X)) } \; \mathrm{e}^{\i \mathbf B \cdot(X \wedge Z)} \\
&= \mathrm{e}^{ \i Z\cdot \Pi_{\mathbf A_h} }
\end{align*}
holds. This proves Lemma~\ref{Magnetic_Translation_Representation}.
\end{proof}
For $a\in \mathbb{N}_0$ we define the functions
\begin{align}
F_T^{a} \coloneqq \frac 2\beta \sum_{n\in \mathbb{Z}} \sum_{m=0}^a \sum_{b = 0}^m \binom mb \; \bigl(|\cdot|^{b}\, |g_0^{\i\omega_n}|\bigr) * \bigl(|\cdot|^{m-b} \, |g_0^{-\i\omega_n}| \bigr) \label{LtildeTA-MtildeTA_FT_definition}
\end{align}
and
\begin{align}
G_T^a &\coloneqq \smash{\frac 2\beta \sum_{n\in \mathbb{Z}} \sum_{m=0}^a \sum_{b=0}^m \binom mb} \; \bigl( |\cdot|^b \, |\nabla g_0^{\i \omega_n}| \bigr) * \bigl( |\cdot|^{m-b} \, |g_0^{-\i\omega_n}|\bigr) \notag \\
&\hspace{150pt} + \bigl( |\cdot|^b \, |g_0^{\i\omega_n}| \bigr) * \bigl( |\cdot|^{m-b} \, |\nabla g_0^{-\i\omega_n}| \bigr), \label{LtildeTA-MtildeTA_GT_definition}
\end{align}
which play a prominent role in the proof of Proposition~\ref{LtildeTA-MtildeTA}. An application of Lemma~\ref{g_decay}, \eqref{g0_decay_f_estimate1}, and \eqref{g0_decay_f_estimate2} shows that for $T \geq T_0 > 0$ and $a\in \mathbb{N}_0$, we have
\begin{align}
\Vert F_{T}^a\Vert_1 + \Vert G_T^a \Vert_1 &\leq C_{a}. \label{LtildeTA-MtildeTA_FTGT}
\end{align}
We are now prepared to give the proof of Proposition~\ref{LtildeTA-MtildeTA}.
\begin{proof}[Proof of Proposition \ref{LtildeTA-MtildeTA}]
We use \eqref{Magnetic_Translation_constant_Decomposition} and $\Phi_{\mathbf A_{\mathbf B}}(x,y)= \frac{\mathbf B}{2} \cdot ( x \wedge y)$ to write the operator $\tilde L_{T, \mathbf A}$ as
\begin{align*}
\tilde L_{T, \mathbf A} \alpha(X, r) &= \iint_{\mathbb{R}^3 \times \mathbb{R}^3} \mathrm{d} Z \mathrm{d} s \; \mathrm{e}^{\i \frac{\mathbf B}{4} \cdot (r\wedge s)} k_T(Z, r - s) \, \mathrm{e}^{\i \tilde \Phi_{A_h}(X, Z, r, s)} \, (\mathrm{e}^{\i Z \cdot \Pi} \alpha)(X, s),
\end{align*}
where $\Pi = -\i \nabla + 2 \mathbf A_{\mathbf B}$. With the identities
\begin{align*}
- \frac{r-s}{4} \cdot D \mathbf A_{\mathbf B} (r + s) &= \frac{\mathbf B}{4} \cdot (r\wedge s), & \Phi_{2\mathbf A_{\mathbf B}}(X, X + Z) &= Z \cdot (\mathbf B \wedge X),
\end{align*}
and \eqref{LTA_PhitildeA_definition} we check that
\begin{align*}
\tilde \Phi_{\mathbf A_{\mathbf B}} (X, Z, r, s) &= - \frac{r-s}{4} \cdot D\mathbf A_{\mathbf B}(X) (r+s) + \Phi_{2\mathbf A_{\mathbf B}}(X, X + Z).
\end{align*}
In combination with Lemma~\ref{Magnetic_Translation_Representation} and the fact that the integrand in the definition of $\tilde M_{T, \mathbf A}$ is an even function of $Z$, this allows us to write $\tilde M_{T, \mathbf A}$ as
\begin{align*}
\tilde M_{T, \mathbf A} \alpha(X, r) &= \smash{\iint_{\mathbb{R}^3 \times \mathbb{R}^3}} \mathrm{d} Z \mathrm{d} s \; \mathrm{e}^{\i \frac \mathbf B 4 \cdot (r\wedge s)} k_T(Z, r-s) \, \mathrm{e}^{\i \Phi_{2A_h} (X, X+Z)} \, \mathrm{e}^{-\i \frac{r-s}{4} \cdot DA_h(X) (r + s)} \\
&\hspace{230pt} \times (\mathrm{e}^{\i Z \cdot \Pi} \alpha)(X, s),
\end{align*}
and consequently,
\begin{align}
\bigl(\tilde L_{T, \mathbf A} \Delta - \tilde M_{T, \mathbf A} \Delta\bigr) (X, r) &= -2 \iint_{\mathbb{R}^3 \times \mathbb{R}^3} \mathrm{d} Z \mathrm{d} s \; \mathrm{e}^{\i \frac \mathbf B 4 \cdot (r\wedge s)} k_T(Z , r-s) \, V\alpha_*(s) \, (\mathrm{e}^{\i Z \cdot \Pi} \Psi)(X)\notag \\
&\hspace{-20pt} \times \bigl[ \mathrm{e}^{\i \tilde \Phi_{A_h}(X, Z, r, s)} - \mathrm{e}^{\i \Phi_{2A_h}(X, X + Z)}\mathrm{e}^{-\i \frac{r-s}{4} \cdot DA_h(X)(r + s)} \bigr]. \label{LtildeTA-MtildeTA_1}
\end{align}
We claim that
\begin{align}
\Vert \tilde L_{T, \mathbf A} \Delta - \tilde M_{T, \mathbf A} \Delta \Vert_2^2 &\leq C \, \Vert \Psi\Vert_2^2 \; \Vert D^2A_h\Vert_\infty^2 \; \Vert F_T^3 * |V\alpha_*| + F_T^1 * |\cdot|^2\, |V\alpha_*| \, \Vert_2^2 \notag \\
& \leqslant C\; h^8 \; \max_{k =0,1,2} \Vert \, |\cdot|^k V\alpha_*\Vert_2^2 \;\Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2.
\label{LtildeTA-MtildeTA_6}
\end{align}
The second bound in the above equation follows from Young's inequality, \eqref{Periodic_Sobolev_Norm}, and \eqref{LtildeTA-MtildeTA_FTGT}. To prove the first bound in \eqref{LtildeTA-MtildeTA_6}, we use \eqref{LTA-LtildeTBA_3} and find
\begin{align}
\Vert \tilde L_{T, \mathbf A} \Delta - \tilde M_{T, \mathbf A} \Delta \Vert_2^2 &\leq 4 \, \Vert \Psi\Vert_2^2 \int_{\mathbb{R}^3} \mathrm{d} r \, \Bigl| \iint_{\mathbb{R}^3 \times \mathbb{R}^3} \mathrm{d} Z \mathrm{d} s \; |k_T(Z, r-s)| \, |V\alpha_*(s)| \notag \\
&\hspace{-30pt} \times \sup_{X\in \mathbb{R}^3} \bigl| \mathrm{e}^{\i \tilde \Phi_{A_h}(X, Z, r, s) - \i \Phi_{2A_h}(X, X+ Z) + \i \frac{r-s}{4} \cdot DA_h(X) (r + s)} -1\bigr| \Bigr|^2. \label{LtildeTA-MtildeTA_7}
\end{align}
Furthermore, an application of Lemma~\ref{Phi_Approximation} implies
\begin{align}
\int_{\mathbb{R}^3} \mathrm{d} Z \; |Z|^a \; |k_T(Z, r-s)| \, \sup_{X\in \mathbb{R}^3} \bigl| \mathrm{e}^{\i \tilde \Phi_{A}(X, Z, r, s) - \i \Phi_{2A}(X, X+ Z) + \i \frac{r-s}{4} \cdot DA(X) (r + s)} -1\bigr| & \notag \\
&\hspace{-280pt} \leq C \; \Vert D^2A\Vert_\infty \bigl[ F_T^{3+a}(r-s) + F_T^{1+a}(r-s)\; |s|^2\bigr] \label{LtildeTA-MtildeTA_2}
\end{align}
with $F_T^a$ in \eqref{LtildeTA-MtildeTA_FT_definition}. We need this bound here only for the case $a=0$ but state it for general $a\geq0$ for later reference. In combination with \eqref{LtildeTA-MtildeTA_7}, this proves \eqref{LtildeTA-MtildeTA_6}.
We claim that the term involving $\Pi = -\i \nabla + 2 \mathbf A_{\mathbf B}$, which is understood to act on the center-of-mass coordinate, is bounded by
\begin{align}
\Vert \Pi (\tilde L_{T, \mathbf A}\Delta - \tilde M_{T, \mathbf A}\Delta)\Vert_2^2 &\leq C\, h^2 \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2 \, \Vert D^2 A_h\Vert_\infty^2 \, \bigl( 1 + \Vert DA_h\Vert_\infty^2\bigr) \notag \\
&\hspace{-1cm} \times \bigl[ \Vert F_T^4 * |V\alpha_*|\, \Vert_2^2 + \Vert F_T^1 * |\cdot| \, |V\alpha_*| \, \Vert_2^2 + \Vert F_T^2* |\cdot|^2|V\alpha_*| \, \Vert_2^2\bigr]. \label{LtildeTA-MtildeTA_3}
\end{align}
If this holds, the desired bound for this term follows from Young's inequality and \eqref{LtildeTA-MtildeTA_FTGT}. To prove \eqref{LtildeTA-MtildeTA_3}, we use \eqref{LtildeTA-MtildeTA_1} and argue as in the proof of \eqref{LTA-LtildeTBA_9} to see that
\begin{align}
&\Vert \Pi (\tilde L_{T, \mathbf A}\Delta - \tilde M_{T, \mathbf A}\Delta)\Vert_2^2 \leq C \, h^2 \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2 \int_{\mathbb{R}^3} \mathrm{d} r\, \Big( \; \Bigl| \iint_{\mathbb{R}^3 \times \mathbb{R}^3} \mathrm{d} Z \mathrm{d} s \; \, |V\alpha_*(s)| \notag \\
&\hspace{0.1cm} \times |k_T(Z, r-s)| \Bigl[ h\, (1 + |Z|)\, \sup_{X\in\mathbb{R}^3} \bigl| \mathrm{e}^{\i \tilde \Phi_{A_h}(X, Z, r, s) - \i \Phi_{2A_h}(X, X+ Z) + \i \frac{r-s}{4} \cdot DA_h(X) (r + s)} -1\bigr| \notag \\
&\hspace{1.6cm} + \sup_{X\in \mathbb{R}^3} \bigl| \nabla_X \mathrm{e}^{\i \tilde \Phi_{A_h}(X, Z, r, s)} - \nabla_X \mathrm{e}^{\i \Phi_{2A_h}(X, X + Z)} \mathrm{e}^{-\i \frac{r-s}{4} \cdot DA_h(X)(r + s)} \bigr| \Bigr] \Bigr|^2 \Big). \label{LtildeTA-MtildeTA_4}
\end{align}
The difference of the two gradient terms can be estimated as
\begin{align*}
&\bigl| \nabla_X \mathrm{e}^{\i \tilde \Phi_{A}(X, Z, r, s)} - \nabla_X \mathrm{e}^{\i \Phi_{2A}(X, X + Z)} \mathrm{e}^{-\i \frac{r-s}{4} \cdot DA(X)(r + s)} \bigr| \notag \\
&\hspace{1cm} \leq \bigl| \mathrm{e}^{\i \tilde \Phi_{A}(X, Z, r, s) - \i \Phi_{2A}(X, X+Z) + \i \frac{r-s}{4} \cdot DA(X) (r + s)} - 1\bigr| |\nabla_X \tilde \Phi_{A}(X, Z, r, s)| \notag\\
&\hspace{1.5cm} + \bigl| \nabla_X \tilde \Phi_{A}(X, Z, r, s) - \nabla_X \Phi_{2A}(X, X + Z) \bigr| + \bigl|\nabla_X (r-s) \cdot DA(X) (r+s)\bigr|,
\end{align*}
which, by \eqref{LTA-LtildeTBA_16} and Lemma~\ref{Phi_Approximation}, is bounded by
\begin{align*}
& C \, \Vert D^2A \Vert_\infty \, \bigl(1 + \Vert DA\Vert_\infty\bigr) \\
&\hspace{50pt} \times \bigl[ \bigl( |s|^2 + |r-s|^2 \bigr) \bigl( 1+ |Z|^2 + |r-s|^2\bigr) + \bigl( |s| + |r-s|\bigr) \bigl( |Z| + |r-s|\bigr)\bigr].
\end{align*}
Accordingly,
\begin{align*}
&\int_{\mathbb{R}^3} \mathrm{d} Z \; |k_T(Z, r-s)| \, \sup_{X\in \mathbb{R}^3} \bigl| \nabla_X \mathrm{e}^{\i \tilde \Phi_{A}(X, Z, r, s)} - \nabla_X \mathrm{e}^{\i \Phi_{2A}(X, X + Z)} \mathrm{e}^{-\i \frac{r-s}{4} \cdot DA(X)(r + s)} \bigr| \\
&\hspace{40pt} \leq C \, \Vert D^2A\Vert_\infty \bigl( 1 + \Vert DA\Vert_\infty\bigr) \\
&\hspace{60pt} \times \bigl[ F_T^4(r-s) + F_T^1(r-s) \, |s| + F_T^2(r-s) \, |s|^2 \bigr].
\end{align*}
Using \eqref{LtildeTA-MtildeTA_2}, we also find
\begin{align*}
&\int_{\mathbb{R}^3} \mathrm{d} Z \; |k_T(Z, r-s)| \, h\, (1 + |Z|) \sup_{X\in\mathbb{R}^3} \bigl| \mathrm{e}^{\i \tilde \Phi_{A}(X, Z, r, s) - \i \Phi_{2A}(X, X+ Z) + \i \frac{r-s}{4} \cdot DA(X) (r + s)} -1\bigr| \\
&\hspace{40pt} \leq C \, \Vert D^2A\Vert_\infty \; \bigl[ F_T^4(r-s) + F_T^2(r-s) \, |s|^2 \bigr].
\end{align*}
In combination with \eqref{LtildeTA-MtildeTA_4}, this proves \eqref{LtildeTA-MtildeTA_3}.
We claim that the term involving $\tilde \pi = -\i \nabla + \frac{1}{2} \mathbf A_{\mathbf B}$, which is understood to act on the relative coordinate, is bounded by
\begin{align}
\Vert \tilde \pi (\tilde L_{T, \mathbf A} \Delta - \tilde M_{T, \mathbf A}\Delta)\Vert_2^2 &\leq C \, h^2 \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2 \, \Vert D^2A_h\Vert_\infty^2 \, \bigl( 1 + \Vert A_h\Vert_\infty^2 + \Vert DA_h\Vert_\infty^2\bigr) \notag\\
&\hspace{1cm}\times \bigl \Vert (F_T^4+ G_T^2) * ( 1 + |\cdot|^2 ) |V\alpha_*| \bigr\Vert_2^2 \notag \\
& \leqslant C \; h^8 \; \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2 \; \Vert (1+|\cdot|^2) V \alpha_* \Vert_2^2 . \label{LtildeTA-MtildeTA_5}
\end{align}
The second inequality is a consequence of Young's inequality and \eqref{LtildeTA-MtildeTA_FTGT}. To prove the first inequality in \eqref{LtildeTA-MtildeTA_5}, we first use \eqref{LtildeTA-MtildeTA_1} and argue as in the proof of \eqref{LTA-LtildeTBA_9} to see that
\begin{align*}
&\Vert \tilde \pi (\tilde L_{T, \mathbf A}\Delta - \tilde M_{T, \mathbf A}\Delta)\Vert_2^2 \leq C \, h^2 \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2 \, \int_{\mathbb{R}^3} \mathrm{d} r \, \Big( \; \Bigl| \iint_{\mathbb{R}^3 \times \mathbb{R}^3} \mathrm{d} Z \mathrm{d} s \; |V\alpha_*(s)| \\
&\hspace{1cm} \times \bigl|\tilde \pi k_T(Z, r-s) \mathrm{e}^{\i \frac{\mathbf B}{4} \cdot (r\wedge s)}\bigr| \sup_{X\in \mathbb{R}^3} \bigl| \mathrm{e}^{\i \tilde \Phi_{A_h}(X, Z, r, s) - \i \Phi_{2A_h}(X, X+ Z) + \i \frac{r-s}{4} \cdot DA_h(X) (r + s)} -1\bigr| \\
&\hspace{0.5cm} + |k_T(Z, r-s)| \, \sup_{X\in \mathbb{R}^3} \bigl| \nabla_r \mathrm{e}^{\i \tilde \Phi_{A_h}(X, Z, r, s)} - \nabla_r \mathrm{e}^{\i \Phi_{2A_h}(X, X+ Z)} \mathrm{e}^{-\i \frac{r-s}{4} \cdot DA_h(X) (r + s)}\bigr| \Bigr|^2 \Big).
\end{align*}
A brief computation shows that the operator $\tilde \pi$ obeys the following intertwining relation with respect to $\mathrm{e}^{\i \frac \mathbf B 4 \cdot (r\wedge s)}$:
\begin{align}
\tilde \pi_r \, \mathrm{e}^ {\frac \i 4 \mathbf B \cdot (r \wedge s)} = \mathrm{e}^{\frac \i 4 \mathbf B \cdot (r\wedge s)} \bigl( -\i \nabla_r + \frac 14 \mathbf B \wedge (r-s)\bigr). \label{tildepi_magnetic_phase}
\end{align}
The notation $\tilde \pi_r$ in the above equation highlights on which of the two variables the operator $\tilde \pi$ is acting. An application of this identity shows
\begin{align}
|\tilde \pi k_T(Z,r-s) \mathrm{e}^{\i \frac{\mathbf B}{4} (r\wedge s)}| &\leq |\nabla_r k_T(Z, r-s)| + \frac{|\mathbf B|}{4} \, |r-s| \, |k_T(Z,r-s)|. \label{eq:A17}
\end{align}
Hence, a computation similar to that leading to \eqref{LtildeTA-MtildeTA_2} shows that
\begin{align}
\int_{\mathbb{R}^3} \mathrm{d} Z \; |\tilde \pi k_T(Z, r-s) \mathrm{e}^{\i \frac{\mathbf B}{4} \cdot (r\wedge s)}| & \notag \\
&\hspace{-110pt} \times \sup_{X\in \mathbb{R}^3} \bigl| \mathrm{e}^{\i \tilde \Phi_{A}(X, Z, r, s) - \i \Phi_{2A}(X, X+ Z) + \i \frac{r-s}{4} \cdot DA(X) (r + s)} -1\bigr| & \notag \\
&\hspace{-90pt} \leq C \; \Vert D^2A \Vert_\infty \; \bigl[ (F_T^4 + G_T^3)(r-s) + (F_T^2 + G_T^1)(r-s)\; |s|^2\bigr] \label{LtildeTA-MtildeTA_9}
\end{align}
with the function $F_T^a$ in \eqref{LtildeTA-MtildeTA_FT_definition} and $G_T^a$ in \eqref{LtildeTA-MtildeTA_GT_definition}. Let us also note that we estimate the factor $|\mathbf B|$ coming from the second term in \eqref{eq:A17} by $1$.
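For the convenience of the reader, we sketch the short computation behind \eqref{tildepi_magnetic_phase}. Recalling $\tilde \pi = -\i \nabla + \frac 12 \mathbf A_{\mathbf B}$ with $\mathbf A_{\mathbf B}(x) = \frac 12 \mathbf B \wedge x$, and using $\nabla_r \bigl( \mathbf B \cdot (r \wedge s) \bigr) = s \wedge \mathbf B = -\mathbf B \wedge s$, we find
\begin{align*}
\mathrm{e}^{-\frac \i 4 \mathbf B \cdot (r \wedge s)} \, \tilde \pi_r \, \mathrm{e}^{\frac \i 4 \mathbf B \cdot (r \wedge s)} = -\i \nabla_r + \frac 14 \nabla_r \bigl( \mathbf B \cdot (r \wedge s) \bigr) + \frac 14 \mathbf B \wedge r = -\i \nabla_r + \frac 14 \mathbf B \wedge (r-s),
\end{align*}
which is \eqref{tildepi_magnetic_phase}.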
We also have
\begin{align}
&\bigl| \nabla_r \mathrm{e}^{\i \tilde \Phi_{A}(X, Z, r, s)} - \nabla_r \mathrm{e}^{\i \Phi_{2A}(X, X+ Z) - \i \frac{r-s}{4} \cdot DA(X) (r + s)}\bigr| \notag \\
&\hspace{3cm} \leq \bigl| \mathrm{e}^{\i \tilde \Phi_{A} (X, Z, r, s) - \i \Phi_{2A}(X, X + Z)+ \i \frac{r-s}{4} \cdot DA(X) (r + s)} - 1\bigr| |\nabla_r \tilde \Phi_{A}(X, Z, r, s)| \notag \\
&\hspace{3.5cm} + \bigl| \nabla_r \tilde \Phi_{A}(X, Z,r, s) + \nabla_r \frac{r-s}{4} \cdot DA(X) (r+s)\bigr|. \label{eq:A16}
\end{align}
We use
\begin{align}
|\nabla_r \tilde \Phi_{A}(X, Z, r, s)| &\leq C \; \bigl( \Vert A\Vert_\infty + \Vert DA\Vert_\infty \bigr) \, \bigl( \bigl| Z + \frac{r-s}{2} \bigr| + \bigl| Z - \frac{r-s}{2}\bigr| \bigr) \label{LTA-LtildeTBA_17}
\end{align}
and Lemma~\ref{Phi_Approximation} to see that the right side of \eqref{eq:A16} is bounded by
\begin{align*}
&C \, \Vert D^2A\Vert_\infty \bigl( 1+ \Vert A\Vert_\infty + \Vert D A \Vert_\infty \bigr) \; \bigl[ |s|^2 + |r-s|^2 + |Z|^2 \\
&\hspace{40pt} + \bigl( |s| + |r-s|\bigr) \bigl( |Z|+ |r-s|\bigr) \bigl( \bigl| Z+\frac{r-s}{2} \bigr| + \bigl| Z-\frac{r-s}{2} \bigr| \bigr) \bigr].
\end{align*}
In combination with \eqref{r-s_estimate}, these considerations imply
\begin{align*}
\int_{\mathbb{R}^3} \mathrm{d} Z \; |k_T(Z,r-s)| \, \sup_{X\in \mathbb{R}^3} \bigl| \nabla_r \mathrm{e}^{\i \tilde \Phi_{A}(X, Z, r, s)} - \nabla_r \mathrm{e}^{\i \Phi_{2A}(X, X+ Z)} \mathrm{e}^{-\i \frac{r-s}{4} \cdot DA(X) (r + s)}\bigr| & \\
&\hspace{-350pt} \leq C\, \Vert D^2A\Vert_\infty \bigl( 1 + \Vert A \Vert_\infty + \Vert DA \Vert_\infty \bigr) \\
&\hspace{-300pt} \times \bigl[ F_T^3 (r-s) + F_T^2 (r-s) \, |s| + F_T^0 (r-s) \, |s|^2\bigr].
\end{align*}
When we combine this with \eqref{LtildeTA-MtildeTA_9}, we obtain \eqref{LtildeTA-MtildeTA_5}.
\end{proof}
\paragraph{The operator $M_{T,\mathbf A}$.} We define the operator $M_{T,\mathbf A}$ by
\begin{align}
M_{T,\mathbf A} \alpha(X,r) \coloneqq \iint_{\mathbb{R}^3 \times \mathbb{R}^3} \mathrm{d} Z \mathrm{d} s \; k_T(Z, r-s) \;(\cos(Z\cdot \Pi_{\mathbf A_h})\alpha)(X,s), \label{MTA_definition}
\end{align}
where $k_T$ is defined below \eqref{MtildeTA_definition}. In our calculation, we may replace $\tilde M_{T, \mathbf A}$ by $M_{T,\mathbf A}$ due to the following error bound.
\begin{prop}
\label{MtildeTA-MTA}
Assume that $\mathbf A = \mathbf A_{e_3} + A$ with a periodic function $A \in W^{2,\infty}(\mathbb{R}^3,\mathbb{R}^3)$, let $|\cdot|^k V\alpha_*\in L^2(\mathbb{R}^3)$ for $k \in \{ 0,1,2 \}$, $\Psi\in H_{\mathrm{mag}}^1(Q_h)$, and denote $\Delta \equiv \Delta_\Psi$ as in \eqref{Delta_definition}. For any $T_0 > 0$ there is $h_0>0$ such that for any $0 < h \leq h_0$ and any $T\geq T_0$ we have
\begin{align}
\Vert \tilde M_{T,\mathbf A}\Delta - M_{T,\mathbf A}\Delta \Vert_{H^1(Q_h \times \Rbb_{\mathrm s}^3)}^2 &\leq C\;h^6 \; \bigl( \max_{k\in \{0,1,2\}} \Vert \, |\cdot|^k V\alpha_*\Vert_2^2\bigr) \;\Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2. \label{MtildeTA-MTA_eq1}
\end{align}
Furthermore,
\begin{align}
|\langle \Delta, \tilde M_{T,\mathbf A}\Delta - M_{T,\mathbf A}\Delta\rangle| &\leq C \;h^6 \; \bigl( \Vert V\alpha_*\Vert_2^2 + \Vert\, |\cdot|^2 V\alpha_*\Vert_2^2 \bigr) \;\Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2. \label{MtildeTA-MTA_eq2}
\end{align}
\end{prop}
\begin{bem}
The two bounds in \eqref{MtildeTA-MTA_eq1} and \eqref{MtildeTA-MTA_eq2} are needed for the proof of Proposition~\ref{Structure_of_alphaDelta} and Theorem~\ref{Calculation_of_the_GL-energy}, respectively. We highlight that the bound in \eqref{MtildeTA-MTA_eq1} is not strong enough to be useful in the proof of Theorem~\ref{Calculation_of_the_GL-energy}. More precisely, if we apply Cauchy--Schwarz and use Lemma~\ref{Schatten_estimate} as well as \eqref{MtildeTA-MTA_eq1} to estimate $\Vert \Delta \Vert_2$, we obtain a bound that is only of the order $h^4$. This is not good enough because $h^4$ is the order of the Ginzburg--Landau energy. To obtain \eqref{MtildeTA-MTA_eq2}, we exploit the fact that $V\alpha_*$ is real-valued, which allows us to replace $\exp(-\i \frac{r-s}{4} \cdot D\mathbf A_h(X)(r+s))$ in the definition of $\tilde M_{T, \mathbf A}$ by $\cos(\frac{r-s}{4} \cdot D\mathbf A_h(X)(r+s))$ and to gain an additional factor of $h^2$.
\end{bem}
\begin{proof}[Proof of Proposition \ref{MtildeTA-MTA}]
The proof is similar to that of Proposition \ref{LTA-LtildeTA}. We begin by proving \eqref{MtildeTA-MTA_eq1} and claim that
\begin{align}
\Vert \tilde M_{T, \mathbf A}\Delta - M_{T, \mathbf A}\Delta\Vert_2^2 &\leq C \, \Vert \Psi\Vert_2^2 \, \Vert D\mathbf A_h\Vert_\infty^2 \, \Vert F_T^2 * |V\alpha_*| + F_T^1 * |\cdot|\, |V\alpha_*|\, \Vert_2^2 \label{MtildeTA-MTA_6}
\end{align}
with the function $F_T^a$ in \eqref{LtildeTA-MtildeTA_FT_definition}. If this holds, the desired bound for this term follows from Young's inequality, \eqref{Periodic_Sobolev_Norm}, and the $L^1$-norm estimate on $F_T^a$ in \eqref{LtildeTA-MtildeTA_FTGT}. To prove \eqref{MtildeTA-MTA_6}, we argue as in \eqref{Expanding_the_square}--\eqref{LTA-LtildeTBA_2} and obtain
\begin{align}
\Vert \tilde M_{T, \mathbf A}\Delta - M_{T, \mathbf A}\Delta\Vert_2^2 & \leq 4 \, \Vert \Psi\Vert_2^2 \notag \\
&\hspace{-90pt} \times \int_{\mathbb{R}^3} \mathrm{d} r\, \Bigl| \iint_{\mathbb{R}^3 \times \mathbb{R}^3}\mathrm{d} Z \mathrm{d} s \; |k_T (Z, r-s)| \, \sup_{X\in \mathbb{R}^3} \bigl| \mathrm{e}^{-\i \frac{r-s}{4} \cdot D\mathbf A_h(X) (r+s)} - 1\bigr| \;|V\alpha_*(s)|\Bigr|^2. \label{MtildeTA-MTA_4}
\end{align}
When we combine the bound
\begin{align}
\bigl| (r-s) \cdot D\mathbf A(X) (r+s)\bigr| &\leq \Vert D\mathbf A \Vert_\infty \, |r-s| \, \bigl( 2 |s| + |r-s|\bigr) \label{MtildeTA-MTA_1}
\end{align}
and the estimate for $|r-s|$ in \eqref{r-s_estimate}, we obtain
\begin{align}
\int_{\mathbb{R}^3} \mathrm{d} Z \; |k_T (Z, r-s)| \, \sup_{X\in \mathbb{R}^3} \bigl| \mathrm{e}^{-\i \frac{r-s}{4} \cdot D\mathbf A(X) (r+s)} - 1\bigr| & \notag \\
&\hspace{-100pt} \leq C\, \Vert D\mathbf A\Vert_\infty \bigl[ F_T^2(r-s) + F_T^1 (r - s)\;|s|\bigr]. \label{MtildeTA-MTA_5}
\end{align}
In combination with \eqref{MtildeTA-MTA_4}, this proves \eqref{MtildeTA-MTA_6}.
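Let us note for clarity that the elementary bound \eqref{MtildeTA-MTA_1} follows from writing $r + s = (r-s) + 2s$, which yields
\begin{align*}
\bigl| (r-s) \cdot D\mathbf A(X) (r+s)\bigr| \leq \Vert D\mathbf A \Vert_\infty \, |r-s| \, |r+s| \leq \Vert D\mathbf A \Vert_\infty \, |r-s| \, \bigl( 2 |s| + |r-s|\bigr).
\end{align*}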
We also claim that the term involving $\Pi$ is bounded by
\begin{align}
\Vert \Pi (\tilde M_{T, \mathbf A}\Delta - M_{T, \mathbf A}\Delta)\Vert_2^2 &\leq C\, h^2 \; \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2 \notag \\
&\hspace{-70pt} \times \bigl[ ( |\mathbf B|^2 + \Vert D^2A_h\Vert_\infty^2 )(1+ |\mathbf B|^2 + \Vert D^2A_h\Vert_\infty^2 ) \, \Vert F_T^3 *|V\alpha_*| + F_T^2 * |\cdot| \, |V\alpha_*| \, \Vert_2^2 \notag \\
&\hspace{-50pt} + \Vert D^2\mathbf A_h\Vert_\infty^2\, \Vert F_T^2 * |V\alpha_*| + F_T^1 * |\cdot|\, |V\alpha_*|\, \Vert_2^2 \bigr]. \label{MtildeTA-MTA_2}
\end{align}
If this holds, an application of Young's inequality and \eqref{LtildeTA-MtildeTA_FTGT} shows the desired bound for this term. To prove \eqref{MtildeTA-MTA_2}, we first show that
\begin{align}
\Vert \Pi \cos(Z\cdot \Pi_{\mathbf A_h}) \Psi\Vert_2 \leq C \, h^2 \, (1 + |Z|) \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)} \label{MtildeTA-MTA_Lemma}
\end{align}
holds. From Lemma~\ref{Magnetic_Translation_Representation} we know that
\begin{align*}
\mathrm{e}^{\pm \i Z \cdot \Pi_{\mathbf A_h}} = \mathrm{e}^{\pm \i Z \cdot \Pi} \mathrm{e}^{-\i \Phi_{2A_h}(X, X \mp Z)}
\end{align*}
holds. An application of the intertwining relation in \eqref{PiU_intertwining} for $\Pi_X$ and $\mathrm{e}^{\i Z\cdot \Pi_X}$ therefore shows
\begin{align*}
[\Pi, \mathrm{e}^{\pm \i Z\cdot \Pi_{\mathbf A_h}}] = \mathrm{e}^{\pm \i Z\cdot \Pi_{\mathbf A_h}} \bigl[ \mp 2\, \mathbf B\wedge Z - \nabla_X \Phi_{2A_h}(X, X \mp Z)\bigr].
\end{align*}
We highlight that $\Pi$ and $\Pi_{\mathbf A_h}$ in the two equations above act on the coordinate $X$. Using Lemma~\ref{PhiA_derivative}, we check that
\begin{align*}
\nabla_X \Phi_{2A}(X, X \mp Z) = 2A(X\mp Z) - 2A(X) + 2\tilde A(X, X\mp Z) - 2\tilde A (X\mp Z, X)
\end{align*}
holds. Accordingly, $|\nabla_X\Phi_{2A}(X, X \mp Z)|\leq C\, \Vert DA\Vert_\infty |Z|$, which implies
\begin{align}
\Vert [\Pi, \cos(Z\cdot \Pi_{\mathbf A_h})]\Psi\Vert_2 \leq C \, ( | \mathbf B | + \Vert DA_h\Vert_\infty ) \, |Z| \, \Vert \Psi\Vert_2 \label{PiXcosPiA_estimate}
\end{align}
as well as \eqref{MtildeTA-MTA_Lemma}. Finally, \eqref{MtildeTA-MTA_2} follows from a computation similar to the one leading to \eqref{LTA-LtildeTBA_9}, combined with Lemma~\ref{Phi_Approximation}, \eqref{MtildeTA-MTA_5}, \eqref{MtildeTA-MTA_Lemma}, and the above considerations. It remains to consider the term proportional to $\tilde \pi$.
With an argument that is similar to the one leading to \eqref{LTA-LtildeTBA_10}, we see that
\begin{align*}
\Vert \tilde \pi (\tilde M_{T, \mathbf A}\Delta - M_{T, \mathbf A}\Delta)\Vert_2^2 & \leq 4\, \Vert \Psi\Vert_2^2 \\
&\hspace{-100pt} \times \int_{\mathbb{R}^3} \mathrm{d} r \, \Bigl| \iint_{\mathbb{R}^3 \times \mathbb{R}^3}\mathrm{d} Z \mathrm{d} s \; \sup_{X\in \mathbb{R}^3} \bigl| \tilde \pi k_T (Z, r-s) \bigl[ \mathrm{e}^{-\i \frac{r-s}{4} \cdot D\mathbf A_h(X) (r + s)} - 1\bigr] \bigr| \; |V\alpha_*(s)|\Bigr|^2.
\end{align*}
We also note that
\begin{align*}
\bigl|\nabla_r \, (r - s) \cdot D\mathbf A(X) (r+s) \bigr| \leq C \, \Vert D\mathbf A\Vert_\infty \bigl( |s| + |r-s|\bigr).
\end{align*}
In combination with \eqref{MtildeTA-MTA_1} and $|\mathbf A_{\mathbf B}(r)| \leqslant | \mathbf B | \ (|r-s| + |s|)$, this implies
\begin{align*}
&\int_{\mathbb{R}^3} \mathrm{d} Z\; \sup_{X\in \mathbb{R}^3} \bigl| \tilde \pi k_T (Z, r-s) \bigl[ \mathrm{e}^{-\i \frac{r-s}{4} \cdot D\mathbf A(X) (r+s)} - 1\bigr] \bigr| \\
&\hspace{2cm}\leq C\, ( |\mathbf B| + \Vert DA\Vert_\infty ) \; \bigl( (F_T^2 + G_T^2)(r-s) + (F_T^0 + G_T^1)(r-s) \; |s| \bigr)
\end{align*}
with the function $F_T^a$ in \eqref{LtildeTA-MtildeTA_FT_definition} and $G_T^a$ in \eqref{LtildeTA-MtildeTA_GT_definition}. When we apply Young's inequality and \eqref{LtildeTA-MtildeTA_FTGT}, we obtain \eqref{MtildeTA-MTA_eq1}. It remains to prove \eqref{MtildeTA-MTA_eq2}.
To that end, we need to consider
\begin{align}
\langle \Delta, \tilde M_{T, \mathbf A} \Delta - M_{T, \mathbf A} \Delta \rangle &= 4 \fint_{Q_h} \mathrm{d} X \iint_{\mathbb{R}^3 \times \mathbb{R}^3} \mathrm{d} r \mathrm{d} s \; \bigl( \mathrm{e}^{-\i \frac{r-s}{4} \cdot D\mathbf A_h(X) (r+s)} - 1 \bigr) V\alpha_*(r) V\alpha_*(s) \notag\\
&\hspace{-20pt} \times \int_{\mathbb{R}^3} \mathrm{d} Z \; k_T(Z, r-s) \; \ov{\Psi(X)} \cos(Z\cdot \Pi_{\mathbf A_h})\Psi(X). \label{MtildeTA-MTA_7}
\end{align}
The left side of this equation is real-valued. It therefore equals $1/2$ times the right side plus $1/2$ times the complex conjugate of the right side. When we use that $V \alpha_*$ is real-valued, that the Matsubara frequencies in \eqref{Matsubara_frequencies} satisfy $-\omega_n = \omega_{-(n+1)}$, and the transformation $n \mapsto -n-1$ in the sum in the definition of $k_T(Z,r-s)$, we see that the complex conjugate of the right side equals the same expression with $\exp(-\i \frac{r-s}{4} \cdot D\mathbf A_h(X) (r+s))$ replaced by its complex conjugate. Using this and the identity $\cos(x) -1 =- 2\sin^2(\frac x2)$ we find
\begin{align}
\langle \Delta, \tilde M_{T, \mathbf A} \Delta - M_{T, \mathbf A} \Delta \rangle & = -8 \fint_{Q_h} \mathrm{d} X \iint_{\mathbb{R}^3 \times \mathbb{R}^3} \mathrm{d} r \mathrm{d} s \; \sin^2\Bigl( \frac {r-s} {8} \cdot D\mathbf A_h(X) (r+s)\Bigr) \notag \\
&\hspace{-3cm}\times V\alpha_*(r) V\alpha_*(s) \int_{\mathbb{R}^3} \mathrm{d} Z \; k_T(Z, r-s) \; \ov{\Psi(X)} \cos(Z\cdot \Pi_{\mathbf A_h})\Psi(X). \label{MtildeTA-MTA_8}
\end{align}
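For clarity, we spell out the algebraic step used here: abbreviating $\theta \coloneqq \frac{r-s}{4} \cdot D\mathbf A_h(X) (r+s)$, the average of the phase factor and its complex conjugate satisfies
\begin{align*}
\frac 12 \bigl( \mathrm{e}^{-\i \theta} - 1 \bigr) + \frac 12 \bigl( \mathrm{e}^{\i \theta} - 1 \bigr) = \cos(\theta) - 1 = -2 \sin^2\Bigl( \frac{\theta}{2} \Bigr),
\end{align*}
which produces the factor $\sin^2$ in \eqref{MtildeTA-MTA_8}.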
Furthermore, \eqref{MtildeTA-MTA_1} implies
\begin{align*}
\sin^2\bigl(\frac 18 (r-s) \cdot D\mathbf A_h(X)(r+s) \bigr) \leq C \, \Vert D\mathbf A_h\Vert_\infty^2\, |r-s|^2 \, \bigl( |s|^2+ |r-s|^2\bigr).
\end{align*}
We use this bound, \eqref{MtildeTA-MTA_8}, and $\Vert \cos(Z\cdot\Pi_{\mathbf A_h}) \Vert_{\infty} \leqslant 1$ to see that
\begin{align*}
|\langle \Delta, \tilde M_{T, \mathbf A}\Delta - M_{T, \mathbf A}\Delta\rangle | &\leq \Vert D\mathbf A_h\Vert_\infty^2 \; \Vert \Psi\Vert_2^2 \; \bigl\Vert |V\alpha_*| \; \bigl( F_T^4 * |V\alpha_*| + F_T^2 * |\cdot |^2 |V\alpha_*| \bigr) \bigr\Vert_1.
\end{align*}
The desired bound in \eqref{MtildeTA-MTA_eq2} follows when we apply Young's inequality and use \eqref{Periodic_Sobolev_Norm} as well as the $L^1$-norm estimate for $F_T^a$ in \eqref{LtildeTA-MtildeTA_FTGT}. This completes the proof of Proposition~\ref{MtildeTA-MTA}.
\end{proof}
\subsubsection{Analysis of \texorpdfstring{$M_{T, \mathbf A}$}{MTA} and calculation of two quadratic terms in the Ginz\-burg--Landau functional}
\label{Analysis_of_MTA_Section}
We decompose $M_{T, \mathbf A} = M_T^{(1)} + M_{T, \mathbf A}^{(2)} + M_{T, \mathbf A}^{(3)}$, where
\begin{align}
M_T^{(1)} \alpha(X,r) &\coloneqq \iint_{\mathbb{R}^3\times \mathbb{R}^3} \mathrm{d} Z \mathrm{d} s \; k_T(Z, r-s) \; \alpha(X,s), \label{MT1_definition}\\
M_{T, \mathbf A}^{(2)} \alpha(X, r) &\coloneqq -\frac 12 \iint_{\mathbb{R}^3\times \mathbb{R}^3} \mathrm{d} Z \mathrm{d} s\; k_T(Z, r-s) \; (Z\cdot \Pi_{\mathbf A_h})^2 \; \alpha(X, s), \label{MTB2_definition}\\
M_{T, \mathbf A}^{(3)} \alpha(X,r) &\coloneqq \iint_{\mathbb{R}^3\times \mathbb{R}^3} \mathrm{d} Z \mathrm{d} s\; k_T(Z, r-s) \; \mathcal{R}(Z\cdot \Pi_{\mathbf A_h}) \; \alpha(X, s), \label{MTA3_definition}
\end{align}
and $\mathcal{R}(x) = \cos(x) - 1 + \frac 12 x^2$.
\paragraph{The operator $M_T^{(1)}$.} From the quadratic form $\langle \Delta, M_T^{(1)} \Delta \rangle$ we extract the quadratic term without external fields or a gradient in the Ginzburg--Landau functional in \eqref{Definition_GL-functional}. We also obtain a term that cancels the last term on the left side of \eqref{Calculation_of_the_GL-energy}. The relevant computation can be found in \cite[Proposition 4.11]{DeHaSc2021}. For the sake of completeness, we also state the result here. We recall that $\Delta \equiv \Delta_\Psi = -2 V\alpha_* \Psi$ as in \eqref{Delta_definition}.
\begin{prop}
\label{MT1}
Let $V\alpha_*\in L^2(\mathbb{R}^3)$, $\Psi \in L_{\mathrm{mag}}^2(Q_h)$, and $\Delta \equiv \Delta_\Psi$ as in \eqref{Delta_definition}. Then the following statements hold.
\begin{enumerate}[(a)]
\item We have $M_{{T_{\mathrm{c}}}}^{(1)} \Delta (X, r) = -2\, \alpha_* (r) \Psi(X)$.
\item For any $T_0>0$ there is a constant $c>0$ such that for $T_0 \leq T \leq {T_{\mathrm{c}}}$ we have
\begin{align*}
\langle \Delta, M_T^{(1)} \Delta - M_{{T_{\mathrm{c}}}}^{(1)} \Delta \rangle \geq c \, \frac{{T_{\mathrm{c}}} - T}{{T_{\mathrm{c}}}} \; \Vert \Psi\Vert_2^2.
\end{align*}
\item Given $D\in \mathbb{R}$ there is $h_0>0$ such that for $0< h\leq h_0$ and $T = {T_{\mathrm{c}}} (1 - Dh^2)$ we have
\begin{align*}
\langle \Delta, M_T^{(1)} \Delta - M_{{T_{\mathrm{c}}}}^{(1)} \Delta\rangle = 4\; Dh^2 \; \Lambda_2 \; \Vert \Psi\Vert_2^2 + R(\Delta)
\end{align*}
with the coefficient $\Lambda_2$ in \eqref{GL-coefficient_2}, and
\begin{align*}
|R(\Delta)| &\leq C \; h^6 \; \Vert V\alpha_*\Vert_2^2\; \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2.
\end{align*}
\item Assume additionally that $| \cdot | V\alpha_*\in L^2(\mathbb{R}^3)$. Then, there is $h_0>0$ such that for any $0< h \leq h_0$, any $\Psi\in H_{\mathrm{mag}}^1(Q_h)$, and any $T \geq T_0 > 0$ we have
\begin{align*}
\Vert M_T^{(1)}\Delta - M_{{T_{\mathrm{c}}}}^{(1)}\Delta\Vert_{{H^1(Q_h \times \Rbb_{\mathrm s}^3)}}^2 &\leq C \, h^2 \, | T - {T_{\mathrm{c}}} |^2 \, \bigl( \Vert V\alpha_*\Vert_2^2 + \Vert \, |\cdot| V\alpha_*\Vert_2^2\bigr) \Vert\Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2.
\end{align*}
\end{enumerate}
\end{prop}
\begin{bem}
Parts (a) and (c) of the above proposition are needed for the proof of Theorem~\ref{Calculation_of_the_GL-energy}, part (b) is needed for the proof of Proposition~\ref{Lower_Tc_a_priori_bound}, and part (d) for the proof of Proposition~\ref{Structure_of_alphaDelta}.
\end{bem}
\paragraph{The operator $M_{T,\mathbf A}^{(2)}$.} The kinetic term in the Ginzburg--Landau functional in \eqref{Definition_GL-functional} is contained in $\langle \Delta, M_{T,\mathbf A}^{(2)} \Delta \rangle$ with $M_{T,\mathbf A}^{(2)}$ defined in \eqref{MTB2_definition}. The following proposition allows us to extract this term.
\begin{prop}
\label{MTB2}
Let $V\alpha_* \in L^2(\mathbb{R}^3)$ be a radial function, let $A\in W^{1,\infty}(\mathbb{R}^3,\mathbb{R}^3)$ be periodic, assume $\Psi\in H_{\mathrm{mag}}^1(Q_h)$, and denote $\Delta \equiv \Delta_\Psi$ as in \eqref{Delta_definition}. We then have
\begin{align}
\langle \Delta, M_{{T_{\mathrm{c}}},\mathbf A}^{(2)} \Delta\rangle = - 4\; \Lambda_0 \; \Vert \Pi_{\mathbf A_h} \Psi\Vert_2^2 \label{MTA2_1}
\end{align}
with $\Lambda_0$ in \eqref{GL-coefficient_1}. Moreover, for any $T \geq T_0 > 0$ we have
\begin{align}
|\langle \Delta, M_{T,\mathbf A}^{(2)} \Delta - M_{{T_{\mathrm{c}}},\mathbf A}^{(2)}\Delta\rangle| \leq C\; h^4 \; |T - {T_{\mathrm{c}}}| \; \Vert V\alpha_*\Vert_2^2 \; \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2. \label{MTA2_2}
\end{align}
\end{prop}
\begin{proof}
The proof is analogous to the proof of \cite[Proposition 4.13]{DeHaSc2021} with the obvious replacements, and is therefore omitted.
\end{proof}
\paragraph{The operator $M_{T,\mathbf A}^{(3)}$.} The term $\langle \Delta, M_{T,\mathbf A}^{(3)} \Delta \rangle$ with $M_{T,\mathbf A}^{(3)}$ in \eqref{MTA3_definition} does not contribute to the Ginzburg--Landau energy. To obtain a bound for it, we need, as in \cite{DeHaSc2021}, the $H_{\mathrm{mag}}^2(Q_h)$-norm of $\Psi$.
\begin{prop}
\label{MTB3}
Let $V\alpha_* \in L^2(\mathbb{R}^3)$, let $A\in W^{1,\infty}(\mathbb{R}^3,\mathbb{R}^3)$ be periodic, assume that $\Psi\in H_{\mathrm{mag}}^1(Q_h)$, and denote $\Delta \equiv \Delta_\Psi$ as in \eqref{Delta_definition}. For any $T_0>0$ there is $h_0>0$ such that for any $T\geq T_0$ and any $0 < h \leq h_0$ we have
\begin{align*}
|\langle \Delta, M_{T,\mathbf A}^{(3)} \Delta\rangle| &\leq C \; h^6 \; \Vert V\alpha_*\Vert_2^2 \; \Vert \Psi\Vert_{H_{\mathrm{mag}}^2(Q_h)}^2.
\end{align*}
\end{prop}
Before we give the proof of Proposition~\ref{MTB3}, we state and prove the following lemma.
\begin{lem}
\label{ZPiX_inequality_quartic}
Assume that $\mathbf A = \mathbf A_{e_3} + A$ with a periodic function $A \in W^{2,\infty}(\mathbb{R}^3,\mathbb{R}^3)$.
\begin{enumerate}[(a)]
\item For $\varepsilon >0$ we have
\begin{align}
|Z\cdot \Pi_\mathbf A|^4 &\leq C\; |Z|^4 \; \bigl[ \Pi_\mathbf A^4 + \varepsilon \, \Pi_\mathbf A^2 + \, |\curl \mathbf A|^2 + \varepsilon^{-1} \, \bigl( |\curl(\curl \mathbf A)|^2 + |\nabla (\curl \mathbf A)|^2 \bigr) \bigr]. \label{ZPiX_inequality_quartic_eq}
\end{align}
The gradient in the last term is understood to act on each component of $\curl \mathbf A$ separately, so that the result is a matrix-valued function with nine components.
\item Assume that $\Psi\in H_{\mathrm{mag}}^2(Q_h)$. There is a constant $h_0>0$ such that for any $0 < h \leq h_0$, we have
\begin{align*}
\langle \Psi, \, |Z\cdot \Pi_{\mathbf A_h}|^4 \, \Psi\rangle &\leq C\; h^6 \; |Z|^4 \; \Vert \Psi\Vert_{H_{\mathrm{mag}}^2(Q_h)}^2.
\end{align*}
\end{enumerate}
\end{lem}
\begin{proof}
We first give the proof of part (a) and start by noting that
\begin{align}
\bigl[\Pi_\mathbf A^{(i)}, \Pi_\mathbf A^{(j)}\bigr] = -\i \sum_{k=1}^3 \varepsilon_{ijk} \; (\curl \mathbf A)_k \label{ZPiX_inequality_quartic_3}
\end{align}
with the Levi--Civita symbol $\varepsilon_{ijk}$, which equals $1$ if $(i, j, k)$ is a cyclic permutation of $(1, 2, 3)$, equals $-1$ if it is an anticyclic permutation, and vanishes if at least two indices coincide. We claim that
\begin{align}
\Pi_\mathbf A \; \Pi_\mathbf A^2\; \Pi_\mathbf A = \Pi_\mathbf A^4 + 2 \, |\curl \mathbf A|^2 - \curl (\curl \mathbf A) \cdot \Pi_\mathbf A. \label{PiPi2Pi_equality}
\end{align}
If this holds, then we can use the fact that all terms in the above equation except for the last are self-adjoint to see that
\begin{align}
\bigl[ \curl (\curl \mathbf A) , \Pi_\mathbf A \bigr] = 0. \label{ZPiX_inequality_quartic_4}
\end{align}
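Before continuing, let us recall how the commutation relation \eqref{ZPiX_inequality_quartic_3} is obtained. For a magnetic momentum operator of the form $\pi_{\mathbf a} = -\i \nabla + \mathbf a$ with a sufficiently regular vector field $\mathbf a$ (in our normalization, $\mathbf a$ denotes the full vector potential appearing in $\Pi_\mathbf A$), the identities $[-\i \partial_i, a_j] = -\i \, (\partial_i a_j)$ and $[a_i, -\i \partial_j] = \i \, (\partial_j a_i)$ imply
\begin{align*}
\bigl[ \pi_{\mathbf a}^{(i)}, \pi_{\mathbf a}^{(j)} \bigr] = -\i \, \bigl( \partial_i a_j - \partial_j a_i \bigr) = -\i \sum_{k=1}^3 \varepsilon_{ijk} \, (\curl \mathbf a)_k .
\end{align*}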
To prove \eqref{PiPi2Pi_equality}, we first note that
\begin{align}
\Pi_\mathbf A \; \Pi_\mathbf A^2 &= \Pi_\mathbf A^2 \; \Pi_\mathbf A + 2 \sum_{i=1}^3 [\Pi_\mathbf A, \Pi_\mathbf A^{(i)}] \, \Pi_\mathbf A^{(i)} + \sum_{i=1}^3 \bigl[\Pi_\mathbf A^{(i)} , [\Pi_\mathbf A, \Pi_\mathbf A^{(i)}] \bigr].
\label{ZPiX_inequality_quartic_6}
\end{align}
Moreover, an application of \eqref{ZPiX_inequality_quartic_3} shows that
\begin{align}
\sum_{i=1}^3 [\Pi_\mathbf A, \Pi_\mathbf A^{(i)}] \, \Pi_\mathbf A^{(i)} = \i (\curl \mathbf A) \wedge \Pi_\mathbf A \label{ZPiX_inequality_quartic_7}
\end{align}
and
\begin{align}
\sum_{i=1}^3 \bigl[\Pi_\mathbf A^{(i)} , [\Pi_\mathbf A, \Pi_\mathbf A^{(i)}] \bigr] = - \curl(\curl \mathbf A). \label{ZPiX_inequality_quartic_8}
\end{align}
We combine \eqref{ZPiX_inequality_quartic_6}-\eqref{ZPiX_inequality_quartic_8} and find
\begin{align*}
\Pi_\mathbf A \; \Pi_\mathbf A^2 \; \Pi_\mathbf A = \Pi_\mathbf A^4 + 2\i \bigl( (\curl \mathbf A) \wedge \Pi_\mathbf A\bigr) \cdot \Pi_\mathbf A - \curl (\curl \mathbf A) \cdot \Pi_\mathbf A.
\end{align*}
Using additionally the identity $\i ( (\curl \mathbf A) \wedge \Pi_\mathbf A) \cdot \Pi_\mathbf A = |\curl \mathbf A|^2$, we conclude that \eqref{PiPi2Pi_equality} holds.
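The identity $\i ( (\curl \mathbf A) \wedge \Pi_\mathbf A) \cdot \Pi_\mathbf A = |\curl \mathbf A|^2$ can be verified with \eqref{ZPiX_inequality_quartic_3}: writing $\mathbf W \coloneqq \curl \mathbf A$ for brevity, we have
\begin{align*}
\i \, \bigl( \mathbf W \wedge \Pi_\mathbf A \bigr) \cdot \Pi_\mathbf A = \i \sum_{i,j,k=1}^3 \varepsilon_{ijk} \, W_j \, \Pi_\mathbf A^{(k)} \, \Pi_\mathbf A^{(i)} = \frac \i 2 \sum_{i,j,k=1}^3 \varepsilon_{ijk} \, W_j \, \bigl[ \Pi_\mathbf A^{(k)}, \Pi_\mathbf A^{(i)} \bigr] = |\mathbf W|^2,
\end{align*}
where the second equality uses the antisymmetry of $\varepsilon_{ijk}$ in $i$ and $k$, and the last step follows from \eqref{ZPiX_inequality_quartic_3} and the contraction $\sum_{i,k=1}^3 \varepsilon_{ijk} \, \varepsilon_{kim} = 2\, \delta_{jm}$.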
Our next goal is to prove the formula
\begin{align}
(Z\cdot \Pi_\mathbf A) \, \Pi_\mathbf A^2\, (Z\cdot \Pi_\mathbf A) &= \Pi_\mathbf A \, (Z\cdot \Pi_\mathbf A)^2 \, \Pi_\mathbf A + (Z \cdot \curl(\curl \mathbf A)) \; (Z \cdot \Pi_\mathbf A) \notag \\
&\hspace{100pt} + (Z \wedge \Pi_\mathbf A) \cdot \bigl( (Z\cdot \nabla) (\curl \mathbf A)\bigr). \label{ZPiX_inequality_quartic_1}
\end{align}
We note that
\begin{align}
(Z\cdot \Pi_\mathbf A) \, \Pi_\mathbf A^2\, (Z\cdot \Pi_\mathbf A) = \sum_{i,j =1}^3 Z_iZ_j \; \Pi_\mathbf A^{(i)} \, \Pi_\mathbf A^2 \, \Pi_\mathbf A^{(j)} \label{ZPiX_inequality_quartic_2}
\end{align}
and
\begin{align*}
\Pi_\mathbf A^{(i)} \, \Pi_\mathbf A^2 \, \Pi_\mathbf A^{(j)} = \sum_{m = 1}^3 \bigl( \Pi_\mathbf A^{(m)} \, \Pi_\mathbf A^{(i)} \, \Pi_\mathbf A^{(j)} \, \Pi_\mathbf A^{(m)} + \Pi_\mathbf A^{(m)} \, \Pi_\mathbf A^{(i)} \, [ \Pi_\mathbf A^{(m)}, \Pi_\mathbf A^{(j)}] + [\Pi_\mathbf A^{(i)}, \Pi_\mathbf A^{(m)}] \, \Pi_\mathbf A^{(m)} \, \Pi_\mathbf A^{(j)} \bigr).
\end{align*}
The sum in \eqref{ZPiX_inequality_quartic_2} is left unchanged when we exchange the indices $i$ and $j$. Motivated by this, we combine $\frac 12$ times the original term and $\frac 12$ times the term with $i$ and $j$ interchanged and find
\begin{align}
\Pi_\mathbf A^{(i)} \, \Pi_\mathbf A^2 \, \Pi_\mathbf A^{(j)} + \Pi_\mathbf A^{(j)} \, \Pi_\mathbf A^2 \, \Pi_\mathbf A^{(i)} &= \sum_{m=1}^3 \bigl( \Pi_\mathbf A^{(m)} \, \Pi_\mathbf A^{(i)} \, \Pi_\mathbf A^{(j)} \, \Pi_\mathbf A^{(m)} + \Pi_\mathbf A^{(m)} \, \Pi_\mathbf A^{(j)} \, \Pi_\mathbf A^{(i)} \, \Pi_\mathbf A^{(m)} \notag \\
&\hspace{-30pt} + \bigl[ [\Pi_\mathbf A^{(i)}, \Pi_\mathbf A^{(m)}] , \Pi_\mathbf A^{(m)} \, \Pi_\mathbf A^{(j)}\bigr] + \bigl[ [\Pi_\mathbf A^{(j)}, \Pi_\mathbf A^{(m)}] , \Pi_\mathbf A^{(m)} \, \Pi_\mathbf A^{(i)}\bigr] \bigr). \label{ZPiX_inequality_quartic_9}
\end{align}
Using the commutator identity $[A, BC] = B\, [A, C] + [A, B] \, C$ we write the third term as
\begin{align}
\bigl[ [\Pi_\mathbf A^{(i)}, \Pi_\mathbf A^{(m)}] , \Pi_\mathbf A^{(m)} \, \Pi_\mathbf A^{(j)}\bigr] &= \Pi_\mathbf A^{(m)} \bigl[ [\Pi_\mathbf A^{(i)}, \Pi_\mathbf A^{(m)}] , \Pi_\mathbf A^{(j)}\bigr] + \bigl[ [\Pi_\mathbf A^{(i)}, \Pi_\mathbf A^{(m)}] , \Pi_\mathbf A^{(m)} \bigr] \, \Pi_\mathbf A^{(j)} \label{ZPiX_inequality_quartic_10}
\end{align}
and likewise for the term with $i$ and $j$ interchanged.
We also use \eqref{ZPiX_inequality_quartic_8} to see that
\begin{align*}
\frac 12 \sum_{i,j =1}^3 Z_iZ_j \Bigl( \bigl[ [\Pi_\mathbf A^{(j)} , \Pi_\mathbf A] , \Pi_\mathbf A\bigr] \Pi_\mathbf A^{(i)} + \bigl[ [\Pi_\mathbf A^{(i)} , \Pi_\mathbf A] , \Pi_\mathbf A\bigr] \Pi_\mathbf A^{(j)} \Bigr) = (Z \cdot \curl(\curl \mathbf A) )\; (Z\cdot \Pi_\mathbf A)
\end{align*}
holds. Concerning the first term on the right side of \eqref{ZPiX_inequality_quartic_10}, \eqref{ZPiX_inequality_quartic_3} can be used to show
\begin{align*}
\bigl[ [\Pi_\mathbf A^{(i)}, \Pi_\mathbf A^{(m)}] , \Pi_\mathbf A^{(j)}\bigr] = \sum_{k=1}^3 \varepsilon_{imk} \, \partial_j (\curl \mathbf A)_k,
\end{align*}
which implies
\begin{align}
&\frac 12 \sum_{i,j=1}^3 Z_i Z_j \bigl( \Pi_{\mathbf A} \bigl[ [ \Pi_{\mathbf A}^{(i)}, \Pi_{\mathbf A} ], \Pi_{\mathbf A}^{(j)} \bigr] + \Pi_{\mathbf A} \bigl[ [ \Pi_{\mathbf A}^{(j)}, \Pi_{\mathbf A} ], \Pi_{\mathbf A}^{(i)} \bigr] \bigr) = (Z \wedge \Pi_\mathbf A) \cdot \bigl( (Z\cdot \nabla) (\curl \mathbf A)\bigr). \label{ZPiX_inequality_quartic_11}
\end{align}
When we combine \eqref{ZPiX_inequality_quartic_2}--\eqref{ZPiX_inequality_quartic_11}, this proves \eqref{ZPiX_inequality_quartic_1}. We are now prepared to give the proof of \eqref{ZPiX_inequality_quartic_eq}.
We start by noting that $|A + B + C|^2 \leq 3(|A|^2 + |B|^2 + |C|^2)$ holds for three linear operators $A, B, C$, which implies
\begin{align}
(Z\cdot \Pi_\mathbf A)^2 &\leq 3 \; \bigl( Z_1^2 \; (\Pi_\mathbf A^{(1)})^2 + Z_2^2 \; (\Pi_\mathbf A^{(2)})^2 + Z_3^2 \; (\Pi_\mathbf A^{(3)})^2\bigr) \leq 3 \; |Z|^2 \; \Pi_\mathbf A^2. \label{ZPiX_inequality}
\end{align}
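The inequality \eqref{ZPiX_inequality} rests on the elementary bounds $|A+B+C|^2 \leq 3(|A|^2+|B|^2+|C|^2)$ and $Z_i^2 \leq |Z|^2$, which hold in the sense of operators. As an illustrative check (not part of the proof), the resulting inequality $(Z\cdot \Pi)^2 \leq 3\,|Z|^2\, \Pi^2$ can be tested with random non-commuting self-adjoint matrices in place of the components of $\Pi_\mathbf A$:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_hermitian(n):
    m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (m + m.conj().T) / 2

# random non-commuting self-adjoint matrices standing in for Pi^(1), Pi^(2), Pi^(3)
P = [random_hermitian(6) for _ in range(3)]
Z = rng.standard_normal(3)

ZP = sum(z * p for z, p in zip(Z, P))                  # Z . Pi
gap = 3 * float(Z @ Z) * sum(p @ p for p in P) - ZP @ ZP
assert np.linalg.eigvalsh(gap).min() > -1e-9           # operator inequality holds
```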
We use \eqref{PiPi2Pi_equality}, \eqref{ZPiX_inequality_quartic_1}, and \eqref{ZPiX_inequality} to show
\begin{align*}
(Z\cdot \Pi_\mathbf A) \, \Pi_\mathbf A^2\, (Z\cdot \Pi_\mathbf A) &\leq 3\, |Z|^2 \, \bigl(\Pi_\mathbf A^4+ 2\, |\curl \mathbf A|^2 - \curl(\curl \mathbf A) \cdot \Pi_\mathbf A\bigr) \\
&\hspace{10pt} + (Z \cdot \curl(\curl \mathbf A)) \, (Z\cdot \Pi_\mathbf A) + (Z \wedge \Pi_\mathbf A) \cdot \bigl( (Z\cdot \nabla) (\curl \mathbf A)\bigr).
\end{align*}
Next, we write $|Z\cdot \Pi_\mathbf A|^4 = (Z\cdot \Pi_\mathbf A) (Z\cdot \Pi_\mathbf A)^2 (Z\cdot \Pi_\mathbf A)$, apply \eqref{ZPiX_inequality} to the term in the middle, and find
\begin{align*}
|Z\cdot \Pi_\mathbf A|^4 &\leq 9 \, |Z|^4 \bigl( \Pi_\mathbf A^4 + 2\, |\curl \mathbf A|^2 - \curl (\curl \mathbf A) \cdot \Pi_\mathbf A \bigr) \\
&\hspace{50pt} + 3 \, |Z|^2 \bigl[ (Z \cdot \curl(\curl \mathbf A)) \, (Z\cdot \Pi_\mathbf A) + (Z \wedge \Pi_\mathbf A) \cdot \bigl( (Z\cdot \nabla) (\curl \mathbf A)\bigr)\bigr].
\end{align*}
Moreover, $AB + BA \leq \varepsilon \, A^2 + \frac{1}{\varepsilon} \, B^2$ for $\varepsilon>0$ and self-adjoint operators $A$ and $B$, \eqref{ZPiX_inequality}, and $|Z\wedge \Pi_\mathbf A|^2 \leqslant 3 |Z|^2 \Pi_\mathbf A^2$ imply that the right side of the above inequality is bounded by
\begin{align*}
C \, |Z|^4 \bigl[ \Pi_\mathbf A^4 + \, |\curl \mathbf A|^2 + \varepsilon \, \Pi_\mathbf A^2 + \varepsilon^{-1} \bigl( |\curl(\curl \mathbf A)|^2 + |\nabla (\curl \mathbf A)|^2 \bigr)\bigr].
\end{align*}
This proves part (a).
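The operator inequality $AB + BA \leq \varepsilon\, A^2 + \varepsilon^{-1} B^2$ for self-adjoint $A$, $B$ follows from $0 \leq (\sqrt{\varepsilon}\, A - \varepsilon^{-1/2} B)^2$. A quick numerical illustration with random self-adjoint matrices (no part of the proof):

```python
import numpy as np

rng = np.random.default_rng(2)

def random_hermitian(n):
    m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (m + m.conj().T) / 2

A, B = random_hermitian(6), random_hermitian(6)
eps = 0.3

# eps*A^2 + (1/eps)*B^2 - AB - BA = (sqrt(eps)*A - B/sqrt(eps))^2 >= 0
gap = eps * A @ A + (1 / eps) * B @ B - (A @ B + B @ A)
assert np.linalg.eigvalsh(gap).min() > -1e-9
```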
Part (b) is a direct consequence of part (a) with the choice $\varepsilon = h^2$. This proves Lemma~\ref{ZPiX_inequality_quartic}.
\end{proof}
\begin{proof}[Proof of Proposition \ref{MTB3}] We use the definition of $M_{T, \mathbf A}^{(3)}$ to write
\begin{align}
\langle \Delta, M_{T, \mathbf A}^{(3)} \Delta \rangle &= 4 \iiint_{\mathbb{R}^3\times \mathbb{R}^3\times \mathbb{R}^3} \mathrm{d} r\mathrm{d} s\mathrm{d} Z \; V\alpha_*(r) V\alpha_*(s)\, k_T(Z, r-s) \; \langle \Psi , \mathcal{R}(Z\cdot \Pi_{\mathbf A_h})\Psi\rangle. \label{MTB3_1}
\end{align}
The function $\mathcal{R}(x) = \cos(x) - 1 + \frac{x^2}{2}$ satisfies the bound $0\leq\mathcal{R}(x) \leq \frac{1}{24} x^4$, and hence an application of Lemma~\ref{ZPiX_inequality_quartic} shows
\begin{align}
\langle \Psi, \mathcal{R}(Z\cdot \Pi_{\mathbf A_h}) \Psi\rangle &\leq C\; h^6 \; |Z|^4 \; \Vert \Psi\Vert_{H_{\mathrm{mag}}^2(Q_h)}^2. \label{MTB3_2}
\end{align}
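The elementary bound $0 \leq \mathcal{R}(x) \leq \frac{1}{24}x^4$ used above can be checked numerically on a grid; the snippet below is purely illustrative:

```python
import numpy as np

x = np.linspace(-30.0, 30.0, 20001)
R = np.cos(x) - 1 + x**2 / 2       # R(x) = cos(x) - 1 + x^2/2

# 0 <= R(x) <= x^4 / 24 on the whole grid (up to floating-point tolerance)
assert (R >= -1e-9).all()
assert (R <= x**4 / 24 + 1e-9).all()
```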
When we apply the estimate $|Z|^4 \leq | Z + \frac{r}{2} |^4 + | Z - \frac{r}{2} |^4$, we see that
\begin{align}
\int_{\mathbb{R}^3} \mathrm{d} Z\; |Z|^4 \; |k_T(Z,r)| &\leq F_T^4(r) \label{MTB3_3}
\end{align}
holds with $F_T^4$ defined in \eqref{LtildeTA-MtildeTA_FT_definition}. In combination with \eqref{MTB3_1} and \eqref{MTB3_2}, this shows
\begin{equation*}
|\langle \Delta, M_{T, \mathbf A}^{(3)} \Delta \rangle| \leqslant C\; h^6 \; \Vert \Psi\Vert_{H_{\mathrm{mag}}^2(Q_h)}^2 \; \Vert (V \alpha_*) F_T^4 \ast (V \alpha_*) \Vert_1.
\end{equation*}
In combination with the $L^1(\mathbb{R}^3)$-norm bound for $F_T^4$ in \eqref{LtildeTA-MtildeTA_FTGT}, this proves the claim.
\end{proof}
\subsubsection{A representation formula for the operator \texorpdfstring{$\Wcal_{T,\mathbf A}$}{WTA}}
\label{LTAW_action_Section}
In the next five subsections we study the operator $\Wcal_{T, \mathbf A}$ in \eqref{LTA^W_definition}. In particular, we extract the term in the GL functional that is proportional to $W$ from $\langle \Delta, \Wcal_{T, \mathbf A} \Delta \rangle$. The operator $\Wcal_{T, 0}$ has previously been studied in \cite{ProceedingsSpohn}. After the magnetic field has been removed, our analysis mostly follows ideas in this reference. Because of this and because several ideas of the previous sections appear again, we keep our presentation rather short and only mention the main ideas. As in the case of $L_{T, \mathbf A}$, we start our analysis with a representation formula for $\Wcal_{T, \mathbf A}$ in terms of relative and center-of-mass coordinates.
\begin{lem}
\label{LTAW_action}
The operator $\Wcal_{T, \mathbf A} \colon {L^2(Q_h \times \Rbb_{\mathrm s}^3)} \rightarrow {L^2(Q_h \times \Rbb_{\mathrm s}^3)}$ in \eqref{LTA^W_definition} acts as
\begin{align}
\Wcal_{T, \mathbf A} \alpha(X, r) &= \iint_{\mathbb{R}^3 \times \mathbb{R}^3} \mathrm{d} Z \mathrm{d} s \; k_{T, \mathbf A, W}(X, Z, r, s) \; (\mathrm{e}^{\i Z \cdot (-\i \nabla_X)} \alpha)(X, s),
\label{eq:Andinew2}
\end{align}
where
\begin{align}
k_{T, \mathbf A, W} (X, Z, r, s) &\coloneqq \smash{\frac 2\beta \sum_{n\in \mathbb{Z}} \int_{\mathbb{R}^3} \mathrm{d} Y} \; W_h(X + Y) \bigl[ k_{T, \mathbf A, +}^n(X, Y, Z, r, s) \; \mathrm{e}^{\i \Theta_{\mathbf A_h}^+(X, Y, Z, r, s)} \notag \\
&\hspace{80pt} + k_{T, \mathbf A, -}^n(X, Y, Z, r, s) \; \mathrm{e}^{\i \Theta_{\mathbf A_h}^-(X, Y, Z, r, s)} \bigr], \label{kTAW_definition}
\end{align}
where
\begin{align}
k_{T, \mathbf A, \pm}^n(X, Y, Z, r, s) &\coloneqq g_h^{\pm \i \omega_n} \bigl( X \pm \frac r2, X + Y\bigr) \; g_h^{\pm \i \omega_n} \bigl( X + Y, X + Z \pm \frac s2\bigr)\notag \\
&\hspace{120pt} \times g_h^{\mp \i \omega_n} \bigl( X \mp \frac r2, X + Z \mp \frac s2\bigr). \label{kTAWn_definition}
\end{align}
The function $g_h^z$ is defined in \eqref{ghz_definition} and
\begin{align}
\Theta_\mathbf A^\pm (X, Y, Z, r, s) &\coloneqq \Phi_\mathbf A \bigl( X \pm \frac r2, X + Y\bigr) + \Phi_\mathbf A \bigl( X + Y, X + Z \pm \frac s2\bigr)\notag \\
&\hspace{120pt} + \Phi_\mathbf A \bigl( X \mp \frac r2, X + Z \mp \frac s2\bigr). \label{LTA^W_Phi_definition}
\end{align}
\end{lem}
\begin{proof}
The proof that $\Wcal_{T, \mathbf A}$ is a bounded linear map on ${L^2(Q_h \times \Rbb_{\mathrm s}^3)}$ goes along the same lines as that of Lemma~\ref{lem:propsLN}. The proof of the representation formula is analogous to the proof of Lemma~\ref{LTA_action}. We use \eqref{GAz_Kernel_of_complex_conjugate} for $W=0$ to write
\begin{align}
\Wcal_{T, \mathbf A} \alpha(x, y) &= \frac 2\beta \sum_{n\in \mathbb{Z}} \iiint_{\mathbb{R}^9} \mathrm{d} u \mathrm{d} v \mathrm{d} w \; \bigl[ G_h^{\i \omega_n} (x, u) \, W_h(u) \, G_h^{\i\omega_n} (u,v) \, \alpha(v, w) \, G_h^{-\i\omega_n}(y, w) \notag \\
&\hspace{60pt} + G_{\mathbf A_h}^{\i\omega_n} (x, v) \, \alpha(v, w) \, G_h^{-\i\omega_n} (u,w) \, W_h(u) \, G_h^{-\i\omega_n}(y, u)\bigr].
\label{eq:Andinew}
\end{align}
When we define the coordinates $X = \frac{x + y}{2}$ and $r = x-y$, apply the change of variables
\begin{align*}
u &= X + Y, & v &= X + Z + \frac s2, & w &= X + Z - \frac s2,
\end{align*}
and use \eqref{ghz_definition}, this yields \eqref{eq:Andinew2}. We highlight that, by a slight abuse of notation, we denoted the function $\alpha$ depending on the original coordinates in \eqref{eq:Andinew} and the function depending on relative and center-of-mass coordinates in \eqref{eq:Andinew2} by the same symbol.
\end{proof}
\subsubsection{Approximation of the operator \texorpdfstring{$\Wcal_{T, \mathbf A}$}{WTA}}
\label{Approximation_of_LTA^W_Section}
The operator $\Wcal_{T, \mathbf A}$ will be analyzed in three steps. More precisely, we write
\begin{align}
\Wcal_{T, \mathbf A} = \bigl( \Wcal_{T, \mathbf A} - \tilde \Wcal_{T, \mathbf A} \bigr) + \bigl( \tilde \Wcal_{T, \mathbf A} - \Wcal_T \bigr) + \Wcal_T,
\label{eq:Andinew3}
\end{align}
where $\tilde \Wcal_{T, \mathbf A}$ and $\Wcal_T$ are operators of increasing simplicity in their dependence on $W$ and $\mathbf A$. They are defined below in \eqref{LtildeTAW_definition} and \eqref{MTW_definition}, respectively. The term in the Ginzburg--Landau functional that is proportional to $W$ will be extracted from the expectation of the operator $\Wcal_T$ with respect to $\Delta$. The expectation of the first two terms in \eqref{eq:Andinew3} will be shown to be negligible.
\paragraph{The operator $\tilde \Wcal_{T, \mathbf A}$.}
We define the operator
\begin{align}
\tilde \Wcal_{T, \mathbf A} \alpha(X, r) &\coloneqq W_h(X) \iint_{\mathbb{R}^3 \times \mathbb{R}^3} \mathrm{d} Z \mathrm{d} s \; k_{T, \mathbf A, 0}(X, Z, r, s) \; (\mathrm{e}^{\i Z \cdot (-\i \nabla_X)} \alpha)(X, s), \label{LtildeTAW_definition}
\end{align}
where $k_{T, \mathbf A, W}$ is defined in \eqref{kTAW_definition}. The following proposition allows us to estimate the expectation of the first term in \eqref{eq:Andinew3} with respect to $\Delta$.
\begin{prop}
\label{LTAW-LtildeTAW}
Let $| \cdot|^k V\alpha_* \in L^2(\mathbb{R}^3)$ for $k \in \{ 0,1 \}$, let $A\in W^{3,\infty}(\mathbb{R}^3,\mathbb{R}^3)$ and $W \in W^{1,\infty}(\mathbb{R}^3,\mathbb{R})$ be periodic, assume $\Psi\in H_{\mathrm{mag}}^1(Q_h)$, and denote $\Delta \equiv \Delta_\Psi$ as in \eqref{Delta_definition}. For any $T_0>0$ there is $h_0>0$ such that for any $T\geq T_0$ and any $0 < h \leq h_0$ we have
\begin{align*}
|\langle \Delta, \Wcal_{T, \mathbf A} \Delta - \tilde \Wcal_{T, \mathbf A} \Delta \rangle| \leq C\, h^5\, \max_{k=0,1} \Vert \, |\cdot|^k \ V\alpha_*\Vert_2^2 \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2.
\end{align*}
\end{prop}
\begin{proof}
We write
\begin{align}
|\langle \Delta, \Wcal_{T, \mathbf A}\Delta- \tilde \Wcal_{T, \mathbf A} \Delta \rangle | & \notag \\
&\hspace{-110pt}\leq 4 \, \Vert \Psi\Vert_2^2 \iiint_{\mathbb{R}^9} \mathrm{d} Z \mathrm{d} r \mathrm{d} s \; \esssup_{X\in \mathbb{R}^3} |(k_{T, \mathbf A, W} - k_{T, \mathbf A, 0})(X, Z, r, s)| \; |V\alpha_*(r)| \, |V\alpha_*(s)| \label{LTAW-LtildeTAW_2}
\end{align}
and note that
\begin{align}
|(k_{T, \mathbf A, W} - k_{T, \mathbf A, 0})(X, Z, r, s)| & \notag\\
&\hspace{-140pt} \leq \frac 2\beta \sum_{n\in \mathbb{Z}} \int_{\mathbb{R}^3} \mathrm{d} Y \; |W_h(X + Y) - W_h(X)| \; \bigl( |k_{T, \mathbf A, +}^n| + |k_{T, \mathbf A, -}^n|\bigr)(X, Y, Z, r, s).
\label{LTAW-LtildeTAW_1}
\end{align}
When we use \eqref{LTAW-LtildeTAW_1}, $|W(X + Y) - W(X)|\leq \Vert \nabla W\Vert_\infty \, |Y|$, $|Y| \leq |Y \pm \frac r2| + |\frac r 2|$, and Proposition~\ref{gh-g_decay}, we see that the right side of \eqref{LTAW-LtildeTAW_2} is bounded by a constant times
\begin{align*}
\Vert \Psi \Vert_2^2 \; \Vert \nabla W_h \Vert_{\infty} \; \Vert (1+|\cdot|) (V \alpha_*) \tilde F_{T}^1 \ast | V \alpha_* | \ \Vert_1,
\end{align*}
where
\begin{align*}
\tilde F_{T}^a \coloneqq \frac 2\beta \sum_{n\in \mathbb{Z}} \sum_{b=0}^a \left[ \bigl( |\cdot|^b \, \rho^{ \i\omega_n}\bigr) * \rho^{\i\omega_n} * \rho^{- \i \omega_n} + \bigl( |\cdot|^b \, \rho^{- \i\omega_n}\bigr) * \rho^{- \i\omega_n} * \rho^{ \i \omega_n} \right].
\end{align*}
With Proposition~\ref{gh-g_decay} and \eqref{g0_decay_f_estimate1} we show that $\Vert \tilde F_{T}^a\Vert_1 \leq C$. In combination with the bound $\Vert \nabla W_h\Vert_\infty \leq Ch^3$, this proves the claim.
\end{proof}
\paragraph{The operator $\Wcal_T$.}
We define the operator $\Wcal_T$ by
\begin{align}
\Wcal_T \alpha(X, r) &\coloneqq W_h(X) \iint_{\mathbb{R}^3 \times \mathbb{R}^3} \mathrm{d} Z \mathrm{d} s \; k_T(Z, r-s) \; \alpha(X, s), \label{MTW_definition}
\end{align}
where
\begin{align}
k_T(Z, r) &\coloneqq \frac 2\beta \sum_{n\in \mathbb{Z}} \bigl( k_{T, +}^n (Z, r) + k_{T,-}^n(Z, r) \bigr) \label{kTW_definition}
\end{align}
and
\begin{align}
k_{T, \pm}^n(Z, r) &\coloneqq (g_0^{\pm \i \omega_n} * g_0^{\pm \i \omega_n})\bigl( Z \mp \frac r2\bigr)\; g_0^{\mp \i \omega_n} \bigl( Z \pm \frac r2\bigr). \label{kTWn_definition}
\end{align}
\begin{prop}
\label{MtildeTAW-MTW}
Let $| \cdot|^k V\alpha_* \in L^2(\mathbb{R}^3)$ for $k \in \{ 0,1 \}$, let $A\in W^{3,\infty}(\mathbb{R}^3,\mathbb{R}^3)$ and $W \in L^{\infty}(\mathbb{R}^3,\mathbb{R})$ be periodic, assume $\Psi\in H_{\mathrm{mag}}^1(Q_h)$, and denote $\Delta \equiv \Delta_\Psi$ as in \eqref{Delta_definition}. For any $T_0>0$ there is $h_0>0$ such that for any $T\geq T_0$ and any $0 < h \leq h_0$ we have
\begin{align*}
|\langle \Delta, \tilde \Wcal_{T, \mathbf A} \Delta - \Wcal_T \Delta \rangle| \leq C \, h^5 \, \max_{k=0,1} \Vert \, |\cdot|^k \ V\alpha_*\Vert_2^2 \; \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2.
\end{align*}
\end{prop}
\begin{proof}[Sketch of proof]
The proof goes along the same lines as that of Propositions~\ref{LTA-LtildeTA}, \ref{LtildeTA-MtildeTA}, and \ref{MtildeTA-MTA} with the notable simplification that we only need to prove bounds for the quadratic form. We therefore only mention the main steps that need to be carried out and leave the details to the reader. In the first step $g_h^z$ is replaced by $g_0^z$. In the second step the part of the magnetic phase coming from $\mathbf A_\mathbf B$ is split off. A careful analysis shows that
\begin{align*}
\Theta_{\mathbf A_\mathbf B}^\pm(X, Y, Z, r, s) = Z \cdot (\mathbf B \wedge X) + \frac{\mathbf B}{2} \cdot \theta_\pm(Y, Z, r, s),
\end{align*}
where
\begin{align}
\theta_\pm(Y, Z, r, s) &\coloneqq \pm \frac r2\wedge \bigl(Y \mp \frac r2\bigr) + \bigl( Y \mp \frac r2\bigr) \wedge \bigl( Z - Y \pm \frac s2\bigr) \notag\\
&\hspace{60pt} \pm \frac r2\wedge \bigl( Z - Y\pm \frac s2\bigr) \mp \frac r2 \wedge \bigl( Z\pm \frac{r-s}{2}\bigr). \label{theta_pm_definition}
\end{align}
The phases $\exp(\i Z \cdot (\mathbf B \wedge X))$ and $\exp(\i Z \cdot (-\i \nabla_X))$ are combined and give $\exp(\i Z \cdot \Pi)$, see \eqref{Magnetic_Translation_constant_Decomposition}. In the third step the magnetic phases coming from $\theta_\pm$ and from the periodic vector potential $A$, that is, from $\Theta_{A}$, are removed. Afterwards, the emergent symmetry of the integrand under the transformation $Z \rightarrow -Z$ is used to replace the operator $\exp(\i Z \cdot \Pi)$ by $\cos(Z\cdot \Pi)$. In the final step, we apply the estimate $1 - \cos(Z\cdot \Pi)\leq C |Z|^2 \, \Pi^2$. This concludes the sketch of the proof.
\end{proof}
\subsubsection{Analysis of \texorpdfstring{$\Wcal_T$}{WT} and calculation of the quadratic \texorpdfstring{$W$}{W}-term}
\label{Analysis_of_MTW_Section}
\begin{prop}
\label{MTW}
Let $V\alpha_* \in L^2(\mathbb{R}^3)$, let $W \in L^{\infty}(\mathbb{R}^3,\mathbb{R})$ be periodic, assume that $\Psi\in H_{\mathrm{mag}}^1(Q_h)$, and denote $\Delta \equiv \Delta_\Psi$ as in \eqref{Delta_definition}. There is $h_0>0$ such that for any $0< h \leqslant h_0$ we have
\begin{align}
\langle \Delta, \Wcal_{T_{\mathrm{c}}} \Delta\rangle = -4\; \Lambda_1 \; \langle \Psi, W_h \Psi\rangle \label{MTW_1}
\end{align}
with $\Lambda_1$ in \eqref{GL-coefficient_W}. Moreover, for any $T \geq T_0 > 0$ we have
\begin{align}
|\langle \Delta, \Wcal_T \Delta - \Wcal_{T_{\mathrm{c}}} \Delta\rangle| \leq C\; h^4 \; |T - {T_{\mathrm{c}}}| \; \Vert V\alpha_*\Vert_2^2 \; \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2. \label{MTW_2}
\end{align}
\end{prop}
\begin{proof}
Using the definition of $k_T(Z,r)$ in \eqref{kTW_definition}, we write
\begin{align*}
k_T(Z, r) &= - \frac 2\beta \sum_{n\in \mathbb{Z}} \int_{\mathbb{R}^3} \frac{\mathrm{d} p}{(2\pi)^3} \int_{\mathbb{R}^3} \frac{\mathrm{d} q}{(2\pi)^3} \; \Bigl[ \frac{\mathrm{e}^{\i Z\cdot (p+q)} \mathrm{e}^{\i \frac r2\cdot (q-p)}}{(\i \omega_n + \mu - p^2)^2 (\i\omega_n - \mu + q^2)} \\
&\hspace{150pt} - \frac{\mathrm{e}^{\i Z\cdot (p+q)} \mathrm{e}^{\i \frac r2\cdot (p-q)}}{(\i\omega_n - \mu + p^2)^2 (\i\omega_n + \mu - q^2)}\Bigr]
\end{align*}
as well as
\begin{align*}
\int_{\mathbb{R}^3} \mathrm{d} Z \; k_T(Z, r) &= -\frac 4\beta \sum_{n\in \mathbb{Z}} \int_{\mathbb{R}^3} \frac{\mathrm{d} p}{(2\pi)^3} \; \mathrm{e}^{\i r \cdot p} \, \frac{p^2 -\mu }{(\i \omega_n + \mu - p^2)^2 (\i\omega_n - \mu + p^2)^2}.
\end{align*}
With the Mittag-Leffler series expansion in \eqref{tanh_Matsubara}, we also check that
\begin{align}
\frac{\beta}{2} \frac{1}{\cosh^2(\frac \beta 2z)} = \frac{\mathrm{d}}{\mathrm{d} z} \tanh\bigl( \frac \beta 2 z\bigr) = - \frac{2}{\beta} \sum_{n\in \mathbb{Z}} \frac{1}{(\i\omega_n - z)^2} \label{cosh2_Matsubara}
\end{align}
holds. We use \eqref{cosh2_Matsubara} and the partial fraction expansion
\begin{align*}
\frac{1}{(\i\omega_n - E)^2(\i\omega_n + E)^2} = \frac{1}{4E^2} \Bigl[ \frac{1}{(\i\omega_n - E)^2} + \frac{1}{(\i\omega_n + E)^2}\Bigr] - \frac{1}{4E^3} \Bigl[ \frac{1}{\i\omega_n - E} - \frac{1}{\i\omega_n + E}\Bigr]
\end{align*}
to see that
\begin{align*}
\frac{4}{\beta} \sum_{n\in \mathbb{Z}} \frac{E}{(\i\omega_n - E)^2(\i\omega_n + E)^2} = \beta^2 \; g_1(\beta E)
\end{align*}
with the function $g_1$ in \eqref{XiSigma}. Therefore,
\begin{align*}
\langle \Delta, \Wcal_{T_{\mathrm{c}}} \Delta\rangle &= -{\beta_{\mathrm{c}}}^2 \int_{\mathbb{R}^3} \frac{\mathrm{d} p}{(2\pi)^3} \; |(-2)\hat{V\alpha_*}(p)|^2 \, g_1({\beta_{\mathrm{c}}} (p^2-\mu)) \; \langle \Psi, W_h \Psi\rangle \\
&= - 4\, \Lambda_1 \, \langle \Psi, W_h\Psi\rangle.
\end{align*}
This proves \eqref{MTW_1}.
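The series identities used in this proof lend themselves to a quick numerical check with truncated Matsubara sums. The snippet below is purely illustrative; in particular, the closed form taken for $g_1$ is the standard Ginzburg--Landau coefficient function and is an assumption here, since \eqref{XiSigma} is not restated:

```python
import numpy as np

beta, z, E = 2.0, 0.7, 1.3
n = np.arange(-200000, 200000)
w = np.pi * (2 * n + 1) / beta       # fermionic Matsubara frequencies

# (i) beta/2 * cosh^{-2}(beta z/2) = -(2/beta) * sum_n 1/(i w_n - z)^2
lhs1 = beta / 2 / np.cosh(beta * z / 2) ** 2
rhs1 = (-(2 / beta) * np.sum(1 / (1j * w - z) ** 2)).real
assert abs(lhs1 - rhs1) < 1e-4       # tolerance accounts for the slow 1/N tail

# (ii) the partial fraction expansion, checked at a generic complex point x
x = 0.4 + 0.9j
pf = (1 / (x - E) ** 2 + 1 / (x + E) ** 2) / (4 * E**2) \
    - (1 / (x - E) - 1 / (x + E)) / (4 * E**3)
assert abs(pf - 1 / ((x - E) ** 2 * (x + E) ** 2)) < 1e-12

# (iii) (4/beta) * sum_n E/((i w_n - E)^2 (i w_n + E)^2) = beta^2 * g1(beta E),
# assuming g1(t) = tanh(t/2)/t^2 - 1/(2 t cosh^2(t/2)); this form is an
# assumption, not taken from the text.
g1 = lambda t: np.tanh(t / 2) / t**2 - 1 / (2 * t * np.cosh(t / 2) ** 2)
lhs3 = ((4 / beta) * np.sum(E / ((1j * w - E) ** 2 * (1j * w + E) ** 2))).real
assert abs(lhs3 - beta**2 * g1(beta * E)) < 1e-8
```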
To obtain the bound in \eqref{MTW_2}, we argue as in the proof of \eqref{MTA2_2}.
\end{proof}
\subsubsection{Summary: The quadratic terms}
\label{Summary_quadratic_terms_Section}
In this section, we summarize our results concerning the quadratic terms (in $\Delta$) that are relevant for the proof of Theorem~\ref{Calculation_of_the_GL-energy}. We also use our results to prove another statement (Proposition~\ref{Rough_bound_on_BCS energy} below), which will later be used in the proof of Proposition~\ref{Lower_Tc_a_priori_bound}. We start by summarizing our findings.
Let the assumptions of Theorem~\ref{Calculation_of_the_GL-energy} hold and recall the definition of $\mathcal{R}_{T, \mathbf A, W}^{(2)}$ in \eqref{RTAW2_definition}. An application of H\"older's inequality in \eqref{Schatten-Hoelder} and the bound $\Vert ( \i \omega_n - \mathfrak{h}_\mathbf A )^{-1} \Vert_{\infty} \leqslant |\omega_n|^{-1}$ show that
\begin{equation*}
| \langle \Delta, \mathcal{R}_{T, \mathbf A, W}^{(2)}\Delta \rangle | \leqslant C \; \Vert \Delta \Vert_2^2 \; \Vert W_h \Vert^2_{\infty} \leqslant C \; h^4 \; \Vert \Psi\Vert_{2}^2 \leqslant C \; h^6 \; \Vert \Psi\Vert_{H_{\mathrm{mag}}^2(Q_h)}^2.
\end{equation*}
We combine \eqref{LTAW_decomposition}, this bound, and the results of Propositions~\ref{MT1}, \ref{MTB2}, \ref{MTB3}, \ref{LTAW-LtildeTAW}, \ref{MtildeTAW-MTW}, and \ref{MTW}, to see that for $T = {T_{\mathrm{c}}}(1-Dh^2)$ with $D \in \mathbb{R}$ the identity
\begin{align}
-\frac{1}{4} \langle \Delta, L_{T,\mathbf A, W} \Delta \rangle + \Vert \Psi \Vert_2^2 \, \langle \alpha_*, V \alpha_* \rangle & \notag \\
&\hspace{-80pt} = \Lambda_0 \; \Vert \Pi_{\mathbf A_h}\Psi\Vert_2^2 + \Lambda_1 \; \langle \Psi, W_h\Psi\rangle - Dh^2 \; \Lambda_2 \; \Vert \Psi\Vert_2^2 + R_2(\Delta) \label{eq:A15}
\end{align}
holds. The remainder term $R_2(\Delta)$ obeys the estimate
\begin{align*}
| R_2(\Delta) | \leq C\; \bigl( h^5 \; \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2 + h^6 \; \Vert \Psi\Vert_{H_{\mathrm{mag}}^2(Q_h)}^2\bigr).
\end{align*}
This concludes the computation of the quadratic terms in the Ginzburg--Landau functional. It remains to compute the term that is proportional to $| \Psi |^4$, which is the content of the remaining part of Section~\ref{Calculation_of_the_GL-energy_proof_Section}.
Before we continue with the proof of Theorem~\ref{Calculation_of_the_GL-energy}, we state and prove the following statement, which will later be used in the proof of Proposition~\ref{Lower_Tc_a_priori_bound}. It is a straightforward consequence of our results for the quadratic terms, and we therefore prove it here.
\begin{prop}
\label{Rough_bound_on_BCS energy}
Let $| \cdot|^k V\alpha_* \in L^2(\mathbb{R}^3)$ for $k \in \{ 0,1,2 \}$, let $A\in W^{3,\infty}(\mathbb{R}^3,\mathbb{R}^3)$ and $W \in W^{1,\infty}(\mathbb{R}^3,\mathbb{R})$ be periodic, assume $\Psi\in H_{\mathrm{mag}}^1(Q_h)$, and denote $\Delta \equiv \Delta_\Psi$ as in \eqref{Delta_definition}. For any $T_0>0$ there is $h_0>0$ such that for any $T\geq T_0$ and any $0 < h \leq h_0$ we have
\begin{align}
- \frac 14 \langle \Delta, L_{T,\mathbf A, W} \Delta\rangle + \Vert \Psi\Vert_2^2 \; \langle \alpha_*, V\alpha_*\rangle & \leq c \, \frac{T - {T_{\mathrm{c}}}}{{T_{\mathrm{c}}}}\, \Vert \Psi\Vert_2^2 + C h^4 \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2. \label{Rough_bound_on_BCS energy_eq1}
\end{align}
\end{prop}
\begin{proof}
We write
\begin{align}
-\frac 14 \langle \Delta, L_{T, \mathbf A, W} \Delta \rangle = - \frac 14 \langle \Delta, L_{T,\mathbf A} \Delta\rangle - \frac 14 \langle \Delta, L_{T, \mathbf A, W} \Delta - L_{T, \mathbf A} \Delta\rangle \label{Rough_bound_on_BCS energy_proof_2}
\end{align}
and use the resolvent identity in \eqref{Resolvent_Equation} to write one of the operators on the right side as
\begin{align}
L_{T, \mathbf A, W} \Delta - L_{T, \mathbf A} \Delta &= -\frac 2\beta \sum_{n\in \mathbb{Z}} \Bigl[ \frac{1}{\i \omega_n - \mathfrak{h}_\mathbf A} W_h \frac{1}{\i \omega_n - \mathfrak{h}_{\mathbf A, W}} \Delta \frac{1}{\i \omega_n + \ov{\mathfrak{h}_{\mathbf A, W}}} \notag \\
&\hspace{80pt} - \frac{1}{\i \omega_n - \mathfrak{h}_\mathbf A} \Delta \frac{1}{\i \omega_n + \ov{\mathfrak{h}_{\mathbf A, W}}} W_h \frac{1}{\i \omega_n + \ov{\mathfrak{h}_\mathbf A}} \Bigr]. \label{RTAW2_estimate_1}
\end{align}
An application of H\"older's inequality therefore implies the bound
\begin{align*}
|\langle \Delta, L_{T, \mathbf A, W} \Delta - L_{T, \mathbf A} \Delta \rangle| \leq C \, h^4 \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2.
\end{align*}
When we additionally use the decomposition of $L_{T,\mathbf A}$ in \eqref{LTA_decomposition}, Propositions~\ref{LTA-LtildeTA}, \ref{LtildeTA-MtildeTA}, and \ref{MtildeTA-MTA}, we find
\begin{align}
- \frac 14 \langle \Delta, L_{T,\mathbf A} \Delta\rangle + \Vert \Psi\Vert_2^2 \; \langle \alpha_*, V\alpha_*\rangle &\notag\\
&\hspace{-100pt} = - \frac 14 \langle \Delta, M_T^{(1)}\Delta- M_{{T_{\mathrm{c}}}}^{(1)} \Delta\rangle - \frac 14 \langle \Delta, M_{T,\mathbf A} \Delta - M_T^{(1)}\Delta\rangle + R_1(\Delta), \label{Rough_bound_on_BCS energy_proof_1}
\end{align}
with a remainder $R_1(\Delta)$ obeying the bound
\begin{align*}
|R_1(\Delta) | \leq C \; h^5 \; \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2.
\end{align*}
From Proposition~\ref{MT1} we know that
\begin{align*}
-\frac 14 \langle \Delta, M_T^{(1)}\Delta- M_{{T_{\mathrm{c}}}}^{(1)} \Delta\rangle \leq c \; \frac{T - {T_{\mathrm{c}}}}{{T_{\mathrm{c}}}} \; \Vert \Psi \Vert_2^2.
\end{align*}
We also claim that the bound
\begin{align}
|\langle \Delta, M_{T,\mathbf A}\Delta - M_T^{(1)}\Delta\rangle| &\leq C\; h^4 \; \Vert V\alpha_*\Vert_2^2 \; \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2 \label{MTB-MT1_eq1}
\end{align}
holds. Its proof can be achieved with the same methods that have been used to prove Proposition~\ref{MtildeTA-MTA}. The main point is that we have to use the bound
\begin{align}
|\langle \Psi, [\cos(Z\cdot\Pi_\mathbf A) - 1] \Psi \rangle| &\leq C\; h^4 \; |Z|^2 \; \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2, \label{MTB-MT1_1}
\end{align}
as well as the operator inequality in \eqref{ZPiX_inequality} for $(Z\cdot \Pi_\mathbf A)^2$. Since no additional difficulties occur, we leave the details to the reader. This completes the proof of \eqref{Rough_bound_on_BCS energy_eq1}.
\end{proof}
\subsubsection{A representation formula for the operator \texorpdfstring{$N_{T, \mathbf A}$}{NTA}}
\label{sec:NT}
In this and the following sections we investigate the nonlinear operator
\begin{align}
N_{T, \mathbf A} \coloneqq N_{T, \mathbf A, 0} \label{NTA_definition}
\end{align}
with $N_{T, \mathbf A, W}$ in \eqref{NTAW_definition}. In particular, we show that the quartic term in the GL functional emerges from $\langle \Delta, N_{T, \mathbf A} (\Delta) \rangle$. In Section~\ref{sec:quarticterms} we show that $\langle \Delta, N_{T, \mathbf A}(\Delta) - N_{T, \mathbf A, W} (\Delta) \rangle$ yields a negligible contribution; there, we also collect all previous results and finish the proof of Theorem~\ref{Calculation_of_the_GL-energy}.
Before we state a representation formula for $N_{T, \mathbf A}$ in terms of relative and center-of-mass coordinates, we introduce the notation $\mathbf Z$ to denote the vector $(Z_1,Z_2,Z_3)$ with $Z_1, Z_2, Z_3 \in \mathbb{R}^3$ as well as $\mathrm{d} \mathbf Z = \mathrm{d} Z_1 \mathrm{d} Z_2 \mathrm{d} Z_3$.
\begin{lem}
\label{NTA_action}
The operator $N_{T, \mathbf A}$ in \eqref{NTAW_definition} acts as
\begin{align*}
N_{T, \mathbf A}(\alpha) (X, r) &= \iiint_{\mathbb{R}^9} \mathrm{d} \mathbf Z \iiint_{\mathbb{R}^9} \mathrm{d} \mathbf s \; \ell_{T, \mathbf A}(X, \mathbf Z, r, \mathbf s)\; \mathcal{A}(X, \mathbf Z, \mathbf s),
\end{align*}
where
\begin{align}
\mathcal{A}(X, \mathbf Z, \mathbf s) &\coloneqq \mathrm{e}^{\i Z_1\cdot (-\i \nabla_X)} \alpha(X, s_1) \; \ov{\mathrm{e}^{\i Z_2\cdot (-\i \nabla_X)} \alpha(X, s_2)} \; \mathrm{e}^{\i Z_3\cdot (-\i \nabla_X)} \alpha(X,s_3) \label{NTA_alpha_definition}
\end{align}
and
\begin{align}
\ell_{T, \mathbf A}(X, \mathbf Z, r, \mathbf s) &\coloneqq \frac{2}{\beta} \sum_{n\in \mathbb{Z}} \ell_{T, \mathbf A}^n(X, \mathbf Z, r, \mathbf s) \; \mathrm{e}^{\i \Upsilon_{\mathbf A_h}(X, \mathbf Z, r, \mathbf s)}. \label{lTA_definition}
\end{align}
Here,
\begin{align}
\ell_{T, \mathbf A}^n(X, \mathbf Z, r, \mathbf s) &\coloneqq g_h^{\i\omega_n}\bigl(X + \frac r2 \, , \, X + Z_1 + \frac{s_1} 2\bigr) \, g_h^{-\i\omega_n} \bigl( X + Z_2 + \frac{s_2}{2} \, , \, X + Z_1 - \frac{s_1}{2}\bigr) \notag\\
&\hspace{-50pt} \times g_h^{\i\omega_n}\bigl( X + Z_2 - \frac{s_2}{2} \, , \, X + Z_3 + \frac{s_3}{2}\bigr) \, g_h^{-\i\omega_n}\bigl(X - \frac{r}{2} \, , \, X + Z_3 - \frac{s_3}{2}\bigr), \label{lTAn_definition}
\end{align}
with $g_h^z$ in \eqref{ghz_definition} and
\begin{align}
\Upsilon_\mathbf A(X, \mathbf Z, r, \mathbf s) &\coloneqq \Phi_\mathbf A \bigl(X + \frac r2 \, , \, X + Z_1 + \frac{s_1} 2\bigr) + \Phi_\mathbf A \bigl( X + Z_2 + \frac{s_2}{2} \, , \, X + Z_1 - \frac{s_1}{2}\bigr) \notag\\
&\hspace{-30pt} + \Phi_\mathbf A \bigl( X + Z_2 - \frac{s_2}{2} \, , \, X + Z_3 + \frac{s_3}{2}\bigr) + \Phi_\mathbf A \bigl(X - \frac{r}{2} \, , \, X + Z_3 - \frac{s_3}{2}\bigr). \label{NTA_PhitildeA_definition}
\end{align}
\end{lem}
\begin{bem}
The above representation formula for $N_{T,\mathbf A}$ should be compared to that in the case of a constant magnetic field in \cite[Lemma~4.16]{DeHaSc2021} and to the representation formula for $L_{T,\mathbf A}$ in Lemma~\ref{LTA_action}. The following two properties are relevant for us: (a) The functions $\alpha$ are multiplied by translation operators that can later be completed with appropriate phase factors to give magnetic translation operators. (b) The coordinates appearing in $\Upsilon_{\mathbf A}$ in \eqref{NTA_PhitildeA_definition} equal those in the different factors in the definition of $\ell_{T, \mathbf A}^n(X, \mathbf Z, r, \mathbf s)$ in \eqref{lTAn_definition}. When proving bounds, this allows us to find a similar structure of nested convolutions as the one we already encountered in the analysis of $L_{T,\mathbf A}$. The center-of-mass part of $\alpha$ never participates in these convolutions.
\end{bem}
\begin{proof}[Proof of Lemma \ref{NTA_action}]
When we compute the integral kernel of $N_{T, \mathbf A}$ using \eqref{GAz_Kernel_of_complex_conjugate} in the case $W =0$, we get
\begin{align}
N_{T, \mathbf A} (\alpha)(x,y) &= \smash{\frac 2\beta \sum_{n\in \mathbb{Z}} \iiint_{\mathbb{R}^{9}} \mathrm{d} \mathbf u \iiint_{\mathbb{R}^9}\mathrm{d} \mathbf v} \; G_h^{\i\omega_n} (x, u_1)\, \alpha(u_1,v_1)\, G_h^{-\i\omega_n} (u_2,v_1) \, \ov{\alpha(u_2,v_2)} \nonumber \\
&\hspace{120pt} \times G_h^{\i\omega_n}(v_2,u_3)\, \alpha(u_3,v_3)\, G_h^{-\i\omega_n}(y, v_3).
\label{eq:newAndi4}
\end{align}
We highlight that, by a slight abuse of notation, $\alpha$ and $N_{T, \mathbf A} (\alpha)$ in the above equation are functions of the original coordinates, while they are functions of relative and center-of-mass coordinates in \eqref{NTA_alpha_definition}. Let us denote $X = \frac{x+y}{2}$, $r=x-y$ and let us also introduce the relative coordinate $\mathbf s$ and the center-of-mass coordinate $\mathbf Z$ by
\begin{align*}
u_i &= X + Z_i + \frac {s_i}{ 2 }, & v_i &= X + Z_i - \frac {s_i}{ 2 }, & i&=1,2,3.
\end{align*}
When we express the integration in \eqref{eq:newAndi4} in terms of these coordinates and use \eqref{ghz_definition}, we see that the claimed formula holds.
\end{proof}
As for the operator $L_{T,\mathbf A}$, we analyze the operator $N_{T, \mathbf A}$ in four steps. More precisely, we decompose $N_{T, \mathbf A}$ as
\begin{align}
N_{T, \mathbf A} = (N_{T, \mathbf A} - \tilde N_{T, \mathbf A}) + (\tilde N_{T, \mathbf A} - N_{T, \mathbf B}^{(1)}) + (N_{T, \mathbf B}^{(1)} - N_T^{(2)}) + N_{T}^{(2)} \label{NTB_decomposition}
\end{align}
with $\tilde N_{T, \mathbf A}$ defined below in \eqref{NtildeTB_definition}, $N_{T, \mathbf B}^{(1)}$ in \eqref{NTB1_definition}, and $N_T^{(2)}$ in \eqref{NT2_definition}. To obtain the map $\tilde N_{T, \mathbf A}$ from $N_{T, \mathbf A}$ we need to replace $g_{h}^z$ by $g_0^z$. The operator $N_{T,\mathbf B}^{(1)}$ emerges when we use a part of the phase $\exp(\i \Upsilon_{\mathbf A_h}(X, \mathbf Z, r, \mathbf s))$ in the definition of $\ell_{T, \mathbf A}$ in \eqref{lTA_definition} to replace the translation operators $\exp(\i Z\cdot P_X)$ in front of the $\alpha$ factors by magnetic translation operators. The part of the phase factor that is not needed during this procedure is shown to yield a negligible contribution. Finally, the operator $N_T^{(2)}$ is obtained when we replace the magnetic translations just introduced by $1$. The above decomposition of $N_{T, \mathbf A}$ should be compared to that in \cite[Eq.~(4.120)]{DeHaSc2021}. In Section~\ref{sec:approxNTA} we show that the terms in brackets in \eqref{NTB_decomposition} only yield negligible contributions. Afterwards, we extract in Section~\ref{sec:compquarticterm} the quartic term in the Ginzburg--Landau functional from $\langle \Delta, N_{T}^{(2)}(\Delta) \rangle$. In Section~\ref{sec:quarticterms} we summarize our findings.
\subsubsection{Approximation of \texorpdfstring{$N_{T, \mathbf A}$}{NTA}}
\label{sec:approxNTA}
\paragraph{The operator $\tilde N_{T, \mathbf A}$.} We define the operator $\tilde N_{T, \mathbf A}$ by
\begin{align}
\tilde N_{T, \mathbf A}(\alpha) (X,r) &\coloneqq \iiint_{\mathbb{R}^{9}} \mathrm{d} \mathbf Z \iiint_{\mathbb{R}^{9}} \mathrm{d} \mathbf s \; \tilde \ell_{T, \mathbf A} (X, \mathbf Z, r, \mathbf s) \; \mathcal{A}(X, \mathbf Z , \mathbf s) \label{NtildeTB_definition}
\end{align}
with $\mathcal{A}$ in \eqref{NTA_alpha_definition} and
\begin{align*}
\tilde \ell_{T, \mathbf A}(X, \mathbf Z, r, \mathbf s) &\coloneqq \frac 2\beta \sum_{n\in\mathbb{Z}} \ell_T^n (\mathbf Z, r, \mathbf s) \; \mathrm{e}^{\i \Upsilon_{\mathbf A_h}(X, \mathbf Z, r, \mathbf s)},
\end{align*}
where $\Upsilon_\mathbf A$ has been defined in \eqref{NTA_PhitildeA_definition}, and
\begin{align}
\ell_T^n(\mathbf Z, r, \mathbf s) &\coloneqq g_0^{\i\omega_n} \bigl(Z_1 - \frac{r-s_1}{2}\bigr) \; g_0^{-\i\omega_n} \bigl( Z_1 - Z_2 - \frac{s_1 + s_2}{2}\bigr) \notag \\
&\hspace{50pt} \times g_0^{\i\omega_n} \bigl( Z_2 - Z_3 - \frac{s_2 + s_3}{2} \bigr) \; g_0^{-\i\omega_n} \bigl( Z_3 + \frac{r-s_3}{2}\bigr). \label{lTn_definition}
\end{align}
In our calculation of the BCS energy we can replace $N_{T, \mathbf A}(\Delta)$ by $\tilde N_{T, \mathbf A}(\Delta)$ because of the following error bound.
\begin{prop}
\label{NTA-NtildeTA}
Let $V\alpha_*\in L^{\nicefrac 43}(\mathbb{R}^3)$, let $A\in W^{3,\infty}(\mathbb{R}^3,\mathbb{R}^3)$ be periodic, assume that $\Psi\in H_{\mathrm{mag}}^1(Q_h)$, and denote $\Delta \equiv \Delta_\Psi$ as in \eqref{Delta_definition}. For any $T_0>0$ there is $h_0>0$ such that for any $T\geq T_0$ and any $0 < h \leq h_0$ we have
\begin{align*}
|\langle \Delta, N_{T, \mathbf A}(\Delta) - \tilde N_{T, \mathbf A}(\Delta)\rangle| &\leq C \; h^6 \; \Vert V\alpha_*\Vert_{\nicefrac 43}^4 \; \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^4.
\end{align*}
\end{prop}
The function
\begin{align}
J_{T,\mathbf A} &\coloneqq \smash{\frac 2\beta \sum_{n\in \mathbb{Z}}} \; \tau^{\i\omega_n} * \rho^{-\i \omega_n} * \rho^{\i \omega_n} * \rho^{-\i \omega_n} + |g_0^{\i\omega_n}| * \tau^{-\i \omega_n} * \rho^{\i \omega_n} * \rho^{-\i \omega_n} \notag \\
&\hspace{50pt} + |g_0^{\i\omega_n}| * |g_0^{-\i\omega_n} | * \tau^{\i \omega_n} * \rho^{-\i \omega_n} + |g_0^{\i\omega_n}| * |g_0^{-\i\omega_n} | * |g_0^{\i\omega_n}| * \tau^{-\i \omega_n} \label{NTA-NtildeTA_FTA_definition}
\end{align}
plays a prominent role in the proof of Proposition \ref{NTA-NtildeTA}. Using Lemmas~\ref{gh-g_decay} and \ref{g_decay} as well as \eqref{g0_decay_f_estimate1}, we see that for any $T \geq T_0 > 0$ there is a constant $C>0$ such that
\begin{align}
\Vert J_{T, \mathbf A_h}\Vert_1 \leq C \; h^3 \label{NTA-NtildeTA_FTA_estimate}
\end{align}
holds.
\begin{proof}[Proof of Proposition \ref{NTA-NtildeTA}]
The function $|\Psi|$ is periodic, and hence \eqref{Magnetic_Sobolev} implies
\begin{align}
\Vert \mathrm{e}^{\i Z \cdot (-\i \nabla_X)}\Psi\Vert_6^2 = \Vert \Psi\Vert_6^2 \leq C\, h^2 \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2. \label{NTB-NtildeTB_3}
\end{align}
In particular, we have
\begin{align}
\fint_{Q_h} \mathrm{d} X \; |\Psi(X)|\; \prod_{i=1}^3 |\mathrm{e}^{\i Z_i\cdot (-\i \nabla_X)}\Psi(X)| &\leq \Vert \Psi\Vert_2 \; \prod_{i=1}^3 \Vert \mathrm{e}^{\i Z_i\cdot (-\i \nabla_X)}\Psi\Vert_6 \leq C \, h^4 \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^4 \label{NTA-NtildeTBA_2}
\end{align}
as well as
\begin{align}
|\langle \Delta, N_{T, \mathbf A}(\Delta)- \tilde N_{T, \mathbf A}(\Delta)\rangle| & \notag\\
&\hspace{-100pt}\leq C\, h^4 \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^4 \int_{\mathbb{R}^3} \mathrm{d} r \, \iiint_{\mathbb{R}^9} \mathrm{d} \mathbf s\; |V\alpha_*(r)|\; |V\alpha_*(s_1)| \; |V\alpha_*(s_2)| \; |V\alpha_*(s_3)| \notag\\
&\hspace{-20pt}\times \iiint_{\mathbb{R}^9} \mathrm{d} \mathbf Z \; \esssup_{X\in \mathbb{R}^3} \bigl|(\ell_{T, \mathbf A} - \tilde \ell_{T, \mathbf A})(X, \mathbf Z, r, \mathbf s)\bigr|. \label{NTB-NtildeTB_1}
\end{align}
Next, we define the variables $Z_1',Z_2',Z_3'$ via the equation
\begin{align}
Z_1' - Z_2' &\coloneqq Z_1 - Z_2 - \frac{s_1 +s_2}{2}, & Z_2' - Z_3' &\coloneqq Z_2 - Z_3 - \frac{s_2 + s_3}{2}, & Z_3' &\coloneqq Z_3 + \frac{r-s_3}{2}, \label{NTA_change_of_variables_1}
\end{align}
which implies
\begin{align}
Z_1 - \frac{r-s_1}{2} = Z_1' - (r - s_1 - s_2 - s_3). \label{NTA_change_of_variables_2}
\end{align}
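For the reader's convenience, we record the explicit form of the new variables; a direct computation from \eqref{NTA_change_of_variables_1} gives
\begin{align*}
Z_3' = Z_3 + \frac{r-s_3}{2}, \qquad Z_2' = Z_2 + \frac r2 - \frac{s_2}{2} - s_3, \qquad Z_1' = Z_1 + \frac r2 - \frac{s_1}{2} - s_2 - s_3,
\end{align*}
which, in particular, verifies \eqref{NTA_change_of_variables_2}.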
We argue as in the proof of \eqref{LTA-LtildeTBA_5} to see that
\begin{align*}
\iiint_{\mathbb{R}^9} \mathrm{d} \mathbf Z\; \esssup_{X\in \mathbb{R}^3} \bigl|(\ell_{T, \mathbf A} - \tilde \ell_{T, \mathbf A})(X, \mathbf Z, r, \mathbf s)\bigr| \leq J_{T, \mathbf A_h}(r - s_1 - s_2 - s_3)
\end{align*}
holds with $J_{T, \mathbf A}$ in \eqref{NTA-NtildeTA_FTA_definition}. When we insert this bound into \eqref{NTB-NtildeTB_1} and use
\begin{align*}
\bigl\Vert V\alpha_* \; \bigl( V\alpha_* * V\alpha_* * V\alpha_* * J_{T, \mathbf A}\bigr) \bigr\Vert_1 &\leq C \; \Vert V\alpha_*\Vert_{\nicefrac 43}^4 \; \Vert J_{T, \mathbf A}\Vert_1,
\end{align*}
as well as \eqref{NTA-NtildeTA_FTA_estimate}, this finishes the proof.
\end{proof}
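For the reader's convenience, we spell out the combination of Hölder's and Young's inequalities behind the last estimate of the above proof. Writing $f = V\alpha_*$, we have
\begin{align*}
\bigl\Vert f \, \bigl( f * f * f * J_{T, \mathbf A}\bigr) \bigr\Vert_1 &\leq \Vert f\Vert_{\nicefrac 43} \; \Vert f * f * f * J_{T, \mathbf A}\Vert_4 \leq \Vert f\Vert_{\nicefrac 43} \; \Vert f * f * f \Vert_4 \; \Vert J_{T, \mathbf A}\Vert_1 \\
&\leq \Vert f\Vert_{\nicefrac 43} \; \Vert f * f\Vert_2 \; \Vert f\Vert_{\nicefrac 43} \; \Vert J_{T, \mathbf A}\Vert_1 \leq \Vert f\Vert_{\nicefrac 43}^4 \; \Vert J_{T, \mathbf A}\Vert_1,
\end{align*}
where the exponents in Young's inequality are chosen according to $\frac 14 + 1 = \frac 12 + \frac 34$ and $\frac 12 + 1 = \frac 34 + \frac 34$.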
\paragraph{The operator $N_{T, \mathbf B}^{(1)}$.} We define the operator $N_{T, \mathbf B}^{(1)}$ by
\begin{align}
N_{T, \mathbf B}^{(1)}(\alpha)(X,r) &\coloneqq \iiint_{\mathbb{R}^9} \mathrm{d} \mathbf Z \iiint_{\mathbb{R}^9} \mathrm{d} \mathbf s \; \ell_T (\mathbf Z, r, \mathbf s) \; \mathcal{A}_\mathbf B (X, \mathbf Z , \mathbf s), \label{NTB1_definition}
\end{align}
where
\begin{align}
\ell_T(\mathbf Z, r, \mathbf s) &\coloneqq \ell_{T,0}(0, \mathbf Z, r, \mathbf s), \label{lT_definition}
\end{align}
with $\ell_{T,0}$ in \eqref{lTA_definition} and
\begin{align}
\mathcal{A}_\mathbf B(X, \mathbf Z, \mathbf s) &\coloneqq \mathrm{e}^{\i Z_1\cdot \Pi} \alpha(X, s_1) \; \ov{\mathrm{e}^{\i Z_2\cdot \Pi} \alpha(X, s_2)} \; \mathrm{e}^{\i Z_3\cdot \Pi} \alpha(X,s_3). \label{NTB1_alphaB_definition}
\end{align}
The following bound allows us to replace $\langle \Delta, \tilde N_{T, \mathbf A}(\Delta) \rangle$ by $\langle \Delta, N_{T, \mathbf B}^{(1)}(\Delta) \rangle$ in our computation of the energy.
\begin{prop}
\label{NtildeTBA-NTB1}
Let $|\cdot|^k V\alpha_*\in L^{\nicefrac 43}(\mathbb{R}^3)$ for $k\in \{0,1\}$, let $A\in L^{\infty}(\mathbb{R}^3,\mathbb{R}^3)$ be periodic, assume $\Psi\in H_{\mathrm{mag}}^1(Q_h)$, and denote $\Delta \equiv \Delta_\Psi$ as in \eqref{Delta_definition}. For any $T_0>0$ there is $h_0>0$ such that for any $T\geq T_0$ and any $0 < h \leq h_0$ we have
\begin{align*}
|\langle \Delta, \tilde N_{T, \mathbf A}(\Delta) - N_{T, \mathbf B}^{(1)}(\Delta)\rangle| &\leq C \; h^5 \; \max_{k=0,1} \Vert \ |\cdot|^k \ V\alpha_*\Vert_{\nicefrac 43}^4 \; \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^4.
\end{align*}
\end{prop}
Before we give the proof of Proposition~\ref{NtildeTBA-NTB1} we define the functions
\begin{align}
J_T^{(1)} &\coloneqq \smash{\frac 2\beta \sum_{n\in \mathbb{Z}}} \, |g_0^{\i\omega_n}| * \bigl(|\cdot|\, |g_0^{-\i\omega_n}|\bigr) * \bigl(|\cdot|\, |g_0^{\i\omega_n}|\bigr) * |g_0^{-\i\omega_n}| \notag \\
&\hspace{100pt}+ |g_0^{\i\omega_n}| * \bigl(|\cdot|\, |g_0^{-\i\omega_n}|\bigr) * |g_0^{\i\omega_n}| * \bigl(|\cdot|\, |g_0^{-\i\omega_n}|\bigr) \notag \\
&\hspace{100pt}+ |g_0^{\i\omega_n}| * |g_0^{-\i\omega_n}| * \bigl(|\cdot|\, |g_0^{\i\omega_n}|\bigr) * \bigl(|\cdot|\, |g_0^{-\i\omega_n}|\bigr) \label{NtildeTB-NTB1_FT1_definition}
\end{align}
and
\begin{align}
J_T^{(2)} &\coloneqq \smash{\frac 2\beta \sum_{n\in \mathbb{Z}}} \, \bigl( |\cdot| \, |g_0^{\i\omega_n}|\bigr) * |g_0^{-\i\omega_n}| * |g_0^{\i\omega_n}| * |g_0^{-\i\omega_n}| + |g_0^{\i\omega_n}| * \bigl(|\cdot|\, |g_0^{-\i\omega_n}|\bigr) * |g_0^{\i\omega_n}| * |g_0^{-\i\omega_n}| \notag\\
&\hspace{30pt}+ |g_0^{\i\omega_n}| * |g_0^{-\i\omega_n}| * \bigl(|\cdot|\, |g_0^{\i\omega_n}|\bigr) * |g_0^{-\i\omega_n}| + |g_0^{\i\omega_n}| * |g_0^{-\i\omega_n}| * |g_0^{\i\omega_n}| * \bigl(|\cdot|\, |g_0^{-\i\omega_n}|\bigr). \label{NtildeTB-NTB1_FT2_definition}
\end{align}
Using Lemma~\ref{g_decay} and \eqref{g0_decay_f_estimate1}, we see that for any $T_0 > 0$ there is a constant $C>0$ such that for $T \geq T_0$ we have
\begin{align}
\Vert J_T^{(1)} \Vert_1 + \Vert J_T^{(2)} \Vert_1 \leq C. \label{NtildeTB-NTB1_FT1-2_estimate}
\end{align}
\begin{proof}[Proof of Proposition \ref{NtildeTBA-NTB1}]
We recall the definition of the phase $\Upsilon_\mathbf A$ in \eqref{NTA_PhitildeA_definition}. A tedious but straightforward computation shows that
\begin{align*}
\Upsilon_{\mathbf A_{\mathbf B}} (X, \mathbf Z, r, \mathbf s) &= Z_1 \cdot (\mathbf B \wedge X) - Z_2 \cdot (\mathbf B \wedge X) + Z_3 \cdot (\mathbf B \wedge X) + \frac \mathbf B 2 \cdot I(\mathbf Z, r, \mathbf s),
\end{align*}
where
\begin{align}
I(\mathbf Z, r, \mathbf s) &\coloneqq \frac r2 \wedge \bigl( Z_1 - \frac{r - s_1}{2}\bigr) + \frac r2 \wedge \bigl( Z_3 + \frac{r-s_3}{2}\bigr) \notag \\
&\hspace{-35pt} + \bigl( Z_2 - Z_3 - \frac{s_2 + s_3}{2}\bigr) \wedge \bigl( Z_1 - Z_2 - \frac{s_1 + s_2}{2}\bigr) \notag \\
&\hspace{-35pt}+ \bigl( Z_3 + \frac{r - s_3}{2}\bigr) \wedge \bigl( Z_1 - Z_2 - \frac{s_1 + s_2}{2}\bigr)+ \bigl( s_2 + s_3 - \frac r2\bigr) \wedge \bigl( Z_1 - Z_2 - \frac{s_1 + s_2}{2}\bigr) \notag \\
&\hspace{-35pt}+ \bigl( Z_3 + \frac{r - s_3}{2} \bigr) \wedge \bigl( Z_3 - Z_2 + \frac{s_2 + s_3}{2}\bigr)+ \bigl( s_3 - \frac r2\bigr) \wedge \bigl( Z_3 - Z_2 + \frac{s_2 + s_3}{2}\bigr). \label{PhiB_definition}
\end{align}
By \eqref{Magnetic_Translation_constant_Decomposition}, the operator $\tilde N_{T, \mathbf A}$ can therefore be rewritten as
\begin{align*}
\tilde N_{T, \mathbf A}(\alpha)(X, r) &= \iiint_{\mathbb{R}^9} \mathrm{d} \mathbf Z \iiint_{\mathbb{R}^9} \mathrm{d} \mathbf s \; \ell_T(\mathbf Z, r, \mathbf s) \, \mathrm{e}^{\i \Upsilon_{A_h}(X, \mathbf Z, r, \mathbf s)} \mathrm{e}^{\i \frac \mathbf B 2 \cdot I(\mathbf Z, r, \mathbf s)} \; \mathcal{A}_\mathbf B(X, \mathbf Z, \mathbf s).
\end{align*}
This formula and the estimate in \eqref{NTA-NtildeTBA_2} imply the bound
\begin{align}
|\langle \Delta, \tilde N_{T, \mathbf A}(\Delta) - N_{T, \mathbf B}^{(1)}(\Delta)\rangle| & \notag\\
&\hspace{-120pt}\leq C\; h^4 \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^4 \int_{\mathbb{R}^3} \mathrm{d} r \iiint_{\mathbb{R}^9} \mathrm{d} \mathbf s \; |V\alpha_*(r)| \; |V\alpha_*(s_1)| \; |V\alpha_*(s_2)| \; |V\alpha_*(s_3)| \notag \\
&\hspace{-100pt}\times \frac{2}{\beta} \sum_{n\in \mathbb{Z}} \iiint_{\mathbb{R}^9} \mathrm{d} \mathbf Z \; |\ell_T^n(\mathbf Z, r, \mathbf s)| \; \sup_{X\in \mathbb{R}^3} \bigl| \mathrm{e}^{\i \Upsilon_{A_h}(X, \mathbf Z, r, \mathbf s)} \, \mathrm{e}^{\i \frac \mathbf B 2 \cdot I(\mathbf Z, r,\mathbf s)} - 1\bigr| \label{NtildeTBA-NTB1_1}
\end{align}
with $\Upsilon_A$ in \eqref{NTA_PhitildeA_definition} and $I$ in \eqref{PhiB_definition}. In terms of the coordinates in \eqref{NTA_change_of_variables_1} and with the help of \eqref{NTA_change_of_variables_2}, the phase function $I$ can be written as
\begin{align}
I(\mathbf Z, r,\mathbf s) &= (Z_2' - Z_3') \wedge (Z_1' - Z_2') + Z_3' \wedge (Z_1' - Z_2') + Z_3' \wedge (Z_3'- Z_2') \notag\\
& \hspace{50pt} + \frac r2 \wedge \bigl( Z_1' - (r - s_1 - s_2 - s_3)\bigr) + \bigl( s_2 + s_3 - \frac r2\bigr) \wedge (Z_1' - Z_2')\notag \\
&\hspace{50pt} + \bigl( s_3 - \frac r2\bigr) \wedge (Z_3' - Z_2') + \frac r2 \wedge Z_3'.\label{NtildeTBA-NTB1_2}
\end{align}
Moreover, using the definition of $\Phi_\mathbf A$ in \eqref{PhiA_definition}, the definition of $\Upsilon_{A}(X, \mathbf Z, r, \mathbf s)$ in \eqref{NTA_PhitildeA_definition}, \eqref{NTA_change_of_variables_1}, and \eqref{NTA_change_of_variables_2}, we obtain the bound
\begin{align*}
|\Upsilon_{A}(X, \mathbf Z, r, \mathbf s)| &\leq \Vert A\Vert_\infty \bigl( |Z_1' - (r - s_1 - s_2 - s_3)| + |Z_1' - Z_2'| + |Z_2' - Z_3'| + |Z_3'|\bigr).
\end{align*}
In combination with \eqref{NtildeTBA-NTB1_1}, \eqref{NtildeTBA-NTB1_2}, and an argument that is similar to the one used to obtain \eqref{MtildeTA-MTA_5}, we find
\begin{align*}
&\frac{2}{\beta} \sum_{n\in \mathbb{Z}} \iiint_{\mathbb{R}^9} \mathrm{d} \mathbf Z \; |\ell_{T}^n(\mathbf Z, r, \mathbf s)| \; \bigl| \mathrm{e}^{\i \Upsilon_{A_h}(X, \mathbf Z, r, \mathbf s)}\, \mathrm{e}^{\i \frac \mathbf B 2 \cdot I(\mathbf Z, r, \mathbf s)} - 1\bigr| \\
&\hspace{20pt} \leq Ch \; \bigl[ J_T^{(1)} (r - s_1 - s_2 - s_3) + J_T^{(2)}(r - s_1 - s_2 - s_3) \; \bigl(1 + |r| + |s_1| + |s_2| + |s_3|\bigr)\bigr]
\end{align*}
with the functions $J_T^{(1)}$ and $J_T^{(2)}$ in \eqref{NtildeTB-NTB1_FT1_definition} and \eqref{NtildeTB-NTB1_FT2_definition}, respectively. Accordingly, an application of Young's inequality shows that
\begin{align*}
|\langle \Delta, \tilde N_{T, \mathbf A}(\Delta) - N_{T, \mathbf B}^{(1)}(\Delta)\rangle| &\\
&\hspace{-100pt}\leq C\; h^5 \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^4 \; \bigl( \Vert V\alpha_*\Vert_{\nicefrac 43}^4 + \Vert \, |\cdot| V\alpha_*\Vert_{\nicefrac 43}^4 \bigr) \bigl( \Vert J_T^{(1)}\Vert_1 + \Vert J_T^{(2)}\Vert_1 \bigr).
\end{align*}
The claim of the proposition follows when we apply \eqref{NtildeTB-NTB1_FT1-2_estimate} on the right side of the above equation.
\end{proof}
\paragraph{The operator $N_T^{(2)}$.} We define the operator $N_{T}^{(2)}$ by
\begin{align}
N_T^{(2)}(\alpha) (X, r) &\coloneqq \iiint_{\mathbb{R}^9} \mathrm{d} \mathbf Z \iiint_{\mathbb{R}^9} \mathrm{d} \mathbf s \; \ell_{T} (\mathbf Z, r, \mathbf s) \, \prod_{i=1}^3 \alpha(X,s_i) \label{NT2_definition}
\end{align}
with $\ell_{T}$ in \eqref{lT_definition}.
In the computation of the BCS energy we can replace $\langle \Delta, N_{T, \mathbf B}^{(1)}(\Delta) \rangle$ by $\langle \Delta, N_{T}^{(2)}(\Delta) \rangle$ with the help of the following error bound. Its proof can be found in \cite[Proposition 4.20]{DeHaSc2021}. We highlight that the $H_{\mathrm{mag}}^2(Q_h)$-norm of $\Psi$ is needed once more.
\begin{prop}
\label{NTB1-NT2}
Assume that $|\cdot|^kV\alpha_* \in L^{\nicefrac 43}(\mathbb{R}^3)$ for $k\in \{0,1,2\}$, let $\Psi \in H_{\mathrm{mag}}^2(Q_h)$, and denote $\Delta\equiv \Delta_\Psi$ as in \eqref{Delta_definition}. For any $T_0 > 0$ there is $h_0 > 0$ such that for any $T \geq T_0$ and any $0 < h \leq h_0$ we have
\begin{align*}
|\langle \Delta, N_{T, \mathbf B}^{(1)}(\Delta) - N_{T}^{(2)}(\Delta) \rangle| &\leq C \; h^6 \; \max_{k=0,1,2} \Vert \ |\cdot|^k \ V\alpha_*\Vert_{\nicefrac 43}^4 \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^3 \; \Vert \Psi\Vert_{H_{\mathrm{mag}}^2(Q_h)}.
\end{align*}
\end{prop}
\subsubsection{Calculation of the quartic term in the Ginzburg--Landau functional}
\label{sec:compquarticterm}
The quartic term in the Ginzburg--Landau functional in \eqref{Definition_GL-functional} is contained in $\langle \Delta, N_T^{(2)}(\Delta) \rangle$. It can be extracted with the following proposition, whose proof can be found in \cite[Proposition 4.21]{DeHaSc2021}.
\begin{prop}
\label{NTc2}
Assume $V\alpha_* \in L^{\nicefrac 43}(\mathbb{R}^3)$ and let $\Psi\in H_{\mathrm{mag}}^1(Q_h)$ as well as $\Delta \equiv \Delta_\Psi$ as in \eqref{Delta_definition}. For any $h>0$, we have
\begin{align*}
\langle \Delta, N_{{T_{\mathrm{c}}}}^{(2)}(\Delta)\rangle = 8\; \Lambda_3 \; \Vert \Psi\Vert_4^4
\end{align*}
with $\Lambda_3$ in \eqref{GL_coefficient_3}. Moreover, for any $T \geq T_0 > 0$, we have
\begin{align*}
|\langle \Delta, N_T^{(2)}(\Delta) - N_{{T_{\mathrm{c}}}}^{(2)}(\Delta)\rangle| &\leq C\; h^4 \; |T - {T_{\mathrm{c}}}| \; \Vert V\alpha_*\Vert_{\nicefrac 43}^4 \; \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^4.
\end{align*}
\end{prop}
\subsubsection{Summary: The quartic term and proof of Theorem~\ref{Calculation_of_the_GL-energy}}
\label{sec:quarticterms}
Let the assumptions of Theorem~\ref{Calculation_of_the_GL-energy} hold. We use the resolvent identity in \eqref{Resolvent_Equation} to decompose the operator $N_{T, \mathbf A, W}$ in \eqref{NTAW_definition} as
\begin{align}
N_{T, \mathbf A, W} = N_{T, \mathbf A} + \mathcal{R}_{T, \mathbf A,W}^{(3)} \label{NTAW_decomposition}
\end{align}
with $N_{T, \mathbf A}$ in \eqref{NTA_definition} and
\begin{align}
\mathcal{R}_{T, \mathbf A, W}^{(3)}(\Delta) & \notag\\
&\hspace{-40pt} \coloneqq \frac 2\beta \sum_{n\in \mathbb{Z}} \Bigl[ \frac{1}{\i\omega_n - \mathfrak{h}_\mathbf A} \, W_h \, \frac{1}{\i \omega_n -\mathfrak{h}_{\mathbf A, W}} \, \Delta \, \frac{1}{\i\omega_n + \ov{\mathfrak{h}_{\mathbf A, W}}} \, \ov \Delta \, \frac{1}{\i \omega_n - \mathfrak{h}_{\mathbf A, W}} \, \Delta \, \frac{1}{\i \omega_n + \ov{\mathfrak{h}_{\mathbf A, W}}} \notag \\
&\hspace{10pt} - \frac{1}{\i\omega_n - \mathfrak{h}_\mathbf A} \, \Delta \, \frac{1}{\i \omega_n + \ov{\mathfrak{h}_\mathbf A}} \, W_h \, \frac{1}{\i\omega_n + \ov{\mathfrak{h}_{\mathbf A, W}}} \, \ov \Delta \, \frac{1}{\i \omega_n - \mathfrak{h}_{\mathbf A, W}} \, \Delta \, \frac{1}{\i \omega_n + \ov{\mathfrak{h}_{\mathbf A, W}}} \notag \\
&\hspace{10pt} + \frac{1}{\i\omega_n - \mathfrak{h}_\mathbf A} \, \Delta \, \frac{1}{\i \omega_n + \ov{\mathfrak{h}_\mathbf A}} \, \ov \Delta \, \frac{1}{\i\omega_n - \mathfrak{h}_\mathbf A} \, W_h \, \frac{1}{\i \omega_n - \mathfrak{h}_{\mathbf A, W}} \, \Delta \, \frac{1}{\i \omega_n + \ov{\mathfrak{h}_{\mathbf A, W}}} \notag \\
&\hspace{10pt} - \frac{1}{\i\omega_n - \mathfrak{h}_\mathbf A} \, \Delta \, \frac{1}{\i \omega_n + \ov{\mathfrak{h}_\mathbf A}} \, \ov \Delta \, \frac{1}{\i\omega_n - \mathfrak{h}_\mathbf A} \, \Delta \, \frac{1}{\i \omega_n + \ov{\mathfrak{h}_\mathbf A}} \, W_h \, \frac{1}{\i \omega_n + \ov{\mathfrak{h}_{\mathbf A, W}}} \Bigr]. \label{RTAW3_definition}
\end{align}
We claim that the operator $\mathcal{R}_{T, \mathbf A, W}^{(3)}$ satisfies the bound
\begin{align*}
\Vert \mathcal{R}_{T, \mathbf A, W}^{(3)} (\Delta)\Vert_{{L^2(Q_h \times \Rbb_{\mathrm s}^3)}} &\leq C\, T^{-5} \, h^{5} \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^3.
\end{align*}
This is a direct consequence of Hölder's inequality in \eqref{Schatten-Hoelder} for the trace per unit volume, which implies that the Hilbert--Schmidt norms per unit volume of the terms in the sum in \eqref{RTAW3_definition} are bounded by $C\, |\omega_n|^{-5} \, \Vert W_h\Vert_\infty \, \Vert \Delta\Vert_6^3$. Moreover, an application of Lemma~\ref{Schatten_estimate} and \eqref{Magnetic_Sobolev} shows that this expression is bounded by $C\, |2n+1|^{-5} \, T^{-5} \, h^5 \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^3$, which proves our claim.
When we combine this bound, Lemma~\ref{NTA_action}, and Propositions~\ref{NTA-NtildeTA}--\ref{NTc2}, we find
\begin{align}
\frac{1}{8} \langle \Delta, N_{T, \mathbf A, W}(\Delta) \rangle = \Lambda_3 \; \Vert \Psi\Vert_4^4 + R_4(h), \label{eq:A28}
\end{align}
where the remainder $R_4(h)$ satisfies the bound
\begin{align*}
| R_4(h) | \leq C \; \Vert \Psi \Vert_{H_{\mathrm{mag}}^1(Q_h)}^3 \, \bigl( h^5 \; \Vert \Psi \Vert_{H_{\mathrm{mag}}^1(Q_h)} + h^6 \; \Vert \Psi \Vert_{H_{\mathrm{mag}}^2(Q_h)}\bigr).
\end{align*}
The statement in Theorem~\ref{Calculation_of_the_GL-energy} is a direct consequence of \eqref{eq:A28} and \eqref{eq:A15}.
\subsection{Proof of Proposition \ref{Structure_of_alphaDelta}}
\label{sec:proofofadmissibility}
We assume that the assumptions of Proposition~\ref{Structure_of_alphaDelta} hold and recall the definition of $\Gamma_\Delta$ in \eqref{GammaDelta_definition}. Using the resolvent equation in \eqref{Resolvent_Equation} and \eqref{alphaDelta_decomposition_1}, we write $\alpha_\Delta = [\Gamma_\Delta]_{12}$ as
\begin{align*}
\alpha_\Delta &= [\mathcal{O}]_{12} + \mathcal{R}_{T, \mathbf A, W}^{(4)}(\Delta).
\end{align*}
Here, $\mathcal{O} = \frac 1\beta \sum_{n\in \mathbb{Z}} \frac{1}{ \i \omega_n - H_0} \delta \frac{1}{ \i \omega_n - H_0}$, see \eqref{alphaDelta_decomposition_2}, with $\delta$ in \eqref{Delta_definition} and
\begin{align}
\mathcal{R}_{T,\mathbf A, W}^{(4)}(\Delta) &\coloneqq \frac 1\beta \sum_{n\in \mathbb{Z}} \Bigl[ \frac{1}{ \i \omega_n - H_0} \delta\frac{1}{ \i \omega_n - H_0} \delta\frac{1}{ \i \omega_n - H_\Delta} \delta \frac{1}{ \i \omega_n - H_0}\Bigr]_{12}. \label{RTAW4_definition}
\end{align}
Moreover, we have $[\mathcal{O}]_{12} = -\frac 12 L_{T, \mathbf A, W}\Delta$ with $L_{T, \mathbf A, W}$ in \eqref{LTAW_definition}. Using the decomposition of $L_{T, \mathbf A, W}$ in \eqref{LTAW_decomposition}, we define
\begin{align}
\eta_0(\Delta) &\coloneqq \frac 12 \bigl( L_{T, \mathbf A, W} \Delta - L_{T, \mathbf A}\Delta\bigr) + \frac 12 \bigl(L_{T, \mathbf A}\Delta - M_{T, \mathbf A}\Delta\bigr) + \frac 12 \bigl( M_T^{(1)}\Delta - M_{{T_{\mathrm{c}}}}^{(1)}\Delta\bigr) \notag \\
&\hspace{60pt} + \frac 12 \bigl( M_{T, \mathbf A}\Delta - M_{T, \mathbf A_{e_3}}\Delta\bigr) + \mathcal{R}_{T, \mathbf A, W}^{(4)}(\Delta), \notag \\
\eta_\perp(\Delta) &\coloneqq \frac 12 \bigl( M_{T, \mathbf A_{e_3}}\Delta - M_{T}^{(1)}\Delta\bigr), \label{eta_perp_definition}
\end{align}
with $M_{T, \mathbf A}$ in \eqref{MTA_definition}, $L_{T, \mathbf A}^W$ in \eqref{LTA^W_definition}, and $M_T^{(1)}$ in \eqref{MT1_definition}. From Proposition~\ref{MT1} we know that $-\frac 12 M_{{T_{\mathrm{c}}}}^{(1)} \Delta = \Psi\alpha_*$, which allows us to write $\alpha_{\Delta}$ as in \eqref{alphaDelta_decomposition_eq1}. The operator $M_{T, \mathbf A_{e_3}}$ equals $M_{T, \mathbf A}$ in \eqref{MTA_definition} with $\mathbf A$ replaced by $\mathbf A_{e_3}$. The contribution from this operator needs to be carefully isolated for the orthogonality property in \eqref{alphaDelta_decomposition_eq4} to hold. This should be compared to part~(c) of \cite[Proposition~3.2]{DeHaSc2021}. In the following, we will establish the properties of $\eta_0$ and $\eta_\perp$ that are stated in Proposition~\ref{Structure_of_alphaDelta}.
We will first prove \eqref{alphaDelta_decomposition_eq2}, and start by noting that
\begin{align*}
\mathcal{R}_{T, \mathbf A, W}^{(4)}(\Delta) &= \frac 1\beta \sum_{n\in\mathbb{Z}} \frac 1{\i\omega_n - \mathfrak{h}_{\mathbf A, W}} \, \Delta \, \frac 1{\i\omega_n + \ov{\mathfrak{h}_{\mathbf A, W}}}\, \ov \Delta\, \Bigl[ \frac{1}{\i \omega_n - H_\Delta}\Bigr]_{11}\, \Delta \, \frac 1{\i\omega_n + \ov{\mathfrak{h}_{\mathbf A, W}}}.
\end{align*}
An application of Hölder's inequality shows $\Vert \mathcal{R}_{T, \mathbf A, W}^{(4)}(\Delta)\Vert_2 \leq C \beta^{3} \Vert \Delta\Vert_6^3$. With the operator $\pi = -\i \nabla + \mathbf A_{\mathbf B}$ understood to act on the $x$-coordinate of the integral kernel of $\mathcal{R}_{T, \mathbf A, W}^{(4)}(\Delta)$ we also have
\begin{equation*}
\Vert \pi \mathcal{R}_{T, \mathbf A, W}^{(4)}(\Delta) \Vert_2 \leq \frac 1\beta \sum_{n\in\mathbb{Z}} \Bigl\Vert \pi \frac 1{\i\omega_n - \mathfrak{h}_{\mathbf A, W}} \Bigr\Vert_{\infty} \Bigl\Vert \frac 1{\i\omega_n + \ov{\mathfrak{h}_{\mathbf A, W}}} \Bigr\Vert_{\infty}^2 \Bigl\Vert \Bigl[ \frac{1}{\i \omega_n - H_\Delta}\Bigr]_{11} \Bigr\Vert_{\infty} \Vert \Delta \Vert_6^3.
\end{equation*}
An application of Cauchy--Schwarz shows
\begin{equation}
(- \i \nabla + \mathbf A_{\mathbf B} + A)^2 + W_h \geqslant \frac{1}{2} (- \i \nabla + \mathbf A_{\mathbf B} )^2 - C h^2.
\label{eq:Andinew7}
\end{equation}
Accordingly, we have
\begin{align*}
\Bigl\Vert \pi \frac 1{\i\omega_n - \mathfrak{h}_{\mathbf A, W}} \Bigr\Vert_{\infty} &= \Bigl\Vert \frac 1{\i\omega_n + \mathfrak{h}_{\mathbf A, W}} \pi^2 \frac 1{\i\omega_n - \mathfrak{h}_{\mathbf A, W}} \Bigr\Vert_{\infty}^{\nicefrac 12} \\
&\leqslant \Bigl\Vert \frac 1{\i\omega_n + \mathfrak{h}_{\mathbf A, W}} \left( \mathfrak{h}_{\mathbf A, W} + \mu + C h^2 \right) \frac 1{\i\omega_n - \mathfrak{h}_{\mathbf A, W}} \Bigr\Vert_{\infty}^{\nicefrac 12} \leq C \, |\omega_n|^{-\nicefrac 12}.
\end{align*}
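A sketch of the last step: since \eqref{eq:Andinew7} implies $\mathfrak{h}_{\mathbf A, W} + \mu + Ch^2 \geq 0$, the spectral theorem yields the two bounds
\begin{align*}
\Bigl\Vert \frac 1{\i\omega_n + \mathfrak{h}_{\mathbf A, W}} \Bigr\Vert_{\infty} \leq \frac{1}{|\omega_n|} \qquad \text{and} \qquad \Bigl\Vert \bigl( \mathfrak{h}_{\mathbf A, W} + \mu + C h^2 \bigr) \frac 1{\i\omega_n - \mathfrak{h}_{\mathbf A, W}} \Bigr\Vert_{\infty} \leq \sup_{E \geq -\mu - Ch^2} \frac{E + \mu + Ch^2}{\sqrt{\omega_n^2 + E^2}} \leq C,
\end{align*}
with a constant $C$ that is uniform in $n$ for $T \geq T_0$. Taking the square root of the product gives the claimed bound $C \, |\omega_n|^{-\nicefrac 12}$.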
It follows that
\begin{align}
\Vert \pi \mathcal{R}_{T, \mathbf A, W}^{(4)}(\Delta) \Vert_2 \leq C\, \Vert \Delta \Vert_6^3. \label{eq:A25}
\end{align}
The same argument with obvious adjustments shows that $\Vert \mathcal{R}_{T, \mathbf A, W}^{(4)}(\Delta)\pi \Vert_2$ is bounded by the right side of \eqref{eq:A25} as well. Finally, \eqref{Norm_equivalence_2}, an application of Lemma~\ref{Schatten_estimate}, and \eqref{Magnetic_Sobolev} allow us to conclude that
\begin{align}
\Vert \mathcal{R}_{T, \mathbf A, W}^{(4)}(\Delta) \Vert_{{H^1(Q_h \times \Rbb_{\mathrm s}^3)}}^2 \leq C \; h^6 \; \Vert \Psi \Vert_{H_{\mathrm{mag}}^1(Q_h)}^6 \label{eq:A23}
\end{align}
holds.
To control $M_{T, \mathbf A}\Delta - M_{T, \mathbf A_{e_3}}\Delta$, we need the following proposition.
\begin{prop}
\label{MTA-MTAe3}
Let $|\cdot|^k V\alpha_*\in L^{2}(\mathbb{R}^3)$ for $k\in \{0,1\}$, let $A\in W^{1,\infty}(\mathbb{R}^3,\mathbb{R}^3)$ be periodic, assume $\Psi\in H_{\mathrm{mag}}^1(Q_h)$, and denote $\Delta \equiv \Delta_\Psi$ as in \eqref{Delta_definition}. For any $T_0>0$ there is $h_0>0$ such that for any $T\geq T_0$ and any $0 < h \leq h_0$ we have
\begin{align}
\Vert M_{T, \mathbf A} \Delta - M_{T, \mathbf A_{e_3}} \Delta\Vert_{H^1(Q_h \times \Rbb_{\mathrm s}^3)}^2 &\leq C \, h^5 \, \max_{k=0,1} \Vert \ | \cdot |^k \ V\alpha_*\Vert_2^2 \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2. \label{MTA-MTAe3_eq1}
\end{align}
\end{prop}
\begin{proof}
Let us define the operator
\begin{align*}
\mathcal{Q}_{T, \mathbf B, A}\alpha(X, r) \coloneqq \iint_{\mathbb{R}^3\times \mathbb{R}^3} \mathrm{d} Z \mathrm{d} s \; k_T(Z, r-s) \; \mathrm{e}^{2\i A_h(X) \cdot Z} \; (\mathrm{e}^{\i Z\cdot \Pi} \alpha)(X,s),
\end{align*}
where $k_T(Z,r) \coloneqq k_{T, 0}(0,Z, r, 0)$ with $k_{T, 0}$ in \eqref{kTA_definition} and $\Pi = -\i \nabla + 2 \mathbf A_{\mathbf B}$ is understood to act on the center-of-mass coordinate $X$ of $\alpha$. We start our analysis by writing
\begin{align}
M_{T, \mathbf A} \Delta - M_{T, \mathbf A_{e_3}} \Delta = \bigl( M_{T, \mathbf A} \Delta - \mathcal{Q}_{T, \mathbf B, A} \Delta \bigr) + \bigl( \mathcal{Q}_{T, \mathbf B, A} \Delta - M_{T, \mathbf A_{e_3}} \Delta\bigr). \label{MTA-MTAe3_1}
\end{align}
In the following we derive bounds on the ${H^1(Q_h \times \Rbb_{\mathrm s}^3)}$-norms of the two terms on the right side of \eqref{MTA-MTAe3_1}. When we use that the integrand in the definition of $M_{T, \mathbf A}$ is symmetric with respect to the transformation $Z \mapsto -Z$ and apply Lemma \ref{Magnetic_Translation_Representation}, we see that
\begin{align*}
(M_{T, \mathbf A} \alpha - \mathcal{Q}_{T, \mathbf B, A} \alpha)(X, r) &\\
&\hspace{-100pt} = \iint_{\mathbb{R}^3\times \mathbb{R}^3} \mathrm{d} Z \mathrm{d} s\; k_T(Z, r-s) \, \bigl[ \mathrm{e}^{\i \Phi_{2A_h}(X, X+Z)} - \mathrm{e}^{2\i A_h(X) \cdot Z}\bigr] \, (\mathrm{e}^{\i Z\cdot \Pi} \alpha)(X, s)
\end{align*}
holds. Let us also recall that $\Phi_{\mathbf A}$ is defined in \eqref{PhiA_definition}.
We have
\begin{align*}
\Phi_{2A}(X, X+ Z) - 2A(X)\cdot Z = 2\int_0^1 \mathrm{d} t \; \bigl[ A(X + (1-t) Z) - A(X)\bigr] \cdot Z,
\end{align*}
and hence
\begin{align}
\bigl| \mathrm{e}^{\i \Phi_{2A}(X, X+Z)} - \mathrm{e}^{2\i A(X)\cdot Z} \bigr| \leq \Vert DA\Vert_\infty \; |Z|^2. \label{MTA-MTAe3_2}
\end{align}
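Indeed, the estimate \eqref{MTA-MTAe3_2} follows from the elementary bound $|\mathrm{e}^{\i a} - \mathrm{e}^{\i b}| \leq |a - b|$ for $a, b \in \mathbb{R}$ together with
\begin{align*}
\bigl| \Phi_{2A}(X, X+Z) - 2A(X)\cdot Z \bigr| \leq 2 \int_0^1 \mathrm{d} t \; \Vert DA\Vert_\infty \, (1-t) \, |Z| \; |Z| = \Vert DA\Vert_\infty \; |Z|^2.
\end{align*}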
When we apply this bound and $|Z|^2 \leq | Z + \frac{r}{2}|^2 + | Z - \frac{r}{2}|^2$, it follows that
\begin{align*}
\Vert M_{T, \mathbf A}\Delta - \mathcal{Q}_{T, \mathbf B, A}\Delta\Vert_2^2 &\leq C\, \Vert \Psi\Vert_2^2 \, \Vert DA_h \Vert_\infty^2 \, \Vert F_T^{2} \Vert_1^2 \, \Vert V\alpha_*\Vert_2^2
\end{align*}
with $F_T^{2}$ in \eqref{LtildeTA-MtildeTA_FT_definition}. Using the $L^1$-norm bound for $F_T^{2}$ in \eqref{LtildeTA-MtildeTA_FTGT}, we conclude the claimed estimate for this term.
To obtain a bound for the first gradient term, we start by noting that
\begin{align*}
\Vert \Pi (M_{T, \mathbf A}\Delta - \mathcal{Q}_{T, \mathbf B, A}\Delta )\Vert_2^2 &\leq C \, \Vert \Psi \Vert_2^2 \int_{\mathbb{R}^3} \mathrm{d} r \; \Bigl| \iint_{\mathbb{R}^3\times \mathbb{R}^3} \mathrm{d} Z \mathrm{d} s\; |k_T(Z, r-s)| \, |V\alpha_*(s)| \\
& \hspace{1cm} \times \sup_{X\in \mathbb{R}^3} \bigl| \nabla_X \mathrm{e}^{\i \Phi_{2A_h}(X, X + Z)} - \nabla_X \mathrm{e}^{2\i A_h(X)\cdot Z} \bigr| \, \bigr|^2 \\
&+ C \, \Vert \Pi \mathrm{e}^{\i Z\cdot \Pi} \Psi\Vert_2 \, \int_{\mathbb{R}^3} \mathrm{d} r \; \Bigl| \iint_{\mathbb{R}^3\times \mathbb{R}^3} \mathrm{d} Z \mathrm{d} s\; |k_T(Z, r-s)| \, |V\alpha_*(s)| \\
&\hspace{1cm} \times\sup_{X\in \mathbb{R}^3} \bigl| \mathrm{e}^{\i \Phi_{2A_h}(X, X + Z)} - \mathrm{e}^{2\i A_h(X)\cdot Z}\bigr| \, \bigr|^2,
\end{align*}
where $\Pi = -\i \nabla + 2 \mathbf A_{\mathbf B}$ is understood to act on the center-of-mass coordinate. When we additionally use
\begin{align*}
\bigl| \nabla_X \mathrm{e}^{\i \Phi_{2A}(X, X + Z)} - \nabla_X\mathrm{e}^{2\i A(X)\cdot Z} \bigr| &\leq \bigl| \nabla_X \Phi_{2A}(X, X + Z) - 2\nabla_X A(X)\cdot Z \bigr| \\
&\hspace{-20pt} + \bigl|\Phi_{2A}(X, X+Z) - 2A(X)\cdot Z\bigr| \, |\nabla_XA(X)\cdot Z| \\
&\leq \bigl[ \Vert D^2A\Vert_\infty + \Vert DA\Vert_\infty^2 \bigr] \bigl[ |Z|^2 + |Z|^3\bigr],
\end{align*}
as well as
\begin{equation}
|Z|^a \leq \left| Z + \frac{r}{2} \right|^a + \left| Z - \frac{r}{2} \right|^a, \quad a \geq 0
\label{Z_estimate}
\end{equation}
for the choices $a=2,3$, $\Vert D^2A_h\Vert_\infty \leq Ch^3$, $\Vert DA_h\Vert_\infty^2 \leq Ch^4$, \eqref{PiXcosPiA_estimate}, and \eqref{MTA-MTAe3_2}, we see that
\begin{align*}
\Vert \Pi (M_{T, \mathbf A} \Delta - \mathcal{Q}_{T, \mathbf B, A} \Delta)\Vert_2^2 &\leq C \, h^5 \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2 \, \Vert V\alpha_*\Vert_2^2 \, \left( \Vert F_T^{2} \Vert_1^2 + \Vert F_T^{3} \Vert_1^2 \right)
\end{align*}
holds with $F_T^{a}$ in \eqref{LtildeTA-MtildeTA_FTGT}. An application of \eqref{LtildeTA-MtildeTA_FTGT} proves the claimed bound for this term.
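The elementary inequality \eqref{Z_estimate}, which we used above, can be seen as follows: by the triangle inequality and, for $a \geq 1$, the convexity of $t \mapsto t^a$ on $[0,\infty)$, we have
\begin{align*}
|Z|^a = \Bigl| \frac 12 \Bigl( Z + \frac r2 \Bigr) + \frac 12 \Bigl( Z - \frac r2 \Bigr) \Bigr|^a \leq \frac 12 \Bigl| Z + \frac r2 \Bigr|^a + \frac 12 \Bigl| Z - \frac r2 \Bigr|^a.
\end{align*}
For $0 \leq a < 1$ the claim follows from the subadditivity of $t \mapsto t^a$.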
In the last step we consider
\begin{align*}
\Vert \tilde \pi (M_{T, \mathbf A} \Delta - \mathcal{Q}_{T, \mathbf B, A}\Delta) \Vert_2^2 &\leq C \, \Vert \Psi\Vert_2^2 \\
&\hspace{-130pt} \times \int_{\mathbb{R}^3} \mathrm{d} r \, \Bigl| \iint_{\mathbb{R}^3\times \mathbb{R}^3} \mathrm{d} Z\mathrm{d} s\; |\tilde \pi k_T(Z, r-s)| \, |V\alpha_*(s)| \, \sup_{X\in \mathbb{R}^3} \bigl| \mathrm{e}^{\i \Phi_{2A_h}(X, X + Z)} - \mathrm{e}^{2\i A_h(X)\cdot Z}\bigr|\Bigr|^2.
\end{align*}
The estimate in \eqref{Z_estimate} and $\frac 14 |\mathbf B \wedge r| \leq \frac 14 |\mathbf B| ( |r-s| + |s| )$ allow us to prove the bound
\begin{align}
\int_{\mathbb{R}^3} \mathrm{d} Z \; |\tilde \pi k_T(Z, r-s)| \, |Z|^2 \leq F_T^3(r-s) \, (1 + |s| ) + G_T^2(r-s) \label{MTA-MTAe3_3}
\end{align}
with $F_T^a$ in \eqref{LtildeTA-MtildeTA_FT_definition} and $G_T^a$ in \eqref{LtildeTA-MtildeTA_GT_definition}. In combination with \eqref{MTA-MTAe3_2}, this proves the claimed bound for this term. It also ends the proof of the claimed bound for the first term on the right side of \eqref{MTA-MTAe3_1}. It remains to consider the second term.
A short computation that uses $k_T(-Z,r-s) = k_T(Z,r-s)$ and $\cos(x) - 1 = -2\sin^2(\frac x2)$ shows
\begin{align*}
(\mathcal{Q}_{T, \mathbf B, A}\alpha - M_{T, \mathbf A_{e_3}}\alpha)(X, r) &\\
&\hspace{-60pt} = -2 \iint_{\mathbb{R}^3\times \mathbb{R}^3} \mathrm{d} Z \mathrm{d} s \; k_T(Z, r-s) \, \sin^2(A_h(X)\cdot Z) (\cos(Z\cdot \Pi) \alpha)(X, s) \\
&\hspace{-30pt}- \iint_{\mathbb{R}^3\times \mathbb{R}^3} \mathrm{d} Z \mathrm{d} s \; k_T(Z, r-s) \, \sin(2A_h(X) \cdot Z) \, (\sin(Z\cdot \Pi) \alpha)(X, s).
\end{align*}
From this, we check that
\begin{align*}
\Vert \mathcal{Q}_{T, \mathbf B, A}\Delta - M_{T, \mathbf A_{e_3}}\Delta\Vert_2^2 &\leq C\, \bigl[ \Vert\Psi\Vert_2^2 \, \Vert A_h\Vert_\infty^4 + \Vert \Pi\Psi\Vert_2^2 \Vert A_h\Vert_\infty^2 \bigr] \, \Vert F_T^{2} \Vert_1^2 \, \Vert V\alpha_*\Vert_2^2
\end{align*}
holds with $F_T^{a}$ in \eqref{LtildeTA-MtildeTA_FT_definition}. When we use \eqref{MTA-MTAe3_2} to obtain a bound for the $L^1$-norm of $F_T^{2}$, this proves the claimed bound for this term.
Next, we note that
\begin{align}
&\Vert \Pi ( \mathcal{Q}_{T, \mathbf B, A}\Delta - M_{T, \mathbf A_{e_3}}\Delta)\Vert_2^2 \leq C \, \int_{\mathbb{R}^3} \mathrm{d} r \, \Bigl| \iint_{\mathbb{R}^3\times \mathbb{R}^3} \mathrm{d} Z\mathrm{d} s \; |k_T(Z, r-s)| \, |V\alpha_*(s)| \notag \\%
& \times \Bigl[ \sup_{X \in \mathbb{R}^3} | \nabla_X \sin^2 (A_h(X)\cdot Z) | \, \Vert \cos(Z\cdot \Pi)\Psi\Vert_2 + \sup_{X \in \mathbb{R}^3} | \sin^2(A_h(X)\cdot Z) | \, \Vert \Pi \cos(Z\cdot\Pi)\Psi\Vert_2 \notag \\
&+ \sup_{X \in \mathbb{R}^3} | \sin(2A_h(X)\cdot Z) | \, \Vert \sin(Z\cdot \Pi) \Psi\Vert_2 + \sup_{X \in \mathbb{R}^3} | \sin(2A_h(X)\cdot Z) | \, \Vert \Pi\sin(Z\cdot \Pi) \Psi\Vert_2\Bigr]\Bigr|^2. \label{alphaDelta_decomposition_5}
\end{align}
We have
\begin{align*}
\Vert \sin(Z\cdot \Pi)\Psi\Vert_2 \leq C \, h^2 \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)} \, |Z|.
\end{align*}
Furthermore, from a straightforward computation or from \cite[Lemma 5.12]{DeHaSc2021}, we know that
\begin{align*}
\Pi_X \, \sin(Z\cdot \Pi_X) = \sin(Z\cdot \Pi_X) \;\Pi_X + 2\i \, \cos (Z\cdot \Pi_X) \; \mathbf B \wedge Z,
\end{align*}
and hence
\begin{align*}
\Vert \Pi \sin(Z\cdot \Pi)\Psi\Vert_2 &\leq C \, h^2 \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)} \, (1 + |Z|).
\end{align*}
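In more detail, using the above identity, the operator bounds $\Vert \sin(Z\cdot \Pi)\Vert_\infty, \Vert \cos(Z\cdot \Pi)\Vert_\infty \leq 1$, and $|\mathbf B| = h^2$, we find
\begin{align*}
\Vert \Pi \sin(Z\cdot \Pi)\Psi\Vert_2 \leq \Vert \Pi\Psi\Vert_2 + 2 \, h^2 \, |Z| \, \Vert \Psi\Vert_2 \leq C \, h^2 \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)} \, (1 + |Z|),
\end{align*}
where the last step uses the bounds $\Vert \Pi \Psi\Vert_2 \leq C h^2 \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}$ and $\Vert \Psi\Vert_2 \leq C h \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}$, which we read off from the definition of the $H_{\mathrm{mag}}^1(Q_h)$-norm.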
To obtain the bound we used that $|\mathbf B| = h^2$. For $\Pi\cos(Z\cdot \Pi)\Psi$ a similar estimate was obtained in \eqref{MtildeTA-MTA_Lemma}. Putting these bounds together, we find that the term on the left side of \eqref{alphaDelta_decomposition_5} is bounded by a constant times $h^6 \Vert V \alpha_* \Vert_2^2 \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2 $. It remains to consider the term proportional to $\tilde \pi$.
A straightforward computation shows that
\begin{align*}
\Vert \tilde \pi(\mathcal{Q}_{T, \mathbf B, A}\Delta - M_{T, \mathbf A_{e_3}}\Delta)\Vert_2^2 &\leq C\bigl[ \Vert A_h\Vert_\infty^4 \Vert \Psi\Vert_2^2 + \Vert A_h\Vert_\infty^2 \Vert \Pi\Psi\Vert_2^2 \bigr] \\
&\hspace{20pt} \times \int_{\mathbb{R}^3} \mathrm{d} r \, \Bigl| \iint_{\mathbb{R}^3\times \mathbb{R}^3} \mathrm{d} Z \mathrm{d} s \; |\tilde \pi k_T(Z, r-s)| \, |V\alpha_*(s)| \, |Z|^2\Bigr|^2.
\end{align*}
We use \eqref{MTA-MTAe3_3} to see that the term on the left side is bounded by a constant times $h^6 \max_{k=0,1} \Vert \ | \cdot |^k \ V\alpha_*\Vert_2^2 \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2$. This proves Proposition~\ref{MTA-MTAe3}.
\end{proof}
The next lemma provides us with a bound for the term in \eqref{eta_perp_definition} that is proportional to $L_{T, \mathbf A, W} - L_{T, \mathbf A}$.
\begin{lem}
\label{RTAW2_estimate}
Let $V\alpha_*\in L^{2}(\mathbb{R}^3)$, let $W \in L^{\infty}(\mathbb{R}^3)$ and $A\in L^{\infty}(\mathbb{R}^3,\mathbb{R}^3)$ be periodic, assume $\Psi\in H_{\mathrm{mag}}^1(Q_h)$, and denote $\Delta \equiv \Delta_\Psi$ as in \eqref{Delta_definition}. For any $T_0>0$ there is $h_0>0$ such that for any $T\geq T_0$ and any $0 < h \leq h_0$ we have
\begin{align*}
\Vert L_{T, \mathbf A, W} \Delta - L_{T, \mathbf A} \Delta \Vert_{{H^1(Q_h \times \Rbb_{\mathrm s}^3)}}^2 &\leq C \, h^6 \, \Vert V \alpha_* \Vert_2^2 \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2.
\end{align*}
\end{lem}
\begin{proof}
To prove the lemma, we write $L_{T, \mathbf A, W} \Delta - L_{T, \mathbf A} \Delta$ as in \eqref{RTAW2_estimate_1} and use the representation of the $H^1$-norm in \eqref{Norm_equivalence_2}. The details are a straightforward application of arguments that have already been used several times above, and are therefore left to the reader.
\end{proof}
When we combine Propositions~\ref{LTA-LtildeTA}, \ref{LtildeTA-MtildeTA}, \ref{MtildeTA-MTA}, \ref{MT1} as well as \eqref{eq:A23} and Lemma~\ref{RTAW2_estimate}, we obtain the claimed bound for $\eta_0(\Delta)$ in \eqref{alphaDelta_decomposition_eq2}, that is, part (a) of Proposition~\ref{Structure_of_alphaDelta}. The proofs of parts (b) and (c) can be found in \cite{DeHaSc2021}, see the proofs of Proposition 3.2 (b) and (c) there. This completes the proof of Proposition~\ref{Structure_of_alphaDelta}.
\subsection{Proof of Proposition \ref{Lower_Tc_a_priori_bound}}
\label{Lower_Tc_a_priori_bound_proof_Section}
Let the assumptions of Proposition~\ref{Lower_Tc_a_priori_bound} hold. In the following we prove that there are constants $D_0>0$ and $h_0>0$ such that for $0 < h \leq h_0$ and
\begin{align*}
0 < T_0 \leq T < {T_{\mathrm{c}}} (1 - D_0 h^2)
\end{align*}
there is a function $\Psi \in H_{\mathrm{mag}}^2(Q_h)$, such that the energy of the Gibbs state $\Gamma_\Delta$ in \eqref{GammaDelta_definition} with gap function $\Delta(X,r) = -2 V\alpha_*(r) \Psi(X)$ satisfies \eqref{Lower_critical_shift_2}.
Let $\psi \in H_{\mathrm{mag}}^2(Q_1)$ with $\Vert \psi\Vert_{H_{\mathrm{mag}}^2(Q_1)}=1$ and define $\Psi(X) = h \psi(hX)$. The function $\Psi$ satisfies $\Vert \Psi\Vert_{H_{\mathrm{mag}}^2(Q_h)}=1$. When we apply Propositions~\ref{Structure_of_alphaDelta}, \ref{BCS functional_identity}, \ref{Rough_bound_on_BCS energy}, as well as \eqref{Magnetic_Sobolev} and \eqref{eq:A28}, we find
\begin{align*}
\FBCS(\Gamma_\Delta) - \FBCS(\Gamma_0) &< h^2 \, \bigl( - cD_0 \, \Vert \psi\Vert_2^2 + C \bigr)
\end{align*}
for $h$ small enough.
The proof of Proposition~\ref{Lower_Tc_a_priori_bound} is completed when we choose $D_0 = \frac{C}{c \Vert \psi\Vert_2^2}$.
\section{Proofs of the Results in Section \ref{Upper_Bound}}
\label{Proofs}
\input{4_Proofs/4.1_Product_Wave_Functions}
\input{4_Proofs/4.3_Calculation_of_GL-energy/1_technical_preparations}
\input{4_Proofs/4.3_Calculation_of_GL-energy}
\input{4_Proofs/4.4_alphaDelta}
\input{4_Proofs/4.5_A_priori_bound_on_lower_Tc}
\subsection{A lower bound for the BCS functional}
We start the proof of Theorem \ref{Structure_of_almost_minimizers} with the following lower bound on the BCS functional, whose proof is literally the same as that of the comparable statement in \cite{Hainzl2012}.
\begin{lem}
Let $\Gamma_0$ be the normal state in \eqref{Gamma0}. We have the lower bound
\begin{align}
\FBCS(\Gamma) - \FBCS(\Gamma_0) \geq \Tr\bigl[ (K_{T,\mathbf A, W} - V) \alpha \alpha^*\bigr] + \frac{4T}{5} \Tr\bigl[ (\alpha^* \alpha)^2\bigr], \label{Lower_Bound_A_3}
\end{align}
where
\begin{align}
K_{T, \mathbf A, W} = \frac{(-\i \nabla + \mathbf A_h)^2 + W_h- \mu}{\tanh (\frac{(-\i \nabla + \mathbf A_h)^2 + W_h - \mu}{2T})} \label{KTAW_definition}
\end{align}
and $V\alpha(x,y) = V(x-y) \alpha(x,y)$.
\end{lem}
In Proposition~\ref{prop:gauge_invariant_perturbation_theory} in Appendix~\ref{KTV_Asymptotics_of_EV_and_EF_Section} we show that the external electric and magnetic fields can lower the lowest eigenvalue zero of $K_{{T_{\mathrm{c}}}} - V$ by at most a constant times $h^2$. We use this in the next lemma to show that $K_{T, \mathbf A, W} - V$ is bounded from below by a nonnegative operator, up to a correction of size $Ch^2$.
\begin{lem}
\label{KTB_Lower_bound}
Let Assumptions \ref{Assumption_V} and \ref{Assumption_KTc} be true. For any $D_0 \geq 0$, there are constants $h_0>0$ and $T_0>0$ such that for $0< h\leq h_0$ and $T>0$ with $T - {T_{\mathrm{c}}} \geq -D_0h^2$, the estimate
\begin{align}
K_{T, \mathbf A, W} - V &\geq c \; (1 - P) (1 + \pi^2) (1- P) + c \, \min \{ T_0, (T - {T_{\mathrm{c}}})_+\} - Ch^2 \label{KTB_Lower_bound_eq}
\end{align}
holds. Here, $P = |\alpha_*\rangle\langle \alpha_*|$ is the orthogonal projection onto the ground state $\alpha_*$ of $K_{{T_{\mathrm{c}}}} - V$ and $\pi = -\i \nabla + \mathbf A_\mathbf B$.
\end{lem}
\begin{proof}
Since $W \in L^{\infty}(\mathbb{R}^3)$ we can use Lemma \ref{KT_integral_rep} to show that $K_{T, \mathbf A, W} \geq K_{T, \mathbf A, 0} - C h^2$ holds. The rest of the proof goes along the same lines as that of \cite[Lemma 5.4]{DeHaSc2021} with the obvious replacements. In particular, \cite[Proposition~A.1]{DeHaSc2021} needs to be replaced by Proposition~\ref{prop:gauge_invariant_perturbation_theory}. We omit the details.
\end{proof}
We deduce two corollaries from \eqref{Lower_Bound_A_3} and Lemma \ref{KTB_Lower_bound}. The first statement is an a priori bound that will be used in the proof of Theorem \ref{Main_Result_Tc}~(b). Its proof goes along the same lines as that of \cite[Corollary 5.5]{DeHaSc2021}.
\begin{kor}
\label{TcB_First_Upper_Bound}
Let Assumptions \ref{Assumption_V} and \ref{Assumption_KTc} be true. Then, there are constants $h_0>0$ and $C>0$ such that for all $0 < h \leq h_0$ and all temperatures $T\geq {T_{\mathrm{c}}}(1 + Ch^2)$, we have $\FBCS(\Gamma) - \FBCS(\Gamma_0) >0$ unless $\Gamma = \Gamma_0$.
\end{kor}
The second corollary provides us with an inequality for Cooper pair wave functions of low-energy BCS states in the sense of \eqref{Second_Decomposition_Gamma_Assumption}. The left side of \eqref{Lower_Bound_A_2} appears as a lower bound for the full BCS functional. Despite its apparent simplicity, it still contains all the information needed for a proof of Theorem~\ref{Structure_of_almost_minimizers}. Before we state the corollary, let us define the operator
\begin{align}
U &\coloneqq \mathrm{e}^{-\i \frac r2 \Pi_X}, \label{U_definition}
\end{align}
with $\Pi_X$ in \eqref{Magnetic_Momenta_COM}, which acts on the relative coordinate $r = x-y$ as well as on the center-of-mass coordinate $X = \frac{x+y}{2}$ of a function $\alpha(X,r)$.
\begin{kor}
\label{cor:lowerbound}
Let Assumptions \ref{Assumption_V} and \ref{Assumption_KTc} be true. For any $D_0, D_1 \geq 0$, there is a constant $h_0>0$ such that if $\Gamma$ is a low-energy state in the sense that it satisfies \eqref{Second_Decomposition_Gamma_Assumption}, if $0 < h\leq h_0$, and if $T$ is such that $T - {T_{\mathrm{c}}} \geq -D_0h^2$, then $\alpha = \Gamma_{12}$ obeys
\begin{align}
&\langle \alpha, [U(1 - P)(1 + \pi^2)(1 - P)U^* + U^*(1 - P)(1 + \pi^2)(1 - P)U] \alpha \rangle \notag\\
&\hspace{200pt} + \Tr\bigl[(\alpha^* \alpha)^2\bigr] \leq C h^2 \Vert \alpha\Vert_2^2 + D_1h^4, \label{Lower_Bound_A_2}
\end{align}
where $P = | \alpha_* \rangle \langle \alpha_* |$ and $\pi = -\i \nabla_r + \mathbf A_\mathbf B(r)$ both act on the relative coordinate $r$ of $\alpha(X,r)$.
\end{kor}
In the statement of the corollary and in the following, we refrain from equipping the operator $\pi$ and the projection $P = |\alpha_*\rangle \langle \alpha_*|$ with an index $r$, although they act on the relative coordinate. This does not lead to confusion and keeps the formulas readable. The proof of the corollary is inspired by the proof of \cite[Proposition~23]{Hainzl2017}.
\begin{proof}[Proof of Corollary~\ref{cor:lowerbound}]
In the following we use the notation $\pi_{\mathbf A}^x = -\i \nabla_x + \mathbf A(x)$. We claim that
\begin{align}
\pi_{\mathbf A_h}^x &= U \pi_{\mathbf A^+_h}^r U^*, & -\pi_{\mathbf A_h}^y &= U^* \pi_{\mathbf A^-_h}^r U, \label{cor:lowerbound_1}
\end{align}
where $\pi_{\mathbf A^\pm}^r = -\mathrm{i} \nabla_r + \mathbf A^{\pm}(r)$ with
\begin{align*}
\mathbf A^\pm(r) &\coloneqq \mathbf A_{e_3}(r) \pm A(X \pm r).
\end{align*}
To obtain \eqref{cor:lowerbound_1}, we denote $P_X = -\mathrm{i} \nabla_X$ and write $\frac{r}{2} \cdot \Pi_X = \frac{r}{2} \cdot P_X + \frac{\mathbf B}{2} \cdot (X \wedge r)$. Since $[r \cdot P_X, r \cdot (\mathbf B \wedge X)] = 0$, this implies $U = \mathrm{e}^{-\i \frac{\mathbf B}{2} \cdot (X \wedge r)} \mathrm{e}^{- \i \frac{r}{2} P_X}$. Using this identity we conclude that
\begin{align*}
U \; (-\mathrm{i} \nabla_r + \mathbf A_{\mathbf B}(r)) \, U^* &= -\mathrm{i} \nabla_r + \frac{1}{2} \mathbf A_{\mathbf B}(r) + \frac{1}{2} \Pi_X, \\
U^* (-\mathrm{i} \nabla_r + \mathbf A_{\mathbf B}(r)) \, U \; &= -\mathrm{i} \nabla_r + \frac{1}{2} \mathbf A_{\mathbf B}(r) - \frac{1}{2} \Pi_X,
\end{align*}
where $\Pi = -\i \nabla + 2 \mathbf A_{\mathbf B}$. Eq.~\eqref{cor:lowerbound_1} is a direct consequence of these two identities.
We also have
\begin{align*}
W(x) &= U \, W^+(r) \, U^*, & W(y) &= U^* \, W^-(r) \, U, & W^\pm(r) &\coloneqq W(X \pm r).
\end{align*}
Consequently, if $K_{T, \mathbf A, W}^x$ and $K_{T, \mathbf A, W}^y$ denote the operators $K_{T, \mathbf A, W}$ acting on the $x$ and $y$ coordinate, respectively, we infer
\begin{align}
K_{T, \mathbf A, W}^x - V(r) &= U \; ( K_{T, \mathbf A^+, W^+}^r - V(r) ) \, U^*, \notag \\
K_{T, \mathbf A, W}^y - V(r) &= U^* ( K_{T, \mathbf A^-, W^-}^r - V(r) ) \, U. \label{eq:idK_T^r}
\end{align}
We highlight that $\mathbf A^\pm$ and $W^\pm$ depend on $X$.
The operator $V$ in \eqref{Lower_Bound_A_3} acts by multiplication with the function $V(x-y)$. We use the symmetry $\alpha(x,y) = \alpha(y,x)$ to deduce
\begin{align}
\Tr \bigl[ (K_{T, \mathbf A, W} - V) \alpha \alpha^*\bigr] & \notag\\
&\hspace{-80pt} = \frac 12 \fint_{Q_h} \mathrm{d} x \int_{\mathbb{R}^3} \mathrm{d} y \; \overline{\alpha(x,y)} \bigl[ (K_{T, \mathbf A, W}^x - V) + (K_{T, \mathbf A, W}^y - V) \bigr] \alpha(x,y). \label{Lower_Bound_A_4}
\end{align}
In combination, \eqref{Second_Decomposition_Gamma_Assumption}, \eqref{Lower_Bound_A_3}, \eqref{eq:idK_T^r}, and \eqref{Lower_Bound_A_4} therefore prove the bound
\begin{align*}
\frac 12\langle \alpha, [U ( K_{T, \mathbf A^+, W^+}^r - V(r) ) \, U^* + U^* ( K_{T, \mathbf A^-, W^-}^r - V(r) ) \, U]\alpha\rangle + c \Tr \bigl[ (\alpha^* \alpha)^2\bigr] \leq D_1 h^4.
\end{align*}
An application of Lemma \ref{KTB_Lower_bound} on the left side establishes \eqref{Lower_Bound_A_2}.
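We briefly sketch this step. Since $\Vert \mathbf A^\pm \Vert_\infty$ and $\Vert W^\pm \Vert_\infty$ do not depend on $X$, Lemma \ref{KTB_Lower_bound} applies with $\mathbf A$ and $W$ replaced by $\mathbf A^\pm$ and $W^\pm$, with constants that are uniform in $X$. Dropping the nonnegative term $c \min \{ T_0, (T - {T_{\mathrm{c}}})_+\}$ for a lower bound, we obtain
\begin{align*}
K_{T, \mathbf A^\pm, W^\pm}^r - V(r) &\geq c \, (1 - P)(1 + \pi^2)(1 - P) - Ch^2.
\end{align*}
When we insert this bound into the above inequality and divide by the smaller of the two constants, \eqref{Lower_Bound_A_2} follows (with new constants $C$ and $D_1$).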
\end{proof}
\subsection{The first decomposition result}
The proof of Theorem~\ref{Structure_of_almost_minimizers} is based on Corollary~\ref{cor:lowerbound} and is given in two steps. In the first step we drop the second term on the left side of \eqref{Lower_Bound_A_2} for a lower bound, and investigate the implications of the resulting inequality for $\alpha$. The result of the corresponding analysis is summarized in Proposition~\ref{First_Decomposition_Result} below. The second term on the left side of \eqref{Lower_Bound_A_2} is used later in Lemma~\ref{Bound_on_psi}.
\begin{prop}
\label{First_Decomposition_Result}
Given $D_0, D_1 \geq 0 $, there is $B_0>0$ with the following properties. If, for some $0< B\leq B_0$, the wave function $\alpha\in {L^2(Q_h \times \Rbb_{\mathrm s}^3)}$ satisfies
\begin{align}
\langle \alpha , [U^* (1-P)(1 + \pi_r^2)(1-P)U + U (1-P) (1 + \pi_r^2)(1-P)U^*] \alpha \rangle & \leq D_0 B \Vert \alpha\Vert_2^2 + D_1 B^2 , \label{First_Decomposition_Result_Assumption}
\end{align}
then there are $\Psi\in H_{\mathrm{mag}}^1(Q_B)$ and $\xi_0\in {H^1(Q_h \times \Rbb_{\mathrm s}^3)}$ such that
\begin{align}
\alpha(X,r) = \alpha_*(r)^t \cos\bigl( \frac r2 \cdot \Pi_X\bigr) \Psi(X) + \xi_0(X,r) \label{First_Decomposition_Result_Decomp}
\end{align}
with
\begin{align}
\langle \Psi, \Pi^2 \Psi\rangle + \Vert \xi_0\Vert_{{H^1(Q_h \times \Rbb_{\mathrm s}^3)}}^2 \leq C\bigl( B \Vert \Psi \Vert_2^2 + D_1 B^2 \bigr). \label{First_Decomposition_Result_Estimate}
\end{align}
\end{prop}
Before we give the proof of the a priori estimates in Proposition \ref{First_Decomposition_Result}, we define the decomposition, discuss the idea behind it, which originates from \cite{Hainzl2017}, and make the connection to other existing works. For this purpose, let the operator $A \colon {L^2(Q_h \times \Rbb_{\mathrm s}^3)} \rightarrow L_{\mathrm{mag}}^2(Q_B)$ be given by
\begin{align}
(A\alpha)(X) := \int_{\mathbb{R}^3} \mathrm{d} r\; \ov{\alpha_*(r)} \; \cos\bigl( \frac r2\cdot \Pi_X\bigr) \alpha(X,r). \label{Def_A}
\end{align}
A short computation shows that its adjoint $A^*\colon L_{\mathrm{mag}}^2(Q_B) \rightarrow {L^2(Q_h \times \Rbb_{\mathrm s}^3)}$ is given by
\begin{align}
(A^*\Psi) (X,r) = \alpha_*(r)^t \cos\bigl( \frac r2\cdot \Pi_X\bigr) \Psi(X). \label{Def_Astar}
\end{align}
We highlight that this is the form of the first term in \eqref{First_Decomposition_Result_Decomp}. For a given Cooper pair wave function $\alpha$, we use these operators to define the two functions $\Psi$ and $\xi_0$ by
\begin{align}
\Psi &:= (AA^*)^{-1} A\alpha, & \xi_0 &:= \alpha - A^*\Psi. \label{Def_Psixi}
\end{align}
Lemma~\ref{AAstar_Positive} below guarantees that $AA^*$ is invertible, and we readily check that \eqref{First_Decomposition_Result_Decomp} holds with these definitions. Moreover, this decomposition of $\alpha$ is orthogonal in the sense that $\langle A^* \Psi, \xi_0 \rangle = \langle \Psi, A \xi_0 \rangle = 0$ holds. The claimed orthogonality follows from
\begin{align}
A\xi_0 =0, \label{fundamental_property}
\end{align}
which is a direct consequence of \eqref{Def_Psixi}: indeed, $A\xi_0 = A\alpha - AA^* (AA^*)^{-1} A\alpha = 0$. In the following we motivate our choice for $\Psi$ and $\xi_0$ and comment on its appearance in the literature.
The decomposition of $\alpha$ is motivated by the minimization problem for the low-energy operator $2 -UPU^* - U^*PU$, that is, the operator in \eqref{First_Decomposition_Result_Assumption} with $\pi_r^2$ replaced by zero. The operators $UPU^*$ and $U^*PU$ act as $A^*A$ on the space ${L^2(Q_h \times \Rbb_{\mathrm s}^3)}$ of functions that are reflection symmetric in the relative coordinate. If $\Pi$ is replaced by $P_X$ then $A^*A$ can be written as
\begin{equation}
A^* A \cong \int^{\oplus}_{[0,\sqrt{ 2 \pi B}]^3} \mathrm{d}P_X \; | a_{P_X} \rangle \langle a_{P_X} |, \label{eq:directintegralA}
\end{equation}
with $| a_{P_X} \rangle \langle a_{P_X} |$ the orthogonal projection onto the function $a_{P_X}(r) = \cos(r/2 \cdot P_X) \alpha_*(r)$. Here the variable $P_X$ is the dual of the center-of-mass coordinate $X$ in the sense of Fourier transformation and $r$ denotes the relative coordinate. That is, the function $a_{P_X}(r)$ minimizes $1-A^* A$ in each fiber, whence it is the eigenfunction corresponding to the lowest eigenvalue of $1 - A^* A = 1 - (UPU^* + U^*PU)/2$. This discussion should be compared to \cite[Eq. (5.47)]{Hainzl2012} and the discussion before Lemma~20 in \cite{ProceedingsSpohn}.
If $B$ is present in the magnetic momentum operator $\Pi$ the above picture changes because the components of $\Pi$ do not commute and hence cannot be diagonalized simultaneously, so that \eqref{eq:directintegralA} has no obvious analogue in this case. The decomposition of $\alpha$ in terms of the operators $A$ and $A^*$ above has been introduced in \cite{Hainzl2017} in order to study the operator $1 - V^{\nicefrac 12} L_{T,B} V^{\nicefrac 12}$ with $L_{T,B}$ in \eqref{LTB_definition}, see also the discussion below Theorem~\ref{Calculation_of_the_GL-energy}. The situation in that work is comparable to our case with $\pi_r^2$ replaced by zero in \eqref{First_Decomposition_Result_Assumption}. Our analysis below shows that the ansatz \eqref{Def_Psixi} is useful even if the full range of energies is considered, that is, if $\pi_r^2$ is present in \eqref{First_Decomposition_Result_Assumption}.
In the following lemma we collect useful properties of the operator $A A^*$. It should be compared to \cite[Lemma~27]{Hainzl2017}.
\begin{lem}
\label{AAstar_Positive}
The operators
\begin{align*}
AA^* &= \int_{\mathbb{R}^3} \mathrm{d} r\; \ov{\alpha_*(r)} \alpha_*(r)^t \cos^2\bigl( \frac{r}{2} \cdot \Pi\bigr), & 1-AA^* &= \int_{\mathbb{R}^3} \mathrm{d} r \; \ov{\alpha_*(r)} \alpha_*(r)^t \sin^2\bigl( \frac r2 \cdot \Pi\bigr)
\end{align*}
on $L_{\mathrm{mag}}^2(Q_B)$ are both bounded nonnegative matrix-valued functions of $\Pi^2$ and satisfy the following properties:
\begin{enumerate}[(a)]
\item $0\leq AA^*\leq 1$ and $0 \leq 1 - AA^*\leq 1$.
\item There is a constant $c>0$ such that $AA^* \geq c$ and $1 - AA^* \geq c \; \Pi^2\; (1 + \Pi^2)^{-1}$.
\end{enumerate}
In particular, $AA^*$ and $1 - AA^*$ are boundedly invertible on $L_{\mathrm{mag}}^2(Q_B)$.
\end{lem}
The proof proceeds in two steps: the claim is first established in the whole-space case with the help of an expansion in spherical harmonics, and is afterwards lifted to the periodic setting. We refrain from giving the details here.
The remainder of this subsection is devoted to the proof of Proposition~\ref{First_Decomposition_Result}. We start with a lower bound on the operator in \eqref{First_Decomposition_Result_Assumption} when it acts on wave functions of the form $A^*\Psi$, see Lemma~\ref{MainTerm} below.
\subsubsection{Step one -- lower bound on the range of $A^*$}
The main result of this subsection is the following lemma.
\begin{lem}
\label{MainTerm}
For any $\Psi\in L_{\mathrm{mag}}^2(Q_B)$, with $A$ and $A^*$ given by \eqref{Def_A} and \eqref{Def_Astar}, with $U$ given by \eqref{U_definition}, and $P = |\alpha_*\rangle \langle \alpha_*|$ with $\alpha_*$ from \eqref{alpha_star_ev-equation} acting on the relative coordinate, we have
\begin{align}
\frac 12 \langle A^* \Psi, [ U^* (1 - P) (1 + \pi_r^2) (1- P) U + U(1 - P) (1 + \pi_r^2)(1-P) U^* ] A^*\Psi\rangle \hspace{-340pt}& \notag\\
&= \langle \Psi, AA^* (1 - AA^*)(1 + \Pi^2) \Psi\rangle \notag\\
&\hspace{10pt} + \fint_{Q_B} \mathrm{d} X \int_{\mathbb{R}^3} \mathrm{d} r \; \ov{(1 - AA^*)\Psi(X)} \; |\nabla \alpha_*(r)|^2 \; \cos^2 \bigl(\frac r2 \Pi_X\bigr) \; (1 - AA^*)\Psi(X) \notag \\
&\hspace{10pt}+ \fint_{Q_B} \mathrm{d} X \int_{\mathbb{R}^3} \mathrm{d}r \; \ov{AA^*\Psi(X)} \; |\nabla \alpha_*(r)|^2\; \sin^2 \bigl(\frac r2 \Pi_X\bigr) \; AA^*\Psi(X)\notag \\
&\hspace{10pt} + \frac 14 \fint_{Q_B}\mathrm{d} X \int_{\mathbb{R}^3} \mathrm{d} r \; \ov{(1 - AA^*)\Psi(X)} \; |\mathbf B\wedge r|^2 \alpha_*(r)^2 \; \sin^2 \bigl(\frac r2 \Pi_X\bigr) \; (1 - AA^*)\Psi(X) \notag\\
&\hspace{10pt} + \frac 14\fint_{Q_B}\mathrm{d} X\int_{\mathbb{R}^3} \mathrm{d}r \; \ov{AA^*\Psi(X)} \; |\mathbf B\wedge r|^2 \alpha_*(r)^2 \; \cos^2 \bigl(\frac r2 \Pi_X\bigr) \; AA^*\Psi(X). \label{MainTerm_5}
\end{align}
In particular, we have the lower bound
\begin{align}
\frac 12 \langle A^* \Psi, [ U^* (1 - P) (1 + \pi_r^2) (1- P) U + U(1 - P) (1 + \pi_r^2)(1-P) U^* ] A^*\Psi\rangle
\geq c\, \langle \Psi, \Pi^2\Psi\rangle . \label{MainTerm_LowerBound}
\end{align}
\end{lem}
\begin{bem}
Let us replace $\pi_r^2$ on the left side of \eqref{MainTerm_5} by zero for the moment. In this case, the analogue of \eqref{MainTerm_5} reads
\begin{align}
\frac 12 \langle A^*\Psi, [U^*(1 - P)U + U(1 - P)U^* ] A^*\Psi\rangle = \langle \Psi , AA^*(1 - AA^*) \Psi\rangle. \label{Low_Energy_Operator}
\end{align}
It follows from Lemma~\ref{AAstar_Positive} that the operator $AA^*(1 - AA^*)$ is bounded from below by $\Pi^2$ only for small values of $\Pi^2$, which is not enough for the proof of Proposition \ref{First_Decomposition_Result}. This justifies the term ``low-energy operator'' for $1 - A^*A$, which we used earlier in the discussion below \eqref{Def_Astar}. The additional factor $1 + \Pi^2$ in the first term on the right side of \eqref{MainTerm_5} compensates for the problematic behavior of \eqref{Low_Energy_Operator} at high energies. The expression on the right side of \eqref{Low_Energy_Operator} also appears in \cite{Hainzl2017}.
\end{bem}
Before we give the proof of Lemma~\ref{MainTerm}, we prove two technical lemmas, which provide intertwining relations between various magnetic momentum operators and $U$ or linear combinations of $U$. Some of the relations in the first lemma can be found in \cite[Lemma~24]{Hainzl2017}.
\begin{lem}
\label{CommutationI}
Let $p_r := -\i \nabla_r$, $\pi_r = p_r + \frac 12\mathbf B \wedge r$ and $\tilde \pi_r$ and $\Pi_X$ be given by \eqref{Magnetic_Momenta_COM}. For $U$ in \eqref{U_definition}, we have the following intertwining relations:
\begin{align*}
\begin{split}
U \Pi_X U^* &= \Pi_X - \mathbf B\wedge r,\phantom{\frac 12} \\ U^* \Pi_X U &= \Pi_X + \mathbf B\wedge r, \phantom{\frac 12}
\end{split} &
\begin{split}
U \pi_r U^* &= \tilde \pi_r + \frac{1}{2}\Pi_X, \\ U^* \pi_r U &= \tilde \pi_r - \frac{1}{2}\Pi_X, \phantom{\frac 12}
\end{split} &
\begin{split}
U \tilde \pi_r U^* &= p_r + \frac{1}{2}\Pi_X, \\ U^* \tilde \pi_r U &= p_r - \frac{1}{2}\Pi_X.
\end{split}
\end{align*}
\end{lem}
\begin{proof}
Let us denote $P_X := -\i \nabla_X$. We use the fact that $r\cdot P_X$ commutes with $r\cdot (\mathbf B \wedge X)$ to see that
\begin{align}
U^* = \mathrm{e}^{\i \frac \mathbf B 2 \cdot X\wedge r} \; \mathrm{e}^{\i \frac r2P_X} \label{representation_Ustar}
\end{align}
holds. To prove the first intertwining relation with $\Pi_X$, we compute
\begin{align*}
\Pi_XU^* &= (P_X + \mathbf B\wedge X)U^* = \mathrm{e}^{\i \frac \mathbf B 2 \cdot X\wedge r} \Bigl[ P_X - \frac 12 \mathbf B \wedge r + \mathbf B\wedge X\Bigr] \mathrm{e}^{\i \frac r2P_X} \\
&= U^* \Bigl[ P_X - \frac 12 \mathbf B \wedge r + \mathbf B \wedge \bigl(X -\frac r2\bigr)\Bigr] = U^* [ \Pi_X - \mathbf B \wedge r].
\end{align*}
Here we used that $f(X)\, \mathrm{e}^{\i \frac r2P_X} = \mathrm{e}^{\i \frac r2 P_X}\, f(X-\frac r2)$. The second intertwining relation with $\Pi_X$ is obtained by replacing $r$ by $-r$.
Next we consider the first intertwining relation with $\pi_r$ and compute
\begin{align*}
\pi_r U^* &= \bigl( p_r + \frac 12\mathbf B \wedge r\bigr) U^* = \mathrm{e}^{\i \frac \mathbf B 2 \cdot X\wedge r}\Bigl[ p_r + \frac 12 \mathbf B \wedge X + \frac 12 \mathbf B \wedge r\Bigr] \mathrm{e}^{\i \frac r2P_X} \\
&= U^* \Bigl[ p_r + \frac{P_X}{2} + \frac 12 \mathbf B \wedge \bigl( X - \frac r2\bigr) + \frac 12 \mathbf B \wedge r\Bigr] = U^* \Bigl[ \tilde \pi_r + \frac{\Pi_X}{2}\Bigr].
\end{align*}
The remaining relations can be proved similarly and we skip the details.
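For illustration, we sketch the computation for the first relation with $\tilde \pi_r$; here we use $\tilde \pi_r = p_r + \frac 14 \mathbf B \wedge r$, which is the form of \eqref{Magnetic_Momenta_COM} consistent with the relations for $\pi_r$ above. Proceeding as before, we find
\begin{align*}
\tilde \pi_r U^* &= \mathrm{e}^{\i \frac \mathbf B 2 \cdot X\wedge r}\Bigl[ p_r + \frac 12 \mathbf B \wedge X + \frac 14 \mathbf B \wedge r\Bigr] \mathrm{e}^{\i \frac r2P_X} = U^* \Bigl[ p_r + \frac{P_X}{2} + \frac 12 \mathbf B \wedge \bigl( X - \frac r2\bigr) + \frac 14 \mathbf B \wedge r\Bigr] \\
&= U^* \Bigl[ p_r + \frac{\Pi_X}{2}\Bigr],
\end{align*}
which is the first relation with $\tilde \pi_r$.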
\end{proof}
\begin{lem}
\label{CommutationII}
\begin{enumerate}[(a)]
\item We have the following intertwining relations for $\Pi_X$:
\begin{align}
\Pi_X \cos\bigl( \frac r2 \Pi_X\bigr) &= \cos\bigl( \frac r2 \Pi_X\bigr)\Pi_X - \i \sin\bigl( \frac r2 \Pi_X\bigr) \mathbf B\wedge r, \label{PiXcos} \\
\Pi_X \sin\bigl( \frac r2 \Pi_X\bigr) &= \sin\bigl( \frac r2 \Pi_X\bigr) \Pi_X + \i \cos\bigl( \frac r2 \Pi_X\bigr) \mathbf B\wedge r. \label{PiXsin}
\end{align}
\item The operators $p_r$, $\tilde \pi_r$ and $\pi_r$ intertwine according to
\begin{align}
\tilde \pi_r \cos\bigl( \frac r2 \Pi_X\bigr) &= \cos\bigl( \frac r2 \Pi_X\bigr) p_r + \i \sin\bigl( \frac r2 \Pi_X\bigr) \frac{\Pi_X}{2}, \label{pirtilde_cos_pr}\\
\tilde \pi_r \cos\bigl( \frac r2 \Pi_X\bigr) &= \cos\bigl( \frac r2 \Pi_X\bigr)\pi_r + \i \frac{\Pi_X}{2} \sin\bigl( \frac r2 \Pi_X\bigr), \label{pirtilde_cos_pir}
\end{align}
and
\begin{align}
\tilde \pi_r \sin\bigl( \frac r2 \Pi_X\bigr) &= \sin\bigl( \frac r2 \Pi_X\bigr) p_r - \i \cos\bigl( \frac r2 \Pi_X\bigr) \frac{\Pi_X}{2}, \label{pirtilde_sin_pr}\\
\tilde \pi_r \sin\bigl( \frac r2 \Pi_X\bigr) &= \sin\bigl( \frac r2 \Pi_X\bigr) \pi_r - \i \frac{\Pi_X}{2} \cos\bigl( \frac r2 \Pi_X\bigr). \label{pirtilde_sin_pir}
\end{align}
\end{enumerate}
\end{lem}
It will be useful in the proof of Lemma \ref{MainTerm} to have displayed both \eqref{pirtilde_cos_pr} and \eqref{pirtilde_cos_pir}, as well as \eqref{pirtilde_sin_pr} and \eqref{pirtilde_sin_pir}, even though the members of each pair follow directly from each other and from \eqref{PiXcos} or \eqref{PiXsin}.
\begin{proof}
The proof is a direct consequence of the representations
\begin{align}
\cos\bigl( \frac r2 \Pi_X\bigr) &= \frac 12 (U^* + U), & \sin\bigl( \frac r2 \Pi_X\bigr) &= \frac 1{2\i} (U^*-U),
\end{align}
and the intertwining relations in Lemma~\ref{CommutationI}. We omit the details.
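To illustrate the computation, we verify \eqref{PiXcos}. From the proof of Lemma~\ref{CommutationI} we know that $\Pi_X U^* = U^* [\Pi_X - \mathbf B \wedge r]$ and, after the replacement $r \to -r$, that $\Pi_X U = U [\Pi_X + \mathbf B \wedge r]$. Hence
\begin{align*}
\Pi_X \cos\bigl( \frac r2 \Pi_X\bigr) = \frac 12 \bigl( U^* [\Pi_X - \mathbf B \wedge r] + U [\Pi_X + \mathbf B \wedge r] \bigr) = \cos\bigl( \frac r2 \Pi_X\bigr) \Pi_X - \i \sin\bigl( \frac r2 \Pi_X\bigr) \mathbf B \wedge r,
\end{align*}
where we used $\frac 12 (U - U^*) = -\i \sin( \frac r2 \Pi_X)$.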
\end{proof}
\begin{proof}[Proof of Lemma \ref{MainTerm}]
The proof is a tedious computation that is based on the intertwining relations in Lemma \ref{CommutationII}.
We start by defining
\begin{align}
\mathcal{T}_1 &:= U^* \pi_r^2 U + U\pi_r^2 U^* = 2\, \tilde \pi_r^2 + \frac 12 \, \Pi_X^2, & \mathcal{T}_2 &:= U^*P\pi_r^2PU + UP\pi_r^2PU^*, \notag \\
\mathcal{T}_3 &:= U^* P\pi_r^2U + UP\pi_r^2U^*, & \mathcal{T}_4 &:= U^* \pi_r^2 PU + U\pi_r^2PU^*. \label{MainTerm_6}
\end{align}
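The identity for $\mathcal{T}_1$ in \eqref{MainTerm_6} is a consequence of Lemma~\ref{CommutationI}: with $U^* \pi_r U = \tilde \pi_r - \frac 12 \Pi_X$ and $U \pi_r U^* = \tilde \pi_r + \frac 12 \Pi_X$ we find
\begin{align*}
U^* \pi_r^2 U + U \pi_r^2 U^* = \Bigl( \tilde \pi_r - \frac{\Pi_X}{2} \Bigr)^2 + \Bigl( \tilde \pi_r + \frac{\Pi_X}{2} \Bigr)^2 = 2\, \tilde \pi_r^2 + \frac 12\, \Pi_X^2
\end{align*}
because the mixed terms cancel each other.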
Then, \eqref{MainTerm_5} can be written as
\begin{align}
\langle A^* \Psi, [ U^* (1 - P) (1 + \pi_r^2) (1- P) U + U(1 - P) (1 + \pi_r^2)(1-P) U^* ] A^*\Psi\rangle & = \notag\\
&\hspace{-330pt}= 2\langle A^*\Psi, (1 - A^*A)A^*\Psi\rangle \notag \\
&\hspace{-300pt} + \langle A^*\Psi, \mathcal{T}_1 A^*\Psi\rangle + \langle A^*\Psi, \mathcal{T}_2 A^*\Psi\rangle - \langle A^*\Psi, \mathcal{T}_3 A^*\Psi\rangle - \langle A^*\Psi, \mathcal{T}_4 A^*\Psi\rangle. \label{Tcal_first_line}
\end{align}
The first term on the right side equals twice the term on the right side of \eqref{Low_Energy_Operator}, and is in its final form.
We first compute the $\Pi_X^2$ term in $\mathcal{T}_1$, which reads
\begin{align*}
\langle A^* \Psi , \Pi_X^2 A^* \Psi \rangle = \fint_{Q_B} \mathrm{d} X \int_{\mathbb{R}^3} \mathrm{d} r \; \ov{\Psi(X)} \, \ov{\alpha_*(r)} \; \cos\bigl( \frac r2 \Pi_X\bigr) \Pi_X^2 \cos\bigl( \frac r2 \Pi_X\bigr) \; \alpha_*(r) \Psi(X).
\end{align*}
Our goal is to move $\Pi_X^2$ to the right. To that end, we apply \eqref{PiXcos} twice and obtain
\begin{align*}
\Pi_X^2 \cos\bigl( \frac r2 \Pi_X\bigr) = \cos\bigl( \frac r2 \Pi_X\bigr)\Pi_X^2 - \i \sin\bigl( \frac r2 \Pi_X\bigr) \Pi_X\mathbf B \wedge r - \i \Pi_X \sin\bigl( \frac r2 \Pi_X\bigr)\mathbf B \wedge r.
\end{align*}
We multiply this from the left with $\cos( \frac r2 \Pi_X)$, use \eqref{PiXcos} to commute $\Pi_X$ to the left in the last term, and find
\begin{align}
\cos\bigl( \frac r2 \Pi_X\bigr) \Pi_X^2\cos\bigl( \frac r2 \Pi_X\bigr) &= \cos^2\bigl( \frac r2 \Pi_X\bigr) \Pi_X^2 + \sin^2\bigl( \frac r2 \Pi_X\bigr) |\mathbf B\wedge r|^2 \notag\\
&\hspace{-60pt}- \i \bigl[ \Pi_X\cos\bigl( \frac r2 \Pi_X\bigr)\sin\bigl( \frac r2 \Pi_X\bigr) + \cos\bigl( \frac r2 \Pi_X\bigr)\sin\bigl( \frac r2 \Pi_X\bigr) \Pi_X\bigr]\mathbf B\wedge r. \label{PiX-term_1}
\end{align}
The operator $| \mathbf B \wedge r |^2$ in the second term on the right side commutes with $\sin^2( \frac r2 \Pi_X)$. The operator in square brackets is self-adjoint and commutes with $\mathbf B \wedge r$. When we take the average of \eqref{PiX-term_1} and its adjoint, we obtain
\begin{align}
\cos\bigl( \frac r2 \Pi_X\bigr) \Pi_X^2\cos\bigl( \frac r2 \Pi_X\bigr) & \notag\\
&\hspace{-50pt}= \frac 12 \cos^2\bigl( \frac r2 \Pi_X\bigr) \Pi_X^2 + \frac 12 \, \Pi_X^2 \cos^2\bigl( \frac r2 \Pi_X\bigr) + \sin^2\bigl( \frac r2 \Pi_X\bigr) |\mathbf B\wedge r|^2. \label{PiX-term_2}
\end{align}
We evaluate \eqref{PiX-term_2} in the inner product with $\alpha_*\Phi$ and $\alpha_*\Psi$ on the left and right side, respectively, use the fact that $AA^*$ commutes with $\Pi^2$, see Lemma \ref{AAstar_Positive}, and obtain
\begin{align}
\langle A^*\Phi, \Pi_X^2A^*\Psi\rangle &= \langle \Phi, AA^* \Pi^2\Psi\rangle \notag\\
&\hspace{10pt} + \fint_{Q_B}\mathrm{d} X\int_{\mathbb{R}^3} \mathrm{d} r \; \ov{\Phi(X)} \; |\mathbf B\wedge r|^2 \alpha_*(r)^2 \; \sin^2\bigl( \frac r2 \Pi_X\bigr) \; \Psi(X). \label{PiX-term}
\end{align}
When we choose $\Phi = \Psi$ we obtain the result for the term proportional to $\Pi_X^2$ in $\mathcal{T}_1$.
Next, we investigate the term proportional to $\tilde\pi_r^2$ in $\mathcal{T}_1$. We use \eqref{pirtilde_cos_pir} to move the operators $\tilde \pi_r$ from the middle to the outer positions and find
\begin{align}
\langle A^*\Psi, \tilde \pi_r^2A^*\Psi\rangle &= \fint_{Q_B} \mathrm{d} X\int_{\mathbb{R}^3} \mathrm{d} r\; \ov{\Psi(X)} \alpha_*(r)\left[ \pi_r\cos\bigl( \frac r2 \Pi_X\bigr) -\i \sin\bigl( \frac r2 \Pi_X\bigr) \frac{\Pi_X}{2}\right] \notag\\
&\hspace{80pt} \times \left[ \cos\bigl( \frac r2 \Pi_X\bigr)\pi_r + \i \frac{\Pi_X}{2}\sin\bigl( \frac r2 \Pi_X\bigr)\right]\alpha_*(r) \Psi(X). \label{tildepir_eq1}
\end{align}
We multiply out the brackets and obtain four terms. The terms proportional to $\cos^2$ and $\sin^2$ read
\begin{align}
&\fint_{Q_B} \mathrm{d} X \int_{\mathbb{R}^3}\mathrm{d} r\; \ov{\Psi(X)}\; |\pi_r\alpha_*(r)|^2\; \cos^2\bigl( \frac r2 \Pi_X\bigr) \; \Psi(X) \notag\\
&\hspace{20pt} + \fint_{Q_B}\mathrm{d} X\int_{\mathbb{R}^3} \mathrm{d} r \; \ov{\Psi(X)} \; \alpha_*(r)\; \sin\bigl( \frac r2 \Pi_X\bigr) \frac{\Pi_X^2}{4} \sin\bigl( \frac r2 \Pi_X\bigr) \; \alpha_*(r) \; \Psi(X).\label{tildepir_eq4}
\end{align}
For the moment the second line remains untouched. It is going to be canceled by a term in \eqref{tildepir_eq5} below. The term in the first line equals
\begin{align}
&\fint_{Q_B} \mathrm{d} X \int_{\mathbb{R}^3}\mathrm{d} r\; \ov{\Psi(X)} \; |\nabla\alpha_*(r)|^2 \; \cos^2\bigl( \frac r2 \Pi_X\bigr) \; \Psi(X) \notag \\
&\hspace{50pt} + \frac 14\fint_{Q_B} \mathrm{d} X \int_{\mathbb{R}^3}\mathrm{d} r\; \ov{\Psi(X)} \; |\mathbf B \wedge r|^2 \alpha_*(r)^2 \; \cos^2\bigl( \frac r2 \Pi_X\bigr) \; \Psi(X). \label{tildepir_eq2}
\end{align}
To obtain this result, we used $(\nabla \alpha_*)(r) \cdot \mathbf B \wedge r =0$, which holds because $\alpha_*$ is radial, so that $\nabla \alpha_*(r)$ is proportional to $r$ and $r \cdot (\mathbf B \wedge r) = 0$. This term is in its final form.
Now we have a closer look at the terms proportional to $\sin $ times $\cos$ in \eqref{tildepir_eq1}. The operator inside the relevant quadratic form is given by
\begin{align}
\i \pi_r\cos\bigl( \frac r2 \Pi_X\bigr) \frac{\Pi_X}{2} \sin\bigl( \frac r2 \Pi_X\bigr) - \i \sin\bigl( \frac r2 \Pi_X\bigr)\frac{\Pi_X}{2} \cos\bigl( \frac r2 \Pi_X\bigr)\pi_r. \label{MainTerm_1}
\end{align}
We intend to interchange $\sin( \frac r2 \Pi_X)$ and $\cos( \frac r2 \Pi_X)$ in the first term. To do this, we use \eqref{PiXsin} to move $\Pi_X$ out of the center so that the first term equals
\begin{align*}
\i \pi_r\cos\bigl( \frac r2 \Pi_X\bigr) \sin\bigl( \frac r2 \Pi_X\bigr)\frac{\Pi_X}{2} -\frac 12\pi_r\cos^2\bigl( \frac r2 \Pi_X\bigr) \mathbf B \wedge r.
\end{align*}
In the first term we may now commute the sine and the cosine and use \eqref{PiXcos} and \eqref{pirtilde_sin_pir} to bring $\pi_r$ and $\Pi_X$ in the center again. We also move $\pi_r$ into the center in the second term in \eqref{MainTerm_1}. As a result, \eqref{MainTerm_1} equals
\begin{align}
&\cos\bigl( \frac r2 \Pi_X\bigr) \frac{\Pi_X^2}{4}\cos\bigl( \frac r2 \Pi_X\bigr) - \sin\bigl( \frac r2 \Pi_X\bigr) \frac{\Pi_X^2}{4}\sin\bigl( \frac r2 \Pi_X\bigr) \notag \\
&\hspace{40pt}+ \i \cos\bigl( \frac r2 \Pi_X\bigr) \frac{\Pi_X}{4} \sin\bigl( \frac r2 \Pi_X\bigr) \mathbf B \wedge r \notag \\
&\hspace{80pt}- \frac 12 \pi_r \cos^2\bigl( \frac r2 \Pi_X\bigr)\mathbf B \wedge r - \frac 12 \sin\bigl( \frac r2 \Pi_X\bigr)\tilde \pi_r \sin\bigl( \frac r2 \Pi_X\bigr)\mathbf B \wedge r. \label{MainTerm_3}
\end{align}
We use \eqref{pirtilde_sin_pir} to move $\tilde \pi_r$ to the left in the last term in \eqref{MainTerm_3}. One of the terms we obtain in this way cancels the third term in \eqref{MainTerm_3}. We also use $\cos( \frac r2 \Pi_X)^2 + \sin( \frac r2 \Pi_X)^2=1$ to rewrite the fourth term in \eqref{MainTerm_3}. In combination, these considerations imply that the terms in \eqref{MainTerm_3} equal
\begin{align}
\begin{split}
\cos\bigl( \frac r2 \Pi_X\bigr) \frac{\Pi_X^2}{4}\cos\bigl( \frac r2 \Pi_X\bigr) - \sin\bigl( \frac r2 \Pi_X\bigr)\frac{\Pi_X^2}{4}\sin\bigl( \frac r2 \Pi_X\bigr) - \frac 12 \pi_r\mathbf B \wedge r. \end{split} \label{tildepir_eq5}
\end{align}
The expectation of the second term with respect to $\alpha_*(r) \Psi(X)$ cancels the second term in \eqref{tildepir_eq4}. We multiply the last term from the left and from the right with $\alpha_*(r)$, integrate over $r$ and find
\begin{align}
\frac 12\int_{\mathbb{R}^3} \mathrm{d} r\; \alpha_*(r) \pi_r \mathbf B \wedge r \alpha_*(r) = \frac 12 \int_{\mathbb{R}^3} \mathrm{d} r\; \ov{p_r\alpha_*(r)} \mathbf B \wedge r\alpha_*(r) + \frac 14\int_{\mathbb{R}^3} \mathrm{d} r\; |\mathbf B \wedge r|^2 \alpha_*(r)^2. \label{Tcal3_eq2}
\end{align}
The first term on the right side vanishes because $\alpha_*$ is radial, see the remark below \eqref{tildepir_eq4}. Let us summarize where we are. We combine \eqref{tildepir_eq1}--\eqref{Tcal3_eq2} to see that
\begin{align}
\langle A^*\Psi, \tilde \pi_r^2A^*\Psi\rangle &= \fint_{Q_B} \mathrm{d} X \int_{\mathbb{R}^3}\mathrm{d} r\; \ov{\Psi(X)} \; |\nabla\alpha_*(r)|^2 \; \cos^2\bigl( \frac r2 \Pi_X\bigr) \; \Psi(X) \notag \\
&\hspace{-30pt} + \frac 14 \fint_{Q_B} \mathrm{d} X \int_{\mathbb{R}^3}\mathrm{d} r\; \ov{\Psi(X)} \; |\mathbf B \wedge r|^2 \alpha_*(r)^2 \; \bigl( \cos^2 \bigl( \frac r2 \Pi_X\bigr) -1 \bigr) \; \Psi(X) \notag \\
&\hspace{-30pt} + \frac 14 \fint_{Q_B} \mathrm{d} X \int_{\mathbb{R}^3}\mathrm{d} r \; \ov{\alpha_*(r) \Psi(X)} \; \cos\bigl( \frac r2 \Pi_X\bigr) \Pi_X^2 \cos\bigl( \frac r2 \Pi_X\bigr) \; \alpha_*(r) \Psi(X). \label{eq:1}
\end{align}
The term in the last line equals $\langle A^*\Psi, \Pi_X^2A^*\Psi\rangle$ and we use \eqref{PiX-term} to rewrite it. This allows us to show
\begin{align*}
\langle A^*\Psi, \tilde \pi_r^2A^*\Psi\rangle &= \frac 14 \langle \Psi, AA^* \Pi^2\Psi\rangle + \fint_{Q_B} \mathrm{d} X \int_{\mathbb{R}^3} \mathrm{d} r\; \ov{\Psi(X)}\; |\nabla \alpha_*(r)|^2\; \cos^2\bigl( \frac r2 \Pi_X\bigr)\; \Psi(X).
\end{align*}
In combination with \eqref{MainTerm_6} and \eqref{PiX-term}, this yields
\begin{align}
\langle A^* \Psi, \mathcal{T}_1 A^*\Psi\rangle &= \langle \Psi , AA^*\Pi^2\Psi\rangle \notag \\
&\hspace{10pt}+ 2\fint_{Q_B} \mathrm{d} X\int_{\mathbb{R}^3} \mathrm{d} r\; \ov{\Psi(X)}\; |\nabla \alpha_*(r)|^2 \; \cos^2\bigl( \frac r2 \Pi_X\bigr)\; \Psi(X) \notag \\
&\hspace{10pt} + \frac 12 \fint_{Q_B}\mathrm{d} X\int_{\mathbb{R}^3} \mathrm{d} r \; \ov{\Psi(X)} \; |\mathbf B\wedge r|^2 \alpha_*(r)^2\; \sin^2\bigl( \frac r2 \Pi_X\bigr) \; \Psi(X) \label{Tcal1-Result}
\end{align}
and completes our computation of the term involving $\mathcal{T}_1$.
A short computation shows that
\begin{equation}
\langle A^*\Psi , \mathcal{T}_2A^*\Psi\rangle = 2\, \langle AA^*\Psi , AA^*\Psi\rangle \Bigl[ \Vert \nabla \alpha_*\Vert_2^2 + \frac 14 \int_{\mathbb{R}^3} \mathrm{d} r \; |\mathbf B \wedge r|^2 \alpha_*(r)^2\Bigr]. \label{Tcal2_result}
\end{equation}
It remains to compute the terms in \eqref{Tcal_first_line} involving the operators $\mathcal{T}_3$ and $\mathcal{T}_4$, where $\mathcal{T}_4^* = \mathcal{T}_3$.
In the following we compute the term with $\mathcal{T}_3$. A short computation which uses the fact that $\alpha_*$ is radial shows
\begin{align}
\langle A^* \Psi, \mathcal{T}_3A^*\Psi\rangle &= \langle \alpha_* A A^*\Psi, \pi_r^2 (U^* + U) A^*\Psi \rangle \notag\\
&\hspace{-30pt}= 2\fint_{Q_B}\mathrm{d} X\int_{\mathbb{R}^3} \mathrm{d} r \; \ov{AA^*\Psi(X)} \; \ov{p_r\alpha_*(r)} \; p_r\cos^2\bigl( \frac r2 \Pi_X\bigr) \; \alpha_*(r) \; \Psi(X) \notag\\
&\hspace{-10pt} + \frac 12 \fint_{Q_B} \mathrm{d} X \int_{\mathbb{R}^3} \mathrm{d} r\; \ov{AA^*\Psi(X)} \; |\mathbf B \wedge r|^2 \alpha_*(r)^2\; \cos^2\bigl( \frac r2 \Pi_X\bigr) \; \Psi(X). \label{Tcal3_eq3}
\end{align}
The second term on the right side is in its final form and will be canceled by a term below. We continue with the first term, use \eqref{pirtilde_cos_pr} twice to commute $p_r$ with the squared cosine, as well as \eqref{PiXcos} and \eqref{PiXsin} to commute $\Pi_X$ to the center in the emerging terms, and find
\begin{align}
p_r\cos^2\bigl( \frac r2 \Pi_X\bigr)
&= \cos^2\bigl( \frac r2 \Pi_X\bigr)p_r + \i \sin\bigl( \frac r2 \Pi_X\bigr) \Pi_X \cos\bigl( \frac r2 \Pi_X\bigr) - \frac 12 \mathbf B \wedge r. \label{Tcal3_eq1}
\end{align}
We note that the last term, when inserted back into \eqref{Tcal3_eq3}, vanishes because $\alpha_*$ is radial. The first term is final and its quadratic form with $p_r \alpha_* AA^*\Psi$ and $\alpha_*\Psi$ reads
\begin{align}
2 \fint_{Q_B} \mathrm{d} X\int_{\mathbb{R}^3} \mathrm{d} r \; \ov{AA^*\Psi(X)} \; |\nabla \alpha_*(r)|^2 \; \cos^2\bigl( \frac r2\Pi_X\bigr) \; \Psi(X). \label{MainTerm_4}
\end{align}
Let us continue with the second term on the right side of \eqref{Tcal3_eq1}. We multiply it with $p_r$ from the left and use \eqref{pirtilde_cos_pr} and \eqref{pirtilde_sin_pr} to commute $p_r$ to the right. In the two emerging terms we bring $\Pi_X$ to the center and obtain
\begin{align*}
p_r \sin\bigl( \frac r2 \Pi_X\bigr) \Pi_X\cos\bigl( \frac r2 \Pi_X\bigr) &= \sin\bigl( \frac r2 \Pi_X\bigr)\Pi_X\cos\bigl( \frac r2 \Pi_X\bigr) p_r + \i \sin\bigl( \frac r2 \Pi_X\bigr)\frac{\Pi_X^2}{2} \sin\bigl( \frac r2 \Pi_X\bigr) \\
&\hspace{145pt}- \i \cos\bigl( \frac r2 \Pi_X\bigr) \frac{\Pi_X^2}{2}\cos\bigl( \frac r2 \Pi_X\bigr).
\end{align*}
We plug the second term of \eqref{Tcal3_eq1}, written in this form, back into \eqref{Tcal3_eq3} and obtain
\begin{align}
2 \i \fint_{Q_B}\mathrm{d} X\int_{\mathbb{R}^3} \mathrm{d} r\; \ov{AA^*\Psi(X)} \; \ov{p_r\alpha_*(r)} \; \sin\bigl( \frac r2 \Pi_X\bigr) \Pi_X \cos\bigl( \frac r2 \Pi_X\bigr)\; \alpha_*(r)\; \Psi(X) \hspace{-350pt}& \notag\\
&= 2 \i \fint_{Q_B}\mathrm{d} X\int_{\mathbb{R}^3} \mathrm{d} r\; \ov{AA^* \Psi(X)} \; \alpha_*(r) \; \sin\bigl( \frac r2 \Pi_X\bigr) \Pi_X \cos\bigl( \frac r2 \Pi_X\bigr) \; p_r\alpha_*(r)\; \Psi(X) \notag \\
&\hspace{20pt} + \fint_{Q_B}\mathrm{d} X\int_{\mathbb{R}^3} \mathrm{d} r \; \ov{AA^*\Psi(X)} \; \alpha_*(r) \; \cos\bigl( \frac r2 \Pi_X\bigr)\Pi_X^2 \cos\bigl( \frac r2 \Pi_X\bigr) \; \alpha_*(r)\; \Psi(X) \notag \\
&\hspace{20pt} - \fint_{Q_B}\mathrm{d} X\int_{\mathbb{R}^3} \mathrm{d} r \; \ov{AA^*\Psi(X)} \; \alpha_*(r) \; \sin\bigl( \frac r2 \Pi_X\bigr)\Pi_X^2 \sin\bigl( \frac r2 \Pi_X\bigr) \; \alpha_*(r) \; \Psi(X).\label{MainTerm_7}
\end{align}
Notice that the first term on the right side equals $(-1)$ times the term on the left side. Thus, the left side equals $\frac 12$ times the sum of the third and the fourth line. To compute the third line of \eqref{MainTerm_7} we use \eqref{PiX-term} with the choice $\Phi = AA^*\Psi$. A short computation shows that \eqref{PiX-term_2} holds equally with $\cos$ and $\sin$ interchanged. Accordingly, $(-1)$ times the fourth line of \eqref{MainTerm_7} equals
\begin{equation*}
\langle AA^*\Psi, (1 -AA^*) \Pi^2\Psi\rangle + \fint_{Q_B}\mathrm{d} X\int_{\mathbb{R}^3} \mathrm{d} r \; \ov{AA^*\Psi(X)} \; |\mathbf B\wedge r|^2 \alpha_*(r)^2 \; \cos^2\bigl( \frac r2 \Pi_X\bigr) \; \Psi(X).
\end{equation*}
In combination, these considerations imply that the left side of \eqref{MainTerm_7} is given by
\begin{align}
&\frac 12 \langle AA^*\Psi, AA^* \Pi^2\Psi\rangle - \frac 12 \langle AA^*\Psi, (1 -AA^*) \Pi^2\Psi\rangle \notag \\
&\hspace{20pt}+ \frac 12 \fint_{Q_B}\mathrm{d} X\int_{\mathbb{R}^3} \mathrm{d} r \; \ov{AA^*\Psi(X)} \; |\mathbf B\wedge r|^2 \alpha_*(r)^2 \; \sin^2\bigl( \frac r2 \Pi_X\bigr) \; \Psi(X) \notag\\
&\hspace{20pt} - \frac 12 \fint_{Q_B}\mathrm{d} X\int_{\mathbb{R}^3} \mathrm{d} r \; \ov{AA^*\Psi(X)} \; |\mathbf B\wedge r|^2 \alpha_*(r)^2 \; \cos^2\bigl( \frac r2 \Pi_X\bigr) \; \Psi(X). \label{Tcal3_eq4}
\end{align}
We note that the third term in \eqref{Tcal3_eq4} cancels the second term in \eqref{Tcal3_eq3}. Adding all this to \eqref{MainTerm_4}, we find
\begin{align}
\langle A^*\Psi, \mathcal{T}_3 A^*\Psi\rangle &= \frac 12 \langle \Psi, AA^* AA^* \Pi^2 \Psi\rangle - \frac 12 \langle \Psi, AA^* (1 - AA^*) \Pi^2 \Psi\rangle \notag\\
&\hspace{-10pt} + 2 \fint_{Q_B} \mathrm{d} X\int_{\mathbb{R}^3} \mathrm{d} r\; \ov{AA^*\Psi(X)} \; |\nabla \alpha_*(r)|^2 \; \cos^2\bigl( \frac r2 \Pi_X\bigr) \; \Psi(X) \notag \\
&\hspace{-10pt}+ \frac 12 \fint_{Q_B} \mathrm{d} X \int_{\mathbb{R}^3} \mathrm{d} r \; \ov{AA^*\Psi(X)} \; |\mathbf B \wedge r|^2 \alpha_*(r)^2 \; \sin^2\bigl( \frac r2 \Pi_X\bigr)\; \Psi(X). \label{Tcal3_result}
\end{align}
The corresponding result for $\langle A^*\Psi, \mathcal{T}_4 A^*\Psi\rangle$ is obtained by taking the complex conjugate of the right side of \eqref{Tcal3_result}, which amounts to interchanging the roles of $AA^*\Psi$ and $\Psi$ in the last two lines.
We are now prepared to collect our results and to provide the final formula for \eqref{Tcal_first_line}. We need to collect the terms in \eqref{Tcal1-Result}, \eqref{Tcal2_result}, \eqref{Tcal3_result} and the complex conjugate of \eqref{Tcal3_result}. The terms involving $|\nabla \alpha_*|^2$ read
\begin{align}
&2\fint_{Q_B} \mathrm{d} X\int_{\mathbb{R}^3} \mathrm{d} r\; \ov{\Psi(X)}\; |\nabla \alpha_*(r)|^2 \; \cos^2\bigl( \frac r2 \Pi_X\bigr) \; \Psi(X) + 2\langle AA^*\Psi, AA^*\Psi\rangle \Vert \nabla \alpha_*\Vert_2^2 \notag \\
&\hspace{50pt}-2 \fint_{Q_B} \mathrm{d} X\int_{\mathbb{R}^3} \mathrm{d} r\; \ov{AA^*\Psi(X)} \; |\nabla \alpha_*(r)|^2 \; \cos^2\bigl( \frac r2 \Pi_X\bigr) \; \Psi(X) \notag \\
&\hspace{50pt}- 2 \fint_{Q_B} \mathrm{d} X\int_{\mathbb{R}^3} \mathrm{d} r\; \ov{\Psi(X)} \; |\nabla \alpha_*(r)|^2 \; \cos^2\bigl( \frac r2 \Pi_X\bigr) \; AA^*\Psi(X). \label{nablaterms}
\end{align}
When we insert the factor $1 = \cos^2(\frac r2 \Pi_X) + \sin^2(\frac r2\Pi_X)$ in the second term, we obtain the final result for the terms proportional to $|\nabla \alpha_*|^2$.
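Written out, this insertion turns the second term in \eqref{nablaterms} into
\begin{align*}
2\fint_{Q_B} \mathrm{d} X\int_{\mathbb{R}^3} \mathrm{d} r\; \ov{AA^*\Psi(X)}\; |\nabla \alpha_*(r)|^2 \; \bigl( \cos^2\bigl( \frac r2 \Pi_X\bigr) + \sin^2\bigl( \frac r2 \Pi_X\bigr) \bigr) \; AA^*\Psi(X),
\end{align*}
and the part containing $\cos^2( \frac r2 \Pi_X)$ combines with the remaining terms in \eqref{nablaterms} into a quadratic form in $(1 - AA^*)\Psi$.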
The terms proportional to $\alpha_*^2$ with magnetic fields read
\begin{align*}
&\frac 12 \fint_{Q_B}\mathrm{d} X\int_{\mathbb{R}^3} \mathrm{d} r \; \ov{\Psi(X)} \; |\mathbf B\wedge r|^2 \alpha_*(r)^2\; \sin^2\bigl( \frac r2 \Pi_X\bigr) \; \Psi(X) \\
&\hspace{30pt} + \frac 12\langle AA^*\Psi, AA^*\Psi\rangle \int_{\mathbb{R}^3} \mathrm{d} r \; |\mathbf B \wedge r|^2 \alpha_*(r)^2 \\
&\hspace{30pt}- \frac 12 \fint_{Q_B} \mathrm{d} X \int_{\mathbb{R}^3} \mathrm{d} r \; \ov{AA^*\Psi(X)} \; |\mathbf B \wedge r|^2 \alpha_*(r)^2 \; \sin^2\bigl( \frac r2 \Pi_X\bigr) \; \Psi(X) \\
&\hspace{30pt}- \frac 12 \fint_{Q_B} \mathrm{d} X \int_{\mathbb{R}^3} \mathrm{d} r \; \ov{\Psi(X)} \; |\mathbf B \wedge r|^2\alpha_*(r)^2\; \sin^2\bigl( \frac r2 \Pi_X\bigr) \; AA^*\Psi(X).
\end{align*}
When we insert $1 = \cos^2(\frac r2 \Pi_X) + \sin^2(\frac r2\Pi_X)$ in the second term, we can bring these terms into the claimed form.
Finally, we collect the terms proportional to $\alpha_*^2$ but without magnetic field. Taking into account the first term in \eqref{Tcal_first_line}, we find
\begin{align*}
&2\langle \Psi, AA^*(1 - AA^*)\Psi\rangle + \langle \Psi, AA^*\Pi^2\Psi\rangle + \langle \Psi, AA^*(1 - AA^*)\Pi^2\Psi\rangle - \langle \Psi, AA^* AA^* \Pi^2 \Psi\rangle \\
&\hspace{50pt}= 2\langle \Psi, AA^*(1 - AA^*)(1 + \Pi^2)\Psi\rangle.
\end{align*}
To obtain the result, we used that the terms coming from $\mathcal{T}_3$ and $\mathcal{T}_4$ are actually the same because $AA^*$ and $1 - AA^*$ commute with $\Pi^2$, see Lemma \ref{AAstar_Positive}. This proves \eqref{MainTerm_5} and the lower bound \eqref{MainTerm_LowerBound} is implied by the operator bounds in Lemma \ref{AAstar_Positive} as well.
\end{proof}
\subsubsection{Step two -- estimating the cross terms}
In the second step of the proof of Proposition \ref{First_Decomposition_Result} we estimate the cross terms that we obtain when the decomposition in \eqref{First_Decomposition_Result_Decomp} with $\Psi$ and $\xi_0$ in \eqref{Def_Psixi} is inserted into the left side of \eqref{First_Decomposition_Result_Assumption}.
\begin{lem}
\label{Decomp_Low_Momenta_Crossterms}
Given $D_0, D_1 \geq 0$, there is $B_0>0$ with the following properties. If, for some $0< B\leq B_0$, the wave function $\alpha\in {L^2(Q_h \times \Rbb_{\mathrm s}^3)}$ satisfies
\begin{align*}
\frac 12 \langle \alpha , [U^* (1 - P)U + U(1 - P)U^* ]\alpha \rangle \leq D_0 B\, \Vert \alpha \Vert_2^2 + D_1B^2,
\end{align*}
then $\Psi$ and $\xi_0$ in \eqref{Def_Psixi} satisfy the estimate
\begin{align}
\langle \Psi, AA^*(1 - AA^*)\Psi\rangle + \Vert \xi_0\Vert_2^2 \leq C \bigl( B \Vert \Psi \Vert_2^2 +D_1 B^2\bigr). \label{Estimate_Low_Momenta}
\end{align}
Furthermore, for any $\eta >0$ we have
\begin{align}
|\langle \xi_0, [U(1-P)(1+\pi_r^2)(1-P)U^* + U^* (1-P)(1+\pi_r^2)(1-P)U] A^*\Psi\rangle| & \notag \\
&\hspace{-180pt}\leq \eta \, \Vert \Pi\Psi\Vert^2 + C \left(1+ \eta^{-1} \right)\, \bigl( B \Vert \Psi \Vert_2^2 + D_1B^2\bigr). \label{Tcal_CrossTerms}
\end{align}
\end{lem}
\begin{proof}
We start by noting that $A\xi_0 =0$ implies $\langle \xi_0, A^*\Psi\rangle =0$, and hence
\begin{align}
\Vert \alpha\Vert_2^2 = \Vert A^*\Psi\Vert_2^2 + \Vert \xi_0\Vert_2^2 \leq \Vert \Psi\Vert_2^2 + \Vert \xi_0\Vert_2^2. \label{Decomp_Low_Momenta_1}
\end{align}
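Here, the inequality is a consequence of the operator bounds $0 \leq AA^* \leq 1$ in Lemma~\ref{AAstar_Positive}:
\begin{align*}
\Vert A^*\Psi\Vert_2^2 = \langle \Psi, AA^*\Psi\rangle \leq \Vert \Psi\Vert_2^2.
\end{align*}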
We use $\alpha \in {L^2(Q_h \times \Rbb_{\mathrm s}^3)}$ and $A(1 - A^*A)A^* = AA^*(1 - AA^*)$ to see that
\begin{align}
D_0 B \Vert \alpha\Vert_2^2 + D_1 B^2 &\geq \frac 12 \langle \alpha, [U^* (1 - P)U + U(1 - P)U^* ]\alpha \rangle = \langle \alpha, (1 - A^*A) \alpha\rangle \notag\\
&= \langle A^*\Psi, (1 - A^*A)A^*\Psi\rangle + \langle \xi_0, (1- A^*A) A^*\Psi\rangle + \langle A^*\Psi, (1- A^*A)\xi_0\rangle \notag \\
&\hspace{230pt} + \langle \xi_0 , (1- A^*A)\xi_0\rangle \notag \\
&= \langle \Psi, AA^*(1 - AA^*) \Psi\rangle + \Vert \xi_0\Vert_2^2. \label{Decomp_Low_Momenta_9}
\end{align}
From Lemma~\ref{AAstar_Positive} we know that the first term on the right side is nonnegative and hence
\begin{equation*}
\Vert \xi_0\Vert_2^2 \leq D_0 B \Vert \alpha\Vert_2^2 + D_1 B^2.
\end{equation*}
Together with \eqref{Decomp_Low_Momenta_1}, this also proves
\begin{equation*}
(1 - D_0B)\Vert \alpha\Vert_2^2 \leq \Vert \Psi\Vert_2^2 + D_1 B^2.
\end{equation*}
Finally, this last bound and \eqref{Decomp_Low_Momenta_9} prove \eqref{Estimate_Low_Momenta}.
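Spelled out: if $B_0$ is chosen so small that $D_0 B_0 \leq \frac 12$, the previous bound gives $\Vert \alpha\Vert_2^2 \leq 2 \Vert \Psi\Vert_2^2 + 2 D_1 B^2$, and inserting this into \eqref{Decomp_Low_Momenta_9} yields
\begin{align*}
\langle \Psi, AA^*(1 - AA^*)\Psi\rangle + \Vert \xi_0\Vert_2^2 \leq 2 D_0 B \, \Vert \Psi\Vert_2^2 + \bigl( 2 D_0 B_0 + 1\bigr) D_1 B^2 \leq C \bigl( B \Vert \Psi\Vert_2^2 + D_1 B^2\bigr),
\end{align*}
which is \eqref{Estimate_Low_Momenta}.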
Next we prove \eqref{Tcal_CrossTerms}. Let us define
\begin{align}
\mathcal{T} := U^* (1 - P)(1 + \pi_r^2)(1 - P)U + U(1 - P)(1 + \pi_r^2)(1 - P)U^* \label{Tcal_Op_Def}
\end{align}
and consider $\langle \xi_0, \mathcal{T} A^*\Psi\rangle$. We note that $A\xi_0 =0$ implies $PU \xi_0 = 0 = PU^*\xi_0$, where the projection $P$ is understood to act on the relative coordinate. In combination with \eqref{MainTerm_6} this allows us to see that
\begin{align}
\langle \xi_0, \mathcal{T} A^*\Psi\rangle &= \Bigl\langle \xi_0, \Bigl[2\tilde \pi_r^2 + \frac{\Pi_X^2}{2} \Bigr] A^*\Psi\Bigr\rangle - \langle \xi_0, (U^* + U) \pi_r^2 \, \alpha_* AA^*\Psi\rangle \label{Decomp_Low_Momenta_2}
\end{align}
holds. We use \eqref{PiXcos} and \eqref{PiXsin} to commute $\Pi_X^2$ in the first term on the right side of \eqref{Decomp_Low_Momenta_2} to the right and find
\begin{align}
\frac 12\langle \xi_0, \Pi_X^2 A^*\Psi\rangle &= \frac 12\langle \xi_0, A^*\Pi_X^2\Psi\rangle \notag \\
&\hspace{10pt}- \i \fint_{Q_B} \mathrm{d} X\int_{\mathbb{R}^3} \mathrm{d} r\; \ov{\xi_0(X,r)} \; \sin\bigl( \frac r2 \Pi_X\bigr) \; \mathbf B \wedge r\; \alpha_*(r) \; \Pi_X \Psi(X) \notag \\
&\hspace{10pt}+ \frac 12\fint_{Q_B} \mathrm{d} X \int_{\mathbb{R}^3} \mathrm{d} r \; \ov{\xi_0(X,r)} \; \cos\bigl( \frac r2 \Pi_X\bigr) \; |\mathbf B\wedge r|^2 \alpha_*(r) \; \Psi(X). \label{Decomp_Low_Momenta_4}
\end{align}
The first term on the right side vanishes because $A\xi_0 =0$. Similarly, we apply \eqref{pirtilde_cos_pr} and \eqref{pirtilde_sin_pr} to commute $\tilde \pi_r^2$ in the first term in \eqref{Decomp_Low_Momenta_2} to the right and find
\begin{align}
2\langle \xi_0, \tilde \pi_r^2A^*\Psi\rangle &= 2 \fint_{Q_B} \mathrm{d} X \int_{\mathbb{R}^3} \mathrm{d} r\; \ov{\xi_0(X,r)} \; \cos\bigl( \frac r2 \Pi_X\bigr) \; p^2\alpha_*(r) \; \Psi(X) \notag \\
&\hspace{20pt} + 2\i \fint_{Q_B} \mathrm{d} X \int_{\mathbb{R}^3} \mathrm{d} r \, \ov{\xi_0(X,r)} \; \sin\bigl( \frac r2 \Pi_X\bigr) \; p\alpha_*(r) \; \Pi_X\Psi(X). \label{Decomp_Low_Momenta_6}
\end{align}
When we combine $\pi_r^2 \alpha_*(r) = p_r^2\alpha_*(r) + \frac 14|\mathbf B \wedge r|^2\alpha_*(r)$, which holds because $\alpha_*$ is radial, \eqref{Decomp_Low_Momenta_2}, \eqref{Decomp_Low_Momenta_4} and \eqref{Decomp_Low_Momenta_6}, we obtain
\begin{align*}
\langle \xi_0, \mathcal{T} A^*\Psi\rangle &= 2 \fint_{Q_B} \mathrm{d} X \int_{\mathbb{R}^3} \mathrm{d} r\; \ov{\xi_0(X,r)} \; \cos\bigl( \frac r2 \Pi_X\bigr) \; \pi_r^2\alpha_*(r) \; (1 - AA^*)\Psi(X) \\
&\hspace{30pt}+ 2\i \fint_{Q_B}\mathrm{d} X\int_{\mathbb{R}^3} \mathrm{d} r\; \ov{\xi_0(X,r)} \; \sin\bigl( \frac r2 \Pi_X\bigr) \; \bigl[ p - \frac 12\mathbf B \wedge r\bigr] \alpha_*(r) \; \Pi_X\Psi(X).
\end{align*}
Using Cauchy-Schwarz, we bound the absolute value of this by
\begin{align}
|\langle \xi_0, \mathcal{T} A^*\Psi\rangle| \leq 2\Vert \xi_0\Vert_2 \; \bigl[ \Vert \pi_r^2\alpha_*\Vert_2 \; \Vert (1 - AA^*)\Psi\Vert_2 + \bigl(\Vert \nabla \alpha_*\Vert_2 + B \Vert \, | \cdot |\alpha_*\Vert_2\bigr) \Vert \Pi\Psi\Vert_2\bigr], \label{Decomp_Low_Momenta_7}
\end{align}
and with the decay properties of $\alpha_*$ in \eqref{Decay_of_alphastar} we see that the norms of $\alpha_*$ on the right side are bounded uniformly in $0 \leq B \leq B_0$. Moreover, Lemma~\ref{AAstar_Positive} and \eqref{Estimate_Low_Momenta} imply that there is a constant $c>0$ such that
\begin{equation}
\Vert (1 - AA^*)\Psi\Vert_2^2 \leq \langle \Psi, (1-AA^*) \Psi \rangle \leq \frac{1}{c} \langle \Psi, A A^* (1-AA^*) \Psi \rangle \leq C \bigl( B \Vert \Psi \Vert_2^2 +D_1 B^2\bigr).
\end{equation}
For $\eta >0$ we thus obtain
\begin{align}
|\langle \xi_0, \mathcal{T} A^*\Psi\rangle| \leq C \bigl[ \eta\, \Vert \Pi\Psi\Vert_2^2 + \eta^{-1} \, \Vert \xi_0 \Vert_2^2 + \bigl( B \Vert \Psi \Vert_2^2 +D_1 B^2\bigr) \bigr] \label{Decomp_Low_Momenta_8}
\end{align}
and an application of \eqref{Estimate_Low_Momenta} proves the claim.
\end{proof}
\subsubsection{Proof of Proposition \ref{First_Decomposition_Result}}
We recall the decomposition $\alpha = A^*\Psi + \xi_0$ with $\Psi$ and $\xi_0$ in \eqref{Def_Psixi} as well as $\mathcal{T}$ in \eqref{Tcal_Op_Def}. From \eqref{First_Decomposition_Result_Assumption} we know that
\begin{align}
D_0 B \Vert\alpha\Vert_2^2 + D_1 B^2 \geq \langle A^*\Psi, \mathcal{T} A^*\Psi\rangle + 2\Re\langle \xi_0, \mathcal{T} A^*\Psi\rangle + \langle \xi_0, \mathcal{T}\xi_0\rangle . \label{First_Decomposition_Result_1}
\end{align}
With the help of Lemma \ref{CommutationI}, the identities $PU\xi_0 = 0 = PU^*\xi_0$ imply
\begin{align}
\langle \xi_0, \mathcal{T} \xi_0 \rangle = \Bigl\langle \xi_0, \bigl(2 + \frac{\Pi_X^2}{2} + 2\tilde \pi_r^2\bigr) \xi_0\Bigr\rangle \geq \frac 12 \, \Vert \xi_0\Vert_{H^1(Q_h \times \Rbb_{\mathrm s}^3)}^2. \label{First_Decomposition_Result_2}
\end{align}
Lemma \ref{AAstar_Positive} guarantees the existence of a constant $\rho>0$ such that
\begin{align*}
AA^*(1 - AA^*)(1 + \Pi^2) \geq \rho\; \Pi^2.
\end{align*}
Therefore, \eqref{MainTerm_LowerBound} implies
\begin{align}
\langle A^* \Psi, \mathcal{T} A^*\Psi\rangle \geq 2\, \langle \Psi, AA^*(1 - AA^*) (1 + \Pi^2)\Psi\rangle \geq 2\rho\, \langle \Psi, \Pi^2\Psi\rangle. \label{First_Decomposition_Result_3}
\end{align}
To estimate the second term on the right side of \eqref{First_Decomposition_Result_1}, we note that $\mathcal{T}$ is bounded from below by $U(1 - P)U^* + U^*(1-P)U$. Therefore, we may apply Lemma \ref{Decomp_Low_Momenta_Crossterms} with $\eta = \frac \rho2$ and find
\begin{align*}
2\Re \langle \xi_0, \mathcal{T} A^*\Psi\rangle \geq -2\, |\langle \xi_0, \mathcal{T} A^*\Psi\rangle| \geq - \rho \, \Vert \Pi\Psi\Vert_2^2 - C\bigl( B\Vert \Psi\Vert_2^2 + D_1 B^2\bigr).
\end{align*}
In combination with \eqref{First_Decomposition_Result_1}, \eqref{First_Decomposition_Result_2} and \eqref{First_Decomposition_Result_3}, we thus obtain
\begin{align*}
C\bigl( B\Vert \Psi\Vert_2^2 + D_1B^2 \bigr)\geq \rho \, \Vert \Pi\Psi\Vert_2^2 + \frac 12 \, \Vert \xi_0\Vert_{H^1(Q_h \times \Rbb_{\mathrm s}^3)}^2.
\end{align*}
This proves \eqref{First_Decomposition_Result_Estimate}.
\subsection{Uniform estimate on \texorpdfstring{$\Vert\Psi\Vert_2$}{Psi}}
Up to now we neglected the nonlinear term on the left side of \eqref{Lower_Bound_A_2}. This term provides the inequality
\begin{align}
\Tr\bigl[(\alpha^* \alpha)^2\bigr] \leq C \left( B \Vert \alpha\Vert_2^2 + B^2 \right). \label{Lower_Bound_A_2b}
\end{align}
In this section we take the nonlinear term and \eqref{Lower_Bound_A_2b} into account and show that they can be combined with Proposition~\ref{First_Decomposition_Result} to obtain a bound for $\Vert \Psi\Vert_2$. This will afterwards allow us to prove Theorem~\ref{Structure_of_almost_minimizers}.
\begin{lem}
\label{Bound_on_psi}
Given $D_0\geq0$, there is $B_0>0$ such that for all $0 < B \leq B_0$ the following holds. If the wave function $\alpha\in {L^2(Q_h \times \Rbb_{\mathrm s}^3)}$ obeys \eqref{Lower_Bound_A_2} then $\Psi$ in \eqref{Def_Psixi} satisfies
\begin{align}
\Vert \Psi\Vert_2^2 &\leq CB. \label{Bound_on_psi_result}
\end{align}
\end{lem}
\begin{proof}
We recall the decomposition $\alpha = A^*\Psi + \xi_0$ with $\Psi$ and $\xi_0$ in \eqref{Def_Psixi}. Eq.~\eqref{Lower_Bound_A_2b} and an application of the triangle inequality imply
\begin{align}
C\bigl( B\Vert \Psi\Vert_2^2 + B^2\bigr)^{\nicefrac 14} \geq \Vert \alpha\Vert_4 \geq \Vert A^*\Psi\Vert_4 - \Vert \xi_0\Vert_4. \label{Nonlinearity_8}
\end{align}
Thus, it suffices to prove an upper bound for $\Vert \xi_0\Vert_4$ and a lower bound for $\Vert A^*\Psi\Vert_4$. Our proof follows closely the proof of \cite[Eq. (5.48)]{Hainzl2012}.
\emph{Step 1.} Let us start with the upper bound on $\Vert \xi_0\Vert_4$. We claim the estimate
\begin{align}
\Vert \xi_0\Vert_4 \leq C \bigl( B^{\nicefrac 14} \Vert \Psi\Vert_2^{\nicefrac 12} + B^{\nicefrac 18} \Vert \Psi\Vert_2 + B^{\nicefrac 12}\bigr).
\label{Nonlinearity_1}
\end{align}
To see this, we first use Hölder's inequality to estimate $\Vert \xi_0\Vert_4^4 \leq \Vert \xi_0\Vert_2^2 \; \Vert \xi_0\Vert_\infty^2$. From Proposition~\ref{First_Decomposition_Result} we know that $\Vert \xi_0\Vert_2^2 \leq C(B\Vert \Psi\Vert_2^2+ B^2)$, and it thus remains to prove a bound for $\Vert \xi_0\Vert_\infty$. We claim that for any $\nu > 3$
\begin{align}
\Vert \xi_0\Vert_\infty \leq 1 + C_\nu\, B^{-\nicefrac 14} \, \Vert (1 + |\cdot |)^\nu \alpha_*\Vert_{\nicefrac 65} \; \Vert \Psi \Vert_6, \label{Claim_xi_infty}
\end{align}
where the right side is finite by the decay properties of $\alpha_*$ in \eqref{Decay_of_alphastar}. To prove \eqref{Claim_xi_infty}, we first note that \eqref{gamma_alpha_fermionic_relation} implies $\Vert \alpha\Vert_\infty\leq 1$, and hence $\Vert \xi_0\Vert_\infty \leq 1 + \Vert A^*\Psi\Vert_\infty$. We apply Lemma \ref{Schatten_estimate} (b) to $A^*\Psi$ and obtain \eqref{Claim_xi_infty}. We also combine \eqref{Magnetic_Sobolev} with Proposition \ref{First_Decomposition_Result} and obtain $
\Vert \Psi\Vert_6 \leq C( \Vert \Psi\Vert_2 + B^{\nicefrac 12})$. In combination, these considerations imply \eqref{Nonlinearity_1}.
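In more detail, the combination reads
\begin{align*}
\Vert \xi_0\Vert_4^4 \leq \Vert \xi_0\Vert_2^2 \, \Vert \xi_0\Vert_\infty^2 \leq C\bigl( B \Vert \Psi\Vert_2^2 + B^2\bigr) \bigl( 1 + B^{-\nicefrac 14}\, \Vert \Psi\Vert_2 + B^{\nicefrac 14}\bigr)^2 \leq C\bigl( B \Vert \Psi\Vert_2^2 + B^{\nicefrac 12}\, \Vert \Psi\Vert_2^4 + B^2\bigr)
\end{align*}
for $0 < B \leq B_0$, and \eqref{Nonlinearity_1} follows by taking the fourth root and using $(a + b + c)^{\nicefrac 14} \leq a^{\nicefrac 14} + b^{\nicefrac 14} + c^{\nicefrac 14}$ for $a,b,c \geq 0$.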
\emph{Step 2.} We claim that
\begin{align}
\Vert A^*\Psi\Vert_4^4 \geq \frac 1{16} \; \Vert \hat \alpha_* \Vert_4^4 \; \Vert \Psi\Vert_4^4 - C \bigl( B^{\nicefrac 18} \Vert \Psi\Vert_2 + B^{\nicefrac 58} \bigr)^4 \label{Nonlinearity_5}
\end{align}
holds. To prove \eqref{Nonlinearity_5}, we write $\Vert A^*\Psi\Vert_4^4 = \tr ((A^*\Psi)^* A^*\Psi)^2$. The fact that $\alpha_*$ is real-valued implies
\begin{align*}
\Vert A^*\Psi\Vert_4^4 &= \fint_{Q_B} \mathrm{d} x \int_{\mathbb{R}^3} \mathrm{d} y \; \Bigl| \int_{\mathbb{R}^3} \mathrm{d} z \; \alpha_*(x - z)\ov{ \left[ \cos\bigl( \frac{x - z}{2}\Pi_{\frac{x+z}{2}}\bigr) \Psi\bigl( \frac{x+z}{2}\bigr) \right] } \mathop \times\\
&\hspace{150pt} \times \alpha_*(z-y) \left[ \cos\bigl( \frac{z-y}{2} \Pi_{\frac{z+y}{2}}\bigr) \Psi\bigl( \frac{z+y}{2}\bigr) \right] \Bigr|^2.
\end{align*}
We use $\cos(x) = 1 - 2\sin^2(\frac x2)$ twice and find
\begin{align}
\Vert A^*\Psi\Vert_4^4 &\geq \frac 14 \mathcal{T}_* - C \, (\mathcal{T}_1 + \mathcal{T}_2), \label{Nonlinearity_2}
\end{align}
where
\begin{align*}
\mathcal{T}_* &:= \fint_{Q_B} \mathrm{d}x \int_{\mathbb{R}^3} \mathrm{d}y \;\Bigl| \int_{\mathbb{R}^3} \mathrm{d} z \; \alpha_*(x - z) \ov{\Psi\bigl(\frac{x+z}{2}\bigr)} \alpha_*(z - y) \Psi\bigl( \frac{z+y}{2}\bigr)\Bigr|^2
\end{align*}
and
\begin{align}
\mathcal{T}_1 &:= \fint_{Q_B} \mathrm{d}x \int_{\mathbb{R}^3} \mathrm{d}y \;\Bigl| \int_{\mathbb{R}^3} \mathrm{d} z \; \alpha_*(x - z) \ov{\Psi\bigl( \frac{x+z}{2}\bigr)} \mathop \times \notag\\
&\hspace{140pt} \times \alpha_*(z - y) \left[ \sin^2\bigl( \frac{z-y}{4} \Pi_{\frac{z+y}{2}}\bigr) \Psi\bigl( \frac{z +y}{2}\bigr) \right] \Bigr|^2, \notag\\
\mathcal{T}_2 &:= \fint_{Q_B} \mathrm{d}x \int_{\mathbb{R}^3} \mathrm{d}y \; \Bigl| \int_{\mathbb{R}^3} \mathrm{d} z\; \alpha_*(x - z)\ov{ \left[ \sin^2\bigl( \frac{x -z}{4} \Pi_{\frac{x+z}{2}}\bigr) \Psi\bigl( \frac{x + z}{2}\bigr) \right]} \mathop \times \notag \\
&\hspace{140pt} \times\alpha_*(z - y) \left[ \cos\bigl( \frac{z - y}{2}\Pi_{\frac{z+y}{2}}\bigr) \Psi\bigl( \frac{z+y}{2}\bigr) \right] \Bigr|^2. \label{Nonlinearity_3}
\end{align}
In the following we derive a lower bound on $\mathcal{T}_*$ and an upper bound on $\mathcal{T}_1$ and $\mathcal{T}_2$.
\emph{Lower bound on $\mathcal{T}_*$.} We change variables $z\mapsto z + x$ and $y\mapsto y + x$ and afterwards replace $x$ by~$X$, which allows us to write
\begin{align}
\mathcal{T}_* = \fint_{Q_B} \mathrm{d} X \int_{\mathbb{R}^3} \mathrm{d}y \; \Bigl| \int_{\mathbb{R}^3} \mathrm{d} z \; \alpha_*(z) \ov{\Psi\bigl(X + \frac z2\bigr)} \alpha_*(z - y) \Psi\bigl(X + \frac{z+y}{2}\bigr)\Bigr|^2. \label{Nonlinearity_9}
\end{align}
Next, we combine $\Psi(X + \frac z2) = \mathrm{e}^{\i \frac z2P_X} \Psi(X)$ and the identity $\mathrm{e}^{\i \frac r2P_X} = \mathrm{e}^{\i \frac{\mathbf B}{2} r\wedge X}\mathrm{e}^{\i \frac r2\Pi_X}$ in \eqref{representation_Ustar} to write $\Psi(X + \frac z2) = \mathrm{e}^{\i \frac{\mathbf B}{2} z\wedge X} \mathrm{e}^{\i \frac z2\Pi_X} \Psi(X)$.
We conclude that
\begin{align*}
\ov{\Psi\bigl( X + \frac z2\bigr)} \Psi\bigl(X + \frac{z+y}{2}\bigr) = \mathrm{e}^{\i \frac{\mathbf B}{2} y\wedge X} \; \ov{\left[ \mathrm{e}^{\i \frac{z}{2}\Pi_X}\Psi(X) \right]} \; \left[ \mathrm{e}^{\i \frac{z+y}{2}\Pi_X} \Psi(X) \right],
\end{align*}
as well as
\begin{align*}
\mathcal{T}_* = \fint_{Q_B} \mathrm{d} X \int_{\mathbb{R}^3} \mathrm{d} y\; \Bigl| \int_{\mathbb{R}^3} \mathrm{d} z \; \alpha_*(-z) \ov{ \left[ \mathrm{e}^{\i \frac z2\Pi_X} \Psi(X) \right]} \alpha_*(z - y) \left[ \mathrm{e}^{\i \frac{z+y}{2}\Pi_X} \Psi(X) \right] \Bigr|^2.
\end{align*}
This also implies
\begin{align}
\mathcal{T}_* \geq \frac 14\mathcal{T}_*^* - C(\mathcal{T}_*^{(1)} + \mathcal{T}_*^{(2)})
\label{eq:A1}
\end{align}
with
\begin{align*}
\mathcal{T}_*^* := \fint_{Q_B}\mathrm{d} X\int_{\mathbb{R}^3}\mathrm{d} y\; \Bigl| \int_{\mathbb{R}^3} \mathrm{d} z \; \alpha_*(z) \ov{\Psi(X)} \alpha_*(z - y) \Psi(X)\Bigr|^2
\end{align*}
and
\begin{align}
\mathcal{T}_*^{(1)} &:= \fint_{Q_B} \mathrm{d} X\int_{\mathbb{R}^3} \mathrm{d} y\; \Bigl| \int_{\mathbb{R}^3} \mathrm{d} z \; \alpha_*(z) \ov{\bigl[\mathrm{e}^{\i \frac{z}{2}\Pi_X}\Psi(X)\bigr]} \alpha_*(z - y) \bigl[\bigl( \mathrm{e}^{\i \frac{z+y}{2}\Pi_X} - 1\bigr) \Psi(X)\bigr]\Bigr|^2, \notag \\
\mathcal{T}_*^{(2)} &:= \fint_{Q_B} \mathrm{d} X\int_{\mathbb{R}^3} \mathrm{d} y\; \Bigl| \int_{\mathbb{R}^3} \mathrm{d} z \;\alpha_*(z) \ov{\bigl[\bigl( \mathrm{e}^{\i \frac z2\Pi_X} - 1\bigr) \Psi(X)\bigr]} \alpha_*(z - y) \Psi(X)\Bigr|^2. \label{Nonlinearity_10}
\end{align}
The term $\mathcal{T}_*^*$ equals
\begin{align*}
\mathcal{T}_*^* = \fint_{Q_B} \mathrm{d} X\; |\Psi(X)|^4 \; \int_{\mathbb{R}^3} \mathrm{d} y\;\Bigl|\int_{\mathbb{R}^3} \mathrm{d} z\; \alpha_*(z)\alpha_*(y - z)\Bigr|^2 = \Vert \Psi\Vert_{L^4(Q_B)}^4 \; \Vert \alpha_* * \alpha_*\Vert_{L^2(\mathbb{R}^3)}^2.
\end{align*}
By \eqref{Decay_of_alphastar}, the Fourier transform of $\alpha_**\alpha_*$ equals $|\hat \alpha_*|^2$. Thus, $\Vert \alpha_* *\alpha_*\Vert_2^2 = \Vert \hat \alpha_*\Vert_4^4>0$. This is the desired leading term of \eqref{Nonlinearity_5}.
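The identity $\Vert \alpha_* * \alpha_*\Vert_2^2 = \Vert \hat\alpha_*\Vert_4^4$ is a consequence of Plancherel's theorem:
\begin{align*}
\Vert \alpha_* * \alpha_*\Vert_2^2 = \Vert \widehat{\alpha_* * \alpha_*}\, \Vert_2^2 = \bigl\Vert \, |\hat\alpha_*|^2 \bigr\Vert_2^2 = \int_{\mathbb{R}^3} \mathrm{d} p\; |\hat\alpha_*(p)|^4 = \Vert \hat\alpha_*\Vert_4^4.
\end{align*}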
\emph{Upper bound on $\mathcal{T}_*^{(1)}$ and $\mathcal{T}_*^{(2)}$.} We start with $\mathcal{T}_*^{(1)}$, expand the square and estimate
\begin{align}
&\mathcal{T}_*^{(1)} \leq \int_{\mathbb{R}^3} \mathrm{d} y \int_{\mathbb{R}^3} \mathrm{d} z \int_{\mathbb{R}^3} \mathrm{d} z' \; | \alpha_*(z) \, \alpha_*(z') \, \alpha_*(z - y) \, \alpha_*(z' - y)| \label{Nonlinearity_13} \\
& \times \fint_{Q_B} \mathrm{d} X \; \Bigl| \bigl[ \mathrm{e}^{\i \frac z2\Pi_X} \Psi(X) \bigr] \bigl[ \mathrm{e}^{\i \frac {z'}2\Pi_X} \Psi(X) \bigr] \bigl[ \bigl( \mathrm{e}^{\i \frac{z+y}{2}\Pi_X}-1\bigr) \Psi(X) \bigr] \bigl[ \bigl( \mathrm{e}^{\i \frac{z'+y}{2}\Pi_X}-1\bigr) \Psi(X) \bigr] \Bigr|. \nonumber
\end{align}
When we use Hölder's inequality, \eqref{NTB-NtildeTB_3}, \eqref{ZPiX_inequality}, and \eqref{Magnetic_Sobolev} we see that the integral in the second line can be bounded by
\begin{align}
\Vert \mathrm{e}^{\i \frac{z}{2}\Pi} \Psi\Vert_6^2 \; \Vert (\mathrm{e}^{\i \frac{z+y}{2}\Pi} - 1) \Psi\Vert_6 \; \Vert (\mathrm{e}^{\i \frac{z+y}{2}\Pi} - 1) \Psi\Vert_2 \leq C\; \bigl|\frac{z+y}{2}\bigr| \; B^{-\nicefrac 32} \; \Vert \Pi\Psi\Vert_2^4. \label{Nonlinearity_11}
\end{align}
Proposition~\ref{First_Decomposition_Result} provides us with a bound for $\Vert \Pi \Psi \Vert_2$. In combination with \eqref{Nonlinearity_13}, \eqref{Nonlinearity_11}, Young's inequality, and the bound $|z + y| \leq 2|z| + |z- y|$, this implies
\begin{align}
\mathcal{T}_*^{(1)} &\leq C \, B^{-\nicefrac 32}\, \left( B^2 \Vert \Psi \Vert_2^4 + D_1 B^4 \right) \int_{\mathbb{R}^3} \mathrm{d} y \; \Bigl| \int_{\mathbb{R}^3} \mathrm{d} z\; |z+y|\; | \alpha_*(z) \alpha_*(y-z)|\Bigr|^2 \nonumber \\
&\leq C \left( B^{-\nicefrac 12} \Vert \Psi \Vert_2^4 + D_1 B^{ \nicefrac 52 } \right) \, \Vert \alpha_*\Vert_{\nicefrac 43} \, \Vert \, |\cdot|\alpha_*\Vert_{\nicefrac 43}, \label{Nonlinearity_12}
\end{align}
where the right side is finite by \eqref{Decay_of_alphastar}. Similarly, we see that $\mathcal{T}_*^{(2)}$ is bounded by the right side of \eqref{Nonlinearity_12}.
\emph{Upper bound on $\mathcal{T}_1$ and $\mathcal{T}_2$ in \eqref{Nonlinearity_3}.} Bounds for $\mathcal{T}_1$ and $\mathcal{T}_2$ can be obtained along the same lines as the bound for $\mathcal{T}_*^{(1)}$. We apply the same change of variables as above and use estimates similar to the ones in \eqref{Nonlinearity_13}. In case of $\mathcal{T}_1$, the bound in \eqref{Nonlinearity_11} needs to be replaced by
\begin{align}
\Vert \Psi\Vert_6^2 \, \bigl\Vert \sin^2\bigl( \frac{z - y}{4} \Pi\bigr) \Psi \bigr\Vert_6 \, \bigl\Vert \sin^2\bigl( \frac{z - y}{4} \Pi\bigr) \Psi \bigr\Vert_2 \leq C \; \frac{|z-y|}{4} \; B^{-\nicefrac 32}\, \Vert \Pi\Psi\Vert_2^4. \label{Nonlinearity_14}
\end{align}
Here, we used $\sin^2(x) \leq |x|$ and the operator inequality in \eqref{ZPiX_inequality} to estimate the third factor. For the first and the second factor, we used
\begin{align*}
\sin^2\bigl( \frac{z-y}{4} \Pi\bigr) = - \frac{1}{4} \bigl( \mathrm{e}^{\i\frac{z-y}{2} \Pi} + \mathrm{e}^{-\i \frac{z-y}{2}\Pi} - 2\bigr)
\end{align*}
and \eqref{Magnetic_Sobolev} or \eqref{NTB-NtildeTB_3}, respectively. A bound for $\mathcal{T}_2$ can be proved analogously. The final estimate we obtain in this way reads
\begin{equation}
\mathcal{T}_1 + \mathcal{T}_2 \leq C \left( B^{-\nicefrac 12} \Vert \Psi \Vert_2^4 + D_1 B^{ \nicefrac 52 } \right).
\label{eq:A2}
\end{equation}
In combination with \eqref{Nonlinearity_2}, \eqref{eq:A1}, and \eqref{Nonlinearity_12}, this proves \eqref{Nonlinearity_5}.
\emph{Step 3.} We denote $c:= \frac 1{2} \Vert \hat\alpha_*\Vert_4$, insert \eqref{Nonlinearity_1} and \eqref{Nonlinearity_5} into \eqref{Nonlinearity_8} and obtain
\begin{align}
C B^{\frac 14}\Vert \Psi\Vert_2^{\nicefrac 12} \geq c \Vert \Psi\Vert_4 - CB^{\frac 18} \Vert \Psi\Vert_2 - CB^{\frac 12}, \label{Nonlinearity_6}
\end{align}
which holds for $B$ small enough. For $\eta >0$ the left side is bounded from above by a constant times $\eta \Vert \Psi\Vert_2 + \eta^{-1} B^{\frac 12}$, and Hölder's inequality implies $\Vert \Psi\Vert_4 \geq \Vert \Psi\Vert_2$.
Accordingly,
\begin{equation}
C \left( \eta \Vert \Psi\Vert_2 + \eta^{-1} B^{\frac 12} \right) \geq (c - CB^{\frac 18}) \Vert \Psi\Vert_2 - CB^{\frac 12}.
\label{eq:A3}
\end{equation}
When we choose $\eta$ and $B$ in \eqref{eq:A3} small enough, this proves the claim.
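To spell out the choice of parameters (the constants $c$ and $C$ are those appearing in \eqref{eq:A3}): we first fix $\eta > 0$ so small that $C\eta \leq \frac c4$ and then require $B$ to be so small that $CB^{\nicefrac 18} \leq \frac c4$. Then \eqref{eq:A3} implies
\begin{align*}
\frac c2 \, \Vert \Psi\Vert_2 \leq C \bigl( 1 + \eta^{-1} \bigr) B^{\nicefrac 12},
\end{align*}
which is a bound of the claimed form.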
\end{proof}
\subsection{Proof of Theorem \ref{Structure_of_almost_minimizers}}
The statement in Theorem \ref{Structure_of_almost_minimizers} is a direct consequence of Corollary~\ref{cor:lowerbound} above and the results in \cite{DeHaSc2021}. More precisely, we need to combine Corollary~\ref{cor:lowerbound}, \cite[Proposition~5.7]{DeHaSc2021}, \cite[Lemma~5.14]{DeHaSc2021}, and the arguments in \cite[Section~5.4]{DeHaSc2021}.
\section{The Structure of Low-Energy States}
\label{Lower Bound Part A}
In this section we prove a priori bounds for low-energy states of the BCS functional in the sense of \eqref{Second_Decomposition_Gamma_Assumption} below. The goal is to show that their Cooper pair wave function has a structure similar to that of the trial state we use in the proof of the upper bound in Section~\ref{Upper_Bound}. These bounds and the trial state analysis in Section~\ref{Upper_Bound} are the main technical ingredients for the proof of the lower bound in Section~\ref{Lower Bound Part B}. To prove the a priori bounds, we show that the periodic external potentials $W_h$ and $A_h$ can be treated as a perturbation, which reduces the problem to proving a priori bounds for the case of a constant magnetic field. The solution of the latter problem has been the main novelty in \cite[Theorem~5.1]{DeHaSc2021}, and we apply it here. In the case of a magnetic field with zero flux through the unit cell, such bounds were proved for the first time in \cite{Hainzl2012}. The idea to reduce the problem to the case of a constant magnetic field is inspired by a similar perturbative analysis in \cite{Hainzl2012}.
We recall the definition of the generalized one-particle density matrix $\Gamma$ in \eqref{Gamma_introduction}, its Cooper pair wave function $\alpha = \Gamma_{12}$, as well as the normal state $\Gamma_0$ in \eqref{Gamma0}.
\begin{thm}[Structure of low-energy states]
\label{Structure_of_almost_minimizers}
Let Assumptions \ref{Assumption_V} and \ref{Assumption_KTc} hold. For given $D_0, D_1 \geq 0$, there is a constant $h_0>0$ such that for all $0 <h \leq h_0$ the following holds: If $T>0$ obeys $T - {T_{\mathrm{c}}} \geq -D_0h^2$ and if $\Gamma$ is a gauge-periodic state with low energy, that is,
\begin{align}
\FBCS(\Gamma) - \FBCS(\Gamma_0) \leq D_1h^4, \label{Second_Decomposition_Gamma_Assumption}
\end{align}
then there are $\Psi\in H_{\mathrm{mag}}^1(Q_h)$ and $\xi\in {H^1(Q_h \times \Rbb_{\mathrm s}^3)}$ such that
\begin{align}
\alpha(X,r) = \alpha_*(r) \Psi(X) + \xi(X,r), \label{Second_Decomposition_alpha_equation}
\end{align}
where
\begin{align}
\sup_{0< h\leq h_0} \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2 &\leq C, & \Vert \xi\Vert_{H^1(Q_h \times \Rbb_{\mathrm s}^3)}^2 &\leq Ch^4 \bigl( \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2 + D_1\bigr). \label{Second_Decomposition_Psi_xi_estimate}
\end{align}
\end{thm}
\begin{varbems}
\begin{enumerate}[(a)]
\item Equation~\eqref{Second_Decomposition_Psi_xi_estimate} shows that $\Psi$ is a macroscopic quantity in the sense that its $H_{\mathrm{mag}}^1(Q_h)$-norm scales as that of the function in \eqref{GL-rescaling}. It is important to note that the $H_{\mathrm{mag}}^1(Q_h)$-norm is scaled with $h$, see \eqref{Periodic_Sobolev_Norm}. The unscaled $L_{\mathrm{mag}}^2(Q_h)$-norm of $\Psi$ is of the order $h$, and therefore much larger than that of $\xi$, see \eqref{Second_Decomposition_Psi_xi_estimate}.
\item Theorem~\ref{Structure_of_almost_minimizers} has been proven in \cite[Theorem 5.1]{DeHaSc2021} for the case of a constant external magnetic field, where $A_h =0$ and $W_h = 0$. Our proof of Theorem~\ref{Structure_of_almost_minimizers} for general external fields reduces the problem to that case.
\end{enumerate}
\end{varbems}
Although Theorem~\ref{Structure_of_almost_minimizers} contains the natural a priori bounds for low-energy states, we need a slightly different version of it in our proof of the lower bound for the BCS free energy in Section~\ref{Lower Bound Part B}. The main reason is that we intend to use the function $\Psi$ from the decomposition of the Cooper pair wave function of a low-energy state in \eqref{Second_Decomposition_alpha_equation} to construct a Gibbs state $\Gamma_{\Delta}$ as in \eqref{GammaDelta_definition}. In order to be able to justify the relevant computations with this state, we need $\Psi \in H_{\mathrm{mag}}^2(Q_h)$, which is not guaranteed by Theorem~\ref{Structure_of_almost_minimizers} above, see also Remark~\ref{rem:alpha}. The following corollary provides us with a decomposition of $\alpha$, where the center-of-mass wave function $\Psi_\leq$ has the required $H_{\mathrm{mag}}^2(Q_h)$-regularity. A decomposition with a cut-off function of the form in the corollary has also been used in \cite{Hainzl2012,Hainzl2014,Hainzl2017,DeHaSc2021}.
\begin{kor}
\label{Structure_of_almost_minimizers_corollary}
Let the assumptions of Theorem~\ref{Structure_of_almost_minimizers} hold and let $\varepsilon \in [h^2, h_0^2]$. Let $\Psi$ be as in
\eqref{Second_Decomposition_alpha_equation} and define
\begin{align}
\Psi_\leq &\coloneqq \mathbbs 1_{[0,\varepsilon]}(\Pi^2) \Psi, & \Psi_> &\coloneqq \mathbbs 1_{(\varepsilon,\infty)}(\Pi^2) \Psi. \label{PsileqPsi>_definition}
\end{align}
Then, we have
\begin{align}
\Vert \Psi_\leq\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2 &\leq \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2, \notag \\
\Vert \Psi_\leq \Vert_{H_{\mathrm{mag}}^k(Q_h)}^2 &\leq C\, (\varepsilon h^{-2})^{k-1} \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2, \qquad k\geq 2, \label{Psileq_bounds}
\end{align}
as well as
\begin{align}
\Vert \Psi_>\Vert_2^2 &\leq C \varepsilon^{-1}h^4 \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2, & \Vert \Pi\Psi_>\Vert_2^2 &\leq Ch^4 \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2. \label{Psi>_bound}
\end{align}
Furthermore,
\begin{align}
\sigma_0(X,r) \coloneqq \alpha_*(r) \Psi_>(X) \label{sigma0}
\end{align}
satisfies
\begin{align}
\Vert \sigma_0\Vert_{H^1_\mathrm{symm}(Q_h\times \mathbb{R}^3)}^2 &\leq C\varepsilon^{-1}h^4 \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2 \label{sigma0_estimate}
\end{align}
and, with $\xi$ in \eqref{Second_Decomposition_alpha_equation}, the function
\begin{align}
\sigma \coloneqq \xi + \sigma_0 \label{sigma}
\end{align}
obeys
\begin{align}
\Vert \sigma\Vert_{H^1_\mathrm{symm}(Q_h\times \mathbb{R}^3)}^2 \leq Ch^4 \bigl( \varepsilon^{-1}\Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2 + D_1\bigr). \label{Second_Decomposition_sigma_estimate}
\end{align}
In terms of these functions, the Cooper pair wave function $\alpha$ of the low-energy state $\Gamma$ in \eqref{Second_Decomposition_Gamma_Assumption} admits the decomposition
\begin{align}
\alpha(X,r) = \alpha_*(r) \Psi_\leq (X) + \sigma(X,r). \label{Second_Decomposition_alpha_equation_final}
\end{align}
\end{kor}
For a proof of the corollary we refer to the proof of Corollary~5.2 in \cite{DeHaSc2021}.
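The mechanism behind the bounds in \eqref{Psileq_bounds} and \eqref{Psi>_bound} is the spectral calculus for the operator $\Pi^2$. As a sketch (under the assumption that the norm in \eqref{Periodic_Sobolev_Norm} takes the form $\Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2 = h^{-2} \Vert \Psi\Vert_2^2 + h^{-4} \Vert \Pi\Psi\Vert_2^2$; the precise definition is stated there), the first bound in \eqref{Psi>_bound} follows from
\begin{align*}
\Vert \Psi_>\Vert_2^2 = \bigl\Vert \mathbbs 1_{(\varepsilon, \infty)}(\Pi^2) \Psi \bigr\Vert_2^2 \leq \varepsilon^{-1} \, \Vert \Pi \Psi\Vert_2^2 \leq \varepsilon^{-1} h^4 \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2,
\end{align*}
and the second bound is immediate because $\Vert \Pi \Psi_>\Vert_2 \leq \Vert \Pi\Psi\Vert_2$.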
\input{5_Structure_of_Low-Energy_States/5.1_A_priori_estimates}
\input{5_Structure_of_Low-Energy_States/5.4_Proof_of_Thm_5.1}
\subsection{The BCS energy of low-energy states}
In this section, we provide the lower bound on \eqref{ENERGY_ASYMPTOTICS} and the proof of Theorem~\ref{Main_Result_Tc}~(b), and thereby complete the proof of Theorems \ref{Main_Result} and \ref{Main_Result_Tc}. Let $D_1\geq 0$ and $D\in \mathbb{R}$ be given and assume that $\Gamma$ is a gauge-periodic state at temperature $T = {T_{\mathrm{c}}}(1 - Dh^2)$ that satisfies \eqref{Second_Decomposition_Gamma_Assumption}. Corollary~\ref{Structure_of_almost_minimizers_corollary} provides us with a decomposition of the Cooper pair wave function $\alpha = [\Gamma]_{12}$ in terms of $\Psi_\leq$ in \eqref{PsileqPsi>_definition} and $\sigma$ in \eqref{sigma}, where $\Vert \Psi_\leq \Vert_{H_{\mathrm{mag}}^1(Q_h)} \leq C$ and where the bound
\begin{align}
\Vert \Psi_\leq\Vert_{H_{\mathrm{mag}}^2(Q_h)}^2 &\leq C \, \varepsilon h^{-2} \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2 \label{Lower_Bound_B_Psileq}
\end{align}
holds in terms of the function $\Psi$ in Theorem \ref{Structure_of_almost_minimizers}.
With the function $\Psi_\leq$ we construct a Gibbs state $\Gamma_{\Delta}$ with the gap function $\Delta \equiv \Delta_{\Psi_\leq}$ as in \eqref{Delta_definition}. Using Proposition~\ref{BCS functional_identity}, we write the BCS free energy of $\Gamma$ as
\begin{align*}
\FBCS(\Gamma) - \FBCS(\Gamma_0) &= - \frac 14 \langle \Delta, L_{T,B} \Delta\rangle + \frac 18 \langle \Delta, N_{T,B} (\Delta)\rangle + \Vert \Psi_\leq \Vert_2^2 \; \langle \alpha_*, V\alpha_*\rangle \\
&\hspace{10pt}+ \Tr\bigl[\mathcal{R}_{T,B}(\Delta)\bigr] + \frac T2 \mathcal{H}_0(\Gamma, \Gamma_\Delta) - \fint_{Q_h} \mathrm{d} X\int_{\mathbb{R}^3} \mathrm{d} r\; V(r) \, |\sigma(X,r)|^2,
\end{align*}
where
\begin{equation*}
\Vert \mathcal{R}_{T,B}(\Delta) \Vert_1 \leq C \; h^6 \; \Vert \Psi \Vert_{H_{\mathrm{mag}}^1(Q_h)}^6.
\end{equation*}
We also apply Theorem~\ref{Calculation_of_the_GL-energy} to compute the terms in the first line on the right side, and find the lower bound
\begin{align}
\FBCS(\Gamma) - \FBCS(\Gamma_0) &\geq h^4 \; \mathcal E^{\mathrm{GL}}_{D}(\Psi_\leq) -C \bigl( h^5 + \varepsilon h^4 \bigr) \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2 \notag \\
&\hspace{30pt}+ \frac T2 \mathcal{H}_0(\Gamma, \Gamma_\Delta) - \fint_{Q_h} \mathrm{d} X\int_{\mathbb{R}^3} \mathrm{d} r\; V(r) \, |\sigma(X,r)|^2. \label{Lower_Bound_B_2}
\end{align}
The relative entropy is nonnegative and the last term on the right side is nonpositive. In the next section we show that their sum is negligible.
\subsection{Estimate on the relative entropy}
In this section we prove a lower bound for the second line in \eqref{Lower_Bound_B_2}, showing that it is negligible.
To start out with, let us define the function $\eta := \alpha_* \Psi_\leq - \alpha_\Delta$. By Corollary~\ref{Structure_of_almost_minimizers_corollary} we have $\alpha - \alpha_\Delta = \sigma + \eta$ and it follows from \cite[Eqs. (6.3)-(6.7)]{DeHaSc2021} that
\begin{align}
&\frac T2 \mathcal{H}_0(\Gamma, \Gamma_\Delta) - \fint_{Q_h} \mathrm{d} X \int_{\mathbb{R}^3} \mathrm{d} r \; V(r)|\sigma(X, r)|^2 \notag\\
&\hspace{20pt} \geq (1 - C\Vert \Delta\Vert_\infty) \langle \sigma, (K_{T,\mathbf A, W} - V) \sigma \rangle - C\Vert \Delta\Vert_\infty \Vert V\Vert_\infty \Vert \sigma \Vert_2^2 - 2 \, | \langle\eta, K_{T, \mathbf A, W} \sigma\rangle|. \label{Lower_Bound_2_intermediate}
\end{align}
From \eqref{KTB_Lower_bound_5} we know that the lowest eigenvalue of $K_{T, \mathbf A, W} - V$ is bounded from below by $-Ch^2$. Furthermore, Lemma~\ref{Schatten_estimate} and \eqref{Magnetic_Sobolev} imply
\begin{equation}
\Vert \Delta\Vert_\infty \leq C \; h^{\nicefrac 12} \; \Vert \Psi \Vert_{H_{\mathrm{mag}}^1(Q_h)},
\label{eq:ABDelta}
\end{equation}
which in combination with \eqref{Second_Decomposition_sigma_estimate} implies that the first two terms on the right side of \eqref{Lower_Bound_2_intermediate} are bounded from below by
\begin{align}
-C \varepsilon^{-1} h^{\nicefrac 92} \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)} \; \bigl( \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2 + D_1\bigr)^{\nicefrac 12}. \label{Lower_Bound_B_3}
\end{align}
To estimate the last term on the right side of \eqref{Lower_Bound_2_intermediate}, we use \eqref{KTc_bounded_derivative} to replace $K_{T, \mathbf A, W}$ by $ K_{{T_{\mathrm{c}}}, \mathbf A, W}$, which yields the estimate
\begin{align*}
|\langle \eta ,(K_{T,\mathbf A, W} - K_{{T_{\mathrm{c}}}, \mathbf A, W} )\sigma\rangle| &\leq 2D_0 h^2 \, \Vert \sigma\Vert_2 \, \Vert \eta\Vert_2 \\
&\leq C \, h^6 \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)} \; \bigl( \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2 + D_1\bigr)^{\nicefrac 12}.
\end{align*}
To obtain the result we also used \eqref{Second_Decomposition_sigma_estimate}, Proposition~\ref{Structure_of_alphaDelta} and \eqref{Lower_Bound_B_Psileq}. Next, we decompose $\eta = \eta_0 + \eta_\perp$ with $\eta_0(\Delta)$ and $\eta_{\perp}(\Delta)$ as in Proposition~\ref{Structure_of_alphaDelta} and write
\begin{align}
\langle \eta, K_{{T_{\mathrm{c}}}, \mathbf A, W} \sigma \rangle &= \langle \eta_0 , K_{{T_{\mathrm{c}}}, \mathbf A, W} \sigma\rangle + \langle \eta_\perp, K_{{T_{\mathrm{c}}}, \mathbf A, W} (\sigma - \sigma_0)\rangle + \langle \eta_\perp , K_{{T_{\mathrm{c}}}, \mathbf A, W} \sigma_0\rangle. \label{Lower_Bound_B_1}
\end{align}
Using \eqref{alphaDelta_decomposition_eq2} and \eqref{Second_Decomposition_sigma_estimate}, we see that the first term on the right side of \eqref{Lower_Bound_B_1} is bounded by
\begin{align}
|\langle \eta_0, K_{{T_{\mathrm{c}}}, \mathbf A, W} \sigma \rangle| &\leq \bigl\Vert \sqrt{K_{{T_{\mathrm{c}}}, \mathbf A, W}}\, \eta_0\bigr\Vert_{2} \bigl\Vert \sqrt{K_{{T_{\mathrm{c}}}, \mathbf A, W}}\,\sigma\bigr\Vert_{2} \notag \\
&\leq C \varepsilon^{-\nicefrac 12} h^5 \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)} \; \bigl( \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2 + D_1\bigr)^{\nicefrac 12}.
\label{eq:A30}
\end{align}
We note that $\sigma - \sigma_0 = \xi$ and use \eqref{alphaDelta_decomposition_eq3}, \eqref{Second_Decomposition_Psi_xi_estimate}, and \eqref{Lower_Bound_B_Psileq} to estimate
\begin{align}
|\langle \eta_\perp, K_{{T_{\mathrm{c}}}, \mathbf A, W} \xi \rangle| &\leq \bigl\Vert \sqrt{K_{{T_{\mathrm{c}}}, \mathbf A, W}} \, \eta_\perp\bigr\Vert_{2} \bigl\Vert \sqrt{K_{{T_{\mathrm{c}}}, \mathbf A, W}} \, \xi \bigr\Vert_{2} \notag \\
&\leq C\varepsilon^{\nicefrac 12} h^4 \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)} \; \bigl( \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2 + D_1\bigr)^{\nicefrac 12}. \label{eq:A31}
\end{align}
It remains to estimate the last term on the right side of \eqref{Lower_Bound_B_1}, which we write as
\begin{align}
\langle \eta_\perp, K_{{T_{\mathrm{c}}}, \mathbf A, W} \sigma_0\rangle &= \langle \eta_\perp, K_{{T_{\mathrm{c}}}}^r \sigma_0\rangle + \langle \eta_\perp, [K_{{T_{\mathrm{c}}}, \mathbf A, W}^r - K_{{T_{\mathrm{c}}}}^r ] \sigma_0\rangle + \langle \eta_\perp, (U-1)K_{{T_{\mathrm{c}}}, \mathbf A, W}^r \sigma_0\rangle \notag \\
&\hspace{170pt} + \langle \eta_\perp, UK_{{T_{\mathrm{c}}}, \mathbf A, W}^r (U^* - 1) \sigma_0\rangle, \label{LBpartB_3}
\end{align}
with the unitary operator $U$ in \eqref{U_definition}. We recall that the operators $K_{{T_{\mathrm{c}}}, \mathbf A, W}^r$ and $K_{{T_{\mathrm{c}}}}^r$ act on the relative coordinate $r = x-y$. Since $\Delta(X,r) = - 2 V(r) \alpha_*(r) \Psi_{\leq}(X)$ and $\sigma_{0}(X,r) = \alpha_*(r) \Psi_>(X)$ we know from Proposition~\ref{Structure_of_alphaDelta}~(c) that the first term on the right side of \eqref{LBpartB_3} vanishes. A bound for the remaining terms is provided by the following lemma.
\begin{lem}
\label{Lower_Bound_B_remainings}
We have the following estimates on the remainder terms of \eqref{LBpartB_3}:
\begin{enumerate}[(a)]
\item $|\langle \eta_\perp, [K_{{T_{\mathrm{c}}},\mathbf A, W}^r - K_{{T_{\mathrm{c}}}}^r]\sigma_0\rangle| \leq C h^6 \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2$,
\item $|\langle \eta_\perp, (U - 1) K_{{T_{\mathrm{c}}},\mathbf A, W}^r \sigma_0\rangle| \leq C \varepsilon^{\nicefrac 12} h^4 \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2$,
\item $|\langle \eta_\perp, UK_{{T_{\mathrm{c}}}, \mathbf A, W}^r (U^* - 1)\sigma_0\rangle| \leq C \varepsilon^{\nicefrac 12} h^4 \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2$.
\end{enumerate}
\end{lem}
\begin{proof}
The proof for the case $W =0$ and $A =0$ is given in \cite[Lemma 6.2]{DeHaSc2021}. The extension to general $A$ and $W$ is straightforward.
\end{proof}
Accordingly, we have
\begin{align*}
|\langle \sigma, K_{T, \mathbf A, W} \eta \rangle | &\leq C\bigl( \varepsilon^{-\nicefrac 12} h^5 + \varepsilon^{\nicefrac 12} h^4 \bigr) \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)} \; \bigl( \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2 + D_1\bigr)^{\nicefrac 12}.
\end{align*}
We combine this with \eqref{Lower_Bound_B_2}, \eqref{Lower_Bound_2_intermediate}, and \eqref{Lower_Bound_B_3} to see that
\begin{align}
\FBCS(\Gamma) - \FBCS(\Gamma_0) & \geq h^4\, \mathcal E^{\mathrm{GL}}_{D}(\Psi_\leq) \notag\\
&\hspace{-90pt} - C\bigl( \varepsilon^{-\nicefrac 12} h^5 + \varepsilon^{\nicefrac 12} h^4 + \varepsilon^{-1} h^{\nicefrac 92} \bigr) \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)} \bigl( \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2 + D_1 \bigr)^{\nicefrac 12}. \label{Lower_Bound_B_8}
\end{align}
The optimal choice $\varepsilon = h^{\nicefrac 13}$ in \eqref{Lower_Bound_B_8} yields
\begin{align}
&\FBCS(\Gamma) - \FBCS(\Gamma_0) \notag \\
&\hspace{2cm}\geq h^4 \, \bigl(\mathcal E^{\mathrm{GL}}_{D}(\Psi_\leq) - C \, h^{\nicefrac {1}{6}} \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)} \; \bigl( \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2 + D_1\bigr)^{\nicefrac 12} \bigr). \label{Lower_Bound_B_5}
\end{align}
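The power $h^{\nicefrac 16}$ arises from balancing the error terms in \eqref{Lower_Bound_B_8}: equating $\varepsilon^{\nicefrac 12} h^4 = \varepsilon^{-1} h^{\nicefrac 92}$ yields $\varepsilon = h^{\nicefrac 13}$, and with this choice
\begin{align*}
\varepsilon^{-\nicefrac 12} h^5 = h^{\nicefrac{29}{6}}, \qquad \varepsilon^{\nicefrac 12} h^4 = \varepsilon^{-1} h^{\nicefrac 92} = h^{\nicefrac{25}{6}} = h^4 \, h^{\nicefrac 16}.
\end{align*}
We also note that $\varepsilon = h^{\nicefrac 13}$ lies in the admissible window $[h^2, h_0^2]$ of Corollary~\ref{Structure_of_almost_minimizers_corollary} once $h \leq h_0^6$.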
\subsection{Conclusion}
We use \eqref{Lower_Bound_B_5} to complete the proofs of Theorem~\ref{Main_Result} and Theorem~\ref{Main_Result_Tc}. To prove the former, let an approximate minimizer $\Gamma$ of the BCS functional be given. This implies that \eqref{Second_Decomposition_Gamma_Assumption} holds with
\begin{align}
D_1 := E^{\mathrm{GL}}(D) + \rho \label{Lower_Bound_B_4}
\end{align}
and $\rho\geq 0$. Since $\Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)} \leq C$ by \eqref{Second_Decomposition_Psi_xi_estimate}, \eqref{Lower_Bound_B_5} implies
\begin{align*}
h^4 \bigl(E^{\mathrm{GL}}(D) + \rho\bigr) \geq \FBCS(\Gamma) - \FBCS(\Gamma_0) \geq h^4 \bigl( \mathcal E^{\mathrm{GL}}_{D}(\Psi_\leq) - C\, h^{\nicefrac 16}\bigr)
\end{align*}
and this proves the remaining claims of Theorem \ref{Main_Result}.
We continue with the proof of Theorem~\ref{Main_Result_Tc}. We assume that the temperature $T$ obeys
\begin{align}
{T_{\mathrm{c}}} ( 1 - h^2 ({D_{\mathrm{c}}} - D_0 h^{\nicefrac 16})) < T \leq {T_{\mathrm{c}}}(1 + Ch^2), \label{TcB_Upper_Fine_1}
\end{align}
where ${D_{\mathrm{c}}}$ is defined in \eqref{Dc_Definition} and $D_0 > 0$. For such temperatures $T$, we claim that the BCS functional is minimized by the normal state if $D_0$ is chosen appropriately large. This concludes the proof of part (b) of Theorem \ref{Main_Result_Tc}, since Corollary \ref{TcB_First_Upper_Bound} covers the remaining temperature range.
To prove this claim, we start from \eqref{Lower_Bound_B_5} and assume that \eqref{Second_Decomposition_Gamma_Assumption} holds with $D_1 =0$. For a lower bound, we may drop the nonnegative quartic term in the Ginzburg--Landau functional so that
\begin{align*}
\mathcal E^{\mathrm{GL}}_{D}(\Psi_\leq) &\geq h^{-4} \langle \Psi_\leq , (\Lambda_0 \, \Pi_\mathbf A^2 + \Lambda_1 \, W - Dh^2 \Lambda_2) \Psi_\leq\rangle \geq \Lambda_2 \, ( {D_{\mathrm{c}}} - D) \; h^{-2} \, \Vert \Psi_\leq \Vert_2^2,
\end{align*}
with the coefficients $\Lambda_0$, $\Lambda_1$, $\Lambda_2$, and $\Lambda_3$ in \eqref{GL-coefficient_1}-\eqref{GL_coefficient_3}, as well as $D \in \mathbb{R}$ determined by $T = {T_{\mathrm{c}}}(1-D h^2)$. We use the bound
\begin{align*}
\Vert \Pi \Psi\Vert_2 &\leq C \, h \, \Vert \Psi\Vert_2,
\end{align*}
whose proof is given in \cite[Eq. (5.26)]{DeHaSc2021}, and combine this with \eqref{Psi>_bound} to see that
\begin{align*}
\Vert \Psi_\leq \Vert_2 \geq \Vert \Psi\Vert_2 - \Vert \Psi_>\Vert_2 \geq c \; h \; \Vert \Psi \Vert_{H_{\mathrm{mag}}^1(Q_h)} \, ( 1 - C \, h^{\nicefrac{5}{6}} ).
\end{align*}
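Let us briefly indicate the two ingredients of the last display. Under the assumption that the norm in \eqref{Periodic_Sobolev_Norm} takes the form $\Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2 = h^{-2} \Vert \Psi\Vert_2^2 + h^{-4} \Vert \Pi\Psi\Vert_2^2$ (its precise form is stated there), the bound $\Vert \Pi\Psi\Vert_2 \leq C h \Vert \Psi\Vert_2$ implies
\begin{align*}
\Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2 \leq (1 + C^2) \, h^{-2} \, \Vert \Psi\Vert_2^2, \qquad \text{that is,} \qquad \Vert \Psi\Vert_2 \geq c \, h \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}.
\end{align*}
Moreover, \eqref{Psi>_bound} with the choice $\varepsilon = h^{\nicefrac 13}$ gives $\Vert \Psi_>\Vert_2 \leq C h^{2 - \nicefrac 16} \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)} = C h \cdot h^{\nicefrac 56} \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}$, which accounts for the factor $(1 - C h^{\nicefrac 56})$.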
We combine these observations with the lower bound \eqref{Lower_Bound_B_5} and obtain
\begin{equation}
0\geq \FBCS(\Gamma) - \FBCS(\Gamma_0) \geq c \; h^4 \; \Vert \Psi \Vert_{H_{\mathrm{mag}}^1(Q_h)}^2 \bigl( ({D_{\mathrm{c}}} - D) - Ch^{\nicefrac {1}{6}}\bigr). \label{eq:A33}
\end{equation}
Since the lower bound in \eqref{TcB_Upper_Fine_1} is equivalent to
\begin{align}
{D_{\mathrm{c}}} - D > D_0 \, h^{\nicefrac 1{6}}, \label{TcB_Upper_Fine_2}
\end{align}
the choice $D_0 > C$, with $C>0$ the constant in \eqref{eq:A33}, shows that the right side of \eqref{eq:A33} is bounded from below by $c\, h^4\, (D_0 - C)\, h^{\nicefrac 16}\, \Vert \Psi \Vert_{H_{\mathrm{mag}}^1(Q_h)}^2$, which is strictly positive unless $\Psi = 0$. This proves that $\Psi =0$. By \eqref{Second_Decomposition_alpha_equation} and \eqref{Second_Decomposition_Psi_xi_estimate}, this implies that $\alpha = 0$, i.e., $\Gamma$ is a diagonal state. Therefore, $\Gamma_0$ is the unique minimizer of $\FBCS$ in the temperature range \eqref{TcB_Upper_Fine_1} provided $D_0$ is chosen as above. This proves Theorem \ref{Main_Result_Tc}.
\subsection{Proof of Lemma~\ref{Lower_Bound_B_remainings}}
\label{sec:A1}
In this section we prove Lemma~\ref{Lower_Bound_B_remainings}.
Our proof of part (a) uses a Cauchy integral representation for the operator $K_{{T_{\mathrm{c}}},B}- (\pi^2 -\mu)$, which is provided in Lemma~\ref{KT_integral_rep} below. Let us start by defining the contour for the Cauchy integral.
\begin{defn}[Speaker path]
\label{speaker path}
Let $R>0$, assume that $\mu \leq 1$ and define the following complex paths
\begin{align*}
\begin{split}
u_1(t) &:= \frac{\pi\i}{2{\beta_{\mathrm{c}}}} + (1 + \i)t\\
u_2(t) &:= \frac{\pi\i}{2{\beta_{\mathrm{c}}}} - (\mu + 1)t\\
u_3(t) &:= -\frac{\pi\i}{2{\beta_{\mathrm{c}}}}t - (\mu + 1)\\
u_4(t) &:= -\frac{\pi\i}{2{\beta_{\mathrm{c}}}} - (\mu + 1)(1-t) \\
u_5(t) &:= -\frac{\pi\i }{2{\beta_{\mathrm{c}}}} + (1 - \i)t
\end{split}
&
\begin{split}
\phantom{ \frac \i{\beta_{\mathrm{c}}} }t&\in [0,R], \\
\phantom{ \frac \i{\beta_{\mathrm{c}}} }t &\in [0,1], \\
\phantom{ \frac \i{\beta_{\mathrm{c}}} }t&\in [-1,1],\\
\phantom{ \frac \i{\beta_{\mathrm{c}}} }t &\in [0,1],\\
\phantom{ \frac \i {\beta_{\mathrm{c}}} }t&\in [0,R].
\end{split}
& \begin{split}
\text{\includegraphics[width=6cm]{../BCS-Theory_General_Field/Speaker_path.pdf}}
\end{split}
\end{align*}
The speaker path is defined as the union of paths $u_i$, $i=1, \ldots, 5$, with $u_1$ taken in reverse direction, i.e.,
\begin{align*}
{\hspace{0.7pt}\text{\raisebox{-1.7pt}{\scalebox{1.4}{\faVolumeOff}}\hspace{0.7pt}}}_R := \mathop{\dot -}u_1 \mathop{\dot +} u_2 \mathop{\dot +} u_3 \mathop{\dot +} u_4 \mathop{\dot +} u_5.
\end{align*}
If $\mu > 1$ we choose the same path as in the case $\mu = 1$.
\end{defn}
This path has the property that certain norms of the resolvent kernel of $\pi^2$ are uniformly bounded for $z \in {\hspace{0.7pt}\text{\raisebox{-1.7pt}{\scalebox{1.4}{\faVolumeOff}}\hspace{0.7pt}}}_R$ and $R > 0$. More precisely, Lemma~\ref{GAz-GtildeAz_decay} implies
\begin{align}
\sup_{0\leq B\leq B_0} \sup_{R>0} \sup_{w\in {\hspace{0.7pt}\text{\raisebox{-1.7pt}{\scalebox{1.4}{\faVolumeOff}}\hspace{0.7pt}}}_R} \bigl[ \left\Vert \, |\cdot |^a g_B^w\right\Vert_1 + \left \Vert \, |\cdot|^a \nabla g_B^w\right\Vert_1 \bigr] < \infty. \label{g0_decay_along_speaker}
\end{align}
We could also choose a path parallel to the real axis in Lemma~\ref{KT_integral_rep} below. In this case the above norms would depend on $R$. Although our analysis also works in this case, we decided to use the path ${\hspace{0.7pt}\text{\raisebox{-1.7pt}{\scalebox{1.4}{\faVolumeOff}}\hspace{0.7pt}}}_R$ because of the more elegant bound in \eqref{g0_decay_along_speaker}.
With the above definition at hand, we are prepared to state the following lemma.
\begin{lem}
\label{KT_integral_rep}
Let $H\colon \mathcal{D}(H)\rightarrow \mathcal{H}$ be a self-adjoint operator on a separable Hilbert space~$\mathcal{H}$ with $H\geq -\mu$. Then, we have
\begin{align}
\frac{H}{\tanh(\frac{{\beta_{\mathrm{c}}}}{2} H)} = H + \lim_{R\to\infty} \int_{{\hspace{0.7pt}\text{\raisebox{-1.7pt}{\scalebox{1.4}{\faVolumeOff}}\hspace{0.7pt}}}_R} \frac{\mathrm{d} z}{2\pi\i} \Bigl( \frac{z}{\tanh(\frac{{\beta_{\mathrm{c}}}}{2} z)} - z \Bigr) \frac{1}{z - H}, \label{KT_integral_rep_eq}
\end{align}
with the speaker path ${\hspace{0.7pt}\text{\raisebox{-1.7pt}{\scalebox{1.4}{\faVolumeOff}}\hspace{0.7pt}}}_R$ in Definition~\ref{speaker path}. The above integral including the limit is understood as an improper Riemann integral with respect to the uniform operator topology.
\end{lem}
\begin{proof}
The function $f(z) = \frac{z}{\tanh(\frac{{\beta_{\mathrm{c}}} }{2}z)} - z = \frac{2z}{\mathrm{e}^{{\beta_{\mathrm{c}}} z} - 1}$ is analytic in the open domain $\mathbb{C} \setminus 2\pi {T_{\mathrm{c}}} \i \mathbb{Z}_{\neq 0}$. The construction of the Riemann integral over the path ${\hspace{0.7pt}\text{\raisebox{-1.7pt}{\scalebox{1.4}{\faVolumeOff}}\hspace{0.7pt}}}_R$ with respect to the uniform operator topology is standard. The fact that the limit $R \to \infty$ exists in the same topology follows from the exponential decay of the function $f(z)$ along the speaker path. To check the equality in \eqref{KT_integral_rep_eq}, we evaluate both sides in the inner product with two vectors in $\ran \mathbbs 1_{(-\infty, K]}(H)$ for $K > 0$ and use the functional calculus, the Cauchy integral formula and the fact that $\bigcup_{K>0} \ran \mathbbs 1_{(-\infty, K]}(H)$ is a dense subset of $\mathcal{H}$. This proves the claim.
\end{proof}
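For completeness, the algebraic identity for $f$ used in the proof can be verified directly:
\begin{align*}
\frac{z}{\tanh(\frac{{\beta_{\mathrm{c}}}}{2} z)} - z = z \, \frac{\mathrm{e}^{\frac{{\beta_{\mathrm{c}}}}{2} z} + \mathrm{e}^{-\frac{{\beta_{\mathrm{c}}}}{2} z}}{\mathrm{e}^{\frac{{\beta_{\mathrm{c}}}}{2} z} - \mathrm{e}^{-\frac{{\beta_{\mathrm{c}}}}{2} z}} - z = \frac{2 z \, \mathrm{e}^{-\frac{{\beta_{\mathrm{c}}}}{2} z}}{\mathrm{e}^{\frac{{\beta_{\mathrm{c}}}}{2} z} - \mathrm{e}^{-\frac{{\beta_{\mathrm{c}}}}{2} z}} = \frac{2z}{\mathrm{e}^{{\beta_{\mathrm{c}}} z} - 1}.
\end{align*}
In particular, $|f(z)| \leq C |z| \, \mathrm{e}^{-{\beta_{\mathrm{c}}} \operatorname{Re} z}$ for $\operatorname{Re} z \geq 1$, which quantifies the exponential decay of $f$ along the paths $u_1$ and $u_5$ of the speaker path.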
Henceforth, we use the symbol $\int_{{\hspace{0.7pt}\text{\raisebox{-1.7pt}{\scalebox{1.4}{\faVolumeOff}}\hspace{0.7pt}}}}$ to denote the integral on the right side of \eqref{KT_integral_rep_eq} including the limit and we denote ${\hspace{0.7pt}\text{\raisebox{-1.7pt}{\scalebox{1.4}{\faVolumeOff}}\hspace{0.7pt}}} = \bigcup_{R > 0} {\hspace{0.7pt}\text{\raisebox{-1.7pt}{\scalebox{1.4}{\faVolumeOff}}\hspace{0.7pt}}}_R$.
\begin{proof}[Proof of Lemma \ref{Lower_Bound_B_remainings}]
We apply Cauchy-Schwarz to estimate
\begin{align}
|\langle \eta_\perp, (K_{{T_{\mathrm{c}}},B}^r - K_{{T_{\mathrm{c}}}}^r)\sigma_0\rangle| \leq \Vert \eta_\perp\Vert_2 \, \Vert (K_{{T_{\mathrm{c}}},B}^r- K_{{T_{\mathrm{c}}}}^r)\sigma_0\Vert_2
\label{eq:A34}
\end{align}
and claim that
\begin{align}
\Vert [K_{{{T_{\mathrm{c}}}},B}^r - K_{{T_{\mathrm{c}}}}^r]\sigma_0\Vert_2 \leq C \varepsilon^{-\nicefrac 12} B^2 \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_B)} \label{Lower_Bound_B_remainings_3}
\end{align}
holds. To see this, we apply Lemma~\ref{KT_integral_rep} and write
\begin{align}
K_{{{T_{\mathrm{c}}}},B}^r - K_{{T_{\mathrm{c}}}}^r = \pi_r^2 - p_r^2 + \int_{\hspace{0.7pt}\text{\raisebox{-1.7pt}{\scalebox{1.4}{\faVolumeOff}}\hspace{0.7pt}}} \frac{\mathrm{d} w}{2\pi\i} \; f(w) \; \frac{1}{w + \mu - \pi_r^2} [\pi_r^2 - p_r^2] \frac{1}{w + \mu - p_r^2}, \label{Lower-bound-final_2}
\end{align}
where $\pi_r^2 - p_r^2 = \i \, \mathbf B\wedge r\; p_r + \frac 14 |\mathbf B\wedge r|^2$. Using \eqref{Psi>_bound} and \eqref{Decay_of_alphastar}, we estimate the first term on the right side of \eqref{Lower-bound-final_2} by
\begin{align}
\Vert [\pi_r^2 - p_r^2]\sigma_0\Vert_2 &\leq B \, \Vert \, |\cdot|\nabla \alpha_*\Vert_2 \Vert \Psi_>\Vert_2 + B^2 \Vert \,|\cdot|^2\alpha_*\Vert_2 \Vert \Psi_>\Vert_2 \notag\\
&\leq C\varepsilon^{-\nicefrac 12} B^2 \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_B)}. \label{Lower_Bound_B_remainings_1}
\end{align}
To estimate the second term in \eqref{Lower-bound-final_2}, we use Hölder's inequality in \eqref{Schatten-Hoelder} and find
\begin{align*}
\Bigl\Vert \int_{\hspace{0.7pt}\text{\raisebox{-1.7pt}{\scalebox{1.4}{\faVolumeOff}}\hspace{0.7pt}}} \frac{\mathrm{d} w}{2\pi\i} \, f(w)\; \frac{1}{w + \mu - \pi_r^2} [\pi_r^2 - p_r^2]\frac{1}{w+\mu - p_r^2}\sigma_0\Bigr\Vert_2 &\\
&\hspace{-180pt}\leq \int_{\hspace{0.7pt}\text{\raisebox{-1.7pt}{\scalebox{1.4}{\faVolumeOff}}\hspace{0.7pt}}} \frac{\mathrm{d} |w|}{2\pi}\, |f(w)| \, \Bigl\Vert \frac{1}{w + \mu- \pi_r^2} \Bigr\Vert_\infty \Bigl\Vert [\pi_r^2 - p_r^2]\frac{1}{w + \mu - p_r^2}\sigma_0\Bigr\Vert_2,
\end{align*}
where $\mathrm{d} |w| = \mathrm{d} t \; |w'(t)|$. Eq.~\eqref{g0_decay_along_speaker} implies that the operator norm of the magnetic resolvent is uniformly bounded for $w \in {\hspace{0.7pt}\text{\raisebox{-1.7pt}{\scalebox{1.4}{\faVolumeOff}}\hspace{0.7pt}}}$. Since the function $f$ is exponentially decaying along the speaker path it suffices to prove a bound on the last factor that is uniform for $w \in {\hspace{0.7pt}\text{\raisebox{-1.7pt}{\scalebox{1.4}{\faVolumeOff}}\hspace{0.7pt}}}$. We have
\begin{align*}
[\pi_r^2 - p_r^2] \frac{1}{w + \mu - p_r^2} \sigma_0(X,r) &= \int_{\mathbb{R}^3} \mathrm{d} s \; [\pi_r^2 - p_r^2] g_0^w(r - s) \alpha_*(s) \Psi_>(X),
\end{align*}
which implies
\begin{align}
\Bigl\Vert [\pi_r^2 - p_r^2] \frac{1}{w + \mu - p_r^2}\sigma_0\Bigr\Vert_2^2 &
\leq \Vert \Psi_>\Vert_2^2 \int_{\mathbb{R}^3} \mathrm{d} r \, \Bigl| \int_{\mathbb{R}^3} \mathrm{d} s \; |[\pi_r^2 - p_r^2] g_0^w(r - s) \alpha_*(s)|\Bigr|^2. \label{Lower_Bound_B_remainings_2}
\end{align}
Moreover,
\begin{align*}
\int_{\mathbb{R}^3} \mathrm{d} r \; \Bigl| \int_{\mathbb{R}^3} \mathrm{d} s \; |[\pi_r^2 - p_r^2] g_0^w(r - s) \alpha_*(s)|\Bigr|^2 & \\
&\hspace{-195pt} \leq CB^2 \bigl( \Vert \, |\cdot| \nabla g_0^w\Vert_1^2 \; \Vert \alpha_*\Vert_2^2 + \Vert \nabla g_0^w\Vert_1^2 \; \Vert \, |\cdot|\alpha_*\Vert_2^2 + \Vert \, |\cdot|^2 g_0^w\Vert_1^2 \; \Vert\alpha_*\Vert_2^2 + \Vert g_0^w\Vert_1^2 \; \Vert \, |\cdot|^2\alpha_*\Vert_2^2\bigr).
\end{align*}
The right side is uniformly bounded for $w \in {\hspace{0.7pt}\text{\raisebox{-1.7pt}{\scalebox{1.4}{\faVolumeOff}}\hspace{0.7pt}}}$ by \eqref{g0_decay_along_speaker} and \eqref{Decay_of_alphastar}. In combination with \eqref{Psi>_bound} and \eqref{Lower_Bound_B_remainings_2}, this implies
\begin{align*}
\Bigl\Vert [\pi_r^2 -p_r^2]\frac{1}{w + \mu - p_r^2} \sigma_0\Bigr\Vert_2^2 & \leq CB^2 \; \Vert\Psi_>\Vert_2^2\leq C \varepsilon^{-1} B^4 \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_B)}^2.
\end{align*}
Using this and \eqref{Lower_Bound_B_remainings_1}, we read off \eqref{Lower_Bound_B_remainings_3}. Finally, we apply Proposition~\ref{Structure_of_alphaDelta} to estimate $\Vert \eta_\perp\Vert_2$ in \eqref{eq:A34}, which proves part (a).
To prove part (b), we start by noting that
\begin{align*}
|\langle \eta_\perp, (U-1) K_{{T_{\mathrm{c}}},B}^r\sigma_0 \rangle| &\leq \Vert \, |r| \eta_\perp\Vert_2 \; \Vert\, |r|^{-1} (U - 1) K_{{T_{\mathrm{c}}},B}^r \, \sigma_0 \Vert_2.
\end{align*}
A bound for the left factor on the right side is provided by Proposition~\ref{Structure_of_alphaDelta}. To estimate the right factor, we use \eqref{Psi>_bound}, \eqref{Decay_of_alphastar} and the operator inequality in \eqref{ZPiX_inequality}, which implies $|U- 1|^2 \leq 3 r^2 \Pi_X^2$, and find
\begin{align*}
\Vert\, |r|^{-1} (U - 1) K_{{T_{\mathrm{c}}},B}^r \, \sigma_0 \Vert_2 &\leq C \Vert K_{{T_{\mathrm{c}}},B}^r \alpha_*\Vert_2 \; \Vert \Pi\Psi_>\Vert_2 \leq CB \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_B)}.
\end{align*}
This proves part (b).
For part (c), we estimate
\begin{align}
|\langle \eta_\perp, UK_{{T_{\mathrm{c}}},B}^r (U^* - 1) \sigma_0\rangle| &\leq \bigl\Vert \sqrt{K_{{T_{\mathrm{c}}},B}^r} \, U^* \eta_\perp \bigr\Vert_2 \; \bigl\Vert \sqrt{K_{{T_{\mathrm{c}}},B}^r}\, (U^* - 1)\sigma_0\bigr\Vert_2
\label{eq:A35}
\end{align}
and note that $K_{{T_{\mathrm{c}}},B}^r \leq C(1+ \pi_r^2)$ implies
\begin{align}
\bigl\Vert \sqrt{K_{{T_{\mathrm{c}}},B}^r} (U^* - 1)\sigma_0\bigr\Vert_2^2 &= \langle \sigma_0, (U - 1) K_{{T_{\mathrm{c}}},B}^r (U^* - 1)\sigma_0\rangle \notag\\
&\leq C \Vert (U^* - 1)\sigma_0\Vert_2^2 + C\Vert \pi_r (U^* - 1)\sigma_0\Vert_2^2.\label{Lower-bound-final_3}
\end{align}
Using the bound for $|U-1|^2$ in part (b), \eqref{Psi>_bound} and \eqref{Decay_of_alphastar}, we see that the first term is bounded by $C\Vert |r|\alpha_* \Pi_X\Psi_>\Vert^2 \leq CB^2$. Lemma~\ref{CommutationI} allows us to write
\begin{align*}
\pi_r (U^* - 1) &= (U^* - 1) \tilde \pi_r + \frac 12 U^*\Pi_X - \frac 14 \mathbf B \wedge r.
\end{align*}
Accordingly, we have
\begin{align*}
\Vert \pi_r (U^* - 1)\sigma_0\Vert_2^2 &\\
&\hspace{-50pt}\leq C\bigl( \Vert \, |r| p_r\alpha_*\Pi_X\Psi_>\Vert_2^2 + B \, \Vert \, |r|^2\alpha_* \Pi_X \Psi_> \Vert_2^2 + \Vert \alpha_*\Pi_X\Psi_>\Vert_2^2 + B \Vert \, |r|\alpha_* \Psi_>\Vert_2^2\bigr)\\
&\hspace{-50pt}\leq C \bigl( B^2 + \varepsilon^{-1} B^3 \bigr) \leq CB^2 \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_B)}^2.
\end{align*}
We conclude that the right side of \eqref{Lower-bound-final_3} is bounded by $CB^2 \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_B)}^2$.
With $K_T(p) \leq C (1+p^2)$ we see that the first factor on the right side of \eqref{eq:A35} is bounded by
\begin{align*}
\bigl\Vert \sqrt{K_{{T_{\mathrm{c}}},B}^r} \, U^* \eta_\perp\bigr\Vert_2^2 &= \langle \eta_\perp , U K_{{T_{\mathrm{c}}},B}^r U^* \eta_\perp\rangle \leq C \Vert \eta_\perp\Vert_2^2 + C\Vert \pi_rU^*\eta_\perp\Vert_2^2.
\end{align*}
From Lemma~\ref{CommutationI} we know that $\pi_r U^* = U^* [\tilde\pi_r + \frac 12 \Pi_X]$, and hence
\begin{align*}
\bigl\Vert \sqrt{K_{{T_{\mathrm{c}}},B}^r} \, U^* \eta_\perp\bigr\Vert_2^2 &\leq C \bigl( \Vert \eta_\perp\Vert_2^2 + \Vert \tilde \pi_r\eta_\perp\Vert_2^2 + \Vert \Pi_X\eta_\perp\Vert_2^2\bigr) \leq C \varepsilon B^2 \, \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_B)}^2.
\end{align*}
This proves part (c) and ends the proof.
\end{proof}
\section{The Lower Bound on \texorpdfstring{(\ref{ENERGY_ASYMPTOTICS})}{(\ref{ENERGY_ASYMPTOTICS})} and Proof of Theorem \ref{Main_Result_Tc} (b)}
\label{Lower Bound Part B}
\subsection{The BCS energy of low-energy states}
In this section, we complete the proofs of Theorems \ref{Main_Result} and \ref{Main_Result_Tc}, which amounts to providing the lower bound on \eqref{ENERGY_ASYMPTOTICS}, the bound in \eqref{GL-estimate_Psi}, and the proof of Theorem~\ref{Main_Result_Tc}~(b). Since these proofs mostly go along the same lines as those in \cite{DeHaSc2021}, we only mention the differences and keep the presentation short. Once the a priori estimates in \cite[Theorem~5.1]{DeHaSc2021} are proved, the proofs of the lower bound for the free energy, of the decomposition of the Cooper pair wave function of an approximate minimizer, and of the upper bound for the critical temperature shift in \cite{DeHaSc2021} follow the same strategy as the related proofs in \cite{Hainzl2012,Hainzl2014}. In the following we will, however, only refer to \cite{DeHaSc2021}, because our presentation is closer to the analysis in that reference than to those in \cite{Hainzl2012,Hainzl2014}.
Let $D_1\geq 0$ and $D\in \mathbb{R}$ be given, choose $T = {T_{\mathrm{c}}}(1 - Dh^2)$, and assume that $\Gamma$ is a gauge-periodic state that satisfies \eqref{Second_Decomposition_Gamma_Assumption}. Corollary~\ref{Structure_of_almost_minimizers_corollary} guarantees a decomposition of the Cooper pair wave function $\alpha = [\Gamma]_{12}$ in terms of $\Psi_{\leq}$ in \eqref{PsileqPsi>_definition} and $\sigma$ in \eqref{sigma}. The function $\Psi_{\leq}$ satisfies the bounds
\begin{align}
\Vert \Psi_{\leq} \Vert_{H_{\mathrm{mag}}^1(Q_h)}^2 &\leq \Vert \Psi \Vert_{H_{\mathrm{mag}}^1(Q_h)}^2 \leq C, & \Vert \Psi_{\leq} \Vert_{H_{\mathrm{mag}}^2(Q_h)}^2 &\leq C \varepsilon h^{-2} \Vert \Psi \Vert_{H_{\mathrm{mag}}^1(Q_h)}^2, \label{eq:lowerboundB1}
\end{align}
with $\Psi$ in \eqref{Second_Decomposition_alpha_equation}. Let us define the state $\Gamma_{\Delta}$ as in \eqref{GammaDelta_definition} with $\Delta(X,r) = -2 V \alpha_* (r) \Psi_{\leq}(X)$. We apply Proposition~\ref{BCS FUNCTIONAL_IDENTITY} and Theorem~\ref{CALCULATION_OF_THE_GL-ENERGY} to obtain the following lower bound for the BCS energy of $\Gamma$:
\begin{align}
\FBCS(\Gamma) - \FBCS(\Gamma_0) &\geq h^4\; \mathcal E^{\mathrm{GL}}_{D}(\Psi_{\leq}) - C \left( h^5 + \varepsilon h^4 \right) \Vert \Psi \Vert_{H_{\mathrm{mag}}^1(Q_h)}^2 \nonumber \\
%
&\hspace{20pt}+ \frac{T}{2} \mathcal{H}_0(\Gamma, \Gamma_\Delta) - \fint_{Q_h} \mathrm{d} X \int_{\mathbb{R}^3} \mathrm{d} r \; V(r) \, | \sigma(X,r) |^2. \label{eq:lowerboundB2}
\end{align}
In the next section we prove a lower bound for the terms in the second line of \eqref{eq:lowerboundB2}.
\subsection{Estimate on the relative entropy}
The arguments in \cite[Eqs.~(6.1)-(6.14)]{DeHaSc2021} apply in literally the same way here, too. We obtain the correct bounds when we replace $B$ by $h^2$ in all formulas. This, in particular, applies to the statement of \cite[Lemma~6.2]{DeHaSc2021}. The only difference is that \cite[Eq.~(6.10)]{DeHaSc2021} is now given by
\begin{equation*}
| \langle \eta_{0}, K_{T_{\mathrm{c}},\mathbf A,W} \sigma \rangle | \leq \ C \varepsilon^{-\nicefrac{1}{2}} h^{\nicefrac{9}{2}} \Vert \Psi \Vert_{H_{\mathrm{mag}}^1(Q_h)} \bigl( \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2 + D_1\bigr)^{\nicefrac{1}{2}},
\end{equation*}
which is due to the fact that the bound for the $L^2$-norm of $\eta_0$ in Proposition~\ref{Structure_of_alphaDelta} is worse than the comparable bound obtained in \cite[Proposition~3.2]{DeHaSc2021}. This, however, does not change the size of the remainder in the final bound, because other error terms come with a worse rate.
With the choice $\varepsilon = h^{\nicefrac 13}$, we therefore obtain the bound
\begin{align}
&\FBCS(\Gamma) - \FBCS(\Gamma_0) \notag \\
&\hspace{2cm}\geq h^4 \, \bigl(\mathcal E^{\mathrm{GL}}_{D}(\Psi_\leq) - C \, h^{\nicefrac {1}{6}} \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)} \; \bigl( \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}^2 + D_1\bigr)^{\nicefrac 12} \bigr), \label{Lower_Bound_B_5}
\end{align}
which is the equivalent of \cite[Eq.~(6.14)]{DeHaSc2021}.
\subsection{Conclusion}
The arguments in \cite[Section~6.3]{DeHaSc2021} apply here in the same way, and we obtain the correct formulas when we replace $B^{\nicefrac{1}{2}}$ by $h$. This concludes the proof of Theorem~\ref{Main_Result} and Theorem~\ref{Main_Result_Tc}.
\subsection{Proof of the equivalent of \texorpdfstring{\cite[Lemma~6.2]{DeHaSc2021}}{Lemma~6.2 in Deuchert, Hainzl, Maier (2021)} in our setting}
To obtain a proof of the equivalent of \cite[Lemma~6.2]{DeHaSc2021} in our setting, we follow the proof strategy in \cite{DeHaSc2021}. The additional terms coming from the external electric potential are not difficult to bound because $W$ is a bounded function. To obtain bounds of the correct size in $h$ for the terms involving the periodic vector potential $A_h$, we need to use that $A(0) = 0$, which is guaranteed by Assumption~\ref{Assumption_V}. This is relevant for example when we estimate our equivalent of the term on the left side of \cite[Eq.~(6.24)]{DeHaSc2021}, that is,
\begin{equation*}
\Vert [ \pi_{\mathbf A_h}^2 + W_h(r) - p_r^2 ] \sigma_0 \Vert_2
\end{equation*}
with $p_r = - \mathrm{i} \nabla_r$ and $\sigma_0$ in \eqref{sigma0}. We write the operator multiplying $\sigma_0$ as
\begin{equation}
\pi_r^2 - p_r^2 + W(r) + A(r) \cdot \pi_r + \pi_r \cdot A(r) + |A(r)|^2,
\label{eq:Andi17}
\end{equation}
where $\pi_r = -\mathrm{i} \nabla_r + \mathbf A_{\mathbf B}(r)$. When we use \eqref{Psi>_bound}, we see that the terms involving $|A_h|^2$ and $W_h$ are bounded by
\begin{align}
\bigl( \Vert A_h\Vert_\infty^2 + \Vert W_h \Vert_\infty \bigr) \ \Vert \sigma_0 \Vert_2 \leq C \varepsilon^{-\nicefrac{1}{2}} h^4 \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}.
\label{eq:proofoflemma32b}
\end{align}
Moreover, from \cite[Eq.~(6.24)]{DeHaSc2021} we know that
\begin{equation*}
\Vert [\pi_r^2 - p_r^2]\sigma_0\Vert_2 \leq C \varepsilon^{-\nicefrac{1}{2}} h^4 \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}.
\end{equation*}
To obtain a bound for the contribution from the fourth and the fifth term on the right side of \eqref{eq:Andi17}, we write
\begin{equation*}
A_h(r) = h^2 \int_0^1 \mathrm{d} t \ (D A)(h r t) \cdot r,
\end{equation*}
where $DA$ denotes the Jacobi matrix of $A$. Hence,
\begin{equation*}
\Vert A_h(r) \cdot \pi_r \, \sigma_0 \Vert_2 \leq \ h^2 \Vert DA \Vert_\infty \, \Vert \ | \cdot | \pi \alpha_* \Vert_2 \, \Vert \Psi_> \Vert_2 \leq C h^4 \varepsilon^{-\nicefrac{1}{2}} \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}.
\end{equation*}
The term involving $\pi_r \cdot A_h(r)$ can be treated similarly when we commute $\pi_r$ to the right. In combination, the above considerations show
\begin{equation*}
\Vert [ \pi_{\mathbf A_h}^2 + W_h(r) - p_r^2 ] \sigma_0 \Vert_2 \leq C h^4 \varepsilon^{-\nicefrac{1}{2}} \Vert \Psi\Vert_{H_{\mathrm{mag}}^1(Q_h)}.
\end{equation*}
All other bounds in the proof of the equivalent of \cite[Lemma~6.2]{DeHaSc2021} in our setting that involve $W_h$ or $A_h$ can be estimated with similar ideas. We therefore omit further details.
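The integral representation of $A_h$ used above is the fundamental theorem of calculus applied componentwise, which is exactly where the assumption $A(0)=0$ enters. The following sketch verifies the identity numerically for a sample vector field; the field $A$, the scaling $A_h(r)=hA(hr)$ (consistent with the $h^2$ prefactor above), and all parameter values are illustrative assumptions, not taken from the text:

```python
import numpy as np

def A(r):
    # sample smooth vector field with A(0) = 0 (illustrative choice)
    return np.array([np.sin(r[0]) * r[1], r[2] ** 2, np.sin(r[1])])

def DA(r):
    # Jacobi matrix of A, computed by hand for the sample field
    return np.array([
        [np.cos(r[0]) * r[1], np.sin(r[0]), 0.0],
        [0.0, 0.0, 2.0 * r[2]],
        [0.0, np.cos(r[1]), 0.0],
    ])

h = 0.1
r = np.array([0.3, -0.7, 1.2])

lhs = h * A(h * r)  # A_h(r) under the assumed scaling A_h(r) = h A(h r)

# h^2 * int_0^1 (DA)(h r t) . r dt, via the composite trapezoidal rule
ts = np.linspace(0.0, 1.0, 2001)
vals = np.array([DA(h * r * t) @ r for t in ts])
dt = ts[1] - ts[0]
rhs = h ** 2 * dt * (0.5 * vals[0] + vals[1:-1].sum(axis=0) + 0.5 * vals[-1])

print(np.allclose(lhs, rhs, atol=1e-7))
```

Since $A(hr)-A(0)=\int_0^1 (DA)(hrt)\cdot (hr)\,\mathrm{d}t$, the two sides agree up to quadrature error.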
\section{{\small {\bf Introduction}}}
\vskip 10pt\noindent
A stock loan is a simple contract in which a client (the borrower), who owns
one share of a stock, borrows a loan of amount $q$ from a bank
(the lender) with the share of stock as collateral. The bank charges an
amount $c$ $(0\leq c\leq q)$ from the client for the service. The
client may regain the stock by repaying principal and interest
(that is, $qe^{\gamma t}$, where $\gamma$ is the continuously compounded
{\sl loan} interest rate) to the bank, or surrender the stock
instead of repaying the loan, at any time. Stock loans are a currently popular
financial product. They create liquidity while overcoming the
barriers to large block sales, such as triggering tax events or
control restrictions on sales of stocks. They also serve as a
hedge against a market downturn: if the stock price goes down, the
client may forfeit the stock and walk away from the loan; if the stock price
goes up, the client keeps all the upside by repaying the principal
and interest. The stock loan therefore has unlimited liability and,
as a result, transfers risk to the bank.
To reduce this risk, the bank introduces a cap $L$
into the stock loan, because the cap adds a further incentive to
exercise early. Such a loan is called a {\sl capped stock loan}
throughout this paper. Since the capped stock loan has limited
liability and lower risk, it is an attractive
instrument for an issuer to market, or for an
investor to hold short, much like a capped American call option in the
financial market (cf.
Broadie and Detemple \cite{BRDE} (1995)). \vskip 10pt \noindent
The wide acceptance and popularity of the capped (uncapped)
stock loan in the marketplace, however, greatly depend on how these
financial products are designed. More
precisely, how to work out the right values of the parameters $(q,
\gamma, c, L)$ is a natural and key problem in the negotiation between
the client and the bank at the initial time. Unfortunately, to the
authors' best knowledge, few results on this topic have
been reported in the existing literature. The main goal of the present
paper is to develop a pure variational inequality method to solve
this kind of problem. \vskip 10pt \noindent
We explain the major difficulty and the main idea of
solving the problem as follows. We formulate the capped
(uncapped) stock loan as a perpetual American option with a possibly negative
interest rate. We denote by $f(x)$ the initial value of this
option. The problem reduces to calculating the function
$f(x)$, from which the ranges of fair values of the parameters
$(q, \gamma, c, L)$ can be determined. According to the conventional variational
inequality method, $f(x)$ must satisfy a variational inequality,
and one computes $f(x)$ from the initial condition $f(0)=0$ and the smooth-fit
principle. However, because of the negative interest rate, the initial
condition $f(0)=0$ does not work; that is, the conventional variational
inequality method cannot solve this option with a negative interest
rate. Moreover, the presence of the cap also complicates the
valuation procedure (if we focus on studying the capped stock loan).
So we need to develop the variational inequality method to deal with
the case of a negative interest rate. Since the payoff process of the capped
(uncapped) stock loan is a Markov process, the optimal stopping
time must be a hitting time (Remark \ref{R31} below), from which we conjecture that
$f'(0+)=0$. In addition, we observe that this condition does work in the case
of a negative interest rate, whereas it is of no use in the case of a
non-negative interest rate. Based on this conjecture and observation, we first
establish explicitly the value $f(x)$ of the capped stock loan by a
new pure variational inequality method. Then we use the expression
of $f(x)$ to work out the ranges of fair values of the parameters
associated with
the loan. Finally, as a special case of our main result, we recover the
conclusion on the uncapped stock loan obtained in Xia and Zhou
\cite{stock} by a pure probability approach.
\vskip10pt\noindent
The paper is organized as follows. In Section
2, we formulate a mathematical model of the capped stock loan, viewing it
as a perpetual American option with a possibly negative
interest rate. In Section 3, we extend the standard variational
inequality method to the case of a negative interest rate and
calculate the initial value and the value process of the
capped (uncapped) stock loan. In Section 4, we work out the ranges of
fair values of the parameters associated with the loan based on
the results of the previous sections. In Section 5, we
present two examples to explain how the cap impacts the initial
value of the uncapped stock loan.
\vskip 10pt\noindent
\setcounter{equation}{0}
\section{{\small {\bf Mathematical model}}}
\vskip 10pt \noindent In this section we consider the standard Black-Scholes
model of a continuous financial market consisting of two assets: a
risky stock $S$ and a risk-less bond $B$. The uncertainty is
described by a standard Brownian motion $\{ \mathbf{W}_{t}, t\geq
0\}$ on a risk-neutral complete probability space $(\Omega, \mathcal
{F}, \{ \mathcal {F}_{t}\}_{t\geq 0}, P)$, where $\{ \mathcal
{F}_{t}\}_{t\geq 0} $ is the filtration generated by $ \mathbf{W}
$, $\mathcal {F}_{0}=\sigma\{ \Omega, \emptyset\}$ and $ \mathcal
{F}=\sigma\{ \bigcup_{t\geq 0}\mathcal {F}_{t} \}$. The risk-less
bond $B$ evolves according to the following dynamic system,
\begin{eqnarray*}
dB_{t}=rB_{t}dt,\ \ r>0,
\end{eqnarray*}
where $ r$ is continuously compounding interest rate. The stock
price $S$ follows a geometric Brownian motion,
\begin{eqnarray} \label{E2.1}
S_{t}=S_{0}e^{(r-\delta-\frac{\sigma^{2}}{2})t+\sigma\mathbf{W}_{t}},
\end{eqnarray}
where $S_{0}$ is initial stock price, $\delta\geq0$ is dividend
yield and $\sigma >0$ is volatility. The discounted payoff process
of the capped stock loan is defined by
\begin{eqnarray*}
Y(t)=e^{-rt}(S_{t}\wedge Le^{\gamma t}-qe^{\gamma t})_{+}.
\end{eqnarray*}
Since $Y(t)\geq 0, \ a.s.$ and $Y(t)>0$ with a positive
probability, to avoid arbitrage, throughout this paper we assume
that
\begin{eqnarray}
S_{0}-q+c>0 .
\end{eqnarray}
According to the theory of American contingent claims, the initial value
of this capped stock loan is
\begin{eqnarray}\label{reward2}
f(x)&=&\sup\limits_{\tau \in \mathcal {T}_{0}}{\bf E}\big
[e^{-r\tau}(S_{\tau}\wedge Le^{\gamma \tau}-qe^{\gamma
\tau})_{+}\big ]\nonumber\\
&=&\sup\limits_{\tau \in \mathcal {T}_{0}}{\bf E}\big [e^{-\tilde
{r}\tau}(\tilde{S}_{\tau}\wedge L-q)_{+}\big ],
\end{eqnarray}
where $\tilde{r}=r-\gamma$, $\tilde{S}_{t}=e^{-\gamma
t}S_{t},\tilde{S}_{0}=S_{0}=x$, $ L>S_{0}$ and $\mathcal {T}_{0}$
denotes all $\{ \mathcal {F}_{t}\}_{t\geq 0}$-stopping times.
The value process of this capped stock loan is
\begin{eqnarray}\label{C24}
V_{t}=\sup\limits_{\tau \in \mathcal {T}_{t}}{\bf E}\big[e^{-
{r}(\tau-t)}(S_{\tau}\wedge Le^{\gamma \tau}-qe^{\gamma
\tau})_{+}|\mathcal{F}_{t}\big],
\end{eqnarray}
i.e.,
\begin{eqnarray*}
e^{-rt}V_{t}=\sup\limits_{\tau \in \mathcal {T}_{t}}{\bf E}\big
[e^{-\tilde {r}\tau}(\tilde{S}_{\tau}\wedge
L-q)_{+}|\mathcal{F}_{t}\big ],
\end{eqnarray*}
where $\mathcal {T}_{t}$ denotes all $\{ \mathcal {F}_{t}\}_{t\geq
0\}$-stopping times $\tau $ with $\tau \geq t$ a.s. Since the fair
values of $q, \gamma,c, L$ should be such that $f(S_0)=S_0-q+c$,
determining the range of the fair values of the parameters $( q, \gamma, c, L)$
reduces to calculating $f(S_0)$. Because $\tilde{r}=r-\gamma\leq
0$, the problem is essentially to calculate the initial value of
a conventional perpetual American call option with a
possibly negative interest rate. We have the following.
\begin{Prop}\label{inequality}
$(x\wedge L-q)_{+}\leq f(x)\leq x\wedge L$ for $x\geq 0$. Moreover, $f(x)$ is continuous and
nondecreasing on $(0,\infty)$.
\end{Prop}
\begin{proof}
Using the same argument as in Xia and Zhou \cite{stock} (2007), we
have $(x\wedge L-q)_{+}\leq f(x)\leq x\wedge L$ for $x\geq0$. That
$f(x)$ is continuous and nondecreasing on $(0,\infty)$ can be proved
by the optional sampling theorem.
\end{proof}
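The change of variables behind (\ref{reward2}) can be checked on simulated paths. The sketch below (all parameter values are illustrative only) simulates one path of the geometric Brownian motion (\ref{E2.1}) and confirms that the discounted payoff $e^{-rt}(S_{t}\wedge Le^{\gamma t}-qe^{\gamma t})_{+}$ coincides pathwise with $e^{-\tilde{r}t}(\tilde{S}_{t}\wedge L-q)_{+}$, where $\tilde{r}=r-\gamma$ and $\tilde{S}_{t}=e^{-\gamma t}S_{t}$:

```python
import numpy as np

rng = np.random.default_rng(0)
r, gamma, delta, sigma = 0.05, 0.12, 0.0, 0.4   # illustrative parameters
S0, q, L = 1.0, 0.9, 2.0
T, n = 5.0, 1000
t = np.linspace(0.0, T, n + 1)

# one path of S_t = S_0 exp((r - delta - sigma^2/2) t + sigma W_t)
dW = rng.normal(0.0, np.sqrt(T / n), n)
W = np.concatenate(([0.0], np.cumsum(dW)))
S = S0 * np.exp((r - delta - 0.5 * sigma ** 2) * t + sigma * W)

# discounted capped payoff in the original variables
Y = np.exp(-r * t) * np.maximum(
    np.minimum(S, L * np.exp(gamma * t)) - q * np.exp(gamma * t), 0.0)

# same payoff after the change of variables S~ = e^{-gamma t} S, r~ = r - gamma
S_tilde = np.exp(-gamma * t) * S
Y_tilde = np.exp(-(r - gamma) * t) * np.maximum(np.minimum(S_tilde, L) - q, 0.0)

print(np.allclose(Y, Y_tilde))   # the two expressions agree pathwise
```

The agreement is an exact algebraic identity, since $e^{\gamma t}$ factors out of the positive part.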
\vskip 10pt \noindent
\setcounter{equation}{0}
\section{{\small {\bf Variational
inequality method in stock loans}}}
\vskip 10pt \noindent
In this section we develop a pure variational
inequality method for the case of a
negative interest rate to establish explicitly the value $ f(x)$
of the capped (uncapped) stock loan. The key point is to replace
the initial condition $f(0)=0$ of the conventional case with a new
one, $f'(0+)=0$. The detailed observation is given in {\bf
Remark} \ref{R31} below. We note that the condition $f'(0+)=0$ also holds
for the conventional perpetual American call option, but there it is of no use in
determining the free constants. We now start with the following.
\begin{Prop} \label{Prop1}
Assume that $\delta>0$ and $\gamma-r+\delta\geq 0$ or $\delta=0$ and
$\gamma-r>\frac{\sigma^{2}}{2}$. Let $\lambda_1 $, $\lambda_2$ and $\mu $ be defined by
\begin{eqnarray}\label{e35}
\left\{
\begin{array}{l l l}
\lambda_{1}=\frac{-\mu+\sqrt{\mu^{2}-2(\gamma-r)}} {\sigma},\
\lambda_{2}=\frac{-\mu-\sqrt{\mu^{2}-2(\gamma-r)}}{\sigma},\\
\mu=-(\frac{\sigma}{2}+\frac{\gamma-r+\delta}{\sigma}).
\end{array}
\right.
\end{eqnarray}
\vskip 10pt\noindent
(i) If $L\geq b$, $h(x)\in \mathcal
{C}\big([0,\infty)\big )\cap\mathcal
{C}^{1}\big((0,\infty)\setminus \{L\}\big)\cap \mathcal
{C}^{2}\big((0,\infty)\setminus \{b,L\}\big )$ and $h$ solves the
variational inequality
\begin{eqnarray}\label{e31}
\left\{
\begin{array}{l l l l l}
\frac{1}{2}\sigma^{2}x^{2}h^{''}+(\tilde{r}-\delta)xh^{'}
-\tilde{r}h=0, & x\in [0,b)\cup (L, \infty),\\
h(x)=x-q,\quad &x\in [b, L],\\
h(x)>(x-q)_{+},& x<b,\\
h(x)\leq x, & x \geq 0,\\
h'(0+)=0,\ h(b)=b-q,h^{'}(b)=1,
\end{array}
\right.
\end{eqnarray}
then
\begin{eqnarray}\label{solution21}
h(x)=\left\{
\begin{array}{l l l}
(b^{}-q)(\frac{x}{b^{}})^{\lambda_{1}},&0< x\leq b^{},\\
x-q ,& b^{}<x<L,\\
(L-q)(\frac{x}{L})^{\lambda_{2}},&x\geq L.
\end{array}
\right.
\end{eqnarray}
Moreover,
\begin{eqnarray}\label{b}
b=\frac{q\lambda_1}{\lambda_1-1}.
\end{eqnarray}
(ii) If $L< b^{}$, $h(x)\in \mathcal
{C}\big([0,\infty)\big )\cap\mathcal
{C}^{1}\big((0,\infty)\setminus \{L\}\big)\cap \mathcal
{C}^{2}\big((0,\infty)\setminus \{L\}\big )$ and solves the
variational inequality
\begin{eqnarray}\label{equivalent22}
\left\{
\begin{array}{l l l l}
\frac{1}{2}\sigma^{2}x^{2}h^{''}+(\tilde{r}-\delta)xh^{'}-\tilde{r}h=0,
& x \in [0,L) \cup (L, \infty ),\\
h(x)>(x-q)_{+}, & x<L,\\
h(x)\leq x, & x \geq 0,\\
h'(0+)=0, h(L)=L-q,
\end{array}
\right.
\end{eqnarray}
then
\begin{eqnarray}\label{solution22}
h(x)=\left\{
\begin{array}{l l l}
(L-q)(\frac{x}{L})^{\lambda_{1}},&0<x<L,\\
L-q ,& x=L,\\
(L-q)(\frac{x}{L})^{\lambda_{2}},&x> L.
\end{array}
\right.
\end{eqnarray}
\end{Prop}
\begin{proof} We only deal with part (i); part
(ii) can be treated similarly.
Solving the second-order differential
equation in (\ref{e31}), we get
\begin{eqnarray}\label{e39}
h(x)=\left\{
\begin{array}{l l l}
C_1x^{\lambda_1}+ C_2x^{\lambda_2}, & x \in [0, b),\\
x-q, & x\in [b, L],\\
C_3x^{\lambda_1}+ C_4x^{\lambda_2},& x \in (L, \infty ),
\end{array}
\right.
\end{eqnarray}
where $C_i$, $i=1,2,3,4$, and $b$ are free constants to be
determined, $ \lambda_1$ and $\lambda_2 $ are defined by
(\ref{e35}). \vskip 5pt \noindent Note that { \sl if $\delta >0$
then $\lambda_1 >1 > \lambda_2
>0 $, and if $\delta=0 $ and $\gamma-r> \frac{\sigma^2}{2} $ then
$\lambda_1=\frac{2(\gamma-r )}{\sigma^2}>1= \lambda_2 $}. Using
$h'(0+)=0$ and $h(x)\leq x$, we have $C_2= C_3=0 $. Substituting
$C_2=C_3=0$ into (\ref{e39}) and applying the smooth-fit conditions
$h(b)=b-q$ and $ h'(b)=1 $, together with $h(L)=L-q$,
in the resulting expression of $h(x)$ yields
$$b-q=C_1b^{\lambda_1}, \quad L-q= C_4 L^{\lambda_2},\quad \lambda_1C_1
b^{\lambda_1-1}=1.$$ \vskip 5pt \noindent Solving this system for
$b$, $ C_1$ and $C_4$, we get $b=\frac{q\lambda_1}{\lambda_1-1}$, $
C_1=\frac{b-q}{b^{\lambda_1} }$ and $
C_4=\frac{L-q}{L^{\lambda_2}}$. Hence the equations
(\ref{solution21}) and (\ref{b}) follow. Since $\lambda_1 >1 \geq
\lambda_2 >0 $ , the function $h(x)$ defined by (\ref{solution21})
obviously belongs to $\mathcal {C}\big([0,\infty)\big )\cap\mathcal
{C}^{1}\big((0,\infty)\setminus \{L\}\big)\cap \mathcal
{C}^{2}\big((0,\infty)\setminus \{b^{},L\}\big ).$ Thus we complete
the proof.
\end{proof}
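The algebra above can be verified numerically. In the sketch below (the parameter values are illustrative only), $\lambda_1$ and $\lambda_2$ from (\ref{e35}) are checked to be the roots of the characteristic equation $\frac{1}{2}\sigma^{2}\lambda(\lambda-1)+(\tilde{r}-\delta)\lambda-\tilde{r}=0$ of the ODE in (\ref{e31}), and the continuity and smooth-fit conditions at $b$ from (\ref{b}) are confirmed:

```python
import numpy as np

# illustrative parameters satisfying delta > 0 and gamma - r + delta >= 0
r, gamma, delta, sigma, q = 0.05, 0.15, 0.02, 0.3, 1.0
r_tilde = r - gamma                       # r~ <= 0 in the stock-loan case

mu = -(sigma / 2 + (gamma - r + delta) / sigma)
disc = np.sqrt(mu ** 2 - 2 * (gamma - r))
lam1 = (-mu + disc) / sigma
lam2 = (-mu - disc) / sigma

def char(lam):
    # characteristic polynomial of (1/2) s^2 x^2 h'' + (r~ - d) x h' - r~ h = 0
    return 0.5 * sigma ** 2 * lam * (lam - 1) + (r_tilde - delta) * lam - r_tilde

print(abs(char(lam1)), abs(char(lam2)))   # both close to machine zero
print(lam1 > 1 > lam2 > 0)                # ordering used in the proof

b = q * lam1 / (lam1 - 1)
C1 = (b - q) / b ** lam1                  # coefficient of the continuation branch
print(abs(C1 * b ** lam1 - (b - q)))          # continuity: h(b-) = b - q
print(abs(lam1 * C1 * b ** (lam1 - 1) - 1.0)) # smooth fit: h'(b-) = 1
```

The last check is exact: $\lambda_1 C_1 b^{\lambda_1-1}=\lambda_1(b-q)/b=1$ by the choice of $b$ in (\ref{b}).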
\vskip 2pt \noindent
\begin{Remark}\label{R31}
From (\ref{e35}) we know that if $\widetilde{r}=r-\gamma>0 $ (the
conventional case), then $\lambda_1\geq 1>0$ and $\lambda_2 <0 $.
Consequently, we see from (\ref{e39}) that the term
$C_2x^{\lambda_2}$ is rejected by the initial condition
$h(0)=0$, i.e., $C_2=0$. So, in the same way as in the proof of
Proposition \ref{Prop1} above, we can determine the other free constants
$C_1$, $C_3$, $ C_4$ and $b$. \vskip 5pt \noindent If
$\widetilde{r}=r-\gamma\leq 0 $ (the present case), then $\lambda_1>
1\geq \lambda_2 >0 $. In contrast to the conventional case, we
cannot deduce $C_2=0$ from $h(0)=0$. This is the major difficulty
of the variational
inequality method in the case of stock loans. However, we note that
in the present case $h'(x)=\lambda_1C_1x^{\widetilde{\lambda_1}} +
\lambda_2C_2x^{\widetilde{\lambda_2}}$ with
$\widetilde{\lambda_1}=\lambda_1-1 > 0 $ and
$\widetilde{\lambda_2}=\lambda_2-1 < 0 $. So if $h'(0+)=0$, the
present case can be reduced to the conventional one. On the
other hand, the Markov property implies that the optimal stopping
time must be a hitting time. In view of this fact, we conjecture
that $h'(0+)=0$ is correct. The proof of this conjecture is
given in Proposition \ref{Prop2} below. Compared with the
conventional case, the condition $h'(0+)=0$ plays the key role in
developing the pure variational
inequality method for
stock loans.
\end{Remark}
\vskip 2pt \noindent
\begin{Prop}\label{Prop2}
Assume that $\delta=0$ and $\gamma-r>\frac{\sigma^{2}}{2}$, and let $
g(x)={\bf E}\big [e^{-\tilde{r}(\tau\wedge \varsigma )}
(\tilde{S}_{\tau\wedge \varsigma }-q)_{+}I_{\{\varsigma <\infty\}}
\big ] $ for $b\geq q$, $ L\geq 0$ and stopping times $\tau $ and
$\varsigma$. Then $g'(0+)=0$.
\end{Prop}
\begin{proof} Using $g(0)=0$, we have for $x>0$
\begin{eqnarray}\label{e37}
0&\leq &\frac{g(x)-g(0)}{x-0}\nonumber\\
&=& {\bf E}\Big\{ \Big (\exp\{-\frac{1}{2}
\sigma^2\tau\wedge \varsigma+\sigma\mathbf{W}_{\tau\wedge \varsigma
} \}\nonumber\\
&& \qquad -\frac{q}{x}\exp\{ (\gamma-r )\tau\wedge \varsigma\}
\Big )_+I_{\{ \varsigma< +\infty\}}
\Big \}\nonumber\\
&\leq & {\bf E}\Big\{ \Big (\exp\{-\frac{1}{2}
\sigma^2\tau\wedge \varsigma+\sigma\mathbf{W}_{\tau\wedge \varsigma
} \}-\frac{q}{x} \Big )_+I_{\{ \varsigma< +\infty\}} \Big \}.
\end{eqnarray}
Note that $\big\{\exp\{-\frac{1}{2}\sigma^2t+\sigma \mathbf{W}_t
\}, t\geq 0 \big\} $ is a martingale w.r.t. $\{ \mathcal
{F}_{t}\}_{t\geq 0} $. It follows from the optional sampling theorem that
\begin{eqnarray} \label{C1}
{\bf E}\Big\{ \exp\{-\frac{1}{2}
\sigma^2\tau\wedge \varsigma+\sigma\mathbf{W}_{\tau\wedge \varsigma
}\} \Big\}=1.
\end{eqnarray}
To let $x\rightarrow 0$ in (\ref{e37}), we need to dominate the
right-hand side. Because
\begin{eqnarray*} \label{C2}
\Big (\exp\{-\frac{1}{2}
\sigma^2\tau\wedge \varsigma+\sigma\mathbf{W}_{\tau\wedge \varsigma
} \}-\frac{q}{x} \Big )_+I_{\{ \varsigma< +\infty\}}\leq
\exp\{-\frac{1}{2}
\sigma^2\tau\wedge \varsigma+\sigma\mathbf{W}_{\tau\wedge \varsigma } \},\nonumber\\
\end{eqnarray*}
\begin{eqnarray*}\label{C3}
\lim_{x\rightarrow 0+}\Big (\exp\{-\frac{1}{2}
\sigma^2\tau\wedge \varsigma+\sigma\mathbf{W}_{\tau\wedge \varsigma
} \}-\frac{q}{x} \Big )_+I_{\{ \varsigma< +\infty\}}=0 \quad a.s.,
\end{eqnarray*}
and (\ref{C1}), the dominated convergence theorem allows us to let
$x\rightarrow 0 $ in (\ref{e37}) and obtain
\begin{eqnarray*}
0\leq \lim_{x\rightarrow 0+}\frac{g(x)-g(0)}{x-0}
\leq \lim_{x\rightarrow 0+} {\bf E}\Big\{ \big (\exp\{-\frac{1}{2}
\sigma^2\tau\wedge \varsigma+\sigma\mathbf{W}_{\tau\wedge \varsigma
} \}-\frac{q}{x} \big )_+I_{\{ \varsigma< +\infty\}} \Big \}=0.
\end{eqnarray*}
Thus $g^{'}(0+)=0$.
\vskip 5pt\noindent
\end{proof}
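For the explicit solution, the boundary behavior established in Proposition \ref{Prop2} can also be seen directly: on the continuation region $h(x)=(b-q)(x/b)^{\lambda_{1}}$ with $\lambda_{1}>1$, so the difference quotient $(h(x)-h(0))/x=(b-q)b^{-\lambda_1}x^{\lambda_1-1}$ tends to $0$ as $x\to 0+$. A quick numerical illustration (parameter values illustrative only, with $\delta=0$ and $\gamma-r>\sigma^{2}/2$):

```python
sigma, q = 0.4, 1.0
gamma_minus_r = 0.2                      # satisfies gamma - r > sigma^2/2 = 0.08
lam1 = 2 * gamma_minus_r / sigma ** 2    # equals 2.5 here (delta = 0 case)
b = q * lam1 / (lam1 - 1)

def h(x):
    # continuation branch of the explicit solution, 0 < x <= b
    return (b - q) * (x / b) ** lam1

# difference quotients (h(x) - h(0)) / x shrink to 0 since lam1 > 1
quotients = [h(x) / x for x in (1e-1, 1e-3, 1e-5)]
print(quotients)
```

Each quotient equals $(b-q)b^{-\lambda_1}x^{\lambda_1-1}$, which decreases like $x^{3/2}$ with these parameters.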
\vskip 5pt\noindent In view of Proposition \ref{Prop1} and the
variational
inequality method, $\tau_b$ must be the optimal stopping time, where $b$ is
defined by (\ref{b}). We now verify this fact.
\vskip 5pt\noindent
\begin{Prop}\label{Prop3}
Let $b$ and $h(x) $ be defined by (\ref{b}) and by (\ref{solution21}),
(\ref{solution22}), respectively, and set $ E(x)\equiv {\bf E}\big
[e^{-\tilde{r}(\tau_{b}\wedge\tau_{L})}
(\tilde{S}_{\tau_{b}\wedge\tau_{L}}\wedge
L-q)_{+}I_{\{\tau_{L}<\infty\}} \big ]$ for $L\geq 0 $, where
$x=S_0$, $\tau_{b}=\inf{\{t\geq 0:e^{-\gamma t}S_{t}\geq b\}}$
and $\tau_{L}=\inf{\{t\geq 0:e^{-\gamma t}S_{t}=L\}}$. Then
$E(x)=h(x)$.
\end{Prop}
\begin{proof}
Using formula 2.20.3 in Section 9, Part II of \cite{Handbook} (2000),
we have
\begin{eqnarray}\label{C5}
{\bf E}\big
[e^{-\tilde{r}(\tau_{b}\wedge\tau_{L})}
I_{\{\tau_{b}\wedge\tau_{L}<\infty\}}\big
]=\left\{
\begin{array}{l l}
(\frac{x}{b})^{\lambda_{1}},x<b,\\
(\frac{x}{L})^{\lambda_{2}},x\geq L.
\end{array}
\right.
\end{eqnarray}
Based on (\ref{C5}), the proof of Proposition \ref{Prop3} will
be accomplished in two cases, namely, $L>b$ and $L\leq b$. \vskip
5pt\noindent {\bf Case of $L>b$.} We distinguish three
subcases, i.e., $x<b$, $b\leq x\leq L$ and $ x> L $. \vskip
5pt\noindent If $x<b$ then $\tau_{b}\leq\tau_{L}$. Using (\ref{C5}),
we calculate
\begin{eqnarray}\label{C6}
E(x)&=&{\bf E}\big
[e^{-\tilde{r}(\tau_{b}\wedge\tau_{L})}
(\tilde{S}_{\tau_{b}\wedge\tau_{L}}\wedge
L-q)_{+}I_{\{\tau_{L}<\infty\}}
\big ]\nonumber\\
&=&{\bf E}\big
[e^{-\tilde{r}(\tau_{b}\wedge\tau_{L})}
(\tilde{S}_{\tau_{b}}\wedge
L-q)_{+}I_{\{\tau_{L}<\infty\}}
\big ]
\nonumber\\
&=&(b-q)(\frac{x}{b})^{\lambda_{1}}.
\end{eqnarray}
If $L\geq x\geq b$ then $\tau_{b}=0$ and so
\begin{eqnarray}\label{C7}
E(x)&=&{\bf E}\big [e^{-\tilde{r}(\tau_{b}\wedge\tau_{L})}
(\tilde{S}_{\tau_{b}\wedge\tau_{L}}\wedge
L-q)_{+}I_{\{\tau_{L}<\infty\}}
\big ]\nonumber\\
&=&{\bf E}\big [(\tilde{S}_{0}\wedge L-q)_{+}\big ]=
(x-q).
\end{eqnarray}
If $ x> L $ then $\tilde{S}_{\tau_{b}\wedge\tau_{L}}\geq L $. By
using (\ref{C5}),
\begin{eqnarray}\label{C8}
E(x)&=&{\bf E}\big [e^{-\tilde{r}(\tau_{b}\wedge\tau_{L})}
(\tilde{S}_{\tau_{b}\wedge\tau_{L}}\wedge
L-q)_{+}I_{\{\tau_{L}<\infty\}}
\big ]\nonumber\\
&=&(L-q){\bf E}\big [e^{-\tilde{r}(\tau_{b}\wedge\tau_{L})}
I_{\{\tau_{L}<\infty\}}
\big ]\nonumber\\
&=&(L-q)(\frac{x}{L})^{\lambda_{2}}.
\end{eqnarray}
Comparing the equations (\ref{C6}),
(\ref{C7}) and (\ref{C8}) with (\ref{solution21}) yields $ E(x)=h(x)$. \vskip
5pt\noindent {\bf Case of $b\geq L$.} We distinguish two
subcases, i.e., $x<L$ and $x\geq L $. \vskip 5pt\noindent If $x<L$
then $\tau_b \geq \tau_L $. Using (\ref{C5}), we have
\begin{eqnarray}\label{C9}
E(x)&=&{\bf E}\big
[e^{-\tilde{r}(\tau_{b}\wedge\tau_{L})}
(\tilde{S}_{\tau_{b}\wedge\tau_{L}}\wedge
L-q)_{+}I_{\{\tau_{L}<\infty\}}
\big ]\nonumber\\
&=&{\bf E}\big
[e^{-\tilde{r}(\tau_{b}\wedge\tau_{L})}
(\tilde{S}_{\tau_{L}}\wedge
L-q)_{+}I_{\{\tau_{L}<\infty\}}
\big ]\nonumber\\
&=&
(L-q)(\frac{x}{L})^{\lambda_{1}}.
\end{eqnarray}
If $ x\geq L $ then $\tilde{S}_{\tau_{b}\wedge\tau_{L}}\wedge
L=L $. So by using (\ref{C5})
\begin{eqnarray}\label{C10}
E(x)&=&{\bf E}\big [e^{-\tilde{r}(\tau_{b}\wedge\tau_{L})}
(\tilde{S}_{\tau_{b}\wedge\tau_{L}}\wedge
L-q)_{+}I_{\{\tau_{L}<\infty\}}
\big ]\nonumber\\
&=&(L-q){\bf E}\big [e^{-\tilde{r}(\tau_{b}\wedge\tau_{L})}
I_{\{\tau_{L}<\infty\}}
\big ]\nonumber\\
&=&(L-q)(\frac{x}{L})^{\lambda_{2}} .
\end{eqnarray}
Comparing the equations (\ref{C9}) and (\ref{C10}) with (\ref{solution22}), we see that
$E(x)=h(x)$.
\end{proof}
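The first case (\ref{C6}) can also be tested by Monte Carlo without discretizing paths: $\log\tilde{S}_{t}$ is a Brownian motion with drift $m=r-\gamma-\sigma^{2}/2<0$, the probability that it ever climbs by $a=\log(b/x)$ is $e^{2ma/\sigma^{2}}$, and conditionally on hitting, $\tau_{b}$ has an inverse Gaussian law. The sketch below (illustrative parameters with $\delta=0$, chosen so that the simulated functional has finite variance) compares the sample mean of $(b-q)e^{(\gamma-r)\tau_{b}}I_{\{\tau_{b}<\infty\}}$ with the closed form $(b-q)(x/b)^{\lambda_{1}}$:

```python
import numpy as np

rng = np.random.default_rng(1)
# illustrative parameters with delta = 0 and gamma - r > sigma^2/2,
# chosen so that the simulated functional also has finite variance
r, gamma, sigma, q = 0.0, 0.75, 0.5, 1.0
lam1 = 2 * (gamma - r) / sigma ** 2      # = 6 in the delta = 0 case
b = q * lam1 / (lam1 - 1)                # = 1.2
x = 1.0                                  # start in the continuation region x < b

a = np.log(b / x)                        # distance to the boundary in log scale
m = r - gamma - 0.5 * sigma ** 2         # drift of log S~, negative here

n = 200_000
p_hit = np.exp(2 * m * a / sigma ** 2)   # P(tau_b < infinity)
hit = rng.random(n) < p_hit
# conditionally on hitting, tau_b ~ InverseGaussian(a/|m|, a^2/sigma^2)
tau = rng.wald(a / abs(m), a ** 2 / sigma ** 2, size=n)
payoff = np.where(hit, (b - q) * np.exp((gamma - r) * tau), 0.0)

mc = payoff.mean()
exact = (b - q) * (x / b) ** lam1
print(mc, exact)   # the Monte Carlo mean reproduces the closed form
```

The exact sampling of the hitting time avoids the discretization bias that an Euler scheme for $\tilde{S}$ would introduce near the boundary.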
\vskip 5pt\noindent We now return to the main result of this section. It
states that the initial value of the capped stock loan is exactly
$h(x)$, defined by (\ref{solution21}) and (\ref{solution22})
respectively, where $x=S_0$. \vskip 5pt\noindent
\begin{Them}\label{main2}
Assume that $\delta>0$ and $\gamma-r+\delta\geq 0$, or $\delta=0$ and
$\gamma-r>\frac{\sigma^{2}}{2}$. Let $h(x)$ be defined by
(\ref{solution21}) and (\ref{solution22}), respectively, and let
$f(x)$ be defined by (\ref{reward2}). Then $ f(x)=h(x)$ for $x\geq 0$, and
$\tau_{b}\wedge\tau_{L}$ is the optimal stopping time.
\end{Them}
\begin{proof} In view of Proposition \ref{Prop3},
$$f(x)\equiv\sup\limits_{\tau \in \mathcal {T}_{0}}{\bf E}\big
[e^{-\tilde {r}\tau}(\tilde{S}_{\tau}\wedge
L-q)_{+}I_{\{\tau_{L}<\infty\} }\big ]\geq h(x).$$ Therefore we only need to prove that for any stopping
time $\tau$
\begin{eqnarray} \label{Appendx1}
h(x)\geq {\bf E}\big [e^{-\tilde{r}(\tau\wedge\tau^{}_{L})}(\tilde
{S}_{\tau\wedge\tau^{}_{L}}\wedge
L-q)_{+}I_{\{\tau^{}_{L}<\infty\}}\big ].
\end{eqnarray}
From Proposition \ref{Prop1} and the expression of $h(x)$ as well as
$\lambda_1 >1 \geq \lambda_2 >0 $, we know that
$h(x)\in \mathcal
{C}\big([0,\infty)\big )\cap\mathcal
{C}^{1}\big((0,\infty)\setminus \{L\}\big)\cap \mathcal
{C}^{2}\big((0,\infty)\setminus \{b^{},L\}\big )$ or
$h(x)\in \mathcal
{C}\big([0,\infty)\big )\cap\mathcal {C}^{1}\big((0,\infty)\setminus
\{L\}\big)\cap \mathcal {C}^{2}\big((0,\infty)\setminus \{L\}\big )$
and $0< h''(b\pm)<+\infty$, $ 0 < h'(L\pm)< +\infty $ and $ 0 <
h''(L\pm)< +\infty $. As a result, the generalized
It\^{o} formula (cf. Karatzas and Shreve \cite{Brownian} (1991),
Problem 6.24, p.~215), together with the inequalities (\ref{e31}) and (\ref{equivalent22}),
yields
\begin{eqnarray}\label{Appendx2}
\int_{0}^{t}d(e^{-\tilde{r}s}h(\tilde{S}_{s}))
&=&\int_{0}^{t}e^{-\tilde{r}s}\tilde{S}_{s}h^{'}(\tilde{S}_{s})\sigma
d\mathcal{W}(s)\nonumber\\
&&+\int_{0}^{t}e^{-\tilde{r}s}(\tilde{r}q-\delta \tilde{S}_{s}
) I_{\{L>\tilde{S}_{s}\geq b\}}ds\nonumber\\
&&+[h^{'}(L+)-h^{'}(L-)]\int_{0}^{
t}e^{-\tilde{r}s}dL_{s}(L)\nonumber\\
&\equiv & \mathcal{M}(t) + A_1(t) +\Lambda(t),
\end{eqnarray}
where $L_{t}(L)\geq0$ is the local time of $\tilde{S}_{t} $ at the
point $ L$, $
\mathcal{M}(t)=\int_{0}^{t}e^{-\tilde{r}s}\tilde{S}_{s}h^{'}(\tilde{S}_{s})\sigma
d\mathcal{W}(s)$ is a martingale,
\begin{eqnarray*}
A_1(t)&=&\int_{0}^{t}e^{-\tilde{r}s}(\tilde{r}q-\delta \tilde{S}_{s}
) I_{\{L>\tilde{S}_{s}\geq b\}}ds \qquad \mbox{and}\\
\Lambda(t)&=& \ \ [h^{'}(L+)-h^{'}(L-)]\int_{0}^{
t}e^{-\tilde{r}s}dL_{s}(L).
\end{eqnarray*}
Define $T_n=t\wedge \tau \wedge \tau_L -\frac{1}{n} $ for every $
t\geq 0 $, $ n\geq 1 $ and stopping time $ \tau $. Then,
using the definition of $\tau_L $ and the equality $L_t(L)=\int^t_0
I_{\{s: \tilde{S}_s=L\}}dL_s(L)$, we have $ \Lambda(T_n)=0$.
Moreover, since $ \tilde{r}\leq 0$ and $\delta\geq 0$, we see that
$A_1(T_n)\leq 0$. So by (\ref{Appendx2})
\begin{eqnarray*}\label{Appendx3}
h(\tilde{S}_{0})&=&{\bf E}\big [e^{-\tilde{r}T_n}h(\tilde{S}_{T_n})
I_{\{\tau^{}_{L}<\infty\}}\big ]+{\bf E} \big
[-A_1(T_n)I_{\{\tau^{}_{L}<\infty\}}\big ]
\nonumber\\
&\geq &{\bf E}\big [e^{-\tilde{r}T_n}h(\tilde{S}_{T_n})
I_{\{\tau^{}_{L}<\infty\}}\big ]\nonumber\\
&\geq&{\bf E}\big [e^{-\tilde{r}T_n}(\tilde{S}_{T_n}\wedge
L-q)_{+}I_{\{\tau^{}_{L}<\infty\}}\big ].
\end{eqnarray*}
Note that
\begin{eqnarray*} e^{-\tilde{r}T_n}(\tilde{S}_{T_n}\wedge
L-q)_{+}I_{\{\tau^{}_{L}<\infty\}}\leq\sup\limits_{0\leq
t<\infty}e^{-\tilde{r}t}(\tilde{S}_{t}-q)_{+}
\end{eqnarray*}
and
$${\bf E}\big [\sup\limits_{0\leq
t<\infty}e^{-\tilde{r}t}(\tilde{S}_{t}-q)_{+}\big]<\infty.$$
The dominated convergence theorem now implies
\begin{eqnarray*} \label{Appendx4}
h(x)=h(\tilde{S}_{0})&\geq& \lim_{n\rightarrow +\infty}
{\bf E}\big [e^{-\tilde{r}T_n}(\tilde{S}_{T_n}\wedge
L-q)_{+}I_{\{\tau^{}_{L}<\infty\}}\big ]
\nonumber\\&=&{\bf E}\big
[e^{-\tilde{r}(\tau\wedge\tau^{}_{L})}(\tilde{S}_{\tau\wedge\tau^{}_{L}}\wedge
L-q)_{+}I_{\{\tau^{}_{L}<\infty\}} \big ].
\end{eqnarray*}
Thus $h(x)=f(x)$ and
$\tau_{b}\wedge\tau^{}_{L}$ is the optimal stopping time.
\end{proof}
\vskip 5pt \noindent As a direct consequence of Theorem \ref{main2}
with $ L=+\infty$, we obtain the following result, which was proved by a
purely probabilistic approach in Xia and Zhou \cite{stock}. \vskip 5pt
\noindent
\begin{CR}
Let $f(x)=\sup\limits_{\tau \in \mathcal {T}_{0}}{\bf E}\big [e^{-\tilde
{r}\tau}(\tilde{S}_{\tau}-q)_{+}\big ]$, $b$ and $ \lambda_1 $ be defined by
(\ref{b}) and (\ref{e35}) respectively. Then
\begin{eqnarray}
f(x)=\left\{
\begin{array}{l l l}
(b^{}-q)(\frac{x}{b^{}})^{\lambda_{1}},&0< x\leq b^{},\\
x-q ,& x> b
\end{array}
\right.
\end{eqnarray}
and $ \tau_b $ is the optimal stopping time.
\end{CR}
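The closed-form value in the corollary is straightforward to evaluate numerically. The sketch below takes $b$, $\lambda_1$ and $q$ as given inputs (their defining equations (\ref{b}) and (\ref{e35}) are not reproduced in this excerpt) and illustrates the continuity of $f$ at the exercise boundary $x=b$.

```python
def stock_loan_value(x, b, lam1, q):
    """Closed-form value from the corollary:
    f(x) = (b-q)*(x/b)**lam1 for 0 < x <= b, and f(x) = x - q for x > b."""
    if x <= b:
        return (b - q) * (x / b) ** lam1
    return x - q
```

By value matching, both branches agree at $x=b$, where $f(b)=b-q$.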
\vskip 5pt \noindent
\begin{Remark}Comparing with the value of a capped American option with
non-negative interest rate (cf. Broadie and Detemple \cite{BRDE} (1995)),
we see that the value of the capped stock loan treated in the present paper
behaves differently.
If the stock price is larger than $L$, the capped American
option with non-negative interest rate should be exercised
immediately, whereas the capped stock loan treated in the present paper
does not share this property.
\end{Remark}
\vskip 5pt \noindent
\begin{Remark} Let $V_t$ be the value process for the capped stock loan defined by (\ref{C24}). Then, using the Markov property of the process $S_{t}$
(cf. Karatzas and Shreve \cite{Methods} (1998)), we easily get
\begin{eqnarray*}
e^{-\tilde{r}t}f(\tilde{S}_{t})&=&\sup\limits_{\tau \in \mathcal
{T}_{t}}{\bf
E}\big[e^{-\tilde{r}(\tau\wedge\tau_{L})}(\tilde{S}_{\tau\wedge\tau_{L}}\wedge
L-q)_{+}I_{\{\tau_{L}<\infty\}}|\mathcal{F}_{t}\big]\\
&=&e^{-rt}V_{t}
\end{eqnarray*} and $V_{t}=e^{\gamma t}f(e^{-\gamma t}S_{t})$.
\end{Remark}
\vskip 5pt \noindent
\setcounter{equation}{0}
\section{{\small {\bf Ranges of fair values
of parameters }}} \vskip 10pt \noindent In this section we
work out the ranges of fair values of the
parameters $(q, \gamma, c, L)$
of the capped stock loan treated in this paper, based on
Theorem \ref{main2} and the equality $f(S_0)=S_0
-q+c$. In view of Proposition \ref{Prop1}, there are two cases to be
dealt with: $ L>b$ and $ L\leq b$. We only consider the first case;
the second can be
treated similarly. We shall distinguish three subcases, namely
$S_{0}\geq L$, $L>S_{0}\geq b$ and $0< S_{0}\leq b$. \vskip 5pt
\noindent {\bf Case of $S_{0}\geq L$}. By (\ref{solution21}) and
$f(S_0)=S_0 -q+c$, we have
$f(S_0)=(L-q)(\frac{S_{0}}{L})^{\lambda_{2}}= S_0 -q+c$. So the
parameters $\gamma$, $q$, $c$ and $L $ must satisfy
$c=(L-q)(\frac{S_{0}}{L})^{\lambda_{2}} +q -S_0 \leq 0$. This means
that the bank seeks the maximal profit
because the initial stock price is very high: the bank
obtains the stock by paying the amount $c$ to
the client, and the client hands the stock over to the bank in exchange for
$c$. Therefore both the client and the bank have incentives to enter
the transaction. In this case, $\tau^{}_{L}$ is the optimal stopping time
for the client to terminate the capped stock loan. \vskip 5pt
\noindent {\bf Case of $L>S_{0}\geq b $}. By (\ref{solution21})
and $f(S_0)=S_0 -q+c$, we have
$f(S_0)=S_{0}-q=S_{0}-q+c$, so $c=0$, i.e., the client has no
incentive to enter the transaction. In this case $\tau_{b^{}}=0$ is
the optimal stopping time.
\vskip 5pt \noindent
{\bf Case of $0< S_{0}\leq b $}.\quad In this case
both the client and the bank have incentives to enter the transaction:
the bank because there is a dividend payment, and the
client because the initial stock price is neither very high nor too
low. By Theorem \ref{main2}, the initial value is $f(S_{0})$. In
this case the bank charges an amount $c=f(S_{0})-S_{0}+q$ from the
client for providing the service. So the fair values of the
parameters $\gamma$, $q$ and $c$ must satisfy
\begin{eqnarray*}\label{value_determine}
S_{0}-q+c= (b^{}-q)(\frac{S_{0}}{b^{}})^{\lambda_{1}}
\end{eqnarray*}
and the optimal stopping time is $\tau_{b^{}}$. \vskip 10pt
\noindent
\setcounter{equation}{0}
\section{{\small {\bf Examples }}}
\vskip 10pt \noindent In this section we will give two examples of
capped stock loan as follows.
\begin{EX}\label{E1}
Let the risk free rate $r=0.05$, the loan rate $\gamma=0.07$, the
volatility $\sigma=0.15$, the dividend $\delta=0.01$, the principal
$q=100$ and the cap $L=240$. Then $b=147.8< L$. We compute the
initial value $f(x)$ of the capped stock loan, shown in
Figure 1. The graph clearly shows that the cap greatly impacts
the initial value when the stock price is very large. The cap
reduces the value of the stock loan, and the client can acquire more
liquidity.
\begin{figure}[H]
\includegraphics[height=9cm]{cap240.eps}
\caption{$\gamma=0.07,r=0.05,\sigma=0.15,\delta=0.01,
q=100,cap=240$ }\label{fa}
\end{figure}
\end{EX}
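For intuition, the values $b=147.8$ and $\lambda_1$ in Example \ref{E1} can be reproduced under the standard characterisation for this kind of problem: $\lambda_1$ as the larger root of $\frac{1}{2}\sigma^2\lambda(\lambda-1)+(\tilde{r}-\delta)\lambda-\tilde{r}=0$ with $\tilde{r}=r-\gamma$, and $b=q\lambda_1/(\lambda_1-1)$ by smooth pasting. This is an assumption of the sketch, since equations (\ref{e35}) and (\ref{b}) are not reproduced in this excerpt.

```python
import math

# Example 1 parameters
r, gamma, sigma, delta, q = 0.05, 0.07, 0.15, 0.01, 100.0
r_tilde = r - gamma  # negative effective rate, as required by the theorem

# lambda_1: larger root of 0.5*sigma^2*lam*(lam-1) + (r_tilde - delta)*lam - r_tilde = 0
a = 0.5 * sigma ** 2
bq = r_tilde - delta - a
cq = -r_tilde
lam1 = (-bq + math.sqrt(bq ** 2 - 4 * a * cq)) / (2 * a)

# optimal exercise boundary from smooth pasting
b = q * lam1 / (lam1 - 1.0)
print(round(b, 1))  # prints 147.8
```

With Example 1's parameters this recovers $b\approx 147.8$, matching the value quoted in the text.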
\begin{EX}
Let $r$, $ \gamma$, $\sigma$, $\delta $ and $q$ be the same as in
{\bf Example} \ref{E1}, and let $L=120$. Then $L < b=147.8$. We compute
the initial value $f(x)$ of capped stock loan as in the following
Figure 2. The graph shows that the initial value of the capped stock
loan greatly decreases because the cap is less than $b^{}$. The
client will acquire much more liquidity compared to the uncapped
stock loan.
\begin{figure}[H]
\includegraphics[height=9cm]{cap120.eps}
\caption{$\gamma=0.07,r=0.05,\sigma=0.15,\delta=0.01,q=100,cap=120$ }\label{fb}
\end{figure}
\end{EX}
\vskip 15pt \noindent
{\bf Acknowledgements.} We
are very grateful to Professor Jianming Xia for helpful conversations
and for providing the original paper \cite{stock}. We
also express our deep thanks to Professor Xun Yu Zhou for providing
the slides of his talk on stock loans at Peking University.
Special thanks also go to the participants of the seminar on
stochastic analysis and finance at Tsinghua University for their
feedback and useful conversations. This work is supported by
Project 10771114 of NSFC, Project 20060003001 of SRFDP, SRF for
ROCS, SEM, and the Korea Foundation for Advanced Studies. We would
like to thank these institutions for their generous financial support.
\vskip 20pt \noindent \setcounter{equation}{0}
\section{Introduction}
\IEEEPARstart{I}{n} recent years, ID verification systems have been exposed to variations of presentation attacks. For instance, they compare the user selfie with a picture of the photo ID extracted from the user ID card or passport, where the critical challenge becomes ensuring whether or not the ID card image has been tampered with in the digital or physical domain. Image tampering is a significant issue for such scenarios and biometric systems at large \cite{Ferrara}.
One of these attacks is related to passports: the morphing attack on face recognition systems, based on the enrolment of a morphed face image, which is averaged from two parent images and allows both contributing subjects to travel with the same passport \cite{Ferrara, Scherhag_survey, ScherhagDeep}.
Morphing attack detection is a new topic aimed at detecting unauthorised individuals who want to gain access to a ``valid'' identity in other countries. Morphing can be understood as a technique to combine two or more look-alike facial images from one subject and an accomplice, who could apply for a valid passport exploiting the accomplice's identity. Morphing takes place in the enrolment process stage. The threat of morphing attacks is known for border crossing or identification control scenarios.
A morphing attack's success depends on the decision of human observers, especially passport identification experts. In the real-life border control application, an expert compares the passport reference image of the traveller (digitally extracted from the embedded chip) with the facial appearance of the traveller \cite{venkatesh2020face}; this is very hard because of the improvements in morphing tools and because of the difficulty for the human expert to localise the facial areas in which morphing artefacts are present.
This work proposes to add an extra stage of feature selection after feature extraction, based on Mutual Information ($MI$), to estimate and keep the most relevant features and remove the most redundant ones from the face images in order to separate bona fide and morphed images. High redundancy between features confuses the classifier.
The contributions of this work are as follows: a) identify the most relevant and least redundant features from faces that allow us to separate bona fide from morphed images; b) localise the position of the most relevant areas on the images; c) visualise the areas that contain morphing artefacts; d) reduce the algorithm's complexity by sending fewer features to the classifier; e) analyse the feature-level fusion of intensity, shape, and texture information.
All these contributions may help to guide the manual inspection of morphed images.
This paper is organised as follows: a summary background on feature selection and $MI$ is presented in Section \ref{FS}. The related work is described in Section \ref{sec:related}. The methods are described in Section \ref{method}. The databases are described in Section \ref{database}, the experiments and results are presented in Section \ref{exp}, and the conclusions are presented in Section \ref{conclusion}.
\section{Related work}\label{sec:related}
Face morphing attacks have captured the interest of the research community and government agencies in Europe. For instance, the EU funded the iMARS project \footnote{\url{https://cordis.europa.eu/project/id/883356}}, developing new techniques for the manipulation and detection of morphed images.
Ferrara et al. \cite{Ferrara} were the first to investigate the vulnerability of face recognition systems with regard to morphing attacks. They evaluated the feasibility of creating deceiving morphed face images and analysed the robustness of commercial face recognition systems in the presence of morphing.
Scherhag et al. \cite{Scherhag_survey} studied the literature and developed a survey about the impact of morphed images on face recognition systems. The same authors \cite{ScherhagDeep} proposed a face representation from embedding vectors for differential morphing attack detection, creating a more realistic database with different scenarios and constraints using four automatic morphing tools. They also reported detection performances for several texture descriptors in conjunction with machine learning techniques.
Indeed, the NIST FRVT MORPH \cite{Ngan2020FaceRV} evaluates and reports the performance of different morph detection algorithms organised in three tiers according to morph image quality: Tier 1 evaluates low-quality morph images, Tier 2 considers automatically generated morph images, and Tier 3 high-quality images. Further, the NIST report is organised w.r.t. local (cropped faces) and global (passport photos) morphing algorithms. This confirms that image morphing is a problem in many scenarios.
Most of the state-of-the-art approaches use machine learning and deep learning to detect and classify morphed images, often utilising embedding vectors from deep learning approaches. However, those approaches did not analyse the most relevant features and their localisation in the original images. An efficient feature selection method may help to overcome this limitation.
Regarding feature selection, in image understanding, raw input data often has very high dimensionality and a limited number of samples. In this area, feature selection plays an important role in improving accuracy, efficiency and scalability of the object identification process. Since relevant features are often unknown a priori in the real world, irrelevant and redundant features may be introduced to represent the domain.
However, using more features implies increasing computational cost in the feature extraction process, slowing down the classification process and also increasing the time needed for training and validation, which may lead to classification over-fitting. As is the case in most image analysis problems, with a limited amount of sample data, irrelevant features may obscure the distributions of the small set of relevant features and confuse the classifiers.
Peng et al. \cite{Peng2005} developed a general framework to analyse the interaction between the redundancy and the relevance of features in a machine learning method, in order to find the most valuable features based on $MI$.
Guyon et al. \cite{Guyon2006} described the Conditional Mutual Information Maximisation (CMIM) criterion, which estimates the relevance of a candidate feature from triplets formed by the candidate, an already selected feature, and the class.
Vergara et al. \cite{Vergara} proposed an improvement of the CMIM \cite{Guyon2006} approach based on the selection of the first relevant feature. The traditional method maximises the conditional mutual information to select relevant features; these authors propose averaging the $MI$ to reduce the difference among chosen features.
Tapia et al. \cite{iriscode, cmimw} used $MI$ measures to guide the selection of bits from the iris code to be used as features in gender prediction. In \cite{cmimw}, they also used complementary information to create clusters of the most relevant features, based on information theory, to classify gender from faces.
Following these previous works, we believe that $MI$ is suitable for detecting morphed images, localising and revealing the artefacts present in them using an efficient number of features.
\section{Methods}
\label{method}
Figure \ref{fig:framwork} shows the proposed framework used in this paper, where a feature selection stage is added after traditional feature extraction approaches.
\begin{figure*}[]
\centering
\includegraphics[scale=0.36]{images/Framework2.png}
\caption{Framework proposed with feature selection stage.}
\label{fig:framwork}
\end{figure*}
\subsection{Feature extraction}
Three different types of features were extracted from the face images: intensity, texture and shape.
\subsubsection{Intensity}
For raw data, the grayscale intensity values were used, normalised between 0 and 1.
\subsubsection{Uniform Local Binary Pattern}
For texture, the histogram of uniform local binary patterns was used \cite{LBPs}. LBP is a gray-scale texture operator which characterises the spatial structure of the local image texture. Given a central pixel in the image, a binary pattern number is computed by comparing its value with those of its neighbours. The original operator used a $3\times3$ window size. LBP features are computed from relative pixel intensities in a neighbourhood, as shown in the following equation:
\begin{equation}
LBP_{P,R}(x,y)=\bigcup_{(x',y')\in N(x,y)}h(I(x,y),I(x',y'))\label{eq:LBP-1}
\end{equation}
where $N(x,y)$ is the vicinity around $(x,y)$, $\cup$ is the concatenation operator, $P$ is the number of neighbours and $R$ is the radius of the neighbourhood.
The uniform Local Binary Pattern (uLBP) was used as texture information. The uLBP extends the original LBP operator to a circular neighbourhood with different radius sizes, and a small subset of LBP patterns is selected. In this work we use `U2', which refers to a uniform pattern: an LBP is called uniform when its circular code contains at most 2 transitions from 0 to 1 or 1 to 0. Thus, the number of patterns is reduced from 256 to 59 bins.
\begin{figure}[]
\centering
\includegraphics[scale=0.38]{images/LBP/feret/LBP_grid.png}
\caption{Example of LBP images. Left: grayscale image. Middle: traditional LBP (256 bins). Right: LBP with the uniform pattern implementation (59 bins).}
\label{fig:lbp_images}
\end{figure}
The reasons for omitting the non-uniform patterns are twofold. First, most of the LBPs in natural images are uniform. It was noticed experimentally
that uniform patterns account for a bit less than 90\% of all patterns when using the (8,1) neighbourhood. In experiments with facial images, it was found that 90.6\% of the patterns in the (8,1) neighbourhood and 85.2\% of the patterns in the (8,2) neighbourhood are uniform \cite{ECCV_gender}.
The second reason for considering uniform patterns is the statistical robustness. Using uniform patterns instead of all the possible patterns has produced better recognition results in many applications. On one hand, there are indications that uniform patterns themselves are more stable, i.e. less prone to noise and on the other hand, considering only uniform patterns makes the number of possible LBP labels significantly lower and reliable estimation of their distribution requires fewer samples. See Figure \ref{fig:lbp_images}.
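As an illustration, the 59-bin uLBP histogram for the (8,1) neighbourhood can be sketched in plain NumPy as follows. This is a didactic re-implementation, not the code used in the experiments; libraries such as scikit-image provide equivalent functionality.

```python
import numpy as np

def is_uniform(code):
    # a circular 8-bit pattern is uniform if it has at most 2 bit transitions
    bits = [(code >> i) & 1 for i in range(8)]
    transitions = sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
    return transitions <= 2

# 58 uniform patterns -> bins 0..57, all non-uniform patterns -> bin 58
UNIFORM_LUT = {}
for c in range(256):
    if is_uniform(c):
        UNIFORM_LUT[c] = len(UNIFORM_LUT)

def ulbp_histogram(img):
    """59-bin uniform LBP (P=8, R=1) histogram of a 2-D grayscale array."""
    center = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= ((neigh >= center).astype(np.uint8) << bit)
    hist = np.zeros(59, dtype=int)
    for code, count in zip(*np.unique(codes, return_counts=True)):
        hist[UNIFORM_LUT.get(int(code), 58)] += count
    return hist
```

The 2 constant patterns plus $P(P-1)=56$ edge-like patterns give the 58 uniform codes; the single extra bin collects everything else, yielding the 59 bins used in the paper.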
\subsubsection{Inverse Histogram Oriented Gradient}
For shape, the inverse Histogram of Oriented Gradients \cite{iHOG, HOG} was used. The Histogram of Oriented Gradients was proposed by Dalal et al. \cite{HOG}: the distributions of gradient directions (oriented gradients) are used as features. Gradients, the $x$ and $y$ derivatives of an image, are helpful because the gradient magnitude is large around edges and corners (regions of abrupt intensity change), and edges and corners contain more information about object shape than flat regions. However, this descriptor presents some problems. For instance, when visualising the features of high-scoring false alarms in object detection, they look wrong in image space yet very similar to true positives in feature space. To avoid this limitation, which confuses the classifiers, we used the visualisation proposed by Vondrick et al. \cite{iHOG} to select the parameters that allow us to visualise the artefacts contained in morphed images. This implementation used $10\times12$ blocks and $3\times3$ filter sizes.
One example is shown in Figure \ref{fig:ihog}.
\begin{figure}[]
\centering
\includegraphics[scale=0.36]{images/HOG/ihog.png}
\caption{Example images of inverse HOG. Left: Morphed images. Middle: Traditional HOG. Right: Inverse HOG.}
\label{fig:ihog}
\end{figure}
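For intuition, the core quantity behind HOG, a magnitude-weighted histogram of gradient orientations, can be sketched as follows. This is a single global histogram for brevity; the actual descriptor accumulates per-cell histograms with block normalisation, and the inverse-HOG visualisation of \cite{iHOG} is a separate learned reconstruction.

```python
import numpy as np

def gradient_orientation_histogram(img, n_bins=9):
    """Toy HOG-style descriptor: one global histogram of gradient
    orientations, weighted by gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))   # derivatives along rows, cols
    magnitude = np.hypot(gx, gy)
    # unsigned orientation folded into [0, 180) degrees
    orientation = np.degrees(np.arctan2(gy, gx)) % 180.0
    hist, _ = np.histogram(orientation, bins=n_bins,
                           range=(0, 180), weights=magnitude)
    return hist
```

A vertical step edge concentrates all the weight in the first (near-0-degree) bin, while a flat image yields an all-zero histogram, matching the remark that flat regions carry little shape information.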
\subsection{Feature selection}
\label{FS}
Feature selection (FS) is the process in which groups of features, derived from image areas and textures or from pixels (in raw images) of facial images in a dataset, are selected based on some correlation measure, such as the F-statistic, logistic regression, or the $MI$ between the features and the class labels. See Figure \ref{fig:correlation}.
It is closely related to feature extraction, a process in which feature vectors are created from the facial image. This takes place through domain transformation or manipulation of the data space and can be considered as selecting a subset of features.
Figure \ref{fig:correlation} shows a morphed image with three different correlation metrics. The heat maps show the most correlated features in blue and the least correlated in red. All the features (relevant and redundant) are present in the image.
\begin{figure}[H]
\centering
\includegraphics[scale=0.40]{images/correlation2.png}
\caption{Example images with different correlation metrics. Red pixels represent the least correlated features.}
\label{fig:correlation}
\end{figure}
FS can be classified into three main groups: Filters, Wrappers, and Embedding methods \cite{Guyon2006}.
A filter method does not depend on a classifier when looking for the most relevant features;
it estimates correlation values, for instance according to $MI$. Conversely, wrappers search for the most relevant features according to the classifier; therefore, if the classifier changes, the relevant features vary. Embedding methods estimate an optimisation function jointly from the data and the classifier.
For this work, we propose to use a filter method based on $MI$ as the correlation metric to estimate the most relevant features to classify bona fide versus morphed face images.
\subsection{Mutual information}
$MI$ is defined as a measure of how much information is contained jointly in two variables or how much information of one variable determines the other variable \cite{Cover1991}. $MI$ is the foundation for information theoretic feature selection since it provides a function for computing the relevance of a variable with respect to the target class \cite{Guyon2006}. The $MI$ between two variables, $x$ and $y$, is defined based on their joint probabilistic distribution $p(x,y)$ and the respective marginal probabilities $p(x)$ and $p(y)$ as:
{
\begin{equation}
MI(x,y)=\sum_{{\scriptscriptstyle i,j}}{\textstyle p(x_{i},y_{j})log\frac{p(x_{i},y_{j})}{p(x_{i})p(y_{j})}}\label{eq:IM-1}.
\end{equation}
}{ \par}
A categorical $MI$ is used in this paper, which can be estimated by tallying the samples of categorical variables in the data, building adaptive histograms to compute the joint probability distribution $p(x,y)$ and the marginal probabilities $p(x)$ and $p(y)$ based on the Fraser algorithm \cite{Fraser1986} for bona fide and morphed images. Accordingly, if two or more pairs of features reach the same value, the information is \textbf{redundant}. Conversely, if the information of a pair of features is not contained in any other pair of features, it is considered \textbf{relevant} and can therefore help to disentangle and separate the two classes.
If a feature extracted from an image is randomly or uniformly distributed across the classes (bona fide or morphed), then the $MI$ between the feature and the class is zero. If a feature is expressed very differently across the classes, it has a large $MI$. Thus, we use $MI$ as a measure of the relevance of the features present in the images.
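A minimal histogram-based estimator of eq. (\ref{eq:IM-1}) can be sketched as follows. This is a simple fixed-bin version for illustration; the adaptive-histogram Fraser algorithm used in the paper is more involved.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Plug-in MI estimate (in bits) between a feature x and labels y."""
    joint, _, _ = np.histogram2d(x, y, bins=[bins, len(np.unique(y))])
    pxy = joint / joint.sum()                 # joint distribution p(x, y)
    px = pxy.sum(axis=1, keepdims=True)       # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)       # marginal p(y)
    nz = pxy > 0                              # skip empty cells (0 log 0 = 0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```

A feature that determines a binary class perfectly yields 1 bit of MI, while a feature independent of the class yields (close to) zero, as stated above.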
The following protocol was used:
\begin{itemize}
\item Each image of size $M \times N$ was flattened to a $1\times (M \cdot N)$ vector, for each class (bona fide and morphed).
\item The matrix $A$ is formed by the $K$ flattened images of $M \cdot N$ features each, plus the class vector $c$.
\item The $MI$ for each pair of columns of matrix $A$ is estimated.
\item The relevance (Rl) and redundancy (Rd) are estimated from matrix $A$.
\item The trade-off between the relevance and redundancy (Rl and Rd) matrices is estimated, sorted and indexed according to the $MI$ values.
\item A vector $v$ is formed with the index of each column (feature) with the highest relevance and least redundancy.
\item Only the $N$ columns with the selected indices are kept.
\item A reduced matrix is formed from $A$ and the indices in $v$, in steps of 100 features up to 1,000 features, to be evaluated by the classifier.
\end{itemize}
Different implementations have been proposed in the state-of-the-art \cite{Guyon2006} to estimate the trade-off between relevance and redundancy. Evaluating all $2^N$ feature combinations to remove all redundancy is not possible because of the high dimensionality. The following methods based on $MI$ and conditional $MI$ have been used; they are described as follows:
\subsection{minimum Redundancy Maximal Relevance (mRMR)}
Two forms of combining relevance and redundancy operations are reported in \cite{Peng2005}: the $MI$ difference $\left(MID\right)$ and the $MI$ quotient $\left(MIQ\right)$. Thus, the $mRMR$ feature set is obtained by optimising $MID$ and $MIQ$ simultaneously. Trading off both conditions requires integrating them into a single criterion function \cite{Peng2005} as follows:
{
\scriptsize
\begin{equation}
f^{mRMR}(f_{i})=MI(c;f_i)-\frac{1}{|S|}\sum_{f_s\in S} MI(f_i;f_s),\label{eq:mrmr}
\end{equation}
}{ \par}
where {\small{} $MI(c;f_i)$} measures the relevance of the feature $f_i$ to be added for the class $c$, and the term{\small{} $\frac{1}{|S|}\sum_{f_s\in S}MI(f_i;f_s)$}
estimates the redundancy of the $i$-th feature with respect to the previously selected features $f_s$ belonging to the set $S$.
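Given precomputed $MI(c;f_i)$ values and a pairwise $MI(f_i;f_j)$ matrix, the greedy selection implied by eq. (\ref{eq:mrmr}) can be sketched as follows (a sketch of the MID form only; the input names are illustrative):

```python
def mrmr_select(mi_with_class, mi_between, n_select):
    """Greedy mRMR (MID form): relevance minus mean redundancy.

    mi_with_class: list of MI(c; f_i) values, one per feature.
    mi_between:    matrix of MI(f_i; f_j) values.
    """
    # start with the single most relevant feature
    selected = [max(range(len(mi_with_class)), key=lambda i: mi_with_class[i])]
    while len(selected) < n_select:
        remaining = [i for i in range(len(mi_with_class)) if i not in selected]
        def score(i):
            redundancy = sum(mi_between[i][s] for s in selected) / len(selected)
            return mi_with_class[i] - redundancy
        selected.append(max(remaining, key=score))
    return selected
```

Note how a feature that is highly relevant but redundant with an already selected one loses to a less relevant but complementary feature.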
\subsection{Normalised Mutual Information Feature Selection (NMIFS)}
Estevez et al. \cite{Estevez2009} proposed the Normalised Mutual Information Feature Selection (NMIFS), an improved version of mRMR based on the normalised $MI$ of features. The $MI$ between two random variables is bounded above by the minimum of their entropies $H$. As the entropy of a feature can vary greatly, this measure should be normalised before applying it to a global set of features, as follows:
\begin{equation}
f^{NMIFS}(f_{i})=MI(c;f_i)-\frac{1}{|S|}\sum_{f_s\in S}MI_{N}(f_i;f_s)\label{eq:nmifs}
\end{equation}
where $MI_{N}$ is the $MI$ normalised by the minimum entropy of both features, as defined in:
\begin{equation}
MI_{N}(fi;fs)=\frac{MI(fi;fs)}{min(H(fi),H(fs))}\label{eq:IN}
\end{equation}
\subsection{Conditional Maximisation Mutual Information (CMIM)}
The $CMIM$ criterion is a tri-variate measure of the information associated with a single feature about the class, conditioned upon an already selected feature \cite{Fleuret}. It loops over the chosen features and assigns each candidate feature a score based upon the lowest Conditional Mutual Information $(CMI)$ between the selected features, the candidate feature, and the class \cite{Guyon2006, Fleuret}. Then, the selected feature is the one with the maximum score.
{
\scriptsize
\begin{equation}
CMIM=\begin{cases}
arg\: max_{fi\in F}\left\{ MI(fi;c) for\: S=\emptyset\right\} \\
arg\: max_{fi\in F/S}\left\{ min_{fj\in S}\, MI(fi;c/fj)\right\} \\ for\: S\neq\emptyset.
\end{cases}
\end{equation}
}{\footnotesize \par}
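With a precomputed table cmi[i][j] approximating $MI(f_i;c\mid f_j)$, the greedy CMIM scoring (minimum over already selected features) can be sketched as follows (the input names are illustrative):

```python
def cmim_select(mi_with_class, cmi, n_select):
    """Greedy CMIM: each candidate is scored by its worst-case
    conditional MI given any already selected feature.

    mi_with_class: list of MI(c; f_i) values, one per feature.
    cmi:           matrix with cmi[i][j] ~ MI(f_i; c | f_j).
    """
    # the first feature maximises the plain MI with the class
    selected = [max(range(len(mi_with_class)), key=lambda i: mi_with_class[i])]
    while len(selected) < n_select:
        remaining = [i for i in range(len(mi_with_class)) if i not in selected]
        selected.append(max(remaining,
                            key=lambda i: min(cmi[i][j] for j in selected)))
    return selected
```

A feature made redundant by a selected one has a small conditional MI and is skipped, even if its unconditional relevance is high.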
\subsection{Conditional Maximisation Mutual Information-2 (CMIM2)}
The $CMIM$ criterion selects relevant variables and avoids redundancy. However, it does not necessarily choose a variable that is complementary to the already chosen variables: a variable with high complementary information with respect to the already selected variables will have a high $CMI$.
In general, in problems where the variables are highly complementary (or dependent) for predicting $c$, the $CMIM$ algorithm will fail to find that dependence among the variables. $CMIM$-2 \cite{Vergara} was proposed in order to improve $CMIM$; it replaces the max function with the average function. Then, the selected feature is the one with the highest average score:
{
\scriptsize
\begin{equation}
f^{CMIM2}(f_i)=\frac{1}{|S|} \sum_{f_j\in S}{MI(f_i; c\mid f_j)}\label{eq:cmim2}.
\end{equation}
}{ \par}
\begin{figure*}[]
\hfill
\begin{minipage}{.165\linewidth}
\includegraphics[width=\linewidth]{images/Examples/FERET/ff_01208_940128_fa_a.png_vs_00192_940128_fa.png}
\caption*{FERET Subject 1}
\end{minipage}%
\hfill
\begin{minipage}{.165\linewidth}
\includegraphics[width=\linewidth]{images/Examples/FERET/ff_01208_940128_fa_a.png_vs_00192_940128_fa.png}
\caption*{FaceFusion}
\end{minipage}%
\hfill
\begin{minipage}{.16\linewidth}
\includegraphics[width=\linewidth]{images/Examples/FERET/fm_01208_940128_fa_a.png_vs_00192_940128_fa.png}
\caption*{FaceMorpher}
\end{minipage}%
\hfill
\begin{minipage}{.16\linewidth}
\includegraphics[width=\linewidth]{images/Examples/FERET/ocv_01208_940128_fa_a.png_vs_00192_940128_fa.png}
\caption*{OpenCV Morpher}
\end{minipage}%
\hfill
\begin{minipage}{.16\linewidth}
\includegraphics[width=\linewidth]{images/Examples/FERET/ubo_01208_940128_fa_a.png_vs_00192_940128_fa.png}
\caption*{UBO-Morpher}
\end{minipage}%
\hfill
\begin{minipage}{.16\linewidth}
\includegraphics[width=\linewidth]{images/Examples/FERET/00192_940128_fa.png}
\caption*{Subject2}
\end{minipage}%
\vspace{1ex}
\hfill
\begin{minipage}{.15\linewidth}
\includegraphics[width=\linewidth]{images/Examples/FRGC/bona_04200d68.png}
\caption*{FRGCv2 Subject 1}
\end{minipage}%
\hfill
\begin{minipage}{.17\linewidth}
\includegraphics[width=\linewidth]{images/Examples/FRGC/ff_04200d68.png_vs_04734d04.png}
\caption*{FaceFusion}
\end{minipage}%
\hfill
\begin{minipage}{.16\linewidth}
\includegraphics[width=\linewidth]{images/Examples/FRGC/fm_04200d68.png_vs_04734d04.png}
\caption*{FaceMorpher}
\end{minipage}%
\hfill
\begin{minipage}{.16\linewidth}
\includegraphics[width=\linewidth]{images/Examples/FRGC/ocv_04200d68.png_vs_04734d04.png}
\caption*{OpenCV Morpher}
\end{minipage}%
\hfill
\begin{minipage}{.16\linewidth}
\includegraphics[width=\linewidth]{images/Examples/FRGC/ubo_04200d68.png_vs_04734d04.png}
\caption*{UBO-Morpher}
\end{minipage}%
\hfill
\begin{minipage}{.16\linewidth}
\includegraphics[width=\linewidth]{images/Examples/FRGC/04734d04.png}
\caption*{Subject 2}
\end{minipage}%
\hfill
\caption{Examples of different morphing algorithms for two subjects in the FERET and FRGCv2 databases}
\label{fig:datasetex}
\end{figure*}
\section{Databases}
\label{database}
The FERET and FRGCv2 databases were used to create the morphed images based on the protocol described in \cite{ScherhagDeep}. A summary of the databases is presented in Table \ref{tab:database}. All the images were captured in a controlled scenario and include variations in pose and illumination.
FRGCv2 presents images more compliant with passport portrait photo requirements. The images contain illumination variations, different sharpness and changes in the background.
The original images have a size of $720\times960$ pixels. For this paper, the faces were detected and the images were resized to $180 \times 240$ pixels. These images still fulfil the resolution requirement of an intra-eye distance of 90 pixels defined by ICAO-9303-p9-2015.
The $\alpha$ value defining the contribution of each subject to the resulting morphed image was 0.5.
Figure \ref{fig:datasetex} shows examples of the morphed portrait images and the different output qualities, with artefacts in the background, for instance in the OpenCV implementation.
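The $\alpha$-weighted combination at the core of these tools can be sketched as a pixel-wise blend. The real morphing pipelines first warp both faces to averaged landmark positions; that geometric step is omitted in this sketch.

```python
import numpy as np

def blend(img1, img2, alpha=0.5):
    """Pixel-wise blend step of face morphing: alpha weights the first image.
    (Real morphing tools first warp both faces to averaged landmarks.)"""
    return alpha * img1.astype(float) + (1 - alpha) * img2.astype(float)
```

With $\alpha=0.5$, as used here, both subjects contribute equally to every pixel of the morph.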
\begin{table}[H]
\scriptsize
\centering
\caption{Number of images used from the FERET and FRGCv2 databases. Column 1 shows the software used to create the morphed images. The number of images is per dataset.}
\begin{tabular}{lllll}
\hline
\textbf{Database} & \textbf{Nº Subjects} & \textbf{Bona fide} & \textbf{Morphs} & \textbf{Probes} \\ \hline
FRGCv2 & 533 & 984 & 964 & 1726 \\ \hline
FERET & 529 & 529 & 529 & 791 \\ \hline
FaceFusion & 533 & 984 & 964 & 1726 \\ \hline
FaceMorpher & 533 & 984 & 964 & 1726 \\ \hline
FaceOpenCV & 533 & 984 & 964 & 1726 \\ \hline
UBO-Morpher & 533 & 984 & 964 & 1726 \\ \hline
\end{tabular}
\label{tab:database}
\end{table}
The following algorithms were used to create the morphed images:
\begin{itemize}
\item FaceFusion is a proprietary morphing algorithm, developed as an iOS app \footnote{\url{www.wearemoment.com/FaceFusion/}}. This algorithm creates high-quality morphed images without visible artefacts.
\item FaceMorpher is an open-source algorithm to create morphed images \footnote{\url{github.com/alyssaq/face\_morpher}}. This algorithm also introduces some artefacts in the background.
\item FaceOpenCV is based on the OpenCV implementation \footnote{\url{www.learnopencv.com/face-morph-using-opencv-cpp-python}}. The images contain visible artefacts in the background and in some areas of the face.
\item UBO-Morpher was developed by the University of Bologna. The resulting images are of high quality, without artefacts in the background.
\end{itemize}
As mentioned before, after creation of the morphed images, all faces were cropped using a modified dlib face detector implementation\footnote{\url{https://www.pyimagesearch.com/2018/09/24/opencv-face-recognition/}}. Figure \ref{fig:morphing} shows examples of the FERET cropped face database. We can observe that the cropped images represent a more challenging scenario because all the background artefacts of the morphing process were removed. However, some artefacts remain and can be observed in the images, for instance for the FaceMorpher and OpenCV implementations.
\begin{figure}[]
\centering
\includegraphics[scale=0.14]{images/crops/bonafide.png}\includegraphics[scale=0.149]{images/crops/morphs_facefusion.png}\includegraphics[scale=0.155]{images/crops/morphs_facemorpher.png}\includegraphics[scale=0.148]{images/crops/morphs_opencv.png}\includegraphics[scale=0.158]{images/crops/morphs_ubo.png}
\caption{Examples of FERET cropped images. From Left to Right: Bona fide, FaceFusion, FaceMorpher, OpenCV, UBO-Morpher implementations.}
\label{fig:morphing}
\end{figure}
\section{Experiments and Results}
\label{exp}
This section presents the quantitative results of the proposed scheme based on feature selection for automated single-morph attack detection. In addition to the proposed system, we evaluated six different contemporary classifiers: K-Nearest Neighbours (KNN), Logistic Regression (LOGIT), Support Vector Machine (SVM), Decision Tree (DT), Random Forest (RF), and Multilayer Perceptron (MLP). Overall, Random Forest and SVM reached the best results (see Figure \ref{fig:all_curves}). To compare with and estimate the baseline method, only the Random Forest classifier was used.
\begin{figure}[H]
\centering
\includegraphics[scale=0.25]{images/morphs_ubo_raw_gray_random_forest_paper.png}
\caption{DET curves comparing the baseline classifiers. RF and SVM reached the best results. KNN, LOGIT and MLP are not shown because of their poor results.}
\label{fig:all_curves}
\end{figure}
The experiments used a leave-one-out (LOO) protocol and an RF classifier with 300 trees. These datasets allow subject-disjoint results to be computed; that is, no subject has an image in both the training and the testing subset.
The FERET and FRGCv2 databases were partitioned into 60\% training and 40\% testing data for feature selection. The selection of features was made using only the training set. Each of the four methods delivers the indices of the columns of the matrix $A$ that represent the most relevant features. The number of features was evaluated in steps of 100 up to the end of the vector.
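A minimal sketch of the subject-disjoint 60\%/40\% partition and the classifier configuration, assuming scikit-learn is available (function names and the seed are ours):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def subject_disjoint_split(subject_ids, train_frac=0.6, seed=0):
    """Split sample indices so that no subject appears in both the
    training and the testing partition (60 % / 40 % here)."""
    rng = np.random.default_rng(seed)
    subjects = np.unique(subject_ids)
    rng.shuffle(subjects)
    n_train = int(round(train_frac * len(subjects)))
    train_subjects = set(subjects[:n_train])
    train_idx = [i for i, s in enumerate(subject_ids) if s in train_subjects]
    test_idx = [i for i, s in enumerate(subject_ids) if s not in train_subjects]
    return train_idx, test_idx

# Classifier configuration used throughout the experiments: RF, 300 trees.
rf = RandomForestClassifier(n_estimators=300, random_state=0)
```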
The performance of the detection algorithms is reported according to the metrics defined in ISO/IEC 30107-3. The Attack Presentation Classification Error Rate (APCER) is defined as the proportion of attack presentations using the same attack instrument species incorrectly classified as bona fide presentations in a specific scenario. The Bona fide Presentation Classification Error Rate (BPCER) is defined as the proportion of bona fide images incorrectly classified as morphing attacks. The D-EER, the operating point where APCER equals BPCER, is reported for the different morphing methods.
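These error rates can be computed from detector scores as in the following sketch, which assumes scores grow with attack likelihood and a "score $\geq$ threshold means attack" convention (both assumptions are ours); BPCER10 and BPCER20 denote the BPCER at the thresholds where APCER is at most 10\% and 5\%, respectively:

```python
import numpy as np

def detection_metrics(bona_fide_scores, attack_scores):
    """ISO/IEC 30107-3 style error rates for a morphing detector.

    Assumption: scores grow with the likelihood of an attack, and a
    presentation is classified as an attack when score >= threshold.
    """
    thresholds = np.unique(np.concatenate([bona_fide_scores, attack_scores]))
    # APCER: attacks wrongly accepted as bona fide (score below threshold).
    apcer = np.array([(attack_scores < t).mean() for t in thresholds])
    # BPCER: bona fide wrongly flagged as attacks (score at/above threshold).
    bpcer = np.array([(bona_fide_scores >= t).mean() for t in thresholds])
    i = np.argmin(np.abs(apcer - bpcer))
    d_eer = (apcer[i] + bpcer[i]) / 2.0
    bpcer10 = bpcer[apcer <= 0.10].min()  # BPCER at APCER <= 10 %
    bpcer20 = bpcer[apcer <= 0.05].min()  # BPCER at APCER <= 5 %
    return d_eer, bpcer10, bpcer20
```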
\begin{table*}[]
\centering
\scriptsize
\caption{Baseline performance reported in \% of D-EER for FERET LOO trained on FaceFusion and FaceMorpher.}
\label{tab:ff-fm_feret}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
Train & \multicolumn{4}{c|}{FACEFUSION} & Train & \multicolumn{4}{c|}{FACEMORPHER} \\ \hline
Method & FACEMORPHER & OpenCV & UBO-MORPHER & Average & Method & FACEFUSION & OpenCV & UBO-MORPHER & Average \\ \hline
RAW & 37.47 & 35.67 & 41.35 & 38.16 & RAW & 49.23 & 23.53 & 49.6 & 40.79 \\ \hline
HOG & 38.83 & 40.47 & 40.4 & 39.90 & HOG & 42.03 & 37.14 & 42.07 & 40.41 \\ \hline
LBP1 & 27.35 & 32.33 & 38.53 & 32.74 & LBP1 & 45.45 & 32.88 & 42.67 & 40.33 \\ \hline
LBP2 & 24.01 & 27.46 & 37.31 & 29.59 & LBP2 & 42.8 & 30.65 & 41.79 & 38.41 \\ \hline
LBP3 & 24.88 & 26.92 & 37.03 & 29.61 & LBP3 & 40.28 & 26.55 & 40.45 & 35.76 \\ \hline
LBP4 & 23.25 & 24.32 & 36.24 & 27.94 & LBP4 & 38.76 & 25.29 & 40.91 & 34.99 \\ \hline
LBP5 & 24.55 & 26.25 & 38.7 & 29.83 & LBP5 & 36.14 & 30.14 & 38.85 & 35.04 \\ \hline
LBP6 & 25.79 & 26.98 & 38.95 & 30.57 & LBP6 & 35.71 & 27.49 & 40.27 & 34.49 \\ \hline
LBP7 & 27.78 & 28.37 & 40.42 & 32.19 & LBP7 & 37.72 & 26.43 & 42.2 & 35.45 \\ \hline
LBP8 & 28.88 & 27.73 & 42.47 & 33.03 & LBP8 & 38.01 & 27.21 & 43.59 & 36.27 \\ \hline
\textbf{uLBP\_ALL} & 23.71 & 26.34 & 38.02 & \textbf{29.36} & \textbf{uLBP\_ALL} & 38.59 & 30.05 & 40.33 & \textbf{36.32} \\ \hline
LBP\_VERT & 26.76 & 30.1 & 23.98 & 26.95 & LBP\_VERT & 40.99 & 24.81 & 42.68 & 36.16 \\ \hline
LBP\_HOR & 26.59 & 28.66 & 37.69 & 30.98 & LBP\_HOR & 41.95 & 25.06 & 42.71 & 36.57 \\ \hline
FUSION & 43.64 & 44.56 & 46.88 & 45.03 & FUSION & 44.28 & 32.31 & 46.68 & 41.09 \\ \hline
\end{tabular}
\end{table*}
\begin{table*}[]
\centering
\scriptsize
\caption{Baseline performance reported in \% of D-EER for FERET LOO trained on OpenCV and UBO-Morpher.}
\label{tab:ocv-ubo_feret}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
Train & \multicolumn{4}{c|}{OpenCV} & Train & \multicolumn{4}{c|}{UBO-MORPHER} \\ \hline
Method & FACEFUSION & FACEMORPHER & UBO-MORPHER & Average & Method & FACEFUSION & FACEMORPHER & UBO-MORPHER & Average \\ \hline
RAW & 47.45 & 20.21 & 48.82 & 38.83 & RAW & 35.55 & 40.45 & 37.46 & 37.82 \\ \hline
HOG & 43.17 & 35.7 & 40.032 & 39.63 & HOG & 39.77 & 35.74 & 35.82 & 37.11 \\ \hline
LBP1 & 44.6 & 25.72 & 41.68 & 37.33 & LBP1 & 42.6 & 27.45 & 32.93 & 34.33 \\ \hline
LBP2 & 41.28 & 24.85 & 40.28 & 35.47 & LBP2 & 40.28 & 25.26 & 29.86 & 31.80 \\ \hline
LBP3 & 37.66 & 23.95 & 39.52 & 33.71 & LBP3 & 35.99 & 24.97 & 25.58 & 28.85 \\ \hline
LBP4 & 36.08 & 22.72 & 38.52 & 32.44 & LBP4 & 36.03 & 24.33 & 26.99 & 29.12 \\ \hline
LBP5 & 35.76 & 25.56 & 38.5 & 33.27 & LBP5 & 34.31 & 26.94 & 28.64 & 29.96 \\ \hline
LBP6 & 37.03 & 28.74 & 40.58 & 35.45 & LBP6 & 37.74 & 30.36 & 30.21 & 32.77 \\ \hline
LBP7 & 37.74 & 25.51 & 42.47 & 35.24 & LBP7 & 38.56 & 32.08 & 31.5 & 34.05 \\ \hline
LBP8 & 37.66 & 27.47 & 42.86 & 36.00 & LBP8 & 39.79 & 34.58 & 32.16 & 35.51 \\ \hline
\textbf{uLBP\_ALL} & 42.23 & 22.48 & 41.36 & \textbf{35.36} & \textbf{uLBP\_ALL} & 41.58 & 25.78 & 29.81 & \textbf{32.39} \\ \hline
LBP\_VERT & 40.6 & 24.77 & 42.63 & 36.00 & LBP\_VERT & 38.51 & 29.6 & 31.26 & 33.12 \\ \hline
LBP\_HOR & 41.5 & 23.4 & 42.45 & 35.78 & LBP\_HOR & 38.66 & 29.55 & 31.7 & 33.30 \\ \hline
FUSION & 44.85 & 28.84 & 45.92 & 39.87 & FUSION & 46.17 & 43.57 & 44.05 & 44.60 \\ \hline
\end{tabular}
\end{table*}
\begin{table*}[]
\scriptsize
\centering
\caption{Baseline performance reported in \% of D-EER for FRGCv2 LOO trained on FaceMorpher and FaceFusion.}
\label{tab:frgc_fm-ff}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
Train & \multicolumn{4}{c|}{FACEFUSION} & Train & \multicolumn{4}{c|}{FACEMORPHER} \\ \hline
Method & FACEMORPHER & OpenCV & UBO-MORPHER & Average & Method & FACEFUSION & OpenCV & UBO-MORPHER & Average \\ \hline
RAW & 25.1 & 23.92 & 27.91 & 25.64 & RAW & 41.71 & 13.97 & 42.3 & 32.66 \\ \hline
HOG & 26.4 & 27.02 & 29.89 & 27.77 & HOG & 30.93 & 24.03 & 32.38 & 29.11 \\ \hline
LBP1 & 17.61 & 10.8 & 17.44 & 15.28 & LBP1 & 22.6 & 9.57 & 19.77 & 17.31 \\ \hline
LBP2 & 14.18 & 11.4 & 19.2 & 14.93 & LBP2 & 20.97 & 13.13 & 19.49 & 17.86 \\ \hline
LBP3 & 10.58 & 13.36 & 21.67 & 15.20 & LBP3 & 20.46 & 9.53 & 20.9 & 16.96 \\ \hline
LBP4 & 11.71 & 13.58 & 22.88 & 16.06 & LBP4 & 20.34 & 10.32 & 23 & 17.89 \\ \hline
LBP5 & 13.43 & 14.16 & 25.25 & 17.61 & LBP5 & 20.73 & 10.69 & 25.43 & 18.95 \\ \hline
LBP6 & 14.61 & 15.43 & 28.87 & 19.64 & LBP6 & 21.38 & 11.37 & 26.74 & 19.83 \\ \hline
LBP7 & 15.88 & 15.78 & 26.2 & 19.29 & LBP7 & 21.91 & 11.01 & 26.7 & 19.87 \\ \hline
LBP8 & 15.96 & 16.29 & 26.06 & 19.44 & LBP8 & 24 & 12.22 & 27.42 & 21.21 \\ \hline
\textbf{uLBP\_ALL} & 10.05 & 12.38 & 20.36 & \textbf{14.26} & \textbf{uLBP\_ALL} & 22.44 & 7.99 & 21.64 & \textbf{17.36} \\ \hline
LBP\_VERT & 13.81 & 14.43 & 20.9 & 16.38 & LBP\_VERT & 20.96 & 11.22 & 22.41 & 18.20 \\ \hline
LBP\_HOR & 13.45 & 13.74 & 19.85 & 15.68 & LBP\_HOR & 20.57 & 11 & 21.1 & 17.56 \\ \hline
FUSION & 13.4 & 16.09 & 27.68 & 19.06 & FUSION & 27.79 & 15.21 & 26.81 & 23.27 \\ \hline
\end{tabular}
\end{table*}
\begin{table*}[]
\scriptsize
\centering
\caption{Baseline performance reported in \% of D-EER for FRGC LOO trained on OpenCV and UBO-Morpher.}
\label{tab:frgc_ocv_ubo}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
Train & \multicolumn{4}{c|}{OpenCV} & Train & \multicolumn{4}{c|}{UBO-MORPHER} \\ \hline
Method & FACEFUSION & FACEMORPHER & UBO-MORPHER & Average & Method & FACEFUSION & FACEMORPHER & OpenCV & Average \\ \hline
RAW & 40.3 & 13.07 & 41.3 & 31.56 & RAW & 22.58 & 23.83 & 22.24 & 22.88 \\ \hline
HOG & 31.21 & 22.17 & 32.44 & 28.61 & HOG & 27.44 & 25.38 & 26.5 & 26.44 \\ \hline
LBP1 & 20.94 & 13.29 & 17.88 & 17.37 & LBP1 & 20.54 & 6.32 & 9.92 & 12.26 \\ \hline
LBP2 & 20.52 & 8.43 & 18.89 & 15.95 & LBP2 & 20.18 & 7.38 & 9.86 & 12.47 \\ \hline
LBP3 & 19.57 & 7.26 & 20.11 & 15.65 & LBP3 & 19.59 & 9.54 & 11.76 & 13.63 \\ \hline
LBP4 & 20.76 & 8.05 & 23.1 & 17.30 & LBP4 & 18.99 & 10.69 & 12.65 & 14.11 \\ \hline
LBP5 & 20.4 & 9.1 & 24.63 & 18.04 & LBP5 & 20.38 & 13.28 & 13.64 & 15.77 \\ \hline
LBP6 & 21.66 & 10.66 & 26.57 & 19.63 & LBP6 & 21.19 & 15.7 & 15.82 & 17.57 \\ \hline
LBP7 & 22.64 & 10.68 & 26.84 & 20.05 & LBP7 & 20.26 & 16.85 & 16.47 & 17.86 \\ \hline
LBP8 & 23.5 & 11.94 & 27.78 & 21.07 & LBP8 & 22.71 & 19.37 & 19.52 & 20.53 \\ \hline
\textbf{uLBP\_ALL} & 21.55 & 5.79 & 21.49 & \textbf{16.28} & \textbf{uLBP\_ALL} & 18.2 & 7.51 & 9.52 & \textbf{11.74} \\ \hline
LBP\_VERT & 21.22 & 9.45 & 22.33 & 17.67 & LBP\_VERT & 18.19 & 13.77 & 14.07 & 15.34 \\ \hline
LBP\_HOR & 20.98 & 9.62 & 21.97 & 17.52 & LBP\_HOR & 18.34 & 13.69 & 13.9 & 15.31 \\ \hline
FUSION & 27.22 & 11.7 & 26.31 & 21.74 & FUSION & 27.59 & 11.24 & 13.22 & 17.35 \\ \hline
\end{tabular}
\end{table*}
\subsection{Experiment 1}
Three different kinds of features were extracted from the faces: intensity, HOG, and uLBP. From the raw images, we used the pixel intensity values normalised between 0 and 1. For shape, we used the HOG histogram. For texture, the histogram of Uniform Local Binary Patterns (uLBP) was used. For the uLBP, all radii from uLBP(8,1) to uLBP(8,8) were explored. The fusion of LBPs was also investigated, concatenating uLBP(8,1) up to uLBP(8,8) (uLBP\_ALL). The vertical (uLBP\_VERT) and horizontal (uLBP\_HOR) concatenations of the image divided into 8 patches were also explored.
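A minimal sketch of the three descriptors using scikit-image. The HOG cell/block sizes are illustrative rather than the paper's exact configuration; the 59-bin non-rotation-invariant uniform LBP histograms over radii 1 to 8 reproduce the 472-dimensional uLBP\_ALL vector mentioned below:

```python
import numpy as np
from skimage.feature import hog, local_binary_pattern

def extract_features(gray_face):
    """Intensity, shape (HOG) and texture (uLBP) descriptors for one
    grayscale face crop. HOG parameters here are illustrative."""
    # Intensity: pixel values normalised to [0, 1].
    intensity = (gray_face.astype(np.float64) / 255.0).ravel()
    # Shape: HOG descriptor.
    shape = hog(gray_face, orientations=9, pixels_per_cell=(16, 16),
                cells_per_block=(2, 2))
    # Texture: uniform LBP with 8 neighbours for radii 1..8; each radius
    # yields a 59-bin histogram, concatenated into 8 * 59 = 472 features.
    texture = []
    for radius in range(1, 9):
        lbp = local_binary_pattern(gray_face, P=8, R=radius,
                                   method="nri_uniform")
        hist, _ = np.histogram(lbp, bins=np.arange(0, 60), density=True)
        texture.append(hist)
    return intensity, shape, np.concatenate(texture)
```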
After feature extraction, we fused the information at the feature level by concatenating the feature vectors from the different sources (intensity, HOG, and uLBP) into a single feature vector that becomes the input to the classifier (FUSION). The classifier was trained with each feature extraction method's selected features and with the fused selected features.
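A minimal sketch of this feature-level fusion step (the function and argument names are ours); the optional index argument keeps only the columns chosen by a feature-selection method:

```python
import numpy as np

def fuse_features(intensity, shape, texture, selected_idx=None):
    """Feature-level fusion: concatenate the three descriptors into a
    single vector; optionally keep only the columns chosen by a
    feature-selection method."""
    fused = np.concatenate([intensity, shape, texture])
    if selected_idx is not None:
        fused = fused[np.asarray(selected_idx)]
    return fused
```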
Tables \ref{tab:ff-fm_feret} and \ref{tab:ocv-ubo_feret} show the baseline results for the intensity, shape and texture feature extraction methods. This baseline was estimated using a leave-one-out protocol over all the morphing methods. The intensity (RAW) and HOG features reached the highest D-EERs (worst results). Most of the time, uLBP\_ALL obtained the lowest average D-EER (best results).
In Table \ref{tab:ff-fm_feret}, the left side shows the results for the FERET database when training with FaceFusion and testing with FaceMorpher, OpenCV, and UBO-Morpher; the right side shows training with FaceMorpher and testing with FaceFusion, OpenCV, and UBO-Morpher.
In Table \ref{tab:ocv-ubo_feret}, the left side shows the results for the FERET database when training with OpenCV and testing with FaceFusion, FaceMorpher, and UBO-Morpher; the right side shows training with UBO-Morpher and testing with FaceFusion, FaceMorpher, and OpenCV. The same protocol was applied in Tables \ref{tab:frgc_fm-ff} and \ref{tab:frgc_ocv_ubo} with the FRGCv2 database.
\subsection{Experiment 2}
This experiment explores the application of the proposed method based on feature selection. The four feature selection methods, mRMR, NMIFS, CMIM, and CMIM2, were applied in order to reduce the size of the data and to identify the positions of the most relevant features extracted from intensity, HOG, and uLBP before they enter the classifier.
The best 5,000 of 43,200 features were selected from the raw data (intensity), the best 1,000 of 1,048 features from HOG, and the best 400 of 472 features from the fusion of uLBP (uLBP\_ALL).
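For illustration, a minimal sketch of the greedy mRMR criterion (relevance minus mean redundancy) using scikit-learn's mutual-information estimators. The actual mRMR/NMIFS/CMIM/CMIM-2 implementations differ in the redundancy or conditioning term (NMIFS normalises the redundancy; the CMIM variants use conditional MI), so this is not the paper's exact code:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mrmr_rank(X, y, n_select, seed=0):
    """Greedy mRMR: at each step pick the feature maximising MI with the
    label minus the mean MI with the already-selected features."""
    relevance = mutual_info_classif(X, y, random_state=seed)
    selected = [int(np.argmax(relevance))]
    candidates = [j for j in range(X.shape[1]) if j != selected[0]]
    while len(selected) < n_select and candidates:
        scores = []
        for j in candidates:
            redundancy = np.mean([
                mutual_info_regression(X[:, [j]], X[:, k],
                                       random_state=seed)[0]
                for k in selected])
            scores.append(relevance[j] - redundancy)
        best = candidates[int(np.argmax(scores))]
        selected.append(best)
        candidates.remove(best)
    return selected
```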
Tables \ref{tab:hog_fea_feret} and \ref{tab:hog_fea_frgc} show the results on the FERET and FRGCv2 databases for single-morph detection using the best features selected from \textbf{HOG}, applied to FaceFusion, FaceMorpher, OpenCV-Morph and UBO-Morpher. The reported results show an improvement over the Experiment 1 baseline, which used all the HOG features extracted from the images. The number of features was reduced on average to about 10\% of the original size. This reduction would enable applications on mobile device hardware and also allows us to localise the most relevant features.
\begin{table}[H]
\scriptsize
\caption{D-EER in \% of HOG + Fea / FERET. The figures in parenthesis represent the best number of features for each method.}
\label{tab:hog_fea_feret}
\begin{tabular}{|c|c|c|c|c|}
\hline
& \begin{tabular}[c]{@{}c@{}}FaceFusion\\ (bestFea)\end{tabular} & \begin{tabular}[c]{@{}c@{}}FaceMorpher\\ (bestFea)\end{tabular} & \begin{tabular}[c]{@{}c@{}}OpenCV-Morph\\ (bestFea)\end{tabular} & \begin{tabular}[c]{@{}c@{}}UBO-Morpher\\ (bestFea)\end{tabular} \\ \hline
mRMR & \begin{tabular}[c]{@{}c@{}}17.15\\ (400)\end{tabular} & \begin{tabular}[c]{@{}c@{}}7.07\\ (700)\end{tabular} & \begin{tabular}[c]{@{}c@{}}6.15\\ (700)\end{tabular} & \begin{tabular}[c]{@{}c@{}}15.68\\ (100)\end{tabular} \\ \hline
NMIFS & \begin{tabular}[c]{@{}c@{}}19.98\\ (300)\end{tabular} & \begin{tabular}[c]{@{}c@{}}9.74\\ (300)\end{tabular} & \begin{tabular}[c]{@{}c@{}}5.84\\ (800)\end{tabular} & \begin{tabular}[c]{@{}c@{}}13.88\\ (300)\end{tabular} \\ \hline
CMIM & \begin{tabular}[c]{@{}c@{}}17.83\\ (300)\end{tabular} & \begin{tabular}[c]{@{}c@{}}5.84\\(600)\end{tabular} &\begin{tabular}[c]{@{}c@{}} 7.07\\(500)\end{tabular} & \begin{tabular}[c]{@{}c@{}}11.07\\ (400)\end{tabular} \\ \hline
CMIM2 & \begin{tabular}[c]{@{}c@{}}8.12\\ (300)\end{tabular} & \begin{tabular}[c]{@{}c@{}}4.92\\ (900)\end{tabular} & \begin{tabular}[c]{@{}c@{}}6.15\\ (500)\end{tabular} & \begin{tabular}[c]{@{}c@{}}13.52\\ (300)\end{tabular} \\ \hline
\end{tabular}
\end{table}
\begin{table}[H]
\scriptsize
\caption{D-EER in \% of HOG + Fea / FRGCv2. The figures in parenthesis represent the best number of features for each method.}
\label{tab:hog_fea_frgc}
\begin{tabular}{|c|c|c|c|c|}
\hline
& \begin{tabular}[c]{@{}c@{}}FaceFusion\\ (bestFea)\end{tabular} & \begin{tabular}[c]{@{}c@{}}FaceMorpher\\ (bestFea)\end{tabular} & \begin{tabular}[c]{@{}c@{}}OpenCV-Morph\\ (bestFea)\end{tabular} & \begin{tabular}[c]{@{}c@{}}UBO-Morpher\\ (bestFea)\end{tabular} \\ \hline
mRMR & \begin{tabular}[c]{@{}c@{}}6.83\\ (1000)\end{tabular} & \begin{tabular}[c]{@{}c@{}}15.06\\ (900)\end{tabular} & \begin{tabular}[c]{@{}c@{}}2.17\\ (1000)\end{tabular} & \begin{tabular}[c]{@{}c@{}}4.99\\ (500)\end{tabular} \\ \hline
NMIFS & \begin{tabular}[c]{@{}c@{}}4.83\\ (600)\end{tabular} & \begin{tabular}[c]{@{}c@{}}2.1\\ (900)\end{tabular} & \begin{tabular}[c]{@{}c@{}}1.68\\ (900)\end{tabular} & \begin{tabular}[c]{@{}c@{}}4.73\\ (700)\end{tabular} \\ \hline
CMIM & \begin{tabular}[c]{@{}c@{}}6.50\\ (700)\end{tabular} & \begin{tabular}[c]{@{}c@{}}1.83\\ (400)\end{tabular} & \begin{tabular}[c]{@{}c@{}}2.17\\ (1000)\end{tabular} & \begin{tabular}[c]{@{}c@{}}3.83\\ (500)\end{tabular} \\ \hline
CMIM2 & \begin{tabular}[c]{@{}c@{}}6.65\\ (900)\end{tabular} & \begin{tabular}[c]{@{}c@{}}1.92\\ (900)\end{tabular} & \begin{tabular}[c]{@{}c@{}}1.50 \\ (1000)\end{tabular} & \begin{tabular}[c]{@{}c@{}}4.02\\ (600)\end{tabular} \\ \hline
\end{tabular}
\end{table}
Tables \ref{tab:uLBP_fea_feret} and \ref{tab:frgc_fea_ulbp} show the results on the FERET and FRGCv2 databases for single-morph detection using the best features selected from the \textbf{fusion of uLBP} (uLBP(8,1) up to uLBP(8,8)), applied to FaceFusion, FaceMorpher, OpenCV-Morph and UBO-Morpher. The reported results show an improvement over Experiment 1, which used all the features extracted from the images. The number of features is also reduced on average to about 10\% for the texture features.
\begin{table}[]
\scriptsize
\caption{D-EER in \% of Fusion uLBP + Fea / FERET. The figures in parenthesis represent the best number of features for each method.}
\label{tab:uLBP_fea_feret}
\begin{tabular}{|c|c|c|c|c|}
\hline
& \begin{tabular}[c]{@{}c@{}}FaceFusion\\ (bestFea)\end{tabular} & \begin{tabular}[c]{@{}c@{}}FaceMorpher\\ (bestFea)\end{tabular} & \begin{tabular}[c]{@{}c@{}}OpenCV-Morph\\ (bestFea)\end{tabular} & \begin{tabular}[c]{@{}c@{}}UBO-Morpher\\ (bestFea)\end{tabular} \\ \hline
mRMR & \begin{tabular}[c]{@{}c@{}}22.7\\ (200)\end{tabular} & \begin{tabular}[c]{@{}c@{}}12.92\\ (400)\end{tabular} & \begin{tabular}[c]{@{}c@{}}13.84\\ (400)\end{tabular} & \begin{tabular}[c]{@{}c@{}}23.37\\ (200)\end{tabular} \\ \hline
NMIFS & \begin{tabular}[c]{@{}c@{}}21.22\\ (100)\end{tabular} & \begin{tabular}[c]{@{}c@{}}11.84\\ (300)\end{tabular} & \begin{tabular}[c]{@{}c@{}}12.30\\ (400)\end{tabular} & \begin{tabular}[c]{@{}c@{}}23.98\\ (200)\end{tabular} \\ \hline
CMIM & \begin{tabular}[c]{@{}c@{}}21.53\\ (100)\end{tabular} & \begin{tabular}[c]{@{}c@{}}12.30\\ (200)\end{tabular} & \begin{tabular}[c]{@{}c@{}}11.84\\ (400)\end{tabular} & \begin{tabular}[c]{@{}c@{}}22.75\\ (200)\end{tabular} \\ \hline
CMIM2 & \begin{tabular}[c]{@{}c@{}}18.45\\ (100)\end{tabular} & \begin{tabular}[c]{@{}c@{}}10.45\\ (400)\end{tabular} & \begin{tabular}[c]{@{}c@{}}11.68\\ (400)\end{tabular} & \begin{tabular}[c]{@{}c@{}}12.11\\ (200)\end{tabular} \\ \hline
\end{tabular}
\end{table}
\begin{table}[]
\scriptsize
\caption{ D-EER in \% of Fusion uLBP + Fea / FRGCv2. The figures in parenthesis represent the best number of features for each method.}
\label{tab:frgc_fea_ulbp}
\begin{tabular}{|c|c|c|c|c|}
\hline
& \begin{tabular}[c]{@{}c@{}}FaceFusion\\ (bestFea)\end{tabular} & \begin{tabular}[c]{@{}c@{}}FaceMorpher\\ (bestFea)\end{tabular} & \begin{tabular}[c]{@{}c@{}}OpenCV-Morph\\ (bestFea)\end{tabular} & \begin{tabular}[c]{@{}c@{}}UBO-Morpher\\ (bestFea)\end{tabular} \\ \hline
mRMR & \begin{tabular}[c]{@{}c@{}}9.15 \\ (400)\end{tabular} & \begin{tabular}[c]{@{}c@{}}1.60\\ (200)\end{tabular} & \begin{tabular}[c]{@{}c@{}}8.54\\ (300)\end{tabular} & \begin{tabular}[c]{@{}c@{}}10.47\\ (300)\end{tabular} \\ \hline
NMIFS & \begin{tabular}[c]{@{}c@{}}9.65\\ (400)\end{tabular} & \begin{tabular}[c]{@{}c@{}}1.49\\ (200)\end{tabular} & \begin{tabular}[c]{@{}c@{}}4.50\\ (400)\end{tabular} & \begin{tabular}[c]{@{}c@{}}8.99\\ (400)\end{tabular} \\ \hline
CMIM & \begin{tabular}[c]{@{}c@{}}8.52\\ (300)\end{tabular} & \begin{tabular}[c]{@{}c@{}}1.09\\ (100)\end{tabular} & \begin{tabular}[c]{@{}c@{}}4.17\\ (200)\end{tabular} & \begin{tabular}[c]{@{}c@{}}8.70\\ (300)\end{tabular} \\ \hline
CMIM2 & \begin{tabular}[c]{@{}c@{}}7.53\\ (300)\end{tabular} & \begin{tabular}[c]{@{}c@{}}1.33\\ (200)\end{tabular} & \begin{tabular}[c]{@{}c@{}}3.99\\ (400)\end{tabular} & \begin{tabular}[c]{@{}c@{}}8.15\\ (400)\end{tabular} \\ \hline
\end{tabular}
\end{table}
Figure \ref{DET1-raw} shows the accuracy obtained for the UBO-Morpher tool when feature selection was applied to the \textbf{intensity} features. UBO-Morpher constitutes a high-quality morphing implementation and is therefore analysed on both the FERET and FRGCv2 databases. Conversely, FaceMorpher is the easiest method to detect, owing to the artefacts present in the images.
The mRMR and NMIFS methods, based on plain $MI$, obtained the weakest results. The methods based on conditional $MI$ (CMIM and CMIM-2) reached the best results. This shows that conditioning captures the complementary information between a selected feature and a candidate feature in a better way. CMIM with only 400 features and CMIM-2 with 1,000 features reached higher accuracy and lower D-EER.
\begin{figure}[H]
\centering
\caption{\label{DET1-raw} FRGCv2 Feature selection for intensity features. X axis represents the number of the best features. Y axis represents the Accuracy in \%.}
\includegraphics[scale=0.30]{images/DET/FRGC/RAW/FRGC_plot_int_ubo.png}
\end{figure}
Figure \ref{DET1-hog} shows the accuracy obtained for the UBO-Morpher tool when feature selection was applied to the \textbf{HOG} features. Again, the $MI$-based methods mRMR and NMIFS obtained the weakest results, while the conditional-$MI$ methods (CMIM and CMIM-2) reached the best results, with 500 and 600 features respectively.
\begin{figure}[]
\centering
\caption{\label{DET1-hog} FRGCv2 Feature selection for HOG features. X axis represents the number of the best features. Y axis represents the Accuracy in \%. }
\includegraphics[scale=0.38]{images/DET/FRGC/HOG/plot_ubo_hog_fea_frgc.png}
\end{figure}
Figure \ref{DET1-lbp} shows the accuracy obtained for the UBO-Morpher tool when feature selection was applied to the \textbf{fusion of uLBP}. This time NMIFS and CMIM reached the best results, with 300 and 400 features respectively, further consolidating conditional $MI$ over traditional $MI$.
\begin{figure}[]
\centering
\caption{\label{DET1-lbp} FRGCv2 Feature selection for uLBP\_All features. X axis represents the number of the best features. Y axis represents the Accuracy in \%.}
\includegraphics[scale=0.47]{images/DET/FRGC/LBP/plot_frgc_ubo_lbpfusion8188.png}
\end{figure}
Table \ref{tab:cmim2_frgc_hog} shows the D-EER for the HOG features with the best method, CMIM-2. Surprisingly, the shape feature (HOG) reached the best results, with the lowest D-EER, using CMIM-2 on the FRGCv2 database. FaceMorpher reached the lowest D-EER of 1.8\%, with a BPCER10 of 0.3\% and a BPCER20 of 1.0\%. Conversely, FaceFusion reached the highest D-EER of 5.8\%. The second column shows the comparison (D-EER) between the baseline HOG results using all HOG features and the proposed method using the features selected from HOG.
\begin{table}[]
\scriptsize
\centering
\caption{D-EER in \% for the best results reached by CMIM-2 using HOG.}
\label{tab:cmim2_frgc_hog}
\begin{tabular}{|c|c|c|c|}
\hline
FRGCv2 - HOG. & HOG / Fea+HOG & BPCER10 & BPCER20 \\
& (D-EER) & & \\ \hline
FaceFusion & 27.7/\textbf{5.8} & 3.7 & 7.7 \\ \hline
FaceMorpher & 29.1/\textbf{1.8} & 0.3 & 1.0 \\ \hline
OpenCV-Morpher & 28.6/\textbf{2.0} & 0.0 & 0.0 \\ \hline
UBO-Morpher & 26.4/\textbf{4.0} & 2.0 & 4.4 \\ \hline
\end{tabular}
\end{table}
Table \ref{tab:cmim2_feret_lbp} shows the D-EER for the uLBP\_ALL features with the best method, CMIM-2. FaceMorpher again reached the lowest D-EER of 1.3\%, with a BPCER10 of 0.3\% and a BPCER20 of 1.0\%. Conversely, UBO-Morpher reached the highest D-EER of 9.4\%, with a BPCER10 of 2.9\% and a BPCER20 of 13.8\%. The second column shows the comparison (D-EER) between the baseline results using only the fusion of uLBP features and the proposed method using the features selected from uLBP.
\begin{table}[]
\scriptsize
\centering
\caption{D-EER in \% for the best results reached by CMIM-2 using All\_LBP.}
\label{tab:cmim2_feret_lbp}
\begin{tabular}{|c|c|c|c|}
\hline
FRGCv2-uLBP & uLBP / Fea+uLBP & BPCER10 & BPCER20 \\
& (D-EER) & & \\ \hline
FaceFusion & 14.2/\textbf{9.2} & 7.4 & 20.0 \\ \hline
FaceMorpher & 17.3/\textbf{1.3} & 0.3 & 1.0 \\ \hline
OpenCV-Morpher & 16.2/\textbf{4.0} & 1.3 & 4.4 \\ \hline
UBO-Morpher & 11.7/\textbf{9.4} & 2.9 & 13.8 \\ \hline
\end{tabular}
\end{table}
Figure \ref{DET2} shows the DET curves obtained for the four feature selection methods on the three feature types (intensity, texture and shape). UBO-Morpher constitutes a high-quality morphing implementation and is evaluated on the FERET and FRGCv2 databases. Conversely, FaceMorpher is the easiest method to detect, owing to the artefacts present in the images. Feature selection applied to intensity values reached the weakest results; although they improve on the baseline, the D-EERs are not competitive with the literature. Conversely, uLBP and HOG improve substantially over the baseline and reach results competitive with the literature, as shown in Tables \ref{tab:cmim2_frgc_hog} and \ref{tab:cmim2_feret_lbp}.
\begin{figure}[]
\centering
\caption{\label{DET2} DET curves for FRGCv2 and FERET using Feature selection method. Top: RAW. Middle: HOG. Bottom: uLBP Fusion.}
\includegraphics[scale=0.14]{images/DET/FRGC/RAW/FRGC_det_plot_ubo_raw.png}
\includegraphics[scale=0.13]{images/DET/FRGC/HOG/plot_ubo_det_curve.png}\\
\includegraphics[scale=0.20]{images/DET/FRGC/LBP/plot_ubo_det_curve2_lbp8188.png}
\includegraphics[scale=0.19]{images/DET/FERET/RAW/Feret_det_plot_ubo_raw.png}\\
\includegraphics[scale=0.18]{images/DET/FERET/HOG/Feret_det_plot_ubo_hog.png}
\includegraphics[scale=0.19]{images/DET/FERET/LBP/Feret_det_plot_ubo_lbp.png}
\end{figure}
In order to compare and analyse which extracted feature delivers the most useful information for the detection task, Figures \ref{tab:cmim2_ulbp_RHL_feret} and \ref{tab:cmim2_ulbp_RHL_frgc} compare the best results obtained by CMIM-2 from intensity, shape (HOG) and texture (uLBP) on FERET and FRGCv2. Both figures show that HOG reached the lowest D-EER on both databases. This result shows that shape algorithms can also detect morphed images, complementing texture. Exploring the parameters to find the most representative inverse HOG features, and visualising them, allows us to improve the results, as shown in Figure \ref{fig:ihog}.
\begin{figure}[]
\centering
\caption{D-EER comparison of the features selected using CMIM from intensity, shape and texture for the FERET database. R represents RAW, H represents HOG, and L represents uLBP.}
\includegraphics[scale=0.33]{images/DET/FERET/feret_det_ubo_cmim.png}
\label{tab:cmim2_ulbp_RHL_feret}
\end{figure}
\begin{figure}[]
\centering
\caption{D-EER comparison of the features selected using CMIM from intensity, shape and texture for the FRGCv2 database. R represents RAW, H represents HOG, and L represents uLBP.}
\includegraphics[scale=0.30]{images/DET/FRGC_plot_IHL_ubo_cmim.png}
\label{tab:cmim2_ulbp_RHL_frgc}
\end{figure}
\section{Visualisation}
Once the best features are selected, it is possible to recover their coordinates in the images and visualise them for each method.
Figure \ref{vis_frgc} shows the localisation of the most relevant features for a random FRGCv2 image.
The 5,000 features were divided into five equal parts, each assigned a different colour. The most relevant features, from 1 to 1,000, are represented as red pixels; features 1,001 to 2,000 as pink; 2,001 to 3,000 as green; 3,001 to 4,000 as light green; and 4,001 to 5,000 as blue.
It is essential to highlight that the coloured pixels represent the best selected features, i.e. the most relevant and least redundant ones, from the four methods mRMR, NMIFS, CMIM, and CMIM-2, from 1,000 up to 5,000 features. The CMIM features are distributed across the whole image and concentrate only in some areas. CMIM-2 focuses the features on the most relevant areas: the eye and nose regions are selected as relevant for detecting morphed images.
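Mapping ranked intensity-feature indices back to pixel coordinates can be sketched as follows. The row-major layout and the $180\times240$ crop geometry (43,200 intensity features) are assumptions of ours, as is the function name:

```python
import numpy as np

def feature_rank_map(selected_idx, img_shape=(180, 240), n_bands=5):
    """Map ranked intensity-feature indices back to pixel coordinates and
    label them by rank band (ranks 1-1000 -> band 0, ..., 4001-5000 ->
    band 4), mirroring the five-colour visualisation. Unselected pixels
    are marked -1. Assumes row-major flattening of the face crop."""
    rank_map = np.full(img_shape, -1, dtype=int)
    band_size = len(selected_idx) // n_bands
    for rank, idx in enumerate(selected_idx):
        row, col = divmod(idx, img_shape[1])
        rank_map[row, col] = min(rank // band_size, n_bands - 1)
    return rank_map
```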
\begin{figure}[]
\centering
\textbf{mRMR}\par\medskip
\includegraphics[scale=0.31]{images/vis2/int_mrmr_FF.png}
\includegraphics[scale=0.31]{images/vis2/int_nmifs_FF.png}
\includegraphics[scale=0.31]{images/vis2/int_cmim_FF.png}
\includegraphics[scale=0.31]{images/vis2/int_cmim2_FF.png}
\\
\textbf{NMIFS}\par\medskip
\includegraphics[scale=0.31]{images/vis2/int_mrmr_FM.png}
\includegraphics[scale=0.31]{images/vis2/int_nmifs_FM.png}
\includegraphics[scale=0.31]{images/vis2/int_cmim_FM.png}
\includegraphics[scale=0.31]{images/vis2/int_cmim2_FM.png}
\\
\textbf{CMIM}\par\medskip
\includegraphics[scale=0.31]{images/vis2/int_mrmr_ocv.png}
\includegraphics[scale=0.31]{images/vis2/int_nmifs_ocv.png}
\includegraphics[scale=0.31]{images/vis2/int_cmim_ocv.png}
\includegraphics[scale=0.31]{images/vis2/int_cmim2_ocv.png}
\\
\textbf{CMIM-2}\par\medskip
\includegraphics[scale=0.31]{images/vis2/int_mrmr_ubo.png}
\includegraphics[scale=0.31]{images/vis2/int_nmifs_ubo.png}
\includegraphics[scale=0.31]{images/vis2/int_cmim_ubo.png}
\includegraphics[scale=0.31]{images/vis2/int_cmim2_ubo.png}\\
\textbf{FaceFusion~~~~~~FaceMorph.~~~FaceOpenCV~UBO-Morph.}\par\medskip
\caption{\label{vis_frgc} Localisation of the features selected by mRMR, NMIFS, CMIM and CMIM2 for the different morphing algorithms. Each image shows the best 5,000 features.}
\end{figure}
\section{Conclusion}
\label{conclusion}
After analysing all the results, we can conclude that morphs based on the FERET database are more challenging to detect than those based on the FRGCv2 database. The leave-one-out protocol is essential to estimate the actual performance of the proposed method; in the literature, the test set typically contains images from the same morphing tools as the training set.
Feature selection drastically reduces the number of features used to separate bona fide from morphed images and reduces the D-EER in all cases. For the features selected from HOG, the D-EER decreased from 26.4\% (baseline) to 4.0\% for UBO-Morpher, reaching a BPCER10 of 2.0\%. For the features chosen from the fusion of uLBP, the D-EER decreased from 11.7\% (baseline) to 9.4\%, obtaining a BPCER10 of 2.9\%. These results are very competitive with the state of the art.
The localisation of the features enabled us to select the most relevant and least redundant ones. The nose and eye regions are identified as relevant facial areas for the manual analysis of morphed images. This tool may help border police detect morphed images and indicate the areas to be analysed for artefacts.
In summary, the shape-feature (HOG) results outperform the texture performance, as shown in Figures \ref{tab:cmim2_ulbp_RHL_feret} and \ref{tab:cmim2_ulbp_RHL_frgc}.
In future work, we will apply this method to embedding features extracted from a face recognition system in order to choose the best features.
\section*{Acknowledgment}
This work is supported by the European Union’s Horizon 2020 research and innovation program under grant agreement No 883356 and the German Federal Ministry of Education and Research and the Hessen State Ministry for Higher Education, Research and the Arts within their joint support of the National Research Center for Applied Cybersecurity ATHENE.
\section*{Disclaimer}
This text reflects only the author’s views, and the Commission is not liable for any use that may be made of the information contained therein.
\bibliographystyle{IEEEtran}
The cosmic microwave background (CMB) is nowadays an essential tool of
theoretical and observational cosmology. Recent advances in the observations of
the CMB angular fluctuations in temperature and polarization
\citep[e.g. ][]{WMAP5-basic,WMAP5-params} provide a detailed description of the
global properties of the Universe, and the cosmological parameters are currently
known with accuracies of the order of few percent in many cases.
The experimental prospect for the {\sc Planck} satellite \citep{Planck2006},
which was launched on May 14th 2009, is to achieve the most detailed picture of
the CMB anisotropies down to angular scales of $\ell \sim 2500$ in temperature
and $\ell \sim 1500$ in polarization. These data will achieve sub-percent
precision in many cosmological parameters. However, those high accuracies will
rely on a highly precise description of the theoretical predictions for the
different cosmological models. Currently, it is widely recognised that the major
limiting factor in the accuracy of angular power spectrum calculations is the
uncertainty in the ionization history of the Universe \citep[see ][]{hu1995,
Seljak2003}.
This has motivated several groups to re-examine the problem of cosmological
recombination \citep{Zeldovich1968, Peebles1968}, taking into account detailed
corrections to the physical processes occurring during hydrogen
\citep[e.g.][]{Dubrovich2005, RHS2005, Chluba2006, Kholupenko2006, Rubino2006,
Chluba2007, Chluba2007a, Karshenboim2008, Hirata2008, Chluba2009c,
Chluba2009b, Chluba2009, Jentschura2009, Labzowsky2009, HirataForbes2009} and
helium recombination \citep[e.g][]{Kholupenko2007, Wong2007, Switzer2008a,
Switzer2008b, HirataSwi2008, Rubino2008, Kholupenko2008, Chluba2009d}.
Each one of the aforementioned corrections individually leads to changes in the
ionization history at the level of $\ga 0.1$\%, in such a way that the
corresponding overall uncertainty in the CMB angular power spectra exceeds the
benchmark of $\pm 3/\ell$ at large $\ell$ \citep[for more details,
see][hereafter FCRW09]{rico}, thus biasing any parameter constraints inferred
by experiments like {\sc Planck}, which will be cosmic variance limited up to
very high multipoles.
The standard description of the recombination process is provided by the widely
used {\sc Recfast} code \citep{Seager1999}, which uses effective three-level
atoms, both for hydrogen and helium, with the inclusion of a conveniently chosen
{\it fudge factor} which artificially modifies the dynamics of the process to
reproduce the results of a multilevel recombination code \citep{Seager2000}.
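As a minimal sketch of how such a fudge factor enters, the snippet below rescales a case-B recombination coefficient of the Pequignot-type power-law form used by {\sc Recfast}; the coefficients are indicative only, and the real code of course treats hydrogen and helium with many further refinements.

```python
# Sketch: the Recfast hydrogen "fudge factor" simply rescales the case-B
# recombination coefficient entering the effective three-level (Peebles)
# rate equation. The power-law fit below is of the Pequignot type used in
# Recfast; treat the exact coefficients as indicative, not authoritative.

F_FUDGE = 1.14  # hydrogen fudge factor adopted in Recfast


def alpha_B(T):
    """Case-B recombination coefficient [m^3 s^-1] for temperature T [K]."""
    t = T / 1.0e4
    return 1.0e-19 * 4.309 * t ** -0.6166 / (1.0 + 0.6703 * t ** 0.5300)


def alpha_B_effective(T, fudge=F_FUDGE):
    """Coefficient actually used in the rate equation: fudge * alpha_B."""
    return fudge * alpha_B(T)
```

The point is purely structural: the fudge factor multiplies $\alpha_{\rm B}$ in the rate equation, thereby artificially speeding up the late stages of hydrogen recombination without adding any new physics.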
The simultaneous evaluation of all the new effects discussed above makes the
numerical computations very time-consuming, as they currently require the
solution of the full multilevel recombination code. Moreover, some of the key
ingredients in the accurate evaluation of the recombination history (e.g. the
problem of radiative transfer in hydrogen and the proper inclusion of two-photon
processes) are solved using computationally demanding approaches, although in
some cases semi-analytical approximations \citep[see e.g.][]{Hirata2008} might
open the possibility of a more efficient evaluation in the future.
In order to have an accurate and fast representation of the cosmological
recombination history as a function of the cosmological parameters, two possible
approaches have been considered in the literature. The first one consists of the
inclusion of additional fudge factors to mimic the new physics, as recently done
in \cite{Wong2008} (see {\sc Recfast} v1.4.2), where they include an additional
fudge factor to modify the dynamics of helium recombination.
The second approach is the so-called {\sc Rico} code (FCRW09), which provides an
accurate representation of the recombination history by using a regression
scheme based on {\sc Pico} \citep{Fendt2007a,Fendt2007}. The {\sc Rico} code
smoothly interpolates the $X_{\rm e}(z; \vec{p})$ function on a set of
pre-computed recombination histories for different cosmologies, where $z$ is the
redshift and $\vec{p}$ represents the set of cosmological parameters.
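To illustrate the idea (this is a toy stand-in, not the actual {\sc Pico} regression scheme), one can fit, at each redshift bin, a low-order polynomial in the cosmological parameters to a grid of pre-computed histories, and then evaluate the fit smoothly for a new cosmology:

```python
import numpy as np

# Toy stand-in for the Rico idea (NOT the actual Pico regression scheme):
# at each redshift bin, fit a quadratic polynomial in the cosmological
# parameters p to a set of pre-computed histories X_e(z; p), then evaluate
# the fit smoothly for new parameter vectors.


def design_matrix(params):
    """Quadratic design matrix in the parameters (hypothetical choice)."""
    p = np.atleast_2d(params)
    cols = [np.ones(len(p))]
    for i in range(p.shape[1]):
        cols.append(p[:, i])
        for j in range(i, p.shape[1]):
            cols.append(p[:, i] * p[:, j])
    return np.column_stack(cols)


def train(param_grid, xe_grid):
    """Least-squares fit of the coefficients, one set per redshift bin."""
    A = design_matrix(param_grid)                 # (n_models, n_terms)
    coeffs, *_ = np.linalg.lstsq(A, xe_grid, rcond=None)
    return coeffs                                 # (n_terms, n_zbins)


def predict_xe(coeffs, params):
    """Smooth evaluation of X_e(z; p) for a new parameter vector."""
    return design_matrix(params) @ coeffs
```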
In this paper, we present the results for parameter estimations using {\sc Rico}
with the most recent training set presented in FCRW09.
This permits us to accurately account for the full cosmological dependence of
the corrections to the recombination history that were included in the
multi-level recombination code which was used for the training of {\sc Rico}
(see Sect.~\ref{sec:rico} for more details).
With this tool, we have evaluated the impact of the corrections on the posterior
probability distributions that are expected to be obtained for the {\sc Planck}
satellite, by performing a complete Monte Carlo Markov Chain analysis.
The study of these posteriors has shown that the impact of the cosmology
dependence is not very relevant for those processes included into the current
{\sc Rico} training set.
Therefore, assuming that the cosmology dependence of the corrections can in
general be neglected, we have also investigated the impact of recent
improvements in the treatment of hydrogen recombination (see
Sect.~\ref{sec:Lya}).
The basic conclusion is that, with our current understanding of the
recombination process, the expected biases in the cosmological parameters
inferred from {\sc Planck} might be as large as 1.5-2.5 sigmas for some
parameters, such as the baryon density or the primordial spectral index of scalar
fluctuations, if all these corrections to the recombination history are
neglected.
The paper is organized as follows. Sect.~\ref{sec:physics} describes the current
training set for {\sc Rico}, and provides an updated list of physical processes
during recombination which were not included in FCRW09. Sect.~\ref{sec:impact}
presents the impact of the recombination uncertainties on cosmological parameter
estimation, focusing on the case of the {\sc Planck}
satellite. Sect.~\ref{sec:additional} further extends this study to account for
the remaining recombination uncertainties described in Sect.~\ref{sec:physics}.
Sect.~\ref{sec:current} presents the analysis of present-day CMB experiments,
for which the effect is shown to be negligible. Finally, the discussion and
conclusions are presented in sections \ref{sec:discussion} and
\ref{sec:conclusions}, respectively.
\section{Updated list of physical processes during recombination}
\label{sec:physics}
In this Section we provide an updated overview on the important physical
processes during cosmological recombination which have been discussed in the
literature so far.
We start with a short summary of those processes which are already included into
the current training set (FCRW09) of {\sc Rico} (Sect.~\ref{sec:rico}).
The corresponding correction to the ionization history close to the maximum of
the Thomson visibility function is shown in Fig.~\ref{fig:remaining_xe}.
We then explain the main recent advances in connection with the radiative
transfer calculations during hydrogen recombination (Sect.~\ref{sec:Lya}), which
lead to another important correction to the cosmological ionization history (see
Fig.~\ref{fig:remaining_xe}) that is not yet included into the current training
set of {\sc Rico}.
However, as we explain below (Sect.~\ref{sec:corr_fac}), it is possible to take
these corrections into account (Sect.~\ref{sec:additional}), provided that
their cosmology dependence is negligible. Our computations show that this may
indeed be a valid approximation (Sect.~\ref{sec:corr_fac}).
We end this section mentioning a few processes that have been recently addressed
but seem to be of minor importance in connection with parameter estimations for
{\sc Planck}.
Overall it seems that the list of processes that could be of importance in
connection with {\sc Planck} is nearly complete.
However, some effort still has to go into the cross-validation of the results
obtained by different groups.
\subsection{The current training set for {\sc Rico}}
\label{sec:rico}
As demonstrated in FCRW09, {\sc Rico} can be used to represent the recombination
history of the Universe, accurately capturing the full cosmology dependence and
physical model of the multilevel recombination code that was used in the
computations of the {\sc Rico} training set.
For the current {\sc Rico} training set we ran our full recombination code using
a $75$-shell model for the hydrogen atom. The physical processes which are
included during hydrogen recombination are described in detail in FCRW09: the
induced 2s-1s two-photon decay \citep{Chluba2006}; the feedback of Lyman
$\alpha$ photons on the 1s-2s absorption rate \citep{Kholupenko2006}; the
non-equilibrium populations in the angular momentum sub-states \citep{Rubino2006,
Chluba2007}; and the effect of Lyman series feedback \citep{Chluba2007a}.
For helium recombination we took into account: the spin-forbidden {He}~{\sc i}
$2^3{\rm P}_1-1^1{\rm S}_0$ transition \citep{Dubrovich2005}; and the
acceleration of helium recombination by neutral hydrogen \citep{Switzer2008a}.
Furthermore, we also updated our physical constants according to the {\sc Nist}
database\footnote{http://www.nist.gov/, 2008 May.}, including the new value of
the gravitational constant and the helium to hydrogen mass ratio
\citep{Wong2007}.
A more detailed description of the physical processes that were taken into
account in the current {\sc Rico} training set can be found in FCRW09.
This {\sc Rico} training set is now publicly available at
\verb+http://cosmos.astro.uiuc.edu/rico+.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{dne_rico_cs2009_raman.eps}
\caption{Recombination uncertainties with respect to the standard {\sc Recfast
    v1.4.2} code. For the fiducial cosmological model, we show the
    correction to the recombination history which is incorporated in the new {\sc
    Rico} training set (dotted line); the correction due to Ly$\alpha$ radiative
    transfer effects \citep{Chluba2009c, Chluba2009b, Chluba2009} (dot-dashed);
    the effect of Raman scattering \citep{Hirata2008} (dashed); and the
    combination of all previous effects (solid line). }
\label{fig:remaining_xe}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{dcl_rico_cs2009_raman.eps}
\caption{Same as Fig.~\ref{fig:remaining_xe}, but for the angular power
    spectrum (APS). For the fiducial cosmological model, we show the
    correction to the APS due to the recombination history which is incorporated
    in the new {\sc Rico} training set (gray lines), and the correction due to the
    additional inclusion of Ly$\alpha$ radiative transfer effects
    \citep{Chluba2009c, Chluba2009b, Chluba2009} and Raman scattering
    \citep{Hirata2008} (dark lines). }
\label{fig:remaining_aps}
\end{figure}
\subsection{Updated radiative transfer calculations during hydrogen recombination}
\label{sec:Lya}
As already pointed out earlier \citep[e.g.][]{Chluba2008a}, a detailed
treatment of the hydrogen Lyman $\alpha$ radiative transfer problem, including
two-photon corrections, is expected to lead to an important additional
modification during hydrogen recombination.
An overview of the relevant physical aspects in connection with this problem was
already given in Sect. 2.3.2 and 2.3.3 of FCRW09. However, at that time the
problem had not yet been solved at full depth; recently several important steps
were taken, which we now discuss briefly \citep[for an additional
overview see also][]{Sunyaev2009}.
\subsubsection{Partial frequency redistribution due to line scattering}
\label{sec:Lya-recoil}
Recently, the effect of partial frequency redistribution on the Lyman $\alpha$
escape rate and the ionization history during hydrogen recombination was
independently studied in detail by \citet{Chluba2009c} and
\citet{HirataForbes2009}.
As shown by \citet{Chluba2009c}, the {\it atomic recoil} effect leads to the
dominant contribution to the associated correction in the ionization history,
and the result for this process alone seems to be in good agreement with the one
obtained earlier by \citet{Grachev2008}, leading to $\Delta N_{\rm e}/N_{\rm
  e}\sim -1.2\%$ at $z\sim 900$.
The computations by \citet{HirataForbes2009} also seem to support this
conclusion.
However, \citet{Chluba2009c} also included the effect of {\it Doppler
broadening} and {\it Doppler boosting}\footnote{They used a Fokker-Planck
approximation \citep[e.g.][]{Rybicki2006} for the frequency redistribution
function.}, which was neglected in the analysis of \citet{Grachev2008}.
Doppler boosting acts in the opposite direction to atomic recoil and therefore
decelerates recombination, while the effect of Doppler broadening can lead to
either an increase or a decrease in the photon escape probability, depending on
the initial frequency of the photons \citep[see][for a more detailed
explanation]{Chluba2009c}. The overall correction to the recombination
history due to line scattering amounts to $\Delta N_{\rm e}/N_{\rm e}\sim
-0.6\%$ at $z\sim 900$.
The results of \citet{Chluba2009c} seem to be rather similar to those of
\citet{HirataForbes2009}; however, a final comparison will be necessary in
order to reach full agreement on the final correction.
In the computations presented below we will use the results of
\citet{Chluba2009c}.
\subsubsection{Two-photon transitions from higher levels}
\label{sec:2gamma}
The problem of two-photon transitions from highly excited levels in
hydrogen and helium was initially proposed by \citet{Dubrovich2005}. However,
for hydrogen recombination this problem has only very recently been solved
convincingly by \citet{Hirata2008} and \citet{Chluba2009b}, using two
independent, conceptually different approaches.
So far, \citet{Chluba2009b} took into account the main contribution, coming
from the 3d-1s and 3s-1s two-photon profile corrections, while
\citet{Hirata2008} also included the $n$s-1s and $n$d-1s two-photon profile
corrections for larger $n$.
In the analysis of \citet{Chluba2009b}, {\it three independent} sources for the
corrections in connection with the two-photon picture were identified (we will
discuss the other two processes in Sect.~\ref{sec:thermo} and \ref{sec:time}).
As they explain, the total modification coming from {\it purely} quantum
mechanical aspects of the problem (i.e. corrections due to deviations of the
line profiles from the normal Lorentzian shape, as pointed out by
\citet{Chluba2008a}) leads to a change in the free electron number of $\Delta
N_{\rm e}/N_{\rm e}\sim -0.4\%$ at $z\sim 1100$.
It also seems clear that the remaining small difference (at the level of $\Delta
N_{\rm e}/N_{\rm e}\sim 0.1\%-0.2\%$) between the results of \citet{Hirata2008}
and \citet{Chluba2009b} for the total correction related to the two-photon
decays from excited states arises because \citet{Chluba2009b} only included the
full two-photon profiles of the 3s-1s and 3d-1s channels.
Below we will use the results of \citet{Chluba2009b} for this process.
\subsubsection{Time-dependent aspects in the emission and absorption of Lyman $\alpha$ photons}
\label{sec:time}
One of the key ingredients for the derivation of the escape probability in the
Lyman $\alpha$ resonance using the Sobolev approximation \citep{Sobolev1960} is
the {\it quasi-stationarity} of the line transfer problem. However, as shown
recently \citep{Chluba2009, Chluba2009b} at the percent-level this assumption is
not justified during the recombination of hydrogen, since (i) the ionization
degree, expansion rate of the Universe and Lyman $\alpha$ death probability
change over a characteristic time $\Delta z/z\sim 10\%$, and (ii) because a
significant contribution to the total escape probability is coming from photons
emitted in the distant wings (comparable to $10^2-10^3$ Doppler width) of the
Lyman $\alpha$ resonance.
Therefore, one has to include {\it time-dependent aspects} in the emission and
absorption process into the line transfer problem, leading to a delay of
recombination by $\Delta N_{\rm e}/N_{\rm e}\sim +1.2\%$ at $z\sim 1000$.
Below we will use the results of \citet{Chluba2009b}.
\subsubsection{Thermodynamic asymmetry in the Lyman $\alpha$ emission and absorption profile}
\label{sec:thermo}
As explained by \citet{Chluba2009b}, the largest correction related to the
two-photon formulation of the Lyman $\alpha$ transfer problem is due to the {\it
frequency-dependent asymmetry} between the emission and absorption profile
around the Lyman $\alpha$ resonance.
This asymmetry is given by a thermodynamic correction factor, which has an
exponential dependence on the detuning from the line center, i.e. $f_\nu \propto
\exp[h(\nu-\nu_{\alpha})/kT_{\gamma}]$, where $\nu_\alpha$ is the transition
frequency for the Lyman $\alpha$ resonance.
Usually this factor can be neglected, since for most astrophysical problems the
main contribution to the number of photons comes from within a few Doppler
widths of the line center, where the thermodynamic factor is indeed very close
to unity. However, in the Lyman $\alpha$ escape problem during hydrogen
recombination contributions from the very distant damping wings are also
important \citep{Chluba2009, Chluba2009b}, so that there $f_\nu\neq 1$ has to be
included.
As explained by \citet{Chluba2008a}, the thermodynamic factor can also be
obtained in the classical picture, using the {\it detailed balance
  principle}. However, in the two-photon picture this factor has a natural
explanation in connection with the absorption of photons from the CMB blackbody
ambient radiation field \citep{Chluba2009b, Sunyaev2009}.
This process leads to a $\sim 10\%$ increase in the Lyman $\alpha$ escape
probability, and hence accelerates hydrogen recombination. For the correction to
the ionization history, \citet{Chluba2009b} obtained $\Delta N_{\rm e}/N_{\rm
  e}\sim -1.9\%$ at $z\sim 1100$.
Note also that in the analysis of \citet{Chluba2009b} the thermodynamic
correction factor was included for all $n$s-1s and $n$d-1s channels with $3\leq
n \leq10$.
Below we will use the results of \citet{Chluba2009b} for this process.
\subsubsection{Raman scatterings}
\label{sec:Raman}
\citet{Hirata2008} also studied the effect of Raman scatterings on the
recombination dynamics, leading to an additional delay of hydrogen recombination
by $\Delta N_{\rm e}/N_{\rm e}\sim 0.9\%$ at $z\sim 900$.
Here in particular the correction due to 2s-1s Raman scatterings is important.
It is expected that a large part of this correction can again be attributed to
time-dependent aspects and to the correct formulation using detailed balance;
we are currently investigating this process in detail.
In the computations presented below we will use the results of
\citet{Hirata2008} for the effect of Raman scatterings.
\subsection{Additional processes}
\label{sec:addproc}
There are a few more processes that we only want to mention very briefly here
(although with this the list is not meant to be absolutely final or complete),
and which we did not account for in the computations presented in this
paper. However, it is expected that their contribution will not be very
important.
\subsubsection{Effect of electron scattering}
\label{sec:e-scatt}
The effect of {\it electron scattering} during hydrogen recombination was also
recently investigated by \citet{Chluba2009b} using a Fokker-Planck
approach. This approximation for the frequency redistribution function may not
be sufficient towards the end of hydrogen recombination, but the overall
correction to the ionization history was very small close to the maximum of the
Thomson visibility function, so that no big differences are expected when using
a more accurate scattering-kernel approach.
Very recently \citet{Haimoud2009} showed that this statement indeed seems to be
correct.
\subsubsection{Feedback of helium photons}
\label{sec:feedbackHe}
Very recently \citet{Chluba2009d} investigated the feedback problem of helium
photons including the processes of $\gamma(\ion{He}{i})\rightarrow \ion{He}{i}$,
$\gamma(\ion{He}{i})\rightarrow \ion{H}{i}$, $\gamma(\ion{He}{ii})\rightarrow
\ion{He}{i}$ and $\gamma(\ion{He}{ii})\rightarrow \ion{H}{i}$ feedback.
They found that only $\gamma(\ion{He}{i})\rightarrow \ion{He}{i}$ feedback leads
to some small correction ($\Delta N_{\rm e}/N_{\rm e}\sim +0.17\%$ at $z\sim
2300$) in the ionization history, while all the other helium feedback induced
corrections are negligible.
This is because the $\gamma(\ion{He}{i})\rightarrow \ion{H}{i}$,
$\gamma(\ion{He}{ii})\rightarrow \ion{He}{i}$ and
$\gamma(\ion{He}{ii})\rightarrow \ion{H}{i}$ feedback processes all occur in the
{\it pre-recombinational epochs} of the considered species, where the
populations of the levels are practically in full equilibrium with the free
electrons and ions.
The $\gamma(\ion{He}{i})\rightarrow \ion{He}{i}$ feedback process was already
studied by \citet{Switzer2008a}, but the result obtained by \citet{Chluba2009d}
seems to be smaller.
However, it is clear that any discrepancy in the helium recombination history at
the 0.1\% - 0.2\% level will not be very important for the analysis of future
CMB data.
We would also like to mention that although the $\gamma(\ion{He}{i})\rightarrow
\ion{H}{i}$, $\gamma(\ion{He}{ii})\rightarrow \ion{He}{i}$ and
$\gamma(\ion{He}{ii})\rightarrow \ion{H}{i}$ feedback processes do not affect
the ionization history, they do introduce interesting changes in the
recombinational radiation, increasing the total contribution of photons from
helium by 40\% - 70\% \citep{Chluba2009d}.
\subsubsection{Other small corrections}
\label{sec:othercorrs}
Recently, several additional processes during hydrogen recombination were
discussed. These include: the overlap of Lyman series resonances caused by the
thermal motion of the atoms \citep{Haimoud2009}; quadrupole transitions in
hydrogen \citep{Grin2009}; hydrogen deuterium recombination \citep{Fung2009,
Kholupenko2009}; and 3s-2s and 3d-2s two-photon transitions
\citep{Chluba2008a}.
All these processes seem to affect the ionization history of the Universe at a
level (well) below 0.1\%.
Furthermore, one should include the small re-absorption of photons from the
2s-1s two-photon continuum close to the Lyman $\alpha$ resonance, where our
estimates show that this leads to another $\Delta N_{\rm e}/N_{\rm e}\sim
0.1\%-0.2\%$ correction.
\subsection{Overall correction}
\label{sec:final}
Figure~\ref{fig:remaining_xe} shows the current best-estimate of the remaining
overall correction to the recombination history, while
Fig.~\ref{fig:remaining_aps} translates these corrections into changes of the
angular power spectrum.
To describe the Ly$\alpha$ transfer effects, we adopt the curve presented in
\cite{Chluba2009c}, which includes the results of the processes investigated by
\citet{Chluba2009b} and \citet{Chluba2009}.
We also include the effect of Raman scattering on hydrogen recombination as
described in \cite{Hirata2008}.
It seems that the remaining uncertainty due to processes that were not taken
into account here can still exceed the 0.1\% level, but will likely not lead to
any further significant addition.
However, it is clear that a final rigorous comparison of the total result from
different independent groups will become necessary to ensure that the accuracy
required for the analysis of {\sc Planck} data will be reached.
Such detailed code comparison is currently under discussion.
\section{Impact of recombination uncertainties on cosmological parameter estimation}
\label{sec:impact}
Here, the {\sc Rico} code together with the latest training set described above
is used to evaluate the impact of the recombination uncertainties in the
cosmological parameter constraints inferred from CMB experiments.
All our analyses use the software packages {\sc
Camb}\footnote{http://camb.info/} \citep{camb} and {\sc
CosmoMC}\footnote{http://cosmologist.info/cosmomc/} \citep{cosmomc}.
{\sc Camb} is used to calculate the linear-theory CMB angular power spectrum,
and has been modified here to include the recombination history as described by
{\sc Rico}.
{\sc CosmoMC} uses a Monte Carlo Markov Chain (MCMC) method to sample the
posterior distribution of the cosmological parameters from a given likelihood
function which describes the experimental constraints on the CMB angular power
spectrum.
The default parametrization which is included inside {\sc CosmoMC} exploits some
of the intrinsic degeneracies in the CMB angular power spectrum \citep[see
e.g.][]{Kosowsky_params}, and uses as basic parameters the following subset:
\begin{equation}
\vec{p}_{\rm std} = \{ {\Omega_{\rm b}h^2}, {\Omega_{\rm dm}h^2}, \theta, \tau, n_{\rm S}, \log(10^{10} A_{\rm S}) \}.
\end{equation}
Here, ${\Omega_{\rm b}h^2}$ and ${\Omega_{\rm dm}h^2}$ are the (physical) baryon and dark matter
densities respectively, where $h$ stands for the Hubble constant in units of
100~km~s$^{-1}$~Mpc$^{-1}$; $\theta$ is the acoustic horizon angular scale;
$\tau$ is the Thomson optical depth to reionization; and $n_{\rm S}$ and $A_{\rm S}$
are the spectral index and the amplitude of the primordial (adiabatic) scalar
curvature perturbation power spectrum at a certain scale $k_0$. For the
computations in this paper, we will use $k_0=0.05$~Mpc$^{-1}$.
It is important to note that the above parametrization $\vec{p}_{\rm std}$ makes
use of an approximate formula for the sound horizon \citep{1996ApJ...471..542H},
whose derivation relies on some knowledge of the recombination history as
provided by {\sc Recfast}. Therefore, the use of this parameter is not
appropriate if we are changing the recombination history, as this might
introduce artificial biases. For this reason, in this paper we have modified
{\sc CosmoMC} to use a new parametrization $\vec{p}_{\rm new}$, which is defined
as
\begin{equation}
\vec{p}_{\rm new} = \{ {\Omega_{\rm b}h^2}, {\Omega_{\rm dm}h^2}, H_0, \tau, n_{\rm S}, \log(10^{10} A_{\rm S}) \},
\label{eq:newpar}
\end{equation}
where we have replaced $\theta$ by $H_0$ as a basic parameter. In this way, we
still exploit some of the well-known degeneracies (e.g. that between $\tau$ and
$A_{\rm S}$), but we eliminate the possible uncertainties which may arise from
the use of $\theta$, at the expense of decreasing slightly the speed at which
the chain converges.
As we show below, both the shape of the posteriors and the confidence levels for
the rest of the parameters are practically unaffected by this modification in the
parametrization, despite the fact that the {\sc CosmoMC} code now assumes a
flat prior on $H_0$ instead of a flat prior on $\theta$. A modified version of
the \verb!params.f90! subroutine inside {\sc CosmoMC} which uses this new
parametrization is also available on the {\sc Rico} webpage.
Finally, concerning the convergence of the chains, all computations throughout
this paper were obtained using at least five independent chains; those chains
have been run until the \citet{Gelman92} convergence criterion $R-1$ yields a
value smaller than $0.005$ for the minimal (6-parameter) case, and values
smaller than $0.02-0.2$ for those cases with a larger number of parameters.
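For reference, a simplified version of the \citet{Gelman92} potential scale reduction factor $R$ for a single parameter reads as follows ({\sc CosmoMC}'s actual implementation differs in detail):

```python
import numpy as np

# Simplified Gelman-Rubin potential scale reduction factor R for a single
# parameter sampled by several chains. Sketch of the convergence test only;
# CosmoMC's actual implementation differs in detail.


def gelman_rubin(chains):
    """chains: array of shape (m_chains, n_samples) for one parameter."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)           # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # within-chain variance
    var_plus = (n - 1) / n * W + B / n        # pooled variance estimate
    return np.sqrt(var_plus / W)
```

Well-mixed chains give $R-1$ close to zero; the runs described above demand $R-1<0.005$ for the six-parameter case.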
\subsection{Impact on parameter estimates for {\sc Planck} alone}
\label{sec:planck_alone}
For the case of the {\sc Planck} satellite, the mock data is prepared as follows.
We assume a Gaussian symmetric beam, and the noise is taken to be uniform across
the sky. For definiteness, we have adopted the nominal values of the beam and
pixel noise which correspond to the 143~GHz {\sc Planck} band, as described in
\cite{Planck2006}. Thus, we have $\theta_{\rm beam}= 7.1'$, $w_T^{-1} =
\sigma_{\rm noise}^2 \Omega_{\rm beam} = \pot{1.53}{-4}$~$\mu$K$^2$ and
$w_P^{-1}=\pot{5.59}{-4}$~$\mu$K$^2$, where $w_T$ and $w_P$ indicate the
intensity and polarization sensitivities, respectively.
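With these numbers, the mock noise enters through the standard white-noise spectrum for a Gaussian beam, $N_\ell = w^{-1}\exp[\ell(\ell+1)\sigma_{\rm b}^2]$ with $\sigma_{\rm b}=\theta_{\rm beam}/\sqrt{8\ln 2}$; the following sketch evaluates it (illustrative only):

```python
import numpy as np

# Standard white-noise power spectrum for a Gaussian beam, using the nominal
# 143 GHz Planck values quoted in the text. This is a sketch of how the mock
# noise enters the likelihood, not the actual Planck noise model.

arcmin = np.pi / (180.0 * 60.0)      # arcmin -> radians
theta_fwhm = 7.1 * arcmin            # beam FWHM
w_T_inv = 1.53e-4                    # uK^2 (temperature)
w_P_inv = 5.59e-4                    # uK^2 (polarization)

sigma_b = theta_fwhm / np.sqrt(8.0 * np.log(2.0))


def noise_cl(ell, w_inv):
    """Beam-deconvolved noise spectrum N_ell = w^-1 exp[l(l+1) sigma_b^2]."""
    ell = np.asarray(ell, dtype=float)
    return w_inv * np.exp(ell * (ell + 1) * sigma_b ** 2)
```

At low $\ell$ the noise is flat at the level of $w^{-1}$, while the beam deconvolution makes it blow up exponentially beyond $\ell\sim 1/\sigma_{\rm b}$, which is what limits {\sc Planck} at high multipoles.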
Note that using a single frequency channel implicitly assumes that the
remaining {\sc Planck} frequencies have been used to fully remove the foreground
contamination from this reference 143~GHz channel. For a complete discussion on
which combination of {\sc Planck} channels is more appropriate in terms of the
parameter constraints, see \cite{Colombo2009}.
The fiducial cosmological model used for these computations corresponds to the
WMAP5 cosmology \citep{WMAP5-basic}, and has parameters ${\Omega_{\rm b}h^2} = 0.02273$,
${\Omega_{\rm dm}h^2}=0.1099$, $h=71.9$, $\tau = 0.087$, $n_{\rm S}= 0.963$ and an amplitude
$\pot{2.41}{-9}$ at $k=0.002$~Mpc$^{-1}$ (or equivalently, $A_{\rm S} =
\pot{2.14}{-9}$ at $k=0.05$~Mpc$^{-1}$, or $\log(10^{10} A_{\rm S}) = 3.063$).
The mock data is then produced using the cosmological recombination history as
computed with our complete multi-level recombination code for this particular
cosmology (using $n=75$ shells to model the hydrogen atom).
The shape of the likelihood function adopted here corresponds to the exact
full-sky Gaussian likelihood function given in \cite{Lewis2005}, which is
implemented in {\sc CosmoMC} using the \verb+all_l_exact+ format for the
data\footnote{See also http://cosmocoffee.info/viewtopic.php?t=231 for more
  details.}. We note that for the mock {\sc Planck} data we do not use a
simulation but the actual (fiducial) angular power spectrum.
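For the temperature part alone, this exact full-sky likelihood reduces to the familiar form below (a sketch; the actual \verb+all_l_exact+ format also couples temperature and polarization through the determinants of the per-$\ell$ covariance matrices):

```python
import numpy as np

# Temperature-only sketch of the exact full-sky Gaussian likelihood,
# normalized so that it vanishes when the model spectrum equals the
# fiducial one. Both spectra should include the noise contribution N_ell.


def minus2lnL_TT(C_hat, C_model, ell_min=2):
    """-2 ln L for arrays of C_ell + N_ell starting at ell = ell_min."""
    ells = np.arange(ell_min, ell_min + len(C_hat))
    ratio = C_hat / C_model
    # (2l+1) [ Chat/C - ln(Chat/C) - 1 ] >= 0, with equality at Chat = C
    return np.sum((2 * ells + 1) * (ratio - np.log(ratio) - 1.0))
```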
\begin{figure*}
\centering
\includegraphics[height=15cm,angle=90]{planck_6par_2d_final.ps}
\caption{Impact on parameter estimates for the {\sc Planck} satellite, in the
    six-parameter case. The dark solid lines show the 1-D and 2-D posterior
    distributions which are obtained when the {\sc Rico} code is used to describe
    the recombination history. The light dashed lines show the same posteriors
    obtained with the {\sc Recfast} code. The biases are evident in $n_{\rm S}$,
    ${\Omega_{\rm b}h^2}$ and $\log(10^{10} A_{\rm S})$. }
\label{fig:planck}
\end{figure*}
\begin{table*}
\centering
\caption{Impact of recombination uncertainties on the confidence limits of the
cosmological parameters for the Planck satellite. Confidence intervals are
derived as the 0.16, 0.5 and 0.84 points of the cumulative probability
distribution function, in such a way that our parameter estimate is the median
of the marginalised posterior probability distribution function, and the
confidence interval encompasses 68 per cent of the probability. The biases
correspond to the case of using the {\sc Rico} training set. }
\begin{tabular}{lcccc}
\hline
& With {\sc Rico} & With {\sc Recfast} & Bias & Fiducial \\
& & (v1.4.2) & (in sigmas) & model \\
\hline
${\Omega_{\rm b}h^2} (\times 10^2)$ & $2.273 \pm 0.014$ & $2.269 \pm 0.014$ & -0.31 &
2.273\\
${\Omega_{\rm dm}h^2}$ & $0.1098^{+0.0013}_{-0.0012}$ & $0.1099^{+0.0012}_{-0.0013}$ & 0.06
& 0.1099\\
$H_0$ & $71.9^{+0.6}_{-0.7}$ & $72.0^{+0.7}_{-0.6}$ & 0.11 & 71.9\\
$\tau$ & $0.087 \pm 0.006$ & $0.087 \pm 0.006$ & -0.04 & 0.087\\
$n_{\rm S}$ & $0.9632^{+0.0038}_{-0.0034}$ & $0.9606^{+0.0035}_{-0.0037}$ & -0.74 &
0.963\\
$\log(10^{10} A_{\rm S})$ & $3.063^{+0.009}_{-0.008}$ & $3.059 \pm 0.009$ & -0.42 & 3.063\\
\hline
\end{tabular}
\label{tab:planck}
\end{table*}
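The confidence limits quoted in the table can be reproduced from the chains with a few lines (illustrative sketch):

```python
import numpy as np

# How the quoted confidence limits are obtained from a marginalized MCMC
# chain: the estimate is the median of the samples, and the interval spans
# the 0.16-0.84 quantiles, i.e. 68 per cent of the probability.


def median_and_68(samples):
    lo, med, hi = np.quantile(samples, [0.16, 0.5, 0.84])
    return med, med - lo, hi - med   # estimate, lower error, upper error
```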
We run {\sc CosmoMC} for the case of {\sc Planck} mock data alone and the
minimal model with six free parameters. For each set of runs, we consider two
cases for the recombination history, one is the current version of {\sc Recfast}
(v1.4.2), and the second one is the {\sc Rico} code with our latest training
set.
The main results are summarized in table~\ref{tab:planck}, and the corresponding
posterior distributions are shown in Fig.~\ref{fig:planck}. We note that the
sizes of the error bars on these parameters are similar to those obtained for
{\sc Planck} by other authors \citep[e.g][]{Bond2004,Colombo2009}.
For this six-parameter case, the largest biases appear in $n_{\rm S}$ ($-0.7$
sigmas), ${\Omega_{\rm b}h^2}$ ($-0.3$ sigmas) and $\log(10^{10} A_{\rm S})$ ($-0.4$ sigmas).
The sign of the correction in $n_{\rm S}$, which is the parameter having the largest
bias, can be understood as follows. The physical effects to the recombination
history included in {\sc Rico} produce a slight delay of the recombination
around the peak of the visibility function, i.e. an excess of electrons with
respect to the standard computation (see Fig.~\ref{fig:remaining_xe}). This in
turn produces a slightly larger Thomson optical depth, which increases the
damping of the anisotropies at high multipoles. In order to compensate for this
excess of damping, the analysis which uses the standard {\sc Recfast} code
yields a lower value of $n_{\rm S}$.
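A back-of-the-envelope sketch of this degeneracy (with purely illustrative numbers, not taken from the actual chains):

```python
import numpy as np

# Order-of-magnitude illustration of the n_S bias direction: a small extra
# Thomson optical depth Delta_tau damps the high-ell spectrum by roughly
# exp(-2 Delta_tau); an analysis ignoring it can absorb this damping into a
# small red tilt Delta n_S ~ -2 Delta_tau / ln(ell_hi / ell_pivot).
# All numbers here are purely illustrative, not from the actual chains.


def equivalent_tilt(delta_tau, ell_hi=2000.0, ell_pivot=200.0):
    suppression = np.exp(-2.0 * delta_tau)
    return np.log(suppression) / np.log(ell_hi / ell_pivot)
```

For instance, an extra optical depth of a few $\times 10^{-3}$ translates into a red tilt of a few $\times 10^{-3}$ in $n_{\rm S}$, i.e. the right sign and order of magnitude of the shift quoted above.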
The biases on the other three parameters, i.e. $\tau$ (which is well-constrained
by large scale polarization measurements), ${\Omega_{\rm dm}h^2}$ and $H_0$, are negligible
(i.e. less than 0.1 sigmas). However, if the analysis is repeated with the
standard parametrization using $\theta$ as basic parameter, then we would find a
$+1.8$ sigma bias in $\theta$, while for the rest of the parameters the
posteriors remain the same as before. To illustrate these facts,
Fig.~\ref{fig:ns_theta} shows a comparison of the posteriors for $n_{\rm S}$ obtained
with the two different parametrizations, as well as the posteriors obtained for
$\theta$ when using the standard parametrization.
Finally, we have checked that the inclusion of lensed angular power spectra on
the complete procedure modifies neither the shape of the posteriors nor the
biases. Therefore, for the rest of the paper we perform the computations in the
case without lensing. This decreases the computational time by a significant
factor.
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{ns_params.eps}
\includegraphics[width=0.9\columnwidth]{theta_params.eps}
\caption{Top: Posterior distributions for $n_{\rm S}$ obtained with two different
parametrizations. The solid line corresponds to the marginalised 1-d posterior
distribution obtained using {\sc Rico} while the dashed line corresponds to
the case of using {\sc Recfast}, both with the new parametrization. The dotted
lines show in each case the posteriors obtained when the standard
parametrization is used (i.e. $\theta$ instead of $H_0$ as fundamental
parameter). Bottom: Bias recovered on the $\theta$ parameter if the standard
parametrization is used. This case corresponds to the same case as
Fig.~\ref{fig:planck}, i.e. six parameter case. }
\label{fig:ns_theta}
\end{figure}
\subsection{Importance of the cosmology dependence of the correction}
\label{sec:corr_fac}
One of the questions that can be explored with {\sc Rico} and the new
training-set is the importance of the cosmology dependence of the corrections to
the recombination history. In order to obtain a simplified description of the
recombination history, it is important to evaluate if the cosmology dependence
of the corrections plays a significant role in determining the final shape of
the posteriors.
To explore this issue, we have modified the standard {\sc Recfast} code by
introducing the following (cosmology-independent) correction:
\begin{equation}
x_{\rm e}^{\rm new}(z ; \{ {\rm cosmology}\}) = x_{\rm e}^{\rm RECFAST}(z ; \{{\rm cosmology}\}) f(z)
\label{eq:approx}
\end{equation}
where this function $f(z)$ is computed as
\begin{equation}
f(z) = 1 + \frac{\Delta x_{\rm e}}{x_{\rm e}}
\label{eq:f_z}
\end{equation}
for a certain fiducial cosmological model.
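In code, Eqs.~\eqref{eq:approx} and \eqref{eq:f_z} amount to a single multiplicative rescaling of the {\sc Recfast} output. A minimal sketch in Python (the arrays below are toy numbers standing in for the tabulated fiducial-model ionization histories):

```python
import numpy as np

def fudge_function(xe_full_fid, xe_recfast_fid):
    # f(z) = 1 + Delta x_e / x_e, tabulated once for a fiducial cosmology
    return xe_full_fid / xe_recfast_fid

def corrected_xe(xe_recfast, f):
    # x_e^new(z; cosmology) = x_e^RECFAST(z; cosmology) * f(z)
    return xe_recfast * f

# toy numbers standing in for the tabulated ionization histories
xe_recfast_fid = np.array([0.01, 0.05, 0.2, 0.6, 0.95])
xe_full_fid = 1.01 * xe_recfast_fid  # a uniform ~1% excess, for illustration

f = fudge_function(xe_full_fid, xe_recfast_fid)
xe_new = corrected_xe(xe_recfast_fid, f)
```

Note that $f(z)$ is computed once, for the fiducial model only, and then applied unchanged to every cosmology sampled by the chain.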
By introducing this modification inside {\sc CosmoMC}, we have compared the
posterior distributions obtained in this case with those obtained using the full
{\sc Rico} code with the new training set. The results of this comparison are
shown in Figure~\ref{fig:approx}. The fact that the differences in the posterior
are negligible indicates that there is no significant cosmological dependence in
the corrections to the ionization history included in the {\sc Rico} training
set.
Therefore, for the case of the {\sc Planck} satellite, one can in principle
use cosmology-independent `fudge functions', such as the one presented in
Eq.~\eqref{eq:f_z}, to accommodate additional corrections to the recombination
history.
As a final check in this section, we have explored the sensitivity of the fudge
function $f(z)$ to the cosmological model which is used as fiducial model for
the full computation of the recombination history.
Taking as a reasonable range of variation the two-sigma confidence interval
which is obtained from the analysis of WMAP5 data \citep{Dunkley2009}, we have
compared the $f(z)$ functions obtained from cosmological models which differ by
two sigmas from the actual fiducial model used in this
paper. The result is that the changes in $f(z)$ with respect to the solid curve
presented in Fig.~\ref{fig:remaining_xe} are below one percent, thus giving a
negligible correction to the main effect included in $f(z)$.
\begin{figure}
\centering
\includegraphics[width=0.48\columnwidth]{approx_ns.eps}%
\includegraphics[width=0.48\columnwidth]{approx_wb.eps}
\includegraphics[width=0.48\columnwidth]{approx_h.eps}%
\includegraphics[width=0.48\columnwidth]{approx_logas.eps}
\caption{Comparison of the 1-D posterior distributions obtained with the full
{\sc Rico} code and the new training set (solid lines), with those obtained
with the simplified description given in Eq.~\eqref{eq:approx} (dashed
lines). For this case, the fiducial model which has been used to compute
$f(z)$ is taken to be the same as the fiducial cosmology. In all four panels,
the vertical dotted line shows the value of the fiducial model.}
\label{fig:approx}
\end{figure}
\section{Estimating the corrections from the remaining recombination uncertainties}
\label{sec:additional}
Although a full recombination code which includes all the physical effects
discussed in Sect.~\ref{sec:physics} is still not available, there is a good
agreement in the community about the list of relevant physical processes that
have to be included. Moreover, all those effects have been already discussed in
the literature by at least one group, so we have estimates of the final impact
of these corrections on the recombination history (see
Fig.~\ref{fig:remaining_xe}).
Based on these estimates, we have quantified the impact of future corrections to
the recombination history using the approximation described in
Eq.~\eqref{eq:approx}, where $f(z)$ is taken from Fig.~\ref{fig:remaining_xe},
as described in Eq.~\eqref{eq:f_z}.
Using this function, we have obtained the posterior distributions for the same
case discussed above (nominal {\sc Planck} satellite sensitivities and a six
parameter analysis). The basic results are shown in Table~\ref{tab:planck2} and
Fig.~\ref{fig:remaining_params}.
The biases in the different parameters increase very significantly, as one would
expect from the inspection of Fig.~\ref{fig:remaining_xe}, especially for $n_{\rm S}$
($-2.3$ sigmas), ${\Omega_{\rm b}h^2}$ ($-1.65$ sigmas) and $\log(10^{10} A_{\rm S})$ ($-1$ sigma).
Therefore, as pointed out by several authors \citep[see e.g.][]{Chluba2008a,
Hirata2008}, the detailed treatment of the hydrogen Lyman $\alpha$ radiative
transfer problem constitutes the most significant correction to our present
understanding of the recombination problem. If this effect is not taken into
account when analysing {\sc Planck} data, the final constraints could be
significantly biased.
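For reference, the biases quoted throughout in `sigmas' are simply the offset of the recovered posterior mean from the fiducial input value, in units of the 68 per cent error. A sketch using the $n_{\rm S}$ entry of Table~\ref{tab:planck2} (a symmetrized error bar is assumed here, so the result differs slightly from the tabulated $-2.27$, which accounts for the asymmetric limits):

```python
def bias_in_sigmas(recovered, fiducial, sigma):
    # (biased estimate - input fiducial value) / posterior width
    return (recovered - fiducial) / sigma

# n_S with Recfast v1.4.2: 0.955 (symmetrized sigma ~0.0036) vs fiducial 0.963
b = bias_in_sigmas(0.955, 0.963, 0.0036)
```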
\begin{table*}
\centering
\caption{Estimate of the impact of remaining recombination uncertainties on the
confidence limits of the cosmological parameters for the {\sc Planck}
satellite. Confidence intervals are derived as in Table~\ref{tab:planck}, so
the confidence interval encompasses 68 per cent of the probability.}
\begin{tabular}{lcccc}
\hline
& With $f(z)$ & With {\sc Recfast} & Bias & Fiducial \\
& from Fig.~\ref{fig:remaining_xe} & (v1.4.2) & (in sigmas) & model \\
\hline
${\Omega_{\rm b}h^2} (\times 10^2)$ & $2.274^{+0.014}_{-0.015}$ & $2.250^{+0.015}_{-0.013}$
& -1.65 & 2.273\\
${\Omega_{\rm dm}h^2}$ & $0.1098^{+ 0.0013}_{- 0.0012}$ & $0.1098^{+0.0013}_{-0.0012}$ &
0.02 & 0.1099\\
$H_0$ & $71.9 \pm 0.6 $ & $71.7 \pm 0.6$ & -0.41 & 71.9\\
$\tau$ & $0.087 \pm 0.006$ & $0.086 \pm 0.006$ & -0.18 & 0.087\\
$n_{\rm S}$ & $0.963^{+0.0037}_{-0.0036}$ & $0.955^{+ 0.0035}_{- 0.0038}$ & -2.27 &
0.963\\
$\log(10^{10} A_{\rm S})$ & $3.064 \pm 0.009$ & $3.055 \pm 0.009$ & -0.99 & 3.063\\
\hline
\end{tabular}
\label{tab:planck2}
\end{table*}
\begin{figure*}
\centering
\includegraphics[width=1.3\columnwidth,angle=90]{planck_6par_remaining_2d.ps}
\caption{Posterior distributions obtained with the remaining recombination
uncertainties (see text for details). Solid line corresponds to the posteriors
using the correct description of the recombination history, while the dashed
lines represent the case in which the current description ({\sc Recfast
v1.4.2}) is used. }
\label{fig:remaining_params}
\end{figure*}
\subsection{Extended cosmological models}
\label{sec:extendedmodels}
In this subsection we describe to what extent the full set of corrections to the
recombination history may affect the cosmological constraints on some extensions
to the (minimal) six-parameter model which was used in this work. Throughout
this subsection, we compare the complete recombination history (which includes
the additional corrections shown in Fig.~\ref{fig:remaining_xe}) with the
constraints that would be inferred using {\sc Recfast v1.4.2}.
\subsubsection{Tensor perturbations}
We first consider the case of including tensor perturbations in addition to the
previous model. For these computations, we consider an 8-parameter model, by
including the spectral index of the primordial tensor perturbation ($n_{\rm T}$) and
the tensor-to-scalar ratio $r$, in addition to the parameters in
Eq.~\ref{eq:newpar}. As the pivot scale, we use $k_0=0.05$~Mpc$^{-1}$.
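For reference, a sketch of the primordial spectra assumed in this extension: power laws for scalars and tensors, with $r$ defined as the tensor-to-scalar ratio at the pivot scale. This follows the standard power-law conventions and is not code from the actual analysis pipeline:

```python
K0 = 0.05  # pivot scale in Mpc^{-1}

def scalar_power(k, A_s, n_s):
    # P_s(k) = A_s (k/k0)^(n_s - 1)
    return A_s * (k / K0) ** (n_s - 1.0)

def tensor_power(k, A_s, r, n_t):
    # P_t(k) = r A_s (k/k0)^(n_t), so P_t / P_s = r at k = k0
    return r * A_s * (k / K0) ** n_t

ratio_at_pivot = (tensor_power(K0, 2.1e-9, 0.1, 0.0)
                  / scalar_power(K0, 2.1e-9, 0.963))
```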
Fig.~\ref{fig:tensors} shows the impact of recombination uncertainties on the
$r$-$n_{\rm S}$ plane. The constraints on $r$ are determined by large-scale
information, and therefore the modifications of the recombination history do not
bias this parameter. However, due to the important bias on $n_{\rm S}$, the 2-D
contours on this plane are significantly shifted.
\begin{figure}
\centering \includegraphics[width=0.9\columnwidth]{inflation_ns.eps}
\caption{Biases on the two dimensional marginalised constraints (68\% and 95\%)
on inflationary parameters $r$-$n_{\rm S}$. Shaded contours represent the
constraints inferred with the complete recombination history, while the solid
lines show the constraints using {\sc Recfast v1.4.2}. See text for details. }
\label{fig:tensors}
\end{figure}
\subsubsection{Scale dependence of spectral index}
The running of the spectral index is a possible extension to the simple
$\Lambda$CDM model which is under debate in the literature in light of WMAP
observations \citep[see e.g.][which gives $n_{\rm run} = d n_{\rm S} / d\ln k = -0.055 \pm
0.030$]{Spergel2007}.
Fig.~\ref{fig:running} illustrates the impact of the recombination uncertainties
on the determination of the running of the spectral index. For this computation,
we considered a 7-parameter model by adding $n_{\rm run} = d n_{\rm S} / d\ln k$, which
again is computed at the same pivot scale of $k_0=0.05$~Mpc$^{-1}$.
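Schematically, the running adds the usual quadratic term in $\ln(k/k_0)$ to the log of the scalar spectrum, so the local spectral index drifts away from its pivot value. A sketch with hypothetical helper functions (standard convention, not pipeline code):

```python
import math

K0 = 0.05  # pivot scale, Mpc^{-1}

def ln_scalar_power(ln_As, n_s, n_run, k):
    # ln P(k) = ln A_s + (n_s - 1) ln(k/k0) + (1/2) n_run ln^2(k/k0)
    x = math.log(k / K0)
    return ln_As + (n_s - 1.0) * x + 0.5 * n_run * x * x

def local_index(n_s, n_run, k):
    # n_s(k) = n_s + n_run ln(k/k0): the slope measured at scale k
    return n_s + n_run * math.log(k / K0)
```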
When studying the one-dimensional posterior distributions for all parameters, we
find that the inclusion of $n_{\rm run}$ does not affect the shape of the rest of the
posteriors. The bias on $n_{\rm run}$ due to recombination uncertainties is not very
significant (changes from $n_{\rm run} = -0.0012 \pm 0.0050$ to $n_{\rm run} = -0.0034 \pm
0.0050$, i.e. $0.4$ sigmas) but has to be taken into account.
For completeness, we have run a 9-parameter case, in which we allow $r$, $n_{\rm T}$
and $n_{\rm run}$ to vary simultaneously in addition to the other six parameters.
In this case, we have checked that the contours shown in Fig.~\ref{fig:tensors}
and Fig.~\ref{fig:running} are not affected by the inclusion of the other
parameters.
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{inflation_nrun.eps}
\caption{Same as Fig.~\ref{fig:tensors}, but in the $n_{\rm S}$-$n_{\rm run}$ plane. }
\label{fig:running}
\end{figure}
\subsubsection{Curvature}
Another possible extension of the minimal six-parameter model is to constrain
simultaneously the spatial curvature, ${\Omega_{\rm K}}$. The inclusion of this
additional parameter introduces a practical complication, since in our new
parametrization $\vec{p}_{\rm new}$ (Eq.~\ref{eq:newpar}), the $H_0$ parameter
is highly correlated with ${\Omega_{\rm K}}$.
Fig.~\ref{fig:omk} presents the posterior distributions for this case, in which
the degeneracy between $H_0$ and ${\Omega_{\rm K}}$ is clearly visible. There are two
things to note.
First, there is no significant bias to ${\Omega_{\rm K}}$ due to the inclusion of
recombination uncertainties, as one would expect since this parameter is mainly
constrained by information at angular scales around the first Doppler peak.
Second, the shape of the remaining posteriors and the biases to the parameters
are not significantly affected by the inclusion of this additional parameter.
\begin{figure*}
\centering
\includegraphics[width=1.3\columnwidth,angle=90]{planck_omk_compare_2d.ps}
\caption{Posterior distributions obtained with the remaining recombination
uncertainties (see text for details) for the case of seven parameters, adding
the curvature ${\Omega_{\rm K}}$. Solid line corresponds to the posteriors using the
correct description of the recombination history, while the dashed lines
represent the case in which the current description ({\sc Recfast v1.4.2}) is
used. }
\label{fig:omk}
\end{figure*}
\subsubsection{Residual SZ clusters/point source contributions}
One common extension of the minimal model is the inclusion of some parameters
describing the residual contribution of Sunyaev-Zeldovich (SZ) clusters or point
sources which are left in the maps after the component separation processes. For
the case of the {\sc Planck} satellite, these are known to be the major contaminants
at small angular scales \citep[see e.g.][]{Leach2008}.
For illustration, here we will consider the case of the residual SZ cluster
contribution. One of the simplest parametrizations of the SZ contribution to the
angular power spectrum is to use a fixed template from numerical simulations,
and fit for the relative amplitude by using an additional parameter,
$A_{\rm SZ}$. This approach is similar to the one used in \cite{Spergel2007}, where
they parametrize the SZ contribution by the model of \cite{KS2002} but allowing
for a different normalization through the $A_{\rm SZ}$ parameter.
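In this approach the SZ residual enters the model spectrum only through a single amplitude multiplying a fixed template; a one-line sketch (the arrays here are toy values, whereas the real template comes from the numerical simulations of \cite{KS2002}):

```python
import numpy as np

def model_cl(cl_cmb, cl_sz_template, A_sz):
    # C_l^model = C_l^CMB + A_SZ * C_l^SZ-template
    return cl_cmb + A_sz * cl_sz_template

cl_cmb = np.array([1000.0, 800.0, 600.0])   # toy primary spectrum
cl_sz = np.array([1.0, 2.0, 4.0])           # toy SZ template
cl_tot = model_cl(cl_cmb, cl_sz, A_sz=1.5)
```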
When the parameter constraints are inferred with the inclusion of this additional
parameter, we find that (neglecting the intrinsic bias due to the degeneracy
between $n_{\rm S}$ and $A_{\rm SZ}$) the relative bias is practically not affected. In
particular, we obtain a bias of -1.42, -0.44, -2.09 and -1.03 sigmas for
${\Omega_{\rm b}h^2}$, $H_0$, $n_{\rm S}$ and $\log(10^{10} A_{\rm S})$, respectively. Those numbers are comparable
to the net biases which have been obtained for the minimal model in
Table~\ref{tab:planck2}.
\section{Impact on parameter estimates using present-day CMB experiments}
\label{sec:current}
One would expect that the order of magnitude of the corrections to the
recombination history discussed in previous sections (at the level of 1\%-2\%)
would have a negligible impact on the parameters constraints that we would infer
from present-day CMB experiments. As shown in FCRW09, the changes introduced in
the power spectra (both temperature and polarization) are significant at high
multipoles, in the sense that they are larger than the benchmark level estimated
as $\pm 3/\ell$ \citep[see][]{Seljak2003}.
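That benchmark can be checked multipole by multipole: changes with $|\Delta C_\ell/C_\ell| > 3/\ell$ are the ones regarded as potentially relevant for parameter estimation. A minimal sketch with toy spectra:

```python
import numpy as np

def exceeds_benchmark(ell, cl_ref, cl_new):
    # flags multipoles where |Delta C_l / C_l| exceeds the 3/l benchmark
    frac = np.abs(cl_new - cl_ref) / cl_ref
    return frac > 3.0 / ell

ell = np.array([10.0, 100.0, 1000.0])
cl_ref = np.ones(3)
flags = exceeds_benchmark(ell, cl_ref, 1.005 * cl_ref)  # a 0.5% change
```

A 0.5 per cent change is below the threshold at $\ell = 10$ and $\ell = 100$, but exceeds $3/\ell$ at $\ell = 1000$.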
To quantify this fact, we have obtained the posterior distributions for the case
of the minimal model with six free parameters, combining the CMB information
from WMAP5 \citep{WMAP5-basic}, ACBAR \citep{ACBAR2007}, CBI \citep{cbipol} and
Boomerang \citep{B03-TT,B03-EE}, together with measurements on the linear matter
power spectrum based on luminous red galaxies from SDSS-DR4 \citep{SDSS-LRG}.
Figure~\ref{fig:wmap} presents the results for the case of using the standard
{\sc Recfast} recombination history, together with the case of using our most
complete description of the recombination history, as presented in the previous
section. As expected, the modifications on the shape of the posteriors are very
small and no biases are seen in the parameters except for $n_{\rm S}$ and $\log(10^{10} A_{\rm S})$,
which are slightly biased.
Our analysis including the full description of the recombination history gives
$n_{\rm S} = 0.970 \pm 0.013$ and $\log(10^{10} A_{\rm S}) = 3.075 \pm 0.038$, while the result using
{\sc Recfast v1.4.2} gives $n_{\rm S} = 0.967^{+0.013}_{-0.012}$ and $\log(10^{10} A_{\rm S}) =
3.066^{+0.038}_{-0.036}$. In other words, this is a $\sim -0.25$ and $\sim
-0.22$ sigma bias on $n_{\rm S}$ and $\log(10^{10} A_{\rm S})$, respectively. For completeness, we have
run also the MCMC for the case of using WMAP5 data alone. In that case, the bias
decreases to $\la 0.15$ sigmas in absolute value for those two parameters.
\begin{figure*}
\centering
\includegraphics[height=15cm,angle=90]{wmap5_acbar_2d_final.ps}
\caption{Impact on parameter estimates for present day data
(WMAP5+ACBAR+CBI+B03), together with SDSS LRGs. For this minimal six-parameter
case, there are no obvious biases on the parameters, although there is a small
difference in the shape of the posterior for $n_{\rm S}$. As in
Fig.~\ref{fig:remaining_params}, solid lines represent the posterior
distributions obtained when our most complete description of the recombination
process is used, while dashed lines use the current description ({\sc Recfast}
v1.4.2). }
\label{fig:wmap}
\end{figure*}
\section{Discussion}
\label{sec:discussion}
In this section, we discuss the results presented in this paper, focusing on
three particular aspects.
First, we discuss the robustness of our results against possible
modifications of the physical description of the recombination process.
Second, we also consider the dependence of the obtained biases if additional
parameters are included in the MCMC analysis.
Finally, we discuss the possible impact of recombination uncertainties on the
results obtained from other cosmological probes different from CMB anisotropies.
\subsection{Dependence of the results on the description of the recombination process}
As discussed above (Sect.~\ref{sec:physics}), there is a wide agreement in the
community about the list of physical processes which should be included in the
description of the cosmological recombination process. In many cases, these
physical processes have been treated separately by at least two separate groups,
and the agreement on the signs and amplitudes of the corrections is excellent in
most of the cases \citep[e.g. see the compilation of uncertainties in the
physics of recombination in Table~2.1 of][]{WongThesis}. Although an agreement
at the level of $\la 0.1$\% is still not reached, we are almost there, as the
remaining uncertainties seem to be at the level of 0.1\%-0.3\% between the
different groups.
In this sense, one would expect that a code which includes self-consistently all
those processes should obtain essentially the same biases that have been
described in Sect.~\ref{sec:additional}, and have been reported in
Table~\ref{tab:planck2}.
However, apart from the processes described in section~\ref{sec:addproc}, there
are still some possible uncertainties which might lead to measurable biases on
the cosmological parameters.
We briefly address them below.
\subsubsection{Hydrogen recombination}
One effect which might lead to additional biases on the cosmological parameters
is the inclusion of very high-$n$ states in the cosmological hydrogen
recombination. The computations in this paper are based on a training set which
uses $n=75$ shells to describe the hydrogen atom. To explore the dependence on
the number of included shells, we have repeated the standard six-parameter computation
for the mock {\sc Planck} data presented in Sect.~\ref{sec:planck_alone} and
Sect.~\ref{sec:additional}, but taking as a reference model the one computed
using $n=110$ hydrogen shells, and trying to recover it with {\sc Rico} (which
uses $n=75$ hydrogen shells).
The recovered posteriors using the {\sc Rico} code in this case do not show {\it
any appreciable} bias in any of the six parameters of the minimal model. This is
illustrated in Fig.~\ref{fig:ns_fullcode}, where we present the posterior
distribution for the $n_{\rm S}$ parameter, which is the one having the largest bias.
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{ns_fullcode.eps}
\caption{Bias on the $n_{\rm S}$ parameter due to the inclusion of an additional
number of shells in the description of the hydrogen atom. The black solid line has
been obtained from a fiducial model with $n=110$ hydrogen shells. The dashed
line corresponds to the posterior distribution recovered using the {\sc Rico}
code only, with a training set based on $n=75$ shells. The dotted line
corresponds to the posterior distribution recovered using the current {\sc
Recfast} code. Note that we did not include for this computation the
corrections due to Ly-$\alpha$ radiative transfer and Raman scattering. }
\label{fig:ns_fullcode}
\end{figure}
\subsubsection{Helium recombination}
As mentioned in Sect.~\ref{sec:addproc}, and discussed in FCRW09, we do not
expect major changes in our current understanding of the helium recombination,
at a level which might be relevant for the computation of CMB anisotropies.
This statement assumes that the uncertainties in the modelling of the helium
atom are well controlled, but, as described in \cite{Rubino2008}, we still lack
an accurate description of the photoionization cross-sections, energies or
transition rates for the Helium atom, which might lead to small changes in these
results.
One may also wonder whether the small differences found for the correction
caused by Helium feedback processes (see Sect.~\ref{sec:feedbackHe}) could
matter.
By far, the most relevant correction to the helium recombination history is
caused by the absorption of \ion{He}{i} photons by neutral hydrogen, although
the inclusion of the $2^3\rm P_1$-$1^1\rm S_0$ intercombination line also gives
some contribution. These two effects are already included in the codes, both in
{\sc Rico} (FCRW09) and in {\sc Recfast} v1.4.2 \citep{Wong2008}. For
completeness, we have also quantified for this paper the impact that these two
corrections to the Helium recombination would have on the recovered cosmological
parameters.
Our computations show that, if these corrections are not included (which in
practice corresponds to setting {\verb!RECFAST_Heswitch! = 0} in {\sc Recfast}
v1.4.2, or equivalently, using {\sc Recfast} v1.3), then the resulting biases on
the parameters are found to be -3.2, -2.0, -1.2 and -0.7 sigmas for $n_{\rm S}$,
${\Omega_{\rm b}h^2}$, $\log(10^{10} A_{\rm S})$ and $H_0$, respectively. For illustration, we show in
Fig.~\ref{fig:ns_oldrecfast} a comparison of the posterior distribution obtained
for the $n_{\rm S}$ parameter when using the {\sc Recfast} code with
({\verb!RECFAST_Heswitch! = 6}) and without ({\verb!RECFAST_Heswitch! = 0}) all
the corrections to the helium recombination.
This simple computation shows that additional corrections to the helium
recombination history at the $\sim 0.1\%$ level will not matter much to the
analysis of future {\sc Planck} data. Therefore, the physics of helium
recombination already seems to be captured at a sufficient level of precision,
when including the acceleration caused by the hydrogen continuum opacity and the
$2^3\rm P_1$-$1^1\rm S_0$ intercombination line, which together lead to a $\sim
-3\%$ correction to $X_{\rm e}(z)$ at $z\sim 1800$ \citep{rico}.
\begin{figure}
\centering \includegraphics[width=0.9\columnwidth]{ns_oldrecfast.eps}
\caption{Bias on the $n_{\rm S}$ parameter for different modellings of the Helium
recombination history. The black solid line corresponds to the posterior
distribution recovered using the {\sc Rico} code and the additional
corrections used in Sec.~\ref{sec:additional}. The dotted line corresponds to
the posterior distribution recovered using the current {\sc Recfast} code
(v1.4.2), while the dashed line uses the same version of the code but without
any correction to the helium recombination history. }
\label{fig:ns_oldrecfast}
\end{figure}
\subsection{Recombination uncertainties and other extended cosmological models}
\label{sec:nonstandard}
In addition to those extended models described in
Sect.~\ref{sec:extendedmodels}, there is a number of possible non-standard
models for which the inclusion of refined recombination physics might be of
importance.
For example, when using current CMB data to constrain the presence of
hypothetical sources of Ly$\alpha$ resonance radiation or ionizing photons at
high redshifts \citep[e.g.][]{Peebles2000, Bean2003, Bean2007}; or to probe dark
matter models with large annihilation cross-section
\citep[e.g][]{Padmanabhan2005, Galli2009, Huetsi2009}; or energy release by
long-lived unstable particles \citep{Chen2004, Zhang2007};
or when exploring the variation of fundamental constants with time (see
e.g. \cite{Galli2009b} for Newton's gravitational constant, or
\cite{2008PhRvD..78h3527L} for the fine structure constant and the Higgs vacuum
expectation value), it is obvious that neglecting physically well understood
additions to the recombination model, as described in Sect.~\ref{sec:physics},
could lead to {\it spurious detections} or {\it confusion}, in particular, when
the possible effects are already known to be rather small \citep[e.g. see][in
the case of dark matter annihilations]{Galli2009}.
More generally speaking, given that the largest recombination uncertainties are
obtained for $n_{\rm S}$, ${\Omega_{\rm b}h^2}$ and $\log(10^{10} A_{\rm S})$, one can say that any additional
parameter showing a strong correlation with those three might be biased if an
incomplete description of the recombination physics is used.
In this sense, neglecting the refinements to the recombination model could be as
important as not taking into account, for instance, uncertainties in the beam
shapes, which also have been shown to compromise our ability to measure $n_{\rm S}$
\citep{Colombo2009};
or the combined effect of beam and calibration uncertainties, which introduce
significant biases to $n_{\rm S}$ and ${\Omega_{\rm b}h^2}$ \citep{Bridle2002}, although other
parameters (like ${\Omega_{\rm K}}$) are essentially not affected because these are
basically constrained by the position of the peaks, and not by their amplitudes.
\subsection{Recombination modelling and other cosmological probes}
The combination of CMB data with other datasets usually helps to improve the
parameter constraints, in some cases by breaking internal degeneracies which are
inherent to the CMB data alone. One of the commonly used external datasets is
the Baryon Acoustic Oscillations (BAOs). \cite{deBernardis2009} have recently
shown that a possible delay of recombination \citep{Peebles2000} by extra
sources of ionizing or exciting photons leads to biases on the constraints from
BAOs, because they largely rely on the determination of the size of the acoustic
horizon at recombination. Their conclusions can be directly translated here,
stressing that a fully consistent combination of the constraints from CMB and
BAOs should be done by using the same recombination history in both cases.
In addition, we would like to point out that in principle one could reduce the
uncertainty in our knowledge of the recombination epoch and its possible {\it
non-standard} extensions \citep[e.g. due to annihilating dark
matter][]{Padmanabhan2005} in two ways.
On one hand, one could search for the imprint of the cosmological hydrogen
recombination lines on the CMB angular power spectrum
\citep{RHS2005,Carlos2007}, which arises due to the resonant scattering of CMB
photons by hydrogen atoms at each epoch \citep{Basu2004}.
On the other hand, one could also try to {\it directly observe} the photons that
are emitted during the recombination epoch.
Today these photons should still be visible as small distortion of the CMB
energy spectrum in the mm, cm and dm spectral bands
\citep[e.g. see][]{Dubrovich1975, Dubrovich1997, Rubino2006, Chluba2006a,
Chluba2007, Rubino2008, Chluba2009d}.
These observations not only would open another possibility to determine some of
the key cosmological parameters, such as the {\it primordial helium abundance},
the {\it number density of baryons} and the CMB {\it monopole temperature} at
recombination \citep[e.g.][]{Chluba2008}, but they would also allow us to {\it
directly} check our understanding of the recombination process and possible
non-standard aspects \citep[e.g. see][for an overview]{Sunyaev2009}, for
example, in connection with early energy release \citep{Chluba2008b}, or dark
matter annihilations \citep{Chlubaprep2009}.
\section{Conclusions}
\label{sec:conclusions}
In this paper, we have performed a MCMC analysis of the expected biases on the
cosmological constraints to be derived from the upcoming {\sc Planck} data, in
the light of recent developments in the description of the standard cosmological
recombination process. Our main conclusions are:
\begin{itemize}
\item An incomplete description of the cosmological recombination process leads
to significant biases (of several sigmas) in some of the basic parameters to
be constrained by the {\sc Planck} satellite (see Table~\ref{tab:planck2}), and in
general, by any future CMB experiment. However, these corrections have a minor
impact for present-day CMB experiments; for instance, using WMAP5 data plus
other cosmological datasets, we find a $\sim -0.25$ and $\sim -0.22$ sigma
bias on $n_{\rm S}$ and $\log(10^{10} A_{\rm S})$, respectively, while the rest of the parameters
remain unchanged.
\item Today, it seems that our understanding of cosmological recombination has
reached the sub-percent level in $X_{\rm e}$ at redshifts $500 \la z \la
1600$. However, it will be important to cross-validate all of the considered
corrections in a detailed code comparison, which currently is under discussion
among the different groups.
\item Given the range of variation of the relevant cosmological parameters, it
is possible to incorporate all the new recombination corrections by using
(cosmology-independent) fudge functions.
Here we described one possibility which uses a simple correction factor to the
results obtained with {\sc Recfast} (see Sect.~\ref{sec:corr_fac}). We provide
the function $f(z)$ on the {\sc
Rico} webpage\footnote{http://cosmos.astro.uiuc.edu/rico}.
\item The physics of helium recombination already seems to be captured at a
sufficient level of precision, when including the acceleration caused by the
hydrogen continuum opacity and the $2^3\rm P_1$-$1^1\rm S_0$ intercombination
line. The biases caused by neglecting only these corrections are -0.8 and -0.4
sigmas, for $n_{\rm S}$ and ${\Omega_{\rm b}h^2}$, respectively.
\item When allowing for more non-standard additions to the recombination model
(e.g. related to annihilating dark matter), the biases introduced by an
inaccurate recombination model could lead to spurious detections or additional
confusion (see Sect.~\ref{sec:nonstandard}).
\end{itemize}
\section*{Acknowledgements}
JAR-M and JC are grateful to R.~A.~Sunyaev for useful comments and
discussion. Furthermore, they would like to thank M.~Bucher, J.~Fung, S.~Galli,
D.~Grin, Y.~Haimoud, C.~Hirata, U.~Jentschura, S.~Karshenboim, E.~Kholupenko,
L.~Labzowsky, D.~Scott, and D.~Solovyev for stimulating and very friendly
discussion during the recombination workshop held in July 2009 in Orsay/Paris.
The authors are also very grateful to E.~Switzer for comments and useful
discussions.
This work has been partially funded by project AYA2007-68058-C03-01 of the
Spanish Ministry of Science and Innovation (MICINN). JAR-M is a Ram\'on y Cajal
fellow of the MICINN.
WAF was supported through a fellowship from the Computational Science and
Engineering program at the University of Illinois. Some of the calculations in
this work used the Linux cluster in the University of Illinois Department of
Physics.
\section{Introduction}
Representations of the symmetric group $S_m$ have a long and beautiful history in mathematics. Partitions of $m$ biject with the irreducible representations of $S_{m}$ given by Specht modules; these representations have a basis corresponding to standard Young tableaux. The relations that allow us to express any tableau as a linear combination of standard Young tableaux are called Garnir relations.
For a partition $\lambda = (\lambda_{1}, \dots , \lambda_k)$ of $m$, let $\lambda^{'} = (\lambda^{'}_{1}, \dots , \lambda^{'}_{j})$ be the conjugate of $\lambda$
and let $S^{\lambda}$ be the Specht module corresponding to $\lambda.$ Also, let $\mathcal{T}_{\lambda}$ be the set of Young tableaux of shape $\lambda$ in which each element of $[m]$ appears exactly once.
For any $t \in \mathcal{T}_{\lambda}$, let $R_{t}$ be the row stabilizer of $t$, let $C_{t}$ be the column stabilizer of $t$, let $\{ t \}$ be the associated row tabloid, and let
\[ \varepsilon_{t} = \sum_{\beta \in C_{t}} \sgn(\beta) \{\beta t \} \] be the associated row polytabloid of $t$.
It is a classical result that the set of all $\varepsilon_{t}$ where $t$ is a standard Young tableau forms a basis of $S^{\lambda}$.
In \cite{Kras} and \cite{fulton}, both Kraskiewicz and Fulton introduce a dual construction of the Specht module, $\tilde{S^{\lambda}},$ using column tabloids rather than row tabloids. Column tabloids are quite similar to row tabloids: a row tabloid is an equivalence class of numberings of a Young diagram such that two row tabloids are equivalent if they have the same entries in each row. Dually, a \emph{column tabloid}, denoted $[t]$, is an equivalence class of numberings of a Young diagram such that two column tabloids are equivalent \emph{up to sign} if they have the same entries in each column. Herein lies a key difference between row and column tabloids: unlike row tabloids, column tabloids are antisymmetric within columns. That is, for a column tabloid $[t]$ and $\beta \in C_{t}$, we have $[t] = \sgn(\beta) \beta [t] = \sgn(\beta) [\beta t].$
Let $\tilde{M}^{\lambda}$ be the vector space generated by all $[t]$ where $t$ is a Young tableau of shape $\lambda$, modulo the antisymmetry relations which are generated by $[t] - \sgn(\beta)[\beta t]$ for each $\beta \in C_{t}$. Thus a basis of $\tilde{M}^{\lambda}$ is given by all {\sl ordered} column tabloids of shape $\lambda$, where by ``ordered" we mean that the numbers in the tableaux increase going down the columns.
The symmetric group acts on $[t] \in \tilde{M}^{\lambda}$ in the natural way: $\sigma [t] = [\sigma t].$ Fulton defines $\tilde{S^{\lambda}}$ to be the subspace of $\tilde{M}^{\lambda}$ spanned by elements of the form $\sum_{\alpha \in R_{t}} \alpha [t]$. He shows that this dual construction of a Specht module is isomorphic to its row tabloid counterpart, $S^{\lambda}$ \cite{fulton}.
In order to prove this result, Fulton defines a dual straightening algorithm which gives a presentation of Specht modules as a quotient space of $\tilde{M}^{\lambda}$ by dual Garnir relations. This presentation also appeared in \cite{Kras} two years earlier. There is a dual Garnir relation for each $t\in \mathcal{T}_\lambda$, each choice of adjacent columns, and each $k$ up to the length of the shorter column. In Section \ref{newSp}, we simplify this presentation significantly: we show that we need only a single relation called $\eta$ for each choice of adjacent columns of an ordered column tabloid $[t]\in \tilde{M}^{\lambda}$ (Theorem \ref{spechtgarnir}). Our result applies to all partitions, thereby extending a simplification achieved in \cite{FHSW} that applied only to staircase partitions.
We then use the relation $\eta$ in the study of the action of the symmetric group on a generalization of free Lie algebras introduced in \cite{FHSW}. This work is based on the following generalization of the bi-linear Lie bracket $[\cdot, \cdot]$ to an $n$-linear commutator $[\cdot, \cdot, \dots, \cdot]$, which arose from the study of the correspondence between ADE singularities and ADE Lie algebras in \cite{Friedmann}, and appeared previously in other contexts \cite{Fi, Ta, DT, Ka, Li, BL, Gu}.
\begin{definition}
A \emph{Lie Algebra $\mathcal{L}$ of the $n^{th}$ kind (LAnKe)} is a vector space equipped with an $n$-linear bracket such that the following hold.
\begin{enumerate}
\item The bracket is antisymmetric: $[x_{1}, \dots, x_{n}] = \sgn(\sigma)[x_{\sigma(1)}, \dots, x_{\sigma(n)}]$ for every $\sigma \in S_{n}$.
\item The \emph{generalized Jacobi identity} holds:
\begin{equation}\label{jacobiidentity}
[[x_{1}, \dots, x_{n}], y_{1}, \dots, y_{n-1}] = \sum_{i=1}^{n}(-1)^{n-i} [[y_{1}, \dots, y_{n-1}, x_{i}], x_{1}, \dots, \hat{x_{i}}, \dots, x_{n}]
\end{equation}
for every $x_{i}, y_{j} \in \mathcal{L}$.
\end{enumerate}
\end{definition}
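For orientation (this specialization is ours and not part of the definition above), setting $n=2$ recovers the classical Jacobi identity:

```latex
% Specializing the generalized Jacobi identity to n = 2:
\[
  [[x_1,x_2],y_1] \;=\; -[[y_1,x_1],x_2] \;+\; [[y_1,x_2],x_1].
\]
% Antisymmetry of the inner bracket, [y_1,x_i] = -[x_i,y_1], gives
\[
  [[x_1,x_2],y_1] \;=\; [[x_1,y_1],x_2] \;+\; [x_1,[x_2,y_1]],
\]
% the classical Jacobi identity written in Leibniz form.
```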
Many natural objects in the Lie case generalize to the LA$n$Ke. This includes homomorphisms, ideals, and subalgebras; for a more complete description, see \cite{Friedmann}. In particular, we can generalize the free Lie algebra on a set $X$.
\begin{definition} [\cite{FHSW}]
A \emph{free LA$n$Ke} on a set $X$ is a LA$n$Ke $\mathcal L$ and map $i:X \to \mathcal L$ with the universal property that for any LA$n$Ke $\mathcal K$ and map $f: X \to \mathcal K$, there exists a unique LA$n$Ke homomorphism $F$ making the following diagram commute: \\
\begin{center}
\begin{tikzcd}
X \arrow{r}{i} \arrow{dr}{f}
& \mathcal L \arrow{d}{F}\\
& \mathcal K
\end{tikzcd}
\end{center}
\end{definition}
Just as the free Lie algebra on a set $X$ is the space generated by all Lie bracketings subject to the antisymmetry and bi-linearity of the Lie bracket and the Jacobi identity, the free LA$n$Ke on $X$ is the space generated by all $n$-bracketed elements in $X$ subject to the $n$-linear, antisymmetric bracket and the generalized Jacobi identity given in Equation (\ref{jacobiidentity}).
The \emph{multi-linear component} of the free LA$n$Ke on $X$ is the vector subspace spanned by $n$-bracketed words in which each generator in $X$ appears exactly once. In this paper, we will take all vector spaces to be over $\mathbb{C}$.
The free Lie algebra admits a natural grading given by the number of times the Lie bracket is applied, denoted here by $k-1$. This $k$ is still relevant for the free LA$n$Ke. However, the free LA$n$Ke takes into account a second variable as well: the number of entries in each bracket, denoted $n$.
For an element of the form $[[[...]..]..]$, for example, we say $k=4$ and $n=3$. Through this lens, the free Lie algebra is simply the case where $n=2$. It follows that the multi-linear component of the free LA$n$Ke involving $k-1$ bracketings will involve $kn-n-k+2$ generators; in the case of the free Lie algebra, this is exactly $k$.
When $n=2$, acting by permutation on these generators gives the famed $(k-1)!$-dimensional representations of the symmetric group $S_{k}$ on the multi-linear component of the free Lie algebra on $k$ generators.
These representations, denoted $\lie(k)$, are an object of longtime fascination to algebraic combinatorialists.
Here we continue the study initialized in \cite{FHSW} of the natural generalization of $\lie(k)$ to the representations of $S_{kn-n-k+2}$ on the multi-linear component of the free LA$n$Ke on $kn-n-k+2$ generators. This representation is called $\rho_{n,k}$. In particular, we study the case where $k=3$; it was proved in \cite{FHSW} that $\rho_{n,3}$ is isomorphic to the Specht module $S^{2^{n-1}1}$ and therefore has dimension given by the Catalan numbers. In Section \ref{catiso} we give an independent proof of this result which has the advantage of including an explicit isomorphism between the two spaces (Theorem \ref{iso}). This isomorphism allows us to find an elegant basis for $\rho_{n,3}$ corresponding to standard Young tableaux of shape $2^{n-1}1$ (Corollary \ref{rhon3basis}).
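The dimension claim just stated can be checked mechanically: the number of standard Young tableaux of shape $2^{n-1}1$, computed by the hook length formula, equals the Catalan number $C_n = \frac{1}{n+1}\binom{2n}{n}$. The following Python sketch (the function names are ours, added for illustration) performs this check for small $n$.

```python
from math import factorial

def num_syt(shape):
    """Number of standard Young tableaux of a partition shape,
    via the hook length formula."""
    hooks = 1
    for i, row in enumerate(shape):
        for j in range(row):
            arm = row - j - 1                              # cells to the right
            leg = sum(1 for r in shape[i + 1:] if r > j)   # cells below
            hooks *= arm + leg + 1
    return factorial(sum(shape)) // hooks

def catalan(n):
    return factorial(2 * n) // (factorial(n) * factorial(n + 1))

# dim S^{2^{n-1} 1} = C_n, the CataLAnKe dimension
for n in range(2, 8):
    assert num_syt([2] * (n - 1) + [1]) == catalan(n)
```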
\section{A new presentation of Specht modules}\label{newSp}
\subsection{Garnir relations for column tabloids}
In this section we recall known presentations of Specht modules in terms of dual Garnir relations.
In \cite{fulton}, Fulton introduces a map $$\alpha: \tilde{M}^{\lambda} \to S^{\lambda}$$ given by
\[ \alpha: [t] \mapsto \varepsilon_{t}. \]
The map $\alpha$ is equivariant and surjective. Moreover, $\ker(\alpha)$ is generated by a set of relations which Fulton calls the dual Garnir relations.
The dual Garnir relations are constructed as follows.
For a fixed column $c$ of a tableau $t$ of shape $\lambda$, and for $1\leq k \leq \lambda^{'}_{c+1}$, let $\pi_{c,k}(t)$ be the sum of column tabloids obtained from all possible ways of exchanging the top $k$ elements of the $(c+1)^{st}$ column of $t$ with any subset of size $k$ of the elements of column $c$, and fixing all other elements of $t$. For example, for
\[ t = \ytableausetup{centertableaux}
\begin{ytableau}
1 & 4 & 6\\
2 & 5 \\
3
\end{ytableau} \]
we have
\begin{center}
\includegraphics[scale = .25]{pi11.pdf}
\end{center}
Then the \emph{dual Garnir relation} $g_{c,k}(t)$ is
\begin{equation} \label{garnir} g_{c,k}(t) = [t] - \pi_{c,k}(t) .\end{equation}
Note that $t$ can be any tableau, not necessarily with increasing columns. Varying $g_{c,k}(t)$ over $c$ and $k$ gives a straightening algorithm for column tabloids.
\begin{thm}[\cite{fulton}] \label{fultonthm}
Let $\tilde{G}^{\lambda}$ be the subspace of $\tilde{M}^{\lambda}$ generated by $g_{c,k}(t)$ where $t$ varies across all $t \in \mathcal{T}_{\lambda}$, $1 \leq c \leq \lambda_1 -1$ and $1 \leq k \leq \lambda^{'}_{c+1}.$ Then
the kernel of $\alpha$ is generated by $\tilde{G}^{\lambda}$. That is,
\[ S^{\lambda} \cong \tilde{M}^{\lambda} /\tilde{G}^{\lambda} . \]
\end{thm}
Fulton shows in an exercise in \cite{fulton} that this presentation can be simplified further using only the $g_{c,1}$ relations. A corollary to this theorem is another proof of the classical result that a basis of $S^{\lambda}$ is given by polytabloids of standard Young tableaux of shape $\lambda.$ Our main contribution to this theory will be to give a new presentation of $S^{\lambda}$ that
reduces the number of generators for $\tilde{G}^{\lambda}$ even further.
\subsection{One relation to generate them all}
In this section we derive a presentation of $S^\lambda$ that requires far fewer relations than those needed in Theorem \ref{fultonthm}.
We begin by narrowing our study to partitions $\mu$ of $n+m$ with shape $2^{m}1^{n-m}$, so $\mu$ has a column of size $n$ and a column of size $m$ for $1 \leq m \leq n$, and $\mu'=(n,m)$. We shall generalize these results to partitions of any shape at the end of this Section.
Note that by the antisymmetry of column tabloids, $\tilde{M}^{\mu}$ can be induced from the Young subgroup $S_{n} \times S_{m} \leq S_{n+m}$ as follows:
\[ \tilde{M}^{\mu} \cong \ind_{S_{n} \times S_{m}}^{S_{n+m}} \left ( \sgn_{n} \otimes \sgn_{m} \right ) \cong \bigoplus_{i=0}^{m} S^{2^{i}1^{n+m - 2i}} .\]
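As a sanity check on this decomposition (added here for illustration, not part of the original argument), the dimensions of both sides must agree: ordered column tabloids are indexed by the first-column set, so the left side has dimension $\binom{n+m}{n}$, while the right side is a sum of hook-length-formula dimensions. A Python sketch (helper names are ours):

```python
from math import comb, factorial

def num_syt(shape):
    """Standard Young tableaux count via the hook length formula."""
    hooks = 1
    for i, row in enumerate(shape):
        for j in range(row):
            leg = sum(1 for r in shape[i + 1:] if r > j)
            hooks *= (row - j - 1) + leg + 1
    return factorial(sum(shape)) // hooks

def decomposition_dims_match(n, m):
    """binom(n+m, n) versus the total dimension of the summands
    S^{2^i 1^{n+m-2i}} for i = 0..m."""
    rhs = sum(num_syt([2] * i + [1] * (n + m - 2 * i)) for i in range(m + 1))
    return comb(n + m, n) == rhs

assert all(decomposition_dims_match(n, m)
           for n in range(1, 8) for m in range(1, n + 1))
```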
To ease our discussion, we introduce the following map. For $t \in \mathcal{T}_\lambda$, let $\pi_{c,1}^{\ell}([t])$ be the sum of column tabloids obtained by switching the $\ell^{th}$ element in column $c+1$ of $[t]$ with each of the elements in column $c$ of $[t]$ in turn. For example,
\begin{center}
\includegraphics[scale = .25 ]{pi211.pdf}
\end{center}
We can therefore define
\[ g_{c,1}^\ell ([t]): =[t]-\pi_{c,1}^\ell ([t]) . \]
\begin{prop}
The set $\{ g_{c,1}^\ell ([t]) \mid [t]\in \tilde{M}^{\lambda} \; , 1\leq \ell \leq \lambda_{c+1}' \}$ is the same as the set of the dual Garnir relations $\{ g_{c,1}(t) \mid t \in\mathcal{T}_\lambda \}$.
\end{prop}
\begin{proof} The statement follows directly from the definitions.
\end{proof}
The maps $\pi_{c,1}^\ell$ and $g_{c,1}^\ell$ allow us to narrow our study to tableaux with ordered columns. Additionally, they allow us to define a new linear transformation from
$\tilde{M}^{\mu}$ to $\tilde{M}^{\mu}$.
\begin{definition} \label{defeta}
For $\mu = 2^m1^{n-m}$, let $\eta: \tilde{M}^{\mu} \to \tilde{M}^{\mu}$ be the map
\[ \eta: [t] \mapsto \sum _{j=1}^m g_{1,1}^j ([t]) = m [t] - \sum_{j=1}^{m} \pi_{1,1}^{j}([t]). \]
\end{definition}
Because $\eta$ is a sum of equivariant linear maps, it follows that $\eta$ is equivariant as well. Furthermore, it follows from Theorem \ref{fultonthm} that $\im(\eta) \subseteq \ker(\alpha)$, as each term in the sum defining $\eta$ is a dual Garnir relation. Using a technique employed in \cite{FHSW}, we will now show that the relations generated by $\eta$ are all that is needed to generate $\tilde{G}^{\mu}$.
\begin{thm} \label{imetakeralpha}
For $\mu = 2^m1^{n-m}$, $\ker(\eta) \cong S^\mu$, and thus $\im(\eta) = \ker(\alpha)$ for $\alpha : \tilde{M}^\mu \rightarrow S^\mu$.
\end{thm}
Note that because
\[ \tilde{M}^{\mu} \cong \bigoplus_{i=0}^{m} S^{2^{i}1^{n+m - 2i}}\]
is multiplicity-free, by Schur's Lemma $\eta$ acts as a scalar on each irreducible submodule of $\tilde{M}^{\mu}$. Thus, finding the kernel of $\eta$ is equivalent to finding the irreducible submodules of $\tilde{M}^{\mu}$ on which $\eta$ acts like the 0 scalar.
We proceed by computing the action of $\eta$ on each irreducible submodule of $\tilde{M}^{\mu}$. For each $T \in \binom{[n+m]}{n}$, let $v_{T}\in \tilde{M}^{\mu}$ be the column tabloid with first column $T$ (both columns assumed to be in increasing order). For any $v\in \tilde{M}^{\mu}$, let $\langle v, v_T \rangle$ be the coefficient of $v_T$ in the expansion of $v$ in the basis of all $v_T$.
\begin{lem}\label{lemma}
For every $S, T \in \binom{[n+m]}{n}$,
\[ \langle \eta(v_{S}), v_{T} \rangle = \begin{cases}
m & \textrm{if } S = T \\
0 & \textrm{if } |S \cap T| < n-1 \\
(-1)^{x+y} & \textrm{if } |S \cap T| = n -1 \textrm{ with}\\
& x \in S \backslash T, y \in T \backslash S
\end{cases} \]
\end{lem}
\begin{proof}
The first two cases follow easily from the definition of $\eta$. For the last case, suppose $x$ is in the $r_{x}^{th}$ row in the first column of $v_{S}$ and $y$ is in the $r_{y}^{th}$ row of the second column of $v_{S}$. Then there are precisely $x-1$ numbers smaller than $x$ altogether, with $r_{x}-1$ of them in the first column. It follows that there are $x - r_{x}$ numbers smaller than $x$ in the second column. Similarly, there are $r_{y} - 1$ numbers smaller than $y$ in the second column and $y - r_{y}$ numbers smaller than $y$ in the first column.
There are two cases: $x< y$ or $y < x$.
Suppose $x<y$ and swap the positions of $x$ and $y$. Then in order to obtain an element in the basis of $\tilde{M}^{\lambda}$, we must move $y$ to the $(y-r_{y})^{th}$ row of the first column and $x$ to the $(x - r_{x} +1)^{st}$ row of the second column. This means moving $y$ from the $r_{x}^{th}$ row down to the $(y-r_{y})^{th}$ row, which requires $y - r_{y} - r_{x}$ transpositions. Similarly moving $x$ up from the $r_{y}^{th}$ row to the $(x - r_{x} + 1)^{st}$ row requires $r_{y} - x + r_{x} - 1$ transpositions. Altogether, this amounts to a sign change of
\[ (-1)^{r_{y} - x + r_{x} - 1 + y - r_{y} - r_{x}} = (-1)^{y-x-1} .\]
Finally, taking into account that $\eta$ itself contributes a sign change of $(-1)$, we obtain the coefficient $(-1)^{x+y}$ for $\langle \eta(v_{S}), v_{T} \rangle$. The case $y<x$ is similar.
\end{proof}
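Lemma \ref{lemma} can also be verified by brute force for small $n$ and $m$: compute $\eta(v_S)$ directly from Definition \ref{defeta}, straightening each swapped tabloid into the ordered basis while tracking the antisymmetry sign, and compare every coefficient with the closed form above. The following Python sketch (all names are ours) carries this out.

```python
from itertools import combinations

def straighten(col):
    """Sort a column, tracking the sign of the sorting permutation."""
    col, sign = list(col), 1
    for i in range(len(col)):
        for j in range(len(col) - 1 - i):
            if col[j] > col[j + 1]:
                col[j], col[j + 1] = col[j + 1], col[j]
                sign = -sign
    return tuple(col), sign

def eta_direct(n, m, T):
    """eta(v_T) as a dict {first column S: coefficient}, straight from the
    definition: m*v_T minus every single swap of a second-column entry with
    a first-column entry, re-ordered with the antisymmetry sign."""
    first = list(T)
    second = [x for x in range(1, n + m + 1) if x not in T]
    out = {tuple(T): m}
    for j in range(m):
        for i in range(n):
            c1, c2 = first[:], second[:]
            c1[i], c2[j] = c2[j], c1[i]
            S, s1 = straighten(c1)
            _, s2 = straighten(c2)
            out[S] = out.get(S, 0) - s1 * s2
    return out

def lemma_coeff(n, m, S, T):
    """The closed form of <eta(v_S), v_T> asserted by the Lemma."""
    if S == T:
        return m
    if len(set(S) & set(T)) == n - 1:
        x, = set(S) - set(T)
        y, = set(T) - set(S)
        return (-1) ** (x + y)
    return 0

for n in range(1, 5):
    for m in range(1, n + 1):
        subsets = list(combinations(range(1, n + m + 1), n))
        for S in subsets:
            coeffs = eta_direct(n, m, S)
            assert all(coeffs.get(T, 0) == lemma_coeff(n, m, S, T)
                       for T in subsets)
```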
We next calculate the scalar action of $\eta$ on the irreducible submodules of $\tilde{M}^{\mu}$.
\begin{thm} \label{eta}
On the irreducible submodule of $\tilde{M}^{\mu}$ isomorphic to $S^{2^{i}1^{(n+m) - 2i}}$, the operator $\eta$ acts like a scalar $\omega_{i}$, where
\[ \omega_{i} := 2(m-i) .\]
\end{thm}
\begin{proof}
For simplicity, take $T = [n]$.
Then for a given $i$, we take $t$ to be the Young tableau given by
\begin{center}
\includegraphics[scale = .45]{tableau.pdf}
\end{center}
Recall that $C_{t}$ is the column stabilizer of $t$ and $R_{t}$ is the row stabilizer of $t$. Then we denote by $e_{t}$ the Young symmetrizer of $t$:
\[ e_{t} = \sum_{\alpha \in R_{t}} \sum_{\beta \in C_{t}} \sgn(\beta)\alpha \beta. \]
As in \cite{FHSW}, we adopt a slight abuse of notation by writing $S^{2^{i}1^{n+m-2i}}$ for the subspace of $\tilde{M}^{\mu}$ spanned by the elements $\tau e_{t}v_{T}$ for $\tau \in S_{n+m}$, which is isomorphic to $S^{2^{i}1^{n+m-2i}}$.
We define $d_{t}$, $f_{t}$ and $r_{t}$ as in \cite{FHSW}; that is, $r_{t} = \sum_{\alpha \in R_{t}} \alpha$, while $d_{t}$ is the signed sum of column permutations stabilizing $\{ 1, 2, \dots , n \}, \{ n+1, \dots, n+i \}$, and $\{ n+i+1, \dots, n+m \}$, and $f_{t}$ is the signed sum of permutations in $C_{t}$ that maintain the vertical order of these sets. Then $e_{t}v_{T} = r_{t}f_{t}d_{t}v_{T}$.
The antisymmetry of column tabloids ensures that $d_{t}v_{T}$ is a scalar multiple of $v_{T},$ because it simply permutes within columns. Therefore we can conclude that $r_{t}f_{t}v_{T}$ is a scalar multiple of $e_{t}v_{T}$, and in particular that $e_{t}v_{T}$ is nonzero, as the coefficient of $v_{T}$ in $r_{t}f_{t}v_{T}$ is nonzero.
Consider $\eta(r_{t}f_{t}v_{T}).$ In the subspace restricted to $S^{2^{i}1^{n+m-2i}}$, the fact that $\eta$ acts on $e_{t}v_{T}$ as a scalar implies the same is true of $r_{t}f_{t}v_{T}.$ In fact, because the coefficient of $v_{T}$ in $r_{t}f_{t}v_{T}$ is 1, we can determine precisely what this scalar is by computing $\langle \eta(r_{t}f_{t}v_{T}), v_{T} \rangle$. In particular, we wish to show that
\[ \langle \eta(r_{t}f_{t}v_{T}), v_{T} \rangle = \omega_{i} = 2(m-i) .\]
Again, following \cite{FHSW} we have
\[ r_{t}f_{t}v_{T} = \sum_{S \in \binom{[n+m]}{n}} \langle r_{t}f_{t}v_{T}, v_{S} \rangle v_{S} .\]
Applying the linear operator $\eta$ thus gives
\[ \eta(r_{t}f_{t}v_{T}) = \sum_{S \in \binom{[n+m]}{n}} \langle r_{t}f_{t}v_{T}, v_{S} \rangle \eta(v_{S}) .\]
Note that when $T = S$, by Lemma \ref{lemma} we have $\langle \eta(v_{T}), v_{T} \rangle = m$. With this, we can compute the coefficient of $v_{T}$ in general by
\begin{align} \label{equation}
\hskip -3em \langle \eta (r_{t} f_{t} v_{T}), v_{T} \rangle = \sum_{S \in \binom{[n+m]}{n}} \langle r_{t}f_{t}v_{T}, v_{S} \rangle \langle \eta(v_{S}), v_{T} \rangle = m + \sum_{S \in \binom{[n+m]}{n} \backslash \{ T \} } \langle r_{t}f_{t}v_{T}, v_{S} \rangle \langle \eta(v_{S}), v_{T} \rangle .
\end{align}
By Lemma \ref{lemma}, for $T \neq S$, $\langle \eta(v_{S}), v_{T} \rangle \neq 0$ only when $S$ and $T$ differ by a single element. In the sum $r_{t}f_{t}v_{T}$, there are two types of possible $v_{S}$ that fulfill this criterion.
\begin{enumerate}
\item \emph{We can obtain $v_{S}$ from a single row swap.} That is, up to signs, $v_{S}$ is given by $(j, n+j)v_{T}$ for $1 \leq j \leq i$, so $(j, n+j) \in R_{t}$. In this case, in order to write $(j, n+j)v_{T}$ in our basis, we must move $j$ from the $j^{th}$ row to the $1^{st}$ row of the second column and $n+j$ from the $j^{th}$ row to the $n^{th}$ row of the first column. In total, this gives a sign change of $(-1)^{j-1+n-j}= (-1)^{n-1}$.
By Lemma \ref{lemma}, for such a $v_{S}$, we get $\langle \eta(v_{S}), v_{T} \rangle = (-1)^{(n+j)+j} = (-1)^{n}$. Hence overall we get a contribution to Equation \ref{equation} of
\[ \langle r_{t}f_{t}v_{T}, v_{S} \rangle \langle \eta(v_{S}), v_{T} \rangle = (-1)^{n-1 + n} = -1. \]
There are $i$ such possible $v_{S}$. Therefore this case contributes $-i$ to Equation \ref{equation}.
\item \emph{We can obtain $v_{S}$ by a swap coming from a column permutation $\sigma$ in $f_{t}$.} Note that because $f_{t}$ maintains the order of $\{ 1, 2, \dots , n \}, \{ n+1, \dots, n+i \}$, and $\{ n+i+1, \dots, n+m \}$ and we require that $|S \cap T| = n-1$, it must be that $S = \{ 1, 2, \dots, n-1, n+i+1 \}$. Suppose $\sigma$ moves $n$ to the $(n+\ell)^{th}$ row of $t$ for $1 \leq \ell \leq m-i$. To calculate the sign of $\sigma$, note that in order to move $n$ to the $(n+ \ell)^{th}$ row of $t$, it follows that
\[ \sigma = (n, n+i+\ell)(n, n+i+\ell-1) \dots (n, n+i+1), \]
so $\sgn(\sigma) = (-1)^{\ell}$.
In $\sigma v_{T}$, $n$ is in the $(i + \ell)^{th}$ row of the second column. In order to put this in our basis, we must move $n$ to the first row in the second column, which requires $i + \ell - 1$ transpositions.
Combining these, we have a sign change of $(-1)^{\ell + i + \ell -1} = (-1)^{i-1}$.
For such a $v_{S}$, the coefficient $\langle \eta(v_{S}), v_{T} \rangle$ is $(-1)^{(n+i+1)+n} = (-1)^{i+1}$.
Thus for such a $v_{S}$ and $\sigma$, we get a total coefficient of $(-1)^{i+1 + i -1} = 1.$
There are $m-i$ possible $\sigma$ (one for each $\ell$), and so we get a contribution to Equation \ref{equation} of $m-i$.
\end{enumerate}
Thus combining the $T=S$ case with the two cases above, we have
\[ \omega_i=\langle \eta (r_{t} f_{t} v_{T}), v_{T} \rangle = m + (- i) + (m-i) = 2(m-i) .\] \end{proof}
\begin{proof}[Proof of Theorem \ref{imetakeralpha}] By Theorem \ref{eta}, $\omega_i$ is 0 only when $i=m$, so $\ker(\eta) \cong S^{\mu}$. Thus $\im(\eta)$ is the sum of the remaining irreducible components of $\tilde{M}^{\mu}$, which is precisely $\ker(\alpha)$, and the theorem is proved.
\end{proof}
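The kernel claim in Theorem \ref{imetakeralpha} is checkable by direct linear algebra for small parameters: build the matrix of $\eta$ in the ordered-column-tabloid basis from the coefficients in Lemma \ref{lemma}, and compare its nullity with $\dim S^{\mu}$, computed by the hook length formula. A Python sketch in exact rational arithmetic (names are ours, added for illustration):

```python
from fractions import Fraction
from itertools import combinations
from math import factorial

def eta_matrix(n, m):
    """Matrix of eta with entries <eta(v_S), v_T> taken from the Lemma."""
    basis = list(combinations(range(1, n + m + 1), n))
    M = [[0] * len(basis) for _ in basis]
    for j, S in enumerate(basis):
        for i, T in enumerate(basis):
            if S == T:
                M[i][j] = m
            elif len(set(S) & set(T)) == n - 1:
                x, = set(S) - set(T)
                y, = set(T) - set(S)
                M[i][j] = (-1) ** (x + y)
    return M

def nullity(M):
    """dim ker M via Gauss-Jordan elimination over the rationals."""
    A = [[Fraction(v) for v in row] for row in M]
    rank = 0
    for col in range(len(A[0])):
        pivot = next((r for r in range(rank, len(A)) if A[r][col]), None)
        if pivot is None:
            continue
        A[rank], A[pivot] = A[pivot], A[rank]
        inv = A[rank][col]
        A[rank] = [v / inv for v in A[rank]]
        for r in range(len(A)):
            if r != rank and A[r][col]:
                factor = A[r][col]
                A[r] = [a - factor * b for a, b in zip(A[r], A[rank])]
        rank += 1
    return len(A[0]) - rank

def num_syt(shape):
    """Standard Young tableaux count via the hook length formula."""
    hooks = 1
    for i, row in enumerate(shape):
        for j in range(row):
            leg = sum(1 for r in shape[i + 1:] if r > j)
            hooks *= (row - j - 1) + leg + 1
    return factorial(sum(shape)) // hooks

# ker(eta) should be exactly the S^{2^m 1^{n-m}} component.
for n in range(1, 5):
    for m in range(1, n + 1):
        assert nullity(eta_matrix(n, m)) == num_syt([2] * m + [1] * (n - m))
```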
Theorem \ref{imetakeralpha} allows us to generate $\tilde{G}^{\mu}$ for any $\mu$ with two columns using only the single relation $\eta$.
We now consider any partition $\lambda = (\lambda_{1}, \dots , \lambda_{k})$ with conjugate $\lambda^{'} = (\lambda^{'}_{1}, \dots , \lambda^{'}_{j})$.
For $[t] \in \tilde{M}^{\lambda}$, let $h_{c}([t])$ be the result of applying $\eta$ to the $c^{th}$ and $(c+1)^{st}$ columns of $[t]$, leaving the other columns of $[t]$ fixed.
\begin{thm}\label{spechtgarnir}
For any partition $\lambda$ of $m$, let $\tilde{H}^{\lambda}$ be the space generated by $h_{c}([t])$ for every $[t] \in \tilde{M}^{\lambda}$ and $1 \leq c \leq \lambda_{1}-1$. Then
\[ S^{\lambda} \cong \tilde{M}^{\lambda} / \tilde{H}^{\lambda} .\]
\end{thm}
Theorem \ref{spechtgarnir} dramatically reduces the number of generators needed to find $\tilde{G}^{\lambda}$. The original construction of Theorem \ref{fultonthm} required enumerating over every $1 \leq k \leq \lambda _{c+1}'$ for every pair of columns $c$ and $c+1$ of every $t \in \mathcal{T}_\lambda$. Even Fulton's simplification using only $g_{c,1}$ relations requires enumerating over $t \in \mathcal{T}_\lambda$ for every pair of columns $c$ and $c+1$. By contrast, our construction uses a single relation for every pair of adjacent columns, and $[t]$ varies in $\tilde{M}^{\lambda}$, a significantly smaller space than $\mathcal{T}_\lambda$.
\section{A CataLAnKe Isomorphism}\label{catiso}
In this section, we will restrict our attention to tableaux of shape $2^{n-1}1$, and turn to the multi-linear component of the free LAnKe.
\subsection{The Jacobi identity and Garnir relations}
There is an intimate link between the space of column tabloids and the multilinear component of the free LAnKe.
Take the case $k=3$ and $n=3$. A typical bracket looks like
\[ [[1,2,3],4,5]. \]
Note that when $k=3$ and there are two brackets, we can always have the internal bracket justified to the left:
\[ [4,5,[1,2,3]] = -[4,[1,2,3],5]=[[1,2,3],4,5]. \]
We call brackets that are justified to the left \emph{combs}. The antisymmetry of column tabloids is precisely the antisymmetry in the combs. For example,
$$ [[1,2,3],4,5]=-[[1,2,3],5,4]=[[1,3,2],5,4] $$
corresponds to
\begin{center}
\includegraphics[scale = .25 ]{antisym.pdf}
\end{center}
Following \cite{FHSW}, we let $V_{n,3}$ be the space of antisymmetric multilinear, left comb brackets without imposing the Jacobi identity. It follows that $V_{n,3}$ and $\tilde{M}^{2^{n-1}1}$ are isomorphic as $S_{2n-1}$--modules:
\[ V_{n,3} \cong \bigoplus_{i=0}^{n-1} S^{2^{i}1^{2n-2i-1}} \cong \tilde{M}^{2^{n-1}1}. \]
We can formalize this by defining \[\Omega: V_{n,3} \to \tilde{M}^{2^{n-1}1}\] to be the map that sends a bracket $v_{T}$ to its corresponding column tabloid. For the remainder of this paper, we will abuse notation by referring to $\Omega(v_{T})$ and $v_{T}$ interchangeably.
As in \cite{FHSW}, we define an $S_{2n-1}$--module homomorphism $\varphi: V_{n,3} \to V_{n,3}$ by
\begin{eqnarray*} \varphi([[x_{1}, \dots, x_{n}], y_{1}, \dots, y_{n-1}] ) && \\ && \hskip -3.5cm =[[x_{1}, \dots, x_{n}], y_{1}, \dots, y_{n-1}] - \sum_{i=1}^{n}(-1)^{n-i} [[y_{1}, \dots, y_{n-1}, x_{i}], x_{1}, \dots, \hat{x_{i}}, \dots, x_{n}], \end{eqnarray*}
so $\varphi =0$ if the Jacobi identity holds. Thus by construction, $\ker(\varphi) = \rho_{n,3}$.
Note that by our above argument, we can define $\varphi:\tilde{M}^{2^{n-1}1} \to \tilde{M}^{2^{n-1}1}$ by composition with $\Omega.$
\begin{prop}\label{garnirJI}
The image of $[t]$ under $\varphi: \tilde{M}^{2^{n-1}1} \to \tilde{M}^{2^{n-1}1}$ is a dual Garnir relation for each $[t]$.
\end{prop}
\begin{proof}
The image $\varphi([t])$ is the relation $g_{1,n-1}(t)$.
\end{proof}
Proposition \ref{garnirJI} will prove informative in constructing our isomorphism from $\rho_{n,3}$ to $S^{2^{n-1}1}$, as will the following lemma.
\begin{lem}\label{kernelalpha}
For $\mu = 2^{n-1}1$, we have $\im(\eta) \subseteq \im(\varphi).$
\end{lem}
\begin{proof}[Proof of Lemma \ref{kernelalpha}]
As before, let $T = [n]$ and
$$v_{T} = [[1, \dots ,n ], n+1, \dots , 2n-1].$$
For $i\in [n]$ and $j\in [n-1]$, let $R_{i} = \{ n+1, \dots, 2n-1, i \} $ and $S_{i,j} = \{ 1, \dots, \hat{i}, \dots, n, n+j \}$, so that
\begin{align*}
v_{R_{i}} &= [[ i, n+1, \dots, 2n-1], 1, \dots, \hat{i}, \dots, n],\\
v_{S_{i,j}} &= [[1, \dots, \hat{i}, \dots, n, n+j], i, n+1, \dots, \widehat{(n+j)}, \dots, 2n-1].
\end{align*}
\end{align*}
We will show that $\eta( v_{T}) \in \im(\varphi)$.
The image of $v_{T}$ under $\varphi$ is
\begin{equation*}
\varphi (v_{T}) =v_{T} - \sum_{i=1}^{n} (-1)^{i-1} v_{R_{i}}.
\end{equation*}
We now claim that
\begin{equation} \label{eqneta}
\eta(v_{T}) = -\left ( \varphi(v_{T}) + \sum_{i = 1}^{n} (-1)^{i-1} \varphi(v_{R_{i}}) \right ) ,\end{equation}
from which the lemma follows. To see why equation (\ref{eqneta}) is true, consider each $\varphi(v_{R_{i}})$. One can verify that
\begin{equation}
\varphi(v_{R_{i}}) = v_{R_{i}} - \left ( \sum_{j = 1}^{n-1} (-1)^{j+n-1} v_{S_{i,j}} + (-1)^{i-1}v_{T} \right ) .
\end{equation}
Now consider
\begin{equation} \label{x} \varphi(v_{T}) + \sum_{i = 1}^{n} (-1)^{i-1} \varphi(v_{R_{i}}) .\end{equation}
By our above discussion, we can rewrite equation (\ref{x}) as
\[ \Big(v_{T} - \sum_{i =1}^{n} (-1)^{i-1} v_{R_{i}} \Big) + \sum_{i = 1}^{n} (-1)^{i-1} \Big ( v_{R_{i}} - \Big ( \sum_{j = 1}^{n-1} (-1)^{j+n-1} v_{S_{i,j}} + (-1)^{i-1}v_{T} \Big ) \Big) .\]
Noting that the $v_{R_{i}}$ cancel for every $i$ and adjusting for signs, we simplify this to
\begin{align*}
-(n-1)v_{T}-\sum_{i=1}^{n} \Big( \sum_{j=1}^{n-1}(-1)^{n+j+i}v_{S_{i,j}} \Big) .
\end{align*}
Observe that by Definition \ref{defeta} this is precisely $- \eta(v_{T})$.
It follows that $\im(\eta) \subseteq \im(\varphi)$.
\end{proof}
\subsection{The isomorphism between $\rho_{n,3}$ and $S^{2^{n-1}1}$}
We now move from the world of column tabloids back to the more standard one of row tabloids and row polytabloids. Recall that for a tableau $t \in \mathcal{T}_\lambda$,
$\{ t \}$ is the associated row tabloid and $\varepsilon_{t}$ is the associated row polytabloid.
We define a map $\Psi: \rho_{n,3} \to S^{2^{n-1}1}$ by first defining a map $\tilde{\Psi}: V_{n,3} \to S^{2^{n-1}1}$ and then considering the restriction of this map to $\rho_{n,3}.$
For a bracket $v = [[x_{1}, \dots , x_{n}], y_{1}, \dots, y_{n-1}] \in V_{n,3}$, let $t(v)$ be the tableau labeled compatibly with the bracket, as in:
\[ t(v) =\ytableausetup
{mathmode, boxsize=2.25em, centertableaux}
\begin{ytableau}
x_{1} & y_{1} \\
x_{2} & y_{2} \\
\vdots & \vdots \\
x_{n-1} & y_{n-1} \\
x_{n} \\
\end{ytableau} \; . \]
\begin{definition} \label{Psidef}
The map $\tilde{\Psi}: V_{n,3} \to S^{2^{n-1}1} $ is given by
\[ \tilde{\Psi}: v \mapsto \varepsilon_{t(v)} .\]
\end{definition}
Note that $\tilde{\Psi}(v) = (\alpha \circ \Omega)(v).$
It is therefore clear that $\tilde{\Psi}$ is a well-defined $S_{2n-1}$--module homomorphism and that $\ker(\alpha)\cong \ker(\tilde{\Psi})$. By Theorem \ref{fultonthm} we know that every dual Garnir relation is in $\ker(\alpha)$. Since by Proposition \ref{garnirJI}, the Jacobi identity relations are dual Garnir relations, they are in $\ker(\tilde{\Psi})$. That is, $\tilde \Psi (\varphi (u))=0$ for every $u\in V_{n,3}$. An independent proof which makes this fact more explicit appears in Appendix \ref{appendixpf}.
Because $\varphi:V_{n,3} \to V_{n,3}$ is an $S_{2n-1}$-module homomorphism, we can write $V_{n,3} \cong \ker(\varphi) \oplus \im(\varphi)$ and let
$$\gamma: \ker(\varphi) \oplus \im(\varphi) \to \ker(\varphi)$$ be the projection map. Since $\tilde \Psi (\im \varphi ) =0$, we can consider the restriction of $\tilde{\Psi}$ to $\rho_{n,3}\cong \ker(\varphi)$.
\begin{definition}
For $x \in \rho_{n,3}$, let $\Psi: \rho_{n,3} \to S^{2^{n-1}1}$ be defined by
\[\Psi(x) = \tilde{\Psi}|_{{\rho_{n,3}}}(x) = (\tilde{\Psi} \circ \gamma)(x). \]
\end{definition}
We now state the main theorem of this section.
\begin{thm}\label{iso}
The map $\Psi$ is an $S_{2n-1}$--module isomorphism.
\end{thm}
\begin{proof}
It is clear that $\Psi$ is surjective. It remains to show that $\Psi$ is injective; for this, it suffices to show that $\im(\varphi) = \ker(\tilde\Psi)$. We have already seen that $\im(\varphi) \subseteq \ker(\tilde\Psi)$. Because $V_{n,3} \cong \tilde{M}^{2^{n-1}1}$ and, by Theorem \ref{imetakeralpha}, $\ker(\alpha) = \im(\eta)$, it follows (with a slight abuse of notation) that $\ker(\tilde\Psi) = \im(\eta)$. By Lemma \ref{kernelalpha}, the other containment $\im(\eta) \subseteq \im(\varphi)$ holds as well, implying that $\im(\varphi) = \ker(\tilde\Psi)$ as needed. \end{proof}
\begin{cor}[The CataLAnKe Theorem \cite{FHSW}] \label{catalanke} The representation $\rho_{n,3}$ is isomorphic to $S^{2^{n-1}1}$.
\end{cor}
An alternative method to prove Theorem \ref{iso} was pointed out to us by Michelle Wachs. In \cite{lie}, the $g_{1,1}$ relations are shown to be equivalent to the Jacobi identity. Combining this result with the observation in \cite{fulton} that over $\mathbb{C}$, the $g_{c,1}$ relations generate $\ker(\alpha)$ gives another way to show the isomorphism between $\rho_{n,3}$ and $S^{2^{n-1}1}$ by $\Psi$.
We can use Theorem \ref{iso} to find a basis for $\rho_{n,3}$ by using the iconic basis of the Specht module.
\begin{definition}
A bracket $[[x_{1}, \dots, x_{n}],y_{1}, \dots , y_{n-1}]$ is \emph{standard} if $x_{1} < x_{2}< \dots < x_{n}$, $y_{1} < y_{2}< \dots < y_{n-1}$ and $x_{j} < y_{j}$ for every $j \in [n-1].$
\end{definition}
\begin{cor}\label{rhon3basis}
The set of standard brackets forms a basis for $\rho_{n,3}$.
\end{cor}
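Since standard brackets are indexed by the same objects as standard Young tableaux of shape $2^{n-1}1$, their number must be the Catalan number $C_n$. A brute-force Python check (names are ours, added for illustration):

```python
from itertools import combinations
from math import factorial

def standard_brackets(n):
    """Enumerate standard brackets [[x_1,...,x_n], y_1,...,y_{n-1}] on
    [2n-1]: both X and Y increasing, with x_j < y_j for all j in [n-1]."""
    out = []
    for X in combinations(range(1, 2 * n), n):
        Y = tuple(v for v in range(1, 2 * n) if v not in X)
        if all(X[j] < Y[j] for j in range(n - 1)):
            out.append((X, Y))
    return out

def catalan(n):
    return factorial(2 * n) // (factorial(n) * factorial(n + 1))

assert all(len(standard_brackets(n)) == catalan(n) for n in range(2, 9))
```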
\paragraph{Acknowledgements} The authors would like to thank Michelle Wachs, Vic Reiner, Marissa Miller, and Leslie Nordstrom for useful conversations. The authors gratefully acknowledge the National Science Foundation (DMS–1143716) and Smith College for their support of the Center for Women in Mathematics and the Post-Baccalaureate Program. This work was also partially funded by a Smith College Summer Research Fund.
\vskip 1cm
\nocite{*}
\bibliographystyle{alpha}
\section{Introduction}
\label{sec:1}
The structures of aggregates of atoms or molecules found in nature have been widely studied in disordered systems\cite{Gavezzotti,Cusack}. In these studies, transitions caused by the break-up of an aggregate in which atoms are connected through bonds forming a network configuration, such as liquid\textendash liquid phase transitions, have been investigated both experimentally and theoretically\cite{Winter,Poole,Franzese}. One-component liquid\textendash liquid phase transitions, which are characterized as transitions between two distinct liquids with different densities and entropies, exhibit different atomic network configurations in the two liquid phases. For instance, liquid phosphorus at low pressure consists of tetrahedral P$_4$ molecules, each of which is composed of atoms and their bonds with bond angle $\theta =60^\circ $, whereas at high pressure the liquid has a polymeric form in which atoms are connected through anisotropic bonds\cite{Katayama,Morishita}. That is, by compressing the low-pressure liquid, the entire tetrahedral network configuration collapses and a new network configuration is generated in a polymeric form.
To investigate geometric structures of condensed matter, mathematical methods using topology have been applied to condensed matter physics, for example, to topological defects\cite{Monastyrsky} and quasicrystals\cite{Hirata}. Among these topological methods, the mathematical structure of aggregates has been successfully studied using a more fundamental mathematical approach, namely, point set topology\cite{Kitada-csf,Kitada-jpsj}. In particular, we have focused on a somewhat indirect method of observation of the material structure: the geometrical structures are expressed indirectly through mathematical observations of the formation of a set of equivalence classes. Such a concept was also proposed by Fern\'{a}ndez\cite{Fernandez} in statistical physics. Note that diffraction analysis is likewise based on the idea of the equivalence class\cite{Cassels}.
In this paper, we discuss two distinct structures of aggregates of atoms composing a network configuration from the viewpoint of point set topology. The purpose of the current study is to characterize different network structures topologically and to discuss the obtained topological structures by using the concept of the equivalence class. In particular, we mainly associate the geometric structure of a network with the concept of a finite graph in point set topology, in which a finite graph has a topological structure constituted of some points called nodes and some arcs called edges, each end point of which is a node\cite{NadlerC}. Note that an arc is a space that is homeomorphic to the closed interval [0,1]. Here, we simply deal with two abstract network models, shown schematically in Fig.1 and described as follows: (i) The first network structure consists of some clusters of atoms; that is, we assume a situation in which some atoms are strongly connected with the other atoms, and hence the collection of atoms is regarded as a cluster. This situation corresponds to the liquid phase composed of tetrahedral molecules in the liquid\textendash liquid phase transition of phosphorus. (ii) The other network structure is supposed to be composed of all atoms and anisotropic bonds between pairs of atoms; no isolated atom exists. The network structures of (i) and (ii) are hereafter referred to as ``phase I" and ``phase II", respectively, and we assume a system in which phase I transforms into phase II and vice versa.
~~\\
\vspace{-5mm}
\begin{figure}[htbp]
\begin{center}
\includegraphics[clip,width=4cm]{fig1_phaseI.eps}
\hspace{23mm}
\includegraphics[clip,width=4cm]{fig1_phaseII.eps}\\
(a)
\hspace{65mm}
(b)
\end{center}
\caption{Schematic explanation of (a) phase I and (b) phase II. Phase I consists of atomic clusters $C_i$, each of which is composed of some atoms (solid circles) and their bonds (lines), whereas phase II contains only anisotropic bonds and no atomic clusters.}
\end{figure}
In the next section, we characterize each phase by a point set topology and show the topological natures corresponding to each network structure. In Section 3, we demonstrate that a specific topological space connects both different topological structures of phases I and II by an equivalence class, that is, the decomposition space of the topological space. In Section 4, the main difference of topological structures of two phases is presented using the decomposition spaces. Finally, the conclusions are given in Section 5.
\section{Characterization of structures of two phases}
\label{sec:2}
\subsection{Topological characterization for Phase I}
\label{subsec:2.1}
In this section, we characterize phases I and II described in Sec. 1 by using a topological method. First, phase I is supposed to consist of atomic clusters, each of which is denoted by $C_1,\dots,C_s$, where $s(<\infty )$ stands for the number of the clusters, and each cluster $C_i$ is composed of atoms, the number of which is denoted by $M_i$. Then, each atom in a cluster is connected with one of the others by some bonds, as shown in Fig.1 (a). Note that $\sum _{i=1}^sM_i=M$ is the number of all atoms in the system and is preserved in the transformation of phase I to phase II and vice versa.
Here, let us introduce the finite graph stated in Sec. 1. Regarding each atom in a cluster $C_i$ as a node and each bond as an edge whose end points are the nodes corresponding to the two atoms it connects, the topological structure of $C_i$ can be characterized through a graph. Each bond connecting two atoms is then an arc, and each end point of a bond corresponds to one of the two atoms. Therefore, the number of nodes in each graph is equal to the number of atoms $M_i$ composing the cluster $C_i$. Thus, the topological structure of phase I can be expressed as the disjoint union $\bigoplus_{i=1}^sC_i$\cite{Kuratowski} of the family of graphs $\{ C_i,i=1,\dots,s \}$. We denote this topological structure by $P_{\rm I}$. Note that $P_{\rm I}$ is a disconnected compact metric space, whereas each graph $C_i$ is itself a locally connected, connected, compact metric space. This disconnectedness reflects the fact that phase I consists of atomic clusters between which connections are negligible. In addition, it should be emphasized that $C_i$ is not necessarily homeomorphic to $C_j$, $i\not=j$.
\subsection{Topological characterization for Phase II}
\label{subsec:2.2}
Let us discuss the topological structure of phase II, which consists of atoms connected by their bonds without the formation of clusters, as shown in Fig.1 (b). Regarding each atom and the bond between two atoms as a node and an edge, respectively, the topological structure of phase II is directly characterized as a graph. As all atoms in the system correspond to nodes of the graph, the number of nodes is clearly $M$. Let $P_{\rm II}$ denote the topological structure of phase II. Note that $P_{\rm II}$ cannot be homeomorphic to $P_{\rm I}$, since $P_{\rm II}$ has a locally connected, connected, compact metric structure, whereas $P_{\rm I}$ has a disconnected structure. Therefore, phase II has a topologically different structure from that of phase I.
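The distinction above, that $P_{\rm I}$ is disconnected while $P_{\rm II}$ is connected, can also be checked combinatorially by counting connected components. The following Python sketch is an illustration only; the two toy graphs (two clusters of four atoms versus a single polymeric chain of the same eight atoms) are our own choice, not taken from the text.

```python
from collections import deque

def num_components(adj):
    """Count connected components of a graph given as an adjacency dict."""
    seen, count = set(), 0
    for start in adj:
        if start in seen:
            continue
        count += 1
        queue = deque([start])
        seen.add(start)
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
    return count

# "Phase I": two disjoint tetrahedral clusters (complete graphs on 4 atoms).
phase_I = {}
for c in range(2):
    nodes = [4 * c + i for i in range(4)]
    for v in nodes:
        phase_I[v] = [w for w in nodes if w != v]

# "Phase II": the same eight atoms joined into a single polymeric chain.
phase_II = {v: [] for v in range(8)}
for v in range(7):
    phase_II[v].append(v + 1)
    phase_II[v + 1].append(v)

print(num_components(phase_I), num_components(phase_II))  # 2 1
```

The component count plays the role of the topological invariant distinguishing the two phases: it equals the number of clusters $s$ for phase I and $1$ for phase II.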
\section{Topological space representing network structures}
\label{sec:3}
In the previous section, we verified that the topological structures $P_{\rm I}$ and $P_{\rm II}$ of the two phases cannot be related by any homeomorphic map. In this section, we define a specific topological space $S$ and show that it connects the different structures of $P_{\rm I}$ and $P_{\rm II}$ through the decomposition spaces of $S$.
From the viewpoint of point set topology, any compact metric structure can be obtained as a decomposition space of a space that is characterized, in principle, as a 0-dim, perfect, compact $T_2$-space. Let $S=(\{0,1\}^\Lambda ,\tau_0^\Lambda )$, $Card \Lambda \succ \aleph _0$, be the $\Lambda$-product space of $(\{0,1\},\tau_0)$, where $\tau_0$ is the discrete topology for $\{0,1\}$. Note that $S$ need not be metrizable\cite{Metrizable}. Then, $(\{0,1\}^\Lambda ,\tau_0^\Lambda )~(Card \Lambda \succ \aleph _0)$ is easily verified to be a 0-dim, perfect, compact $T_2$-space. Since both topological structures $P_{\rm I}$ and $P_{\rm II}$ are compact metric spaces, it is mathematically confirmed that there are decomposition spaces $\mathcal{D}_{\rm I}$ and $\mathcal{D}_{\rm II}$\cite{Decomposition} of $S$ such that $\mathcal{D}_{\rm I}$ and $\mathcal{D}_{\rm II}$ are homeomorphic to $P_{\rm I}$ and $P_{\rm II}$, respectively. In fact, there is a continuous map $f_{\rm I}$ from $(\{0,1\}^\Lambda ,\tau_0^\Lambda )$ onto $P_{\rm I}$, and then a homeomorphism $h_{\rm I} : P_{\rm I} \to \mathcal{D}_{\rm I}, y\mapsto f_{\rm I}^{-1}(y)$ is obtained. Note that we can regard $\mathcal{D}_{\rm I}$ as the topological structure of phase I instead of $P_{\rm I}$ because the properties of $P_{\rm I}$ are invariant under the homeomorphism. Similarly, via a continuous map $f_{\rm II}$ onto $P_{\rm II}$ and a homeomorphism $h_{\rm II} : P_{\rm II} \to \mathcal{D}_{\rm II}$, $\mathcal{D}_{\rm II}$ represents the topological structure of phase II (Fig.2).
Thus, the distinct topological structures $P_{\rm I}$ and $P_{\rm II}$ are represented as the decomposition spaces $\mathcal{D}_{\rm I}$ and $\mathcal{D}_{\rm II}$ of $S$, respectively. This means that $S$ provides an indirect association between the topologies of phases I and II. Therefore, we can proceed with the investigation of the topological structures of the two phases based on $\mathcal{D}_{\rm I}$ and $\mathcal{D}_{\rm II}$.
\begin{figure}[h]
\vspace{8mm}
\begin{center}
\includegraphics[clip,width=2.5cm]{fig2.eps}
\end{center}
\caption{Connection between $P_{\rm I}$ and $P_{\rm II}$ by $\mathcal{D}_{\rm I}$ and $\mathcal{D}_{\rm II}$. $h_{\rm I}$ and $h_{\rm II}$ are homeomorphisms. By using continuous onto maps $f_{\rm I}$ and $f_{\rm II}$, $\mathcal{D}_{\rm I}$ and $\mathcal{D}_{\rm II}$, being homeomorphic to $P_{\rm I}$ and $P_{\rm II}$, respectively, are obtained as the decomposition spaces of $S$.}
\end{figure}
\section{Discussion of topological structures}
\label{sec:4}
In this section, we show the main difference between the topological structures of phases I and II by considering the decomposition spaces $\mathcal{D}_{\rm I}$ and $\mathcal{D}_{\rm II}$ obtained in the previous section.
Let $x$ be a point representing an atom in the system. In phase I, the atom belongs to a cluster $C_{i_0}$ and has some bonds in $C_{i_0}$. Namely, $x$ is a node of edges $A_{1}^{i_0},\dots,A_{p}^{i_0}$ in terms of the graph $C_{i_0}$, where $p$ is the number of bonds\cite{Whyburn}. Then, $x$ is represented concretely in $\mathcal{D}_{\rm I}$ as
\begin{eqnarray}
h_{{\rm I}}(x) (=f^{-1}_{\rm I}(x)) =
J_{i_0}\times \big[ K_{j_1}\times \{0\}_{\nu _1^{j_1}}\times \{0\}_{\nu _2^{j_1}}\times \cdots \times \{0,1\}^{\Lambda -(\{\lambda _1,\dots,\lambda _i\}\cup \{\mu _1,\dots,\mu _{j_1}\}\cup \{\nu _1^{j_1},\nu _2^{j_1},\dots\})} \nonumber \\
\cup \cdots \cup
K_{j_p}\times \{0\}_{\nu _1^{j_p}}\times \{0\}_{\nu _2^{j_p}}\times \cdots \times \{0,1\}^{\Lambda -(\{\lambda _1,\dots,\lambda _i\}\cup \{\mu _1,\dots,\mu _{j_p}\}\cup \{\nu _1^{j_p},\nu _2^{j_p},\dots\})}\big ] \nonumber
\end{eqnarray}
(the derivation of $f^{-1}_{\rm I}(x)$ and the exact forms of $J_{i_0}$, $K_{j_l}$ ($l=1,\dots,p$) are given in Appendix 6.2). Note that the term $J_{i_0}$ is generated because cluster $C_{i_0}$ containing $x$ is separated from other clusters, and each $K_{j_l}$ is the term related to each bond $A_{l}^{i_0}$ of $x$ in $C_{i_0}$. In contrast, in phase II, the atom is just an element of the network structure characterized as a graph. So there is no atomic cluster to which $x$ belongs. Assuming that the number of the bonds of $x$ is $q$, $x$ is a node with edges $B_{1},\dots,B_{q}$. In $\mathcal{D}_{\rm II}$, $x$ is represented as
\begin{eqnarray}
h_{\rm II}(x) = L_{u_1}\times \{0\}_{\xi_1^{u_1}}\times \{0\}_{\xi_2^{u_1}}
\times \cdots \times \{0,1\}^{\Lambda -(\{\sigma _1,\dots,\sigma _{u_1}\}\cup \{\xi_1^{u_1},\xi_2^{u_1},\dots\})}
\nonumber \\
\cup \cdots \cup
L_{u_q}\times \{0\}_{\xi_1^{u_q}}\times \{0\}_{\xi _2^{u_q}}\times \cdots \times \{0,1\}^{\Lambda -(\{\sigma _1,\dots,\sigma _{u_q}\}\cup \{\xi_1^{u_q},\xi _2^{u_q},\dots\})}, \nonumber
\end{eqnarray}
where each $L_{u_n}$ is the term corresponding to the bond $B_n~(n=1,\dots,q)$. Hence, for a fixed $x$, the term $J_{i_0}$ is the main difference between phases I and II. Similarly, we can find the difference for any point $x$ representing other atoms in the system. As $J_{i}$ arises from the existence of cluster $C_{i}$ in phase I, the distinction between the topological structures of phases I and II may be attributed mainly to the terms $J_{i},~i=1,\dots,s$, that is, to whether or not the atomic clusters exist.
In Sec. 2 we proved that $P_{\rm I}$ is not homeomorphic to $P_{\rm II}$, for $P_{\rm I}$ is disconnected; this disconnectedness derives from the direct sum characterizing phase I, which is composed of the atomic clusters. Furthermore, in this section we identified the explicit terms $J_{i},~i=1,\dots,s$ related to the disconnectedness. As each $J_{i}$ is formed by a composition of $0$s and $1$s (see Appendix 6.2 (\ref{eqn:4})), it is anticipated that in the transformation from phase I to II, the arrangement of $0$ and $1$ for $J_{i}$ collapses in $\{0,1\}^{\Lambda} $, and then $J_{i}$ vanishes. The term $K_{j}$ is rearranged into $L_j$ depending on the number of bonds. Note that the quantitative meaning of $J_{i}$ will be revealed based on the approach of discrete dynamical systems, because the representation of $P_{\rm I}$ and $P_{\rm II}$ in Sec. 3 is associated with discrete dynamics\cite{Devaney}, and will be reported in the future.
\section{Conclusion}
\label{sec:5}
We discussed two distinct structures of aggregates of atoms, defined as phases I and II, connected through anisotropic bonds composing a network configuration. Phase I consists of some clusters $C_i, i=1,\dots,s$, the topological nature of each of which is given as a graph, and phase I itself is characterized topologically as the direct sum $\bigoplus_{i=1}^sC_i$ of the graphs $C_i$. Phase II is directly characterized as a graph, which is by no means homeomorphic to $\bigoplus_{i=1}^sC_i$. The topological structures $P_{\rm I}$ and $P_{\rm II}$ of phases I and II are represented by the decomposition spaces $\mathcal{D}_{\rm I}$ and $\mathcal{D}_{\rm II}$, respectively, of a specific topological space $S$. As the main difference between the two phases, the existence of the terms $J_i,~i=1,\dots,s$ is shown using $\mathcal{D}_{\rm I}$ and $\mathcal{D}_{\rm II}$. Finally, note that the topological characterization in this paper is applicable not only to aggregates of atoms but to any system forming network structures, with or without clustered substructures, whose bonds can be represented by arcs. For instance, village structures in social science or computer networks might be among the applications of the current study.
\section{Appendix}
\subsection{Construction of a continuous map onto any arc}
We will show the construction of a continuous map from $(\{0,1\}^\Lambda ,\tau_0^\Lambda )$ onto any arc $(A,\tau_A )$, where $(\{0,1\}^\Lambda ,\tau_0^\Lambda ), Card \Lambda \succ \aleph _0$, is the $\Lambda$-product space of $(\{0,1\},\tau_0)$ and $\tau_0$ is the discrete topology for $\{0,1\}$. Since any arc $(A,\tau_A )$ is homeomorphic to $([0,1],\tau_{[0,1]})$\cite{NadlerC}, where $([0,1],\tau_{[0,1]})$ is a subspace of $(R,\tau_R)$, it is sufficient to construct a continuous map from $(\{0,1\}^\Lambda ,\tau_0^\Lambda )$ onto $([0,1],\tau_{[0,1]})$.
We consider decomposing the closed interval $[0,1]$ by using a binary system. For any $a\in (0,1]$, there is a sequence $\{a_1,a_2,\cdots \}$ of elements in $\{0,1\}$ such that $a=\sum _{i=1}^\infty a_i/2^i$. Let $a=(0.a_1a_2\cdots a_n)_2$ denote $a=\sum _{i=1}^n a_i/2^i$ in finite binary representation $(n<\infty )$. For each $n\in \bf{N}$, $I=[0,1]$ is decomposed into the closed subintervals $I_{a_1a_2\cdots a_n}=[(0.a_1a_2\cdots a_n)_2, (0.a_1a_2\cdots a_n)_2+1/2^n]$, $a_1, a_2,\dots, a_n\in\{0,1\}$, as $I=\displaystyle\bigcup _{a_1 a_2\cdots a_n}I_{a_1a_2\cdots a_n}$. For example, if $n=1$, then $I_0=[(0.0)_2,(0.0)_2+1/2]$, $I_1=[(0.1)_2,(0.1)_2+1/2]$, and $I=I_0\cup I_1$; if $n=2$, then $I_{00}=[(0.00)_2,(0.00)_2+1/2^2]$, $I_{01}=[(0.01)_2,(0.01)_2+1/2^2]$, $I_{10}=[(0.10)_2,(0.10)_2+1/2^2]$, $I_{11}=[(0.11)_2,(0.11)_2+1/2^2]$, and $I=I_{00}\cup I_{01}\cup I_{10}\cup I_{11}$. Note that the relations $I_{a_1 a_2\cdots a_n}\subset I_{a_1 a_2\cdots a_{n-1}}$ and $\mathrm{diam}\,I_{a_1 a_2\cdots a_n}=1/2^n$ hold for any $n$, where $\mathrm{diam}$ stands for the diameter of a set. Then, there exists a partition $\Big{\{}\{a_1\}_{\lambda _1}\times \cdots \times \{a_n\}_{\lambda _n}\times \{0,1\}^{\Lambda -\{\lambda _1,\dots,\lambda _n\}};a_1, a_2, \dots, a_n \in\{0,1\}\Big{\}}$ of $\{0,1\}^\Lambda $, each member of which corresponds to $I_{a_1 a_2\cdots a_n}$, where $\lambda _1,\dots,\lambda _n$ are arbitrary elements in $\Lambda $ and $\{a_1\}_{\lambda _1}\times \cdots \times \{a_n\}_{\lambda _n}\times \{0,1\}^{\Lambda -\{\lambda _1,\dots,\lambda _n\}}=\{x\in \{0,1\}^\Lambda;x(\lambda _i)=a_i,i\in \{1,\dots,n\} \}$.
Defining maps $g_n:(\{0,1\}^\Lambda ,\tau_0^\Lambda )\to \Im _{[0,1]}-\{\phi \},n=1,2,\dots,$ as $g_n(x)=I_{a_1 a_2\cdots a_n}$ for any $x\in \{a_1\}_{\lambda _1}\times \cdots \times \{a_n\}_{\lambda _n}\times \{0,1\}^{\Lambda -\{\lambda _1,\dots,\lambda _n\}}$, it is easily verified that $f:(\{0,1\}^\Lambda ,\tau_0^\Lambda )\to ([0,1],\tau_{[0,1]})$, sending $x$ to the unique point of $\bigcap _n g_n(x)$, is a continuous onto map. Here, for $y\in (0,1]$, there are $k_1,k_2,\cdots $ in $\{0,1\}$ such that $y=\sum _{i=1}^\infty k_i/2^i$. Then $f^{-1}(y)=\{k_1\}_{\lambda _1}\times \{k_2\}_{\lambda _2}\times \cdots \times \{0,1\}^{\Lambda -\{\lambda _1,\lambda _2,\cdots\}} $. Note that $f^{-1}(0)=\{0\}_{\lambda _1}\times \{0\}_{\lambda _2}\times \cdots \times \{0,1\}^{\Lambda -\{\lambda _1,\lambda _2,\cdots\}} $.
\subsection{Derivation of decomposition spaces $\mathcal{D}_{\rm I}$ and $\mathcal{D}_{\rm II}$}
We will derive the decomposition spaces $\mathcal{D}_{\rm I}$ and $\mathcal{D}_{\rm II}$ of $S$ using homeomorphic mappings $h_{\rm I}$ and $h_{\rm II}$ stated in \S\ref{sec:3}. Now focus on the topological structure $P_{\rm I}$ of phase I. Since $P_{\rm I}$ is constructed by the disjoint union $\bigoplus_{i=1}^sC_i$ for the graphs $\{C_i;i=1,\dots,s\}$, it is confirmed that there exists a partition $\{X^1,\dots,X^s\}$ of $\{0,1\}^\Lambda $ such that
\begin{equation}
\left\{
\begin{array}{lcl}
X^1=\{0\}_{\lambda _1}\times \{0,1\}^{\Lambda -\{\lambda _1\}}, \\
\ \\
X^i=\{1\}_{\lambda _1}\times \cdots \times \{1\}_{\lambda _{i-1}}\times\{0\}_{\lambda _i}\times \{0,1\}^{\Lambda -\{\lambda _1,\dots,\lambda _i\}}~(i=2,3,\dots,s-1), \\ \\
X^s=\{1\}_{\lambda _1}\times \cdots \times \{1\}_{\lambda _{s-2}}\times \{1\}_{\lambda _{s-1}}\times \{0,1\}^{\Lambda -\{\lambda _1,\dots,\lambda _{s-1}\}},
\end{array}
\right.
\label{eqn:2}
\end{equation}
where each $\lambda _i$ is an arbitrary element of $\Lambda $, $i=1,\dots,s-1$. Note that each $X^i \in (\tau_0^\Lambda \cap \Im_0^\Lambda )-\{\phi \}$. Since each $C_i$ is a graph, there are arcs $C^i_1,\dots,C^i_{m_i},(m_i<\infty )$ such that $C_i=\cup _{l=1}^{m_i}C^i_l$, any two of which are either disjoint or intersect only in one or both of their end points. To $C^i_1,\dots,C^i_{m_i}$ there corresponds a partition $\{X^i_1,\dots,X^i_{m_i}\}$ of $X^i$ such that
\begin{equation}
X^i_l=J_i\times K_l\times \{0,1\}^{\Lambda -(\{\lambda _1,\dots,\lambda _i\}\cup \{\mu _1,\dots,\mu _l\})}~(l=1,\dots,m_i)
\label{eqn:3}
\end{equation}
where
\begin{equation}
J_i=\{1\}_{\lambda _1}\times \cdots \times \{1\}_{\lambda _{i-1}}\times\{0\}_{\lambda _i}, K_l=\{1\}_{\mu _1}\times \cdots \times \{1\}_{\mu _{l-1}}\times\{0\}_{\mu _l},
\label{eqn:4}
\end{equation}
and $\mu _1,\dots,\mu _{m_i}\in \Lambda -\{\lambda _1,\dots,\lambda _i\}$.
Note that each $(X_l^i,(\tau_0^\Lambda )_{X_l^i})$ is a 0-dim, perfect, compact $T_2$-space. Therefore, it is easily shown from Appendix 6.1 that there is a continuous map $f_{\rm I}$ from $(\{0,1\}^\Lambda ,\tau_0^\Lambda )$ onto $\Big{(}\bigoplus_{i=1}^sC_i,\bigoplus_{i=1}^s\tau_i\Big{)}$ such that $f_{\rm I}(X_l^i)=C_l^i,~l=1,\dots,m_i,~i=1,\dots,s$. Since $(\{0,1\}^\Lambda ,\tau_0^\Lambda )$ is compact and $\Big{(}\bigoplus_{i=1}^sC_i,\bigoplus_{i=1}^s\tau_i\Big{)}$ is a $T_2$-space, the map $f_{\rm I}$ is a quotient map. Hence, the map $h_{\rm I}:\Big{(}\bigoplus_{i=1}^sC_i,\bigoplus_{i=1}^s\tau_i\Big{)}\to (\mathcal{D}_{{\rm I}},\tau(\mathcal{D}_{\rm I})), y\mapsto f_{\rm I}^{-1}(y)$ must be a homeomorphism. It follows from Appendix 6.1 that $f^{-1}_{\rm I}(a)=J_i\times K_l\times \{a_1\}_{\nu _1}\times \{a_2\}_{\nu _2}\times \cdots \times \{0,1\}^{\Lambda -(\{\lambda _1,\dots,\lambda _i\}\cup \{\mu _1,\dots,\mu _l\}\cup \{\nu_1,\nu_2,\dots\})}$ for $a\in C^i_l\subset \bigoplus_{i=1}^sC_i$, where $a_i\in\{0,1\}, i=1,2,\dots $ depending on $a$ and $\nu_1,\nu_2,\dots \in \Lambda -(\{\lambda _1,\dots,\lambda _i\}\cup \{\mu _1,\dots,\mu _l\})$. Thus, $P_{\rm I}$ is represented as the decomposition space $(\mathcal{D}_{\rm I},\tau(\mathcal{D}_{\rm I}))$ of $S$.
In the same way, we can obtain a homeomorphism $h_{\rm II}:(Y,\tau_\rho )\to (\mathcal{D}_{\rm II},\tau(\mathcal{D}_{\rm II})), y\mapsto f_{\rm II}^{-1}(y)$. Note that the decomposition space $(\mathcal{D}_{\rm II},\tau(\mathcal{D}_{\rm II}))$ of $S$ is based on a continuous map $f_{\rm II}$ from $(\{0,1\}^\Lambda ,\tau_0^\Lambda )$ onto $(Y,\tau_\rho )$ obtained as follows. Since $(Y,\tau_\rho )$ is a graph, there are arcs $E_1,\dots, E_r$ such that $Y=\bigcup _{j=1}^rE_j$. Then, there exists a partition $\{Y_1,\dots,Y_r\}$ of $\{0,1\}^\Lambda $ such that
\begin{equation}
\left\{
\begin{array}{lcl}
Y_1=\{0\}_{\sigma _1}\times \{0,1\}^{\Lambda -\{\sigma _1\}}, \\
\ \\
Y_j=\{1\}_{\sigma _1}\times \cdots \times \{1\}_{\sigma _{j-1}}\times\{0\}_{\sigma _j}\times \{0,1\}^{\Lambda -\{\sigma _1,\dots,\sigma _j\}}~(j=2,3,\dots,r-1), \\ \\
Y_r=\{1\}_{\sigma _1}\times \cdots \times \{1\}_{\sigma _{r-2}}\times \{1\}_{\sigma _{r-1}}\times \{0,1\}^{\Lambda -\{\sigma _1,\dots,\sigma _{r-1}\}}
\end{array}
\right.
\label{eqn:5}
\end{equation}
where $\sigma _j\in\Lambda ,j=1,\dots,r-1$. Hence, by Appendix 6.1, the continuous onto map $f_{\rm II}:(\{0,1\}^\Lambda ,\tau_0^\Lambda )\to (Y,\tau_\rho )$ such that $f_{\rm II}^{-1}(b)=L_j \times \{b_1\}_{\xi_1}\times \{b_2\}_{\xi_2}\times \cdots \times \{0,1\}^{\Lambda -(\{\sigma _1,\dots,\sigma _j\}\cup \{\xi_1,\xi_2,\dots\})}$ for $b\in Y$ is obtained, where
\begin{equation}
L_j=\{1\}_{\sigma _1}\times \cdots \times \{1\}_{\sigma _{j-1}}\times\{0\}_{\sigma _j},
\label{eqn:6}
\end{equation}
$b_i\in\{0,1\}, i=1,2,\dots $ depending on $b$, and $\xi_1,\xi_2,\dots \in \Lambda -\{\sigma _1,\dots,\sigma _j\}$.
For instance, for $x$ in \S\ref{sec:4}, $f^{-1}_I(x)$ is calculated as
\begin{eqnarray}
f^{-1}_I(x) = J_{i_0}\times K_{j_1}\times \{0\}_{\nu _1^{j_1}}\times \{0\}_{\nu _2^{j_1}}\times \cdots \times \{0,1\}^{\Lambda -(\{\lambda _1,\dots,\lambda _i\}\cup \{\mu _1,\dots,\mu _{j_1}\}\cup \{\nu _1^{j_1},\nu _2^{j_1},\dots\})}\nonumber \\
\cup J_{i_0}\times K_{j_2}\times \{0\}_{\nu _1^{j_2}}\times \{0\}_{\nu _2^{j_2}}\times \cdots \times \{0,1\}^{\Lambda -(\{\lambda _1,\dots,\lambda _i\}\cup \{\mu _1,\dots,\mu _{j_2}\}\cup \{\nu _1^{j_2},\nu _2^{j_2},\dots\})}\nonumber \\
\cdots ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \nonumber \\
\cup J_{i_0}\times K_{j_p}\times \{0\}_{\nu _1^{j_p}}\times \{0\}_{\nu _2^{j_p}}\times \cdots \times \{0,1\}^{\Lambda -(\{\lambda _1,\dots,\lambda _i\}\cup \{\mu _1,\dots,\mu _{j_p}\}\cup \{\nu _1^{j_p},\nu _2^{j_p},\dots\})}\nonumber,
\end{eqnarray}
that is,
\begin{eqnarray}
f^{-1}_I(x) = J_{i_0}\times \big[ K_{j_1}\times \{0\}_{\nu _1^{j_1}}\times \{0\}_{\nu _2^{j_1}}\times \cdots \times \{0,1\}^{\Lambda -(\{\lambda _1,\dots,\lambda _i\}\cup \{\mu _1,\dots,\mu _{j_1}\}\cup \{\nu _1^{j_1},\nu _2^{j_1},\dots\})} \nonumber \\
\cup \cdots \cup
K_{j_p}\times \{0\}_{\nu _1^{j_p}}\times \{0\}_{\nu _2^{j_p}}\times \cdots \times \{0,1\}^{\Lambda -(\{\lambda _1,\dots,\lambda _i\}\cup \{\mu _1,\dots,\mu _{j_p}\}\cup \{\nu _1^{j_p},\nu _2^{j_p},\dots\})}\big ],
\label{eqn:7}
\end{eqnarray}
where $J_{i_0}$ and each $K_{j_l}$ are given in Eq.(\ref{eqn:4}), and $\nu _1^{j_l},\nu _2^{j_l},\dots\in \Lambda -(\{\lambda _1,\dots,\lambda _i\}\cup \{\mu _1,\dots,\mu _{j_l}\})~(l=1,\dots,p)$. In the same way,
\begin{eqnarray}
f^{-1}_{II}(x) = L_{u_1}\times \{0\}_{\xi_1^{u_1}}\times \{0\}_{\xi_2^{u_1}}
\times \cdots \times \{0,1\}^{\Lambda -(\{\sigma _1,\dots,\sigma _{u_1}\}\cup \{\xi_1^{u_1},\xi_2^{u_1},\dots\})}
\nonumber \\
\cup \cdots \cup
L_{u_q}\times \{0\}_{\xi_1^{u_q}}\times \{0\}_{\xi _2^{u_q}}\times \cdots \times \{0,1\}^{\Lambda -(\{\sigma _1,\dots,\sigma _{u_q}\}\cup \{\xi_1^{u_q},\xi _2^{u_q},\dots\})},
\label{eqn:8}
\end{eqnarray}
where $L_{u_l}$ is defined by Eq.(\ref{eqn:6}) and $\xi _1^{u_l},\xi _2^{u_l},\dots\in \Lambda -\{\sigma _1,\dots,\sigma _{u_l}\}$.
The state of a complex ecosystem is frequently described by a probability
distribution on the nodes of a weighted network. For example, a microbial
community may be modeled on a network in which each node represents a
taxon and the main interactions between pairs of taxa are represented
by weighted edges, the weights reflecting the levels of interaction. In
this formulation, bacterial relative abundance in a sample may be viewed as a
probability distribution $\xi$ on the vertex set $V$ of the network.
Thus, $\xi$ describes the composition of the community whereas
the underlying network encodes the expected interactions
amongst the various taxa. Identifying sub-communities and their
organization at different scales from such data, quantifying variation
across samples, and integrating information across scales are core
problems. Similar questions arise in other contexts such as
probability measures defined on Euclidean spaces or other
metric spaces. To address these problems, we develop methods that
(i) capture the geometry of data and probability measures across
a continuum of spatial scales and (ii) integrate information obtained
at all scales. In this paper, we focus on the Euclidean and network cases.
We analyze distributions with the aid of a 1-parameter family of diffusion
metrics $d_t$, $t>0$, where $t$ is treated as the scale parameter
\cite{coifman}. For (Borel) probability measures $\alpha$ on $\ensuremath{\mathbb{R}}^d$,
our approach is based on a functional statistic
$V_{\alpha, t} \colon \ensuremath{\mathbb{R}}^d \to \ensuremath{\mathbb{R}}$ derived from $d_t$ and termed
the {\em diffusion Fr\'{e}chet function\/} of $\alpha$ at scale $t$. Similarly,
we define {\em diffusion Fr\'{e}chet vectors} $F_{\xi, t}$ associated
with a distribution $\xi$ on the vertex set of a network.
The (classical) Fr\'{e}chet function of a random variable $y \in \ensuremath{\mathbb{R}}^d$
distributed according to a probability measure $\alpha$ with
finite second moment is defined as
\begin{equation} \label{E:frechet1}
V_\alpha (x) = \expect{\alpha}{\|y-x\|^2}
= \int_{\ensuremath{\mathbb{R}}^d} \|y-x\|^2 \, d\alpha (y) \,.
\end{equation}
The Fr\'{e}chet function quantifies the scatter of $y$ about the point $x$
and depends only on $\alpha$. It is well-known that the {\em mean\/}
(or {\em expected value}) of $y$ is the unique minimizer of $V_\alpha$.
For complex distributions, aiming at more effective descriptors,
we introduce diffusion Fr\'{e}chet functions and show that they encode
a wealth of information about the shape of $\alpha$. At scale $t>0$,
we define the {\em diffusion Fr\'{e}chet function\/}
$V_{\alpha,t} \colon \ensuremath{\mathbb{R}}^d \to \ensuremath{\mathbb{R}}$ by
\begin{equation} \label{E:frechet2}
V_{\alpha,t} (x) = \expect{\alpha}{d_t^2 (y,x)} = \int_{\ensuremath{\mathbb{R}}^d}
d_t^2 (y,x) \, d\alpha (y) \,.
\end{equation}
This definition simply replaces the Euclidean distance in
\eqref{E:frechet1} with the diffusion distance $d_t$. $V_{\alpha,t} (x)$
may be viewed as a localized second moment of $\alpha$ about $x$.
In the network setting, Fr\'{e}chet vectors are defined in a similar manner
through the diffusion kernel associated with the graph Laplacian.
Unlike $V_\alpha$ that is a quadratic function, diffusion Fr\'{e}chet functions
and vectors typically exhibit rich profiles that reflect the shape of the distribution at
different scales. We illustrate this point with simulated data
and exploit it in an application to the analysis
of microbiome data associated with {\em Clostridium difficile} infection (CDI).
On the theoretical front, we prove stability theorems for diffusion Fr\'{e}chet
functions and vectors with respect to the Wasserstein distance between
probability measures \cite{villani09}. The stability results ensure that both
$V_{\alpha, t}$ and $F_{\xi, t}$ yield
robust descriptors, useful for data analysis.
As explained in detail below, $V_{\alpha,t}$ is closely related to the
solution of the heat equation $\partial_t u = \Delta u$ with initial condition
$\alpha$. In particular, if $p_1, \ldots, p_n \in \ensuremath{\mathbb{R}}^d$ are data
points sampled from $\alpha$, then the diffusion Fr\'{e}chet functions for
the empirical measure $\alpha_n = \frac{1}{n} \sum_{i=1}^n \delta_{p_i}$
give a reinterpretation of Gaussian density estimators derived from
the data as localized second moments of $\alpha_n$.
An interesting consequence of this fact is that, for the Gaussian
kernel, the scale-space model for data on the real line investigated
in \cite{scalespace} may be recast as a 1-parameter family of
diffusion Fr\'{e}chet functions.
The rest of the paper is organized as follows. Section \ref{S:distance}
reviews the concept of diffusion distance in the Euclidean and network
settings. We define multiscale diffusion Fr\'{e}chet functions in Section
\ref{S:efrechet} and prove that they are stable with respect to the
Wasserstein distance between probability measures defined in Euclidean spaces. The network
counterpart is developed in Section \ref{S:nfrechet}. In
Section \ref{S:cdi} we describe and analyze microbiome data
associated with {\em C. difficile} infection. We
conclude with a summary and some discussion in
Section \ref{S:remarks}.
\section{Diffusion Distances} \label{S:distance}
Our multiscale approach to probability measures uses a formulation
derived from the notion of diffusion distance \cite{coifman}. In
this section, we review diffusion distances associated with the heat
kernel on $\ensuremath{\mathbb{R}}^d$, followed by a discrete analogue for weighted
networks.
For $t>0$, let $G_t \colon \ensuremath{\mathbb{R}}^d \times \ensuremath{\mathbb{R}}^d \to \ensuremath{\mathbb{R}}$
be the diffusion (heat) kernel
\begin{equation} \label{E:gauss}
G_t (x,y) = \frac{1}{C_d (t)} \exp \left( -\frac{\|y-x\|^2}{4t} \right) \,,
\end{equation}
where $C_d (t) = (4 \pi t)^{d/2}$. If an initial distribution of
mass on $\ensuremath{\mathbb{R}}^d$ is described by a Borel probability measure $\alpha$
and its time evolution is governed by the heat equation
$\partial_t u = \Delta u$, then the distribution at time $t$ has a smooth
density function $\alpha_t (y) = u (y,t)$ given by the convolution
\begin{equation}
u (y, t) = \int_{\ensuremath{\mathbb{R}}^d} G_t (x,y) \, d\alpha (x) \,.
\end{equation} For an initial point mass located at $x \in \ensuremath{\mathbb{R}}^d$,
$\alpha$ is Dirac's delta $\delta_x$ and $u (y,t) = G_t(x, y)$.
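As a quick sanity check of this statement, the following Python sketch (our own finite-difference illustration; the evaluation point and step size are arbitrary) verifies numerically that $u(y,t)=G_t(x,y)$, with the normalization of \eqref{E:gauss}, satisfies $\partial_t u = \partial_y^2 u$ in one dimension.

```python
import math

def G(t, x, y):
    """1-d heat kernel with the convention exp(-(y-x)^2/(4t)) / sqrt(4 pi t)."""
    return math.exp(-(y - x) ** 2 / (4 * t)) / math.sqrt(4 * math.pi * t)

# Central finite differences: u(y, t) = G_t(x, y) should satisfy u_t = u_yy.
x, y, t, h = 0.0, 0.7, 0.5, 1e-4
du_dt = (G(t + h, x, y) - G(t - h, x, y)) / (2 * h)
d2u_dy2 = (G(t, x, y + h) - 2 * G(t, x, y) + G(t, x, y - h)) / h ** 2
print(abs(du_dt - d2u_dy2) < 1e-6)  # True
```

Note that the factor $4t$ (rather than $2t$) in the exponent is what makes the kernel solve $\partial_t u = \Delta u$ with unit diffusion coefficient.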
The diffusion map $\Psi_t \colon \ensuremath{\mathbb{R}}^d \to \ensuremath{\mathbb{L}_2} (\ensuremath{\mathbb{R}}^d)$,
given by $\Psi_t (x) = G_t (x, \cdot)$, embeds $\ensuremath{\mathbb{R}}^d$ into
$\ensuremath{\mathbb{L}_2} (\ensuremath{\mathbb{R}}^d)$. The {\em diffusion distance} $d_t$ on $\ensuremath{\mathbb{R}}^d$ is
defined as
\begin{equation}
d_t (x_1, x_2) = \|\Psi_t (x_1) - \Psi_t (x_2) \|_2 \,,
\end{equation} the metric that $\ensuremath{\mathbb{R}}^d$ inherits from $\ensuremath{\mathbb{L}_2} (\ensuremath{\mathbb{R}}^d)$
via the embedding $\Psi_t$. A calculation shows that
\begin{equation} \label{E:difdistance}
\begin{split}
d^2_t (x_1, x_2) &= \|\Psi_t (x_1)\|^2_2 + \|\Psi_t (x_2)\|^2_2
- 2G_{2t}(x_1,x_2) \\
&= 2 \left( \frac{1}{C_d (2t)} - G_{2t} (x_1, x_2) \right) \,,
\end{split}
\end{equation}
where we used the fact that $\|\Psi_t (x)\|^2_2 =
1/ C_d (2t)$, for every $x \in \ensuremath{\mathbb{R}}^d$. Note that this implies that
the metric space $(\ensuremath{\mathbb{R}}^d, d_t)$ has finite diameter.
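The closed form \eqref{E:difdistance} can be verified numerically in one dimension by comparing it with a direct Riemann-sum evaluation of $\|\Psi_t(x_1)-\Psi_t(x_2)\|_2^2$. The following Python sketch is an illustration; the points, scale, and integration grid are arbitrary choices of ours.

```python
import math

def G(t, x, y):
    """1-d heat kernel exp(-(y-x)^2/(4t)) / sqrt(4 pi t)."""
    return math.exp(-(y - x) ** 2 / (4 * t)) / math.sqrt(4 * math.pi * t)

x1, x2, t = 0.0, 1.0, 0.25

# Closed form for d = 1: d_t^2(x1, x2) = 2 (1/C_1(2t) - G_{2t}(x1, x2)).
closed = 2 * (1 / math.sqrt(4 * math.pi * 2 * t) - G(2 * t, x1, x2))

# Direct evaluation of || Psi_t(x1) - Psi_t(x2) ||_2^2 by a Riemann sum;
# the integrand decays like a Gaussian, so a truncated grid suffices.
lo, hi, N = -10.0, 11.0, 21000
h = (hi - lo) / N
numeric = sum((G(t, x1, lo + k * h) - G(t, x2, lo + k * h)) ** 2
              for k in range(N)) * h

print(abs(closed - numeric) < 1e-6)  # True
```

The constant term $2/C_d(2t)$ in \eqref{E:difdistance} is precisely $2\|\Psi_t(x)\|_2^2$, which is independent of $x$.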
Diffusion distances on weighted networks may be defined in a
similar way by invoking the graph Laplacian and the associated
diffusion kernel. Let $v_1, \ldots, v_n$ be the nodes of a weighted
network $K$. The weight of the edge between $v_i$ and $v_j$ is
denoted $w_{ij}$, with the convention that $w_{ij} = 0$ if there is
no edge between the nodes. We let $W$ be the $n\times n$
matrix whose $(i,j)$-entry is $w_{ij}$. The graph Laplacian is the
$n \times n$ matrix $\Delta = D-W$, where $D$ is the diagonal matrix with
$d_{ii}=\sum_{k=1}^{n}w_{ik}$, the sum of the weights of all
edges incident with the node $v_i$ (cf.\,\cite{lux}). This definition is based
on a finite difference discretization of the Laplacian, except for a sign
that makes $\Delta$ a positive semi-definite, symmetric matrix.
The diffusion (or heat) kernel at $t>0$ is the matrix
$e^{-t\Delta}$.
Let $\xi$ be a probability distribution on the vertex set $V$ of $K$.
If $\xi_i \geq 0$, $1 \leq i \leq n$, is the probability of the vertex $v_i$,
we write $\xi$ as the vector $\xi = [\xi_1 \, \ldots \, \xi_n]^T \in \ensuremath{\mathbb{R}}^n$,
where $T$ denotes transposition. Clearly, $\xi_1 + \ldots + \xi_n =1$.
Note that $u = e^{-t\Delta} \xi$ solves the heat equation
$\partial_t u = - \Delta u$ with initial condition $\xi$. (The negative sign
is due to the convention made in the definition of $\Delta$.) Mass initially
distributed according to $\xi$, whose diffusion is governed by the heat equation,
has time $t$ distribution $u_t = e^{-t\Delta} \xi$. A point mass at the
$i$th vertex is described by the vector $e_i \in \ensuremath{\mathbb{R}}^n$, whose $j$th entry
is $\delta_{ij}$. Thus, $e^{-t\Delta} e_i$ may be viewed as a network
analogue of the Gaussian $G_t (x, \cdot)$. For $t>0$, the diffusion
mapping $\psi_t \colon V \to \ensuremath{\mathbb{R}}^n$ is defined by
$\psi_t (v_i) = e^{-t\Delta} e_i$ and the time $t$ {\em diffusion distance\/}
between the vertices $v_i$ and $v_j$ by
\begin{equation}
d_t (i,j) = \|e^{-t\Delta} e_i - e^{-t\Delta} e_j\| \,,
\end{equation}
where $\|\cdot\|$ denotes Euclidean norm. If
$0 = \lambda_1 \leq \ldots \leq \lambda_n$ are the eigenvalues of $\Delta$
with orthonormal eigenvectors $\phi_1, \ldots, \phi_n$, then
\begin{equation}
d_{t}^2(i,j)= \sum_{k=1}^{n}e^{-2\lambda_kt}\left(\phi_k(i)-\phi_k(j)\right)^2 \,,
\end{equation} where $\phi_k (i)$ denotes the $i$th component of $\phi_k$ \cite{coifman}.
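The equivalence between the kernel definition and the spectral formula can be checked directly on a small network; below is a minimal NumPy/SciPy sketch (the weights are arbitrary illustrative values, not data from the paper):

```python
import numpy as np
from scipy.linalg import expm

# A small weighted network: W[i, j] is the weight of edge (v_i, v_j).
W = np.array([[0.0, 1.0, 0.5],
              [1.0, 0.0, 0.0],
              [0.5, 0.0, 0.0]])
D = np.diag(W.sum(axis=1))
L = D - W                      # graph Laplacian, symmetric positive semi-definite

t = 0.3
H = expm(-t * L)               # diffusion kernel e^{-t*Delta}

def d_t(i, j):
    """Diffusion distance d_t(i, j) = ||e^{-t*Delta}(e_i - e_j)||."""
    return np.linalg.norm(H[:, i] - H[:, j])

# Spectral formula: d_t^2(i,j) = sum_k exp(-2*lambda_k*t)*(phi_k(i)-phi_k(j))^2.
lam, phi = np.linalg.eigh(L)   # phi[:, k] is the k-th orthonormal eigenvector
def d_t_spectral(i, j):
    return np.sqrt(np.sum(np.exp(-2.0 * lam * t) * (phi[i, :] - phi[j, :]) ** 2))
```

Both routes give the same distances, since $\|e^{-t\Delta}(e_i-e_j)\|^2 = (e_i-e_j)^T e^{-2t\Delta} (e_i-e_j)$.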
\section{Diffusion Fr\'{e}chet Functions}
\label{S:efrechet}
The classical Fr\'{e}chet function $V_\alpha$ of a probability
measure $\alpha$ on $\ensuremath{\mathbb{R}}^d$
with finite second moment is a useful statistic. However, for complex
distributions, such as those with a multimodal profile or more intricate
geometry, the Fr\'{e}chet function usually fails to provide a good
description of shape. Aiming at more effective descriptors,
we introduce diffusion Fr\'{e}chet functions and show that they encode
a wealth of information across a full range of spatial scales. At scale
$t>0$, define the {\em diffusion Fr\'{e}chet function\/}
$V_{\alpha,t} \colon \ensuremath{\mathbb{R}}^d \to \ensuremath{\mathbb{R}}$ by
\begin{equation}
V_{\alpha,t} (x) = \int_{\ensuremath{\mathbb{R}}^d}
d_t^2 (y,x) \, d\alpha (y) \,.
\end{equation}
Note that $V_{\alpha,t}$ is defined
for any Borel probability measure $\alpha$, not just those with finite
second moment, because $(\ensuremath{\mathbb{R}}^d, d_t)$ has finite diameter. Moreover,
$V_{\alpha,t}$ is uniformly bounded. We also point out that this
construction is distinct from the standard kernel trick
that pushes $\alpha$ forward to a probability measure $(\Psi_t)_\ast (\alpha)$
on $\ensuremath{\mathbb{L}_2} (\ensuremath{\mathbb{R}}^d)$, where the usual Fr\'{e}chet function of
$(\Psi_t)_\ast (\alpha)$ can be used to define such notions as an
extrinsic mean of $\alpha$. $V_{\alpha,t}$ is an intrinsic statistic and typically a
function on $\ensuremath{\mathbb{R}}^d$ with a complex profile, as we shall see in the
examples below.
From \eqref{E:frechet2} and \eqref{E:difdistance}, it follows that
\begin{equation} \label{E:frechet3}
\begin{split}
V_{\alpha,t/2} (x) = \frac{2}{C_d (t)} - 2 \int_{\ensuremath{\mathbb{R}}^d}
G_{t} (x,y) \, d \alpha (y)
= \frac{2}{C_d (t)} - 2 \alpha_{t} (x) \,,
\end{split}
\end{equation}
where $\alpha_{t} (x) = u(x, t)$, the solution of the heat equation
$\partial_t u = \Delta u$ with initial condition $\alpha$.
If $p_1, \ldots, p_n \in \ensuremath{\mathbb{R}}^d$ are data points sampled from
$\alpha$, then the diffusion Fr\'{e}chet function of
the empirical measure $\alpha_n = \frac{1}{n} \sum_{i=1}^n \delta_{p_i}$
is closely related to the Gaussian density estimator
$\widehat{\alpha}_{n, t} (x) = \frac{1}{n} \sum_{i=1}^n G_{t} (x, p_i)$
derived from the sample points.
Indeed, it follows from \eqref{E:frechet3} that
\begin{equation}
\begin{split}
V_{\alpha_n,t/2} (x) = \frac{2}{C_d (t)} - 2 \int_{\ensuremath{\mathbb{R}}^d}
G_{t} (x,y) \, d \alpha_n (y)
= \frac{2}{C_d (t)} - 2 \widehat{\alpha}_{n,t} (x) \,.
\end{split}
\end{equation}
Thus, diffusion Fr\'{e}chet functions provide a new interpretation
of Gaussian density estimators, essentially as second moments
with respect to diffusion distances. This opens up interesting new
perspectives. For example, the
classical Fr\'{e}chet function $V_\alpha \colon \ensuremath{\mathbb{R}}^d \to \ensuremath{\mathbb{R}}$
(see \eqref{E:frechet1}) may be viewed as the trace of the
covariance tensor field $\Sigma^\alpha \colon \ensuremath{\mathbb{R}}^d \to \ensuremath{\mathbb{R}}^d
\otimes \ensuremath{\mathbb{R}}^d$ given by
\begin{equation}
\Sigma_\alpha (x) = \expect{\alpha}{(y-x) \otimes (y-x)}
= \int_{\ensuremath{\mathbb{R}}^d} (y-x)\otimes (y-x) \, d \alpha (y) \,.
\end{equation}
Thus, it is natural to ask: {\em How to define diffusion covariance
tensor fields $\Sigma_{\alpha,t}$ that capture the
modes of variation of $\alpha$, about each $x \in \ensuremath{\mathbb{R}}^d$ at all
scales, bearing a close relationship to $V_{\alpha,t}$?}
A multiscale approach to data along related lines has been
developed in \cite{mmm13,diaz1}.
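The identity relating $V_{\alpha_n,t/2}$ to the Gaussian density estimator is easy to verify numerically; here is a minimal sketch on a synthetic two-cluster sample (the data are illustrative, not from the paper):

```python
import numpy as np

def C(d, t):
    """Normalizing constant C_d(t) = (4*pi*t)^(d/2)."""
    return (4.0 * np.pi * t) ** (d / 2.0)

def G(x, y, t):
    """One-dimensional heat kernel G_t(x, y); x may be an array of points."""
    return np.exp(-(np.asarray(x) - y) ** 2 / (4.0 * t)) / C(1, t)

rng = np.random.default_rng(0)
# Synthetic empirical measure: two clusters on the real line.
pts = np.concatenate([rng.normal(-2, 0.3, 200), rng.normal(2, 0.3, 200)])
t = 0.2

def V(x):
    """V_{alpha_n, t/2}(x) computed directly as the mean of d_{t/2}^2(p_i, x)."""
    d2 = 2.0 * (1.0 / C(1, t) - G(pts, x, t))   # d_{t/2}^2 via E:difdistance
    return d2.mean()

def V_kde(x):
    """The same quantity via the Gaussian density estimator."""
    return 2.0 / C(1, t) - 2.0 * G(pts, x, t).mean()
```

The two computations agree to machine precision, and $V$ dips near each cluster center, as in Figure \ref{F:evolution}.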
\begin{example}
Here we consider the dataset highlighted in blue in
Figure \ref{F:evolution}, comprising $n=400$ points
$p_1, \ldots, p_n$ on the real line, grouped into two clusters. The
figure shows the evolution of the diffusion Fr\'{e}chet function
for the empirical measure $\alpha_n = \sum_{i=1}^n \delta_{p_i} /n$
for increasing values of the scale parameter.
\begin{figure}[h!]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.4\linewidth]{diff-1d-0_005a}
\quad & \quad
\includegraphics[width=0.4\linewidth]{diff-1d-0_1a} \\
$t = 0.005$ \quad & \quad $t = 0.1$ \vspace{0.1in} \\
\includegraphics[width=0.4\linewidth]{diff-1d-4a}
\quad & \quad
\includegraphics[width=0.4\linewidth]{diff-1d-20a} \\
$t = 4$ \quad & \quad $t= 20$ \\
\end{tabular}
\end{center}
\caption{Evolution of the diffusion Fr\'{e}chet function across
scales.}
\label{F:evolution}
\end{figure}
At scale $t=0.1$, the Fr\'{e}chet function has two well defined
``valleys'', each corresponding to a data cluster. At smaller scales,
$V_{\alpha_n,t}$ captures finer properties of the organization
of the data, whereas
at larger scales $V_{\alpha_n,t}$ essentially views the data as
comprising a single cluster. For probability distributions on the real line,
$V_{\alpha,t}$ levels off to $1/\sqrt{2 \pi t}$ as $|x| \to \infty$.
The local minima of $V_{\alpha,t}$ provide a generalization of the
mean of the distribution and a summary of the organization of the data
into sub-communities across multiple scales. Elucidating such
organization in data in an easily interpretable manner is one of our
principal aims.
\end{example}
\begin{example}
In this example, we consider the dataset formed by $n = 1000$
points in $\ensuremath{\mathbb{R}}^2$, shown on the first panel of Figure\,\ref{F:frechet2d}.
The other panels show the diffusion Fr\'{e}chet functions for the
corresponding empirical measure $\alpha_n$, calculated at increasing
scales, and the gradient field $-\nabla V_{\alpha_n,t}$ at $t=2$,
whose behavior reflects the organization of the data at that scale.
\begin{figure}[h!]
\begin{center}
\begin{tabular}{cc}
\begin{tabular}{ccc}
\begin{tabular}{c}
\includegraphics[width=0.14\linewidth]{data2-data}
\end{tabular} &
\begin{tabular}{c}
\includegraphics[width=0.14\linewidth]{data2-dff-0_5}
\end{tabular} &
\begin{tabular}{c}
\includegraphics[width=0.14\linewidth]{data2-dff-2}
\end{tabular} \\
data & $t = 0.5$ & $t = 2$
\vspace{0.1in} \\
\begin{tabular}{c}
\includegraphics[width=0.14\linewidth]{data2-dff-4_5}
\end{tabular} &
\begin{tabular}{c}
\includegraphics[width=0.14\linewidth]{data2-dff-9}
\end{tabular} &
\begin{tabular}{c}
\includegraphics[width=0.14\linewidth]{data2-dff-30}
\end{tabular} \\
$t = 4.5$ & $t = 9$ &
$t = 30$
\end{tabular}
&
\begin{tabular}{c}
\includegraphics[width=0.33\linewidth]{data2-gradient-2}
\\
$t=2$
\end{tabular}
\end{tabular}
\end{center}
\caption{Data points in $\ensuremath{\mathbb{R}}^2$, heat maps of their diffusion
Fr\'{e}chet functions at increasing scales, and the gradient field
at $t=2$.}
\label{F:frechet2d}
\end{figure}
\end{example}
\begin{example}
This example illustrates the evolution of a dataset consisting of
$n=3158$ points under the (negative) gradient flow of the diffusion Fr\'{e}chet
function of the associated empirical measure at scale $t=0.2$.
Panel (a) in Figure \ref{F:flow} shows the original data and the
other panels show various stages of the evolution towards the
attractors of the system.
\begin{figure}[h!]
\begin{center}
\begin{tabular}{ccc}
\includegraphics[width=0.25\linewidth]{trjs_0_2_0}
\quad & \quad
\includegraphics[width=0.25\linewidth]{trjs_0_2_5}
\quad & \quad
\includegraphics[width=0.25\linewidth]{trjs_0_2_10} \\
(a) \quad & \quad (b) \quad & \quad (c) \\
\includegraphics[width=0.25\linewidth]{trjs_0_2_15}
\quad & \quad
\includegraphics[width=0.25\linewidth]{trjs_0_2_20}
\quad & \quad
\includegraphics[width=0.25\linewidth]{trjs_0_2_35} \\
(d) \quad & \quad (e) \quad & \quad (f) \\
\end{tabular}
\end{center}
\caption{Panel (a) shows the original data and panels (b)--(f) display
various stages of the evolution of the data towards the attractors
of the gradient flow of the Fr\'{e}chet function at scale $t=0.2$.}
\label{F:flow}
\end{figure}
\end{example}
We now prove stability results for $V_{\alpha,t}$ and its gradient
field $\nabla V_{\alpha,t}$, which provide a basis for their use as
robust functional statistics in data analysis. We begin by reviewing
the definition of the Wasserstein distance between two probability
measures.
A {\em coupling\/} between two probability measures $\alpha$
and $\beta$ is a probability measure $\mu$ on $\ensuremath{\mathbb{R}}^d \times \ensuremath{\mathbb{R}}^d$
such that $(p_1)_\ast (\mu) = \alpha$ and $(p_2)_\ast (\mu) = \beta$.
Here, $p_1$ and $p_2$ denote the projections onto the first and second
coordinates, respectively. The set of all such
couplings is denoted $\Gamma (\alpha, \beta)$.
For each $p \in [1, \infty)$, let $\ensuremath{\mathcal{P}}_p (\ensuremath{\mathbb{R}}^d)$ denote the collection
of all Borel probability measures $\alpha$ on $\ensuremath{\mathbb{R}}^d$ whose
$p$th moment $M_p (\alpha) = \int \|y\|^p \, d\alpha (y)$ is finite.
\begin{definition}[cf.\cite{villani09}] \label{D:wass}
Let $\alpha, \beta \in \ensuremath{\mathcal{P}}_p (\ensuremath{\mathbb{R}}^d)$.
The $p$-Wasserstein distance $W_p (\alpha, \beta)$ is defined as
\begin{equation} \label{E:wass}
W_p (\alpha, \beta) = \left(\inf_\mu \iint\limits_{\ensuremath{\mathbb{R}}^d \times \ensuremath{\mathbb{R}}^d}
\|x-y\|^p d\mu(x,y)\right)^{1/p} \,,
\end{equation}
where the infimum is taken over all $\mu \in \Gamma (\alpha, \beta)$.
\end{definition}
It is well-known that the infimum in \eqref{E:wass} is realized
by some $\mu \in \Gamma (\alpha, \beta)$ \cite{villani09}.
Moreover, if $p > q \geq 1$, then
$\ensuremath{\mathcal{P}}_p (\ensuremath{\mathbb{R}}^d) \subset \ensuremath{\mathcal{P}}_q (\ensuremath{\mathbb{R}}^d)$ and
$W_q (\alpha, \beta) \leq W_p (\alpha, \beta)$.
The following lemma will be useful in the proof of the stability results.
Part (a) of the lemma is a special case of Lemma 1 of \cite{diaz1}. Let
$K_t \colon \ensuremath{\mathbb{R}}^d \to \ensuremath{\mathbb{R}}$ be the heat kernel centered at zero.
In the notation used in \eqref{E:gauss}, $K_t (y) = G_t (0,y)$.
\begin{lemma} \label{L:gauss}
Let $\displaystyle C_d(t) = (4\pi t)^{d/2}$. For any $y_1, y_2 \in \ensuremath{\mathbb{R}}^d$
and $t>0$, we have:
\begin{itemize}
\item[(a)]
$\displaystyle \left| K_t (y_1) - K_t (y_2) \right| \leq
\frac{\|y_1- y_2\|}{C_d(t)\sqrt{2t\cdot e}}$ ;
\item[(b)]
$\displaystyle \left \Vert y_1 K_t (y_1) - y_2 K_t(y_2) \right \Vert \leq
\frac{e+2}{e C_d (t)} \|y_1- y_2\|$.
\end{itemize}
\end{lemma}
\begin{proof}
(a) For each $\xi \in[0,1]$, let $y(\xi) = \xi y_1 +(1-\xi) y_2$. Then,
\begin{equation}
\begin{split}
\left| K_t (y_1) - K_t (y_2) \right| &=
\left| \int_0^1 \frac{d}{d\xi} K_t(y(\xi))\,d\xi \right|
\leq \int_0^1 \left| \frac{d}{d\xi} K_t(y(\xi)) \right| \, d\xi \\
&= \int_0^1 \left| \nabla K_t(y(\xi))\cdot (y_1- y_2) \right| \, d\xi \\
&\leq \|y_1- y_2\| \int_0^1 \left\| \nabla K_t(y(\xi)) \right\| \, d\xi \,.
\end{split}
\end{equation}
Note that
\begin{equation}
\| \nabla K_t(y) \| = \|\frac{y}{2t} K_t(y) \| \leq
\frac{\|y\|}{2tC_d (t)} \exp\left(-\frac{\Vert y\Vert^2}{4t}\right)
\leq \frac{1}{C_d (t) \sqrt{2t\cdot e}} \,.
\end{equation}
In the last inequality we used the fact that
$\|y\| \exp\left(-\frac{\Vert y\Vert^2}{4t}\right) \leq \sqrt{\frac{2t}{e}}$.
Hence,
\begin{equation*}
\begin{split}
\left| K_t (y_1) - K_t (y_2) \right|
&\leq \frac{\|y_1- y_2\|}{C_d(t) \sqrt{2t\cdot e}} \,.
\end{split}
\end{equation*}
(b) Using the same notation as in (a),
\begin{equation}
\frac{d}{d\xi} \left[y K_t (y) \right] =
K_t (y) (y_1-y_2) - \frac{K_t (y)}{2t} \left( y \cdot (y_1-y_2) \right) y \,.
\end{equation}
Hence,
\begin{equation} \label{E:diff}
\begin{split}
\left\| \frac{d}{d\xi} \left[y K_t (y) \right] \right\| &\leq
\left(K_t (y) + \frac{\|y\|^2}{2t} K_t (y) \right) \|y_1-y_2\| \\
&\leq \frac{1}{C_d (t)} \left(1 + \frac{2}{e}\right) \|y_1-y_2\| \,.
\end{split}
\end{equation}
In the last inequality we used the facts that $K_t (y)
\leq 1/ C_d (t)$ and $\|y\|^2 K_t (y) \leq 4t/(e\, C_d (t))$.
Writing
\begin{equation} \label{E:diff1}
y_1 K_t (y_1) - y_2 K_t (y_2) = \int_0^1
\frac{d}{d\xi} \left[y K_t (y) \right] \, d\xi \,,
\end{equation}
it follows from \eqref{E:diff} and \eqref{E:diff1} that
\begin{equation}
\|y_1 K_t (y_1) - y_2 K_t (y_2)\| \leq
\frac{1}{C_d (t)} \left(1 + \frac{2}{e}\right) \|y_1-y_2\| \,,
\end{equation}
as claimed.
\end{proof}
\begin{theorem}[Stability of Fr\'{e}chet functions] \label{T:stability1}
Let $\alpha$ and $\beta$ be Borel probability measures on $\ensuremath{\mathbb{R}}^d$
with diffusion Fr\'{e}chet functions $V_{\alpha,t}$ and $V_{\beta,t}$,
$t>0$, respectively. If $\alpha, \beta \in \ensuremath{\mathcal{P}}_1 (\ensuremath{\mathbb{R}}^d)$,
then
\[
\left\| V_{\alpha,t} - V_{\beta,t} \right\|_\infty \leq
\frac{1}{C_d (2t)\sqrt{t\cdot e}}\, W_1 (\alpha,\beta) \,.
\]
\end{theorem}
\begin{proof}
Fix $t>0$ and $x \in \ensuremath{\mathbb{R}}^d$. Let $\mu \in \Gamma(\alpha,\beta)$
be a coupling such that
\begin{equation}
\displaystyle\iint\limits_{\ensuremath{\mathbb{R}}^d \times
\ensuremath{\mathbb{R}}^d}\|z_1 - z_2\| d \mu(z_1, z_2) = W_1 (\alpha,\beta) \,.
\end{equation} Then,
we may write
\begin{equation} \label{E:simplified1}
V_{\alpha,t} (x)=\iint\limits_{\ensuremath{\mathbb{R}}^d \times \ensuremath{\mathbb{R}}^d}
d^2_t(z_1,x)\, d\mu(z_1, z_2)
\end{equation}
and
\begin{equation} \label{E:simplified2}
V_{\beta,t} (x)=\iint\limits_{\ensuremath{\mathbb{R}}^d \times \ensuremath{\mathbb{R}}^d}
d^2_t(z_2,x)\, d\mu(z_1, z_2).
\end{equation}
By \eqref{E:difdistance},
$d^2_t(z,x)=2/C_d (2t) - 2 G_{2t}(z,x)$. Hence,
\begin{equation} \label{E:simplified3}
V_{\alpha,t} (x) - V_{\beta,t} (x) = -2 \iint\limits_{\ensuremath{\mathbb{R}}^d \times \ensuremath{\mathbb{R}}^d}
\left(G_{2t} (z_1,x) - G_{2t} (z_2,x)\right) d\mu(z_1, z_2) \,,
\end{equation}
which implies that
\begin{equation} \label{E:simplified4}
\left \vert V_{\alpha,t} (x)- V_{\beta,t} (x) \right \vert \leq
2 \iint\limits_{\ensuremath{\mathbb{R}}^d \times \ensuremath{\mathbb{R}}^d}
\left\vert G_{2t}(z_1,x) - G_{2t}(z_2,x) \right\vert d\mu(z_1, z_2) \,.
\end{equation}
After translating $\alpha$ and $\beta$, we may assume that $x=0$.
Thus, by Lemma \ref{L:gauss}(a),
\begin{equation} \label{E:simplified5}
\begin{split}
\left| V_{\alpha,t} (x) - V_{\beta,t} (x) \right|
&\leq \frac{1}{C_d (2t)\sqrt{t\cdot e}}
\iint\limits_{\ensuremath{\mathbb{R}}^d \times \ensuremath{\mathbb{R}}^d} \left \Vert z_1-z_2 \right \Vert
d\mu(z_1, z_2) \\
&= \frac{1}{C_d (2t)\sqrt{t\cdot e}} \, W_1 (\alpha,\beta) \,,
\end{split}
\end{equation}
as claimed.
\end{proof}
\begin{theorem}[Stability of gradient fields] \label{T:stability1a}
Let $\alpha$ and $\beta$ be Borel probability measures on $\ensuremath{\mathbb{R}}^d$
with diffusion Fr\'{e}chet functions $V_{\alpha,t}$ and $V_{\beta,t}$,
$t>0$, respectively. If $\alpha, \beta \in \ensuremath{\mathcal{P}}_1 (\ensuremath{\mathbb{R}}^d)$,
then
\[
\sup_{x \in \ensuremath{\mathbb{R}}^d} \left\| \nabla V_{\alpha,t} (x) -
\nabla V_{\beta,t} (x) \right\| \leq
\frac{e+2}{e C_d(2t)} \, W_1 (\alpha,\beta) \,.
\]
\end{theorem}
\begin{proof}
Let $\mu \in \Gamma(\alpha, \beta)$ be a coupling that
realizes $W_1 (\alpha, \beta)$.
From \eqref{E:simplified3}, it follows that
\begin{equation}
\begin{split}
\| \nabla V_{\alpha,t} (x) &- \nabla V_{\beta,t} (x) \|
\leq 2 \iint \left\| \nabla_x G_{2t}(z_1,x) - \nabla_x G_{2t}(z_2,x) \right\|
d\mu(z_1, z_2) \\
&\leq \frac{1}{2t} \iint\limits
\left\| (x-z_1) G_{2t}(z_1,x) - (x-z_2) G_{2t}(z_2,x) \right\|
d\mu(z_1, z_2) \,.
\end{split}
\end{equation}
We may assume that $x=0$, so we rewrite the previous
inequality as
\begin{equation}
\| \nabla V_{\alpha,t} (x) - \nabla V_{\beta,t} (x) \|
\leq \frac{1}{2t} \iint
\left\| z_1 K_{2t}(z_1) - z_2 K_{2t}(z_2) \right\|
d\mu(z_1, z_2) \,.
\end{equation}
Therefore, by Lemma \ref{L:gauss}(b),
\begin{equation}
\begin{split}
\| \nabla V_{\alpha,t} (x) - \nabla V_{\beta,t} (x) \|
&\leq \frac{e+2}{e C_d(2t)} \iint \|z_1-z_2\| d\mu(z_1, z_2) \\
&= \frac{e+2}{e C_d(2t)} W_1 (\alpha, \beta) \,,
\end{split}
\end{equation}
which proves the theorem.
\end{proof}
\section{Diffusion Fr\'{e}chet Vectors on Networks}
\label{S:nfrechet}
In this section, we define a network analogue of diffusion Fr\'{e}chet
functions and prove a stability theorem. Let
$\xi = [\xi_1 \, \ldots \, \xi_n]^T \in \ensuremath{\mathbb{R}}^n$ represent a probability distribution
on the vertex set $V = \{v_1, \ldots, v_n\}$ of a weighted
network $K$. For $t>0$, define the {\em diffusion Fr\'{e}chet vector\/} (DFV)
as the vector $F_{\xi,t} \in \ensuremath{\mathbb{R}}^n$ whose $i$th component is
\begin{equation}
F_{\xi,t} (i) = \sum_{j=1}^n d_t^2 (i,j) \xi_j \,.
\end{equation}
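A DFV can be computed directly from the weight matrix via the matrix exponential; the following is a minimal sketch (the function name is ours):

```python
import numpy as np
from scipy.linalg import expm

def dfv(W, xi, t):
    """Diffusion Frechet vector F_{xi,t} of a distribution xi on the
    vertices of a weighted network with weight matrix W (minimal sketch)."""
    L = np.diag(W.sum(axis=1)) - W          # graph Laplacian Delta = D - W
    H = expm(-t * L)                        # diffusion kernel e^{-t*Delta}
    sq = np.sum(H ** 2, axis=0)             # ||e^{-t*Delta} e_i||^2 for each i
    D2 = sq[:, None] + sq[None, :] - 2.0 * (H @ H)   # d_t^2(i, j); H is symmetric
    return D2 @ xi                          # F_{xi,t}(i) = sum_j d_t^2(i,j) xi_j
```

On a path network with the uniform distribution, for instance, the central vertex attains the smallest DFV value, mirroring the "valleys" observed in the Euclidean examples.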
\begin{example}
We illustrate the behavior of DFVs on a social network
of frequent interactions among 62 dolphins in a community living off
Doubtful Sound, New Zealand \cite{lusseau}. The network data was obtained
from the UC Irvine Network Data Repository. All edges are given the
same weight and we consider the uniform distribution on the vertex set.
Figure \ref{F:dolphins} shows maps of the diffusion Fr\'{e}chet vector
at multiple scales. As in the Euclidean case, the profiles of the DFVs reveal
sub-communities in the network and their interactions.
\begin{figure}[ht]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.35\linewidth]{g-1sig-0_01}
\quad & \quad
\includegraphics[width=0.35\linewidth]{g-5sig-0_0305} \\
$t = 0.01$ \quad & \quad $t = 0.03$ \vspace{0.1in} \\
\includegraphics[width=0.35\linewidth]{g-9sig-0_0931}
\quad & \quad
\includegraphics[width=0.35\linewidth]{g-13sig-0_284} \\
$t = 0.09$ \quad & \quad $t= 0.3$ \\
\end{tabular}
\end{center}
\caption{Evolution across scales of the diffusion Fr\'{e}chet vector
for the uniform distribution.}
\label{F:dolphins}
\end{figure}
\end{example}
To state a stability theorem for DFVs, we first define the Wasserstein
distance between two probability distributions on the vertex set $V$ of
a weighted network $K$.
This requires a base metric on $V$ to play a role similar to that of the
Euclidean metric in \eqref{E:wass}. We use the {\em commute-time
distance\/} between the nodes of a weighted network (cf. \cite{lovasz96}).
\begin{definition}
Let $K$ be a connected, weighted network with nodes
$v_1, \ldots, v_n$, and let $\phi_1, \phi_2, \ldots, \phi_n$ be an
orthonormal basis of $\ensuremath{\mathbb{R}}^n$ formed by
eigenvectors of the graph Laplacian with eigenvalues
$0 = \lambda_1 < \lambda_2 \leq \ldots
\leq \lambda_n$. The commute-time distance between $v_i$ and
$v_j$ is defined as
\begin{equation}
d_{CT} (i,j)= \left(
\sum_{k=2}^n \frac{1}{\lambda_k}\left(\phi_k(i)-\phi_k(j)\right)^2
\right)^{1/2}.
\end{equation}
\end{definition}
\noindent
It is simple to verify that $d_{CT}^2 (i,j) = 2 \int_0^\infty d_t^2 (i,j) \, dt$.
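Both distances admit direct spectral implementations, and the integral identity above can be checked numerically; a sketch on an arbitrary three-node network (the weights are illustrative):

```python
import numpy as np

# Laplacian of a weighted triangle (arbitrary illustrative weights).
W = np.array([[0.0, 2.0, 1.0],
              [2.0, 0.0, 3.0],
              [1.0, 3.0, 0.0]])
L = np.diag(W.sum(axis=1)) - W
lam, phi = np.linalg.eigh(L)            # lam[0] = 0 for a connected network

def d_ct(i, j):
    """Commute-time distance; the zero eigenvalue is skipped."""
    return np.sqrt(np.sum((phi[i, 1:] - phi[j, 1:]) ** 2 / lam[1:]))

def d_t2(i, j, t):
    """Squared diffusion distance at scale t."""
    return np.sum(np.exp(-2.0 * lam * t) * (phi[i, :] - phi[j, :]) ** 2)
```

A Riemann sum over a fine grid reproduces $d_{CT}^2(i,j) = 2\int_0^\infty d_t^2(i,j)\,dt$ up to discretization error, and the pointwise bound $d_t^2 \leq d_{CT}^2/(2et)$ used later in \eqref{E:ineq2} can be verified the same way.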
\begin{definition}
Let $\xi, \zeta \in \ensuremath{\mathbb{R}}^n$ represent probability distributions on $V$.
The $p$-Wasserstein distance, $p \geq 1$, between $\xi$ and
$\zeta$ (with respect to the commute-time distance on $V$) is
defined as
\begin{equation}
W_p (\xi, \zeta) = \min_{\mu \in \Gamma (\xi, \zeta)}
\left( \sum_{j=1}^n \sum_{i=1}^n d_{CT}^p(i,j) \mu_{ij} \right)^{1/p} \,,
\label{E:wasserstein-netw}
\end{equation}
where $\Gamma (\xi, \zeta)$ is the set of all probability measures
$\mu$ on $V \times V$ satisfying $(p_1)_\ast (\mu) = \xi$ and
$(p_2)_\ast (\mu) = \zeta$.
\end{definition}
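On a finite vertex set, the minimization in \eqref{E:wasserstein-netw} is a small linear program; a sketch using SciPy for $p=1$ (the function name is ours):

```python
import numpy as np
from scipy.optimize import linprog

def wasserstein1(dist, xi, zeta):
    """1-Wasserstein distance between distributions xi and zeta on an
    n-point space with ground metric dist (n x n matrix), posed as a
    linear program over couplings mu with marginals xi and zeta."""
    n = len(xi)
    A_eq = np.zeros((2 * n, n * n))
    for i in range(n):
        A_eq[i, i * n:(i + 1) * n] = 1.0    # row sums:    sum_j mu_ij = xi_i
        A_eq[n + i, i::n] = 1.0             # column sums: sum_i mu_ij = zeta_i
    res = linprog(dist.ravel(), A_eq=A_eq,
                  b_eq=np.concatenate([xi, zeta]),
                  bounds=(0, None), method="highs")
    return res.fun
```

In practice, `dist` would be the matrix of commute-time distances $d_{CT}(i,j)$ of the network.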
\begin{theorem}
Let $\xi, \zeta \in \ensuremath{\mathbb{R}}^n$ be probability distributions on the
vertex set of a connected, weighted network. For each $t>0$, their diffusion
Fr\'{e}chet vectors satisfy
\begin{equation}
\|F_{\xi,t} - F_{\zeta,t} \|_\infty \leq
4 \sqrt{\frac{\Tr e^{-2t \Delta }-1}{2 et}} \, W_1 (\xi, \zeta) \,.
\end{equation}
\end{theorem}
\begin{proof}
Fix $t>0$ and a node $v_\ell$. Let $\mu \in \Gamma(\xi,\zeta)$
be such that
\begin{equation}
\sum_{j=1}^n \sum_{i=1}^n d_{CT}(i,j)\,\mu_{ij} = W_1 (\xi, \zeta)\,.
\end{equation}
Since $\mu$ has marginals $\xi$ and $\zeta$, we may write
\begin{equation}
F_{\xi,t}(\ell)=\sum_{j=1}^n \sum_{i=1}^n d_t^2(i,\ell) \mu_{i,j}
\quad \text{and} \quad
F_{\zeta,t}(\ell)=\sum_{j=1}^n \sum_{i=1}^n d_t^2(j,\ell) \mu_{i,j} \,,
\end{equation}
which implies that
\begin{equation}
\left\vert F_{\xi,t}(\ell) - F_{\zeta,t}(\ell) \right\vert \leq
\sum_{j=1}^n \sum_{i=1}^n \left| d_t^2(i, \ell) -
d_t^2(j, \ell)\right|\mu_{i,j} \,.
\end{equation}
By Lemma \ref{L:commute} below,
\begin{equation} \label{E:ineq1}
\begin{split}
\left\vert F_{\xi,t}(\ell) - F_{\zeta,t}(\ell) \right\vert \leq &
4\sqrt{\Tr e^{-2\Delta t}-1} \sum_{j=1}^n \sum_{i=1}^n d_t(i,j)\mu_{i,j} \,.
\end{split}
\end{equation}
Observe that
\begin{equation} \label{E:ineq2}
\begin{split}
d_t^2(i,j) &= \sum_{k=1}^{n}e^{-2\lambda_kt}\left(\phi_k(i)-\phi_k(j)\right)^2 \\
&= \sum_{k=2}^{n}\lambda_k e^{-2\lambda_kt}\frac{1}{\lambda_k}
\left(\phi_k(i)-\phi_k(j)\right)^2 \leq \frac{1}{2e t} d_{CT}^2(i,j)
\end{split}
\end{equation}
since the $k=1$ term vanishes ($\phi_1$ is constant) and
$\lambda_k e^{-2 \lambda_k t} \leq \frac{1}{2et}$ for $k \geq 2$.
The theorem follows from \eqref{E:ineq1}
and \eqref{E:ineq2}.
\end{proof}
\begin{lemma} \label{L:commute}
Let $v_i,v_j,v_{\ell}$ be nodes of a connected, weighted network. For
any $t>0$,
\begin{equation*}
\left\vert d_t^2(i,\ell)-d_t^2(j,\ell) \right\vert \leq
4\sqrt{\Tr e^{-2t \Delta} -1} \, d_t(i,j)\,,
\end{equation*}
where $\Tr$ denotes the trace operator.
\end{lemma}
\begin{proof}
Since the eigenfunction $\phi_1$ is constant, we may write the
diffusion distance as
\begin{equation} \label{E:diffdist}
\begin{split}
d_t^2 (i,\ell) = \sum_{k=2}^n e^{-2\lambda_kt}
\left(\phi_k^2 (i) - 2 \phi_k (i) \phi_k(\ell) + \phi_k^2 (\ell) \right) \,.
\end{split}
\end{equation}
Thus,
\begin{equation}
\begin{split}
d_t^2(i,\ell)-d_t^2(j,\ell) &=
\sum_{k=2}^{n}e^{-2\lambda_kt}\left(\phi_k^2(i)-\phi_k^2(j)\right) \\
&-2 \sum_{k=2}^{n}e^{-2\lambda_kt}\phi_k(\ell)\left(\phi_k(i)-\phi_k(j)\right) \\
&= \sum_{k=2}^{n}e^{-2\lambda_kt} (\phi_k(i)+\phi_k(j)) (\phi_k(i)-\phi_k(j)) \\
&- 2\sum_{k=2}^{n}e^{-2\lambda_kt}\phi_k(\ell)\left(\phi_k(i)-\phi_k(j)\right) \,.
\end{split}
\end{equation}
Since each $\phi_k$ has unit norm,
$|\phi_k (\ell)| \leq 1$ and $|\phi_k (i) + \phi_k (j)| \leq 2$ \,.
Therefore,
\begin{equation} \label{E:difflip}
\left\vert d_t^2(i,\ell)-d_t^2(j,\ell) \right\vert \leq
4 \sum_{k=2}^{n} e^{-2\lambda_kt} \left| \phi_k(i)-\phi_k(j) \right| \,.
\end{equation}
The Cauchy-Schwarz inequality, applied to the vectors
$a = \left( e^{-\lambda_2 t}, \ldots, e^{-\lambda_n t} \right)$ and
$b = \left( e^{-\lambda_2 t} \left| \phi_2(i)-\phi_2 (j) \right|, \ldots,
e^{-\lambda_n t} \left| \phi_n(i)-\phi_n (j) \right| \right)$, yields
\begin{equation} \label{E:cs}
\begin{split}
\sum_{k=2}^{n} e^{-2\lambda_kt} \left| \phi_k(i)-\phi_k (j) \right|
&\leq \sqrt{\sum_{k=2}^{n}e^{-2\lambda_kt}} \,
\sqrt{\sum_{k=2}^{n}e^{-2\lambda_kt}\left(\phi_k(i)-\phi_k(j)\right)^2} \\
&= \sqrt{\Tr e^{-2t \Delta} -1} \, d_t(i,j) \,.
\end{split}
\end{equation}
The lemma follows from \eqref{E:difflip} and \eqref{E:cs}.
\end{proof}
\section{{\em C. difficile\/} Infection and Fecal Microbiota Transplantation}
\label{S:cdi}
This section presents an application to analyses of
microbiome data associated with {\em Clostridium difficile} infection
(CDI). CDI kills thousands of patients every year in healthcare facilities
\cite{kelly}. Traditionally, CDI is treated with antibiotics, but the drugs
also attack other bacteria in the gut flora and this has been linked to
recurrence of CDI in recovering patients
\cite{vincent}. Fecal microbiota transplantation (FMT) is a promising
alternative that shows recovery rates close to $90\%$ \cite{bakken, gough}.
Shahinas {\em et al.}\,\cite{shahinas} and Seekatz {\em et al.}\,\cite{seekatz}
have used 16S rRNA sequencing to estimate the abundance of the various
bacterial taxa living in the gut of healthy donors and CDI patients. These
studies have reported a reduced diversity in the bacterial communities
of CDI patients and abundance scores after FMT treatment that are closer
to those for healthy donors. The studies, however, have focused on counts
of bacterial taxa, disregarding interactions in the bacterial communities. To
account for these, we employ diffusion Fr\'{e}chet vectors to examine
differences in pre-treatment and
post-treatment fecal samples and the effect of FMT in the composition
of the gut flora. The analysis is based on metagenomic data for 17 patients
(paired pre-FMT and post-FMT) and 7 donor samples, selected from data
collected by Lee {\em et al.}\,\cite{lee}.
\subsection{Metagenomic Data} \label{S:data}
The data comes from a subset of 94 patients treated with fecal
microbiota transplantation by
one of the authors \cite{lee}, covering the period 2008--2012. From this,
17 patients were selected, not randomly, for sequencing. The consent
protocol consisted of first sending a written letter to each patient
requesting permission to further study their already collected stool
samples. The letter stated that a follow-up telephone call would be made
5--10 business days after the mailing date to verify that the patient
had received the letter and to ask whether they would provide
consent. All patients in this study were contacted, and all
patients provided written informed consent. Furthermore, this study
and permission protocol was approved by the Hamilton Integrated
Research Ethics Board \#12-3683, the University of Guelph
Research Ethics Board 12AU013 and the Florida State University
Research Ethics IRB00000446.
All {\em C. difficile} infections were confirmed by in-hospital, real-time,
polymerase chain reaction (PCR) testing for the toxin B gene. This study
sequenced the forward V3-V5 region of the 16S rRNA gene from 17 CDI
patients who were treated with FMT(s): a pre-FMT and a corresponding
post-FMT sample for each patient, together with 7 samples from four
donors, for a total of 41 fecal samples.
The bioinformatics software {\tt mothur} was used
as the primary means of processing and quality-filtering reads and
calculating statistical indices of community structure; see \cite{rush}
for a breakdown of the {\tt mothur} processing pipeline.
\subsection{Co-occurrence Networks} \label{S:network}
This study is based on bacterial interactions at the phylum level.
We model the (expected) interactions among the various phyla found
in the healthy human gut by means of a co-occurrence network
\cite{junker} in which each node represents a phylum. An
edge between two phyla is weighted according to the correlation
between their counts, estimated from samples taken from a group
of healthy individuals. More precisely, let $v_1, \ldots, v_n$
be the nodes of the network and $\rho_{ij}$, $i \ne j$, be the correlation
coefficient between the counts for the phyla represented by $v_i$ and $v_j$.
As we are interested in sub-communities
of interactive phyla, the edge between $v_i$ and $v_j$ is weighted by
the absolute correlation $w_{ij} = |\rho_{ij}|$, disregarding whether the
correlation is positive or negative. Since this construction typically yields
a fully connected network, we use the locally adaptive network
sparsification (LANS) technique \cite{foti} to simplify the network,
retaining the most significant interactions (edges) while keeping the
network connected. The following description of LANS is equivalent
to that in \cite{foti}. Define
\begin{equation}
F_{ij}=\frac{1}{n}\sum_{k=1}^{n}\mathbf{1}\{w_{ik}\leqslant w_{ij}\} \,,
\end{equation}
where $\mathbf{1}\{w_{ik}\leqslant w_{ij}\}$ returns $1$ if
$w_{ik}\leqslant w_{ij}$ and $0$ otherwise. For a given pair $(i,j)$, $F_{ij}$
is the probability that the absolute correlation between the counts for a
random phylum and $v_i$ is no larger than $w_{ij}$. Observe that $F_{ij}$
may differ from $F_{ji}$. For $i \ne j$, the decision as to whether the edge
between $v_i$ and $v_j$ is deleted or preserved is based on a
``significance'' level $0 \leqslant \alpha \leqslant 1$. The edge is
preserved if $1-F_{ij} < \alpha$ or $1-F_{ji} < \alpha$. Larger values of
$\alpha$ retain more edges of the network.
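The LANS rule above translates directly into code; a minimal vectorized NumPy sketch (the function name is ours):

```python
import numpy as np

def lans(W, alpha):
    """Locally adaptive network sparsification (sketch of the rule in the
    text): keep the edge (i, j) iff 1 - F_ij < alpha or 1 - F_ji < alpha."""
    # F[i, j] = (1/n) * #{k : w_ik <= w_ij}, the empirical CDF of row i at w_ij.
    F = np.mean(W[:, :, None] <= W[:, None, :], axis=1)
    keep = (1.0 - F < alpha) | (1.0 - F.T < alpha)   # symmetric keep-rule
    S = np.where(keep, W, 0.0)
    np.fill_diagonal(S, 0.0)
    return S
```

Because the keep-rule is symmetric in $(i,j)$, the sparsified weight matrix remains symmetric, and larger values of $\alpha$ retain more edges.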
The model we develop is based on seven bacterial phyla:
\emph{Actinobacteria}, \emph{Bacteroidetes}, \emph{Firmicutes}, \emph{Fusobacteria}, \emph{Proteobacteria}, \emph{Verrucomicrobia}, and a group of unidentified bacteria treated as a
single phylum labeled Unclassified. Figure \ref{F:graph} shows the
co-occurrence network obtained after LANS sparsification with $\alpha= 0.1$.
Dark edge colors indicate a higher level of interaction; that is, larger
edge weights. Note that there is a highly interactive sub-community
comprising \emph{Actinobacteria}, \emph{Verrucomicrobia} and unclassified bacteria.
This network is used in our analyses of variation in the structure of bacterial communities in samples from CDI patients, recovering patients and healthy
individuals, as well as the effects of FMT. (The package \emph{NetworkX} for
the Python programming language was used to depict the
network \cite{netw}.)
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.4\textwidth]{graph_0_1}
\end{center}
\caption{Bacterial phyla co-occurrence network for the human gut
sparsified to significance level $\alpha = 0.1$ with the LANS method.}
\label{F:graph}
\end{figure}
\subsection{Microbiota Analysis} \label{S:analysis}
The co-occurrence network constructed in Section \ref{S:network}
(Figure \ref{F:graph}) from culture data for healthy donors
provides a model for the expected interactions among the seven
bacterial phyla considered in this study. We use the network and
bacterial count data to produce a biomarker $\gamma_t$ that is
effective in characterizing CDI and potentially in monitoring
the effects of FMT treatment. To establish a baseline, we also
construct a biomarker $\beta$ solely based on bacterial counts
and compare it with $\gamma_t$.
Let $v_1, \ldots, v_7$ be the nodes of the co-occurrence network.
For a gut culture sample $S$, let $\xi_i (S)$ be the frequency of
$v_i$ in $S$. Clearly, $\xi_1 (S) + \ldots + \xi_7 (S) = 1$. Our first
method of analysis is based directly on the probability distribution
on the vertex set given by
\begin{equation}
\xi (S) = \left( \xi_1(S) , \ldots , \xi_7 (S) \right) \in \ensuremath{\mathbb{R}}^7 \,.
\end{equation}
To derive a scalar biomarker $\beta (S)$, we
use culture samples from healthy individuals and CDI patients. We
calculate their 7-dimensional frequency vectors $\xi (S)$ and use linear
discriminant analysis (LDA) to learn an axis in $\ensuremath{\mathbb{R}}^7$ along
which their scores $\beta (S)$ optimally discriminate healthy samples
from those of CDI patients.
For a sample $S$, $\beta (S)$ is the score of $\xi (S)$ along the
learned axis. Next, we describe the biomarker $\gamma_t$.
For a sample $S$, let
\begin{equation}
F^S_t (i) = \sum_{j=1}^7 d^2_t (i,j) \xi_j (S)
\end{equation}
be the diffusion Fr\'{e}chet vector for the distribution $\xi (S)$.
As before, using training data and applying LDA, we
obtain $\gamma_t$.
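The two biomarkers can be sketched in a few lines of code. The snippet below is an illustrative reconstruction, not the authors' implementation: the Dirichlet draws stand in for real frequency data, the matrix `d2` stands in for the diffusion-distance matrix $d_t^2(i,j)$, and a closed-form two-class Fisher discriminant replaces a full LDA routine.

```python
import numpy as np

def frechet_vector(xi, d2):
    """Diffusion Frechet vector: F^S_t(i) = sum_j d_t^2(i, j) * xi_j."""
    return d2 @ xi

def fisher_axis(class0, class1):
    """Closed-form two-class Fisher discriminant: w ~ S_W^{-1} (mu1 - mu0)."""
    m0, m1 = class0.mean(axis=0), class1.mean(axis=0)
    Sw = np.cov(class0, rowvar=False) + np.cov(class1, rowvar=False)
    # small ridge: compositional data make Sw singular along (1,...,1)
    w = np.linalg.solve(Sw + 1e-9 * np.eye(Sw.shape[0]), m1 - m0)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(0)
# toy 7-phylum frequency vectors (each row sums to 1), stand-ins for real counts
healthy = rng.dirichlet(5 * np.ones(7), size=10)
cdi = rng.dirichlet(np.array([1.0, 8, 1, 1, 1, 1, 1]), size=10)

# beta: LDA score of the raw frequency vector
w_beta = fisher_axis(healthy, cdi)
beta = lambda xi: float(xi @ w_beta)

# gamma_t: LDA score of the diffusion Frechet vector, for a toy symmetric d_t^2
d2 = rng.random((7, 7)); d2 = (d2 + d2.T) / 2; np.fill_diagonal(d2, 0.0)
w_gamma = fisher_axis(healthy @ d2, cdi @ d2)   # rows are Frechet vectors
gamma = lambda xi: float(frechet_vector(xi, d2) @ w_gamma)
```

With a real diffusion kernel, `d2` would be computed from the sparsified network of Section \ref{S:network} at the chosen scale $t$.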
\subsection{Results} \label{S:results}
Our analyses were based on 41 gut culture samples comprising 7 healthy
donors, 17 pre-treatment CDI patients (pre-FMT), 4 post-treatment patients
not in resolution (post-NR), and 13 post-treatment patients in resolution
(post-R). Figure \ref{F:freq} shows boxplots for each of the four groups
of the distribution of the phylum frequency data obtained from
metagenomic sequencing of the forty-one samples.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=4.5in]{freqs_boxplot}
\end{center}
\caption{Boxplot of bacterial phylum frequency in the human gut
for healthy donors, pre-FMT patients, post-FMT patients not in
resolution, and post-FMT patients in resolution.}
\label{F:freq}
\end{figure}
Due to the relatively small sample size, we formed a Healthy group
comprising all samples from donors and post-FMT patients in resolution
and a CDI group consisting of all samples from pre-FMT and post-FMT
patients not in resolution. Inclusion of post-R samples in the Healthy
group has the virtue of challenging the biomarkers $\beta$ and
$\gamma_t$ to be sensitive to partial restoration to normal of the
gut flora of recovering patients.
From the frequency data, we constructed $\beta$, as described in
Section \ref{S:analysis}. Linear discriminant analysis yielded an axis
in $\ensuremath{\mathbb{R}}^7$ along a direction determined by a unit vector whose
loadings are specified in Table \ref{T:loadings}. The loadings revealed that
$\beta$ captures a complex combination of the frequencies of the
various phyla, with only \emph{Fusobacteria} and \emph{Verrucomicrobia} playing
lesser roles due to their low frequencies.
\begin{table}[h!]
\begin{center}
\begin{tabular}{| c | c | c |} \hline
Phylum & LDA loadings for $\beta$ & LDA loadings for
$\gamma_t$ \\ \hline
\emph{Firmicutes} & -0.412 & -0.079 \\ \hline
\emph{Proteobacteria} & -0.644 & 0.059 \\ \hline
\emph{Bacteroidetes} & 0.517 & -0.623 \\ \hline
\emph{Actinobacteria} & 0.267 & -0.340 \\ \hline
Unclassified & 0.275 & -0.443 \\ \hline
\emph{Fusobacteria} & -0.010 & -0.119 \\ \hline
\emph{Verrucomicrobia} & 0.006 & -0.525 \\ \hline
\end{tabular}
\end{center}
\caption{Loadings of the directions that determine the axes in
7-D space for $\beta$ and $\gamma_t$.}
\label{T:loadings}
\end{table}
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=4.5in]{frechet_boxplot}
\end{center}
\caption{Boxplot of the values of the diffusion Fr\'{e}chet functions.}
\label{F:dffs}
\end{figure}
A similar analysis was carried out for $\gamma_t$. We tested a range of
values for the significance level $\alpha$ used in network sparsification
and the scale parameter $t$. The values $\alpha = 0.1$ and $t = 7.75$
were selected because they optimized the performance of $\gamma_t$
as measured by the area under its receiver operating characteristic
(ROC) curve \cite{fawcett}, a plot of the true positive rate (sensitivity)
against the false positive rate (1 - specificity) at different threshold
levels. Figure \ref{F:dffs}
shows boxplots of the values of the diffusion Fr\'{e}chet function for
the four groups. Table \ref{T:loadings} shows the loadings for a unit vector
in the direction of the axis in $\ensuremath{\mathbb{R}}^7$ space associated with $\gamma_t$
that indicate that the composition of the sub-communities
associated with \emph{Bacteroidetes} and \emph{Verrucomicrobia} have a dominant
role in characterizing CDI through $\gamma_t$, followed by Unclassified
and \emph{Actinobacteria}. Note that the model identifies
\emph{Verrucomicrobia} as a key player, whereas its contribution to $\beta$
is minor simply because its count is significantly smaller than the
counts for other phyla.
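The ROC computation used to compare the biomarkers can be sketched without any statistics package. The threshold sweep below uses synthetic normal scores as stand-ins for the LDA scores (20 healthy, 21 infected, matching the sample counts in the data); it is illustrative only.

```python
import numpy as np

def roc_points(scores, labels):
    """Threshold sweep over the scores; labels: 1 = positive (CDI).
    Returns arrays of false positive rates and true positive rates."""
    order = np.argsort(-np.asarray(scores))
    y = np.asarray(labels)[order]
    tp = np.concatenate(([0.0], np.cumsum(y)))       # true positives per cut
    fp = np.concatenate(([0.0], np.cumsum(1 - y)))   # false positives per cut
    return fp / fp[-1], tp / tp[-1]

def auc(fpr, tpr):
    """Trapezoidal area under the ROC curve."""
    return float(np.sum((fpr[1:] - fpr[:-1]) * (tpr[1:] + tpr[:-1]) / 2))

rng = np.random.default_rng(1)
pos = rng.normal(1.0, 1.0, 21)    # toy scores: 21 "CDI" samples
neg = rng.normal(-1.0, 1.0, 20)   # toy scores: 20 "Healthy" samples
fpr, tpr = roc_points(np.concatenate([pos, neg]),
                      np.concatenate([np.ones(21), np.zeros(20)]))
```

The parameters $\alpha$ and $t$ were selected by maximizing `auc(fpr, tpr)` over a grid of candidate values.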
Figure \ref{F:roc} provides a comparison of the
ROC curves for $\beta$ and $\gamma_t$, $t = 7.75$, demonstrating
that $\gamma_t$ has a superior performance relative to $\beta$ in
characterizing CDI and recovery from CDI.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=2.5in]{roc_both}
\end{center}
\caption{ROC curves for $\beta$ (black) and $\gamma_t$ (blue),
for $t=7.75$.}
\label{F:roc}
\end{figure}
At the threshold value $a = -1.128$,
$\gamma_t$ attained a true positive rate of $83\%$ and a false positive rate
of $20\%$, as estimated from data for 20 healthy and 21 infected samples,
respectively. At $a = -1.108$, the sensitivity increased to $91\%$ with a false
positive rate of $25\%$. As samples from recovering CDI patients whose
gut flora are only partially restored to normal have been included in the healthy
group, we conclude that $\gamma_t$ also shows good sensitivity to the
effects of FMT treatment. In this regard, we note that a significant
portion of the false positives correspond to samples from post-FMT treatment
patients in resolution whose gut flora are only partially restored to normal.
\section{Concluding Remarks} \label{S:remarks}
We introduced diffusion Fr\'{e}chet functions and diffusion Fr\'{e}chet
vectors for probability measures on Euclidean space and weighted
networks that integrate geometric information at multiple spatial
scales, yielding a pathway to the quantification of their shapes.
We proved that these functional statistics are stable with respect
to the Wasserstein distance, providing a theoretical basis for their
use in data analysis. To demonstrate the usefulness of these concepts,
we discussed examples using simulated data and applied DFVs to
the analysis of data associated with fecal microbiota transplantation
that has been used as an alternative to antibiotics in treatment of
recurrent CDI. Among other things, the method provides a technique
for detecting the presence and studying the organization of
sub-communities at different scales. The approach enables us to
address such problems as quantification of structural variation in
bacterial communities associated with a cohort of
individuals (for example, bacterial communities in the gut of
multiple individuals), as well as associations between structural
changes, health and disease.
Diffusion Fr\'{e}chet functions may be defined in the broader framework
of metric spaces equipped with a diffusion kernel. However, their
stability in this general setting remains to be investigated. As
remarked in Section \ref{S:efrechet}, the present work suggests
the development of refinements of diffusion Fr\'{e}chet
functions such as diffusion covariance tensor fields for Borel
measures on Riemannian manifolds to make their geometric
properties more readily accessible.
\section*{Acknowledgements}
This research was supported in part by: NSF grants DBI-1262351
and DMS-1418007; CIHR-NSERC Collaborative Health Research Projects (413548-2012); and NSF under Grant DMS-1127914 to SAMSI.
\section*{References}
\section{Introduction}
The real algebraic variety of representations from a $3$-manifold
group $\pi_1(M)$ to $SU(2)$ or $SO(3)$ has long been a subject of
interest, giving rise as it does to useful invariants such as the
Casson invariant and the $A$-polynomial \cite{CCGLS}.
In the case where $\partial M$ is a torus -- in particular, where $M$
is the exterior of a knot in $S^3$ -- there is a particular interest in finding representations which vanish on a given slope $\alpha\in\mathbb{Q}\cup\{\infty\}$ on $\partial M$,
and hence give rise to a representation of $\pi_1(M(\alpha))$, where
$M(\alpha)$ is the manifold obtained from $M$ by Dehn filling along $\alpha$.
A description of the character variety in the case of a $2$-bridge
knot is given by Burde in \cite{B}. For twist knots, a more detailed
description is given by Uygur and Azcan in \cite{UA}.
Burde \cite{B} used this description to show that nontrivial representations $\pi_1(M(+1))\to SU(2)$
exist for any nontrivial $2$-bridge knot exterior $M$, and
deduced the Property P Conjecture for $2$-bridge knots. More recently,
Kronheimer and Mrowka \cite{KM1} proved the Property P Conjecture in full by
showing that nontrivial representations $\pi_1(M(+1))\to SO(3)$ exist for
an arbitrary nontrivial knot exterior $M$.
In another article \cite{KM2}, the same authors proved that there
is an irreducible representation $\pi_1(M(r))\to SU(2)$
(that is, a representation with nonabelian image), for
any nontrivial knot exterior $M$ and any slope $r\in\mathbb{Q}$ such that
$|r|\le 2$. One consequence of this (see \cite{BZ,DG})
is that every nontrivial knot has a nontrivial $A$-polynomial.
\medskip
In the present note, we construct a bilinear pairing
$\pi_1({\mathcal C})\times\pi_1(\partial M)\to\mathbb{Z}$ for suitable subsets
$\mathcal C$ of the variety $\mathcal R$ of representations
$\pi_1M\to SU(2)$, and apply it to Burde's description \cite{B} of
$\mathcal R$ in the case of $2$-bridge knots, to show that the
restriction $|r|\le 2$ in \cite{KM2} can be weakened in this case:
\begin{thm}\label{twobridge}
Let $M$ be the exterior of a nontrivial $2$-bridge knot in $S^3$
which is not a torus knot,
and let $\alpha$ be any non-meridian slope in $\partial M$. Then
there exists an irreducible representation $\pi_1(M(\alpha))\to SU(2)$.
\end{thm}
Since there are many examples of lens spaces obtainable by Dehn surgery
on nontrivial knots, it is clear that the above theorem cannot possibly
extend from $2$-bridge knots to arbitrary knots. However, by varying
the subset $\mathcal C$ of the representation in our construction,
we can adapt the technique to consider also reducible representations.
As an example, we prove the following result for torus knots.
\begin{thm}\label{torus}
Let $X$ be the exterior of the $(p,q)$ torus knot, where $1<p<q$,
and $X(\alpha)$
the manifold obtained from $X$ by Dehn filling along a non-meridian
slope $\alpha\in\mathbb{Q}$. Then
\begin{enumerate}
\item[\rm{(1)}] if $\alpha=pq$ and $p>2$, then $\pi_1(X(\alpha))$ admits
an irreducible representation to $SU(2)$;
\item[\rm{(2)}] if $\alpha=pq$ and $p=2$, then $\pi_1(X(\alpha))$ admits
no irreducible representation to $SU(2)$, but admits
a representation to $SO(3)$ with nonabelian image;
\item[\rm{(3)}] if $\alpha=pq\pm\frac1n$ for some positive integer
$n$, then every representation from $\pi_1(X(\alpha))$ to $SO(3)$
has abelian image;
\item[\rm{(4)}] for any other value of $\alpha$, $\pi_1(X(\alpha))$ admits
an irreducible representation to $SU(2)$.
\end{enumerate}
\end{thm}
Results of \cite{M} indicate that this result is in a sense
best possible: for example, in
Case (3) the Dehn surgery manifold $X(\alpha)$ is a lens space.
The paper is organised as follows. In Section \ref{charvar}
below we recall some basic properties of the $SU(2)$
representation and character varieties of a knot group.
In Section \ref{wind}
we describe our bilinear pairing, in a fairly general context.
We then apply this in Sections \ref{2bk} and \ref{tk} to prove Theorems
\ref{twobridge} and \ref{torus} respectively.
\medskip\noindent{\bf Acknowledgement} We are grateful to Ben Klaff
for helpful conversations about this work.
\section{The SU(2) representation and character varieties}\label{charvar}
If $\Gamma$ is any finitely presented group,
and $G$ is a (real) algebraic matrix group, then the set of
representations $\Gamma\to G$ forms a real affine algebraic variety
$\mathcal{R}$ on which $G$ acts by conjugation, giving rise to a
quotient {\em character variety} $\mathcal{X}$.
For the purposes of the present paper, $\Gamma$ will always be a knot group,
and $G=SU(2)$. In this case $\mathcal{R}$ is naturally expressed as a union of
two closed $SU(2)$-invariant subvarieties $\mathcal{R}_{red}\cup\mathcal{R}_{irr}$,
and hence also $\mathcal{X}$ is a union of subvarieties $\mathcal{X}_{red}\cup\mathcal{X}_{irr}$. Here $\mathcal{R}_{red}$ denotes the variety of
{\em reducible} representations $\rho:\Gamma\to SU(2)$, in other words
those for which the resulting $\Gamma$-module $\mathbb{C}^2$ splits as a direct sum of
two $1$-dimensional modules. This happens precisely when the image of
$\rho$ is abelian, in other words when $\rho$ is induced from a representation
of $\Gamma/[\Gamma,\Gamma]\cong\mathbb{Z}$. Hence $\mathcal{R}_{red}$ is canonically
homeomorphic to $SU(2)\cong S^3$. The corresponding character subvariety
$\mathcal{X}_{red}$ is canonically homeomorphic to the closed interval $[-2,2]\subset\mathbb{R}$, parametrised by the trace $Tr(\rho(\mu))$, where $\rho$
is a representative of a conjugacy class of reducible representations, and
$\mu\in\Gamma$ is a fixed meridian element.
The complement of $\mathcal{R}_{red}$ in $\mathcal{R}$ is not closed, but its
closure is a subvariety $\mathcal{R}_{irr}$ which is $SU(2)$-invariant and hence
gives rise to a closed subvariety $\mathcal{X}_{irr}$ of $\mathcal{X}$.
Now fix once and for all a meridian $\mu\in\Gamma$, and consider the following
subset $\mathcal{C}$ of $\mathcal{R}$. A representation $\rho:\Gamma\to SU(2)$
belongs to $\mathcal{C}$ if and only if
$$\rho(\mu)=\left(\begin{array}{cc} x+iy & 0 \\ 0 & x-iy\end{array}\right)$$
with $x,y\in\mathbb{R}$, $x^2+y^2=1$ and $y\ge 0$.
Note that every representation in $\mathcal{R}$ is conjugate to one in
$\mathcal{C}$, so the quotient map $\mathcal{R}\to\mathcal{X}$ restricts
to a surjection on $\mathcal{C}$ (and to a homeomorphism
$\mathcal{C}\cap\mathcal{R}_{red}\to\mathcal{X}_{red}$).
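Concretely, the normalization that puts a representation into $\mathcal{C}$ can be carried out on matrices: diagonalize $\rho(\mu)$ and, if necessary, swap the eigenvalues so that the $(1,1)$ entry has non-negative imaginary part. The numerical sketch below is illustrative only (the conjugating matrix returned here is not rescaled to lie in $SU(2)$; only the diagonal form matters for the convention).

```python
import numpy as np

def random_su2(rng):
    """Random SU(2) matrix [[a, b], [-conj(b), conj(a)]] with |a|^2+|b|^2 = 1."""
    v = rng.normal(size=4)
    v /= np.linalg.norm(v)
    a, b = v[0] + 1j * v[1], v[2] + 1j * v[3]
    return np.array([[a, b], [-b.conjugate(), a.conjugate()]])

def normalize(m):
    """Diagonalize m, ordering eigenvalues so Im of the (1,1) entry is >= 0."""
    vals, vecs = np.linalg.eig(m)
    if vals[0].imag < 0:
        vecs = vecs[:, ::-1]          # swap the two eigenvectors
    d = np.linalg.inv(vecs) @ m @ vecs
    return d, vecs

rng = np.random.default_rng(2)
m = random_su2(rng)
d, u = normalize(m)                   # d is diagonal with Im(d[0,0]) >= 0
```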
\section{Winding numbers}\label{wind}
Let $A$ be an abelian group, $G$ a (connected)
Lie group, and ${\mathcal D}$ a subset of the
variety of representations $A\to G$. Given any path
$P=\{\rho_t,0\le t\le 1\}$ in ${\mathcal D}$, and any $a\in A$, we obtain a path
$P(a)=\{\rho_t(a),0\le t\le 1\}$ in $G$.
Clearly, if $P'$ is homotopic
(rel end points) to $P$, then $P'(a)$ is homotopic (rel end points)
to $P(a)$, for any $a\in A$. Hence we obtain a pairing
$$\nu:A\times\pi_1({\mathcal D})\to\pi_1(G),~~~~\nu(a,[P]) := [P(a)].$$
\noindent{\bf Remark} Recall that, if $f,g:[0,1]\to G$ are closed paths
in the topological group $G$, based at the identity element $1\in G$,
then $[f][g]=[f.g]=[g][f]$ in $\pi_1(G,1)$, where $f.g$ denotes the
pointwise product $t\mapsto f(t)g(t)\in G$. This can easily be seen, for
example, from the diagram below, representing the map $[0,1]^2\to G$,
$(s,t)\mapsto f(s)g(t)$.
\begin{center}
\epsfxsize=5cm \epsfysize=4.5cm \epsfbox{topgroup}
\end{center}
In particular, $\pi_1(G,1)$ is abelian, so the above definition of
$\nu$ is unaffected by base-point choices.
\begin{prop}\label{bilinear} The pairing $\nu:(a,[P])\mapsto [P(a)]$ defined
above is bilinear.
\end{prop}
\begin{proof}
For a fixed element $a\in A$, if $P.Q$ is the concatenation of
paths $P,Q$ in ${\mathcal D}$, then $(P.Q)(a)$ is the concatenation of
$P(a),Q(a)$ in $G$, so $[P]\mapsto [P(a)]$ is a homomorphism
$\pi_1({\mathcal D})\to\pi_1(G)$.
\smallskip
For $a,b\in A$ and a fixed path $P$ in ${\mathcal D}$, we have $P_t(ab)=P_t(a)P_t(b)$
for each $t\in [0,1]$, since $P_t$ is a representation $A\to G$. By the above remark, $[P(ab)]=[P(a)][P(b)]$ in $\pi_1(G)$.
In other words, $a\mapsto [P(a)]$ is a homomorphism $A\to\pi_1(G)$.
\end{proof}
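For $G=S^1$ the pairing is just a winding number, and its bilinearity can be checked numerically. The sketch below (illustrative only; the integer constants defining the loop are arbitrary choices) samples a closed loop of representations $\mathbb{Z}^2\to S^1$, $\rho_t(a_1,a_2)=\exp\bigl(2\pi i\,t(c_1a_1+c_2a_2)\bigr)$, and recovers $\nu$ by accumulating phase increments.

```python
import numpy as np

def winding(path):
    """Winding number of a closed path in S^1 (complex samples of modulus 1)."""
    steps = np.angle(path[1:] / path[:-1])   # increments in (-pi, pi]
    return int(round(steps.sum() / (2 * np.pi)))

def loop(a, c=(0, 3), m=2000):
    """Closed loop of representations Z^2 -> S^1:
    rho_t(a) = exp(2*pi*i*t*(c1*a1 + c2*a2)), t in [0, 1]."""
    t = np.linspace(0.0, 1.0, m)
    return np.exp(2j * np.pi * t * (c[0] * a[0] + c[1] * a[1]))

mu, lam = (1, 0), (0, 1)
nu_mu = winding(loop(mu))       # 0: rho_t(mu) is constant along this loop
nu_lam = winding(loop(lam))     # 3
nu_sum = winding(loop((1, 1)))  # 3 = nu_mu + nu_lam: bilinearity in a
```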
We apply Proposition \ref{bilinear} in the following restricted context. Let $X$ be the
exterior of a nontrivial knot in $S^3$, and let
$A=\pi_1(\partial X)\cong\mathbb{Z}^2$. Let $\mu,\lambda\in A$ denote a fixed meridian and longitude respectively.
Let $G$ be the Lie group $S^1=\{z\in\mathbb{C};|z|=1\}$.
The subset $\mathcal{D}$ of the variety of representations $A\to S^1$
arises as follows. We regard $S^1$ as the subgroup of $SU(2)$ consisting
of diagonal matrices. Recall that $\mathcal{R}$ is the variety of
representations $\pi_1(X)\to SU(2)$, and that $\mathcal{C}$ is the
subvariety of $\mathcal{R}$ consisting of representations
$\rho:\pi_1(X)\to SU(2)$ such that $\rho(\mu)$ is diagonal, and
the imaginary part of the $(1,1)$ entry of $\rho(\mu)$ is non-negative.
Since $A$ is abelian and $\pi_1(X)$ is generated by conjugates of $\mu$,
it follows that $\rho(A)$ contains only diagonal matrices whenever
$\rho\in\mathcal{C}$. We define $\mathcal{D}$ to be the set
of representations $A\to S^1$ that arise as restrictions of
representations in $\mathcal{C}$.
Note that $\pi_1(S^1)\cong\mathbb{Z}$, so the bilinear pairing $\nu:A\times\pi_1(\mathcal{D})\to\pi_1(S^1)$ is integer-valued.
\begin{prop}
For each $\gamma\in\pi_1(\mathcal{D})$ let $K_\gamma\subset A$ denote
the kernel of the homomorphism $A\to\mathbb{Z}$, $a\mapsto\nu(a,\gamma)$.
Then either $K_\gamma=A$ or $K_\gamma=\mathbb{Z}\mu$, the subgroup of $A$
generated by $\mu$.
\end{prop}
\begin{proof}
Certainly $\mu$ belongs to $K_\gamma$ for all
$\gamma\in\pi_1(\mathcal{D})$, since for $\rho\in\mathcal{C}$ the matrix $\rho(\mu)$ is
confined to the closed upper semicircle of $S^1$ (so the winding number of $\rho(\mu)$
as $\rho$ travels around a loop representing $\gamma$ is zero).
On the other hand, let $c=\nu(\lambda,\gamma)$. Then by bilinearity,
for any $m,n\in\mathbb{Z}$ we have $\nu(m\mu+n\lambda,\gamma)=cn$. If $cn=0$
for some $n$ then either $c=0$ or $n=0$. In the first
case $\nu(m\mu+n\lambda,\gamma)=0$ for all $m,n$.
In the second case, $m\mu+n\lambda=m\mu\in\mathbb{Z}\mu$.
\end{proof}
\begin{cor}\label{dehnfill}
If the pairing $\nu:A\times\pi_1(\mathcal{D})\to\mathbb{Z}$ is not uniformly vanishing, and $\alpha$ is
any non-meridian slope on $\partial X$, then $\pi_1(X(\alpha))$ admits
a nontrivial representation to $SU(2)$, where $X(\alpha)$ is the
$3$-manifold obtained from $X$ by Dehn-filling along $\alpha$.
\end{cor}
\begin{proof}
By hypothesis, $K_\gamma\ne A$ for some $\gamma\in\pi_1(\mathcal{D})$,
so $K_\gamma=\mathbb{Z}\mu$ by the Proposition.
Since $\alpha\notin\mathbb{Z}\mu$, it follows that $\nu(\alpha,\gamma)\ne 0$.
Hence the map $S^1\to S^1$ defined by
$t\mapsto\gamma_t(\alpha)$ has nonzero winding number, and hence in
particular is surjective. Thus we may choose $t\in S^1$ such that $\gamma_t(\alpha)=1\in S^1$. Now $\gamma_t$ is the restriction of a nontrivial representation $\sigma:\pi_1(X)\to SU(2)$, so $\sigma(\alpha)=1\in SU(2)$ and hence $\sigma$ induces a nontrivial representation
$$\tau:\pi_1(X(\alpha))=\pi_1(X)/\<\<\alpha\>\>\to SU(2).$$
\end{proof}
In practice, to find suitable closed paths in $\mathcal{D}$
we may find a closed path in $\mathcal{C}$ and project it to
$\mathcal{D}$ using the restriction map $\rho\mapsto\rho|_A$.
The next result shows that it is equally valid to work in the
character variety $\mathcal{X}$ rather than $\mathcal{C}$.
\begin{lemma}\label{char}
The restriction map $\mathcal{C}\to\mathcal{D}$, $\rho\mapsto\rho|_A$,
factors through $\mathcal{X}$.
\end{lemma}
\begin{proof}
Given $\rho,\rho'\in\mathcal{C}$ with the same image in $\mathcal{X}$,
we know that $\rho,\rho'$ are conjugate by some matrix $M\in SU(2)$.
If $\rho(\mu)\in Z(SU(2))=\{\pm I\}$, then the image of $\rho$
is central and so $\rho'=\rho$. Otherwise, $\rho(\mu)=\rho'(\mu)$
is a diagonal matrix with non-real diagonal entries, so the conjugating
matrix $M$ must also be diagonal. But in this case $\rho(A)$ consists
only of diagonal matrices, which therefore commute with $M$, so
the restrictions of $\rho$ and $\rho'$ to $A$ coincide.
\end{proof}
An immediate consequence of Lemma \ref{char} is that any path in
$\mathcal{R}$ between two conjugate representations gives rise to
a closed path in $\mathcal{D}$ by first projecting to $\mathcal{X}$
and then applying the restriction map $\mathcal{X}\to\mathcal{D}$.
\section{Two-bridge knots}\label{2bk}
In this section we prove the following result.
\begin{thm}\label{twobridgethm}
Let $\mathcal{R}_{irr}$ be the variety of irreducible
$SU(2)$-representations of the group $G$ of a
two-bridge knot which is not a torus knot, and let $A$ be a peripheral subgroup
of $G$. Then there is a closed curve $\gamma$ in $\mathcal{R}_{irr}$
such that the pairing $\nu:\pi_1(\gamma)\times A\to\mathbb{Z}$ is not identically zero.
\end{thm}
\begin{proof}
A two-bridge knot group $G$ has a presentation of the form
$$G=\< x, y | Wx=yW \>,$$
where $W=W(x,y)$ is a word of the form
$x^{\varepsilon(1)}y^{\varepsilon(2)}\cdots y^{\varepsilon(2n)}$ with $\varepsilon(i)=\pm 1$ for
each $i$. Here $x$ and $y$ are meridians.
The symmetry of the presentation ensures that $xW^*=W^*y$
in $G$, where $W^*(x,y):=W(y,x)$. Hence $\beta=W^*W$ commutes with
the meridian $x$, so is a peripheral element and represents a slope
on the boundary torus of the knot exterior.
The exponents $\varepsilon(i)$ can be more explicitly described. There is an
odd integer $k$ coprime to $2n+1$ such that
$$\varepsilon(i)=(-1)^{\lfloor \frac{ik}{2n+1}\rfloor}$$
for each $i$. In particular, since
$$\frac{ik}{2n+1}-1<\lfloor \frac{ik}{2n+1}\rfloor< \frac{ik}{2n+1}$$
for each $i$, we have
$$\lfloor \frac{ik}{2n+1} \rfloor + \lfloor \frac{(2n+1-i)k}{2n+1}\rfloor = k-1\equiv 0~\mathrm{mod}~2$$ for
each $i$, so that $\varepsilon(2n+1-i)=\varepsilon(i)$. From this, it follows that
$$W(x^{-1},y^{-1})=W(y,x)^{-1}=W^{*-1}~~\mathrm{and}~~W^*(x^{-1},y^{-1})=W^{-1}.$$
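The exponent pattern and the resulting identities can be verified directly. Since $W(x^{-1},y^{-1})=W(y,x)^{-1}$ holds in the free group once $\varepsilon(2n+1-i)=\varepsilon(i)$, it can be tested on generic matrices; the sketch below does so for the trefoil ($n=k=1$), the figure-eight knot ($n=2$, $k=3$), and one further example. Function names are ours.

```python
import numpy as np

def eps(n, k):
    """Exponents eps(i) = (-1)^floor(i*k/(2n+1)), i = 1, ..., 2n."""
    return [(-1) ** ((i * k) // (2 * n + 1)) for i in range(1, 2 * n + 1)]

def word(e, x, y):
    """W(x, y) = x^e1 y^e2 x^e3 ... y^e_{2n} as a matrix product."""
    out = np.eye(2)
    for i, ei in enumerate(e):
        g = x if i % 2 == 0 else y
        out = out @ (g if ei == 1 else np.linalg.inv(g))
    return out

rng = np.random.default_rng(3)
x, y = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))  # generic matrices

checks = []
for n, k in [(1, 1), (2, 3), (3, 5)]:                    # trefoil, figure-eight, ...
    e = eps(n, k)
    lhs = word(e, np.linalg.inv(x), np.linalg.inv(y))    # W(x^{-1}, y^{-1})
    rhs = np.linalg.inv(word(e, y, x))                   # W(y, x)^{-1} = (W*)^{-1}
    checks.append(e == e[::-1] and np.allclose(lhs, rhs))
```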
\medskip
The following construction is essentially due to Burde (see
\cite[p.116]{B}).
Under the action of $SU(2)$ by rotations on $S^2$, we may choose
fixed points of $\rho(x),\rho(W),\rho(y),\rho(W^*)$ as the vertices
$A,B,C,D$ respectively of a spherical rhombus, such that
$\rho(W)(A)=C$ and $\rho(W^*)(C)=A$. (There are degenerate
cases: possibly $A=C$ if $\rho(G)$ is abelian; possibly $B=D$
if $\rho(W)=\rho(W^*)$ with $\rho(W)^2=-I$.) It follows that the
angle of rotation of $W^*W$ is $2\theta$ modulo $2\pi$, where $\theta$
is the angle $\widehat{DAB}$ of the rhombus.
\begin{center}
\epsfxsize=8cm \epsfysize=6cm \epsfbox{rhombus}
\end{center}
Conjugacy in $SU(2)$ allows us freedom to place this rhombus where we
wish. Let us choose to place it with $A=(1,0,0)$, and
$C=(\cos\psi,\sin\psi,0)$ with $0<\psi<\pi$.
\iffalse
In matrix terms, this
means that $\rho(x)$ is diagonal, and its top left entry
has positive imaginary part (in line with our convention for $\mu=x$),
while the top right entry of $\rho(y)$ is real and positive.
(In particular, this ensures that the image of $\rho$ is nonabelian.)
\fi
If we have a path $\rho_t$ ($0\le t\le 1$) of representations,
then this gives rise to a path $A_tB_tC_tD_t$ of rhombi, and a path
$\theta_t\in\mathbb{R}/(2\pi\mathbb{Z})$ of corresponding angles. Parameters $t$
with $\theta_t\in 2\pi\mathbb{Z}$ correspond to degenerate rhombi with $B_t=D_t$,
and hence to representations $\rho_t$ with $\rho_t(W)=\rho_t(W^*)$.
\medskip
Among all $SU(2)$ representations of $G$, a special r\^{o}le is played
by those whose image in $SO(3)$ is dihedral, in other words where
$\rho(x)^2=\rho(y)^2=-I$. In this case, the points $B,D$
of our rhombus coincide with the north and south poles
$N,S=(0,0,\pm 1)$. Burde \cite[pp. 116-117]{B}
explains that, if $\Gamma$ is the group of a two-bridge knot which is not
a torus knot, then there is a path $\rho_t$ of irreducible representations
joining two dihedral representations $\rho_0,\rho_1$, such that $B,D$
switch poles on travelling from $t=0$ to $t=1$. In other words, the
change of angle $\theta_1-\theta_0$ on traversing this path is an odd
multiple of $2\pi$ (in particular nonzero). Replacing the path
$\rho_t$ by a smooth approximation if necessary, we may assume that
$\theta_t$ is differentiable as a function of $t$, and express this
as
$$\int_0^1 \frac{\partial\theta_t}{\partial t} dt \ne 0.$$
\medskip
Now consider another path of representations $\bar{\rho}_t$, defined
by $\bar{\rho}_t(x)=-\rho_t(x^{-1})$, $\bar{\rho}_t(y)=-\rho_t(y^{-1})$.
The equation $W(x^{-1},y^{-1})=W^{*-1}$ enables us to verify that
$\bar{\rho}_t$ is indeed a representation for each $t$. Moreover,
since $\rho_t(x)^2=\rho_t(y)^2=-I$ for $t=0,1$, it follows that
$\bar{\rho}_t=\rho_t$ for $t=0,1$. Finally, since $\bar{\rho}_t(W^*W)=\rho_t(W^*W)^{-1}$, the change in $\theta$ along
the path $\bar{\rho}$ is the negative of the change along the path
$\rho_t$:
$$\frac{\partial\bar{\theta_t}}{\partial t}=-\frac{\partial\theta_t}{\partial t}.$$
If $\gamma$ is the closed curve formed by concatenating
the paths $\rho_t$ and $\bar{\rho}_{1-t}$, the change in $\theta$ around
$\gamma$ is precisely twice that along $\rho_t$, namely an odd multiple of $4\pi$:
$$\int_\gamma \frac{\partial\theta_t}{\partial t} dt =
\int_0^1 \frac{\partial\theta_t}{\partial t} dt + \int_1^0 \frac{\partial\bar{\theta_t}}{\partial t} dt = 2 \int_0^1 \frac{\partial\theta_t}{\partial t} dt \ne 0.$$
In particular $\nu([\gamma],W^*W)\ne 0$.
\end{proof}
\begin{cor}
Let $X$ be the exterior of a two-bridge knot in $S^3$ which is not a torus knot, and let
$X(\alpha)$ be the manifold formed from $X$ by Dehn filling along
a non-meridian slope $\alpha$ in $\partial X$. Then $\pi_1(X(\alpha))$
admits an irreducible representation to $SU(2)$.
\end{cor}
\begin{proof}
By Theorem \ref{twobridgethm}, there is a closed curve $\gamma$ of irreducible
representations $\pi_1(X)\to SU(2)$ such that the pairing $\nu$ on
$\pi_1(\gamma)\times\pi_1(\partial X)$ is not identically zero.
Then $\nu([\gamma],-):\pi_1(\partial X)\to\mathbb{Z}$ has kernel $\mu\mathbb{Z}$. Since
$\alpha\notin\mu\mathbb{Z}$, $\nu([\gamma],\alpha)\ne 0$. In other words,
the closed curve $t\mapsto\gamma_t(\alpha)\in S^1$ has non-zero winding
number, and so is surjective. There exists a point $\rho\in\gamma$ such that
$\rho(\alpha)=1$ in $SU(2)$. Since $\pi_1(X(\alpha))$ is the quotient
of $\pi_1(X)$ by the normal closure of $\alpha$, $\rho$ induces
a representation $\pi_1(X(\alpha))\to SU(2)$ with nonabelian
image.
\end{proof}
\section{Torus knots}\label{tk}
In this section we demonstrate that the pairing $\nu$ is not identically zero
on suitable curves in the $SU(2)$-representation
variety of a torus knot. We then apply this to
the fundamental group of any manifold obtained
by nontrivial Dehn surgery on a torus knot, and study
its representations to $SU(2)$.
The $(p,q)$-torus knot has fundamental group $\Gamma=\<x,y|x^p=y^q\>$.
In particular, it has nontrivial centre, generated by $\zeta=x^p=y^q$.
If $\{\mu,\lambda\}$ is any meridian-longitude pair, then $\zeta$ belongs
to the peripheral subgroup generated by $\{\mu,\lambda\}$, since it commutes
with $\mu$.
The character variety $\mathcal{X}$
of $Hom(\Gamma,SU(2))$ splits into a number of
arcs as follows. As for all knots, the subvariety
$\mathcal{X}_{red}$ corresponding to reducible representations
is isomorphic to the closed interval $[-2,2]$, parametrised by the
trace of $\rho(\mu)$.
If $\rho:\Gamma\to SU(2)$ is an irreducible representation, then
$\rho(x),\rho(y)$ are non-commuting matrices with $\rho(x)^p=\rho(y)^q$.
This can arise only if $\rho(x)^p=\rho(y)^q=\pm I$, where $I$
is the identity matrix. Hence $\rho(x)$ has trace $2\cos(a\pi/p)$ and
$\rho(y)$ has trace $2\cos(b\pi/q)$ for some integers $a,b$ of the
same parity. There are $(p-1)(q-1)/2$ open arcs $A_{(a,b)}$
in the irreducible character variety,
one corresponding to each pair $(a,b)$ of integers with $1\le a\le p-1$,
$1\le b\le q-1$, $a\equiv b$ modulo $2$. Each open arc $A_{(a,b)}$
is the interior of a closed arc $\overline{A}_{(a,b)}$
in the whole character variety, whose endpoints
are reducible characters.
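Both the constraint $\rho(x)^p=\rho(y)^q=\pm I$ and the arc count are easy to verify in small cases. The sketch below (matrix choices and function names are ours) writes down one irreducible representation for the trefoil ($p=2$, $q=3$, on the arc $A_{(1,1)}$) and enumerates the pairs $(a,b)$.

```python
import numpy as np

# rho(x): trace 2*cos(pi/2) = 0, so x^2 = -I
X = np.array([[0, 1j], [1j, 0]])                 # i * sigma_x, lies in SU(2)
# rho(y): trace 2*cos(pi/3) = 1, so y^3 = -I
Y = np.diag([np.exp(1j * np.pi / 3), np.exp(-1j * np.pi / 3)])

def arcs(p, q):
    """Pairs (a, b) with 1 <= a <= p-1, 1 <= b <= q-1, a = b mod 2."""
    return [(a, b) for a in range(1, p) for b in range(1, q)
            if (a - b) % 2 == 0]

n_arcs = len(arcs(3, 5))                         # (p-1)(q-1)/2 = 4
```

Since $X$ and $Y$ do not commute, the representation they define is irreducible.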
\begin{lemma}\label{endpoints}
The endpoints of $\overline{A}_{(a,b)}$ are the points
$$2\cos(c\pi/pq),2\cos(d\pi/pq)\in [-2,2]\cong\mathcal{X}_{red},$$
where $c,d\in\{1,\dots,pq-1\}$
are the unique solutions to the congruences
$$c,d\equiv\pm a~~\mathrm{modulo}~2p;~~~c,d\equiv\pm b~~\mathrm{modulo}~2q.$$
\end{lemma}
\begin{proof}
On $A_{(a,b)}$, the trace of $\rho(x)$ is constant at $2\cos(a\pi/p)$,
so the same will hold at each endpoint of $A_{(a,b)}$, which corresponds
to a reducible representation. But $x\equiv\mu^{\pm q}$ modulo the
commutator subgroup, so for any reducible representation $\rho$ we
have $\rho(x)=\rho(\mu)^{\pm q}$. If $z$ is a complex $q$-th root of $\cos(a\pi/p)\pm i\sin(a\pi/p)$, then $z=\cos(c\pi/pq)+i\sin(c\pi/pq)$
where $c\equiv\pm a~\mathrm{mod}~2p$. Hence, for a reducible representation
$\rho$ at an endpoint of $A_{(a,b)}$, the trace of $\rho(\mu)$
must be $2\cos(c\pi/pq)$ with $c\equiv\pm a~\mathrm{mod}~2p$.
A similar analysis using $\rho(y)=\rho(\mu)^p$ gives the congruence
$c\equiv\pm b~\mathrm{mod}~2q$.
Finally, note that, since $a\equiv b~\mathrm{mod}~2$ and since $p,q$ are
coprime, each of the four pairs of simultaneous congruences
$$c\equiv\pm a~\mathrm{mod}~2p;\qquad c\equiv\pm b~\mathrm{mod}~2q$$
has a unique solution modulo $2pq$. Moreover, if $c$ is the
solution of one of these pairs of congruences, then $2pq-c$ is the
solution of another, so precisely two of the four solutions lie in the
indicated range $\{1,\dots,pq-1\}$.
\end{proof}
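The endpoint values $c,d$ can be found by brute force over $\{1,\dots,pq-1\}$; the sketch below (function names are ours) does so and also reports $|d-c|/2$, the pairing value that appears in the proposition which follows. For the trefoil, arc $A_{(1,1)}$, the endpoints are $c=1$, $d=5$.

```python
from math import gcd

def endpoints(p, q, a, b):
    """The two c in {1, ..., pq-1} with c = +-a mod 2p and c = +-b mod 2q."""
    assert gcd(p, q) == 1 and (a - b) % 2 == 0
    return [c for c in range(1, p * q)
            if c % (2 * p) in (a % (2 * p), -a % (2 * p))
            and c % (2 * q) in (b % (2 * q), -b % (2 * q))]

c, d = endpoints(2, 3, 1, 1)     # trefoil, arc A_(1,1)
nu = abs(d - c) // 2             # pairing value nu([gamma], zeta), up to sign
```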
\begin{prop}
Let $\gamma$ be the closed curve in $\mathcal{X}$
formed by the arc $\overline{A}_{(a,b)}$ together with
the subinterval $[2\cos(c\pi/pq),2\cos(d\pi/pq)]$
of $[-2,2]\cong\mathcal{X}_{red}$.
Then $\nu([\gamma],\zeta)\ne 0$.
\end{prop}
\begin{proof}
The knot is embedded in an unknotted torus $T\subset S^3$. Each
component of $S^3\smallsetminus T$ is an open solid torus. Moreover, $x,y$
are represented by the cores of these solid tori, and $\zeta=x^p=y^q$
represents a curve on $T$ parallel to the knot. In particular,
$\zeta\in A$, i.e.\ $\zeta$ is a peripheral curve. Now $\rho(\zeta)=\pm I$ for
any irreducible representation $\rho$, and so $\rho(\zeta)$
is constant for $\rho\in A_{(a,b)}$.
Let $z=\exp(i\pi/pq)$, a primitive $(2pq)$-th root of unity. Then the endpoints of $A_{(a,b)}$ correspond to the reducible representations
$\mu\mapsto z^c$ and $\mu\mapsto z^d$, where $c,d$ are given by Lemma
\ref{endpoints}.
Now, as $\rho$ moves continuously through reducible representations
from $\mu\mapsto z^c$ to $\mu\mapsto z^d$, the argument of
$\rho(\mu)$ changes by $(d-c)\pi/pq$, so the argument of
$\rho(\zeta)=\rho(\mu)^{pq}$ changes by $(d-c)\pi$, whence
$\nu([\gamma],\zeta)=(d-c)/2\ne 0$.
\end{proof}
\begin{cor}
Let $X$ be the exterior of a torus knot in $S^3$, and $X(\alpha)$
the manifold obtained from $X$ by Dehn filling along a non-meridian
slope $\alpha$. Then $\pi_1(X(\alpha))$ admits a nontrivial
representation to $SU(2)$.
\end{cor}
\begin{proof}
If $\gamma$ is the curve in the Theorem, then $\nu([\gamma],\zeta)\ne 0$,
and so the kernel of the homomorphism $A\to\mathbb{Z}$, $\beta\mapsto\nu([\gamma],\beta)$, is precisely $\mu\mathbb{Z}$. But by hypothesis
$\alpha\notin\mu\mathbb{Z}$, so $\nu([\gamma],\alpha)\ne 0$. Thus the closed
curve $t\mapsto\gamma_t(\alpha)$ has nonzero winding number on $S^1$, so
is surjective. There is a representation $\rho\in\gamma$ such that $\rho(\alpha)=1$ in $SU(2)$. This choice of $\rho$ induces a nontrivial
representation $\pi_1(X(\alpha))\to SU(2)$.
\end{proof}
Of course, the above corollary is neither new nor surprising. For example,
almost all the groups $\pi_1(X(\alpha))$ have nontrivial abelianisation, so
admit representations to $SU(2)$ that are reducible but nontrivial.
Of more interest is the question of which $\pi_1(X(\alpha))$ admit
irreducible representations to $SU(2)$. This question can also be readily
answered using the known classification of $3$-manifolds obtained by
Dehn surgery on torus knots \cite{M}. Here we present an alternative approach
using an adaptation of our winding-number technique.
\begin{thm}
Let $X$ be the exterior of the $(p,q)$ torus knot, where $1<p<q$,
and $X(\alpha)$
the manifold obtained from $X$ by Dehn filling along a non-meridian
slope $\alpha\in\mathbb{Q}$. Then
\begin{enumerate}
\item[\rm{(1)}] if $\alpha=pq$ and $p>2$, then $\pi_1(X(\alpha))$ admits
an irreducible representation to $SU(2)$;
\item[\rm{(2)}] if $\alpha=pq$ and $p=2$, then $\pi_1(X(\alpha))$ admits
no irreducible representation to $SU(2)$, but admits
a representation to $SO(3)$ with nonabelian image;
\item[\rm{(3)}] if $\alpha=pq\pm\frac1n$ for some positive integer
$n$, then every representation from $\pi_1(X(\alpha))$ to $SO(3)$
has abelian image;
\item[\rm{(4)}] for any other value of $\alpha$, $\pi_1(X(\alpha))$ admits
an irreducible representation to $SU(2)$.
\end{enumerate}
\end{thm}
\noindent{\bf Remark} The statement of this theorem fits the classification
of \cite{M}, where it is proved that $X(\alpha)$ is a lens
space in Case (3); a connected sum of two lens spaces in Cases
(1) and (2);
and a Seifert fibre space in Case (4).
\medskip
\begin{proof}
(1) Since $2<p<q$, one of the components of $\mathcal{X}_{irr}$ is the arc
$A_{(2,2)}$. But any point on $A_{(2,2)}$ corresponds to an irreducible representation
$\rho$ with $\rho(x^p)=\rho(y^q)=I$. Since the curve of slope $pq$ is
$x^p=y^q$ in $\pi_1(X)$, it follows that $\rho(\alpha)=I$, so $\rho$ induces
an irreducible representation of $\pi_1(X(\alpha))$ to $SU(2)$.
\medskip
(2) In this case $\pi_1(X(\alpha))\cong\mathbb{Z}_2\ast\mathbb{Z}_q$. Since the only
element of order $2$ in $SU(2)$ is the central element $-I$, the image of
any representation $\mathbb{Z}_2\ast\mathbb{Z}_q\to SU(2)$ is abelian. However, corresponding
to any point on $A_{(1,1)}$ is a representation $\rho$ with
$\rho(x^2)=\rho(y^q)=-I$, so composing this with the quotient map
$SU(2)\to SO(3)$ gives a representation of $\pi_1(X(\alpha))$ to $SO(3)$
with nonabelian image.
\medskip
(3) Let $\zeta$ be the curve $x^p=y^q$ of slope $pq$. Then $\zeta=\mu^{pq}\lambda$, so $\alpha=\mu^{npq\pm 1}\lambda^n=\mu^{\pm 1}\zeta^n$
in $\pi_1(\partial X)$. Now any representation from $\pi_1(X(\alpha))$
to $SO(3)$ with nonabelian image arises from a representation of
$\pi_1(X)$ with nonabelian image, which therefore lifts to an irreducible
representation $\rho:\pi_1(X)\to SU(2)$, such that $\rho(\alpha)=\pm I$.
But $\rho$ corresponds to a point on one of the open arcs $A_{(a,b)}$,
so $\rho(\zeta)=(-I)^a$ and hence $\rho(\mu)=(\rho(\alpha)\rho(\zeta)^{-n})^{\pm 1}=\pm I$, contradicting the assumption that $\rho$ is irreducible.
\medskip
(4) As in the previous case, let $\zeta=\mu^{pq}\lambda$ denote the curve
with slope $pq$. Then $\pi_1(\partial X)$ is generated by $\zeta$ and $\mu$,
so we can write $\alpha=\mu^g\zeta^h$. If $|g|\le 1$ then we are in one
of the previous cases, so we have $|g|\ge 2$.
Suppose first that $pq$ is even. Then the endpoints of $A_{(1,1)}$ are reducible
representations $\rho$ in which the trace of $\rho(\mu)$ is $\pm 2\cos(\pi/pq)$.
Choose $\theta\in [\pi/pq,(pq-1)\pi/pq]$ such that $\theta$ is an odd
multiple of $\pi/|g|$. Then by continuity of trace, we can choose $\rho\in
A_{(1,1)}$ such that the trace of $\rho(\mu)$ is $2\cos(\theta)$. Provided
$h$ is odd, this gives $\rho(\mu)^g=-I=\rho(\zeta)^{-h}$, so $\rho(\alpha)=I$.
If $h$ is even then $|g|$ is odd, since $\alpha$ is a slope.
In particular $|g|>2$. In this case, we take $\theta$ to be an even multiple
of $\pi/|g|$, and the argument goes through as before.
Now consider the case where $pq$ is odd. Precisely one of the two
positive integers $(q\pm p)/2$ is odd. Call it $c$, and note that $c\in\{1,\dots,q-1\}$. Let $a$ be the unique odd integer with
$1\le a\le p-1$ and $a\equiv\pm c\pmod p$.
Then the endpoints of $A_{(a,c)}$ are reducible representations $\rho$ where
the trace of $\rho(\mu)$ is $2\cos(c\pi/pq)$ and $2\cos((pq-q+c)\pi/pq)$
respectively. Now the interval $[c\pi/pq,(pq-q+c)\pi/pq]$ contains at least
one odd multiple of $\pi/|g|$, and (if $|g|>2$) at least one even
multiple of $\pi/|g|$. Arguing as before, we can choose $\rho\in A_{(a,c)}$
such that $\rho(\mu)^g=\rho(\zeta)^{-h}$, and so $\rho(\alpha)=I$, except
possibly if $|g|=2$ and $h$ is even (which does not arise, since
$\alpha$ is a slope).
\end{proof}
\section{Introduction}
\label{sec:introduction}
Often time series data experiences multiple abrupt changes in structure which need to be taken into account if the data is to be modelled effectively. These changes, known as changepoints (or equivalently breakpoints), cause the data to be split into segments which can then be modelled separately. Detecting changepoints, both accurately and efficiently, is required in a number of applications including financial data \citep{Fryzlewicz2012}, climate data \citep{Killick2012a, Reeves2007}, EEG data \citep{Lavielle2005} and the analysis of speech signals \citep{Davis2006}.
In Section \ref{sec:emp-eval-fpop} of this paper we look at detecting changes in DNA copy number in tumour microarray data.
Regions in which this copy number is amplified or reduced from a baseline level can relate to tumorous cells and detection of these areas is crucial for classifying tumour progression and type.
Changepoint methods are widely used in this area \citep{Zhang2007, Olshen2004, Picard2011} and moreover tumour microarray data has been used as a basis for benchmarking changepoint techniques both in terms of accuracy \citep{Hocking2013} and speed \citep{Hocking2014}.
Many approaches to estimating the number and position of changepoints \cite[e.g.][]{Braun/Braun/Muller:2000,Davis2006,Zhang2007} can be formulated in terms of defining a cost function for a segmentation. They then either minimise a penalised
version of this cost, which we call the penalised minimisation problem; or minimise the cost under a constraint on the number of changepoints, which we call the
constrained minimisation problem. If the cost function depends on the data through a sum of segment-specific costs then the minimisation can be done exactly using dynamic programming \cite[]{Auger1989,Jackson2005}. However
these dynamic programming methods have a cost that increases at least quadratically with the amount of data. This large computational cost is an increasingly important issue as ever larger data sets
need to be analysed.
Alternatively, much faster algorithms exist that provide approximate solutions to the minimisation problem.
The most widely used of these approximate techniques is Binary Segmentation \citep{Scott1974}.
This takes a recursive approach, adding changepoints one at a time, with each new changepoint added in the position that would lead to the largest reduction in cost given the locations of previous changepoints.
Due to its simplicity, Binary Segmentation is computationally efficient, being roughly linear in the amount of data. However, it only provides an approximate solution and can lead to poor estimation
of the number and position of changepoints \cite[]{Killick2012a}.
Variations of Binary Segmentation, such as Circular Binary Segmentation \citep{Olshen2004} and Wild Binary Segmentation \citep{Fryzlewicz2012}, can offer more accurate solutions at the price of a slight decrease in computational efficiency.
A better solution, if possible, is to look at ways of speeding up the dynamic programming algorithms. Recent work has shown this is possible via the pruning of the solution space.
\citet{Killick2012a} present a technique for doing this which we shall refer to as {\em inequality based pruning}. This forms the basis of their method PELT which can be used to solve the penalised minimisation problem.
\citet{Rigaill2010} develop a different pruning technique, {\em functional pruning}, and this is used in their pDPA method which can be used to solve the constrained minimisation problem.
Both PELT and pDPA are optimal algorithms, in the sense that they
find the true optimum of the minimisation problem they are trying to solve.
However the pruning approaches they take are very different, and work well in different scenarios. PELT is most efficient
in applications where the number of changepoints is large, and pDPA when there are few changepoints.
The focus of this paper is on these pruning techniques, with the aim of trying to combine ideas from PELT and pDPA. This leads to two new algorithms, FPOP and SNIP. The former uses functional pruning to solve the
penalised minimisation problem, and the latter uses inequality based pruning to solve the constrained minimisation problem. We further show that FPOP always prunes more than PELT. Empirical results suggest that FPOP is efficient for
large data sets regardless of the number of changepoints, and we observe that FPOP has a computational cost that is even competitive with Binary Segmentation.
The structure of the paper is as follows. We introduce the constrained and penalised optimisation problems for segmenting data in the next section. We then review the existing dynamic programming methods and pruning approaches
for solving the penalised optimisation problem in Section \ref{sec:pen} and for solving the constrained optimisation problem in Section \ref{sec:const}. The new algorithms, FPOP and SNIP, are developed in
Section \ref{sec:incr-effic}, and compared empirically and theoretically with existing pruning methods in Section \ref{sec:simil-diff-betw}. We then evaluate FPOP empirically on both simulated and CNV data in Section
\ref{sec:emp-eval-fpop}. The paper ends with a discussion.
\section{Model Definition}
\label{sec:defining-change-mean}
Assume we have data ordered by time, though the same ideas extend trivially to data ordered by any other attribute such as position along a chromosome.
Denote the data by $\mathbf{y}=(y_1,\hdots,y_n)$. We will use the notation that, for $s\geq t$, the set of observations from time $t$ to time $s$ is $\mathbf{y}_{t:s}=(y_{t},...,y_s)$.
If we assume that there are $k$ changepoints in the data, this will correspond to the data being split into $k+1$ distinct segments.
We let the location of the $j$th changepoint be $\tau_j$ for $j=1,\hdots,k$, and set $\tau_0=0$ and $\tau_{k+1}=n$. The $j$th segment will consist of data points $y_{\tau_{j-1}+1},\ldots,y_{\tau_j}$. We let
$\mathbf{\tau}=(\tau_0,\ldots,\tau_{k+1})$ be the set of changepoints.
The statistical problem we are considering is how to infer both the number of changepoints and their locations. The specific details of any approach will depend on the type of change, such as change in mean, variance or distribution, that we wish to detect. However a general framework that encompasses many changepoint detection methods is to introduce a cost function for each segment. The cost of a segmentation can then be defined in terms of the sum of the costs across the segments, and we can infer segmentations through minimising the segmentation cost.
Throughout we will let $\mathcal{C}(\mathbf{y}_{t+1:s})$, for $s>t$, denote the cost for a segment consisting of data points $y_{t+1},\ldots,y_s$. The cost of a segmentation, $\tau_1,\ldots,\tau_k$, is then
\begin{align}
\sum^k_{j=0}\mathcal{C}(\mathbf{y}_{\tau_j+1:\tau_{j+1}}) .\label{eq:1a}
\end{align}
The form of this cost function will depend on
the type of change we are wanting to detect. One generic approach to defining these segments is to introduce a model for the data within a segment, and then to let the cost be minus the maximum log-likelihood for the data in that segment. If our model is that the data is independent and identically distributed with segment-specific parameter $\mu$ then
\begin{align} \label{eq:nll}
\mathcal{C}(\mathbf{y}_{t+1:s})=\min_{\mu}\sum_{i=t+1}^{s}-\log (p(y_i|\mu)).
\end{align}
In this formulation we are detecting changes in the value of the parameter, $\mu$, across segments.
For example if $\mu$ is the mean in Normally distributed data, with known variance $\sigma^2$, then the cost for a segment would simply be
\begin{align} \label{eq:Cost}
\mathcal{C}(\mathbf{y}_{t+1:s})=
\frac{1}{2\sigma^2}\min_{\mu}\sum_{i=t+1}^s\left(y_i-\mu\right)^2=\frac{1}{2\sigma^2}\sum_{i=t+1}^s\left(y_i-\frac{1}{s-t}\sum_{j=t+1}^s y_j\right)^2,
\end{align}
which is just a quadratic error loss. We have removed a term that does not depend on the data and is linear in segment length, as this term does not affect the optimal segmentation.
Note we will get the same optimal segmentation for any choice of $\sigma>0$ using this cost function.
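In practice this segment cost is evaluated many times, and for the quadratic loss it can be computed in constant time per segment from cumulative sums of $y_i$ and of $y_i^2$. The following sketch illustrates this (the class and method names are our own, with $\sigma=1$ by default):

```python
from itertools import accumulate

class GaussianCost:
    """Constant-time evaluation of the quadratic segment cost using
    cumulative sums of y_i and y_i^2.

    cost(t, s) returns C(y_{t+1:s}): the residual sum of squares of the
    segment about its own mean, divided by 2 sigma^2.  With 0-based
    Python indexing the segment is y[t], ..., y[s-1].
    """

    def __init__(self, y, sigma=1.0):
        self.S1 = [0.0] + list(accumulate(y))                # running sum of y_i
        self.S2 = [0.0] + list(accumulate(v * v for v in y)) # running sum of y_i^2
        self.scale = 2.0 * sigma ** 2

    def cost(self, t, s):
        d1 = self.S1[s] - self.S1[t]
        d2 = self.S2[s] - self.S2[t]
        # RSS about the mean equals (sum of squares) - (sum)^2 / length.
        return (d2 - d1 * d1 / (s - t)) / self.scale
```

After an $\mathcal{O}(n)$ set-up cost, each call to \texttt{cost} is $\mathcal{O}(1)$, which is what makes the quadratic-time dynamic programming recursions below practical.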
\subsection{Finding the Optimal Segmentation}
\label{sec:find-optim-segm}
If we knew the number of changepoints in the data, $k$, then we could infer their location through minimising (\ref{eq:1a}) over all segmentations with $k$ changepoints. Normally however $k$ is unknown, and also has to be estimated. A common approach is to define
\begin{align}
C_{k,n}=\min_{\boldsymbol{\tau}}\left[\sum^k_{j=0}\mathcal{C}(\mathbf{y}_{\tau_j+1:\tau_{j+1}})\right],\label{eq:1}
\end{align}
the minimum cost of segmenting data $y_{1:n}$ with $k$ changepoints. As $k$ increases we have more flexibility in our model for the data, so often $C_{k,n}$ will be monotonically decreasing in $k$ and estimating the number of changepoints by minimising $C_{k,n}$ is not possible. One solution is to solve (\ref{eq:1}) for a fixed value of $k$ which is either assumed to be known or chosen separately. We call this problem the {\em constrained case}.
If $k$ is not known, then a common approach is to calculate $C_{k,n}$ and the corresponding optimal segmentations for a range of values, $k=0,1,\ldots,K$, where $K$ is some chosen maximum number. We can then
estimate the number of changepoints by minimising $C_{k,n}+f(k,n)$ over $k$ for some suitable penalty function $f(k,n)$.
The most common choices of $f(k,n)$, for example SIC \citep{Schwarz1978} and AIC \citep{Akaike1974}, are linear in $k$.
If the penalty function is linear in $k$, with $f(k,n)=\beta k$ for some $\beta>0$ (which may depend on $n$), then we can directly find the optimal number of changepoints and segmentation by noting that
\begin{eqnarray}
\min_k \left[C_{k,n}+\beta k\right] &=& \min_{k,\boldsymbol{\tau}}\left[\sum^k_{j=0}\mathcal{C}(\mathbf{y}_{\tau_j+1:\tau_{j+1}})+\beta k\right] \nonumber \\
&=& \min_{k,\boldsymbol{\tau}}\left[\sum^k_{j=0}\mathcal{C}(\mathbf{y}_{\tau_j+1:\tau_{j+1}})+\beta \right] -\beta.\label{eq:2}
\end{eqnarray}
We call the minimisation problem in (\ref{eq:2}) the {\em penalised case}.
In both the constrained and penalised cases we need to solve a minimisation problem to find the optimal segmentation under our criteria. There are efficient dynamic programming algorithms for solving each of these minimisation problems.
For the constrained case this is achieved using the Segment Neighbourhood Search algorithm (see Section~\ref{sec:seg-neigh-search}), whilst for the penalised case
this can be achieved using the Optimal Partitioning algorithm (see Section~\ref{sec:optim-part}).
Solving the constrained case offers a way to get optimal segmentations for $k=0,1,\hdots,K$ changepoints, and thus gives insight into how the segmentation varies with the number of segments.
However, a big advantage of the penalised case is that it incorporates model selection into the problem itself, and therefore is often computationally more efficient when dealing with an unknown value of $k$.
\subsection{Conditions for Pruning}
The focus of this paper is on methods for speeding up these dynamic programming algorithms using pruning methods. The pruning methods can be applied under one of two conditions on the segment costs:
\begin{description}
\item[C1] The cost function satisfies
\begin{align*}
\mathcal{C}(\mathbf{y}_{t+1:s})=\min_{\mu}\sum_{i=t+1}^{s}\gamma(y_i,\mu),
\end{align*}
for some function $\gamma(\cdot,\cdot)$, with parameter $\mu$.
\item[C2] There exists a constant $\kappa$ such that for all $t<s<T$,
\begin{align*}
\mathcal{C}(\mathbf{y}_{t+1:s})+\mathcal{C}(\mathbf{y}_{s+1:T})+\kappa \leq\mathcal{C}(\mathbf{y}_{t+1:T}).
\end{align*}
\end{description}
Condition C1 will be used by functional pruning (which is discussed in Sections \ref{sec:segm-neighb-search} and \ref{sec:opr}).
Condition C2 will be used by the inequality based pruning (Section~\ref{sec:optim-part-pelt} and~\ref{sec:snp}). Note that C1 is a stronger
condition than C2. If C1 holds then C2 also holds with $\kappa=0$.
For many practical cost functions these conditions hold; for example it is easily seen that for the negative log-likelihood (\ref{eq:nll}) C1 holds with $\gamma(y_t,\mu)=-\log(p(y_t|\mu))$ and C2 holds with $\kappa=0$.
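As an illustration, condition C2 with $\kappa=0$ can be checked numerically for the quadratic cost of Section~\ref{sec:defining-change-mean}: splitting a segment in two never increases the total cost. A small sketch (the function names are our own, with $\sigma=1$):

```python
def segment_cost(y, t, s):
    """Quadratic segment cost: half the residual sum of squares of
    y[t], ..., y[s-1] about its mean (condition C1 holds with
    gamma(y, mu) = (y - mu)^2 / 2)."""
    seg = y[t:s]
    mean = sum(seg) / len(seg)
    return sum((x - mean) ** 2 for x in seg) / 2.0

def satisfies_C2(y, kappa=0.0, tol=1e-9):
    """Check C(y_{t+1:s}) + C(y_{s+1:T}) + kappa <= C(y_{t+1:T})
    for all t < s < T, i.e. condition C2 of the text."""
    n = len(y)
    return all(
        segment_cost(y, t, s) + segment_cost(y, s, T) + kappa
        <= segment_cost(y, t, T) + tol
        for t in range(n - 1)
        for s in range(t + 1, n)
        for T in range(s + 1, n + 1)
    )
```

The check passes with $\kappa=0$ for any data, reflecting the fact that the residual sum of squares about segment means can only decrease when a segment is split.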
\section{Solving the Penalised Optimisation Problem}\label{sec:pen}
We first consider solving the penalised optimisation problem \eqref{eq:2} using a dynamic programming approach. The initial algorithm, Optimal Partitioning \citep{Jackson2005}, will be discussed first before mentioning how pruning can be used to reduce the computational cost.
\subsection{Optimal Partitioning}
\label{sec:optim-part}
Consider segmenting the data $\mathbf{y}_{1:t}$. Let $F(t)$ denote the minimum value of the penalised cost (\ref{eq:2}) for segmenting such data, with $F(0)=-\beta$.
The idea of Optimal Partitioning is to split the minimisation over segmentations into the minimisation over the position of the last changepoint, and then the minimisation over the earlier changepoints.
We can then use the fact that the minimisation over the earlier changepoints will give us the value $F(\tau^*)$ for some $\tau^*<t$:
\begin{align*}
F(t)&=\min_{\boldsymbol{\tau},k}\sum_{j=0}^k\left[\mathcal{C}(\mathbf{y}_{\tau_j+1:\tau_{j+1}})+\beta\right]-\beta,\\
&=\min_{\boldsymbol{\tau},k}\left\{\sum_{j=0}^{k-1}\left[\mathcal{C}(\mathbf{y}_{\tau_j+1:\tau_{j+1}})+\beta\right]+\mathcal{C}(\mathbf{y}_{\tau_k+1:t})+\beta\right\}-\beta,\\
&=\min_{\tau^*}\left\{\min_{\boldsymbol{\tau},k'}\sum_{j=0}^{k'}\left[\mathcal{C}(\mathbf{y}_{\tau_j+1:\tau_{j+1}})+\beta\right]-\beta+\mathcal{C}(\mathbf{y}_{\tau^*+1:t})+\beta\right\},\\
&=\min_{\tau^*}\left\{F(\tau^*)+\mathcal{C}(\mathbf{y}_{\tau^*+1:t})+\beta\right\}.
\end{align*}
Hence we obtain a simple recursion for the $F(t)$ values
\begin{align}
F(t)=\min_{0\leq\tau<t}\left[F(\tau) + \mathcal{C}(\mathbf{y}_{\tau+1:t}) + \beta\right]. \label{eq:4}
\end{align}
The segmentations themselves can be recovered by first taking the arguments which minimise (\ref{eq:4})
\begin{align}
\label{eq:6}
\tau^*_t=\operatorname*{arg\,min}_{0\leq\tau<t}\left[F(\tau) + \mathcal{C}(\mathbf{y}_{\tau+1:t}) + \beta\right],
\end{align}
which give the optimal location of the last changepoint in the segmentation of $y_{1:t}$.
If we denote the vector of ordered changepoints in the optimal segmentation of $y_{1:t}$ by $cp(t)$, with $cp(0)=\emptyset$, then the optimal changepoints up to a time $t$ can be calculated recursively
\begin{align*}
cp(t)=(cp(\tau^*_t),\tau^*_t).
\end{align*}
As equation~\eqref{eq:4} is calculated for time steps $t=1,2,\hdots,n$, and each time step involves a minimisation over $\tau=0,1,\hdots,t-1$,
the computation takes $\mathcal{O}(n^2)$ time.
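The recursion (\ref{eq:4}), together with the backtracking step, translates directly into code. The sketch below (our own naming) takes any segment cost function \texttt{cost(tau, t)} returning $\mathcal{C}(\mathbf{y}_{\tau+1:t})$:

```python
def optimal_partitioning(y, beta, cost):
    """Optimal Partitioning: compute F(t) for t = 1, ..., n via the
    recursion F(t) = min_tau [F(tau) + C(y_{tau+1:t}) + beta], then
    backtrack the optimal changepoints via cp(t) = (cp(tau*_t), tau*_t)."""
    n = len(y)
    F = [0.0] * (n + 1)
    F[0] = -beta
    last_cp = [0] * (n + 1)
    for t in range(1, n + 1):
        # Minimise over candidate last changepoints tau = 0, ..., t-1.
        F[t], last_cp[t] = min(
            (F[tau] + cost(tau, t) + beta, tau) for tau in range(t)
        )
    # Backtrack the changepoint vector from last_cp.
    cps, t = [], n
    while last_cp[t] > 0:
        cps.append(last_cp[t])
        t = last_cp[t]
    return sorted(cps), F[n]
```

The double loop over $t$ and $\tau$ makes the $\mathcal{O}(n^2)$ cost explicit (assuming each call to \texttt{cost} is $\mathcal{O}(1)$, e.g. via cumulative sums).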
\subsection{PELT} \label{sec:optim-part-pelt}
One way to increase the efficiency of Optimal Partitioning is discussed in \citet{Killick2012a} where they introduce the PELT (Pruned Exact Linear Time) algorithm. PELT works by limiting the set of potential
previous changepoints (i.e. the set over which $\tau$ is chosen from in the minimisation in equation~\ref{eq:4}). They show that if Condition C2 holds for some $\kappa$, and if
\begin{align}
F(t)+\mathcal{C}(\mathbf{y}_{t+1:s})+\kappa > F(s),\label{eq:hold}
\end{align}
then at any future time $T>s$, $t$ can never be the optimal location of the most recent changepoint prior to $T$.
This means that at every time step $s$ the left hand side of equation~(\ref{eq:hold}) can be calculated for all potential values of the last changepoint.
If the inequality holds for any individual $t$ then that $t$ can be discounted as a potential last changepoint for all future times.
Thus the update rules (\ref{eq:4}) and (\ref{eq:6}) can be restricted to a reduced set of potential last changepoints, $\tau$, to consider.
This set, which we shall denote as $R_t$, can be updated simply by
\begin{align}
R_{t+1}=\{\tau\in \{R_{t}\cup\{t\}\}:F(\tau)+\mathcal{C}(\mathbf{y}_{\tau+1:t})+\kappa \leq F(t)\}.
\end{align}
This pruning technique, which we shall refer to as {\em inequality based pruning}, forms the basis of the PELT method.
As at each time step in the PELT algorithm the minimisation is being run over fewer values it would be expected that this method would be more efficient than the basic Optimal Partitioning algorithm.
In \citet{Killick2012a} it is shown to be at least as efficient as Optimal Partitioning, with PELT's computational cost being bounded above by $\mathcal{O}(n^2)$. Under certain conditions the
expected computational cost can be shown to be bounded by $Ln$ for some constant $L<\infty$. These conditions are given fully in \citet{Killick2012a}, the most important of which is that the expected number of changepoints in the data increases linearly with the length of the data, $n$.
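PELT differs from the Optimal Partitioning sketch only in carrying forward the candidate set $R_t$ and pruning it with the inequality above. A sketch (again with our own naming, for any segment cost satisfying C2 with the given $\kappa$):

```python
def pelt(y, beta, cost, kappa=0.0):
    """PELT: the Optimal Partitioning recursion restricted to a pruned
    candidate set R.  A candidate tau is dropped once
    F(tau) + C(y_{tau+1:t}) + kappa > F(t)."""
    n = len(y)
    F = [0.0] * (n + 1)
    F[0] = -beta
    last_cp = [0] * (n + 1)
    R = [0]
    for t in range(1, n + 1):
        vals = {tau: F[tau] + cost(tau, t) for tau in R}
        last_cp[t] = min(R, key=lambda tau: vals[tau])
        F[t] = vals[last_cp[t]] + beta
        # Keep only candidates that can still be optimal later; add t.
        R = [tau for tau in R if vals[tau] + kappa <= F[t]] + [t]
    cps, t = [], n
    while last_cp[t] > 0:
        cps.append(last_cp[t])
        t = last_cp[t]
    return sorted(cps), F[n]
```

The returned segmentation is identical to that of Optimal Partitioning; only the number of candidates examined per time step changes, which is where the speed-up comes from when there are many changepoints.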
\section{Solving the Constrained Optimisation Problem} \label{sec:const}
We now consider solving the constrained optimisation problem (\ref{eq:1}) using dynamic programming. These methods assume a maximum number of changepoints that are to be considered, $K$, and then solve the constrained optimisation problem for all values of $k=1,2,\ldots,K$. We first describe the initial algorithm, Segment Neighbourhood Search \citep{Auger1989}, and then an approach that uses pruning.
\subsection{Segment Neighbourhood Search}
\label{sec:seg-neigh-search}
Take the constrained case \eqref{eq:1} which segments the data up to $t$, for $t\geq k+1$, into $k+1$ segments (using $k$ changepoints), and denote the minimum value of the cost by $C_{k,t}$.
The idea of Segment Neighbourhood Search is to derive a relationship between $C_{k,t}$ and $C_{k-1,s}$ for $s<t$:
\begin{align*}
C_{k,t}&=\min_{\boldsymbol{\tau}}\sum_{j=0}^k\mathcal{C}(\mathbf{y}_{\tau_j+1:\tau_{j+1}}),\\
&= \min_{{\tau_k}}\left[\min_{\boldsymbol{\tau}_{1:k-1}}\sum_{j=0}^{k-1}\mathcal{C}(\mathbf{y}_{\tau_j+1:\tau_{j+1}})+\mathcal{C}(\mathbf{y}_{\tau_k+1:\tau_{k+1}})\right],\\
&=\min_{{\tau_k}}\left[C_{k-1,\tau_k}+\mathcal{C}(\mathbf{y}_{\tau_k+1:\tau_{k+1}})\right].
\end{align*}
Thus the following recursion is obtained:
\begin{align}
C_{k,t}=\min_{\tau\in\{k,\hdots,t-1\}}\left[C_{k-1,\tau}+\mathcal{C}(\mathbf{y}_{\tau+1:t})\right].\label{eq:SNS}
\end{align}
If this is run for all values of $t$ up to $n$ and for $k=2,\hdots,K$, then the optimal segmentations with $1,\hdots,K$ segments can be acquired.
To extract the optimal segmentation we first let $\tau^*_l(t)$ denote the optimal position of the last changepoint if we segment data $\mathbf{y}_{1:t}$ using $l$ changepoints. This can be calculated as
\[
\tau^*_l(t)=\operatorname*{arg\,min}_{\tau\in\{l,\hdots,t-1\}}\left[C_{l-1,\tau}+\mathcal{C}(\mathbf{y}_{\tau+1:t})\right].
\]
Then if we let $(\tau_1^k,\ldots,\tau_k^k)$ be the set of changepoints in the optimal segmentation of $\mathbf{y}_{1:n}$ into $k+1$ segments, we have $\tau_k^k=\tau^*_k(n)$. Furthermore we can calculate the other
changepoint positions recursively for $l=k-1,\ldots,1$ using
\[
\tau_l^k=\tau^*_l(\tau_{l+1}^k).
\]
For a fixed value of $k$, equation~(\ref{eq:SNS}) is computed for $t=1,\hdots,n$. Then for each $t$ the minimisation is over $\tau=k,\hdots,t-1$.
This means that $\mathcal{O}(n^2)$ calculations are needed.
However, to also identify the optimal number of changepoints this then needs to be done for $k\in 1,\hdots,K$ so the total computational cost in time can be seen to be $\mathcal{O}(Kn^2)$.
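The recursion (\ref{eq:SNS}) and its backtracking step can be sketched as follows (our own naming; \texttt{C[k][t]} stores $C_{k,t}$ and \texttt{arg[l][t]} stores $\tau^*_l(t)$):

```python
def segment_neighbourhood(y, K, cost):
    """Segment Neighbourhood Search: fill in C[k][t], the minimal cost of
    segmenting y_{1:t} with k changepoints, via
    C[k][t] = min_{tau} ( C[k-1][tau] + C(y_{tau+1:t}) ),
    then backtrack the optimal changepoints for each k = 1, ..., K."""
    n = len(y)
    INF = float("inf")
    C = [[INF] * (n + 1) for _ in range(K + 1)]
    arg = [[0] * (n + 1) for _ in range(K + 1)]
    for t in range(1, n + 1):
        C[0][t] = cost(0, t)              # no changepoints: one segment
    for k in range(1, K + 1):
        for t in range(k + 1, n + 1):
            # Minimise over the last changepoint tau = k, ..., t-1.
            C[k][t], arg[k][t] = min(
                (C[k - 1][tau] + cost(tau, t), tau) for tau in range(k, t)
            )
    # Backtrack tau_l = tau*_l(tau_{l+1}) for each number of changepoints.
    changepoints = {}
    for k in range(1, K + 1):
        taus, t = [], n
        for l in range(k, 0, -1):
            t = arg[l][t]
            taus.append(t)
        changepoints[k] = sorted(taus)
    return C, changepoints
```

The triple loop makes the $\mathcal{O}(Kn^2)$ cost explicit; the row $C[k][\cdot]$ only ever consults the row $C[k-1][\cdot]$, so storage can be reduced to two rows if only $C_{K,n}$ is needed.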
\subsection{Pruned Segment Neighbourhood Search} \label{sec:segm-neighb-search}
\citet{Rigaill2010} has developed techniques to increase the efficiency of Segment Neighbourhood Search using functional pruning.
These form the basis of a method called the pruned Dynamic Programming Algorithm (pDPA). A more generic implementation of this method is presented in \citet{Cleynen2012}.
Here we describe how this algorithm can be used to calculate the $C_{k,t}$ values. Once these are calculated, the optimal segmentation can be extracted as in Segment Neighbourhood Search.
Assuming condition C1, the segment cost function can be split into the component parts $\gamma(y_i,\mu)$, which depend on the parameter $\mu$.
We can then define new cost functions, $Cost^{\tau}_{k,t}(\mu)$, as the minimal cost of segmenting data $y_{1:t}$ into $k$ segments, with a most
recent changepoint at $\tau$, and where the segment after $\tau$ is conditioned to have parameter $\mu$. Thus for $\tau\leq t-1$,
\begin{align}
Cost^\tau_{k,t}(\mu)=C_{k-1,\tau}+\sum_{i=\tau+1}^t\gamma(y_i,\mu),\label{eq:costfunk}
\end{align}
and $Cost^t_{k,t}(\mu)=C_{k-1,t}$.
These functions, which are stored for each candidate changepoint, can then be updated at each new time step: for $\tau\leq t-1$,
\begin{align}
Cost^\tau_{k,t}(\mu)=Cost^\tau_{k,t-1}(\mu)+\gamma(y_{t},\mu).\label{eq:update}
\end{align}
By taking the minimum of $Cost^\tau_{k,t}(\mu)$ over $\mu$, the individual terms of the right hand side of equation~\eqref{eq:SNS} can be recovered.
Therefore, by further minimising over $\tau$, the minimum cost $C_{k,t}$ can be returned
\begin{align*}
\min_\tau \min_\mu Cost^\tau_{k,t}(\mu)&= \min_\tau \min_\mu \left[C_{k-1,\tau}+\sum_{i=\tau+1}^t\gamma(y_i,\mu)\right],\\
&=\min_\tau \left[C_{k-1,\tau}+\min_\mu\sum_{i=\tau+1}^t\gamma(y_i,\mu)\right],\\
&=\min_\tau \left[C_{k-1,\tau}+\mathcal{C}(\mathbf{y}_{\tau+1:t})\right],\\
&= C_{k,t}.
\end{align*}
By interchanging the order of minimisation the values of the potential last changepoint, $\tau$, can be pruned whilst allowing for changes in $\mu$. First we define the function $Cost^*_{k,t}(\mu)$ as follows
\begin{align*}
Cost^*_{k,t}(\mu)= \min_{\tau}Cost^\tau_{k,t}(\mu).
\end{align*}
We can now get a recursion for $Cost^*_{k,t}(\mu)$ by splitting the minimisation over the most recent changepoint $\tau$ into the two cases $\tau\leq t-1$ and $\tau=t$:
\begin{eqnarray*}
Cost^*_{k,t}(\mu)&=& \min\left\{ \min_{\tau\leq t-1} Cost^\tau_{k,t}(\mu)\mbox{ },\mbox{ }Cost^t_{k,t}(\mu) \right\}\\
&=&\min\left\{ \min_{\tau\leq t-1} Cost^\tau_{k,t-1}(\mu)+\gamma(y_t,\mu)\mbox{ },\mbox{ }C_{k-1,t} \right\},
\end{eqnarray*}
which gives
\[
Cost^*_{k,t}(\mu)=\min\left\{ Cost^*_{k,t-1}(\mu)+\gamma(y_t,\mu)\mbox{ },\mbox{ }C_{k-1,t} \right\}.
\]
The idea of pDPA is to use this recursion for $Cost^*_{k,t}(\mu)$. We can then use the fact that $C_{k,t}=\min_\mu Cost^*_{k,t}(\mu)$ to calculate the $C_{k,t}$ values.
In order to do this we need to be able to represent this function of $\mu$ in an efficient way.
This can be done if $\mu$ is a scalar, because for any value of $\mu$, $Cost^*_{k,t}(\mu)$ is equal to the value of $Cost^\tau_{k,t}(\mu)$ for some value of $\tau$.
Thus we can partition the possible values of $\mu$ into intervals, with each interval corresponding to a value for $\tau$ for which $Cost^*_{k,t}(\mu)=Cost^\tau_{k,t}(\mu)$.
To make the idea concrete, an example of $Cost^*_{k,t}(\mu)$ is given in Figure~\ref{fig:pDPAfuncallin} for a change in mean using a least square cost criteria.
Each $Cost^\tau_{k,t}(\mu)$ is a quadratic function in this example.
In this example there are 6 intervals of $\mu$ corresponding to 5 different values of $\tau$ for which $Cost^*_{k,t}(\mu)=Cost^\tau_{k,t}(\mu)$.
The pDPA algorithm needs to just store the 5 different $Cost^\tau_{k,t}(\mu)$ functions, and the corresponding sets.
\begin{figure}[t]
\centering
\includegraphics[width=9cm, trim=1cm 1cm 1cm 1cm]{pDPA/PDPAcandidatesallin.pdf}
\caption{Cost functions $Cost^\tau_{k,t}(\mu)$ for $\tau=0,\hdots,46$ and $t=46$, and the corresponding $Cost^*_{k,t}(\mu)$ (in bold), for a change in mean using a least square cost criteria.
Coloured lines correspond to $Cost^\tau_{k,t}(\mu)$ that contribute to $Cost^*_{k,t}(\mu)$, with the coloured horizontal lines showing the intervals of $\mu$ for which each value of $\tau$ is such that
$Cost^\tau_{k,t}(\mu)=Cost^*_{k,t}(\mu)$. Greyed out lines correspond to candidates which have previously been pruned, and do not contribute to $Cost^*_{k,t}(\mu)$.
\label{fig:pDPAfuncallin}}
\end{figure}
Formally speaking we define the set of intervals for which $Cost^*_{k,t}(\mu)=Cost^\tau_{k,t}(\mu)$ as $Set_{k,t}^\tau$. The recursion for $Cost^*_{k,t}(\mu)$ can be used to induce a recursion
for these sets. First define:
\begin{align}
I^\tau_{k,t}=\{\mu: Cost^\tau_{k,t}(\mu)\leq C_{k-1,t}\}. \label{eq:pDPAprune}
\end{align}
Then, for $\tau\leq t-1$ we have
\begin{eqnarray*}
Set_{k,t}^{\tau}&=&\left\{\mu:Cost^\tau_{k,t}(\mu)= Cost^*_{k,t}(\mu)\right\}\\
&=&\left\{\mu:Cost^\tau_{k,t-1}(\mu)+\gamma(y_t,\mu)=\min\left\{ Cost^*_{k,t-1}(\mu)+\gamma(y_t,\mu),C_{k-1,t} \right\} \right\}.
\end{eqnarray*}
Remembering that $Cost^\tau_{k,t-1}(\mu)+\gamma(y_t,\mu)\geq Cost^*_{k,t-1}(\mu)+\gamma(y_t,\mu)$, we have that
for $\mu$ to be in $Set_{k,t}^{\tau}$ we need that $Cost^\tau_{k,t-1}(\mu)=Cost^*_{k,t-1}(\mu)$, and that $Cost^\tau_{k,t-1}(\mu)+\gamma(y_t,\mu)\leq C_{k-1,t}$.
The former condition corresponds to $\mu$ being in $Set_{k,t-1}^{\tau}$ and the second that $\mu$ is in $I^\tau_{k,t}$. So for $\tau\leq t-1$
\[
Set^{\tau}_{k,t}=Set^{\tau}_{k,t-1} \cap I^\tau_{k,t}.
\]
If $Set^{\tau}_{k,t}=\emptyset$ then the value $\tau$ can be pruned, as $Set^{\tau}_{k,T}=\emptyset$ for all $T>t$.
If we denote the range of values $\mu$ can take to be $D$, then we further have that
\[
Set^{t}_{k,t}=D \backslash \left[\displaystyle\bigcup_{\tau}I^\tau_{k,t}\right],
\]
where $t$ can be pruned straight away if $Set^t_{k,t}=\emptyset$.
\begin{figure}[t]
\centering
\subfloat[End Time $t$]{
\includegraphics[page=1,width=5.5cm,trim=1cm .5cm .5cm 1.5cm]{pDPA/PDPAcandidates.pdf}
}
\subfloat[Middle Time $t+1$]{
\includegraphics[page=2,width=5.5cm,trim=.75cm .5cm .75cm 1.5cm]{pDPA/PDPAcandidates.pdf}
}
\subfloat[End Time $t+1$]{
\includegraphics[page=3,width=5.5cm,trim=.5cm .5cm 1cm 1.5cm]{pDPA/PDPAcandidates.pdf}
}
\caption{Example of pDPA algorithm over two time-steps. On each plot we show individual $Cost^\tau_{k,t}(\mu)$ functions that are stored,
together with the intervals (along the bottom) for which each candidate last changepoint is optimal. In bold is the value of $Cost^*_{k,t}(\mu)$.
For this example $t=43$ and we are detecting a change in mean (see Section \ref{sec:defining-change-mean}). (a) 4 candidates are optimal for some interval of $\mu$, however at $t=44$ (b),
when the candidate functions are updated and the new candidate is added, then the candidate $\tau=43$ is no longer optimal for any $\mu$ and hence can be pruned (c).
\label{fig:pDPAfuncs} }
\end{figure}
An example of the pDPA recursion is given in Figure~\ref{fig:pDPAfuncs} for a change in mean using a least square cost criteria.
The left-hand plot shows $Cost^*_{k,t}(\mu)$.
In this example there are 5 intervals of $\mu$ corresponding to 4 different values of $\tau$ for which $Cost^*_{k,t}(\mu)=Cost^\tau_{k,t}(\mu)$.
When we analyse the next data point, we update each of these four $Cost^\tau_{k,t}(\mu)$ functions, using $Cost^\tau_{k,t+1}(\mu)=Cost^\tau_{k,t}(\mu)+\gamma(y_{t+1},\mu)$, and introduce a new curve corresponding to a change-point at time $t+1$,
$Cost^{t+1}_{k,t+1}(\mu)=C_{k-1,t+1}$ (see middle plot). We can then prune the functions which are no longer optimal for any $\mu$ values, and in this case we remove one such function (see right-hand plot).
pDPA can be shown to be bounded in time by $\mathcal{O}(Kn^2)$.
\citet{Rigaill2010} further analyses the time complexity of pDPA and shows it empirically to be $\mathcal{O}(Kn\log n)$; further evidence of this will be presented in Section~\ref{sec:emp-eval-fpop}.
However pDPA has a computational overhead relative to Segment Neighbourhood Search, as it requires calculating and storing the $Cost^\tau_{k,t}(\mu)$ functions and the corresponding sets $Set_{k,t}^\tau$.
Currently implementations of pDPA have only been possible for models with scalar segment parameters $\mu$, due to the difficulty of calculating the sets in higher dimensions.
The need to efficiently store and update the $Cost^\tau_{k,t}(\mu)$ functions has also restricted application primarily to models where $\gamma(y,\mu)$ corresponds to the log-likelihood of an exponential family.
However this still includes a wide-range of changepoint applications, including that of detecting CNVs that we consider in Section \ref{sec:emp-eval-fpop}.
The cost of updating the sets depends heavily on whether the updates \eqref{eq:pDPAprune} can be calculated analytically, or whether they require the use of numerical methods.
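For the Gaussian cost each $Cost^\tau_{k,t}(\mu)$ is a convex quadratic in $\mu$, so the update (\ref{eq:update}) amounts to adding coefficients, and the set $I^\tau_{k,t}$ of (\ref{eq:pDPAprune}) is the interval between the two roots of a quadratic, which can be found analytically. A sketch of these two primitives (the function names are our own, with $\sigma=1$ by default):

```python
import math

def add_point(coeffs, y_new, sigma=1.0):
    """Update a stored quadratic Cost(mu) = a*mu^2 + b*mu + c by adding
    gamma(y, mu) = (y - mu)^2 / (2 sigma^2), as in the pDPA update step."""
    a, b, c = coeffs
    w = 1.0 / (2.0 * sigma ** 2)
    return (a + w, b - 2.0 * w * y_new, c + w * y_new ** 2)

def interval_below(coeffs, threshold):
    """The set I = {mu : a*mu^2 + b*mu + c <= threshold} for a > 0.

    Returns None when the set is empty, otherwise its two endpoints;
    a candidate whose stored set no longer meets this interval can be
    pruned."""
    a, b, c = coeffs
    disc = b * b - 4.0 * a * (c - threshold)
    if disc < 0:
        return None
    root = math.sqrt(disc)
    return ((-b - root) / (2.0 * a), (-b + root) / (2.0 * a))
```

The full algorithm additionally maintains, for each candidate $\tau$, the intersection of these intervals with $Set^\tau_{k,t-1}$; when the intersection becomes empty the candidate is discarded.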
\section{New Changepoint Algorithms}
\label{sec:incr-effic}
Two natural ways of extending the two methods introduced above will be examined in this section.
These are, respectively, to apply functional pruning (Section~\ref{sec:segm-neighb-search}) to Optimal Partitioning, and to apply inequality based pruning (Section~\ref{sec:optim-part-pelt}) to Segment Neighbourhood Search.
These lead to two new algorithms, which we call Functional Pruning Optimal Partitioning (FPOP) and Segment Neighbourhood with Inequality Pruning (SNIP).
\subsection{Functional Pruning Optimal Partitioning}
\label{sec:opr}
Functional Pruning Optimal Partitioning (FPOP) provides a version of Optimal Partitioning \citep{Jackson2005} which utilises functional pruning to increase the efficiency. As will be discussed in Section~\ref{sec:simil-diff-betw} and shown in Section~\ref{sec:emp-eval-fpop} FPOP provides an alternative to PELT which is more efficient in certain scenarios. The approach used by FPOP is similar to the approach for pDPA in Section~\ref{sec:segm-neighb-search}, however the theory is slightly simpler here as there is no longer the need to condition on the number of changepoints.
We assume condition C1 holds, that the cost function, $\mathcal{C}(\mathbf{y}_{\tau+1:t})$, can be split into component parts $\gamma(y_i,\mu)$ which depend on the parameter $\mu$. Cost functions $Cost_t^\tau$ can then be defined as the minimal cost of the data up to time $t$, conditional on the last changepoint being at $\tau$ and the last segment having parameter $\mu$. Thus for $\tau\leq t-1$
\begin{align}
\label{eq:3}
Cost_t^\tau(\mu)=F(\tau)+\beta+\sum_{i=\tau+1}^t\gamma(y_i,\mu),
\end{align}
and $Cost_t^t(\mu)=F(t)+\beta$.
These functions, which need only be stored for each candidate changepoint, can then be updated recursively at each time step: for $\tau\leq t-1$,
\begin{align}
\label{eq:8}
Cost^\tau_{t}(\mu)=Cost^\tau_{t-1}(\mu)+\gamma(y_{t},\mu).
\end{align}
Given the cost functions $Cost^\tau_t(\mu)$ the minimal cost $F(t)$ can be returned by minimising over both $\tau$ and $\mu$:
\begin{align*}
\min_\tau\min_\mu Cost_t^\tau(\mu) &= \min_\tau\min_\mu\left[F(\tau)+\beta+\sum_{i=\tau+1}^t\gamma(y_i,\mu)\right],\\
&=\min_\tau\left[F(\tau)+\beta+\min_\mu\sum_{i=\tau+1}^t\gamma(y_i,\mu)\right],\\
&=\min_\tau\left[F(\tau)+\beta+\mathcal{C}(\mathbf{y}_{\tau+1:t})\right],\\
&=F(t).
\end{align*}
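For the least-squares cost the inner minimisation over $\mu$ above has a closed form: the minimum of $\sum_{i=\tau+1}^{t}(y_i-\mu)^2$ is attained at the segment mean, giving $\mathcal{C}(\mathbf{y}_{\tau+1:t})=\sum y_i^2-\left(\sum y_i\right)^2/(t-\tau)$. A minimal sketch (the helper name is ours):

```python
def seg_cost(y, lo, hi):
    """Least-squares segment cost: min over mu of sum of (y[i] - mu)^2, lo <= i < hi.

    The minimiser is the segment mean, so the cost reduces to the
    closed form sum(y^2) - (sum y)^2 / length.
    """
    seg = y[lo:hi]
    s = sum(seg)
    return sum(v * v for v in seg) - s * s / len(seg)
```

With precomputed cumulative sums of $y_i$ and $y_i^2$, this quantity can be evaluated in constant time for any segment.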
As before, by interchanging the order of minimisation, the values of the potential last changepoint, $\tau$, can be pruned whilst allowing for a varying $\mu$. Firstly we will define the function $Cost_{t}^*(\mu)$, the minimal
cost of segmenting data $y_{1:t}$ conditional on the last segment having parameter $\mu$:
\[
Cost^*_t(\mu)=\min_{\tau} Cost^\tau_t(\mu).
\]
We will update these functions recursively over time, and use $F(t)=\min_\mu Cost^*_t(\mu)$ to then obtain the solution of the penalised minimisation problem. The recursions for $Cost^*_t(\mu)$ are obtained by splitting the minimisation over $\tau$ into $\tau\leq t-1$ and $\tau=t$
\begin{align*}
Cost^*_t(\mu)&=\min\left\{ \min_{\tau \leq t-1} Cost^\tau_t(\mu)\mbox{ },\mbox{ }Cost^t_t(\mu)\right\},\\
&=\min\left\{ \min_{\tau \leq t-1} Cost^\tau_{t-1}(\mu)+\gamma(y_t,\mu)\mbox{ },\mbox{ }Cost^t_t(\mu)\right\},
\end{align*}
which then gives:
\[
Cost^*_t(\mu)=\min\{ Cost^*_{t-1}(\mu)+\gamma(y_t,\mu)\mbox{ },\mbox{ }F(t)+\beta\}.
\]
To implement this recursion we need to be able to efficiently store and update $Cost^*_t(\mu)$. As before we do this by partitioning the space of possible $\mu$ values, $D$, into sets where each set corresponds to
a value $\tau$ for which $Cost^*_t(\mu)=Cost^\tau_t(\mu)$. We then need to be able to update these sets, and store $Cost^\tau_t(\mu)$ just for each $\tau$ for which the corresponding set is non-empty.
This can be achieved by first defining
\begin{align}
\label{eq:5}
I^\tau_{t}=\{\mu: Cost^\tau_{t}(\mu)\leq F(t) +\beta\}.
\end{align}
Then, for $\tau\leq t-1$, we define
\begin{align*}
Set_t^\tau &= \{\mu: Cost_t^\tau(\mu)=Cost_t^*(\mu)\}\\
&=\{\mu: Cost_{t-1}^\tau(\mu)+\gamma(y_t,\mu)=\min{\{Cost_{t-1}^*(\mu)+\gamma(y_t,\mu)\mbox{ },\mbox{ }F(t)+\beta\}}\}
\end{align*}
Remembering that $Cost^\tau_{t-1}(\mu)+\gamma(y_t,\mu)\geq Cost^*_{t-1}(\mu)+\gamma(y_t,\mu)$, we have that
for $\mu$ to be in $Set_{t}^{\tau}$ we need that $Cost^\tau_{t-1}(\mu)=Cost^*_{t-1}(\mu)$, and that $Cost^\tau_{t-1}(\mu)+\gamma(y_t,\mu)\leq F(t)+\beta$.
The former condition corresponds to $\mu$ being in $Set_{t-1}^{\tau}$ and the second that $\mu$ is in $I^\tau_{t}$, so for $\tau\leq t-1$
\[
Set^{\tau}_{t}=Set^{\tau}_{t-1} \cap I^\tau_{t}.
\]
If $Set^{\tau}_{t}=\emptyset$ then the value $\tau$ can be pruned, since then $Set^{\tau}_{T}=\emptyset$ for all $T>t$.
If we denote the range of values $\mu$ can take to be $D$, then we further have that
\[
Set^{t}_{t}=D \backslash \left[\displaystyle\bigcup_{\tau}I^\tau_{t}\right],
\]
where $t$ can be pruned straight away if $Set_t^t=\emptyset$.
This updating of the candidate functions and sets is illustrated in Figure~\ref{fig:OPRfuncs} where the $Cost$ functions and $Set$ intervals are displayed across two time steps. In this example a change in mean has been considered, using a least squares cost. The bold line on the left-hand graph corresponds to the function $Cost^*_t(\mu)$ and is made up of 7 pieces which relate to 6 candidate last changepoints. As the next time point is analysed the six $Cost_t^\tau(\mu)$ functions are updated using the formula $Cost_{t+1}^\tau(\mu)=Cost_t^\tau(\mu)+\gamma(y_{t+1},\mu)$ and a new function, $Cost_{t+1}^{t+1}(\mu)=F(t+1)+\beta$, is introduced corresponding to placing a changepoint at time $t+1$ (see middle plot). The functions which are no longer optimal for any values of $\mu$ (i.e. do not form any part of $Cost^*_{t+1}(\mu)$) can then be pruned, and one such function is removed in the right-hand plot.
\begin{figure}[t]
\centering
\subfloat[End Time $t$]{
\includegraphics[page=1,width=5.5cm,trim=1cm .5cm .5cm 1.5cm]{FPOP/FPOPcandidates.pdf}
}
\subfloat[Middle Time $t+1$]{
\includegraphics[page=2,width=5.5cm,trim=.75cm .5cm .75cm 1.5cm]{FPOP/FPOPcandidates.pdf}
}
\subfloat[End Time $t+1$]{
\includegraphics[page=3,width=5.5cm,trim=.5cm .5cm 1cm 1.5cm]{FPOP/FPOPcandidates.pdf}
}
\caption{Candidate functions over two time steps, the intervals shown along the bottom correspond to the intervals of $\mu$ for which each candidate last changepoint is optimal.
When $t=78$ (a) 6 candidates are optimal for some interval of $\mu$, however at $t=79$ (b), when the candidate functions are updated and the new candidate is added,
then candidate $\tau=78$ is no longer optimal for any $\mu$ and hence can be pruned (c).}
\label{fig:OPRfuncs}
\end{figure}
Once again we denote the set of potential last changes to consider as $R_{t}$ and then restrict the update rules (\ref{eq:4}) and (\ref{eq:6}) to $\tau\in R_{t}$.
This set can then be recursively updated at each time step
\begin{align}
R_{t+1} = \{ \tau\in\{R_{t} \cup \{t\}\} : Set_{t}^\tau \neq \emptyset \}.
\end{align}
These steps can then be applied directly to an Optimal Partitioning algorithm to form the FPOP method and the full pseudocode for this is presented in Algorithm~\ref{algo_OPR}.
\IncMargin{1em}
\begin{algorithm}[ht]
\SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up}
\SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress}
\SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output}
\Input{Set of data of the form $\mathbf{y}_{1:n}=(y_1,\hdots,y_n)$,\\
A measure of fit $\gamma(\cdot,\cdot)$ dependent on the data and the mean,\\
A penalty $\beta$ which does not depend on the number or location of the changepoints.}
\BlankLine
Let $n=$length of data, and set $F(0)=-\beta$, $cp(0)=0$\;
then let $R_1=\{0\}$\;
and set $D=$ the range of $\mu$\;
$Set^0_{0}=D$\;
$Cost^0_0(\mu)=F(0)+\beta=0$\;
\For{$t=1,\hdots,n$}{
\For{$\tau \in R_t$}{
$Cost^\tau_{t}(\mu)=Cost^\tau_{t-1}(\mu)+\gamma(y_{t},\mu)$\;
}
Calculate $F(t)=\min_{\tau\in R_t}(\min_{\mu\in Set^{\tau}_t}[Cost^\tau_{t}(\mu)])$\;
Let $\tau_t=\operatorname*{arg\,min}_{\tau\in R_t}(\min_{\mu\in Set^{\tau}_t}[Cost^\tau_{t}(\mu)])$\;
Set $cp(t)=(cp(\tau_t),\tau_t)$\;
$Cost^t_{t}(\mu)=F(t)+\beta$\;
$Set^{t}_{t}=D$\;
\For{$\tau \in R_t$}{
$I^{\tau}_{t}=\{\mu: Cost^\tau_{t}(\mu)\leq F(t)+\beta\}$\;
$Set^\tau_{t}=Set^{\tau}_{t-1}\cap I^{\tau}_{t}$\;
$Set^t_{t}=Set^{t}_{t}\backslash I^{\tau}_{t}$\;
}
$R_{t+1} = \{ \tau\in\{R_{t} \cup \{t\}\} : Set_{t}^\tau \neq \emptyset \}$\;
}
\Output{The changepoints recorded in $cp(n)$.}
\BlankLine
\caption{Functional Pruning Optimal Partitioning (FPOP)}\label{algo_OPR}
\end{algorithm}\DecMargin{1em}
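To make Algorithm~\ref{algo_OPR} concrete, the following Python sketch (ours; the implementations discussed later are in C++) implements FPOP for a change in mean under the least-squares cost, representing each $Cost^\tau_t$ by its quadratic coefficients and each $Set^\tau_t$ by a list of disjoint intervals; a large bounded interval stands in for the range $D$:

```python
import math

DOM = (-1e8, 1e8)  # bounded surrogate for the parameter range D

def _below(a, b, c, thr):
    """Interval {mu : a*mu^2 + b*mu + c <= thr}, or None if empty (a >= 0)."""
    if a == 0:
        return DOM if c <= thr else None
    disc = b * b - 4.0 * a * (c - thr)
    if disc < 0:
        return None
    r = math.sqrt(disc)
    return ((-b - r) / (2.0 * a), (-b + r) / (2.0 * a))

def _clip(ivals, lo, hi):
    """Intersect a list of disjoint intervals with [lo, hi]."""
    out = [(max(a, lo), min(b, hi)) for a, b in ivals]
    return [(a, b) for a, b in out if a < b]

def _remove(ivals, lo, hi):
    """Remove [lo, hi] from a list of disjoint intervals."""
    out = []
    for a, b in ivals:
        if a < lo:
            out.append((a, min(b, lo)))
        if b > hi:
            out.append((max(a, hi), b))
    return out

def fpop(y, beta):
    """FPOP for a change in mean with the least-squares cost.

    Each candidate tau stores Cost^tau_t(mu) as quadratic coefficients
    (a, b, c) together with the interval list Set^tau_t.
    Returns (F(1), ..., F(n)) and the optimal changepoint positions.
    """
    n = len(y)
    F = [-beta] + [0.0] * n
    cp = {0: ()}
    cands = {0: [[0.0, 0.0, 0.0], [DOM]]}   # Cost^0_0(mu) = F(0) + beta = 0
    for t in range(1, n + 1):
        yt = y[t - 1]
        best, best_tau = None, None
        for tau, (q, s) in cands.items():
            q[0] += 1.0; q[1] -= 2.0 * yt; q[2] += yt * yt   # add (y_t - mu)^2
            a, b, c = q
            mu_hat = -b / (2.0 * a)
            vals = [a * e * e + b * e + c for iv in s for e in iv]
            if any(lo <= mu_hat <= hi for lo, hi in s):
                vals.append(c - b * b / (4.0 * a))
            m = min(vals)               # min of Cost^tau_t over its stored set
            if best is None or m < best:
                best, best_tau = m, tau
        F[t] = best
        cp[t] = cp[best_tau] + ((best_tau,) if best_tau > 0 else ())
        new_set = [DOM]                 # Set^t_t = D minus the union of I^tau_t
        for tau in list(cands):
            q, s = cands[tau]
            iv = _below(q[0], q[1], q[2], F[t] + beta)
            if iv is None:
                del cands[tau]          # functional pruning
                continue
            s2 = _clip(s, *iv)
            if not s2:
                del cands[tau]
                continue
            cands[tau][1] = s2
            new_set = _remove(new_set, *iv)
        if new_set:
            cands[t] = [[0.0, 0.0, F[t] + beta], new_set]
    return F[1:], sorted(cp[n])
```

The global minimiser of $Cost^*_t$ always lies in the set of the candidate attaining it, so minimising each candidate over its stored intervals recovers $F(t)$ exactly.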
\subsection{Segment Neighbourhood with Inequality Pruning}
\label{sec:snp}
In a similar vein to Section~\ref{sec:opr}, Segment Neighbourhood Search can also benefit from using pruning methods. In Section~\ref{sec:segm-neighb-search} the method pDPA was discussed as a fast
pruned version of Segment Neighbourhood Search. In this section a new method, Segment Neighbourhood with Inequality Pruning (SNIP), will be introduced. This takes the Segment Neighbourhood Search
algorithm and uses inequality based pruning to increase the speed.
Under condition (C2) the following result can be proved for Segment Neighbourhood Search and this will enable points to be pruned from the candidate changepoint set.
\begin{theorem}
\label{SNPtheorem}
Assume that there exists a constant, $\kappa$, such that condition C2 holds. If, for a given $k\geq 1$ and $t<s$,
\begin{align}
C_{k-1,t}+\mathcal{C}(\mathbf{y}_{t+1:s})+\kappa> C_{k-1,s}\label{SNPtheorem2}
\end{align}
then at any future time $T>s$, $t$ cannot be the position of the last changepoint in the optimal segmentation of $\mathbf{y}_{1:T}$ with $k$ changepoints.
\end{theorem}
\begin{proof} \label{sec:assume-that-there}
The idea of the proof is to show that a segmentation of $y_{1:T}$ into $k$ segments with the last changepoint at $s$ will be better than one
with the last changepoint at $t$ for all $T>s$.
Assume that (\ref{SNPtheorem2}) is true. Now for any $s<T\leq n$
\begin{align*}
C_{k-1,t}+\mathcal{C}(\mathbf{y}_{t+1:s})+\kappa&>C_{k-1,s},\\
C_{k-1,t}+\mathcal{C}(\mathbf{y}_{t+1:s})+\kappa+\mathcal{C}(\mathbf{y}_{s+1:T})&>C_{k-1,s}+\mathcal{C}(\mathbf{y}_{s+1:T}),\\
C_{k-1,t}+\mathcal{C}(\mathbf{y}_{t+1:T})&>C_{k-1,s}+\mathcal{C}(\mathbf{y}_{s+1:T}),\hspace{50pt}\mbox{ (by C2).}
\end{align*}
Therefore for any $T>s$ the cost $C_{k-1,t}+\mathcal{C}(\mathbf{y}_{t+1:T})>C_{k,T}$ and hence $t$ cannot be the optimal location of the last changepoint when segmenting $\mathbf{y}_{1:T}$ with $k$ changepoints.
\end{proof}
Theorem~\ref{SNPtheorem} implies that the update rule (\ref{eq:SNS}) can be restricted to a reduced set over $\tau$ of potential last changes to consider. This set, which we shall denote as $R_{k,t}$, can be updated simply by
\begin{align}
R_{k,t+1}=\{v\in \{R_{k,t}\cup\{t\}\}:C_{k-1,v}+\mathcal{C}(\mathbf{y}_{v+1:t})+\kappa\leq C_{k-1,t}\}.
\end{align}
This new algorithm, SNIP, is described fully in Algorithm~\ref{algo_segneighpelt}.
\IncMargin{1em}
\begin{algorithm}[ht]
\SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up}
\SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress}
\SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output}
\Input{Set of data of the form $\mathbf{y}_{1:n}=(y_1,\hdots,y_n)$,\\
A measure of fit $\mathcal{C}(\cdot)$ dependent on the data (needs to be minimised),\\
An integer, $K$, specifying the maximum number of changepoints to find,\\
A constant $\kappa$ that satisfies: $\mathcal{C}(\mathbf{y}_{t+1:s})+\mathcal{C}(\mathbf{y}_{s+1:T})+\kappa\leq \mathcal{C}(\mathbf{y}_{t+1:T})$.}
\BlankLine
Let $n=$length of data\;
Set $C_{0,t}=\mathcal{C}(\mathbf{y}_{1:t})$, for all $t\in \{1,\hdots,n\}$\;
\For{$k=1,\hdots,K$}{
Set $R_{k,k+1}=\{k\}$\;
\For{$t=k+1,\hdots,n$}{
Calculate $C_{k,t}=\min_{v\in R_{k,t}}(C_{k-1,v}+\mathcal{C}(\mathbf{y}_{v+1:t}))$\;
Set $R_{k,t+1}=\{v\in \{R_{k,t}\cup\{t\}\}:C_{k-1,v}+\mathcal{C}(\mathbf{y}_{v+1:t})+\kappa\leq C_{k-1,t}\}$\;
}
Set $\tau_{k,1}=\operatorname*{arg\,min}_{v\in R_{k,n}}(C_{k-1,v}+\mathcal{C}(\mathbf{y}_{v+1:n}))$\;
\For{$i=2,\hdots,k$}{
Let $\tau_{k,i}=\operatorname*{arg\,min}_{v\in R_{k-i,\tau_{k,i-1}}}(C_{k-i,v}+\mathcal{C}(\mathbf{y}_{v+1:\tau_{k,i-1}}))$\;
}
}
\Output{For $k=0,\hdots,K$: the total measure of fit, $C_{k,n}$, for $k$ changepoints and the location of the changepoints for that fit, $\boldsymbol{\tau}_{k,(1:k)}$.}
\BlankLine
\caption{Segment Neighbourhood with Inequality Pruning (SNIP)}\label{algo_segneighpelt}
\end{algorithm}\DecMargin{1em}
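As an illustration of Algorithm~\ref{algo_segneighpelt}, the following Python sketch (ours) implements SNIP for the least-squares cost, for which condition C2 holds with $\kappa=0$; it returns the optimal costs $C_{k,n}$ for $k=0,\dots,K$:

```python
def snip(y, K):
    """Segment Neighbourhood Search with inequality based pruning.

    Uses the least-squares cost, for which condition C2 holds with
    kappa = 0.  C[k][t] is the optimal cost of y[0:t] with k changepoints;
    the function returns [C[0][n], ..., C[K][n]].
    """
    def cost(lo, hi):
        seg = y[lo:hi]
        s = sum(seg)
        return sum(v * v for v in seg) - s * s / len(seg)

    n = len(y)
    C = [[cost(0, t) if t > 0 else 0.0 for t in range(n + 1)]]  # k = 0 row
    for k in range(1, K + 1):
        row = [float("inf")] * (n + 1)
        R = {k}                      # candidate last-changepoint positions
        for t in range(k + 1, n + 1):
            row[t] = min(C[k - 1][v] + cost(v, t) for v in R)
            # keep v only when the theorem's pruning condition does not fire
            R = {v for v in R | {t}
                 if v == t or C[k - 1][v] + cost(v, t) <= C[k - 1][t]}
        C.append(row)
    return [C[k][n] for k in range(K + 1)]
```

The pruning step only discards candidates that Theorem~\ref{SNPtheorem} guarantees can never again be the optimal last changepoint, so the returned costs are exactly those of an unpruned Segment Neighbourhood Search.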
\section{Comparisons Between Pruning Methods}
\label{sec:simil-diff-betw}
Functional and inequality based pruning both offer increases in the efficiency in solving both the penalised and constrained problems,
however their use depends on the assumptions which can be made on the cost function.
Inequality based pruning is dependent on the assumption C2, while functional pruning requires the slightly stronger condition C1.
Furthermore, functional pruning has a larger computational overhead, and is currently only feasible for detecting a change in a univariate parameter.
If we consider models for which both pruning methods can be implemented, we can compare the extent to which the methods prune. This will give some insight into when
the different pruning methods would be expected to work well.
To explore this, in Figures \ref{fig:noleftPELT-OPR} and \ref{fig:SNPPDPAcandidates} we examine the number of candidates stored by functional and inequality based pruning in each of the two optimisation problems.
\begin{figure}[t]
\centering
\includegraphics[width=8cm, trim=1cm 1cm 1cm 1cm]{NumberOfCPs/ConsideredPELTandFPOP.pdf}
\caption{Comparison of the number of candidate changepoints stored over time by FPOP (purple) and PELT (orange). Averaged over 1000 data sets with changepoints at $t=20,40,60$ and $80$.}
\label{fig:noleftPELT-OPR}
\end{figure}
As Figure~\ref{fig:noleftPELT-OPR} illustrates, PELT prunes very rarely; only when evidence of a change is particularly high.
In contrast, FPOP prunes more frequently keeping the candidate set small throughout.
Figure~\ref{fig:SNPPDPAcandidates} shows similar results for the constrained problem. While pDPA constantly prunes,
SNIP only prunes sporadically. In addition SNIP fails to prune much at all for low values of $k$.
\begin{figure}[t]
\centering
\subfloat{
\includegraphics[page=1,width=4.5cm, trim=0cm 1cm 0cm 1cm]{NumberOfCPs/ConsideredSNIPandPDPA.pdf}
}
\subfloat{
\includegraphics[page=2,width=4.5cm, trim=0cm 1cm 0cm 1cm]{NumberOfCPs/ConsideredSNIPandPDPA.pdf}
}
\subfloat{
\includegraphics[page=3,width=4.5cm, trim=0cm 1cm 0cm 1cm]{NumberOfCPs/ConsideredSNIPandPDPA.pdf}
}
\subfloat{
\includegraphics[page=4,width=4.5cm, trim=0cm 1cm 0cm 1cm]{NumberOfCPs/ConsideredSNIPandPDPA.pdf}
}
\caption{Comparison of the number of candidate changepoints stored over time by pDPA (teal) and SNIP (green) at multiple values of $k$ in the algorithms (going from left to right $k=2,3,4,5$). Averaged over 1000 data sets with changepoints at $t=20,40,60$ and $80$.}
\label{fig:SNPPDPAcandidates}
\end{figure}
Figures \ref{fig:noleftPELT-OPR} and \ref{fig:SNPPDPAcandidates} give strong empirical evidence that functional pruning prunes more points than the inequality based method.
In fact it can be shown that any point pruned by inequality based pruning will also be pruned at the same time step by functional pruning.
This result holds for both the penalised and constrained case and is stated formally in Theorem \ref{thr:1}.
\begin{theorem}
\label{thr:1}
Let $\mathcal{C}(\cdot)$ be a cost function that satisfies condition C1, and consider solving either the constrained or penalised
optimisation problem using dynamic programming and either inequality or functional pruning.
Any point pruned by inequality based pruning at time $t$ will also have been pruned by functional pruning at the same time.
\end{theorem}
\begin{proof}
\label{sec:given-cost-function-1}
We prove this for the pruning of Optimal Partitioning, with the ideas extending directly to the pruning of the Segment Neighbourhood algorithm.
For a cost function which can be decomposed into pointwise costs, it is clear that condition C2 holds with $\kappa=0$ and hence inequality based pruning can be used.
Recall that the point $\tau$ (where $\tau<t$, the current time point) is pruned by inequality based pruning in the penalised case if
\begin{align*}
F(\tau)+\mathcal{C}(\mathbf{y}_{\tau+1:t})\geq F(t).
\end{align*}
By letting $\hat{\mu}_{\tau}$ be the value of $\mu$ that minimises $Cost^\tau_t(\mu)$, this is equivalent to
\begin{align*}
Cost^\tau_t(\hat{\mu}_{\tau})-\beta\geq F(t),
\end{align*}
which implies that, for all $\mu$,
\begin{align*}
Cost^\tau_t(\mu)\geq F(t)+\beta.
\end{align*}
Therefore the inequality in equation~\eqref{eq:5} holds for no value of $\mu$, and hence $I^\tau_t=\emptyset$, so that $Set^\tau_{t}=Set^\tau_{t-1}\cap I^\tau_t=\emptyset$, meaning that $\tau$ is pruned under functional pruning.
\end{proof}
\section{Empirical evaluation of FPOP}
\label{sec:emp-eval-fpop}
As explained in Section \ref{sec:simil-diff-betw}, functional pruning
leads to better pruning in the following sense: any point pruned by
inequality based pruning will also be pruned by functional pruning.
However, functional pruning is computationally more demanding than
inequality based pruning. We thus decided to empirically compare the
performance of FPOP to PELT, pDPA and Binary Segmentation (Binseg).
To do so, we implement FPOP in C++ for the quadratic cost (\ref{eq:Cost}).
We assess the runtimes of FPOP on both real microarray data as well as synthetic data.
All algorithms were implemented in C++. For pDPA and Binary Segmentation we had to input
a maximum number of changepoints to search for, which we denote $K$ as before.
Note that the Binary Segmentation heuristic is computationally
extremely fast. That is because its complexity is on average $\mathcal{O}(n \log K)$. Furthermore, it relies on a few
fairly simple operations and hence the constant in front of this big
$\mathcal O$ notation is expected to be quite low. For this reason we
believe that Binary Segmentation is a good reference point in terms of
speed, and we do not think it is possible to be much faster. However, unlike the other algorithms,
Binary Segmentation is not guaranteed to find the optimal segmentation. For a fuller investigation
of the loss of accuracy that can occur when using Binary Segmentation see \cite{Killick2012a}.
\subsection{Speed benchmark: 4467 chromosomes from
tumour microarrays}
\citet{Hocking2014} proposed to benchmark the speed of
segmentation algorithms on a database of 4467 problems of size varying
from 25 to 153662 data points. These data come from different
microarrays data sets (Affymetrix, Nimblegen, BAC/PAC) and different
tumour types (leukaemia, lymphoma, neuroblastoma, medulloblastoma).
We compared FPOP to several other segmentation algorithms: pDPA \citep{Rigaill2010}, PELT \citep{Killick2012a}, and Binary Segmentation
(Binseg). For such data, we expect a small number of changes and for
all profiles we ran pDPA and Binseg with a maximum number of changes
$K=52$.
We used the R \verb|system.time| function to measure
the execution time of all 4 algorithms on all 4467 segmentation
problems. The R source code for these timings is in
\verb|benchmark/systemtime.arrays.R| in the opfp project repository on
R-Forge:
\url{https://r-forge.r-project.org/projects/opfp/}
It can be seen in Figure~\ref{fig:sys_runtimes_microarray} that in
general FPOP is faster than PELT and pDPA, but about two times slower
than Binseg. Note that \verb|system.time| is inaccurate for small
times due to rounding, as can be seen in the middle and right panels of
Figure~\ref{fig:sys_runtimes_microarray}. For these small profiles we also assessed the
performances of FPOP, PELT and Binseg using the \verb|microbenchmark|
package and confirmed that FPOP is faster than PELT. For these small
problems we were surprised to observe that FPOP exhibited about the
same speed as Binseg (microbenchmark results figure not shown).
\begin{figure}[t]
\parbox{6.5cm}{ \includegraphics[height=5cm]{figure-systemtime-arrays-small.pdf}
} \parbox{12.5cm}{ \includegraphics[width=\linewidth]{figure-systemtime-arrays-fpop-pelt-small.pdf}}
\caption{(Left) Runtimes of FPOP, PELT,
pDPA and Binseg as a function of the length of the profile on the tumour microarray benchmark.
(Middle) Runtimes of PELT and FPOP for the same profiles.
(Right) Runtimes of Binseg and FPOP for the same profiles.
}\label{fig:sys_runtimes_microarray}
\end{figure}
\subsection{Speed benchmark: simulated data with different
number of changes}
The speed of PELT, Binseg and pDPA depends on the underlying number of changes.
For pDPA and Binseg the relationship is clear: to cope with a larger number of changes
one needs to increase the maximum number of changes to search for, $K$.
For a signal of fixed size the runtime is expected to scale as $\mathcal{O}(\log K)$ for Binseg and
as $\mathcal{O}(K)$ for pDPA.
For PELT the relationship is less clear; however, we expect pruning to be more efficient
when there is a large number of changepoints. Hence for a signal of fixed size we expect
the runtime of PELT to improve with the underlying number of changes.
Based on Section \ref{sec:simil-diff-betw} we expect FPOP to be more efficient than PELT and pDPA.
Thus it seems reasonable to expect FPOP to be efficient for the whole range of $K$.
This is what we empirically check in this section.
To do that we simulated Gaussian signals with 200000 data points, varied the number of changes,
and timed the algorithms on each of these signals. We then repeated the same experiment for signals with $10^7$ data points, timing FPOP and Binseg only.
The R source code
for these timings is in \verb|benchmark/systemtime.simulation.R| in
the opfp project repository on R-Forge: \url{https://r-forge.r-project.org/projects/opfp/}.
It can be seen in Figure \ref{fig:simu_numberK} that FPOP is always faster than pDPA and PELT.
Interestingly for both $n=2\times 10^5$ and $10^7$ it is faster than Binseg for a true number of changepoints larger than 500.
\begin{figure}[t]
\begin{center}
\includegraphics[width=8cm]{figure-systemtime-simulation-small.pdf}
\includegraphics[width=8cm]{figure-systemtime-simulationLarge-small.pdf}
\end{center}
\caption{Runtimes as a function of the true number of changepoints.
(Left) For FPOP, PELT,
pDPA and Binseg and $n=2 \times 10^5$
(Right) For Binseg and FPOP and $n=10^7$
}\label{fig:simu_numberK}
\end{figure}
\subsection{Accuracy benchmark: the neuroblastoma data set}
\citet{Hocking2013} proposed to benchmark the changepoint
detection accuracy of segmentation models by using annotated regions
defined by experts when they visually inspected scatterplots of the
data. The neuroblastoma data are a set of 575 copy number microarrays
of neuroblastoma tumours, and each chromosome is a separate
segmentation problem. The benchmark is to use a training set of $n$
annotated chromosomes to learn a segmentation model, and then quantify
the annotation error on a test set. Let $d_1, \dots, d_n$ be the
number of data points to segment on each chromosome in the training
data set, and let $\mathbf y_1\in\mathbb{R}^{d_1}, \dots, \mathbf
y_n\in\mathbb{R}^{d_n}$ be the vectors of noisy data for each chromosome in
the training set.
Both PELT and pDPA have been applied to this benchmark by first
defining $\beta = \lambda d_i$ for all $i\in\{1, \dots, n\}$, and then
choosing the constant $\lambda$ that maximises agreement with the
annotated regions in the training set.
Since FPOP computes the same
segmentation as PELT, we obtain the same error rate for
FPOP on the neuroblastoma benchmark.
As shown on
\url{http://cbio.ensmp.fr/~thocking/neuroblastoma/accuracy.html}, FPOP
(fpop) achieves 2.2\% test error. This is the smallest error across all
algorithms that have been tested.
\section{Discussion}
We have introduced two new algorithms for detecting changepoints, FPOP and SNIP. A natural question is which of these, and the existing algorithms, pDPA and PELT, should be used in which applications.
There are two stages to answering this question. The first is whether to detect changepoints through solving the constrained or the penalised optimisation problem, and the second is whether to
use functional or inequality based pruning.
The advantage of solving the constrained optimisation problem is that this gives optimal segmentations for a range of numbers of changepoints. The disadvantage is that solving it is slower than
solving the penalised optimisation problem, particularly if there are many changepoints. In interactive situations where you wish to explore segmentations of the data, then solving the
constrained problem is to be preferred \citep{Hocking2014}. However in non-interactive scenarios when the penalty parameter is known in advance, it will be faster to solve the penalised problem
to recover the single segmentation of interest.
The decision as to which pruning method to use is purely one of computational efficiency. We have shown that functional pruning always prunes more than inequality based pruning, and empirically have seen
that this difference can be large, particularly if there are few changepoints. However functional pruning can be applied less widely. Not only does it require a stronger condition on the cost functions, but currently its implementation
has been restricted to detecting changes in a univariate parameter from a model in the exponential family. Even for situations where functional pruning can be applied, its computational overhead per non-pruned candidate is higher.
Our experience suggests that functional pruning should be preferred in the situations where it can be applied. For example, FPOP was always faster than PELT for detecting a change in mean in the empirical studies we conducted; the difference in speed was particularly large in situations where there are few changepoints. Furthermore, we observed that FPOP's computational speed was robust to changes in the number of changepoints to be detected, and was
even competitive with, and sometimes faster than, Binary Segmentation.
{\bf Acknowledgements} We thank Adam Letchford for helpful comments and discussions. This research was supported by EPSRC grant EP/K014463/1. Maidstone gratefully acknowledges funding from EPSRC via the
STOR-i Centre for Doctoral Training.
Forecasting for multiple time series with complex hierarchical structure has been applied in various real-world problems \cite{dangerfield1992top, athanasopoulos2009hierarchical, liu2018flexible, jeon2018reconciliation}. For instance, the international sales forecasting of transnational corporations needs to consider geographical hierarchies involving the levels of city, state, and country \cite{han2021simultaneously}. Moreover, retail sales can also be divided into different groups according to the product category
to form another hierarchy.
The combination of different hierarchies can form a more complicated but realistic virtual topology, e.g., the geographical hierarchies (of different regions) and the commodity hierarchies (of various categories) are nested to form parts of the supply chain
in the retail sector.
The critical challenge in hierarchical forecasting tasks lies in producing accurate prediction results while satisfying aggregation (coherence) constraints \cite{ben2019regularized}.
Specifically, the hierarchical structure in these time series implies a coherence constraint, i.e., time series at upper levels are the aggregation of those at lower levels. However, independent forecasts from a prediction model (called {\it base forecasts}) are unlikely to satisfy this constraint.
Previous work on hierarchical forecasting mainly focuses on a procedure of two separate stages~\cite{hyndman2016fast, ben2019regularized, corani2020probabilistic, anderer2021forecasting}: In the first stage, {\it base forecasts} are generated independently for each time series in the hierarchy; in the second stage, these forecasts are adjusted via \textit{reconciliation} to derive coherent results.
The base forecasts are obtained by univariate or multivariate time series models, such as traditional forecasting methods (e.g., ARIMA~\cite{ben2019regularized}) and deep learning techniques (e.g. DeepVar~\cite{salinas2019high}). However, both approaches ignore the information of hierarchical structure for prediction.
As for reconciliation, traditional statistical methods mostly rely on strong assumptions, such as unbiased forecasts and Gaussian noise (e.g., MinT~\cite{wickramasuriya2019optimal}), which are often inconsistent with non-Gaussian/non-linear real-world \textit{hierarchical time series} (HTS) data. A notable exception is the approach proposed in \cite{rangapuram2021end}, which guarantees coherence through a closed-form projection and thus achieves end-to-end reconciliation without any assumptions.
However, this method may introduce large adjustments to the original forecasts in order to achieve coherency, making the final forecasts unreasonable.
Moreover, even coherent forecasts might still be impractical for some real-world tasks with further operational or task-related constraints, such as inventory management and resource scheduling.
These reconciliation concerns can be addressed by imposing more realistic constraints to control the scale and optimize the task-based targets for the downstream tasks, which can be achieved by \textit{deep neural optimization layer} (OptNet) \cite{amos2017optnet}.
In this work, we provide an end-to-end framework to generate coherent forecasts for hierarchical time series that incorporates the hierarchical structure in the prediction process, while taking task-based constraints and targets into consideration. In detail, our contributions to HTS forecasting and aligned decision-making can be summarised as follows:
\begin{itemize}
\item We propose two tree-based mechanisms, including top-down convolution and bottom-up attention, to leverage hierarchical structure information (through feature fusion of all levels in the hierarchy) for performance improvement of the base forecasts. To the best of our knowledge, our approach is the first model that harnesses the power of deep learning to exploit the complicated structure information of HTS.
\item We provide a flexible end-to-end learning framework to unify the goals of the forecasting and decision-making by employing a deep differentiable convex optimization layer (OptNet), which not only achieves controllable reconciliation
without any assumptions, but also adapts to more practical task-related constraints and targets to solve real-world
problems without any explicit post-processing step.
\item Extensive experiments on real-world hierarchical datasets from various industrial domains demonstrate that our proposed approach achieves significant improvements over state-of-the-art baseline methods, and our approach has been deployed online in a cloud resource scheduling project at Ant Group.
\end{itemize}
\section{Preliminaries}
\label{section: preliminaries}
A hierarchical time series can be denoted as a tree structure (see Figure \ref{fig:hier_timeseries}) with linear aggregation constraints, expressed by aggregation matrix $\mathbf{S}\in \mathbb{R}^{n \times m}$ ($m$ is the number of bottom-level nodes, and $n$ is the total number of nodes). In the hierarchy, each node represents a time series, to be predicted over a time horizon.
Given a time horizon $t\in \{1,2,\dots,T\}$, we use $y_{i,t}\in \mathbb{R}$ to denote the value of the $i$-th component of a multivariate hierarchical time series, where $i\in \{1,2,\dots,n\}$ is the index of the individual univariate time series. Here we assume that the index $i$ follows the level-order traversal of the hierarchical tree, going from left to right at each level.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\linewidth]{figures/hts.png}
\caption{An example of HTS structure for $n=8$ time series with $m=5$ bottom-level series and $r=3$ upper-level series.}
\label{fig:hier_timeseries}
\end{figure}
In the tree structure, the time series of the leaf nodes are called the \textit{bottom-level} series $\mathbf{b}_{t}\in \mathbb{R}^{m}$, and those of the remaining nodes are termed the \textit{upper-level} series $\mathbf{u}_{t}\in \mathbb{R}^{r}$. Clearly, the total number of nodes is $n=r+m$, and $\mathbf{y}_{t} :=[\mathbf{u}_{t}^{\mathsf{T}},\mathbf{b}_{t}^{\mathsf{T}}]^{\mathsf{T}} \in \mathbb{R}^{n} $ contains the observations at time $t$ for all levels, which satisfy
\begin{equation}
\label{eq_sum}
\mathbf{y}_{t}=\mathbf{S} \mathbf{b}_{t},
\end{equation}
where $\mathbf{S}\in \{0,1\}^{n \times m}$ is an aggregation matrix.
Taking the HTS in Figure~\ref{fig:hier_timeseries} as an example, the aggregation matrix
{is in} the form:
\begin{equation*}
\mathbf{S} =
\begin{bmatrix}
\mathbf{S}_{\text{sum}}\\
\mathbf{I}_5
\end{bmatrix}
=
\begin{bmatrix}
\begin{array}{ccccc}
1 & 1 & 1 &1 &1 \\
1 & 1 & 0 &0 &0 \\
0 & 0 & 1 &1 &1 \\
\hdotsfor{5}\\
& & \mathbf{I}_5 & &\\
\end{array}
\end{bmatrix}
,
\end{equation*}
where $\mathbf{I}_5$ is the identity matrix of size 5. The total number of series in the hierarchy is $n=3+5=8$. At each time step $t$,
\begin{equation*}
\begin{aligned}
\mathbf{u}_{t}&=[y_{A,t},y_{B,t},y_{C,t}] \in \mathbb{R}^{3}, \\
\mathbf{b}_{t}&=[y_{D,t},y_{E,t},y_{F,t},y_{G,t},y_{H,t}] \in \mathbb{R}^{5},\\
y_{A,t}&=y_{B,t}+y_{C,t}=y_{D,t}+y_{E,t}+y_{F,t}+y_{G,t}+y_{H,t}.
\end{aligned}
\end{equation*}
The coherence constraint of Eq.~\eqref{eq_sum} can be represented as ~\cite{rangapuram2021end}
\begin{equation}
\label{eq:coherent_co}
\mathbf{A}\mathbf{y}_{t}=\mathbf{0},
\end{equation}
where $\mathbf{A}:=[\mathbf{I}_{r}\,|-\mathbf{S}_{\text{sum}}] \in \{-1,0,1\}^{r \times n}$, $\mathbf{0}$ is an $r$-vector of zeros, and $\mathbf{I}_{r}$ is an $r \times r$ identity matrix.
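As a concrete check, the aggregation matrix and coherence constraint for the hierarchy of Figure~\ref{fig:hier_timeseries} can be built in a few lines (a minimal NumPy sketch; the bottom-level values are illustrative):

```python
import numpy as np

# Hierarchy from Figure 1: root A, middle nodes B/C, bottom nodes D..H.
S_sum = np.array([
    [1, 1, 1, 1, 1],   # A = D+E+F+G+H
    [1, 1, 0, 0, 0],   # B = D+E
    [0, 0, 1, 1, 1],   # C = F+G+H
])
S = np.vstack([S_sum, np.eye(5, dtype=int)])     # (n=8, m=5) aggregation matrix
A = np.hstack([np.eye(3, dtype=int), -S_sum])    # coherence constraint A y = 0

b_t = np.array([2.0, 1.0, 4.0, 0.5, 1.5])        # bottom-level observations
y_t = S @ b_t                                    # all-level series, y = S b
assert np.allclose(A @ y_t, 0)                   # coherent by construction
```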
\begin{figure*}[h]
\centering
\includegraphics[width=0.85\textwidth]{figures/arch.png}
\caption{The Architecture of SLOTH: the red dashed box is the hierarchical forecasting component including the temporal feature
extraction module, top-down convolution module, bottom-up attention module, and base forecasting module. The base forecasting module
generates predictions without reconciliation. The light blue dashed box is the task-based optimization module that
generates the reconciliation forecasts and achieves task-based targets for real-world scenarios.}
\label{fig_arch}
\end{figure*}
\section{Method}
In this section, we introduce our framework (SLOTH), which not only improves prediction performance by integrating the hierarchical structure into forecasting, but also achieves task-based goals with a deep neural optimization layer that fulfills the coherence constraint. Figure~\ref{fig_arch} illustrates the architecture of SLOTH, consisting of two main components:
\begin{itemize}
\item a \textit{structured hierarchical forecasting module} that produces forecasts over the prediction horizon across all nodes, utilizing both top-down convolution and bottom-up attention to enhance the {temporal} features for each node.
\item a \textit{task-based constrained optimization module} that leverages OptNet to satisfy the coherence constraint and provide a flexible module for real-world tasks with more complex practical constraints and targets.
\end{itemize}
\subsection{Structured Hierarchical Forecasting}
In this section, we introduce the \textit{structured hierarchical learning module}, which integrates dynamic features from the hierarchical structure to produce better base forecasts.
\subsubsection{\textbf{Temporal Feature Extraction Module}}
This module extracts temporal features of each node as follows:
\begin{align}
\label{eq:uni}
\bar{h}^i_t &= \text{UPDATE}(\bar{h}^i_{t-1}, x^i_t; \mathbf{\theta}), \; i \in \{{1,2, \dots, n}\},\\
\bar{\mathbf{H}}_t & = [{\bar{h}^1_t, \bar{h}^2_t, \dots, \bar{h}^{n}_t}],\notag
\end{align}
where ${x}^i_t$ is the covariates of node $i$ at time $t$, $\bar{h}_{t-1}^i$ is the hidden feature of the previous time step $t-1$, $\mathbf{\theta}$ is the model parameters shared across all nodes, and $\textit{UPDATE}(\cdot)$ is the pattern extraction function.
Any recurrent-type neural network can be adopted as the temporal feature extraction module, such as the RNN variants \cite{hochreiter1997long, chung2014empirical}, TCN \cite{bai2018empirical}, WAVENET~\cite{van2016wavenet}, and NBEATS \cite{oreshkin2019n}. We use GRU ~\cite{chung2014empirical} in our experiments due to its simplicity.
It is worth noting that we process each node independently in this module, in contrast to some existing works that utilize a multivariate model to capture the relationships between time series~\cite{rangapuram2021end}. This is unnecessary in our framework because we leverage the hierarchical structure in the feature-integration step, which we believe is sufficient to characterize the relationships between nodes. We accordingly reduce the time complexity from $O(n^2)$ to $O(n)$.
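The shared-parameter, per-node update of Eq.~\eqref{eq:uni} can be sketched as follows (a hypothetical NumPy GRU-style cell with random weights; the actual module is a trained GRU, and all shapes are illustrative). Since nodes are processed independently with one shared $\mathbf{\theta}$, one step costs $O(n)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_x, d_h = 8, 3, 4                     # nodes, covariate dim, hidden dim

# One parameter set theta shared by all nodes (illustrative GRU cell).
Wz, Uz = rng.normal(size=(d_h, d_x)), rng.normal(size=(d_h, d_h))
Wr, Ur = rng.normal(size=(d_h, d_x)), rng.normal(size=(d_h, d_h))
Wh, Uh = rng.normal(size=(d_h, d_x)), rng.normal(size=(d_h, d_h))

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def update(h_prev, x):
    """GRU-style UPDATE(h_{t-1}, x_t; theta) for a single node."""
    z = sigmoid(Wz @ x + Uz @ h_prev)
    r = sigmoid(Wr @ x + Ur @ h_prev)
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev))
    return (1 - z) * h_prev + z * h_tilde

# Each node is processed independently -> O(n) per step, no pairwise terms.
H_prev = np.zeros((n, d_h))
X_t = rng.normal(size=(n, d_x))
H_t = np.stack([update(H_prev[i], X_t[i]) for i in range(n)])
assert H_t.shape == (n, d_h)
```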
\subsubsection{\textbf{Top-Down Convolution Module (TD-Conv)}}
\label{section:top_down}
This module incorporates structural information into the dynamic patterns by integrating temporal features (e.g., trends and seasonality) of nodes at the top levels into those at the bottom levels, enhancing temporal stability, i.e., clearer seasonality and smoother evolution \cite{taieb2021hierarchical}.
Most previous methods indicate that time series at the upper levels are easier to predict than those at the bottom levels, which implies that the dynamic patterns at the top levels are more stable.
Therefore, the bottom nodes can use the features of their top-level ancestors to improve prediction performance.
Similar to tree convolution \cite{mou2016convolutional}, our approach introduces a top-down convolution mechanism that extracts effective top-level temporal patterns to denoise the features of the bottom nodes and increase stability.
The top-down convolution mechanism shown in Figure~\ref{fig:top_down} operates on the outputs of the univariate forecasting model; we highlight that all nodes share this forecasting model to obtain their temporal features.
Note that an ancestor's value is the sum of the values of all its children. In other words, ancestors' features are helpful for predicting the considered node, which calls for integrating the features of both levels for better prediction.
To reduce the computational complexity, the hidden states are reorganized into matrices (yellow boxes in the middle part of Figure~\ref{fig:top_down}) after the temporal hidden features are obtained. Each row of the matrices represents the feature concatenation of the nodes on the path from the considered node to the root of the hierarchy.
We then apply convolutional neural networks (CNNs) to each row of those matrices to aggregate the temporal patterns, since CNNs are an effective tool for integrating valid information from hidden features.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\linewidth]{figures/td_conv.png}
\caption{Top-Down Convolution Architecture. Given the temporal features of each node (pink boxes) from the temporal feature extraction module, our approach transforms these hidden features into matrices (yellow boxes) for fast convolution. The right part with red dashed lines demonstrates the integration of ancestors' features with CNN, and the green boxes represent the integration feature.}
\label{fig:top_down}
\end{figure}
The computational complexity would be high if the convolution were applied directly on $\bar{\mathbf{H}}_t$, because it is not sorted by the tree-structure index. To accelerate the computation, we transform the hierarchical structure into a series of matrix forms, as shown in the middle part of
Figure~\ref{fig:top_down}. Specifically, the nodes' hidden features $\mathbf{\bar{H}}_t$ of shape $(n, d_h)$ are reorganized into $\mathbf{\bar{H}}^{\prime}_t = \{\bar{h'}^{1}_t,..., \bar{h'}^{n}_t\}$, where $\bar{h'}^i_t$ is the concatenation of the temporal features $\bar{h}_t$ of node $i$ and all its ancestors, of shape $(l_i, d_h)$; here $n$ is the number of nodes, $l_i$ is the level of node $i$, and $d_h$ is the feature dimension. Note that $\mathbf{\bar{H}}^{\prime}_t $ is not an actual matrix but a union of matrices of different dimensions, where the first dimension of each $\bar{h'}_t$ is the level index.
Then we apply convolution to $\bar{h'}_t^i$ to integrate temporal features of all ancestors of $i$ as follows
\begin{align}
\label{eq:conv}
\hat{h}_t^i &= \text{Conv}_{l_i}(\bar{h'}^i_t; \Theta_{l_i}) = \sum_{k=1}^{l_i} w_k \bar{h}^{l_i -k}_t,\\
\mathbf{\hat{H}}_t &= [{\hat{h}^1_t, \hat{h}^2_t, \dots, \hat{h}^{n}_t}],\notag
\end{align}
where different levels have different convolution components $\text{Conv}_{l_i}$, while nodes at the same level share the same parameter $\Theta_{l_i}$.
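The per-level convolution over ancestor paths in Eq.~\eqref{eq:conv} can be illustrated as follows (a NumPy sketch assuming the 8-node tree of Figure~\ref{fig:hier_timeseries}; the parent map and random weights are illustrative stand-ins for the learned $\Theta_{l_i}$, which is shared by all nodes at the same level):

```python
import numpy as np

rng = np.random.default_rng(1)
d_h = 4
# Hypothetical parent map for the 8-node tree (None = root).
parent = {0: None, 1: 0, 2: 0, 3: 1, 4: 1, 5: 2, 6: 2, 7: 2}
H_bar = rng.normal(size=(8, d_h))        # temporal features from the extractor

def ancestor_path(i):
    path = []
    while i is not None:
        path.append(i)
        i = parent[i]
    return path                          # node i first, root last

levels = {i: len(ancestor_path(i)) for i in parent}
# One weight vector per level l_i, shared by all nodes at that level.
w = {l: rng.normal(size=l) for l in set(levels.values())}

def td_conv(i):
    path = ancestor_path(i)              # features of i and all its ancestors
    return sum(w[len(path)][k] * H_bar[node] for k, node in enumerate(path))

H_hat = np.stack([td_conv(i) for i in range(8)])
assert H_hat.shape == (8, d_h)
assert np.allclose(H_hat[0], w[1][0] * H_bar[0])   # root sees only itself
```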
It is important to emphasize that HTS differs from general graph-structured time series, where spatial-temporal information is passed between adjacent nodes. The relationship between nodes in a hierarchical structure only involves value aggregation, which indicates that the message passing mechanisms of graph time series models (DCRNN~\cite{li2017diffusion} and STGCN~\cite{yu2017spatio}) are not appropriate for our problem. We compare them with our SLOTH method in Appendix E.
\subsubsection{\textbf{Bottom-Up Attention Module (BU-Attn)}}
\label{section: bottom_up}
This module integrates the temporal features of nodes at the bottom levels into those of their ancestors at the top levels, enhancing the ability to adapt to dynamic pattern variations, including sudden level or trend changes of the time series.
TD-Conv carries top-level information downward to improve the predictions at the bottom levels. Conversely, information from the bottom levels should also be useful for predicting the top levels, due to their value-aggregation relationship.
Note that feature aggregation is different from value aggregation (Eq.~\eqref{eq_sum}). Specifically, the summing matrix $\mathbf{S}$ effectively encodes a two-level hierarchical structure rather than the full tree structure, which is fine for value aggregation but causes structural information loss in feature aggregation. A direct summing operation is inappropriate for feature aggregation, as there is no summation relationship between parents and children in the feature space.
We therefore adopt the attention mechanism \cite{vaswani2017attention,nguyen2020treestructured} to aggregate temporal features from the bottom levels, due to the following considerations: 1) hierarchical structure has variations (different number of levels and children nodes); 2) child nodes contribute differently to parents with various scales/dynamic patterns. The attention mechanism is appropriate to aggregate the feature as it is flexible for various structures and can learn weighted contributions based on feature similarities.
\begin{align}
\Tilde{h}^{\cup_l}_t& =\operatorname{Softmax}\left(\frac{\mathbf{Q}^{\cup_l}_t (\mathbf{K}^{\cup_{l+1}}_t)^\mathsf{T}}{\sqrt{d_{h}}}\right) \mathbf{V}^{\cup_{l+1}}_t ,\label{eq:at-agg}\\
\Tilde{\mathbf{H}}_t &= [\Tilde{h}^{\cup_{1}}_t, \dots, \Tilde{h}^{\cup_{n_l}}_t ],\nonumber\\
\mathbf{V}_t^{\cup_l} &= \Tilde{h}_t^{\cup_l} - \hat{h}^{\cup_{l}}_t + \bar{h}^{\cup_{l}}_t.\label{eq_value}
\end{align}
\begin{algorithm}[tb]
\caption{Bottom-up Attention Module}
\label{algo_attn}
\begin{algorithmic}[1]
\REQUIRE $\mathbf{\bar{H}}_t, \mathbf{\hat{H}}_t, \phi_q, \phi_k $
\STATE $l \gets n_l-1$, $\Tilde{h}^{\cup_{n_l}}_t = \hat{h}^{\cup_{n_l}}_t$
\STATE $\mathbf{V}^{\cup_{n_l}}_t = \hat{h}^{\cup_{n_l}}_t$
\STATE \textit{/* $\cup_i$ denotes the union of nodes at level $i$ */}
\WHILE{$l > 0$}
\STATE $\mathbf{Q}_t^{\cup_l} = F(\bar{h}^{\cup_l}_t; \phi_q)$
\STATE $\mathbf{K}_t^{\cup_{l+1}} = F(\bar{h}^{\cup_{l+1}}_t; \phi_k)$
\STATE Update $\Tilde{h}_t^{\cup_l}$ according to Eq.~\eqref{eq:at-agg}
\STATE Update $\mathbf{V}_t^{\cup_l} =
\Tilde{h}_t^{\cup_l} - \hat{h}^{\cup_{l}}_t + \bar{h}^{\cup_{l}}_t $
\STATE $l \gets l-1,$
\ENDWHILE
\RETURN $\mathbf{\Tilde{H}}_t$
\end{algorithmic}
\end{algorithm}
Specifically, as shown in Algorithm~\ref{algo_attn}, the attention process proceeds from the second-last level up to the first level (leaf nodes excepted). A parent node takes its original temporal feature $\bar{h}_t$ to generate the query, and the corresponding child nodes take their original features $\bar{h}_t$ to generate the keys. The child-node feature value $\mathbf{V}_t$ is updated as the attention process moves upward, as in Eq.~\eqref{eq_value}.
$\Tilde{h}^{\cup_l}_t$ is the attention-aggregated feature of all nodes at level $l$, which contains contributions from both the children and their ancestors. $\mathbf{V}_t^{\cup_l}$ is the attention value of each node at level $l$, used in Eq.~\eqref{eq:at-agg}. We subtract the top-down temporal features $\hat{h}_t^{\cup_l}$ and add the original temporal features $\bar{h}_t^{\cup_l}$ because we aim not only to strengthen the information of the children and the node's own features, but also to weaken the parents' influence in the bottom-up attention process, since a parent's feature becomes the node's own feature in the next iteration of BU-Attn.
It is important to note that the computations at the same level can be executed concurrently, because they only depend on the nodes' own temporal hidden features $\bar{\mathbf{H}}_t$.
Our experiments show that the bottom-up attention aggregation mechanism carries bottom-level information containing dynamic patterns up to the top levels and improves forecasting performance.
More importantly, both TD-Conv and BU-Attn can be independent components to be used in the fitting process (i.e., step $t$ of RNN) for better prediction accuracy, or to be used after the temporal patterns are obtained, trading accuracy for faster computation.
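Algorithm~\ref{algo_attn} can be sketched in NumPy as follows (a simplified single-head version with random projections standing in for $\phi_q,\phi_k$; the 3-level layout is illustrative). The value update at each level follows Eq.~\eqref{eq_value}:

```python
import numpy as np

rng = np.random.default_rng(2)
d_h = 4
# Hypothetical 3-level tree: level 1 = {0}, level 2 = {1, 2}, level 3 = {3..7}.
levels = {1: [0], 2: [1, 2], 3: [3, 4, 5, 6, 7]}
H_bar = rng.normal(size=(8, d_h))        # original temporal features
H_hat = rng.normal(size=(8, d_h))        # top-down (TD-Conv) features
Wq, Wk = rng.normal(size=(d_h, d_h)), rng.normal(size=(d_h, d_h))

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

H_tilde = np.empty_like(H_bar)
V = {3: H_hat[levels[3]]}                # leaves: value = TD-Conv feature
H_tilde[levels[3]] = H_hat[levels[3]]
for l in (2, 1):                         # second-last level up to the root
    Q = H_bar[levels[l]] @ Wq.T          # parents generate queries ...
    K = H_bar[levels[l + 1]] @ Wk.T      # ... keys come from one level below
    attn = softmax(Q @ K.T / np.sqrt(d_h))
    H_tilde[levels[l]] = attn @ V[l + 1]
    # strengthen child/self signal, damp the parent's own TD feature
    V[l] = H_tilde[levels[l]] - H_hat[levels[l]] + H_bar[levels[l]]
assert H_tilde.shape == (8, d_h)
assert np.allclose(attn.sum(axis=1), 1.0)
```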
\subsubsection{\textbf{Base Forecast Module}}
This module generates predictions from the dynamic features; the predictions can be either probabilistic or point estimates. Our framework is agnostic to the forecasting model, such as MLP~\cite{gardner1998artificial}, seq2seq \cite{sutskever2014sequence}, and attention networks \cite{vaswani2017attention}.
We employ an MLP to generate the base point estimates for its flexibility and simplicity.
In order to avoid losing temporal-feature information in the cascade of hierarchical learning, we apply a residual connection between the temporal feature extraction module and the base forecast module as follows:
\begin{equation}
\label{eq:attn_filter}
{z}_t = \sigma(\mathrm{MLP}(\Tilde{h}_t)), \qquad
h_t = (1 - {z}_t) \Tilde{h}_t + {z}_t \bar{h}_t.
\end{equation}
Then we apply an MLP to generate the base forecasts (all nodes share the same generation model):
\begin{equation}
\label{eq:pred}
\hat{y}^i_t = \mathrm{MLP}(h^i_t), \quad \hat{\mathbf{y}}_t = [\hat{y}^1_t, \dots, \hat{y}^{n}_t].
\end{equation}
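The gated residual connection of Eq.~\eqref{eq:attn_filter} and the shared forecast head of Eq.~\eqref{eq:pred} amount to the following (a minimal NumPy sketch with single-layer stand-ins for the MLPs; all weights are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n, d_h = 8, 4
H_tilde = rng.normal(size=(n, d_h))      # structure-aware features (BU-Attn)
H_bar = rng.normal(size=(n, d_h))        # raw temporal features
Wg = rng.normal(size=(d_h, d_h))         # gate "MLP" (one layer for brevity)
Wo = rng.normal(size=(1, d_h))           # forecast head shared by all nodes

z = 1.0 / (1.0 + np.exp(-(H_tilde @ Wg.T)))   # z_t = sigma(MLP(h_tilde))
H = (1 - z) * H_tilde + z * H_bar             # gated residual connection
y_hat = (H @ Wo.T).squeeze(-1)                # one base forecast per node
assert y_hat.shape == (n,)
```

The gate $z_t$ interpolates per-dimension between the structure-aware feature and the raw temporal feature, so the hierarchy can be ignored where it is uninformative.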
\subsection{Task-based Constrained Optimization}
In this section, we introduce a task-based optimization module that leverages the deep neural optimization layer to achieve targets in realistic scenarios, while satisfying coherence and task-based constraints.
\subsubsection{\textbf{Optimization with Coherence Constraint in Forecasting Task}}
We formally define the HTS forecasting task as a prediction-and-optimization problem in this section. As shown in Eq.~\eqref{eq:recon},
reconciliation of the base forecasts can be represented as a constrained optimization problem~\cite{rangapuram2021end} with two categories of constraints: equality constraints representing coherency, and inequality constraints
restricting the reconciliation, i.e., the adjustment of the base forecasts is limited to a specific range to reduce the deterioration of forecast performance,
\begin{equation}
\label{eq:recon}
\begin{aligned}
&\Tilde{\mathbf{y}}_t = \mathop{\arg\min}_{\mathbf{y} \in \mathbb{R}^n} \|\mathbf{y} - \hat{\mathbf{y}}_t \|_2
= \mathop{\arg\min}_{\mathbf{y} \in \mathbb{R}^n} \frac{1}{2}\mathbf{y}^{\mathsf{T}}\mathbf{y} - \hat{\mathbf{y}}_t^{\mathsf{T}}\mathbf{y} \\
&\text{subject to}
\begin{cases}
\mathbf{A}{\mathbf{y}} = \mathbf{0},\\
\delta_1 \mathrm{abs}(\hat{\mathbf{y}}_t) - \varepsilon_1 \le \mathbf{y} - \hat{\mathbf{y}}_t \le \delta_2 \mathrm{abs}(\hat{\mathbf{y}}_t) + \varepsilon_2,
\end{cases}
\end{aligned}
\end{equation}
where $\hat{\mathbf{y}}$ is the base forecasts without reconciliation, and $\delta_i,\varepsilon_i, i=1,2$ are some predefined constants.
Recall that the aforementioned end-to-end optimization architecture~\cite{rangapuram2021end} provides a closed-form solution for reconciliation problem. It projects the base forecasts into the solution space effectively by multiplying reconciliation matrix, which only depends on aggregation matrix $\mathbf{S}$ and is thus easy to calculate (for convenience, the details are supplied in Appendix A).
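For the equality-only case (the coherence constraint alone), this closed-form projection can be verified directly: minimizing $\|\mathbf{y}-\hat{\mathbf{y}}\|_2$ subject to $\mathbf{A}\mathbf{y}=\mathbf{0}$ is an orthogonal projection onto the null space of $\mathbf{A}$ (a NumPy sketch; the base forecasts are illustrative):

```python
import numpy as np

# min ||y - y_hat||^2  s.t.  A y = 0  has the closed-form projection
# y = M y_hat with M = I - A^T (A A^T)^{-1} A.
S_sum = np.array([[1, 1, 1, 1, 1],
                  [1, 1, 0, 0, 0],
                  [0, 0, 1, 1, 1]], dtype=float)
A = np.hstack([np.eye(3), -S_sum])
M = np.eye(8) - A.T @ np.linalg.solve(A @ A.T, A)

y_hat = np.array([9.2, 3.1, 6.3, 2.0, 1.2, 4.1, 0.4, 1.6])  # incoherent
y = M @ y_hat
assert np.allclose(A @ y, 0)             # reconciled forecasts are coherent
# The inequality (range) constraints of the reconciliation problem admit no
# such closed form, which motivates a differentiable QP layer instead.
```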
However, this procedure only considers the aggregation constraints without limiting the adjustment scale, which may sometimes cause the reconciled results $\Tilde{\mathbf{y}}$ to become unreasonable, e.g., a negative value for small base forecasts in demand forecasting.
In addition, the loss based on the reconciliation projection distorts gradient magnitudes and directions, which may prevent the model from converging to the optimum.
Moreover, it is not a general solution for real-world scenarios, where more complex task-related constraints and targets are involved.
To keep the reconciliation result reasonable and the training efficient,
we utilize the neural optimization layer OptNet to solve the constrained reconciliation problem, which is essentially a quadratic programming (QP) problem. The Lagrangian of the general QP is defined in Eq.~\eqref{eq: optnet} \cite{amos2017optnet}, with equality constraints $\mathbf{A}\mathbf{z} = \mathbf{b}$ and inequality constraints $\mathbf{G}\mathbf{z} \le \mathbf{h}$:
\begin{equation}
\label{eq: optnet}
L(\mathbf{z}, {\nu}, {\lambda}) = \frac{1}{2}\mathbf{z}^{\mathsf{T}}\mathbf{Q}\mathbf{z} - \mathbf{q}^{\mathsf{T}}\mathbf{z} + \nu^{\mathsf{T}}(\mathbf{A}\mathbf{z}-\mathbf{b}) + \lambda^{\mathsf{T}}(\mathbf{G}\mathbf{z}-\mathbf{h}).
\end{equation}
When applied to hierarchical reconciliation problems (where we take the special range constraint $\mathbf{y} \ge \mathbf{0}$ as an example),
the Lagrangian can be revised to
\begin{equation}
\label{eq: hier_optnet}
L(\mathbf{y}, \nu, \lambda) = \frac{1}{2}\mathbf{y}^{\mathsf{T}}\mathbf{y} - \hat{\mathbf{y}}^{\mathsf{T}}\mathbf{y} + \nu^{\mathsf{T}}\mathbf{A}\mathbf{y} + \lambda^{\mathsf{T}}(-\mathbf{I}\mathbf{y}),
\end{equation}
where $\nu,\lambda$ are the dual variables of equality and inequality constraints respectively. Then we can derive the differentials of these variables according to the KKT condition, and apply linear differential theory to calculate the Jacobians for backpropagation. The detail is as follows
\begin{equation}
\resizebox{1.\linewidth}{!}{$
\begin{bmatrix}
\mathbf{I} & -\mathbf{I} &
\mathbf{A}^\mathsf{T} \\
diag(\lambda)(-\mathbf{I}) & diag(-\mathbf{y}) & \mathbf{0} \\
\mathbf{A} & \mathbf{0} & \mathbf{0}
\end{bmatrix}
\begin{bmatrix}
d\mathbf{y} \\
d\lambda \\
d\nu
\end{bmatrix}
= -
\begin{bmatrix}
-d\hat{\mathbf{y}} + d\mathbf{A}^{\mathsf{T}}\nu \\
\mathbf{0} \\
d\mathbf{A}\, \mathbf{y}
\end{bmatrix}
\mathrm{,}$}
\end{equation}
where $\mathrm{diag}(\cdot)$ denotes a diagonal matrix. The left-hand side encodes the KKT conditions of the constrained reconciliation problem, and solving the system yields the derivatives of the solution with respect to the model parameters. In practice, we apply the OptNet layer, which solves this linear system efficiently to differentiate through the argmin of the QP. In this way, our framework achieves end-to-end learning by directly generating the reconciled forecasts, while the derivatives are computed and the gradients backpropagated through the optimization model automatically.
\subsubsection{\textbf{Optimization with Task-based Constraints and Target for Real-World Scenarios}}
The coherence constraint alone is sufficient in forecasting tasks when the only concern is prediction accuracy, which is, however, rarely the case in real-world tasks with specific limitations and practical targets. Such tasks can be further formulated as follows:
\begin{equation}
\label{eq:recon_task}
\begin{aligned}
&\mathcal{J}(\hat{\mathbf{y}}) = \mathop{\arg\min}_{\mathbf{y}}{f(\hat{\mathbf{y}}, \mathbf{y})}\\
\text{subject to}
&\begin{cases}
\mathbf{A}\mathbf{y} = \mathbf{0}, \; e_j(\mathbf{y}, \hat{\mathbf{y}}) = 0, \;j=1, \dots, n_{\text{eq}},\\
g_i(\mathbf{y}, \hat{\mathbf{y}}) \le 0, i=1, \dots, n_{\text{ineq}},
\end{cases}
\end{aligned}
\end{equation}
where $f$ is the task-based quadratic objective, $e_j$ denotes a task-based equality constraint other than the coherence constraint, $n_{\text{eq}}$ is the number of such equality constraints, $g_i$ is a task-based inequality constraint, and $n_{\text{ineq}}$ is the number of task-based inequality constraints.
Eq.~\eqref{eq:recon_task} can be efficiently solved with a differentiable QP layer in an end-to-end fashion \cite{donti2017task}, where we transform our target into a quadratic loss and add the equality/inequality constraints. We construct a scheduling experiment on the M5 dataset in the following section to validate the superiority of our framework on realistic tasks.
\section{Experiments}
\begin{table}[t]
\centering
\resizebox{0.8\linewidth}{!}{
\begin{tabular}{|c|c|c|c|c|}
\hline
Dataset & levels & nodes & structure & freq \\ \hline
Labour & 4 & 57 & 1, 8, 16, 32 & 1M \\ \hline
Tourism & 4 & 89 & 1, 4, 28, 56 & 3M \\ \hline
M5 & 5 & 114 & 1, 3, 10, 30 ,70 & 1D\\ \hline
\end{tabular}}
\caption{Dataset statistics. Structure column shows the number of nodes at each level from top to bottom, as for frequency (`freq'), 1D means one day and 3M means three months.}
\label{tab:dataset_statisitic}
\end{table}
\newcolumntype{s}{>{\hsize=1.4\hsize\small}X}
\begin{table*}[htbp]
\centering
\resizebox{.9\linewidth}{!}{
\begin{tabularx}{\textwidth}{s|XX|XX|XX}
\hline
\textbf{Model} & \multicolumn{2}{X}{Tourism} & \multicolumn{2}{X}{Labour} & \multicolumn{2}{X}{M5} \\
\hline
Metric & MAPE & w-MAPE & MAPE & w-MAPE & MAPE & w-MAPE \\
\hline
ARIMA-BU & 0.2966 & 0.1212 &0.0467 & 0.0457 & 0.1134 & \textbf{0.0638}\\
ARIMA-MINT-SHR & 0.2942 & 0.1237 & 0.0506 & 0.0471 & 0.1140 & 0.0675 \\
ARIMA-MINT-OLS & 0.3030 & 0.1254 & 0.0505 & 0.0467 & 0.1400 & 0.0733 \\
ARIMA-ERM & 1.6447 & 0.6198 & 0.0495 & 0.0402 & 0.1810 & 0.1163 \\
PERMBU-MINT& 0.2947(0.0031) & 0.1057(0.0004) & 0.0497(0.0003) & 0.0453(0.0002) & 0.1176(0.0005) & 0.0759(0.0007)\\
\hline
DeepAR-Proj & 0.3214(0.0202) & 0.1171(0.0116) & 0.0423(0.0016) & 0.0290(0.0013)
&0.1546(0.0165) & 0.0951(0.0195) \\
DeepVAR-Proj & 0.4214(0.0548) & 0.2162(0.0307) & 0.0936(0.0206) & 0.0884(0.0235) &
0.2019(0.0279) & 0.1615(0.0311) \\
NBEATS-Proj & 0.3295(0.0231) & 0.1359(0.0264) & 0.0355(0.0018) & 0.0268(0.0043)
&0.2256(0.0399) & 0.1952(0.0637) \\
INFORMER-Proj & 0.5401(0.0339) & 0.5566(0.0261) & 0.1537(0.0685) & 0.1455(0.0683)
&0.3123(0.0735) & 0.3098(0.0708) \\
AUTOFORMER-Proj & 0.3983(0.0678) & 0.1862(0.0596) & 0.0455(0.0037) & 0.0367(0.0016)
&0.1654(0.0153) & 0.1308(0.0560) \\
FEDFORMER-Proj & 0.3741(0.0291) & 0.1685(0.0180) & 0.0440(0.0038) & 0.0334(0.0024)
&0.1505(0.0139) & 0.1188(0.0044) \\
DeepAR-BU & 0.3065(0.0123) & 0.1154(0.0097) & 0.0378(0.0014) & 0.0278(0.0022)
& 0.1151(0.0017) & 0.0686(0.0012) \\
DeepVAR-BU & 0.4135(0.0562) & 0.2195(0.0370) & 0.1112(0.0371) & 0.1008(0.0352)
& 0.1851(0.0153) & 0.1494(0.0140) \\
NBEATS-BU & 0.2904(0.0308) & 0.1259(0.0183) & 0.0393(0.0031) & 0.0310(0.0046)
&0.1740(0.0221) & 0.1398(0.0294) \\
INFORMER-BU & 0.5694(0.0065) & 0.5707(0.0072) &0.1654(0.0824) & 0.1580(0.0840) & 0.3128(0.0728) &0.3099(0.0706) \\
AUTOFORMER-BU & 0.3787(0.0578) & 0.1868(0.0084) & 0.0519(0.0034) & 0.0505(0.0011)
&0.1506(0.0146) & 0.1143(0.0049) \\
FEDFORMER-BU & 0.3408(0.0099) & 0.1544(0.0097) & 0.0464(0.0041) & 0.0369(0.0027)
&0.1424(0.0141) & 0.1081(0.0034) \\
\hline
\textbf{SLOTH(Opt)(ours)} & 0.2613(0.0017) & 0.1032(0.0012) & \textbf{0.0328(0.0006)} & \textbf{0.0183(0.0008)} & \textbf{0.1116(0.0018)} & 0.0696(0.0017) \\
SLOTH(Proj) & 0.2780(0.0051) & 0.1098(0.0008) & 0.0370(0.0052) & 0.0228(0.0072)
& 0.1121(0.0014) & 0.0704(0.0005) \\
SLOTH(BU) & \textbf{0.2583(0.0015)} & \textbf{0.0991(0.0021)} & 0.0391(0.0051) & 0.0248(0.0065)
& 0.1127(0.0017)& 0.0703(0.0023) \\
\hline
\end{tabularx}}
\caption{MAPE and weighted-MAPE (w-MAPE) metric values over five independent runs for baselines such as traditional reconciliation methods and end-to-end methods, as well as our approach. The value in brackets is the variance over the five runs. }
\label{tab:main_results}
\end{table*}
In this section, we conduct extensive evaluations on real-world hierarchical datasets. Firstly, we evaluate the performance of our framework, and compare our approach against the traditional statistical method and end-to-end model (HierE2E). We then add more practical constraints to M5 dataset, building a meaningful optimization target to solve an inventory management problem, and again evaluate various approaches for hierarchical tasks under these realistic scenarios.
The application of our framework to a real-world cloud resource scheduling task at Ant Group is presented in Appendix E.
\subsection{Real-world datasets}
We take three publicly available datasets with standard hierarchical structures.
\begin{itemize}
\item{
Tourism \cite{bushell2001tourism,athanasopoulos2009hierarchical} includes an 89-series geographical hierarchy with quarterly observations of Australian tourism flows from 1998 to 2006, divided into 4 levels. The bottom level contains 56 series, the aggregated levels contain 33 series, and the prediction length is 8.
This dataset is frequently referenced in hierarchical forecasting studies
\cite{hyndman2018forecasting}. }
\item{
Labour \cite{australiaLabour} includes monthly Australian employment data from Feb. 1978 to Dec. 2020. Using the included category labels, we construct a 57-series hierarchy divided into 4 levels. Specifically, the bottom level contains 32 series, the aggregated levels contain 25 series in total, and the prediction length is 8.}
\item M5 \cite{han2021simultaneously} describes daily sales from Jan. 2011 to June 2016 of various products. We construct a 5-level hierarchical structure (state, store, category, department, and product) from the original dataset, resulting in 70 bottom-level time series and 44 aggregated-level time series. The prediction length is 8.
\end{itemize}
\subsection{Results Analysis}
\label{sec:m5_sched}
\begin{figure*}
\centering
\includegraphics[width=.9\linewidth]{figures/m5_shed_new.jpg}
\caption{Results of 5 independent runs of the M5 scheduling problem for 7-day prediction (lower MAPE and lower task loss are better). As expected, the network trained with the prediction loss achieves the highest accuracy. Our task-net improves the task-cost performance by 36.8\% compared to the Prediction Net, and improves prediction accuracy by 53.2\% compared to the Weighted Loss Net.
}
\label{fig:inv_uni}
\end{figure*}
In this section, we validate the overall performance of our method on the prediction task over three public datasets. We report the scale-free metric \textit{MAPE} and the scaled metric \textit{weighted-MAPE} to measure accuracy. We apply several representative state-of-the-art forecasting baselines (details in Appendix B), including DeepAR \cite{salinas2020deepar}, DeepVAR \cite{salinas2019high}, NBEATS \cite{oreshkin2019n}, and Transformer-based methods \cite{zhou2021informer,wu2021autoformer, zhou2022fedformer}. We then combine these methods with the traditional bottom-up aggregation mechanism and the closed-form solution \cite{rangapuram2021end} (DeepVAR-Proj is HierE2E) to generate reconciled results.
Table~\ref{tab:main_results} reports the results. The top part shows the results of traditional statistical methods, the middle part those of deep neural network methods with the closed-form solution and bottom-up reconciliation, and the bottom part the results of our approach and of combining our forecasting mechanism with the traditional bottom-up (BU) and closed-form projection (Proj) methods.
We can see that the traditional statistical methods perform poorly compared with deep neural networks.
NBEATS performs best on the Tourism dataset; in particular, NBEATS-BU performs best on MAPE while NBEATS-Proj performs best on weighted-MAPE. However, the Informer-related methods perform poorly. One possible explanation is that they require much larger training datasets.
One can observe that the models performing best on MAPE do not perform as well on weighted-MAPE, which is caused by the different contributions of each level to the overall performance, e.g., the bottom level contributes more to MAPE while the higher levels contribute more to weighted-MAPE.
Our proposed approach (SLOTH) shows superior performance compared to the other methods on both MAPE and weighted-MAPE in most scenarios. Specifically, SLOTH achieves the best performance among all models on the Labour and M5 datasets, ranks second on the Tourism dataset, and third on w-MAPE for M5. Besides the optimization-based reconciliation, we also combine our forecasting mechanism with the aforementioned bottom-up and closed-form projection methods, and both combinations achieve higher accuracy than the baselines. Even SLOTH-BU, with its primitive reconciliation, achieves the best performance on the Tourism dataset. The smaller variances also indicate that our framework performs more stably across various scenarios.
In conclusion, our SLOTH mechanism improves forecasting performance, and the optimized reconciliation generates reasonable coherent predictions without much loss of accuracy. We also assess the performance gains across all levels and the running time in Appendix D, and the ablation study for each component is presented in Appendix E.
\subsection{M5 Scheduling Task}
In this section, we apply our framework to realistic scenarios with meaningful task-based constraints and targets, using a scheduling task for product sales designed on the M5 dataset.
Specifically, we define a meaningful task that minimizes the cost of scheduling and inventory under practical conditions:
\begin{itemize}
\item Underestimation and overestimation contribute differently to the final cost. Underestimation implies that the store needs to order more of the commodity to fulfill the demand, which increases the scheduling cost. Overestimation implies that the store has to keep the extra commodity, which increases the inventory cost.
\item Different levels make different weighted contributions due to aggregation, e.g., the scheduling and inventory costs at the top levels are lower than those at the bottom levels because the company scale is larger at the top levels.
\item Commodities of different types have different inventory and scheduling costs. For instance, the shelf life of food products is shorter than that of household products, so their inventory cost is lower (shorter storage time) while their scheduling cost is higher (faster transportation is needed).
\end{itemize}
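The asymmetric, level-weighted task cost described above can be written down directly (a hypothetical NumPy cost function; the penalty weights and values are illustrative, and the exact settings are given in Appendix F):

```python
import numpy as np

# Hypothetical asymmetric task cost: underestimation incurs a scheduling
# penalty, overestimation an inventory penalty, with per-node level weights.
def task_cost(y_pred, y_true, under_w, over_w, level_w):
    err = y_true - y_pred
    under = np.maximum(err, 0.0)         # demand above forecast -> reorder
    over = np.maximum(-err, 0.0)         # forecast above demand -> store
    return float(np.sum(level_w * (under_w * under + over_w * over)))

y_true = np.array([10.0, 4.0, 6.0])
y_pred = np.array([9.0, 5.0, 6.0])
level_w = np.array([0.5, 1.0, 1.0])      # top level penalised less
cost = task_cost(y_pred, y_true, under_w=2.0, over_w=1.0, level_w=level_w)
assert np.isclose(cost, 2.0)             # 0.5*2*1 (under) + 1*1*1 (over)
```

Such a piecewise-linear cost can be cast as the quadratic objective $f$ of Eq.~\eqref{eq:recon_task} with auxiliary variables, which is how it enters the differentiable QP layer.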
\textbf{Settings.} We assume scheduling takes place every week. We set prediction length to 7 and context length to 14. The other penalty settings and target are detailed in Appendix F. We then compare the outcome from the following models:
\begin{enumerate}
\item Prediction Net: prediction model that takes the prediction metric (MAE) as the loss for optimization, and bottom-up approach for coherency.
\item Weighted Loss Net: prediction model that takes task cost (weighted-MAE) as the loss for optimization, and bottom-up for coherency.
\item SLOTH: our end-to-end approach that takes both cost and constraints, with the task loss for optimization.
\end{enumerate}
As shown in Figure~\ref{fig:inv_uni}, the Prediction Net model performs the best on prediction (MAPE), which is its training objective. As for the task cost, which is what the scheduler really cares about in practice, our SLOTH framework outperforms the Prediction Net by a large margin. Specifically, our model improves the task-cost performance by 36.8\% compared to the Prediction Net, while at the same time achieving a similar task-loss target as the Weighted Loss Net, but with an improvement of 53.2\% in prediction accuracy.
\section{Conclusion}
In this paper, we introduced a novel task-based structure-learning framework (SLOTH) for HTS. We proposed two tree-based mechanisms that utilize the hierarchical structure for HTS forecasting. The top-down convolution integrates the temporal features of the top level to enhance the stability of dynamic patterns, while the bottom-up attention incorporates the features of the bottom level to improve the coherency of the temporal features. In the reconciliation step, we applied a deep neural optimization layer to produce controllable coherent results, which also accommodates complicated realistic task-based constraints and targets under coherency without requiring any explicit post-processing step. We unified the goals of forecasting and decision-making and achieved an end-to-end framework. We conducted extensive empirical evaluations on real-world datasets, where the competitiveness of our method under various conditions against other state-of-the-art methods was demonstrated.
Furthermore, our ablation studies proved the efficacy of each component we designed.
Our method has also been deployed in the production environment at Ant Group for cloud resource scheduling.
\input{aaai23.bbl}
\subsection{Keywords}
exponential series, infinite series, Cauchy-Euler operator
\subsection{Mathematical Classification}
Mathematics Subject Classification 2010: 11L03, 30E10, 30K05, 30B10, 30C10, 30D10
\end{abstract}
\tableofcontents{}
\section{Introduction}
\subsection{General}
Our study focuses on a generalization of the quadratic exponential power series
\begin{equation}
f(x,2)=\sum^{\infty}_{k=1}e^{-x\cdot{k^2}} \label{eqn10}
\end{equation}
with $x>0,x\in{R}$. This function is very steep as $x$ approaches zero, diverging rapidly, and it decays quickly to zero as $x$ increases towards infinity. For negative $x$ values the series diverges. Our generalization introduces a parameter $\alpha$
\begin{equation}
f(x,\alpha)=\sum^{\infty}_{k=1}e^{-x\cdot{k^{\alpha}}} \label{eqn20}
\end{equation}
\begin{figure}[ht]
\centering
\includegraphics[width=1.00\textwidth]{general.jpg}
\caption{The generalized exponential series as a function of x with $\alpha$ as a parameter, excluding the origin, where it is singular}
\label{fig:general}
\end{figure}
The range of validity is $x,\alpha\in{R}, x > 0$. By the ratio test the generalized series converges absolutely when $\alpha > 0$. Figure \ref{fig:general} indicates how the family of curves saturates when $\alpha > 2.5$. The limit comes out as
\begin{equation}
\lim_{\alpha\rightarrow\infty} f(x,\alpha)=e^{-x}
\end{equation}
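A quick numeric check of the convergence and of this limit (a Python sketch; the truncation at 200 terms is an arbitrary choice):

```python
import math

def f(x, alpha, terms=200):
    """Truncated generalized exponential series."""
    return sum(math.exp(-x * k**alpha) for k in range(1, terms + 1))

# For large alpha every term with k >= 2 is negligible, so f -> exp(-x).
val = f(1.0, 50.0)
limit = math.exp(-1.0)
```

For any finite $\alpha>0$ the sum stays strictly above $e^{-x}$, since all terms are positive.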
The generalized exponential series has a complicated behavior when the argument $x$ is varied over the complex plane. See the crude illustrations below (Figures \ref{fig:reseries1}, \ref{fig:imseries1}, \ref{fig:reseries28}, \ref{fig:imseries28}), generated with National Instruments LabVIEW 2016. The origin is on the back plane lower corner center and the $x$ axis is directed towards the viewer. They show part of the behavior over the first and fourth quadrants. We observe that the real part behaves as an even function with respect to the $x$ axis while the imaginary part is odd. The scales are only approximate. The rest of the illustrations are created with regular Excel spreadsheet graphics.
\begin{figure}[ht]
\centering
\includegraphics[width=0.800\textwidth]{reseries1.jpg}
\caption{The real part of the generalized exponential series over the complex plane with $\alpha$=1}
\label{fig:reseries1}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.800\textwidth]{imseries1.jpg}
\caption{The imaginary part of the generalized exponential series over the complex plane with $\alpha$=1}
\label{fig:imseries1}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.800\textwidth]{reseries28.jpg}
\caption{The real part of the generalized exponential series over the complex plane with $\alpha$=2.8}
\label{fig:reseries28}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.800\textwidth]{imseries28.jpg}
\caption{The imaginary part of the generalized exponential series over the complex plane with $\alpha$=2.8}
\label{fig:imseries28}
\end{figure}
We are interested in studying the behavior within the ranges of $x,\alpha\in{R}$. We also study the asymptotic behavior at large $x\in{R}$. That is usually estimated as
\begin{equation}
f(x,\alpha){\approx}e^{-{x}}, x > 1 \label{eqn45}
\end{equation}
being the first term of the series. We wish to test whether this approximation is superior to another estimate developed from the series. It is obvious that this is the main term at very large values of the argument. The questions are: what is the range of its validity, and how good is our new approximation? In order to address these tasks we need to transform or break down the exponential series in some manner.
\subsection{Attempts}
The exponential series effectively resists all common methods of summation that would transform it to another form better suited for analysis. The literature offers very little help in this respect. Jacobi theta functions are distant relatives of this series. However, they are generally treated in different ways. We can offer a few na\"{\i}ve attempts in order to penetrate into the internals. We can display the series term-wise and then collect the terms by taking a common factor at each step while progressing towards infinity.
\begin{equation}
f(x,2)=e^{-x}+e^{-4x}+e^{-9x}+e^{-16x}+e^{-25x}+e^{-36x}+...
\end{equation}
\begin{equation}
=e^{-x}(1+e^{-3x}(1+e^{-5x}(1+..)))
\end{equation}
Since with $n=0,1,2...$
\begin{equation}
(n+1)^2=\sum^{n}_{j=0}(2j+1) \label{eqn50}
\end{equation}
we get
\begin{equation}
f(x,2)=\sum^{\infty}_{k=1}{(e^{-x}e^{-3x}e^{-5x}e^{-7x}...e^{-(2k-1)x})} \label{eqn60}
\end{equation}
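The telescoping product form above can be verified numerically (a sketch; the tested range and tolerance are arbitrary choices):

```python
import math

def term_as_product(k, x):
    """e^{-x k^2} built as the product e^{-x} e^{-3x} ... e^{-(2k-1)x},
    using (n+1)^2 = 1 + 3 + 5 + ... + (2n+1)."""
    prod = 1.0
    for j in range(1, k + 1):
        prod *= math.exp(-(2 * j - 1) * x)
    return prod

x = 0.5
direct = [math.exp(-x * k**2) for k in range(1, 6)]
products = [term_as_product(k, x) for k in range(1, 6)]
```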
On the other hand we can first expand the series and then recollect to get
\begin{equation}
f(x,2)=\sum^{\infty}_{k=1}[{1-xk^2(1-\frac{xk^2}{2}(1-\frac{xk^2}{3}(1-\frac{xk^2}{4}(1-\frac{xk^2}{5}(1-\frac{xk^2}{6}(...))))))}] \label{eqn65}
\end{equation}
We can also force a regular Taylor's expansion to make it look like
\begin{equation}
f(x,2)=\sum^{\infty}_{k=1}{[1+\sum^{\infty}_{n=1}{\frac{(xk^2-2n)(xk^2)^{2n-1}}{(2n)!}}]} \label{eqn67}
\end{equation}
Unfortunately, these expressions do not seem to be helpful at this time.
The treatment of the subject starts in Chapter 2 by decomposing the original exponential series into nested series. Chapter 3 proceeds by transforming the series, and a new polynomial is identified. It also offers recursion formulas and the generating function for it. Chapter 4 handles the differential operator acting on an exponential function, which is equivalent to our exponential series. We handle there the eigenfunctions and eigenvalues of this operator. In Chapter 5 we study the asymptotic behavior of the series with the aid of the new results. Appendix \ref{apx:appa} displays the main properties of the Cauchy-Euler operator. In Appendix \ref{apx:appb} we study some properties of the new polynomial. Appendix \ref{apx:appc} briefly treats integration and differentiation properties of the series at hand. Appendix \ref{apx:appd} shows some results on generating new series expressions for some common functions. Our presentation is made in a condensed way, leaving out all formal proofs.
\section{Decomposition of the Series}
\subsection{Preliminaries}
We use the notation below while processing the expressions
\begin{equation}
z(k)=\alpha\cdot{ln(k)} \label{eqn1000}
\end{equation}
and
\begin{equation}
k^{\alpha}=e^{{\alpha}\cdot{ln(k)}}=e^z \label{eqn1010}
\end{equation}
Here $x,\alpha{\in{R}}$. The exponentially nested structure needs to be broken down into some other functional dependence to allow easier handling in analysis.
\subsection{Exponential Series of an Exponential}
We start by expanding the exponential function as a Taylor's power series
\begin{equation}
f(x,\alpha)=\sum^{\infty}_{k=1}e^{-x\cdot{k^{\alpha}}}=\sum^{\infty}_{k=1}{\sum^{\infty}_{n=0}{\frac{(-x)^{n}\cdot{e^{nz}}}{n!}}} \label{eqn1020}
\end{equation}
We expand further the term $e^{nz}$ as a power series getting
\begin{equation}
f(x,\alpha)=\sum^{\infty}_{k=1}{\sum^{\infty}_{n=0}{\sum^{\infty}_{m=0}{\frac{(-x)^{n}n^{m}z^{m}}{m!n!}}}} \label{eqn1030}
\end{equation}
Then we swap the summations $n,m$ to get
\begin{equation}
f(x,\alpha)=\sum^{\infty}_{k=1}{\sum^{\infty}_{m=0}{\sum^{\infty}_{n=0}{\frac{(-x)^{n}n^{m}z^{m}}{m!n!}}}} \label{eqn1040}
\end{equation}
This is also equal to
\begin{equation}
f(x,\alpha)=\sum^{\infty}_{k=1}{[e^{-x}+\sum^{\infty}_{m=1}{\frac{z^{m}}{m!}\sum^{\infty}_{n=0}{\frac{(-x)^{n}n^{m}}{n!}}}]} \label{eqn1050}
\end{equation}
\subsection{Using a Function Instead of a Sum}
We define a function formed by the last nested sum as
\begin{equation}
S(x,m)=\sum^{\infty}_{n=0}{\frac{(-x)^{n}n^{m}}{n!}} \label{eqn1060}
\end{equation}
By using the property in Appendix \ref{apx:appa}, equation (\ref{eqn10020}), we can express it as
\begin{equation}
S(x,m)=(x\partial_x)^m\sum^{\infty}_{n=0}{\frac{(-x)^{n}}{n!}}=(x\partial_x)^{m}e^{-x} \label{eqn1070}
\end{equation}
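As a sanity check, the series $S(x,m)$ can be compared with the first two applications of $x\partial_x$ to $e^{-x}$, which direct differentiation gives as $-xe^{-x}$ and $(x^2-x)e^{-x}$ (a Python sketch; the truncation length is arbitrary):

```python
import math

def S_series(x, m, terms=120):
    """S(x, m) summed directly from its defining series."""
    total = 0.0
    fact = 1.0
    for n in range(terms):
        if n > 0:
            fact *= n             # fact = n!
        total += (-x) ** n * n ** m / fact
    return total

x = 1.7
# (x d/dx) e^{-x} = -x e^{-x}, (x d/dx)^2 e^{-x} = (x^2 - x) e^{-x}
s1 = S_series(x, 1)
s2 = S_series(x, 2)
e = math.exp(-x)
```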
Then we can write our function as
\begin{equation}
f(x,\alpha)=\sum^{\infty}_{k=1}{\sum^{\infty}_{m=0}{\frac{z^{m}}{m!}S(x,m)}} \label{eqn1080}
\end{equation}
and equivalently as follows
\begin{equation}
f(x,\alpha)=\sum^{\infty}_{k=1}{\sum^{\infty}_{m=0}{\frac{z^{m}}{m!}(x\partial_x)^{m}e^{-x}}} \label{eqn1090}
\end{equation}
\begin{equation}
=\sum^{\infty}_{k=1}{K(x,z)}e^{-x} \label{eqn1100}
\end{equation}
$K(x,z)$ is a differential operator (recall that $z(k)=\alpha\cdot{ln(k)}$)
\begin{equation}
K(x,z)=\sum^{\infty}_{m=0}{\frac{(zx\partial_x)^{m}}{m!}}=e^{zx\partial_x} \label{eqn1110}
\end{equation}
\section{Polynomials}
\subsection{Triangle}
We can multiply the function $S(x,m)$ in equation (\ref{eqn1070}) by $e^x$ from the left and mark it as $S_m(x)$
\begin{equation}
S_m(x)=e^{x}S(x,m)=e^{x}(x\partial_x)^{m}e^{-x} \label{eqn2000}
\end{equation}
Thus the exponential functions cancel for every $m$, and $S_m(x)$ appears to be a new kind of polynomial. In the following we study some of its properties. It is notable that $S_m(x)$ and $S(x,m)$ are independent of $\alpha$. From equation (\ref{eqn2000}) we can solve for the first polynomials by direct differentiation. We get the triangle below.
\begin{equation}
S_0(x)=1 \nonumber
\end{equation}
\begin{equation}
S_1(x)=-x \nonumber
\end{equation}
\begin{equation}
S_2(x)=x^2-x \nonumber
\end{equation}
\begin{equation}
S_3(x)=-x^3+3x^2-x \nonumber
\end{equation}
\begin{equation}
S_4(x)=x^4-6x^3+7x^2-x \nonumber
\end{equation}
\begin{equation}
S_5(x)=-x^5+10x^4-25x^3+15x^2-x \nonumber
\end{equation}
\begin{equation}
S_6(x)=x^6-15x^5+65x^4-90x^3+31x^2-x \nonumber
\end{equation}
\begin{equation}
S_7(x)=-x^7+21x^6-140x^5+350x^4-301x^3+63x^2-x \nonumber
\end{equation}
\begin{equation}
S_8(x)=x^8-28x^7+266x^6-1050x^5+1701x^4-966x^3+127x^2-x \label{eqn2010}
\end{equation}
\begin{equation}
... \nonumber
\end{equation}
Figures \ref{fig:poly}, \ref{fig:polynorm} and \ref{fig:polynormlog} illustrate the first few polynomials, the latter two normalized with $m!$. The polynomial coefficients resemble the Stirling numbers of the second kind but with alternating signs.
\begin{figure}[ht]
\centering
\includegraphics[width=1.00\textwidth]{poly.jpg}
\caption{Polynomials $S_m(x)$}
\label{fig:poly}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=1.00\textwidth]{polynorm.jpg}
\caption{Normalized polynomials as $\frac{S_m(x)}{m!}$}
\label{fig:polynorm}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=1.00\textwidth]{polynormlog.jpg}
\caption{Normalized polynomials as $\frac{S_m(x)}{m!}$ in log scale. Note that the logarithmic illustration has cut the negative values.}
\label{fig:polynormlog}
\end{figure}
\subsection{Recursion Relations for the Polynomials}
We can find a recursion relation for the coefficients of the powers $b_{j}^{n}$, ($j$ row, $n$ power), by looking at the elements on the row immediately above in the triangle
\begin{equation}
(-1)^{n+j}b_{j}^{n}x^{n-j}+(-1)^{n+j+1}b_{j+1}^{n}x^{n-j-1}+... \label{eqn2030}
\end{equation}
\begin{equation}
...(-1)^{n+j+1}b_{j+1}^{n+1}x^{n+1-j-1}+... \label{eqn2040}
\end{equation}
We get
\begin{equation}
b_{j+1}^{n+1}=(n-j){b_{j}^{n}}+{b_{j+1}^{n}} \label{eqn2050}
\end{equation}
where we have only the absolute values of the coefficients. This recursion is too clumsy for use in analysis. In the following we develop proper recursion relations for the $S_m(x)$. The simplest one follows directly from equation (\ref{eqn2000}) by multiplying it from the left with $x\partial_x$.
\begin{equation}
xS_n^{'}(x)=xS_n(x)+S_{n+1}(x) \ \ \ \ \ \ (RR1) \label{eqn2060}
\end{equation}
or
\begin{equation}
S_{n+1}(x)=x(S_n^{'}(x)-S_n(x)) \label{eqn2070}
\end{equation}
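This recursion generates the whole triangle from $S_0=1$. A sketch on coefficient lists (entry $i$ holding the coefficient of $x^i$; the helper name is ours):

```python
def rr1_step(c):
    """One step of S_{n+1} = x (S_n' - S_n); c[i] is the coefficient of x^i."""
    n = len(c)
    # differentiate, subtract, then the final multiplication by x shifts powers up
    diff = [((i + 1) * c[i + 1] if i + 1 < n else 0.0) - c[i] for i in range(n)]
    return [0.0] + diff

S = [[1.0]]                       # S_0 = 1
for _ in range(8):
    S.append(rr1_step(S[-1]))
# S[4] should reproduce the triangle row x^4 - 6x^3 + 7x^2 - x.
```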
\subsection{Generating Function of the Polynomial}
An important step is to find the generating function for the polynomial $S_m(x)$. By multiplying equation (\ref{eqn2000}) from the left by $e^{-x}$, we get
\begin{equation}
S_{m}(x)e^{-x}=(x\partial_x)^{m}e^{-x} \label{eqn2080}
\end{equation}
Next we multiply both sides with $\frac{t^m}{m!}$ and sum the terms to obtain
\begin{equation}
\sum^{\infty}_{m=0}{\frac{S_{m}(x)e^{-x}t^{m}}{m!}}=\sum^{\infty}_{m=0}{\frac{(x\partial_x)^{m}e^{-x}t^{m}}{m!}}=e^{tx\partial{x}}e^{-x} \label{eqn2090}
\end{equation}
leading to the generating function
\begin{equation}
\sum^{\infty}_{m=0}{\frac{S_{m}(x)e^{-x}t^{m}}{m!}}=e^{-xe^{t}} \ \ \ \ \ \ (GF) \label{eqn2100}
\end{equation}
We have used the Cauchy-Euler operator properties as expressed in Appendix \ref{apx:appa}.
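The generating function can be checked numerically by regenerating the polynomials with (RR1) and summing the left-hand side (a sketch; the truncation at 30 terms is an arbitrary choice):

```python
import math

def rr1_step(c):
    """One step of S_{n+1} = x (S_n' - S_n) on a coefficient list."""
    n = len(c)
    diff = [((i + 1) * c[i + 1] if i + 1 < n else 0.0) - c[i] for i in range(n)]
    return [0.0] + diff

def poly_eval(c, x):
    return sum(cc * x**i for i, cc in enumerate(c))

x, t = 0.8, 0.3
S, c = [], [1.0]
for _ in range(30):
    S.append(c)
    c = rr1_step(c)

lhs = sum(poly_eval(S[m], x) * math.exp(-x) * t**m / math.factorial(m)
          for m in range(30))
rhs = math.exp(-x * math.exp(t))
```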
\subsection{Making Symmetric the Generating Function}
It turns out that we can symmetrize the generating function (GF) by changing the variable from $t$ to $y$ via
\begin{equation}
t=ln(y+1)
\end{equation}
we are able to write down the generating function in the following form
\begin{equation}
e^{-xy}=\sum^{\infty}_{m=0}{\frac{S_{m}(x)(ln(y+1))^{m}}{m!}} \label{eqn2102}
\end{equation}
The symmetry amounts to the fact that on the left-hand side we can swap $y\rightleftharpoons{x}$. Therefore, we can do the same swap on the right-hand side without changing the value of the expression. This is a rare symmetry property. In general, a function
\begin{equation}
f(xy)=g(x,y)
\end{equation}
can be partially differentiated to obtain
\begin{equation}
x\partial{x}g(x,y)=y\partial{y}g(x,y)
\end{equation}
thus containing the Cauchy-Euler differential operator.
\subsection{Further Recursion Relations}
By defining
\begin{equation}
S_m(x)=\sum^{m}_{j=1}{c_{j}^{m}x^{m-j+1}} \label{eqn2200}
\end{equation}
we get by using the equation (\ref{eqn2070})
\begin{equation}
c_j^m=c_{j-1}^{m-1}(m-j+1)-c_{j}^{m-1} \label{eqn2210}
\end{equation}
The values below work as boundaries
\begin{equation}
c_m^m=-1 \label{eqn2230}
\end{equation}
\begin{equation}
c_1^m=(-1)^m \label{eqn2240}
\end{equation}
\begin{equation}
c_j^m=0, j>m, j<1 \label{eqn2250}
\end{equation}
Even this recursion relation is unsatisfactory. We will obtain a pair of good recursion relations by differentiating the (GF) equation (\ref{eqn2100}) with $\partial_x$ and $\partial_t$ separately.
\begin{equation}
-x{e^t}\sum^{\infty}_{m=0}{\frac{S_{m}(x)e^{-x}t^{m}}{m!}}=\sum^{\infty}_{m=0}{\frac{S_{m+1}(x)e^{-x}t^{m}}{m!}} \label{eqn2252}
\end{equation}
\begin{equation}
-{e^t}\sum^{\infty}_{m=0}{\frac{S_{m}(x)e^{-x}t^{m}}{m!}}=\sum^{\infty}_{m=0}{\frac{(S_{m}^{'}(x)-S_{m}(x))e^{-x}t^{m}}{m!}} \label{eqn2254}
\end{equation}
Comparison of the powers of $t$ brings out
\begin{equation}
\sum^{j}_{n=0}{\frac{S_{j-n}(x)}{(j-n)!n!}}=\frac{S_{j}(x)-S_{j}^{'}(x)}{j!} \ \ \ \ \ \ (RR2) \label{eqn2260}
\end{equation}
and
\begin{equation}
-x\sum^{j}_{n=0}{\frac{S_{j-n}(x)}{(j-n)!n!}}=\frac{S_{j+1}(x)}{j!} \ \ \ \ \ \ (RR3) \label{eqn2270}
\end{equation}
The latter recursion is useful for generation of the polynomials.
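Written out, (RR3) reads $S_{j+1}(x)=-x\sum_{n=0}^{j}\binom{j}{n}S_{j-n}(x)$, which generates the triangle from $S_0=1$ alone. A sketch on coefficient lists (helper names are ours):

```python
import math

def scale(c, a):
    return [a * v for v in c]

def add(a, b):
    n = max(len(a), len(b))
    a = a + [0.0] * (n - len(a))
    b = b + [0.0] * (n - len(b))
    return [u + v for u, v in zip(a, b)]

def times_x(c):
    return [0.0] + c              # shift all powers of x up by one

S = [[1.0]]                       # S_0 = 1
for j in range(7):
    acc = [0.0]
    for n in range(j + 1):
        acc = add(acc, scale(S[j - n], math.comb(j, n)))
    S.append(times_x(scale(acc, -1.0)))
# Triangle rows: S_3 = -x^3 + 3x^2 - x, S_5 = -x^5 + 10x^4 - 25x^3 + 15x^2 - x
```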
\subsection{Explicit Form of the Polynomial}
By looking at the recursion relations obtained, we recognize that the Stirling number of the second kind $\hat{S}_n^{[m]}$ is reminiscent of our polynomial coefficients. It has a closed-form expression as follows (\cite{Gradshteyn2007}, \cite{Abramowitz1970})
\begin{equation}
\hat{S}_n^{[m]}=\frac{1}{m!}\sum^{m}_{i=0}{(-1)^{m-i}(^m_i)i^n} \ \ \ \ \ \label{eqn2275}
\end{equation}
and a recursion formula
\begin{equation}
\hat{S}_{n+1}^{[m]}=m\hat{S}_n^{[m]}+\hat{S}_n^{[m-1]} \label{eqn2277}
\end{equation}
It follows that we are able to express our polynomial in terms of it
\begin{equation}
S_m(x)=\sum^{m-1}_{j=0}{(-x)^{m-j}\hat{S}^{[m-j]}_m} \ \ \ \ \ \label{eqn2279}
\end{equation}
Thus we get an explicit form
\begin{equation}
S_m(x)=\sum^{m-1}_{j=0}{(-x)^{m-j}\frac{1}{(m-j)!}\sum^{m-j}_{i=0}{(-1)^{m-j-i}(^{m-j}_i)i^m}} \ \ \ \ \ \label{eqn2300}
\end{equation}
shortening to the final expression for the polynomial
\begin{equation}
S_m(x)=\sum^{m-1}_{j=0}{x^{m-j}\sum^{m-j}_{i=0}{\frac{(-1)^{i}i^m}{i!(m-j-i)!}}}, m=1,2,3.. \ \ \ \ \label{eqn2310}
\end{equation}
\begin{equation}
S_0(x)=1
\end{equation}
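The closed form can be evaluated directly and compared against the triangle rows (a sketch; exact rational arithmetic is used because the inner sums reduce to integers):

```python
from fractions import Fraction
from math import factorial

def S_explicit(m):
    """Coefficient list for S_m(x) from the closed form; entry i is the
    coefficient of x^i. The inner sums are exact integers."""
    if m == 0:
        return [1]
    coeffs = [0] * (m + 1)
    for j in range(m):
        p = m - j                 # power of x
        s = sum(Fraction((-1) ** i * i ** m, factorial(i) * factorial(p - i))
                for i in range(p + 1))
        coeffs[p] = int(s)
    return coeffs
# Compare with the hand-computed triangle rows, e.g. S_3 and S_6.
```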
\section{Operator Expressions}
\subsection{The Differential Operator}
Collecting our results thus far and using equation (\ref{eqn1110}), we obtain (recalling that $z=z(k)$)
\begin{equation}
f(x,\alpha)=\sum^{\infty}_{k=1}{e^{zx\partial_x}e^{-x}} \label{eqn2710}
\end{equation}
\begin{equation}
=\sum^{\infty}_{k=1}{e^{\alpha{ln(k)}x\partial_x}e^{-x}} \label{eqn2720}
\end{equation}
\begin{equation}
=\sum^{\infty}_{k=1}{\frac{1}{k^{-\alpha{x}\partial_x}}e^{-x}} \label{eqn2730}
\end{equation}
\begin{equation}
=\zeta(-\alpha{x}\partial_x)e^{-x} \label{eqn2740}
\end{equation}
$\zeta(x)$ is formally the Riemann zeta function, now becoming a differential operator. Since the expansion can be made inside the $k$-summation (\ref{eqn1020}), we finally have everything collected into our transformed series
\begin{equation}
f(x,\alpha)=\sum^{\infty}_{k=1}{e^{-xk^{\alpha}}}=\sum^{\infty}_{k=1}{k^{\alpha{x}\partial_x}e^{-x}}=e^{-x}\sum^{\infty}_{k=1}{\sum^{\infty}_{m=0}{\frac{(\alpha{ln(k)})^m}{m!}S_{m}(x)}} \label{eqn2760}
\end{equation}
Thus we have managed to move the $x$-dependence into the polynomial, and the $\alpha$-dependence is isolated from it. On the other hand, we can apply the (GF) to the inner sum to write down
\begin{equation}
e^{-x}\sum^{\infty}_{k=1}{\sum^{\infty}_{m=0}{\frac{(\alpha{ln(k)})^m}{m!}S_{m}(x)}}=e^{-x}\sum^{\infty}_{k=1}{e^{-xe^{\alpha{ln(k)}}+x}}
\end{equation}
This becomes immediately an identity, as expected. This once more shows that the generating function is the essential core of our generalized exponential series.
\subsection{Eigenvalues and Eigenfunctions of the Differential Operator}
The eigenfunctions and eigenvalues of the operator in (\ref{eqn2740}) are determined as follows.
\begin{equation}
\zeta(-\alpha{x}\partial_x)\phi{(x)}=\sum^{\infty}_{k=1}{\phi{(xe^{\alpha{ln(k)}})}}=\sum^{\infty}_{k=1}{\phi{(xk^{\alpha}}}) \label{eqn2800}
\end{equation}
As a trial function we take $\frac{1}{x^{\beta}}$, with $\beta>0$ a constant and $Re{(\alpha{\beta})}>1$
\begin{equation}
\zeta(-\alpha{x}\partial_x)\frac{1}{x^{\beta}}=\frac{1}{x^{\beta}}\sum^{\infty}_{k=1}{k^{-\alpha{\beta}}}=\frac{\zeta(\alpha{\beta})}{x^{\beta}} \label{eqn2820}
\end{equation}
This is convergent if $Re{(\alpha{\beta})}>1$ and we have the essential result of a continuum of eigenvalues and eigenfunctions for the $\zeta()$ operator. The eigenvalues are thus $\zeta(\alpha{\beta})$ and the eigenfunctions are $\frac{1}{x^{\beta}}$.
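The eigenvalue relation can be checked numerically, comparing identically truncated sums on both sides (a sketch; the parameter values and truncation are arbitrary choices satisfying $Re(\alpha\beta)>1$):

```python
x, alpha, beta = 1.3, 2.0, 1.5    # Re(alpha * beta) = 3 > 1
N = 5000                          # common truncation for both sides

# zeta(-alpha x d/dx) applied to x^{-beta}, term by term
lhs = sum((x * k**alpha) ** (-beta) for k in range(1, N + 1))
zeta_ab = sum(k ** (-alpha * beta) for k in range(1, N + 1))
rhs = zeta_ab * x ** (-beta)      # eigenvalue times the eigenfunction
```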
If $\phi(x)$ can be expanded as a kind of Laurent series with negative powers only, we have, with $\phi_{j}$ the constant coefficients of the expansion
\begin{equation}
\phi(x)=\sum^{N}_{j=1}{\frac{\phi_{j}}{x^j}} \label{eqn2830}
\end{equation}
and therefore, by (\ref{eqn2820})
\begin{equation}
\zeta(-\alpha{x}\partial_x)\phi(x)=\sum^{N}_{j=1}{\frac{\phi_{j}\zeta(\alpha{j})}{x^j}} \label{eqn2840}
\end{equation}
\section{Asymptotic Behavior}
\subsection{Large Argument Estimate}
We can apply some of the results obtained to estimate the asymptotic behavior of the exponential series, equation (\ref{eqn20}), when the argument $x$ approaches large values. We can look at the polynomial triangle, equation (\ref{eqn2010}), along diagonals slanted both to the left and to the right, Figure \ref{fig:diagonals}. We mark the right-slanted diagonals as $L_n$ and the left-slanted diagonals as $K_n$. The first two can be identified as closed-form expressions but the diagonals following them will have very complicated expressions. Equation (\ref{eqn2310}) can be used directly as well if more terms are required. With $m$ referring to the power of the polynomial, the terms in the first right-slanted diagonal are
\begin{equation}
L_1(m)= (-1)^{m}x^m, \ \ \ \ \ m=0,1,2,3... \label{eqn4000}
\end{equation}
and in the second one
\begin{equation}
L_2(m)= (-1)^{m+1}x^{m-1}m(m-1)\frac{1}{2}, \ \ \ \ \ m=2,3... \label{eqn4010}
\end{equation}
The higher coefficients are likely too complicated to be evaluated just by looking at the triangle. The general coefficients can be obtained from equation (\ref{eqn2310}) with $j=n-1$
\begin{equation}
L_n(m)=\sum^{m-n+1}_{i=0}{\frac{(-1)^{i}i^{m}x^{m-n+1}}{i!(m-n+1-i)!}}
\end{equation}
\begin{figure}[ht]
\centering
\includegraphics[width=1.00\textwidth]{diagonals.jpg}
\caption{Definitions of the diagonals, left and right slanted, for the polynomial triangle}
\label{fig:diagonals}
\end{figure}
The largest terms are those with the highest powers, so we pick up the $L_1$ and $L_2$ terms of $S_m(x)$. The exponential series then becomes
\begin{equation}
f(x,\alpha)=\sum^{\infty}_{k=1}{e^{-xk^{\alpha}}}=\sum^{\infty}_{k=1}{\sum^{\infty}_{m=0}{\frac{(\alpha{ln(k)})^m}{m!}S_{m}(x)e^{-x}}} \label{eqn4100}
\end{equation}
\begin{equation}
\approx{e^{-x}\sum^{\infty}_{k=1}{[\sum^{\infty}_{m=0}{\frac{(\alpha{ln(k)})^{m}(-x)^m}{m!}}-\frac{1}{x}\sum^{\infty}_{m=2}{\frac{(-\alpha{{x}ln(k)})^{m}m(m-1)}{2(m!)}}]}} \label{eqn4110}
\end{equation}
Instantly we recognize the familiar Riemann zeta function and get
\begin{equation}
f(x,\alpha)\approx{e^{-x}\zeta(x\alpha)-\frac{\alpha^{2}{x}e^{-x}}{2}\zeta^{''}{(x\alpha)}}, x >> 0 \label{eqn4120}
\end{equation}
The range of validity can be estimated below with a graph. The traditional estimate has been the first term only
\begin{equation}
f(x,\alpha){\approx}e^{-{x}} \label{eqn4200}
\end{equation}
We compare this to the actual function and to the estimate (\ref{eqn4120}) in the following Figures \ref{fig:asymptotics} and \ref{fig:asymptoticslog}.
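The comparison can also be made numerically (a sketch; note that carrying out the $m$-sum over the $L_2$ terms in equation (\ref{eqn4110}) brings a factor $\alpha^{2}$ in front of $\zeta''$, which is included here, and that $\zeta''(s)=\sum_k (\ln k)^2 k^{-s}$):

```python
import math

def f(x, alpha, terms=400):
    return sum(math.exp(-x * k**alpha) for k in range(1, terms + 1))

def zeta(s, terms=400):
    return sum(k ** (-s) for k in range(1, terms + 1))

def zeta2(s, terms=400):
    """zeta''(s) = sum (ln k)^2 k^{-s}."""
    return sum(math.log(k) ** 2 * k ** (-s) for k in range(1, terms + 1))

x, alpha = 5.0, 1.6
true_val = f(x, alpha)
one_term = math.exp(-x)
two_term = math.exp(-x) * (zeta(x * alpha)
                           - 0.5 * alpha**2 * x * zeta2(x * alpha))
```

At $x=5$, $\alpha=1.6$ the single-term estimate is already closer to the true value than the two-term one, consistent with the conclusion drawn from the graphs.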
\begin{figure}[ht]
\centering
\includegraphics[width=1.0\textwidth]{asymptotics.jpg}
\caption{Asymptotic behavior of the exp(-x) estimate and our approximation of two terms, with $\alpha$=1.6, for large argument values, linear scale}
\label{fig:asymptotics}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=1.0\textwidth]{asymptoticslog.jpg}
\caption{Asymptotic behavior of the exp(-x) estimate and our approximation of two terms, with $\alpha$=1.6, for large argument values, logarithmic vertical scale}
\label{fig:asymptoticslog}
\end{figure}
The graphs show that for large values $x > 1$ the simple exponential estimate still gives better accuracy than our asymptotic model with the two biggest terms. We would need more terms for the approximation to possibly beat the simple exponential function. However, that is not justified since the new terms would increase complexity too much.
\newpage
\subsection{Asymptotic Behavior Near Zero}
To select the lowest powers in $x$ we pick up the left-slanted diagonals $K_n(m)$ (see Figure \ref{fig:diagonals})
\begin{equation}
K_0(m)= -x, \ \ \ \ \ m=1,2... \label{eqn4710}
\end{equation}
\begin{equation}
K_1(m)= (2^{m-1}-1)x^2, \ \ \ \ \ m=2,3... \label{eqn4720}
\end{equation}
Again, higher terms become awkward to evaluate by inspecting the triangle, and the general expression is
\begin{equation}
K_n(m)=\sum^{n+1}_{i=0}{\frac{(-1)^{i}i^{m}x^{n+1}}{i!(n+1-i)!}}
\end{equation}
Thus
\begin{equation}
K_2(m)= (2^{m}-1-3^{m-1})\frac{x^3}{2!}, \ \ \ \ \ m=3,4... \label{eqn4722}
\end{equation}
\begin{equation}
K_3(m)= (3\cdot{2^{m-1}}-1-3^m+4^{m-1})\frac{x^4}{3!}, \ \ \ \ \ m=4,5... \label{eqn4724}
\end{equation}
$K_n(m)$ always carries the same power of $x$, namely $x^{n+1}$, for every $m$.
In order to estimate the behavior when $x$ approaches zero, we may take the terms $K_0$ and $K_1$ into our expressions, since they carry the lowest powers. This case is much more complicated than the preceding one, as the series will always diverge while approaching the limit. The question is only, how does it do so?
The exponential series is
\begin{equation}
f(x,\alpha)=\sum^{\infty}_{k=1}{e^{-xk^{\alpha}}}=e^{-x}\sum^{\infty}_{k=1}{\sum^{\infty}_{m=0}{\frac{(\alpha{ln(k)})^m}{m!}S_{m}(x)}} \label{eqn4770}
\end{equation}
\begin{equation}
\approx{e^{-x}(\sum^{\infty}_{k=1}{1+\frac{x^2}{2}+x-(x^2+x)k^{\alpha}+\frac{x^2k^{2\alpha}}{2})}} \label{eqn4780}
\end{equation}
The sum diverges rapidly as $x$ approaches zero. The main term is unity, and the other terms are large as well but of opposite signs. At this end of the range we have much more trouble obtaining a simple approximating function for the generalized exponential function. Since this approach fails, the analysis must be carried out in another way to obtain an asymptotic function near zero.
We are able to investigate the behavior at small $x$ in the case of $\alpha=1$ since we can solve the simple series as follows
\begin{equation}
\sum^{\infty}_{k=1}{e^{-kx}}=\frac{e^{-x}}{1-e^{-x}}
\end{equation}
Taking the limit of $x$ approaching zero shows that the result is a simple pole. We are not at liberty to claim that this would be valid for other values of $\alpha$.
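For $\alpha=1$ the closed form and the pole behavior can be checked directly (a sketch; truncation and tolerances are arbitrary):

```python
import math

def f1(x, terms=20000):
    """f(x,1) truncated; for alpha = 1 the series is geometric."""
    return sum(math.exp(-k * x) for k in range(1, terms + 1))

def closed(x):
    return math.exp(-x) / (1.0 - math.exp(-x))

x = 0.01
series = f1(x)
# x * f(x,1) -> 1 as x -> 0+, the signature of a simple pole at the origin.
pole_check = x * closed(x)
```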
\section{Conclusions}
The exponential series, and its generalized version having a power $\alpha$ of the index, appeared to be a hard nut to crack. Our aim was to transform the generalized exponential series to a form with a simpler functional dependence on the parameter $x$.
We first decomposed the series by making it a series of nested exponents and restructured it into nested series instead. We recognized the new form by applying an exponential Cauchy-Euler differential operator. This allowed further transformation of the innermost power series to become a polynomial. This could, on the other hand, be understood as a series of powers of a differential operator acting on an exponential function. This series was formally identified as the Riemann zeta function with an argument $-\alpha{x}\partial{x}$, simplifying presentation. The original series was converted into a nested series of polynomials in $x$, allowing a simplification for further analysis.
While developing the transformation, we found a new polynomial with interesting properties. These are presented in the Appendices, and some results appear in the main text as well. The recursion relations for the polynomial were derived, equations (\ref{eqn2060}), (\ref{eqn2260}) and (\ref{eqn2270}). With their aid, we presented a self-contained expression for the polynomial, equation (\ref{eqn2310}), which can be used for further analysis of the generalized exponential series. The polynomial can be simply solved by hand with the aid of a triangle as in equations (\ref{eqn2000}), (\ref{eqn2010}) and (\ref{eqn2050}).
The generating function for the polynomial is a central tool for unwrapping the properties of the polynomial, equation (\ref{eqn2100}). It is based on the first recursion relation (\ref{eqn2060}). Some new series expressions for the common exponential function and for the two basic trigonometric functions are in Appendix \ref{apx:appd}. They were derived with the aid of the generating function. We made a change of variable to the generating function forcing it to become symmetric with respect to the two parameters, equation (\ref{eqn2102}). This is a series of two multiplying functions where the two parameters are separated and can be swapped on both sides. This led to some new interesting series expressions shown in Appendix \ref{apx:appd}.
One of our motives was to better understand the asymptotic behavior at large variable values and also near zero. We tested whether the transformed series can offer a satisfactory approximation. It appeared that for large $x$, using a few dominant terms of the new series fails to give a better asymptotic estimate than the traditional single-term approximation ($e^{-x}$). While approaching zero from the positive side, our analysis does not seem to yield any usable approximation. More work is required in this field.
\section{Introduction}
A proper and precise understanding of the processes induced by
neutrino interactions is required in the analysis of neutrino
oscillation experiments. For instance, at intermediate energies, above
$0.5$ GeV, one pion production becomes relevant. Most of the
theoretical models for this reaction assume the dominance of
$\Delta(1232)$ resonance
mechanism\cite{AlvarezRuso:1997jr,AlvarezRuso:1998hi,Paschos:2003qr,Lalakulich:2005cs},
but others also include background
terms\cite{Sato:2003rq,Hernandez:2007qq,Hernandez:2007eb,Hernandez:2007kb}.
Above these energies new baryonic resonances can be excited, the first of them being the Roper $N^*(1440)$, which has a sizable decay into a scalar pion pair and is very wide. However, the $\Delta$ does not couple to two pions in $s$-wave and thus is not relevant at energies where only slow pions are produced.
There exist very few attempts to measure the two pion production
induced by neutrinos and antineutrinos. Experiments done at
ANL\cite{Barish:1978pj,Day:1984nf} and BNL\cite{Kitagaki:1986ct}
investigated the two pion production processes, in order to test the
predictions of chiral symmetry. Biswas {\it et
al.}\cite{Biswas:1978ey} used PCAC and current algebra methods to
calculate the threshold production of two pions. Adjei {\it et
al.}\cite{Adjei:1980nj} made specific predictions using an effective
Lagrangian incorporating chiral symmetry. However, unlike ours, these models did not include any resonance production. Furthermore we use an
expansion of the chiral Lagrangian that includes terms up to ${\cal
O}(1/f_\pi^3)$, while Adjei {\it et al.} kept only terms up to ${\cal
O}(1/f_\pi^2)$. More detailed discussions can be found in
Ref.~\cite{Hernandez:2007ej}.
\section{Pion Production Model}
We will focus on the neutrino-induced two-pion production reaction off the nucleon driven by charged currents,
\begin{equation}
\nu_l(k) + N(p) \to l^-(k^\prime) + N(p^\prime) + \pi(k_{\pi_1})+\pi(k_{\pi_2})
\label{eq:reac} \, .
\end{equation}
For the derivation of the hadronic current we use the effective
Lagrangian given by the SU(2) non-linear $\sigma$ model. This
model\cite{Hernandez:2007qq} provides us with expressions for the
non-resonant hadronic currents that couple with the lepton current, in
terms of the first sixteen Feynman diagrams depicted in Fig.~\ref{fig:fig1}.
\begin{figure}[h]
\centering\includegraphics[width=\textwidth]{2piones.2.eps}
\centering\includegraphics[width=\textwidth]{Fig2.eps}
\vspace*{8pt}
\caption{Top: Nucleon pole, pion pole and contact terms contributing to
$2\pi$ production. Bottom: Direct and crossed Roper excitation
contributions to $2\pi$ production. \protect\label{fig:fig1}}
\end{figure}
We also include the two mechanisms depicted in the bottom of Fig.~\ref{fig:fig1}, which account for Roper production and its decay into two pions in an $s$-wave isoscalar state. The coupling of
the Roper to the charged weak current is written in terms of the
current
\begin{equation}
J^\alpha_{cc*} =
\frac{F_1^{V*}(q^2)}{\mu^2}(q^\alpha\slashchar{q}-q^2\gamma^\alpha)
+ i\frac{F_2^{V*}(q^2)}{\mu}\sigma^{\alpha\nu}q_\nu
- G_A\gamma^\alpha\gamma_5 - \frac{G_P}{\mu}q^\alpha\slashchar q\gamma_5
- \frac{G_T}{\mu} \sigma^{\alpha\nu}q_\nu\gamma_5 \,,
\end{equation}
which is the most general form compatible with conservation of the
vector current. The $G_T$ term does not need to
vanish; however, most analyses neglect its contribution and we shall
do so here. The form factors $G_A$ and $G_P$ are constrained by PCAC
and the pion pole dominance assumption. The vector form factors
$F_1^{V*}$ and $F_2^{V*}$ can be related to the isovector
part of the electromagnetic (EM) form factors.
We have fitted the proton-Roper EM transition form factors\cite{AlvarezRuso:2003gj} to the
experimental results for helicity amplitudes
\cite{Aznauryan:2004jd,Tiator:2003uu},
using a modified dipole parametrization (labeled FF1).
The Roper EM data have large error bars and it is possible to
accommodate quite different functional forms and values for these FF.
Thus, we shall consider other different models for the vector form
factors: the constituent quark model of Meyer {\it et al.}\cite{Meyer:2001js}
(FF2), the parametrizations of Lalakulich
{\it et al.}\cite{Lalakulich:2006sw} (FF3) and finally the predictions of the
recent MAID\cite{Drechsel:2007if} analysis (FF4).
\section{Results}
In Fig.~\ref{fig:3}, we present results for the cross section for
the process $\nu n\to \mu^- p \pi^+\pi^-$. We show separately the
contribution of the background terms as well as the contribution of
the Roper resonance as calculated by using the various form factors
described above. The interference between background and the Roper
contribution is not shown. We see that the background terms dominate
the cross section for neutrino energies $E_\nu>0.7$ GeV. At lower
energies the contribution from the Roper could be larger or smaller
than the background depending upon the vector form factors used for
the $W^+NN^*$ transition. The differences in the predictions for the
cross sections using the various parametrizations could reach a factor
of two. The Roper contribution is especially sensitive to $F_2^{V*}(q^2)$,
which is negative, in contrast to the positive value one gets in
the case of the nucleon.
\begin{figure}[h]
\centering\includegraphics[width=0.5\textwidth]{Fig3.eps}
\vspace*{8pt}
\caption{Cross section for the $\nu n\to \mu^- p \pi^+\pi^-$
reaction as a function of the neutrino energy. \protect\label{fig:3}}
\end{figure}
\begin{figure}[h]
\centering{\mbox{\includegraphics[width=0.49\textwidth]{Fig5.FF1.eps}
\hspace{0.01\textwidth}
\includegraphics[width=0.49\textwidth]{Fig6.eps}}}
\vspace*{8pt}
\caption{Cross section for the $\nu n\to \mu^- p \pi^+\pi^-$ (left)
and $\nu p\to \mu^- n \pi^+\pi^+$ (right) with cuts as explained in
the text. Dashed line: Background terms. Solid line: Full model with
set FF1 of nucleon-Roper transition form factors. Data from
Ref.~\protect\cite{Kitagaki:1986ct} (solid circles) and
Ref.~\protect\cite{Day:1984nf} (open squares).
\label{fig:5}}
\end{figure}
We present the results for the cross section for the $\nu n\to \mu^- p
\pi^+\pi^-$ channel in the left panel of Fig.~\ref{fig:5} and for the
channel $\nu p\to \mu^- n \pi^+\pi^+$ in the right-hand panel. The
phase space for these results was restricted following a suggestion by
Adjei {\it et al.}\cite{Adjei:1980nj}. We show our results for the
first channel with only background terms and with the full model
evaluated using the set FF1 of nucleon-Roper transition form
factors. Other sets give a similar result in this case. Even in this
kinematic region, the theoretical results including the resonance
contribution lie below the experimental data. For the second channel
there are no contributions from the $N^*(1440)$ resonance.
\section{Introduction}
\label{sec:intro}
\begin{figure*}[th!]
\centering
\includegraphics[width=0.9\linewidth]{Method_outline_wider.jpg}
\caption{Pictorial representation of our method. (a) We train an auto-encoder to reconstruct point clouds of car shapes, (b) we then train another network to predict the drag coefficient from the latent space, $Z$. (c) Finally, we collect examples from the dataset to define human interpretable concepts and then represent them in the latent space of the auto-encoder.}
\label{fig:method}
\end{figure*}
Over the past half-decade, there has been spectacular success in using deep learning to synthesize 3D shapes~\cite{Yin2018P2PNET,Park2019DeepSDFLC,Achlioptas2017RepresentationLA,Chen2019LearningIF,Mildenhall2020NeRFRS,Mller2022InstantNG,chen2020bsp,deng2020cvxnet,paschalidou2021neural}. A common approach has been to use a latent space representation that can encode the 3D geometry using some non-linear transformations. This latent space can either be learned directly from the data through an auto-encoder architecture \cite{Yin2018P2PNET,Guan2020GeneralizedAF} or implicitly by sampling from a known distribution (e.g., Gaussian)~\cite{Donahue2017AdversarialFL,Wu2016LearningAP,Chen2019LearningIF,Achlioptas2017RepresentationLA,Nash2017TheSV} and training a decoder network to replicate the samples from the dataset. Other latent representation approaches also include multi-view reconstruction~\cite{Mildenhall2020NeRFRS,Mller2022InstantNG,wu20153d,su15mvcnn,song2016deep,li2018sonet,kanezaki2016rotationnet,SFIKAS2018208,qi2016volumetric}, implicit neural representations~\cite{Sitzmann2020ImplicitNR,atzmon2020sal,atzmon2020sald,gropp2020implicit}, auto-decoder models~\cite{Park2019DeepSDFLC}, etc. The general aim in all these approaches is to obtain a latent space whose dimension is much smaller than the input dimension, thus having a strong relation to the field of signal compression. The latent space representation has been interpreted as a deep learning analog of dimensionality reduction algorithms such as SVD and PCA; indeed, a network with a single hidden layer can be made equivalent to the $\mathbf{U\Sigma V}$ formulation of SVD~\cite{lecun2015deep}. Thus, it is believed that the learned latent space should encode the most salient modes of the data and form a low-dimensional manifold. 
Similar to their classical analogs, deep learning models discover the latent manifold in a self-supervised manner, where the objective function is determined by recreating the input distribution. Furthermore, either by construction or assumption, it is possible to linearly interpolate between points on this manifold and obtain synthesized data points. To this end, some works impose regularising conditions on the latent space to encourage the network to discover a well-behaved manifold, such as sparsity constraints \cite{Lin20163DKD} or reparametrization to multi-variate Gaussian distributions \cite{Brock2016GenerativeAD}.
One fundamental drawback of the latent space representation is its lack of interpretability. In the standard approach, there is no control over what each coordinate in the latent vector represents, and there is no canonical way in which the network can decompose the original signal. It is easy enough to encode two dataset samples and linearly interpolate between them; however, it is much harder to determine the changes required a priori. There have been attempts to condition the embedding space on a particular semantic feature in the input \cite{Kudo2019VirtualTS}, thus allowing more principled exploration. However, these conditions need to be defined before training, thus restricting the range of possible semantic combinations that can be investigated. In the engineering context, it would be highly desirable to link high-level design concepts with physical performance. Usually, this is done through simulations for an individual design, but quantifying the impact of a design choice across an entire database is a challenging task. We show that using the representational power of deep neural networks, we can analyse the sensitivity of a physical quantity of interest with respect to user-defined design concepts and across an entire dataset.
The idea of explainable AI (XAI) has also emerged over the past half-decade. Very early approaches for XAI were gradient-based saliency maps~\cite{simonyan2013deep} and class-activation maps~\cite{zhou2016learning,selvaraju2017grad}, while more recent approaches include layer-wise relevance propagation~\cite{binder2016layer}. In the field of geometric shape understanding, several researchers have attempted to extend these interpretability algorithms to 3D geometric shapes represented by voxels~\cite{ghadai2018learning,yoo2021explainable} or, more recently, point clouds~\cite{tan2022surrogate,su2021learning,zhang2019explaining,huang2019claim}. However, these methods were mostly characterized by visual appearance and, in many situations, did not satisfy some of the sanity checks outlined in~\cite{adebayo2018sanity,tomsett2020sanity,kim2018interpretability,adebayo2020debugging}. Further, to the best of the authors' knowledge, there are no XAI-based approaches for understanding the latent shape representation of the geometries.
An XAI method that is robust to these sanity checks and shows much promise is Concept Activation Vectors (CAVs)~\cite{kim2018interpretability,goyal2019explaining}. CAVs allow the definition of semantic representations on the embedding space in a post hoc manner. They offer the freedom to define and explore the latent manifold in a human interpretable way, with concepts and inputs that were not originally part of the training dataset. In this work, we focus on whether we can use CAVs for 3D datasets. To this end, we eschew some of the more involved representation learning techniques and propose to learn a simple MLP auto-encoder on a point cloud dataset. We chose the dataset to contain shapes that are low resolution enough to be efficiently learned by an MLP and varied enough to allow interesting interpolations. Once the learning has converged for the reconstruction loss, we explore the latent space in terms of several hand-defined concepts. To create the CAVs, we gather examples within the dataset and from a synthetic distribution that can be interpreted as abstract examples of a style. We then show that it is possible to take a reference design and add or subtract varying amounts of each concept, allowing a potential designer to modify an existing design in a high-level, human interpretable way. To further take advantage of the XAI properties of TCAVs and test the statistical significance of the learned concepts, we also train a model to predict a physical quantity from the latent space.
\newpage
The main contributions of our work are:
\begin{itemize}[nosep,topsep=3pt]
\item We introduce concept activation vectors (CAV) for \\interpretable 3D shape interpolation.
\item We demonstrate CAVs for a dataset of 3D cars similar to ShapeNet.
\item We introduce the notion of parametric concepts to parametrize designs using human-understandable concepts and regress through them.
\end{itemize}
While these contributions apply to general 3D shape understanding, we are mainly interested in an engineering design perspective. Hence, we chose a cars dataset and its corresponding drag coefficients in our experiments.
\section{Concept Exploration for 3D shapes}
\label{sec:3d_concept}
In this section, we outline the various steps required to traverse a latent space in a human interpretable manner.
\subsection{Training an auto-encoder}
We start with a dataset of point clouds $\mathcal{D}_x = \{x_i\}_{i=1}^N$ where each point cloud $x_i \in \mathbb{R}^{P\times d}$ is a rank-two tensor signifying a collection of points in 3D space with $d$ features ($d=3$). In addition, we can consider a set of vector quantities $\mathcal{D}_y = \{y_i\}_{i=1}^N$ related to the point clouds, where $y_i \in \mathbb{R}^F$. We signify the combination of the two datasets as $\mathcal{D}_{x,y} = \{(x_i, y_i)\}_{i=1}^N$. An auto-encoder can be represented as a pair of maps $AE := (e,d)$ such that:
\begin{equation}
e : x \in \mathbb{R}^{P\times d} \to z \in Z
\label{eq:encoder}
\end{equation}
and
\begin{equation}
d : z \in Z \to x \in \mathbb{R}^{P\times d}
\label{eq:decoder}
\end{equation}
The maps $e$ and $d$ are known as the encoder and decoder, respectively. $Z \subset \mathbb{R}^h$ is referred to as the $h$-dimensional latent space, and it represents the salient manifold of the data. In practice we construct the auto-encoder as a pair of deep neural networks $AE_{\Theta}:= (e_{\theta_e},d_{\theta_d})$ where $\Theta = \{\theta_e,\theta_d\}$ is the set of parameters from all networks. We train in an end-to-end fashion with the reconstruction loss:
\begin{equation}
\mathcal{L}(AE_{\Theta},x_i) := \left \|x_i - d_{\theta_d}(e_{\theta_e}(x_i)) \right \|^2_F;
\label{eq:loss}
\end{equation}
where $\left \|\cdot \right \|_F$ denotes the Frobenius norm. We want to minimize the expectation of the loss across the dataset, for which we use stochastic gradient descent based optimization.
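As a concrete toy illustration of Eqs.~(\ref{eq:encoder})--(\ref{eq:loss}), the sketch below uses small linear maps as stand-ins for the MLP encoder $e_{\theta_e}$ and decoder $d_{\theta_d}$; all sizes, weights, and names are illustrative assumptions, not the architecture used here:

```python
import numpy as np

rng = np.random.default_rng(0)
P, d, h = 16, 3, 4            # points per cloud, features per point, latent dim (toy)

# Linear stand-ins for the encoder/decoder networks (hypothetical weights).
W_enc = 0.1 * rng.normal(size=(P * d, h))
W_dec = 0.1 * rng.normal(size=(h, P * d))

def encode(x):
    """e: R^{P x d} -> Z subset R^h (flatten the cloud, project to latent)."""
    return x.reshape(-1) @ W_enc

def decode(z):
    """d: Z -> R^{P x d} (lift the latent vector back to a point cloud)."""
    return (z @ W_dec).reshape(P, d)

def recon_loss(x):
    """Squared Frobenius norm ||x - d(e(x))||_F^2 of Eq. (loss)."""
    residual = x - decode(encode(x))
    return float(np.sum(residual ** 2))

x = rng.normal(size=(P, d))   # one toy "point cloud"
loss = recon_loss(x)
```

In practice each map is a deep MLP and the expectation of this loss over $\mathcal{D}_x$ is minimized by stochastic gradient descent.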
\subsection{TCAV framework}
Once the $AE$ is trained, we can use the encoder to obtain latent vectors for any input that matches the dimensions of our original dataset. These new inputs are usually examples from a holdout dataset that follows the same distribution as $\mathcal{D}_x$. However, as was shown in \cite{kim2018interpretability}, we are not entirely limited by the input distribution, and we can obtain useful embeddings for shapes that are similar to the original input as well. We define a concept as a human interpretable collection of shapes that have some quality in common (e.g., a set of sporty cars).
\begin{equation}
C := \{z^{(c)}: z^{(c)} = e(x^{(c)}), x^{(c)} \in \mathbb{R}^{P\times d}\}
\label{eq:concept}
\end{equation}
Conversely, we define a non-concept $\overline{C}$ as a random collection of inputs that have no particular common characteristic. We can use concepts and non-concepts to define multi-way linear classification problems. Since linear classification learns hyperplanes that maximally separate the classes, we can take the normal to the hyperplanes to represent the direction of the concept.
\begin{equation}
w_C = \max_{w \in \mathbb{R}^h} \Big [ \underset{z \sim C}{\mathbb{E}}\big(\mathbb{I}_{w \cdot z +b \geq 1}\big) + \underset{z \nsim C}{\mathbb{E}}\big(\mathbb{I}_{w \cdot z +b \leq -1}\big)\Big ]
\label{eq:CAV}
\end{equation}
Here $\mathbb{I}_A$ is the indicator function, and $w_C$ are known as Concept Activation Vectors (CAVs). We would like to note that since CAVs are obtained as a result of a classification problem, they are sensitive to the other classes. For example, obtaining the CAV for concept $C_1$ against a non-concept $\overline{C}$ is not guaranteed to be the same as the CAV obtained by classifying against another concept $C_2$. In fact, this is a useful property that allows us to obtain relative and more targeted CAVs.
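A minimal sketch of how a CAV can be obtained in practice: fit a linear (logistic-regression) hyperplane separating synthetic "concept" latents from "non-concept" latents and take its unit normal as the CAV. The data, dimensions, and hyperparameters below are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(1)
h = 8                                       # latent dimension (toy)

# Synthetic latents: concept examples shifted along a hidden direction e_0.
true_dir = np.zeros(h); true_dir[0] = 1.0
z_concept = rng.normal(size=(30, h)) + 2.0 * true_dir   # z ~ C
z_random  = rng.normal(size=(30, h))                    # z ~ non-concept

Z = np.vstack([z_concept, z_random])
y = np.concatenate([np.ones(30), -np.ones(30)])

# Gradient descent on the logistic loss log(1 + exp(-y (w.z + b))).
w, b = np.zeros(h), 0.0
for _ in range(500):
    margins = y * (Z @ w + b)
    g = -y / (1.0 + np.exp(margins))        # d(loss)/d(margin) per sample
    w -= 0.1 * (Z.T @ g) / len(y)
    b -= 0.1 * g.mean()

cav = w / np.linalg.norm(w)                 # unit normal = the CAV, w_hat_C
```

The recovered `cav` points (approximately) along the hidden direction that separates the concept examples from the random ones.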
\noindent\textbf{Testing with CAVs}. Once we have recovered the CAV for our chosen concept, it is easy to determine the sensitivity of a network output to the chosen concept by performing the directional derivative:
\begin{equation}
S^C_{ij}(x) = w_C \cdot \mathbf{\nabla}_{z} d(e(x))_{ij}
\label{eq:sensitivity}
\end{equation}
Having defined the sensitivity of a single input, we can extend it to test the sensitivity of an entire dataset or a subset of it.
\begin{equation}
TCAV^C_{ij}(\mathcal{D}_x) := \frac{\left| \{x \in \mathcal{D}_x : S^C_{ij}(x) >0 \} \right |}{\left| \mathcal{D}_x \right|}
\label{eq:TCAV}
\end{equation}
This gives us the fraction of tested inputs that are positively influenced by the concept. As mentioned in \cite{kim2018interpretability}, it is possible to consider a metric based on the magnitude of the sensitivities as well.
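The sensitivity of Eq.~(\ref{eq:sensitivity}) and the TCAV fraction of Eq.~(\ref{eq:TCAV}) can be sketched as follows, using a toy differentiable output $y(z)$ and central finite differences in place of automatic differentiation (every quantity below is an illustrative stand-in):

```python
import numpy as np

rng = np.random.default_rng(2)
h = 8

# Toy differentiable network output y(z); its gradient is a + 0.2 z.
a = rng.normal(size=h)
def y(z):
    return float(z @ a + 0.1 * z @ z)

w_cav = np.zeros(h); w_cav[0] = 1.0       # unit CAV (illustrative)

def sensitivity(z, w, eps=1e-5):
    """Directional derivative S^C(z) = w . grad_z y(z), via central differences."""
    return (y(z + eps * w) - y(z - eps * w)) / (2.0 * eps)

latents = rng.normal(size=(200, h))       # embeddings e(x) of a test set
scores = np.array([sensitivity(z, w_cav) for z in latents])
tcav_score = float((scores > 0).mean())   # fraction positively influenced
```

Replacing the finite difference by back-propagation through the trained network gives the exact directional derivative.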
\subsection{Regressing from the latent space}
Since the sensitivity of the auto-encoder output is not particularly interesting in itself, we train another neural network $r_{\theta_r}$ that regresses a quantity of interest from the latent space.
\begin{equation}
r:Z \to y
\end{equation}
Since this is a scalar physical quantity $y \in \mathbb{R}$, the sensitivities have a more intuitive interpretation than for the auto-encoder case:
\begin{equation}
S^C_{y}(x) = w_C \cdot \mathbf{\nabla}_{z} y(z).
\label{eq:sensitivity_y}
\end{equation}
That is, the sensitivity captures how much the quantity $y$ increases or decreases if we move in the direction of the concept. This value, along with the corresponding TCAV metric, can have practical engineering applications.
\subsection{Exploring the latent space with CAVs}
\label{sec:exploring}
\begin{figure}[b!]
\centering
\includegraphics[width=0.65\linewidth]{latent_traversal}
\caption{A conceptual path in latent space.}
\label{fig:latent_trans}
\end{figure}
Since each CAV describes an un-normalized direction in latent space, we can use them to translate a point in $z \in \mathbb{R}^h$ along this direction:
\begin{equation}
z' = z + \varepsilon\hat{w}_C,
\end{equation}
where $\varepsilon \in \mathbb{R}$ is a parameter controlling how far we go towards the concept if $\varepsilon > 0$ or away from the concept if $\varepsilon < 0$. Then we can use the decoder to transform the new point in latent space back into the input space $x' = d(z')$. In this way, we can both navigate the latent space and interpret the output in a naturally human-understandable form. As a straightforward extension, we can chain multiple translations in the latent space and thus blend multiple concepts. Say that we have identified a set $M = \{C_i\}_{i=1}^m$ of concepts and found their corresponding CAVs, then we can add any linear combination of these concepts to the original point.
\begin{equation}
z' = z + \sum_{i=1}^m\varepsilon_i\hat{w}_{C_i}
\end{equation}
Thus the complicated task of modifying an existing design to exhibit a combination of multiple design concepts and styles has been reduced to a simple linear algebra operation.
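The latent-space translation and blending described above amount to a few lines of linear algebra. A sketch with hypothetical unit CAVs (the directions and strengths are placeholders, and `decode` would be the trained decoder):

```python
import numpy as np

h = 8
z_ref = np.zeros(h)                       # latent code of a reference design

# Hypothetical unit CAVs for two concepts (axis directions, for illustration).
w_sport = np.eye(h)[0]
w_boxy  = np.eye(h)[1]

def blend(z, cavs, strengths):
    """z' = z + sum_i eps_i * w_hat_{C_i}: linear blending of concepts."""
    z_new = z.copy()
    for w_hat, eps in zip(cavs, strengths):
        z_new = z_new + eps * w_hat
    return z_new

# Move towards "sporty" and slightly away from "boxy"; passing z_prime
# through the decoder would yield the modified design.
z_prime = blend(z_ref, [w_sport, w_boxy], [0.5, -0.25])
```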
\noindent \textbf{Concept querying} is a natural benefit of using the CAVs to interpret the latent space. Since the classifier essentially computes a similarity score between the CAV and the latent representation of the input, we can define the query with respect to a certain concept as
\begin{equation}
Q(\mathcal{D}_x,C) := \{q : q = w_C \cdot e(x), x \in \mathcal{D}_x, \left|q\right| \gg 0\}.
\end{equation}
We can recover the instances in $\mathcal{D}_x$ that are most similar or dissimilar to the concept for strongly positive or negative $q$'s, respectively.
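Concept querying reduces to scoring every embedded shape against the CAV and ranking; a sketch with synthetic placeholders for the CAV and the dataset embeddings:

```python
import numpy as np

rng = np.random.default_rng(3)
h, n = 8, 100

w_cav = np.eye(h)[0]                      # unit CAV for some concept (illustrative)
latents = rng.normal(size=(n, h))         # e(x) for every shape x in the dataset

scores = latents @ w_cav                  # q = w_C . e(x) for each shape
order = np.argsort(scores)
top5, bottom5 = order[-5:][::-1], order[:5]   # most similar / most dissimilar
```

The indices in `top5` and `bottom5` identify the dataset shapes most and least aligned with the concept, as in Figure~\ref{fig:querying}.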
\section{Results}
In this section, we outline the results obtained for an in-house dataset consisting of 1165 car shapes represented by point clouds of dimension $\mathbb{R}^{6146 \times 3}$ with accompanying drag coefficients obtained from CFD simulations. A quarter of the shapes were reserved for validation, and the auto-encoder was trained on the rest. The auto-encoder consists of four fully connected layers for the encoder, resulting in an embedding space with 8 dimensions; the decoder mirrors the encoder structure with the final output dimension equal to that of the original shapes. All activations in the auto-encoder are \texttt{LeakyReLU} \cite{Redmon2016YouOL} apart from the latent layer, which has a $\tanh$ activation function to provide some hard bounds on the latent space, and no activation for the final layer of the decoder so that we can model the points in all of $\mathbb{R}^3$. For the regressor network $r_{\theta_r}$, we use a similar fully connected deep architecture to predict the drag coefficient for each car. Consequently, we restrict the output to values in $\mathbb{R}_{>0}$ via a \texttt{ReLU} activation. Given the small size of the dataset, a small dropout was added to most layers to combat over-fitting. All models are trained for 1000 epochs using the Adam optimizer \cite{Kingma2015AdamAM} with default settings.
\subsection{Obtaining CAVs}
\begin{table}[t!]
\centering
\begin{tabular}{c c c}
\toprule
Concept & No. shapes & Example \\
\midrule
Sport & 9 & \raisebox{-.5\height}{\includegraphics[scale=0.1]{Sport}} \\
Sedan & 9 & \raisebox{-.5\height}{\includegraphics[scale=0.1]{Sedan}} \\
Cuboids & 30 & \raisebox{-.5\height}{\includegraphics[scale=0.1]{cuboid}} \\
Ellipsoids & 30 & \raisebox{-.5\height}{\includegraphics[scale=0.1]{ellipsoid}}\\
\bottomrule
\end{tabular}
\caption{Collected and generated examples of concepts}
\label{tab:selected}
\end{table}
\noindent \textbf{Collecting concepts}. To collect interesting concepts, we manually inspected the dataset and identified several styles of cars with a few examples each. In addition, to test the capability of the latent space to encode out of distribution shapes, we decided to represent the concept of boxiness and curviness via randomly generated cuboids and ellipsoids. To be close to the car shape distribution, we restrict the aspect ratio of the generated shapes to lie in a similar range to the cars. Specific numbers and examples are given in Table~\ref{tab:selected}. We chose the examples from the validation set to test the representation power of the latent space. Where non-concepts were necessary, we chose 50 random shapes from both the training and validation set.
\begin{figure}[b!]
\centering
\includegraphics[width=0.85\linewidth,trim={0in 0in 0in 1in},clip]{drag_tcav.png}
\caption{TCAV metric expressed in terms of sign count and magnitude. Since the values for each concept in the CAV pair are complementary, we only show the values for the first concept. Error bars represent one standard error.}
\label{fig:drag_tcav}
\end{figure}
\begin{figure*}[t!]
\centering
\begin{subfigure}[b]{0.45\linewidth}
\centering
\includegraphics[width=0.95\linewidth, trim={0in 0in 0in 0.25in}, clip]{Explore1.jpg}
\label{fig:explore1}
\end{subfigure}
\hspace{0.2in}
\begin{subfigure}[b]{0.45\linewidth}
\centering
\includegraphics[width=0.95\linewidth, trim={0in 0in 0in 0.25in}, clip]{Explore2.jpg}
\label{fig:explore2}
\end{subfigure}
\caption{Concept blending for two pairs of CAVs. We blend the Ellipsoids - Cuboids CAV with the Sport - Sedan CAV (left) and with the Sport - Random CAV (right). Note the different effect of moving towards the Sport concept in both cases.}
\label{fig:image_grids}
\end{figure*}
\noindent \textbf{Training CAVs}. Once the concept examples are selected, we train linear classifiers on the latent space representation of the examples via SGD with an $L_2$ penalty. We explore CAVs obtained for each concept in Table~\ref{tab:selected} paired with a random subset of the car dataset. In addition, we train CAVs for the Cuboids-Ellipsoids and Sport-Sedan pairs since they are roughly opposite concepts. The TCAV metrics for the drag coefficient are shown in Figure~\ref{fig:drag_tcav}.
The TCAV metrics broadly follow our intuitions in that the less aerodynamic concepts tend to increase the drag coefficients much more than the streamlined concepts. This is even more pronounced for the Cuboids-Ellipsoids CAV; however, the same cannot be said for the Sport-Sedan CAV. This suggests that the Sedan concept might be more aerodynamic than first thought or that this is a poorly understood concept for the model. All the CAVs were verified to be statistically significant using a two-sided t-test.
\subsection{Concept Blending and Querying}
As discussed in Section~\ref{sec:exploring}, we can blend different concepts into an existing design by adding linear combinations of the CAVs to the latent space embeddings. Image grids for two pairs of interesting CAVs are presented in Figure~\ref{fig:image_grids}. First, we would like to note that both examples confirm the viability of our approach since the generated designs are not present in the original dataset and follow the intuition of blending designs well. In addition, it is interesting to observe how the behavior of the CAVs depends on the concepts used to generate them. Specifically, moving towards the Sport concept for the Sport - Sedan CAV generates wider-based and larger sporty designs, in contrast to the Sport-influenced shapes of the Sport - Random CAV. Both synthetic concepts also change the design in sensible ways, affecting the shape of both the front and the back of the cars to generate more compact or SUV-like types. Finally, the different concepts constrain each other in interesting ways depending on their relative strengths in the blend. For example, the last column of the right figure features muscle-type sports cars or racing-type sports cars depending on the degree of Cuboids or Ellipsoids added.
\noindent \textbf{Querying}. We present the top five most similar designs for some concepts in Figure~\ref{fig:querying}. We note that querying the dataset is a good way to evaluate the quality of the learned CAV, as observed from the overlap between the results for the Sport and Ellipsoids concepts. Retraining the CAV for the Sport concept yields better query results, shown in Figure~\ref{fig:sports_q}. This behavior emphasizes the stochastic nature of the CAV and suggests that adding more examples and tuning the classifier might be worthwhile.
\begin{figure}[t!]
\centering
\includegraphics[width=0.96\linewidth]{queries.jpg}
\caption{Top five queries from the dataset for a specific concept. From the top: Sedan, Ellipsoids, Sport and Cuboids.}
\label{fig:querying}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=.96\linewidth]{Sport_q.jpg}
\caption{Query results for the re-computed Sport - Random CAV.}
\label{fig:sports_q}
\end{figure}
\subsection{Parametric CAVs}
While the TCAV framework is great for exploring intangible concepts, it would also be useful for designers and engineers to control the parametric qualities of a design. To test whether this is possible, we generate a synthetic dataset of ellipsoids with constant proportions but with deformations of random height. We then select some examples with the highest deformations to obtain a HighBump - Random CAV. By varying the strength of the CAV, we can control the height of the deformation as seen in Figure~\ref{fig:fixed_swell}.
Even though the embedding dimension is set to 8, the data manifold is essentially one-dimensional, as evidenced by the mean off-diagonal correlation coefficient being 0.97. Thus, it is not surprising that the CAV can control the height of the deformation so well. To determine if we could isolate the deformation from the other shape parameters, we trained another auto-encoder on a similar dataset but with random ellipsoid proportions. The new manifold is not one-dimensional, with an absolute off-diagonal mean correlation of 0.57. However, because of the still significant correlation between the dimensions, we found it necessary to define the CAV using a converse LowBump concept instead of a non-concept. Using this CAV, we were able to vary the height of the deformation without affecting the rest of the ellipsoid, as shown in Figure~\ref{fig:random_swell}.
\begin{figure}[t!]
\centering
\raisebox{-.5\height}{\includegraphics[width=0.99\linewidth]{fixed_swelling.jpg}}
\caption{Results of varying the HighBump CAV. Original shape is in the middle with negative $\varepsilon$ translations to the left and positive to the right.}
\label{fig:fixed_swell}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=0.99\linewidth]{random_swelling1.jpg}\\
\vspace{0.3in}
\includegraphics[width=0.99\linewidth]{random_swelling2.jpg}
\caption{HighBump - LowBump CAV isolates the height parameter of the deformation for variable ellipsoid bodies.}
\label{fig:random_swell}
\end{figure}
These results hint that the TCAV framework could be easily extended to regressive concepts. Indeed, the HighBump - LowBump CAV is effectively a biased linear regression problem.
\section{Discussion}
In this work, we demonstrated the applicability of the TCAV XAI framework to a dataset of 3D car shapes. We trained a simple auto-encoder to embed high dimensional point clouds into a low dimensional manifold. By gathering examples of different car types from the dataset together with abstract representations of style concepts, we were able to explore the latent space in a human interpretable way. Even with this simple architecture, the latent space proved to be rich enough to blend and represent concepts outside the data distribution. However, at the same time, the model was able to constrain the linear combinations of CAVs to produce interesting blends of various concepts without degenerating quickly into previously seen data points.
We introduced the notion of a parametric CAV that allows for a reinterpretation of the latent space in terms of a known design parameter or one that might have become of interest later. Our approach was to cast the regression problem into a classification task to define the parametric CAV; however, we can use a linear regressor instead of a classifier. Using a linear regressor would, in principle, allow for the construction of single concept CAVs, and we leave it as a future direction of work.
Finally, we would like to highlight that the high performance of the networks was not a prerequisite for the success of these experiments. Indeed, the low amount of data, simple architecture, and small latent dimension hampered the reconstruction capabilities of the auto-encoder. It is encouraging that concept blending worked well in this regime, and we believe it will also work well for high-performance, latent space-based generative models. TCAV may prove even more valuable for such models with a high number of latent dimensions since understanding and directly exploring these spaces is very challenging.
\section{Acknowledgements}
This work was done in the scope of the Innovate UK grant on "Explainable AI system to rationalise accelerated decision making on automotive component performance and manufacturability" (Project 10009522).
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
In this paper we take the first steps in studying the collider
phenomenology of supersymmetric models that have some approximate
conformal symmetry, which is reflected in the fact that the
superpartners of the standard model (SM) particles have a
continuous spectrum~\cite{Continuum} rather than being discrete
states. The superpartners can have continuous spectra when the
supersymmetric standard model is coupled to a conformal field
theory (CFT) sector~\cite{Continuum} which gives rise to so-called
``unparticle'' behavior.\footnote{Unparticles are fields with a
continuous spectrum \cite{Georgi,Georgi2}, possibly with a
mass gap \cite{Fox,coloredunparticles}, whose two-point functions
exhibit a nontrivial scaling behavior.} A mass gap is generated
for the superpartners of (and also the excitations of) the SM
particles if the conformal symmetry is softly broken in the
infrared (IR). This scenario is most easily modeled in a
five-dimensional (5D) anti-de Sitter (AdS) space using the AdS/CFT
correspondence~\cite{AdSCFT,Cacciapaglia}. The 5D AdS space is cut
off by an ultraviolet (UV) brane where supersymmetry (SUSY) is
broken. In the absence of an IR brane, the SM fields propagating
in the bulk will have continuous spectra. Such a theory would have
been ruled out if the spectra continue down to zero mass. However,
a mass gap for nonzero-modes can be generated by introducing a
soft wall in the IR, which can be parameterized by a position
dependent bulk mass term or a dilaton field with an appropriate
profile. The boundary conditions on the UV brane remove half of
the fermion zero-modes so that the SM chiral fermions can be
obtained. In the supersymmetric limit, the 4D theory consists of
SM particles and their superpartners as the zero-modes, plus a
continuum of Kaluza-Klein (KK) excitations starting from some gap
for each field. After SUSY breaking is introduced on the UV brane,
the zero modes of the superpartners are lifted while the mass gaps
of the continuum excitations are not affected since they are
determined in the IR. Depending on the parameters, for large
enough SUSY breaking the zero mode of the superpartner can merge
into the continuum and only a continuous spectrum for the
superpartner is left.
As one can imagine, the collider phenomenology could
be quite complicated with continuous spectra. Calculations of production
cross-sections have already been discussed in \cite{coloredunparticles}, so here
we will focus on decay processes. Imagine that
some highly excited mode in the gluino continuum is produced at
the Large Hadron Collider (LHC); it can then decay to a squark with an arbitrary mass in
the continuum between the squark gap and the initial gluino mass (neglecting the mass of the
emitted quark). The squark can then decay back to a
gluino as long as the squark mass is above the gluino gap.
An obvious question is: does a continuum decay prefer to occur for small
mass differences or large mass differences? If the gluino prefers
to decay to a squark with an invariant mass close to its own, then the jet
emitted in the decay will carry a relatively small amount of energy, and
there will be many steps of decays before it reaches the bottom of
the spectrum. The events will contain a high multiplicity of soft
visible particles, which can be quite challenging at hadron colliders.
On the other hand, if the gluino prefers to decay to the squark
near the bottom of the spectrum, then we expect only a few decay steps
and hard jets from the decays. The signal events in this case are
more like traditional SUSY models. Since the theory becomes conformal at
high energies, we expect that in the high invariant mass limit we will be closer
to the first picture \cite{Strassler,Polchinski,HofmanMaldacena,Csaki}. Here we would like to
make some more quantitative statements.
Explicit calculations for continuum superpartner events, however, are
not straightforward. Because the initial and/or final states are
not particles, the usual Feynman rules for particles are not
directly applicable. One way to avoid this problem is to introduce
a regularizing IR brane which makes the extra dimension compact,
then the continuum becomes discrete KK modes \cite{Stephanov} and we can perform
calculations just as in the particle case. The continuum limit is
obtained by taking the position of the IR brane to
infinity. The physical results should not depend on this position, as
long as it is much larger than the inverse mass gap, so that the KK modes are dense enough to approximate
a continuum. Even in this case, one may worry about whether the
usual narrow-width approximation of splitting a decay chain into
steps with independent decays is a good one, as there are an infinite number of KK modes which
can make virtual (``off-shell") contributions, reflecting the fact that the continuum states do not have a mass shell to be on.
We will discuss the validity of this approach and compare full calculations with the narrow-width approximation.
This paper is organized as follows. We first review the 5D construction
of continuum superpartners, then plough into the details of the decay chains, showing that the narrow-width
approximation is generally valid in perturbative theories. We then discuss the phenomenology and the parametric dependence of the
observable quantities and give conclusions. A detailed check of the IR regulator is included in the Appendix.
\section{A Review of Continuum Superpartners}
Let us recall the setup used in Ref.~\cite{Continuum}, which will be
the starting point of our collider studies. We consider a 4D
SUSY theory with approximate conformal symmetry. Through the AdS/CFT
correspondence \cite{AdSCFT}
this can be modeled by a 5D AdS space. We take the metric
of the AdS$_5$ space to be
\begin{eqnarray} d s^2 = \left( \frac{R}{z} \right)^2 \left( \eta_{\mu \nu} d
x^\mu d x^\nu - d z^2 \right)\,.\label{metric} \end{eqnarray}
The space is cut off at $z=z_{UV}=\epsilon$ by a UV
brane where SUSY is broken.
As is well known, such a theory can be described in the language of 4D $\mathcal{N}=2$
supersymmetry, which implies that for each matter field the 5D $\mathcal{N} =1$
hypermultiplet $\Psi$ can be decomposed into two 4D $\mathcal{N} =1$ chiral
superfields $\Phi=\{\phi,\chi,F\}$ and $\Phi_c=\{\phi_c,\psi,F_c \}$,
with the fermionic Weyl components forming a Dirac fermion \cite{MartiPomarol}.
In the case of gauge fields, a 5D $\mathcal{N} =1$ vector superfield can be decomposed
into an $\mathcal{N} =1$ 4D vector superfield $V=\{A_{\mu},\lambda_1,D\}$ and
a 4D $\mathcal{N} =1$ chiral superfield $\sigma=\{(\Sigma+iA_5)/\sqrt{2},\lambda_2,F_{\sigma}\}$.
As in the usual case of extra-dimensional theories, one can decompose the matter fields as:
\begin{eqnarray}
\chi (p,z) & = & \chi _{\rm{4}} (p)\left(\frac{z}{z_{UV}}\right)^2 f_L(p,z), \qquad
\phi (p,z) = \phi _{\rm{4}} (p)\left(\frac{z}{z_{UV}}\right)^{3/2} f_L(p,z), \label{decom1}\\
\psi (p,z) & = & \psi _{\rm{4}} (p)\left(\frac{z}{z_{UV}}\right)^2 f_R(p,z), \qquad
\phi _c (p,z) = \phi _{{\rm{c4}}}
(p)\left(\frac{z}{z_{UV}}\right)^{3/2} f_R(p,z)\label{decom2},
\end{eqnarray}
where the relationships between scalar and fermion profiles are
provided by SUSY.
With a fermion bulk mass $m(z)= c+\mu_{f} z$, the bulk equations of motion give
\begin{eqnarray}
\frac{\partial^2}{\partial z^2}f_R+\left(p^2-\mu_f^2-2\frac{\mu_f
c}{z}-\frac{c(c-1)}{z^2}\right)f_R&=&0\label{2ndorder1},\\
\frac{\partial^2}{\partial z^2}f_L+\left(p^2-\mu_f^2-2\frac{\mu_f
c}{z}-\frac{c(c+1)}{z^2}\right)f_L&=&0 \label{2ndorder2}.
\end{eqnarray}
The solutions for $f_L(p,z)$ and $f_R(p,z)$
can then be expressed as linear combinations of the Whittaker
functions of the first and second kind, $M$ and $W$:
\begin{eqnarray}
f_L(p,z) &=& a \,M( \kappa,\frac{1}{2} + c,2\sqrt {\mu_{f}^2 - p^2 } z)
+ b \, W( \kappa,\frac{1}{2} + c,2\sqrt {\mu_{f}^2 - p^2 } z)~, \label{eq:fl}
\\
f_R(p,z) &=& -a \, \frac{2(1 + 2c)\sqrt {\mu_{f}^2 - p^2 }}{p} M(
\kappa, - \frac{1}{2} + c,2\sqrt {\mu_{f}^2 - p^2 } z) \nonumber \\
&&-\, b \, \frac{p}{(\mu_{f} +\sqrt {\mu_{f}^2 - p^2 } )} W( \kappa, -
\frac{1}{2} + c,2\sqrt {\mu_{f}^2 - p^2 }z)~, \label{eq:fr}\\
\kappa&\equiv&- \frac{{c\,\mu_{f}}}{{\sqrt {\mu_{f}^2 - p^2 } }}~,
\end{eqnarray}
where $a$ and $b$ are coefficients, which may depend on the
four-momentum $p$, determined by boundary and
normalization conditions. We will take all SM fields as
left-handed, so we demand that the right-handed chiral superfield
$\Phi_c(p, z)$ satisfies the Dirichlet boundary condition at $z =
z_{UV}$, {\it i.e.}, $\Phi_c(p, z_{UV}) = 0 $. The left-handed
chiral superfield $\Phi$, through the equations of motion,
satisfies modified Neumann boundary conditions. As a result, only
the left-handed chiral superfield has a normalizable zero-mode.
As shown in Ref.~\cite{Continuum}, the zero mode profile for the
left-handed field is given by
\begin{equation}
f^0_{L}(p,z)=\mathcal{N} (\mu_{f}, 0)z^{-c}e^{-\mu_{f} z},
\end{equation}
where the normalization factor $\mathcal{N} (\mu_{f}, 0)$ is fixed by the
condition $\int_0^\infty |f^0_{L}|^2 \, dz = 1$,
\begin{equation}
\mathcal{N}(\mu_{f}, 0) = (2^{ - 1 + 2c} \mu_{f} ^{ - 1 + 2c} ~ \Gamma
(1 - 2c))^{-1/2}~. \label{enorm}
\end{equation}
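As a quick cross-check of Eq.~(\ref{enorm}), the unit normalization of the zero mode can be verified numerically. The sketch below (Python; not part of the original analysis) uses the left-panel parameters of Fig.~\ref{zeromode}, for which $c<0$ and the integrand is finite at $z=0$, and evaluates the norm with a composite Simpson rule:

```python
import math

# Left-panel parameters of Fig. 1 (illustrative; c < 0 keeps the
# integrand finite at z = 0)
c, mu_f = -0.45, 0.4   # mu_f in TeV

# Normalization factor of Eq. (enorm): N = (2^{2c-1} mu_f^{2c-1} Gamma(1-2c))^{-1/2}
N = (2.0**(-1 + 2*c) * mu_f**(-1 + 2*c) * math.gamma(1 - 2*c))**(-0.5)

def f0_sq(z):
    """|zero-mode profile|^2 = N^2 z^{-2c} exp(-2 mu_f z)."""
    return N**2 * z**(-2*c) * math.exp(-2*mu_f*z)

# Composite Simpson rule on [0, 60] TeV^-1; the tail beyond is exponentially small
a, b, n = 0.0, 60.0, 60000
h = (b - a) / n
norm = f0_sq(a) + f0_sq(b)
for i in range(1, n):
    norm += f0_sq(a + i*h) * (4 if i % 2 else 2)
norm *= h / 3
print(norm)   # close to 1
```

For $c>0$ the integrand is singular (though integrable) at $z=0$, and the closed-form check $\mathcal{N}^2\,\Gamma(1-2c)\,(2\mu_{f})^{2c-1}=1$ is then the easier route.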
In Fig.~\ref{zeromode}, we show the zero mode profiles for two values
of $c$. As can be seen, when $c$ is positive,
the zero mode is localized near the UV brane, while for $c<0$, the zero
mode is repelled away from the UV brane.
\begin{figure}[htb]
\centering
\begin{tabular}{lcr}
\includegraphics[scale=0.5]{minuszeromode}
&\qquad \qquad &
\includegraphics[scale=0.5]{pluszeromode}
\end{tabular}
\caption {Normalized zero mode profiles. In the left panel we take
$c= - 0.45$, $ \mu_{f} = 0.4 ~\mbox{TeV}$ while in the right panel, we take
$c = 0.45$, and $\mu_{f} = 0.4 ~\mbox{TeV}$.}\label{zeromode}
\end{figure}
Similarly for gauge fields \cite{Continuum} we write:
\begin{eqnarray}
\lambda_1 (p,z) & = & \chi _{\rm{4}} (p) e^{\mu_{g} z} \left(\frac{z}{z_{UV}}\right)^2 h_L , \quad
A_\mu (p,z) = A_{\mu 4 } (p) e^{\mu_{g} z} \left( \frac{z}{z_{UV}} \right)^{1/2} h_L ,\\
\lambda_2 (p,z) & = & \psi _{\rm{4}} (p)e^{\mu_{g} z} \left(\frac{z}{z_{UV}}\right)^2 h_R, \quad
\Sigma = \phi _4 (p)e^{\mu_{g} z} \left( \frac{z}{z_{UV}} \right)^{3/2}
h_R\, ,
\end{eqnarray}
where $h_{L,R}$ represents $f_{L,R}$ evaluated at $c=1/2$ with $\mu_{f}\to\mu_{g}$, and $\mu_{g}$ is related
to the dilaton vacuum expectation value \cite{Cacciapaglia}, $\langle\Phi\rangle=e^{- 2 \mu_{g} z} / g_5$.
After including SUSY breaking on the UV brane, the zero mode of
the superpartner will be lifted, and the SUSY breaking mass that
it acquires depends on the overlap of its wave function with the
UV brane. For $c$ close to +1/2, the zero mode of the
superpartner will acquire a SUSY breaking mass near the full
strength ({\it i.e.}, comparable to the SUSY breaking on the UV
brane). On the other hand, if $c<0$, the SUSY-breaking mass of the
zero mode superpartner is suppressed relative to the SUSY breaking
on the UV brane. Because $c=1/2$ for the gauge field, we will take
all SM fields to have $c$ close to 1/2 in this paper in order to
have similar SUSY-breaking masses.
The spectrum of nonzero-modes is continuous without an IR cutoff.
As mentioned in the Introduction, it is convenient to introduce a
regulating IR brane at a large distance $z=z_{IR}=L$ so that we
can deal with discrete normalizable KK states \cite{Stephanov}. The continuum limit
is obtained by taking $L\to+\infty$.
The coefficients $a$ and $b$ of the wave functions in
Eq.~(\ref{eq:fl}-\ref{eq:fr}) can be obtained by imposing the
boundary condition on the UV brane and the normalization
condition,
\begin{eqnarray}
\tilde f_L (p,z) &=& \mathcal{N}_L (\mu_{f} ,p)\left ( M( \kappa ,\frac{1}{2} + c,2\sqrt {\mu_{f} ^2 - p^2 }
z)\right. \nonumber \\ && \left. + ~ b \cdot W(\kappa,\frac{1}{2} + c,2\sqrt {\mu_{f} ^2 - p^2 } z) \right), \label{fL} \\
\nonumber \\
\tilde f_R (p,z) &=& \mathcal{N}_R (\mu_{f} ,p)\left(\frac{{2(1 + 2c)\sqrt {\mu_{f} ^2 - p^2 } }}{{\mu_{f} - \sqrt
{\mu_{f} ^2 - p^2 } }} \cdot M( \kappa , - \frac{1}{2} + c,2\sqrt {\mu_{f}^2 - p^2 } z)\right. \\
&& \left. + ~ b \cdot W(\kappa, -\frac{1}{2} + c,2\sqrt {\mu_{f}^2 - p^2 }z) \right)\; . \label{fR}
\end{eqnarray}
Imposing the boundary condition $f_R (p,z_{UV})=0$ we determine that,
\begin{eqnarray}
b &=& - \frac{{M( \kappa, - \frac{1}{2} + c,2\sqrt {\mu_{f} ^2 - p^2
} z_{UV})}}
{{W( \kappa, - \frac{1}{2} + c,2\sqrt {\mu_{f} ^2 - p^2 } z_{UV})}}
\cdot \frac{{2(1 + 2c)\sqrt {\mu_{f} ^2 - p^2 } }}{{\mu_{f} - \sqrt
{\mu_{f} ^2 - p^2 } }}\label{bcondition}\; ,
\end{eqnarray}
and $ \kappa \equiv-c\mu_{f}/\sqrt {\mu_{f}^2 - p^2 } $.
We are especially interested in the behavior of the superpartners in the conformal limit at high energies.
In the limit of $p \gg \mu_{f} $, the normalization factor can be
expressed as
\begin{eqnarray} \mathcal{N}_L = -\mathcal{N}_R = \left( {\frac{{2^{3 + 4c}
\pi \sec^2 (c\pi) }}{{\Gamma ( - \frac{1}{2} - c)^2}}} \cdot
z_{IR} \right)^{ - 1/2} . \label{NL} \end{eqnarray}
The gluino profiles can be found similarly by making the replacements
$c\to 1/2$ and $\mu_{f}\to \mu_{g}$ in Eqs.~(\ref{fL}--\ref{bcondition}).
The KK spectrum is determined by the boundary condition at the IR brane, $f_{R}(p,z_{IR})=0$,
which leads to:
\begin{eqnarray} \frac{{M(- \frac{{\mu_{g}}}{{2 \sqrt {\mu_{g}^2 - p^2 } }} ,0 ,2\sqrt {\mu_{g}
^2 - p^2 }~ z_{UV})}}{{W(- \frac{{\mu_{g}}}{{2 \sqrt {\mu_{g}^2 - p^2 } }} ,0
,2\sqrt {\mu_{g} ^2 - p^2 }~ z_{UV})}}= \frac{{M(- \frac{{\mu_{g}}}{{2 \sqrt
{\mu_{g}^2 - p^2 } }} , 0 ,2\sqrt {\mu_{g} ^2 - p^2 }~ z_{IR})}}{{W(-
\frac{{\mu_{g}}}{{2 \sqrt {\mu_{g}^2 - p^2 } }} , 0 ,2\sqrt {\mu_{g} ^2 - p^2 }~
z_{IR})}} . \label{mass} \end{eqnarray}
In the limits of interest, $\mu_{f},\mu_{g} \ll p \ll 1/z_{UV}$,
the left-hand side of Eq.~(\ref{mass}) tends to zero, which
implies that the numerator of the right-hand side of Eq.~(\ref{mass}) satisfies:
\begin{eqnarray} M(- \frac{{\mu_{g}}}{{2 \sqrt {\mu_{g}^2 - p^2 } }} , 0 ,2\sqrt {\mu_{g} ^2 -
p^2 }~ z_{IR}) \propto \cos (\frac{1}{4}\pi - \sqrt {p^2 - \mu_{g} ^2
} z_{IR} ) = 0 . \label{spectrum} \end{eqnarray}
Solving Eq.~(\ref{spectrum}), we obtain an approximate expression for the KK masses,
\begin{eqnarray} && m_n^2 \approx \mu_{g}^2 + (\frac{1}{4} + n)^2 \pi ^2 /z_{IR}^2 \qquad \mbox{with} \qquad n =0, 1, 2, \cdots .\end{eqnarray}
Some sample spectra are shown in
Fig.~\ref{fig:kkspectrum}, where we have taken $z_{UV} = R = 10^{-3} ~
\mbox{TeV}^{-1}$ and $ c = 1/2$. With mass gaps on the order of half a TeV, a 20 GeV mode spacing
is quite a good approximation to a continuum.
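The approximate spectrum above is easy to tabulate. The short Python sketch below (illustrative only; it uses the right-panel parameters of Fig.~\ref{fig:kkspectrum}) shows that the mode spacing is much smaller than $\pi/z_{IR}$ just above the gap and approaches $\pi/z_{IR}\approx 31$~GeV for highly excited modes:

```python
import math

mu_g, z_IR = 0.3, 100.0   # TeV and TeV^-1, right-panel parameters

def m_kk(n):
    """Approximate KK gluino mass: m_n = sqrt(mu_g^2 + ((n + 1/4) pi / z_IR)^2)."""
    return math.sqrt(mu_g**2 + ((n + 0.25) * math.pi / z_IR)**2)

masses = [m_kk(n) for n in range(60)]
print([round(m, 4) for m in masses[:5]])   # first few modes, just above the gap

spacing_low = masses[1] - masses[0]        # ~2.5 GeV near the gap
spacing_high = masses[50] - masses[49]     # approaches pi/z_IR ~ 31 GeV
print(spacing_low, spacing_high, math.pi / z_IR)
```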
\begin{figure}[t]
\centering
\begin{tabular}{cc}
\includegraphics[scale=0.53]{L80spectrum}
&
\includegraphics[scale=0.53]{L100spectrum}
\end{tabular}
\caption{\label{fig:kkspectrum} Spectra for the gluino (blue
dots) and the squark (red dots). The plot shows mass versus KK mode number. The parameters are chosen to be
$z_{UV} = 10^{-3} ~ \mbox{TeV}^{-1}$, $z_{IR} = 80 ~
\mbox{TeV}^{-1}$, $c=0.5$, $\mu_{g} = 0.4 ~\mbox{TeV}$ and $\mu_{f} = 0.5
~\mbox{TeV}$ for the left panel, and $z_{UV} = 10^{-3} ~
\mbox{TeV}^{-1}$, $z_{IR} = 100 ~ \mbox{TeV}^{-1}$, $c = 0.5$, $\mu_{g}
= 0.3 ~ \mbox{TeV}$ and $\mu_{f} = 0.4 ~\mbox{TeV}$ for the right
panel.}
\end{figure}
\section{Continuum Decay Chains and Narrow-Width Approximations}
Now we would like to study the decay of a continuum superpartner. The question is:
what are the characteristic features of the decay process? Does it
prefer to decay through multiple steps and emit soft particles
during the decays, or to decay directly down to the bottom of the
spectrum together with a hard particle? What is the
typical energy of the visible particles produced in the decay
chain relative to the energy and other parameters of the superpartner? As mentioned earlier, in order to use the calculation
techniques developed for particles, it is convenient to include a
regularizing IR brane to discretize the continuum into KK modes \cite{Stephanov},
so that we can deal with ordinary particles in the initial and
final states. To study the validity of the narrow-width approximation of the decay chain, which splits
the process into a sequence of independent decays at each step,
we compare the result from a calculation of
a sequence of two real 2-body decays with that of the 3-body decay which includes
all the contributions from the virtual intermediate continuum superpartner.
We assume that we start with an initial state with an
invariant mass much higher than the mass gaps and the
SUSY breaking mass. To simplify the calculations, we will ignore
the SUSY-breaking and the masses of the SM particles. We find
that, in the case that the calculations are reliable ({\it i.e.},
independent of $z_{IR}=L$ as long as $L \gg p^{-1}, \mu_{f}^{-1},
\mu_{g}^{-1}$), the virtual contributions are indeed smaller than the
real contributions so that the narrow-width approximation is reliable.
\subsection{The 2-body gluino decay calculation}
\begin{figure}[t]
\centering \vspace{15pt}
\includegraphics[scale=1.0]{sequential}
\vspace{5pt}
\caption {Feynman diagram for sequential KK gluino decay.}
\label{feyn}
\end{figure}
Once we include a regulating IR brane to make the spectra
discrete, the calculation of a 2-body decay should be
straightforward. We consider an initial state of a gluino of KK
level $m$, decaying in its rest frame to a quark and a KK
squark of level $n$, $\tilde{g}^m\to u^0_L\tilde{u}^{n,*}_L$. Since we ignore
the quark mass, the decay can occur to any KK squark lighter
than the initial gluino. The energy of the emitted quark
depends on which level of the KK squark the gluino decays to. To
calculate the decay rate, we need the coupling between the
gluino, squark, and the quark. The effective interaction of
the KK gluino, KK squark and the quark in momentum space is
given by:
\begin{eqnarray} S_{eff} = c(p_m, q_n) \int {\frac{{d^4 p_m }}{{(2\pi )^4 }}}
\frac{{d^4 q_n }}{{(2\pi )^4 }}~ u^0_L(p_m-q_n)
~\tilde{u}^{n,*}_L(q_n )~\tilde{g}^m (p_m) ~, \end{eqnarray}
where $c(p_m, q_n)$ is the vertex coefficient which can be
computed by integration of the gluino, squark and quark 5D
profiles over the fifth dimension. The expression is:
\begin{eqnarray} c (p_m, q_n) &=& \mathcal{N}_{\tilde g} (\mu_{g}, p_m) ~
\mathcal{N}_{\tilde u }(\mu_{f}, q_n)^{*} ~ \mathcal{N}_{u}(\mu_{f}, 0) ~
g_5\int_{z_{UV} }^{z_{IR}} {dz} \left( {\frac{z_{UV}}{z}}
\right)^5
~ \left(
{\frac{z}{z_{UV}}} \right)^2 e^{ - \mu_{f} z} z^{ - c}\nonumber \\
\nonumber \\ && \left( {\frac{z}{z_{UV}}} \right)^{3/2}
{f}_L(q_n,z)^{*}
~ e^{\mu_{g} z} \left( {\frac{z}{z_{UV}}} \right)^2
{h}_L(p_m,z)\; , \label{coupling}\end{eqnarray}
where $g_5$ is the 5D gauge coupling, which only enters the
calculation as an overall factor. The 4D gauge coupling $g_4$ is
related to the 5D gauge coupling $g_5$ by integrating over the zero
mode profiles in the kinetic term for the gauge fields coupled with
the dilaton. In that case one finds \cite{Cacciapaglia} that,
\begin{eqnarray} g_4^2 = \frac{g_5^2}{z_{UV}} \frac{1}{{\log (1/(2 z_{UV}
\mu_{g})) - \gamma_E }}, \end{eqnarray}
where $\gamma_E$ is Euler's constant. Since we are interested
in the gluino decay into a quark and a squark, we consider
the QCD gauge coupling, for which
$ \alpha_s(m_{Z})= 0.1184 $. If we take $z_{UV} = 10^{-3} ~
\mbox{TeV}^{-1} $ and the nominal value for the gluino mass gap of $\mu_{g}=0.1$ TeV,\footnote{The dependence
on the gluino mass gap $\mu_{g}$ is only logarithmic, reflecting the gauge coupling's running.} the 5D strong gauge coupling assumes
the dimensionful value $g_5 = 0.108~ \mbox{TeV}^{-1/2} $.
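This matching is straightforward to reproduce; the following Python sketch (a numerical check, not part of the derivation) recovers the quoted value of $g_5$ from $g_4^2=4\pi\alpha_s$:

```python
import math

alpha_s = 0.1184            # strong coupling at the Z mass
g4_sq = 4 * math.pi * alpha_s
z_UV = 1e-3                 # TeV^-1
mu_g = 0.1                  # TeV, nominal gluino mass gap

# Invert g_4^2 = (g_5^2 / z_UV) / (log(1/(2 z_UV mu_g)) - gamma_E)
gamma_E = 0.5772156649
log_term = math.log(1.0 / (2 * z_UV * mu_g)) - gamma_E
g5 = math.sqrt(g4_sq * z_UV * log_term)
print(g5)   # ~0.109 TeV^{-1/2}, consistent with the value quoted in the text
```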
An immediate observation from Eq.~(\ref{coupling}) is that a
finite coupling in the limit $z_{IR}\to+\infty$ is obtained only
for $\mu_{f}>\mu_{g}$. For $\mu_{f} < \mu_{g}$, the integral is dominated by the large
$z$ region and it blows up as we take $z_{IR}\to+\infty$. In this
case we cannot perform a sensible calculation. One can
see that the exponentially growing factor comes from the dilaton
profile for the KK gluino, which is a consequence of the fact that the zero
mode gauge field has to be constant along the fifth dimension
due to gauge invariance. A similar situation happens in the
Randall-Sundrum 2 (RS2) scenario~\cite{RS2}. If one calculates the
self interactions among KK gravitons in RS2 with a regulating IR
brane, one also finds that the coupling blows up as one takes the
IR brane to infinity. In the 4D CFT picture, these KK
gravitons correspond to the conformal bound states. The large
coupling just means that these bound states are strongly coupled.
These divergent couplings can never make any physical process
involving UV zero-modes
infinite. As is well
known, the KK picture can sometimes give misleading results when
locality in the extra dimension is involved.
The point is that any process starting with zero-modes
localized on the UV brane will mostly be sensitive to the physics
near the UV brane due to the locality in the extra dimension. The
``nonlocal'' process of producing KK gravitons in the deep IR
region must be suppressed due to interference of various diagrams
even though each of them can have a large coupling. In our theory,
all zero-modes are localized near the UV brane (for $c\sim 1/2$),
so all physical processes originated from the zero-modes should
also only be sensitive to the physics near the UV brane. The
divergent coupling in the $\mu_{f} < \mu_{g}$ case just means that
the na\"ive calculations done in the KK picture are not valid. On
the other hand, for $\mu_{f} >\mu_{g}$ the integral is dominated
by the region near the UV brane, and the calculations are
trustworthy. For the same reason, the coupling between a KK
gluino, a KK squark, and a KK quark is also dominated by the IR
region and diverges when the IR brane is moved to infinity.
However, since the wave functions of the KK quarks are suppressed
in the UV region, the decay to a KK squark plus a KK quark should
also be suppressed if the initial KK gluino was produced in the UV
region in the first place. To avoid such complications, we will
restrict our study to the $\mu_{f} > \mu_{g}$ case and consider
decays to the quark zero mode only in this paper.
After obtaining the coupling, it is straightforward to calculate
the decay rate to each KK squark. To compare with the continuum
limit, we express the result in terms of the differential decay
rate as a function of the energy of the outgoing quark:
\begin{eqnarray} \frac{d\Gamma_{\tilde{g}^m \rightarrow u^0_L\tilde{u}^{n,*}_L}}{d E_{u^0_{L}}}&\approx& \sum_{n:\;\mu_{f}<E_n<E_m}\frac{\Delta
\Gamma_{\tilde{g}^m \rightarrow u^0_L\tilde{u}^{n,*}_L}}{\Delta
E_{u^0_{L}} }\nonumber \\
&\approx& \sum_{n:\;\mu_{f}<E_n<E_m}\frac{{c(p_m,q_n)c^\dag
(p_m,q_n)}}{{\Delta E_{u^0_{L}}}}\frac{E^2_{u^0_{L}}}{{4\pi p_m
}}\frac{{\rm Tr}[t^at^a]}{8} \label{twobody}\end{eqnarray}
where $t^a$ are the SU(3) generators in the fundamental representation and $p_m \equiv \sqrt {p_m^2 } $ denotes the invariant mass of the initial gluino. From conservation of
energy-momentum we have,
\begin{equation}
E_{u^0_{L}} = \frac{1}{2} p_m \left( {1 - \frac{{q_n^2
}}{{p_m^2 }}} \right),
\end{equation}
which implies
\begin{equation}
\Delta E_{u^0_{L}} =
\frac{1}{2} p_m \left(1 - \frac{{q_{n-1} ^2 }}{{p_m^2 }}\right) -
\frac{1}{2} p_m \left(1 - \frac{{q_{n} ^2 }}{{p_m^2 }}\right)\;.
\end{equation}
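The kinematics above can be made concrete with a short sketch (Python; the parent gluino mass is the value used in Fig.~\ref{diffrate}, and the sample squark masses are arbitrary points between the squark gap and the parent mass):

```python
p_m = 1.15    # invariant mass of the initial KK gluino, in TeV
mu_f = 0.4    # squark mass gap, in TeV

def quark_energy(q_n):
    """Massless-quark energy in the gluino rest frame: E = (p_m/2)(1 - q_n^2/p_m^2)."""
    return 0.5 * p_m * (1.0 - (q_n / p_m)**2)

# The quark is hardest when the squark sits at the gap and softest
# when the squark mass approaches the parent gluino mass
for q in (mu_f, 0.7, 1.0, 1.14):
    print(q, round(quark_energy(q), 4))
```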
\begin{figure}[t]
\centering
\includegraphics[scale=0.6]{L100E2}
\caption {Gluino differential decay rate with respect to the first
quark energy. The initial KK gluino mass is $ p_{\tilde g}=1.15
~\mbox{TeV}$ and we use $g_5 = 0.108~\mbox{TeV}^{-1/2}$. We
choose $z_{UV} = 10^{-3} ~ \mbox{TeV}^{-1}$, $z_{IR} = 100 ~
\mbox{TeV}^{-1}$, $c=0.5$, $\mu_{g} = 0.3 ~\mbox{TeV}$ and
$\mu_{f} = 0.4 ~\mbox{TeV}$.}\label{diffrate}
\end{figure}
A typical differential decay rate as a function of the outgoing
quark energy is shown in Fig.~\ref{diffrate}. We have normalized
the decay rate by a factor $p\cdot z_{IR}$ as we do with all
differential decay rate figures hereafter, in order to cancel the
unphysical IR dependence $\Delta m_n=\pi/z_{IR}$ in the continuum
limit.\footnote{It is easy to see from the normalizations of the gluino and
squark wave functions that the coupling is proportional to $1/L$. The differential decay rate of a single KK
squark wave functions. The differential decay rate of a single KK
gluino is proportional to $(1/L)^2 \times L=1/L$ as the density of
the final KK squarks is proportional to $L$. This is related to
the fact that the overlap of a single KK gluino wave function with
the UV region where the zero modes are located is also
proportional to $1/L$. In reality the initial KK gluino will be
produced with some energy range and the number of KK gluinos in
that energy range is again proportional to $L$, which cancels the
unphysical $1/L$ dependence in the differential decay rate of a
single KK mode.} At small quark energies, the decay rate is
suppressed by phase space, while at large quark energies, the
decay rate is suppressed by the couplings due to the small overlap
between the gluino and squark wave functions of large KK level
differences. Overall we see that the suppression due to the
coupling is stronger and the outgoing quark has a relatively soft
spectrum. The suppression of high energy emissions is exactly the
behavior that is expected in a CFT
\cite{Strassler,Polchinski,HofmanMaldacena,Csaki}. Starting with
very high KK modes this leads to approximately spherical events
\cite{Csaki}. A more detailed discussion of the quark spectrum and
its parameter dependence will be presented in the next section
after the narrow-width approximation is justified.
\subsection{The 5D mixed position-momentum propagator}
{}From the previous subsection we see that the gluino prefers to decay to a
squark with a mass not far below the gluino mass. This means that
the resulting squark is likely to decay again back to the gluino
and the whole process will involve a long decay chain. An
important question is whether the sequence of decays can be treated independently and
correlations implied by the full intermediate propagator
can be safely neglected. This question is less
trivial for the continuum case than the usual particle case
because there are an infinite number, a continuum, of intermediate
states which can give contributions at each step. To study this problem, we
consider the 3-body gluino decay process $\tilde{g}^m\to u^0_L
u^{0*}_{L}\tilde{g}^n$ where the continuum squark is in the intermediate
state. We will calculate the contribution from the virtual (``off-shell")
intermediate squark while using real KK modes for the initial and final states.
In the KK picture with a regulating IR brane, the intermediate
squark propagator is simply a sum of the particle propagators of
all KK levels \cite{Stephanov}. The propagator of an individual scalar particle is
\begin{eqnarray}
\lim_{\epsilon\rightarrow 0^+} \frac{i}{q^2 - m_{n}^2 + i \epsilon} = \pi \delta (q^2 - m_{n}^2 ) +i\,{\mathcal P} \frac{1}{q^2 -
m_{n}^2}
\label{PPart}
\end{eqnarray}
where ${\mathcal P}$ denotes the Cauchy principal value. The delta function (the
real part) represents the phase space for a real intermediate
particle (the ``on-shell" contribution), while the second term (the
imaginary part) represents
the contribution of a virtual intermediate state (the ``off-shell" contribution). However, the KK picture is not the most
convenient one to perform calculations with virtual intermediate states as it involves
an infinite sum of singular functions. We know that in the
continuum limit, the infinite sum of KK propagators simply becomes
the unparticle propagator \cite{Stephanov}, and the series of poles on the positive
real axis of $p^2$ merge into a branch cut of the unparticle
propagator. Therefore, we will employ the full unparticle
propagator for the intermediate state in our calculation. In this
subsection we derive the unparticle propagator in mixed
position-momentum space \cite{Falkowski}.
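The identification of the real part of Eq.~(\ref{PPart}) with the on-shell phase space can be illustrated numerically: smeared against a smooth test function $f$, $\mathrm{Re}[\,i/(q^2-m^2+i\epsilon)\,]$ reproduces $\pi f(m^2)$ as $\epsilon\to 0$. A minimal Python sketch (the test function $e^{-s}$ and all parameter values are arbitrary choices; the substitution $q^2-m^2=\epsilon\tan\theta$ flattens the Lorentzian peak):

```python
import math

eps, m2 = 1e-6, 2.0
f = lambda s: math.exp(-s)   # smooth test function

# integral of eps/((s - m2)^2 + eps^2) f(s) ds  ->  integral of f(m2 + eps*tan(theta)) dtheta
a, b, n = -math.pi/2 + 1e-3, math.pi/2 - 1e-3, 20000
h = (b - a) / n
g = lambda t: f(m2 + eps * math.tan(t))

total = g(a) + g(b)
for i in range(1, n):
    total += g(a + i*h) * (4 if i % 2 else 2)
total *= h / 3
print(total, math.pi * f(m2))   # the two numbers agree at the per-mille level
```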
The left-handed scalar propagator satisfies
the equation of motion for the left-handed profile, with a delta-function source,
\begin{eqnarray} \left( {-\partial _z^2 + \frac{3}{z}\partial _z + (c^2 + c
- \frac{{15}}{4})\frac{1}{{z^2 }} + \frac{{2c\mu_{f} }}{z} + \mu_{f} ^2 -
p^2 } \right) i P(p,z,z') = \left( \frac{z}{z_{UV}} \right)^{ 3}
\delta (z - z') , \label{Green}\end{eqnarray}
and the UV boundary condition for the propagator is
\begin{eqnarray} \left. {\left( {\partial _z + \frac{1}{z}( - \frac{3}{2} + c
+ \mu_{f} z)} \right)P (p,z,z')} \right|_{z = z_{UV}} = 0 .\label{Greenbc} \end{eqnarray}
To connect to the continuum limit with outgoing boundary conditions, we demand that the propagator
be exponentially damped \cite{Falkowski} at large Euclidean momenta:
\begin{equation}
P(i p,z_{IR},z')|_{p\to+\infty,z'<z_{IR}}\to e^{-p\,z_{IR}} .
\end{equation}
In order to solve Eq.~(\ref{Green}--\ref{Greenbc}) for $P(p,z,z')$, we look for solutions in the
regions $z> z'$, $P_ > (p,z,z')$, and $ z < z'$, $P_ < (p,z,z') $, and use matching boundary conditions at $ z= z'$,
\begin{eqnarray}
&& \left. {P_ < (p,z,z') - P_ > (p,z,z')} \right|_{z = z'} = 0, \\
&& \left. { {\partial _z P_ < (p,z,z') - \partial _z P_ >
(p,z,z')} } \right|_{z = z'} = -i \left( \frac{z}{z_{UV}}
\right)^{3} . \end{eqnarray}
We can write the general solution to Eq.~(\ref{Green})
in terms of two independent solutions to the homogeneous equation \cite{Falkowski}, $K(p,z)$ and
$S(p,z)$:
\begin{eqnarray}
K(p,z) &=& \left( {\frac{z}{z_{UV}}} \right)^{3/2}
\frac{{W(\kappa ,\frac{1}{2} + c,2\sqrt {\mu_{f} ^2 - p^2 } z)}}
{{W(\kappa ,\frac{1}{2} + c,2\sqrt {\mu_{f} ^2 - p^2 } z_{UV})}}\;, \\ \nonumber \\
S(p,z) &=& \left( {\frac{z}{z_{UV}}} \right)^{3/2} \frac{1}{{2\sqrt {\mu_{f} ^2 - p^2 } }}
\frac{{\Gamma (1 + c - \kappa )}}{{\Gamma (2 + 2c)}} \nonumber \\
&& \left( M(\kappa ,\frac{1}{2} + c,2\sqrt {\mu_{f} ^2 - p^2 } z) ~
W(\kappa ,\frac{1}{2} + c,2\sqrt {\mu_{f} ^2 - p^2 } z_{UV}) \right.
\nonumber
\\ && \left. - W(\kappa ,\frac{1}{2} + c,2\sqrt {\mu_{f} ^2 - p^2 } z) ~ M(\kappa
,\frac{1}{2} + c,2\sqrt {\mu_{f} ^2 - p^2 } z_{UV}) \right) \label{SN}, \end{eqnarray}
which satisfy the following boundary conditions:
\begin{eqnarray} K(p,z_{UV}) = 1,\qquad K(i p,z)|_{p\to+\infty}\to e^{-pz}, \qquad S(p,z_{UV})=0, \qquad
S'(p,z_{UV})=1. \end{eqnarray}
In the region $z < z'$ ($z > z'$), the general solution can be written as
\begin{eqnarray} P_ {< (>)} (p,z,z') &=& a_ {<(>)} ~ K(p,z) + b_ {<(>)} ~ S(p,z) .\end{eqnarray}
The boundary condition at the UV brane, Eq.~(\ref{Greenbc}), fixes
the ratio $a_</b_<$ to be proportional to the kinetic function
$\Sigma_{F_c}(p)$ already encountered in Ref.~\cite{Continuum}:
\begin{eqnarray} \frac{a_< }{ b_<}=\frac{\Sigma_{F_c}(p)}{z_{UV}}=
\frac{(\mu_{f} +\sqrt {\mu_{f}^2 - p^2 })}{p^2 } \frac{{W\left( {
- \frac{{c\mu_{f}}}{{\sqrt { - p^2 + \mu_{f}^2 } }},\frac{1}{2} +
c,2\sqrt { - p^2 + \mu_{f}^2 }~ z_{UV}} \right)}}{{W\left( { -
\frac{{c\mu_{f}}}{{\sqrt { - p^2 + \mu_{f}^2 } }},\frac{1}{2} -
c,2\sqrt { - p^2 + \mu_{f}^2 }~ z_{UV}} \right)}}~,\end{eqnarray} \begin{eqnarray}
\Sigma_{F_c}(p) &=&z_{UV} \cdot
\left(\frac{R}{z_{UV}}\right)^3\frac{1}{p}\frac{f_L}{f_R}~. \end{eqnarray}
The kinetic function $\Sigma_{F_c}(p)$ not only fixes the spectral density, but it is
also essential in determining the phase space.
Using the required IR behavior of $P (p,z,z')$, we conclude that
$P_ > (p,z,z')$ can only be proportional to $K(p,z)$ ({\it
i.e.}, $b_ > = 0 $). At this stage we use the matching conditions
at $z=z'$, so that in the range $ z < z'$, the left-handed squark
propagator can be expressed as \cite{Falkowski}:
\begin{eqnarray} i\,P(p,z,z') &=& \frac{\Sigma_{F_c}(p)}{z_{UV}}K(p,z)K(p,z')
- S(p,z)K(p,z')~. \label{pro} \end{eqnarray}
For the case $z>z'$ we just interchange $z\leftrightarrow z'$ in Eq.~(\ref{pro}).
Obviously for $p^2 > \mu_{f}^2$,
$\sqrt{\mu_{f}^2- p^2}$ becomes imaginary, so the propagator has a
branch cut on the real axis for $p^2 >\mu_{f}^2$, and the
discontinuity is just twice the real part. This discontinuity corresponds
to a real intermediate unparticle (the ``on-shell'' contribution). From the
analogy of the particle propagator (\ref{PPart}) we interpret the
imaginary part of the unparticle propagator as the virtual (``off-shell")
contribution to the 3-body decay process, while the real part (the discontinuity)
corresponds to the phase space of the unparticle. In fact, one can
use this phase space to calculate directly the real 2-body
differential decay rate considered in the previous subsection,
instead of summing over KK modes in a discretized theory. In the
Appendix we show the equivalence of the two pictures in the limit
that the IR regulator is removed,
$L\to \infty$. The numerical results for the 2-body differential
decay rates using the two approaches also agree well for large
$L$.
\subsection{The 3-body gluino decay}
With the unparticle propagator derived in the previous subsection,
we can compute the virtual contribution to the 3-body decay
process, $\tilde{g}^m\to u^0_L u^{0*}_{L}\tilde{g}^n$. We take
both the initial and final gluinos to be KK states
in an IR regularized theory, but use the unparticle propagator for
the intermediate state with only the imaginary part corresponding to the
virtual contribution. It is straightforward to find the
amplitude squared for the virtual 3-body process by integrating
the imaginary part of the propagator $P (q, z, z')$ derived in
Eq.(\ref{pro}) over the positions of the two vertices in the extra
dimension,
\begin{eqnarray}
\left| {M(\tilde{g}^m\to u^0_L u^{0*}_{L}\tilde{g}^n )}
\right|^2 = 4 g_5^4 ~ |v (p_{\tilde{g}^m}, q,
p_{\tilde{g}^{n}})|^2 ~ ( p_{\tilde{g}^{m}}.p_{u^{0*}_{L}}) ~ (p_{u^0_L}.p_{\tilde{g}^n} )~.
\label{amplitude} \end{eqnarray}
We have labeled the 4-momenta of the initial, final, and
intermediate states as follows: $p_{\tilde{g}^{m}}$ for the
initial gluino $\tilde{g}^m$, $p_{\tilde{g}^{n}}$ for the
outgoing gluino $\tilde{g}^n $, $p_{u^0_L}$ for the
quark, $p_{u^{0*}_{L}}$ for the anti-quark, and $q$
for the intermediate squark. The factor $v (p_m, q, p_{n})$ is
given by the integration in the extra dimension of the respective
profiles and the propagator:
\begin{eqnarray}
v (p_{\tilde g }^n, q, p_{\tilde g }^m) &=& \mathcal{N}_{u}^2
(\mu_{f}, 0) \mathcal{N}_{\tilde g}(\mu_{g}, p_{\tilde g }^m)
\mathcal{N}_{\tilde g }(\mu_{g}, p_{\tilde g }^n ) \nonumber \\
&&\times \int_{z_{UV}
}^{z_{IR}} {dz} \int_{z_{UV} }^{z_{IR}} {d z'} ~ e^{(\mu_{g} -\mu_{f}) z}
z^{1/2 - c} h_L(p_{\tilde g }^m ,z)
z_{UV}^{-1} \tilde{P}(q, z, z') \nonumber \\
&&\quad\quad\quad\quad\quad\quad\quad\quad\quad e^{(\mu_{g} -\mu_{f}) z'} z'^{1/2 - c} h_L(p_{\tilde g }^n,z')~, \end{eqnarray}
where we use a rescaled scalar propagator, $ \tilde{P}(q, z, z')=
\left( {\frac{z}{z_{UV}}} \right)^{-3/2} \left(
{\frac{z'}{z_{UV}}} \right)^{-3/2} P(q, z, z')$, and we can write
the differential decay rate as:
\begin{eqnarray}
\frac{{d\Gamma_3 }}{{dE_u }} = g_5^4 ~ {v (p_{\tilde g }^m,
q, p_{\tilde g }^n)}
{v (p_{\tilde g }^m, q, p_{\tilde g }^n)^\dagger}
~\frac{{E_u^2 (2E_u p_{\tilde g }^m - (p_{\tilde g }^m)^2 +
(p_{\tilde g }^n)^2 )^2}}{{32 p_{\tilde g }^m (2E_u -
p_{\tilde g }^m) \pi ^3 }} \frac{1}{8} {\rm Tr} [t^at^bt^bt^a] \label{total}\\
\nonumber
\end{eqnarray}
where $E_{u_L^0 }$ is the energy of the (first) outgoing quark in
the initial gluino rest frame and $\frac{1}{8}{\rm Tr}
[t^at^bt^bt^a] = \frac{1}{8} C_A C_F^2 = 2/3$ for $SU(3)$. The
range of $E_{u_L^0 }$ is determined by the masses of the initial
and final gluinos:
\begin{eqnarray}
E_{u_L^0 \min } &=& 0 ~,\label{E2min} \\
E_{u_L^0 \max } &=& \frac{1}{2} ~ p_{\tilde g }^m~
\left( {1 - \frac{{(p_{\tilde g }^n )^2 }}{{(p_{\tilde g }^m)^2 }}}
\right)~.
\label{E2max}\end{eqnarray}
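As a cross-check of the numbers quoted above, the color factor and the kinematic endpoint of Eq.~(\ref{E2max}) can be evaluated directly; the function names below are our own, and the gluino momenta are treated as the corresponding KK masses.

```python
def color_factor(CA=3.0, CF=4.0/3.0):
    # (1/8) Tr[t^a t^b t^b t^a] = (1/8) C_A C_F^2 = 2/3 for SU(3)
    return CA * CF**2 / 8.0

def e_u_max(p_m, p_n):
    # endpoint of the quark energy, Eq. (E2max):
    # E_max = (1/2) p_m (1 - p_n^2 / p_m^2)
    return 0.5 * p_m * (1.0 - (p_n / p_m)**2)
```

For degenerate initial and final KK levels the endpoint closes, `e_u_max(1.15, 1.15) -> 0`, as expected from Eq.~(\ref{E2max}).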
\begin{figure}[t]
\centering
\begin{tabular}{cc}
\includegraphics[scale=0.56]{L80E2PR3}
&
\includegraphics[scale=0.56]{L100E2PR3}
\end{tabular}
\caption {Gluino decay rate with respect to the first quark
energy. The yellow dashed line represents the virtual contribution
from the 3-body decay, rescaled by a factor of $3.24
\times 10^2$, while the blue line represents the 2-body decay
result. In both plots, the initial KK gluino mass is $ p_{\tilde
g} =1.15 ~\mbox{TeV}$ and we use $g_5 = 0.108 ~\mbox{TeV}^{-1/2}$.
In the left plot, we choose $z_{UV} = 10^{-3} ~ \mbox{TeV}^{-1}$,
$z_{IR} = 80 ~ \mbox{TeV}^{-1}$, $c= 0.5$, $\mu_{g}= 0.4
~\mbox{TeV}$ and $\mu_{f} = 0.5 ~\mbox{TeV} $; in the right plot,
we choose $z_{UV} = 10^{-3} ~ \mbox{TeV}^{-1}$, $z_{IR} = 100 ~
\mbox{TeV}^{-1}$, $c=0.5$, $\mu_{g} = 0.3 ~\mbox{TeV}$ and
$\mu_{f} = 0.4 ~\mbox{TeV}$.}\label{twovsthree}
\end{figure}
The contribution to the differential decay rate as a function of
the outgoing quark energy from the virtual 3-body process
can be compared with the real 2-body contribution by summing
over all final KK gluinos and outgoing anti-quark energies
which are kinematically allowed. The result is shown in
Fig.~\ref{twovsthree}. The virtual 3-body process is the cut of a two-loop diagram
while the real 2-body process is the cut of a one-loop diagram, so
we expect that the 3-body decay rate should be suppressed with
respect to the 2-body decay rate by a 4D loop factor as long as the theory remains perturbative. The
shapes of the two contributions are also similar, but with the 3-body
contribution being slightly harder. Thus we can conclude that in this case it is reasonable
to use the narrow-width approximation to calculate the energy
distributions of the visible particles coming from the
continuum decays, treating each decay step as producing real states that subsequently
decay independently of the details of the previous step.
\section{Phenomenology of the Continuum Superpartner Decays}
\begin{figure}[t]
\centering
\begin{tabular}{cc}
\includegraphics[scale=0.56]{L80muu03}&
\includegraphics[scale=0.56]{L80Pu04}
\end{tabular}
\caption{ 2-body gluino differential decay rate as a function of
the outgoing quark energy $ E_{u^0_{L}}$ in TeV. We have taken in
this example $z_{UV} = 10^{-3} ~ \mbox{TeV}^{-1}$, $z_{IR} = 80 ~
\mbox{TeV}^{-1}$, $c= 1/2$ and $ g_5 = 0.108 ~ \mbox{TeV}^{-1/2}
$. In the figure on the left, we fixed the initial gluino KK-mass
at $ p_{\tilde g} = 1.26 ~\mbox{TeV}$ and its mass-gap to
$\mu_{g} = 0.3 ~\mbox{TeV}$, and vary the squark mass gap by
$\mu_{f}= 0.40 ~, 0.43~, 0.46~, 0.50~, 0.56 ~\mbox{TeV} $. As can
be seen from the figure, the peak decreases in magnitude
and its position shifts towards larger values of $E_{u^0_{L}}$. In the figure
on the right, we fixed $\mu_{g} = 0.4 ~\mbox{TeV}$ and $\mu_{f} =
0.5 ~\mbox{TeV} $, and vary the initial gluino KK-mass by $
p_{\tilde g} = 0.83 ~,0.97~ ,1.15~,1.29~, 1.52 ~ \mbox{TeV}$. In
this case the peak increases in magnitude with increasing $ p_{\tilde g} $
and its position remains roughly constant as a function of $
E_{u^0_{L}}$.}\label{width1}
\end{figure}
\begin{figure}[t]
\centering
\begin{tabular}{cc}
\includegraphics[scale=0.56]{L80muu02}&
\includegraphics[scale=0.56]{L80Pu02}
\end{tabular}
\caption{Same as Fig.~\ref{width1} for different sets of energies and mass gaps.
In the plot on the left, we fixed $ p_{\tilde g}= 2.4
~\mbox{TeV}$, and $\mu_{g} = 0.2 ~\mbox{TeV}$. We vary the squark
mass gap by $\mu_{f}= 0.36~, 0.40~, 0.43~,0.46~, 0.5 ~\mbox{TeV}
$. As can be seen, the peak decreases in magnitude and slightly
moves towards larger values of $ E_{u^0_{L}}$. In the figure on
the right, we fixed $\mu_{g} = 0.2 ~\mbox{TeV}$ and $\mu_{f} = 0.3
~\mbox{TeV} $, and vary the initial gluino KK-mass by $ p_{\tilde
g} = 1.24~, 1.62~, 2.01~ ,2.21~, 2.4 ~ \mbox{TeV}$. In this case,
the peak magnitude increases and its position remains roughly
constant with respect to $ E_{u^0_{L}}$.}\label{width2}
\end{figure}
\begin{figure}[t]
\centering
\begin{tabular}{c}
\includegraphics[scale=0.6]{L80E2muu} \\
\includegraphics[scale=0.6]{L80Peakmuu}
\end{tabular}
\caption{$ \frac{E_{u^0_{L},max}}{\mu_{f}-\mu_{g}}$ vs.
$\frac{\mu_{f}-\mu_{g}}{p_{\tilde g}}$ and $(p_{\tilde
g}z_{IR})\frac{ d \Gamma}{dE_{u^0_{L}}}|_{max}$ vs.
$\frac{\mu_{f}-\mu_{g}}{p_{\tilde g}}$ evaluated at the peak
positions. We use $z_{UV} = 10^{-3} ~ \mbox{TeV}^{-1}$, $z_{IR} =
80 ~ \mbox{TeV}^{-1}$, $c= 0.5$ and $ g_5 = 0.108
~\mbox{TeV}^{-1/2} $ and fix $ p_{\tilde g}= 2.40 ~\mbox{TeV}$,
$\mu_{g} = 0.2 ~\mbox{TeV}$, while the squark mass gap is varied
from $\mu_{f}= 0.3 ~\mbox{TeV}$ to $\mu_{f} = 0.75 ~\mbox{TeV} $.
The plots display the fitted functions. }\label{peak}
\end{figure}
The comparison of the differential decay rate calculations for a
real 2-body process and a virtual 3-body process in the previous
section shows that it is reasonable to calculate the energy
distributions of the visible particles coming from the continuum
superpartner decays using the narrow-width approximation. This
greatly simplifies the phenomenological study of continuum
superpartners. As mentioned earlier, the continuum states tend to
decay to other continuum states that are nearby in invariant mass,
and so the ordinary particles tend to be emitted with soft
energies. This is the behavior that is expected in a CFT
\cite{Strassler,Polchinski,HofmanMaldacena,Csaki}. Thus if the
decay chain starts fairly high up in the continuum, then there is
usually a long decay chain with an approximately spherical
distribution of energy \cite{Csaki}. In this section, we examine
how the energy distributions of the visible particles depend on
the parameters of the theory and the process.
As the energy of the emitted quark $ E_{u^0_{L}}$ increases (or
equivalently, the squark mass $q_n$ decreases), the phase
space of the final particles increases while the vertex
$c(p_n,q_m)$ decreases. It is the competition between these two
factors that makes the differential decay rate reach its maximum at
a quark energy around $ E_{u^0_{L},{\rm max}} \sim (\mu_{f}
-\mu_{g} )$ as can be seen in Figs.~\ref{width1} and \ref{width2}.
As a consequence, for a high-energy initial state ($p_{\tilde g}
\gg \mu_f, \, \mu_g$), we expect order $p_{\tilde g}/(\mu_f
-\mu_g)$ particles coming out of a decay chain. For fixed $g_4$,
$c$, $z_{IR}$, and $z_{UV}$, there are only three parameters: the
initial gluino energy $p_{\tilde g}$ and the two mass gaps
$\mu_f$, $\mu_g$. Since there should be little dependence on
$z_{IR}$ and $z_{UV}$, as long as they are far away from the
energy scales of interest, physical quantities can be expressed as
functions of the two dimensionless ratios of $p_{\tilde g}$,
$\mu_f$, and $\mu_g$ up to an overall normalization. In
Fig.~\ref{peak} we show the dependence of the peak position and
magnitude of the differential decay rate on the initial energy
$p_{\tilde g}$ and the mass gap difference $\mu_f -\mu_g$. From
these figures we can determine an approximate behavior for the
peak position, $E_{u^0_{L},{\rm max}}$ in the differential decay
rate as a function of $\mu_{f}-\mu_{g}$ and $p_{\tilde g}$. We
find that it can be parameterized by an exponentially decaying
function with an overall normalization $A$ and a numerical
exponent $B$:
\begin{eqnarray}
E_{u^0_{L},{\rm max}}= A (\mu_{f}-\mu_{g}) ~ e^{- B\frac{(\mu_{f}-\mu_{g}
)}{ p_{\tilde{g}}}}\;.
\label{Emaxapprox}
\end{eqnarray}
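As a numerical illustration (our own sketch, using the fitted values $A\approx 1$ and $B\approx 3/2$), Eq.~(\ref{Emaxapprox}) can be evaluated as follows, with masses and momenta in TeV:

```python
import math

def e_peak(mu_f, mu_g, p_gluino, A=1.0, B=1.5):
    # Peak position of the quark-energy spectrum, Eq. (Emaxapprox):
    # E_max = A (mu_f - mu_g) exp(-B (mu_f - mu_g) / p_gluino)
    d = mu_f - mu_g
    return A * d * math.exp(-B * d / p_gluino)
```

For $p_{\tilde g}\gg\mu_f-\mu_g$ the exponential is close to unity and the peak sits near $\mu_f-\mu_g$ itself, e.g. `e_peak(0.5, 0.2, 2.4)` gives about 0.25~TeV for a 0.3~TeV mass-gap difference.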
The numerical values of $A$ and $B$ are obtained by fitting
numerical data points, as shown in Fig.~\ref{peak}, where we find
that $A\approx 1$ and $B\approx 3/2$. Using the plane wave
approximation for the wave functions of the incoming gluino and
outgoing squark, the coupling squared is proportional to $|c(p_m,
q_n) |^2 \propto \frac{1}{z_{IR}^2 ~(\mu_f -\mu_g)^2}$. Then one
can substitute Eq.(\ref{Emaxapprox}) into Eq.(\ref{twobody}),
using $\Delta E_{u^0_L}\approx \pi/z_{IR}$, to obtain an
approximate analytical expression for
$(d\Gamma/dE_{u^0_{L}})|_{E_{u^0_{L},{\rm max}}}$ as a function of
$\mu_{f}$ and $\mu_{g}$:
\begin{eqnarray}
\left( p_{\tilde{g}}
z_{IR}\frac{d\Gamma}{dE_{u^0_{L}}}\right)\bigg|_{E_{u^0_{L},max}} = C
\cdot g_4^2 \frac{e^{- 2 B \frac{ (\mu_{f}-\mu_{g}
)}{p_{\tilde{g}}}}}{4 \pi ^2} \frac{( \log (1/(2 \mu_{g} z_{UV})
)-\gamma )}{( \log (1/(2 \mu_{f} z_{UV}) )-\gamma )}\nonumber\\
\label{dGdEmax}
\end{eqnarray}
where the overall normalization $C$ compensates for the inaccuracy of the plane wave
approximation, and we have written everything in terms of 4D quantities.
The dependence on $z_{UV}$
is only logarithmic and therefore mild. To give an idea of the validity of the approximation,
we compare it against numerical data in the second plot of Fig.~\ref{peak}.
We see that (\ref{Emaxapprox}) and (\ref{dGdEmax}) provide reasonably good estimates
for the functional dependence.
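The functional dependence in Eq.~(\ref{dGdEmax}) is easy to explore numerically; the following sketch (our own, with the overall constants $C$ and $g_4$ left as free inputs) reproduces, for instance, the decrease of the peak height with growing $\mu_f-\mu_g$:

```python
import math

EULER_GAMMA = 0.5772156649015329

def peak_height(mu_f, mu_g, p_gluino, z_uv=1e-3, g4=1.0, B=1.5, C=1.0):
    # Dimensionless combination p_gluino * z_IR * dGamma/dE at the peak,
    # Eq. (dGdEmax); masses in TeV, z_uv in TeV^-1.
    log_ratio = (math.log(1.0 / (2.0 * mu_g * z_uv)) - EULER_GAMMA) \
              / (math.log(1.0 / (2.0 * mu_f * z_uv)) - EULER_GAMMA)
    return C * g4**2 * math.exp(-2.0 * B * (mu_f - mu_g) / p_gluino) \
           / (4.0 * math.pi**2) * log_ratio
```

The exponential suppression dominates over the mild growth of the logarithmic ratio, so the peak height falls as the mass-gap difference increases, consistent with Figs.~\ref{width1} and \ref{width2}.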
\section{Conclusions}
Supersymmetry is one of the best motivated scenarios for new physics beyond the standard model at the TeV scale.
For the past two decades it has been intensively searched for. Currently, the experiments at the LHC have placed very strong limits requiring the masses of the squarks and gluino to be above $\sim$ 1 TeV in the standard scenario~\cite{daCosta:2011qk,Khachatryan:2011tk,Chatrchyan:2011ek}. This means that either the superpartner spectrum is unnaturally heavy or the superpartners decay in unusual ways which escape the standard SUSY searches. As we showed in this paper, if the superpartners have continuous spectra, they tend to have long decay chains and produce many soft SM particles. This is a challenging scenario at the LHC because the soft particles may not pass the experimental cuts, and the signals could be buried in the QCD backgrounds. It may require a more specialized study to search for this kind of signal.
Na\"ively the phenomenological studies of continuum superpartners at colliders may appear to be formidable, as the usual calculation techniques have only been developed for particles. Here we have seen that these methods can still be used in continuum calculations. In particular, we showed that the narrow-width approximation is still generally valid in the perturbative decay processes that we are interested in. This greatly simplifies the calculations because processes with long sequences of decays can be divided into individual steps involving real states and each step can be calculated independently. The easiest way to perform the calculations is to introduce a regularizing IR brane in the 5D picture which transforms the continuum into discrete KK modes; then one can carry out the calculations as in the usual particle case. As long as the KK modes have reasonably fine spacings, the results are basically independent of the position of the IR brane. In this way, continuum superpartners can also be implemented in the usual collider simulation tools, allowing more detailed studies to develop new strategies to search for this kind of exotic collider signal.
\section*{Acknowledgments}
We thank Jack Gunion and Markus Luty for useful discussions and comments. We also thank the Aspen Center for Physics and the Kavli Institute for Theoretical Physics where part of this work was completed. The authors were
supported by the US Department of Energy under contract
DE-FG02-91ER406746.
\section{Introduction}
There is continued interest in improving the time resolution of scintillator detectors.
Such improvements are especially challenging in the case of the registration of low energy gamma quanta,
where the time resolution is limited by the low statistics of scintillation photons.
Superior time resolution for the registration of low energy gamma quanta is of crucial importance in nuclear medicine applications
such as positron emission tomography (PET), where the new generation of PET scanners utilizes for the image reconstruction
the differences between the times of flight (TOF) of annihilation quanta from the annihilation
vertex to the detectors (
Conti \hyperlink{Conti2009}{2009},
Humm et al \hyperlink{Humm2003}{2003},
Karp et al \hyperlink{Karp2008}{2008},
Townsend \hyperlink{Townsend2004}{2004},
Moses Derenzo \hyperlink{Moses1999}{1999},
Moses \hyperlink{Moses2003}{2003},
Conti Eriksson \hyperlink{ContiEriksson2009}{2009}
).
In order to improve the TOF resolution and to increase a geometrical acceptance of the PET scanners we are developing
a J-PET detection system (
Moskal et al \hyperlink{Moskal2011}{2011} \hyperlink{Moskal2014}{2014} \hyperlink{Moskal2015}{2015},
Raczynski et al \hyperlink{Raczynski2014}{2014} \hyperlink{Raczynski2015}{2015}
).
The system is based on long strips of plastic scintillators which are characterized
by better timing properties than the inorganic scintillator
crystals used in the state-of-the-art PET scanners (
Conti \hyperlink{Conti2009}{2009},
Humm et al \hyperlink{Humm2003}{2003},
Karp et al \hyperlink{Karp2008}{2008},
Townsend \hyperlink{Townsend2004}{2004}
).
\begin{figure}[h]
\begin{center}
\includegraphics[width=220pt]{Fig_2strips.jpg}
\includegraphics[width=170pt]{Fig_detector_2layers.jpg}
\end{center}
\caption{
(Left)
Schematic view of the two detection modules of the J-PET detector.
A single detection module consists of a scintillator strip read out by two photomultipliers.
A single event used for the image reconstruction includes information about
times of light signals arrival to the left (L) and right (R) ends of the upper (up) and lower (dw)
scintillators (t$^R_{up}$, t$^L_{up}$, t$^R_{dw}$, t$^L_{dw}$).
A filled red dot inside the figure indicates a place of e$^+$e$^-$ annihilation.
$\Delta{z}$ denotes the position of the interaction point along the scintillator,
and $\Delta{LOR}$ indicates the position of annihilation
along the line-of-response (LOR). More details are explained in the text.
(Right) Schematic visualization of an example of two detection layers of the J-PET detector.
Each scintillator strip is aligned axially and read out at two ends by photomultipliers.
\label{barrel}
}
\end{figure}
Left panel of Fig.~\ref{barrel} shows a schematic view of two detection modules of the J-PET detector,
where (similarly as described in the
reference~(Nickles et al \hyperlink{Nickles1978}{1978}))
the time of the interaction (hit-time) of gamma quantum in the scintillator
($t_{hit} = (t^L + t^R) / 2$) is calculated as an arithmetic mean of times
at left ($t^L$) and right ($t^R$) side of the strip.
The position of interaction along the strip (axial hit position) may be in the first approximation
calculated as $\Delta z$ = ($t^L - t^R$) $v$ / 2, where $v$ denotes the speed of light signals
in the scintillator strip. For example, for plastic strips with a cross section of 0.5~cm x 1.9~cm
the effective light signal velocity is equal to $v$~=~12.2~cm/ns~(Moskal et al \hyperlink{Moskal2014}{2014}).
Thus, in the case of strips with a length of 30~cm characterized by a hit-time resolution
of 0.188~ns (FWHM), the axial position resolution amounts to about 2.3~cm (FWHM)~(Moskal et al \hyperlink{Moskal2014}{2014}).
The position along the line-of-response ($\Delta LOR$) between two strips
(e.g. up and down shown in the left panel of Fig.~\ref{barrel}) is calculated as
$\Delta LOR = (t_{hit}^{up} - t_{hit}^{dw}) c / 2$, where $c$ denotes the speed of light.
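These reconstruction formulas, and the error propagation behind the quoted 2.3~cm resolution, can be summarized in a short sketch (our own; it assumes uncorrelated left and right times of equal spread):

```python
V_EFF = 12.2     # effective light-signal speed in the strip [cm/ns]
C_LIGHT = 29.98  # speed of light [cm/ns]

def hit_time(t_left, t_right):
    # t_hit = (t_L + t_R) / 2
    return 0.5 * (t_left + t_right)

def axial_position(t_left, t_right):
    # Delta z = (t_L - t_R) * v / 2
    return 0.5 * (t_left - t_right) * V_EFF

def lor_position(t_hit_up, t_hit_down):
    # Delta LOR = (t_hit_up - t_hit_dw) * c / 2
    return 0.5 * (t_hit_up - t_hit_down) * C_LIGHT

# With sigma(t_L) = sigma(t_R), one finds FWHM(t_L - t_R) = 2 FWHM(t_hit),
# hence FWHM(Delta z) = v * FWHM(t_hit) = 12.2 * 0.188 ~ 2.3 cm.
```

A 0.2~ns difference between the two ends thus corresponds to an axial displacement of 1.22~cm from the strip center.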
The hit-time and hence also axial position resolution may still be improved
e.g. by probing photomultiplier pulses in the voltage domain by newly developed electronics~(Palka et al \hyperlink{Palka2014}{2014}),
and by applying in the reconstruction the compressed
sensing theory~(Raczynski et al \hyperlink{Raczynski2014}{2014},\hyperlink{Raczynski2015}{2015})
and the library of synchronized model signals~(Moskal et al \hyperlink{Moskal2015}{2015}).
The timing resolution, as discussed hereafter in this article,
can also be improved by making a readout that allows
recording timestamps from a larger number of photons compared to the case
of the vacuum tube photomultipliers.
Due to the relatively low costs of plastic scintillators and their
large light attenuation length (on the order of 100~cm), it is possible
to construct a detector with a long axial field-of-view in a cost effective way.
This feature makes the J-PET detector competitive with the present solutions as regards whole-body imaging.
One of the possible arrangements of the scintillator strips in the J-PET scanner
is visualized in the right panel of Fig.~\ref{barrel}.
Such an alignment permits the use of more than one detection layer, thus increasing the efficiency of gamma quanta registration.
Plastic scintillators were not used so far as possible detectors for PET imaging due to the negligible
probability of the photoelectric effect and the lower detection probability with respect to the inorganic crystals.
With the plastic scintillators the detection of 0.511 MeV gamma quanta is based in practice
only on Compton scattering. In Fig.~\ref{comptons} we show distributions of the Compton scattered electron
energy for gamma quanta (i) reaching the detector
without scattering in the patient's body, (ii) after scattering through
an angle of 30 degrees, and (iii) after scattering through an angle of 60 degrees.
The presented distributions show that in order to limit registration of gamma quanta scattered
in the patient to the range from 0 to about 60 degrees
(as it was applied earlier e.g. in some LSO or BGO based tomographs~(Humm et al \hyperlink{Humm2003}{2003})),
one has to use an energy threshold of about 0.2 MeV~(Moskal et al \hyperlink{Moskal2012}{2012}).
The scatter fraction can be further reduced at the expense of the sensitivity,
yet it should be noted that its suppression to the level achievable
in the newest LSO based scanners with the energy window of
0.440-0.625 MeV (Surti et al \hyperlink{Surti2007}{2007})
is questionable.
However, for a quantitative statement, more detailed investigations are needed.
\begin{figure}[h]
\begin{center}
\includegraphics[width=250pt]{Fig_comptons.pdf}
\end{center}
\caption{
Energy distribution of electrons scattered via the Compton process by gamma quanta
with energies shown in the plot. The shown spectra include energy deposition resolution determined
for the 30~cm long strips of the J-PET detector which is equal
to $\sigma(E)/E = 0.044 / \sqrt{E(MeV)}$~(Moskal et al {\protect\hyperlink{Moskal2014}{2014}}).
\label{comptons}
}
\end{figure}
Application of the 0.2~MeV threshold also suppresses
to a negligible level the signals due to secondary
Compton scattering in the detector material~(Kowalski et al \hyperlink{Kowalski2015}{2015}).
So far we have obtained a 0.188~ns (FWHM) hit-time resolution
for the registration of 0.511~MeV annihilation quanta
by means of 30~cm long strips of plastic scintillators
read out at both ends by vacuum tube photomultipliers~(Moskal et al \hyperlink{Moskal2014}{2014}).
The experiment was performed using a collimated beam of annihilation quanta
from the $^{68}$Ge isotope placed inside a lead collimator with a 1.5~mm wide and 20~cm long slit.
A dedicated mechanical system allowed us to irradiate the tested scintillator at the desired position.
A coincidence registration of signals from the tested module and the reference detector
on the other side of the collimator ensured selection of annihilation gamma quanta.
A tested module consisted of a BC-420 plastic scintillator~(\hyperlink{SaintGobain}{Saint Gobain Crystals})
with dimensions
of 0.5~cm x 1.9~cm x ~30~cm connected optically at the ends to the Hamamatsu photomultipliers
R5320~(\hyperlink{Hamamatsu}{Hamamatsu}).
The experimental setup and results of the measurements are
described in detail in reference~(Moskal et al \hyperlink{Moskal2014}{2014}).
In principle, information about the time of interaction of a gamma quantum in the scintillator
is carried by all emitted scintillation photons.
However, in practice in typical detectors, only the first few registered photons, contributing
to the leading edge of the electrical signal generated by the photoelectric converters,
are utilized in the determination of the onset of these signals
and hence in the determination of the time of the gamma quantum interaction.
This is also the case for the scintillator strips in the current version
of the J-PET detector (see upper part of Fig.~\ref{detector}),
where the time of the interaction is determined as an arithmetic mean of times at which electric signals generated by
photomultipliers attached to both ends cross a preset threshold voltage.
Therefore, the time resolution may be improved
by making a readout that allows recording timestamps from a larger number of photons arriving at the scintillator edge.
There are first attempts to register all timestamps using arrays of single-photon avalanche diodes
(Meijlink et al \hyperlink{Meijlink2011}{2011}),
but presently the registration of the arrival times of all photons with a good time resolution
over large areas is still rather impractical.
It is, however, important to stress that
the intrinsic timing resolution limit is approached already
when using only about 20 timestamps from the first detected photons
(Seifert et al \hyperlink{Seifert2012}{2012}).
\begin{figure}[h]
\begin{center}
\includegraphics[width=220pt]{Fig_detector.png}
\end{center}
\caption{
Upper scheme indicates a single module of the J-PET detector consisting of the scintillator
strip read out on two sides by vacuum tube photomultipliers~(Moskal et al \protect\hyperlink{Moskal2014}{2014}).
Lower part of the figure indicates a scheme of an exemplary multi-SiPM readout allowing for
determination of timestamps of 20 detected photons (ten on each side).
Right panel of the figure shows arrangement of photomultipliers.
Geometrical overlap between the scintillator and the photosensitive
part of the photomultipliers is marked as white rectangles.
\label{detector}
}
\end{figure}
In this article we study
the possibility of improving the time resolution for large size detectors (a few tens of centimeters)
by registration of timestamps
from several photons.
This may be realized by preparing a readout in the form of an array with several SiPM
photomultipliers, as indicated in the lower part of Fig.~\ref{detector}.
In such a case,
the set of all registered photons is divided into several subgroups and
the time of registration of the first photon in each subgroup is recorded.
This allows one to construct estimators of the time of the gamma quantum interaction
based on the number of timestamps equal to the number of SiPM photomultipliers.
In the following sections,
first we estimate time resolution limits
for an infinitesimally small detector, making the idealistic assumption that
the time of arrival of each photon can be measured
and used for the estimation of the time of gamma quantum interaction.
We use Fisher information
from all emitted photons and calculate the Cram\'e{}r-Rao lower limit (
Seifert et al \hyperlink{Seifert2012}{2012},
De~Groot \hyperlink{DeGroot1986}{1986}
)
which is independent of the estimator used for the time resolution determination.
Such estimations of the lower bound for the time resolution
have been published recently (Seifert et al \hyperlink{Seifert2012}{2012}) for small size crystals,
taking into account the transit time spread of photomultipliers
and neglecting the spread of the transit time inside scintillators.
In this article we extend these investigations to the plastic scintillators strips with the length of up to 100~cm
and include in the estimations the transit time spread due to the propagation
of photons in scintillator strips as well as the transit time spread of photomultipliers.
Further on we determine the lower bounds for the time resolution using a weighted mean of timestamps
as an estimator of the time of the gamma quantum interaction.
Next, we describe parameters used in the realistic simulations,
including time distribution of photons emitted in ternary plastic scintillators,
losses and time spread of photons due to their propagation through the scintillator, as well as
quantum efficiency and transit time spread of photomultipliers. We test the simulation procedures
by comparing simulated and experimental results for the BC-420 plastic scintillator read out at two ends
by Hamamatsu
R5320
photomultipliers.
Section \ref{mainsection} contains a description of the main idea of this article
where the estimator of the time of the gamma quantum interaction
is defined based on the time ordered set including timestamps from first photons registered
by the matrix of SiPM converters. In this section we perform realistic simulations for the BC-420 scintillator strip
and various configurations for the arrays of S12572-100P Hamamatsu silicon photomultipliers.
Finally we estimate the time resolution as a function of the scintillator length
for the multi-SiPM readout allowing determination of timestamps of 20 photons. The results are compared to the resolutions
achievable with the traditional readout with the vacuum tube photomultipliers.
The light yield of plastic scintillators amounts to about 10000 photons per 1~MeV of deposited energy.
Annihilation gamma quanta (0.511~MeV) used for the positron emission tomography
interact with plastic scintillators predominantly via Compton scattering
(Szymanski et al \hyperlink{Szymanski2014}{2014}),
and therefore may deposit maximally an energy of 0.341~MeV (2/3 of electron mass).
This corresponds to the emission of about 3410 photons. On the other hand, in order
to decrease the noise
due to the scattering of gamma quanta inside the patient's body, a minimum energy deposition
of about 0.2~MeV is required (Moskal et al \hyperlink{Moskal2012}{2012}).
Therefore, the number of emitted photons discussed hereafter
in this article lies in the range from 2000 to 3410 photons.
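The quoted photon numbers follow from simple arithmetic, sketched below (our own check; for $E_\gamma = m_e c^2 = 0.511$~MeV the Compton edge is exactly $2/3$ of the electron rest energy):

```python
M_E = 0.511  # electron rest energy [MeV]

def compton_edge(e_gamma):
    # Maximal energy transferred to the electron in a single
    # Compton scattering: T_max = 2 E^2 / (m_e c^2 + 2 E)
    return 2.0 * e_gamma**2 / (M_E + 2.0 * e_gamma)

def n_photons(e_dep, light_yield=10000.0):
    # Number of scintillation photons for a deposited energy [MeV],
    # using the ~10000 photons/MeV light yield of plastic scintillators.
    return light_yield * e_dep
```

Thus `n_photons(0.2)` gives 2000 and `n_photons(0.341)` gives 3410, the range considered in the following.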
\section{Estimator of hit-time resolution (variance)}
A single detection module considered in this article consists of a plastic scintillator strip
connected at both ends to photomultipliers (see Fig.~\ref{detector}).
We assume that the gamma quantum
or any other particle of interest interacts in the scintillator
at time~$\Theta$.
We consider the
resolution for the reconstruction of the value of $\Theta$ based on time measurement of signals generated
by photosensors attached to two scintillator ends.
For practical reasons, if applicable, we use notation analogous
to that introduced in reference (Seifert et al \hyperlink{Seifert2012}{2012}).
In general, timestamps of all photons detected at the left ($t^L_1, t^L_2, ..., t^L_{N_L}$)
and at the right side ($t^R_1, t^R_2, ..., t^R_{N_R}$)
of the scintillator may be used
for the estimation of the time of the interaction $\Theta$.
It is advantageous to order the sets of timestamps according to ascending time such that:
($t^L_{(1)} \le t^L_{(2)} \le ... \le t^L_{(N_L)}$) and
($t^R_{(1)} \le t^R_{(2)} \le ... \le t^R_{(N_R)}$),
where indices in brackets indicate timestamps from the ordered set.
The $t_{(i)}$ element in the ordered set is referred to as the i-th order statistic
(Seifert et al \hyperlink{Seifert2012}{2012}).
After ordering, the timestamps on one side become correlated,
but the ordered set allows for a simple and intuitive estimation
of the time difference between the signal arrivals at the two ends of the scintillator:
\begin{equation}
\Delta t_{(i)} = t^L_{(i)} - t^R_{(i)},
\label{delta_t_i}
\end{equation}
as well as the interaction time $\Theta$, which may be estimated by:
\begin{equation}
\Theta_{(i)} = \frac{t^L_{(i)}+t^R_{(i)}}{2} + const_{(i)},
\label{Theta_i}
\end{equation}
where $const_{(i)}$ is subject to calibration and for simplicity,
but without loss of generality, it will be omitted in the further considerations.
Distributions of ordered timestamps at one side (e.g. $t^L_{(i)}$ and $t^L_{(j)}$ for $i\ne j$)
are correlated and not identical. However,
distributions for the same order statistics at the left and right sides
($t^L_{(i)}$ and $t^R_{(i)}$)
are uncorrelated,
since the ordering at the left side was done independently of the ordering at the right side.
Hence, it follows that the variances of
$\Delta t_{(i)}$ and
$\Theta_{(i)}$ may be expressed as:
\begin{equation}
\sigma^2(\Delta t_{(i)}) = \sigma^2(t^L_{(i)}) + \sigma^2(t^R_{(i)}) -2cov(t^L_{(i)},t^R_{(i)}) = \sigma^2(t^L_{(i)}) + \sigma^2(t^R_{(i)}),
\label{variances1}
\end{equation}
\begin{equation}
\sigma^2(\Theta_{(i)}) = \frac{1}{4}
\left( \sigma^2(t^L_{(i)}) + \sigma^2(t^R_{(i)}) + 2cov(t^L_{(i)},t^R_{(i)}) \right) =
\frac{1}{4} \left( \sigma^2(t^L_{(i)}) + \sigma^2(t^R_{(i)}) \right).
\label{variances2}
\end{equation}
The above equations imply that:
\begin{equation}
\sigma^2(\Delta t_{(i)}) = 4 \sigma^2(\Theta_{(i)}) = \sigma^2(t^L_{(i)}) + \sigma^2(t^R_{(i)})
\label{variances3}
\end{equation}
We have checked this supposition by numerical simulations for the probability density
distributions of emission times considered in this article.
Therefore, as regards the variance, it is sufficient to study properties of only one of these estimators.
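The relation in Eq.~(\ref{variances3}) can be verified with a toy Monte Carlo along these lines (our own sketch, using an arbitrary exponential emission-time pdf rather than the realistic one):

```python
import random

def simulate(n_events=20000, n_photons=50, i=0, seed=1):
    # Draw toy photon timestamps at both strip ends, take the i-th order
    # statistic on each side, and compare var(Delta t_(i)) with
    # 4 var(Theta_(i)), cf. Eq. (variances3).
    random.seed(seed)
    dt, theta = [], []
    for _ in range(n_events):
        tl = sorted(random.expovariate(1.0) for _ in range(n_photons))
        tr = sorted(random.expovariate(1.0) for _ in range(n_photons))
        dt.append(tl[i] - tr[i])
        theta.append(0.5 * (tl[i] + tr[i]))
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    return var(dt), 4.0 * var(theta)
```

For identically distributed, independent left and right order statistics the two returned variances agree within statistical fluctuations, as expected from Eq.~(\ref{variances3}).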
Moreover, in order to facilitate direct comparison with results published in the field of TOF-PET,
we will express the resolution as the FWHM of the coincidence resolving time (CRT),
where
the coincidence resolving time determined for the $i$-th order statistic
will be referred to as
CRT$_{(i)}$.
It should be noted, however, that in general, even though
$t^L_{(i)}$ and $t^R_{(i)}$ are uncorrelated,
$\Delta t_{(i)}$ and
$\Theta_{(i)}$ may be correlated, since
$cov(\Delta t_{(i)},\Theta_{(i)}) = (\sigma^2(t^L_{(i)}) - \sigma^2(t^R_{(i)}))/2$
vanishes only if the emission point is in the center of the detector,
because only in this case are
$t^L_{(i)}$ and $t^R_{(i)}$ identically distributed.
In the next sections we define the emission time distribution for plastic scintillators and estimate
the
Cram\'e{}r-Rao
lower limit on the achievable time resolution.
Further on, we simulate the time resolution for each order statistic
$\Delta t_{(i)}$ separately, and we test the variance of the weighted mean of the
$\Delta t_{(i)}$ values, showing that such an estimator of
$\Delta t$ allows one to reach a significantly better resolution than that achievable with a single order statistic.
\section{Emission time distributions}
In the case of ternary plastic scintillators, such as BC-420
(\hyperlink{SaintGobain}{Saint Gobain Crystals})
and its equivalent EJ-230
(\hyperlink{Eljen}{Eljen Technology})
used in the J-PET detector,
the distribution of the emission time of a photon
following the interaction of a gamma quantum at time~$\Theta$
can be well approximated by the following convolution of Gaussian and exponential terms
(Moszynski Bengtson \hyperlink{Moszynski1977}{1977}, \hyperlink{Moszynski1979}{1979}):
\begin{equation}
f(t|\Theta) = K \int_\Theta^t{(e^{-\frac{t-\tau}{t_{d}}}-e^{-\frac{t-\tau}{t_r}})
\cdot e^{-\frac{(\tau-\Theta-2.5\sigma)^2}{2\sigma^2}} d\tau},
\label{emissiontime}
\end{equation}
where the Gaussian term with standard deviation~$\sigma$ reflects the rate of energy transfer
to the primary solute, whereas $t_r$ and $t_d$ denote the average time of the energy transfer to the
wavelength shifter and the decay time of the final light emission, respectively
(Moszynski Bengtson \hyperlink{Moszynski1979}{1979}).
$K$ stands for the normalization constant ensuring that
$\int_{\Theta}^{+\infty}f(t|\Theta) dt = 1$.
We have set
$t_d = 1.5$~ns and
treated $t_r$ and $\sigma$ as phenomenological parameters,
adjusting their values to
$t_r = 0.005$~ns and
$\sigma = 0.2$~ns
in order to describe the properties of the light pulses from the BC-420 scintillator,
i.e. a rise time of 0.5~ns, a decay time of $t_d~=~1.5$~ns and a FWHM of 1.3~ns
(\hyperlink{SaintGobain}{Saint Gobain Crystals}).
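Formula~\ref{emissiontime} can be evaluated numerically; the minimal sketch below tabulates the normalized density with the parameters quoted above (the grid ranges, step sizes and function names are our choices):

```python
import numpy as np

t_d, t_r, sigma, Theta = 1.5, 0.005, 0.2, 0.0   # ns; parameters from the text

def trapezoid(y, x):
    # simple trapezoidal rule, to stay self-contained
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def f_unnorm(t, n_tau=2000):
    """Inner integral of the emission-time formula at time t (without K)."""
    tau = np.linspace(Theta, t, n_tau)
    integrand = (np.exp(-(t - tau) / t_d) - np.exp(-(t - tau) / t_r)) \
        * np.exp(-(tau - Theta - 2.5 * sigma) ** 2 / (2.0 * sigma ** 2))
    return trapezoid(integrand, tau)

ts = np.linspace(Theta, Theta + 25.0, 1500)
vals = np.array([f_unnorm(t) for t in ts])
vals /= trapezoid(vals, ts)        # fixes K so the density has unit area

# FWHM of the resulting light pulse, expected near the quoted 1.3 ns
above = ts[vals >= vals.max() / 2.0]
fwhm = above[-1] - above[0]
assert 0.8 < fwhm < 2.0
```

The tabulated `vals` can then be used directly for drawing emission times or for the Fisher-information integral of the next section.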
The resultant distribution of the emission time is indicated by the black solid
line in Fig.~\ref{emissiontimedistribution}.
Other lines in this figure correspond to time distributions of photons
in the scintillator with the cross section of 0.5~cm x 1.9~cm
simulated at
various distances
from the interaction point.
These distributions will be used in the next section for the estimation of the
lower limits of the achievable time resolutions.
\begin{figure}
\begin{center}
\includegraphics[width=220pt]{Fig_time_distr.png}
\end{center}
\caption{
Thick solid line denotes the time distribution for photons simulated according to the formula~\ref{emissiontime}
describing the probability density distribution for ternary plastic scintillators
with parameters adjusted to the properties of BC-420 scintillator.
Thick dashed line indicates distribution at 30~cm from the interaction point,
thin solid at 50~cm, and
thin dashed at 100~cm.
These probability density distributions were simulated taking into account the time of photons propagation through
a given distance along the scintillator with cross section of 0.5~cm x 1.9~cm.
Simulations are described in greater detail in the Appendix.
\label{emissiontimedistribution}
}
\end{figure}
\section{Cram\'e{}r-Rao lower limit on the resolution of hit-time reconstruction}
The time resolution achievable with the scintillator detectors is limited by the
optical and electronic time spread caused by the detector components,
and by the time distribution of photons contributing to the formation of electric signals.
The latter depends on the number of registered photons and
is referred to as the photon counting statistics
(Seifert et al \hyperlink{Seifert2012}{2012}).
Limitations of the time resolution due to the photon counting statistics have been studied
in detail e.g. in refs.
(Seifert et al \hyperlink{Seifert2012}{2012}, \hyperlink{Seifert2012b}{2012b},
Spanoudaki Levin \hyperlink{Spanoudaki2011}{2011},
Fishburn et al \hyperlink{Fishburn2010}{2010}),
and a comprehensive account of this topic may be found e.g. in ref. (Seifert et al \hyperlink{Seifert2012}{2012}).
To a large extent this research is driven by the endeavor to improve the
timing properties of the PET systems
(
Conti Eriksson \hyperlink{ContiEriksson2009}{2009},
Moszynski et al \hyperlink{Moszynski2011}{2011},
Schaart et al \hyperlink{Schaart2009}{2009},\hyperlink{Schaart2010}{2010},
Kuhn et al \hyperlink{Kuhn2006}{2006},
Lecoq et al \hyperlink{Lecoq2013}{2013}),
and therefore
the investigations have so far concentrated on small size crystal scintillators. In a recent work
a detailed elaboration of the lower bound on the time resolution
has been published for most kinds of available crystal scintillators
(Seifert et al \hyperlink{Seifert2012}{2012}).
The estimation included the transit time spread of photomultipliers,
but
the spread due to the
transport of photons inside the scintillators was neglected. This was justified since only small size crystals
(of the order of 1~cm or smaller) were considered.
Here, inspired by the new solution for the PET system based on plastic scintillators
(Moskal et al \hyperlink{Moskal2011}{2011},\hyperlink{Moskal2014}{2014},\hyperlink{Moskal2015}{2015},
Raczynski et al \hyperlink{Raczynski2014}{2014},\hyperlink{Raczynski2015}{2015}),
we extend the studies of Seifert and coauthors (Seifert et al \hyperlink{Seifert2012}{2012}) from the small size crystals
to the large size plastic scintillators.
In this section we estimate the lower limit of the time resolution achievable with scintillator strips of up to 100~cm,
assuming ideal electronic systems;
in the following sections we describe
the results of realistic simulations
conducted taking into account both the photon transport in the scintillator material and the transit time spread in photomultipliers.
The variance of any unbiased estimator $\Xi$ of the hit-time $\Theta$
satisfies the Cram\'e{}r-Rao inequality
(De~Groot \hyperlink{DeGroot1986}{1986},
Seifert et al \hyperlink{Seifert2012}{2012}):
\begin{equation}
var(\Xi) \ge \frac{1}{I_N(\Theta)},
\label{cramer-rao}
\end{equation}
where $I_N(\Theta)$ denotes the Fisher information concerning $\Theta$ in a set of $N$ randomly chosen timestamps.
This very general formula enables calculation of the lower bound on the variance of an unbiased estimator.
In the case of point estimation of a parameter it quantifies the estimation efficiency
and indicates whether there is room for improvement.
Knowing the probability density distribution $f(t|\Theta)$ of the photon registration time $t$
following the gamma quantum interaction time $\Theta$, the Fisher information in a sample of
$N$ independent timestamps reads
(De~Groot \hyperlink{DeGroot1986}{1986},
Seifert et al \hyperlink{Seifert2012}{2012}):
\begin{equation}
I_N(\Theta)=N \int_{-\infty}^{+\infty}\frac{(\frac{\partial}{\partial\Theta}f(t|\Theta))^2}{f(t|\Theta)}dt.
\label{fisherinfo}
\end{equation}
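As a concrete sketch, relations~\ref{cramer-rao} and~\ref{fisherinfo} can be evaluated numerically. The single-photon density below (an exponential decay smeared by a Gaussian) is a simplified stand-in for the emission-time formula, and the parameters and grid are illustrative assumptions:

```python
import numpy as np

t_d, sigma, step = 1.5, 0.2, 1e-3               # ns
t = np.arange(0.0, 30.0, step)

# Simplified single-photon registration-time density f(t|Theta=0).
expo = np.exp(-t / t_d) / t_d
gauss = np.exp(-(t - 5.0 * sigma) ** 2 / (2.0 * sigma ** 2))
f = np.convolve(expo, gauss)[: t.size]
f /= f.sum() * step                             # unit normalization

# f(t|Theta) is a shift family in Theta, hence df/dTheta = -df/dt and
# the single-photon Fisher information is I_1 = integral (f')^2 / f dt.
fp = np.gradient(f, step)
mask = f > 1e-12                                # skip numerically empty tails
I1 = float(np.sum(fp[mask] ** 2 / f[mask]) * step)

def cr_bound_sigma(N):
    """Cramer-Rao lower bound on the std of an unbiased hit-time estimator."""
    return 1.0 / np.sqrt(N * I1)

# The bound tightens as 1/sqrt(N) with the number of registered photons.
assert cr_bound_sigma(400) < cr_bound_sigma(100) < cr_bound_sigma(1)
```

Replacing the toy density with the tabulated distributions of Fig.~\ref{emissiontimedistribution} reproduces the curves discussed below.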
Fig.~\ref{figlimitteoria} shows the lower limit of the time resolution estimated as a function of the number of registered photons $N$,
based on relations~\ref{cramer-rao}~and~\ref{fisherinfo}.
Thick-solid line shows results assuming that the time distribution of registered photons is the same
as the emission time distribution indicated by the thick solid line in Fig.~\ref{emissiontimedistribution}.
The other lower limits, shown with thick-dashed,
thin-solid and thin-dashed lines, were obtained assuming
the time distributions of photons after passing distances of 30~cm,
50~cm and 100~cm along the plastic scintillator strip with a cross section of 0.5~cm~x~1.9~cm.
The corresponding time distributions are shown
in Fig.~\ref{emissiontimedistribution}.
\begin{figure}
\begin{center}
\includegraphics[width=220pt]{Fig_theory_estimation.png}
\end{center}
\caption{
Cram\'e{}r-Rao lower limit
for the time resolution achievable with plastic scintillators calculated as a function of
number of registered photons and as a function of the scintillator length assuming cross section of 0.5~cm~x~1.9~cm.
The meaning of the curves is described in the legend.
The square indicates time resolution determined experimentally using a first version of the J-PET prototype
with plastic scintillator strips of dimensions
0.5~cm~x~1.9~cm~x~30~cm~(Moskal et al \protect\hyperlink{Moskal2014}{2014}).
The result does not include the time spread due to the unknown depth-of-interaction.
\label{figlimitteoria}
}
\end{figure}
In the limit of only one photon the result is quite intuitive, since in this case the
Cram\'e{}r-Rao lower bound
corresponds to about 3~ns, which is of the order of
the FWHM of the time distribution of the emitted photons
(solid line in Fig.~\ref{emissiontimedistribution}),
amounting to $\sim$~3.5~ns.
The superimposed square indicates an experimental result obtained for the strip of BC-420 plastic
scintillator with the dimensions
of 0.5~cm x 1.9~cm x 30~cm read out by Hamamatsu
R5320
photomultipliers~(Moskal et al \hyperlink{Moskal2014}{2014}).
A comparison with the corresponding lower limit implies that there is still room for a substantial improvement
of the time resolution.
In the next sections we will present a novel solution which allows one to improve the time resolution
by more than a factor of 1.5.
\section{Time resolution as a function of the order statistics for the ideal plastic scintillator}
In this section we consider
an ideal plastic scintillator detector where all emitted photons are registered
by two ideal photosensors (left or right) with ideal time resolution
and 100\% quantum efficiency. It is also assumed that there is no photon absorption and no time spread
in the infinitesimally small plastic scintillator.
In Fig.~\ref{fig5} filled symbols show how
the coincidence resolving time
for the first, second and third order statistics changes as a function of the number of emitted photons,
and Fig.~\ref{fig6} illustrates how
CRT varies as a function of the order statistic in the case of 3000 emitted photons.
As expected from the shape of the probability density distribution of emitted photons (Fig.~\ref{emissiontimedistribution}),
the average time difference between emitted photons decreases, and hence the time resolution improves,
with growing order statistic up to the point
where the photon emission probability reaches its maximum; beyond that, the time resolution starts to worsen,
since the average time interval between emitted photons increases.
Having timestamps from all registered photons, we can estimate the hit-time~$\Theta$
and the time difference~$\Delta t$ most simply as weighted means
of the corresponding values determined for all order statistics:
\begin{equation}
\Theta \equiv \frac{\sum_i \frac{\Theta_{(i)}}{\sigma^2(\Theta_{(i)})}}{\sum_i \frac{1}{\sigma^2(\Theta_{(i)})}}
=\frac{\sum_i \frac{t^L_{(i)}/2}{\sigma^2(\Theta_{(i)})}}{\sum_i \frac{1}{\sigma^2(\Theta_{(i)})}}
+\frac{\sum_i \frac{t^R_{(i)}/2}{\sigma^2(\Theta_{(i)})}}{\sum_i \frac{1}{\sigma^2(\Theta_{(i)})}}
\end{equation}
\begin{equation}
\Delta t \equiv \frac{\sum_i \frac{\Delta t_{(i)}}{\sigma^2(\Delta t_{(i)})}}{\sum_i \frac{1}{\sigma^2(\Delta t_{(i)})}}
=\frac{\sum_i \frac{t^L_{(i)}}{\sigma^2(\Delta t_{(i)})}}{\sum_i \frac{1}{\sigma^2(\Delta t_{(i)})}}
-\frac{\sum_i \frac{t^R_{(i)}}{\sigma^2(\Delta t_{(i)})}}{\sum_i \frac{1}{\sigma^2(\Delta t_{(i)})}}
\end{equation}
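A minimal sketch of these weighted-mean estimators is given below; the function and variable names are ours, and the per-order-statistic variances would in practice come from simulation or calibration:

```python
import numpy as np

def weighted_estimates(tL, tR, var_dt, var_theta):
    """Weighted-mean estimators of Delta t and Theta from all order
    statistics, following the two equations above.
    tL, tR    : ordered timestamps t^L_(i), t^R_(i) (ascending)
    var_dt    : sigma^2(Delta t_(i)) for each i
    var_theta : sigma^2(Theta_(i)) for each i"""
    tL, tR = np.asarray(tL), np.asarray(tR)
    w_dt = 1.0 / np.asarray(var_dt)
    w_th = 1.0 / np.asarray(var_theta)
    delta_t = np.sum(w_dt * (tL - tR)) / np.sum(w_dt)
    theta = np.sum(w_th * (tL + tR) / 2.0) / np.sum(w_th)
    return delta_t, theta

# Toy numbers for three order statistics per side:
dt, th = weighted_estimates([1.0, 1.2, 1.5], [1.1, 1.3, 1.5],
                            [0.04, 0.03, 0.05], [0.01, 0.0075, 0.0125])
assert dt < 0.0 and 1.0 < th < 1.5
```

Inverse-variance weights give the minimum-variance linear combination when the per-order-statistic estimates are uncorrelated, which motivates this choice.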
\begin{figure}
\begin{center}
\includegraphics[width=220pt]{Fig_Graph_1.png}
\end{center}
\caption{
Coincidence resolving time CRT$_{(i)}$
as a function of the number of emitted photons $N$,
simulated assuming the emission time distribution of the BC-420 plastic scintillator (solid line in Fig.~\ref{emissiontimedistribution}).
Filled points denote results for order statistics $i~=~1$ (circles),
$i~=~2$ (triangles) and $i~=~3$ (squares), and
open circles stand for the CRT determined based on the weighted time difference $\sigma(\Delta t)$.
The result does not include the time spread due to the unknown depth-of-interaction.
\label{fig5}
}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=220pt]{Fig_Order_Statistics.png}
\end{center}
\caption{
Coincidence resolving time CRT$_{(i)}$
as a function of the order statistic,
determined based on simulations of 3000 photons using the
emission time distribution of BC-420 plastic scintillator (solid line in Fig.~\ref{emissiontimedistribution}).
The result does not include the time spread due to the unknown depth-of-interaction.
\label{fig6}
}
\end{figure}
The coincidence resolving time CRT$_{(i)}$
is presented in Fig.~\ref{fig5}
as a function of the number of emitted photons,
assuming the probability density distribution of the plastic scintillator BC-420 (solid line in Fig.~\ref{emissiontimedistribution}).
These calculations allow us to find
the best limit of the time resolution for the considered detection systems
when the hit-time is estimated as a weighted mean of the registered timestamps.
The results shown in Fig.~\ref{fig5} imply that, in principle,
for energy depositions in the range from 0.2~MeV (2000 photons) to 0.341~MeV (3410 photons),
the coincidence resolving time is equal to about CRT~=~0.042~ns.
In the following section we will present results of simulations for the solution presently used in the J-PET detector
and compare them with the experimental results. Further on simulations with a matrix SiPM readout will be presented and discussed.
\section{Time resolution for the single module of the J-PET detector}
Realistic simulations of the timestamps registered by large size scintillator detectors
require a proper account of
the emission time distribution, photon transport and absorption inside the scintillator,
as well as the quantum efficiency and transit time spread of the photosensors.
All these effects have been taken into account, as described in detail in the Appendix.
In this section, in order to
test
the simulation procedures,
we present results for the plastic scintillator BC-420
with dimensions of 0.5~cm~x~1.9~cm~x~30~cm
read out at two ends by Hamamatsu
R4998
(R5320)
photomultipliers. Recent measurements conducted with such a detector
revealed that about 280 photoelectrons are produced from the emission of about 3410 photons, corresponding
to the maximum energy deposition of 0.511~MeV
gamma quanta via the Compton effect
(Moskal et al \hyperlink{Moskal2014}{2014}).
This is very well reproduced in the simulations, as can be inferred from Fig.~\ref{ModelScin} by a comparison
of the values at the upper and lower horizontal axes. Fig.~\ref{ModelScin} shows the dependence of the time resolution for the
first, second and third order statistics as a function of the number of emitted photons.
The result is consistent with the
experimental value of
CRT equal to about 0.266~ns, obtained
when determining the time at a threshold of -50~mV on the leading edge
of signals corresponding to the range of the number of emitted photons between 2000 and 3410
(Moskal et al \hyperlink{Moskal2014}{2014}).
Fig.~\ref{ModelScin} indicates that in this range the experimental time resolution of
0.266~ns lies between
the time resolutions simulated for the first and third order statistics.
This is as expected, since predominantly only the first few photoelectrons contribute to the
onset of the leading edge of the photomultiplier signals. The obtained result shows that
in practice the time resolution achievable from the leading edge may be estimated as the mean
of the resolutions for the first and third order statistics.
It is also interesting to note that for the discussed detector
the best time resolution would be obtained by the measurement
of the tenth order statistic (Fig.~\ref{ModelScin_3000}).
\begin{figure}
\begin{center}
\includegraphics[width=220pt]{Fig_Graph_3_Number_of_photoelectrons.png}
\end{center}
\caption{
Coincidence resolving time as a function of the number of emitted photons $N$ and of the number of photoelectrons,
simulated for the BC-420 plastic scintillator with dimensions of
0.5~cm~x~1.9~cm~x~30~cm
read out at two ends by the Hamamatsu R4998 (R5320) photomultipliers.
Filled points denote results for order statistic $i~=~1$ (circles),
$i~=~2$ (triangles), $i~=~3$ (squares) and
open circles stand for CRT determined based on the standard deviation of weighted time difference $\sigma(\Delta t)$.
The result does not include the time spread due to the unknown depth-of-interaction.
\label{ModelScin}
}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=220pt]{Fig_Sigma_n_2.png}
\end{center}
\caption{
Coincidence resolving time
as a function of the order statistic $i$,
determined based on simulations of 3000 photons using the
emission time distribution of BC-420 plastic scintillator (solid line in Fig.~\ref{emissiontimedistribution})
with dimensions of 0.5~cm~x~1.9~cm~x~30~cm
and taking into account a transit time spread of
the Hamamatsu R4998 (R5320) photomultipliers.
The result does not include the time spread due to the unknown depth-of-interaction.
\label{ModelScin_3000}
}
\end{figure}
\section{Time resolution for plastic scintillator read out by matrices of silicon photomultipliers}
\label{mainsection}
\begin{figure}
\begin{center}
\includegraphics[width=220pt]{Fig_Photon_number_distribution.png}
\end{center}
\caption{
Distribution of average number of registered photons as a function of the ID of the photomultiplier.
The simulations were performed for interactions in the center of the scintillator
with dimensions of 0.7~cm~x~1.9~cm~x~30~cm,
assuming 3410 photons per interaction,
corresponding to the maximum energy deposition of 0.511~MeV gamma quanta via Compton effect.
\label{id_distribution}
}
\end{figure}
In the previous sections it was shown that the time resolution may be significantly improved by recording the individual timestamps
of photons arriving at the scintillator edge.
In this section we present simulations of the timing properties achievable
for long plastic scintillator strips equipped with readouts
at two sides in the form of a matrix of silicon photomultipliers
arranged as depicted in Fig.~\ref{detector}.
The simulations have been performed assuming properties of the Hamamatsu
S12572-100P silicon photomultiplier~(\hyperlink{Hamamatsu}{Hamamatsu})
with photosensitive area of 0.3~cm~x~0.3~cm and the width of non-sensitive rim of 0.05~cm,
and assuming that the scintillator has dimensions of 0.7~cm~x~1.9~cm~x~30~cm.
A 2~x~5 SiPM matrix (as shown in Fig.~\ref{detector})
enables covering with the photosensitive area
about 68\% of the end of a scintillator with a cross section of 0.7~cm~x~1.9~cm.
Such matrices of photomultipliers enable grouping the photons reaching the end of the scintillator
into ten subsamples on the left side:
($t^L_{1,1}, t^L_{1,2}, ..., t^L_{1,N1L}$), ..., ($t^L_{10,1}, t^L_{10,2}, ..., t^L_{10,N10L}$)
and ten subsamples on the right side:
($t^R_{1,1}, t^R_{1,2}, ..., t^R_{1,N1R}$), ..., ($t^R_{10,1}, t^R_{10,2}, ..., t^R_{10,N10R}$),
where the first subscript denotes the ID of the SiPM and the second subscript denotes the ID of the photon.
Fig.~\ref{id_distribution} shows that the photons are homogeneously distributed among the photomultipliers
and that
(in the case of the maximum energy deposition by 0.511 MeV gamma quanta)
on average about 22 photons are registered by each SiPM.
Further on, we assume that a timestamp available from a given SiPM corresponds to the
time of the fastest photon from the subsample registered by this photomultiplier.
Therefore, for each subsample separately, we order
timestamps according to ascending time such that:
($t^L_{1,(1)} \le t^L_{1,(2)} \le ... \le t^L_{1,(N1L)}$), ..., ($t^L_{10,(1)} \le t^L_{10,(2)} \le ... \le t^L_{10,(N10L)}$),
and analogously for the right side,
where indices in brackets indicate timestamps from the ordered subsample.
The fastest timestamps in subsamples:
$t^L_{1,(1)}, t^L_{2,(1)}, ..., t^L_{10,(1)}$,
and
$t^R_{1,(1)}, t^R_{2,(1)}, ..., t^R_{10,(1)}$
are considered as timestamps registered by the
photomultipliers (hereafter referred to as photomultiplier's timestamps).
Next,
for the left and right readout separately, we order the photomultiplier's
timestamps according to ascending time such that:
($t^L_{[1]} \le t^L_{[2]} \le ... \le t^L_{[10]}$)
and
($t^R_{[1]} \le t^R_{[2]} \le ... \le t^R_{[10]}$),
where indices in square brackets indicate SiPM timestamps after ordering,
and the
$t_{[i]}$ element of this set will hereafter be referred to as the $i$-th order SiPM statistic.
For each ordered SiPM statistic the interaction time $\Theta_{[i]}$
and the time difference between the signal arrivals
to the ends of the scintillator $\Delta t_{[i]}$
are estimated as follows:
\begin{equation}
\Delta t_{[i]} = t^L_{[i]} - t^R_{[i]},
\label{delta_t_id}
\end{equation}
and
\begin{equation}
\Theta_{[i]} = \frac{t^L_{[i]}+t^R_{[i]}}{2} + const_{[i]},
\label{Theta_id}
\end{equation}
where $const_{[i]}$ will be omitted in the further considerations
without loss of generality. Finally using information available from all
SiPMs we estimate the
hit-time $\Theta$ and the time difference $\Delta t$ as weighted means
of the above defined $\Theta_{[i]}$ and $\Delta t_{[i]}$ values, respectively.
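The grouping and ordering described above can be sketched as follows; the toy photon model, array sizes and function names are illustrative assumptions:

```python
import numpy as np

def sipm_order_statistics(subsamples_L, subsamples_R):
    """Each SiPM reports the time of the fastest photon of its subsample;
    these photomultiplier timestamps are then ordered per side, giving
    t^L_[i], t^R_[i], from which Delta t_[i] and Theta_[i] follow
    (the two equations above; const_[i] omitted)."""
    tL = np.sort([np.min(s) for s in subsamples_L])  # t^L_[1] <= ... <= t^L_[10]
    tR = np.sort([np.min(s) for s in subsamples_R])
    return tL - tR, (tL + tR) / 2.0                  # Delta t_[i], Theta_[i]

rng = np.random.default_rng(1)
# Toy input: 10 SiPMs per side, ~22 registered photons each (cf. the text).
subL = [rng.exponential(1.5, 22) for _ in range(10)]
subR = [rng.exponential(1.5, 22) for _ in range(10)]
dts, thetas = sipm_order_statistics(subL, subR)

# dts + 2*thetas equals 2*t^L_[i], which is sorted by construction.
assert len(dts) == 10 and np.all(np.diff(dts + 2.0 * thetas) >= 0.0)
```

The returned $\Delta t_{[i]}$ and $\Theta_{[i]}$ arrays feed directly into the weighted-mean combination described in the text.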
\begin{figure}
\begin{center}
\includegraphics[width=220pt]{Fig_Graph_2.png}
\end{center}
\caption{
Coincidence resolving time CRT$_{[i]}$
as a function of the number of emitted photons $N$ and of the number
of registered photons (photoelectrons),
simulated for the BC-420 plastic scintillator with dimensions of
0.7~cm~x~1.9~cm~x~30~cm
read out at two ends by a 2~x~5 matrix
of the Hamamatsu
S12572-100P silicon photomultipliers.
Filled points denote results for the first, second and third SiPM order statistic:
$i~=~1$ (circles),
$i~=~2$ (triangles),
$i~=~3$ (squares) and
open circles stand for
CRT determined based on the
weighted average of all measured $\Delta t_{[i]}$.
\label{silicon_phm}
}
\end{figure}
Results of the performed simulations are shown in Fig.~\ref{silicon_phm}.
The number of photoelectrons expected for the maximum energy deposition
of 0.511~MeV gamma quanta (N~=~3410)
is equal to about 440 and is much higher than the 280 obtained with the present J-PET prototype.
This increase is due to the higher quantum efficiency of the
S12572-100P silicon photomultipliers with
respect to the R4998 (R5320) vacuum tube photomultipliers (see Fig.~\ref{fig4} in the Appendix).
Nevertheless, the time resolution for the first SiPM order statistic
is worse than that obtainable with the R4998 (R5320) photomultipliers
because of the larger transit time spread
of the SiPM with respect to the R4998 (R5320).
However, due to the access to ten SiPM timestamps (available with the
2~x~5 SiPM matrix readout),
a coincidence resolving time of CRT~$\approx$~0.180~ns
can be achieved when
using a weighted mean of the measured SiPM timestamps. This is an average value for the range of interest
(from 2000 to 3400 photons).
In order to test the dependence of the achievable time resolution on the number of SiPMs in the readout,
systematic simulations were conducted, changing the number of SiPMs from 2 to 21.
Fig.~\ref{silicon_phms} shows the
CRT
obtained
for various SiPM configurations.
The result indicates that the improvement of the resolution saturates with the growing number of
photomultipliers, and that the 2~x~5 configuration, allowing 20 timestamps to be read,
constitutes an optimal solution; a further increase in the number of SiPMs
does not improve the resolution significantly.
As a final result, in Fig.~\ref{different_ds} the time resolution achievable with the
vacuum photomultipliers R4998 (R5320) is compared to the resolution achievable with
the 2~x~5 SiPM matrix readout for scintillator lengths from 2.5~cm up to 100~cm.
Both results are confronted with the resolution
limit simulated for ideal photosensors allowing for the measurement
of each photon reaching the end of the strip.
The result presented in the figure indicates that
the 2~x~5 SiPM matrix readout can improve the time resolution significantly, by about
a factor of $1.5$ (up to a length of 50~cm), and that still further significant improvement may be
achieved by increasing the quantum efficiency and decreasing the transit time spread with respect
to the presently available
S12572-100P silicon photomultiplier produced by \hyperlink{Hamamatsu}{Hamamatsu}.
The comparison was done assuming the emission of 2700 photons according to the
spectrum of the EJ-230 (BC-420) scintillator. As discussed earlier in the text,
the number of 2700 photons corresponds
to the average number of photons useful for positron emission tomography
by means of plastic scintillators.
Finally, the obtained results show that the 2~x~5 S12572-100P matrix readout
allows one to obtain
CRT~$\approx$~0.366~ns
even for a J-PET constructed with
100~cm long plastic scintillators.
\begin{figure}
\begin{center}
\includegraphics[width=220pt]{Fig12_new.png}
\end{center}
\caption{
Coincidence resolving time
as a function of number of emitted photons N
simulated for the BC-420 plastic scintillator with dimensions of
0.7~cm~x~1.9~cm~x~30~cm.
In the simulations it was assumed that the readout consists of photomultipliers
characterized by time spread and quantum efficiency the same as
the SiPM Hamamatsu
S12572-100P but with the dimensions allowing to cover fully the
scintillator with the sensitive area with the following readout configurations:
1~x~2 (filled circles),
1~x~3 (filled triangles),
2~x~5 (filled squares),
2~x~7 (filled inverted triangles), and
3~x~7 (filled lozenges).
Open squares indicate results for the 2~x~5 configuration taking into account non-sensitive area
of the S12572-100P SiPM
(the same as in Fig.~\ref{silicon_phm}).
The result does not include the time spread due to the unknown depth-of-interaction.
\label{silicon_phms}
}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=220pt]{Fig_length_DOI.png}
\end{center}
\caption{
Coincidence resolving time
as a function of the scintillator's length for 2700 emitted photons
in the center of the scintillator with the cross section of
0.7~cm~x~1.9~cm.
(Triangles) A mean value of
CRT$_{(1)}$ and
CRT$_{(3)}$
simulated for the scintillator read out at two ends by the Hamamatsu R4998 (R5320) photomultipliers.
(Squares)
CRT
simulated for the scintillator read out at two ends by the matrix
of 2~x~5 S12572-100P photomultipliers.
(Circles)
CRT
determined as a weighted mean from all measured timestamps assuming ideal photosensors with no
transit time spread and 100\% quantum efficiency and assuming that there
is no photon absorption in the scintillator material.
The shown values take into account an additional smearing of the time due to the unknown depth of interaction.
This can be well approximated by the FWHM equal to about 0.063~ns in the case of the 1.9~cm thick scintillators.
The lines are shown to guide the eye.
\label{different_ds}
}
\end{figure}
\section{Summary}
Realistic simulations based on the Monte-Carlo method were conducted
in order to estimate the time resolution achievable with the J-PET tomography scanner
built from strips of plastic
scintillators~(Moskal et al \hyperlink{Moskal2011}{2011},\hyperlink{Moskal2014}{2014},\hyperlink{Moskal2015}{2015},
Raczynski et al \hyperlink{Raczynski2014}{2014},\hyperlink{Raczynski2015}{2015}).
The simulations took into account: (i) emission spectrum of the plastic scintillator BC-420 (EJ-230),
(ii) probability density distribution of photon emission times,
(iii) transport of photons along the scintillator strip,
(iv) absorption of photons in the scintillator material,
(v) spectrum of quantum efficiency of photosensors, and (vi) photomultipliers transit time spread.
Arrangement of SiPM photosensors in the form of 2~x~5 matrix attached at two ends to the scintillator strip
allowed for registering 10 timestamps at each side. These, after ordering in ascending time,
were used to estimate the time of interaction as a weighted mean of the times registered for
each ordered SiPM statistic. Exploitation of the information on 10 timestamps at each side improved the time
resolution with respect to the present readout based on vacuum tube photomultipliers
by about a factor of 1.5 despite the fact that the transit time spread of the considered
silicon photosensors S12572-100P
($\sigma$(TTS)~=~0.128~ns)
is almost two times larger than TTS
of photomultipliers R4998 (R5320)
($\sigma$(TTS)~=~0.068~ns)
used in the present version of the J-PET detector.
For the energy loss
in the range from 0.2~MeV to 0.341~MeV (corresponding to the emission of 2000 to 3410 photons),
relevant for positron emission tomography with plastic scintillators,
it was shown that
with the S12572-100P
photosensors arranged into a 2~x~5 matrix at the two ends of the scintillator strip
the coincidence resolving time changes from CRT~$\approx$~0.170~ns to CRT~$\approx$~0.365~ns
when extending the axial field-of-view from
15~cm to 100~cm.
This corresponds to changes of the axial position resolution from 1.4~cm (FWHM)
to 3.1~cm (FWHM), respectively. However, as shown by the solid circles in Fig.~\ref{different_ds},
there is still room for improving the CRT, and hence also the axial position resolution,
by about a factor of two by decreasing the time-jitter of the SiPMs.
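For orientation, the quoted CRT and axial-resolution pairs are consistent with a simple propagation model $\Delta z = v_{\mathrm{eff}}\,\Delta t/2$, where $v_{\mathrm{eff}}$ is an effective speed of light propagation along the strip; the value used below is our assumption, not a number quoted in the text:

```python
def axial_fwhm_cm(crt_ns, v_eff=16.5):
    """FWHM of the axial position from the FWHM of Delta t, assuming
    z = v_eff * Delta t / 2 with v_eff in cm/ns (illustrative value)."""
    return v_eff * crt_ns / 2.0

# With v_eff ~ 16.5 cm/ns the quoted pairs are approximately reproduced:
# 0.170 ns -> ~1.4 cm and 0.365 ns -> ~3.0 cm (vs 3.1 cm in the text).
assert abs(axial_fwhm_cm(0.170) - 1.4) < 0.1
assert abs(axial_fwhm_cm(0.365) - 3.1) < 0.2
```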
The results open perspectives for construction of the cost-effective TOF-PET scanner
with significantly better TOF resolution and larger field-of-view
with respect to the newest TOF-PET modalities
characterized by
CRT~$\approx$~0.4~ns~(\hyperlink{Philips}{Philips},\hyperlink{GeneralElectric}{General Electric}).
In addition, a J-PET scanner built from long strips of plastic scintillators
read out by the silicon photosensors, may be combined with the Magnetic Resonance Imaging modality
in a way allowing for the simultaneous PET and MRI measurement
with the large field-of-view~(Moskal \hyperlink{Moskal2014b}{2014b})
e.g. by inserting a barrel built of plastic strips into the MRI system.
Finally, it was shown that not only the intrinsic lower bound on the time resolution,
calculated
using
the Fisher information and the
Cram\'e{}r-Rao
inequality, but also the more practical limit determined for the
time estimated as a mean of all timestamps registered with ideal photosensors,
is much lower than the above quoted resolutions.
Therefore, there is still room for further improvement of the TOF resolution
of the J-PET tomograph,
which can be achieved anticipating the
future availability of silicon photosensors with a transit-time spread lower than the $\sigma$(TTS)~=~0.128~ns
of the S12572-100P Hamamatsu photomultipliers.
The main purpose of the development of the J-PET system is to find a cost-effective
way of whole-body PET imaging. Thus, in order to compare the performance of the J-PET with
the presently available LSO based TOF-PET devices, by analogy
to references~(Conti \hyperlink{Conti2009}{2009}, Eriksson Conti \hyperlink{ErikssonConti2015}{2015})
we introduce the following formula expressing a figure-of-merit $FOM_{wb}$
relevant for whole-body imaging:
\begin{equation}
FOM_{wb} ~= \epsilon_{detection}^2 \times \epsilon_{selection}^2 \times Acc \ / \ (CRT \times N_{steps}),
\label{FOM}
\end{equation}
where
$\epsilon_{detection}$ denotes the detection efficiency of a single 0.511 MeV gamma quantum,
$\epsilon_{selection}$ indicates the selection efficiency of ``image-forming'' events,
$CRT$ denotes the coincidence resolving time,
$N_{steps}$ indicates number of steps (bed positions) needed to scan a whole body, and
$Acc$ denotes a geometrical acceptance.
In the first order of approximation we may assume that
$N_{steps}$ is inversely proportional to the AFOV,
and that
the term
$\epsilon_{detection}^2 \times Acc$
is proportional to
$\int_{\theta_{min}}^{\theta_{max}} (\epsilon_{0detection}/\sin\!\theta)^2 \sin\!\theta d\theta$,
where $\theta$ denotes the angle between the direction of the gamma quanta and the main
axis of the tomograph; the term $\epsilon_{0detection}/\sin\!\theta$
accounts for the changes of the detection efficiency as a function of the $\theta$ angle,
with
$\epsilon_{0detection}$ denoting detection efficiency when gamma quantum crosses
the detector perpendicularly to its surface;
$\sin\!\theta d\theta$
stands for the angular dependence of the differential element of the solid angle,
and
the limits $\theta_{min}$ and $\theta_{max}$
determine the range of angular acceptance of the tomograph.
The above assumptions yield:
\begin{equation}
FOM_{wb} ~=
\int_{\theta_{min}}^{\theta_{max}} (\epsilon_{0detection}/\sin\!\theta)^2 \sin\!\theta d\theta
\times AFOV \ / \ CRT.
\label{FOM2}
\end{equation}
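For illustration, the behaviour of Eq.~(\ref{FOM2}) can be explored numerically. The sketch below (Python) assumes a simplified geometry in which $\theta_{min} = \arctan(\mathrm{diameter}/\mathrm{AFOV})$ for gamma pairs emitted from the centre, and uses a placeholder J-PET CRT value of 0.266~ns; both the geometric model and the CRT number are assumptions made here for the example, not results of this article.

```python
import numpy as np

def acceptance_integral(eps0, theta_min, theta_max=np.pi / 2, npts=20001):
    # numerically evaluate the integral appearing in Eq. (FOM2):
    # \int (eps0 / sin(theta))^2 sin(theta) d(theta), by the trapezoid rule
    th = np.linspace(theta_min, theta_max, npts)
    f = (eps0 / np.sin(th)) ** 2 * np.sin(th)
    dth = th[1] - th[0]
    return dth * (f.sum() - 0.5 * (f[0] + f[-1]))

def fom_wb(eps0, eps_sel, afov_cm, crt_ns, diameter_cm=80.0):
    # assumed geometry: gammas from the centre reach the barrel only for
    # polar angles above theta_min = arctan(diameter / AFOV)
    theta_min = np.arctan(diameter_cm / afov_cm)
    return eps_sel ** 2 * acceptance_integral(eps0, theta_min) * afov_cm / crt_ns

# single-gamma detection efficiency of a 2 cm thick layer crossed perpendicularly
eps0_lso = 1.0 - np.exp(-0.87 * 2.0)     # mu_LSO = 0.87 cm^-1
eps0_jpet = 1.0 - np.exp(-0.098 * 2.0)   # mu_plastic = 0.098 cm^-1

# CRT of 0.266 ns for the J-PET is an assumed placeholder value
R = fom_wb(eps0_jpet, 0.44, 100.0, 0.266) / fom_wb(eps0_lso, 0.32, 20.0, 0.4)
print(f"R(AFOV = 100 cm) = {R:.2f}")
```

The same function can be evaluated over a range of AFOV values to reproduce the qualitative trend of Fig.~\ref{R-FOM}.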
\begin{figure}[h]
\begin{center}
\includegraphics[width=250pt]{Fig14_FOM.png}
\end{center}
\caption{The ratio of figures of merit for whole-body imaging with the J-PET and LSO based PET detectors, defined as
R(AFOV-J-PET) = $FOM_{wb}$(AFOV-J-PET)~/~$FOM_{wb}$(LSO with AFOV=20cm).
The horizontal axis refers to the length of the J-PET detector. The length of the LSO scanner was fixed to 20~cm,
and the diameter of the scanners was fixed to 80~cm.
Full dots indicate results determined for a single J-PET layer ($N_{layers}$ = 1),
and open squares indicate results for the J-PET with two layers ($N_{layers}$ = 2).
For a small number of layers, the $\epsilon_{detection}$ of the J-PET detector
is approximately proportional to $N_{layers}$. The presented results were obtained under the assumptions
that the CRT of the LSO based detectors is equal to 0.4~ns and that the CRT values of the J-PET correspond to the
results of this article shown by full squares in Fig.~\ref{different_ds}.
Calculations of $FOM_{wb}$ of the J-PET were performed assuming a threshold of 0.2~MeV.
The lines connecting points are shown to guide the eye,
whereas the horizontal solid line indicates R~=~1.
\label{R-FOM}
}
\end{figure}
Fig.~\ref{R-FOM} shows the ratio $R$ of $FOM_{wb}$ of the J-PET and LSO based PET detectors defined as:
R(AFOV-J-PET) = $FOM_{wb}$(AFOV-J-PET)/$FOM_{wb}$(LSO with AFOV~=~20~cm).
The shown ratio is determined for a fixed AFOV~=~20~cm of the LSO based PET,
but varying the AFOV of the J-PET detector.
Full dots indicate results determined for a single J-PET layer ($N_{layers}$~=~1),
and open squares indicate results for the J-PET with two layers ($N_{layers}$~=~2).
The results shown in Fig.~\ref{R-FOM} were obtained assuming a 2~cm radial thickness of the detection
layers and taking into account that linear attenuation coefficients of 0.511 MeV gamma quanta
are equal to $\mu_{LSO}~=~0.87~\mathrm{cm}^{-1}$~(Melcher \hyperlink{Mechler2000}{2000})
and $\mu_{plastic}~=~0.098~\mathrm{cm}^{-1}$~(\hyperlink{SaintGobain}{Saint Gobain Crystals}).
Furthermore it was assumed that (i) the CRT of LSO based scanners is equal to 0.4~ns,
as achieved recently by the manufacturers Philips~(\hyperlink{Philips}{Philips})
and General Electric~(\hyperlink{GeneralElectric}{General Electric}),
(ii) $\epsilon_{selection}$ for LSO is equal to 0.32,
which is the fraction of the photoelectric effect in the case of the LSO crystals~(Humm et al \hyperlink{Humm2003}{2003}),
and (iii) $\epsilon_{selection}$ of the J-PET is equal to 0.44, which corresponds to the fraction
of events with energy deposition larger than 0.2 MeV in the case of plastic scintillators.
Fig.~\ref{R-FOM} indicates that, in order to compensate for the lower efficiency
of plastic scintillators and thus to obtain a $FOM_{wb}$ of the J-PET comparable to that of LSO based scanners
with AFOV~=~20~cm, it is required either to use two detection layers or to increase the J-PET AFOV to about 50~cm.
Certainly the $FOM_{wb}$ of the LSO based PET would also grow approximately as the square of the AFOV, but at the same time
the cost of such a PET detector would increase almost linearly with the AFOV,
whereas the cost of the J-PET does not increase significantly when increasing the AFOV.
The relative ease of a cost-effective increase of the axial field-of-view
makes the J-PET tomograph competitive with the current commercial PET scanners
as regards sensitivity and time resolution,
yet this is achieved at the expense of a significant reduction of the axial spatial resolution.
Finally, it is worth stressing that the J-PET with a long diagnostic chamber opens unique
perspectives for simultaneous whole-body metabolic imaging not accessible with the presently
available PET/CT modalities.
\section{Acknowledgements}
We acknowledge technical and administrative support of T.~Gucwa-Rys,
A.~Heczko, M.~Kajetanowicz, G.~Konopka-Cupia{\l}, W.~Migda{\l},
and the financial support by The Polish National Center for
Research and Development through grant INNOTECH-K1/IN1/64/159174/NCBR/12,
the Foundation for Polish Science through MPD programme, the EU and MSHE
Grant No. POIG.02.03.00-161 00-013/09, Doctus - the Lesser Poland PhD
Scholarship Fund, and Marian Smoluchowski Krakow Research Consortium
``Matter-Energy-Future''.
\section{References}
\leftskip= 1cm
\parindent= -1cm
\par
\hypertarget{3Mcom}
3M Optical Systems,
\textit{www.3M.com/Vikuiti}
\par
\hypertarget{Baszak2014}
Baszak~J
2014
Hamamatsu, private communication.
\par
\hypertarget{Bettinardi2011}
Bettinardi~V et al
2011
Physical Performance of the new hybrid PET-CT Discovery-690 FWH-TOF=544ps
\textit{Med. Phys.} {\bf 38} 5394-5411
\par
\hypertarget{Conti2009}
Conti~M
2009
State of the art and challenges of time-of-flight PET
\textit{Phys. Med.} {\bf 25} 1-11
\par
\hypertarget{Conti2011}
Conti~M
2011
Focus on time-of-flight PET: the benefits of improved time resolution
\textit{Eur. J. Nucl. Med. Mol. Imaging} {\bf 38} 1147-1157
\par
\hypertarget{ContiEriksson2009}
Conti~M, Eriksson~L, Rothfuss~H, Melcher~C~L
2009
Comparison of Fast Scintillators With TOF PET Potential
\textit{IEEE Trans. Nucl. Sci.} {\bf 56} 926-933
\par
\hypertarget{DeGroot1986}
DeGroot~M~H
1986
\textit{Probability and Statistics} Addison-Wesley 420-6
\par
\hypertarget{Eljen}
Eljen Technology,
\textit{www.eljentechnology.com}
\par
\hypertarget{ErikssonConti2015}
Eriksson~L and Conti~M
2015
Randoms and TOF gain revisited
\textit{Phys. Med. Biol.} {\bf 60} 1613-1623
\par
\hypertarget{Fishburn2010}
Fishburn~M~W and Charbon~E
2010
System Tradeoffs in Gamma-Ray Detection Utilizing SPAD Arrays and Scintillators
\textit{IEEE Trans. Nucl. Sci.} {\bf 57} 2549-2557
\par
\hypertarget{GeneralElectric}
General Electric,
\textit{http://www3.gehealthcare.com/en/products/categories/magnetic\_resonance\_imaging/signa\_pet-mr}
\par
\hypertarget{Hamamatsu}
Hamamatsu,
\textit{www.hamamatsu.com}
\par
\hypertarget{Humm2003}
Humm~J~L, Rosenfeld~A, Del~Guerra~A
2003
From PET detectors to PET scanners
\textit{Eur. J. Nucl. Med. Mol. Imaging} {\bf 30} 1574-1597
\par
\hypertarget{Karp2008}
Karp~J~S et al
2008
Benefit of Time-of-Flight in PET: Experimental and Clinical Results
\textit{J. Nucl. Med.} {\bf 49} 462-470
\par
\hypertarget{Korcyl2014}
Korcyl~G et al
2014
Trigger-less and reconfigurable data acquisition system for positron emission tomography
\textit{Bio-Algorithms and Med-Systems} {\bf 10} 37-40
\par
\hypertarget{Kowalski2015}
Kowalski~P et al
2015
Multiple scattering and accidental coincidences in the J-PET detector simulated using GATE package
\textit{Acta Phys. Pol.} {\bf A 127} 1505-1512 [arXiv:1502.04532 [physics.ins-det]].
\par
\hypertarget{Kuhn2006}
Kuhn~A et al
2006
Performance assessment of pixelated LaBr3 detector modules for time-of-flight PET
\textit{IEEE Trans. Nucl. Sci.} {\bf 53} 1090-1095
\par
\hypertarget{Lecoq2013}
Lecoq~P, Auffray~E, Knapitsch~A
2013
How Photonic Crystals Can Improve the Timing Resolution of Scintillators
\textit{IEEE Trans. Nucl. Sci.} {\bf 60} 1653-1657
\par
\hypertarget{Mechler2000}
Melcher~C~L
2000
Scintillation Crystals for PET
\textit{J. Nucl. Med.} {\bf 41} 1051-1055
\par
\hypertarget{Meijlink2011}
Meijlink~J~R et al
2011
First Measurement of Scintillation Photon Arrival Statistics Using a High-Granularity Solid-State
Photosensor Enabling Time-Stamping of up to 20,480 Single Photons
\textit{Proc. IEEE Nuclear Science Symposium (NSS/MIC).} 2254-2257
\par
\hypertarget{Moses1999}
Moses~W~W, Derenzo~S~E
1999
Prospects for Time-of-Flight PET using LSO Scintillator
\textit{IEEE Trans. Nucl. Sci.} {\bf 46} 474-478
\par
\hypertarget{Moses2003}
Moses~W~W
2003
Time of Flight in PET Revisited
\textit{IEEE Trans. Nucl. Sci.} {\bf 50} 1325-1330
\par
\hypertarget{Moskal2011}
Moskal~P et al
2011
Novel detector systems for the Positron Emission Tomography
\textit{Bio-Algorithms and Med-Systems} {\bf 7} 73-78; [arXiv:1305.5187 [physics.med-ph]].
\par
\hypertarget{Moskal2012}
Moskal~P et al
2012
TOF-PET detector concept based on organic scintillators
\textit{Nuclear Medicine Review} {\bf 15} C81-C84; [arXiv:1305.5559 [physics.ins-det]].
\par
\hypertarget{Moskal2014}
Moskal~P et al
2014
Test of a single module of the J-PET scanner based on plastic scintillators
\textit{Nucl. Instrum. Methods Phys. Res.} {\bf A 764} 317-321; [arXiv:1407.7395 [physics.ins-det]].
\par
\hypertarget{Moskal2014b}
Moskal~P
2014b
A hybrid TOF-PET/MRI tomograph
\textit{Patent Application} PCT/EP2014/068373 WO2015028603 A1
\par
\hypertarget{Moskal2015}
Moskal~P et al
2015
A novel method for the line-of-response and time-of-flight reconstruction
in TOF-PET detectors based on a library of synchronized model signals
\textit{Nucl. Instrum. Methods Phys. Res.} {\bf A 775} 54-62; [arXiv:1412.6963 [physics.ins-det]].
\par
\hypertarget{Moszynski1977}
Moszynski~M and Bengtson~B
1977
Light pulse shapes from plastic scintillators
\textit{Nucl. Instrum. Methods} {\bf 142} 417-434
\par
\hypertarget{Moszynski1979}
Moszynski~M and Bengtson~B
1979
Status of timing with plastic scintillation detectors
\textit{Nucl. Instrum. Methods} {\bf 158} 1-31
\par
\hypertarget{Moszynski2011}
Moszynski~M, Szczesniak~T
2011
Optimization of Detectors for Time-of-Flight PET
\textit{Acta Phys. Pol. B. Proc. Supp.} {\bf 4} 59-64
\par
\hypertarget{Nickles1978}
Nickles~R~J, Meyer~H~O
1978
Design of a three-dimensional positron camera for nuclear medicine
\textit{Phys. Med. Biol.} {\bf 23} 686-695
\par
\hypertarget{Palka2014}
Palka~M et al
2014
A novel method based solely on FPGA units enabling measurement of time and charge
of analog signals in Positron Emission Tomography
\textit{Bio-Algorithms and Med-Systems} {\bf 10} 41-45; [arXiv:1311.6127 [physics.ins-det]].
\par
\hypertarget{Philips}
Philips,
\textit{http://www.philips.co.uk/healthcare/product/HC882446/vereos-digital-pet-ct}
\par
\hypertarget{Raczynski2014}
Raczy{\'n}ski~L et al
2014
Novel method for hit-position reconstruction using voltage signals in plastic scintillators
and its application to Positron Emission Tomography
\textit{Nucl. Instrum. Methods Phys. Res.} {\bf A 764} 186-192; [arXiv:1407.8293 [physics.ins-det]].
\par
\hypertarget{Raczynski2015}
Raczy{\'n}ski~L et al
2015
Compressive Sensing of Signals Generated in Plastic Scintillators in a Novel J-PET Instrument
\textit{Nucl. Instrum. Methods Phys. Res.} {\bf A 786} 105-112; [arXiv:1503.05188 [physics.ins-det]].
\par
\hypertarget{SaintGobain}
Saint Gobain Crystals
\textit{www.crystals.saint-gobain.com}
\par
\hypertarget{Seifert2012}
Seifert~S, van~Dam~H~T, Schaart~D~R
2012
The lower bound on the timing resolution of scintillation detectors
\textit{Phys. Med. Biol.} {\bf 57} 1797-1814
\par
\hypertarget{Seifert2012b}
Seifert~S, van~Dam~H~T, Vinke~R, Dendooven~P, Lohner~H, Beekman~F~J, Schaart~D~R
2012
A Comprehensive Model to Predict the Timing Resolution of SiPM-Based Scintillation Detectors:
Theory and Experimental Validation
\textit{IEEE Trans. Nucl. Sci.} {\bf 59} 190-204
\par
\hypertarget{Schaart2009}
Schaart~R~D et al
2009
A novel, SiPM-array-based, monolithic scintillator detector for PET
\textit{Phys. Med. Biol.} {\bf 54} 3501-3512
\par
\hypertarget{Schaart2010}
Schaart~R~D et al
2010
LaBr3:Ce and SiPMs for TOF PET: achieving 100ps coincidence resolving time.
\textit{Phys. Med. Biol.} {\bf 55} N179-N189
\par
\hypertarget{Senchyshyn2006}
Senchyshyn~V et al
2006
Accounting for self-absorption in calculation of light collection
in plastic scintillators
\textit{Nucl. Instrum. Methods Phys. Res.} {\bf A 566} 286-293
\par
\hypertarget{Spanoudaki2011}
Spanoudaki~V~Ch and Levin~C~S
2011
Investigating the temporal resolution limits of scintillation
detection from pixellated elements: comparison between experiment and simulations
\textit{Phys. Med. Biol.} {\bf 56} 735-756
\par
\hypertarget{Surti2007}
Surti~S et al
2007
Performance of Philips Gemini TF PET/CT Scanner with Special Consideration
for Its Time-of-Flight Imaging Capabilities
\textit{J. Nucl. Med.} {\bf 48} 471-480
\par
\hypertarget{Szymanski2014}
Szyma{\'n}ski~K et al
2014
Simulations of gamma quanta scattering in a single module of the J-PET detector
\textit{Bio-Algorithms and Med-Systems} {\bf 10} 71-77
\par
\hypertarget{Townsend2004}
Townsend~D~W
2004
Physical Principles and Technology of Clinical PET Imaging
\textit{Ann. Acad. Med. Singapore} {\bf 33} 133-145
\par
\section*{APPENDIX: Simulation of photon transport in cuboidal scintillator strips}
In a long scintillator strip, a photon on its way from the emission point to the photomultiplier
may undergo many internal reflections, whose number strongly depends on the scintillator size and the photon emission angle.
However, the space reflection symmetries of the cuboidal shapes considered in this article
enable a significant simplification of the photon transport algorithm, without
the need to follow the photon propagation in the typical step-by-step manner.
In our simulations, for each emitted photon
the initial direction of its flight is obtained in a polar coordinate
system as two uniformly distributed random values of $\cos\theta$ and $\phi$,
where $\theta$ is the angle between the flight direction and the $z$-axis
and $\phi$ is the azimuthal angle as defined in the standard spherical coordinate system.
The coordinate system is defined in Fig.~\ref{detector} where the $z$-axis is directed along the longest axis of the scintillator strip.
The components of the photon flight direction vector can be expressed as follows:
\begin{equation}
\vec{dir} = [\sin\theta \cos\phi,~\sin\theta \sin\phi,~\cos\theta]
\end{equation}
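A minimal sketch of this sampling prescription (assuming NumPy; an illustration, not the actual J-PET simulation code):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def sample_directions(n):
    """Draw n isotropic photon directions: cos(theta) and phi uniform."""
    cos_t = rng.uniform(-1.0, 1.0, n)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    sin_t = np.sqrt(1.0 - cos_t ** 2)
    # components [sin(theta)cos(phi), sin(theta)sin(phi), cos(theta)]
    return np.column_stack([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])

dirs = sample_directions(100_000)
```

Sampling $\cos\theta$ (rather than $\theta$) uniformly is what makes the emission isotropic over the sphere.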
The numbers of reflections from the side surfaces that are normal to the $x$- or $y$-axis
are calculated using the projections of the flight-direction vector onto the $y$-$z$ or $x$-$z$ plane, respectively:
\begin{equation}
\tan\theta_x=\frac{dir_x}{dir_z};~\tan\theta_y=\frac{dir_y}{dir_z},
\end{equation}
{\noindent where $\theta_x$ is the angle between the projection of $\vec{dir}$ onto the $x$-$z$ plane and the $z$-axis, and $\theta_y$ is the analogous angle for the projection onto the $y$-$z$ plane.}\\
Taking into account the fact that each reflection changes only the sign of the respective component of $\vec{dir}$,
we can assume that the reflection angle is unchanged for each pair of side surfaces during the whole photon flight.
Thus we need to obtain two values of the reflection angle:
one for the side surfaces normal to the $x$-axis and one for those normal to the $y$-axis.
Knowing that $|\vec{dir}|=1$ and that these are the angles between the photon
flight direction and the normal vectors of the respective side surfaces (the $x$ and $y$ axes) we obtain:
\begin{equation}
\cos\alpha_x=dir_x;~\cos\alpha_y=dir_y,
\end{equation}
where $\alpha_{x}$ and
$\alpha_{y}$
are the reflection angles for side surfaces normal to respective axis.
Then the probability of the photon reaching the photomultiplier can be calculated using the following formula:
\begin{equation}
P_{reach}=P_{refl}(\sin\alpha_x)^{n_x} P_{refl}(\sin\alpha_y)^{n_y},
\label{flightformula}
\end{equation}
where $n_{x}$
and
$n_{y}$
denote the respective numbers of reflections.
The dependence $P_{refl}(\sin\alpha)$ is obtained from the Fresnel equations
and is shown in Fig.~\ref{fig2}.
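The reflection bookkeeping above can be sketched as follows. Two points in this sketch are simplifying assumptions made here, not the article's prescription: the reflection counts $n_x$, $n_y$ are taken as one reflection per strip width crossed by the projected path, and the reflection probability is a step model (total internal reflection beyond the critical angle), standing in for the Fresnel curve of Fig.~\ref{fig2}.

```python
import numpy as np

def p_refl_toy(sin_alpha, n_scint=1.58):
    # toy reflection model: unity beyond the critical angle
    # (sin(alpha) > 1/n), a flat 90% otherwise -- an assumed stand-in
    # for the Fresnel-based curve of Fig. 2
    return 1.0 if sin_alpha > 1.0 / n_scint else 0.9

def p_reach(direction, dL, width_x, width_y, p_refl=p_refl_toy):
    """Survival probability over n_x + n_y wall reflections (Eq. flightformula)."""
    dx, dy, dz = direction
    cos_ax, cos_ay = abs(dx), abs(dy)          # cos(alpha_x), cos(alpha_y)
    sin_ax = np.sqrt(1.0 - cos_ax ** 2)
    sin_ay = np.sqrt(1.0 - cos_ay ** 2)
    # assumed counting: one reflection per strip width crossed by the
    # projected path of length dL * |tan(theta_{x,y})|
    n_x = int(dL * abs(dx / dz) / width_x)
    n_y = int(dL * abs(dy / dz) / width_y)
    return p_refl(sin_ax) ** n_x * p_refl(sin_ay) ** n_y

# a photon flying straight down the strip axis is never reflected
p_axial = p_reach((0.0, 0.0, 1.0), 30.0, 0.7, 1.9)
```

For steep emission angles the exponents $n_x$, $n_y$ grow quickly, which is why $P_{reach}$ falls off sharply away from axial flight.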
\begin{figure}
\begin{center}
\includegraphics[width=220pt]{Fig_reflection.png}
\end{center}
\caption{
The dependence of the reflection probability on the sine of the reflection angle $\alpha$.
\label{fig2}
}
\end{figure}
Further factors that influence the photon registration probability
are absorption in the scintillator material, losses at surface imperfections,
and the photomultiplier's
quantum efficiency.
In the current simulation algorithm the following formula for the photon registration probability has been used:
\begin{equation}
P_{reg} = P_{reach}~\epsilon(\lambda)~e^{-{\mu_{eff}(\lambda)}~\frac{\Delta L}{\cos\theta}},
\end{equation}
where $P_{reach}$ denotes the probability from formula (\ref{flightformula}),
$\lambda$ denotes the photon's wavelength,
$\epsilon(\lambda)$ stands for the photomultiplier's quantum efficiency and $\mu_{eff}(\lambda)$
is the effective absorption coefficient for the scintillator material.
The latter, shown by thick solid line in Fig.~\ref{fig3}, accounts effectively for the absorption
of photons on the way to photomultipliers
and was determined by scaling the
absorption coefficient of pure polystyrene~(Senchyshyn et al \hyperlink{Senchyshyn2006}{2006})
to the experimental results obtained with the single detection unit of the J-PET detector~
(Kowalski et al \hyperlink{Kowalski2015}{2015}).
The scaling factor accounts effectively for the absorption due
to the primary and secondary admixture in the scintillator material,
imperfections of surfaces and reflectivity of the foil~(Kowalski et al \hyperlink{Kowalski2015}{2015}).
It was determined by the comparison of simulations with experimental results obtained
for the EJ-230 plastic scintillator with dimensions
of 0.5~cm~x~1.9~cm~x~30~cm~(Kowalski et al \hyperlink{Kowalski2015}{2015}).
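A sketch of this registration step follows. The wavelength dependences $\epsilon(\lambda)$ and $\mu_{eff}(\lambda)$ below are toy assumed shapes, standing in for the measured curves of Figs.~\ref{fig3} and \ref{fig4}.

```python
import numpy as np

def p_register(p_reach, dL, cos_theta, lam, qe, mu_eff):
    """Registration probability: reach the sensor (P_reach), survive bulk
    absorption over the true path length dL/cos(theta), and convert with
    quantum efficiency eps(lambda)."""
    return p_reach * qe(lam) * np.exp(-mu_eff(lam) * dL / cos_theta)

# assumed toy wavelength dependences -- the article uses the measured
# curves of Figs. 3 and 4 instead
qe_toy = lambda lam: 0.25 * np.exp(-((lam - 400.0) / 60.0) ** 2)
mu_toy = lambda lam: 0.01 + 1e-4 * max(0.0, 420.0 - lam)   # cm^-1

p = p_register(0.8, 30.0, 0.95, 420.0, qe_toy, mu_toy)
```

Note that the absorption term uses the actual path length $\Delta L/\cos\theta$, not the axial distance $\Delta L$.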
\begin{figure}
\begin{center}
\includegraphics[width=220pt]{Fig_absorption.png}
\end{center}
\caption{
(Thick line) The dependence of the scintillator's absorption coefficient $\mu_{eff}$ on the photon's wavelength.
The effective coefficient $\mu_{eff}$ was determined by scaling
the absorption coefficient of pure
polystyrene~(Senchyshyn et al \protect\hyperlink{Senchyshyn2006}{2006}) by a factor of 1.8~(Kowalski et al \protect\hyperlink{Kowalski2015}{2015}).
(Thin line) Emission spectrum of the BC-420 plastic scintillator~(\protect\hyperlink{SaintGobain}{Saint Gobain Crystals}).
The left axis denotes the absorption coefficient and the right axis denotes the emission intensity.
\label{fig3}
}
\end{figure}
The photomultiplier quantum efficiencies used in the current calculations are shown in
Fig.~\ref{fig4}.
\begin{figure}
\begin{center}
\includegraphics[width=220pt]{Fig_displ_fn.png}
\end{center}
\caption{
Quantum efficiency as a function of the photon's wavelength for the Hamamatsu R4998 (R5320) photomultiplier~(Baszak \protect\hyperlink{Baszak2014}{2014})
(solid line)
and the Hamamatsu silicon
S12572-100P photomultiplier (dashed line).
A superimposed thin solid line denotes the emission spectrum
of the Saint Gobain BC-420 plastic scintillator.
The left axis denotes the quantum efficiency and the right axis denotes the emission intensity.
\label{fig4}
}
\end{figure}
The time
of arrival
$t_i^{arrival}$
of the $i$th photon at the photomultiplier
(or, in general, the time of passing a given distance ${\Delta L}$ along the scintillator)
may be expressed as:
\begin{equation}
t_i^{arrival} = t_i^{e} + \frac{\Delta L}{\frac{c}{n} \cos\theta},
\label{time_phm}
\end{equation}
where $t_i^{e}$ is the emission time of the $i$th photon,
$\Delta L$ denotes the distance between the emission point and the photomultiplier,
$c$ denotes the speed of light,
and $n$ stands for the scintillator's refractive index (the value $n = 1.58$ was used)~(\hyperlink{SaintGobain}{Saint Gobain Crystals}).
Finally, the timestamp $t_i$ is simulated by smearing the time
$t_i^{arrival}$,
taking into account the transit time spread of the photosensors:
\begin{equation}
t_{i} = t_i^{arrival} + RG(t_{offset}, \sigma_t),
\end{equation}
where $RG(t_{offset},\sigma_t)$ is a value generated randomly according to the Gaussian distribution
with mean $t_{offset}$
and standard deviation $\sigma_t$
equal to the standard deviation of the time spread of a given photomultiplier.
For the simulations referred to in this paper as done with an ``ideal photomultiplier'', $\sigma_t = 0$.
Otherwise, for the Hamamatsu R4998 (R5320) photomultiplier $\sigma_t = 0.068$~ns~(\hyperlink{Hamamatsu}{Hamamatsu}),
and for the silicon photomultiplier S12572-100P $\sigma_t = 0.128$~ns~(\hyperlink{Hamamatsu}{Hamamatsu}).
The parameter $t_{offset}$ accounts for all constant electronic time delays and
its value does not influence the time resolution.
Therefore, for simplicity,
but without loss of generality, it is set to zero.
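The two formulas above can be combined into a single helper for generating timestamps; a sketch assuming NumPy, with the speed of light expressed in cm/ns:

```python
import numpy as np

C_CM_PER_NS = 29.979    # speed of light in cm/ns
N_REFR = 1.58           # scintillator refractive index

rng = np.random.default_rng(seed=1)

def timestamp(t_emit, dL, cos_theta, sigma_t, t_offset=0.0):
    """Arrival time along the strip (Eq. time_phm), smeared by the sensor
    transit time spread RG(t_offset, sigma_t)."""
    t_arrival = t_emit + dL / (C_CM_PER_NS / N_REFR * cos_theta)
    return t_arrival + rng.normal(t_offset, sigma_t)

# ideal photosensor (sigma_t = 0): deterministic arrival time
t_ideal = timestamp(0.0, 15.0, 0.9, sigma_t=0.0)
# Hamamatsu R4998-like sensor: sigma_t = 0.068 ns
t_real = timestamp(0.0, 15.0, 0.9, sigma_t=0.068)
```

With $\sigma_t = 0$ the helper reproduces the bare Eq.~(\ref{time_phm}), so the same code serves both the ideal-photosensor and realistic-sensor configurations.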
\end{document}
\section{Introduction}
In recent years the numerical simulation of lattice quantum
chromodynamics (QCD) has developed into an enormously successful
method for obtaining precise values of several Standard Model
parameters. It is the only truly {\it ab initio} method available for
doing nonperturbative studies of QCD. It provides a unified framework
for treating mesons and baryons involving charm and bottom quarks,
whether in a heavy-light or heavy-heavy configuration.
For charm physics the primary phenomenological objectives of lattice
QCD are to help in the discovery and characterization of excited
states, including exotics, to determine decay
constants\cite{Simone,Na} and form factors, and to calculate
electromagnetic transition rates. Theoretical objectives include
providing guidance for effective field theory.
Since the last Charm conference there have been no new comprehensive
numerical lattice studies of exotics and excited quarkonium states.
So I will review the current status only briefly. There has been
exciting progress, however, in the high precision determination of
masses of several of the lowest lying states involving heavy quarks.
Progress here is, of course, a prerequisite to an accurate determination
and characterization of excited states.
Lattice simulations have their limitations. While static properties,
including masses and matrix elements are relatively easy to calculate,
scattering processes, including charmonium production, inclusive
processes, and multihadronic (more than two-body) decays are very
difficult.
In addition to the usual challenges of extrapolating to the continuum
and physical light quark masses, heavy quark simulations face special
difficulties. The lattice spacing $a$ introduces a momentum cut off
of order $1/a$, typically less than $2-3$ GeV with today's lattices.
As the quark mass $M$ approaches the cut off, discretization errors
grow. Such mass-related errors can be controlled by a suitable
formulation of the Dirac action for the heavy quark. Among them are
(1) a lattice version of nonrelativistic QCD\cite{Lepage:1992tx},
which converges rather slowly for charm, but well for bottom, (2) the
Fermilab action\cite{EKM} and its
improvements\cite{Oktay:2008ex,Oktay:LAT2010}, good for both charm and
bottom, and (3) the highly improved staggered quark action (HISQ) with
discretization errors of order ${\cal O}(\alpha_s^2(a M_c)^2)$, good
for charm, but not so good for bottom with today's lattices.
\begin{figure}
\begin{tabular}{cc}
\begin{minipage}{0.45\textwidth}
\vspace*{-3mm}
\includegraphics[width=\textwidth]{figs/spectrum--}\hfill
\end{minipage}
&
\begin{minipage}{0.45\textwidth}
\includegraphics[width=\textwidth]{figs/spectrum+-}\hfill
\end{minipage}
\end{tabular}
\caption{Charmonium excitation spectrum in MeV from
Ref.~\protect\refcite{Dudek:2007wv}. Left: $J^{--}$ states.
Right $J^{+-}$ states including exotics. Lattice results (blue)
are compared with PDG values (black) and various quark model
results (purple and orange). Color online.
\label{fig:Dudek}
}
\end{figure}
\section{Excitation spectrum and exotics}
In 2008 Dudek {\it et al.}\cite{Dudek:2007wv} published results of a
comprehensive lattice QCD study of the excitation spectrum of
charmonium. In order to access excited states they introduced a large
set of interpolating operators and used a ``variational'' method to
determine their masses.
How well does this work? Figure~\ref{fig:Dudek} (left) gives an
impression. Consider the $1^{--}$ channel, where the ground state
$J/\psi$ has a very clean signal. Six states were found, but only the
first three shown were unambiguous enough that the authors were
willing to associate them with known levels, namely the $J/\psi$,
$\psi(2S)$ and $\psi(3S)$. The lattice levels were generally higher
than experiment and the discrepancies grew from 12 to 82 MeV. They
were also higher than quark model determinations, as shown. In the
$PC = +-$ channel this study also produced a couple of spin-exotic states
as shown in Fig.~\ref{fig:Dudek}. See Ref.~\refcite{Dudek:2007wv} for
more states.
These results are pioneering, but they are deficient for a number of
reasons. Sea quark effects were omitted, the calculation was done at
only one lattice spacing, so an extrapolation to the continuum is not
possible, and open charm states were ignored. The Hadron Spectrum
Collaboration is remedying these shortcomings\cite{Ryan}.
To include sea quarks and allow extrapolation to the continuum and to
physical light quark masses requires generating gauge field
configurations (ensembles) with various lattice spacings and light
quark mass combinations. The MILC collaboration makes publicly
available a large archive of such ensembles with lattice spacing
ranging from 0.15 fm to 0.045 fm and light quark mass ratios
$m_{ud}/m_s$ ranging down to 0.05, compared with the physical value of
approximately 0.037\cite{Bazavov:2009bb}.
MILC gauge configurations are being used by the FNAL/MILC and HPQCD
collaborations to study the quarkonium spectrum. The FNAL/MILC
campaign currently uses Fermilab quarks for both charm and bottom and
HPQCD uses HISQ charm quarks and NRQCD bottom quarks. Most of the
FNAL/MILC results presented here are from last year's
study\cite{Burch:2009az}, which, like the HPQCD study, used a limited
set of interpolating operators and looked at only the low-lying
states. A new FNAL/MILC campaign currently under way is aimed at
excited as well as ground states using a large set of interpolating
operators, higher statistics, smaller lattice spacings, and more
accurate heavy quark mass tuning. I will give a preview of the
improvement we hope to obtain.
\begin{figure}
\centering
\begin{tabular}{cc}
\begin{minipage}{0.45\textwidth}
\includegraphics[width=\textwidth]{figs/chiral-cc_spectrum.pdf}\hfill
\end{minipage}
&
\begin{minipage}{0.48\textwidth}
\includegraphics[width=\textwidth]{figs/goldmeson}
\end{minipage}
\end{tabular}
\caption{Left: Charmonium levels from
Ref.~\protect\refcite{Burch:2009az} based on splittings from the
spin-averaged $1S$ level with quark masses determined from the
$D_s$ and light pseudoscalar mesons and scale determined from
upsilon splittings. Results for four lattice spacings from 0.18
fm (red) to 0.09 fm (blue) are shown. Black lines are PDG values.
Right: complete low-lying heavy and light meson spectrum from
Ref.~\protect\refcite{Gregory:2009hq} showing five masses used to
fix four quark masses and the lattice scale and three masses
predicted in advance of experiment.
\label{fig:overview}}
\end{figure}
\begin{figure}
\vspace*{-7mm}
\centering
\includegraphics[width=0.45\textwidth]{figs/charm_hfs_vs_a2}\hfill
\includegraphics[width=0.45\textwidth]{figs/hyp-cc}\hfill
\caption{Hyperfine splitting in charmonium extrapolated to zero
lattice spacing. PDG values are in black. Left:
FNAL/MILC\protect\cite{Burch:2009az}. Units are $r_1 \approx 1.58$
GeV$^{-1}$. Right: HPQCD\protect\cite{Follana}.
\label{fig:HFS}}
\end{figure}
\section{Low-lying levels}
Broadly speaking, lattice simulations do quite well reproducing the
low-lying hadronic states. Figure~\ref{fig:overview} gives an overview
of the charmonium spectrum of the FNAL/MILC collaboration (left)
\cite{Burch:2009az} and the full gold-plated meson spectrum of the
HPQCD collaboration\cite{Gregory:2009hq} (right). To appreciate the
precision of the calculation, we turn to a finer mass scale in the
remainder of this minireview.
Historically, the hyperfine splitting of the $1S$ state has proven to
be a stringent test of lattice methodology. In Fig.~\ref{fig:HFS}
recent results of the FNAL/MILC collaboration (left)
\cite{Burch:2009az} are compared with still more recent results of the
HPQCD collaboration (right)\cite{Follana}. The FNAL/MILC calculation
was based on Fermilab quarks. The extrapolated value is 117(11) MeV.
In this result annihilation effects are not included. The HPQCD
result is based on HISQ charm quarks. The pink band indicates the
full error budget ($\pm 2$ MeV), and includes a small increase due to
annihilation, as suggested by perturbation theory. Determining level
shifts due to annihilation is difficult. A recent nonperturbative
(lattice) result estimates, instead, a decrease of $1-4$ MeV from
charm annihilation\cite{Levkova:2010ft}. Thus, apparently, we have
reached the point where annihilation effects cannot be ignored.
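The continuum extrapolations shown in Fig.~\ref{fig:HFS} are, at leading order, weighted fits linear in $a^2$, since the dominant discretization errors of the improved actions start at ${\cal O}(a^2)$. The sketch below performs such a fit; the data points are illustrative mock numbers, not the published FNAL/MILC or HPQCD values.

```python
import numpy as np

# mock hyperfine-splitting data (MeV) at four lattice spacings a (fm);
# illustrative numbers only, not the published results
a = np.array([0.15, 0.12, 0.09, 0.06])
hfs = np.array([98.0, 104.0, 110.0, 113.5])
err = np.array([3.0, 2.5, 2.0, 2.5])

# leading discretization error is O(a^2): weighted fit hfs = c0 + c1 * a^2
w = 1.0 / err ** 2
A = np.column_stack([np.ones_like(a), a ** 2])
cov = np.linalg.inv(A.T @ (w[:, None] * A))
c = cov @ A.T @ (w * hfs)
c0, c0_err = c[0], np.sqrt(cov[0, 0])
print(f"continuum value: {c0:.1f} +/- {c0_err:.1f} MeV")
```

The intercept $c_0$ is the continuum ($a \to 0$) value quoted in such analyses, and the covariance matrix of the weighted fit supplies its statistical error.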
\begin{figure}
\includegraphics[width=0.42\textwidth]{figs/mdsfullvsasq}\hfill
\includegraphics[width=0.42\textwidth]{figs/mbc}\hfill
\caption{Recent HPQCD results for the $D_s$ mass
(left)\protect\cite{Davies:2010ip} and $B_c$
(right)\protect\cite{Gregory:2010gm}.
\label{fig:DsBc}}
\end{figure}
Further examples of achievable precision in gold-plated quantities are
shown in Fig.~\ref{fig:DsBc}. The $D_s$ mass is obtained to 3 MeV
accuracy from the mass splitting $M(c\bar s) -
\frac{1}{2}M(\eta_c)$\cite{Davies:2010ip}, and the $B_c$ mass is
obtained to 10 MeV accuracy from the splitting $M(B_c) - M(b\bar b)/2
- M(\eta_c)/2$\cite{Gregory:2010gm}.
Going beyond gold-plated quantities, the FNAL/MILC simulation results
for the spin-averaged $2S-1S$ splitting, shown in
Fig.~\ref{fig:2S1SnewHFS} (left), disagree with the experimental
value\cite{Burch:2009az}. Could this be a result of complications from
the nearby open charm threshold? Those simulations do not currently
treat two-body scattering states. A recent calculation by Bali and
Ehmann with a small basis set suggests that they may be important
even for states well below threshold, but further study is
needed\cite{Bali:2009er}.
\begin{figure}
\vspace*{-5mm}
\includegraphics[width=0.45\textwidth]{figs/charm_2S1S_vs_a2}\hfill
\includegraphics[width=0.45\textwidth]{figs/fit_hyperfine_r1}\hfill
\caption{Left: $\bm{\overline{2S}-\overline{1S}}$ splitting from
Ref.~\protect\refcite{Burch:2009az} showing possible complications from
open charm. Right: Sample FNAL/MILC preliminary result for the $1S$
hyperfine splitting from the new analysis with much reduced errors.
\label{fig:2S1SnewHFS}}
\end{figure}
\section{Conclusions and Outlook}
We have seen dramatic improvement in our ability to reproduce the
masses of the lowest lying charmonium states. This has been brought
about by the use of improved quark actions, higher statistics,
accurate tuning of the heavy quark masses, smaller lattice spacing,
and the availability of gauge field ensembles that support an
extrapolation to physical sea quark masses and zero lattice spacing.
New analysis campaigns with multiple interpolating operators are
underway both by the Hadron Spectrum Collaboration and the Fermilab
Lattice/MILC collaborations. A preview of the hyperfine splitting
from this new analysis is shown in Fig.~\ref{fig:2S1SnewHFS} (right), a
considerable improvement over Fig.~\ref{fig:HFS} (left). These campaigns
should provide more accurate information about excited and exotic
states. Treating two-body scattering states will continue to be a
long-term challenge.
\section{Unbiasedness and consistency}\label{sec:theory}
Unbiased or asymptotically unbiased estimates of $\tau_{it}$ can be obtained with all four methods described in this review.
For each one of them, we now describe the sampling framework and the main assumptions for the unbiasedness to hold.
For ease of exposition, we choose not to list some technical regularity conditions required for the results presented to hold; readers can refer to the original publications for these.
\textit{DID}.
For the linear DID estimator, we make use of some well-known results for OLS regression, see e.g.\ \citet{Wool2013}.
If the DID model of Equation \eqref{eq:didb} holds then $\hat\tau_{it}^\mathrm{DID}$ is unbiased, that is,
\begin{equation}\nonumber
\mathbb{E}\left[\hat\tau_{it}^\mathrm{DID}\right]=\tau_{it},
\end{equation}
where the expectation is taken with respect to the conditional distribution of $\boldsymbol\varepsilon=\left(\varepsilon_{11},\dots,\varepsilon_{1T},\dots,\varepsilon_{n1},\dots,\varepsilon_{nT}\right)^\top$ given $\boldsymbol{S}_n$, where $\boldsymbol{S}_n=(\boldsymbol{x}_{11}^\top,\dots,\boldsymbol{x}_{1T}^\top,\kappa_1,\boldsymbol{d}_{1}^\top,\dots,\\ \boldsymbol{x}_{n1}^\top,\dots,\boldsymbol{x}_{nT}^\top,\kappa_n,\boldsymbol{d}_{n}^\top)^\top$.
That is, $\boldsymbol{S}_n$ is common to the repeated samples but the errors $\boldsymbol{\varepsilon}$ differ in each repeated sample.
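As a concrete illustration of this sampling framework (our own sketch, not from \citet{Wool2013}), the following simulation holds the fixed effects and treatment assignment constant while redrawing the errors, and checks that the OLS coefficient on the treatment dummy averages to the true effect. All names and parameter values are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, T1 = 6, 10, 5            # units, time points, pre-intervention periods
tau = 2.0                      # true (constant) treatment effect
kappa = rng.normal(size=n)     # unit fixed effects (held fixed across samples)
mu = rng.normal(size=T)        # time fixed effects (held fixed across samples)

def did_estimate(err_rng):
    d = np.zeros((n, T))
    d[-1, T1:] = 1             # last unit is treated from period T1 onwards
    y = kappa[:, None] + mu[None, :] + tau * d \
        + err_rng.normal(scale=0.5, size=(n, T))
    # design: n unit dummies, T-1 time dummies (baseline dropped), treatment
    rows = []
    for i in range(n):
        for t in range(T):
            x = np.zeros(n + T)
            x[i] = 1.0
            if t > 0:
                x[n + t - 1] = 1.0
            x[-1] = d[i, t]
            rows.append(x)
    X = np.array(rows)
    beta, *_ = np.linalg.lstsq(X, y.ravel(), rcond=None)
    return beta[-1]            # coefficient on the treatment indicator

# average the estimator over repeated draws of the errors only
est = np.mean([did_estimate(np.random.default_rng(s)) for s in range(200)])
```

Averaging over repeated error draws, `est` lies close to the true effect, consistent with the unbiasedness statement above.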
\textit{LFM}.
\citet{Xu2017} study the properties of $\hat\tau^{\mathrm{XU}}_{it}$.
If the LFM of Equation \eqref{eq:factor} holds then under some regularity conditions (which include weak serial correlation of the error terms within each unit) $\hat\tau^{\mathrm{XU}}_{it}$ is asymptotically unbiased, that is
\begin{equation}
\nonumber \mathbb{E}\left[\hat\tau^{\mathrm{XU}}_{it}\right]\rightarrow \tau_{it}
\end{equation}
as $n_1\rightarrow\infty$ and $T_1\rightarrow\infty$, where the expectation is taken with respect to the conditional distribution of $\boldsymbol{\varepsilon}$ given $\boldsymbol{S}_n$, where $\boldsymbol{S}_n=(\boldsymbol{x}_{11}^\top,\dots,\boldsymbol{x}_{1T}^\top,\boldsymbol\lambda_1^\top,\boldsymbol{d}_{1}^\top,\dots,\boldsymbol{x}_{n1}^\top,\\\dots,\boldsymbol{x}_{nT}^\top,\boldsymbol\lambda_n^\top,\boldsymbol{d}_{n}^\top,\boldsymbol{f}_1^\top,\dots,\boldsymbol{f}_T^\top)^\top$.
Intuitively, we require that both $n_1$ and $T_1$ are large in order to accurately estimate factors at each post-intervention time point and loadings for the treated units, respectively, which we need in order to predict the counterfactual outcomes.
\textit{Synthetic control-type approaches}.
These methods do not assume a generative model, but rather exploit linear relationships between the data on the treated and control units in order to construct a counterfactual.
Such relationships may arise from various data-generating mechanisms.
Hence, their unbiasedness properties can be studied under any of these.
Assume that the LFM
\begin{eqnarray}\label{eq:sclfm}\nonumber
\quad y_{it}&=&y_{it}^{(0)}+\tau_{it} d_{it} \\
y_{it}^{(0)}&=&\mu_t + \boldsymbol{x}^\top_{i}\boldsymbol\theta_t + \boldsymbol\lambda_i^\top\boldsymbol{f}_t +\varepsilon_{it},
\end{eqnarray}
holds, the error terms $\varepsilon_{it}$ have zero mean given $\boldsymbol{S}_n$ and $\varepsilon_{it}\independent\varepsilon_{js}$ given $\boldsymbol{S}_n$, ($i\neq j$ and $t\neq s$), where $\mu_t$ are time fixed-effects, $\boldsymbol{x}_i=(x_{i1},\ldots,x_{iK})^\top$ are time-invariant covariates and $\boldsymbol{S}_n=(\boldsymbol{x}_{1}^\top,\boldsymbol\lambda_1^\top,\boldsymbol{d}_{1}^\top,\dots,\boldsymbol{x}_{n}^\top,\boldsymbol\lambda_{n}^\top,\boldsymbol{d}_{n}^\top,\boldsymbol{f}_1^\top,\dots,\boldsymbol{f}_T^\top,\mu_1,\ldots,\\\mu_t)^\top$.
\citet{Abadie2010} show that if there exist $\varpi_1,\dots,\varpi_{n_1}$ such that
\begin{eqnarray}\label{eq:scvarpi}\nonumber
\boldsymbol{\lambda}_{n_1+1}& = & \sum_{i=1}^{n_1}{\varpi_i\boldsymbol{\lambda}_{i}}
\\
\boldsymbol{x}_{n_1+1}& = & \sum_{i=1}^{n_1}{\varpi_i\boldsymbol{x}_{i}},
\end{eqnarray}
then under some regularity conditions $\hat\tau_{n_1+1,t}^\mathrm{SC}$ is asymptotically unbiased, \textit{i.e.}
\begin{equation}\nonumber
\mathbb{E}\left[\hat\tau_{n_1+1,t}^{\mathrm{SC}}\right]\rightarrow \tau_{n_1+1,t}
\end{equation}
as $T_1\rightarrow \infty$, where the expectation is taken with respect to the conditional distribution of $\boldsymbol\varepsilon$ given $\boldsymbol{S}_n$.
The conditions \eqref{eq:scvarpi} imply that both observed ($\boldsymbol{x}_{n_1+1}$) and unobserved ($\boldsymbol{\lambda}_{n_1+1}$) characteristics of the treated unit lie in the convex hull of the characteristics of control units, thus allowing interpolation.
When this is not true (\textit{i.e.}\ when such $\varpi_1,\dots,\varpi_{n_1}$ do not exist, thus forcing extrapolation to be used), the SCM estimator will generally be biased \citep{Gobillon2016,Ferman2016a}.
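To make the weight construction concrete, here is a minimal sketch (our own, ignoring covariates) of estimating synthetic control weights by constrained least squares on pre-intervention outcomes, with simulated data in which the treated unit does lie in the convex hull of the controls.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n1, T1 = 4, 30
true_w = np.array([0.5, 0.3, 0.2, 0.0])       # treated = convex combination
Y0 = rng.normal(size=(n1, T1))                # pre-period control outcomes
y1 = true_w @ Y0 + rng.normal(scale=0.01, size=T1)

def loss(w):                                  # pre-period squared error
    return np.sum((y1 - w @ Y0) ** 2)

# weights constrained to be non-negative and to sum to one (SLSQP)
res = minimize(loss, np.full(n1, 1 / n1),
               bounds=[(0, 1)] * n1,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
w_hat = res.x
```

With the treated unit inside the convex hull, the constrained fit recovers the generating weights closely.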
Assume the following variant of the LFM:
\begin{eqnarray}\label{eq:hcw}
y_{it}^{(0)}&=&\kappa_i+\boldsymbol\lambda_i^\top\boldsymbol{f}_t +\varepsilon_{it},
\end{eqnarray}
where $\kappa_i$ are unit fixed effects and $\varepsilon_{it}$ are zero-mean, homoscedastic error terms which are independent of $\boldsymbol{f}_s$ for all $t,s$ and independent of $d_{js}$ for all $i\neq j$.
\citet{Hsiao2012} prove that if there exist $\gamma_1,\dots,\gamma_{n_1}$ such that
\begin{equation}\label{eq:hcw1}
\lambda_{n_1+1,j} = \sum_{i=1}^{n_1}\gamma_i\lambda_{ij}
\end{equation}
is true for every $j=1,\dots,J$ (along with some technical conditions), then $\hat{\tau}_{n_1+1,t}^\mathrm{HCW}$ is unbiased, \textit{i.e.}
\begin{equation}
\nonumber \mathbb{E}\left[\hat{\tau}_{n_1+1,t}^\mathrm{HCW}\right] = \tau_{n_1+1,t},
\end{equation}
where the expectation is taken with respect to the conditional distribution of $\boldsymbol\varepsilon$ given $\boldsymbol{S}_n$, where $\boldsymbol{S}_n=(\kappa_1,\boldsymbol{\lambda}_1^\top,\boldsymbol{d}_1^\top,\dots,\kappa_{n},\boldsymbol{\lambda}_{n}^\top,\boldsymbol{d}_{n}^\top,\boldsymbol{f}_1^\top,\dots,\boldsymbol{f}_T^\top)^\top$.
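A minimal sketch of the HCW construction under this setup (our own simulated data, not \citeauthor{Hsiao2012}'s implementation): regress the treated unit's pre-intervention outcomes on an intercept and the control outcomes, then use the fitted model to predict the post-intervention counterfactual.

```python
import numpy as np

rng = np.random.default_rng(2)
n1, T1, T2 = 3, 40, 10
T = T1 + T2
tau = 1.5

Yc = rng.normal(size=(n1, T))                 # control outcomes
# treated unit: linear combination of controls plus an intercept and noise
y = 1.0 + 0.4 * Yc[0] + 0.6 * Yc[1] + rng.normal(scale=0.1, size=T)
y[T1:] += tau                                 # intervention effect

# OLS of the treated unit's pre-period outcomes on intercept and controls
X_pre = np.column_stack([np.ones(T1), Yc[:, :T1].T])
coef, *_ = np.linalg.lstsq(X_pre, y[:T1], rcond=None)

# predicted counterfactual and per-period effect estimates
X_post = np.column_stack([np.ones(T2), Yc[:, T1:].T])
tau_hat = y[T1:] - X_post @ coef
```

The per-period estimates fluctuate around the true effect, as the unbiasedness result suggests.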
\textit{CIM}.
If the CIM of Equations \eqref{eq:bstseg} is the true data-generating model and the prior on the vector of model parameters $\boldsymbol\vartheta=(\beta_1,\ldots,\beta_{n_1},\sigma_\varepsilon^2,\sigma_\eta^2,\sigma_\zeta^2)^\top$ assigns non-zero probability to its true value, then the posterior distribution of $\boldsymbol{\vartheta}$ will converge to a point mass on its true value as $T_1\rightarrow\infty$.
Consequently, the posterior mean of $y^{(0)}_{n_1+1,t}$ will converge to its true value, and so $\hat{\tau}_{n_1+1,t}^\mathrm{CIM}$ is an asymptotically unbiased estimate of $\tau_{n_1+1,t}$ (as $T_1\rightarrow\infty$).
The above results concern (asymptotic) unbiasedness.
Consistent estimation of $\tau_{it}$ is not feasible, unless it is assumed that $\tau_{it} = \tau_i$ or $\tau_{it} = \tau_t$, \textit{i.e.} that the unit-specific treatment effects are the same at all post-intervention times or (when $n>n_1+1$) are the same at each time for all treated units.
This is because, regardless of how many units and timepoints there are, $y_{it}^{(1)}$ is only measured once for each $i>n_1$ and $t>T_1$.
It is not uncommon to assume that $\tau_{it}=\tau$ (\textit{e.g.} \citet{Angrist2009,Gobillon2016}). When this is done, existing results for the linear DID model (\textit{e.g}.\ \citet{Wool2013}) and the LFM method of \citet{Xu2017} (\textit{e.g}.\ \citet{Bai2009}) imply consistency of the estimator of $\tau$ when either of those methods are used.
These results, though, require some additional technical assumptions to hold.
Alternatively, a looser structure could be imposed on $\tau_{it}$. For example, HCW assume that $\tau_{n_1+1,T_1+1}, \ldots, \tau_{n_1+1,T}$ is an auto-regressive moving-average process.
This enables the mean of this process to be consistently estimated.
\section{Real data supplementary analysis}\label{sec:suppl}
In this section, we provide supplementary material for the data analysis of Section \ref{sec:real1}.
Figure \ref{fig:didpar} shows the parallel trends diagnostic check described in Section \ref{sec:blacks}.
The estimated factor loadings obtained by applying the LFM method of \citet{Xu2017} are shown in Table \ref{tab:loadings}.
Finally, the $r$ statistics obtained by applying the empirical test of \citet{Abadie2015} with the SCM and HCW methods are shown in Table \ref{tab:rstat}.
\begin{figure}[htp]
\centering
\includegraphics[scale=0.4]{Figures/did_parallel.pdf}
\vspace{-0.15in}
\caption{Difference over time between West Germany's per capita GDP and the average of control countries. Rather than being constant, the difference increases over time thus suggesting that the DID parallel trends assumption might not be plausible.}
\label{fig:didpar}
\end{figure}
\begin{table}[ht]
\centering
\begin{tabular}{rrrrrrrrrrr}
& \multicolumn{10}{c}{\textbf{Factor}}\\
\textbf{Country} & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\
\hline\hline
West Germany & -0.75 & -0.16 & 0.29 & 0.26 & -0.15 & -0.12 & -0.80 & 0.35 & -0.42 & -1.42 \\
Australia & -0.14 & 0.47 & 0.91 & -0.55 & 1.71 & -0.62 & -0.34 & -0.64 & -0.25 & 0.93 \\
Austria & -0.50 & -0.52 & -0.17 & 0.18 & -0.69 & -1.06 & 0.26 & 0.45 & -2.28 & 0.21 \\
Belgium & -0.22 & -0.46 & 0.10 & 1.02 & -0.44 & -0.32 & 0.98 & 1.61 & -0.43 & -0.20 \\
Denmark & -0.34 & 0.24 & 0.21 & -1.34 & 0.09 & -1.01 & -0.93 & -0.25 & 1.24 & 0.02 \\
France & -0.14 & -0.33 & 0.05 & 0.11 & -0.45 & -0.18 & -0.95 & 1.12 & -0.19 & -0.92 \\
Greece & 1.95 & -0.02 & 1.52 & 1.10 & -0.55 & -0.88 & 0.17 & -1.99 & -0.01 & -1.54 \\
Italy & -0.03 & -0.72 & -0.62 & -0.65 & -0.39 & -0.26 & -0.49 & 0.48 & -0.85 & -1.14 \\
Japan & -0.33 & -1.18 & -2.04 & -0.30 & -0.07 & -0.53 & 2.30 & -1.28 & 1.15 & 0.12 \\
Netherlands & -0.45 & 0.31 & 0.02 & 1.30 & 0.07 & -1.68 & -0.83 & 1.08 & 1.99 & 0.74 \\
New Zealand & 1.32 & -0.12 & 1.40 & -2.35 & -0.26 & 0.35 & 1.24 & 1.10 & 0.14 & 0.96 \\
Norway & -1.60 & 2.55 & 0.29 & -0.15 & -2.04 & 0.83 & 0.53 & -0.70 & 0.08 & -0.05 \\
Portugal & 1.73 & 0.49 & -2.18 & -0.58 & -0.53 & 0.92 & -1.74 & -0.17 & 0.02 & 0.14 \\
Spain & 0.98 & 0.85 & -0.44 & 1.57 & 0.74 & 0.54 & 0.37 & -0.16 & -0.93 & 2.16 \\
Switzerland & -0.71 & -2.19 & 1.10 & 0.70 & -0.71 & 2.23 & -0.77 & -0.49 & 0.77 & 0.77 \\
UK & 0.10 & 0.89 & -0.10 & 0.49 & 1.92 & 1.58 & 0.88 & 1.04 & 0.61 & -1.92 \\
USA & -1.63 & -0.24 & -0.05 & -0.55 & 1.58 & 0.07 & -0.68 & -1.20 & -1.05 & -0.28 \\
\hline\hline
\end{tabular}
\caption{Factor loadings for the 17 countries, as obtained by fitting the LFM of \citet{Xu2017} to the West German reunification data.}
\label{tab:loadings}
\end{table}
\begin{table}[ht]
\centering
\begin{tabular}{lrr}
& \multicolumn{2}{c}{\textbf{Method}} \\
\textbf{Country} & \textbf{SCM} & \textbf{HCW}\\
\hline \hline
West Germany & 30.72 & 71.53 \\
Australia & 6.86 & 16.83 \\
Austria & 4.24 & 45.04 \\
Belgium & 4.52 & 16.90 \\
Denmark & 6.25 & 21.35 \\
France & 8.00 & 52.07 \\
Greece & 7.83 & 18.72 \\
Italy & 20.48 & 46.72 \\
Japan & 4.87 & 29.73 \\
Netherlands & 20.44 & 36.83 \\
New Zealand & 5.16 & 16.99 \\
Norway & 13.78 & 77.36 \\
Portugal & 0.70 & 57.69 \\
Spain & 7.53 & 14.32 \\
Switzerland & 2.36 & 29.75 \\
UK & 6.94 & 30.27 \\
USA & 5.97 & 42.55 \\
\hline\hline
\end{tabular}
\caption{$r$ statistics obtained by applying the empirical test of \citet{Abadie2015} to the German reunification data, for both the SCM and HCW methods.}
\label{tab:rstat}
\end{table}
\begin{comment}
\begin{table}[ht]
\centering
\begin{tabular}{rrrrr}
\textbf{Country}& \textbf{LFM} & \textbf{SCM} &\textbf{HCW}&\textbf{CIM}\\
\hline\hline
Australia & -0.32 & 0.00 & -0.03 & 0.02 \\
Austria & 0.34 & \textbf{0.33} & 0.18 & \textbf{0.21} \\
Belgium & 0.23 & 0.00 & \textbf{0.22} & \textbf{0.10} \\
Denmark & 0.04 & 0.00 & 0.01 & 0.03 \\
France & \textbf{0.76} & 0.00 & 0.07 & \textbf{0.12} \\
Greece & 0.22 & \textbf{0.11} & 0.08 & 0.01 \\
Italy & \textbf{0.65} & \textbf{0.06} & \textbf{0.21} & \textbf{0.35} \\
Japan & \textbf{-0.83} & 0.00 & -0.01 & 0.01 \\
Netherlands & -0.01 & 0.00 & \textbf{0.22} & 0.03 \\
New Zealand & \textbf{-0.86} & 0.00 & -0.04 & 0.00 \\
Norway & 0.11 & 0.04 & 0.04 & 0.05 \\
Portugal & -0.28 & 0.00 & 0.06 & 0.00 \\
Spain & \textbf{-1.02} & 0.00 & \textbf{-0.39} & 0.00 \\
Switzerland & 0.07 & \textbf{0.11} & -0.01 & 0.00 \\
UK & 0.41 & 0.00 & 0.10 & 0.01 \\
USA & 0.49 & \textbf{0.35} & \textbf{0.26} & \textbf{0.11} \\
\hline\hline
\end{tabular}
\caption{text}
\label{tab:importance}
\end{table}
\end{comment}
\section{Quantification of uncertainty \& hypothesis testing}\label{sec:uncertainty}
\textcolor{black}{We now describe approaches to estimating standard errors and testing the null hypothesis that $\tau_{it}=0$, or, for Bayesian methods, estimating the posterior distribution of $\tau_{it}$.}
\textbf{DID}.
\textcolor{black}{If it is assumed that the errors $\varepsilon_{it}$ in the linear DID are mutually independent and homoscedastic, variance estimates for the OLS estimates of $\tau_{it}$ ($i>n_1,t>T_1$) are easy to obtain.
These represent the variance of $\hat{\tau}_{it}^{\mathrm{DID}}$ over repeated samples of the errors $\varepsilon_{it}$ holding $(\boldsymbol{x}_{11}^\top,\ldots,\boldsymbol{x}_{1T}^\top,\kappa_1,\boldsymbol{d}_1^\top,\ldots,\boldsymbol{x}_{n1}^\top,\ldots,\boldsymbol{x}_{nT}^\top,\kappa_n,\boldsymbol{d}_n^\top,\\\mu_1,\ldots,\mu_T)^\top$ fixed.
A Wald test for $\tau_{it}=0$ can then be performed.
}
However, the assumption that the errors $\varepsilon_{it}$ are mutually independent may not be plausible.
\citet{Bertrand2004} show that when, as is likely, the errors $\varepsilon_{i1},\ldots,\varepsilon_{iT}$ are serially correlated, the variance estimator for $\hat{\tau}_{it}$ is biased downwards and type-I error rates are inflated, and they describe methods to deal with this.
Standard errors can also be underestimated if there are correlations due to units being grouped (e.g. hospitals within the same county); \citet{Donald2007} discuss possible solutions.
\textbf{LFM}. \citet{Xu2017} uses parametric bootstrap to obtain confidence intervals for $\hat\tau_{it}^\mathrm{XU}$ and $p$-values, assuming that $\varepsilon_{1t},\ldots,\varepsilon_{nt}$ are independent and homoscedastic at each individual time $t$.
Repeated sampling here is of the errors $\varepsilon_{it}$ holding $(\boldsymbol{x}_{11}^\top,\ldots,\boldsymbol{x}_{1T}^\top,\boldsymbol\lambda_1^\top,\boldsymbol{d}_1^\top,\ldots,\boldsymbol{x}_{n1}^\top,\ldots,\boldsymbol{x}_{nT}^\top,\boldsymbol\lambda_n^\top,\boldsymbol{d}_n^\top,\boldsymbol{f}_1^\top,\ldots,\boldsymbol{f}_T^\top)^\top$ fixed.
\textcolor{black}{
\citet{Li2018} derive the asymptotic distribution (as $T_1,T_2\rightarrow\infty$) of the average effect $\frac{1}{T_2}\sum_{t=T_1+1}^{T}\hat{\tau}_{it}^\mathrm{XU}$ for the $i$-th treated unit.
}
\textcolor{black}{
\textbf{Synthetic control-type approaches.}
\citet{Abadie2010,Abadie2015} argue that
traditional statistical inference is difficult in this setting, unless one is prepared to assume that the unit that received the intervention was chosen at random.
Under that assumption, a standard permutation test would provide a valid $p$-value for the null hypothesis that treatment would have no effect on any of the units (\textit{i.e.} $\tau_{it}=0$ for all $i=1,\ldots,n_1+1$).
\citeauthor{Abadie2015} propose using a very similar test even in settings where the intervention is not randomly assigned, calling this a `placebo test'.
They argue that such a test provides an alternative mode of inference, saying that our confidence that a large treatment effect estimate truly reflects the effect of the intervention would be undermined if similarly large effect estimates were obtained when the treatment labels of the units were permuted.
}
More specifically, \citet{Abadie2010,Abadie2015} compare $\hat{\tau}_{n_1+1,t}^\mathrm{SCM}$ to $\hat{\tau}_{1t}^\mathrm{SCM},\dots,\hat{\tau}_{n_1t}^\mathrm{SCM}$, the estimated effects considering each of the control units in turn as though it had been the treated unit, and using the remaining $n_1-1$ controls to estimate the weights, at each post-intervention time.
Their test statistic $r_i$ is
\begin{equation}
\label{eq:sctest}
r_{i}=
\frac{T_1( \boldsymbol{y}_{i,T_1+1:T}-\hat{\boldsymbol{y}}_{i,T_1+1:T} )^\top( \boldsymbol{y}_{i,T_1+1:T}-\hat{\boldsymbol{y}}_{i,T_1+1:T} ) }{T_2( \boldsymbol{y}_{i,1:T_1}-\hat{\boldsymbol{y}}_{i,1:T_1} )^\top( \boldsymbol{y}_{i,1:T_1}-\hat{\boldsymbol{y}}_{i,1:T_1} )}
,
\end{equation}
that is, the ratio of post- to pre-intervention MSE between the observed and predicted outcomes.
The predicted counterfactual of control unit $i$ ($i\leq n_1$) is obtained by applying the SC method to that unit, using the remaining $n_1-1$ controls to find weights.
Their intuition is that under the null hypothesis the predictive ability of the SCM should be similar in the two periods, and thus the ratio $r_{n_1+1}$ should be close to 1.
Hence, a value of $r_{n_1+1}$ that lies in the tail of the empirical distribution of $r_1,\dots,r_{n_1+1}$ can be viewed as evidence for a non-zero intervention effect.
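The computation of the $r_i$ statistics and the resulting empirical rank can be sketched as follows; the prediction errors here are simulated placeholders rather than residuals from an actual SCM fit, and all names are ours.

```python
import numpy as np

rng = np.random.default_rng(3)
n1, T1, T2 = 19, 30, 10

def r_stat(resid_pre, resid_post):
    # ratio of post- to pre-intervention mean squared prediction error
    return (resid_post @ resid_post / T2) / (resid_pre @ resid_pre / T1)

# placeholder residuals: controls are pure noise; the treated unit (index 0)
# carries an extra post-period shift mimicking a genuine intervention effect
r = []
for i in range(n1 + 1):
    pre = rng.normal(size=T1)
    post = rng.normal(size=T2) + (3.0 if i == 0 else 0.0)
    r.append(r_stat(pre, post))
r = np.array(r)

# how extreme the treated unit is within the empirical distribution of r
p_value = np.mean(r >= r[0])
```

Because only the treated unit's post-period errors are shifted, its $r$ statistic lands in the tail of the empirical distribution, mirroring the logic of the placebo test.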
\textcolor{black}{\citet{Firpo2018} investigate the impact that the choice of the test statistic has on the results of \citeauthor{Abadie2015}'s test, finding that $r_i$ outperformed alternative statistics that they considered in several performance measures.}
\textcolor{black}{
\citet{Firpo2018} propose a generalisation of \citeauthor{Abadie2015}'s placebo test.
Rather than giving equal weight to all possible permutations of the treatment labels when calculating the $p$-value, they make the weights depend on a sensitivity parameter $\phi$.
The reasoning is that even if the unit that received the intervention had actually been chosen at random, some units might have been more likely to be chosen, thus making some permutations of the treatment labels more probable than others.
\citet{Firpo2018} vary the value of $\phi$ and assess how robust the conclusion of a treatment effect (or lack thereof) is to this value.
}
\citet{Amjad2017} take an empirical Bayes approach to test the hypothesis that $\tau_{n_1+1,t}=0$.
They assume that $\boldsymbol{y}_{n_1+1,1:T}\sim\mathcal{N}\left(\tilde{\boldsymbol{Y}}_{1:T_1}^{\mathrm{c}}\boldsymbol{w},\sigma^2\boldsymbol{I}\right)$, where $\tilde{\boldsymbol{Y}}_{1:T_1}^{\mathrm{c}}$ are the de-noised control outcomes obtained via singular value thresholding, and the weights $\boldsymbol{w}$ have a $\mathcal{N}\left(0,\sigma^2_{\boldsymbol{w}}\boldsymbol{I}\right)$ prior distribution, for some value of $\sigma^2_{\boldsymbol{w}}$.
The posterior distribution of $\boldsymbol{w}$ can be used to calculate the posterior predictive distribution of $y_{n_1+1,t}^{(0)}$ ($t>T_1$).
Let $a$ and $b$ be the 97.5th and 2.5th centiles of this distribution.
The 95\% posterior credible interval for $\tau_{n_1+1,t}$ is $(y_{n_1+1,t} - a, y_{n_1+1,t} - b)$.
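A minimal sketch of this interval construction, with the posterior predictive draws replaced by simulated values for illustration (all numbers are ours):

```python
import numpy as np

rng = np.random.default_rng(6)
y_obs = 10.0                                 # observed outcome at time t > T1
y0_draws = rng.normal(7.0, 1.0, size=5000)   # posterior predictive draws of y^(0)

a, b = np.percentile(y0_draws, [97.5, 2.5])  # a > b
ci = (y_obs - a, y_obs - b)                  # 95% credible interval for tau
```

Here the implied effect of 3.0 lies inside the resulting interval, as expected.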
\textcolor{black}{
\citet{Abadie2010} also consider a variant of their placebo test in which the time of the intervention, rather than the unit that receives intervention, is changed.
They do not, however, propose this as being a way to calculate a $p$-value.
}
\textcolor{black}{
In their application, HCW fit an autoregressive model to the estimated intervention effects $\hat{\tau}_{n_1+1,T_1+1}^\mathrm{HCW},\ldots,\hat{\tau}_{n_1+1,T}^\mathrm{HCW}$.
They then test the null hypothesis that the mean of these effects, which they refer to as the long-run intervention effect, equals zero.
In their implementation of the HCW method, \citet{Gardea2017} use a test that is equivalent to the test proposed by \citet{Abadie2010,Abadie2015} for the SCM, to test if $\tau_{it}=0$ for all $i$ and $t>T_1$.
As pointed out by one of the referees, an intuitive approach to obtain confidence intervals for $\hat{\tau}_{n_1+1,t}^\mathrm{HCW}$ would be to use the bootstrap.
Finally, \citet{Li2017} derive the asymptotic distribution (as $T_1,T_2\rightarrow\infty$) of the average effect $\frac{1}{T_2}\sum_{t=T_1+1}^{T}\hat{\tau}_{n_1+1,t}^\mathrm{HCW}$.
}
\textbf{CIM}. For the CIM, a 95\% posterior credible interval for $\tau_{n_1+1,t}$ can be calculated as $(y_{n_1+1,t} - a, y_{n_1+1,t} - b)$, where $a$ and $b$ denote, respectively, the 97.5th and 2.5th centiles of the posterior predictive distribution of the counterfactual $y_{n_1+1, t}^{(0)}$.
\textcolor{black}{
\textbf{All methods}. Recently, there has been work building upon the end-of-sample stability test \citep{Andrews2003}.
For a single treated unit, the idea is that under the hypothesis of no intervention effect, the process $y_{n_1+1,t}-\hat{y}_{n_1+1,t}^{(0)}$ ($t=1,\ldots,T$) is stationary.
\citet{Cherno2017} propose a permutation procedure to test the stationarity of this process and show that their approach gives valid inference for several methods including DID, LFM and SCM.
\citet{Hahn2017} apply the same idea to the SCM.
Both note that confidence sets for the intervention effect can be obtained by inverting the test.
}
\section{Implementation issues}\label{sec:practice}
In this section, we discuss issues related to the practical implementation of the methods presented in Section \ref{sec:met}: model choice and diagnostic checks.
\subsection{Model choice}\label{sec:model}
\textit{Choosing the control units}.
When implementing the SCM, HCW, DI and CIM, it may be desirable to exclude some of the potential control units.
Using all potential controls might result in non-unique causal effect estimates when there are more such controls than pre-intervention time points.
Moreover, standard errors of estimates can be reduced by discarding controls whose outcomes are not related to the outcome of the treated unit.
HCW develop a two-stage approach to exclude potential controls.
For each $\ell=1,\dots,n_1$ they implement their method ${n_1 \choose \ell}$ times, where each time they use a different subset of size $\ell$ of the control units.
For each $\ell$ they choose the subset that maximises the regression $R^2$ and thus obtain $n_1$ candidate models.
They recommend choosing one of these $n_1$ models according to a model selection criterion such as the AIC.
An alternative approach was suggested by \citet{Li2017}, who use the least absolute shrinkage and selection operator (LASSO) to select controls.
DI exclude potential controls by encouraging some of the weights $\beta_i$ to shrink towards (or even equal to) zero.
This is achieved by including the penalty term
\begin{equation}\label{eq:scpen}
\rho\left(\frac{1-\phi}{2}\sum_{i=1}^{n_1}\beta_i^2+\phi\sum_{i=1}^{n_1}|\beta_i|\right),
\end{equation}
in the objective function \eqref{eq:net}, where $\rho$ and $\phi$ are penalty parameters.
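The following sketch (ours) illustrates the shrinkage behaviour of such an elastic-net-type penalty on simulated data; the mixing parameter is written generically as \texttt{mix}, and this illustrates the mechanism rather than DI's actual estimator.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n1, T1 = 5, 50
Y0 = rng.normal(size=(n1, T1))                        # control outcomes
y1 = np.array([1.0, 0.5, 0.0, 0.0, 0.0]) @ Y0 \
     + rng.normal(scale=0.1, size=T1)                 # treated unit

def objective(beta, rho, mix=0.5):
    fit = np.sum((y1 - beta @ Y0) ** 2)
    penalty = rho * ((1 - mix) / 2 * np.sum(beta ** 2)
                     + mix * np.sum(np.abs(beta)))
    return fit + penalty

# Powell is derivative-free, so it copes with the non-smooth l1 term
beta_light = minimize(objective, np.zeros(n1), args=(0.01,),
                      method="Powell").x
beta_heavy = minimize(objective, np.zeros(n1), args=(50.0,),
                      method="Powell").x
```

Increasing the overall penalty shrinks the estimated weights towards zero, which is the behaviour DI rely on to exclude potential controls.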
For the CIM, \citet{Brodersen2015} induce sparsity on the vector $(\beta_1,\ldots,\beta_{n_1})^\top$ that describes the dependence on controls by using a spike-and-slab prior.
\textit{Choosing the covariates}.
An issue that may arise when implementing the linear DID and the LFM methodology of \citet{Xu2017} is the choice of covariates to include.
Exclusion of potential covariates may be desirable for the same reasons that one might exclude control units.
For the linear DID model, covariates, which may include lagged outcomes and interactions of lagged outcomes with the covariates, may be selected by imposing sparsity on the regression coefficient vector, using, for example, the LASSO.
\textcolor{black}{For the LFM, one can use the factor-lasso approach of \citet{Hansen2016}.}
When implementing the SCM, users need to decide which variables (pre-intervention outcomes, covariates or summaries of these) to use to determine the weights.
\citet{Ferman2016b} demonstrate that the estimated counterfactual may differ depending on which variables are used.
\citet{Dube2015} develop the following approach for selecting among $K$ sets of variables.
First, for every set $k$ ($k=1,\dots,K$) they apply the SCM to the data on every control unit in turn, and calculate the predicted outcomes $\hat{y}_{it}^{(k)}$ ($i=1,\dots,n_1$) based on the estimated weights.
Then, they choose the set $k^\ast$ that minimises the mean (over control units) MSE between observed and predicted outcomes $\hat{y}_{it}^{(k)}$ in the post-intervention period.
An alternative approach when $T_1$ is large is to split the pre-intervention data into a training dataset, to which the SCM is applied using different sets of variables, and a validation dataset, which is used to assess which set has the best predictive performance.
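The split-sample idea can be sketched as follows (hypothetical candidate sets and names, ours): fit each candidate model on the training portion of the pre-intervention period and select the one with the smallest validation MSE.

```python
import numpy as np

rng = np.random.default_rng(7)
T1, split = 40, 30                        # pre-period and train/validation split
t = np.arange(T1, dtype=float)
y = 2.0 + 0.3 * t + rng.normal(scale=0.5, size=T1)   # treated unit, pre-period

# hypothetical candidate predictor sets for the counterfactual model
candidates = {
    "intercept_only": np.ones((T1, 1)),
    "intercept_and_trend": np.column_stack([np.ones(T1), t]),
}

def validation_mse(X):
    coef, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
    resid = y[split:] - X[split:] @ coef
    return np.mean(resid ** 2)

best = min(candidates, key=lambda k: validation_mse(candidates[k]))
```

Since the simulated outcome trends upward, the trend-based candidate predicts the validation period far better and is selected.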
\textit{Other issues}.
Some of the methods for estimating the parameters of the LFM \citep{Gobillon2016,Chan2016,Xu2017} require that the number of factors $J$ be chosen.
The usual approach is to fit the LFM for various values of $J$ and determine the optimal $J$ using cross-validation.
An alternative for choosing $J$ is to use the procedures developed by \citet{Bai2002}.
However, these approaches provide estimates of standard errors that do not account for the uncertainty about $J$.
For the CIM, practitioners need to decide what dynamical components to include in the counterfactual model.
Similar methods to those used for choosing variables for the SCM can be used.
An alternative is to fit several models and use the one that achieves the optimal trade-off between accuracy (the difference $y_{n_1+1,t}-\hat{y}_{n_1+1,t}$) and precision (the length of the credible interval for $y_{n_1+1,t}$) in the pre-intervention period.
In small datasets, the inferences provided by the CIM will be sensitive to the choice of prior distributions.
Therefore, these specifications should ideally be determined based on expert opinion.
\subsection{Diagnostics} \label{sec:blacks}
All the methods described in this article make assumptions about the counterfactual outcomes of the treated units in the post-intervention period.
Since these outcomes cannot be observed, it is never possible to test the full set of assumptions.
Nonetheless, it is sometimes possible to assess the validity of a subset of these assumptions using data from the pre-intervention period.
When no covariates are used and $T_1>1$, an informal check of the parallel trends assumption of DID methods can be conducted by plotting the average outcomes of control and treated units in the pre-intervention period \citep{Keele2013}: an approximately constant (over time) distance between the two lines suggests that parallel trends is plausible.
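This check can be sketched numerically as follows (simulated data, names ours): compute the treated-minus-control gap over the pre-intervention period and inspect whether it trends.

```python
import numpy as np

rng = np.random.default_rng(5)
T1 = 20
t = np.arange(T1, dtype=float)

control_avg = 1.0 + 0.5 * t + rng.normal(scale=0.1, size=T1)
treated_par = 2.0 + 0.5 * t + rng.normal(scale=0.1, size=T1)  # same trend
treated_div = 2.0 + 0.8 * t + rng.normal(scale=0.1, size=T1)  # diverging trend

# slope of the treated-minus-control gap: ~0 supports parallel trends
slope_par = np.polyfit(t, treated_par - control_avg, 1)[0]
slope_div = np.polyfit(t, treated_div - control_avg, 1)[0]
```

A gap slope near zero is consistent with parallel trends, whereas a clearly non-zero slope (as in the diverging case) suggests the assumption is implausible.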
The SCM should not be used when the outcome of the treated unit lies outside the convex hull of the outcomes of controls units.
This can be checked by plotting the time-series of the outcome on all the units.
The fit provided for the outcomes on treated units in the pre-intervention period can be used as a diagnostic check.
Intuitively, if a model is not predictive of the outcome in the pre-intervention period, it is less likely to provide good predictions for the counterfactuals in the post-intervention period.
Goodness-of-fit can be assessed using the MSE between the observed and predicted values.
However, in order for a good pre-intervention fit to be reassuring, one needs to establish that it does not occur due to overfitting, as can be the case \textit{e.g}.\ for the SCM when $n_1>T_1$.
For the methods that provide fitted values for the outcomes of control units, \textit{i.e.}\ the linear DID model and the method of \citet{Xu2017}, one can further use the fit over the post-intervention period for these units as a diagnostic tool.
Finally, for both the linear DID model and the LFM method of \citet{Xu2017}, extrapolation biases may occur when the covariates (and loadings for the LFM) of treated and control units do not share a common support.
In order to exclude the possibility of such biases, it suffices to ensure that the characteristics of the treated units are not extreme compared to the characteristics of control units.
When a small number of covariates (and factors) is used, one can visually compare the two groups for each covariate (and loading) in turn.
If this is not feasible, methods for multivariate outlier detection (\textit{e.g}.\ \citet{Filz2008}) can be used to identify treated units with extreme characteristics.
\section{Discussion}\label{sec:discussion}
\subsection{Connections between methods}\label{sec:connections}
\textcolor{black}{
There are several ways in which the methods described in this paper relate to one another.
}
\textcolor{black}{
Firstly, for the case of a single treated unit and no covariates, most of them propose counterfactual estimators of the form $\hat{y}^{(0)}_{n_1+1,t}=\alpha_t+\sum_{i=1}^{n_1}\beta_iy_{it}$ ($t>T_1$) with $\alpha_t$ and $\beta_i$ being estimated using the data from the pre-intervention period (\textit{i.e.}\ $t\leq T_1$).
For the DID method, the parallel trends assumption implies that $\alpha_t=\frac{1}{T_1}\sum_{s=1}^{T_1}\left(y_{n_1+1,s}-\frac{1}{n_1}\sum_{i=1}^{n_1}y_{is}\right)$ for all $t>T_1$ and $\beta_i=\frac{1}{n_1}$ for all $i\leq n_1$ \citep{Cherno2017}.
The SCM assumes $\alpha_t=0$ for all $t$ and requires that $\beta_1,\ldots,\beta_{n_1}$ are non-negative and sum to one.
The HCW and DI methods impose the constraint that the intercept is constant over time, \textit{i.e.} $\alpha_t=\alpha$ for all $t$.
Finally, the CIM assumes that $\alpha_t$ obeys a time-series model (\textit{e.g.}\ a random walk model).
When there are covariates these similarities break down because methods account for covariates in a different way.
}
\textcolor{black}{
Secondly, most of the methods relate to the LFM.
We have already seen that the linear DID model \eqref{eq:didb} is a special case of the LFM \eqref{eq:factor}.
As discussed in Appendix \ref{sec:theory}, the SCM and HCW estimators are asymptotically unbiased when the true data-generating mechanism obeys a LFM.
We expect that, due to the similarities with the HCW estimator just explained, both the DI and CIM estimators will also be unbiased under the same LFM.
}
\subsection{Recommendations for implementation}
\textcolor{black}{
None of the methods is universally superior to the others.
Extensive simulation experiments comparing the relative performance of a subset of them have been conducted by multiple authors including \citet{Gobillon2016,ONeill2016,Gardea2017,Xu2017} and \citet{Kinn2018}.
They all find settings in which one of the methods outperforms the others.
However, the findings from these simulation studies may not generalise to other data generating mechanisms.
Practitioners should choose the method to apply on the basis of the characteristics of the dataset and, in particular, the values of $n_1$, $T_1$ and their ratio $n_1/T_1$.}
\textcolor{black}{
The DID method can be used for any $n_1$ and $T_1$.
As explained in Sections \ref{sec:did} and \ref{sec:connections}, the DID method arguably requires the strongest assumptions.
As a result, it may provide more precise estimates of the intervention effects compared to the other methods.
However, these estimates might be severely biased when the parallel trends assumption does not hold (see simulation studies by \citet{ONeill2016} and \citet{Gobillon2016}).
This occurred in our application (Section \ref{sec:real1}), where the DID estimate of the average reunification effect had opposite sign compared to all the other estimates.
Hence, it is essential to test the plausibility of parallel trends in the pre-intervention period before applying DID to a dataset.
This is easy when there are no covariates.
}
\textcolor{black}{
The LFM can be used for any value of $n_1/T_1$.
However, both $n_1$ and $T_1$ should be at least moderate in size in order to accurately estimate the factors and loadings, respectively.
For example, for the asymptotic unbiasedness property of \citeauthor{Xu2017}'s method (see Appendix \ref{sec:theory}) to be relevant to a finite sample, they recommend $T_1>10$ and $n_1>40$.
}
\textcolor{black}{
Synthetic control approaches are mostly suited for applications where $T_1$ is large.
This is required to accurately estimate the relationships between the outcome of the treated unit and the outcomes of control units.
When $n_1\geq T_1$ regularisation is required because the number of parameters exceeds the number of observations\footnote{Synthetic control approaches regress the outcome of the treated unit on the outcomes of the control units. Therefore we can think of $\boldsymbol{y}_{1:n,t}$ as a single data point (observation) in a regression model.}.
Regularisation is possible for both the HCW and DI estimators, as described in Section \ref{sec:model}, but not for the SCM.
The SCM should not be used when the outcome of the treated unit does not lie in the convex hull of the outcomes of control units.
}
\textcolor{black}{
The CIM is similar in spirit to synthetic control approaches and also requires large $T_1$.
Because of its time-series component it can work even in cases when the outcome of the treated unit is not correlated with the outcomes of control units.
However, it requires larger $T_1$ than synthetic control-type approaches to estimate the additional time-series parameters.
In practice, the value of $T_1$ required will depend on the complexity of the time-series model.
When $n_1>T_1$, regularisation can be achieved via a spike-and-slab prior on the regression coefficients.
}
\textcolor{black}{
In applications where certain covariates are known to be highly predictive of the outcome, it is preferable to use the linear DID or LFM.
This is because they use the covariates of control units and therefore can estimate the regression parameters of the predictive covariates with higher precision compared to the HCW, DI and CIM\footnote{Although one might argue that for the HCW, DI and CIM, the effect of covariates is taken into account through the outcomes of control units which the covariates affect.}.
This can in turn lead to more precise estimates of the counterfactuals.
Covariates that are potentially affected by the intervention should not be included when using any of the methods except the SCM, because the treatment-free values of the covariate are not observed in the post-intervention period.
The SCM can use these covariates in the pre-intervention period to estimate the weights.
}
\textcolor{black}{
There will be applications where more than one method is appropriate.
This is to be expected considering their connections explained in Section \ref{sec:connections}.
For example, the SCM, HCW, DI and CIM estimators are all well-suited when $T_1$ is large and there are few control units.
In such cases, users might choose any of these methods.
However, it is still worth applying the remaining methods in order to check that conflicting results are not obtained.
Methods that perform poorly on diagnostic checks or are based on assumptions that seem unrealistic for the dataset of interest should not be considered.
Even within the same method a sensitivity analysis is recommended.
This can be carried out by implementing the method using different model specifications as explained in Section \ref{sec:model}.
Ideally, results obtained from the different models should not conflict.
For the SCM, HCW, DI and CIM, one can re-implement these methods excluding control units that received large coefficients (or weights for the SCM) in the first implementation, to provide reassurance that results are not driven by a single control unit.
}
\subsection{Connections with matching}
Our review does not cover matching methods even though some forms of matching are suitable for application in the setting that we are investigating.
This is because we view the SCM as the best suited matching method in this setting: by using data on all control units it attempts to construct an exact match for the treated unit\footnote{HCW, DI and CIM also attempt to construct an exact match for the treated units. However, these methods may rely on extrapolation, which is not done in matching approaches.}.
\textcolor{black}{
However, matching can be used prior to applying the methods described in this paper, to restrict the pool of controls to those with similar characteristics to the treated units.
This approach has been adopted for the DID \citep{ONeill2016}, LFM \citep{Gobillon2016} and CIM \citep{Schmitt2018}.
For the DID method, \citet{Ryan2018} showed that matching can reduce biases that occur when the parallel trends assumption is violated.
For a detailed overview of the matching literature in the context of causal inference with observational data, see \citet[Chapter 10]{Rosenbaum2002} or \citet{Stuart2010}.
See \citet{Imai2018} for matching techniques for time-series data.
}
\section{Proposals for future research}\label{sec:future}
There remain several open problems.
Most existing methods do not fully account for autocorrelation in the outcome of the treated unit measured over time.
In particular, the treatment effect estimates obtained by any of the methods except for the CIM are invariant to permutation of the time labels in the pre-intervention period.
There may be potential gains in efficiency by extending these methods to account for structure over time.
\textcolor{black}{
The SCM, HCW, DI and CIM assume a linear relationship between the outcome of the treated unit and the outcomes of control units but this is a strong assumption.
\citet{Carvalho2018} account for non-linear relationships by regressing $y_{n_1+1,t}$ on transformations of the outcomes of control units but it is hard to choose which transformations to use.
Therefore, it would be worth estimating the relationship between $y_{n_1+1,t}$ and the outcomes of the control units non-parametrically using, for example, machine learning techniques.
}
The methods we have described are designed to be applied to a single outcome.
In the majority of applications there are several outcomes that may be affected by the intervention.
For example, in the case study of Section \ref{sec:germany} we have considered per-capita GDP, but there are alternative indexes, such as the unemployment rate, which could instead be examined.
Modelling of all outcomes jointly may provide a more precise estimate of the causal effect of intervention on any one of them.
Although \citet{Robbins2017} provide an extension of the SCM method for multiple outcomes, the other methods could also benefit from being extended to handle multiple outcomes.
Another possible direction for future research is to develop models that take into account geographic location of units.
In many applications, one might expect the outcomes on units with spatial proximity to be correlated.
It would be useful to develop models that incorporate these correlations.
\citet{Lopes2008} present a Bayesian LFM that models the correlation between the loadings of any two units as a function of the distance between these units.
Their model could be used to estimate intervention effects with minor modifications.
We will investigate some of these problems in our future work.
\section{Introduction}\label{sec:intro}
Evaluation of the causal effect of an intervention (\textit{e.g}.\ a newly introduced policy, a novel experimental practice or an unexpected event) on an outcome of interest is a problem frequently encountered in several fields of scientific research.
These include economics \citep{Angrist2009,Imbens2009}, epidemiology and public health \citep{Rothman2005,Glass2013}, management \citep{Antonakis2010}, marketing \citep{Rubin2006,Varian2016}, and political sciences \citep{Keele2015}.
Researchers are often interested in assessing the impact of an intervention (occasionally referred to as treatment henceforth) in situations where: i) the data are observational, \textit{i.e.}\ the allocation of the sample units to the intervention and control groups is not randomised, but instead determined by factors that confound the association between the indicator of intervention and the outcome of interest; ii) the intervention is binary, \textit{i.e.}\ sample units cannot receive the interventions at varying intensities; iii) only one or a small number of units are treated; and iv) \textcolor{black}{at each of a set of time points, before and after the time at which the intervention is introduced, the outcome is measured on every sampled unit, thus giving rise to panel data.}
Several statistical methods for causal inference in this setting have been developed to account for the special characteristics of the data: the presence of (likely unobserved) confounders, the existence of temporal trends in the outcome and the limited number of sample units to which the intervention is given.
\textcolor{black}{
In this paper we review the existing literature, motivated by recent methodological developments and the increasing application of these methods to real-life problems.
Since the existing literature comes from a wide range of research disciplines, our focus is on unifying the various methods under a common terminology and notation, appropriate to a statistical audience.
Further, we draw connections between various methods and point out issues related to their practical implementation.
Finally, we suggest some possible directions for future research.
}
We focus on four classes of methods: \textit{difference-in-differences}, \textit{latent factor models}, \textit{\textcolor{black}{synthetic control-type methods}} and the \textit{causal impact} method.
Excluded from our review are propensity score methods \citep{Rosenbaum1983} (see \citet{Austin2011} for a recent review), because the \textcolor{black}{small number of treated units} does not allow accurate estimation of the parameters of a propensity score model, and the interrupted time series method \citep{Bernal2016}, because it does not use data on the units that do not receive the intervention.
This manuscript is structured as follows.
In Section \ref{sec:background} we define notation, describe the causal framework underlying the methods and introduce the illustrative example.
Section \ref{sec:met} presents the four \textcolor{black}{classes of methods}.
\textcolor{black}{
Section \ref{sec:uncertainty} is about quantification of uncertainty and hypothesis testing.
Section \ref{sec:practice} discusses issues related to practical implementation.
Sections \ref{sec:real1} and \ref{sec:discussion} contain an application to real data and a discussion, respectively.
Finally, in Section \ref{sec:future} we highlight some remaining problems in the field.
}
\section{Estimation methods} \label{sec:met}
In this section, we \textcolor{black}{review four classes of methods} for predicting the counterfactual treatment-free outcomes $y_{it}^{(0)}$ of the treated units at post-intervention times, needed to calculate $\hat{\tau}_{it}$.
\textcolor{black}{
Here we focus on the intuition and the assumptions underlying each method, and report results on theoretical properties of unbiasedness and consistency in Appendix \ref{sec:theory}.
For full technical details of each approach, the reader is directed to the original publications.
}
\subsection{Difference-in-differences}\label{sec:did}
Early works \citep{Ashenfelter1978,Ashenfelter1985,Card1994} used so-called difference-in-differences (DID) models to compare two time periods (pre versus post-intervention).
The identifying assumption in DID models is that the average outcomes of control and treated units in the absence of an intervention would follow \textit{parallel trends} over time \citep{Abadie2005}.
Figure \ref{fig:did} is a graphical representation of the basic DID method \textcolor{black}{for a single control and single treated unit.
The four points A-D on the graph represent the control (A) and treated (B) units at $t=1$, and the control (C) and treated (D) units at $t=2$.
Under the parallel trends assumption, the difference between the outcome of the treated unit and of the control unit would be constant over time in the absence of intervention.
The counterfactual outcome for the treated unit at the post-intervention time can then be predicted as point E in Figure \ref{fig:did}.
Letting $y_A,y_B,y_C,y_D$ and $y_E$ denote the $y$-values corresponding to the points A, B, C, D and E in Figure \ref{fig:did}, respectively, the estimated effect of the intervention is
\begin{eqnarray}\label{eq:dida}\nonumber
\hat\tau_{22}& = & y_D - y_E\\ \nonumber
& = & y_D - \left\{y_C +\left\{y_B-y_A \right\} \right\} \\ \nonumber
& = & \left\{y_D - y_C\right\} - \left\{y_B-y_A\right\},
\end{eqnarray}
\textit{i.e.} the difference (after versus before) of the differences between the two units.
The same method can be used when multiple time points \textcolor{black}{and multiple control units} are available.}
\begin{figure}[h]
\centering
\includegraphics[scale=0.40]{Figures/pt.pdf}
\vspace{-0.21in}
\caption{Graphical illustration of the difference-in-differences method.}
\label{fig:did}
\end{figure}
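As a toy numerical check of the calculation above, the two-period, two-unit DID estimate can be computed directly; the values below are hypothetical and not taken from Figure \ref{fig:did}:

```python
# Two-period, two-unit difference-in-differences (hypothetical values).
y_A, y_B = 10.0, 14.0   # control (A) and treated (B) unit at t = 1 (pre-intervention)
y_C, y_D = 12.0, 19.0   # control (C) and treated (D) unit at t = 2 (post-intervention)

# Counterfactual for the treated unit under parallel trends (point E).
y_E = y_C + (y_B - y_A)

# DID estimate: the difference (after versus before) of the differences.
tau_hat = (y_D - y_C) - (y_B - y_A)
assert tau_hat == y_D - y_E
```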
A commonly used solution to adjust for the effect of covariates $\boldsymbol{x}_{it}$\footnote{\textcolor{black}{\citet{Blundell2004} and \citet{Abadie2005} propose alternative DID estimators that can account for the effect of covariates. However, these methods are not suitable when only a small number of units are treated and hence are not reviewed in this article.}} is to specify a parametric linear DID model for the observed outcome $y_{it}$ \citep{Angrist2009,Jones2011}
\begin{eqnarray}\label{eq:didb}\nonumber
y_{it}&=&y_{it}^{(0)}+\tau_{it} d_{it} \\
y_{it}^{(0)}&=&\boldsymbol{x}_{it}^\top\boldsymbol\theta+\kappa_i+\mu_t+\varepsilon_{it},
\end{eqnarray}
where $\boldsymbol{\theta}$ is a vector of regression coefficients, $\kappa_i$ is an (unknown) fixed effect of unit $i$, $\mu_t$ allows for temporal trends and $\varepsilon_{it}$ are the zero-mean error terms which are independent of $d_{js},\boldsymbol{x}_{js},\kappa_{j},\mu_s$ for all $i,j,t,s$.
Lagged outcomes and/or transformations of $\boldsymbol{x}_{it}$ can be included as extra covariates in the linear DID model \citep{Jones2011}.
The parameters of the linear DID model can be estimated by ordinary least squares (OLS) regression (see \citet[pp. 167]{Angrist2009} for details).
Let $\hat{\tau}_{it}^{\mathrm{DID}}$ denote the resulting estimate of $\tau_{it}$.
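A minimal sketch of fitting the linear DID model by OLS, on simulated data with a constant treatment effect and with the unit and time fixed effects entered as dummy variables; all names and values are illustrative:

```python
import numpy as np

# Simulate a small panel satisfying the linear DID model with constant effect tau.
rng = np.random.default_rng(0)
n, T, T1 = 6, 8, 4            # units, time points, last pre-intervention time
tau = 2.0                     # true (constant) intervention effect
kappa = rng.normal(size=n)    # unit fixed effects
mu = rng.normal(size=T)       # common time effects

d = np.zeros((n, T))
d[n - 1, T1:] = 1.0           # the last unit is treated after time T1
y = kappa[:, None] + mu[None, :] + tau * d + 0.1 * rng.normal(size=(n, T))

# Design matrix: unit dummies, time dummies (first dropped to avoid collinearity), d.
unit_dum = np.kron(np.eye(n), np.ones((T, 1)))
time_dum = np.kron(np.ones((n, 1)), np.eye(T))[:, 1:]
X = np.hstack([unit_dum, time_dum, d.reshape(-1, 1)])
beta, *_ = np.linalg.lstsq(X, y.reshape(-1), rcond=None)
tau_hat = beta[-1]            # OLS estimate of the intervention effect
```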
\textcolor{black}{
The linear DID model \eqref{eq:didb} makes very strong assumptions regarding the data generating mechanism.
The term $\kappa_i$ in \eqref{eq:didb} allows expected counterfactual treatment-free outcomes to be higher (or lower) in the treated units than in control units, even after adjusting for observed covariates $\boldsymbol{x}_{it}$.
Hence, $\kappa_i$ can represent an unobserved confounder.
However, Equation \eqref{eq:didb} assumes that the effect of this possible confounder on the outcome is constant over time.
Similarly, the term $\mu_t$ in \eqref{eq:didb} can only account for temporal trends that are common to both treated and control units.
}
\textcolor{black}{
Although the linear DID specification \eqref{eq:didb} is often preferred in practice due to its simple interpretation and implementation, there exist other methods that build on the parallel trends idea.
\citet{Athey2006} relax the linearity assumption of \eqref{eq:didb}, allowing the outcome $y_{it}$ to be a more general (non-linear) function of the unobserved characteristics of unit $i$.
However, it is difficult to implement their method when there are more than two time points and $\boldsymbol{x}_{it}$ is high-dimensional.
Another method based on parallel trends is the triple differences method \citep{Atanasov2016,Wing2018}, which uses two groups of control units.
For example, when the treated group consists of male employees of a company, then the control group can be either the female employees of the same company, or the male employees of a different company.
In such situations, the triple differences method can use the second control group to correct for biases caused by the violation of the assumption of parallel trends between the outcomes of the treated group and the control group.
}
There are many examples of the use of DID models.
\citet{Ashenfelter1978} and \citet{Ashenfelter1985} investigate the effect of training programs on worker earnings.
\citet{Card1990} assesses the impact that the Mariel Boatlift, a mass migration of Cuban citizens to Miami in 1980, had on the city's labour market, using four other cities as controls.
\citet{Card1994} estimate the effect that the increase of the minimum salary had on employment rates in New Jersey's fast-food industry in 1992, using fast-food restaurants located in Pennsylvania as the control group.
See \citet{Galiani2005,Branas2011,King2013} for recent applications of the DID approach.
\subsection{Latent factor models}\label{sec:lfm}
\textcolor{black}{
In the linear DID model of Equation \eqref{eq:didb}, there is one unit-specific term, $\kappa_i$, and this can represent a single unobserved confounder whose effect on the outcome is constant over time.
In the following latent factor model (LFM), $\kappa_i$ is replaced by $\boldsymbol\lambda_i^\top\boldsymbol{f}_t$:
\begin{eqnarray}\label{eq:factor}\nonumber
\quad y_{it}&=&y_{it}^{(0)}+\tau_{it} d_{it} \\
y_{it}^{(0)}&=&\boldsymbol{x}^\top_{it}\boldsymbol\theta + \boldsymbol\lambda_i^\top\boldsymbol{f}_t +\varepsilon_{it},
\end{eqnarray}
where $\boldsymbol{f}_t=(f_{1t},\dots,f_{Jt})^\top$ are $J$ time-varying factors, $\boldsymbol\lambda_i=(\lambda_{i1},\dots,\lambda_{iJ})^\top$ are unit-specific factor loadings, and $\varepsilon_{it}$ are the zero-mean errors which are independent of $d_{js},\boldsymbol{x}_{js},\boldsymbol\lambda_{j},\boldsymbol{f}_s$ for all $i,j,t,s$.
When $\boldsymbol{f}_t=(1,\mu_t)^\top$ and $\boldsymbol\lambda_i=(\kappa_i,1)^\top$, the second line in \eqref{eq:factor} reduces to the second line in \eqref{eq:didb}.
So, the linear DID model is a special case of the LFM.
Just as $\kappa_i$ in the linear DID model can represent a single unobserved confounder, $\boldsymbol\lambda_i$ can represent $J$ unobserved confounders, whose effect on the outcome varies with time and is described by $\boldsymbol{f}_t$.
Hence, the LFM \eqref{eq:factor} relaxes the DID assumption that the average outcomes of control and treated units follow parallel trends.
In econometrics, $\boldsymbol{f}_t$ is interpreted as a `shock' that affects all units at time $t$ and $\boldsymbol\lambda_i$ represents the response of unit $i$ to these shocks \citep{Bai2009}.
}
\citet{Xu2017} proposes a three-step estimation procedure for predicting counterfactual treatment-free outcomes using the LFM model.
In the first step, observations on control units are used to estimate $\boldsymbol\theta$, $\boldsymbol{f}_1,\ldots,\boldsymbol{f}_T$ and $\boldsymbol\lambda_1,\ldots,\boldsymbol\lambda_{n_1}$ through the iterative procedure of \citet{Bai2009} that minimises the sum of squared errors $\sum_{i=1}^{n_1}\sum_{t=1}^{T}(y_{it}-\hat{y}_{it}^{(0)})^2$ (equivalently, the mean squared error, MSE) between the observations $y_{it}$ and the corresponding predicted values $\hat{y}_{it}^{(0)}=\boldsymbol{x}_{it}^\top\hat{\boldsymbol\theta}+\hat{\boldsymbol\lambda}^\top_i\hat{\boldsymbol{f}}_t$.
In the second step, the estimated factor loadings for the treated units, $\hat{\boldsymbol\lambda}_i$ ($i>n_1$), are obtained conditional on the parameter estimates obtained in the first step by minimising the MSE between $y_{it}$ and $\hat{y}_{it}^{(0)}$ for the treated units in the pre-intervention period.
Finally, the third step involves estimating the intervention effects $\tau_{it}$ as $\hat\tau_{it}^{\mathrm{XU}}=y_{it}-\hat{y}_{it}^{(0)}$.
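The three steps can be sketched on simulated data. As simplifying assumptions, the sketch omits covariates and replaces \citeauthor{Bai2009}'s iterative estimator in the first step with a plain SVD of the control panel:

```python
import numpy as np

# Simulate a latent factor panel: J = 2 factors, n1 controls, one treated unit.
rng = np.random.default_rng(1)
n1, T, T1, J = 40, 30, 20, 2
f = rng.normal(size=(J, T))                    # latent time-varying factors
lam_c = rng.normal(size=(n1, J))               # control factor loadings
y_c = lam_c @ f + 0.1 * rng.normal(size=(n1, T))

lam_tr = np.array([0.5, -1.0])                 # loadings of the treated unit
y_tr0 = lam_tr @ f + 0.1 * rng.normal(size=T)  # treatment-free outcome

# Step 1: estimate the J factors from the control panel (top right singular vectors).
U, s, Vt = np.linalg.svd(y_c, full_matrices=False)
f_hat = Vt[:J]                                 # estimated factors (up to rotation/scale)

# Step 2: loadings of the treated unit from its pre-intervention fit.
lam_hat, *_ = np.linalg.lstsq(f_hat[:, :T1].T, y_tr0[:T1], rcond=None)

# Step 3: predicted treatment-free counterfactuals after T1 (tau-hat = observed - these).
y_hat_post = lam_hat @ f_hat[:, T1:]
```

The regression in the second step absorbs any invertible rotation of the factor basis, so the counterfactual prediction does not depend on the well-known rotational indeterminacy of the factors.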
\textcolor{black}{
Several authors have proposed alternative methods for predicting counterfactuals using the LFM.
These include \citet{Ahn2013,Gobillon2016,Chan2016} and \citet{Athey2017}.
We have focused on the method of \citeauthor{Xu2017} because, to the best of our knowledge, it is the only one for which an R package has been developed.
}
\citet{Gobillon2016} and \citet{Xu2017} describe applications of the LFM to real data.
\citet{Gobillon2016} estimate the effect on unemployment rates of a French program offering tax reliefs to companies that hired at least 20\% of their personnel from the local labour force.
\textcolor{black}{\citet{Xu2017} evaluates the impact of Election Day Registration (EDR), a law that enables eligible citizens to register on site when they arrive at the voting centre, on voter turnout in the US}.
For more applications of the LFM see \citet{Kim2014} and \citet{Navarro2018}.
\subsection{\textcolor{black}{Synthetic control-type approaches}} \label{sec:sc}
The original synthetic controls method (SCM) was developed by \citet{Abadie2003a} and \citet{Abadie2010} and can only be applied to one treated unit at a time.
The idea behind the SCM is to find weights $\boldsymbol{w}=\left(w_1,\dots,w_{n_1}\right)^\top$ for the control units such that the weighted average of the controls' outcomes best predicts (in terms of MSE) the outcome of the treated unit during the pre-intervention period, and then use the weights to estimate the counterfactual treatment-free outcomes in the post-intervention period.
The set of weights $\boldsymbol{w}$ minimises
\begin{equation} \label{eq:synth1}
\sqrt{\left(\boldsymbol{y}_{n_1+1,1:T_1}-\boldsymbol{Y}_{1:T_1}^{\mathrm{c}}\boldsymbol{w}\right)^\top\boldsymbol{V}\left(\boldsymbol{y}_{n_1+1,1:T_1}-\boldsymbol{Y}_{1:T_1}^\mathrm{c}\boldsymbol{w}\right)},
\end{equation}
subject to the constraints
\begin{equation}\label{eq:scw}
\sum_{i=1}^{n_1}{w_i}=1 \mbox{ and } w_i\geq0,
\end{equation}
where $\boldsymbol{Y}_{1:T_1}^\mathrm{c}$ is the $T_1\times n_1$ matrix with $i$-th column $\boldsymbol{y}_{i,1:T_1}$ and $\boldsymbol{V}$ is a $T_1\times T_1$ symmetric, positive semi-definite matrix reflecting the importance given to the different pre-intervention time points \citep{Abadie2003a}.
The predicted counterfactual of the treated unit is:
\begin{equation}\label{eq:synth2}
\hat{\boldsymbol{y}}^{(0)}_{n_1+1,(T_1+1):T}=\boldsymbol{Y}_{(T_1+1):T}^{\mathrm{c}}\boldsymbol{w},
\end{equation}
where $\boldsymbol{Y}_{(T_1+1):T}^{\mathrm{c}}$ is defined analogously to $\boldsymbol{Y}_{1:T_1}^{\mathrm{c}}$.
The estimated intervention effect at time $t$ ($t>T_1$) is then $\hat{\tau}_{n_1+1,t}^\mathrm{SCM}={y}_{n_1+1,t}-\hat{{y}}^{(0)}_{n_1+1,t}$.
It is also possible to use the covariates, by replacing $\boldsymbol{y}_{i,1:T_1}$ with $\boldsymbol{z}_{i,1:T_1}=(\boldsymbol{y}_{i,1:T_1}^\top,\boldsymbol{x}_{i,1:T_1}^\top)^\top$ in Equation \eqref{eq:synth1}.
\citet{Abadie2010} suggest that instead of using the full data $\boldsymbol{z}_{i,1:T_1}$, it may be reasonable to consider only a few summaries, such as the mean outcome $\frac{1}{T_1}\sum_{t=1}^{T_1}{y_{it}}$ in the pre-intervention period, and the corresponding means of the covariates \textit{i.e}.\ to replace $\boldsymbol{z}_{i,1:T_1}$ by $(\frac{1}{T_1}\sum_{t=1}^{T_1}{y_{it}},\frac{1}{T_1}\sum_{t=1}^{T_1}{\boldsymbol{x}^\top_{it}})^\top$.
Such reduction of the dimensionality of $\boldsymbol{z}_{i,1:T_1}$ might be necessary in applications with $T_1\gg n_1$ in order to reduce computation time.
The choice of matrix $\boldsymbol{V}$ can either be based on a subjective judgement of the relative importance of the variables in $\boldsymbol{y}_{i,1:T_1}$ or $\boldsymbol{z}_{i,1:T_1}$ or be determined through a data-driven approach.
For example, \citet{Abadie2003a} and \citet{Abadie2010} choose $\boldsymbol{V}$ as the positive definite diagonal matrix that minimises the MSE between the observed pre-intervention outcomes of the treated unit, $\boldsymbol{y}_{n_1+1,1:T_1}$, and the corresponding estimates $\hat{\boldsymbol{y}}_{n_1+1,1:T_1}=\boldsymbol{Y}_{1:T_1}^{\mathrm{c}}\boldsymbol{w}(\boldsymbol{V})$, where $\boldsymbol{w}(\boldsymbol{V})$ is the solution to \eqref{eq:synth1} for a fixed $\boldsymbol{V}$.
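A sketch of the weight optimisation \eqref{eq:synth1}--\eqref{eq:scw} on simulated data, taking $\boldsymbol{V}=\boldsymbol{I}$ so that minimising \eqref{eq:synth1} is equivalent to minimising the squared Euclidean distance:

```python
import numpy as np
from scipy.optimize import minimize

# Simulated pre-intervention data: the treated unit is (approximately) a convex
# combination of the control units, so the SCM constraints are appropriate.
rng = np.random.default_rng(2)
n1, T1 = 5, 40
Y_c = rng.normal(size=(T1, n1)).cumsum(axis=0)        # control outcomes (T1 x n1)
w_true = np.array([0.5, 0.3, 0.2, 0.0, 0.0])
y_tr = Y_c @ w_true + 0.05 * rng.normal(size=T1)      # treated unit, pre-period

def objective(w):
    # Squared distance; with V = I this has the same minimiser as eq. (synth1).
    r = y_tr - Y_c @ w
    return float(r @ r)

res = minimize(objective, x0=np.full(n1, 1.0 / n1), method="SLSQP",
               bounds=[(0.0, 1.0)] * n1,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
w_hat = res.x                                          # synthetic control weights
```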
The SCM makes no assumptions regarding the data generating mechanism.
The method has strong links with the matching literature, where the outcome of each treated individual is compared to the outcomes of controls with similar covariate values \citep{Rosenbaum2002,Stuart2010}.
However, it is more general in the sense that a good match is sought by weighted averaging of the controls.
The SCM also relates to the method of analogues used for time-series prediction.
The difference is that in the method of analogues there is only one time-series and `controls' are simply earlier segments of the time-series; for more details see \citet{Viboud2003}.
There have been several proposed extensions of the SCM.
To allow for multiple treated units, \citet{Kreif2016} apply the SCM to the averaged vector outcome $\bar{\boldsymbol{y}}^\mathrm{tr}=(\frac{1}{n_2}\sum_{i=n_1+1}^{n}{y_{i1}},\dots, \frac{1}{n_2}\sum_{i=n_1+1}^{n}{y_{iT}})^\top$ of the treated units.
\citet{Acemoglu2016} assume that the intervention effects $\boldsymbol{\tau}_{n_1+1},\dots,\boldsymbol{\tau}_{n}$ are equal and estimate the common effect at time $t$ ($t>T_1$) as the weighted average $\sum_{i=n_1+1}^{n}{q_i^{-1}\hat{\tau}_{it}}/
\sum_{i=n_1+1}^{n}{q_i^{-1}}$, where $\hat{\tau}_{it}$ ($i>n_1$) is obtained by applying the original algorithm to just the data on treated unit $i$ and the control units $1,\ldots,n_1$, and $q_i=\sqrt{T_1^{-1}(\boldsymbol{y}_{i,1:T_1}-\hat{\boldsymbol{y}}^{(0)}_{i,1:T_1})^\top(\boldsymbol{y}_{i,1:T_1}-\hat{\boldsymbol{y}}^{(0)}_{i,1:T_1})}$.
Their stated rationale for using weights $q_i^{-1}$ is that units with good fit in the pre-intervention period should be more reliable for estimating the common intervention effect and hence receive higher weights.
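For illustration, with hypothetical per-unit estimates $\hat\tau_{it}$ and pre-intervention fit measures $q_i$, the pooled effect is the $q_i^{-1}$-weighted average:

```python
import numpy as np

# Hypothetical per-unit SCM effect estimates at some time t > T1, and the
# corresponding pre-intervention root-mean-squared fit errors q_i.
tau_hat = np.array([1.8, 2.4, 2.1])
q = np.array([0.2, 0.8, 0.4])

# Units with better pre-period fit (smaller q_i) receive larger weight.
common_effect = np.sum(tau_hat / q) / np.sum(1.0 / q)
```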
\textcolor{black}{\citet[henceforth HCW]{Hsiao2012} and \citet[henceforth DI]{Imbens2016} extend the SCM by adding a time-constant intercept term to the SCM estimator and removing the constraints on the weights.
The intercept is necessary when the outcome of the treated unit is systematically (over time) higher or lower than the outcomes of the control units and hence there exists no set of weights that can provide a good fit for $y_{n_1+1,t}$ in the pre-intervention period.
The removal of the constraints on the weights is useful, for example, when there exist control units with outcomes that are negatively correlated with the outcomes on the treated unit.
HCW suggest estimating $y_{n_1+1,t}^{(0)}$ ($t>T_1$) as $\hat{y}_{n_1+1,t}^{(0)}=\beta_0+\sum_{i=1}^{n_1}{\beta_iy_{it}}$, where $\beta_0,\dots,\beta_{n_1}$ are the OLS coefficient estimates of the regression of $\boldsymbol{y}_{n_1+1,1:T_1}$ on $\boldsymbol{y}_{1,1:T_1},\dots,\boldsymbol{y}_{n_1,1:T_1}$, \textit{i.e.} they minimise \begin{equation}\label{eq:net}
\left(\boldsymbol{y}_{n_1+1,1:T_1}-\beta_0\boldsymbol{1}-\boldsymbol{Y}_{1:T_1}^{\mathrm{c}}\boldsymbol{\beta}\right)^\top\left(\boldsymbol{y}_{n_1+1,1:T_1}-\beta_0\boldsymbol{1}-\boldsymbol{Y}_{1:T_1}^{\mathrm{c}}\boldsymbol{\beta}\right),
\end{equation}
where $\boldsymbol{1}$ denotes a $T_1$-vector of ones, $\beta_0$ is the intercept and $\boldsymbol{\beta}=\left(\beta_1,\ldots,\beta_{n_1}\right)^\top$.
\citet{Amjad2017} also remove the constraint on the weights and suggest that, before estimating these weights, the data on the control outcomes $\boldsymbol{Y}_{1:T_1}^{\mathrm{c}}$ should be de-noised.
}
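A minimal sketch of the HCW estimator on simulated data: an unconstrained OLS regression of the treated unit's pre-intervention outcomes on the control outcomes, with an intercept. Regularisation, as in the DI estimator, would be added here when $n_1$ is large relative to $T_1$:

```python
import numpy as np

# Simulated data in which the true weights are outside the SCM simplex
# (negative and larger than one), so the SCM constraints would be harmful.
rng = np.random.default_rng(3)
n1, T, T1 = 4, 30, 20
Y_c = rng.normal(size=(T, n1)).cumsum(axis=0)        # control outcomes, all T periods
beta_true = np.array([1.2, -0.4, 0.0, 0.3])
y_tr0 = 5.0 + Y_c @ beta_true + 0.1 * rng.normal(size=T)

# OLS on the pre-intervention period: intercept plus control outcomes.
X_pre = np.hstack([np.ones((T1, 1)), Y_c[:T1]])
coef, *_ = np.linalg.lstsq(X_pre, y_tr0[:T1], rcond=None)

# Predicted treatment-free outcomes after T1 (used to form tau-hat).
X_post = np.hstack([np.ones((T - T1, 1)), Y_c[T1:]])
y_hat_post = X_post @ coef
```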
\textcolor{black}{
\citet{Ben2018} introduced the augmented SCM.
First, the SCM is applied and weights $w_1,\ldots,w_{n_1}$ are obtained.
Second, a model (e.g.\ an LFM) for the treatment-free outcomes $y_{it}^{(0)}$ of all $n_1+1$ units is fitted to all the outcomes of the control units and the pre-intervention outcomes of the treated unit.
If $\tilde{y}^{(0)}_{it}$ ($i=1,\ldots, n_1+1$; $t>T_1$) denote the predicted treatment-free outcomes from this model, then $\sum_{i=1}^{n_1} w_i\tilde{y}^{(0)}_{it} - \tilde{y}^{(0)}_{n_1+1,t}$ is an estimate of the bias of the SCM estimator.
The augmented SCM estimator of the counterfactual $y_{n_1+1,t}^{(0)}$ equals the original SCM estimate minus this estimated bias.
They argue that this method is particularly useful when the SCM method provides a poor fit in the pre-intervention period.
}
\textcolor{black}{
\citet{Hazlett2018} estimate the weights using a kernel transformation of the pre-intervention outcomes.
This is done to ensure that higher-order features of the outcomes (the authors mention, \textit{e.g.}, volatility and variance) are taken into account when estimating the weights.
Using simulated examples, they showed that their approach can eliminate biases that occur if the untransformed outcomes are used to estimate the weights.
}
Several recent works utilise synthetic control-type approaches for estimating the effects of an intervention.
These include \citet{Cavallo2013}, who examine the effect of large-scale natural disasters on gross domestic product, and \citet{Ryan2016}, who investigate the impact that the UK's Quality and Outcomes Framework, a pay-for-performance scheme in primary care, had on population mortality.
For more applications of synthetic control-type methods, see \citet{Billmeier2013,Fujiki2015,Saunders2015} and \citet{Aytuug2017}.
\subsection{Causal impact}\label{sec:cim}
The causal impact method (CIM) was introduced by \citet{Brodersen2015} and can only be applied to a single treated unit at a time.
A Bayesian model is assumed for the outcome of the treated unit.
This model includes a time-series component that relates the outcome of the treated unit at time $t$ to previous outcomes on the same unit, and a regression component that uses the outcomes on control units as covariates.
Specifically:
\begin{eqnarray}\label{eq:bstseg}\nonumber
y_{n_1+1,t}^{(0)} &=& \beta_{0t}+\sum_{i=1}^{n_1}\beta_iy_{it}+\varepsilon_{t} \hspace{1cm} (t=1,\dots,T)
\\ \nonumber
\beta_{0,t+1} &=& \beta_{0,t}+\delta_{t}+\eta_{t}\\
\delta_{t+1} &=& \delta_{t}+\zeta_{t},
\end{eqnarray}
with mutually independent $\varepsilon_{t}\sim\mathrm{N}(0,\sigma_\varepsilon^2)$, $\eta_t\sim\mathrm{N}(0,\sigma_\eta^2)$ and $\zeta_t\sim\mathrm{N}(0,\sigma_\zeta^2)$, and priors for $\beta_{00}$, $\delta_0$, $\beta_1,\ldots,\beta_{n_1}$, $\sigma_\varepsilon^2$, $\sigma_\eta^2$ and $\sigma_\zeta^2$.
In Equations \eqref{eq:bstseg}, the component $\beta_{0t}$ induces temporal correlation in the outcome, the regression component $\sum_{i=1}^{n_1}\beta_iy_{it}$ relates $y_{n_1+1,t}^{(0)}$ to measurements from control units, and the error component $\varepsilon_{t}$ accounts for unexplained variability.
More complex models can be adopted \citep{Brodersen2015}, \textit{e.g}.\ by adding a seasonal component.
The model \eqref{eq:bstseg} is fitted to the observed data, $y_{n_1+1,1}^{(0)}, \ldots, y_{n_1+1,T_1}^{(0)}$, treating the counterfactuals $y_{n_1+1,T_1+1}^{(0)}, \ldots, y_{n_1+1,T}^{(0)}$ as unobserved random variables.
Independent, improper, uniform priors are used for $\tau_{n_1+1,T_1+1}, \ldots,\tau_{n_1+1,T}$.
Then, $L$ samples $y_{n_1+1, t}^{(0, l)}$ $(l=1, \ldots, L$) are drawn from the resulting posterior predictive distribution of the counterfactual outcome $y_{n_1+1, t}^{(0)}$ ($t > T_1$), thus providing samples $y_{n_1+1,t}-y_{n_1+1, t}^{(0, l)}$ from the posterior distribution of $\tau_{n_1+1,t}$.
Typically, this would be done using a Markov chain Monte Carlo algorithm.
A point estimate $\hat{\tau}_{n_1+1,t}^\mathrm{CIM}$ for the causal effect $\tau_{n_1+1,t}$ at time $t$ ($t>T_1$) is then given by its posterior mean.
\citet{Bruhn2017} use the CIM to assess the impact of pneumococcal conjugate vaccines on pneumonia-related hospitalisations using hospitalisations from other diseases as the control time-series.
\citet{Vocht2017} evaluate the benefits of stricter alcohol licensing policies on alcohol-related hospitalisations in several areas, control areas being other areas where these policies were not implemented.
See also \citet{DeVocht2016,Gonzalez2016,Vizzotti2016} for other applications of the CIM.
\section*{Acknowledgements}
We acknowledge funding and support from the NIHR Health Protection Unit on Evaluation of Interventions (PS, MH, DDA), Medical Research Council grants MC\_UU\_00002/10 (SRS) and MC\_UU\_00002/11 (DDA, AMP), Public Health England (DDA), and NIHR PGfAR RP-PG-0616-20008 (EPIToPe, PS, MH, DDA). The views expressed are those of the authors and not necessarily those of the NHS, the NIHR or the Department of Health.
\bibliographystyle{imsart-nameyear}
\section{Preliminaries}\label{sec:background}
\subsection{Notation} \label{sec:notation}
Let $i=1,\dots ,n$ index the entities (e.g.\ hospitals or general practices) for which the outcome of interest is observed: henceforth, we refer to these entities as units.
For unit $i$ we have measurements $\boldsymbol{y}_{i\cdot}=(y_{i1},\dots,y_{iT})^\top$, where $t$ indexes time.
We let $\boldsymbol{y}_{\cdot t}=(y_{1t},\dots,y_{nt})^\top$ denote the vector containing the set of $n$ observations at time $t$ and $\boldsymbol{y}_{i,t_1:t_2}=(y_{it_1},\dots,y_{it_2})^\top$ denote the measurements on unit $i$ from time $t_1$ to time $t_2$ ($t_1\leq t_2$).
Throughout, we assume that $y_{it}$ is univariate.
Let $d_{it}=1$ if unit $i$ receives the intervention at or before time $t$, and $d_{it}=0$ otherwise.
Let $\boldsymbol{d}_i=(d_{i1},\dots,d_{iT})^\top$.
Of the $n$ units, the first $n_1$ remain untreated for the entire study period.
We call these the \textit{controls}.
For the $n_2$ \textit{treated} units, there is a time $T_{1}$ ($1<T_1<T$) immediately after which the intervention is applied.
We assume that all treated units receive the intervention at the same time.
Hence $d_{it}=1$ if $i>n_1$ and $t>T_1$, and $d_{it}=0$ otherwise.
We make this assumption to simplify the notation, but the methods we describe can be easily extended to allow for different treatment times.
The number of post-intervention observation times is denoted by $T_2=T-T_1$.
For each unit and time we may also observe a set of $K$ covariates $\boldsymbol{x}_{it}=(x_{it1},\dots,x_{itK})^\top$.
\subsection{Potential outcomes}
We adopt the \textit{potential outcomes} framework \citep{Rubin1974,Rubin1990}, also known as the \textit{Rubin causal model} \citep[RCM;][]{Holland1986}.
\textcolor{black}{
Under this model, for each treated unit ($i>n_1$) and post-intervention time ($t>T_1$) there are two potential outcomes, $y_{it}^{(0)}$ and $y_{it}^{(1)}$: $y_{it}^{(0)}$ represents the outcome that would be observed if intervention were not applied, and $y_{it}^{(1)}$ is the outcome that would be observed if the intervention were applied.
We only observe $y_{it}^{(1)}$, \textit{i.e}., $y_{it}=y_{it}^{(1)}$.
For the control units ($i\leq n_1$) at any time $t$, and for the treated units ($i>n_1$) at pre-intervention times ($t\leq T_1$), only $y_{it}^{(0)}$ is defined, and it is observed: $y_{it}=y_{it}^{(0)}$.
}
The RCM allows the effect of the intervention on unit $i$ ($i>n_1$) at time $t$ ($t>T_1$) to be expressed as $\tau_{it} = y_{it}^{(1)} - y_{it}^{(0)}$.
Estimation of $\tau_{it}$ is complicated by the fact that $y_{it}^{(0)}$ is not observed.
In order to estimate $\tau_{it}$ from the observed data, it is necessary to make identifying assumptions \citep{Morgan2007,Keele2013}.
For the methods considered in this paper, these assumptions allow the unobserved \textit{counterfactual} outcomes $y_{it}^{(0)}$ of treated units in the post-intervention period (\textit{i.e}.\ for $i>n_1$ and $t> T_1$) to be predicted using the observed outcomes on control and treated units. Denote these predictions as $\hat{y}_{it}^{(0)}$.
The intervention effect $\tau_{it}$ can then be estimated as $\hat{\tau}_{it}=y_{it}-\hat{y}_{it}^{(0)}$.
\subsection{An illustrative example}\label{sec:germany}
As our illustrative example we use data from \citet{Abadie2015} who investigated the effect that West Germany's reunification with East Germany in 1990 had on the economic growth of the former.
To do so, they compared West Germany's annual per-capita GDP (the outcome variable) to its counterfactual GDP (i.e.\ its GDP had reunification not taken place), which they predicted based on annual per-capita GDP data from $n_1=16$ member countries of the Organisation for Economic Co-operation and Development (OECD) (none of which underwent reunification, and so are `control units').
The authors used data from 1960-2003 and hence there are $T_1=30$ pre-intervention and $T_2=13$ post-intervention time points.
\textcolor{black}{
Figure \ref{fig:germany} shows the time-series of the outcome on all 17 units.
In Section \ref{sec:real1}, we analyse this dataset using the methods reviewed in this article.
}
\begin{figure}[h]
\centering
\includegraphics[scale=0.45]{Figures/germany.pdf}
\vspace{-0.21in}
\caption{Time series plot of the German reunification data. The values on the $y$-axis represent per-capita GDP measured in U.S. dollars. West Germany's per-capita GDP is shown in blue; the data on control units (16 other OECD countries) are shown in light red; the dashed red line represents the average GDP of the control units. The dashed gray line indicates 1990, the year of reunification.}
\label{fig:germany}
\end{figure}
\section{Application: Effect of German reunification on GDP} \label{sec:real1}
In this section, we demonstrate the use of the methods we have described by analysing the data introduced in Section \ref{sec:germany}.
The dataset is publicly available\footnote{\url{http://dx.doi.org/10.7910/DVN/24714}}.
\textcolor{black}{We omit the available covariates because they might have been affected by the reunification}.
\textcolor{black}{
For the DID and SCM, some of the diagnostic checks described in Section \ref{sec:blacks} do not require implementing these methods, and so we began by carrying out those checks.
Figure \ref{fig:didpar} of Appendix \ref{sec:suppl} shows the difference between West Germany's GDP, $y_{17,t}$, and the average GDP in the control countries, $\frac{1}{16}\sum_{i=1}^{16}y_{it}$, over the pre-reunification period.
The difference has a clear increasing trend suggesting that the parallel trends assumption does not hold, so the linear DID model is not appropriate for this application.
As we see from Figure \ref{fig:germany}, the outcome of the treated unit lies in the convex hull of the outcomes of control units so this provides no evidence that the SCM should not be used.
}
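The parallel-trends diagnostic amounts to checking that the gap between the treated series and the control average is flat over the pre-intervention period. The sketch below illustrates this on synthetic data (the dimensions $n_1=16$, $T_1=30$ echo the application, but the series and trend values are made up); it is not the analysis code used for the figures.

```python
import numpy as np

rng = np.random.default_rng(1)
n1, T1 = 16, 30
t = np.arange(T1, dtype=float)

# Synthetic pre-intervention outcomes: 16 control series sharing a common
# trend of 0.5 per period, and a treated series with a steeper trend of 0.8
controls = 0.5 * t + rng.normal(0.0, 0.2, size=(n1, T1))
treated = 0.8 * t + rng.normal(0.0, 0.2, size=T1)

# Difference between the treated outcome and the control average;
# under parallel trends this gap should be flat over the pre-period
gap = treated - controls.mean(axis=0)

# Slope of a linear fit to the gap: a value far from zero indicates that
# the parallel trends assumption fails (here it is ~0.3 by construction)
slope = np.polyfit(t, gap, 1)[0]
```

A clearly non-zero slope, as found for the German data, argues against applying the linear DID model.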
We only implement methods for which (to the best of our knowledge) R \citep{R} software exists.
The linear DID method can be implemented using any linear regression function (\textit{e.g}.\ \textit{lm}).
For the remaining methods, we used the packages specifically developed for these methods: \textit{gsynth} for the LFM; \textit{Synth} \citep{Abadie2011} for the SCM; \textit{pamp} \citep{Vega2015} for the HCW method; and \textit{CausalImpact} for the CIM.
The code we used for our real data analysis is available online\footnote{\url{https://osf.io/b5fv3/}}.
\textcolor{black}{
We fitted the linear DID model \eqref{eq:didb}.
For the method of \citet{Xu2017} we set $f_{1t}=1$ for all $t$ and $\lambda_{i2}=1$ for all $i$ in order to have time and country fixed effects, respectively.
The total number of latent factors was set via cross-validation.
For the SCM, we estimated the weights using the whole vector of outcomes $\boldsymbol{y}_{i,1:T_1}$ in the pre-intervention period (rather than summaries of the outcomes).
The HCW method was implemented using all control countries and pre-intervention time points.
Finally, for the CIM we fitted the model of Equation \eqref{eq:bstseg} but without the term $\delta_t$ because we found that inclusion of this term did not improve the fit and led to substantially wider credible intervals for the causal effect of interest.
The prior distributions for all model parameters were set to the software defaults.
We fitted the linear DID method for illustration purposes even though DID should not be used here.
}
\textcolor{black}{
Before examining the causal estimates, we performed the remaining diagnostic checks.
Figure \ref{fig:eff} shows the difference between the actual and estimated counterfactual West German GDP, $\boldsymbol{y}_{17,\cdot}-\hat{\boldsymbol{y}}_{17,\cdot}$ for the entire study period.
We see that all methods except for the linear DID almost perfectly reproduce West Germany's GDP before reunification.
Thus, the pre-intervention goodness-of-fit diagnostic provides no indication against any of the methods except for linear DID.
The estimated factor loadings for the 17 countries in the dataset are shown in Table \ref{tab:loadings} of Appendix \ref{sec:suppl}.
The estimated loadings for West Germany are not extreme compared to those of the control countries, suggesting that the predicted counterfactual is not obtained by extrapolation.
Overall we see that the only method that fails our diagnostic checks is the linear DID.}
\begin{figure}[htp]
\centering
\includegraphics[scale=1.0]{Figures/eff.pdf}
\vspace{-0.15in}
\caption{Annual estimates of the effect of the German reunification on West Germany's per-capita GDP obtained using the linear DID model (red), the LFM (green), the SCM method (light blue), the HCW method (blue) and the CIM (purple). The dashed lines (when applicable) represent the 95\% confidence/credible intervals.}
\label{fig:eff}
\end{figure}
\textcolor{black}{
Figure \ref{fig:eff} reveals that the other four methods provide similar estimates of the causal effect.
In particular, the difference between the observed and counterfactual outcomes is positive during the first three years after 1989, suggesting that reunification initially had a positive impact on West Germany's GDP.
\citet{Abadie2015} attribute this to a `demand boom'.
The estimated impact reduces thereafter, and is negative for all four methods in year 2003.
Table \ref{tab:gdp} shows the estimated average reduction in annual GDP over the period 1990-2003 due to the reunification; we also include the DID estimate for completeness.
}
\begin{table}[ht]
\begin{center}
\begin{tabular}{cr}
\textbf{Method}&\textbf{GDP decrease} \\ \hline \hline
Linear DID& -604\\
LFM (XU)& 1546\\
SCM& 1322\\
HCW& 1473\\
CIM& 1629\\
\end{tabular}
\caption{Average (over the period 1990-2003) reduction in West Germany's annual per capita GDP, as estimated by the 5 methods. All values are in United States dollars.}
\label{tab:gdp}
\end{center}
\end{table}
Figure \ref{fig:eff} presents 95\% intervals for the LFM of \citet{Xu2017} and the CIM.
These exclude zero in all years after 1993, thus suggesting a significant intervention effect.
The placebo test of no intervention effect in any of the years 1990-2003 described by \citet{Abadie2010,Abadie2015} is also suggestive of a non-zero intervention effect.
In particular, the $r$ statistic defined in Equation \eqref{eq:sctest} is $r_{17}=30.72$ for West Germany, larger than all the $r_i$ values obtained for the 16 control countries.
We further implemented this test with the HCW method.
The rank of the $r$ statistic for West Germany is 16; that is, there is only one country whose $r$ statistic is higher.
Table \ref{tab:rstat} in Appendix \ref{sec:suppl} shows the $r$ statistics obtained by applying the SCM and HCW methods.
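The rank computation underlying this placebo test is straightforward; the sketch below illustrates it. The control-country $r$ values here are hypothetical, with only the treated unit's value $r_{17}=30.72$ taken from the SCM analysis above.

```python
import numpy as np

# r statistics (Equation sctest) for the 16 control countries (hypothetical
# illustrative values) and West Germany (last entry, r_17 = 30.72 as reported)
r = np.array([1.2, 0.8, 2.1, 0.5, 1.7, 0.9, 3.0, 1.1,
              0.7, 2.5, 1.4, 0.6, 1.9, 1.0, 2.8, 1.3,
              30.72])

# Placebo test: rank of the treated unit's statistic among all 17 units
rank = int((r <= r[-1]).sum())          # 17: the largest of all units
p_value = (r >= r[-1]).sum() / r.size   # one-sided permutation p-value, 1/17
```

With 17 units the smallest attainable p-value is $1/17\approx0.059$, which is worth bearing in mind when interpreting this test.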
Overall, taking into consideration all tests conducted, we conclude that there is evidence that reunification had a negative long-term impact on West Germany's per-capita GDP, although it may have had a positive short-term impact.
\section{Introduction}
The discovery of fast radio bursts (FRBs) has inspired renewed interest of the astrophysical community in coherent emission mechanisms. An extremely high brightness temperature of the observed emission implies that the emission mechanism is coherent, i.e., a large number of particles must emit the radio waves in phase. Half a century ago, this sort of consideration led to conclusion that the pulsar radio emission is coherent.
However, a theory of pulsar radio emission has not yet been developed. Therefore, it is not surprising that a pessimistic view of the theoretical study of coherent emission processes is common.
Under astrophysical conditions, coherent emissions are typically generated in fully ionized plasma (a marked exception is the molecular line masers). The plasma is a medium with a long-range interaction, which, in principle, enables the coherent motions of large ensembles of particles. However, the same long-range interaction strongly affects both the emission and the propagation of electromagnetic waves. Therefore, coherent emission is a collective plasma process that should be described in the language of plasma physics.
Unfortunately, this branch of physics is not popular among
the astrophysical community. Two views are prevalent. According to the first view,
this is an obscured and untrustworthy field, so there is no chance of any progress. The second view is the opposite (but, in some sense, closely related): coherent emission could be simply described by
formulas from Jackson's textbook \citep{Jackson_book}, into which a charge $Ne$ with a large enough $N$ could be substituted.
This review advocates for a more insightful approach. The theory of collisionless plasmas supplemented by numerical simulations has already provided spectacular progress in our understanding of astrophysical shock waves, magnetic reconnection, and particle acceleration \citep{Treumann_Baumjohann15,Sironi_etal15,Kagan_etal15,Pelletier_etal17}. The coherent emission from the Sun and planetary magnetospheres has also become a mature field \citep{Melrose17}. The situation with mechanisms of pulsar radio emission looks disappointing but the reason for this is rather specific: it is not that the processes in the magnetospheric plasma defy analysis---the mechanisms of the plasma production and the very structure of the plasma flows in pulsar magnetospheres remain uncertain, which prevents the confrontation of the theory and observations \citep{Lyubarsky08a}.
Therefore, there is no reason to halt efforts to resolve the enigma of FRBs.
This review is an attempt to present, at a basic level, the physics of collective plasma emission mechanisms applied to FRBs. When dealing with radiation processes in plasma, a sensible line of action is to address three basic questions:
\begin{enumerate}[leftmargin=*,labelsep=4.9mm]
\item What electromagnetic waves are supported by plasma at the assumed conditions?
\item How could these waves be excited?
\item Could they escape the system as radio waves?
\end{enumerate}
Any reasonable model has to provide self-consistent answers to these questions. I try to show how this could be achieved in particular cases relevant to FRBs.
Only basic observational facts are used in this review. First, the theoretical model should explain the $\sim 1$ ms burst duration and a wide range of emitted isotropic energies, from $\sim10^{35}$ to $\sim10^{43}$ erg. Another basic fact is that FRBs typically exhibit strong linear polarization, sometimes reaching 100\%. For a comprehensive analysis of observations, the reader is referred to recent reviews \citep{Cordes_Chatterjee19,Petroff_etal19}. A concise summary of observational data, as well as the description of theoretical models is given in ref.\ \citep{Zhang20}.
The remainder of this paper is structured as follows: In Section 2, I describe the coherent radiation mechanisms discussed in the context of FRBs. Magnetars are considered the main candidate source of FRBs. In Section 3, I outline the physics of magnetar magnetospheres and magnetar flares, and discuss how the powerful coherent radiation may be produced by these flares. In Section 4, I describe non-linear processes that could affect the propagation of high intensity radio waves and even prevent the escape of radio emission from the source.
\section{Coherent Radiation Mechanisms}
\subsection{Brightness temperature}
To estimate how many coherently emitting particles are minimally required to produce the observed emission, let us estimate the typical brightness temperature of FRBs. The brightness temperature of a radiation source, $T_b$, is defined such that the radiation intensity of the source at the frequency $\nu$ is presented according to the Rayleigh-Jeans law:
\begin{equation}
I_{\nu}=\frac{2k_BT_b\nu^2}{c^2},
\end{equation}
where $k_B$ is the Boltzmann constant, and $c$ the speed of light. The brightness temperature typically depends on the frequency.
If the source is at rest, the observed frequency and intensity are the same as those in the source. For an isotropic source, the observed spectral flux is:
\begin{equation}
F_{\nu}=I_{\nu}\Omega\sim\frac{a^2}{D^2}I_{\nu},
\end{equation}
where $\Omega$ is the solid angle subtended by the source, $a$ is the size of the source, and $D$ is the distance. The characteristic spectral flux of FRBs is of the order of 1 Jy; the duration of the bursts, $\tau$, is of the order of 1 ms. The size of the source is limited by $a\le \tau c$, which immediately implies a low estimate of the brightness temperature:
\begin{equation}
T_b\sim\frac{F_{\nu}D^2}{k_B\tau^2\nu^2}=7\cdot
10^{35}\frac{F_{\nu,\rm Jy}D^2_{\rm Gpc}}{\tau^2_{\rm ms}\nu^2_{\rm GHz}} \,\rm K.
\end{equation}
The indexes like Jy and Gpc indicate the units in which the corresponding quantities are measured. Below, I also employ the shorthand notation $q_x = q/10^x$ in cgs units, e.g., $L_{44}=L/(10^{44}\rm erg/s)$.
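Plugging the fiducial values into this estimate is a one-line computation in cgs units; the sketch below reproduces the quoted $7\cdot10^{35}$ K.

```python
# cgs constants and fiducial FRB parameters
k_B = 1.380649e-16       # Boltzmann constant, erg/K
Jy = 1e-23               # jansky, erg s^-1 cm^-2 Hz^-1
Gpc = 3.0857e27          # gigaparsec, cm

F_nu = 1.0 * Jy          # spectral flux
D = 1.0 * Gpc            # distance
tau = 1e-3               # burst duration, s
nu = 1e9                 # observing frequency, Hz

# Lower limit on the brightness temperature, T_b ~ F_nu D^2 / (k_B tau^2 nu^2)
T_b = F_nu * D**2 / (k_B * tau**2 * nu**2)   # ~7e35 K
```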
In a moving source, the brightness temperature is related to the intensity in the source frame, $I'_{\nu'}=2k_BT_b\nu'^2/c^2$, where the prime means that the corresponding quantities are measured in the source frame.
If the source moves toward the observer with the Lorentz factor $\Gamma\gg 1$, the lower limit on the observed variability time significantly decreases due to the transit time effect:
\begin{equation}
\tau_{\rm obs}\ge\frac{a'}{c\Gamma}.
\label{tau_duration}\end{equation}
To find a relation between the measured spectral flux and the brightness temperature of the source, take into account
that the total luminosity of the source,
\begin{equation}
L'\sim a'^2I'_{\nu'}\Delta\nu'\sim
\frac{a'^2k_BT_b\nu'^2\Delta\nu'}{c^2},
\end{equation}
is the relativistic invariant, $L=L'$.
Then, the observed spectral flux is estimated, accounting for the relativistic beaming, as:
\begin{equation}
F_{\nu}\sim\frac{L}{\Gamma^2D^2\Delta\nu}\sim\frac{a'^2k_BT_b\nu'^2}{c^2D^2\Gamma^2}\frac{\Delta\nu'}{\Delta\nu}.
\end{equation}
Now, the Lorentz transform of the frequency, $\nu\sim\Gamma\nu'$, together with the upper limit on the source size (\ref{tau_duration}), yields the lower limit on the brightness temperature:
\begin{equation}
T_b\sim\frac{F_{\nu}D^2}{k_B\tau^2\nu^2\Gamma}=
7\cdot 10^{35}\frac{F_{\nu,\rm Jy}D^2_{\rm Gpc}}{\tau^2_{\rm obs, ms}\nu^2_{\rm GHz}\Gamma} \,\rm K.
\end{equation}
The brightness temperature for an ensemble of incoherently emitting particles cannot exceed the temperature of the ensemble, understood as the characteristic energy of particles expressed in degrees. This immediately implies that the emission mechanism of FRBs should be coherent, i.e., a large number of particles must emit radio waves in phase.
Assuming that the emitting particles are electrons,
we estimate the minimal number of coherently radiating particles as:
\begin{equation}
{\cal N}\sim\frac{k_BT_b}{m_ec^2}\sim
10^{26}\frac{F_{\nu,\rm Jy}D^2_{\rm Gpc}}{\tau^2_{\rm obs, ms}\nu^2_{\rm GHz}\Gamma}.
\label{Nminimal}\end{equation}
If the electrons in the source are relativistic, the required number of particles is smaller by the characteristic Lorentz factor of the electrons but, in any reasonable case, ${\cal N}$ remains tremendously large. This implies that the FRB emission is definitely coherent. The question is what causes the concerted motion of a macroscopic number of particles.
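The corresponding particle number follows directly from the brightness temperature; a quick check with the fiducial $T_b$:

```python
k_B = 1.380649e-16       # Boltzmann constant, erg/K
m_e_c2 = 8.1871e-7       # electron rest energy m_e c^2, erg

T_b = 7e35               # K, the fiducial brightness temperature above
N_coh = k_B * T_b / m_e_c2   # minimal number of coherent electrons, ~1e26
```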
The simplest assumption is that bunches of charged particles are somehow formed in the system, with each bunch emitting as a single entity. Then, two questions immediately arise: what is the bunching mechanism and what are the radiation properties of these bunches?
\subsection{Curvature Emission of Bunches}
At a very early stage of pulsar studies, it was found that particles emit in the radio band if they move with Lorentz factors $\Gamma\sim 100$ along the curved magnetic field lines in the neutron star's magnetosphere. Since then, a popular view has been that bunches of charged particles are somehow formed in the magnetosphere, and that pulsar radio emission could be attributed to the curvature emission of these bunches. Recently, the same model was applied to FRBs
\citep{Falcke_Rezzolla14,Cordes_Wasserman16,Dai_etal16,Kumar17,Gisellini_Locatelli18,Katz18,Yang_Zhang18,Lu_Kumar18,Wang_etal19,Kumar_Bosnjak20,LuKumarZhang20,Wang_etal20,Yang_etal20}.
The formation of bunches is typically attributed to the electrostatic plasma waves excited by the two-stream instability. All the available estimates are based on the theory developed under the assumption that the energy of perturbations is small compared with the energy of the plasma. However, the electrostatic energy of the assumed bunches enormously exceeds the plasma energy. Namely, for the emission to be coherent, the size of the bunch in the comoving frame cannot be larger than $c/\omega'$, where $\omega'=2\pi\nu/\Gamma$ is the frequency in the frame of the bunch. Using the estimate (\ref{Nminimal}) for the minimal number of electrons in the bunch, we find the electric potential in the bunch to be $V'\sim {\cal N}e\omega'/c$. Then, the ratio of the electrostatic to the plasma energy is found as
\begin{equation}
\epsilon\sim \frac{eV'}{mc^2}\sim 10^{9}\frac{F_{\nu,\rm Jy}D^2_{\rm Gpc}}{\tau^2_{\rm obs, ms}\nu_{\rm GHz}\Gamma_2^2}.
\end{equation}
In this case, the results of the linear theory are completely irrelevant. The energy of plasma waves cannot exceed the plasma energy because, due to non-linear effects, it efficiently goes into heat. One can speculate that the plasma is so hot that $\epsilon$ does not exceed unity. However, the properties of both the plasma waves and the two-stream instability in a relativistically hot, strongly magnetized plasma differ drastically from those in a cold plasma \citep{Lominadze_Mikhailovski79,Arons_Barnard86}. No attempts have been made to analyze the formation of the bunches under realistic conditions.
Note that $\epsilon$ represents the repulsive electrostatic potential in units of the electron rest energy. This enormous number clearly shows how difficult it is to form the bunches and to maintain them against destruction. Unless physical mechanisms capable of forming and maintaining the bunches are elaborated, models assuming the existence of such bunches should be considered highly speculative.
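The size of $\epsilon$ is easy to verify for fiducial parameters ($\Gamma=100$, $\nu=1$ GHz, and the minimal $\cal N$ from the estimate above); the sketch below is an order-of-magnitude check in cgs units, not a model calculation.

```python
import math

e = 4.8032e-10           # electron charge, esu
m_e = 9.1094e-28         # electron mass, g
c = 2.99792458e10        # speed of light, cm/s

Gamma = 100.0
nu = 1e9                         # observing frequency, Hz
N_bunch = 1.2e26 / Gamma         # minimal N for fiducial burst parameters

omega_prime = 2 * math.pi * nu / Gamma     # frequency in the bunch frame
V_prime = N_bunch * e * omega_prime / c    # electrostatic potential, statvolt

# Ratio of electrostatic to rest-mass energy: ~7e8, i.e. of order 10^9
epsilon = e * V_prime / (m_e * c**2)
```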
In the papers quoted above,
the emission of the bunches was estimated by applying standard formulas of electrodynamics in a vacuum. The characteristic frequency is estimated as:
\begin{equation}
\omega_c\sim\frac{c\Gamma^3}{R_c},
\label{curv_freq}\end{equation}
whereas the emission power of a bunch is assumed to be:
\begin{equation}
P=\frac{2q^2c\Gamma^4}{3R_c^2},
\label{curv_power}\end{equation}
where $R_c\sim 10^7-10^9$ cm is the curvature radius of the magnetic field line and $q={\cal N}e$ is the charge of the bunch. The demand that the emitted frequency is in the radio band implies $\Gamma\sim 100$. Choosing an appropriate $\cal N$, we find the required power.
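For instance, with $\Gamma=100$ and $R_c=10^7$ cm (the lower end of the quoted range), the vacuum curvature frequency indeed lands in the radio band:

```python
import math

c = 2.99792458e10        # speed of light, cm/s
Gamma = 100.0
R_c = 1e7                # curvature radius, cm

omega_c = c * Gamma**3 / R_c        # characteristic curvature frequency, rad/s
nu_c = omega_c / (2 * math.pi)      # ~5e8 Hz, i.e. ~0.5 GHz
```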
Even leaving aside the question of how the bunches are formed, we cannot ignore the fact that their radiation properties are strongly affected by the ambient plasma. The density of this plasma could not be much less than the particle density in the bunches; therefore, the minimal comoving plasma density is:
\begin{equation}
N'\sim\frac{\omega'^3\cal N}{c^3}\sim
10^{16}\frac{F_{\nu,\rm Jy}D^2_{\rm Gpc}\nu_{\rm GHz}}{\tau^2_{\rm obs, ms}\Gamma^4_2}\,\rm cm^{-3}.
\end{equation}
In this case, the plasma
frequency, $\omega'_p=(4\pi e^2N'/ m_e)^{1/2}$, is well above the frequency of the emitted waves:
\begin{equation}
\frac{\omega_p'}{\omega'} \sim 10^5
\frac{F_{\nu,\rm Jy}^{1/2}D_{\rm Gpc}}{\tau_{\rm obs, ms}\nu_{\rm GHz}^{1/2}\Gamma_2}.
\end{equation}
Waves with frequencies below the plasma frequency can propagate in the highly magnetized plasma of a neutron star's magnetosphere; see, e.g., \citep{Arons_Barnard86}. However, their emission cannot be described by the formulas of vacuum electrodynamics.
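The quoted ratio can be checked directly for the fiducial parameters ($\Gamma=100$, $\nu=1$ GHz, the minimal $\cal N$); the sketch below reproduces $\omega'_p/\omega'\sim10^5$.

```python
import math

e = 4.8032e-10           # electron charge, esu
m_e = 9.1094e-28         # electron mass, g
c = 2.99792458e10        # speed of light, cm/s

Gamma = 100.0
nu = 1e9                         # observing frequency, Hz
N_bunch = 1.2e26 / Gamma         # minimal number of electrons in the bunch

omega_prime = 2 * math.pi * nu / Gamma                # comoving frequency
N_prime = omega_prime**3 * N_bunch / c**3             # minimal comoving density
omega_p_prime = math.sqrt(4 * math.pi * e**2 * N_prime / m_e)  # plasma freq.

ratio = omega_p_prime / omega_prime                   # ~1e5
```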
The curvature emission of a point charge under the condition $\omega'_c\ll\omega'_p$ was calculated in ref.\ \citep{Gil_etal04}. It was found that the characteristic radiation frequency is the same as that in the vacuum case, Eq. (\ref{curv_freq}), but the emission power is strongly suppressed:
\begin{equation}
P=K_0\frac{q^2c\Gamma^4}{R_c^2}\left[\frac{\omega'_c}{\omega'_p}\left(1-\frac{\Gamma^2_p}{\Gamma^2}\right)\right]^2,
\end{equation}
where $\Gamma$ is the Lorentz factor of the charge, $\Gamma_p$ is the Lorentz factor of the plasma, $K_0\approx 0.1$, and the prime means that the corresponding quantities are measured in the frame comoving with the plasma. If the charge moves
together with the plasma, $\Gamma=\Gamma_p$, it does not radiate at all because it is completely shielded in this case. Note that in this calculation, the plasma was assumed to be cold. Hot plasmas completely shield charges moving with a velocity smaller than the electron thermal velocity \citep{Krall1973}. Therefore, in a relativistically hot plasma, the curvature emission could be expected to be completely suppressed for a wide range of $\Gamma$. If the charge moves with respect to the plasma such that its velocity is outside the range of the plasma thermal velocities, it could emit, but the emission rate would be suppressed by a factor $(\omega'/\omega'_p)^2\sim 10^{-10}$ compared with the vacuum emission rate (\ref{curv_power}).
Of course, this highly idealized result could be modified by considering additional effects, such as a non-point-like charge distribution, a hot plasma, etc. However, the burden is on the authors of the model to present a self-consistent description of all relevant physical processes. The above example just shows that plasma effects cannot be ignored in the emission process. It is worth noting that in pulsars, the plasma frequency is comparable with the curvature frequency, and the electrostatic energy of the assumed bunches is not too large in typical cases. Therefore, in this case, the model of curvature emission cannot be certainly ruled out, even though speculations on the origin of the bunches are not convincing. In all the available versions of the coherent curvature emission model for FRBs, both the formation of the bunches and their emission are described using theoretical results obtained under conditions that the model itself strongly violates. Therefore, for now, this model cannot be considered physically consistent, and the discussion of any details of the model is premature.
\subsection{Masers, Some Preliminaries}
The standard explanation of maser action is based on the notion of induced emission introduced by Einstein in his analysis of the quantum radiation transitions. A photon with energy matching the energy difference between two energy levels could not only be absorbed by the system, causing a transition of an electron from the lower to the upper level, but could also cause an electron in the upper level to jump downward, emitting one more photon with the same energy and direction. Therefore, if the population of the upper level exceeds that of the lower level, a seed radiation beam is exponentially amplified.
The above quantum picture is well known and is, in principle, correct, even though it could be sometimes misleading, as with any purely qualitative picture. But it is worth stressing that we deal with classical systems, so even though the classical view is paradoxically less vivid than the quantum view, quantitative theory may be developed using only classical plasma physics.
According to the classical picture, maser action occurs in resonantly unstable systems. A seed wave satisfying the resonance conditions modulates the unstable plasma, triggering currents that emit in phase with the seed wave, thus amplifying it. In this case, the maser emission could also be attributed to charged bunches formed in the course of the modulation process. However, in this approach, both the formation of the bunches and their emission are considered self-consistently, accounting for the dispersion properties of the medium.
\subsection{Synchrotron Maser}
Charged particles rotate in a magnetic field; therefore, an inverse population in the magnetized plasma could be simply imagined as a ring in the particle momentum space. Before considering how such a distribution could be formed in nature, a couple of subtle points must be discussed.
First, maser action is impossible in an infinite system of equidistant energy levels, like the system of non-relativistic Landau levels.\footnote{The cyclotron maser works only due to relativistic corrections to the Landau levels \citep{Melrose17}.} Even if the $n$th level is overpopulated, a resonant photon could trigger both the transition $n\to n-1$ (induced emission) and the transition $n\to n+1$ (absorption). The absorption rate is larger, so the resonant photons are always absorbed in this case. In particular, maser action is impossible in a system of linear oscillators. The reason why induced emission was discovered first in quantum physics is that one could not find a simple classical system that exhibits this effect. In classical electrodynamics, amplified emission was found later, with the theory of the effect being quite involved.
The system of relativistic Landau levels is not equidistant; therefore, at first glance, an amplified synchrotron emission could be easily achieved provided an inverse population is formed in a system of relativistic electrons. However, there is another subtle point.
By synchrotron, we typically mean emission at very high harmonics of the rotational frequency, such that the spectrum is continuous. In this case, waves emitted by an electron could be absorbed by electrons with a different energy. The inverse energy distribution can be formed only in a limited range of particle energies, say, at $E<E_0$, whereas at $E>E_0$, the level population decreases with the energy. Therefore, radiation from electrons with $E<E_0$ may be absorbed by electrons with $E>E_0$. For this reason, the synchrotron maser is possible only in two cases (see \citep{Melrose_book80} and references therein). The first is an extremely narrow energy distribution, which could hardly form in realistic conditions. In the second case, the inversely populated electrons radiate in the frequency range, where the dispersion of the waves is modified by the plasma,
$(\omega/ck)-1\ge\gamma^{-2}$, where $\gamma$ is the characteristic Lorentz factor of electrons. Then, both emission and absorption are suppressed by the Razin effect, with the suppression being stronger for higher energy electrons. In ring-like distributions, the inverse population is formed at smaller energies; therefore, the induced emission of these electrons is less suppressed than the absorption by higher energy electrons. Then, the total absorption coefficient becomes negative, which implies maser emission.
It was noticed \citep{Sazonov70} that the dispersion of electromagnetic waves may be sufficiently modified by the relativistic electrons themselves provided their density is large enough.
For this case, the theory of the synchrotron maser instability has been developed both for spherically symmetric \citep{SagivWaxman02,Gruzinov_Waxman19} and for ring-like electron distributions \citep{Lyubarsky06}. The growth rate of the maser instability is estimated as
\begin{equation}
\kappa\sim 0.1 \frac{\Omega_B^{3/2}}{\Omega_p^{1/2}},
\end{equation}
where
\begin{equation}
\Omega_B=\frac{eB}{m_ec\gamma};
\qquad\Omega_p=\sqrt{\frac{4\pi e^2 N}{m_e\gamma}}
\label{Omega_B,Omega_p}\end{equation}
are the relativistic Larmor and plasma frequencies, respectively.
The characteristic frequency of the amplified waves is:
\begin{equation}
\omega\sim \frac{\Omega_p^{3/2}}{\Omega_B^{1/2}}.
\end{equation}
The above result was obtained under the condition $\Omega_B<\Omega_p$. In relativistic sources, we can conveniently use the magnetization parameter, defined as twice the ratio of the magnetic to the plasma energy density in the source:
\begin{equation}
\sigma=\frac{B^2}{4\pi m_ec^2\gamma N}=\left(\frac{\Omega_B}{\Omega_p}\right)^2.
\end{equation}
Then, the characteristic frequency of amplified waves and the growth rate may be respectively presented as:
\begin{equation}
\kappa\sim 0.1\sigma^{1/4}\Omega_B; \qquad\omega\sim \sigma^{-1/4}\Omega_p;\qquad \sigma<1.
\label{maser_sigma<1}\end{equation}
Because of a very weak dependence on $\sigma$, the emission frequency of the maser does not much exceed the plasma frequency.
Above, only radiation at high harmonics of the rotational frequency, $\Omega_B$, was discussed. Note that the first few harmonics do not overlap if the electron energy distribution is not too wide, $\Delta E<E$. In this case, maser emission at the first harmonic is possible even in a vacuum. In a strongly magnetized plasma, $\sigma>1$, the Larmor frequency exceeds the plasma frequency, so waves with frequencies of the order of $\Omega_B$ can propagate. In this case, the maser emits at the fundamental and the first few harmonics, with the instability growth rate being (e.g., \citep{Aleksandrov_eyal84}):
\begin{equation}
\kappa\sim 0.3\sigma^{-1/3}\Omega_B;\qquad \omega\sim \Omega_B=\sigma^{1/2}\Omega_p;\qquad \sigma>1.
\label{maser_sigma>1}\end{equation}
One sees that the maser action is possible at any magnetization, provided an inverse electron population is formed. As, in the systems of interest, the magnetization is neither too low nor too high, the characteristic emission frequency may be generally presented as:
\begin{equation}
\omega\sim \zeta\Omega_p,
\label{maser_frequency}
\end{equation}
where $\zeta\sim$ a few.
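As a numerical illustration of Eqs. (\ref{maser_sigma<1}) and (\ref{maser_sigma>1}), the following sketch evaluates the magnetization, the growth rate, and the characteristic frequency in both regimes (CGS units; the sample values of $B$, $N$, and $\gamma$ are purely illustrative):

```python
import math

# CGS constants
e, m_e, c = 4.803e-10, 9.109e-28, 2.998e10

def maser_estimates(B, N, gamma):
    """Order-of-magnitude growth rate and frequency of the synchrotron maser.

    B: magnetic field [G]; N: electron density [cm^-3];
    gamma: characteristic Lorentz factor of the electrons."""
    Omega_B = e * B / (m_e * c * gamma)                          # relativistic Larmor frequency
    Omega_p = math.sqrt(4 * math.pi * e**2 * N / (m_e * gamma))  # relativistic plasma frequency
    sigma = (Omega_B / Omega_p)**2                               # magnetization parameter
    if sigma < 1:
        kappa = 0.1 * sigma**0.25 * Omega_B      # growth rate, Eq. (maser_sigma<1)
        omega = Omega_p / sigma**0.25            # emitted frequency
    else:
        kappa = 0.3 * Omega_B / sigma**(1 / 3)   # growth rate, Eq. (maser_sigma>1)
        omega = math.sqrt(sigma) * Omega_p       # = Omega_B
    return sigma, kappa, omega
```

Because $\omega$ depends on $\sigma$ only through small powers, the emitted frequency remains within a factor of a few of $\Omega_p$ for moderate magnetizations, in line with Eq. (\ref{maser_frequency}).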
\subsection{Coherent Emission from the Front of a Relativistic, Magnetized Shock}
An inverted level population means that, in some energy range, the particle energy distribution function grows with energy, which is impossible for a thermal distribution.
This distribution could be formed only in highly non-equilibrium systems under rather non-trivial conditions.
A ring-like particle distribution is naturally formed at a front of the collisionless shock in a magnetized flow because this shock is mediated by the Larmor rotation: when the upstream flow enters the shock, the bulk velocity sharply drops, and particles begin to rotate in the enhanced magnetic field. It was assumed \citep{Langdon_etal88} that at relativistic shocks, synchrotron maser instability develops. Numerical simulations of relativistic, magnetized shocks \citep{Hoshino92,Gallant92,Sironi_Spitkovsky11,Iwamoto_etal17,Iwamoto_etal18,Iwamoto_etal19,Plotnikov_Sironi19,Babul_Sironi20} revealed strong electromagnetic precursors with the characteristic frequencies compatible with the estimate (\ref{maser_frequency}).
Note that the frequencies (\ref{maser_frequency}) are in fact given in the downstream frame. In the upstream frame, they are Lorentz-boosted so that the frequencies of the emitted waves are well above the local plasma and Larmor frequencies. Therefore, the precursor waves propagate away relatively easily.
The simulations show that a ring-like distribution is formed at the front of the shock, provided the magnetization of the flow is high enough, $\sigma\ge 10^{-3}$. However, the precursor emission cannot be attributed with certainty to the maser mechanism. The ring is immediately destroyed just beyond the front, so that the width of the unstable region in the shock frame is only $\sim 2c\Omega_B^{-1}$. As the growth rate of the instability is less than $\Omega_B$, see Eqs. (\ref{maser_sigma<1}) and (\ref{maser_sigma>1}), a seed wave could not be amplified considerably while crossing the shock. The waves propagating within the shock plane perpendicular to the shock normal could be amplified, but then the precursor should be attributed to the scattering of the amplified waves. It is possible that the intrinsic unsteadiness of the shock front leads to strong fluctuations in the charge density, so that the thus-formed charged bunches rotate in the magnetic field and emit coherent radiation. In any case, the emitted frequency is of the order of a few $\Omega_B$ at $\sigma>1$ and a few $\Omega_p$ at $\sigma<1$, which is compatible with our expectation from the synchrotron maser emission. Despite this uncertainty, I refer to this emission as the synchrotron maser emission, because this term is widely used in the literature.
In the referenced simulations, the authors found that the emitted energy reaches a few percent of the inflow power at $\sigma\sim 0.1$ and decreases toward both large and small magnetizations.
The strength parameter of the electromagnetic waves is defined as the ratio of the cyclotron frequency in the wave field to the wave frequency:
\begin{equation}
a=\frac{eE}{m_ec\omega},
\end{equation}
where $E=B$ is the amplitude of the wave. The cases $a\ll 1$ and $a\gg 1$ correspond, respectively, to non-relativistic and highly relativistic oscillations of electrons in the wave. This quantity is a relativistic invariant. Let us define the efficiency of the maser emission, $\chi$, as the ratio of the emitted power to the total power of the inflow, both measured in the shock front reference frame. Using Eq. (\ref{maser_frequency}), we find that if the inflow Lorentz factor is $\gamma$, the strength parameter of the emitted waves is \citep{Iwamoto_etal17,Plotnikov_Sironi19}:
\begin{equation}
a\sim\frac{\chi^{1/2}}{\zeta}\gamma.
\label{strength}\end{equation}
One sees that a highly relativistic shock emits strong electromagnetic waves, which could profoundly influence the interaction of the outflow with its surroundings.
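Eq. (\ref{strength}) can be evaluated directly; a minimal sketch (the efficiency $\chi$ and the factor $\zeta$ below are illustrative assumptions):

```python
import math

def strength_parameter(chi, gamma, zeta=3.0):
    """Strength parameter of the precursor waves, Eq. (strength).

    chi: maser efficiency; gamma: Lorentz factor of the inflow;
    zeta ~ a few is the ratio of the emitted frequency to the plasma
    frequency (an assumed illustrative value)."""
    return math.sqrt(chi) * gamma / zeta

# A few-percent efficiency and a modestly relativistic inflow already
# yield a nonlinear wave, a >> 1:
a = strength_parameter(chi=0.03, gamma=100.0)
```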
The polarization of the radiation is an important issue. Most of the available simulations of the maser emission from relativistic shocks were performed either in 1D or in 2D with the simulation plane perpendicular to the background magnetic field. No information on polarization could be inferred from these simulations. In \citep{Gallant92}, 2D simulations were performed for the magnetic field in the simulation plane; it was reported that the outgoing radiation is 100\% polarized perpendicular to the magnetic field. Conversely, both polarization modes were found in other studies \citep{Iwamoto_etal18,Plotnikov_Sironi19a}, where both 2D and 3D simulations were performed. The latter result seems to be more compatible with theoretical expectations. The synchrotron emission is completely polarized only in the plane of the rotating particles, whereas the out-of-plane emission has an electric field component parallel to the magnetic field. Therefore, the total emission is polarized predominantly perpendicular to the magnetic field, but the degree of polarization never reaches 100\%.
The dependence of the synchrotron maser emission on the pre-shock temperature was numerically investigated \citep{Babul_Sironi20} for the strongly magnetized case, $\sigma\ge 1$. It was found that the maser efficiency is nearly independent of the temperature at $k_BT<0.03m_ec^2$, but sharply drops at higher temperatures, becoming two orders of magnitude smaller at $k_BT=0.1m_ec^2$. The reason is that even at such a relatively small energy spread of the incoming electrons, the ring at the shock front becomes thick, $\Delta E\sim E$. As discussed in the previous subsection, at $\sigma>1$ the system emits at the first few harmonics of the Larmor frequency only if these harmonics do not overlap, i.e., if $\Delta E< E$; this explains the above result. How weakly magnetized shocks depend on the upstream temperature remains unclear.
All the above refer to shocks in electron-positron plasma. The case of electron-ion plasma is more complicated. In electron-ion flow, the electrons transfer only a small fraction of the energy. However, the electrons lag behind ions in the field of intense precursor waves. Then, a longitudinal large-scale electric field is excited so that the electrons are accelerated, taking the energy from the ion flow \citep{Lyubarsky06}. Moreover, the Raman scattering of the precursor waves produces plasma waves; this process leads to acceleration of the electrons and even to the formation of non-thermal electron distributions \citep{Hoshino08}. Therefore, the electrons could eventually reach an energy equipartition with the ions before the flow reaches the shock. In this case, the electron mass should be substituted by one-half the ion mass in Eq. (\ref{maser_frequency}) for the frequency of the emitted radiation so that the frequency of the precursor decreases.
Numerical simulations of relativistic shocks in magnetized electron-ion flows \citep{Lyubarsky06,Hoshino08,Iwamoto_etal18} reveal a large-scale longitudinal electric field and the acceleration of electrons up to equipartition upstream of the shock. The expected decrease in the emitted frequency was also found \citep{Hoshino08}, but the relatively small ion-to-electron mass ratio in the simulations (only 50) does not permit firm conclusions.
The outlined energy equilibration mechanism assumes that the strong-enough precursor is produced from the beginning, when the electrons are still light.
However, in this case, the shock is mediated by the Larmor rotation of ions; it is not evident that the ring-like distribution of light electrons is formed. The synchrotron radiation of ions could be suppressed at this stage because the frequency is well below the electron plasma frequency; then, the equilibration mechanism is not involved. However, if the electron magnetization is not small, such that the electron Larmor frequency exceeds the radiation frequency, the low-frequency radiation becomes possible. This question has not been properly investigated theoretically. In numerical simulations, the shock is initiated by reflecting the flow off a rigid conducting wall. In this case, a strong electromagnetic perturbation is initially formed, which is sufficient to trigger the acceleration process. It is unclear how the process is triggered in reality.
Another problem is that the accelerated electrons have a wide quasithermal energy distribution, in which case the synchrotron maser emission may be suppressed. It was speculated \citep{Iwamoto_etal18} that after the precursor is quenched, cold undisturbed electrons enter the shock once again and the whole positive feedback cycle is initiated. More simulations are necessary to clarify the issue.
\subsection{Radiation from Reconnecting Current Sheets}
In this section, a sort of antenna mechanism is described. An antenna is a linear conductor carrying a variable current. Variable linear currents are naturally formed in the course of magnetic reconnection in a current
sheet, separating oppositely directed magnetic fields. In the reconnection process, the current
sheet breaks into a system of linear currents, which are called magnetic islands because of their
island-like appearance in 2D simulations. The process is highly variable, so we can easily imagine that the variable currents are sources of electromagnetic waves. This analogy is, however, of limited use because antennas emit into a vacuum, whereas the reconnection process occurs in plasma, with the characteristic variability times being larger than the microscopic plasma times. The frequency of the emitted waves is lower than both the plasma and the Larmor frequency; therefore, the system should be described in terms of magnetohydrodynamics (MHD).
Inasmuch as the parallel currents are attracted to one another, the islands continuously merge (see, e.g., \citep{Kagan_etal15}). The merging
of two islands perturbs the magnetic field in the vicinity of the merging point, thus exciting
MHD waves around the reconnecting current sheet. There are generally three types of MHD waves: the Alfv\'en wave and two magnetosonic waves. In the simplest case of cold plasma, only the Alfv\'en and the fast magnetosonic (fms) waves remain. In the Alfv\'en waves, the magnetic field lines oscillate due to the magnetic tension, like stretched strings. These waves propagate along the magnetic field lines;
therefore, they do not transfer the energy away from the current sheet. Fast magnetosonic waves propagate across the magnetic field lines. These waves resemble sound waves, but instead of the gas pressure, the restoring force is provided by the magnetic pressure. In cold plasma, the fms velocity is (e.g., \citep{Appl_Camenzind88}):
\begin{equation}
v_{\rm fms}=\sqrt{\frac{B^2}{4\pi\rho c^2+B^2}}=\sqrt{\frac{\sigma}{1+\sigma}},
\label{fms_velocity}\end{equation}
where $\rho$ is the plasma density; all quantities are measured in the plasma frame. The fms velocity is independent of the propagation direction; therefore, any merging event produces
a quasi-spherical fms pulse that propagates away, with the duration of the pulse being $\sim a/c$, where $a$ is the transverse size of the island. The size of magnetic islands scales with the Larmor radius of particles in the current sheet, $a\sim{\rm few}\,c/\Omega_B$.
Therefore, the reconnection process produces emission with a wavelength of the order of a dozen Larmor radii of the particles within the sheet. The fms pulses are clearly observed in simulations of the reconnection in the current sheet \citep{Philippov19}; it was found that they take away roughly 0.5\% of the magnetic energy dissipated in the course of the reconnection process.
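The fms speed (\ref{fms_velocity}) and the resulting pulse scales can be sketched as follows (CGS units; the factor of a few in the island size is an assumed parameter):

```python
import math

# CGS constants
e, m_e, c = 4.803e-10, 9.109e-28, 2.998e10

def v_fms(sigma):
    """Fast magnetosonic speed in cold magnetized plasma, in units of c (Eq. fms_velocity)."""
    return math.sqrt(sigma / (1.0 + sigma))

def fms_pulse_scales(B, gamma, n_radii=3.0):
    """Island size (a few Larmor radii; n_radii is an assumed factor)
    and the resulting length and duration of the emitted fms pulse."""
    Omega_B = e * B / (m_e * c * gamma)   # relativistic Larmor frequency
    a = n_radii * c / Omega_B             # transverse island size [cm]
    return a, a / c                       # pulse length [cm], duration [s]
```

Note that $v_{\rm fms}\to c$ at $\sigma\gg 1$, which is the limit relevant to magnetically dominated current sheets.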
In the fms wave, the plasma density oscillates along the propagation direction, just as in sound waves. The magnetic field is frozen into the plasma; therefore, plasma compression and rarefaction are accompanied by oscillations in the field strength. The electric field vanishes locally in the plasma frame; therefore, in the global frame where the plasma is on average at rest, the electric field satisfies the relation $\mathbf{E}+\frac 1c\mathbf{v\times B}=0$, where $\mathbf{v}$ is the oscillation velocity. Thus, the electric field of the wave is perpendicular both to the background magnetic field and to the propagation direction. The fms wave is therefore longitudinal as a hydrodynamic wave but transverse as an electromagnetic wave.
In the magnetically dominated plasma, $\sigma\gg 1$, the fms velocity is close to the speed of light, and when the wave propagates toward smaller plasma densities, such that $\sigma\to\infty$, it becomes a vacuum electromagnetic wave polarized perpendicular to the background magnetic field. This could be easily understood as follows: Consider a
vacuum electromagnetic wave superimposed on a homogeneous
magnetic field such that the electric field of the wave is
perpendicular to the background field. In this system, a seed charged particle oscillates due to drift in the crossed electric field of the wave and the background magnetic field.
The velocity of the electric drift is independent of the particle charge; therefore, if a small amount of plasma is added to
the system, no electric current appears in the system and the
wave propagates as in vacuum.
The coherent radio emission could be produced in the course of reconnection in magnetically dominated plasma, provided the Larmor radius of particles in the current sheet is comparable with the radio wavelength. This mechanism was recently proposed as a source of radio emission from the Crab and Crab-like
pulsars \citep{Uzdensky_Spitkovsky14,Lyubarsky19,Philippov19}. In ref.\ \citep{Lyubarsky20}, the mechanism was applied to FRBs.
\section{FRBs from Magnetar Flares}
In the previous section, mechanisms of coherent radio emission were outlined that look promising in the FRB context. They all imply highly magnetized and relativistic plasma, which could be found in many potential FRB progenitors. Magnetars appear to be the most promising sources of FRBs. Initially, the FRB--magnetar connection was proposed on statistical grounds \citep{PopovPostnov07} because the rate of magnetar flares is comfortably above the FRB rate, as distinct from, say, that of gamma-ray bursts. The recent discovery of weak FRBs from the Galactic magnetar SGR 1935+2154 \citep{CHIME20,Bochenec20} lends strong support to magnetar models. In this section, I describe how the above-outlined coherent radiation mechanisms could work in magnetars.
\subsection{Magnetar Magnetosphere and Wind}
Magnetars are neutron stars with surface magnetic field $\sim 10^{15}$ G. An even larger field, $\sim 10^{16}$ G, could be buried under the surface of the star. The notion of magnetars was introduced to astrophysics by Duncan and Thompson \citep{Duncan_Thompson92,Thompson_Duncan95}, who predicted that ultrastrong magnetic fields could be generated in proto-neutron stars, and demonstrated that the decay of this field could feed soft gamma repeaters (for a recent review of magnetars, see \citep{Kaspi_Beloborodov17}). Restructuring of the magnetic field in a magnetar's magnetosphere produces X-ray flares with energies of $10^{40}-10^{46}$ erg and durations from a fraction of a second to a few minutes.
A rotating magnetar's magnetosphere emits a relativistic wind, similar to the pulsar wind, which removes the rotational energy of the neutron star. Therefore, the neutron star spins down; the magnetar period increases to $\sim 1$ s in only $\sim 10$ years.
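The quoted spin-down time can be checked with the standard magnetodipole estimate. A minimal sketch (CGS units; the moment of inertia $I=10^{45}$ g cm$^2$ is an assumed fiducial value):

```python
import math

c = 2.998e10  # speed of light [cm/s]

def spindown_age(P, mu=1e33, I=1e45):
    """Time [s] for a magnetar born with P0 << P to spin down to period P.

    Uses the magnetodipole law I*Omega*dOmega/dt = -mu^2*Omega^4/c^3,
    so that P*dP/dt = (2*pi)^2*mu^2/(I*c^3) is constant; the moment of
    inertia I = 1e45 g cm^2 is an assumed fiducial value."""
    PPdot = (2 * math.pi)**2 * mu**2 / (I * c**3)
    return P**2 / (2 * PPdot)

years = spindown_age(1.0) / 3.15e7   # ~ a decade for mu = 1e33 G cm^3
```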
In the magnetosphere of an active magnetar, the magnetic field lines are twisted; therefore, the field is non-potential. The twist currents in the magnetosphere are carried by electron--positron pairs produced by the current \citep{BeloborodovThompson07,Beloborodov13a}: when the density of the pairs drops below the limit sufficient to maintain the current, the induced electric field (the displacement current) accelerates particles and triggers an avalanche, producing new pairs. Nevertheless, the stability considerations imply that in the persistent state, the magnetic field could hardly strongly exceed the potential field; therefore, for rough estimates of the field strength, we could use the dipole field.
The rotating magnetosphere becomes open at the light cylinder radius:
\begin{equation}
R_L=cP/2\pi =4.8\times 10^9P\,\rm cm,
\label{RL}\end{equation}
where $P$ is the rotational period. The magnetic field at the light cylinder is estimated as:
\begin{equation}
B_L=\frac{\mu}{R_L^3}=9\cdot 10^3 \frac{\mu_{33}}{P^{3}}\,\rm
G,
\label{BL}\end{equation}
where $\mu$ is the magnetic moment of the star. In magnetars, the surface magnetic field is $B_*\sim 10^{15}$ G; therefore, the magnetic moment is $\mu\sim B_*R^3_*\sim 10^{33}$ G$\cdot$cm$^3$, where $R_*\approx 10^6$ cm is the neutron star's radius. The pairs are continuously ejected from the magnetosphere, forming a magnetar wind. The particle flux in the wind from a strongly twisted magnetar magnetosphere was estimated as \citep{Beloborodov20}:
\begin{equation}
\dot{\cal N}\sim{\cal M}\frac{c\mu}{eR_LR_{\pm}}=2.5\times 10^{39}\frac{{\cal M}_3\mu_{33}^{2/3}}{P}\,\rm s^{-1},
\label{Ndot}\end{equation}
where ${\cal M}$ is the pair multiplicity (should not be confused with the multiplicity parameter used in the pulsar theory) and $R_{\pm}\sim 5\times 10^6\mu^{1/3}_{33}$ cm is the distance from the star where the magnetic field falls to $10^{13}$ G, such that the pair production
stops. The maximal multiplicity was estimated to be of the order of ${\cal M}\sim 10^3$ \citep{Beloborodov20}, but it should be stressed that the process of pair production in magnetar magnetospheres is highly uncertain; therefore, one cannot exclude that the pair loading of the magnetar wind is much larger or smaller than that given by Eq. (\ref{Ndot}).
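A quick evaluation of Eq. (\ref{Ndot}) for the fiducial parameters (a sketch in CGS units):

```python
import math

# CGS constants
e, c = 4.803e-10, 2.998e10

def wind_particle_flux(M=1e3, mu=1e33, P=1.0):
    """Pair flux in the magnetar wind, Eq. (Ndot) [s^-1].

    M: pair multiplicity; mu: magnetic moment [G cm^3]; P: period [s]."""
    R_L = c * P / (2 * math.pi)            # light-cylinder radius, Eq. (RL)
    R_pm = 5e6 * (mu / 1e33)**(1 / 3)      # radius where B falls to 1e13 G
    return M * c * mu / (e * R_L * R_pm)
```

For ${\cal M}=10^3$, $\mu=10^{33}$ G cm$^3$, and $P=1$ s, this returns $\dot{\cal N}\approx 2.5\times 10^{39}$ s$^{-1}$, as in Eq. (\ref{Ndot}).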
The magnetar wind is strongly magnetized.
Beyond the light cylinder, the field is wound up by the rotating magnetosphere so that in the far zone, the azimuthal field dominates:
\begin{equation}
B_{\rm wind}=B_L\frac{R_L}{R}.
\end{equation}
The initial magnetization parameter of the magnetar wind, which is
defined as the ratio of the Poynting flux to the rest mass energy flux in the wind, is:
\begin{equation}
\eta=\frac{B_L^2R_L^2}{\dot{\cal N}m_ec}=2.5\cdot 10^4\frac{\mu_{33}^{4/3}}{{\cal M}_3P^3}.
\label{eta}\end{equation}
The magnetization parameter is the maximal Lorentz factor achievable by the wind if the
magnetic energy is completely converted to kinetic energy.
Beyond the light cylinder, the strongly magnetized wind accelerates linearly with distance $\gamma\sim R/R_L$, until it reaches the fast
magnetosonic point, where $\gamma\sim\eta^{1/3}$. If there is no dissipation, the wind
accelerates very slowly beyond the fast magnetosonic point, $\propto(\ln R)^{1/3}$ \citep{Beskin98}. Therefore, we can take the Lorentz factor of the wind in the far zone, $R\gg \eta^{1/3}R_L$, to be:
\begin{equation}
\tilde{\gamma}_{\rm wind}=3\eta^{1/3}=90\frac{\mu_{33}^{4/9}}{{\cal M}^{1/3}_3P}.
\end{equation}
The magnetic dissipation could lead to a gradual acceleration of the wind in the equatorial belt,
where the magnetic field changes sign every half of a period
\citep{Lyubarsky_Kirk01,Kirk_Skjeraasen03}; then, the Lorentz factor of the wind exceeds
$\tilde{\gamma}_{\rm wind}$ and may even reach $\eta$. The efficiency of the dissipation is quite uncertain; therefore, in all formulas below, I retain $\gamma_{\rm wind}\ge\tilde{\gamma}_{\rm wind}$ as a free parameter. The ratio of the Poynting to the plasma kinetic energy flux in the wind may be generally presented as:
\begin{equation}
\sigma_{\rm wind}=\frac{B^2}{4\pi mc^2\gamma_{\rm wind}N}=\frac{\eta}{\gamma_{\rm wind}}=
280\frac{\mu_{33}^{8/9}}{{\cal M}^{2/3}_3P^2}\frac{\tilde{\gamma}_{\rm wind}}{\gamma_{\rm wind}}, \label{sigma_wind}\end{equation}
where $N$ is the pair density in the wind.
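The chain of wind parameters, Eqs. (\ref{eta})--(\ref{sigma_wind}), can be evaluated for the fiducial values as follows (a sketch; by default, the wind Lorentz factor is set to $\tilde{\gamma}_{\rm wind}$):

```python
import math

# CGS constants
e, m_e, c = 4.803e-10, 9.109e-28, 2.998e10

def wind_parameters(M=1e3, mu=1e33, P=1.0, gamma_wind=None):
    """Magnetization eta (Eq. eta), asymptotic Lorentz factor, and the
    ratio sigma_wind (Eq. sigma_wind) of the magnetar wind."""
    R_L = c * P / (2 * math.pi)                    # Eq. (RL)
    B_L = mu / R_L**3                              # Eq. (BL)
    R_pm = 5e6 * (mu / 1e33)**(1 / 3)
    Ndot = M * c * mu / (e * R_L * R_pm)           # Eq. (Ndot)
    eta = B_L**2 * R_L**2 / (Ndot * m_e * c)       # Eq. (eta)
    gamma_tilde = 3 * eta**(1 / 3)
    gamma = gamma_tilde if gamma_wind is None else gamma_wind
    return eta, gamma_tilde, eta / gamma
```

For the default parameters, this gives $\eta\approx 2.7\times 10^4$, $\tilde{\gamma}_{\rm wind}\approx 90$, and $\sigma_{\rm wind}\approx 300$, consistent with the rounded coefficients above.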
\subsection{Magnetar Flares and FRBs}
The energy of magnetar flares is sufficient to feed FRBs, even if a small fraction is converted to radio waves. The flares are produced by a rapid restructuring of the magnetar's magnetic field. The duration of FRBs is compatible with the Alfv\'en crossing time of the inner magnetosphere, which is the characteristic time of the development of the MHD instabilities that trigger the flares. The duration of the flares significantly exceeds this time, which could be attributed to the relatively long time necessary to radiate the released energy away. It is also possible that during the flare, a few magnetic explosions happen (see, e.g., \citep{Yuan_etal20}). In this case, the question remains why only one explosion typically produces an FRB.
In the magnetar magnetosphere, both the Larmor and the plasma frequencies are far above the radio band. Therefore, if the source of FRBs was deep within the magnetosphere, it should have emitted MHD waves. However, these waves are unable to escape because of non-linear interactions, as will be shown in Section 4.5. Therefore, FRBs could be produced only in the outer magnetosphere or in the magnetar wind. Far from the neutron star, the magnetic energy density is insufficient to produce a short, powerful burst. However, the necessary energy could be delivered to the far zone by a strong magnetic pulse excited during a sudden rearrangement of the magnetosphere, which gives rise to
the magnetar flare. Let us consider the properties of such a pulse.
A rapid restructuring of the magnetosphere
produces a large-scale MHD perturbation that propagates
outward, sweeping the magnetic field lines into a
pulse of length $l=c\tau=3\times 10^7\tau_{\rm ms}$ cm, where $\tau$ is the duration of the magnetic restructuring process. The latter is of the order of the Alfv\'en crossing time of the inner magnetosphere, i.e., of a few to a few tens of stellar radii. The pulse opens the magnetosphere and propagates further out into the magnetar wind. As discussed in Section 2.6, only fms waves can propagate across the magnetic field lines. Therefore, independently of the initial magnetic configuration, we deal with an fms pulse in the far zone. The amplitude of the pulse may be conveniently presented as:
\begin{equation}
B_{\rm pulse}=\sqrt{\frac{L_{\rm pulse}}c}\frac 1R=3.8\times 10^8\frac{L_{\rm
pulse,47}^{1/2}}{P}\frac{R_L}R\, \rm G,
\label{Bpulse}
\end{equation}
where $L_{\rm pulse}$ is the isotropic luminosity associated with the pulse. The total
energy in the pulse is:
\begin{equation}
{\cal E}=L_{\rm pulse}\tau=10^{44}L_{\rm pulse,47}\tau_{\rm ms}\,\rm erg.
\label{energy-total}
\end{equation}
In the far zone, the amplitude of the pulse significantly exceeds the background magnetic field.
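Eqs. (\ref{Bpulse}) and (\ref{energy-total}) for the fiducial parameters (a sketch in CGS units):

```python
import math

c = 2.998e10  # speed of light [cm/s]

def pulse_field(L_pulse, R):
    """Pulse amplitude at radius R, Eq. (Bpulse): B = sqrt(L_pulse/c)/R [G]."""
    return math.sqrt(L_pulse / c) / R

def pulse_energy(L_pulse, tau):
    """Total energy of the pulse, Eq. (energy-total) [erg]."""
    return L_pulse * tau

R_L = c * 1.0 / (2 * math.pi)        # light cylinder for P = 1 s
B_at_RL = pulse_field(1e47, R_L)     # ~ 4e8 G, cf. Eq. (Bpulse)
```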
Just beyond the light cylinder, the pulse enters the magnetar wind. Neglecting plasma inertia, i.e., in the limit $\sigma\to\infty$, the pulse may be considered purely electromagnetic. It propagates nearly at the speed of light; the magnetic field is purely azimuthal, whereas the electric field is poloidal and equal to the magnetic field $E_{\rm pulse}=B_{\rm pulse}$. The plasma is squeezed in the pulse and pushed forward; the plasma velocity with respect to the wind is estimated as the velocity of the zero electric field frame, $v'=E'/B'=B'_{\rm pulse}/(B'_{\rm wind}+B'_{\rm pulse})$, where the prime refers to quantities in the wind frame.
The corresponding Lorentz factor is:
\begin{equation}
\Gamma'=\sqrt{\frac{B'_{\rm pulse}}{2B'_{\rm wind}}}=\frac 12\sqrt{\frac{B_{\rm pulse}}{B_{\rm
wind}}}=100\frac{L_{{\rm pulse,}47}^{1/4}P}{\mu_{33}^{1/2}}.
\label{Gamma}\end{equation}
In the lab frame, the Lorentz factor of the pulse is $\Gamma=2\Gamma'\gamma_{\rm wind}$.
The pulse itself, i.e.,
the waveform, moves with the fms velocity (\ref{fms_velocity}) with respect to the local plasma velocity. For a large magnetization $\sigma\gg 1$ and a high amplitude of the pulse $B_{\rm pulse}\gg B_{\rm wind}$, the difference between the velocity of the waveform and the speed of light is estimated in the wind frame as:
\begin{equation}
\frac{c-v'_{\rm form}}c\sim (\sigma\Gamma'^2)^{-1}\sim\frac{B_{\rm wind}}{\sigma\gamma^2_{\rm wind}B_{\rm pulse}},
\label{delta_v}\end{equation}
so we can safely assume that the pulse moves with the speed of light. The pulse moves through the magnetar wind as a
propagating wave so that the plasma enters the pulse through the front part and eventually leaves it through the rear part. Within the pulse, the plasma moves with respect to the wind with the large Lorentz factor (\ref{Gamma}). Therefore, the plasma is dragged within the pulse to a large distance and is only slowly replaced by the wind plasma.
The dependence of the waveform velocity on the local density and magnetic field, which vary
across the pulse, could lead to the non-linear steepening of the pulse. However, this velocity is very close to the speed of light (see Eq. (\ref{delta_v})); therefore, the non-linearity is weak even if the amplitude of the pulse is very large \citep{Levinson_vanPutten97,Lyubarsky03,Lyutikov10}. The physical reason for this is that the displacement current significantly exceeds the conductivity current, so the pulse propagates nearly as if in a vacuum. Moreover, the waveform propagates with the fms velocity in the local plasma frame, which moves at relativistic speeds if the pulse amplitude exceeds the background field, see Eq. (\ref{Gamma}). Substituting the above estimates of the wind and pulse parameters into Eq. (\ref{delta_v}), the non-linear steepening scale is estimated as
\citep{Lyubarsky20}:
\begin{equation}
R_{\rm steep}=\frac{cl}{c-v_{\rm form}}=8\Gamma^2\sigma_{\rm pulse}l=10^{13}\sigma_{\rm pulse}\gamma^2_{\rm wind}\frac{L_{\rm pulse, 47}^{1/2}P^2\tau_{\rm ms}}{\mu_{33}}\,\rm cm.
\label{steepening1}\end{equation}
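Collecting Eqs. (\ref{Bpulse}), (\ref{Gamma}), and (\ref{steepening1}), the steepening scale can be evaluated as follows (a sketch; the adopted $\sigma_{\rm pulse}$ and $\gamma_{\rm wind}$ are illustrative values):

```python
import math

c = 2.998e10  # speed of light [cm/s]

def steepening_scale(L_pulse=1e47, mu=1e33, P=1.0, tau=1e-3,
                     sigma_pulse=1.0, gamma_wind=10.0):
    """Non-linear steepening scale of the pulse, Eq. (steepening1) [cm].

    The sample sigma_pulse and gamma_wind are illustrative."""
    R_L = c * P / (2 * math.pi)
    B_wind_R = (mu / R_L**3) * R_L           # B_wind * R = B_L * R_L (constant)
    B_pulse_R = math.sqrt(L_pulse / c)       # B_pulse * R, from Eq. (Bpulse)
    Gamma_prime = 0.5 * math.sqrt(B_pulse_R / B_wind_R)   # Eq. (Gamma)
    Gamma = 2 * Gamma_prime * gamma_wind                  # lab-frame Lorentz factor
    return 8 * Gamma**2 * sigma_pulse * c * tau
```

For the default values, $R_{\rm steep}\approx 10^{15}$ cm, in agreement with Eq. (\ref{steepening1}) at $\gamma_{\rm wind}=10$.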
The above considerations implicitly assume that the mean field in the wind points in the same direction as the initial field of the pulse. However, we can imagine a situation
where the field in the wind is opposite to the field in the pulse. In an ideal case, this does not affect the obtained results because we can imagine a pulse within which the magnetic field changes sign such that a current sheet separates the domains of the opposite polarity. The electromagnetic stress is a quadratic function of the fields; therefore, a solution of ideal
MHD equations is not affected if we reverse fields in any bundle of the magnetic field lines and insert an appropriate
current sheet, provided that the plasma in the sheet is light
enough (so that the overall inertia is not affected). In reality, the current sheet is unstable so that the new flux could annihilate with the flux in the pulse. The flux in the pulse, $B_{\rm pulse}l$, significantly exceeds the flux in the wind, $B_{\rm wind}R=B_LR_L$; therefore, the pulse will not be destroyed.
However, the field annihilation heats the plasma and
the heated plasma expands, transforming heat into kinetic energy.
A detailed analysis of this process has not yet been performed.
\subsection{FRBs Produced by Relativistic Shocks from Magnetar Flares}
As shown in Section 2.5, relativistic magnetized shocks are efficient sources of coherent emission. Let us discuss how such shocks could be produced by outflows from magnetar flares and whether the properties of their emission are compatible with the observed properties of FRBs.
It was proposed \citep{Lyubarsky14} that an FRB is produced when the electromagnetic perturbation from the flare (magnetic pulse described in Section 3.2) reaches the nebula inflated by the magnetar wind in the surrounding gas. The inner boundary of the nebula is determined by the balance of the pressure within the nebula and the dynamic pressure of the magnetar wind:
\begin{equation}
R_s=\frac{B_LR_L}{\sqrt{4\pi p}}=1.2\times 10^{15}\frac{\mu_{33}}{P^2p_{-4}^{1/2}}\,\rm cm,
\end{equation}
where $p$ is the pressure within the nebula. The latter quantity is highly uncertain; normalization by $10^{-4}\,\rm dyne/cm^2$ was chosen because this value is roughly compatible with the observational data for the nebula surrounding FRB 121102 \citep{Beloborodov17}.
When the electromagnetic pulse arrives at the wind termination
shock, it pushes the plasma outward like a magnetic piston. A forward shock propagates through the magnetized and relativistically hot
plasma of the nebula, whereas a reverse shock enters the magnetic piston. Between the shocks, a contact discontinuity separates
the shocked plasma of the nebula from the magnetic piston. At the
contact discontinuity, the magnetic pressure of the pulse is balanced
by the bulk pressure of the relativistically hot plasma entering the forward shock. The pressure balance condition yields the Lorentz factor of the contact discontinuity:
\begin{equation}
\Gamma_{\rm cd}=\left(\frac{B_{\rm pulse}^2}{32\pi p}\right)^{1/4}=150\frac{L_{\rm pulse,47}^{1/4}P}{\mu^{1/2}_{33}}.
\end{equation}
The plasma in the nebula is mildly magnetized, $\sigma\le 1$; therefore, the forward shock moves relative to the contact discontinuity only mildly relativistically; we can roughly take the Lorentz factor of the shock to be $\Gamma_{\rm cd}$.
The frequency of the maser radiation in the shock frame is estimated by Eq. (\ref{maser_frequency}). The plasma pressure beyond the forward shock is of the order of the magnetic pressure in the pulse. Therefore, the relativistic plasma frequency (\ref{Omega_B,Omega_p}) at the shock front is roughly equal to the Larmor frequency of particles rotating with the Lorentz factor $\Gamma_{\rm cd}$ in the field $B'_{\rm pulse}=B_{\rm pulse}/\Gamma_{\rm cd}$. Making the Lorentz transform, we estimate the observed frequency as:
\begin{equation}
\nu\sim\frac{\zeta eB_{\rm pulse}}{2\pi m_ec\Gamma_{\rm cd}}=28\zeta\frac{ L_{\rm pulse,47}^{1/4}p_{-4}^{1/2}P}{\mu_{33}}\,\rm MHz.
\end{equation}
The observed duration of the pulse is determined by the time spread of the arrival of radiation emitted by the fraction of the forward shock, subtending the angle $\Gamma^{-1}_{\rm cd}$. This yields:
\begin{equation}
\tau_{\rm obs}\sim\frac{R_s}{2c\Gamma_{\rm cd}^2}=0.9\frac{\mu^2_{33}}{L^{1/2}_{\rm pulse,47}P^4p_{-4}^{1/2}}\,\rm s.
\end{equation}
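The chain of estimates above can be reproduced directly from the definitions, without the rounded scaling coefficients (a sketch in CGS units; $\zeta\sim$ a few is an assumption):

```python
import math

# CGS constants
e, m_e, c = 4.803e-10, 9.109e-28, 2.998e10

def nebula_shock(L_pulse=1e47, mu=1e33, P=1.0, p=1e-4, zeta=3.0):
    """Termination-shock model: radius R_s, Lorentz factor of the contact
    discontinuity, observed frequency [Hz], and burst duration [s],
    computed directly from the definitions in the text."""
    R_L = c * P / (2 * math.pi)
    B_L = mu / R_L**3
    R_s = B_L * R_L / math.sqrt(4 * math.pi * p)   # pressure balance with the nebula
    B_pulse = math.sqrt(L_pulse / c) / R_s         # Eq. (Bpulse) at R_s
    Gamma_cd = (B_pulse**2 / (32 * math.pi * p))**0.25
    nu = zeta * e * B_pulse / (2 * math.pi * m_e * c * Gamma_cd)
    tau_obs = R_s / (2 * c * Gamma_cd**2)
    return R_s, Gamma_cd, nu, tau_obs
```

For the fiducial values, this gives $R_s\sim 10^{15}$ cm, $\Gamma_{\rm cd}\sim 10^2$, $\nu\sim 10^2$ MHz, and $\tau_{\rm obs}\sim 1$ s, quantifying how far these parameters are from a millisecond-duration GHz burst.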
The observed duration of the burst, $\tau\sim 1$ ms, and the frequency, $\nu\sim 1$ GHz, may be reproduced only by assuming a long-period magnetar, $P\sim 5$ s, and the power, $L_{\rm pulse}\sim 10^{38}$ erg/s. However, the nebula around such a long-period magnetar could hardly have a large energy density and pressure, whereas decreasing $p$ demands a further increase in the burst power. In the original paper \citep{Lyubarsky14}, FRBs were attributed to very rare superflares with a total energy of $\sim 10^{48}$ erg, which could occur when the neutron star eventually becomes unstable to a dynamic overturning instability that destroys most of its dipole moment in a single event \citep{Eichler02}. The later-discovered repeating FRBs, as well as the ubiquity of single FRBs, indicated that the energy behind a typical FRB could hardly be that large. Moreover, the simulation results discussed in Section 2.5 cast doubt on the possibility of maser emission from shocks in a relativistically hot medium.
To resolve these difficulties, it was assumed that the shock arises in the magnetar wind at distances of the order of $10^{14}$ cm \citep{Beloborodov17,Beloborodov20,Yuan_etal20}. In this case, the pulse acts as a piston driving a blast wave into the cold pre-explosion wind. The plasma in the pulse moves with the Lorentz factor (\ref{Gamma}) with respect to the wind. In a highly magnetized medium, the shock propagates with respect to the downstream plasma with the fast magnetosonic Lorentz factor $\sqrt{\sigma}$, Eq. (\ref{fms_velocity}). Therefore, in the lab frame, the shock Lorentz factor is $\Gamma_{\rm shock}=2\Gamma\sigma_{\rm wind}^{1/2}$. In the downstream frame, the particles rotate with the Lorentz factor $\Gamma'$ in the magnetic field $B'_{\rm pulse}=B_{\rm pulse}/\Gamma$. The frequency of the observed maser emission is obtained by the Lorentz transform:
\begin{equation}
\nu=\zeta\frac{eB'_{\rm pulse}}{2\pi m_ec\Gamma'}\Gamma
=0.5\zeta\frac{\mu_{33}^{1/2}L^{1/2}_{\rm pulse, 47}}{PR_{14}}\,\rm GHz,
\label{frequency1}\end{equation}
which is compatible with observations (recall that $\zeta\sim$ a few, see Eq. (\ref{maser_frequency})), if the radiation occurs at the distance $R\sim 10^{14}$ cm.
The observed duration of the burst is:
\begin{equation}
\tau_{\rm obs}=\frac{R}{2c\Gamma^2_{\rm shock}}=0.01\frac{\mu_{33}R_{14}}{\sigma_{\rm wind}\gamma^2_{\rm wind}L^{1/2}_{\rm pulse,47}P^2}\, \rm s.
\label{tau_obs}\end{equation}
The observed duration, $\sim 1$ ms, may be obtained only if the magnetar wind is mildly relativistic, $\gamma_{\rm wind}\sim$ a few, and moderately magnetized, $\sigma_{\rm wind}\sim 1$. Moderate Lorentz factor and magnetization of the wind are also required by the condition that the deceleration scale of the piston, $\sim \Gamma^2l$, does not significantly exceed $\sim 10^{14}$ cm because in the opposite case, too little energy is emitted in the GHz band.
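As a sanity check, Eqs. (\ref{frequency1}) and (\ref{tau_obs}) can be evaluated numerically. The Python sketch below adopts the illustrative values $\zeta=3$, $\gamma_{\rm wind}=3$, $\sigma_{\rm wind}=1$, with all normalized parameters set to unity:

```python
def nu_GHz(zeta, mu33, L47, P, R14):
    """Observed maser frequency for a shock in the wind, Eq. (frequency1), GHz."""
    return 0.5 * zeta * mu33**0.5 * L47**0.5 / (P * R14)

def tau_obs_s(mu33, R14, sigma_w, gamma_w, L47, P):
    """Observed burst duration, Eq. (tau_obs), in seconds."""
    return 0.01 * mu33 * R14 / (sigma_w * gamma_w**2 * L47**0.5 * P**2)

print(nu_GHz(3, 1, 1, 1, 1))        # 1.5 GHz
print(tau_obs_s(1, 1, 1, 3, 1, 1))  # ~1.1e-3 s
```

A mildly relativistic, moderately magnetized wind indeed brings both numbers into the observed range; raising $\gamma_{\rm wind}$ to $\sim 10$ shortens the burst well below a millisecond.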
This model is based on the assumption that the
pulse may be considered as a piston, i.e., the front part of the pulse is a flat wall pushing the plasma of the wind outward. However, initially, the pulse has a smooth shape, with the magnetic field gradually growing to the maximum at the scale of $\sim l=c\tau$.
The shock could only arise due to the non-linear steepening
discussed in Section 3.2. The larger the local magnetic field, the closer the velocity of the waveform to the speed of light, see Eq. (\ref{delta_v}). Fractions of the pulse with a larger field overtake the foot of the pulse, where the magnetic field approaches the background field; therefore, initially, a weak shock arises at the foot of the pulse. The field jump at the shock grows gradually as the fractions of the pulse with a larger field reach the shock. The shock velocity increases when the jump increases; therefore, the steepening process slows. The shock matches the amplitude of the pulse when the point of the maximum field arrives at the shock. This occurs at the distance given by Eq. (\ref{steepening1}). Note that the magnetization $\sigma_{\rm pulse}$ in this formula is determined by the magnetization just downstream of the shock because it is these parts of the pulse that reach the shock. There, the original matter of the pulse has already been replaced by the matter of the wind; therefore, we can substitute $\sigma_{\rm pulse}=\sigma_{\rm wind}$ into this expression.
Substituting $R_{\rm steep}$ into Eq. (\ref{frequency1}), we obtain:
\begin{equation}
\nu=5\frac{\zeta}{\sigma_{\rm wind}\gamma_{\rm wind}^2}\frac{\mu^{3/2}_{33}}{L^{1/4}_{\rm pulse, 47}P^3\tau_{\rm ms}}\,\rm GHz.
\label{frq}\end{equation}
We again see that we obtain a reasonable observed frequency if $\gamma_{\rm wind}\sim$ a few and $\sigma_{\rm wind}\sim 1$.
Substituting $R_{\rm steep}$ into Eq. (\ref{tau_obs}), we find that $\tau_{\rm obs}=\tau\sim 1$ ms, independent of other parameters of the pulse.
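Evaluating Eq. (\ref{frq}) numerically (Python; the values $\zeta=3$, $\gamma_{\rm wind}=3$, $\sigma_{\rm wind}=1$ are again illustrative assumptions):

```python
def nu_steep_GHz(zeta, sigma_w, gamma_w, mu33, L47, P, tau_ms):
    """Observed frequency after shock steepening, Eq. (frq), in GHz."""
    return (5.0 * zeta / (sigma_w * gamma_w**2)
            * mu33**1.5 / (L47**0.25 * P**3 * tau_ms))

print(nu_steep_GHz(3, 1, 3, 1, 1, 1, 1))  # ~1.7 GHz
```

The very weak dependence on the pulse luminosity ($\propto L_{\rm pulse}^{-1/4}$) is apparent from the exponent in the code.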
At the distance (\ref{steepening1}), the shock passes across a significant fraction of the pulse, thus releasing a significant fraction of the pulse energy. Therefore, the isotropic energy of the FRB is simply proportional to the isotropic energy of the pulse:
\begin{equation}
{\cal E}_{\rm FRB}=\chi {\cal E}=10^{41}\chi_{-3}L_{\rm pulse,47}\tau_{\rm ms}\,\rm erg.
\end{equation}
The model easily explains even bright FRBs. As the radiation frequency depends only very weakly on the pulse energy, whereas the burst duration does not depend on it at all, the model can explain the large scatter of the observed bursts in energy. For example, taking a small pulse energy and luminosity, say, $L_{\rm pulse}\sim 10^{41}$--$10^{42}$ erg/s, and a magnetar period of a few seconds, we obtain the same radiation frequency of about a GHz, whereas the energy of the burst is only $\sim 10^{35}$ erg, which is compatible with the data for the FRB from the Galactic magnetar SGR 1935+2154. Some repeaters exhibit bursts over a broad range of frequencies, from $\sim$400 MHz to 8 GHz. This could be attributed to variations in $\sigma_{\rm wind}$ and $\gamma_{\rm wind}$.
The model of the shock in the magnetar wind is thus compatible with observations, provided the magnetar wind is only mildly relativistic and moderately magnetized. This implies that the initial magnetization of the wind, $\eta$, is of the order of a few so that the wind is heavily loaded by the plasma. According to Eq. (\ref{eta}), the required plasma multiplicity is ${\cal M}\sim 10^7$, which is about four orders of magnitude larger than available magnetar models can provide. It was claimed \citep{Beloborodov17,Beloborodov20} that in a highly twisted pre-flare magnetosphere, the effective magnetic moment increases; then, the observed parameters of the burst may be achieved at $\gamma_{\rm wind}\sim 10$ and $\sigma_{\rm wind}\sim$ a few, so that $\eta$ may reach a few dozen. Still, this demands an extremely large multiplicity. What mechanism could load the wind with plasma so efficiently is still unclear.
It was assumed \citep{Yuan_etal20} that during the magnetar flare, repeating magnetic explosions eject a chain of plasmoids. The magnetic field lines gradually reconnect behind each plasmoid, generating a wind far stronger than the normal spindown. Repeating ejections drive blast waves in the amplified wind, producing FRBs. This model successfully reproduces the observed properties of an FRB from the Galactic magnetar SGR 1935+2154 \citep{CHIME20,Bochenec20}, and specifically the two subburst components separated by 30 ms. However, the model is based on axisymmetric simulations. In 3D, these plasmoids become unstable and dissipate within about 10 Alfv\'en times \citep{Riddhi_etal20}. Therefore, they could hardly escape the magnetosphere. However, copious pair production in the course of the magnetar flare could load the wind, at least temporarily. The model deserves further development.
To resolve the problem of a highly mass loaded magnetar wind, it was suggested that the shock arises when the pulse collides with the mildly relativistic baryon cloud ejected by a previous flare \citep{Beloborodov17,Metzger19,Beloborodov20,Margalit20a,Margalit20b,Xiao_Dai20}. It is an observational fact that a significant amount of baryonic matter is released in giant magnetar flares \citep{Gaensler05,Gelfand_etal05,Granot_etal06}. The model assumes that not earlier than 1 day before the FRB, a strong flare ejected the required amount of material. Then, the magnetic pulse drives a shock in the cloud. However, the magnetization rapidly decreases within an expanded cloud, which means that the conditions for the synchrotron maser could be violated in this shock. Even if the magnetization of the cloud is not too small, the maser emission from a shock in the baryonic plasma is determined by proton plasma and cyclotron frequencies, which are very low. Therefore, the observed FRBs could hardly be produced by shocks in such clouds. After the cloud is ejected, the magnetar wind flows around it and forms a bow shock behind it. Therefore, before the pulse enters the cloud, it passes through a magnetized electron-positron plasma of the shocked magnetar wind. The pulse drives the shock in this plasma; therefore, we can speculate that the emission is produced by this shock. However, this plasma is relativistically hot; therefore, the synchrotron maser may be disabled in this case, see \citep{Babul_Sironi20} and the discussion in Section 3.3. Therefore, thus far, there is no satisfactory solution to the problem of a mildly relativistic, heavily-plasma-loaded medium required for the synchrotron maser model.
In all the above models, the shock propagates outward; therefore, the emission frequency, which scales with the local Larmor frequency, decreases. So, the sometimes-observed
downward frequency drift in FRBs \citep{CHIME19,Hessels19} could be naturally explained by the synchrotron maser models \citep{Metzger19,Beloborodov20,Margalit20a}.
The synchrotron maser emission is polarized perpendicular to the background magnetic field. This agrees with the typically observed high linear polarization. However, this implies that in repeaters, the polarization position angle remains the same, namely, along the rotation axis of the magnetar. This is incompatible with the diverse polarization angle swings observed in FRB 180301 \citep{Luo_etal20}. As discussed in Section 2.5, the synchrotron radiation could not be fully polarized. Therefore, the observed 100\% degree of polarization in a few FRBs \citep{Gijjar18,Michili18,oslowski19} could hardly be explained within the scope of the synchrotron maser model.
\subsection{FRBs from Magnetic Reconnection in the Upper Magnetar Magnetosphere}
As discussed in Section 2.6, in the course of magnetic reconnection, fast magnetosonic (fms) waves are efficiently generated, and they are converted to electromagnetic waves when propagating toward a decreasing plasma density. The wavelength of this radiation is of the order of the size of magnetic islands in the current sheet, which, in turn, are scaled with the Larmor radius of the particles in the sheet. The violent reconnection occurs during the magnetar flare when the unstable magnetic configuration is disrupted
(e.g., \citep{Parfrey_etal13,Carrasco_etal19}). However, all microscopic plasma parameters are very small in the inner magnetosphere, so it is unclear how waves with wavelengths $\sim 10$ cm
could be generated with a sufficiently large power. In any case, Section 4.5 shows that these waves could not escape because of non-linear interactions.
FRBs could be produced when the magnetic perturbation from the magnetar flare (magnetic pulse) reaches the current sheet separating, just beyond the light cylinder, the oppositely directed magnetic fields \citep{Lyubarsky20}. When the magnetic pulse arrives at the sheet, the sharp acceleration and compression cause violent reconnection. We can speculate that the current sheet is destroyed by the
Kruskal–Schwarzschild instability \citep{Lyubarsky10,Gill_etal18}, which is the magnetic counterpart of the Rayleigh–Taylor instability, so that the field line tubes with the oppositely-directed fields fall into the magnetic pulse, forming multiple small current sheets scattered over the body of the pulse. Within each of the small current sheets, the reconnection process occurs via the formation and merging of magnetic islands, which produces fms noise. This noise is converted, as was outlined in Section 2.6, into radio emission. Since the sources of the noise are distributed in the body of the magnetic pulse but not concentrated at the front part, the duration of the
observed radiation burst is of the order of the duration of the pulse, $\tau_{\rm obs}\sim l/c=\tau$.
Assuming the above picture, let us estimate the parameters of the outgoing radiation.
The magnetic pulse from the flare enters the magnetar wind just beyond the light cylinder. The plasma is squeezed in the pulse and pushed forward with the Lorentz factor (\ref{Gamma}). In the magnetar wind, a current sheet separates two magnetic hemispheres, with the shape of the sheet having been likened to
a ballerina's skirt.
The total reconnecting magnetic flux
may be estimated as $B_LR_L^2$. Within the pulse, the stripe with the
oppositely-directed fields is compressed $B_{\rm pulse}/B_L$ times. Then, the total energy of annihilated fields is roughly:
\begin{equation}
{\cal E}\sim\left(B_{\rm pulse}/B_{\rm wind}\right)B^2_LR_L^3=\left(L_{\rm pulse}/c\right)^{1/2}B_LR_L^2.
\end{equation}
According to the simulations \citep{Philippov19}, the fraction $f\sim 0.005$ of the reconnecting
magnetic energy is emitted in the form of fms waves. Making use of Eqs. (\ref{RL}) and (\ref{BL}), we estimate the isotropic energy of the radio burst as:
\begin{equation}
{\cal E}_{\rm FRB}=f{\cal E}=3.8\cdot 10^{39}\frac{f_{-2}\mu_{33}L_{{\rm
pulse,}47}^{1/2}}{P}\,\rm erg.
\label{energy}\end{equation}
The wavelength of the emitted waves is of the order of the size of magnetic islands within the current sheet, which are 10--100 times larger than the width of the current sheet
\citep{Philippov19}. Therefore, we can estimate the emitted frequency in the comoving frame as:
\begin{equation}
\omega'=\frac c{\xi a'},
\label{freq_reconn}\end{equation}
where $a'$ is the width of the sheet in the comoving frame, $\xi\sim 10$--$100$.
The reconnection occurs via the collisionless tearing
instability; therefore, the width of the sheet is of the order of a few Larmor radii:
\begin{equation}
a'=\zeta\frac{\varepsilon_T}{eB'_{\rm pulse}},
\end{equation}
where $\varepsilon_T$ is the characteristic thermal energy of
the pairs within the sheet, $\zeta\sim$ a few. To estimate $\varepsilon_T$, let us consider the energy and the pressure balance within the sheet.
Within the sheet, the pressure of the external magnetic
field is balanced by the pressure of hot pairs:
\begin{equation}
\frac 13N'\varepsilon_T=\frac{B'^2_{\rm pulse}}{8\pi},
\end{equation}
where $N'$ is the pair density within the sheet. The synchrotron cooling time is very short at the inferred parameters. Therefore, within the sheet, the reconnection energy release is balanced by the synchrotron cooling. The energy release per unit square of the sheet is determined by the Poynting flux into the sheet, $cE'B'_{\rm pulse}/4\pi$. Introducing the reconnection rate, $\epsilon=E'/B'_{\rm pulse}\sim 0.1$, the energy balance may be written as:
\begin{equation}
\epsilon\frac{B'^2_{\rm pulse}}{4\pi}c=N'\sigma_T\frac{B'^2_{\rm pulse}}{4\pi}c\left(\frac{\varepsilon_T}{m_ec^2}\right)^2a',
\end{equation}
where $\sigma_T$ is the Thomson cross-section. Eliminating $\varepsilon_T$ and $N'$ from the last three
equations in favor of $a'$, we obtain:
\begin{equation}
a'=\left(\frac{\epsilon\zeta}{r_e}\right)^{1/2}\left(\frac c{\omega'_B}
\right)^{3/2},
\end{equation}
where $r_e$ is the classical electron radius and $\omega'_B=eB'_{\rm pulse}/m_ec$ is the cyclotron frequency.
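The elimination leading to this expression can be cross-checked numerically. The Python sketch below solves the three sheet equations for an arbitrary (assumed) field $B'_{\rm pulse}=10^6$ G and compares the result with the closed form; $\sigma_T$ is written via $r_e$ so that the agreement is exact:

```python
import math

e = 4.8032e-10        # esu
m_e = 9.1094e-28      # g
c = 2.9979e10         # cm/s
r_e = e**2 / (m_e * c**2)             # classical electron radius
sigma_T = (8 * math.pi / 3) * r_e**2  # Thomson cross-section

B = 1.0e6             # G, assumed field in the pulse frame
eps, zeta = 0.1, 3.0  # reconnection rate and O(1) factor

# Substituting (1) a' = zeta*eps_T/(e*B) and (2) N' = 3*B**2/(8*pi*eps_T)
# into (3) eps = N'*sigma_T*(eps_T/(m_e*c**2))**2 * a' gives a single
# equation for eps_T:
eps_T = math.sqrt(8 * math.pi * e * m_e**2 * c**4 * eps
                  / (3 * B * sigma_T * zeta))
a_solved = zeta * eps_T / (e * B)

# Closed form quoted in the text:
omega_B = e * B / (m_e * c)
a_closed = math.sqrt(eps * zeta / r_e) * (c / omega_B)**1.5

print(a_solved, a_closed)  # the two coincide
```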
The emitted frequency in the observer's frame may be estimated by substituting the obtained width of the sheet into Eq. (\ref{freq_reconn}) and making a Lorentz transform. The Lorentz factor of the plasma within the pulse with respect to the wind is given by Eq. (\ref{Gamma}). Just beyond the light cylinder, the wind is only mildly relativistic; therefore, we can use this estimate as the Lorentz factor in the observer's frame. Then, we obtain:
\begin{equation}
\nu=\Gamma'\frac{\omega'}{2\pi}=\frac 1{2\pi\xi} \left(\frac{r_e}{\epsilon\zeta
c\Gamma'}\right)^{1/2}\omega_B^{3/2}
=3\frac{\mu_{33}^{1/4}L_{\rm pulse,47}^{5/8}}{\xi_1\zeta_1^{1/2}\epsilon_{-1}^{1/2}P^2}\,\rm
GHz.
\label{frequency}\end{equation}
One sees that the basic parameters of the outgoing radiation are compatible with the observed properties of typical FRBs.
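Eqs. (\ref{energy}) and (\ref{frequency}) are straightforward to evaluate; in the Python sketch below all normalized parameters are set to unity except $f_{-2}=0.5$, corresponding to the simulated $f\sim 0.005$:

```python
def E_FRB_erg(f2, mu33, L47, P):
    """Isotropic FRB energy, Eq. (energy); f2 = f/0.01."""
    return 3.8e39 * f2 * mu33 * L47**0.5 / P

def nu_rec_GHz(mu33, L47, xi1, zeta1, eps1, P):
    """Observed frequency, Eq. (frequency); xi1 = xi/10, eps1 = eps/0.1."""
    return 3.0 * mu33**0.25 * L47**0.625 / (xi1 * zeta1**0.5 * eps1**0.5 * P**2)

print(E_FRB_erg(0.5, 1, 1, 1))       # ~1.9e39 erg
print(nu_rec_GHz(1, 1, 1, 1, 1, 1))  # 3 GHz
```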
The emitted fms waves are formed within the magnetic pulse, which moves with the relativistic velocity. Therefore, they propagate within the pulse and escape from it far away from the magnetar, when the pulse is decelerated, colliding with the surrounding plasma. Importantly, these waves are polarized perpendicular to the background magnetic field. The magnetic pulse is formed well within
the magnetosphere, and the magnetic flux from this region is transferred by the pulse outward. Therefore, the direction of the magnetic field in the pulse is determined by the rotational phase of the magnetar. When the pulse travels in the magnetar wind, it picks up the azimuthal magnetic field, which is accumulated at the front of the pulse. The radiation passes through this layer on the way out. If the field varies gradually, the polarization is adiabatically adjusted to the local magnetic field, so that the outgoing radiation eventually becomes completely polarized perpendicular to the field in the wind, i.e., along the rotational axis of the magnetar. If the field varies sharply or the accumulated flux is too small, the waves are split into two normal modes corresponding to the local magnetic field so that the radiation is depolarized, whereas the position angle depends on the details of the transition zone.
In contrast to the synchrotron maser, the reconnection emission model could explain the observed \citep{Gijjar18,Michili18,oslowski19} 100\% degree of linear polarization in a few FRBs. The position angle in this model may be determined by both the field in the magnetar wind and by the magnetosheric field, depending on how the magnetic pulse propagates through the wind. This is compatible with the diverse polarization patterns observed in FRBs \citep{Luo_etal20}. However, the model could not be applied to weak FRBs because, according to Eq. (\ref{frequency}), the emitted frequency becomes too low for small luminosities. This is because the magnetic field in weak pulses is too low already at light cylinder distances. It was suggested \citep{Yuan_etal20} that in the course of the magnetar flare, the reconnection emission could be generated in the upper magnetosphere inside the light cylinder. The ejected plasmoids push out the magnetospheric field lines, so current sheets are formed behind the plasmoid separating oppositely-directed fields. The reconnection in these current sheets could produce the radio emission. A more detailed analysis of this idea is necessary.
\section{Non-Linear Effects and Escape of Radio Emission}
The high brightness temperature of FRBs implies that non-linear processes could significantly affect the properties of the outgoing radiation and even prevent the escape of radio waves. Here, I briefly outline the relevant processes.
\subsection{Electron in a Strong Electromagnetic Wave}
Let us first consider non-linear effects in the non-magnetized plasma. The strength of the electromagnetic wave is measured by the parameter $a$ introduced in Section 2.5 in Eq. (\ref{strength}). This parameter represents the four-velocity of electron oscillations in the field of the wave (see, e.g., \citep{Melrose_book80}).
In FRBs, the strength parameter exceeds unity at distances from the source \citep{Luan_Goldreich14}:
\begin{equation}
R<2\times 10^{13}\frac{F^{1/2}_{\nu,\rm Jy}D_{\rm Gpc}}{\nu^{1/2}_{\rm GHz}}\,\rm cm.
\end{equation}
The standard linear theory of electromagnetic waves in a plasma assumes that $a\ll 1$. Even under this condition, the non-linear effects could significantly affect the propagation of the wave because even small non-linear corrections accumulate considerably at a long enough path. At $a>1$, the electron oscillates relativistically so that the Lorentz force, $(1/c)\mathbf{v\times B}$, becomes comparable with the electric force. In a linearly-polarized strong wave, the electrons oscillate both in the transverse and in the longitudinal directions, producing a figure-of-eight trajectory in the oscillation-center frame.
If an electron at rest is illuminated by a strong wave, it is pushed forward with the Lorentz factor $\sim a$. Together with oscillations with amplitude $\sim a$ in the oscillation-center frame, this produces the total energy $\sim m_ec^2a^2$. Therefore, the electron-positron plasma is boosted forward by a strong wave. Conversely, electrons in an electron-ion plasma are tied electrostatically to ions. Therefore, they remain, on average, at rest and only experience oscillations with the amplitude of $\sim a$ \citep{Waltz_Manley78,Sprangle_etal90}.
In many cases, an electron in a high frequency electromagnetic wave may be considered a particle with the effective mass:
\begin{equation}
m_{\rm eff}=m_e\sqrt{1+\frac 12a^2},
\label{eff_mass}\end{equation}
moving with the velocity of the electron oscillation center, $\mathbf{v}_{\rm d}$, and having the energy and momentum $\varepsilon=m_{\rm eff}c^2\gamma_{\rm d}$ and $\mathbf{P}=m_{\rm eff}\mathbf{v}_{\rm d}\gamma_{\rm d}$, respectively, where $\gamma_{\rm d}=(1-v_{\rm d}^2/c^2)^{-1/2}$ is the Lorentz factor of the oscillation center (see, e.g., \citep{Melrose_book80}). In particular, the conservation laws for the Compton scattering of a strong wave look like
\begin{equation}
\varepsilon+s\hbar\omega=\varepsilon'+\hbar\omega';\qquad \mathbf{P}+s\hbar\mathbf{k}=\mathbf{P'}+\hbar\mathbf{k'},
\end{equation}
where $s$ is an integer. The scattering of a strong wave may be described
as the absorption of $s$ photons and the reemission of a single photon. Neglecting recoil, we recover the classical picture: a relativistically oscillating electron emits at harmonics of the oscillation frequency. In the regime $a\gg 1$, the power of the scattered radiation is dominated by the higher harmonics of the incident
wave. As with synchrotron emission, the maximal power is achieved at $s\sim a^3$, and the scattering cross-section exceeds the Thomson cross-section by a factor of $a^2$.
The amplitude-dependent relativistic mass of electrons has many implications. The plasma refraction index,
\begin{equation}
n=\sqrt{1-\frac{\omega_p^2}{\omega^2}},
\label{refraction}\end{equation}
depends on the plasma frequency,
\begin{equation}
\omega_p=\sqrt{\frac{4\pi e^2 N}{m_e}}.
\label{omega_p}\end{equation}
Increasing the effective electron mass decreases the plasma frequency. Therefore, the cutoff frequency, below which the wave could not propagate, decreases from $\omega_{\rm cutoff}=\omega_p$ to $\omega_{\rm cutoff}\sim\omega_p/\sqrt{a}$. The dispersion measure decreases accordingly, which could have implications for FRBs \citep{Lu_Phinney20,Yang_Zhang20}. An important point is that these relations are written in the plasma frame, which is the oscillation-center frame of the electrons.
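A minimal numerical illustration of the cutoff reduction (Python; the density $N=10^4\ {\rm cm}^{-3}$ is an arbitrary assumption):

```python
import math

e = 4.8032e-10    # esu
m_e = 9.1094e-28  # g

def omega_p(N, m=m_e):
    """Plasma frequency for electron density N (cm^-3) and mass m."""
    return math.sqrt(4 * math.pi * e**2 * N / m)

def omega_cutoff(N, a):
    """Cutoff frequency with the amplitude-dependent effective mass."""
    m_eff = m_e * math.sqrt(1 + 0.5 * a**2)
    return omega_p(N, m_eff)

N = 1.0e4  # cm^-3, assumed
for a in (0.0, 10.0, 100.0):
    # the ratio tends to (2/a**2)**0.25 ~ 1.19/sqrt(a) for a >> 1
    print(a, omega_cutoff(N, a) / omega_p(N))
```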
The synchrotron absorption of strong waves occurs as if the absorbing particles have mass (\ref{eff_mass}) and rotate in the magnetic field with the Lorentz factor $\gamma_{\rm d}$ \citep{Lyubarsky18}.
In some cases, the results of linear theory are strongly modified even at $a<1$. For example, the free-free absorption is determined by the velocity of electrons.
Therefore, if the velocity of oscillations in the wave exceeds the thermal electron velocity, $a>v_T/c$,
the absorption coefficient is suppressed by the factor $(ac/v_T)^3=a^3(m_ec^2/k_BT)^{3/2}$ \citep{Lu_Phinney20}.
All the above effects are based on strong oscillations of electrons in the field of the wave. If the plasma is magnetized such that the Larmor frequency exceeds the wave frequency, these effects are suppressed for the waves polarized perpendicularly to the background magnetic field \citep{Lyutikov20}; these waves represent the so-called X-mode. In the O-mode, where the polarization vector has a component along the background magnetic field, the electron oscillates strongly. Therefore, the refraction indices for the two modes are different, which has important implications for the polarization transfer. When the radiation propagates through a slowly varying magnetic field, the polarization of each mode is adjusted to the local direction of the field if the wavelength of beating between the two modes is small compared with the inhomogeneity scale. The polarization of the outgoing radiation is fixed at the so-called limiting polarization radius, where this condition is violated (e.g., \citep{Ginzburg70,Cheng_Ruderman79}). For strong waves, these effects were studied in ref.\ \citep{Lu_etal19}.
\subsection{Induced Compton Scattering}
The induced Compton scattering may be described in terms of radiation transfer theory.
The scattering process could be considered as absorption followed by emission. Therefore, to consider the induced scattering, we must only multiply, according to the general quantum prescription, the scattering rate by $1+n_{\nu,\mathbf{l}}$, where $n_{\nu,\mathbf{l}}$ is the occupation number of the scattered photons with frequency $\nu$ propagating along the unit vector $\mathbf{l}$. The photon occupation number is simply related to the brightness temperature, $n_{\nu,\mathbf{l}}=k_BT_b/h\nu$. The scattering rate from state $(\nu',\mathbf{l'})$ to state $(\nu,\mathbf{l})$ is proportional to $n_{\nu',\mathbf{l'}}(1+n_{\nu,\mathbf{l}})$. At first glance, this means that the induced scattering rate dominates when the occupation number exceeds unity. However, the net scattering rate is determined by the difference between the scattering $\mathbf{l}\to\mathbf{l'}$ and the reverse process $\mathbf{l'}\to\mathbf{l}$. Then, the product terms $n_{\nu\mathbf{l}}n_{\nu'\mathbf{l'}}$ cancel out unless we take into account that, due to a small recoil, the frequency of the scattered photon decreases in the electron rest frame:
\begin{equation}
\delta\nu=\frac{h\nu^2}{m_ec^2}(1-\mathbf{l\cdot l'}).
\end{equation}
Let us consider photons with the frequency $\nu$ propagating into the direction $\mathbf{l}$ and assume for a while that they are scattered only into or from the direction $\mathbf{l}'$.
The photon scattered from the direction $\mathbf{l}'$ to the direction $\mathbf{l}$ has the frequency $\nu+\delta\nu$, whereas when a photon is scattered from the direction $\mathbf{l}$ into the direction $\mathbf{l}'$, the frequency of the scattered photon is $\nu-\delta\nu$. Therefore, the net rate of scattering into and out of the state $(\nu,\mathbf{l})$ is proportional to
\begin{equation}
n_{\nu+\delta\nu,\mathbf{l'}}\left(1+n_{\nu\mathbf{l}}\right)\nu^2-n_{\nu\mathbf{l}}\left(1+n_{\nu-\delta\nu,\mathbf{l'}}\right)(\nu-\delta\nu)^2=(n_{\nu\mathbf{l'}}-n_{\nu\mathbf{l}})\nu^2+2n_{\nu\mathbf{l}}\frac{\partial \nu^2n_{\nu\mathbf{l'}}}{\partial\nu}\delta\nu,
\end{equation}
where the factor $\nu^2$ takes into account the phase volume. The induced scattering rate exceeds the spontaneous scattering rate only when the second term on the rhs of this equation exceeds the first term, i.e., if $n_{\nu\mathbf{l}}\delta\nu>\nu$. This requires a very high brightness temperature, $k_BT_b(1-\mathbf{l\cdot l'})\gg m_ec^2=k_B\times 6\cdot 10^9$ K.
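The threshold quoted here is just the electron rest energy expressed as a temperature; a one-line check (Python, CGS constants):

```python
m_e = 9.1094e-28  # g
c = 2.9979e10     # cm/s
k_B = 1.3807e-16  # erg/K

# Induced scattering beats the spontaneous rate only when
# k_B*T_b*(1 - l.l') >> m_e*c**2:
T_threshold = m_e * c**2 / k_B
print(T_threshold)  # ~5.9e9 K
```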
The total scattering rate is found by multiplying the above expression by the scattering probability and integrating over the directions $\mathbf{l}'$. When the plasma is nonrelativistic, $v_T\ll c$, and the radiation spectrum is wide, $\Delta\nu/\nu\gg v_T/c$, the resulting radiation transfer equation for the induced scattering is written as (e.g., \citep{Wilson82}):
\begin{equation}
\frac{d I_{\nu,\mathbf{l}}}{dt}
=\frac{r_e^2Nc}{m_e}I_{\nu,\mathbf{l}}
\int(\mathbf{e\cdot e}')^2
(1-\mathbf{l\cdot l}')\frac{\partial
}{\partial\nu}\left(\frac{I_{\nu,\mathbf{l'}}}{\nu}\right)d\Omega',
\label{kinComp}\end{equation}
where $I_{\nu,\mathbf{l}}=h\nu^3n_{\nu,\mathbf{l}}/c^2$ is the radiation intensity, $N$ is the electron number density, and $\mathbf{e}$ is the
polarization unit vector. The time derivative is along the ray.
The induced scattering does not affect the escape time of photons from the source, but redistributes them toward lower frequencies, thus heating the plasma \citep{Syunyaev71}. The total number of photons, $\int\nu^2 n_{\nu,\mathbf{l}}d\nu d\Omega$, is conserved. If the initial spectrum has a maximum, such that $I_{\nu,\mathbf{l}}/\nu$ increases with $\nu$ at $\nu<\nu_0$ and decreases at $\nu>\nu_0$, the induced scattering increases the intensity at $\nu<\nu_0$ and decreases it at $\nu>\nu_0$, the maximum of the intensity shifting toward smaller frequencies.
Note that the equation does not contain the Planck constant, even though the above simple picture is based on the quantum approach. The reason is that the effect is purely classical: the non-linear interaction of two waves yields a beating wave,
which exerts a constant force on electrons moving with the velocity equal to the beating phase velocity, $\omega-\omega'=\mathbf{v\cdot(k-k')}$. This transfers the energy from the wave to the plasma, which means that the wave with the higher frequency decays. The classical derivation of the induced Compton scattering of electromagnetic waves is given in \citep{Galeev_sunyaev73,Drake_etal74}.
The probability of the induced scattering into a state is proportional to the number of photons already available in the state. The radiation transfer equation (\ref{kinComp}) has the form:
\begin{equation}
\frac{d I_{\nu,\mathbf{l}}}{dt}
=GI_{\nu,\mathbf{l}},\qquad G=\frac{r_e^2Nc}{m_e}
\int(\mathbf{e\cdot e}')^2
(1-\mathbf{l\cdot l}')\frac{\partial
}{\partial\nu}\left(\frac{I_{\nu,\mathbf{l'}}}{\nu}\right)d\Omega',
\label{ind_rate1}\end{equation}
so that the quantity $G$, which depends on the radiation intensity, is the induced scattering rate.
Within the source, where the radiation is nearly isotropic, the recoil factor $(1-\mathbf{l\cdot l}')$ is of the order of unity. Then, we can estimate the scattering rate as $G\sim (k_BT_b/m_ec^2)\sigma_TNc$, where $\sigma_T$ is the Thomson cross-section. If the plasma in the source is relativistically hot, the induced scattering rate decreases by a factor of $\gamma_T$, the characteristic thermal Lorentz factor of electrons in the source \citep{Lyubarsky08}. The escape time from a source of size $L$ is $L/c$. Then, the optical depth to the induced scattering is conveniently defined as the ratio of the escape time to the frequency redistribution time:
\begin{equation}
\tau_{\rm ind}=GL/c\sim
\frac{k_BT_b}{m_ec^2}\sigma_TNL.
\end{equation}
The source is transparent to the induced scattering if $\tau_{\rm ind}<1$. In the opposite case, the radiation is redistributed toward smaller frequencies before escaping.
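The optical depth to induced scattering is easy to package as a helper; the parameter values in the example below are arbitrary placeholders, not values taken from the text:

```python
m_e = 9.1094e-28      # g
c = 2.9979e10         # cm/s
k_B = 1.3807e-16      # erg/K
sigma_T = 6.6524e-25  # cm^2

def tau_ind(T_b, N, L):
    """Optical depth to induced Compton scattering within the source:
    (k_B*T_b/(m_e*c**2)) * sigma_T * N * L."""
    return (k_B * T_b / (m_e * c**2)) * sigma_T * N * L

# Illustrative (assumed) brightness temperature, density, and size:
print(tau_ind(1e12, 1e10, 1e7))  # ~1.1e-5, i.e., transparent
```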
If the radiation is highly beamed, as is the case at large distances from the source, the recoil factor $(1-\mathbf{l\cdot l'})$ makes the scattering within the beam inefficient. Then, the scattering outside the beam dominates \citep{Coppi_etal93,Lyubarsky08} because, according to Equation (\ref{ind_rate1}), even weak isotropic background radiation (created, e.g., by spontaneous scattering) grows exponentially. Let us choose in Eq. (\ref{ind_rate1}) $\mathbf{l'}$ in the direction of the beam and $\mathbf{l}$ in the direction well outside the beam, where only a weak background emission is initially present.
Then, the intensity in the direction $\mathbf{l}$ grows exponentially with the rate:
\begin{equation}
G\sim \sigma_TNc\frac{F_{\nu}}{m_e\nu^2},
\label{ind_rate}\end{equation}
where $F_{\nu}= I_{\nu,\mathbf{l'}}\Delta\Omega$ is the spectral flux at the scattering point and $\Delta\Omega$ is the solid angle subtended by the radiation at this point. Eventually, the scattered radiation takes away the whole energy of the primary beam. Inasmuch as the intensity in the direction $\mathbf{l}$ is initially very low, many e-folding times are necessary to remove significant energy from the beam. Therefore, in this case, the transparency condition may be written as:
\begin{equation}
\tau_{\rm ind}= \frac 1c\int GdR\sim GR/c<10.
\label{transp}\end{equation}
Here, the flux, and therefore the rate $G$, decreases as $R^{-2}$; therefore, only the region $\Delta R\sim R$ contributes to the integral.
The above assumes that the emission is steady. In the case of a short pulse with duration $\tau<R/c$, the induced scattering occurs
only while the scattered ray remains within the zone illuminated by the primary radiation. Therefore, the optical depth is determined not by the size of the medium but by the duration of the pulse, so that we must substitute $R$ by $c\tau$ in Eq. (\ref{transp}) \citep{Lyubarsky08}.
The above considerations also assume that the electrons oscillate non-relativistically in the field of the wave, i.e., that the wave strength parameter (Eq. (\ref{strength})) is smaller than unity. In terms of this parameter, the induced scattering rate (\ref{ind_rate}) may be written as:
\begin{equation}
G\sim\frac{\omega_p^2a^2}{\omega},
\end{equation}
where $\omega_p$ is the plasma frequency (\ref{omega_p}).
When $a>1$, the scattering rate decreases because the effective mass of a relativistically oscillating electron increases. A larger effective mass reduces recoil, which is crucial for the induced scattering. In this case, the scattering rate is estimated as \citep{Lyubarsky19a}:
\begin{equation}
G\sim\frac{\omega_p^2}{\omega a}.
\end{equation}
Even though the spontaneous scattering is dominated by high harmonics at $a\gg 1$, the induced scattering occurs predominantly in the first harmonic at any $a$.
In FRBs, the induced scattering could affect the wave propagation close enough to the source. In synchrotron maser models (see Section 3.3), the transparency condition (\ref{transp}) is marginally satisfied for $\sim 1$ GHz and violated for smaller frequencies \citep{Metzger19,Beloborodov20}, which could explain the non-detection of low frequency FRBs \citep{Karastergiou15}. The effect of the induced scattering was also used in order to show that FRBs could not be produced in stellar coronae \citep{Lyubarsky_Ostrovska16}.
\subsection{Induced Raman Scattering}
Raman scattering is the scattering of electromagnetic waves on plasma oscillations. The process may be interpreted as the decay of an electromagnetic wave into another electromagnetic wave and a plasma wave, and/or the merging of an electromagnetic wave and a plasma wave into another electromagnetic wave. Energy and momentum conservation imply relations between the frequencies and the wave vectors of the three waves:
\begin{equation}
\omega=\omega'+\omega_p;\qquad \mathbf{k=k'+q},
\label{resonance}\end{equation}
where $\omega$, $\mathbf{k}$, and $\omega'$, $\mathbf{k'}$ are the frequencies and wave vectors of the two electromagnetic waves, respectively; $\omega_p$ is the plasma frequency (\ref{omega_p}); and $\mathbf{q}$ is the wave vector of the plasma wave. If a powerful beam propagates through a medium with no preexisting plasma turbulence, the induced Raman scattering results in the exponential growth of both the intensity of the scattered wave and the level of plasma turbulence (parametric instability \citep{Drake_etal74}). In classical language, the non-linear interaction between two waves produces a beating wave, which resonantly excites the third wave.
Importantly, this process is impossible in electron-positron plasma. The reason is that the beating wave exerts the same force both on electrons and on positrons, so that the plasma waves in which electrons and positrons oscillate in antiphase are not excited.
In electron-ion plasma, the rate of the induced Raman scattering could be represented as \citep{Thompson94,Lyubarsky08}:
\begin{equation}
G=\sigma_TNc\frac{F_{\nu}}{m_e\nu_p\nu}(1-\cos\theta),
\label{rate1}\end{equation}
where $\theta$ is the scattering angle. If $\theta$ is not small, the Raman scattering rate exceeds that of Compton scattering (\ref{ind_rate}). However, due to Landau damping, the phase velocity of the plasma wave cannot be smaller than a few electron thermal velocities, which may be written as $q<\omega_p/(4v_T)$. The resonance conditions (\ref{resonance}) then limit the scattering angle, so that the Raman scattering rate exceeds the Compton scattering rate only if:
\begin{equation}
\frac{\omega}{\omega_p}<\frac{m_ec^2}{8k_BT}.
\end{equation}
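For orientation, this threshold is easily evaluated numerically; the temperature used below is an arbitrary illustrative value, not one quoted in the text:

```python
ME_C2 = 8.187e-7   # electron rest energy m_e c^2, erg
K_B = 1.381e-16    # Boltzmann constant, erg/K

def max_frequency_ratio(T):
    """Largest omega/omega_p for which induced Raman scattering can
    outpace induced Compton scattering, at electron temperature T (K)."""
    return ME_C2 / (8.0 * K_B * T)

print(max_frequency_ratio(1e6))  # a few hundred for T ~ 1e6 K
```

So even at a fairly hot $T\sim 10^6$ K, Raman scattering can dominate for frequencies up to a few hundred plasma frequencies.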
The condition of efficient scattering is still given by Eq. (\ref{transp}) because, just as for the case of the induced Compton scattering, many e-folding times are necessary for the scattered waves to grow significantly.
Importantly, the scattering by a small angle does not prevent the escape of radiation; it only makes the beam wider, which could smear a short pulse. However, the situation with short pulses is more complicated, so we cannot simply substitute $R$ by $c\tau$ as with induced Compton scattering. As for the case of induced Compton scattering, the process occurs only in the illuminated region. However, if the scattering angle is small, the scattered wave remains within the illuminated region over a large distance, $L=c\tau/(1-\cos\theta)$. Conversely, the group velocity of the plasma waves is small; therefore, they are amplified only for the time $\tau$. The Raman scattering of short pulses was addressed with these effects taken into account in \citep{Lyubarsky08}.
Notably, the scattering rate (\ref{rate1}) was obtained by neglecting the decay of the plasma waves, i.e., assuming that the decay rate is less than the scattering rate. If this condition is violated, the Raman scattering is suppressed.
The collisional decay of plasma waves could be easily taken into account by comparing the decay rate with the scattering rate \citep{Thompson94,Lyubarsky08}. Since the latter is proportional to the beam intensity, the collisional decay of plasma waves may be neglected for powerful enough radiation beams. However, the Raman scattering of powerful beams could be affected by the non-linear interactions of plasma waves, such as the induced scattering of plasma waves (non-linear Landau damping) and the modulation instability (see, e.g., \citep{Shapiro_Shevchenko84,BreizmanREVIEW}). This could prevent the growth of plasma turbulence beyond some level, thus suppressing the rate of Raman scattering. To determine the role of the Raman scattering in any particular case, one must consider all the above effects.
\subsection{Modulation and Filamentation Instabilities}
Due to the non-linear interactions of radiation and matter, the refraction index depends on the radiation power, which could lead, under some conditions, to self-modulation or/and self-focusing of the radiation beam (see, e.g., \citep{Karpman75}). Let the refraction index increase with the radiation intensity. Then, any transverse intensity gradient is amplified because the rays are focused toward the intensity maximum. In wide beams, self-focusing leads to filamentation, i.e., to modulation in the direction perpendicular to the propagation direction.
Self-modulation in the longitudinal direction occurs if the wave group velocity decreases with intensity. Then, the waves accumulate near the intensity maximum.
In plasma, the refraction index is given by Eq. (\ref{refraction}).
Three effects provide the dependence of the refraction index on the radiation intensity and, therefore, could potentially lead to modulation and/or filamentation instabilities:
The first effect is based on the ponderomotive force, which expels plasma from the regions of enhanced radiation intensity. If such an intensity fluctuation forms, the plasma frequency in the region decreases, so that the refraction index increases. Due to refraction, the radiation is focused into the region, thus repelling more plasma and amplifying the focusing effect. This effect leads to filamentation of the beam; however, it could not produce self-modulation \citep{Shearer_Eddleman73,Kaw_etal73,Drake_etal74}. The effect works efficiently in electron-positron plasma, whereas in electron-ion plasma it is suppressed by the large inertia of ions and therefore comes into play only at very large intensities.
The second effect reduces the plasma frequency in the regions of enhanced radiation because, as discussed in Section 4.1, the effective mass of the oscillating electron increases with oscillation amplitude. This results in increasing the refraction index.
The third effect arises because, in the field of the wave, the longitudinal $\mathbf{v\times B}$ force produces an electron density perturbation, $\Delta n$, at the double wave frequency. The beating between this perturbation and the electron velocity oscillations in the pumping wave yields a current $\mathbf{j}=\Delta n\,\mathbf{v}$ at the wave frequency, which affects the propagation of the pumping wave. The last two effects are not suppressed by the ion inertia because they do not affect the average plasma density. They are of the same order and therefore should be considered together. It was shown \citep{Max_etal74} that they lead to both filamentation and modulation of the radiation beam. The scale of filamentation is significantly larger than the longitudinal modulation scale, so the beam breaks into pancakes transverse to the radial direction.
Thus, in the electron-positron plasma, where the effect of the ponderomotive force dominates, only the filamentation instability could develop. In particular, simulations of the maser emission from relativistic shocks reveal filamentation of the upstream flow \citep{Iwamoto_etal17}. In electron-ion plasma, the effect of the ponderomotive force is suppressed by ion inertia; therefore, in a wide range of parameters, both filamentation and modulation are possible. The comprehensive analysis of different regimes in electron-ion plasma was given in \citep{Sobacchi_etal20}.
Neither filamentation nor modulation prevents the escape of radiation from the source. However, they could affect the properties of the outgoing radiation. It was suggested \citep{Yang_Zhang20,Lyutikov20} that self-focusing could affect the collimation angle and the true event rate of FRBs. However, when the beam is macroscopically wide, it is not focused as a whole. Instead, the filamentation instability develops faster on small scales; therefore, the beam only breaks into narrow subbeams, each of which may also be modulated in the longitudinal direction. This could produce other observable effects \citep{Sobacchi_etal20}. Namely, when the burst exits the plasma slab, the subbeams are strongly diffracted, which could lead to (1) smearing of the burst in time if the diffraction angle is large enough and (2) frequency modulation of the observed intensity due to interference between the subbeams.
The longitudinal modulation could, in turn, imprint a microsecond structure on the light curve of the burst.
\subsection{Non-Linear Interaction of Waves in a highly magnetized plasma}
Let us consider the case most relevant for magnetar magnetospheres, in which the wave frequency is small compared with both the plasma and the Larmor frequencies and the magnetic energy considerably exceeds the plasma energy. Then, we can work within the scope of force-free MHD. Only two types of waves exist in this system: the fast magnetosonic (fms) and Alfven waves. The fms waves were briefly discussed in Section 4. In the limit $\sigma\to\infty$, their dispersion law reduces to $\omega=ck$. The dispersion law of the Alfven waves in the same limit is $\omega=ck\vert\cos\theta\vert $, where $\theta$ is the angle between the background magnetic field and the propagation direction of the wave. The fms waves are the low frequency ($\omega\ll\omega_p$) limit of the so-called X-mode of electromagnetic waves (e.g., \citep{Arons_Barnard86}). When propagating towards decreasing plasma density, the fms wave is smoothly converted into the X-mode and could, in principle, eventually escape as an outgoing electromagnetic wave. The Alfven waves propagate only along the magnetic field lines; therefore, if they are excited on closed field lines, they remain trapped.
The non-linear interactions of force-free MHD waves were studied in refs.\ \citep{Thompson_Blaes98,Lyubarsky20}. The strongest are the resonant three-wave interactions, i.e., the decay of a wave into two waves and the merging of two waves into one. In this case, the resonant conditions (the conservation laws in the quantum language) look like:
\begin{equation}
\omega=\omega_1+\omega_2;\qquad \mathbf{k=k_1+k_2}.
\end{equation}
Substituting the dispersion laws into the conservation laws, one sees that only interactions
involving both types of waves are possible:
\begin{equation}
{\rm fms}\longleftrightarrow {\rm fms}+{\rm Alfven}\quad {\rm and}\quad
{\rm fms}\longleftrightarrow {\rm Alfven}+{\rm Alfven}.
\end{equation}
The rate of the interaction may be roughly estimated as:
\begin{equation}
G\sim\left(\frac{\delta B}{B}\right)^2\omega,
\label{q}\end{equation}
where $\delta B$ is the amplitude of the waves and $B$ is the background field.
Assume that there is a powerful source of fms waves well within the magnetar magnetosphere. As they propagate outwards, their amplitude decreases with the distance as $1/R$, whereas the background field in the magnetosphere decreases as $1/R^3$, so that:
\begin{equation}
\frac{\delta B}B=\sqrt{\frac{L_{\rm FRB}}c}\frac{R^2}{\mu}.
\label{relativ-ampl}\end{equation}
Therefore, the characteristic time of the non-linear interaction, $1/G$, rapidly decreases with distance. For waves in the GHz band, it becomes smaller than the propagation time, $R/c$, already at distance \citep{Lyubarsky20}:
\begin{equation}
R= 10^7\left(\frac{\mu_{33}^2}{L_{\rm FRB, 43}\nu_9}\right)^{1/5}\,\rm cm,
\end{equation}
which is well within the magnetosphere.
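The scale of this distance can be recovered by an order-of-magnitude calculation: combining $G\sim(\delta B/B)^2\omega$ with Eq. (\ref{relativ-ampl}) and setting $GR/c\sim 1$ gives $R^5\sim c^2\mu^2/(\omega L_{\rm FRB})$. A quick numerical check (dropping factors of order unity and taking $\omega=2\pi\nu$) reproduces the estimate:

```python
import math

mu = 1e33     # magnetic dipole moment, G cm^3  (mu_33 = 1)
L = 1e43      # FRB luminosity, erg/s           (L_FRB,43 = 1)
nu = 1e9      # wave frequency, Hz              (nu_9 = 1)
c = 2.998e10  # speed of light, cm/s

# (delta B / B)^2 = (L / c) R^4 / mu^2;  setting G R / c = 1 gives
# R^5 = c^2 mu^2 / (2 pi nu L)
R = (c ** 2 * mu ** 2 / (2.0 * math.pi * nu * L)) ** 0.2
print(f"R ~ {R:.1e} cm")  # close to 1e7 cm, deep inside the magnetosphere
```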
Beyond this distance, the non-linear interaction yields a cascade redistribution of the waves toward both larger and smaller frequencies. High-frequency waves eventually decay. The amplitude of low-frequency waves increases with decreasing frequency because of energy conservation; therefore, non-linear steepening eventually sets in and they decay too.
As no other waves in the GHz band can propagate across magnetic field lines, even if a source of GHz waves with the required power existed inside the magnetosphere, no radiation would escape.
In the outer magnetosphere, the situation is different. The FRB is triggered by the magnetar flare, which produces a magnetic perturbation (the magnetic pulse) described in Section 3.2. In the outer magnetosphere, the field of the pulse is larger than the local magnetospheric field, and the pulse propagates with relativistic velocity. When the pulse excites high-frequency waves, as described in Section 3.4, they propagate on the top of the pulse to very large distances. They are not strongly affected by non-linear interactions because the ratio of the wave amplitude to the field in the pulse is small and because of relativistic time dilation within the pulse \citep{Lyubarsky20}.
\section{Conclusions}
In this review I outlined
the radiation mechanisms proposed for FRBs. The aim was to demystify, at least somewhat, the field of the coherent emission mechanisms. I tried to describe them at the very basic physics level, avoiding technical details.
However, one has to stress again that these mechanisms may be understood only as collective plasma processes; an over-simplistic approach cannot provide reasonable results.
At present, we have two workable mechanisms: the synchrotron maser at the front of a relativistic magnetized shock and the radiation from variable currents in a reconnecting current sheet. Both mechanisms assume relativistic magnetized outflows, which could be found in different astrophysical environments. Observational evidence as well as theoretical considerations point to magnetar flares as the most promising progenitor. Therefore, both types of FRB models were developed within the scope of the magnetar paradigm. Both models have their pros and cons. The synchrotron maser produces radio emission in a wide range of luminosities; however, this emission could hardly be 100\% polarized, as is sometimes observed. Moreover, shock waves with the required parameters may be obtained only under the assumption that a moderately relativistic and mildly magnetized medium is present relatively close to the magnetar. The origin of such a medium is unclear. The emission from a reconnecting current sheet may have a 100\% degree of polarization; however, this model cannot explain weak FRBs because the emission frequency becomes too low for small luminosities. This emission has been studied only for a quiet, steady reconnection. It is unclear how the process develops in a highly unsteady and turbulent reconnection when the sheet is destroyed
by a strong electromagnetic perturbation caused by the magnetar flare. Further development of both models is necessary. It is possible that both mechanisms work in different cases. And, of course, we cannot exclude that new models will be presented that successfully agree with observations.
\acknowledgments{I gratefully acknowledge grant I-1362-303.7/2016 from the German-Israeli Foundation for
Scientific Research and Development and grant 2067/19 from the Israeli Science Foundation.}
\reftitle{References}
\externalbibliography{yes}
\section{Introduction}
There are many classification problems where observations are time consuming and/or expensive. One example arises in health care analytics, where physicians have to make medical decisions (e.g. a course of drugs, surgery, and expensive tests). Assume that a doctor faces a discrete set of medical choices, and that we can characterize an outcome as a success (patient does not need to return for more treatment) or a failure (patient does need followup care such as repeated operations). We encounter two challenges. First, there are very few patients with the same characteristics, creating few opportunities to test a treatment. Second, testing a medical decision may require several weeks to determine the outcome. This creates a situation where experiments (evaluating a treatment decision) are time consuming and expensive, requiring that we learn from our decisions as quickly as possible.
The challenge of deciding which medical decisions to evaluate can be modeled mathematically as a sequential decision making problem with binary outcomes. In this setting, we have a budget of measurements that we allocate sequentially to medical decisions so that when we finish our study, we have collected information to maximize our ability to identify the best medical decision with the highest response (probability of success). Scientists can draw on an extensive body of literature on the classic design of experiments \cite{morris1970optimal, wetherill1986sequential, montgomery2008design} whose goal is to decide what observations to make when fitting a function. Yet in the laboratory settings considered in this paper, the decisions need to be guided by a well-defined utility function (that is, identify the best alternative with the highest probability of success). This problem also relates to active learning \cite{schein2007active,tong2002support,freund1997selective,settles2010active} in several aspects. In terms of active learning scenarios, our model is most similar to membership query synthesis where the learner may request labels
for any unlabeled instance in the input space to learn
a classifier that accurately predicts the labels of new examples. By contrast, our goal is to maximize a utility function such as the success of a treatment. Also, it is typical in active learning not to query a label more than once, whereas we have to live with noisy outcomes, requiring that we sample the same label multiple times. Moreover, the expense of labeling each alternative sharpens the conflict between learning to predict and finding the best alternative. Another similar sequential decision-making setting is multi-armed bandit problems (e.g. \cite{auer2002finite,bubeck2012regret}). Our work will initially focus on offline settings such as laboratory experiments or medical trials, but the knowledge gradient for offline learning extends easily to online settings \cite{ryzhov2012knowledge}.
There is a literature studying sequential decision problems to maximize a utility function (e.g., \cite{he2007opportunity,chick2001new,powell2012optimal}). We are particularly interested in a policy that is called the knowledge gradient (KG) that maximizes the expected value of information. After its first appearance for ranking and selection problems \cite{frazier2008knowledge}, KG has been extended to various other belief models (e.g. \cite{mes2011hierarchical,negoescu2011knowledge,ryzhov2012knowledge,wang2015nested}). Yet there is no KG variant designed for binary classification with parametric models. In this paper, we extend the KG policy to the setting of classification problems under a logistic belief model which introduces the computational challenge of working with nonlinear models.
This paper is organized as follows. We first establish a rigorous mathematical model for the problem of sequentially maximizing the response under binary outcomes in Section \ref{sec:problem}. We then develop a recursive Bayesian logistic regression procedure to predict the response of each alternative and further formulate the problem as a Markov decision process. In Section 3, we design a knowledge-gradient type policy under a logistic belief model to guide the experiments and provide a finite-time analysis of the estimation error. This is different from the PAC (passive) learning bound, which relies on the i.i.d. assumption on the examples. Experiments are demonstrated in Section 4.
\section{Problem formulation} \label{sec:problem}
In this section, we state a formal model for our response maximization problem, including transition and objective functions. We then formulate the problem as a Markov decision process.
\subsection{The mathematical model}
We assume that we have a finite set of alternatives $\bm{x}\in \mathcal{X}=\{\bm{x}_1,\dots,\bm{x}_M\}$. The observation of measuring each $\bm{x}$ is a binary outcome $y \in \{-1,+1\}$ with some unknown probability $\text{Pr}[y=+1|\bm{x}]$. Under a limited budget $N$, our goal is to choose the measurement policy $(\bm{x}^1,\dots,\bm{x}^{N})$ and implementation decision $\bm{x}^{N+1}$ that maximizes $\text{Pr}(y=+1|\bm{x}^{N+1})$. We assume a parametric model where each $\bm{x}$ is a $d$-dimensional vector and the probability of an example $\bm{x}$ belonging to class $+1$ is given by a nonlinear transformation of an underlying linear function of $\bm{x}$ with a weight vector $\bm{w}$:
$$
\text{Pr}(y=+1|\bm{x},\bm{w})=\sigma(\bm{w}^T\bm{x}),
$$
with the sigmoid function $\sigma(a)$ chosen as the logistic function $\sigma(a)=\frac{1}{1+\text{exp}(-a)}.$
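In code, the response model can be written directly (a minimal sketch; the symmetry $\sigma(-a)=1-\sigma(a)$ means a single expression covers both labels):

```python
import math

def prob(y, x, w):
    """Pr(y | x, w) = sigma(y * w^T x) for labels y in {-1, +1}."""
    a = sum(wj * xj for wj, xj in zip(w, x))
    return 1.0 / (1.0 + math.exp(-y * a))
```

so that prob(+1, x, w) + prob(-1, x, w) = 1 for any x and w, as a probability model requires.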
We assume a Bayesian setting in which we have a multivariate prior distribution for the unknown parameter vector $\bm{w}$. At iteration n, we choose an alternative $\bm{x}^n$ to measure and observe a binary outcome $y^n$ assuming labels are generated independently given $\bm{w}$. Each alternative can be measured more than once with potentially different outcomes. Let $\mathcal{D}^n=\{(\bm{x}^i,y^i)\}_{i=1}^n$ denote the previous measured data set for any $n=1,\dots,N$. Define the filtration $(\mathcal{F}^n)_{n=0}^N$ by letting $\mathcal{F}^n$ be the sigma-algebra generated by $\bm{x}^1,y^1,\dots, \bm{x}^{n},y^n$. We use $\mathcal{F}^n$ and $\mathcal{D}^n$ interchangeably. Measurement and implementation decisions $\bm{x}^{n+1}$ are restricted to be $\mathcal{F}^n$-measurable so that decisions
may only depend on measurements made in the past. We use Bayes' rule to form a sequence of posterior predictive distributions $ \text{Pr}(\bm{w}|\mathcal{D}^n)$ for $\bm{w}$ from the prior and the previous measurements.
The next lemma states the equivalence of using true probabilities and sample estimates when evaluating a policy, where $\Pi$ is the set of policies. The proof is left in the supplementary material.
\begin{lemma}\label{eqv}
Let $\pi \in \Pi$ be a policy, and $\bm{x}^\pi = \arg \max_{\bm{x}} \text{Pr}[y = +1 | \bm{x}, \mathcal{D}^N]$ be the implementation decision after the budget $N$ is exhausted. Then
$$
\mathbb{E}[\text{Pr}(y=+1|\bm{x}^\pi,\bm{w})]=\mathbb{E}[\max_{\bm{x}}\text{Pr}(y=+1|\bm{x},\mathcal{D}^{N})],
$$
where the expectation is taken over the prior distribution of $\bm{w}$.
\end{lemma}
Denote by $\mathcal{X}^I$ an implementation policy for selecting an alternative after the measurement budget is exhausted; that is, $\mathcal{X}^I$ is a mapping from the history $\mathcal{D}^N$ to an alternative $\mathcal{X}^I(\mathcal{D}^N)$. Then, as a corollary of Lemma \ref{eqv}, we have \cite{powell2012optimal}
$$\max_{\mathcal{X}^I}\mathbb{E}\big[ \text{Pr}\big(y = +1 | \mathcal{X}^I( \mathcal{D}^N)\big)\big]= \mathbb{E}\big[\max_{\bm{x}}\text{Pr}(y = +1 | \bm{x}, \mathcal{D}^N)\big].
$$
In other words, the optimal decision at time $N$ is to go with our final set of beliefs. By the equivalence of using true probabilities and sample estimates when evaluating a policy, while we want to learn the unknown true value $\max_{\bm{x}}\text{Pr}(y=+1|\bm{x})$, we may write our problem's objective as
\begin{equation}\label{obj}
\max_{\pi \in \Pi} \mathbb{E}^{\pi}[ \max_{\bm{x}}\text{Pr}(y = +1 | \bm{x}, \mathcal{D}^N)].
\end{equation}
\subsection{From logistic regression to Markov decision process formulation}\label{LR}
Logistic regression is widely used in machine learning for binary classification \cite{hosmer2004applied}. Given a training set $\mathcal{D}=\{(\bm{x}_i,y_i)\}_{i=1}^n$ with $\bm{x}_i$ a $d$-dimensional vector and $y_i \in \{-1,+1\}$, with the assumption that training labels are generated independently given $\bm{w}$, the likelihood $\text{Pr}(\mathcal{D}|\bm{w})$ is defined as
$
\text{Pr}(\mathcal{D}|\bm{w}) = \prod_{i=1}^n \sigma(y_i\cdot\bm{w}^T\bm{x}_i).$
In the frequentist interpretation, the weight vector $\bm{w}$ is found by maximizing the likelihood of the training data $\text{Pr}(\mathcal{D}|\bm{w})$. $l_2$-regularization has been used to avoid over-fitting, with the estimate of the weight vector $\bm{w}$ given by:
\begin{equation}\label{RLR}
\min_{\bm{w}} \frac{\lambda}{2}\|\bm{w}\|^2+\sum_{i=1}^n\log(1+\exp(-y_i \bm{w}^T\bm{x}_i)).
\end{equation}
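The estimate in Eq. \eqref{RLR} can be computed by any convex optimization routine; the following is a minimal plain-gradient-descent sketch on hypothetical toy data (the step size and iteration count are arbitrary illustrative choices, not tuned values):

```python
import math

def fit_l2_logistic(data, lam, lr=0.1, iters=5000):
    """Minimize (lam/2)||w||^2 + sum_i log(1 + exp(-y_i w^T x_i)).

    data: list of (x, y) pairs with x a list of floats and y in {-1, +1}."""
    d = len(data[0][0])
    w = [0.0] * d
    for _ in range(iters):
        g = [lam * wj for wj in w]             # gradient of the regularizer
        for x, y in data:
            a = y * sum(wj * xj for wj, xj in zip(w, x))
            s = 1.0 / (1.0 + math.exp(a))      # sigma(-y w^T x)
            for j in range(d):
                g[j] -= y * s * x[j]           # gradient of the logistic loss
        w = [wj - lr * gj for wj, gj in zip(w, g)]
    return w
```

At the optimum, the stationarity condition $\lambda\bm{w}=\sum_i y_i\sigma(-y_i\bm{w}^T\bm{x}_i)\bm{x}_i$ holds, which also characterizes the MAP solution of the Bayesian model discussed next.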
\subsubsection{Bayesian setup}\label{BLR}
Exact Bayesian inference for logistic regression is intractable since the evaluation of the posterior distribution comprises a product of logistic sigmoid functions and the integral in the normalization constant is intractable as well. With a Gaussian prior on the weight vector, the Laplace approximation can be obtained by finding the mode of the posterior distribution and then fitting a Gaussian distribution centered at that mode (see Chapter 4.5 of \cite{bishop2006pattern}). Specifically, suppose we begin with a Gaussian prior
$
\text{Pr}(\bm{w})= \mathcal{N}(\bm{w}|\bm{m}, \bm{\Sigma}),
$
and we wish to approximate the posterior
$
\text{Pr}(\bm{w}|\mathcal{D}) \propto \text{Pr}(\mathcal{D}|\bm{w})\text{Pr}(\bm{w}).
$
Define the logarithm of the unnormalized posterior distribution
\begin{eqnarray}\nonumber \label{LPD}
\Psi(\bm{w}|\bm{m},\bm{\Sigma},\mathcal{D})&=&\log \text{Pr}(\mathcal{D}|\bm{w})+
\log\text{Pr}(\bm{w}) \\
&=& -\frac{1}{2}(\bm{w}-\bm{m})^T\bm{\Sigma}^{-1}(\bm{w}-\bm{m})- \sum_{i=1}^n\log(1+\exp(-y_i \bm{w}^T\bm{x}_i)).
\end{eqnarray}
The Laplace approximation is based on a Taylor expansion to $\Psi$ around its MAP (maximum a posteriori) solution $\hat{\bm{w}}= \arg \max_{\bm{w}}\Psi(\bm{w})$, which defines the mean of the Gaussian. The covariance is then given by the Hessian of the negative log posterior evaluated at $\hat{\bm{w}}$, which takes the form
\begin{equation}\label{LPDD}
(\bm{\Sigma}')^{-1}=-\nabla^2 \Psi(\bm{w})|_{\bm{w}=\hat{\bm{w}}} = \bm{\Sigma}^{-1}+\sum_{i=1}^n p_i(1-p_i)\bm{x}_i\bm{x}_i^T,
\end{equation}
where $p_i = \sigma(\hat{\bm{w}}^T\bm{x}_i)$.
The Laplace approximation results in a normal approximation to the posterior
\begin{equation}\label{pos}
\text{Pr}(\bm{w}|\mathcal{D}) \approx \mathcal{N}(\hat{\bm{w}},\bm{\Sigma}').
\end{equation}
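The covariance update \eqref{LPDD} is simple to implement; here is a plain-Python sketch for small $d$ (lists of lists standing in for matrices; purely illustrative):

```python
import math

def laplace_precision(Sigma_inv, X, w_hat):
    """(Sigma')^{-1} = Sigma^{-1} + sum_i p_i (1 - p_i) x_i x_i^T,
    with p_i = sigma(w_hat^T x_i).  Sigma_inv: d x d list of lists."""
    d = len(w_hat)
    P = [row[:] for row in Sigma_inv]  # copy the prior precision
    for x in X:
        a = sum(wj * xj for wj, xj in zip(w_hat, x))
        p = 1.0 / (1.0 + math.exp(-a))
        for i in range(d):
            for j in range(d):
                P[i][j] += p * (1.0 - p) * x[i] * x[j]
    return P
```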
By substituting an independent normal prior, with $q_j^{-1}$ as the $j$th diagonal element of the diagonal covariance matrix $\bm{\Sigma}$, the Laplace approximation to the posterior distribution of each weight $w_j$ reduces to
$
\text{Pr}(w_j|\mathcal{D}) \approx \mathcal{N}(\hat{w}_j,q_j^{-1}).
$ Note that if $q_j=\lambda$ and $m_j=0$, the solution of Eq. \eqref{RLR} is the same as the MAP solution of \eqref{LPD}. So $l_2$-regularized logistic regression can be interpreted as a Bayesian model with a Gaussian prior on the weights with standard deviation $1/\sqrt{\lambda}$.
\subsubsection{Recursive Bayesian logistic update}\label{sec:RBLR}
Our state space is the space of all possible predictive distributions for $\bm{w}$. Starting from a Gaussian prior $\mathcal{N}(\bm{w}|\bm{m}^0, \bm{\Sigma}^0)$, after the first $n$ observed data, the Laplace approximated posterior distribution is $\text{Pr}(\bm{w}|\mathcal{D}^n) \approx \mathcal{N}(\bm{w}|\bm{m}^n, \bm{\Sigma}^n)$ according to \eqref{pos}. We formally define the state space $\mathcal{S}$ to be the cross-product of $\mathbb{R}^d$ and the space of positive semidefinite matrices. At each time $n$, our state of knowledge is thus $S^n=(\bm{m}^n, \bm{\Sigma}^n)$.
Retraining the logistic model on all the previous data each time a new observation arrives, i.e., updating from $S^n$ to $S^{n+1}$ by obtaining the MAP solution of \eqref{LPD} from scratch, is clumsy, even with a diagonal covariance with constant diagonal elements. Instead, Bayesian logistic regression can be extended to perform recursive model updates after each training point.
To be more specific, after the first training point, the Laplace approximated posterior is $\mathcal{N}(\bm{w}|\bm{m}^1, \bm{\Sigma}^1)$. This serves as a prior on the weights to update the model when the next training point becomes available. In this recursive way of model updating, previously measured data need not be stored or used for retraining the logistic model. For the rest of this paper, we focus on independent normal priors (with $ \bm{\Sigma}=\lambda^{-1} \bm{I}$, where $ \bm{I}$ is the identity matrix), which is equivalent to $l_2$-regularized logistic regression and offers greater computational efficiency. All the results can be easily generalized to the correlated normal case. By setting the batch size $n=1$ and $\bm{\Sigma}=\lambda^{-1} \bm{I}$ in Eqs. \eqref{LPD} and \eqref{LPDD}, we obtain the recursive Bayesian logistic regression in Algorithm \ref{RBLR}.
\begin{algorithm}\label{RBLR}
\caption{Recursive Bayesian Logistic Regression}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\Input{Regularization parameter $\lambda > 0$}
$m_j=0$, $q_j=\lambda$. (Each weight $w_j$ has an independent prior $\mathcal{N}(m_j, q_j^{-1})$)\\
\For{$t=1$ to $T$}{
~\\
Get a new point $(\bm{x}, y)$.\\
Find $\hat{\bm{w}}$ as the maximizer of \eqref{LPD}: $-\frac{1}{2}\sum_{j=1}^d q_j(w_j-m_j)^2 - \log(1+\exp(-y\bm{w}^T\bm{x})).$\\
$m_j=\hat{w}_j$\\
Update $q_j$ according to \eqref{LPDD}:
$q_j \leftarrow q_j +\sigma(\hat{\bm{w}}^T\bm{x})\big(1-\sigma(\hat{\bm{w}}^T\bm{x})\big)x^2_{j}$.
}
\end{algorithm}
Since $\Psi(\bm{w}|\bm{m},\bm{\Sigma},\mathcal{D})$ is concave in $\bm{w}$, its maximization is a convex optimization problem and we can tap a wide range of algorithms, including gradient search, conjugate gradient, and the BFGS method (see \cite{wright1999numerical} for details). Yet when setting $n=1$ and $\bm{\Sigma}=\lambda^{-1} \bm{I}$ in Eq. \eqref{LPD}, a stable and efficient algorithm for maximizing $\Psi(\bm{w})=-\frac{1}{2}\sum_{j=1}^d q_j(w_j-m_j)^2 - \log(1+\exp(-y\bm{w}^T\bm{x}))$ can be obtained as follows. First, we calculate $$\frac{\partial \Psi}{\partial w_i}=-q_i(w_i-m_i)+\frac{yx_i\exp(-y\mathbf{w}^T\mathbf{x})}{1+\exp(-y\mathbf{w}^T\mathbf{x})}.$$ By setting
$\partial \Psi/\partial w_i=0$ for all $i$ and denoting $(1+\exp(y\mathbf{w}^T\mathbf{x}))^{-1}$ by $p$, we have
$$q_i(w_i-m_i)=ypx_i,~~~~i=1,2,\dots,d,$$ and thus
$w_i=m_i+yp\frac{x_i}{q_i}.$
Plugging these equalities into the definition of $p$, we have
$$\frac{1}{p}=1+\exp\Big{(}y\sum_{i=1}^{d}(m_i+yp\frac{x_i}{q_i})x_i\Big{)}
=1+\exp(y\mathbf{m}^T\mathbf{x})\exp \Big{(}y^2p\sum_{i=1}^d\frac{x_i^2}{q_i}\Big{)}.$$
The left-hand side decreases from infinity to $1$ while the right-hand side increases from $1+\exp(y\bm{m}^T\bm{x})>1$ as $p$ goes from $0$ to $1$; therefore the solution exists and is unique in $(0,1)$. Having reduced a $d$-dimensional problem to a one-dimensional one, the simple bisection method suffices.
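The one-step update just derived can be sketched in code. The following is a minimal Python sketch under the independent-prior setting (function and variable names are ours, not part of the paper's implementation): solve the one-dimensional fixed point for $p$ by bisection, recover the MAP weights, and update the precisions.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def recursive_blr_update(m, q, x, y, tol=1e-10):
    """One step of recursive Bayesian logistic regression.

    m, q : prior mean and precision of the independent normal belief
    x    : feature vector of the measured alternative
    y    : observed label in {-1, +1}
    Solves 1/p = 1 + exp(y m^T x) exp(p sum_i x_i^2 / q_i) by bisection,
    then recovers the MAP weights and applies the Laplace precision update.
    """
    c = float(y) * m.dot(x)          # y m^T x
    r = np.sum(x * x / q)            # sum_i x_i^2 / q_i  (note y^2 = 1)

    def f(p):                        # f(p) = 1/p - RHS, decreasing in p
        return 1.0 / p - (1.0 + np.exp(c + p * r))

    lo, hi = tol, 1.0
    while hi - lo > tol:             # bisection on (0, 1)
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    p = 0.5 * (lo + hi)

    w_hat = m + y * p * x / q            # MAP solution w_i = m_i + y p x_i / q_i
    q_new = q + p * (1.0 - p) * x * x    # since sigma(a)(1-sigma(a)) = p(1-p)
    return w_hat, q_new
```

Note that $p=\sigma(-y\hat{\bm{w}}^T\bm{x})$, so the precision update $\sigma(\hat{\bm{w}}^T\bm{x})(1-\sigma(\hat{\bm{w}}^T\bm{x}))x_j^2$ simplifies to $p(1-p)x_j^2$.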
\subsection{Markov decision process formulation}
Our learning problem is a dynamic program that can be formulated as a Markov decision process. With diagonal covariance matrices, the state space reduces to $\mathcal{S} := \mathbb{R}^d \times (0,\infty) ^d$; it consists of points $s=(\bm{m},\bm{q})$, where $m_i$ and $q_i$ are the mean and the precision of a normal distribution. We next define the transition function based on the recursive Bayesian logistic regression.
\begin{definition}The transition function $T$: $\mathcal{S}\times\mathcal{X}\times \{-1, 1\} \mapsto \mathcal{S}$ is defined as
$$T\Big((\bm{m},\bm{q}), \bm{x},y\Big) = \Big(\hat{\bm{w}}(\bm{m}),\ \bm{q}+ p(1-p)\textbf{diag}(\bm{x}\bm{x}^T)\Big),
$$
where $\hat{\bm{w}}(\bm{m})=\arg \max_{\bm{w}}\Psi(\bm{w}|\bm{m},\bm{q})$, $p = \sigma(\hat{\bm{w}}^T\bm{x})$ and $\textbf{diag}(\bm{x}\bm{x}^T)$ is a column vector containing the diagonal elements of $\bm{x}\bm{x}^T$, so that $S^{n+1}=T(S^n, \bm{x},y)$.
\end{definition}
In a dynamic program, the value function is defined as the value of the optimal policy given a particular state
$S^n$ at time $n$, and may also be determined recursively through Bellman's equation. If the value function can be computed efficiently, the optimal policy may then also be computed from it. The value function $V^n:\mathcal{S} \mapsto \mathbb{R}$ at time $n=1,\dots,N+1$ is given by \eqref{obj} as
$$
V^n(s) := \max_{\pi} \mathbb{E}^{\pi}[ \max_{\bm{x}}\text{Pr}(y = +1 | \bm{x}, \mathcal{F}^N)|S^n=s].
$$
By noting that $\max_{\bm{x}}\text{Pr}(y = +1 | \bm{x},\mathcal{F}^N)$ is $\mathcal{F}^N$-measurable and thus the expectation does not depend on the policy $\pi$, the terminal value function $V^{N+1}$ can be computed directly as
$$
V^{N+1}(s)=\max_{\bm{x}}\text{Pr}(y = +1 | \bm{x},s), \forall s \in \mathcal{S}.
$$
The value function at times $n=1,\dots,N$, $V^n$, is given recursively by
$$
V^n(s)=\max_{\bm{x}}\mathbb{E}[V^{n+1}(T(s,\bm{x},y))], s \in \mathcal{S}.
$$
\section{Knowledge gradient policy for logistic belief model}
Since the ``curse of dimensionality''
makes direct computation of the value function intractable, computationally efficient approximate
policies need to be considered. A computationally attractive policy for ranking and selection problems is the knowledge
gradient (KG), which is a stationary policy that at the $n$th iteration chooses its $(n+1)$th measurement to maximize the single-period expected increase in value \cite{frazier2008knowledge}. It enjoys nice properties, including myopic and asymptotic optimality. After its first appearance, KG has been extended to various belief models (e.g. \cite{mes2011hierarchical,negoescu2011knowledge,ryzhov2012knowledge,wang2015nested}) for offline learning, with an immediate extension to online learning problems \cite{ryzhov2012knowledge}. Yet there is no KG variant designed for binary classification with parametric models, primarily because of the complexity of dealing with nonlinear belief models. In what follows, we extend the KG policy to the setting of classification problems under a logistic belief model.
\begin{definition}The knowledge gradient of measuring an alternative $\bm{x}$ while in state $s$ is
\begin{equation} \label{KG}
\nu_{\bm{x}}^{\text{KG}}(s) := \mathbb{E}[V^{N+1}(T(s,\bm{x},y))-V^{N+1}(s)].
\end{equation}
\end{definition}
Since the label for alternative $\bm{x}$ is not known at the time of selection, the expectation is computed conditional on the current model specified by $s=(\bm{m},\bm{q})$. Specifically,
given a state $s=(\bm{m},\bm{q})$, the label $y$ for an alternative $\bm{x}$ follows a Bernoulli distribution with predictive probability
\begin{eqnarray}\label{predictD}
\text{Pr}(y=+1|\bm{x},s) =\int \text{Pr}(y=+1|\bm{x},\bm{w})\text{Pr}(\bm{w}|s)\text{d}\bm{w}
= \int \sigma(\bm{w}^T\bm{x})p(\bm{w}|s)\text{d}\bm{w}.
\end{eqnarray}
We have
\begin{eqnarray*}
\mathbb{E}[V^{N+1}(T(s,\bm{x},y))] &=& \text{Pr}(y=+1|\bm{x},s)V^{N+1}(T(s, \bm{x},+1))+ \text{Pr}(y=-1|\bm{x},s)V^{N+1}(T(s, \bm{x},-1))\\
&=&\text{Pr}(y=+1|\bm{x},s)\cdot \max_{\bm{x}'}\text{Pr}(y = +1 | \bm{x}',T(s,\bm{x},+1))\\
&&+\text{Pr}(y=-1|\bm{x},s)\cdot \max_{\bm{x}'}\text{Pr}(y = +1 | \bm{x}',T(s,\bm{x},-1)).
\end{eqnarray*}
The knowledge gradient policy suggests at each time $n$ selecting the alternative that maximizes $\nu_{\bm{x}}^{\text{KG}}(s^{n-1})$ where ties are broken randomly. The same optimization procedure as in recursive Bayesian logistic regression needs to be conducted for calculating the transition functions $T(s,\bm{x},\cdot)$.
The predictive distribution $ \int \sigma(\bm{w}^T\bm{x})p(\bm{w}|s)\text{d}\bm{w}$ cannot be evaluated in closed form when the sigmoid $\sigma$ is the logistic function, so we deploy the following approximation procedure.
Denoting $a=\bm{w}^T\bm{x}$ and $\delta(\cdot)$ as the Dirac delta function, we have $\sigma(\bm{w}^T\bm{x})=\int \delta(a-\bm{w}^T\bm{x})\sigma(a)\text{d}a.$
Hence
$$\int \sigma(\bm{w}^T\bm{x})p(\bm{w}|s) \text{d}\bm{w}=\int \sigma(a)p(a)\text{d}a,$$
where
$p(a)=\int \delta(a-\bm{w}^T\bm{x})p(\bm{w}|s) \text{d}\bm{w}.$
Since $p(\bm{w}|s) = \mathcal{N}(\bm{m},\bm{q}^{-1})$ is Gaussian, the marginal distribution $p(a)$ is also Gaussian. We can evaluate $p(a)$ by calculating the mean and covariance of this distribution \cite{bishop2006pattern}. We have
\begin{eqnarray*}
\mu_a&=&\mathbb{E}[a]=\int p(a)a \text{ d}a = \int p(\bm{w}|s)\bm{w}^T\bm{x} \text{ d}\bm{w}=\bm{m}^T\bm{x},\\
\sigma_a^2&=& \int p(\bm{w}|s) \big((\bm{w}^T\bm{x})^2-(\bm{m}^T\bm{x})^2 \big) \text{ d}\bm{w}=\sum_{j=1}^d q_j^{-1} x_j^2.
\end{eqnarray*} Thus $\int \sigma(\bm{w}^T\bm{x})p(\bm{w}|s) \text{d}\bm{w}=\int \sigma(a)p(a)\text{d}a=\int \sigma(a) \mathcal{N}(a|\mu_a, \sigma^2_a) \text{d}a.$
For a logistic function, in order to obtain the best approximation \cite{barber1998ensemble,spiegelhalter1990sequential}, we approximate $\sigma(a)$ by $\Phi(\alpha a)$ with $\alpha^2=\pi/8$. Denoting $\kappa(\sigma^2)=(1+\pi \sigma^2/8)^{-1/2}$, we have
$$
\text{Pr}(y=+1|\bm{x},s)=\int \sigma(\bm{w}^T\bm{x})p(\bm{w}|s) \text{d}\bm{w} \approx \sigma(\kappa(\sigma_a^2)\mu_a).
$$
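As a quick illustration, this probit-style approximation of the predictive probability can be written as follows (a minimal sketch; function names are ours).

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def predictive_prob(m, q, x):
    """Approximate Pr(y=+1 | x, s) = E_w[sigmoid(w^T x)] under
    w ~ N(m, diag(1/q)), via sigmoid(a) ~= Phi(sqrt(pi/8) a), which gives
    E[sigmoid(a)] ~= sigmoid(kappa(s2) * mu) with kappa = (1 + pi s2/8)^(-1/2)."""
    mu = m.dot(x)                    # mean of a = w^T x
    s2 = np.sum(x * x / q)           # variance of a = w^T x
    kappa = 1.0 / np.sqrt(1.0 + np.pi * s2 / 8.0)
    return sigmoid(kappa * mu)
```

The approximation can be checked against one-dimensional numerical integration of $\int \sigma(a)\,\mathcal{N}(a|\mu_a,\sigma_a^2)\,\mathrm{d}a$, with which it agrees closely for moderate variances.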
We summarize the decision rule of the knowledge gradient policy at each iteration in Algorithm \ref{KGalg}.
\begin{algorithm}\label{KGalg}
\caption{Knowledge Gradient Policy for Logistic Belief Model}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\Input{$m_j$, $q_j$ (Each weight $w_j$ has an independent prior $\mathcal{N}(m_j, q_j^{-1})$)}
\For{$\bm{x}$ in $\mathcal{X}$}{
~\\
Let $\Psi(\bm{w},y)=-\frac{1}{2}\sum_{j=1}^d q_j(w_j-m_j)^2 - \log(1+\exp(-y\bm{w}^T\bm{x}))$\\
$\hat{\bm{w}}_{+}=\arg \max_{\bm{w}}\Psi(\bm{w},+1)$ \\
$\hat{\bm{w}}_{-}=\arg \max_{\bm{w}}\Psi(\bm{w},-1)$ \\
$\mu=\bm{m}^T\bm{x}$\\
$\sigma^2=\sum_{j=1}^d q_j^{-1} x_j^2$\\
Define $\mu_{+}(\bm{x}')=\hat{\bm{w}}_{+}^T\bm{x}'$, $\mu_{-}(\bm{x}')=\hat{\bm{w}}_{-}^T\bm{x}'$\\
Define $\sigma^2_{\pm}(\bm{x}')= \sum_{j=1}^d\Big( q_j+\sigma(\hat{\bm{w}}^T_{\pm}\bm{x})\big(1-\sigma(\hat{\bm{w}}_{\pm}^T\bm{x})\big)x^2_{j} \Big)^{-1} (x'_j)^2$\\
$\tilde{\nu}_{\bm{x}}=\sigma(\kappa(\sigma^2)\mu)\cdot \max_{\bm{x'}}\sigma \big(\kappa(\sigma_+^2(\bm{x}'))\mu_{+}(\bm{x'})\big)+\big(1-\sigma(\kappa(\sigma^2)\mu)\big)\cdot \max_{\bm{x'}}\sigma \big(\kappa(\sigma_-^2(\bm{x}'))\mu_{-}(\bm{x'})\big)$\\
}
$\bm{x}^{\text{KG}} = \arg \max_{\bm{x}}\tilde{\nu}_{\bm{x}}$
\end{algorithm}
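Putting the pieces together, the KG decision rule above can be sketched as a self-contained Python routine (illustrative only, re-implementing the recursive update and the approximate predictive distribution with our own function names; it is not the authors' code).

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def map_update(m, q, x, y, tol=1e-10):
    """MAP weights and Laplace precisions after one observation (x, y)."""
    c, r = y * m.dot(x), np.sum(x * x / q)
    lo, hi = tol, 1.0
    while hi - lo > tol:                      # bisection for the fixed point p
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if 1.0 / mid > 1.0 + np.exp(c + mid * r) else (lo, mid)
    p = 0.5 * (lo + hi)
    return m + y * p * x / q, q + p * (1.0 - p) * x * x

def pred(m, q, x):
    """Probit-approximated predictive probability Pr(y=+1 | x, (m, q))."""
    s2 = np.sum(x * x / q)
    return sigmoid(m.dot(x) / np.sqrt(1.0 + np.pi * s2 / 8.0))

def kg_choice(m, q, X):
    """Pick the alternative (row of X) maximizing E[V^{N+1}(T(s, x, y))];
    this has the same argmax as the knowledge gradient nu^KG_x(s)."""
    best, x_kg = -np.inf, None
    for x in X:
        p_plus = pred(m, q, x)                # predictive Pr(y=+1 | x)
        v = 0.0
        for y, py in ((+1, p_plus), (-1, 1.0 - p_plus)):
            m1, q1 = map_update(m, q, x, y)   # posterior state T(s, x, y)
            v += py * max(pred(m1, q1, xp) for xp in X)
        if v > best:
            best, x_kg = v, x
    return x_kg
```

Since $V^{N+1}(s)$ does not depend on the candidate $\bm{x}$, maximizing the one-step lookahead value is equivalent to maximizing the knowledge gradient \eqref{KG}.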
We close this section by presenting the following finite-time bound on the error
of the estimated weights, with the proof deferred to the supplement. Without loss of generality, we assume $\|\bm{x}\|_2 \le 1$, $\forall{\bm{x}\in \mathcal{X}}$.
\begin{theorem}
Let $\mathcal{D}^n$ be the $n$ measurements produced by the KG policy and $\bm{w}^n=\arg\max_{\bm{w}}\Psi(\bm{w}|\bm{m}^0, \bm{\Sigma}^0, \mathcal{D}^n)$ with the prior distribution $\text{Pr}(\bm{w}^*)= \mathcal{N}(\bm{w}^*|\bm{m}^0, \bm{\Sigma}^0).$ Then with probability $P_d(M)$, the expected error of $\bm{w}^n$ is bounded as
$$\mathbb{E}_{ \bm{y}\sim \mathcal{B}(\mathcal{D}^n,\bm{w}^*)}||\bm{w}^n-\bm{w}^*||_2\le\frac{C_{min}+\lambda_{min}\big{(}\bm{\Sigma}^{-1}\big{)}}{2},$$
where the distribution $\mathcal{B}(\mathcal{D}^n,\bm{w}^*)$ is the vector Bernoulli distribution
$\text{Pr}(y^i=+1)=\sigma(\bm{w}^{*T}\bm{x}^i)$,
$P_d(M)$ is the probability that a $d$-dimensional standard normal random variable falls in the ball with radius $M =\frac{1}{8}\frac{\lambda_{min}^2}{\sqrt{\lambda_{max}}}$ and
$C_{min}= \lambda_{min}
\Big{(}\frac{1}{n}\sum_{i=1}^n \sigma(\bm{w}^{*T}\bm{x}^i)\big{(}1-\sigma(\bm{w}^{*T}\bm{x}^i)\big{)}\bm{x}^i(\bm{x}^i)^T \Big{)}.$
\end{theorem}
In the special case where $\bm{\Sigma}^0=\lambda^{-1}\bm{I}$, we have $\lambda_{max}=\lambda_{min}=\lambda$ and $M=\frac{\lambda^{3/2}}{8}$.
The bound holds with higher probability $P_d(M)$ for larger $\lambda$. This is natural since a larger $\lambda$ represents a normal distribution with narrower bandwidth, resulting in a $\bm{w}^*$ more concentrated around $\bm{m}^0$.
\section{Experimental results}
We evaluate the proposed method on both synthetic datasets and the UCI machine learning repository \cite{Lichman:2013}, with classification problems drawn from settings including fertility, glass identification, blood transfusion, survival, breast cancer (wpbc), planning relax and climate model failure. We first analyze the behavior of the KG policy and then compare it to state-of-the-art active learning algorithms. On synthetic datasets, we randomly generate a set of $M$ $d$-dimensional alternatives $\bm{x}$ from $[-3,3]^d$. We conduct experiments in a Bayesian fashion where at each run we sample a true $(d+1)$-dimensional weight vector $\bm{w}^*$ from the prior distribution $w^*_i \sim \mathcal{N}(0, \lambda)$. The $+1$ label for each alternative $\bm{x}$ is generated with probability $\sigma(w^*_0+\sum_{j=1}^d w^*_jx_j)$. For each UCI dataset, we use all the data points as the set of alternatives with their original attributes. We then simulate their labels using a weight vector $\bm{w}^*$. This weight vector could have been chosen arbitrarily, but it was in fact a perturbed version of the weight vector trained through logistic regression on the original dataset. All the policies start with the same one randomly selected example per class.
\subsection{Behavior of the KG policy}
To better understand the behavior of the KG policy, Fig. \ref{2d} shows the snapshot of the KG policy at each iteration on a $2$-dimensional synthetic dataset ($M=200$) in one run. The scatter plots show the KG values with both the color and the size of the point reflecting the KG value of the corresponding alternative. The star denotes the true alternative with the largest response. The red square is the alternative with the largest KG value. The pink circle is the implementation decision that maximizes the response under current estimation of $\bm{w}^*$ if the budget is exhausted after that iteration.
\begin{figure}[htp]
\centering
\hspace*{-0.4cm}
\begin{tabular}{cccc}
\includegraphics[width=0.25\textwidth]{h1.pdf}
\includegraphics[width=0.25\textwidth]{h2.pdf}
\includegraphics[width=0.25\textwidth]{h3.pdf}
\includegraphics[width=0.25\textwidth]{h4.pdf}
\end{tabular}
\caption{The scatter plots illustrate the KG values at 1-4 iterations from left to right with both the color and the size reflecting the magnitude. The star, the red square and pink circle indicate the true best alternative, the alternative to be selected and the implementation decision, respectively. \label{2d}}
\end{figure}
\iffalse
\begin{figure}[htp]
\centering
\begin{tabular}{ccc}
\includegraphics[width=0.333\textwidth]{h1.pdf}
\includegraphics[width=0.333\textwidth]{h2.pdf}
\includegraphics[width=0.333\textwidth]{h3.pdf} \\
\includegraphics[width=0.333\textwidth]{h4.pdf}
\includegraphics[width=0.333\textwidth]{h5.pdf}
\includegraphics[width=0.333\textwidth]{h6.pdf}
\end{tabular}
\caption{Snapshots on a $2$d dataset. The scatter plots illustrate the KG values at 1-6 iterations from left to right, bottom to down with both the color and the size reflecting the magnitude. The star, the red square and pink circle indicate the true best alternative, the alternative to be selected and the implementation decision, respectively. \label{2d}}
\end{figure}
\fi
It can be seen from the figure that the KG policy finds the true best alternative after only three measurements, reaching out to different alternatives to improve its estimates. We can infer from Fig. \ref{2d} that the KG policy tends to choose alternatives near the boundary of the region.
This criterion is natural: in order to find the true maximum, we need enough information about $\bm{w}^*$ to estimate the probabilities well near the true maximum, which appears near the boundary. On the other hand, in a logistic model with labeling noise, a data point $\bm{x}$ with small $\bm{x}^T\bm{x}$ inherently carries little information, as pointed out in \cite{zhang2000value}. As an extreme example, when $\bm{x}=\bm{0}$ the label is completely random for any $\bm{w}$ since $\text{Pr}(y=+1|\bm{w},\bm{0}) \equiv 0.5$. This is an issue when perfect classification is not achievable. So it is essential to label a data point with larger $\bm{x}^T\bm{x}$, which has the most potential to improve the confidence of the estimate.
\begin{wrapfigure}{r}{0.6\textwidth}
\begin{center}
\includegraphics[width=0.6\textwidth]{h_Diff_6.pdf}
\end{center}
\caption{Absolute error. \label{Abs}}
\end{wrapfigure}
Also depicted in Fig. \ref{Abs} is the absolute class distribution error of each alternative, which is the absolute difference between the predictive probability of class $+1$ under current estimate and the true probability after $6$ iterations. We see that the probability at the true maximum is well approximated, while moderate error in
the estimate is located away from this region of interest. We also provide the analysis on a 3-dimensional dataset in the supplement.
\subsection{Comparison with other policies}
Recall that our goal is to maximize the expected response of the implementation decision. We define the Opportunity Cost (OC) metric as the gap between the true maximal response under weight $\bm{w}^*$ and the response of the implementation decision $\bm{x}^{N+1}:=\arg\max_{\bm{x}} \text{Pr}(y=+1|\bm{x},\bm{w}^N)$:
$$\text{OC}:=\max_{\bm{x}\in \mathcal{X}}\text{Pr}(y=+1|\bm{x},\bm{w}^*)-\text{Pr}(y=+1|\bm{x}^{N+1},\bm{w}^*).$$
Note that the opportunity cost is always non-negative and the smaller the better.
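The OC metric is straightforward to compute; a minimal sketch (function name is ours) under the logistic model:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def opportunity_cost(w_star, w_N, X):
    """OC = max_x Pr(y=+1 | x, w*) - Pr(y=+1 | x_impl, w*), where the
    implementation decision x_impl maximizes the *estimated* response
    under w_N while the responses are evaluated under the truth w*."""
    p_true = sigmoid(X.dot(w_star))          # true response of each alternative
    x_impl = np.argmax(sigmoid(X.dot(w_N)))  # implementation decision
    return np.max(p_true) - p_true[x_impl]
```

By construction the OC is zero whenever the estimated and true maximizers coincide, and non-negative otherwise.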
To make a fair comparison, on each run the labels of all the alternatives are randomly pre-generated according to the weight vector $\bm{w}^*$ and shared across all the competing policies.
Since there is no policy directly solving the same sequential response maximizing problem under a logistic model, considering the relationship with active learning as described in Section 1, we compare with the following state-of-the-art active learning policies compatible with logistic regression:
Random sampling (Random), a myopic method that selects the most uncertain instance each step (MostUncertain), Fisher information (Fisher) \cite{hoi2006batch}, the batch-mode active learning via error bound minimization (Logistic Bound) \cite{gu2014batch} and discriminative batch-mode active learning (Disc) \cite{guo2008discriminative} with batch size equal to 1. All the state transitions are based on recursive Bayesian logistic regression, while different policies provide different rules for labeling decisions at each iteration. The experimental results are shown in Fig. \ref{33}.
In all the figures, the x-axis denotes the number of measured alternatives and the y-axis represents the averaged opportunity cost over 100 runs.
\begin{figure}[htp!]
\centering
\begin{tabular}{ccc}
\subfigure[fertility]{
\includegraphics[width=0.29\textwidth]{Fertility.pdf} }
\subfigure[glass identification]{
\includegraphics[width=0.29\textwidth]{GlassIdentification.pdf} }
\subfigure[blood transfusion]{
\includegraphics[width=0.29\textwidth]{BloodTransfusion.pdf}
} \\
\subfigure[survival]{
\includegraphics[width=0.29\textwidth]{hrberman.pdf} }
\subfigure[breast cancer (wpbc)]{
\includegraphics[width=0.29\textwidth]{wpbc.pdf} }
\subfigure[planning relax ]{
\includegraphics[width=0.29\textwidth]{plrx.pdf}
} \\
\subfigure[climate]{
\includegraphics[width=0.29\textwidth]{pop.pdf} }
\subfigure[Synthetic data, $d=10$]{
\includegraphics[width=0.29\textwidth]{OC3.pdf} }
\subfigure[Synthetic data, $d=15$]{
\includegraphics[width=0.29\textwidth]{OC4.pdf}
} \\
\end{tabular}
\caption{Opportunity cost on UCI and synthetic datasets. \label{33}}
\end{figure}
Fig. \ref{33} demonstrates that KG outperforms the other policies significantly in most cases, especially in early iterations. MostUncertain, Fisher and Logistic Bound perform well on some datasets and badly on others. Disc and Random yield relatively stable and satisfactory performance. A possible explanation is that the goal of active learning is to learn a classifier which accurately predicts the labels of new examples, so these criteria are not directly related to maximizing the response beyond the intent to learn the prediction. After enough iterations, when the active learning methods presumably achieve a good estimator of $\bm{w}^*$, their performance improves.
However, in the case when an experiment is expensive and only a small budget is allowed, the KG policy, which is designed specifically to maximize the response, is preferred.
\section{Conclusion}
In this paper, we consider binary classification problems where we have to run expensive experiments, forcing us to learn the most from each experiment. The goal is to learn the classification model as quickly as possible to identify the alternative with the highest response. We develop a knowledge gradient policy using a logistic regression belief model, for which we develop an approximation method to overcome the computational challenges in finding the knowledge gradient. We provide a finite-time analysis of the estimation error, and report the results of a series of experiments that demonstrate its efficiency.
\section{Introduction}
There are many classification problems where observations are time consuming and/or expensive. One example arises in health care analytics, where physicians have to make medical decisions (e.g. a course of drugs, surgery, and expensive tests). Assume that a doctor faces a discrete set of medical choices, and that we can characterize an outcome as a success (patient does not need to return for more treatment) or a failure (patient does need followup care such as repeated operations). We encounter two challenges. First, there are very few patients with the same characteristics, creating few opportunities to test a treatment. Second, testing a medical decision may require several weeks to determine the outcome. This creates a situation where experiments (evaluating a treatment decision) are time consuming and expensive, requiring that we learn from our decisions as quickly as possible.
The challenge of deciding which medical decisions to evaluate can be modeled mathematically as a sequential decision making problem with binary outcomes. In this setting, we have a budget of measurements that we allocate sequentially to medical decisions so that when we finish our study, we have collected information to maximize our ability to identify the best medical decision with the highest response (probability of success). Scientists can draw on an extensive body of literature on the classic design of experiments \cite{morris1970optimal, wetherill1986sequential, montgomery2008design} whose goal is to decide what observations to make when fitting a function. Yet in the laboratory settings considered in this paper, the decisions need to be guided by a well-defined utility function (that is, identify the best alternative with the highest probability of success). This problem also relates to active learning \cite{schein2007active,tong2002support,freund1997selective,settles2010active} in several aspects. In terms of active learning scenarios, our model is most similar to membership query synthesis where the learner may request labels
for any unlabeled instance in the input space to learn
a classifier that accurately predicts the labels of new examples. By contrast, our goal is to maximize a utility function such as the success of a treatment. Also, it is typical in active learning not to query a label more than once, whereas we have to live with noisy outcomes, requiring that we sample the same label multiple times. Moreover, the expense of labeling each alternative sharpens the conflict between learning the prediction and finding the best alternative. Another similar sequential decision making setting is multi-armed bandit problems (e.g. \cite{auer2002finite,bubeck2012regret}). Our work will initially focus on offline settings such as laboratory experiments or medical trials, but the knowledge gradient for offline learning extends easily to online settings \cite{ryzhov2012knowledge}.
There is a literature studying sequential decision problems to maximize a utility function (e.g., \cite{he2007opportunity,chick2001new,powell2012optimal}). We are particularly interested in a policy that is called the knowledge gradient (KG) that maximizes the expected value of information. After its first appearance for ranking and selection problems \cite{frazier2008knowledge}, KG has been extended to various other belief models (e.g. \cite{mes2011hierarchical,negoescu2011knowledge,ryzhov2012knowledge,wang2015nested}). Yet there is no KG variant designed for binary classification with parametric models. In this paper, we extend the KG policy to the setting of classification problems under a logistic belief model which introduces the computational challenge of working with nonlinear models.
This paper is organized as follows. We first rigorously establish a sound mathematical model for the problem of sequentially maximizing the response under binary outcomes in Section \ref{sec:problem}. We then develop a recursive Bayesian logistic regression procedure to predict the response of each alternative and further formulate the problem as a Markov decision process. In Section 3, we design a knowledge-gradient type policy under a logistic belief model to guide the experiments and provide a finite-time analysis of the estimation error. This is different from the PAC (passive) learning bound, which relies on the i.i.d. assumption of the examples. Experimental results are presented in Section 4.
\section{Problem formulation} \label{sec:problem}
In this section, we state a formal model for our response maximization problem, including transition and objective functions. We then formulate the problem as a Markov decision process.
\subsection{The mathematical model}
We assume that we have a finite set of alternatives $\bm{x}\in \mathcal{X}=\{\bm{x}_1,\dots,\bm{x}_M\}$. The observation of measuring each $\bm{x}$ is a binary outcome $y \in \{-1,+1\}$ with some unknown probability $\text{Pr}[y=+1|\bm{x}]$. Under a limited budget $N$, our goal is to choose the measurement policy $(\bm{x}^1,\dots,\bm{x}^{N})$ and implementation decision $\bm{x}^{N+1}$ that maximizes $\text{Pr}(y=+1|\bm{x}^{N+1})$. We assume a parametric model where each $\bm{x}$ is a $d$-dimensional vector and the probability of an example $\bm{x}$ belonging to class $+1$ is given by a nonlinear transformation of an underlying linear function of $\bm{x}$ with a weight vector $\bm{w}$:
$$
\text{Pr}(y=+1|\bm{x},\bm{w})=\sigma(\bm{w}^T\bm{x}),
$$
with the sigmoid function $\sigma(a)$ chosen as the logistic function $\sigma(a)=\frac{1}{1+\text{exp}(-a)}.$
We assume a Bayesian setting in which we have a multivariate prior distribution for the unknown parameter vector $\bm{w}$. At iteration $n$, we choose an alternative $\bm{x}^n$ to measure and observe a binary outcome $y^n$, assuming labels are generated independently given $\bm{w}$. Each alternative can be measured more than once with potentially different outcomes. Let $\mathcal{D}^n=\{(\bm{x}^i,y^i)\}_{i=1}^n$ denote the previously measured data set for any $n=1,\dots,N$. Define the filtration $(\mathcal{F}^n)_{n=0}^N$ by letting $\mathcal{F}^n$ be the sigma-algebra generated by $\bm{x}^1,y^1,\dots, \bm{x}^{n},y^n$. We use $\mathcal{F}^n$ and $\mathcal{D}^n$ interchangeably. Measurement and implementation decisions $\bm{x}^{n+1}$ are restricted to be $\mathcal{F}^n$-measurable so that decisions
may only depend on measurements made in the past. We use Bayes' rule to form a sequence of posterior predictive distributions $ \text{Pr}(\bm{w}|\mathcal{D}^n)$ for $\bm{w}$ from the prior and the previous measurements.
The next lemma states the equivalence of using true probabilities and sample estimates when evaluating a policy, where $\Pi$ is the set of policies. The proof is given in the supplementary material.
\begin{lemma}\label{eqv}
Let $\pi \in \Pi$ be a policy, and $\bm{x}^\pi = \arg \max_{\bm{x}} \text{Pr}[y = +1 | \bm{x}, \mathcal{D}^N]$ be the implementation decision after the budget $N$ is exhausted. Then
$$
\mathbb{E}[\text{Pr}(y=+1|\bm{x}^\pi,\bm{w})]=\mathbb{E}[\max_{\bm{x}}\text{Pr}(y=+1|\bm{x},\mathcal{D}^{N})],
$$
where the expectation is taken over the prior distribution of $\bm{w}$.
\end{lemma}
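A sketch of the argument (the full proof is in the supplementary material): since $\bm{x}^\pi$ is $\mathcal{F}^N$-measurable,
\begin{eqnarray*}
\mathbb{E}[\text{Pr}(y=+1|\bm{x}^\pi,\bm{w})] &=& \mathbb{E}\big[\,\mathbb{E}[\sigma(\bm{w}^T\bm{x}^\pi)\,|\,\mathcal{F}^N]\,\big]
= \mathbb{E}\big[\text{Pr}(y=+1|\bm{x}^\pi,\mathcal{D}^N)\big]\\
&=&\mathbb{E}\big[\max_{\bm{x}}\text{Pr}(y=+1|\bm{x},\mathcal{D}^{N})\big],
\end{eqnarray*}
where the first equality is the tower property of conditional expectation and the last follows from the definition of $\bm{x}^\pi$.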
Denote by $\mathcal{X}^I$ an implementation policy for selecting an alternative after the measurement budget is exhausted, so that $\mathcal{X}^I$ is a mapping from the history $\mathcal{D}^N$ to an alternative $\mathcal{X}^I(\mathcal{D}^N)$. Then as a corollary of Lemma \ref{eqv}, we have \cite{powell2012optimal}
$$\max_{\mathcal{X}^I}\mathbb{E}\big[ \text{Pr}\big(y = +1 | \mathcal{X}^I( \mathcal{D}^N)\big)\big]= \mathbb{E}\big[\max_{\bm{x}}\text{Pr}(y = +1 | \bm{x}, \mathcal{D}^N)\big].
$$
In other words, the optimal decision at time $N$ is to go with our final set of beliefs. By the equivalence of using true probabilities and sample estimates when evaluating a policy, while we want to learn the unknown true value $\max_{\bm{x}}\text{Pr}(y=+1|\bm{x})$, we may write our problem's objective as
\begin{equation}\label{obj}
\max_{\pi \in \Pi} \mathbb{E}^{\pi}[ \max_{\bm{x}}\text{Pr}(y = +1 | \bm{x}, \mathcal{D}^N)].
\end{equation}
\subsection{From logistic regression to Markov decision process formulation}\label{LR}
Logistic regression is widely used in machine learning for binary classification \cite{hosmer2004applied}. Given a training set $\mathcal{D}=\{(\bm{x}_i,y_i)\}_{i=1}^n$ with $\bm{x}_i$ a $d$-dimensional vector and $y_i \in \{-1,+1\}$, with the assumption that training labels are generated independently given $\bm{w}$, the likelihood $\text{Pr}(\mathcal{D}|\bm{w})$ is defined as
$
\text{Pr}(\mathcal{D}|\bm{w}) = \prod_{i=1}^n \sigma(y_i\cdot\bm{w}^T\bm{x}_i).$
In the frequentist interpretation, the weight vector $\bm{w}$ is found by maximizing the likelihood of the training data, $\text{Pr}(\mathcal{D}|\bm{w})$. $l_2$-regularization is commonly used to avoid over-fitting, with the estimate of the weight vector $\bm{w}$ given by:
\begin{equation}\label{RLR}
\min_{\bm{w}} \frac{\lambda}{2}\|\bm{w}\|^2+\sum_{i=1}^n\log(1+\exp(-y_i \bm{w}^T\bm{x}_i)).
\end{equation}
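For illustration, the $l_2$-regularized estimate in \eqref{RLR} can be computed with plain gradient descent (a minimal sketch; any convex solver such as BFGS would do, and the step size and iteration count below are arbitrary choices of ours):

```python
import numpy as np

def fit_l2_logistic(X, y, lam, iters=10000, lr=0.05):
    """Minimize (lam/2)||w||^2 + sum_i log(1 + exp(-y_i w^T x_i))
    by plain gradient descent. X is n-by-d, y has entries in {-1, +1}."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        z = y * X.dot(w)
        # gradient: lam*w - sum_i y_i x_i / (1 + exp(y_i w^T x_i))
        g = lam * w - X.T.dot(y / (1.0 + np.exp(z)))
        w -= lr * g
    return w
```

The objective is $\lambda$-strongly convex, so gradient descent with a sufficiently small step size converges geometrically to the unique minimizer.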
\iffalse
\begin{equation*}
\min_{\bm{w}} \sum_{i=1}^n\log(1+\exp(-y_i \bm{w}^T\bm{x}_i)).
\end{equation*}
\fi
\subsubsection{Bayesian setup}\label{BLR}
Exact Bayesian inference for logistic regression is intractable since the evaluation of the posterior distribution comprises a product of logistic sigmoid functions and the integral in the normalization constant is intractable as well. With a Gaussian prior on the weight vector, the Laplace approximation can be obtained by finding the mode of the posterior distribution and then fitting a Gaussian distribution centered at that mode (see Chapter 4.5 of \cite{bishop2006pattern}). Specifically, suppose we begin with a Gaussian prior
$
\text{Pr}(\bm{w})= \mathcal{N}(\bm{w}|\bm{m}, \bm{\Sigma}),
$
and we wish to approximate the posterior
$
\text{Pr}(\bm{w}|\mathcal{D}) \propto \text{Pr}(\mathcal{D}|\bm{w})\text{Pr}(\bm{w}).
$
Define the logarithm of the unnormalized posterior distribution
\begin{eqnarray}\nonumber \label{LPD}
\Psi(\bm{w}|\bm{m},\bm{\Sigma},\mathcal{D})&=&\log \text{Pr}(\mathcal{D}|\bm{w})+
\log\text{Pr}(\bm{w}) \\
&=& -\frac{1}{2}(\bm{w}-\bm{m})^T\bm{\Sigma}^{-1}(\bm{w}-\bm{m})- \sum_{i=1}^n\log(1+\exp(-y_i \bm{w}^T\bm{x}_i)).
\end{eqnarray}
The Laplace approximation is based on a Taylor expansion to $\Psi$ around its MAP (maximum a posteriori) solution $\hat{\bm{w}}= \arg \max_{\bm{w}}\Psi(\bm{w})$, which defines the mean of the Gaussian. The covariance is then given by the Hessian of the negative log posterior evaluated at $\hat{\bm{w}}$, which takes the form
\begin{equation}\label{LPDD}
(\bm{\Sigma}')^{-1}=-\nabla^2 \Psi(\bm{w})|_{\bm{w}=\hat{\bm{w}}} = \bm{\Sigma}^{-1}+\sum_{i=1}^n p_i(1-p_i)\bm{x}_i\bm{x}_i^T,
\end{equation}
where $p_i = \sigma(\hat{\bm{w}}^T\bm{x}_i)$.
The Laplace approximation results in a normal approximation to the posterior
\begin{equation}\label{pos}
\text{Pr}(\bm{w}|\mathcal{D}) \approx \mathcal{N}(\hat{\bm{w}},\bm{\Sigma}').
\end{equation}
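The Laplace approximation above can be sketched as a short Newton iteration (an illustrative Python sketch with our own function names; the Hessian update matches \eqref{LPDD}):

```python
import numpy as np

def laplace_posterior(X, y, m0, Sigma0, iters=50):
    """Laplace approximation N(w_hat, Sigma') to the logistic posterior:
    w_hat is the MAP found by Newton's method; Sigma'^{-1} is the Hessian
    of the negative log posterior at w_hat. X is n-by-d, y in {-1, +1}."""
    P0 = np.linalg.inv(Sigma0)                 # prior precision
    w = m0.copy()
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X.dot(w)))    # sigma(w^T x_i)
        # gradient and Hessian of the negative log posterior -Psi
        g = P0.dot(w - m0) - X.T.dot((y + 1) / 2 - p)
        H = P0 + X.T.dot(X * (p * (1 - p))[:, None])
        w -= np.linalg.solve(H, g)             # Newton step
    return w, np.linalg.inv(H)
```

Here we used that $y_i\sigma(-y_i a) = (y_i+1)/2 - \sigma(a)$ for $y_i\in\{-1,+1\}$, which gives the compact gradient expression.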
By substituting an independent normal prior with $q_j^{-1}$ as the $j$th diagonal element of the diagonal covariance matrix $\bm{\Sigma}$, the Laplace approximation to the posterior distribution of each weight $w_j$ reduces to
$
\text{Pr}(w_j|\mathcal{D}) \approx \mathcal{N}(\hat{w}_j,q_j^{-1}).
$ Note here that if $q_j=\lambda$ and $m_j=0$, the solution of Eq. \eqref{RLR} is the same as the MAP solution of \eqref{LPD}. So $l_2$-regularized logistic regression can be interpreted as a Bayesian model with a Gaussian prior on the weights with standard deviation $1/\sqrt{\lambda}$.
\iffalse
the unnormalized log likelihood $\Psi(\bm{w})$ reduces to
\begin{equation}\label{ind}
\Psi(\bm{w}) = -\frac{1}{2}\sum_{j=1}^d q_i(w_i-m_i)^2 - \sum_{i=1}^n \log(1+\exp(-y_i\bm{w}^T\bm{x}_i)),
\end{equation}
and the diagonal approximation of the Hessian gives the inverse of the $j$th diagonal element of the posterior covariance matrix as
\begin{equation}\label{indC}
q_j'=q_j + \sum_{i=1}^n p_i(1-p_i)x^2_{ij}.
\end{equation}
\fi
\iffalse
\subsection{$l_2$-regularized logistic regression (sparse logistic regression)}
Gaussian prior equals to an $l_2$-regularized logistic regression as stated in the above section. While Bayesian with Laplace prior equals to an $l_2$-regularized logistic regression which yields sparse solutions. The Laplace approximation can also be used to yield a recursive update formula. The details will be filled in later.
\fi
\subsubsection{Recursive Bayesian logistic update}\label{sec:RBLR}
Our state space is the space of all possible predictive distributions for $\bm{w}$. Starting from a Gaussian prior $\mathcal{N}(\bm{w}|\bm{m}^0, \bm{\Sigma}^0)$, after the first $n$ observed data, the Laplace approximated posterior distribution is $\text{Pr}(\bm{w}|\mathcal{D}^n) \approx \mathcal{N}(\bm{w}|\bm{m}^n, \bm{\Sigma}^n)$ according to \eqref{pos}. We formally define the state space $\mathcal{S}$ to be the cross-product of $\mathbb{R}^d$ and the space of positive semidefinite matrices. At each time $n$, our state of knowledge is thus $S^n=(\bm{m}^n, \bm{\Sigma}^n)$.
Since retraining the logistic model from scratch on all the previous data each time a new observation arrives, i.e. updating from $S^n$ to $S^{n+1}$ by recomputing the MAP solution of \eqref{LPD} (even with a diagonal covariance with constant diagonal elements), is clumsy, Bayesian logistic regression can be extended to perform recursive model updates after each training example.
To be more specific, after the first training point, the Laplace approximated posterior is $\mathcal{N}(\bm{w}|\bm{m}^1, \bm{\Sigma}^1)$. This serves as a prior on the weights to update the model when the next training point becomes available. In this recursive way of model updating, previously measured data need not be stored or used for retraining the logistic model. For the rest of this paper, we focus on independent normal priors (with $ \bm{\Sigma}=\lambda^{-1} \bm{I}$, where $ \bm{I}$ is the identity matrix), which is equivalent to $l_2$-regularized logistic regression and also offers greater computational efficiency. All the results can be easily generalized to the correlated normal case. By setting the batch size $n=1$ and $\bm{\Sigma}=\lambda^{-1} \bm{I}$ in Eq. \eqref{LPD} and \eqref{LPDD}, we obtain the recursive Bayesian logistic regression in Algorithm \ref{RBLR}.
\begin{algorithm}\label{RBLR}
\caption{Recursive Bayesian Logistic Regression}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\Input{Regularization parameter $\lambda > 0$}
$m_j=0$, $q_j=\lambda$. (Each weight $w_j$ has an independent prior $\mathcal{N}(m_j, q_j^{-1})$)\\
\For{$t=1$ to $T$}{
~\\
Get a new point $(\bm{x}, y)$.\\
Find $\hat{\bm{w}}$ as the maximizer of \eqref{LPD}: $-\frac{1}{2}\sum_{j=1}^d q_j(w_j-m_j)^2 - \log(1+\exp(-y\bm{w}^T\bm{x})).$\\
$m_j=\hat{w}_j$\\
Update $q_j$ according to \eqref{LPDD}:
$q_j \leftarrow q_j +\sigma(\hat{\bm{w}}^T\bm{x})\big(1-\sigma(\hat{\bm{w}}^T\bm{x})\big)x^2_{j}$.
}
\end{algorithm}
Since $\Psi(\bm{w}|\bm{m},\bm{\Sigma},\mathcal{D})$ is concave in $\bm{w}$, we can tap a wide range of convex optimization algorithms (applied to $-\Psi$), including gradient search, conjugate gradient and the BFGS method (see \cite{wright1999numerical} for details). Yet when setting $n=1$ and $\bm{\Sigma}=\lambda^{-1} \bm{I}$ in Eq. \eqref{LPD}, a stable and efficient algorithm for maximizing $\Psi(\bm{w})=-\frac{1}{2}\sum_{j=1}^d q_j(w_j-m_j)^2 - \log(1+\exp(-y\bm{w}^T\bm{x}))$ can be obtained as follows. First we calculate $$\frac{\partial \Psi}{\partial w_i}=-q_i(w_i-m_i)+\frac{yx_i\exp(-y\mathbf{w}^T\mathbf{x})}{1+\exp(-y\mathbf{w}^T\mathbf{x})}.$$ By setting
$\partial \Psi/\partial w_i=0$ for all $i$, then by denoting $(1+\exp(y\mathbf{w}^T\mathbf{x}))^{-1}$ as $p$, we have
$$q_i(w_i-m_i)=ypx_i,~~~~i=1,2,\dots,d,$$ and thus
$w_i=m_i+yp\frac{x_i}{q_i}.$
Plugging in these equalities into the definition of $p$, we have
$$\frac{1}{p}=1+\exp\Big{(}y\sum_{i=1}^{d}(m_i+yp\frac{x_i}{q_i})x_i\Big{)}
=1+\exp(y\mathbf{m}^T\mathbf{x})\exp \Big{(}y^2p\sum_{i=1}^d\frac{x_i^2}{q_i}\Big{)}.$$
The left-hand side decreases from infinity to 1 and the right-hand side increases from 1 as $p$ goes from 0 to 1; therefore the solution exists and is unique in $(0,1)$. By reducing a $d$-dimensional problem to a one-dimensional one, the simple bisection method suffices.
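The recursive update above, with the one-dimensional bisection for $p$, can be sketched in code as follows (a minimal illustration; the function name and data layout are ours, not from any existing library):

```python
import math

def rblr_update(m, q, x, y):
    """One recursive Bayesian logistic regression step.

    m, q : current means and precisions of the independent Gaussian prior;
    x    : features of the new point; y in {-1, +1}.
    Solves 1/p = 1 + exp(y m^T x + p * sum_i x_i^2/q_i) by bisection,
    recovers the MAP weights w_i = m_i + y p x_i / q_i, then updates q.
    """
    s = sum(xi * xi / qi for xi, qi in zip(x, q))   # sum_i x_i^2 / q_i
    mx = sum(mi * xi for mi, xi in zip(m, x))       # m^T x
    lo, hi = 0.0, 1.0
    for _ in range(100):                            # 1/p - RHS(p) is decreasing in p
        p = 0.5 * (lo + hi)
        if 1.0 / p > 1.0 + math.exp(y * mx + p * s):
            lo = p                                  # root lies to the right
        else:
            hi = p
    p = 0.5 * (lo + hi)
    m_new = [mi + y * p * xi / qi for mi, xi, qi in zip(m, x, q)]
    # p(1-p) equals sigma(w^T x)(1 - sigma(w^T x)) at the MAP point
    q_new = [qi + p * (1.0 - p) * xi * xi for qi, xi in zip(q, x)]
    return m_new, q_new
```

Note that only $(\bm{m},\bm{q})$ is carried between steps; no past data point is stored.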
\subsection{Markov decision process formulation}
Our learning problem is a dynamic program that can be formulated as a Markov decision process. By using diagonal covariance matrices, the state space degenerates to $\mathcal{S} := \mathbb{R}^d \times (0,\infty] ^d$ and it consists of points $s=(\bm{m},\bm{q})$, where $m_i, q_i$ are the mean and the precision of a normal distribution. We next define the transition function based on the recursive Bayesian logistic regression.
\begin{definition}The transition function $T$: $\mathcal{S}\times\mathcal{X}\times \{-1, 1\} \rightarrow \mathcal{S}$ is defined as
$$T\Big((\bm{m},\bm{q}), \bm{x},y\Big) = \Big(\hat{\bm{w}}(\bm{m}),\,\bm{q}+ p(1-p)\textbf{diag}(\bm{x}\bm{x}^T)\Big),
$$
where $\hat{\bm{w}}(\bm{m})=\arg \max_{\bm{w}}\Psi(\bm{w}|\bm{m},\bm{q})$, $p = \sigma(\hat{\bm{w}}^T\bm{x})$ and $\textbf{diag}(\bm{x}\bm{x}^T)$ is a column vector containing the diagonal elements of $\bm{x}\bm{x}^T$, so that $S^{n+1}=T(S^n, \bm{x},y)$.
\end{definition}
In a dynamic program, the value function is defined as the value of the optimal policy given a particular state
$S^n$ at time $n$, and may also be determined recursively through Bellman's equation. If the value function can be computed efficiently, the optimal policy may then also be computed from it. The value function $V^n:\mathcal{S} \mapsto \mathbb{R}$ at time $n=1,\dots,N+1$ is given by \eqref{obj} as
$$
V^n(s) := \max_{\pi} \mathbb{E}^{\pi}[ \max_{\bm{x}}\text{Pr}(y = +1 | \bm{x}, \mathcal{F}^N)|S^n=s].
$$
By noting that $\max_{\bm{x}}\text{Pr}(y = +1 | \bm{x},\mathcal{F}^N)$ is $\mathcal{F}^N$-measurable and thus the expectation does not depend on the policy $\pi$, the terminal value function $V^{N+1}$ can be computed directly as
$$
V^{N+1}(s)=\max_{\bm{x}}\text{Pr}(y = +1 | \bm{x},s), \forall s \in \mathcal{S}.
$$
The value function at times $n=1,\dots,N$, $V^n$, is given recursively by
$$
V^n(s)=\max_{\bm{x}}\mathbb{E}[V^{n+1}(T(s,\bm{x},y))], \quad \forall s \in \mathcal{S}.
$$
\section{Knowledge gradient policy for logistic belief model}
Since the ``curse of dimensionality''
makes direct computation of the value function intractable, computationally efficient approximate
policies need to be considered. A computationally attractive policy for ranking and selection problems is the knowledge
gradient (KG), which is a stationary policy that at the $n$th iteration chooses its ($n+1$)th measurement to maximize the single-period expected increase in value \cite{frazier2008knowledge}. It enjoys some nice properties, including myopic and asymptotic optimality. After its first appearance, KG has been extended to various belief models (e.g. \cite{mes2011hierarchical,negoescu2011knowledge,ryzhov2012knowledge,wang2015nested}) for offline learning, and has an immediate extension to online learning problems \cite{ryzhov2012knowledge}. Yet there is no KG variant designed for binary classification with parametric models, primarily because of the complexity of dealing with nonlinear belief models. In what follows, we extend the KG policy to the setting of classification problems under a logistic belief model.
\begin{definition}The knowledge gradient of measuring an alternative $\bm{x}$ while in state $s$ is
\begin{equation} \label{KG}
\nu_{\bm{x}}^{\text{KG}}(s) := \mathbb{E}[V^{N+1}(T(s,\bm{x},y))-V^{N+1}(s)].
\end{equation}
\end{definition}
Since the label for alternative $\bm{x}$ is not known at the time of selection, the expectation is computed conditional on the current model specified by $s=(\bm{m},\bm{q})$. Specifically,
given a state $s=(\bm{m},\bm{q})$, the label $y$ for an alternative $\bm{x}$ follows a Bernoulli distribution with predictive probability
\begin{eqnarray}\label{predictD}
\text{Pr}(y=+1|\bm{x},s) =\int \text{Pr}(y=+1|\bm{x},\bm{w})\text{Pr}(\bm{w}|s)\text{d}\bm{w}
= \int \sigma(\bm{w}^T\bm{x})p(\bm{w}|s)\text{d}\bm{w}.
\end{eqnarray}
We have
\begin{eqnarray*}
\mathbb{E}[V^{N+1}(T(s,\bm{x},y))] &=& \text{Pr}(y=+1|\bm{x},s)V^{N+1}(T(s, \bm{x},+1))+ \text{Pr}(y=-1|\bm{x},s)V^{N+1}(T(s, \bm{x},-1))\\
&=&\text{Pr}(y=+1|\bm{x},s)\cdot \max_{\bm{x}'}\text{Pr}(y = +1 | \bm{x}',T(s,\bm{x},+1))\\
&&+\text{Pr}(y=-1|\bm{x},s)\cdot \max_{\bm{x}'}\text{Pr}(y = +1 | \bm{x}',T(s,\bm{x},-1)).
\end{eqnarray*}
The knowledge gradient policy suggests at each time $n$ selecting the alternative that maximizes $\nu_{\bm{x}}^{\text{KG}}(s^{n-1})$ where ties are broken randomly. The same optimization procedure as in recursive Bayesian logistic regression needs to be conducted for calculating the transition functions $T(s,\bm{x},\cdot)$.
The predictive distribution $ \int \sigma(\bm{w}^T\bm{x})p(\bm{w}|s)\text{d}\bm{w}$ cannot be evaluated exactly when the sigmoid $\sigma$ is the logistic function. An approximation procedure is deployed as follows.
Denoting $a=\bm{w}^T\bm{x}$ and $\delta(\cdot)$ as the Dirac delta function, we have $\sigma(\bm{w}^T\bm{x})=\int \delta(a-\bm{w}^T\bm{x})\sigma(a)\text{d}a.$
Hence
$$\int \sigma(\bm{w}^T\bm{x})p(\bm{w}|s) \text{d}\bm{w}=\int \sigma(a)p(a)\text{d}a,$$
where
$p(a)=\int \delta(a-\bm{w}^T\bm{x})p(\bm{w}|s) \text{d}\bm{w}.$
Since $p(\bm{w}|s) = \mathcal{N}(\bm{m},\bm{q}^{-1})$ is Gaussian, the marginal distribution $p(a)$ is also Gaussian. We can evaluate $p(a)$ by calculating the mean and covariance of this distribution \cite{bishop2006pattern}. We have
\begin{eqnarray*}
\mu_a&=&\mathbb{E}[a]=\int p(a)a \text{ d}a = \int p(\bm{w}|s)\bm{w}^T\bm{x} \text{ d}\bm{w}=\bm{m}^T\bm{x},\\
\sigma_a^2&=& \int p(\bm{w}|s) \big((\bm{w}^T\bm{x})^2-(\bm{m}^T\bm{x})^2 \big) \text{ d}\bm{w}=\sum_{j=1}^d q_j^{-1} x_j^2.
\end{eqnarray*} Thus $\int \sigma(\bm{w}^T\bm{x})p(\bm{w}|s) \text{d}\bm{w}=\int \sigma(a)p(a)\text{d}a=\int \sigma(a) \mathcal{N}(a|\mu_a, \sigma^2_a) \text{d}a.$
For a logistic function, in order to obtain the best approximation \cite{barber1998ensemble,spiegelhalter1990sequential}, we approximate $\sigma(a)$ by $\Phi(\alpha a)$ with $\alpha^2=\pi/8$. Denoting $\kappa(\sigma^2)=(1+\pi \sigma^2/8)^{-1/2}$, we have
$$
\text{Pr}(y=+1|\bm{x},s)=\int \sigma(\bm{w}^T\bm{x})p(\bm{w}|s) \text{d}\bm{w} \approx \sigma(\kappa(\sigma_a^2)\mu_a).
$$
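The moment-matching approximation above can be illustrated in code as follows (a toy sketch checked against direct one-dimensional quadrature; the function names are ours):

```python
import math

def predictive_prob(m, q, x):
    """Pr(y=+1 | x, s) ~ sigma(kappa(var) * mu) with kappa(v) = (1 + pi*v/8)^(-1/2)."""
    mu = sum(mi * xi for mi, xi in zip(m, x))          # mu_a = m^T x
    var = sum(xi * xi / qi for xi, qi in zip(x, q))    # sigma_a^2 = sum_j x_j^2 / q_j
    kappa = 1.0 / math.sqrt(1.0 + math.pi * var / 8.0)
    return 1.0 / (1.0 + math.exp(-kappa * mu))

def predictive_prob_quad(m, q, x, n=4000):
    """Reference value: midpoint quadrature of  int sigma(a) N(a | mu_a, sigma_a^2) da."""
    mu = sum(mi * xi for mi, xi in zip(m, x))
    sd = math.sqrt(sum(xi * xi / qi for xi, qi in zip(x, q)))
    lo, da = mu - 10.0 * sd, 20.0 * sd / n
    total = 0.0
    for i in range(n):
        a = lo + (i + 0.5) * da
        dens = math.exp(-0.5 * ((a - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))
        total += da * dens / (1.0 + math.exp(-a))     # sigma(a) * N(a) * da
    return total
```

In practice the probit-based closed form and the quadrature agree to a few digits, while the former costs only one sigmoid evaluation.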
We summarize the decision rule of the knowledge gradient policy at each iteration in Algorithm \ref{KG}.
\begin{algorithm}\label{KG}
\caption{Knowledge Gradient Policy for Logistic Belief Model}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\Input{$m_j$, $q_j$ (Each weight $w_j$ has an independent prior $\mathcal{N}(m_j, q_j^{-1})$)}
\For{$\bm{x}$ in $\mathcal{X}$}{
~\\
Let $\Psi(\bm{w},y)=-\frac{1}{2}\sum_{j=1}^d q_j(w_j-m_j)^2 - \log(1+\exp(-y\bm{w}^T\bm{x}))$\\
$\hat{\bm{w}}_{+}=\arg \max_{\bm{w}}\Psi(\bm{w},+1)$ \\
$\hat{\bm{w}}_{-}=\arg \max_{\bm{w}}\Psi(\bm{w},-1)$ \\
$\mu=\bm{m}^T\bm{x}$\\
$\sigma^2=\sum_{j=1}^d q_j^{-1} x_j^2$\\
Define $\mu_{+}(\bm{x}')=\hat{\bm{w}}_{+}^T\bm{x}'$, $\mu_{-}(\bm{x}')=\hat{\bm{w}}_{-}^T\bm{x}'$\\
Define $\sigma^2_{\pm}(\bm{x}')= \sum_{j=1}^d\Big( q_j+\sigma(\hat{\bm{w}}^T_{\pm}\bm{x})\big(1-\sigma(\hat{\bm{w}}_{\pm}^T\bm{x})\big)x^2_{j} \Big)^{-1} (x'_j)^2$\\
$\tilde{\nu}_{\bm{x}}=\sigma(\kappa(\sigma^2)\mu)\cdot \max_{\bm{x'}}\sigma \big(\kappa(\sigma_+^2(\bm{x}'))\mu_{+}(\bm{x'})\big)+\big(1-\sigma(\kappa(\sigma^2)\mu)\big)\cdot \max_{\bm{x'}}\sigma \big(\kappa(\sigma_-^2(\bm{x}'))\mu_{-}(\bm{x'})\big)$\\
}
$\bm{x}^{\text{KG}} = \arg \max_{\bm{x}}\tilde{\nu}_{\bm{x}}$
\end{algorithm}
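A self-contained sketch of the decision rule above, for a small candidate set, might look as follows (we reuse the exact one-point MAP solution via bisection; all names are hypothetical and the inner maximization scans the same candidate set):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def one_point_map(m, q, x, y):
    """argmax_w -0.5*sum_j q_j(w_j-m_j)^2 - log(1+exp(-y w^T x)), via bisection on p."""
    s = sum(xi * xi / qi for xi, qi in zip(x, q))
    mx = sum(mi * xi for mi, xi in zip(m, x))
    lo, hi = 0.0, 1.0
    for _ in range(80):
        p = 0.5 * (lo + hi)
        if 1.0 / p > 1.0 + math.exp(y * mx + p * s):
            lo = p
        else:
            hi = p
    return [mi + y * 0.5 * (lo + hi) * xi / qi for mi, xi, qi in zip(m, x, q)]

def approx_prob(m, q, x):
    """Probit-style approximation of the predictive probability Pr(y=+1 | x)."""
    mu = sum(mi * xi for mi, xi in zip(m, x))
    var = sum(xi * xi / qi for xi, qi in zip(x, q))
    return sigmoid(mu / math.sqrt(1.0 + math.pi * var / 8.0))

def kg_choice(m, q, X):
    """Return (index of the KG choice, list of scores nu_x) over candidate rows X."""
    scores = []
    for x in X:
        p_plus = approx_prob(m, q, x)
        expected = 0.0
        for y, weight in ((+1, p_plus), (-1, 1.0 - p_plus)):
            w_hat = one_point_map(m, q, x, y)                    # hypothetical posterior mean
            sig = sigmoid(sum(wi * xi for wi, xi in zip(w_hat, x)))
            q_new = [qi + sig * (1.0 - sig) * xi * xi for qi, xi in zip(q, x)]
            expected += weight * max(approx_prob(w_hat, q_new, xp) for xp in X)
        scores.append(expected)
    best = max(range(len(X)), key=lambda i: scores[i])
    return best, scores
```

Each candidate thus requires two hypothetical posterior updates (one per label) plus a scan over the alternatives, matching the structure of the algorithm above.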
We close this section by presenting the following finite-time bound on the MSE
of the estimated weight with the proof in the supplement. Without loss of generality, we assume $\|\bm{x}\|_2 \le 1$, $\forall{\bm{x}\in \mathcal{X}}$.
\begin{theorem}
Let $\mathcal{D}^n$ be the first $n$ measurements produced by the KG policy and $\bm{w}^n=\arg\max_{\bm{w}}\Psi(\bm{w}|\bm{m}^0, \bm{\Sigma}^0)$ with the prior distribution $\text{Pr}(\bm{w}^*)= \mathcal{N}(\bm{w}^*|\bm{m}^0, \bm{\Sigma}^0).$ Then with probability $P_d(M)$, the expected error of $\bm{w}^n$ is bounded as
$$\mathbb{E}_{ \bm{y}\sim \mathcal{B}(\mathcal{D}^n,\bm{w}^*)}||\bm{w}^n-\bm{w}^*||_2\le\frac{C_{min}+\lambda_{min}\big{(}\bm{\Sigma}^{-1}\big{)}}{2},$$
where the distribution $\mathcal{B}(\mathcal{D}^n,\bm{w}^*)$ is the vector Bernoulli distribution
$\text{Pr}(y^i=+1)=\sigma(\bm{w}^{*T}\bm{x}^i)$,
$P_d(M)$ is the probability that a $d$-dimensional standard normal random variable falls in the ball with radius $M =\frac{1}{8}\frac{\lambda_{min}^2}{\sqrt{\lambda_{max}}}$ and
$C_{min}= \lambda_{min}
\Big{(}\frac{1}{n}\sum_{i=1}^n \sigma(\bm{w}^{*T}\bm{x}^i)\big{(}1-\sigma(\bm{w}^{*T}\bm{x}^i)\big{)}\bm{x}^i(\bm{x}^i)^T \Big{)}.$
\end{theorem}
In the special case where $\bm{\Sigma}^0=\lambda^{-1}\bm{I}$, we have $\lambda_{max}=\lambda_{min}=\lambda$ and $M=\frac{\lambda^{3/2}}{8}$.
The bound holds with higher probability $P_d(M)$ with larger $\lambda$. This is natural since a larger $\lambda$ represents a normal distribution with narrower bandwidth, resulting in a $\bm{w}^*$ more concentrated around $\bm{m}^0$.
\section{Experimental results}
We evaluate the proposed method on both synthetic datasets and the UCI machine learning repository \cite{Lichman:2013}, which includes classification problems drawn from settings including fertility, glass identification, blood transfusion, survival, breast cancer (wpbc), planning relax and climate model failure. We first analyze the behavior of the KG policy and then compare it to state-of-the-art active learning algorithms. On synthetic datasets, we randomly generate a set of $M$ $d$-dimensional alternatives $\bm{x}$ from $[-3,3]^d$. We conduct experiments in a Bayesian fashion where at each run we sample a true $(d+1)$-dimensional weight vector $\bm{w}^*$ from the prior distribution $w^*_i \sim \mathcal{N}(0, \lambda^{-1})$. The $+1$ label for each alternative $\bm{x}$ is generated with probability $\sigma(w^*_0+\sum_{j=1}^d w^*_jx_j)$. For each UCI dataset, we use all the data points as the set of alternatives with their original attributes. We then simulate their labels using a weight vector $\bm{w}^*$. This weight vector could have been chosen arbitrarily, but it was in fact a perturbed version of the weight vector trained through logistic regression on the original dataset. All the policies start with the same one randomly selected example per class.
\subsection{Behavior of the KG policy}
To better understand the behavior of the KG policy, Fig. \ref{2d} shows the snapshot of the KG policy at each iteration on a $2$-dimensional synthetic dataset ($M=200$) in one run. The scatter plots show the KG values with both the color and the size of the point reflecting the KG value of the corresponding alternative. The star denotes the true alternative with the largest response. The red square is the alternative with the largest KG value. The pink circle is the implementation decision that maximizes the response under current estimation of $\bm{w}^*$ if the budget is exhausted after that iteration.
\begin{figure}[htp]
\centering
\hspace*{-0.4cm}
\begin{tabular}{cccc}
\includegraphics[width=0.25\textwidth]{h1.pdf}
\includegraphics[width=0.25\textwidth]{h2.pdf}
\includegraphics[width=0.25\textwidth]{h3.pdf}
\includegraphics[width=0.25\textwidth]{h4.pdf}
\end{tabular}
\caption{The scatter plots illustrate the KG values at 1-4 iterations from left to right with both the color and the size reflecting the magnitude. The star, the red square and pink circle indicate the true best alternative, the alternative to be selected and the implementation decision, respectively. \label{2d}}
\end{figure}
\iffalse
\begin{figure}[htp]
\centering
\begin{tabular}{ccc}
\includegraphics[width=0.333\textwidth]{h1.pdf}
\includegraphics[width=0.333\textwidth]{h2.pdf}
\includegraphics[width=0.333\textwidth]{h3.pdf} \\
\includegraphics[width=0.333\textwidth]{h4.pdf}
\includegraphics[width=0.333\textwidth]{h5.pdf}
\includegraphics[width=0.333\textwidth]{h6.pdf}
\end{tabular}
\caption{Snapshots on a $2$d dataset. The scatter plots illustrate the KG values at 1-6 iterations from left to right, bottom to down with both the color and the size reflecting the magnitude. The star, the red square and pink circle indicate the true best alternative, the alternative to be selected and the implementation decision, respectively. \label{2d}}
\end{figure}
\fi
It can be seen from the figure that the KG policy finds the true best alternative after only three measurements, reaching out to different alternatives to improve its estimates. We can infer from Fig. \ref{2d} that the KG policy tends to choose alternatives near the boundary of the region.
This criterion is natural since in order to find the true maximum, we need to get enough information about $\bm{w}^*$ and
estimate well the probability of points near the true maximum, which appears near the boundary. On the other hand, in a logistic model with labeling noise, a data point $\bm{x}$ with small $\bm{x}^T\bm{x}$ inherently brings little information, as pointed out in \cite{zhang2000value}. For an extreme example, when $\bm{x}=\bm{0}$ the label is completely random for any $\bm{w}$ since $\text{Pr}(y=+1|\bm{w},\bm{0}) \equiv 0.5$. This is an issue when perfect classification is not achievable. So it is essential to label a data point with larger $\bm{x}^T\bm{x}$, which has the most potential to improve the model's confidence non-randomly.
\begin{wrapfigure}{r}{0.6\textwidth}
\begin{center}
\includegraphics[width=0.6\textwidth]{h_Diff_6.pdf}
\end{center}
\caption{Absolute error. \label{Abs}}
\end{wrapfigure}
Also depicted in Fig. \ref{Abs} is the absolute class distribution error of each alternative, which is the absolute difference between the predictive probability of class $+1$ under the current estimate and the true probability after $6$ iterations. We see that the probability at the true maximum is well approximated, while moderate errors in
the estimates are located away from this region of interest. We also provide the analysis on a 3-dimensional dataset in the supplement.
\subsection{Comparison with other policies}
Recall that our goal is to maximize the expected response of the implementation decision. We define the Opportunity Cost (OC) metric as the expected response of the implementation decision $\bm{x}^{N+1}:=\arg\max_{\bm{x}} \text{Pr}(y=+1|\bm{x},\bm{w}^N)$ compared to the true maximal response under weight $\bm{w}^*$:
$$\text{OC}:=\max_{\bm{x}\in \mathcal{X}}\text{Pr}(y=+1|\bm{x},\bm{w}^*)-\text{Pr}(y=+1|\bm{x}^{N+1},\bm{w}^*).$$
Note that the opportunity cost is always non-negative and the smaller the better.
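For concreteness, the metric can be computed as below ($\sigma$ denotes the logistic sigmoid; the function names are ours):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def opportunity_cost(X, w_star, w_hat):
    """OC = max_x Pr(+1|x,w*) - Pr(+1|x_impl,w*), where the implementation
    decision x_impl = argmax_x Pr(+1|x,w_hat) = argmax_x w_hat^T x
    (sigma is monotone, so maximizing the probability maximizes the score)."""
    dot = lambda w, x: sum(wi * xi for wi, xi in zip(w, x))
    best_true = max(sigmoid(dot(w_star, x)) for x in X)
    x_impl = max(X, key=lambda x: dot(w_hat, x))
    return best_true - sigmoid(dot(w_star, x_impl))
```

With a perfect estimate ($\hat{\bm{w}}=\bm{w}^*$) the opportunity cost is exactly zero; any other estimate can only increase it.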
To make a fair comparison, on each run, all the time-$N$ labels of all the alternatives are randomly pre-generated according to the weight vector $\bm{w}^*$ and shared across all the competing policies.
Since there is no policy directly solving the same sequential response-maximizing problem under a logistic model, considering the relationship with active learning as described in Section 1, we compare with the following state-of-the-art active learning policies compatible with logistic regression:
Random sampling (Random), a myopic method that selects the most uncertain instance each step (MostUncertain), Fisher information (Fisher) \cite{hoi2006batch}, the batch-mode active learning via error bound minimization (Logistic Bound) \cite{gu2014batch} and discriminative batch-mode active learning (Disc) \cite{guo2008discriminative} with batch size equal to 1. All the state transitions are based on recursive Bayesian logistic regression while different policies provide different rules for labeling decisions at each iteration. The experimental results are shown in Fig. \ref{33}.
In all the figures, the x-axis denotes the number of measured alternatives and the y-axis represents the averaged opportunity cost over 100 runs.
\begin{figure}[htp!]
\centering
\begin{tabular}{ccc}
\subfigure[fertility]{
\includegraphics[width=0.29\textwidth]{Fertility.pdf} }
\subfigure[glass identification]{
\includegraphics[width=0.29\textwidth]{GlassIdentification.pdf} }
\subfigure[blood transfusion]{
\includegraphics[width=0.29\textwidth]{BloodTransfusion.pdf}
} \\
\subfigure[survival]{
\includegraphics[width=0.29\textwidth]{hrberman.pdf} }
\subfigure[breast cancer (wpbc)]{
\includegraphics[width=0.29\textwidth]{wpbc.pdf} }
\subfigure[planning relax ]{
\includegraphics[width=0.29\textwidth]{plrx.pdf}
} \\
\subfigure[climate]{
\includegraphics[width=0.29\textwidth]{pop.pdf} }
\subfigure[Synthetic data, $d=10$]{
\includegraphics[width=0.29\textwidth]{OC3.pdf} }
\subfigure[Synthetic data, $d=15$]{
\includegraphics[width=0.29\textwidth]{OC4.pdf}
} \\
\end{tabular}
\caption{Opportunity cost on UCI and synthetic datasets. \label{33}}
\end{figure}
It is demonstrated in Fig. \ref{33} that KG outperforms the other policies significantly in most cases, especially in early iterations. MostUncertain, Fisher and Logistic Bound perform well on some datasets and badly on others. Disc and Random yield relatively stable and satisfactory performance. A possible explanation is that the goal of active learning is to learn a classifier which accurately predicts the labels of new examples, so their criteria are not directly related to maximizing the response aside from the intent to learn the prediction. After enough iterations, when active learning methods presumably have the ability to achieve a good estimator of $\bm{w}^*$, their performance will be enhanced.
However, in the case when an experiment is expensive and only a small budget is allowed, the KG policy, which is designed specifically to maximize the response, is preferred.
\section{Conclusion}
In this paper, we consider binary classification problems where we have to run expensive experiments, forcing us to learn the most from each one. The goal is to learn the classification model as quickly as possible to identify the alternative with the highest response. We develop a knowledge gradient policy using a logistic regression belief model, together with an approximation method to overcome the computational challenges in finding the knowledge gradient. We provide a finite-time analysis of the estimation error, and report the results of a series of experiments that demonstrate its efficiency.
A quantum field theory should provide two items: the Hilbert space of the physical states and the (perturbative) expression of the scattering matrix. In perturbation theory the Hilbert space is generated from the vacuum by some set of free fields, i.e. it is a Fock space. In theories describing higher spin particles one considers a larger Hilbert space of physical and unphysical degrees of freedom and gives a selection rule for the physical states; this seems to be the only way of saving unitarity and renormalizability, in the sense of Bogoliubov. In this case one should check that the interaction Lagrangian (i.e. the first order of the $S$-matrix) leaves the physical states invariant. If the preceding picture is available in all detail then one can proceed very easily to explicit computations of some scattering process. Other constructions of a quantum field theory, such as those based on functional integration, are incomplete in our opinion if they are not translated into the operatorial language in such a way that the consistency checks can be easily done.
The construction of the QCD Lagrangian in the causal approach goes as follows
\cite{Gr1}, \cite{Sc}. The Hilbert space of the massless vector field
$
v_{\mu}
$
is enlarged to a bigger Hilbert space
${\cal H}$
including two ghost fields
$u,~\tilde{u}$
which are Fermi scalars of null mass; in
${\cal H}$
we can give a Hermitian structure such that we have
\begin{equation}
v_{\mu}^{\dagger} = v_{\mu} \qquad u^{\dagger} = u, \qquad \tilde{u}^{\dagger} = - \tilde{u}.
\label{hermite-1}
\end{equation}
Then one introduces the {\it gauge charge} $Q$ according to:
\begin{eqnarray}
Q \Omega = 0, \qquad Q^{\dagger} = Q,
\nonumber \\
~[ Q, v_{\mu} ] = i \partial_{\mu}u,
\nonumber \\
~\{ Q, u \} = 0, \qquad
\{ Q, \tilde{u} \} = - i~\partial^{\mu}v_{\mu};
\label{gh-charge}
\end{eqnarray}
here
$
\Omega \in {\cal H}
$
is the vacuum state. Because
$
Q^{2} = 0
$
the physical Hilbert space is given by
$
{\cal H}_{phys} = Ker(Q)/Im(Q).
$
The gauge charge is compatible with the following causal (anti)commutation
relation:
\begin{equation}
[ v_{\mu}(x), v_{\nu}(y) ] = i~~g_{\mu\nu}~D_{0}(x-y)
\quad
~\{ u(x), \tilde{u}^{\dagger}(y) \} = - i~D_{0}(x-y)
\label{ccr-vector}
\end{equation}
and the other causal (anti)commutators are null; here
$
D_{m}(x-y)
$
is Pauli-Jordan causal distribution of mass
$m \geq 0$.
In fact, the first relation together with the definition of the gauge charge, determines uniquely the second relation as it follows from the Jacobi identity
\begin{eqnarray}
[ v_{\mu}(x), \{ \tilde{u}(y), Q \} ] + \{ \tilde{u}(y), [ Q, v_{\mu}(x) ] \}
= \{ Q, [ v_{\mu}(x), \tilde{u}(y) ] \} = 0.
\label{YM+CCR}
\end{eqnarray}
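For completeness, let us check the nilpotency $Q^{2} = 0$ asserted above on the generators; using the graded Jacobi identities and (\ref{gh-charge}) we get
\begin{eqnarray}
[ Q^{2}, v_{\mu} ] = \{ Q, [ Q, v_{\mu} ] \} = i~\{ Q, \partial_{\mu}u \} = 0
\nonumber \\
~[ Q^{2}, u ] = [ Q, \{ Q, u \} ] = 0
\nonumber \\
~[ Q^{2}, \tilde{u} ] = [ Q, \{ Q, \tilde{u} \} ] = - i~[ Q, \partial^{\mu}v_{\mu} ]
= \partial^{\mu}\partial_{\mu}u = 0
\nonumber
\end{eqnarray}
where in the last line we have used the wave equation for the massless ghost $u$; together with $Q \Omega = 0$ this gives $Q^{2} = 0$ on the whole Fock space.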
We can then assume that all the fields
$v_{\mu}, u, \tilde{u}$
have the canonical dimension equal to $1$ so the gauge charge raises the
canonical dimension by $1$. It is useful to convince the reader that the gauge structure above gives the right physical Hilbert space. We do this for the one-particle Hilbert space. The generic form of a state
$
\Psi \in {\cal H}^{(1)} \subset {\cal H}
$
from the one-particle Hilbert subspace is
\begin{equation}
\Psi = \left[ \int f_{\mu}(x) v^{\mu}(x) + \int g_{1}(x) u(x) + \int g_{2}(x) \tilde{u}(x) \right] \Omega
\end{equation}
with test functions
$
f_{\mu}, g_{1}, g_{2}
$
verifying the wave equation. We impose the condition
$
\Psi \in Ker(Q) \quad \Longleftrightarrow \quad Q\Psi = 0;
$
we obtain
$
\partial^{\mu}f_{\mu} = 0 \qquad g_{2} = 0
$
i.e. the generic element
$
\Psi \in {\cal H}^{(1)} \cap Ker(Q)
$
is
\begin{equation}
\Psi = \left[ \int f_{\mu}(x) v^{\mu}(x) + \int g(x) u(x) \right] \Omega
\label{kerQ}
\end{equation}
with $g$ arbitrary and
$
f_{\mu}
$
constrained by the transversality condition
$
\partial^{\mu}f_{\mu} = 0;
$
so the elements of
$
{\cal H}^{(1)} \cap Ker(Q)
$
are in one-one correspondence with couples of test functions
$
(f_{\mu}, g)
$
with the transversality condition on the first entry. Now, a generic element
$
\Psi^{\prime} \in {\cal H}^{(1)} \cap Im(Q)
$
has the form
\begin{equation}
\Psi^{\prime} = Q\Phi = \left[ - \int \partial^{\mu}f^{\prime}_{\mu}(x) u(x)
+ \int \partial_{\mu}g^{\prime}(x) v^{\mu}(x) \right] \Omega
\label{imQ}
\end{equation}
so if
$
\Psi \in {\cal H}^{(1)} \cap Ker(Q)
$
is indexed by
$
(f_{\mu}, g)
$
then
$
\Psi + \Psi^{\prime}
$
is indexed by
$
(f_{\mu} + \partial_{\mu}g^{\prime}, g - \partial^{\mu}f^{\prime}_{\mu}).
$
If we take
$
f^{\prime}_{\mu}
$
conveniently we can make
$
g = 0.
$
We introduce the equivalence relation
$
f_{\mu}^{(1)} \sim f_{\mu}^{(2)} \quad \Longleftrightarrow
f_{\mu}^{(1)} - f_{\mu}^{(2)} = \partial_{\mu}g^{\prime}
$
and it follows that the equivalence classes from
$
Ker(Q)/Im(Q)
$
are indexed by equivalence classes of wave functions
$
[f_{\mu}];
$
we have obtained the usual one-particle Hilbert space for the photon. The preceding argument can be generalized to the multi-particle Hilbert space \cite{Gr0}; the idea comes from Hodge theory and amounts to finding a homotopy operator
$
\tilde{Q}
$
such that the spectrum of the ``Laplace'' operator
$
\{Q,\tilde{Q}\}
$
can be easily determined.
By definition quantum chromodynamics assumes that we have $N$ copies
$
v^{\mu}_{j},~u_{j},~\tilde{u}_{j}\quad j = 1,\dots,N
$
verifying the preceding algebra for any
$j= 1,\dots,N$.
The interaction Lagrangian
$t(x)$
is some Wick polynomial acting in the total Hilbert space
${\cal H}$
and verifying the conditions: (a) canonical dimension
$
\omega(t) = 4;
$
(b) null ghost number
$
gh(t) = 0
$
(where by definition we have
$
gh(v^{\mu}_{j}) = 0, \quad gh(u_{j}) = 1 \quad gh(\tilde{u}_{j}) = - 1
$
and the ghost number is supposed to be additive);
(c) Lorentz covariant; (d) gauge invariance in the sense:
\begin{eqnarray}
[Q, t(x)]= i\partial_{\mu} t^{\mu}(x)
\label{gauge}
\end{eqnarray}
for some Wick polynomials
$t^{\mu}$
of canonical dimension
$
\omega(t^{\mu}) = 4
$
and ghost number
$
gh(t^{\mu}) = 1.
$
The gauge invariance condition guarantees that, after spatial integration the
interaction Lagrangian
$t(x)$
factorizes to the physical Hilbert space
$Ker(Q)/Im(Q)$
in the adiabatic limit, i.e. after integration over $x$; the condition
(\ref{gauge}) is equivalent to the usual condition of (free) current
conservation. Expressions of the type
\begin{equation}
d_{Q}b + \partial_{\mu}t^{\mu}
\end{equation}
with
\begin{equation}
\omega(b) = \omega(t^{\mu}) = 3 \qquad gh(b) = - 1 \quad gh(t^{\mu}) = 0
\end{equation}
are called {\it trivial Lagrangians} because they induce a null interaction after space integration (i.e. the adiabatic limit) on the physical Hilbert space. One can prove that the condition (\ref{gauge}) drastically restricts the possible form of $t$, i.e. every such expression is, up to a trivial Lagrangian, equivalent to
\begin{equation}
t = f_{jkl} ( :v_{j}^{\mu} v_{k}^{\nu} \partial_{\nu}v_{l\mu}:
- :v_{j}^{\mu} u_{k} \partial_{\mu}\tilde{u}_{l}:)
\label{qcd}
\end{equation}
where the (real) constants
$
f_{jkl}
$
must be completely antisymmetric. In the rest of the paper we will skip the
Wick ordering notation. Going to the second order of perturbation theory produces the Jacobi identity. So we see that, starting from some very natural assumptions, we obtain in a unique way the whole structure of Yang-Mills models. We expect the same thing to happen for more complex models such as supersymmetric theories.
If we want to generalize to the supersymmetric case we must include all the fields
$
v_{\mu}, u, \tilde{u}
$
in some supersymmetric multiplets. By definition \cite{GS1} a supersymmetric multiplet is a set of Bose and Fermi fields
$
b_{j}, f_{A}
$
together with the supercharge operators
$
Q_{a}
$
such that the commutator (resp. the anticommutator) of a Bose (resp. Fermi) field with the supercharges is a linear combination of Fermi (resp. Bose) fields; the coefficients of these linear combinations are partial derivative operators. We must also suppose that the supercharges are part of an extension of the Poincar\'e algebra called the supersymmetric algebra; essentially we have (for $N = 1$ supersymmetry):
\begin{equation}
Q_{a} \Omega = 0, \quad \bar{Q}_{\bar{a}} \Omega = 0 \quad
\bar{Q}_{\bar{a}} = (Q_{a})^{\dagger}
\label{vac}
\end{equation}
\begin{eqnarray}
\{ Q_{a} , Q_{b} \} = 0, \quad
\{ Q_{a} , \bar{Q}_{\bar{b}} \} - 2 \sigma^{\mu}_{a\bar{b}} P_{\mu} = 0
\label{SUSY}
\end{eqnarray}
and
\begin{equation}
[ Q_{a}, P_{\mu} ] = 0, \quad
U_{A}^{-1} Q_{a} U_{A} = {A_{a}}^{b} Q_{b}.
\label{poincare}
\end{equation}
Here
$
U_{A}
$
is a unitary representation of the Poincar\'e group and
$
P_{\mu}
$
are the infinitesimal generators of the space-time translations.
There are not many ways to do this. We will show that for the
$
v_{\mu}
$
we must use the vector multiplet and for the ghost fields we must use
chiral multiplets. Then we must impose that $t$ is also supersymmetric
invariant. A natural definition is:
\begin{equation}
[ Q_{a}, t] = d_{Q}s_{a} + \partial_{\mu}t_{a}^{\mu} \qquad
\omega(s_{a}) = 7/2 \quad gh(s_{a}) = - 1 \qquad
\omega(t_{a}^{\mu}) = 7/2 \quad gh(t_{a}^{\mu}) = 0;
\label{susy-inv}
\end{equation}
this means that after space integration (i.e. the adiabatic limit) we
obtain on the physical Hilbert space an expression commuting with the
supercharges.
In the supersymmetric framework one usually makes a supplementary
requirement, namely that the basic supersymmetric multiplets should be
organized into superfields \cite{BK}, \cite{GGRS}, \cite{So}, \cite{Wei}, i.e. fields depending on the space-time variables and some auxiliary Grassmann parameters
$
\theta_{a}, \bar{\theta}_{\bar{a}}.
$
It is shown in \cite{GS1} that
there is a canonical map
$w \mapsto sw \equiv W$
mapping an ordinary Wick monomial
$w(x)$
into its supersymmetric extension
\begin{equation}
W(x,\theta,\bar{\theta}) \equiv
\exp\left(i\theta^{a} Q_{a} - i\bar{\theta}^{\bar{a}} \bar{Q}_{\bar{a}}\right)
w(x)
\exp\left(- i\theta^{a} Q_{a} + i\bar{\theta}^{\bar{a}} \bar{Q}_{\bar{a}}\right);
\label{W-expo}
\end{equation}
in particular this map associates to every field of the model a superfield. Moreover, one postulates that the interaction Lagrangian $t$ should be of the form
\begin{equation}
t(x) \equiv \int d\theta^{2} d\bar{\theta}^{2} T(x,\theta,\bar{\theta})
\label{t-T}
\end{equation}
for some supersymmetric Wick polynomial $T$. We expect that the
preceding expression is of the form (\ref{qcd}) plus other monomials
where the super-partners appear.
One can hope to have an uniqueness result for the coupling if one finds out a
supersymmetric generalization of (\ref{gauge}). A natural candidate would be
the relation:
\begin{equation}
[Q, T(x,\theta,\bar{\theta}) ] = {\cal D}T(x,\theta,\bar{\theta}) - H.c.
= {\cal D}T + \bar{\cal D}\bar{T}
\label{gauge-susy}
\end{equation}
where
\begin{eqnarray}
{\cal D}_{a} \equiv {\partial \over \partial \theta^{a}}
- i \sigma^{\mu}_{a\bar{b}} \bar{\theta}^{\bar{b}} \partial_{\mu}
\qquad
\bar{\cal D}_{\bar{a}} \equiv
- {\partial \over \partial \bar{\theta}^{\bar{a}}}
+ i \sigma^{\mu}_{b\bar{a}} \theta^{b} \partial_{\mu}.
\label{calD}
\end{eqnarray}
We have
\begin{eqnarray}
( {\cal D}_{a} T)^{\dagger} = \pm \bar{\cal D}_{\bar{a}} T^{\dagger},
\nonumber \\
\{{\cal D}_{a}, {\cal D}_{b} \} = 0, \quad
\{\bar{\cal D}_{\bar{a}}, \bar{\cal D}_{\bar{b}} \} = 0, \quad
\{{\cal D}_{a}, \bar{\cal D}_{\bar{b}} \} =
-2 i \sigma^{\mu}_{a\bar{b}}~\partial_{\mu}
\label{DD}
\end{eqnarray}
where in the first formula the sign $+ (-)$ corresponds to a super-Bose (-Fermi)
field. The last relation is used to eliminate space-time divergences
$\partial_{\mu} T^{\mu}(x,\theta,\bar{\theta})$
in the right-hand side of the relation (\ref{gauge-susy}). It is clear that
(\ref{gauge-susy}) implies (\ref{gauge}).
The most elementary and general way of analyzing supersymmetries is
to work in components and to see later if the solution can be expressed
in terms of superfields. We will prove that in the massless case there
is a unique solution for SUSY-QCD if we consider a weaker form of
(\ref{susy-inv}) i.e. we require that this relation is true only on
physical states:
\begin{equation}
<\Psi_{1}, ([ Q_{a}, t] - d_{Q}s_{a} - \partial_{\mu}t_{a}^{\mu} ) \Psi_{2} > = 0
\label{susy-inv-phys}
\end{equation}
where
$
\Psi_{1}, \Psi_{2} \in Ker(Q)~{\rm modulo}~Im(Q).
$
We remark that (\ref{susy-inv}) implies (\ref{susy-inv-phys}) but not the other way round.
In the next Section we give the structure of the multiplets of the model and we present the gauge structure. In Section \ref{qcd-interaction} we determine the most general form of the
interaction Lagrangian compatible with gauge invariance and prove that we have supersymmetric invariance also. The expression for the ghost coupling seems to be new in the literature. The details of the computation are given in the Appendix.
We also investigate in what sense one can rephrase the result using superfields. An immediate consequence of the analysis in terms of component fields is that one cannot impose (\ref{gauge-susy}). However we can establish a contact with traditional literature based on the so-called Wess-Zumino gauge.
In Section \ref{massive} we extend the result to the massive case, hoping to obtain
the minimal supersymmetric extension of the standard model. We obtain a curious obstruction, namely the sector of massive gauge fields and the sector of massless gauge fields must decouple; this does not agree with the standard model.
Unfortunately, if we proceed to the second order of perturbation theory, we obtain a supersymmetric contribution to the anomaly which cannot be eliminated by redefinitions of the chronological products.
In Section \ref{new} we do the same analysis for the new vector multiplet \cite{GS1}, working in components also. In conclusion
$
N = 1
$
supersymmetry and gauge invariance do not seem to be compatible in quantum theory.
\section{The Quantum Superfields of the Model\label{superfields}}
\subsection{The Vector Multiplet\label{vector}}
The vector multiplet is the collection of fields
$
C, \phi, v_{\mu}, d, \chi_{a}, \lambda_{a}
$
where
$C$
is a real scalar, $\phi$ is a complex scalar,
$
v_{\mu}
$
is a real vector and
$
\chi_{a}, \lambda_{a}
$
are spinor fields. We suppose that all these fields are of mass
$m \geq 0$.
We can group them in the superfield
\begin{equation}
V= C + \theta\chi + \bar{\theta} \bar{\chi} + \theta^{2}~\phi
+ \bar{\theta}^{2}~\phi^{\dagger}
+ (\theta \sigma^{\mu} \bar{\theta})~v_{\mu}
+ \theta^{2}~\bar{\theta}\bar{\lambda}
+ \bar{\theta}^{2}~\theta\lambda
+ \theta^{2} \bar{\theta}^{2}~ d.
\label{V}
\end{equation}
It is convenient to define the new field:
\begin{equation}
\lambda^{\prime}_{a} \equiv \lambda_{a} + {i\over 2} \sigma^{\mu}_{a\bar{b}}
\partial_{\mu}\bar{\chi}^{\bar{b}} \qquad
d^{\prime} \equiv d - {m^{2} \over 4} C
\end{equation}
and then the action of the supercharges is given by
\begin{eqnarray}
i~[Q_{a}, C ] = \chi_{a}
\nonumber \\
~\{ Q_{a}, \chi_{b} \} = 2i~\epsilon_{ab} \phi
\nonumber \\
\{ Q_{a}, \bar{\chi}_{\bar{b}} \} = - i~\sigma^{\mu}_{a\bar{b}}
~( v_{\mu} + i~\partial_{\mu}C )
\nonumber \\
~[Q_{a}, \phi ] = 0
\nonumber \\
i~[Q_{a}, \phi^{\dagger} ] = \lambda^{\prime}_{a}
- i~\sigma^{\mu}_{a\bar{b}}\partial_{\mu}\bar{\chi}^{\bar{b}}
\nonumber \\
i~[Q_{a}, v^{\mu} ] = \sigma^{\mu}_{a\bar{b}} \bar{\lambda^{\prime}}^{\bar{b}}
- i~\partial^{\mu}\chi_{a}
\nonumber \\
~\{ Q_{a}, \lambda^{\prime}_{b} \} = 2i~\epsilon_{ab}~d^{\prime}
- 2i~\sigma^{\mu\rho}_{ab} \partial_{\mu}v_{\rho}
\nonumber \\
\{ Q_{a}, \bar{\lambda}^{\prime}_{\bar{b}} \} = 0
\nonumber \\
~[Q_{a}, d^{\prime} ] = - {1\over 2} \sigma^{\mu}_{a\bar{b}}
\partial_{\mu}\bar{\lambda^{\prime}}^{\bar{b}}.
\label{susy-action1}
\end{eqnarray}
It is a long but straightforward exercise to verify that the
supersymmetric algebra is valid \cite{GS2}. In \cite{GS2} we have also
determined the generic form of the causal (anti)commutation relations:
\begin{eqnarray}
~[ C(x), C(y) ] = - i~c_{1}~D_{m}(x-y)
\nonumber \\
~[ C(x), d(y) ] = - i~c_{2}~D_{m}(x-y)
\nonumber \\
~[ C(x), \phi(y) ] = - i~(c_{4} - i c_{3})~D_{m}(x-y)
\nonumber \\
~[ \phi(x), \phi^{\dagger}(y) ] =
- i~\left( {m^{2}\over 4}~c_{1} + c_{2}\right)~D_{m}(x-y)
\nonumber \\
~[ \phi(x), d(y) ] = {m^{2}\over 4}~(c_{4} - i c_{3})~D_{m}(x-y)
\nonumber \\
~[ \phi(x), v_{\mu}(y) ] = (c_{3} + i c_{4})~\partial_{\mu}D_{m}(x-y)
\nonumber \\
~[ d(x), d(y) ] = - {im^{4}\over 16}~c_{1}~D_{m}(x-y)
\nonumber \\
~[ v_{\mu}(x), v_{\rho}(y) ] =
i~c_{1}~\partial_{\mu}\partial_{\rho}~D_{m}(x-y)
+ i~\left({m^{2}\over 2}~c_{1} - 2 c_{2}\right)~g_{\mu\rho}~D_{m}(x-y)
\nonumber \\
\left\{ \chi_{a}(x), \chi_{b}(y) \right\}
= 2 (c_{4} - i c_{3})~~\epsilon_{ab}~D_{m}(x-y),
\nonumber \\
\left\{ \chi_{a}(x), \bar{\chi}_{\bar{b}}(y) \right\} =
c_{1}~\sigma^{\mu}_{a\bar{b}}~\partial_{\mu}D_{m}(x-y)
\nonumber \\
\left\{ \lambda_{a}(x), \lambda_{b}(y) \right\}
= - {m^{2}\over 2}(c_{4} - i c_{3})~\epsilon_{ab}~D_{m}(x-y),
\nonumber \\
\left\{ \lambda_{a}(x), \bar{\lambda}_{\bar{b}}(y) \right\} =
{m^{2}\over 4}~c_{1}~\sigma^{\mu}_{a\bar{b}}~\partial_{\mu}D_{m}(x-y)
\nonumber \\
\left\{ \chi_{a}(x), \lambda_{b}(y) \right\}
= - 2i~c_{2}~\epsilon_{ab}~D_{m}(x-y),
\nonumber \\
\left\{ \chi_{a}(x), \bar{\lambda}_{\bar{b}}(y) \right\} =
- i~(c_{4} + i c_{3})~\sigma^{\mu}_{a\bar{b}}~\partial_{\mu}D_{m}(x-y)
\label{CCR-V}
\end{eqnarray}
and the rest of the (anti)commutators are null; here
$
c_{j},~j = 1,\dots,4
$
are some real coefficients. However, if we want the commutation relations
of the vector field to remain unchanged - see (\ref{ccr-vector}) - then we must
require
$
c_{1} = 0,~c_{2} = - {1\over 2}.
$
This choice implies in particular that
$
v_{\mu}
$
and $\phi$ have canonical dimension $1$, so the causal commutator
between them should have order of singularity $- 2$. But this is compatible
only with the choice
$
c_{3} = c_{4} = 0.
$
In the end we find the following causal (anti)commutation relations:
\begin{eqnarray}
~[ C(x), d^{\prime}(y) ] = {i\over 2}~D_{m}(x-y)
\nonumber \\
~[ d^{\prime}(x), d^{\prime}(y) ] = - {i m^{2}\over 4}~D_{m}(x-y)
\nonumber \\
~[ \phi(x), \phi^{\dagger}(y) ] = {i\over 2}~D_{m}(x-y)
\nonumber \\
~[ v_{\mu}(x), v_{\nu}(y) ] = i~g_{\mu\nu}~D_{m}(x-y)
\nonumber \\
\left\{ \chi_{a}(x), \lambda^{\prime}_{b}(y) \right\}
= i~~\epsilon_{ab}~D_{0}(x-y),
\nonumber \\
\left\{ \lambda^{\prime}_{a}(x), \bar{\lambda}^{\prime}_{\bar{b}}(y) \right\}
= \sigma^{\mu}_{a\bar{b}}~\partial_{\mu}D_{0}(x-y)
\label{CCR-V1}
\end{eqnarray}
and the rest of the (anti)commutators are null. More compactly
\begin{equation}
[ V(X), V(Y) ] = - {1\over 2}~D_{2}(X;Y)
\end{equation}
where we are using the notations from \cite{GS2} for the possible causal
super-distributions. The canonical dimension of the component fields are
\begin{equation}
\omega(C) = 0 \qquad \omega(\chi) = {1\over 2} \qquad \omega(v_{\mu}) = 1 \qquad
\omega(\phi) = 1 \qquad \omega(\lambda^{\prime}) = {3\over 2} \qquad
\omega(d) = 2.
\label{can1}
\end{equation}
It is natural to assume that
\begin{equation}
\omega(\theta) = - 1/2 \qquad
\omega({\cal D}) = 1/2
\end{equation}
as it is usually done in the literature. In this way one can make sense
of the notion of canonical dimension for the vector superfield; more precisely
we have:
\begin{equation}
\omega(V) = 0.
\label{CAN1}
\end{equation}
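This dimension bookkeeping can be checked term by term in the expansion (\ref{V}). The following sketch (purely illustrative; it takes $\omega(C) = 0$, as required by $\omega(V) = 0$) verifies that each monomial of $V$ carries total dimension zero:

```python
# Check that every term of the superfield expansion V has canonical
# dimension 0, using omega(theta) = omega(thetabar) = -1/2.
# Component dimensions as in the text; omega(C) = 0 is assumed.
omega = {"C": 0.0, "chi": 0.5, "phi": 1.0, "v": 1.0, "lambda": 1.5, "d": 2.0}
w_theta = -0.5

# Each term of V: (component field, total number of theta/thetabar factors)
terms = [("C", 0), ("chi", 1), ("phi", 2), ("v", 2), ("lambda", 3), ("d", 4)]

dims = [omega[f] + n * w_theta for f, n in terms]
print(dims)  # every entry is 0.0
```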
\subsection{The Ghost Chiral Multiplets\label{ghosts}}
We require that the ghost fields are also members of some multiplets of chiral type.
We assume that these ghost multiplets are also massless.
The generic forms of the chiral ghost and anti-ghost superfields are
\begin{eqnarray}
U(x,\theta,\bar{\theta}) = a(x)
+ 2i~\bar{\theta} \bar{\zeta}(x)
+ i~(\theta \sigma^{\mu} \bar{\theta})~\partial_{\mu}a(x)
+ \bar{\theta}^{2}~g(x)
+ \bar{\theta}^{2}~\theta \sigma^{\mu} \partial_{\mu}\bar{\zeta}(x)
\label{U}
\end{eqnarray}
and respectively
\begin{eqnarray}
\tilde{U}(x,\theta,\bar{\theta}) = \tilde{a}(x)
- 2i~\bar{\theta} \bar{\tilde{\zeta}}(x)
+ i~(\theta \sigma^{\mu} \bar{\theta})~\partial_{\mu}\tilde{a}(x)
+ \bar{\theta}^{2}~\tilde{g}(x)
- \bar{\theta}^{2}~\theta \sigma^{\mu} \partial_{\mu}\bar{\tilde{\zeta}}(x)
\label{tildeU}
\end{eqnarray}
where
$
a,~g,~\tilde{a},~\tilde{g}
$
are Fermi scalar fields and
$
\zeta_{a}, \tilde{\zeta}_{a}
$
are Bose spinor fields. Let us recall that we choose them such that \cite{GS2}
\begin{equation}
(\zeta_{a})^{\dagger} = \bar{\zeta}_{\bar{a}} \qquad
(\tilde{\zeta}_{a})^{\dagger} = - \bar{\tilde{\zeta}}_{\bar{a}}
\label{hermite-2}
\end{equation}
- see (\ref{hermite-1}); the chirality condition means
\begin{equation}
{\cal D}_{a}U = 0 \qquad {\cal D}_{a}\tilde{U} = 0.
\end{equation}
The action of the supercharges on these fields is
\begin{eqnarray}
\{ Q_{a}, a \} = 0 \qquad
\{ Q_{a}, a^{\dagger} \} = 2\zeta_{a}
\nonumber \\
\{ Q_{a}, g \} = -2i~\sigma^{\mu}_{a\bar{b}}
~\partial_{\mu}\bar{\zeta}^{\bar{b}}
\qquad
\{ Q_{a}, g^{\dagger} \} = 0
\nonumber \\
~[ Q_{a}, \zeta_{b} ] = \epsilon_{ab} g^{\dagger}
\qquad
i [ Q_{a}, \bar{\zeta}_{\bar{b}} ] = \sigma^{\mu}_{a\bar{b}}~\partial_{\mu}a
\end{eqnarray}
and respectively:
\begin{eqnarray}
\{ Q_{a}, \tilde{a} \} = 0 \qquad
\{ Q_{a}, \tilde{a}^{\dagger} \} = 2\tilde{\zeta}_{a}
\nonumber \\
\{ Q_{a}, \tilde{g} \} = 2i~\sigma^{\mu}_{a\bar{b}}
~\partial_{\mu}\bar{\tilde{\zeta}}^{\bar{b}}
\qquad
\{ Q_{a}, \tilde{g}^{\dagger} \} = 0
\nonumber \\
~[ Q_{a}, \tilde{\zeta}_{b} ] = \epsilon_{ab} \tilde{g}^{\dagger}
\qquad
i [ Q_{a}, \bar{\tilde{\zeta}}_{\bar{b}} ] = - \sigma^{\mu}_{a\bar{b}}~
\partial_{\mu}\tilde{a}.
\end{eqnarray}
It is convenient to work with the Hermitian (resp. anti-Hermitian) fields
\begin{eqnarray}
u \equiv a + a^{\dagger} \qquad v \equiv - i (a - a^{\dagger})
\nonumber \\
\tilde{u} \equiv \tilde{a} - \tilde{a}^{\dagger} \qquad
\tilde{v} \equiv - i (\tilde{a} + \tilde{a}^{\dagger})
\label{h-ghost}
\end{eqnarray}
such that we have
\begin{equation}
u^{\dagger} = u \qquad v^{\dagger} = v \qquad
\tilde{u}^{\dagger} = - \tilde{u} \quad \tilde{v}^{\dagger} = - \tilde{v}.
\end{equation}
Then we have the following action of the supercharges:
\begin{eqnarray}
\{ Q_{a}, u \} = 2 \zeta_{a} \qquad
\{ Q_{a}, v \} = 2 i~\zeta_{a}
\nonumber \\
\{ Q_{a}, g \} = -2i~\sigma^{\mu}_{a\bar{b}}
~\partial_{\mu}\bar{\zeta}^{\bar{b}}
\qquad
\{ Q_{a}, g^{\dagger} \} = 0
\nonumber \\
~[ Q_{a}, \zeta_{b} ] = \epsilon_{ab} g^{\dagger}
\qquad
i [ Q_{a}, \bar{\zeta}_{\bar{b}} ]
= {1\over 2} \sigma^{\mu}_{a\bar{b}}~\partial_{\mu}(u + i v)
\label{susy-gh}
\end{eqnarray}
and respectively:
\begin{eqnarray}
\{ Q_{a}, \tilde{u} \} = - 2 \tilde{\zeta}_{a} \qquad
\{ Q_{a}, \tilde{v} \} = - 2 i~\tilde{\zeta}_{a}
\nonumber \\
\{ Q_{a}, \tilde{g} \} = 2i~\sigma^{\mu}_{a\bar{b}}
~\partial_{\mu}\bar{\tilde{\zeta}}^{\bar{b}}
\qquad
\{ Q_{a}, \tilde{g}^{\dagger} \} = 0
\nonumber \\
~[ Q_{a}, \tilde{\zeta}_{b} ] = \epsilon_{ab} \tilde{g}^{\dagger}
\qquad
i [ Q_{a}, \bar{\tilde{\zeta}}_{\bar{b}} ] = - {1\over 2} \sigma^{\mu}_{a\bar{b}}~
\partial_{\mu}(\tilde{u} +i \tilde{v}).
\label{susy-anti-gh}
\end{eqnarray}
These relations are consistent with the following canonical dimensions for the fields:
\begin{eqnarray}
\omega(u) = 1 \quad \omega(v) = 1 \quad \omega(g) = 2 \quad
\omega(\zeta) = {3\over 2}
\nonumber \\
\omega(\tilde{u}) = 1 \quad \omega(\tilde{v}) = 1 \quad \omega(\tilde{g}) = 2 \quad
\omega(\tilde{\zeta}) = {3\over 2}.
\label{can2}
\end{eqnarray}
One can define the so-called $R$ symmetry by:
\begin{equation}
R\Omega = 0 \qquad R^{\dagger} = R
\end{equation}
and
\begin{eqnarray}
~[R, C] = 0 \quad
[ R, \phi] = - 2 \phi \quad [R, \phi^{\dagger}] = 2 \phi^{\dagger} \quad [R, d] = 0
\nonumber \\
~[R, \chi ] = - \chi \quad
[R, \bar{\chi} ] = \bar{\chi} \quad
[ R, \lambda^{\prime} ] = \lambda^{\prime} \quad
[ R, \bar{\lambda}^{\prime} ] = - \bar{\lambda}^{\prime}
\nonumber \\
~[R, u ] = 0 \quad
[R, v ] = 0 \quad
[R, g ] = - 2 g \quad
[R, g^{\dagger} ] = 2 g^{\dagger} \qquad
[R, \zeta_{a} ] = - \zeta_{a} \qquad
[ R, \bar{\zeta}_{\bar{a}} ] = \bar{\zeta}_{\bar{a}}
\nonumber \\
~[R, \tilde{u} ] = 0 \quad
[R, \tilde{v} ] = 0 \quad
[R, \tilde{g} ] = 0 \quad
[R, \tilde{g}^{\dagger} ] = 0
\nonumber \\
~[R, \tilde{\zeta}_{a} ] = - \tilde{\zeta}_{a} \qquad
[ R, \bar{\tilde{\zeta}}_{\bar{a}} ] = \bar{\tilde{\zeta}}_{\bar{a}}.
\label{R}
\end{eqnarray}
One usually imposes this invariance on the interaction Lagrangian.
\subsection{The Gauge Charge\label{gauge-charge}}
The purpose is to generalize the construction from the Introduction and
to extend naturally the formulas (\ref{gh-charge}) to the supersymmetric
case. We define the gauge charge $Q$ postulating the following properties:
\begin{itemize}
\item
We have
\begin{equation}
Q \Omega = 0, \qquad Q^{\dagger} = Q.
\label{gauge1}
\end{equation}
\item
The (anti)commutator of $Q$ with a Bose (resp. Fermi) field is a linear combination of Fermi (resp. Bose) fields; the coefficients of these linear combinations are partial differential operators.
\item
The (anti)commutator of $Q$ with a field raises the canonical dimension by one unit;
\item
The (anti)commutator of $Q$ with a field raises the ghost number by one unit;
\item
The gauge charge commutes with the action of the Poincar\'e group; in this way the Poincar\'e group induces an action on the physical space
$
{\cal H}_{\rm phys} \equiv Ker(Q)/Im(Q).
$
\item
The gauge charge anticommutes with the supercharges:
\begin{equation}
\{Q,Q_{a}\}= 0;
\end{equation}
in this way the supersymmetric algebra induces an action on the physical space
$
{\cal H}_{\rm phys}.
$
\item
The gauge charge squares to zero, as in the Yang-Mills case
\begin{equation}
Q^{2} = 0.
\end{equation}
\end{itemize}
If one makes the most general ansatz for the action of $Q$ compatible with the preceding conditions, one finds an important result: after some convenient rescalings of the fields, the gauge charge is uniquely determined by the preceding assumptions. In the massless case we have
$
d^{\prime}= d
$
and $Q$ is uniquely determined by:
\begin{equation}
Q \Omega = 0 \qquad Q^{\dagger} = Q
\end{equation}
and
\begin{eqnarray}
~[Q, C] = i~v \quad [Q, v^{\mu}] = i \partial^{\mu}u \quad
[ Q, \phi] = - g^{\dagger} \quad [Q, \phi^{\dagger}] = g \quad [Q, d] = 0
\nonumber \\
\{Q, \chi \} = 2 i \zeta \quad
\{Q, \bar{\chi} \} = - 2 i \bar{\zeta} \quad
\{Q, \lambda^{\prime} \} = 0
\nonumber \\
\{Q, u \} = 0 \quad
\{Q, v \} = 0 \quad
\{Q, g \} = 0 \quad
\{Q, g^{\dagger} \} = 0 \qquad
[Q, \zeta_{a} ] = 0 \qquad
[ Q, \bar{\zeta}_{\bar{a}} ] = 0
\nonumber \\
\{Q, \tilde{u} \} = - i~\partial_{\mu}v^{\mu}\quad
\{Q, \tilde{v} \} = 2 i~d \quad
\{Q, \tilde{g} \} = 0 \quad
\{Q, \tilde{g}^{\dagger} \} = 0
\nonumber \\
~[Q, \tilde{\zeta}_{a} ] = - {1\over 2} \sigma^{\mu}_{a\bar{b}}
\partial_{\mu}\bar{\lambda}^{\prime\bar{b}}
\qquad
[ Q, \bar{\tilde{\zeta}}_{\bar{a}} ] = - {1\over 2} \sigma^{\mu}_{b\bar{a}} \partial_{\mu}{\lambda^{\prime}}^{b}.
\label{gauge4}
\end{eqnarray}
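The nilpotency $Q^{2} = 0$ can be checked on each generator of (\ref{gauge4}). A minimal sketch, with ad hoc field names: we encode the action of $d_{Q}$ as a linear substitution on formal symbols (the step $d_{Q}\partial_{\mu}v^{\mu} = i\,\partial^{2}u = 0$ uses the wave equation for the massless ghost $u$):

```python
import sympy as sp

# Formal commuting symbols for the fields; statistics plays no role here
# because d_Q is only applied to single generators and acts linearly.
C, vg, vmu, du, u, ut, vt, d, phi, gdag, chi, zeta, divv, lam = sp.symbols(
    "C vg vmu du u ut vt d phi gdag chi zeta divv lam")

# Transcription of the gauge-charge action: vg is the ghost scalar v,
# du = partial^mu u, divv = partial_mu v^mu, lam = lambda'.
dQ = {C: sp.I*vg, vg: 0,
      vmu: sp.I*du, du: 0, u: 0,
      phi: -gdag, gdag: 0,
      chi: 2*sp.I*zeta, zeta: 0,
      ut: -sp.I*divv, divv: 0,   # d_Q divv = i*box(u) = 0 for massless u
      vt: 2*sp.I*d, d: 0,
      lam: 0}

def apply_dQ(expr):
    """One application of the gauge variation (linear in the fields)."""
    return sp.expand(sp.sympify(expr).subs(dQ, simultaneous=True))

squares = [sp.simplify(apply_dQ(apply_dQ(f))) for f in dQ]
print(all(s == 0 for s in squares))  # True: Q^2 = 0 on every generator
```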
One can express everything in terms of superfields also \cite{GGRS}:
\begin{eqnarray}
~[ Q, V ] = U - U^{\dagger} \qquad
\{Q, U \} = 0
\nonumber \\
\{Q, \tilde{U} \} = - {1\over 16}~{\cal D}^{2}~\bar{\cal D}^{2}V
\end{eqnarray}
As in the Introduction we postulate that the physical Hilbert space is the
factor space
$
{\cal H}_{phys} = Ker(Q)/Im(Q).
$
Using this gauge structure it is easy to prove that the one-particle Hilbert
subspace contains the following types of particles: a) a particle of null mass and helicity $1$ (the photon); b) a particle of null mass and helicity $1/2$ (the photino); c) the ghost states generated by the fields
$
\tilde{g}
$
from the vacuum. These states must be eliminated by imposing the supplementary condition that the physical states have null ghost number. Only the transversal degrees of freedom of
$v_{\mu}$
and
$\lambda^{\prime}_{a}$
produce physical states.
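The counting of physical states in $Ker(Q)/Im(Q)$ can be illustrated on a finite-dimensional toy model (purely illustrative, not the actual one-particle space): a nilpotent $Q$ pairs each ghost direction with one unphysical gauge direction, and only the remaining directions survive in the cohomology.

```python
import numpy as np

# Toy three-dimensional space spanned by (transversal, scalar mode, ghost):
# Q maps the ghost direction to the scalar mode and kills everything else.
Q = np.array([[0., 0., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])

assert np.allclose(Q @ Q, 0)            # nilpotency Q^2 = 0
rank = np.linalg.matrix_rank(Q)         # dim Im(Q) = 1
dim_ker = Q.shape[0] - rank             # dim Ker(Q) = 2
dim_phys = dim_ker - rank               # dim Ker(Q)/Im(Q)
print(dim_phys)  # 1: only the transversal direction is physical
```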
We can now determine the causal (anti)commutation relations for the
ghost fields. As in the Yang-Mills case - see relation (\ref{YM+CCR}) - one uses the Jacobi identities
\begin{eqnarray}
[ b(x), \{ f(y), Q \} ] + \{ f(y), [ Q, b(x) ] \} = \{ Q, [ b(x), f(y) ] \} = 0
\label{susy+CCR}
\end{eqnarray}
where
$
b = C, \phi, \phi^{\dagger}, v_{\mu}, d^{\prime}, \chi, \bar{\chi}, \lambda^{\prime},\bar{\lambda}^{\prime}
$
and
$
f = \tilde{u}, \tilde{g}, \tilde{g}^{\dagger}, \tilde{\zeta}, \bar{\tilde{\zeta}};
$
if we take into account the particular choice we have made for the causal (anti)commutation relations of the vector multiplet one finds out \cite{GS2} that we have
\begin{equation}
~\{ a(x), \tilde{a}^{\dagger}(y) \} = {i\over 2}~D_{0}(x-y) \qquad
~[ \zeta_{a}(x), \bar{\tilde{\zeta}}_{\bar{b}}(y) ] = - {1\over 4}
\sigma^{\mu}_{a\bar{b}}~\partial_{\mu}D_{0}(x-y)
\label{CCR-chiral-ghost}
\end{equation}
and all other causal (anti)commutators are null. So, like for ordinary
Yang-Mills models, the causal (anti)commutators of the ghost fields are
determined by the corresponding relations of the vector multiplet
fields.
Finally we mention that one can take the vector field to be part of the so-called rotor multiplet \cite{GS2}. The fields of this multiplet are
$
(d,v_{\mu},\lambda_{a})
$
and the action of the supercharges is defined through
\begin{eqnarray}
~[Q_{a}, d ] = \sigma^{\mu}_{a\bar{b}}~\partial_{\mu}\bar{\lambda}^{\bar{b}}
\nonumber \\
~\{ Q_{a}, \lambda_{b} \} = - i~( \epsilon_{ab} d + \sigma^{\mu\nu}_{ab} F_{\mu\nu})
\nonumber \\
\{ Q_{a}, \bar{\lambda}_{\bar{b}} \} = 0
\nonumber \\
i~[Q_{a}, v^{\mu} ] = \sigma^{\mu}_{a\bar{b}} \bar{\lambda}^{\bar{b}}.
\label{rotor1}
\end{eqnarray}
where
$
F_{\mu\nu} \equiv \partial_{\mu}v_{\nu} - \partial_{\nu}v_{\mu}.
$
However in this case the general form of the gauge charge
\begin{equation}
~[Q, v^{\mu}] = i \partial^{\mu}u \quad [Q, d] = 0 \quad
\{Q, \lambda_{a} \} = \alpha~\sigma^{\mu}_{a\bar{b}}~\partial_{\mu}\bar{\zeta}^{\bar{b}}
\end{equation}
is not compatible with the relation
$
\{Q, Q_{a}\} = 0.
$
Also, if we consider that the ghost multiplets are Wess-Zumino multiplets, we obtain a contradiction.
\section{Supersymmetric QCD\label{qcd-interaction}}
\subsection{Supersymmetric QCD in Terms of Component Fields}
It is better to illustrate the method we use to find the most general
gauge invariant Lagrangian with the simplest case, namely ordinary QCD,
i.e. we will briefly show how to obtain the expression (\ref{qcd})
as the unique possibility. So, we consider a Wick polynomial $t$ which
is tri-linear in the fields
$
v_{j}^{\mu}, u_{j}, \tilde{u}_{j},
$
has canonical dimension $4$ and null ghost number, and is Lorentz covariant and gauge invariant in the sense of (\ref{gauge}). First we list all possible monomials compatible with all these requirements; they are:
\begin{equation}
f^{(1)} = f^{(1)}_{jkl} v_{j}^{\mu} v_{k}^{\nu} \partial_{\nu}v_{l\mu} \qquad
f^{(2)} = f^{(2)}_{jkl} v_{j}^{\mu} v_{k\mu} \partial_{\nu}v_{l}^{\nu}
\end{equation}
and
\begin{equation}
g^{(1)} = g^{(1)}_{jkl} v^{\mu}_{j} u_{k} \partial_{\mu}\tilde{u}_{l} \qquad
g^{(2)} = g^{(2)}_{jkl} \partial_{\mu}v^{\mu}_{j} u_{k} \tilde{u}_{l} \qquad
g^{(3)} = g^{(3)}_{jkl} v^{\mu}_{j} \partial_{\mu}u_{k} \tilde{u}_{l}.
\end{equation}
We now list the possible trivial Lagrangians. They are total divergences of null ghost number
\begin{eqnarray}
t^{(1)}_{\mu} = t^{(1)}_{jkl} v_{j}^{\nu} v_{k\nu} v_{l\mu} \qquad
t^{(1)}_{jkl} = t^{(1)}_{kjl}
\nonumber \\
t^{(2)}_{\mu} = t^{(2)}_{jkl} v_{j\mu} u_{k} \tilde{u}_{l}
\end{eqnarray}
and the co-boundary terms of ghost number $- 1$:
\begin{eqnarray}
b^{(1)} = b^{(1)}_{jkl} v_{j}^{\mu} v_{k\mu} \tilde{u}_{l} \qquad
b^{(1)}_{jkl} = b^{(1)}_{kjl}
\nonumber \\
b^{(2)} = b^{(2)}_{jkl} u_{j} \tilde{u}_{k} \tilde{u}_{l} \qquad
b^{(2)}_{jkl} = - b^{(2)}_{jlk}.
\end{eqnarray}
Now we proceed as follows: using
$
\partial^{\mu}t^{(1)}_{\mu}
$
it is possible to make
\begin{equation}
f^{(1)}_{jkl} = - f^{(1)}_{lkj};
\label{A1}
\end{equation}
using
$d_{Q}b^{(1)}$
we can make
\begin{equation}
f^{(2)}_{jkl} = 0;
\end{equation}
using
$
\partial^{\mu}t^{(2)}_{\mu}
$
it is possible to take
\begin{equation}
g^{(3)}_{jkl} = 0;
\end{equation}
finally, using
$d_{Q}b^{(2)}$
we can make
\begin{equation}
g^{(2)}_{jkl} = g^{(2)}_{kjl}.
\label{S1}
\end{equation}
So we are left only with three terms. If we compute
$
d_{Q}t
$
and use the known identity:
\begin{equation}
\partial^{2} f_{j} = 0, \quad j = 1,2,3
\quad \Longrightarrow
(\partial^{\mu}f_{1}) (\partial_{\mu}f_{2}) f_{3} =
{1\over 2} \partial_{\mu} \Bigl[ (\partial^{\mu}f_{1}) f_{2} f_{3}
+ f_{1} (\partial^{\mu}f_{2}) f_{3} - f_{1} f_{2} (\partial^{\mu}f_{3}) \Bigl]
\label{magic}
\end{equation}
the result is
\begin{equation}
d_{Q}t = i u_{j} A_{j} + {\rm total~div}
\end{equation}
where:
\begin{eqnarray}
A_{j} = - 2 f^{(1)}_{jkl}~\partial^{\nu}v^{\mu}_{k}~\partial_{\mu}v_{l\nu}
+ (f^{(1)}_{lkj} + g^{(2)}_{kjl})~
\partial_{\mu}v^{\mu}_{k}~\partial_{\nu}v_{l}^{\nu}
\nonumber \\
+ (- f^{(1)}_{jkl} + f^{(1)}_{lkj} + f^{(1)}_{klj} + g^{(1)}_{kjl})~
v^{\mu}_{k}~\partial_{\mu}\partial_{\nu}v^{\nu}_{l}.
\end{eqnarray}
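The divergence identity (\ref{magic}) used in this computation can be checked symbolically; the sketch below does it in $1+1$ dimensions with metric $(+,-)$, taking for $f_{1}, f_{2}, f_{3}$ arbitrary left/right moving solutions of the wave equation (an illustrative check, not tied to the model's fields):

```python
import sympy as sp

t, x = sp.symbols("t x")
F1, F2, F3 = sp.Function("F1"), sp.Function("F2"), sp.Function("F3")

# Solutions of box f = (d_t^2 - d_x^2) f = 0 in 1+1 dimensions:
f1, f2, f3 = F1(t + x), F2(t - x), F3(t + x)

def dot(a, b):
    """Minkowski scalar product of gradients, metric (+,-)."""
    return sp.diff(a, t)*sp.diff(b, t) - sp.diff(a, x)*sp.diff(b, x)

lhs = dot(f1, f2) * f3

# T^mu = (d^mu f1) f2 f3 + f1 (d^mu f2) f3 - f1 f2 (d^mu f3), mu = 0,1;
# with metric (+,-) the spatial component picks up a minus sign.
T0 = sp.diff(f1, t)*f2*f3 + f1*sp.diff(f2, t)*f3 - f1*f2*sp.diff(f3, t)
T1 = -(sp.diff(f1, x)*f2*f3 + f1*sp.diff(f2, x)*f3 - f1*f2*sp.diff(f3, x))
rhs = sp.Rational(1, 2) * (sp.diff(T0, t) + sp.diff(T1, x))

print(sp.simplify(lhs - rhs))  # 0: the identity holds
```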
Now the gauge invariance condition (\ref{gauge}) becomes
\begin{equation}
u_{j} A_{j} = \partial_{\mu}t^{\mu}.
\end{equation}
From power counting arguments it follows that the general form for
$
t^{\mu}
$
is
\begin{equation}
t^{\mu} = u_{j} t^{\mu}_{j} + (\partial_{\nu}u_{j}) t^{\mu\nu}_{j}.
\end{equation}
We can prove that
$
t^{\mu\nu}_{j} = g^{\mu\nu}~t_{j}
$
from which
$
A_{j} = - \partial^{2}t_{j};
$
making a general ansatz for
$
t_{j}
$
we obtain that we must have in fact
\begin{equation}
A_{j} = 0
\end{equation}
i.e. the following system of equations:
\begin{eqnarray}
f^{(1)}_{jkl} = - f^{(1)}_{kjl}
\nonumber \\
f^{(1)}_{lkj} + g^{(2)}_{kjl} = 0
\nonumber \\
- f^{(1)}_{jkl} + f^{(1)}_{lkj} + f^{(1)}_{klj} + g^{(1)}_{kjl} = 0;
\label{A2}
\end{eqnarray}
the first equation, together with (\ref{A1}) amounts to the total antisymmetry of the expression
$
f_{jkl} \equiv f^{(1)}_{jkl}.
$
The second equation from the preceding system gives then
$
g^{(2)}_{kjl} = 0.
$
As a result we obtain the (unique) solution:
\begin{equation}
t^{(1)} = f^{(1)}_{jkl} ( v_{j}^{\mu} v_{k}^{\nu} \partial_{\nu}v_{l\mu}
- v_{j}^{\mu} u_{k} \partial_{\mu}\tilde{u}_{l});
\label{tym}
\end{equation}
it can easily be proved, using formula (\ref{magic}), that
$
d_{Q}t^{(1)}
$
is indeed a total divergence.
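For the simplest totally antisymmetric choice $f_{jkl} = \epsilon_{jkl}$ (the $su(2)$ structure constants) one can check numerically that the constraints (\ref{A1}) and (\ref{A2}) are satisfied with the ghost coupling $g^{(1)}_{jkl} = - f_{jkl}$ read off from (\ref{tym}) and $g^{(2)} = 0$; a small sketch:

```python
import numpy as np
from itertools import product

# Simplest totally antisymmetric choice: f_{jkl} = epsilon_{jkl}
# (su(2) structure constants); g^{(1)}_{jkl} = -f_{jkl} as in (tym).
f = np.zeros((3, 3, 3))
f[0, 1, 2] = f[1, 2, 0] = f[2, 0, 1] = 1.0
f[0, 2, 1] = f[2, 1, 0] = f[1, 0, 2] = -1.0
g1 = -f

idx = list(product(range(3), repeat=3))
# (A1): f_{jkl} = -f_{lkj}
a1 = all(f[j, k, l] == -f[l, k, j] for j, k, l in idx)
# first equation of (A2): f_{jkl} = -f_{kjl}
a2 = all(f[j, k, l] == -f[k, j, l] for j, k, l in idx)
# third equation of (A2), with g^{(2)} = 0:
a3 = all(-f[j, k, l] + f[l, k, j] + f[k, l, j] + g1[k, j, l] == 0
         for j, k, l in idx)
print(a1, a2, a3)  # True True True
```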
The supersymmetric case goes along the same lines, only the computational difficulties grow exponentially because now we have many more terms of the type
$
f,~g,~t_{\mu},~b.
$
We would expect to obtain the expression (\ref{tym}) together with terms with supersymmetric partners. The details are given in the Appendix. It is remarkable that we obtain only two possible solutions, namely the usual Yang-Mills solution for QCD~
$t^{(1)}$
given above by (\ref{tym}) and
\begin{eqnarray}
t^{(2)} = f^{\prime}_{jkl} [
(\lambda^{\prime}_{j} \sigma_{\mu} \bar{\lambda}^{\prime}_{k}) v_{l}^{\mu}
+ 2 i~(\lambda^{\prime}_{j} \tilde{\zeta}_{k}) u_{l}
+ 2 i~(\bar{\lambda}^{\prime}_{k} \bar{\tilde{\zeta}}_{j}) u_{l} ].
\label{tsusy}
\end{eqnarray}
We now impose the supersymmetric invariance condition (\ref{susy-inv}) on
\begin{equation}
t \equiv t^{(1)} + t^{(2)}
\label{t12}
\end{equation}
and we hope to obtain a non-trivial solution. If we are successful we
must also go to the second order of perturbation theory. First we
consider only the terms bilinear in
$v_{\mu}$
and linear in
$\lambda^{\prime}$
from the commutator
$
[Q_{a}, t]
$
and impose the condition that they are a sum of a co-boundary and a
total divergence. It is not very hard to prove that we obtain the
restriction
$
f^{\prime}_{jkl} = - i~f^{(1)}_{jkl}
$
i.e. the interaction Lagrangian should be:
\begin{equation}
t = f_{jkl} [ v_{j}^{\mu} v_{k}^{\nu} \partial_{\nu}v_{l\mu}
- i (\lambda^{\prime}_{j} \sigma_{\mu} \bar{\lambda}^{\prime}_{k}) v_{l}^{\mu}
- v_{j}^{\mu} u_{k} \partial_{\mu}\tilde{u}_{l}
+ 2 ~(\lambda^{\prime}_{j} \tilde{\zeta}_{k}) u_{l}
+ 2 ~(\bar{\lambda}^{\prime}_{k} \bar{\tilde{\zeta}}_{j}) u_{l} ]
\label{susy-t}
\end{equation}
where we have simplified the notation:
$
f_{jkl} \equiv f^{(1)}_{jkl}.
$
We note that this interaction Lagrangian is $R$-invariant. The first two terms are standard in the literature - see for instance \cite{Ro1}, \cite{Ro2}. The next contribution is the standard ghost coupling from Yang-Mills theory \cite{Gr0}, \cite{Sc}. The last two contributions to the ghost coupling seem to be absent from the literature. So, gauge invariance in the first order does not hold for the expression from \cite{Ro2}.
We can prove after some computation that the preceding expression verifies the following equation:
\begin{equation}
[ Q_{a}, t] = d_{Q}s_{a} + \partial^{\mu}t_{\mu a}
- 2~f_{jkl} ( v_{j}^{\mu} u_{k} \partial_{\mu}\tilde{\zeta}_{la}
+ 2 i \sigma^{\mu\nu}_{ab} v_{j\nu} \partial_{\mu} \tilde{\zeta}_{k}^{b} u_{l})
\label{brocken-susy-inv}
\end{equation}
where
\begin{eqnarray}
s_{a} \equiv f_{jkl} [ - 2 \sigma^{\mu\nu}_{ab} v_{j\mu} v_{k\nu} \tilde{\zeta}_{l}^{b}
- i \chi_{ja} v_{k}^{\mu} \partial_{\mu}\tilde{u}_{l}
- 2 i (\lambda^{\prime}_{j}\tilde{\zeta}_{k}) \zeta_{la}
- 2 i (\bar{\lambda}^{\prime}_{k}\bar{\tilde{\zeta}}_{j}) \zeta_{la}
\nonumber \\
+ i \tilde{v}_{j}\sigma^{\mu}_{a\bar{b}} \bar{\lambda}^{\prime \bar{b}}_{k} v_{l\mu}
- 2 \tilde{v}_{j} \tilde{\zeta}_{ka} u_{l}
- 2 \lambda^{\prime}_{j} \phi_{k} u_{l} ]
\label{sa}
\end{eqnarray}
and
\begin{eqnarray}
t_{\mu a} \equiv f_{jkl} \Bigl[
\partial_{\mu}\chi_{ja} u_{k} \tilde{u}_{l}
- \chi_{ja} \partial_{\mu}u_{k} \tilde{u}_{l}
+ \chi_{ja} v_{k}^{\nu} \partial_{\mu}v_{l\nu}
- {i\over 2} \epsilon_{\mu\nu\rho\alpha} \sigma^{\alpha}_{a\bar{b}}
\bar{\lambda}^{\prime \bar{b}}_{j} v_{k}^{\nu} v_{l}^{\rho}
\nonumber \\
- \chi_{ja} v_{k}^{\nu} \partial_{\nu}v_{l\mu}
+ i (\lambda_{j}^{\prime} \sigma_{\mu} \bar{\lambda}_{k}^{\prime}) \chi_{la}
+ g_{\mu\nu} \sigma^{\nu}_{a\bar{b}} \tilde{v}_{j} \bar{\lambda}_{k}^{\prime \bar{b}} u_{l}
+ 4 i g_{\mu\nu} \sigma^{\nu\rho}_{ab} v_{j\rho} \tilde{\zeta}_{k}^{b} u_{l} \Bigl].
\end{eqnarray}
So, it seems that the last two terms from (\ref{brocken-susy-inv}) - which cannot be rewritten as
$
d_{Q}s_{a} + {\rm total~divergence}
$
- are apparently spoiling the supersymmetric invariance condition (\ref{susy-inv}).
However, as we have said in the Introduction, there is a natural way to save supersymmetric invariance namely
to impose instead of (\ref{susy-inv}) the weaker form (\ref{susy-inv-phys})
\begin{eqnarray}
<\Psi_{1}, ([ Q_{a}, t ] - d_{Q}s_{a} - \partial_{\mu}t_{a}^{\mu} ) \Psi_{2}> = 0
\nonumber
\end{eqnarray}
with
$
\Psi_{1}, \Psi_{2} \in Ker(Q)~{\rm modulo}~Im(Q).
$
We first take
$
\Psi
$
to be generated from the vacuum by the physical fields
$
v_{j}^{\mu},~\lambda^{\prime}_{j},~d^{\prime}_{j};
$
if we consider as before, the terms bilinear in
$v_{\mu}$
and linear in
$\lambda^{\prime}$
from the commutator
$
[Q_{a}, t]
$
we obtain that the interaction Lagrangian should be given by (\ref{susy-t}). It follows that we also have (\ref{brocken-susy-inv}); however, if we apply this relation to physical states
$
\Psi_{j}
$
we obtain that (\ref{susy-inv-phys}) is true; indeed the extra terms give zero in this average. If we substitute in (\ref{susy-inv-phys})
$
\Psi_{j} \rightarrow \Psi_{j} + Q \Phi_{j}
$
the relation stays true (one has to use the anticommutativity of $Q$ with the supercharges).
So we have obtained a unique solution for supersymmetric QCD, as in the Yang-Mills case.
\subsection{Supersymmetric QCD in Terms of Superfields}
From the preceding Subsection we conclude that we cannot express the interaction Lagrangian in terms of superfields such that (\ref{t-T}) and (\ref{gauge-susy}) are true; indeed, if this were true, then we would have (\ref{susy-inv}) with
$
s_{a} = 0.
$
Even the possibility of obtaining the expression (\ref{susy-t}) in the form (\ref{t-T}) is very unlikely. Nevertheless let us start from the superfield expression of the interaction suggested by the classical analysis \cite{GS2}.
Arguments from classical field theory suggest that the interaction should be an expression of the type
\begin{equation}
t_{\rm classical} = t_{\rm classical}^{(1)} + t_{\rm classical}^{(2)}
\label{t-class}
\end{equation}
where
\begin{eqnarray}
t_{\rm classical}^{(1)} \equiv - {i\over 8} \int d\theta^{2} d\bar{\theta}^{2}~
f_{jkl}~[ V_{j} {\cal D}^{a}V_{k}~\bar{\cal D}^{2}{\cal D}_{a}V_{l} - H.c. ]
\nonumber \\
t_{\rm classical}^{(2)} \equiv 2 i~\int d\theta^{2} d\bar{\theta}^{2}~f_{jkl}~
V_{j} (U_{k} + U_{k}^{\dagger})~(\tilde{U}_{l} + \tilde{U}_{l}^{\dagger})
\end{eqnarray}
(see \cite{GS2}). After a tedious computation one obtains up to a total divergence
\begin{eqnarray}
t_{\rm classical}^{(1)} = f_{jkl}~\Bigl[
C_{j} (\partial_{\nu}\chi_{k} \sigma^{\mu\nu} \partial_{\mu} \lambda^{\prime}_{l})
- C_{j} (\partial_{\nu}\bar{\chi}_{k} \sigma^{\mu\nu} \partial_{\mu} \bar{\lambda}^{\prime}_{l})
- 2~\partial_{\mu}C_{j} v_{k}^{\mu} d_{l}
\nonumber \\
+ {1\over 2}~(\chi_{j} \sigma^{\mu} \partial_{\mu} \bar{\chi}_{k})~d_{l}
- {1\over 2}~(\partial_{\mu}\chi_{j} \sigma^{\mu} \bar{\chi}_{k})~d_{l}
- 4 i~\phi_{j} \phi_{k}^{\dagger} d_{l}
\nonumber \\
+ v_{j\mu}v_{k\nu}\partial^{\nu}v_{l}^{\mu}
- i~(\lambda^{\prime}_{j} \sigma_{\mu} \bar{\lambda}^{\prime}_{k}) v_{l}^{\mu}
\nonumber \\
- i~(\chi_{j} \sigma_{\mu\nu} \partial^{\mu}\lambda^{\prime}_{k}) v_{l}^{\nu}
- i~(\bar{\chi}_{j} \bar{\sigma}_{\mu\nu} \partial^{\mu}\bar{\lambda}^{\prime}_{k}) v_{l}^{\nu}
+ {1\over 2}~(\chi_{j} \partial_{\mu}\lambda^{\prime}_{k}) v_{l}^{\mu}
- {1\over 2}~(\bar{\chi}_{j} \partial_{\mu}\bar{\lambda}^{\prime}_{k}) v_{l}^{\mu}
\nonumber\\
+ 2 i~(\chi_{j} \lambda^{\prime}_{k})d_{l}
- 2 i~(\bar{\chi}_{j} \bar{\lambda}^{\prime}_{k})d_{l} \Bigl]
+ {\rm total~divergence};
\label{t-class-1}
\end{eqnarray}
the third line appears in the literature - see e.g. formula
(C.1c) of \cite{H} - and also in (\ref{susy-t}). The expression of the ghost coupling is much more complicated:
\begin{eqnarray}
t_{\rm classical}^{(2)} = f_{jkl}~\Bigl[ - v^{\mu}_{j} u_{k} \partial_{\mu}\tilde{u}_{l}
+ v^{\mu}_{j} \partial_{\mu}v_{k} \tilde{v}_{l}
- 2 d_{j} u_{k} \tilde{v}_{l}
- 2~\phi_{j} g_{k} \tilde{v}_{l} - 2~\phi^{\dagger}_{j} g^{\dagger}_{k} \tilde{v}_{l}
\nonumber \\
+ 2 i~\phi_{j} u_{k} \tilde{g}_{l}
+ 2 i~\phi^{\dagger}_{j} u_{k} \tilde{g}_{l}^{\dagger}
+ 2 i~C_{j} g_{k}^{\dagger} \tilde{g}_{l}
+ 2 i~C_{j} g_{k} \tilde{g}_{l}^{\dagger}
\nonumber \\
- i~(\chi_{j}\sigma^{\mu}\bar{\zeta}_{k}) \partial_{\mu}\tilde{u}_{l}
+i~(\zeta_{j}\sigma^{\mu}\bar{\chi}_{k}) \partial_{\mu}\tilde{u}_{l}
+ 2 i~(\lambda^{\prime}_{j}\zeta_{k}) \tilde{v}_{l}
+ 2 i~(\bar{\lambda}^{\prime}_{j}\bar{\zeta}_{k}) \tilde{v}_{l}
\nonumber \\
- (\partial_{\mu}\chi_{j}\sigma^{\mu}\bar{\zeta}_{k}) \tilde{v}_{l}
+ (\chi_{j}\sigma^{\mu}\partial_{\mu}\bar{\zeta}_{k}) \tilde{v}_{l}
+ (\partial_{\mu}\zeta_{j}\sigma^{\mu}\bar{\chi}_{k}) \tilde{v}_{l}
- (\zeta_{j}\sigma^{\mu}\partial_{\mu}\bar{\chi}_{k}) \tilde{v}_{l}
\nonumber \\
+ 2~(\lambda^{\prime}_{j}\tilde{\zeta}_{k}) u_{l}
+ 2~(\bar{\lambda}^{\prime}_{k}\bar{\tilde{\zeta}}_{j}) u_{l}
+ i~(\partial_{\mu}\chi_{j}\sigma^{\mu}\bar{\tilde{\zeta}}_{l}) u_{k}
- i~(\chi_{j}\sigma^{\mu}\partial_{\mu}\bar{\tilde{\zeta}}_{l}) u_{k}
\nonumber \\
+ i~(\partial_{\mu}\tilde{\zeta}_{l}\sigma^{\mu}\bar{\chi}_{j}) u_{k}
- i~(\tilde{\zeta}_{l}\sigma^{\mu}\partial_{\mu}\bar{\chi}_{j}) u_{k}
- (\chi_{j}\sigma^{\mu}\bar{\tilde{\zeta}}_{l}) \partial_{\mu}v_{k}
+ (\tilde{\zeta}_{l}\sigma^{\mu}\bar{\chi}_{j}) \partial_{\mu}v_{k}
\nonumber \\
- 4 i~\phi^{\dagger}_{j} (\zeta_{k}\tilde{\zeta}_{l})
+ 4 i~\phi_{j} (\bar{\zeta}_{k}\bar{\tilde{\zeta}}_{l})
- 2~C_{j} (\partial_{\mu}\zeta_{k}\sigma^{\mu}\bar{\tilde{\zeta}}_{l})
+ 2~C_{j} (\zeta_{k}\sigma^{\mu}\partial_{\mu}\bar{\tilde{\zeta}}_{l})
\nonumber \\
- 2~C_{j} (\partial_{\mu}\tilde{\zeta}_{l}\sigma^{\mu}\bar{\zeta}_{k})
+ 2~C_{j} (\tilde{\zeta}_{l}\sigma^{\mu}\partial_{\mu}\bar{\zeta}_{k})
- 2 i~v_{j}^{\mu}~(\zeta_{k}\sigma_{\mu}\bar{\tilde{\zeta}}_{l})
- 2 i~v_{j}^{\mu}~(\tilde{\zeta}_{l}\sigma_{\mu}\bar{\zeta}_{k})
\nonumber \\
+ 2~(\chi_{j}\zeta_{k}) \tilde{g}_{l}
+ 2~(\bar{\chi}_{j}\bar{\zeta}_{k}) \tilde{g}_{l}^{\dagger}
- 2~(\chi_{j}\tilde{\zeta}_{l}) g_{k}
+ 2~(\bar{\chi}_{j}\bar{\tilde{\zeta}}_{l}) g_{k}^{\dagger} \Bigr]
+ {\rm total~divergence}
\label{t-class-2}
\end{eqnarray}
Now one can eliminate trivial terms following the procedure given in the table from the Appendix. As a result the total expression is of the following form:
\begin{equation}
t_{\rm classical} = t + d_{Q}b + \partial^{\mu}t_{\mu} + \tilde{t}
\end{equation}
where $t$ is given by (\ref{susy-t}) and
\begin{eqnarray}
\tilde{t} \equiv f_{jkl} \Bigl[
{i\over 2}~(\partial_{\mu}\chi_{j} \sigma^{\mu} \bar{\chi}_{k}) \partial_{\nu}v^{\nu}_{l}
+ {i\over 2}~(\chi_{j} \sigma^{\nu} \partial_{\nu}\bar{\chi}_{k}) \partial_{\mu}v_{l}^{\mu}
+ \phi_{j}^{\dagger} (\chi_{k} \sigma^{\mu} \partial_{\mu}\bar{\lambda}^{\prime}_{l})
- \phi_{j} (\partial_{\mu}\lambda^{\prime}_{l} \sigma^{\mu} \bar{\chi}_{k})
\nonumber \\
+ 2 i~\phi_{j} u_{k} \tilde{g}_{l}
+ 2 i~\phi^{\dagger}_{j} u_{k} \tilde{g}_{l}^{\dagger}
+ 2 i~C_{j} g_{k}^{\dagger} \tilde{g}_{l}
+ 2 i~C_{j} g_{k} \tilde{g}_{l}^{\dagger}
\nonumber \\
+ i~(\partial_{\mu}\chi_{j}\sigma^{\mu}\bar{\tilde{\zeta}}_{l}) u_{k}
- i~(\chi_{j}\sigma^{\mu}\partial_{\mu}\bar{\tilde{\zeta}}_{l}) u_{k}
+ i~(\partial_{\mu}\tilde{\zeta}_{l}\sigma^{\mu}\bar{\chi}_{j}) u_{k}
- i~(\tilde{\zeta}_{l}\sigma^{\mu}\partial_{\mu}\bar{\chi}_{j}) u_{k}
\nonumber \\
+ (\chi_{j}\sigma^{\mu}\partial_{\mu}\bar{\tilde{\zeta}}_{l}) v_{k}
- (\partial_{\mu}\tilde{\zeta}_{l}\sigma^{\mu}\bar{\chi}_{j}) v_{k}
+ 2~(\chi_{j}\zeta_{k}) \tilde{g}_{l}
+ 2~(\bar{\chi}_{j}\bar{\zeta}_{k}) \tilde{g}_{l}^{\dagger} \Bigr]
\end{eqnarray}
is not trivial, so we do not have an exact match between the classical and quantum analyses. However, there is a way, used in the literature, of eliminating this last term, namely to compute
$
t_{\rm classical}
$
in the so-called Wess-Zumino gauge \cite{WB}. This means that the superfield $V$ can be written as
\begin{equation}
V = A + A^{\dagger} + V^{\prime}
\end{equation}
where:
\begin{equation}
V^{\prime} = (\theta \sigma^{\mu} \bar{\theta})~v_{\mu}
+ \theta^{2}~\bar{\theta}\bar{\lambda}^{\prime}
+ \bar{\theta}^{2}~\theta\lambda^{\prime}
+ \theta^{2} \bar{\theta}^{2}~ d^{\prime}
\label{V-prime}
\end{equation}
and
\begin{equation}
A = {1\over 2}~C + \bar{\theta} \bar{\chi} + \bar{\theta}^{2}~\phi^{\dagger}
+ {i\over 2}~(\theta \sigma^{\mu} \bar{\theta})~\partial_{\mu}C
- {i\over 2}~\bar{\theta}^{2}~(\theta\sigma^{\mu}\partial_{\mu}\bar{\chi}).
\end{equation}
If instead of $V$ one uses $V^{\prime}$, then
$
\tilde{t} \rightarrow 0
$
and consequently
\begin{equation}
t_{\rm classical} \rightarrow t + d_{Q}b + \partial^{\mu}t_{\mu}.
\end{equation}
The conclusion is that our expression for the interaction Lagrangian $t$
can be obtained from (\ref{t-class})--(\ref{t-class-2}) if we substitute
$
V_{j} \rightarrow V_{j}^{\prime}.
$
Gauge invariance and supersymmetric invariance are not very conveniently expressed in terms of
$
V_{j}^{\prime}
$
so one must work with the interaction Lagrangian (\ref{t-class}) at every order of perturbation theory and only at the end perform the substitution
$
V_{j} \rightarrow V_{j}^{\prime}.
$
However, going to the second order of perturbation theory might be more difficult in this superfield formalism than working with component fields.
\subsection{Second Order of Perturbation Theory}
One can go to the second order of perturbation theory very easily in components and compute the anomaly as in \cite{Gr1}; we have in (\ref{gauge})
\begin{equation}
t^{\mu} = f_{jkl} \left[
{1\over 2} u_{j} v_{k\nu} (\partial^{\nu}v_{l}^{\mu} - \partial^{\mu}v_{l}^{\nu})
- {1\over 2} u_{j} u_{k} \partial^{\mu}\tilde{u}_{l}
- i(\lambda_{j}^{\prime} \sigma^{\mu} \bar{\lambda}_{k}^{\prime}) u_{l} \right].
\end{equation}
From here we obtain
\begin{equation}
d_{Q}~[ t(x),t(y) ] =
i {\partial \over \partial x^{\mu}}~[ t^{\mu}(x),t(y)]
+ i {\partial \over \partial y^{\mu}}~[ t(x),t^{\mu}(y)]
\end{equation}
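This identity is a direct consequence of the first-order gauge invariance relation
$
d_{Q}t = i~\partial_{\mu}t^{\mu}
$
and the Leibniz rule for the graded derivation
$
d_{Q}
$
(no extra sign appears because $t$ is bosonic, of ghost number zero):
\begin{equation}
d_{Q}~[ t(x),t(y) ] = (d_{Q}t(x))~t(y) + t(x)~(d_{Q}t(y)) =
i {\partial \over \partial x^{\mu}}~[ t^{\mu}(x),t(y)]
+ i {\partial \over \partial y^{\mu}}~[ t(x),t^{\mu}(y)].
\end{equation}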
The anomalies are obtained in the process of causal splitting of this identity; we obtain in general
\begin{equation}
d_{Q} T(t(x),t(y)) =
i {\partial \over \partial x^{\mu}}~T(t^{\mu}(x),t(y))
+ i {\partial \over \partial y^{\mu}}~T(t(x),t^{\mu}(y)) + A(x,y)
\end{equation}
where
$
T(a(x),b(y))
$
is the chronological product associated to the Wick monomials $a$ and $b$ and
$
A(x,y)
$
is the anomaly which spoils gauge invariance in second order. One finds that
\begin{equation}
A(x,y) = A^{\rm YM}(x,y) + A^{\rm susy}(x,y)
\end{equation}
where the first term already appears in the pure Yang-Mills case and can be eliminated (as a co-boundary plus a total divergence) if and only if one imposes the Jacobi identity on the constants
$
f_{jkl}.
$
The second term from the anomaly is of purely supersymmetric nature:
\begin{equation}
A^{\rm susy}(x,y) = a^{\rm susy}(x)~\delta(x - y)
\end{equation}
with
\begin{equation}
a^{\rm susy} = f_{jkl}~f_{mnl}\left[ - {i \over 2}u_{j} v_{k}^{\mu}
(\lambda_{m}^{\prime} \sigma_{\mu} \bar{\lambda}_{n}^{\prime})
+ 2 u_{j} u_{k} (\lambda_{m}^{\prime} \tilde{\zeta}_{n})
+ 2 u_{j} u_{k} (\bar{\lambda}_{m}^{\prime} \bar{\tilde{\zeta}}_{n}) \right].
\label{ano}
\end{equation}
Apparently one cannot write this expression as a co-boundary plus a total divergence.
\section{Extension to the Massive Case\label{massive}}
First we must recall how one gives mass to the photon in the causal approach \cite{Gr1}, \cite{Sc}. One modifies the framework from the Introduction as follows.
The Hilbert space of the massive vector field
$
v_{\mu}
$
is enlarged to a bigger Hilbert space
${\cal H}$
including three ghost fields
$u,~\tilde{u},~h$.
The first two are Fermi scalars and the last one is a real Bose scalar field; all ghost fields are assumed to have the same mass
$
m > 0
$
as the vector field. In
${\cal H}$
one can define a Hermitian structure such that we have
\begin{equation}
v_{\mu}^{\dagger} = v_{\mu} \qquad u^{\dagger} = u, \qquad \tilde{u}^{\dagger} = - \tilde{u}
\qquad h^{\dagger} = h
\end{equation}
and we also set
$
gh(h) = 0.
$
Then one introduces the {\it gauge charge} $Q$ according to:
\begin{eqnarray}
Q \Omega = 0, \qquad Q^{\dagger} = Q,
\nonumber \\
~[ Q, v_{\mu} ] = i \partial_{\mu}u, \qquad
[Q , h ] = i m u
\nonumber \\
~\{ Q, u \} = 0, \qquad
\{ Q, \tilde{u} \} = - i~(\partial^{\mu}v_{\mu} + m h)
\label{gh-charge-m}
\end{eqnarray}
and, because
$
Q^{2} = 0,
$
the physical Hilbert space is given, as in the massless case, by
$
{\cal H}_{phys} = Ker(Q)/Im(Q).
$
The gauge charge is compatible with the following causal (anti)commutation
relation:
\begin{equation}
[ v_{\mu}(x), v_{\nu}(y) ] = i~~g_{\mu\nu}~D_{m}(x-y)
\quad
~\{ u(x), \tilde{u}^{\dagger}(y) \} = - i~D_{m}(x-y)
\quad
[ h(x), h(y) ] = - i~D_{m}(x-y)
\label{ccr-vector-m}
\end{equation}
and the other causal (anti)commutators are null. We see that the canonical dimension of the scalar ghost field must be
$
\omega(h) = 1.
$
It is an easy exercise to determine the physical space in the one-particle sector: the equivalence classes are indexed by wave functions of the form
\begin{equation}
\Psi_{0} = \int f_{\mu}(x) v^{\mu}(x) \qquad \partial^{\mu}f_{\mu} = 0.
\end{equation}
The general argument can be found in \cite{Gr1}.
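As a consistency check of (\ref{gh-charge-m}), one can verify the nilpotency of $Q$ on the anti-ghost directly: using the free field equation for $u$ (all ghost fields have the mass $m$) we get
\begin{equation}
[ Q, \{ Q, \tilde{u} \} ] = - i~[ Q, \partial^{\mu}v_{\mu} + m h ]
= \partial^{\mu}\partial_{\mu}u + m^{2}~u = 0.
\end{equation}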
To go to the supersymmetric case we must include the field $h$ in a supersymmetric multiplet. Again, the natural candidate is a chiral superfield; we take:
\begin{eqnarray}
B(x,\theta,\bar{\theta}) = b(x)
+ 2~\bar{\theta} \bar{\psi}(x)
+ i~(\theta \sigma^{\mu} \bar{\theta})~\partial_{\mu}b(x)
+ \bar{\theta}^{2}~f(x)
\nonumber \\
- i~\bar{\theta}^{2}~\theta \sigma^{\mu} \partial_{\mu}\bar{\psi}(x)
+ {m^{2} \over 4}~\theta^{2} \bar{\theta}^{2}~b(x)
\label{chiral}
\end{eqnarray}
where
$b, f$
are complex scalars and
$\psi$
is a spinor field. The chirality condition is:
\begin{equation}
{\cal D}_{a}B = 0.
\end{equation}
Some mass-dependent extra-terms
$
{m^{2} \over 4}~\theta^{2} \bar{\theta}^{2}~u(x)
$
and
$
{m^{2} \over 4}~\theta^{2} \bar{\theta}^{2}~\tilde{u}(x)
$
should be added to the expressions (\ref{U}) and (\ref{tildeU}) of $U$ and $\tilde{U}$ from Subsection \ref{ghosts}, respectively.
The action of the supercharges can be taken to be:
\begin{eqnarray}
i~[Q_{a}, b ] = 0 \qquad
i~[Q_{a}, b^{\dagger} ] = 2\psi_{a}
\nonumber \\
~[Q_{a}, f ] = -2~\sigma^{\mu}_{a\bar{b}}~\partial_{\mu}\bar{\psi}^{\bar{b}}
\qquad
~[Q_{a}, f^{\dagger} ] = 0
\nonumber \\
~\{ Q_{a}, \psi_{b} \} = i~\epsilon_{ab} f^{\dagger}
\qquad
\{ Q_{a}, \bar{\psi}_{\bar{b}} \} = \sigma^{\mu}_{a\bar{b}}~\partial_{\mu}b.
\label{susy-h}
\end{eqnarray}
These relations are compatible with the following canonical dimensions:
\begin{equation}
\omega(b) = 1 \qquad \omega(\psi) = 3/2 \qquad \omega(f) = 2.
\end{equation}
It is convenient to work with the self-adjoint bosonic ghost fields:
\begin{equation}
h \equiv b + b^{\dagger} \qquad h^{\prime} \equiv - i (b - b^{\dagger}).
\end{equation}
Next we must find the general form of the gauge charge. We proceed as in Subsection \ref{gh-charge} but we {\bf do not} assume that the (anti)commutator with the gauge charge raises the canonical dimension by one unit; we find that in the massive case $Q$ is uniquely determined by:
\begin{equation}
Q\Omega = 0 \qquad Q^{\dagger} = Q
\end{equation}
and
\begin{eqnarray}
~[Q, C] = i~v \quad [Q, v^{\mu}] = i \partial^{\mu}u \quad
[ Q, \phi] = - g^{\dagger} \quad [Q, \phi^{\dagger}] = g \quad [Q, d^{\prime}] = 0
\nonumber \\
\{Q, \chi \} = 2 i \zeta \quad
\{Q, \bar{\chi} \} = - 2 i \bar{\zeta} \quad
\{Q, \lambda^{\prime} \} = 0
\nonumber \\
\{Q, u \} = 0 \quad
\{Q, v \} = 0 \quad
\{Q, g \} = 0 \quad
\{Q, g^{\dagger} \} = 0 \qquad
[Q, \zeta_{a} ] = 0 \qquad
[ Q, \bar{\zeta}_{\bar{a}} ] = 0
\nonumber \\
\{Q, \tilde{u} \} = - i~(\partial_{\mu}v^{\mu} + m h)
\nonumber \\
\{Q, \tilde{v} \} = i~( 2d^{\prime} + m h^{\prime} + m^{2} C)
\nonumber \\
\{Q, \tilde{g} \} = - m^{2} \phi^{\dagger} - i m f \quad
\{Q, \tilde{g}^{\dagger} \} = - m^{2} \phi + i m f^{\dagger}
\nonumber \\
~[Q, \tilde{\zeta}_{a} ] = - {1\over 2} \sigma^{\mu}_{a\bar{b}}
\partial_{\mu}\bar{\lambda}^{\prime\bar{b}} - m \psi_{a} - {i \over 2} m^{2} \chi_{a}
\qquad
~[ Q, \bar{\tilde{\zeta}}_{\bar{a}} ] = - {1\over 2} \sigma^{\mu}_{b\bar{a}} \partial_{\mu}{\lambda^{\prime}}^{b} - m \bar{\psi}_{\bar{a}} + {i \over 2} m^{2} \bar{\chi}_{\bar{a}}
\nonumber \\
~[Q, h ] = i m~u \quad ~[Q, h^{\prime} ] = i m~v \quad
[Q, f ] = - i m g \quad
\{Q,\psi_{a} \} = m \zeta_{a} \quad
\{Q,\bar{\psi}_{\bar{a}} \} = m \bar{\zeta}_{\bar{a}}.
\label{gauge5}
\end{eqnarray}
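As a consistency check of (\ref{gauge5}), the nilpotency of $Q$ on the ghost
$
\tilde{\zeta}_{a}
$
follows from a cancellation between the mass terms; using
$
\{Q, \bar{\lambda}^{\prime}\} = 0
$
(the conjugate of
$
\{Q, \lambda^{\prime}\} = 0
$)
together with
$
\{Q, \psi_{a}\} = m \zeta_{a}
$
and
$
\{Q, \chi_{a}\} = 2 i \zeta_{a}
$
we get
\begin{equation}
\{ Q, [Q, \tilde{\zeta}_{a} ] \} = - m~\{Q, \psi_{a}\} - {i \over 2}~m^{2}~\{Q, \chi_{a}\}
= - m^{2}~\zeta_{a} + m^{2}~\zeta_{a} = 0.
\end{equation}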
One can express everything in terms of superfields also \cite{GS2}:
\begin{eqnarray}
~[ Q, V ] = U - U^{\dagger} \qquad
\{Q, U \} = 0
\nonumber \\
\{Q, \tilde{U} \} = - {1\over 16}~{\cal D}^{2}~\bar{\cal D}^{2}V - i m B
\qquad
[ Q, B ] = i m U.
\end{eqnarray}
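These superfield relations are consistent with $Q^{2} = 0$: using
$
\{Q, U\} = 0
$
(and its conjugate
$
\{Q, U^{\dagger}\} = 0,
$
which follows from $Q^{\dagger} = Q$) we have
\begin{equation}
\{ Q, [ Q, V ] \} = \{Q, U\} - \{Q, U^{\dagger}\} = 0, \qquad
\{ Q, [ Q, B ] \} = i m~\{Q, U\} = 0.
\end{equation}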
As in the Introduction we postulate that the physical Hilbert space is the
factor space
$
{\cal H}_{phys} = Ker(Q)/Im(Q).
$
Using this gauge structure it is easy to prove that the one-particle Hilbert
subspace contains the following types of particles: a) a particle of mass $m$ and spin $1$ (the massive photon); b) a particle of mass $m$ and spin $1/2$ (the massive photino); c) a scalar particle of mass $m$. Only the transversal degrees of freedom of
$v_{\mu}$
and
$\lambda^{\prime}_{a}$
produce physical states.
We can now determine the causal (anti)commutation relations for the
ghost fields. If we take into account the particular choice we have made
for the causal (anti)commutation relations of the vector multiplet, one finds \cite{GS2} that we have
\begin{eqnarray}
~\{ a(x), \tilde{a}^{\dagger}(y) \} = {i\over 2}~D_{m}(x-y) \qquad
~[ \zeta_{a}(x), \bar{\tilde{\zeta}}_{\bar{b}}(y) ] = - {1\over 4}
\sigma^{\mu}_{a\bar{b}}~\partial_{\mu}D_{m}(x-y)
\nonumber \\
~[ b(x), b^{\dagger}(y) ] = - {i\over 2}~D_{m}(x-y) \qquad
~\{ \psi_{a}(x), \bar{\tilde{\psi}}_{\bar{b}}(y) \} = - {1\over 4}
\sigma^{\mu}_{a\bar{b}}~\partial_{\mu}D_{m}(x-y)
\label{CCR-chiral-ghost-m}
\end{eqnarray}
and all other causal (anti)commutators are null. So, as for ordinary Yang-Mills models, the causal (anti)commutators of the ghost fields are determined by the corresponding relations of the vector multiplet fields; they are thus uniquely fixed by natural requirements. One can also prove that it is not possible to use the rotor multiplet instead of the full vector multiplet, nor Wess-Zumino multiplets instead of the ghost multiplets.
First, we eliminate terms independent of the fields
$
b, f, \psi_{a},
$
of the type
$
d_{Q}b + \partial_{\mu}t^{\mu}
$
exactly as in the massless case. Now comes an important observation: suppose that we have a Wick monomial
$
t_{B}
$
which has at least one factor
$
b, f, \psi_{a}
$
and it is of canonical dimension
$
\omega(t_{B}) \leq 4;
$
if we consider the expression
$
d_{Q}t_{B}
$
then from (\ref{gauge5}) it follows that
$
\omega(d_{Q}t_{B}) \leq 4.
$
This means that the new ($b, f, \psi$-dependent) terms in the interaction
Lagrangian cannot produce terms of canonical dimension $5$ when commuted
with the gauge charge. So we can first proceed with the elimination of trivial Lagrangians of canonical dimension $4$ as in the Appendix. This in turn means that the general solution of the gauge invariance problem must be a sum of two expressions of the
type
$t^{(1)}$
and
$t^{(2)}$
from the preceding Section - see (\ref{tym}) and (\ref{tsusy}) - to which one
must add $b$-dependent terms such that gauge invariance is restored. These
new terms are easy to obtain. For the Yang-Mills coupling $t$ they are
well known \cite{Gr1}:
\begin{eqnarray}
t^{(1)}_{\rm massive} \equiv
f_{jkl} ( v_{j\mu} v_{k\nu} \partial^{\mu}v_{l}^{\nu}
- v_{j}^{\mu} u_{k} \partial_{\mu}\tilde{u}_{l} )
\nonumber \\
+ f^{\prime}_{jkl} ( h_{j} \partial_{\mu}h_{k} v_{l}^{\mu}
- m_{k}~h_{j} v_{k\mu} v_{l}^{\mu}
- m_{k}~h_{j} \tilde{u}_{k} u_{l} )
+ f^{\prime\prime}_{jkl}~h_{j} h_{k} h_{l}
\label{inter}
\end{eqnarray}
and for the second coupling the generic expression is
\begin{eqnarray}
t^{(2)}_{\rm massive} \equiv
- i~f_{jkl} [ (\lambda^{\prime}_{j} \sigma_{\mu} \bar{\lambda}^{\prime}_{k}) v_{l}^{\mu}
+ 2 i~(\lambda^{\prime}_{j} \tilde{\zeta}_{k}) u_{l}
+ 2 i~(\bar{\lambda}^{\prime}_{k} \bar{\tilde{\zeta}}_{j}) u_{l}]
\nonumber \\
+ p^{(1)}_{jkl}~(\lambda^{\prime}_{j} \psi_{k}) h_{l}
+ p^{(2)}_{jkl}~(\bar{\lambda}^{\prime}_{j} \bar{\psi}_{k}) h_{l}
+ p^{(3)}_{jkl}~(\lambda^{\prime}_{j} \chi_{k}) h_{l}
+ p^{(4)}_{jkl}~(\bar{\lambda}^{\prime}_{j} \bar{\chi}_{k}) h_{l}.
\end{eqnarray}
The gauge invariance condition gives well-known constraints on the constants
$
f^{\prime}, f^{\prime\prime}
$
and:
\begin{eqnarray}
p^{(3)}_{jkl} = {i \over 2} p^{(4)}_{jkl}~m_{k} \qquad
p^{(4)}_{jkl} = - {i \over 2} p^{(2)}_{jkl}~m_{k}
\nonumber \\
i~p^{(1)}_{jkl}~m_{l} = 2 f_{jkl}~m_{k} \qquad
i~p^{(2)}_{jkl}~m_{l} = - 2 f_{jkl}~m_{k}.
\end{eqnarray}
We see from the second set of relations that, for $m_{l} = 0$ and $m_{k} \not= 0$, the left-hand sides vanish, so we must have
\begin{equation}
f_{jkl} = 0 \qquad {\rm for} \qquad m_{k} \not= 0 \quad m_{l} = 0
\end{equation}
which implies that the massive and massless gauge fields must decouple; this
does not agree with the standard model.
Regarding supersymmetric invariance: the argument which prevents equation (\ref{susy-inv}) from being true remains valid, but (\ref{susy-inv-phys}) should stay true.
We note that the anomaly (\ref{ano}) will remain in this case also.
{\bf Remark} The solution found in \cite{GS4} has in fact terms of canonical dimension $5$: indeed, the third term of formula (4.1) of that paper produces, after integration over the Grassmann variables, the term
$
(\psi\zeta) \tilde{g}
$
which has canonical dimension $5$.
\section{The New Vector Multiplet\label{new}}
The new vector multiplet was introduced in \cite{GS1}; it is the second possibility of a multiplet which contains a vector field. First we give the definition of a Wess-Zumino multiplet: such a multiplet contains a complex scalar field
$\phi$
and a spin $1/2$ Majorana field
$f_{a}$
of the same mass $m$. The supercharges are defined in this case by:
\begin{eqnarray}
[Q_{a}, \phi ] = 0, \qquad [\bar{Q}_{\bar{a}}, \phi^{\dagger} ] = 0
\nonumber \\
i~[Q_{a}, \phi^{\dagger} ] = 2 f_{a}, \qquad
i~[\bar{Q}_{\bar{a}}, \phi] = 2 \bar{f}_{\bar{a}}
\nonumber \\
~\{ Q_{a}, f_{b} \} = - i~m~\epsilon_{ab} \phi, \qquad
\{ \bar{Q}_{\bar{a}}, \bar{f}_{\bar{b}} \} = i~m~\epsilon_{\bar{a}\bar{b}}
\phi^{\dagger}
\nonumber \\
~\{ Q_{a}, \bar{f}_{\bar{b}} \} = \sigma^{\mu}_{a\bar{b}} \partial_{\mu}\phi,
\qquad
\{ \bar{Q}_{\bar{a}}, f_{b} \} = \sigma^{\mu}_{b\bar{a}}
\partial_{\mu}\phi^{\dagger}.
\label{wess}
\end{eqnarray}
The first vanishing commutators are also called (anti)chirality conditions.
The causal (anti)commutators are:
\begin{eqnarray}
\left[ \phi(x), \phi(y)^{\dagger} \right] = - 2i~ D_{m}(x-y),
\nonumber \\
\left\{ f_{a}(x), f_{b}(y) \right\} = i~~\epsilon_{ab}~m~D_{m}(x-y),
\nonumber \\
\left\{ f_{a}(x), \bar{f}_{\bar{b}}(y) \right\}
= \sigma^{\mu}_{a\bar{b}}~\partial_{\mu}D_{m}(x-y)
\label{CCR-wess}
\end{eqnarray}
and the other (anti)commutators are zero.
To construct the new vector multiplet one adds a vector index, i.e. makes the substitutions
$
\phi \rightarrow v_{\mu} \quad f_{a} \rightarrow \psi_{\mu a};
$
here
$
v_{\mu}
$
is a complex vector field. In the massless case the action of the supercharges is:
\begin{eqnarray}
~[Q_{a}, v_{\mu} ] = 0, \qquad [\bar{Q}_{\bar{a}}, v^{\dagger}_{\mu} ] = 0
\nonumber \\
i~[Q_{a}, v_{\mu}^{\dagger} ] = 2~\psi_{\mu a}, \qquad
i~[\bar{Q}_{\bar{a}}, v_{\mu} ] = 2~\bar{\psi}_{\mu\bar{a}}
\nonumber \\
~\{ Q_{a}, \psi_{\mu b} \} = 0, \qquad
\{ \bar{Q}_{\bar{a}}, \bar{\psi}_{\mu\bar{b}} \} = 0,
\nonumber \\
~\{ Q_{a}, \bar{\psi}_{\mu\bar{b}} \} = \sigma^{\nu}_{a\bar{b}}
\partial_{\nu}v_{\mu}, \qquad
\{ \bar{Q}_{\bar{a}}, \psi_{\mu b} \} = \sigma^{\nu}_{b\bar{a}}
\partial_{\nu}v_{\mu}^{\dagger}.
\label{susy-v}
\end{eqnarray}
The gauge structure is obtained by taking the ghost and anti-ghost multiplets to be Wess-Zumino multiplets as well (of zero mass), i.e. in the analysis from Subsection \ref{ghosts} we can take
$
g = 0 \quad \tilde{g} = 0.
$
The gauge charge is defined by
\begin{equation}
Q\Omega = 0 \quad Q^{\dagger} = Q
\end{equation}
and
\begin{eqnarray}
~[ Q, v_{\mu} ] = i \partial_{\mu}u, \qquad
~[ Q, \psi_{\mu a} ] = \partial_{\mu}\zeta_{a}, \qquad
\nonumber \\
\{ Q, a \} = 0, \qquad
[ Q, \zeta ] = 0
\nonumber \\
\{ Q, \tilde{a} \} = - i~\partial^{\mu}v_{\mu}, \qquad
[ Q, \tilde{\zeta}_{a} ] = - \partial_{\mu}\psi^{\mu}_{a}
\end{eqnarray}
and the rest of the (anti)commutators are zero. It is more convenient to work with two real vector fields:
\begin{equation}
A_{\mu} = v_{\mu} + v_{\mu}^{\dagger} \qquad
B_{\mu} = - i ( v_{\mu} - v_{\mu}^{\dagger})
\end{equation}
such that we have
\begin{eqnarray}
~[ Q, A_{\mu} ] = i \partial_{\mu}u \qquad
[ Q, B_{\mu} ] = i \partial_{\mu}v
\nonumber \\
\{ Q, \tilde{u} \} = - i \partial_{\mu}A^{\mu} \qquad
\{ Q, \tilde{v} \} = - i \partial_{\mu}B^{\mu}.
\end{eqnarray}
If we have $r$ such multiplets i.e. the fields
$
A_{j}^{\mu}, B_{j}^{\mu}, u_{j}, v_{j}, \tilde{u}_{j}, \tilde{v}_{j} \quad j =1,\dots,r
$
then it is convenient to define the new fields
$
A_{J}^{\mu}, u_{J}, \tilde{u}_{J}, \quad J =1,\dots,2r
$
according to
\begin{equation}
A_{j+r}^{\mu} = B_{j}^{\mu} \quad u_{j+r} = v_{j} \quad \tilde{u}_{j+r} = \tilde{v}_{j}
\qquad j =1,\dots,r.
\end{equation}
In this way the gauge structure becomes similar to the Yang-Mills case i.e.
\begin{eqnarray}
~[ Q, A_{J}^{\mu} ] = i \partial^{\mu}u_{J}
\nonumber \\
\{ Q, u_{J} \} = 0 \qquad \{ Q, \tilde{u}_{J} \} = - i \partial_{\mu}A_{J}^{\mu}
\end{eqnarray}
for
$
J = 1,\dots,2r.
$
It is not hard to prove that the gauge invariance fixes the interaction Lagrangian to be
of the Yang-Mills type i.e.
\begin{equation}
t = f_{JKL} ( A_{J}^{\mu} A_{K}^{\nu} \partial_{\nu}A_{L\mu}
- A_{J}^{\mu} u_{K} \partial_{\mu}\tilde{u}_{L} )
\end{equation}
where the constants
$
f_{JKL}
$
are real (from Hermiticity), completely antisymmetric (from first-order gauge invariance) and satisfy the Jacobi identity (from second-order gauge invariance); no other terms containing spinor fields are compatible with (first-order) gauge invariance.
One can revert to the old variables defining
\begin{eqnarray}
f_{jkl}^{(1)} \equiv f_{jkl} \qquad f_{jkl}^{(2)} \equiv f_{j+r,k+r,l+r}
\nonumber \\
f_{jkl}^{(3)} \equiv f_{j,k+r,l} \qquad f_{jkl}^{(4)} \equiv f_{j+r,k,l+r}.
\end{eqnarray}
Then $t$ above is a sum of four expressions:
\begin{eqnarray}
t^{(1)} = f_{jkl}^{(1)} ( A_{j}^{\mu} A_{k}^{\nu} \partial_{\nu}A_{l\mu}
- A_{j}^{\mu} u_{k} \partial_{\mu}\tilde{u}_{l} ),
\nonumber \\
t^{(2)} = f_{jkl}^{(2)} ( B_{j}^{\mu} B_{k}^{\nu} \partial_{\nu}B_{l\mu}
- B_{j}^{\mu} u_{k} \partial_{\mu}\tilde{u}_{l} ),
\nonumber \\
t^{(3)} = f_{jkl}^{(3)} ( A_{j}^{\mu} B_{k}^{\nu} \partial_{\nu}A_{l\mu}
- A_{j}^{\mu} A_{k}^{\nu} \partial_{\nu}B_{l\mu}
- B_{j}^{\mu} A_{k}^{\nu} \partial_{\nu}A_{l\mu}
\nonumber \\
- A_{j}^{\mu} u_{k} \partial_{\mu}\tilde{u}_{l}
+ A_{j}^{\mu} u_{k} \partial_{\mu}\tilde{v}_{l}
+ B_{j}^{\mu} u_{k} \partial_{\mu}\tilde{u}_{l} );
\end{eqnarray}
the expression
$
t^{(4)}
$
can be obtained from
$
t^{(3)}
$
performing the change
$
A_{j}^{\mu} \leftrightarrow B_{j}^{\mu}, \quad
u_{j} \leftrightarrow v_{j}, \quad
\tilde{u}_{j} \leftrightarrow \tilde{v}_{j}.
$
One can show that
$
t^{(3)}
$
is equivalent to a simple expression, namely:
\begin{equation}
t^{(3)} \sim f_{jkl}^{(3)} ( A_{j}^{\mu} B_{k}^{\nu} \partial_{\nu}A_{l\mu}
- 2B_{k}^{\mu} A_{j}^{\nu} \partial_{\nu}A_{l\mu}
+ A_{l}^{\mu} \partial_{\mu}u_{k} \tilde{v}_{k}
- 2 A_{j}^{\mu} v_{k} \partial_{\mu}\tilde{u}_{l} ).
\end{equation}
Now we impose the susy-invariance condition (\ref{susy-inv-phys}); after some computations one obtains the restrictions:
\begin{equation}
f_{jkl}^{(3)} = i~f_{jkl}^{(1)}, \qquad f_{jkl}^{(2)} = - i~f_{jkl}^{(1)}
\qquad f_{jkl}^{(4)} = - f_{jkl}^{(1)}
\end{equation}
which, unfortunately, contradict the reality condition: since the constants $f_{jkl}^{(1)}, f_{jkl}^{(2)}$ are real, the relation $f_{jkl}^{(2)} = - i~f_{jkl}^{(1)}$ forces them to vanish. So we cannot find a susy-invariant model even in the first order of perturbation theory.
\section{Conclusions}
There is another possibility, namely to use extended supersymmetries \cite{GS3}. However, in this case one finds immediately that one cannot include the usual ghost fields in a supersymmetric ghost multiplet. Our analysis indicates serious obstacles to constructing supersymmetric extensions of the standard model in the causal approach.
\newpage
\section{Appendix}
I.) We have the following non-trivial possibilities for terms of the type
$
A_{j}A_{k}A_{l}
$
where
$
A_{j},~A_{k},~A_{l}
$
are fields of the type
$
C, \phi, v^{\mu}, d, \chi, \lambda^{\prime}:
$
\begin{itemize}
\item
from the sector
$
C\chi\lambda:
$
\begin{equation}
f^{(1a)}_{jkl} = C_{j} (\partial_{\mu}\chi_{k} \sigma^{\mu\nu}
\partial_{\nu}\lambda_{l}^{\prime}),
\qquad
f^{(1b)}_{jkl} = C_{j} (\partial_{\mu}\bar{\chi_{k}} \bar{\sigma}^{\mu\nu} \partial_{\nu}\bar{\lambda}_{l}^{\prime})
\end{equation}
and terms with one derivative on
$C_{j}$
which will be denoted generically by
$
\tilde{F}^{(1)};
$
\item
from the sector
$
C v_{\mu} v_{\nu}
$
\begin{eqnarray}
f^{(2a)}_{jkl} C_{j} \partial^{\mu}v_{k}^{\nu} \partial_{\nu}v_{l\mu},
\qquad
f^{(2b)}_{jkl} C_{j} v_{k}^{\mu} \partial_{\mu}\partial_{\nu}v_{l}^{\nu},
\nonumber \\
f^{(2c)}_{jkl} \epsilon_{\mu\nu\rho\sigma} C_{j} \partial^{\mu}v_{k}^{\nu} \partial^{\rho}v_{l}^{\sigma},
\qquad
f^{(2d)}_{jkl} C_{j} \partial_{\mu}v_{k}^{\mu} \partial_{\nu}v_{l}^{\nu}
\end{eqnarray}
and terms with one derivative on
$C_{j}$
which will be denoted generically by
$
\tilde{F}^{(2)};
$
\item
from the sector
$
C v_{\mu} d:
$
\begin{equation}
f^{(3a)}_{jkl} C_{j} v_{k}^{\mu} \partial_{\mu}d_{l},
\qquad
f^{(3b)}_{jkl} C_{j} \partial_{\mu}v^{\mu}_{k} d_{l}
\end{equation}
and terms with one derivative on
$C_{j}$
which will be denoted generically by
$
\tilde{F}^{(3)};
$
\item
from the sector
$
C\lambda\lambda:
$
\begin{equation}
f^{(4a)}_{jkl} C_{j} (\lambda^{\prime}_{k} \sigma^{\mu} \partial_{\mu}\bar{\lambda}^{\prime}_{l}),
\qquad
f^{(4b)}_{jkl} C_{j} (\partial_{\mu}\lambda^{\prime}_{k} \sigma^{\mu} \bar{\lambda}^{\prime}_{l})
\end{equation}
and terms with one derivative on
$C_{j}$
which will be denoted generically by
$
\tilde{F}^{(4)};
$
\item
from the sector
$
C d d:
$
\begin{equation}
f^{(5)}_{jkl} C_{j} d_{k} d_{l};
\end{equation}
\item
from the sector
$
\chi\chi \phi:
$
\begin{eqnarray}
f^{(6a)}_{jkl} (\partial_{\mu}\chi_{j} \sigma^{\mu\nu} \partial_{\nu}\chi_{k}) \phi_{l},
\qquad
f^{(6b)}_{jkl} (\partial_{\mu}\chi_{j} \sigma^{\mu\nu} \partial_{\nu}\chi_{k}) \phi_{l}^{\dagger},
\nonumber \\
f^{(6c)}_{jkl} (\partial_{\mu}\bar{\chi}_{j} \bar{\sigma}^{\mu\nu} \partial_{\nu}\bar{\chi}_{k}) \phi_{l},
\qquad
f^{(6d)}_{jkl} (\partial_{\mu}\bar{\chi}_{j} \bar{\sigma}^{\mu\nu} \partial_{\nu}\bar{\chi}_{k}) \phi_{l}^{\dagger}
\end{eqnarray}
and terms with one derivative on
$\phi_{j}$
which will be denoted generically by
$
\tilde{F}^{(6)};
$
\item
from the sector
$
\chi\chi v_{\mu}:
$
\begin{eqnarray}
f^{(7a)}_{jkl} (\partial_{\mu}\chi_{j} \sigma^{\mu} \partial_{\nu}\bar{\chi}_{k}) v^{\nu}_{l},
\qquad
f^{(7b)}_{jkl} (\partial_{\mu}\chi_{j} \sigma^{\nu} \partial_{\nu}\bar{\chi}_{k}) v_{l}^{\mu},
\nonumber \\
f^{(7c)}_{jkl} (\partial_{\mu}\partial_{\nu}\chi_{j} {\sigma}^{\mu} \bar{\chi}_{k}) v^{\nu}_{l},
\qquad
f^{(7d)}_{jkl} (\chi_{j} \sigma^{\mu} \partial_{\mu}\partial_{\nu}\bar{\chi}_{k}) v_{l}^{\nu}
\qquad
f^{(7e)}_{jkl} \epsilon_{\mu\nu\rho\lambda}
(\partial^{\nu}\chi_{j} \sigma^{\rho} \partial^{\lambda}\bar{\chi}_{k}) v_{l}^{\mu}
\end{eqnarray}
and terms with one derivative on
$v^{\mu}_{l}$
which will be denoted generically by
$
\tilde{F}^{(7)};
$
\item
from the sector
$
\chi\chi d:
$
\begin{equation}
f^{(8a)}_{jkl} (\partial_{\mu}\chi_{j} \sigma^{\mu} \bar{\chi}_{k}) d_{l},
\qquad
f^{(8b)}_{jkl} (\chi_{j} \sigma^{\mu} \partial_{\mu}\bar{\chi}_{k}) d_{l}
\end{equation}
and terms with one derivative on
$d_{l}$
which will be denoted generically by
$
\tilde{F}^{(8)};
$
\item
from the sector
$
\chi\lambda \phi:
$
\begin{eqnarray}
f^{(9a)}_{jkl} (\partial_{\mu}\chi_{j} \sigma^{\mu} \bar{\lambda}^{\prime}_{k}) \phi_{l},
\qquad
f^{(9b)}_{jkl} (\chi_{j} \sigma^{\mu} \partial_{\mu}\bar{\lambda}^{\prime}_{k}) \phi_{l},
\nonumber \\
f^{(9c)}_{jkl} (\partial_{\mu}\lambda^{\prime}_{j} \sigma^{\mu} \bar{\chi}_{k}) \phi_{l},
\qquad
f^{(9d)}_{jkl} (\lambda^{\prime}_{j} \sigma^{\mu} \partial_{\mu}\bar{\chi}_{k}) \phi_{l},
\nonumber \\
f^{(9e)}_{jkl} (\partial_{\mu}\chi_{j} \sigma^{\mu} \bar{\lambda}^{\prime}_{k}) \phi^{\dagger}_{l},
\qquad
f^{(9f)}_{jkl} (\chi_{j} \sigma^{\mu} \partial_{\mu} \bar{\lambda}^{\prime}_{k}) \phi^{\dagger}_{l},
\nonumber \\
f^{(9g)}_{jkl} (\partial_{\mu}\lambda^{\prime}_{j} \sigma^{\mu} \bar{\chi}_{k}) \phi^{\dagger}_{l},
\qquad
f^{(9h)}_{jkl} (\lambda^{\prime}_{j} \sigma^{\mu} \partial_{\mu}\bar{\chi}_{k}) \phi^{\dagger}_{l}
\end{eqnarray}
and terms with one derivative on
$\phi_{l}, \phi_{l}^{\dagger}$
which will be denoted generically by
$
\tilde{F}^{(9)};
$
\item
from the sector
$
\phi \phi v_{\mu}
$
\begin{equation}
f^{(10a)}_{jkl} \phi_{j} \partial_{\mu}\phi_{k} v_{l}^{\mu},
\qquad
f^{(10b)}_{jkl} \phi_{j} \partial_{\mu}\phi_{k}^{\dagger} v_{l}^{\mu},
\qquad
f^{(10c)}_{jkl} \phi_{j}^{\dagger} \partial_{\mu}\phi_{k}^{\dagger} v_{l}^{\mu},
\qquad
f^{(10d)}_{jkl} \phi_{j}^{\dagger} \partial_{\mu}\phi_{k} v_{l}^{\mu}
\end{equation}
and terms with one derivative on
$v^{\mu}_{l}$
which will be denoted generically by
$
\tilde{F}^{(10)};
$
\item
from the sector
$
\phi \phi d
$
\begin{equation}
f^{(11a)}_{jkl} \phi_{j} \phi_{k} d_{l},
\qquad
f^{(11b)}_{jkl} \phi_{j} \phi_{k}^{\dagger} d_{l},
\qquad
f^{(11c)}_{jkl} \phi_{j}^{\dagger} \phi_{k}^{\dagger} d_{l};
\end{equation}
\item
from the sector
$
\phi\lambda\lambda:
$
\begin{equation}
f^{(12a)}_{jkl} (\lambda_{j}^{\prime}\lambda^{\prime}_{k}) \phi_{l},
\qquad
f^{(12b)}_{jkl} (\lambda_{j}^{\prime}\lambda^{\prime}_{k}) \phi^{\dagger}_{l},
\qquad
f^{(12c)}_{jkl} (\bar{\lambda}_{j}^{\prime}\bar{\lambda}^{\prime}_{k}) \phi_{l},
\qquad
f^{(12d)}_{jkl} (\bar{\lambda}_{j}^{\prime}\bar{\lambda}^{\prime}_{k}) \phi^{\dagger}_{l};
\end{equation}
\item
from the sector
$
v_{\mu} v_{\nu} v_{\rho}
$
\begin{equation}
f^{(13a)}_{jkl} v_{j}^{\mu} v_{k}^{\nu} \partial_{\nu}v_{l\mu} \qquad
f^{(13b)}_{jkl} v_{j}^{\mu} v_{k\mu} \partial_{\nu}v_{l}^{\nu}
\end{equation}
\item
from the sector
$
v_{\mu}\lambda\lambda:
$
\begin{equation}
f^{(14)}_{jkl} (\lambda_{j}^{\prime}\sigma_{\mu}\bar{\lambda}^{\prime}_{k}) v_{l}^{\mu}
\end{equation}
\item
from the sector
$
\chi\lambda v_{\mu}:
$
\begin{eqnarray}
f^{(15a)}_{jkl} (\partial^{\mu}\chi_{j} \sigma_{\mu\nu} \lambda^{\prime}_{k}) v^{\nu}_{l},
\qquad
f^{(15b)}_{jkl} (\chi_{j} \sigma_{\mu\nu} \partial^{\mu}\lambda^{\prime}_{k}) v^{\nu}_{l},
\nonumber \\
f^{(15c)}_{jkl} (\partial^{\mu}\bar{\chi}_{j} \bar{\sigma}_{\mu\nu} \bar{\lambda}^{\prime}_{k}) v^{\nu}_{l},
\qquad
f^{(15d)}_{jkl} (\bar{\chi}_{j} \bar{\sigma}_{\mu\nu} \partial^{\mu}\bar{\lambda}^{\prime}_{k}) v^{\nu}_{l},
\nonumber \\
f^{(15e)}_{jkl} (\partial_{\mu}\chi_{j} \lambda^{\prime}_{k}) v^{\mu}_{l},
\qquad
f^{(15f)}_{jkl} (\chi_{j} \partial_{\mu}\lambda^{\prime}_{k}) v^{\mu}_{l},
\nonumber \\
f^{(15g)}_{jkl} (\partial_{\mu}\bar{\chi}_{j} \bar{\lambda}^{\prime}_{k}) v^{\mu}_{l},
\qquad
f^{(15h)}_{jkl} (\bar{\chi}_{j} \partial_{\mu}\bar{\lambda}^{\prime}_{k}) v^{\mu}_{l}
\end{eqnarray}
and terms with one derivative on
$v^{\mu}_{l}$
which will be denoted generically by
$
\tilde{F}^{(15)};
$
\item
from the sector
$
\chi\lambda d:
$
\begin{equation}
f^{(16a)}_{jkl} (\chi_{j} \lambda^{\prime}_{k}) d_{l},
\qquad
f^{(16b)}_{jkl} (\bar{\chi}_{j} \bar{\lambda}^{\prime}_{k}) d_{l};
\end{equation}
\item
from the sector
$
v_{\mu}v_{\nu} d:
$
\begin{equation}
f^{(17)}_{jkl} v_{j\mu} v^{\mu}_{k} d_{l}.
\end{equation}
\end{itemize}
II.) We proceed in the same way with the ghost terms.
We have to consider terms of the type
$
A_{j} A_{k} A_{l}
$
where
$
A_{j} = C, \phi, \phi^{\dagger}, v_{\mu}, d, \chi, \lambda^{\prime}
$
and
$
A_{k} = u, v, g, \zeta
$
and
$
A_{l} = \tilde{u}, \tilde{v}, \tilde{g}, \tilde{\zeta}.
$
\begin{itemize}
\item
from the sector
$
Au\tilde{u}:
$
\begin{equation}
g^{(1a)}_{jkl} v^{\mu}_{j} u_{k} \partial_{\mu}\tilde{u}_{l} \qquad
g^{(1b)}_{jkl} \partial_{\mu}v^{\mu}_{j} u_{k} \tilde{u}_{l} \qquad
g^{(1c)}_{jkl} d_{j} u_{k} \tilde{u}_{l} \quad
\end{equation}
and terms with the derivative on
$
u_{k}
$
which will be generically denoted by
$
\tilde{G}^{(1)};
$
\item
from the sector
$
Av\tilde{v}:
$
\begin{equation}
g^{(2a)}_{jkl} v^{\mu}_{j} v_{k} \partial_{\mu}\tilde{v}_{l} \qquad
g^{(2b)}_{jkl} \partial_{\mu}v^{\mu}_{j} v_{k} \tilde{v}_{l} \qquad
g^{(2c)}_{jkl} d_{j} v_{k} \tilde{v}_{l}
\end{equation}
and terms with the derivative on
$
v_{k}
$
which will be generically denoted by
$
\tilde{G}^{(2)};
$
\item
from the sector
$
Av\tilde{u}:
$
\begin{equation}
g^{(3a)}_{jkl} v^{\mu}_{j} v_{k} \partial_{\mu}\tilde{u}_{l} \qquad
g^{(3b)}_{jkl} \partial_{\mu}v^{\mu}_{j} v_{k} \tilde{u}_{l} \qquad
g^{(3c)}_{jkl} d_{j} v_{k} \tilde{u}_{l}
\end{equation}
and terms with the derivative on
$
v_{k}
$
which will be generically denoted by
$
\tilde{G}^{(3)};
$
\item
from the sector
$
Au\tilde{v}:
$
\begin{equation}
g^{(4a)}_{jkl} v^{\mu}_{j} \partial_{\mu}u_{k} \tilde{v}_{l} \qquad
g^{(4b)}_{jkl} \partial_{\mu}v^{\mu}_{j} u_{k} \tilde{v}_{l} \qquad
g^{(4c)}_{jkl} d_{j} u_{k} \tilde{v}_{l}
\end{equation}
and terms with the derivative on
$
\tilde{v}_{l}
$
which will be generically denoted by
$
\tilde{G}^{(4)};
$
\item
from the sector
$
Ag\tilde{u}:
$
\begin{equation}
g^{(5a)}_{jkl} \phi_{j}g_{k}\tilde{u}_{l} \qquad
g^{(5b)}_{jkl} \phi^{\dagger}_{j}g_{k}\tilde{u}_{l} \qquad
g^{(5c)}_{jkl} \phi_{j}g^{\dagger}_{k}\tilde{u}_{l} \qquad
g^{(5d)}_{jkl} \phi^{\dagger}_{j}g^{\dagger}_{k}\tilde{u}_{l}
\end{equation}
\item
from the sector
$
Ag\tilde{v}:
$
\begin{equation}
g^{(6a)}_{jkl} \phi_{j} g_{k} \tilde{v}_{l} \qquad
g^{(6b)}_{jkl} \phi^{\dagger}_{j} g_{k} \tilde{v}_{l} \qquad
g^{(6c)}_{jkl} \phi_{j} g^{\dagger}_{k} \tilde{v}_{l} \qquad
g^{(6d)}_{jkl} \phi^{\dagger}_{j} g^{\dagger}_{k} \tilde{v}_{l}
\end{equation}
\item
from the sector
$
Au\tilde{g}:
$
\begin{equation}
g^{(7a)}_{jkl} \phi_{j} u_{k} \tilde{g}_{l} \qquad
g^{(7b)}_{jkl} \phi^{\dagger}_{j} u_{k} \tilde{g}_{l} \qquad
g^{(7c)}_{jkl} \phi_{j} u_{k} \tilde{g}^{\dagger}_{l} \qquad
g^{(7d)}_{jkl} \phi^{\dagger}_{j} u_{k} \tilde{g}^{\dagger}_{l}
\end{equation}
\item
from the sector
$
Av\tilde{g}:
$
\begin{equation}
g^{(8a)}_{jkl} \phi_{j} v_{k} \tilde{g}_{l} \qquad
g^{(8b)}_{jkl} \phi^{\dagger}_{j} v_{k} \tilde{g}_{l} \qquad
g^{(8c)}_{jkl} \phi_{j} v_{k} \tilde{g}^{\dagger}_{l} \qquad
g^{(8d)}_{jkl} \phi^{\dagger}_{j} v_{k} \tilde{g}^{\dagger}_{l}
\end{equation}
\item
from the sector
$
Ag\tilde{g}:
$
\begin{equation}
g^{(9a)}_{jkl} C_{j} g_{k} \tilde{g}_{l} \qquad
g^{(9b)}_{jkl} C_{j} g_{k}^{\dagger} \tilde{g}_{l} \qquad
g^{(9c)}_{jkl} C_{j} g_{k} \tilde{g}_{l}^{\dagger} \qquad
g^{(9d)}_{jkl} C_{j} g^{\dagger}_{k} \tilde{g}^{\dagger}_{l}
\end{equation}
\item
from the sector
$
A\zeta\tilde{u}:
$
\begin{eqnarray}
g^{(10a)}_{jkl} (\lambda^{\prime}_{j}\zeta_{k}) \tilde{u}_{l} \qquad
g^{(10b)}_{jkl} (\bar{\lambda}^{\prime}_{j}\bar{\zeta}_{k}) \tilde{u}_{l} \qquad
\nonumber \\
g^{(10c)}_{jkl} (\partial_{\mu}\chi_{j}\sigma^{\mu}\bar{\zeta}_{k}) \tilde{u}_{l} \qquad
g^{(10d)}_{jkl} (\chi_{j}\sigma^{\mu}\partial_{\mu}\bar{\zeta}_{k}) \tilde{u}_{l} \qquad
\nonumber \\
g^{(10e)}_{jkl} (\partial_{\mu}\zeta_{k}\sigma^{\mu}\bar{\chi}_{j}) \tilde{u}_{l} \qquad
g^{(10f)}_{jkl} (\zeta_{k}\sigma^{\mu}\partial_{\mu}\bar{\chi}_{j}) \tilde{u}_{l} \qquad
\end{eqnarray}
and terms with the derivative on
$
\tilde{u}_{l}
$
which will be generically denoted by
$
\tilde{G}^{(10)};
$
\item
from the sector
$
A\zeta\tilde{v}:
$
\begin{eqnarray}
g^{(11a)}_{jkl} (\lambda^{\prime}_{j}\zeta_{k}) \tilde{v}_{l} \qquad
g^{(11b)}_{jkl} (\bar{\lambda}^{\prime}_{j}\bar{\zeta}_{k}) \tilde{v}_{l} \qquad
\nonumber \\
g^{(11c)}_{jkl} (\partial_{\mu}\chi_{j}\sigma^{\mu}\bar{\zeta}_{k}) \tilde{v}_{l} \qquad
g^{(11d)}_{jkl} (\chi_{j}\sigma^{\mu}\partial_{\mu}\bar{\zeta}_{k}) \tilde{v}_{l} \qquad
\nonumber \\
g^{(11e)}_{jkl} (\partial_{\mu}\zeta_{k}\sigma^{\mu}\bar{\chi}_{j}) \tilde{v}_{l} \qquad
g^{(11f)}_{jkl} (\zeta_{k}\sigma^{\mu}\partial_{\mu}\bar{\chi}_{j}) \tilde{v}_{l}
\end{eqnarray}
and terms with the derivative on
$
\tilde{v}_{l}
$
which will be generically denoted by
$
\tilde{G}^{(11)};
$
\item
from the sector
$
Au\tilde{\zeta}:
$
\begin{eqnarray}
g^{(12a)}_{jkl} (\lambda^{\prime}_{j}\tilde{\zeta}_{l}) u_{k} \qquad
g^{(12b)}_{jkl} (\bar{\lambda}^{\prime}_{j}\bar{\tilde{\zeta}}_{l}) u_{k} \qquad
\nonumber \\
g^{(12c)}_{jkl} (\partial_{\mu}\chi_{j}\sigma^{\mu}\bar{\tilde{\zeta}}_{l}) u_{k} \qquad
g^{(12d)}_{jkl} (\chi_{j}\sigma^{\mu}\partial_{\mu}\bar{\tilde{\zeta}}_{l}) u_{k} \qquad
\nonumber \\
g^{(12e)}_{jkl} (\partial_{\mu}\tilde{\zeta}_{l}\sigma^{\mu}\bar{\chi}_{j}) u_{k} \qquad
g^{(12f)}_{jkl} (\tilde{\zeta}_{l}\sigma^{\mu}\partial_{\mu}\bar{\chi}_{j}) u_{k}
\end{eqnarray}
and terms with the derivative on
$
u_{k}
$
which will be generically denoted by
$
\tilde{G}^{(12)};
$
\item
from the sector
$
Av\tilde{\zeta}:
$
\begin{eqnarray}
g^{(13a)}_{jkl} (\lambda^{\prime}_{j}\tilde{\zeta}_{l}) v_{k} \qquad
g^{(13b)}_{jkl} (\bar{\lambda}^{\prime}_{j}\bar{\tilde{\zeta}}_{l}) v_{k} \qquad
\nonumber \\
g^{(13c)}_{jkl} (\partial_{\mu}\chi_{j}\sigma^{\mu}\bar{\tilde{\zeta}}_{l}) v_{k} \qquad
g^{(13d)}_{jkl} (\chi_{j}\sigma^{\mu}\partial_{\mu}\bar{\tilde{\zeta}}_{l}) v_{k} \qquad
\nonumber \\
g^{(13e)}_{jkl} (\partial_{\mu}\tilde{\zeta}_{l}\sigma^{\mu}\bar{\chi}_{j}) v_{k} \qquad
g^{(13f)}_{jkl} (\tilde{\zeta}_{l}\sigma^{\mu}\partial_{\mu}\bar{\chi}_{j}) v_{k}
\end{eqnarray}
and terms with the derivative on
$
v_{k}
$
which will be generically denoted by
$
\tilde{G}^{(13)};
$
\item
from the sector
$
A\zeta\tilde{\zeta}:
$
\begin{eqnarray}
g^{(14a)}_{jkl} \phi_{j} (\zeta_{k}\tilde{\zeta}_{l}) \qquad
g^{(14b)}_{jkl} \phi^{\dagger}_{j} (\zeta_{k}\tilde{\zeta}_{l}) \qquad
g^{(14c)}_{jkl} \phi_{j} (\bar{\zeta}_{k}\bar{\tilde{\zeta}}_{l}) \qquad
g^{(14d)}_{jkl} \phi^{\dagger}_{j} (\bar{\zeta}_{k}\bar{\tilde{\zeta}}_{l})
\nonumber \\
g^{(14e)}_{jkl} C_{j} (\partial_{\mu}\zeta_{k}\sigma^{\mu}\bar{\tilde{\zeta}}_{l}) \qquad
g^{(14f)}_{jkl} C_{j} (\zeta_{k}\sigma^{\mu}\partial_{\mu}\bar{\tilde{\zeta}}_{l})
\nonumber \\
g^{(14g)}_{jkl} C_{j} (\partial_{\mu}\tilde{\zeta}_{l}\sigma^{\mu}\bar{\zeta}_{k}) \qquad
g^{(14h)}_{jkl} C_{j} (\tilde{\zeta}_{l}\sigma^{\mu}\partial_{\mu}\bar{\zeta}_{k})
\nonumber \\
g^{(14i)}_{jkl} v_{j}^{\mu}~(\zeta_{k}\sigma_{\mu}\bar{\tilde{\zeta}}_{l}) \qquad
g^{(14j)}_{jkl} v_{j}^{\mu}~(\tilde{\zeta}_{l}\sigma_{\mu}\bar{\zeta}_{k})~~
\end{eqnarray}
and terms with the derivative on
$
C_{j}
$
which will be generically denoted by
$
\tilde{G}^{(14)};
$
\item
from the sector
$
A\zeta\tilde{g}:
$
\begin{equation}
g^{(15a)}_{jkl} (\chi_{j}\zeta_{k}) \tilde{g}_{l} \qquad
g^{(15b)}_{jkl} (\chi_{j}\zeta_{k}) \tilde{g}_{l}^{\dagger} \qquad
g^{(15c)}_{jkl} (\bar{\chi}_{j}\bar{\zeta}_{k}) \tilde{g}_{l} \qquad
g^{(15d)}_{jkl} (\bar{\chi}_{j}\bar{\zeta}_{k}) \tilde{g}_{l}^{\dagger}
\end{equation}
\item
from the sector
$
Ag\tilde{\zeta}:
$
\begin{equation}
g^{(16a)}_{jkl} (\chi_{j}\tilde{\zeta}_{l}) g_{k} \qquad
g^{(16b)}_{jkl} (\chi_{j}\tilde{\zeta}_{l}) g_{k}^{\dagger} \qquad
g^{(16c)}_{jkl} (\bar{\chi}_{j}\bar{\tilde{\zeta}}_{l}) g_{k} \qquad
g^{(16d)}_{jkl} (\bar{\chi}_{j}\bar{\tilde{\zeta}}_{l}) g_{k}^{\dagger}
\end{equation}
\end{itemize}
Some of the possible terms have been discarded according to the ``magic''
formula. The coefficients are subject to various obvious (anti)symmetry
properties.
III.) We have total divergence terms
$
t^{(j)}_{\mu}~,j = 1,\dots,20
$
in the sectors
$$
f^{(1)} - f^{(4)}, f^{(6)} -f^{(10)},f^{(13)}, f^{(15)}, g^{(1)} - g^{(4)}, g^{(10)} - g^{(14)}
$$
respectively.
IV.) We also have the co-boundary terms
$
d_{Q}b
$
to eliminate some of the previous expressions; we list the possible expressions
$b$:
\begin{itemize}
\item
of the type
$
A A^{\prime} \tilde{u}:
$
\begin{eqnarray}
b^{(1a)}_{jkl}~\phi_{j} \phi_{k} \tilde{u}_{l} \qquad
b^{(1b)}_{jkl}~\phi^{\dagger}_{j} \phi^{\dagger}_{k} \tilde{u}_{l} \qquad
b^{(1c)}_{jkl}~\phi_{j} \phi^{\dagger}_{k} \tilde{u}_{l} \qquad
b^{(1d)}_{jkl}~v^{\mu}_{j} v_{k\mu} \tilde{u}_{l} \qquad
b^{(1e)}_{jkl}~C_{j} d_{k} \tilde{u}_{l}
\nonumber \\
b^{(1f)}_{jkl}~(\chi_{j}\lambda^{\prime}_{k}) \tilde{u}_{l} \qquad
b^{(1g)}_{jkl}~(\bar{\chi}_{j}\bar{\lambda}^{\prime}_{k}) \tilde{u}_{l} \qquad
b^{(1h)}_{jkl}~(\partial_{\mu}\chi_{j}\sigma^{\mu}\bar{\chi}_{k}) \tilde{u}_{l} \qquad
b^{(1i)}_{jkl}~(\chi_{j}\sigma^{\mu}\partial_{\mu}\bar{\chi}_{k}) \tilde{u}_{l}
\nonumber \\
b^{(1j)}_{jkl}~C_{j} \partial_{\mu}v^{\mu}_{k} \tilde{u}_{l} \qquad
b^{(1k)}_{jkl}~C_{j} v^{\mu}_{k} \partial_{\mu}\tilde{u}_{l}
\end{eqnarray}
\item
of the type
$
A A^{\prime} \tilde{v}:
$
\begin{eqnarray}
b^{(2a)}_{jkl}~\phi_{j} \phi_{k} \tilde{v}_{l} \qquad
b^{(2b)}_{jkl}~\phi^{\dagger}_{j} \phi^{\dagger}_{k} \tilde{v}_{l} \qquad
b^{(2c)}_{jkl}~\phi_{j} \phi^{\dagger}_{k} \tilde{v}_{l} \qquad
b^{(2d)}_{jkl}~v^{\mu}_{j} v_{k\mu} \tilde{v}_{l} \qquad
b^{(2e)}_{jkl}~C_{j} d_{k} \tilde{v}_{l}
\nonumber \\
b^{(2f)}_{jkl}~(\chi_{j}\lambda^{\prime}_{k}) \tilde{v}_{l} \qquad
b^{(2g)}_{jkl}~(\bar{\chi}_{j}\bar{\lambda}^{\prime}_{k}) \tilde{v}_{l} \qquad
b^{(2h)}_{jkl}~(\partial_{\mu}\chi_{j}\sigma^{\mu}\bar{\chi}_{k}) \tilde{v}_{l} \qquad
b^{(2i)}_{jkl}~(\chi_{j}\sigma^{\mu}\partial_{\mu}\bar{\chi}_{k}) \tilde{v}_{l}
\nonumber \\
b^{(2j)}_{jkl}~C_{j} \partial_{\mu}v^{\mu}_{k} \tilde{v}_{l} \qquad
b^{(2k)}_{jkl}~C_{j} v^{\mu}_{k} \partial_{\mu}\tilde{v}_{l}
\end{eqnarray}
\item
of the type
$
A A^{\prime} \tilde{g}:
$
\begin{eqnarray}
b^{(3a)}_{jkl}~C_{j} \phi_{k} \tilde{g}_{l} \qquad
b^{(3b)}_{jkl}~C_{j} \phi^{\dagger}_{k} \tilde{g}_{l} \qquad
b^{(3c)}_{jkl}~C_{j} \phi_{k} \tilde{g}^{\dagger}_{l} \qquad
b^{(3d)}_{jkl}~C_{j} \phi^{\dagger}_{k} \tilde{g}^{\dagger}_{l}
\nonumber \\
b^{(3e)}_{jkl}~(\chi_{j}\chi_{k}) \tilde{g}_{l} \qquad
b^{(3f)}_{jkl}~(\chi_{j}\chi_{k}) \tilde{g}^{\dagger}_{l} \qquad
b^{(3g)}_{jkl}~(\bar{\chi}_{j}\bar{\chi}_{k}) \tilde{g}_{l} \qquad
b^{(3h)}_{jkl}~(\bar{\chi}_{j}\bar{\chi}_{k}) \tilde{g}^{\dagger}_{l}
\end{eqnarray}
\item
of the type
$
A A^{\prime} \tilde{\zeta}:
$
\begin{eqnarray}
b^{(4a)}_{jkl}~\phi_{j} (\chi_{k} \tilde{\zeta}_{l}) \qquad
b^{(4b)}_{jkl}~\phi^{\dagger}_{j} (\chi_{k} \tilde{\zeta}_{l}) \qquad
b^{(4c)}_{jkl}~\phi_{j} (\bar{\chi}_{k} \bar{\tilde{\zeta}}_{l}) \qquad
b^{(4d)}_{jkl}~\phi^{\dagger}_{j} (\bar{\chi}_{k} \bar{\tilde{\zeta}}_{l}) \qquad
\nonumber \\
b^{(4e)}_{jkl}~C_{j} (\partial_{\mu}\chi_{k}\sigma^{\mu}\bar{\tilde{\zeta}}_{l}) \qquad
b^{(4f)}_{jkl}~C_{j} (\chi_{k} \sigma^{\mu} \partial_{\mu}\bar{\tilde{\zeta}}_{l}) \qquad
\nonumber \\
b^{(4g)}_{jkl}~C_{j} (\partial_{\mu}\tilde{\zeta}_{l}\sigma^{\mu}\bar{\chi}_{k}) \qquad
b^{(4h)}_{jkl}~C_{j} (\tilde{\zeta}_{l} \sigma^{\mu} \partial_{\mu}\bar{\chi}_{k})
\nonumber \\
b^{(4i)}_{jkl}~C_{j} (\lambda^{\prime}_{k} \tilde{\zeta}_{l}) \qquad
b^{(4j)}_{jkl}~C_{j} (\bar{\lambda}^{\prime}_{k} \bar{\tilde{\zeta}}_{l}) \qquad
b^{(4k)}_{jkl}~v_{j}^{\mu} (\chi_{k} \sigma_{\mu} \bar{\tilde{\zeta}}_{l}) \qquad
b^{(4l)}_{jkl}~v_{j}^{\mu} (\tilde{\zeta}_{k} \sigma_{\mu} \bar{\chi}_{l})
\end{eqnarray}
\item
tri-linear in the ghost fields:
\begin{equation}
b^{(5a)}_{jkl}~u_{j}\tilde{u}_{k}\tilde{u}_{l} \quad
b^{(5b)}_{jkl}~u_{j}\tilde{u}_{k}\tilde{v}_{l} \quad
b^{(5c)}_{jkl}~u_{j}\tilde{v}_{k}\tilde{v}_{l} \quad
b^{(5d)}_{jkl}~v_{j}\tilde{u}_{k}\tilde{u}_{l} \quad
b^{(5e)}_{jkl}~v_{j}\tilde{u}_{k}\tilde{v}_{l} \quad
b^{(5f)}_{jkl}~v_{j}\tilde{v}_{k}\tilde{v}_{l}.
\end{equation}
\end{itemize}
V.) Now we eliminate terms of the type $f$ and $g$ using total divergences. We have
to pay attention to the order in which we proceed: when we use a total
divergence (or a co-boundary) we must not modify terms which have already been
fixed. We proceed as follows: first we use
\begin{equation}
t^{(10)}_{\mu} \equiv t_{jkl} v_{j}^{\nu} v_{k\nu} v_{l\mu} \qquad
t_{jkl} = t_{kjl};
\end{equation}
because
\begin{equation}
\partial^{\mu}t^{(10)}_{\mu} = t_{jkl} ( 2\partial^{\mu}v_{j}^{\nu} v_{k\nu} v_{l\mu}
+ v_{j}^{\nu} v_{k\nu} \partial_{\mu}v_{l}^{\mu} )
\end{equation}
it is possible to take
\begin{equation}
f^{(13a)}_{jkl} = - f^{(13a)}_{lkj};
\label{a1}
\end{equation}
For simplicity we denote from now on:
$
f^{(13)}_{jkl} \equiv f^{(13a)}_{jkl}.
$
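The redefinition used here, trading the part of $f^{(13a)}_{jkl}$ that is symmetric under $j\leftrightarrow l$ for the total divergence $\partial^{\mu}t^{(10)}_{\mu}$, rests on the unique split of any three-index coefficient into $j\leftrightarrow l$ antisymmetric and symmetric parts. A minimal numerical sketch of that split (illustrative Python, not part of the original computation; the dimension is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                                  # number of field labels (arbitrary)
f = rng.normal(size=(n, n, n))         # a generic coefficient f_{jkl}

# unique split into parts antisymmetric / symmetric under j <-> l
f_anti = 0.5 * (f - f.transpose(2, 1, 0))
f_symm = 0.5 * (f + f.transpose(2, 1, 0))

assert np.allclose(f_anti + f_symm, f)
assert np.allclose(f_anti, -f_anti.transpose(2, 1, 0))
assert np.allclose(f_symm, f_symm.transpose(2, 1, 0))
```

Choosing $t_{jkl}$ proportional to the symmetric part removes it, leaving only the antisymmetric piece.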
We find it convenient to describe the various redefinitions in the following table:
\vskip 0.5cm
\begin{tabular}{|c|c|c|c|}
\hline
\hline
Nr. crt. & $t_{\mu}, b$ & Restrictions & Modified Terms \\ \hline \hline
1 & $t^{(10)}$ & $f^{(13a)}_{jkl} = - (j \longleftrightarrow l)$ & $f^{(13b)}_{jkl}$ \\ \hline
2 & $b^{(1d)}$ & $f^{(13b)} = 0$ & $\tilde{G}^{(1)}$ \\ \hline
3 & $t^{(12)}$ & $\tilde{G}^{(1)} = 0$ & $g^{(1a)},g^{(1b)} $ \\ \hline
4 & $b^{(1a)}$ & $g^{(5c)}_{jkl} = - (j \longleftrightarrow k)$ & $\tilde{F}^{(10)}$ \\ \hline
5 & $b^{(1b)}$ & $g^{(5b)}_{jkl} = - (j \longleftrightarrow k)$ & $\tilde{F}^{(10)}$ \\ \hline
6 & $b^{(1c)}$ & $g^{(5d)} = 0$ & $\tilde{F}^{(10)}, g^{(5a)}$ \\ \hline
7 & $b^{(1e)}$ & $g^{(3c)} = 0$ & $f^{(3b)}$ \\ \hline
8 & $b^{(1f)}$ & $g^{(10a)} = 0$ & $\tilde{F}^{(15)}$ \\ \hline
9 & $b^{(1g)}$ & $g^{(10b)} = 0$ & $\tilde{F}^{(15)}$ \\ \hline
10 & $t^{(16)}_{\mu}$ & $\tilde{G}^{(10)} = 0$ & $g^{(10c)},\dots, g^{(10f)}$ \\ \hline
11 & $b^{(1h)}$ & $g^{(10c)} = 0$ & $g^{(10e)}, \tilde{F}^{(7)}$ \\ \hline
12 & $b^{(1i)}$ & $g^{(10f)} = 0$ & $g^{(10d)}, \tilde{F}^{(7)}$ \\ \hline
13 & $t^{(2)}_{\mu}$ & $\tilde{F}^{(2)}$ & $f^{(2a)},\dots, f^{(2d)}$ \\ \hline
14 & $t^{(14)}_{\mu}$& $\tilde{G}^{(3)}$ & $g^{(3a)}, g^{(3b)}$ \\ \hline
15 & $b^{(1j)}$ & $f^{(2d)} = 0$ & $g^{(3b)}$ \\ \hline
16 & $b^{(1k)}$ & $f^{(2b)} = 0$ & $g^{(3a)}$, {\rm total div} \\ \hline
17 & $b^{(2a)}$ & $g^{(6c)}_{jkl} = - (j \longleftrightarrow k)$ & $f^{(11a)}$ \\ \hline
18 & $b^{(2b)}$ & $g^{(6b)}_{jkl} = - (j \longleftrightarrow k)$ & $f^{(11c)}$ \\ \hline
19 & $b^{(2c)}$ & $g^{(6d)} = 0$ & $g^{(6a)}, f^{(11b)}$ \\ \hline
20 & $t^{(15)}_{\mu}$ & $\tilde{G}^{(4)} = 0$ & $g^{(4a)}, g^{(4b)}$ \\ \hline
\end{tabular}
\begin{tabular}{|c|c|c|c|}
\hline
\hline
Nr. crt. & $t_{\mu}, b$ & Restrictions & Modified Terms \\ \hline \hline
21 & $b^{(2d)}$ & $g^{(4a)}_{jkl} = - (j \leftrightarrow k)$ & $f^{(17)}$ \\ \hline
22 & $b^{(2e)}$ & $f^{(5)} = 0$ & $g^{(2c)}$ \\ \hline
23 & $b^{(2f)}$ & $g^{(11a)} = 0$ & $f^{(16a)}$ \\ \hline
24 & $b^{(2g)}$ & $g^{(11b)} = 0$ & $f^{(16b)}$ \\ \hline
25 & $t^{(17)}_{\mu}$ & $\tilde{G}^{(11)} = 0$ & $g^{(11c)} - g^{(11f)}$ \\ \hline
26 & $b^{(2h)}$ & $g^{(11c)} = 0$ & $g^{(11e)}, f^{(8a)}$ \\ \hline
27 & $b^{(2i)}$ & $g^{(11f)} = 0$ & $g^{(11d)}, f^{(8b)}$ \\ \hline
28 & $t^{(13)}_{\mu}$ & $\tilde{G}^{(2)} = 0$ & $g^{(2a)}, g^{(2b)}$ \\ \hline
29 & $t^{(3)}_{\mu}$ & $\tilde{F}^{(3)} = 0$ & $f^{(3a)}, f^{(3b)}$ \\ \hline
30 & $b^{(2j)}$ & $f^{(3b)} = 0$ & $g^{(2b)}$ \\ \hline
31 & $b^{(2k)}$ & $f^{(3a)} = 0$ & $g^{(2a)}$, {\rm total div} \\ \hline
32 & $b^{(3a)} - b^{(3d)}$ & $g^{(8a)},\dots, g^{(8d)} = 0$ & $g^{(9a)} - g^{(9d)}$ \\ \hline
33 & $b^{(3e)} - b^{(3h)}$ & $g^{(15a,b,c,d)}_{jkl} = - (j \leftrightarrow k)$ & \\ \hline
34 & $b^{(4a)} - b^{(4d)}$ & $g^{(14a)} - g^{(14d)} = 0$ & $g^{(16a)} - g^{(16d)}, f^{(9b)}, f^{(9f)}, f^{(9c)}, f^{(9g)}$ \\ \hline
35 & $t^{(20)}_{\mu}$ & $\tilde{G}^{(14)} = 0$ & $g^{(14e)} - g^{(14h)}$ \\ \hline
36 & $b^{(4e)} - b^{(4h)}$ & $g^{(14e)} - g^{(14h)} = 0$ & $g^{(13c)} - g^{(13f)}, f^{(1a)}, f^{(1b)}$, {\rm total div} \\ \hline
37 & $b^{(4k)}, b^{(4l)}$ & $g^{(14i)} = g^{(14j)} = 0$ & $\tilde{G}^{(12)}, f^{(15f)}, f^{(15b)}, f^{(15h)}, f^{(15d)}$ \\ \hline
38 & $t^{(18)}_{\mu}$ & $\tilde{G}^{(12)} = 0$ & $g^{(12c)} - g^{(12f)}$ \\ \hline
39 & $t^{(19)}_{\mu}$ & $\tilde{G}^{(13)} = 0$ & $g^{(13c)} - g^{(13f)}$ \\ \hline
40 & $b^{(4i)}$ & $g^{(13a)} = 0$ & $f^{(4a)}$ \\ \hline
41 & $b^{(4j)}$ & $g^{(13b)} = 0$ & $f^{(4b)}$ \\ \hline
42 & $b^{(5a)}$ & $g^{(1b)}_{jkl} = (j \leftrightarrow l)$ & \\ \hline
43 & $b^{(5b)}$ & $g^{(1c)} = 0$ & $g^{(4b)}$\\ \hline
44 & $b^{(5c)}$ & $g^{(4c)}_{jkl} = (j \leftrightarrow l)$ & \\ \hline
45 & $b^{(5d)}$ & $g^{(3b)}_{jkl} = (j \leftrightarrow l)$ & \\ \hline
46 & $b^{(5e)}$ & $g^{(3c)} = 0$ & $g^{(2b)}$ \\ \hline
47 & $b^{(5f)}$ & $g^{(2c)}_{jkl} = (j \leftrightarrow l)$ & \\ \hline
48 & $t^{(1)}_{\mu}, t^{(4)}_{\mu} $ & $\tilde{F}^{(1)} = \tilde{F}^{(4)} = 0$ &
$f^{(1a)}, f^{(1b)}, f^{(4a)}, f^{(4b)}$ \\ \hline
49 & $t^{(5)}_{\mu}, t^{(6)}_{\mu} $ & $\tilde{F}^{(6)} = \tilde{F}^{(7)} = 0$ &
$f^{(6a)} - f^{(6d)}, f^{(7a)} - f^{(7d)}$ \\ \hline
50 & $t^{(7)}_{\mu}, t^{(8)}_{\mu} $ & $\tilde{F}^{(8)} = \tilde{F}^{(9)} = 0$ &
$f^{(8a)}, f^{(8b)}, f^{(9a)} - f^{(9h)}$ \\ \hline
51 & $t^{(9)}_{\mu}, t^{(11)}_{\mu} $ & $\tilde{F}^{(10)} = \tilde{F}^{(15)} = 0$ &
$f^{(10a)} - f^{(10d)}, f^{(15a)} - f^{(15h)}$ \\ \hline
\end{tabular}
\vskip 0.5cm
Using (\ref{gauge4}) we can compute the expression
$
d_{Q}t.
$
We exhibit it in the form
\begin{equation}
d_{Q}t = d_{Q}t^{(3)} + i u_{j} A_{j} + i v_{j} B_{j}
+ g_{j} A^{\prime}_{j} + g^{\dagger}_{j} B^{\prime}_{j}
+ i \zeta_{j} X_{j} + i \bar{\zeta}_{j} \bar{X}^{\prime}_{j} + {\rm total~div}
\end{equation}
where the first term is tri-linear in the ghost fields and the expressions
$
A_{j},A^{\prime}_{j},B_{j},B^{\prime}_{j},X_{j},X^{\prime}_{j}
$
are independent of the ghost fields. We impose (\ref{gauge}) and note that the first term above must be a total divergence by itself. The explicit expression is
\begin{eqnarray}
d_{Q}t^{(3)} = - g^{(5a)}_{jkl} g^{\dagger}_{j} g_{k} \tilde{u}_{l}
+ g^{(5b)}_{jkl} g_{j} g_{k} \tilde{u}_{l}
- g^{(5c)}_{jkl} g^{\dagger}_{j} g^{\dagger}_{k}\tilde{u}_{l}
\nonumber \\
- g^{(6a)}_{jkl} g^{\dagger}_{j} g_{k} \tilde{v}_{l}
+ g^{(6b)}_{jkl} g_{j} g_{k} \tilde{v}_{l}
- g^{(6c)}_{jkl} g^{\dagger}_{j} g^{\dagger}_{k} \tilde{v}_{l}
\nonumber \\
- g^{(7a)}_{jkl} g^{\dagger}_{j} u_{k} \tilde{g}_{l}
+ g^{(7b)}_{jkl} g_{j} u_{k} \tilde{g}_{l}
- g^{(7c)}_{jkl} g^{\dagger}_{j} u_{k} \tilde{g}^{\dagger}_{l}
+ g^{(7d)}_{jkl} g_{j} u_{k} \tilde{g}^{\dagger}_{l}
\nonumber \\
+ i~g^{(9a)}_{jkl} v_{j} g_{k} \tilde{g}_{l}
+ i~g^{(9b)}_{jkl} v_{j} g^{\dagger}_{k} \tilde{g}_{l}
+ i~g^{(9c)}_{jkl} v_{j} g_{k} \tilde{g}^{\dagger}_{l}
+ i~g^{(9d)}_{jkl} v_{j} g^{\dagger}_{k} \tilde{g}^{\dagger}_{l}
\nonumber \\
+ 2 i~g^{(10d)}_{jkl} (\zeta_{j}\sigma^{\mu}\partial_{\mu}\bar{\zeta}_{k}) \tilde{u}_{l}
- 2 i~g^{(10e)}_{jkl} (\partial_{\mu}\zeta_{j}\sigma^{\mu}\bar{\zeta}_{k}) \tilde{u}_{l}
\nonumber \\
+ 2 i~g^{(11d)}_{jkl} (\zeta_{j}\sigma^{\mu}\partial_{\mu}\bar{\zeta}_{k}) \tilde{v}_{l}
- 2 i~g^{(11e)}_{jkl} (\partial_{\mu}\zeta_{j}\sigma^{\mu}\bar{\zeta}_{k}) \tilde{v}_{l}
\nonumber \\
+ 2 i~g^{(12c)}_{jkl}~
(\partial_{\mu}\zeta_{j}\sigma^{\mu}\bar{\tilde{\zeta}}_{l}) u_{k}
+ 2 i~g^{(12d)}_{jkl}~
(\zeta_{j}\sigma^{\mu}\partial_{\mu}\bar{\tilde{\zeta}}_{l}) u_{k}
\nonumber \\
- 2i~g^{(12e)}_{jkl} (\partial_{\mu}\tilde{\zeta}_{l}\sigma^{\mu}\bar{\zeta}_{j}) u_{k}
- 2 i~g^{(12f)}_{jkl} (\tilde{\zeta}_{l}\sigma^{\mu}\partial_{\mu}\bar{\zeta}_{j}) u_{k}
\nonumber \\
+ 2 i~g^{(13c)}_{jkl} (\partial_{\mu}\zeta_{j}\sigma^{\mu}\bar{\tilde{\zeta}}_{l}) v_{k}
+ 2 i~g^{(13d)}_{jkl} (\zeta_{j}\sigma^{\mu}\partial_{\mu}\bar{\tilde{\zeta}}_{l}) v_{k}
\nonumber \\
- 2i~g^{(13e)}_{jkl} (\partial_{\mu}\tilde{\zeta}_{l}\sigma^{\mu}\bar{\zeta}_{j}) v_{k}
- 2 i~g^{(13f)}_{jkl} (\tilde{\zeta}_{l}\sigma^{\mu}\partial_{\mu}\bar{\zeta}_{j}) v_{k}
\nonumber \\
+ 2 i~g^{(15a)}_{jkl} (\zeta_{j}\zeta_{k}) \tilde{g}_{l}
+ 2 i~g^{(15b)}_{jkl} (\zeta_{j}\zeta_{k}) \tilde{g}_{l}^{\dagger}
- 2 i~g^{(15c)}_{jkl} (\bar{\zeta}_{j}\bar{\zeta}_{k}) \tilde{g}_{l}
- 2 i~g^{(15d)}_{jkl} (\bar{\zeta}_{j}\bar{\zeta}_{k}) \tilde{g}_{l}^{\dagger}
\nonumber \\
+ 2 i~g^{(16a)}_{jkl} (\zeta_{j}\tilde{\zeta}_{l}) g_{k}
+ 2 i~g^{(16b)}_{jkl} (\zeta_{j}\tilde{\zeta}_{l}) g_{k}^{\dagger}
- 2 i~g^{(16c)}_{jkl} (\bar{\zeta}_{j}\bar{\tilde{\zeta}}_{l}) g_{k}
- 2 i~g^{(16d)}_{jkl} (\bar{\zeta}_{j}\bar{\tilde{\zeta}}_{l}) g_{k}^{\dagger}
\end{eqnarray}
and it is easy to see that
$
d_{Q}t^{(3)}
$
is a total divergence {\it iff} it is identically zero. This amounts to
\begin{eqnarray}
g^{(p)} = 0 \qquad p = 5,6,7,9,10,11,15,16
\nonumber \\
g^{(12c)}_{jkl} = \quad\cdots\quad = g^{(12f)}_{jkl} = 0
\nonumber \\
g^{(13c)}_{jkl} = \quad\cdots\quad = g^{(13f)}_{jkl} = 0.
\end{eqnarray}
The expressions
$
A_{j},A^{\prime}_{j},B_{j},B^{\prime}_{j},X_{j},X^{\prime}_{j}
$
have the following form
\begin{eqnarray}
A_{j} = - 2 f^{(13)}_{jkl}~\partial^{\nu}v^{\mu}_{k}~\partial_{\mu}v_{l\nu}
\nonumber \\
+ (- f^{(13)}_{jkl} + f^{(13)}_{lkj} + f^{(13)}_{klj} + g^{(1a)}_{kjl})~
v^{\mu}_{k}~\partial_{\mu}\partial_{\nu}v^{\nu}_{l}
\nonumber \\
+ (f^{(13)}_{lkj} + g^{(1b)}_{kjl})~
\partial_{\mu}v^{\mu}_{k}~\partial_{\nu}v_{l}^{\nu}
\nonumber \\
- \left(f^{(14)}_{lkj} - {i\over 2}~g^{(12b)}_{kjl}\right)~
(\partial_{\mu}\lambda^{\prime}_{l}~\sigma^{\mu}~\bar{\lambda}_{k}^{\prime})
- \left(f^{(14)}_{lkj} + {i\over 2}~g^{(12a)}_{ljk}\right)~
(\lambda^{\prime}_{l}~\sigma^{\mu}~\partial_{\mu}\bar{\lambda}_{k}^{\prime})
\nonumber \\
- (f^{(15a)}_{klj} - f^{(15b)}_{klj})~
(\partial_{\mu}\chi_{l}~\sigma^{\mu\nu}~\partial_{\nu}\lambda_{k}^{\prime})
\nonumber \\
- (f^{(15c)}_{lkj} - f^{(15d)}_{lkj})~(\partial_{\mu}\bar{\chi}_{l}~\sigma^{\mu\nu}~
\partial_{\nu}\bar{\lambda}_{k}^{\prime})
\nonumber \\
- 2~(f^{(17)}_{jkl} - g^{(4a)}_{klj} + g^{(4b)}_{klj})~v^{\mu}_{k}~\partial_{\mu}d_{l}
- 2~(f^{(17)}_{jkl} - g^{(4a)}_{klj})~\partial_{\mu}v^{\mu}_{k}~d_{l}
- 2~g^{(4c)}_{klj}~d_{k}~d_{l}
\end{eqnarray}
\begin{eqnarray}
B_{j} = f^{(1a)}_{jkl}~
(\partial_{\mu}\chi_{k}~\sigma^{\mu\nu}~\partial_{\nu}\lambda^{\prime}_{l})
+ f^{(1b)}_{jkl}~
(\partial_{\mu}\bar{\chi}_{k}~\bar{\sigma}^{\mu\nu}~\partial_{\nu}\bar{\lambda}^{\prime}_{l})
\nonumber \\
+ f^{(2a)}_{jkl}~\partial^{\nu}v^{\mu}_{k}~\partial_{\mu}v_{l\nu}
+ f^{(2c)}_{jkl}~\epsilon_{\mu\nu\rho\sigma}~
\partial^{\mu}v^{\nu}_{k}~\partial^{\rho}v_{l}^{\sigma}
\nonumber \\
+ f^{(4a)}_{jkl}~
(\lambda^{\prime}_{k}~\sigma^{\mu}~\partial_{\mu}\bar{\lambda}_{l}^{\prime})
+f^{(4b)}_{jkl}~
(\partial_{\mu}\lambda^{\prime}_{k}~\sigma^{\mu}~\bar{\lambda}_{l}^{\prime})
\nonumber \\
- 2~g^{(2a)}_{jkl}~v^{\mu}_{k}~\partial_{\mu}d_{l}
- 2~g^{(2b)}_{jkl}~\partial_{\mu}v^{\mu}_{k}~d_{l}
- 2~g^{(2c)}_{jkl}~d_{k}~d_{l}
\end{eqnarray}
\begin{eqnarray}
A^{\prime}_{j} = f^{(6b)}_{ljk}~
(\partial_{\mu}\chi_{k}~\sigma^{\mu\nu}~\partial_{\nu}\chi_{l})
+ f^{(6d)}_{ljk}~
(\partial_{\mu}\bar{\chi}_{k}~\bar{\sigma}^{\mu\nu}~\partial_{\nu}\bar{\chi}_{l})
\nonumber \\
+ f^{(9e)}_{ljk}~(\partial_{\mu}\chi_{l}~\sigma^{\mu}~\bar{\lambda}^{\prime}_{k})
+ f^{(9f)}_{ljk}~(\chi_{l}~\sigma^{\mu}~\partial_{\mu}\bar{\lambda}^{\prime}_{k})
\nonumber \\
+ f^{(9g)}_{ljk}~(\partial_{\mu}\lambda^{\prime}_{l}~\sigma^{\mu}~\bar{\chi}_{k})
+ f^{(9h)}_{ljk}~(\lambda^{\prime}_{l}~\sigma^{\mu}~\partial_{\mu}\bar{\chi}_{k})
\nonumber \\
+ (- f^{(10b)}_{kjl} + f^{(10d)}_{jkl})~\partial_{\mu}\phi_{k}~v_{l}^{\mu}
- f^{(10b)}_{kjl}~\phi_{k}~\partial_{\mu}v^{\mu}_{l}
\nonumber \\
+ (f^{(10c)}_{jkl} - f^{(10c)}_{kjl})~\partial_{\mu}\phi^{\dagger}_{k}~v_{l}^{\mu}
- f^{(10c)}_{kjl}~\phi^{\dagger}_{k}~\partial_{\mu}v^{\mu}_{l}
\nonumber \\
+ f^{(11b)}_{kjl}~\phi_{k}~d_{l}
+ 2~f^{(11c)}_{jkl}~\phi^{\dagger}_{k}~d_{l}
\nonumber \\
+ f^{(12b)}_{lkj}~(\lambda^{\prime}_{k}~\lambda_{l}^{\prime})
+ f^{(12d)}_{lkj}~(\bar{\lambda}^{\prime}_{k}~\bar{\lambda}_{l}^{\prime})
\end{eqnarray}
\begin{eqnarray}
B^{\prime}_{j} = - f^{(6a)}_{ljk}~
(\partial_{\mu}\chi_{l}~\sigma^{\mu\nu}~\partial_{\nu}\chi_{k})
- f^{(6c)}_{ljk}~
(\partial_{\mu}\bar{\chi}_{l}~\bar{\sigma}^{\mu\nu}~\partial_{\nu}\bar{\chi}_{k})
\nonumber \\
+ f^{(9a)}_{ljk}~(\partial_{\mu}\chi_{l}~\sigma^{\mu}~\bar{\lambda}^{\prime}_{k})
- f^{(9b)}_{ljk}~(\chi_{l}~\sigma^{\mu}~\partial_{\mu}\bar{\lambda}^{\prime}_{k})
\nonumber \\
- f^{(9c)}_{ljk}~(\partial_{\mu}\lambda^{\prime}_{l}~\sigma^{\mu}~\bar{\chi}_{k})
+ f^{(9d)}_{ljk}~(\lambda^{\prime}_{l}~\sigma^{\mu}~\partial_{\mu}\bar{\chi}_{k})
\nonumber \\
+ (- f^{(10a)}_{jkl} + f^{(10a)}_{kjl})~\partial_{\mu}\phi_{k}~v_{l}^{\mu}
+ f^{(10a)}_{kjl}~\phi_{k}~\partial_{\mu}v^{\mu}_{l}
\nonumber \\
+ (- f^{(10b)}_{jkl} + f^{(10d)}_{kjl})~\partial_{\mu}\phi^{\dagger}_{k}~v_{l}^{\mu}
+ f^{(10d)}_{kjl}~\phi^{\dagger}_{k}~\partial_{\mu}v^{\mu}_{l}
\nonumber \\
- 2 f^{(11a)}_{jkl}~\phi_{k}~d_{l}
- f^{(11b)}_{jkl}~\phi^{\dagger}_{k}~d_{l}
\nonumber \\
- f^{(12a)}_{lkj}~(\lambda^{\prime}_{k}~\lambda_{l}^{\prime})
- f^{(12c)}_{lkj}~(\bar{\lambda}^{\prime}_{k}~\bar{\lambda}_{l}^{\prime})
\end{eqnarray}
\begin{eqnarray}
X_{j} = - 2 f^{(1a)}_{kjl}~
\partial_{\mu}C_{k}~\sigma^{\mu\nu}~\partial_{\nu}\lambda^{\prime}_{l}
\nonumber \\
- 4 f^{(6a)}_{jkl}~\sigma^{\mu\nu}~\partial_{\nu}\chi_{k}~\partial_{\mu}\phi_{l}
- 4 f^{(6b)}_{jkl}~\sigma^{\mu\nu}~\partial_{\nu}\chi_{k}~\partial_{\mu}\phi^{\dagger}_{l}
\nonumber \\
+ 2 (- f^{(7a)}_{jkl} - f^{(7b)}_{jkl} + f^{(7c)}_{jkl} + f^{(7d)}_{jkl})~ \sigma^{\mu}~\partial_{\mu}\partial_{\nu}\bar{\chi}_{k}~v^{\nu}_{l}
\nonumber \\
+ 2 (- f^{(7a)}_{jkl} + f^{(7c)}_{jkl})~ \sigma^{\mu}~\partial_{\nu}\bar{\chi}_{k}~\partial_{\mu}v^{\nu}_{l}
\nonumber \\
+ 2 (- f^{(7b)}_{jkl} + f^{(7c)}_{jkl})~ \sigma^{\mu}~\partial_{\mu}\bar{\chi}_{k}~\partial_{\nu}v^{\nu}_{l}
+ 2 f^{(7c)}_{jkl}~\sigma^{\mu}~\bar{\chi}_{k}~\partial_{\mu}\partial_{\nu}v^{\nu}_{l}
\nonumber \\
- 2 f^{(7e)}_{jkl} \epsilon_{\mu\nu\rho\lambda}
\sigma^{\nu} \partial^{\rho}\bar{\chi}_{k} \partial^{\mu}v_{l}^{\lambda}
\nonumber \\
+ 2 (- f^{(8a)}_{jkl} + f^{(8b)}_{jkl})~ \sigma^{\mu}~\partial_{\mu}\bar{\chi}_{k}~d_{l}
- 2 f^{(8a)}_{jkl}~ \sigma^{\mu}~\partial_{\mu}\bar{\chi}_{k}~d_{l}
\nonumber \\
+ 2 (f^{(9a)}_{jkl} + f^{(9b)}_{jkl})~
\sigma^{\mu}~\partial_{\mu}\bar{\lambda}^{\prime}_{k}~\phi_{l}
+ 2 f^{(9a)}_{jkl}~\sigma^{\mu}\bar{\lambda}^{\prime}_{k}~\partial_{\mu}\phi_{l}
\nonumber \\
+ 2 (- f^{(9e)}_{jkl} + f^{(9f)}_{jkl})~
\sigma^{\mu}~\partial_{\mu}\bar{\lambda}^{\prime}_{k}~\phi^{\dagger}_{l}
- 2 f^{(9e)}_{jkl}~\sigma^{\mu}\bar{\lambda}^{\prime}_{k}~\partial_{\mu}\phi^{\dagger}_{l}
\nonumber \\
+ (- 2 f^{(15a)}_{jkl} + 2 f^{(15b)}_{jkl})~
\sigma_{\mu\nu}~\partial^{\mu}\lambda^{\prime}_{k}~v^{\nu}_{l}
- 2 f^{(15a)}_{jkl}~\sigma_{\mu\nu}\lambda^{\prime}_{k}~\partial^{\mu}v^{\nu}_{l}
\nonumber \\
+ 2~(- f^{(15e)}_{jkl} + f^{(15f)}_{jkl})~
\partial_{\mu}\lambda^{\prime}_{k}~v^{\mu}_{l}
- 2 f^{(15e)}_{jkl}~\lambda^{\prime}_{k}~\partial_{\mu}v^{\mu}_{l}
\nonumber \\
+ 2 f^{(16a)}_{jkl} \lambda^{\prime}_{k}~d_{l}
\end{eqnarray}
\begin{eqnarray}
\bar{X}^{\prime}_{j} = 2 f^{(1b)}_{kjl}~
\partial_{\mu}C_{k}~\bar{\sigma}^{\mu\nu}~\partial_{\nu}\bar{\lambda}^{\prime}_{l}
\nonumber \\
+ 4 f^{(6c)}_{jkl}~\partial_{\nu}\bar{\chi}_{k}\bar{\sigma}^{\mu\nu}~\partial_{\mu}\phi_{l}
+ 4 f^{(6d)}_{jkl}~
\partial_{\nu}\bar{\chi}_{k}\bar{\sigma}^{\mu\nu}~\partial_{\mu}\phi^{\dagger}_{l}
\nonumber \\
+ 2 (- f^{(7a)}_{kjl} - f^{(7b)}_{kjl} + f^{(7c)}_{kjl} + f^{(7d)}_{kjl})~ \partial_{\mu}\partial_{\nu}\bar{\chi}_{k}\sigma^{\mu}~v^{\nu}_{l}
\nonumber \\
+ 2 (- f^{(7a)}_{kjl} + f^{(7d)}_{kjl})~
\partial_{\mu}\chi_{k} \sigma^{\mu}~\partial_{\nu}v^{\nu}_{l}
\nonumber \\
+ 2 (- f^{(7b)}_{kjl} + f^{(7d)}_{kjl})~
\partial_{\mu}\chi_{k}~\sigma^{\nu}~\partial_{\nu}v^{\mu}_{l}
+ 2 f^{(7d)}_{kjl}~\chi_{k}~\sigma^{\mu}~~\partial_{\mu}\partial_{\nu}v^{\nu}_{l}
\nonumber \\
- 2 f^{(7e)}_{kjl} \epsilon_{\mu\nu\rho\lambda}
\partial^{\mu}\chi_{k} \sigma^{\nu} \partial^{\rho} v_{l}^{\lambda}
\nonumber \\
+ 2 ( f^{(8a)}_{kjl} - f^{(8b)}_{kjl})~\sigma^{\mu}~\partial_{\mu}\bar{\chi}_{k}~d_{l}
- 2 f^{(8b)}_{jkl}~\partial_{\mu}\chi_{k}~\sigma^{\mu}~d_{l}
\nonumber \\
+ 2 (f^{(9c)}_{kjl} - f^{(9d)}_{kjl})~
\partial_{\mu}\lambda^{\prime}_{k}~\sigma^{\mu} \phi_{l}
- 2 f^{(9d)}_{kjl}~\lambda^{\prime}_{k}~\sigma^{\mu}~\partial_{\mu}\phi_{l}
\nonumber \\
+ 2 (f^{(9g)}_{kjl} - f^{(9h)}_{kjl})~
\partial_{\mu}\lambda^{\prime}_{k}~\sigma^{\mu} \phi^{\dagger}_{l}
- 2 f^{(9h)}_{kjl}~\lambda^{\prime}_{k}~\sigma^{\mu}~\partial_{\mu}\phi^{\dagger}_{l}
\nonumber \\
+ 2 (f^{(15c)}_{jkl} - f^{(15d)}_{kjl})~
\partial^{\mu}\bar{\lambda}^{\prime}_{k}~\bar{\sigma}_{\mu\nu} v^{\nu}_{l}
+ 2 f^{(15c)}_{jkl}~
\bar{\lambda}^{\prime}_{k}~\bar{\sigma}_{\mu\nu}~\partial^{\mu}v^{\nu}_{l}
\nonumber \\
- 2 (f^{(15g)}_{jkl} - f^{(15h)}_{kjl})~
\partial_{\mu}\bar{\lambda}^{\prime}_{k}~v^{\mu}_{l}
+ 2 f^{(15g)}_{jkl}~\bar{\lambda}^{\prime}_{k}~\partial_{\mu}v^{\mu}_{l}
\nonumber \\
- 2 f^{(16b)}_{jkl} \bar{\lambda}^{\prime}_{k}~d_{l}
\end{eqnarray}
If we make a general ansatz
\begin{equation}
T^{\mu} = u_{j} T^{\mu}_{j} + (\partial_{\nu}u_{j}) T^{\mu\nu}_{j}
+ (\partial_{\nu}\partial_{\rho}u_{j}) T^{\mu\nu\rho}_{j}
\end{equation}
we can prove, as in the Yang-Mills case, that we must have in fact
\begin{equation}
A_{j} = 0 \quad A_{j}^{\prime} = 0 \quad B_{j} = 0 \quad B_{j}^{\prime} = 0 \quad
X_{j} = 0 \quad X_{j}^{\prime} = 0.
\end{equation}
In particular we have from the coefficient
of
$
\partial^{\nu}v^{\mu}_{k}~\partial_{\mu}v_{l\nu}
$
that
\begin{equation}
f^{(13)}_{jkl} = - f^{(13)}_{kjl};
\label{a7}
\end{equation}
together with
\begin{equation}
g^{(1b)}_{jkl} = g^{(1b)}_{kjl} ;
\label{s1}
\end{equation}
this amounts to the total antisymmetry of the expression
$
f^{(13)}_{jkl}
$
and the Yang-Mills solution emerges. We also have
\begin{equation}
g^{(12a)}_{jkl} = 2 i~f^{(14)}_{jlk} \qquad
g^{(12b)}_{jkl} = 2 i~f^{(14)}_{ljk}
\end{equation}
and the second solution emerges.
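The total antisymmetry claimed here follows because the two swap conditions (\ref{a1}) and (\ref{a7}) involve transpositions that generate the full permutation group of the three indices. This can be verified with a small numerical sketch (illustrative Python; the dimension $n=3$ is arbitrary):

```python
import numpy as np
from itertools import product

n = 3                                   # arbitrary number of field labels
N = n ** 3

# linear constraints: f_{jkl} + f_{lkj} = 0  and  f_{jkl} + f_{kjl} = 0
rows = []
for j, k, l in product(range(n), repeat=3):
    for swapped in [(l, k, j), (k, j, l)]:
        row = np.zeros(N)
        row[np.ravel_multi_index((j, k, l), (n, n, n))] += 1.0
        row[np.ravel_multi_index(swapped, (n, n, n))] += 1.0
        rows.append(row)

_, s, vh = np.linalg.svd(np.array(rows))
null = vh[np.sum(s > 1e-10):]           # basis of the space of solutions

# in 3 dimensions the totally antisymmetric tensors form a 1d space
assert null.shape[0] == 1
f13 = null[0].reshape(n, n, n)
# antisymmetry under the remaining swap k <-> l comes for free
assert np.allclose(f13, -f13.transpose(0, 2, 1))
```

The one-dimensional solution space is spanned by the totally antisymmetric tensor, i.e. exactly the Yang-Mills structure-constant pattern.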
One can make a double check of the computation as follows: one eliminates only the terms of the type
$
\tilde{F}, \tilde{G}
$
and does not use the co-boundaries. So, one works with the whole set of 132 terms of the type
$
F, G.
$
As a result we find 35 solutions of which 33 are trivial and the other two are those already obtained above.
\newpage
\section{Introduction}
\cite{murray_physiological_1926} proposed the first law for the optimal design of blood vessels, based on a trade-off between the power needed to make blood circulate in the vessel and the metabolic power needed to maintain blood. His work built on the original research of \cite{hess_prinzip_1914}. The law is formulated using Poiseuille's regime in cylindrical vessels,
thus it accounts for viscous effects only
and neglects any perturbations due to fluid divergence or convergence at the bifurcation points.
Blood flow rate is assumed constant to account for the fact that the vessels have to feed a downstream organ whose needs are independent of the vessel geometry.
The optimal configuration corresponds to the blood flow rate being proportional to the cube of the radius of the blood vessel. The well-known corollary for a bifurcation states that the cubed radius of the parent vessel equals the sum of the cubed radii of the daughter vessels.
Murray's law is independent of the amount of blood flow rate; thus it does not depend on the functioning regime of the downstream organ, at least within the limits of its hypotheses.
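Murray's trade-off can be made concrete with a short numerical sketch (illustrative Python; the viscosity and metabolic-cost values are placeholder orders of magnitude, not fitted data): minimizing Poiseuille dissipation plus a maintenance cost proportional to vessel volume yields $Q\propto r^{3}$, and flow conservation then gives the cube law at a bifurcation.

```python
import numpy as np

mu    = 3.0e-3   # dynamic viscosity [Pa s]   (illustrative value)
alpha = 1.0e3    # metabolic cost [W/m^3]     (illustrative value)

def power(r, Q, L=1.0):
    """Poiseuille dissipation + metabolic maintenance cost of one vessel."""
    return 8.0 * mu * L * Q**2 / (np.pi * r**4) + alpha * np.pi * r**2 * L

def r_opt(Q):
    """Stationary point of power(r): Q proportional to r**3 (Murray)."""
    return (16.0 * mu * Q**2 / (alpha * np.pi**2)) ** (1.0 / 6.0)

# grid check that r_opt really minimises the power
Q = 1.0e-6
rs = np.geomspace(1e-4, 1e-2, 20001)
assert abs(rs[np.argmin(power(rs, Q))] / r_opt(Q) - 1.0) < 1e-3

# cube law at a bifurcation with flow conservation Q0 = Q1 + Q2
Q1, Q2 = 0.3e-6, 0.7e-6
assert np.isclose(r_opt(Q1)**3 + r_opt(Q2)**3, r_opt(Q1 + Q2)**3)
```

Note that the cube law holds independently of the numerical values of the two cost parameters, in line with the regime-independence stated above.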
In the seventies, \cite{zamir_optimality_1976} extended this law to account for bifurcation angles, and he expressed Murray's law in terms of the wall shear stress being constant independently of the vessel size \citep{zamir_shear_1977}.
While mid-level blood arterial circulation meets the conditions for Murray's law, this is not the case for the larger vessels, where inertia and/or turbulence occur \citep{uylings_optimization_1977}, nor for microcirculation, where wall shear stress decreases with vessel size
\citep{sherman_connecting_1981}.
Different hypotheses were developed to explain the deviations from Murray's law observed in microcirculation; for example, \cite{taber_optimization_1998} proposed to add an energy cost related to smooth muscles.
Later,
\cite{alarcon_design_2005} used semi-empirical laws from
\cite{pries_resistance_1994}
and showed that wall shear stress behavior in microcirculation can be explained by the F\aa hr\ae us-Lindqvist effect. The F\aa hr\ae us-Lindqvist effect is a phase separation effect occurring in small blood vessels that makes blood viscosity a non-monotonic function of vessel radius \citep{fahraeus_suspension_1929, pries_design_1995, fung_biomechanics_1997, pries_blood_2008}.
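The diameter dependence referred to here can be sketched with the in vivo viscosity fit commonly quoted from \cite{pries_resistance_1994}; the coefficients below are reproduced for illustration only and should be checked against the original paper.

```python
import numpy as np

def eta_rel(D, H):
    """Relative apparent blood viscosity vs tube diameter D (micrometres)
    and discharge hematocrit H, as commonly quoted from the in vivo fit of
    Pries et al. (1994).  Coefficients reproduced here for illustration."""
    eta45 = 6.0 * np.exp(-0.085 * D) + 3.2 - 2.44 * np.exp(-0.06 * D**0.645)
    frac = 1.0 / (1.0 + 1e-11 * D**12)
    C = (0.8 + np.exp(-0.075 * D)) * (frac - 1.0) + frac
    return 1.0 + (eta45 - 1.0) * ((1.0 - H)**C - 1.0) / ((1.0 - 0.45)**C - 1.0)

# viscosity is a non-monotonic function of the diameter ...
assert eta_rel(7.0, 0.45) > eta_rel(40.0, 0.45) < eta_rel(300.0, 0.45)
# ... and increases with hematocrit at fixed diameter
assert eta_rel(100.0, 0.60) > eta_rel(100.0, 0.45)
```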
Similarly, studies were made to understand why Murray's law does not apply in large blood vessels, where inertia and turbulence play an important role. These studies were based on a modified formulation of the power associated with fluid circulation, shifting the radius exponent from $3$ in the laminar case down to $2.33$ for fully turbulent flow \citep{uylings_optimization_1977}. Finally, generalizations of Murray's law have also been developed to account for the non-local properties of tree structures; they extend the predictions using empirical exponents based on the fractal nature of the biological networks \citep{zhou_design_1999, kassab_scaling_2007}. These exponents aggregate rich information about the physiological configuration, amongst which the non-linear behavior of blood viscosity. These last studies should however be considered with care, since the empirical data injected into the model actually depend on the variables that are optimized, which might bias the predictions.
The next step toward fully extending Murray's law to blood circulation is to add the dependence of blood viscosity on flow amplitude. Indeed, blood is a shear-thinning fluid and its local viscosity depends on the local shear rate in the vessel. The local shear rate is a function of both the blood flow amplitude in the vessel and the vessel radius. Thus, in addition to the red blood cell volume fraction and the vessel radius, the equivalent viscosity in blood microcirculation is expected to depend also on the shear rates in the vessel.
First, we show that if the F\aa hr\ae us effect does not occur, the equivalent viscosity of the fluid in a vessel depends only on the mean shear rate and the hematocrit in the branch. If the F\aa hr\ae us effect occurs, we show that the equivalent viscosity also becomes dependent on the vessel radius. Then we apply Murray's optimal design to both cases, with and without F\aa hr\ae us effect, and derive the corresponding laws for the optimal configuration of a vessel and of a bifurcation. Finally, we study how these new laws affect the optimal geometries of fractal tree structures, using inherent properties of the mean shear rate variation inside a fractal tree.
\section{Mathematical model}
Since Murray's law was formulated in the frame of the cardiovascular system, we chose to work with
a model that mimics blood rheology.
However, blood exhibits a complex thixotropic behavior with a yield stress to overcome for blood to flow \citep{merrill_rheology_1969,bureau_rheological_1980,apostolidis_modeling_2014}. Moreover, blood rheology is affected not only by the red blood cell concentration but also by its protein composition, such as the fibrinogen concentration \citep{merrill_yield_1969}. We did not want to account for such refined behaviors of blood rheology, since our study focuses on the interaction between Murray's optimization process and a shear rate-dependent rheology. Thus we chose to work with Qu\'emada's fluid model, which fits our needs well and was initially proposed for modeling blood rheology.
Those fluids are well documented \citep{quemada_towards_1984} and have been intensively studied and validated as good approximations for blood modeling \citep{cokelet_rheology_1987, neofytou_comparison_2004, marcinkowska-gapinska_comparison_2007,sriram_non-newtonian_2014}. The viscosity in Qu\'emada's fluid model depends on the local shear rate $\dot{\gamma}$ and on the local red blood cells volume fraction $H$. Complete details about Qu\'emada's model are given in section I of supplementary materials.
Notice that other models, such as Casson's model could also have been used in this study \citep{casson_flow_1959}.
In Qu\'emada's model, the dependence of the fluid viscosity on shear rates divides into three parts, as shown on figure \ref{viscosityvs} (blood case): a plateau of high viscosity at low shear rates (blood: $<10^{-3} \ s^{-1}$); a sharp decrease of viscosity at medium shear rates (blood: from $10^{-3} \ s^{-1}$ to $1 \ s^{-1}$); and a plateau of low viscosity at high shear rates (blood: $> 1 \ s^{-1}$).
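These three regimes can be sketched numerically. The snippet below implements Qu\'emada's viscosity law; the parameter values (plasma viscosity, intrinsic viscosities $k_0$ and $k_\infty$, critical shear rate) are illustrative assumptions chosen to mimic the qualitative shape of the blood curve, not a validated fit.

```python
import math

# Quemada viscosity law (illustrative parameters, NOT a validated blood fit):
#   mu(gdot, H) = mu_p * (1 - 0.5*k*H)^-2,
#   k = (k0 + k_inf*sqrt(gdot/gc)) / (1 + sqrt(gdot/gc)).
MU_P = 1.2e-3                    # plasma viscosity [Pa.s] (assumed)
K0, K_INF, GC = 4.2, 1.8, 1.88   # intrinsic viscosities and critical shear rate (assumed)

def quemada_viscosity(gdot, H=0.45):
    """Apparent viscosity [Pa.s] at shear rate gdot [1/s] and RBC volume fraction H."""
    s = math.sqrt(gdot / GC)
    k = (K0 + K_INF * s) / (1.0 + s)
    return MU_P * (1.0 - 0.5 * k * H) ** -2

# The three regimes described in the text:
mu_low = quemada_viscosity(1e-4)    # high-viscosity plateau (low shear rates)
mu_mid = quemada_viscosity(1.0)     # steep transition (medium shear rates)
mu_high = quemada_viscosity(1e3)    # low-viscosity plateau (high shear rates)
```

With these assumed parameters the viscosity spans roughly two orders of magnitude between the two plateaus, as in the blood rheogram of figure \ref{viscosityvs}.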
\begin{figure}[h!]
\centering
\includegraphics[height=4.5cm]{figure1.pdf}
\caption{Qu\'emada's model of viscosity \citep{quemada_towards_1984, cokelet_rheology_1987}. Black curve: viscosity dependence on the shear rate (blood case, $H = 0.45$). Red dashed line: optimal mean shear rate predicted by the model for blood large circulation $\left<\dot{\gamma}\right>_{noF}^* \sim 12.5 \ s^{-1}$.}
\label{viscosityvs}
\end{figure}
We assume the vessels to be cylindrical and the flow to be laminar, axi-symmetric and fully developed in all the vessels. The blood flow rate through a vessel is assumed constant, in order to mimic a downstream organ which is fed by the vessel and whose needs are independent of the delivering structure. Then, combining Qu\'emada's formula for viscosity with the fluid mechanics equations in a vessel, we use the equivalent viscosity $\mu_{eq}$ of our fluid in the vessel. It is defined as the viscosity a Newtonian fluid would need in order to dissipate the same amount of viscous energy as our fluid inside that vessel.
Interestingly, when the F\aa hr\ae us effect can be neglected, the equivalent viscosity is driven only by the mean shear rate in the branch $\left<\dot{\gamma}\right>$ and by the red blood cell volume fraction $H_D$: $\mu_{eq}(\left<\dot{\gamma}\right>,H_D) = \frac{G(\left<\dot{\gamma}\right>,H_D)}{8 \left<\dot{\gamma}\right>}$, with $G$ a smooth function. Explanations about why and how these quantities are related are detailed in section II-A of supplementary materials.
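This independence from the vessel radius can be illustrated numerically. The sketch below uses a Cross-type shear-thinning law as a simple stand-in for Qu\'emada's model (all parameter values are assumptions): the flow rate is integrated from the linear stress profile $\tau(\rho) = \tau_w \rho/r$ in a tube, and the resulting equivalent viscosity turns out to depend on the mean shear rate only.

```python
import math

# Equivalent viscosity of a generalized-Newtonian fluid in a tube.
# Cross-type law as a stand-in for Quemada's model (illustrative values):
MU0, MU_INF, GC = 0.05, 0.004, 5.0   # Pa.s, Pa.s, 1/s (assumed)

def viscosity(gdot):
    return MU_INF + (MU0 - MU_INF) / (1.0 + gdot / GC)

def shear_rate_from_stress(tau):
    """Invert tau = viscosity(gdot)*gdot by bisection (monotone in gdot)."""
    lo, hi = 0.0, tau / MU_INF       # viscosity >= MU_INF brackets the root
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if viscosity(mid) * mid < tau:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def flow_rate(r, tau_w, n=100):
    """F = pi * integral_0^r rho^2 gdot(tau_w*rho/r) drho (fully developed flow)."""
    h, total = r / n, 0.0
    for i in range(n):
        rho = (i + 0.5) * h          # midpoint rule
        total += rho ** 2 * shear_rate_from_stress(tau_w * rho / r)
    return math.pi * total * h

def equivalent_viscosity(r, gbar):
    """mu_eq = pi*dP*r^4/(8*F*l) = tau_w/(4*<gdot>) at mean shear rate gbar = F/(pi r^3)."""
    target_F = math.pi * r ** 3 * gbar
    lo, hi = 1e-12, 4.0 * MU0 * gbar  # wall stress bracket (Newtonian bound)
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if flow_rate(r, mid) < target_F:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi) / (4.0 * gbar)

# Same mean shear rate, very different radii: mu_eq is the same.
mu_small_r = equivalent_viscosity(50e-6, 10.0)
mu_large_r = equivalent_viscosity(200e-6, 10.0)
# mu_eq still decreases with the mean shear rate (shear thinning):
mu_low_g = equivalent_viscosity(100e-6, 1.0)
mu_high_g = equivalent_viscosity(100e-6, 100.0)
```

The radius cancels out of the dimensionless integral, which is the mechanism behind the dependence on $\left<\dot{\gamma}\right>$ (and $H_D$, fixed here) only.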
The F\aa hr\ae us effect occurs mainly in vessels whose diameters are smaller than $300 \ \mu m$, see for example \cite{pries_blood_2008} or section III of supplementary materials. For such vessels, the independence of the equivalent viscosity from the vessel radius $r$ is lost. As a consequence, the equivalent viscosity can be expressed using a smooth function $K$: $\mu_{eq}(\left<\dot{\gamma}\right>,r,H_D) = \frac{K(\left<\dot{\gamma}\right>,r,H_D)}{8 \left<\dot{\gamma}\right>}$, see details in section II-B of supplementary materials.
\section{Extending Murray's law to Qu\'emada's fluid model}
In this section, we extend Murray's law to non-Newtonian fluids that can be modeled with Qu\'emada's fluid model. Thus we take into account the dependence of viscosity on shear rates. We start with the assumption of no phase separation effects in the fluid. Next, we study the case where F\aa hr\ae us effect occurs in the fluid.
\subsection{Without phase separation effects}
Now we show how these principles allow us to extend Murray's law to any fluid that can be modeled with Qu\'emada's model (and consequently to blood) as long as F\aa hr\ae us effects are negligible. Let us consider a fluid that can be modeled with Qu\'emada's model, and let us assume its flow rate $F$ goes through a vessel with radius $r$ and length $l$. The dissipated power $W$ defined by Murray \citep{hess_prinzip_1914,murray_physiological_1926, alarcon_design_2005} divides into two parts: $W = W_H + W_M$, where $W_H$ is the power dissipated by the flow and $W_M$ is the energy consumption rate of the fluid (in the case of blood, this is a metabolic consumption rate). With the above results and denoting $\alpha_b$ the energy consumption rate per unit volume of fluid, we have
\begin{equation}
\label{DP1}
W_H = \frac{8 F^2 \mu_{eq}(\left<\dot{\gamma}\right>) l}{\pi r^4} \ \text{ and } \ W_M = \alpha_b \pi r^2 l
\end{equation}
The design principle proposed by Murray is to search for a minimum of $W$ with respect to the radius of the vessel, thus solving $\frac{\partial W}{\partial r} = 0$. In our case, expanding this last equality leads to a nonlinear equation which depends only on $\left<\dot{\gamma}\right>$, see section III of supplementary materials.
As a consequence, the optimal configuration is reached when the mean shear rate in the vessel solves the preceding equation, independently of the fluid flow rate $F$ or the vessel dimensions $r$ and $l$. Interestingly, we observed that the optimal shear rate is as small as possible, to limit viscous dissipation, while remaining high enough to keep the viscosity near its lowest values. In the case of blood, the optimal shear rate is $\left<\dot{\gamma}\right>_{noF}^* \sim 12.5 \ s^{-1}$; it can easily be located on figure \ref{viscosityvs}: it stands at the left end of the low viscosity plateau (red dashed line).
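The independence of the optimal mean shear rate from $F$ can be checked with a direct minimization. In the sketch below, the equivalent viscosity is approximated by a Cross-type function of the mean shear rate (a simplifying assumption; all values, including the maintenance power density, are illustrative): minimizing $W/l$ over $r$ for two very different flow rates returns the same optimal mean shear rate.

```python
import math

# W(r)/l = 8 F^2 mu_eq(<g>) / (pi r^4) + ALPHA pi r^2, with <g> = F/(pi r^3).
# Cross-type stand-in for the equivalent viscosity (illustrative values):
MU0, MU_INF, GC = 0.05, 0.004, 5.0   # Pa.s, Pa.s, 1/s (assumed)
ALPHA = 18.7                          # maintenance power density [J m^-3 s^-1] (assumed)

def mu_eq(g):
    return MU_INF + (MU0 - MU_INF) / (1.0 + g / GC)

def power_per_length(r, F):
    g = F / (math.pi * r ** 3)
    return 8.0 * F ** 2 * mu_eq(g) / (math.pi * r ** 4) + ALPHA * math.pi * r ** 2

def optimal_shear_rate(F):
    """Minimize W/l over r (coarse log grid, then a refined grid around the minimum)."""
    lo, hi = 1e-5, 1e-2               # radius bracket [m]
    r_best = lo
    for _ in range(2):
        radii = [lo * (hi / lo) ** (i / 400.0) for i in range(401)]
        r_best = min(radii, key=lambda r: power_per_length(r, F))
        lo, hi = r_best / 1.05, r_best * 1.05
    return F / (math.pi * r_best ** 3)

g1 = optimal_shear_rate(1e-9)   # two very different flow rates [m^3/s]
g2 = optimal_shear_rate(1e-7)
```

Both minimizations land on the same optimal mean shear rate (close to $10 \ s^{-1}$ for these assumed parameters), even though the optimal radii differ.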
Let us apply this result to a bifurcation where the flow rate in the parent branch is $F_p$ and the flow rates in the daughter branches are $F_1$ and $F_2$. Their respective radii are denoted $r_p$, $r_1$ and $r_2$. If $\left<\dot{\gamma}\right>^*_{noF}$ is the solution of $\frac{\partial W}{\partial r} = 0$, then $\left<\dot{\gamma}\right>^*_{noF}$ is the optimal mean shear rate in the three vessels and $\left<\dot{\gamma}\right>^*_{noF} = \frac{F_p}{\pi r_p^3} = \frac{F_1}{\pi r_1^3} = \frac{F_2}{\pi r_2^3}$. By flow conservation, we have an additional equation, $F_p = F_1 + F_2$. Combining these equations finally leads to Murray's original law: $r_p^3 = r_1^3 + r_2^3$.
The method and these results are general: they apply to any fluid whose viscosity is a monotonic function of the shear rate in the vessel.
\subsection{With F\aa hr\ae us effect.}
The F\aa hr\ae us effect in blood is a phase separation effect due to the biphasic nature of blood: red blood cells tend to migrate toward the centre of the vessels, and blood near the vessel wall is depleted in red blood cells. The F\aa hr\ae us effect contributes to the F\aa hr\ae us-Lindqvist effect \citep{fahraeus_suspension_1929,fung_biomechanics_1997}. It becomes non-negligible for vessels with diameters smaller than $300 \ \mu m$, see for example \citep{pries_blood_2008}. To estimate the role of F\aa hr\ae us effects on Murray's law in such vessels, we approximated this effect by assuming that a red-blood-cell depleted layer stands near the wall of the vessels, see section II-B of supplementary materials. The thickness of this depleted layer, and consequently the whole fluid dynamics in the vessel, depend on the branch radius \citep{pries_blood_2008}. As a consequence, the equivalent viscosity in a branch where the F\aa hr\ae us effect occurs no longer depends on the mean shear rate only; it also becomes dependent on the radius of the branch: $\mu_{eq}(\left<\dot{\gamma}\right>,r) = \frac{K(\left<\dot{\gamma}\right>,r)}{8 \left<\dot{\gamma}\right>}$, with $K$ a smooth function. In this case, the power dissipated in the fluid used for deducing Murray's law is $W_H = \frac{8 F^2 \mu_{eq}(\left<\dot{\gamma}\right>,r) l}{\pi r^4}$.
The energy consumption rate of the fluid $W_M$ remains $W_M = \alpha_b \pi r^2 l$. As before, the radius that minimizes the total work $W = W_H + W_M$ solves $\frac{\partial W}{\partial r} = 0$.
Expanding this last equation shows that the minimum power is reached on a curve $r \rightarrow \left<\dot{\gamma}\right>^*_F(r)$ that depends on the red blood cell volume fraction $H_D$. As an example, the curve computed in the case of blood is plotted on figure \ref{gpmoyFPlot}. When $r$ is large enough, say larger than about $300 \ \mu m$ \citep{pries_blood_2008}, the dependence on $r$ is lost and $\left<\dot{\gamma}\right>^*_F(r) = \left<\dot{\gamma}\right>^*_{noF}$.
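The radius dependence introduced by the depleted layer can be sketched with a classical two-layer model: two concentric Newtonian fluids in a tube, a red-blood-cell core and a plasma wall layer of fixed thickness. The layer thickness and the two viscosities below are illustrative assumptions, not fitted values.

```python
import math  # math is unused below but kept for consistency with the other sketches

# Two-layer (core + cell-depleted wall layer) sketch of the Fahraeus effect.
# For two concentric Newtonian fluids in a tube:
#   F = (pi*dP/(8*l)) * ((r^4 - rc^4)/mu_wall + rc^4/mu_core),
# hence mu_eq = r^4 / ((r^4 - rc^4)/mu_wall + rc^4/mu_core).
MU_CORE, MU_WALL = 3.5e-3, 1.2e-3   # Pa.s (assumed: RBC core vs plasma layer)
DELTA = 1.8e-6                      # depleted-layer thickness [m] (assumed, fixed)

def mu_eq_two_layer(r):
    rc = max(r - DELTA, 0.0)        # core radius
    return r ** 4 / ((r ** 4 - rc ** 4) / MU_WALL + rc ** 4 / MU_CORE)

# Relative viscosity (equivalent viscosity over plasma viscosity):
rel_small = mu_eq_two_layer(10e-6) / MU_WALL    # 10 um radius: strong lubrication
rel_large = mu_eq_two_layer(300e-6) / MU_WALL   # large radius: effect fades
```

At small radii the wall layer occupies a larger fraction of the cross section, which lowers the equivalent viscosity, reproducing the lubrication trend described in the text; at large radii the equivalent viscosity tends back to the core value.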
\begin{figure}[h!]
\centering
\includegraphics[height=4.5cm]{figure2.pdf}
\caption{In black, the optimal mean shear rate in a branch as a function of the branch radius with F\aa hr\ae us effect (function $r \rightarrow \left<\dot{\gamma}\right>^*_{F}(r)$ in the text); in red dotted line, the optimal mean shear rate without F\aa hr\ae us effect. Blood case: $H_D = 0.45$ and $\alpha_b = 77.8 \ J.m^{-3}.s^{-1}$ \citep{taber_optimization_1998,alarcon_design_2005}.}
\label{gpmoyFPlot}
\end{figure}
Let us now consider a bifurcation where the mean shear rate in the parent branch is $\left<\dot{\gamma}\right>_p$ and the mean shear rates in the two daughter branches are $\left<\dot{\gamma}\right>_1$ and $\left<\dot{\gamma}\right>_2$. Their respective radii are denoted $r_p$, $r_1$ and $r_2$. Using flow conservation through the bifurcation, we can deduce that the optimal radii occur for the following equality:
\begin{equation}
\label{murrayLawF}
\left<\dot{\gamma}\right>^*_F(r_p) \ r_p^3 = \left<\dot{\gamma}\right>^*_F(r_1) \ r_1^3 + \left<\dot{\gamma}\right>^*_F(r_2) \ r_2^3
\end{equation}
As expected, when F\aa hr\ae us effect vanishes, the previous equation simplifies into the original Murray's law $r_p^3 = r_1^3 + r_2^3$.
\section{Extended Murray's law in fractal trees}
We are now interested in a tree structure built as a cascade of cylinders. The branches divide regularly into $n$ smaller identical branches. The number of divisions between a vessel and the root vessel of the tree defines its generation index. The tree root stands at generation $0$ and the tree limbs end at generation $N$. The sizes of the branches are defined through the homothety ratio, or scaling factor, $h$ \citep{mauroy_optimal_2004, mauroy_influence_2010}, which corresponds to the relative change of vessel diameters and lengths at a bifurcation, i.e. if $r_i$ is the radius of the vessels of generation $i$, then the radius of the vessels of generation $i+1$ is $r_{i+1} = h \times r_i$.
It has been shown that bifurcation points in tree structures affect the fluid dynamics in a complex way, even in the Newtonian case, see for example \cite{pedley_energy_1970}. However, for the sake of keeping the model tractable, we stick to Murray's initial hypothesis and assume that the fluid dynamics is not perturbed at the bifurcations and that the velocity profiles remain fully developed all along the branches. The blood flow rate through the tree is assumed constant, to mimic the needs of a downstream organ that are independent of the network geometry.
\begin{figure}[h!]
\centering
\includegraphics[height=5cm]{figure3.pdf}
\caption{Tree network structure with $n=2$ and $N = 6$. The tree is dichotomous and the vessel sizes decrease at each generation: their diameters and lengths are multiplied by the homothety factor $h<1$ after each bifurcation.}
\label{treegeom}
\end{figure}
\subsection{Mean shear rates}
Denoting $F_{i}$ the blood flow in a branch of generation $i$ and $S_i = \pi r_i^2$ the surface of its circular cross section, the mean shear rate in that branch is $\left<\dot{\gamma}_i\right> = \frac{F_{i}/S_i}{r_i}=\frac{F_{i}}{\pi r_i^3}$. Since the tree branches divide into $n$ smaller identical branches, the total blood flow in a branch of generation $i$ is $n$ times the total blood flow in a branch of the next generation $i+1$. Then the mean shear rate in a branch of generation $i+1$ is $\left<\dot{\gamma}_{i+1}\right> = \frac{F_{i+1}}{\pi r_{i+1}^3} = \frac{F_{i}}{\pi r_i^3} \frac{1}{n h^3} = \left<\dot{\gamma}_{i}\right> \frac{1}{n h^3}$. Thus, the regularity of the structure (its fractality) also induces a scaling law on the mean shear rates in the tree. This scaling law depends only on the homothety ratio $h$. Depending on the position of the factor $\frac{1}{n h^3}$ relative to $1$, the mean shear rate has different behaviors:
\begin{enumerate}
\item If $h>\left(\frac{1}{n}\right)^{1/3}$, then the mean shear rate decreases along the generations, consequently blood viscosity tends to increase along the generations;
\item If $h=\left(\frac{1}{n}\right)^{1/3}$, then the mean shear rate remains constant along the generations, and so does blood viscosity;
\item If $h<\left(\frac{1}{n}\right)^{1/3}$, then the mean shear rate increases along the generations, consequently blood viscosity tends to decrease along the generations.
\end{enumerate}
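The three regimes above follow directly from the recursion $\left<\dot{\gamma}_{i+1}\right> = \left<\dot{\gamma}_{i}\right>/(n h^3)$ and can be sketched as follows (the root shear rate $640 \ s^{-1}$ and the two off-critical values of $h$ are illustrative):

```python
# Mean shear rate cascade in a fractal tree: <g>_{i+1} = <g>_i / (n h^3).
def mean_shear_rates(g0, n, h, generations):
    rates = [g0]
    for _ in range(generations):
        rates.append(rates[-1] / (n * h ** 3))
    return rates

n = 2
g_decay = mean_shear_rates(640.0, n, 0.85, 10)                # h > (1/n)^(1/3): rates decrease
g_const = mean_shear_rates(640.0, n, (1 / n) ** (1 / 3), 10)  # critical h: rates constant
g_growth = mean_shear_rates(640.0, n, 0.70, 10)               # h < (1/n)^(1/3): rates increase
```

For $n=2$ the critical ratio is $(1/2)^{1/3} \approx 0.7937$, so $h=0.85$ and $h=0.70$ bracket the three cases of the enumeration.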
\begin{figure}[h!]
\centering
\includegraphics[height=4.5cm]{figure4.pdf}
\caption{Mean viscosity variation in a dichotomous tree ($n=2$) with $N = 10$ generations for three different values of the homothety reduction factor $h$. In the case plotted F\aa hr\ae us effect is negligible and the shear rate in the root branch of the tree is $640 \ s^{-1}$.}
\label{viscosityvsGens}
\end{figure}
Blood viscosity depends on the shear rate in a nonlinear way, with a plateau of high viscosities at low shear rates and a plateau of lower viscosities at high shear rates. Between the two plateaus, at medium shear rates, blood viscosity varies steeply, see figure \ref{viscosityvs}.
When the shear rate decreases or increases along the generations of the tree, the viscosity varies more strongly if the shear rate variation makes the viscosity go through the steep part. Consequently, the amplitude of the viscosity variation depends on the initial mean shear rate $\left<\dot{\gamma}_0\right>$ in the root branch of the tree: a high shear rate in the root branch can induce notable viscosity variations throughout the tree only if the shear rate decreases enough along the generations; similarly, a low shear rate in the root branch can induce notable viscosity variations throughout the tree only if the shear rate increases enough along the generations. In any other case, the viscosity remains almost constant, either because it stays on one of the two plateaus or because the mean shear rate does not vary much when $h \sim \left(\frac{1}{n}\right)^{1/3}$.
\subsection{Without phase separation effects}
In the absence of F\aa hr\ae us effect, Murray's law extended to Qu\'emada's model states that the mean shear rates in the tree branches should all be equal to the optimal mean shear rate $\left<\dot{\gamma}\right>^*_{noF}$. Since the mean shear rate follows a scaling law in the tree, the only way for the tree to minimize the dissipated power throughout the tree is for the scaling parameter $\frac1{n h^3}$ to be equal to $1$, i.e. $h_* = \left(\frac{1}{n}\right)^\frac13$. This optimal configuration corresponds to mean shear rates and equivalent viscosities being constant throughout the whole tree. This result is fully compatible with blood arterial macrocirculation \citep{pries_blood_2008}. For blood, it has been estimated that $\alpha_b = 77.8 \ J.m^{-3}.s^{-1}$ and $H_D = 0.45$ \citep{taber_optimization_1998, alarcon_design_2005}; with these values, the optimal mean shear rate we predict is $\left<\dot{\gamma}\right>^*_{noF} \sim 12.5 \ s^{-1}$, see figure \ref{viscosityvs}.
Interestingly, except in very large vessels and in microcirculation, the blood arterial network geometry exhibits a constant mean shear rate along its generations \citep{pries_blood_2008}, with a homothety ratio of about $\left(\frac12\right)^{\frac13}$ \citep{rossitti_vascular_1993}, as predicted by our model. This raises the question of how physiology can reach this configuration. A scenario proposed by \cite{zamir_optimality_1976} is based on the now well-known fact that endothelial cells standing on the arterial walls are able to respond to shear stress stimuli \citep{pries_microvascular_2005}. If this response follows a shear stress threshold common to all cells, increasing the vessel diameter when the shear stress is above the threshold (by cell division) and decreasing the vessel diameter when the shear stress is below the same threshold (either by apoptosis or migration), then the resulting tree exhibits a homothety ratio of $\left(\frac12\right)^{\frac13}$. This is a direct consequence of the scaling law on mean shear rates described in the previous section.
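This set-point scenario can be sketched as a simple relaxation process: each vessel multiplicatively adjusts its radius toward a common wall shear stress target, while the flows are fixed by the dichotomous topology. The viscosity, target stress, adaptation gain and initial radii below are illustrative assumptions.

```python
import math

# Shear-stress set-point adaptation in a dichotomous tree (illustrative values).
MU, TAU_STAR = 3.5e-3, 1.5    # viscosity [Pa.s] and common stress threshold [Pa] (assumed)
N_GEN = 6
flows = [1e-8 / 2 ** i for i in range(N_GEN + 1)]       # F_{i+1} = F_i / 2 (fixed)
radii = [2e-4 * 0.9 ** i for i in range(N_GEN + 1)]     # arbitrary initial radii

for _ in range(300):
    for i in range(N_GEN + 1):
        tau = 4.0 * MU * flows[i] / (math.pi * radii[i] ** 3)  # Poiseuille wall shear stress
        radii[i] *= (tau / TAU_STAR) ** 0.1  # grow if tau > TAU_STAR, shrink otherwise

ratios = [radii[i + 1] / radii[i] for i in range(N_GEN)]
```

At the fixed point every vessel carries the same wall shear stress, so $r_i^3 \propto F_i$ and all radius ratios converge to $(1/2)^{1/3}$, independently of the initial radii: the homothety ratio emerges from the local rule, not from a global optimization.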
\subsection{With F\aa hr\ae us effect.}
The case of arterial microcirculation is more complex, because it includes phase separation effects.
Many experiments have shown the influence of the F\aa hr\ae us effect on the equivalent viscosity, as reported in \cite{pries_blood_1992}. Fully detailed analyses and curve fittings of the relative viscosity, which is the ratio between the equivalent viscosity and the viscosity of the embedding fluid (plasma in the case of blood), are available in \cite{pries_blood_1992} and \cite{pries_blood_2008}. Our model remains in the range of the experimental data and shows a trend similar to that of the data in these studies, as shown on figure \ref{relViscosity}. Notice that by altering the parameters of our model, we could better match the fit given in \cite{pries_blood_1992}, but this is beyond the scope of this paper.
\begin{figure}[h!]
\centering
\includegraphics[height=4.5cm]{figure4bis.pdf}
\caption{Relative viscosity versus tube radius for a constant pressure drop in the tube (high shear rates). The dashed red line represents the mean fit from experiments performed in \cite{pries_blood_1992}. The black continuous line represents the behavior of our model. Notice that we are interested only in obtaining a similar trend, not in exactly fitting the red dashed curve.}
\label{relViscosity}
\end{figure}
The optimal configuration of each division in the tree meets the conditions of the extended Murray's law, equation (\ref{murrayLawF}), with all daughter branches identical. In addition, the optimal configuration also verifies the scaling law for the branch radii, i.e. $r_{i+1} = h r_i$. These equations make the scaling factor $h$ generation dependent, so scaling factors now need to be indexed with their generation index $i$. The optimal $h_i$ in terms of Murray's design verifies the equation: $n h_{i,*}^3 \left<\dot{\gamma}\right>^*_{F}(h_{i,*} r_{i,*}) = \left<\dot{\gamma}\right>^*_{F}(r_{i,*})$.
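The equation for the optimal scaling factor can be solved by bisection, since its left-hand side is increasing in $h$. The sketch below uses a hypothetical decreasing curve $g^*(r) = g_\infty(1 + a/r)$ as a stand-in for $\left<\dot{\gamma}\right>^*_F(r)$ (the plateau value and length scale are assumptions):

```python
# Generation-dependent optimal scaling factor: solve n h^3 g*(h r) = g*(r).
N = 2
G_INF, A = 12.5, 60e-6   # plateau value [1/s] and length scale [m] (assumed)

def g_star(r):
    """Hypothetical optimal-shear-rate curve: decreasing in r, -> G_INF for large r."""
    return G_INF * (1.0 + A / r)

def optimal_h(r):
    """Bisection on h -> N h^3 g*(h r), which is increasing in h."""
    lo, hi = 0.05, 1.0
    for _ in range(80):
        h = 0.5 * (lo + hi)
        if N * h ** 3 * g_star(h * r) < g_star(r):
            lo = h
        else:
            hi = h
    return 0.5 * (lo + hi)

h_small = optimal_h(50e-6)              # Fahraeus-affected radius
h_large = optimal_h(5e-3)               # large vessel: g* nearly constant
h_newton = (1.0 / N) ** (1.0 / 3.0)     # Newtonian optimum (1/n)^(1/3)
```

Because $g^*$ is decreasing, $g^*(hr) > g^*(r)$, which forces $n h^3 < 1$ and hence $h < (1/n)^{1/3}$ at small radii, while at large radii the Newtonian value is recovered, in line with the discussion below.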
\begin{figure}[h!]
\centering
\includegraphics[height=5cm]{figure5ab.pdf}
\caption{Example for blood circulation in a dichotomous tree ($n=2$), with 9 generations, a root radius of $800 \ \mu m$ and a ratio of length over radius for the branches equal to $10$; Blood properties are from \citep{taber_optimization_1998,alarcon_design_2005}: $H_D = 0.45$ and $\alpha_b = 77.8 \ J.m^{-3}.s^{-1}$. {\bf (a)}: Optimal scaling factor with F\aa hr\ae us effect (black) and without F\aa hr\ae us effect (red).
{\bf (b)}: Mean shear stress versus pressure within the tree. Deepest generations are on the left and upper generations are on the right.
}
\label{optimhF}
\end{figure}
We computed numerically the optimal scaling factors in the case of blood for a dichotomous tree ($n=2$) with nine generations, see figure \ref{optimhF}(a). The root branch radius is $800 \ \mu m$, so the F\aa hr\ae us effect is negligible in the first generations. In the optimal configuration, the first three generations bifurcate with a scaling factor of $\left(\frac{1}{2}\right)^\frac13 \sim 0.7937$; then the F\aa hr\ae us effect becomes non-negligible and affects the optimal scaling factors of the next generations.
The function $r \rightarrow \left<\dot{\gamma}\right>^*_{F}(r)$ is decreasing, hence $h_{i,*} < \left(\frac{1}{n}\right)^\frac13 = h_*$. The optimal configuration thus has narrower branches where the F\aa hr\ae us effect occurs than the optimal configuration without F\aa hr\ae us effect. Indeed, the F\aa hr\ae us effect improves the lubrication of the core fluid in the tube, because the red blood cell depleted layer near the wall is less viscous than the core layer. Consequently, equivalent viscosities decrease and mean shear rates increase along the generations as soon as the F\aa hr\ae us effect appears.
The mean shear stress in a branch where the F\aa hr\ae us effect occurs is $\sigma_{i,*} = \frac18 K(\left<\dot{\gamma}\right>^*_{F}(r_{i,*}),r_{i,*})$. Since $K$ is an increasing function of the mean shear rate, the mean shear stress increases along the generations when the F\aa hr\ae us effect occurs (see section II-B of supplementary materials for details). Because the pressure decreases along the generations, the mean shear stress is a decreasing function of pressure within the tree, see the example of blood on figure \ref{optimhF}(b). This result is in agreement with observations of arterial blood microcirculation \citep{pries_blood_2008}, and confirms the conclusions of \cite{alarcon_design_2005} that F\aa hr\ae us effects might be the core phenomenon driving the decrease of blood equivalent viscosity and wall shear stresses in the microcirculation.
\section{Conclusion}
In this study, we extended Murray's law to fluids that can be modeled with Qu\'emada's model. In a cylindrical tube, we showed that the mean shear rate drives the behavior of the equivalent viscosity of the tube and thus of the power dissipated in the flow. The consequence is that Murray's optimization principle in a cylindrical tube yields a universal optimal mean shear rate that depends neither on the flow amplitude nor on the size of the tube. In the case of Qu\'emada's law mimicking blood behavior, we found that this optimal mean shear rate is $\left<\dot{\gamma}\right>_{noF}^* \sim 12.5 \ s^{-1}$. This value is in accordance with arterial macrocirculation data, where the mean shear rates in the successive vessels remain almost constant at about $10 \ s^{-1}$. Applied to a fractal tree whose branches divide into $n$ identical smaller branches, we found that the optimal tree scaling factor remains the same as for a Newtonian fluid, i.e. $\left(\frac1{n}\right)^{\frac13}$.
However, if the vessel diameter is small enough, the F\aa hr\ae us phase separation effect occurs. A red blood cell depleted layer appears near the wall of the vessel, making the equivalent viscosity dependent on both the mean shear rate and the vessel radius $r$. We derived a new optimal configuration from Murray's optimization principle, which is expressed through a decreasing function that relates the optimal mean shear rate in a vessel to its radius: $r \rightarrow \left<\dot{\gamma}\right>^*_F(r)$. For large $r$, this function tends to the optimal mean shear rate without F\aa hr\ae us effect. The behavior of this function relates to the lubrication phenomenon induced by the F\aa hr\ae us effect. We also derived the optimal configuration of a fractal tree when the F\aa hr\ae us effect occurs. We showed that the optimal scaling factors become dependent on the generation index, on the size of the root of the tree and on the function $r \rightarrow \left<\dot{\gamma}\right>^*_F(r)$, i.e. $n h_{i,*}^3 \left<\dot{\gamma}\right>^*_{F}(h_{i,*} r_{i,*}) = \left<\dot{\gamma}\right>^*_{F}(r_{i,*})$. The optimal configuration of the fractal tree has narrower branches when the F\aa hr\ae us effect occurs and induces a drop in mean shear stresses along the tree, which has been reported in the literature for blood arterial microcirculation.
Our study is based on the sole fact that the viscosity is a monotonic function of the fluid shear rate. Consequently, our results hold not only for fluids that can be modeled with Qu\'emada's model, but also for any fluid for which this monotonicity condition is true. This makes our study quite general; it applies, for example, to shear-thinning or shear-thickening fluids.
Concerning blood, interestingly, the optimal configuration fits that of the large circulation. The large circulation is thus in a state where non-Newtonian effects remain small, just at the edge of the steep part of the blood rheogram, as shown on figure \ref{viscosityvs}. It is important to notice that this property is a result of the optimization process and not a hypothesis. This result can only be obtained by including the non-Newtonian behavior of blood in the optimization process. This also highlights why, although it was based on a Newtonian hypothesis, the original Murray's law nevertheless fitted the blood large circulation.
Of course, many other phenomena might play a role in the complex behavior of arterial blood circulation. In particular, our model does not account for the complex regulation processes that exist in the blood network. These regulation processes are likely to affect the optimal configuration and enhance the robustness of the system.
In the large circulation, the role of inertia, turbulence or oscillating flow caused by the heartbeat may influence Murray's optimal design. In the microcirculation, the roles of other phase separation effects have to be taken into account to improve the validity of the predictions, for example the role of the glycocalyx molecules standing on the vessel walls \citep{pries_endothelial_2000}. Also, because the power dissipation in Murray's optimal design is not symmetric relative to its optimal value, biological noise affecting blood vessel sizes may also influence the optimal configuration \citep{mauroy_influence_2010,vercken_dont_2012}. The inclusion of such noise in Murray's optimal design may bring interesting insights on the distribution of blood vessel sizes in the arterial network.
\section*{Acknowledgments}
The authors would like to thank Philippe Dantan, Patrice Flaud and Daniel Qu\'emada (University Paris Diderot - Paris 7, Paris, France) for very fruitful discussions and support about this work.
\newpage
\noindent{\Large{\bf{Appendix}}}
\section{Appendix}
We present proofs for theorems stated in the main paper.\\
\begin{theorem}
Suppose that $\forall (\mathbf{x}_i, y_i) \in \mathcal{D}, \mathbf{w} \in \mathbb{R}^d: \|\nabla \ell(\mathbf{w}^\top \mathbf{x}_i, y_i)\|_2 \leq C$. Suppose also that $\ell''$ is $\gamma$-Lipschitz and $\| \mathbf{x}_i \|_2 \leq 1$ for all $(\mathbf{x}_i, y_i) \in \mathcal{D}$. Then:
\begin{align*}
\| \nabla L(\mathbf{w}^-; \mathcal{D}') \|_2 &= \|(H_{\mathbf{w}_\eta} - H_{\mathbf{w}^*}) H_{\mathbf{w}^*}^{-1} \Delta\|_2 \\
&\leq \gamma (n-1) \| H_{\mathbf{w}^*}^{-1} \Delta \|_2^2 \leq \frac{4 \gamma C^2}{\lambda^2 (n-1)},
\end{align*}
where $H_{\mathbf{w}_\eta}$ denotes the Hessian of $L(\cdot; \mathcal{D}')$ at the parameter vector $\mathbf{w}_\eta = \mathbf{w}^* + \eta H_{\mathbf{w}^*}^{-1} \Delta$ for some $\eta \in [0,1]$.
\end{theorem}
\begin{proof}
Let $G(\mathbf{w}) = \nabla L(\mathbf{w}; \mathcal{D}')$ denote the gradient at $\mathbf{w}$ of the empirical risk on the reduced dataset $\mathcal{D}'$. Note that $G : \mathbb{R}^d \rightarrow \mathbb{R}^d$ is a vector-valued function. By Taylor's Theorem, there exists some $\eta \in [0,1]$ such that:
\begin{align*}
G(\mathbf{w}^-) &= G(\mathbf{w}^* + H_{\mathbf{w}^*}^{-1} \Delta) \\
&= G(\mathbf{w}^*) + \nabla G(\mathbf{w}^* + \eta H_{\mathbf{w}^*}^{-1} \Delta) H_{\mathbf{w}^*}^{-1} \Delta.
\end{align*}
Since $G$ is the gradient of $L(\cdot; \mathcal{D}')$, the quantity $\nabla G(\mathbf{w}^* + \eta H_{\mathbf{w}^*}^{-1} \Delta)$ is exactly the Hessian of $L(\cdot; \mathcal{D}')$ evaluated at the point $\mathbf{w}_\eta = \mathbf{w}^* + \eta H_{\mathbf{w}^*}^{-1} \Delta$. Thus:
\begin{align*}
G(\mathbf{w}^-) &= G(\mathbf{w}^*) + H_{\mathbf{w}_\eta} H_{\mathbf{w}^*}^{-1} \Delta \\
&= \left(G(\mathbf{w}^*) + \Delta\right) + H_{\mathbf{w}_\eta} H_{\mathbf{w}^*}^{-1} \Delta - \Delta \\
&= 0 + H_{\mathbf{w}_\eta} H_{\mathbf{w}^*}^{-1} \Delta - H_{\mathbf{w}^*} H_{\mathbf{w}^*}^{-1} \Delta \\
&= (H_{\mathbf{w}_\eta} - H_{\mathbf{w}^*}) H_{\mathbf{w}^*}^{-1} \Delta.
\end{align*}
This gives:
\begin{align*}
\| G(\mathbf{w}^-) \|_2 &= \| (H_{\mathbf{w}_\eta} - H_{\mathbf{w}^*}) H_{\mathbf{w}^*}^{-1} \Delta \|_2 \\
&\leq \| H_{\mathbf{w}_\eta} - H_{\mathbf{w}^*} \|_2 \| H_{\mathbf{w}^*}^{-1} \Delta \|_2.
\end{align*}
Using the Lipschitz-ness of $\ell''$, we have for every $i$:
\begin{align*}
\| \nabla^2 \ell(\mathbf{w}_\eta^\top \mathbf{x}_i, y_i) - \nabla^2 \ell((\mathbf{w}^*)^\top \mathbf{x}_i, y_i)\|_2 &= \| [\ell''(\mathbf{w}_\eta^\top \mathbf{x}_i, y_i) - \ell''((\mathbf{w}^*)^\top \mathbf{x}_i, y_i)] \mathbf{x}_i \mathbf{x}_i^\top \|_2 \\
&\leq | \ell''(\mathbf{w}_\eta^\top \mathbf{x}_i, y_i) - \ell''((\mathbf{w}^*)^\top \mathbf{x}_i, y_i) | \cdot \| \mathbf{x}_i \|_2^2 \\
&\leq \gamma \| \mathbf{w}_\eta - \mathbf{w}^* \|_2 \hspace{4ex} \text{\small since $\| \mathbf{x}_i \|_2 \leq 1$} \\
&= \gamma \| \eta H_{\mathbf{w}^*}^{-1} \Delta \|_2 \\
&\leq \gamma \| H_{\mathbf{w}^*}^{-1} \Delta \|_2.
\end{align*}
As a result, we can conclude that:
\begin{align*}
\| H_{\mathbf{w}_\eta} - H_{\mathbf{w}^*} \|_2 &\leq \sum_{i=1}^{n-1} \Big\Vert \nabla^2 \ell(\mathbf{w}_\eta^\top \mathbf{x}_i, y_i) - \nabla^2 \ell((\mathbf{w}^*)^\top \mathbf{x}_i, y_i) \Big\Vert_2 \\
&\leq \gamma (n-1) \| H_{\mathbf{w}^*}^{-1} \Delta \|_2.
\end{align*}
Combining these results leads us to conclude that $\| G(\mathbf{w}^-) \|_2 \leq \gamma (n-1) \| H_{\mathbf{w}^*}^{-1} \Delta \|_2^2$.
We can simplify this bound by analyzing $\| H_{\mathbf{w}^*}^{-1} \Delta \|_2$.
Since $L(\cdot; \mathcal{D}')$ is $\lambda (n-1)$-strongly convex, the smallest eigenvalue of $H_{\mathbf{w}^*}$ is at least $\lambda (n-1)$, hence $\| H_{\mathbf{w}^*}^{-1} \|_2 \leq \frac{1}{\lambda (n-1)}$. Recall that
$$\Delta = \lambda \mathbf{w}^* + \nabla \ell\left((\mathbf{w}^*)^\top \mathbf{x}_n, y_n\right).$$ Since $\mathbf{w}^*$ is the global optimal solution of the loss $L(\cdot; \mathcal{D})$, we obtain the condition:
$$0 = \nabla L(\mathbf{w}^*; \mathcal{D}) = \sum_{i=1}^n \nabla \ell\left((\mathbf{w}^*)^\top \mathbf{x}_i, y_i\right) + \lambda n \mathbf{w}^*.$$
Using the norm bound $\| \nabla \ell(\mathbf{w}^\top \mathbf{x}, y) \|_2 \leq C$ and re-arranging the terms, we obtain:
$$\| \mathbf{w}^* \|_2 = \frac{\| \sum_{i=1}^n \nabla \ell((\mathbf{w}^*)^\top \mathbf{x}_i, y_i) \|_2}{\lambda n} \leq \frac{C}{\lambda}.$$
Using this and the same norm bound, we observe:
$$\| \Delta \|_2 \leq \lambda \| \mathbf{w}^* \|_2 + \| \nabla \ell((\mathbf{w}^*)^\top \mathbf{x}_n, y_n) \|_2 \leq 2C,$$ from which we obtain:
$$\| H_{\mathbf{w}^*}^{-1} \Delta \|_2 \leq \| H_{\mathbf{w}^*}^{-1} \|_2 \| \Delta \|_2 \leq \frac{2C}{\lambda (n-1)},$$
which leads to the desired bound.
\end{proof}
\begin{theorem}
Suppose that $\mathbf{b}$ is drawn from a distribution with density function $p(\cdot)$
such that for any $\mathbf{b}_1, \mathbf{b}_2 \in \mathbb{R}^d$ satisfying
$\|\mathbf{b}_1 - \mathbf{b}_2\|_2 \leq \epsilon'$, we have that:
$e^{-\epsilon} \leq \frac{p(\mathbf{b}_1)}{p(\mathbf{b}_2)} \leq e^{\epsilon}$.
Then:
$$e^{-\epsilon} \leq \frac{f_{\tilde{A}}(\tilde{\mathbf{w}})}{f_A(\tilde{\mathbf{w}})} \leq e^\epsilon,$$
for any solution $\tilde{\mathbf{w}}$ produced by $\tilde{A}$.
\end{theorem}
\begin{proof}
Let $p$ be the density function of $\mathbf{b}$ and let $g_{\tilde{A}}$ be the density
function of the gradient residual under optimizer $\tilde{A}$. Consider the density functions
$q_A$ and $q_{\tilde{A}}$ of $\mathbf{z} = \mathbf{b} - \mathbf{u}$ under optimizers $A$ and $\tilde{A}$.
We obtain:
\begin{align*}
q_{\tilde{A}}(\mathbf{z}) &= \int_\mathbf{v} g_{\tilde{A}}(\mathbf{v}) p(\mathbf{z} + \mathbf{v}) d\mathbf{v} \\
&= \int_{\mathbf{v} : \|\mathbf{v}\|_2 \leq \epsilon'} g_{\tilde{A}}(\mathbf{v}) p(\mathbf{z} + \mathbf{v}) d\mathbf{v} \hspace{1ex} \text{\small{since $g_{\tilde{A}}$ has no support elsewhere}} \\
&\leq \int_{\mathbf{v} : \|\mathbf{v}\|_2 \leq \epsilon'} g_{\tilde{A}}(\mathbf{v}) e^\epsilon p(\mathbf{z}) d\mathbf{v} \hspace{3ex} \text{\small{since $\|\mathbf{v}\|_2 \leq \epsilon'$}} \\
&= e^\epsilon p(\mathbf{z}) \\
&= e^\epsilon q_A(\mathbf{z})
\end{align*}
where the last step follows since the gradient residual $\mathbf{u}$ under $A$ is 0.
To complete the proof, note that the value of $\tilde{\mathbf{w}}$ is completely determined
by $\mathbf{z} \!=\! \mathbf{b} \!-\! \mathbf{u}$. Indeed, any $\mathbf{w}$ satisfying Equation \ref{eq:approximate_gradient}
is an exact solution of the strongly convex loss $L_{\mathbf{b}}({\mathbf{w}}) - \mathbf{u}^\top \mathbf{w}$ and, hence, must be unique. This gives:
\begin{align*}
f_{\tilde{A}}(\tilde{\mathbf{w}}) &= \int_\mathbf{z} f_{\tilde{A}}(\tilde{\mathbf{w}} | \mathbf{z}) q_{\tilde{A}}(\mathbf{z}) d\mathbf{z} \\
&= \int_\mathbf{z} f_A(\tilde{\mathbf{w}} | \mathbf{z}) q_{\tilde{A}}(\mathbf{z}) d\mathbf{z} \hspace{1ex} \text{\small{since $\tilde{\mathbf{w}}$ is governed by $\mathbf{z}$}} \\
&\leq \int_\mathbf{z} f_A(\tilde{\mathbf{w}} | \mathbf{z}) e^\epsilon q_A(\mathbf{z}) d\mathbf{z} \\
&= e^\epsilon f_A(\tilde{\mathbf{w}}).
\end{align*}
In the above, note that while $f_{\tilde{A}}$ and $f_A$ are not the same in general, their difference is governed entirely by $\mathbf{z}$: given a fixed $\mathbf{z}$, the \emph{conditional} density of $\tilde{\mathbf{w}}$ is the same under both density functions.
Using a similar approach as above, we can also show that $f_{\tilde{A}}(\tilde{\mathbf{w}}) \geq e^{-\epsilon} f_A(\tilde{\mathbf{w}})$.
\end{proof}
\begin{theorem}
Let $A$ be the learning algorithm that returns the unique optimum of the loss $L_{\mathbf{b}}(\mathbf{w}; \mathcal{D})$ and let $M$ be the Newton update removal mechanism (cf., Equation~\ref{eq:newton_update}). Suppose that $\| \nabla L(\mathbf{w}^-; \mathcal{D}') \|_2 \leq \epsilon'$ for some computable bound $\epsilon' > 0$. We have the following guarantees for $M$:
\begin{enumerate}[(i)]
\setlength\itemsep{0ex}
\vspace{-1ex}
\item If $\mathbf{b}$ is drawn from a distribution with density $p(\mathbf{b}) \propto e^{-\frac{\epsilon}{\epsilon'} \|\mathbf{b}\|_2}$, then $M$ is $\epsilon$-CR for $A$;
\item If $\mathbf{b} \sim \mathcal{N}(0, c \epsilon' / \epsilon)^d$ with $c > 0$, then $M$ is $(\epsilon,\delta)$-CR for $A$ with $\delta = 1.5 \cdot e^{-c^2/2}$.
\end{enumerate}
\end{theorem}
\begin{proof}
The proof involves bounding the density ratio of $\mathbf{b}_1$ and $\mathbf{b}_2$ when $\| \mathbf{b}_1 - \mathbf{b}_2 \|_2 \leq \epsilon'$ and then invoking Theorem \ref{thm:epsilon_cd}.
\begin{enumerate}[(i)]
\setlength\itemsep{0ex}
\item $\frac{p(\mathbf{b}_1)}{p(\mathbf{b}_2)} = e^{-\frac{\epsilon}{\epsilon'} (\|\mathbf{b}_1\|_2 - \|\mathbf{b}_2\|_2)} \leq e^{\frac{\epsilon}{\epsilon'} \|\mathbf{b}_1 - \mathbf{b}_2\|_2} \leq e^\epsilon$. The reverse direction can be obtained similarly.
\item The proof of Theorem 3.22 in \cite{dwork2011differential} applies using $\Delta_2(f) = \epsilon'$, giving that with probability at least $1-\delta$, we have that $e^{-\epsilon} \leq \frac{p(\mathbf{b}_1)}{p(\mathbf{b}_2)} \leq e^{\epsilon}$. Applying Theorem \ref{thm:epsilon_cd} gives the desired $(\epsilon,\delta)$-CR guarantee.
\end{enumerate}
\end{proof}
\begin{theorem}
Under the same regularity conditions of Theorem \ref{thm:asymptotic_bound}, we have that:
\begin{align*}
\| \nabla L(\mathbf{w}^{(-m)}; \mathcal{D} \setminus \mathcal{D}_m) \|_2 &\leq \gamma (n-m) \left\| \left[ H_{\mathbf{w}^*}^{(m)} \right]^{-1} \Delta^{(m)} \right\|_2^2 \leq \frac{4 \gamma m^2 C^2}{\lambda^2 (n-m)}.
\end{align*}
\end{theorem}
\begin{proof}
The proof is almost identical to that of Theorem \ref{thm:asymptotic_bound}, except that there are $n-m$ terms in the Hessian and $\Delta^{(m)}$ now scales linearly with $m$.
\end{proof}
\begin{theorem}
Suppose $\Phi$ is a randomized learning algorithm that is $(\epsilon_\text{DP}, \delta_\text{DP})$-differentially private, and the outputs of $\Phi$ are used in a linear model by minimizing $L_\mathbf{b}$ and using a removal mechanism that guarantees $(\epsilon_\text{CR},\delta_\text{CR})$-certified removal. Then the entire procedure guarantees
$(\epsilon_\text{DP} \!+\! \epsilon_\text{CR}, \delta_\text{DP} \!+\! \delta_\text{CR})$-certified removal.
\end{theorem}
\begin{proof}
Let $\Phi$ be the randomized algorithm that learns a feature extractor from the
data $\mathcal{D}$ and let $\mu(S) = P(\Phi(\mathcal{D}) \in S)$ be the induced probability
measure over the space, $\Omega$, of all possible feature extractors.
Let $\mathcal{D}' = \mathcal{D} \setminus \mathbf{x}$ be the dataset with $\mathbf{x}$ removed and let $\mu'(\cdot)$ be
the corresponding probability measure for $\Phi(\mathcal{D}')$.
Since $\Phi$ is $(\epsilon_\text{DP}, \delta_\text{DP})$-DP, for any $S \subseteq \Omega$, we have that:
$$\mu(S) = P(\Phi(\mathcal{D}) \in S) \leq e^{\epsilon_\text{DP}} P(\Phi(\mathcal{D}') \in S) = e^{\epsilon_\text{DP}} \mu'(S),$$
with probability $1 - \delta_\text{DP}$. In particular, this shows that $\mu$ is absolutely
continuous w.r.t. $\mu'$ and therefore admits a Radon-Nikodym derivative $g$.
Furthermore, $g$ is (almost everywhere w.r.t. $\mu'$) bounded by $e^{\epsilon_\text{DP}}$. Indeed,
suppose that there exists a set $S \subseteq \Omega$ with $\mu'(S) > 0$ such that
$g \geq e^{\epsilon_\text{DP}} + \alpha$ on $S$ for some $\alpha \geq 0$, then:
$$\mu(S) = \int_S g \hspace{1ex} d\mu' \geq (e^{\epsilon_\text{DP}} + \alpha) \mu'(S) \geq
\mu(S) + \alpha \mu'(S),$$
which is a contradiction unless $\alpha = 0$.
Finally, for any $\phi \in \Omega$ such that $\Phi(\mathcal{D}) = \phi$, let $A(\mathcal{D}, \phi)$
be the learning algorithm that trains a model on $\mathcal{D}$ using the feature extractor
$\phi$. Suppose that $M$ is an $(\epsilon_\text{CR}, \delta_\text{CR})$-CR mechanism, then by Fubini's Theorem:
\begin{align*}
P(M(A(\mathcal{D}, \Phi(\mathcal{D})), \mathcal{D}, \mathbf{x}) \in \mathcal{T}) &= \int_\Omega P(M(A(\mathcal{D}, \phi), \mathcal{D}, \mathbf{x}) \in \mathcal{T}) \hspace{1ex} d\mu \\
&\leq \int_\Omega e^{\epsilon_\text{CR}} P(A(\mathcal{D}', \phi) \in \mathcal{T}) \hspace{1ex} d\mu \\
&= \int_\Omega e^{\epsilon_\text{CR}} P(A(\mathcal{D}', \phi) \in \mathcal{T}) \cdot g \hspace{1ex} d\mu' \\
&\leq \int_\Omega e^{\epsilon_\text{DP} + \epsilon_\text{CR}} P(A(\mathcal{D}', \phi) \in \mathcal{T}) \hspace{1ex} d\mu' \\
&= e^{\epsilon_\text{DP} + \epsilon_\text{CR}} P(A(\mathcal{D}', \Phi(\mathcal{D}')) \in \mathcal{T}),
\end{align*}
with probability at least $1 - \delta_\text{DP} - \delta_\text{CR}$. The lower bound can be shown in a similar fashion.
\end{proof}
\subsection{Linear Logistic Regression}
\label{sec:linear}
We first experiment on the MNIST digit classification dataset. For simplicity, we restrict to the binary classification problem of distinguishing between digits 3 and 8, and train a regularized logistic regressor using Algorithm \ref{alg:training}. Removal is performed using Algorithm \ref{alg:removal} with $\delta=\texttt{1e-4}$.
\paragraph{Effects of $\lambda$ and $\sigma$.} Training a removal-enabled model using Algorithm \ref{alg:training} requires selecting two hyperparameters: the $L_2$-regularization parameter, $\lambda$, and the standard deviation, $\sigma$, of the sampled perturbation vector $\mathbf{b}$. Figure \ref{fig:mnist_std_lam} shows the effect of $\lambda$ and $\sigma$ on test accuracy and the expected number of removals supported before re-training. When fixing the supported number of removals at 100 (middle plot), the value of $\sigma$ is inversely related to $\epsilon$ (\emph{cf.} line 16 of Algorithm \ref{alg:removal}), hence higher $\epsilon$ results in smaller $\sigma$ and improved accuracy. Increasing $\lambda$ enables more removals before re-training (left and right plots) because it reduces the gradient residual norm, but very high values of $\lambda$ negatively affect test accuracy because the regularization term dominates the loss.
\paragraph{Tightness of the gradient residual norm bounds.} In Algorithm \ref{alg:removal}, we use the data-dependent bound from Corollaries \ref{cor:data_dependent} and \ref{cor:data_dependent_batch} to compute a \emph{per-data} or \emph{per-batch} estimate of the removal error, as opposed to the \emph{worst-case} bound in Theorems \ref{thm:asymptotic_bound} and \ref{thm:asymptotic_bound_batch}. Figure \ref{fig:mnist_tightness} shows the value of different bounds as a function of the number of removed points. We consider two removal scenarios: single point removal and batch removal with batch size $m=10$. We observe three phenomena: (1) The worst-case bounds (light blue and light green) are \emph{several orders of magnitude} higher than the data-dependent bounds (dark blue and dark green), which means that the number of supported removals is \emph{several orders of magnitude} higher when using the data-dependent bounds. (2) The cumulative sum of the gradient residual norm bounds is approximately linear for both the single and batch removal data-dependent bounds. (3) There remains a large gap between the data-dependent norm bounds and the true value of the gradient residual norm (dashed line), which suggests that the utility of our removal mechanism may be further improved via tighter analysis.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{fig/MNIST_bound_tightness.pdf}
\vspace{-4ex}
\caption{\textbf{Linear logistic regression on MNIST.} Gradient residual norm (on log scale) as a function of the number of removals.}
\label{fig:mnist_tightness}
\vspace{-2ex}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{fig/MNIST_grad_norm_combined.pdf}
\vspace{-5ex}
\caption{\textbf{MNIST training digits sorted by norm of the removal update $\mathbf{\| H_{\mathbf{w}^*}^{-1} \Delta \|_2}$.} The samples with the highest norm (\textbf{top}) appear to be atypical, making it harder to undo their effect on the model. The samples with the lowest norm (\textbf{bottom}) are prototypical 3s and 8s, and hence are much easier to remove.}
\label{fig:mnist_grad_norm}
\end{figure*}
\paragraph{Gradient residual norm and removal difficulty.} The data-dependent bound is governed by the norm of the update $H_{\mathbf{w}^*}^{-1} \Delta$, which measures the influence of the removed point on the parameters and varies greatly depending on the training sample being removed.
Figure \ref{fig:mnist_grad_norm} shows the training samples corresponding to the 10 largest and smallest values of $\| H_{\mathbf{w}^*}^{-1} \Delta \|_2$. There are large visual differences between these samples: large values correspond to oddly-shaped 3s and 8s, while small values correspond to ``prototypical'' digits. This suggests that removing outliers is harder, because the model tends to memorize their details and their impact on the model is easy to distinguish from other samples.
\begin{figure*}[t]
\begin{minipage}{.63\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/lam_std_tradeoff_modified.pdf}
\vspace{-5ex}
\caption{\textbf{Linear models trained on public feature extractors.} Trade-off between test accuracy and the expected number of supported removals (at $\epsilon\!=\!1$) on LSUN (\textbf{left}) and SST (\textbf{right}). The setting of $(\lambda, \sigma)$ is shown next to each point. The number of supported removals rapidly increases when accuracy is slightly sacrificed.}
\label{fig:lsun_sst_tradeoff}
\end{minipage}
\hspace{2ex}
\begin{minipage}{.33\textwidth}
\centering
\includegraphics[width=\columnwidth]{fig/SVHN_tradeoff.pdf}
\vspace{-5ex}
\caption{\textbf{Using $\mathbf{\epsilon}$-DP features.} Trade-off between $\epsilon$ and test accuracy on SVHN of models that support 10 removals. Dashed line shows non-private model accuracy.}
\label{fig:svhn_tradeoff}
\end{minipage}
\vspace{-2ex}
\end{figure*}
\subsection{Non-Linear Logistic Regression using Public,~Pre-Trained Feature Extractors}
\label{sec:text_and_image}
We consider the common scenario in which a feature extractor is trained on public data (\emph{i.e.}, does not require removal), and a linear classifier is trained on these features using non-public data. We study two tasks: (1) scene classification on the LSUN dataset and (2) sentiment classification on the Stanford Sentiment Treebank (SST) dataset. We subsample the LSUN dataset to 100K images per class (\emph{i.e.}, $n = 1\textrm{M}$).
For LSUN, we extract features using a ResNeXt-101 model \citep{xie2017resnext} trained on 1B Instagram images \citep{mahajan2018exploring} and fine-tuned on ImageNet \citep{deng2009imagenet}. For SST, we extract features using a pre-trained RoBERTa \citep{liu2019roberta} language model. At removal time, we use Algorithm \ref{alg:removal} with $\epsilon=1$ and $\delta=\texttt{1e-4}$ in both experiments.
\paragraph{Result on LSUN.} We reduce the 10-way LSUN classification task to 10 one-versus-all tasks and randomly subsample the negative examples to ensure the positive and negative classes are balanced in all binary classification problems. Subsampling benefits removal since a training sample does not always need to be removed from all 10 classifiers.
Figure \ref{fig:lsun_sst_tradeoff} (left) shows the relationship between test accuracy and the expected number of removals on LSUN. The value of $(\lambda, \sigma)$ is shown next to each point, with the left-most point corresponding to training a regular model that supports no removal. At the cost of a small drop in accuracy (from $88.6\%$ to $83.3\%$), the model supports over $10,000$ removals before re-training is needed. As shown in Table \ref{tab:timings}, the computational cost for removal is more than $250 \times$ smaller than re-training the model on the remaining data points.
\paragraph{Result on SST.} SST is a sentiment classification dataset commonly used for benchmarking language models \citep{wang2019glue}. We use SST in the binary classification task of predicting whether or not a movie review is positive. Figure \ref{fig:lsun_sst_tradeoff} (right) shows the trade-off between accuracy and supported number of removals. The regular model (left-most point) attains a test accuracy of $89.0\%$, which matches the performance of competitive prior work \citep{tai2015improved, wieting2016towards, looks2017deep}. As before, a large number of removals is supported at a small loss in test accuracy; the computational costs for removal are $870\times$ lower than for re-training the model.
\subsection{Non-linear Logistic Regression using Differentially~Private Feature Extractors}
\label{sec:non-linear}
When public data is not available for training a feature extractor, we can train a differentially private feature extractor on private data \citep{abadi2016deep} and apply Theorem \ref{thm:extractor} to remove data from the final (removal-enabled) linear layer. This approach has a major advantage over training the entire model using the approach of \citep{abadi2016deep} because the final linear layer can partly correct for the noisy features produced by the private feature extractor.
We evaluate this approach on the Street View House Numbers (SVHN) digit classification dataset. We compare it to a differentially private CNN\footnote{We use a simple CNN with two convolutional layers with 64 filters of size $3 \times 3$ and $2 \times 2$ max-pooling.} trained using the technique of \citet{abadi2016deep}. Since the CNN is differentially private, certified removal is achieved trivially without applying any removal. For a fair comparison, we fix $\delta = \texttt{1e-4}$ and train $(\epsilon_\text{DP}/10, \delta)$-private CNNs for a range of values of $\epsilon_\text{DP}$. By the union bound for group privacy \citep{dwork2011differential}, the resulting models support up to ten $(\epsilon_\text{DP}, \delta)$-CR removals.
To measure the effectiveness of Theorem \ref{thm:extractor} for certified data removal, we also train an $(\epsilon_\text{DP}/10, \delta/10)$-differentially private CNN and extract features from its penultimate linear layer. We use these features in Algorithm \ref{alg:training} to train 10 one-versus-all classifiers with total failure probability of at most $\frac{9}{10} \delta$. Akin to the experiments on LSUN, we subsample the negative examples in each of the binary classifiers to speed up removal. The expected contribution to $\epsilon$ from the updates is set to $\epsilon_\text{CR} \approx \epsilon_\text{DP}/10$, hence achieving $(\epsilon, \delta)$-CR with $\epsilon = \epsilon_\text{DP} + \epsilon_\text{CR} \approx \epsilon_\text{DP} + \epsilon_\text{DP} / 10$ after 10 removals.
Figure \ref{fig:svhn_tradeoff} shows the relationship between test accuracy and $\epsilon$ for both the fully private and the Newton update removal methods. For reference, the dashed line shows the accuracy obtained by a non-private CNN that does not support removal. For smaller values of $\epsilon$, training a private feature extractor (blue) and training the linear layer using Algorithm \ref{alg:training} attains much higher test accuracy than training a fully differentially private model (orange). In particular, at $\epsilon \approx 0.1$, the fully differentially private baseline's accuracy is only $22.7\%$, whereas our approach attains a test accuracy of $71.2\%$. Removal from the linear model trained on top of the private extractor only takes 0.27s, compared to more than 1.5 hours when re-training the CNN from scratch.
\subsection{Linear Classifiers}
Denote by $\mathcal{D} = \{(\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_n, y_n)\}$ the training set of $n$ samples, $\forall i: \mathbf{x}_i \in \mathbb{R}^d$, with corresponding targets $y_i$. We assume learning algorithm $A$ tries to minimize the regularized empirical risk of a linear model:
\begin{equation*}
L(\mathbf{w}; \mathcal{D}) = \sum_{i=1}^n \ell(\mathbf{w}^\top \mathbf{x}_i, y_i) + \frac{\lambda n}{2} \|\mathbf{w}\|_2^2,
\end{equation*}
where $\ell(\mathbf{w}^\top \mathbf{x}, y)$ is a convex loss that is differentiable everywhere. We write $\mathbf{w}^* \!=\! A(\mathcal{D}) \!=\! \textrm{argmin}_\mathbf{w} L(\mathbf{w}; \mathcal{D})$; this minimizer is unique because the regularized loss is strongly convex.
Our approach to certified removal is as follows. We first define a removal mechanism that, given a training data point $(\mathbf{x},y)$ to remove, produces a model $\mathbf{w}^-$ that is approximately equal to the unique minimizer of $L(\mathbf{w}; \mathcal{D} \setminus (\mathbf{x}, y))$. The model produced by such a mechanism may still contain information about $(\mathbf{x},y)$. In particular, if the gradient $\nabla L(\mathbf{w}^-; \mathcal{D} \setminus (\mathbf{x}, y))$ is non-zero, it reveals information about the removed data point. To hide this information, we apply a sufficiently large random perturbation to the model parameters at training time. This allows us to guarantee certified removal; the values of $\epsilon$ and $\delta$ depend on the approximation quality of the removal mechanism and on the distribution from which the model perturbation is sampled.
\paragraph{Removal mechanism.} We assume without loss of generality that we aim to remove the last training sample, $(\mathbf{x}_n, y_n)$. Specifically, we define an efficient removal mechanism that approximately minimizes $L(\mathbf{w}; \mathcal{D}')$ with $\mathcal{D}' = \mathcal{D} \setminus (\mathbf{x}_n,y_n)$.
First, denote the loss gradient at sample $(\mathbf{x}_n, y_n)$ by $\Delta = \lambda \mathbf{w}^* + \nabla \ell((\mathbf{w}^*)^\top \mathbf{x}_n, y_n)$ and the Hessian of $L(\cdot; \mathcal{D}')$ at $\mathbf{w}^*$ by $H_{\mathbf{w}^*} = \nabla^2 L(\mathbf{w}^*; \mathcal{D}')$. We consider the \emph{Newton update removal mechanism} $M$:
\begin{equation}
\mathbf{w}^- = M(\mathbf{w}^*, \mathcal{D}, (\mathbf{x}_n, y_n)) := \mathbf{w}^* + H_{\mathbf{w}^*}^{-1} \Delta,
\label{eq:newton_update}
\end{equation}
which is a one-step Newton update applied to the gradient influence of the removed point $(\mathbf{x}_n, y_n)$. The update $H_{\mathbf{w}^*}^{-1} \Delta$ is also known as the influence function of the training point $(\mathbf{x}_n,y_n)$ on the parameter vector $\mathbf{w}^*$ \citep{cook1982residuals, koh2017understanding}.
The computational cost of this Newton update is dominated by the cost of forming and inverting the Hessian matrix. The Hessian matrix for $\mathcal{D}$ can be formed offline with $O(d^2 n)$ cost. The subsequent Hessian inversion makes the removal mechanism $O(d^3)$ at removal time; the inversion can leverage efficient linear algebra libraries and GPUs.
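As a concrete illustration, the mechanism can be sketched for regularized logistic regression as follows. This is a minimal sketch of our own, not the released implementation; the function and variable names are ours, and \texttt{np.linalg.solve} stands in for any $O(d^3)$ linear solver:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def newton_removal(w_star, X, y, lam, idx):
    """One-step Newton update removing training sample idx.

    X is the (n, d) data matrix, y holds labels in {-1, +1}, lam is the
    regularization parameter, and w_star minimizes L(w; D)."""
    n, d = X.shape
    x_r, y_r = X[idx], y[idx]
    # Delta = lam * w* + gradient of the logistic loss at the removed point
    Delta = lam * w_star + (sigmoid(y_r * (x_r @ w_star)) - 1.0) * y_r * x_r
    # Hessian of L(.; D') at w*: sum over remaining points plus lam*(n-1)*I
    keep = np.arange(n) != idx
    Xk, yk = X[keep], y[keep]
    s = sigmoid(yk * (Xk @ w_star))
    H = Xk.T @ (Xk * (s * (1.0 - s))[:, None]) + lam * (n - 1) * np.eye(d)
    # w^- = w* + H^{-1} Delta; solving is cheaper and more stable than inverting
    return w_star + np.linalg.solve(H, Delta)
```

The Hessian sum can be precomputed offline; at removal time only the rank-one correction and the solve are needed.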
To bound the approximation error of this removal mechanism, we observe that the quantity $\nabla L(\mathbf{w}^-; \mathcal{D}')$, which we refer to hereafter as the \emph{gradient residual}, is zero only when $\mathbf{w}^-$ is the unique minimizer of $L(\cdot; \mathcal{D}')$. We also observe that the gradient residual norm, $\|\nabla L(\mathbf{w}^-; \mathcal{D}')\|_2$, reflects the error in the approximation $\mathbf{w}^-$ of the minimizer of $L(\cdot; \mathcal{D}')$. We derive an upper bound on the gradient residual norm for the removal mechanism (\emph{cf.} Equation \ref{eq:newton_update}).\\
\begin{theorem}
Suppose that $\forall (\mathbf{x}_i, y_i) \in \mathcal{D}, \mathbf{w} \in \mathbb{R}^d: \|\nabla \ell(\mathbf{w}^\top \mathbf{x}_i, y_i)\|_2 \leq C$. Suppose also that $\ell''$ is $\gamma$-Lipschitz and $\| \mathbf{x}_i \|_2 \leq 1$ for all $(\mathbf{x}_i, y_i) \in \mathcal{D}$. Then:
\begin{align}
\| \nabla L(\mathbf{w}^-; \mathcal{D}') \|_2 &= \|(H_{\mathbf{w}_\eta} - H_{\mathbf{w}^*}) H_{\mathbf{w}^*}^{-1} \Delta\|_2 \label{eq:gradient_residual} \\
&\leq \gamma (n-1) \| H_{\mathbf{w}^*}^{-1} \Delta \|_2^2 \leq \frac{4 \gamma C^2}{\lambda^2 (n-1)} \nonumber,
\end{align}
where $H_{\mathbf{w}_\eta}$ denotes the Hessian of $L(\cdot; \mathcal{D}')$ at the parameter vector $\mathbf{w}_\eta = \mathbf{w}^* + \eta H_{\mathbf{w}^*}^{-1} \Delta$ for some $\eta \in [0,1]$.
\label{thm:asymptotic_bound}
\end{theorem}
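The bound can also be checked empirically. The sketch below, on synthetic data of our own construction, uses plain gradient descent as a stand-in for the exact minimizer, applies the Newton update, and verifies that the gradient residual on $\mathcal{D}'$ stays below $\gamma (n-1) \| H_{\mathbf{w}^*}^{-1} \Delta \|_2^2$ (with $\gamma = 1/4$ for logistic regression):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam, gamma = 40, 3, 0.1, 0.25
X = rng.normal(size=(n, d))
X /= np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1.0)  # ||x_i||_2 <= 1
y = rng.choice([-1.0, 1.0], size=n)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def grad_L(w, X, y, lam, n_reg):
    # Gradient of sum_i -log sigmoid(y_i w^T x_i) + (lam * n_reg / 2) ||w||^2
    return ((sigmoid(y * (X @ w)) - 1.0) * y) @ X + lam * n_reg * w

# Approximate w* by gradient descent on the strongly convex loss
w = np.zeros(d)
for _ in range(20000):
    w -= 0.02 * grad_L(w, X, y, lam, n)

# Newton update removing the last point
Delta = lam * w + (sigmoid(y[-1] * (X[-1] @ w)) - 1.0) * y[-1] * X[-1]
s = sigmoid(y[:-1] * (X[:-1] @ w))
H = X[:-1].T @ (X[:-1] * (s * (1 - s))[:, None]) + lam * (n - 1) * np.eye(d)
step = np.linalg.solve(H, Delta)
w_minus = w + step

# Gradient residual on D' obeys gamma * (n-1) * ||H^{-1} Delta||^2
residual = np.linalg.norm(grad_L(w_minus, X[:-1], y[:-1], lam, n - 1))
assert residual <= gamma * (n - 1) * np.linalg.norm(step) ** 2 + 1e-8
```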
\paragraph{Loss perturbation.}
Obtaining a small gradient norm $\| \nabla L(\mathbf{w}^-; \mathcal{D}') \|_2$ via Theorem \ref{thm:asymptotic_bound} does not guarantee certified removal. In particular, the direction of the gradient residual may leak information about the training sample that was ``removed.'' To hide this information, we use the loss perturbation technique of \citet{chaudhuri2011dperm} at training time. It perturbs the empirical risk by a random linear term:
\begin{equation*}
L_{\mathbf{b}}(\mathbf{w}; \mathcal{D}) = \sum_{i=1}^n \ell\left(\mathbf{w}^\top \mathbf{x}_i, y_i\right) + \frac{\lambda n}{2} \|\mathbf{w}\|_2^2 + \mathbf{b}^\top \mathbf{w},
\end{equation*}
with $\mathbf{b} \in \mathbb{R}^d$ drawn randomly from some distribution. We analyze how loss perturbation masks the information in the gradient residual $\nabla L_\mathbf{b}(\mathbf{w}^-; \mathcal{D}')$ through randomness in $\mathbf{b}$.
Let $A(\mathcal{D}')$ be an exact minimizer\footnote{Our result can
be modified to work with approximate loss minimizers by incurring
a small additional error term.} for $L_{\mathbf{b}}(\cdot; \mathcal{D}')$ and let
$\tilde{A}(\mathcal{D}')$ be an approximate minimizer of $L_{\mathbf{b}}(\cdot; \mathcal{D}')$, for example, our removal mechanism applied on the trained model. Specifically, let $\tilde{\mathbf{w}}$ be
an approximate solution produced by $\tilde{A}$. This implies the gradient residual is:
\begin{equation}
\mathbf{u} := \nabla L_{\mathbf{b}}(\tilde{\mathbf{w}}; \mathcal{D}') = \sum_{i=1}^{n-1} \nabla \ell(\tilde{\mathbf{w}}^\top \mathbf{x}_i, y_i) + \lambda (n-1) \tilde{\mathbf{w}} + \mathbf{b}.
\label{eq:approximate_gradient}
\end{equation}
We assume that $\tilde{A}$ can produce a gradient residual $\mathbf{u}$ with $\|\mathbf{u}\|_2 \leq \epsilon'$
for some pre-specified bound $\epsilon'$ that is \emph{independent} of the perturbation vector $\mathbf{b}$.
Let $f_A(\cdot)$ and $f_{\tilde{A}}(\cdot)$ be the density functions over the model
parameters produced by $A$ and $\tilde{A}$, respectively.
We bound the max-divergence between $f_A$ and $f_{\tilde{A}}$ for
any solution $\tilde{\mathbf{w}}$ produced by approximate minimizer $\tilde{A}$.\\
\begin{theorem}
Suppose that $\mathbf{b}$ is drawn from a distribution with density function $p(\cdot)$
such that for any $\mathbf{b}_1, \mathbf{b}_2 \in \mathbb{R}^d$ satisfying
$\|\mathbf{b}_1 - \mathbf{b}_2\|_2 \leq \epsilon'$, we have that:
$e^{-\epsilon} \leq \frac{p(\mathbf{b}_1)}{p(\mathbf{b}_2)} \leq e^{\epsilon}$.
Then $e^{-\epsilon} \leq \frac{f_{\tilde{A}}(\tilde{\mathbf{w}})}{f_A(\tilde{\mathbf{w}})} \leq e^\epsilon$
for any $\tilde{\mathbf{w}}$ produced by $\tilde{A}$.
\label{thm:epsilon_cd}
\end{theorem}
\paragraph{Achieving certified removal.}
We can use Theorem \ref{thm:epsilon_cd} to prove certified removal by combining it with the gradient residual norm bound $\epsilon'$ from Theorem \ref{thm:asymptotic_bound}. The security parameters $\epsilon$ and $\delta$ depend on the distribution from which $\mathbf{b}$ is sampled. We state the guarantee of $(\epsilon,\delta)$-certified removal below for two suitable distributions $p(\mathbf{b})$.\\
\begin{theorem}
Let $A$ be the learning algorithm that returns the unique optimum of the loss $L_{\mathbf{b}}(\mathbf{w}; \mathcal{D})$ and let $M$ be the Newton update removal mechanism (cf., Equation~\ref{eq:newton_update}). Suppose that $\| \nabla L(\mathbf{w}^-; \mathcal{D}') \|_2 \leq \epsilon'$ for some computable bound $\epsilon' > 0$. We have the following guarantees for $M$:
\begin{enumerate}[(i)]
\setlength\itemsep{-1ex}
\vspace{-2ex}
\item If $\mathbf{b}$ is drawn from a distribution with density $p(\mathbf{b}) \propto e^{-\frac{\epsilon}{\epsilon'} \|\mathbf{b}\|_2}$, then $M$ is $\epsilon$-CR for $A$;
\item If $\mathbf{b} \sim \mathcal{N}(0, c \epsilon' / \epsilon)^d$ with $c > 0$, then $M$ is $(\epsilon,\delta)$-CR for $A$ with $\delta = 1.5 \cdot e^{-c^2/2}$.
\end{enumerate}
\label{thm:removal_main}
\end{theorem}
The distribution in (i) is equivalent to sampling a direction uniformly over the unit sphere and sampling a norm from the $\Gamma(d, \frac{\epsilon'}{\epsilon})$ distribution \cite{chaudhuri2011dperm}.
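This equivalence yields a direct sampling routine for the density in (i); a brief sketch (the function name is ours):

```python
import numpy as np

def sample_perturbation(d, eps, eps_prime, rng):
    """Draw b with density p(b) proportional to exp(-(eps/eps') * ||b||_2):
    a uniform direction on the unit sphere, scaled by a norm drawn from
    the Gamma(d, eps'/eps) distribution."""
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    norm = rng.gamma(shape=d, scale=eps_prime / eps)
    return norm * direction
```

Note that NumPy's `gamma` takes a shape/scale parameterization, matching $\Gamma(d, \frac{\epsilon'}{\epsilon})$ as written above.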
\subsection{Practical Considerations}
\paragraph{Least-squares and logistic regression.}
The certified removal mechanism described above can be used with least-squares and logistic regression, which are ubiquitous in real-world applications of machine learning.
Least-squares regression assumes $\forall i: y_i \in \mathbb{R}$ and uses the loss function $\ell(\mathbf{w}^\top \mathbf{x}_i, y_i) = \left(\mathbf{w}^\top \mathbf{x}_i - y_i\right)^2$. The Hessian of this loss function is $\nabla^2 \ell(\mathbf{w}^\top \mathbf{x}_i, y_i) = \mathbf{x}_i \mathbf{x}_i^\top$, which is independent of $\mathbf{w}$. In particular, the gradient residual from Equation \ref{eq:gradient_residual} is exactly zero, which makes the Newton update in Equation~\ref{eq:newton_update} an $\epsilon$-certified removal mechanism with $\epsilon=0$ (loss perturbation is not needed). This is not surprising since the Newton update assumes a local quadratic approximation of the loss, which is exact for least-squares regression.
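This exactness is easy to verify numerically. The sketch below (our own synthetic data) uses the convention $\ell = \frac{1}{2}(\mathbf{w}^\top \mathbf{x}_i - y_i)^2$, so that the per-sample Hessian is exactly $\mathbf{x}_i \mathbf{x}_i^\top$; the Newton update then matches retraining from scratch up to numerical precision:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 20, 5, 0.1
X = rng.normal(size=(n, d)) / np.sqrt(d)
y = rng.normal(size=n)

# Minimizer of sum_i 0.5*(w^T x_i - y_i)^2 + (lam*n/2)*||w||^2
w_star = np.linalg.solve(X.T @ X + lam * n * np.eye(d), X.T @ y)

# Newton removal of the last point (x_n, y_n)
x_n, y_n = X[-1], y[-1]
Delta = lam * w_star + (w_star @ x_n - y_n) * x_n
Xr, yr = X[:-1], y[:-1]
H = Xr.T @ Xr + lam * (n - 1) * np.eye(d)
w_minus = w_star + np.linalg.solve(H, Delta)

# Retraining from scratch on D' gives the same parameters exactly
w_retrain = np.linalg.solve(H, Xr.T @ yr)
assert np.allclose(w_minus, w_retrain)
```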
Logistic regression assumes $\forall i: y_i \in \{-1, +1\}$ and uses the loss function
$\ell(\mathbf{w}^\top \mathbf{x}_i, y_i) = -\log \sigma\left(y_i \mathbf{w}^\top \mathbf{x}_i\right)$,
where $\sigma(\cdot)$ denotes the sigmoid function, $\sigma(x) = \frac{1}{1 + \exp(-x)}$. The loss gradient and Hessian are given by:
\begin{align*}
\nabla \ell(\mathbf{w}^\top \mathbf{x}_i, y_i) &= \left(\sigma(y_i \mathbf{w}^\top \mathbf{x}_i) - 1\right) y_i \mathbf{x}_i\\
\nabla^2 \ell(\mathbf{w}^\top \mathbf{x}_i, y_i) &= \sigma(y_i \mathbf{w}^\top \mathbf{x}_i) \left(1 - \sigma(y_i \mathbf{w}^\top \mathbf{x}_i)\right) \mathbf{x}_i \mathbf{x}_i^\top.
\end{align*}
Under the assumption that $\| \mathbf{x}_i \|_2 \leq 1$ for all $i$, it is straightforward to show that $\| \nabla \ell(\mathbf{w}^\top \mathbf{x}_i, y_i) \|_2 \leq 1$ and that $\ell''(\mathbf{w}^\top \mathbf{x}_i, y_i)$ is $\gamma$-Lipschitz with $\gamma = 1/4$. This allows us to apply Theorem \ref{thm:asymptotic_bound} to logistic regression.
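These closed-form expressions can be sanity-checked with finite differences; a small sketch on a random input of our own construction:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(w, x, y):
    return -np.log(sigmoid(y * (w @ x)))

def logistic_grad(w, x, y):
    return (sigmoid(y * (w @ x)) - 1.0) * y * x

rng = np.random.default_rng(1)
w = rng.normal(size=4)
x = rng.normal(size=4)
x /= 2.0 * np.linalg.norm(x)   # enforce ||x||_2 <= 1
y = 1.0

# Central finite differences reproduce the analytic gradient
eps = 1e-6
numeric = np.array([
    (logistic_loss(w + eps * e, x, y) - logistic_loss(w - eps * e, x, y)) / (2 * eps)
    for e in np.eye(4)
])
assert np.allclose(numeric, logistic_grad(w, x, y), atol=1e-6)

# The gradient norm bound C = 1 holds since |sigma - 1| <= 1 and ||x||_2 <= 1
assert np.linalg.norm(logistic_grad(w, x, y)) <= 1.0
```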
\paragraph{Data-dependent bound on gradient norm.}
The bound in Theorem \ref{thm:asymptotic_bound} contains a constant factor of $1/\lambda^2$ and may be too loose for practical applications on smaller datasets. Fortunately, we can derive a \emph{data-dependent} bound on the gradient residual that can be efficiently computed and that is much tighter in practice. Recall that the Hessian of $L(\cdot; \mathcal{D}')$ has the form:
$$H_{\mathbf{w}} = (X^-)^\top D_{\mathbf{w}} X^- + \lambda (n-1) I_d,$$
where $X^- \in \mathbb{R}^{(n-1) \times d}$ is the data matrix corresponding to $\mathcal{D}'$, $I_d$ is the identity matrix of size $d\!\times\!d$, and $D_{\mathbf{w}}$ is a diagonal matrix containing values:
$$(D_{\mathbf{w}})_{ii} = \ell''\left(\mathbf{w}^\top \mathbf{x}_i, y_i\right).$$
By Equation \ref{eq:gradient_residual} we have that:
\begin{align*}
\| \nabla L(\mathbf{w}^-; \mathcal{D}') \|_2 &= \| (H_{\mathbf{w}_\eta} - H_{\mathbf{w}^*}) H_{\mathbf{w}^*}^{-1} \Delta \|_2 \\
&= \| (X^-)^\top (D_{\mathbf{w}_\eta} - D_{\mathbf{w}^*}) X^- H_{\mathbf{w}^*}^{-1} \Delta \|_2 \\
&\leq \| X^- \|_2 \| D_{\mathbf{w}_\eta} - D_{\mathbf{w}^*} \|_2 \| X^- H_{\mathbf{w}^*}^{-1} \Delta \|_2.
\end{align*}
The term $\| D_{\mathbf{w}_\eta} - D_{\mathbf{w}^*} \|_2$ corresponds to the maximum singular value of a diagonal matrix, which in turn is the maximum value among its diagonal entries. Given the Lipschitz constant $\gamma$ of the second derivative $\ell''$, we can thus bound it as:
$$\| D_{\mathbf{w}_\eta} - D_{\mathbf{w}^*} \|_2 \leq \gamma \| \mathbf{w}_\eta - \mathbf{w}^* \|_2 \leq \gamma \| H_{\mathbf{w}^*}^{-1} \Delta \|_2.$$
The following corollary summarizes this derivation.\\
\begin{corollary}
The Newton update $\mathbf{w}^- = \mathbf{w}^* + H_{\mathbf{w}^*}^{-1} \Delta$ satisfies:
$$\| \nabla L(\mathbf{w}^-; \mathcal{D}') \|_2 \leq \gamma \| X^- \|_2 \| H_{\mathbf{w}^*}^{-1} \Delta \|_2 \| X^- H_{\mathbf{w}^*}^{-1} \Delta \|_2,$$
where $\gamma$ is the Lipschitz constant of $\ell''$.
\label{cor:data_dependent}
\end{corollary}
The bound in Corollary \ref{cor:data_dependent} can be easily computed from the Newton update $H_{\mathbf{w}^*}^{-1} \Delta$ and the spectral norm of $X^-$, the latter admitting efficient algorithms such as power iterations.
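To make this concrete, the following is a minimal NumPy sketch (our illustration, not an official implementation; all function names are hypothetical) of the single-point Newton removal update and the bound of Corollary \ref{cor:data_dependent}, written for $L_2$-regularized logistic regression with objective $\sum_{i=1}^n \ell(\mathbf{w}^\top \mathbf{x}_i, y_i) + \frac{\lambda n}{2}\|\mathbf{w}\|_2^2$ and data normalized so that $\|\mathbf{x}_i\|_2 \leq 1$.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def newton_removal(X, y, w, lam, gamma, idx):
    """Remove sample `idx` from an L2-regularized logistic model.

    `w` is assumed to be the exact minimizer of
    sum_i l(w^T x_i, y_i) + (lam * n / 2) ||w||^2, with ||x_i|| <= 1.
    Returns the Newton update w^- = w + H^{-1} Delta together with the
    data-dependent bound on || grad L(w^-; D') ||_2.
    """
    n, d = X.shape
    keep = np.ones(n, dtype=bool)
    keep[idx] = False
    Xm, ym = X[keep], y[keep]                  # the data matrix X^-
    # Delta = lam * w + grad l(w^T x_idx, y_idx)
    xi, yi = X[idx], y[idx]
    delta = lam * w - yi * sigmoid(-yi * (xi @ w)) * xi
    # Hessian of L(.; D') at w: (X^-)^T D X^- + lam * (n - 1) * I
    margins = ym * (Xm @ w)
    D = sigmoid(margins) * sigmoid(-margins)   # l'' on the diagonal
    H = Xm.T @ (D[:, None] * Xm) + lam * (n - 1) * np.eye(d)
    step = np.linalg.solve(H, delta)           # H^{-1} Delta
    # Bound of the corollary: gamma * ||X^-|| * ||step|| * ||X^- step||
    bound = gamma * np.linalg.norm(Xm, 2) * \
        np.linalg.norm(step) * np.linalg.norm(Xm @ step)
    return w + step, bound
```

For the logistic loss, $\ell''(z) = \sigma(z)\sigma(-z)$ is $1/4$-Lipschitz, so $\gamma = 0.25$ is a valid (if conservative) choice.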
\paragraph{Multiple removals.} The worst-case gradient residual norm after $T$
removals can be shown to scale linearly in $T$. We can prove this using induction
on $T$. The base case, $T\!=\!1$, is proven above. Suppose that the gradient residual
after $T \!\geq\! 1$ removals is $\mathbf{u}_T$ with $\| \mathbf{u}_T \|_2 \leq T \epsilon'$,
where $\epsilon'$ is the gradient residual norm bound for a single removal.
Let $\mathcal{D}^{(T)}$ be the training dataset with $T$ data points removed.
Consider the modified loss function $L_\mathbf{b}^{(T)}(\mathbf{w}; \mathcal{D}^{(T)}) = L_\mathbf{b}(\mathbf{w}; \mathcal{D}^{(T)}) - \mathbf{u}_T ^\top \mathbf{w}$
and let $\mathbf{w}_T$ be the approximate solution after $T$ removals. Then $\mathbf{w}_T$
is an exact solution of $L_\mathbf{b}^{(T)}(\mathbf{w}; \mathcal{D}^{(T)})$, hence, the argument above can be applied
to $L_\mathbf{b}^{(T)}(\mathbf{w}; \mathcal{D}^{(T)})$ to show that the Newton update approximation $\mathbf{w}_{T+1}$ has
gradient residual $\mathbf{u}'$ with norm at most $\epsilon'$. Then:
\begin{align*}
\mathbf{u}' = \nabla_\mathbf{w} L_\mathbf{b}^{(T)}(\mathbf{w}_{T+1}; \mathcal{D}^{(T)}) = \nabla_\mathbf{w} L_\mathbf{b}(\mathbf{w}_{T+1}; \mathcal{D}^{(T)}) - \mathbf{u}_T \\
\Rightarrow 0 = \nabla_\mathbf{w} L_\mathbf{b}(\mathbf{w}_{T+1}; \mathcal{D}^{(T)}) - \mathbf{u}_T - \mathbf{u}'.
\end{align*}
Thus the gradient residual for $L_\mathbf{b}(\mathbf{w}; \mathcal{D}^{(T)})$ after $T+1$ removals is
$\mathbf{u}_{T+1} := \mathbf{u}_T + \mathbf{u}'$ and its norm is at most $(T+1) \epsilon'$ by the
triangle inequality.
\paragraph{Batch removal.} In certain scenarios, data removal may not need to occur immediately after the data's owner requests removal. This potentially allows for batch removals in which multiple training samples are removed at once for improved efficiency. The Newton update removal mechanism naturally supports this extension. Assume without loss of generality that the batch of training data to be removed is $\mathcal{D}_m = \{(\mathbf{x}_{n-m+1}, y_{n-m+1}), \ldots, (\mathbf{x}_n, y_n)\}$. Define:
\begin{align*}
\Delta^{(m)} &= m\lambda \mathbf{w}^* + \sum_{j=n-m+1}^n \nabla \ell((\mathbf{w}^*)^\top \mathbf{x}_j, y_j)\\
H_{\mathbf{w}^*}^{(m)} &= \nabla^2 L(\mathbf{w}^*; \mathcal{D} \setminus \mathcal{D}_m).
\end{align*}
The batch removal update is:
\begin{equation}
\mathbf{w}^{(-m)} = \mathbf{w}^* + \left[ H_{\mathbf{w}^*}^{(m)} \right]^{-1} \Delta^{(m)}.
\label{eq:newton_update_batch}
\end{equation}
We derive bounds on the gradient residual norm for batch removal that are similar to Theorem \ref{thm:asymptotic_bound} and Corollary \ref{cor:data_dependent}.\\
\begin{theorem}
Under the same regularity conditions of Theorem \ref{thm:asymptotic_bound}, we have that:
\begin{align*}
\| \nabla L(\mathbf{w}^{(-m)}; \mathcal{D} \setminus \mathcal{D}_m) \|_2 &\leq \gamma (n-m) \left\| \left[ H_{\mathbf{w}^*}^{(m)} \right]^{-1} \Delta^{(m)} \right\|_2^2 \\
&\leq \frac{4 \gamma m^2 C^2}{\lambda^2 (n-m)}.
\end{align*}
\label{thm:asymptotic_bound_batch}
\end{theorem}
\begin{corollary}
The Newton update $\mathbf{w}^{(-m)} = \mathbf{w}^* + \left[ H_{\mathbf{w}^*}^{(m)} \right]^{-1} \Delta^{(m)}$ satisfies:
\begin{align*}
&\| \nabla L(\mathbf{w}^{(-m)}; \mathcal{D} \setminus \mathcal{D}_m) \|_2 \leq \\
& \gamma \| X^{(-m)} \|_2\! \left\| \left[ H_{\mathbf{w}^*}^{(m)} \right]^{-1} \!\!\Delta^{(m)} \right\|_2 \! \left\| X^{(-m)} \! \left[ H_{\mathbf{w}^*}^{(m)} \right]^{-1} \!\!\Delta^{(m)} \right\|_2\!,
\end{align*}
where $X^{(-m)}$ is the data matrix for $\mathcal{D} \setminus \mathcal{D}_m$ and $\gamma$ is the Lipschitz constant of $\ell''$.
\label{cor:data_dependent_batch}
\end{corollary}
\begin{algorithm}[t]
\caption{Training of a certified removal-enabled model.}
\label{alg:training}
\begin{algorithmic}[1]
\STATE \textbf{Input}: Dataset $\mathcal{D}$, loss $\ell$, parameters $\sigma,\lambda>0$.
\STATE Sample $\mathbf{b} \sim \mathcal{N}(0, \sigma)^d$.
\STATE \textbf{Return}: $\text{argmin}_{\mathbf{w} \in \mathbb{R}^d} \scriptstyle{\sum_{i=1}^n \ell(\mathbf{w}^\top \mathbf{x}_i, y_i) + \frac{\lambda n}{2} \| \mathbf{w} \|_2^2 + \mathbf{b}^\top \mathbf{w}}$.
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[t]
\caption{Repeated certified removal of data batches.}
\label{alg:removal}
\begin{algorithmic}[1]
\STATE \textbf{Input}: Dataset $\mathcal{D}$, loss $\ell$, parameters $\epsilon,\delta,\sigma,\lambda>0$.
\STATE \hspace{6ex} Lipschitz constant $\gamma$ of $\ell''$.
\STATE \hspace{6ex} Solution $\mathbf{w}$ computed by Algorithm \ref{alg:training}.
\STATE \hspace{6ex} Sequence of batches of training sample
\STATE \hspace{6ex} indices to be removed: $B_1, B_2, \ldots$
\STATE Gradient residual bound $\beta \gets 0$.
\STATE $c \gets \sqrt{2 \log(1.5/\delta)}$.
\STATE $K \gets \sum_{i=1}^n \mathbf{x}_i \mathbf{x}_i^\top$.
\STATE $X \gets [\mathbf{x}_1 | \mathbf{x}_2 | \cdots | \mathbf{x}_n]^\top$.
\FOR{$j=1,2,\ldots$}
\STATE $\Delta \gets |B_j| \lambda \mathbf{w} + \sum_{i \in B_j} \nabla \ell(\mathbf{w}^\top \mathbf{x}_i, y_i)$.
\STATE $H \gets \sum_{i : i \notin B_1,B_2,\ldots,B_j} \left[ \nabla^2 \ell(\mathbf{w}^\top \mathbf{x}_i, y_i) + \lambda I_d \right]$.
\STATE $X \gets \texttt{remove\_rows}(X, B_j)$.
\STATE $K \gets K - \sum_{i \in B_j} \mathbf{x}_i \mathbf{x}_i^\top$.
\STATE $\beta \gets \beta + \gamma \sqrt{\| K \|_2} \cdot \|H^{-1} \Delta\|_2 \cdot \|XH^{-1} \Delta\|_2$.
\IF{$\beta > \sigma \epsilon / c$}
\STATE Re-train from scratch using Algorithm \ref{alg:training}.
\ELSE
\STATE $\mathbf{w} \gets \mathbf{w} + H^{-1} \Delta$.
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
Interestingly, the gradient residual bound in Theorem \ref{thm:asymptotic_bound_batch} scales quadratically with the number of removals, as opposed to linearly when removing examples one-by-one. This increase in error is due to a cruder approximation of the Hessian: it is computed only once, at the current solution $\mathbf{w}^*$, rather than once per removed data point.
\paragraph{Reducing online computation.} The Newton update requires forming and inverting the Hessian. Although the $O(d^3)$ cost of inversion is relatively limited for small $d$ and inversion can be done efficiently on GPUs, the cost of forming the Hessian is $O(d^2 n)$, which may be problematic for large datasets. However, the Hessian can be formed at training time, \emph{i.e.}, before the data to be removed is presented, and only the inverse needs to be computed at removal time.
When computing the data-dependent bound, a similar technique can be used for calculating the term $\| X^- H_{\mathbf{w}^*}^{-1} \Delta \|_2$ -- which involves the product of the $(n-1) \times d$ data matrix $X^-$ with a $d$-dimensional vector. We can reduce the online component of this computation to $O(d^3)$ by forming the SVD of $X$ offline and applying online down-dates \citep{gu1995downdating} to form the SVD of $X^-$ by solving an eigen-decomposition problem on a $d \times d$ matrix. It can be shown that this technique reduces the computation of $\| X^- H_{\mathbf{w}^*}^{-1} \Delta \|_2$ to involve only $d \times d$ matrices and $d$-dimensional vectors, which enables the online computation cost to be independent of $n$.
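As a simpler illustration of the same offline/online principle (our own sketch, not the SVD down-dating approach cited above), note that the identity $\|X^- \mathbf{v}\|_2^2 = \mathbf{v}^\top K \mathbf{v} - (\mathbf{x}_i^\top \mathbf{v})^2$, with the Gram matrix $K = X^\top X$ formed offline, already makes the online cost of this norm $O(d^2)$ for a single removal:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, i = 500, 6, 17
X = rng.normal(size=(n, d))

# Offline (training time): form the d x d Gram matrix once, O(n d^2).
K = X.T @ X

# Online (removal time): || X^- v ||_2 from K and the removed row only,
# at O(d^2) cost, independent of n.
v = rng.normal(size=d)
online = np.sqrt(v @ K @ v - (X[i] @ v) ** 2)

# Direct O(n d) computation, for comparison.
direct = np.linalg.norm(np.delete(X, i, axis=0) @ v)
```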
\paragraph{Pseudo-code.} We present pseudo-code for training removal-enabled models and for the $(\epsilon,\delta)$-CR Newton update mechanism. During training (Algorithm \ref{alg:training}), we add a random linear term to the training loss by sampling a Gaussian noise vector $\mathbf{b}$. The choice of $\sigma$ determines a ``removal budget'' according to Theorem \ref{thm:removal_main}: the maximum gradient residual norm that can be incurred is $\sigma \epsilon / c$. When optimizing the training loss, any optimizer with convergence guarantee for strongly convex loss functions can be used to find the minimizer in Algorithm \ref{alg:training}. We use L-BFGS \citep{liu1989lbfgs} in our experiments as it was the most efficient of the optimizers we tried.
During removal (line 19 in Algorithm \ref{alg:removal}), we apply the batch Newton update (Equation \ref{eq:newton_update_batch}) and compute the gradient residual norm bound using Corollary \ref{cor:data_dependent_batch} (line 15 in Algorithm \ref{alg:removal}). The variable $\beta$ accumulates the gradient residual norm over all removals. If the pre-determined budget of $\sigma \epsilon / c$ is exceeded, we train a new removal-enabled model from scratch using Algorithm \ref{alg:training} on the remaining data points.
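The removal procedure just described can be sketched in NumPy as follows. This is a simplified illustration, not the authors' implementation: the spectral norm $\|X\|_2 = \sqrt{\|K\|_2}$ is taken directly from the current data matrix, the regularization term is written into $H$ explicitly, and the accumulated bound $\beta$ is reset after re-training since a fresh perturbation $\mathbf{b}$ is drawn. The callables \texttt{grad\_l} and \texttt{hess\_l} stand for the first and second derivatives of $\ell$ in its first argument.

```python
import numpy as np

def removal_loop(X, y, w, lam, gamma, sigma, eps, delta,
                 batches, grad_l, hess_l, retrain):
    """Sketch of repeated certified batch removal (cf. Algorithm 2)."""
    c = np.sqrt(2.0 * np.log(1.5 / delta))
    budget = sigma * eps / c              # maximum accumulated residual norm
    beta = 0.0
    alive = np.ones(len(y), dtype=bool)
    for B in batches:
        # Delta = |B| * lam * w + sum of removed per-sample gradients
        Delta = len(B) * lam * w \
            + sum(grad_l(X[i] @ w, y[i]) * X[i] for i in B)
        alive[B] = False
        Xr, yr = X[alive], y[alive]
        # Hessian over the remaining samples, regularizer included
        D = hess_l(Xr @ w, yr)
        H = Xr.T @ (D[:, None] * Xr) + lam * len(yr) * np.eye(X.shape[1])
        step = np.linalg.solve(H, Delta)
        # accumulate the data-dependent gradient residual bound
        beta += gamma * np.linalg.norm(Xr, 2) \
            * np.linalg.norm(step) * np.linalg.norm(Xr @ step)
        if beta > budget:
            w = retrain(Xr, yr)           # fresh noise b resets the budget
            beta = 0.0
        else:
            w = w + step
    return w
```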
\begin{table*}[t]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{l|cccc}
\toprule
\textbf{Dataset} & \bf MNIST (\S\ref{sec:linear}) & \bf LSUN (\S\ref{sec:text_and_image}) & \bf SST (\S\ref{sec:text_and_image}) & \bf SVHN (\S\ref{sec:non-linear}) \\
\midrule
\textbf{Removal setting} & CR Linear & Public Extractor + CR Linear & Public Extractor + CR Linear & DP Extractor + CR Linear \\
\textbf{Removal time} & 0.04s & 0.48s & 0.07s & 0.27s \\
\midrule
\textbf{Training time} & 15.6s & 124s & 61.5s & 1.5h \\
\bottomrule
\end{tabular}
}
\vspace{-2ex}
\caption{\textbf{Summary of removal and training times observed in our experiments.} For LSUN and SST, the public extractor is trained on a public dataset and hence removal is only applied to the linear model. For SVHN, removal is applied to a linear model that operates on top of a differentially private feature extractor. In all cases, using the Newton update to (certifiably) remove data is several orders of magnitude faster than re-training the model from scratch.}
\label{tab:timings}
\end{table*}
\begin{figure*}[t!]
\centering
\includegraphics[width=\textwidth]{fig/MNIST_lam_std_combined.pdf}
\vspace{-5ex}
\caption{\textbf{Linear logistic regression on MNIST.} \textbf{Left}: Effect of $L_2$-regularization parameter, $\lambda$, and standard deviation of the objective perturbation, $\sigma$, on test accuracy. \textbf{Middle}: Effect of $\epsilon$ on test accuracy when supporting 100 removals. \textbf{Right}: Trade-off between accuracy and supported number of removals at $\epsilon=1$. At a given $\epsilon$, higher $\lambda$ and $\sigma$ values reduce test accuracy but allow for many more removals.}
\label{fig:mnist_std_lam}
\vspace{-1ex}
\end{figure*}
\subsection{Non-Linear Models}
Deep learning models often apply a linear
model to features extracted by a network pre-trained on a public dataset like ImageNet \citep{ren2015faster, he2017mask, zhao2017pyramid, carreira2017quovadis} for vision tasks, or by a language model trained on public text corpora \citep{devlin2019bert,dai2019transformer,yang2019xlnet,liu2019roberta} for natural language tasks. In such setups, we only need to worry about data removal from the linear model that is applied to the output of the feature extractor.
When feature extractors are trained on private data as well, we can use our certified-removal mechanism on linear models that are applied to the output of a differentially-private feature extraction network \citep{abadi2016deep}.\\% to obtain a certified-removal mechanism with better performance than pure differential privacy.
\begin{theorem}
Suppose $\Phi$ is a randomized learning algorithm that is $(\epsilon_\text{DP}, \delta_\text{DP})$-differentially private, and the outputs of $\Phi$ are used in a linear model by minimizing $L_\mathbf{b}$ and using a removal mechanism that guarantees $(\epsilon_\text{CR},\delta_\text{CR})$-certified removal. Then the entire procedure guarantees
$(\epsilon_\text{DP} \!+\! \epsilon_\text{CR}, \delta_\text{DP} \!+\! \delta_\text{CR})$-certified removal.
\label{thm:extractor}
\end{theorem}
The advantage of this approach over training the entire network in a differentially private manner \citep{abadi2016deep} is that the (removal-enabled) linear model can be trained using a much smaller perturbation, which may greatly boost the accuracy of the final model (see Section \ref{sec:non-linear}).
\section{Introduction}
\input{intro.tex}
\section{Certified Removal}
\label{sec:def}
\input{definition.tex}
\section{Removal Mechanisms}
\input{method.tex}
\section{Experiments}
\label{sec:exp}
\input{experiment.tex}
\section{Related Work}
\input{related.tex}
\section{Conclusion}
\input{conclusion.tex}
\newpage
Classical mechanics of a particle on $\mathbb{R}^n$ is often described by endowing a symplectic structure on its phase space $\mathbb{R}^{2n}$. Given a point $(p,q)\in \mathbb{R}^{2n}$, the point $p$ describes the position and $q$ describes the momentum of a particle.
In quantum physics, functions on $\mathbb{R}^{2n}$ are typically referred to as \emph{classical observables}, while (not necessarily bounded) operators on $\mathcal H=L^2(\mathbb{R}^n)$ are referred to as \emph{quantum observables}.
The Weyl canonical commutation relations on $\mathcal H$ play the same role as the symplectic phase space structure in classical mechanics. Usually, the theories for the classical observables and the quantum observables are considered on different spaces, with quantization acting as a way to pass between classical and quantum observables. In his 1984 paper \cite{werner84}, R.~Werner introduced a framework for the simultaneous treatment of both the classical and the quantum observables, which he called \emph{quantum harmonic analysis}. Here, he extends notions in harmonic analysis for the classical observables, such as convolutions and Fourier transforms, to the quantum observables. In doing so, one achieves interesting interactions between the classical and the quantum side of the theory. Besides being of interest for mathematical physics and of intrinsic interest, this framework also allows for important applications in several fields, e.g., localization operators, Wigner distributions and Toeplitz operators. Reformulating problems in terms of quantum harmonic analysis offers a new perspective for tackling them, which often proves fruitful. For more information, see e.g., \cite{Berge_Berge_Luef_Skrettingland2022, Fulsche2020, Fulsche2022, Halvdansson2022, luef_eirik2018, Luef_Skrettingland2021, Sk20, Werner1983}.
Being a bit more precise, let $W_z$ denote the \emph{Weyl operators} on the Hilbert space $\mathcal H$ defined by
\begin{align*}
W_{(x,\,\xi)}\phi(y)=e^{i\xi y-ix\xi/2}\phi(y-x), \quad z=(x,\,\xi)\in \mathbb{R}^{2n}.
\end{align*}
As is common, we have set Planck's constant to $\hbar =1$.
The Weyl operators are unitary operators on $\mathcal{H}$, and satisfy the \emph{Weyl commutation relations}
\begin{align*}
W_z W_{z'}=e^{-i\sigma(z,\,z')/2}W_{z+z'} = e^{-i\sigma(z,\,z')} W_{z'} W_z, \quad z, z' \in \mathbb R^{2n},
\end{align*}
where $\sigma$ denotes the standard symplectic form on $\mathbb{R}^{2n}$.
Given two trace class operators $S$ and $T$, their convolution is defined by
\begin{align*}
S \ast T(z) \coloneqq \tr(S W_z P T PW_{-z}), \quad z \in \mathbb R^{2n},
\end{align*}
where $P$ is the \emph{parity operator} acting on $\mathcal{H}$ by $P\phi(x) = \phi(-x)$.
The expression $\alpha_z(A) \coloneqq W_z A W_{-z}$ plays the role of \emph{shifting} the operator $A$ by $z$, analogous to the shift $\alpha_z(f) = f(\cdot -z )$ used in the definition of function convolution.
Defining the \emph{Fourier-Weyl transform} (also called \emph{Fourier-Wigner transform}) by
\begin{align*}
\mathcal{F}_W(S)(z) \coloneqq \tr(S W_z), \quad z \in \mathbb R^{2n},
\end{align*}
we have that
\begin{align*}
\mathcal{F}_\sigma(S\ast T) = \mathcal{F}_W(S)\mathcal{F}_W(T),
\end{align*}
where $\mathcal{F}_\sigma$ is the symplectic Fourier transform. Additionally, given a function $f\in L^1(\mathbb{R}^{2n})$ and a trace class operator $A\in \mathcal{T}^1(\mathcal{H})$, one defines the operator $f \ast A$ by
\begin{equation*}
(f\ast A)\phi = \frac{1}{(2\pi)^n}\int_{\mathbb{R}^{2n}}f(z)W_z AW_{-z}\phi\,\mathrm{d}z, \quad \phi \in \mathcal{H}.
\end{equation*}
Again, we have a relation with the Fourier transforms given by
\[\mathcal{F}_W(f\ast A)=\mathcal{F}_\sigma(f)\mathcal{F}_W(A).\]
Using the convolutions, one can define a product on $L^1(\mathbb{R}^{2n})\oplus\mathcal{T}^1(\mathcal H)$ by \[(f,S)\ast (g,T)=(f\ast g+S\ast T, f\ast T+g\ast S),\] making the space into a commutative Banach algebra. Hence, this new space contains both the classical and the quantum observables.
It is not hard to find reasons why $L^2(\mathbb R^{2n})$ and the Hilbert-Schmidt operators $\mathcal T^2(\mathcal H)$ play similar roles for classical and quantum observables. For example, the Fourier-Weyl transform can be extended to an isomorphism between these two spaces by the Plancherel theorem, see e.g., \cite[Sec.~7.5]{folland2016course}. For other function spaces on $\mathbb{R}^{2n}$, the paper \cite{werner84} gives a definition of \emph{corresponding spaces}. We say that a space of classical observables $\mathcal{D}_0$ and a space of quantum observables $\mathcal{D}_1$ are corresponding spaces if
\begin{equation}\label{eq:corresponding spaces}
(L^1(\mathbb{R}^{2n})\oplus\mathcal{T}^1(\mathcal H))\ast (\mathcal{D}_0\oplus\mathcal{D}_1)\subset \mathcal{D}_0\oplus\mathcal{D}_1.
\end{equation}
Using this definition, $L^2(\mathbb{R}^{2n})$ and $\mathcal{T}^2(\mathcal{H})$ are corresponding spaces; after extending the definition of convolution appropriately, the same is true for $L^\infty(\mathbb R^{2n})$ and the space of bounded operators $\mathcal L(\mathcal H)$. It should be noted that this correspondence only becomes unique, in the sense that given a space $\mathcal{D}_0$ there is only one corresponding space $\mathcal{D}_1$, upon assuming additional topological properties on $\mathcal D_0$ and $\mathcal D_1$. Notice that when $\mathcal{D}_0\oplus\mathcal{D}_1\subset L^1(\mathbb{R}^{2n})\oplus\mathcal{T}^1(\mathcal H)$, equation \eqref{eq:corresponding spaces} implies that $\mathcal{D}_0\oplus \mathcal{D}_1$ is an ideal.
This paper has the dual purpose of both furthering our understanding of the space $L^1(\mathbb R^{2n}) \oplus \mathcal T^1(\mathcal H)$ as a commutative Banach algebra and introducing a quantum version of Segal algebras.
For the first goal, we completely classify the closed ideals of $L^1(\mathbb R^{2n}) \oplus \mathcal T^1(\mathcal H)$. As is well-known, closed ideals of $L^1(\mathbb{R}^n)$ are identical to closed shift-invariant subspaces. This analogy breaks down for closed ideals of $L^1(\mathbb R^{2n}) \oplus \mathcal T^1(\mathcal H)$, where
shift-invariance is merely a necessary condition for being a closed ideal, but not sufficient. Nevertheless, there is still a rich connection between shift-invariant subspaces and closed ideals, which we investigate.
As a special case, we describe the Gelfand theory of $L^1(\mathbb R^{2n}) \oplus \mathcal T^1(\mathcal H)$. From the Gelfand theory of $L^1(\mathbb{R}^{2n})$ it should come as no surprise that this is related to the Fourier transforms.
The second purpose of the paper is to introduce a notion of \emph{quantum Segal algebras}. The quantum Segal algebras play an analogous role in quantum harmonic analysis to the Segal algebras in classical harmonic analysis. The definition of a quantum Segal algebra is in spirit the same as Segal algebras: A quantum Segal algebra is a dense subalgebra of $L^1(\mathbb R^{2n}) \oplus \mathcal T^1(\mathcal H)$ that comes with its own Banach algebra norm such that shifts act isometrically and continuously on it. We show that quantum Segal algebras are, in analogy with Segal algebras, the same as essential $L^1$ modules. Further, we prove that under the technical assumption of the space being \emph{graded} (i.e., the algebra has the structure of a direct sum of its classical and quantum part), they always have the same Gelfand theory as $L^1( \mathbb R^{2n}) \oplus \mathcal T^1(\mathcal H)$. As a last general result on quantum Segal algebras, we give a description of the intersection of all graded quantum Segal algebras.
After having discussed the general theory of quantum Segal algebras, we turn towards examples of such algebras. We give two different ways of constructing such algebras: By convolving a Segal algebra with a regular trace class operator or by using the Weyl quantization. As a special case of a quantum Segal algebra, we discuss the \emph{quantum Feichtinger algebra}. It is obtained as the direct sum of the Feichtinger algebra $\mathcal S_0(\mathbb R^{2n})$ and the subspace of $\mathcal T^1(\mathcal H)$ obtained by the Weyl quantization of symbols in $\mathcal S_0(\mathbb R^{2n})$. Due to the importance of the classical Feichtinger algebra, we find it appropriate to look into this example in some more detail. In particular, we give some conditions on when an operator belongs to the quantum Feichtinger algebra in terms of the operations of quantum harmonic analysis. Not by chance, this turns out to be entirely analogous to the characterization of the membership of functions in the classical Feichtinger algebra.
Our paper is structured as follows: In Section \ref{sec:preliminaries}, we give a more detailed account of the foundations for our work. We start by recalling the basics of Segal algebras and thereafter review some results in quantum harmonic analysis. In particular, we introduce the notion of \emph{modulation} of an operator, which, upon applying the Fourier-Weyl transform, is turned into function shifts. This notion of modulation seems not to be present in the literature thus far. In Section \ref{sec: structure theory}, we conduct the investigation of the closed ideals of $L^1(\mathbb R^{2n}) \oplus \mathcal T^1(\mathcal H)$ as well as the Gelfand theory. Section \ref{sec: Quantum Segal} is then devoted to the introduction and the basic properties of quantum Segal algebras. Finally, Section \ref{sec: examples of QSA} will discuss the abovementioned constructions of quantum Segal algebras, in particular the quantum Feichtinger algebra.
\section{Preliminaries}\label{sec:preliminaries}
\subsection{Classical Segal Algebras}
Before introducing Segal algebras, let us quickly recall some basic facts about the space of integrable functions $L^1(\mathbb{R}^n)$. We will let $\alpha_x\colon L^1(\mathbb{R}^n)\to L^1(\mathbb{R}^n)$ denote the \emph{shift operator}
\[\alpha_xf(y)\coloneqq f(y-x), \qquad f \in L^{1}(\mathbb{R}^{n}), \ x\in \mathbb{R}^n.\]
For elements $f, g \in L^{1}(\mathbb{R}^{n})$ their \emph{convolution product} is
\[(f \ast g)(x) \coloneqq \frac{1}{(2\pi)^{n/2}}\int_{\mathbb{R}^{n}}f(y)\alpha_{y}g(x) \, \mathrm{d}y = \frac{1}{(2\pi)^{n/2}}\int_{\mathbb{R}^{n}}f(y)g(x - y) \, \mathrm{d}y,\]
making $L^1(\mathbb{R}^n)$ into a Banach algebra. We want to note that the factors $\frac{1}{(2\pi)^{n/2}}$ serve as appropriate normalizations for the Haar measure, which makes the connection between classical and quantum harmonic analysis most natural.
The algebra $L^{1}(\mathbb{R}^{n})$ is equipped with the \emph{involution} \[f^{\ast }(x) \coloneqq \overline{f(-x)}.\]
We use the notation $\mathcal{F}f$ for the \emph{standard Fourier transform} of $f \in L^{1}(\mathbb{R}^{n})$ given by
\[\mathcal{F}f(\xi) \coloneqq \frac{1}{(2\pi)^{n/2}}\int_{\mathbb{R}^{n}}f(t)e^{- i t \xi} \, \mathrm{d}t.\]
We emphasize at this point that the above standard Fourier transform will not play a role later in this paper. It is usually replaced by the \emph{symplectic Fourier transform}, and we stress already at this point that $\widehat{f}$ will not denote the standard Fourier transform of $f$ as above, but its symplectic Fourier transform $\mathcal F_\sigma(f)$, see \eqref{eq:symplectic}.
Let us recall the definition of a Segal algebra, as e.g., given in \cite{reiter71}.
\begin{definition}[Segal Algebra]
A \emph{Segal algebra} $S$ is a linear subspace of $L^{1}(\mathbb{R}^{n})$ satisfying the following four properties:
\begin{enumerate}[(S1)]
\item\label{def:s1} The space $S$ is dense in $L^1(\mathbb R^n)$.
\item\label{def:s2} The space $S$ is a Banach algebra with respect to a norm $\| \cdot \|_S$ and convolution as the product.
\item\label{def:s3} The space $S$ is invariant under shifts, i.e., for $f\in S$ we have that $\alpha_x(f)\in S$ for all $x \in \mathbb{R}^{n}$.
\item\label{def:s4} The shifts $\alpha_x$ are continuous and satisfy $\|\alpha_{x}(f)\|_{S} = \|f\|_{S}$ for all $x \in \mathbb{R}^{n}$ and $f \in S$.
\end{enumerate}
\end{definition}
To check the continuity condition in property \ref{def:s4} one only needs to verify that, given an arbitrary $f \in S$ and $\epsilon > 0$, there exists a neighborhood $U$ of the origin in $\mathbb{R}^{n}$ such that
\[\|\alpha_{x}f - f\|_{S} < \epsilon \quad \text{ for } \quad x \in U.\]
\begin{remark}
Segal algebras are often defined on a locally compact group $G$ as a dense subspace of $L^{1}(G)$ equipped with the left Haar measure and left shifts. In this more general case, the existence and variety of Segal algebras depend heavily on the group $G$ in question. For discrete groups the only Segal algebra is the whole space $l^{1}(G) = L^{1}(G)$.
On the other hand, for $G$ compact the space $L^{p}(G)$ is a Segal algebra for every $1\le p\le \infty$. In particular, the Segal algebra $L^2(G)$ is heavily used in Peter-Weyl theory.
\end{remark}
A well-known result is that Segal algebras are (two-sided) ideals of $L^1(\mathbb{R}^n)$, i.e., for a Segal algebra $S$, we have that $f \ast g \in S$ whenever $f \in L^1(\mathbb R^n)$ and $g \in S$. Moreover, by \cite[Prop.\ 4.1]{reiter71}, Segal algebras always satisfy the estimate
\begin{equation}\label{eq:banach_ideal_bound}
\|f \ast g\|_{S} \leq C \|f\|_{L^{1}(\mathbb{R}^{n})} \|g\|_{S},
\end{equation}
for all $f \in L^{1}(\mathbb{R}^{n})$ and $g \in S$, with a constant $C$ depending only on $\|\cdot\|_{S}$. By rescaling the norm on $S$ to an equivalent norm, one can take the constant $C$ to be $1$. More generally, one has that if $T\subset S$ are both Segal algebras then
\begin{equation}\label{eq:banach_ideal_bound_general}
\|f \ast g\|_{T} \leq C \|f\|_{S} \|g\|_{T}, \qquad f\in S,\ g\in T.
\end{equation}
We refer the reader to \cite[Prop.\ 4.2]{reiter71} for a generalization of \eqref{eq:banach_ideal_bound} involving bounded measures.
\begin{definition}
We say that a Segal algebra $S$ is \emph{star-symmetric} if the involution is an isometry in the $S$-norm, meaning that for $f\in S$ we have that $f^\ast \in S$ and $\| f^\ast\|_S = \| f\|_S.$
\end{definition}
Clearly $L^{1}(\mathbb{R}^{n})$ is a star-symmetric Segal algebra. In the following example, we will review a few classical examples of Segal algebras of central importance.
\begin{example}\label{ex:segal}\hfill
\begin{enumerate}[(E1)]
\item Let $C_{0}(\mathbb{R}^{n})$ denote the continuous functions on $\mathbb{R}^{n}$ that vanish at infinity. Then the space $S_{\infty} \coloneqq L^{1}(\mathbb{R}^{n}) \cap C_{0}(\mathbb{R}^{n})$ is a star-symmetric Segal algebra with the norm
\[\|f\|_{S_{\infty}} \coloneqq \|f\|_{L^{1}(\mathbb{R}^{n})} + \|f\|_{L^\infty(\mathbb{R}^{n})}.\]
\item For $1 < p < \infty$ the space $S_{p} \coloneqq L^{1}(\mathbb{R}^{n}) \cap L^{p}(\mathbb{R}^{n})$ is a star-symmetric Segal algebra with the norm
\[\|f\|_{S_{p}} \coloneqq \|f\|_{L^{1}(\mathbb{R}^{n})} + \|f\|_{L^{p}(\mathbb{R}^{n})}.\]
\item\label{ex:segal E3} Fix a positive and unbounded measure $\mu$ on $\mathbb{R}^{n}$, for example the Lebesgue measure $\mathrm{d}x$. For $1 \leq p < \infty$ we denote by \[S_{p}^{\mu} \coloneqq L^{1}(\mathbb{R}^{n}) \cap \mathcal{F}^{-1}\left(L^{p}(\mathbb{R}^{n}, \mu)\right).\] Then $S_{p}^{\mu}$ is a star-symmetric Segal algebra with the norm
\[\|f\|_{S_{p}^{\mu}} \coloneqq \|f\|_{L^{1}(\mathbb{R}^{n})} + \|\mathcal{F}(f)\|_{L^{p}(\mathbb{R}^{n}, \mu)}.\]
\item Recall that \emph{Wiener's algebra} $W$ on $\mathbb R^{n}$ is defined as the space of those continuous functions on $\mathbb R^{n}$ such that
\begin{align*}
\| f\|_W' \coloneqq \sum_{m \in \mathbb Z^{n}} a_m(f) < \infty, \quad \text{where } a_m(f) = \sup_{x \in [0,1]^{n}} |f(m + x)|.
\end{align*}
Clearly, $W \subset L^1(\mathbb R^{n}) \cap C_0(\mathbb R^{n})$. Letting the norm on $W$ be defined by
\begin{align*}
\| f\|_W \coloneqq \sup_{y \in \mathbb R^{n}} \| \alpha_y(f)\|_W',
\end{align*}
turns $W$ into a Segal algebra. It is immediately clear that $W$ is also a module under pointwise multiplication by functions from $C_0(\mathbb R^{n})$. Indeed, $W$ is the smallest Segal algebra with this property:
\begin{theorem}[\cite{feichtinger1977}]
Let $ S \subset L^1(\mathbb R^n)$ be a Segal algebra which is a $C_0(\mathbb R^n)$ module under pointwise multiplication. Then, $S$ contains $W$ and $\| f\|_W \leq C\| f\|_S$ for all $f\in W$.
\end{theorem}
\item\label{ex:Feichtinger} The \emph{Feichtinger algebra} $\mathcal{S}_{0}(\mathbb R^n)$ in time-frequency analysis is typically defined through the short-time Fourier transform: One can define the Feichtinger algebra as the set of functions $f \in L^{1}(\mathbb{R}^{n}) \cap L^{2}(\mathbb{R}^{n})$ such that $V_{f}f \in L^{1}(\mathbb{R}^{2n})$, where
\begin{equation}
\label{eq:STFT}
V_{g}f(x, \xi) \coloneqq \int_{\mathbb{R}^{n}}f(t)\overline{g(t - x)}e^{- i \xi t} \, \mathrm{d}t.
\end{equation}
Through this description, it is somewhat nontrivial to see that the Feichtinger algebra is a star-symmetric Segal algebra. However, one can also describe the Feichtinger algebra as the minimal Segal algebra that is (in a strong sense) closed under \emph{modulations}
\begin{equation}\label{eq:modulation}
M_{\xi}f(t) \coloneqq e^{i t \xi}f(t).
\end{equation}
We refer the reader to \cite{jakobsen18} for a careful examination of the Feichtinger algebra on locally compact groups.
\end{enumerate}
\end{example}
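As a small numerical illustration of the Feichtinger algebra example (our own sketch, not part of the cited references), one can check that the normalized Gaussian $\varphi(t) = \pi^{-1/4}e^{-t^2/2}$ lies in $\mathcal S_0(\mathbb R)$: with the convention \eqref{eq:STFT}, a direct computation gives $V_\varphi\varphi(x,\xi) = e^{-ix\xi/2}e^{-(x^2+\xi^2)/4}$, which is integrable over $\mathbb{R}^2$, and a discretized STFT reproduces this closed form.

```python
import numpy as np

def stft(f, g, x, xi, t):
    """Discretized V_g f(x, xi) via the trapezoidal rule, following the
    convention V_g f(x, xi) = int f(t) conj(g(t - x)) e^{-i xi t} dt."""
    vals = f(t) * np.conj(g(t - x)) * np.exp(-1j * xi * t)
    return np.sum((vals[1:] + vals[:-1]) / 2 * np.diff(t))

# Normalized Gaussian pi^{-1/4} e^{-t^2/2}, a member of S_0(R).
phi = lambda t: np.pi ** (-0.25) * np.exp(-t ** 2 / 2)
t = np.linspace(-30.0, 30.0, 20001)
x, xi = 1.3, -0.7
numeric = stft(phi, phi, x, xi, t)
# Closed form: V_phi phi(x, xi) = e^{-i x xi / 2} e^{-(x^2 + xi^2) / 4}.
closed_form = np.exp(-1j * x * xi / 2 - (x ** 2 + xi ** 2) / 4)
```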
\begin{remark}\label{re:intersection_Segal}
The intersection of all Segal algebras is precisely the set of continuous functions in $L^{1}(\mathbb{R}^{n})$ whose Fourier transform is compactly supported; we refer to \cite[(vii) p.~26]{reiter71} for a proof. As such, not every Segal algebra contains the Schwartz functions $\mathcal{S}(\mathbb{R}^{n})$ as a subspace.
\end{remark}
Segal algebras on $\mathbb{R}^{n}$ are not unital algebras, since $(2\pi)^{n/2}\delta_0\not\in L^1(\mathbb{R}^n)$. However, every Segal algebra contains an approximate unit that is normalized in $L^{1}(\mathbb{R}^{n})$, see \cite[Prop.~8.1]{reiter71}. As such, the Cohen-Hewitt factorization theorem implies that every element $g \in S$ of a Segal algebra can be factorized as $g = f \ast h$ with $f \in L^{1}(\mathbb{R}^{n})$ and $h \in S$. We succinctly write
\begin{equation}\label{eq:Cohen_Hewitt_factorization}
L^{1}(\mathbb{R}^{n}) \ast S = S.
\end{equation}
\par
It follows from \cite[Prop.~3.1]{dales2014} and \eqref{eq:Cohen_Hewitt_factorization} applied to $S = L^{1}(\mathbb{R}^{n})$ that no maximal ideal of $L^{1}(\mathbb{R}^{n})$ can be dense. Hence Segal algebras are never maximal ideals. We note that $L^{1}(\mathbb{R}^{n})$ has plenty of closed maximal ideals, e.g., for every $\xi \in \mathbb{R}^{n}$ the space
\[I_{\xi} \coloneqq \left\{f \in L^{1}(\mathbb{R}^{n}) : \mathcal{F}f(\xi) = 0\right\},\]
is a maximal ideal.
When it comes to ideals inside a Segal algebra $S \subset L^{1}(\mathbb{R}^{n})$, every closed ideal $I_{S}$ in $S$ is of the form $I_{S} = I \cap S$, where $I$ is a unique closed ideal of $L^{1}(\mathbb{R}^{n})$ by \cite[Thm.~9.1]{reiter71}.
\subsection{Quantum Harmonic Analysis}
The operators in this article will always be on the Hilbert space $\mathcal H=L^2(\mathbb R^n)$. Recall that the \emph{Weyl operators} acting on $\mathcal H$ are given by
\begin{align*}
W_{(x,\,\xi)}\phi(y)\coloneqq e^{i\xi y-ix\xi/2}\phi(y-x),
\end{align*}
where $x, \, \xi \in \mathbb R^n$.
It is common to use the shorthand notation $W_z\coloneqq W_{(x,\,\xi)}$, where $z = (x,\, \xi)\in \mathbb{R}^{2n}$. The Weyl operators are unitary operators and satisfy $W_{z}^\ast = W_{-z}$. The name Weyl operators comes from the \emph{Weyl commutation relation}, often called the CCR relation, given by
\begin{align*}
W_zW_{z'}=e^{-i\sigma(z,\,z')/2}W_{z+z'}=e^{-i\sigma(z,\,z')}W_{z'}W_{z}.
\end{align*}
Here, $\sigma$ denotes the symplectic form
\begin{align*}
\sigma(z, z')= \sigma((x, \xi), (x', \xi')) \coloneqq \xi x' - x \xi'.
\end{align*}
We will now define the basic operations of quantum harmonic analysis, namely convolutions and Fourier transforms. Unless otherwise stated, the proofs of the statements in Section \ref{sec: QHA convolutions} and Section \ref{sec:qha fourier} can be found in \cite{werner84}.
\subsubsection{Convolutions}\label{sec: QHA convolutions}
Given a bounded linear operator $A \in \mathcal L(\mathcal H)$, we define \emph{shifting the operator} $A$ by $ z \in \mathbb R^{2n}$ as
\[\alpha_z(A) \coloneqq W_z A W_{-z}.\]
Using shifts, one can define the \emph{function-operator convolution} for $f \in L^1(\mathbb R^{2n})$ and $A \in \mathcal L(\mathcal H)$ as the operator
\begin{equation}\label{eq:function_operator_convolution}
f \ast A \coloneqq
\frac{1}{(2\pi)^n}\int_{\mathbb R^{2n}} f(z) \alpha_z(A)\,\mathrm{d}z\eqqcolon A\ast f.
\end{equation}
The convolution \eqref{eq:function_operator_convolution} is interpreted as a weak integral, and the resulting operator is bounded. We say that a linear space $X\subset \mathcal{L} (\mathcal H)$ is \emph{shift-invariant} if $\alpha_z (X)\subset X$ for all $z\in \mathbb{R}^{2n}$. A Banach space $(X,\|\cdot \|_X)$ of operators $X\subset \mathcal L(\mathcal H)$ is said to have a \emph{strongly shift-invariant norm} if $\| \alpha_z(A)\|_X = \| A\|_X$ for all $z\in \mathbb{R}^{2n}$ and $z \mapsto \alpha_z(A)$ is continuous in $\|\cdot\|_X$. If $(X,\|\cdot \|_X)$ is a shift-invariant Banach space with a strongly shift-invariant norm that is continuously embedded into $\mathcal L(\mathcal H)$, then $f \ast A \in X$ for all $A\in X$ and $f \in L^1(\mathbb{R}^{2n})$.
Examples of such spaces are the \emph{$p$-Schatten class operators} $\mathcal T^p = \mathcal T^p(\mathcal H)$ for $1 \leq p \leq \infty$. The special case $p=1$ is the \emph{trace class operators}, while $p=2$ is the \emph{Hilbert-Schmidt operators}. We use the standard convention that $\mathcal{T}^\infty$ denotes the compact operators with the operator norm.
We will use $P\colon \mathcal H\to \mathcal H$ to denote the \emph{parity operator} defined by $P\phi(x) = \phi(-x)$. Notice that the parity operator satisfies
\[PW_z=W_{-z}P. \] For $A, B \in \mathcal T^1$ we define their \emph{operator-operator convolution} as
\begin{align*}
A \ast B(z) \coloneqq \tr(A \alpha_z( P B P))=B \ast A(z), \quad z \in \mathbb R^{2n}.
\end{align*}
It is of central importance in quantum harmonic analysis that $A \ast B \in L^1(\mathbb R^{2n})$. Furthermore, one has
\begin{align*}
\frac{1}{(2\pi)^n} \int_{\mathbb R^{2n}} A \ast B(z) \,\mathrm{d}z = \tr(A) \tr(B).
\end{align*}
The convolutions satisfy the following associativity conditions
\[f\ast (A\ast B)=(f\ast A)\ast B, \qquad f\ast (g\ast A)=(f\ast g)\ast A\]
for $f,g\in L^1(\mathbb{R}^{2n})$ and $A,B\in \mathcal{T}^1$.
For $A \in \mathcal T^1$ we define the \emph{operator involution} by
\begin{align*}
A^{\ast_{\textrm{QHA}}} \coloneqq PA^\ast P.
\end{align*}
The involution satisfies
\begin{equation}\label{eq:involution operator operator}
(A\ast B)^\ast =(A^{\ast_{\textrm{QHA}}})\ast (B^{\ast_{\textrm{QHA}}})
\end{equation}
and
\begin{equation}\label{eq:involution function operator}
(f\ast A)^{\ast _{\textrm{QHA}}}=(f^{\ast })\ast (A^{\ast_{\textrm{QHA}}})
\end{equation}
for $f\in L^1(\mathbb{R}^{2n})$ and $A,B\in \mathcal{T}^1$.
One can define a commutative product on $L^1 \oplus \mathcal T^1 \coloneqq L^1(\mathbb R^{2n}) \oplus \mathcal T^1(\mathcal H)$ by
\[(f,A)\ast (g,B)=(f\ast g+A\ast B, f\ast B+g\ast A),\]
where $(f,A),\,(g,B)\in L^1 \oplus \mathcal T^1 $.
Defining the norm on $L^1 \oplus \mathcal T^1$ as
\begin{align*}
\| (f, A)\| = \| f\|_{L^1} + \| A\|_{\mathcal T^1}
\end{align*}
turns the space into a commutative Banach algebra.
We can also make the space into a Banach $\ast $-algebra by defining the involution as
\begin{equation*}
(f, A)^\ast = (f^\ast ,A^{\ast_{\textrm{QHA}}}).
\end{equation*}
Additionally, the involution is norm isometric:
\[\|(f,A)^\ast \|=\|(f,A)\|.\]
We define the \emph{shift} of $(f,A)\in L^1\oplus \mathcal{T}^1$ by an element $z\in \mathbb{R}^{2n}$ by
\[\alpha_z(f,A)\coloneqq(\alpha_z f, \alpha_z A).\]
Using the shifts, we define a set $X\subset L^1\oplus \mathcal{T}^1$ to be \emph{shift-invariant} if
\[\alpha_z(X)\subset X\text{ for all } z\in \mathbb{R}^{2n}.\]
\subsubsection{Fourier Transforms}\label{sec:qha fourier}
Quantum harmonic analysis comes with its own notion of Fourier transforms. We let $\mathcal F_{\sigma}(f)$ be the symplectic Fourier transform, i.e.,
\begin{equation}\label{eq:symplectic}
\mathcal F_\sigma(f)(z') \coloneqq \frac{1}{(2\pi)^n}\int_{\mathbb R^{2n}}f(z) e^{-i\sigma(z', z)}\,\mathrm{d}z, \quad z' \in \mathbb R^{2n},\ f \in L^{1}(\mathbb{R}^{2n}).
\end{equation}
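For $n=1$, the symplectic Fourier transform can be probed numerically. The following pure-Python Riemann-sum sketch (grid, step size, and evaluation points are illustrative choices) checks the well-known fact that the standard Gaussian $e^{-|z|^2/2}$ on $\mathbb{R}^2$ is fixed by $\mathcal F_\sigma$; in particular $\mathcal F_\sigma \circ \mathcal F_\sigma$ acts as the identity on this example.

```python
# Riemann-sum check (n = 1) that the symplectic Fourier transform
# fixes the standard Gaussian e^{-|z|^2/2} on R^2.
import math, cmath

h = 0.1
grid = [k * h for k in range(-60, 61)]       # [-6, 6] in each coordinate

def sympl_ft(f, w):
    """(2 pi)^{-1} int f(z) e^{-i sigma(w, z)} dz with sigma((x,xi),(x',xi')) = xi x' - x xi'."""
    x, xi = w
    s = sum(f(z1, z2) * cmath.exp(-1j * (xi * z1 - x * z2))
            for z1 in grid for z2 in grid)
    return s * h * h / (2 * math.pi)

gauss = lambda z1, z2: math.exp(-(z1 ** 2 + z2 ** 2) / 2)

err = max(abs(sympl_ft(gauss, w) - gauss(*w))
          for w in [(0.0, 0.0), (1.0, -0.5), (-0.3, 1.2)])
print(err)   # small: quadrature and truncation error only
```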
Further, we define the \emph{Fourier-Weyl transform} by
\begin{align*}
\mathcal F_W(A)(z') \coloneqq \tr(AW_{z'}), \quad z' \in \mathbb R^{2n},\ A\in \mathcal T^1.
\end{align*}
We will also occasionally denote the symplectic Fourier transform of $f$ by $\widehat{f} = \mathcal F_\sigma(f)$ and the Fourier-Weyl transform by $\widehat{A}=\mathcal{F}_W(A)$.
The Fourier-Weyl and symplectic Fourier transforms satisfy
\begin{equation}\label{eq:conv_operator_operator}
\mathcal F_\sigma(A \ast B) = \mathcal F_W(A) \cdot \mathcal F_W(B)
\end{equation}
and
\begin{equation}\label{eq:conv_function_operator}
\mathcal{F}_W(f \ast A) = \mathcal{F}_\sigma(f) \cdot \mathcal F_W(A),
\end{equation}
for $A,B\in \mathcal{T}^1$ and $f\in L^1(\mathbb{R}^{2n})$.
Notice that
\begin{equation}\label{eq:Fourier conjugate}
\mathcal F_W(A^{\ast_{\textrm{QHA}}}) = \overline{\mathcal F_W(A)}\quad\text{ and }\quad\mathcal F_\sigma(f^{\ast}) = \overline{\mathcal F_\sigma(f)}.
\end{equation}
The map $\mathcal T^1 \ni A \mapsto \mathcal F_W(A)$ is injective; moreover, $\mathcal F_W$ extends to a Banach space isomorphism between $\mathcal{T}^2$ and $L^2(\mathbb{R}^{2n})$. We define the Fourier transform on $L^1\oplus\mathcal{T}^1$ by
\begin{equation*}
\mathcal F(f, A) = (\mathcal F_\sigma(f), \mathcal F_W(A)), \quad (f, A) \in L^1 \oplus \mathcal T^1.
\end{equation*}
The inverse of the Fourier-Weyl transform denoted by $\mathcal F_W^{-1}$ is given by
\begin{equation*}
\mathcal F_W^{-1}(f) = \int_{\mathbb R^{2n}} f(z) W_{-z}\,\mathrm{d}z,
\end{equation*}
understood as an integral in the strong operator topology for $f \in L^1(\mathbb R^{2n})$. Using interpolation, one can define the inverse Fourier-Weyl transform on $L^p$ for $1\le p\le 2$. It should be noted that the inverse Fourier-Weyl transform can be generalized to tempered distributions, see \cite{keyl_kiukas_werner16}. In particular, this means that one can define the Fourier-Weyl transform of all compact operators.
The Fourier-Weyl transform satisfies versions of the Riemann-Lebesgue lemma and the Hausdorff-Young inequality, see \cite{werner84} and \cite[Prop.~6.5 and 6.6]{luef_eirik2018}.
In particular, we have that
\begin{equation}\label{eq:Riemann Lebesgue}
\mathcal F(f, A) \in C_0(\mathbb R^{2n}) \oplus C_0(\mathbb R^{2n}).
\end{equation}
Analogous results are true for $\mathcal F_W^{-1}$, as we show now.
\begin{lemma}\label{inverseRiemannLebesgue}
Let $1 \leq p \leq 2$ and $\frac{1}{p} + \frac{1}{q} = 1$. For $f \in L^p(\mathbb R^{2n})$ one has $\mathcal F_W^{-1}(f) \in \mathcal T^q(\mathcal H)$ and $\mathcal{F}_W^{-1}$ satisfies the Hausdorff-Young inequality \[\| \mathcal F_W^{-1}(f)\|_{\mathcal T^q} \leq \| f\|_{L^p}.\]
In particular, for $p=1$ we have the Riemann-Lebesgue lemma: $\mathcal F_W^{-1}(f) \in \mathcal T^\infty $.
\end{lemma}
\begin{proof}
For the proof of the Riemann-Lebesgue lemma, note that $\mathcal F_W^{-1}(f) \in \mathcal T^{2}\subset \mathcal T^{\infty}$ for $f \in L^1(\mathbb{R}^{2n}) \cap L^2(\mathbb{R}^{2n})$. Hence, for arbitrary $f \in L^1(\mathbb{R}^{2n})$ the operator $\mathcal F_W^{-1}(f)$ can be approximated by Hilbert-Schmidt operators in operator norm, hence it is compact.
Notice that for compact operators, the operator norm is given by $\|\cdot\|_{\mathrm{Op}}=\|\cdot\|_{\mathcal{T}^\infty}$.
Since $\| W_{-z}\|_{\mathrm{Op}} = 1$ and $z \mapsto W_{-z}$ is continuous in the strong operator topology, clearly $\| \mathcal F_W^{-1}(f)\|_{\mathrm{Op}} \leq \| f\|_{L^1}$. Furthermore, we know that $\mathcal F_W^{-1}\colon L^2(\mathbb R^{2n}) \to \mathcal T^2$ is an isometry.
Hence, the result follows from applying complex interpolation.
\end{proof}
One can also consider the convolution of a complex Radon measure with an operator. Denote by $\operatorname{Meas}(\mathbb R^{2n})$ the set of all complex Radon measures on $\mathbb R^{2n}$, i.e., the set of Borel measures that are finite on compact sets and are both inner and outer regular. Then, for $\mu \in \operatorname{Meas}(\mathbb R^{2n})$ and $A \in \mathcal T^1$, we define
\begin{equation*}
\mu \ast A \coloneqq \frac{1}{(2\pi)^n}\int_{\mathbb R^{2n}} \alpha_z(A) \,\mathrm{d}\mu(z).
\end{equation*}
The above expression is valid as a Bochner integral in $\mathcal T^1$. In particular, $(2\pi)^n\delta_0 \ast A = A$. One can verify that we still have the product formula
\begin{equation*}
\mathcal F_W(\mu \ast A) = \mathcal F_\sigma(\mu) \mathcal F_W(A),
\end{equation*}
where
\begin{equation*}
\mathcal F_\sigma(\mu)(z') \coloneqq \frac{1}{(2\pi)^n}\int_{\mathbb R^{2n}} e^{-i\sigma(z', z)}\, \mathrm{d}\mu(z).
\end{equation*}
Then, if $A \in \mathcal T^1$ is such that $\mathcal F_W(A)(\xi) \neq 0$ for every $\xi \in \mathbb R^{2n}$, we clearly obtain that the map
\begin{equation*}
\operatorname{Meas}(\mathbb R^{2n}) \ni \mu \mapsto \mu \ast A \in \mathcal T^1
\end{equation*}
is injective.
\subsubsection{Schwartz Operators and Weyl Quantization}
The article \cite{keyl_kiukas_werner16} defines Schwartz and tempered operators, mirroring the Schwartz functions $\mathcal{S}(\mathbb{R}^{2n})$ and tempered distributions $\mathcal{S}'(\mathbb{R}^{2n})$ in harmonic analysis.
We denote by $\mathcal S(\mathcal H) \subset \mathcal T^1(\mathcal H)$ the space of \emph{Schwartz operators}, which is a Fr\'{e}chet space that is continuously embedded into $\mathcal T^1(\mathcal H)$. Its topology can be described in terms of a countable family of seminorms, see \cite[Sec.\ 3]{keyl_kiukas_werner16}. We will need the fact that the Fourier-Weyl transform is a topological isomorphism $\mathcal F_W\colon \mathcal S(\mathcal H) \to \mathcal S(\mathbb R^{2n})$. Hence, a trace class operator $A$ is a Schwartz operator if and only if $\mathcal F_W(A)$ is a Schwartz function. Similarly, a Hilbert-Schmidt operator $A \in \mathcal T^2(\mathcal H)$ is a Schwartz operator if and only if its integral kernel is a Schwartz function.
Having $\mathcal S(\mathcal H)$ available, one can pass to the dual space $\mathcal S'(\mathcal H)$ of \emph{tempered operators}. The bounded operators $\mathcal L(\mathcal H)$ continuously embed into $\mathcal S'(\mathcal H)$ via
\begin{equation*}
\varphi_A(S) = \tr(AS), \quad S \in \mathcal S(\mathcal H)
\end{equation*}
for $A \in \mathcal L(\mathcal H)$. The Fourier transform, being defined by duality, then extends to a topological isomorphism from $\mathcal S'(\mathcal H)$ to $\mathcal S'(\mathbb R^{2n})$. In particular, we can talk about $\mathcal F_W(A)$ as a tempered distribution on $\mathbb R^{2n}$ for every $A \in \mathcal L(\mathcal H)$. The Fourier-Weyl transform satisfies \eqref{eq:conv_operator_operator} between $\mathcal S(\mathcal H)$ and $\mathcal S'(\mathcal H)$ and \eqref{eq:conv_function_operator} between $\mathcal S(\mathbb R^{2n})$ and $\mathcal S'(\mathcal H)$.
Define the \emph{Weyl quantization} of a function $f \in L^1(\mathbb{R}^{2n})$ to be the operator
\[A_f\coloneqq P\mathcal{F}_W^{-1}(\mathcal{F}_\sigma(f))P=\mathcal{F}_W^{-1}(\mathcal{F}_\sigma(\widetilde{f})),\]
where $\widetilde{f}(z)=f(-z)$.
Notice that the Weyl quantization also makes sense for $f \in \mathcal{S}'(\mathbb{R}^{2n})$.
Thus, $f \mapsto A_f$ maps from $\mathcal S'(\mathbb R^{2n})$ to $\mathcal S'(\mathcal H)$.
Since the quantization map is also a topological isomorphism from $\mathcal S(\mathbb R^{2n})$ to $\mathcal S(\mathcal H)$, we have
\begin{equation*}
\mathcal S(\mathcal H) \subset \{ A \in \mathcal T^1(\mathcal H): A = A_f, ~f \in L^1(\mathbb R^{2n})\}.
\end{equation*}
Taking the Weyl quantization of the delta function \[(2\pi)^n\delta_{0}\in \mathcal{S}'(\mathbb R^{2n})\] gives $A_{(2\pi)^n\delta_{0}}=2^n P$, where $P$ is the parity operator.
Note that the quantization map satisfies $\alpha_z(A_f) = A_{\alpha_{-z}(f)}$. For $g \in L^p(\mathbb R^{2n})$ where $1 \leq p \leq \infty$ and $f \in L^1(\mathbb R^{2n})$ we have the convolution formulas
\begin{equation}\label{eq: quantization of convolution}
f \ast A_g = A_{\widetilde{f}\ast g}, \quad A_f \ast A_g = \widetilde{f \ast g}.
\end{equation}
This gives us the following quantization identity
\[2^nf\ast P=A_{\widetilde{f}},\qquad 2^nA_f\ast P=\widetilde{f}\]
for $f\in \mathcal{S}(\mathbb{R}^{2n})$.
\subsection{Modulation Invariance}
In this section, we will develop a notion of modulation for operators. This allows us to define modulation-invariant spaces of operators. Motivated by the fact that the classical Fourier transform takes shifts to modulations and vice versa, we have the following definition.
\begin{definition}
We say that a subspace $\mathcal A \subset \mathcal T^{1}$ is \emph{modulation-invariant} if \[\mathcal F_W(\mathcal A) \coloneqq \left\{\mathcal{F}_{W}(A) : A \in \mathcal{A}\right\}\] is a shift-invariant subspace of $C_{0}(\mathbb{R}^{2n})$.
\end{definition}
\begin{lemma} \label{lem:modulation_invariance}
Let $\mathcal A \subset \mathcal T^1$ be a subspace. Then $\mathcal A$ is modulation-invariant if and only if $W_{z} A W_{z} \in \mathcal A$ for every $A \in \mathcal A$ and $z \in \mathbb R^{2n}$.
\end{lemma}
\begin{proof}
For $z' \in \mathbb{R}^{2n}$ the claim follows easily from the computation
\begin{align*}
\alpha_{z}\left(\mathcal{F}_{W}(A)\right)(z') & = \tr(W_{z' - z}A) \\
& = \tr\left( W_{-\frac{z}{2}}AW_{-\frac{z}{2}} W_{z'}\right) \\ & = \mathcal{F}_{W}\left(W_{-\frac{z}{2}}AW_{-\frac{z}{2}}\right)(z').\qedhere
\end{align*}
\end{proof}
Motivated by the proof of Lemma~\ref{lem:modulation_invariance} we give the following definition for the modulation of an operator.
\begin{definition}
The \emph{modulation} of an operator $A \in \mathcal{T}^{1}$ by $z \in \mathbb{R}^{2n}$ is defined by \[\gamma_{z}(A)\coloneqq W_{-z/2}AW_{-z/2}.\]
For $f \in L^{1}(\mathbb{R}^{2n})$ we define the modulation by
\begin{equation*}
\gamma_{z}(f)(z') \coloneqq e^{-i\sigma(z', z)} f(z').
\end{equation*}
\end{definition}
It is straightforward to verify the formulas
\begin{align*}
\gamma_{z}(f \ast g) & = (\gamma_{z}f) \ast (\gamma_{z} g), \\
\gamma_{z}(f \ast A) & = (\gamma_{z} f) \ast (\gamma_{z}A), \\
\gamma_{z}(A \ast B) & = (\gamma_{z} A) \ast (\gamma_{z}B),
\end{align*}
for $f, g \in L^{1}(\mathbb{R}^{2n})$ and $A, B \in \mathcal T^1$.
Notice that for $A \in \mathcal{T}^{1}$ and $f\in L^1(\mathbb{R}^{2n})$ we can write
\[\mathcal{F}_{W}(A)(z) = \tr(\gamma_{-z}A)\quad \text{ and } \quad \mathcal{F}_{\sigma}(f)(z) = \frac{1}{(2\pi)^n}\int_{\mathbb{R}^{2n}}\gamma_{-z}(f)(z')\, \mathrm{d} z'.\]
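A short computation from the definitions shows that the symplectic Fourier transform exchanges the modulation $\gamma_z$ with a shift by $z$: $\mathcal F_\sigma(\gamma_z f)(w) = \mathcal F_\sigma(f)(w - z)$. The following pure-Python sketch (for $n=1$, with an illustrative grid and illustrative points $z_0$, $w$) verifies this numerically on the standard Gaussian, whose symplectic Fourier transform is the Gaussian itself.

```python
# Pure-Python check (n = 1) that F_sigma turns the modulation
# gamma_z into a shift by z: F_sigma(gamma_z f)(w) = F_sigma(f)(w - z).
import math, cmath

h = 0.1
grid = [k * h for k in range(-60, 61)]

def sigma(z, zp):
    """Symplectic form sigma((x, xi), (x', xi')) = xi x' - x xi'."""
    return z[1] * zp[0] - z[0] * zp[1]

def sympl_ft(f, w):
    s = sum(f((z1, z2)) * cmath.exp(-1j * sigma(w, (z1, z2)))
            for z1 in grid for z2 in grid)
    return s * h * h / (2 * math.pi)

gauss = lambda z: math.exp(-(z[0] ** 2 + z[1] ** 2) / 2)   # fixed by F_sigma
z0 = (0.8, -0.6)
mod_gauss = lambda u: cmath.exp(-1j * sigma(u, z0)) * gauss(u)   # gamma_{z0} f

shift_err = max(
    abs(sympl_ft(mod_gauss, w) - gauss((w[0] - z0[0], w[1] - z0[1])))
    for w in [(0.0, 0.0), (1.0, 0.5)])
print(shift_err)
```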
\begin{lemma}
Let $\mathcal A$ be a shift-invariant subspace of $\mathcal T^1$. Then the following are equivalent:
\begin{enumerate}[1)]
\item\label{eq:modulation_1} $\mathcal A$ is modulation-invariant,
\item $W_{z} A \in \mathcal A$ for every $A\in \mathcal A$ and $z \in \mathbb R^{2n}$,
\item $AW_{z} \in \mathcal A$ for every $A \in \mathcal A$ and $z\in \mathbb R^{2n}$,
\item\label{eq:modulation_4} $W_{z'}AW_{z} \in \mathcal A$ for every $A \in \mathcal A$ and $z',z\in \mathbb R^{2n}$,
\item The space consisting of all integral kernels of $A\in \mathcal{A}$ is shift- and modulation-invariant.
\end{enumerate}
\end{lemma}
\begin{proof}
The equivalence of \ref{eq:modulation_1}--\ref{eq:modulation_4} follows from shift-invariance together with the observation
\begin{align*}
\gamma_{z}\alpha_{z'}(\widehat{A})
&=\mathcal{F}_W(\alpha_{z}\gamma_{z'}A)\\
&=\mathcal{F}_W(W_{z}W_{-z'/2}AW_{-z'/2}W_{-z})\\
&=e^{-i\sigma(z',z)/2}\mathcal{F}_W(W_{z-z'/2}AW_{-z'/2-z}).
\end{align*}
For the last equivalence, recall that a trace class operator can be written as
\[A\phi(s)=\int_{\mathbb{R}^n}K_A(s,t)\phi(t)\,\mathrm{d}t,\quad \phi \in \mathcal{H}.\]
The integral kernel of $W_{z'}AW_z$ with $z = (x, \xi)$ and $z' = (x', \xi')$ is given by
\begin{equation*}
(s, t) \mapsto e^{i\xi' \cdot s - \frac{i}{2} \xi'\cdot x'}e^{i\xi\cdot t + \frac{i}{2} \xi\cdot x}K_A(s-x', t+x)=e^{ \frac{i}{2} \xi\cdot x - \frac{i}{2} \xi'\cdot x'}\gamma_{(-\xi,\xi')}\alpha_{(x',-x)}K_A(s, t).\qedhere
\end{equation*}
\end{proof}
Analogously to the case of the shift action $\alpha_x$, one also proves the following facts:
\begin{lemma}\label{lemma:modulation_cont_t1}
$\gamma_x$ acts strongly continuously and isometrically on the Schatten classes $\mathcal T^p(\mathcal H)$ for every $1 \leq p \leq \infty$. It acts continuously in the weak$^\ast$ topology on $\mathcal L(\mathcal H)$.
\end{lemma}
\begin{proof}
The proof is entirely analogous to that for the action $\alpha_x$: verify the statements first for rank-one operators, using that $W_x$ is a strongly continuous projective unitary representation. Then, obtain the result on all of $\mathcal T^p(\mathcal H)$ by approximation through finite-rank operators.
\end{proof}
\section{Structure Theory of \texorpdfstring{$L^1 \oplus \mathcal T^1$}{}}\label{sec: structure theory}
In Section \ref{sec: QHA convolutions} we defined a product structure for $L^1 \oplus \mathcal T^1$, making it into an involutive commutative Banach algebra. As such we can study the closed ideals of this algebra. We will start with the study of the regular maximal ideals, i.e., the Gelfand theory, and thereafter study the closed graded ideals of $L^1 \oplus \mathcal T^1$.
\subsection{Gelfand Theory for \texorpdfstring{$L^1 \oplus \mathcal T^1$}{}}\label{sec: gelfand}
In this section we will study the Gelfand theory of $L^1 \oplus \mathcal T^1$. We say that an ideal $I$ is \emph{regular} if the quotient $L^1 \oplus \mathcal T^1/I$ is unital, see \cite[Sec.~D4]{rudin}. It is well known that the regular maximal ideals are given by the kernel of the multiplicative linear functionals.
\begin{proposition}\label{prop: mult char}
The set of nonzero multiplicative linear functionals on $L^1\oplus \mathcal{T}^1$ is para\-me\-trized by $\mathbb R^{2n} \times \mathbb Z_2$ as
\begin{equation*}
\chi_{z, j}(f, A) = \begin{cases}
\widehat{f}(z) + \widehat{A}(z), &\quad j = 0,\\
\widehat{f}(z) - \widehat{A}(z), &\quad j = 1.
\end{cases}
\end{equation*}
We write the set of all nonzero multiplicative linear functionals, the \emph{Gelfand spectrum}, as $\chi_{\mathbb R^{2n} \times \mathbb Z_2}$. The Gelfand spectrum topology of $\chi_{\mathbb R^{2n} \times \mathbb Z_2}$ agrees with the standard product topology of the index set $\mathbb{R}^{2n}\times \mathbb{Z}_2$.
\end{proposition}
\begin{proof}
Let $\chi\colon L^1 \oplus \mathcal T^1 \to \mathbb C$ be a nonzero multiplicative linear functional. By linearity, $\chi$ is determined by its actions on functions and operators separately. The restriction of $\chi$ to $L^1\oplus\{0\}$ cannot vanish identically: otherwise $\chi(0,A)^2 = \chi(A \ast A, 0) = 0$ for every $A \in \mathcal{T}^1$, and hence $\chi = 0$. By the Gelfand theory of $L^1(\mathbb{R}^{2n})$, the character $\chi$ therefore acts on $L^1\oplus\{0\}$ by
\begin{equation*}
\chi(f, 0) = \widehat{f}(z) \quad \text{ for some }z \in \mathbb R^{2n}.
\end{equation*}
For $A, B \in \mathcal T^1$ we have
\begin{align*}
\chi(0, A) \cdot \chi(0, B) = \chi((0, A) \ast (0, B)) = \chi( A \ast B, 0) = \mathcal F_\sigma(A \ast B)(z) = \widehat{A}(z) \cdot \widehat{B}(z).
\end{align*}
Letting $A = B$, we see that $\chi(0, A)^2 = \widehat{A}(z)^2$, so
\begin{align*}
\chi(0, A) = \pm \widehat{A}(z).
\end{align*}
The sign does not depend on $A$: otherwise $\mathcal{T}^1$ would be the union of the kernels of the two linear functionals $A \mapsto \chi(0,A) \mp \widehat{A}(z)$, and a vector space is never the union of two proper subspaces.
Hence, we obtain that
\begin{align*}
\chi(f, A) = \widehat{f}(z) \pm \widehat{A}(z).
\end{align*}
As one easily verifies, every such $\chi$ is a multiplicative linear functional.
Let us now prove the equivalence of the topologies.
A net $(z_\gamma) \subset \mathbb R^{2n}$ converges to $z \in \mathbb R^{2n}$ if and only if $\chi_{z_\gamma, j} \to \chi_{z,j}$ in the Gelfand spectrum topology. Further, it can never happen that $\chi_{z_\gamma, j} \to \chi_{z, j+1}$: otherwise we would have $\widehat{A}(z) = -\widehat{A}(z)$, i.e., $\widehat{A}(z) = 0$ for every $A \in \mathcal T^1$. This cannot happen, since there exist operators with nonvanishing Fourier-Weyl transform, e.g., $\phi \otimes \phi$ where $\phi(x)=e^{-x^2}$.
\end{proof}
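The grading behind Proposition \ref{prop: mult char} can be made concrete in a toy scalar model: by the convolution theorems \eqref{eq:conv_operator_operator} and \eqref{eq:conv_function_operator}, at a fixed $z$ the pair of transform values $(\mathcal F_\sigma(f)(z), \mathcal F_W(A)(z))$ multiplies according to the $\mathbb Z_2$-graded rule $(a,b)\ast(c,d) = (ac+bd, ad+bc)$. The following pure-Python sketch (the hat-values are arbitrary complex numbers, chosen for illustration) checks that the two characters $a \pm b$ are multiplicative for this product.

```python
# Toy model of the Gelfand characters: at a fixed z, the hat-value
# pairs multiply as (a, b) * (c, d) = (ac + bd, ad + bc), and the
# characters chi_{z,0/1} correspond to a + b and a - b.
def prod(p, q):
    a, b = p
    c, d = q
    return (a * c + b * d, a * d + b * c)

def chi(p, j):
    a, b = p
    return a + b if j == 0 else a - b

p, q = (1.5 + 2j, -0.7j), (0.3, 2.0 - 1j)    # arbitrary hat-values
ok = all(abs(chi(prod(p, q), j) - chi(p, j) * chi(q, j)) < 1e-12
         for j in (0, 1))
print(ok)   # True
```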
As a corollary, the space of regular maximal ideals of $L^1\oplus \mathcal{T}^1$ consists of two disjoint copies of $\mathbb R^{2n}$.
\begin{remark}
Recall by \eqref{eq:Riemann Lebesgue} we have \[\mathcal{F}(L^1 \oplus \mathcal T^1)\subset C_0(\mathbb R^{2n}) \oplus C_0(\mathbb R^{2n}).\]
Using the convolution theorem, we can view the Fourier transform as a Banach algebra homomorphism
\begin{equation*}
\mathcal F\colon L^1 \oplus \mathcal T^1 \to \ell^1(\mathbb Z_2, C_0(\mathbb R^{2n})).
\end{equation*}
\end{remark}
Denote by $\Gamma$ the \emph{Gelfand representation}, which sends $(f,A)\in L^1\oplus\mathcal{T}^1$ to the function on the Gelfand spectrum $\chi_{\mathbb R^{2n} \times \mathbb Z_2}$ defined by
\[\Gamma(f,A)(\chi_{z,j})\coloneqq \chi_{z,j}(f,A)=\widehat{f}(z)+(-1)^j\widehat{A}(z).\]
We will refer to the map $\Gamma(f,A)$ as the \emph{Gelfand transform} of $(f,A)$.
\begin{remark}\label{re:denseness}
The Gelfand representation can be viewed as a map $\Gamma\colon L^1\oplus\mathcal{T}^1\to C_0(\mathbb{R}^{2n}\times \mathbb{Z}_2)$
given by \[\Gamma(f,A)(z,j)=\widehat{f}(z)+(-1)^j\widehat{A}(z).\]
Since both Fourier transforms have dense range, it follows that the image of the Gelfand representation is dense in $C_0(\mathbb{R}^{2n}\times \mathbb{Z}_2)$.
\end{remark}
Recall that a ring is called \emph{Jacobson semisimple} (or \emph{semiprimitive}) if its Jacobson radical is zero, i.e., the intersection of all the maximal ideals is zero. A property of the Gelfand transform is that $\ker(\Gamma)$ is precisely the Jacobson radical. It follows from the Gelfand-Naimark theorem that any $C^{\ast }$-algebra is Jacobson semisimple. The following result shows that $L^1 \oplus \mathcal T^1$ shares this property.
\begin{corollary}\label{cor: Jacobson semisimple}
The algebra $L^1 \oplus \mathcal T^1$ is Jacobson semisimple.
\end{corollary}
\begin{proof}
Let $(f, A)$ be such that $\Gamma(f, A) = 0$. Then for every $z \in \mathbb R^{2n}$ we obtain
\begin{align*}
\widehat{f}(z) + \widehat{A}(z) = 0 = \widehat{f}(z) - \widehat{A}(z).
\end{align*}
Hence $\widehat{f}=\widehat{A} = 0 $. Since both the symplectic Fourier transform and the Fourier-Weyl transform are injective, this yields that $(f, A) = 0$. Thus, the Jacobson radical is trivial.
\end{proof}
We say that an involutive Banach algebra
$X$ is \emph{symmetric} if its Gelfand representation satisfies
\[\Gamma(a^\ast )=\overline{\Gamma(a)} \quad \text{ for all } a\in X.\]
Using \eqref{eq:Fourier conjugate} we see that $L^1\oplus \mathcal{T}^1$ is symmetric since
\begin{align*}
\Gamma((f, A)^\ast)(z, j) = \overline{\mathcal F_\sigma(f)(z)} + (-1)^j \overline{\mathcal F_W(A)(z)} = \overline{\Gamma(f, A)(z, j)}.
\end{align*}
Using that $L^1\oplus \mathcal{T}^1$ is symmetric and \cite[Prop.\ 1.14c.]{folland2016course}, we get another proof of the denseness result in Remark \ref{re:denseness}.
\subsection{Closed Ideals of \texorpdfstring{$L^1 \oplus \mathcal T^1$}{}}\label{sec:closed_ideals}
Having discussed Gelfand theory, we now turn to the task of understanding closed ideals of $L^1 \oplus \mathcal T^1$ in general.
Recall that the closed ideals of $L^1(\mathbb{R}^{2n})$ are precisely the closed shift-invariant subspaces of $L^1(\mathbb{R}^{2n})$, see e.g., \cite[Thm.~2.45]{folland2016course}. Having a well-defined shift operator on $L^1\oplus \mathcal{T}^1$, one might hope that this characterization carries over to closed ideals in this setting. However, the subspace $\{0\}\oplus \mathcal{T}^1$ is closed and shift-invariant, but not an ideal.
\begin{definition}
We say that a subspace $M\subset L^1\oplus \mathcal{T}^1$ is an \emph{$L^1$ module} if for all $(f,A)\in M$ and $g\in L^1(\mathbb{R}^{2n})$ we have
\[g\ast (f,A)\coloneqq (g,0)\ast (f,A)=(g\ast f,g\ast A)\in M.\]
\end{definition}
Notice that all ideals of $L^1\oplus \mathcal{T}^1$ are $L^1$ modules.
\begin{proposition}\label{prop:L^1 module}
Let $M \subset L^1 \oplus \mathcal T^1$ be a closed subspace. Then $M$ is shift-invariant if and only if $M$ is an $L^1$ module.
\end{proposition}
\begin{proof}
If $M$ is shift-invariant, then for every $(f, A) \in M$ and $g \in L^1$ the convolution satisfies
\begin{align*}
g \ast (f, A) = \frac{1}{(2\pi)^n}\int_{\mathbb{R}^{2n}} g(z) \alpha_z(f,A)\,\mathrm{d}z\in M.
\end{align*}
On the other hand, if $M$ is an $L^1$ module, then letting $\{g_t\}_{t>0}$ be a normalized approximate identity of $L^1(\mathbb{R}^{2n})$ gives
\begin{equation*}
\alpha_z(f, A) = \lim_{t \to 0} g_t \ast \alpha_z(f, A) = \lim_{t \to 0} \alpha_z(g_t) \ast (f, A) \in M. \qedhere
\end{equation*}
\end{proof}
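The approximate-identity step in the proof can be illustrated numerically. The following pure-Python sketch (for $n=1$, heat kernels $g_t$, and an illustrative grid; plain convolution without the normalizing constant is used, which does not affect the convergence) shows $\|g_t \ast f - f\|_{L^1} \to 0$ as $t \to 0$.

```python
# Numerical illustration (n = 1) of the approximate-identity step:
# for heat kernels g_t, the error ||g_t * f - f||_{L^1} shrinks as t -> 0.
import math

h = 0.04
xs = [k * h for k in range(-200, 201)]               # grid on [-8, 8]
f = lambda x: math.exp(-x ** 2)

def l1_err(t):
    g = lambda x: math.exp(-x ** 2 / (2 * t)) / math.sqrt(2 * math.pi * t)
    conv = [sum(g(y) * f(x - y) for y in xs) * h for x in xs]
    return sum(abs(c - f(x)) for c, x in zip(conv, xs)) * h

errs = [l1_err(t) for t in (0.5, 0.1, 0.02)]
print(errs)   # strictly decreasing toward 0
```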
As a consequence, every closed ideal $I$ of $L^1 \oplus \mathcal T^1$ is necessarily shift-invariant. We continue our discussion with the following class of well-behaved ideals:
\begin{definition}
An ideal $I \subset L^1 \oplus \mathcal T^1$ is said to be \emph{graded} if $I = I_{L^1} \oplus I_{\mathcal{T}^1}$, where $I_{L^1} = I \cap (L^1(\mathbb{R}^{2n})\oplus\{0\})$ and $I_{\mathcal{T}^1} = I \cap (\{0\}\oplus \mathcal T^1)$.
\end{definition}
We note that $I_{L^1}$ and $I_{\mathcal T^1}$ in the above definition are \emph{corresponding spaces} in the sense of Werner \cite{werner84}.
Note that a closed ideal $I$ is a graded ideal if and only if $(f,0) \in I$ whenever $(f, A)\in I$. It follows that the regular maximal ideals
\begin{align*}
I_{(z, j)} \coloneqq \{ (f, A) \in L^1 \oplus \mathcal T^1: \widehat{f}(z) + (-1)^j \widehat{A}(z) = 0\}
\end{align*}
are never graded.
\begin{lemma}\label{lem:graded_closed_ideals}
A closed subspace $I \subset L^1 \oplus \mathcal T^1$ is a graded ideal if and only if $I = I_{L^1} \oplus \overline{(\mathcal T^1 \ast I_{L^1})}$ for some closed ideal $I_{L^1}$ in $L^1(\mathbb{R}^{2n})$.
\end{lemma}
\begin{proof}
Let us first quickly check that $I = I_{L^1} \oplus \overline{(\mathcal T^1 \ast I_{L^1})}$ is indeed a closed ideal. If $(f, A \ast g) \in I$ and $(h, B) \in L^1 \oplus \mathcal T^1$, then
\begin{align*}
(h, B) \ast (f, A \ast g) = (h \ast f + B \ast A \ast g, A \ast g\ast h + f \ast B).
\end{align*}
Since $f, g \in I_{L^1}$ and $I_{L^1}$ is an ideal, we have $h \ast f \in I_{L^1}$ and $B \ast A \ast g \in I_{L^1}$, where we have used that $B \ast A \in L^1(\mathbb{R}^{2n})$. Further, $A \ast g \ast h$ and $f \ast B$ are clearly contained in $\mathcal T^1 \ast I_{L^1}$, making $I$ into an ideal. The fact that $I$ is closed is inherited from $I_{L^1}$ being closed.
On the other hand, assume that $I= I_{L^1} \oplus I_{\mathcal{T}^1}$ is a graded closed ideal of $L^1 \oplus \mathcal T^1$.
Note that $I_{L^1}$ is a closed ideal of $L^1(\mathbb{R}^{2n})$. Using the multiplication, we get $(\{0\}\oplus \mathcal T^1) \ast I_{L^1} \subset I_{\mathcal{T}^1}$ and $(\{0\}\oplus \mathcal T^1) \ast I_{\mathcal{T}^1} \subset I_{L^1}$. Further, as $I$ is an $L^1$ module by Proposition \ref{prop:L^1 module}, both $I_{L^1}$ and $I_{\mathcal{T}^1}$ are shift-invariant. Thus using \cite[Thm.~4.1]{werner84} gives that $I_{\mathcal{T}^1} = \overline{\mathcal T^1 \ast I_{L^1}}$, finishing the proof.
\end{proof}
\begin{remark}\label{re graded}
\begin{enumerate}[(1)]
\item For $f \in L^1(\mathbb R^{2n})$ we let $Z(f)$ denote the closed set
\begin{align*}
Z(f) \coloneqq \{ z \in \mathbb R^{2n}: \widehat{f}(z) = 0\}.
\end{align*}
Then, for a closed ideal $I_{L^1}$ of $L^1(\mathbb R^{2n})$, we set \[Z(I_{L^1}) \coloneqq \bigcap_{f \in I_{L^1}} Z(f),\] which is closed. Malliavin's theorem \cite[Sec.~7.6]{rudin} states that spectral synthesis fails for $L^1(\mathbb R^{2n})$, that is: It is not true that every closed ideal $I_{L^1}$ is uniquely determined by $Z(I_{L^1})$.
By analogy, for $(f, A) \in L^1 \oplus \mathcal T^1$ we let
\begin{align*}
Z(f,A) \coloneqq \{ (z, j) \in \mathbb R^{2n} \times \mathbb Z_2: \widehat{f}(z) + (-1)^j \widehat{A}(z) = 0\}.
\end{align*}
If $I$ is a closed ideal of $L^1 \oplus \mathcal T^1$ we denote by
\[Z(I) \coloneqq \bigcap_{(f,A)\in I} Z(f,A).\]
When $I$ is graded, by Lemma \ref{lem:graded_closed_ideals} we get that
$Z(I) = Z(I_{L^1} ) \times \mathbb Z_2$.
This means that spectral synthesis again fails for graded ideals, i.e., $I$ is not uniquely determined by the set $Z(I)$.
\item\label{re maximal} Every proper closed graded ideal $I=I_{L^1} \oplus \overline{(\mathcal T^1 \ast I_{L^1})}$ is contained in a regular maximal ideal. If $Z(I) \neq \emptyset$, we have that for $(z, j) \in Z(I)$ the inclusion $I \subset I_{(z, j)}$ holds. In the case $Z(I) = \emptyset$, it remains to show that $I=L^1 \oplus \mathcal T^1$. When $Z(I) = \emptyset$, we have $Z(I_{L^1})=\emptyset$. By the Tauberian theorem on $L^1(\mathbb{R}^{2n})$, we have that $I_{L^1}=L^1(\mathbb{R}^{2n})$. The statement follows from the fact that $\overline{\mathcal{T}^1\ast L^1(\mathbb{R}^{2n})}=\mathcal{T}^1$.
\end{enumerate}
\end{remark}
Every closed ideal contains a largest closed graded subideal. To illustrate this, let us first consider the regular maximal ideals.
\begin{example}
For $(z, j) \in \mathbb R^{2n} \times \mathbb Z_2$, we have
\begin{align*}
I_0 &= I_{(z, j)} \cap ( L^1(\mathbb{R}^{2n})\oplus \{0\} )= \{ f\in L^1(\mathbb{R}^{2n}): \widehat{f}(z) = 0\}\\
I_1 &= I_{(z, j)} \cap ( \{0\}\oplus \mathcal T^1 ) = \{ A \in \mathcal T^1: ~\widehat{A}(z) = 0\}.
\end{align*}
In this case, $I_0 \oplus I_1$ is a graded closed ideal given by
\begin{align*}
I_0 \oplus I_1 = I_{(z, 0)} \cap I_{(z, 1)}.
\end{align*}
\end{example}
The construction of graded subideals works more generally. Denote by $J$ the map $J(f,A) \coloneqq (f, -A)$.
\begin{lemma}\label{lem: idempotent J}
Let $I$ be a closed ideal of $L^1 \oplus \mathcal T^1$. Then
\begin{enumerate}[1)]
\item $J(I)$ is a closed ideal of $L^1 \oplus \mathcal T^1$.
\item $I$ is graded if and only if $J(I) = I$.
\item\label{lem: idempotent J graded sub} $I \cap J(I)$ is a closed graded ideal.
\item $Z(J(I)) = \{ (z, j+1) \in \mathbb R^{2n} \times \mathbb Z_2: ~(z, j) \in Z(I)\}$.
\item $Z(I \cap J(I)) = Z(I) \cup Z(J(I)) = \{ z \in \mathbb R^{2n}: ~(z, 0) \in Z(I) \text{ or } (z, 1) \in Z(I)\} \times \mathbb Z_2$.
\end{enumerate}
\end{lemma}
The proof of this lemma is straightforward and left to the reader.
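Much of the lemma reduces to the observation that $J$ flips the $\mathbb{Z}_2$ index of the Gelfand transform, $\Gamma(J(f,A))(z,j) = \Gamma(f,A)(z,j+1)$. In the toy hat-value model from before (hat-values at a fixed $z$ represented by an arbitrary complex pair, chosen for illustration), this reads as follows.

```python
# Toy hat-value check: J(f, A) = (f, -A) flips the Z_2 index of the
# Gelfand transform, so I and J(I) have "mirrored" zero sets.
def gamma(p, j):
    a, b = p                  # (a, b) stands for (Fsigma f(z), FW A(z))
    return a + (-1) ** j * b

def J(p):
    a, b = p
    return (a, -b)

p = (0.3 - 1j, 2.5 + 0.4j)    # arbitrary hat-values
flip_ok = all(abs(gamma(J(p), j) - gamma(p, (j + 1) % 2)) < 1e-12
              for j in (0, 1))
print(flip_ok)   # True
```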
Note that the graded ideal $I \cap J(I)$ can be trivial, as the following example shows:
\begin{example}
We consider only $n = 1$, but the example can analogously be carried out for $n > 1$. We set
\begin{align*}
\Omega_j = \{ (x, y) \in \mathbb R^2: ~(-1)^jy \geq 0\}, \quad j\in \mathbb{Z}_2.
\end{align*}
Define the closed ideal
\begin{align*}
I \coloneqq \{ (f, A) \in L^1 \oplus \mathcal T^1: ~\widehat{f}(z) + (-1)^j \widehat{A}(z) = 0 \text{ for } z \in \Omega_j,\, j\in \mathbb{Z}_2 \}.
\end{align*}
By the definition, we have $Z(I) = (\Omega_0 \times \{ 0\}) \cup (\Omega_1 \times \{ 1\})$.
If $(f,A) \in I \cap J(I)$, then both $\widehat{f}+\widehat{A}$ and $\widehat{f}-\widehat{A}$ vanish on all of $\mathbb{R}^2$, so $(f,A) = (0,0)$. Thus, we have
\begin{align*}
Z(I \cap J(I))=Z(\{(0,0)\}) = \mathbb R^2 \times \mathbb Z_2.
\end{align*}
\end{example}
\begin{proposition}\label{prop: zeros}
Let $I$ be a closed ideal of $L^1 \oplus\mathcal T^1$. If $Z(I) = \emptyset$, then $I = L^1 \oplus \mathcal T^1$.
\end{proposition}
\begin{proof}
We clearly have $I \cap J(I) \subset I$. By Lemma \ref{lem: idempotent J} \ref{lem: idempotent J graded sub}, $I \cap J(I)$ is a closed graded ideal with $Z(I \cap J(I)) = \emptyset$. Thus, $I \cap J(I) = L^1 \oplus \mathcal T^1$ by Remark \ref{re graded} \ref{re maximal}.
\end{proof}
\begin{corollary}
Every proper closed ideal of $L^1 \oplus \mathcal T^1$ is contained in a regular maximal ideal.
\end{corollary}
\begin{proof}
By Proposition \ref{prop: zeros}, if $I$ is proper, then $Z(I) \neq \emptyset$. Hence, for $(z, j) \in Z(I)$, we have $I \subset I_{(z, j)}$.
\end{proof}
\begin{corollary}
Every maximal closed ideal of $L^1 \oplus \mathcal T^1$ is regular.
\end{corollary}
\begin{proof}
If $I$ is a (proper) maximal closed ideal, then it is contained in a regular maximal ideal by the previous result. By maximality, both ideals have to agree.
\end{proof}
Having looked at the maximal closed graded subideal of $I$, we now turn to the minimal closed graded ideal containing $I$.
\begin{lemma}
Let $I \subset L^1 \oplus \mathcal T^1$ be a closed ideal.
\begin{enumerate}
\item $\overline{I + J(I)}$ is a graded ideal.
\item $Z(\overline{I + J(I)} )= Z(I) \cap Z(J(I)) = \{ z \in \mathbb R^{2n}: ~(z, 0) \in Z(I) \text{ and } (z, 1) \in Z(I)\} \times \mathbb Z_2$.
\end{enumerate}
\end{lemma}
Again, we leave the proof of the lemma to the reader.
\begin{example}
In the case of the maximal ideal $ I_{(z, j)}$, we have that \[\overline{I_{(z, j)} + J(I_{(z, j)})}=L^1\oplus \mathcal{T}^1.\]
This is a simple consequence of Proposition \ref{prop: zeros}
together with $J(I_{(z, j)})=I_{(z, j+1)}$.
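Explicitly, $Z(I_{(z, j)}) = \{(z, j)\}$, so the preceding lemma gives
\[Z\big(\overline{I_{(z, j)} + J(I_{(z, j)})}\big) = Z(I_{(z, j)}) \cap Z(I_{(z, j+1)}) = \{(z, j)\} \cap \{(z, j+1)\} = \emptyset,\]
and Proposition \ref{prop: zeros} applies.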
\end{example}
Let us now move on to prove some general properties of closed ideals. For a subset $V\subset L^1\oplus \mathcal{T}^1$, we define
\[V'\coloneqq\overline{(\{0\}\oplus \mathcal T^1 ) \ast V}.\]
\begin{proposition}
Let $V \subset L^1 \oplus \mathcal T^1$ be a closed, shift-invariant subspace.
\begin{enumerate}
\item $V'$ is a closed, shift-invariant subspace.
\item $Z(V') = Z(V)$.
\item $V = V'' $.
\item $\overline{V + V'}$ is a closed ideal of $L^1 \oplus \mathcal T^1$.
\item $Z(\overline{V + V'}) = Z(V)$.
\end{enumerate}
In particular, if $I \subset L^1 \oplus \mathcal T^1$ is a closed ideal then $I = \overline{I + I'}$.
\end{proposition}
\begin{proof}
\begin{enumerate}
\item By definition $V'$ is closed. Further, $(\{0\}\oplus\mathcal T^1 ) \ast V$ is shift-invariant, by the definition of convolution. Taking the closure preserves shift-invariance.
\item Let $B$ be a regular operator, i.e., $\mathcal{F}_W(B)(z)$ is nonzero for all $z\in \mathbb{R}^{2n}$.
Then by
\eqref{eq:conv_operator_operator} and \eqref{eq:conv_function_operator} we have that
\begin{align*}
\widehat{A \ast B}(z) + (-1)^j \widehat{f \ast B}(z) = (-1)^j\widehat{B}(z) \big(\widehat{f}(z) + (-1)^{j}\widehat{A}(z)\big), \quad (f,A)\in V.
\end{align*}
Hence the result follows from the definition of $Z(V')$.
\item This follows from the density of $\mathcal T^1 \ast \mathcal T^1$ in $L^1(\mathbb R^{2n})$, the assumptions on $V$, and the associativity of the convolutions.
\item Clearly, $\overline{V + V'}$ is an $L^1$ module. Moreover, $(\{0\}\oplus\mathcal T^1) \ast V \subset V'$ by the definition of $V'$, and $\overline{(\{0\}\oplus\mathcal T^1) \ast V'} = V'' = V$ by 3., so $\overline{V + V'}$ is also invariant under convolution by $\mathcal T^1$.
\item The final property is an immediate consequence of 2.\qedhere
\end{enumerate}
\end{proof}
The above result tells us how to construct ideals of $L^1 \oplus \mathcal T^1$: Pick any closed, shift-invariant subspace $V$ of $L^1 \oplus \mathcal T^1$ and form $\overline{V + V'}$. Then, if $Z(V) \neq \emptyset$, we are guaranteed to obtain a proper closed ideal of $L^1 \oplus \mathcal T^1$. Further, any closed ideal is of this form. Finally, if we let $V \subset L^1(\mathbb{R}^{2n}) \oplus \{ 0\}$ be a closed shift-invariant subspace we obtain a graded ideal. Additionally, by Lemma \ref{lem:graded_closed_ideals} any
closed graded ideal is of this form.
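As a concrete illustration (a sketch, using the regularity of the Fourier algebra of $\mathbb{R}^{2n}$): for a closed set $K \subset \mathbb R^{2n}$, the subspace
\[V_K \coloneqq \{ (f, 0): ~f \in L^1(\mathbb R^{2n}),\ \widehat{f}|_K = 0\}\]
is closed and shift-invariant, since the shifts only multiply $\widehat{f}$ by a character. For every $z \notin K$ there exists $f \in L^1(\mathbb R^{2n})$ with $\widehat{f}|_K = 0$ and $\widehat{f}(z) \neq 0$, so $Z(V_K) = K \times \mathbb Z_2$. Consequently, for $K \neq \emptyset$ the space $\overline{V_K + V_K'}$ is a proper closed graded ideal with zero set $K \times \mathbb Z_2$.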
\section{Quantum Segal Algebras}\label{sec: Quantum Segal}
Moving on from the closed ideals and $L^1$ modules, we will in this section study dense $L^1$ modules endowed with a shift-invariant norm.
In doing this, we will define a version of Segal algebras in $L^1\oplus \mathcal{T}^1$.
\begin{definition}
A \emph{quantum Segal algebra (QSA)} is a pair $(QS, \| \cdot\|_{QS})$ where:
\begin{enumerate}[(QS1)]
\item \label{label:QS1} $QS$ is a dense subspace of $L^1 \oplus \mathcal T^1$.
\item \label{label:QS2} The space $(QS, \|\cdot\|_{QS})$ is a Banach algebra with multiplication inherited from $L^1 \oplus \mathcal T^1$.
\item \label{label:QS3} The space $QS$ is shift-invariant.
\item \label{label:QS4} The shifts $\alpha_z$, $z\in \mathbb{R}^{2n}$, act isometrically on $(QS, \| \cdot\|_{QS})$, and $z \mapsto \alpha_z(f, A)$ is continuous for every $(f, A) \in QS$.
\end{enumerate}
If, additionally, the involution satisfies $QS^\ast =QS$ and $\|(f,A)^\ast\|_{QS}=\|(f,A)\|_{QS}$ for all $(f,A)\in QS$, then we refer to $(QS, \| \cdot\|_{QS})$ as \emph{star-symmetric}.
\end{definition}
As a first consequence of the definition, we obtain that every quantum Segal algebra continuously embeds into the ambient space:
\begin{proposition}
Let $(QS, \| \cdot\|_{QS})$ be a quantum Segal algebra. Then, there is a constant $C > 0$ such that for each $(f, A) \in QS$ we have
\begin{equation*}
\| (f, A)\|_{L^1 \oplus \mathcal T^1} \leq C\| (f, A)\|_{QS}.
\end{equation*}
\end{proposition}
The proposition is a direct application of the following well-known result on continuity of Banach algebra homomorphisms.
Recall by Corollary \ref{cor: Jacobson semisimple} that $L^1 \oplus \mathcal T^1$ is Jacobson semisimple.
\begin{lemma}\label{lem:homeo_implies_cont}
Let $\mathcal A$ and $\mathcal B$ be commutative Banach algebras, where $\mathcal B$ is further assumed to be Jacobson semisimple. Then any algebra homomorphism $\varphi\colon \mathcal A \to \mathcal B$ is continuous.
\end{lemma}
Since the proof is very short, we present it for the reader's convenience.
\begin{proof}
Let $(x_n)_{n \in \mathbb N}$ be a sequence in $\mathcal A$ converging to $0 \in \mathcal A$ such that $y\coloneqq\lim_{n\to \infty}\varphi(x_n)$ exists in $\mathcal B$. Denote by $\mathcal M(\mathcal A)$ and $\mathcal M(\mathcal B)$ the space of multiplicative linear functionals on $\mathcal A$ and $\mathcal B$, respectively. Then, for every $\chi \in \mathcal M(\mathcal B)$ we have $\chi\circ \varphi \in \mathcal M(\mathcal A)$, hence $\chi\circ \varphi$ is continuous. Thus,
\begin{align*}
\chi(y) = \chi(\lim_{n \to \infty} \varphi(x_n)) = \lim_{n \to \infty} \chi(\varphi(x_n)) = \chi(\varphi(0)) = 0.
\end{align*}
Hence, $\chi(y) = 0$ for every $\chi \in \mathcal M(\mathcal B)$. Since $\mathcal B$ is assumed to be Jacobson semisimple, it follows that $y = 0$. Thus, by the closed graph theorem, $\varphi$ is continuous.
\end{proof}
Recall that every Segal algebra is an $L^1$ module by \eqref{eq:banach_ideal_bound}.
Let $\{g_t\}_{t>0}$ be a normalized approximate identity for $L^1(\mathbb{R}^{2n})$; then $\{(g_t,0)\}_{t>0}$ is an approximate identity for $L^1\oplus \mathcal{T}^1$.
Hence, every quantum Segal algebra $QS$ is an \emph{essential} Banach module, and the Cohen-Hewitt factorization theorem implies that
\begin{align*}
QS = \{ g \ast (f, A): ~g \in L^1(\mathbb R^{2n}), \, (f, A) \in QS\}.
\end{align*}
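The essentiality asserted above can be verified directly from \ref{label:QS4}: for a nonnegative normalized approximate identity $\{g_t\}_{t>0}$, the convolution $g_t \ast (f, A)$ can be written as a Bochner integral of shifts, whence
\begin{align*}
\| g_t \ast (f, A) - (f, A)\|_{QS} \leq \int_{\mathbb R^{2n}} g_t(z)\, \| \alpha_z(f, A) - (f, A)\|_{QS}\, \mathrm{d}z \to 0 \quad \text{as } t \to 0,
\end{align*}
since the mass of $g_t$ concentrates at $z = 0$, where the integrand vanishes continuously.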
As in the case of Segal algebras, see, e.g., \cite{dunford1974}, the property of being an essential $L^1$ module characterizes quantum Segal algebras among dense subalgebras of $L^1 \oplus \mathcal T^1$ in the following sense.
\begin{proposition}
Let $\mathcal A \subset L^1 \oplus \mathcal T^1$ be a dense subalgebra, such that $(\mathcal A, \| \cdot\|_{\mathcal A})$ is a Banach algebra. Then $\mathcal A$ is an essential $L^1$ Banach module if and only if it is a quantum Segal algebra.
\end{proposition}
\begin{proof}
Since $\mathcal{A}$ is essential, the Cohen--Hewitt factorization theorem shows that every element of $\mathcal A$ can be written as $(f, A)=h\ast(g,B)$ for some $h \in L^1(\mathbb R^{2n})$ and $(g, B) \in \mathcal A$.
Hence $\mathcal A$ is shift-invariant since
\begin{align*}
\alpha_z(f, A) = \alpha_z(h \ast (g, B)) = \alpha_z(h) \ast (g, B) \in \mathcal A \text{ for all } z\in \mathbb{R}^{2n}.
\end{align*}
Furthermore, for $z \to 0$ we have
\begin{align*}
\| \alpha_z(f, A) - (f, A)\|_{\mathcal A} = \| \alpha_z(h) \ast (g, B) - h \ast (g,B)\|_{\mathcal A} \leq \| \alpha_z(h) - h\|_{L^1} \| (g, B)\|_{\mathcal A} \to 0.
\end{align*}
It remains to show that the shifts act isometrically with respect to $\|\cdot\|_{\mathcal{A}}$. For $\{g_{t}\}_{t > 0} \subset L^1(\mathbb R^{2n})$ a normalized approximate identity and $(f, A) \in \mathcal A$ we have
\begin{align*}
\| \alpha_z(f, A)\|_{\mathcal A} = \lim_{t \to 0} \| \alpha_z(g_t) \ast (f, A)\|_{\mathcal A} \leq \limsup_{t \to 0} \| \alpha_z(g_t)\|_{L^1} \| (f, A)\|_{\mathcal A} = \| (f, A)\|_{\mathcal A}.
\end{align*}
The other direction follows from applying the same argument to $ \alpha_{-z}(\alpha_{z}(f, A)) = (f, A)$.
\end{proof}
We have seen that quantum Segal algebras are $L^1$ modules; however, they are not in general $L^1 \oplus \mathcal T^1$ ideals. Remark \ref{remex} below gives an example of this. In particular, this means that quantum Segal algebras are not abstract Segal algebras in the sense of Burnham \cite{Burnham1972}. Hence we should not hope for the full ideal theorem that Segal algebras satisfy. Nevertheless, there is a weak version:
\begin{proposition}
Let $QS$ be a quantum Segal algebra.
\begin{enumerate}[(1)]
\item For every closed ideal $I$ of $L^1 \oplus \mathcal T^1$, the set $I \cap QS$ is a closed ideal of $QS$.
\item If $I$ is a closed ideal of $QS$, then the $L^1 \oplus \mathcal T^1$-closure of $I$ is a closed ideal of $L^1 \oplus \mathcal T^1$. Further, $I \subseteq \overline{I} \cap QS$.
\end{enumerate}
If $QS$ is an ideal of $L^1 \oplus \mathcal T^1$, then we have $I = \overline{I} \cap QS$ in (2).
\end{proposition}
\begin{proof}
The proof is nearly identical to the one given in
\cite[Thm.~1.1]{Burnham1972}, and is hence omitted.
\end{proof}
We have already seen in Lemma \ref{lem:graded_closed_ideals} that graded closed ideals of $L^1\oplus \mathcal T^1$ admit a particularly simple structure. This is also true for quantum Segal algebras. For completeness, we repeat the definition, which is the same as for closed ideals: A quantum Segal algebra $QS$ is graded if $QS \cong (QS \cap (L^1 \oplus \{ 0\})) \oplus (QS \cap (\{ 0\} \oplus \mathcal T^1))$. It should be noted that not every quantum Segal algebra is graded, see Example \ref{qsa:notgraded}.
Note that for a graded quantum Segal algebra $(QS, \|\cdot\|_{QS})$, an easy application of the open mapping theorem yields that its norm is equivalent to the sum of the subspace norms:
\begin{align*}
c_1 (\| (f, 0)\|_{QS} + \| (0, A)\|_{QS}) \leq \| (f, A)\|_{QS} \leq c_2(\| (f, 0)\|_{QS} + \| (0, A)\|_{QS}), \quad (f, A) \in QS.
\end{align*}
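In more detail (a sketch of the open mapping argument): the graded components $QS \cap (L^1 \oplus \{0\})$ and $QS \cap (\{0\} \oplus \mathcal T^1)$ are closed in $(QS, \|\cdot\|_{QS})$, being preimages of closed sets under the continuous embedding $QS \hookrightarrow L^1 \oplus \mathcal T^1$. Hence the addition map
\begin{align*}
\big(QS \cap (L^1 \oplus \{0\})\big) \times \big(QS \cap (\{0\} \oplus \mathcal T^1)\big) \ni \big((f, 0), (0, A)\big) \mapsto (f, A) \in QS
\end{align*}
is a continuous linear bijection between Banach spaces. The right-hand inequality holds with $c_2 = 1$ by the triangle inequality, while the left-hand inequality expresses the boundedness of the inverse map.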
Even though we do not yet know if the complete ideal theorem for Segal algebras carries over, on the level of regular maximal ideals this is true for graded quantum Segal algebras.
\begin{proposition}\label{prop: gelfand}
Let $(QS,\|\cdot\|_{QS})$ be a graded quantum Segal algebra.
\begin{enumerate}[(1)]
\item Every character $\chi_{z, j}$ with $(z, j)\in \mathbb{R}^{2n}\times \mathbb{Z}_2$ restricts to a multiplicative linear functional on $QS$.
\item For every $\chi \in \mathcal M(QS)$ there is some $(z, j) \in \mathbb R^{2n} \times \mathbb Z_2$ such that $\chi = \left.\chi_{z, j}\right|_{QS}$.
\item\label{prop: 3 nonequal} If $(z_1, j_1),\, (z_2, j_2) \in \mathbb R^{2n} \times \mathbb Z_2$ such that $\left.\chi_{z_1, j_1}\right|_{QS} = \left.\chi_{z_2, j_2}\right|_{QS}$ then $(z_1, j_1) = (z_2, j_2)$.
\end{enumerate}
Hence, the Gelfand transform $\Gamma_{QS}$ of $QS$ is simply the restriction $\Gamma_{L^1\oplus \mathcal T^1}|_{QS}$.
\end{proposition}
\begin{proof}
\begin{enumerate}[(1)]
\item This is clear, as $QS$ and $L^1 \oplus \mathcal T^1$ have the same algebraic operations.
\item Notice that $S=QS \cap (L^1(\mathbb{R}^{2n})\oplus\{0\})$ is a Segal algebra. Let $\chi\in \mathcal M(QS)$. Then, $\chi|_{S}$ is a nonzero multiplicative linear functional on $S$: if $\chi$ vanished on $S$, then $\chi(0, A)^2 = \chi\big((A \ast A, 0)\big) = 0$ for every $(0, A) \in QS$, and by gradedness $\chi = 0$, a contradiction. As Segal algebras have the same Gelfand theory as $L^1(\mathbb{R}^{2n})$, there is some $z \in \mathbb R^{2n}$ such that
\begin{align*}
\chi|_{S}(f) = \widehat{f}(z), \quad f \in S.
\end{align*}
From this, one concludes as in the proof of Proposition \ref{prop: mult char} that
\begin{align*}
\chi(f, A) = \widehat{f}(z) +(-1)^j \widehat{A}(z)=\chi_{z,j}(f,A).
\end{align*}
\item The assumption yields
\begin{equation*}
\widehat{f}(z_1) = \chi_{z_1, j_1}(f, 0) = \chi_{z_2, j_2}(f,0) = \widehat{f}(z_2).
\end{equation*}
If $z_1 \neq z_2$, then there exists $f \in L^1(\mathbb R^{2n})$ such that $\widehat{f}(z_1) \neq \widehat{f}(z_2)$ and a sequence $\{f_n\}_{n\in \mathbb{N}} \subset S$ with $f_n \to f$ in $L^1(\mathbb{R}^{2n})$. Since point evaluations of the Fourier transform are continuous on $L^1(\mathbb{R}^{2n})$, we thus would obtain
\begin{equation*}
\widehat{f}(z_1) =\lim_{n\to \infty} \widehat{f_n}(z_1) = \lim_{n\to \infty}\widehat{f_n}(z_2) =\widehat{f}(z_2),
\end{equation*}
which is a contradiction. Hence $z_1 = z_2$. Further, if $j_1 \neq j_2$, then $\widehat{A}(z_1) = 0$ for every $A \in QS \cap (\{0\}\oplus\mathcal T^1)$. Since this set is dense in $\mathcal T^1$ and point evaluations of the Fourier--Weyl transform are continuous on $\mathcal T^1$, this would force $\widehat{B}(z_1) = 0$ for every $B \in \mathcal T^1$. As there exists $B \in \mathcal T^1$ with $\widehat{B}(z_1) \neq 0$, we conclude $j_1 = j_2$.\qedhere
\end{enumerate}
\end{proof}
\begin{corollary}
Let $QS$ be a graded quantum Segal algebra. Then, $QS$ is Jacobson semisimple.
\end{corollary}
\section{Examples of Quantum Segal Algebras}\label{sec: examples of QSA}
While we know that $L^{1} \oplus \mathcal{T}^{1}$ is a quantum Segal algebra, we have not yet seen any nontrivial examples. In this section we will construct several.
\subsection{Induced Quantum Segal Algebras}
We say that $A \in \mathcal{T}^{1}$ is a \emph{regular operator} if it satisfies $\mathcal{F}_{W}(A)(z) \neq 0$ for all $z \in \mathbb{R}^{2n}$. Using this, we have the following construction.
\begin{definition}
Let $(S, \|\cdot\|_{S})$ denote a Segal algebra and fix a regular operator $A \in \mathcal{T}^{1}$. Define the subspace $S^{A} \subset L^{1} \oplus \mathcal{T}^{1}$ as
\[S^{A} \coloneqq \left\{(f, g \ast A) : f, g \in S \right\},\]
and the norm on $S^{A}$ by
\begin{equation} \label{eq:norm_on_quantum_feichtinger_algebra}
\|(f, g \ast A)\|_{S^{A}} \coloneqq \|f\|_{S} + \|A\|_{\mathcal{T}^{1}}\|g\|_{S}.
\end{equation}
We say that $(S^{A}, \|\cdot\|_{S^{A}})$ is the \emph{induced quantum Segal algebra} of $(S, \|\cdot\|_{S})$ and $A$.
\end{definition}
The reason we need to require that $A \in \mathcal{T}^{1}$ is regular will be clear from the proof of the following result.
\begin{theorem}
\label{thm: induced_segal_algebra}
The induced quantum Segal algebra $(S^{A}, \|\cdot\|_{S^{A}})$ of $(S, \|\cdot\|_{S})$ and $A$ is a quantum Segal algebra. Moreover, in the case that $(S, \|\cdot\|_{S})$ is star-symmetric and $A^{\ast_{\textrm{QHA}}} = A$ we have that $(S^{A}, \|\cdot\|_{S^{A}})$ is star-symmetric.
\end{theorem}
The final claim regarding star-symmetry follows immediately from \eqref{eq:involution operator operator} and \eqref{eq:involution function operator}. However, one can give the precise criterion for when $S^A$ is star-symmetric. We quickly discuss this before turning to the proof of Theorem \ref{thm: induced_segal_algebra}:
\begin{proposition}
Let $(S^A, \|\cdot\|_{S^A})$ be an induced quantum Segal algebra. Then it is star-symmetric if and only if the Segal algebra $S$ is star-symmetric and closed under convolution by the tempered distribution $\varphi$ defined by
\begin{equation*}
\varphi = \mathcal F_\sigma( \overline{\mathcal F_W(A)}/\mathcal F_W(A)).
\end{equation*}
\end{proposition}
\begin{proof}
We need to be able to solve the equation
\begin{equation*}
(f_1, g_1 \ast A)^\ast = (f_2, g_2 \ast A)
\end{equation*}
for $f_2, g_2 \in S$ whenever $f_1, g_1 \in S$ are given. Clearly, $f_2=f_1^\ast$, so $S$ must be star-symmetric. Applying $\mathcal F_W$ to the second component yields
\begin{equation*}
\mathcal F_\sigma(g_1^\ast) \mathcal F_W(A^{\ast_{\textrm{QHA}}}) = \mathcal F_\sigma(g_1^\ast) \overline{\mathcal F_W(A)} = \mathcal F_\sigma(g_2) \mathcal F_W(A).
\end{equation*}
Since $\mathcal F_W(A)$ is nowhere zero, we can solve this for
\begin{equation*}
\mathcal F_\sigma(g_2) = \mathcal F_\sigma(g_1^\ast) \overline{\mathcal F_W(A)}/\mathcal F_W(A),
\end{equation*}
or equivalently
\begin{equation*}
g_2 = g_1^\ast \ast \mathcal F_\sigma^{-1}(\overline{\mathcal F_W(A)}/\mathcal F_W(A))
\end{equation*}
in the sense of tempered distributions.
\end{proof}
\begin{remark}
When $A = A^{\ast_{\textrm{QHA}}}$ we have $\overline{\mathcal F_W(A)}/\mathcal F_W(A) = 1$ implying that $\varphi = (2\pi)^n\delta_0$.
\end{remark}
We break the verification of Theorem \ref{thm: induced_segal_algebra} into the following three lemmas:
\begin{lemma}
The space $(S^{A}, \|\cdot\|_{S^{A}})$ is a Banach space.
\end{lemma}
\begin{proof}
The homogeneity and the triangle inequality for the norm are straightforward. Let us verify that the norm is well-defined and positive definite. First, the representation $(f, g \ast A)$ determines $g$ uniquely: if $g \ast A = 0$, then by \eqref{eq:conv_function_operator} we have
\[\mathcal{F}_{W}(g \ast A) = \mathcal{F}_{\sigma}(g)\cdot \mathcal{F}_{W}(A) = 0.\]
Since $A$ is regular, this forces $\mathcal{F}_{\sigma}(g)= 0$, hence $g = 0$ and $\|\cdot\|_{S^{A}}$ is well-defined. Moreover, \eqref{eq:norm_on_quantum_feichtinger_algebra} shows that $\|(f, g \ast A)\|_{S^{A}} = 0$ implies $f = g = 0$, that is, $(f, g \ast A) = (0, 0)$.
The completeness of $S^A$ follows easily from the completeness of $S$.
\end{proof}
\begin{lemma}
The induced quantum Segal algebra $(S^{A}, \|\cdot\|_{S^{A}})$ satisfies \ref{label:QS1} and \ref{label:QS2}.
\end{lemma}
\begin{proof}
To show \ref{label:QS1}, notice that $L^1(\mathbb R^{2n}) \ast A$ is dense in $\mathcal T^1$ whenever $A$ is regular by \cite{werner84}. Combined with \ref{def:s1}, this shows that $(S^{A}, \|\cdot\|_{S^{A}})$ is a dense subspace of $L^{1} \oplus \mathcal{T}^{1}$.
For \ref{label:QS2}, let us first show that $\ast$ is a well-defined operator on $S^{A}$. For two elements $(f_{1}, g_{1} \ast A)$ and $(f_{2}, g_{2} \ast A)$ we have
\[(f_{1}, g_{1} \ast A) \ast (f_{2}, g_{2} \ast A) = (f_{1} \ast f_{2} + (g_{1} \ast g_{2}) \ast (A \ast A), (f_{1} \ast g_{2} + f_{2} \ast g_{1}) \ast A).\]
Since $S$ is closed under convolution, we know that \[f_{1} \ast f_{2},\, g_{1} \ast g_{2},\, f_{1} \ast g_{2} + f_{2} \ast g_{1}\in S.\] Moreover, we know that $A \ast A \in L^{1}(\mathbb{R}^{2n})$. Since Segal algebras are ideals of $L^1(\mathbb{R}^{2n})$, the product is well-defined.
To see that $(S^{A}, \|\cdot\|_{S^{A}})$ is a Banach algebra we compute
\begin{align*}
& \|(f_{1}, g_{1} \ast A) \ast (f_{2}, g_{2} \ast A)\|_{S^{A}} \\
& \leq \|f_{1} \ast f_{2}\|_{S} + \|g_{1} \ast g_{2}\|_{S} \|A \ast A\|_{L^{1}} + \|A\|_{\mathcal{T}^{1}}\|f_{1} \ast g_{2} + f_{2} \ast g_{1}\|_{S} \\
& \leq \|f_{1}\|_{S} \|f_{2}\|_{S} + \|g_{1}\|_{S} \|g_{2}\|_{S} \|A\|_{\mathcal{T}^{1}}^{2} + \|A\|_{\mathcal{T}^{1}}(\|f_{1}\|_{S} \|g_{2}\|_{S} + \|f_{2}\|_{S} \|g_{1}\|_{S}) \\
& =
\|(f_{1}, g_{1} \ast A)\|_{S^{A}} \cdot \|(f_{2}, g_{2} \ast A)\|_{S^{A}}.\qedhere
\end{align*}
\end{proof}
\begin{lemma}
The induced quantum Segal algebra $(S^{A}, \|\cdot\|_{S^{A}})$ satisfies \ref{label:QS3} and \ref{label:QS4}.
\end{lemma}
\begin{proof}
For any $z \in \mathbb{R}^{2n}$ we have that
\begin{align*}
\alpha_z(f, g \ast A) = (\alpha_z(f), \alpha_z(g) \ast A).
\end{align*}
Since the Segal algebra $S$ is shift-invariant and its shifts are norm-isometric, the same holds for $(S^{A}, \|\cdot\|_{S^{A}})$.
Finally, the shifts act continuously since
\begin{equation*}
\lim_{z\to 0}\| \alpha_z(f, g \ast A) - (f, g \ast A)\|_{S^{A}}
= \lim_{z\to 0}\| \alpha_z(f) - f\|_{S} + \| A\|_{\mathcal{T}^1} \lim_{z\to 0}\| \alpha_z(g) - g\|_{S}= 0. \qedhere
\end{equation*}
\end{proof}
\begin{example}
Let us assume that $A=\phi\otimes \psi$ is a rank one operator and $g\in S\subset L^1(\mathbb{R}^{2n})$, where $S$ is a Segal algebra. Define the \emph{localization operator} by
\[\mathcal{A}^{\phi,\psi}_g\eta \coloneqq \int_{\mathbb{R}^{2n}}g(z)\langle \eta,W_z\psi\rangle W_z\phi\, \mathrm{d}z .\]
The localization operator can be rewritten as
\[g\ast(\phi\otimes \psi)=\mathcal{A}^{\phi,\psi}_g.\]
Hence the induced quantum Segal algebra with respect to $\phi\otimes \psi$ and $S$ consists of the elements \[(f,\mathcal{A}^{\phi,\psi}_g), \qquad f,g\in S.\]
To get a quantum Segal algebra we need that $\mathcal{F}_W(\phi\otimes \psi)(z)\not=0$ for all $z\in \mathbb{R}^{2n}$. Computing the Fourier-Weyl transform of $\phi\otimes \psi$ we get
\[\mathcal{F}_W(\phi\otimes\psi)(z) = e^{i x\xi/2}V_{\psi}\phi(-z)\not = 0,\qquad z=(x,\xi)\in \mathbb{R}^{2n},\]
where $V_{\psi}\phi$ is given in \eqref{eq:STFT}. Examples of functions satisfying $V_{\psi}\phi(z)\not = 0$ are given in \cite{GJM20}.
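For instance, one may take $\phi = \psi = \phi_0$ with $\phi_0$ an $L^2$-normalized Gaussian: up to the normalization and phase conventions of \eqref{eq:STFT}, one then has
\[|V_{\phi_0}\phi_0(z)| = e^{-c|z|^2} \quad \text{for some constant } c > 0,\]
which is nowhere zero. Hence Gaussian windows always produce induced quantum Segal algebras consisting of localization operators.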
\end{example}
\begin{remark}
Not every quantum Segal algebra is an induced quantum Segal algebra. As an example, let $\mathcal{S}_0=\mathcal{S}_0(\mathbb{R}^{2n})$ denote the Feichtinger algebra defined in Example \ref{ex:segal} \ref{ex:Feichtinger}. Then for any regular operator $A$ the set $L^1(\mathbb{R}^{2n}) \oplus (\mathcal{S}_0 \ast A)$ is a quantum Segal algebra with the norm
\[\|(f,g\ast A)\|_{L^1(\mathbb{R}^{2n}) \oplus (\mathcal{S}_0 \ast A)}=\|f\|_{L^1}+\|g\|_{\mathcal{S}_0}\|A\|_{\mathcal{T}^1}.\]
\end{remark}
\begin{remark}\label{remex}
Induced quantum Segal algebras are in general not ideals of $L^1 \oplus \mathcal T^1$. As an example, let $n=1$ and consider $L^1(\mathbb R^2)^A$ with $A \in \mathcal T^1$ a regular operator. Then the set
\begin{align*}
X=\{ f \ast A: f \in L^1(\mathbb R^2)\} = \{ B \in \mathcal T^{1}: (0, B) \in L^1(\mathbb R^2)^A\}
\end{align*}
is dense in $\mathcal T^1$. Further, $X$ is a proper subset of $\mathcal T^1$, since $A\not\in X$. This follows from $A = 2\pi\delta_0 \ast A$ together with the injectivity of the map $\mu \mapsto \mu \ast A$ on $\mathcal{S}'(\mathbb{R}^{2})$. Nevertheless, since $\mathcal T^1$ is an essential $L^1$ module, the Cohen--Hewitt factorization theorem yields $f\in L^1(\mathbb{R}^2)$ and $B\in \mathcal{T}^1$ with
\begin{align*}
f\ast B=A.
\end{align*}
We therefore conclude that $L^1(\mathbb R^2)^A$ is not an $L^1 \oplus \mathcal T^1$ ideal, since $(f, 0)\in L^1(\mathbb R^2)^A$ and \[(0,B)\ast (f,0)=(0,A)\not \in L^1(\mathbb{R}^2)^A.\]
\end{remark}
\begin{remark}
One might initially expect that if $S$ is a modulation-invariant Segal algebra, then the induced quantum Segal algebra $S^{A}$ is modulation-invariant as well. However, this is not the case. As a simple counterexample, let $S = \mathcal{S}_{0}(\mathbb{R}^{2})=\mathcal{S}_{0}$ be the Feichtinger algebra and let $A$ be the regular operator given by
\[\mathcal{F}_{W}(A)(z) = e^{-z^2}, \qquad z \in \mathbb{R}^{2}.\]
Assume by contradiction that $S^{A}$ is modulation-invariant. Then for any $g_{1} \in \mathcal{S}_{0}$ and any $z' \in \mathbb{R}^{2}$ there should exist $g_{2} \in \mathcal{S}_{0}$ such that
\[\gamma_{z'}(g_{1} \ast A) = \gamma_{z'}(g_{1}) \ast \gamma_{z'}(A) = g_{2} \ast A.\]
Taking the Fourier-Weyl transform of both sides gives
\[\mathcal{F}_{\sigma}(g_{1})(z - z')e^{-(z - z')^2} = \mathcal{F}_{\sigma}(g_{2})(z)e^{-z^2}.\]
Since $\mathcal{F}_{\sigma}(\mathcal{S}_0)=\mathcal{S}_0$, we have that
$h_{1} = e^{-|z'|^2}\mathcal{F}_{\sigma}(g_{1}) \in \mathcal{S}_{0}$ and $h_{2} = \mathcal{F}_{\sigma}(g_{2}) \in \mathcal{S}_{0}$. By picking $z' = \left(-\frac{1}{2}, 0\right)$ we obtain
\[h_{1}\left(z + \left(\tfrac{1}{2}, 0\right)\right)e^{-x} = h_{2}(z), \qquad z = (x, \xi) \in \mathbb{R}^2.\]
Since $h_{1}$ is arbitrary, and $e^{-x}\mathcal{S}_{0}\not = \mathcal{S}_{0}$, we have a contradiction. For example, for $h_{1}(z) = (1 + |z|^2)^{-2}$ the left-hand side is unbounded, so there is no solution $h_2\in \mathcal{S}_0$.
\end{remark}
\subsection{Quantum Segal Algebras through Quantization}
In this section, we will construct quantum Segal algebras using the Weyl quantization. Recall that $\widetilde{f}(z)=f(-z)$, which naturally extends to $\mathcal S'(\mathbb R^{2n})$. Given a set $M$ of functions or distributions, we will use the notation \[\widetilde{M} = \{\widetilde{f} : f\in M\}.\] Notice that if $S$ is a Segal algebra, then so is also $\widetilde{S}$.
Given a set of tempered operators $\mathcal{A} \subset \mathcal S'(\mathcal H)$, we denote the set of symbols by $\mathrm{Sym}(\mathcal{A}) \subset \mathcal S'(\mathbb R^{2n})$, i.e.,
\[\widetilde{\mathrm{Sym}(\mathcal A)} = \mathcal F_\sigma(\mathcal F_W(\mathcal A )).\] Vice versa, given a set of tempered distributions $S \subset \mathcal S'(\mathbb R^{2n})$, we denote by $\mathrm{Sym}^{-1}(S)$ the set of quantized operators as a subset of $\mathcal S'(\mathcal H)$, i.e.,
\[\mathrm{Sym}^{-1}(S) = P\mathcal F_W( \mathcal F_\sigma(\widetilde{S}))P.\]
In the following result, we present a Segal algebra which plays an important role for the discussion of quantum Segal algebras.
\begin{lemma}
Denote
\begin{align*}
\mathcal T := \{ f \in L^1(\mathbb R^{2n}): ~A_f \in \mathcal T^1\}.
\end{align*}
Endowed with the norm
\begin{align*}
\| f\|_{\mathcal T} := \| f\|_{L^1} + \| A_f\|_{\mathcal T^1},
\end{align*}
the space $\mathcal T$ is a star-symmetric and strongly modulation-invariant Segal algebra.
\end{lemma}
\begin{proof}
Firstly, note that $\mathcal T$ contains $\mathcal S(\mathbb R^{2n})$, so it is dense in $L^1(\mathbb R^{2n})$. We have
\begin{align*}
\| \alpha_z(f)\|_{\mathcal T} = \| \alpha_z(f)\|_{L^1} + \| A_{\alpha_z(f)}\|_{\mathcal T^1} = \| \alpha_z(f)\|_{L^1} + \| \alpha_{-z}(A_f)\|_{\mathcal T^1} = \| f\|_{\mathcal T}.
\end{align*}
Similarly, the shifts act strongly continuously on $\mathcal T$. Further, we have
\begin{align*}
\| f\ast g\|_{\mathcal T} &= \| f \ast g\|_{L^1} + \| A_{f \ast g}\|_{\mathcal T^1} = \| f\ast g\|_{L^1} + \| \widetilde{f} \ast A_g\|_{\mathcal T^1} \\
&\leq \| f\|_{L^1} \| g\|_{L^1} + \| f\|_{L^1} \| A_g\|_{\mathcal T^1} \leq \| f\|_{\mathcal T} \| g\|_{\mathcal T}
\end{align*}
for $f, g \in \mathcal T$. Therefore, $\mathcal T$ is a Segal algebra. Regarding star-symmetry, note that
\begin{align*}
A_{f^\ast} = A_{\tilde{\overline{f}}} = P(A_{\overline{f}})P = PA_f^\ast P = A_f^{\ast_{\textrm{QHA}}}.
\end{align*}
Since $\| A_f\|_{\mathcal T^1} = \| A_f^{\ast_{\textrm{QHA}}}\|_{\mathcal T^1}$, and clearly $\| f\|_{L^1} = \| f^\ast\|_{L^1}$, we have $\| f\|_{\mathcal T} = \| f^\ast\|_{\mathcal T}$. The strong modulation invariance is now immediate from Lemma \ref{lemma:modulation_cont_t1}.
\end{proof}
\begin{proposition}\label{prop:segal_parts}
Let $QS =S_{1} \oplus S_{2} \subset L^{1}(\mathbb{R}^{2n}) \oplus \mathcal{T}^{1}$ be a graded quantum Segal algebra. Then both $S_{1}$ and $\mathrm{Sym}(S_{2}) \cap L^{1}(\mathbb{R}^{2n})$ are Segal algebras. Additionally, \[\widetilde{\mathrm{Sym}(S_{2})} \ast \widetilde{\mathrm{Sym}(S_{2})} \subset S_{1}.\]
In particular, when $\mathrm{Sym}(S_{2}) \subset L^{1}(\mathbb{R}^{2n})$ the set $\mathrm{Sym}(S_{2})$ is a Segal algebra.
\end{proposition}
\begin{proof}
It is clear that $S_1$ is a Segal algebra.
The set $S = \mathrm{Sym}(S_{2}) \cap L^{1}(\mathbb{R}^{2n})$ is a Banach space with the shift-invariant norm
\[\|f\|_{S}=\|(0, A_f)\|_{QS}+\|f\|_{L^1},\qquad f\in S.\]
The set $S$ is a Banach algebra, since by Proposition \ref{prop:L^1 module} and \eqref{eq: quantization of convolution} we have \[\widetilde{S}\ast\widetilde{S}\subset \widetilde{S}. \]
It remains to show that $S$ is dense in $L^1(\mathbb R^{2n})$. For this, note that
\begin{align*}
\mathcal T \ast S_2 \subset L^1(\mathbb R^{2n}) \ast S_2 = S_2.
\end{align*}
Further, we have
\begin{align*}
\mathcal{T}\ast S_2=\mathrm{Sym}^{-1}(\mathrm{Sym}^{-1}(\mathcal{T}) \ast \tilde{S_2})\subset \mathrm{Sym}^{-1}( \mathcal{T}^1\ast \tilde{S_2})\subset\mathrm{Sym}^{-1}( \mathcal{T}^1\ast \mathcal{T}^1)\subset \mathrm{Sym}^{-1}(L^1(\mathbb{R}^{2n})),
\end{align*}
where $\tilde{S_2}=\mathrm{Sym}^{-1}(\widetilde{\mathrm{Sym}(S_2)})$. Together, this shows $\mathcal T \ast S_2 \subset S$.
Note that, since $S_2$ is dense in $\mathcal T^1$, either by Proposition \ref{prop: gelfand} \ref{prop: 3 nonequal} or by Wiener's approximation theorem for $\mathcal T^1$, cf.~\cite{werner84}, we have that \[ \{z\in \mathbb{R}^{2n}:\mathcal{F}_W(A)(z)=0 \text{ for every } A \in S_2 \}=\emptyset .\]
Now using \eqref{eq:conv_function_operator} together with the fact that for every $z\in \mathbb R^{2n}$ there exist $f\in \mathcal{T}$ and $A\in S_2$ such that $\mathcal{F}_\sigma(f)(z)\not =0$ and $\mathcal{F}_W(A)(z)\not = 0$, we get that
\[\{z\in \mathbb{R}^{2n}:\mathcal{F}_W(A)(z)=0 \text{ for every } A \in S \}=\emptyset.\]
The set $\mathcal{F}_W(S)$ is an ideal of $\mathcal{F}_W(\mathrm{Sym}^{-1}(\mathcal{T}))\subset C_0(\mathbb{R}^{2n})$ with pointwise multiplication.
Since $\mathcal{T}$ is a Segal algebra, the set $\mathcal{F}_\sigma( \mathcal{T})$ is a standard algebra by \cite[Re.\ 2.1.15]{reiter20}.
Hence by \cite[Prop.\ 2.1.14]{reiter20} we have that \[C_c(\mathbb{R}^{2n})\cap \mathcal{F}_\sigma(L^1(\mathbb{R}^{2n}))\subset \mathcal{F}_W(S).\]
Since $\mathcal{F}_\sigma(C_c(\mathbb{R}^{2n})\cap \mathcal{F}_\sigma(L^1(\mathbb{R}^{2n})))$ is dense in $L^1(\mathbb{R}^{2n})$, the space $S$ is a Segal algebra.
The product structure on $QS$ together with \eqref{eq: quantization of convolution} gives
\[\widetilde{\mathrm{Sym}(S_{2})}\ast\widetilde{\mathrm{Sym}(S_{2})} \subset S_{1}.\qedhere\]
\end{proof}
The following result shows how to generate a wealth of examples of graded quantum Segal algebras.
\begin{proposition}\label{Prop:construction}
Let $S_{1}$ and $S_{2}$ be two Segal algebras where $\widetilde{S_{2}}\subset S_{1}$. Then \[QS = S_{1} \oplus (\mathrm{Sym}^{-1}(S_{2}) \cap \mathcal T^1)\] is a quantum Segal algebra with the norm
\[\|(f,A_g)\|_{QS}= \|f\|_{S_{1}}+C\|\widetilde{g}\|_{S_{2}} + C'\| A_g\|_{\mathcal T^1},\]
for suitable positive constants $C$ and $C'$, which are fixed in the proof below.
\end{proposition}
\begin{proof}
For the property \ref{label:QS1}, note that the set $A_0 := L^1(\mathbb R^{2n}) \cap \mathcal F_\sigma(C_c(\mathbb R^{2n}))$ is contained in $\mathcal T$, since $\mathcal T$ is a Segal algebra, cf.~\cite[Prop.~6.2.5]{reiter20}. Further, the set is dense in $\mathcal T$. Since $\mathrm{Sym}^{-1}(\mathcal T)$ is dense in $\mathcal T^1$, we obtain that $\mathrm{Sym}^{-1}(A_0)$ is dense in $\mathcal T^1$. But $A_0$ is also contained in $S_2$. Therefore, $\mathrm{Sym}^{-1}(S_2) \cap \mathcal T^1$ is dense in $\mathcal T^1$.
The conditions \ref{label:QS3} and \ref{label:QS4} are straightforward to verify. For \ref{label:QS2}, $QS$ is easily seen to be a complete subspace of $L^1 \oplus \mathcal T^1$. The product is well-defined by \eqref{eq: quantization of convolution}. Hence, we only need to show that the Banach algebra property holds, that is for $(f_{1}, A_{g_{1}}), (f_{2}, A_{g_{2}}) \in QS$ we have
\[\|(f_1,A_{g_1}) \ast (f_2,A_{g_2})\|_{QS} \leq \|(f_1,A_{g_1})\|_{QS} \cdot \|(f_2,A_{g_2})\|_{QS}.\]
We know by \eqref{eq:banach_ideal_bound_general} and Lemma \ref{lem:homeo_implies_cont} that there exist positive constants $c_1$ and $c_2$ such that
\begin{align*}
\|g\|_{S_{1}} & \le c_1 \cdot \|\widetilde{g}\|_{S_{2}} \\
\|f\ast\widetilde{g}\|_{S_{2}} & \le c_2 \cdot \|f\|_{S_{1}}\|\widetilde{g}\|_{S_{2}}
\end{align*}
for all $f\in S_{1}$ and $g\in S_{2}$.
Set $C\coloneqq\max \{c_1, c_2\}$. Further, by Lemma \ref{lem:homeo_implies_cont}, there is $C' > 0$ with $\| f\|_{L^1} \leq C' \| f\|_{S_1}$ for $f \in S_1$. We get
\begin{align*}
\|(f_1,A_{g_1}) \ast (f_2,A_{g_2})\|_{QS}
&= \|f_1\ast f_2+\widetilde{g_1}\ast\widetilde{g_2}\|_{S_{1}}+\|f_1\ast\widetilde{g_2}+f_2\ast\widetilde{g_1}\|_{S_{2}} \\
&\quad \quad + \| f_1 \ast A_{g_2} + f_2 \ast A_{g_1}\|_{\mathcal T^1}\\
&\le \|f_1\|_{S_{1}}\|f_2\|_{S_{1}}+C^2\|\widetilde{g_1}\|_{S_{2}}\|\widetilde{g_2}\|_{S_{2}}+C\|f_1\|_{S_{1}}\|\widetilde{g_2}\|_{S_{2}} \\
&\quad \quad + C\|f_2\|_{S_{1}}\|\widetilde{g_1}\|_{S_{2}} + \| f_1\|_{L^1} \| A_{g_2}\|_{\mathcal T^1} + \| f_2\|_{L^1} \| A_{g_1}\|_{\mathcal T^1}\\
&\leq\|(f_1,A_{g_1})\|_{QS} \cdot \|(f_2,A_{g_2})\|_{QS}.\qedhere
\end{align*}
\end{proof}
\begin{remark}\hfill
\begin{enumerate}
\item If, under the assumptions of the previous proposition, we further have $\mathrm{Sym}^{-1}(S_2)\subset \mathcal T^1$, then one can instead let $C' = 0$, i.e.,~the norm becomes $\| (f, A_g)\|_{QS} = \| f\|_{S_1} + C\| \widetilde{g}\|_{S_2}$. This follows from essentially the same proof, simply omitting the additional terms.
\item By the previous result, we can consider
\begin{align*}
\mathcal T^Q := \mathcal T \oplus \mathrm{Sym}^{-1}(\mathcal T),
\end{align*}
which is a star-symmetric, strongly modulation-invariant quantum Segal algebra upon being endowed with the norm
\begin{align*}
\| (f, A_g)\|_{\mathcal T^Q} = \| f\|_{\mathcal T} + \| g\|_{\mathcal T}.
\end{align*}
While we will not use this particular quantum Segal algebra in what follows, it appears to be a convenient framework to work within.
\item When we let $S_2 = \widetilde{S_1}$ in the above proposition, then simple computations show that the resulting quantum Segal algebra $QS = S_1 \oplus \mathrm{Sym}^{-1}(\widetilde{S_1})$ is a module over $\mathcal T^Q$.
\end{enumerate}
\end{remark}
\begin{corollary}\label{intersection_all_qsa}
Let $G$ denote the set of all graded quantum Segal algebras. Then
\[\mathcal{F}\Big(\bigcap_{S_1\oplus S_2\in G} S_1\oplus S_2\Big) = (C_{c}(\mathbb{R}^{2n})\cap \mathcal{F}_\sigma(L^1(\mathbb{R}^{2n}))) \oplus (C_{c}(\mathbb{R}^{2n})\cap \mathcal{F}_\sigma(L^1(\mathbb{R}^{2n}))) .\]
\end{corollary}
\begin{proof}
We know that the intersection of all Segal algebras satisfies
\[\mathcal{F}_{\sigma}\Big(\bigcap_{S\in \mathcal{S}} S\Big) = C_{c}(\mathbb{R}^{2n}) \cap \mathcal{F}_\sigma(L^1(\mathbb{R}^{2n})),\]
where $\mathcal{S}$ denotes the set of all Segal algebras.
By the proof of Proposition \ref{prop:segal_parts} we know that
\[\mathcal{F}\Big(\bigcap_{S_1\oplus S_2\in G} S_1\oplus S_2\Big) \supseteq (C_{c}(\mathbb{R}^{2n})\cap \mathcal{F}_\sigma(L^1(\mathbb{R}^{2n}))) \oplus (C_{c}(\mathbb{R}^{2n})\cap \mathcal{F}_\sigma(L^1(\mathbb{R}^{2n}))).\]
The remaining inclusion follows from Proposition \ref{Prop:construction}. Note that $\mathrm{Sym}^{-1}( S_2) \subset \mathcal T^1$ can always be enforced by intersecting $S_2$ with the Feichtinger algebra $\mathcal S_0$ which satisfies $\mathrm{Sym}^{-1}(\mathcal{S}_0) \subset \mathcal T^1$, see \cite[Thm.~3.5]{heil2008}.
\end{proof}
\subsection{The Quantum Feichtinger Algebra}
In this section we will describe a quantum Segal algebra that we call the \emph{quantum Feichtinger algebra}. It is a particular example of the quantum Segal algebras obtained by quantization of the Feichtinger algebra. We will show that this algebra is not an induced quantum Segal algebra. Recall that $\mathcal{S}_{0} \coloneqq \mathcal{S}_{0}(\mathbb{R}^{2n})$ denotes the Feichtinger algebra defined in Example~\ref{ex:Feichtinger}.
\begin{definition}
We say that a bounded linear operator $A$ on $\mathcal H$ is a \emph{Feichtinger operator} if $A = A_{g}$ for $g \in \mathcal{S}_{0}(\mathbb{R}^{2n})$. We denote the \emph{space of Feichtinger operators} as $\mathcal{S}_0(\mathcal H)$.
\end{definition}
Since $\mathcal S_0(\mathbb R^{2n}) \subset L^2(\mathbb R^{2n})$, there is no complication in talking about the integral kernel of a Feichtinger operator. As the discussion in \cite[Sec.~7.4]{Heil2003} shows, the integral kernel is contained in $\mathcal S_0(\mathbb R^{2n})$ if and only if the Weyl symbol is in $\mathcal S_0(\mathbb R^{2n})$, and, in this case, their norms in $\mathcal S_0(\mathbb R^{2n})$ are equivalent. Moreover, $A_{g} \in \mathcal{T}^{1}$ whenever $g \in \mathcal{S}_{0}(\mathbb{R}^{2n})$, which follows from \cite[Thm.~3.5]{heil2008}. Clearly, the space $\mathcal S_0(\mathcal H)$ is a Banach space under the norm
\[\|A_{g}\|_{\mathcal{S}_0(\mathcal{H})} \coloneqq \|g\|_{\mathcal{S}_{0}}, \qquad A_{g} \in \mathcal{S}_0(\mathcal{H}).\]
We will let $\textrm{Fin}(\mathcal{S}_{0})$ denote all finite-rank operators $F$ satisfying $\|F\|_{\textrm{Fin}(\mathcal{S}_{0})} < \infty$, where
\[\|F\|_{\textrm{Fin}(\mathcal{S}_{0})} \coloneqq \inf \left\{\sum_{i=1}^m|\alpha_{i}|\|\phi_{i}\|_{\mathcal{S}_{0}}\|\psi_{i}\|_{\mathcal{S}_{0}} : F = \sum_{i=1}^m\alpha_{i} \cdot \phi_{i} \otimes \psi_{i}\right\}.\]
The Feichtinger operators inherit several equivalent characterizations from the Feichtinger algebra.
\begin{theorem}\label{thm:equivalent_feichtinger_characterizations}
Let $A \in \mathcal{T}^{1}$. Then the following characterizations are equivalent:
\begin{enumerate}[1)]
\item\label{thm:equiv_feichtinger 1} The operator $A$ is a Feichtinger operator.
\item\label{thm:equiv_feichtinger 2} The integral kernel $K_{A}$ of $A$ is in the Feichtinger algebra.
\item\label{thm:equiv_feichtinger 3} The operator $A$ is in the closure of $\textrm{Fin}(\mathcal{S}_{0})$ with the norm $\|\cdot\|_{\textrm{Fin}(\mathcal{S}_{0})}$.
\item\label{thm:equiv_feichtinger 4} The operator $A$ satisfies $\mathcal{F}_{W}(A) \in \mathcal{S}_{0}$.
\item \label{thm:equiv_feichtinger 5}The modulation of $A$ satisfies $\int_{\mathbb R^{2n}} \| (\gamma_z A) \ast A\|_{L^1} \, \mathrm{d}z < \infty$.
\item \label{thm:equiv_feichtinger 6} The function $(z, z') \mapsto \tr(A (\alpha_z \gamma_{z'}A)^*)$ belongs to $L^1(\mathbb R^{2n} \times \mathbb R^{2n})$.
\end{enumerate}
Introduce the notation
\[ \| A\|_{B, \gamma}\coloneqq \int_{\mathbb R^{2n}} \| (\gamma_z A) \ast B\|_{L^1} \, \mathrm{d}z, \quad \| A\|_{B, \alpha\gamma} \coloneqq \| \tr(A (\gamma_{z'}\alpha_z B)^*)\|_{L^1(\mathbb R^{2n} \times \mathbb R^{2n})}, \]
where $B \in \mathcal{S}_0(\mathcal H)\setminus \{0\}$ is a fixed operator.
Then $\mathcal{S}_0(\mathcal H)$ can be given the following equivalent norms
\[\|A\|_{\mathcal{S}_0(\mathcal{H})} \cong \| \mathcal F_W(A)\|_{\mathcal S_0} \cong \| K_A\|_{\mathcal S_0} \cong \| A\|_{\textrm{Fin}(\mathcal S_0)} \cong \| A\|_{B, \gamma} \cong \| A\|_{B, \alpha\gamma}. \]
\end{theorem}
\begin{remark}
Note that $V_B(A)(z, z') \coloneqq \tr(A\alpha_z \gamma_{z'}(B))$ serves as an operator analogue to the short-time Fourier transform. To the authors' knowledge, the operator STFT is not present in the literature. It could serve as an interesting object for further studies.
\end{remark}
\begin{proof}
The equivalence between \ref{thm:equiv_feichtinger 1} and \ref{thm:equiv_feichtinger 2} has already been discussed above. The equivalence between \ref{thm:equiv_feichtinger 2} and \ref{thm:equiv_feichtinger 3} is a reformulation of the tensor factorization property of the Feichtinger algebra, see, e.g., \cite[Thm.~7.4]{jakobsen18}. Since for $A=A_g$ we have $\mathcal{F}_{\sigma}(\widetilde{g}) = \mathcal{F}_{W}(A_g)$ and $\mathcal{F}_{\sigma}$ is a bijective isometry on $\mathcal{S}_{0}$, the equivalence between \ref{thm:equiv_feichtinger 1} and \ref{thm:equiv_feichtinger 4} follows. For the equivalence between \ref{thm:equiv_feichtinger 1} and \ref{thm:equiv_feichtinger 5} we have
\[(\gamma_z A_{g})\ast A_{g} = A_{\gamma_{-z}g}\ast A_{g} = \widetilde{(\gamma_{-z}g) \ast g}.\]
Hence \ref{thm:equiv_feichtinger 5} holds if and only if
\[\int_{\mathbb R^{2n}} \| (\gamma_z g) \ast g\|_{L^1} \, \mathrm{d}z < \infty.\]
This is a reformulation of $g$ being in $\mathcal{S}_{0}$, see \cite[Thm.~4.7]{jakobsen18}.
Finally, to see that \ref{thm:equiv_feichtinger 4} and \ref{thm:equiv_feichtinger 6} are equivalent, write $f = \mathcal F_W(A)$ and $g = \mathcal F_W(B)$. Then
\begin{align*}
\tr(A (\alpha_z \gamma_{z'}B)^*) &= \langle A, \alpha_z \gamma_{z'}B\rangle_{\mathcal{T}^2} = \langle \mathcal{F}_W( A), \mathcal F_W(\alpha_z \gamma_{z'}B)\rangle_{L^2} = \langle f, \gamma_{z}\alpha_{z'} g\rangle_{L^2}\\
&= \int_{\mathbb R^{2n}} f(v) e^{-i\sigma(z, v)}\overline{g}(v-z')\,\mathrm{d}v.
\end{align*}
It is not hard to see that
\begin{align*}
\int_{\mathbb R^{2n}} \int_{\mathbb R^{2n}} \left| \int_{\mathbb R^{2n}} f(v) e^{-i\sigma(z, v)}\overline{f}(v-z')\,\mathrm{d}v \right| \,\mathrm{d}z\,\mathrm{d}z' = \int_{\mathbb R^{2n}} \int_{\mathbb R^{2n}} | V_f(f)(z',z) | \,\mathrm{d}z\,\mathrm{d}z',
\end{align*}
where $V_f(f)$ denotes the STFT.
The equivalences of the norms are straightforward and follow from the analogous results for the Feichtinger algebra.
\end{proof}
We can now combine the spaces $\mathcal{S}_{0}$ and $\mathcal{S}_0(\mathcal H)$ to form the following algebra.
\begin{definition}
The \emph{quantum Feichtinger algebra} $\mathcal{S}_{0}^{Q} \subset L^{1} \oplus \mathcal{T}^{1}$ is given by
\begin{equation*}
\mathcal{S}_{0}^{Q} \coloneqq \mathcal{S}_{0} \oplus \mathcal{S}_0(\mathcal H) = \left\{(f, A_g) \in L^{1} \oplus \mathcal{T}^{1} : f,g \in \mathcal{S}_{0}\right\}
\end{equation*}
with the norm
\[\|(f,A_g)\|_{\mathcal{S}_0^Q}=\|f\|_{\mathcal{S}_0}+\|g\|_{\mathcal{S}_0}.\]
\end{definition}
We say that a quantum Segal algebra $(QS, \|\cdot \|_{QS})$ is \emph{strongly modulation invariant} if $\| \gamma_z(f, A)\|_{QS} = \| (f, A)\|_{QS}$ for all $z \in \mathbb{R}^{2n}$ and $(f,A) \in QS$, and if the map $z \mapsto \gamma_z(f,A)$ is continuous from $\mathbb{R}^{2n}$ into $(QS, \|\cdot \|_{QS})$ for every $(f,A) \in QS$.
\begin{proposition}
The quantum Feichtinger algebra $\mathcal{S}_{0}^{Q}$ is a star-symmetric and strongly modulation-invariant quantum Segal algebra.
\end{proposition}
\begin{proof}
The fact that $\mathcal{S}^Q_0$ is a quantum Segal algebra follows from Proposition \ref{Prop:construction}.
The star-symmetry follows from the property
\[(f, A_{g})^{\ast} = (f^{\ast}, PA_{g}^{\ast}P) = (f^{\ast}, A_{g^*}).\]
Finally, the strong modulation invariance is immediate since for $z \in \mathbb{R}^{2n}$ and $f, g \in \mathcal{S}_{0}$ we have \[\gamma_{z}(f, A_{g}) = (\gamma_{z}(f), \gamma_{z}(A_{g})) = (\gamma_{z}(f), A_{\gamma_{-z}g}). \qedhere\]
\end{proof}
The quantum Feichtinger algebra is yet another example of a quantum Segal algebra which is not of the induced type:
\begin{proposition}
The quantum Feichtinger algebra is not an induced quantum Segal algebra.
\end{proposition}
\begin{proof}
If $\mathcal{S}_0^Q$ were an induced quantum Segal algebra, there would exist a regular trace class operator $A$ such that every Feichtinger operator $B$ could be written as
\[B=f*A,\]
for some $f\in \mathcal{S}_0$. By Theorem~\ref{thm:equivalent_feichtinger_characterizations} \ref{thm:equiv_feichtinger 4} together with the fact that $\mathcal{F}_\sigma(\mathcal{S}_0)=\mathcal{S}_0$, this is equivalent to the statement that every function $g\in \mathcal{S}_0$ can be written uniquely as
\[g=f\cdot\mathcal{F}_W(A),\]
for some $f\in \mathcal{S}_0$.
Equivalently, the map $\phi(f)= f/\mathcal{F}_W(A)$ is a bijection from $\mathcal{S}_0$ to itself. We will show that the surjectivity of $\phi$ contradicts the fact that $\mathcal{F}_W(A)\in L^2(\mathbb{R}^{2n})$.
Consider the function
\[h(z)=\prod_{j=1}^n\frac{1}{1+|x_j|^2}\frac{1}{1+|\xi_j|^2}\in \mathcal{S}_0, \qquad z=(x,\xi)\in \mathbb{R}^{2n}.\]
Then consider $\phi^4(h)=h/(\mathcal{F}_W(A))^4\in \mathcal{S}_0$.
Since $\mathcal{S}_0\subset C_0(\mathbb{R}^{2n})$ there exists a $C>0$ such that for all $z\in \mathbb{R}^{2n}$ we have
\[|h(z)|/|\mathcal{F}_W(A)(z)|^{4}\le C,\]
or equivalently
\[\sqrt{|h(z)|}\le C^{1/2}|\mathcal{F}_W(A)(z)|^{2}.\]
However, $\sqrt{|h(z)|}\not \in L^1(\mathbb{R}^{2n})$, implying that $\mathcal{F}_W(A)\not \in L^2(\mathbb{R}^{2n})$, a contradiction.
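For completeness, the divergence of $\sqrt{|h|}$ can be made explicit: by Tonelli's theorem the integral factorizes into one-dimensional integrals,
\[\int_{\mathbb{R}^{2n}}\sqrt{|h(z)|}\,\mathrm{d}z=\prod_{j=1}^{n}\int_{\mathbb{R}}\frac{\mathrm{d}x_{j}}{\sqrt{1+x_{j}^{2}}}\int_{\mathbb{R}}\frac{\mathrm{d}\xi_{j}}{\sqrt{1+\xi_{j}^{2}}}=\infty,\]
since each one-dimensional integrand decays only like $1/|x|$, which is not integrable at infinity.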
\end{proof}
\begin{remark}
The Feichtinger algebra $\mathcal S_0(\mathbb R^{n})$ is well-known for being the smallest strongly modulation-invariant Segal algebra. The same is indeed true for graded quantum Segal algebras: Let $(X,\|\cdot \|_X)$ be a Banach space where $X \subset L^1 \oplus \mathcal T^1$ is a graded subspace, which is both strongly shift- and strongly modulation-invariant. Further, assume that there is at least one element $(f, A) \in X$ with both $f \neq 0$ and $A \neq 0$ and $(f, A) \in \mathcal{S}_0^Q$. By Corollary~\ref{intersection_all_qsa}, such elements always exist for $X$ a graded quantum Segal algebra. Then, we endow $\mathcal F(X) \subset C_0(\mathbb R^{2n}) \oplus C_0(\mathbb R^{2n})$ with the norm $\| (\mathcal F_\sigma(f), \mathcal F_W(A))\| = \| (f, A)\|_X$. This turns both components of $\mathcal F(X)$ into strongly shift- and strongly modulation-invariant spaces, each containing at least one nontrivial element of $\mathcal S_0(\mathbb R^{2n})$. By \cite[Thm.~7.3]{jakobsen18} we thus have $\mathcal S_0(\mathbb R^{2n}) \oplus \mathcal S_0(\mathbb R^{2n}) \subset \mathcal F(X)$.
\end{remark}
One of the useful applications of the Feichtinger algebra is as a space of test functions. Its dual space $\mathcal{S}_0'(\mathbb R^{2n})$ is sometimes referred to as the space of \emph{mild distributions}. It contains, in addition to all $L^p$ spaces, some distributions such as the point measures $\delta_x$ and Dirac combs $\sum_{k \in \mathbb Z^{d}} \delta_k$. Having defined the Feichtinger operators, we can also consider the Banach space dual of $\mathcal S_0(\mathcal H)$, which we will denote by $\mathcal S_0'(\mathcal H)$. Since the space of Schwartz operators $\mathcal S(\mathcal H)$ embeds continuously into $\mathcal S_0(\mathcal H)$, the dual $\mathcal S_0'(\mathcal H)$ is continuously embedded into $\mathcal S'(\mathcal H)$. As is the case for the Feichtinger operators, the space $\mathcal S_0'(\mathcal H)$ can be equipped with many equivalent norms. As an example, define the norm
\begin{align*}
\| A_g\|_{\mathcal S_0'} \coloneqq \| g \|_{\mathcal S_0'(\mathbb R^{2n})}.
\end{align*}
Here, $\| \cdot\|_{\mathcal S_0'(\mathbb R^{2n})}$ is any of the equivalent norms of $\mathcal S_0'(\mathbb R^{2n})$.
We will denote by $\langle \cdot, \cdot\rangle_{\mathcal{S}_0',\mathcal{S}_0}$ the sesquilinear pairing of an element in $\mathcal{S}_0'(\mathcal{H}) $ and $\mathcal{S}_0(\mathcal{H}) $.
The following statement is, in principle, the dual statement of Theorem \ref{thm:equivalent_feichtinger_characterizations}.
\begin{theorem}
Let $A \in \mathcal S'(\mathcal H)$ and let $B \in \mathcal{S}_0(\mathcal H)\setminus\{0\}$ be any fixed Feichtinger operator. Introduce the notation
\[ \| A\|_{\infty, B, \gamma}\coloneqq \sup_{z \in \mathbb R^{2n}} \| (\gamma_z A) \ast B\|_{L^1}, \quad \| A\|_{\infty, B, \gamma\alpha} \coloneqq \sup_{z,z' \in \mathbb R^{2n}} |\langle A, \gamma_{z'}\alpha_z(B)\rangle_{\mathcal{S}_0',\mathcal{S}_0}|. \]
Then, the following characterizations are equivalent:
\begin{enumerate}[1)]
\item\label{thm:equiv_mild 1} The operator $A$ can be written as $A = A_{g}$ for some $g \in \mathcal{S}_{0}'(\mathbb{R}^{2n})$.
\item\label{thm:equiv_mild 2} There exists an element $K_{A}\in \mathcal{S}_{0}'(\mathbb{R}^{2n})$ such that $\langle Af_2, f_1\rangle_{\mathcal{S}_0',\mathcal{S}_0}= \langle K_A, f_1\otimes f_2\rangle_{\mathcal{S}_0',\mathcal{S}_0}$ for all $f_1, f_2\in \mathcal{S}_0(\mathbb{R}^n)$.
\item\label{thm:equiv_mild 3} The Fourier-Weyl transform of $A$ satisfies $\mathcal{F}_{W}(A) \in \mathcal{S}_{0}'(\mathbb{R}^{2n})$.
\item\label{thm:equiv_mild 4} $A$ satisfies $\| A\|_{\infty, B, \gamma} < \infty$.
\item\label{thm:equiv_mild 5} $A$ satisfies $\| A\|_{\infty, B, \gamma\alpha} < \infty$.
\end{enumerate}
Moreover, the following are equivalent norms on $\mathcal{S}_0'(\mathcal H)$:
\[\|A\|_{\mathcal{S}_0'} \cong \| \mathcal F_W(A)\|_{\mathcal S_0'} \cong \| K_A\|_{\mathcal S_0'} \cong \| A\|_{\infty, B, \gamma} \cong \| A\|_{\infty, B, \gamma\alpha}. \]
\end{theorem}
We omit the proof, as it is very similar to that of Theorem \ref{thm:equivalent_feichtinger_characterizations}, replacing characterizations of $\mathcal S_0(\mathbb R^{2n})$ with those of $\mathcal S_0'(\mathbb R^{2n})$, cf.\ \cite{jakobsen18}.
Note that $\mathcal L(\mathcal H)$ continuously embeds into $\mathcal S_0'(\mathcal H)$. Further, by the kernel theorem for $\mathcal S_0(\mathbb R^n)$, cf.\ \cite[Thm.~9.3]{jakobsen18} and characterization \ref{thm:equiv_mild 2}, $\mathcal S_0'(\mathcal H)$ agrees exactly with those elements of $\mathcal S'(\mathcal H)$ which extend to bounded linear operators from $\mathcal S_0(\mathbb R^n)$ to $\mathcal S_0'(\mathbb R^n)$. Hence, among all the operators given by kernels/symbols in $\mathcal S'$, characterizations \ref{thm:equiv_mild 4} and \ref{thm:equiv_mild 5} from the theorem yield characterizations through quantum harmonic analysis methods of those which are well-behaved in the Feichtinger setting.
\begin{remark}
We now briefly discuss how key properties of the mild distributions carry over to $\mathcal S_0'(\mathcal H)$ and $\mathcal S_0'(\mathbb R^{2n}) \oplus \mathcal S_0'(\mathcal H)$. First note that $\mathcal S_0'(\mathbb R^{2n}) \oplus \mathcal S_0'(\mathcal H)$ is not an algebra, but it is a module over $\mathcal S_0(\mathbb R^{2n}) \oplus \mathcal S_0(\mathcal H)$.
\begin{enumerate}[1)]
\item The space $\mathcal{S}_{0}^{Q}$ can through the Weyl quantization be identified with $\mathcal{S}_{0}(\mathbb{R}^{2n}) \oplus \mathcal{S}_{0}(\mathbb{R}^{2n})$ as a Banach space. As such, the Banach space dual $\mathcal{S}_{0}^{Q}{}'\coloneqq(\mathcal{S}_{0}^{Q})'$ of $\mathcal{S}_{0}^{Q}$ can be identified with $\mathcal{S}_{0}'(\mathbb{R}^{2n}) \oplus \mathcal{S}_{0}'(\mathbb{R}^{2n})$.
Hence bounded linear operators
\[A\colon \mathcal{S}_{0}^{Q} \to \mathcal{S}_{0}^{Q}{}'\]
can be decomposed into four operators $A_{ij}\colon \mathcal{S}_{0}(\mathbb{R}^{2n}) \to \mathcal{S}_{0}'(\mathbb{R}^{2n})$ for $i,j =1,2$. Each of these operators can be represented with an integral kernel $K_{ij} \in \mathcal{S}_{0}'(\mathbb{R}^{4n})$ by \cite[Thm.~9.3]{jakobsen18}.
Thus for $f_{1},f_{2},g_{1},g_{2} \in \mathcal{S}_{0}(\mathbb{R}^{2n})$ we can write $A$ as
\begin{multline*}
\langle A(f_{1}, A_{g_{1}}), (f_{2}, A_{g_{2}}) \rangle_{\mathcal{S}_{0}^{Q}{}', \mathcal{S}_{0}^{Q}} = \langle K_{11}, f_{2} \otimes f_{1} \rangle_{\mathcal{S}_0',\mathcal{S}_0} + \langle K_{21}, f_{2} \otimes g_{1} \rangle_{\mathcal{S}_0',\mathcal{S}_0}\\
+ \langle K_{12}, g_{2} \otimes f_{1} \rangle_{\mathcal{S}_0',\mathcal{S}_0} + \langle K_{22}, g_{2} \otimes g_{1} \rangle_{\mathcal{S}_0',\mathcal{S}_0}.
\end{multline*}
From here, it is not difficult to formulate a kernel theorem for bounded linear maps from $\mathcal{S}_0^Q$ to $\mathcal{S}_0^{Q}{}'$. We leave the details to the interested reader.
\item Recall that elements from $\mathcal S_0(\mathbb R^n)$ satisfy Poisson's summation formula. Indeed, there is a more general form with summation over different lattices, but for simplicity we stick to the following basic formulation:
\begin{align*}
\sum_{k \in \mathbb Z^n} f(k) =(2\pi)^{n/2} \sum_{k \in 2\pi \mathbb{ Z}^n} \mathcal F(f)(k).
\end{align*}
In even dimensions, the right-hand side can of course be replaced by the symplectic Fourier transform. For $A = A_g \in \mathcal S_0(\mathcal{H})$, we therefore have
\begin{align*}
\sum_{k \in 2\pi \mathbb Z^{2n}} \mathcal F_W(A_g)(k) = \sum_{k \in 2\pi \mathbb Z^{2n}} \mathcal F_\sigma(g)(k) = \frac{1}{(2\pi)^n}\sum_{k \in \mathbb Z^{2n}} g(k),
\end{align*}
which can be interpreted as a Poisson summation formula for quantum harmonic analysis. By Theorem \ref{thm:equivalent_feichtinger_characterizations} \ref{thm:equiv_feichtinger 2} we know that every $A \in \mathcal S_0(\mathcal{H})$ is of the form \[Af(s) = \int_{\mathbb R^n} f(t) K_A(s,t)\,\mathrm{d}t\]
for $K_A \in \mathcal{S}_0$. For $(x,\xi) \in \mathbb R^{2n}$, it is not hard to verify that the integral kernel of $AW_{(x, \xi)}$ is now
\begin{equation*}
K_{AW_{(x, \xi)}}(s, t) = e^{i\xi \cdot t+i\xi \cdot x/2} K_A(s,t+x).
\end{equation*}
By \cite[Cor.~3.15]{feichtinger_jakobsen2022} we have
\begin{align*}
\mathcal F_W(A)(x, \xi) = \tr(AW_{(x, \xi)}) = \int_{\mathbb R^n} e^{i\xi \cdot s+i\xi \cdot x/2}K_A(s,s+x)\,\mathrm{d}s.
\end{align*}
On the other hand, if $A=A_g$ then $g$ can be computed by
\begin{align*}
g(x,\xi) = \int_{\mathbb R^n} K_A\left(x+\frac{y}{2}, x-\frac{y}{2}\right) e^{- i \xi \cdot y}\,\mathrm{d}y \in \mathcal S_0(\mathbb R^{2n}).
\end{align*}
Hence, the above Poisson summation formula shows
\begin{align*}
\sum_{k \in 2\pi\mathbb Z^{2n}} \mathcal F_W(A_g)(k)&= \sum_{(k_1, k_2) \in \mathbb Z^{2n}} \int_{\mathbb R^n} e^{2\pi i k_2 \cdot s+2\pi^2 ik_1k_2}K_A(s,s+2\pi k_1)\,\mathrm{d}s=\frac{1}{(2\pi )^n} \sum_{k \in \mathbb Z^{2n}} g(k) \\
&= \frac{1}{(2\pi )^n}\sum_{(k_1, k_2) \in \mathbb Z^{2n}} \int_{\mathbb R^n} K_A(k_1+\frac{y}{2}, k_1-\frac{y}{2}) e^{- i k_2 \cdot y}\,\mathrm{d}y.
\end{align*}
A similar formula can be obtained by expressing the integral kernel in terms of the symbol.
\item Another point of view on the Poisson summation formula is the following: It is
\begin{align*}
\frac{1}{(2\pi)^n} \sum_{k \in \mathbb Z^{2n}} \langle \delta_k, f\rangle = \frac{1}{(2\pi)^n} \sum_{k \in \mathbb Z^{2n}} f(k) = \sum_{k \in 2\pi \mathbb Z^{2n}} \widehat{f}(k) = \sum_{k \in 2\pi\mathbb Z^{2n}} \langle \delta_k, \widehat{f}\rangle,
\end{align*}
where $\sum_{k \in \mathbb Z^{2n}} \delta_k$ and $\sum_{k \in 2\pi\mathbb Z^{2n}} \widehat{\delta_k}$ are in $\mathcal S_0'(\mathbb R^{2n})$. Hence
\begin{align*}
\frac{1}{(2\pi)^n} \sum_{k \in \mathbb Z^{2n}} \alpha_k(\delta_0) = \sum_{k \in 2\pi\mathbb Z^{2n}} \mathcal F_\sigma(\alpha_k(\delta_0)) = \sum_{k \in 2\pi \mathbb Z^{2n}} \gamma_k \mathcal F_\sigma(\delta_0) = \sum_{k \in 2\pi \mathbb Z^{2n}} \frac{1}{(2\pi)^n}\gamma_k(1).
\end{align*}
Applying the inverse Fourier-Weyl transform, using that $\mathcal F_W^{-1}(\delta_0) = \mathrm{Id}$ and $\mathcal F_W^{-1}(1) = 2^nP$, we get
\begin{align*}
\sum_{k \in \mathbb Z^{2n}}\mathcal F_W^{-1} (\alpha_k(\delta_0)) &= \sum_{k \in \mathbb Z^{2n}} \gamma_k \mathcal F_W^{-1}(\delta_0) = \sum_{k \in \mathbb Z^{2n}} \gamma_k(\mathrm{Id}) = \sum_{k \in \mathbb Z^{2n}} W_{k},\\
\sum_{k \in 2\pi \mathbb Z^{2n}} \mathcal F_W^{-1}(\gamma_k(1)) &= \sum_{k \in 2\pi \mathbb Z^{2n}} \alpha_k \mathcal F_W^{-1}(1) = 2^n\sum_{k \in 2\pi \mathbb Z^{2n}} \alpha_k(P) = 2^n\sum_{k \in2\pi \mathbb Z^{2n}} W_{2k}P.
\end{align*}
Hence, we obtain in $\mathcal S_0'(\mathcal{H})$:
\begin{align*}
2^n \sum_{k \in 2\pi \mathbb Z^{2n}} W_{2k}P = \sum_{k \in \mathbb Z^{2n}} W_{k}.
\end{align*}
Applying this to some $A \in \mathcal S_0(\mathcal{H})$ gives the somewhat ominous result:
\begin{align*}
\sum_{k \in \mathbb Z^{2n}} \mathcal F_W(A)(k) = 2^n\sum_{k \in 2\pi \mathbb Z^{2n}} \mathcal F_W(AP)(2k).
\end{align*}
This equality can conceptually be explained a little better as follows: Recall that the parity operator is both symmetric and unitary, hence $P$ has spectrum contained in $\{ -1, 1\}$. The spectral decomposition of $\mathcal{H}$ is given by $\mathcal{H} = L_{\mathrm{even}}^2 \oplus L_{\mathrm{odd}}^2$, the decomposition into even and odd functions. Now, $P|_{L_{\mathrm{even}}^2} = \mathrm{Id}$ and $P|_{L_{\mathrm{odd}}^2} = -\mathrm{Id}$. We define a unitary square root $V$ of $P$ by letting $V|_{L_{\mathrm{even}}^2} = \mathrm{Id}$ and $V|_{L_{\mathrm{odd}}^2} = i\mathrm{Id}$. Then $VW_{(x,\xi)} = W_{(\xi, -x)}V$ and $W_{(x,\xi)} V = VW_{(-\xi, x)}$. Thus, we get
\begin{align*}
\sum_{k \in 2\pi \mathbb Z^{2n}} \mathcal F_W(AP)(2k) = \sum_{k_1,k_2 \in 2\pi \mathbb Z^{n}} \mathcal F_W(V A V)(2k_2,-2k_1) = \sum_{k \in 4\pi \mathbb Z^{2n}} \mathcal F_W(V A V)(k).
\end{align*}
Since passing from $A$ to $PAP$ plays the role of a reflection (analogously to passing from $f(z)$ to $f(-z)$, $z \in \mathbb C$), passing from $A$ to $VAV$ can be thought of as rotating the argument by $\frac{\pi}{2}$ (analogously to passing from $f(z)$ to $f(iz)$). Recalling now that taking the Fourier transform can be thought of as rotating the phase space by $\frac{\pi}{2}$, it is somewhat more reasonable that the Poisson summation formula should relate $A$ and $V A V$.
\end{enumerate}
\end{remark}
\subsection{Miscellaneous examples}
We have thus far seen examples of quantum Segal algebras that are not ideals of $L^1 \oplus \mathcal T^1$, and will now describe a class of examples that are ideals. They are natural analogues of the Segal algebras from Example \ref{ex:segal} \ref{ex:segal E3}.
\begin{example}Let $\mu$ be any Radon measure on $\mathbb R^{2n}$. Define the set $S_p^\mu$ by
\begin{align*}
S_p^\mu = \{ (f, A) \in L^1 \oplus \mathcal T^1: ~\| \widehat{f}\|_{L^p(\mathbb{R}^{2n},\mu)} + \| \widehat{A}\|_{L^p(\mathbb{R}^{2n},\mu)} < \infty\},
\end{align*}
with the norm
\begin{align*}
\| (f, A)\|_{S_p^\mu} = \| f\|_{L^1} + \| \widehat{f}\|_{L^p(\mathbb{R}^{2n},\mu)} + \| A\|_{\mathcal T^1} + \| \widehat{A}\|_{L^p(\mathbb{R}^{2n},\mu)}.
\end{align*}
Since $S_p^\mu$ contains any $(f, A) \in L^1 \oplus \mathcal T^1$ such that $\widehat{f}$ and $\widehat{A}$ both have compact support, it is a dense subspace of $L^1 \oplus \mathcal T^1$. Further, it is not hard to verify that $S_p^\mu$ is both a graded quantum Segal algebra and an ideal of $L^1 \oplus \mathcal T^1$. In the case that $\mu$ is a finite measure we have that $S_p^\mu = L^1 \oplus \mathcal T^1$, but for appropriately infinite measures $S_p^\mu$ is a strict subset.
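As a simple concrete instance, one may take $\mu$ to be the Lebesgue measure on $\mathbb{R}^{2n}$ and $p = 1$, so that
\begin{align*}
S_1^\mu = \{ (f, A) \in L^1 \oplus \mathcal T^1:~ \widehat{f} \in L^1(\mathbb{R}^{2n}) \text{ and } \widehat{A} \in L^1(\mathbb{R}^{2n})\}.
\end{align*}
This is a strict subset of $L^1 \oplus \mathcal T^1$: already in the function component there are $f \in L^1(\mathbb{R}^{2n})$ whose Fourier transform is not integrable, such as indicator functions of cubes.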
Note that, as in \cite[(vii), p.~26]{reiter71}, these $S_p^\mu$ also have the following theoretical value: Given $A \in \mathcal T^1$ such that $\widehat{A}$ is not compactly supported, then one can explicitly write down a $\mu$ such that the graded quantum Segal algebra $S_1^\mu$ does not contain $A$.
\end{example}
We note that quantum Segal algebras, which are ideals of $L^1 \oplus \mathcal T^1$, are abstract Segal algebras in the sense of Burnham \cite[Def.~1.1]{Burnham1972}. In particular, such quantum Segal algebras have the same closed ideals as $L^1 \oplus \mathcal T^1$. We refrain from giving the precise result here and only refer to \cite[Thm.~1.1]{Burnham1972}. Instead, we want to give another result on such quantum Segal algebras, which is a close relative to our Corollary \ref{intersection_all_qsa}.
\begin{proposition}\label{intersection_all_qsa_ideals}
Let $G$ denote the set of all graded quantum Segal algebras which are ideals of $L^1(\mathbb R^{2n}) \oplus \mathcal T^1(\mathcal H)$. Then
\[\mathcal{F}\Big(\bigcap_{S_1\oplus S_2\in G} S_1\oplus S_2\Big) = (C_{c}(\mathbb{R}^{2n})\cap \mathcal{F}_\sigma(L^1(\mathbb{R}^{2n}))) \oplus (C_{c}(\mathbb{R}^{2n})\cap \mathcal{F}_W(\mathcal T^1(\mathcal H))) .\]
\end{proposition}
\begin{proof}
As described above, the algebras $S_1^\mu$ can be utilized to show the inclusion ``$\subseteq$''. For the reverse inclusion, note that for any $f \in L^1(\mathbb R^{2n})$ with $\widehat{f} \in C_c(\mathbb R^{2n})$ we have that $(f, 0)$ is contained in the intersection by Corollary \ref{intersection_all_qsa}. Let $A \in \mathcal T^1$ such that $\widehat{A}$ is compactly supported. Then, there exists $g \in \mathcal S(\mathbb R^{2n})$ such that $\widehat{g}$ is compactly supported and $\widehat{g} \equiv 1$ on the support of $\widehat{A}$. In particular, $(g, 0)$ is contained in every graded quantum Segal algebra. Thus, the convolution $(g, 0) \ast (0, A)$ is contained in every graded quantum Segal algebra which is an ideal in $L^1 \oplus \mathcal T^1$. By comparing Fourier transforms, we find that $(g, 0) \ast (0, A) = (0, A)$. Therefore, the element $(0, A)$ is contained in every graded quantum Segal algebra which is an ideal of $L^1 \oplus \mathcal T^1$.
\end{proof}
We still owe the reader an example justifying the special treatment of graded quantum Segal algebras, i.e., showing that not every quantum Segal algebra is graded. Here it is:
\begin{example}\label{qsa:notgraded}
Assume that $S_1$ and $S_2$ are two Segal algebras contained in the Feichtinger algebra $\mathcal S_0$ where there exist $f_1\in S_1$ and $g_1\in S_2$ such that $f_1\not \in S_2$ and $g_1\not\in S_1$.
Let
\[S_T=\{(f,A_{\tilde{g}}): f+g \in S_1, ~f-g \in S_2\}\]
with the norm
\[\|(f,A_{\tilde{g}})\|_{S_T}\coloneqq \|f+g\|_{S_1}+\|f-g\|_{S_2}.\]
Then
\begin{align*}
\|(f,A_{\tilde{g}})\ast(h,A_{\tilde{j}})\|_{S_T} &= \|(f+g)\ast(h+j)\|_{S_1}+\|(f-g)\ast(h-j)\|_{S_2}\\
&\le \|(f+g)\|_{S_1}\|(h+j)\|_{S_1}+\|(f-g)\|_{S_2}\|(h-j)\|_{S_2}\\
&\le \|(f,A_{\tilde{g}})\|_{S_T}\|(h,A_{\tilde{j}})\|_{S_T},
\end{align*}
showing that $S_T$ is a normed algebra. Further, it is not hard to see that it is complete and satisfies \ref{label:QS3} and \ref{label:QS4}. The property \ref{label:QS1} follows from the fact that $C_0(\mathbb{R}^{2n})\oplus C_0(\mathbb{R}^{2n})\subset \mathcal{F}(S_T)$.
If we set $f+g=f_1$ and $f-g=g_1$, we obtain an element $(f, A_{\tilde{g}}) \in S_T$ for which $(f,0)$ cannot be in $S_T$: indeed, $f = \frac{1}{2}(f_1+g_1)$, and since $f_1 \in S_1$ while $g_1 \notin S_1$, we get $f \notin S_1$.
We give such an example of two Segal algebras $S_1$, $S_2$ in the case $n=1$, and leave it to the reader to carry out the necessary adaptations for obtaining such subalgebras of $\mathcal S_0(\mathbb R^{2n})$.
Let $S_1=\mathcal S_0(\mathbb R^2) \cap \mathcal{F}^{-1}(L^2(\mathbb{R}^2, (1+x^2)^{10}))$ and $S_2=\mathcal S_0(\mathbb R^2) \cap \mathcal{F}^{-1}(L^2(\mathbb{R}^2, e^{2x}))$. By Example \ref{ex:segal}, these are intersections of Segal algebras, hence themselves Segal algebras.
Consider the function \[f_a(x,\xi)=\frac{1}{x^2+a^2}\frac{1}{\xi^2+a^2}\]
with Fourier transform
\[\mathcal{F}(f_a)(x,\xi)=\frac{\pi}{2a^2}e^{-a(|x|+|\xi|)}.\]
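The formula for $\mathcal{F}(f_a)$ can be checked from the classical one-dimensional identity
\[\int_{\mathbb{R}}\frac{e^{-ix\omega}}{x^{2}+a^{2}}\,\mathrm{d}x=\frac{\pi}{a}e^{-a|\omega|}, \qquad a>0:\]
since $f_a$ is a tensor product in the variables $x$ and $\xi$, the two one-dimensional transforms multiply, and the unitary normalization of $\mathcal{F}$ on $\mathbb{R}^2$ contributes a factor $(2\pi)^{-1}$, giving the constant $\frac{1}{2\pi}\cdot\frac{\pi}{a}\cdot\frac{\pi}{a}=\frac{\pi}{2a^{2}}$.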
Using that $f_a\in \mathcal S_0(\mathbb R^{2})$ for all $a>0$, we observe that $f_1\in S_1$ but $f_1\not \in S_2$. On the other hand, the function \[g(x,\xi)=\frac{1-e^{-x^2}}{1+x^2}\frac{1-e^{-\xi^2}}{1+\xi^2}H(-x)H(-\xi),\] with $H$ the Heaviside function, is a tensor product of two functions on $\mathbb{R}$, each of which lies in $L^1(\mathbb{R})$ together with its first two distributional derivatives. Hence, its Fourier transform is in $L^1(\mathbb{R}^2)$ as well. We know that $\mathcal S_0(\mathbb R^2)\ast L^1(\mathbb R^2) \subset \mathcal S_0(\mathbb R^2)$, hence \[g_1=\mathcal{F}^{-1}(f_1)\ast \mathcal{F}^{-1}\left(g\right)\in \mathcal S_0(\mathbb R^2).\]
Then $g_1\in S_2$ while $g_1\not \in S_1$.
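Both claims can be read off on the Fourier side: we have $\mathcal{F}(g_1)=c\,f_1 g$ for a constant $c>0$ depending only on the normalization of $\mathcal{F}$. This product is supported in the quadrant $\{x\le 0,\ \xi\le 0\}$, where $e^{2x}\le 1$, so
\[\|f_1 g\|_{L^2(\mathbb{R}^2, e^{2x})}\le \|f_1 g\|_{L^2(\mathbb{R}^2)}<\infty,\]
giving $g_1\in S_2$. On the other hand, $|f_1 g|^2$ decays only like $|x|^{-8}$ as $x\to-\infty$, so against the weight $(1+x^2)^{10}$ the integral diverges, whence $g_1\not\in S_1$.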
\end{example}
\bibliographystyle{abbrv}
\section{Introduction}
Let $\mu$ be a probability on $G:={\rm GL}_d(\mathbb{R})$, $d \geq 2$. Then, $\mu$ induces a random walk on $G$ by letting $$S_n: = g_n \cdots g_1,$$ where $n \geq 1$ and the $g_j$'s are independent and identically distributed random elements of $G$ with law given by $\mu$. The study of these random processes and associated limit theorems has a rich history, starting from seminal works of Furstenberg and Kesten \cite{furstenberg-kesten,furstenberg} leading to important progress since then. This topic is still very active, with important new results and techniques being recently discovered. We refer to \cite{bougerol-lacroix,benoist-quint:book} for an overview. See also below for some recent results.
We consider the standard linear action of $G$ on $\mathbb{R}^d$ and the induced action on the real projective space $\P^{d-1}$. Denote by $\| v \|$ the standard euclidean norm of $v \in \mathbb{R}^d$ and, for $g \in G$, let $\|g\|$ be the associated operator norm.
In order to study the random matrices $S_n$, it is useful to look at associated real-valued random variables. An important function in this setting is the \textit{norm cocycle}, defined by $$\sigma(g,x) = \log \frac{\norm{gv}}{\norm{v}}, \quad \text{for }\,\, v \in \mathbb{R}^d \setminus \{0\}, \, x = [v] \in \P^{d-1} \, \text{ and } g \in G.$$
The cocycle relation $\sigma(g_2g_1,x) = \sigma(g_2,g_1 \cdot x) + \sigma(g_1,x)$ can be used to effectively apply methods such as the spectral theory of complex transfer operators (see Subsection \ref{subsec:markov-op}) and martingale approximation \cite{benoist-quint:CLT}. Some other significant quantities are: the norm $\|g\|$, the spectral radius $\rho(g)$ and the coefficients of $g$, the latter being the object of this article.
The goal of this work is to obtain two new limit theorems for the coefficients of $S_n$ as $n$ tends to infinity. For $v \in \mathbb{R}^d$ and $f \in (\mathbb{R}^d)^*$, its dual space, we denote by $\langle f,v \rangle := f(v)$ their natural coupling. Observe that the $(i,j)$-entry of a matrix $g$ is given by $\langle e_i^* , g e_j \rangle$, where $(e_k)_{1\leq k \leq d}$ (resp.\ $(e^*_k)_{1\leq k \leq d}$) denotes the canonical basis of $\mathbb{R}^d$ (resp.\ $(\mathbb{R}^d)^*$). Our results will apply, more generally, to the random variables of the form
$$\log{ |\langle f, S_n v\rangle | \over \norm{f} \norm{v}},$$
with $v \in \mathbb{R}^d \setminus \{0\}$ and $f \in (\mathbb{R}^d)^* \setminus \{0\}$.
In order to obtain meaningful results, some standard assumptions on the measure $\mu$ need to be made. Recall that a matrix $g\in G$ is said to be \textit{proximal} if it admits a unique eigenvalue of maximal modulus which is moreover of multiplicity one. Let $\Gamma_\mu$ be the smallest closed semigroup containing the support of $\mu$. We assume that $\Gamma_\mu$ is \textit{proximal}, that is, it contains a proximal matrix, and \textit{strongly irreducible}, that is, the action of $\Gamma_\mu$ on $\mathbb{R}^d$ does not preserve a finite union of proper linear subspaces. It is well-known that, under the above conditions, $\mu$ admits a unique stationary probability measure on $\P^{d-1}$, see Section \ref{sec:prelim}.
We will also assume that $\mu$ has a \textit{finite exponential moment}, that is, $\int_{G} N(g)^\varepsilon {\rm d}\mu(g) < \infty$ for some $\varepsilon>0$, where $N(g):=\max\big( \norm{g},\norm{g^{-1}} \big)$.
\medskip
Our first result is a Berry-Esseen bound with rate $O(1/ \sqrt n)$ for the coefficients, which is a quantitative version of the Central Limit Theorem (CLT). For the CLT for the coefficients without convergence rate, see \cite{benoist-quint:book}. The \textit{first Lyapunov exponent} of $\mu$ is, by definition, the number
$$\gamma := \lim_{n \to \infty} \frac1n \int \log\|g_n \cdots g_1\| \, {\rm d} \mu(g_1) \cdots {\rm d} \mu(g_n).$$
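The limit defining $\gamma$ can be probed by simulation. The sketch below (illustrative only; the parameters and the renormalisation scheme are ours) estimates $\gamma$ by Monte Carlo, renormalising the running product at each step to avoid overflow, and sanity-checks the degenerate case $\mu=\delta_M$, where the limit reduces to the logarithm of the spectral radius of $M$ by Gelfand's formula.

```python
import numpy as np

rng = np.random.default_rng(1)

def lyapunov_mc(matrices, n=500, trials=10):
    """Monte Carlo estimate of gamma = lim (1/n) E log ||g_n ... g_1||,
    for mu uniform on `matrices`.  The running product is renormalised at
    every step and the discarded log-norms are accumulated, so the value
    of (1/n) log ||S_n|| is computed exactly."""
    d = matrices[0].shape[0]
    acc = 0.0
    for _ in range(trials):
        prod = np.eye(d)
        log_norm = 0.0
        for _ in range(n):
            prod = matrices[rng.integers(len(matrices))] @ prod
            s = np.linalg.norm(prod, 2)   # operator (spectral) norm
            log_norm += np.log(s)
            prod /= s
        acc += log_norm / n
    return acc / trials

# Sanity anchor: for the Dirac mass mu = delta_M one has
# gamma = lim (1/n) log ||M^n|| = log(spectral radius of M).
M = np.array([[2.0, 1.0], [1.0, 1.0]])
gamma_hat = lyapunov_mc([M], n=400, trials=1)
gamma_true = np.log(np.max(np.abs(np.linalg.eigvals(M))))
```

Since $M$ above is symmetric, $\|M^n\|=\rho(M)^n$ and the estimate agrees with $\log\rho(M)$ up to floating-point error.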
\begin{mainthm}\label{thm:BE-coeff}
Let $\mu$ be a probability measure on ${\rm GL}_d(\mathbb{R})$. Assume that $\mu$ has a finite exponential moment and that $\Gamma_\mu$ is proximal and strongly irreducible. Let $\gamma$ be the associated first Lyapunov exponent. Then, there is a constant $C>0$ and a real number $\varrho > 0$, such that, for any $x:=[v]\in \P^{d-1}, y:=[f]\in (\P^{d-1})^*$, any interval $J\subset\mathbb{R}$, and all $n\geq 1$, we have
$$\bigg| \mathbf P \Big( \log{ |\langle f, S_n v\rangle | \over \norm{f} \norm{v}} - n \gamma\in \sqrt n J \Big) - \frac{1}{\sqrt{2 \pi} \, \varrho} \int_{J} e^{-\frac{s^2}{2 \varrho^2}} \, {\rm d} s \bigg| \leq \frac{C}{\sqrt n}.$$
\end{mainthm}
We observe that the rate $O(1/ \sqrt n)$ in the above theorem is optimal, as it is already optimal for sums of i.i.d.\ real-valued random variables. Many related bounds for the other random variables associated with $S_n$ mentioned above can be found in the recent literature. More details are given below.
\medskip
Our second result is a Local Limit Theorem for the coefficients.
\begin{mainthm}\label{thm:LLT-coeff}
Let $\mu$ be a probability measure on ${\rm GL}_d(\mathbb{R})$. Assume that $\mu$ has a finite exponential moment and that $\Gamma_\mu$ is proximal and strongly irreducible. Let $\gamma$ be the associated first Lyapunov exponent. Let $\varrho > 0$ be as in Theorem \ref{thm:BE-coeff}. Then, for any $x:=[v]\in \P^{d-1}, y:=[f]\in (\P^{d-1})^*$ and any $-\infty<a<b<\infty$, we have
$$\lim_{n\to \infty}\sup_{t\in\mathbb{R}}\bigg| \sqrt{n} \, \mathbf P \Big( t+ \log{ |\langle f, S_n v\rangle | \over \norm{f} \norm{v}} - n \gamma\in [ a, b] \Big) - e^{-\frac{t^2}{2 \varrho^2 n}} {b-a\over \sqrt{2 \pi}\,\varrho} \bigg| =0.$$
Moreover, the convergence is uniform in $x \in \P^{d-1}$ and $y \in (\P^{d-1})^*$.
\end{mainthm}
As discussed below, Theorem \ref{thm:LLT-coeff} contains a recent result of Grama-Quint-Xiao \cite{grama-quint-xiao}.
\medskip
\noindent \textbf{Related works.} As mentioned before, the rate $O(1 / \sqrt n)$ in Theorem \ref{thm:BE-coeff} is optimal. Before our work, Berry-Esseen bounds for the coefficients of $S_n$ were only known under strong positivity conditions on the matrices in the support of $\mu$, see \cite{xiao-grama-liu:norm-coeff}. Under the assumptions of Theorem \ref{thm:BE-coeff}, it has long been known that one can obtain a Berry-Esseen bound for the norm cocycle $\sigma(S_n,x)$ with rate $O(1 / \sqrt n)$, see \cite{lepage:theoremes-limites,bougerol-lacroix} and \cite{fernando-pene} for a refined version. For the variables $\log \|S_n \|$ and $\rho(S_n)$, the progress is more recent: in these cases a Berry-Esseen bound with rate $O(1 / \sqrt n)$ is known under strong positivity conditions and, without such conditions, an $O(\log n / \sqrt n)$ rate can be obtained, see \cite{xiao-grama-liu:hal,xiao-grama-liu:norm-coeff}.
The exponential moment condition in Theorem \ref{thm:BE-coeff} is stronger than what one should require. Parallel to the case of sums of i.i.d.\ random variables, one should expect the same result under a third moment condition, that is, $\int_G \big( \log N(g) \big)^3 \, {\rm d} \mu(g) < + \infty$. This is unknown for the coefficients. Under this condition, for the norm cocycle $\sigma(S_n,x)$, the best known rate is $O(n^{-1 \slash 4} \sqrt{\log n})$, obtained in \cite{cuny-dedecker-jan} using martingale approximation methods in the spirit of \cite{benoist-quint:CLT}. This has been recently improved in \cite{cuny-dedecker-merlevede-peligrad} to an $O(1 / \sqrt n)$ (resp.\ $O((\log n)^{1/2}n^{-1 \slash 2})$) rate under a fourth (resp.\ third) moment condition. See also \cite{jirak:BE1} for related results under low moment conditions. In the particular case where $d=2$, the authors have obtained the optimal $O(1 / \sqrt n)$ rate under a third moment condition \cite{DKW:LLT}.
Concerning the Local Limit Theorem (LLT), Theorem \ref{thm:LLT-coeff} above strengthens a recent result of Grama-Quint-Xiao \cite{grama-quint-xiao}, which holds under the same hypotheses as Theorem \ref{thm:LLT-coeff}, but only for the parameter $t = 0$. See also \cite{xiao-grama-liu:coeff-large-deviation} for related results and \cite{DKW:LLT} for the case $d=2$ under a third moment condition. The results of \cite{grama-quint-xiao} allow one to estimate the probability that the random variables $ \frac{1}{\sqrt n} \big(\log{ |\langle f, S_n v \rangle | \over \norm{f} \norm{v}} - n \gamma \big)$ fall in intervals of size $O(1 / \sqrt n)$ around the origin, while Theorem \ref{thm:LLT-coeff} works for intervals of size $O(1 / \sqrt n)$ around an arbitrary point of the real line. For the norm cocycle, the general LLT is due to Le Page \cite{lepage:theoremes-limites}.
\medskip
\noindent \textbf{Overview of the proofs.} When proving limit theorems for the coefficients, the first step is to compare them with the norm cocycle via the elementary identity
\begin{equation*}
\log{ |\langle f, S_n v \rangle | \over \norm{f} \norm{v}} = \sigma(S_n,x) + \log \Delta(S_n x,y),
\end{equation*}
where $\Delta(x,y):= \frac{ |\langle f, v \rangle |}{\norm{f} \norm{v}}$. One can check that $\Delta(x,y) = d (x,H_y)$, where $H_y := \P(\ker f)$ is a hyperplane in $\P^{d-1}$ and $d$ is a natural distance on $\P^{d-1}$ (see Section \ref{sec:prelim}). Then, we can use the above formula and work with the random variable $\sigma(S_n,x) +\log d(S_n x, H_y)$ instead of $\log{ |\langle f, S_n v \rangle | \over \norm{f} \norm{v}} $. The behaviour of $\sigma(S_n,x)$ can be studied via the perturbed Markov operators (see Subsection \ref{subsec:markov-op}). The term $\log d(S_n x, H_y)$ is handled using some large deviation estimates combined with a good partition of unity (see Lemmas \ref{lemma:partition-of-unity} and \ref{lemma:partition-of-unity-2}). The latter is one of our key arguments, applied to approximate the quantity $\sigma(S_n,x) + \log d(S_n x,H_y)$ by a sum of functions of two separate variables $\sigma(S_n,x)$ and $S_n x$, see also \cite{grama-quint-xiao}. We use a partition of $\P^{d-1} \setminus H_y$ by functions $(\chi_k)_{k \geq 0}$ subordinated to ``annuli'' around $H_y$ of the form $ \big\{ w \in \P^{d-1} :\, e^{-k-1} < d(w,H_y) < e^{-k+1} \big\}$. This allows us to keep good control of the errors in a ``uniform'' manner, which is responsible for the sharp bounds. In particular, we do not need to use the zero-one law for algebraic subsets of $\P^{d-1}$ obtained in \cite{grama-quint-xiao}, which is a main ingredient in the proof of their version of the LLT.
For most of our estimates, we strongly rely on the spectral analysis of the Markov operator and its perturbations on a H\"older space $\cali{C}^\alpha(\P^{d-1})$ (see Subsection \ref{subsec:markov-op}). It is crucial to choose $\alpha$ small in order to reduce the impact of the norm of $\chi_k$ when $k$ is large, see Lemmas \ref{lemma:norm-Phi} and \ref{lemma:norm-Phi-2}. A main difficulty that appeared in our computations is how to handle the ``tail'' of the approximation using $\chi_k$. To overcome this problem, we introduce an auxiliary function
$$\Phi_{n}^{\star} (w):= 1 - \sum_{0\leq k\leq A\log n} \chi_k(w)$$ for some well-chosen $A>0$, which has negligible impact on the estimates but whose presence is helpful in the computations, see e.g. Lemmas \ref{lemma:theta-1-bound}, \ref{lemma:theta-2-bound} and \ref{lemma-R-S}.
Our approach can also be applied to the case of more general target functions. More precisely, we can replace the probabilities in Theorems \ref{thm:BE-coeff} and \ref{thm:LLT-coeff} by the expectation of some good test functions on $\mathbb{R}\times \P^{d-1}$. We postpone these questions to a future work in order to keep the current article less technical. The results presented here can be extended to the case of matrices with entries in a local field, see \cite{benoist-quint:book} for local field versions of the results stated in Section \ref{sec:prelim}.
\medskip
\noindent \textbf{Organization of the article.} The article is organized as follows. In Section \ref{sec:prelim}, we recall some standard results from the theory of random matrix products that will be used in the proofs, most notably: spectral gap results, large deviation estimates and regularity properties of the stationary measure. Theorem \ref{thm:BE-coeff} is proved in Section \ref{sec:BE} and Theorem \ref{thm:LLT-coeff} is proved in Section \ref{sec:LLT}.
\medskip
\noindent\textbf{Notations.} Throughout this article, the symbols $\lesssim$ and $\gtrsim$ stand for inequalities up to a multiplicative constant. The dependence of these constants on certain parameters (or lack thereof), if not explicitly stated, will be clear from the context. We denote by $\mathbf E$ the expectation and $\mathbf P$ the probability.
\section{Preliminary results} \label{sec:prelim}
We start with some basic results and notations. We refer to \cite{bougerol-lacroix,benoist-quint:book} for the proofs of the results described here. See also \cite{lepage:theoremes-limites}.
\subsection{Norm cocycle, first Lyapunov exponent and the stationary measure}
Let $G:={\rm GL}_d(\mathbb{R})$. We consider its standard linear action on $\mathbb{R}^d$ and the induced action on the real projective space $\P^{d-1}$. Let $\mu$ be a probability measure on $G$. For $n \geq 1$, we define the convolution measure by $\mu^{*n} := \mu * \cdots * \mu$ ($n$ times) as the push-forward of the product measure $\mu^{\otimes n}$ on $G^n$ by the map $(g_1, \ldots, g_n) \mapsto g_n \cdots g_1$. If $g_j$ are i.i.d. random matrices with law $\mu$ then $\mu^{*n} $ is the law of $S_n := g_n \cdots g_1$.
Denote by $\norm{g}$ the operator norm of the matrix $g$ and define $N(g):=\max\big( \norm{g},\norm{g^{-1}} \big)$. We say that $\mu$ has a \textit{finite exponential moment} if
$$\mathbf{E} \big( N(g)^\varepsilon \big) = \int_G N(g)^\varepsilon \, {\rm d} \mu(g) < \infty \quad \text{for some } \,\, \varepsilon > 0.$$
The \textit{first Lyapunov exponent} is the number
$$\gamma := \lim_{n \to \infty} \frac1n \mathbf{E} \big( \log\|S_n\| \big)=\lim_{n \to \infty} \frac1n \int \log\|g_n \cdots g_1\| \, {\rm d} \mu(g_1) \cdots {\rm d} \mu(g_n).$$
The \textit{norm cocycle} is the function $\sigma: G \times \P^{d-1} \to \mathbb{R}$ given by $$\sigma(g,x) = \sigma_g(x):= \log \frac{\norm{gv}}{\norm{v}}, \quad \text{for }\,\, v \in \mathbb{R}^d \setminus \{0\}, \, x = [v] \in \P^{d-1} \, \text{ and } g \in G.$$
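The cocycle relation $\sigma(g_2g_1,x) = \sigma(g_2,g_1 \cdot x) + \sigma(g_1,x)$ is a telescoping identity for norms and can be checked numerically; the sketch below (with random Gaussian matrices, almost surely invertible, as our illustrative choice) does so, together with the fact that $\sigma(g,x)$ does not depend on the representative $v$ of the line $x=[v]$.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigma(g, v):
    """Norm cocycle sigma(g, [v]) = log(||g v|| / ||v||); it only depends
    on the line [v], not on the chosen representative v."""
    return np.log(np.linalg.norm(g @ v) / np.linalg.norm(v))

# The cocycle relation sigma(g2 g1, x) = sigma(g2, g1 . x) + sigma(g1, x)
# is the telescoping ||g2 g1 v||/||v|| = (||g2 w||/||w||)(||g1 v||/||v||)
# with w = g1 v.
g1 = rng.normal(size=(3, 3))
g2 = rng.normal(size=(3, 3))
v = rng.normal(size=3)
lhs = sigma(g2 @ g1, v)
rhs = sigma(g2, g1 @ v) + sigma(g1, v)
```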
An element $g\in G$ is said to be \textit{proximal} if it admits a unique eigenvalue of maximal modulus which is moreover of multiplicity one. A semigroup $\Gamma$ is said to be \textit{proximal} if it contains a proximal element. We say that (the action of) $\Gamma$ is \textit{strongly irreducible} if it does not preserve a finite union of proper linear subspaces of $\mathbb{R}^d$.
Denote by $\Gamma_\mu$ the closed semigroup generated by the support of $\mu$. If $\Gamma_\mu$ is proximal and strongly irreducible, then $\mu$ admits a unique \textit{stationary measure}, that is, a probability measure $\nu$ on $\P^{d-1}$ satisfying $$\int_G g_* \nu \, {\rm d} \mu(g)= \nu.$$
The above measure is also called the \textit{Furstenberg measure} associated with $\mu$.
\subsection{Large deviation estimates and regularity}
We equip $\P^{d-1}$ with a natural distance given by
\begin{equation*}
d(x,w) : = \sqrt{1 - \bigg( \frac{\langle v_x,v_w \rangle}{\|v_x\| \|v_w\|} \bigg)^2}, \quad \text{where} \quad v_x,v_w \in \mathbb{R}^d \setminus \{0\}, \, x = [v_x], \,\, w = [v_w] \in \P^{d-1}.
\end{equation*}
Observe that $d(x,w)$ is the sine of the angle between the lines $x$ and $w$ in $\mathbb{R}^d$. The metric space $(\P^{d-1}, d)$ has diameter one, and the orthogonal group $\text{O}(d)$ acts on it transitively and isometrically. We will denote by $\mathbb{B}(x,r)$ the associated open ball of center $x$ and radius $r$ in $\P^{d-1}$.
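The basic properties of $d$ (well-definedness on $\P^{d-1}$, values in $[0,1]$, invariance under $\text{O}(d)$) are elementary; the short Python sketch below (illustrative, with random vectors in $\mathbb{R}^4$) verifies them numerically.

```python
import numpy as np

rng = np.random.default_rng(3)

def proj_dist(v, w):
    """d([v],[w]) = sqrt(1 - (<v,w> / (||v|| ||w||))^2): the sine of the
    angle between the lines spanned by v and w."""
    c = (v @ w) / (np.linalg.norm(v) * np.linalg.norm(w))
    return np.sqrt(max(0.0, 1.0 - c * c))

v = rng.normal(size=4)
w = rng.normal(size=4)
d_vw = proj_dist(v, w)

# Well-defined on P^{d-1} (invariant under nonzero rescaling of the
# representatives) and invariant under the orthogonal group O(d).
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))   # a random orthogonal matrix
d_scaled = proj_dist(-2.5 * v, 3.0 * w)
d_rotated = proj_dist(Q @ v, Q @ w)
```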
\medskip
For $y\in (\P^{d-1})^*$, the dual of $\P^{d-1}$, we denote by $H_y$ the kernel of $y$, which is a (projective) hyperplane in $\P^{d-1}$. We will need the following large deviation estimates. Recall that $\gamma$ denotes the first Lyapunov exponent of $\mu$.
\begin{proposition}[\cite{benoist-quint:book}--Proposition 14.3 and Lemma 14.11] \label{prop:BQLDT}
Let $\mu$ be a probability measure on $G={\rm GL}_d(\mathbb{R})$. Assume that $\mu$ has a finite exponential moment and that $\Gamma_\mu$ is proximal and strongly irreducible. Then, for any $\epsilon>0$ there exist $c>0$ and $n_0 \in\mathbb{N}$ such that, for all $\ell\geq n\geq n_0$, $x\in \P^{d-1}$ and $y\in(\P^{d-1})^*$, one has
$$ \mu^{*n} \big\{g\in G:\, |\sigma(g,x)-n\gamma| \geq n\epsilon \big\}\leq e^{-cn} $$
and
$$ \mu^{*\ell} \big\{g\in G:\, d(gx, H_y) \leq e^{-\epsilon n} \big\} \leq e^{-cn}. $$
\end{proposition}
The next result gives a regularity property of the stationary measure $\nu$. See also \cite{benoist-quint:CLT,DKW:PAMQ} for the case where $\mu$ satisfies weaker moment conditions. For a hyperplane $H$ in $\P^{d-1}$ and $r>0$, we denote $\mathbb{B}(H,r) :=\{x \in \P^{d-1}: d(x,H) < r\}$, which is a ``tubular'' neighborhood of $H$.
\begin{proposition}[\cite{guivarch:1990}, \cite{benoist-quint:book}--Theorem 14.1]\label{prop:regularity}
Let $\mu$ be a probability measure on $G={\rm GL}_d(\mathbb{R})$. Assume that $\mu$ has a finite exponential moment and that $\Gamma_\mu$ is proximal and strongly irreducible. Let $\nu$ be the associated stationary measure. Then, there are constants $C>0$ and $\eta>0$ such that
$$\nu\big(\mathbb{B}(H_y,r)\big)\leq C r^\eta \quad\text{for every} \quad y\in (\P^{d-1})^* \, \, \text{ and } \,\, 0 \leq r \leq 1.$$
\end{proposition}
\subsection{The Markov operator and its perturbations} \label{subsec:markov-op}
The \textit{Markov operator} associated to $\mu$ is the operator
$$\mathcal{P} \varphi(x):=\int_{G} \varphi(gx) \,{\rm d}\mu(g),$$
acting on functions on $\P^{d-1}$.
For $z\in\mathbb{C}$, we consider the perturbation $\mathcal{P}_z$ of $\mathcal{P}$ given by
$$\mathcal{P}_z \varphi(x):=\int_{G} e^{z\sigma(g,x)}\varphi(gx) \,{\rm d}\mu(g),$$
where $\sigma(g,x)$ is the norm cocycle defined above. The operator $\mathcal{P}_z$ is often called the \textit{complex transfer operator}. Notice that $\mathcal{P}_0= \mathcal{P}$ is the original Markov operator. A direct computation using the cocycle relation $\sigma(g_2g_1,x) = \sigma(g_2,g_1 x) + \sigma(g_1,x)$ gives that
\begin{equation} \label{eq:markov-op-iterate}
\mathcal{P}^n_z \varphi (x) = \int_G e^{z \sigma(g,x)} \varphi(gx) \, {\rm d} \mu^{* n} (g).
\end{equation}
In other words, $\mathcal{P}^n_z$ corresponds to the perturbed Markov operator associated with the convolution power $\mu^{\ast n}$.
We recall some fundamental results of Le Page about the spectral properties of the above operators. For $0<\alpha<1$, we denote by $\cali{C}^\alpha(\P^{d-1})$ the space of H\"older continuous functions on $\P^{d-1}$ equipped with the norm
\begin{equation*}
\|\varphi\|_{\cali{C}^\alpha} := \|\varphi\|_\infty + \sup_{x \neq y \in \P^{d-1}} \frac{|\varphi(x)-\varphi(y)|}{d(x,y)^\alpha}.
\end{equation*}
Recall that the essential spectrum of an operator is the subset of the spectrum obtained by removing its isolated points corresponding to eigenvalues of finite multiplicity.
The essential spectral radius $\rho_{\rm ess}$ is then the radius of the smallest disc centered at the origin which contains the essential spectrum.
\begin{theorem}\label{thm:spectral-gap} \cite{lepage:theoremes-limites}, \cite[V.2]{bougerol-lacroix} Let $\mu$ be a probability measure on $G={\rm GL}_d(\mathbb{R})$ with a finite exponential moment such that $\Gamma_\mu$ is proximal and strongly irreducible. Then, there exists $\alpha_0$ with $0<\alpha_0 <1$ such that, for all $0<\alpha \leq \alpha_0$, the operator $\mathcal{P}$ acts continuously on $\cali{C}^\alpha(\P^{d-1})$ with a spectral gap. In other words, $\rho_{\rm ess}(\mathcal{P})<1$ and $\mathcal{P}$ has a single eigenvalue of modulus $\geq 1$ located at $1$, which is isolated and of multiplicity one.
\end{theorem}
It follows directly from the above theorem that $\|\mathcal{P}^n - \mathcal{N}\|_{\cali{C}^\alpha} \leq C \lambda^n$ for some constants $C > 0$ and $0<\lambda<1$, where $\mathcal{N}$ is the projection $\varphi \mapsto \big( \int_{\P^{d-1}} \varphi \, {\rm d} \nu \big) \cdot \mathbf 1$ onto the space of constant functions. Here and in what follows, we denote by $\mathbf 1$ the constant function equal to $1$ on $\P^{d-1}$.
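The exponential convergence $\mathcal{P}^n \varphi \to \mathcal{N}\varphi$ can be observed numerically on a toy example. In the sketch below, $\P^1$ is identified with the angle interval $[0,\pi)$ and $\mu$ is uniform on two fixed matrices (a proximal one and a scaled rotation); the grid size, the linear interpolation and the number of iterates are ad hoc choices of ours, meant only to visualise the flattening of $\mathcal{P}^n\varphi$ towards a constant.

```python
import numpy as np

# Identify P^1 with angles theta in [0, pi): the line through (cos t, sin t).
THETAS = np.linspace(0.0, np.pi, 400, endpoint=False)
N = len(THETAS)

# mu uniform on a proximal symmetric matrix and a scaled rotation by pi/4
# (an illustrative proximal, strongly irreducible pair).
MATS = [np.array([[2.0, 1.0], [1.0, 1.0]]),
        np.array([[1.0, -1.0], [1.0, 1.0]])]

def act(g, theta):
    """Induced action of g on P^1, in the angle coordinate."""
    v = g @ np.array([np.cos(theta), np.sin(theta)])
    return np.arctan2(v[1], v[0]) % np.pi

def markov(phi_vals):
    """One step of P phi(x) = mean_g phi(g x), computed on the grid with
    linear interpolation (periodic of period pi)."""
    out = np.empty_like(phi_vals)
    for i, t in enumerate(THETAS):
        acc = 0.0
        for g in MATS:
            pos = act(g, t) / np.pi * N
            j = int(pos)
            frac = pos - j
            acc += (1 - frac) * phi_vals[j % N] + frac * phi_vals[(j + 1) % N]
        out[i] = acc / len(MATS)
    return out

phi = np.cos(2 * THETAS)           # a smooth test function on P^1
osc0 = phi.max() - phi.min()       # initial oscillation (= 2)
for _ in range(60):
    phi = markov(phi)
osc60 = phi.max() - phi.min()      # should be much smaller
```

After a few dozen iterates the grid values of $\mathcal{P}^n\varphi$ cluster around a single constant, the analogue of $\int \varphi \, {\rm d}\nu$ for this discretised model.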
The following result gives the regularity of the family of operators $z \mapsto \mathcal{P}_z$. The second part follows from the general theory of perturbations of linear operators, which implies that the spectral properties of $\mathcal{P}_0$ persist for small values of $z$. For a proof, see e.g.\ \cite[V.4]{bougerol-lacroix}.
\begin{proposition} \label{prop:spectral-decomp}
Let $\mu$ and $\alpha_0$ be as in Theorem \ref{thm:spectral-gap}. There exists $b > 0$ such that for $|\Re z| < b$, the operators $\mathcal{P}_z$ act continuously on $\cali{C}^\alpha(\P^{d-1})$ for all $0<\alpha \leq \alpha_0$. Moreover, the family of operators $z \mapsto \mathcal{P}_z$ is analytic near $z=0$.
In particular, there exists an $\epsilon_0 > 0$ such that, for $|z|\leq \epsilon_0$, one has a decomposition
\begin{equation} \label{eq:P_t-decomp}
\mathcal{P}_z = \lambda_z \mathcal{N}_z + \mathcal{Q}_z,
\end{equation}
where $\lambda_z \in \mathbb{C}$, $\mathcal{N}_z$ and $\mathcal{Q}_z$ are bounded operators on $\cali{C}^{\alpha}(\P^{d-1})$ and
\begin{enumerate}
\item $\lambda_0 = 1$ and $\mathcal{N}_0 \varphi = \int_{\P^{d-1}} \varphi \, {\rm d} \nu$, which is a constant function, where $\nu$ is the unique $\mu$-stationary measure;
\item $\rho:= \displaystyle \lim_{n \to \infty} \|\mathcal{P}_0^n - \mathcal{N}_0\|_{\cali{C}^\alpha}^{1 \slash n} < 1$;
\item $\lambda_z$ is the unique eigenvalue of maximum modulus of $\mathcal{P}_z$, $\mathcal{N}_z$ is a rank-one projection and $\mathcal{N}_z \mathcal{Q}_z = \mathcal{Q}_z \mathcal{N}_z = 0$;
\item the maps $z \mapsto \lambda_z$, $z \mapsto \mathcal{N}_z$ and $z \mapsto \mathcal{Q}_z$ are analytic;
\item $|\lambda_z| \geq \frac{2 + \rho}{3}$ and for every $k\in\mathbb{N}$, there exists a constant $c > 0$ such that $$\Big \| \frac{{\rm d}^k \mathcal{Q}_z^n}{{\rm d} z^k} \Big \|_{\cali{C}^\alpha} \leq c \Big( \frac{1 + 2 \rho}{3} \Big)^n \quad \text{ for every}\quad n \geq 0;$$
\item for $z=i\xi\in i\mathbb{R}$, we have
$$ \lambda_{i\xi} = 1 + i \gamma \xi - \frac{\varrho^2+\gamma^2}{2}\xi^2+O(|\xi|^3) \quad \text {as } \,\, \xi \to 0,$$ where $\gamma$ is the first Lyapunov exponent of $\mu$ and $\varrho > 0$ is a constant.
\end{enumerate}
\end{proposition}
The constant $\varrho^2 > 0$ appearing in the above expansion of $\lambda_{i\xi}$ coincides with the variance in the Central Limit Theorem for the norm cocycle, see \cite{bougerol-lacroix,benoist-quint:book,DKW:LLT}. As a consequence of the above proposition, we can derive the following estimates which will be crucial in the proof of our main theorems. For the proof, see \cite[Proposition 8.5]{DKW:LLT} and \cite[Lemma 9]{lepage:theoremes-limites}.
\begin{lemma}\label{lemma:lambda-estimates}
Let $\epsilon_0$ be as in Proposition \ref{prop:spectral-decomp}. There exists $0 < \xi_0 < \epsilon_0$ such that, for all $n \in \mathbb{N}$ large enough, one has $$\big|\lambda_{{i\xi\over \sqrt n}}^n\big|\leq e^{-{\varrho^2\xi^2\over 3}} \quad\text{for}\quad |\xi|\leq \xi_0\sqrt n,$$
$$\Big| e^{-i\xi\sqrt n \gamma}\lambda_{{i\xi\over \sqrt n}}^n-e ^{-{\varrho^2\xi^2\over 2}} \Big|\leq {c\over \sqrt n}|\xi|^3e^{-{\varrho^2\xi^2\over 2}} \quad\text{for}\quad |\xi|\leq \sqrt[6] n,$$
$$\Big| e^{-i\xi\sqrt n \gamma} \lambda_{{i\xi\over \sqrt n}}^n-e ^{-{\varrho^2\xi^2\over 2}} \Big|\leq {c\over \sqrt n}e^{-{\varrho^2\xi^2\over 4}} \quad\text{for}\quad \sqrt[6] n<|\xi|\leq \xi_0\sqrt n,$$
where $c>0$ is a constant independent of $n$.
\end{lemma}
The following important result describes the spectrum of $\mathcal{P}_{i\xi}$ for large real values of $\xi$. It is one of the main tools in the proof of the Local Limit Theorem for the norm cocycle and it will also be indispensable in our proof of Theorem \ref{thm:LLT-coeff}.
\begin{proposition}\cite{lepage:theoremes-limites}, \cite[Chapter 15]{benoist-quint:book} \label{prop:spec-Pxi}
Let $\mu$ and $\alpha_0$ be as in Theorem \ref{thm:spectral-gap}. Let $K$ be a compact subset of $\mathbb{R} \setminus \{0\}$. Then, for every $0<\alpha \leq \alpha_0$ there exist constants $C_K>0$ and $0<\rho_K<1$ such that $\norm{\mathcal{P}^n_{i\xi}}_{\cali{C}^\alpha}\leq C_K \rho_K^n$ for all $n\geq 1$ and $\xi\in K$.
\end{proposition}
\subsection{Fourier transform and characteristic function}
Recall that the Fourier transform of an integrable function $h$ on $\mathbb{R}$, denoted by $\widehat h$, is defined by
$$\widehat h(\xi):=\int_{-\infty}^{+\infty}h(u)e^{-i u\xi} {\rm d} u$$
and the inverse Fourier transform is $$ \mathcal{F}^{-1} h(u) :={1\over {2\pi}}\int_{-\infty}^{+\infty} h(\xi) e^{ i u\xi} {\rm d} \xi,$$ so that, when $\widehat h$ is integrable, one has $h = \mathcal{F}^{-1} \widehat h$. With these definitions, the Fourier transform of $\widehat h(\xi)$ is $2\pi h(-u)$ and the convolution operator satisfies $\widehat{h_1*h_2}=\widehat h_1\cdot \widehat h_2$.
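These conventions can be checked on the standard Gaussian $h(u) = e^{-u^2/2}/\sqrt{2\pi}$, for which $\widehat h(\xi) = e^{-\xi^2/2}$; the sketch below (a numerical illustration using the composite trapezoid rule, with truncation range and grid chosen by us) verifies both the forward transform and the inversion.

```python
import numpy as np

def integrate(y, x):
    """Composite trapezoid rule (works for complex-valued y)."""
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2

# Standard Gaussian h(u) = e^{-u^2/2}/sqrt(2 pi); with the convention
# hat h(xi) = int h(u) e^{-i u xi} du one gets hat h(xi) = e^{-xi^2/2},
# and the inverse transform recovers h.
u = np.linspace(-12.0, 12.0, 4001)
h = np.exp(-u ** 2 / 2) / np.sqrt(2 * np.pi)

hat_h_at_1 = integrate(h * np.exp(-1j * u * 1.0), u)   # expect e^{-1/2}

xi = np.linspace(-12.0, 12.0, 4001)
hat_h = np.exp(-xi ** 2 / 2)                           # closed form of hat h
h_at_half = integrate(hat_h * np.exp(1j * xi * 0.5), xi) / (2 * np.pi)  # expect h(0.5)
```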
\begin{lemma}[\cite{DKW:LLT}--Lemma 2.2] \label{l:vartheta}
There exists a smooth strictly positive even function $\vartheta$ on $\mathbb{R}$ with $\int_\mathbb{R} \vartheta(u) {\rm d} u=1$ such that its Fourier transform $\widehat\vartheta$ is a smooth even function supported by $[-1,1]$.
Moreover, for $0<\delta \leq 1$ and $\vartheta_\delta(u):=\delta^{-2}\vartheta(u/\delta^2)$, the function $\widehat{\vartheta_\delta}$ is supported by $[-\delta^{-2},\delta^{-2}]$, $|\widehat{\vartheta_\delta}|\leq 1$ and $\norm{\widehat{\vartheta_\delta}}_{\cali{C}^1}\leq c$ for some constant $c>0$ independent of $\delta$.
\end{lemma}
As a consequence, we have the following approximation lemma.
\begin{lemma}[\cite{DKW:LLT}--Lemma 2.4] \label{lemma:conv-fourier-approx}
Let $\psi$ be a continuous real-valued function with support in a compact set $K$ in $\mathbb{R}$. Assume that $\|\psi\|_\infty \leq 1$. Then, for every $0< \delta \leq 1$ there exist smooth functions $\psi^\pm_\delta$ such that $\widehat {\psi^\pm_\delta}$ have support in $[-\delta^{-2},\delta^{-2}]$, $$\psi^-_\delta \leq\psi\leq \psi^+_\delta,\quad \lim_{\delta \to 0} \psi^\pm_\delta =\psi \quad \text{and} \quad \lim_{\delta \to 0} \big \|\psi^\pm_\delta -\psi \big \|_{L^1} = 0.$$
Moreover, $\norm{\psi_\delta^\pm}_\infty$, $\norm{\psi_\delta^\pm}_{L^1}$ and $\|\widehat{\psi^\pm_\delta}\|_{\cali{C}^1}$ are bounded by a constant which only depends on $K$.
\end{lemma}
When proving limit theorems for random variables we often resort to the associated characteristic functions. For notational convenience, we will also use their conjugates.
\begin{definition} \label{def:conjugate-cf} \rm
For a real random variable $X$ with cumulative distribution function $F$ (c.d.f.\ for short), we define its \textit{conjugate characteristic function} by $$\phi_F(\xi):=\mathbf E\big(e^{-i\xi X}\big).$$
\end{definition}
Observe that ${\rm d} F$ is a probability measure on $\mathbb{R}$ and $\phi_F$ is its Fourier transform. In particular, when $F$ is differentiable and $\phi_F$ is integrable, the following inversion formula holds
\begin{equation}\label{inverse-char}
F'(u)={1\over 2\pi}\int_{-\infty}^\infty e^{iu\xi} \phi_F(\xi) \,{\rm d} \xi.
\end{equation}
\section{Berry-Esseen bound for coefficients}\label{sec:BE}
This section is devoted to the proof of Theorem \ref{thm:BE-coeff}. We begin with the following version of the Berry-Esseen lemma. See also \cite[XVI.3]{feller:book}.
\begin{lemma} \label{lemma:BE-feller}
Let $F$ be a c.d.f.\ of some real random variable and let $H$ be a differentiable real-valued function on $\mathbb{R}$ with derivative $h$ such that $H(-\infty)=0$, $H(\infty)=1$ and $|h(u)|\leq m$ for some constant $m >0$. Let $D>0$ and $0<\delta<1$ be real numbers such that $\big|F(u)-H(u) \big|\leq D \delta^2$ for $|u|\geq \delta^{-2}$.
Then, there exist constants $C>0$ and $\kappa > 1$ independent of $F,H,\delta$, such that
$$\sup_{u\in\mathbb{R}}\big|F(u)-H(u) \big|\leq 2\sup_{|u|\leq \kappa \delta^{-2}} \big|(F-H)*\vartheta_\delta(u)\big|+C \delta^2, $$
where $\vartheta_\delta$ is defined in Lemma \ref{l:vartheta}.
\end{lemma}
\begin{proof}
We begin by noticing that, from the definition of $\vartheta_\delta$, we have that, for any $d>0$,
\begin{equation} \label{eq:BE-lemma-1}
\int_{|u|\geq d}\vartheta_\delta(u) \,{\rm d} u=\int_{|u|\geq d}{\vartheta(u/\delta^2)\over \delta^2}\,{\rm d} u=\int_{|u|\geq d\delta^{-2}} \vartheta(u)\,{\rm d} u\leq c\delta^2/d
\end{equation}
for some constant $c>0$ independent of $d$ and $\delta$. This is due to the fact that $\widehat{\vartheta}$ is smooth and compactly supported, hence $\vartheta$ has fast decay at infinity, say $|\vartheta(u)| \lesssim 1 / |u|^2$.
Since the function $F(u)-H(u)$ vanishes at $\pm\infty$, the maximum of $\big|F(u)-H(u) \big|$ exists. Let $u_0$ be a point where this maximum is attained. If $|u_0|\geq \delta^{-2}$, there is nothing to prove because $\sup_{|u|\geq \delta^{-2}}\big|F(u)-H(u) \big|\leq D \delta^2$ by hypothesis. So, we can assume $|u_0|\leq \delta^{-2}$ and $M:=\big|F(u_0)-H(u_0) \big|\geq D \delta^2$. If $M\leq 12 mc\delta^2$, the lemma clearly follows, so we may assume $M>12 mc\delta^2$. We will use the fact that $F(-\infty)=0$, $F(\infty)=1$ and $F$ is non-decreasing.
After replacing $F(u)$ and $H(u)$ by $1-F(-u)$ and $1-H(-u)$ if necessary, we may assume that $M=F(u_0)-H(u_0)>0$. Let $d>0$ be a constant such that $M \geq 2 m d$ whose precise value will be determined later. Since $F$ is non-decreasing and $h(u)\leq m$ by assumption, we have
$F(u_0+r)-H(u_0+r)\geq M- mr$ for $r\geq 0$.
Thus,
\begin{equation} \label{eq:BE-lemma-2}
F(u)-H(u)\geq M-2md \quad\text{for} \quad u_0\leq u\leq u_0+2d,
\end{equation}
and from the definition of $M$,
$$F(u)-H(u)\geq -M \quad\text{for all }\quad u \in \mathbb{R}.$$
Therefore, because $|u_0| \leq \delta^{-2}$, we obtain using \eqref{eq:BE-lemma-1} and \eqref{eq:BE-lemma-2} that
\begin{align*}
\sup_{|u|\leq \delta^{-2}+d} \big|(F-H)*\vartheta_\delta(u)\big| &\geq (F-H)*\vartheta_\delta(u_0+d)\\
&=\Big(\int_{|u|< d}+ \int_{|u|\geq d}\Big) (F-H)(u_0+d-u) \cdot \vartheta_\delta (u)\,{\rm d} u\\
&\geq (M-2md)(1-c \delta^2 /d) - Mc \delta^2/d \\ &=(1-2c \delta^2/d)M-2md+2m c \delta^2.
\end{align*}
By setting $d:=4c\delta^2$ and recalling that $M>12 mc\delta^2$, we get that $M \geq 2md$ and the last quantity above equals $M/2-6mc\delta^2$. Since $\delta^{-2}+d \leq (1+4c) \delta^{-2}$, the lemma follows by setting $\kappa := 1+4c$ and $C:= 12 mc$.
\end{proof}
\begin{corollary} \label{cor:BE-feller}
Keep the notations and assumptions of Lemma \ref{lemma:BE-feller}. Assume moreover that $h\in L^1$, $\widehat h\in\cali{C}^1$ and that $\phi_F$ is differentiable at zero (see Definition \ref{def:conjugate-cf}). Then,
$$\sup_{u\in\mathbb{R}}\big|F(u)-H(u) \big|\leq {1\over \pi} \sup_{|u|\leq \kappa \delta^{-2}} \Big|\int_{-\delta^{-2}}^{\delta^{-2}} {\Theta_u(\xi) \over \xi} \,{\rm d} \xi\Big| +C\delta^2,$$
where $\Theta_u(\xi):=e^{iu\xi}\big(\phi_F(\xi)- \widehat {h}(\xi) \big)\widehat{\vartheta_\delta}(\xi)$.
\end{corollary}
\begin{proof}
Notice that, by the convolution formula, the function $\phi_F \cdot\widehat{\vartheta_\delta}$ is the conjugate characteristic function associated with the c.d.f.\ $F*\vartheta_\delta$. Since ${\rm supp}(\widehat{\vartheta_\delta})\subset [-\delta^{-2},\delta^{-2}]$, and $\phi_F$ is bounded by definition, it follows that $\phi_F \cdot\widehat{\vartheta_\delta}$ is integrable. Identity \eqref{inverse-char} gives that
$$ (F*\vartheta_\delta)'(u) ={1\over 2\pi}\int_{-\infty}^\infty e^{iu\xi} \phi_F(\xi)\cdot\widehat{\vartheta_\delta}(\xi) \,{\rm d} \xi. $$
As the inverse Fourier transform of $\widehat{h} \cdot \widehat{\vartheta_\delta}$ is $h*\vartheta_\delta$, we get
$$\big((F-H)*\vartheta_\delta\big)'(u)={1\over 2\pi}\int_{-\delta^{-2}}^{\delta^{-2}} e^{iu\xi} \big( \phi_F(\xi)-\widehat h(\xi)\big) \widehat{\vartheta_\delta}(\xi) \,{\rm d}\xi.$$
Observe that $\phi_F(0) = \mathbf{E}(\mathbf 1) = 1$ and $\widehat h(0) = H(\infty) - H(-\infty) = 1$, so $\phi_F(0)-\widehat h(0)=0$. Moreover, $\phi_F'(0)-\widehat h\,'(0)$ is finite by the assumptions on $F$ and $h$. Integrating the above identity with respect to $u$ yields $$(F-H)*\vartheta_\delta (u)={1\over 2\pi}\int_{-\delta^{-2}}^{\delta^{-2}} {e^{iu\xi}\over i\xi} \big( \phi_F(\xi)-\widehat h(\xi)\big) \widehat{\vartheta_\delta}(\xi) \,{\rm d}\xi .$$
Here, the constant term is zero because, when $u\to\pm\infty$, the left hand side tends to zero and, by the above observations, the integrand is a bounded function, so the integral in the right hand side also tends to zero as $u\to\pm\infty$ by the Riemann–Lebesgue lemma. The desired result follows from Lemma \ref{lemma:BE-feller}.
\end{proof}
\medskip
Fix $x:=[v]\in \P^{d-1}, y:=[f]\in (\P^{d-1})^*$ and consider the pairing $$\Delta(x,y):= \frac{ |\langle f, v \rangle |}{\norm{f} \norm{v}},$$ where $\langle \cdot , \cdot \rangle $ denotes the natural pairing between $\mathbb{R}^d$ and $(\mathbb{R}^d)^*$. One can easily check that $\Delta(x,y) = d (x,H_y)$, where $H_y := \P(\ker f)$ and $d$ is the distance defined in Section \ref{sec:prelim}. Using these definitions, it is not hard to see that
\begin{equation}\label{eq:coeff-split}
\log{ |\langle f, S_n v \rangle | \over \norm{f} \norm{v}} = \sigma(S_n,x) + \log d(S_n x, H_y).
\end{equation}
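The identity above is elementary: it is just the factorisation $\frac{|\langle f, gv\rangle|}{\norm{f}\norm{v}} = \frac{\norm{gv}}{\norm{v}} \cdot \frac{|\langle f, gv\rangle|}{\norm{f}\norm{gv}}$. The short sketch below (with random Gaussian data, our illustrative choice) verifies it numerically, using that $d(gx,H_y)$ equals $\Delta(gx,y)$.

```python
import numpy as np

rng = np.random.default_rng(4)
norm = np.linalg.norm

g = rng.normal(size=(3, 3))   # a.s. invertible
v = rng.normal(size=3)
f = rng.normal(size=3)        # a functional, via the standard pairing

lhs = np.log(abs(f @ (g @ v)) / (norm(f) * norm(v)))
sigma_term = np.log(norm(g @ v) / norm(v))                        # sigma(g, x)
dist_term = np.log(abs(f @ (g @ v)) / (norm(f) * norm(g @ v)))    # log d(g x, H_y)
```

By Cauchy-Schwarz the distance term is $\leq 0$, which explains why the coefficients can be much smaller than the norm cocycle predicts.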
The strategy to prove Theorem \ref{thm:BE-coeff} is to use the above formula and work with the random variable $\sigma(S_n,x) +\log d(S_n x, H_y)$ instead of $\log{ |\langle f, S_n v \rangle | \over \norm{f} \norm{v}} $. Then, the behaviour of $\sigma(S_n,x)$ can be studied via the perturbed Markov operators and the term $\log d(S_n x, H_y)$ can be handled using the large deviation estimates from Section \ref{sec:prelim} combined with a good partition of unity that we now introduce.
\medskip
For integers $k \geq 0$ introduce
\begin{align*}
\cali{T}_k := \big\{ w \in \P^{d-1} :\, e^{-k-1} < d(w,H_y) < e^{-k+1} \big\} = \mathbb{B}(H_y,e^{-k+1}) \setminus \overline{\mathbb{B}(H_y,e^{-k-1})} .
\end{align*}
Note that these open sets cover $\P^{d-1} \setminus H_y$: since $\P^{d-1}$ has diameter one, every $w \notin H_y$ satisfies $e^{-k-1} < d(w,H_y) < e^{-k+1}$ for some $k \geq 0$.
\begin{lemma} \label{lemma:partition-of-unity}
There exist non-negative smooth functions $\chi_k$ on $\P^{d-1}$, $k \geq 0$, such that
\begin{enumerate}
\item $\chi_k$ is supported by $\cali{T}_k$;
\item If $w \in \P^{d-1} \setminus H_y$, then $\chi_k(w) \neq 0$ for at most two values of $k$;
\item $\sum_{k\geq 0} \chi_k=1$ on $\P^{d-1} \setminus H_y$;
\item $\norm{\chi_k}_{\cali{C}^1}\leq 12e^{k}$.
\end{enumerate}
\end{lemma}
\begin{proof}
It is easy to find a smooth function $0 \leq \widetilde \chi \leq 1$ supported by $(-1,1)$ such that $\widetilde \chi(t) = 1$ for $|t|$ small, $\widetilde \chi(t) + \widetilde \chi(t-1)= 1$ for $0 \leq t\leq 1$ and $\norm{\widetilde \chi}_{\cali{C}^1}\leq 4$. Define $\widetilde \chi_k (t) := \widetilde \chi(t+k)$. We see that $\widetilde \chi_k$ is supported by $(-k-1,-k+1)$, $\sum_{k\geq 0} \widetilde \chi_k=1$ on $\mathbb{R}_{\leq 0}$ and $\norm{\widetilde \chi_k}_{\cali{C}^1}\leq 4$. Set $\chi_k(w):= \widetilde \chi_k \big( \log d(w,H_y) \big)$. One can easily check that the function $\Psi(w) := \log d(w,H_y)$ satisfies $\norm{\Psi|_{\cali{T}_k }}_{\cali{C}^1} \leq e^{k+1}$. It follows that $\chi_k$ satisfies (1)--(4).
\end{proof}
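The one-dimensional construction in the proof above can be illustrated numerically. The sketch below, which is not part of the proof, replaces the smooth bump $\widetilde \chi$ by the piecewise-linear tent function $t \mapsto \max(0, 1-|t|)$; this stand-in already satisfies the support, overlap and summation properties, and the smooth $\widetilde \chi$ of the lemma can be obtained from it by mollification.

```python
# Illustration of the one-dimensional partition of unity from the proof of
# Lemma "partition-of-unity". The piecewise-linear tent function replaces the
# smooth bump: it is supported by (-1, 1), satisfies tent(t) + tent(t-1) = 1
# on [0, 1] and has Lipschitz constant 1 <= 4.

def tent(t):
    return max(0.0, 1.0 - abs(t))

def chi(k, t):
    # chi~_k(t) = chi~(t + k), supported by (-k-1, -k+1)
    return tent(t + k)

# On R_{<=0} the translates sum to one and at most two are non-zero.
for t in [0.0, -0.3, -1.0, -2.5, -7.9]:
    values = [chi(k, t) for k in range(20)]
    nonzero = [v for v in values if v > 0]
    assert abs(sum(values) - 1.0) < 1e-12
    assert len(nonzero) <= 2
print("partition checks passed")
```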
We now begin the proof of Theorem \ref{thm:BE-coeff}. It suffices to prove Theorem \ref{thm:BE-coeff} for intervals of the type $J=(-\infty,b]$ with $b \in \mathbb{R}$, as the case of an arbitrary interval can be obtained as a consequence. For example, the case $(b,+\infty)$ follows directly by considering its complement. The case of $[b, +\infty)$ can be deduced by approximating it by $(b \pm \varepsilon, + \infty)$ and the case $(-\infty,b)$ follows by taking the complement. The case of bounded intervals can be obtained by considering differences of the previous cases.
Let $A>0$ be a large constant. By Proposition \ref{prop:BQLDT} applied with $\epsilon = 1$, there exists a constant $c>0$ such that, for $\ell,m$ large enough with $\ell\geq m$, one has
$$ \mu^{*\ell} \big\{g\in G:\, d(gx, H_y) \leq e^{- m} \big\} \leq e^{-cm}. $$
Setting $\ell:=n$ and $m:=\lfloor A \log n \rfloor$ with $n$ large enough yields
$$ \mu^{*n} \big\{g\in G:\,d(gx,H_y)\leq n^{-A} \big\}\leq e^{-c\lfloor A \log n \rfloor} \leq n^{-cA}e^c \leq e^c/\sqrt n,$$
since $A$ is large.
It follows that $\log d(S_n x,H_y)\leq -A\log n $ with probability less than $e^c/\sqrt n$. Hence, in order to prove Theorem \ref{thm:BE-coeff}, it is enough to show that
\begin{equation}\label{goal-1-varphi}
\Big| \mathcal{L}_n(b) - \frac{1}{\sqrt{2 \pi} \,\varrho} \int_{-\infty}^b e^{-\frac{s^2}{2 \varrho^2}} \, {\rm d} s \Big| \lesssim \frac{1}{\sqrt n},
\end{equation}
uniformly in $b$, where
$$\mathcal{L}_n(b):= \mathbf E \Big( \mathbf 1_{{\sigma(S_n,x) +\log d(S_n x, H_y) - n \gamma\over \sqrt n}\leq b} \mathbf 1_{ \log d(S_n x,H_y)> - A \log n } \Big),$$
where we use $\mathbf 1_\bigstar$ to denote the indicator function of a set defined by the property $\bigstar$.
Let $\chi_k$ be as in Lemma \ref{lemma:partition-of-unity}. It is clear that
$$\sum_{0\leq k\leq A\log n - 1}\chi_k(w) \leq \mathbf 1_{ \log d(w,H_y)> - A \log n} \leq \sum_{0\leq k\leq A\log n+1}\chi_k(w)$$ as functions on $\P^{d-1}$. Using that $\chi_k$ is supported by $\cali{T}_k$, it follows that
\begin{align} \label{eq:Ln-two-sided-bound}
\sum_{0\leq k\leq A\log n-1} \hspace{-7pt} \mathbf{E}\Big( \mathbf 1_{{\sigma(S_n,x) - n \gamma - k+1\over \sqrt n}\leq b} \, \chi_k(S_n x) \Big) \leq \mathcal{L}_n(b)\leq \hspace{-5pt} \sum_{0\leq k\leq A\log n+1} \hspace{-7pt} \mathbf E\Big( \mathbf 1_{{\sigma(S_n,x) - n \gamma - k-1\over \sqrt n}\leq b} \, \chi_k(S_n x) \Big).
\end{align}
For $w \in \P^{d-1}$, let $$\Phi_{n}^{\star} (w):= 1 - \sum_{0\leq k\leq A\log n} \chi_k(w)$$ and define, for $b \in \mathbb{R}$,
\begin{align*}
F_n(b):&= \sum_{0\leq k\leq A\log n} \mathbf{E}\Big( \mathbf 1_{{\sigma(S_n,x) - n \gamma - k\over \sqrt n}\leq b} \chi_k (S_n x) \Big) + \mathbf{E} \Big( \mathbf 1_{{\sigma(S_n,x) - n \gamma \over \sqrt n}\leq b} \, \Phi_{n}^{\star} (S_n x) \Big).
\end{align*}
Notice that $F_n$ is non-decreasing, right-continuous, $F_n(-\infty)=0$ and $F_n(\infty)=1$. Therefore, it is the c.d.f.\ of some probability distribution. We'll see that the term involving $\Phi_{n}^{\star}$ has a negligible impact on our estimates; nevertheless, its presence will be useful in the computations below.
\begin{lemma} \label{lemma:Ln-Fn}
Let $\mathcal{L}_n$ and $F_n$ be as above. Then, there exists a constant $C>0$ independent of $n$ such that for all $n \geq 1$ and $b \in \mathbb{R},$ $$ F_n(b - 1 / \sqrt n) + C / \sqrt n \leq \mathcal{L}_n(b) \leq F_n(b + 1 / \sqrt n) + C / \sqrt n.$$
\end{lemma}
\begin{proof}
Notice first that $\Phi_{n}^{\star}$ is non-negative, bounded by one and supported by a tubular neighborhood $\mathbf T_n$ of $H_y$ of radius $O(n^{-A})$. As discussed above, the probability that $S_n x$ belongs to $\mathbf T_n$ is $\lesssim 1 / \sqrt n$. This yields the following bounds for the second term in the definition of $F_n$: $$0 \leq \mathbf{E} \Big( \mathbf 1_{{\sigma(S_n,x) - n \gamma \over \sqrt n}\leq b} \, \Phi_{n}^{\star} (S_n x) \Big) \leq \mathbf{E} \Big(\Phi_{n}^{\star}(S_n x) \Big) \lesssim 1 / \sqrt n.$$
Therefore, in order to prove the lemma, we can replace $F_n$ by the function
\begin{equation} \label{eq:def-Fn-tilde}
\widetilde F_n(b) := \sum_{0\leq k\leq A\log n} \mathbf{E}\Big( \mathbf 1_{{\sigma(S_n,x) - n \gamma - k \over \sqrt n}\leq b} \chi_k (S_n x) \Big).
\end{equation}
Using the second inequality in \eqref{eq:Ln-two-sided-bound}, we have
\begin{align*}
\mathcal{L}_n(b) - \widetilde F_n(b + 1 / \sqrt n) \leq \mathbf{E}\Big( \mathbf 1_{{\sigma(S_n,x) - n \gamma - k^+-1\over \sqrt n}\leq b} \chi_{k^+} (S_n x) \Big) \leq \mathbf{E}\Big( \chi_{k^+} (S_n x) \Big),
\end{align*}
where $k^+:= \lfloor A \log n \rfloor + 1$. Since $\chi_k \leq \mathbf 1_{\mathbb{B}(H_y,e^{-k+1})}$ and $\log d(S_n x,H_y)\leq -A\log n +1 $ with probability $\lesssim 1/\sqrt n$, the above quantity is $\lesssim 1/\sqrt n$. This gives the second inequality in the lemma.
Using now the first inequality in \eqref{eq:Ln-two-sided-bound} and letting $k^-:= \lfloor A \log n \rfloor$, we obtain
\begin{align*}
\widetilde F_n(b - 1 /\sqrt n) - \mathcal{L}_n(b) \leq \mathbf{E}\Big( \mathbf 1_{{\sigma(S_n,x) - n \gamma - k^- +1\over \sqrt n}\leq b} \chi_{k^-} (S_n x) \Big) \leq \mathbf{E}\Big(\chi_{k^-} (S_n x) \Big),
\end{align*}
which is $\lesssim 1/\sqrt n$ by the same arguments as before. The lemma follows.
\end{proof}
Introduce $$\Phi_{n,\xi} (w):= \sum_{0\leq k\leq A\log n} e^{i \xi{k \over \sqrt n}}\chi_k(w).$$
\begin{lemma} \label{lemma:char-function-Fn}
The conjugate characteristic function of $F_n$ (cf.\ Definition \ref{def:conjugate-cf}) is given by
$$\phi_{F_n}(\xi)= e^{i\xi\sqrt n \gamma} \mathcal{P}_{-{i\xi\over \sqrt n}}^n \Phi_{n,\xi} (x) + e^{i\xi\sqrt n \gamma} \mathcal{P}_{-{i\xi\over \sqrt n}}^n \Phi_{n}^{\star} (x) = e^{i\xi\sqrt n \gamma} \mathcal{P}_{-{i\xi\over \sqrt n}}^n \big( \Phi_{n,\xi} + \Phi_{n}^{\star} \big) (x).$$
In particular, $\phi_{F_n}$ is differentiable near zero.
\end{lemma}
\begin{proof}
Recall that $x$ is fixed. Let $c_{k,n}:= \int_G \chi_k(gx) \, {\rm d} \mu^{\ast n} (g)$ and $\mu_{k,n}: = c^{-1}_{k,n} \, \chi_k(gx) \, \mu^{\ast n}$, which is a probability measure on $G$ that is absolutely continuous with respect to $\mu^{\ast n}$. Let $Z_{n,k}$ be the measurable function ${\sigma(g,x) - n \gamma - k \over \sqrt n}$ on the probability space $(G,\mu_{k,n})$. The corresponding c.d.f.\ is $$F_{Z_{n,k}}(b) = c^{-1}_{k,n} \int_G \mathbf 1_{{\sigma(g,x) - n \gamma - k\over \sqrt n}\leq b} \chi_k(gx) \, {\rm d} \mu^{\ast n} (g)$$
and the associated conjugate characteristic function is $$\phi_{F_{Z_{n,k}}}(\xi) = c^{-1}_{k,n} \int_G e^{-i \xi{\sigma(g,x) - n \gamma - k \over \sqrt n}} \chi_k(gx) \, {\rm d} \mu^{\ast n} (g) = c^{-1}_{k,n} e^{i\xi\sqrt n \gamma} \mathcal{P}_{-{i\xi\over \sqrt n}}^n \big(e^{i \xi{k\over \sqrt n}}\chi_k \big) (x),$$
where we have used \eqref{eq:markov-op-iterate}.
Analogously, set $d_n:= \int_G \Phi_{n}^{\star}(gx) \, {\rm d} \mu^{\ast n}(g)$, consider the probability measure $\eta_{n}: = d^{-1}_{n} \, \Phi_{n}^{\star} (gx) \, \mu^{\ast n}$ and let $W_{n}$ be the measurable function ${\sigma(g,x) - n \gamma \over \sqrt n}$ on the probability space $(G,\eta_{n})$. Then, the corresponding c.d.f.\ is $$F_{W_n}(b) = d^{-1}_{n} \int_G \mathbf 1_{{\sigma(g,x) - n \gamma \over \sqrt n}\leq b} \Phi_{n}^{\star}(gx) \, {\rm d} \mu^{\ast n} (g)$$
and the associated conjugate characteristic function is
\begin{align*}
\phi_{F_{W_n}}(\xi) = d^{-1}_{n} \int_G e^{-i \xi{\sigma(g,x) - n \gamma \over \sqrt n}} \Phi_{n}^{\star}(gx) \, {\rm d} \mu^{\ast n} (g) = d^{-1}_{n} e^{i\xi\sqrt n \gamma} \mathcal{P}_{-{i\xi\over \sqrt n}}^n \Phi_{n}^{\star} (x).
\end{align*}
Notice that, by definition, $F_n = \sum_{0\leq k\leq A\log n} c_{k,n} F_{Z_{n,k}} + d_n F_{W_n}$ so, by linearity, $\phi_{F_n} = \sum_{0\leq k\leq A\log n} c_{k,n} \phi_{F_{Z_{n,k}}} + d_n \phi_{F_{W_n}}$. Using the definition of $\Phi_{n,\xi}$, the lemma follows.
\end{proof}
From now on, we fix the value of the constant $A >0$ used above. Fix a value of $\alpha>0$ such that $$\alpha A\leq 1/6 \quad \text{ and } \quad \alpha \leq \alpha_0,$$ where $0<\alpha_0<1$ is the exponent appearing in Theorem \ref{thm:spectral-gap}. Then, from the results of Subsection \ref{subsec:markov-op}, the family $\xi\mapsto\mathcal{P}_{i\xi}$ acting on $\cali{C}^\alpha(\P^{d-1})$ with $\xi \in \mathbb{R}$ is everywhere defined, analytic near $0$ and $\mathcal{P}_0$ has a spectral gap. The bound $\alpha A\leq 1/6$ is chosen so that we can control the impact of the H\"older norms of $\Phi_{n,\xi}$ and $\Phi_{n}^{\star}$ in our estimates, as shown in the following lemma.
\begin{lemma} \label{lemma:norm-Phi}
Let $\Phi_{n,\xi}, \Phi_{n}^{\star}$ be the functions on $\P^{d-1}$ defined above. Then, the following identity holds
\begin{equation} \label{eq:psi_xi+psi_T}
\Phi_{n,\xi} + \Phi_{n}^{\star} = \mathbf 1 + \sum_{0\leq k\leq A\log n} \big( e^{i \xi{k \over \sqrt n}} - 1 \big) \chi_k.
\end{equation}
Moreover, $\norm{\Phi_{n,\xi} + \Phi_{n}^{\star}}_\infty \leq 1$ and there is a constant $C>0$ independent of $\xi$ and $n$ such that
\begin{equation} \label{eq:norm-Phi}
\norm{\Phi_{n,\xi} }_{\cali{C}^\alpha}\leq C \, n^{\alpha A} \quad\text{and}\quad \norm{\Phi_{n}^{\star} }_{\cali{C}^\alpha} \leq C \, n^{\alpha A},
\end{equation}
where $\alpha>0$ is the exponent fixed above. In addition, $\Phi_{n}^{\star} $ is supported by $\big\{w:\,\log d(w,H_y)\leq -A\log n + 1\big\}$.
\end{lemma}
\begin{proof}
Identity \eqref{eq:psi_xi+psi_T} follows directly from the definition of $\Phi_{n,\xi}$ and $ \Phi_{n}^{\star}$. Also from the definition, we have $|\Phi_{n,\xi} + \Phi_{n}^{\star} | \leq \Phi_{n,0} + \Phi_{n}^{\star}$ and the last function is identically equal to $1$ by \eqref{eq:psi_xi+psi_T}. It follows that $\norm{\Phi_{n,\xi} + \Phi_{n}^{\star}}_\infty \leq 1$.
For the first inequality in \eqref{eq:norm-Phi}, notice that $\big \| e^{i \xi{k \over \sqrt n}}\chi_k \big\|_{\cali{C}^\alpha} = \norm{\chi_k}_{\cali{C}^\alpha}$. From Lemma \ref{lemma:partition-of-unity}-(4), the fact that $\norm{\chi_k}_{\cali{C}^0} \leq 1$ and the interpolation inequality $\norm{\, \cdot \,}_{\cali{C}^\alpha} \leq c_\alpha \norm{\, \cdot \,}_{\cali{C}^0}^{1-\alpha} \, \norm{\, \cdot \,}_{\cali{C}^1}^\alpha$ (see \cite[p. 202]{triebel}), it follows that $\norm{\chi_k}_{\cali{C}^\alpha} \leq 12 c_\alpha e^{\alpha k}$. The last inequality can also be checked by a direct computation. Then, the first inequality in \eqref{eq:norm-Phi} follows from the definition of $\Phi_{n,\xi}$ and the fact that at most two $\chi_k$'s are non-zero simultaneously. The second inequality in \eqref{eq:norm-Phi} follows from the first one and the identity $\Phi_{n,0} + \Phi_{n}^{\star} = \mathbf 1$, after increasing the value of $C$ if necessary.
In order to prove the last assertion, observe that, over $\P^{d-1} \setminus H_y$, one has $\Phi_{n}^{\star} = \sum_{k > A\log n} \chi_k$ by Lemma \ref{lemma:partition-of-unity}-(3). Since $\chi_k$ is supported by $\cali{T}_k$, the conclusion follows. This finishes the proof of the lemma.
\end{proof}
Let $$H(b): =\frac{1}{\sqrt{2 \pi} \,\varrho} \int_{-\infty}^b e^{-\frac{s^2}{2 \varrho^2}} \, {\rm d} s$$ be the c.d.f.\ of the normal distribution $\cali N(0;\varrho^2)$. In the notation of Lemma \ref{lemma:BE-feller}, we have $h(u): =\frac{1}{\sqrt{2 \pi} \,\varrho} e^{-\frac{u^2}{2 \varrho^2}}$ and $\widehat h(\xi) = e^{-{\varrho^2 \xi^2\over 2}}$. Let $\xi_0$ be the constant in Lemma \ref{lemma:lambda-estimates}.
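As a quick numerical sanity check of the pair $(h, \widehat h)$, not needed for the proof, one can approximate the conjugate characteristic function $\int_{\mathbb{R}} e^{-i\xi u} h(u)\,{\rm d} u$ by a midpoint Riemann sum; by symmetry of $h$, only the cosine part contributes. The value $\varrho = 0.7$ below is an arbitrary test value.

```python
import math

# Sanity check: the conjugate characteristic function of the normal density
# h(u) = (2 pi)^{-1/2} rho^{-1} exp(-u^2 / (2 rho^2)) is exp(-rho^2 xi^2 / 2).
# We verify this with a plain midpoint Riemann sum; rho = 0.7 is arbitrary.

def h(u, rho):
    return math.exp(-u * u / (2.0 * rho * rho)) / (math.sqrt(2.0 * math.pi) * rho)

def h_hat(xi, rho, n_steps=100000, cutoff=10.0):
    # Approximates the integral of exp(-i xi u) h(u) du; the imaginary part
    # vanishes by symmetry, so we integrate cos(xi u) h(u) over [-cutoff, cutoff].
    du = 2.0 * cutoff / n_steps
    total = 0.0
    for j in range(n_steps):
        u = -cutoff + (j + 0.5) * du
        total += math.cos(xi * u) * h(u, rho) * du
    return total

rho = 0.7
for xi in [0.0, 0.5, 1.3]:
    assert abs(h_hat(xi, rho) - math.exp(-rho**2 * xi**2 / 2.0)) < 1e-5
print("Gaussian characteristic function verified")
```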
\medskip
\begin{lemma} \label{lemma:F_n-estimate}
Let $F_n$ and $H$ be as above. Then, $\big|F_n(b)-H(b)\big|\lesssim 1/\sqrt n$ for $|b|\geq \xi_0 \sqrt n$.
\end{lemma}
\begin{proof}
We only consider the case of $b\leq -\xi_0\sqrt n$. The case $b \geq \xi_0\sqrt n$ can be treated similarly using $1-F_n$ and $1-H$ instead of $F_n$ and $H$. We can also assume that $n$ is large enough. Clearly, $H(b)\lesssim 1/\sqrt n$ for $b\leq -\xi_0\sqrt n$, so it is enough to bound $F_n(b)$.
For $0\leq k\leq A\log n$, we have
\begin{align*}
\mathbf P\Big( {\sigma(S_n,x) - n \gamma - k \over \sqrt n}\leq -\xi_0\sqrt n \Big)&=\mathbf P\Big(\sigma(S_n,x) - n \gamma \leq -\xi_0 n +k \Big)\\
&\leq \mathbf P\Big(\sigma(S_n,x) - n \gamma \leq - \xi_0 n / 2\Big),
\end{align*}
since $n$ is large. By Proposition \ref{prop:BQLDT} applied with $\epsilon= \xi_0 / 2$, there exists a constant $c>0$, independent of $n$, such that the last quantity is bounded by $e^{-cn}$.
Using the definition of $F_n$ and the fact that $\mathbf{E} \big(\Phi_{n}^{\star}(S_n x) \big) \lesssim 1 / \sqrt n$ (see the proof of Lemma \ref{lemma:Ln-Fn}), it follows that $$ F_n (-\xi_0\sqrt n)\lesssim \sum_{0\leq k\leq A\log n}e^{-cn} + 1/\sqrt n \lesssim (A \log n) e^{-cn} + 1/\sqrt n \lesssim 1/\sqrt n.$$ As $F_n(b)$ is non-decreasing in $b$, one gets that $F_n(b) \lesssim 1 / \sqrt n$ for all $b\leq -\xi_0\sqrt n$. The lemma follows.
\end{proof}
Lemmas \ref{lemma:char-function-Fn} and \ref{lemma:F_n-estimate} imply that $F_n$ satisfies the conditions of Corollary \ref{cor:BE-feller} with $\delta_n:=(\xi_0 \sqrt n)^{-1/2}$. Let $\kappa > 1$ be the constant appearing in that corollary. For simplicity, by taking a smaller $\xi_0$ if necessary, one can assume that $2\kappa \xi_0 \leq 1$. Then, Corollary \ref{cor:BE-feller} gives that
\begin{equation} \label{eq:BE-main-estimate}
\sup_{b \in\mathbb{R}}\big|F_n(b)-H(b)\big|\leq {1\over \pi} \sup_{|b|\leq \sqrt n} \Big| \int_{-\xi_0\sqrt n}^{\xi_0\sqrt n} {\Theta_b(\xi) \over \xi} \,{\rm d} \xi \Big|+{C\over \sqrt n},
\end{equation}
where $C>0$ is a constant independent of $n$ and
$$\Theta_b(\xi):=e^{ib\xi}\big( \phi_{F_n}(\xi)- \widehat {h}(\xi) \big)\widehat{\vartheta_{\delta_n}}(\xi).$$
We now estimate the integral on the right-hand side of \eqref{eq:BE-main-estimate}. Define
$$\widetilde h_n(\xi):= \big(\mathcal{N}_0 \Phi_{n,\xi} \big) e^{-{\varrho^2 \xi^2\over 2}} + \big(\mathcal{N}_0 \Phi_{n}^{\star} \big) e^{-{\varrho^2 \xi^2\over 2}} = e^{-{\varrho^2 \xi^2\over 2}} \mathcal{N}_0 ( \Phi_{n,\xi} + \Phi_{n}^{\star} ).$$ In light of Lemma \ref{lemma:char-function-Fn}, we will use it to approximate $\phi_{F_n}$ (see Proposition \ref{prop:spectral-decomp} and Lemma \ref{lemma:lambda-estimates}). Notice that $\mathcal{N}_0 ( \Phi_{n,\xi} + \Phi_{n}^{\star} )$ is a constant independent of $x$. Define also
$$\Theta_b^{(1)}(\xi):= e^{ib\xi}\big( \phi_{F_n}(\xi)- \widetilde {h}_n(\xi) \big)\widehat{\vartheta_{\delta_n}}(\xi) \quad\text{and}
\quad \Theta_b^{(2)}(\xi):= e^{ib\xi}\big( \widetilde {h}_n(\xi)-\widehat h(\xi) \big)\widehat{\vartheta_{\delta_n}}(\xi),$$
so that $\Theta_b = \Theta_b^{(1)} + \Theta_b^{(2)}$.
\medskip
\begin{lemma} \label{lemma:theta-1-bound} We have
$$\sup_{|b|\leq \sqrt n} \Big| \int_{-\xi_0\sqrt n}^{\xi_0\sqrt n} {\Theta_b^{(1)}(\xi) \over \xi} \,{\rm d} \xi \Big| \lesssim {1\over \sqrt n}.$$
\end{lemma}
\begin{proof}
Using Lemma \ref{lemma:char-function-Fn} and the decomposition in Proposition \ref{prop:spectral-decomp}, we write
$$\Theta_b^{(1)}=\Lambda_1+\Lambda_2+\Lambda_3,$$
where
$$ \Lambda_1(\xi):= e^{ib\xi}\Big( e^{i\xi\sqrt n \gamma} \lambda_{-{i\xi\over \sqrt n}}^n \mathcal{N}_{-{i\xi\over \sqrt n}} (\Phi_{n,\xi} +\Phi_{n}^{\star} ) (x) - e^{-{\varrho^2 \xi^2\over 2}} \mathcal{N}_0 (\Phi_{n,\xi} +\Phi_{n}^{\star} ) (x) \Big) \widehat{\vartheta_{\delta_n}}(\xi), $$
$$\Lambda_2(\xi):=e^{ib\xi}\Big( e^{i\xi\sqrt n \gamma} \mathcal{Q}_{-{i\xi\over \sqrt n}}^n (\Phi_{n,\xi} +\Phi_{n}^{\star} )(x) -e^{i\xi\sqrt n \gamma}\mathcal{Q}_0^n (\Phi_{n,\xi} +\Phi_{n}^{\star} )(x) \Big)\widehat{\vartheta_{\delta_n}}(\xi)$$
and
$$ \Lambda_3(\xi):=e^{ib\xi}e^{i\xi\sqrt n \gamma}\mathcal{Q}_0^n (\Phi_{n,\xi} +\Phi_{n}^{\star} )(x) \,\widehat{\vartheta_{\delta_n}}(\xi).$$
We will estimate the integral of $\Lambda_j(\xi) / \xi$, $j=1,2,3$, separately. Notice that, from \eqref{eq:psi_xi+psi_T}, we have that $\Phi_{n,0} + \Phi_{n}^{\star} = \mathbf 1 $. Together with the fact that $\lambda_0=1$, $\mathcal{N}_0 \mathbf 1 = \mathbf 1$ and $\mathcal{Q}_0 \mathbf 1 = 0$, we get that $\Lambda_j(0) = 0$ for $j=1,2,3$. In particular, $\Lambda_j(\xi) / \xi$ is a smooth function of $\xi$ for $j=1,2,3$. We see here the role of the auxiliary function $\Phi_{n}^{\star}$.
\vskip 5pt
In order to estimate $\Lambda_2$, observe that, for $z$ small, the norm of the operator $\mathcal{Q}^n_z - \mathcal{Q}^n_0$ is bounded by a constant times $|z| n \beta^n$ for some $0<\beta<1$. This can be seen by writing the last difference as $\sum_{\ell=0}^{n-1} \mathcal{Q}_z^{n-\ell-1}(\mathcal{Q}_z - \mathcal{Q}_0)\mathcal{Q}_0^\ell$, applying Proposition \ref{prop:spectral-decomp}-(5) and using the fact that $\norm{\mathcal{Q}_z - \mathcal{Q}_0}_{\cali{C}^\alpha} \lesssim |z|$. Therefore, we have
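The bound above rests on a purely algebraic telescoping identity, valid for any two square matrices or bounded operators. As a sanity check, the sketch below verifies $Q_z^n - Q_0^n = \sum_{\ell=0}^{n-1} Q_z^{n-\ell-1}(Q_z - Q_0)Q_0^\ell$ for arbitrary $2\times 2$ test matrices; these have nothing to do with the actual operators $\mathcal{Q}_z$ of the paper.

```python
# Check of the telescoping identity
#   Qz^n - Q0^n = sum_{l=0}^{n-1} Qz^{n-l-1} (Qz - Q0) Q0^l
# on arbitrary 2x2 matrices (illustrative test data only).

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_pow(a, n):
    result = [[1.0, 0.0], [0.0, 1.0]]  # 2x2 identity
    for _ in range(n):
        result = mat_mul(result, a)
    return result

def mat_sub(a, b):
    return [[a[i][j] - b[i][j] for j in range(2)] for i in range(2)]

def mat_add(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def telescope(qz, q0, n):
    # Right-hand side of the telescoping identity.
    total = [[0.0, 0.0], [0.0, 0.0]]
    diff = mat_sub(qz, q0)
    for l in range(n):
        term = mat_mul(mat_pow(qz, n - l - 1), mat_mul(diff, mat_pow(q0, l)))
        total = mat_add(total, term)
    return total

qz = [[0.3, 0.1], [-0.2, 0.4]]
q0 = [[0.25, 0.05], [-0.1, 0.35]]
n = 6
lhs = mat_sub(mat_pow(qz, n), mat_pow(q0, n))
rhs = telescope(qz, q0, n)
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2))
print("telescoping identity verified")
```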
$$ \Big| \mathcal{Q}_{-{i\xi\over \sqrt n}}^n (\Phi_{n,\xi} +\Phi_{n}^{\star} )(x) - \mathcal{Q}_0^n (\Phi_{n,\xi} +\Phi_{n}^{\star} )(x) \Big| \lesssim {|\xi|\over \sqrt n} n \beta^n \norm{ \Phi_{n,\xi} +\Phi_{n}^{\star} }_{\cali{C}^\alpha}.$$
Using Lemma \ref{lemma:norm-Phi}, this gives
\begin{align*}
\Big|\int_{-\xi_0\sqrt n}^{\xi_0\sqrt n} {\Lambda_2(\xi) \over \xi} \,{\rm d} \xi\Big|
&\leq\int_{-\xi_0\sqrt n}^{\xi_0\sqrt n} {1\over |\xi|} \cdot \Big| \mathcal{Q}_{-{i\xi\over \sqrt n}}^n (\Phi_{n,\xi} +\Phi_{n}^{\star} )(x) - \mathcal{Q}_0^n (\Phi_{n,\xi} +\Phi_{n}^{\star} )(x) \Big|\,{\rm d}\xi \\
&\lesssim \int_{-\xi_0\sqrt n}^{\xi_0\sqrt n} {1\over |\xi|} \cdot |\xi| \sqrt n \beta^n \norm{ \Phi_{n,\xi} +\Phi_{n}^{\star} }_{\cali{C}^\alpha} \,{\rm d}\xi \lesssim \beta^n n^{\alpha A + 1} \lesssim {1\over \sqrt n}.
\end{align*}
We now estimate $\Lambda_3$ using its derivative $\Lambda'_3$. Recall that $\norm{\widehat{\vartheta_{\delta_n}}}_{\cali{C}^1}\lesssim 1$, $|b|\leq \sqrt n$ and $\big|\mathcal{Q}_0^n (\Phi_{n,\xi} +\Phi_{n}^{\star} )(x) \big|\lesssim \beta^n\norm{ \Phi_{n,\xi} +\Phi_{n}^{\star} }_{\cali{C}^\alpha}$, where $0<\beta<1$ is as before. A direct computation using the definition of $\Phi_{n,\xi} $ gives
\begin{align*}
\sup_{|\xi|\leq \sqrt n} | \Lambda_3'(\xi)| &\leq \Big|b \mathcal{Q}_0^n (\Phi_{n,\xi} +\Phi_{n}^{\star} )(x)\Big|+ \Big|\sqrt n \gamma \mathcal{Q}_0^n (\Phi_{n,\xi} +\Phi_{n}^{\star} )(x)\Big| \\ & \quad \quad + \sum_{1 \leq k \leq A \log n} \Big|{k\over \sqrt n} e^{i \xi \frac{k}{\sqrt n}}\mathcal{Q}_0^n \chi_k (x) \Big| + \Big|\mathcal{Q}_0^n (\Phi_{n,\xi} +\Phi_{n}^{\star} )(x)\Big| \\ &\lesssim \sqrt n \beta^n n^{\alpha A} + \frac{(\log n)^2}{\sqrt n} \beta^n n^{\alpha A} + \beta^n n^{\alpha A} \lesssim (1+\sqrt n) \beta^n n^{\alpha A},
\end{align*}
where we have used that $\norm{\chi_k}_{\cali{C}^\alpha} \lesssim e^{\alpha k}$ and $\norm{ \Phi_{n,\xi} +\Phi_{n}^{\star} }_{\cali{C}^\alpha} \lesssim n^{\alpha A}$, see Lemma \ref{lemma:norm-Phi}.
Applying the mean value theorem over the interval between $0$ and $\xi$ yields
\begin{align*}
\Big|\int_{-\xi_0\sqrt n}^{\xi_0\sqrt n} {\Lambda_3(\xi) \over \xi} \,{\rm d} \xi\Big|\leq 2\xi_0\sqrt n\sup_{|\xi|\leq \xi_0\sqrt n} | \Lambda_3'(\xi)|\lesssim \sqrt n (1+\sqrt n) n^{\alpha A} \beta^n \lesssim {1 \over \sqrt n}.
\end{align*}
\vskip 5pt
It remains to estimate the term involving $\Lambda_1$. We have
$$ \Big|\int_{-\xi_0\sqrt n}^{\xi_0\sqrt n} {\Lambda_1(\xi) \over \xi} {\rm d} \xi\Big| \leq \int_{-\xi_0\sqrt n}^{\xi_0\sqrt n} {1\over |\xi|}\cdot \Big| e^{i\xi\sqrt n \gamma} \lambda_{-{i\xi\over \sqrt n}}^n \mathcal{N}_{-{i\xi\over \sqrt n}} (\Phi_{n,\xi} +\Phi_{n}^{\star} ) (x) - e^{-{\varrho^2 \xi^2\over 2}} \mathcal{N}_0 (\Phi_{n,\xi} +\Phi_{n}^{\star} )(x) \Big| {\rm d} \xi.$$
We split the last integral into two integrals using
$$ \Gamma_1(\xi):= e^{i\xi\sqrt n \gamma} \lambda_{-{i\xi\over \sqrt n}}^n \mathcal{N}_{-{i\xi\over \sqrt n}} (\Phi_{n,\xi} +\Phi_{n}^{\star} ) (x) - e^{i\xi\sqrt n \gamma} \lambda_{-{i\xi\over \sqrt n}}^n \mathcal{N}_0 (\Phi_{n,\xi} +\Phi_{n}^{\star} )(x) $$
and
$$ \Gamma_2(\xi):= e^{i\xi\sqrt n \gamma} \lambda_{-{i\xi\over \sqrt n}}^n \mathcal{N}_0 (\Phi_{n,\xi} +\Phi_{n}^{\star} ) (x) - e^{-{\varrho^2 \xi^2\over 2}} \mathcal{N}_0 (\Phi_{n,\xi} +\Phi_{n}^{\star} ) (x) . $$
\vskip 5pt
\noindent\textbf{Case 1.} $\sqrt[6] n<|\xi|\leq \xi_0\sqrt n$. In this case, by Lemma \ref{lemma:lambda-estimates} we have
\begin{equation} \label{eq:case1-lambda}
\big|\lambda_{-{i\xi\over \sqrt n}}^n\big|\leq e^{-{\varrho^2\xi^2\over 3}} \quad \text{and}\quad \Big| e^{i\xi\sqrt n \gamma} \lambda_{-{i\xi\over \sqrt n}}^n-e ^{-{\varrho^2\xi^2\over 2}} \Big|\lesssim {1\over \sqrt n}e^{-{\varrho^2\xi^2\over 4}}.
\end{equation}
From the analyticity of $\xi\mapsto \mathcal{N}_{i\xi}$ (cf.\ Proposition \ref{prop:spectral-decomp}), Lemma \ref{lemma:norm-Phi} and the fact that $\alpha A\leq 1/6$, one has
$$\Big\|\big(\mathcal{N}_{-{i\xi\over \sqrt n}}-\mathcal{N}_0 \big) (\Phi_{n,\xi} +\Phi_{n}^{\star} ) \Big\|_\infty \lesssim {|\xi| \over \sqrt n} \norm{ \Phi_{n,\xi} +\Phi_{n}^{\star} }_{\cali{C}^\alpha}\lesssim {|\xi| \over \sqrt n}n^{\alpha A} \leq {|\xi| \over \sqrt n}\sqrt[6] n. $$
Hence, using \eqref{eq:case1-lambda}, we get
$$\int_{\sqrt[6] n<|\xi|\leq \xi_0\sqrt n} {1\over |\xi|}\cdot \big| \Gamma_1(\xi) \big| \,{\rm d} \xi \lesssim \int_{- \infty}^{\infty} {1\over \sqrt[6] n} \cdot e^{-{\varrho^2\xi^2\over 3}} {|\xi| \over \sqrt n}\sqrt[6] n \,{\rm d} \xi \lesssim { 1\over \sqrt n}.$$
Observe that $ \big| \Phi_{n,\xi} +\Phi_{n}^{\star} \big| \leq \Phi_{n,0} +\Phi_{n}^{\star} = 1$, so $\big|\mathcal{N}_0 (\Phi_{n,\xi} +\Phi_{n}^{\star} )\big| \leq 1$. Therefore, using \eqref{eq:case1-lambda}, we obtain
$$\int_{\sqrt[6] n<|\xi|\leq \xi_0\sqrt n} {1\over |\xi|}\cdot \big| \Gamma_2(\xi) \big| \,{\rm d} \xi \lesssim \int_{-\xi_0\sqrt n}^{\xi_0\sqrt n} {1\over \sqrt[6] n} \cdot{1\over \sqrt n} e^{-{\varrho^2\xi^2\over 4}} \,{\rm d} \xi \lesssim { 1\over \sqrt n}.$$
The bound for $\Lambda_1$ follows in this case.
\vskip 5pt
\noindent\textbf{Case 2.} $|\xi|\leq \sqrt[6] n$. In this case, Lemma \ref{lemma:lambda-estimates} gives that
\begin{equation} \label{eq:case2-lambda}
\big|\lambda_{-{i\xi\over \sqrt n}}^n\big|\leq e^{-{\varrho^2\xi^2\over 3}} \quad \text{and}\quad \Big| e^{i\xi\sqrt n \gamma} \lambda_{-{i\xi\over \sqrt n}}^n-e ^{-{\varrho^2\xi^2\over 2}} \Big|\lesssim {1\over \sqrt n}|\xi|^3e^{-{\varrho^2\xi^2\over 2}}.
\end{equation}
From \eqref{eq:psi_xi+psi_T} it follows that $\norm{\Phi_{n,\xi} +\Phi_{n}^{\star}}_{\cali{C}^\alpha}$ is bounded by
$$ 1 + \sum_{0\leq k\leq A\log n} \big| e^{i \xi{k \over \sqrt n}}-1 \big|\cdot \norm{\chi_k}_{\cali{C}^\alpha}\lesssim 1 + \sum_{0\leq k\leq A\log n}|\xi|{k \over \sqrt n} e^{\alpha k} \lesssim 1 + {\sqrt[6] n (\log n)^2 n^{\alpha A}\over \sqrt n} \lesssim 1 , $$
where in the last step we have used that $\alpha A\leq 1/6$. It follows from the analyticity of $\xi\mapsto\mathcal{N}_{i\xi}$ that
$$ \Big\|\big(\mathcal{N}_{-{i\xi\over \sqrt n}} -\mathcal{N}_0\big)(\Phi_{n,\xi} +\Phi_{n}^{\star} ) \Big\|_\infty \lesssim {|\xi|\over \sqrt n} .$$
We conclude, using \eqref{eq:case2-lambda}, that
$$\int_{|\xi|\leq\sqrt[6] n} {1\over |\xi|}\cdot \big| \Gamma_1(\xi) \big| \,{\rm d} \xi \lesssim \int_{|\xi|\leq\sqrt[6] n} {1\over |\xi|} \cdot e^{-{\varrho^2\xi^2\over 3}} {|\xi|\over \sqrt n} \,{\rm d} \xi \lesssim {1 \over \sqrt n}. $$
For $\Gamma_2$, using that $\big|\mathcal{N}_0 (\Phi_{n,\xi} +\Phi_{n}^{\star} )\big| \leq 1$ as before together with \eqref{eq:case2-lambda}, gives
$$\int_{|\xi|\leq\sqrt[6] n} {1\over |\xi|}\cdot \big| \Gamma_2(\xi) \big| \,{\rm d} \xi \lesssim \int_{|\xi|\leq\sqrt[6] n} {1\over |\xi|}\cdot {1\over \sqrt n}|\xi|^3e^{-{\varrho^2\xi^2\over 2}} \,{\rm d} \xi\lesssim {1 \over \sqrt n}.$$
Together with Case 1, we deduce that
$$\Big|\int_{-\xi_0\sqrt n}^{\xi_0\sqrt n} {\Lambda_1(\xi) \over \xi} \,{\rm d} \xi\Big| \lesssim {1 \over \sqrt n},$$
which ends the proof of the lemma.
\end{proof}
\begin{lemma} \label{lemma:theta-2-bound}
We have
$$\sup_{|b|\leq \sqrt n} \Big| \int_{-\xi_0\sqrt n}^{\xi_0\sqrt n} {\Theta_b^{(2)}(\xi) \over \xi} \,{\rm d} \xi \Big| \lesssim { 1\over \sqrt n}.$$
\end{lemma}
\begin{proof}
Recall that $\chi_k$ is bounded by $1$ and is supported by $ \cali{T}_k \subset \mathbb{B}(H_y,e^{-k+1})$. Therefore,
$$\mathcal{N}_0 \chi_k = \int_{\P^{d-1}} \chi_k \, {\rm d} \nu \leq \nu \big( \mathbb{B}(H_y,e^{-k+1}) \big) \lesssim e^{-k\eta},$$
where in the last step we have used Proposition \ref{prop:regularity}.
Recall that $\widehat h(\xi) = e^{-{\varrho^2 \xi^2\over 2}}$ and $\widetilde h_n(\xi)= e^{-{\varrho^2 \xi^2\over 2}} \mathcal{N}_0 (\Phi_{n,\xi} + \Phi_{n}^{\star})$. Using \eqref{eq:psi_xi+psi_T} and $\mathcal{N}_0 \mathbf 1 = \mathbf 1$, we get
\begin{align*}
\Theta_b^{(2)}(\xi) &= e^{ib \xi} e^{-{\varrho^2 \xi^2\over 2}} \big( \mathcal{N}_0 (\Phi_{n,\xi} + \Phi_{n}^{\star}) - 1 \big) \widehat{\vartheta_{\delta_n}}(\xi) \\ &= e^{ib \xi} e^{-{\varrho^2 \xi^2\over 2}} \sum_{0\leq k\leq A\log n} \big( e^{i \xi{k \over \sqrt n}}-1 \big) \big( \mathcal{N}_0 \chi_k \big) \cdot \widehat{\vartheta_{\delta_n}}(\xi).
\end{align*}
As $\norm{\widehat{\vartheta_{\delta_n}}}_\infty \leq 1$, we obtain
\begin{align*}
\big|\Theta_b^{(2)}(\xi)\big| \leq e^{-{\varrho^2 \xi^2\over 2}} \sum_{0\leq k\leq A\log n} \big| e^{i \xi{k \over \sqrt n}}-1\big| \big( \mathcal{N}_0\chi_k \big) \lesssim e^{-{\varrho^2 \xi^2\over 2}}\sum_{k \geq 0} | \xi| {k \over \sqrt n} e^{-k\eta}\lesssim e^{-{\varrho^2 \xi^2\over 2}} {|\xi|\over \sqrt n},
\end{align*}
where the constants involved do not depend on $b$. Therefore,
$$ \int_{-\xi_0\sqrt n}^{\xi_0\sqrt n} \Big|{\Theta_b^{(2)}(\xi)\over \xi} \Big| \,{\rm d} \xi \lesssim \int_{-\xi_0\sqrt n}^{\xi_0\sqrt n} {1\over |\xi|} \cdot e^{-{\varrho^2 \xi^2\over 2}} {|\xi|\over \sqrt n} \,{\rm d} \xi \lesssim {1 \over \sqrt n},$$
thus proving the lemma.
\end{proof}
Gathering the above estimates, we can finish the proof.
\begin{proof}[End of proof of Theorem \ref{thm:BE-coeff}]
Recall that our goal is to prove \eqref{goal-1-varphi}. Estimate \eqref{eq:BE-main-estimate} together with Lemmas \ref{lemma:theta-1-bound} and \ref{lemma:theta-2-bound} give that
$\big|F_n(b)-H(b)\big| \leq C'/ \sqrt n$ for all $b \in \mathbb{R}$, where $C' > 0$ is a constant. Recall that $H(b): =\frac{1}{\sqrt{2 \pi} \,\varrho} \int_{-\infty}^b e^{-\frac{s^2}{2 \varrho^2}} \, {\rm d} s$. Combining the last estimate with Lemma \ref{lemma:Ln-Fn} and the easy fact that $\sup_{b \in \mathbb{R}} \big|H(b)-H(b \pm 1/ \sqrt n)\big| \lesssim 1/ \sqrt n$ gives that $ \big|\mathcal{L}_n(b)-H(b)\big| \leq C'' / \sqrt n$ for some constant $C'' > 0$. Therefore, \eqref{goal-1-varphi} holds. Observe that all of our estimates are uniform in $x \in \P^{d-1}$ and $y\in (\P^{d-1})^*$. The proof of the theorem is complete.
\end{proof}
\section{Local limit theorem for coefficients} \label{sec:LLT}
This section is devoted to the proof of Theorem \ref{thm:LLT-coeff}. As in the previous section, we fix $x =[v]\in \P^{d-1}$ and $y=[f]\in (\P^{d-1})^*$. Fix also $-\infty<a<b<\infty$ and define
\begin{equation*}
\mathcal{A}_n(t) := \sqrt{n} \, \mathbf P \Big( t+ \log{ |\langle f, S_n v \rangle | \over \norm{f} \norm{v}} - n \gamma\in [ a, b] \Big) .
\end{equation*}
Our goal is to prove that
\begin{equation} \label{eq:LLT-main-limit}
\lim_{n\to \infty}\sup_{t\in\mathbb{R}} \Big|\, \mathcal{A}_n(t) - e^{-\frac{t^2}{2 \varrho^2 n}} {b-a\over \sqrt{2 \pi}\,\varrho} \, \Big| =0.
\end{equation}
Our strategy is similar to the one employed in the proof of Theorem \ref{thm:BE-coeff}, that is, to replace $\log{ |\langle f, S_n v \rangle | \over \norm{f} \norm{v}} $ by $\sigma(S_n,x) +\log d(S_n x, H_y)$ (see \eqref{eq:coeff-split}) and use the perturbed Markov operator and large deviation estimates to handle $\sigma(S_n,x)$ and $\log d(S_n x, H_y)$. Here, we are dealing with ``local'' probabilities for $\sigma(S_n,x)$, so the analysis is more involved. In particular, we need to use finer approximation results, such as the one in Lemma \ref{lemma:conv-fourier-approx} and properties of the operator $\mathcal{P}_{i\xi}$ for large values of $\xi \in \mathbb{R}$, as in Proposition \ref{prop:spec-Pxi}.
\medskip
Let $0<\zeta \leq 1$ be a constant. For integers $k \geq 0$ introduce
\begin{align*}
\cali{T}_k^\zeta := \big\{ w \in \P^{d-1} :\, e^{-(k+1)\zeta} < d(w,H_y) < e^{-(k-1)\zeta} \big\} = \mathbb{B}(H_y,e^{-(k-1) \zeta}) \setminus \overline{\mathbb{B}(H_y,e^{-(k+1)\zeta})}.
\end{align*}
We have the following version of Lemma \ref{lemma:partition-of-unity}. We'll use the same notation to denote slightly different functions. This shouldn't cause confusion. The functions from Lemma \ref{lemma:partition-of-unity} correspond to the particular case $\zeta =1$.
\begin{lemma} \label{lemma:partition-of-unity-2}
Let $0<\zeta \leq 1$. There exist non-negative smooth functions $\chi_k$ on $\P^{d-1}$, $k \geq 0$, such that
\begin{enumerate}
\item $\chi_k$ is supported by $\cali{T}_k^\zeta$;
\item If $w \in \P^{d-1} \setminus H_y$, then $\chi_k(w) \neq 0$ for at most two values of $k$;
\item $\sum_{k\geq 0} \chi_k=1$ on $\P^{d-1} \setminus H_y$;
\item $\norm{\chi_k}_{\cali{C}^1}\leq 12 \zeta^{-1} e^{k\zeta}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $\widetilde \chi_k$ be as in Lemma \ref{lemma:partition-of-unity} and set $\chi_k(w):= \widetilde \chi_k \big(\zeta^{-1} \log d(w,H_y) \big)$. Since the function $\Psi(w) := \log d(w,H_y)$ satisfies $\norm{\Psi|_{\cali{T}_k^\zeta }}_{\cali{C}^1} \leq e^{(k+1) \zeta} \leq 3 e^{k \zeta}$, it follows that $\chi_k$ satisfies (1)--(4).
\end{proof}
We will prove \eqref{eq:LLT-main-limit} by dealing separately with the upper and lower limit.
\subsection{Upper bound} \label{subsec:LLT-upper-limit}
The upper bound in the limit \eqref{eq:LLT-main-limit} is handled by the following proposition.
\begin{proposition} \label{prop:LLT-upper-limit}
Let $\mathcal{A}_n(t)$ be as above. Then, $$\limsup_{n\to \infty}\sup_{t\in\mathbb{R}}\bigg( \mathcal{A}_n(t) - e^{-\frac{t^2}{2 \varrho^2 n}} {b-a\over \sqrt{2 \pi}\,\varrho} \bigg) \leq 0.$$
\end{proposition}
Let $0< \zeta \leq 1$ be a small constant and define $\psi:\mathbb{R}\to\mathbb{R}_{\geq 0}$ by
\begin{align*}
\psi (u) := &
\begin{cases}
u/\zeta -(a-2\zeta)/\zeta & \text{for} \quad u\in[a-2\zeta,a-\zeta] \\
1 & \text{for} \quad u\in[a-\zeta,b+\zeta] \\
-u/\zeta +(b+2\zeta)/\zeta & \text{for} \quad u\in [b+\zeta,b+2\zeta]
\\
0 & \text{for} \quad u\in \mathbb{R} \setminus [a-2\zeta,b+2\zeta].
\end{cases}
\end{align*}
Notice that $\psi$ is Lipschitz and piecewise affine. Moreover, $0 \leq \psi \leq 1$, its support is contained in $[a-2\zeta,b+2\zeta] \subset [a-2,b+2]$ and $\int_{\mathbb{R}} \psi (u)\,{\rm d} u = b-a+3\zeta$.
For $t\in\mathbb{R}$ and $k\in\mathbb{N}$, consider the translations $$\psi_{t,k}(u):=\psi(u+t-k\zeta).$$
Observe that, for fixed $t,u\in\mathbb{R}$, we have that $\psi_{t,k}(u)\neq 0$ for only finitely many $k$'s. By construction, we have $\psi_{t,k} \geq \mathbf 1_{[a-t+(k-1)\zeta, b-t+(k+1)\zeta]}$.
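The properties of $\psi$ and its translates $\psi_{t,k}$ listed above can be checked numerically. The sketch below uses arbitrary test values of $a$, $b$, $\zeta$, $t$ (chosen dyadic so that the piecewise-affine arithmetic is exact in floating point) and verifies the value $b-a+3\zeta$ of the integral as well as the domination of the indicator function.

```python
# Sketch of the plateau function psi and its translates psi_{t,k}, checking:
# 0 <= psi <= 1, integral of psi equals b - a + 3*zeta, and psi_{t,k}
# dominates the indicator of [a - t + (k-1)zeta, b - t + (k+1)zeta].
# The values a, b, zeta, t are arbitrary dyadic test data.

def psi(u, a, b, zeta):
    if a - 2 * zeta <= u <= a - zeta:
        return u / zeta - (a - 2 * zeta) / zeta   # increasing ramp
    if a - zeta <= u <= b + zeta:
        return 1.0                                # plateau
    if b + zeta <= u <= b + 2 * zeta:
        return -u / zeta + (b + 2 * zeta) / zeta  # decreasing ramp
    return 0.0

def psi_tk(u, t, k, a, b, zeta):
    return psi(u + t - k * zeta, a, b, zeta)

a, b, zeta, t = -1.0, 2.0, 0.25, 0.5

# Trapezoid rule on a fine grid whose nodes contain the four breakpoints of
# psi recovers the exact integral, since psi is piecewise affine.
step = zeta / 1000.0
grid_lo, grid_hi = a - 3 * zeta, b + 3 * zeta
n_pts = int(round((grid_hi - grid_lo) / step))
integral = sum(0.5 * (psi(grid_lo + j * step, a, b, zeta)
                      + psi(grid_lo + (j + 1) * step, a, b, zeta)) * step
               for j in range(n_pts))
assert abs(integral - (b - a + 3 * zeta)) < 1e-9

# psi_{t,k} >= indicator of [a - t + (k-1)zeta, b - t + (k+1)zeta]:
# psi_{t,k} equals 1 on that whole interval.
for k in range(5):
    lo, hi = a - t + (k - 1) * zeta, b - t + (k + 1) * zeta
    for u in [lo, 0.5 * (lo + hi), hi]:
        assert abs(psi_tk(u, t, k, a, b, zeta) - 1.0) < 1e-9
print("psi checks passed")
```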
\medskip
Let $B>0$ be a large constant. Arguing as in Section \ref{sec:BE}, we obtain that, for $n$ large,
$$\mu^{*n} \big\{g\in G:\,d(gx,H_y)\leq e n^{-B} \big\}\leq e^{2c} \, n^{-c B}, $$
for some constant $c> 0$ independent of $n$ and $B$. Taking $B$ large enough allows us to assume that $n^{-c B } \leq 1 / n$ for all $n \geq 1$.
In order to simplify the notation, consider the linear functional
\begin{equation} \label{eq:En-def}
\mathcal{E}_n\big(\Psi\big):= \sqrt n \, \mathbf E\Big( \Psi\big( \sigma(S_n,x)-n\gamma ,S_n x \big) \Big),
\end{equation}
where $\Psi$ is a function of $(u,w) \in \mathbb{R} \times \P^{d-1}$.
For $w \in \P^{d-1}$, set
\begin{equation} \label{eq:Phi-star-def}
\Phi_n^\star (w):= 1 - \sum_{0\leq k\leq B\zeta^{-1}\log n} \chi_k(w),
\end{equation}
where $\chi_k$ are the functions in Lemma \ref{lemma:partition-of-unity-2}. We observe that we use the same notation as in Section \ref{sec:BE} to denote slightly different functions. We recover the functions from last section by taking $\zeta = 1$ and $B=A$. This shouldn't cause any confusion.
Set
$$\mathcal{B}_n(t):= \sum_{0\leq k\leq B\zeta^{-1}\log n} \mathcal{E}_n\big(\psi_{t,k}\cdot\chi_k \big)+ \mathcal{E}_n\big(\psi_{t,0}\cdot \Phi_n^\star \big).$$
\begin{lemma} \label{lemma:llt-ineq-1}
There exists a constant $C_1>0$, independent of $n$ and $\zeta$, such that, for all $t \in \mathbb{R}$, $$\mathcal{A}_n(t) \leq \mathcal{B}_n(t) + C_1 / \sqrt n.$$
\end{lemma}
\begin{proof}
Using the decomposition \eqref{eq:coeff-split} and the fact that $\mathbf P \big( d(S_n x,H_y)\leq n^{-B} \big) \lesssim 1 / n$, we obtain
\begin{align*}
\mathcal{A}_n(t) \leq \sqrt{n} \, \mathbf E \Big( \mathbf 1_{t+ \sigma(S_n,x) +\log d(S_n x, H_y) - n \gamma\in [a, b]} \mathbf 1_{\log d(S_n x, H_y)\geq -B\log n} \Big) + O \Big( {1 \over \sqrt n} \Big) .
\end{align*}
Observe that, when $S_n x \in{\rm supp}(\chi_k)$, we have $-(k+1) \zeta \leq \log d(S_n x,H_y) \leq - (k-1) \zeta$, so
$$ \mathbf 1_{t+ \sigma(S_n,x) +\log d(S_n x, H_y) - n \gamma\in [a, b]}\leq \mathbf 1_{\sigma(S_n,x) - n \gamma\in [a-t+(k-1)\zeta, b-t+(k+1)\zeta]} \leq \psi_{t,k}\big(\sigma(S_n,x) - n \gamma\big). $$
Using that $\mathbf 1_{\log d(w, H_y)\geq -B\log n} \leq \sum_{0\leq k\leq B\zeta^{-1}\log n + 1} \chi_k(w)$ and taking the expectation, it follows that
\begin{align*}
&\mathbf{E} \Big(\mathbf 1_{t+ \sigma(S_n,x) +\log d(S_n x, H_y) - n \gamma\in [a, b]} \mathbf 1_{\log d(S_n x, H_y)\geq -B\log n} \Big) \\
&\leq \sum_{0\leq k\leq B\zeta^{-1}\log n + 1} \mathbf{E} \Big( \psi_{t,k}\big(\sigma(S_n,x)-n\gamma\big)\chi_k(S_n x) \Big) \\ &\leq \sum_{0\leq k\leq B\zeta^{-1}\log n} \mathbf{E} \Big(\psi_{t,k}\big(\sigma(S_n,x)-n\gamma\big)\chi_k(S_n x) \Big) + \mathbf{E}\Big(\chi_{k_0}(S_n x) \Big),
\end{align*}
where $k_0:= \lfloor B \zeta^{-1} \log n \rfloor + 1$.
From the fact that $\chi_{k_0} \leq \mathbf 1_{\mathbb{B}(H_y,e^{-(k_0-1)\zeta})}$, we see that the last term above is bounded by $\mathbf P \big( d(S_n x,H_y)\leq e n^{-B} \big) \lesssim 1 / n$. Hence, there is a constant $C_1 >0$ such that
\begin{equation*}
\mathcal{A}_n(t) \leq \sum_{0\leq k\leq B\zeta^{-1}\log n} \mathcal{E}_n\big(\psi_{t,k}\cdot\chi_k \big)+{C_1 \over \sqrt n} \leq \mathcal{B}_n(t) +{C_1 \over \sqrt n},
\end{equation*}
proving the lemma.
\end{proof}
By Lemma \ref{lemma:conv-fourier-approx}, for every $0<\delta<1$, there exists a smooth function $\psi^+_{\delta}$ such that $\widehat {\psi^+_{\delta}}$ has support in $[-\delta^{-2},\delta^{-2}]$, $$\psi\leq \psi^+_\delta,\quad \lim_{\delta\to 0} \psi^+_{\delta} =\psi \quad \text{and} \quad \lim_{\delta\to 0} \big \|\psi^+_{\delta} -\psi \big \|_{L^1} = 0.$$
Moreover, $\norm{\psi_{\delta}^+}_\infty$, $\norm{\psi_{\delta}^+}_{L^1}$ and $\|\widehat{\psi^+_{\delta}}\|_{\cali{C}^1}$ are bounded by a constant independent of $\delta$ and $\zeta$ since the support of $\psi$ is contained in $[a-2,b+2]$.
As above, for $t\in\mathbb{R}$ and $k\in\mathbb{N}$, we consider the translations
$$ \psi_{t,k}^+(u):=\psi_{\delta}^+(u+t-k\zeta) . $$
We omit the dependence on $\delta$ in order to ease the notation. Define also
\begin{equation} \label{eq:R-def}
\mathcal{R}_n(t):= \sum_{0\leq k\leq B\zeta^{-1}\log n } \mathcal{E}_n\big(\psi_{t,k}^+\cdot\chi_k \big)+ \mathcal{E}_n\big(\psi_{t,0}^+\cdot \Phi_n^\star \big).
\end{equation}
Clearly, we have $\mathcal{B}_n(t)\leq \mathcal{R}_n(t)$. From the definition of $\mathcal{E}_n$, the Fourier inversion formula and Fubini's theorem, we have
\begin{align*}
\mathcal{E}_n\big(\psi_{t,k}^+\cdot\chi_k \big)&=\sqrt n \, \int_{G} \psi_{\delta}^+\big(\sigma(g,x)-n\gamma+t-k\zeta\big) \cdot \chi_k(gx) \,{\rm d} \mu^{*n}(g)\\
&={\sqrt n\over 2\pi}\int_{G} \int_{-\infty}^\infty \widehat{\psi_{\delta}^+}(\xi) e^{i\xi(\sigma(g,x)-n\gamma+t-k\zeta )} \cdot \chi_k(gx) \,{\rm d} \xi{\rm d}\mu^{*n}(g)\\
&={\sqrt n\over 2\pi}\int_{-\infty}^\infty \widehat{\psi_{\delta}^+}(\xi) e^{i\xi(t-k\zeta)}\cdot e^{-i\xi n\gamma}\mathcal{P}^n_{i\xi}\chi_k(x) \,{\rm d} \xi,
\end{align*}
where in the last step we have used \eqref{eq:markov-op-iterate}.
Recall that ${\rm supp}\big( \widehat{\psi_{\delta}^+} \big)\subset [-\delta^{-2},\delta^{-2}]$. So, after the change of variables $\xi \mapsto \xi / \sqrt n$, the above identity becomes
$$ \mathcal{E}_n\big(\psi_{t,k}^+\cdot\chi_k \big) ={1\over 2\pi}\int_{-\delta^{-2} \sqrt n}^{\delta^{-2} \sqrt n} \widehat{\psi_{\delta}^+}\Big({\xi\over \sqrt n}\Big) e^{i\xi{t-k\zeta\over \sqrt n}}\cdot e^{-i\xi \sqrt n\gamma}\mathcal{P}^n_{{i\xi\over \sqrt n}}\chi_k(x) \,{\rm d} \xi.$$
A similar computation yields $$\mathcal{E}_n\big(\psi_{t,0}^+ \cdot \Phi_n^\star \big) = {1\over 2\pi}\int_{-\delta^{-2} \sqrt n}^{\delta^{-2} \sqrt n} \widehat{\psi_{\delta}^+}\Big({\xi\over \sqrt n}\Big) e^{i\xi{t \over \sqrt n}}\cdot e^{-i\xi \sqrt n\gamma}\mathcal{P}^n_{{i\xi\over \sqrt n}}\Phi_{n}^{\star} (x) \,{\rm d} \xi.$$
Define
\begin{equation} \label{eq:Phi-xi-def}
\Phi_{n,\xi} (w):= \sum_{0\leq k\leq B\zeta^{-1}\log n} e^{-i \xi{k\zeta\over \sqrt n}}\chi_k(w).
\end{equation}
We again use the same notation as in Section \ref{sec:BE} to denote a slightly different function. The differences here are the factor $\zeta$ and the sign before $i\xi$. Using this notation and the above computations, \eqref{eq:R-def} becomes
\begin{equation} \label{eq:R-formula}
\mathcal{R}_n(t)= {1\over 2\pi}\int_{-\delta^{-2} \sqrt n}^{\delta^{-2} \sqrt n} \widehat{\psi_{\delta}^+}\Big({\xi\over \sqrt n}\Big) e^{i\xi{t\over \sqrt n}}\cdot e^{-i\xi\sqrt n\gamma}\mathcal{P}_{{i\xi\over \sqrt n}}^n(\Phi_{n,\xi} +\Phi_{n}^{\star} )(x) \,{\rm d} \xi.
\end{equation}
Fix $\alpha>0$ such that $$\alpha B\leq 1/6 \quad \text{ and } \quad \alpha \leq \alpha_0,$$ where $0<\alpha_0<1$ is the exponent appearing in Theorem \ref{thm:spectral-gap}. Then, all the results of Subsection \ref{subsec:markov-op} apply to the operators $\xi\mapsto\mathcal{P}_{i\xi}$ acting on $\cali{C}^\alpha(\P^{d-1})$.
The next lemma can be proved in the same way as Lemma \ref{lemma:norm-Phi}.
\begin{lemma} \label{lemma:norm-Phi-2}
Let $0 < \zeta \leq 1$, $\Phi_{n,\xi}, \Phi_{n}^{\star}$ and $\alpha > 0$ be as above. Then,
\begin{equation} \label{eq:psi_xi+psi_T-2}
\Phi_{n,\xi} + \Phi_{n}^{\star} = \mathbf 1 + \sum_{0\leq k\leq B\zeta^{-1}\log n} \big(e^{-i \xi{k\zeta\over \sqrt n}} - 1 \big) \chi_k
\end{equation}
and there is a constant $C_\zeta>0$ independent of $n$ and $\xi$ such that
\begin{equation*} \label{eq:norm-Phi-2}
\norm{\Phi_{n,\xi} }_{\cali{C}^\alpha}\leq C_\zeta n^{\alpha B} \quad\text{and}\quad \norm{\Phi_{n}^{\star} }_{\cali{C}^\alpha}\leq C_\zeta n^{\alpha B}.
\end{equation*}
Moreover, $\Phi_{n}^{\star} $ is supported by $\big\{w:\,\log d(w,H_y)\leq -B\log n + 1\big\}$.
\end{lemma}
Define
\begin{equation} \label{eq:S-def}
\mathcal{S}_n(t):={1\over 2\pi}\widehat{\psi_{\delta}^+}(0) \int_{-\infty}^\infty e^{i\xi{t\over \sqrt n}} e^{-{ \varrho^2\xi^2 \over 2}} \,{\rm d} \xi ={1\over \sqrt{2\pi} \, \varrho} e^{-{t^2\over2\varrho^2 n}}\int_{\mathbb{R}} \psi_{\delta}^+ (u)\,{\rm d} u,
\end{equation}
where in the second equality we have used the fact that the inverse Fourier transform of $e^{-{ \varrho^2\xi^2 \over 2}}$ is $ {1\over \sqrt{2\pi} \, \varrho} e^{-{t^2\over2\varrho^2}}$.
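Explicitly, with the normalization of the Fourier transform used above, $\widehat{\psi_{\delta}^+}(0)=\int_{\mathbb{R}} \psi_{\delta}^+ (u)\,{\rm d} u$ and, setting $s:=t/\sqrt n$,
$${1\over 2\pi}\int_{-\infty}^\infty e^{i\xi s}\, e^{-{\varrho^2\xi^2\over 2}} \,{\rm d} \xi={1\over \sqrt{2\pi}\,\varrho}\, e^{-{s^2\over 2\varrho^2}}={1\over \sqrt{2\pi}\,\varrho}\, e^{-{t^2\over 2\varrho^2 n}},$$
which gives the second equality in \eqref{eq:S-def}.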
\begin{lemma}\label{lemma-R-S}
Fix $0< \delta < 1$ and $0<\zeta \leq 1$. Then, there exists a constant $C_{\zeta,\delta}>0$ such that, for all $n \geq 1$,
$$\sup_{t\in\mathbb{R}}\big|\mathcal{R}_n(t)-\mathcal{S}_n(t) \big| \leq {C_{\zeta,\delta}\over \sqrt[3] n}.$$
\end{lemma}
\begin{proof}
Let $\xi_0>0$ be the constant in Lemma \ref{lemma:lambda-estimates}. In particular, the decomposition of $\mathcal{P}_z$ in Proposition \ref{prop:spectral-decomp} holds for $|z| \leq \xi_0$. Using that decomposition, \eqref{eq:R-formula} and \eqref{eq:S-def}, we can write $$\mathcal{R}_n(t)-\mathcal{S}_n(t) = \Lambda_n^1(t) + \Lambda_n^2(t) + \Lambda_n^3(t) + \Lambda_n^4(t) + \Lambda_n^5(t),$$ where
$$\Lambda_n^1(t):={1\over 2\pi}\int_{-\xi_0 \sqrt n}^{\xi_0 \sqrt n} e^{i\xi{t\over \sqrt n}} \Big[ \widehat{\psi_{\delta}^+}\Big({\xi\over \sqrt n}\Big) e^{-i\xi\sqrt n\gamma}\lambda_{{i\xi\over \sqrt n}}^n\mathcal{N}_0(\Phi_{n,\xi} +\Phi_{n}^{\star} )-\widehat{\psi_{\delta}^+}(0) e^{-{ \varrho^2\xi^2 \over 2}}\Big] \,{\rm d} \xi ,$$
$$\Lambda_n^2(t):= {1\over 2\pi}\int_{-\xi_0 \sqrt n}^{\xi_0 \sqrt n} e^{i\xi{t\over \sqrt n}}\Big[ \widehat{\psi_{\delta}^+}\Big({\xi\over \sqrt n}\Big) e^{-i\xi\sqrt n\gamma}\lambda_{{i\xi\over \sqrt n}}^n\big(\mathcal{N}_{{i\xi\over \sqrt n}}-\mathcal{N}_0\big)(\Phi_{n,\xi} +\Phi_{n}^{\star} ) (x) \Big] \,{\rm d} \xi , $$
$$\Lambda_n^3(t):= {1\over 2\pi}\int_{-\xi_0 \sqrt n}^{\xi_0 \sqrt n} e^{i\xi{t\over \sqrt n}} \widehat{\psi_{\delta}^+}\Big({\xi\over \sqrt n}\Big) e^{-i\xi\sqrt n\gamma} \mathcal{Q}_{{i\xi\over \sqrt n}}^n(\Phi_{n,\xi} +\Phi_{n}^{\star} )(x) \,{\rm d} \xi, $$
$$ \Lambda_n^4(t):= {1\over 2\pi}\int_{\xi_0\sqrt n \leq|\xi|\leq\delta^{-2} \sqrt n}e^{i\xi{t\over \sqrt n}} \widehat{\psi_{\delta}^+}\Big({\xi\over \sqrt n}\Big) e^{-i\xi\sqrt n\gamma}\mathcal{P}_{{i\xi\over \sqrt n}}^n(\Phi_{n,\xi} +\Phi_{n}^{\star} )(x) \,{\rm d} \xi $$
and
$$ \Lambda_n^5(t):= - {1\over 2\pi}\widehat{\psi_{\delta}^+}(0) \int_{|\xi|\geq \xi_0 \sqrt n} e^{i\xi{t\over \sqrt n}} e^{-{ \varrho^2\xi^2 \over 2}} \,{\rm d} \xi. $$
\medskip
We will bound each $\Lambda_n^j$, $j=1,\ldots, 5$, separately. We will use that $$\norm{\Phi_{n,\xi} +\Phi_{n}^{\star} }_{\cali{C}^\alpha}\leq 2C_\zeta n^{\alpha B} \leq 2C_\zeta n^{1 / 6}$$ for every $\xi$, after Lemma \ref{lemma:norm-Phi-2} and the choice of $\alpha$ and $B$.
In order to bound $\Lambda_n^2$, we have, using the analyticity of $\xi\mapsto \mathcal{N}_{i\xi}$, that
$$ \Big\| \big(\mathcal{N}_{{i\xi\over \sqrt n}}-\mathcal{N}_0\big)(\Phi_{n,\xi} +\Phi_{n}^{\star} ) \Big\|_\infty \lesssim {|\xi|\over \sqrt n} \norm{\Phi_{n,\xi} +\Phi_{n}^{\star} }_{\cali{C}^\alpha}\leq { 2C_\zeta|\xi|\over \sqrt [3] n}.$$
Recall, from Lemma \ref{lemma:lambda-estimates}, that $\big|\lambda_{{i\xi\over \sqrt n}}^n\big|\leq e^{-{\varrho^2\xi^2\over 3}}$ for $|\xi|\leq \xi_0 \sqrt n$. Since $\|\widehat{\psi^+_{\delta}}\|_{\cali{C}^1}$ is bounded uniformly in $\delta$ and $\zeta$, we get
$$\sup_{t\in\mathbb{R}} \big|\Lambda_n^2(t)\big|\lesssim \int_{-\infty}^{\infty} e^{-{\varrho^2\xi^2\over 3}} {2C_\zeta|\xi|\over \sqrt [3] n} \,{\rm d} \xi \lesssim {C_\zeta\over \sqrt[3] n}. $$
For $\Lambda_n^3$, we use that $\norm{\mathcal{Q}^n_z}_{\cali{C}^\alpha} \leq c \beta^n$ for $|z| \leq \xi_0$, where $c>0$ and $0<\beta<1$ are constants, see Proposition \ref{prop:spectral-decomp}. Therefore, for $|\xi|\leq \xi_0\sqrt n$,
$$\Big\| \mathcal{Q}_{{i\xi\over \sqrt n}}^n(\Phi_{n,\xi} +\Phi_{n}^{\star} ) \Big\|_\infty \lesssim \beta^n\norm{\Phi_{n,\xi} +\Phi_{n}^{\star} }_{\cali{C}^\alpha} \leq 2 C_\zeta \beta^n \sqrt[6] n, $$
which gives
$$\sup_{t\in\mathbb{R}} \big|\Lambda_n^3(t)\big|\lesssim \int_{-\xi_0 \sqrt n}^{\xi_0 \sqrt n} 2 C_\zeta \beta^n \sqrt[6] n \,{\rm d} \xi = 4\xi_0 C_\zeta \sqrt n \beta^n \sqrt[6] n\lesssim \frac{C_\zeta}{\sqrt n}.$$
In order to bound $\Lambda_n^4$, we use that, after Proposition \ref{prop:spec-Pxi}, there are constants $C_\delta>0$ and $0<\rho_\delta<1$ such that $\norm{\mathcal{P}^n_{i\xi}}_{\cali{C}^\alpha}\leq C_\delta \rho_\delta^n$ for all $\xi_0\leq |\xi|\leq \delta^{-2}$ and $n \geq 1$. Therefore,
$$\sup_{t\in\mathbb{R}} \big|\Lambda_n^4(t)\big|\lesssim \int_{\xi_0\sqrt n \leq|\xi|\leq\delta^{-2} \sqrt n} C_\delta \rho_\delta^n C_\zeta \sqrt[6] n\,{\rm d} \xi\leq 2\delta^{-2}\sqrt n C_\delta C_\zeta \rho_\delta^n \sqrt[6] n\lesssim \frac{C_{\zeta,\delta}'}{\sqrt n},$$
for some constant $C_{\zeta,\delta}'>0$.
The modulus of the term $\Lambda_n^5$ is clearly $\lesssim 1/\sqrt n$, so it only remains to estimate $\Lambda_n^1$. For every $t \in \mathbb{R}$, we have $$\big| \Lambda_n^1(t) \big|\leq \Gamma_n^1+\Gamma_n^2+\Gamma_n^3,$$ where
$$\Gamma_n^1:= {1\over 2\pi}\int_{-\xi_0 \sqrt n}^{\xi_0 \sqrt n} \Big| \widehat{\psi_{\delta}^+}\Big({\xi\over \sqrt n}\Big) \Big| \, \big|\lambda_{{i\xi\over \sqrt n}}^n \big| \cdot\Big| \mathcal{N}_0(\Phi_{n,\xi} +\Phi_{n}^{\star} )- 1 \Big| \,{\rm d} \xi , $$
$$\Gamma_n^2:= {1\over 2\pi}\int_{-\xi_0 \sqrt n}^{\xi_0 \sqrt n} \big|\lambda_{{i\xi\over \sqrt n}}^n\big|\cdot \Big| \widehat{\psi_{\delta}^+}\Big({\xi\over \sqrt n}\Big) -\widehat{\psi_{\delta}^+}(0) \Big| \,{\rm d} \xi $$
and
$$\Gamma_n^3:= {1\over 2\pi}\int_{-\xi_0 \sqrt n}^{\xi_0 \sqrt n} \big| \widehat{\psi_{\delta}^+}(0) \big| \cdot\Big| e^{-i\xi\sqrt n\gamma}\lambda_{{i\xi\over \sqrt n}}^n- e^{-{ \varrho^2\xi^2 \over 2}}\Big| \,{\rm d} \xi. $$
Recall that $\chi_k$ is bounded by $1$ and is supported by $ \cali{T}_k^\zeta \subset \mathbb{B}(H_y,e^{-(k-1)\zeta})$. Therefore,
$$\mathcal{N}_0 \chi_k = \int_{\P^{d-1}} \chi_k \, {\rm d} \nu \leq \nu \big(\mathbb{B}(H_y,e^{-(k-1) \zeta})\big) \lesssim e^{-k\zeta \eta},$$
where in the last step we have used Proposition \ref{prop:regularity}.
Using \eqref{eq:psi_xi+psi_T-2}, we get
\begin{align*}
\Big| \mathcal{N}_0(\Phi_{n,\xi} +\Phi_{n}^{\star} )- 1 \Big| &= \Big| \mathcal{N}_0(\Phi_{n,\xi} +\Phi_{n}^{\star} )-\mathcal{N}_0 \mathbf 1\Big|\leq \sum_{0\leq k\leq B\zeta^{-1}\log n} \big| e^{-i \xi{k\zeta\over \sqrt n}}-1 \big|\mathcal{N}_0 \chi_k \\
&\lesssim \sum_{ k \geq 0} |\xi|{k\zeta\over \sqrt n}e^{-k \zeta\eta}\leq c_\zeta {|\xi|\over \sqrt n},
\end{align*}
for some constant $c_\zeta>0$ independent of $n$.
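Here one can take $c_\zeta$ to be the implicit constant times $\sum_{k\geq 0} k\zeta e^{-k\zeta\eta}$, which is finite since, with $x:=e^{-\zeta\eta}\in(0,1)$,
$$\sum_{k\geq 0} k\zeta e^{-k\zeta\eta}=\zeta\sum_{k\geq 0} k x^{k}={\zeta x\over (1-x)^2}={\zeta e^{-\zeta\eta}\over \big(1-e^{-\zeta\eta}\big)^2}<\infty.$$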
Using that $\big|\lambda_{{i\xi\over \sqrt n}}^n\big|\leq e^{-{\varrho^2\xi^2\over 3}}$ for $|\xi|\leq \xi_0 \sqrt n$ (Lemma \ref{lemma:lambda-estimates}) and that $\|\widehat{\psi^+_{\delta}}\|_{\cali{C}^1}$ is uniformly bounded, we get that
$$ \Gamma_n^1\lesssim \int_{-\xi_0 \sqrt n}^{\xi_0 \sqrt n} \|\widehat{\psi^+_{\delta}}\|_{\infty} e^{-{\varrho^2\xi^2\over 3}} c_\zeta {|\xi|\over \sqrt n} \,{\rm d} \xi\lesssim {c_\zeta\over \sqrt n}$$
and
$$\Gamma_n^2\lesssim \int_{-\xi_0 \sqrt n}^{\xi_0 \sqrt n} e^{-{\varrho^2\xi^2\over 3}} {|\xi|\over \sqrt n} \|\widehat{\psi^+_{\delta}}\|_{\cali{C}^1} \,{\rm d} \xi\lesssim {1\over \sqrt n}.$$
The bound $\Gamma_n^3\lesssim 1/\sqrt n$ follows by splitting the integral along the intervals $|\xi|\leq \sqrt[6] n$ and $\sqrt[6] n< |\xi| \leq \xi_0\sqrt n$ and using Lemma \ref{lemma:lambda-estimates}.
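To sketch this step: on $|\xi|\leq \sqrt[6] n$, the Taylor-type comparison of $\lambda_{i\xi/\sqrt n}$ with the Gaussian gives an error of the form $\big| e^{-i\xi\sqrt n\gamma}\lambda_{{i\xi\over \sqrt n}}^n- e^{-{ \varrho^2\xi^2 \over 2}}\big|\lesssim {|\xi|^3\over \sqrt n}\, e^{-{\varrho^2\xi^2\over 3}}$, whose integral over $\mathbb{R}$ is $\lesssim 1/\sqrt n$; on $\sqrt[6] n< |\xi| \leq \xi_0\sqrt n$, both $\big|\lambda_{{i\xi\over \sqrt n}}^n\big|$ and $e^{-{ \varrho^2\xi^2 \over 2}}$ are bounded by $e^{-{\varrho^2\xi^2\over 3}}$, so this part of the integral is $O\big(e^{-{\varrho^2 n^{1/3}\over 3}}\big)$, which is negligible.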
We conclude that $$\sup_{t\in\mathbb{R}} \big|\Lambda_n^1(t)\big|\lesssim \frac{c_\zeta'}{\sqrt n},$$ for some constant $c_\zeta'>0$ independent of $n$.
\medskip
Gathering the above estimates, we obtain $$\sup_{t\in\mathbb{R}}\big|\mathcal{R}_n(t)-\mathcal{S}_n(t) \big|\lesssim \frac{c_\zeta'}{\sqrt n} + {C_\zeta\over \sqrt[3] n} + \frac{C_\zeta}{\sqrt n} + \frac{C_{\zeta,\delta}'}{\sqrt n} + \frac{1}{\sqrt n}.$$
Hence, the above quantity is bounded by $C_{\zeta,\delta} / \sqrt[3] n$ for some constant $C_{\zeta,\delta} > 0$. This finishes the proof of the lemma.
\end{proof}
The above estimates are enough to obtain the desired upper bound.
\begin{proof}[Proof of Proposition \ref{prop:LLT-upper-limit}]
Fix $0<\delta<1$ and $0 < \zeta \leq 1$ as in the beginning of this subsection. Lemmas \ref{lemma:llt-ineq-1} and \ref{lemma-R-S} and the fact that $\mathcal{B}_n(t) \leq \mathcal{R}_n(t)$ give that
$$\mathcal{A}_n(t) \leq \mathcal{S}_n(t) + {C_{\zeta,\delta}\over \sqrt[3] n}+{C_1 \over \sqrt n} \quad \text{for all } \,\, t \in \mathbb{R}.$$
Recall, from \eqref{eq:S-def}, that $\mathcal{S}_n(t) = {1\over \sqrt{2\pi} \, \varrho} e^{-{t^2\over2\varrho^2 n}}\int_{\mathbb{R}} \psi_{\delta}^+ (u)\,{\rm d} u$ and $\int_{\mathbb{R}} \psi (u)\,{\rm d} u = b-a+3\zeta$. Hence, for every fixed $n$ and $\zeta$,
$$ \Big| \mathcal{S}_n(t)- e^{-{t^2\over2\varrho^2 n}}{b-a+3\zeta\over \sqrt{2\pi} \, \varrho} \Big| \leq {1\over \sqrt{2\pi} \, \varrho} \big\| \psi_{\delta}^+ -\psi\big\|_{L^1}.$$
We deduce that
$$ \mathcal{A}_n(t) - e^{-\frac{t^2}{2 \varrho^2 n}} {b-a\over \sqrt{2 \pi}\,\varrho} \leq e^{-{t^2\over2\varrho^2 n}} {3\zeta\over \sqrt{2\pi} \, \varrho} + {1\over \sqrt{2\pi} \, \varrho} \big\| \psi_{\delta}^+ -\psi\big\|_{L^1} + {C_{\zeta,\delta}\over \sqrt[3] n}+{C_1 \over \sqrt n}, $$ so
$$\limsup_{n\to \infty} \sup_{t\in\mathbb{R}} \bigg( \mathcal{A}_n(t) - e^{-\frac{t^2}{2 \varrho^2 n}} {b-a\over \sqrt{2 \pi}\,\varrho} \bigg) \leq {3\zeta\over \sqrt{2\pi} \, \varrho} + {1\over \sqrt{2\pi} \, \varrho} \big\| \psi_{\delta}^+ -\psi\big\|_{L^1}.$$
Since $0<\delta<1$ and $0 < \zeta \leq 1$ are arbitrary and $\big\| \psi_{\delta}^+ -\psi\big\|_{L^1}$ tends to zero as $\delta \to 0$, the proposition follows.
\end{proof}
\subsection{Lower bound} \label{subsec:LLT-lower-limit}
We now deal with the lower bound in the limit in \eqref{eq:LLT-main-limit}.
\begin{proposition} \label{prop:LLT-lower-limit}
Let $\mathcal{A}_n(t)$ be as above. Then, $$\liminf_{n\to \infty}\inf_{t\in\mathbb{R}}\bigg( \mathcal{A}_n(t) - e^{-\frac{t^2}{2 \varrho^2 n}} {b-a\over \sqrt{2 \pi}\,\varrho} \bigg) \geq 0.$$
\end{proposition}
The argument is a variation of the one used in the proof of Proposition \ref{prop:LLT-upper-limit}, but the upper approximations used above will be replaced by analogous lower approximations. We now give the details.
Let $0< \zeta \leq 1$ be a small constant and define $\widetilde \psi:\mathbb{R}\to\mathbb{R}_{\geq 0}$ by
\begin{align*}
\widetilde \psi (u):= &
\begin{cases}
u/\zeta -(a+\zeta)/\zeta & \text{for} \quad u\in[a+\zeta,a+2\zeta] \\
1 & \text{for} \quad u\in[a+2\zeta,b-2\zeta] \\
-u/\zeta +(b-\zeta)/\zeta & \text{for} \quad u\in [b-2\zeta,b-\zeta]
\\
0 & \text{for} \quad u\in \mathbb{R} \setminus [a+\zeta,b-\zeta].
\end{cases}
\end{align*}
The function $\widetilde \psi$ is Lipschitz and piecewise affine. Moreover, $0 \leq \widetilde \psi \leq 1$, its support is contained in $[a+\zeta,b-\zeta] \subset [a,b]$ and $\int_{\mathbb{R}} \widetilde \psi (u)\,{\rm d} u = b-a - 3\zeta$.
For $t\in\mathbb{R}$ and $k\in\mathbb{N}$, consider the translations
$$\widetilde\psi_{t,k}(u):=\widetilde\psi(u+t-k\zeta).$$
Then, for fixed $t,u\in\mathbb{R}$, we have that $\widetilde \psi_{t,k}(u)\neq 0$ for only finitely many $k$'s and $\mathbf 1_{[a-t+(k+1)\zeta, b-t+(k-1)\zeta]} \geq \widetilde\psi_{t,k}$.
Let $\chi_k$ be as in Lemma \ref{lemma:partition-of-unity-2} and $\Phi_n^\star$ be the function defined in \eqref{eq:Phi-star-def}. Set
$$\widetilde \mathcal{B}_n(t):= \sum_{0\leq k\leq B\zeta^{-1}\log n} \mathcal{E}_n\big(\widetilde \psi_{t,k}\cdot\chi_k \big)+ \mathcal{E}_n\big(\widetilde \psi_{t,0}\cdot \Phi_n^\star \big),$$
where $\mathcal{E}_n$ is defined in \eqref{eq:En-def}.
\begin{lemma} \label{lemma:llt-ineq-2}
There exists a constant $C_2>0$, independent of $n$ and $\zeta$, such that, for all $t \in \mathbb{R}$, $$\mathcal{A}_n(t) \geq \widetilde \mathcal{B}_n(t) - C_2 / \sqrt n.$$
\end{lemma}
\begin{proof}
Using \eqref{eq:coeff-split} and the definition of $\mathcal{A}_n(t)$, it follows that
$$ \mathcal{A}_n(t) \geq \sqrt{n} \, \mathbf E \Big( \mathbf 1_{t+ \sigma(S_n,x) +\log d(S_n x, H_y) - n \gamma\in [a, b]}\mathbf 1_{\log d(S_n x, H_y)\geq -B\log n} \Big).$$
Recall that when $S_n x\in{\rm supp}(\chi_k)$, one has $-(k+1) \zeta \leq \log d(S_n x,H_y) \leq - (k-1) \zeta$, so
$$ \mathbf 1_{t+ \sigma(S_n,x) +\log d(S_n x, H_y) - n \gamma\in [a, b]}\geq \mathbf 1_{\sigma(S_n,x) - n \gamma\in [a-t+(k+1)\zeta, b-t+(k-1)\zeta]} \geq \widetilde\psi_{t,k}\big(\sigma(S_n,x) - n \gamma\big). $$
Using that $\mathbf 1_{\log d(w, H_y)\geq -B\log n} \geq \sum_{0\leq k\leq B\zeta^{-1}\log n - 1} \chi_k(w)$, it follows that
\begin{align*}
\mathbf 1_{t+ \sigma(S_n,x) +\log d(S_n x, H_y) - n \gamma\in [a, b]} \mathbf 1_{\log d(S_n x, H_y)\geq -B\log n}
\geq \sum_{0\leq k\leq B\zeta^{-1}\log n-1} \widetilde \psi_{t,k}\big(\sigma(S_n,x)-n\gamma\big)\chi_k(S_n x) .
\end{align*}
Therefore, if $k_0:= \lfloor B \zeta^{-1} \log n \rfloor$, then
\begin{equation*}
\mathcal{A}_n(t) \geq \sum_{0\leq k\leq B\zeta^{-1}\log n-1} \mathcal{E}_n\big(\widetilde\psi_{t,k}\cdot\chi_k \big) = \widetilde \mathcal{B}_n(t) - \mathcal{E}_n\big(\widetilde \psi_{t,k_0}\cdot\chi_{k_0} \big) - \mathcal{E}_n\big(\widetilde \psi_{t,0}\cdot \Phi_n^\star \big).
\end{equation*}
Arguing as in the proof of Lemma \ref{lemma:llt-ineq-1}, we see that $\mathcal{E}_n\big(\widetilde \psi_{t,k_0}\cdot\chi_{k_0} \big)$ and $\mathcal{E}_n\big(\widetilde \psi_{t,0}\cdot \Phi_n^\star \big)$ are both $\lesssim 1/\sqrt n$. The lemma follows.
\end{proof}
Let $0<\delta<1$. By Lemma \ref{lemma:conv-fourier-approx}, there exists a smooth function $\widetilde \psi^-_{\delta}$ such that $\widehat {\widetilde \psi^-_{\delta}}$ has support in $[-\delta^{-2},\delta^{-2}]$, $$\widetilde \psi^-_{\delta} \leq \widetilde\psi,\quad \lim_{\delta\to 0} \widetilde \psi^-_{\delta} =\widetilde \psi \quad \text{and} \quad \lim_{\delta\to 0} \big \|\widetilde \psi^-_{\delta} -\widetilde\psi \big \|_{L^1} = 0.$$
Moreover, $\norm{\widetilde \psi_{\delta}^-}_\infty$, $\norm{ \widetilde\psi_{\delta}^-}_{L^1}$ and $\|\widehat{ \widetilde \psi^-_{\delta}}\|_{\cali{C}^1}$ are bounded by a constant independent of $\delta$ and $\zeta$ since the support of $\widetilde\psi$ is contained in $[a,b]$. We warn that, even if $\widetilde \psi$ is non-negative, $\widetilde \psi^-_{\delta}$ might take negative values.
For $t\in\mathbb{R}$ and $k\in\mathbb{N}$, consider the translations
$$ \widetilde\psi_{t,k}^-(u):= \widetilde \psi_{\delta}^-(u+t-k\zeta)$$ and define
$$\widetilde \mathcal{R}_n(t):= \sum_{0\leq k\leq B\zeta^{-1}\log n} \mathcal{E}_n\big(\widetilde \psi_{t,k}^-\cdot\chi_k \big)+ \mathcal{E}_n\big(\widetilde \psi_{t,0}^-\cdot \Phi_n^\star \big) $$ and $$\widetilde \mathcal{S}_n (t):={1\over 2\pi}\widehat{\widetilde \psi_{\delta}^-}(0) \int_{-\infty}^\infty e^{i\xi{t\over \sqrt n}} e^{-{ \varrho^2\xi^2 \over 2}} \,{\rm d} \xi={1\over \sqrt{2\pi} \, \varrho} e^{-{t^2\over2\varrho^2 n}}\int_{\mathbb{R}} \widetilde \psi_{\delta}^- (u)\,{\rm d} u.$$
\begin{lemma} \label{lemma-R-S-2}
Fix $0< \delta < 1$ and $0<\zeta \leq 1$ small enough. Then, there exists a constant $\widetilde C_{\zeta,\delta}>0$ such that, for all $n \geq 1$,
$$\sup_{t\in\mathbb{R}}\big| \widetilde\mathcal{R}_n(t)- \widetilde \mathcal{S}_n(t) \big| \leq {\widetilde C_{\zeta,\delta}\over \sqrt[3] n}.$$
\end{lemma}
\begin{proof}
By the same computations as the ones from Subsection \ref{subsec:LLT-upper-limit}, we obtain the identity
$$\widetilde \mathcal{R}_n(t)= {1\over 2\pi}\int_{-\delta^{-2} \sqrt n}^{\delta^{-2} \sqrt n} \widehat {\widetilde \psi^-_{\delta}} \Big({\xi\over \sqrt n}\Big) e^{i\xi{t\over \sqrt n}}\cdot e^{-i\xi\sqrt n\gamma}\mathcal{P}_{{i\xi\over \sqrt n}}^n(\Phi_{n,\xi} +\Phi_{n}^{\star} )(x) \,{\rm d} \xi,$$
where $\Phi_{n,\xi}$ and $\Phi_{n}^{\star}$ are defined in \eqref{eq:Phi-xi-def} and \eqref{eq:Phi-star-def} respectively. The proof of Lemma \ref{lemma-R-S} can be repeated by using ${\widetilde \psi^-_{\delta}}$ instead of $\psi_\delta^+$. This yields the desired estimate.
\end{proof}
We can now obtain the lower bound.
\begin{proof}[Proof of Proposition \ref{prop:LLT-lower-limit}]
Lemmas \ref{lemma:llt-ineq-2} and \ref{lemma-R-S-2} and the fact that $\widetilde \mathcal{B}_n(t) \geq \widetilde \mathcal{R}_n(t)$ give that
$$\mathcal{A}_n(t) \geq \widetilde \mathcal{S}_n(t) - {\widetilde C_{\zeta,\delta}\over \sqrt[3] n} - {C_2 \over \sqrt n} \quad \text{for all } \,\, t \in \mathbb{R}.$$
Arguing as in the proof of Proposition \ref{prop:LLT-upper-limit} and recalling that $\int_{\mathbb{R}} \widetilde \psi (u)\,{\rm d} u = b-a - 3\zeta$, we get that, for every fixed $n$ and $\zeta$,
$$ \Big| \widetilde \mathcal{S}_n(t)- e^{-{t^2\over2\varrho^2 n}}{b-a - 3\zeta\over \sqrt{2\pi} \, \varrho} \Big| \leq {1\over \sqrt{2\pi} \, \varrho} \big\| \widetilde \psi^-_{\delta} -\widetilde\psi \big \|_{L^1}.$$
Therefore,
$$ \mathcal{A}_n(t) - e^{-\frac{t^2}{2 \varrho^2 n}} {b-a\over \sqrt{2 \pi}\,\varrho} \geq - e^{-{t^2\over2\varrho^2 n}} {3\zeta\over \sqrt{2\pi} \, \varrho} - {1\over \sqrt{2\pi} \, \varrho} \big\| \widetilde \psi^-_{\delta} -\widetilde\psi \big \|_{L^1} - {\widetilde C_{\zeta,\delta}\over \sqrt[3] n} - {C_2 \over \sqrt n}, $$ and
$$\liminf_{n\to \infty} \inf_{t\in\mathbb{R}} \bigg( \mathcal{A}_n(t) - e^{-\frac{t^2}{2 \varrho^2 n}} {b-a\over \sqrt{2 \pi}\,\varrho} \bigg) \geq - {3\zeta\over \sqrt{2\pi} \, \varrho} - {1\over \sqrt{2\pi} \, \varrho} \big\| \widetilde \psi^-_{\delta} -\widetilde\psi \big \|_{L^1}.$$
Since $0<\delta<1$ and $0 < \zeta \leq 1$ are arbitrary and $ \big\| \widetilde \psi^-_{\delta} -\widetilde\psi \big \|_{L^1}$ tends to zero as $\delta \to 0$, the proposition follows.
\end{proof}
Now, the proof of Theorem \ref{thm:LLT-coeff} can be concluded.
\begin{proof}[Proof of Theorem \ref{thm:LLT-coeff}]
Recall that the conclusion of Theorem \ref{thm:LLT-coeff} is equivalent to the limit \eqref{eq:LLT-main-limit}. Denote $\mathbf f_n(t):= \mathcal{A}_n(t) - e^{-\frac{t^2}{2 \varrho^2 n}} {b-a\over \sqrt{2 \pi}\,\varrho}$. Propositions \ref{prop:LLT-upper-limit} and \ref{prop:LLT-lower-limit} give that $$\limsup_{n\to \infty}\sup_{t\in\mathbb{R}} \mathbf f_n(t) \leq 0 \quad \text{and} \quad \liminf_{n\to \infty}\inf_{t\in\mathbb{R}} \mathbf f_n(t) \geq 0$$ respectively. This clearly implies that $\lim_{n\to \infty}\sup_{t\in\mathbb{R}} |\mathbf f_n(t)| = 0$, yielding \eqref{eq:LLT-main-limit}. It is clear that all of our estimates are uniform in $x \in \P^{d-1}$ and $y\in (\P^{d-1})^*$. The proof of the theorem is finished.
\end{proof}
|